
Changing the way agent handles consecutive beacon failures#29

Draft
uruwhy wants to merge 2 commits into master from failed-comms-fallback

Conversation

@uruwhy
Contributor

@uruwhy uruwhy commented Aug 18, 2020

The agent will keep track of the first successful server address and comms method. It will also keep track of its normal sleep time, as determined by successful beacon responses from the C2.

When handling a failed beacon:
Case 1: the consecutive failure counter has not been reached
The agent will sleep for the last sleep time that the C2 server gave it (15 seconds by default) before retrying with the current C2 communication methods.

Case 2: the consecutive failure counter has been reached (currently set to 3 failures)
The agent will reset the failure counter and perform the following protocol:

  • Check whether any peer proxy receivers are available to reach the C2.
  • If there are no proxies available and the agent has never successfully reached the C2, throw an error.
  • If there are no proxies available because they have all been attempted previously, refresh the proxy list, then sleep for double the previous server-provided sleep duration before retrying the first successful server address and comms method.
  • If there are no proxies available because there were none to begin with, sleep for double the previous server-provided sleep duration before retrying the first successful server address and comms method.
  • Otherwise, there are proxies left to try. Pick one and sleep for the previous server-provided sleep duration (15 seconds by default) before reattempting beacons.


@christophert christophert left a comment


Small semantic changes, functionality was verified.

agent/agent.go Outdated
return err
}
a.server = server
a.firstSuccessFulServer = ""


Refer to comment on variable declaration

agent/agent.go Outdated
profile := a.GetFullProfile()
response := a.beaconContact.GetBeaconBytes(profile)
if response != nil {
if len(a.firstSuccessFulServer) == 0 {


Refer to comment on variable declaration

…l keep track of first successful server address and comms method. Upon 3 consecutive beacon failures, agent will switch to a proxy receiver it hasn't tried before. If there are no proxy receivers, then the agent will terminate (if it hasn't made a successful connection to the C2 before), or it will sleep twice its normal sleep amount and then try the first successful server address & comms method (will cycle back through all the proxy receivers upon repeated failure).

Cleaned up new fallback logic and comments
@uruwhy uruwhy force-pushed the failed-comms-fallback branch from a6cd1a4 to 75b77eb Compare October 7, 2020 16:28
@christophert christophert assigned wbooth and unassigned christophert Oct 7, 2020
@christophert christophert requested a review from wbooth October 7, 2020 18:27
@wbooth
Contributor

wbooth commented Oct 14, 2020

I think we can simplify this further and still achieve your objective here. As is, it's a long explanation to convey the behavior and requires lots of decision points. Let's set a time to discuss.

Contributor

@wbooth wbooth left a comment


.

@wbooth wbooth marked this pull request as draft October 21, 2020 17:32
@wbooth wbooth removed their assignment Nov 12, 2020

4 participants