How to Investigate Insider Threats (Forensic Methodology)
Insider threats are unfortunately real and active.
The forensic investigation of a suspected insider follows a different methodology to the classic approach for investigating external threat actors. The main difference between insider jobs and other jobs is that clients usually want a timeline of activity around the “malicious action” AND a timeline of “legitimate” activity leading up to, during and after the malicious actions, to remove reasonable doubt that it was somebody else. During an insider job, artefacts that show system wake/hibernation, or artefacts proving a user opened something on their taskbar, can be just as important as the malicious activity itself, depending on the client’s needs.
For these cases, analysts should *consider* creating TWO timelines, depending on the client’s needs and the nature of the incident:
- One timeline for malicious activity
- One timeline capturing ALL relevant activity showing what the user was actively doing since being identified as an insider
Why two timelines? Because once an employee is identified as an insider, information access becomes a key concern. Also make sure you take note of any corporate policies the client has, as these might change what you choose to include or exclude in the timelines.
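To make the two-timeline split above concrete, here is a minimal sketch (the record layout, timestamps and descriptions are my own invention for illustration, not a standard format): every parsed artefact record gets tagged during analysis, and the malicious timeline is simply a filtered view of the full activity timeline.

```python
from datetime import datetime, timezone

# Hypothetical normalised artefact records: (timestamp, source artefact, description, tag).
# The 'tag' field is assigned by the analyst: "malicious" or "activity".
events = [
    (datetime(2023, 2, 4, 9, 12, tzinfo=timezone.utc), "Prefetch", "MIMIKATZ.EXE executed", "malicious"),
    (datetime(2023, 2, 4, 9, 5, tzinfo=timezone.utc), "FeatureUsage", "AppSwitched to OUTLOOK.EXE", "activity"),
    (datetime(2023, 2, 4, 9, 15, tzinfo=timezone.utc), "$J", "passwords.txt FILE_DELETE", "malicious"),
]

# Timeline 1: malicious actions only.
malicious_timeline = sorted((e for e in events if e[3] == "malicious"), key=lambda e: e[0])

# Timeline 2: ALL relevant activity (legitimate + malicious), to frame the malicious
# actions within what the user was actively doing at the time.
full_timeline = sorted(events, key=lambda e: e[0])

for ts, source, desc, _ in full_timeline:
    print(ts.isoformat(), source, desc)
```

Keeping one tagged master set and deriving both timelines from it also means the two timelines can never drift out of sync with each other.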
Generally, clients and forensic analysts assume all malicious activity is done by an outsider… but this is not always the case. I have worked full domain compromises caused by insiders, complete with credential dumping and the other usual TTPs. In some instances it’s clear very quickly that it’s not an insider; in other cases, you will only realise during the analysis that it doesn’t appear to be an outsider. The nature of the investigation will obviously change accordingly.
To give some examples of malicious insider jobs:
- Malicious insider decides to sell credentials for money leading to a FULL domain compromise (I have seen this happen more than once!)
- Malicious insider exfiltrates sensitive code from the organisation or issues changes to the code base
- Malicious insider tries to frame another employee/colleague for malicious action
- Malicious insider executes malware on various systems
- Employee gets a new job and exfiltrates files (cloud or removable device) NOTE: I do not work these cases personally AND THIS BLOG IS NOT ABOUT THIS KIND OF INCIDENT.
This blog is the high-level methodology I follow when approaching cases like this. I will also share some ideas for artefacts that can give you some quick wins when it comes to figuring out what happened and some good artefacts to frame the activity around the user.
I don’t want to turn this into a full forensics blog about every artefact on the planet, so it will only cover some key artefacts to give you an idea of what to focus on when demonstrating the user was “active”.
As an investigator, your job is not to jump in assuming the suspected human is behind the activity. Your job is to remain 100% unbiased and let the evidence do the storytelling. At the end of the day, the focus of the job is to slowly stack evidence (if there even is any) so you can gradually eliminate the possibility that it was somebody else. When working on a suspected insider threat case, the context around the activity that flagged suspicion is the key focus of the case.
The investigation methodology I follow for insider cases is guided by four key questions:
- How was the device accessed around the suspected behaviour?
- Where was the user/device when this occurred?
- Was the insider active on their system?
- What did the user do?
I believe that prior to any investigation, you need to set up a plan for analysis, with the highest value targets placed first. This reduces time spent, especially during time-critical cases where clients want an answer because it impacts their BAU / remediation.
I will not be covering lateral movement as there’s plenty of articles about that and the methodology doesn’t differ from investigating a threat actor.
1. HOW WAS THE DEVICE ACCESSED AROUND THE SUSPECTED BEHAVIOUR?
Purpose: to answer one key question that clients will always ask: “were the user’s credentials compromised, or was it the user themselves?”
Step 1: Get the user account the malware/activity was executed under
The first and most obvious approach is to figure out what context the activity/malware was executed under. I therefore point you to my favourite THREE artefacts for this:
- Prefetch / Amcache
- SRUM
For a high-level overview of SRUM, please refer to my tweet below (I will not cover what this artefact is in this blog post). The tweet explains what the artefact is, how to perform the analysis, and how to answer:
- What application executed?
- How long the application ran for (execution time)?
- WHO or what SID it was running under?
- Security.evtx (Logon types 2, 7, 11)
- Microsoft-Windows-User Profile Service/Operational.evtx (Event IDs 2, 3)
- Microsoft-Windows-TerminalServices-RemoteConnectionManager/Operational.evtx
- Microsoft-Windows-TerminalServices-LocalSessionManager/Operational.evtx
- Was the access to the system physical or remote?
- What were they doing around the time of the suspect activity?
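The logon types called out above are what let you separate at-console access from remote access in Security.evtx 4624 events. A minimal lookup sketch (the mappings are the standard Windows logon types; the "physical"/"remote" labels are my own shorthand for triage):

```python
# Standard Windows logon types recorded in Security.evtx 4624/4625 events.
# Types 2, 7 and 11 all imply someone at the console; type 10 implies RDP,
# type 3 a network logon (e.g. SMB access).
LOGON_TYPES = {
    2: ("Interactive", "physical"),
    3: ("Network", "remote"),
    7: ("Unlock", "physical"),
    10: ("RemoteInteractive (RDP)", "remote"),
    11: ("CachedInteractive", "physical"),
}

def classify_logon(logon_type: int) -> str:
    """Translate a numeric LogonType into a human-readable triage label."""
    name, access = LOGON_TYPES.get(logon_type, ("Unknown", "unknown"))
    return f"Type {logon_type}: {name} -> {access} access"

print(classify_logon(10))  # Type 10: RemoteInteractive (RDP) -> remote access
```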
2. WHERE WAS THE USER/DEVICE WHEN THIS OCCURRED?
- Network SSID Artefacts (SRUM, Event Logs, Registry)
3. WAS THE USER ACTIVE ON THE SYSTEM?
- AppLaunch – tracks applications launched from their pinned taskbar shortcuts
- AppSwitched – shows when a user physically switched from one application to another via the taskbar (this also proves execution)
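These counters live in the user's NTUSER.DAT hive (under Explorer's FeatureUsage key) as per-application counts. Assuming the values have already been dumped with a registry parser, a minimal sketch of ranking them to show which applications the user was actively foregrounding (the value names and counts below are invented for illustration):

```python
# Hypothetical dump of FeatureUsage\AppSwitched values from NTUSER.DAT:
# value name = application identifier, value data = switch count.
app_switched = {
    "{1AC14E77-02E7-4E5D-B744-2EB1AE5198B7}\\mspaint.exe": 4,
    "Microsoft.Office.OUTLOOK.EXE.15": 132,
    "Chrome": 57,
}

# Rank applications by how often the user brought them to the foreground
# via the taskbar - strong evidence of hands-on-keyboard activity.
for app, count in sorted(app_switched.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{count:>5}  {app}")
```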
4. WHAT DID THE USER DO?
- One timeline that covers only malicious activity undertaken
- Another timeline that captures what else the user was doing SINCE being an insider
- Shellbags – what folders the user was viewing
- LNK files – what the user was accessing
- Jumplists – what the user was viewing
- Browser history – what the user was looking at if a browser was active
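One practical note on browser history: the timestamps usually need converting before they can be placed on your timeline. Chrome, for example, stores visit times as microseconds since 1601-01-01 UTC (the WebKit epoch). A small converter sketch:

```python
from datetime import datetime, timedelta, timezone

# Chrome/WebKit timestamps are microseconds since 1601-01-01 00:00:00 UTC.
WEBKIT_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def chrome_time_to_utc(webkit_us: int) -> datetime:
    """Convert a Chrome history timestamp to an aware UTC datetime."""
    return WEBKIT_EPOCH + timedelta(microseconds=webkit_us)

# Sanity check: 11,644,473,600 seconds separate 1601-01-01 and the Unix epoch.
print(chrome_time_to_utc(11_644_473_600 * 1_000_000))  # 1970-01-01 00:00:00+00:00
```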
- Other artefacts you would analyse:
- Configuration changes (reverting states i.e. in Azure Audit Logs)
- File deletions of any kind
- Deletion of conversation records on Teams / Outlook etc
> 1\ How do you prove a TA deleted a file and when?
>
> Most threat actors including #APT groups perform file deletion. This can be very important to an investigation.
>
> If the client has no EDR/SIEM and has an OT legacy environment.. what do you do?
>
> Parse the $J file 😇 #DFIR pic.twitter.com/CzsJpeJHwb
>
> — InverseCos ᐡ ꒳ ᐡ (@inversecos) October 28, 2021
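When you do parse $J, each entry carries a reason bitmask; deletions surface as USN_REASON_FILE_DELETE, usually combined with USN_REASON_CLOSE on the final record for the file. A sketch decoder (only a subset of the documented flags shown):

```python
# Subset of USN change-journal reason flags, per the Windows USN_RECORD documentation.
USN_REASONS = {
    0x00000100: "FILE_CREATE",
    0x00000200: "FILE_DELETE",
    0x00001000: "RENAME_OLD_NAME",
    0x00002000: "RENAME_NEW_NAME",
    0x80000000: "CLOSE",
}

def decode_reason(mask: int) -> list[str]:
    """Expand a USN reason bitmask into its named flags."""
    return [name for flag, name in USN_REASONS.items() if mask & flag]

# A deleted file typically closes out with FILE_DELETE | CLOSE:
print(decode_reason(0x80000200))  # ['FILE_DELETE', 'CLOSE']
```

Matching the FILE_DELETE record's timestamp against your activity timeline is what ties the deletion to a specific session.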
Happy investigating UwU <3