Security, et al

Randy's Blog on Infosec and Other Stuff

Cloud Security Starts at Home

Tue, 30 Aug 2016 10:28:14 GMT

Cloud security is getting attention, and that’s as it should be. But before you get hung up on technical details like whether SAML is more secure than OpenID Connect, it’s good to take a step back. One of the tenets of information security is to follow the risk. Risk is largely a measure of damage and likelihood. When you are comparing different threats to the same cloud-based data, the damage term is constant, so risk becomes a function of the likelihood of each threat.
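As a toy illustration of that arithmetic (the numbers here are invented, not measured), holding damage constant means ranking threats collapses to ranking likelihoods:

```python
# Toy risk model: risk = damage x likelihood.
# Scales and likelihood values are invented for illustration only.
def risk(damage, likelihood):
    return damage * likelihood

# Same cloud-hosted data in every scenario, so damage is constant.
DAMAGE = 10
threats = {
    "obscure SAML/OIDC flaw exploited": 0.01,
    "tenant admin PC infected with malware": 0.30,
    "unmonitored on-prem AD account abused": 0.20,
}

# With damage fixed, the ordering is driven entirely by likelihood.
ranked = sorted(threats, key=lambda t: risk(DAMAGE, threats[t]), reverse=True)
print(ranked[0])  # the tenant-vectored threat dominates
```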

In the cloud we worry about the technology and the host of the cloud. Let’s focus on industrial-strength infrastructure- and platform-as-a-service clouds like AWS and Azure. And let’s throw in O365 – it’s neither infrastructure nor platform, but its scale and quality of hosting fit our purposes in terms of security and risk. I don’t have any special affection for any of the cloud providers, but it’s a fact that they have the scale to do a better, more comprehensive, more active job on security than my little company can, and I’m far from alone. Clouds at this level historically don’t get hacked because of stupid operational mistakes or flimsy coding practices around cryptography and password handling, or because of obscure vulnerabilities in standards like SAML and OpenID Connect (though those do exist). They get hacked because of tenant-vectored risks: either poor security practices by the tenant’s admins, or vulnerabilities in the tenant’s technology that the cloud is exposed to or reliant on.

Here are just a few scenarios of cloud intrusions with a tenant origin vector (tenant vulnerability → resulting cloud intrusion):

  • Admin’s PC infected with malware → cloud tenant admin password stolen
  • Tenant’s on-prem network penetrated → attacker follows the VPN connection between the on-prem network and the cloud
  • Tenant’s Active Directory unmonitored → federation/synchronization with on-prem AD results in an on-prem admin’s account having privileged access to the cloud

I’m going to focus on that last scenario. The point is that most organizations integrate their cloud with their on-prem Active Directory, and that’s as it should be. We hardly want to go back to the inefficient and insecure world of countless user accounts and passwords per person. We largely eliminated that over the years by bringing more and more on-prem apps, databases and systems online with Active Directory. Let’s not lose that ground with the cloud.

But your greatest risk in the cloud might be right under your nose, in AD on your local network. Do you monitor changes in Active Directory? Are you aware when there are failed or unusual logons to privileged accounts? And I’m not just talking about admin accounts. Just as important are the user accounts with access to the data that your security measures are all about. That means identifying not just the IT groups in AD but also the groups used to entitle users to that important data. Very likely some of those groups are re-used in the cloud to entitle users there as well. Of course, the same goes for the actual user accounts.

Even for those of us who can say our network isn’t connected by VPN or any direct connection (like ExpressRoute for Azure/O365), and that there’s no federation or sync between our on-prem and cloud directories, on-prem internal security efforts will make or break security in the cloud – simply because of the first scenario above. At some point your cloud admin has to connect to the cloud from some device. If that device isn’t secure, or the cloud admin’s credential handling is lax, you’re in trouble.

That’s why I say that most of us in the cloud need to look inward for risks first. Monitoring, as always, is key. The detective control you get with a well-implemented and correctly used SIEM is incredible – and often the only control you can deploy at key points, technologies or processes in your network.

This article by Randy Smith was originally published by EventTracker.


The Leftovers: A Data Recovery Study

Thu, 18 Aug 2016 08:17:08 GMT

I did a webinar a while back with Paul Henry on “What One Digital Forensics Expert Found on Hundreds of Hard Drives, iPhones and Android Devices,” sponsored by Blancco Technology Group, maker of really cool data erasure software for the enterprise.

Blancco has released a whitepaper, The Leftovers: A Data Recovery Study, based on the same work that Paul did. To demonstrate just how easy, common and dangerous it is when data is improperly removed before used electronics are resold, Blancco Technology Group purchased a total of 200 used hard disk drives and solid state drives from eBay and Craigslist in the first quarter of 2016.

Here are the top findings from their study:

  • 67 percent of the used hard disk drives and solid state drives hold personally identifiable information and 11 percent contain sensitive corporate data.
  • Upon analyzing the 200 used drives, company emails were recovered on 9 percent of the drives, followed by spreadsheets containing sales projections and product inventories (5 percent) and CRM records (1 percent).
  • 36 percent of the used HDDs/SSDs containing residual data had data improperly deleted from them by simply dragging files to the ‘Recycle Bin’ or using the basic delete button.

Check out the paper at


Keeping An Eye on Your Unix & Linux Privileged Accounts

Mon, 06 Jun 2016 10:27:03 GMT

With sudo you can give admins the authority they need without giving away root and all the security risks and compliance problems caused by doing so. But once you carefully delegate limited, privileged authority with sudo you still need an audit trail of what admins are doing. A privileged user audit trail is irreplaceable as a deterrent and detective control over admins and in terms of implementing basic accountability. But in today’s environment of advanced and persistent attackers you also need the ability to actively monitor privileged user activity for quick detection of suspicious events.

In this webinar, I'll dive into the logging capabilities of sudo. Sudo provides event auditing for tracking command execution by sudoers – both successful and denied sudo requests, as well as errors. I'll show you how to enable sudo auditing, how to control where it’s logged and whether syslog is used, and more importantly: what sudo logs look like and how to interpret them.

But sudo also offers session auditing (aka the iolog), which lets you capture entire sudo sessions – both the input and output of commands executed through sudo, whether in an interactive shell or via script. I'll demonstrate how to configure sudo session logging and how to view recorded sessions with sudoreplay.
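For reference, the sudoers settings involved look roughly like this (paths are illustrative, and options vary somewhat by sudo version – always edit with visudo):

```
# /etc/sudoers fragment -- a minimal sketch of sudo's auditing knobs.

Defaults logfile=/var/log/sudo.log    # event auditing: who ran what, when
Defaults log_input, log_output        # session auditing (the iolog)
Defaults iolog_dir=/var/log/sudo-io   # where session recordings are stored

# Replay recorded sessions later:
#   sudoreplay -l       # list recorded sessions with their IDs
#   sudoreplay <TSID>   # replay the session with that ID
```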

Afterward, BeyondTrust Product Manager Paul Harper will walk you through how to augment sudo for complete control and auditing of Unix and Linux user activity.

Click here to register now!


Secure, Fast and Efficient Password Management

Mon, 23 May 2016 14:33:37 GMT

All the way back in the late ’90s I realized that passwords, even for myself, were a big vulnerability. With more and more websites requiring logins, I realized that my multiplying “Post-It Note” situation was not going to work. That left me two options:

  1. A password protected word doc full of usernames and passwords.
  2. A unique username with one password used for all accounts.

You can easily see that neither of those two options was secure or viable. At that time especially, document encryption was either easy but weak, or strong but highly inconvenient. Besides, who wants to copy and paste all the time – and then worry about your password sitting around in your clipboard? The risks go on and on. So, as most infosec techies would, I turned to Google. In those days a Google search for “password manager” turned up far fewer results than the 48,000,000+ you’ll get today.

After a bit of research, I decided to test a password manager product by RoboForm. Little did I know that 17 years later, using RoboForm would be a de facto standard at my company. I remember when one of our contractors had his web-based email compromised: it took him half a day to log into each of his online accounts and change all his passwords, since he was using one password for everything. He is now a RoboForm user.

RoboForm allows you to use unique usernames and unique passwords for each web login you have. It will actually help generate unique passwords using the character limits you specify and then save these complex passwords to your system under “lock and key”.

Fig 1. - Password requirement options

You only need to remember one unique master password to gain access to all of your RoboForm complex passwords. When you visit the logon page of a website, RoboForm automatically senses it and allows you to fill in your credentials with a single click. If your device is lost or stolen or malware compromises your computer, the files containing your credentials are encrypted with a key derived from your master password.

Fig 2. - A single click on the login named “Dev” will fill and submit the login

Of course, we’ve seen over and over again that encryption is complex and programmers often get it wrong. I trust RoboForm’s encryption; they take a no-compromise approach to security. The master password is not stored anywhere except your head – not locally and not on RoboForm’s servers. “RoboForm’s servers?” you ask. Yes: if you choose to use the feature, RoboForm uploads all your usernames and passwords to its servers, which lets all your devices running RoboForm share up-to-date credentials. This is called RoboForm Everywhere, and it works great. Whether I’m on my desktop, Surface, smartphone or tablet, I always have my passwords without sacrificing security.

You are probably asking, and rightly so, “How good is the protection in RoboForm’s ‘cloud’?” Well, first, you have a password on your RoboForm Everywhere account – different from your master password, which is used for encryption. But even if the RoboForm cloud is compromised (and we’ve already seen this happen to other password managers repeatedly), your credentials are still protected. RoboForm’s no-compromise approach to security means they simply do not have your master password. Your credentials stored in the cloud are encrypted with the same key derived from your master password, just like the files on your local Windows or mobile device. So memorize a good master password and don’t use it for anything other than RoboForm.
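RoboForm’s exact key-derivation scheme isn’t spelled out here, but the underlying principle – an encryption key derived from a master password that is never stored anywhere – can be sketched with a standard KDF. The parameters below are illustrative, not RoboForm’s actual settings:

```python
import hashlib
import os

def derive_key(master_password: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256: deliberately slow, to resist brute-force guessing.
    # The iteration count here is illustrative, not any vendor's setting.
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 200_000)

salt = os.urandom(16)  # stored alongside the ciphertext; the salt is not secret
key = derive_key("correct horse battery staple", salt)

# Only the derived key encrypts the credential store. The master password
# itself is never written to disk and never sent to the sync server, so a
# compromised server yields only ciphertext.
assert derive_key("correct horse battery staple", salt) == key
assert derive_key("wrong password", salt) != key
```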

If you have a compatible fingerprint reader and trust Windows security, you can protect your master password with your fingerprint. To unlock RoboForm, you provide your fingerprint and avoid entering even your master password. Are there risks to that? Yes, but it’s up to you – you don’t have to use it.

RoboForm has a few products, but everyone at my company uses RoboForm Everywhere, which gives you the added benefit of syncing your passwords across multiple systems, mobile devices and tablets. RoboForm also has a built-in browser, which means no cumbersome copying and pasting of passwords on your mobile devices.

In 1980, password management wouldn’t have been an issue but nowadays, if you’re like me, you have a plethora of online user accounts, not to mention Windows Security popups which RoboForm also manages. Personally I have 500+ unique logins and this is only in my “Personal” folder (I keep my logins organized so I also have a “Work” folder).

Fig 3. – Roboform also manages Windows Security popups

I should also mention that RoboForm can manage identities, if you choose to use that feature, as well as financial info like banking details and credit card data, which makes every merchant site’s payment process almost as user-friendly and fast as Amazon’s. The Safenote feature is also very useful, letting you secure and lock down your virtual “Post-It Notes”.

I recommend that you give RoboForm a try. You can get it completely free, limited to 10 saved logins. If you are still in college, you can actually get RoboForm completely free with unlimited logins. You can get the first year of RoboForm Everywhere 50% off by clicking here.

Stay tuned for another blog next month where I go in depth on a unique use case using RoboForm and some isolated servers we use for high security functions in our organization.


Get rid of QuickTime as Quickly and Efficiently – For FREE!

Mon, 25 Apr 2016 12:53:01 GMT

Hi folks. If you are wondering how many computers on your network have QuickTime installed and how to get rid of it, I’ve got some help for you in the form of a video, a PowerShell script, an AppLocker policy and free tools from SolarWinds. If you don’t already know why it’s urgent to uninstall QuickTime: Apple has announced it is no longer supporting QuickTime for Windows, even as Trend Micro has announced two zero-day heap corruption vulnerabilities that allow remote code execution. As I understand it, Apple never provided any warning that they’d stop patching their software. That’s really lame. You have to say this for Microsoft: they give you warning. So every Windows endpoint with QuickTime installed is a sitting duck. Even the Department of Homeland Security is warning folks to kill QuickTime before the bad guys exploit it against you and your network.

Barry and I have put together 2 videos:

  1. How to spend about 15 minutes with a trial download of SolarWinds Patch Manager to:
     a. Quickly inventory all the endpoints with QuickTime installed (we got the folks at SolarWinds to post a report on Thwack that lists every computer with QuickTime installed)
     b. Remotely uninstall QuickTime from those PCs
     c. All without installing any agents!
  2. Or, how to use AppLocker to block QuickTime from executing on PCs where it is installed

I recommend the SolarWinds Patch Manager option because it’s fast, easy and free, and it eliminates the risk by actually uninstalling QuickTime. My alternative AppLocker procedure only blocks QuickTime; it doesn’t uninstall it, and it doesn’t address malware that knows how to bypass the Application Identity service.
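The inventory step itself is conceptually simple. Here’s a sketch in Python, with a made-up software inventory standing in for what a tool like Patch Manager would return from its WMI queries:

```python
# Hypothetical inventory: hostname -> installed software names.
# In practice this data would come from WMI queries or an inventory report.
inventory = {
    "PC-001": ["Google Chrome", "QuickTime 7.7.9", "7-Zip"],
    "PC-002": ["Microsoft Office", "Adobe Reader"],
    "PC-003": ["QuickTime 7.7.6", "Notepad++"],
}

# Any endpoint still carrying QuickTime is a candidate for remote uninstall.
at_risk = sorted(host for host, apps in inventory.items()
                 if any("quicktime" in app.lower() for app in apps))
print(at_risk)  # -> ['PC-001', 'PC-003']
```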

If you are going to use the 30-day trial of SolarWinds Patch Manager to remove QuickTime, please use this URL to download it, because that helps us keep the lights on here at UltimateWindowsSecurity. And don’t worry, the good folks at SolarWinds are fine with you using the eval to solve this problem. You might want to keep Patch Manager once you see it. After explaining how to use it to get rid of QuickTime, I’ll explain why I like Patch Manager.

Download Patch Manager and install it. Watch Barry’s video to help you save time. It only takes Barry 11 minutes to install Patch Manager, find all the PCs with QuickTime and uninstall it. Follow along with Barry and you’ll be done in time to take the rest of the morning off.

If you are interested in my alternative (and less secure) AppLocker method, watch this video.

Download Randy's PowerShell script here:

Both methods work without agents! But only Patch Manager actually eliminates the risk. And the no-agent thing is what I love about Patch Manager. It provides software inventory and 3rd-party patching (Adobe, Java, Apple, etc.) without requiring you to install yet another agent. How does it do it? It’s pretty cool: Patch Manager uses WMI for querying PCs, but then it leverages the Windows Update agent already baked into every Windows computer to push 3rd-party patches – and of course Microsoft patches too. It does this through a really cool integration with WSUS.

So you get the best of both worlds: leverage the built-in infrastructure Windows already provides for patching Microsoft products to patch 3rd-party products too! Brilliant. Again, if you want to use Patch Manager to get rid of QuickTime for free, or just want to try it out, please use this URL. It helps fund our research and the real training for free we provide nearly every week.


Certificates and Digitally Signed Applications: A Double Edged Sword

Mon, 11 Apr 2016 11:28:31 GMT

Windows supports the digital signing of EXEs and other application files so that you can verify the provenance of software before it executes on your system. This is an important element in the defense against malware. When a software publisher like Adobe signs an application, it uses the private key associated with a certificate it has obtained from one of the major certification authorities, like VeriSign.

Later, when you attempt to run a program, Windows can check the file’s signature and verify that it was signed by Adobe and that its bits haven’t been tampered with, such as by the insertion of malicious code.

Windows doesn’t enforce digital signatures or limit which publishers’ programs can execute by default, but you can enforce them with AppLocker. As powerful as AppLocker potentially is, it is also complicated to set up, except in environments with a very limited and standardized set of applications: you must create rules for at least every publisher whose code runs on your systems.

The good news, however, is that AppLocker can also be activated in audit mode, and you can quickly set up a base set of allow rules by having AppLocker scan a sample system. The idea with running AppLocker in audit mode is that you then monitor the AppLocker event log for warnings about programs that failed to match any of the allow rules – meaning the program has an invalid signature, was signed by a publisher you don’t trust, or isn’t signed at all. The events to look for are 8003, 8006, 8021 and 8024, and they appear in the logs under AppLocker as shown here:

These events are described here which is part of the AppLocker Technical Reference.

If you are going to use AppLocker in audit mode to detect untrusted software, remember that Windows logs these events on each local system. So be sure you are using a SIEM with an efficient agent, like EventTracker, to collect these events – or use Windows Event Forwarding.
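Once those events land in your SIEM or Windows Event Forwarding collector, flagging the audit-mode warnings is a simple filter. The records below are hand-made stand-ins for parsed AppLocker events, just to illustrate the triage:

```python
# Per the article, AppLocker's audit-mode warning IDs are 8003, 8006,
# 8021 and 8024: programs that ran but matched no allow rule.
AUDIT_WARNINGS = {8003, 8006, 8021, 8024}

# Hypothetical parsed events, e.g. as collected by a SIEM agent.
events = [
    {"id": 8002, "file": r"C:\Program Files\Adobe\acrobat.exe"},   # allowed
    {"id": 8003, "file": r"C:\Users\bob\AppData\dropper.exe"},     # warning
    {"id": 8006, "file": r"C:\Temp\setup.ps1"},                    # warning
]

# Anything matching a warning ID deserves a look: possibly unsigned or
# untrusted code, possibly malware executing for the first time.
suspicious = [e["file"] for e in events if e["id"] in AUDIT_WARNINGS]
print(suspicious)
```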

There are some other issues to be aware of, though, with digitally signed applications and certificates. Certificates are part of a very complicated technology called Public Key Infrastructure (PKI). PKI has so many components and ties together so many different parties that there is unfortunately a lot of room for error. Here’s a brief list of what has gone wrong in the past year or so with signed applications and the PKI that signatures depend on:

  1. Compromised code-signing server

    I said earlier that code signing allows you to make sure a program really came from the publisher and that it hasn’t been modified (tampered with). But that depends on how well the publisher protects its private key. Unfortunately, Adobe is a case in point. A while back, some bad guys broke into Adobe’s network and eventually found their way to the very server Adobe uses to sign applications like Acrobat. They uploaded their own malware, signed it with the private key of Adobe’s code-signing certificate, and then proceeded to deploy that malware to target systems, which graciously ran the program as a trusted Adobe application. How do you protect against publishers that get hacked? There’s only so much you can do. You can create stricter rules that limit execution to specific versions of known applications, but of course that makes your policy much more fragile.

  2. Fraudulently obtained certificates

    Everything in PKI depends on the certification authority only issuing certificates after rigorously verifying that the party purchasing the certificate really is who they say they are. This doesn’t always work. A fairly recent example is Spymel, a piece of malware signed with a certificate DigiCert issued to a company called SBO Invest. What can you do here? Using something like AppLocker to limit software to known publishers does help in this case. Of course, if the CA itself is hacked, then you can’t trust any certificate issued by it – which brings us to the next point.

  3. Untrustworthy CAs

    I’ve always been amazed at all the CAs Windows trusts out of the box. It’s better than it used to be, but at one time I remember that my Windows 2000 system automatically trusted certificates issued by some government agency of Peru. But you don’t have to trust every CA Microsoft does. Trusted CAs are defined in the Trusted Root Certification Authorities store in the Certificates MMC snap-in, and you can control the contents of this store centrally via Group Policy.

  4. Insecure CAs from PC Vendors

    Late last year Dell made the headlines when it was discovered that they were shipping PCs with their own CA’s certificate in the Trusted Root store. This was so that drivers and other files signed by Dell would be trusted. That might have been OK, but they mistakenly broke The Number One Rule in PKI: they failed to keep the private key private. That’s bad with any certificate, let alone a CA’s root certificate. Specifically, Dell included the private key with the certificate. That allowed anyone who bought an affected Dell PC to sign their own custom malware with Dell’s private key and then, once it was deployed on other affected Dell systems, run it with impunity, since it appeared to be legit and from Dell.

So, certificates and code signing are far from perfect – but show me any security control that is. I really encourage you to try out AppLocker in audit mode and monitor the warnings it produces. You won’t break any user experience, the performance impact is hardly measurable, and if you are monitoring those warnings you might just detect some malware the first time it executes, instead of after the six months or so it takes on average.

This article by Randy Smith was originally published by EventTracker


Catching Hackers Living off the Land Requires More than Just Logs

Mon, 21 Dec 2015 13:56:30 GMT

If attackers can deploy a remote administration tool (RAT) on your network, it makes things so much easier for them. RATs make it luxurious for the bad guys – it’s like being right there on your network. RATs can log keystrokes, capture screens, provide RDP-like remote control, steal password hashes, scan networks, scan for files and upload them back home. So if you can deny attackers the use of RATs, you’ve just made life a lot harder for them.

And we are getting better at catching so-called advanced persistent threats by detecting the malware they deploy on compromised systems. We can say this because experts are seeing more attackers “living off the land” – going malware-free and instead relying on the utilities, scripting engines, command shells and other native resources available on the systems where they gain an entry point.

By living off the land, they keep a much lower profile. They aren’t stopped as much by application control and whitelisting controls. There’s no malware for antivirus to detect.

And Windows provides plenty of native resources for this kind of attacker. (Linux and UNIX do too, but I’m focusing on Windows since the client endpoints initially targeted by today’s attackers mostly run Windows.) You might be surprised how much you can do with just simple batch files – let alone PowerShell. And then there’s WMI. Both PowerShell and WMI provide a crazy amount of functionality. You can access remote systems and interface with basically any API of the operating system. You can open network connections for “phoning home” to command-and-control servers and more. This is all stuff that in years past required an EXE or DLL. Now you can do basically anything a custom-built EXE can do, but without touching the file system – on which so much of our current security technology is based.

How do you prevent attacks like this? PowerShell has optional security restrictions you can implement to limit API access and restrict script execution to signed script files. With WMI it’s not as clear. Obviously, all the normal endpoint security technologies have a part to play.
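A minimal sketch of those PowerShell restrictions as configuration (the settings and scope here are illustrative, and note that execution policy is a guardrail against accidental script execution, not a hard security boundary):

```powershell
# Require that only digitally signed scripts run, machine-wide.
Set-ExecutionPolicy AllSigned -Scope LocalMachine

# Constrained Language Mode cuts off most .NET/API access from PowerShell.
# Shown here for the current session only; in practice it is enforced
# through a system lockdown policy rather than set per-session.
$ExecutionContext.SessionState.LanguageMode = "ConstrainedLanguage"
```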

But let’s focus on detection. It’s impossible to prevent everything and mitigate every vulnerability, so we can’t neglect detection. The challenge with detecting attackers living off the land is twofold. The activities you need to monitor:

  1. Aren’t found in logs
  2. Are happening on client endpoints

Both of these create big challenges. Let’s talk about #1 first. A.N. Ananth and I describe the types of activities that are clues to a possible attacker living off the land in 5 Indicators of Evil on Windows Hosts using Endpoint Threat Detection and Response, and I encourage you to watch that session, which is full of good technical tips. But the point is that the things you need to watch for aren’t in the Windows security log or other logs. Instead, detection requires a combination of file scanning, configuration checks, querying of running processes and so on – all stuff that requires code running on the local system, or else very powerful and complex remote access. If we were only talking about servers, we could consider deploying an agent. But to catch today’s threats you need to be monitoring where they begin, which is on client endpoints – the desktops and laptops of your employees. And there’s no way to remotely reach into that many systems in real time, even if you overcame the technical hurdles of that kind of remote access. So that leaves agents, which always cause a degree of pushback.

But it’s time to stop calling them agents. What we need on endpoints today are sensors. It’s a subtle but important shift in mindset. In the physical world, everyone understands the need for sensors – and that sensors have to be deployed where the condition is being monitored. If you want to know when someone enters your building at night, you need a sensor on every door. Likewise, if you want the earliest possible warning that your organization has been compromised, you need a sensor on every endpoint.

So I encourage you to start thinking and speaking in terms of leveraging your endpoints as sensors rather than yet another system that requires an agent. And look for security vendors that get this. EventTracker has done a great job of evolving their agent into a powerful and irreplaceable endpoint security sensor that can “see” things that are just impossible to see any other way.

This article by Randy Smith was originally published by EventTracker


How to Detect Low Level Permission Changes in Active Directory

Wed, 16 Dec 2015 09:26:50 GMT

We hear a lot about tracking privileged access today because privileged users like Domain Admins can do a lot of damage – but more importantly, if their accounts are compromised, the attacker gets full control of your environment.

In line with this concern, many security standards and compliance documents recommend tracking changes to privileged groups like Administrators, Domain Admins and Enterprise Admins in Windows and related groups and roles in other applications and platforms.

But in some systems you can also delegate privileged access granularly – ultimately giving someone the same level of authority as a Domain Admin, but under the radar. This is especially true in AD. The capability is a double-edged sword: it’s necessary if you are going to implement least privilege, but it also creates a way for privileged access to be granted inadvertently or even maliciously in a way that will go unnoticed unless you are specifically looking for it. Here’s how to watch for it:

First you need to enable “Audit Directory Service Changes” on your domain controllers – probably using the Default Domain Controllers Policy GPO.

Then open Active Directory Users and Computers and enable Advanced Features under View. Next, select the root of the domain and open Properties. Navigate to the Audit tab of the domain’s Advanced Security Settings dialog shown below.

Add an entry for Everyone that audits “Modify permissions” on all objects, like the entry highlighted above. At this point, domain controllers will record Event ID 5136 whenever someone delegates authority over any object in the domain – whether an entire OU or a single user account. Here’s an example event:

A directory service object was modified.

Subject:
     Security ID:         MTG\pad-rsmith
     Account Name:        pad-rsmith
     Account Domain:      MTG
     Logon ID:            0x5061582

Directory Service:
     Name: mtg.local
     Type: Active Directory Domain Services

Object:
     DN:    OU=scratch,DC=mtg,DC=local
     GUID:  OU=scratch,DC=mtg,DC=local
     Class: organizationalUnit

Attribute:
     LDAP Display Name:   nTSecurityDescriptor
     Syntax (OID):

Operation:
     Type: Value Added
     Correlation ID: {29fbbb83-5567-4935-9593-73496cc98461}
     Application Correlation ID: -
This event tells you that MTG\pad-rsmith (that’s me) modified the permissions on the Scratch organizational unit in the MTG.local domain. nTSecurityDescriptor and “Value Added” tell us it was a permissions change. The Class field tells us the type of object, and DN gives us the distinguished name of the object whose permissions were changed. Subject tells us who made the change. I removed the lengthy text of the Attribute Value field because it’s too long to display and it’s in SDDL format, which isn’t really human-readable without a significant amount of effort. Technically it does provide the full content of the OU’s new access control list (aka security descriptor), but it’s just not practical to try to decode it. It will probably be faster to find the object in Active Directory Users and Computers and view its security settings dialog via the GUI.
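Once 5136 events are collected centrally, isolating the permission changes is a simple filter. The records below are hand-made stand-ins for parsed events, just to illustrate the triage logic:

```python
# Flag AD delegation changes: Event ID 5136 where the modified attribute
# is nTSecurityDescriptor. Sample records are hypothetical parsed events,
# e.g. from a SIEM or Windows Event Forwarding.
events = [
    {"event_id": 5136, "attribute": "nTSecurityDescriptor",
     "operation": "Value Added", "dn": "OU=scratch,DC=mtg,DC=local",
     "subject": "MTG\\pad-rsmith"},
    {"event_id": 5136, "attribute": "description",      # routine edit, ignore
     "operation": "Value Added", "dn": "CN=svc-web,DC=mtg,DC=local",
     "subject": "MTG\\helpdesk1"},
]

perm_changes = [e for e in events
                if e["event_id"] == 5136
                and e["attribute"] == "nTSecurityDescriptor"]
for e in perm_changes:
    print(f'{e["subject"]} changed permissions on {e["dn"]}')
```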

So the Security Log isn’t perfect, but this method does give you a comprehensive audit trail of all permission changes and delegation within Active Directory. Combine it with group membership auditing and you’ll have a full picture of all changes that could impact privileged access in AD – a key part of security and compliance.


Anatomy of a Hack Disrupted: How one of SIEM’s out-of-the-box rules caught an intrusion and beyond

Thu, 15 Oct 2015 15:37:14 GMT

Every year, organizations spend millions of frustrating hours and enormous sums of money trying to reverse the damage done by malware attacks. The harm caused by malware can be astronomical, going well beyond intellectual property loss and huge fines levied for non-compliance. In 2014, the cost of malware attacks and resulting breaches was estimated at $491 billion.[i] And these costs include more than just the money spent trying to directly respond to security breaches. Productivity, long-term profitability, and brand reputation are often severely impacted as well.

The malware threat is growing larger and becoming more challenging to respond to every year. It seems like every month there are more major breaches. Target, Neiman Marcus, and UPS have all been victims of costly breaches in the past couple years, with each event showing signs that the breaches could have been prevented. Phishing-based malware was the starting point 95 percent of the time in state-sponsored attacks, and 67 percent of the time in cyber-espionage attacks.[ii]

With such high-profile organizations being the target of attacks, do you really need to be worried?


It’s easy to shrug off the threat of malware and believe that the target will always be a retail organization or a huge brand name, that it will never be your organization. However, according to a 2015 Ponemon study, 80 percent of all organizations experience some form of Web-borne malware. [iii] So don’t be lulled into a false sense of security: All industries are at risk, including the financial, health care, and government sectors you hear about in the news.

And remember, these attacks aren’t confined to large, multinational corporations. Cybercriminals frequently target small and midsized businesses (SMBs). A prime example is a Pennsylvania-based small business hit by a cyberattack in 2014, which cost the company $200,000 – not including the sales lost while the company had to temporarily stop accepting credit card payments after the attack. Granted, this attack is not on the same scale from a total dollar perspective as the more well-publicized breaches we hear about in the media. But for a small company, an attack of this size can be just as devastating, if not more so.

Malware is not just an annoyance or minor inconvenience. It is the gateway to far more serious problems for a company and its customers. And it invades a network easily. There are many insidious ways it can infect a system: email attachments, phishing email messages, various file-sharing programs, and out-of-date OS patching, to name a few. And once it infects a single computer or network node, it can quickly spread throughout your network like an out-of-control forest fire. This is called a horizontal kill chain, and it features heavily in every attack we analyze.


What if there were a way to solve these potentially devastating problems before they got out of hand? Or even before they occurred in the first place? There is. This paper discusses just such a real-life situation, in which a malware attack took place but was discovered by the built-in rules of LogRhythm before any damage occurred.

The situation involves a customer of LogRhythm, a leading provider of security intelligence and analytics solutions. LogRhythm empowers organizations around the globe to rapidly detect, respond to, and neutralize damaging cyber threats, giving clients the ability to catch and proactively solve problems they might not have otherwise anticipated.

What follows is a textbook example of the kind of problems LogRhythm solves on a regular basis and the risks it mitigates for its customers every day. This malicious activity could have led to a very serious intrusion with devastating repercussions, but LogRhythm caught it immediately and the client was able to research and mitigate it easily and quickly by using additional capabilities of LogRhythm including packet analysis, custom alarms and more.


It all started when the organization received a SIEM alarm from LogRhythm’s Advanced Intelligence Engine (AIE), notifying the IT team of a suspicious situation: a single domain user account had established simultaneous VPN access from two separate locations. This anomaly was caught because of a default, out-of-the-box rule in LogRhythm, as shown in Figure 1.
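The logic behind a rule like this is straightforward: group VPN sessions by user and flag any user with overlapping sessions from different source addresses. Here’s a minimal sketch of that idea in Python – the session records, field layout, and function name are all hypothetical, not LogRhythm’s actual rule implementation:

```python
from collections import defaultdict

# Hypothetical, simplified VPN session records: (user, source_ip, start, end).
# Times are minutes since midnight to keep the sketch simple.
sessions = [
    ("rsmith", "203.0.113.10", 540, 600),  # logged in from one location...
    ("rsmith", "198.51.100.7", 550, 620),  # ...and overlapping from another
    ("jdoe",   "192.0.2.44",   500, 530),
]

def simultaneous_vpn_alerts(sessions):
    """Flag any user with overlapping VPN sessions from different source IPs."""
    by_user = defaultdict(list)
    for user, ip, start, end in sessions:
        by_user[user].append((ip, start, end))
    alerts = []
    for user, recs in by_user.items():
        for i, (ip1, s1, e1) in enumerate(recs):
            for ip2, s2, e2 in recs[i + 1:]:
                if ip1 != ip2 and s1 < e2 and s2 < e1:  # time windows overlap
                    alerts.append((user, ip1, ip2))
    return alerts

print(simultaneous_vpn_alerts(sessions))  # rsmith trips the rule
```

A production rule would of course also consider geolocation of the source IPs, since two addresses at the same site can legitimately overlap.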

Figure 1: LogRhythm's AI Engine

The situation was an obvious case of compromised user credentials. A corporate end user should typically not be logging in simultaneously from two geographically separate locations. In response, the organization’s Security Operations Center (SOC) called the end user (who happened to be a technical security staff member himself) to investigate the matter. The SOC wondered if the user had set up a proxy device from home, or was perhaps using his mobile device to initiate a connection or even running his own penetration test just to play with his colleagues. The SOC determined that the end user had no malicious intent; he was using the VPN in a legitimate fashion while traveling on a business trip.

Because he was boarding a return flight soon and would not need his laptop, the SOC instructed the user to turn it off until he arrived back at the home office and could deliver it to the investigation team. Additionally, the SOC disabled the compromised Active Directory account, and the user’s computer account was removed from the network.


Once the laptop was received, IT ran a full antivirus scan and found no suspicious files or programs on the system. The IT team then placed the unit in an isolation/test lab for observation before reimaging it, because they wanted to identify the source of the problem and take steps to prevent it in the future. So, the computer was isolated and observed with LogRhythm’s network monitoring probe running.

At many organizations, management frequently over-relies on antivirus and assumes the organization is protected from any sort of malware damage. This is a serious misconception.

This particular threat was polymorphic in nature: as the name implies, such malware regularly changes or “morphs,” altering the appearance of its code. This characteristic lets it bypass detection by traditional antivirus tools and signatures. In our scenario, a more advanced scanner was deployed, and a file related to the threat was indeed found.
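It’s easy to see why hash-based signatures fail against polymorphic code: change a single byte and the file hashes to an entirely different value. This toy demonstration (the payload bytes are obviously made up) shows the problem:

```python
import hashlib

# Two "variants" of the same hypothetical payload: one byte of padding differs.
variant_a = b"MALWARE_PAYLOAD" + b"\x00"
variant_b = b"MALWARE_PAYLOAD" + b"\x01"

# An AV signature database that only knows variant A's hash.
sig_db = {hashlib.md5(variant_a).hexdigest()}

print(hashlib.md5(variant_a).hexdigest() in sig_db)  # True  -> detected
print(hashlib.md5(variant_b).hexdigest() in sig_db)  # False -> slips past
```

The functionally identical variant walks right past the signature check, which is why behavior-based detection matters.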

A proven, reliable antivirus solution is an important network security tool that you need on your network, but in today’s virulent, ever-changing threat landscape, it by no means provides the comprehensive protection you need. There is no substitute for comprehensive monitoring by a SIEM with a wealth of built-in knowledge about cryptic security logs and intelligent, pre-built rules to catch unusual activity.

Adobe Flash was suspected as the malware’s entry point because Shockwave was found to be improperly patched during a patch-scanning assessment of the computer.  (Figure 2) Unusual, irregular browser helper objects were also found; this situation is common when malware wants to hijack and redirect a browser session or send a user to a malicious site.


The organization used LogRhythm to initiate a full packet capture and deep packet inspection (DPI) of all traffic initiated during tests on the computer. All traffic from the isolated laptop was going to a single destination IP address that did not belong to the organization – naturally raising suspicions and indicating a possible hidden proxy mechanism on the isolated computer. See Figure 3.

Figure 3: A DPI showed traffic consistently going to the same IP address

Running ipconfig /displaydns showed that all traffic from the computer resolved to a common host record. Obviously, this was a glaring red flag. Because the computer was sending every outbound packet to the same IP address, the problem was identified as intentional DNS poisoning.
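You can spot this pattern in the DNS resolver cache mechanically: if many unrelated host names all resolve to one address, something is redirecting traffic. A minimal sketch, assuming hypothetical `ipconfig /displaydns`-style output (the host names and IPs below are examples from reserved documentation ranges):

```python
import re
from collections import Counter

# Hypothetical excerpt of `ipconfig /displaydns` output.
DNS_CACHE = """
    Record Name . . . . . : www.example.com
    A (Host) Record . . . : 198.51.100.9
    Record Name . . . . . : mail.example.net
    A (Host) Record . . . : 198.51.100.9
    Record Name . . . . . : portal.example.org
    A (Host) Record . . . : 198.51.100.9
"""

def suspicious_resolutions(cache_text, threshold=3):
    """Flag IPs that many distinct host names resolve to - a poisoning hint."""
    ips = re.findall(r"A \(Host\) Record[ .]*: (\S+)", cache_text)
    return [ip for ip, count in Counter(ips).items() if count >= threshold]

print(suspicious_resolutions(DNS_CACHE))  # ['198.51.100.9']
```

In a healthy cache, distinct domains map to distinct addresses; a single IP dominating the cache is worth investigating.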

It was important to the SOC to find out where the traffic was headed. Studying the IP address itself, the team identified a proxy IP address (DHCP lease) from an ISP in the United States. The SOC then contacted the ISP, which confirmed that the server was a compromised computer on its watch list.

The ISP then notified its customer (who had no malicious intent) that their server was hosting a compromised node, which was redirecting traffic to a location in Finland. The customer then took the server off the network and resolved the situation.

Cybercriminals had been capturing and redirecting traffic through illegally compromised systems and would have had many opportunities to do harm, but they were thwarted in this case.


The investigation team uploaded the suspicious files to the antivirus community for the purpose of building awareness and with the hope that the community could create and deploy signatures and other heuristics to combat the malware threat.

Finally, to help prevent the same problem from happening again, the organization used LogRhythm to create a DPI rule to flag, alert, and capture proxy traffic and the same malware, should it reappear. (Figure 4) The computer that experienced the suspicious activity was reimaged, and patching was tightened on it and on computers across the company for potential Flash- and Shockwave-related problems for even greater risk mitigation.

Figure 4: A DPI rule monitors for traffic sent to the malicious IP address

None of the organization’s vital information was compromised, because the suspicious activity was caught so quickly and aggressively, and because effective action was taken so promptly. What could have been a major incident, or even a catastrophic data breach, was a mere bump in the road.


Malicious external attackers will use any means to access corporate information. Delivery mechanisms such as phishing-based attachments and malware-laden websites allow attackers to enter the figurative four walls of your organization. Unpatched applications such as Flash and Java allow access to credentials, the underlying operating system, data, and applications, giving the external attacker the ability to not just access corporate data but, as in the case of the scenario above, the ability to pass any obtained information outside the corporate walls for further malicious use. It all starts with one compromised endpoint.

Organizations can no longer rely simply on signature-based scanning of machines to identify malware. Polymorphic malware takes on an infinite number of forms, making it difficult to identify. And malware doesn’t exist for the sake of just existing; it has a purpose in mind that always involves taking something from you. So, a comprehensive approach to protecting your organization will entail not just looking at malware as a set of files to be detected, but also looking at it in terms of the actions it takes. You should be looking for ways to detect those actions on your network with the same determination with which you’d use an antivirus scanner to look for malware executables.

By taking this approach to thwarting malware, LogRhythm’s customer was able to automatically identify and address a potential issue the moment it arose, well before any damage could be done. Expanding your anti-malware efforts beyond simple machine scans to include scanning the network for malware activity will create a layered defense, ensuring the greatest effort in stopping malware in its tracks.

About LogRhythm

LogRhythm, a leader in security intelligence and analytics, empowers organizations around the globe to rapidly detect, respond to and neutralize damaging cyber threats. The company’s award-winning platform unifies next-generation SIEM, log management, network and endpoint monitoring and forensics, and security analytics. In addition to protecting customers from the risks associated with cyber threats, LogRhythm provides innovative compliance automation and assurance, and enhanced IT intelligence.

Consistently recognized by third-party experts, LogRhythm has been positioned as a Leader in Gartner’s SIEM Magic Quadrant report for four consecutive years, named a “Champion” in Info-Tech Research Group’s 2014-15 SIEM Vendor Landscape report, ranked Best-in-Class (No. 1) in DCIG’s 2014-15 SIEM Appliance Buyer’s Guide, awarded the SANS Institute’s “Best of 2014” award in SIEM, and received the SC Magazine Reader Trust Award for “Best SIEM Solution” in April 2015. Additionally, the company earned Frost & Sullivan’s SIEM/LM Global Market Penetration Leadership Award and has been named a Top Workplace by the Denver Post. LogRhythm is headquartered in Boulder, Colorado, with operations throughout North and South America, Europe and the Asia Pacific region.


[i]  IDC, The Link Between Pirated Software and Cybersecurity Breaches (2014)

[ii]  Verizon, Data Breach Investigations Report (2015)

[iii]  Ponemon, State of the Endpoint Report: User-Centric Risk (2015)


Strengthen your defenses where the battle is actually being fought – the endpoint

Tue, 29 Sep 2015 08:32:38 GMT

Defense-in-depth pretty much backs up the thought that every security technology has a place. But are they all created equal? Security is not a democratic process, and no one is going to complain about security inequality if you are successful in stopping breaches. So I think we need to acknowledge a few things. Right now the bad guys are winning on the endpoint – in particular the workstation. One way or another they are getting users to execute bad code on their workstation. Having achieved a beachhead, they work their way out across our network following a horizontal kill chain until they reach “the goods”. Next-generation firewalls, identity and access control, and privileged account management all have a part to play in detecting and slowing down this process. But we are not doing enough on the endpoint to recognize malicious code and key changes in user and application behavior. The strength of NGFWs is their eye-in-the-sky ability to watch network traffic as a whole. But they can’t see inside encrypted packets, and they don’t know which program inside the endpoint is sending or receiving observed packets. Much less can an NGFW tell you when that program appeared on the endpoint, how it got there, who executed it and so on.

So am I arguing for collecting endpoint security logs? Including workstations? Well, that’s a start. But getting all your workstation security logs is challenging and may not meet your requirements, because native logs do lack important information. If you have more than a handful of workstations, forget trying to collect their logs using any kind of pull/polling method; it just isn’t going to work. If you stick with native logs, you need to implement Windows native Event Forwarding, which is a great technology but right now lacks management tools. So for most organizations that means agents.

Historically there’s been a lot of push back to deploying YAA (yet another agent) on workstations simply for the purpose of collecting logs. And I have to agree that going to the trouble of installing and maintaining an agent on every workstation when all you get is its native logs is a tough proposition.

That’s why I like what EventTracker has done with EventTracker 8 and the powerful detection, behavior analysis and prevention capabilities in their new agent. Basically it goes like this:

  1. We are losing the war on the endpoint front
  2. Ergo, we need to beef up defenses on the endpoint
  3. But native logs aren’t valuable enough alone to justify installing an agent
  4. Conclusion: increase the value of the agent by doing more than just efficiently forwarding logs

EventTracker 8’s Windows agent does much more than just forward logs. In fact, maybe we shouldn’t call it an agent. Perhaps sensor would be a better term.

One of the key things we need to do on endpoints is analyze the programs executing and identify new, suspect and known-bad programs. With native logs all you can get is the name of the program, who ran it and when (event ID 4688). The native log can’t tell you anything about the contents (i.e. the “bits”) of the program, whether it’s been signed, etc. Here’s what EventTracker 8 does every time a process is launched. It takes the process’s signature, pathname and MD5 hash. It compares that information against:

  • A local whitelist
  • National Software Reference Library
  • VirusTotal
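The core of that check – hash the image on disk and look the hash up – can be sketched in a few lines. This is a simplified illustration, not EventTracker’s actual mechanism; the whitelist contents and function name are hypothetical, and the NSRL and VirusTotal lookups would layer on top of the same hash:

```python
import hashlib
from pathlib import Path

# Hypothetical local whitelist of known-good MD5 hashes.
WHITELIST = {"d41d8cd98f00b204e9800998ecf8427e"}  # MD5 of an empty file

def classify_process_image(path: Path) -> str:
    """Hash a launched program's image and compare it to the local whitelist."""
    digest = hashlib.md5(path.read_bytes()).hexdigest()
    return "known-good" if digest in WHITELIST else "unknown - investigate"

# Usage sketch: an empty file hashes to the whitelisted value above.
sample = Path("empty.bin")
sample.write_bytes(b"")
print(classify_process_image(sample))  # known-good
```

An agent doing this at process-launch time can raise a synthetic event the moment an unknown binary runs, which is exactly the visibility native event ID 4688 lacks.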

This is stuff you can only do if you have your own bits (i.e. an agent) running on the endpoint. You can’t do it with native logs or with an NGFW. Here’s an example “synthetic” event generated by EventTracker that says it all:


I wish Windows had that event.

“But, wait. There’s more!”

Visibility inside the programs running on your endpoints, and being able to compare them against internal and external reputation data, is extremely valuable for detecting and stopping attacks. But if we have a good agent on the endpoint we can do even more. We can analyze what that program is doing on the network. What other systems is it trying to access internally, and where is it sending data out on the Internet? Here’s an example of what EventTracker 8 does with that information. How would you like to know whenever a non-browser application connects to a standard port on some unnamed system on the Internet? Check out the event below.
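The idea behind that alert is simple to express: correlate the connecting process with the destination port, and flag anything that isn’t a browser talking on a web port. A minimal sketch – the connection records are hard-coded hypothetical examples, where a real sensor would capture them live on the endpoint:

```python
# Process names we expect to speak HTTP/HTTPS to the Internet.
BROWSERS = {"chrome.exe", "firefox.exe", "msedge.exe", "iexplore.exe"}

# Hypothetical connection records observed by an agent: (process, dest_port).
connections = [
    ("chrome.exe", 443),
    ("svchost.exe", 443),  # non-browser talking HTTPS to the Internet
    ("update.exe", 80),
]

def non_browser_web_traffic(conns):
    """Flag non-browser processes connecting out on standard web ports."""
    return [(proc, port) for proc, port in conns
            if port in (80, 443) and proc.lower() not in BROWSERS]

print(non_browser_web_traffic(connections))
```

This per-process attribution is precisely what an NGFW can’t give you: it sees the packets but not which program on the endpoint sent them.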

If you are up on malware techniques, though, you realize that discrete EXEs are not the only way attackers get arbitrary code to run on target systems. They have developed many different ways to hide bad-guy code inside legit processes. One thing EventTracker does to detect this is look for suspicious threads injected into commonly abused processes like svchost.exe. EventTracker also does sophisticated analysis of the user too – not just programs – and alerts you when it sees suspicious combinations of user account, destination and source IP addresses.

EventTracker combines all the data that can only be obtained with an endpoint agent with general blacklist data from outside security organizations and specific whitelist data automatically built from internal activity. This is a great example of what you can do once you have your own code running on the endpoint. Combine native logs from each endpoint with all this other information and you are way ahead of the game.

