Security, et al

Randy's Blog on Infosec and Other Stuff

System Center Endpoint Protection (SCEP) and EMET are Effective and Free for Many Organizations But You Can Afford More

Thu, 09 Apr 2015 11:53:09 GMT

If I were to lose sleep over technology risks, it would be over endpoint security. Endpoint security is probably every organization’s biggest risk area, and it shows in the frequency and increasing severity of data breaches. You can trace almost every data breach over the past several years to endpoints – especially workstations.

Preventing unwanted (malicious, unauthorized, unlicensed or otherwise) code from executing on endpoints is not easy. There are a million ways to trick users, browsers and applications into executing arbitrary code. Additionally, the bad guys keep getting more sophisticated.

I don’t think the day is far off when spending money on signature-based antivirus instead of more advanced protection will be regarded as wasteful and irresponsible. You may not want to go without traditional AV, but that doesn’t necessarily mean you have to pay for it.

Many customers already get Microsoft’s System Center Endpoint Protection (SCEP) “free” with their enterprise agreement. That money could be spent on advanced endpoint security technologies, such as application control, that prevent untrusted code from running – even if it’s part of a targeted attack for which no signatures have been developed.

Bit9 + Carbon Black is a sponsor here at Ultimate Windows Security and one of the companies producing advanced endpoint security technology. They have recently integrated their technology with Microsoft’s SCEP and Enhanced Mitigation Experience Toolkit (EMET), and the integration will really help organizations that use these products respond to malware infections more effectively.

For instance, let’s say SCEP detects a piece of malware and quarantines it. Is that it? Job done?

Actually it would be nice to know:

  • How did the file get there?
  • How long was it there?
  • Where had that file been before it was detected on that endpoint?
  • What other computers has it been opened on?
  • If it executed, what did it do?

Most organizations never pursue questions like this. Who has the time and resources? And most of the information required to answer those questions was never captured in the first place.

At the very least this leads to repeated infections and quarantining because the AV technology detects the malware but not the “dropper”. So the file keeps showing up on that system (or others), consuming IT security staff time and increasing the chance that it will finally get a chance to run.

Bit9 + Carbon Black’s integration with SCEP enables SCEP to notify the Bit9 security platform when known malware is detected. Bit9 then correlates that event with data its agent has collected on that endpoint (as well as others) to help you investigate the scope of the attack. After finding out how the malware was dropped in the first place, you may be able to remediate the problem and prevent future infections from parent processes. You may even uncover a much bigger problem.

Bit9 + Carbon Black also adds value for users of EMET. EMET provides some highly advanced hardening that protects applications from memory attacks such as heap spraying, structured exception handling overwrites, return-oriented programming and DLL injection. But EMET-protected systems are basically islands in terms of monitoring and management.

It’s not easy to answer:

  • Where is EMET deployed?
  • Where is it missing?
  • Which applications is EMET protecting and how is it configured?
  • When and where is EMET detecting attacks and shutting down applications?

That last one is a particularly big question because if EMET is responding to a false positive, you need to know before the issue slowly filters up to you through end-user support while your business is at a standstill because of your own security software.

On the other hand, if EMET is doing its job and stopping an attack that’s evaded your other layers of defense, you need to know about it as soon as possible so that you can track down the source and prevent other systems (such as non-EMET-protected computers) from being hacked.

Carbon Black’s integration with EMET addresses these issues and enables you to deploy and manage EMET centrally from the Carbon Black console.

You can learn more about Bit9 + Carbon Black’s integration with MS technologies at https://www.bit9.com/partners/microsoft/#overview.



Why naming conventions are important to log monitoring

Wed, 08 Apr 2015 10:18:28 GMT

Log monitoring is difficult for many reasons. For one thing, there are not many events that unquestionably indicate an intrusion or malicious activity. If it were that easy, the system would just prevent the attack in the first place. One way to improve log monitoring is to implement naming conventions that embed information about objects like user accounts, groups and computers, such as their type or sensitivity. This makes it easy for relatively simple log analysis rules to recognize important objects or improper combinations of information that would otherwise be impossible to spot.

However, asking for naming convention changes for the sake of log monitoring may be difficult to pull off. It’s common to treat log monitoring as a strictly one-way activity in relation to the production environment. By that I mean that security analysts are expected to monitor logs and detect intrusions with no interaction or involvement with the administrators of the systems being monitored, other than to facilitate log collection.

I realize that such a situation may not be easy to change, but if security analysts can have some input into the standards and procedures followed upstream from log collection, they can greatly increase the detectability of suspicious or questionable security events. Here are a few examples.

There are at least 3 kinds of user accounts that every organization uses.

  • End-user accounts
  • Privileged accounts for administrators
  • Service/application accounts

Each of these three account types is used in different ways and should be subject to certain best practice controls. For instance, no person should ever open an interactive logon session (local console or remote desktop) with a service or application account. But of course a malicious insider or external threat actor is more than happy to exploit such accounts, since they often have privileged authority and are frequently insecure because of the difficulties in managing them. Conversely, end-user and admin accounts assigned to people should not be used to run services and applications. Doing so causes all kinds of problems. For instance, if Service A is running as User B and that user leaves the company, Service A will fail the next time it starts after User B is disabled. In audits I’ve seen highly privileged admin accounts of long-departed employees still active because staff knew that various applications and services were running with those credentials. This of course creates all kinds of security holes, including residual access for the terminated employee.

Event ID 4624 makes it easy to distinguish between different logon session types with the Logon Type field. See the table below. But of course Windows can’t tell you what type of account just logged on. Windows doesn’t know the difference between end user, admin or service accounts. But if your naming convention embeds that information you can easily compare account type and logon type and alert on inappropriate combinations. Let’s say that your naming convention specifies that service accounts all begin with “s-“. Now all you need to do is set up a rule to alert you whenever it sees Event ID 4624 where Logon Type is 2 or 10 and account name is like “s-*”.
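
To make that rule concrete, here’s a minimal sketch in Python. It assumes the 4624 events have already been collected and parsed into simple records (in practice you would express this as a search or correlation rule in your SIEM), and the “s-” prefix and field names are just illustrations of the convention described above:

    # Sketch of the detection rule described above: alert on interactive (2) or
    # RDP (10) logons by accounts whose names mark them as service accounts.
    # Field names and the "s-" prefix are illustrative, not from any particular product.
    INTERACTIVE_LOGON_TYPES = {2, 10}   # 2 = Interactive, 10 = RemoteInteractive
    SERVICE_ACCOUNT_PREFIX = "s-"

    def is_suspicious_logon(event: dict) -> bool:
        """Flag Event ID 4624 where a service account logs on interactively."""
        return (
            event.get("event_id") == 4624
            and event.get("logon_type") in INTERACTIVE_LOGON_TYPES
            and event.get("account_name", "").lower().startswith(SERVICE_ACCOUNT_PREFIX)
        )

    sample = {"event_id": 4624, "logon_type": 10, "account_name": "s-backup", "workstation": "WKS042"}
    if is_suspicious_logon(sample):
        print(f"ALERT: service account {sample['account_name']} logged on interactively "
              f"(type {sample['logon_type']}) from {sample['workstation']}")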

This is just one example of why it is so valuable to implement naming conventions that embed key information about objects. If you name groups with prefixes or something else that tags privileged groups as such, it becomes very easy to detect whenever a member is added to a privileged group. Perhaps you follow certain procedures to protect privileged accounts from pass-the-hash attacks, such as limiting admins to logging on only from certain jump boxes. If privileged accounts and jump box systems are recognizable by name, then you can easily alert when a privileged account attempts a logon from a non-jump-box system.
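
The jump box rule is just as small. Again, the “adm-” and “JB-” prefixes are hypothetical naming conventions and the event fields are assumed to be pre-parsed:

    # Sketch: alert when a privileged account ("adm-" prefix, hypothetical) logs on
    # from a machine whose name does not mark it as a jump box ("JB-" prefix, hypothetical).
    PRIV_PREFIX = "adm-"
    JUMPBOX_PREFIX = "JB-"

    def violates_jumpbox_policy(event: dict) -> bool:
        account = event.get("account_name", "").lower()
        source = event.get("workstation_name", "").upper()
        return (
            event.get("event_id") == 4624
            and account.startswith(PRIV_PREFIX)
            and not source.startswith(JUMPBOX_PREFIX)
        )

    print(violates_jumpbox_policy(
        {"event_id": 4624, "account_name": "adm-rsmith", "workstation_name": "SALES-LT17"}))  # True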

This of course requires upfront cooperation from administrators, who may be resistant to changing their naming styles just for the sake of logs. And you need to get to know the procedures and controls used to keep your network secure so that you can configure your SIEM to recognize when intruders or malicious insiders bypass those controls. But both challenges are worth the effort.

Logon Type – Description

2 – Interactive (logon at keyboard and screen of system)
3 – Network (i.e. connection to a shared folder on this computer from elsewhere on the network)
4 – Batch (i.e. scheduled task)
5 – Service (service startup)
7 – Unlock (i.e. unattended workstation with password-protected screen saver)
8 – NetworkCleartext (logon with credentials sent in clear text; most often indicates a logon to IIS with “basic authentication”)
9 – NewCredentials (such as with RunAs or mapping a network drive with alternate credentials; this logon type does not seem to show up in any events – if you want to track users logging on with alternate credentials, see Event ID 4648)
10 – RemoteInteractive (Terminal Services, Remote Desktop or Remote Assistance)
11 – CachedInteractive (logon with cached domain credentials, such as logging on to a laptop when away from the network)



At the End of the Day You Can’t Control What Privileged Users Do: It’s about Detective/Deterrent Controls and Accountability

Tue, 31 Mar 2015 17:19:33 GMT

Sudo is awesome and so is every other technology that helps you implement least privilege over admins. But at the end of the day you are just getting more granular with the risk – the risk is still there. Take a help desk staffer who needs to handle forgotten password resets for end users. Giving a privileged user like that just the authority she needs to get her job done is way less risky than giving her full root authority. But there’s still risk, right? If she is dishonest or becomes disgruntled she can reset the password of your chief engineer or CEO and access some heavy-duty information.

So with any trusted user (whether a privileged admin or end user whose responsibilities require access to sensitive resources) you are ultimately left with detective/deterrent controls. You can’t prevent a user from trying to use whatever authority they have for evil but at least you can audit their activity. Ideally this gives you the chance to detect it and respond and at the very least it ensures accountability which is an important deterrent control. After all if you know everything you do is being recorded and subject to review, you think more than twice about doing something bad.

Besides serving as a control against malicious insiders, a privileged user audit trail is irreplaceable in today’s environment of advanced and persistent attackers. Such attackers actively try to gain privileged access, so you also need the ability to actively monitor privileged user activity for quick detection of suspicious events.

In past webinars with BeyondTrust I’ve talked about how to use sudo to control what admins can do. In this webinar I’ll look at how to audit what admins do inside Linux and UNIX with sudo’s logging capabilities.

Click here to register now.



How Randy and Company Do IT: Server and Application Monitoring

Thu, 19 Mar 2015 14:37:52 GMT

Note: This is part of an occasional series called “How Randy & Co Do It”.

We are a small but technology-heavy shop. We have a lot of servers, strict security requirements and a dispersed workforce. I also dabble and tinker a lot, so for that and other reasons our IT infrastructure is more complicated than that of most companies our size. My longtime sidekick Barry and I share responsibility for IT, but we both have lots of other work to do, so we try to set things up right and leave them alone. Nevertheless, things break, and troubleshooting can really kill our schedule and put us behind.

We needed something to help us keep a better handle on the status of our ever growing array of virtualization hosts, VMs, applications, VPNs, scheduled tasks and all the links between these components.

SolarWinds is one of our great sponsors and I’ve found them to be kind of a sweet spot in terms of IT tools: enterprise functionality with SMB-size cost and complexity – perfect for us. So we worked out a deal with SolarWinds to try out their Server and Application Monitor (SAM) and, if it worked out well, to share my experience. SAM is designed to help you “manage, monitor & troubleshoot application performance and availability”. We downloaded SAM early this year and here’s what we’ve found:

Installation

When you already don’t have time for troubleshooting, the last thing you need is a troubleshooting tool that takes time to install and set up. SAM is a large download (close to a GB) but once you’ve got the file, it just installs – there are no failed installs because some version of .NET is missing or whatever. If something isn’t there and SAM needs it, SAM installs it – including a built-in copy of SQL Server if you don’t have a SQL Server for handling SAM data.

Once installed, you just log on to the console and a wizard walks you through adding nodes for monitoring. Nodes are servers and other devices SAM can monitor. There are a number of ways to do this, ranging from manual entry to automatic scheduled discovery of new nodes. I simply entered a range of IP addresses for my Windows servers to start out with and provided a domain credential. SAM automatically found all our servers. Then it showed the applications it automatically recognized on each server and allowed me to confirm them for monitoring.

Applications

That brings me to one of the features I really value about SAM: its concept of “applications”. SAM doesn’t just monitor systems, it also catalogs the applications found on each system and then automatically builds a dashboard that shows you the status of each installation of that application across all your servers. For instance, the SQL Server dashboard lets you see how SQL Server is running across your entire network, wherever it is running. This is a great way to look at your network instead of strictly in terms of each server and the apps running on it.

SolarWinds recognizes hundreds of applications out of the box and knows how to determine if the application is “up” which of course varies from one application to another. For instance, with SQL Server it knows each instance needs at least the main database service to be in the running state in order to count the application as up. SolarWinds aggregates all this status information into a pie chart so that you can instantly visualize the current status of each application across the entire network.
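
To give a sense of what an “up” check boils down to, here’s a tiny sketch of the kind of test SAM automates for SQL Server. This is not SAM’s internal implementation – it just queries the default instance’s service (MSSQLSERVER) with the built-in Windows sc command and checks whether it reports RUNNING:

    # Minimal sketch of a service "up" check on Windows (not how SAM does it internally).
    # MSSQLSERVER is the service name of a default SQL Server instance.
    import subprocess

    def service_is_running(service_name: str = "MSSQLSERVER") -> bool:
        result = subprocess.run(["sc", "query", service_name], capture_output=True, text=True)
        return "RUNNING" in result.stdout

    print("SQL Server is up" if service_is_running() else "SQL Server is DOWN")

SAM, of course, runs checks like this on a schedule across every server and rolls the results up into those dashboards.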

But there will always be applications that SolarWinds, or any other monitoring solution, doesn’t know about. For instance, we have a scheduled task that runs every few minutes to move form submissions from our websites over to our CRM system. Normally I would have just created a monitoring rule to alert us if SAM sees any error events from that process. But it occurred to me that I should set this up as an Application in SAM. SAM can monitor just about anything you need (see below) and you can group these monitors as custom applications, which are then surfaced on dashboards side-by-side with the shrink-wrapped apps that SolarWinds recognizes out of the box. At the top level, you get a quick visual idea of the overall health of all applications and can then drill down. For a given application you can see everything being monitored about it, which may span many different data types, such as:

  • Service status
  • Event logs
  • Performance counters
  • Response times
  • File sizes

So in this case, I created a new “application” in SAM called “Website to CRM Integration” using one of the available templates. I set up the application to watch the event log for errors logged by our custom integration program – all I had to do was specify the Application log, Error as the event type and choose my application as the source, but I could just as easily have specified a range of Event IDs. But no news isn’t always good news – just because there aren’t any errors doesn’t mean the scheduled task is actually running. This particular process wakes up every few minutes and processes any transaction file that has been created by the web server since the last check. If the file is there it processes all the entries and deletes it. So I added a “File Age Monitor” that looks at the age of any transaction file. If the file gets older than 1 hour, I know the process isn’t running correctly because it should process and delete any transaction file within minutes. I love this check! No matter what goes wrong, for whatever reason (network outage, scheduled task logon problem, etc.), if a transaction file is sitting around out there and not getting processed, we’ll know about it.
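
The logic behind that File Age Monitor is simple enough to sketch. The snippet below is not SAM’s implementation, just the same idea in Python; the path and the one-hour threshold are placeholders matching the scenario above:

    # Sketch of a file-age check: if a pending transaction file is older than an hour,
    # the process that should consume and delete it is presumably not running.
    # The path is a hypothetical placeholder; SAM's File Age Monitor does this for you.
    import os
    import time

    TRANSACTION_FILE = r"D:\WebData\pending-transactions.xml"
    MAX_AGE_SECONDS = 60 * 60  # 1 hour

    def transaction_file_stuck(path: str = TRANSACTION_FILE) -> bool:
        if not os.path.exists(path):   # no file means everything has been processed
            return False
        return (time.time() - os.path.getmtime(path)) > MAX_AGE_SECONDS

    if transaction_file_stuck():
        print("ALERT: transaction file has not been processed in over an hour")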

Monitor Everything

I haven’t come up with a single situation yet that SAM can’t handle. We use it to monitor much more than logs and service status. The technical sources of data that SAM can monitor are called “component monitors” and the quantity and variety is a bit mind-boggling. You can check DHCP, query a web service, log on to a web site, run a database query, look for a process… the list goes on and on.

So now every time I realize I’m manually “checking” something I try to ask “How could I set this up in SAM?”

Positive Monitoring

The beauty of checking for actual positive results instead of just for errors or downed services is that, as in the example above, SAM can tell you whenever there is a problem with a given resource, regardless of what the problem is. Then, by having both checks and troubleshooting information (e.g. event logs) grouped as “applications”, SAM can instantly show you the information available for determining the cause of the problem. It’s so nice to know about a problem before your users or customers do.

In support of that goal SAM offers a variety of “user experience” monitors which we need to make more use of. These monitors simulate operations end-users are routinely performing and alert you as soon as response time or availability issues are detected.

System and Virtualization Monitoring

As soon as I added my Windows servers as nodes, SAM immediately alerted me to some serious issues on several servers that I wouldn’t have discovered until real problems had developed. It doesn’t take rocket science to detect volumes low on disk space but who has the time to check that manually? SAM did this immediately and automatically for me.

However, I was surprised that the Hardware dashboard remained empty. But then I added my VMware vCenter server as a node and enabled virtualization monitoring. Then, wham! The hardware dashboard populated and it made sense. SAM was smart enough to realize that all my Windows servers were VMs with no real hardware. But as soon as it started monitoring vCenter it discovered my 2 ESXi hosts, queried their hardware status and populated the dashboard. What blows my mind is that SAM can pull hardware status info from ESXi, Windows, Linux and other platforms and aggregate it all into one dashboard. Now if I could just start monitoring the coffee maker like all those startups flush with venture capital to spend on critical tasks like that.

Another way SAM’s virtualization support surprised me is how it recognizes a virtual machine in VMware as the same thing as a Windows Server node that it is monitoring directly. I haven’t drilled down into how it does that, but I’m impressed.

We’d been experiencing a general slowdown in our virtualized environment and thought it might just be related to growth. But SAM gave us the visibility, and freed up the time, to figure out that the slowdown was caused by weird stuff our cloud-based AV solution had been doing since the vendor was acquired and attempted to migrate our systems to their new agent. What a relief!

In the future I’ll try to write about SAM’s many other features like its Top 10 dashboards, server warranty monitoring, AppStack Environment, Network Sonar Discovery, AppInsight, geo maps and so on. In the meantime I encourage you to try out Server and Application Monitor. You can download a trial or just browse over to the interactive demo and instantly start playing with SAM at http://systems.demo.solarwinds.com/Orion/SummaryView.aspx



Monitoring What Your Privileged Users are doing on Linux and UNIX

Tue, 17 Mar 2015 07:52:43 GMT

In previous webinars I showed how to control privileged authority in Linux and UNIX. With sudo you can give admins the authority they need without giving away root and all the security risks and compliance problems caused by doing so. But once you carefully delegate limited privileged authority with sudo, you still need an audit trail of what admins are doing. A privileged user audit trail is irreplaceable as a deterrent and detective control over admins and as a means of implementing basic accountability. But in today’s environment of advanced and persistent attackers you also need the ability to actively monitor privileged user activity for quick detection of suspicious events.

So in this webinar, I will dive into the logging capabilities of sudo. Sudo provides event auditing for tracking command execution by sudoers – for successful and denied sudo requests as well as errors. I will show you how to enable sudo auditing and how to control where it’s logged and whether syslog is used, and more importantly: what sudo logs look like and how to interpret them.
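
To give you a flavor ahead of time, sudo’s syslog entries follow a simple “key=value ; key=value” layout. The sample lines below are written from memory (your syslog prefix will differ), and the little parser is just a sketch of how you might pull out the interesting fields:

    # Sketch: extract user, command and allowed/denied status from typical sudo syslog lines.
    # Sample lines are illustrative; the timestamp/host prefix depends on your syslog setup.
    import re

    SAMPLE_LINES = [
        "Apr 17 09:14:02 web01 sudo:    alice : TTY=pts/0 ; PWD=/home/alice ; "
        "USER=root ; COMMAND=/usr/bin/tail /var/log/secure",
        "Apr 17 09:15:40 web01 sudo:      bob : command not allowed ; TTY=pts/1 ; "
        "PWD=/home/bob ; USER=root ; COMMAND=/bin/cat /etc/shadow",
    ]

    pattern = re.compile(r"sudo:\s+(?P<user>\S+) :(?P<rest>.*)")

    for line in SAMPLE_LINES:
        m = pattern.search(line)
        if not m:
            continue
        fields = dict(kv.split("=", 1)
                      for kv in (p.strip() for p in m.group("rest").split(";"))
                      if "=" in kv)
        denied = "command not allowed" in m.group("rest")
        print(m.group("user"), "DENIED" if denied else "ALLOWED", fields.get("COMMAND"))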

But sudo also offers session auditing (aka the iolog) which allows you to capture entire sudo sessions including both input and output of commands executed through sudo whether in an interactive shell or via script. I will show you how to configure sudo session logging and how to view recorded sessions with sudoreplay.

After my session, Paul Harper from BeyondTrust will show you how PowerBroker UNIX & Linux builds on sudo’s audit capabilities.

This will be an interesting and technical session.

Click here to register now!



4 Fundamentals of Good Security Log Monitoring

Mon, 23 Feb 2015 11:17:54 GMT

Effective security log monitoring is a very technical challenge that requires a lot of arcane knowledge, and it is easy to get lost in the details. Over the years, 4 things have stood out to me as fundamentals for keeping the big picture in view and meeting the challenge:

  1. Just do it

    Sometimes organizations hesitate to implement a SIEM/log management solution because they aren’t sure they will be able to fully utilize it, whether because of staff or skill shortages or a host of other reasons. Making sure someone is watching your SIEM and following up on what it finds is certainly important, but don’t let that hold you back from implementing SIEM and log management. There are multiple levels on the security monitoring maturity model and you can’t necessarily start off where you’d like to. But you need to be collecting and securely archiving logs no matter what – even if no one is looking at them at all. If you don’t capture logs when they are created you lose them forever; logs will only be there when you need them if you’ve at least been collecting and securely archiving them. That may be the first step on the maturity model, but without it you lose accountability and the ability to conduct critical forensics to determine the history and extent of security incidents.

  2. Let your environment inform your monitoring

    Security analytics technologies like SIEMs are getting better and better at recognizing malicious activity out of the box. But there will always be a limit to what shrink-wrapped analytics can find. At some point you need to analyze your environment and tailor your monitoring and alerting criteria to take into account what makes your environment different. How is your network divided internally in terms of security zones? Which systems should be making outbound connections to the Internet, and which shouldn’t? Which PCs should be making administrative connections to other systems, and which shouldn’t? Those are just a few examples. But the more local intelligence you build into your monitoring, the better your SIEM will be at recognizing activity you should investigate.

  3. The more secure and clean your environment – the easier it is to detect malicious activity

    Here are a couple of examples of what I’m talking about. First a security example: let’s say you lock down your servers so that they can only accept remote desktop connections from a “jump box” you set up, which also requires multi-factor authentication. That’s a great step. Now leverage that restriction by teaching your SIEM to alert you when it sees remote desktop connection attempts to those servers from unauthorized systems. APT and other malicious outsiders will likely be unaware of your setup at first and will trip the alarm. Here’s a cleanliness example. Let’s say you have a naming convention for user accounts that allows you to distinguish end-user accounts, privileged admin accounts and non-human accounts for services and applications. If you strictly follow that convention you now have all kinds of opportunities to catch bad things as they happen in your environment. For instance, if you see a non-human service account trying to log on interactively or via Remote Desktop, you may very well have an insider misusing that account or an external attacker who has successfully run a pass-the-hash attack against that account. Or if you see an administrative account created that doesn’t match the naming convention – that may be a tipoff of an APT creating a back-door account.

  4. Leverage a SIEM solution that is intelligent and automates as much as possible

    If you are going to follow through on my #2 and #3 recommendations you need a SIEM that frees you up from doing all the obvious stuff that can be packaged by the vendor. EventTracker has been around a long time and has a huge amount of knowledge and support already built into it for many, many different log sources as well as intelligent behavior analysis.

Log monitoring is a rigorous, technical exercise, but good SIEM technology frees you up to focus on what makes your environment different and how to leverage those differences to recognize malicious activity as it happens. But if you aren’t at the point where you can get truly sophisticated in your monitoring, don’t let that hold you back from at least collecting and archiving those logs so that they are secure and available when you need them.



NEW Free & Easy to Use Tool, Event Log Forwarder for Windows

Sun, 22 Feb 2015 22:13:47 GMT

Right or wrong, syslog remains the de facto standard protocol for log forwarding. Every SIEM and log management solution in the world accepts syslog, so you frequently run into the situation of needing to forward Windows events via syslog. But Windows doesn’t support syslog, and the “free” forwarders I’ve looked at in the past were just not pretty. Some were even written in Java. Ugh. Besides being clunky and hard to configure, they weren’t flexible in terms of which event logs they could forward, much less which events within those logs.

But SolarWinds has just released a new and completely free Event Log Forwarder for Windows (ELF). ELF takes seconds to download, seconds to install and a minute to configure. Just select the logs you want to forward (the example below shows successful and failed logons and process start events from the Security log):


and specify the address of your syslog server:


ELF runs as a background service and immediately starts sending events out via syslog as you can see here on my syslog server.


I love how easy it is to filter exactly which events are sent. This allows you to filter out noise events at their source – conserving bandwidth and log management resources all the way down the line.

But what if you have many systems that need to be configured to forward events? I took a look at the folder where ELF was installed and found a LogForwarderSettings.cfg file that is very easy to read. Moreover, there’s even a LogForwarder.PDF file in the Docs folder that fully documents this settings file. I don’t see anything installation-dependent in this file, so it looks to me like you could use the ELF GUI client to configure one installation and then copy LogForwarderSettings.cfg to all the other systems where you want the same behavior.
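
If that holds up, pushing the settings out could be as simple as the sketch below. Be warned: the admin-share destination path is a guess on my part (check where ELF actually installs on your systems), the host list is yours to fill in, and you would presumably need to restart the forwarder service on each target afterward:

    # Sketch: copy a known-good LogForwarderSettings.cfg to other systems via admin shares.
    # The destination folder is a guess; verify the actual ELF install path first.
    import shutil

    MASTER_CFG = r"C:\Temp\LogForwarderSettings.cfg"
    HOSTS = ["FILE01", "SQL01", "WEB01"]   # your server names here
    DEST = r"\\{host}\C$\Program Files\SolarWinds\Event Log Forwarder\LogForwarderSettings.cfg"

    for host in HOSTS:
        try:
            shutil.copy2(MASTER_CFG, DEST.format(host=host))
            print(f"copied settings to {host}")
        except OSError as err:
            print(f"failed on {host}: {err}")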

You can download SolarWinds Event Log Forwarder here http://www.solarwinds.com/register/registrationb.aspx?program=20056&c=701500000011a71&CMP=BIZ-TAD-RFS-ELF_Review-ELF-DL-2015



Mobile and Remote Endpoints – Don’t Leave Them Out of Your Monitoring

Mon, 09 Feb 2015 16:49:17 GMT

I’ve always tried to raise awareness about the importance of workstation security logs. Workstation endpoints are a crucial component of security and the first target of today’s bad guys. Look at news reports and you’ll find that APT attacks and outsider data thefts always begin with the workstation endpoint. So unless you want to ignore your first opportunity to detect and disrupt such attacks you need to be monitoring them.

For example, if you aren’t monitoring workstation endpoints you don’t know:

  1. When the user is really physically present using the endpoint vs. when an attacker is posing as the user while they are absent
  2. When new executables start for the first time – a key indicator of an APT agent (see the sketch after this list)
  3. When new software is installed or existing code is modified
  4. When removable media and other devices are connected
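
On the second point, here’s a rough sketch of what “first time an executable runs” detection looks like, assuming process creation events (Event ID 4688) are being collected from the workstation and reduced to simple records; the baseline file is purely a stand-in for whatever state your monitoring product keeps:

    # Sketch: flag executables seen for the first time on an endpoint, based on
    # collected process creation events (Event ID 4688). The baseline file is a
    # hypothetical stand-in for real state kept by an endpoint monitoring product.
    from pathlib import Path

    BASELINE = Path("known_executables.txt")

    def first_time_executables(events):
        known = set(BASELINE.read_text().splitlines()) if BASELINE.exists() else set()
        new = []
        for e in events:
            exe = e.get("new_process_name", "").lower()
            if exe and exe not in known:
                new.append(exe)
                known.add(exe)
        BASELINE.write_text("\n".join(sorted(known)))
        return new

    events = [{"event_id": 4688, "new_process_name": r"C:\Users\Public\updater.exe"}]
    for exe in first_time_executables(events):
        print(f"ALERT: first execution of {exe} on this endpoint")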

That’s certainly true of endpoints connected to your internal network. But what about the occasionally (if ever) connected workstations of mobile and remote employees, outside sales and field forces, telecommuters, etc.?

Beyond the points above, with mobile/remote endpoints you don’t have the luxury of analyzing their network traffic patterns because they aren’t visible to net-flow analyzers on your internal network. So you don’t have any opportunity to detect network signatures indicative of a compromised endpoint “phoning home” to its command and control center.

If you agree internal endpoints are important to monitor then you have to admit that mobile and remote ones are too. Some may counter that internal endpoints are a threat because they are on the internal network while mobile/remote systems are not. Well, where no VPN is in use that may be true as far as it goes, but the threat is far from eliminated.

True, attackers who gain access to a VPN-less mobile/remote endpoint can’t immediately start a network scan and begin directly attacking other systems on the internal network in order to extend the horizontal kill chain.

But advanced persistent threat actors commonly use other techniques that work just as well against mobile/remote endpoints. Having gained control of such an endpoint, they can “become” the user and access any resource that user can. For instance, attackers are known to drop infected files in file sharing locations accessible to that remote user and patiently wait for someone else on the inside to open the file.

At the end of the day, this is just another example of how there is no real solid boundary between our networks and the outside world – perhaps in terms of packet routing, but not in terms of content. Getting hit by APT-type attacks is nearly a foregone conclusion. So early detection is critical in order to stop real losses – like the information breaches that plaster the front page today. And the place to start is the endpoint – whether it’s technically on your internal network or not.

But how can you monitor remote systems that may be anywhere? About the only thing you can count on is for those systems to have web access via http or https. Today, such endpoints use VPNs less and less with the rise of web-based applications, the cloud, reverse proxies and other remote access technologies. So even if pulling entire event logs over the VPN were practical, it’s decreasing as an option.

The good news is that there are great ways to monitor mobile and remote systems in near real-time without a VPN, using just the restricted web-based access you can expect mobile and remote employees to have most of the time. Check out my latest webinar sponsored by EventTracker, Early Detection: Monitoring Mobile and Remote Workstations in Real-Time with the Windows Security Log. It’s available now for on-demand viewing.



How to sudo it right for security, manageability, compliance and accountability

Mon, 02 Feb 2015 16:18:03 GMT

Sudo is a fact of life on UNIX and Linux. It’s one of the first things auditors look for and it’s the native option for protecting root from being abused. It’s also the standard way to implement least privilege and enforce accountability over privileged admins.

But sudo – like most components of Linux/UNIX – is very configurable and it’s easy to (pardon the pun) “sudo it wrong”. In this webinar I will provide a quick intro to sudo, explaining how it eliminates the need to log on as all-powerful root while providing accountability and least privilege. I will also show you a number of common sudo pitfalls and the risks you run if sudo is not configured and used correctly.

I’ll explain to you how to sudo it right by doing things like:

  • Using include files to eliminate duplicate sudo policies between systems
  • Managing sudo consistently across multiple systems
  • Avoiding ALL
  • Using groups instead of user names
  • Specifying secure path
  • Logging
  • Configuring timeouts

Finally, Paul Harper, product manager from BeyondTrust, will review commercial options for augmenting sudo and attaining least privilege on UNIX and Linux.

This will be a very technical and useful webinar to help you improve the security, manageability, compliance and accountability of your *nix environment.

Click here to register now!



Randy's Review of a Fast, Easy and Affordable SIEM and Log Management Solution

Thu, 29 Jan 2015 17:46:06 GMT

One of the most frequent complaints I hear from you folks is “We need a SIEM but can’t afford the big enterprise solutions.”  And as a tech-heavy small business owner I truly understand the need for software that installs in minutes and doesn’t require a ton of planning, learning, design and professional services before you start getting results.

Well, I’ve installed SolarWinds Log and Event Manager (LEM) in my lab and I can say that it is all of the above and more. There’s actually no software install or server provisioning because it’s a prebuilt virtual appliance. When you download and run the LEM install package it simply unpacks the OVA template. You just open VMware or Hyper-V, deploy a new VM from the template and point it at the file from SolarWinds. After it boots up for the first time, all you have to do is point your web browser at its DHCP-assigned address, which you can see in VMware or Hyper-V. Answer a few configuration questions, such as the static IP address, and you are up and running. To start pulling events from your servers, click on Ops Center and click on the green plus sign. We’re talking minutes.

LEM has all the features you need and expect from a SIEM.  And it’s flexible; you can monitor server logs with or without agents and you can also accept SNMP traps and Syslog flows from devices and UNIX/Linux systems. 

LEM is affordable, too.  It starts at $4495 and monitors up to 30 servers.  That’s the total price – no server OS or databases to license much less manage.

Since there’s such a need for affordable SIEM and log management, and so many of you in my webinars are still trying to get by with free utilities, I’ve partnered with SolarWinds to raise awareness about LEM. Please download it and try it out. Even if you don’t have a virtualization server, you can still run the virtual appliance with a free desktop virtualization program like VMware Player.

LEM is affordable but it’s not “cheap” software. LEM is actually one of the few SIEMs out there that implements my #1 feature: normalization and categorization. LEM understands what events actually mean from each of the many, many log sources it supports. By that I mean that whether the event comes from Linux, Windows, Cisco or any other source, if it’s a logon event (for instance) it gets parsed and categorized as such. This is important because every log source out there logs the same kinds of events but in a different format. None of us have time to learn all the formats and arcana about each log source. LEM’s normalization makes so many things not only possible but effortless. For instance: “show me all failed logon events for Randy Smith across all my systems and devices regardless of log source and format”. Voila!
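
Normalization is easier to appreciate with a concrete illustration. The sketch below is not LEM’s engine – it just shows the concept: two very different raw events (a Windows 4625 and a Linux sshd failure) mapped into one common “failed logon” shape so a single query covers both:

    # Sketch of the normalization concept (not LEM's implementation): map raw events
    # from different log sources into one common schema so one query covers them all.
    import re

    def normalize(raw):
        if raw.get("source") == "windows" and raw.get("event_id") == 4625:
            return {"category": "logon_failure", "user": raw["account_name"], "host": raw["computer"]}
        if raw.get("source") == "linux_sshd":
            m = re.search(r"Failed password for (\S+) from (\S+)", raw["message"])
            if m:
                return {"category": "logon_failure", "user": m.group(1), "host": raw["host"]}
        return None   # sources/events we don't recognize yet

    raw_events = [
        {"source": "windows", "event_id": 4625, "account_name": "rsmith", "computer": "FILE01"},
        {"source": "linux_sshd", "host": "web01",
         "message": "Failed password for rsmith from 203.0.113.9 port 52114 ssh2"},
    ]

    failures = [n for n in map(normalize, raw_events)
                if n and n["category"] == "logon_failure" and n["user"] == "rsmith"]
    print(failures)   # both events match, regardless of original format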

So, please, take a look at LEM.  Download it here.  


