Wednesday, August 20, 2014

Finding Users Who Use the Conference Room Computers as a Proxy to Surf

I received an interesting alert today indicating that a host in a conference room was attempting to reach out to a site hosting an exploit kit.  This is not the first alert I have received on this machine, so I was a little puzzled.  I went to the machine to remediate and noticed that the AV software had already blocked the connection attempt.  I ran both an AV scan and Malwarebytes to make sure nothing had been missed.

The machine was clean.

Just before I logged out, I noticed that Windows Update had run and the computer needed to restart.  As I clicked to restart, a warning appeared saying that other people connected to the machine would lose their connections.  Hmmm....what's really going on here?

A little digging showed that you can find out who has logged into a machine via RDP by examining the event logs.  Open Event Viewer and navigate to:

Applications and Services Logs -> Microsoft -> Windows -> TerminalServices-LocalSessionManager -> Operational.

There you will find events showing who logged in, with what account, and from what source.
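If you'd rather pull the same information from the command line, wevtutil can query that log directly.  A sketch (assuming Event IDs 21 and 25, which cover session logons and reconnections in this log):

wevtutil qe "Microsoft-Windows-TerminalServices-LocalSessionManager/Operational" /q:"*[System[(EventID=21 or EventID=25)]]" /f:text /rd:true /c:50

That returns the 50 most recent logon/reconnect events in plain text, including the account name and source address.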

Now to go find some users......

(And yes, we need to fix our policy on conference room computers...but that's a battle for another day.)

Friday, May 16, 2014

Finding a Specific Microsoft Patch on a Host

After the Word (.rtf) 0-day was announced at the end of March, we turned on an alert to let us know when an .rtf file was delivered to the company.  Until the patch was applied, we actually blocked the incoming mail, inspected it, and if it was clean, we allowed it to reach its destination.  After the patch, we just alerted on the incoming mail. 

It's been a couple of months, and we are still getting the alerts.  Before I turned them off, I wanted to make sure the patch was on my host.  A quick command to check for the specific patch:

wmic qfe | find "KB2953095"

It seemed to work ok.
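One variation worth noting: find sets the errorlevel, so you can make the check self-announcing when running it across several hosts (same idea, just with a success/failure message bolted on):

wmic qfe get hotfixid | find "KB2953095" && echo PATCHED || echo MISSING

The && and || branches fire based on whether find located the string, which makes the output easy to eyeball in a big collection run.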

If there are better/easier ways to do this, leave a comment.

Somewhat off-topic....I can't believe the number of people who still send documents as .rtf.  Why not just use Word?  Or a text document?  The number of incoming .rtf documents was way higher than I would have guessed.  Most were resumes or travel booking documents.

Monday, May 12, 2014

SANS SIFT 3 and the Desktop Share

I've had the new SIFT 3.0 downloaded for a while, but I haven't been using it as much as I would like; I've been sticking with the older 2.x version.  One of the main reasons is that SIFT 2.x had a desktop shortcut that took me directly to a directory on the host OS, and that shortcut is missing in 3.0.  I fully admit, I don't know Linux as well as I know Windows.

A quick read-up on the issue showed that the host's share should be mounted in the guest under mount_points/hgfs.  I had that directory, but nothing was populated there.  And in the Virtual Machine Settings, I had Shared Folders set to Always Enabled.  Still nothing.

On a reboot, I noticed that there was an update to VMware Player.  I updated and checked the mount_points directory again, but still nothing.  One last Google search suggested running vmware-config-tools.pl.

Sure enough, after answering the questions, that did the trick.  Now, in the mount_points/hgfs folder, I see a subfolder for "C".  Bingo.
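For anyone hitting the same wall, the whole fix boiled down to a few commands (a sketch; share names and mount locations may differ on your build):

sudo vmware-config-tools.pl     # re-run the VMware Tools configurator; the defaults worked for me
vmware-hgfsclient               # lists the shares the guest can actually see
ls mount_points/hgfs            # the shared folder ("C" in my case) should appear here

The vmware-hgfsclient check is an assumption on my part - I believe it ships with VMware Tools - but it's a quick way to confirm the share is exposed to the guest before chasing mount problems.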

Now I have to get used to Unity and to finding everything I used to be able to find in SANS SIFT 2.x.

If anyone else has tips on making that transition, feel free to leave advice in the comments.

Friday, May 9, 2014

Finding Inactive Accounts

The SANS Top 20 Controls has a control named Account Monitoring and Control.  Within that control is a Quick Win:  "Ensure that systems automatically create a report on a daily basis that includes a list of locked-out accounts, disabled accounts, accounts with passwords that exceed the maximum password age, and accounts with passwords that never expire. This list should be sent to the associated system administrator in a secure fashion."

We don't have an automated report of those types of accounts, and quite frankly, we have very poor visibility into account control.  Coming from a DoD environment, I'm not used to having such lax controls.  Slowly, I'm starting to push the company forward, but it is taking time.

My first thought was to look at the inactive accounts.  I figured these accounts would be the lowest-hanging fruit to start with, and here's how I went about finding them.

(Note: I have created a master script that does more than what this post details...I'm only describing inactive accounts at this time.)

1.  Run this command (mine lives in a batch file):

dsquery user -inactive 4 -limit 3000 > accountout.txt

Call the output file what you like.  The -inactive 4 parameter tells dsquery to look for accounts that have been inactive for at least four weeks.  I picked four to start with, as I know we have users who travel extensively.  My hope is that once we get a handle on the output, I'll be able to lower that number.

2.  I took the output file and copied its contents into Excel.  From there, I used Data > Text to Columns to break up the data nicely.

3.  Column 2 seemed to be where I could differentiate between user and non-user accounts.  I filtered down to just the user accounts and copied them to a new sheet.

My results were staggering; there are way too many accounts.  My next step is to find or create a process to validate that these accounts are a) legitimate, and b) truly inactive.  Spot-checking a number of these users revealed contractors who, if I had to guess, are no longer with the company.  Prime targets for attack - which is why they should be disabled or deleted.
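One refinement I want to try, to skip the Excel step entirely: piping the dsquery output into dsget to pull just the fields I care about.  A sketch (assuming dsget behaves the same on our domain controllers):

dsquery user -inactive 4 -limit 3000 | dsget user -samid -display -disabled

That should emit the account name, display name, and disabled status in one pass, which would make the user/non-user sorting easier.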

Once that's done, I'll need to automate the process and schedule it to run weekly or so.  As for the locked-out, disabled, and password-age checks...those will be added in time.
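For the scheduling piece, schtasks should do it.  A sketch, assuming the batch file lands at C:\scripts\inactive.bat (a made-up path):

schtasks /create /tn "Inactive Account Report" /tr C:\scripts\inactive.bat /sc weekly /d MON /st 06:00

That would run the report every Monday at 6:00 AM; getting the output to the right admin in a secure fashion still needs to be worked out.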

Thursday, April 17, 2014

Letter to Management

This letter was posted today....and it could have been sent to our management.  (It wasn't.)  Many of the points echo exactly what is happening here.  The biggest excuse I hear is that management does not want to disrupt the corporate culture by implementing security controls.

I fear that a breach or severe incident will end up being the catalyst for change and for implementing controls.  Yes, I've had many small wins, but there is lots left to do.

Wednesday, April 16, 2014

humans.txt Appearing in our Firewall Logs

This morning I came in to work figuring I would continue analyzing our infrastructure for the Heartbleed bug.  We seem to be fine...this is a case where we got lucky because we have been running such an old version of OpenSSL.  That probably means we're vulnerable to a whole host of other issues, but we do not appear to be vulnerable to Heartbleed.  Which is a good thing.

Early on, a co-worker came to me asking if I had seen the note from our DDoS mitigation provider.  It was the first such email to provide a source address for the attack.  The "attack" only lasted four minutes and, to me, was not much to worry about.  However, it was at least an indicator to look for in the logs.  I popped the source address into our firewall logs and got 134 records back, all targeting various servers of ours.

And here was the unique finding.  Every request string looked something like:
http://ourserver/some_bogus_directory/some_bogus_file.php?php121dir=http://www.google.com/humans.txt
I fully admit, I had never heard of a humans.txt file; I knew about robots.txt, but not humans.txt.  So I looked it up.  We don't use it.  Next, I fetched Google's humans.txt file to see what was in there.  Nothing untoward.

The best I can come up with is that this is some kind of remote file inclusion (RFI) attack, and the attacker is scanning for vulnerable PHP servers.  Fetching a benign, well-known file like Google's humans.txt is a cheap way to test whether a server will include and execute remote content before bothering to send a real payload.
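If you want to sweep exported logs for the same indicator, a simple string match is enough.  A sketch, assuming the firewall logs are exported as text files (the path here is made up):

findstr /i /c:"php121dir=" C:\logs\firewall\*.log

The /c: switch searches for the literal string, and /i makes it case-insensitive.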

I found a great site with a little more info here, but their mitigation relied on .htaccess; we use our firewall instead.  I did not find much more information, so anyone who wants to shed a little more light on the subject, feel free to leave a comment.

Friday, March 28, 2014

Sophos Antivirus and EMET

In looking for mitigations for the recently announced Microsoft Word 0-day, I decided to install EMET on both my desktop and my laptop.  I fully admit, I'm not an EMET guru, nor do I know a lot about it.  I found plenty of directions for EMET (a good one here), so installation was a breeze.  However, tweaking it is another story.

First, Firefox 28 wouldn't start.  So, I had to tweak the application settings for Firefox to find out which particular protection was preventing it from starting up.  (Turns out, it was ROP.)
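As an aside, if my reading of the EMET documentation is right, EMET_Conf.exe in the install directory can dump which applications and mitigations are configured - handy for confirming the ROP change stuck.  This is an assumption on my part, as I've only poked at the GUI so far:

EMET_Conf --list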

Then, upon turning EMET loose, I received two "Quarantine Announcements" from our Sophos antivirus.  The notices were for buffer overflows in IE and Acrobat Reader.  The best I can tell from my analysis is that Sophos saw EMET hooking into those applications and didn't know how to report it.  I asked our Sophos administrator if he had heard anything about Sophos and EMET, but he didn't know what EMET was.  I authorized the activity in Sophos and rebooted a couple of times to see whether Sophos would report the activity on each boot.  So far, so good.

If I find out exactly how the buffer overflow was caught by Sophos, I'll update this post.

Monday, March 24, 2014

Python Links Updated

Just a quick note that I've updated my list of Learning Python resources after reading Harlan's great post.  My original (and now updated) post (and list) can be found here.

Thursday, March 20, 2014

Getting Started with Security Onion

After getting vague alerts from our DDoS protection company, one of the network engineers and I decided we needed to gain more visibility into the network.  We want to better understand these events and decide whether or not they are truly incidents.  Further, after we get notification of an event, we want to find the traffic and study it.  Enter Security Onion.  This tool is awesome: we can run Snort, shoot the output to Snorby, and capture the packet data as well.  We're pretty sure we have a box capable of running Security Onion; it's more a matter of how much data we want to keep.  Right now, we have a 1+ terabyte drive doing the heavy lifting, and we're just barely making it to each run of the purge job.

Our first shot at getting it up and running was fairly successful.  Data was flowing, and we saw some alerts.  Next on the agenda was tuning it so that we are not drinking from the fire hose.

And now we've broken Security Onion.  We're not sure where yet.  Events are coming in, and our sensor NIC has packets traversing it, but nothing is showing up in Snorby.  So, on to more troubleshooting.  Fortunately, this Security Onion server is not production-ready; we knew going in that we would have much tuning to do before we could truly rely on the output in a production environment.  The next step is to figure out what broke and see what we can get back.
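For the troubleshooting itself, two commands should narrow down which piece of the pipeline died (an assumption that both are present on our build; I believe they ship with the current releases):

sudo service nsm status     # shows whether the server and sensor processes are up
sudo sostat | less          # fuller health report: disk usage, sniffing interfaces, drops

Since alerts reach Snorby by way of barnyard2, that process is my first suspect: if Snort is writing unified2 files but barnyard2 is down, events never make it to the database.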

Wednesday, March 12, 2014

Finding hostnames on a Subnet

We have offices all over the world.  What I came to learn today is that there are two countries where we have very poor visibility into our own corporate networks - to the point that I suspect they are not managed very well, if at all.  I know I chase down malware in a couple of those networks on a daily basis.

One of our admins asked me if there is a way to get all of the hostnames on one of those subnets we don't have much visibility into.  Nmap would have worked well, but I wanted to come up with a command that a non-technical person could run and send me the output.  So, using a little Command Line Kung-Fu, I came up with:

for /L %A in (0 1 255) do nbtstat -A "XXX.XX.XXX.%A">>hosts.txt
Substitute your subnet for the Xs in that command.  (If you drop it into a batch file, remember to double the percent signs: %%A.)

It worked like a champ.  I suspect there is an easier way to do this, but this was easy enough.
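For reference, the Nmap equivalent would be something like this (assuming a build with the nbstat script; obviously not the non-technical-person option I needed):

nmap -sU -p 137 --script nbstat XXX.XX.XXX.0/24

The nbstat script pulls NetBIOS names over UDP 137, so it should return roughly the same information as the loop above, just faster.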