Wednesday, December 31, 2014

2015 To Do: The Low-Hanging Fruit

I'm not going to try to recap this last year; it's been great.  I know I've done good things and improved the security posture at the company as best I could.  Sure, there's more I could do and more I want to do; there have been battles won and battles lost.

So, as a mental note, and to set a baseline, I'm outlining these mini-projects I want to get done as fast as possible.  I'll try to revisit this list so I can see how long these endeavors took to complete (hoping that they get done).  These are in no particular order.

1.  I'd like to get Two-Factor Authentication (2FA) on all the servers.  We use 2FA for VPN connections and it works well.  However, I would like to get it added to all of our production servers so we can (a) better track logins to these servers, and (b) add the extra authentication step to critical and production servers.  One challenge here will be leveraging our existing 2FA infrastructure and adding it to the servers.

2.  We have many proxy services employed in the network infrastructure.  Headquarters has a slew of them, depending on where network connections start and finish.  (Layers :-) )  When the headquarters (and data center) moved in October and November, our Zscaler connection was knocked offline.  This did not hamper headquarters much due to the other proxy services, but some of our branch offices rely on Zscaler as their primary proxy service.  We need to get that Zscaler connection restored.

3.  Our firewall solution is pretty robust - we have a lot of rules defined (that's a whole separate project.  Cleanup.)  The firewall has an IPS blade that receives signature updates from the vendor.  However, the network team has not implemented the signatures "because it's too hard / will block too much (!) / might cause a load on the firewall."  I want to come up with an automated solution where we can "auto-approve" most signatures.  For example, it would be great to come up with a policy where all Critical and High signatures get applied automatically.  Further, anything else that has a high confidence and low to medium impact we would apply as well.  The rest we can look at.  It would be a start at keeping the IPS in tune with current threats.

4.  Our GPO is used mostly for creating accounts and putting those accounts into business groups.  It is not really used to enforce security controls.  As such, there are a bunch of low-hanging-fruit controls we could implement without causing much pain.  Controls like locking screen savers, account lockout policies, and some password policies would be easy wins.

5.  We have an AV solution; it updates, and it appears to do its job.  However, we don't have scheduled scans turned on, because users complain.  I suspect the AV will miss things without a scheduled scan looking for them.  I've already piloted turning on scheduled scans with a group to see what the real issues are.

6.  Our firewall and managed SIEM do a great job of alerting on known threats.  Our process to block some of those known threats is manual, though.  When we get an alert, we have to research the activity, then add the source address to a blocklist.  Manually.  There has to be a method to automagically block those sources on the first "malicious" event.  We need to turn this on.

7.  One of our FireEye appliances was taken offline due to the move.  We need to get this appliance back up and running.

I feel this list is simple enough; completing these items will raise the security posture of the organization without much cost.  We'll see.

Thursday, December 4, 2014

EMET 5.1 - Windows 7 64bit - IE 11

Our user machines are deployed with Windows 7 64-bit and IE 11 installed.  I notice that when I go to sites that check the browser, they respond that "browsers less than version 8 are not supported," or words to that effect.  Usually, the browser works fine and there are no issues.  I have been trying to get Microsoft's EMET to work with this configuration since version 4.0, and I have not been successful: Internet Explorer crashes with EMET running.

I read that a new version, 5.1, might fix this.  After setting up EMET, and tweaking it to my environment, I checked Internet Explorer, and it still crashes.  Are there fixes or workarounds for this issue?

Wednesday, August 20, 2014

Finding Users Who Use the Conference Room Computers as a Proxy to Surf

I received an interesting alert today indicating that a host in a conference room was attempting to reach out to a site hosting an exploit kit.  This is not the first alert I have received on this machine, so I was a little puzzled.  So, I went to the machine to remediate, and noticed that the AV software on the machine had blocked the connection attempt.  I ran both AV and Malwarebytes to ensure nothing was found.

The machine was clean.

Just before I logged out, I noticed that Windows Update had run and needed to restart the computer.  As I clicked to restart, a warning appeared that other people connected to the machine would lose their connection.  Hmmm....what's really going on here?

A little digging showed that you can find out who is/has logged into a machine via RDP by examining the Event Logs.  Open Event Viewer, and navigate to:

Applications and Services Logs -> Microsoft -> Windows -> TerminalServices-LocalSessionManager -> Operational.

There you will find events for who logged in, with what account, and from what source.
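If you export that log to XML (for example with wevtutil), the same information can be pulled out with a short script.  Here's a sketch that grabs the user and source address from session-logon events; the element names below are assumptions based on how these events typically export, so verify them against your own log:

```python
# Hedged sketch: pull (user, source address) pairs out of RDP logon events.
# EventID 21 is the "session logon succeeded" event in the
# LocalSessionManager Operational log. The XML layout here is an
# assumption -- check it against your own exported events.
import xml.etree.ElementTree as ET

def strip_ns(tag):
    """Drop any XML namespace so we can match on local names only."""
    return tag.split('}', 1)[-1]

def rdp_logons(xml_text):
    """Return (user, source_address) for each EventID 21 in the export."""
    logons = []
    root = ET.fromstring(xml_text)
    for event in root:
        # Flatten each event's elements into a {local_name: text} dict.
        fields = {strip_ns(el.tag): (el.text or '') for el in event.iter()}
        if fields.get('EventID') == '21':
            logons.append((fields.get('User', '?'), fields.get('Address', '?')))
    return logons
```

Point it at a wrapped export (wrap the events in a single root element if wevtutil gives you a bare stream) and you get a quick who/where list without clicking through Event Viewer.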

Now to go find some users......

(And yes, we need to fix our policy on conference room computers...but that's a battle for another day.)

Friday, May 16, 2014

Finding a Specific Microsoft Patch on a Host

After the Word (.rtf) 0-day was announced at the end of March, we turned on an alert to let us know when an .rtf file was delivered to the company.  Until the patch was applied, we actually blocked the incoming mail, inspected it, and if it was clean, we allowed it to reach its destination.  After the patch, we just alerted on the incoming mail. 

It's been a couple of months, and we are still getting the alerts.  Before I turned off the alerts, I wanted to ensure that the patch was on my host.  A quick command I ran to look for the specific patch:

wmic qfe | find "KB2953095"

It seemed to work ok.
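If you need to check more than one KB, the same output can be parsed in a few lines of script.  This is just a sketch: the parse is a simple token search mirroring the find command above, and the sample gathering step is Windows-only:

```python
# Sketch: check captured "wmic qfe" output for one or more KB numbers.
# On a Windows host you'd gather the output with something like:
#   out = subprocess.run(["wmic", "qfe"], capture_output=True, text=True).stdout
# The parse below is just a token search, mirroring:
#   wmic qfe | find "KB2953095"

def installed_kbs(qfe_output):
    """Return the set of KB identifiers found in wmic qfe output."""
    return {tok for line in qfe_output.splitlines()
                for tok in line.split() if tok.startswith('KB')}

def has_patch(qfe_output, kb):
    return kb in installed_kbs(qfe_output)
```

PowerShell's Get-HotFix cmdlet is another option for a single lookup, if you'd rather not shell out to wmic at all.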

If there are better/easier ways to do this, leave a comment.

Somewhat off-topic....I can't believe the number of people that still send documents as .rtf.  Why not just use Word?  Or a text document?  The number of incoming .rtf documents was way higher than I would have guessed.  Most were resumes or travel booking documents.

Monday, May 12, 2014

SANS SIFT 3 and the Desktop Share

I've had the new SIFT 3.0 downloaded for a while, but I haven't been using it as much as I would like; I've been using the older 2.x version.  One of the main reasons is that the 2.x version of SIFT had a desktop shortcut that took me directly to a directory on the host OS.  This is missing in the 3.0 version of SIFT.  I fully admit, I don't know Linux as well as I know Windows.

Reading up on the issue, I found that the host OS share should be mounted in the guest at mount_points/hgfs.  I had that directory, but nothing was populated there.  And, in the Virtual Machine Settings, I had Shared Folders set to Always Enabled.  Still nothing.

On a reboot, I noticed that there was an update to VMware Player.  I updated and checked the mount_points directory, but still nothing.  One last Google search suggested running vmware-config-tools.pl.

Sure enough, after answering the questions, that did the trick.  Now, in the mount_points/hgfs folder, I see a subfolder for "C".  Bingo.

Now I have to get used to Unity and finding what I used to be able to find in SANS SIFT 2.x.

If anyone else has tips on making that transition, feel free to leave advice in the comments.

Friday, May 9, 2014

Finding Inactive Accounts

The SANS Top 20 Controls has a control named Account Monitoring and Control.  Within that control is a Quick Win:  "Ensure that systems automatically create a report on a daily basis that includes a list of locked-out accounts, disabled accounts, accounts with passwords that exceed the maximum password age, and accounts with passwords that never expire.  This list should be sent to the associated system administrator in a secure fashion."

We don't have an automated report of those types of accounts, and quite frankly, we have very poor visibility into account control.  Coming from a DoD environment, I'm not used to having such lax controls.  Slowly, I'm starting to push the company forward, but it is taking time.

My first thought was to look at the inactive accounts.  I figured that these accounts would be low enough of the low-hanging fruit to start with, and here's how I have gone about finding them.

(Note: I have created a master script that will do more than what this post details...I'm only describing inactive accounts at this time.)

1.  This command is in a batch file:  dsquery user -inactive 4 -limit 3000 > accountout.txt

Call the output file what you like.  The -inactive 4 parameter tells dsquery to look for accounts that have been inactive for at least four weeks.  I picked four to start with, as I realize that we have users that travel extensively.  My hope is that once we manage the output, I'll be able to lower that number.

2.  I took the output of the file, and copied it to Excel.  From there, I went to Data>Text-to-Columns in order to break up the data nicely.

3.  Column 2 seemed to be where I could differentiate between user and non-user accounts.  I filtered on just user accounts and copied that to a new sheet.
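The Excel steps above could also be scripted.  Here's a minimal Python sketch that splits the quoted distinguished names from dsquery's output the same way Text-to-Columns does; "OU=Users" is a made-up placeholder, so substitute whatever container actually distinguishes user accounts in your directory:

```python
# Sketch: split dsquery user output (one quoted DN per line) into fields,
# then keep only DNs whose second component matches a user-account OU.
# "OU=Users" is a placeholder -- substitute your own container name.

def parse_dns(dsquery_output):
    """Yield each DN as a list of its comma-separated components."""
    for line in dsquery_output.splitlines():
        line = line.strip().strip('"')
        if line:
            yield line.split(',')

def user_accounts(dsquery_output, user_marker='OU=Users'):
    """Keep only DNs that sit in the user-account container."""
    return [dn for dn in parse_dns(dsquery_output)
            if len(dn) > 1 and dn[1] == user_marker]
```

(DNs with escaped commas in the CN would need a smarter split; this mirrors the quick-and-dirty Excel approach.)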

My results were staggering.  There are way too many accounts.  My next step is to find or create a process to validate that these accounts are a) legitimate, and b) truly inactive.  Spot checking a bunch of these users revealed users that are contractors.  And, if I have to guess, they are no longer with the company.  Prime targets to attack - which is why they should be disabled or deleted.

Once that's done, I'll need to automate the process and schedule it to run weekly or so.  As for locked-out, disabled, and password length checking...those will be added in time.

Thursday, April 17, 2014

Letter to Management

This letter was posted today....and it could have been sent to our management.  (It's not.)  Many of the points echo exactly what is happening here.  I would say, the biggest excuse I hear is that management does not want to disrupt the corporate culture in implementing security controls.

I fear that a breach or severe incident will be the catalyst for change and implementing controls.  Yes, I've had many small wins, but there is lots to do.

Wednesday, April 16, 2014

humans.txt Appearing in our Firewall Logs

This morning I came in to work figuring I would continue analyzing our infrastructure for the Heartbleed bug.  We seem to be fine...this is a case where, fortunately, we got lucky because we have been using such an old version of OpenSSL.  Which means we're probably vulnerable to a whole host of other vulnerabilities, but we appear to not be vulnerable to Heartbleed.  Which is a good thing.

Early on, a co-worker came to me asking if I had seen the note from our DDoS mitigation provider.  It was the first such email to provide a source address for the attack.  The "attack" only lasted four minutes and, to me, was not much to worry about.  However, there was at least an indicator to look for in the logs.  I popped the source address into our firewall logs and got back 134 records, all targeting various servers of ours.

And here was the unique finding.  Every request string looked something like:
http://ourserver/some_bogus_directory/some_bogus_file.php?php121dir=http://www.google.com/humans.txt
I fully admit, I had never heard of a humans.txt file; I knew about robots.txt, but not humans.txt.  So I looked it up.  We don't use it.  Next, I fetched Google's humans.txt file to see what was in there.  Nothing untoward.

The best I can come up with is that this is some kind of remote file inclusion attack and the attacker is looking for vulnerable php servers.

I found a great site that had a little more info here, but their mitigation was in using .htaccess; we use our firewall.  I did not find much more information, so anyone that wants to shed a little more light on the subject, feel free to leave a comment.
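For anyone hunting this pattern in their own logs, the telltale is a query-string parameter whose value is itself a full URL.  A quick sketch (the log format is hypothetical; adjust the regex to however your firewall exports request strings):

```python
# Sketch: flag likely remote-file-inclusion probes -- requests where a
# query parameter's value is itself an http(s) URL, as in:
#   /some_bogus_directory/some_bogus_file.php?php121dir=http://www.google.com/humans.txt
import re

# Match "?param=http://..." or "&param=http://..." and capture the remote URL.
RFI_PATTERN = re.compile(r'[?&][^=&\s]+=(https?://\S+)', re.IGNORECASE)

def rfi_hits(log_lines):
    """Return (line, injected_url) for each line that looks like an RFI probe."""
    hits = []
    for line in log_lines:
        m = RFI_PATTERN.search(line)
        if m:
            hits.append((line, m.group(1)))
    return hits
```

The captured URL is useful on its own: attackers often reuse the same hosted payload across targets, so it makes a decent pivot point.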

Friday, March 28, 2014

Sophos Antivirus and EMET

In looking for mitigations for the recently announced Microsoft Word 0-day, I decided to install EMET on both my desktop and my laptop.  I fully admit, I'm not an EMET guru, nor do I know a lot about it.  I have found many sets of directions for EMET (a good one here), so installation was a breeze.  However, tweaking it is another story.

First, Firefox 28 wouldn't start.  So, I had to tweak the application settings for Firefox to find out which particular protection was preventing it from starting up.  (Turns out, it was ROP.)

Then, upon turning EMET loose, I received two "Quarantine Announcements" from our Sophos Antivirus.  The notices were for a buffer overflow in IE and Acrobat Reader.  The best I can tell from my analysis is that Sophos saw EMET protecting those applications and didn't know how to report it.  I asked our Sophos administrator if he had heard anything about Sophos and EMET, but he didn't know what EMET was.  I authorized the activity in Sophos and rebooted a couple of times to see if Sophos would report the activity each time I booted up.  So far, so good.

If I find out exactly how the buffer overflow was caught by Sophos, I'll update this post.

Monday, March 24, 2014

Python Links Updated

Just a quick note that I've updated my list of Learning Python resources after reading Harlan's great post.  My original (and now updated) post (and list) can be found here.

Thursday, March 20, 2014

Getting Started with Security Onion

After getting vague alerts from our DoS protection company, one of the network engineers and I decided we needed to gain more visibility into the network.  We want to better understand these events and make a decision as to whether or not they are truly incidents.  Further, after we get notification of an event, we want to find the traffic to study it.  Enter Security Onion.  This tool is awesome: we can run Snort, shoot the output to Snorby, and capture the data as well.  We're pretty sure we have a box capable of running Security Onion; it's more a matter of how much data we want to keep.  Right now, we have a 1+ terabyte drive doing the heavy lifting, and we're just barely making it before the purge job runs.

Our first shot at getting it up and running was fairly successful.  Data is flowing, and we saw some alerts.  Next on the agenda was to start tuning it so that we are not drinking from the fire hose.

And, now we've broken Security Onion.  We're not sure where yet.  Events are coming in.  Our sensor NIC has packets traversing it.  However, there's nothing showing up in Snorby.  So, on to more troubleshooting.  Fortunately, this Security Onion server is not production-ready.  We knew going in that we would have much tuning to do before we could truly rely on the output in a production environment.  The next step is to figure out what broke down and see what we can get back.

Wednesday, March 12, 2014

Finding hostnames on a Subnet

We have offices all over the world.  What I came to learn today is that there are two countries where we have very poor visibility into our own corporate networks.  To the point that I suspect they are not managed very well, if at all.  I know I chase down malware in a couple of those networks on a daily basis.

One of our admins asked me if there is a way to get all of the hostnames on one of those subnets where we don't have much visibility.  Nmap would have worked well, but I wanted to come up with a command that a non-technical person could run and send me the output.  So, using a little Command Line Kung-Fu, I came up with:

for /L %A in (0 1 255) do nbtstat -A "XXX.XX.XXX.%A">>hosts.txt
Substitute your subnet for the Xs in that command.

It worked like a champ.  I suspect there is an easier way to do this, but this was easy enough.
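The same sweep can also be scripted for later review.  Here's a Python sketch that walks a /24 and tries reverse DNS instead of NetBIOS; it only finds names where PTR records exist, so it's a complement to the nbtstat loop rather than a replacement, and the subnet shown is a placeholder:

```python
# Sketch: try to resolve a hostname for every address in a /24.
# Reverse DNS only works where PTR records exist; the nbtstat loop in the
# post queries NetBIOS directly. The subnet here is a placeholder.
import ipaddress
import socket

def subnet_hosts(cidr):
    """All usable host addresses in the subnet, as strings."""
    return [str(ip) for ip in ipaddress.ip_network(cidr).hosts()]

def sweep(cidr, resolver=socket.gethostbyaddr):
    """Map each resolvable address in the subnet to its hostname."""
    names = {}
    for ip in subnet_hosts(cidr):
        try:
            names[ip] = resolver(ip)[0]
        except OSError:
            pass  # no PTR record / host not resolvable
    return names
```

The resolver is injectable mostly so the logic can be tested offline; in practice you'd just call sweep('10.1.2.0/24') and write the dict out to a file.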

Thursday, March 6, 2014

Hunting for Zeus Throughout the Network

I average finding a little more than one Zeus infection a day.  I know the reasons: the root cause is that some major security controls are missing from the environment due to culture.  Adding those controls is a challenge and a long-term strategy.  We are in the infancy of a Security Awareness campaign that is just starting to teach people the dangers of clicking links in spam or falling for phishing.  Occasionally, the FireEye sensor will alert on someone clicking a Zeus link.  I suspect more people click the links than I am aware of.

Using Check Point's SmartLog, I've worked up a little query to help me spot some of the big outbreaks.  I grabbed the domains from ZeusTracker and built a mini-query (which I then pasted into the query bar).

dest:(domain or domain or domain or bizserviceszero.com or ....)
Periodically, I'll check the domains on ZeusTracker and run a diff to see what enters the list and what gets removed.  I know that there are better ways to do this, and I'd love to implement some of those methods.  High on my list is adding a Snort box, or even Security Onion.
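Both steps (building the dest: query and diffing successive ZeusTracker pulls) are easy to script.  A sketch, assuming one domain per whitespace-separated token in the saved lists:

```python
# Sketch: build the SmartLog "dest:(a or b or ...)" query from a domain
# list, and diff two pulls of the list to see what was added or removed.
# Assumes whitespace-separated domains; adapt to the actual tracker export.

def build_query(domains):
    """Join domains into the dest:(...) query pasted into SmartLog."""
    return 'dest:(' + ' or '.join(sorted(domains)) + ')'

def diff_lists(old_text, new_text):
    """Return (added, removed) domains between two saved pulls."""
    old = set(old_text.split())
    new = set(new_text.split())
    return sorted(new - old), sorted(old - new)
```

Run the diff on a schedule and you only have to re-paste the query when something actually changed.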

A small win for the day, but at least I can find these machines.  Hopefully, when I've built up some metrics, I can support changing the environment, and use the number of infections cleaned up as the driver.

Sunday, March 2, 2014

RSA Conference: Friday

I woke up on Friday to cloudy weather.  With an afternoon flight, I was on the fence about going to hear more talks.  I figured I would wait until after a shower.  Upon getting out of the shower, it was POURING out.  I'm glad California was getting the rain it needed, but...I didn't want to walk in it.  However, by the time I got dressed, it had stopped raining and actually looked like it was lightening up.  I decided to chance it for one more talk.  I probably could have gone to two...but there was no way I wanted to cut it close at the airport.

The talk I went to was:
  • Operation Olympic Games is the Tom Clancy Spy Story that Changed Everything - by Rick Howard.  Wow, this was a great talk. I'm looking at my notes, and I see that I stopped somewhere after the first ten or fifteen minutes.  It was that good and engaging.  Rather than tell the technical story of Stuxnet, this talk discussed the history of the operations and the planning that went into it.  Further, Rick put forth some interesting theories that certainly have merit.
And with that I bid the conference goodbye. I fully admit that I wasn't sure what I would get out of it, and expectations were exceeded.  I've written the dates down for next year, and we'll see what happens.  As I write this, I'm starting to plan on how I'll use and implement some of the notes and ideas that were generated at the show.

Friday, February 28, 2014

RSA Conference 2014: Thursday

I woke up and it was absolutely pouring out; I thought it was going to be a repeat of yesterday.  Fortunately, by the time I started walking down the hill towards the Moscone Center, it had stopped raining.  And, while eating breakfast, I noticed the sun coming out, a welcome addition.  Breakfast is served 7-8 in the morning.  The first session I had scheduled was at 9:20, but I thought to myself, I don't want to waste the time, so I decided on an 8:00 talk.  I'm glad I did.  Here are the talks I went to today:
  • Cloud Ninja: Catch Me If You Can - by Rob Ragan and Oscar Salazar.  Initially, I had this time slot open, but at the last minute I decided to pick a talk and go.  I'm glad I did.  This talk was awesome.  Initially, I thought it might be neat to hear a session with a little offense to it, seeing as how I mostly focus on defensive security.  But, as the talk focused on (ab)using free trials of companies' software to build a botnet, I realized that there were dire implications for the company where I work.  This was a great talk that gave me information to go home and battle the developers.
  • Keeping Up with the Joneses: How Does Your Insider Threat Program Stack Up - by Dawn Cappelli and Randall Trzeciak.  Probably of all the talks I scheduled myself to see, this was number one.  I have their book, so it was great to hear Dawn and Randall talk.  Of course they backed up their research with plenty of numbers and examples.  They gave great advice on building and working an Insider Threat program.
  • The Future of Exploits, Developing Hidden C&C and Kittens by James Lyne. I picked this talk as I wanted to hear a talk by one of our company's vendors and I suspected it might get a little deep.  It didn't get too deep, and I'll tell you, I've never laughed so hard in a conference talk.  A great talk, kept light, with lots of great information.  And, now I've learned a great little story to explain buffer overflows.
I did not attend the Keynote talks today, instead I took a walk up to Fisherman's Wharf to see the Rock, seals, and a tour of the USS Pampanito.

I did attend the Codebreakers Bash, which was really well done.  They gave out these blinking LEDs, and now my hotel room has become a disco.  I'll have to cover them before going to sleep.

I fly out tomorrow afternoon.  So, I'm on the fence about going to a talk tomorrow.  I'm tempted to go see one more.  Probably the decision will be made by what time I get up.  I will miss the keynotes tomorrow, and that means missing Stephen Colbert.  But I think I'll be ready to get on a plane.

Wednesday, February 26, 2014

RSA Conference 2014: Wednesday

In getting yesterday's post up, there were a couple of things I forgot to include: some general thoughts on the conference.  First, there is usually twenty minutes (or more) between sessions.  So far, I've found this to be ample time to get from one track to another...and that includes going between West and one of the other buildings, like North.  That even held true for today, when it rained.  I noticed today that in all talks you can hear the jingling of the badge holders - it reminds me of the clacking of poker chips in a poker room - and ultimately, it's white noise.  Pro Tip:  If you are sitting in the front of a session, be careful with what you are surfing on your laptop.  Screens project more than you think.  Finally, one thing that irks me is the session attendees who have to take a picture of EVERY slide with their iPad.  Really?  In one talk yesterday, I noticed a presenter spotted someone doing that, and I got the feeling he varied his pace just to throw the picture-taker off.

Today was another busy day.  Here are the talks I went to:
  • Hacking Exposed: Day of Destruction by the CrowdStrike guys, George Kurtz and Dmitri Alperovitch.  This was an awesome talk, where they literally destroyed some computers.  Yes, I think a couple were VMs, but they bricked at least one laptop.  And they showed how malware could literally fry a machine.  The question I didn't get to ask:  could you do that to an airplane, or a hospital?  The consequences would be dire.
  • Gumshoes - Security Investigative Journalists Speak Out - Dan Hubbard from OpenDNS moderated a panel of Brian Krebs, Nicole Perlroth, and Kevin Poulsen.  Again, this was another really great talk, which I selected because I follow Brian Krebs' and Kevin Poulsen's blogs religiously.  I hadn't heard of Nicole's work before, but I just added it to my feed reader.  Lots of great stuff was discussed.
  • Using Data Breadcrumbs to ID Targeted Attacks - Dan Hubbard.  This was only a twenty minute talk, and I enjoyed it.  It gave me some ideas to take back to the mother ship.
In the afternoon I went to the keynote talks.  Already, I look forward to tomorrow's talks.

RSA Conference 2014: Tuesday

I didn't get a post up yesterday, and for that, I apologize.  Take my comments with a grain of salt, this is the first security conference I've been to.  And to be sure, I'm having a blast and learning a lot.  Already, I've written plenty of notes from some of the talks I've heard that I will take back to work with me.  To be sure, there's going to be work for someone, and much depends on some of the output of what I bring back.

On Monday, I registered.  The schedule seemed light, so I went to the Leadership talks.  They were ok, but nothing to really write home about.  I used the afternoon to catch up on work.  But I returned to the show for the welcome reception - really, free beer and food.  This was the first time I walked around the expo floor.  It is definitely a sight to see; I liken it to a country carnival, where the various exhibits compete for your attention.  I actually have an agenda of exhibitors I need/want to see for various reasons.

Today, though, was my first full day at the conference.  I got there early for the "continental breakfast," but to me it seemed more like a lunch.  Then, I got in line for the keynotes.  My impression of the keynotes was that I was at a concert, what with the lights and sounds.  William Shatner's intro was very well done.  I would have liked to have heard an emphatic denial regarding RSA's activity and the NSA, and the other talks were well done.  I had a meeting with one of our corporate vendors at noon, then it was a full afternoon of talks:
  • Establishing Trust After A Breach - I really thought this would be about how you work with your customers and the community-at-large after suffering a breach.  It wasn't.  To me, it was DFIR 101: what to do after a breach.
  • NSA Surveillance: What We Know and What to Do About it - this was my first time hearing Bruce Schneier talk and it was all I expected.  It was very good, but I follow his blog, so there wasn't TOO much new here.
  • The Seven Most Dangerous New Attack Techniques and What's Coming Next -  By far and away, this was the most popular talk I've been to.  The room was packed.  Period.  And with good reason.  Especially if you are a fan of the SANS guys.  I am.  More importantly, Ed Skoudis taught my SEC504 class.  I learned more from his office hours than from the actual class.  He's engaging, crazy smart, and gets his points across in an easy-to-digest manner.  This was definitely a great talk.
  • Use Anomalies to Detect Advanced Attacks Before Bad Guys Use Them Against You - there were a bunch of talks that I wanted to attend in this time slot, but I picked this one.  This was a great talk, a little in depth, but I took from it some nuggets of practical information that I will bring back to the company to implement.
After dinner, my co-worker and I went to the party given by OneLogin.  A good time was had.  And now I'm beat, especially after all the walking (and climbing Nob Hill AGAIN).  Sleep will come easy tonight.  I know I have a packed morning tomorrow, and I believe the keynotes are after lunch.  Plus, I have to make time for the exhibits.

Monday, February 10, 2014

Too Much Zeus, Need Recommendations

It's been a little over three months since I started the new job.  To be certain, I love it.  I'm really starting to get my arms around all that goes on (or doesn't) around here.  And while I know I have a daunting task in helping guide this place towards becoming more secure, I know I have already taken great strides in moving forward.

I fully admit that there are some pretty basic controls that are not implemented.  If I were an auditor from my previous contracting job, my head would probably explode at some of the findings here.  Some of them are THAT basic.  But, these decisions were made way in the past and, for the most part, fall into the politics/culture category.  It will take a while to get movement on those controls.  Or a decent-sized breach.

All of that said, I was looking through my incident notes for the past month (or so).  Looking at the fires I put out on a daily basis, I see that I work to eradicate at least one Zeus-infected host a day.  That's an average.  I've given up on remediating the hosts; I send the IP to our helpdesk and let them get it off the network and reimaged.

In light of the controls that need grassroots work, I'm looking for a solution that I can dump on the client hosts to help combat zero-days, attachments, etc.

One recommendation I have received so far is Invincea.

Are there any potential solutions I should be aware of?

Thursday, January 30, 2014

A Day of Updates: both Nessus Client and Burp

It just happens to be one of those days where everything I use has to be updated.  It started with Nessus: when opening my client, I was prompted to download and install the new client.  Nessus has moved from 5.2.4 to 5.2.5.

Later, it was on to auditing a small application, and that meant firing up Burp Suite.  Burp notified me that it needed to be updated from 1.5.20 to 1.5.21.

So, if you are using those apps, now is the time to update.


Friday, January 17, 2014

PhishMe and Sophos Enterprise

We're starting up a security awareness campaign where, in part, we will be using PhishMe's service to phish our employees; falling for the phish will lead to training and education.  PhishMe provided us with a list of all the domains they own so that we could white-list them in our proxies.  I tested them all, but there were two or three domains I still couldn't reach.

Sophos Enterprise provides a method to explicitly allow connections to specific web sites even if the Sophos proxy would normally block them.  (Exceptions can be added in Web Control, on the "Website Exceptions" tab.)  Pretty simple.

However, after adding the domains to the exception list, there were still two or three domains I could not reach.  I didn't want to just exclude those domains from scenarios, as I felt it might limit my choices.  I found that the Anti-Virus and HIPS Policy has a section that addresses domain blocking...I believe this applies when the AV thinks there is malware on the page.  If you view your "Anti-Virus and HIPS Policy," you will see a section mid-way down the screen titled Web Protection.  If it is on, click the Authorization button at the top of the screen.  Go to the Websites tab and add the domains you want to white-list.  Bear in mind that the AV will not scan these domains.  Once you hit OK, you should be able to browse to the domains.