Tuesday, December 23, 2008

SQL Server scans

A new lesson learned. At least, I'm filing it that way so I can jog my memory for future testing engagements.

The other week we tested at a client that claimed to have one Oracle database, running on a Windows 2003 server. It turned out they had another Oracle database, sitting on a Solaris machine. (The IT department didn't know about the database because they didn't administer the machine....a whole other issue.) That wasn't such a big deal, as we had the scripts to test the database with us. However, while a co-worker was interviewing the DBA, he happened to see an MS SQL Server instance on the DBA's monitor. When we got back to the office, I pored through the vulnerability scans looking for SQL Server instances. I found five. Three were on client XP workstations; if I had to guess, those instances probably came bundled with specific software that was installed. A whole other issue for these networks. However, I found two instances residing on servers in the data center. Knowing this client, I think they were simply forgotten, or not included because the databases were not part of a web application. But they definitely should have been scanned, and we noted that in our initial documentation.

So, after each testing engagement, I'll be searching the vulnerability scans for SQL Servers of any type (for databases not mentioned to us), both in the data center and on the client LAN. And I'll probably do this early, so we can scan/test the databases the next day.
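As a memory aid, here's a minimal sketch of the kind of search I have in mind, assuming the scan results have been exported to CSV. The column names ("IP Address", "Service") and the keyword list are my own assumptions; Retina's actual export format will differ.

```python
import csv

# Assumed column names; a real scanner export will likely differ.
HOST_COL, SERVICE_COL = "IP Address", "Service"

def find_sql_servers(csv_path):
    """Return a dict of host -> database-looking services found in a scan export."""
    keywords = ("sql server", "mssql", "mysql", "oracle", "postgres")
    hits = {}
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            service = row.get(SERVICE_COL, "").lower()
            if any(k in service for k in keywords):
                hits.setdefault(row[HOST_COL], []).append(row[SERVICE_COL])
    return hits
```

Something this simple would have flagged all five instances, workstation and data center alike, the same day the scans finished.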

Thursday, December 18, 2008

All testing will be much more stringent now

When testing a site, we are either testing with the intent of writing an initial security assessment report, or performing final testing to complete a DIACAP package. For one of my first engagements, we were testing for an initial assessment, so we grabbed data from a representative sample of like machines. However, just this week, most likely due to external politics that I am not privy to, the decision was made to create a final DIACAP package from the data collected. Obviously, the customer is not going to get the best picture of their security posture. And highly important issues that would have been fixed after an initial security report will instead just get reported.

It's been highly frustrating for me, to say the least.

However, the lesson learned is that from now on, I will test every system as if I am testing for a final DIACAP package, even if the outcome is an initial report.

Tuesday, December 16, 2008

When to perform the interview during an accreditation

Normally, when accrediting a system, there is a team of us security warriors, each performing a myriad of tasks. The interview of the Sys Admins occurs whenever any one of us has a spare hour or two to ask the "non-technical" questions and go over documentation and process. However, for the engagement I am currently working on, my partner suggested to our client that we perform the interview FIRST, to get the interview (and pain) out of the way and leave the rest of the engagement for testing the systems.

One lesson I think I've learned from this move: By performing the interview FIRST, we find some issues/areas where we may want to take a closer look. The customer may have inadvertently said something that gives us reason to look at a particular issue. Or, they may say something that leads to a finding we might never have found. And, if the interview is conducted late in the engagement, there might not be time to further investigate.

I'll know soon enough, as we cast our eye about the network and systems starting tomorrow.

Microsoft IE and the out-of-band patch

By now, the news has made its rounds that Microsoft is releasing an out-of-band patch for the zero-day exploit regarding IE. What I find interesting is that with so many people/news outlets/blogs/etc. suggesting a switch AWAY from IE, it seems MS was forced into pushing a patch out early. If the exploit had not impacted IE, would the patch have come so quickly?

Just musing....

Wednesday, December 10, 2008

Be extremely detailed when taking notes

Another lesson learned here. I'm going through notes that I took on some of the systems we worked on, and I'm finding that my notes are not as detailed as I would have liked. For example, at one point I wrote: "starting SRR script on first Solaris machine." It would have been better if I had documented the full version number of the operating system.

Going forward, I want to remember to write down: machine name, IP address, OS and version number, and specifically what is being done.
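A tiny helper could even enforce the habit. This is just a sketch of the note format; the function name and fields are my own invention, not any tool we actually use:

```python
import datetime

def make_note(machine, ip, os_name, os_version, action):
    """Format a timestamped engagement note capturing the details worth keeping."""
    stamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M")
    return f"[{stamp}] {machine} ({ip}) {os_name} {os_version}: {action}"

# e.g. make_note("sol01", "10.1.2.3", "Solaris", "10 8/07", "starting SRR script")
```

The point is less the code than the template: every note gets a timestamp, a host, an exact OS version, and the action taken.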

Monday, December 8, 2008

Retina and scanning network sizes

I just got back from my latest testing engagement. Overall, it was a good trip. The site was not as prepared for our testing as they thought they were, and as such, they were not happy with our results. Suffice it to say, they have some work to do. However, it seems the team hasn't been together that long, so there is plenty of upside, and I'm sure they'll come together.

And again, for the second straight trip, this organization had "problem children" that seemed to flaunt the fact that they were not going to play by the established rules. Of course, it will all come out in the documentation.

The lesson learned from this trip: break up the Retina network scanning of clients into subnets. There are over 500 hosts in my .rdt file. It's my fault; I let the IASO perform the scan (although I was watching over his shoulder). But that file is huge, and it's nearly impossible to work with in Retina: it takes forever to load and to produce reports. Scanning by subnet will keep the result files to a manageable size.
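The bookkeeping for the split is simple. As a sketch (plain Python with the standard ipaddress module, nothing Retina-specific), grouping a flat host list into per-/24 scan batches looks like this:

```python
import ipaddress
from collections import defaultdict

def group_by_subnet(hosts, prefix=24):
    """Group host IPs by their /24 (by default) subnet, so each subnet
    can be scanned and saved as its own, smaller results file."""
    groups = defaultdict(list)
    for h in hosts:
        # strict=False lets us pass a host address and get its containing network
        net = ipaddress.ip_network(f"{h}/{prefix}", strict=False)
        groups[str(net)].append(h)
    return dict(groups)
```

Running one scan per resulting group, and naming each output file after its subnet, would have turned one unmanageable 500-host .rdt file into a handful of small ones.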

I have another trip scheduled in two weeks, so I'll get to put it all into practice.

Saturday, December 6, 2008

Zone Alarm Update

I just downloaded and upgraded Zone Alarm. There was a pop-up on the screen that said Zone Alarm had fixed a security hole. It looks like I've been updated to: version, TrueVector, and Driver version So far the biggest difference I've seen is cosmetic.

Wednesday, December 3, 2008

SANS Forensics blog and hidden processes

I just discovered a new blog (to me). sansforensics.wordpress.com is a great blog, from SANS, dealing with forensics. And, this post is just to jog my memory as to where to find a post on live-system memory forensics.

Here's their post on finding hidden processes.

Monday, December 1, 2008

The AutoRun issue

I'm in San Antonio, getting ready for a testing engagement. While the flight in was a little bumpier than I would like, it was great landing in sunny and warm(er) weather. I spent about an hour at the Alamo, and I wish I could have spent more time there. Very interesting; and I admit, I remember reading about it in high school, but I really didn't know the story.

Anyway, I'm listing a couple of links regarding the AutoRun situation. (Mostly because I'm exhausted and I really need some sleep...I need to re-read these articles.)

ThreatExpert has a post on Agent.btz and the Pentagon

ZeroDay has two posts on the issue:
- a post on affected systems in Afghanistan
- a guest post with a little historical perspective

LA Times article