Wednesday, June 25, 2014

We've Found No Evidence...Means What Exactly?

For its part, LexisNexis confirmed that the compromises appear to have begun in April of this year, but said it found no evidence that customer or consumer data were reached or retrieved, via the hacked systems. The company indicated that it was still in the process of investigating whether other systems on its network may have been compromised by the intrusion.
             Source: krebsonsecurity.com

Shucks, I can't find any evidence!
I think it's funny how organizations carefully craft public statements in the wake of a data breach or public exposure that their systems were compromised.  For instance, the statement above, "no evidence that customer or consumer data were reached or retrieved," appears to be put forth in an effort to ease customer concerns.  However, as an information security professional, my first thought was to analyze that statement through my jaded security lens.

Thinking critically though, "no evidence was found" could mean quite a few things:

1) Logs were reviewed and no exfiltration of sensitive data was observed
2) Systems were scanned with anti-virus and anti-malware software and were reported clean
3) No consumers have complained about their information being compromised (which typically surfaces when a consumer has a personal account hijacked and/or finds unauthorized charges on credit card or banking statements)

Log Analysis

As someone who troubleshoots various networking issues, when someone tells me they've reviewed the logs and found no evidence, my first thought is: who QC'd the review?  In complex security and/or networking issues, I've found it's extremely helpful to have someone else either assist or vet your work.  We all bring varying degrees of experience and talent to the tedious work of analyzing log files, packet traces, etc., so one person can easily notice something of interest that another would miss entirely.

Are there certifications in the area of log analysis?  To my knowledge, there's no industry-accepted certification for log analysis the way there is for CISSP, Net+, or CEH.  So, at most that leaves either a vendor-specific certification, training, and/or experience.

With vendor-specific training, you learn the basics of the device/software and maybe there's some advanced training that takes you on a deeper dive of the product, but most of the time such training doesn't teach someone how to analyze information and properly interpret the results. 

Okay, what about training that isn't vendor-specific?  Now we're getting somewhere.  The type of training here would be akin to what an intelligence analyst has to do, which is crawl through vast amounts of collected information to find connections and correlations.  Academically, it's tough to find such training.  I did find a couple of related courses on Coursera:

     Reasoning, Data Analysis and Writing
     Statistics: Making Sense of Data

Lastly, there's experience, where over time you've learned to recognize such connections and correlations.

When companies suffer a data breach, of course junior analysts can assist, but the effort should be overseen by someone who has had analyst training and/or years of experience.  How much experience?  The CISSP requires five years of experience and an endorsement, so perhaps something along those lines could be established for log analysis.

Scanned Systems

Next, we have the assumption that machines were scanned and no evidence of malicious software was found.  How many machines were scanned?  With what tools were they scanned?

There are numerous reports that signature-based anti-virus, although still necessary, is only the entry point of an anti-malware defense.  It should be supplemented by software that performs heuristic scanning as well.  But lately, as Brian Krebs also reports, an entire underground industry has developed around obfuscating malware payloads so they aren't recognizable.  So, if scanning systems for viruses is now as basic an action as locking your front door, something more is needed.

This is an area where companies like Carbon Black, CrowdStrike, and Mandiant are making names for themselves.  Although their tools are reactive in nature, they are oriented toward Incident Response and identifying the method of exploit and what systems and data were touched.  Combining the output from those tools with the log analysis above should provide a picture of what systems and data were affected.

Reporting Standards

If a company has performed the log analysis and has the scanned system output, the next course of action would be public disclosure.  This presents an issue, though: if criminal activity is suspected, law enforcement gets involved, and the general rule is that information about an ongoing investigation is prohibited from disclosure.  However, just saying "no evidence found" isn't enough.

To address the issue on both fronts, disclosure standards should be defined so companies can incorporate those standards into their incident response plan.  By the way, if you work for a company that doesn't have one, you might want to start building one now.

The U.S. does have data breach disclosure notification laws, but nothing specifying how the information should be presented to the public or what details can be included when law enforcement is or isn't involved.  Have a look at the link above and you'll see most states have individual statutes specifying what constitutes sensitive data and when individuals should be notified.  And even within that context, states handle data breach disclosure differently.  A federal law would provide clear guidance for states to incorporate into their own codified laws and for companies to use when these events occur.

Maybe then we can get clarity on "no evidence was found."

Don't worry.  Every company says they didn't find anything...



Thursday, June 12, 2014

BackTrack 5r3: Make it a Team Effort


Background

In October 2012, I was prepping for our finals round in the Global CyberLympics competition (where we took 2nd place).

From previous practice sessions, my team and I agreed the best way to distribute information rapidly (and visually) among team members was to use Armitage's Team Server.  At the same time, a couple of team members had custom tools they wanted to use, which presented a problem: running those custom tools would not feed the results into the central Team Server instance.  So, we needed a way for them to retain the ability to use their own stuff, but still share that information with everyone.  Our solution was to identify one team member to not only run Armitage Team Server, but also make the Metasploit database externally accessible.  Side note: for those not already familiar, the Metasploit Framework (msf) does not have a "free" GUI.  Armitage was developed by Raphael Mudge to provide a GUI as well as enable easy team operations.  Read more at Raph's Armitage website: fastandeasyhacking.com.

I also figured I'd change the default listening port and default database credentials, since the database would be externally accessible.  You can run Metasploit with different database management systems, but this article only covers running it on PostgreSQL.

Changing the default port

I decided to make the listening port a higher port, but because I was lazy, it went from 4444 (the default) to 44441.

NOTE: the default port varies depending on what BackTrack distro you're using.  In the one I downloaded from the BackTrack Linux site, the default port was 4444.  In the screen caps below I'm using a BT distro from Black Hat.  To find out what port yours is listening on, run this command:
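Something along these lines should do it (assuming netstat is available and PostgreSQL is running):

     netstat -antp | grep postgres    # shows the TCP port the postgres process is bound to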

To change the port bindings, there are two places where this can be modified: the postgresql.conf and setenv.sh files.  Both files are in the directory /opt/metasploit/postgresql/, with postgresql.conf located in the ../postgresql/data directory and setenv.sh located in the ../postgresql/bin directory.

Many configuration settings are available in the postgresql.conf file, including an option to change the default port:
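In postgresql.conf the parameter is commented out, so the entry looks something like this (the number itself varies by distro):

     #port = 4444        # commented out; the effective value comes from setenv.sh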


You can either change the port here or, as a note at the bottom of the file advises, change the setting in the setenv.sh file instead.  Since the port parameter is commented out in the postgresql.conf file, I suggest only focusing on the port value in the setenv.sh file.  Change the PGPORT parameter to what you want:
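With the port I picked earlier, the line in setenv.sh ends up looking something like this (use whatever unused high port you prefer):

     PGPORT=44441        # new PostgreSQL listening port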


There's one other area we need to amend to reference the new port value, and that's the postgresql startup script.  This is located at:

/opt/metasploit/postgresql/scripts/ctl.sh


Change the port value to match what you set in the setenv.sh file.
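If you're not sure where the port appears in that script, a quick case-insensitive grep will point you at the line(s) to edit:

     grep -in port /opt/metasploit/postgresql/scripts/ctl.sh    # list lines referencing the port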

Last step - either reboot to restart the postgres process, or restart it manually.
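On my install, the bundled control script can handle this; something like the following should work (the restart argument is an assumption about your build, so fall back to a stop followed by a start if needed):

     /opt/metasploit/postgresql/scripts/ctl.sh restart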

IMPORTANT!  
Now that we've changed the PostgreSQL settings, we need to make the Metasploit Framework aware of it by changing the postgres_port value in the Metasploit properties file, located at:

/opt/metasploit/properties.ini
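Using my port value from above, that entry should end up looking like this:

     postgres_port=44441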

And we have to modify the database.yml file to reflect the new port value as well.  While we're there though, why not change the default database credentials?

Changing default credentials

Navigate to /opt/metasploit/config

You can change the database credentials (and the port) by editing the database.yml file and changing the relevant parameters under the production header.
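As a rough sketch, the production section ends up looking something like this.  The database name, username, and password shown here are the example values from the db_connect string at the end of this post, and the adapter/host/pool/timeout lines are typical defaults, so adjust them to match your build:

     production:
       adapter: postgresql
       database: msf3dev
       username: msf3
       password: 20394965
       host: localhost
       port: 44441
       pool: 75
       timeout: 5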



Once you change these settings and launch the metasploit framework console (msfconsole), enter the db_status command to verify database connectivity is successful.  If you see an error, you may have missed a step above.
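If everything lines up, the exchange should look something along these lines (the database name will match whatever you set in database.yml):

     msf > db_status
     [*] postgresql connected to msf3dev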


Modifying PostgreSQL: Listen externally

Change entries in

/opt/metasploit/postgresql/data/pg_hba.conf

The comments in the file explain how new entries in the access control list should be formatted:



When you scroll down, you'll see the lines specifying what connections the database permits.  NOTE: I've added the second line under IPv4.


You can permit access to all databases (the first all), from all users (the second all), on all addresses.  The line item I added allows external access to all databases, from all users, from any IP address that can reach this BackTrack instance, but you can make it more granular for tighter control by changing the database and user values to what you have in the database.yml file above.  Furthermore, you could lock down the subnet or add multiple line items to allow only your teammates to connect.
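For reference, the IPv4 section with a wide-open entry like the one I added looks roughly like this (md5 password authentication assumed):

     # TYPE  DATABASE  USER  ADDRESS          METHOD
     host    all       all   127.0.0.1/32     md5
     host    all       all   0.0.0.0/0        md5    # added: allow connections from any address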

Now we edit the file

/opt/metasploit/postgresql/data/postgresql.conf

Uncomment the line "listen_addresses".  You'll need to change listen_addresses to '*':
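After the edit, the line should read:

     listen_addresses = '*'        # default is 'localhost', which only accepts local connections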


Restart PostgreSQL or reboot and now you're listening for external connections!

End Result

Now you can accept external requests from others to log into your msf database, and the output from tools they run will populate your database.

To test the connection, connect from another BackTrack/Kali session via these steps:

  • Launch msf console
  • Type a command string similar to "db_connect msf3:20394965@192.168.229.133:7338/msf3dev" (a quick example session is sketched below).  Breaking that string down:
    •         db_connect, the msf command to connect to another database
    •         msf3:20394965, the username and password set in the database.yml file above
    •         192.168.229.133, the IP address of the remote instance
    •         7338, the port the database is listening on
    •         /msf3dev, the name of the database to connect to
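Putting it together from a teammate's msfconsole, the session looks something like this (IP, port, and credentials are the illustrative values from above):

     msf > db_connect msf3:20394965@192.168.229.133:7338/msf3dev
     msf > db_status
     [*] postgresql connected to msf3dev
     msf > hosts        # output from teammates' scans and imports shows up here
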
If you like this post and it works for you, or if you have any other related tweaks, please let me know in the comments!
