Archive for the ‘General Security’ category

Are Virtualized Systems More Or Less Secure?

May 18th, 2010

I’ve had the above question asked enough times that I felt it worthy of a blog post. While a few years back the answer may have been “less secure”, today the answer is “both”. I know, sounds like Chris being non-committal, but that answer really does most accurately describe the current state of the technology.

Virtualization Changes Everything

I’ve heard a few folks remark that virtualization is about to impact the industry the same way that the Internet did in the 90’s. To be honest, I think there is merit in that opinion. In the early 90’s most folks were running IPX, AppleTalk, NetBEUI and a plethora of other protocols on closed networks. By the end of the 90’s, most folks were running IP exclusively with connectivity to the entire world. The way we did business, as well as the way we applied security, completely changed over that decade. Both network administration and security skills that were cutting edge in 1990 were all but useless by 1999.

Virtualization is ramping up to have the same kind of impact on the industry. Deploying it requires a complete rethinking of how we apply security. Back in the 1990’s, admins who simply plugged into the Internet, without regard for how this would impact their network, got burned big time. We are lining up to see a similar outcome as folks adopt virtualization.

What Makes Virtualization Less Secure

The Achilles heel of virtualization is the software itself. We are trusting that software to keep guest systems isolated from each other, as well as from the host and/or hypervisor. There are two major problems with this expectation:

  1. No software is bug free
  2. Software can be misconfigured

A few years back Core Security showed they could break out of a guest and gain full control of the host OS. While a hypervisor is supposed to limit that type of exposure, we’ve certainly seen cases where even the hypervisor has been bypassed. We’ve even seen cases where software becomes exploitable only when run in a virtualized environment. These examples are a small cross section of the virtualization problems that have been discovered over the last few years. Google can give you a more complete list if you are interested.

So a prudent security professional is going to be cautious of blindly trusting software to be secure. The problem is vendors do not always take this same approach. Take VMware with their ESX (soon to be ESXi) product as an example. Many of us were flabbergasted when a VMware representative announced at CanSecWest that it was theoretically impossible to attack the ESX hypervisor. When we simply assume something is unbreakable, someone more creative is going to figure out a way to punch through.

One of my biggest concerns with ESX/ESXi is that VMware has designed it to be modular (via VMsafe). On the plus side, this means that outside vendors can create products that improve the hypervisor’s functionality and security. On the downside, it dramatically increases the chances of bad code being introduced that can compromise security.

We’ve seen a great example of this in the past. Marcus Ranum created the Gauntlet firewall, which at the time was one of the most secure and kick-butt security devices available. When three-letter agencies wanted the best security, they turned to Gauntlet. Marcus sold Gauntlet to Network Associates (which later became McAfee), who immediately started adding features. It was not long before a steady string of vulnerabilities was being discovered, each introduced by these new “features”. From there, the product lost its security cred and slid off the radar.

Now it is certainly possible to add features and keep things secure. The FreeBSD folks are an excellent example of how to do this correctly: to ensure security they maintain a very strict auditing process. Is it perfect? Absolutely not, but their auditing process has set the bar for secure software implementation. With any luck VMware will do something similar, but I have not heard any buzz about this being the case.

Getting Your Head Straight

OK, so we can’t blindly trust virtualization software to keep attackers at bay. We can however still take precautions to help minimize the impact if the worst does occur. One of the biggest steps you can take is to carefully consider which servers get hosted, and what other guest systems are permitted to run on the same box. The security zone concept used by network architects is just as applicable here.

A security zone is simply a collection of systems that share the same relative level of risk. For example, Web, name, and SMTP servers are usually all located on a DMZ, because they all share similar risk from direct attack. On the internal portion of the network, desktops are usually placed in a different security zone than the servers. This is because servers have little to no access to the Internet, while desktops are usually permitted to communicate with it directly, which places the desktops at higher risk of attack.

We can apply this same logic when implementing virtualization. A DMZ server and an internal server should not be guests on the same hardware (this includes sharing a disk array, not just a CPU). Doing so could allow an attacker to create an alternate route into our network. Rather than having to pass through whatever firewall, NIDS, or NIPS devices have been deployed on the wire, an attacker may be able to reach internal resources via the virtualization software itself. Is it an easy attack? Not from what we have seen so far. Functional exploits have been discovered, however, so why introduce unnecessary risk if we don’t have to?
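
To make this concrete, here is a minimal sketch (Python, with made-up host and guest names) of the kind of placement audit you could run against your inventory data. It simply flags any physical host whose guests span more than one security zone:

    # Flag virtualization hosts that mix guests from different security
    # zones. All host and guest names here are hypothetical examples.
    ZONES = {
        "www1": "dmz", "smtp1": "dmz", "ns1": "dmz",
        "fileserver": "internal", "hr-db": "internal",
    }

    # Guest-to-host placement, e.g. exported from your inventory tool.
    PLACEMENT = {
        "esx-host-a": ["www1", "smtp1"],   # OK: all DMZ
        "esx-host-b": ["ns1", "hr-db"],    # BAD: DMZ + internal on one box
        "esx-host-c": ["fileserver"],      # OK: internal only
    }

    def audit_placement(placement, zones):
        """Yield (host, zone set) for every host spanning multiple zones."""
        for host, guests in placement.items():
            zone_set = {zones[g] for g in guests}
            if len(zone_set) > 1:
                yield host, zone_set

    for host, mixed in audit_placement(PLACEMENT, ZONES):
        print(f"{host}: guests span zones {sorted(mixed)} -- separate them")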

By the way, these same security zone rules should be applied to your virtualized network gear. For example it is a bad idea to use the same physical switch to VLAN the DMZ and the internal network. I’ve seen a couple of clients get whacked that way.

What Makes Virtualization More Secure

Fortunately, from a security perspective, virtualization is not all bad news. In fact there are some very cool security practices you can apply in a virtualized environment that you simply cannot apply without it. This was one of the reasons we started using virtualization within the Honeynet Project as early as 2000.

One of the biggest security issues we face today is kernel-level rootkits. What makes this strain of malware so insidious is that it effectively turns the operating system itself into malware. This makes detection extremely difficult, as all security checks must pass through the kernel; if the kernel itself is compromised, we can’t rely on it to accurately report security information. We end up having to shut down the system, mount the drive on a known-clean OS, and perform our forensic checks from there. Of course the problem with this process is that it does not scale. If we have dozens or hundreds of servers, there simply is not enough time in the day to check them all properly.

As mentioned earlier, VMware is now permitting third-party vendors access to the hypervisor API via VMsafe. This permits access to privileged state information, such as memory and network traffic, for each of the guest OSes. By plugging into the hypervisor, some extremely cool security options become possible.

For example, let’s say a guest OS is attacked by a kernel-level rootkit. By analyzing guest memory, the rootkit can be detected from outside the virtualized operating system. When the checks are performed via the hypervisor, there is far less of a chance that a rootkit can stealth its activities and go undetected. There is simply no comparable option on a non-virtualized system.
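
VMsafe itself is a vendor API, so the snippet below is only a toy illustration of the underlying “cross-view” technique, with made-up PID sets standing in for real introspection data. The hypervisor’s view of guest memory cannot be filtered by a rootkit running inside the guest, so any process visible in memory but absent from the guest’s own reporting is immediately suspect:

    # Toy cross-view comparison. In a real VMsafe-style product, the first
    # set would come from walking kernel task structures in the guest's raw
    # memory (the hypervisor's view) and the second from asking the guest
    # OS itself (ps, Task Manager). Both sets here are made up.
    pids_in_memory = {1, 412, 977, 1301, 6666}   # hypervisor's view
    pids_guest_reports = {1, 412, 977, 1301}     # (possibly lying) guest's view

    hidden = pids_in_memory - pids_guest_reports
    if hidden:
        print(f"Possible kernel rootkit: guest is hiding PID(s) {sorted(hidden)}")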

Plugging into the API also creates new possibilities for dealing with encrypted traffic. When end-to-end encryption is employed (like a VPN), network-based checks of the application layer are easily bypassed. Previously, your only real option was to run agent software on the endpoint so security checks could be applied after decryption. The problem there, of course, is that if the agent is attacked, all bets are off. Again, by plugging into the hypervisor we are in a better position to safely scrutinize this data.

We are just starting to see new products that leverage the VMsafe API. Since all of these products are relatively new, the jury is still out on how effective they can be. Offerings run the gamut from replacing host-based firewall and IDS protection to full policy enforcement. It will be interesting to see how this product niche shakes out over the next year.

Summary

So as I mentioned at the beginning of this post, virtualization has the ability to make your environment either more or less secure, depending on how you deploy it. If you simply start running everything on a single box, you are probably going to get whacked. If you extend the best practices that have been developed over the years into the virtualization realm, and leverage some of the new security features that are being released, you can actually create a better overall security posture.

Day 2 Keynote

January 12th, 2010

Thanks to all who came out to the Encryption/DLP summit. Here are the slides from my keynote on day 2.

encryption-dlp-keynote

Poor Man’s DLP

January 11th, 2010

Greets all,

I’m in New Orleans at the SANS Encryption & DLP conference giving a talk titled “Poor Man’s Data Leak Prevention”. I promised the attendees a copy of the slides, so here ya go.

poor-mans-dlp

PDF of “Protecting Against Targeted Attacks” talk

October 14th, 2009

Over the next few weeks I’ll be giving this talk in a number of locations. For those who attended and requested a PDF version of the slides, here is the link I promised:  protecting-against-targeted-attacks-R2

Cybersecurity Act of 2009 In-Depth – Part 2

September 11th, 2009

In yesterday’s post I covered the first half of the Cybersecurity Act of 2009. Here’s the write up on the second half of the bill.

Section 13: Cybersecurity competition and challenge

As the name implies, this sets up funding for a series of competitions to help identify the best and the brightest.

(a) IN GENERAL- The Director of the National Institute of Standards and Technology, directly or through appropriate Federal entities, shall establish cybersecurity competitions and challenges with cash prizes in order to–

(1) attract, identify, evaluate, and recruit talented individuals for the Federal information technology workforce; and

(2) stimulate innovation in basic and applied cybersecurity research, technology development, and prototype demonstration that have the potential for application to the Federal information technology activities of the Federal Government.

No red flags here. Prizes cannot exceed $1M without checks and balances kicking in. Don’t get your hopes up. That’s for an entire event, not one specific prize.

Section 14: Public-private clearinghouse

This section seems pretty benign, till you read it closely. Here’s the opening section:

(a) DESIGNATION- The Department of Commerce shall serve as the clearinghouse of cybersecurity threat and vulnerability information to Federal Government and private sector owned critical infrastructure information systems and networks.

Yawn. I see this as something you cannot mandate. If you can provide useful information, users will seek out what you have to say. If you simply reprint what has already been released as open source, then my Google news feed will probably get me the info faster and with a better interface. It is easy to want to ignore this section based on this opening statement, but please read a bit further:

(b) FUNCTIONS- The Secretary of Commerce–

(1) shall have access to all relevant data concerning such networks without regard to any provision of law, regulation, rule, or policy restricting such access;

What??? This, to me, is the ultimate power grab. Any network or system that can be deemed “critical infrastructure” has to give the Commerce Department unfettered access, without regard to due process or the rule of law. “Relevant” is a highly subjective term that can be applied to anything.

So it comes back to that “critical infrastructure” designation, which, as we already discussed, is the judgment call of a single individual. Maybe Microsoft’s network should be deemed critical infrastructure, as they are the government’s primary desktop vendor. Perhaps Linux development servers should also be deemed “critical”, as servers, appliances, and embedded technology are based on this platform. What about anti-virus and firewall vendors who supply products to the government? Internet service providers servicing government networks? Telcos servicing government employees? Universities funded to develop cyber protection techniques? This can be an extremely slippery slope.

To me, this is probably the single most dangerous part of the bill.

Section 15: Cybersecurity risk management report

In short, this section requires the President to produce, within one year, a report on the feasibility of:

(1) creating a market for cybersecurity risk management, including the creation of a system of civil liability and insurance (including government reinsurance); and

(2) requiring cybersecurity to be a factor in all bond ratings.

This item could be taken in a number of directions. If they are smart, they will look at the feasibility of voiding end-user agreements so that software vendors must accept liability for security failings in their products. Without liability, vendors have little motivation to architect in a security framework from product inception. It is much easier and cheaper to glue it on after paying customers have already encountered problems.

Section 16: Legal framework review and report

This section calls for the President’s office to review existing cybersecurity laws regarding:

the Federal statutory and legal framework applicable to cyber-related activities in the United States

In short, this is a review to see if the laws are still applicable or need updating.

Section 17: Authentication and civil liberties report

Here’s the entire section:

Within 1 year after the date of enactment of this Act, the President, or the President’s designee, shall review, and report to Congress, on the feasibility of an identity management and authentication program, with the appropriate civil liberties and privacy protections, for government and critical infrastructure information systems and networks.

I’m not sure what to make of this section. It reads like they want a single sign-on solution for government networks. If that is the case, I don’t understand the “appropriate civil liberties and privacy protections” statement, which implies an application geared more towards the general public. The jury is still out on this one, as I have not seen any other opinions on it.

Section 18: Cybersecurity responsibility and authority

Here’s the section that everyone is freaking out about. The blurb:

The President–

(2) may declare a cybersecurity emergency and order the limitation or shutdown of Internet traffic to and from any compromised Federal Government or United States critical infrastructure information system or network;

Sounds bad, but think of it this way: when planes were crashing into buildings, the President ordered the grounding of all commercial flights. I doubt there was a specific law giving him that authority, but given it was an emergency situation no one argued the point or considered it an abuse of power.

I see this provision as being similar. If it is confirmed that attackers have taken control of the power grid and are now systematically shutting it down, no one is going to fault the President for requiring those organizations to isolate themselves from the Internet at large. It may or may not actually fix the problem, but it would be an expected defense posture. This would occur with or without this provision in the bill.

So to me this section is a lot of hoopla about nothing. Some of the previously discussed sections are far scarier.

Another interesting point in this section:

(5) shall direct the periodic mapping of Federal Government and United States critical infrastructure information systems or networks, and shall develop metrics to measure the effectiveness of the mapping process

To some extent, this process has already started as part of the Trusted Internet Connections (TIC) program. I’m actually kind of surprised it is not already a requirement. It is possible this is already being done but the data was unavailable when the bill was written.

Section 19: Quadrennial cyber review

(a) IN GENERAL- Beginning with 2013 and in every fourth year thereafter, the President, or the President’s designee, shall complete a review of the cyber posture of the United States, including an unclassified summary of roles, missions, accomplishments, plans, and programs.

In short, each new President gets to provide commentary on how their predecessor performed with regards to cybersecurity. This report would be far more useful if it were required a year earlier; that way it would act as a briefing for the incoming President and give them a better idea of what is required going forward.

Section 20: Joint intelligence threat assessment

Specifies (yet another) annual report on cybersecurity to Congress. Nothing to see here. Move along.

Section 21: International norms and cybersecurity deterrence measures

Here’s the clip:

The President shall–

(1) work with representatives of foreign governments–

(A) to develop norms, organizations, and other cooperative activities for international engagement to improve cybersecurity; and

(B) to encourage international cooperation in improving cybersecurity on a global basis

I see this as being more the role of the Department of Justice. What is needed is better interaction between law enforcement agencies across international borders, not PR snippets and posturing. Think of it this way: what would be more effective in deterring physical crimes across state lines, frequent interaction between state law enforcement agencies, or frequent interaction between governors?

Section 22: Federal secure products and services acquisitions board

To me, this is probably one of the most positive sections of the bill. Here’s the blurb:

(a) ESTABLISHMENT- There is established a Secure Products and Services Acquisitions Board. The Board shall be responsible for cybersecurity review and approval of high value products and services acquisition and, in coordination with the National Institute of Standards and Technology, for the establishment of appropriate standards for the validation of software to be acquired by the Federal Government.

In short, the government would be using its combined purchasing power to enforce security standards for all software purchases. This could have a profound impact on the commercial industry. Vendors love to complain that it is too expensive to ship secured software; now, if they wish to sell to the government, they will have to meet the appropriate NIST standards. Most likely the secured software would be available for commercial purchase as well, so out of the box you would end up with a more secure product.

Again, I see this as an extremely positive requirement. While vendors may grumble about it, as customers we would all benefit.

Section 23: Definitions

This is simply a definition of terms used in the bill. All are either common terms (like “Internet”) or described in earlier sections.

Exec Summary

There are things to love as well as fear in this bill. It increases funding for cybersecurity research and leverages the government’s buying power to generate more secure software for everyone. At the same time, it attempts to circumvent established processes (as well as the rule of law) in ways that have the potential to make the cybersecurity situation worse rather than better. The bill is currently being reviewed by the Senate Committee on Commerce, Science, and Transportation. Now is the time to voice any praise or concerns you may have.

Cybersecurity Act of 2009 In-Depth – Part 1

September 10th, 2009

There have been quite a few articles on the Cybersecurity Act of 2009. Most have focused on the section that would give the President the power to “shut down the Internet”. But are there other things in this bill you should be even more concerned about? Is there anything actually useful in the bill? In this two-part post I’ll take you through the bill section by section.

The first two sections are simply the index and the findings. One notable quote from section 2:

(1) America’s failure to protect cyberspace is one of the most urgent national security problems facing the country.

This sets the tone for the rest of the section, and I have to say I agree with the statement. Security-wise, we truly are in worse shape than most people want to believe.

Section 3: Cybersecurity advisory panel

These two quotes really say it all:

(a) IN GENERAL- The President shall establish or designate a Cybersecurity Advisory Panel.

(c) DUTIES- The panel shall advise the President on matters relating to the national cybersecurity program and strategy

I have mixed feelings regarding these points. I think that cybersecurity is important enough to deserve high-level visibility. However, this bill goes hand in hand with S. 778, a bill to create the position of Cybersecurity Advisor, and H.R. 1910, a bill to create the position of Chief Technology Officer. Both of these positions would report directly to the President, so it seems more useful to have the panel fall under those two roles in the national org chart. It may just be semantics, but one of the issues we have today is overlapping authority with no clear ownership of problems. If all three bills pass, I see a higher chance of creating conflicts than resolutions.

Section 4: Real time cybersecurity dashboard

I’ve seen little attention given to this item, but there is an easily dismissible statement made in this section:

The Secretary of Commerce shall

(1) in consultation with the Office of Management and Budget, develop a plan within 90 days after the date of enactment of this Act to implement a system to provide dynamic, comprehensive, real-time cybersecurity status and vulnerability information of all Federal Government information systems and networks managed by the Department of Commerce;

A couple of points here. Why just the Department of Commerce? If this will be a truly useful resource, why not extend its use beyond this one government office? Also, the statement is a bit vague: this could be as ineffectual as the national threat level, or merely a subset of the data already provided by sites such as DShield or Homeland Security’s Open Source Infrastructure Report. Either way, I see this as a long-term failure.

Section 5: State and regional cybersecurity program

Here’s the focus of this section:

(a) CREATION AND SUPPORT OF CYBERSECURITY CENTERS- The Secretary of Commerce shall provide assistance for the creation and support of Regional Cybersecurity Centers for the promotion and implementation of cybersecurity standards. Each Center shall be affiliated with a United States-based nonprofit institution or organization, or consortium thereof, that applies for and is awarded financial assistance under this section.

Sounds good on the first read, but what’s up with the “affiliated with… nonprofit organizations” requirement? We could easily end up with a decentralized system that has no clear point of contact for its target audience. So if I need help with cybersecurity, I should go to… The Jimmy Fund? Farm Aid? Or maybe the Tennessee Elephant Sanctuary?

Personally, I think these centers should be affiliated with InfraGard. They are established in nearly every state, already have a long history of community outreach, and are already focused on dealing with cybersecurity issues. My guess is that the commerce department wants complete control, while InfraGard is already associated with the FBI.

So what is the goal of creating these centers?

(b) PURPOSE- The purpose of the Centers is to enhance the cybersecurity of small and medium sized businesses in United States

This is an admirable goal. Due to lack of resources, small and medium sized businesses are struggling the most; probably the only demographic that is larger would be home users. If we could take steps to support these organizations, it would go a long way towards fortifying our national security posture.

The centers would support small and medium businesses by:

(1) disseminate cybersecurity technologies, standard, and processes based on research by the Institute for the purpose of demonstrations and technology transfer;

(2) actively transfer and disseminate cybersecurity strategies, best practices, standards, and technologies to protect against and mitigate the risk of cyber attacks to a wide range of companies and enterprises, particularly small- and medium-sized businesses; and

(3) make loans, on a selective, short-term basis, of items of advanced cybersecurity countermeasures to small businesses with less than 100 employees.

Again, I see these activities as a great fit for InfraGard. Deployment would be expedited, as there is already a national structure in place, dramatically shortening the time needed to make these resources available.

Section 6: NIST standards development and compliance

The bill looks to NIST to develop security standards for all government agencies:

(a) IN GENERAL- Within 1 year after the date of enactment of this Act, the National Institute of Standards and Technology shall establish measurable and auditable cybersecurity standards for all Federal Government, government contractor, or grantee critical infrastructure information systems and networks

NIST is already responsible for setting standards. In fact, their security documents are considered some of the best in the industry. Per the Information Technology Management Reform Act of 1996, NIST is already charged with developing Federal Information Processing Standards (FIPS).

I’m not a lawyer, but I don’t see anything in this section that has not already been specified by earlier bills, except this tidbit under “(d) Compliance enforcement”:

(2) shall require each Federal agency, and each operator of an information system or network designated by the President as a critical infrastructure information system or network, periodically to demonstrate compliance with the standards established under this section.

I’m honestly not sure if the President currently has the power to (arbitrarily?) designate any network or system as “critical” and thus subject to this section. I would prefer specific definitions over trusting the subjective judgment of a single individual. That way we are covered in both directions: systems that should have been included but were missed, as well as systems that don’t really belong on the list.

Section 7: Licensing and certification of cybersecurity professionals

This section really scares me as it has the potential to do more harm than good. Here’s the description:

(a) IN GENERAL- Within 1 year after the date of enactment of this Act, the Secretary of Commerce shall develop or coordinate and integrate a national licensing, certification, and periodic recertification program for cybersecurity professionals.

To me, this section reads as if it was written by someone with no idea of the scope of what is needed to address the problem. Cybersecurity is not a single discipline. There are experts who focus on malware analysis, perimeter security, packet decoding and intrusion analysis, incident handling, host-specific security, auditing, forensics, wireless, databases, and the list goes on and on. A national certification and licensing program would end up being one of the following:

  1. So general it really does not mean anything
  2. So difficult that “certified” resources would be hard to come by

Because of the diversity of the cybersecurity field, there really is no middle ground. This section then goes on to say:

(b) MANDATORY LICENSING- Beginning 3 years after the date of enactment of this Act, it shall be unlawful for any individual to engage in business in the United States, or to be employed in the United States, as a provider of cybersecurity services to any Federal agency or an information system or network designated by the President, or the President’s designee, as a critical infrastructure information system or network, who is not licensed and certified under the program.

Wait a minute. Let’s take one glaring example. Alan Paller is the Director of Research at SANS, is quoted in this bill (Section 2, #8), and is one of my personal heroes in this industry. He has provided counsel to the White House and Congress multiple times. He’s one of those unique individuals who can bridge the gap between folks who speak different languages (geeks, CFOs, COOs, etc.). While he knows the industry, he’s not the kind of guy who spends time writing Nessus plug-ins or decoding TCP attack streams. Is it truly the intent of this bill to lose resources like Alan if they choose not to certify?

There is a pattern here, however. Like so many line items before it, this section puts control in the hands of the Commerce Department. So I personally think this is less about ensuring we have skilled personnel supporting network security, and more about grabbing power.

Section 8: Review of NTIA domain name contracts

This is another scary section:

(a) IN GENERAL- No action by the Assistant Secretary of Commerce for Communications and Information after the date of enactment of this Act with respect to the renewal or modification of a contract related to the operation of the Internet Assigned Numbers Authority, shall be final until the Advisory Panel–

(1) has reviewed the action;

(2) considered the commercial and national security implications of the action; and

(3) approved the action.

The Internet Assigned Numbers Authority (IANA) is run by the Internet Corporation for Assigned Names and Numbers (ICANN), a non-profit international organization responsible for guiding (not implementing) high-level operations of the Internet. ICANN takes guidance from a number of organizations, including the Internet Engineering Task Force (IETF), which defines the standards for Internet communications. The IETF is an international organization made up of everyone from individual researchers to vendors.

To me, this section sounds like an attempt to bring financial pressure on these organizations. Again, this seems to be an attempt to consolidate more power under the Department of Commerce, especially when you combine it with Section 9.

Section 9: Secure domain name addresses system

Here’s the clip:

(a) IN GENERAL- Within 3 years after the date of enactment of this Act, the Assistant Secretary of Commerce for Communications and Information shall develop a strategy to implement a secure domain name addressing system. The Assistant Secretary shall publish notice of the system requirements in the Federal Register together with an implementation schedule for Federal agencies and information systems or networks designated by the President, or the President’s designee, as critical infrastructure information systems or networks.

As mentioned in the last section, developing Internet standards is the role of the IETF, not the Commerce Department. Further, we already have standards to secure the domain name structure (DNSSEC) as well as routing and the IP addressing scheme (sBGP). The problem is that their deployment has been extremely slow. What we need is deployment of the existing standards, not competing ones developed outside the accepted IETF process.

This section then goes on to say:

(b) COMPLIANCE REQUIRED- The President shall ensure that each Federal agency and each such system or network implements the secure domain name addressing system in accordance with the schedule published by the Assistant Secretary.

OK, here’s the problem: in order to secure IP and DNS, the solution has to be implemented globally. That’s part of the reason it has taken so long. If the federal government deployed DNSSEC and sBGP today, it would do little to prevent domain name hijacking or route redirection, because attackers could simply work outside the government’s perimeter.

I have to say I share the frustration in this area. Both DNSSEC and sBGP have been around for 10 years. I think we need to suck it up, accept the disruptions deployment may cause, and just get the job done. Perhaps ICANN needs a fire lit under their butts to create some forward motion; I’m just not convinced these two sections are the way to go about it.

Section 10: Promoting cybersecurity awareness

You knew a PR campaign has to be included in here somewhere, right? Here’s the blurb:

The Secretary of Commerce shall develop and implement a national cybersecurity awareness campaign

Not sure how useful this will be, since the news feeds are already full of stories describing our current state of security. I see this as having the potential to be silly rather than informative. I have visions of walking into my kid’s school and seeing a poster that states “Billy Bytes Says Don’t Be A H4X0r”. OK, hopefully that will never happen, but you never know. ;)

Section 11: Federal cybersecurity research and development

Here’s the initial statement:

(a) FUNDAMENTAL CYBERSECURITY RESEARCH- The Director of the National Science Foundation shall give priority to computer and information science and engineering research to ensure substantial support is provided to meet the following challenges in cybersecurity:

This section dumps a lot of money into the research and development of cybersecurity techniques. It amends existing bills to increase spending by $265M in 2010, rising to over $310M by 2014. There are already other programs that fund cybersecurity research, but provided the funds are managed appropriately, I see this as being helpful to the cause.

Section 12: Federal cyber scholarship for service program

Here’s the clip:

(a) IN GENERAL- The Director of the National Science Foundation shall establish a Federal Cyber Scholarship-for-Service program to recruit and train the next generation of Federal information technology workers and security managers.

This is no different than many other “scholarship for service” programs. I see this as being beneficial to both the student as well as the government. $50M has been allocated to the program, increasing to $70M by 2014.

Summary

That’s it for now. Tomorrow I’ll post the last half of the bill.

DLP FAQ

August 7th, 2009

I’ve had a few queries regarding the SANS Data Leak Prevention & Encryption Summit I’ll be keynoting next month. The questions have revolved around DLP in general, so I thought I would give a rundown on the technology.

What is DLP?

DLP stands for “Data Leak Prevention” or “Data Loss Prevention”, depending on which vendor you are talking to. There are a few other names currently being bounced around (gotta love marketing people trying to make their stuff look newer and cooler ;) ), but they all describe effectively the same technology. DLP attempts to log, or possibly prohibit, the transfer of sensitive information from a secure location to an insecure one.

Sensitive information usually includes data like credit card numbers or social security numbers. Most products will also give you the ability to define phrases or specific files as sensitive. Of course, how much customization you get depends on the product, but these features are pretty standard. The big difference tends to be in the ease of policy creation: some products let you write policies in simple, natural language, while others require you to learn a regex-style expression language to create policies and write filters.
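
As a rough illustration of what the pattern-matching core of such a policy looks like, here is a minimal sketch in Python. The regexes and sample values are simplified examples, not a production-ready policy; the Luhn checksum is the standard trick for weeding out random 16-digit numbers that merely look like card numbers:

    import re

    CC_RE  = re.compile(r"\b(?:\d[ -]?){13,16}\b")   # loose card-number shape
    SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")    # US SSN, dashed form

    def luhn_ok(digits):
        """Standard Luhn checksum used to validate card numbers."""
        total = 0
        for i, ch in enumerate(reversed(digits)):
            d = int(ch)
            if i % 2 == 1:                 # double every second digit
                d = d * 2 - 9 if d > 4 else d * 2
            total += d
        return total % 10 == 0

    def scan(text):
        hits = [f"SSN: {m}" for m in SSN_RE.findall(text)]
        for m in CC_RE.findall(text):
            if luhn_ok(re.sub(r"[ -]", "", m)):
                hits.append(f"Card number: {m}")
        return hits

    print(scan("Invoice for card 4111 1111 1111 1111, employee SSN 078-05-1120."))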

Think of DLP devices as intrusion detection systems for specific keywords and you’ll get the idea. In fact some established NIDS and NIPS vendors are now touting their DLP capabilities as well. You also have a number of startups that are focused specifically on the DLP market.

How does DLP work?

Currently there are three different methods of DLP deployment:

  • On the wire
  • On the server
  • On the desktop

Some vendors support a single method of deployment while others support all three. There are strengths and weaknesses to each, which I will cover later in this FAQ.

How much does DLP cost?

Since it’s a new technology, prices are all over the board. A medium-sized company (50-500 nodes) can expect to pay anywhere from $30,000 to $200,000 US. These devices are by no means plug and play, so a portion of the cost includes configuring the device and customizing it for the specific environment. You should also expect a bit of lead time in getting the device(s) deployed properly.

What are the problems with DLP?

Probably the biggest problem with DLP technology is that it can easily be defeated. It is really designed to prevent accidental data leakage, rather than a true attack. You should consider DLP an enhancement to your existing security posture, not a replacement for any previously deployed technology.

For example, deploying DLP on the wire is probably the fastest and most effective deployment. The problem is that it can easily be defeated by encryption. If I encrypt a sensitive file prior to transmission, or leverage a VPN technology (see items 4 and 5 in my Top 5 Firewall Threats post), the network-based DLP will be unable to see the passing information.

Some DLP devices give you a limited ability to work around the encryption problem. For example, Fidelis will integrate with a number of proxy products to check passing HTTPS. You have to purchase a supported product, however, and configure it specifically to prevent end-to-end encryption of HTTPS (the proxy breaks the encrypted stream so the payload can be analyzed). Even then you’ve only solved the problem for HTTPS; encrypted data on other ports will still be an issue. Or an attacker could encrypt the file locally and then transmit it via HTTPS, because all the proxy can strip away is the SSL encryption.
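
A toy demonstration of the point, using the same kind of pattern scan as the earlier sketch. The XOR keystream below is just a stand-in for real encryption (an SSH tunnel, a VPN, a locally encrypted archive); the takeaway is simply that ciphertext leaves no pattern for the regex engine to match:

    import os, re

    SSN_RE = re.compile(rb"\b\d{3}-\d{2}-\d{4}\b")

    payload = b"employee SSN 078-05-1120"
    key = os.urandom(len(payload))                      # toy one-time pad
    ciphertext = bytes(p ^ k for p, k in zip(payload, key))

    print(SSN_RE.search(payload))      # match  -- DLP sees it in the clear
    print(SSN_RE.search(ciphertext))   # None   -- DLP is blind to ciphertext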

Deploying DLP on the desktop solves some of these problems, but not all of them. For example, the desktop agents I’ve looked at do a pretty good job of preventing me from transferring a sensitive file via the Internet or to a local USB drive. If you run an agent-based DLP, try this:

  1. Open a sensitive file
  2. Create a screen capture of sensitive info (CTRL-ALT-Print screen)
  3. Open Windows Paint and press CTRL-V
  4. Save the file as a GIF or JPG
  5. Copy to a USB drive or transfer via the Internet

If your results are similar to mine, you’ll find this very simple trick fools the agent into letting the data pass. If you wanted to get really slick, you could add a bit of steganography.

Exec Summary

DLP is a powerful technology that can help prevent the release of sensitive information. Currently it is better suited to preventing accidental data leakage than to stopping a determined attacker. If the release of sensitive data is a serious concern, you may need to rework your current architecture in order to close the holes DLP cannot defend.

Proactive Cyber Defence Seminar

July 29th, 2009

I did the keynote today at the Proactive Cyber Defence Seminar, held at the International Spy Museum. Very cool venue that makes for a nice mix of old-school and cutting-edge security; worth the trip if you are in DC. Make sure you check out the spy poo. ;)

Thanks to all who attended as I had an absolute blast. I promised to post a PDF version of the slides, so here ya go…

proactive-cyber-defense-seminar

Making The Web a Safer Place With NoScript

July 24th, 2009

Yesterday I was reviewing the stats for this site and was pleasantly surprised to see that 70%+ of all visitors are using the Firefox browser. In a previous post, What Makes A System Vulnerable, I defined vulnerability as permitting remote users to interact with code running on the local system. Firefox has an excellent security extension called NoScript which can dramatically reduce this vector of exposure.

The premise of NoScript is so rudimentary you have to wonder why every browser vendor does not make this functionality a built-in option. NoScript gives you control over which sites can execute code on your system. It’s that simple. Merely browsing to a Web site no longer implies that you trust it enough to execute programs (Java, Flash, etc.) on your desktop. NoScript is flexible, relatively unobtrusive, and a “must have” extension for staying safe on the Internet.

Getting NoScript

The easiest way to retrieve and install NoScript is right through the Firefox Add-ons window. Simply click “Tools” on the main menu bar and select “Add-ons” from the drop-down menu. When the Add-ons window appears, click the “Get Add-ons” button at the top left of the window. If you do not see NoScript on the “Recommended” list, click the “Browse All Add-ons” link at the top right of the screen.

Clicking the link will spawn a new Firefox tab directing you to the Firefox add-ons site. In the search bar type in “noscript”. When NoScript appears in the results, click the “Add to Firefox” button. When the installation is complete simply restart your Firefox browser. You are now ready for safer Web browsing. When new updates become available you will be automatically notified.

Using NoScript

When you first start using NoScript, it may appear that many of your favorite Web sites are broken: Flash video will no longer auto-load, drop-down menus may fail, etc. Take a look at the bottom of your Firefox window and you will see output similar to Figure #1. NoScript is telling us that all script execution is currently disabled for this site; the site tried to run 12 scripts, and there were 0 embedded objects (like frames displaying text or video from other sites). To change this behavior, simply click the “Options…” button.

noscript-status

Clicking “Options…” will produce a menu similar to Figure #2. The information pertaining to this specific site is at the bottom of the menu. NoScript is telling us that the site tried to execute scripts from four different domains: mmismm.com, revsci.net, com.com and cnet.com. We are given two options for each domain: let scripts from that domain run just for this session (Temporarily), or permit the domain to execute scripts for this and future sessions as well (Allow).

noscript-options

Mmismm.com and revsci.net are advertising companies. They also have a poor trust rating through the Web of Trust (WOT is another cool Firefox extension, by the way), so we may want to leave scripts from these domains disabled. The remaining two domains are part of CNET, so if we like to view news and articles from this company we may wish to grant access. Note that this should not be automatic, however. This menu was generated while I was visiting the CNET News site, and I was still able to view all the content I was interested in just fine, so there is really no reason to permit any of these domains to execute scripts and expose myself to potential attack.

If you do permit script execution from certain domains, Firefox will automatically reload the page and execute the permitted scripts. You’ll notice that the NoScript status bar now looks more like Figure #3. NoScript is telling us that the site we visited tried to execute scripts from six different domains, but only four of them were permitted. The domains allowed to run scripts are then listed for us to review. There were 69 scripts total and zero embedded objects.

noscript-status2

If we later decide a site is not so trustworthy, it’s easy to revoke permissions. If we are at the site in question, simply click “Options…” and select the “Forbid” menu item for that domain. If we are not currently browsing the site, go to the top of the menu and select “Options…” (the second appearance of this title). Click the “Whitelist” tab, scroll through the list to find the site in question, and click the “Remove Selected Sites” button. Problem solved.

Once you have used NoScript for a while and wish to get into some of the more advanced options, the NoScript site has some excellent information. Start with the FAQ and then move on to the user forums. If NoScript saves you from an attack even once, it may be worth clicking the “Donate” button at the top of the main page. ;)

What makes a computer system vulnerable?

July 19th, 2009

Consider the following five systems:

  • A Web server
  • A desktop system
  • A “Next Gen” or Unified Threat Management (UTM) firewall
  • A Network Based Intrusion Prevention System (NIPS)
  • An isolated system only used to process Web server logs

Here’s the $42 question: if we assume the above network has an Internet connection, which of these systems are susceptible to remote attack (meaning over the wire from the Internet, not via direct access to the keyboard)?

Seriously, don’t just skim the question, give it some serious thought. Your answer is obviously going to have a direct impact on how you implement a security posture or assess network risk. Pretty major stuff.

Let’s talk about each system individually before we say exactly how many are vulnerable to remote attack. The Web server has at least TCP/80 exposed to the Internet. This provides a socket that a remote attacker can connect to in order to interact with code running on the Web server. It’s this interaction with local code that makes the Web server susceptible to potential attack. Consider this the classic view of risk, as we’ve known these systems are vulnerable for many years.

So let’s talk about the desktop. While desktops usually do not have sockets exposed to the Internet, they do initiate communication sessions with remote servers. Java, ActiveX, etc. can be leveraged by those remote servers in order to interact with code running on the desktop itself (think Conficker and you’ll get the idea). So as it turns out, the desktop is vulnerable as well, because a remote system can interact with locally executing code via these outbound sessions.

So what about the UTM firewall? If it has no open ports and does not originate outbound sessions, surely it must be safe, right? Think about how a UTM firewall operates: the IP header and payload are scrutinized in order to provide malware, content, SPAM, etc. checking. In other words, the packet is read into memory and processed by local code in order to provide these services. “Processed by local code” means, of course, that we’re interacting with it. So it’s entirely possible that a remote attacker could leverage this level of access to whack the system (usually this takes the form of a simple DoS attack, but remote code execution has shown up in the wild).
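
Here is a contrived sketch of why “just inspecting” a packet is still an attack surface. The two-byte length header below is a made-up protocol, and the bug is deliberately naive, but real inspection engines contain vastly more parsing code of exactly this flavor, and a malformed packet is all it takes to exercise it:

    import struct

    def inspect(packet):
        # Made-up wire format: 2-byte big-endian payload length, then payload.
        # The parser trusts the header -- but the attacker wrote the header.
        (length,) = struct.unpack(">H", packet[:2])
        payload = packet[2:2 + length]
        # ... malware/content/SPAM checks would run against payload here ...
        return payload

    inspect(struct.pack(">H", 5) + b"hello")   # well-formed packet: fine
    try:
        inspect(b"\xff")                       # truncated packet from an attacker
    except struct.error as exc:
        print(f"inspection engine crashed on attacker input: {exc}")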

OK, what about the NIPS? That one is easy. Both NIPS and UTM firewalls are based on the same underlying technology (stateful inspection) so the same problems arise here as well. The NIPS is also vulnerable to remote exploit.

So that leaves us with just the isolated system that parses the Web server logs. Safe or not? It turns out this system can be remotely whacked as well; the attacker just has to be a bit more clever. Let’s follow the path from the attacker’s system to this internal host.

The attacker hits the Web server, which dutifully writes what it sees to a log file. If the attacker can embed malicious code in the log file, that code gets passed to the isolated internal system when it parses the Web logs. Mike Poor told me of an interesting hack he ran into, where a remote attacker injected JavaScript into the user agent field while visiting his client’s Web site. When the local admin used a Web browser to view the Web server logs (running as an Administrator equivalent, of course!), the browser saw the script and executed it locally. The code then attempted to create a reverse socket connection so the attacker could gain remote access to the box.
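
Here is a small sketch reconstructing the shape of that attack (the payload and log format are invented for illustration). The vulnerable step is a log viewer that renders entries as HTML without escaping them; Python’s html.escape() shows how little it takes to neutralize the payload:

    import html

    # Attacker-controlled User-Agent header, dutifully logged by the server.
    user_agent = "<script>/* open reverse connection to attacker */</script>"
    log_line = f'10.0.0.99 - - [19/Jul/2009] "GET / HTTP/1.1" 200 "{user_agent}"'

    # Vulnerable viewer: a browser rendering this will execute the script.
    unsafe_report = f"<pre>{log_line}</pre>"

    # Safe viewer: escaping turns the payload into inert text.
    safe_report = f"<pre>{html.escape(log_line)}</pre>"

    print(unsafe_report)
    print(safe_report)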

So for those who are keeping score, every system listed above is vulnerable to remote attack.

What’s the moral of the story? Exposure to remote attack is not about “open listening ports on the local system”; it’s about “permitting a remote system to interact with code running on the local system”. This can happen either directly, as in the first four examples, or indirectly, as in the last one. Once we realize that remote code access is the true root cause of the problem, we also realize our exposure to risk is a lot higher than we thought.