Friday, December 17, 2010

How to Secure the Corporate Data on Your iPad or iPhone

A recent survey of CIOs showed that 85% had received requests for Apple iPhones, iPods or iPads to be used in the enterprise, and that almost 75% had found that end users were connecting those devices to the enterprise network with or without permission.

This bottom-up push towards employee-owned devices has been matched by a top-down push, for iPads in particular, from board-level executives. IT security professionals are being squeezed in the middle, forced to support devices which were never designed for enterprise use and which present unique challenges to secure, deploy and manage effectively.

Given the popularity of the iPad among executives, it was important that Apple made significant improvements to make its devices more enterprise-friendly, and it attempted to do just that with a raft of new features in iOS 4. Alongside new management capabilities came improved data protection, making iOS 4 devices far more secure and more straightforward to manage than their predecessors.

However, there remains some confusion between "encryption" and "Data Protection," as used by Apple when referencing its latest security capabilities in iOS 4. Apple has created a framework for Data Protection that goes far beyond previous encryption capabilities and addresses many of the prevailing data security concerns. Encryption was introduced in iOS 3 and is "always on," but even when the device passcode is set it does not prevent files from being accessible in the clear under certain circumstances.

Though additional file-level encryption is available under the new Data Protection capabilities in iOS 4, the default state of data on an iPhone or iPad is "always available" to preserve backward compatibility, and sensitive data stored on iOS devices remains unprotected in many cases.

Of the Apple applications, only Mail supports full data encryption right now, and few third-party software developers have implemented the Data Protection APIs. Therefore, sensitive corporate data can be at risk if an iOS device is compromised.
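Conceptually, what Data Protection adds is a per-file key wrapped by a class key: the default class key is available whenever the device is running (hence "always available"), while the "complete protection" class key is entangled with the user's passcode and is only usable while the device is unlocked. The following Python sketch is purely illustrative (a toy XOR cipher and invented names, nothing like Apple's actual AES-based implementation), but it shows why files left in the default class remain readable even when the device is locked:

```python
import hashlib

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with a SHA-256-derived keystream.
    (Illustrative only - real Data Protection uses AES.)"""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

class ToyDataProtection:
    def __init__(self, device_key: bytes, passcode: str):
        # Default class key: derived from the device key alone, so it
        # is available whether or not the device is unlocked.
        self.always_available_key = hashlib.sha256(device_key + b"default").digest()
        # "Complete protection" class key: entangled with the passcode.
        # (In the toy model we keep it in memory and gate access with a
        # flag; the real design only derives it while unlocked.)
        self._complete_key = hashlib.sha256(device_key + passcode.encode()).digest()
        self.unlocked = False

    def write(self, data: bytes, protected: bool) -> bytes:
        key = self._complete_key if protected else self.always_available_key
        return _keystream_xor(key, data)

    def read(self, blob: bytes, protected: bool) -> bytes:
        if protected and not self.unlocked:
            raise PermissionError("class key unavailable while device is locked")
        key = self._complete_key if protected else self.always_available_key
        return _keystream_xor(key, blob)
```

The point of the sketch: whether a stolen device gives up your data depends entirely on which class the developer chose for each file - which is why the lack of third-party adoption of the Data Protection APIs matters.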

A brand new Analysis Brief is in the pipeline covering iOS 4, asking how secure Apple's new Data Protection capabilities are, and providing actionable advice on securing corporate data on iOS 4 devices.

Follow me on Twitter (@bwalder) to be kept informed of new research.

Thursday, December 09, 2010

A Good Security Testing Plan Will Save Time and Money

Few enterprises in today's environment of highly constrained IT and security resources can afford to waste time and budget on network security products that exceed — or do not match — their requirements. While it is tempting to forge ahead in evaluating the biggest and fastest, hardware-accelerated, nuclear-powered "Next Generation" security toys, a well-designed testing plan may demonstrate that a lower level of performance is acceptable at certain points on the network, and this can reduce purchase and deployment costs.

An effective testing plan will enable the enterprise to select cost-effective security solutions that align with internal requirements for performance and system integration. The availability of advanced test tools enables a complete test lab to be created in a single rack of equipment, making it possible for almost any organization to perform in-house testing.

When embarking on a testing project, it is also important to decide in advance the eventual use case for the products being tested — a device intended for a branch office environment is unlikely to perform well if tested as an enterprise core product, for example.

In consulting independent test reports, be wary of those test houses that do not recognize the value of use-case testing. Look for those that either seek to certify a product against a particular use case, or that recommend one or more use cases based on the results of the test. A simple "pass/fail" result with no indication of a suitable use case renders a test worse than useless — even misleading.

We have an Analysis Brief in the pipeline that examines each of these issues in more depth and defines testing best practices that will save precious resources when evaluating complex security devices.

Follow me on Twitter (@bwalder) to be kept informed of new research.

Monday, December 06, 2010

Firesheep: Should CISOs Ban Employees From Using Unsecured Public Wireless Networks?

The release of the Firesheep plug-in for the Firefox browser has made it trivial for even unskilled attackers to intercept and interfere with private data on unsecured public wireless networks.

Since attackers can use the tool to send messages and make posts on behalf of the victim, organizations using social networks for marketing, support or brand enhancement may suffer serious consequences as a result.

Chief information security officers (CISOs) need to make employees aware of the risks and provide them with the necessary tools to counter them, but should they be banning the use of unsecured wireless networks for any company-related communications?

This note (for subscribers only), entitled "What CIOs need to know about SSL and its effect on network traffic inspection capabilities," answers that question and provides action plans for both employees and software developers to combat the threat of session hijacking. It also covers how IT departments can balance the need for enhanced security with the need to inspect encrypted traffic on the corporate network.

Don't forget to follow me on Twitter (@bwalder) to be kept informed of new research.

Friday, October 29, 2010

Like Lambs To The Slaughter - What Is Firesheep?

As with Advanced Evasion Techniques (AET), Firesheep has garnered significant publicity recently by rejuvenating interest in an old security problem via the creation of a slick new tool. Unlike AETs, however, the tool at the centre of this publicity storm has been released to the general public, for good or ill.

HTTP session hijacking, or "Sidejacking" as it is sometimes called, is nothing new. Papers exist discussing the technique as far back as 2004. Several applications have also been written in the past (Ferret, Hamster, Cookie Monster and FBcontroller to name a few) to take advantage of the technique. However, Eric Butler, a Seattle-based freelance software developer, has rekindled interest in the issue via the release of a simple-to-use Firefox plugin called Firesheep.

Either on its own on a Mac, or coupled with WinPcap (or Ettercap) on a PC, Firesheep can capture traffic on any unsecured wireless network to which you are connected and extract details from session cookies used by any of the web sites configured within the Firesheep application. These cookies are used by web applications such as Twitter or Facebook to register the fact that you have successfully authenticated to the host site. They do not contain your password details, but they do not need to. By using the cookie to piggyback on your unencrypted communication, the attacker running Firesheep can impersonate you and gain access to the application you are using.

It couldn't be easier to use. The attacker just fires it up, turns on packet capture, and waits for the sidebar to populate with account details it has detected on the network. He clicks on your details, and hey presto - he sees on his screen exactly what you see on yours. And he can interact directly with the host application. He could post status updates on Twitter or Facebook on your behalf, for example. OK, that might not be too serious for some, but for those whose job it is to represent the public face of a major corporation, the potential for mischief is significant.
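To see why the cookie alone is enough, consider a toy (entirely hypothetical) web application sketched in Python. The server keeps a table mapping session cookies to logged-in users; once the cookie leaks over an unencrypted connection, a replayed request is indistinguishable from the real thing:

```python
import secrets

# Toy server-side session table: the cookie value is the ONLY thing
# linking a request to an authenticated user - no password involved.
sessions = {}

def login(username: str, password: str) -> str:
    """Runs over HTTPS; returns a random session cookie."""
    cookie = secrets.token_hex(16)
    sessions[cookie] = username
    return cookie

def handle_request(cookie: str, action: str) -> str:
    """Runs over plain HTTP - the cookie crosses the air unencrypted."""
    user = sessions.get(cookie)
    if user is None:
        return "401 not logged in"
    return f"200 posted as {user}: {action}"

# The victim logs in and posts normally...
victim_cookie = login("victim", "correct-password")
print(handle_request(victim_cookie, "lunch was great"))

# ...Firesheep sniffs victim_cookie off the unsecured wireless network.
# The attacker never learns the password, but the server cannot tell
# the difference:
print(handle_request(victim_cookie, "posting as the victim now"))
```

Until sites enforce encryption for every request (not just the login page), that second request will always succeed.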

Should you stop using all public, unsecured wireless networks? Well, no. That would be overkill.

At the end of the day, the real solution is for providers of web applications like Facebook and Twitter to use secure connections for all their operations. In the meantime, there are a number of precautions you could, and should, take, and these (and other key points) are the subject of a research note I have just completed (subscribers only).

Don't forget to follow me on Twitter (@bwalder) to be kept informed of new research. Just don't do it from an unsecured wireless network - you never know who might be watching!

Wednesday, October 27, 2010

AET Update

Stonesoft held a joint publicity exercise with ICSA Labs last night in the form of a live Q&A session via conference call.

It was fairly embarrassing: there was a total of three questions (two from the same person, who seemed to confuse evasion techniques with actual exploits), and the whole thing was wrapped up after 25 minutes, most of which was taken up by Stonesoft execs repeatedly denying that this was just a publicity stunt (and still no real details).

So, why was it a bust? Lack of interest or lack of understanding?

Well, given the confusion mentioned above, I suspect a lack of understanding, which is worrying. It is also one reason why I am inclined to forgive Stonesoft this blatant hijacking of the evasion issue: if it continues to raise awareness and forces other vendors to take evasion more seriously in their own testing, then it will have been A Good Thing.

So let me clear up the confusion. Evasion techniques are not, in and of themselves, exploits. Any attacker would need a functioning exploit which is already proven to work against the target host. If the host is unpatched and the in-line defences (IPS/NGFW) have no appropriate signature, the exploit will be successful - game over. If the IPS/NGFW has a signature covering the exploit, then it will be blocked - score one for The Good Guys.

This is where evasions come into play, however. Having noted that his exploit has been blocked, the attacker will then begin to use the same exploit coupled with one or more evasion techniques to disguise the exploit and render it invisible to the IPS/NGFW inspection engine. Chances are, right now, it will then work, since so many IPS engines fare so badly against even the most basic evasion techniques.
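The principle can be sketched in a few lines of Python (deliberately simplified - a real IPS and real evasions are far more sophisticated): an inspection engine that matches signatures per-segment is blind to an exploit split across segments, while the target's stack happily reassembles it:

```python
EXPLOIT = b"EVIL_SHELLCODE_TRIGGER"  # stands in for a real exploit payload

def naive_ips(segments):
    """A (deliberately weak) inspection engine that matches the
    signature against each segment in isolation - it never
    reassembles the stream."""
    return any(EXPLOIT in seg for seg in segments)

def target_host(segments):
    """The victim's TCP stack reassembles the stream before the
    application sees it."""
    return b"".join(segments)

# Exploit sent in one piece: the signature matches, attack blocked.
assert naive_ips([EXPLOIT]) is True

# Evasion: segment the exploit so the full signature never appears
# within any single segment...
evaded = [EXPLOIT[i:i + 4] for i in range(0, len(EXPLOIT), 4)]
assert naive_ips(evaded) is False      # sails past the IPS...
assert target_host(evaded) == EXPLOIT  # ...but hits the host intact
```

The fix, of course, is for the IPS to reassemble the stream exactly as the target would before inspecting it - which is precisely the hard, performance-hungry work that so many engines skimp on.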

Note that if the target host has been patched against the exploit, then no amount of evasion will help. This is the key differentiator here - evasion techniques are only good for "cloaking" and delivering an exploit unmolested past a NGFW or IPS. Once your host system is patched against a particular vulnerability, it is safe (until the next one is discovered!)

Take a look at the most recent NSS Labs IPS Group Test Report to get some idea of which IPS products do well against evasions and which do not. Now this is where Stonesoft is to be commended. Because in trying to fix its own problems it went beyond those tools which are freely available to testers and wondered what would happen if it extended a few of the techniques and combined them. The result was the Predator tool and this latest slew of publicity.

It bears repeating that the criticism levelled at Stonesoft to this point is due to a lack of originality, not the seriousness of the problem. In the conference call last night ICSA voiced a very significant qualification - that 9 of the 14 PCAPs Stonesoft provided them to validate the claims had not been seen before in freely available tools. In other words, Stonesoft has not invented or discovered a whole new type of evasion technique (as I have already pointed out, I was personally using several of their so-called "new" evasion techniques in public testing over seven years ago) - it has, instead, extended and combined existing known techniques to create a new set of problems for NGFW/IPS vendors to solve.

In other words, we are no worse off now than we were before Stonesoft made its claims - but there is still a significant problem which needs addressing. And it is time the IPS industry woke up and addressed this issue. There are products on the market today which have had issues with evasion techniques since the day V1.0 was launched, despite being pulled up time and time again in independent tests.

Which vendors are you considering for your next NGFW/IPS product? Ask them about evasions. Ask them about the Stonesoft AETs. And then make them PROVE they have an answer. In your own network, under your control. Or in an independent test lab under the control of a trusted third party. But NOT in their own labs.

Because the thing is, some vendors don't seem to understand the problem any more than the public at large. If they did, I wouldn't have had to fail the same products, year after year, for the same problem when I was testing these things myself.

As I mentioned previously, I have a research note in the works covering evasion techniques and how they can (and can't) be used against your perimeter defences. Given the level of interest in this subject, I might try to push up the delivery date.

Follow me on Twitter (@bwalder) to be kept informed.

Wednesday, October 20, 2010

Storm In A Teacup? More on Advanced Evasion Techniques (AET)

Following my recent post on the Advanced Evasion Techniques (AET) "discovered" by Stonesoft, I thought I would update you with a few discoveries of my own.

After further investigation it would appear that there is not really that much that is actually new here. Don't get me wrong, there is certainly a threat here, and if there is one good thing that comes out of this it is that a few vendors might start taking evasion testing more seriously than they have in the past.

It appears that Stonesoft went through an independent testing process at the end of last year, failed several of the evasion tests, and started to do some research in order to improve their product. In developing their own tool to help them test, they started "fuzzing" the evasion techniques - an automated process which tries millions of random evasions, both in isolation and in various combinations, in order to find those which work. Bear in mind that it is possible to "evade" a typical TCP/IP stack too, so for an evasion test to be valid, it should allow a previously-detected exploit to bypass an IPS/IDS undetected whilst remaining capable of being reassembled by the target vulnerable host.
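That fuzzing loop is easy to sketch in Python. Everything here is a toy (three crude transforms standing in for real evasions, a string match standing in for an IPS, a sorted join standing in for a TCP stack), but it captures the validity test described above: keep only those combinations that evade inspection yet still reassemble correctly on the target:

```python
from itertools import combinations

SIG = b"ATTACK_PATTERN"  # stands in for a detectable exploit

def fragment(chunks):
    """Split every (seq, data) chunk in half."""
    out = []
    for seq, data in chunks:
        mid = max(1, len(data) // 2)
        out.append((seq, data[:mid]))
        out.append((seq + mid, data[mid:]))
    return out

def reorder(chunks):
    """Send segments in reverse order (sequence numbers unchanged)."""
    return list(reversed(chunks))

def duplicate(chunks):
    """Resend the first segment at the end (a crude overlap)."""
    return chunks + chunks[:1] if chunks else chunks

def naive_ips(chunks):
    """Weak engine: scans bytes in arrival order, no reassembly."""
    return SIG in b"".join(data for _, data in chunks)

def host_reassemble(chunks):
    """The target's stack reorders by sequence number and drops
    duplicates before delivering to the application."""
    seen = {}
    for seq, data in chunks:
        seen.setdefault(seq, data)
    return b"".join(seen[seq] for seq in sorted(seen))

EVASIONS = [fragment, reorder, duplicate]

def fuzz():
    """Try evasions singly and in combination; keep only the valid
    ones: invisible to the IPS, yet reassembled intact by the host."""
    wins = []
    for r in range(1, len(EVASIONS) + 1):
        for combo in combinations(EVASIONS, r):
            stream = [(0, SIG)]
            for evasion in combo:
                stream = evasion(stream)
            if not naive_ips(stream) and host_reassemble(stream) == SIG:
                wins.append([e.__name__ for e in combo])
    return wins
```

In this toy model no single transform works on its own, but layering fragmentation with reordering does - the same lesson Stonesoft drew at full scale, where the combinations run into the millions.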

What they came up with was a number of new "discoveries", which under closer scrutiny appear to be techniques that have been well known for many years in the testing industry. In particular, they are laying claim to the discovery that layering multiple evasions - particularly evasions from different layers of the protocol stack - can succeed where single evasions will not. Well, I know for a fact that this technique - along with around 90% of the others they are claiming as new - has been in use for 7 years or more in the testing industry. How do I know this? Because I was the one doing it!

As founder and CEO of NSS Labs, I pioneered a range of IPS/IDS/Firewall testing techniques which are still in use today. In particular, I devoted a significant amount of time to the study of evasion techniques and was using many of the "new" Stonesoft AETs - including the all-powerful layering - way back in the noughties. I had to use my own tools back then, developed in-house. That certainly made it a challenge to layer MSRPC fragmentation with TCP segmentation and IP fragmentation in the same attack, but it was doable. And I did it. What IS new from Stonesoft is the fancy Predator tool, which they are not releasing to anyone (sensibly). It is a GUI-driven "One Stop Evasion Shop" and looks a lot nicer than the multiple command-line tools I developed...

In addition, one of the "evasions" they have discovered seems to be less of an evasion and more of an exploitation of a particular bug which can be found in some IPS products. Again, part of a data leakage test which I was running against these products some years ago. I am surprised that it is still causing problems for some vendors... but there you go!

There is nothing new under the sun. What Stonesoft has done is taken existing evasion techniques and extended them. In doing this, they have created a few specific evasions I have not used before, but they are still extensions of known techniques. Kudos to them for taking this so seriously - it should do wonders for the security of their IPS and firewall products. Hopefully it will also force other vendors to follow suit and take this more seriously. You, the customer, deserve that at least. There are far too many IPS/IDS products which are still today failing to protect against even the most basic of these techniques (as seen in recent independent tests), let alone the more complex variations Stonesoft is publicising. Signatures are just not enough!

But don't fall for the FUD here... nothing has changed. AETs are not the WMD that will bring our perimeter security to its knees. Yes, they are a serious problem, but no more serious than before Stonesoft launched its publicity drive. Except, of course, that the bad guys are watching too...

Don't forget to follow me on Twitter (@bwalder) to keep up with my blog entries, research notes and random thoughts on wine, coffee, Labradors, golf, life in France and.... oh yes.... security.

Sunday, October 17, 2010

Discovery of Advanced Evasion Techniques (AET) Could Cause Headaches For IPS/NGFW Vendors

The Finnish security company Stonesoft said today it had found new techniques that bypass current security systems and which cyber-criminals could use to gain access to internal protected assets of many companies. Stonesoft said that as a result of the advanced evasion techniques (AETs) "companies may suffer a significant data breach including the loss of confidential corporate information."

Is this another round of hype or is there a genuine threat here?

Well, the bad news is that AETs do appear to exist. However, they are an extension of an existing threat category rather than a new one.

The problem is that a lot of in-line security devices - IPS in particular - don't do that good a job of coping with the basic stuff that is already out there, so this stuff is just going to make things worse!

Why is this a threat? Let's imagine you have something like Stuxnet, which is proven to be effective at spreading itself around via remote exploits (amongst other techniques). Hopefully users will patch their systems, but in the meantime they deploy signatures on their IPS, thinking that gives them additional time to test and roll out patches. It would be a trivial matter to alter Stuxnet to incorporate these evasion techniques, thus prolonging its life (don't forget - many users won't bother patching at all, and many more will delay - we know this is true from experience).

Or, another scenario: I am a cyber criminal with a new exploit for which I paid $5000 and which guarantees 100% ownership of a particular system. This I have tested and verified. So I run it against a public-facing target and find it is ineffective. I can be pretty sure this is as a result of in-line defenses. Do I throw out my $5k investment and move on? Not on your life. I deploy some simple evasion techniques and breeze on through.

For casual hacking by non-tech morons using toolkits and pre-packaged attack tools, evasion techniques are not widely used (though a number of the more advanced/expensive "blackware" tools do include evasion techniques). For those involved in targeted attacks, however, they are in common usage.

Right now Stonesoft has not released any of these tools (thank goodness!). Nor, I have to say, has it been particularly forthcoming in releasing any technical details. It claims that the AETs have been verified as real by independent test labs, but I have yet to see any evidence of this beyond a couple of vague quotes and sound bites. This has all the hallmarks of a carefully stage-managed publicity stunt.

That does not mean the threat is not real - I have seen the techniques in action and I am convinced they have the potential to cause significant mischief. There is a big difference, however, between watching a carefully managed demo by Stonesoft personnel over a secure link and getting one's hands dirty by testing hands-on. Right now it is possible that the majority of what is deemed "new" could be little more than layering older techniques on top of one another (something I was doing a decade ago to test IDS products). That doesn't make them any less effective, of course; it just means that this particular announcement is more about marketing than security. Once I see some hands-on verification by a trusted third party I will be happier.

I am also convinced that Stonesoft is not the only one to have discovered these flaws. My guess is that this is also just the tip of the proverbial iceberg. If I was making a living out of targeted attacks and cyber crime I would have been keeping these under my hat for a while now - I bet those shady folks are not happy that they are finally out in the open.

Even with the range of evasion tools and techniques currently freely available, however, security vendors have proven themselves incapable of handling even some of the most basic of those techniques. There are products on sale right now that I tested over 5 years ago and which still to this day cannot handle these issues. It is hard to do good TCP stream (and even IP packet) reassembly at high speeds - one major IPS vendor, for example, ships its IPS with all anti-evasion protection turned off by default because it is such a performance hog! It is not too much of a stretch to say that you might as well not bother deploying the thing at all if you are not going to switch those protections on!

If there is one takeaway from this round of publicity it is that you should make sure that the IDS/IPS/NGFW product you are about to buy or have already installed is resistant to these kinds of evasion techniques - and don't just take the vendor's word for it!

I have a research note in the works covering evasion. Follow me on Twitter (@bwalder) to keep up with announcements of research note releases.

One final point - this stuff is applicable to IDS and in-line protection only (i.e. IPS/NGFW) and does not help bypass good anti-malware scanning or EPP. Defense in depth, folks... defense in depth...

Friday, August 20, 2010

Intel + McAfee: Game Changer or Disaster Waiting to Happen?

While an acquisition of McAfee was hardly a shock (it has been on the cards for some time) the acquirer did come as something of a surprise. I am sure we can all think of at least one - if not more - suitors who would have been a better fit for McAfee. Mind you, what does McAfee care? Payday is payday...

Intel obviously wants to improve the security posture of its products and can gain some good R&D from McAfee to help with this. However, there appears to be very little synergy between the two companies. They have different customers, different routes to market, different cultures. Intel development cycles are measured in years, whilst McAfee needs to be able to react quickly. There are no channel benefits, no new market opportunities, and not a whole lot of revenue enhancement. And to cap it all, Intel has never really demonstrated that it actually understands the software business. Or the security business, for that matter - look what happened to LANDesk and Shiva.

The biggest area of speculation is over whether it is feasible for Intel to build EPP-type protection into its silicon, since this would provide the most exciting outcome from this merger (though one at which the anti-trust folks would doubtless take a long, hard look). How feasible it is to embed security at such a low level - given that silicon is relatively fixed and security products need to be able to change on almost a daily basis - remains to be seen. Low-level capabilities with APIs and firmware hooks are probably the way to go here, though other security vendors will presumably be able to exploit those as well (if not, the lawyers will have a field day).

Clearly, given the recent acquisition of Wind River, Intel also has its eye on the embedded/mobile market - which is going to be huge(r) - and the McAfee acquisition could dovetail quite nicely with this, as well as giving a boost to Intel's vPro platform. But if this is all Intel wanted, it could have paid a lot less for a smaller company with better technology and less baggage - though that company would not have had the McAfee brand name, of course, which will be important as Intel chases a diverse range of customers for its new security technology!

And there is always the little niggle that in the mobile world, vendors such as Apple, RIM and Microsoft have control of the platform - and therefore the security - not the chip makers. Additional layers of security can't hurt, but it is unclear whether they are as necessary as in the PC world. To date, users have been unable and/or unwilling to pay for additional security software on smartphones (Apple, for example, will not permit the use of key system calls required by antimalware vendors under the terms of its SDK).

While there is undoubtedly some intellectual property and R&D at McAfee that will be able to help Intel in its goal of offering more security features in its chipsets and related software utilities, it is unclear why it felt it needed to own McAfee to deliver this. It was already benefitting from an established partnership, and given that Intel clearly paid full value, it is obvious that it REALLY wanted this to happen - perhaps it is a defensive move to prevent others getting their hands on a key partner? Either way, almost $8 billion is a lot to pay for McAfee.

The first fruits of this union are slated to be delivered some time in 2011, apparently based around exposing limited security capabilities built into existing Intel chips. Integrating EPP-type security into silicon, if feasible, will take much longer.

One area which worries me is that I do not see where the network infrastructure security product line fits into Intel's plans. I am hoping that IntruShield, one of the market-leading NIPS products, is not left to languish in the bowels of Intel and die a slow and painful death (McAfee assures me it won't, since it (McAfee) will continue to operate as a separate business unit). Intel could tinker with IntruShield, of course, by swapping out the network processing hardware for its own (if it is not already in there!) and replacing custom silicon (ASICs/FPGAs) with generic Intel processors. This could revitalize the IntruShield product line or it could finish it off altogether. If Intel has no clear strategy (and if it has one, then why put McAfee in the Software & Services division?) it would be better to spin the product line off into a separate company or sell the technology to an interested third party.

Bottom line: while in the long term this acquisition may benefit Intel in its fight with ARM for the embedded processor market and even AMD in the PC market, it is fraught with potential pitfalls for McAfee’s existing customers if the company gets distracted in a very competitive market.

New McAfee enterprise clients and existing ones coming to the end of a refresh cycle will be looking long and hard at how focused they think McAfee will be on their business in the next 12 months. The fact that this comes hot on the heels of the recent flawed security update which crippled thousands of corporate PCs will not help matters. Symantec, Sophos and Trend Micro (amongst others) must be rubbing their collective hands in glee right about now.

But perhaps the bigger questions are: will other chip manufacturers feel they have to follow suit to keep up with Intel? Or is Intel about to go on a security shopping spree? And which security vendor will be the next to be snapped up?

Friday, July 09, 2010

Who Pays For Testing You Can Trust?

This is a question often overlooked both by those who scream "bias" and those who cry "but I want all my information for free!"

The point is, should you stop and think about it for more than a minute, there is no such thing as a free lunch - or a free independent test report. Someone, somewhere, has to pay for it. And at the end of the day, the test lab has to make a living, and there are only three ways it can do that:

1. Free testing, free reports, money comes from advertising

2. Money comes from participating vendors - reports are made available for free

3. Testing is free to vendors, end-users have to pay for reports

That's it. Those are your choices. And in all honesty, there is no difference between options 1 and 2, except that advertising revenue is hard to come by and the tests are never likely to be as thorough as you would like. Option 1 is the magazine model, and we can ignore it when discussing independent test labs.

So the proper labs are left with two choices - vendor pays or end-user pays.

The first question is: does the fact that the vendor pays for the test devalue that test in any way? The answer is, "it depends on the integrity of the lab". If the lab prepares a solid, vendor-agnostic test methodology, sticks to it, and reports all results, warts and all, for all vendors in the same way, then the model works just fine. Where the vendor (or in some cases a consortium of vendors, even one watered down with tame test labs) gets to define the test methodology, or to veto test methodologies it does not like, then there is something rotten in the state of Denmark. Avoid reports that come out of such a process.

You can usually sniff out the best methodologies - look for the ones that are open, thorough, published, clearly vendor-agnostic and which result in tests which are repeated time after time in the same way. Avoid "methodologies" which are aiming for the lowest common denominator, or which are "one-offs", clearly specified by a single vendor to show their product in the best light. How can you spot those? Simple - they are indeed one-offs, and you will never see that test methodology used to test another product. Labs should have a different test methodology for each product category - watch out for the ones which have a different methodology for each vendor!

I speak from experience here, having spent almost 20 years in the testing and certification business before joining Gartner. Now personally, I never used to accept single-vendor sponsored reports. Not because I wasn't confident I could still do the same rigorous, independent test, but because of the perception. If the vendor concerned doesn't like the report, he gets to squash it - that's his right as the commissioning entity. But if he does well in the test, then he will be more than happy to publish the results. Unfortunately, no matter how scrupulous the tester and the testing process, anyone who doesn't like what the report has to say (other vendors, or end users who purchased competing products and don't like that their choice was not validated publicly) will cry - usually loudly and publicly - "well of course they were bound to win - they paid for it!" Very unfair on all concerned, but almost inevitable.

Group tests usually work better, since even when the vendors are paying, it is obvious that a) they are all paying the same, and b) they are all being tested and reported under the same methodology. Unfortunately, the vendors still usually get the option to squash reports, which can have the unwanted side-effect of a group test of 12 vendors producing a finished report containing only 2! In addition, vendors can hide behind budgetary issues as an excuse for non-participation.

This brings us to option number 3. This is a huge gamble for the test lab, which can spend months testing products only to find that sales do not cover costs. But the advantages are clear. The lab can dictate who is tested, and can include vendors who would prefer not to participate because of technical issues. This approach is fine as long as the vendors are given the option to provide technical support and ensure their product is correctly configured.

As with the paid group test, everyone is treated equally and the results are reported warts and all. This time, the vendors don't get the option to pull out of the test if they do badly, of course, and this can result in some nasty repercussions for the lab. Vendors who do badly will go on a massive PR damage limitation offensive which will include some very public denouncements of the process and findings. Sometimes these attacks are not so public, aimed at existing customers via private communications, making it almost impossible for the lab to defend itself against unfounded allegations.

The end result, however, is a report which is much more valuable to the end user and potential purchaser of the products under test. The downside, of course, is that now it has to be paid for! C'est la vie. You can't have your cake and eat it too!

The vendors, too, must learn that they cannot have it both ways. If they do not want to pay for testing up front, then when the lab finds problems with their product, what can they expect for free? Certainly the lab should tell them what it found and why the product did poorly. But surely that is the extent of it? How much more information is the lab obligated to provide? Should it be expected to act as an unpaid QA facility for vendors? Or should it - should we all - expect that these products do what the vendors claim, and that if they don't, they be fixed at the vendor's expense?

The vendor is always at liberty to go away and invest in research and technical staff to reproduce the bugs or problems found. Or it can choose to pay for consultancy to expedite that process. I keep seeing vendors complaining in public forums about how they did poorly in tests and the test lab won't provide them with all of their test material to reproduce the tests.

Well why should they? Shouldn't that be considered their intellectual property? Should they not be recompensed for helping vendors fix these glaring errors? How do you as end users feel about vendors which will not invest in their own QA process but expect external entities to do it for free?

Who amongst us here is willing to work for free? It is not a widely accepted concept - don't apply it to others unless you are prepared to do it yourself.

Wednesday, May 19, 2010

Terminology Bloat - What's Wrong With the Horseless Carriage?

We all know how much the IT industry loves its terminology and especially its TLAs. So do we really need more?

Neil MacDonald's blog entry posits the idea that it might be time to retire the term firewall. He raises a good point: with the addition of user- and content-aware technology to provide more control than the IP address/port approach of "traditional" firewalls, the Next Generation Firewall (NGFW - hey, that's a FLA!) has advanced beyond what was originally envisaged for these simple policy-enforcement devices. But, and this is a big but, they are still policy-enforcement devices, wherever we place them in the network or, indeed, the stack.

What is wrong with retaining the original term, modified with something that describes its new functionality ("Next Generation") or specific purpose ("Web Application")?

In England, when we go to the builder's merchants to buy tiles, we specify roof tiles, wall tiles, bathroom tiles, floor tiles, patio tiles, etc. They are all tiles, we just prefix them with their intended location. In France, however, we have a different word for each type of tile (tuile, carrelage, carreau, faience, etc.) This makes life difficult for the foreigner, and much more confusing.

Whilst I am sure those of us working in the industry would just love to invent a new term for these wonderful new devices, spare a thought for the poor end-user. Enterprises may have got to grips with the terminology, for example, but the SOHO user has only just begun to understand what a firewall is all about. And we still can't decide what exactly constitutes a NGFW, for that matter, so how many new terms will we need to come up with to cover all of the feature options the vendors are scrabbling to include in their new firewall products? Hopefully not as many as there are French words for tile!

Let's not move the goalposts again. Let's stick with the horseless carriage option for just a little longer.

[UPDATE]: Neil responded in his latest blog post and makes a great point. Where functionality and, more importantly, the administration requirements are significantly different from a "traditional" firewall - as in Neil's example of the WAF - then a name change would be appropriate. The biggest problem with keeping the same term for all those different tools is, if you have a hammer in your hand, everything starts to look like a nail. My argument was simply that I would prefer to see the term NGFW adopted before AASG (Application Aware Security Gateway) - I don't think we need to ditch the term firewall just yet.

Wednesday, March 17, 2010

Don't Shoot The Messenger

Testing is hard.

For a while there I thought of making that the end of this blog post, but I guess I should elaborate a little. Testing is hard, whether you are a vendor looking to do QA, an independent test lab doing competitive analysis, or an end-user trying to decide which product to buy.

Good test plans are difficult to draw up, and solid methodologies are difficult to create. End-users often use independent reports to create short-lists before doing their own in-house testing or proof-of-concept projects. This is why vendors get so upset when they don't do well in such reports. This is understandable, but what the vendor does next is often a good indicator of character.

The first thing to do, of course, is to verify that problems highlighted in the report are genuine. Vendors should work with the test lab wherever possible and be prepared to do so with an open mind, not get all defensive about the fact their precious product has a flaw. If the test lab can show you time and time again (live or on video) that they owned a target host protected by your product, then you probably have an issue that needs fixing!

Secondly, dedicate some resources to fixing the problem rather than generating marketing FUD to disguise it or deflect attention away from it. Yes, this costs money, whether you do it all in house or engage the test lab to help. Don't expect someone else to fix your product for free!

Third, bask in the glory that comes with fixing a problem quickly and professionally, thus leaving your customers exposed for the minimum possible time.

What you SHOULDN'T do is shoot the messenger!

I have seen three examples recently of vendors going on the attack straight away when they don't like what is in an independent report - one in the IPS area, one in Web Application Scanning, and one in AV.

In each case the vendor in question launched public attacks on the various test labs, one of which led Mike Rothman of Securosis to predict the death of product reviews. I think Mike is wrong in this dire prediction, and end-users had better hope that I'm right, because such reviews - when done well - are all that stands between the purchaser and all that vendor hype. That and a Magic Quadrant!

Of course, the vendor is entitled to put forward his point of view. It is not difficult to spot weak methodologies, which can do more harm than good, and the only recourse a vendor has is to refute the results publicly.

But when you have been caught out, when your product has been shown to have a repeatable flaw, posting falsehoods and ad hominem attacks in an attempt to discredit the report, the methodology, and the engineers who carried out the tests is simply not professional.

The problem is, if the test lab in question DIDN'T foul up the test, you are going to look pretty stupid when they are forced to reveal more and more of the problem in order to dispel your FUD attack. And your customers are going to be upset too, as you dedicate marketing resources to hide an issue better addressed by engineering resources.

If you are a customer of a vendor who engages in these tactics, I would encourage you to make every effort to talk to whoever produced the report which upset them. Try to understand the problem, and make sure that it doesn't affect you. If it DOES affect you, see if they can help you reproduce the tests in your own environment (if it is not too dangerous to do so). At that point you can go back to your vendor with some concrete data, and you will also be in a position to verify any fixes they release for the problem in the future.

I have a series of research notes in the pipeline right now on testing: what you should know, and how to do it properly. It strikes me they are sorely needed!

Monday, March 08, 2010

Identity Theft - A True Story To Chill The Heart

It's typical that on the evening before you are about to leave on business for four days you realise your propane tank is empty (there is no mains gas in our village). And you will not be back home until Friday evening, by which time it is too late for the gas company to make a delivery before the weekend. And, oh look, the weather forecast has turned to snow by Monday. And so you face a bleak, cold weekend with neither heating nor hot water before they can replenish your gas supply on Monday. Oh joy.

What has this got to do with IAM, you might ask. Nothing at all. But it does give you some idea of my state of mind as I headed north to London to attend the fourth Gartner Identity and Access Management (IAM) Summit - not the happiest, as you can imagine.

But solace was to be found in the warmth of the welcome I received from my colleagues, most of whom I was meeting for the first time in London. And drink. But mainly the welcome...

IAM is a key area for Gartner's clients, of course, and so the agenda was packed with the best and brightest of those Gartner analysts who specialize in Letting The Good Guys In (LTGGI). As a tin-head myself, and part of a separate group in the Gartner Security, Privacy & Risk team tasked with covering technologies for Keeping The Bad Guys Out (KTBGO), I was not actually involved in any of the presentations. Instead, I got to observe my new colleagues in action, brainstorm ideas for research, try to sell them on the fact that if we kept everyone out, Good and Bad, it would make life (and security policy creation) a lot easier, and talk to some of our clients face to face for the first time.

Pretty soon I had forgotten all about propane problems as I immersed myself in IAM-enabled cloud architectures, security monitoring, role & entitlements management, fraud prevention and federated identity management. There were workshops too, many of which were fully booked almost as soon as the summit started, and a constant stream of analysts and clients to and from the one-on-one meeting rooms. Attendee numbers were good, an excellent sign in tough economic times, and everyone I spoke to seemed to be getting a lot out of the event. If you missed it - shame on you. Book early for next year!

Things ended on a light note with a true story of identity theft from writer and comedian Bennett Arron. It all started with a mail-shot from a home shopping catalogue company to an old address, which allowed the unscrupulous person now residing at that address to place an order and open an account with the home shopping company. That credit account allowed him to acquire a mobile phone or two. From there it was not too difficult to open bank accounts and obtain credit cards - all in Bennett Arron's name.

The end result was that Arron, who had already given notice on rented accommodation in order to buy a house, failed to qualify for a mortgage, couldn't rent another property, couldn't get a line of credit, burned through his savings and ended up penniless, living with his parents along with his pregnant wife. It took him two years to clear his name, by which time property prices had tripled and he could no longer afford to buy a house anyway! No compensation was forthcoming from any of the companies who allowed a criminal to open accounts in someone else's name, though Arron did get a one-man comedy show out of the material and Channel 4 made a documentary on him. So that's OK then.

One remarkable thing that he demonstrated in the documentary was how trusting people can be when faced with official-looking situations. He donned a suit and tie and set up a stand in a local shopping mall offering people advice on the perils of identity theft. He also offered a free service to protect their most sensitive information provided they would... yes, you guessed it.... give him their most sensitive information.

In just two hours he spoke to twenty people, eighteen of whom happily handed over their name, address, date of birth, credit card numbers, expiry dates and even the 3-digit CVV/CVC numbers from the signature strips. Only two people refused. Only one person thought better of it and returned to the stand.

"Hey, this isn't a scam is it?", he asked.

"Errrr.... no"

"Oh, that's OK then. Thought I'd better check though...."

True story!

And just goes to show that no matter how many firewalls and IPS you have installed, social engineering will get you every time.

As part of the documentary Arron attempted to prove how easy it would be to steal someone else's identity. He settled on then Home Secretary Kenneth Clarke, since it would need to be a high profile "theft" to get everyone's attention.

Arron applied for a duplicate birth certificate in Clarke's name, and within 3 days it arrived. Using that, he applied for a duplicate driving license from the UK Driver and Vehicle Licensing Agency (DVLA), which took just a couple of weeks to arrive. As part of this process, the DVLA requested photographs for the license, which had to be authenticated on the reverse with a statement from a trusted, non-family member that this was a true likeness of Kenneth Clarke. This Arron completed himself using a false name. Something of a root-of-trust issue here, I think....

Naturally, with a birth certificate and driving license Arron could have gone on to open various accounts, building up to bank accounts and credit cards. Scary stuff. One good thing came from this - it is now no longer acceptable to use a birth certificate as the sole means of ID when applying for a UK driving license. Wonder if they have plugged that photo certification loophole too?

As the summit comes to an end and I set off back to my home in France, I reflect on how identity theft would be so much more difficult to accomplish here by virtue of a few simple controls that are universal.

In France, if you want to open any sort of account - from a bank account, through a mobile phone contract, down to the humble supermarket loyalty card - you need to provide one piece of photo ID (ironically, a UK driving license is permitted!) and at least one proof of current address. This needs to be something serious, like a bank statement or a major utility bill (electricity or fixed-line phone, but NOT a mobile phone bill).

We often consider these controls to be the bane of our lives here, since they add a layer of complexity to the simplest of tasks, but in the light of Bennett Arron's story they make perfect sense.

Sometimes customer satisfaction is not everything - sometimes you have to put security requirements ahead of signing up that new prospect.

Tuesday, February 16, 2010

Why we shouldn't write off chip-and-PIN just yet

There has been some pretty wild speculation in the last few days that the death of chip-and-PIN is inevitable, based on a man-in-the-middle attack developed by computer researchers at Cambridge University.

According to the Cambridge researchers, there are over 730 million payment smart cards in circulation worldwide using the EMV (Europay, MasterCard & Visa) protocol (2008 figures). Known to bank customers as “Chip and PIN”, it is widely used in Europe, is being introduced in Canada, and there is pressure from banks to introduce it in the USA.

Since its introduction in the UK, the fraud landscape has changed significantly: lost and stolen card fraud is down, and counterfeit card fraud has experienced a two-year lull. Inconvenient, then, that the researchers now claim the protocol is broken.

Apparently, the Cambridge researchers succeeded in building a man-in-the-middle device that reads a valid card and, at the appropriate point in the card verification process, sends the correct "PIN verified" code to the terminal, whether or not a valid PIN code was entered. Of course, the man-in-the-middle device needs a way to communicate with the card reader, and this is achieved by inserting a fake card into the reader which is connected to the MITM device by a bunch of wires.
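The relay logic can be sketched conceptually. This is not the researchers' actual hardware, just a minimal Python illustration of the idea, assuming the standard EMV/ISO 7816 byte values (the VERIFY command used for the PIN check has instruction byte 0x20, and 0x90 0x00 is the "success" status word):

```python
VERIFY_INS = 0x20                  # EMV VERIFY instruction byte (PIN check)
SW_SUCCESS = bytes([0x90, 0x00])   # ISO 7816 "success" / "PIN verified" status word

def mitm_relay(apdu: bytes, forward_to_card) -> bytes:
    """Relay terminal commands to the genuine (stolen) card, except the
    PIN verification, which is answered locally with 'success'."""
    ins = apdu[1]  # APDU layout: CLA, INS, P1, P2, ... - instruction is byte 1
    if ins == VERIFY_INS:
        # Never forward the PIN to the card; just claim it was correct.
        return SW_SUCCESS
    # Everything else passes through, so the rest of the transaction looks normal.
    return forward_to_card(apdu)
```

The card never sees a VERIFY command at all, so any PIN typed at the terminal appears to succeed - which is also why the issuer's records would later show that a "valid PIN" was entered.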

So, it is an interesting theoretical attack, to be sure. However, you would need a valid stolen card to start with (OK, not impossible) plus a backpack full of electronic gear and a fake card dangling from some wires. Obvious enough to tip off the merchant that something is afoot, do you think?

Here in France, where they have been using chip-and-PIN technology successfully for over a decade, many retailers have portable card reading terminals. They will take your card from you to insert into the reader and pass it back for you to enter your PIN. Not much opportunity to use your umbilically-challenged fake card there, then!

Even using fixed readers at the point of sale (POS), the fraudster would have to keep their hand in such an unnatural position throughout the transaction (whilst entering the PIN with their free hand) that it is beyond belief that alarms would not be raised (they would look like some weird, deformed, miniature concert pianist in action!)

In other words, even though a flaw in the EMV protocol has been discovered, to claim chip and PIN is broken is a bit harsh on the back of this. Indeed, simple physical protocol changes would be enough to foil any attempt to use this technology.

People are claiming that they have experienced fraudulent debits from their accounts via chip-and-PIN cards. The banks are denying liability because subsequent investigations show that a valid PIN was entered. Leaving aside that it is highly unlikely that any of these fraudulent withdrawals have been made as a result of the Cambridge technique (nor is it likely any will be in the near future), these sorts of claims always make the papers.

The less-newsworthy reality, however, is that the majority of these withdrawals will be the result of careless disclosure of the PIN (either by allowing someone an over-the-shoulder view when entering it at the ATM, or by keeping a PIN "cheat sheet" in the wallet or purse), followed by theft or loss of the debit card. Human nature dictates that very few people will own up to these personal failures; they will instead blame the banks in an attempt to recover their money.

Thursday, January 28, 2010

Apple iPad: Security Considerations

Now that all the brouhaha surrounding the new Apple iPad has passed, let's take a more considered look at this device: world changer, or solution looking for a problem?

Many people have erroneously stated that the iPad is a product with no market because the netbook already covers that gap between smartphone and laptop perfectly adequately, and thus - as a device with a "proper keyboard" - is superior to the iPad.

They are missing the point.

In his presentation, Jobs stated that in order for this new device to have a reason for being, it would have to outperform either (or both) the smartphone or the laptop in seven key areas:



Web browsing
E-mail
Photos (sharing/viewing)
Video
Music
Games
eBooks





The netbook can do all of these things, but does none of them better than a laptop. A netbook is, after all, just a small laptop, and exists purely because it offers a lower price point.

The iPad, however, scores at least 4 or 5 out of 7 (I am not convinced it can do either music or iPhone-type games better than the iPhone, nor e-mail better than a laptop), which is enough to give it a pretty significant potential market.

Yes, it has some perceived "problems": strange screen aspect ratio; no GPS; no camera (think video conferencing, not photo taking); and, above all, no multi-tasking (that is a REAL shame). But despite all of that, it is still the proverbial game changer.

Once you factor in the ability to use the new iWork apps to do some serious word processing, spreadsheet or presentation work, you have a serious contender for Travelling Companion of the Year for most corporate road warriors.

Let's face it, unless you are doing some serious keyboard/mouse work or need some significant screen real-estate, there is not much reason to choose a 6lb laptop over a 1.5lb iPad. And even the keyboard issue could be resolved with the addition of the keyboard dock accessory.

And herein lies the potential problem for the corporate security guys. In creating the perfect road warrior machine for the mobile workforce, Apple has created a repository for gigabytes of sensitive corporate data without any apparent way to a) secure it or b) remote-wipe it should the machine be lost or (more likely given its initial highly desirable status!) stolen.

It took some time for Apple to offer a nod to the security world with the iPhone and include the sort of features that meant at least the CSO wasn't tearing his hair out every time an employee turned up to work with one. These included both encryption and remote wipe capabilities, but no mention was made of these during the iPad launch.

The reason for that is probably that, even if they work at all, they wouldn't be very effective in a device like this. The remote wipe capability, for example, relies on the iPhone being connected to a cellular network. Unfortunately, the vast majority of iPads will probably be sold with no 3G capability, thus eliminating this feature (not that it matters much - simply removing the SIM card would have the same effect, of course!)

Does the iPad offer device-wide encryption for all user documents? There was no mention of this, and the iPhone's encryption mechanism proved fairly straightforward to bypass by anyone with a modicum of hacking knowledge anyway.

Whereas the iPhone was never likely to be used to store gigabytes of corporate data, however, the iPad is designed for just that. And the use of basic office productivity applications means that some means of quickly and easily getting the documents on and off the device is required. A quick look through the new SDK reveals that it will be achieved by making those documents available via a mountable share - a far cry from the current situation where applications and their data are sandboxed.

How well that mechanism will be protected (if at all) remains to be seen. But one thing is for sure - in the next 60 days the CSO/CISO is going to have to put some thought into how this latest creation from the boys in Cupertino is going to fit into his corporate security policy.