Schneier on Security: Heartbleed

Heartbleed is a catastrophic bug in OpenSSL:

"The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop communications, steal data directly from the services and users and to impersonate services and users."

Basically, an attacker can grab 64K of memory from a server. The attack leaves no trace, and can be done multiple times to grab a different random 64K of memory. This means that anything in memory -- SSL private keys, user keys, anything -- is vulnerable. And you have to assume that it is all compromised. All of it.
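The flaw behind this is a missing bounds check: the heartbeat handler trusted the length field in the attacker's request when copying the echo payload into the response. A minimal sketch in C of the pattern and its fix (this is not OpenSSL's actual code; the struct and function names are invented for illustration):

```c
#include <stddef.h>
#include <string.h>

/* Illustrative sketch only; names are invented, not OpenSSL's. */
struct heartbeat {
    unsigned short claimed_len;  /* length field, attacker-controlled */
    unsigned char payload[16];   /* the bytes actually sent */
};

/* Vulnerable pattern: echo back claimed_len bytes without checking
 * that the peer actually sent that many -- the memcpy reads past the
 * payload into adjacent process memory. */
size_t respond_vulnerable(const struct heartbeat *hb, unsigned char *out)
{
    memcpy(out, hb->payload, hb->claimed_len);  /* no bounds check */
    return hb->claimed_len;
}

/* Fixed pattern, mirroring the patch: discard any request whose
 * claimed length exceeds the size of the record that arrived. */
size_t respond_fixed(const struct heartbeat *hb, size_t record_len,
                     unsigned char *out)
{
    if (hb->claimed_len > record_len)
        return 0;  /* silently drop the malformed request */
    memcpy(out, hb->payload, hb->claimed_len);
    return hb->claimed_len;
}
```

An attacker exploiting the vulnerable pattern sends a tiny payload with the length field set near the 64K maximum; the response then leaks whatever happens to sit in memory after the request buffer.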

"Catastrophic" is the right word. On the scale of 1 to 10, this is an 11.

Half a million sites are vulnerable, including my own. Test your vulnerability here.

The bug has been patched. After you patch your systems, you have to get a new public/private key pair, update your SSL certificate, and then change every password that could potentially be affected.
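Generating the replacement key pair and a certificate signing request can be done with the standard openssl command-line tools; a sketch, where `example.com` and the file names are placeholders:

```shell
# Generate a fresh 2048-bit RSA private key. The old key must be
# assumed stolen, so never reuse it.
openssl genrsa -out server.key.new 2048

# Create a certificate signing request for the new key; submit this
# to your CA to get a reissued certificate.
openssl req -new -key server.key.new -out server.csr \
    -subj "/CN=example.com"

# Once the new certificate is installed, ask your CA to revoke
# the old one.
```

Only after the new certificate is live does it make sense to have users change passwords; resetting them while the old key is still in use just exposes the new passwords too.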

At this point, the probability is close to one that every target has had its private keys extracted by multiple intelligence agencies. The real question is whether or not someone deliberately inserted this bug into OpenSSL, and has had two years of unfettered access to everything. My guess is accident, but I have no proof.

This article is worth reading. Hacker News thread is filled with commentary. XKCD cartoon.

EDITED TO ADD (4/9): Has anyone looked at all the low-margin non-upgradable embedded systems that use OpenSSL? An upgrade path that involves the trash, a visit to Best Buy, and a credit card isn't going to be fun for anyone.

Tags: Internet, keys, man-in-the-middle attacks, passwords, patching, SSL, vulnerabilities, web, zero-day

Posted on April 9, 2014 at 5:03 AM • 126 Comments

Regarding writing more secure software: I am sure others here have said similar things to what I am about to say (some in the comments above), have probably put it together better than I can, and have been thinking about this longer and hopefully more thoroughly. Still, here it is; after all, I am a man with perhaps fewer inner barriers than most, and not only because I am a physicist:

Open source is, in my view, a very important tool for writing secure software, now and in the future. But ideally it would come in the form of small, simple components that can be verified thoroughly, even to a large degree with formal methods, proving as much as possible formally. Technically, such components could still be assembled into larger libraries, provided the separation between components is good and their interplay can be verified in the same way, building things up layer upon layer of trust.

OpenSSL fails here for two reasons that have already been mentioned in the comments above: it is too complex in itself, and it is written in C. A different language is needed, one that can be verified and that prevents things like arbitrary memory access much more reliably.

Once such a world of layered, simple, reliable components existed, it might also come within reach, at least for larger companies (or specialized smaller companies implementing such things for anyone willing to pay), to put additional checks and measures on top of standard protocols. Maybe you would then again download specific clients for different companies (Amazon, Facebook, ...), as on smartphones today, instead of using a standard browser; or maybe the world would solve things in a different way by then (plugins or similar). The future is hard to predict. If I look at the trouble I had accessing the Apple Store from just one laptop, such things are partially here already.

Heinrich Rohrer, who shared the Nobel Prize in Physics for inventing the scanning tunneling microscope, once said in a talk that people often overestimate what can change in 4 years but underestimate what can change in 10 years. His basic idea was that many things in technology evolve exponentially, while people tend to think linearly.

At the time I considered this a bit naive, but now, a bit older, I would apply it here: one would expect the internet and computers four years after Snowden, i.e. in the summer of 2017, not to be much more secure than now, but ten years after, in the summer of 2023, things might already be significantly better. I hope I will be able to make a tiny contribution to that, here and there.

(And obviously, the NSA will still have an advantage then, at least that would be my current guess... ;)

Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.