Saturday, May 17, 2014

Protecting private keys

Web servers use private keys, known only to them, to secure connections with users. These private keys must be protected at all costs.

In order to protect a private key on disk, one generally encrypts it with a password, which the web server then needs at launch in order to decrypt the key and use it in memory. However, if measures aren't taken to secure the memory containing the private key, it can be stolen from there too, which would be catastrophic.
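
As a minimal sketch of this startup step (assuming OpenSSL; the function name and path are hypothetical), loading an encrypted key might look like the following. Passing a NULL password callback makes OpenSSL prompt for the passphrase on the terminal:

    /* Minimal sketch: load a passphrase-protected key at launch.
     * A NULL callback makes OpenSSL prompt on the terminal. */
    #include <stdio.h>
    #include <openssl/evp.h>
    #include <openssl/pem.h>

    EVP_PKEY *load_private_key(const char *path)
    {
        FILE *fp = fopen(path, "r");
        if (!fp)
            return NULL;
        EVP_PKEY *key = PEM_read_PrivateKey(fp, NULL, NULL, NULL);
        fclose(fp);
        return key; /* NULL if the passphrase or the file is bad */
    }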

Normally, one doesn't need to worry about outsiders getting hold of data from memory unless the attackers have direct access to the server itself. But bugs like Heartbleed allow remote users to grab random data from memory. Once secret data is in memory, the application and every library it uses could divulge it if a buffer overflow is lurking somewhere.

To protect against the exploitation of such bugs, one should ensure that buffer overflows cannot reach the memory containing private data. The memory holding private keys and similar kinds of data should be protected, meaning nothing should be allowed to read from it, not even the web server itself.

Now obviously a program needs some access to a private key in order to work with it, so it can't simply prevent all access to it. Rather, once a private key or similar data is loaded into memory, that memory should have its read permissions removed. When, and only when, some activity needs to be performed with the private key, read permissions can be restored, the activity performed, and read permissions revoked again. This ensures the rest of the application cannot access what it does not need, nor should be allowed to access.

On UNIX systems, one can use mprotect() to change the permissions on a page of memory. On Windows, one can use VirtualProtect().
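
A minimal sketch of this pattern on a UNIX system might look like the following (error handling trimmed, and assuming Linux-style mmap() flags; some BSDs spell MAP_ANONYMOUS as MAP_ANON). The key gets its own page(s) so that changing permissions affects nothing else:

    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static unsigned char *key_mem;
    static size_t key_mem_len;

    void store_key(const unsigned char *key, size_t len)
    {
        long page = sysconf(_SC_PAGESIZE);
        key_mem_len = ((len + page - 1) / page) * page;

        /* Dedicated page(s) so mprotect() touches only the key. */
        key_mem = mmap(NULL, key_mem_len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        memcpy(key_mem, key, len);

        /* Revoke all access until the key is actually needed. */
        mprotect(key_mem, key_mem_len, PROT_NONE);
    }

    void with_key(void (*activity)(const unsigned char *, size_t))
    {
        mprotect(key_mem, key_mem_len, PROT_READ);  /* restore access */
        activity(key_mem, key_mem_len);             /* perform the activity */
        mprotect(key_mem, key_mem_len, PROT_NONE);  /* revoke it again */
    }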

The above, however, has a crucial flaw: multi-threading. In a threaded application, all threads have access to the data of all other threads. So while one thread performing some critical private key related code temporarily allows read access, another thread can read the key too, from entirely outside that critical portion of code. Therefore, even more isolation is needed.

To truly isolate the code that uses a private key and similar data, all the code that handles such data should be placed into its own process. The rest of the application can then request that well defined activities be performed via a pipe or another form of inter-process communication. This also ensures that other kinds of bugs in the application, such as buffer overflows that allow arbitrary code execution, cannot reestablish read access to the secret data.

On UNIX systems, one can use fork() to create a process which is still part of the same application. On all systems, the separate process can be a separate application with a well defined restrictive IPC API with limited and secured access by the web server.
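
A minimal sketch of the fork() variant might look like the following, using a socketpair as the IPC channel; the one-read-per-request framing and the sign_request() operation are hypothetical placeholders for a real protocol:

    /* Minimal sketch: the child holds the key and answers requests;
     * the parent only ever sees requests and responses. */
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int spawn_key_process(void)
    {
        int sv[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
            return -1;

        pid_t pid = fork();
        if (pid < 0)
            return -1;

        if (pid == 0) {               /* child: the key holder */
            close(sv[0]);
            /* load the private key here; it never leaves this process */
            unsigned char req[512], resp[512];
            ssize_t n;
            while ((n = read(sv[1], req, sizeof req)) > 0) {
                /* size_t rlen = sign_request(req, n, resp); */
                size_t rlen = (size_t)n;  /* hypothetical: echo back */
                memcpy(resp, req, rlen);
                write(sv[1], resp, rlen);
            }
            _exit(0);
        }

        close(sv[1]);
        return sv[0];  /* parent sends requests over this descriptor */
    }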

No insecure library or silly bug in your web server should ever allow the application to divulge such secrets. If servers are not utilizing the techniques above, then they're just biding their time until the next Heartbleed.

7 comments:

henke37 said...

The most common implementation of the separate process idea is going to be simply outsourcing the SSL stuff entirely to a separate process.

It would ironically not protect against a vulnerability like the Heartbleed bug, but hopefully there will be no more such issues. It is the application code that is the normal danger, and as such it is the code that needs the most distrust.

insane coder said...

Outsourcing all of SSL would be the wrong approach.

One needs to solely outsource the code which encrypts/decrypts with a private key and similar.

dreamer said...

This post makes perfect theoretical sense. However I am wondering what's the solution for servers/services that:

- Don't allow encrypted private keys (for instance, certain XMPP servers, Postfix, etc.)

- Require operators to take manual action to restart services when/if they crash, even if the software supports encrypted private keys. This can be quite the operational overhead and may well pose a security concern, in terms of availability.

insane coder said...

Hi dreamer,

You're right that these issues prevent the keys from being encrypted on disk. Even so, if you are using unencrypted keys on disk, you don't necessarily want them to be freely accessible in memory, where something like Heartbleed, which doesn't even need file system access, can grab them.

I don't know of any bulletproof solution for the encrypted key issue. What I see done is either to have multiple sysadmins who can unlock the keys as needed, combined with multiple servers, so only some are offline at a time and can be fixed more leisurely; or to use a key server that provides keys as needed to the other servers, moving the problem to a different level.

insane coder said...

This technique is starting to be used:
TITUS (which is like stunnel): https://www.opsmate.com/titus/
OpenBSD's relayd: http://marc.info/?l=openbsd-cvs&m=139782935008235&w=2
OpenSMTPD: http://marc.info/?l=openbsd-cvs&m=139879883203226&w=2

justme said...

Hi. Just stumbled on this blog post. I suppose one thing is protecting keys in memory. How about on disk? In a web farm, these servers need to come up/restart without human intervention. I suppose a key service might work to avoid packaging keys, but otherwise?

insane coder said...

Hello justme, see my comment above to dreamer, which discusses this.

You may also want to invest in hardware specifically designed for storing keys, which can self-destruct in case your physical office is raided.