Suppose you have a very difficult threat model for your embedded system. Not only is your adversary technically skilled, but they are going to have physical access to your device.
Maybe you are exporting a SCADA controller containing critical or trade secret data to a country that might want to pirate it, maybe you have a military device that might crash or otherwise fall into enemy hands, or maybe you are deploying your bitcoin mining rig to an untrusted data center.
In this kind of situation, what roadblocks can you place in the attacker’s way that might actually slow them down?
Your first instinct might be that you need confidentiality. Your data needs to be protected “at rest” (i.e. when the system is shut down). This is absolutely the right instinct and completely necessary. If the attacker can read your hard drive or flash and discover your secrets, you have already lost.
Unfortunately, confidentiality alone is not sufficient. Thinking back to the CIA model of security, you will remember that your system data’s integrity can be at least as valuable as its confidentiality. In fact, the NIST NCCoE considers it an essential building block of security.
Encryption is obviously needed to hide the data, but an attacker can manipulate encrypted data without knowing the key if they have access to the underlying ciphertext. The attacker may not understand how the change will affect the decrypted data, but just corrupting some critical data on your system by randomly changing an encrypted block may be enough.
For example, consider that all the filesystem permissions (read, write, execute), SELinux security contexts, and even SELinux security policy files are stored as bits on the disk. Corrupting these or similar files could give an attacker access to a system. Additionally, any watchdog programs can have their code corrupted by modifying the underlying encrypted disk page that stores them. Typically, this will cause the application to crash immediately on startup. Disabling these kinds of security services by corrupting their config files or code is highly valuable to an attacker.
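This malleability can be sketched with a toy stream cipher (a hypothetical illustration, not any real disk-encryption scheme; stream-style modes such as AES-CTR are malleable in exactly this bit-for-bit way, and CBC allows controlled bit-flips in the following block at the cost of garbling one block). The attacker never learns the key, only the on-disk layout:

```python
import os

def xor_stream(keystream: bytes, data: bytes) -> bytes:
    # Toy stream "cipher": XOR data with a keystream. AES-CTR behaves
    # identically with respect to malleability.
    return bytes(k ^ d for k, d in zip(keystream, data))

keystream = os.urandom(32)
plaintext = b"permissions=rw-r--r-- admin=no  "
ciphertext = xor_stream(keystream, plaintext)

# Attacker XORs the ciphertext with (old XOR new) at a known offset,
# rewriting the plaintext at that offset without touching the key.
target, desired = b"admin=no ", b"admin=yes"
off = plaintext.index(target)  # attacker knows the file layout, not the key
delta = bytes(a ^ b for a, b in zip(target, desired))
tampered = bytearray(ciphertext)
for i, d in enumerate(delta):
    tampered[off + i] ^= d

# The legitimate owner decrypts and sees the attacker's chosen value.
assert xor_stream(keystream, bytes(tampered)) == b"permissions=rw-r--r-- admin=yes "
```

Without an integrity check, the decrypting system has no way to tell that this block was ever touched.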
In most systems these types of modifications will go unnoticed. A periodic filesystem check may note a bad inode or similar error, but the security system will be completely unaware that anything malicious has taken place. In addition to protecting data at rest, it would be very helpful to protect files at runtime. As we will see below, many solutions involve decrypting a single master key, usually at boot, and often at a location that is easy to discover. From then on, all cryptographic operations are handled transparently using the master key. This is convenient and easy to implement, but it means that any attack that gets code running on the system can ask it to perform malicious reads of the decrypted data, and in almost all cases these systems cannot distinguish benign reads from malicious ones.
Finally, we would like to mitigate the risk from hardware-based attacks as fully as possible from software. If the system’s RAM is dumped or captured via a cold-boot attack, we would like to avoid having a single master encryption key at a reliable location in memory. If the attacker is performing any type of side-channel attack, we would like to use cryptographic keys as infrequently as possible, giving them as few data points as possible for their statistical analysis.
Finding the right level
Given these fairly extreme security goals, at what level of the system stack should we protect our data?
Here we see a rough layout of the abstraction layers for file data. Userland sees files and directories at their various mount points. Below that, the kernel sees the filesystem implementation and volumes containing filesystem data (composed of one or more partitions). Below this level is the hardware that manages the reads and writes.
Encryption can occur at any of these levels. So let’s evaluate the pros and cons.
Hardware level

These are typically specialized self-encrypting drives or drive controllers. In general, the self-encrypting drives available to consumers aren’t very good, and those that are good can be prohibitively expensive. As foreshadowed above, these drives typically authenticate once at boot and thereafter offer no protection against rogue processes reading sensitive data.
The drives do have some advantages. First, they operate at the native speed of your storage bus without adding any CPU or memory overhead. Second, it takes a more sophisticated hardware attacker to bypass the disk controller and perform the ciphertext-manipulation attack described above to corrupt configs or other files.
In short, hardware solutions tend to provide a costly solution to data-at-rest protection without providing anything in the way of runtime protection.
Volume level

This is the level chosen by most modern server and desktop operating systems. Linux uses dm-crypt / LUKS, Windows calls it BitLocker, and macOS has FileVault. Again, this is a solution where you authenticate once, typically by providing a passphrase at boot, and thereafter trust all reads and writes to the encrypted storage. These systems also tend to be vulnerable to the cold-boot attacks described above because the volume key must always be resident in memory. Additionally, because the volume key is used repeatedly for every read and write, it can be quite vulnerable to side-channel attacks.
They have the advantage of being very inexpensive (free!) and easy to implement.
Protecting the system at this level isn’t a bad tradeoff for the kind of desktop or server systems most people use, but it doesn’t really meet the security needs of our device in a hostile environment.
Filesystem level

Protecting data at the filesystem level is fairly unusual in the desktop world outside of crypto-nerds. It is more common in the mobile world, which faces a more hostile environment for crypto. For instance, iOS uses it, and Android also adopted file-based encryption, in part so that accessibility features can be decrypted before the user enters their PIN (see [1] & [2]). It is the model adopted by eCryptfs and CryFS, as well as Star Lab’s own FortiFS.
Because encryption takes place at a level of the system that is aware of the concept of “files” and not simply blocks & sectors, encryption at this level can use different keys for different files. In fact, it can use a unique key for each and every file. Each individual key lives in memory for a lot less time, reducing the risk from cold-boot attacks. Each individual key is used a whole lot less, helping to mitigate side channel attacks.
We like this level for addressing both data-at-rest and runtime protection challenges in hostile environments because it gives the kernel insight into the cryptographic system and therefore allows for many nice security properties.
Implementing protection at this level also frees us from worrying about the physical device storing encrypted data. The Linux Kernel’s stackable filesystem support allows us to store the ciphertext on any existing filesystem. This means that the encrypted files can live on the appropriate native filesystem for their use case. Network-based filesystems continue to be tuned to the constraints of the network; flash based filesystems are optimized for flash; etc. This allows the encrypted filesystem to support a wide variety of use cases and hardware.
Application level

This option isn’t really viable for general use, although it may be appropriate in certain cases. It requires each application to be written in a crypto-aware way. It creates a major key-management problem, because simply embedding keys in a binary is not sufficient protection. Also, in our hostile environment, userland applications must beware of attacks “from below” in the trusted computing base. Privileged users will often be able to access an application’s memory (through debugging interfaces or /proc/pid/mem), and applications will have difficulty resisting cold-boot-style attacks without hardware assistance (such as a TPM or HSM).
How Star Lab Uses and Improves on Filesystem Level Encryption
Most modern encrypted filesystems do a pretty good job of providing data-at-rest protection, but because our threat model also includes protecting filesystem data in a hostile environment at runtime, we made some enhancements.
First, to further reduce an attacker’s ability to find key material in memory, we not only use a unique key for each file, but go one step further and use a unique key for every 4096-byte block within each and every file. This means that key material is more difficult to find in memory and that any key found is less valuable because it is only valid for one 4096-byte block of one file.
Here you may be saying to yourself, “Wait, you are generating a 256-bit AES key for every 4096-byte block in the filesystem!? How are all these keys managed and stored?” To which I will reply that the trick is not to store them. They are derived from a root filesystem key through an extended chain of key derivation functions. These functions, like SHA and other hash functions, cannot be run backwards to gain knowledge of other keys earlier in the chain. So even if an attacker does capture a key using a cold-boot or similar attack, they won’t have the keys to the kingdom. They only know enough to open 4096 bytes of data.
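The general shape of such a derivation can be sketched with an HMAC-based key derivation function. This is a hypothetical stand-in, not FortiFS’s actual construction, and the file identifiers and parameters are illustrative. The key property it demonstrates is one-wayness: a captured block key reveals nothing about the root key or any sibling key.

```python
import hmac
import hashlib

def derive_block_key(root_key: bytes, file_id: bytes, block_index: int) -> bytes:
    # Hypothetical per-block derivation: each 4096-byte block gets its
    # own 256-bit key, computed on demand rather than stored. Because
    # HMAC-SHA256 cannot be run backwards, capturing one block key does
    # not expose the root key or any other block's key.
    info = file_id + block_index.to_bytes(8, "big")
    return hmac.new(root_key, info, hashlib.sha256).digest()

# Illustrative root key material (in practice this would be protected).
root = hashlib.sha256(b"example root key material").digest()

k0 = derive_block_key(root, b"/etc/shadow", 0)
k1 = derive_block_key(root, b"/etc/shadow", 1)

assert len(k0) == 32  # a 256-bit, AES-sized key
assert k0 != k1       # every block of every file gets a distinct key
```

Derivation is cheap relative to the block cipher work itself, which is why not storing the keys is practical.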
To further protect from cold boot and data remnant attacks, all filesystem related keys in FortiFS are zeroed in memory when not in use.
Breaking keys up in this way also helps protect against side-channel attacks. Because keys only correspond to short blocks of data, each key is used much less frequently. This makes it far more difficult for an attacker to collect the statistical data needed to reduce their uncertainty about the bits of the key.
As mentioned above, the block-chaining encryption modes used by most disk- and volume-level systems are not an integrity or authentication mechanism. An attacker may be perfectly willing to accept a couple of blocks of data corruption from a CBC-mode cipher. With FortiFS, we needed cryptographic guarantees that blocks were not corrupted or modified.
To achieve this, we derive an additional 256-bit authentication key per file. Each 4096-byte block is associated with a known-good keyed hash (HMAC) from when it was last written correctly. Whenever a request to read or write the block comes through, the authentication key is used to recompute the keyed hash and compare it against the known-good value. If they don’t match, a filesystem error is raised and, importantly, the confidentiality key above is never used. This means that a bad guy attempting a fault-injection attack may learn something about the authentication key, but they will never observe the actual keys protecting the data.
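The verify-before-decrypt flow can be sketched as follows (a minimal illustration of the pattern, not FortiFS’s implementation; the key material and function names are made up for the example):

```python
import hmac
import hashlib

# Illustrative per-file authentication key.
AUTH_KEY = hashlib.sha256(b"example per-file authentication key").digest()

def seal(ciphertext: bytes) -> bytes:
    # The known-good keyed hash recorded when the block was last
    # written correctly.
    return hmac.new(AUTH_KEY, ciphertext, hashlib.sha256).digest()

def check_block(ciphertext: bytes, stored_mac: bytes) -> None:
    # Recompute the HMAC and compare in constant time. On a mismatch we
    # raise *before* the confidentiality key is ever touched, so a
    # fault-injection attempt never sees the data keys in use.
    mac = hmac.new(AUTH_KEY, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, stored_mac):
        raise IOError("block failed integrity check")

block = b"\x17" * 4096          # stand-in for one encrypted 4096-byte block
good_mac = seal(block)
check_block(block, good_mac)    # clean block: passes silently

tampered = b"\x18" + block[1:]  # attacker flips bits in the ciphertext
try:
    check_block(tampered, good_mac)
    raised = False
except IOError:
    raised = True
assert raised                   # decryption is never even attempted
```

The constant-time comparison (`hmac.compare_digest`) matters here: a naive byte-by-byte comparison would leak timing information about the stored hash.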
Star Lab also goes to great lengths to protect the integrity of data all the way up from the earliest boot stages using TrueBoot, but that is beyond the scope of this post.
Interaction with other kernel security tools
As an added benefit, because the kernel is aware of the encryption at the filesystem level, a Linux Security Module, like Star Lab’s Titanium, can be used to enforce access control to files cryptographically! Even a root user who is not on the mandatory access control list for a protected file will not be able to cause the file to be decrypted, even just in memory.
We have also noticed that not all of our users have a need for what is typically considered an “encrypted filesystem”. Some users only care about the integrity of the data. For instance, even if you don’t have anything to hide on your system, you might want to be sure that your essential binaries have not been modified and that the shared libraries on which they depend have not been hijacked.
Because of this need we are currently developing AuthFS which will provide all of the integrity guarantees of FortiFS without the additional overhead of the per-block encryption.