Welcome back to part 2/n in my ‘hey look at the cool things that happened at RWC’ series.
Since 2015, the Levchin Prize has been awarded at RWC. So far, each year there’s been one individual recipient and one team recipient. This year, it went to Hugo Krawczyk (he co-invented HMAC!) and the OpenSSL team (for dramatic improvements to code quality).
I can attest to the improvement in OpenSSL’s code quality. A few years ago, part of a final in school was to trace some OpenSSL code. I love tracing code, and it had me jumping all over the place until I couldn’t open any more vim buffers (only a slight exaggeration). It’s much less awful today.
More importantly, the trophies are awesome. They’re based on a ‘cryptex’ and a wheel cipher. There are 7 rotating disks, and when they’re aligned correctly, the trophy opens to reveal an inner chamber. No clue what’s in it.
Side-channel mitigation – Olya Ohrimenko
Azure is Microsoft’s cloud computing platform. They’ve integrated confidential computing so that when data is being processed, it’s in a Trusted Execution Environment (TEE). When the data is at rest, it’s encrypted (as far as I can tell), but processing encrypted data is really inefficient (and not always possible). This presents a problem for cloud users with sensitive data–they have to protect the data, but they can’t use it and keep it private at the same time.
Microsoft currently supports both software (Virtual Secure Mode) and hardware (Intel SGX) secure enclaves, and is working to make more available.
In particular, this talk addressed information leakage through memory accesses (she explicitly was not addressing Spectre/Meltdown). Consider a binary decision tree where all of the data is encrypted. Even though the nodes are encrypted, an attacker can still observe when and where memory is accessed. With enough traces, they could reverse-engineer the tree’s contents.
That’s…not good–imagine that decision tree is a diagnostic tool, where the nodes contain your medical information. Now anyone who saw those memory accesses knows that you have Lupus (do you not spend your Saturday nights looking at cloud memory accesses?)! As with other side-channel flavors, you need some sort of ‘secret-independence.’
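To make the leak concrete, here’s a toy sketch of my own (the tree and thresholds are made up, not from the talk): even if every node’s contents were encrypted, the sequence of node indices touched during traversal depends on the secret input.

```python
# Toy decision tree stored as an array: node i has children 2i+1 and 2i+2.
# Threshold values are hypothetical illustration data, not from the talk.
THRESHOLDS = [50, 25, 75]  # internal nodes 0..2; leaves are nodes 3..6

def traverse(secret_value):
    """Return the leaf reached AND the trace of node indices touched."""
    trace = []
    i = 0
    while i < len(THRESHOLDS):
        trace.append(i)  # an attacker watching memory sees this index
        if secret_value < THRESHOLDS[i]:
            i = 2 * i + 1  # go left
        else:
            i = 2 * i + 2  # go right
    trace.append(i)  # final leaf access is visible too
    return i, trace

# Two different secret inputs produce two different access traces,
# so the memory access pattern alone leaks information about the input.
_, trace_a = traverse(10)  # left, left  -> nodes 0, 1, 3
_, trace_b = traverse(90)  # right, right -> nodes 0, 2, 6
```

The ciphertext never has to be broken; the addresses alone are the leak.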
Memory side-channel mitigations
A naive approach is to insert dummy accesses into the code, but this is ad hoc and still might leak data. Instead, you want a traditional, game-based security proof. This led to an idea called ‘data-obliviousness.’ Informally, it means that an algorithm’s branches and memory accesses shouldn’t depend on secret data. More precisely:
> We specify public parameters that are allowed to be disclosed (such as the input sizes and number of iterations to perform) and we treat all other inputs as private. We then say that an algorithm is data-oblivious if an attacker that interacts with it and observes its interaction with memory, disk and network learns nothing except possibly those public parameters. – USENIX paper
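The building block that makes this possible is branchless selection: pick one of two values using arithmetic masks instead of an `if`, so the same instructions and accesses run regardless of the secret. A minimal Python sketch of mine (real hardened code would use a constant-time move like x86 `CMOV`; Python itself makes no timing guarantees, this only shows the structure):

```python
def oblivious_select(pred, a, b):
    """Return a if pred == 1 else b, without branching on pred.

    pred must be exactly 0 or 1. mask is all-ones when pred is 1
    (Python ints: -1) and all-zeros when pred is 0, so exactly one
    operand survives the AND/OR combination.
    """
    mask = -pred
    return (a & mask) | (b & ~mask)
```

Both `a` and `b` are read on every call, and no branch depends on `pred`, which is exactly the ‘don’t branch on secret data’ property from the definition above.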
The talk mentioned four mitigation techniques:
- Data obliviousness
  - isolate computations in private memory
  - attacker can see addresses that go in/out, but not how the content is manipulated
  - limited private memory (for example, max 2KB in registers¹)
- Explicit private memory
  - operate on registers directly in assembly
  - IMO, assembly should not be the go-to option for side-channel mitigation
- Intel TSX
  - initially developed for concurrent data access
  - side effect: if data is evicted from the cache, the transaction aborts
  - can be used to turn caches into private memory²
  - data becomes part of the read set of the transaction; therefore, if the attacker tries to evict it from the cache, the transaction aborts
  - cache misses can’t happen, so caches effectively become private memory
  - size limitations apply
- Oblivious RAM (ORAM)
  - a more general solution to the problem; can be thought of as a proxy
  - hides which addresses are accessed and how often, but not the number of requests
  - a data solution only, not code
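The crudest way to get ORAM-like behavior is a linear scan: touch every slot on every access and keep only the one you wanted via masking, so the access pattern is identical no matter which index is requested. This sketch is my own illustration of that textbook ‘trivial ORAM’ idea, not the scheme from the talk:

```python
def oblivious_read(array, secret_index):
    """Read array[secret_index] while touching every element.

    The sequence of memory accesses (a full scan) is the same for any
    secret_index; only the comparison result, folded in arithmetically,
    differs. Cost is O(n) per read -- real ORAM schemes do much better.
    """
    result = 0
    for i, value in enumerate(array):
        match = int(i == secret_index)  # 1 at the wanted slot, else 0
        mask = -match                   # all-ones or all-zeros
        result = (result & ~mask) | (value & mask)
    return result
```

The O(n)-per-access cost is why this is only a mental model; practical ORAM constructions shuffle and re-encrypt blocks to get polylogarithmic overhead.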
But, back to the point–the talk was about data-oblivious algorithms. They wrote data-oblivious algorithms for SVMs, matrix factorization, decision trees, neural networks and k-means clustering. Then, they ran oblivious and non-oblivious versions using SGX enclaves for decryption and processing. They validated the data-obliviousness by collecting memory traces at cache-line granularity with Intel’s Pin tool.
A quick summary
The following table (from the USENIX paper) shows the overhead when compared to a baseline that processes the data in plaintext without SGX. Both the oblivious and non-oblivious versions use SGX protection and encryption.
There are a few quick conclusions from this chart:
- using SGX and encryption doesn’t add significant performance overhead
- some algorithms are already predominantly data-oblivious (CNN, SVM, K-Means)
One thing to note is that these are not state-of-the-art implementations, which might use data-dependent optimizations. But it’s neat to see that you can harden machine learning against data leakage with much more reasonable overheads than previous work.
Microsoft is calling this their ‘Always Encrypted capability.’ I take issue with this, because it’s not always encrypted–it’s just that it’s only decrypted and processed in an enclave. That’s not to say I have any better suggestions…but I’m opposed to misleading nomenclature like this. I’ve heard far too many talks that claim end-to-end encryption via TLS to think that a phrase like ‘Always Encrypted’ is a good idea when decryption is called at some point. While secure enclaves are great tools, they aren’t infallible.
I admit that The Cloud isn’t my favorite thing to talk about, mostly from dealing with people who think that it’s a magical way to make everything work, but I really enjoyed this talk. I was intrigued by the machine learning examples cited and noticed that they’ve applied for a patent on the technique described in the talk. I foresee more similar papers joining my TODO stack.
1: SGX private memory is limited to registers
2: in very brief pseudocode:
```
xbegin();               // start the transaction; what follows joins its read set
load_sensitive_code();
load_sensitive_data();
run_algorithm();
xend();                 // commits; an eviction mid-transaction would have aborted
```