TL;DR
Yes, cache attacks can be executed on single-threaded architectures, although they are more difficult to pull off and generally less effective than on multi-threaded systems. The core principle relies on manipulating the cache state to create timing differences that reveal information about data access patterns.
Understanding Cache Attacks
Cache attacks exploit the fact that accessing data from the CPU cache is much faster than fetching it from main memory. By observing how long it takes to access certain data, an attacker can infer whether that data has recently been used (and therefore likely resides in the cache).
Why Single-Threaded Systems Are Different
In multi-threaded (or multi-core) systems, an attacker thread can run concurrently with the victim, continuously evicting cache lines and probing timing while the victim executes. On a single-threaded system there is only one execution path at a time, so the attacker must interleave with the victim across time slices (or measure the victim's own execution time), which makes direct eviction and measurement more challenging.
How Cache Attacks Work on Single Threads
- Prime the Cache: Fill the cache with dummy data to create a known starting state. This is often done by repeatedly accessing a large array of values.
- Access Target Data: The process accesses the sensitive data you want to learn about. This will bring it into the cache.
- Measure Access Time: After accessing the target data, measure the time it takes to access other data (often from the same array used in step 1). If the access is faster than expected, it suggests that the target data is still in the cache.
- Repeat and Analyse: Repeat steps 2 and 3 many times and statistically analyse the timing differences. This will reveal patterns related to the target data’s usage.
Practical Techniques
Several techniques can be used, even on single-threaded systems:
- Flush+Reload: This is a classic technique. The attacker flushes a shared cache line (e.g., with the x86 `clflush` instruction), lets the victim run, and then times a reload of that line. A fast reload means the victim touched the line in the meantime.
- Time-Based Side Channels: Carefully measure execution times of operations that depend on cache hits or misses. Even small timing differences can be detectable with enough repetitions.
- Cache Set Observation: If you know the memory layout, you can map addresses to cache sets and watch which sets the victim disturbs between measurements (the idea underlying Prime+Probe).
Code Example (Conceptual – C)
This is a simplified example to illustrate the principle. Actual attacks are much more complex.
```c
#include <stdio.h>
#include <time.h>

int main(void) {
    /* Array to prime the cache */
    static int array[1024];
    for (int i = 0; i < 1024; ++i) {
        array[i] = i;
    }

    /* Target data (in reality, this would be something sensitive) */
    volatile int target_data = 42;
    (void)target_data;

    /* Time access to the array after accessing target_data */
    volatile int sink = 0;
    clock_t start_time = clock();
    for (int i = 0; i < 1024; ++i) {
        sink += array[i];  /* read each element; the volatile sink stops the
                              compiler from optimizing the loop away */
    }
    clock_t end_time = clock();

    double elapsed_time = (double)(end_time - start_time) / CLOCKS_PER_SEC;
    printf("Elapsed time: %f seconds\n", elapsed_time);
    return 0;
}
```
Important Note: The timing differences in this example are likely to be very small and affected by many factors. This is just a conceptual illustration.
Mitigation Strategies
- Cache Partitioning: Allocate separate cache regions for sensitive data and less-sensitive data.
- Randomization: Randomize the memory layout to make it harder for attackers to predict which cache lines will be affected.
- Constant-Time Operations: Design code so that execution time does not depend on the value of sensitive data. This is often difficult to achieve in practice.
- Hardware Support: Some CPUs provide hardware features to help protect against cache attacks, such as cache partitioning or isolation (e.g., Intel's Cache Allocation Technology).