In this part of the Cache Lab, the mission is simple yet devious: optimize matrix transposition for three specific sizes: 32x32, 64x64, and 61x67. Our primary enemy? Cache misses.
Matrix Transposition
A standard transposition swaps rows and columns directly:
void trans(int M, int N, int A[N][M], int B[M][N]) {
    int i, j, tmp;

    for (i = 0; i < N; i++) {
        for (j = 0; j < M; j++) {
            tmp = A[i][j];
            B[j][i] = tmp;
        }
    }
}
While correct, this approach is a cache-miss nightmare because it ignores how data is actually stored in memory.
Cache Overview
To optimize effectively, we first have to understand our hardware constraints. The lab specifies a direct-mapped cache with the following parameters (so s = 5 set-index bits and b = 5 block-offset bits, for 32 × 32 B = 1 KB of total capacity):
Parameter            Value
Sets (S)             32
Block Size (B)       32 bytes
Associativity (E)    1 (direct-mapped)
Integer Size         4 bytes
Capacity per line    8 integers
We will use matrix tiling and loop unrolling to optimize the code.
32x32 Case
In this case, a row of the matrix (32 ints) occupies 32/8 = 4 cache lines, so rows that are 32/4 = 8 apart map to the same sets and conflict. This makes 8x8 tiling the sweet spot.
By processing the matrix in 8×8 blocks, we ensure that once a line of A is loaded, we use all 8 integers before it gets evicted. We also use loop unrolling with 8 local variables to minimize the overhead of accessing B.
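A sketch of this approach (the function name and the t0..t7 variable names are our own; the graded solution may differ in detail):

```c
#include <assert.h>

/* 8x8-tiled transpose with the inner row fully unrolled: each row of A
 * is read in one burst (one cache line), then scattered into a column
 * of B. Sized for the 32x32 case. */
void trans_32(int M, int N, int A[N][M], int B[M][N]) {
    int i, j, k;
    int t0, t1, t2, t3, t4, t5, t6, t7;

    for (i = 0; i < N; i += 8) {
        for (j = 0; j < M; j += 8) {
            for (k = i; k < i + 8; k++) {
                /* Consume the whole cache line of A at once... */
                t0 = A[k][j];     t1 = A[k][j + 1];
                t2 = A[k][j + 2]; t3 = A[k][j + 3];
                t4 = A[k][j + 4]; t5 = A[k][j + 5];
                t6 = A[k][j + 6]; t7 = A[k][j + 7];
                /* ...then write it into one column of B. */
                B[j][k] = t0;     B[j + 1][k] = t1;
                B[j + 2][k] = t2; B[j + 3][k] = t3;
                B[j + 4][k] = t4; B[j + 5][k] = t5;
                B[j + 6][k] = t6; B[j + 7][k] = t7;
            }
        }
    }
}
```

Because A[k][j] and B[j][k] for a diagonal block map to the same sets, holding the row in locals also avoids the A/B ping-pong on the diagonal.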
61x67 Case
Since 61 and 67 are not powers of two, conflict misses don't occur in a regular pattern the way they do in the square matrices. This "irregularity" is actually a blessing: we can get away with simple tiling. A 16x16 block size typically yields enough performance to pass the miss-count threshold.
int BLOCK_SIZE = 16;
int i, j, k, l, tmp;
int a, b;
for (i = 0; i < N; i += BLOCK_SIZE) {
    for (j = 0; j < M; j += BLOCK_SIZE) {
        a = i + BLOCK_SIZE;
        b = j + BLOCK_SIZE;
        for (k = i; k < N && k < a; k++) {
            for (l = j; l < M && l < b; l++) {
                tmp = A[k][l];
                B[l][k] = tmp;
            }
        }
    }
}
64x64 Case
This is the hardest part. In a 64x64 matrix, a row occupies 8 cache lines (sets), so conflict misses occur every 32/8 = 4 rows. If we use 8x8 tiling naively, the bottom half of the block evicts the top half before we are done with it.
We can try a 4x4 matrix tiling first.
int BLOCK_SIZE = 4;
int i, j, k, l, tmp;
int a, b;
for (i = 0; i < N; i += BLOCK_SIZE) {
    for (j = 0; j < M; j += BLOCK_SIZE) {
        a = i + BLOCK_SIZE;
        b = j + BLOCK_SIZE;
        for (k = i; k < N && k < a; k++) {
            for (l = j; l < M && l < b; l++) {
                tmp = A[k][l];
                B[l][k] = tmp;
            }
        }
    }
}
But this isn’t enough to pass the miss-count threshold.
So we try an 8x8 tiling after all, but we partition each 8x8 block into four 4x4 sub-blocks and use the upper-right corner of B as a "buffer" to store data temporarily.
int i, j, k;
int tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, tmp7, tmp8;

// Iterate through the matrix in 8x8 blocks to improve spatial locality
for (i = 0; i < N; i += 8) {
    for (j = 0; j < M; j += 8) {
        /* STEP 1: Handle the top half of the 8x8 block (rows i to i+3) */
        for (k = 0; k < 4; k++) {
            // Read 8 elements from row i+k of matrix A into registers
            tmp1 = A[i + k][j];     tmp2 = A[i + k][j + 1];
            tmp3 = A[i + k][j + 2]; tmp4 = A[i + k][j + 3]; // Top-left 4x4
            tmp5 = A[i + k][j + 4]; tmp6 = A[i + k][j + 5];
            tmp7 = A[i + k][j + 6]; tmp8 = A[i + k][j + 7]; // Top-right 4x4

            // Transpose top-left 4x4 from A directly into top-left of B
            B[j][i + k] = tmp1;
            B[j + 1][i + k] = tmp2;
            B[j + 2][i + k] = tmp3;
            B[j + 3][i + k] = tmp4;

            // Temporarily store top-right 4x4 of A in the top-right of B
            // This avoids cache misses by using the already-loaded cache line in B
            B[j][i + k + 4] = tmp5;
            B[j + 1][i + k + 4] = tmp6;
            B[j + 2][i + k + 4] = tmp7;
            B[j + 3][i + k + 4] = tmp8;
        }

        /* STEP 2: Handle the bottom half and fix the temporary placement */
        for (k = 0; k < 4; k++) {
            // Read bottom-left 4x4 column-wise from A
            tmp1 = A[i + 4][j + k]; tmp2 = A[i + 5][j + k];
            tmp3 = A[i + 6][j + k]; tmp4 = A[i + 7][j + k];
            // Read bottom-right 4x4 column-wise from A
            tmp5 = A[i + 4][j + k + 4]; tmp6 = A[i + 5][j + k + 4];
            tmp7 = A[i + 6][j + k + 4]; tmp8 = A[i + 7][j + k + 4];

            // Retrieve the top-right elements we temporarily stored in B in Step 1
            int t1 = B[j + k][i + 4];
            int t2 = B[j + k][i + 5];
            int t3 = B[j + k][i + 6];
            int t4 = B[j + k][i + 7];

            // Move bottom-left of A into the top-right of B
            B[j + k][i + 4] = tmp1;
            B[j + k][i + 5] = tmp2;
            B[j + k][i + 6] = tmp3;
            B[j + k][i + 7] = tmp4;

            // Move the retrieved temporary values into the bottom-left of B
            B[j + k + 4][i] = t1;
            B[j + k + 4][i + 1] = t2;
            B[j + k + 4][i + 2] = t3;
            B[j + k + 4][i + 3] = t4;

            // Place bottom-right of A into the bottom-right of B
            B[j + k + 4][i + 4] = tmp5;
            B[j + k + 4][i + 5] = tmp6;
            B[j + k + 4][i + 6] = tmp7;
            B[j + k + 4][i + 7] = tmp8;
        }
    }
}
Note: The key trick here is writing B a full cache line at a time where possible (so B's lines stay resident until we are done with them) and using local variables, which the compiler keeps in registers, to bridge the gap between conflicting cache lines.
Conclusion
Optimizing matrix transposition is less about the math and more about mechanical sympathy—understanding the underlying hardware to write code that plays nice with the CPU’s cache.
The jump from the naive version to these optimized versions isn’t just a marginal gain; it’s often a 10x reduction in cache misses. It serves as a stark reminder that in systems programming, how you access your data is just as important as the algorithm itself.
1. Cache Parameters
A cache can be described with the following four parameters:
S = 2^s (cache sets): The cache is divided into S sets.
E (cache lines per set): This is the "associativity."
If E = 1, it's a direct-mapped cache. If E > 1, it's set-associative.
Each line contains a valid bit, a tag, and the actual data block.
B = 2^b (block size): The number of bytes stored in each line.
The low b bits of an address tell the cache the offset within that block.
m: The number of bits in a machine memory address.
2. Address Decomposition
When the CPU wants to access a 64-bit address, the cache doesn’t look at the whole number at once. It slices the address into three distinct fields:
Field          Purpose
Tag            Uniquely identifies the memory block within a specific set; t = m - b - s bits
Set Index      Determines which set the address maps to
Block Offset   Identifies the specific byte within the cache line
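This slicing is just shifts and masks. A minimal sketch, with the bit widths passed explicitly (the function names and parameters are our own; the simulator's getSetIndex/getTag can wrap these using the values parsed from the command line):

```c
#include <assert.h>

/* Tag = everything above the set-index and offset bits. */
long long tagBits(long long address, int s, int b) {
    return address >> (s + b);
}

/* Set index = the s bits just above the b offset bits. */
int setIndexBits(long long address, int s, int b) {
    return (int)((address >> b) & ((1LL << s) - 1));
}
```

For example, with s = 5 and b = 5 (this lab's transpose cache), the low 5 bits are the offset, the next 5 bits select one of the 32 sets, and the rest is the tag.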
3. The “Search and Match” Process
When our simulator receives an address (e.g., from an L or S operation in the trace file), it follows these steps:
Find the Set: Use the set index bits to jump to the correct set in our cache structure.
Search the Lines: Look through all the lines in that set.
Hit: If a line has valid == true AND the tag matches the address tag.
Miss: If no line matches.
Handle the Miss:
Cold Start: If there is an empty line (valid == false), fill it with the new tag and set valid = true.
Eviction: If all lines are full, we must kick one out. This is where the LRU (Least Recently Used) policy comes in: we find the line that hasn’t been touched for the longest time and replace it.
Lab Requirements
For this Lab Project, we will write a cache simulator that takes a valgrind memory trace as an input.
Input
The input looks like:
I 0400d7d4,8
 M 0421c7f0,4
 L 04f6b868,8
 S 7ff0005c8,8
Each line denotes one or two memory accesses. The format of each line is
[space]operation address,size
The operation field denotes the type of memory access:
"I" denotes an instruction load,
"L" a data load,
"S" a data store, and
"M" a data modify (i.e., a data load followed by a data store).
Mind you: There is never a space before each “I”. There is always a space before each “M”, “L”, and “S”.
The address field specifies a 64-bit hexadecimal memory address. The size field specifies the number of bytes accessed by the operation.
CLI
Our program should take the following command line arguments:
getopt is declared in unistd.h, but the compiler is invoked with -std=c99, which hides all POSIX extensions. GNU systems also provide a standalone <getopt.h> header, so we include getopt.h instead.
opt = getopt(argc, argv, "hvs:E:b:t:")
h and v: These are boolean flags.
s:, E:, b:, and t:: These are required arguments. The colon tells getopt that these flags must be followed by a value (e.g., -s 4).
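A minimal sketch of the parsing loop (the Options struct and its field names are our own; the real program would also print usage on -h):

```c
#include <getopt.h>
#include <stdlib.h>
#include <assert.h>

typedef struct {
    int s, E, b;      /* set-index bits, associativity, offset bits */
    int verbose;
    char *tracePath;
} Options;

/* Walk the arguments with getopt; optarg points at the value that
 * follows each flag declared with a ':' in the option string. */
Options parseArgs(int argc, char *argv[]) {
    Options o = {0, 0, 0, 0, NULL};
    int opt;
    while ((opt = getopt(argc, argv, "hvs:E:b:t:")) != -1) {
        switch (opt) {
        case 'v': o.verbose = 1;          break;
        case 's': o.s = atoi(optarg);     break;
        case 'E': o.E = atoi(optarg);     break;
        case 'b': o.b = atoi(optarg);     break;
        case 't': o.tracePath = optarg;   break;
        case 'h':
        default:  /* print usage and exit in the real program */ break;
        }
    }
    return o;
}
```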
After parsing the arguments, we set the initial value of our Cache Data Model.
fscanf does not skip spaces before %c, so we add a space before %c in the format string.
!feof(traceFile) does not work correctly here. feof only returns true after a read operation has already attempted to go past the end of the file and failed. Using it as a loop condition (e.g., while (!feof(p))) causes an "off-by-one" error: the loop executes one extra time, reprocessing stale data from the last successful read.
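Combining the two notes above, the read loop can be sketched as follows, using fscanf's return value as the loop condition instead of feof. (The dataOps counter is only for illustration; the real loop would dispatch to loadData/storeData/modifyData. We use unsigned long long here because that is the type %llx strictly expects.)

```c
#include <stdio.h>
#include <assert.h>

/* Read trace records until fscanf fails to match all three fields.
 * The leading " " in the format skips the space before M/L/S lines
 * (and any newlines) before reading the operation character. */
int processTrace(FILE *traceFile) {
    char op;
    unsigned long long address;
    int size;
    int dataOps = 0;

    while (fscanf(traceFile, " %c %llx,%d", &op, &address, &size) == 3) {
        switch (op) {
        case 'L': /* loadData(address, size);   */ dataOps++; break;
        case 'S': /* storeData(address, size);  */ dataOps++; break;
        case 'M': /* modifyData(address, size); */ dataOps++; break;
        default:  /* 'I' instruction fetches are ignored */   break;
        }
    }
    return dataOps;
}
```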
void loadData(long long address, int size) {
    // Simulate accessing data at the given address
    int s = getSetIndex(address);
    long long t = getTag(address);
    global_timer++;

    // 1. Hit check: is the data already in the cache?
    for (int i = 0; i < associativity; i++) {
        if (cache[s][i].valid && cache[s][i].tag == t) {
            hit_count++;
            cache[s][i].lru_counter = global_timer;
            if (verboseMode) printf(" hit");
            return;
        }
    }

    miss_count++;
    if (verboseMode) printf(" miss");

    // 2. Cold start: fill an empty line if one exists
    for (int i = 0; i < associativity; i++) {
        if (!cache[s][i].valid) {
            cache[s][i].valid = true;
            cache[s][i].tag = t;
            cache[s][i].lru_counter = global_timer;
            return;
        }
    }

    // 3. Eviction: replace the least recently used line
    eviction_count++;
    if (verboseMode) printf(" eviction");

    int victim_index = 0;
    int min_lru = cache[s][0].lru_counter;
    for (int i = 1; i < associativity; i++) {
        if (cache[s][i].lru_counter < min_lru) {
            min_lru = cache[s][i].lru_counter;
            victim_index = i;
        }
    }
    cache[s][victim_index].tag = t;
    cache[s][victim_index].lru_counter = global_timer;
}
We first check if the data already exists in the cache.
If it doesn’t exist, we have to scan for blank lines to load the data.
If blank lines don’t exist, we need to evict a line using the LRU strategy. We replace the victim line with the new line.
Other Operations
void storeData(long long address, int size) {
    // Simulate storing data at the given address
    loadData(address, size);
}

void modifyData(long long address, int size) {
    // Simulate modifying data at the given address
    loadData(address, size);
    hit_count++;
    if (verboseMode) printf(" hit\n");
}
For this simulator, a store behaves exactly like a load. A modify is a load followed by a store to the same address; since the load has just brought the block into the cache, the store always hits.
Print Summary
We are asked to output the answer using the printSummary function.
In this project, we moved from the theory of the memory hierarchy to the practical reality of memory management. By building this simulator, we reinforced several core concepts of computer systems.
With our simulator passing all the trace tests, we’ve effectively mirrored how a CPU “thinks” about memory. The next step is applying these insights to optimize actual code, ensuring our algorithms play nicely with the hardware we’ve just simulated.