prefetched into memory
  ◦ Masks latency if data is accessed sequentially
• Cache Prefetching
  ◦ Data in memory is prefetched into the CPU cache
  ◦ Masks latency if data is accessed sequentially
or value types
  ◦ Open Addressing - represent a hash table in an array
  ◦ Ring Buffer - represent a queue in an array
• Write-Ahead Log
  ◦ Used by most databases and many filesystems
  ◦ Log changes before applying them, for crash recovery
written
  ◦ A dedicated flushing thread loops, draining the queue
  ◦ Adaptively increases the size of the writes under heavy load
• Group Commit / Write Coalescing
  ◦ Used by MySQL, PostgreSQL, and SSD/HDD controllers
  ◦ Combines concurrent operations into a single write
application per core
• Linux provides the ability to:
  ◦ Pin threads to specific cores
  ◦ Give each core its own queue for disk and network I/O
• Minimize cross-core communication
  ◦ Partition data so that processing can occur on a single core
Swap) operations
  ◦ Lock-free data structures
• Context Switching
  ◦ One thread per core
• False Sharing
  ◦ Keep hot data in its own “cache line”
• Branch Misprediction
  ◦ Branchless algorithms (bit-manipulation tricks)