Data structures, and sometimes the algorithms that operate on them, can be described as "cache-friendly" or "cache-hostile" — but what do those terms mean, and do they really matter?
Cache memory in modern CPUs can be a hundred times faster than main memory, but caches are small and have properties that can sometimes be counterintuitive. Getting good performance requires thinking about how your data structures are laid out in memory and how they are accessed.
This presentation will explain why some constructions are problematic and show better alternatives. I will demonstrate tools for analyzing cache efficiency and discuss what to consider when changing code to gain performance. You will develop an intuition for writing fast software by default, and learn techniques to improve it.