Slide 1

Slide 1 text

let’s talk about… A bunch of STUFF

Slide 2

Slide 2 text

Romain Guy

Slide 3

Slide 3 text

Romain Guy

Slide 4

Slide 4 text

Romain Guy

Slide 5

Slide 5 text

Romain Guy Google Android Robotics

Slide 6

Slide 6 text

curious-creature.com @romainguy

Slide 7

Slide 7 text

Performance matters

Slide 8

Slide 8 text

Why does it matter?

Slide 9

Slide 9 text

Why does it matter? LATENCY

Slide 10

Slide 10 text

Why does it matter? LATENCY SCALABILITY

Slide 11

Slide 11 text

Why does it matter? LATENCY POWER SCALABILITY

Slide 12

Slide 12 text

4:40

Slide 13

Slide 13 text

No content

Slide 14

Slide 14 text

No content

Slide 15

Slide 15 text

EASY MEDIUM HARD

Slide 16

Slide 16 text

EASY MEDIUM HARD >>>HARD

Slide 17

Slide 17 text

We have this awesome, super useful telemetry application

Slide 18

Slide 18 text

No content

Slide 19

Slide 19 text

It does 2D

Slide 20

Slide 20 text

It does 2D It does 3D

Slide 21

Slide 21 text

It does 2D It does 3D It goes fast

Slide 22

Slide 22 text

How fast?

Slide 23

Slide 23 text

No content

Slide 24

Slide 24 text

Cameras → 30 Hz

Slide 25

Slide 25 text

Cameras → 30 Hz Motors → 0.2/3 kHz

Slide 26

Slide 26 text

Cameras → 30 Hz Motors → 0.2/3 kHz Boards → 80 kHz

Slide 27

Slide 27 text

It processes a lot of data!

Slide 28

Slide 28 text

No content

Slide 29

Slide 29 text

[Diagram: data streaming in and out at several GB/s]

Slide 30

Slide 30 text

Our users love the app

Slide 31

Slide 31 text

Our users love the app

Slide 32

Slide 32 text

Then we got an email It read like this…

Slide 33

Slide 33 text

very much SORROW (art by haretrinity.deviantart.com)

Slide 34

Slide 34 text

very much SORROW (art by haretrinity.deviantart.com)

Slide 35

Slide 35 text

TARGET: <16 ms per frame

Slide 36

Slide 36 text

TARGET: <16 ms per frame
ACTUAL: 150 ms per frame

Slide 37

Slide 37 text

DataSeries& getDataSeries() {
    return m_series[m_current];
}

// For each data series
for (size_t i = 0; i < m_buffer.size(); i++) {
    getDataSeries().getValue(i);
}

Slide 38

Slide 38 text

in-depth look MATRIX MULTIPLICATION

Slide 39

Slide 39 text

Why this example? 3D graphics, 2D graphics, UI toolkits, simulations, perception, simulations…

Slide 40

Slide 40 text

No content

Slide 41

Slide 41 text

[Diagram: m = m1 x m2]

Slide 42

Slide 42 text

// c = a x b
private void multiply(float[] a, float[] b, float[] c) {
    for (int i = 0; i < m; i++) {
        for (int j = 0; j < m; j++) {
            for (int k = 0; k < n; k++) {
                c[i * m + j] += a[i * n + k] * b[k * m + j];
            }
        }
    }
}

Slide 43

Slide 43 text

Testing conditions
Two 1024x1024 matrices
Intel Core i7-3667U (2 cores @ 2 GHz)
OS X 10.10
Oracle JDK 1.8

Slide 44

Slide 44 text

10,286 ms

Slide 45

Slide 45 text

Is this a good result?

Slide 46

Slide 46 text

instructions per second

Slide 47

Slide 47 text

6 MIPS

Slide 48

Slide 48 text

My CPU ≈ 8,000 MIPS

Slide 49

Slide 49 text

8,000 >> 6. We can optimize!

Slide 50

Slide 50 text

C++

Slide 51

Slide 51 text

No content

Slide 52

Slide 52 text

No content

Slide 53

Slide 53 text

#define INDEX(i, j, stride) ((i) * (stride) + (j))

// c = a x b
void multiplyMatrices(float* a, float* b, float* c) {
    for (size_t i = 0; i < MAT_M; i++) {
        for (size_t j = 0; j < MAT_M; j++) {
            for (size_t k = 0; k < MAT_N; k++) {
                c[INDEX(i, j, MAT_M)] += a[INDEX(i, k, MAT_N)] * b[INDEX(k, j, MAT_M)];
            }
        }
    }
}

Slide 54

Slide 54 text

9.6 s

Slide 55

Slide 55 text

Performance counters

Slide 56

Slide 56 text

Performance counters # instructions

Slide 57

Slide 57 text

Performance counters # instructions IPC (instructions per cycle)

Slide 58

Slide 58 text

Performance counters # instructions IPC (instructions per cycle) CPI (cycles per instruction)

Slide 59

Slide 59 text

Performance counters # instructions IPC (instructions per cycle) CPI (cycles per instruction) L2 loads

Slide 60

Slide 60 text

Performance counters # instructions IPC (instructions per cycle) CPI (cycles per instruction) L2 loads L2 hit rate

Slide 61

Slide 61 text

Test environment

Slide 62

Slide 62 text

Test environment L1 32 kB/core

Slide 63

Slide 63 text

Test environment L1 32 kB/core L2 256 kB/core

Slide 64

Slide 64 text

Test environment L1 32 kB/core L2 256 kB/core L3 4 MB

Slide 65

Slide 65 text

Test environment L1 32 kB/core L2 256 kB/core L3 4 MB clang (LLVM) 3.5

Slide 66

Slide 66 text

Test environment L1 32 kB/core L2 256 kB/core L3 4 MB clang (LLVM) 3.5 compile flag -Os

Slide 67

Slide 67 text

# Instructions 11 B IPC 0.35 CPI 2.8 L2 loads 1.1 B L2 hit rate 8.8%

Slide 68

Slide 68 text

# Instructions 11 B IPC 0.35 CPI 2.8 L2 loads 1.1 B L2 hit rate 8.8%

Slide 69

Slide 69 text

# Instructions 11 B IPC 0.35 CPI 2.8 L2 loads 1.1 B L2 hit rate 8.8%

Slide 70

Slide 70 text

# Instructions 11 B IPC 0.35 CPI 2.8 L2 loads 1.1 B L2 hit rate 8.8%

Slide 71

Slide 71 text

Memory layout & access

Slide 72

Slide 72 text

No content

Slide 73

Slide 73 text

Data is fetched by CACHE LINE

Slide 74

Slide 74 text

Contiguous accesses are always better Data is fetched by CACHE LINE

Slide 75

Slide 75 text

No content

Slide 76

Slide 76 text

CACHE LINE 64 bytes 16 floats

Slide 77

Slide 77 text

Each read is 1024 floats away Each access is a cache miss!

Slide 78

Slide 78 text

How bad is a CACHE MISS anyway?

Slide 79

Slide 79 text

              latency    human scale (1 cycle = 1 s)
1 CPU cycle    0.3 ns     1 s
L1 access      0.9 ns     3 s
L2 access      2.8 ns     9 s
L3 access     12.9 ns    43 s
RAM access     120 ns     6 min

Slide 80

Slide 80 text

Let’s try to MINIMIZE MISSES

Slide 81

Slide 81 text

Memory layout & access

Slide 82

Slide 82 text

void transpose(float* a, float* b) {
    for (size_t i = 0; i < MAT_N; i++) {
        for (size_t j = 0; j < MAT_M; j++) {
            b[INDEX(j, i, MAT_N)] = a[INDEX(i, j, MAT_M)];
        }
    }
}

Slide 83

Slide 83 text

void multiplyMatricesT(float* a, float* b, float* c) {
    for (size_t i = 0; i < MAT_M; i++) {
        for (size_t j = 0; j < MAT_M; j++) {
            for (size_t k = 0; k < MAT_N; k++) {
                c[INDEX(i, j, MAT_M)] += a[INDEX(i, k, MAT_N)] * b[INDEX(j, k, MAT_N)];
            }
        }
    }
}

Slide 84

Slide 84 text

                naive    transposed
Time            9.6 s    1.17 s
# Instructions  11 B     8.5 B
IPC             0.35     2.22
CPI             2.8      0.45
L2 loads        1.1 B    67 M
L2 hit rate     8.8%     94.6%

Slide 85

Slide 85 text

10x

Slide 86

Slide 86 text

10x

Slide 87

Slide 87 text

void multiplyMatricesT(float* a, float* b, float* c) {
    for (size_t i = 0; i < MAT_M; i++) {
        for (size_t j = 0; j < MAT_M; j++) {
            float s = 0;
            for (size_t k = 0; k < MAT_N; k += 4) {
                s += a[INDEX(i, k + 0, MAT_N)] * b[INDEX(j, k + 0, MAT_N)] +
                     a[INDEX(i, k + 1, MAT_N)] * b[INDEX(j, k + 1, MAT_N)] +
                     a[INDEX(i, k + 2, MAT_N)] * b[INDEX(j, k + 2, MAT_N)] +
                     a[INDEX(i, k + 3, MAT_N)] * b[INDEX(j, k + 3, MAT_N)];
            }
            c[INDEX(i, j, MAT_M)] = s;
        }
    }
}

Slide 88

Slide 88 text

                naive    transposed   unrolled
Time            9.6 s    1.17 s       0.5 s
# Instructions  11 B     8.5 B        4.7 B
IPC             0.35     2.22         2.7
CPI             2.8      0.45         0.38
L2 loads        1.1 B    67 M         65 M
L2 hit rate     8.8%     94.6%        95%

Slide 89

Slide 89 text

20x

Slide 90

Slide 90 text

20x

Slide 91

Slide 91 text

$ clang++ -O0 -std=c++11 -o main main.cpp

Slide 92

Slide 92 text

                naive    transposed   unrolled   -O0
Time            9.6 s    1.17 s       0.5 s      19 s
# Instructions  11 B     8.5 B        4.7 B      26 B
IPC             0.35     2.22         2.7        0.44
CPI             2.8      0.45         0.38       2.27
L2 loads        1.1 B    67 M         65 M       1.1 B
L2 hit rate     8.8%     94.6%        95%        2.4%

Slide 93

Slide 93 text

$ clang++ -Ofast -mavx -std=c++11 -o main main.cpp

Slide 94

Slide 94 text

                naive    transposed   unrolled   -O0      -Ofast
Time            9.6 s    1.17 s       0.5 s      19 s     0.25 s
# Instructions  11 B     8.5 B        4.7 B      26 B     1.1 B
IPC             0.35     2.22         2.7        0.44     1.2
CPI             2.8      0.45         0.38       2.27     0.8
L2 loads        1.1 B    67 M         65 M       1.1 B    58 M
L2 hit rate     8.8%     94.6%        95%        2.4%     76%

Slide 95

Slide 95 text

40x

Slide 96

Slide 96 text

40x

Slide 97

Slide 97 text

What about Java?

Slide 98

Slide 98 text

// c = a x b
private void multiply(float[] a, float[] b, float[] c) {
    for (int i = 0; i < m; i++) {
        for (int j = 0; j < m; j++) {
            for (int k = 0; k < n; k++) {
                c[i * m + j] += a[i * n + k] * b[j * n + k];
            }
        }
    }
}

Slide 99

Slide 99 text

1,173 ms (10x)

Slide 100

Slide 100 text

[Diagram: m = m1 x m2]

Slide 101

Slide 101 text

[Diagram: m = m1 x m2, with the output rows split between Thread 1 and Thread 2]

Slide 102

Slide 102 text

final int coreCount = Runtime.getRuntime().availableProcessors();
List<Callable<Void>> tasks = new ArrayList<>(coreCount);

for (int i = m / coreCount; i <= m; i += m / coreCount) {
    tasks.add(createTask(i));
}

ExecutorService executor = Executors.newFixedThreadPool(coreCount);
List<Future<Void>> results = executor.invokeAll(tasks);
for (Future<Void> result : results) {
    result.get();
}

Slide 103

Slide 103 text

517 ms (20x)

Slide 104

Slide 104 text

What about SIMD?

Slide 105

Slide 105 text

If a VM can’t do it…

Slide 106

Slide 106 text

A human can do it…

Slide 107

Slide 107 text

4x4 transpose

Slide 108

Slide 108 text

[Disassembly of the 4x4 transpose. At -O0: a scalar double loop (pushq/movq prologue, movss loads and stores through %xmm0, a compare-and-branch per element). At -Ofast: a fully unrolled straight-line run of vmovss register moves.]

Slide 109

Slide 109 text

ARM NEON, written by hand:

vld1.32 {d0-d3}, [r1]!
vld1.32 {d4-d7}, [r1]!
vtrn.32 q0, q1
vtrn.32 q2, q3
vswp d1, d4
vswp d3, d6
vst1.32 {d0-d3}, [r0]!
vst1.32 {d4-d7}, [r0]!
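For comparison, here is a scalar C++ version of the same 4x4 transpose that the hand-written NEON sequence performs entirely in registers (the function name is mine):

```cpp
#include <cstddef>

// Scalar 4x4 transpose: b[j][i] = a[i][j], the operation the NEON
// vtrn/vswp sequence above does with four quad registers.
void transpose4x4(const float* a, float* b) {
    for (size_t i = 0; i < 4; i++)
        for (size_t j = 0; j < 4; j++)
            b[j * 4 + i] = a[i * 4 + j];
}
```

The NEON version loads all 16 floats with two vld1.32 instructions, shuffles in registers, and stores with two vst1.32; the scalar loop issues one load and one store per element.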

Slide 110

Slide 110 text

Back to our telemetry application

Slide 111

Slide 111 text

DataSeries& getDataSeries() {
    return m_series[m_current];
}

// For each data series
for (size_t i = 0; i < m_buffer.size(); i++) {
    getDataSeries().getValue(i);
}

Slide 112

Slide 112 text

// For each data series
const DataSeries& series(m_series[m_current]);
for (size_t i = 0; i < m_buffer.size(); i++) {
    series.getValue(i);
}

Slide 113

Slide 113 text

AFTER: 3 ms per frame

Slide 114

Slide 114 text

AFTER: 3 ms per frame
BEFORE: 150 ms per frame

Slide 115

Slide 115 text

All because of the L1/L2 cache

Slide 116

Slide 116 text

I don’t care?! 1024x1024 MATRIX ☹

Slide 117

Slide 117 text

[Diagram: array-of-structs: three Node objects, each holding { bool, bool, float[16] }]

Slide 118

Slide 118 text

[Diagram: struct-of-arrays: the six bools packed together, followed by float[48]]

Slide 119

Slide 119 text

[Diagram: virtual dispatch: BaseClass declares virtual void foo(); ChildClass and OtherChildClass each override it]

Slide 120

Slide 120 text

mutable bool m_dirty;

const Sphere& getBoundingSphere() const {
    if (m_dirty) {
        m_world_sphere = m_sphere * m_world_transform;
        m_dirty = false;
    }
    return m_world_sphere;
}

Slide 121

Slide 121 text

Discussion