CodeFest 2019. Ujjwal Sharma (Node.js) — V8 by example: A journey through the compilation pipeline

April 06, 2019

V8 is complicated. Things change fast, and it’s hard to keep track of the fastest way to do any specific thing. But not anymore. Join me, a V8 contributor, on a journey through the compilation pipeline of V8 and understand how it all works under the hood. We’ll take the example of a popular JavaScript builtin method and find out what does and does not trigger deoptimization. By the end of the talk, you will have a fairly decent idea of how builtins are written inside the V8 compilation pipeline, and how to make sure you always take the fast path, no matter what.


Transcript

  1. @ryzokuken V8 by Example A journey through the compilation pipeline

    1
  2. @ryzokuken Ujjwal Sharma (@ryzokuken) • Node.js – Core Collaborator •

    Contribute to the JS/Node.js ecosystem ◦ V8 ◦ TC39 ◦ libuv ◦ … • Student • Google Summer of Code • Speaker 2
  3. @ryzokuken @ryzokuken What is V8? 3

  4. @ryzokuken @ryzokuken 4

  5. @ryzokuken @ryzokuken How does V8 work? 5

  6. @ryzokuken @ryzokuken 6 Source

  7. @ryzokuken @ryzokuken 7 Source Parser

  8. @ryzokuken @ryzokuken 8 Source Parser

  9. @ryzokuken @ryzokuken 9 Source IIFE?

  10. @ryzokuken @ryzokuken 10 Source IIFE? Eager

  11. @ryzokuken @ryzokuken 11 Source IIFE? Lazy Eager

  12. @ryzokuken @ryzokuken 12 Source IIFE? * Eventually Eager Lazy

  13. @ryzokuken @ryzokuken 13 Source IIFE? Eager Lazy AST + Scopes
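
The lazy/eager heuristic in the slides above can be sketched in plain JavaScript (function names here are made up for illustration, not V8 internals):

```javascript
// Sketch of the parsing heuristic from the slides: a plain function
// declaration is a candidate for lazy parsing (compiled when first
// called), while an IIFE is eagerly compiled because it runs immediately.
function later(x) {          // may be parsed lazily
  return x * 2;
}

const now = (function immediate(x) { // IIFE: parsed and executed eagerly
  return x * 2;
})(21);

console.log(now);       // 42
console.log(later(21)); // 42
```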

  14. @ryzokuken @ryzokuken 14 Source Parser AST + Scopes

  15. @ryzokuken @ryzokuken 15 Source Parser AST + Scopes Ignition

  16. @ryzokuken @ryzokuken 16 Source Parser AST + Scopes Ignition Bytecode

  17. @ryzokuken @ryzokuken 17 Source Parser AST + Scopes Ignition Bytecode

  18. @ryzokuken @ryzokuken 18 But interpreters are so slow! Idea: Let’s

    not be slow. Let’s use a JIT compiler.
  19. @ryzokuken @ryzokuken 19

  20. @ryzokuken @ryzokuken 20 Source Parser AST + Scopes Ignition Bytecode

    Turbofan + profiling data OPTIMIZE!
  21. @ryzokuken @ryzokuken 21 Source Parser AST + Scopes Ignition Bytecode

    Turbofan Optimized Code DEOPTIMIZE!
  22. @ryzokuken @ryzokuken 22 Source Parser AST + Scopes Ignition Bytecode

    Turbofan Optimized Code
  23. @ryzokuken @ryzokuken How you feel about everything. Let’s take a

    couple steps back. 23
  24. @ryzokuken @ryzokuken How does V8 work? 24

  25. @ryzokuken @ryzokuken How does V8 work? a compiler 25

  26. @ryzokuken @ryzokuken Source Code Parser Abstract Syntax Tree Assembler Machine

    Code (R) Compiler Assembly Program Linker Machine Code (T) 26
  27. @ryzokuken @ryzokuken Is it simple enough yet? 27

  28. @ryzokuken @ryzokuken Source Code Parser Abstract Syntax Tree Assembler Machine

    Code (R) Compiler Assembly Program Linker Machine Code (T) 28
  29. @ryzokuken @ryzokuken Source Code Parser Abstract Syntax Tree Compiler Machine

    Code (T) 29
  30. @ryzokuken @ryzokuken 30 BAD NEWS: V8 isn’t this simple

  31. @ryzokuken @ryzokuken Because ECMAScript is slow by default, that’s why.

    31
  32. @ryzokuken @ryzokuken Idea: Let’s not be slow. BUT HOW? Speculative

    Optimization 32
  33. @ryzokuken @ryzokuken But wait… What even is Speculative Optimization? 33

  34. @ryzokuken @ryzokuken Speculative Optimization [spek-yuh-ley-tiv, -luh-tiv] [op-tuh-muh-zey-shuhn] Noun. Guessing what is about to happen based on what has happened. 34
  35. @ryzokuken @ryzokuken Speculative Optimization [spek-yuh-ley-tiv, -luh-tiv] [op-tuh-muh-zey-shuhn] Noun. Guessing what is about to happen based on what has happened. Making assumptions about possible future inputs based on previous inputs. 35
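A toy sketch of that definition (not V8's real machinery): an "optimized" fast path compiled under the assumption that both inputs are small integers, with a guard that bails out to the generic path when the assumption fails.

```javascript
// Toy model of speculative optimization. The guard checks the speculation
// ("both operands are small integers"); failing the guard "deoptimizes"
// back to the fully generic implementation.
function genericAdd(x, y) {
  return x + y; // full JS "+" semantics
}

let deopts = 0;

function speculativeAdd(x, y) {
  if (Number.isInteger(x) && Number.isInteger(y)) {
    return x + y; // pretend this is specialized integer machine code
  }
  deopts++;       // speculation failed: fall back to the generic path
  return genericAdd(x, y);
}

console.log(speculativeAdd(1, 2));     // 3 (fast path)
console.log(speculativeAdd(1.1, 2.2)); // 3.3000000000000003 (generic path)
console.log(deopts);                   // 1
```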
  36. @ryzokuken @ryzokuken Top three reasons why a JavaScript function might

    need optimization: 1. Dynamic Types 2. Dynamic Types 3. Dynamic Types Alright, but how does it help? 36
  37. @ryzokuken @ryzokuken Source Code Parser Abstract Syntax Tree Compiler Machine

    Code (T) 37
  38. @ryzokuken @ryzokuken Source Code Parser Abstract Syntax Tree 1. Baseline

    Interpreter 2. Optimizing Compiler Machine Code (T) 38
  39. @ryzokuken @ryzokuken AST Baseline Interpreter Optimizing Compiler Bytecode Optimized Code

    optimize deoptimize + profiling data 39
  40. @ryzokuken @ryzokuken Compiler Baseline Interpreter Optimizing Compiler 40

  41. @ryzokuken @ryzokuken AST Baseline Interpreter Optimizing Compiler Bytecode Optimized Code

    optimize deoptimize 41 + profiling data
  42. @ryzokuken @ryzokuken AST Bytecode Optimized Code optimize deoptimize + profiling

    data 42
  43. @ryzokuken @ryzokuken 43 Okay, that’s great. But how does it

    even work? Let’s consider an example.
  44. @ryzokuken @ryzokuken 44
    function add(x, y) { return x + y; }
    console.log(add(1, 2));
  45. @ryzokuken @ryzokuken 45 Not too intuitive? I’ve got you covered,

    fam. Let’s see how the Abstract Syntax Tree actually looks.
  46. @ryzokuken @ryzokuken 46
    [generating bytecode for function: add]
    --- AST ---
    FUNC at 12
    . KIND 0
    . SUSPEND COUNT 0
    . NAME "add"
    . PARAMS
    . . VAR (...) (mode = VAR) "x"
    . . VAR (...) (mode = VAR) "y"
    . RETURN at 22
    . . ADD at 31
    . . . VAR PROXY parameter[0] (...) (mode = VAR) "x"
    . . . VAR PROXY parameter[1] (...) (mode = VAR) "y"
  47. @ryzokuken @ryzokuken 47 [generating bytecode for function: add] --- AST

    --- FUNC at 12 . KIND 0 . SUSPEND COUNT 0 . NAME "add" . PARAMS . . VAR (...) (mode = VAR) "x" . . VAR (...) (mode = VAR) "y" . RETURN at 22 . . ADD at 31 . . . VAR PROXY parameter[0] (...) (mode = VAR) "x" . . . VAR PROXY parameter[1] (...) (mode = VAR) "y" FUNC
  49. @ryzokuken @ryzokuken 49 [generating bytecode for function: add] --- AST

    --- FUNC at 12 . KIND 0 . SUSPEND COUNT 0 . NAME "add" . PARAMS . . VAR (...) (mode = VAR) "x" . . VAR (...) (mode = VAR) "y" . RETURN at 22 . . ADD at 31 . . . VAR PROXY parameter[0] (...) (mode = VAR) "x" . . . VAR PROXY parameter[1] (...) (mode = VAR) "y" FUNC NAME “add”
  50. @ryzokuken @ryzokuken 50 [generating bytecode for function: add] --- AST

    --- FUNC at 12 . KIND 0 . SUSPEND COUNT 0 . NAME "add" . PARAMS . . VAR (...) (mode = VAR) "x" . . VAR (...) (mode = VAR) "y" . RETURN at 22 . . ADD at 31 . . . VAR PROXY parameter[0] (...) (mode = VAR) "x" . . . VAR PROXY parameter[1] (...) (mode = VAR) "y" FUNC NAME “add” PARAMS
  51. @ryzokuken @ryzokuken 51 [generating bytecode for function: add] --- AST

    --- FUNC at 12 . KIND 0 . SUSPEND COUNT 0 . NAME "add" . PARAMS . . VAR (...) (mode = VAR) "x" . . VAR (...) (mode = VAR) "y" . RETURN at 22 . . ADD at 31 . . . VAR PROXY parameter[0] (...) (mode = VAR) "x" . . . VAR PROXY parameter[1] (...) (mode = VAR) "y" FUNC NAME “add” PARAMS VAR “x”
  52. @ryzokuken @ryzokuken 52 [generating bytecode for function: add] --- AST

    --- FUNC at 12 . KIND 0 . SUSPEND COUNT 0 . NAME "add" . PARAMS . . VAR (...) (mode = VAR) "x" . . VAR (...) (mode = VAR) "y" . RETURN at 22 . . ADD at 31 . . . VAR PROXY parameter[0] (...) (mode = VAR) "x" . . . VAR PROXY parameter[1] (...) (mode = VAR) "y" FUNC NAME “add” PARAMS VAR “x” VAR “y”
  53. @ryzokuken @ryzokuken 53 [generating bytecode for function: add] --- AST

    --- FUNC at 12 . KIND 0 . SUSPEND COUNT 0 . NAME "add" . PARAMS . . VAR (...) (mode = VAR) "x" . . VAR (...) (mode = VAR) "y" . RETURN at 22 . . ADD at 31 . . . VAR PROXY parameter[0] (...) (mode = VAR) "x" . . . VAR PROXY parameter[1] (...) (mode = VAR) "y" FUNC NAME “add” PARAMS RETURN VAR “x” VAR “y”
  54. @ryzokuken @ryzokuken 54 [generating bytecode for function: add] --- AST

    --- FUNC at 12 . KIND 0 . SUSPEND COUNT 0 . NAME "add" . PARAMS . . VAR (...) (mode = VAR) "x" . . VAR (...) (mode = VAR) "y" . RETURN at 22 . . ADD at 31 . . . VAR PROXY parameter[0] (...) (mode = VAR) "x" . . . VAR PROXY parameter[1] (...) (mode = VAR) "y" FUNC NAME “add” PARAMS RETURN VAR “x” VAR “y” ADD
  55. @ryzokuken @ryzokuken 55 [generating bytecode for function: add] --- AST

    --- FUNC at 12 . KIND 0 . SUSPEND COUNT 0 . NAME "add" . PARAMS . . VAR (...) (mode = VAR) "x" . . VAR (...) (mode = VAR) "y" . RETURN at 22 . . ADD at 31 . . . VAR PROXY parameter[0] (...) (mode = VAR) "x" . . . VAR PROXY parameter[1] (...) (mode = VAR) "y" FUNC NAME “add” PARAMS RETURN VAR “x” VAR “y” ADD PROXY “x”
  56. @ryzokuken @ryzokuken 56 [generating bytecode for function: add] --- AST

    --- FUNC at 12 . KIND 0 . SUSPEND COUNT 0 . NAME "add" . PARAMS . . VAR (...) (mode = VAR) "x" . . VAR (...) (mode = VAR) "y" . RETURN at 22 . . ADD at 31 . . . VAR PROXY parameter[0] (...) (mode = VAR) "x" . . . VAR PROXY parameter[1] (...) (mode = VAR) "y" FUNC NAME “add” PARAMS RETURN VAR “x” VAR “y” ADD PROXY “x” PROXY “y”
  57. @ryzokuken @ryzokuken 57 Scope Resolution FUNC NAME “add” PARAMS RETURN

    VAR “x” VAR “y” ADD PROXY “y” PROXY “x”
  58. @ryzokuken @ryzokuken 58 Once this is done, we can generate bytecode. Let’s step through the bytecode. Hope you like Assembly.
  59. @ryzokuken @ryzokuken 59
    [generated bytecode for function: add]
    Parameter count 3
    Frame size 0
    12 E> 0x38d5f59df42 @ 0 : a5       StackCheck
    22 S> 0x38d5f59df43 @ 1 : 25 02    Ldar a1
    31 E> 0x38d5f59df45 @ 3 : 34 03 00 Add a0, [0]
    35 S> 0x38d5f59df48 @ 6 : a9       Return
    Constant pool (size = 0)
    Handler Table (size = 0)
  63. @ryzokuken @ryzokuken 63 StackCheck Ldar a1 Add a0, [0] Return
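The four bytecodes above can be walked through as a toy register machine. Ignition is accumulator-based; this sketch only mimics the control flow, it is not real Ignition:

```javascript
// Toy walk-through of the slide's Ignition bytecodes:
// StackCheck; Ldar a1; Add a0, [0]; Return.
function runAddBytecode(a0, a1) {
  let accumulator;
  // StackCheck: make sure there is stack space left (a no-op here).
  accumulator = a1;               // Ldar a1: load a1 into the accumulator
  accumulator = a0 + accumulator; // Add a0, [0]: add a0, record feedback in slot 0
  return accumulator;             // Return: return the accumulator
}

console.log(runAddBytecode(1, 2)); // 3
```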

  67. @ryzokuken @ryzokuken 67 Two important things happen here. 1. Addition:

    It’s complicated. 2. Profiling: Feedback Vectors help.
  68. @ryzokuken @ryzokuken 68 StackCheck Ldar a1 Add a0, [0] Return

  69. @ryzokuken @ryzokuken 69 What about that Speculative Optimization thingie he

    was talking about earlier? Now that we finally have the baseline code running, let’s put that profiling data to good use. But wait...
  70. @ryzokuken @ryzokuken 70 Oh, also, remember that time when I

    said addition in JavaScript was complicated?
  71. @ryzokuken @ryzokuken 71

  72. @ryzokuken @ryzokuken 72 Let me break it down for you.

  73. @ryzokuken @ryzokuken 73 Number String Object
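Some of that complexity is visible from plain JavaScript: "+" dispatches on operand types at runtime (ToPrimitive, then either string concatenation or numeric addition, per the spec).

```javascript
// Why "+" is complicated: the result depends on the runtime types.
console.log(1 + 2);         // 3                   numeric addition
console.log(1 + "2");       // "12"                a string operand forces concatenation
console.log(1 + true);      // 2                   booleans coerce to numbers
console.log(1 + null);      // 1                   null coerces to 0
console.log(1 + undefined); // NaN                 undefined coerces to NaN
console.log(1 + {});        // "1[object Object]"  objects go through ToPrimitive
console.log(1n + 2n);       // 3n                  BigInt addition
```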

  74. @ryzokuken @ryzokuken 74

  75. @ryzokuken @ryzokuken 75 Awesome! But how do we make that assumption? Remember Feedback Vectors? Let’s see how they really look to see how it all works.
  76. @ryzokuken @ryzokuken 76
    function add(x, y) { return x + y; }
    console.log(add(1, 2));
  77. @ryzokuken @ryzokuken 77
    function add(x, y) { return x + y; }
    console.log(add(1, 2));
    %DebugPrint(add);
  78. @ryzokuken @ryzokuken 78
    ...
    - feedback vector: 0x18bc7711df89: [FeedbackVector] in OldSpace
    - map: 0x18bc38c00bc1 <Map>
    - length: 1
    - shared function info: 0x18bc7711dc59 <SharedFunctionInfo add>
    - optimized code/marker: OptimizationMarker::kNone
    - invocation count: 1
    - profiler ticks: 0
    - slot #0 BinaryOp BinaryOp:SignedSmall { [0]: 1 }
    ...
  79. @ryzokuken @ryzokuken 79
    function add(x, y) { return x + y; }
    console.log(add(1, 2));
    %DebugPrint(add);
  80. @ryzokuken @ryzokuken 80
    function add(x, y) { return x + y; }
    console.log(add(1, 2));
    console.log(add(1.1, 2.2));
    %DebugPrint(add);
  81. @ryzokuken @ryzokuken 81
    ...
    - feedback vector: 0xcbd92d1dfe1: [FeedbackVector] in OldSpace
    - map: 0x0cbd4f880bc1 <Map>
    - length: 1
    - shared function info: 0x0cbd92d1dc59 <SharedFunctionInfo add>
    - optimized code/marker: OptimizationMarker::kNone
    - invocation count: 2
    - profiler ticks: 0
    - slot #0 BinaryOp BinaryOp:Number { [0]: 7 }
    ...
  82. @ryzokuken @ryzokuken 82

  83. @ryzokuken @ryzokuken 83 Question: Is there a method to all

    this? Answer: Yes, it is called the Feedback Lattice.
  84. @ryzokuken @ryzokuken 84 None SignedSmall Number BigInt String NumberOrOddball Any
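The lattice on the slide can be sketched as a join ("least upper bound") operation. This is a deliberate simplification for illustration, not V8's actual lattice code: it models the chain None < SignedSmall < Number < NumberOrOddball < Any, with String and BigInt as side branches that generalize to Any when mixed with anything else.

```javascript
// Toy join over a simplified feedback lattice. Feedback only ever moves
// upward: once a slot has generalized, it never becomes more specific.
const CHAIN = { None: 0, SignedSmall: 1, Number: 2, NumberOrOddball: 3, Any: 4 };

function join(a, b) {
  if (a === b) return a;                 // same feedback: stay put
  if (a === "None") return b;            // None is the bottom element
  if (b === "None") return a;
  if (!(a in CHAIN) || !(b in CHAIN)) {
    return "Any";                        // String/BigInt mixed with anything else
  }
  return CHAIN[a] > CHAIN[b] ? a : b;    // otherwise take the higher chain element
}

console.log(join("None", "SignedSmall"));   // "SignedSmall"
console.log(join("SignedSmall", "Number")); // "Number"
console.log(join("Number", "String"));      // "Any"
```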

  92. @ryzokuken @ryzokuken 92 This data in the Feedback Vectors is

    used to finally optimize your code once a function is hot. When is a function “hot”? Let’s see what actually happens by explicitly triggering optimization.
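"Hot" can be modeled with a toy invocation counter; V8's real heuristics are more involved (invocation counts, loop iterations, code size, and so on), so this is only a sketch of the idea:

```javascript
// Toy "hotness" model: count invocations and flag the function for
// optimization once a threshold is crossed.
function makeHotCounter(threshold = 3) {
  let calls = 0;
  return function record() {
    calls += 1;
    return calls >= threshold; // true once the function is "hot"
  };
}

const record = makeHotCounter();
console.log(record()); // false
console.log(record()); // false
console.log(record()); // true -- hand off to the optimizing compiler
```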
  93. @ryzokuken @ryzokuken 93
    function add(x, y) { return x + y; }
    add(1, 2);
  94. @ryzokuken @ryzokuken 94

  95. @ryzokuken @ryzokuken 95
    function add(x, y) { return x + y; }
    add(1, 2); // Warm up with SignedSmall feedback.
    %OptimizeFunctionOnNextCall(add);
    add(1, 2); // Optimize and run generated code.
  96. @ryzokuken @ryzokuken 96 Question: What did we just do?

  97. @ryzokuken @ryzokuken 97 Let’s see the optimized code generated by

    passing the --print-opt-code flag to d8.
  98. @ryzokuken @ryzokuken 98
    leaq rbx,[rip+0xfffffff9]
    movq rbx,[rcx-0x20]
    testb [rbx+0xf],0x1
    jz 0x198104882dfb <+0x3b>
    movq r10,0x10efbfde0 (CompileLazyDeoptimizedCode)
    jmp r10
    push rbp
    movq rbp,rsp
    push rsi
    push rdi
    cmpq rsp,[r13+0x11e8] (root (stack_limit))
    jna 0x198104882e55 <+0x95>
    movq rdx,[rbp+0x18]
    testb rdx,0x1
    jnz 0x198104882e7b <+0xbb>
    movq rcx,[rbp+0x10]
    testb rcx,0x1
    jnz 0x198104882e87 <+0xc7>
    movq rdi,rcx
    shrq rdi, 32
    movq r8,rdx
    shrq r8, 32
    addl rdi,r8
    jo 0x198104882e93 <+0xd3>
    shlq rdi, 32
    movq rax,rdi
    movq rsp,rbp
    pop rbp
    ret 0x18
  99. @ryzokuken @ryzokuken 99 leaq rbx,[rip+0xfffffff9] movq rbx,[rcx-0x20] testb [rbx+0xf],0x1 jz

    0x198104882dfb <+0x3b> movq r10,0x10efbfde0 (CompileLazyDeoptimizedCode) jmp r10 push rbp movq rbp,rsp push rsi push rdi cmpq rsp,[r13+0x11e8] (root (stack_limit)) jna 0x198104882e55 <+0x95> movq rdx,[rbp+0x18] testb rdx,0x1 jnz 0x198104882e7b <+0xbb> movq rcx,[rbp+0x10] testb rcx,0x1 jnz 0x198104882e87 <+0xc7> movq rdi,rcx shrq rdi, 32 movq r8,rdx shrq r8, 32 addl rdi,r8 jo 0x198104882e93 <+0xd3> shlq rdi, 32 movq rax,rdi movq rsp,rbp pop rbp ret 0x18 Prologue
  100. @ryzokuken @ryzokuken 100 leaq rbx,[rip+0xfffffff9] movq rbx,[rcx-0x20] testb [rbx+0xf],0x1 jz

    0x198104882dfb <+0x3b> movq r10,0x10efbfde0 (CompileLazyDeoptimizedCode) jmp r10 push rbp movq rbp,rsp push rsi push rdi cmpq rsp,[r13+0x11e8] (root (stack_limit)) jna 0x198104882e55 <+0x95> movq rdx,[rbp+0x18] testb rdx,0x1 jnz 0x198104882e7b <+0xbb> movq rcx,[rbp+0x10] testb rcx,0x1 jnz 0x198104882e87 <+0xc7> movq rdi,rcx shrq rdi, 32 movq r8,rdx shrq r8, 32 addl rdi,r8 jo 0x198104882e93 <+0xd3> shlq rdi, 32 movq rax,rdi movq rsp,rbp pop rbp ret 0x18 Check x is Smi
  101. @ryzokuken @ryzokuken 101 leaq rbx,[rip+0xfffffff9] movq rbx,[rcx-0x20] testb [rbx+0xf],0x1 jz

    0x198104882dfb <+0x3b> movq r10,0x10efbfde0 (CompileLazyDeoptimizedCode) jmp r10 push rbp movq rbp,rsp push rsi push rdi cmpq rsp,[r13+0x11e8] (root (stack_limit)) jna 0x198104882e55 <+0x95> movq rdx,[rbp+0x18] testb rdx,0x1 jnz 0x198104882e7b <+0xbb> movq rcx,[rbp+0x10] testb rcx,0x1 jnz 0x198104882e87 <+0xc7> movq rdi,rcx shrq rdi, 32 movq r8,rdx shrq r8, 32 addl rdi,r8 jo 0x198104882e93 <+0xd3> shlq rdi, 32 movq rax,rdi movq rsp,rbp pop rbp ret 0x18 Check y is Smi
  102. @ryzokuken @ryzokuken 102 leaq rbx,[rip+0xfffffff9] movq rbx,[rcx-0x20] testb [rbx+0xf],0x1 jz

    0x198104882dfb <+0x3b> movq r10,0x10efbfde0 (CompileLazyDeoptimizedCode) jmp r10 push rbp movq rbp,rsp push rsi push rdi cmpq rsp,[r13+0x11e8] (root (stack_limit)) jna 0x198104882e55 <+0x95> movq rdx,[rbp+0x18] testb rdx,0x1 jnz 0x198104882e7b <+0xbb> movq rcx,[rbp+0x10] testb rcx,0x1 jnz 0x198104882e87 <+0xc7> movq rdi,rcx shrq rdi, 32 movq r8,rdx shrq r8, 32 addl rdi,r8 jo 0x198104882e93 <+0xd3> shlq rdi, 32 movq rax,rdi movq rsp,rbp pop rbp ret 0x18 Smi → Word32 (x)
  103. @ryzokuken @ryzokuken 103 leaq rbx,[rip+0xfffffff9] movq rbx,[rcx-0x20] testb [rbx+0xf],0x1 jz

    0x198104882dfb <+0x3b> movq r10,0x10efbfde0 (CompileLazyDeoptimizedCode) jmp r10 push rbp movq rbp,rsp push rsi push rdi cmpq rsp,[r13+0x11e8] (root (stack_limit)) jna 0x198104882e55 <+0x95> movq rdx,[rbp+0x18] testb rdx,0x1 jnz 0x198104882e7b <+0xbb> movq rcx,[rbp+0x10] testb rcx,0x1 jnz 0x198104882e87 <+0xc7> movq rdi,rcx shrq rdi, 32 movq r8,rdx shrq r8, 32 addl rdi,r8 jo 0x198104882e93 <+0xd3> shlq rdi, 32 movq rax,rdi movq rsp,rbp pop rbp ret 0x18 Smi → Word32 (y)
  104. @ryzokuken @ryzokuken 104 leaq rbx,[rip+0xfffffff9] movq rbx,[rcx-0x20] testb [rbx+0xf],0x1 jz

    0x198104882dfb <+0x3b> movq r10,0x10efbfde0 (CompileLazyDeoptimizedCode) jmp r10 push rbp movq rbp,rsp push rsi push rdi cmpq rsp,[r13+0x11e8] (root (stack_limit)) jna 0x198104882e55 <+0x95> movq rdx,[rbp+0x18] testb rdx,0x1 jnz 0x198104882e7b <+0xbb> movq rcx,[rbp+0x10] testb rcx,0x1 jnz 0x198104882e87 <+0xc7> movq rdi,rcx shrq rdi, 32 movq r8,rdx shrq r8, 32 addl rdi,r8 jo 0x198104882e93 <+0xd3> shlq rdi, 32 movq rax,rdi movq rsp,rbp pop rbp ret 0x18 Add x and y Overflow Check
  105. @ryzokuken @ryzokuken 105 leaq rbx,[rip+0xfffffff9] movq rbx,[rcx-0x20] testb [rbx+0xf],0x1 jz

    0x198104882dfb <+0x3b> movq r10,0x10efbfde0 (CompileLazyDeoptimizedCode) jmp r10 push rbp movq rbp,rsp push rsi push rdi cmpq rsp,[r13+0x11e8] (root (stack_limit)) jna 0x198104882e55 <+0x95> movq rdx,[rbp+0x18] testb rdx,0x1 jnz 0x198104882e7b <+0xbb> movq rcx,[rbp+0x10] testb rcx,0x1 jnz 0x198104882e87 <+0xc7> movq rdi,rcx shrq rdi, 32 movq r8,rdx shrq r8, 32 addl rdi,r8 jo 0x198104882e93 <+0xd3> shlq rdi, 32 movq rax,rdi movq rsp,rbp pop rbp ret 0x18 Result to Smi
  106. @ryzokuken @ryzokuken 106 leaq rbx,[rip+0xfffffff9] movq rbx,[rcx-0x20] testb [rbx+0xf],0x1 jz

    0x198104882dfb <+0x3b> movq r10,0x10efbfde0 (CompileLazyDeoptimizedCode) jmp r10 push rbp movq rbp,rsp push rsi push rdi cmpq rsp,[r13+0x11e8] (root (stack_limit)) jna 0x198104882e55 <+0x95> movq rdx,[rbp+0x18] testb rdx,0x1 jnz 0x198104882e7b <+0xbb> movq rcx,[rbp+0x10] testb rcx,0x1 jnz 0x198104882e87 <+0xc7> movq rdi,rcx shrq rdi, 32 movq r8,rdx shrq r8, 32 addl rdi,r8 jo 0x198104882e93 <+0xd3> shlq rdi, 32 movq rax,rdi movq rsp,rbp pop rbp ret 0x18 Epilogue
  107. @ryzokuken @ryzokuken 107 Now, let’s try something fun.

  108. @ryzokuken @ryzokuken 108
    function add(x, y) { return x + y; }
    add(1, 2); // Warm up with SignedSmall feedback.
    %OptimizeFunctionOnNextCall(add);
    add(1, 2); // Optimize and run generated code.
  109. @ryzokuken @ryzokuken 109
    function add(x, y) { return x + y; }
    add(1, 2); // Warm up with SignedSmall feedback.
    %OptimizeFunctionOnNextCall(add);
    add(1, 2); // Optimize and run generated code.
    add(1.1, 2.2); // DEOPTIMIZE!
  110. @ryzokuken @ryzokuken 110

  111. @ryzokuken @ryzokuken 111 Note: I’m obviously kidding. Deoptimization is no

    laughing matter.
  112. @ryzokuken @ryzokuken 112 Let’s see how the code is actually

    deoptimized by passing the --trace-deopt flag to d8.
  113. @ryzokuken @ryzokuken 113
    [deoptimizing (DEOPT eager): begin 0x08d97929dbc1 <JSFunction add (sfi = 0x8d97929d999)> (opt #0) @0, FP to SP delta: 24, caller sp: 0x7ffee7bee248]
    ;;; deoptimize at <add.js:2:12>, not a Smi
    reading input frame add => bytecode_offset=0, args=3, height=1, retval=0(#0); inputs:
    0: 0x08d97929dbc1 ; [fp - 16] 0x08d97929dbc1 <JSFunction add (sfi = 0x8d97929d999)>
    1: 0x08d9c6b81521 ; [fp + 32] 0x08d9c6b81521 <JSGlobal Object>
    2: 0x08d97929da31 ; rdx 0x08d97929da31 <HeapNumber 1.1>
    3: 0x08d97929da41 ; [fp + 16] 0x08d97929da41 <HeapNumber 2.2>
    4: 0x08d979281749 ; [fp - 24] 0x08d979281749 <NativeContext[247]>
    5: 0x08d92b080e19 ; (literal 2) 0x08d92b080e19 <Odd Oddball: optimized_out>
    translating interpreted frame add => bytecode_offset=0, height=8
    0x7ffee7bee240: [top + 72] <- 0x08d9c6b81521 <JSGlobal Object> ; stack parameter (input #1)
    0x7ffee7bee238: [top + 64] <- 0x08d97929da31 <HeapNumber 1.1> ; stack parameter (input #2)
    0x7ffee7bee230: [top + 56] <- 0x08d97929da41 <HeapNumber 2.2> ; stack parameter (input #3)
    -------------------------
    0x7ffee7bee228: [top + 48] <- 0x00010d2881e8 ; caller's pc
    0x7ffee7bee220: [top + 40] <- 0x7ffee7bee290 ; caller's fp
    0x7ffee7bee218: [top + 32] <- 0x08d979281749 <NativeContext[247]> ; context (input #4)
    0x7ffee7bee210: [top + 24] <- 0x08d97929dbc1 <JSFunction add (sfi = 0x8d97929d999)> ; function (input #0)
    0x7ffee7bee208: [top + 16] <- 0x08d97929dca1 <BytecodeArray[7]> ; bytecode array
    0x7ffee7bee200: [top + 8] <- 0x003900000000 <Smi 57> ; bytecode offset
    -------------------------
    0x7ffee7bee1f8: [top + 0] <- 0x08d92b080e19 <Odd Oddball: optimized_out> ; accumulator (input #5)
    [deoptimizing (eager): end 0x08d97929dbc1 <JSFunction add (sfi = 0x8d97929d999)> @0 => node=0, pc=0x00010d2886c0, caller sp=0x7ffee7bee248, took 1.163 ms]
  114. @ryzokuken @ryzokuken Don’t worry, I got you. 114

  118. @ryzokuken @ryzokuken 118 Pro Tip: If you don’t want your code to be slow, try to stick to consistent types. It makes things easier for the engine.
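The advice above can be illustrated with object shapes, a closely related mechanism (the function names and data here are made up for illustration):

```javascript
// Keeping the values that flow into a function type-consistent lets the
// engine stay specialized; mixing shapes at one call site still gives
// correct results, but can keep it off the fastest path.
function magnitude(v) {
  return Math.sqrt(v.x * v.x + v.y * v.y);
}

// Monomorphic use: every call sees the same shape { x, y }.
const points = [{ x: 3, y: 4 }, { x: 6, y: 8 }];
console.log(points.map(magnitude)); // [ 5, 10 ]

// Mixed shapes: extra properties and different property order force the
// engine to handle several object shapes at the same call site.
const mixed = [{ x: 3, y: 4 }, { y: 8, x: 6, label: "b" }];
console.log(mixed.map(magnitude)); // [ 5, 10 ]
```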
  119. @ryzokuken Special Thanks • Benedikt Meurer (@bmeurer) • Yang Guo

    (@hashseed) • Sathya Gunasekaran (@_gsathya) • Jakob Gruber (@schuay) • Sigurd Schneider (@sigurdschn) • ...and everyone else from the V8 team. • The organizers. 119
  120. @ryzokuken @ryzokuken Thank you! 120