Slide 1

Slide 1 text

No content

Slide 2

Slide 2 text

No content

Slide 3

Slide 3 text

Copyright © 2018, Oracle and/or its affiliates. All rights reserved. | Confidential – Oracle Internal/Restricted/Highly Restricted
standalone · Automatic transformation of interpreters to compilers · Engine integration (native and managed)
https://gotober.com/2018/sessions/650/graalvm-run-programs-faster-anywhere

Slide 4

Slide 4 text

https://gotober.com/2018/sessions/650/graalvm-run-programs-faster-anywhere

Slide 5

Slide 5 text

No content

Slide 6

Slide 6 text

The performance of many dynamic language implementations suffers from high allocation rates and runtime type checks. This makes dynamic languages less applicable to purely algorithmic problems, despite their growing popularity. In this paper we present a simple compiler optimization based on online partial evaluation to remove object allocations and runtime type checks in the context of a tracing JIT. We evaluate the optimization using a Python VM and find that it gives good results for all our (real-life) benchmarks.

Slide 7

Slide 7 text

The performance of many dynamic language implementations suffers from high allocation rates and runtime type checks. This makes dynamic languages less applicable to purely algorithmic problems, despite their growing popularity. In this paper we present a simple compiler optimization based on online partial evaluation to remove object allocations and runtime type checks in the context of a tracing JIT. We evaluate the optimization using a Python VM and find that it gives good results for all our (real-life) benchmarks.

Slide 8

Slide 8 text

No content

Slide 9

Slide 9 text

Most high-performance dynamic language virtual machines duplicate language semantics in the interpreter, compiler, and runtime system. This violates the principle to not repeat yourself. In contrast, we define languages solely by writing an interpreter. The interpreter performs specializations, e.g., augments the interpreted program with type information and profiling information. Compiled code is derived automatically using partial evaluation while incorporating these specializations. This makes partial evaluation practical in the context of dynamic languages: it reduces the size of the compiled code while still compiling all parts of an operation that are relevant for a particular program. When a speculation fails, execution transfers back to the interpreter, the program re-specializes in the interpreter, and later partial evaluation again transforms the new state of the interpreter to compiled code.

Slide 10

Slide 10 text

Most high-performance dynamic language virtual machines duplicate language semantics in the interpreter, compiler, and runtime system. This violates the principle to not repeat yourself. In contrast, we define languages solely by writing an interpreter. The interpreter performs specializations, e.g., augments the interpreted program with type information and profiling information. Compiled code is derived automatically using partial evaluation while incorporating these specializations. This makes partial evaluation practical in the context of dynamic languages: it reduces the size of the compiled code while still compiling all parts of an operation that are relevant for a particular program. When a speculation fails, execution transfers back to the interpreter, the program re-specializes in the interpreter, and later partial evaluation again transforms the new state of the interpreter to compiled code.

Slide 11

Slide 11 text

We implement the language semantics only once in a simple form: as a language interpreter written in a managed high-level host language. Optimized compiled code is derived from the interpreter using partial evaluation. This approach and its obvious benefits were described in 1971 by Y. Futamura, and is known as the first Futamura projection. To the best of our knowledge no prior high-performance language implementation used this approach.

Slide 12

Slide 12 text

We implement the language semantics only once in a simple form: as a language interpreter written in a managed high-level host language. Optimized compiled code is derived from the interpreter using partial evaluation. This approach and its obvious benefits were described in 1971 by Y. Futamura, and is known as the first Futamura projection. To the best of our knowledge no prior high-performance language implementation used this approach.

Slide 13

Slide 13 text

We implement the language semantics only once in a simple form: as a language interpreter written in a managed high-level host language. Optimized compiled code is derived from the interpreter using partial evaluation. This approach and its obvious benefits were described in 1971 by Y. Futamura, and is known as the first Futamura projection. To the best of our knowledge no prior high-performance language implementation used this approach.

Slide 14

Slide 14 text

No content

Slide 15

Slide 15 text

https://codon.com/compilers-for-free

Slide 16

Slide 16 text

No content

Slide 17

Slide 17 text

Programs and Programming Languages

Slide 18

Slide 18 text

Programs

• We call a program a sequence of instructions that can be executed by a machine.
• The machine may be a virtual machine or a physical machine.
• In the following, when we say that a program is evaluated, we assume that there exists some machine that is able to execute these instructions.

Slide 19

Slide 19 text

Program Evaluation

• Consider a program P with input data D;
• when we evaluate P over D, it produces some output result R.

(Diagram: D fed to P produces R)

Slide 20

Slide 20 text

f(k, u) = k + u

(Diagram: inputs 3 and 4 fed to k + u produce 7)

Slide 21

Slide 21 text

Interpreters

• An interpreter I is a program:
• it evaluates some other given program P over some given data D, and it produces the output result R.
• We denote this with I(P, D)

(Diagram: P and D fed to I produce R)

Slide 22

Slide 22 text

f(k, u) = k + u

Instructions: add x y, sub x y, mul x y, ...

write(D)
while (has-more-instructions(P)):
    instr ← fetch-next-instruction(P)
    switch (op(instr)):
        case 'add':
            x ← read()
            y ← read()
            result ← x + y
            write(result)
        case ...
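
The fetch/dispatch loop above can be sketched as a small runnable interpreter. This is a hypothetical minimal version (the names `run`, `input`, `output` are illustrative, not from the slides): a program P is a list of instructions and the data D is a queue of input numbers.

```javascript
// Minimal sketch of the slide's interpreter loop (illustrative, not real VM code).
// P is a list of instructions; D is a queue of input numbers.
function run(P, D) {
  const input = [...D];          // incoming data, consumed by read()
  const output = [];             // results produced by write()
  const read = () => input.shift();
  const write = (v) => output.push(v);
  for (const instr of P) {       // while has-more-instructions(P)
    switch (instr.op) {          // dispatch on the fetched instruction
      case 'add': write(read() + read()); break;
      case 'sub': write(read() - read()); break;
      case 'mul': write(read() * read()); break;
      default: throw new Error(`unknown op: ${instr.op}`);
    }
  }
  return output;
}

// f(k, u) = k + u expressed as the one-instruction program [add]:
console.log(run([{ op: 'add' }], [3, 4])); // [ 7 ]
```

Note how all the interpreter's work (fetching, dispatching) happens at run time, for every instruction, every time the program is evaluated; this overhead is what the later slides attack.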

Slide 23

Slide 23 text

Compilers

• Let P be a program that evaluates to R when given D;
• a compiler C translates a source program P into an object program C(P) that, evaluated over an input D, still produces R.
• We denote this with C(P)(D)

(Diagram: C translates P into C(P); C(P) applied to D produces R)

Slide 24

Slide 24 text

f(k, u) = k + u

sum:
    lea eax, [rdi+rsi]
    ret

Slide 25

Slide 25 text

$ cat example.ml
print_string "Hello world!\n"
$ ocaml example.ml
Hello world!
$ ocamlc example.ml
$ ./a.out
Hello world!

Slide 26

Slide 26 text

C(P)(D) = I(P, D)
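
This equation can be made concrete with a toy example (hypothetical names, not from the slides): a tiny compiler C that translates an instruction list into a JavaScript function must agree with the interpreter I on every input.

```javascript
// Toy illustration of C(P)(D) = I(P, D) for a one-op language (assumed example).
const I = (P, D) => {                  // interpreter: walks P, consumes D
  const input = [...D], out = [];
  for (const op of P) {
    if (op === 'add') out.push(input.shift() + input.shift());
  }
  return out;
};

const C = (P) => {                     // compiler: emits a JS function for P
  const body = P
    .map(op => op === 'add' ? 'out.push(input.shift() + input.shift());' : '')
    .join('\n');
  return new Function('D', `const input = [...D], out = [];\n${body}\nreturn out;`);
};

const P = ['add', 'add'];
const D = [3, 4, 10, 20];
console.log(I(P, D), C(P)(D)); // both produce [7, 30]
```

The interpreter pays the dispatch cost on every run; the compiled function C(P) paid it once, at translation time.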

Slide 27

Slide 27 text

Partial Evaluation

Slide 28

Slide 28 text

Partial Evaluation (intuition)

Let us have a computation f of two parameters k, u: f(k, u)

• Now suppose that f is often called with k = 5;
• f5(u) := "f by substituting 5 for k and doing all possible computation based upon value 5"
• Partial evaluation is the process of transforming f(5, u) into f5(u)

Slide 29

Slide 29 text

No content

Slide 30

Slide 30 text

No content

Slide 31

Slide 31 text

This is Currying! I Know This!

• Not exactly! In functional programming, currying or partial application* is f5(u) := f(5, u)

let f = (k, u) => k * (k * (k+1) + u+1) + u*u;
let f5 = (u) => f(5, u);

• In a functional programming language this usually does not change the program that implements f

* Although, strictly speaking, they are not synonyms; see https://en.wikipedia.org/wiki/Currying

Slide 32

Slide 32 text

Simplification

let f = (k, u) => k * (k * (k+1) + u + 1) + u * u;

By fixing k = 5 and simplifying:

let f5 = (u) => 5 * (31 + u) + u * u;
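
The simplification is only valid if f5 agrees with f(5, ·) on every input; a quick spot-check of the two definitions from the slide:

```javascript
// Check that the simplified f5 agrees with f(5, u) on sample inputs.
let f  = (k, u) => k * (k * (k + 1) + u + 1) + u * u;
// fixing k = 5: 5 * (5 * 6 + u + 1) + u * u = 5 * (31 + u) + u * u
let f5 = (u) => 5 * (31 + u) + u * u;

for (const u of [0, 1, 2, 10, -3]) {
  console.log(u, f(5, u) === f5(u)); // true for each sample u
}
```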

Slide 33

Slide 33 text

Rewriting

function pow(n, k) {
  if (k <= 0) { return 1; }
  else { return n * pow(n, k-1); }
}

function pow5(n) { return pow(n, 5); }

Slide 34

Slide 34 text

Rewriting

function pow(n, k) {
  if (k <= 0) { return 1; }
  else { return n * pow(n, k-1); }
}

function pow5(n) { return n * pow(n, 4); }

Slide 35

Slide 35 text

Rewriting

function pow(n, k) {
  if (k <= 0) { return 1; }
  else { return n * pow(n, k-1); }
}

function pow5(n) { return n * n * pow(n, 3); }

Slide 36

Slide 36 text

Rewriting

function pow(n, k) {
  if (k <= 0) { return 1; }
  else { return n * pow(n, k-1); }
}

function pow5(n) { return n * n * n * n * n; }

Slide 37

Slide 37 text

Rewriting

function pow(n, k) {
  if (k <= 0) { return 1; }
  else { return n * pow(n, k-1); }
}

function pow5(n) { return n * n * n * n * n; }

In compilers this is sometimes called inlining.

Slide 38

Slide 38 text

Rewriting and Simplification

• Rewriting is similar to macro expansion and procedure integration (β-reduction, inlining) among a compiler's optimization techniques.
• It is often combined with simplification (constant folding).

Slide 39

Slide 39 text

Projection

The following equation holds for fk and f:

    fk(u) = f(k, u)    (1)

We call fk a projection of f at k.

Slide 40

Slide 40 text

Partial Evaluator

A partial computation procedure may be a computer program α, called a projection machine, partial computer, or partial evaluator.

    α(f, k) = fk    (2)

Slide 41

Slide 41 text

Partial Evaluator

(Diagram: k and u fed to f produce f(k, u))

Slide 42

Slide 42 text

Partial Evaluator

(Diagram: k and u fed to f produce f(k, u))

Slide 43

Slide 43 text

Partial Evaluator

(Diagram: α specializes f at k, producing fk; fk applied to u gives f(k, u))

Slide 44

Slide 44 text

Partial Evaluator

function pow(n, k) {
  if (k <= 0) { return 1; }
  else { return n * pow(n, k-1); }
}

let pow5 = alpha(pow, {k: 5}); // (n) => n * n * n * n * n;
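
The slide's `alpha` is abstract; a real partial evaluator works on the program text of any f. As a hedged toy, here is a specializer that only knows pow's shape: it unrolls the recursion on the statically known k and emits residual code in which only n remains dynamic (the name `alphaPow` is invented for this sketch).

```javascript
// Hypothetical, pow-only stand-in for the slide's alpha: it performs the
// recursion on the known k at specialization time, emitting straight-line code.
function pow(n, k) {
  if (k <= 0) { return 1; }
  else { return n * pow(n, k - 1); }
}

function alphaPow(k) {
  // residual program as source text: "n * n * ... * n * 1" with k factors,
  // the trailing 1 coming from the base case of the recursion
  const body = Array.from({ length: k }, () => 'n').concat('1').join(' * ');
  return new Function('n', `return ${body};`);
}

const pow5 = alphaPow(5); // residual body: n * n * n * n * n * 1
console.log(pow5(2), pow(2, 5)); // 32 32
```

All the control flow (the k comparisons and recursive calls) has been executed away; only the multiplications that depend on the dynamic input survive.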

Slide 45

Slide 45 text

Examples

The paper presents:
• Automatic theorem proving
• Pattern matching
• Syntax analyzer
• Automatically generating a compiler

Slide 46

Slide 46 text

Examples

The paper presents:
• Automatic theorem proving
• Pattern matching
• Syntax analyzer
• Automatically generating a compiler

Slide 47

Slide 47 text

Interpreters and Compilers (reprise)

• An interpreter is a program
  • This program takes another program and the data as input
  • It evaluates the program on the input and returns the result: I(P, D)
• A compiler is a program
  • This program takes a source program and returns an object program
  • The object program processes the input and returns the result: C(P)(D)

Slide 48

Slide 48 text

Partial Evaluation of an Interpreter

(Diagram: P and D fed to I produce R)

Slide 49

Slide 49 text

Partial Evaluation of an Interpreter

(Diagram: P and D fed to I produce R)

Slide 50

Slide 50 text

Partial Evaluation of an Interpreter

(Diagram: α specializes I at P, producing IP; feeding D into IP yields R)

Slide 51

Slide 51 text

First Equation of Partial Computation (First Projection)

(Diagram: D fed to IP yields R)

• That is, by feeding D into IP, you get R;
• in other words, IP is an object program.

    I(P, D) = C(P)(D)
    α(I, P) = IP
    IP = C(P)    (4)
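
The first projection can be made concrete with a toy one-op instruction language (all names here are invented for the sketch): α unrolls the interpreter's fetch/dispatch loop over the statically known P, so only the data-dependent reads and writes survive in the residual program IP. Feeding D into IP then gives the same R as I(P, D).

```javascript
// Sketch of alpha(I, P) = IP for a tiny language with a single 'add' op.
const I = (P, D) => {                  // the interpreter
  const input = [...D], out = [];
  for (const op of P) {
    if (op === 'add') out.push(input.shift() + input.shift());
  }
  return out;
};

// alpha specializes the interpreter at a known P: the loop and the dispatch
// are executed now, once; the emitted function is the object program IP.
const alpha = (P) => {
  const body = P
    .map(op => op === 'add' ? 'out.push(input.shift() + input.shift());' : '')
    .join('\n');
  return new Function('D', `const input = [...D], out = [];\n${body}\nreturn out;`);
};

const P = ['add'];
const IP = alpha(P);                   // IP = C(P): an object program for P
console.log(IP([3, 4]), I(P, [3, 4])); // both produce [7]
```

The residual IP contains no trace of the interpreter's loop or switch, which is exactly what makes it "compiled" code for P.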

Slide 52

Slide 52 text

f(k, u) = k + u    (add x y)

write(D)
while (has-more-instructions(P)):
    instr ← fetch-next(P)
    switch (op(instr)):
        case 'add':
            x ← read()
            y ← read()
            result ← x + y
            write(result)
        case ...

Slide 53

Slide 53 text

f(k, u) = k + u    (add x y)

write(D)
while (has-more-instructions(P)):
    instr ← fetch-next(P)
    switch (op(instr)):
        case 'add':
            x ← read()
            y ← read()
            result ← x + y
            write(result)
        case ...

...but this interpreter executes on a machine!

Slide 54

Slide 54 text

sum:
    lea eax, [rdi+rsi]
    ret

Slide 55

Slide 55 text

Partial Evaluation of an Interpreter

(Diagram: α specializes I at P, producing IP; feeding D into IP yields R)

Slide 56

Slide 56 text

Partial Evaluation of an Interpreter

(Diagram: α applied to I and P yields IP)

Slide 57

Slide 57 text

Partial Evaluation of the Partial Evaluation of an Interpreter

(Diagram: α applied to I and P yields IP)

Slide 58

Slide 58 text

Partial Evaluation of an Interpreter

(Diagram: α specialized at I gives αI, which maps P to IP)

Slide 59

Slide 59 text

Second Equation of Partial Computation (Second Projection)

(Diagram: αI maps P to IP)

    αI(P) = IP    (5)

• but IP, evaluated on D, gives R

Slide 60

Slide 60 text

Second Equation of Partial Computation (Second Projection)

(Diagram: αI maps P to C(P))

    αI(P) = IP    (5)

• but IP, evaluated on D, gives R
• then IP is an object program (IP = C(P))

Slide 61

Slide 61 text

Second Equation of Partial Computation (Second Projection)

(Diagram: αI maps P to C(P))

    αI(P) = IP    (5)

• but IP, evaluated on D, gives R
• then IP is an object program (IP = C(P))
• αI transforms a source program P to IP (i.e., C(P))

Slide 62

Slide 62 text

Second Equation of Partial Computation (Second Projection)

(Diagram: C maps P to C(P))

    αI(P) = IP    (5)

• but IP, evaluated on D, gives R
• then IP is an object program (IP = C(P))
• αI transforms a source program P to IP (i.e., C(P))
• then αI is a compiler

Slide 63

Slide 63 text

No content

Slide 64

Slide 64 text

Partial Evaluation of the Partial Evaluation of an Interpreter

(Diagram: α specialized at I gives αI = C, which maps P to IP)

Slide 65

Slide 65 text

Partial Evaluation of the Partial Evaluation of an Interpreter

(Diagram: α applied to α and I yields αI = C)

Slide 66

Slide 66 text

Partial Evaluation of the Partial Evaluation of an Interpreter

(Diagram: α applied to α and I yields αI = C)

Slide 67

Slide 67 text

Partial Evaluation of the Partial Evaluation of the Partial Evaluation of an Interpreter

(Diagram: α specialized at α gives αα, which maps I to αI = C)

Slide 68

Slide 68 text

Third Equation of Partial Computation (Third Projection)

(Diagram: αα maps I to αI = C)

    αα(I) = αI    (6)

• αα is a program that, given I, returns αI = C
• αI transforms a source program to an object program
• αI is a compiler
• αα is a compiler-compiler (a compiler generator), which generates a compiler αI from an interpreter I

Slide 69

Slide 69 text

No content

Slide 70

Slide 70 text

Partial Evaluation of the Partial Evaluation of an Interpreter

(Diagram: α applied to α and I yields αI = C)

Slide 71

Slide 71 text

Partial Evaluation of the Partial Evaluation of an Interpreter

(Diagram: α applied to α and I yields αI = C)

Slide 72

Slide 72 text

Partial Evaluation of the Partial Evaluation of the Partial Evaluation of an Interpreter

(Diagram: α specialized at α gives αα, which maps I to αI = C)

Slide 73

Slide 73 text

Projection of α at α

(Diagram: α applied to α and α)

Slide 74

Slide 74 text

Projection of α at α

(Diagram: α applied to α and α)

Slide 75

Slide 75 text

Projection of α at α

(Diagram: α specialized at α yields αα)

Slide 76

Slide 76 text

Fourth Equation of Partial Computation

    αα(α) = αα

(Diagram: αα maps α to αα)

• αα(I) = αI = C is a compiler for the language interpreter I; thus:
    αα(I) = αI = C
    αα(I)(P) = IP = C(P)
• I is an interpreter
• but at the beginning we said it could be any program
• so, what is αα?

Slide 77

Slide 77 text

What is αα?

    αα(I) = αI = C
    αα(α) = αα = C(α)

• αα is a "compiler" for the "language α"!
• In other words, by finding αα we can generate fk for any f, k:
    αα(f)(k) = fk    (Fourth Equation)
• That is, αα is a partial evaluation compiler (or generator).
• However, the author notes that, at the time of writing, there is no way to produce αα from α(α, α) for practical α's.

Slide 78

Slide 78 text

GraalVM

Slide 79

Slide 79 text

No content

Slide 80

Slide 80 text

We implement the language semantics only once in a simple form: as a language interpreter written in a managed high-level host language. Optimized compiled code is derived from the interpreter using partial evaluation. This approach and its obvious benefits were described in 1971 by Y. Futamura, and is known as the first Futamura projection. To the best of our knowledge no prior high- performance language implementation used this approach.

Slide 81

Slide 81 text

We implement the language semantics only once in a simple form: as a language interpreter written in a managed high-level host language. Optimized compiled code is derived from the interpreter using partial evaluation. This approach and its obvious benefits were described in 1971 by Y. Futamura, and is known as the first Futamura projection. To the best of our knowledge no prior high- performance language implementation used this approach.

Slide 82

Slide 82 text

We believe that a simple partial evaluation of a dynamic language interpreter cannot lead to high-performance compiled code: if the complete semantics for a language operation are included during partial evaluation, the size of the compiled code explodes; if language operations are not included during partial evaluation and remain runtime calls, performance is mediocre. To overcome these inherent problems, we write the interpreter in a style that anticipates and embraces partial evaluation. The interpreter specializes the executed instructions, e.g., collects type information and profiling information. The compiler speculates that the interpreter state is stable and creates highly optimized and compact machine code. If a speculation turns out to be wrong, i.e., was too optimistic, execution transfers back to the interpreter. The interpreter updates the information, so that the next partial evaluation is less speculative.

Slide 83

Slide 83 text

We believe that a simple partial evaluation of a dynamic language interpreter cannot lead to high-performance compiled code: if the complete semantics for a language operation are included during partial evaluation, the size of the compiled code explodes; if language operations are not included during partial evaluation and remain runtime calls, performance is mediocre. To overcome these inherent problems, we write the interpreter in a style that anticipates and embraces partial evaluation. The interpreter specializes the executed instructions, e.g., collects type information and profiling information. The compiler speculates that the interpreter state is stable and creates highly optimized and compact machine code. If a speculation turns out to be wrong, i.e., was too optimistic, execution transfers back to the interpreter. The interpreter updates the information, so that the next partial evaluation is less speculative.

Slide 84

Slide 84 text

No content

Slide 85

Slide 85 text

https://twitter.com/larsrh/status/1227956746104266753

Slide 86

Slide 86 text

References

• Würthinger et al. 2017, Practical Partial Evaluation for High-Performance Dynamic Languages, PLDI '17
• Šelajev 2018, GraalVM: Run Programs Faster Anywhere, GOTO Berlin 2018
• Bolz et al. 2011, Allocation Removal by Partial Evaluation in a Tracing JIT, PEPM '11
• Stuart 2013, Compilers for Free, RubyConf 2013
• Cook and Lämmel 2011, Tutorial on Online Partial Evaluation, EPTCS '11