Slide 1

Slide 1 text

All About TypeScript Moving To Go
Tamar Twena-Stern

Slide 2

Slide 2 text

Tamar Twena-Stern twitter @sterntwena linkedin /tamarstern [email protected]

Slide 3

Slide 3 text

Microsoft Announcement - TypeScript Compiler Ported To Go

Slide 4

Slide 4 text

Old Situation VS New Situation

Slide 5

Slide 5 text

Promise - A 10x Faster TypeScript

Slide 6

Slide 6 text

● Buzz
● Ambitious numbers
● Interesting Details

Slide 7

Slide 7 text

They Wrote The TypeScript Compiler In Go, and I am not interested in compilers

Slide 8

Slide 8 text

3 Things We Can Learn To Create A Better Design

Slide 9

Slide 9 text

● How do we match the technology to the problem domain?
● How does understanding your model's runtime environment impact your design decisions?
● How do we change the foundations of our model?

Slide 10

Slide 10 text

Claim Number 1 - A 10 Times Faster TypeScript

Slide 11

Slide 11 text

What's Written In Go Is The TypeScript Compiler

Slide 12

Slide 12 text

The TypeScript compiler was originally written in JavaScript. A new version has been released, and it is written in Go.

Slide 13

Slide 13 text

TypeScript Never Runs In Production - It Is Always Transpiled To JavaScript
● Write TypeScript natively
● Type checks
● Run JavaScript after transpilation
● The build phase will indicate TypeScript errors
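A minimal sketch of what transpilation does, using the same add function that appears on the AST slide: the compiler checks the type annotations and then emits plain JavaScript with those annotations erased.

// TypeScript source
function add(a: number, b: number): number {
  return a + b;
}

// JavaScript emitted after transpilation (type annotations erased)
function add(a, b) {
  return a + b;
}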

Slide 14

Slide 14 text

The TypeScript Compiler High-Level Architecture

Source Code --(Scanner)--> Tokens --(Parser)--> AST --(Emitter)--> JavaScript
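As a rough sketch of the Scanner step, the TypeScript compiler API itself can tokenize a snippet (this assumes the typescript npm package is installed; the snippet and listed tokens are illustrative):

import * as ts from 'typescript';

// Scanner -> Tokens: feed source text to the compiler's scanner and read tokens back
const source = 'const x: number = 1;';
const scanner = ts.createScanner(ts.ScriptTarget.Latest, /* skipTrivia */ true);
scanner.setText(source);

let token = scanner.scan();
while (token !== ts.SyntaxKind.EndOfFileToken) {
  // Prints e.g. ConstKeyword 'const', Identifier 'x', ColonToken ':', ...
  console.log(ts.SyntaxKind[token], scanner.getTokenText());
  token = scanner.scan();
}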

Slide 15

Slide 15 text

Example Of The TypeScript AST

function add(a: number, b: number): number {
  return a + b;
}
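To inspect that AST yourself, a small sketch using the TypeScript compiler API (again assuming the typescript package is available) parses the function and prints the node kinds:

import * as ts from 'typescript';

const code = 'function add(a: number, b: number): number { return a + b; }';

// Parser -> AST: build a SourceFile (the root AST node) from the source text
const sourceFile = ts.createSourceFile('add.ts', code, ts.ScriptTarget.Latest, /* setParentNodes */ true);

// Walk the tree and print each node kind, indented by depth
function printNode(node: ts.Node, depth = 0): void {
  console.log(' '.repeat(depth * 2) + ts.SyntaxKind[node.kind]);
  ts.forEachChild(node, child => printNode(child, depth + 1));
}

printNode(sourceFile);
// Output includes FunctionDeclaration -> Parameter -> NumberKeyword,
// Block -> ReturnStatement -> BinaryExpression, and so on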

Slide 16

Slide 16 text

The build time of TypeScript will become 10x faster - not the runtime of the program

Slide 17

Slide 17 text

Improvement In TypeScript Compilation On Tested Projects

Codebase | Size (LOC) | JavaScript Implementation | Go Implementation | Speedup
VS Code | 1,505,000 | 77.8s | 7.5s | 10.4x
Playwright | 356,000 | 11.1s | 1.1s | 10.1x
TypeORM | 270,000 | 17.5s | 1.3s | 13.5x
date-fns | 104,000 | 6.5s | 0.7s | 9.5x
tRPC (server + client) | 18,000 | 5.5s | 0.6s | 9.1x
rxjs (observable) | 2,100 | 1.1s | 0.1s | 11.0x

Slide 18

Slide 18 text

10x performance improvement = Initial design decision was incorrect?

Slide 19

Slide 19 text

Which Types Of Applications Should We Write In Node.js? IO Heavy? CPU Intensive? Memory Heavy?

Slide 20

Slide 20 text

Did You Know?
● Node.js is one of the fastest web server technologies available.
● It outperforms many multi-threaded web server technologies.
● It shines for high-concurrency, low-computation workloads.
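A minimal sketch of why that is: a single thread can keep thousands of requests in flight as long as each request is mostly waiting on I/O (the timer below stands in for a database or network call; the port number is arbitrary):

const http = require('node:http');

// One thread, many concurrent connections: each request only awaits I/O
// (simulated here with a timer), so the event loop is free between requests.
http.createServer((req, res) => {
  setTimeout(() => {
    res.end('done\n');
  }, 100); // simulated I/O wait (e.g. a database query)
}).listen(3000);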

Slide 21

Slide 21 text

Node.js Architecture

Slide 22

Slide 22 text

Node.js Architecture - Important Clarifications
● libuv - a C library that handles asynchronous I/O operations
● Core Libraries - written in C/C++
● JavaScript APIs - thin wrappers around the Core Libraries
● I/O operations are almost as efficient as C/C++
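For example, fs.promises.readFile is a thin JavaScript wrapper: the actual read is performed by libuv in native code, so the event loop stays free while the disk I/O runs (a minimal sketch; the file path is hypothetical):

const { readFile } = require('node:fs/promises');

// The JavaScript API is a thin wrapper; libuv performs the read in native code
// off the main thread, so the event loop can keep handling other work.
readFile('./config.json', 'utf8')
  .then(contents => console.log('read finished, length:', contents.length))
  .catch(err => console.error('read failed:', err));

console.log('this line runs before the file read completes');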

Slide 23

Slide 23 text

Common Activities Of A Web Server

app.post('/people', async (req: Request, res: Response) => {
  const people: Person[] = req.body;

  if (!Array.isArray(people)) {
    return res.status(400).json({ message: 'Request body must be an array of persons.' });
  }

  for (const person of people) {
    if (
      typeof person.name !== 'string' ||
      typeof person.age !== 'number' ||
      typeof person.ID !== 'string'
    ) {
      return res.status(400).json({ message: 'Each person must have name (string), age (number), and ID (string).' });
    }
  }

  try {
    const db = await connectToMongo();
    const result = await db.collection('people').insertMany(people);
    res.status(201).json({ insertedCount: result.insertedCount, insertedIds: result.insertedIds });
  } catch (error) {
    console.error('Insert failed:', error);
    res.status(500).json({ message: 'Failed to insert people.' });
  }
});

Slide 24

Slide 24 text

Common Activities A Web Server Performs

High Amount Of Time (IO):
● Database queries
● Reading files
● Network requests
● API calls

Low Amount Of Time (CPU):
● Parsing JSON (exception: large request bodies)
● Transforming data structures

Slide 25

Slide 25 text

Actual CPU Intensive Computation Is Usually Minimal - Node.js Architecture Is Ideal For Web Servers

Slide 26

Slide 26 text

Demo - Blocking VS Non Blocking Operations
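The demo code itself isn't included in the deck; a minimal sketch of the idea using Node's crypto module (the password, salt, and iteration count are arbitrary):

const crypto = require('node:crypto');

// Blocking: the event loop is stuck until the hash finishes
console.time('blocking');
crypto.pbkdf2Sync('password', 'salt', 500000, 64, 'sha512');
console.timeEnd('blocking');

// Non-blocking: the same work is offloaded to libuv's thread pool,
// so the event loop keeps running while the hash is computed
console.time('non-blocking');
crypto.pbkdf2('password', 'salt', 500000, 64, 'sha512', () => {
  console.timeEnd('non-blocking');
});
console.log('this line runs while the non-blocking hash is still in progress');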

Slide 27

Slide 27 text

The Challenge Of Node.js - CPU Intensive Operations

Slide 28

Slide 28 text

Node.js Challenge - CPU Intensive Algorithms

while (true) {
  // Synchronously handle incoming events
  const events = getEvents();
  for (const event of events) {
    processEvent(event); // Blocks the loop
  }
}

Slide 29

Slide 29 text

Compiling TypeScript = CPU Intensive Operations

Slide 30

Slide 30 text

Any compiler flow involves complex algorithms, large memory structures, and lots of computation - the kind of work that challenges JavaScript's execution model.

Slide 31

Slide 31 text

The Chosen Technology - Go

Slide 32

Slide 32 text

Goroutines - Lightweight Threads Managed By The Go Runtime
● Natural Parallelism
● Direct Thread Access: CPU-intensive operations can run directly on threads without yielding.
● Efficient Coordination: Go's channels and synchronization primitives are designed to coordinate concurrent work.
● Memory Efficiency: Goroutines use minimal memory (a few KB each) compared to OS threads.

Slide 33

Slide 33 text

Different Approaches For Writing CPU Intensive Operations In Node.js

Slide 34

Slide 34 text

Node.js Event Loop Phases

Slide 35

Slide 35 text

Approach 1 - Split The Task Into Smaller Tasks Inside The Event Loop

while (true) {
  // Asynchronously handle incoming events in smaller chunks
  const events = getEvents();
  for (const event of events) {
    await new Promise(resolve => setTimeout(resolve, 0)); // Yield back to the event loop
    processEvent(event); // CPU-intensive task (now chunked)
  }
}

Slide 36

Slide 36 text

The code will be executed in chunks during the timers phase

Slide 37

Slide 37 text

Problems With Splitting The Task Into Smaller Tasks Inside The Event Loop
● The event loop is single-threaded.
● V8 is optimized for I/O concurrency.
● Even with chunking, interpreter overhead and JIT warm-up add cost.
● Garbage collection pauses (especially with large ASTs or IR graphs) can introduce latency spikes.
● The actual delay is governed by the browser/Node event loop and can vary.
● High-load situations cause event loop lag → tasks don't get timely execution.

Slide 38

Slide 38 text

No content

Slide 39

Slide 39 text

Approach 2 - Worker Threads

// Main
const { Worker } = require('worker_threads');

const worker = new Worker('./worker.js');

worker.on('message', result => {
  // Process the result coming back from the worker
});

// Send work to the worker (assuming `data` holds the input payload)
worker.postMessage({ task: 'processData', data });

Slide 40

Slide 40 text

Approach 2 - Worker Threads

// Worker
const { parentPort } = require('worker_threads');

parentPort.on('message', ({ task, data }) => {
  if (task === 'processData') {
    const result = heavyComputation(data);
    parentPort.postMessage(result);
  }
});

function heavyComputation(data) {
  // CPU-intensive task
}

Slide 41

Slide 41 text

What Are Worker Threads ?

Slide 42

Slide 42 text

Why Were Worker Threads Not Efficient Enough For The TypeScript Compiler?
● Verbose to manage.
● Costly due to structured cloning (data copying).
● Hard to coordinate shared state (e.g., symbol tables, IR graphs, caches).

Slide 43

Slide 43 text

Worker threads provide parallelism but come with overhead:
● Each worker has its own V8 instance.
● Data passed between threads needs to be serialized and deserialized.
● Hard to coordinate shared state (symbol tables, IR graphs, caches).

Slide 44

Slide 44 text

Comparing Worker Threads And Goroutines

Aspect | Node.js Worker Threads | Go Goroutines
Concurrency Model | OS-level threads, manually created | Lightweight coroutines, managed by the Go runtime
Startup Overhead | High (new V8 instance, event loop) | Tiny (~2KB per goroutine), near-zero startup cost
Memory Sharing | Structured cloning or SharedArrayBuffer | Shared memory with channels or mutexes
Execution Control | Manual queueing and message passing | Built-in scheduler with preemption

Slide 45

Slide 45 text

Comparing Worker Threads And Goroutines (continued)

Aspect | Node.js Worker Threads | Go Goroutines
Code Complexity | Async APIs + worker management = verbose | Simple sync code with powerful concurrency primitives
Performance | Good for limited parallel tasks; costly to scale | Excellent for high-volume, fine-grained concurrency
Suitability For A Compiler | Feasible but harder to scale, debug, and optimize | Ideal for parsing, analyzing, and transforming in parallel
GC Behavior | One V8 GC per thread (heavy) | Unified, efficient GC tuned for concurrency

Slide 46

Slide 46 text

Simulating preemption in Node.js is clever and can work for lightweight workloads, but not for heavy computations that require real performance, parallelism, and control over execution flow. Compilers are just one example.

Slide 47

Slide 47 text

Questions ?

Slide 48

Slide 48 text

Tamar Twena-Stern twitter @sterntwena linkedin /tamarstern [email protected]