Slide 1

Slide 1 text

Don’t Fear the Threads: Simplify Your Life with JRuby. david.copeland@livingsocial.com @davetron5000 www.naildrivin5.com www.awesomecommandlineapps.com 1

Slide 2

Slide 2 text

{ :me => { :tech_lead => "LivingSocial" } } 2

Slide 3

Slide 3 text

First Assignment 3

Slide 4

Slide 4 text

Write a 4

Slide 5

Slide 5 text

Write a Command Line App 4

Slide 6

Slide 6 text

5

Slide 7

Slide 7 text

Write a Command Line App 6

Slide 8

Slide 8 text

Write a Command Line App to charge credit cards faster 6

Slide 9

Slide 9 text

NO NEW HARDWARE 6

Slide 10

Slide 10 text

7

Slide 11

Slide 11 text

THREADS! 7

Slide 12

Slide 12 text

Are we maximizing our resources? 8

Slide 13

Slide 13 text

Are we maximizing our resources? 9

Slide 14

Slide 14 text

What problem are we solving? 10

Slide 15

Slide 15 text

Do some I/O Compute Stuff 11

Slide 16

Slide 16 text

Do some I/O 12

Slide 17

Slide 17 text

Flowchart: Do some I/O -> Can I do I/O? YES -> Read/Write; NO -> Block 13

Slide 18

Slide 18 text

Flowchart: Do some I/O -> Block 14

Slide 19

Slide 19 text

Block 15

Slide 20

Slide 20 text

Let someone else work Block 15

Slide 21

Slide 21 text

Maximize Resources Block 16

Slide 22

Slide 22 text

17

Slide 23

Slide 23 text

Will our code be 17

Slide 24

Slide 24 text

Will our code be easy to write? 17

Slide 25

Slide 25 text

Will our code be easy to write? easy to understand? 17

Slide 26

Slide 26 text

Will our code be easy to write? easy to understand? easy to test? 17

Slide 27

Slide 27 text

Are we maximizing our resources? 18

Slide 28

Slide 28 text

Run lots of processes 19

Slide 29

Slide 29 text

20

Slide 30

Slide 30 text

fork {
  # your code
}
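A hedged sketch (not from the deck) of what "run lots of processes" looks like in practice: start several children with fork and wait for them all. It assumes MRI on a Unix-like system, since fork is generally unavailable on JRuby and Windows.

pids = 4.times.map do |i|
  fork do
    # each child runs with its own copy of the parent's memory
    sleep 1
    puts "child #{i} finished"
  end
end
pids.each { |pid| Process.wait(pid) }   # reap every child before exiting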

Slide 31

Slide 31 text

easy to understand 21

Slide 32

Slide 32 text

doesn’t maximize resources 22

Slide 33

Slide 33 text

Parent 23

Slide 34

Slide 34 text

Diagram: Parent process with Parent’s Memory 23

Slide 35

Slide 35 text

Diagram: Parent (Parent’s Memory) -> fork -> Child (copy of Parent’s Memory) 23

Slide 36

Slide 36 text

Event-based I/O 24

Slide 37

Slide 37 text

25

Slide 38

Slide 38 text

require 'em-http'
include EM::HttpRequest

EM.run {
  # some of your code that does IO
  some_callback { |results_of_io|
    # the rest of your code that
    # uses the results
  }
}

Slide 39

Slide 39 text

require 'em-http'
include EM::HttpRequest

EM.run {
  # some of your code that does IO
  some_callback { |results_of_io|
    # the rest of your code that
    # uses the results
  }
}

Slide 40

Slide 40 text

Need to do I/O again? 26

Slide 41

Slide 41 text

27

Slide 42

Slide 42 text

require 'em-http'
include EM::HttpRequest

EM.run {
  # some of your code that does SOME of your I/O
  some_callback { |results_of_io|
    # use the results
    # now some more code that does some MORE I/O
    some_other_callback { |results_of_more_io|
      # use the results of THIS I/O
    }
  }
}

Slide 43

Slide 43 text

28

Slide 44

Slide 44 text

require 'em-http'
include EM::HttpRequest

EM.run {
  # some of your code that does SOME of your I/O
  some_callback { |results_of_io|
    # use the results
    # now some more code that does some MORE I/O
    some_other_callback { |results_of_more_io|
      # use the results of THIS I/O
      # SERIOUSLY, more I/O?!??!@ How much do you need?
      yet_more_callbacks { |moar_resultz|
        # How many levels deep are we now?!
      }
    }
  }
}

Slide 45

Slide 45 text

require 'em-http'
include EM::HttpRequest

EM.run {
  # some of your code that does SOME of your I/O
  some_callback { |results_of_io|
    # use the results
    # now some more code that does some MORE I/O
    some_other_callback { |results_of_more_io|
      # use the results of THIS I/O
      # SERIOUSLY, more I/O?!??!@ How much do you need?
      yet_more_callbacks { |moar_resultz|
        # How many levels deep are we now?!
      }
    }
  }
}

Slide 46

Slide 46 text

NOT easy to understand 29

Slide 47

Slide 47 text

kinda maximizes resources 30

Slide 48

Slide 48 text

Entire call chain must be evented 31

Slide 49

Slide 49 text

Not true parallelism 32

Slide 50

Slide 50 text

Not true parallelism unless you run lots of processes 32

Slide 51

Slide 51 text

What’s parallelism? 33

Slide 52

Slide 52 text

Concurrency 34

Slide 53

Slide 53 text

Parallelism 35

Slide 54

Slide 54 text

Parallelism Code running at the same time as other code 35

Slide 55

Slide 55 text

Parallelism: Code running at the same time as other code. Impossible with only one CPU 35

Slide 56

Slide 56 text

Event-based I/O 36

Slide 57

Slide 57 text

NOT easy to understand kinda maximizes resources 37

Slide 58

Slide 58 text

Threads 38

Slide 59

Slide 59 text

39

Slide 60

Slide 60 text

Thread.new {
  # your code
}

Slide 61

Slide 61 text

fork {
  # your code
}

Slide 62

Slide 62 text

Thread.new {
  # your code
}

Slide 63

Slide 63 text

Need to do I/O? 42

Slide 64

Slide 64 text

Need to do I/O? Not a problem 42

Slide 65

Slide 65 text

Need to do I/O THREE TIMES? 43

Slide 66

Slide 66 text

Need to do I/O THREE TIMES? Not a problem 43

Slide 67

Slide 67 text

What if I need to calculate π to the 2,345,123rd decimal place? 44

Slide 68

Slide 68 text

NOT A PROBLEM 44

Slide 69

Slide 69 text

Threads achieve true parallelism 45

Slide 70

Slide 70 text

easy to understand 46

Slide 71

Slide 71 text

maximizes resources 47

Slide 72

Slide 72 text

Threads don’t work in Ruby 48

Slide 73

Slide 73 text

Threads don’t work in Ruby…right? 48

Slide 74

Slide 74 text

MRI (aka C Ruby) 49

Slide 75

Slide 75 text

MRI (aka C Ruby) Thread is an OS thread (except in 1.8) 49

Slide 76

Slide 76 text

MRI (aka C Ruby) Thread is an OS thread (except in 1.8) GIL :( 49

Slide 77

Slide 77 text

MRI (aka C Ruby) Thread is an OS thread (except in 1.8) GIL :( I/O will cause a context switch 49
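A hedged illustration of that last point (not from the deck): even under MRI's GIL, a thread that blocks on I/O releases the lock, so I/O-bound threads still overlap. The URLs are placeholders.

require 'net/http'
require 'uri'

urls = %w[http://example.com http://example.org]   # placeholder URLs

threads = urls.map do |url|
  Thread.new do
    # the socket read blocks; MRI runs another thread in the meantime
    Net::HTTP.get_response(URI(url)).code
  end
end

puts threads.map(&:value).inspect   # e.g. ["200", "200"]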

Slide 78

Slide 78 text

Rubinius 50

Slide 79

Slide 79 text

Rubinius Thread is an OS thread 50

Slide 80

Slide 80 text

Rubinius Thread is an OS thread No GIL! 50

Slide 81

Slide 81 text

Rubinius Thread is an OS thread No GIL! True parallelism 50

Slide 82

Slide 82 text

JRuby 51

Slide 83

Slide 83 text

JRuby Thread is a JVM Thread (which is an OS thread) 51

Slide 84

Slide 84 text

JRuby Thread is a JVM Thread (which is an OS thread) Context switch for variety of reasons 51

Slide 85

Slide 85 text

JRuby Thread is a JVM Thread (which is an OS thread) Context switch for variety of reasons True parallelism 51

Slide 86

Slide 86 text

JRuby BATTLE TESTED 51

Slide 87

Slide 87 text

So, you’ve decided to use Threads… 52

Slide 88

Slide 88 text

You need to know four things 53

Slide 89

Slide 89 text

54

Slide 90

Slide 90 text

Start & Manage 54

Slide 91

Slide 91 text

Start & Manage (Dealing with) Shared State 54

Slide 92

Slide 92 text

Start & Manage (Dealing with) Shared State Using Third Party Libraries 54

Slide 93

Slide 93 text

Start & Manage (Dealing with) Shared State Using Third Party Libraries Context Switching 54

Slide 94

Slide 94 text

Start & Manage Threads 55

Slide 95

Slide 95 text

Chaos 56

Slide 96

Slide 96 text

Thread.new {
  # your code
}

Thread.new {
  # moar code
}
# etc

Slide 97

Slide 97 text

Hand-Roll 58

Slide 98

Slide 98 text

threads = []

threads << Thread.new {
  # your code
}

threads << Thread.new {
  # moar code
}
# etc

threads.each(&:join)

# All threads have completed
exit 0
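A small aside, not on the slide: when each thread computes a value you want back, Thread#value joins the thread and returns its block's result, which often replaces a hand-rolled results array.

threads = [1, 2, 3].map { |n| Thread.new { n * n } }
# value joins the thread, then returns what its block evaluated to
puts threads.map(&:value).inspect   # => [1, 4, 9]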

Slide 99

Slide 99 text

java.util.concurrent 60

Slide 100

Slide 100 text

require 'java'   # JRuby only
java_import java.util.concurrent.Executors

service = Executors.new_fixed_thread_pool(100)

service.execute {
  # your code
}

service.execute {
  # some other code
}

Slide 101

Slide 101 text

62

Slide 102

Slide 102 text

63

Slide 103

Slide 103 text

63

Slide 104

Slide 104 text

ExecutorService 64

Slide 105

Slide 105 text

ExecutorService 65

Slide 106

Slide 106 text

ExecutorService execute() 65

Slide 107

Slide 107 text

ExecutorService execute() shutdown() 65

Slide 108

Slide 108 text

ExecutorService execute() shutdown() await_termination() 65

Slide 109

Slide 109 text

ExecutorService execute() shutdown() await_termination() shutdown_now() 65

Slide 110

Slide 110 text

service = Executors.new_fixed_thread_pool(10)
tcp_server = TCPServer.new("127.0.0.1", 8080)
loop do
  s = tcp_server.accept
  service.execute {
    s.puts calculate_pi()
    s.close
  }
end

Slide 111

Slide 111 text

service = Executors.new_fixed_thread_pool(10)
tcp_server = TCPServer.new("127.0.0.1", 8080)
Signal.trap('SIGINT') { service.shutdown }
loop do
  s = tcp_server.accept
  service.execute {
    s.puts calculate_pi()
    s.close
  }
  break if service.is_shutdown
end

Slide 112

Slide 112 text

service = Executors.new_fixed_thread_pool(10)
tcp_server = TCPServer.new("127.0.0.1", 8080)
Signal.trap('SIGINT') { service.shutdown }
loop do
  s = tcp_server.accept
  service.execute {
    s.puts calculate_pi()
    s.close
  }
  break if service.is_shutdown
end
service.await_termination(10, TimeUnit::SECONDS)
service.shutdown_now

Slide 113

Slide 113 text

So much more 69

Slide 114

Slide 114 text

So much more: ScheduledExecutorService 69

Slide 115

Slide 115 text

So much more: ScheduledExecutorService, ThreadFactory 69

Slide 116

Slide 116 text

So much more: ScheduledExecutorService, ThreadFactory. Read the javadocs 69

Slide 117

Slide 117 text

Shared State 70

Slide 118

Slide 118 text

results = []

Thread.new {
  results << some_result()
}

Thread.new {
  results << other_result()
}

Slide 119

Slide 119 text

mutex = Mutex.new
results = []

Thread.new {
  mutex.synchronize {
    results << some_result()
  }
}

Thread.new {
  mutex.synchronize {
    results << other_result()
  }
}

Slide 120

Slide 120 text

Avoid explicit locking/sync 73

Slide 121

Slide 121 text

java_import java.util.concurrent.ConcurrentLinkedQueue   # JRuby only

results = ConcurrentLinkedQueue.new

Thread.new {
  results.offer(some_result())
}

Thread.new {
  results.offer(other_result())
}

Slide 122

Slide 122 text

require 'java'   # JRuby only
java_import java.util.concurrent.atomic.AtomicBoolean

flag = AtomicBoolean.new(true)

Thread.new {
  flag.set(false) if flag.get
}

Thread.new {
  flag.set(true)
}
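One caution worth adding, not on the slide: the get-then-set in the first thread is still two steps, so two threads can both see true. AtomicBoolean#compare_and_set does the check and the update in one atomic step.

require 'java'   # JRuby only
java_import java.util.concurrent.atomic.AtomicBoolean

flag = AtomicBoolean.new(true)
# compare_and_set(expected, new_value) updates only if the current value
# equals expected, and returns true for the single thread that wins
won = flag.compare_and_set(true, false)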

Slide 123

Slide 123 text

So much more 76

Slide 124

Slide 124 text

So much more: ConcurrentHashMap 76

Slide 125

Slide 125 text

So much more: ConcurrentHashMap, AtomicReference 76

Slide 126

Slide 126 text

So much more: ConcurrentHashMap, AtomicReference. Read the javadocs 76
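For a flavor of those classes, a minimal sketch (not from the deck, JRuby assumed): a ConcurrentHashMap that many threads update with no explicit locking.

require 'java'   # JRuby only
java_import java.util.concurrent.ConcurrentHashMap

stats = ConcurrentHashMap.new

threads = 10.times.map do |i|
  Thread.new do
    # put is safe to call from many threads at once
    stats.put("thread-#{i}", Time.now.to_f)
  end
end
threads.each(&:join)

puts stats.size   # => 10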

Slide 127

Slide 127 text

Third Party Libraries 77

Slide 128

Slide 128 text

Are they thread safe? 78

Slide 129

Slide 129 text

Probably 79

Slide 130

Slide 130 text

Probably …but to be sure 79

Slide 131

Slide 131 text

variables 80

Slide 132

Slide 132 text

Global variables 80

Slide 133

Slide 133 text

Global variables, Class variables 80

Slide 134

Slide 134 text

Most are fine 81
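An illustration of what to look for, not taken from any real library: class-level state with a check-then-set is the pattern that breaks under threads.

require 'net/http'

# Hypothetical library code, shown only to illustrate the hazard
class SomeClient
  @@connection = nil            # class variable shared by every thread

  def self.connection
    # ||= is a check-then-set: two threads can race here and build two
    # connections, or one can see a half-initialized object
    @@connection ||= Net::HTTP.new('example.com')   # placeholder host
  end
end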

Slide 135

Slide 135 text

Context Switching 82

Slide 136

Slide 136 text

Amdahl’s Law 83

Slide 137

Slide 137 text

Chart: Performance Gains. X axis: Increase in # of Threads; Y axis: Speedup 84

Slide 138

Slide 138 text

Chart: Performance Gains with Context Switching. X axis: Increase in # of Threads; Y axis: Speedup. Annotation: Cost of Context Switching Takes Over 85

Slide 139

Slide 139 text

Thread Pools 86

Slide 140

Slide 140 text

Diagram: many Workers sharing a fixed Thread Pool of Threads 87

Slide 141

Slide 141 text

Diagram: many Workers sharing a fixed Thread Pool of Threads (next build step) 87

Slide 142

Slide 142 text

Diagram: many Workers sharing a fixed Thread Pool of Threads (next build step) 87

Slide 143

Slide 143 text

Thread.new {
  # your code
}

Slide 144

Slide 144 text

service.execute {
  # your code
}

Slide 145

Slide 145 text

Not usually a concern 90

Slide 146

Slide 146 text

Always use Threads, right? 91

Slide 147

Slide 147 text

Thread UNsafe Libraries 92

Slide 148

Slide 148 text

Evented 93

Slide 149

Slide 149 text

Simple, I/O bound task 94

Slide 150

Slide 150 text

Evented 95

Slide 151

Slide 151 text

Cannot use JRuby 96

Slide 152

Slide 152 text

Evented 97

Slide 153

Slide 153 text

Otherwise, Threads will simplify your code and maximize your resources 98

Slide 154

Slide 154 text

Semantic layers on top of evented I/O 99

Slide 155

Slide 155 text

100

Slide 156

Slide 156 text

You’re using Threads 100

Slide 157

Slide 157 text

You’re using Threads (in degenerate form) 100

Slide 158

Slide 158 text

Exercises 101

Slide 159

Slide 159 text

Echo Server
• Listen on a port
• Respond to each request in a new Thread
• Extra Credit: Record stats on requests in a shared data structure 102
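One possible starting point for this exercise, sketched below (not the official solution): a thread-per-connection echo server; the port number is arbitrary.

require 'socket'

server = TCPServer.new(9090)       # arbitrary port
loop do
  client = server.accept
  Thread.new(client) do |socket|
    while (line = socket.gets)     # read until the client disconnects
      socket.puts line             # echo each request back
    end
    socket.close
  end
end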

Slide 160

Slide 160 text

Connection Pool
• Allow N clients to access X shared instances of, say, Redis (where N > X)
• Clients “check out” a connection and get exclusive access
• Clients “check in” when done
• Instances get re-used 103
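A rough skeleton for this exercise, again a sketch rather than the official solution: Ruby's Queue is thread-safe and blocks on pop, which covers check-out, check-in, and re-use.

require 'thread'

class ConnectionPool
  def initialize(connections)
    @queue = Queue.new                  # thread-safe FIFO
    connections.each { |c| @queue << c }
  end

  def checkout
    @queue.pop                          # blocks until a connection is free
  end

  def checkin(conn)
    @queue << conn                      # return it for re-use
  end

  def with_connection
    conn = checkout
    yield conn
  ensure
    checkin(conn) if conn
  end
end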

Slide 161

Slide 161 text

THANKS! david.copeland@livingsocial.com @davetron5000 www.naildrivin5.com www.awesomecommandlineapps.com 104