Vamos falar de Concorrência
By José Valim, at RubyConf Brasil 2012.
Plataformatec
August 31, 2012
Transcript
LET'S TALK ABOUT CONCURRENCY
Core Team Member
Executing two or more tasks simultaneously
[Diagram: multiple server processes]
MULTI CORE, SINGLE PROCESS
state: off, concurrency: off
The declarative model, because math rocks!
• no "mutation"
• no concurrency
• only functions
factorial = lambda do |x|
  case x
  when 0 then 1
  else x * factorial.(x - 1)
  end
end

print factorial.(10) # => 3628800
Determinism: the same inputs => the same outputs
• no random()
• no disk I/O
• no side effects
• always the same result
lambda do |a, b|
  c = expensive_function.(a)
  d = also_expensive.(b)
  c + d
end

With a = 1, the call can be replaced by its result (substitution):

lambda do |1, b|
  c = 42
  d = also_expensive.(b)
  c + d
end

And because the two computations are independent, their order does not matter:

lambda do |a, b|
  d = also_expensive.(b)
  c = expensive_function.(a)
  c + d
end
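A small runnable check of the reordering claim. The bodies below are made-up pure stand-ins for expensive_function and also_expensive (the real ones are not shown in the deck):

expensive_function = lambda { |a| a * 42 }  # pure: same input, same output
also_expensive     = lambda { |b| b + 7 }   # pure: same input, same output

original = lambda do |a, b|
  c = expensive_function.(a)
  d = also_expensive.(b)
  c + d
end

reordered = lambda do |a, b|
  d = also_expensive.(b)
  c = expensive_function.(a)
  c + d
end

original.(1, 2)  #=> 51
reordered.(1, 2) #=> 51, same result: evaluation order does not matter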
Haskell: using determinism for performance and expressiveness
Currying

l = lambda { |a, b, c| a + b + c }

l.(1, 2, 3)         #=> 6
l.curry             #=> #<Proc>
l.curry.(1)         #=> #<Proc>
l.curry.(1).(2).(3) #=> 6
l = lambda { |a, b| a * b }

double = l.curry.(2)
triple = l.curry.(3)
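A quick check of what the curried lambdas return (definitions repeated so the snippet runs on its own):

l = lambda { |a, b| a * b }
double = l.curry.(2)  # a Proc still waiting for b
triple = l.curry.(3)

double.(5) #=> 10
triple.(5) #=> 15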
Currying in Haskell:

mult a b = a * b
double = mult 2

(mult 2 3)     -- the same call, written as
((mult 2) 3)   -- explicit partial application

double 3       -- which the compiler expands into (mult 2 3)
LET'S TURN CONCURRENCY ON
state: off, concurrency: on
Dataflow variables: concurrency without pain!
lambda do |a, b|
  c = expensive_function.(a)
  d = also_expensive.(b)
  c + d
end

lambda do |a, b|
  thread { c = expensive_function.(a) }
  thread { d = also_expensive.(b) }
  c + d
end
main spawns thread 1 and thread 2, then evaluates c + d:
if c is still unbound, main blocks until thread 1 defines c;
if d is still unbound, main blocks until thread 2 defines d;
once both are bound, c + d is computed.
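Ruby has no dataflow variables built in, but Thread#value gives a rough approximation of the blocking-read behavior described above. A minimal sketch, again with made-up stand-ins for expensive_function and also_expensive:

expensive_function = lambda { |a| sleep 0.1; a * 42 }
also_expensive     = lambda { |b| sleep 0.1; b + 7 }

sum = lambda do |a, b|
  t1 = Thread.new { expensive_function.(a) } # runs concurrently
  t2 = Thread.new { also_expensive.(b) }     # runs concurrently
  t1.value + t2.value                        # blocks until both results are "bound"
end

puts sum.(1, 2) #=> 51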
LET'S TURN STATE ON
state: on, concurrency: on
THE PROBLEM
class Counter
  mattr_accessor :i
  self.i = 0
end

thread {
  Counter.i = Counter.i + 1
}
thread 1                Counter.i                thread 2
reads 0                     0                    reads 0
computes 0 + 1 = 1                               computes 0 + 1 = 1
writes 1                    1                    writes 1
reads 1                                          reads 1
computes 1 + 1 = 2                               computes 1 + 1 = 2
writes 2                    2                    writes 2

Four increments ran, but Counter.i ends at 2: because the read and the
write are separate steps, concurrent increments overwrite each other and
updates are lost.
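A runnable sketch of the lost-update problem. mattr_accessor comes from ActiveSupport, so this sketch swaps in a plain class-level accessor; how many updates are lost varies by Ruby implementation, and is most visible without a global interpreter lock (e.g. on JRuby):

class Counter
  class << self
    attr_accessor :i
  end
  self.i = 0
end

threads = 10.times.map do
  Thread.new do
    1_000.times do
      current = Counter.i      # read
      Counter.i = current + 1  # write: another thread may have written in between
    end
  end
end
threads.each(&:join)

puts Counter.i # frequently less than 10000: increments were lost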
• shared-memory concurrent model
• locks
• transactional memory
• message-passing concurrent model
The same counter, with the update wrapped in a lock:

class Counter
  mattr_accessor :i
  self.i = 0
end

thread {
  synchronize { Counter.i = Counter.i + 1 }
}
thread 1                Counter.i                thread 2
synchronize {               0                    waits for the lock
  reads 0
  writes 0 + 1 = 1          1
}                                                synchronize {
                                                   reads 1
                            2                      writes 1 + 1 = 2
                                                 }

The lock serializes every read-modify-write, so each increment is
applied and none is lost.
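The bare synchronize on the slide is pseudocode; in plain Ruby the lock is usually an explicit Mutex. A minimal sketch of the same fix:

class Counter
  LOCK = Mutex.new

  class << self
    attr_accessor :i
  end
  self.i = 0
end

threads = 10.times.map do
  Thread.new do
    1_000.times do
      Counter::LOCK.synchronize { Counter.i += 1 } # one thread at a time
    end
  end
end
threads.each(&:join)

puts Counter.i #=> 10000, no increments are lost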
+ the most popular technique
+ explicit control over the lock

- explicit control over the lock
- a pessimistic technique
• shared-memory concurrent model
• locks
• transactional memory
• message-passing concurrent model
class Counter
  mattr_accessor :i
  self.i = ref { 0 }
end

thread {
  atomic { Counter.i = Counter.i + 1 }
}
thread 1                Counter.i                thread 2
atomic {                    0                    atomic {
  reads 0                                          reads 0
  computes 1                                       computes 1
  commits                   1                      commit fails: Counter.i changed
}                                                  retries: reads 1, computes 2
                            2                      commits
                                                 }

Conflicts are detected at commit time; the losing transaction is rolled
back and retried with the fresh value, so no update is lost.
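ref and atomic are pseudocode here; the concurrent-ruby gem ships an STM with a similar shape (Concurrent::TVar plus Concurrent.atomically), which this sketch assumes:

require "concurrent"

counter = Concurrent::TVar.new(0)

threads = 10.times.map do
  Thread.new do
    1_000.times do
      Concurrent.atomically do
        counter.value = counter.value + 1 # conflicting transactions are retried
      end
    end
  end
end
threads.each(&:join)

puts Concurrent.atomically { counter.value } #=> 10000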
+ an optimistic technique
+ no deadlocks and no race conditions

- wasted retries
- transaction overhead
• shared-memory concurrent model
• locks
• transactional memory
• message-passing concurrent model
server = lambda do |i|
  receive
  when :increment
    server.(i + 1)
  when :check
    client <- i
    server.(i)
  else
    warn "unknown message"
    server.(i)
  end
end

server.(0)

thread { server <- :increment }

(Pseudocode: receive and <- borrow Erlang-style message passing.)
client 1                 server                  client 2
server <- :increment      i = 0
                          i = 1                  server <- :increment
server <- :increment      i = 2
                          i = 3
server <- :check
                          replies with 3
receives 3

Only the server touches the counter; clients just send it messages, so
there is nothing to synchronize.
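A minimal sketch of the same server in plain Ruby, using a Queue as the mailbox in place of the receive / <- pseudocode (all names here are made up for the example):

mailbox = Queue.new
replies = Queue.new

server = Thread.new do
  i = 0
  loop do
    case mailbox.pop              # blocks until a message arrives
    when :increment then i += 1   # only this thread ever touches i
    when :check     then replies << i
    else warn "unknown message"
    end
  end
end

clients = 3.times.map { Thread.new { mailbox << :increment } }
clients.each(&:join)              # all increments are enqueued first

mailbox << :check                 # :check sits behind every :increment in the queue
puts replies.pop                  #=> 3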
“Do not communicate by sharing memory; instead, share memory by communicating.” (Effective Go)
+ no synchronization needed
+ easy to distribute

- coordination is hard
- unconventional modeling
SUMMING UP
locks              ruby
stm                clojure
message-passing    go, erlang
dataflow           oz
github.com/celluloid
@elixirlang / elixir-lang.org
THE DEFAULT BEHAVIOR MATTERS
THERE ARE NO SILVER BULLETS
REFERENCES
Seven Languages in Seven Weeks
Concepts, Techniques, and Models of Computer Programming
Software Transactional Memory: http://java.ociweb.com/mark/stm/article.html
Persistent Data Structures: http://www.infoq.com/presentations/Value-Identity-State-Rich-Hickey
We're hiring! http://plataformatec.com.br
QUESTIONS? José Valim (@josevalim)