
CoreOS: What is it and why should I care?

Overview and thoughts about CoreOS, presented at the September 2014 Chicago Docker meetup.

Karl Grzeszczak

September 25, 2014

Transcript

  1. At Mediafly, much of our infrastructure consists of service-oriented distributed systems running Docker containers.
  2. CoreOS seems like an ideal fit for our needs, so I decided to investigate.
  3. Lightweight: CoreOS is designed to be a modern, minimal base to build your platform. It consumes 40% less RAM on boot than an average Linux installation. (https://coreos.com/)
  4. Painless Updating: CoreOS utilizes an active/passive dual-partition scheme to update the OS as a single unit instead of package by package. This makes each update quick, reliable, and easy to roll back. (https://coreos.com/)
  5. Docker Containers: Applications on CoreOS run as Docker containers. Containers provide maximum flexibility in packaging and can start in milliseconds. (https://coreos.com/)
  6. Clustered By Default: CoreOS works well on a single machine, but it's designed to be clustered. Easily run application containers across multiple machines with fleet and connect them together with service discovery. (https://coreos.com/)
  7. Distributed Systems Tools: Built-in primitives such as distributed locking and master election are the building blocks for large-scale distributed systems. (https://coreos.com/)
  8. Service Discovery: Easily locate where services are running within the cluster and be notified when something changes. Essential for a complex, highly dynamic cluster. Built into CoreOS with high availability and automatic failover. (https://coreos.com/)
  9. No package manager: all your applications should run as containers. The OS ships with just the Linux kernel, docker, systemd, fleetd, etcd, and sshd. According to https://coreos.com, it uses 114MB of RAM at boot, approximately 40% less than an average Linux server, and is designed specifically for running distributed systems.
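    In practice, "no package manager" means anything you would normally apt-get install runs as a container instead. A trivial illustration (the image and container names here are just examples, not from the deck):

        # instead of 'apt-get install nginx', pull and run nginx as a container
        docker run -d --name web -p 80:80 nginx
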
  10. What do you have to do differently? etcd service discovery: broadcast your applications' key infrastructure settings back to etcd.
  11. What do you have to do differently? etcd service discovery: broadcast your applications' key infrastructure settings back to etcd, and use fleet to orchestrate your containers (a quick etcdctl sketch of the announce/consume pattern follows this item).
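    That announce-and-consume pattern can be sketched with etcdctl alone; the key names and the 60-second TTL below are illustrative, not from the deck:

        # announce where this instance of myapp_web lives; the key expires in 60s
        # unless a sidekick refreshes it, so dead nodes disappear on their own
        etcdctl set /apps/myapp_web/ip 172.17.8.102 --ttl 60

        # a consumer (a load balancer, say) reads the current value...
        etcdctl get /apps/myapp_web/ip

        # ...or blocks until it changes
        etcdctl watch /apps/myapp_web/ip
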
  12. A highly-available key-value store for shared configuration and service discovery. etcd is inspired by Apache ZooKeeper and doozer. (https://github.com/coreos/etcd#readme-version-046)
  13. Simple: curl'able user-facing API (HTTP+JSON). Secure: optional SSL client cert authentication. Fast: benchmarked at 1000s of writes/s per instance. Reliable: properly distributed using Raft. etcd is written in Go and uses the Raft consensus algorithm to manage a highly-available replicated log. (https://github.com/coreos/etcd#readme-version-046)
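    Since the user-facing API is plain HTTP+JSON, a minimal smoke test needs nothing but curl (4001 is etcd's default client port; the key name is made up for illustration):

        # write a key through the v2 keys API
        curl -L http://127.0.0.1:4001/v2/keys/message -XPUT -d value="hello"

        # read it back as JSON
        curl -L http://127.0.0.1:4001/v2/keys/message
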
  14. "In Search of an Understandable Consensus Algorithm" by Stanford's Diego Ongaro and John Ousterhout (https://ramcloud.stanford.edu/wiki/download/attachments/11370504/raft.pdf). "As a result, each state machine processes the same series of commands and thus produces the same series of results and arrives at the same series of states." (http://raftconsensus.github.io/)
  15. Raft elects a leader, and the leader records a master version and distributes it to the other nodes in the cluster. It does not confirm a write until a majority of nodes acknowledge it. If the leader goes AWOL for a certain time, a new election begins to find a new leader and continue.
  16. For now, just understand: Raft is similar to Paxos in fault-tolerance and performance, and it makes sure that etcd and your cluster can continue operating even if some nodes experience partitions (or are terminated!). For example, a five-node cluster stays available as long as any three nodes can still reach each other.
  17. This is an AWESOME animation you should watch, because it explains Raft MUCH better than I can: http://thesecretlivesofdata.com/raft/
  18. Ensure that units are deployed together on the same machine. (https://github.com/coreos/fleet#supported-deployment-patterns)
  19. Forbid specific units from colocating on the same machine (anti-affinity); both patterns are sketched after this item. (https://github.com/coreos/fleet#supported-deployment-patterns)
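    Both patterns are declared in a unit's [X-Fleet] section. A minimal sketch, with the unit names invented for illustration:

        # affinity: schedule this unit on the same machine as myapp_web.service
        [X-Fleet]
        MachineOf=myapp_web.service

        # anti-affinity: never schedule this unit next to another web unit
        [X-Fleet]
        Conflicts=myapp_web*.service
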
  20. It makes it very easy to know what is running in your cluster, where, and how it's doing.
  21. Read this post later: http://lukebond.ghost.io/deploying-docker-containers-on-a-coreos-cluster-with-fleet/. I found this while putting together this presentation, and I think it does a great job explaining all of this in written form.
  22. Bootstrapping the cluster:

        karl@karl-mediafly:~$ curl discovery.etcd.io/new
        https://discovery.etcd.io/b9845b31a57793fe9f88137220b7f454

  23. Output gets pasted into user-data:

        #cloud-config

        coreos:
          etcd:
            # generate a new token for each unique cluster from https://discovery.etcd.io/new
            # WARNING: replace each time you 'vagrant destroy'
            discovery: https://discovery.etcd.io/b9845b31a57793fe9f88137220b7f454
            addr: $public_ipv4:4001
            peer-addr: $public_ipv4:7001
          fleet:
            public-ip: $public_ipv4
          units:
            - name: etcd.service
              command: start
            - name: fleet.service
              command: start
            - name: docker-tcp.socket
              command: start
              enable: true

  24. Show all machines in your cluster:

        core@core-01 ~/share $ fleetctl list-machines
        MACHINE         IP              METADATA
        78e5ab3e...     172.17.8.103    -
        adddf8be...     172.17.8.102    -
        df763c2f...     172.17.8.101    -

  25. A service unit:

        [Unit]
        Description=karlgrz.com
        After=docker.service
        Requires=docker.service

        [Service]
        TimeoutStartSec=0
        ExecStartPre=-/usr/bin/docker kill karlgrz_web
        ExecStartPre=-/usr/bin/docker rm karlgrz_web
        ExecStartPre=/usr/bin/docker pull karlgrz/ubuntu-14.04-base-nginx
        ExecStartPre=/bin/sh -c "cd /srv/karlgrz.com && \
            /usr/bin/docker build -t karlgrz/karlgrz_web ."
        ExecStart=/usr/bin/docker run --name karlgrz_web -p 8001:8001 karlgrz/karlgrz_web
        ExecStop=/usr/bin/docker stop karlgrz_web

  26. Start up some units:

        core@core-01 ~/share/karlgrz-docker/fleet $ fleetctl start fantasy_web.service \
            jcsdoorsolutions_web.service stickfigureninjas_web.service karlgrz_web.service
        Unit fantasy_web.service launched on adddf8be.../172.17.8.102
        Unit karlgrz_web.service launched on adddf8be.../172.17.8.102
        Unit jcsdoorsolutions_web.service launched on 78e5ab3e.../172.17.8.103
        Unit stickfigureninjas_web.service launched on 78e5ab3e.../172.17.8.103

  27. List loaded units and their status (run twice: the first listing catches units still in start-pre, the second shows them running):

        core@core-01 ~/share/karlgrz-docker/fleet $ fleetctl list-units
        UNIT                            MACHINE                     ACTIVE      SUB
        fantasy_web.service             adddf8be.../172.17.8.102    activating  start-pre
        jcsdoorsolutions_web.service    78e5ab3e.../172.17.8.103    activating  start-pre
        karlgrz_web.service             adddf8be.../172.17.8.102    active      running
        stickfigureninjas_web.service   78e5ab3e.../172.17.8.103    activating  start-pre

        core@core-01 ~/share/karlgrz-docker/fleet $ fleetctl list-units
        UNIT                            MACHINE                     ACTIVE      SUB
        fantasy_web.service             adddf8be.../172.17.8.102    active      running
        jcsdoorsolutions_web.service    78e5ab3e.../172.17.8.103    active      running
        karlgrz_web.service             adddf8be.../172.17.8.102    active      running
        stickfigureninjas_web.service   78e5ab3e.../172.17.8.103    active      running

  28. A discovery sidekick:

        [Unit]
        Description=Announce karlgrz.com
        BindsTo=karlgrz_web.service

        [Service]
        EnvironmentFile=/etc/environment
        ExecStart=/bin/sh -c "while true; \
            do etcdctl set /apps/karlgrz_web \
            '{\"host\": \"karlgrz.com\", \"appkey\": \"karlgrz_web\", \"ip\": \"${COREOS_PUBLIC_IPV4}\", \"port\": \"8001\"}' \
            --ttl 60; sleep 45; done"
        ExecStop=/usr/bin/etcdctl rm /apps/karlgrz_web

        [X-Fleet]
        MachineOf=karlgrz_web.service

  29. Run the discovery sidekicks:

        core@core-01 ~/share/karlgrz-docker/fleet $ etcdctl ls /apps
        core@core-01 ~/share/karlgrz-docker/fleet $ fleetctl start fantasy_discovery.service \
            jcsdoorsolutions_discovery.service stickfigureninjas_discovery.service \
            karlgrz_discovery.service
        Unit jcsdoorsolutions_discovery.service launched on 78e5ab3e.../172.17.8.103
        Unit stickfigureninjas_discovery.service launched on 78e5ab3e.../172.17.8.103
        Unit fantasy_discovery.service launched on adddf8be.../172.17.8.102
        Unit karlgrz_discovery.service launched on adddf8be.../172.17.8.102
        core@core-01 ~/share/karlgrz-docker/fleet $ etcdctl ls /apps
        /apps/rethinkdb_services
        /apps/fantasy_web
        /apps/karlgrz_web
        /apps/jcsdoorsolutions_web
        /apps/stickfigureninjas_web

  30. etcd values:

        core@core-01 ~/share/karlgrz-docker/fleet $ etcdctl get /apps/karlgrz_web
        {"host": "karlgrz.com", "appkey": "karlgrz_web", "ip": "172.17.8.102", "port": "8001"}

  31. List units:

        core@core-01 ~/share/karlgrz-docker/fleet $ fleetctl list-units
        UNIT                                    MACHINE                     ACTIVE  SUB
        fantasy_discovery.service               adddf8be.../172.17.8.102    active  running
        fantasy_web.service                     adddf8be.../172.17.8.102    active  running
        jcsdoorsolutions_discovery.service      78e5ab3e.../172.17.8.103    active  running
        jcsdoorsolutions_web.service            78e5ab3e.../172.17.8.103    active  running
        karlgrz_discovery.service               adddf8be.../172.17.8.102    active  running
        karlgrz_web.service                     adddf8be.../172.17.8.102    active  running
        rethinkdb_discovery.service             df763c2f.../172.17.8.101    active  running
        rethinkdb_services.service              df763c2f.../172.17.8.101    active  running
        stickfigureninjas_discovery.service     78e5ab3e.../172.17.8.103    active  running
        stickfigureninjas_web.service           78e5ab3e.../172.17.8.103    active  running

  32. Run a unit on ONLY one SPECIFIC node:

        [Unit]
        Description=rethinkdb
        After=docker.service
        Requires=docker.service

        [Service]
        TimeoutStartSec=0
        ExecStartPre=-/usr/bin/docker kill rethinkdb_services
        ExecStartPre=-/usr/bin/docker rm rethinkdb_services
        ExecStartPre=/usr/bin/docker pull dockerfile/rethinkdb
        ExecStart=/usr/bin/docker run --name rethinkdb_services \
            -p 8080:8080 -p 28015:28015 -p 29015:29015 -v /home/core/rethinkdb:/data \
            -t dockerfile/rethinkdb rethinkdb -d /data --bind all
        ExecStop=/usr/bin/docker stop rethinkdb_services

        [X-Fleet]
        MachineID=9f152bf8

  33. See logging output from a running container:

        core@core-01 ~ $ fleetctl journal fantasy_web
        -- Logs begin at Wed 2014-09-24 21:32:32 UTC, end at Thu 2014-09-25 19:55:26 UTC. --
        Sep 25 18:22:08 core-02 docker[1572]: Python version: 2.7.6 (default, Mar 22 2014, 23:03:41)
        Sep 25 18:22:08 core-02 docker[1572]: Python main interpreter initialized at 0xc53540
        Sep 25 18:22:08 core-02 docker[1572]: python threads support enabled
        Sep 25 18:22:08 core-02 docker[1572]: your server socket listen backlog is limited to 100 connections
        Sep 25 18:22:08 core-02 docker[1572]: your mercy for graceful operations on workers is 60 seconds
        Sep 25 18:22:08 core-02 docker[1572]: mapped 72768 bytes (71 KB) for 1 cores
        Sep 25 18:22:08 core-02 docker[1572]: *** Operational MODE: single process ***
        Sep 25 18:22:09 core-02 docker[1572]: WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0xc53540 pid: 13 (default app)
        Sep 25 18:22:09 core-02 docker[1572]: *** uWSGI is running in multiple interpreter mode ***
        Sep 25 18:22:09 core-02 docker[1572]: spawned uWSGI worker 1 (and the only) (pid: 13, cores:

  34. The same for karlgrz_web (note the docker build steps from the unit's ExecStartPre):

        core@core-03 ~ $ fleetctl journal karlgrz_web
        -- Logs begin at Wed 2014-09-24 21:32:32 UTC, end at Thu 2014-09-25 19:56:33 UTC. --
        Sep 25 18:21:58 core-03 sh[1315]: ---> Using cache
        Sep 25 18:21:58 core-03 sh[1315]: ---> ce8cd32fe157
        Sep 25 18:21:58 core-03 sh[1315]: Step 6 : RUN cd /srv && make publish
        Sep 25 18:21:58 core-03 sh[1315]: ---> Using cache
        Sep 25 18:21:58 core-03 sh[1315]: ---> 83f7f333889b
        Sep 25 18:21:58 core-03 sh[1315]: Step 7 : CMD ["nginx"]
        Sep 25 18:21:58 core-03 sh[1315]: ---> Using cache
        Sep 25 18:21:58 core-03 sh[1315]: ---> 4cf274f01dae
        Sep 25 18:21:58 core-03 sh[1315]: Successfully built 4cf274f01dae
        Sep 25 18:21:59 core-03 systemd[1]: Started karlgrz.com.

  35. The journal also surfaces failures (here the unit references a directory that doesn't exist on the machine):

        core@core-02 ~/share/karlgrz-docker/fleet $ fleetctl journal classholes_web
        -- Logs begin at Wed 2014-09-24 21:32:01 UTC, end at Thu 2014-09-25 20:03:55 UTC. --
        Sep 25 20:01:40 core-02 systemd[1]: Starting classholes.com...
        Sep 25 20:01:40 core-02 docker[3071]: Error response from daemon: No such container: classholes_web
        Sep 25 20:01:40 core-02 docker[3071]: 2014/09/25 20:01:40 Error: failed to kill one or more containers
        Sep 25 20:01:40 core-02 docker[3085]: Error response from daemon: No such container: classholes_web
        Sep 25 20:01:40 core-02 docker[3085]: 2014/09/25 20:01:40 Error: failed to remove one or more containers
        Sep 25 20:01:40 core-02 docker[3095]: Pulling repository karlgrz/ubuntu-14.04-base-nginx
        Sep 25 20:01:42 core-02 systemd[1]: classholes_web.service: control process exited, code=exited status=1
        Sep 25 20:01:42 core-02 systemd[1]: Failed to start classholes.com.
        Sep 25 20:01:42 core-02 sh[3110]: /bin/sh: line 0: cd: /home/core/share/classholes: No such file or directory
        Sep 25 20:01:42 core-02 systemd[1]: Unit classholes_web.service entered failed state.

  36. Terminate a node and watch the services running on it move to another node in the cluster:

        karl@karl-mediafly:~/workspace/coreos-vagrant$ vagrant ssh core-03 -- -A
        Last login: Thu Sep 25 16:37:01 2014 from 10.0.2.2
        CoreOS (beta)
        core@core-03 ~ $ shutdown -n
        shutdown: invalid option -- 'n'
        core@core-03 ~ $ shutdown
        Must be root.
        core@core-03 ~ $ sudo shutdown -n
        shutdown: invalid option -- 'n'
        core@core-03 ~ $ sudo shutdown
        Shutdown scheduled for Thu 2014-09-25 16:46:14 UTC, use 'shutdown -c' to cancel.

        Broadcast message from root@core-03 (Thu 2014-09-25 16:45:14 UTC):

        The system is going down for power-off at Thu 2014-09-25 16:46:14 UTC!

  37. Immediately after the shutdown, the units that were on core-03 (172.17.8.103) are gone from the list:

        core@core-02 ~ $ fleetctl list-units
        UNIT                            MACHINE                     ACTIVE  SUB
        fantasy_discovery.service       adddf8be.../172.17.8.102    active  running
        fantasy_web.service             adddf8be.../172.17.8.102    active  running
        karlgrz_discovery.service       adddf8be.../172.17.8.102    active  running
        karlgrz_web.service             adddf8be.../172.17.8.102    active  running
        rethinkdb_discovery.service     df763c2f.../172.17.8.101    active  running
        rethinkdb_services.service      df763c2f.../172.17.8.101    active  running

  38. Shortly after, fleet reschedules them on a surviving node (note the jcsdoorsolutions and stickfigureninjas units now activating on 172.17.8.101):

        core@core-02 ~ $ fleetctl list-units
        UNIT                                    MACHINE                     ACTIVE      SUB
        fantasy_discovery.service               adddf8be.../172.17.8.102    active      running
        fantasy_web.service                     adddf8be.../172.17.8.102    active      running
        jcsdoorsolutions_discovery.service      df763c2f.../172.17.8.101    active      running
        jcsdoorsolutions_web.service            df763c2f.../172.17.8.101    activating  start-pre
        karlgrz_discovery.service               adddf8be.../172.17.8.102    active      running
        karlgrz_web.service                     adddf8be.../172.17.8.102    active      running
        rethinkdb_discovery.service             df763c2f.../172.17.8.101    active      running
        rethinkdb_services.service              df763c2f.../172.17.8.101    active      running
        stickfigureninjas_discovery.service     df763c2f.../172.17.8.101    active      running
        stickfigureninjas_web.service           df763c2f.../172.17.8.101    activating  start-pre

  39. The cluster now shows only two machines:

        core@core-02 ~ $ fleetctl list-machines
        MACHINE         IP              METADATA
        adddf8be...     172.17.8.102    -
        df763c2f...     172.17.8.101    -

  40. And eventually everything is active and running again:

        core@core-02 ~ $ fleetctl list-units
        UNIT                                    MACHINE                     ACTIVE  SUB
        fantasy_discovery.service               adddf8be.../172.17.8.102    active  running
        fantasy_web.service                     adddf8be.../172.17.8.102    active  running
        jcsdoorsolutions_discovery.service      df763c2f.../172.17.8.101    active  running
        jcsdoorsolutions_web.service            df763c2f.../172.17.8.101    active  running
        karlgrz_discovery.service               adddf8be.../172.17.8.102    active  running
        karlgrz_web.service                     adddf8be.../172.17.8.102    active  running
        rethinkdb_discovery.service             df763c2f.../172.17.8.101    active  running
        rethinkdb_services.service              df763c2f.../172.17.8.101    active  running
        stickfigureninjas_discovery.service     df763c2f.../172.17.8.101    active  running
        stickfigureninjas_web.service           df763c2f.../172.17.8.101    active  running

  41. Please keep in mind I ran this cluster on my laptop using Vagrant, not on cloud infrastructure.
  42. Clustering just worked (I didn't even really have to think about failover or replication myself).
  43. Alpha software: fleet and etcd are great, but they both need some more work before being "production ready".
  44. fleet in particular sometimes gets into situations where I have destroyed a unit but it still shows up in the list of units for a while.
  45. fleet doesn't have a nice mechanism to restart all your units or groups of units (at least none that I found); a workaround sketch follows this item.
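    One rough workaround is scripting fleetctl yourself; this sketch is not from the deck, and it assumes your units follow a consistent naming scheme and that stop-then-start is an acceptable restart:

        # restart every web unit, one at a time, through fleet
        for u in $(fleetctl list-units | awk '/_web\.service/ {print $1}'); do
            fleetctl stop "$u" && fleetctl start "$u"
        done
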
  46. I plan on deploying CoreOS soon to power my side projects, blog, and the handful of sites I run for friends.
  47. I feel that, after a bit of work, this will be the OS that powers distributed systems in the future.