Cond: A Case Study; or Shipping Fn Channels

A defense against killing sync.Cond, and an explanation of it so that more people will have it as part of their toolbox! Includes full advertisement!


Reed Allman

April 25, 2018

Transcript

  1. Cond: A Case Study; or Shipping Fn Channels Reed Allman @rdallman10

  2. But first, welcome to 2018, have an ad

  3. None
  4. Three cloud vendors ruled the FaaS galaxy

  5. We, too, have a cloud, but...

  6. We want to be different

  7. And users don't want to wait 4 years for native support for their favorite programming language

  8. And since users are developers, they may have a thing or two to contribute

  9. So we started building our own cloud FaaS offering, in the open, and container native

  10. So that we may one day rule the Galaxy

  11. https://fnproject.io/ P.S. We can also pay you to work on it full time, say hi!

  12. Onto the show

  13. I'd like to talk to you about sync.Cond

  14. One of the least used, most reviled constructs in Go

  15. None
  16. So what is this abomination?

  17. None
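
    Slide 17 is an image, presumably the godoc for sync.Cond. For reference, the type's public surface is tiny:

        type Cond struct {
            L Locker // L is held while observing or changing the condition
            // contains filtered or unexported fields
        }
        func NewCond(l Locker) *Cond
        func (c *Cond) Broadcast() // wakes all goroutines waiting on c
        func (c *Cond) Signal()    // wakes one goroutine waiting on c
        func (c *Cond) Wait()      // unlocks c.L, suspends, re-locks c.L on wake
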
  18. OK, that didn't help. Why does it exist in the first place?

  19. None
  20. [1] Remzi H. Arpaci-Dusseau and Andrea C. Arpaci-Dusseau, Operating Systems: Three Easy Pieces, 30.1

  21. Dude, show an example already

  22. None
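
    Slide 22 is an image with no transcript text. As a stand-in, here is a minimal sketch of the classic condition-variable handoff such a slide typically shows; the names are hypothetical, not the talk's code:

        package main

        import (
            "fmt"
            "sync"
        )

        func main() {
            mu := &sync.Mutex{}
            cond := sync.NewCond(mu)
            ready := false

            // Some other goroutine eventually makes the condition true.
            go func() {
                mu.Lock()
                ready = true
                mu.Unlock()
                cond.Signal() // wake one waiter
            }()

            mu.Lock()
            for !ready { // always re-check in a loop; wakeups can be spurious
                cond.Wait() // atomically unlocks mu, sleeps, re-locks on wake
            }
            mu.Unlock()
            fmt.Println("condition met")
        }
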
  23. OK, but Go has channels? We don't need no stinkin' cond

  24. First, define the problem: • Parent waits for children to finish • Producer/Consumer problem

  25. • Parent waits for children to finish (sync.WaitGroup) • Producer/Consumer problem (make(chan)) (see the sketch below)

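    A quick sketch of those two stock answers (trivial workloads, assumed for illustration):

        package main

        import (
            "fmt"
            "sync"
        )

        func main() {
            // Parent waits for children to finish: sync.WaitGroup.
            var wg sync.WaitGroup
            for i := 0; i < 3; i++ {
                wg.Add(1)
                go func(n int) {
                    defer wg.Done()
                    fmt.Println("child", n, "done")
                }(i)
            }
            wg.Wait()

            // Producer/consumer: a buffered channel.
            jobs := make(chan int, 4)
            go func() {
                for i := 0; i < 4; i++ {
                    jobs <- i // produce
                }
                close(jobs) // no more jobs coming
            }()
            for j := range jobs { // consume until drained and closed
                fmt.Println("consumed", j)
            }
        }
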
  26. So we can go home now?

  27. OK, maybe we should define our problem [1] https://lamport.azurewebsites.net/pubs/state-the-problem.pdf

  28. We need to allocate resources from a host (disk, memory, CPU) dynamically per container, and when full, wait for resources to free up. Meanwhile, if any of the existing containers free up, we can use them and abandon attempting to allocate more resources.

  29. None
  30. This problem is called a covering condition [1] Remzi H. Arpaci-Dusseau and Andrea C. Arpaci-Dusseau, Operating Systems: Three Easy Pieces, 30.3, 2015 [2] B.W. Lampson, D.R. Redell, Experience with Processes and Monitors in Mesa, pg. 105, 1980

  31. But channels taste like a panacea?

  32. None
  33. We need to wake up all threads when a container exits, because multiple containers could fill its void and certain containers may not fit at all

  34. But you can close a channel to broadcast!

  35. While channels can do broadcast, afterwards the channel is closed. We really need a channel that remains open but allows broadcasts.

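    To make slides 34 and 35 concrete, a sketch (hypothetical names) of broadcast-by-close and why it only works once:

        package main

        import (
            "fmt"
            "sync"
        )

        func main() {
            done := make(chan struct{})
            var wg sync.WaitGroup

            for i := 0; i < 3; i++ {
                wg.Add(1)
                go func(n int) {
                    defer wg.Done()
                    <-done // every receiver unblocks at once when done is closed
                    fmt.Println("waiter", n, "woke up")
                }(i)
            }

            close(done) // this is the broadcast, and it is one-shot:
            wg.Wait()   // done is now spent; a second broadcast needs a new channel
        }
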
  36. Mostly, it's an issue of complexity. We could implement this all with channels, but we'd need a few things: • requests wait over a resource allocation channel • a thread (or synchronization) that manages resource allocation channels & their requested sizes (FIFO), and separately receives wake up calls when resources are freed and doles them out (and efficiently handles allocs that gave up) • to figure out whatever the hell data structure ^ is

  37. ... or we can just grab a handy dandy sync.Cond

  38. None
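
    Slide 38 is an image; the real code is linked on slide 42. Here is a sketch of the covering-condition pattern with sync.Cond; ResourceTracker, Alloc, and Free are illustrative names, not fn's actual API:

        package main

        import "sync"

        // ResourceTracker guards a pool of host resources (just memory here).
        type ResourceTracker struct {
            cond    *sync.Cond
            freeMem uint64 // bytes currently available on this host
        }

        func NewResourceTracker(totalMem uint64) *ResourceTracker {
            return &ResourceTracker{
                cond:    sync.NewCond(new(sync.Mutex)),
                freeMem: totalMem,
            }
        }

        // Alloc blocks until mem bytes are free, then claims them.
        func (t *ResourceTracker) Alloc(mem uint64) {
            t.cond.L.Lock()
            for t.freeMem < mem { // covering condition: re-check after every wakeup
                t.cond.Wait()
            }
            t.freeMem -= mem
            t.cond.L.Unlock()
        }

        // Free returns mem bytes and wakes every waiter. Broadcast, not Signal:
        // several small waiters might now fit while a big one still does not,
        // and only the waiters themselves can re-check.
        func (t *ResourceTracker) Free(mem uint64) {
            t.cond.L.Lock()
            t.freeMem += mem
            t.cond.L.Unlock()
            t.cond.Broadcast()
        }

        func main() {
            t := NewResourceTracker(1 << 30) // 1 GiB, made-up capacity
            t.Alloc(512 << 20)               // claim 512 MiB
            go t.Free(512 << 20)             // a container exits elsewhere
            t.Alloc(1 << 30)                 // blocks until Free's Broadcast wakes it
        }
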
  39. We also need to handle timeouts on <-ctx.Done()

  40. sync.Cond and channels don't mix out of the box, so wouldn't we be better off just doing this all with channels and getting this for free?

  41. None
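
    Slide 41 is an image; again, the real answer is the code linked on slide 42. Roughly, the trick is to do the cond wait in a helper goroutine that closes a channel, so the caller can select over that channel and ctx.Done() together. A sketch extending the illustrative ResourceTracker above (add "context" to its imports; WaitAlloc is a made-up name):

        // WaitAlloc bridges the cond-based Alloc into a channel so the
        // caller can also bail out on a context timeout or cancellation.
        func (t *ResourceTracker) WaitAlloc(ctx context.Context, mem uint64) error {
            ch := make(chan struct{})
            go func() {
                t.Alloc(mem)
                close(ch)
            }()

            select {
            case <-ch:
                return nil // got the resources
            case <-ctx.Done():
                // The helper may still finish Alloc later, so hand those
                // bytes back once it does (fn's resource.go does this
                // bookkeeping for real).
                go func() { <-ch; t.Free(mem) }()
                return ctx.Err()
            }
        }
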
  42. https://github.com/fnproject/fn/blob/54ba49be65c274d2ae52c115035211549079348e/api/agent/resource.go#L42

  43. What we accomplished: • Now we can wait for resources OR a container to free up, OR a request timeout, using a select statement • You were duped into watching an advertisement for the Fn Project under the false pretense that you might learn something from somebody you have never heard of

  44. A context, a context, my kingdom for a context?

  45. Thanks! Check us out: https://github.com/fnproject/fn We’re hiring engineers and evangelists! Reed Allman, Principal Eng @ Oracle, @rdallman10