Making Your Controllers Resilient: Workqueue To The Rescue

Madhav Jivrajani

September 16, 2021

Transcript

  1. client-go
     • client-go is a library used to communicate with a k8s cluster.
     • https://github.com/kubernetes/client-go
  2. SharedInformer
     • We can stay informed about when events like pod creation, node joining, etc. are triggered by using a primitive that client-go exposes in its cache package, called SharedInformer.
     • Previously, each controller had its own informer cache that it would use.
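A minimal sketch of wiring a SharedInformer up through client-go's informer factory, with a handler that only prints for now; the kubeconfig path, resync period, and the choice of a pod informer are illustrative assumptions, not part of the deck:

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from a kubeconfig (illustrative path; in-cluster
	// config works just as well).
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// The shared informer factory gives every controller in this process a
	// single shared cache per resource type, instead of one cache each.
	factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
	podInformer := factory.Core().V1().Pods().Informer()

	// For now the handler only prints; a real controller would enqueue keys.
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			pod := obj.(*corev1.Pod)
			fmt.Printf("pod added: %s/%s\n", pod.Namespace, pod.Name)
		},
	})

	stopCh := make(chan struct{})
	factory.Start(stopCh)
	factory.WaitForCacheSync(stopCh)
	<-stopCh // run until the process is killed
}
```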
  3. Enter Queues
     Now, here we just had a print statement in our handler function, but most of the time your `AddFunc` will just be pushing events to a work queue, as sketched below.
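Continuing the sketch above, in practice `AddFunc` and its siblings usually just derive a key for the object and push it onto a workqueue; the `queue` variable here is assumed to be a client-go workqueue constructed elsewhere in the controller:

```go
// queue is assumed to be created elsewhere, e.g.:
//   queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
	AddFunc: func(obj interface{}) {
		// MetaNamespaceKeyFunc turns the object into a "<namespace>/<name>" key.
		if key, err := cache.MetaNamespaceKeyFunc(obj); err == nil {
			queue.Add(key)
		}
	},
	UpdateFunc: func(oldObj, newObj interface{}) {
		if key, err := cache.MetaNamespaceKeyFunc(newObj); err == nil {
			queue.Add(key)
		}
	},
	DeleteFunc: func(obj interface{}) {
		// DeletionHandlingMetaNamespaceKeyFunc also copes with tombstone objects.
		if key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj); err == nil {
			queue.Add(key)
		}
	},
})
```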
  4. To understand the functionality provided, we’ll look at the following two things:
     • How does the queue itself work?
     • What are the different extensions to this queue that are provided?
  5. Due to this queue pattern, you can have:
     • Multiple producers producing items to be processed, and multiple consumers that pop these items out and process them.
     • This allows for “parallelizing” the work that needs to be done (see the sketch below).
     • The next question is: what happens when an item is done processing?
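A rough sketch of that producer/consumer split using the plain client-go workqueue; the item format, worker count, and timings are arbitrary choices for illustration:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/util/workqueue"
)

func main() {
	queue := workqueue.New() // the plain FIFO workqueue
	stopCh := make(chan struct{})

	// Producer side: anything can Add items; in a controller this is the
	// informer event handlers.
	go func() {
		for i := 0; i < 10; i++ {
			queue.Add(fmt.Sprintf("default/pod-%d", i))
		}
	}()

	// Consumer side: several workers pop items off the same queue in parallel.
	worker := func() {
		for {
			item, shutdown := queue.Get()
			if shutdown {
				return
			}
			fmt.Println("processing", item)
			// Done tells the queue this item has finished processing.
			queue.Done(item)
		}
	}
	for i := 0; i < 3; i++ {
		go wait.Until(worker, time.Second, stopCh)
	}

	time.Sleep(2 * time.Second)
	queue.ShutDown()
	close(stopCh)
}
```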
  6. • It is possible that we have multiple instances of the same item that are to be processed. How do we ensure that we process this item just once?
     • With multiple concurrent consumers, how do we prevent the same item from being processed multiple times, concurrently?
       ◦ This is important because if we process an item in duplicate - best case you end up doing additional work to rectify the mistake, and worst case you have a thrashing effect of processing and retrying.
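The workqueue addresses both concerns: pending duplicates of a key are coalesced, and a key that is currently being processed is not handed to another worker; if it is re-added in the meantime, it only re-enters the queue after `Done` is called. A small sketch of that behaviour (the key names are made up):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	queue := workqueue.New()

	// Adding the same key several times before it is picked up results in a
	// single queued item: pending duplicates are coalesced.
	queue.Add("default/my-pod")
	queue.Add("default/my-pod")
	queue.Add("default/my-pod")
	fmt.Println("queue length:", queue.Len()) // 1

	// While a worker holds an item (between Get and Done), the same key is
	// not handed to another worker; re-adding it only takes effect after Done.
	item, _ := queue.Get()
	queue.Add("default/my-pod")                            // marked dirty, not re-delivered yet
	fmt.Println("length while processing:", queue.Len())   // 0
	queue.Done(item)
	fmt.Println("length after Done:", queue.Len())         // 1 (re-queued)
}
```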
  7. A few things to note:
     • Format of the key: <resource_namespace>/<resource_name>
     • If there is no namespace, it’ll just be <resource_name>
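Those keys are produced by `cache.MetaNamespaceKeyFunc` and split back apart with `cache.SplitMetaNamespaceKey`; a small illustration (the object names are made up):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/cache"
)

func main() {
	pod := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Namespace: "kube-system", Name: "coredns"}}
	node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "worker-1"}} // cluster-scoped: no namespace

	key1, _ := cache.MetaNamespaceKeyFunc(pod)  // "kube-system/coredns"
	key2, _ := cache.MetaNamespaceKeyFunc(node) // "worker-1"
	fmt.Println(key1, key2)

	// Workers split the key back apart when they pick it up from the queue.
	ns, name, _ := cache.SplitMetaNamespaceKey(key1)
	fmt.Println(ns, name) // "kube-system" "coredns"
}
```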
  8. • Delaying queue
       ◦ Extends the queue with the ability to add an element after a specified duration.
     • Rate limiting queue
       ◦ It uses the delaying queue to rate limit items being added to the queue.
       ◦ The default rate limiter is a simple exponential rate limiter that rate limits per key.
         ▪ If a key has had n re-queues, it will be added back to the queue after 2^n * someBaseDelay.
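A sketch of both extensions using the workqueue package; exact constructor names have shifted across client-go versions (newer releases add typed variants), so treat the calls and delay values below as indicative:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	// Delaying queue: AddAfter only enqueues the item once the duration elapses.
	delaying := workqueue.NewDelayingQueue()
	delaying.AddAfter("default/my-pod", 5*time.Second)

	// Rate-limiting queue: built on top of the delaying queue. The item-based
	// exponential limiter waits baseDelay * 2^<requeues> per key, capped at maxDelay.
	limiter := workqueue.NewItemExponentialFailureRateLimiter(5*time.Millisecond, 1000*time.Second)
	queue := workqueue.NewRateLimitingQueue(limiter)

	key := "default/my-pod"
	queue.AddRateLimited(key) // the delay grows with every AddRateLimited for this key
	fmt.Println("requeues so far:", queue.NumRequeues(key))

	// Once the item is handled successfully, Forget resets its backoff.
	queue.Forget(key)
}
```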
  9. A few practices to keep in mind:
     • Always handle errors outside your business logic.
       ◦ Handling errors typically consists of re-queueing a work item.
     • Start workers for processing items from the workqueue only after the cache has synced successfully.
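A sketch of how those two practices usually look in a controller's run loop, in the style of sample-controller; the `Controller` struct, its field names, and the `syncHandler` function are illustrative assumptions:

```go
package controller

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/util/workqueue"
)

// Controller is a pared-down sketch; the fields are illustrative.
type Controller struct {
	queue      workqueue.RateLimitingInterface
	podsSynced cache.InformerSynced
	// syncHandler stands in for the controller's business logic.
	syncHandler func(key string) error
}

func (c *Controller) Run(workers int, stopCh <-chan struct{}) {
	defer c.queue.ShutDown()

	// Start workers only after the informer cache has synced successfully;
	// otherwise they would act on an incomplete view of the cluster.
	if !cache.WaitForCacheSync(stopCh, c.podsSynced) {
		return
	}
	for i := 0; i < workers; i++ {
		go wait.Until(c.runWorker, time.Second, stopCh)
	}
	<-stopCh
}

func (c *Controller) runWorker() {
	for c.processNextWorkItem() {
	}
}

func (c *Controller) processNextWorkItem() bool {
	key, shutdown := c.queue.Get()
	if shutdown {
		return false
	}
	defer c.queue.Done(key)

	// Error handling lives out here, not inside the business logic: a failed
	// key is re-queued with backoff, a successful one has its backoff reset.
	if err := c.syncHandler(key.(string)); err != nil {
		c.queue.AddRateLimited(key)
		return true
	}
	c.queue.Forget(key)
	return true
}
```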
  10. Resources and References
      • Kubernetes Controllers
      • Kubernetes client-go workqueue example
      • Kubernetes sample-controller
      • Workqueue package