Slide 1

Slide 1 text

Redis Bedtime Stories @igorwhilefalse

Slide 2

Slide 2 text

Hi, I'm Igor. Staff SRE @ GitLab

Slide 3

Slide 3 text

No content

Slide 4

Slide 4 text

No content

Slide 5

Slide 5 text

Agenda • Refresher • Story 1: Microbursts • Story 2: Regression • Story 3: Kubernetes • Takeaways

Slide 6

Slide 6 text

Refresher

Slide 7

Slide 7 text

redis /ˈrɛdɪs/ n. a. remote dictionary service

Slide 8

Slide 8 text

{ k => v }

Slide 9

Slide 9 text

array($k => $v)

Slide 10

Slide 10 text

*: 6379

Slide 11

Slide 11 text

SET user:1:session "{...}"

Slide 12

Slide 12 text

GET user:1:session => "{...}"
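The refresher boils down to a network-attached dictionary. A toy sketch of the SET/GET pair above, using a plain Python dict to stand in for the server (no real Redis involved, names are illustrative):

```python
# Toy model of Redis string commands: conceptually, the server holds one
# big dictionary, and each command is a single lookup or assignment.
store = {}

def set_key(key, value):
    store[key] = value
    return "OK"

def get_key(key):
    # Redis returns nil for missing keys; None mirrors that here.
    return store.get(key)

print(set_key("user:1:session", '{"id": 1}'))  # OK
print(get_key("user:1:session"))               # {"id": 1}
print(get_key("user:2:session"))               # None
```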

Slide 13

Slide 13 text

No content

Slide 14

Slide 14 text

No content

Slide 15

Slide 15 text

No content

Slide 16

Slide 16 text

"redis is fast" -- a liar, probably

Slide 17

Slide 17 text

Numbers Everyone Should Know
L1 cache reference: 0.5 ns
Branch mispredict: 5 ns
L2 cache reference: 7 ns
Mutex lock/unlock: 25 ns
Main memory reference: 100 ns
Compress 1K bytes with Zippy: 3,000 ns
Send 2K bytes over 1 Gbps network: 20,000 ns
Read 1 MB sequentially from memory: 250,000 ns
Round trip within same datacenter: 500,000 ns
Disk seek: 10,000,000 ns
Read 1 MB sequentially from disk: 20,000,000 ns
Send packet CA->Netherlands->CA: 150,000,000 ns

Jeff Dean, LADIS 2009 Keynote
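The point of the table for Redis: the dictionary lookup itself is memory-speed, so the datacenter round trip dominates by orders of magnitude. Quick arithmetic from the numbers above:

```python
# How many main-memory lookups fit in one in-datacenter round trip?
MEMORY_REF_NS = 100      # main memory reference
ROUND_TRIP_NS = 500_000  # round trip within the same datacenter

lookups_per_round_trip = ROUND_TRIP_NS // MEMORY_REF_NS
print(lookups_per_round_trip)  # 5000
```

In other words, "Redis is fast" mostly means "memory is fast"; the network you talk to it over is not.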

Slide 18

Slide 18 text

No content

Slide 19

Slide 19 text

Story 1: Microbursts

Slide 20

Slide 20 text

• Clients see periodic performance degradation. • Sometimes it breaches the alert threshold, sometimes it doesn't. • Nothing obvious. • Investigation begins.

Slide 21

Slide 21 text

CPU utilization between 60% and 80%

Slide 22

Slide 22 text

$ sudo perf record \
    -p $(pidof redis-server) \
    -F 499 \
    -g \
    -- \
    sleep 600

Slide 23

Slide 23 text

No content

Slide 24

Slide 24 text

$ for i in {1..600}
  do
    sleep 1
    echo -n "$(date +'%Y-%m-%d %H:%M:%S.%N %Z') "
    redis-cli info stats \
      | grep -w 'instantaneous_ops_per_sec'
  done

Matt Smiley

Slide 25

Slide 25 text

Matt Smiley

Slide 26

Slide 26 text

$ sudo tcpdump \
    -v \
    -G 60 \
    -w $(hostname -s).inbound.%Y%m%d_%H%M%S.pcap \
    'dst port 6379'

Matt Smiley

Slide 27

Slide 27 text

$ mergecap -w - pcap-20200325-microburst-sample/*.pcap.gz \
    | tcpflow -r - -I -s -o tcpflow

Matt Smiley

Slide 28

Slide 28 text

$ find tcpflow/ -name '*.06379.findx' \
    | xargs -P 8 -n 100 redis_trace_cmd \
    > trace_inbound_redis_commands.out

Matt Smiley

Slide 29

Slide 29 text

cache:gitlab:exists?:$PATTERN
cache:gitlab:projects/count_service/$NUMBER/$NUMBER/forks_count
cache:gitlab:projects/count_service/$NUMBER/$NUMBER/public_open_issues_count
cache:gitlab:has_visible_content?:$PATTERN
cache:gitlab:root_ref:$PATTERN
cache:gitlab:avatar:$PATTERN:$NUMBER
cache:gitlab:readme_path:$PATTERN

Matt Smiley

Slide 30

Slide 30 text

Matt Smiley

Slide 31

Slide 31 text

sum by (controller, action) (
  rate(gitlab_cache_operations_total{env="gprd", operation="read"}[30s])
)

{action="GET /api/groups/:id/projects"}

Slide 32

Slide 32 text

json.route=/api/:version/groups/:id/projects

Slide 33

Slide 33 text

70 QPS => 50,000 QPS

Slide 34

Slide 34 text

70 req/s * 100 projects/page * 7 queries per project = 50k QPS
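The slide's multiplication, spelled out: each request fans out across a full page of projects, and each project touches several of the cache keys seen in the trace.

```python
# Fan-out arithmetic from the microburst investigation.
req_per_sec = 70            # API requests hitting the endpoint
projects_per_page = 100     # projects returned per request
queries_per_project = 7     # cache reads per project (exists?, forks_count, ...)

redis_qps = req_per_sec * projects_per_page * queries_per_project
print(redis_qps)  # 49000, i.e. roughly the 50k QPS seen at Redis
```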

Slide 35

Slide 35 text

Lesson learned: Aggregation intervals hide burstiness
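Why the dashboards looked fine: averaging a one-second burst over a 30-second window flattens it. A minimal illustration (the numbers are the ones from this story):

```python
# One second of ~49,000 ops inside an otherwise quiet 30s window.
window = [0] * 30
window[0] = 49_000  # the microburst

avg_qps = sum(window) / len(window)
print(round(avg_qps))  # ~1633 ops/s: the burst vanishes at 30s resolution
```

This is exactly what a `rate(...[30s])` style query reports, which is why per-second sampling and packet captures were needed to see the spikes.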

Slide 36

Slide 36 text

Story 2: Regression

Slide 37

Slide 37 text

Let's upgrade, it will be better, right?

Slide 38

Slide 38 text

5.0.9 => 6.0.10

Slide 39

Slide 39 text

+15% CPU utilization

Slide 40

Slide 40 text

6.0.10 => 5.0.9

Slide 41

Slide 41 text

$ sudo perf record -ag -F 497 -- sleep 120
$ sudo perf script --header \
    | stackcollapse-perf.pl --kernel \
    | grep redis-server \
    | flamegraph.pl --hash --colors=perl \
    > flamegraph.svg

Slide 42

Slide 42 text

Matt Smiley

Slide 43

Slide 43 text

serveClientsBlockedOnKeyByModule

Slide 44

Slide 44 text

No content

Slide 45

Slide 45 text

No content

Slide 46

Slide 46 text

No content

Slide 47

Slide 47 text

github.com/redis/redis/pull/8689

Slide 48

Slide 48 text

repro.go

Slide 49

Slide 49 text

before after

Slide 50

Slide 50 text

Lesson learned: Sometimes upgrades make things worse

Slide 51

Slide 51 text

Story 3: Kubernetes

Slide 52

Slide 52 text

VMs => k8s

Slide 53

Slide 53 text

No content

Slide 54

Slide 54 text

No content

Slide 55

Slide 55 text

No content

Slide 56

Slide 56 text

dedicated node pool

Slide 57

Slide 57 text

No content

Slide 58

Slide 58 text

No content

Slide 59

Slide 59 text

+25% CPU utilization

Slide 60

Slide 60 text

$ sudo perf record

Slide 61

Slide 61 text

Matt Smiley

Slide 62

Slide 62 text

do_softirq

Slide 63

Slide 63 text

GKE is very smart

Slide 64

Slide 64 text

No content

Slide 65

Slide 65 text

• When traffic reaches a Kubernetes node, it is handled the same way, regardless of the type of load balancer. • The load balancer is not aware of which nodes in the cluster are running Pods for its Service. • Instead, it balances traffic across all nodes in the cluster, even those not running a relevant Pod.
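One common mitigation for this behavior (not necessarily the one used in this story) is to tell the Service to keep externally load-balanced traffic on nodes that actually run a matching Pod, avoiding the extra forwarding hop and its softirq cost. A sketch, with illustrative names:

```yaml
# Hypothetical Service manifest; "redis" names are assumptions, not from the talk.
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  type: LoadBalancer
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379
  # "Local" makes nodes without a matching Pod drop LB traffic instead of
  # forwarding it to another node, at the cost of less even spreading.
  externalTrafficPolicy: Local
```

Container-native load balancing (GKE dataplane v2 with NEGs, as the next slide notes) sidesteps the problem differently by routing to Pod IPs directly.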

Slide 66

Slide 66 text

🤦

Slide 67

Slide 67 text

GKE dataplane v2 & NEGs handle this a little better

Slide 68

Slide 68 text

Lesson learned: Production does not care about your readiness review

Slide 69

Slide 69 text

Takeaways

Slide 70

Slide 70 text

• Story 1: Microbursts • Aggregation intervals hide burstiness • Story 2: Regression • Sometimes upgrades make things worse • Story 3: Kubernetes • Production does not care about your readiness review

Slide 71

Slide 71 text

References • Microburst analysis • gitlab.com/gitlab-com/gl-infra/reliability/-/issues/9420 • Performance regression in BRPOP • github.com/redis/redis/issues/8668 • github.com/redis/redis/pull/8689 • Packet processing overhead on Kubernetes • gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1985 • Further reading • github.com/redis/redis/issues/7071 • about.gitlab.com/blog/2022/11/28/how-we-diagnosed-and-resolved-redis-latency-spikes • gitlab.com/gitlab-com/runbooks/-/blob/master/scripts/redis_trace_cmd.rb

Slide 72

Slide 72 text

Acknowledgements • Matt Smiley • Colleague, friend, sleuth extraordinaire

Slide 73

Slide 73 text

Thanks!