Best practices for ensuring availability and scalability for k8s
Hiroki Sakamoto
September 29, 2020
Transcript
Key points for ensuring Availability and Scalability of k8s @taisho6339
About me: Hiroki Sakamoto. Twitter: taisho6339, GitHub: taisho6339. Career: Yahoo → Recruit Technologies → freelance. Current work: building and operating a Kubernetes-based platform for microservices. Going forward: considering a full-time position in order to work with more discretion.
Today's theme: the points we ensured in order to operate k8s safely, organized into two areas: Scalability and Availability
Purpose of this deck: to serve as a reference to look back on in future projects
Five key points: 1. Ensure latency 2. Ensure throughput 3. Prepare for traffic spikes 4. Prepare for node failures 5. A maintenance strategy based on multiple clusters
1. Ensure Latency
Ensuring latency: check whether a single Pod can return responses within the target latency. If this is neglected, horizontal scaling has little effect. (Load-test a Pod of Service A with a Locust cluster: how fast is it?)
Ensuring latency: if latency is above the target, the countermeasures are: 1. Scale up the Pod 2. Place it on the optimal Node 3. Tune the application
Ensuring latency ~scaling up the Pod~
Solution 1: scale up the Pod
• Resources such as CPU and memory can be specified
• Configured with request and limit in the Pod definition
containers:
  ...
  resources:
    limits:
      cpu: 1.0
      memory: 512Mi
    requests:
      cpu: 0.2
      memory: 512Mi
Ensuring latency ~scaling up the Pod~
Specifying request
• Specifies the memory and CPU allocated to the Pod
• The node a Pod is scheduled on is chosen based on its requested resource amount
Ensuring latency ~scaling up the Pod~
Specifying limit
• The maximum amount of resources the Pod is actually allowed to use
  ◦ Can exceed the amount allocated by request
  ◦ If request is not specified, it defaults to the same value as limit
  ◦ Effective for workloads that normally need little but may burst temporarily
• When the Pod tries to exceed the limit it is throttled and its usage is capped
Ensuring latency ~scaling up the Pod~
Caveats for request and limit
• When the gap between limit and request is large
  ◦ Node resources may be exhausted once usage climbs up to the limit
• When request is not specified
  ◦ The scheduler cannot estimate resource usage, so Pods may pile up on particular nodes
Ensuring latency ~optimal node placement~
Solution 2: place Pods on the optimal node
• Choose nodes by resource type, such as GPU or SSD
(Example: Node1 has an HDD, Node2 has an SSD; a Pod that needs fast I/O should go to the SSD node.)
Ensuring latency ~optimal node placement~
Ways to target specific nodes
• nodeSelector
  ◦ Place the Pod on Nodes that carry a specific label
• NodeAffinity
  ◦ Also places the Pod on Nodes with a specific label, but more flexibly (see the sketch below)
• Taint + Toleration
  ◦ Restrict scheduling on a Node so that only Pods that declare a toleration are placed on it
Reference: assigning Pods to Nodes
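As an illustration, a minimal Pod spec sketch that uses nodeAffinity to require SSD-backed nodes; the disktype=ssd label, the Pod name, and the image are assumed examples rather than anything defined in the talk.

apiVersion: v1
kind: Pod
metadata:
  name: io-heavy-app                  # hypothetical Pod name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype             # assumed label; must be applied to the SSD nodes
            operator: In
            values:
            - ssd
  containers:
  - name: app
    image: nginx:1.19                 # placeholder image

With nodeSelector the same effect is a single mapping (disktype: ssd), but nodeAffinity additionally allows preferred (soft) rules.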
Ensuring latency ~tuning the application~
Solution 3: tune the application
• Use APM or similar tooling to identify the bottleneck, then change the implementation to improve performance
2. Ensure Throughput
Assumed architecture: LB → Ingress Gateway → Service A / Service B / Service C, a microservices + API Gateway pattern
Ensuring throughput: basic approach
• Verify throughput in order: LB, then API Gateway, then each Service
• Disable HPA and scale manually so the bottleneck is easier to see
Ensuring throughput: basic approach
Example goal: guarantee 6,000 RPS through the LB → Ingress Gateway → Service A/B/C (microservices + API Gateway) architecture.
Ensuring throughput: basic approach
First, can the LB deliver 6,000 RPS? Test it against Nginx serving static content.
Ensuring throughput: basic approach
Next, can the Ingress Gateway deliver 6,000 RPS?
Ensuring throughput: basic approach
Finally, can each Service deliver 6,000 RPS?
Ensuring throughput: when throughput stops improving...
• Check where the bottleneck is
  ◦ CPU usage, memory usage, IOPS, load average, JVM heap, connection pools, cache hit rate...
• Move the bottleneck!
  ◦ Apply horizontal scaling, vertical scaling, application tuning, optimal node placement, or rethink the routing scheme
3. Prepare for spikes: a sudden surge in traffic!
Prepare for spikes
If the traffic increase is gradual rather than a spike...
• Horizontal Pod Autoscaler + Cluster Autoscaler can handle it
Prepare for spikes ~Horizontal Pod Autoscaler~
How HPA works
• Every 30 seconds it checks metrics such as CPU usage or request rate and computes the desired number of Pods
• It scales out at most once every 3 minutes and scales in at most once every 5 minutes
Reference: Horizontal Pod Autoscaler
Prepare for spikes ~Horizontal Pod Autoscaler~
The HPA formula
desiredReplicas = ceil[currentReplicas * (currentMetricValue / desiredMetricValue)]
Example calculation
Target metric: CPU utilization, target value: 60%, current replicas: 4, current average Pod CPU utilization: 90%
Replicas after scaling = ceil[4 * (90 / 60)] = 6
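As a sketch of what this looks like in a manifest, assuming a Deployment named web and a cluster recent enough for the autoscaling/v2 API (older clusters use autoscaling/v2beta2); the replica bounds are illustrative.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web                            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                          # assumed Deployment to scale
  minReplicas: 4
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60         # the 60% target used in the example above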
Prepare for spikes ~Cluster Autoscaler~
How Cluster Autoscaler works
Nodes are scaled out at the moment the requested resources run out and a Pod can no longer be scheduled.
(Node1 and Node2 are full: there is nowhere left to schedule the new Pod...)
Prepare for spikes ~Cluster Autoscaler~
...so a new node (Node3) is added and the pending Pod is scheduled onto it.
Prepare for spikes
The pitfall of HPA + Cluster Autoscaler: there is a certain lead time before nodes and Pods finish scaling, so they cannot keep up with a spike-like surge.
Prepare for spikes
Countermeasures
• If the timing is predictable (a TV commercial, being featured on Yahoo, etc.)
  ◦ Raise the HPA minReplicas at that specific time with a CronJob or similar (see the sketch below)
• If the timing is not predictable
  ◦ For example, relax the target CPU utilization
  ◦ Provision a larger minReplicas up front
  ◦ Revisit caching strategies such as a CDN
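A sketch of the predictable-timing case, assuming an HPA named web, a ServiceAccount hpa-patcher that already has RBAC permission to patch HPAs, and a cluster on Kubernetes 1.21+ for batch/v1 CronJobs (older clusters use batch/v1beta1); the schedule, image, and replica count are illustrative.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: raise-min-replicas             # hypothetical name
spec:
  schedule: "50 19 * * *"              # shortly before an expected 20:00 spike
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: hpa-patcher        # assumed SA with patch rights on HPAs
          restartPolicy: Never
          containers:
          - name: kubectl
            image: bitnami/kubectl:latest        # any image that ships kubectl
            command:
            - /bin/sh
            - -c
            - kubectl patch hpa web -p '{"spec":{"minReplicas":10}}'

A second CronJob can lower minReplicas again once the spike window has passed.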
4. Prepare for node failures (when a node goes down, its Pods are evicted)
Prepare for node failures
Why nodes go down
• Hardware failures
• Zone or region outages
• Cluster upgrades
• Nodes with no availability guarantee (Preemptible nodes, Spot instances)
• Operational mistakes
Prepare for node failures
Countermeasures against node failures
1. Pod redundancy and zone distribution
2. Safe Pod termination
  ◦ Configure graceful shutdown
  ◦ An appropriate Pod eviction strategy
3. Correct cluster configuration
  ◦ Configure a maintenance window and surge upgrades
  ◦ Operate nodes without an availability guarantee correctly
4.1 Prepare for node failures - Pod redundancy and zone distribution -
Prepare for node failures ~Pod redundancy and zone distribution~
Pod distribution strategy ~spreading across nodes~
Use Pod anti-affinity so that Pods of the same service are, as far as possible, not placed on the same node (e.g. one serviceA Pod on Node1 and another on Node2).
Prepare for node failures ~Pod redundancy and zone distribution~
Pod distribution strategy ~spreading across zones~
Use Pod anti-affinity so that Pods of the same service are, as far as possible, not placed in the same zone (e.g. Node1 in asia-northeast1-a and Node2 in asia-northeast1-b, in a regional cluster).
On Kubernetes 1.18 and later, Topology Spread Constraints are recommended (see the sketch below).
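A minimal sketch of a topology spread constraint inside a Deployment's Pod template, assuming the Pods carry the label app: service-a; the label, maxSkew, and image are illustrative.

# fragment of a Deployment's Pod template spec
spec:
  topologySpreadConstraints:
  - maxSkew: 1                                 # allow at most 1 Pod of imbalance between zones
    topologyKey: topology.kubernetes.io/zone   # spread across zones
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: service-a                         # assumed Pod label
  containers:
  - name: service-a
    image: gcr.io/example/service-a:1.0        # placeholder image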
4.2 Prepare for node failures - safe Pod termination -
Prepare for node failures ~safe termination~
When a node stops, the Pods scheduled on it are terminated and then recreated on another node.
Prepare for node failures ~safe termination~
For safe operation, a procedure that stops Pods safely is indispensable.
Prepare for node failures ~safe termination~
Configuring graceful shutdown: settings that stop accepting new requests while waiting for in-flight requests to finish before the process is killed.
Prepare for node failures ~safe termination~
Behavior when a Pod terminates: two things are triggered at the same time.
• Container shutdown: the PreStop hook (an arbitrary command) runs, then SIGTERM is sent, the application handles SIGTERM, and finally SIGKILL is sent.
• Removal of the Pod from the Service Endpoints, which stops routing to it.
Because the two tracks run concurrently, if the container dies before the Pod has been removed from the Endpoints, requests are still routed to a dead container.
Prepare for node failures ~safe termination~
Wait for routing to the Pod to stop: sleep in the PreStop hook so that the Endpoint removal has time to take effect before SIGTERM handling begins.
lifecycle:
  preStop:
    exec:
      command: ["/bin/sh", "-c", "sleep 10"]
Prepare for node failures ~safe termination~
Then wait for in-flight requests to finish before terminating the process: the application's own shutdown handling runs on SIGTERM.
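Putting the two together, a sketch of the relevant part of a Pod spec; the 10-second sleep and the 40-second grace period are illustrative values, and the grace period has to cover the preStop sleep plus the application's own shutdown time (the default is 30 seconds).

spec:
  terminationGracePeriodSeconds: 40    # preStop sleep + application shutdown must fit in here
  containers:
  - name: app
    image: gcr.io/example/app:1.0      # placeholder image
    lifecycle:
      preStop:
        exec:
          # give the control plane time to remove this Pod from the Service Endpoints
          command: ["/bin/sh", "-c", "sleep 10"]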
Prepare for node failures ~safe termination~
Pod Disruption Budget: a resource that controls how many Pods may be taken down at the same time when nodes evict Pods (the node drains only the specified number at a time).
Reference: Disruptions
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: sample
spec:
  maxUnavailable: "25%"
  selector:
    matchLabels:
      app: sample
4.3 Prepare for node failures - cluster configuration -
Prepare for node failures ~cluster upgrades~
Prepare for the cluster's auto-upgrade
• Set the maintenance window to a time when traffic is low
• Configure surge upgrades so that nodes are upgraded one after another
Prepare for node failures ~operating nodes without an availability guarantee~
Operating Preemptible nodes (a regular Node Pool alongside a Preemptible Node Pool)
• Spread each service's Pods across both Node Pools (see the sketch below)
• Do not place critical Pods on Preemptible nodes
• Limit Preemptible nodes to a fraction of all nodes
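One possible sketch of "do not place critical Pods there": GKE labels preemptible nodes with cloud.google.com/gke-preemptible=true, so a required node affinity on the critical workload can exclude them (the container name and image are placeholders).

# fragment of the critical workload's Pod template spec
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: cloud.google.com/gke-preemptible   # label GKE sets on preemptible nodes
            operator: DoesNotExist                  # require the label to be absent
  containers:
  - name: critical-app
    image: gcr.io/example/critical-app:1.0          # placeholder image

Conversely, tainting the preemptible pool and adding tolerations only to workloads that can survive preemption achieves the same separation.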
5. Cluster maintenance strategy
(Diagram: two copies of the Ingress Gateway → Service A / Service B / Service C stack behind an LB)
Risks that come with cluster maintenance
• Is there any application that would break with a k8s version upgrade?
• Updating Istio or similar components may break connectivity between services
• A region failure may take everything down at once
Dealing with cluster maintenance: make the cluster redundant, i.e. go multi-cluster.
Improving availability with multiple clusters
• Duplicate the cluster and put a load balancer that provides a single VIP in front of both (each cluster runs the same Ingress Gateway → Service A/B/C stack).
• When updating a cluster, detach it from the LB, perform the update work, then route traffic back to it.
• If there are no problems, update the other cluster in the same way and re-route again.
• This realizes a rolling update of whole clusters.
• The same setup also gives failover when a region fails.
How to implement multi-cluster: GCP Anthos
• Automatically builds routing with GCLB + NEG
• Provides a managed service mesh
• Ensures observability plus SLO/SLI monitoring
• Synchronizes cluster resources across clusters
Added value of multi-cluster
Not just higher availability; there are various other benefits:
• Turning zonal clusters into a multi-cluster setup can end up cheaper than a regional cluster
• Lower latency thanks to the VIP + IP Anycast
Summary
• Ensure latency
  ◦ Set request and limit appropriately!
  ◦ Place Pods on nodes with the right resources!
  ◦ Tune the application
Summary
• Ensure throughput
  ◦ Make the current bottleneck visible and keep tuning
  ◦ Scale Pods horizontally, and vertically via request and limit
Summary
• Prepare for spikes
  ◦ Understand how HPA and Cluster Autoscaler work
  ◦ If that is not enough, adjust the replica count with a CronJob or relax the scaling metric target
Summary
• Prepare for node failures
  ◦ Spread Pods across zones and nodes
  ◦ Configure graceful shutdown and PDBs appropriately so Pods stop safely
  ◦ Set the cluster maintenance window and upgrade method properly
  ◦ Be careful with Preemptible nodes!
Summary
• Cluster maintenance strategy
  ◦ Going multi-cluster lets you canary-release and rolling-update clusters safely
  ◦ There is additional value such as lower latency
  ◦ With GCP Anthos you can hang multiple clusters under a single VIP
Follow Me!! @taisho6339
Thank you for listening!