k8sの可用性とScalabilityを担保するための大事な観点 / Best practices for ensuring availability and scalability for k8s
Hiroki Sakamoto
September 29, 2020
Transcript
Key considerations for ensuring Availability and Scalability in k8s @taisho6339
Self-introduction — Hiroki Sakamoto. Twitter: taisho6339, Github: taisho6339. Career: Yahoo → Recruit Technologies → freelance. Current work: building and operating a Kubernetes-based platform for microservices. Going forward: considering a full-time position in order to work with more discretion.
Today's theme — the considerations to ensure when operating k8s safely, organized around just two properties: Scalability and Availability.
Purpose of this deck — to serve as a "bible" to look back on in future projects.
Five key considerations:
1. Ensure latency
2. Ensure throughput
3. Prepare for traffic spikes
4. Prepare for node failures
5. A maintenance strategy using multiple clusters
1. Ensure Latency
Ensuring latency — check whether a single Pod can return responses within the target latency (e.g. load-testing a Pod of Service A from a locust cluster and asking "how fast?"). If this is not ensured first, horizontal scaling has little effect.
Ensuring latency — when latency is higher than the target, the countermeasures are: 1. scale the Pod up, 2. place it on the optimal node, 3. tune the application.
Ensuring latency ~ scaling Pods up ~ Solution 1: scale the Pod up. Resources such as CPU and memory can be specified, using `requests` and `limits` in the Pod definition:

```yaml
containers:
  ...
  resources:
    limits:
      cpu: 1.0
      memory: 512Mi
    requests:
      cpu: 0.2
      memory: 512Mi
```
Ensuring latency ~ scaling Pods up ~ Specifying `requests`: declares the memory and CPU to allocate to the Pod; the scheduler decides which node the Pod is placed on based on the requested amounts.
Ensuring latency ~ scaling Pods up ~ Specifying `limits`: the upper bound on the resources the Pod can actually use. Usage may exceed the amount allocated via `requests`; if only `limits` is specified, `requests` defaults to the same value. This is effective for workloads that usually need little but can burst temporarily. A container that tries to exceed its CPU limit is throttled, capping its usage.
Ensuring latency ~ scaling Pods up ~ Caveats about `requests` and `limits`:
• if the gap between `limits` and `requests` is large, node resources can be exhausted when usage climbs up to the limit
• if `requests` is not specified, the scheduler cannot estimate resource usage and Pods may concentrate on particular nodes
Ensuring latency ~ optimal node placement ~ Solution 2: place the Pod on the optimal node, selecting the resource type it needs, such as GPU or SSD (e.g. a workload that needs fast IO goes to the SSD node rather than the HDD node).
Ensuring latency ~ optimal node placement ~ Ways to choose the node (sketch below):
• nodeSelector ◦ place the Pod on nodes that have a specific label
• NodeAffinity ◦ also label-based, but more flexible
• Taint + Toleration ◦ restrict scheduling on the node so that only Pods that explicitly tolerate it are placed there
Reference link: scheduling Pods onto Nodes.
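A minimal sketch combining two of these mechanisms on one Pod; the `disktype: ssd` label and the `dedicated=io:NoSchedule` taint are illustrative names, not taken from the slides:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: io-heavy-app
spec:
  # nodeSelector: only schedule onto nodes labeled disktype=ssd
  nodeSelector:
    disktype: ssd
  # tolerations: allow scheduling onto nodes tainted dedicated=io:NoSchedule
  tolerations:
    - key: dedicated
      operator: Equal
      value: io
      effect: NoSchedule
  containers:
    - name: app
      image: example/app:latest   # placeholder image
```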
Ensuring latency ~ tuning the application ~ Solution 3: tune the application. Use APM or similar tooling to identify the bottleneck, then change the implementation to improve performance.
2. Ensure Throughput
Assumed architecture — LB → Ingress Gateway → Service A / Service B / Service C: a microservices + API Gateway pattern.
Ensuring throughput — basic approach:
• verify capacity in order: LB, then API Gateway, then each Service
• to make bottlenecks easier to see, disable HPA and scale manually
Ensuring throughput — basic approach, worked through the microservices + API Gateway architecture with a target of 6000 RPS: first, can the LB deliver 6000 RPS (checked with Nginx returning static content behind it)? Then, can the Ingress Gateway deliver 6000 RPS? Finally, can each service deliver 6000 RPS?
Ensuring throughput — when throughput stops growing:
• check where the bottleneck is now ◦ CPU usage, memory usage, IOPS, load average, JVM heap, connection pools, cache hit rate, ...
• then move the bottleneck ◦ horizontal scaling, vertical scaling, application tuning, better node placement, or rethinking the routing scheme
3. Prepare for spikes — a sudden surge in traffic!
Preparing for spikes — if traffic grows gradually rather than spiking, Horizontal Pod Autoscaler + Cluster Autoscaler can handle it.
Preparing for spikes ~ Horizontal Pod Autoscaler ~ How HPA works: every 30 seconds it checks metrics such as CPU usage or request rate and computes the desired number of Pods; it scales out at most once every 3 minutes and scales in at most once every 5 minutes. Reference: Horizontal Pod Autoscaler.
Preparing for spikes ~ Horizontal Pod Autoscaler ~ The HPA formula: desiredReplicas = ceil[currentReplicas * (currentMetricValue / desiredMetricValue)]. Worked example — metric: CPU utilization, target CPU utilization: 60%, current replicas: 4, current average CPU utilization of the Pods: 90%; replicas after scaling = ceil[4 * (90 / 60)] = 6.
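For reference, a minimal HPA manifest matching this worked example; the Deployment name `sample-app` and the replica bounds are assumptions, not from the slides (use `autoscaling/v2beta2` on clusters older than 1.23):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sample-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sample-app
  minReplicas: 4
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60   # target CPU utilization from the example
```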
Preparing for spikes ~ Cluster Autoscaler ~ How Cluster Autoscaler works: when the requested resources run out and a Pod can no longer be scheduled (Node1 and Node2 are full, nowhere to schedule it), a node is added (Node3) and the Pod is scheduled onto it.
Preparing for spikes — the problem with HPA + Cluster Autoscaler: there is a certain lead time before nodes and Pods finish scaling, so they cannot keep up with a spike-like increase.
Preparing for spikes — countermeasures:
• if the timing is predictable (a TV commercial, being featured on Yahoo!, etc.) ◦ raise the HPA minReplicas at that time with a CronJob or similar (see the sketch below)
• if the timing is not predictable ◦ loosen the metric target, e.g. a lower target CPU utilization ◦ provision a higher minReplicas in advance ◦ revisit the caching strategy, e.g. a CDN
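A minimal sketch of the "raise minReplicas on a schedule" idea; the HPA name `sample-app`, the `bitnami/kubectl` image, and the ServiceAccount with permission to patch the HPA are assumptions, as the slides do not prescribe a concrete implementation (use `batch/v1beta1` on clusters older than 1.21):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: scale-up-before-peak
spec:
  schedule: "50 19 * * *"   # e.g. 10 minutes before an expected 20:00 peak
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: hpa-patcher   # needs RBAC rights to patch the HPA
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl:latest
              command:
                - kubectl
                - patch
                - hpa
                - sample-app
                - --patch
                - '{"spec": {"minReplicas": 10}}'
```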
4. Prepare for node failures — Evict!
Preparing for node failures — reasons a node goes down:
• hardware failure
• zone or region outage
• cluster upgrades
• nodes without availability guarantees (Preemptible nodes, Spot instances)
• operational mistakes
Preparing for node failures — countermeasures:
1. Pod redundancy and zone spreading
2. Safe Pod shutdown ◦ configure graceful shutdown ◦ an appropriate Pod eviction strategy
3. Correct cluster settings ◦ maintenance window and Surge Upgrade settings ◦ correct operation of nodes without availability guarantees
4.1 Prepare for node failures - Pod redundancy and zone spreading -
Preparing for node failures ~ Pod redundancy and zone spreading ~ Pod spreading strategy ~ spreading across nodes ~: use Pod Anti-Affinity so that Pods of the same service are, as far as possible, not placed on the same node (e.g. serviceA Pods split across Node1 and Node2); a sketch follows.
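A minimal sketch of the node-spreading idea, placed in the Deployment's Pod template; the `app: serviceA` label is illustrative:

```yaml
# Prefer not to co-locate serviceA Pods on the same node.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: serviceA
          topologyKey: kubernetes.io/hostname
```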
Preparing for node failures ~ Pod redundancy and zone spreading ~ Pod spreading strategy ~ spreading across zones ~: use Pod Anti-Affinity so that Pods of the same service are, as far as possible, not placed in the same zone (e.g. Node1 in asia-northeast1-a, Node2 in asia-northeast1-b). On 1.18 and later, Topology Spread Constraints are recommended instead (sketch below). Use a regional cluster!
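The zone-spreading variant for 1.18+, again as a sketch in the Pod template; the label and skew values are illustrative:

```yaml
# Spread serviceA Pods evenly across zones (at most 1 Pod of skew).
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: serviceA
```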
4.2 Prepare for node failures - Safe Pod shutdown -
Preparing for node failures ~ safe shutdown ~ When a node is stopped, the Pods scheduled on it are terminated, and new ones are created on other nodes.
Preparing for node failures ~ safe shutdown ~ For safe operation, a procedure that stops Pods safely is indispensable.
Preparing for node failures ~ safe shutdown ~ Configuring graceful shutdown: stop accepting new requests while letting in-flight requests finish, and only then bring the process down.
Preparing for node failures ~ safe shutdown ~ What happens when a Pod terminates: two lines of work run at the same time. One line is the container shutdown sequence: the PreStop hook (an arbitrary command), then SIGTERM (handled on the application side), then SIGKILL. The other line removes the Pod from the Service Endpoints, which is what actually stops routing to it. Because the two lines run concurrently, if the process dies before the Endpoint removal has taken effect, requests are still routed to the dead container.
Preparing for node failures ~ safe shutdown ~ Wait for routing to the Pod to stop before starting to shut down, for example with a preStop hook:

```yaml
lifecycle:
  preStop:
    exec:
      command: ["/bin/sh", "-c", "sleep 10"]
```
Preparing for node failures ~ safe shutdown ~ Then, in the application's shutdown handling (on SIGTERM), wait for in-flight requests to complete before exiting the process.
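Putting the pieces together, a minimal container-spec sketch; the 10-second sleep, the 30-second grace period, and the image name are illustrative, and the application is assumed to drain in-flight requests on SIGTERM:

```yaml
spec:
  terminationGracePeriodSeconds: 30   # must cover the preStop sleep plus app drain time
  containers:
    - name: app
      image: example/app:latest
      lifecycle:
        preStop:
          exec:
            # keep serving until the Pod has been removed from the Endpoints
            command: ["/bin/sh", "-c", "sleep 10"]
```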
Preparing for node failures ~ safe shutdown ~ Pod Disruption Budget: a resource that controls how many Pods may be taken down at the same time when nodes evict Pods, so that they are evicted only a specified number at a time. Reference: Disruptions.

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: sample
spec:
  maxUnavailable: "25%"
  selector:
    matchLabels:
      app: sample
```
4.3 Prepare for node failures - Cluster settings -
Preparing for node failures ~ cluster upgrades ~ Prepare for the cluster's auto-upgrade:
• set the maintenance window to hours with little traffic
• configure Surge Upgrade so that nodes are upgraded one after another
Preparing for node failures ~ operating nodes without availability guarantees ~ Operating Preemptible nodes: run a normal Node Pool alongside a Preemptible Node Pool and:
• place Pods on both Node Pools (e.g. ServiceA and ServiceB spread across both)
• do not place critical Pods on the preemptible pool (sketch below)
• limit preemptible nodes to a fraction of the whole cluster
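One way to keep a critical workload off the preemptible pool, as a sketch assuming GKE, where preemptible nodes carry the `cloud.google.com/gke-preemptible=true` label; the slides do not show a concrete manifest:

```yaml
# In the critical workload's Pod template:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            # never schedule this Pod onto preemptible nodes
            - key: cloud.google.com/gke-preemptible
              operator: DoesNotExist
```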
5. Cluster maintenance strategy — two identical stacks (Ingress Gateway → Service A / B / C) behind a single LB.
Risks that come with cluster maintenance:
• are there applications that break on a k8s version upgrade?
• updating Istio or similar components can break connectivity between services
• a region outage can take everything down at once
Dealing with cluster maintenance: make the cluster itself redundant, i.e. go multi-cluster.
Improving availability with multiple clusters: duplicate the whole cluster (Ingress Gateway → Service A / B / C, twice) behind an LB that provides a single VIP. When a cluster needs updating, detach it from the LB, do the upgrade work, and then route traffic back to it; if everything looks fine, update the other cluster in the same way and re-route again. This realizes a rolling update of clusters, and the same setup provides failover when a region fails.
How to realize multi-cluster — GCP Anthos:
• automatically builds the routing using GCLB + NEGs
• provides a managed service mesh
• provides observability and SLO/SLI monitoring
• synchronizes cluster resources between clusters
Added value of multi-cluster — not only higher availability, but other benefits as well:
• turning zonal clusters into a multi-cluster setup can come out cheaper than a regional cluster
• lower latency through a single VIP + IP Anycast
Summary
• Ensure latency
◦ set requests and limits appropriately!
◦ place Pods on nodes with the optimal resources!
◦ tune the application
Summary
• Ensure throughput
◦ visualize where the bottleneck currently is and keep tuning
◦ scale Pods horizontally, and vertically via requests and limits
Summary
• Prepare for spikes
◦ understand how HPA and Cluster Autoscaler work
◦ when that is not enough, adjust replicas with a CronJob or loosen the scaling metric targets
Summary
• Prepare for node failures
◦ spread Pods across zones and nodes
◦ configure graceful shutdown and PDBs appropriately so Pods stop safely
◦ set the cluster's maintenance window and maintenance method properly
◦ be careful with Preemptible nodes!
Summary
• Cluster maintenance strategy
◦ going multi-cluster lets you canary-release and rolling-update clusters safely
◦ there is added value beyond availability, such as lower latency
◦ with GCP Anthos you can hang multiple clusters under a single VIP
Follow Me!! @taisho6339
Thank you for listening!