
OpenShift-SDN and NetworkPolicy


Deep dive into OpenShift-SDN (ovs-networkpolicy)

orimanabu

May 28, 2019

Transcript

1. What is CNI?
• CNI (Container Network Interface)
  ◦ The specification for configuring container network interfaces in Kubernetes
• What it does
  ◦ Sets up network connectivity when a Linux container is created
  ◦ Releases resources when the container is deleted
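The two responsibilities above are exercised through a simple contract: the runtime executes a plugin binary with the operation in the `CNI_COMMAND` environment variable and prints a result JSON. A toy sketch (the `fake_plugin` function is a hypothetical stand-in, not the real openshift-sdn plugin):

```shell
# Toy illustration of the CNI contract; fake_plugin is a hypothetical stand-in
# for a real plugin binary such as openshift-sdn.
fake_plugin() {
  case "$CNI_COMMAND" in
    ADD) echo '{"cniVersion":"0.3.1","ips":[{"address":"10.130.0.17/23"}]}' ;;  # wire up the new container
    DEL) echo '{}' ;;                                                           # release its resources
  esac
}

# The runtime sets the operation and container identity in the environment:
CNI_COMMAND=ADD CNI_CONTAINERID=example CNI_IFNAME=eth0 fake_plugin
```

The IP in the ADD result mirrors the client1 Pod address used later in this deck; it is illustrative only.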
2. OpenShift-SDN
• Three operation modes
  ◦ ovs-subnet
    ▪ Flat: every Pod can reach every other Pod (no isolation)
  ◦ ovs-multitenant
    ▪ Pods within the same project can communicate; Pods in different projects cannot
  ◦ ovs-networkpolicy
    ▪ NetworkPolicy v1
• Built on Open vSwitch (OVS)
• VXLAN overlay
3. Packet flow
[Diagram: node1 (172.16.99.41) and node2 (172.16.99.42) on 172.16.99.0/24. On each node the OVS bridge br0 connects vxlan0, tun0 (node1: 10.130.0.1, node2: 10.129.0.1), and the Pod veth interfaces, with kernel iptables on the host side. The client Pod client1-1-zn9ff (10.130.0.17, on node1) reaches the httpd server Pod server1-1-6qc4j (10.129.0.13, on node2, Service IP 172.30.255.14) across the VXLAN overlay.]
4. VNID
[ori@ocp311-master1 RHTN]$ oc get netnamespaces
NAME                    NETID      EGRESS IPS
default                 0          []
kube-public             536622     []
kube-system             7695582    []
management-infra        14065074   []
openshift               2031527    []
openshift-console       2954107    []
openshift-infra         12640971   []
openshift-logging       13439836   []
openshift-node          3244486    []
openshift-sdn           14688704   []
openshift-web-console   7072175    []
proj1                   4610606    []    (= 0x465a2e)
proj2                   10513584   []    (= 0xa06cb0)
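The NETID values printed by `oc get netnamespaces` are decimal, while the flow rules on the following slides carry the same VNID as hex values in reg0/reg1. The conversion for proj1 and proj2 is just:

```shell
# NETIDs from `oc get netnamespaces` are decimal; OVS flow rules match the same
# VNID as a hex value in reg0/reg1. Convert proj1 and proj2 for comparison:
printf '0x%x\n' 4610606    # proj1 -> 0x465a2e
printf '0x%x\n' 10513584   # proj2 -> 0xa06cb0
```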
5. node1 flow entries
table=0, priority=400, ip,in_port=2,nw_src=10.130.0.1 actions=goto_table:30
table=0, priority=300, ct_state=-trk,ip actions=ct(table=0)
table=0, priority=300, ip,in_port=2,nw_src=10.130.0.0/23,nw_dst=10.128.0.0/14 actions=goto_table:25
table=0, priority=250, ip,in_port=2,nw_dst=224.0.0.0/4 actions=drop
table=0, priority=200, arp,in_port=1,arp_spa=10.128.0.0/14,arp_tpa=10.130.0.0/23 actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10
table=0, priority=200, ip,in_port=1,nw_src=10.128.0.0/14 actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10
table=0, priority=200, ip,in_port=1,nw_dst=10.128.0.0/14 actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10
table=0, priority=200, arp,in_port=2,arp_spa=10.130.0.1,arp_tpa=10.128.0.0/14 actions=goto_table:30
table=0, priority=200, ip,in_port=2 actions=goto_table:30
table=0, priority=150, in_port=1 actions=drop
table=0, priority=150, in_port=2 actions=drop
table=0, priority=100, arp actions=goto_table:20
table=0, priority=100, ip actions=goto_table:20
table=0, priority=0 actions=drop
table=10, priority=100, tun_src=172.16.99.31 actions=goto_table:30
table=10, priority=100, tun_src=172.16.99.21 actions=goto_table:30
table=10, priority=100, tun_src=172.16.99.42 actions=goto_table:30
table=10, priority=0 actions=drop
table=20, priority=100, arp,in_port=14,arp_spa=10.130.0.17,arp_sha=00:00:0a:82:00:11/00:00:ff:ff:ff:ff actions=load:0x465a2e->NXM_NX_REG0[],goto_table:21
table=20, priority=100, arp,in_port=18,arp_spa=10.130.0.21,arp_sha=00:00:0a:82:00:15/00:00:ff:ff:ff:ff actions=load:0xa06cb0->NXM_NX_REG0[],goto_table:21
table=20, priority=100, ip,in_port=14,nw_src=10.130.0.17 actions=load:0x465a2e->NXM_NX_REG0[],goto_table:21
table=20, priority=100, ip,in_port=18,nw_src=10.130.0.21 actions=load:0xa06cb0->NXM_NX_REG0[],goto_table:21
table=20, priority=0 actions=drop
table=21, priority=200, ip,nw_dst=10.128.0.0/14 actions=ct(commit,table=30)
table=21, priority=0 actions=goto_table:30
table=25, priority=100, ip,nw_src=10.130.0.17 actions=load:0x465a2e->NXM_NX_REG0[],goto_table:30
table=25, priority=100, ip,nw_src=10.130.0.21 actions=load:0xa06cb0->NXM_NX_REG0[],goto_table:30
table=25, priority=0 actions=drop
table=30, priority=300, arp,arp_tpa=10.130.0.1 actions=output:2
table=30, priority=300, ip,nw_dst=10.130.0.1 actions=output:2
table=30, priority=300, ct_state=+rpl,ip,nw_dst=10.130.0.0/23 actions=ct(table=70,nat)
table=30, priority=200, arp,arp_tpa=10.130.0.0/23 actions=goto_table:40
table=30, priority=200, ip,nw_dst=10.130.0.0/23 actions=goto_table:70
table=30, priority=100, arp,arp_tpa=10.128.0.0/14 actions=goto_table:50
table=30, priority=100, ip,nw_dst=10.128.0.0/14 actions=goto_table:90
table=30, priority=100, ip,nw_dst=172.30.0.0/16 actions=goto_table:60
table=30, priority=50, ip,in_port=1,nw_dst=224.0.0.0/4 actions=goto_table:120
table=30, priority=25, ip,nw_dst=224.0.0.0/4 actions=goto_table:110
table=30, priority=0, ip actions=goto_table:100
table=30, priority=0, arp actions=drop
table=40, priority=100, arp,arp_tpa=10.130.0.17 actions=output:14
table=40, priority=100, arp,arp_tpa=10.130.0.21 actions=output:18
table=40, priority=0 actions=drop
table=50, priority=100, arp,arp_tpa=10.131.0.0/23 actions=move:NXM_NX_REG0[]->NXM_NX_TUN_ID[0..31],set_field:172.16.99.31->tun_dst,output:1
table=50, priority=100, arp,arp_tpa=10.128.0.0/23 actions=move:NXM_NX_REG0[]->NXM_NX_TUN_ID[0..31],set_field:172.16.99.21->tun_dst,output:1
table=50, priority=100, arp,arp_tpa=10.129.0.0/23 actions=move:NXM_NX_REG0[]->NXM_NX_TUN_ID[0..31],set_field:172.16.99.42->tun_dst,output:1
table=50, priority=0 actions=drop
table=60, priority=200 actions=output:2
table=60, priority=0 actions=drop
table=70, priority=100, ip,nw_dst=10.130.0.17 actions=load:0x465a2e->NXM_NX_REG1[],load:0xe->NXM_NX_REG2[],goto_table:80
table=70, priority=100, ip,nw_dst=10.130.0.21 actions=load:0xa06cb0->NXM_NX_REG1[],load:0x12->NXM_NX_REG2[],goto_table:80
table=70, priority=0 actions=drop
table=80, priority=300, ip,nw_src=10.130.0.1 actions=output:NXM_NX_REG2[]
table=80, priority=200, ct_state=+rpl,ip actions=output:NXM_NX_REG2[]
table=80, priority=50, reg1=0xa4fb4b actions=output:NXM_NX_REG2[]
table=80, priority=50, reg1=0xa7d717 actions=output:NXM_NX_REG2[]
table=80, priority=50, reg1=0x465a2e actions=output:NXM_NX_REG2[]
table=80, priority=50, reg1=0xa06cb0 actions=output:NXM_NX_REG2[]
table=80, priority=0 actions=drop
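A dump like the one above is typically taken on the node with `ovs-ofctl -O OpenFlow13 dump-flows br0`, and grepping it per table is an easy way to follow the pipeline one stage at a time. The sketch below filters a saved excerpt rather than a live bridge:

```shell
# Filter a (saved) flow dump by table to follow the pipeline stage by stage.
# On a real node the dump would come from: ovs-ofctl -O OpenFlow13 dump-flows br0
flows='table=20, priority=100, ip,in_port=14,nw_src=10.130.0.17 actions=load:0x465a2e->NXM_NX_REG0[],goto_table:21
table=80, priority=50, reg1=0x465a2e actions=output:NXM_NX_REG2[]
table=80, priority=0 actions=drop'
echo "$flows" | grep '^table=80'
```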
6. client1 → server1 on node1
• table=0, ip actions=goto_table:20
• table=20, ip,in_port=14,nw_src=10.130.0.17 actions=load:0x465a2e->NXM_NX_REG0[],goto_table:21
• table=21, ip,nw_dst=10.128.0.0/14 actions=ct(commit,table=30)
• table=30, ip,nw_dst=10.128.0.0/14 actions=goto_table:90
• table=90, ip,nw_dst=10.129.0.0/23 actions=move:NXM_NX_REG0[]->NXM_NX_TUN_ID[0..31],set_field:172.16.99.42->tun_dst,output:1
7. client1 → server1 on node2
• table=0, ip,in_port=1,nw_src=10.128.0.0/14 actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10
• table=10, tun_src=172.16.99.41 actions=goto_table:30
• table=30, ip,nw_dst=10.129.0.0/23 actions=goto_table:70
• table=70, ip,nw_dst=10.129.0.13 actions=load:0x465a2e->NXM_NX_REG1[],load:0x8->NXM_NX_REG2[],goto_table:80
• table=80, reg1=0x465a2e actions=output:NXM_NX_REG2[]
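The lookup chain above can be replayed as a toy script. This is a sketch only: the variables mimic the OVS registers, and port 8 (0x8 in reg2) is server1's veth port as shown in the flow rules.

```shell
# Toy replay of the node2 pipeline for a VXLAN packet to server1 (10.129.0.13).
tun_src=172.16.99.41; nw_dst=10.129.0.13
reg0=0x465a2e                          # table 0: VNID copied from tun_id into reg0
case $tun_src in
  172.16.99.41|172.16.99.21) ;;        # table 10: packet comes from a known cluster node
  *) echo drop; exit 0 ;;
esac
case $nw_dst in
  10.129.0.13) reg1=0x465a2e reg2=8 ;; # table 70: dest Pod's VNID -> reg1, OVS port -> reg2
  *) echo drop; exit 0 ;;
esac
echo "output:$reg2"                    # table 80: no policy restriction yet -> deliver
# prints: output:8
```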
8. Applying a NetworkPolicy (1)
Flow rules for [client1 → server1] on node2:
• table=0, ip,in_port=1,nw_src=10.128.0.0/14 actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10
• table=10, tun_src=172.16.99.41 actions=goto_table:30
• table=30, ip,nw_dst=10.129.0.0/23 actions=goto_table:70
• table=70, ip,nw_dst=10.129.0.13 actions=load:0x465a2e->NXM_NX_REG1[],load:0x8->NXM_NX_REG2[],goto_table:80
• table=80, actions=drop

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-by-default
spec:
  podSelector:
  ingress: []
9. Applying a NetworkPolicy (2)
Flow rules for [client1 → server1] on node2:
• table=0, ip,in_port=1,nw_src=10.128.0.0/14 actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10
• table=10, tun_src=172.16.99.41 actions=goto_table:30
• table=30, ip,nw_dst=10.129.0.0/23 actions=goto_table:70
• table=70, ip,nw_dst=10.129.0.13 actions=load:0x465a2e->NXM_NX_REG1[],load:0x8->NXM_NX_REG2[],goto_table:80
• table=80, reg0=0x465a2e,reg1=0x465a2e actions=output:NXM_NX_REG2[]

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
spec:
  podSelector:
  ingress:
  - from:
    - podSelector: {}
10. Applying a NetworkPolicy (3)
Flow rules for [client1 → server1] on node2:
• table=0, ip,in_port=1,nw_src=10.128.0.0/14 actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10
• table=10, tun_src=172.16.99.41 actions=goto_table:30
• table=30, ip,nw_dst=10.129.0.0/23 actions=goto_table:70
• table=70, ip,nw_dst=10.129.0.13 actions=load:0x465a2e->NXM_NX_REG1[],load:0x8->NXM_NX_REG2[],goto_table:80
• table=80, ip,reg0=0x465a2e,reg1=0x465a2e,nw_src=10.130.0.17,nw_dst=10.129.0.13 actions=output:NXM_NX_REG2[]

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-server1-from-client1
spec:
  podSelector:
    matchLabels:
      app: server1
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: client1
11. Applying a NetworkPolicy (4)
Flow rules for [client1 → server2] on node2:
• table=0, ip,in_port=1,nw_src=10.128.0.0/14 actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10
• table=10, tun_src=172.16.99.41 actions=goto_table:30
• table=30, ip,nw_dst=10.129.0.0/23 actions=goto_table:70
• table=70, ip,nw_dst=10.129.0.13 actions=load:0x465a2e->NXM_NX_REG1[],load:0x8->NXM_NX_REG2[],goto_table:80
• table=80, ip,reg0=0x465a2e,reg1=0xa06cb0,nw_dst=10.129.0.14 actions=output:NXM_NX_REG2[]

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-from-proj1-client1
spec:
  podSelector:
    matchLabels:
      app: server2
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: proj1
12. Caveats for NetworkPolicy on OpenShift-SDN
• Pods not covered by any NetworkPolicy object accept all traffic
• OpenShift-SDN supports only the NetworkPolicy v1 API
  ◦ Ingress only
  ◦ The following cannot be used:
    ▪ Egress
    ▪ IPBlock
    ▪ Specifying both namespaceSelector and podSelector together
• Selecting individual Pods with a podSelector adds a flow entry per matching Pod (IP address)
  ◦ Prefer a namespaceSelector, or an empty podSelector
  ◦ Keep fine-grained Pod-to-Pod rules to a minimum
13. Thank you
Red Hat is the world's leading provider of enterprise open source software solutions. Award-winning support, training, and consulting services make Red Hat a trusted adviser to the Fortune 500.
linkedin.com/company/red-hat · youtube.com/user/RedHatVideos · facebook.com/redhatinc · twitter.com/RedHat