Slide 1

Slide 1 text

Edge computing with Red Hat OpenShift

Slide 2

Slide 2 text

Agenda
▸ Business goals & challenges
▸ Red Hat OpenShift and edge computing architectures
▸ Red Hat OpenShift edge computing use cases
▸ Q&A

Slide 3

Slide 3 text

Meeting business goals with edge computing
▸ Offer deeper engagements
▸ Distribute processing
▸ Use new, modern applications
▸ Deliver services at scale
▸ Drive innovation
▸ Faster insights, for faster action

Slide 4

Slide 4 text

Understanding edge computing: key challenges, topics, and considerations
▸ Connectivity: disconnected, sporadic, low bandwidth/high latency, private 5G
▸ Scale: locations, clusters, applications, devices (100s to 100K+)
▸ Maintenance: monitor and control
▸ Security: ransomware/extortion, IP/PII theft, legacy infrastructure and devices

Slide 5

Slide 5 text

Edge computing architectures
Topologies to meet the needs of different edge tiers

Slide 6

Slide 6 text

STRICTLY INTERNAL ONLY
Edge tiers
▸ Core data center → regional data center → edge server → edge gateway → edge endpoint (device or sensor)
▸ Provider tiers: provider/enterprise core, provider aggregation edge, provider access edge, provider far edge ("last mile"), and end-user premises edge (partners)
▸ Red Hat's focus spans the provider/enterprise core through the edge server and gateway tiers
* Edge computing == fog computing (there is no real difference other than the name)

Slide 7

Slide 7 text

Declining hardware computing capacity
▸ Tier 1, data analytics: core and regional data centers ("Red Hat in the datacenter"), 16 cores/128 GB, scale of 100+
▸ Tier 2, data aggregation: edge servers, 8 cores/32 GB, scale of 1,000+
▸ Tier 3, data collection: gateways and devices or sensors, 2 cores/2 GB, scale of 10,000+
Hardware capacity declines, and scale increases, moving from the core data center to the endpoint.

Slide 8

Slide 8 text

A consistent edge platform to meet your needs
▸ Runs on an edge gateway/edge server with a small bare metal footprint, on infrastructure virtualization, or in public/private cloud
▸ Develop once, deploy anywhere
▸ Meet diverse use cases
▸ Consistent operations at scale

Slide 9

Slide 9 text

A consistent platform from the central data center (cluster management and application deployment), through the regional data center (Kubernetes node control), to the edge:
▸ Remote worker nodes: for environments that are space constrained (available now)
▸ 3 node clusters: small footprint with high availability (available now)
▸ Single node edge servers: for low bandwidth or disconnected sites (available in 2021)
Legend: C: control nodes, W: worker nodes

Slide 10

Slide 10 text

3-node cluster (OpenShift install)
● Comprised of just 3 control plane nodes, without the need for any additional worker nodes
  ○ Application workloads are schedulable on the control plane nodes
  ○ The control plane remains highly available, supporting upgrades
● Requires:
  ○ Setting worker replicas to 0 in install-config, which configures the supervisor nodes as workers as well (any other value sets them as supervisors only)
  ○ A temporary bootstrap node for initial cluster bring-up
  ○ External DNS and load balancer services
  ○ HAProxy for *.apps reconfigured to target the control nodes (ensure health checks are enabled)
● Minimum system resource requirements for each control plane node are the sum of the control and worker requirements:
  ○ 6 vCPU, 24 GB RAM, 200 GB storage
Expand the cluster at will and on demand:
● Deploy additional worker nodes when demand requires
● Additional nodes can be storage nodes if needed (set with a label)
● The worker role can be removed from the original supervisors if desired
● These nodes could be "remote" workers if needed
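The worker-replicas setting described above can be sketched in `install-config.yaml`. This is a minimal illustrative fragment, not a complete install config; the domain, cluster name, and platform values are placeholders.

```yaml
# Illustrative install-config.yaml fragment for a 3-node compact cluster.
# Setting compute replicas to 0 makes the control plane nodes schedulable
# as workers as well. Domain and platform values are placeholders.
apiVersion: v1
baseDomain: example.com
metadata:
  name: edge-compact
compute:
- name: worker
  replicas: 0          # no dedicated workers; control plane runs workloads
controlPlane:
  name: master
  replicas: 3          # three control plane nodes, highly available
platform:
  none: {}             # user-provisioned infra; external DNS/LB required
```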

Slide 11

Slide 11 text

Single Node OpenShift
▸ Dev preview in OCP 4.7
▸ Current minimum requirements: 8 vCPU / 32 GB
▸ Try it out: https://cloud.redhat.com/openshift/assisted-installer/clusters/~new

Slide 12

Slide 12 text

Managing the edge, just like the core: Red Hat Advanced Cluster Management for Kubernetes
▸ Multicluster lifecycle management
▸ Policy driven governance, risk, and compliance
▸ Advanced application lifecycle management

Slide 13

Slide 13 text

Manage edge clusters
From the central data center (cluster management and application lifecycle), through regional DCs and high-bandwidth sites (Kubernetes node control), to clusters at the far edge.
Red Hat Advanced Cluster Management
● Cluster purpose (label)
● General purpose policies (e.g., security)
● Placement rules for apps (granularity)
● Central update of apps (labels)
Red Hat Advanced Cluster Security
● Vulnerability analysis
● Image assurance
● Compliance assessments / risk profiling
● Runtime behavioral analysis
● Threat detection / incident response
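The label-based placement rules mentioned above can be sketched as an RHACM `PlacementRule`. The namespace and the `purpose: factory` label are hypothetical examples, not values from the deck.

```yaml
# Illustrative RHACM PlacementRule: target application rollout at
# available clusters labeled by purpose. The namespace and the
# "purpose: factory" label are hypothetical examples.
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: factory-edge-placement
  namespace: edge-apps
spec:
  clusterConditions:
  - type: ManagedClusterConditionAvailable
    status: "True"
  clusterSelector:
    matchLabels:
      purpose: factory     # cluster purpose label set on managed clusters
```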

Slide 14

Slide 14 text

Zero Touch Provisioning - RHACM 2.3 (TP)
Aimed at regional distributed on-prem deployments: enabling a customer's automated path from uninstalled infrastructure to applications running on an OpenShift cluster.
● Integrates and leverages the existing technology stack: RHACM/Hive/Metal3/Assisted Installer
● Minimal prerequisites: enables an untrained-technician installation flow (barcode scan to trigger install)
● Highly customizable deployment: fits connected/disconnected, IPv4/IPv6, DHCP/static, and UPI/IPI deployment topologies
● Edge focused: no additional bootstrap node or external services needed for deployment
● GitOps enabled: managed with a kube-native declarative API

Slide 15

Slide 15 text

Zero Touch Provisioning - Ingredients
▸ Infrastructure provisioning (infrastructure as code): central provisioning of OpenShift clusters, using Kubernetes CRs/GitOps practices to manage infrastructure
▸ Cluster configuration (configuration as code): standardize cluster config at scale, utilizing GitOps and RHACM policies or ArgoCD integration to provide configuration as code
▸ Application rollout (application placement as code): put applications anywhere, with RHACM App-Subs functions for automated application lifecycle
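The RHACM App-Subs model named above pairs a Git channel with a subscription. A minimal sketch, assuming a hypothetical Git repo URL, namespace, and placement rule name:

```yaml
# Illustrative RHACM application subscription (application as code).
# The Git URL, namespace, branch, and placement rule name are placeholders.
apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: edge-config-repo
  namespace: edge-apps
spec:
  type: Git
  pathname: https://github.com/example/edge-config.git
---
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: edge-config-sub
  namespace: edge-apps
  annotations:
    apps.open-cluster-management.io/git-branch: main
spec:
  channel: edge-apps/edge-config-repo   # <namespace>/<channel name>
  placement:
    placementRef:
      kind: PlacementRule
      name: edge-placement              # selects the target edge clusters
```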

Slide 16

Slide 16 text

Use cases
A horizontal platform approach to edge computing

Slide 17

Slide 17 text

Transforming industries with edge computing
Telecommunications, health–life science, manufacturing, automotive, retail, public sector, financial, energy, hospitality

Slide 18

Slide 18 text

Our focus edge use cases (spanning the telco vertical to the horizontal open hybrid cloud)
Provider edge: network and compute specifically to support remote/mobile use cases
● Aggregation, access, and far edge
● Manages a network for others
  ○ Telecommunications service providers
  ○ Creates reliable, low latency networks
Enterprise edge
● Operations edge: extend cloud/data center approaches to new contexts, distributed locations, and OT
  ○ Enterprise, provider & operator edge
  ○ Standardize distributed operations
  ○ Modernize application environments (OT and IT)
  ○ Modernize network infrastructure
● Industrial edge: leverage edge/AI/serverless to transform OT environments
  ○ Automation/integration of monitoring & control processes
  ○ Predictive analytics
  ○ Production optimization
  ○ Supply chain optimization
Vehicle edge (customer-facing product edge): create new offerings or customer/partner engagement models
● Vehicle edge (onboard & offboard)
  ○ In-vehicle OS
  ○ Autonomous driving, infotainment up to ASIL-B
  ○ Quality management

Slide 19

Slide 19 text

Telco RAN use case: Red Hat OpenShift runs the most demanding workloads
▸ Provides common, automated management across large-scale deployments
▸ Lowers latency with a more distributed network architecture
▸ Uses the remote worker node topology
▸ Deploys radio access network (RAN) functions where needed: 4G RU and 5G DU at the access tier, 4G BBU and 5G CU at the aggregation tier, and the 4G/5G core in the telco core
Legend: CU: centralized unit; DU: distributed unit; access: also known as far edge; aggregation: also known as near edge

Slide 20

Slide 20 text

Transforming industrial manufacturing: business initiatives creating new opportunities
▸ Capitalize on Industry 4.0 technologies to achieve successful optimization, planning, and control of production
▸ Transition the IT-OT environment to next generation infrastructure: capitalize on edge computing, AI/ML, hybrid cloud, and software-defined technology
▸ Optimize production at the factory floor: use AI/ML intelligent applications for predictive maintenance and higher quality products
▸ Support future operating environments: accelerate the design, development, and deployment of new apps and services

Slide 21

Slide 21 text

Industrial use case: Red Hat OpenShift helping create the smart factory at the industrial edge
▸ Simplify the deployment and lifecycle management of AI-powered applications
▸ Accelerate data gathering, preparation, and inferencing tasks
▸ Consistent development platform and tools
▸ Turn insights into positive business outcomes faster
Architecture: sensor data and information flow over MQTT from sensor simulators at Factory #1 (remote worker node) and Factory #2 (3 node cluster) to the core HQ data center (with Ceph storage and AI/ML); code and configuration flow back out through Git, Quay, Eclipse Che, GitOps, VSCode, and Quarkus.

Slide 22

Slide 22 text

Solution blueprint: edge and AI/ML in industrial manufacturing
Accelerating time to value using OpenShift at the edge, bringing OpenShift, the Red Hat portfolio, and the ecosystem together from the core to the factory floor.
▸ Coding, simulation & deployment to production: container based CI/CD from the data center to the edge
▸ Automated configuration management: consistent roll-outs using end to end GitOps for distributed environments
▸ Data processing from sensors to analytics: open source middleware and AI/ML stacks
▸ ML model training and deployment to production: Open Data Hub enabled CI/CD
https://github.com/redhat-edge-computing

Slide 23

Slide 23 text

What happens when…? Network disruption

Slide 24

Slide 24 text

Edge computing with Red Hat remote worker nodes: disruption handling methods - zones with tolerations
In Red Hat OpenShift, the control plane resides in a central location, with reliably-connected workers distributed at edge sites (e.g., zone1, zone2, zone3) sharing that control plane.
Workloads and projects can be isolated (optionally per worker node) using Kubernetes zones. Zones modify pod eviction behavior and can slow down or stop pod evictions in case of disruption.
Slowing pod evictions: normally, for nodes tainted unreachable, the controller evicts pods at a rate of 1 node every 10 seconds; with zones, the controller evicts at a rate of 1 node every 100 seconds. Clusters with fewer than 50 nodes are not tainted, and your cluster must have more than 3 zones for this to take effect.
A node is assigned to a zone with a topology label:
kind: Node
apiVersion: v1
metadata:
  labels:
    topology.kubernetes.io/region: zone3
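A fuller sketch of the node labeling described above; the node name and region/zone values are hypothetical examples, not values from the deck.

```yaml
# Illustrative remote worker node labeled into a topology zone.
# Zone membership feeds the node lifecycle controller's rate-limited
# eviction behavior during disruptions. Names are placeholders.
apiVersion: v1
kind: Node
metadata:
  name: edge-worker-1
  labels:
    node-role.kubernetes.io/worker: ""
    topology.kubernetes.io/region: region-east
    topology.kubernetes.io/zone: zone3
```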

Slide 25

Slide 25 text

Edge computing with Red Hat remote worker nodes: disruption handling methods - zones with tolerations
Node state timeline (kubeletConfig):
▸ Ready: node-status-update-frequency (5 sec, configurable)
▸ Unhealthy: node-monitor-grace-period (40 sec, not configurable); workloads continue running locally
▸ Unreachable: pod-eviction-timeout (5 min, not configurable); pods are marked for eviction and need to be rescheduled
Tolerations can mitigate pod eviction indefinitely (by tolerating the taint with no tolerationSeconds set), or extend the eviction deadline by a specified tolerationSeconds value for the given taints:
tolerations:
- key: "node.kubernetes.io/unreachable"
  operator: "Exists"
  effect: "NoExecute"
- key: "node.kubernetes.io/not-ready"
  operator: "Exists"
  effect: "NoExecute"
When the connection is back before pod-eviction-timeout or tolerationSeconds expires, the node comes back under control plane management.
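A self-contained sketch of a workload carrying these tolerations; the Deployment name and image are placeholders. Omitting tolerationSeconds keeps the pod bound to the unreachable node indefinitely; adding a value extends the eviction deadline to that many seconds instead.

```yaml
# Illustrative Deployment tolerating node unreachability so its pod
# survives a network disruption at the edge. Name/image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-workload
  template:
    metadata:
      labels:
        app: edge-workload
    spec:
      tolerations:
      - key: node.kubernetes.io/unreachable
        operator: Exists
        effect: NoExecute     # no tolerationSeconds: tolerate indefinitely
      - key: node.kubernetes.io/not-ready
        operator: Exists
        effect: NoExecute
      containers:
      - name: app
        image: registry.example.com/edge-app:latest
```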

Slide 26

Slide 26 text

Edge computing with Red Hat remote worker nodes: disruption handling methods - daemon sets / static pods
Static pods
▸ Static pods are managed by the kubelet daemon on a specific node; unlike pods managed by the control plane, the node's kubelet watches each static pod
▸ Restarts the workload on node restart, without any trigger from the API server, so workloads continue running locally
▸ Drawback: secrets and config maps cannot be used
Daemon sets
▸ Daemon sets ensure that all (or some) nodes run a copy of a pod; if the node disconnects from the cluster, the daemon set pod's state in the API does not change and continues in the last state that was reported
▸ Does NOT restart the workload upon node restart during the disruption; the workload restarts when the disruption is rectified and the node rejoins the cluster
▸ If a workload is targeted at all remote worker nodes, using daemon sets is the best practice; daemon sets also support service endpoints and load balancers
Other methods to reschedule pods after pod-eviction-timeout: ReplicaSets, Deployments, replication controllers
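The daemon set best practice above can be sketched as follows; the name, image, and use of the worker role label as the node selector are hypothetical examples.

```yaml
# Illustrative DaemonSet running one pod per remote worker node.
# While a node is disconnected, the pod's API state stays at the last
# reported status. Name, image, and selector are placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: edge-agent
spec:
  selector:
    matchLabels:
      app: edge-agent
  template:
    metadata:
      labels:
        app: edge-agent
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: ""   # target worker nodes
      tolerations:
      - key: node.kubernetes.io/unreachable
        operator: Exists
        effect: NoExecute                    # survive disconnection
      containers:
      - name: agent
        image: registry.example.com/edge-agent:latest
```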

Slide 27

Slide 27 text

Thank you
Red Hat is the world’s leading provider of enterprise open source software solutions. Award-winning support, training, and consulting services make Red Hat a trusted adviser to the Fortune 500.
linkedin.com/company/red-hat
youtube.com/user/RedHatVideos
facebook.com/redhatinc
twitter.com/RedHat