
Edge computing with Red Hat OpenShift

You've probably heard about the growth of edge computing, but what is the edge? And what does it mean, especially for OpenShift admins? By moving workloads to the edge of the network, devices depend less on the cloud, react faster to local changes, and operate more reliably. But along with the opportunities edge computing brings, there are complexities to consider as you build out an infrastructure to support these use cases.

So in this episode, let's talk about what it's like to work with OpenShift at the edge. That means things like running nodes at remote sites, single-node OpenShift instances, and compact clusters. We'll also take a look at management tools and philosophies. Red Hat Principal Technical Marketing Manager Mark Schmitt will join the stream to help explore how the edge is a very different place from the data center and gives us a unique perspective on OpenShift at the edge.

Red Hat Livestreaming

September 08, 2021

Transcript

  1. 1
    Edge computing with
    Red Hat OpenShift


  2. Edge computing with Red Hat OpenShift
    2
    Agenda
    ▸ Business goals & challenges
    ▸ Red Hat OpenShift and edge computing architectures
    ▸ Red Hat OpenShift edge computing use cases
    ▸ Q&A


  3. 3
     Meeting business goals with edge computing
     ▸ Offer deeper engagements
     ▸ Distribute processing
     ▸ Use new, modern applications
     ▸ Deliver services at scale
     ▸ Drive innovation
     ▸ Faster insights, for faster action


  4. 4
     Understanding edge computing: key challenges
     Topics and considerations
     ▸ Connectivity: disconnected, sporadic, low bandwidth/high latency, private 5G
     ▸ Scale: locations, clusters, applications, devices; 100s to 100K+
     ▸ Maintenance: monitor and control
     ▸ Security: ransomware/extortion, IP/PII theft, legacy infrastructures and devices


  5. 5
     Edge computing architectures
     Topologies to meet the needs of different edge tiers


  6. STRICTLY INTERNAL ONLY
     6
     Edge Tiers
     ▸ Provider/Enterprise Core: core data center, regional data center
     ▸ Provider Edge (the "last mile"): provider aggregation edge, provider access edge, provider far edge (Red Hat's focus)
     ▸ End-User Premises Edge: edge server, edge gateway (with partners)
     ▸ Endpoint: device or sensor
     * Edge computing == Fog computing (there is no real difference other t


  7. 7
     Declining hardware computing capacity
     ▸ Tier 1, data analytics (Red Hat in the datacenter): core and regional data centers; 16 cores/128 GB; scale 100+
     ▸ Tier 2, data aggregation: gateways and edge servers; 8 cores/32 GB; scale 1,000+
     ▸ Tier 3, data collection: endpoint devices or sensors; 2 cores/2 GB; scale 10,000+


  8. 8
     A consistent edge platform to meet your needs
     Runs on edge gateways/edge servers, small bare metal footprints, infrastructure virtualization, and public/private clouds
     ▸ Develop once, deploy anywhere
     ▸ Meet diverse use cases
     ▸ Consistent operations at scale


  9. 9
     ▸ Central data center: cluster management and application deployment
     ▸ Regional data center: Kubernetes node control
     ▸ Edge sites (Site 1, Site 2, Site 3):
       ○ Remote worker nodes, for environments that are space constrained (available now)
       ○ Single-node edge servers, for low-bandwidth or disconnected sites (available in 2021)
       ○ 3-node clusters, a small footprint with high availability (available now)
     Legend:
     C: Control nodes
     W: Worker nodes


  10. 10
     3-node cluster
     ● Composed of just 3 control plane nodes, without the need for any additional worker nodes
       ○ Application workloads are schedulable on the control plane nodes
       ○ The control plane remains highly available, supporting upgrades
     ● Requires:
       ○ Setting worker replicas to 0 in install-config, which configures the supervisor nodes as workers as well (any other value sets them up as supervisors only)
       ○ A temporary bootstrap node for initial cluster bring-up
       ○ External DNS and load balancer services
       ○ HAProxy for *.apps reconfigured to target the control nodes (ensure health checks are enabled)
     ● Minimum system resource requirements for each control plane node are the sum of the control and worker requirements: 6 vCPU, 24 GB RAM, 200 GB storage
     Expand the cluster at will and on demand
     ● Deploy additional worker nodes when demand requires
     ● Added nodes can be storage nodes if needed (set with a label)
     ● The worker role can be removed from the original supervisors if desired
     ● These nodes can be "remote" workers if needed
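The worker-replicas setting above can be sketched in install-config.yaml. This is a minimal sketch, not the deck's exact file: the domain, cluster name, platform, and secrets are placeholder assumptions.

```yaml
# Sketch: install-config.yaml for a 3-node compact cluster
# (baseDomain, cluster name, platform, and secrets are placeholders)
apiVersion: v1
baseDomain: example.com
metadata:
  name: compact-edge
controlPlane:
  name: master
  replicas: 3        # the three control plane (supervisor) nodes
compute:
- name: worker
  replicas: 0        # 0 makes the control plane nodes schedulable as workers
platform:
  baremetal: {}      # placeholder; platform-specific settings omitted
pullSecret: '...'
sshKey: '...'
```

With `replicas: 0` under `compute`, the installer marks the control plane nodes schedulable, so application workloads land on them as the slide describes.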


  11. 11
     Single Node OpenShift
     Dev preview in OCP 4.7
     Current minimum requirements: 8 vCPU / 32 GB
     Try it out - https://cloud.redhat.com/openshift/assisted-installer/clusters/~new
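For comparison with the 3-node case, a single-node install can be expressed the same way. This is a sketch under assumptions: it uses the later bootstrap-in-place flow rather than the 4.7 dev preview path, and the disk, domain, and secrets are placeholders.

```yaml
# Sketch: install-config.yaml for Single Node OpenShift
# (assumes the bootstrap-in-place flow; all values are placeholders)
apiVersion: v1
baseDomain: example.com
metadata:
  name: sno-edge
controlPlane:
  name: master
  replicas: 1        # one node runs the control plane and the workloads
compute:
- name: worker
  replicas: 0
bootstrapInPlace:
  installationDisk: /dev/sda   # assumption: target disk for in-place bootstrap
pullSecret: '...'
sshKey: '...'
```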


  12. Edge computing with Red Hat OpenShift
    12
    Managing the edge, just like the core
    Red Hat Advanced Cluster Management for Kubernetes
    Multicluster lifecycle
    management
    Policy driven governance,
    risk, and compliance
    Advanced application
    lifecycle management


  13. 13
     Manage edge clusters
     ▸ Central data center: cluster management and application lifecycle
     ▸ Regional DCs, high-BW sites (Cluster A, Cluster B), and the far edge: Kubernetes node control
     Red Hat Advanced Cluster Management
     ● Cluster purpose (label)
     ● General-purpose policies (e.g., security)
     ● Placement rules for apps (granularity)
     ● Central update of apps (labels)
     Red Hat Advanced Cluster Security
     ● Vulnerability analysis
     ● Image assurance
     ● Compliance assessments / risk profiling
     ● Runtime behavioral analysis
     ● Threat detection / incident response
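The "general-purpose policies" item above can be sketched as an RHACM Policy CR wrapping a ConfigurationPolicy. This is a minimal sketch: the names, namespace, and the enforced Namespace object are illustrative assumptions, not from the deck.

```yaml
# Sketch: an RHACM Policy that ensures a namespace exists on managed clusters
# (policy name, namespace, and target object are assumptions)
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: ensure-edge-namespace
  namespace: rhacm-policies
spec:
  remediationAction: enforce      # or "inform" for audit-only
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: ensure-edge-namespace-cfg
        spec:
          remediationAction: enforce
          severity: low
          object-templates:
            - complianceType: musthave   # the object must exist
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  name: edge-apps
```

A Policy like this is bound to clusters via a PlacementBinding, which is how the "cluster purpose" labels on managed clusters come into play.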


  14. 14
     Zero Touch Provisioning - RHACM 2.3 (TP)
     Aimed at regional, distributed on-prem deployments: enabling a customer's automated path from uninstalled infrastructure to an application running on an OpenShift cluster.
     ● Integrates and leverages the existing technology stack: RHACM/Hive/Metal3/Assisted Installer
     ● Minimal prerequisites: enables an untrained-technician installation flow (barcode scan to trigger the install)
     ● Highly customizable deployment: fits connected/disconnected, IPv4/IPv6, DHCP/static, UPI/IPI deployment topologies
     ● Edge focused: no additional bootstrap node or external services needed for deployment
     ● GitOps enabled: managed with a Kube-native declarative API


  15. 15
     Zero Touch Provisioning - Ingredients
     ▸ Infrastructure Provisioning (Infrastructure as Code): central provisioning of OpenShift clusters, using Kubernetes CRs and GitOps practices to manage infrastructure
     ▸ Cluster Configuration (Configuration as Code): standardize cluster config at scale, utilizing GitOps and RHACM policies or ArgoCD integration to provide configuration as code
     ▸ Application Rollout (Application Placement as Code): put applications anywhere, with RHACM App-Subs functions for automated application lifecycle
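The "application placement as code" ingredient can be sketched as an RHACM PlacementRule that selects clusters by label; an application Subscription then references it. The rule name, namespace, and the `purpose: factory` label are illustrative assumptions.

```yaml
# Sketch: an RHACM PlacementRule selecting clusters by a "purpose" label
# (name, namespace, and label key/value are assumptions)
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: factory-clusters
  namespace: edge-apps
spec:
  clusterSelector:
    matchLabels:
      purpose: factory   # the "cluster purpose" label set on managed clusters
```

An App-Subs Subscription in the same namespace points at this PlacementRule, so the application rolls out automatically to every cluster carrying the label.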


  16. 16
     Use cases
     A horizontal platform approach to edge computing


  17. 17
     Transforming industries with edge computing
     Telecommunications, health and life science, manufacturing, automotive, retail, public sector, financial, energy, hospitality


  18. 18
     Our focus edge use cases
     Spanning the telco vertical and horizontal workloads on the open hybrid cloud:
     ▸ Enterprise Edge: extend cloud/data center approaches to new contexts, distributed locations, and OT
       ○ Enterprise, provider & operator edge
       ○ Standardize distributed operations
       ○ Modernize application environments (OT and IT)
       ○ Modernize network infrastructure
     ▸ Industrial Edge (Operations Edge): leverage edge/AI/serverless to transform OT environments
       ○ Automation/integration of monitoring & control processes
       ○ Predictive analytics
       ○ Production optimization
       ○ Supply chain optimization
     ▸ Provider Edge: network and compute specifically to support remote/mobile use cases
       ○ Aggregation, access, and far edge
       ○ Manages a network for others (telecommunications service providers), creating reliable, low-latency networks
     ▸ Vehicle Edge (Customer-facing Product Edge): create new offerings or customer/partner engagement models
       ○ Vehicle edge (onboard & offboard): in-vehicle OS
       ○ Autonomous driving, infotainment up to ASIL-B
       ○ Quality management


  19. 19
     Telco RAN use case
     Red Hat OpenShift runs the most demanding workloads
     ▸ Provides common, automated management across large-scale deployments
     ▸ Lowers latency with a more distributed network architecture
     ▸ Uses the remote worker node topology
     ▸ Deploys radio access network (RAN) functions where needed
     Placement: access edge runs the 4G RU and 5G DU; aggregation edge runs the 4G BBU and 5G CU; the telco core runs the 4G/5G core
     Legend:
     CU: Centralized Unit; DU: Distributed Unit; Access: also known as far edge; Aggregation: also known as near edge


  20. 20
     Transforming industrial manufacturing
     Business initiatives creating new opportunities
     ▸ Capitalize on Industry 4.0 technologies: to achieve successful optimization, planning, and control of production
     ▸ Transition the IT-OT environment to next-generation infrastructure: capitalize on edge computing, AI/ML, hybrid cloud, and software-defined technology
     ▸ Optimize production on the factory floor: use AI/ML intelligent applications for predictive maintenance and higher-quality products
     ▸ Support future operating environments: accelerate designing, developing, and deploying new apps and services


  21. 21
     Industrial use case
     Red Hat OpenShift helping create the smart factory at the industrial edge
     ▸ Simplify the deployment and lifecycle management of AI-powered applications
     ▸ Accelerate data gathering, preparation, and inferencing tasks
     ▸ Consistent development platform and tools
     ▸ Turn insights into positive business outcomes faster
     Topology: a core HQ data center (Git, Quay, Eclipse Che, GitOps, VSCode, Quarkus, Ceph, AI/ML) pushes code and configuration out to the factories; Factory #1 runs a 3-node cluster and Factory #2 a remote worker node, with sensor simulators feeding sensor data and information back over MQTT.


  22. 22
     Solution Blueprint: Edge and AI/ML in industrial manufacturing
     Accelerating time to value using OpenShift at the edge: bringing OpenShift, the Red Hat portfolio, and the ecosystem together from the core to the factory floor
     ▸ Coding, simulation & deployment to production: container-based CI/CD from the data center to the edge
     ▸ Automated configuration management: consistent roll-outs using end-to-end GitOps for distributed environments
     ▸ Data processing from sensors to analytics: open source middleware and AI/ML stacks
     ▸ ML model training and deployment to production: Open Data Hub enabled CI/CD
     https://github.com/redhat-edge-computing


  23. 23
    What happens when…???
    Network Disruption


  24. 24
     Remote Worker Nodes
     Disruption handling methods - zones with tolerations
     The Red Hat OpenShift control plane resides in a central location, with reliably connected workers distributed at edge sites sharing that control plane (for example, workers grouped into zone1, zone2, and zone3).
     Workloads and projects can be isolated (optionally in worker nodes) using Kubernetes zones. Zones modify pod eviction behavior and can slow down or stop pod evictions in case of disruption.
     Slowing pod evictions...
     Normally, for unreachable tainted nodes the controller evicts pods at a rate of 1 node every 10 seconds; using zones, the controller evicts at a rate of 1 node every 100 seconds. Clusters with fewer than 50 nodes are not tainted, and your cluster must have more than 3 zones for this to take effect.
     A node is assigned to a zone with a label:
     kind: Node
     apiVersion: v1
     metadata:
       labels:
         topology.kubernetes.io/region: zone3
    W


  25. 25
     Remote Worker Nodes
     Disruption handling methods - zones with tolerations
     Node state timeline after a disruption:
     ▸ Ready: node-status-update-frequency (5 sec, configurable via kubeletConfig)
     ▸ Unhealthy: node-monitor-grace-period (40 sec, not configurable)
     ▸ Unreachable: pod-eviction-timeout (5 min, not configurable); pods are marked for eviction and need to be rescheduled
     Workloads continue running locally during the disruption. If the connection comes back before pod-eviction-timeout or tolerationSeconds expires, the node comes back under control plane management.
     Tolerations can mitigate pod eviction indefinitely if tolerationSeconds is omitted, or extend the pod eviction timeout with a specified value for the given taints (tolerationSeconds is applied after pod-eviction-timeout expires):
     tolerations:
     - key: "node.kubernetes.io/unreachable"
       operator: "Exists"
       effect: "NoExecute"
     - key: "node.kubernetes.io/not-ready"
       operator: "Exists"
       effect: "NoExecute"
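The configurable status-update interval above can be tuned cluster-side with a KubeletConfig CR. A minimal sketch, assuming the worker MachineConfigPool's default label and the 5-second value the deck cites:

```yaml
# Sketch: tuning the kubelet's node-status report interval
# (the pool selector label is an assumption; match your MachineConfigPool)
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: remote-worker-kubelet
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  kubeletConfig:
    nodeStatusUpdateFrequency: 5s   # how often the kubelet reports node status
```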


  26. 26
     Remote Worker Nodes
     Disruption handling methods - daemon sets / static pods
     Workloads continue running locally in either case.
     Static pods
     Static pods are managed by the kubelet daemon on a specific node. Unlike pods that are managed by the control plane, the node-specific kubelet watches each static pod.
     - Does restart the workload on node restart, without any trigger from the API server
     Drawbacks:
     - Secrets and config maps cannot be used
     Daemon sets
     Daemon sets ensure that all (or some) nodes run a copy of a pod. If the node disconnects from the cluster, the DaemonSet pod within the API will not change state and will continue in the last state that was reported.
     - Does NOT restart the workload upon node restart during a disruption
     - The workload restarts when the disruption is rectified and the node re-joins the cluster
     Node restart
     If a workload is targeted for all remote worker nodes, using daemon sets is the best practice. Daemon sets also support service endpoints and load balancers.
     Other methods to reschedule pods after pod-eviction-timeout:
     - ReplicaSets
     - Deployments
     - Replication controllers
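The daemon-set approach above can be sketched as a DaemonSet that tolerates the disruption taints indefinitely, so its pods are never evicted from an unreachable remote worker. The name, namespace, selector labels, and image are placeholder assumptions.

```yaml
# Sketch: a DaemonSet for a workload on every remote worker node
# (name, namespace, labels, and image are placeholders)
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: edge-agent
  namespace: edge-apps
spec:
  selector:
    matchLabels:
      app: edge-agent
  template:
    metadata:
      labels:
        app: edge-agent
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: ""
      tolerations:                  # no tolerationSeconds: tolerate indefinitely
      - key: "node.kubernetes.io/unreachable"
        operator: "Exists"
        effect: "NoExecute"
      - key: "node.kubernetes.io/not-ready"
        operator: "Exists"
        effect: "NoExecute"
      containers:
      - name: agent
        image: registry.example.com/edge/agent:latest   # placeholder image
```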


  27. linkedin.com/company/red-hat
    youtube.com/user/RedHatVideos
    facebook.com/redhatinc
    twitter.com/RedHat
    Red Hat is the world’s leading provider of enterprise
    open source software solutions. Award-winning
    support, training, and consulting services make
    Red Hat a trusted adviser to the Fortune 500.
    Thank you
    27
