
What’s New in Red Hat OpenShift 4.9


Recorded on October 7, 2021. A Technical Product Manager overview of Red Hat OpenShift 4.9.

Watch the recording on YouTube. https://www.youtube.com/watch?v=Q1j_mt_XLqE&t=101s&ab_channel=OpenShift

Red Hat Livestreaming

October 07, 2021


Transcript

  1. What’s New in OpenShift 4.9
    OpenShift Product Management
    1


  2. What's New in OpenShift 4.9
    Table of Contents
    ● 4.9 Overview
    ● Spotlight Features
    ○ API removals & upgrade behavior
    ○ MetalLB L2
    ○ Single Node OpenShift
    ○ Automatic RHEL entitlements
    ○ Simplified registry credentials
    ● Console
    ● Installer
    ○ Azure Stack Hub
    ● Control Plane
    ● Networking & Routing
    ● Specialized Workloads
    ● Operator Framework
    ● Quay
    ● Storage
    ● Management & Security
    ○ Advanced Cluster Security
    ○ Advanced Cluster Management
    ○ Cost Management
    ● Telco
    ● Observability
    2


  3. What's New in OpenShift 4.9
    Multicluster management: Observability | Discovery | Policy | Compliance | Configuration | Workloads
    Global registry: Image management | Security scanning | Geo-replication | Mirroring | Image builds
    Cluster security: Declarative security | Container vulnerability management | Network segmentation | Threat detection and response
    * Red Hat OpenShift® includes supported runtimes for popular languages/frameworks/databases. Additional capabilities listed are from the Red Hat Application Services and Red Hat Data Services portfolios.
    • Developer CLI | IDE
    • Plugins and extensions
    • CodeReady workspaces
    • CodeReady containers
    Developer services
    Developer productivity
    • Databases | Cache
    • Data ingest and prep
    • Data analytics | AI/ML
    • Data management & resilience
    Data services
    Data-driven insights*
    • Languages and runtimes
    • API management
    • Integration
    • Messaging
    • Process automation
    Application services
    Build cloud-native apps*
    • Service mesh | Serverless
    • Builds | CI/CD pipelines
    • GitOps
    • Log management
    • Cost management
    Platform services
    Manage workloads
    Kubernetes cluster services
    Install | Over-the-air updates | Networking | Ingress | Storage | Monitoring | Logging | Registry | Authorization | Containers | VMs | Operators | Helm
    Physical*
    Linux (container host operating system)
    Kubernetes (orchestration)
    Virtual Private cloud Public cloud Edge
    Red Hat OpenShift Platform Plus


  4. What's New in OpenShift 4.9
    INSTALLER
    FLEXIBILITY
    NEXT-GEN
    DEVELOPER TOOLS
    IMPROVED
    SECURITY
    Single Node UPI is GA
    RHEL8 Worker & Infra Nodes
    Azure Stack Hub using UPI
    Bring your own Windows nodes
    Kubernetes 1.22 & CRI-O 1.22
    Shorter etcd TLS expiry + rotation
    User customizable audit policy
    mTLS: Ingress & Serverless↔Mesh
    FIPS: ACM, Virtualization, &
    Sandboxed Containers
    Automatic RHEL entitlements
    Certified Helm charts in Console
    UI for GitOps pipelines as code
    Custom domains for Serverless
    OpenShift 4.9
    4


  5. What's New in OpenShift 4.9
    ● Secure by default
    ○ New built-in admission controller replaces
    PodSecurityPolicy
    ○ PSP slated for removal in 1.25
    ○ CIS guidelines still call for using PSPs
    ○ OpenShift’s SCCs are unaffected
    Major Themes and Features
    ● API deprecation
    ○ Affects many popular APIs (beta→stable)
    ○ Marked as deprecated for many releases, finally
    removed
    ● CSI for Windows nodes is GA
    CRI-O
    1.22
    Kubernetes
    1.22
    OpenShift
    4.9
    Blog: https://cloud.redhat.com/blog/whats-new-in-kubernetes-v1.22
    5
    Kubernetes 1.22


  6. What's New in OpenShift 4.9
    OpenShift Roadmap
    APP DEV
    PLATFORM DEV
    ● OpenShift Builds v2 & Buildpacks TP
    ● Shared Resource CSI Driver GA
    ● Tekton Hub on OpenShift
    ● Unprivileged builds
    ● Image build cache
    ● Manual approval in pipelines
    ● Global Operators Model & new Operator API
    ● Operator Maturity increase via SDK
    ● Serverless Functions Orchestration
    ● Stateful Functions
    HOST
    ● Cost mgmt integration to Subs Watch, ACM
    ● Detailed Quota Usage in cluster manager
    ● ROSA/OSD: AWS Dedicated instances
    ● Dev Preview of App Studio, a hosted developer
    experience
    ● OpenShift Serverless Functions IDE Experience
    ● OpenShift Dev CLI (odo onboarding & more)
    2022+
    ● OpenShift Serverless Functions GA
    ● Encryption of inflight data natively in Serverless
    ● Integration of Knative(Serverless) with KEDA
    ● OpenShift Serverless Kafka Broker
    ● Operator SDK for Java (Tech Preview)
    ● File-based operator catalog management
    ● MetalLB BGP support
    ● Azure Stack Hub (IPI)
    ● Alibaba, Nutanix & IBM Cloud (UPI/IPI)
    ● OpenShift on ARM (AWS and Bare Metal)
    ● SRO manages third party special devices
    ● Additional capabilities for Windows containers:
    health management, 3rd party network plugins such
    as Calico
    ● ZeroTouchProvisioning and Central infrastructure
    management GA (ACM)
    ● ExternalDNS Support
    ● NetFlow/sFlow/IPFIX Collector
    ● Introduce Gateway API
    ● ROSA/OSD: FedRAMP High on AWS GovCloud
    ● ROSA/OSD: Terraform provider
    ● ROSA/OSD/ARO: GPU Support
    ● ARO: Upgrades through cluster manager
    ● Cost management understands IBM Cloud IaaS
    H1 2022
    HOSTED PLATFORM
    Q4 2021
    APP
    APP DEV
    ● OpenShift Pipelines - Tekton Triggers GA
    ● Tekton Hub integration in Pipeline builder
    ● Automated retrieval of RHEL entitlement
    ● Mount Secret and ConfigMap in BuildConfigs
    ● Export Application (Dev Preview)
    ● ROSA: cluster manager UI for ROSA provisioning
    ● ROSA/OSD: Cluster Hibernation
    ● ARO: Azure Portal UI for ARO provisioning
    ● Cost: Improved models for distribution of costs
    HOST PLATFORM
    ● OpenShift Serverless cold start improvements
    ● Dynamic Plugins for the OCP Console
    ● MetalLB Support (L2)
    ● Azure Stack Hub (UPI)
    ● RHEL 8 Server Compute/Infra Nodes
    ● AWS support for China Regions
    ● Single Node OpenShift (UPI)
    ● Custom audit profiles by group
    ● OpenShift API compatibility level discovery tools
    ● API for Custom Route Name and Certificates
    ● Operator Metering end of life
    ● GA of BYOH Windows host for Windows Containers
    + additional supported platforms
    ● Disable case-sensitivity for case-insensitive IdPs
    ● Multi Service-Serving-Certificates for Headless
    StatefulSet
    ● Service Mesh federation between meshes/clusters
    ● ZeroTouchProvisioning and Central infrastructure
    management Tech Preview (ACM)
    ● Azure China
    ● Utilize cgroups v2
    ● Expand cloud providers for OpenShift on ARM
    ● Enable user namespaces
    ● Windows Containers: CSI proxy, improved
    monitoring/logging & more platforms supported
    ● Gateway API / Ingress Controller support
    ● Network Topology and Analysis Tooling
    ● SmartNIC Integrations
    ● eBPF Support
    ● Network Policy v2 & OVN no-overlay option
    ● BGP Advertised Services (FRR)
    ● Service Mesh on VMs
    ● SigStore style image signature verification
    ● Disconnected mirroring simplification


  7. OpenShift 4.9 Spotlight Features
    7


  8. What's New in OpenShift 4.9
    Confirming API usage during upgrade
    ● External software interacting with a cluster may
    use deprecated APIs.
    ● To prevent breakage, an admin will acknowledge
    external software has been updated prior to
    cluster upgrade
    ● This “ack” is a boolean on a ConfigMap
    ● We expect to use this functionality for similar
    changes of this magnitude in the future
    Affected APIs
    ● CRD (beta→stable)
    ● CertificateSigningRequest (beta→stable)
    ● Mutating/ValidatingAdmissionWebhook (b→s)
    ● Full list and more details
    Operators
    ● Change affects Operators that still use a beta CRD
    ● Partners and layered products have been audited and
    notified of updates they require
    ● Operators installed in 4.8 that do not have a
    compatible 4.9 release will block cluster upgrade
    API Removal and Upgrade Behavior
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: admin-acks
      namespace: openshift-cluster-version
    data:
      ack-4.8-kube-122-api-removals-in-4.9: "true"
    PM: Tony Wu
    8


  9. What's New in OpenShift 4.9
    ● Focused on production/edge use cases
    for bare metal
    ● Does not have a workload runtime
    dependency on a central control plane
    ● Bootstrap In Place - no additional bootstrap
    node needed
    ● Upgrade support
    ● Deployment via openshift-install (GA)
    ● Deployment via RHACM (ZTP/CIM)
    /Assisted installer (TP)
    ● OLM available to install Operators
    ● Minimum requirements: 8 cores, 32 GB memory
    ● Platform footprint: ~2 cores, 16 GB (vanilla
    OCP)
    Single Node OpenShift
    PM: Moran Goldboim
    Consistent application platform from the datacenter to the edge
    https://www.youtube.com/watch?v=QFf0yVAHQKc
    9
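    As an illustrative sketch of a Single Node OpenShift deployment via openshift-install (the base domain, cluster name, and installation disk below are assumptions, not values from the deck), the install config sets one control-plane replica, zero workers, and Bootstrap In Place:

    ```yaml
    # Hypothetical install-config.yaml for Single Node OpenShift (values are examples)
    apiVersion: v1
    baseDomain: example.com          # assumed base domain
    metadata:
      name: sno-cluster              # assumed cluster name
    controlPlane:
      name: master
      replicas: 1                    # single control-plane node
    compute:
    - name: worker
      replicas: 0                    # no dedicated workers; the control plane is schedulable
    bootstrapInPlace:
      installationDisk: /dev/sda     # Bootstrap In Place: no additional bootstrap node
    pullSecret: '{"auths": ...}'
    sshKey: ssh-ed25519 AAAA...
    ```

    The `bootstrapInPlace` stanza is what removes the need for a separate bootstrap node.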


  10. What's new in OpenShift 4.9
    MetalLB L2 Support
    10
    PM: Marc Curry, Deepthi Dharwar
    ● MetalLB has two modes to announce reachability
    information for load balancer IP addresses:
    ○ Layer 2 (4.9)
    ○ BGP (4.10)
    ● Two components:
    ○ Controller - One per cluster
    ○ Speaker - Per Node (DaemonSet)
    ● L2 mode: ARP (IPv4) or NDP (IPv6) announces location of
    a LB’d IP address from the Speaker, then relies on Service
    load balancing within the cluster
    ● BGP mode: Traffic can target multiple nodes – routers can
    perform load balancing across the cluster using ECMP
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
    spec:
      ports:
      - name: http
        port: 80
        protocol: TCP
        targetPort: 80
      selector:
        app: nginx
      type: LoadBalancer
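    To pair with a LoadBalancer Service like the one above, MetalLB needs an address pool to hand out. The following is a minimal sketch based on the MetalLB Operator's AddressPool resource; the API version, pool name, and address range are assumptions that may differ by operator release:

    ```yaml
    # Hypothetical Layer 2 address pool for the MetalLB Operator (names/addresses are examples)
    apiVersion: metallb.io/v1beta1   # API group/version may differ by operator release
    kind: AddressPool
    metadata:
      name: l2-pool                  # assumed pool name
      namespace: metallb-system
    spec:
      protocol: layer2               # L2 mode: announced via ARP (IPv4) / NDP (IPv6)
      addresses:
      - 192.0.2.100-192.0.2.150     # assumed range for LoadBalancer Service IPs
    ```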


  11. What's New in OpenShift 4.9
    ● OpenShift Pipelines 1.6 released
    ● Tekton Triggers GA
    ● Auto-pruning configurations per namespace
    ● Pipeline as code
    ○ Private Git repository support
    ○ Hosted BitBucket support
    ● Granular observability and metrics configurations
    ● CRD introduced for customizing Tekton configs
    ● (Dev Console) Search and install Tasks from
    TektonHub in the Pipeline builder
    ● (Dev Console) Repository list views for pipeline as
    code
    OpenShift Pipelines
    PM: Siamak Sadeghianfar
    11
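    The per-namespace auto-pruning and the new configuration CRD can be sketched with a TektonConfig resource; the field names follow the Pipelines operator API, and the schedule and retention values below are illustrative assumptions:

    ```yaml
    # Hypothetical TektonConfig pruner settings (schedule/keep values are examples)
    apiVersion: operator.tekton.dev/v1alpha1
    kind: TektonConfig
    metadata:
      name: config
    spec:
      pruner:
        resources:
        - pipelinerun
        - taskrun
        keep: 5                  # retain the last 5 runs per resource
        schedule: "0 8 * * *"    # prune daily at 08:00
    ```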


  12. What's New in OpenShift 4.9
    OpenShift GitOps
    PM: Siamak Sadeghianfar
    ● OpenShift GitOps 1.3
    ● User groups and kube-admin support when logging into Argo CD
    with OpenShift credentials
    ● ApplicationSet integration with RHACM for cluster lookup
    ● kustomize 4 support
    ● External cert manager support for TLS configs in Argo CD
    ● Router sharding for Argo CD
    ● (Dev Console) Application deployment environment details
    12


  13. What's New in OpenShift 4.9
    OpenShift Serverless
    13
    Key Features & Updates
    ❖ Update to Knative 0.24
    ❖ Security: Encryption of Inflight Data with Service Mesh
    ❖ Custom Domain Mapping through DevConsole
    ❖ Visualization: New Monitoring Dashboards
    ➢ CPU, Memory, Network Usage
    ➢ Scaling Debugging
    ➢ User workload monitoring through Knative Queue Proxy
    ❖ Support for emptyDir
    ➢ Share files between sidecar and the main container
    ❖ Functions Tech Preview:
    ➢ Node, Quarkus, Python, Go, SpringBoot, TypeScript (new),
    Rust (new)
    ➢ Access to data stored in secrets and config maps
    ➢ Available on macOS, RHEL, and Windows with Docker and/or
    Podman
    PM: Naina Singh
    [Diagram: OpenShift Serverless — Serving, Eventing, and Functions* running applications/functions and events on
    Red Hat Enterprise Linux CoreOS, across physical, virtual, private cloud, public cloud, and edge footprints]
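    The custom domain mapping feature listed above can also be expressed declaratively with Knative's DomainMapping resource. This sketch assumes a Knative Service named hello and an example.org domain, neither of which comes from the deck:

    ```yaml
    # Hypothetical DomainMapping for a Knative Service (domain and service name are examples)
    apiVersion: serving.knative.dev/v1alpha1   # DomainMapping was still alpha at this Knative release
    kind: DomainMapping
    metadata:
      name: hello.example.org        # the custom domain to serve
      namespace: default
    spec:
      ref:                           # target of the mapping
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: hello                  # assumed Knative Service name
    ```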


  14. What's New in OpenShift 4.9
    Automatic RHEL Entitlement Management for Builds
    OpenShift
    ● Tech preview in 4.9
    ● Insights Operator pulls RHEL entitlements for OpenShift clusters
    ● Simple Content Access (SCA) must be enabled for customer’s Red Hat
    account (by the customer)
    ● Entitlements stored as Secret named etc-pki-entitlement in the
    openshift-config-managed namespace
    ● Entitlements rotated and refreshed regularly
    ● Admin responsible to distribute entitlement secret to namespaces
    ● Mount entitlement secret into Pods and Tekton for entitled builds
    ● Mount entitlement and other credential secrets (or configmaps) in
    BuildConfigs for entitled builds
    14
    Insights Operator
    openshift-config-managed
    etc-pki-entitlement
    cloud.redhat.com
    (OCM)
    PM: Siamak Sadeghianfar
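    Mounting the entitlement Secret into a BuildConfig can be sketched with build volumes; the BuildConfig name and strategy here are illustrative, and the Secret must first be copied from openshift-config-managed into the build's namespace:

    ```yaml
    # Hypothetical entitled build: mount the etc-pki-entitlement Secret via a build volume
    apiVersion: build.openshift.io/v1
    kind: BuildConfig
    metadata:
      name: entitled-build                          # assumed name
    spec:
      strategy:
        dockerStrategy:
          volumes:
          - name: etc-pki-entitlement
            mounts:
            - destinationPath: /etc/pki/entitlement  # where RHEL repo tooling looks for entitlements
            source:
              type: Secret
              secret:
                secretName: etc-pki-entitlement      # copy of the managed entitlement Secret
    ```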


  15. What's New in OpenShift 4.9
    Simplified registry credentials management
    Product Manager: Gaurav Singh
    15
    Before 4.9 (multi-component/microservice deployments): multiple Secrets
    containing different registry credentials per Deployment / image, e.g.
    separate Image Pull Secrets for quay.io/foo/image:v1, quay.io/bar/image:v1,
    and quay.io/baz/image:v1
    New option with 4.9: simplified registry credential management using a single
    Secret containing different logins, even for the same registry, e.g. one
    OpenShift Global Pull Secret covering:
    - quay.io/openshift-release-dev
    - quay.io/foo/
    - quay.io/bar/
    - quay.io/baz/image
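    The single-Secret option can be sketched as one dockerconfigjson keyed by repository path, so different credentials apply to different paths on the same registry; the Secret name and auth values below are placeholders, not real credentials:

    ```yaml
    # Hypothetical combined pull secret: per-repository credentials in one Secret
    apiVersion: v1
    kind: Secret
    metadata:
      name: combined-pull-secret     # assumed name
    type: kubernetes.io/dockerconfigjson
    stringData:
      .dockerconfigjson: |
        {
          "auths": {
            "quay.io/foo":       { "auth": "<base64 user:token for foo>" },
            "quay.io/bar":       { "auth": "<base64 user:token for bar>" },
            "quay.io/baz/image": { "auth": "<base64 user:token for baz>" }
          }
        }
    ```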


  16. Console
    16


  17. What's New in OpenShift 4.9
    Common Console Updates
    PM: Ali Mobrem & Serena Chechile Nichols
    Focus on the projects you care about
    ● Hide system projects with the default
    projects toggle for privileged users
    ● Star favorite projects
    ● Quick project search
    User Preference
    ● Set your individual preferences for the console
    experience
    ● Changes will be autosaved
    17


  18. What's New in OpenShift 4.9
    Admin Console Updates
    PM: Ali Mobrem
    Troubleshoot Node-level
    problems directly from the
    Console
    ● View Node logs in the Console just
    like you can with your Pods
    ● Ability to filter journald by unit
    Clean Operator uninstall
    ● Cleanly remove all workloads,
    applications & resources managed by
    operator when uninstalling the operator
    itself
    Break down cluster
    utilization by node type
    ● Defaults to Master & Worker node
    types
    ● Additional node types will auto
    appear in the list once created
    18


  19. What's New in OpenShift 4.9
    Dev Console - New features & UX enhancements
    PM: Serena Chechile Nichols
    Form based edit for Build Configs
    Dev/DevOps Experience
    Improvements to App Observability
    Converged Import Flow Export Application
    19


  20. What's New in OpenShift 4.9
    New Serverless UIs
    PM: Serena Chechile Nichols
    Community Kamelets available
    Dev/DevOps Experience
    Domain mapping support
    20


  21. What's New in OpenShift 4.9
    New Pipelines UIs
    PM: Serena Chechile Nichols
    Task searching & Tekton Hub Integration!
    Dev/DevOps Experience
    Repository list views for pipeline as code
    21


  22. Installer Flexibility
    22


  23. What's new in OpenShift 4.9
    4.9 Supported Providers
    Generally Available
    Full Stack Automation (IPI) Pre-existing Infrastructure (UPI)
    Bare Metal
    Product Manager(s): Marcos Entenza (AWS, Azure, Azure Stack Hub, GCP), Maria Bracho (VMware), Peter Lauterbach (RHV & OCP Virtualization), Anita Tragler (OSP), Ramon Acedo Rodriguez (BM), & Duncan Hardie (IBM Z & Power)
    IBM Power Systems
    23
    Bare Metal
    NEW
    Azure Stack Hub


  24. What's new in OpenShift 4.9
    Deploy OpenShift on Azure Stack Hub
    Installing a cluster on user-provisioned infrastructure
    ● Azure’s solution to run applications in an on-premises
    environment and deliver Azure services in your data
    centre.
    ● Allows an OpenShift cluster to be deployed to a
    user-provisioned infrastructure on Azure Stack Hub.
    ● Option to use the provided Azure Resource Manager
    (ARM) templates to assist with the installation.
    Generally Available
    Product Manager: Marcos Entenza
    24
    apiVersion: v1
    baseDomain: example.com
    controlPlane:
      name: master
      replicas: 3
    compute:
    - name: worker
      platform: {}
      replicas: 0
    metadata:
      name: ash-cluster
    networking:
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      machineNetwork:
      - cidr: 10.0.0.0/16
      networkType: OpenShiftSDN
      serviceNetwork:
      - 172.30.0.0/16
    platform:
      azure:
        armEndpoint: azurestack_arm_endpoint
        baseDomainResourceGroupName: resource_group
        region: azure_stack_local_region
        resourceGroupName: existing_resource_group
        outboundType: Loadbalancer
        cloudName: AzureStackCloud
    pullSecret: '{"auths": ...}'
    fips: false
    sshKey: ssh-ed25519 AAAA...
    Azure Stack Hub


  25. What's new in OpenShift 4.9
    RHEL 8 support for workers and infra nodes
    Support of Red Hat Enterprise Linux 8
    ● RHEL 8 machines can be added to any UPI- or IPI-deployed
    cluster as a day-2 operation.
    ● OCP 4.9 starts with RHEL 8.4.
    ● Adding RHEL 7 machines to OCP is deprecated, and
    support for RHEL 7 workers will be removed in OCP 4.10.
    ● RHEL 7 compute machines cannot be upgraded to RHEL
    8; new RHEL 8 compute machines must be deployed.
    Product Manager: Marcos Entenza
    25


  26. What's new in OpenShift 4.9
    Larger Subnet Sizes for Azure
    Product Manager: Marcos Entenza
    Increased subnet size within the machine CIDR
    ● When installing OpenShift on Azure, subnets are created as large as possible within the machine CIDR
    ● Allows the cluster to use a machine CIDR appropriately sized to accommodate the number of nodes in the cluster.
    The install config is identical in both versions:

    networking:
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      machineNetwork:
      - cidr: 10.0.0.0/16
      networkType: OpenShiftSDN
      serviceNetwork:
      - 172.30.0.0/16

    OpenShift 4.8 or below: 8190 hosts per subnet
    OpenShift 4.9: 32766 hosts per subnet
    26


  27. What's new in OpenShift 4.9
    Support China Regions in AWS
    Deploy OpenShift to AWS China regions
    ● New China Regions Beijing (cn-north-1) and Ningxia
    (cn-northwest-1) are now available to deploy OpenShift
    Container Platform.
    ● An Internet Content Provider (ICP) license is required to
    use these Regions.
    ● RHCOS images must be manually uploaded to your AWS
    account.
    ● The AWS Region and the accompanying custom AMI
    must be manually included in the installation
    configuration file.
    Generally Available
    Product Manager: Marcos Entenza
    27
    apiVersion: v1
    baseDomain: example.com
    ...
    platform:
      aws:
        region: cn-north-1
        userTags:
          adminContact: mak
          costCenter: 7536
        subnets:
        - subnet-1
        - subnet-2
        - subnet-3
        amiID: ami-96c6f8f7
        serviceEndpoints:
        - name: ec2
          url: https://vpce-id.ec2.cn-north-1.vpce.amazonaws.com.cn
        hostedZone: Z3URY6TWQ91KVV
    fips: false
    sshKey: ssh-ed25519 AAAA...
    publish: Internal
    pullSecret: '{"auths": ...}'


  28. Zero Touch Provisioning
    PM: Moran Goldboim
    ● Integrates and leverages existing technology stack -
    RHACM/Hive/Metal3/Assisted Installer
    ● Minimal prerequisites- deployment over L3 single network,
    no additional bootstrap node
    ● Highly customizable deployment - fits
    Connected/Disconnected, IPv4/IPv6, DHCP/Static, UPI/IPI
    deployment topologies
    ● Edge focused - no additional bootstrap node or external
    services needed for deployment.
    ● GitOps enabled - managed with kube-native declarative API
    ● Any deployment topology - SingleNodeOpenshift, Remote
    worker nodes, Compact clusters (3 nodes), multi-node
    Aimed at regional, distributed on-prem deployments.
    Enables a customer's automated path from uninstalled
    infrastructure to applications running on an OpenShift
    cluster.
    Tech-Preview in Advanced Cluster Management 2.4
    (Infrastructure Operator)
    28


  29. Central Infrastructure Management
    PM: Moran Goldboim
    Tech-Preview in Advanced Cluster Management 2.4
    (Infrastructure Operator)
    Provides separate interfaces for:
    - Infra-Admin (IT): manages on-prem compute
    across different datacenters/locations
    - Cluster creator (Dev/Ops): consumes
    allocated compute resources for cluster
    creation
    ● Fully integrated with ACM
    ● Consistent UX with the Assisted Installer (SaaS)
    ● Integrated preflight checks, monitoring and
    eventing
    ● K8S native API
    ● Any type of OpenShift deployment (SNO, RWN,
    Compact..) for Bare metal and platform agnostic
    29


  30. What's new in OpenShift 4.8
    Bare Metal IPI on IBM Cloud® Bare Metal
    platform:
      baremetal:
        apiVIP:
        ingressVIP:
        provisioningNetworkInterface:
        provisioningNetworkCIDR:
        hosts:
        - name: openshift-master-0
          role: master
          bmc:
            address: ipmi://10.196.130.145?privilegelevel=OPERATOR
            username: root
            password:
          bootMACAddress: 00:e0:ed:6a:ca:b4
          rootDeviceHints:
            deviceName: "/dev/sda"
        - name:
          role: worker
          bmc:
            address: ipmi://10.196.130.146?privilegelevel=OPERATOR
            username:
            password:
          bootMACAddress:
          rootDeviceHints:
            deviceName: "/dev/sda"
    Developers who require their apps to run on
    physical nodes can now provision and
    dispose of OpenShift clusters on bare metal
    from IBM Cloud® using the IPI bare metal
    installation workflow to install OpenShift on
    IBM Cloud®‘s physical nodes
    Note that this isn’t using a cloud provider for
    IBM Cloud®; it uses the regular bare metal
    IPI workflow with servers from IBM Cloud®
    IBM Cloud server BMC
    PM: Ramon Acedo Rodriguez
    30
    IBM Cloud server BMC
    $ ibmcloud sl hardware create --hostname \
        --domain \
        --size \
        --os \
        --datacenter \
        --port-speed \
        --billing
    Create your IBM Cloud
    physical servers with
    your account
    Use the standard bare
    metal IPI workflow to
    provision your cluster on
    IBM Cloud


  31. What's new in OpenShift 4.8
    Expand your local bare metal cluster with
    remote worker nodes
    $ oc edit provisioning
    [...]
    spec:
      [...]
      virtualMediaViaExternalNetwork: true
    Add remote worker nodes any time over virtual
    media
    You can now install your cluster locally with
    DHCP/PXE as usual and later add remote worker
    nodes outside of your provisioning network
    The Bare Metal Operator will map the installation
    image as virtual media on the remote node’s BMC,
    with the advantage of not needing L2 adjacency (as
    PXE booting does)
    PM: Ramon Acedo Rodriguez
    31
    Simply set your provisioning custom resource (CR)
    to use the external network for virtual media


  32. Control Plane Updates
    32


  33. Scheduling Profiles Customization
    Product Manager: Gaurav Singh
    LowNodeUtilization: Spread pods evenly across nodes
    HighNodeUtilization: Pack as many pods as possible
    onto as few nodes as possible
    NoScoring: Quickest scheduling cycle, achieved by
    disabling all score plugins
    Customize the default out-of-the-box behaviour of the OpenShift scheduler with scheduling profiles
    *Note: in OCP 4.7 customers can use both the policy API and profiles,
    but going forward the policy API is deprecated in favor of profiles
    Extension points
    Scheduling profile
    Scheduling plugin
    Extension points
    Scheduling plugin
    Add more
    Scheduling
    plugins
    Prebuilt profile | Build your own profile
    Scheduling profile: the OpenShift scheduler can have only one profile
    Scheduling plugin: implements one or more extension points
    Extension point: a point in the scheduling cycle where plugins provide scheduling logic
    33
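    Selecting a profile is a one-field change on the cluster-scoped Scheduler resource; a minimal sketch:

    ```yaml
    # Set the scheduler profile on the cluster-scoped Scheduler resource
    apiVersion: config.openshift.io/v1
    kind: Scheduler
    metadata:
      name: cluster
    spec:
      profile: HighNodeUtilization   # or LowNodeUtilization (default) / NoScoring
    ```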


  34. What's New in OpenShift 4.9
    ● The default route names for OpenShift cluster components can now be customized for
    flexibility in customer environments. The default *.apps.<cluster-domain> route names can be customized for
    the OAuth server and the OCP console.
    ● The OAuth server route can be customized using the ingress config route configuration API. A
    custom hostname and a TLS certificate can be set using the spec.componentRoutes part of the
    configuration. Set the custom hostname and optionally configure the serving certificate and key.
    Custom Route Name and Certs for certain cluster components
    PM: Anand Chandramohan
    Component Custom Route supported?
    OAuth Yes (from 4.9)
    Console Yes (from 4.8)
    Downloads Yes (from 4.8)
    Monitoring (AlertManager,
    Prometheus, Grafana, Thanos)
    No
    Image Registry No
    apiVersion: config.openshift.io/v1
    kind: Ingress
    metadata:
      name: cluster
    spec:
      componentRoutes:
      - name: oauth-openshift
        namespace: openshift-authentication
        hostname:
        servingCertKeyPairSecret:
          name:


  35. What's New in OpenShift 4.9
    Background on API Audit log policy (introduced in OpenShift 4.6)
    Control the amount of information that is logged to the API audit logs by choosing the audit log policy profile to use.
    ● Default: Logs only metadata for read and write requests; does not log request bodies except for OAuth access
    token requests. This is the default policy.
    ● WriteRequestBodies: In addition to logging metadata for all requests, logs request bodies for every write request
    to the API servers (create, update, patch). This profile has more resource overhead than the Default profile.
    ● AllRequestBodies: In addition to logging metadata for all requests, logs request bodies for every read and write
    request to the API servers (get, list, create, update, patch). This profile has the most resource overhead.
    apiVersion: config.openshift.io/v1
    kind: APIServer
    metadata:
    ...
    spec:
      audit:
        profile: WriteRequestBodies
    The Default audit log policy in OCP 4.8 logged request bodies for OAuth access token creation (login) and deletion
    (logout) requests. Previously, deletion request bodies were not logged.
    Improved customization of Audit Config
    35
    PM: Anand Chandramohan


  36. What's New in OpenShift 4.9
    ● You can configure an audit log policy that defines custom rules (new in 4.9). You can specify
    multiple groups and define which profile to use for that group. These custom rules take
    precedence over the top-level profile field. The custom rules are evaluated from top to bottom,
    and the first that matches is applied
    apiVersion: config.openshift.io/v1
    kind: APIServer
    metadata:
    ...
    spec:
      audit:
        customRules:
        - group: system:authenticated:oauth
          profile: WriteRequestBodies
        - group: system:authenticated
          profile: AllRequestBodies
        profile: Default
    ● Add one or more groups and specify the profile to use for that group. These custom rules take precedence over the
    top-level profile field. The custom rules are evaluated from top to bottom, and the first that matches is applied.
    ● Set audit log profile to Default, WriteRequestBodies, AllRequestBodies, or None. If you do not set this top-level
    audit.profile field, it defaults to the Default profile.
    Configuring the Audit log policy with custom rules
    36
    PM: Anand Chandramohan


  37. What's New in OpenShift 4.9
    ● You can disable audit logging for OpenShift Container Platform. When you disable audit logging,
    even OAuth access token requests and OAuth authorize token requests are not logged.
    apiVersion: config.openshift.io/v1
    kind: APIServer
    metadata:
    ...
    spec:
      audit:
        profile: None
    ● You can also disable audit logging only for specific groups by specifying custom rules in the
    spec.audit.customRules field.
    Disable Audit Logging
    37
    PM: Anand Chandramohan


  38. What's New in OpenShift 4.9
    Etcd Updates - Ciphers Customization
    38
    TLS security profiles provide a way for servers to regulate which ciphers a client can use when connecting to
    the server. This ensures that OpenShift Container Platform components use cryptographic libraries that do
    not allow known insecure protocols, ciphers, or algorithms.
    Add the spec.tlsSecurityProfile field:

    apiVersion: config.openshift.io/v1
    kind: APIServer
    metadata:
      name: cluster
    spec:
      tlsSecurityProfile:
        type: Custom
        custom:
          ciphers:
          - ECDHE-ECDSA-CHACHA20-POLY1305
          - ECDHE-RSA-CHACHA20-POLY1305
          - ECDHE-RSA-AES128-GCM-SHA256
          - ECDHE-ECDSA-AES128-GCM-SHA256
          minTLSVersion: VersionTLS11

    Verify that the TLS security profile is set in the etcd CR:

    Name:         cluster
    Namespace:
    ...
    API Version:  operator.openshift.io/v1
    Kind:         Etcd
    ...
    Spec:
      Log Level:         Normal
      Management State:  Managed
      Observed Config:
        Serving Info:
          Cipher Suites:
            TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
            TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
            TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
            TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
            TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
            TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
          Min TLS Version:  VersionTLS12
    PM: Anand Chandramohan


  39. What's New in OpenShift 4.9
    Etcd Updates - Cert Rotation
    39
    etcd certificates are used for encrypted communication between etcd member peers, as well as encrypted
    client traffic. The following certificates are generated and used by etcd and other processes that
    communicate with etcd:
    Peer certificates: Used for communication between etcd members.
    Client certificates: Used for encrypted server-client communication. Client certificates are currently used by
    the API server only, and no other service should connect to etcd directly except for the proxy. Client secrets
    (etcd-client, etcd-metric-client, etcd-metric-signer, and etcd-signer) are added to the openshift-config,
    openshift-monitoring, and openshift-kube-apiserver namespaces.
    Server certificates: Used by the etcd server for authenticating client requests.
    Metric certificates: All metric consumers connect to proxy with metric-client certificates.
    etcd certificates are signed by the etcd-signer; they come from a certificate authority (CA) that is generated
    by the bootstrap process. The CA certificates are valid for 10 years. The peer, client, and server certificates
    are valid for three years. These certificates are only managed by the system and are automatically
    rotated.
    PM: Anand Chandramohan


  40. What's New in OpenShift 4.9
    Etcd updates - Auto defrag
    40
    This feature enables automated etcd defragmentation based on observations from the cluster. A defrag
    controller added to the cluster-etcd-operator checks the endpoints of the etcd cluster every 10 minutes
    for observable signs of fragmentation.
    The criteria for defragmentation is defined as:
    ● Cluster must be observed as HighlyAvailableTopologyMode.
    ● The etcd cluster must report all members as healthy.
    ● minDefragBytes: the minimum database size in bytes before defragmentation is considered (100 MB)
    ● maxFragmentedPercentage: the percentage of the store that is fragmented and would be reclaimed after
    defragmentation (45%)
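    The thresholds above can be sketched as a simple decision function. This is an illustrative approximation only, not the actual cluster-etcd-operator code:

    ```python
    # Illustrative approximation of the defrag criteria above -- NOT the
    # actual cluster-etcd-operator implementation.

    MIN_DEFRAG_BYTES = 100 * 1024 * 1024   # 100 MB minimum database size
    MAX_FRAGMENTED_PERCENTAGE = 45.0       # percent of store reclaimable

    def should_defrag(db_size_bytes: int, db_size_in_use_bytes: int,
                      all_members_healthy: bool, highly_available: bool) -> bool:
        """Return True when an observed etcd endpoint warrants defragmentation."""
        if not (all_members_healthy and highly_available):
            return False
        if db_size_bytes < MIN_DEFRAG_BYTES:
            return False
        fragmented_pct = (db_size_bytes - db_size_in_use_bytes) / db_size_bytes * 100
        return fragmented_pct >= MAX_FRAGMENTED_PERCENTAGE
    ```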
    Benefits
    ● Large-scale clusters that previously felt the pain of fragmentation (resource bloat, OOM events, downtime)
    can now benefit from the defrag controller that manages maintenance of the etcd backend
    ● Heavy-churn workloads do not negatively impact cluster performance
    ● Cluster operands utilize only the necessary resources to provide service
    PM: Anand Chandramohan


  41. Networking & Routing
    41


  42. What's new in OpenShift 4.9
    Enhanced EgressIP Load Distribution on OVN Clusters
    42
    PM: Marc Curry, Deepthi Dharwar
    ● Ability to assign one or more source
    egress IPs to a project or pod, for
    filtering at the edge of the cluster
    ● Label a set of nodes as “egress
    nodes” to route EgressIP-related
    traffic to external endpoints
    ● EgressIP traffic (non-regular) is
    SNAT-ed to the defined egress IPs on
    the egress nodes
    ● Multiple egress nodes can be
    specified to avoid “choke points”
    ● Traffic is LB’d between participant
    nodes (using OVS’s dp_hash)
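    A sketch of the pieces described above for OVN-Kubernetes (addresses, names, and labels are illustrative): label the egress nodes, then create an EgressIP object selecting the namespaces to filter:

    ```yaml
    # Mark one or more nodes as egress-assignable first, e.g.:
    #   oc label node <node-name> k8s.ovn.org/egress-assignable=""
    apiVersion: k8s.ovn.org/v1
    kind: EgressIP
    metadata:
      name: egressip-prod
    spec:
      egressIPs:          # traffic from matching pods is SNAT-ed to these IPs
      - 192.0.2.10
      - 192.0.2.11
      namespaceSelector:  # which projects this egress IP applies to
        matchLabels:
          env: prod
    ```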


  43. What's new in OpenShift 4.9
    General Networking Enhancements
    PM: Marc Curry, Deepthi Dharwar
    Support for Network Adapters in Fast Data
    Path list on OpenShift
    ● Support Matrix
    ● Going forward any fast datapath network
    adapter that RHEL supports will be
    supported on OpenShift.
    Hardware Enablement
    SR-IOV for Single Node OpenShift
    ● Support for running the SR-IOV operator
    on resource-constrained single-node
    hardware
    ● Onboard real-time and low-latency
    workloads with high-performance
    networking support.
    Hardware Enablement
    Support for DPDK & RDMA with SR-IOV
    ● Better throughput and performance.
    ● Moved from Tech Preview to GA
    DPDK


  44. What's new in OpenShift 4.9
    Ingress Enhancements
    PM: Marc Curry, Deepthi Dharwar
    Allow Setting mTLS Through the Ingress
    Operator Support
    ● Support client-TLS which enables router to
    verify client certificates.
    ● Admin must provide CA certificate to the
    router.
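    A sketch of enabling mTLS on the default IngressController (the ConfigMap name is illustrative; it must hold the CA bundle the router uses to verify client certificates):

    ```yaml
    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      clientTLS:
        clientCertificatePolicy: Required   # or Optional
        clientCA:
          name: client-ca-bundle            # ConfigMap name is illustrative
    ```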
    Support TLS 1.3 for OpenShift 4.x Ingress:
    ● Supports faster TLS handshake
    ● Simpler, secure cipher suites
    ● Better performance and stronger security.
    Ingress Updates
    HAProxy timeout Variables Customization
    a. clientTimeout/serverTimeout
    b. clientFinTimeout/serverFinTimeout
    c. tunnelTimeout
    d. tlsInspectDelay
    Set as part of Ingress controller spec under tuning
    Options.
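    A sketch of the tuningOptions stanza on the default IngressController (timeout values are illustrative, not recommendations):

    ```yaml
    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      tuningOptions:
        clientTimeout: 30s
        serverTimeout: 30s
        clientFinTimeout: 1s
        serverFinTimeout: 1s
        tunnelTimeout: 1h
        tlsInspectDelay: 5s
    ```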
    Global Options to Enforce HTTP Strict
    Transport Security [HSTS]
    ● Manual per-route annotations to enable HSTS
    are sub-optimal.
    ● Allow cluster admins to enforce this policy
    globally with ease and flexibility.
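    Globally required HSTS policies are expressed on the cluster Ingress config, roughly like this (domain pattern and max-age bounds are illustrative):

    ```yaml
    apiVersion: config.openshift.io/v1
    kind: Ingress
    metadata:
      name: cluster
    spec:
      requiredHSTSPolicies:
      - domainPatterns:
        - '*.example.com'
        maxAge:
          smallestMaxAge: 1         # seconds; routes must declare max-age in range
          largestMaxAge: 31536000   # one year
    ```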
    Ingress Updates


  45. What's New in OpenShift 4.9
    45
    Virtual Routing/Forwarding (VRF) CNI is GA
    PM: Franck Baudin
    Pod with CAP_NET_RAW capability
    vrf-foo
    net1: 10.0.0.100/24
    eth0: 10.0.0.55/24
    vrf-bar
    net2: 10.0.0.100/24
    OVN-K
    network
    FOO
    network
    BAR
    Problem statement: How to connect a pod to several networks with overlapping CIDRs?
    Solution: Create multiple routing and forwarding domains within the pod with Linux VRFs.
    ▸ The VRF CNI runs on Multus secondary interfaces as long as they are kernel-bound (netdevice only, no DPDK):
    SR-IOV CNI and MACVLAN CNI
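    A sketch of chaining the VRF plugin behind a MACVLAN secondary interface via a NetworkAttachmentDefinition (names, master interface, and CIDR are illustrative; this assumes the upstream CNI vrf plugin's chained configuration):

    ```yaml
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: vrf-foo
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "vrf-foo",
          "plugins": [
            {
              "type": "macvlan",
              "master": "eth1",
              "ipam": {
                "type": "static",
                "addresses": [ { "address": "10.0.0.100/24" } ]
              }
            },
            {
              "type": "vrf",
              "vrfname": "vrf-foo"
            }
          ]
        }
    ```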


  46. What's new in OpenShift 4.9
    OpenShift Service Mesh 2.1
    46
    Key Features & Updates
    ● Update to Istio 1.9
    ● New Feature: Service Mesh Federation
    ○ Securely connect service meshes across OCP
    clusters without the Kubernetes API Server
    ○ Share services between meshes on a strict
    “need to know” basis
    ○ Manage traffic with remote services as if they
    were local services
    ● ServiceMeshExtensions API becomes GA
    ○ Extend service mesh API using Envoy’s
    WebAssembly Extensions
    ● Service Mesh 2.1 Release Date: November 2021
    Service
    A
    Service
    B
    Service
    C
    Service
    D
    Control Plane
    (Istiod)
    Control Plane
    (Istiod)
    Gateway Gateway
    OCP Cluster OCP Cluster
    Service Mesh Service Mesh
    Federated Service Meshes
    PM: Jamie Longmuir


  47. OpenShift on OpenStack: Service Type Load Balancer
    Red Hat OpenStack Octavia as Service Type Load Balancer (without Kuryr)
    ● OCP 4.7 introduced External Load Balancer [UPI]
    ● OCP 4.9 and OSP 16.1 - Octavia Amphora as Service
    Load Balancer (L4-L7) with IPI installer
    ○ HA-proxy/IPVS based load balancer
    ○ HTTP/HTTPS, other TCP (non-http)
    ○ Amphora VM per Tenant (project) or OCP cluster
    ○ Amphora supports Active/Standby HA
    ○ UDP [OCP 4.10 new external Cloud Provider]
    ○ SCTP [ OSP 17.x]
    ● Octavia OVN as L4 Load Balancer [OCP 4.9 Tech
    Preview]
    ○ Relies on k8s health monitoring, no member
    health monitoring [OSP 17.x]
    ○ TCP ports (non-http), UDP [OCP 4.10], SCTP
    ○ No extra hop (latency) needed since LB service is
    distributed with OVN
    ● How to set this up? Docs
    VM Worker1
    podA podB
    10.2.1.2/24
    ToR
    Tenant Access
    network: 11.1.1.0/24
    eth0
    VM Worker2
    podD
    podC
    eth0 10.2.1.3/24
    Octavia (Amphora)
    External LB
    11.1.1.2/24 11.1.1.3/24
    GW: 11.1.1.1/24
    VIP
    10.1.1.2/24 10.1.1.3/24
    10.1.2.2/24 10.1.2.3/24
    PM: Anita Tragler


  48. OpenShift on OpenStack: Router Sharding
    Red Hat OpenStack Octavia for Router Sharding
    Goal - Use Octavia Load balancer service with router
    sharding in order to isolate DMZ external traffic from
    internal API traffic
    OCP 4.9 adds support for Ingress Operator router
    sharding with Octavia Load balancer
    ● With OCP 4.9, Apps can leverage Router
    Sharding with Octavia LB
    ● Separate LB service for DMZ and internal API
    traffic using Labels, namespaces or service
    mesh
    ● Separate Load balancer for infrastructure
    internal API and separate LB service for
    workloads
    How to set this up? Docs
    LB
    PM: Anita Tragler
    48


  49. Specialized Workloads
    49


  50. OpenShift Virtualization
    Modernized workloads, support mixed applications with VMs, containers, and serverless
    PM: Peter Lauterbach
    Public Cloud Support
    ● AWS Bare-metal (Tech Preview) - Consistent environment with on-premise VM
    workloads to support on-demand scaling and cloud migration
    Enhanced Data Protection
    ● Crash-consistent online VM snapshots
    ● Improved Data Protection w/ upstream Velero plugin for VM backups (Tech Preview)
    SAP HANA for test and non-production
    ● SAP HANA enablement for testing and non-production deployments
    ○ production certification in a future release
    Enhanced Security and Performance
    ● Additional modes to boot UEFI guests, High performance workloads with vNUMA
    ● VM workloads in a FIPS compliant OpenShift cluster
    Operational Enhancements
    ● Hybrid workload with container and VMs in the same service mesh
    ● VM workflow management, easily configure Windows guests with sysprep
    50


  51. What's New in OpenShift 4.9
    VM lift-and-shift to OpenShift
    PM: Miguel Pérez Colino
    Migration Toolkit for Virtualization 2.1
    ● Easy to use UI
    ● Mass migration of VMs from VMware and RHV to
    OpenShift
    ● Added Red Hat Virtualization as supported source
    provider (Cold Migration only)
    ● Validation service (Tech Preview): includes checks
    for configured SR-IOV cards and opaque networks
    ● Hooks: Automated tasks to be performed pre and
    post migration
    ● Must-Gather: specific add-ons created to help
    debug issues during migrations
    51


  52. What's New in OpenShift 4.9
    PM: Anand Chandramohan
    52
    ● We are proud to announce the General Availability of Bring Your Own Host Support for
    Windows nodes to Red Hat OpenShift.
    ● With this offering you will be able to onboard your custom Windows nodes (aka pets) into an
    OpenShift cluster.
    ● We recognize customers have dedicated Windows server instances in their data centers that
    they regularly update, patch, and manage. Often these instances run on vSphere or bare-metal
    platforms.
    ● It is essential to take advantage of these servers to run containerized workloads so their
    computing power can be harnessed in a hybrid cloud world.
    ● Enabling the Bring Your Own Host Support for these Windows servers can help customers lift
    and shift their on-premises workloads to a cloud native world.
    Bring Your Own Hosts (BYOH) for Windows Nodes


  53. What's New in OpenShift 4.9
    Bring Your Own Hosts (BYOH) for Windows Nodes
    PM: Anand Chandramohan
    53
    Windows traditional .NET
    framework containers
    Windows application
    Linux
    containers
    .NET core
    containers
    Windows
    containers
    Linux
    containers
    Windows
    virtual machine
    Red Hat OpenShift
    Virtualization
    Red Hat Enterprise Linux CoreOS
    Red Hat Enterprise
    Linux CoreOS
    Microsoft Windows
    Mixed Windows and Linux workloads
    .NET core containers
    Windows traditional .NET
    framework containers
    Windows
    containers
    Microsoft Windows
    .NET core containers
    Machine API Managed Infrastructure BYOH instance
    ● BYOH instance and Linux worker nodes on the cluster have to be on the same network
    ● The platform type for BYOH must match that set for the OCP cluster


  54. What's New in OpenShift 4.9
    OpenShift sandboxed containers
    Tech Preview
    FIPS Compliance Updates & Upgrades Must Gather Disconnected Mode
    Now you can run the
    OpenShift sandboxed
    containers operator
    on a FIPS enabled
    cluster without
    worrying about
    tainting its state. Our
    Operator, and Kata
    Containers are FIPS
    Validated.
    You can now seamlessly
    upgrade a cluster, as well
    as the operator and its
    artifacts (Kata Containers
    + QEMU extensions).
    An initial version of
    must-gather will be
    available in this release.
    This will help automate
    data-collection for you
    to get a better support
    experience.
    54
    Our operator now works
    in disconnected mode.
    PM: Adel Zaalouk


  55. What's New in OpenShift 4.9
    Hardware Accelerators enablement
    PM: Erwan Gallen
    55
    Hardware
    Accelerator
    Special Resource Operator (SRO)
    ● Orchestrator that manages the deployment of software
    stacks for hardware accelerators
    ● SRO uses recipes to enable the out-of-tree driver and
    manage the driver life cycle
    ● Day 2 operations:
    ○ Building and loading a kernel module
    ○ Deploying the driver
    ○ Deploying one device plugin
    ○ Monitoring stack
    ● Red Hat third-party support and certification policies.
    ● Tech Preview in OpenShift 4.9
    Hardware
    Accelerator
    Driver Toolkit (DTK)
    ● The Driver Toolkit is a container image to be used as a
    base image for driver containers.
    ● The DTK contains tools and the kernel packages
    required to build or install kernel modules
    ● Usable for partner builds or local builds
    ● Reduce cluster entitlement requirements
    ● Tech Preview in OpenShift 4.9
    Hardware
    Accelerator
    OpenShift worker node (RHEL CoreOS)
    Driver
    Daemonset
    CRI-O Plugin
    Daemonset
    Device Plugin
    Daemonset
    Feature Discovery
    Daemonset
    Node Exporter
    Daemonset
    Special Resource
    Operator
    (SRO)
    kubelet
    Node Feature
    Discovery
    (NFD)
    CRI-O
    Driver Toolkit


  56. What's new in OpenShift 4.9
    Multi-Architecture
    PM: Duncan Hardie
    ● Developer Preview of Arm is here
    ○ AWS only for now
    ○ Get your customers to try it out - feedback is needed
    ● IBM Power and IBM Z Features
    ○ Enhancement for developers using odo with support for
    Service Binding Operator
    ○ Now supported to use Helm to deploy applications
    ○ Multiple Network Interfaces - support “backdated” to 4.6
    ○ OpenShift Pipelines 1.6
    ● New for IBM Z only
    ○ Support for zVM 7.1
    ● New for IBM Power only
    ○ Support for Power 10
    56


  57. Operator Framework
    57


  58. What's New in OpenShift 4.9
    58
    Operator SDK Enhancements
    Operator handles proxy settings in the pods
    for managed Operands
    ● Helper functions for reading proxy info so
    it can be passed down to Operand pods.
    ● Easier to build “proxy-aware” Operators
    for proxied cluster environments.
    Uses UBI and other downstream
    images by default
    ● Base image (v4.y) is guaranteed
    compatibility fixes across two
    OCP releases (4.y and 4.y+1).
    ● Easier to create and maintain
    Operator projects in a Red Hat-
    supported way.
    Bundle validate: WARN on k8s
    removed APIs
    ● Easily see and be aware of those
    removed k8s APIs in the bundled
    manifests.
    ● Get handy guidance on how to
    migrate per k8s upstream doc.
    PM: Tony Wu


  59. What's New in OpenShift 4.9
    59
    Auto-switching of catalogs
    Use Kubernetes/OCP-version specific operator catalogs and
    automatically switch during cluster updates, e.g.
    "quay.io/org/catalog:v{kube_major_version}.{kube_minor_version}"
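    In practice this is driven by an annotation on the CatalogSource; OLM substitutes the cluster's Kubernetes version into the template at runtime (image paths below are illustrative):

    ```yaml
    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: my-catalog
      namespace: openshift-marketplace
      annotations:
        # Template resolved by OLM; switches the image as the cluster updates
        olm.catalogImageTemplate: "quay.io/org/catalog:v{kube_major_version}.{kube_minor_version}"
    spec:
      sourceType: grpc
      image: quay.io/org/catalog:v1.22   # fallback/initial image, illustrative
    ```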
    Support for “large” operator bundles
    Bundles with lots of metadata (for example large CRD
    manifests) are now compressed to stay below the 1MB etcd
    limit
    OpenShift Operator release compatibility
    OpenShift release compatibility can be denoted via operator
    metadata, initially blocks cluster upgrades
    metadata:
      annotations:
        operators.coreos.com/maxOpenShiftVersion: "4.8"
    Reduced resource usage / Better troubleshooting
    OLM catalog pods now use significantly less RAM. More
    status information in OperatorGroup and Subscription
    API, covering most install and update error scenarios.
    PM: Daniel Messer
    Operator Lifecycle Management Enhancements


  60. Quay
    60


  61. What's New in OpenShift 4.9
    OpenShift Mirror Registry
    Bootstrap registry for disconnected OpenShift cluster installations
    PM: Daniel Messer
    ▸ We prefer customers to run Quay on top of OCP
    ▸ But: disconnected clusters need a registry to store
    OCP release images and Operators before OCP can
    be installed
    ▸ Solution: tailored version of Quay helping customers
    to get a registry up and running quickly, mirroring is
    carried out via oc
    ▸ Local all-in-one Quay instance on RHEL 8
    ▸ Released as part of OpenShift, post 4.9 GA, included
    in every OCP subscription
    61


  62. What's New in OpenShift 4.9
    Quay Operator Enhancements
    More reliable deployments
    PM: Daniel Messer
    Numerous stability and configuration enhancements:
    ▸ All deployed components are now auto-scaled and
    have at minimum two replicas and anti-affinity rules
    ▸ Separate status updates and health checks for
    unhealthy components of a Quay deployment
    ▸ Support for TLS-encrypted connections to external
    databases
    ▸ Direct updates from Quay 3.3 to 3.6 are supported
    ▸ OpenShift-based TLS certificate management for
    Routes
    62


  63. What's New in OpenShift 4.9
    Nested repository support
    Simplifying mass-mirroring and organization of registry content
    PM: Daniel Messer
    ▸ Audience: Quay user / OpenShift administrator
    ▸ Use Cases:
    ・ Mirror content of multiple upstream registries
    into a single Quay* organization
    ・ Organize images into “subfolders” inside a single
    Quay organization
    ▸ Benefit: Eases skopeo mass mirroring, OpenShift
    Operator catalog mirroring
    ▸ Caveat: no hierarchical permission management
    Regular container image reference:
    quay.local/organization/repository:tag
    Nested container image references:
    quay.local/organization/collection/repository:tag
    quay.local/organization/folder/v1/repository:tag
    quay.local/ocp/v4/redhat-pipelines/operator:v4.9
    quay.local/ocp/v4/redhat-pipelines/tekton:v4.9
    * available in Quay 3.6 past OCP 4.9 GA, quay.io
    will get this towards the end of 2021
    63


  64. Storage
    64


  65. What's new in OpenShift 4.9
    OpenShift Storage - Journey to CSI
    PM: Gregory Charot
    ● CSI Operators - pluggable, built-in upgrade, could include
    new functionality
    ○ Azure Stack Hub (GA)
    ○ AWS EBS (GA)
    ○ AWS EFS (Tech Preview)
    ○ vSphere enhancements (Tech Preview)
    ● CSI Migration - allow easy move from using existing
    intree drivers to new CSI drivers
    ○ GCE Disk (Tech Preview)
    ○ Azure Disk (Tech Preview)
    ● Prepare for vSphere CSI transition
    ○ CSI Driver will be the only option in 4.11
    ○ New CSI Driver requires hardware version 15
    ■ And vSphere 6.7u3 or later
    ○ Get your customers ready to upgrade
    ○ h/w version 15 will be default starting 4.9 (but h/w
    13 is still supported)
    CSI Operators
    Operator target Migration Driver
    OpenStack Cinder Tech Preview Tech Preview
    AWS EBS Tech Preview GA
    AWS EFS n/a Tech Preview
    GCE Disk Tech Preview GA
    Azure Disk Tech Preview Tech Preview
    Azure Stack Hub n/a GA
    vSphere - Tech preview
    65


  66. What's new in OpenShift 4.9
    ● Rebranding from OpenShift Container Storage to
    OpenShift Data Foundation
    ● Disaster Recovery & Security
    ○ Regional DR with ACM (Tech Preview)
    ○ PV encryption with a service account
    ● Extended Control Plane with IBM Flashsystem
    ● Multicloud Object Gateway - Namespace replication
    ● Managed service on ROSA - Early trial
    Out of the box support
    Block, File, Object
    Platforms
    AWS/Azure Google Cloud (Tech Preview)
    ARO - Self managed OCS IBM ROKS & Satellite -
    Managed OCS (GA)
    RHV OSP (Tech Preview)
    Bare metal/IBM Z/Power VMWare Thin/Thick IPI/UPI
    Deployment modes
    Disconnected environment and Proxied environments
    PM:Eran Tamir
    OpenShift Data Foundation updates


  67. Management & Security
    67


  68. What's New in OpenShift 4.9
    Red Hat accelerates Kubernetes Security Innovations
    Red Hat Advanced Cluster Security
    1 Enhanced protections for the
    Kubernetes API server allow teams to
    detect and alert on actions against their
    organization's most sensitive secrets and
    configmaps.
    3
    Shorten feedback loops by allowing
    teams to target workflows for security
    alert distribution with namespace
    annotations.
    2
    6
    Accelerates security use case
    adoption in the cloud with certified
    testing and support for ROSA and
    ARO
    5
    Help organizations improve
    cybersecurity gap analysis and
    incident response prioritization
    by aligning security policies & alerts
    with the MITRE ATT&CK Framework
    Platform Support
    Advanced Security Uses
    Self-service workflows
    4
    Enable self-service security among
    application delivery organizations at
    scale with scoped access control
    annotations and labels
    Enhance OpenShift security with
    DeploymentConfig configuration
    checks for CI security testing
    68


  69. What's new in OpenShift 4.9
    Advanced Cluster Management for Kubernetes
    What’s new in RHACM 2.4
    69
    Product Managers: Jeff Brent, Scott Berens, Bradd Weidenbenner, Christian Stark, Sho Weimer
    Better Together
    ● RH Advanced Cluster Security, formerly known as
    Stackrox, deployed using ACM policy
    ● Support for OpenShift GitOps (ArgoCD)
    ApplicationSet
    ● Drive notifications from GRC Compliance into
    AlertManager and other incident management tools
    ● Observe cluster health metrics for non-OCP: EKS,
    GKE, AKS, IKS
    ● Service Level Objectives (SLO) can be defined on
    the Grafana dashboard
    Red Hat Advanced Cluster Management brings together the portfolio including Ansible,
    OpenShift GitOps, Red Hat Advanced Cluster Security, across cloud vendors and all from a
    single pane of glass.


  70. What's new in OpenShift 4.9
    70
    Product Managers: Jeff Brent, Scott Berens, Bradd Weidenbenner, Christian Stark, Sho Weimer
    Manage OpenShift Everywhere
    ● Cluster Lifecycle Support Enhancements -
    Microsoft Azure Government
    ● FIPS-Compliance
    ● RHACM Hub deployed on IBM Power and Z
    ● Centralized Infrastructure Management (CIM) for
    Bare Metal Deployments - Tech Preview
    ● Advanced image registry configurations for
    managed clusters in public clouds
    Meeting the needs of customers in Public Sector (NAPS), whether on
    premise with FIPS compliance or in the cloud with Microsoft Azure Gov.
    Advanced Cluster Management for Kubernetes
    What’s new in RHACM 2.4


  71. What's new in OpenShift 4.9
    71
    Product Managers: Jeff Brent, Scott Berens, Bradd Weidenbenner, Christian Stark, Sho Weimer
    Manage At the Edge
    ● 1K Management scale with IPv6 Dual Stack
    support
    ● Zero Touch Provisioning - Tech Preview
    ● Single Node OpenShift management (SNO)
    ● Hub-side Policy Templating
    ● PolicyGenerator simplifies distribution of
    Kubernetes resource objects to managed clusters
    ● Policy UX enhancements for fleet compliance
    At Red Hat, we see edge computing as an opportunity to extend the open
    hybrid cloud all the way to the data sources and end users. Edge is a
    strategy to deliver insights and experiences at the moment they’re needed.
    Advanced Cluster Management for Kubernetes
    What’s new in RHACM 2.4


  72. What's new in OpenShift 4.9
    Product Managers: Jeff Brent, Scott Berens, Bradd Weidenbenner, Christian Stark, Sho Weimer
    Business Continuity
    ● RHACM Hub backup and restore - Tech Preview
    ● Leverage ODF (aka OCS) and RHACM for stateful
    workloads - Tech Preview
    ● Persistent Volumes replication using VolSync
    (Scribe) - Tech Preview
    Red Hat customers expect a centralized management platform without the
    need for additional tooling to support their disaster recovery scenarios.
    Data Center 2
    ACM-Hub
    ManagedCluster 2
    PASSIVE
    NAMESPACE
    PVs
    RESOURCES
    RESOURCES
    RESOURCES
    PVs
    PVs
    • ODF/Volsync -
    Asynchronous
    Data Replication
    Data Center 1
    ManagedCluster 1
    NAMESPACE
    ACTIVE
    PVs
    RESOURCES
    RESOURCES
    RESOURCES
    PVs
    PVs
    Region 1 Region 2
    • Operator for
    Backup &
    Restore of Hub
    ACM-Hub
    backup
    S3
    • Restore and
    reattachment of
    new Hub
    Advanced Cluster Management for Kubernetes
    What’s new in RHACM 2.4


  73. What's New in OpenShift 4.9
    Cost management for OpenShift
    Cost distribution on memory
    ● It is now possible to select CPU or memory for cost distribution
    Column management
    ● You can hide/show columns for infrastructure
    and supplementary (to increase clarity)
    CSV exports with labels
    ● Labels are now included in CSV exports
    Source pause
    ● Now you can pause a source to avoid receiving
    error messages if the source is not updating
    Overall service reliability improvements
    PM: Sergio Ocón-Cárdenas
    73


  74. Telco 5G
    74


  75. What's New in OpenShift 4.9
    75
    Kubelet Memory Manager: Beta graduation
    PM: Franck Baudin
    apiVersion: performance.openshift.io/v2
    kind: PerformanceProfile
    spec:
      [...]
      numa:
        topologyPolicy: "single-numa-node"
    Problem statement
    ● High-performance applications (typically running DPDK) require CPUs, devices/NICs, and
    memory on the same NUMA node
    ● Until OpenShift 4.9, the Topology Manager aligned only CPUs and devices/NICs
    Solution: Enhance the Kubelet Memory Manager so the Topology Manager can also align memory and
    huge pages on the same NUMA node as the CPUs and devices/NICs
    ● Applies to Guaranteed QoS class containers only
    ● Only enforced for single-numa-node and restricted topology manager policies
    CPU
    Socket0
    RAM
    CPU
    Socket1
    RAM
    PCI PCI


  76. OpenShift PTP Advancements for RAN Workloads
    - Cell Site Router (CSR) GMC - Grandmaster Clock BC - Boundary Clock OC - Ordinary Clock
    (GMC)
    NIC
    RU
    RU
    RU
    76
    ● OpenShift Node as a Boundary Clock
    ● [O-RAN Approved] Low-latency, Node-local Event Bus w/
    Ordinary Clock PTP Events and sidecar image for easy CNF (DU)
    consumption
    Red Hat OpenShift /
    Red Hat CoreOS
    DU Workload
    RH Provided Event Bus
    Sidecar
    Red Hat PTP SW
    Stack
    (PTP Operator,
    ptp4l, phc2sys, …)
    PTP Events AMQ Interconnect
    (Event Bus)
    PTP Events
    System Clock
    OpenShift Node as an Ordinary Clock currently supported in OpenShift 4.8
    Far Edge Hardware Platform
    PMs: Robert Love, Michal Zaspera, Franck Baudin


  77. Observability
    77


  78. What's New in OpenShift 4.9
    New enhancement for OpenShift Monitoring
    PM: Shannon Wilber
    Enhanced capabilities to improve working with
    the OpenShift Console Monitoring Experience:
    ● Support for Prometheus 2.29.2 and
    Thanos 0.22.0
    ● Enhancements to Alert Manager Rules,
    Cluster Monitoring Operator and refined
    triggering conditions
    ○ Additional options to set alerts on
    kube state metrics
    ○ Improvements to detect more
    quickly when disk space is running
    low
    ○ Expanded Alert rules for Kube Client
    errors with Thanos queries
    ● Monitoring for User-Defined Projects
    ● Remote write storage for Prometheus
    Metrics
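    Monitoring for user-defined projects is enabled through the cluster monitoring ConfigMap, roughly as follows:

    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        enableUserWorkload: true
    ```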
    New Kube State Metrics &
    Alertmanager Functionality
    78
    Note: You can now disable the default Grafana dashboard
    deployment using a configuration option.
    https://github.com/openshift/cluster-monitoring-operator
    /pull/1241


  79. What's New in OpenShift 4.9
    New features in Logging for OpenShift
    Available with OpenShift Logging 5.2.x
    ● (Preview) New logging support and flexibility (Fluentd to Vector):
    ○ Option to replace fluentd with Vector as the primary collector for Logging:
    ■ Provides a smooth upgrade path from the previous GA version of fluentd to Vector
    ■ New Compatible API designed to extend requests to the Vector collector for expanded
    functionality
    ● Support for assembling multi-line stack-trace log messages
    ○ Ability to assemble log messages that are part of a stack trace and store them as a single log
    record instead of multiple records
    ○ Users can log stack traces as JSON
    ● More flexibility for simple log exploration.
    ○ New API experience inside the OpenShift console
    ○ Ability to display contextualized logs inside an individual alert details page
    ● New Support and Capabilities for Loki:
    ○ Loki Operator capable of providing an on-cluster solution
    ■ Ability to install, update, and manage a cluster with an alternative, scalable, and
    performant log store
    79
    PM: Shannon Wilber


  80. What's New in OpenShift 4.9
    New features in Insights for OpenShift
    Insights Advisor for OpenShift
    PM: Radek Vokál
    ● Insights Advisor continues delivering
    proactive support to all connected users
    ● Air-gapped Insights documented
    ● Immediate email notifications for new
    Insights Events configurable on
    console.redhat.com
    ● New recommendations focused on storage
    and network configuration, misconfiguration in
    user management
    ● Smaller insights footprint with conditional
    data gathering
    80
    https://console.redhat.com/openshift
    https://console.redhat.com/settings/notifications/openshift


  81. linkedin.com/company/red-hat
    youtube.com/user/RedHatVideos
    facebook.com/redhatinc
    twitter.com/RedHat
    81
    Red Hat is the world’s leading provider of enterprise
    open source software solutions. Award-winning
    support, training, and consulting services make
    Red Hat a trusted adviser to the Fortune 500.
    Thank you
