• Do not declare upgrade complete if pools are degraded
• Misc. Etcd Disaster Recovery improvements
• Improve maintainability and usability of the operators in disconnected environments
• Documentation for BYO LB and DNS for OSP IPI/UPI
Observability
• Network Stability: improve x.509 error output
• New metrics and dashboards for Pipelines, Storage, Networking
OpenShift 4.7 (Q1 2021)
HOSTED
• GA of Red Hat OpenShift Service on AWS (ROSA)
• OSD CCS 60-day free trial
• ROSA and OSD log forwarding
• ARO Azure Portal integration
PLATFORM
• AWS C2S Region
• GCP: Customer-managed disk encryption keys
• GA Userspace Interface API & Library
• Additional Windows Containers capabilities*
• Network Enhancements derived from OVN
• IPSec Support
• FPGA Support (pilot)
• OpenShift Update Service GA
• Cost management: new onboarding UX
• New LUKS, SW RAID, and multipath options
APP DEV
• OpenShift Pipelines TP
• OpenShift Serverless (Functions DP)
• OpenShift GitOps (Argo CD) TP
• Monitor application workloads
• Foundation for Console internationalization
• QuickStarts Extensible
• Service Binding GA

OpenShift 4.8 (Q2 2021)
HOSTED
• OSD consumption billing, autoscaling
• Expanded ROSA and OSD Add-ons
• ARO government region (MAG) support
PLATFORM
• Azure Stack Hub and RHCOS for IBM Cloud
• IPv6 (single/dual stack on control plane)
• Enhanced Userspace Interface API & Library
• Additional Windows Containers capabilities*
• Support TLS 1.3 for Ingress
• External DNS Management
• OVN Egress Router (GA)
• HAProxy 2.2
• ipfailover Support
• Cost management: support for GCP, air-gapped
APP DEV
• OpenShift Serverless (Functions GA)
• OpenShift Pipelines GA
• OpenShift Builds v2 & Buildpacks TP
• OpenShift GitOps (Argo CD) GA
• Simplify access to RHEL content in builds
• Enhanced GitOps bootstrapping with kam
• Console internationalization GA
• Foundation for User Preferences
• Application environments in Dev Console
• Better Operator version & update mgmt

OpenShift 4.9+ (H2 2021+)
HOSTED
• Cost mgmt integration to Subs Watch, ACM
• ROSA AWS console integration
• Cluster Suspend / Resume
PLATFORM
• Azure China & AWS China
• Alibaba, AWS Outposts, Equinix Metal, & Microsoft Hyper-V
• Edge: Single node lightweight Kube cluster
• Enable user namespaces
• Additional Windows Containers capabilities*
• Priority and Fairness for APIserver
• Ingress v2 + Contour
• Operator metering lean architecture
• Network Topology and Analysis Tooling
• SmartNIC Integrations
• Cost management integration to SWAtch / RH marketplace for subscriptions visibility
APP DEV
• Kiali integration with Dev Console
• Pipelines as code
• Jenkins Operator GA
• OpenShift Builds v2 & Buildpacks GA
• Application version model for Operators
• Operator Maturity increase via SDK
• Dynamic Plugins for the OCP Console
Customize the default out-of-the-box behaviour of the OpenShift scheduler with scheduling profiles:
• LowNodeUtilization — spread pods evenly across nodes
• HighNodeUtilization — pack as many pods as possible onto as few nodes as possible
• NoScoring — quickest scheduling cycle, by disabling all score plugins
Extension points: add more scheduling plugins, use a pre-built profile, or build your own profile.
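A profile is selected on the cluster-wide Scheduler config object; a minimal sketch (the singleton object is named `cluster`, and `spec.profile` takes one of the three profile names above):

```yaml
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  # One of: LowNodeUtilization (default), HighNodeUtilization, NoScoring
  profile: HighNodeUtilization
```

Applying this with `oc apply` rolls the new profile out to the kube-scheduler.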
Descheduler: evict pods that are scheduled on less-desired nodes in a cluster, based on profiles. Configured via the KubeDescheduler CR (name: cluster, namespace: openshift-kube-descheduler-operator) with spec: deschedulingIntervalSeconds: 1800 and profiles: - <Profile: Select one or more profiles from the table on the left>
Profiles*
• AffinityAndTaints — evicts pods that violate node and pod affinity, and node taints
• TopologyAndDuplicates — evicts duplicate pods and balances the distribution of pods
• LifecycleAndUtilization — evicts underutilized pods from nodes marked as highly utilized; evicts pods based on “PodLifeTime”
Product Manager: Gaurav Singh
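Putting the fields from the slide together, a full KubeDescheduler object looks roughly like this (a sketch; the profile selection is illustrative):

```yaml
apiVersion: operator.openshift.io/v1
kind: KubeDescheduler
metadata:
  name: cluster
  namespace: openshift-kube-descheduler-operator
spec:
  # How often the descheduler runs, in seconds (30 minutes here)
  deschedulingIntervalSeconds: 1800
  # One or more of the profiles described above
  profiles:
  - AffinityAndTaints
  - LifecycleAndUtilization
```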
between pods on different nodes is confidential, authenticated, and has not been tampered with.
• Uses Libreswan and IPsec in the kernel
• Currently IPv4-only
• Each node has a unique IPsec connection to each other node in the cluster.
  ◦ Node private keys: valid for 5yr and rotate at 4.5yr (at cluster update)
  ◦ CA-signed keys: valid for 10yr, do not rotate currently
• Encrypted internode traffic includes that from:
  ◦ hostnetwork-pod -> pod
  ◦ pod -> pod
• The following internode traffic is NOT IPsec encrypted:
  ◦ Control plane traffic (already TLS encrypted)
  ◦ pod -> hostnetwork-pod
  ◦ hostnetwork-pod -> hostnetwork-pod
IPsec is enabled by updating the Cluster Network Operator configuration during installation (details in Notes section):
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      ipsecConfig: {}
Product Manager: Marc Curry
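One way to apply the snippet above at install time is a custom Network operator manifest; a sketch, assuming the usual `create manifests` customization flow (file name is a convention, not mandated):

```yaml
# manifests/cluster-network-03-config.yml
# Created after `openshift-install create manifests`
# and before `openshift-install create cluster`.
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      # An empty object enables IPsec for pod-to-pod traffic
      ipsecConfig: {}
```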
Sadeghianfar • Multi-cluster GitOps config management with Argo CD ◦ One-click Argo CD install through OLM for cluster configs ◦ Restricted Argo CD instances for app deployment • Support for clusters with restricted networks • Deployments guide for Argo CD • Opinionated GitOps bootstrapping with GitOps Application Manager CLI
inspired by the CIS Kubernetes Benchmark are now available. These work for both OCP 4.7 and OCP 4.6 (for 4.6, apply RHSA-2021:0190). The CIS OpenShift Benchmark will be released to the CIS Kubernetes community for comment in January. The OpenShift 4 Hardening Guide is available from Red Hat now, until the CIS OpenShift Benchmark is published. Red Hat Advanced Cluster Management 2.1 integrates with the OpenShift Compliance Operator. Product Manager: Kirsten Newcomer What's new in OpenShift 4.7
network traffic, critical system-level events in each container, asset and inventory tracking
Risk profiling of running deployments according to their security risk; correlation of image vulnerabilities with deployments
Configuration management: application configuration analysis (KubeLinter); policies applied at build and deploy time
Compliance assessment across hundreds of controls for CIS Benchmarks, PCI, HIPAA, and NIST SP 800-190
Automated suggestion of network policy rules and simulation of the impact of network policy changes
Threat detection: detect anomalous activity; pre-built policies to detect crypto mining, privilege escalation, and various exploits
Incident response: alert on activity, or kill impacted pods or containers; collect forensics data and send it to a SIEM
Integration with DevOps systems through a rich API and pre-built plugins for CI/CD tools, image scanners, SIEMs, and notification tools.
for Kubernetes • Support for additional managed OpenShift providers: ARO, and OSD • Multi-cluster networking with Submariner (Tech Preview) • Compliance: Integrate RHACM governance with Compliance Operator • Integrate Kubernetes Integrity Shield resource assurance (Tech Preview) • GitOps: Extend Argo CD with RHACM GitOps • Enhanced multi-cluster metric aggregation with customized allowlist • Customize your own Grafana dashboards for fleet management What’s new in RH ACM 2.2 Product Managers: Jeff Brent, Scott Berens More on One Stop!
user experience for OCP clusters
Cost management operators:
                      Community Operator       Red Hat Operator
Naming                Koku Metrics Operator    Cost management metrics operator
Location              In-cluster OperatorHub   In-cluster OperatorHub
Availability          Today                    Q1/2 2021
Air-gapped support    Q2 2021                  Q2 2021
Red Hat OpenShift portfolio — Developer Efficiency, Business Productivity, Enterprise Ready:
• Red Hat managed OCP: Red Hat OpenShift Dedicated, Red Hat OpenShift Service on AWS, Azure Red Hat OpenShift, Red Hat OpenShift on IBM Cloud — joint offerings with the cloud provider, offered as a native console offering on equal parity with the cloud provider's Kubernetes service or customer-managed OCP
• Customer managed: OpenShift Container Platform — on-premises Red Hat OpenShift
Cluster autoscaling for ROSA • Install into existing VPC • Custom machine pools (multi-AZ machine sets) • Larger instance sizes (up to 96 vCPU and 768 GB memory) • Customer notifications tied to OCM cluster history log • BYOK disk encryption on AWS CCS OpenShift Dedicated & Red Hat OpenShift Service on AWS Product Manager: Andrew Cathrow, Will Gordon
Deploy managed OpenShift clusters on Azure's government cloud
• Egress lockdown
  ◦ Documented outbound IP/DNS requirements to secure outbound traffic via firewall
• BYOK disk encryption for PVs and the OS disk
• Larger VM sizes, including dedicated instances
• Cluster-create GUI in the Azure Portal
Azure Red Hat OpenShift Product Manager: Jerome Boutaud
Daniel Messer
Operator SDK becomes a Red Hat supported offering
• A new downstream Operator SDK as the supported and recommended way to build Operators for OpenShift.
• Upcoming: customized base image for the downstream Operator SDK.
• Upcoming: potential customized scaffolding for the downstream Operator SDK.
OLM Integration
• Run an Operator on a cluster to test whether it behaves correctly when managed via OLM (operator-sdk run… )
• Support creation of webhooks managed by OLM for Golang Operators
• Support "Operator Conditions" to explicitly communicate with and influence OLM behavior (e.g. "Upgrade Readiness")
• Generate bundle manifests and metadata (CSV) in the new bundle format for OLM
• Easily run an Operator in the new bundle format on a cluster with OLM
• Trigger an upgrade of an already installed Operator bundle, to stage and test in a pipeline
Operator Bundle Format https://sdk.operatorframework.io/docs/upgrading-sdk-version
Red Hat Runtimes • Quarkus - Quick Starts, Example Helm Chart, Integration with Serverless Functions (DP) • EAP - Azure App Service (GA), Azure Marketplace, EAP XP 2.0 (Runnable JARs) • Spring Boot 2.3 - Support for UBI for Java 8 and 11, Security Starter, Dekorate Build Hooks (TP) Red Hat Integration • APIs OSD Add-On: Managed API Service (GA), 3scale Manageability Enhancements • Messaging AMQ Broker/Online, Interconnect LTS, AMQ Interconnect 2.0 (DP) • Streaming Kafka 2.7 support in AMQ Streams, Service Registry 2.0 (TP) Red Hat Process Automation • PAM 7.10 - First release with Kogito technology, Kafka integration, React support • Dashboard Builder Heat Map component Product Manager: Karena Angell (on behalf of the Red Hat Application Services team) Events APIs EIPs Data
creation with a catalog of golden images • Additional Quick Start guides • Sizing guidance for VMs at scale Core • Even simpler operator installation • Configure VMs on a subset of cluster nodes • Robust VM baseline performance w/OCS • Enable guests with UEFI Secure Boot • Microsoft Windows Server 2012 R2 and later with Windows Server Virtualization Validation Program (SVVP) Storage • Improve performance with pre-allocated VM disks • UI for offline VM snapshots Network • Tech preview for IPv4 and IPv6 dual-stack clusters Product Manager: Peter Lauterbach
Mauricio "Maltron" Leal OpenShift Service Mesh 2.0.x Key Features & Updates • Released in November 2020. • Based on Istio version 1.6.x. • Supported on OCP 4.6, 4.7. • Service Mesh 2.0 introduced substantial changes over 1.1.x: ◦ New APIs/CRDs for configuration. ◦ New architecture (Istiod). ◦ New extensions (Wasm - tech preview). ◦ New certificate management (SDS). ◦ New telemetry collection.
2.0’s release: ◦ Support for the OVN-Kubernetes CNI Plugin (2.0.1 - Dec 2020) ◦ Support for Service Mesh 2.0 on Power and System Z HW (Feb 2021) ◦ FIPS Validation based on RHEL OpenSSL (Pending NIST approval) ▪ Service Mesh is already supported on a FIPS enabled OCP cluster. • Next Up: Service Mesh 2.1 (Q2 2021) - Service Mesh Federation and more. Product Manager: Jamie Longmuir and Mauricio "Maltron" Leal OpenShift Service Mesh Updates
will soon be supported on managed OpenShift platforms (target: February 2021): ▪ OpenShift Dedicated (OSD) ▪ Azure Red Hat OpenShift (ARO) ▪ Red Hat OpenShift on AWS (ROSA) • Service Mesh will be supported as an unmanaged addon. • Customers will install and manage Service Mesh in the same manner as OCP. Product Manager: Jamie Longmuir and Mauricio "Maltron" Leal Service Mesh on Managed OpenShift
Manager: Siamak Sadeghianfar Build container images from source code using Kubernetes tools A Comprehensive DevOps Platform for Hybrid Cloud Declarative GitOps for multi-cluster continuous delivery
as , pipelines as
• Cluster-wide proxy configs passed to TaskRun pods
• HTTPS support for webhooks (TLS in EventListeners)
• An EventListener can be shared across multiple namespaces to reduce resource consumption
• Image digest published as in buildah and S2I tasks
• Pipeline UX enhancement highlights in the Dev Console
  ◦ Metrics tab: pipeline execution metrics
  ◦ TaskRuns tab: list of TaskRuns created by a PipelineRun
  ◦ Events tab: related PipelineRun, TaskRun and Pod events
  ◦ Download PipelineRun logs
Product Manager: Siamak Sadeghianfar OpenShift Pipelines 1.3
from Tekton Hub • Install Tasks as ClusterTask • Code completion for variables when authoring Pipelines • Support for creating PVC for workspaces when starting a pipeline • Notifications for PipelineRun status upon completion Product Manager: Siamak Sadeghianfar Tekton CLI and IDE Plugins (VS Code & IntelliJ)
• Interact with OpenShift from GitHub workflows • Verified OpenShift actions on GitHub Marketplace ◦ OpenShift client (oc) ◦ OpenShift login ◦ S2I build ◦ Buildah builds ◦ Push image to registry • More actions and GitHub Runner to come... Red Hat GitHub Actions Blog: Deploying to OpenShift using GitHub Actions | Demo Product Manager: William Markito
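A hedged sketch of wiring the verified actions together in a workflow (the action names `redhat-actions/buildah-build`, `redhat-actions/push-to-registry`, and `redhat-actions/oc-login` are as published on the GitHub Marketplace; version tags, image names, and secret names are illustrative assumptions):

```yaml
name: build-and-deploy
on: [push]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build image with Buildah
        uses: redhat-actions/buildah-build@v2
        with:
          image: myapp
          tags: latest
          dockerfiles: ./Dockerfile
      - name: Push image to registry
        uses: redhat-actions/push-to-registry@v2
        with:
          image: myapp
          tags: latest
          registry: quay.io/myorg      # placeholder registry/org
          username: ${{ secrets.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_PASSWORD }}
      - name: Log in to OpenShift
        uses: redhat-actions/oc-login@v1
        with:
          openshift_server_url: ${{ secrets.OPENSHIFT_SERVER }}
          openshift_token: ${{ secrets.OPENSHIFT_TOKEN }}
      - name: Roll out
        run: oc rollout restart deploy/myapp   # placeholder deployment
```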
with native developer experience Product Manager: Parag Dave Developer Sandbox for OpenShift https://developers.redhat.com/developer-sandbox **NEW** Shared cluster multi-tenant Dev Sandbox
Product Manager: Siamak Sadeghianfar
• 0.4.0 (December 9)
  ◦ Available on OperatorHub!
  ◦ Works against plain Kubernetes
  ◦ Inject bindings as files into workloads
• 0.5.0 (January 22)
  ◦ Protect against privilege escalations
• Working towards GA in Q2
(Diagram: application Deployment "cool-app" bound to Database CR "cool-db" via a Service; env vars injected)
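A sketch of what the binding in the diagram might look like as a ServiceBinding object (the group/version `binding.operators.coreos.com/v1alpha1` and field names follow the Service Binding Operator's docs and may differ across these early releases; the Database CRD group is hypothetical):

```yaml
apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: cool-app-to-cool-db
spec:
  bindAsFiles: true        # 0.4.0+: inject bindings as files instead of env vars
  application:             # the workload receiving the binding
    group: apps
    version: v1
    resource: deployments
    name: cool-app
  services:                # the backing service being bound
  - group: example.com     # hypothetical Database CRD group
    version: v1alpha1
    kind: Database
    name: cool-db
```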
/ Mar 16th
• Initial Bitbucket support for factory flow - easily start a workspace from a git repo
• Raise and review PRs from within the IDE - integrated GitHub PR plugin
• Easier getting started - in-IDE recommendations for configuring your workspace
• Install and configure CRW on OCP 4.x clusters with a CLI - perfect for scripting and IaC
Product Manager: David Harris CodeReady Workspaces 2.6 / 2.7
Serena Nichols, Mohit Suman Developer Client Tooling • Updated support of Service Binding v0.3.0 • Improved support of devfiles • Improved documentation CodeReady Studio Hardened Eclipse desktop IDE • Supports development of Java, Node.js, Spring Boot and Quarkus applications • New Wildfly 22 and EAP 7.3.4/XP support • New support for component deployment using devfiles, leveraging odo 2.0 under the hood. v12.18 targeted for Jan 26th OpenShift Connector OpenShift/Kubernetes extension for IDEs • Enables rapid development and deployment of code on Kubernetes and Red Hat OpenShift • Provides local OpenShift cluster creation using Red Hat CodeReady Containers in VS Code extension • New support for component deployment using devfiles, leveraging odo 2.0 under the hood. • Coming soon - simplified access to Developer Sandbox $ odo create nodejs --starter odo v2.0.3 Developer CLI for OpenShift/Kubernetes IntelliJ v0.4.0 (Jan 20th) VS Code: v0.2.0
Regular releases to pick up 4.6 z-streams and fresh certs
• Resource requirements - 2 GB available for application use; 9 GB minimum still needed
• System tray updates - access CRC easily on Windows and Mac
• Mac installer - streamlined installation for delivery of a signed binary, and setup for future experience improvements
• End-user insights - telemetry on command usage, success/failure, duration and operating system
• VPN support is dev preview - a new networking implementation reduces complexity and improves performance
Product Manager: Steve Speicher CodeReady Containers: OpenShift on your Laptop
(Screenshots: Windows system tray, macOS system tray, macOS installer)
images • All Operator Images • All Operator Bundles • Gated by Subscription • All upstream images remain on quay.io/projectquay Official Red Hat Quay images Download now via Red Hat Container Catalog Product Manager: Daniel Messer
Quay Operator 3.4 can now update deployments to a newer version, and will also migrate existing deployments managed by Quay Operator 3.3. The Quay Operator can now deploy a complete Quay installation with all required services managed by the Operator and supported by Red Hat. * based on local storage provided by a non-HA NooBaa S3 endpoint (included in subscription) Product Manager: Daniel Messer
instance per build request
• Instance type, AMI, subnets configurable
• Expects a Fedora CoreOS environment
• Builds can be executed via docker or podman
• Upstream only, not supported by Red Hat
OpenShift/Kubernetes Builds
• Creates a k8s Job per build on a bare-metal OpenShift/k8s cluster
• The Job launches containerized qemu in a pod
• The VM-in-a-container runs Red Hat CoreOS
• Builds are executed via buildah/libpod
• The build cluster can be different from the cluster running Quay
• Node selectors and container limits configurable
• Supported by Red Hat when using Red Hat Quay
• No RHEL subscription required; eligible as an infrastructure workload on OpenShift infra nodes
Product Manager: Daniel Messer
William Markito DEVELOPING
The Developer Catalog now has an Event Source sub-catalog. Camel K connectors are available in the Event Source catalog when the Red Hat Integration - Camel K Operator is installed.
The Administrator perspective now has access to Eventing resources.
Developers now have the ability to create Channels & Subscriptions, and add Triggers to Brokers.
The Topology view now visualizes Channels & Subscriptions, and Triggers and Brokers.
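The Eventing resources surfaced in the console are standard Knative Eventing objects; a minimal sketch of adding a Trigger to a Broker (the subscriber Service name and event type are illustrative):

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-trigger
spec:
  broker: default            # the Broker this Trigger attaches to
  filter:
    attributes:
      type: dev.example.event   # deliver only events of this CloudEvents type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display       # illustrative Knative Service
```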
Manager: Serena Nichols DEVELOPING Our enhanced Developer Catalog experience enables a consistent experience across catalogs, while exposing additional features as needed Administrators can now modify the available categories in the Developer Catalog Developer sub catalogs • Builder Images • Event Sources (operator enabled) • Helm Charts • Operator backed services • Samples • VMs (operator enabled)
Ali Mobrem, Serena Nichols LEARNING\EXTENDING Extensible: ConsoleQuickStart CRD • QuickStart guidelines - How to create a great quickStart! • New default sample: Hints: Ability to highlight sections of the UI • Top level nav items and masthead action icons New Quick Start Themes • OpenShift Container Storage • OpenShift Virtualization • Helm Chart Repository • Node • Quarkus • Spring Boot
Ali Mobrem MANAGING
Internationalization (I18N)
• Externalize all hard-coded strings in the client code
• Starting with support for Chinese and Japanese, with Korean coming in a z-stream
• UI-based language selector
• Localizing dates and times
Accessibility (A11Y)
• Improved screen reader functionality
Blogs:
• OCP\PF4 Localization Journey
• OCP\PF4 Accessibility
enable predefined CatalogSources
• Easily disable/enable the predefined CatalogSources for the OperatorHub page.
Easily see the Operators included in the CatalogSources
• Easily see the Operators included in a CatalogSource for the OperatorHub.
Expose all configuration and status of the CatalogSource object
• Easily review/edit the configuration of the CatalogSources directly in the UI.
Product Manager: Ali Mobrem, Tony Wu Managing Operators at ease EXTENDING
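The same toggles are driven by the cluster-scoped OperatorHub config object; a sketch of disabling one of the default catalogs (the singleton object is named `cluster`):

```yaml
apiVersion: config.openshift.io/v1
kind: OperatorHub
metadata:
  name: cluster
spec:
  sources:
  - name: community-operators   # one of the predefined CatalogSources
    disabled: true              # hide it from the OperatorHub page
```

Setting `spec.disableAllDefaultSources: true` instead disables every default catalog at once.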
Logging 5.0
What
Note:
• No separate SKU.
• No changes to the support process.
• The changes are mostly about how, and how often, we deliver Logging; they do not impact current features.
Full Stack Automation (IPI) Pre-existing Infrastructure (UPI) Product Manager(s): Katherine Dubé (AWS, Azure, GCP), Maria Bracho (VMware), Peter Lauterbach (RHV & OCP Virtualization), Ramon Acedo Rodriguez (OSP, BM), & Duncan Hardie (IBM Z & Power) IBM Power Systems
• The U.S. Intelligence Community can now deploy OpenShift to the AWS Commercial Cloud Services (C2S) region. ◦ Note: Deploying to the AWS SC2S region is not supported at this time since the Route 53 service is not available. • C2S is the government program and contract vehicle that brings Amazon Web Services (AWS) “over the fence” and into the Intelligence Community (IC). This air-gapped AWS Region on the Top Secret fabric has been operating since 2014 and is exclusively available to the U.S. IC. • RHEL CoreOS AMI publishing is not available in the C2S region so users must upload their own prior to installing OpenShift via: ◦ ‘aws ec2 import-snapshot’ & ‘aws ec2 register-image’ • Installation of OpenShift on AWS C2S is similar to existing deployment methods for other AWS regions, but the AWS region and RHEL CoreOS AMI ID must be manually configured in install-config.yaml. Generally Available Product Manager: Katherine Dubé % grep -B 1 -A 2 "aws:" mycluster/install-config.yaml platform: aws: region: us-iso-east-1 amiID: ami-2ebf36df % ./openshift-install create cluster --dir mycluster INFO Credentials loaded from default AWS environment variables INFO Consuming Common Manifests from target directory INFO Consuming Worker Machines from target directory INFO Consuming Openshift Manifests from target directory INFO Consuming OpenShift Install (Manifests) from target directory INFO Consuming Master Machines from target directory INFO Creating infrastructure resources… INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443... INFO API v1.19.0+f5121a6 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO Destroying the bootstrap resources... INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize... INFO Waiting up to 10m0s for the openshift-console route to be created... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/Users/userid/openshift-install/mycluster/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "5char-5char-5char-5char" INFO Time elapsed: 40m10s
GCP
• On GCP, data is encrypted at rest by default using platform-managed keys; however, some users want more control and prefer to provide their own user-managed encryption key
• A requirement for many organizations with explicit compliance and security guidelines for deploying applications to the cloud
• A KMS key must be created, and the correct permissions assigned to the service account, prior to deploying OpenShift
• Configured using the optional ‘encryptionKey’ object in the "[controlPlane|compute].platform.gcp" section of install-config.yaml
• Can be used with either installation workflow when deploying OpenShift to GCP
Generally Available Product Manager: Katherine Dubé & Duncan Hardie
apiVersion: v1
baseDomain: example.com
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    gcp:
      osDisk:
        encryptionKey:
          kmsKey:
            name: machine-encryption-key
            keyRing: openshift-encryption-ring
            location: global
            projectID: openshift-gcp-project
          kmsKeyServiceAccount: [email protected]
  replicas: 3
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform: {}
  replicas: 3
metadata:
  creationTimestamp: null
  name: mycluster
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  gcp:
AWS Security Token Service (STS) enables an authentication flow allowing a client to assume an IAM Role resulting in short-lived credentials. OCP 4.7.z - Tech Preview • Manual implementation for AWS STS • AWS STS not supported natively by the OCP Installer • Requires validation before upgrading to OCP 4.8 to resolve any new or existing permission changes OCP 4.8 - GA • Support for AWS STS natively with OCP on AWS Installer • Tooling to automate the pre-installation configuration as well as the upgrade path • Documentation • New deployments only OCP 4.9+ • Migration in-place to AWS STS support $ oc get secrets -n kube-system aws-creds Error from server (NotFound): secrets "aws-creds" not found $ oc get secrets -n openshift-image-registry installer-cloud-credentials -o json | jq -r .data.credentials | base64 -d [default] role_arn = arn:aws:iam::125931421481:role/image-registry-role web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token • No “root” AWS secret • Components are assuming the IAM Role specified in the Secret manifests (instead of creds minted by the cloud-credential-operator) Product Manager: Maria Bracho
Hat OpenStack Platform 13 Red Hat OpenStack Platform 16.1 • OpenStack Bare Metal (Ironic) integration (z-stream) • Autoscaling from/to zero nodes • SR-IOV Support for OpenShift pods • Cinder CSI support • Bring your own load balancer and DNS • BYO Network: Machine Sets on custom networks • Kuryr IPv6/dual-stack support (no IPI Installer) OpenShift on OpenStack Red Hat OpenStack as an IaaS Single cluster, Red Hat OpenShift as a workload
installer What’s new in OCP 4.7 • Improved experience with certificate handling. • Increased scaling with high performance VMs. • Easier management with automatic guest agent. • Better stability upon infrastructure failures. Supported RHV releases with OCP 4.6+ • RHV 4.4.2+ • Customers running OCP 4.5 on RHV 4.3 must upgrade to RHV 4.4.2+ before upgrading to OCP 4.6 Product Manager: Peter Lauterbach Generally Available $ ./openshift-install create cluster --dir ./demo ? SSH Public Key /home/user_id/.ssh/id_rsa.pub ? Platform ovirt ? Enter oVirt’s api endpoint URL admin:pw123 https://rhv-env.virtlab.example.com/ovirt-engine/api ? Is the installed oVirt certificate trusted? Yes ? Enter oVirt’s CA bundle xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx ? Enter ovirt-engine username admin@internal ? Enter passsword xxxxxxxxxxxxx ? Select oVirt cluster Default ? Select oVirt storage domain hosted_storage ? Select oVirt network ovirtmgmt ? Enter the internal API virtual IP 10.35.1.19 ? Enter the internal DNS virtual IP 10.35.1.21 ? Enter the ingress IP 10.35.1.20 ? Base Domain example.com ? Cluster Name demo ? Pull Secret [? for help] xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx INFO Creating infrastructure resources... INFO API v1.17.1 up INFO Install complete! INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo.example.com INFO Login to the console with user: kubeadmin, password: xxxxx-xxxxx-xxxxx-xxxxx
Marc Curry • New "API Performance" grafana dashboard that visualizes kube-apiserver and openshift-apiserver metrics • Useful histogram of metrics that can be used to better understand API load characterization and debug issues • Metrics include: ◦ request rate by resource and verb, read vs write, status and instance ◦ request: duration, dropped, terminated, in-flight ◦ priority and fairness measurements ◦ TLS handshake error rates ◦ etcd object count ◦ ...and many others
proxy
  ◦ Useful if all traffic needs to go through an intercepting proxy for inspection
• Machine API now supports AWS tenancy: dedicated
  ◦ Run on hardware that is dedicated to a single customer, needed to fulfill federal regulations
• Google Cloud Disk Encryption Sets
  ◦ Allow customers to use their own user-managed keys; lots of corporate policies mandate this
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://<username>:<pswd>@<ip>:<port>
  httpsProxy: http://<username>:<pswd>@<ip>:<port>
  noProxy: example.com
  readinessEndpoints:
  - http://www.google.com
  - https://www.google.com
  trustedCA:
    name: user-ca-bundle
proxy/cluster
Product Manager: Duncan Hardie Generally Available
- Disk provisioning improvements! • Boot/root device mirroring for more robust bare metal nodes • Define RAID 1 & 5 on secondary block devices • Define LUKS encryption of any block device • Multipath FC SAN install and boot - Kdump (Tech Preview)
Provides the ability for customers to use their company's DNS domain as the default domain for Routes/Ingresses for applications running on the cluster, including TLS certificates from their own CA.
• The configured default domain is used when creating apps without having to specify a hostname during "oc expose".
• This alternative domain, if specified, overrides the default cluster domain for the purpose of determining the default host for a newly created route.
• The custom DNS domain would be owned by the customer, so they would create the DNS record for it. The simplest method is to create a wildcard CNAME record that points to a name under the default ingress domain, for proper resolution.
Product Manager: Marc Curry
The following example configures “apps.foo.com” as the default apps domain:
$ oc new-app https://github.com/<path>/myapp.git
$ oc patch ingresses.config/cluster -p '{"spec":{"appsDomain":"apps.foo.com"}}' --type=merge
$ oc get ingresses.config/cluster -oyaml
apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  name: cluster
spec:
  domain: apps.<something>.dev.openshift.com
  appsDomain: apps.foo.com
...wait several minutes (~3-5) for the openshift-apiserver operator to perform a rolling update of the openshift-apiserver pods…
$ oc expose svc myapp
$ oc get routes
NAME    HOST/PORT                   PATH   SERVICES   PORT
myapp   myapp-myproj.apps.foo.com          myapp      8080-tcp
(some outputs truncated to fit on the slide)
for Intel N3000 • OCP supports the Intel N3000 NIC functions with the SR-IOV Operator • The exposed SR-IOV resource can be assigned to a pod for accelerating network packet processing OVN EgressFirewall Filtering with DNS Names • Feature parity with current OpenShift SDN • Limit the external hosts that some or all pods can access from within the cluster, filtering by either of: ◦ An IP address range in CIDR format ◦ A DNS name that resolves to an IP address Fast DataPath OpenStack Support • Enables Fast DataPath (SR-IOV, OVS-DPDK, OVS TC Flower offload) when OpenShift is running on OpenStack. • The OpenShift SR-IOV Operator was originally designed to work only on bare-metal, as it assumed the creation of the VFs by controlling the PF. • When running OpenShift in a VM, the PF is owned by the underlying OpenStack’s hypervisor and is not present in the VM, so OpenShift should use the VFs created by OpenStack rather than creating them. • This was already supported by the SR-IOV device plugin, and now the SR-IOV Operator. egress: - type: Allow to: cidrSelector: <cidr> dnsName: <dns_name> ... Product Manager: Marc Curry
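The egress fragment above belongs to a per-namespace OVN EgressFirewall object; a hedged sketch of a complete example (group `k8s.ovn.org/v1`; the object is named `default`, and the namespace is illustrative):

```yaml
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default          # must be "default"; one object per namespace
  namespace: myproject
spec:
  egress:
  - type: Allow
    to:
      dnsName: www.example.com      # allow by DNS name
  - type: Allow
    to:
      cidrSelector: 203.0.113.0/24  # allow by CIDR
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0       # deny all other external hosts
```

Rules are evaluated in order, so the final deny-all catches anything the earlier Allow rules do not match.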
API Library
• An optional library, app-netutil, is available to assist a container application in gathering network information associated with a pod.
• OpenShift supports running a Remote Direct Memory Access (RDMA) or Data Plane Development Kit (DPDK) application in a pod with an SR-IOV Virtual Function (VF) attached, for throughput performance.
• This library is intended to ease an application's integration of SR-IOV VFs in DPDK mode.
• API methods implemented:
  ◦ GetCPUInfo()
  ◦ GetInterfaces()
  ◦ GetHugepages()
    ▪ Note: until a 4.8 enhancement, the feature gate for DownwardAPIHugePages must be enabled on Kubernetes 1.20 or greater.
• Example pod spec with a VF in DPDK mode and hugepages configured, below.
Product Manager: Marc Curry
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-app
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-dpdk-net
spec:
  containers:
  - name: testpmd
    image: <DPDK_image>
    securityContext:
      capabilities:
        add: ["IPC_LOCK"]
    volumeMounts:
    - mountPath: /dev/hugepages
      name: hugepage
    resources:
      limits:
        memory: "1Gi"
        cpu: "2"
        hugepages-1Gi: "4Gi"
      requests:
        memory: "1Gi"
        cpu: "2"
        hugepages-1Gi: "4Gi"
    command: ["sleep", "infinity"]
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages
OpenShift 4.8 “Public Service Announcement” for an upcoming change in OpenShift 4.8: • OpenShift 4.8 will update to HAProxy 2.2, which down-cases HTTP header names by default (for example, “Host: xyz.com ” is transformed to “host: xyz.com ”), as permitted by the HTTP protocol standard, and as required by HAProxy’s HTX feature for HTTP/2. • In OpenShift 4.7, for legacy applications that are sensitive to the capitalization of HTTP header names, the IngressController will have a new API field, spec.httpHeaders.headerNameCaseAdjustments , to accommodate these legacy applications until they can be fixed. • The new API will be backported to OpenShift 4.6, and allows the cluster administrator to specify rules for transforming the case of HTTP header names in HTTP/1 requests. • Cluster administrators and application developers need to be aware of the change and configure IngressControllers and Routes with this new configuration, if necessary, before upgrading to OpenShift 4.8. Product Manager: Marc Curry For more information about the change, including why it was made and how to specify Header name transformation rules, view the enhancement proposal.
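For a legacy HTTP/1 application that needs "Host" delivered with its original capitalization, the new field is set on the IngressController; a sketch of the documented shape (`spec.httpHeaders.headerNameCaseAdjustments` lists the header names in their desired case):

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  httpHeaders:
    headerNameCaseAdjustments:
    - Host   # rewrite "host" back to "Host" for legacy HTTP/1 clients
```

Per the docs, routes that need the adjustment also opt in with the `haproxy.router.openshift.io/h1-adjust-case: true` annotation.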
integration for encryption
• Data protection
  ◦ Multi-cluster block async replication (TP)
  ◦ Stretch cluster with arbiter
  ◦ Multi-cluster Metro DR - Dev Preview
• Flexible failure domain
• Local object caching for AI/ML
• Guided tours for better user experience
Out of the box support: Block, File, Object
Platforms: AWS, Azure, Bare metal, RHV (Tech Preview), VMware, Google Cloud (Tech Preview), IBM Z/Power, OSP (Tech Preview)
Deployment modes: disconnected and proxied environments
Product Manager: Duncan Hardie
Plugin
OpenShift 4.7 introduces a Virtual Routing and Forwarding (VRF) CNI plugin. This networking meta plugin adds the following functionality:
• Assign secondary network interfaces to different virtual routing spaces
• Virtual routing spaces are isolated from each other
• Interfaces belonging to different VRFs may have overlapping CIDRs
• Used in conjunction with another kernel-based CNI, typically MACVLAN or SR-IOV (netdevice)
(Diagram: pod with eth0 plus secondary interfaces net0-net3 assigned to vrf0 and vrf1)
Product Manager: Robert Love
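As a meta plugin, VRF is chained after another CNI in a NetworkAttachmentDefinition; a hedged sketch pairing it with MACVLAN (the `vrfname` key follows the upstream CNI vrf plugin; interface name, addresses, and VRF name are illustrative):

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-vrf
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "macvlan-vrf",
      "plugins": [
        {
          "type": "macvlan",
          "master": "eth1",
          "ipam": {
            "type": "static",
            "addresses": [ { "address": "192.168.10.5/24" } ]
          }
        },
        {
          "type": "vrf",
          "vrfname": "vrf0"
        }
      ]
    }
```

The vrf plugin moves the secondary interface created by macvlan into the `vrf0` routing space inside the pod.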
Cloud-native Network Functions Tests (CNF Tests)
The CNF Tests container image allows service providers to validate that their cluster has been provisioned and configured correctly, and is ready to run CNFs. The documentation resides here. It validates that the following additional performance-related functionality is configured and available on the cluster:
• Precision Time Protocol (PTP)
• Single-root input/output virtualization (SR-IOV)
• Stream Control Transmission Protocol (SCTP)
• Data Plane Development Kit (DPDK)
• Performance Addon Operator (PAO)
• Operating System Latency Measurements (oslat)
Product Manager: Robert Love
invalidates the OAuth token for the active session. You can use the following procedure to delete any OAuth tokens that are no longer needed. Deleting an OAuth access token logs out the user from all sessions that are using the token. List all tokens List all user OAuth access tokens: $ oc get useroauthaccesstokens Example output NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token1> openshift-challenging-client 2021-01-11T19:25:35Z 2021-01-12 19:25:35 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/implicit user:full <token2> openshift-browser-client 2021-01-11T19:27:06Z 2021-01-12 19:27:06 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/display user:full <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full Delete the OAuth Access Token: $ oc delete useroauthaccesstokens <token_name> Example output useroauthaccesstoken.oauth.openshift.io "<token_name>" deleted Product Manager: Anand Chandramohan