without CPU requests or limits can effectively be scheduled on any suitable node, regardless of the amount of CPU left on that node. In practice, it will still receive some minimal CPU guarantee.
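As an illustration, a container declares CPU requests and limits in its manifest (a hypothetical example; the pod name and image are placeholders):

```yaml
# Hypothetical pod manifest illustrating CPU requests and limits.
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo            # placeholder name
spec:
  containers:
  - name: app
    image: nginx            # any image works here
    resources:
      requests:
        cpu: "250m"         # 0.25 cores: used by the scheduler to place the pod
      limits:
        cpu: "500m"         # 0.5 cores: the container is throttled above this
```

Omitting the `resources` block entirely reproduces the situation described above: the scheduler ignores CPU usage when placing the pod.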
Metrics to watch
• Number of nanocores requested. Type: gauge. High cardinality, including pod_name, container_name, container_id.
• Time throttled (nanoseconds). High cardinality, including pod_name, container_name, container_id.
• The number of periods during which the container was throttled. Type: gauge. High cardinality, including pod_name, container_name, container_id.
CPUManager policies
• none — processes can migrate from one CPU to another, as the kernel scheduler sees fit.
• static — uses Linux cpusets to give containers exclusive access to cores. Caveat: exclusive access only applies to containers (not system processes).
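As a sketch (field values are illustrative), the static policy is enabled in the kubelet configuration, and a container is only assigned exclusive cores when it is in the Guaranteed QoS class with an integer CPU request:

```yaml
# KubeletConfiguration fragment: switch the CPUManager to the static policy.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static
reservedSystemCPUs: "0"       # illustrative: keep core 0 for system processes
---
# A Guaranteed pod: requests equal limits, and the CPU value is an integer,
# so the container receives exclusive cores under the static policy.
apiVersion: v1
kind: Pod
metadata:
  name: pinned-demo           # placeholder name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "2"
        memory: "512Mi"
      limits:
        cpu: "2"
        memory: "512Mi"
```

Cores reserved this way are removed from the shared pool, which is the trade-off noted in the summary below: the rest of the workloads on the node have less CPU available.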
• CPU available to the container (in nanocores), computed as min(container limit, host capacity − static assignments). High cardinality, including pod_name, container_name, container_id.
Summary
• CPU requests are used to schedule pods on nodes. They will also be used to proportionally distribute CPU time in case of contention.
• CPU limits will affect your application's performance, but your container will keep running (not evicted); it will be throttled.
• CPU pinning can be helpful for certain applications, but take into account that it reduces the amount of CPU available to the rest of the applications on the same node.