Honestly, increasingly few, but you could still run into performance problems if you’re really pushing the limits in a few areas:
- Rapid container churn. If you’re creating pods that only last a few seconds, or creating / deleting several pods a second, you can start to hit issues, especially if those pods are mounting resources that don’t respond well to connection churn (e.g., NFS mounts).
(“That’s stupid,” you might say, “nobody should be creating pods that quickly.” I’d counter that other work-orchestrators like Apache YARN are built to handle very rapid container churn, and do it pretty well.)
- Intense, optimized IO: if you genuinely need local disk for IO-intensive operations (graphics editing or something), the default Kubernetes strategy of “packing your pod wherever it fits” could get you into trouble, since a bunch of pods may end up hitting the same disk at the same time.
Obviously you can fix this with node affinities and taints, but doing that work yourself does take away a bit of the magic.
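As a sketch of that workaround (all names here are hypothetical): taint the IO-heavy nodes so nothing else schedules onto them, then give the disk-hungry pods a matching toleration plus a node affinity so they land only there.

```yaml
# First, reserve the node (run once, outside the manifest):
#   kubectl taint nodes fast-disk-node-1 workload=io-heavy:NoSchedule
#   kubectl label nodes fast-disk-node-1 disk=fast
apiVersion: v1
kind: Pod
metadata:
  name: video-render            # hypothetical pod
spec:
  tolerations:                  # allows scheduling onto the tainted node
  - key: "workload"
    operator: "Equal"
    value: "io-heavy"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:               # forces scheduling onto the labeled node
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disk
            operator: In
            values: ["fast"]
  containers:
  - name: render
    image: my-render-image      # hypothetical image
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    hostPath:
      path: /mnt/fast-disk     # the local disk you're trying to protect
```

The toleration alone only *permits* the pod on the tainted node; it’s the affinity that actually pins it there, which is why you need both halves.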
- Intense / bursty network traffic: similar to the disk issue, pods will still step on each other’s toes if they’re trying to saturate a node’s network capacity at the same time.
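There’s a partial mitigation here too, assuming your cluster’s CNI setup includes the bandwidth plugin (not all do): per-pod traffic-shaping annotations that cap how much of the node’s NIC any one pod can eat.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bulk-uploader           # hypothetical pod
  annotations:
    # Honored only when the CNI bandwidth plugin is enabled;
    # caps this pod's traffic so it can't saturate the node's NIC.
    kubernetes.io/ingress-bandwidth: 100M
    kubernetes.io/egress-bandwidth: 100M
spec:
  containers:
  - name: uploader
    image: my-uploader-image    # hypothetical image
```

This is shaping, not scheduling: noisy neighbors still share the link, they just can’t monopolize it.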