KCNA Linux Foundation Kubernetes and Cloud Native Associate Free Practice Exam Questions (2026 Updated)
Prepare effectively for your Linux Foundation KCNA Kubernetes and Cloud Native Associate certification with our extensive collection of free, high-quality practice questions. Each question is designed to mirror the actual exam format and objectives, complete with comprehensive answers and detailed explanations. Our materials are regularly updated for 2026, ensuring you have the most current resources to build confidence and succeed on your first attempt.
Which statement best describes the role of kubelet on a Kubernetes worker node?
kubelet manages the container runtime and ensures that all Pods scheduled to the node are running as expected.
kubelet configures networking rules on each node to handle traffic routing for Services in the cluster.
kubelet monitors cluster-wide resource usage and assigns Pods to the most suitable nodes for execution.
kubelet acts as the primary API component that stores and manages cluster state information.
The Answer Is: A
Explanation:
The kubelet is the primary node-level agent in Kubernetes and is responsible for ensuring that workloads assigned to a worker node are executed correctly. Its core function is to manage container execution on the node and ensure that all Pods scheduled to that node are running as expected, which makes option A the correct answer.
Once the Kubernetes scheduler assigns a Pod to a node, the kubelet on that node takes over responsibility for running the Pod. It continuously watches the API server for Pod specifications that target its node and then interacts with the container runtime (such as containerd or CRI-O) through the Container Runtime Interface (CRI). The kubelet starts, stops, and restarts containers to match the desired state defined in the Pod specification.
In addition to lifecycle management, the kubelet performs ongoing health monitoring. It executes liveness, readiness, and startup probes, reports Pod and node status back to the API server, and enforces resource limits defined in the Pod specification. If a container crashes or becomes unhealthy, the kubelet initiates recovery actions such as restarting the container.
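As an illustration (a minimal sketch with hypothetical names and an assumed example image), the probes and resource limits that the kubelet enforces are declared directly in the Pod specification:

apiVersion: v1
kind: Pod
metadata:
  name: web                       # hypothetical Pod name
spec:
  containers:
  - name: app
    image: nginx:1.27             # assumed example image
    resources:
      limits:
        memory: "128Mi"           # the kubelet enforces this limit on its node
    livenessProbe:                # the kubelet runs this probe and restarts the container if it fails
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10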
Option B is incorrect because configuring Service traffic routing is the responsibility of kube-proxy and the cluster’s networking layer, not the kubelet. Option C is incorrect because cluster-wide resource monitoring and Pod placement decisions are handled by the kube-scheduler. Option D is incorrect because cluster state is managed by the API server and stored in etcd, not by the kubelet.
In summary, the kubelet acts as the executor and supervisor of Pods on each worker node. It bridges the Kubernetes control plane and the actual runtime environment, ensuring that containers are running, healthy, and aligned with the declared configuration. Therefore, Option A is the correct and verified answer.
=========
Which key-value store is used to persist Kubernetes cluster data?
etcd
ZooKeeper
ControlPlaneStore
Redis
The Answer Is: A
Explanation:
Kubernetes stores its cluster state (API objects) in etcd, making A correct. etcd is a distributed, strongly consistent key-value store that serves as the source of truth for the Kubernetes control plane. When you create or update objects such as Pods, Deployments, ConfigMaps, Secrets, or Nodes, the kube-apiserver validates the request and then persists the desired state into etcd. Controllers and the scheduler watch the API for changes (which ultimately reflect etcd state) and reconcile the cluster to match that desired state.
etcd’s consistency guarantees are crucial. Kubernetes relies on accurate, up-to-date state to make scheduling decisions, enforce RBAC/admission policies, coordinate leader elections, and ensure controllers behave correctly. etcd uses the Raft consensus algorithm to replicate data among members and requires quorum for writes, enabling fault tolerance when deployed in HA configurations (commonly three or five members).
The other options are incorrect in Kubernetes’ standard architecture. ZooKeeper is a distributed coordination system used by some other platforms, but Kubernetes does not use it as its primary datastore. Redis is an in-memory data store used for caching or messaging, not as Kubernetes’ authoritative state store. “ControlPlaneStore” is not a standard Kubernetes component.
Operationally, etcd health is one of the most important determinants of cluster reliability. Slow disk I/O or unstable networking can degrade etcd performance and cause API latency spikes. Backup and restore procedures for etcd are critical disaster-recovery practices, and securing etcd (TLS, access restrictions) is essential because it may contain sensitive data (e.g., Secrets—often base64-encoded, and optionally encrypted at rest depending on configuration).
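For example, a point-in-time backup can be taken with etcdctl; the sketch below assumes a typical kubeadm certificate layout, and the exact paths vary between clusters:

ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key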
Therefore, the verified Kubernetes datastore is etcd, option A.
=========
What is a Service?
A static network mapping from a Pod to a port.
A way to expose an application running on a set of Pods.
The network configuration for a group of Pods.
An NGINX load balancer that gets deployed for an application.
The Answer Is: B
Explanation:
The correct answer is B: a Kubernetes Service is a stable way to expose an application running on a set of Pods. Pods are ephemeral—IPs can change when Pods are recreated, rescheduled, or scaled. A Service provides a consistent network identity (DNS name and usually a ClusterIP virtual IP) and a policy for routing traffic to the current healthy backends.
Typically, a Service uses a label selector to determine which Pods are part of the backend set. Kubernetes then maintains the corresponding endpoint data (Endpoints/EndpointSlice), and the cluster dataplane (kube-proxy or an eBPF-based implementation) forwards traffic from the Service IP/port to one of the Pod IPs. This enables reliable service discovery and load distribution across replicas, especially during rolling updates where Pods are constantly replaced.
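A minimal Service manifest (hypothetical names) shows the selector-based model described above:

apiVersion: v1
kind: Service
metadata:
  name: backend                # clients reach the app via this stable name (and its ClusterIP)
spec:
  selector:
    app: backend               # Pods carrying this label become the backend endpoints
  ports:
  - port: 80                   # port exposed by the Service
    targetPort: 8080           # port the containers actually listen on

As Pods matching app: backend come and go, the endpoint data is updated automatically while the Service name and IP stay constant.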
Option A is incorrect because Service routing is not a “static mapping from a Pod to a port.” It’s dynamic and targets a set of Pods. Option C is too vague and misstates the concept; while Services relate to networking, they are not “the network configuration for a group of Pods” (that’s closer to NetworkPolicy/CNI configuration). Option D is incorrect because Kubernetes does not automatically deploy an NGINX load balancer when you create a Service. NGINX might be used as an Ingress controller or external load balancer in some setups, but a Service is a Kubernetes API abstraction, not a specific NGINX component.
Services come in several types (ClusterIP, NodePort, LoadBalancer, ExternalName), but the core definition remains the same: stable access to a dynamic set of Pods. This is foundational for microservices and for decoupling clients from the churn of Pod lifecycles.
So, the verified correct definition is B.
=========
Why is Cloud-Native Architecture important?
Cloud Native Architecture revolves around containers, microservices and pipelines.
Cloud Native Architecture removes constraints to rapid innovation.
Cloud Native Architecture is modern for application deployment and pipelines.
Cloud Native Architecture is a bleeding edge technology and service.
The Answer Is: B
Explanation:
Cloud-native architecture is important because it enables organizations to build and run software in a way that supports rapid innovation while maintaining reliability, scalability, and efficient operations. Option B best captures this: cloud native removes constraints to rapid innovation, so B is correct.
In traditional environments, innovation is slowed by heavyweight release processes, tightly coupled systems, manual operations, and limited elasticity. Cloud-native approaches—containers, declarative APIs, automation, and microservices-friendly patterns—reduce those constraints. Kubernetes exemplifies this by offering a consistent deployment model, self-healing, automated rollouts, scaling primitives, and a large ecosystem of delivery and observability tools. This makes it easier to ship changes more frequently and safely: teams can iterate quickly, roll back confidently, and standardize operations across environments.
Option A is partly descriptive (containers/microservices/pipelines are common in cloud native), but it doesn’t explain why it matters; it lists ingredients rather than the benefit. Option C is vague (“modern”) and again doesn’t capture the core value proposition. Option D is incorrect because cloud native is not primarily about being “bleeding edge”—it’s about proven practices that improve time-to-market and operational stability.
A good way to interpret “removes constraints” is: cloud native shifts the bottleneck away from infrastructure friction. With automation (IaC/GitOps), standardized runtime packaging (containers), and platform capabilities (Kubernetes controllers), teams spend less time on repetitive manual work and more time delivering features. Combined with observability and policy automation, this results in faster delivery with better reliability—exactly the reason cloud-native architecture is emphasized across the Kubernetes ecosystem.
=========
What is the difference between a Deployment and a ReplicaSet?
With a Deployment, you can’t control the number of pod replicas.
A ReplicaSet does not guarantee a stable set of replica pods running.
A Deployment is basically the same as a ReplicaSet with annotations.
A Deployment is a higher-level concept that manages ReplicaSets.
The Answer Is: D
Explanation:
A Deployment is a higher-level controller that manages ReplicaSets and provides rollout/rollback behavior, so D is correct. A ReplicaSet’s primary job is to ensure that a specified number of Pod replicas are running at any time, based on a label selector and Pod template. It’s a fundamental “keep N Pods alive” controller.
Deployments build on that by managing the lifecycle of ReplicaSets over time. When you update a Deployment (for example, changing the container image tag or environment variables), Kubernetes creates a new ReplicaSet for the new Pod template and gradually shifts replicas from the old ReplicaSet to the new one according to the rollout strategy (RollingUpdate by default). Deployments also retain revision history, making it possible to roll back to a previous ReplicaSet if a rollout fails.
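A condensed Deployment sketch (hypothetical names, assumed example image) illustrates the relationship:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # the ReplicaSet created from this template keeps 3 Pods running
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate        # default: old and new ReplicaSets are scaled in steps
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: app
        image: nginx:1.27      # changing this tag creates a new ReplicaSet and triggers a rollout

If a rollout misbehaves, kubectl rollout undo deployment/web returns to the previous ReplicaSet's revision.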
Why the other options are incorrect:
A is false: Deployments absolutely control the number of replicas via spec.replicas and can also be controlled by HPA.
B is false: ReplicaSets do guarantee that a stable number of replicas is running (that is their core purpose).
C is false: a Deployment is not “a ReplicaSet with annotations.” It is a distinct API resource with additional controller logic for declarative updates, rollouts, and revision tracking.
Operationally, most teams create Deployments rather than ReplicaSets directly because Deployments are safer and more feature-complete for application delivery. ReplicaSets still appear in real clusters because Deployments create them automatically; you’ll commonly see multiple ReplicaSets during rollout transitions. Understanding the hierarchy is crucial for troubleshooting: if Pods aren’t behaving as expected, you often trace from Deployment → ReplicaSet → Pod, checking selectors, events, and rollout status.
So the key difference is: ReplicaSet maintains replica count; Deployment manages ReplicaSets and orchestrates updates. Therefore, D is the verified answer.
=========
What are the initial namespaces that Kubernetes starts with?
default, kube-system, kube-public, kube-node-lease
default, system, kube-public
kube-default, kube-system, kube-main, kube-node-lease
kube-default, system, kube-main, kube-primary
The Answer Is: A
Explanation:
Kubernetes creates a set of namespaces by default when a cluster is initialized. The standard initial namespaces are default, kube-system, kube-public, and kube-node-lease, making A correct.
default is the namespace where resources are created if you don’t specify another namespace. Many quick-start examples deploy here, though production environments typically use dedicated namespaces per app/team.
kube-system contains objects created and managed by Kubernetes system components (control plane add-ons, system Pods, controllers, DNS components, etc.). It’s a critical namespace, and access is typically restricted.
kube-public is readable by all users (including unauthenticated users in some configurations) and is intended for public cluster information, though it’s used sparingly in many environments.
kube-node-lease holds Lease objects used for node heartbeats. This improves scalability by reducing load on etcd compared to older heartbeat mechanisms and helps the control plane track node liveness efficiently.
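You can confirm this on a freshly bootstrapped cluster; ages and statuses vary by environment, but the listing typically shows exactly these four namespaces in Active status:

kubectl get namespaces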
The incorrect options contain non-standard namespace names like “system,” “kube-main,” or “kube-primary,” and “kube-default” is not a real default namespace. Kubernetes’ built-in namespace set is well-documented and consistent with typical cluster bootstraps.
Understanding these namespaces matters operationally: system workloads and controllers often live in kube-system, and many troubleshooting steps involve inspecting Pods and events there. Meanwhile, kube-node-lease is key to node health tracking, and default is the catch-all if you forget to specify -n.
So, the verified answer is A: default, kube-system, kube-public, kube-node-lease.
=========
What best describes cloud native service discovery?
It's a mechanism for applications and microservices to locate each other on a network.
It's a procedure for discovering a MAC address, associated with a given IP address.
It's used for automatically assigning IP addresses to devices connected to the network.
It's a protocol that turns human-readable domain names into IP addresses on the Internet.
The Answer Is: A
Explanation:
Cloud native service discovery is fundamentally about how services and microservices find and connect to each other reliably in a dynamic environment, so A is correct. In cloud native systems (especially Kubernetes), instances are ephemeral: Pods can be created, destroyed, rescheduled, and scaled at any time. Hardcoding IPs breaks quickly. Service discovery provides stable names and lookup mechanisms so that one component can locate another even as underlying endpoints change.
In Kubernetes, service discovery is commonly achieved through Services (stable virtual IP + DNS name) and cluster DNS (CoreDNS). A Service selects a group of Pods via labels, and Kubernetes maintains the set of endpoints behind that Service. Clients connect to the Service name (DNS) and Kubernetes routes traffic to the current healthy Pods. For some workloads, headless Services provide DNS records that map directly to Pod IPs for per-instance discovery.
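As a concrete sketch (hypothetical Service "backend" in a hypothetical namespace "shop", using the default cluster domain cluster.local), any Pod in the cluster can reach the Service by name:

curl http://backend.shop.svc.cluster.local:80    # fully qualified Service DNS name
curl http://backend                              # short form resolves from within the same namespace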
The other options describe different networking concepts: B is ARP (MAC discovery), C is DHCP (IP assignment), and D is DNS in a general internet sense. DNS is often used as a mechanism for service discovery, but cloud native service discovery is broader: it’s the overall mechanism enabling dynamic location of services, often implemented via DNS and/or environment variables and sometimes enhanced by service meshes.
So the best description remains A: a mechanism that allows applications and microservices to locate each other on a network in a dynamic environment.
=========
Kubernetes project work is carried out primarily by SIGs. What does SIG stand for?
Special Interest Group
Software Installation Guide
Support and Information Group
Strategy Implementation Group
The Answer Is: A
Explanation:
In Kubernetes governance and project structure, SIG stands for Special Interest Group, so A is correct. Kubernetes is a large open source project under the Cloud Native Computing Foundation (CNCF), and its work is organized into groups that focus on specific domains—such as networking, storage, node, scheduling, security, docs, testing, and many more. SIGs provide a scalable way to coordinate contributors, prioritize work, review design proposals (KEPs), triage issues, and manage releases in their area.
Each SIG typically has regular meetings, mailing lists, chat channels, and maintainers who guide the direction of that part of the project. For example, SIG Network focuses on Kubernetes networking architecture and components, SIG Storage on storage APIs and CSI integration, and SIG Scheduling on scheduler behavior and extensibility. This structure helps Kubernetes evolve while maintaining quality, review rigor, and community-driven decision making.
The other options are not part of Kubernetes project terminology. “Software Installation Guide” and the others might sound plausible, but they are not how Kubernetes defines SIGs.
Understanding SIGs matters operationally because many Kubernetes features and design changes originate from SIGs. When you read Kubernetes enhancement proposals, release notes, or documentation, you’ll often see SIG ownership and references. In short, SIGs are the primary organizational units for Kubernetes engineering and stewardship, and SIG = Special Interest Group.
=========
What happens with a regular Pod running in Kubernetes when a node fails?
A new Pod with the same UID is scheduled to another node after a while.
A new, near-identical Pod but with different UID is scheduled to another node.
By default, a Pod can only be scheduled to the same node when the node fails.
A new Pod is scheduled on a different node only if it is configured explicitly.
The Answer Is: B
Explanation:
B is correct: when a node fails, Kubernetes does not “move” the same Pod instance; instead, a new Pod object (new UID) is created to replace it—assuming the Pod is managed by a controller (Deployment/ReplicaSet, StatefulSet, etc.). A Pod is an API object with a unique identifier (UID) and is tightly associated with the node it’s scheduled to via spec.nodeName. If the node becomes unreachable, that original Pod cannot be restarted elsewhere because it was bound to that node.
Kubernetes’ high availability comes from controllers maintaining desired state. For example, a Deployment desires N replicas. If a node fails and the replicas on that node are lost, the controller will create replacement Pods, and the scheduler will place them onto healthy nodes. These replacement Pods will be “near-identical” in spec (same template), but they are still new instances with new UIDs and typically new IPs.
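You can observe the identity change yourself; the commands below use placeholder names and a hypothetical app=web label:

kubectl get pod <pod-name> -o jsonpath='{.metadata.uid}'    # record the UID before the failure
kubectl get pods -l app=web -o custom-columns=NAME:.metadata.name,UID:.metadata.uid    # after replacement, new names and new UIDs appear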
Why the other options are wrong:
A is incorrect because the UID does not remain the same—Kubernetes creates a new Pod object rather than reusing the old identity.
C is incorrect; pods are not restricted to the same node after failure. The whole point of orchestration is to reschedule elsewhere.
D is incorrect; rescheduling does not require special explicit configuration for typical controller-managed workloads. The controller behavior is standard. (If it’s a bare Pod without a controller, it will not be recreated automatically.)
This also ties to the difference between “regular Pod” vs controller-managed workloads: a standalone Pod is not self-healing by itself, while a Deployment/ReplicaSet provides that resilience. In typical production design, you run workloads under controllers specifically so node failure triggers replacement and restores replica count.
Therefore, the correct outcome is B.
=========
A Pod is stuck in the CrashLoopBackOff state. Which is the correct way to troubleshoot this issue?
Use kubectl exec
Use kubectl describe pod
Use kubectl get nodes to verify node capacity and then kubectl apply -f
Use kubectl top pod
The Answer Is: B
Explanation:
The CrashLoopBackOff state in Kubernetes indicates that a container inside a Pod is repeatedly starting, crashing, and then being restarted by the kubelet with increasing backoff delays. This is typically caused by application-level issues such as misconfiguration, missing environment variables, failed startup commands, application crashes, or incorrect container images. Proper troubleshooting focuses on identifying why the container is failing shortly after startup.
The most effective and recommended approach is to first run kubectl describe pod <pod-name>. The Events section and the container’s Last State (including the exit code and reason, such as OOMKilled or Error) usually reveal why the container keeps failing shortly after startup.
After reviewing the events, the next step is to inspect the container’s logs using kubectl logs <pod-name>, adding --previous to read output from the last crashed instance. The logs typically expose application-level errors such as missing environment variables, unreachable dependencies, or startup exceptions.
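A typical troubleshooting sequence (placeholder names) looks like this:

kubectl describe pod <pod-name> -n <namespace>     # check Events and the container's Last State / exit code
kubectl logs <pod-name> --previous                 # logs from the previous, crashed container instance
kubectl logs <pod-name> -c <container-name>        # select a specific container if the Pod has several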
Option A is incorrect because kubectl exec usually fails when the container is repeatedly crashing (there is no stable running process to exec into), and node-level files such as /var/log/kubelet.log are not accessible from inside the container anyway. Option C is incorrect because reapplying the Pod manifest does not address the underlying crash cause. Option D focuses on resource usage, which does not diagnose application startup failures.
Therefore, the correct and verified answer is Option B, which aligns with Kubernetes documentation and best practices for diagnosing CrashLoopBackOff conditions.
=========
What is the main purpose of the Ingress in Kubernetes?
Access HTTP and HTTPS services running in the cluster based on their IP address.
Access services different from HTTP or HTTPS running in the cluster based on their IP address.
Access services different from HTTP or HTTPS running in the cluster based on their path.
Access HTTP and HTTPS services running in the cluster based on their path.
The Answer Is: D
Explanation:
D is correct. Ingress is a Kubernetes API object that defines rules for external access to HTTP/HTTPS services in a cluster. The defining capability is Layer 7 routing—commonly host-based and path-based routing—so you can route requests like example.com/app1 to one Service and example.com/app2 to another. While the question mentions “based on their path,” that’s a classic and correct Ingress use case (and host routing is also common).
Ingress itself is only the specification of routing rules. An Ingress controller (e.g., NGINX Ingress Controller, HAProxy, Traefik, cloud-provider controllers) is what actually implements those rules by configuring a reverse proxy/load balancer. Ingress typically terminates TLS (HTTPS) and forwards traffic to internal Services, giving a more expressive alternative to exposing every service via NodePort/LoadBalancer.
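A minimal path-routing Ingress (hypothetical host and Service names) looks like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-routes
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: app1-svc        # requests to example.com/app1 go to this Service
            port:
              number: 80
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: app2-svc        # requests to example.com/app2 go to this Service
            port:
              number: 80

Note that an Ingress controller must be installed in the cluster for these rules to take effect.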
Why the other options are wrong:
A suggests routing by IP address; Ingress is fundamentally about HTTP(S) routing rules (host/path), not direct Service IP access.
B and C describe non-HTTP protocols; Ingress is specifically for HTTP/HTTPS. For TCP/UDP or other protocols, you generally use Services of type LoadBalancer/NodePort, Gateway API implementations, or controller-specific TCP/UDP configuration.
Ingress is a foundational building block for cloud-native application delivery because it centralizes edge routing, enables TLS management, and supports gradual adoption patterns (multiple services under one domain). Therefore, the main purpose described here matches D.
=========
What is the API that exposes resource metrics from the metrics-server?
custom.k8s.io
resources.k8s.io
metrics.k8s.io
cadvisor.k8s.io
The Answer Is: C
Explanation:
The correct answer is C: metrics.k8s.io. Kubernetes’ metrics-server is the standard component that provides resource metrics (primarily CPU and memory) for nodes and pods. It aggregates this information (sourced from kubelet/cAdvisor) and serves it through the Kubernetes aggregated API under the group metrics.k8s.io. This is what enables commands like kubectl top nodes and kubectl top pods, and it is also a key data source for autoscaling with the Horizontal Pod Autoscaler (HPA) when scaling on CPU/memory utilization.
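With metrics-server installed, you can see both the convenience commands and the underlying aggregated API:

kubectl top nodes
kubectl top pods
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes    # query the metrics.k8s.io API directly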
Why the other options are wrong:
custom.k8s.io is not the standard API group for metrics-server resource metrics. Custom metrics are typically served through the custom metrics API (commonly custom.metrics.k8s.io) via adapters (e.g., Prometheus Adapter), not metrics-server.
resources.k8s.io is not the metrics-server API group.
cadvisor.k8s.io is not exposed as a Kubernetes aggregated metrics API. cAdvisor is a component integrated into kubelet that provides container stats, but metrics-server is the thing that exposes the aggregated Kubernetes metrics API, and the canonical group is metrics.k8s.io.
Operationally, it’s important to understand the boundary: metrics-server provides basic resource metrics suitable for core autoscaling and “top” views, but it is not a full observability system (it does not store long-term metrics history like Prometheus). For richer metrics (SLOs, application metrics, long-term trending), teams typically deploy Prometheus or a managed monitoring backend. Still, when the question asks specifically which API exposes metrics-server data, the answer is definitively metrics.k8s.io.
=========
What sentence is true about CronJobs in Kubernetes?
A CronJob creates one or multiple Jobs on a repeating schedule.
A CronJob creates one container on a repeating schedule.
CronJobs are useful on Linux but are obsolete in Kubernetes.
The CronJob schedule format is different in Kubernetes and Linux.
The Answer Is: A
Explanation:
The true statement is A: a Kubernetes CronJob creates Jobs on a repeating schedule. CronJob is a controller designed for time-based execution. You define a schedule using standard cron syntax (minute, hour, day-of-month, month, day-of-week), and when the schedule triggers, the CronJob controller creates a Job object. Then the Job controller creates one or more Pods to run the task to completion.
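A small CronJob sketch (hypothetical name, assumed example image and arguments) shows the schedule-to-Job relationship:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"            # standard cron syntax: every day at 02:00
  concurrencyPolicy: Forbid        # skip a run if the previous Job is still active
  successfulJobsHistoryLimit: 3
  jobTemplate:                     # each trigger creates a Job from this template
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: backup-tool:1.0     # assumed example image
            args: ["--run-backup"]     # hypothetical arguments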
Option B is incorrect because CronJobs do not “create one container”; they create Jobs, and Jobs create Pods (which may contain one or multiple containers). Option C is wrong because CronJobs are a core Kubernetes workload primitive for recurring tasks and remain widely used for periodic work like backups, batch processing, and cleanup. Option D is wrong because Kubernetes CronJobs intentionally use cron-like scheduling expressions; the format aligns with the cron concept (with Kubernetes-specific controller behavior around missed runs, concurrency, and history).
CronJobs also provide operational controls you don’t get from plain Linux cron on a node:
concurrencyPolicy (Allow/Forbid/Replace) to manage overlapping runs
startingDeadlineSeconds to control how missed schedules are handled
history limits for successful/failed Jobs to avoid clutter
integration with Kubernetes RBAC, Secrets, ConfigMaps, and volumes for consistent runtime configuration
consistent execution environment via container images, not ad-hoc node scripts
Because the CronJob creates Jobs as first-class API objects, you get observability (events/status), predictable retries, and lifecycle management. That’s why the accurate statement is A.
=========
What is a best practice to minimize the container image size?
Use a Dockerfile.
Use multistage builds.
Build images with different tags.
Add a build.sh script.
The Answer Is: B
Explanation:
A proven best practice for minimizing container image size is to use multi-stage builds, so B is correct. Multi-stage builds allow you to separate the “build environment” from the “runtime environment.” In the first stage, you can use a full-featured base image (with compilers, package managers, and build tools) to compile your application or assemble artifacts. In the final stage, you copy only the resulting binaries or necessary runtime assets into a much smaller base image (for example, a distroless image or a slim OS image). This dramatically reduces the final image size because it excludes compilers, caches, and build dependencies that are not needed at runtime.
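As a hedged sketch of the idea (assuming a Go application purely for illustration), the build toolchain lives only in the first stage and the final image carries just the compiled binary:

# Stage 1: full build environment
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

# Stage 2: minimal runtime image containing only the artifact
FROM gcr.io/distroless/static-debian12
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]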
In cloud-native application delivery, smaller images matter for several reasons. They pull faster, which speeds up deployments, rollouts, and scaling events (Pods become Ready sooner). They also reduce attack surface by removing unnecessary packages, which helps security posture and scanning results. Smaller images tend to be simpler and more reproducible, improving reliability across environments.
Option A is not a size-minimization practice: using a Dockerfile is simply the standard way to define how to build an image; it doesn’t inherently reduce size. Option C (different tags) changes image identification but not size. Option D (a build script) may help automation, but it doesn’t guarantee smaller images; the image contents are determined by what ends up in the layers.
Multi-stage builds are commonly paired with other best practices: choosing minimal base images, cleaning package caches, avoiding copying unnecessary files (use .dockerignore), and reducing layer churn. But among the options, the clearest and most directly correct technique is multi-stage builds.
Therefore, the verified answer is B.
=========
What native runtime is Open Container Initiative (OCI) compliant?
runC
runV
kata-containers
gvisor
The Answer Is: A
Explanation:
The Open Container Initiative (OCI) publishes open specifications for container images and container runtimes so that tools across the ecosystem remain interoperable. When a runtime is “OCI-compliant,” it means it implements the OCI Runtime Specification (how to run a container from a filesystem bundle and configuration) and/or works cleanly with OCI image formats through the usual layers (image → unpack → runtime). runC is the best-known, widely used reference implementation of the OCI runtime specification and is the low-level runtime underneath many higher-level systems.
In Kubernetes, you typically interact with a higher-level container runtime (such as containerd or CRI-O) through the Container Runtime Interface (CRI). That higher-level runtime then uses a low-level OCI runtime to actually create Linux namespaces/cgroups, set up the container process, and start it. In many default installations, containerd delegates to runC for this low-level “create/start” work.
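For context, runC can also be driven directly against an OCI filesystem bundle; this is a rough sketch rather than something you would normally do on a Kubernetes node:

mkdir -p mycontainer/rootfs        # an OCI bundle: a directory with a root filesystem plus config.json
cd mycontainer
runc spec                          # generates a default config.json per the OCI runtime spec
runc run mycontainer-id            # creates namespaces/cgroups and starts the container process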
The other options are related but differ in what they are: Kata Containers uses lightweight VMs to provide stronger isolation while still presenting a container-like workflow; gVisor provides a user-space kernel for sandboxing containers; these can be used with Kubernetes via compatible integrations, but the canonical “native OCI runtime” answer in most curricula is runC. Finally, “runV” is not a common modern Kubernetes runtime choice in typical OCI discussions. So the most correct, standards-based answer here is A (runC) because it directly implements the OCI runtime spec and is commonly used as the default low-level runtime behind CRI implementations.
=========
Which of the following best describes horizontally scaling an application deployment?
The act of adding/removing node instances to the cluster to meet demand.
The act of adding/removing applications to meet demand.
The act of adding/removing application instances of the same application to meet demand.
The act of adding/removing resources to application instances to meet demand.
The Answer Is: C
Explanation:
Horizontal scaling means changing how many instances of an application are running, not changing how big each instance is. Therefore, the best description is C: adding/removing application instances of the same application to meet demand. In Kubernetes, “instances” typically correspond to Pod replicas managed by a controller like a Deployment. When you scale horizontally, you increase or decrease the replica count, which increases or decreases total throughput and resilience by distributing load across more Pods.
Option A is about cluster/node scaling (adding or removing nodes), which is infrastructure scaling typically handled by a cluster autoscaler in cloud environments. Node scaling can enable more Pods to be scheduled, but it’s not the definition of horizontal application scaling itself. Option D describes vertical scaling—adding/removing CPU or memory resources to a given instance (Pod/container) by changing requests/limits or using VPA. Option B is vague and not the standard definition.
Horizontal scaling is a core cloud-native pattern because it improves availability and elasticity. If one Pod fails, other replicas continue serving traffic. In Kubernetes, scaling can be manual (kubectl scale deployment ... --replicas=N) or automatic using the Horizontal Pod Autoscaler (HPA). HPA adjusts replicas based on observed metrics like CPU utilization, memory, or custom/external metrics (for example, request rate or queue length). This creates responsive systems that can handle variable traffic.
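For example (hypothetical Deployment named web):

kubectl scale deployment web --replicas=5                              # manual horizontal scaling
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=60     # creates an HPA for the same workload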
From an architecture perspective, designing for horizontal scaling often means ensuring your application is stateless (or manages state externally), uses idempotent request handling, and supports multiple concurrent instances. Stateful workloads can also scale horizontally, but usually with additional constraints (StatefulSets, sharding, quorum membership, stable identity).
So the verified definition and correct choice is C.
=========
Which of the following characteristics is associated with container orchestration?
Application message distribution
Dynamic scheduling
Deploying application JAR files
Virtual machine distribution
The Answer Is: B
Explanation:
A core capability of container orchestration is dynamic scheduling, so B is correct. Orchestration platforms (like Kubernetes) are responsible for deciding where containers (packaged as Pods in Kubernetes) should run, based on real-time cluster conditions and declared requirements. “Dynamic” means the system makes placement decisions continuously as workloads are created, updated, or fail, and as cluster capacity changes.
In Kubernetes, the scheduler evaluates Pods that have no assigned node, filters nodes that don’t meet requirements (resources, taints/tolerations, affinity/anti-affinity, topology constraints), and then scores remaining nodes to pick the best target. This scheduling happens at runtime and adapts to the current state of the cluster. If nodes go down or Pods crash, controllers create replacements and the scheduler places them again—another aspect of dynamic orchestration.
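A sketch of a Pod spec (hypothetical names, labels, and taint values) shows the kinds of requirements the scheduler evaluates:

apiVersion: v1
kind: Pod
metadata:
  name: batch-task
spec:
  containers:
  - name: worker
    image: worker:1.0              # assumed example image
    resources:
      requests:
        cpu: "500m"                # nodes without this free capacity are filtered out
        memory: "256Mi"
  nodeSelector:
    disktype: ssd                  # hypothetical node label; only matching nodes remain candidates
  tolerations:
  - key: "dedicated"               # hypothetical taint this Pod is allowed to tolerate
    operator: "Equal"
    value: "batch"
    effect: "NoSchedule"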
The other options don’t define container orchestration: “application message distribution” is more about messaging systems or service communication patterns, not orchestration. “Deploying application JAR files” is a packaging/deployment detail relevant to Java apps but not a defining orchestration capability. “Virtual machine distribution” refers to VM management rather than container orchestration; Kubernetes focuses on containers and Pods (even if those containers sometimes run in lightweight VMs via sandbox runtimes).
So, the defining trait here is that an orchestrator automatically and continuously schedules and reschedules workloads, rather than relying on static placement decisions.
=========
Which mechanism can be used to automatically adjust the amount of resources for an application?
Horizontal Pod Autoscaler (HPA)
Kubernetes Event-driven Autoscaling (KEDA)
Cluster Autoscaler
Vertical Pod Autoscaler (VPA)
The Answer Is: A
Explanation:
The verified answer is A (HPA), which aligns with the common Kubernetes meaning of “adjusting resources for an application” by scaling replicas. The Horizontal Pod Autoscaler automatically changes the number of Pod replicas for a workload (typically a Deployment) based on observed metrics such as CPU utilization, memory (in some configurations), or custom/external metrics. By increasing replicas under load, the application gains more total CPU/memory capacity available across Pods; by decreasing replicas when load drops, it reduces resource consumption and cost.
It’s important to distinguish what each mechanism adjusts:
HPA adjusts replica count (horizontal scaling).
VPA adjusts Pod resource requests/limits (vertical scaling), which is literally “amount of CPU/memory per pod,” but it often requires restarts to apply changes depending on mode.
Cluster Autoscaler adjusts the number of nodes in the cluster, not application replicas.
KEDA is event-driven autoscaling that often drives HPA behavior using external event sources (queues, streams), but it’s not the primary built-in mechanism referenced in many foundational Kubernetes questions.
Given the wording and the provided answer key, the intended interpretation is: “automatically adjust the resources available to the application” by scaling out/in the number of replicas. That’s exactly HPA’s role. For example, if CPU utilization exceeds a target (say 60%), HPA computes a higher desired replica count and updates the workload. The Deployment then creates more Pods, distributing load and increasing available compute.
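That 60% target corresponds to an HPA manifest roughly like this (hypothetical names):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                      # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60     # add replicas when average CPU utilization exceeds 60%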
So, within this question set, the verified correct choice is A (Horizontal Pod Autoscaler).
=========
How do you perform a command in a running container of a Pod?
kubectl exec
docker exec
kubectl run
kubectl attach
The Answer Is: A
Explanation:
In Kubernetes, the standard way to execute a command inside a running container is kubectl exec, which is why A is correct. kubectl exec calls the Kubernetes API (API server), which then coordinates with the kubelet on the target node to run the requested command inside the container using the container runtime’s exec mechanism. The -- separator is important: it tells kubectl that everything after -- is the command to run in the container rather than flags for kubectl itself.
This is fundamentally different from docker exec. In Kubernetes, you don’t normally target containers through Docker/CRI tools directly because Kubernetes abstracts the runtime behind CRI. Also, “Docker” might not even be installed on nodes in modern clusters (containerd/CRI-O are common). So option B is not the Kubernetes-native approach and often won’t work.
kubectl run (option C) is for creating a new Pod (or generating workload resources), not for executing a command in an existing container. kubectl attach (option D) attaches your terminal to a running container’s process streams (stdin/stdout/stderr), which is useful for interactive sessions, but it does not execute an arbitrary new command like exec does.
In real usage, you often specify the container when a Pod has multiple containers: kubectl exec -it <pod-name> -c <container-name> -- <command>, for example kubectl exec -it web -c app -- sh to open a shell in the app container of the web Pod. Therefore, the verified answer is A (kubectl exec).
=========
What feature must a CNI support to control specific traffic flows for workloads running in Kubernetes?
Border Gateway Protocol
IP Address Management
Pod Security Policy
Network Policies
The Answer Is: D
Explanation:
To control which workloads can communicate with which other workloads in Kubernetes, you use NetworkPolicy resources—but enforcement depends on the cluster’s networking implementation. Therefore, for traffic-flow control, the CNI/plugin must support Network Policies, making D correct.
Kubernetes defines the NetworkPolicy API as a declarative way to specify allowed ingress and egress traffic based on selectors (Pod labels, namespaces, IP blocks) and ports/protocols. However, Kubernetes itself does not enforce NetworkPolicy rules; enforcement is provided by the network plugin (or associated dataplane components). If your CNI does not implement NetworkPolicy, the objects may exist in the API but have no effect—Pods will communicate freely by default.
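A small example (hypothetical namespace and labels): the policy below only allows Pods labeled app: frontend to reach Pods labeled app: api on TCP 8080, and it only has an effect if the cluster’s CNI enforces NetworkPolicy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: shop
spec:
  podSelector:
    matchLabels:
      app: api                     # the Pods this policy applies to
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend            # only these Pods may connect
    ports:
    - protocol: TCP
      port: 8080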
Option B (IP Address Management) is often part of CNI responsibilities, but IPAM is about assigning addresses, not enforcing L3/L4 security policy. Option A (BGP) is used by some CNIs to advertise routes (for example, in certain Calico deployments), but BGP is not the general requirement for policy enforcement. Option C (Pod Security Policy) is a deprecated/removed Kubernetes admission feature related to Pod security settings, not network flow control.
From a Kubernetes security standpoint, NetworkPolicies are a key tool for implementing least privilege at the network layer—limiting lateral movement, reducing blast radius, and segmenting environments. But they only work when the chosen CNI supports them. Thus, the correct answer is D: Network Policies.
=========