KCNA Linux Foundation Kubernetes and Cloud Native Associate Free Practice Exam Questions (2026 Updated)
Prepare effectively for your Linux Foundation KCNA Kubernetes and Cloud Native Associate certification with our extensive collection of free, high-quality practice questions. Each question is designed to mirror the actual exam format and objectives, complete with comprehensive answers and detailed explanations. Our materials are regularly updated for 2026, ensuring you have the most current resources to build confidence and succeed on your first attempt.
The manual reclamation policy of a PV resource is known as:
claimRef
Delete
Retain
Recycle
The Answer Is:
C
Explanation:
The correct answer is C: Retain. In Kubernetes persistent storage, a PersistentVolume (PV) has a persistentVolumeReclaimPolicy that determines what happens to the underlying storage asset after its PersistentVolumeClaim (PVC) is deleted. The reclaim policy options historically include Delete and Retain (and Recycle, which is deprecated/removed in many modern contexts). “Manual reclamation” refers to the administrator having to manually clean up and/or rebind the storage after the claim is released—this behavior corresponds to Retain.
With Retain, when the PVC is deleted, the PV moves to a “Released” state, but the actual storage resource (cloud disk, NFS path, etc.) is not deleted automatically. Kubernetes will not automatically make that PV available for a new claim until an administrator takes action—typically cleaning the data, removing the old claim reference, and/or creating a new PV/PVC binding flow. This is important for data safety: you don’t want to automatically delete sensitive or valuable data just because a claim was removed.
By contrast, Delete means Kubernetes (via the storage provisioner/CSI driver) will delete the underlying storage asset when the claim is deleted—useful for dynamic provisioning and disposable environments. Recycle used to scrub the volume contents and make it available again, but it’s not the recommended modern approach and has been phased out in favor of dynamic provisioning and explicit workflows.
So, the policy that implies manual intervention and manual cleanup/reuse is Retain, which is option C.
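For reference, a minimal PersistentVolume sketch using the Retain policy might look like the following; the NFS server and path are hypothetical placeholders, not details from the question:

# Illustrative PV with manual (Retain) reclamation; server and path are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-retain-example
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # PV becomes Released, not deleted, when its PVC goes away
  nfs:
    server: nfs.example.internal
    path: /exports/data

After the bound PVC is deleted, this PV reports a Released status until an administrator cleans the data and clears the old claim reference, at which point it can be reused.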
=========
What does “continuous” mean in the context of CI/CD?
Frequent releases, manual processes, repeatable, fast processing
Periodic releases, manual processes, repeatable, automated processing
Frequent releases, automated processes, repeatable, fast processing
Periodic releases, automated processes, repeatable, automated processing
The Answer Is:
C
Explanation:
The correct answer is C: in CI/CD, “continuous” implies frequent releases, automation, repeatability, and fast feedback/processing. The intent is to reduce batch size and latency between code change and validation/deployment. Instead of integrating or releasing in large, risky chunks, teams integrate changes continually and rely on automation to validate and deliver them safely.
“Continuous” does not mean “periodic” (which eliminates B and D). It also does not mean “manual processes” (which eliminates A and B). Automation is core: build, test, security checks, and deployment steps are consistently executed by pipeline systems, producing reliable outcomes and auditability.
In practice, CI means every merge triggers automated builds and tests so the main branch stays in a healthy state. CD means those validated artifacts are promoted through environments with minimal manual steps, often including progressive delivery controls (canary, blue/green), automated rollbacks on health signal failures, and policy checks. Kubernetes works well with CI/CD because it is declarative and supports rollout primitives: Deployments, readiness probes, and rollback revision history enable safer continuous delivery when paired with pipeline automation.
Repeatability is a major part of “continuous.” The same pipeline should run the same way every time, producing consistent artifacts and deployments. This reduces “works on my machine” issues and shortens incident resolution because changes are traceable and reproducible. Fast processing and frequent releases also mean smaller diffs, easier debugging, and quicker customer value delivery.
So, the combination that accurately reflects “continuous” in CI/CD is frequent + automated + repeatable + fast, which is option C.
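As a hedged illustration of that automation, a minimal GitHub Actions-style pipeline could build and test on every push; the image name, registry, and test command are assumptions for the example, not part of the exam objective:

# Hypothetical CI sketch: every push to main triggers an automated build and test run.
name: ci
on:
  push:
    branches: [main]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build container image
        run: docker build -t registry.example.com/myapp:${{ github.sha }} .
      - name: Run tests inside the image
        run: docker run --rm registry.example.com/myapp:${{ github.sha }} npm test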
=========
Which of the following cloud native proxies is used for ingress/egress in a service mesh and can also serve as an application gateway?
Frontend proxy
Kube-proxy
Envoy proxy
Reverse proxy
The Answer Is:
C
Explanation:
Envoy Proxy is a high-performance, cloud-native proxy widely used for ingress and egress traffic management in service mesh architectures, and it can also function as an application gateway. It is the foundational data-plane component for popular service meshes such as Istio, Consul, and AWS App Mesh, making option C the correct answer.
In a service mesh, Envoy is typically deployed as a sidecar proxy alongside each application Pod. This allows Envoy to transparently intercept and manage all inbound and outbound traffic for the service. Through this model, Envoy enables advanced traffic management features such as load balancing, retries, timeouts, circuit breaking, mutual TLS, and fine-grained observability without requiring application code changes.
Envoy is also commonly used at the mesh boundary to handle ingress and egress traffic. When deployed as an ingress gateway, Envoy acts as the entry point for external traffic into the mesh, performing TLS termination, routing, authentication, and policy enforcement. As an egress gateway, it controls outbound traffic from the mesh to external services, enabling security controls and traffic visibility. These capabilities allow Envoy to serve effectively as an application gateway, not just an internal proxy.
Option A, “Frontend proxy,” is a generic term and not a specific cloud-native component. Option B, kube-proxy, is responsible for implementing Kubernetes Service networking rules at the node level and does not provide service mesh features or gateway functionality. Option D, “Reverse proxy,” is a general architectural pattern rather than a specific cloud-native proxy implementation.
Envoy’s extensibility, performance, and deep integration with Kubernetes and service mesh control planes make it the industry-standard proxy for modern cloud-native networking. Its ability to function both as a sidecar proxy and as a centralized ingress or egress gateway clearly establishes Envoy proxy as the correct and verified answer.
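As a hedged sketch of Envoy at the mesh edge, an Istio Gateway resource programs the Envoy-based ingress gateway Pods; the hostname is a placeholder and the example assumes Istio's default istio-ingressgateway installation:

# Illustrative Istio Gateway: configures Envoy ingress-gateway Pods to accept HTTP traffic.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: web-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway     # matches the Envoy-based ingress gateway Pods
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "shop.example.com"  # placeholder hostname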
=========
Which of the following statements is correct concerning Open Policy Agent (OPA)?
The policies must be written in Python language.
Kubernetes can use it to validate requests and apply policies.
Policies can only be tested when published.
It cannot be used outside Kubernetes.
The Answer Is:
B
Explanation:
Open Policy Agent (OPA) is a general-purpose policy engine used to define and enforce policy across different systems. In Kubernetes, OPA is commonly integrated through admission control (often via Gatekeeper or custom admission webhooks) to validate and/or mutate requests before they are persisted in the cluster. This makes B correct: Kubernetes can use OPA to validate API requests and apply policy decisions.
Kubernetes’ admission chain is where policy enforcement naturally fits. When a user or controller submits a request (for example, to create a Pod), the API server can call external admission webhooks. Those webhooks can evaluate the request against policy—such as “no privileged containers,” “images must come from approved registries,” “labels must include cost-center,” or “Ingress must enforce TLS.” OPA’s policy language (Rego) allows expressing these rules in a declarative form, and the decision (“allow/deny” and sometimes patches) is returned to the API server. This enforces governance consistently and centrally.
Option A is incorrect because OPA policies are written in Rego, not Python. Option C is incorrect because policies can be tested locally and in CI pipelines before deployment; in fact, testability is a key advantage. Option D is incorrect because OPA is designed to be platform-agnostic—it can be used with APIs, microservices, CI/CD pipelines, service meshes, and infrastructure tools, not only Kubernetes.
From a Kubernetes fundamentals view, OPA complements RBAC: RBAC answers “who can do what to which resources,” while OPA-style admission policies answer “even if you can create this resource, does it meet our organizational rules?” Together they help implement defense in depth: authentication + authorization + policy admission + runtime security controls. That is why OPA is widely used to enforce security and compliance requirements in Kubernetes environments.
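To make admission policy concrete, here is a sketch of an OPA Gatekeeper constraint; it assumes the stock K8sRequiredLabels ConstraintTemplate (which carries the Rego) has already been installed, which is not stated in the question:

# Hypothetical Gatekeeper constraint: reject Deployments missing a cost-center label.
# Assumes the K8sRequiredLabels ConstraintTemplate is installed separately.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: require-cost-center
spec:
  match:
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment"]
  parameters:
    labels: ["cost-center"]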
=========
Which of the following workloads requires a headless Service when deployed into a namespace?
StatefulSet
CronJob
Deployment
DaemonSet
The Answer Is:
A
Explanation:
A StatefulSet commonly requires a headless Service, so A is the correct answer. In Kubernetes, StatefulSets are designed for workloads that need stable identities, stable network names, and often stable storage per replica. To support that stable identity model, Kubernetes typically uses a headless Service (spec.clusterIP: None) to provide DNS records that map directly to each Pod, rather than load-balancing behind a single virtual ClusterIP.
With a headless Service, DNS queries return individual endpoint records (the Pod IPs) so that each StatefulSet Pod can be addressed predictably, such as pod-0.service-name.namespace.svc.cluster.local. This is critical for clustered databases, quorum systems, and leader/follower setups where members must discover and address specific peers. The StatefulSet controller then ensures ordered creation/deletion and preserves identity (pod-0, pod-1, etc.), while the headless Service provides discovery for those stable hostnames.
CronJobs run periodic Jobs and don’t require stable DNS identity for multiple replicas. Deployments manage stateless replicas and normally use a standard Service that load-balances across Pods. DaemonSets run one Pod per node, and while they can be exposed by Services, they do not intrinsically require headless discovery.
So while you can use a headless Service for other designs, StatefulSet is the workload type most associated with “requires a headless Service” due to how stable identities and per-Pod addressing work in Kubernetes.
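A minimal StatefulSet sketch, with hypothetical names and image, shows how serviceName ties the workload to a headless Service (a Service named db with clusterIP: None, defined separately):

# Hypothetical StatefulSet: serviceName must reference an existing headless Service.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db            # headless Service (clusterIP: None) providing per-Pod DNS
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:16   # placeholder image
          ports:
            - containerPort: 5432

Each replica then receives a stable DNS name such as db-0.db.<namespace>.svc.cluster.local.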
=========
In Kubernetes, what is the primary purpose of creating a Service resource for a Deployment?
To centrally manage and apply runtime configuration values for application components.
To provide a stable endpoint for accessing Pods even when their IP addresses change.
To automatically adjust the number of Pods based on CPU or memory utilization metrics.
To define and attach persistent volumes that store application data across Pod restarts.
The Answer Is:
B
Explanation:
In Kubernetes, Pods are inherently ephemeral. They can be created, destroyed, restarted, or rescheduled at any time, and each time this happens, a Pod may receive a new IP address. This dynamic behavior is essential for resilience and scalability, but it also creates a challenge for reliably accessing application workloads. The Service resource addresses this problem by providing a stable network endpoint for a group of Pods, making option B the correct answer.
A Service selects Pods using label selectors—typically the same labels applied by a Deployment—and exposes them through a consistent virtual IP address (ClusterIP) and DNS name. Regardless of how many Pods are running or whether individual Pods are replaced, the Service remains stable and automatically routes traffic to healthy Pods. This abstraction allows clients to communicate with an application without needing to track individual Pod IPs.
Deployments are responsible for managing the lifecycle of Pods, including scaling, rolling updates, and self-healing. However, Deployments do not provide networking or service discovery capabilities. Without a Service, consumers would need to directly reference Pod IPs, which would break as soon as Pods are rescheduled or updated.
Option A is incorrect because centralized configuration management is handled using ConfigMaps and Secrets, not Services. Option C is incorrect because automatic scaling based on CPU or memory is the responsibility of the Horizontal Pod Autoscaler (HPA), not Services. Option D is incorrect because persistent storage is managed using PersistentVolume and PersistentVolumeClaim resources, which are unrelated to Services.
Services can be configured for different access patterns, such as ClusterIP for internal communication, NodePort or LoadBalancer for external access, and headless Services for direct Pod discovery. Despite these variations, their core purpose remains the same: providing a reliable and stable way to access Pods managed by a Deployment.
Therefore, the correct and verified answer is Option B, which aligns with Kubernetes networking fundamentals and official documentation.
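As a concrete sketch (names and ports are placeholders), a ClusterIP Service selects the Deployment's Pods by label and gives clients one stable address:

# Hypothetical Service fronting Pods labeled app: web, created by a Deployment.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP
  selector:
    app: web            # must match the Deployment's Pod template labels
  ports:
    - port: 80          # stable port clients connect to
      targetPort: 8080  # container port on the Pods

In-cluster clients can then reach the application at web.<namespace>.svc.cluster.local no matter how often the underlying Pods are replaced.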
=========
A platform engineer wants to ensure that a new microservice is automatically deployed to every cluster registered in Argo CD. Which configuration best achieves this goal?
Set up a Kubernetes CronJob that redeploys the microservice to all registered clusters on a schedule.
Manually configure every registered cluster with the deployment YAML for installing the microservice.
Create an Argo CD ApplicationSet that uses a Git repository containing the microservice manifests.
Use a Helm chart to package the microservice and manage it with a single Application defined in Argo CD.
The Answer Is:
C
Explanation:
Argo CD is a declarative GitOps continuous delivery tool designed to manage Kubernetes applications across one or many clusters. When the requirement is to automatically deploy a microservice to every cluster registered in Argo CD, the most appropriate and scalable solution is to use an ApplicationSet.
The ApplicationSet controller extends Argo CD by enabling the dynamic generation of multiple Argo CD Applications from a single template. One of its most powerful features is the cluster generator, which automatically discovers all clusters registered with Argo CD and creates an Application for each of them. By combining this generator with a Git repository containing the microservice manifests, the platform engineer ensures that the microservice is consistently deployed to all existing clusters—and any new clusters added in the future—without manual intervention.
This approach aligns perfectly with GitOps principles. The desired state of the microservice is defined once in Git, and Argo CD continuously reconciles that state across all target clusters. Any updates to the microservice manifests are automatically rolled out everywhere in a controlled and auditable manner. This provides strong guarantees around consistency, scalability, and operational simplicity.
Option A is incorrect because a CronJob introduces imperative redeployment logic and does not integrate with Argo CD’s reconciliation model. Option B is not scalable or maintainable, as it requires manual configuration for each cluster and increases the risk of configuration drift. Option D, while useful for packaging applications, still results in a single Application object and does not natively handle multi-cluster fan-out by itself.
Therefore, the correct and verified answer is Option C: creating an Argo CD ApplicationSet backed by a Git repository, which is the recommended and documented solution for multi-cluster application delivery in Argo CD.
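A minimal ApplicationSet sketch using the cluster generator might look like this; the repository URL, path, and namespace are hypothetical placeholders:

# Hypothetical ApplicationSet: generates one Application per cluster registered in Argo CD.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: microservice
  namespace: argocd
spec:
  generators:
    - clusters: {}               # enumerates every registered cluster
  template:
    metadata:
      name: 'microservice-{{name}}'
    spec:
      project: default
      source:
        repoURL: https://git.example.com/platform/microservice.git   # placeholder repo
        targetRevision: main
        path: manifests
      destination:
        server: '{{server}}'     # the generated cluster's API endpoint
        namespace: microservice
      syncPolicy:
        automated:
          prune: true
          selfHeal: true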
=========
At which layer would distributed tracing be implemented in a cloud native deployment?
Network
Application
Database
Infrastructure
The Answer Is:
B
Explanation:
Distributed tracing is implemented primarily at the application layer, so B is correct. The reason is simple: tracing is about capturing the end-to-end path of a request as it traverses services, libraries, queues, and databases. That “request context” (trace ID, span ID, baggage) must be created, propagated, and enriched as code executes. While infrastructure components (proxies, gateways, service meshes) can generate or augment trace spans, the fundamental unit of tracing is still tied to application operations (an HTTP handler, a gRPC call, a database query, a cache lookup).
In Kubernetes-based microservices, distributed tracing typically uses standards like OpenTelemetry for instrumentation and context propagation. Application frameworks emit spans for key operations, attach attributes (route, status code, tenant, retry count), and propagate context via headers (e.g., W3C Trace Context). This is what lets you reconstruct “Service A → Service B → Service C” for one user request and identify the slow or failing hop.
Why the other layers are not the best answer:
Network focuses on packets/flows, but tracing is not a packet-capture problem; it’s a causal request-path problem across services.
Database spans are part of traces, but tracing is not “implemented in the database layer” overall; DB spans are one component.
Infrastructure provides the platform and can observe traffic, but without application context it can’t fully represent business operations (and many useful attributes live in app code).
So the correct layer for “where tracing is implemented” is the application layer—even when a mesh or proxy helps, it’s still describing application request execution across components.
=========
How do you create a headless Service?
By specifying .spec.clusterIP: headless
By specifying .spec.clusterIP: None
By specifying .spec.clusterIP: 0.0.0.0
By specifying .spec.clusterIP: localhost
The Answer Is:
B
Explanation:
A headless Service is created by setting spec.clusterIP: None, so B is correct. Normally, a Service gets a ClusterIP, and kube-proxy (or an alternative dataplane) implements virtual-IP-based load balancing to route traffic from that ClusterIP to the backend Pods. A headless Service intentionally disables that virtual IP allocation. Instead of giving you a single stable VIP, Kubernetes publishes DNS records that resolve directly to the endpoints (the Pod IPs) behind the Service.
This is especially important for workloads that need direct endpoint discovery or stable per-Pod identities, such as StatefulSets. With a headless Service, clients can discover all Pod IPs (or individual Pod DNS names in StatefulSet patterns) and implement their own selection, quorum, or leader/follower logic. Kubernetes DNS (CoreDNS) responds differently for headless Services: rather than returning a single ClusterIP, it returns multiple A/AAAA records (one per endpoint) or SRV records for named ports, enabling richer service discovery behavior.
The other options are invalid. “headless” is not a magic value for clusterIP; the API expects either an actual IP address assigned by the cluster or the special literal None. 0.0.0.0 and localhost are not valid ways to request headless semantics. Kubernetes uses None specifically to signal “do not allocate a ClusterIP.”
Operationally, headless Services are used to: (1) expose each backend instance individually, (2) support stateful clustering and stable DNS names, and (3) avoid load balancing when the application or client library must choose endpoints itself. The key is that the Service still provides a stable DNS name, but the resolution yields endpoints, not a VIP.
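A minimal headless Service sketch (names and port are placeholders) looks like this:

# Hypothetical headless Service: clusterIP: None disables the virtual IP.
apiVersion: v1
kind: Service
metadata:
  name: peers
spec:
  clusterIP: None
  selector:
    app: peer
  ports:
    - port: 7000

DNS lookups for peers.<namespace>.svc.cluster.local then return the individual Pod IPs rather than a single ClusterIP.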
=========
What can be used to create a job that will run at specified times/dates or on a repeating schedule?
Job
CalendarJob
BatchJob
CronJob
The Answer Is:
D
Explanation:
The correct answer is D: CronJob. A Kubernetes CronJob is specifically designed for creating Jobs on a schedule—either at specified times/dates (expressed via cron syntax) or on a repeating interval (hourly, daily, weekly). When the schedule triggers, the CronJob controller creates a Job, and the Job controller creates the Pods that execute the workload to completion.
Option A (Job) is not inherently scheduled. A Job runs when you create it, and it continues until it completes successfully or fails according to its retry/backoff behavior. If you want it to run periodically, you need something else to create the Job each time. CronJob is the built-in mechanism for that scheduling.
Options B and C are not standard Kubernetes workload objects. Kubernetes does not include “CalendarJob” or “BatchJob” as official API kinds. The scheduling primitive is CronJob.
CronJobs also include important operational controls: concurrency policies prevent overlapping runs, deadlines control missed schedules, and history limits manage old Job retention. This makes CronJobs more robust than ad-hoc scheduling approaches and keeps the workload lifecycle visible in the Kubernetes API (status/events/logs). It also means you can apply standard Kubernetes patterns: use a service account with least privilege, mount Secrets/ConfigMaps, run in specific namespaces, and manage resource requests/limits so that scheduled workloads don’t destabilize the cluster.
So the correct Kubernetes resource for scheduled and repeating job execution is CronJob (D).
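A minimal CronJob sketch (schedule, image, and name are placeholders) ties these pieces together:

# Hypothetical CronJob: creates a cleanup Job every day at 02:00.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-cleanup
spec:
  schedule: "0 2 * * *"
  concurrencyPolicy: Forbid          # skip a run if the previous Job is still active
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: cleanup
              image: registry.example.com/cleanup:1.0   # placeholder image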
=========
What is the main purpose of a DaemonSet?
A DaemonSet ensures that all (or certain) nodes run a copy of a Pod.
A DaemonSet ensures that the kubelet is constantly up and running.
A DaemonSet ensures that there are as many pods running as specified in the replicas field.
A DaemonSet ensures that a process (agent) runs on every node.
The Answer Is:
A
Explanation:
The correct answer is A. A DaemonSet is a workload controller whose job is to ensure that a specific Pod runs on all nodes (or on a selected subset of nodes) in the cluster. This is fundamentally different from Deployments/ReplicaSets, which aim to maintain a certain replica count regardless of node count. With a DaemonSet, the number of Pods is implicitly tied to the number of eligible nodes: add a node, and the DaemonSet automatically schedules a Pod there; remove a node, and its Pod goes away.
DaemonSets are commonly used for node-level services and background agents: log collectors, node monitoring agents, storage daemons, CNI components, or security agents—anything where you want a presence on each node to interact with node resources. This aligns with option D’s phrasing (“agent on every node”), but option A is the canonical definition and is slightly broader because it covers “all or certain nodes” (via node selectors/affinity/taints-tolerations) and the fact that the unit is a Pod.
Why the other options are wrong: DaemonSets do not “keep kubelet running” (B); kubelet is a node service managed by the OS. DaemonSets do not use a replicas field to maintain a specific count (C); that’s Deployment/ReplicaSet behavior.
Operationally, DaemonSets matter for cluster operations because they provide consistent node coverage and automatically react to node pool scaling. They also require careful scheduling constraints so they land only where intended (e.g., only Linux nodes, only GPU nodes). But the main purpose remains: ensure a copy of a Pod runs on each relevant node—option A.
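A minimal DaemonSet sketch (names and image are placeholders) showing both the "all nodes" behavior and the scoping controls mentioned above:

# Hypothetical DaemonSet: a log-collection agent on every Linux node,
# including control-plane nodes via a toleration.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      nodeSelector:
        kubernetes.io/os: linux                       # limit to Linux nodes
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
      containers:
        - name: agent
          image: registry.example.com/log-agent:1.0   # placeholder image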
=========
Which of these events will cause the kube-scheduler to assign a Pod to a node?
When the Pod crashes because of an error.
When a new node is added to the Kubernetes cluster.
When the CPU load on the node becomes too high.
When a new Pod is created and has no assigned node.
The Answer Is:
D
Explanation:
The kube-scheduler assigns a node to a Pod when the Pod is unscheduled—meaning it exists in the API server but has no spec.nodeName set. The event that triggers scheduling is therefore: a new Pod is created and has no assigned node, which is option D.
Kubernetes scheduling is declarative and event-driven. The scheduler continuously watches for Pods that are in a “Pending” unscheduled state. When it sees one, it runs a scheduling cycle: filtering nodes that cannot run the Pod (insufficient resources based on requests, taints/tolerations, node selectors/affinity rules, topology spread constraints), then scoring the remaining feasible nodes to pick the best candidate. Once selected, the scheduler “binds” the Pod to that node by updating the Pod’s spec.nodeName. After that, kubelet on the chosen node takes over to pull images and start containers.
Option A (Pod crashes) does not directly cause scheduling. If a container crashes, kubelet may restart it on the same node according to restart policy. If the Pod itself is replaced (e.g., by a controller like a Deployment creating a new Pod), that new Pod will be scheduled because it’s unscheduled—but the crash event itself isn’t the scheduler’s trigger. Option B (new node added) might create more capacity and affect future scheduling decisions, but it does not by itself trigger assigning a particular Pod; scheduling still happens because there are unscheduled Pods. Option C (CPU load high) is not a scheduling trigger; scheduling is based on declared requests and constraints, not instantaneous node CPU load (that’s a common misconception).
So the correct, Kubernetes-architecture answer is D: kube-scheduler assigns nodes to Pods that are newly created (or otherwise pending) and have no assigned node.
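To illustrate what the filtering step actually evaluates, here is a hedged sketch of an unscheduled Pod; the label, image, and request values are placeholders:

# Hypothetical Pod awaiting scheduling: nodes are filtered on these declared
# constraints (requests, nodeSelector), not on live CPU load.
apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  nodeSelector:
    disktype: ssd               # only nodes carrying this label are feasible
  containers:
    - name: app
      image: registry.example.com/worker:1.0   # placeholder image
      resources:
        requests:
          cpu: "500m"
          memory: 256Mi

Once a feasible node is chosen, the scheduler binds the Pod by setting spec.nodeName, and the kubelet on that node starts the container.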
=========
What are the most important resources to guarantee the performance of an etcd cluster?
CPU and disk capacity.
Network throughput and disk I/O.
CPU and RAM memory.
Network throughput and CPU.
The Answer Is:
B
Explanation:
etcd is the strongly consistent key-value store backing Kubernetes cluster state. Its performance directly affects the entire control plane because most API operations require reads/writes to etcd. The most critical resources for etcd performance are disk I/O (especially latency) and network throughput/latency between etcd members and API servers—so B is correct.
etcd is write-ahead-log (WAL) based and relies heavily on stable, low-latency storage. Slow disks increase commit latency, which slows down object updates, watches, and controller loops. In busy clusters, poor disk performance can cause request backlogs and timeouts, showing up as slow kubectl operations and delayed controller reconciliation. That’s why production guidance commonly emphasizes fast SSD-backed storage and careful monitoring of fsync latency.
Network performance matters because etcd uses the Raft consensus protocol. Writes must be replicated to a quorum of members, and leader-follower communication is continuous. High network latency or low throughput can slow replication and increase the time to commit writes. Unreliable networking can also cause leader elections or cluster instability, further degrading performance and availability.
CPU and memory are still relevant, but they are usually not the first bottleneck compared to disk and network. CPU affects request processing and encryption overhead if enabled, while memory affects caching and compaction behavior. Disk “capacity” alone (size) is less relevant than disk I/O characteristics (latency, IOPS), because etcd performance is sensitive to fsync and write latency.
In Kubernetes operations, ensuring etcd health includes: using dedicated fast disks, keeping network stable, enabling regular compaction/defragmentation strategies where appropriate, sizing correctly (typically odd-numbered members for quorum), and monitoring key metrics (commit latency, fsync duration, leader changes). Because etcd is the persistence layer of the API, disk I/O and network quality are the primary determinants of control-plane responsiveness—hence B.
=========
What is a key feature of a container network?
Proxying REST requests across a set of containers.
Allowing containers running on separate hosts to communicate.
Allowing containers on the same host to communicate.
Caching remote disk access.
The Answer Is:
B
Explanation:
A defining requirement of container networking in orchestrated environments is enabling workloads to communicate across hosts, not just within a single machine. That’s why B is correct: a key feature of a container network is allowing containers (Pods) running on separate hosts to communicate.
In Kubernetes, this idea becomes the Kubernetes network model: every Pod gets its own IP address, and Pods must be able to communicate with Pods on other nodes without NAT. Achieving that across a cluster requires a networking layer (typically implemented by a CNI plugin) that can route traffic between nodes so that Pod-to-Pod communication works regardless of placement. This is crucial because schedulers dynamically place Pods; you cannot assume two communicating components will land on the same node.
Option C is true in a trivial sense—containers on the same host can communicate—but that capability alone is not the key feature that makes orchestration viable at scale. Cross-host connectivity is the harder and more essential property. Option A describes application-layer behavior (like API gateways or reverse proxies) rather than the foundational networking capability. Option D describes storage optimization, unrelated to container networking.
From a cloud native architecture perspective, reliable cross-host networking enables microservices patterns, service discovery, and distributed systems behavior. Kubernetes Services, DNS, and NetworkPolicies all depend on the underlying ability for Pods across the cluster to send traffic to each other. If your container network cannot provide cross-node routing and reachability, the cluster behaves like isolated islands and breaks the fundamental promise of orchestration: “schedule anywhere, communicate consistently.”
=========
Which Kubernetes component is the smallest deployable unit of computing?
StatefulSet
Deployment
Pod
Container
The Answer Is:
C
Explanation:
In Kubernetes, the Pod is the smallest deployable and schedulable unit, making C correct. Kubernetes does not schedule individual containers directly; instead, it schedules Pods, each of which encapsulates one or more containers that must run together on the same node. This design supports both single-container Pods (the most common) and multi-container Pods (for sidecars, adapters, and co-located helper processes).
Pods provide shared context: containers in a Pod share the same network namespace (one IP address and port space) and can share storage volumes. This enables tight coupling where needed—for example, a service mesh proxy sidecar and the application container communicate via localhost, or a log-forwarding sidecar reads logs from a shared volume. Kubernetes manages lifecycle at the Pod level: kubelet ensures the containers defined in the PodSpec are running and uses probes to determine readiness and liveness.
StatefulSet and Deployment are controllers that manage sets of Pods. A Deployment manages ReplicaSets for stateless workloads and provides rollout/rollback features; a StatefulSet provides stable identities, ordered operations, and stable storage for stateful replicas. These are higher-level constructs, not the smallest units.
Option D (“Container”) is smaller in an abstract sense, but it is not the smallest Kubernetes deployable unit because Kubernetes APIs and scheduling work at the Pod boundary. You don’t “kubectl apply” a container; you apply a Pod template within a Pod object (often via controllers).
Understanding Pods as the atomic unit is crucial: Services select Pods, autoscalers scale Pods (replica counts), and scheduling decisions are made per Pod. That’s why Kubernetes documentation consistently refers to Pods as the fundamental building block for running workloads.
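A minimal two-container Pod sketch (images are placeholders) showing the shared-volume pattern described above:

# Hypothetical Pod: the app writes logs to a shared emptyDir; a sidecar reads them.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
    - name: logs
      emptyDir: {}
  containers:
    - name: app
      image: registry.example.com/web:1.0           # placeholder image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-forwarder
      image: registry.example.com/forwarder:1.0     # placeholder image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true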
=========
In Kubernetes, what is the primary responsibility of the kubelet running on each worker node?
To allocate persistent storage volumes and manage distributed data replication for Pods.
To manage cluster state information and handle all scheduling decisions for workloads.
To ensure that containers defined in Pod specifications are running and remain healthy on the node.
To provide internal DNS resolution and route service traffic between Pods and nodes.
The Answer Is:
C
Explanation:
The kubelet is a critical Kubernetes component that runs on every worker node and acts as the primary execution agent for Pods. Its core responsibility is to ensure that the containers defined in Pod specifications are running and remain healthy on the node, making option C the correct answer.
Once the Kubernetes scheduler assigns a Pod to a specific node, the kubelet on that node becomes responsible for carrying out the desired state described in the Pod specification. It continuously watches the API server for Pods assigned to its node and communicates with the container runtime (such as containerd or CRI-O) to start, stop, and restart containers as needed. The kubelet does not make scheduling decisions; it simply executes them.
Health management is another key responsibility of the kubelet. It runs liveness, readiness, and startup probes as defined in the Pod specification. If a container fails a liveness probe, the kubelet restarts it. If a readiness probe fails, the kubelet marks the Pod as not ready, preventing traffic from being routed to it. The kubelet also reports detailed Pod and node status information back to the API server, enabling controllers to take corrective actions when necessary.
Option A is incorrect because persistent volume provisioning and data replication are handled by storage systems, CSI drivers, and controllers—not by the kubelet. Option B is incorrect because cluster state management and scheduling are responsibilities of control plane components such as the API server, controller manager, and kube-scheduler. Option D is incorrect because DNS resolution and service traffic routing are handled by components like CoreDNS and kube-proxy.
In summary, the kubelet serves as the node-level guardian of Kubernetes workloads. By ensuring containers are running exactly as specified and continuously reporting their health and status, the kubelet forms the essential bridge between Kubernetes’ declarative control plane and the actual execution of applications on worker nodes.
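A hedged Pod sketch (paths, port, and image are placeholders) showing the probes the kubelet evaluates:

# Hypothetical Pod: the kubelet restarts the container on liveness failure and
# marks the Pod NotReady on readiness failure.
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
    - name: api
      image: registry.example.com/api:1.0   # placeholder image
      ports:
        - containerPort: 8080
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /ready
          port: 8080
        periodSeconds: 5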
=========
Which of the following best describes vertical scaling of an application deployment?
Adding/removing applications to meet demand.
Adding/removing node instances to the cluster to meet demand.
Adding/removing resources to applications to meet demand.
Adding/removing application instances of the same application to meet demand.
The Answer Is:
C
Explanation:
Vertical scaling means changing the resources allocated to a single instance of an application (more or less CPU/memory), which is why C is correct. In Kubernetes terms, this corresponds to adjusting container resource requests and limits (for CPU and memory). Increasing resources can help a workload handle more load per Pod by giving it more compute or memory headroom; decreasing can reduce cost and improve cluster packing efficiency.
This differs from horizontal scaling, which changes the number of instances (replicas). Option D describes horizontal scaling: adding/removing replicas of the same workload, typically managed by a Deployment and often automated via the Horizontal Pod Autoscaler (HPA). Option B describes scaling the infrastructure layer (nodes) which is cluster/node autoscaling (Cluster Autoscaler in cloud environments). Option A is not a standard scaling definition.
In practice, vertical scaling in Kubernetes can be manual (edit the Deployment resource requests/limits) or automated using the Vertical Pod Autoscaler (VPA), which can recommend or apply new requests based on observed usage. A key nuance is that changing requests/limits often requires Pod restarts to take effect, so vertical scaling is less “instant” than HPA and can disrupt workloads if not planned. That’s why many production teams prefer horizontal scaling for traffic-driven workloads and use vertical scaling to right-size baseline resources or address memory-bound/cpu-bound behavior.
From a cloud-native architecture standpoint, understanding vertical vs horizontal scaling helps you design for elasticity: use vertical scaling to tune per-instance capacity; use horizontal scaling for resilience and throughput; and combine with node autoscaling to ensure the cluster has sufficient capacity. The definition the question is testing is simple: vertical scaling = change resources per application instance, which is option C.
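As a hedged illustration, vertical scaling of a hypothetical Deployment means editing the per-container resources while leaving the replica count alone:

# Hypothetical vertical scaling: raise per-Pod CPU/memory; replicas stay unchanged.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2                  # unchanged: changing this would be horizontal scaling
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0   # placeholder image
          resources:
            requests:
              cpu: "1"         # raised, e.g. from 500m
              memory: 1Gi      # raised, e.g. from 512Mi
            limits:
              cpu: "2"
              memory: 2Gi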
=========
What components are common in a service mesh?
Tracing and log storage
Circuit breaking and Pod scheduling
Data plane and runtime plane
Service proxy and control plane
The Answer Is:
D
Explanation:
A service mesh is an architectural pattern that manages service-to-service communication in a microservices environment by inserting a dedicated networking layer. The two most common building blocks you’ll see across service mesh implementations are (1) a data plane of proxies and (2) a control plane that configures and manages those proxies—this aligns best with “service proxy and control plane,” option D.
In practice, the data plane is usually implemented via sidecar proxies (or sometimes node/ambient proxies) that sit “next to” workloads and handle traffic functions such as mTLS encryption, retries, timeouts, load balancing policies, traffic splitting, and telemetry generation. These proxies can capture inbound and outbound traffic without requiring changes to application code, which is one of the defining benefits of a mesh.
The control plane provides the management layer: it distributes policy and configuration to the proxies (routing rules, security policies, identities/certificates), discovers services/endpoints, and often coordinates certificate rotation and workload identity. In Kubernetes environments, meshes typically integrate with the Kubernetes API for service discovery and configuration.
Option C is close in spirit but uses non-standard wording (“runtime plane” is not a typical service mesh term; “control plane” is). Options A and B describe capabilities that may exist in a mesh ecosystem (telemetry, circuit breaking), but they are not the universal “core components” across meshes. Tracing/log storage, for example, is usually handled by external observability backends (e.g., Jaeger, Tempo, Loki) rather than being intrinsic “mesh components.”
So, the most correct and broadly accepted answer is D: service proxy and control plane.
=========
What are the 3 pillars of Observability?
Metrics, Logs, and Traces
Metrics, Logs, and Spans
Metrics, Data, and Traces
Resources, Logs, and Tracing
The Answer Is:
A
Explanation:
The correct answer is A: Metrics, Logs, and Traces. These are widely recognized as the “three pillars” because together they provide complementary views into system behavior:
Metrics are numeric time series collected over time (CPU usage, request rate, error rate, latency percentiles). They are best for dashboards, alerting, and capacity planning because they are structured and aggregatable. In Kubernetes, metrics underpin autoscaling and operational visibility (node/pod resource usage, cluster health signals).
Logs are discrete event records (often text) emitted by applications and infrastructure components. Logs provide detailed context for debugging: error messages, stack traces, warnings, and business events. In Kubernetes, logs are commonly collected from container stdout/stderr and aggregated centrally for search and correlation.
Traces capture the end-to-end journey of a request through a distributed system, breaking it into spans. Tracing is crucial in microservices because a single user request may cross many services; traces show where latency accumulates and which dependency fails. Tracing also enables root cause analysis when metrics indicate degradation but don’t pinpoint the culprit.
Why the other options are wrong: a span is a component within tracing, not a top-level pillar; “data” is too generic; and “resources” are not an observability signal category. The pillars are defined by signal type and how they’re used operationally.
In cloud-native practice, these pillars are often unified via correlation IDs and shared context: metrics alerts link to logs and traces for the same timeframe/request. Tooling like Prometheus (metrics), log aggregators (e.g., Loki/Elastic), and tracing systems (Jaeger/Tempo/OpenTelemetry) work together to provide a complete observability story.
Therefore, the verified correct answer is A.
=========
Which Kubernetes resource workload ensures that all (or some) nodes run a copy of a Pod?
DaemonSet
StatefulSet
kubectl
Deployment
The Answer Is:
A
Explanation:
A DaemonSet is the workload controller that ensures a Pod runs on all nodes or on a selected subset of nodes, so A is correct. DaemonSets are used for node-level agents and infrastructure components that must be present everywhere—examples include log collectors, monitoring agents, storage daemons, CNI components, and node security tools.
The DaemonSet controller watches for node additions/removals. When a new node joins the cluster, Kubernetes automatically schedules a new DaemonSet Pod onto that node (subject to constraints such as node selectors, affinities, and taints/tolerations). When a node is removed, its DaemonSet Pod naturally disappears with it. This creates the “one per node” behavior that differentiates DaemonSets from other workload types.
A Deployment manages a replica count across the cluster, not “one per node.” A StatefulSet manages stable identity and ordered operations for stateful replicas; it does not inherently map one Pod to every node. kubectl is a CLI tool and not a workload resource.
DaemonSets can also be scoped: by using node selectors, node affinity, and tolerations, you can ensure Pods run only on GPU nodes, only on Linux nodes, only in certain zones, or only on nodes with a particular label. That’s why the question says “all (or some) nodes.”
Therefore, the correct and verified answer is DaemonSet (A).