
CCAAK Confluent Certified Administrator for Apache Kafka Free Practice Exam Questions (2025 Updated)

Prepare effectively for your Confluent CCAAK Confluent Certified Administrator for Apache Kafka certification with our extensive collection of free, high-quality practice questions. Each question is designed to mirror the actual exam format and objectives, complete with comprehensive answers and detailed explanations. Our materials are regularly updated for 2025, ensuring you have the most current resources to build confidence and succeed on your first attempt.

Page: 1 / 1
Total 54 questions

Why does Kafka use ZooKeeper? (Choose two.)

A. To access information about the leaders and partitions
B. To scale the number of brokers in the cluster
C. To prevent replication between clusters
D. For controller election

Where are Apache Kafka Access Control Lists stored?

A. Broker
B. ZooKeeper
C. Schema Registry
D. Connect

What is the primary purpose of Kafka quotas?

A. Throttle clients to prevent them from monopolizing Broker resources.
B. Guarantee faster response times for some clients.
C. Limit the number of clients that can connect to the Kafka cluster.
D. Limit the total number of Partitions in the Kafka cluster.

Which Simple Authentication and Security Layer (SASL) mechanisms does the Kafka broker support for authentication? (Choose three.)

A. SASL/PLAIN
B. SASL/SAML20
C. SASL/GSSAPI (Kerberos)
D. SASL/OAUTHBEARER
E. SASL/OTP

What are important factors in sizing a ksqlDB cluster? (Choose three.)

A. Data Schema
B. Number of Queries
C. Number of Partitions
D. Data Encryption
E. Topic Data Retention

Which ksqlDB statement produces data that is persisted into a Kafka topic?

A. SELECT (Pull Query)
B. SELECT (Push Query)
C. INSERT VALUES
D. CREATE TABLE

Which connector type takes data from a topic and sends it to an external data system?

A. Sink Connector
B. Source Connector
C. Streams Connector
D. Syslog Connector

Which tool is used for scalably and reliably streaming data between Kafka and other data systems?

A. Kafka Connect
B. Kafka Streams
C. Kafka Schema Registry
D. Kafka REST Proxy

An employee in the reporting department needs assistance because their data feed is slowing down. You start by quickly checking the consumer lag for the clients on the data stream.

Which command will allow you to quickly check for lag on the consumers?

A. bin/kafka-consumer-lag.sh
B. bin/kafka-consumer-groups.sh
C. bin/kafka-consumer-group-throughput.sh
D. bin/kafka-reassign-partitions.sh
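Lag for a partition is simply the log-end offset (last produced message) minus the consumer group's committed offset, which is what kafka-consumer-groups.sh --describe reports per partition. A minimal sketch of that arithmetic, using hypothetical offsets:

```python
# Sketch: how consumer lag is derived from the per-partition offsets
# reported by kafka-consumer-groups.sh --describe.
# All group names and offset values below are hypothetical.

def lag(log_end_offset: int, current_offset: int) -> int:
    """Lag = messages produced but not yet consumed."""
    return log_end_offset - current_offset

partitions = [
    {"partition": 0, "current-offset": 950, "log-end-offset": 1000},
    {"partition": 1, "current-offset": 1000, "log-end-offset": 1000},
]

total_lag = sum(
    lag(p["log-end-offset"], p["current-offset"]) for p in partitions
)
print(total_lag)  # 50
```

A steadily growing total lag across partitions is the usual sign that consumers cannot keep up with the producers, which matches the "data feed slowing down" symptom in the question.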

Your organization has a mission-critical Kafka cluster that must be highly available. A Disaster Recovery (DR) cluster has been set up using Replicator, and data is continuously replicated from the source cluster to the DR cluster. However, you notice that the message at offset 1002 on the source cluster does not match the message at offset 1002 on the destination DR cluster.

Which statement is correct?

A. The DR cluster is lagging behind updates; once the DR cluster catches up, the messages will match.
B. The message on the DR cluster got overwritten accidentally by another application.
C. The offsets for the same message on the source and destination clusters may not match.
D. The message was updated on the source cluster, but the update did not flow into the destination DR cluster and errored.

Which model does Kafka use for consumers?

A. Push
B. Publish
C. Pull
D. Enrollment

In certain scenarios, it is necessary to weigh the trade-off between latency and throughput. One method to increase throughput is to configure batching of messages.

In addition to batch.size, what other producer property can be used to accomplish this?

A. send.buffer.bytes
B. linger.ms
C. compression.type
D. delivery.timeout.ms
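The two properties work together: the producer sends a batch once it reaches batch.size bytes or once linger.ms has elapsed, whichever comes first. A hedged sketch of the tuning, written as a producer config dict (all values are illustrative, and no broker connection is made):

```python
# Illustrative producer tuning for throughput over latency.
# All values are examples only, not recommendations.
producer_config = {
    "bootstrap.servers": "localhost:9092",  # assumption: local broker
    "batch.size": 65536,   # accumulate up to 64 KiB per partition batch
    "linger.ms": 50,       # or wait up to 50 ms, whichever comes first
    "compression.type": "lz4",  # optional: shrink batches on the wire
}
print(producer_config["linger.ms"])  # 50
```

With linger.ms at its default of 0 the producer sends as soon as a send thread is available, so raising it is the usual lever for trading a little latency for larger, more efficient batches.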

What is the correct permission check sequence for Kafka ACLs?

A. Super Users → Deny ACL → Allow ACL → Deny
B. Allow ACL → Deny ACL → Super Users → Deny
C. Deny ACL → Deny → Allow ACL → Super Users
D. Super Users → Allow ACL → Deny ACL → Deny

If a broker's JVM garbage collection takes too long, what can occur?

A. There will be a trigger of the broker's log cleaner thread.
B. ZooKeeper believes the broker to be dead.
C. There is backpressure to, and pausing of, Kafka clients.
D. Log files written to disk are loaded into the page cache.

A customer has a use case for a ksqlDB persistent query. You need to make sure that duplicate messages are not processed and messages are not skipped.

Which property should you use?

A. processing.guarantee=exactly_once
B. ksql.streams.auto.offset.reset=earliest
C. ksql.streams.auto.offset.reset=latest
D. ksql.fail.on.production.error=false
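Setting the processing guarantee to exactly_once enables Kafka Streams transactions underneath ksqlDB, so each input record affects the persistent query's output exactly once; it is neither skipped nor double-processed. As a config fragment (ksqlDB server properties form; treat this as a sketch of where the setting lives):

```properties
# ksql-server.properties fragment (illustrative)
# Streams-level settings are passed through with the ksql.streams. prefix.
ksql.streams.processing.guarantee=exactly_once
```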

You have a Kafka cluster with topics t1 and t2. In the output below, topic t2 shows Partition 1 with a leader “-1”.

...

$ kafka-topics --zookeeper localhost:2181 --describe --topic t2

Topic: t2 Partition: 1 Leader: -1 Replicas: 1 Isr:

What is the most likely reason for this?

A. Broker 1 failed.
B. Leader shows “-1” while the log cleaner thread runs on Broker 1.
C. Compression has been enabled on Broker 1.
D. Broker 1 has another partition clashing with the same name.

Copyright © 2014-2025 Solution2Pass. All Rights Reserved