CCDAK Confluent Certified Developer for Apache Kafka Certification Examination Free Practice Exam Questions (2026 Updated)

Prepare effectively for your Confluent CCDAK Confluent Certified Developer for Apache Kafka Certification Examination certification with our extensive collection of free, high-quality practice questions. Each question is designed to mirror the actual exam format and objectives, complete with comprehensive answers and detailed explanations. Our materials are regularly updated for 2026, ensuring you have the most current resources to build confidence and succeed on your first attempt.

Page: 1 / 2
Total 90 questions

You are writing to a topic with acks=all.

The producer receives acknowledgments but you notice duplicate messages.

You find that timeouts due to network delay are causing resends.

Which configuration should you use to prevent duplicates?

A.

enable.auto.commit=true

B.

retries=2147483647
max.in.flight.requests.per.connection=5
enable.idempotence=true

C.

retries=0
max.in.flight.requests.per.connection=5
enable.idempotence=true

D.

retries=2147483647
max.in.flight.requests.per.connection=1
enable.idempotence=false
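For reference, the idempotent-producer settings from option B can be sketched as a Python confluent-kafka style configuration dict (the broker address is a placeholder and no connection is made; idempotence requires a positive retries value and at most 5 in-flight requests per connection):

```python
# Sketch: idempotent producer settings; with enable.idempotence=true the
# broker de-duplicates batches that are resent after a timeout.
conf = {
    "bootstrap.servers": "localhost:9092",  # placeholder address
    "acks": "all",
    "enable.idempotence": True,
    "retries": 2147483647,  # retry transient failures indefinitely
    "max.in.flight.requests.per.connection": 5,  # must be <= 5 with idempotence
}

print(conf["enable.idempotence"])
```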

Which configuration allows more time for the consumer poll to process records?

A.

session.timeout.ms

B.

heartbeat.interval.ms

C.

max.poll.interval.ms

D.

fetch.max.wait.ms
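As a sketch, raising max.poll.interval.ms gives each poll() loop more time to process a batch, while session.timeout.ms only governs heartbeat-based liveness detection. Assuming a confluent-kafka style config dict with placeholder values:

```python
# Sketch: consumer settings; max.poll.interval.ms bounds the time allowed
# between successive poll() calls, so raising it permits slower processing
# without triggering a rebalance.
consumer_conf = {
    "bootstrap.servers": "localhost:9092",  # placeholder
    "group.id": "demo-group",               # placeholder
    "max.poll.interval.ms": 600000,  # up to 10 minutes between polls
    "session.timeout.ms": 45000,     # heartbeat liveness, a separate concern
}

print(consumer_conf["max.poll.interval.ms"])
```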

The producer code below features a Callback class with a method called onCompletion().

In the onCompletion() method, when the request is completed successfully, what does the value metadata.offset() represent?

A.

The sequential ID of the message committed into a partition

B.

Its position in the producer’s batch of messages

C.

The number of bytes that overflowed beyond a producer batch of messages

D.

The ID of the partition to which the message was committed
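The offset reported in the callback is the record's sequential ID within its partition. The hypothetical stub below stands in for the real RecordMetadata object purely to illustrate what offset() returns:

```python
# Illustration only: StubMetadata is a hypothetical stand-in for the
# metadata object Kafka passes to onCompletion(); offset() is the
# sequential ID the broker assigned to the record within its partition.
class StubMetadata:
    def __init__(self, topic, partition, offset):
        self._topic, self._partition, self._offset = topic, partition, offset

    def offset(self):
        return self._offset

    def partition(self):
        return self._partition


def on_completion(metadata, exception):
    if exception is None:
        return f"written to partition {metadata.partition()} at offset {metadata.offset()}"
    raise exception


print(on_completion(StubMetadata("orders", 2, 41), None))
```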

An application is consuming messages from Kafka.

The application logs show that partitions are frequently being reassigned within the consumer group.

Which two factors may be contributing to this?

(Select two.)

A.

There is a slow consumer processing application.

B.

The number of partitions does not match the number of application instances.

C.

There is a storage issue on the broker.

D.

An instance of the application is crashing and being restarted.

What is accomplished by producing data to a topic with a message key?

A.

Messages with the same key are routed to a deterministically selected partition, enabling order guarantees within that partition.

B.

Kafka brokers allow you to add more partitions to a given topic, without impacting the data flow for existing keys.

C.

It provides a mechanism for encrypting messages at the partition level to ensure secure data transmission.

D.

Consumers can filter messages in real time based on the message key without processing unrelated messages.
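The ordering guarantee follows from deterministic routing: hashing the key selects the partition, so every record with the same key lands in the same partition. This sketch uses hashlib.md5 as a stand-in for Kafka's actual murmur2 partitioner, only to demonstrate the same-key, same-partition property:

```python
import hashlib

# Sketch of deterministic key-based routing. Kafka's default partitioner
# uses murmur2 over the serialized key; md5 here is a stand-in that shows
# the same property: identical keys always map to the same partition.
def partition_for(key: bytes, num_partitions: int) -> int:
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions


p1 = partition_for(b"customer-42", 6)
p2 = partition_for(b"customer-42", 6)
print(p1 == p2)
```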

You are working on a Kafka cluster with three nodes. You create a topic named orders with:

replication.factor = 3

min.insync.replicas = 2

acks = all

What exception will be generated if two brokers are down due to network delay?

A.

NotEnoughReplicasException

B.

NetworkException

C.

NotCoordinatorException

D.

NotLeaderForPartitionException

Which two statements are correct about transactions in Kafka?

(Select two.)

A.

All messages from a failed transaction will be deleted from a Kafka topic.

B.

Transactions are only possible when writing messages to a topic with single partition.

C.

Consumers can consume both committed and uncommitted transactions.

D.

Information about producers and their transactions is stored in the _transaction_state topic.

E.

Transactions guarantee at least once delivery of messages.
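As a sketch, a transactional producer needs a stable transactional.id (which implies idempotence), and consumers opt in or out of seeing uncommitted records via isolation.level. Values below are placeholders, and the broker-dependent calls are shown only as comments:

```python
# Sketch: transactional producer configuration; transactional.id must be
# stable across restarts so the broker can fence zombie instances.
txn_conf = {
    "bootstrap.servers": "localhost:9092",  # placeholder
    "transactional.id": "orders-writer-1",  # stable per producer instance
    "enable.idempotence": True,             # implied by transactions
}
# With a live broker (confluent-kafka style):
# producer = Producer(txn_conf)
# producer.init_transactions(); producer.begin_transaction()
# producer.produce("orders", b"payload"); producer.commit_transaction()

# Consumers choose whether uncommitted records are visible:
consumer_conf = {"isolation.level": "read_committed"}  # vs "read_uncommitted"

print(consumer_conf["isolation.level"])
```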

Which configuration is valid for deploying a JDBC Source Connector to read all rows from the orders table and write them to the dbl-orders topic?

A.

{
  "name": "orders-connect",
  "connector.class": "io.confluent.connect.jdbc.DdbcSourceConnector",
  "tasks.max": "1",
  "connection.url": "jdbc:mysql://mysql:3306/dbl",
  "topic.whitelist": "orders",
  "auto.create": "true"
}

B.

{
  "name": "dbl-orders",
  "connector.class": "io.confluent.connect.jdbc.DdbcSourceConnector",
  "tasks.max": "1",
  "connection.url": "jdbc:mysql://mysql:3306/dbl?user=user&password=pas",
  "topic.prefix": "dbl-",
  "table.blacklist": "ord*"
}

C.

{
  "name": "jdbc-source",
  "connector.class": "io.confluent.connect.jdbc.DdbcSourceConnector",
  "tasks.max": "1",
  "connection.url": "jdbc:mysql://mysql:3306/dbl?user=user&useAutoAuth=true",
  "topic.prefix": "dbl-",
  "table.whitelist": "orders"
}

D.

{
  "name": "jdbc-source",
  "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
  "tasks.max": "1",
  "connection.url": "jdbc:mysql://mysql:3306/dbl?user=user&password=pas",
  "topic.prefix": "dbl-",
  "table.whitelist": "orders"
}
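Option D's configuration can be reshaped as the JSON body a Kafka Connect REST API call would carry (the name/config nesting shown here is the REST API convention; host names and credentials are placeholders). Note that the resulting topic is topic.prefix plus the table name, i.e. dbl-orders:

```python
import json

# Sketch: a JDBC Source Connector config as a Connect REST API payload.
# Host, database, and credentials are placeholders from the question.
connector = {
    "name": "jdbc-source",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "tasks.max": "1",
        "connection.url": "jdbc:mysql://mysql:3306/dbl?user=user&password=pas",
        "topic.prefix": "dbl-",
        "table.whitelist": "orders",
    },
}
body = json.dumps(connector)
# e.g. POST body to http://connect:8083/connectors (placeholder endpoint)

cfg = connector["config"]
print(cfg["topic.prefix"] + cfg["table.whitelist"])
```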

You have a Kafka Connect cluster with multiple connectors deployed.

One connector is not working as expected.

You need to find logs related to that specific connector to investigate the issue.

How can you find the connector’s logs?

A.

Modify the log4j.properties file to enable connector context.

B.

Change the log level to DEBUG to include connector context information.

C.

Modify the log4j.properties file to add a dedicated log appender for the connector.

D.

Make no change; there is no way to isolate connector logs.

Your configuration parameters for a Source connector and Connect worker are:

• offset.flush.interval.ms=60000

• offset.flush.timeout.ms=500

• offset.storage.topic=connect-offsets

• offset.storage.replication.factor=-1

Which two statements match the expected behavior?

(Select two.)

A.

The offsets topic will use the broker default replication factor.

B.

The connector will commit offsets to the broker default offsets topic.

C.

The connector will commit offsets to a topic called connect-offsets.

D.

The connector will wait 500 ms before trying to commit offsets for tasks.

You have a topic t1 with six partitions. You use Kafka Connect to send data from topic t1 in your Kafka cluster to Amazon S3. Kafka Connect is configured for two tasks.

How many partitions will each task process?

A.

2

B.

3

C.

6

D.

12
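A quick arithmetic check: six partitions split evenly across two Connect tasks gives three partitions per task. A round-robin split sketched in Python:

```python
# Sketch: distributing 6 partitions of topic t1 across 2 Connect tasks.
partitions = list(range(6))
tasks = 2
per_task = [partitions[i::tasks] for i in range(tasks)]  # round-robin split
sizes = [len(t) for t in per_task]
print(sizes)  # each task handles 3 partitions
```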

Clients that connect to a Kafka cluster are required to specify one or more brokers in the bootstrap.servers parameter.

What is the primary advantage of specifying more than one broker?

A.

It provides redundancy in making the initial connection to the Kafka cluster.

B.

It forces clients to enumerate every single broker in the cluster.

C.

It is the mechanism to distribute a topic’s partitions across multiple brokers.

D.

It provides the ability to wake up dormant brokers.
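As a sketch, bootstrap.servers is only a seed list for the initial metadata request; after bootstrapping, the client learns the full cluster layout from the brokers themselves. Listing several brokers means the client can still connect if one seed happens to be down (addresses are placeholders):

```python
# Sketch: a multi-broker seed list gives redundancy for the *initial*
# connection only; it does not need to enumerate the whole cluster.
conf = {"bootstrap.servers": "broker1:9092,broker2:9092,broker3:9092"}
seeds = conf["bootstrap.servers"].split(",")
print(len(seeds))  # any one reachable seed is enough to bootstrap
```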

You are creating a Kafka Streams application to process retail data.

Match the input data streams with the appropriate Kafka Streams object.

You are sending messages to a Kafka cluster in JSON format and want to add more information related to each message:

• Format of the message payload

• Message creation time

• A globally unique identifier that allows the message to be traced through the system

Where should this additional information be set?

A.

Header

B.

Key

C.

Value

D.

Broker
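Headers carry per-message metadata without disturbing the key (used for partitioning) or the value (the business payload). The header names below are illustrative, not a Kafka standard:

```python
import time
import uuid

# Sketch: metadata as Kafka record headers (a list of name/bytes pairs,
# as the confluent-kafka client expects). Names here are illustrative.
headers = [
    ("content-type", b"application/json"),               # payload format
    ("created-at", str(int(time.time())).encode()),      # creation time
    ("trace-id", uuid.uuid4().hex.encode()),             # globally unique ID
]
# With a live broker:
# producer.produce("orders", value=payload, key=order_id, headers=headers)

names = [name for name, _ in headers]
print(names)
```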

You have a Kafka client application that has real-time processing requirements.

Which Kafka metric should you monitor?

A.

Consumer lag between brokers and consumers

B.

Total time to serve requests to replica followers

C.

Consumer heartbeat rate to group coordinator

D.

Aggregate incoming byte rate

Which partition assignment minimizes partition movements between two assignments?

A.

RoundRobinAssignor

B.

StickyAssignor

C.

RangeAssignor

D.

PartitionAssignor

Match each configuration parameter with the correct option.

To answer, choose a match for each option from the drop-down. Partial credit is given for each correct answer.

You want to connect with username and password to a secured Kafka cluster that has SSL encryption.

Which properties must your client include?

A.

security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='myUser' password='myPassword';

B.

security.protocol=SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='myUser' password='myPassword';

C.

security.protocol=SASL_PLAINTEXT
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='myUser' password='myPassword';

D.

security.protocol=PLAINTEXT
sasl.jaas.config=org.apache.kafka.common.security.ssl.TlsLoginModule required username='myUser' password='myPassword';
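Option A's properties map to a confluent-kafka style configuration roughly as follows. SASL_SSL means SASL authentication (here the PLAIN mechanism) carried over a TLS-encrypted connection; librdkafka-based clients take the username and password as separate keys rather than a JAAS string, and all values below are placeholders:

```python
# Sketch: username/password auth over an encrypted connection requires
# SASL over SSL. Plain SSL has no SASL credentials; SASL_PLAINTEXT and
# PLAINTEXT leave the connection unencrypted.
sasl_conf = {
    "bootstrap.servers": "broker:9093",  # placeholder
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "PLAIN",
    "sasl.username": "myUser",           # placeholder credentials
    "sasl.password": "myPassword",
}
print(sasl_conf["security.protocol"])
```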

You need to consume messages from Kafka using the command-line interface (CLI).

Which command should you use?

A.

kafka-console-consumer

B.

kafka-consumer

C.

kafka-get-messages

D.

kafka-consume

You need to correctly join data from two Kafka topics.

Which two scenarios will allow for co-partitioning?

(Select two.)

A.

Both topics have the same number of partitions.

B.

Both topics have the same key and partitioning strategy.

C.

Both topics have the same value schema.

D.

Both topics have the same retention time.
