Confluent Certified Developer for Apache Kafka (CCDAK) Free Practice Exam Questions (2025 Update)

Prepare effectively for the Confluent Certified Developer for Apache Kafka (CCDAK) certification with our extensive collection of free, high-quality practice questions. Each question mirrors the actual exam format and objectives and comes with a comprehensive answer and a detailed explanation. Our materials are regularly updated for 2025, giving you current resources to build confidence and succeed on your first attempt.

You are writing to a topic with acks=all.

The producer receives acknowledgments but you notice duplicate messages.

You find that timeouts due to network delay are causing resends.

Which configuration should you use to prevent duplicates?

A.

enable.auto.commit=true

B.

retries=2147483647

max.in.flight.requests.per.connection=5

enable.idempotence=true

C.

retries=0

max.in.flight.requests.per.connection=5

enable.idempotence=true

D.

retries=2147483647

max.in.flight.requests.per.connection=1

enable.idempotence=false
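
For reference, here is a minimal configuration sketch matching answer B, using the standard Java producer client (the broker address is a placeholder). With idempotence enabled, the broker detects and discards resends caused by network timeouts:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;

    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");   // placeholder address
    props.put(ProducerConfig.ACKS_CONFIG, "all");
    props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);            // broker de-duplicates retried sends
    props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);          // 2147483647
    props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 5);   // must be <= 5 with idempotence
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    KafkaProducer<String, String> producer = new KafkaProducer<>(props);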

You have a Kafka client application that has real-time processing requirements.

Which Kafka metric should you monitor?

A.

Consumer lag between brokers and consumers

B.

Total time to serve requests to replica followers

C.

Consumer heartbeat rate to group coordinator

D.

Aggregate incoming byte rate
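
For illustration, consumer lag is also exposed through the Java client's own metrics. A sketch, assuming an existing KafkaConsumer instance named consumer; records-lag-max is the metric name used by the standard consumer:

    import java.util.Map;
    import org.apache.kafka.common.Metric;
    import org.apache.kafka.common.MetricName;

    // records-lag-max: the maximum lag, in records, across the partitions this consumer reads.
    for (Map.Entry<MetricName, ? extends Metric> entry : consumer.metrics().entrySet()) {
        if (entry.getKey().name().equals("records-lag-max")) {
            System.out.println(entry.getKey().group() + "/records-lag-max = "
                    + entry.getValue().metricValue());
        }
    }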

You are writing a producer application and need to ensure proper delivery. You configure the producer with acks=all.

Which two actions should you take to ensure proper error handling?

(Select two.)

A.

Use a callback argument in producer.send() where you check delivery status.

B.

Check that producer.send() returned a RecordMetadata object that is not null.

C.

Surround the call of producer.send() with a try/catch block to catch KafkaException.

D.

Check the value of ProducerRecord.status().
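
A sketch combining both techniques (the topic name and record contents are placeholders; producer is an existing KafkaProducer instance):

    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.KafkaException;

    ProducerRecord<String, String> record = new ProducerRecord<>("payments", "key-1", "value-1");
    try {
        // Asynchronous path: the callback reports the delivery status of each record.
        producer.send(record, (metadata, exception) -> {
            if (exception != null) {
                System.err.println("Delivery failed: " + exception.getMessage());
            } else {
                System.out.printf("Delivered to %s-%d@%d%n",
                        metadata.topic(), metadata.partition(), metadata.offset());
            }
        });
    } catch (KafkaException e) {
        // Synchronous path: send() itself can throw, e.g. on serialization failure.
        System.err.println("send() threw: " + e.getMessage());
    }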

An application is consuming messages from Kafka.

The application logs show that partitions are frequently being reassigned within the consumer group.

Which two factors may be contributing to this?

(Select two.)

A.

There is a slow-processing consumer application.

B.

The number of partitions does not match the number of application instances.

C.

There is a storage issue on the broker.

D.

An instance of the application is crashing and being restarted.
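
For context, these are the consumer settings most often tuned when slow processing (A) or crash-restart loops (D) keep triggering rebalances. The values below are illustrative, and props is an existing consumer Properties object:

    import org.apache.kafka.clients.consumer.ConsumerConfig;

    // If poll() is not called within max.poll.interval.ms, the consumer is evicted
    // from the group and a rebalance is triggered.
    props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 600000);  // allow long per-batch processing
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 100);         // smaller batches finish sooner
    props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 45000);     // tolerance for missed heartbeats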

Which tool can you use to modify the replication factor of an existing topic?

A.

kafka-reassign-partitions.sh

B.

kafka-recreate-topic.sh

C.

kafka-topics.sh

D.

kafka-reassign-topics.sh
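
The same operation is also available programmatically through the Java AdminClient's reassignment API. A sketch (topic name and broker IDs are placeholders; props holds bootstrap.servers, and the enclosing method declares throws Exception):

    import java.util.List;
    import java.util.Map;
    import java.util.Optional;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.NewPartitionReassignment;
    import org.apache.kafka.common.TopicPartition;

    try (Admin admin = Admin.create(props)) {
        // Assign partition 0 of 'orders' to brokers 1, 2 and 3,
        // raising its replication factor to 3.
        admin.alterPartitionReassignments(Map.of(
                new TopicPartition("orders", 0),
                Optional.of(new NewPartitionReassignment(List.of(1, 2, 3)))
        )).all().get();
    }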

What is a consequence of increasing the number of partitions in an existing Kafka topic?

A.

Existing data will be redistributed across the new number of partitions, temporarily increasing cluster load.

B.

Records with the same key could be located in different partitions.

C.

Consumers will need to process data from more partitions, which will significantly increase consumer lag.

D.

The acknowledgment process will increase latency for producers using acks=all.
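
Answer B follows directly from how the default partitioner places keyed records: it hashes the key and takes the result modulo the partition count, so changing the count changes the mapping. A sketch using the client's own hash utility (partition counts are illustrative):

    import java.nio.charset.StandardCharsets;
    import org.apache.kafka.common.utils.Utils;

    byte[] key = "account-42".getBytes(StandardCharsets.UTF_8);
    // Default placement for keyed records: murmur2(key) mod partition count.
    int before = Utils.toPositive(Utils.murmur2(key)) % 5;  // topic had 5 partitions
    int after  = Utils.toPositive(Utils.murmur2(key)) % 8;  // topic grown to 8 partitions
    System.out.println(before + " -> " + after);  // the same key may now land in a different partition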

A stream processing application is tracking user activity in online shopping carts.

You want to identify periods of user inactivity.

Which type of Kafka Streams window should you use?

A.

Sliding

B.

Tumbling

C.

Hopping

D.

Session
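
A session window (D) closes only after a configured gap with no events, which is exactly how periods of inactivity are detected. A minimal Kafka Streams sketch; the topic name, inactivity gap, and grace period are illustrative, and the record key is assumed to be a user ID:

    import java.time.Duration;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.SessionWindows;

    StreamsBuilder builder = new StreamsBuilder();
    builder.<String, String>stream("cart-activity")
           .groupByKey()
           // A session ends after 30 minutes without activity from the same user.
           .windowedBy(SessionWindows.ofInactivityGapAndGrace(Duration.ofMinutes(30), Duration.ofMinutes(5)))
           .count();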

You need to collect logs from a host and write them to a Kafka topic named 'logs-topic'. You decide to use the Kafka Connect File Source connector for this task.

What is the preferred deployment mode for this connector?

A.

Standalone mode

B.

Distributed mode

C.

Parallel mode

D.

SingleCluster mode

You create a producer that writes messages about bank account transactions from tens of thousands of different customers into a topic.

    Your consumers must process these messages with low latency and minimize consumer lag.

    Processing takes ~6x longer than producing.

    Transactions for each bank account must be processed in order.

Which strategy should you use?

A.

Use the timestamp of the message's arrival as its key.

B.

Use the bank account number found in the message as the message key.

C.

Use a combination of the bank account number and the transaction timestamp as the message key.

D.

Use a unique identifier such as a universally unique identifier (UUID) as the message key.
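
Keying by account number (B) sends every transaction for one account to the same partition, preserving per-account order, while tens of thousands of distinct accounts still spread across partitions for parallel consumption. A sketch; the topic name and the transaction accessors are hypothetical:

    import org.apache.kafka.clients.producer.ProducerRecord;

    // Same account number -> same partition -> per-account ordering preserved.
    String accountNumber = txn.accountNumber();  // hypothetical accessor
    producer.send(new ProducerRecord<>("transactions", accountNumber, txn.toJson()));  // toJson() is hypothetical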

Which is true about topic compaction?

A.

When a client produces a new event with an existing key, the old value is overwritten with the new value in the compacted log segment.

B.

When a client produces a new event with an existing key, the broker immediately deletes the offset of the existing event.

C.

Topic compaction does not remove old events; instead, when clients consume events from a compacted topic, they store events in a hashmap that maintains the latest value.

D.

Compaction will keep exactly one message per key after compaction of inactive log segments.
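
For context, compaction is a per-topic setting (cleanup.policy=compact). A creation sketch with the Java AdminClient; the topic name and sizing are placeholders, props holds bootstrap.servers, and the enclosing method declares throws Exception:

    import java.util.Map;
    import java.util.Set;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.NewTopic;

    try (Admin admin = Admin.create(props)) {
        NewTopic topic = new NewTopic("user-profiles", 3, (short) 3)
                .configs(Map.of("cleanup.policy", "compact"));  // retain at least the latest value per key
        admin.createTopics(Set.of(topic)).all().get();
    }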

Clients that connect to a Kafka cluster are required to specify one or more brokers in the bootstrap.servers parameter.

What is the primary advantage of specifying more than one broker?

A.

It provides redundancy in making the initial connection to the Kafka cluster.

B.

It forces clients to enumerate every single broker in the cluster.

C.

It is the mechanism to distribute a topic’s partitions across multiple brokers.

D.

It provides the ability to wake up dormant brokers.
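
For illustration, a client configuration listing several brokers (hostnames are placeholders). Only one of them needs to be reachable for the initial connection, after which the client discovers the full cluster:

    import org.apache.kafka.clients.producer.ProducerConfig;

    // Any single reachable broker is enough to bootstrap; the others are fallbacks.
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
              "broker1:9092,broker2:9092,broker3:9092");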

You are developing a Java application using a Kafka consumer.

You need to integrate Kafka’s client logs with your own application’s logs using log4j2.

Which Java library dependency must you include in your project?

A.

SLF4J implementation for Log4j 1.2 (org.slf4j:slf4j-log4j12)

B.

SLF4J implementation for Log4j2 (org.apache.logging.log4j:log4j-slf4j-impl)

C.

None; the right dependency is pulled in transitively by the Kafka client dependency.

D.

Just the log4j2 dependency of the application

Match each configuration parameter with the correct deployment step in installing a Kafka connector.

Which two statements are correct about transactions in Kafka?

(Select two.)

A.

All messages from a failed transaction will be deleted from a Kafka topic.

B.

Transactions are only possible when writing messages to a topic with single partition.

C.

Consumers can consume both committed and uncommitted transactions.

D.

Information about producers and their transactions is stored in the internal __transaction_state topic.

E.

Transactions guarantee at least once delivery of messages.
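
A minimal transactional producer sketch (the transactional.id, topic, and record contents are placeholders; props holds the usual bootstrap and serializer settings). The broker records this producer's transaction state in the internal __transaction_state topic:

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;

    props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "payments-tx-1");  // placeholder id
    KafkaProducer<String, String> producer = new KafkaProducer<>(props);

    producer.initTransactions();
    try {
        producer.beginTransaction();
        producer.send(new ProducerRecord<>("payments", "key", "value"));
        producer.commitTransaction();
    } catch (Exception e) {
        // Aborted records remain in the log but are skipped by read_committed consumers.
        producer.abortTransaction();
    }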

Which statement is true about how exactly-once semantics (EOS) work in Kafka Streams?

A.

Kafka Streams disables log compaction on internal changelog topics to preserve all state changes for potential recovery.

B.

EOS in Kafka Streams relies on transactional producers to atomically commit state updates to changelog topics and output records to Kafka.

C.

Kafka Streams provides EOS by periodically checkpointing state stores and replaying changelogs to recover only unprocessed messages during failure.

D.

EOS in Kafka Streams is implemented by creating a separate Kafka topic for deduplication of all messages processed by the application.
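
Enabling EOS in a Streams application is a single configuration switch; internally it uses the transactional producer described in answer B to commit changelog updates, output records, and input offsets atomically. A sketch (the application ID is a placeholder):

    import org.apache.kafka.streams.StreamsConfig;

    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "retail-app");  // placeholder
    props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);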

You are creating a Kafka Streams application to process retail data.

Match the input data streams with the appropriate Kafka Streams object.

A producer is configured with the default partitioner. It is sending records to a topic that is configured with five partitions. The record does not contain any key.

What is the result of this?

A.

Records will be dispatched among the available partitions.

B.

Records will be sent to partition 0.

C.

An error will be raised and no record will be sent.

D.

Records will be sent to the least used partition.
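
A keyless send for illustration (the topic name is a placeholder). With no key, the partition is chosen by the partitioner rather than by a key hash; recent Java clients use a sticky strategy that fills a batch for one partition before moving to another, spreading records across the available partitions over time:

    import org.apache.kafka.clients.producer.ProducerRecord;

    // No key argument: the partitioner, not a key hash, picks the partition.
    producer.send(new ProducerRecord<>("clickstream", "some-value"));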

What are three built-in abstractions in the Kafka Streams DSL?

(Select three.)

A.

KStream

B.

KTable

C.

GlobalKTable

D.

GlobalKStream

E.

StreamTable
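
The three DSL abstractions side by side, in a short sketch (topic names are placeholders):

    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.GlobalKTable;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KTable;

    StreamsBuilder builder = new StreamsBuilder();
    KStream<String, String> orders = builder.stream("orders");                // every event, as an unbounded stream
    KTable<String, String> customers = builder.table("customers");            // latest value per key, partitioned
    GlobalKTable<String, String> products = builder.globalTable("products");  // latest value per key, replicated to every instance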
