
Confluent CCDAK Dumps

Page: 1 / 9
Total 90 questions

Confluent Certified Developer for Apache Kafka Certification Examination Questions and Answers

Question 1

You are composing a REST request to create a new connector in a running Connect cluster. You invoke POST /connectors with a configuration and receive a 409 (Conflict) response.

What are two reasons for this response? (Select two.)

Options:

A.

The connector configuration was invalid, and the response body will expand on the configuration error.

B.

The Connect cluster has reached capacity, and new connectors cannot be created without expanding the cluster.

C.

The Connector already exists in the cluster.

D.

The Connect cluster is in the process of rebalancing.
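For reference, a connector is created by POSTing a JSON payload to the Connect REST API; per the Connect REST API, a 409 (Conflict) is returned when a connector with the requested name already exists or the cluster is mid-rebalance. The connector name and settings below are hypothetical:

```json
{
  "name": "my-file-sink",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
    "tasks.max": "1",
    "topics": "orders",
    "file": "/tmp/orders.txt"
  }
}
```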

Question 2

A consumer application runs once every two weeks and reads from a Kafka topic.

The last time the application ran, the last offset processed was 217.

The application is configured with auto.offset.reset=latest.

The current offsets in the topic start at 318 and end at 588.

Which offset will the application start reading from when it starts up for its next run?

Options:

A.

0

B.

218

C.

318

D.

589

Question 3

You create a topic with five partitions.

What can you assume about messages read from that topic by a single consumer group?

Options:

A.

Messages can be consumed by a maximum of five consumers in the same consumer group.

B.

The consumer group can only read the same number of messages from all the partitions.

C.

All messages will be read from exactly one broker by the consumer group.

D.

Messages from one partition can be consumed by any of the consumers in a group for faster processing.

Question 4

You need to configure a sink connector to write records that fail into a dead letter queue topic. Requirements:

Topic name: DLQ-Topic

Headers containing error context must be added to the messages.

Which three configuration parameters are necessary? (Select three.)

Options:

A.

errors.tolerance=all

B.

errors.deadletterqueue.topic.name=DLQ-Topic

C.

errors.deadletterqueue.context.headers.enable=true

D.

errors.tolerance=none

E.

errors.log.enable=true

F.

errors.log.include.messages=true
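A sink connector that routes failed records to a dead letter queue with error-context headers combines the standard Kafka Connect error-handling properties; a minimal fragment looks like this:

```properties
errors.tolerance=all
errors.deadletterqueue.topic.name=DLQ-Topic
errors.deadletterqueue.context.headers.enable=true
```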

Question 5

You are designing a stream pipeline to monitor the real-time location of GPS trackers, where historical location data is not required.

Each event has:

• Key: trackerId

• Value: latitude, longitude

You need to ensure that the latest location for each tracker is always retained in the Kafka topic.

Which topic configuration parameter should you set?

Options:

A.

cleanup.policy=compact

B.

retention.ms=infinite

C.

min.cleanable.dirty.ratio=-1

D.

retention.ms=0
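Log compaction retains at least the latest record for each key, which suits a latest-location-per-tracker topic. The topic-level setting is:

```properties
cleanup.policy=compact
```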

Question 6

A producer is configured with the default partitioner. It is sending records to a topic that is configured with five partitions. The record does not contain any key.

What is the result of this?

Options:

A.

Records will be dispatched among the available partitions.

B.

Records will be sent to partition 0.

C.

An error will be raised and no record will be sent.

D.

Records will be sent to the least used partition.

Question 7

You have a Kafka client application that has real-time processing requirements.

Which Kafka metric should you monitor?

Options:

A.

Consumer lag between brokers and consumers

B.

Total time to serve requests to replica followers

C.

Consumer heartbeat rate to group coordinator

D.

Aggregate incoming byte rate

Question 8

You need to send a JSON message on the wire. The message key is a string.

How would you do this?

Options:

A.

Specify a key serializer class for the JSON contents of the message’s value. Set the value serializer class to null.

B.

Specify a value serializer class for the JSON contents of the message’s value. Set a key serializer for the string value.

C.

Specify a value serializer class for the JSON contents of the message’s value. Set the key serializer class to null.

D.

Specify a value serializer class for the JSON contents of the message’s value. Set the key serializer class to JSON.

Question 9

You are experiencing low throughput from a Java producer.

Metrics show low I/O thread ratio and low I/O thread wait ratio.

What is the most likely cause of the slow producer performance?

Options:

A.

Compression is enabled.

B.

The producer is sending large batches of messages.

C.

There is a bad data link layer (layer 2) connection from the producer to the cluster.

D.

The producer code has an expensive callback function.

Question 10

You are developing a Kafka Streams application with a complex topology that has multiple sources, processors, sinks, and sub-topologies.

You are working in a development environment and do not have access to a real Kafka cluster or topics.

You need to perform unit testing on your Kafka Streams application.

Which should you use?

Options:

A.

TestProducer, TestConsumer

B.

KafkaUnitTestDriver

C.

TopologyTestDriver

D.

MockProducer, MockConsumer

Question 11

Which two statements are correct when assigning partitions to the consumers in a consumer group using the assign() API?

(Select two.)

Options:

A.

It is mandatory to subscribe to a topic before calling assign() to assign partitions.

B.

The consumer chooses which partition to read without any assignment from brokers.

C.

The consumer group will not be rebalanced if a consumer leaves the group.

D.

All topics must have the same number of partitions to use assign() API.

Question 12

You create a topic named stream-logs with:

A replication factor of 3

Four partitions

Messages that are plain logs without a key

How will messages be distributed across partitions?

Options:

A.

The first message will always be written to partition 0.

B.

Messages will be distributed round-robin among all the topic partitions.

C.

All messages will be written to the same log segment.

D.

Messages will be distributed among all the topic partitions with strict ordering.

Question 13

This schema excerpt is an example of which schema format?

package com.mycorp.mynamespace;

message SampleRecord {
  int32 Stock = 1;
  double Price = 2;
  string Product_Name = 3;
}

Options:

A.

Avro

B.

Protobuf

C.

JSON Schema

D.

YAML

Question 14

What is the default maximum size of a message the Apache Kafka broker can accept?

Options:

A.

1MB

B.

2MB

C.

5MB

D.

10MB
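The broker-side limit is governed by message.max.bytes, whose default is approximately 1 MB (1048588 bytes in recent Kafka versions); a per-topic override, max.message.bytes, also exists. The broker setting is:

```properties
message.max.bytes=1048588
```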

Question 15

You are building real-time streaming applications using Kafka Streams.

Your application has a custom transformation.

You need to define custom processors in Kafka Streams.

Which tool should you use?

Options:

A.

TopologyTestDriver

B.

Processor API

C.

Kafka Streams Domain Specific Language (DSL)

D.

Kafka Streams Custom Transformation Language

Question 16

You are writing to a Kafka topic with producer configuration acks=all.

The producer receives acknowledgements from the broker but still creates duplicate messages due to network timeouts and retries.

You need to ensure that duplicate messages are not created.

Which producer configuration should you set?

Options:

A.

enable.auto.commit=true

B.

retries=2147483647
max.in.flight.requests.per.connection=5
enable.idempotence=false

C.

retries=2147483647
max.in.flight.requests.per.connection=1
enable.idempotence=true

D.

retries=0
max.in.flight.requests.per.connection=5
enable.idempotence=true
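An idempotent producer assigns sequence numbers to record batches so that broker-side retries do not produce duplicates. As a sketch, the core settings are shown below; note that enabling idempotence implicitly requires acks=all and caps in-flight requests per connection at five:

```properties
enable.idempotence=true
acks=all
retries=2147483647
```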

Question 17

An application is consuming messages from Kafka.

The application logs show that partitions are frequently being reassigned within the consumer group.

Which two factors may be contributing to this?

(Select two.)

Options:

A.

There is a slow consumer processing application.

B.

The number of partitions does not match the number of application instances.

C.

There is a storage issue on the broker.

D.

An instance of the application is crashing and being restarted.

Question 18

Clients that connect to a Kafka cluster are required to specify one or more brokers in the bootstrap.servers parameter.

What is the primary advantage of specifying more than one broker?

Options:

A.

It provides redundancy in making the initial connection to the Kafka cluster.

B.

It forces clients to enumerate every single broker in the cluster.

C.

It is the mechanism to distribute a topic’s partitions across multiple brokers.

D.

It provides the ability to wake up dormant brokers.

Question 19

You need to explain the best reason to implement the ConsumerRebalanceListener consumer callback interface before a consumer group rebalance.

Which statement is correct?

Options:

A.

Partitions assigned to a consumer may change.

B.

Previous log files are deleted.

C.

Offsets are compacted.

D.

Partition leaders may change.

Question 20

You want to connect with username and password to a secured Kafka cluster that has SSL encryption.

Which properties must your client include?

Options:

A.

security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='myUser' password='myPassword';

B.

security.protocol=SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='myUser' password='myPassword';

C.

security.protocol=SASL_PLAINTEXT
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='myUser' password='myPassword';

D.

security.protocol=PLAINTEXT
sasl.jaas.config=org.apache.kafka.common.security.ssl.TlsLoginModule required username='myUser' password='myPassword';
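Username/password authentication over an encrypted connection uses SASL layered on SSL. A minimal client sketch follows; the truststore path and credentials are hypothetical placeholders:

```properties
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="myUser" \
  password="myPassword";
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=changeit
```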

Question 21

Your application consumes from a topic configured with a deserializer.

You want the application to be resilient to badly formatted records (poison pills).

You surround the poll() call with a try/catch block for RecordDeserializationException.

You need to log the bad record, skip it, and continue processing other records.

Which action should you take in the catch block?

Options:

A.

Log the bad record and seek the consumer to the offset of the next record.

B.

Log the bad record and call consumer.skip() method.

C.

Throw a runtime exception to trigger a restart of the application.

D.

Log the bad record; no other action is needed.
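As a hedged sketch of the log-and-seek approach (a fragment, not a complete program: the consumer construction and the process() helper are hypothetical), RecordDeserializationException carries the failing partition and offset, so the catch block can log them and seek one past the bad offset before polling again:

```java
import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.errors.RecordDeserializationException;

while (true) {
    try {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
        for (var record : records) {
            process(record); // hypothetical business logic
        }
    } catch (RecordDeserializationException e) {
        // Log the poison pill, then skip past it so the next poll() succeeds.
        System.err.printf("Bad record at %s offset %d%n", e.topicPartition(), e.offset());
        consumer.seek(e.topicPartition(), e.offset() + 1);
    }
}
```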

Question 22

A consumer application needs to use an at-most-once delivery semantic.

What is the best consumer configuration and code skeleton to avoid duplicate messages being read?

Options:

A.

auto.offset.reset=latest and enable.auto.commit=true

while (true) {
  final var records = consumer.poll(POLL_TIMEOUT);
  for (var record : records) {
    // Any processing
  }
  consumer.commitAsync();
}

B.

auto.offset.reset=earliest and enable.auto.commit=false

while (true) {
  final var records = consumer.poll(POLL_TIMEOUT);
  consumer.commitAsync();
  for (var record : records) {
    // Any processing
  }
}

C.

auto.offset.reset=earliest and enable.auto.commit=false

while (true) {
  final var records = consumer.poll(POLL_TIMEOUT);
  for (var record : records) {
    // Any processing
  }
  consumer.commitAsync();
}

D.

auto.offset.reset=earliest and enable.auto.commit=true

while (true) {
  final var records = consumer.poll(POLL_TIMEOUT);
  consumer.commitAsync();
  for (var record : records) {
    // Any processing
  }
}

Question 23

Kafka producers can batch messages going to the same partition.

Which statement is correct about producer batching?

Options:

A.

Producers can only batch messages of the same size.

B.

Two or more broker failures will automatically disable batching on the producer.

C.

Producers have a separate background thread for each batch.

D.

Producers can include multiple batches in a single request to a broker.

Question 24

You are implementing a Kafka Streams application to process financial transactions.

Each transaction must be processed exactly once to ensure accuracy.

The application reads from an input topic, performs computations, and writes results to an output topic.

During testing, you notice duplicate entries in the output topic, which violates the exactly-once processing requirement.

You need to ensure exactly-once semantics (EOS) for this Kafka Streams application.

Which step should you take?

Options:

A.

Enable compaction on the output topic to handle duplicates.

B.

Set enable.idempotence=true in the internal producer configuration of the Kafka Streams application.

C.

Set enable.exactly_once=true in the Kafka Streams configuration.

D.

Set processing.guarantee=exactly_once_v2 in the Kafka Streams configuration.
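In Kafka Streams, exactly-once semantics are switched on through a single Streams configuration rather than by tuning the internal producer directly; exactly_once_v2 requires brokers at version 2.5 or later:

```properties
processing.guarantee=exactly_once_v2
```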

Question 25

Which statement describes the storage location for a sink connector’s offsets?

Options:

A.

The __consumer_offsets topic, like any other consumer

B.

The topic specified in the offsets.storage.topic configuration parameter

C.

In a file specified by the offset.storage.file.filename configuration parameter

D.

In memory which is then periodically flushed to a RocksDB instance

Question 26

You are experiencing low throughput from a Java producer.

Kafka producer metrics show a low I/O thread ratio and low I/O thread wait ratio.

What is the most likely cause of the slow producer performance?

Options:

A.

The producer is sending large batches of messages.

B.

There is a bad data link layer (Layer 2) connection from the producer to the cluster.

C.

The producer code has an expensive callback function.

D.

Compression is enabled.

Question 27

You are building a system for a retail store selling products to customers.

Which three datasets should you model as a GlobalKTable?

(Select three.)

Options:

A.

Inventory of products at a warehouse

B.

All purchases at a retail store occurring in real time

C.

Customer profile information

D.

Log of payment transactions

E.

Catalog of products
