Confluent CCDAK Dumps

Page: 1 / 15
Total 150 questions

Confluent Certified Developer for Apache Kafka Certification Examination Questions and Answers

Question 1

When using the Confluent Kafka distribution, where does the Schema Registry reside?

Options:

A.

As a separate JVM component

B.

As an in-memory plugin on your Zookeeper cluster

C.

As an in-memory plugin on your Kafka Brokers

D.

As an in-memory plugin on your Kafka Connect Workers

Question 2

If I supply the setting compression.type=snappy to my producer, what will happen? (select two)

Options:

A.

The Kafka brokers have to de-compress the data

B.

The Kafka brokers have to compress the data

C.

The Consumers have to de-compress the data

D.

The Consumers have to compress the data

E.

The Producers have to compress the data
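
For reference, compression.type is a producer-side setting: the producer compresses each record batch before sending it, brokers normally store the batch as-is, and consumers decompress on read. A minimal sketch of the producer configuration, assuming a broker at localhost:9092 and a topic named "my-topic" (both placeholders):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SnappyProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // The producer compresses batches with snappy; brokers keep them compressed
        // (with the default topic setting compression.type=producer) and consumers decompress.
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        }
    }
}
```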

Question 3

What isn't a feature of the Confluent Schema Registry?

Options:

A.

Store Avro data

B.

Enforce compatibility rules

C.

Store schemas

Question 4

To import data from external databases, I should use

Options:

A.

Confluent REST Proxy

B.

Kafka Connect Sink

C.

Kafka Streams

D.

Kafka Connect Source
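
For reference, importing from an external database is a Kafka Connect source use case. A minimal standalone-mode sketch of a JDBC source connector configuration, with placeholder connection details, table names, and topic prefix (in distributed mode the same settings would be posted as JSON to the Connect REST API):

```properties
name=shop-jdbc-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=2
# Placeholder connection details
connection.url=jdbc:postgresql://db-host:5432/shop
connection.user=connect
connection.password=secret
# Copy new rows based on an auto-incrementing id column (placeholder tables/column)
mode=incrementing
incrementing.column.name=id
table.whitelist=orders,customers
topic.prefix=db-
```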

Question 5

A consumer failed to process record #10 but succeeded in processing record #11. Which course of action should you choose to guarantee at-least-once processing?

Options:

A.

Commit offsets at 10

B.

Do not commit offsets until record #10 is successfully processed

C.

Commit offsets at 11
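
For reference, at-least-once processing means committing offsets only after a record has actually been processed, so a failed record is read again after a restart. A minimal sketch with manual commits, assuming a broker at localhost:9092, a group id "order-processors", and a topic "orders" (all placeholders):

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AtLeastOnceConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-processors");        // placeholder group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // commit manually, after processing

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // if this throws, nothing is committed and the record is re-read
                }
                consumer.commitSync(); // commit only after the whole batch was processed
            }
        }
    }

    static void process(ConsumerRecord<String, String> record) { /* business logic goes here */ }
}
```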

Question 6

How will you set the retention for the topic named “my-topic” to 1 hour?

Options:

A.

Set the broker config log.retention.ms to 3600000

B.

Set the consumer config retention.ms to 3600000

C.

Set the topic config retention.ms to 3600000

D.

Set the producer config retention.ms to 3600000
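
For reference, retention.ms is a topic-level configuration, and 3600000 ms equals 1 hour. A minimal sketch that applies it with the Java AdminClient, assuming a broker at localhost:9092 (the same change can also be made with the kafka-configs command-line tool):

```java
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;
import org.apache.kafka.common.config.TopicConfig;

public class SetTopicRetention {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
            // retention.ms is set on the topic resource; 3600000 ms = 1 hour
            AlterConfigOp setRetention = new AlterConfigOp(
                    new ConfigEntry(TopicConfig.RETENTION_MS_CONFIG, "3600000"),
                    AlterConfigOp.OpType.SET);
            Map<ConfigResource, Collection<AlterConfigOp>> change = Map.of(topic, List.of(setRetention));
            admin.incrementalAlterConfigs(change).all().get();
        }
    }
}
```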

Question 7

What is the risk of increasing max.in.flight.requests.per.connection while also enabling retries in a producer?

Options:

A.

At least once delivery is not guaranteed

B.

Message order not preserved

C.

Reduced throughput

D.

Less resilient
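
For reference, a sketch of the risky combination: with retries enabled, idempotence disabled, and several in-flight requests per connection, a failed batch can be retried after a later batch has already succeeded, so records can land out of order. The property values below are illustrative only:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class OrderingRiskConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Risky combination: a batch that fails and is retried may be written after a
        // later batch that already succeeded, breaking per-partition ordering.
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 5);
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, false);

        // Mitigations: either cap in-flight requests at 1, or enable idempotence,
        // which preserves ordering with up to 5 in-flight requests.
        // props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 1);
        // props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
    }
}
```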

Question 8

Which client protocols are supported by the Schema Registry? (select two)

Options:

A.

HTTP

B.

HTTPS

C.

JDBC

D.

Websocket

E.

SASL

Question 9

A consumer has auto.offset.reset=latest, and the topic partition currently has data for offsets going from 45 to 2311. The consumer group never committed offsets for the topic before. Where will the consumer read from?

Options:

A.

offset 2311

B.

offset 0

C.

offset 45

D.

it will crash

Question 10

You are building a consumer application that processes events from a Kafka topic. What is the most important metric to monitor to ensure real-time processing?

Options:

A.

UnderReplicatedPartitions

B.

records-lag-max

C.

MessagesInPerSec

D.

BytesInPerSec
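
For reference, records-lag-max reports the maximum lag (in records) across the partitions assigned to a consumer; a growing value means the consumer is falling behind real time. A small helper sketch that reads it from the Java consumer's metrics map, assuming the metric is exposed as "records-lag-max" in the "consumer-fetch-manager-metrics" group (verify the exact names for your client version):

```java
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class LagCheck {
    // Prints the consumer's maximum record lag from the client-side metrics map.
    static void printMaxLag(KafkaConsumer<?, ?> consumer) {
        consumer.metrics().forEach((name, metric) -> {
            if ("records-lag-max".equals(name.name())
                    && "consumer-fetch-manager-metrics".equals(name.group())) {
                System.out.println("records-lag-max = " + metric.metricValue());
            }
        });
    }
}
```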

Question 11

The rule "same key goes to the same partition" is true unless...

Options:

A.

the number of producers changes

B.

the number of Kafka brokers changes

C.

the number of partitions changes

D.

the replication factor changes
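
For reference, the default partitioner for keyed records hashes the serialized key and takes the result modulo the number of partitions, so the key-to-partition mapping only stays stable while the partition count stays the same. A small illustrative sketch using the client's murmur2 hash (the key and partition counts are placeholders):

```java
import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.utils.Utils;

public class KeyPartitionDemo {
    public static void main(String[] args) {
        byte[] key = "customer-42".getBytes(StandardCharsets.UTF_8);

        // partition = positive(murmur2(keyBytes)) % numPartitions, so adding partitions
        // changes where existing keys land from that point on.
        System.out.println("with 5 partitions: " + partitionFor(key, 5));
        System.out.println("with 6 partitions: " + partitionFor(key, 6)); // likely a different partition
    }

    static int partitionFor(byte[] keyBytes, int numPartitions) {
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }
}
```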

Question 12

You are using the JDBC source connector to copy data from two tables to two Kafka topics. One connector is created with tasks.max set to 2 and deployed on a cluster of 3 workers. How many tasks are launched?

Options:

A.

6

B.

1

C.

2

D.

3

Question 13

There are 3 producers writing to a topic with 5 partitions. There are 10 consumers consuming from the topic as part of the same group. How many consumers will remain idle?

Options:

A.

10

B.

3

C.

None

D.

5

Question 14

To prevent network-induced duplicates when producing to Kafka, I should use

Options:

A.

max.in.flight.requests.per.connection=1

B.

enable.idempotence=true

C.

retries=200000

D.

batch.size=1
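
For reference, enable.idempotence=true makes the broker de-duplicate retried batches using a producer id and per-partition sequence numbers, which removes the duplicates caused by a retry after a lost acknowledgement. A minimal sketch, with placeholder broker address, topic, key, and value:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class IdempotentProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // The broker drops retried batches it has already written, so a retry caused by
        // a lost acknowledgement does not create a duplicate record.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("payments", "order-1", "charged")); // placeholder data
        }
    }
}
```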

Question 15

Producing with a key allows you to...

Options:

A.

Ensure per-record level security

B.

Influence partitioning of the producer messages

C.

Add more information to my message

D.

Allow a Kafka Consumer to subscribe to a (topic,key) pair and only receive that data
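
For reference, the key is hashed to choose the partition, so all records with the same key land on the same partition (and are ordered within it), while records without a key are spread across partitions by the producer. A short sketch with placeholder topic, keys, and values:

```java
import org.apache.kafka.clients.producer.ProducerRecord;

public class KeyedRecordExample {
    public static void main(String[] args) {
        // All records keyed "customerABC" hash to the same partition (as long as the
        // partition count does not change), giving per-key ordering on that partition.
        ProducerRecord<String, String> keyed =
                new ProducerRecord<>("orders", "customerABC", "order-123"); // topic, key, value

        // A record without a key is distributed across partitions by the producer instead.
        ProducerRecord<String, String> unkeyed =
                new ProducerRecord<>("orders", "order-456"); // topic, value (null key)

        System.out.println(keyed.key() + " / " + unkeyed.key());
    }
}
```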

Question 16

Compaction is enabled for a topic in Kafka by setting log.cleanup.policy=compact. What is true about log compaction?

Options:

A.

After cleanup, only one message per key is retained with the first value

B.

Each message stored in the topic is compressed

C.

Kafka automatically de-duplicates incoming messages based on key hashes

D.

After cleanup, only one message per key is retained with the latest value

E.

Compaction changes the offset of messages
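
For reference, with a compact cleanup policy the log cleaner keeps at least the latest record per key and never rewrites offsets; at the topic level the setting is cleanup.policy=compact. A minimal sketch that creates a compacted topic with the Java AdminClient, using a placeholder broker address, topic name, and sizing:

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class CompactedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker

        try (AdminClient admin = AdminClient.create(props)) {
            // cleanup.policy=compact keeps at least the latest value per key after cleaning;
            // surviving records keep their original offsets.
            NewTopic latestByKey = new NewTopic("customer-latest", 3, (short) 3) // placeholder name/sizing
                    .configs(Map.of(TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT));
            admin.createTopics(List.of(latestByKey)).all().get();
        }
    }
}
```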

Question 17

You are receiving orders from different customers in an "orders" topic with multiple partitions. Each message has the customer name as the key. There is a special customer named ABC that generates a lot of orders, and you would like to reserve a partition exclusively for ABC. The rest of the messages should be distributed among the other partitions. How can this be achieved?

Options:

A.

Add metadata to the producer record

B.

Create a custom partitioner

C.

All messages with the same key will go to the same partition, but the same partition may have messages with different keys. It is not possible to reserve a partition for a single key.

D.

Define a Kafka Broker routing rule
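
For reference, reserving a partition for one key is done with a custom partitioner registered through the producer's partitioner.class setting. A hypothetical sketch that pins the key "ABC" to partition 0 and hashes every other key over the remaining partitions (it assumes the topic has at least two partitions):

```java
import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

// Hypothetical partitioner: partition 0 is reserved for the key "ABC";
// every other key is hashed across the remaining partitions.
public class AbcPartitioner implements Partitioner {

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size(); // assumes numPartitions >= 2
        if ("ABC".equals(key)) {
            return 0;
        }
        // Spread all other keys over partitions 1..numPartitions-1.
        return 1 + Utils.toPositive(Utils.murmur2(keyBytes)) % (numPartitions - 1);
    }

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public void close() { }
}
```

The class would then be registered on the producer with props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, AbcPartitioner.class.getName()).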

Question 18

When auto.create.topics.enable is set to true in Kafka configuration, what are the circumstances under which a Kafka broker automatically creates a topic? (select three)

Options:

A.

Client requests metadata for a topic

B.

Consumer reads message from a topic

C.

Client alters number of partitions of a topic

D.

Producer sends message to a topic

Question 19

A topic has three replicas and you set min.insync.replicas to 2. If two out of three replicas are not available, what happens when a consume request is sent to the broker?

Options:

A.

Data will be returned from the remaining in-sync replica

B.

An empty message will be returned

C.

NotEnoughReplicasException will be returned

D.

A new leader for the partition will be elected

Question 20

A Kafka topic has a replication factor of 3 and a min.insync.replicas setting of 2. How many brokers can go down before a producer with acks=1 can no longer produce?

Options:

A.

0

B.

3

C.

1

D.

2

Question 21

A consumer is configured with enable.auto.commit=false. What happens when close() is called on the consumer object?

Options:

A.

The uncommitted offsets are committed

B.

A rebalance in the consumer group will happen immediately

C.

The group coordinator will discover that the consumer stopped sending heartbeats. It will cause a rebalance after session.timeout.ms

Question 22

A topic "sales" is being produced to in the Americas region. You are mirroring this topic using Mirror Maker to the European region. From there, you are only reading the topic for analytics purposes. What kind of mirroring is this?

Options:

A.

Passive-Passive

B.

Active-Active

C.

Active-Passive
