Oracle 1z0-1127-25 Dumps

Page: 1 / 9
Total 88 questions

Oracle Cloud Infrastructure 2025 Generative AI Professional Questions and Answers

Question 1

Which component of Retrieval-Augmented Generation (RAG) evaluates and prioritizes the information retrieved by the retrieval system?

Options:

A.

Retriever

B.

Encoder-Decoder

C.

Generator

D.

Ranker

Question 2

What is the purpose of the "stop sequence" parameter in the OCI Generative AI Generation models?

Options:

A.

It specifies a string that tells the model to stop generating more content.

B.

It assigns a penalty to frequently occurring tokens to reduce repetitive text.

C.

It determines the maximum number of tokens the model can generate per response.

D.

It controls the randomness of the model’s output, affecting its creativity.

Question 3

How does the structure of vector databases differ from traditional relational databases?

Options:

A.

It stores data in a linear or tabular format.

B.

It is not optimized for high-dimensional spaces.

C.

It uses simple row-based data storage.

D.

It is based on distances and similarities in a vector space.
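The distinction in option D can be made concrete: a vector store ranks records by distance or similarity in an embedding space, rather than matching rows on exact keys as a relational table does. Below is a minimal sketch of similarity-based lookup using cosine similarity over toy three-dimensional vectors (real embeddings have hundreds or thousands of dimensions; the record names are illustrative).

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query, records):
    """Return (record_id, similarity) pairs sorted from most to least similar."""
    scored = [(rid, cosine_similarity(query, vec)) for rid, vec in records.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)

# Toy 3-dimensional "embeddings" standing in for real document vectors
records = {
    "doc1": [1.0, 0.0, 0.0],
    "doc2": [0.9, 0.1, 0.0],
    "doc3": [0.0, 1.0, 0.0],
}
ranking = nearest([1.0, 0.05, 0.0], records)
```

A relational query would ask "which rows have this exact value"; the vector query instead asks "which rows are geometrically closest", which is what makes vector databases suitable for semantic search.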

Question 4

How does a presence penalty function in language model generation when using OCI Generative AI service?

Options:

A.

It penalizes all tokens equally, regardless of how often they have appeared.

B.

It only penalizes tokens that have never appeared in the text before.

C.

It applies a penalty only if the token has appeared more than twice.

D.

It penalizes a token each time it appears after the first occurrence.
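One way to picture a penalty of this kind is as an adjustment subtracted from a token's score before sampling. The sketch below is hypothetical (the function name, penalty value, and score dictionary are illustrative, not the OCI implementation): it penalizes a token for each appearance after its first occurrence, making repeated tokens progressively less likely.

```python
from collections import Counter

def apply_presence_penalty(logits, generated_tokens, penalty=0.5):
    """Illustrative sketch: subtract a penalty for each repeat appearance
    of a token after its first occurrence in the generated text."""
    counts = Counter(generated_tokens)
    return {
        tok: score - penalty * max(counts.get(tok, 0) - 1, 0)
        for tok, score in logits.items()
    }

logits = {"the": 2.0, "cat": 1.5, "sat": 1.0}
adjusted = apply_presence_penalty(logits, ["the", "the", "cat"])
# "the" appeared twice, so it is penalized once; "cat" and "sat" are untouched
```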

Question 5

Which statement describes the difference between "Top k" and "Top p" in selecting the next token in the OCI Generative AI Generation models?

Options:

A.

"Top k" and "Top p" are identical in their approach to token selection but differ in their application of penalties to tokens.

B.

"Top k" selects the next token based on its position in the list of probable tokens, whereas "Top p" selects based on the cumulative probability of the top tokens.

C.

"Top k" considers the sum of probabilities of the top tokens, whereas "Top p" selects from the "Top k" tokens sorted by probability.

D.

"Top k" and "Top p" both select from the same set of tokens but use different methods to prioritize them based on frequency.
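The contrast in this question can be sketched in a few lines: top-k keeps a fixed number of the most probable tokens, while top-p (nucleus sampling) keeps the smallest top slice whose cumulative probability reaches a threshold. The probability table below is a made-up example.

```python
def top_k_filter(probs, k):
    """Keep only the k highest-probability tokens."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:k])

def top_p_filter(probs, p):
    """Keep the smallest set of top tokens whose cumulative probability >= p."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = {}, 0.0
    for tok, prob in ranked:
        kept[tok] = prob
        cumulative += prob
        if cumulative >= p:
            break
    return kept

probs = {"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}
k_result = top_k_filter(probs, 2)   # fixed count: always 2 tokens
p_result = top_p_filter(probs, 0.9) # variable count: however many reach 0.9
```

Note how the size of the top-p set adapts to the shape of the distribution, whereas top-k is always exactly k tokens.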

Question 6

What is the function of the Generator in a text generation system?

Options:

A.

To collect user queries and convert them into database search terms

B.

To rank the information based on its relevance to the user's query

C.

To generate human-like text using the information retrieved and ranked, along with the user's original query

D.

To store the generated responses for future use

Question 7

Which is a key advantage of using T-Few over Vanilla fine-tuning in the OCI Generative AI service?

Options:

A.

Reduced model complexity

B.

Enhanced generalization to unseen data

C.

Increased model interpretability

D.

Faster training time and lower cost

Question 8

What does the term "hallucination" refer to in the context of Large Language Models (LLMs)?

Options:

A.

The model's ability to generate imaginative and creative content

B.

A technique used to enhance the model's performance on specific tasks

C.

The process by which the model visualizes and describes images in detail

D.

The phenomenon where the model generates factually incorrect information or unrelated content as if it were true

Question 9

What does accuracy measure in the context of fine-tuning results for a generative model?

Options:

A.

The number of predictions a model makes, regardless of whether they are correct or incorrect

B.

The proportion of incorrect predictions made by the model during an evaluation

C.

How many predictions the model made correctly out of all the predictions in an evaluation

D.

The depth of the neural network layers used in the model

Question 10

How does the utilization of T-Few transformer layers contribute to the efficiency of the fine-tuning process?

Options:

A.

By incorporating additional layers to the base model

B.

By allowing updates across all layers of the model

C.

By excluding transformer layers from the fine-tuning process entirely

D.

By restricting updates to only a specific group of transformer layers

Question 11

What happens if a period (.) is used as a stop sequence in text generation?

Options:

A.

The model ignores periods and continues generating text until it reaches the token limit.

B.

The model generates additional sentences to complete the paragraph.

C.

The model stops generating text after it reaches the end of the current paragraph.

D.

The model stops generating text after it reaches the end of the first sentence, even if the token limit is much higher.
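The effect of a period as a stop sequence can be approximated as a cut at its first occurrence. In practice the model checks for the stop sequence incrementally as tokens are produced; the post-hoc truncation below is a simplification for illustration.

```python
def truncate_at_stop(text, stop_sequence="."):
    """Cut generated text at the first occurrence of the stop sequence.
    Real stopping happens during generation; this post-hoc cut is a sketch."""
    idx = text.find(stop_sequence)
    if idx == -1:
        return text  # stop sequence never produced; generation runs to the token limit
    return text[:idx + len(stop_sequence)]

out = truncate_at_stop("The sky is blue. It is also vast. Clouds drift by.")
# Only the first sentence survives, regardless of how high the token limit is
```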

Question 12

What is the primary function of the "temperature" parameter in the OCI Generative AI Generation models?

Options:

A.

Controls the randomness of the model's output, affecting its creativity

B.

Specifies a string that tells the model to stop generating more content

C.

Assigns a penalty to tokens that have already appeared in the preceding text

D.

Determines the maximum number of tokens the model can generate per response

Question 13

Given the following code:

chain = prompt | llm

Which statement is true about LangChain Expression Language (LCEL)?

Options:

A.

LCEL is a programming language used to write documentation for LangChain.

B.

LCEL is a legacy method for creating chains in LangChain.

C.

LCEL is a declarative and preferred way to compose chains together.

D.

LCEL is an older Python library for building Large Language Models.
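The `chain = prompt | llm` line works because LCEL runnables overload the `|` operator to compose left-to-right. The stand-in classes below imitate that pattern in plain Python so the example runs without LangChain installed; they are not LangChain's actual classes, and the prompt/response strings are invented.

```python
class Runnable:
    """Minimal stand-in for LCEL-style composition (not LangChain's real API)."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # `a | b` yields a new runnable that applies a, then feeds the result to b
        return Runnable(lambda value: other.invoke(self.invoke(value)))

prompt = Runnable(lambda topic: f"Tell me a joke about {topic}")
llm = Runnable(lambda text: f"[model response to: {text}]")

chain = prompt | llm          # declarative composition, as in LCEL
result = chain.invoke("bears")
```

The declarative style is the point of option C: the pipe expresses *what* the chain is, and the framework decides how to run it.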

Question 14

What is the purpose of memory in the LangChain framework?

Options:

A.

To retrieve user input and provide real-time output only

B.

To store various types of data and provide algorithms for summarizing past interactions

C.

To perform complex calculations unrelated to user interaction

D.

To act as a static database for storing permanent records

Question 15

What distinguishes the Cohere Embed v3 model from its predecessor in the OCI Generative AI service?

Options:

A.

Support for tokenizing longer sentences

B.

Improved retrievals for Retrieval Augmented Generation (RAG) systems

C.

Emphasis on syntactic clustering of word embeddings

D.

Capacity to translate text in over 100 languages

Question 16

How are fine-tuned customer models stored to enable strong data privacy and security in the OCI Generative AI service?

Options:

A.

Shared among multiple customers for efficiency

B.

Stored in Object Storage encrypted by default

C.

Stored in an unencrypted form in Object Storage

D.

Stored in Key Management service

Question 17

What do prompt templates use for templating in language model applications?

Options:

A.

Python's list comprehension syntax

B.

Python's str.format syntax

C.

Python's lambda functions

D.

Python's class and object structures
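Option B refers to Python's standard `str.format` placeholder syntax, where named fields in braces are filled in at render time. A minimal example (the template wording is illustrative):

```python
# A prompt template with named placeholders, rendered via str.format
template = "Translate the following text to {language}:\n{text}"
prompt = template.format(language="French", text="Hello, world")
```

The same `{placeholder}` convention is what frameworks such as LangChain use when they render a prompt template against a dictionary of input variables.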

Question 18

How does the temperature setting in a decoding algorithm influence the probability distribution over the vocabulary?

Options:

A.

Increasing the temperature removes the impact of the most likely word.

B.

Decreasing the temperature broadens the distribution, making less likely words more probable.

C.

Increasing the temperature flattens the distribution, allowing for more varied word choices.

D.

Temperature has no effect on probability distribution; it only changes the speed of decoding.
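The flattening effect described in option C comes from dividing the logits by the temperature before the softmax. The sketch below demonstrates it on made-up logits: raising the temperature shrinks the top token's probability, spreading mass toward less likely words.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature flattens the
    distribution, lower temperature sharpens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
sharp = softmax_with_temperature(logits, temperature=0.5)
flat = softmax_with_temperature(logits, temperature=2.0)
# The top token dominates at low temperature and loses ground at high temperature
```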

Question 19

What is the main advantage of using few-shot model prompting to customize a Large Language Model (LLM)?

Options:

A.

It allows the LLM to access a larger dataset.

B.

It eliminates the need for any training or computational resources.

C.

It provides examples in the prompt to guide the LLM to better performance with no training cost.

D.

It significantly reduces the latency for each model request.

Question 20

How are documents usually evaluated in the simplest form of keyword-based search?

Options:

A.

By the complexity of language used in the documents

B.

Based on the number of images and videos contained in the documents

C.

Based on the presence and frequency of the user-provided keywords

D.

According to the length of the documents
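The simplest keyword-based scoring, as option C describes, just counts how often the user's keywords occur in each document. A toy sketch (the documents and keywords are invented; real systems add normalization such as TF-IDF or BM25 on top of this idea):

```python
def keyword_score(document, keywords):
    """Score a document by counting occurrences of user-provided keywords."""
    words = document.lower().split()
    return sum(words.count(k.lower()) for k in keywords)

docs = [
    "Generative AI models generate text",
    "Databases store rows and columns",
]
scores = [keyword_score(d, ["generative", "text"]) for d in docs]
# The first document matches both keywords; the second matches neither
```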

Question 21

What does "k-shot prompting" refer to when using Large Language Models for task-specific applications?

Options:

A.

Providing the exact k words in the prompt to guide the model's response

B.

Explicitly providing k examples of the intended task in the prompt to guide the model’s output

C.

The process of training the model on k different tasks simultaneously to improve its versatility

D.

Limiting the model to only k possible outcomes or answers for a given task
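Option B can be illustrated by assembling a prompt that prepends k worked examples before the new query. The helper name and the sentiment-labeling examples below are hypothetical; the pattern is what matters: no weights are updated, the examples live entirely in the prompt.

```python
def build_k_shot_prompt(examples, query):
    """Assemble a prompt with k worked examples followed by the new query."""
    lines = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    lines.append(f"Input: {query}\nOutput:")  # model completes after the last "Output:"
    return "\n\n".join(lines)

# k = 2 illustrative sentiment-labeling examples
prompt = build_k_shot_prompt(
    [("I loved the movie", "positive"), ("The food was awful", "negative")],
    "The service was great",
)
```

With k = 0 this degenerates to zero-shot prompting: the model sees only the task query with no demonstrations.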

Question 22

How are chains traditionally created in LangChain?

Options:

A.

By using machine learning algorithms

B.

Declaratively, with no coding required

C.

Using Python classes, such as LLMChain and others

D.

Exclusively through third-party software integrations

Question 23

You create a fine-tuning dedicated AI cluster to customize a foundational model with your custom training data. How many unit hours are required for fine-tuning if the cluster is active for 10 hours?

Options:

A.

25 unit hours

B.

40 unit hours

C.

20 unit hours

D.

30 unit hours

Question 24

When is fine-tuning an appropriate method for customizing a Large Language Model (LLM)?

Options:

A.

When the LLM already understands the topics necessary for text generation

B.

When the LLM does not perform well on a task and the data for prompt engineering is too large

C.

When the LLM requires access to the latest data for generating outputs

D.

When you want to optimize the model without any instructions

Question 25

What issue might arise from using small datasets with the Vanilla fine-tuning method in the OCI Generative AI service?

Options:

A.

Overfitting

B.

Underfitting

C.

Data Leakage

D.

Model Drift

Question 26

When should you use the T-Few fine-tuning method for training a model?

Options:

A.

For complicated semantic understanding improvement

B.

For models that require their own hosting dedicated AI cluster

C.

For datasets with a few thousand samples or less

D.

For datasets with hundreds of thousands to millions of samples
