Oracle Cloud Infrastructure 2025 Generative AI Professional Questions and Answers
Which component of Retrieval-Augmented Generation (RAG) evaluates and prioritizes the information retrieved by the retrieval system?
What is the purpose of the "stop sequence" parameter in the OCI Generative AI Generation models?
How does the structure of vector databases differ from traditional relational databases?
How does a presence penalty function in language model generation when using the OCI Generative AI service?
Which statement describes the difference between "Top k" and "Top p" in selecting the next token in the OCI Generative AI Generation models?
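To make the "Top k" vs "Top p" distinction concrete, here is a minimal sampling sketch (not the OCI service implementation): Top k keeps a fixed number of the highest-probability tokens, while Top p keeps the smallest set of tokens whose cumulative probability reaches p. The function name and toy logits are illustrative only.

```python
import math
import random

def sample_next_token(logits, top_k=None, top_p=None):
    """Pick the next token from raw logits using Top-k or Top-p (nucleus)
    filtering. Illustrative sketch of the two strategies."""
    # Convert logits to probabilities with a softmax.
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    total = sum(exps.values())
    probs = {t: e / total for t, e in exps.items()}

    # Sort tokens by probability, highest first.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)

    if top_k is not None:
        ranked = ranked[:top_k]          # Top k: fixed-size candidate pool
    elif top_p is not None:
        pool, cum = [], 0.0
        for tok, p in ranked:
            pool.append((tok, p))
            cum += p
            if cum >= top_p:             # Top p: variable-size pool
                break
        ranked = pool

    # Renormalize over the surviving candidates and sample.
    tokens, weights = zip(*ranked)
    return random.choices(tokens, weights=weights, k=1)[0]
```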
What is the function of the Generator in a text generation system?
Which is a key advantage of using T-Few over Vanilla fine-tuning in the OCI Generative AI service?
What does the term "hallucination" refer to in the context of Large Language Models (LLMs)?
What does accuracy measure in the context of fine-tuning results for a generative model?
How does the utilization of T-Few transformer layers contribute to the efficiency of the fine-tuning process?
What happens if a period (.) is used as a stop sequence in text generation?
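The effect of a period as a stop sequence can be simulated client-side: the model stops emitting tokens at the first ".", so the output is truncated to a single sentence. This helper is a hypothetical illustration, not part of the OCI SDK.

```python
def apply_stop_sequence(generated, stop="."):
    """Simulate a stop sequence: with '.' as the stop string, generation
    halts at the first period, so at most one sentence is returned."""
    idx = generated.find(stop)
    return generated if idx == -1 else generated[: idx + len(stop)]
```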
What is the primary function of the "temperature" parameter in the OCI Generative AI Generation models?
Given the following code:
chain = prompt | llm
Which statement is true about LangChain Expression Language (LCEL)?
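The `chain = prompt | llm` line relies on LCEL's pipe operator, which composes runnables so each component's output feeds the next one's input. The stand-in class below is a minimal sketch of that composition pattern using plain Python (the real classes live in `langchain_core.runnables`; the toy `prompt` and `llm` here are illustrative, not actual LangChain objects).

```python
class Runnable:
    """Minimal stand-in for a LangChain Runnable, showing how LCEL's
    pipe operator chains components declaratively."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # `a | b` yields a new runnable equivalent to b.invoke(a.invoke(x)).
        return Runnable(lambda value: other.invoke(self.invoke(value)))

# A toy "prompt" and "llm" composed exactly like `chain = prompt | llm`.
prompt = Runnable(lambda topic: f"Tell me a joke about {topic}")
llm = Runnable(lambda text: f"[model output for: {text}]")

chain = prompt | llm
```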
What is the purpose of memory in the LangChain framework?
What distinguishes the Cohere Embed v3 model from its predecessor in the OCI Generative AI service?
How are fine-tuned customer models stored to enable strong data privacy and security in the OCI Generative AI service?
What do prompt templates use for templating in language model applications?
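As background for the templating question: LangChain's default prompt templates use Python `str.format`-style placeholders. The sketch below shows the pattern with the standard library alone; the template text is made up for illustration.

```python
# Prompt-template-style substitution using Python's str.format syntax,
# the same placeholder style LangChain's PromptTemplate defaults to.
template = "Translate the following text to {language}:\n{text}"
filled = template.format(language="French", text="Hello, world")
```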
How does the temperature setting in a decoding algorithm influence the probability distribution over the vocabulary?
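The temperature question can be answered with a worked example: dividing the logits by the temperature before the softmax flattens the distribution when T > 1 and sharpens it when T < 1. A minimal sketch:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Temperature-scaled softmax: logits are divided by T, so T > 1
    flattens the distribution and T < 1 sharpens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
sharp = softmax_with_temperature(logits, 0.5)  # peakier: top token dominates
flat = softmax_with_temperature(logits, 2.0)   # flatter: probabilities closer
```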
What is the main advantage of using few-shot model prompting to customize a Large Language Model (LLM)?
How are documents usually evaluated in the simplest form of keyword-based search?
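For the keyword-search question, the simplest form of evaluation scores each document by the presence and frequency of the query's terms, with no semantic understanding. A toy scorer (function and sample documents are illustrative):

```python
def keyword_score(query, document):
    """Simplest keyword-based relevance: count occurrences of each query
    term in the document (term presence/frequency, no semantics)."""
    doc_terms = document.lower().split()
    return sum(doc_terms.count(term) for term in query.lower().split())

docs = ["the cat sat on the mat", "dogs chase cats in the park"]
scores = [keyword_score("cat mat", d) for d in docs]
```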
What does "k-shot prompting" refer to when using Large Language Models for task-specific applications?
How are chains traditionally created in LangChain?
You create a fine-tuning dedicated AI cluster to customize a foundational model with your custom training data. How many unit hours are required for fine-tuning if the cluster is active for 10 hours?
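The unit-hour arithmetic behind this question is units × hours active. Per OCI documentation, a fine-tuning dedicated AI cluster is provisioned with 2 units; confirm the unit count for your model and cluster type in the current OCI docs before relying on this figure.

```python
# Unit-hour arithmetic for a dedicated AI cluster.
# Assumption from OCI docs: a fine-tuning cluster uses 2 units.
units = 2
hours_active = 10
unit_hours = units * hours_active  # 2 units * 10 hours = 20 unit hours
```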
When is fine-tuning an appropriate method for customizing a Large Language Model (LLM)?
What issue might arise from using small datasets with the Vanilla fine-tuning method in the OCI Generative AI service?
When should you use the T-Few fine-tuning method for training a model?