Oracle Cloud Infrastructure 2023 AI Foundations Associate Questions and Answers
Which AI domain is associated with tasks such as recognizing faces in images and classifying objects?
Options:
Computer Vision
Anomaly Detection
Speech Processing
Natural Language Processing
Answer:
A
Explanation:
Computer Vision is an AI domain that is associated with tasks such as recognizing faces in images and classifying objects. Computer vision is a field of artificial intelligence that enables computers and systems to derive meaningful information from digital images, videos, and other visual inputs, and to take actions or make recommendations based on that information. Computer vision works by applying machine learning and deep learning models to visual data, such as pixels, colors, shapes, textures, etc., and extracting features and patterns that can be used for various purposes. Some of the common techniques and applications of computer vision are:
- Face recognition: Identifying or verifying the identity of a person based on their facial features.
- Object detection: Locating and labeling objects of interest in an image or a video.
- Object recognition: Classifying objects into predefined categories, such as animals, vehicles, fruits, etc.
- Scene understanding: Analyzing the context and semantics of a visual scene, such as the location, time, weather, activity, etc.
- Image segmentation: Partitioning an image into multiple regions that share similar characteristics, such as color, texture, shape, etc.
- Image enhancement: Improving the quality or appearance of an image by applying filters, transformations, or corrections.
- Image generation: Creating realistic or stylized images from scratch or based on some input data, such as sketches, captions, or attributes.
References: What is Computer Vision? | IBM, Computer vision - Wikipedia
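To make the object-recognition idea concrete, here is a minimal sketch that classifies an image with a pretrained network. It assumes PyTorch and torchvision are installed; "cat.jpg" is a placeholder path for any RGB image on disk:

```python
import torch
from torchvision import models
from PIL import Image

# Load a pretrained ResNet-18 (weights enum requires torchvision >= 0.13)
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

# The weights object carries the matching preprocessing pipeline
preprocess = weights.transforms()

img = Image.open("cat.jpg")            # placeholder: any RGB image
batch = preprocess(img).unsqueeze(0)   # shape: (1, 3, 224, 224)

with torch.no_grad():
    logits = model(batch)
probs = logits.softmax(dim=1)
top_prob, top_idx = probs.max(dim=1)

# Print the predicted ImageNet category and its confidence
print(weights.meta["categories"][top_idx], float(top_prob))
```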
Which NVIDIA GPU is offered by Oracle Cloud Infrastructure?
Options:
P200
T4
A100
K80
Answer:
C
Explanation:
Oracle Cloud Infrastructure offers NVIDIA A100 Tensor Core GPUs as one of the GPU options for its compute instances. The NVIDIA A100 GPU is a powerful and versatile GPU that can accelerate a wide range of AI and HPC workloads. The A100 GPU delivers up to 20x higher performance than the previous generation V100 GPU and supports features such as multi-instance GPU, automatic mixed precision, and sparsity acceleration. The OCI Compute bare-metal BM.GPU4.8 instance offers eight 40GB NVIDIA A100 GPUs linked via high-speed NVIDIA NVLink direct GPU-to-GPU interconnects. This instance is ideal for training large language models, computer vision models, and other complex AI tasks. References: Accelerated Computing and Oracle Cloud Infrastructure (OCI) - NVIDIA, Oracle Cloud Infrastructure Offers New NVIDIA GPU-Accelerated Compute …, GPU, Virtual Machines and Bare Metal | Oracle
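The automatic mixed precision mentioned above can be exercised from PyTorch on any CUDA GPU, including an A100 on an OCI instance. Here is a minimal sketch (not OCI-specific; assumes PyTorch with CUDA support, and the model and data are toy placeholders):

```python
import torch

# Detect the GPU attached to the instance (e.g., an NVIDIA A100 on an
# OCI BM.GPU4.8 shape); falls back to CPU if no GPU is present.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if device.type == "cuda":
    print(torch.cuda.get_device_name(0))  # e.g., "NVIDIA A100-SXM4-40GB"

model = torch.nn.Linear(1024, 1024).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler(enabled=device.type == "cuda")

x = torch.randn(64, 1024, device=device)
target = torch.randn(64, 1024, device=device)

# Automatic mixed precision: matmuls run in reduced precision on
# the A100's Tensor Cores, which is part of its speedup over V100.
with torch.autocast(device_type=device.type, enabled=device.type == "cuda"):
    loss = torch.nn.functional.mse_loss(model(x), target)

scaler.scale(loss).backward()
scaler.step(opt)
scaler.update()
```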
What is the primary function of Oracle Cloud Infrastructure Speech service?
Options:
Converting text into images
Analyzing sentiment in text
Transcribing spoken language into written text
Recognizing objects in images
Answer:
C
Explanation:
Oracle Cloud Infrastructure Speech is an AI service that applies automatic speech recognition (ASR) technology to transform audio-based content into text. Developers can easily make API calls to integrate Speech’s pretrained models into their applications. Speech can be used for accurate, text-normalized, time-stamped transcription via the console and REST APIs as well as command-line interfaces or SDKs. You can also use Speech in an OCI Data Science notebook session. With Speech, you can filter profanities, get confidence scores for both single words and complete transcriptions, and more. References: Speech AI Service that Uses ASR | OCI Speech - Oracle
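As a rough sketch of what such an API call looks like, here is a transcription job submitted through the OCI Python SDK. The compartment OCID, namespace, bucket names, and object name are placeholders you must supply, and exact model names may vary by SDK version:

```python
import oci

# Reads credentials from ~/.oci/config
config = oci.config.from_file()
client = oci.ai_speech.AIServiceSpeechClient(config)

# Placeholder values throughout: compartment OCID, namespace, buckets.
details = oci.ai_speech.models.CreateTranscriptionJobDetails(
    compartment_id="ocid1.compartment.oc1..example",
    input_location=oci.ai_speech.models.ObjectListInlineInputLocation(
        object_locations=[
            oci.ai_speech.models.ObjectLocation(
                namespace_name="my-namespace",
                bucket_name="audio-in",
                object_names=["meeting.wav"],
            )
        ],
    ),
    output_location=oci.ai_speech.models.OutputLocation(
        namespace_name="my-namespace",
        bucket_name="transcripts-out",
        prefix="jobs/",
    ),
)

job = client.create_transcription_job(details).data
# Poll until the job reaches SUCCEEDED, then read the JSON transcript
# written to the output bucket.
print(job.id, job.lifecycle_state)
```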
In machine learning, what does the term "model training" mean?
Options:
Analyzing the accuracy of a trained model
Establishing a relationship between input features and output
Writing code for the entire program
Performing data analysis on collected and labeled data
Answer:
B
Explanation:
Model training is the process of finding the optimal values for the model parameters that minimize the error between the model predictions and the actual output. This is done by using a learning algorithm that iteratively updates the parameters based on the input features and the output. References: Oracle Cloud Infrastructure Documentation
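A toy gradient-descent example makes the iterative update loop concrete. This is a self-contained NumPy sketch with made-up data, fitting y = w*x + b by minimizing mean squared error:

```python
import numpy as np

# Synthetic data with known ground truth: w = 3.0, b = 0.5
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=100)

w, b, lr = 0.0, 0.0, 0.1
for step in range(500):
    pred = w * x + b
    error = pred - y
    # Gradients of mean squared error with respect to each parameter
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    # The learning algorithm iteratively updates the parameters
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # approaches w≈3.00, b≈0.50
```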
What role do tokens play in Large Language Models (LLMs)?
Options:
They represent the numerical values of model parameters.
They are used to define the architecture of the model's neural network.
They are individual units into which a piece of text is divided during processing by the model.
They determine the size of the model's memory.
Answer:
C
Explanation:
Tokens are the basic units of text representation in large language models. They can be words, subwords, characters, or symbols. Tokens are used to encode the input text into numerical vectors that can be processed by the model’s neural network. Tokens also determine the vocabulary size and the maximum sequence length of the model. References: Oracle Cloud Infrastructure 2023 AI Foundations Associate | Oracle University
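A toy tokenizer shows the text-to-IDs mapping. Real LLMs use learned subword schemes (e.g., byte-pair encoding) rather than the word-level split and hand-built vocabulary used here for illustration:

```python
# Toy illustration (not any real LLM's tokenizer): text is split into
# tokens, and each token maps to an integer ID the network consumes.
vocab = {"<unk>": 0, "the": 1, "cat": 2, "sat": 3, "on": 4, "mat": 5}

def tokenize(text: str) -> list[int]:
    tokens = text.lower().split()  # word-level split for simplicity
    return [vocab.get(tok, vocab["<unk>"]) for tok in tokens]

print(tokenize("The cat sat on the mat"))  # [1, 2, 3, 4, 1, 5]
print(tokenize("The dog sat"))             # [1, 0, 3] -- "dog" is out of vocabulary
```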
Which AI domain is associated with tasks such as identifying the sentiment of text and translating text between languages?
Options:
Natural Language Processing
Speech Processing
Anomaly Detection
Computer Vision
Answer:
A
Explanation:
Natural Language Processing (NLP) is an AI domain that is associated with tasks such as identifying the sentiment of text and translating text between languages. NLP is an interdisciplinary field that combines computer science, linguistics, and artificial intelligence to enable computers to process and understand natural language data, such as text or speech. NLP involves various techniques and applications, such as:
- Text analysis: Extracting meaningful information from text data, such as keywords, entities, topics, sentiments, emotions, etc.
- Text generation: Producing natural language text from structured or unstructured data, such as summaries, captions, headlines, stories, etc.
- Machine translation: Translating text or speech from one language to another automatically and accurately.
- Question answering: Retrieving relevant answers to natural language questions from a knowledge base or a document collection.
- Speech recognition: Converting speech signals into text or commands.
- Speech synthesis: Converting text into speech signals with natural sounding voices.
- Natural language understanding: Interpreting the meaning and intent of natural language inputs and generating appropriate responses.
- Natural language generation: Creating natural language outputs that are coherent, fluent, and relevant to the context.
References: What is Natural Language Processing? | IBM, Natural language processing - Wikipedia
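To make the sentiment-analysis task concrete, here is a deliberately simple lexicon-based scorer. It is a toy for illustration only; production NLP systems use trained models rather than hand-written word lists:

```python
# Toy lexicon-based sentiment scorer. Real sentiment analysis
# (e.g., in a cloud NLP service) is done by trained models.
POSITIVE = {"great", "good", "excellent", "love", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this excellent service"))  # positive
print(sentiment("The support was terrible"))       # negative
```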
Which type of machine learning is used to understand relationships within data and is not focused on making predictions or classifications?
Options:
Active learning
Unsupervised learning
Reinforcement learning
Supervised learning
Answer:
B
Explanation:
Unsupervised learning is a type of machine learning that is used to understand relationships within data and is not focused on making predictions or classifications. Unsupervised learning algorithms work with unlabeled data, which means the data does not have predefined categories or outcomes. The goal of unsupervised learning is to discover hidden patterns, structures, or features in the data that can provide valuable insights or reduce complexity. Some of the common techniques and applications of unsupervised learning are:
- Clustering: Grouping similar data points together based on their attributes or distances. For example, clustering can be used to segment customers based on their preferences, behavior, or demographics.
- Dimensionality reduction: Reducing the number of variables or features in a dataset while preserving the essential information. For example, dimensionality reduction can be used to compress images, remove noise, or visualize high-dimensional data in lower dimensions.
- Anomaly detection: Identifying outliers or abnormal data points that deviate from the normal distribution or behavior of the data. For example, anomaly detection can be used to detect fraud, network intrusion, or system failure.
- Association rule mining: Finding rules that describe how variables or items are related or co-occur in a dataset. For example, association rule mining can be used to discover frequent itemsets in market basket analysis or recommend products based on purchase history.
References: Unsupervised learning - Wikipedia, What is Unsupervised Learning? | IBM
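Clustering, the first technique above, is easy to demonstrate with scikit-learn. Note that the data carries no labels; k-means discovers the grouping structure on its own (the two blobs here are synthetic):

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled data: two synthetic blobs of points; no categories given up front.
rng = np.random.default_rng(0)
blob_a = rng.normal(loc=[0, 0], scale=0.5, size=(50, 2))
blob_b = rng.normal(loc=[5, 5], scale=0.5, size=(50, 2))
X = np.vstack([blob_a, blob_b])

# K-means groups similar points by distance, with no labels involved.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_)              # roughly [0, 0] and [5, 5]
print(km.labels_[:5], km.labels_[-5:])  # cluster assignment per point
```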
What is the difference between classification and regression in Supervised Machine Learning?
Options:
Classification assigns data points to categories, whereas regression predicts continuous values.
Classification and regression both predict continuous values.
Classification predicts continuous values, whereas regression assigns data points to categories.
Classification and regression both assign data points to categories.
Answer:
A
Explanation:
Classification and regression are two subtypes of supervised learning in machine learning. The main difference between them is the type of output variable they deal with. Classification assigns data points to discrete categories based on some criteria or rules. For example, classifying emails into spam or not spam based on their content is a classification problem because the output variable is binary (spam or not spam). Regression predicts continuous values for data points based on their input features. For example, predicting house prices based on their size, location, amenities, etc., is a regression problem because the output variable is continuous (house price). Classification and regression use different types of algorithms and metrics to evaluate their performance. References: Oracle Cloud Infrastructure AI - Machine Learning Concepts, Classification vs Regression in Machine Learning | by …
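The distinction is visible in code: a classifier returns a category label, a regressor returns a continuous number. A minimal scikit-learn sketch (the data is made up for illustration):

```python
from sklearn.linear_model import LogisticRegression, LinearRegression

# Classification: discrete categories (spam = 1, not spam = 0).
X_cls = [[0.1], [0.4], [0.8], [0.9]]   # e.g., fraction of "spammy" words
y_cls = [0, 0, 1, 1]
clf = LogisticRegression().fit(X_cls, y_cls)
print(clf.predict([[0.7]]))            # -> array([1]): a category label

# Regression: continuous values (house price in thousands).
X_reg = [[50], [80], [120], [200]]     # e.g., house size in square meters
y_reg = [150.0, 230.0, 340.0, 560.0]
reg = LinearRegression().fit(X_reg, y_reg)
print(reg.predict([[100]]))            # -> a continuous price estimate
```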
What is the purpose of fine-tuning Large Language Models?
Options:
To reduce the number of parameters in the model
To increase the complexity of the model architecture
To specialize the model's capabilities for specific tasks
To prevent the model from overfitting
Answer:
C
Explanation:
Fine-tuning is the process of updating the model parameters on a new task and dataset, using a pre-trained large language model as the starting point. Fine-tuning allows the model to adapt to the specific context and domain of the new task, and improve its performance and accuracy. Fine-tuning can be used to customize the model’s capabilities for specific tasks such as text classification, named entity recognition, and machine translation. Fine-tuning is a form of transfer learning. References: A Complete Guide to Fine Tuning Large Language Models, Finetuning Large Language Models - DeepLearning.AI
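As a rough sketch of what one fine-tuning step looks like in practice, here is a minimal example using the Hugging Face transformers library. The checkpoint name and the two-example "dataset" are placeholders; a real run would iterate over many batches:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Start from a pretrained checkpoint (placeholder name) and adapt it
# to a new task: binary text classification.
name = "distilbert-base-uncased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

# Placeholder task data: two labeled examples.
texts = ["great product", "would not recommend"]
labels = torch.tensor([1, 0])
batch = tok(texts, padding=True, truncation=True, return_tensors="pt")

# One fine-tuning step: the pretrained weights are the starting point,
# and gradient updates specialize them for the new task.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
out = model(**batch, labels=labels)
out.loss.backward()
optimizer.step()
optimizer.zero_grad()
print(float(out.loss))
```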