Microsoft Azure AI Fundamentals Questions and Answers
You need to create a training dataset and validation dataset from an existing dataset.
Which module in the Azure Machine Learning designer should you use?
Options:
Select Columns in Dataset
Add Rows
Split Data
Join Data
Answer:
C
Explanation:
In Azure Machine Learning designer, the Split Data module is specifically designed to divide a dataset into training and validation (or testing) subsets. The AI-900 study guide and the Microsoft Learn module “Split data for training and evaluation” explain that this module allows users to control how data is partitioned, ensuring that models are trained on one portion of the data and tested on unseen data to assess performance.
By default, the Split Data module uses a 70/30 or 80/20 ratio, meaning 70–80% of the data is used for training and the remaining 20–30% for validation or testing. This ensures the model’s generalizability and prevents overfitting.
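The split the module performs can be sketched in plain Python (a toy illustration of the idea, not the Azure module itself; the function name and seed are made up):

```python
import random

def split_data(rows, train_fraction=0.7, seed=42):
    """Shuffle rows, then cut them into training and validation subsets.

    Illustrates the idea behind the designer's Split Data module;
    this is not an Azure API.
    """
    shuffled = rows[:]            # copy so the caller's list is untouched
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

dataset = list(range(100))        # stand-in for 100 data rows
train, valid = split_data(dataset, train_fraction=0.7)
print(len(train), len(valid))     # 70 30
```

Shuffling before the cut matters: without it, any ordering in the source data (for example, by date) would leak into the split.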
The other options serve different purposes:
A. Select Columns in Dataset: Used to choose specific columns or features from a dataset.
B. Add Rows: Combines multiple datasets vertically.
D. Join Data: Combines datasets horizontally based on a common key.
Only Split Data performs the function of dividing data into training and validation subsets.
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Identify features of Computer Vision workloads on Azure”, the Custom Vision service is a part of Azure Cognitive Services that allows users to build, train, and deploy custom image classification and object detection models. It is primarily designed for still-image analysis, not video processing.
“The Custom Vision service can be used to detect objects in an image.” – Yes. This is correct. The Custom Vision service supports two major model types: classification (categorizing entire images) and object detection (identifying and locating multiple objects within a single image). In object detection mode, the model outputs both the object’s category and its position in the image using bounding boxes. This capability is emphasized in the AI-900 curriculum as an example of applying computer vision to real-world scenarios, such as identifying products on shelves or detecting equipment parts in manufacturing.
“The Custom Vision service requires that you provide your own data to train the model.” – Yes. This statement is also true. Unlike prebuilt computer vision models, Custom Vision is a trainable model that requires users to upload their own labeled images to create a domain-specific AI model. The model’s accuracy depends on the quality and quantity of this user-provided data. The AI-900 study materials explain that Custom Vision is used when prebuilt models do not meet specific needs, enabling businesses to train models tailored to unique image sets.
“The Custom Vision service can be used to analyze video files.” – No. This is incorrect. Custom Vision is limited to image-based analysis. To analyze video content (detecting objects or motion in moving frames), Azure provides Video Indexer, which is a separate service designed for extracting insights from video files, including speech, objects, faces, and emotions.
Select the answer that correctly completes the sentence.

Options:
Answer:

Explanation:

According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore fundamental principles of machine learning,” regression is a type of supervised machine learning used to predict continuous numeric values.
In this question, the goal is to predict how many vehicles will travel across a bridge on a given day. The predicted output (the number of vehicles) is a continuous value—meaning it can take on any numerical value depending on various factors like time, weather, or day of the week. This makes it a regression problem, as the model learns from historical numeric data to estimate a continuous outcome.
How Regression Works:
Regression models find patterns between input features (such as temperature, weekday/weekend, traffic trends) and a numerical output (number of vehicles). Common regression algorithms include linear regression, decision trees for regression, and neural network regression. In Azure Machine Learning, regression tasks are used for business scenarios such as:
Predicting sales revenue for a future month.
Estimating house prices based on property characteristics.
Forecasting energy consumption or traffic flow, as in this case.
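A minimal illustration of what a regression model learns, using a one-feature least-squares fit (the traffic numbers below are invented for the example):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (single feature).

    A toy regression: learn a continuous numeric output from a
    numeric input.
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# hypothetical history: day index vs. vehicles crossing the bridge
days     = [1, 2, 3, 4, 5]
vehicles = [110, 120, 130, 140, 150]
a, b = fit_line(days, vehicles)
print(a * 6 + b)   # predict the count for day 6
```

The output is a continuous number, not a category, which is exactly the property that makes this a regression rather than a classification problem.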
Why not the other options?
Classification: Used for predicting discrete categories (e.g., “spam” vs. “not spam”). It does not handle continuous numeric values.
Clustering: An unsupervised learning technique used to group data points based on similarity without predefined labels (e.g., segmenting customers into groups).
Therefore, the task of predicting the number of vehicles—a numeric, continuous value—is a regression problem.
Providing contextual information to improve the response quality of a generative AI solution is an example of which prompt engineering technique?
Options:
providing examples
fine-tuning
grounding data
system messages
Answer:
C
Explanation:
In Microsoft Azure OpenAI Service and the AI-900/AI-102 study materials, grounding data is the correct term used to describe the process of providing contextual or external information to improve the accuracy, relevance, and quality of responses generated by a generative AI model such as GPT-3.5 or GPT-4.
Grounding is a prompt engineering technique where the AI model is supplemented with relevant background data, such as company documents, knowledge bases, or user context, that helps the model generate factually correct and context-aware responses. Microsoft Learn defines grounding as a way to connect the model’s general knowledge to specific, real-world information. For example, if you ask a GPT-3.5 model about your organization’s HR policies, the base model will not know them unless that policy information is provided (grounded) in the prompt. By embedding this contextual data, the AI becomes “grounded” in the facts it needs to respond reliably.
This technique differs from other prompt engineering concepts:
A. Providing examples (few-shot prompting) shows the model sample inputs and outputs to guide formatting or style, not factual context.
B. Fine-tuning involves retraining the model with labeled data to permanently adjust its behavior — it’s not a prompt-based technique.
D. System messages define the model’s role, tone, or style (for example, “You are a helpful assistant”) but do not add factual context.
Therefore, when you provide contextual information (like product details, policy documents, or reference text) within a prompt to enhance the quality and factual reliability of the model’s responses, you are applying the grounding data technique.
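At the prompt level, grounding is simply contextual data assembled ahead of the question. The sketch below shows the idea with hypothetical HR-policy snippets; the wording and documents are illustrative, not an Azure API:

```python
def build_grounded_prompt(question, grounding_docs):
    """Assemble a prompt that embeds contextual (grounding) data
    before the user's question. All content here is hypothetical."""
    context = "\n".join(f"- {doc}" for doc in grounding_docs)
    return (
        "Answer the question using ONLY the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\n"
        "Answer:"
    )

docs = ["Employees accrue 20 vacation days per year.",
        "Unused days expire on December 31."]
prompt = build_grounded_prompt("How many vacation days do I get?", docs)
print(prompt)
```

Without the embedded context, a base model would have no way to know this organization's policy; with it, the answer is constrained to the supplied facts.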
Select the answer that correctly completes the sentence.

Options:
Answer:

Explanation:
When building a K-means clustering model, all features (variables) used in the model must be numeric in nature. According to the Microsoft Azure AI Fundamentals (AI-900) study materials and standard machine learning theory, K-means clustering is an unsupervised learning algorithm that groups data points into clusters based on their similarity — specifically by minimizing the Euclidean distance between data points and their assigned cluster centroids.
Because the K-means algorithm depends on distance calculations, it requires numeric data types. The Euclidean distance (or similar measures) can only be computed between numerical values. Therefore, all categorical or text data must first be converted into numeric form through feature engineering techniques such as one-hot encoding, label encoding, or embedding vectors, depending on the nature of the data.
Here’s how K-means works in summary:
The algorithm initializes a predefined number of centroids (K).
Each data point is assigned to the nearest centroid based on numeric distance.
The centroids are recalculated as the mean of the points in each cluster.
The process repeats until convergence.
If non-numeric data (e.g., text or Boolean) were provided, the model would not be able to calculate distances accurately, leading to computational errors.
Other options are incorrect:
Boolean and integer values are narrower special cases; the completion the question asks for is the general numeric type, since K-means expects features it can treat as continuous numeric values.
Text cannot be processed directly without conversion.
Thus, according to Azure Machine Learning and AI-900 official concepts, all features in a K-means clustering model must be numeric to ensure valid mathematical operations and clustering accuracy.
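The loop described above can be written in a few lines. This is a toy 1-D implementation with simplified initialization (the first k points), not production K-means, but every step is a numeric distance or a mean, which is why all features must be numeric:

```python
def kmeans(points, k, iters=20):
    """Minimal 1-D K-means sketch: assign each point to its nearest
    centroid, recompute centroids as cluster means, repeat."""
    centroids = points[:k]                        # naive initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                          # assign to nearest centroid
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

centroids, clusters = kmeans([1.0, 2.0, 1.5, 10.0, 11.0, 10.5], k=2)
print(sorted(centroids))   # the two cluster means
```

Try replacing a point with a string: the `abs(p - centroids[i])` distance immediately fails, which is the concrete reason categorical data must be encoded numerically first.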

You need to develop a web-based AI solution for a customer support system. Users must be able to interact with a web app that will guide them to the best resource or answer.
Which service should you integrate with the web app to meet the goal?
Options:
Azure AI Language Service
Face
Azure AI Translator
Azure AI Custom Vision
Answer:
A
Explanation:
The capabilities formerly provided by QnA Maker are now part of the Azure AI Language Service as its question answering feature. It lets you create a conversational question-and-answer layer over your existing data: you build a knowledge base by extracting questions and answers from your semistructured content, including FAQs, manuals, and documents, and the service automatically answers users’ questions with the best answers from that knowledge base. The knowledge base gets smarter, too, as it continually learns from user behavior. Face, Translator, and Custom Vision address vision and translation workloads, not conversational guidance, so the Azure AI Language Service is the service to integrate with the web app.
Which OpenAI model does GitHub Copilot use to make suggestions for client-side JavaScript?
Options:
GPT-4
Codex
DALL-E
GPT-3
Answer:
B
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) learning path and Microsoft Learn documentation on GitHub Copilot, GitHub Copilot is powered by OpenAI Codex, a specialized language model derived from the GPT-3 family but fine-tuned specifically on programming languages and code data.
OpenAI Codex was designed to translate natural language prompts into executable code in multiple programming languages, including JavaScript, Python, C#, TypeScript, and Go. It can understand comments, function names, and code structure to generate relevant code suggestions in real time.
When a developer writes client-side JavaScript, GitHub Copilot uses Codex to analyze the context of the file and generate intelligent suggestions, such as completing functions, writing boilerplate code, or suggesting improvements. Codex can also explain what specific code does and provide inline documentation, which enhances developer productivity.
Option A (GPT-4): While some newer versions of GitHub Copilot (Copilot X) may integrate GPT-4 for conversational explanations, the core code completion engine remains based on Codex, as per the AI-900-level content.
Option C (DALL-E): Used for image generation, not for programming tasks.
Option D (GPT-3): Codex was fine-tuned from GPT-3 but has been further trained specifically for code generation tasks.
Therefore, the verified and official answer from Microsoft’s AI-900 curriculum is B. Codex — the OpenAI model used by GitHub Copilot to make suggestions for client-side JavaScript and other programming languages.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

The Azure AI Language Service includes several natural language processing features, such as question answering, language understanding, entity recognition, sentiment analysis, and more. Each feature serves a distinct purpose, and understanding their differences is key to selecting the correct AI workload.
“You can use Azure AI Language Service’s question answering to query an Azure SQL database.” — No. The question answering feature is designed to retrieve answers from text-based knowledge sources (for example, FAQs, documents, or website content). It cannot directly query a database such as Azure SQL. Querying databases requires Azure Cognitive Search, Azure OpenAI, or custom integration using application logic, not the question answering model.
“You should use Azure AI Language Service’s question answering when you want a knowledge base to provide the same answer to different users who submit similar questions.” — Yes. This is the primary use case of question answering. It allows developers to build a knowledge base (KB) of predefined question-answer pairs or extract answers from documents. When users submit semantically similar questions (e.g., “What are your office hours?” or “When are you open?”), the service returns the same consistent answer.
“Azure AI Language Service’s question answering can determine the intent of a user utterance.” — No. Determining user intent is part of the Language Understanding (LUIS) capability, not question answering. LUIS models map natural language inputs to intents and entities, typically used in bots or applications that execute tasks (like booking a meeting or checking weather).
Hence, correct answers are: No, Yes, No — aligning with the AI-900 official study guide and Microsoft Learn module “Identify Azure AI Language capabilities.”
What should you do to ensure that an Azure OpenAI model generates accurate responses that include recent events?
Options:
Modify the system message.
Add grounding data.
Add few-shot learning.
Add training data.
Answer:
B
Explanation:
In Azure OpenAI, grounding refers to the process of connecting the model to external data sources (for example, a database, search index, or API) so that it can retrieve accurate and up-to-date information before generating a response. This is particularly important for scenarios requiring current facts or events, since OpenAI models like GPT-3.5 and GPT-4 are trained on data available only up to a certain cutoff date.
By adding grounding data, the model’s responses are “anchored” to factual sources retrieved at runtime, improving reliability and factual accuracy. Grounding is commonly implemented in Azure OpenAI + Azure Cognitive Search solutions (Retrieval-Augmented Generation or RAG).
Option review:
A. Modify the system message: Changes model tone or behavior but doesn’t supply real-time data.
B. Add grounding data: ✅ Correct — allows access to recent and domain-specific information.
C. Add few-shot learning: Provides examples in the prompt to improve context understanding but not factual accuracy.
D. Add training data: Refers to fine-tuning; this requires retraining and doesn’t update the model’s awareness of current events.
Hence, the best method to ensure accurate and current responses from an Azure OpenAI model is to add grounding data, enabling the model to reference real, updated sources dynamically.
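The retrieval half of a RAG pipeline can be caricatured with simple word overlap standing in for a real search index (the documents and scoring below are illustrative only; production systems use Azure AI Search or vector similarity):

```python
def retrieve(query, documents, top_n=1):
    """Rank documents by how many words they share with the query.
    A toy stand-in for the retrieval step of RAG; real systems use
    an indexed search service, not this."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_n]

docs = [
    "The 2024 keynote announced the new Atlas chip.",
    "Our refund policy allows returns within 30 days.",
]
best = retrieve("what chip was announced at the keynote", docs)
print(best[0])
```

The retrieved text is then inserted into the prompt as grounding data before the model generates its answer, which is how the model can speak accurately about events after its training cutoff.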
Select the answer that correctly completes the sentence.

Options:
Answer:

Explanation:

According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Identify features of common machine learning types”, regression is a supervised machine learning technique used to predict continuous numerical values based on one or more input features. In this scenario, the task is to predict a vehicle’s miles per gallon (MPG)—a continuous numeric value—based on several measurable factors such as weight, engine power, and other specifications.
Regression models learn the mathematical relationship between input variables (independent features) and a numeric target variable (dependent outcome). Common regression algorithms include linear regression, decision tree regression, and support vector regression. In the example, the model would analyze historical data of vehicles and learn patterns that map characteristics (like engine size, horsepower, and weight) to fuel efficiency. Once trained, it can predict the MPG for a new vehicle configuration.
The other options describe different problem types:
Classification predicts discrete categories (for example, whether a car is “fuel efficient” or “not fuel efficient”), not continuous values.
Clustering is an unsupervised learning method that groups data points based on similarities without predefined labels, not predictive modeling.
Anomaly detection identifies data points that significantly deviate from normal patterns, such as detecting engine sensor failures or fraudulent transactions.
Since predicting MPG involves estimating a numeric value within a continuous range, regression is the most appropriate model type.
In summary, per AI-900 training content, regression models are used when the output variable is numeric, classification for categorical outputs, and clustering for pattern discovery. Therefore, predicting miles per gallon based on vehicle features is a textbook example of a regression problem in Azure Machine Learning.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

This question evaluates understanding of fundamental machine learning concepts as covered in the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Explore the machine learning process.” These statements relate to data labeling, model evaluation practices, and performance metrics—three essential parts of building and assessing a machine learning model.
Labeling is the process of tagging training data with known values → Yes. According to Microsoft Learn, “Labeling is the process of tagging data with the correct output value so the model can learn relationships between inputs and outputs.” This is essential for supervised learning, where models require historical data with known outcomes. For example, if training a model to recognize fruit images, each image is labeled as “apple,” “banana,” or “orange.” Hence, this statement is true.
You should evaluate a model by using the same data used to train the model → No. The AI-900 guide stresses that using the same data for both training and evaluation can cause overfitting, where the model performs well on training data but poorly on unseen data. Instead, the dataset is split into training and testing (or validation) subsets. Evaluation must use test data that the model has never seen before to ensure an unbiased measure of performance. Therefore, this statement is false.
Accuracy is always the primary metric used to measure a model’s performance → No. Microsoft Learn emphasizes that accuracy is only one metric and not always the best choice. Depending on the problem type, other metrics such as precision, recall, F1-score, or AUC (Area Under the Curve) may be more appropriate—especially in cases with imbalanced datasets. For example, in fraud detection, recall may be more important than accuracy. Thus, this statement is false.
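Why accuracy misleads on imbalanced data is easy to show with a small confusion-matrix calculation (the fraud counts below are hypothetical):

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, precision, and recall from confusion-matrix counts."""
    accuracy  = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    return accuracy, precision, recall

# 990 legitimate transactions, 10 fraudulent; the model catches only 2 frauds
acc, prec, rec = metrics(tp=2, fp=0, fn=8, tn=990)
print(acc, rec)   # accuracy looks great while recall is poor
```

Here accuracy is 99.2% even though the model misses 8 of the 10 fraud cases (recall 0.2), which is exactly the scenario where recall, not accuracy, should drive the evaluation.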
Match the Microsoft guiding principles for responsible AI to the appropriate descriptions.
To answer, drag the appropriate principle from the column on the left to its description on the right. Each principle may be used once, more than once, or not at all.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:
Box 1: Reliability and safety
To build trust, it ' s critical that AI systems operate reliably, safely, and consistently under normal circumstances and in unexpected conditions. These systems should be able to operate as they were originally designed, respond safely to unanticipated conditions, and resist harmful manipulation.
Box 2: Accountability
The people who design and deploy AI systems must be accountable for how their systems operate, drawing on industry standards and internal review to ensure that AI systems are not the final authority on decisions that affect people’s lives.
Box 3: Privacy and security
As AI becomes more prevalent, protecting privacy and securing important personal and business information is becoming more critical and complex. With AI, privacy and data security issues require especially close attention because access to data is essential for AI systems to make accurate and informed predictions and decisions about people. AI systems must comply with privacy laws that require transparency about the collection, use, and storage of data and mandate that consumers have appropriate controls to choose how their data is used
You need to create a customer support solution to help customers access information. The solution must support email, phone, and live chat channels. Which type of AI solution should you use?
Options:
natural language processing (NLP)
computer vision
machine learning
chatbot
Answer:
D
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study materials and Microsoft Learn module “Describe features of common AI workloads”, a chatbot (also known as a conversational AI agent) is a solution designed to interact with users through natural language conversation across multiple channels such as email, phone, webchat, and messaging apps.
Chatbots use Natural Language Processing (NLP) to interpret what users are saying, identify their intent, and provide relevant responses. In Azure, this functionality is implemented using the Azure Bot Service integrated with the Azure Cognitive Service for Language (Question Answering and Language Understanding). The study guide emphasizes that chatbots are used in customer service, information retrieval, and support automation to reduce the workload on human agents and improve response times.
The requirement in this question — supporting email, phone, and live chat channels — aligns exactly with the definition of a conversational AI chatbot, which can operate across multiple communication platforms. Microsoft Learn clearly identifies that chatbots can be deployed to assist customers in retrieving information, answering FAQs, and escalating complex issues when necessary.
The other options are incorrect because:
A. NLP is the underlying technology used by the chatbot but not the solution itself.
B. Computer vision involves analyzing images or videos, which is unrelated to this scenario.
C. Machine learning is a broader AI field and not a specific customer support solution type.
What are two tasks that can be performed by using the Computer Vision service? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
Options:
Train a custom image classification model.
Detect faces in an image.
Recognize handwritten text.
Translate the text in an image between languages.
Answer:
B, C
Explanation:
B: Azure’s Computer Vision service provides developers with access to advanced algorithms that process images and return information based on the visual features you’re interested in. For example, Computer Vision can determine whether an image contains adult content, find specific brands or objects, or find human faces.
C: Computer Vision includes Optical Character Recognition (OCR) capabilities. You can use the Read API to extract printed and handwritten text from images and documents.
In which scenario should you use key phrase extraction?
Options:
translating a set of documents from English to German
generating captions for a video based on the audio track
identifying whether reviews of a restaurant are positive or negative
identifying which documents provide information about the same topics
Answer:
D
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Extract insights from text with the Text Analytics service”, key phrase extraction is a feature of the Text Analytics service that identifies the most important words or phrases in a given document. It helps summarize the main ideas by isolating significant concepts or terms that describe what the text is about.
In this scenario, the goal is to determine which documents share similar topics or themes. By extracting key phrases from each document (for example, “policy renewal,” “coverage limits,” “claim process”), you can compare and categorize documents based on overlapping keywords. This is exactly how key phrase extraction is used—to summarize and group text content by topic relevance.
The other options do not fit this use case:
A. Translation uses the Translator service, not key phrase extraction.
B. Generating video captions involves speech recognition and computer vision.
C. Identifying sentiment relates to sentiment analysis, not key phrase extraction.
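Grouping documents by overlapping key phrases can be sketched with a Jaccard similarity over hypothetical extraction results (the phrase lists are made up, not real Text Analytics output):

```python
def topic_overlap(phrases_a, phrases_b):
    """Jaccard similarity between two documents' extracted key phrases:
    shared phrases divided by all distinct phrases."""
    a, b = set(phrases_a), set(phrases_b)
    return len(a & b) / len(a | b)

doc1 = ["policy renewal", "coverage limits", "claim process"]
doc2 = ["claim process", "coverage limits", "premium payment"]
doc3 = ["recipe ideas", "cooking time"]
print(topic_overlap(doc1, doc2))   # high overlap: likely the same topic
print(topic_overlap(doc1, doc3))   # no overlap: unrelated documents
```

Documents whose overlap score exceeds a chosen threshold can then be grouped together, which is the topic-matching scenario the question describes.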
Match the Azure OpenAI large language model (LLM) process to the appropriate task.
To answer, drag the appropriate process from the column on the left to its task on the right. Each process may be used once, more than once, or not at all.
NOTE: Each correct match is worth one point.

Options:
Answer:

Explanation:

According to the Microsoft Azure AI Fundamentals (AI-900) study material and Azure OpenAI Service documentation, large language models (LLMs) such as GPT are capable of performing multiple natural language processing (NLP) tasks depending on the intent of the prompt. These tasks generally fall into categories like classification, generation, summarization, and translation, each with a distinct purpose and output type.
Classifying – This process involves analyzing text and assigning it to a predefined category or label based on its content. The scenario “Detect the genre of a work of fiction” clearly fits this category. The model must evaluate the text and determine whether it belongs to genres like mystery, romance, or science fiction. This is a classic text classification problem, as the output is a discrete category derived from textual features.
Summarizing – This process means condensing lengthy text into a shorter version that preserves the key information. In the scenario “Create a list of bullet points based on text input,” the model extracts essential information and reformats it as concise bullet points, which is an abstraction form of summarization. Summarization models help users quickly understand the main ideas from long documents, meeting efficiency and readability goals.
Generating – This refers to the LLM’s ability to produce new, creative content based on input instructions. The task “Create advertising slogans from a product description” represents generation because it requires the model to construct original text that didn’t previously exist. Generation tasks showcase the creativity and contextual fluency of models like GPT in marketing and content creation.
Thus, these mappings align directly with the Azure OpenAI LLM capabilities taught in AI-900, linking each NLP process with its most suitable real-world task.
You need to predict the animal population of an area.
Which Azure Machine Learning type should you use?
Options:
clustering
classification
regression
Answer:
C
Explanation:
According to the AI-900 official study materials, regression is a type of supervised machine learning used to predict continuous numeric values. Predicting the animal population of an area involves estimating a numeric quantity, which makes regression the appropriate model type.
Microsoft Learn defines regression workloads as predicting real-valued outputs, such as:
Forecasting sales or demand.
Predicting housing prices.
Estimating resource usage or population sizes.
In contrast:
Classification predicts discrete categories (e.g., “cat” or “dog”).
Clustering groups data into similar clusters but doesn’t produce numeric predictions.
Therefore, because the task requires predicting a numerical population size, the verified answer is C. Regression, as per Microsoft’s AI-900 official guidelines.
Select the answer that correctly completes the sentence.

Options:
Answer:

Explanation:

This question refers to a system that monitors a user’s emotions or expressions—in this case, identifying whether a kiosk user is annoyed—through a video feed. According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Identify Azure services for computer vision,” this scenario falls under facial analysis, which is a capability of Azure AI Vision or the Face API.
Facial analysis involves detecting human faces in images or video and analyzing facial features to interpret emotions, expressions, age, gender, or facial landmarks. The AI model does not try to identify who the person is but rather interprets how they appear or feel. For example, facial analysis can detect emotions such as happiness, anger, sadness, or surprise, which allows applications to infer a user’s engagement or frustration level while interacting with a system.
Option review:
Face detection: Identifies the presence and location of a face in an image but does not interpret expressions or emotions.
Facial recognition: Matches a detected face to a known individual’s identity (for authentication or security), not for emotion detection.
Optical character recognition (OCR): Extracts text from images or scanned documents and has no relation to human emotion or facial features.
Therefore, determining whether a kiosk user is annoyed, happy, or frustrated involves emotion detection within facial analysis, making Facial analysis the correct answer.
This aligns with AI-900’s definition of computer vision workloads, where facial analysis provides insights into emotions and expressions, supporting user experience optimization and customer behavior analytics.
You plan to create an AI application by using Azure AI Foundry. The solution will be deployed to dedicated virtual machines. Which deployment option should you use?
Options:
serverless API
Azure Kubernetes Service (AKS) cluster
Azure virtual machines
managed compute
Answer:
D
Explanation:
In Azure AI Foundry, managed compute deploys models to dedicated virtual machines that you provision, whereas the serverless API option runs on shared, pay-per-token infrastructure with no dedicated VMs. Because the solution must run on dedicated virtual machines, managed compute is the deployment option to choose.
You have an Azure Machine Learning pipeline that contains a Split Data module. The Split Data module outputs to a Train Model module and a Score Model module. What is the function of the Split Data module?
Options:
selecting columns that must be included in the model
creating training and validation datasets
diverting records that have missing data
scaling numeric variables so that they are within a consistent numeric range
Answer:
B
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Identify features of Azure Machine Learning”, the Split Data module in an Azure Machine Learning pipeline is used to divide a dataset into two or more subsets—typically a training dataset and a testing (or validation) dataset. This is a fundamental step in the supervised machine learning workflow because it allows for accurate evaluation of the model’s performance on data it has not seen during training.
In a typical workflow, the data flows as follows:
The dataset is first preprocessed (cleaned, normalized, or transformed).
The Split Data module divides this dataset into two parts — one for training the model and another for testing or scoring the model’s accuracy.
The Train Model module uses the training data output from the Split Data module to learn patterns and build a predictive model.
The Score Model module then takes the trained model and applies it to the test data output to measure how well the model performs on unseen data.
The Split Data module typically uses a defined ratio (such as 0.7:0.3 or 70% for training and 30% for testing). This ensures that the trained model can generalize well to new, real-world data rather than simply memorizing the training examples.
Now, addressing the incorrect options:
A. Selecting columns that must be included in the model is done by the Select Columns in Dataset module.
C. Diverting records that have missing data is handled by the Clean Missing Data module.
D. Scaling numeric variables is done using the Normalize Data or Edit Metadata modules.
Therefore, based on the official AI-900 learning objectives, the verified and most accurate answer is B. creating training and validation datasets.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

This question examines your understanding of Natural Language Processing (NLP) as described in the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Explore natural language processing.” NLP is a branch of artificial intelligence that enables computers to analyze, understand, and generate human language — both written and spoken. Typical NLP tasks include text analytics, language understanding, sentiment analysis, key phrase extraction, and profanity detection.
Monitoring online service reviews for profanities → Yes. This is a classic example of NLP. Detecting profane or inappropriate words in customer reviews requires analyzing text content. Azure Cognitive Services offers Content Moderator and Text Analytics APIs that can detect and filter profanity, sentiment, and offensive language automatically. Microsoft Learn states: “Natural language processing is used to process and analyze text to detect sentiment, key phrases, and inappropriate content.” Hence, this task is correctly classified as NLP.
Identifying brand logos in an image → No. This task belongs to Computer Vision, not NLP. The Computer Vision API and Custom Vision service in Azure are designed to detect and classify visual elements like logos, objects, or scenes. Since it involves images, not text, it is unrelated to natural language processing.
Monitoring public news sites for negative mentions of a product → Yes. This is another valid example of NLP. The process involves analyzing the sentiment of text from online articles to determine whether mentions of a product are positive, neutral, or negative. Azure Text Analytics provides prebuilt sentiment analysis and entity recognition capabilities that help automate such monitoring.
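A wordlist lookup is the simplest possible sketch of profanity screening. The real Content Moderator service uses trained language models rather than a raw blocklist, and the `BLOCKLIST` contents below are placeholder assumptions:

```python
import re

BLOCKLIST = {"darn", "heck"}  # toy stand-ins for a real profanity list

def contains_profanity(review: str) -> bool:
    """Tokenize the review and check each word against the blocklist,
    a much-simplified stand-in for a text moderation check."""
    words = re.findall(r"[a-z']+", review.lower())
    return any(word in BLOCKLIST for word in words)

print(contains_profanity("What the heck is this?"))  # True
print(contains_profanity("Great service, thanks!"))  # False
```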
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point

Options:
Answer:

Explanation:

According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn documentation for Azure AI Custom Vision, this service is a specialized part of the Azure AI Vision family that enables developers to train custom image classification and object detection models. It allows organizations to build tailored computer vision models that recognize images or specific objects relevant to their business needs.
Detect objects in an image → Yes. The Azure AI Custom Vision service supports both image classification (assigning an image to one or more categories) and object detection (identifying and locating objects within an image using bounding boxes). This means it can indeed detect and differentiate multiple objects in a single image, making this statement true.
Requires your own data to train the model → Yes. The Custom Vision service is designed to be customizable. Unlike prebuilt Azure AI Vision models that work out of the box, Custom Vision requires you to upload and label your own dataset for training. The model then learns from your examples to perform specialized image recognition tasks relevant to your domain. Thus, this statement is also true.
Analyze video files → No. While Custom Vision can analyze images, it does not directly process or analyze video files. Video analysis is handled by a different service—Azure Video Indexer—which can extract insights such as spoken words, scenes, and faces from videos.
In summary:
✅ Yes – Detect objects in images
✅ Yes – Requires your own data
❌ No – Does not analyze video files.
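In practice, consuming an object detection model means filtering its predictions by confidence. The payload below is shaped like a Custom Vision detection response, but the field names and values are an illustrative approximation, not a live API result:

```python
# Illustrative payload resembling a Custom Vision object detection
# response (field names are an approximation, not the exact schema).
response = {
    "predictions": [
        {"tagName": "helmet", "probability": 0.94,
         "boundingBox": {"left": 0.1, "top": 0.2, "width": 0.3, "height": 0.4}},
        {"tagName": "helmet", "probability": 0.12,
         "boundingBox": {"left": 0.6, "top": 0.1, "width": 0.2, "height": 0.2}},
        {"tagName": "vest", "probability": 0.81,
         "boundingBox": {"left": 0.5, "top": 0.5, "width": 0.3, "height": 0.3}},
    ]
}

def confident_detections(resp, threshold=0.5):
    """Keep only detections whose probability clears the threshold."""
    return [(p["tagName"], p["boundingBox"])
            for p in resp["predictions"] if p["probability"] >= threshold]

for tag, box in confident_detections(response):
    print(tag, box)
```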
Select the answer that correctly completes the sentence.

Options:
Answer:

Explanation:

According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore fundamental principles of machine learning,” a regression model is used when the goal is to predict a continuous numerical value based on historical data.
In this question, the task is to predict the sale price of auctioned items, which is a numeric output that can take on a wide range of values (for example, $50.25, $199.99, etc.). This makes it a regression problem because the output is continuous rather than categorical.
Regression models analyze the relationship between input features (such as item type, condition, age, bidding history, or demand) and a numerical target variable (the sale price). Common regression algorithms include linear regression, decision tree regression, and neural network regression. In Azure Machine Learning, these models are trained using labeled datasets containing known outcomes to learn patterns and make future predictions.
Let’s review the incorrect options:
Classification: Used to predict discrete categories or labels, such as “sold” vs. “unsold” or “low,” “medium,” “high.” It cannot output continuous numeric predictions.
Clustering: An unsupervised technique used to group similar data points based on shared characteristics, not to predict specific numeric outcomes.
Therefore, because predicting a sale price involves forecasting a continuous numerical value, the correct model type is Regression.
This aligns with Microsoft’s AI-900 teaching that regression is used for tasks such as:
Predicting house prices
Forecasting sales revenue
Estimating car values or auction prices
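The idea that regression maps input features to a continuous target can be shown with a closed-form one-variable least-squares fit. The auction numbers here are invented for illustration only:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit for y = slope * x + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Toy auction data: item age (years) vs. hammer price (hypothetical).
ages   = [1, 2, 3, 4, 5]
prices = [110, 95, 80, 65, 50]
slope, intercept = fit_line(ages, prices)
print(slope, intercept)  # -15.0 125.0
```

Once fitted, `slope * age + intercept` predicts a price for an unseen item, which is exactly the continuous-output behaviour that distinguishes regression from classification.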
What are two tasks that can be performed by using computer vision? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
Options:
Predict stock prices.
Detect brands in an image.
Detect the color scheme in an image
Translate text between languages.
Extract key phrases.
Answer:
B, C
Explanation:
According to the Microsoft Azure AI Fundamentals study guide and Microsoft Learn module “Identify features of computer vision workloads”, computer vision is an AI workload that allows systems to interpret and understand visual information from the world, such as images and videos.
Computer vision tasks typically include:
Object detection and image classification (e.g., detecting brands, logos, or items in images)
Image analysis (e.g., identifying colors, patterns, or visual features)
Face detection and recognition
Optical Character Recognition (OCR) for reading text in images
Therefore, both detecting brands and detecting color schemes in an image are clear examples of computer vision tasks because they involve analyzing visual content.
In contrast:
A. Predict stock prices → Regression task, not vision-based.
D. Translate text between languages → Natural language processing (NLP).
E. Extract key phrases → NLP as well.
Thus, the correct computer vision tasks are B and C.
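Detecting a color scheme reduces, at its core, to counting pixel values. The sketch below shows that essence on a toy list of named pixels; a real image analysis service works on RGB data and reports foreground, background, and accent colors:

```python
from collections import Counter

def dominant_colors(pixels, top=2):
    """Count pixel values and return the most frequent colors, the
    core idea behind reporting an image's color scheme."""
    return [color for color, _ in Counter(pixels).most_common(top)]

# A toy 'image' represented as a flat list of named pixel colors.
pixels = ["blue"] * 6 + ["white"] * 3 + ["red"] * 1
print(dominant_colors(pixels))  # ['blue', 'white']
```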
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

This question tests understanding of Microsoft’s six guiding principles for Responsible AI, which are: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. These principles, as described in the Microsoft Azure AI Fundamentals (AI-900) study guide and Microsoft Learn Responsible AI module, help ensure that AI systems are developed and used ethically and responsibly.
Transparency – Yes: Transparency means users should understand how and why an AI system makes certain decisions. Providing an explanation of the outcome of a credit loan application clearly supports transparency because it helps customers know the reasoning behind approval or rejection. According to Microsoft Learn, transparency ensures that “AI systems are understandable by users and stakeholders,” especially in sensitive applications such as finance and credit scoring. Thus, the first statement is Yes.
Reliability and Safety – Yes: The reliability and safety principle ensures AI systems perform consistently, safely, and as intended, even in complex or high-risk environments. A triage bot that prioritizes insurance claims based on injury type aligns with this principle—it must be accurate, dependable, and safe to ensure claims are processed correctly and not influenced by errors or faulty algorithms. Microsoft teaches that AI should be “reliable under expected and unexpected conditions” to prevent harm or misjudgment. Therefore, this statement is Yes.
Inclusiveness – No: Inclusiveness focuses on ensuring AI systems empower and benefit all users, especially those with different abilities or backgrounds. Offering an AI solution at different prices across sales territories is a business decision, not an ethical or inclusiveness principle issue. It does not relate to accessibility or equal participation of diverse users. Therefore, this final statement is No.
Which type of machine learning should you use to identify groups of people who have similar purchasing habits?
Options:
classification
regression
clustering
Answer:
C
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Describe features of common AI workloads”, clustering is a type of unsupervised machine learning used to group data points that share similar characteristics. In unsupervised learning, the data provided to the model does not have predefined labels or outcomes. Instead, the algorithm identifies inherent patterns or groupings within the dataset based on similarities in input features.
In this scenario, the task is to identify groups of people who have similar purchasing habits. There is no predefined label such as “buyer type” or “purchase category.” The goal is to discover hidden patterns—such as grouping customers by spending behavior, preferred products, or frequency of purchases. This is precisely what clustering algorithms are designed to do.
Clustering is commonly used in:
Customer segmentation for marketing analytics.
Market basket analysis to find associations in purchasing patterns.
Recommender systems to identify similar user profiles.
Anomaly detection when outliers deviate from natural clusters.
Typical algorithms for clustering include K-means, Hierarchical clustering, and DBSCAN. These models analyze multidimensional data to form clusters that maximize intra-group similarity and minimize inter-group similarity.
By contrast:
Classification (A) is a supervised learning method that predicts a categorical label (e.g., whether a customer will churn or not). It requires labeled training data.
Regression (B) is used to predict continuous numeric values (e.g., sales revenue, temperature).
Since the question focuses on discovering groups of similar customers without prior labels, the correct type of machine learning is Clustering.
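The K-means loop mentioned above is short enough to sketch in full for one-dimensional data. The spend figures are invented, and real segmentation would use many features and a proper initialization strategy:

```python
def k_means_1d(values, k=2, iterations=10):
    """Minimal 1-D k-means: alternately assign each point to the
    nearest centroid, then recompute each centroid as its cluster mean."""
    centroids = values[:k]  # naive initialization from the first k points
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Monthly spend for two informal customer segments (toy numbers).
spend = [20, 22, 25, 19, 210, 205, 198, 215]
print(k_means_1d(spend))  # [21.5, 207.0]
```

No labels were supplied; the two groups emerge purely from similarity in the input values, which is what makes clustering unsupervised.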
Select the answer that correctly completes the sentence.

Options:
Answer:

Explanation:

In Azure Machine Learning Designer, the Dataset output visualization feature is specifically used to explore and understand the distribution of values in potential feature columns before model training begins. This capability is critical for data exploration and preprocessing, two essential stages of the machine learning pipeline described in the Microsoft Azure AI Fundamentals (AI-900) and Azure Machine Learning learning paths.
When a dataset is imported into Azure Machine Learning Designer, users can right-click on the dataset output port and select “Visualize”. This launches the dataset visualization pane, which provides detailed statistical summaries for each column, including:
Data type (numeric, categorical, string, Boolean)
Minimum, maximum, mean, and standard deviation values for numeric columns
Frequency counts and distinct values for categorical columns
Missing value counts
This visual inspection helps determine which columns should be used as features, which might need normalization or encoding, and which contain missing or irrelevant data. It is a vital step in ensuring the dataset is clean and ready for model training.
Let’s examine why other options are incorrect:
Normalize Data module is used to scale numeric data, not to visualize distributions.
Select Columns in Dataset module is used to include or exclude columns, not to analyze them.
Evaluation results visualization feature is used after model training to interpret performance metrics like accuracy or recall, not data distributions.
Therefore, based on official Microsoft documentation and AI-900 study materials, to explore the distribution of values in potential feature columns, you use the Dataset output visualization feature in Azure Machine Learning Designer.
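The per-column statistics listed above are straightforward to compute by hand, which helps demystify what the Visualize pane reports. This sketch uses a toy column with one missing value and population (not sample) standard deviation:

```python
import math

def column_summary(values):
    """Compute summary statistics of the kind shown in the dataset
    visualization pane: count, missing, min, max, mean, and std dev."""
    present = [v for v in values if v is not None]
    mean = sum(present) / len(present)
    std = math.sqrt(sum((v - mean) ** 2 for v in present) / len(present))
    return {"count": len(values), "missing": values.count(None),
            "min": min(present), "max": max(present),
            "mean": mean, "std": round(std, 3)}

ages = [34, 29, None, 41, 36]
print(column_summary(ages))
```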
You deploy the Azure OpenAI service to generate images.
You need to ensure that the service provides the highest level of protection against harmful content.
What should you do?
Options:
Configure the Content filters settings.
Customize a large language model (LLM).
Configure the system prompt
Change the model used by the Azure OpenAI service.
Answer:
A
Explanation:
The correct answer is A. Configure the Content filters settings.
When using the Azure OpenAI Service for text or image generation, Microsoft provides built-in content filtering to help detect and block potentially harmful or unsafe outputs. These filters are part of Microsoft’s Responsible AI framework and are designed to prevent the generation of offensive, violent, sexual, or otherwise restricted content.
To ensure the highest level of protection, you can configure content filter settings within the Azure OpenAI deployment. This allows you to define stricter policies based on your organization’s safety requirements. For image generation models such as DALL·E, enabling or strengthening these filters ensures that inappropriate or unsafe images are not generated or returned.
B (Customize an LLM): Customization affects behavior but not safety filtering.
C (Configure the system prompt): Adjusts response style but doesn’t guarantee content safety.
D (Change the model): Different models have similar filter systems; protection level depends on filter configuration, not the model itself.
Select the answer that correctly completes the sentence.

Options:
Answer:

Explanation:

According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Identify features of common AI workloads”, OCR (Optical Character Recognition) is a Computer Vision technology that detects and extracts printed or handwritten text from images and scanned documents. OCR allows organizations and individuals to convert physical or image-based text into machine-readable, editable, and searchable digital text.
In the context of this question, a historian working with old newspaper articles or archival documents would use OCR to digitize printed content. For instance, the historian can scan or photograph old newspaper pages, and then use an OCR tool—such as Azure Computer Vision’s OCR API—to automatically recognize and extract the textual content from those images. This process enables the historian to store, edit, and analyze the content digitally without manually typing everything.
OCR works by using deep learning algorithms trained on thousands of text samples. The system analyzes patterns, shapes, and spatial relationships of characters to identify text accurately, even from low-quality or aged paper documents. Once extracted, the digital text can be indexed, translated, or processed further using Natural Language Processing (NLP) tools for content analysis.
Now, addressing the other options:
Facial analysis is used to detect emotions, age, or gender from human faces—irrelevant to text digitization.
Image classification identifies entire images by categories (e.g., cat, car, flower).
Object detection identifies and locates multiple objects within an image but doesn’t extract text.
Therefore, per the AI-900 learning objectives under the Computer Vision workload, the correct and verified completion is:
Which natural language processing feature can be used to identify the main talking points in customer feedback surveys?
Options:
language detection
translation
entity recognition
key phrase extraction
Answer:
D
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Explore natural language processing (NLP) in Azure”, key phrase extraction is a core feature of the Azure AI Language Service that enables you to automatically identify the most important ideas or topics discussed in a body of text.
When analyzing customer feedback surveys, key phrase extraction helps summarize the main talking points or recurring themes by detecting significant words and phrases. For instance, if multiple customers write comments like “The checkout process is slow” or “Website speed could be improved,” the model may extract key phrases such as “checkout process” and “website speed.” This allows businesses to quickly understand the most common subjects without manually reading each response.
Let’s review the other options:
A. Language detection: Determines the language of the text (e.g., English, French, or Spanish) but does not identify main ideas.
B. Translation: Converts text from one language to another using Azure Translator; it does not summarize or extract key information.
C. Entity recognition: Identifies named entities such as people, organizations, locations, or dates. While useful for identifying specific details, it does not capture general topics or overall discussion points.
Therefore, the appropriate NLP feature for identifying main topics or themes within textual data such as survey responses is Key Phrase Extraction.
This capability is part of the Azure AI Language Service and is commonly used in sentiment analysis pipelines, customer feedback analytics, and business intelligence workflows to summarize large text datasets efficiently.
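A frequency count over adjacent word pairs gives a feel for what key phrase extraction surfaces. This is only a naive sketch with a hand-picked stopword list; the actual Language service uses trained models, not raw counts:

```python
from collections import Counter

STOPWORDS = {"the", "is", "a", "and", "could", "be", "very", "was", "to"}

def key_phrases(reviews, top=2):
    """Naive key-phrase extraction: count adjacent non-stopword word
    pairs across all reviews and return the most frequent ones."""
    pairs = Counter()
    for review in reviews:
        words = [w for w in review.lower().replace(".", "").split()
                 if w not in STOPWORDS]
        pairs.update(zip(words, words[1:]))
    return [" ".join(p) for p, _ in pairs.most_common(top)]

reviews = ["The checkout process is slow.",
           "Checkout process could be faster.",
           "Website speed was very slow."]
print(key_phrases(reviews))
```

Even this crude counter surfaces "checkout process" as the top talking point, mirroring the kind of summary the real service produces.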
You need to provide customers with the ability to query the status of orders by using phones, social media, or digital assistants.
What should you use?
Options:
Azure AI Bot Service
the Azure AI Translator service
an Azure AI Document Intelligence model
an Azure Machine Learning model
Answer:
A
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Identify Azure services for conversational AI,” the Azure AI Bot Service is specifically designed to create intelligent conversational agents (chatbots) that can interact with users across multiple communication channels, such as web chat, social media, phone calls, Microsoft Teams, and digital assistants.
In this scenario, customers need the ability to query the status of their orders through various interfaces — including voice and text platforms. Azure AI Bot Service enables this by integrating with Azure AI Language (for understanding natural language), Azure Speech (for speech-to-text and text-to-speech capabilities), and Azure Communication Services (for telephony or chat integration).
The bot can interpret user input like “Where is my order?” or “Check my delivery status,” call backend systems (such as an order database or API), and then respond appropriately to the user through the same communication channel.
Let’s analyze the incorrect options:
B. Azure AI Translator Service: Used for real-time text translation between languages; it doesn’t handle conversation logic or database queries.
C. Azure AI Document Intelligence model: Extracts data from structured and semi-structured documents (e.g., invoices, receipts), not user queries.
D. Azure Machine Learning model: Builds and deploys predictive models, but doesn’t provide conversational or multi-channel interaction capabilities.
Thus, for enabling multi-channel conversational experiences where customers can inquire about order statuses using voice, chat, or digital assistants, the most appropriate solution is Azure AI Bot Service, as outlined in Azure’s AI conversational workload documentation.
You run a charity event that involves posting photos of people wearing sunglasses on Twitter.
You need to ensure that you only retweet photos that meet the following requirements:
Include one or more faces.
Contain at least one person wearing sunglasses.
What should you use to analyze the images?
Options:
the Verify operation in the Face service
the Detect operation in the Face service
the Describe Image operation in the Computer Vision service
the Analyze Image operation in the Computer Vision service
Answer:
B
Explanation:
The scenario requires two checks on each photo: (1) there is at least one face, and (2) at least one detected face is wearing sunglasses. The Azure AI Face service – Detect operation is purpose-built for this combination. It detects faces and returns per-face attributes, including glasses type, so you can enforce both rules in a single pass. From the official guidance, the Detect API “detects human faces in an image and returns the rectangle coordinates of their locations” and exposes face attributes such as glasses. A concise attribute extract states: “Glasses: NoGlasses, ReadingGlasses, Sunglasses, SwimmingGoggles.” With this, you can count faces (requirement 1) and then verify that at least one face’s glasses attribute equals sunglasses (requirement 2).
By contrast, other options don’t align as precisely:
A. Verify (Face service) compares whether two detected faces belong to the same person. It does not provide content attributes like sunglasses; it requires face inputs for identity/one-to-one scenarios, which doesn’t meet your content-filter goal.
C. Describe Image (Computer Vision) returns a natural-language caption of the whole image. While a caption might mention “a person wearing sunglasses,” it’s not guaranteed, is not face-scoped, and offers less deterministic filtering than a structured attribute on a detected face.
D. Analyze Image (Computer Vision) can return tags such as “person” or sometimes “sunglasses,” but those tags are image-level and not bound to specific faces. You need to ensure that a detected face (not just any region) is wearing sunglasses. Face-scoped attributes from Face Detect are more reliable for this logic.
Therefore, the most accurate and exam-aligned choice is B. the Detect operation in the Face service, because it allows you to programmatically confirm face presence and per-face sunglasses in a precise, rule-driven workflow.
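The rule-driven workflow can be sketched over a detection result. The structure below only approximates the shape of a Face API response (it is not a live call, and field names are assumptions), but the filtering logic is the point:

```python
# Illustrative Face-API-style detection result; the field names
# approximate the service's response, this is not a live call.
faces = [
    {"faceRectangle": {"left": 10, "top": 20, "width": 80, "height": 80},
     "faceAttributes": {"glasses": "Sunglasses"}},
    {"faceRectangle": {"left": 120, "top": 25, "width": 75, "height": 75},
     "faceAttributes": {"glasses": "NoGlasses"}},
]

def should_retweet(detected_faces):
    """Requirement 1: at least one face was detected. Requirement 2:
    at least one face's glasses attribute is Sunglasses."""
    return bool(detected_faces) and any(
        f["faceAttributes"]["glasses"] == "Sunglasses" for f in detected_faces)

print(should_retweet(faces))   # True
print(should_retweet([]))      # False
```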
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:
Azure Bot Service and Azure Cognitive Services can be integrated. → Yes
Azure Bot Service engages with customers in a conversational manner. → Yes
Azure Bot Service can import frequently asked questions (FAQ) to question and answer sets. → Yes
All three statements are true, as confirmed by the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Explore conversational AI.” The Azure Bot Service is Microsoft’s platform for building, deploying, and managing intelligent bots that can communicate naturally with users across various channels (web, Teams, Facebook Messenger, etc.).
Azure Bot Service and Azure Cognitive Services can be integrated → Yes. Microsoft Learn specifies that Azure Bot Service can be enhanced with Azure Cognitive Services such as Language Understanding (LUIS), QnA Maker, and Speech Services to add intelligence. For example, integration with LUIS allows bots to understand user intent and context, while QnA Maker helps them respond accurately to FAQs. As stated in the official documentation: “The Azure Bot Service can be combined with Cognitive Services to create bots that understand language, speech, and meaning.”
Azure Bot Service engages with customers in a conversational manner → Yes. The primary function of Azure Bot Service is to create conversational AI agents that interact naturally with users. These bots simulate human-like dialogue using text or speech. According to Microsoft Learn, “Bots created using Azure Bot Service communicate with users in a conversational format through natural language.”
Azure Bot Service can import frequently asked questions (FAQ) to question and answer sets → Yes. Azure Bot Service can integrate with the QnA Maker (now part of Azure Cognitive Service for Language) to automatically import FAQs from existing documents or web pages and generate a knowledge base of question-answer pairs. This allows the bot to respond intelligently to customer queries.
In conclusion, Azure Bot Service supports intelligent, conversational interaction, integrates seamlessly with Cognitive Services, and can use QnA Maker to import and manage FAQ-based knowledge sets—making all three statements true.
You need to reduce the load on telephone operators by implementing a chatbot to answer simple questions with predefined answers.
Which two AI service should you use to achieve the goal? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Options:
Text Analytics
QnA Maker
Azure Bot Service
Translator Text
Answer:
B, C
Explanation:
To reduce operator load with a chatbot for predefined answers:
QnA Maker provides the knowledge base for answering questions automatically.
Azure Bot Service hosts and manages the chatbot interface to interact with users. Text Analytics and Translator Text are not required for basic Q&A chatbots.
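At its simplest, a knowledge-base lookup matches a user question to the closest stored question. The overlap heuristic and the two FAQ entries below are toy assumptions standing in for a QnA Maker knowledge base:

```python
FAQ = {
    "what are your opening hours": "We are open 9am-5pm, Monday to Friday.",
    "how do i reset my password": "Use the 'Forgot password' link on the sign-in page.",
}

def answer(question):
    """Match a user question to the FAQ entry sharing the most words,
    the basic idea behind a knowledge-base lookup."""
    asked = set(question.lower().strip("?").split())
    best = max(FAQ, key=lambda q: len(asked & set(q.split())))
    return (FAQ[best] if asked & set(best.split())
            else "Let me connect you to an operator.")

print(answer("What are the opening hours?"))
```

A production bot would use semantic matching rather than word overlap, but the flow (question in, predefined answer out) is the same.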
Select the answer that correctly completes the sentence.

Options:
Answer:

Explanation:

The correct completion of the sentence is:
“The interactive answering of questions entered by a user as part of an application is an example of natural language processing.”
According to the Microsoft Azure AI Fundamentals (AI-900) official study materials, Natural Language Processing (NLP) is a branch of Artificial Intelligence that focuses on enabling computers to understand, interpret, and respond to human language in a way that is both meaningful and useful. It is one of the key AI workloads described in the “Describe features of common AI workloads” module on Microsoft Learn.
When a user types a question into an application and the system responds interactively — such as in a chatbot, Q & A system, or virtual assistant — this process requires language understanding. NLP allows the system to process the input text, determine user intent, extract relevant entities, and generate an appropriate response. This is the foundational capability behind services such as Azure Cognitive Service for Language, Language Understanding (LUIS), and QnA Maker (now integrated as Question Answering in the Language service).
Microsoft’s study guide explains that NLP workloads include the following key scenarios:
Language understanding: Determining intent and context from text or speech.
Text analytics: Extracting meaning, key phrases, sentiment, or named entities.
Conversational AI: Powering bots and virtual agents to interact using natural language. These systems rely on NLP models to analyze user inputs and respond accordingly.
In contrast:
Anomaly detection identifies data irregularities.
Computer vision analyzes images or video.
Forecasting predicts future values based on historical data.
Therefore, based on the AI-900 official materials, the interactive answering of user questions through an application clearly falls under Natural Language Processing (NLP).
You need to implement a pre-built solution that will identify well-known brands in digital photographs. Which Azure AI service should you use?
Options:
Face
Custom Vision
Computer Vision
Form Recognizer
Answer:
C
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore computer vision in Azure,” the Computer Vision service can analyze images to detect objects, landmarks, celebrities, and brands.
The brand detection capability in the Computer Vision Image Analysis API uses pre-trained models to identify well-known brand logos within images. When an image is analyzed, the service returns brand names, confidence scores, and bounding box coordinates where the logos appear.
Let’s examine the other options:
A. Face: Detects and analyzes human faces, not brand logos.
B. Custom Vision: Used for training custom models to recognize unique objects (e.g., company-specific products), not pre-built brand detection.
D. Form Recognizer: Extracts text and data from structured or semi-structured documents like invoices and receipts.
Thus, since the question specifies identifying well-known brands using a pre-built AI model, the correct Azure service is Computer Vision.
You use Azure Machine Learning designer to publish an inference pipeline.
Which two parameters should you use to consume the pipeline? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Options:
the model name
the training endpoint
the authentication key
the REST endpoint
Answer:
C, D
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore Azure Machine Learning”, when you publish an inference pipeline (a deployed web service for real-time predictions) using Azure Machine Learning designer, you make the model accessible as a RESTful endpoint. Consumers—such as applications, scripts, or services—interact with this endpoint to submit data and receive predictions.
To securely access this deployed pipeline, two critical parameters are required:
REST endpoint (Option D): The REST endpoint is a URL automatically generated when the inference pipeline is deployed. It defines the network location where clients send HTTP POST requests containing input data (usually in JSON format). The endpoint routes these requests to the deployed model, which processes the data and returns prediction results. The REST endpoint acts as the primary access point for consuming the model’s inferencing capability programmatically.
Authentication key (Option C): The authentication key (or API key) is a security token provided by Azure to ensure that only authorized users or systems can access the endpoint. When invoking the REST service, the key must be included in the request header (typically as the value of the Authorization header). This mechanism enforces secure, authenticated access to the deployed model.
The other options are incorrect:
A. The model name is not required to consume the endpoint; it is used internally within the workspace.
B. The training endpoint is used for training pipelines, not for inference.
Therefore, according to Microsoft’s official AI-900 learning objectives and Azure Machine Learning documentation, when consuming a published inference pipeline, you must use both the REST endpoint (D) and the authentication key (C). These parameters ensure secure, controlled, and programmatic access to the deployed AI model for real-time predictions.
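Putting the two parameters together, consuming the endpoint is an authenticated HTTP POST. The sketch below only builds the request without sending it; the URL, key, and payload shape are placeholders that approximate (but do not guarantee) the actual scoring contract:

```python
import json
import urllib.request

def build_scoring_request(rest_endpoint, auth_key, input_rows):
    """Assemble the HTTP request used to consume a published inference
    pipeline: a JSON body plus the key in the Authorization header.
    (Endpoint URL and payload shape here are illustrative placeholders.)"""
    body = json.dumps({"Inputs": {"input1": input_rows}}).encode("utf-8")
    headers = {"Content-Type": "application/json",
               "Authorization": f"Bearer {auth_key}"}
    return urllib.request.Request(rest_endpoint, data=body,
                                  headers=headers, method="POST")

req = build_scoring_request("https://example.invalid/score", "MY-KEY",
                            [{"age": 34, "income": 52000}])
print(req.get_header("Authorization"))  # Bearer MY-KEY
```

Sending the request (for example with `urllib.request.urlopen`) would return the model's predictions as JSON.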
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

The Transformer model architecture is a foundational deep learning model introduced in the 2017 research paper “Attention Is All You Need.” It serves as the core architecture for modern large language models such as GPT, BERT, and T5, all of which are used in Azure OpenAI Service.
“A transformer model architecture uses self-attention.” – YesThe self-attention mechanism is the defining feature of transformer models. It allows the model to evaluate the relationships between words (tokens) in a sequence and assign weights based on contextual relevance. This means that each word in an input sentence can " attend " to every other word, capturing dependencies regardless of their position in the text. This mechanism replaced older recurrent (RNN) and convolutional (CNN) architectures for sequence processing because it provides parallelization and better context understanding.
“A transformer model architecture includes an encoder block and a decoder block.” – YesThe original Transformer architecture includes both an encoder and a decoder. The encoder processes the input sequence into contextual representations, and the decoder generates the output sequence based on both the encoder’s output and previously generated tokens. Models like BERT use only the encoder stack, while GPT models use only the decoder stack, but the full Transformer design conceptually includes both.
“A transformer model architecture includes an encryption block or a decryption block.” – No. Transformers are not related to cryptography. They perform encoding and decoding of language data for representation learning—not encryption or decryption for data security. The terms “encoder” and “decoder” here refer to neural network components, not cryptographic processes.
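The self-attention computation described above can be sketched in a few lines. This is a minimal, illustrative scaled dot-product attention over toy 2-dimensional token vectors, not the full multi-head implementation used in production models:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(queries, keys, values):
    """Scaled dot-product attention over lists of equal-length float vectors."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Attention scores: dot product of this query with every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)  # each token "attends" to every other token
        # Output: weighted sum of the value vectors
        outputs.append([sum(w * v[i] for w, v in zip(weights, values))
                        for i in range(len(values[0]))])
    return outputs

# Toy 3-token sequence with 2-dimensional embeddings
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = self_attention(x, x, x)  # in self-attention, Q, K, V come from the same sequence
```

Because the weights are a softmax, each output vector is a convex combination of the input values, which is why attention can blend context from the whole sequence at once.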
Which parameter should you configure to produce a more diverse range of tokens in the responses from a chat solution that uses the Azure OpenAI GPT-3.5 model?
Options:
Max response
Past messages included
Presence penalty
Stop sequence
Answer:
C
Explanation:
In Azure OpenAI Service, model behavior during text or chat generation is controlled by several parameters, such as temperature, max tokens, top_p, presence penalty, and frequency penalty. According to Microsoft Learn’s documentation for Azure OpenAI GPT models, the presence penalty influences how likely the model is to introduce new or diverse tokens in its responses.
Specifically, the presence penalty discourages the model from repeating previously used tokens, encouraging it to explore new topics or ideas instead of sticking to existing ones. Increasing the presence penalty value typically results in more diverse and creative outputs, while lowering it makes responses more repetitive or focused.
Option analysis:
A. Max response (Max tokens): Controls the maximum length of the generated response, not its diversity.
B. Past messages included: Defines how much chat history the model considers for context; it doesn’t affect diversity directly.
C. Presence penalty: Encourages novelty and introduces new tokens—this is correct for increasing response variety.
D. Stop sequence: Specifies a sequence of characters or tokens where the model should stop generating output.
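As a sketch of how the parameter is set in practice, the body of an Azure OpenAI chat completions request might be built like this (the presence_penalty range of -2.0 to 2.0 follows the public API; the prompt text is illustrative):

```python
def build_chat_request(user_message, presence_penalty=1.0):
    """Build the JSON body for an Azure OpenAI chat completions call.
    A higher presence_penalty (range -2.0 to 2.0) penalizes tokens that
    have already appeared, nudging the model toward new topics."""
    return {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "presence_penalty": presence_penalty,  # encourages novel tokens
        "temperature": 1.0,
    }

body = build_chat_request("Suggest ideas for a blog post.", presence_penalty=1.5)
```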
You have a website that includes customer reviews.
You need to store the reviews in English and present the reviews to users in their respective language by recognizing each user’s geographical location.
Which type of natural language processing workload should you use?
Options:
translation
language modeling
key phrase extraction
speech recognition
Answer:
A
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) syllabus and Microsoft Learn module “Describe features of natural language processing (NLP) workloads on Azure,” translation is a core NLP workload that converts text from one language into another while maintaining meaning and context.
In this scenario, the website stores reviews in English and must present them in the user’s native language based on geographical location. This directly requires a translation workload, which uses Azure Cognitive Services — specifically, the Translator service — to automatically translate content dynamically for each user.
Other options explained:
B. Language modeling involves predicting the next word in a sentence or understanding linguistic patterns; it’s used in model training, not translation.
C. Key phrase extraction identifies main ideas in text, not language conversion.
D. Speech recognition converts spoken words into written text but does not perform translation or handle geographic adaptation.
Microsoft’s Translator service supports real-time text translation, multi-language detection, and context preservation, making it ideal for global websites. The AI-900 study guide emphasizes translation as one of the most common NLP workloads, enabling applications to break language barriers and enhance accessibility for diverse audiences.
Therefore, based on official Microsoft Learn material, the correct answer is:
✅ A. translation.
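A minimal sketch of how such a translation call is assembled against the Translator v3.0 REST API (subscription key and region headers are omitted; the review text is illustrative):

```python
from urllib.parse import urlencode

def build_translate_request(text, target_language):
    """Construct the URL and body for the Azure Translator 'translate' operation.
    The endpoint and api-version follow the public Translator REST API."""
    base = "https://api.cognitive.microsofttranslator.com/translate"
    query = urlencode({"api-version": "3.0", "to": target_language})
    body = [{"text": text}]  # the API accepts a list of text items per request
    return f"{base}?{query}", body

# Target language would be chosen from the user's detected locale
url, body = build_translate_request("Great concert venue!", "fr")
```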
What are two metrics that you can use to evaluate a regression model? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
Options:
coefficient of determination (R2)
F1 score
root mean squared error (RMSE)
area under curve (AUC)
balanced accuracy
Answer:
A, C
Explanation:
A: R-squared (R²), also called the coefficient of determination, represents the predictive power of the model as a value between negative infinity and 1.00. A value of 1.00 indicates a perfect fit; because the fit can be arbitrarily poor, scores can also be negative.
C: RMS-loss or Root Mean Squared Error (RMSE) (also called Root Mean Square Deviation, RMSD), measures the difference between values predicted by a model and the values observed from the environment that is being modeled.
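Both metrics are simple to compute by hand. A minimal sketch in plain Python, using made-up actual and predicted values:

```python
import math

def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_a = sum(actual) / len(actual)
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    return 1 - ss_res / ss_tot

def rmse(actual, predicted):
    """Root mean squared error: square root of the mean squared residual."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

actual = [3.0, 5.0, 7.0]
predicted = [2.8, 5.2, 7.1]
r2 = r_squared(actual, predicted)   # close to 1 means a good fit
err = rmse(actual, predicted)       # expressed in the same units as the label
```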
You have an app that identifies birds in images. The app performs the following tasks:
* Identifies the location of the birds in the image
* Identifies the species of the birds in the image
Which type of computer vision does each task use? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore computer vision in Microsoft Azure,” there are multiple types of computer vision tasks, each designed for different goals such as recognizing, categorizing, or locating objects within an image.
In this question, the application performs two distinct tasks: locating birds within an image and identifying their species. Each of these corresponds to a different type of computer vision capability.
Locate the birds → Object detection
Object detection is used when an AI system needs to identify and locate multiple objects within a single image.
It not only recognizes what the object is but also provides bounding boxes that indicate the exact position of each object.
In this scenario, locating the birds (drawing rectangles around each bird) is achieved through object detection models, such as those available in the Azure Custom Vision Object Detection domain.
Identify the species of the birds → Image classification
Image classification focuses on identifying what is in the image rather than where it is.
It assigns a single label (or multiple labels in multilabel classification) to an entire image based on its contents.
In this case, determining the species of a bird (e.g., robin, eagle, parrot) is achieved through image classification, where the model compares visual features against learned patterns from training data.
Incorrect options:
Automated captioning generates descriptive sentences about an image, not object locations or classifications.
Optical character recognition (OCR) extracts text from images, irrelevant in this case.
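The difference between the two tasks shows up in the prediction output. As a hedged sketch of post-processing object-detection results: the field names here mirror the Custom Vision prediction response shape (tagName, probability, boundingBox), and the probability threshold is an assumption:

```python
def summarize_detections(predictions, threshold=0.5):
    """Keep only detections above a probability threshold, returning the
    tag (what the object is) and its normalized bounding box (where it is)."""
    return [
        {"tag": p["tagName"], "box": p["boundingBox"]}
        for p in predictions
        if p["probability"] >= threshold
    ]

preds = [
    {"tagName": "bird", "probability": 0.92,
     "boundingBox": {"left": 0.1, "top": 0.2, "width": 0.3, "height": 0.25}},
    {"tagName": "bird", "probability": 0.31,   # below threshold, dropped
     "boundingBox": {"left": 0.6, "top": 0.5, "width": 0.2, "height": 0.2}},
]
birds = summarize_detections(preds)
```

A classification result, by contrast, would carry only tags and probabilities for the whole image, with no bounding boxes.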
Which two resources can you use to analyze code and generate explanations of code function and code comments? Each correct answer presents a complete solution.
NOTE: Each correct answer is worth one point.
Options:
the Azure OpenAI DALL-E model
the Azure OpenAI Whisper model
the Azure OpenAI GPT-4 model
the GitHub Copilot service
Answer:
C, D
Explanation:
The correct answers are C. the Azure OpenAI GPT-4 model and D. the GitHub Copilot service.
According to the Microsoft Azure AI Fundamentals (AI-900) curriculum and Microsoft Learn documentation on Azure OpenAI and GitHub Copilot, both GPT-4 and GitHub Copilot can be used to analyze and generate explanations for code functionality, as well as produce or refine code comments.
Azure OpenAI GPT-4 model (C): The GPT-4 model is a large language model (LLM) developed by OpenAI and available through the Azure OpenAI Service. It is trained on vast amounts of text, including programming languages, documentation, and natural language instructions. This enables it to interpret source code, explain what it does, suggest optimizations, and automatically generate detailed code comments. When prompted with code snippets, GPT-4 can provide structured natural language explanations describing the logic and intent of the code. In enterprise scenarios, developers use Azure OpenAI GPT models for code understanding, review automation, and documentation generation.
GitHub Copilot service (D): GitHub Copilot, powered by OpenAI Codex, is an AI coding assistant integrated into IDEs such as Visual Studio Code. It can analyze code context and generate inline comments and explanations in real time. GitHub Copilot understands the syntax and intent of numerous programming languages and provides intelligent suggestions or explanations directly in the developer’s environment.
The other options are not suitable:
A. DALL-E is a generative image model for creating visual content, not text or code analysis.
B. Whisper is an automatic speech recognition (ASR) model used for converting speech to text, unrelated to code interpretation.
Therefore, based on the official Azure AI and GitHub documentation, the correct and verified answers are C. Azure OpenAI GPT-4 model and D. GitHub Copilot service.
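A sketch of how such a code-explanation request might be framed for a GPT-4 chat deployment; the message shape follows the chat completions API, while the prompt wording and temperature choice are illustrative:

```python
def build_code_explanation_request(code_snippet):
    """Build a chat completions body asking the model to explain code
    and propose inline comments."""
    return {
        "messages": [
            {"role": "system",
             "content": "You are a code reviewer. Explain what the code does "
                        "and suggest concise inline comments."},
            {"role": "user", "content": f"Explain this code:\n```\n{code_snippet}\n```"},
        ],
        "temperature": 0.2,  # low temperature for focused, factual explanations
    }

req = build_code_explanation_request("def add(a, b):\n    return a + b")
```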
Which two components can you drag onto a canvas in Azure Machine Learning designer? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
Options:
dataset
compute
pipeline
module
Answer:
A, D
Explanation:
In Azure Machine Learning designer, a low-code drag-and-drop interface, users can visually build machine learning workflows. According to the AI-900 study guide and Microsoft Learn module “Create and publish models with Azure Machine Learning designer”, two key components that can be dragged onto the designer canvas are datasets and modules.
Datasets (A): These are collections of data that serve as the input for training or evaluating models. They can be registered in the workspace and then dragged onto the canvas for use in transformations or model training.
Modules (D): These are prebuilt processing and modeling components that perform operations such as data cleaning, feature engineering, model training, and evaluation. Examples include “Split Data,” “Train Model,” and “Evaluate Model.”
Compute (B) and Pipeline (C) are not drag-and-drop items within the designer. Compute targets are infrastructure resources used to run the pipeline, while a pipeline represents the overall workflow, not a component that can be added like a dataset or module.
Hence, the correct answers are A. Dataset and D. Module.
You need to create a model that labels a collection of your personal digital photographs.
Which Azure AI service should you use?
Options:
Azure AI Language
Azure AI Computer Vision
Azure AI Document Intelligence
Azure AI Custom Vision
Answer:
D
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study materials and Microsoft Learn module “Describe features of Computer Vision workloads on Azure”, the Azure AI Custom Vision service allows users to build, train, and deploy custom image classification or object detection models. It is specifically designed for scenarios where you need a model tailored to your unique image dataset — in this case, personal digital photographs.
Custom Vision lets you upload and label your own images (for example, “family,” “friends,” “vacations,” or “pets”) and then train a model that learns to recognize those categories. The system automatically extracts relevant features from the training images and creates a model capable of classifying new images into the predefined labels. You can iteratively refine your model by adding more images or re-training to improve accuracy.
The other options do not fit this requirement:
A. Azure AI Language deals with text-based tasks such as language understanding, sentiment analysis, and key phrase extraction — not image processing.
B. Azure AI Computer Vision provides prebuilt image analysis models (e.g., object detection, tag generation, scene description), but it cannot learn custom categories unique to your dataset. It’s great for general image recognition but not for specialized labeling tasks.
C. Azure AI Document Intelligence (Form Recognizer) is used to extract information from structured or semi-structured documents such as forms, invoices, or receipts — not photographs.
Therefore, when you need to label or categorize personal photos with custom-defined tags, the right service is Azure AI Custom Vision, because it allows you to build a model trained specifically on your own collection of images.
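As an illustrative sketch of the labeling step, grouping photos under custom tags before a training upload; the photo paths, tag names, and batch shape below are assumptions, not the exact Custom Vision SDK types:

```python
def build_image_batch(photo_tags):
    """Group local photo paths under tag names so each tag can be
    created once and its images uploaded together for training."""
    batch = {}
    for path, tag in photo_tags:
        batch.setdefault(tag, []).append(path)
    return batch

# Hypothetical personal photo collection with user-defined labels
photos = [
    ("img/beach1.jpg", "vacations"),
    ("img/beach2.jpg", "vacations"),
    ("img/dog1.jpg", "pets"),
]
batch = build_image_batch(photos)
```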
Match the tool to the Azure Machine Learning task.
To answer, drag the appropriate tool from the column on the left to its tasks on the right. Each tool may be used once, more than once, or not at all
NOTE: Each correct match is worth one point.

Options:
Answer:

Explanation:

The correct matching aligns directly with the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn modules under “Identify features of Azure Machine Learning”. Azure Machine Learning provides a suite of tools that serve different functions within the model development lifecycle — from creating workspaces, to training models, to automating experimentation.
The Azure portal → Create a Machine Learning workspace. The Azure portal is a web-based graphical interface for managing all Azure resources. According to Microsoft Learn, you use the portal to create and configure the Azure Machine Learning workspace, which acts as the central environment where datasets, experiments, models, and compute resources are organized. Creating a workspace through the portal involves specifying a subscription, resource group, and region — tasks that are part of the setup stage rather than model development.
Machine Learning designer → Use a drag-and-drop interface used to train and deploy models. The Machine Learning designer (the successor to Azure Machine Learning Studio (classic)) provides a visual, no-code/low-code interface for building, training, and deploying machine learning pipelines. The designer uses a drag-and-drop workflow where users connect modules representing data transformations, model training, and evaluation. This tool is ideal for beginners and those who want to quickly experiment with machine learning concepts without writing code.
Automated machine learning (Automated ML) → Use a wizard to select configurations for a machine learning run. Automated ML simplifies model creation by automatically selecting algorithms, hyperparameters, and data preprocessing options. Users interact through a guided wizard (within the Azure Machine Learning studio) that walks them through configuration steps such as selecting datasets, target columns, and performance metrics. The system then iteratively trains and evaluates multiple models to recommend the best-performing one.
Together, these tools streamline the machine learning workflow:
Azure portal for setup and resource management,
Machine Learning designer for visual model creation, and
Automated ML for guided, automated model selection and tuning.
You are evaluating whether to use a basic workspace or an enterprise workspace in Azure Machine Learning.
What are two tasks that require an enterprise workspace? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
Options:
Use a graphical user interface (GUI) to run automated machine learning experiments.
Create a compute instance to use as a workstation.
Use a graphical user interface (GUI) to define and run machine learning experiments from Azure Machine Learning designer.
Create a dataset from a comma-separated value (CSV) file.
Answer:
A, C
Explanation:
The correct answers are A. Use a graphical user interface (GUI) to run automated machine learning experiments and C. Use a graphical user interface (GUI) to define and run machine learning experiments from Azure Machine Learning designer.
According to the Microsoft Azure AI Fundamentals (AI-900) official documentation and Microsoft Learn module “Create and manage Azure Machine Learning workspaces”, there are two workspace tiers: Basic and Enterprise. The Enterprise workspace provides advanced capabilities for automation, visualization, and collaboration that are not available in the Basic tier.
Specifically:
Automated machine learning (AutoML) using a GUI is only available in the Enterprise tier. AutoML automatically selects algorithms and tunes hyperparameters through the Azure Machine Learning studio interface.
Azure Machine Learning designer, which allows users to visually drag and drop datasets and modules to create machine learning pipelines, also requires the Enterprise workspace.
In contrast:
B. Create a compute instance and D. Create a dataset from a CSV file are fundamental actions supported in both Basic and Enterprise workspaces. These do not require the advanced licensing features of the Enterprise edition.
Therefore, tasks involving the graphical, no-code tools—Automated ML (AutoML) and the Designer—require the Enterprise workspace, aligning with AI-900’s learning objectives.
You have a dataset that contains experimental data for fuel samples.
You need to predict the amount of energy that can be obtained from a sample based on its density.
Which type of AI workload should you use?
Options:
Classification
Clustering
Knowledge mining
Regression
Answer:
D
Explanation:
As described in the AI-900 study guide under “Identify features of machine learning,” regression is a supervised learning technique used to predict continuous numerical values. In this scenario, the goal is to predict energy output (a continuous variable) based on density (a numeric input).
Regression models find relationships between variables by fitting a line or curve that best represents the trend of the data. In Azure Machine Learning, regression algorithms such as linear regression, decision tree regression, and boosted decision trees are commonly used for such predictions.
Classification (A) predicts discrete labels (e.g., “High” or “Low”), not numeric values.
Clustering (B) groups similar data points but does not perform prediction.
Knowledge mining (C) extracts insights from unstructured data using tools like Azure AI Search and Cognitive Skills.
Hence, based on AI-900 fundamentals, predicting energy based on density requires a regression workload.
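A minimal sketch of the underlying idea: an ordinary least-squares fit of energy against density, with made-up sample values:

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Illustrative fuel data: density vs. energy yield (made-up numbers)
density = [0.70, 0.80, 0.90]
energy = [35.0, 40.0, 45.0]
slope, intercept = fit_linear(density, energy)
predicted = slope * 0.85 + intercept  # predicted energy for a new sample
```

The continuous numeric output (`predicted`) is exactly what distinguishes a regression workload from classification, which would instead emit a category label.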
Match the facial recognition tasks to the appropriate questions.
To answer, drag the appropriate task from the column on the left to its question on the right. Each task may be used once, more than once, or not at all.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

The correct matches are based on the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore computer vision in Microsoft Azure.” These materials explain that facial recognition tasks can be categorized into four major operations: verification, identification, similarity, and grouping. Each task serves a distinct purpose in facial recognition scenarios.
Verification – “Do two images of a face belong to the same person?” The verification task determines whether two facial images represent the same individual. Azure Face API compares the facial features and returns a confidence score indicating the likelihood that the two faces belong to the same person.
Similarity – “Does this person look like other people?” The similarity task compares a face against a collection of faces to find visually similar individuals. It does not confirm identity but measures how closely two or more faces resemble each other.
Grouping – “Do all the faces belong together?” Grouping organizes a set of unknown faces into clusters based on similar facial features. This is used when identities are not known beforehand, helping discover potential duplicates or visually similar clusters within an image dataset.
Identification – “Who is this person in this group of people?” The identification task is used when the system tries to determine who a specific person is by comparing their face against a known collection (face database or gallery). It returns the identity that best matches the input face.
According to Microsoft’s AI-900 training, these tasks form the basis of Azure Face API’s capabilities. Each helps solve a different type of facial recognition problem—from matching pairs to discovering unknown identities—making them essential components of responsible AI-based vision systems.
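A hedged sketch of the verification flow: the faceId1/faceId2 field names mirror the Face verify operation's request body, while the IDs and the 0.5 decision threshold are assumptions for illustration:

```python
def build_verify_request(face_id_1, face_id_2):
    """Body for a face verification call comparing two detected faces;
    the IDs would come from prior face-detection calls."""
    return {"faceId1": face_id_1, "faceId2": face_id_2}

def is_same_person(confidence, threshold=0.5):
    """Interpret the returned confidence score against a chosen cutoff.
    The 0.5 default is an illustrative policy, not a mandated value."""
    return confidence >= threshold

req = build_verify_request("id-aaa", "id-bbb")
```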
You have a custom question answering solution.
You create a bot that uses the knowledge base to respond to customer requests.
You need to identify what the bot can perform without adding additional skills.
What should you identify?
Options:
Register customer complaints.
Answer questions from multiple users simultaneously.
Register customer purchases.
Provide customers with return materials authorization (RMA) numbers.
Answer:
B
Explanation:
According to the AI-900 Microsoft Learn modules on Conversational AI, a custom question answering solution built using Azure AI Language (formerly QnA Maker) enables a chatbot to respond to user questions based on a predefined knowledge base. When integrated with a bot, the solution can automatically respond to multiple user queries in real time without additional programming.
This capability is known as scalability and concurrency, which allows chatbots to manage simultaneous conversations with multiple users. This feature is built into the Azure Bot Service, meaning you don’t need to add extra “skills” or custom logic for concurrent interactions.
Other options require additional integration or logic:
Register customer complaints or purchases would require connecting the bot to a CRM or sales system.
Provide RMA numbers requires business process logic or database access.
Therefore, the out-of-the-box functionality of a custom question answering bot is the ability to answer questions from multiple users at once, which is supported natively by Azure Bot Service and the QnA knowledge base.
You need to identify harmful content in a generative AI solution that uses Azure OpenAI Service.
What should you use?
Options:
Face
Video Analysis
Language
Content Safety
Answer:
D
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) curriculum and Azure OpenAI documentation, the appropriate service for detecting and managing harmful, unsafe, or inappropriate content in text, images, or other generative AI outputs is Azure AI Content Safety.
Azure AI Content Safety is designed to automatically detect potentially harmful material such as hate speech, violence, self-harm, sexual content, or profanity. It ensures that generative AI applications like chatbots, image generators, and content creation tools comply with Microsoft’s Responsible AI principles — specifically Reliability & Safety and Accountability.
This service integrates directly with the Azure OpenAI Service, meaning that when developers build AI solutions using models like GPT-4 or DALL·E, they can use Content Safety to filter and moderate both input prompts and model outputs. This protects users from unsafe or offensive content generation.
Let’s analyze why the other options are incorrect:
A. Face – The Face service detects and analyzes human faces in images or videos. It is unrelated to moderating harmful textual or generative content.
B. Video Analysis – This service analyzes video streams to detect objects, actions, or events but not inappropriate or harmful text or imagery from AI models.
C. Language – The Azure AI Language service focuses on text understanding tasks like sentiment analysis, entity recognition, and translation, not content safety filtering.
Therefore, per Microsoft Learn’s official AI-900 guidance, when identifying or filtering harmful content in a generative AI solution built with Azure OpenAI, the correct and verified service to use is Azure AI Content Safety.
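A sketch of how a moderation step might be wired in: the category names mirror Content Safety's four harm categories, while the payload details and the severity cutoff are illustrative assumptions:

```python
HARM_CATEGORIES = ["Hate", "SelfHarm", "Sexual", "Violence"]

def build_analyze_text_request(text):
    """Body for a Content Safety analyze-text call covering the four
    harm categories."""
    return {"text": text, "categories": HARM_CATEGORIES}

def flag_harmful(severities, max_allowed=2):
    """Return the categories whose returned severity exceeds the allowed
    level. The cutoff of 2 is an example moderation policy, not a default."""
    return [cat for cat, sev in severities.items() if sev > max_allowed]

# E.g. severities as parsed from a Content Safety response
flags = flag_harmful({"Hate": 0, "SelfHarm": 0, "Sexual": 0, "Violence": 4})
```

In a generative AI pipeline, both the user's prompt and the model's output would pass through a check like this before being shown.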
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

This question is based on identifying Natural Language Processing (NLP) workloads, which is a fundamental topic in the Microsoft Azure AI Fundamentals (AI-900) certification. According to the official Microsoft Learn module “Describe features of natural language processing (NLP) workloads on Azure”, NLP enables computers to understand, interpret, and generate human language — both written and spoken.
A bot that responds to queries by internal users – Yes. This is an example of a natural language processing workload because it involves understanding and generating human language. A chatbot interprets user input (queries written or spoken) using language understanding and text analytics, and then produces appropriate responses. On Azure, this can be implemented using Azure AI Language (LUIS) and the Azure Bot Service, both core NLP technologies.
A mobile application that displays images relating to an entered search term – No. This application involves searching for or displaying images, which falls under the computer vision workload, not NLP. Computer vision focuses on analyzing and interpreting visual data like photos or videos, while NLP deals with language and text processing.
A web form used to submit a request to reset a password – No. A password reset form involves structured input fields and user authentication, not natural language understanding or generation. It’s part of standard web development and identity management, not an NLP-related process.
Therefore, based on Microsoft’s AI-900 curriculum definitions:
✅ The only true NLP example is the bot responding to user queries, since it processes and understands natural language input to generate conversational output.
You plan to build a conversational AI solution that can be surfaced in Microsoft Teams, Microsoft Cortana, and Amazon Alexa. Which service should you use?
Options:
Azure Bot Service
Azure Cognitive Search
Language service
Speech
Answer:
A
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Describe features of conversational AI workloads on Azure,” the Azure Bot Service is the dedicated Azure service for building, connecting, deploying, and managing conversational AI experiences across multiple channels — such as Microsoft Teams, Cortana, and Amazon Alexa.
The Azure Bot Service integrates with the Bot Framework SDK to design intelligent chatbots that can communicate with users in natural language. It also connects seamlessly with other Azure Cognitive Services, such as Language Service (LUIS) for intent understanding and Speech Service for voice input/output.
The question specifies that the conversational AI must be accessible through multiple platforms, including Microsoft Teams, Cortana, and Alexa. Azure Bot Service supports this multi-channel communication model out of the box, allowing developers to configure a single bot that interacts through many endpoints simultaneously.
Other options:
B. Azure Cognitive Search: Used for information retrieval and knowledge mining, not conversational AI.
C. Language Service: Provides natural language understanding, key phrase extraction, sentiment analysis, etc., but doesn’t handle multi-channel communication.
D. Speech: Provides speech-to-text and text-to-speech conversion but is not a chatbot platform.
Therefore, the best solution for building and deploying a multi-channel conversational AI system is Azure Bot Service, as clearly defined in Microsoft’s AI-900 learning content.
Which scenario is an example of a webchat bot?
Options:
Determine whether reviews entered on a website for a concert are positive or negative, and then add a thumbs up or thumbs down emoji to the reviews.
Translate into English questions entered by customers at a kiosk so that the appropriate person can call the customers back.
Accept questions through email, and then route the email messages to the correct person based on the content of the message.
From a website interface, answer common questions about scheduled events and ticket purchases for a music festival.
Answer:
D
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study materials, a webchat bot is defined as a conversational AI application that interacts with users through a web-based chat interface. It simulates human conversation using text (and sometimes voice) to answer questions, assist with transactions, or provide information automatically. Microsoft Learn’s “Describe features of common AI workloads” module highlights conversational AI as a major AI workload, where bots and virtual agents are used to provide automated, intelligent responses in real time through web, mobile, or messaging platforms.
In this scenario, the chatbot on the festival website provides immediate answers about scheduled events and ticket purchases. This aligns exactly with how a webchat bot operates — interacting with users through a website, handling repetitive inquiries, and providing consistent information without human intervention. This type of solution is commonly built using Azure Bot Service integrated with Azure Cognitive Services for Language, which allows the bot to understand user intent and respond naturally.
Let’s examine the other options to reinforce why D is correct:
A describes a text analytics or sentiment analysis scenario, not a conversational bot, because it classifies text sentiment but doesn’t “chat” with a user.
B is an example of machine translation using the Translator service, not a chatbot.
C is an email classification or natural language processing task, not a webchat interaction.
The AI-900 exam objectives clearly distinguish conversational AI from other cognitive services such as translation or sentiment analysis. Conversational AI focuses on dialogue and interaction through natural language conversation channels like websites or messaging apps.
Therefore, the verified and officially aligned answer is D. From a website interface, answer common questions about scheduled events and ticket purchases for a music festival.
Match the types of machine learning to the appropriate scenarios.
To answer, drag the appropriate machine learning type from the column on the left to its scenario on the right. Each machine learning type may be used once, more than once, or not at all.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Describe features of common AI workloads”, there are three primary supervised and unsupervised machine learning types: Regression, Classification, and Clustering. Each type of learning addresses a different kind of problem depending on the data and desired prediction output.
Regression – Regression models are used to predict numeric, continuous values. The study guide specifies that “regression predicts a number.” In the scenario “Predict how many minutes late a flight will arrive based on the amount of snowfall,” the output (minutes late) is a continuous numeric value. Therefore, this is a regression problem. Regression algorithms like linear regression or decision tree regression estimate relationships between variables and predict measurable quantities.
Clustering – Clustering falls under unsupervised learning, where the model identifies natural groupings or patterns in unlabeled data. The official AI-900 training material states that “clustering is used to find groups or segments of data that share similar characteristics.” The scenario “Segment customers into different groups to support a marketing department” fits this description because the goal is to group customers based on behavior or demographics without predefined labels. Thus, it is a clustering problem.
Classification – Classification is a supervised learning method used to predict discrete categories or labels. The AI-900 content defines classification as “predicting which category an item belongs to.” The scenario “Predict whether a student will complete a university course” requires a yes/no (binary) outcome, which is a classic classification problem. Examples include logistic regression, decision trees, or neural networks trained for categorical prediction.
In summary:
Regression → Predicts continuous numeric outcomes.
Clustering → Groups data by similarities without predefined labels.
Classification → Predicts discrete or categorical outcomes.
Hence, the correct and verified mappings based on the official AI-900 study material are:
Regression → Flight delay prediction
Clustering → Customer segmentation
Classification → Course completion prediction
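As a concrete illustration of the three mappings above, here is a minimal scikit-learn sketch. The datasets are invented toy values standing in for the exam scenarios, not real flight, student, or customer data:

```python
# Toy illustration of the three machine learning types using scikit-learn.
# All numbers below are invented for demonstration only.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.cluster import KMeans

# Regression: predict a continuous number (minutes late from snowfall in cm).
snowfall = np.array([[0], [5], [10], [20]])
minutes_late = np.array([0, 12, 25, 52])
reg = LinearRegression().fit(snowfall, minutes_late)
print(reg.predict([[15]])[0])  # a numeric estimate, not a category

# Classification: predict a discrete label (will a student complete: 0 or 1).
hours_studied = np.array([[1], [2], [8], [9]])
completed = np.array([0, 0, 1, 1])
clf = LogisticRegression().fit(hours_studied, completed)
print(clf.predict([[7]])[0])  # a category: 0 or 1

# Clustering: group unlabeled data (customers by spend and visit count).
customers = np.array([[5, 1], [6, 2], [80, 30], [85, 28]])
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(customers)
print(clusters)  # group assignments discovered without predefined labels
```

Note that only the clustering step receives no labels at all, which is what makes it unsupervised.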
You are developing a solution that uses the Text Analytics service.
You need to identify the main talking points in a collection of documents.
Which type of natural language processing should you use?
Options:
entity recognition
key phrase extraction
sentiment analysis
language detection
Answer:
B
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) learning path and Azure Text Analytics service documentation, key phrase extraction is a natural language processing (NLP) technique used to automatically identify the main topics or talking points within a text document or a collection of documents. This feature is designed to summarize textual data by detecting the most relevant words or short phrases that capture the essence of the content.
For example, if a document discusses “renewable energy sources such as solar and wind power,” the key phrases extracted might include “renewable energy,” “solar power,” and “wind power.” This helps users quickly understand the primary focus areas of large volumes of text without manual review.
In Azure, the Text Analytics service provides several core NLP capabilities, including:
Key phrase extraction – identifies main concepts or topics.
Entity recognition – detects and categorizes proper names like people, locations, or organizations.
Sentiment analysis – determines the emotional tone (positive, neutral, or negative).
Language detection – identifies the language used in the text.
Since the question specifies identifying main talking points, the correct feature is key phrase extraction, as it focuses on summarizing themes rather than identifying entities or emotions.
Therefore, the verified answer is B. key phrase extraction.
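For intuition only, here is a naive frequency-based sketch of the idea behind surfacing salient terms. The real Text Analytics service uses trained NLP models, not word counting; this toy function merely illustrates the concept:

```python
# Toy sketch of the *idea* of key phrase extraction: surface the most
# salient terms in a document. This is NOT how Azure's service works
# internally; it is a counting heuristic for illustration only.
import re
from collections import Counter

STOPWORDS = {"the", "and", "such", "as", "of", "to", "a", "is", "are"}

def naive_key_phrases(text, top_n=3):
    words = re.findall(r"[a-z]+", text.lower())
    content = [w for w in words if w not in STOPWORDS]
    return [w for w, _ in Counter(content).most_common(top_n)]

doc = ("Renewable energy sources such as solar power and wind power "
       "are growing. Solar power adoption doubled.")
print(naive_key_phrases(doc))  # frequent content words like 'power', 'solar'
```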
What should you do to reduce the number of false positives produced by a machine learning classification model?
Options:
Include test data in the training data.
Increase the number of training iterations.
Modify the threshold value in favor of false positives.
Modify the threshold value in favor of false negatives.
Answer:
D
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Describe features of machine learning on Azure”, a classification model outputs a probability score representing how likely each input belongs to a particular class. To decide whether a prediction is “positive” or “negative,” the model applies a threshold (often defaulted to 0.5). Adjusting this threshold directly affects the balance between false positives and false negatives.
A false positive occurs when the model incorrectly predicts a positive outcome (for example, predicting that a patient has a disease when they do not).
A false negative occurs when the model fails to predict a true positive (for example, predicting that a patient does not have a disease when they actually do).
To reduce false positives, you must make the model less likely to classify borderline cases as positive. This is done by increasing the decision threshold, thereby favoring false negatives (because the model will only classify a case as positive when the prediction confidence is very high). In other words, by moving the threshold upward, you tighten the model’s standard for what qualifies as a “positive” prediction, reducing incorrect positives.
Let’s review why other options are incorrect:
A. Include test data in training data: This contaminates your dataset and causes overfitting, which leads to unreliable performance metrics.
B. Increase the number of training iterations: This may improve learning but doesn’t specifically target false positives.
C. Modify the threshold in favor of false positives: That would increase, not reduce, false positives.
Therefore, the correct step to reduce false positives is to adjust the threshold in favor of false negatives, making the model more conservative when labeling a case as positive — hence, Answer: D.
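The threshold effect described above can be sketched in a few lines of Python; the probability scores and ground-truth labels are invented illustrative values:

```python
# Sketch of how raising the decision threshold trades false positives
# for false negatives. Scores and labels are made-up illustrative values.
def confusion_counts(scores, labels, threshold):
    # Count false positives (predicted 1, actually 0) and
    # false negatives (predicted 0, actually 1) at a given threshold.
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

scores = [0.2, 0.4, 0.55, 0.6, 0.7, 0.9]  # model probability outputs
labels = [0,   0,   0,    1,   0,   1]     # ground truth

print(confusion_counts(scores, labels, 0.5))  # default threshold
print(confusion_counts(scores, labels, 0.8))  # stricter: fewer FPs, more FNs
```

Raising the threshold from 0.5 to 0.8 eliminates both false positives here, at the cost of one new false negative, exactly the trade-off the answer describes.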
Which action can be performed by using the Azure AI Vision service?
Options:
identifying breeds of animals in live video streams
extracting key phrases from documents
extracting data from handwritten letters
creating thumbnails for training videos
Answer:
A
Explanation:
The Azure AI Vision service (formerly Computer Vision) is designed to analyze visual content in images and videos. According to Microsoft Learn’s “Describe features of computer vision workloads,” Azure AI Vision can identify objects, people, text, and scenes, and even classify images or detect objects in real time.
Identifying breeds of animals in live video streams is an example of image classification or object detection—core capabilities of Azure AI Vision. The Vision service can analyze each frame in a video, recognize animals, and classify them according to known categories, making this the correct answer.
The other options are incorrect:
B. Extracting key phrases from documents → Done by Azure AI Language (Text Analytics).
C. Extracting data from handwritten letters → Done by Azure AI Document Intelligence (Form Recognizer) using OCR.
D. Creating thumbnails for training videos → While possible in Azure Media Services, it’s not a primary Azure AI Vision function.
Thus, the best answer is A. Identifying breeds of animals in live video streams.
Capturing text from images is an example of which type of AI capability?
Options:
text analysis
optical character recognition (OCR)
image description
object detection
Answer:
B
Explanation:
The correct answer is B. Optical character recognition (OCR).
OCR is a key capability within the Computer Vision and Document Intelligence services in Azure AI that enables systems to detect and extract printed or handwritten text from images and scanned documents.
When capturing text from images, OCR technology analyzes visual patterns (shapes of letters and numbers) and converts them into machine-readable text. For example, a photo of a receipt, street sign, or printed report can be processed to extract textual content programmatically.
A (Text analysis): Applies to NLP tasks such as sentiment detection or key phrase extraction, not image processing.
C (Image description): Generates captions describing the scene or objects in an image.
D (Object detection): Identifies and locates objects but does not extract text.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

The Translator service, part of Microsoft Azure Cognitive Services, is designed specifically for text translation between multiple languages. It is a cloud-based neural machine translation service that supports more than 100 languages. According to Microsoft Learn’s module “Translate text with the Translator service”, this service provides two main capabilities: text translation and automatic language detection.
“You can use the Translator service to translate text between languages.” → Yes. This statement is true. The primary purpose of the Translator service is to translate text accurately and efficiently between supported languages, such as English to Spanish or French to Japanese. It maintains contextual meaning using neural machine translation models.
“You can use the Translator service to detect the language of a given text.” → Yes. This statement is also true. The Translator service includes automatic language detection, which determines the source language before translation. For instance, if a user submits text in an unknown language, the service can identify it automatically before performing translation.
“You can use the Translator service to transcribe audible speech into text.” → No. This statement is false. Transcribing speech (audio) into text is a function of the Azure Speech service, specifically the Speech-to-Text API, not the Translator service.
Therefore, the Translator service is used for text translation and language detection, while speech transcription belongs to the Speech service.
You have a dataset that contains information about taxi journeys that occurred during a given period.
You need to train a model to predict the fare of a taxi journey.
What should you use as a feature?
Options:
the number of taxi journeys in the dataset
the trip distance of individual taxi journeys
the fare of individual taxi journeys
the trip ID of individual taxi journeys
Answer:
B
Explanation:
The label is the column you want to predict. The identified Features are the inputs you give the model to predict the Label.
Example:
The provided data set contains the following columns:
vendor_id: The ID of the taxi vendor is a feature.
rate_code: The rate type of the taxi trip is a feature.
passenger_count: The number of passengers on the trip is a feature.
trip_time_in_secs: The amount of time the trip took. You want to predict the fare before the trip is completed, at which point you don't know how long the trip will take. Thus, the trip time is not a feature, and you'll exclude this column from the model.
trip_distance: The distance of the trip is a feature.
payment_type: The payment method (cash or credit card) is a feature.
fare_amount: The total taxi fare paid is the label.
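A minimal sketch of separating features from the label for this scenario. The column names follow the example above; the rows and values are invented:

```python
# Sketch of the feature/label split for the taxi-fare example.
# Column names match the example above; row values are invented.
trips = [
    {"trip_id": 1, "trip_distance": 2.5, "passenger_count": 1, "fare_amount": 9.5},
    {"trip_id": 2, "trip_distance": 7.1, "passenger_count": 2, "fare_amount": 22.0},
]

LABEL = "fare_amount"          # the value the model must predict
EXCLUDED = {"trip_id", LABEL}  # IDs carry no predictive signal, so exclude them

features = [{k: v for k, v in row.items() if k not in EXCLUDED} for row in trips]
labels = [row[LABEL] for row in trips]

print(features[0])  # inputs the model learns from
print(labels)       # the label column, kept separate
```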
You use Azure Machine Learning designer to build a model pipeline. What should you create before you can run the pipeline?
Options:
a Jupyter notebook
a registered model
a compute resource
Answer:
C
Explanation:
Before running a pipeline in Azure Machine Learning Designer, you must have an available compute resource (such as a compute instance or compute cluster). Compute provides the processing power required to train, evaluate, and execute the pipeline’s modules.
Other options:
A. Jupyter notebook – Used for code-first development, not required for Designer pipelines.
B. Registered model – Created after running a pipeline, not before.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:
Yes, Yes, No.
According to the Microsoft Azure AI Fundamentals (AI-900) study materials, conversational AI enables applications, websites, and digital assistants to interact with users via natural language. A chatbot is a key conversational AI workload and can be integrated into multiple channels such as web pages, Microsoft Teams, Facebook Messenger, and Cortana using Azure Bot Service and Bot Framework.
“A restaurant can use a chatbot to answer queries through Cortana” — Yes. Azure Bot Service supports multi-channel deployment, which includes Cortana integration. This means the same bot can respond to voice or text input via Cortana, making it a valid use case for a restaurant to provide menu details, reservations, or order tracking through voice-based AI assistants.
“A restaurant can use a chatbot to answer inquiries about business hours from a webpage” — Yes. This is a standard scenario for chatbots embedded on a company website. As per Microsoft Learn’s Describe features of conversational AI module, a chatbot can be added to a website to handle FAQs such as business hours, location, or menu details, thereby improving response time and reducing repetitive human workload.
“A restaurant can use a chatbot to automate responses to customer reviews on an external website” — No. Azure bots and other conversational AI tools cannot automatically interact with or post on external third-party platforms where the business does not control the data or API integration. Automated posting or replying to reviews on external review sites (e.g., Yelp or Google Reviews) would violate both ethical and technical boundaries of responsible AI usage outlined by Microsoft.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

This question is derived from the Microsoft Azure AI Fundamentals (AI-900) learning module, particularly under “Describe features of conversational AI workloads on Azure.” It tests understanding of chatbot capabilities and design principles within the context of Azure Bot Service and Conversational AI.
Chatbots can support voice input – Yes. According to the AI-900 official materials, conversational AI systems such as chatbots can interact with users through text or voice. Using speech recognition services like Azure Cognitive Services Speech-to-Text, bots can interpret spoken input, and with Text-to-Speech, they can respond verbally. This enables voice-based chatbots used in virtual assistants, call centers, and customer support. Hence, voice input is fully supported by conversational AI solutions in Azure.
A separate chatbot is required for each communication channel – No. The Azure Bot Service is designed to provide multi-channel communication from a single bot instance. A single chatbot can communicate across several channels such as Microsoft Teams, Web Chat, Slack, Facebook Messenger, and email without needing separate bots for each platform. This centralized design allows developers to create, deploy, and manage one bot while configuring multiple channel connections through the Azure portal. Therefore, the statement is false.
Chatbots manage conversation flows by using a combination of natural language and constrained option responses – Yes. In Microsoft’s AI-900 training, chatbots are described as using Natural Language Processing (NLP) to understand free-form user input while also guiding interactions with predefined options such as buttons or quick replies. This hybrid approach ensures both flexibility and control, improving user experience and accuracy. Bots can interpret natural language via services like Language Understanding (LUIS) and also present structured options to guide conversations efficiently.
To complete the sentence, select the appropriate option in the answer area.

Options:
Answer:

Explanation:

The correct answer is object detection. According to the Microsoft Azure AI Fundamentals (AI-900) official study materials and Microsoft Learn module “Explore computer vision”, object detection is the process of identifying and locating objects within an image or video. The primary characteristic of object detection, as emphasized in the study guide, is its ability to return a bounding box around each detected object along with a corresponding label or class.
In this question, the task involves returning a bounding box that indicates the location of a vehicle in an image. This is the exact definition of object detection — identifying that the object exists (a vehicle) and determining its position within the frame. Microsoft Learn clearly differentiates this from other computer vision tasks. Image classification, for example, only determines what an image contains as a whole (for instance, “this image contains a vehicle”), but it does not indicate where in the image the object is located. Optical character recognition (OCR) is specifically used for extracting printed or handwritten text from images, and semantic segmentation involves classifying every pixel in an image to understand boundaries in greater detail, often used in autonomous driving or medical imaging.
The official AI-900 guide highlights object detection as one of the key computer vision workloads supported by Azure Computer Vision, Custom Vision, and Azure Cognitive Services. These services are designed to detect multiple instances of various object types in a single image, outputting bounding boxes and confidence scores for each.
Therefore, based on the AI-900 official curriculum and Microsoft Learn concepts, returning a bounding box that shows the location of a vehicle is a textbook example of object detection, as it involves both recognition and localization of the object within the image frame.
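For illustration, an object detection result typically pairs a class label with a bounding box and a confidence score. The field names below are generic placeholders, not a specific Azure response schema:

```python
# Illustrative shape of a single object detection result: a label plus a
# bounding box and confidence score. Field names are generic, not the
# exact Azure AI Vision response schema.
detection = {
    "label": "vehicle",
    "confidence": 0.94,
    "bounding_box": {"x": 120, "y": 45, "width": 260, "height": 140},
}

def box_area(d):
    # Area of the detected region in pixels, derived from the bounding box.
    box = d["bounding_box"]
    return box["width"] * box["height"]

print(detection["label"], box_area(detection))
```

The bounding box is what distinguishes object detection from plain image classification, which would return only the label.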
Select the answer that correctly completes the sentence.

Options:
Answer:

Explanation:

The correct completion of the sentence is:
“The Form Recognizer service can be used to extract information from a driver’s license to populate a database.”
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Identify features of computer vision workloads,” Azure Form Recognizer (part of Azure AI Document Intelligence) is a document processing service that uses machine learning and optical character recognition (OCR) to extract structured data, key-value pairs, and text from documents such as invoices, receipts, identity cards, and driver’s licenses.
This service allows businesses to automate data entry and document processing workflows by converting physical or scanned documents into machine-readable formats. For example, with a driver’s license, Form Recognizer can extract structured data fields such as Name, Date of Birth, License Number, and Expiration Date, and automatically populate those values into a database or CRM system.
The AI-900 study materials emphasize that Form Recognizer is designed to handle both structured and unstructured document layouts. It includes prebuilt models for common document types (like invoices, receipts, and identity documents) and supports custom models for domain-specific forms.
By comparison:
Computer Vision extracts general text or image content but doesn’t structure or label extracted fields.
Custom Vision is used for training image classification or object detection models.
Conversational Language Understanding is for processing text or speech to determine intent, not extracting document data.
Therefore, based on the Microsoft Learn AI-900 official study content, the Form Recognizer service is the correct choice, as it is explicitly designed to extract and structure data from documents like driver’s licenses, forms, and receipts — making it ideal for automatically populating a database.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:
Yes, Yes, and No.
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn modules under the topic “Describe features of common AI workloads”, conversational AI solutions like chatbots are used to automate and enhance customer interactions. A chatbot is an AI service capable of understanding user inputs (text or voice) and providing appropriate responses, often integrated into websites, mobile apps, or messaging platforms.
A restaurant can use a chatbot to empower customers to make reservations using a website or an app – Yes. This statement is true because conversational AI is designed to handle structured tasks such as booking, scheduling, and information retrieval. Chatbots built with Azure Bot Service can connect to backend systems (like a reservation database) to let customers make or modify reservations through a chat interface. The AI-900 study guide explicitly notes that chatbots can help businesses “automate processes such as booking or reservations” to improve efficiency and customer experience.
A restaurant can use a chatbot to answer inquiries about business hours from a webpage – Yes. This is also true. Chatbots can be trained using QnA Maker (now integrated into Azure AI Language) or Azure Cognitive Services for Language to answer common customer questions. FAQs such as opening hours, menu details, and directions are ideal for chatbot automation, as outlined in the AI-900 modules discussing customer support automation.
A restaurant can use a chatbot to automate responses to customer reviews on an external website – No. This is not a typical chatbot use case taught in AI-900. Chatbots are meant for direct interactions within controlled channels, such as a company’s own website or messaging app. Managing and posting responses to reviews on external platforms (like Yelp or Google Reviews) would involve policy restrictions, authentication issues, and reputational risk. The AI-900 course specifies that responsible AI usage requires maintaining human oversight in public-facing communications that influence brand image.
Your website has a chatbot to assist customers.
You need to detect when a customer is upset based on what the customer types in the chatbot.
Which type of AI workload should you use?
Options:
anomaly detection
semantic segmentation
regression
natural language processing
Answer:
D
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) curriculum and Microsoft Learn module “Explore natural language processing”, NLP enables computers to understand, interpret, and analyze human language. One of its key capabilities is sentiment analysis, which detects emotional tone (positive, negative, or neutral) in text.
In this scenario, the chatbot must detect when a customer is upset based on what they type. This directly maps to sentiment analysis, a core NLP function. The Text Analytics service within Azure Cognitive Services provides prebuilt sentiment analysis models that return a sentiment score and classification (e.g., positive, neutral, negative). As per Microsoft Learn, “Natural language processing allows systems to understand sentiment and intent within text and speech to derive meaningful insights.”
Explanation of other options:
A. Anomaly detection identifies unusual patterns in data (e.g., fraud detection), not emotions in text.
B. Semantic segmentation is a computer vision technique used to label pixels in an image.
C. Regression predicts continuous numeric values and is not related to understanding text or emotions.
Therefore, to enable the chatbot to detect when a user is upset based on text input, the correct AI workload is Natural Language Processing (NLP), specifically through Azure Text Analytics sentiment analysis.
To complete the sentence, select the appropriate option in the answer area.

Options:
Answer:

Explanation:

According to Microsoft’s Responsible AI principles, one of the key guiding values is Reliability and Safety, which ensures that AI systems operate consistently, accurately, and safely under all intended conditions. The AI-900 study materials and Microsoft Learn modules explain that an AI system must be trustworthy and dependable, meaning it should not produce results when the input data is incomplete, corrupted, or significantly outside the expected range.
In the given scenario, the AI system avoids providing predictions when important fields contain unusual or missing values. This behavior demonstrates reliability and safety because it prevents the system from making unreliable or potentially harmful decisions based on bad or insufficient data. Microsoft emphasizes that AI systems must undergo extensive validation, testing, and monitoring to ensure stable performance and predictable outcomes, even when data conditions vary.
The other options do not fit this scenario:
Inclusiveness ensures that AI systems are accessible to and usable by all people, regardless of abilities or backgrounds.
Privacy and Security focuses on protecting user data and ensuring it is used responsibly.
Transparency involves making AI decisions explainable and understandable to humans.
Only Reliability and Safety directly address the concept of an AI system refusing to act or returning an error when it cannot make a trustworthy prediction. This principle helps prevent inaccurate or unsafe outputs, maintaining confidence in the system’s integrity.
Therefore, ensuring an AI system does not produce predictions when input data is incomplete or unusual aligns directly with Microsoft’s Reliability and Safety principle for responsible AI.
You need to create a clustering model and evaluate the model by using Azure Machine Learning designer. What should you do?
Options:
Split the original dataset into a dataset for features and a dataset for labels. Use the features dataset for evaluation.
Split the original dataset into a dataset for training and a dataset for testing. Use the training dataset for evaluation.
Split the original dataset into a dataset for training and a dataset for testing. Use the testing dataset for evaluation.
Use the original dataset for training and evaluation.
Answer:
C
Explanation:
According to the Microsoft Learn module “Explore fundamental principles of machine learning” and the AI-900 Official Study Guide, when building and evaluating a model (such as a clustering model) in Azure Machine Learning designer, data must be divided into two subsets:
Training dataset: Used to train the model so it can learn patterns and relationships in the data.
Testing dataset: Used to evaluate the model’s performance on unseen data, ensuring that it generalizes well and does not overfit.
In Azure ML Designer, this is typically done using the Split Data module, which separates the dataset into training and testing portions (for example, 70% training and 30% testing). After training, you connect the testing dataset to an Evaluate Model module to assess metrics such as accuracy, precision, or silhouette score (for clustering).
Other options are incorrect:
A. Split into features and labels: Clustering is an unsupervised learning technique, so it doesn’t use labeled data.
B. Use training dataset for evaluation: This would cause overfitting, as the model is being tested on the same data it learned from.
D. Use the original dataset for training and evaluation: Also causes overfitting, offering no measure of generalization.
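The workflow above can be sketched in code, using scikit-learn as a stand-in for the designer's graphical Split Data and Evaluate Model modules. The generated two-cluster dataset is invented for illustration:

```python
# Sketch of the train/test workflow for a clustering model. scikit-learn's
# train_test_split plays the role of the designer's Split Data module, and
# silhouette_score the role of Evaluate Model. The data is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Synthetic unlabeled data with two natural groupings.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(6, 1, (50, 2))])

# Split 70/30, as the Split Data module would.
train, test = train_test_split(data, test_size=0.3, random_state=0)

# Train only on the training portion.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(train)

# Evaluate on the held-out testing portion, never on the training data.
test_labels = model.predict(test)
print(round(silhouette_score(test, test_labels), 2))
```

Evaluating on the held-out portion is what exposes overfitting; scoring the training data itself (options B and D) would hide it.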
Stating the source of the data used to train a model is an example of which responsible AI principle?
Options:
fairness
transparency
reliability and safety
privacy and security
Answer:
B
Explanation:
According to Microsoft’s Responsible AI Principles, Transparency means that AI systems should clearly communicate how they operate, including data sources, limitations, and decision-making processes. Stating the source of data used to train a model helps users understand where the model’s knowledge comes from, enabling informed trust and accountability.
Transparency ensures that organizations disclose relevant details about data collection and model design, especially for compliance, fairness, and reproducibility.
Other options are incorrect:
A. Fairness: Focuses on avoiding bias and ensuring equitable outcomes.
C. Reliability and safety: Ensures AI performs consistently and safely.
D. Privacy and security: Protects user data and maintains confidentiality.
Thus, the principle illustrated by disclosing training data sources is Transparency.
You need to track multiple versions of a model that was trained by using Azure Machine Learning. What should you do?
Options:
Provision an inference cluster.
Explain the model.
Register the model.
Register the training data.
Answer:
C
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore Azure Machine Learning,” registering a model is the correct way to track multiple versions of models in Azure Machine Learning.
When you train models in Azure Machine Learning, each trained version can be registered in the workspace’s Model Registry. Registration stores the model’s metadata, including version, training environment, parameters, and lineage. Each registration automatically increments the version number, enabling you to manage, deploy, and compare multiple model iterations efficiently.
The other options are incorrect:
A. Provision an inference cluster – Used for model deployment, not version tracking.
B. Explain the model – Provides interpretability but does not track versions.
D. Register the training data – Registers data assets, not models.
You need to build an app that will read recipe instructions aloud to support users who have reduced vision.
Which service should you use?
Options:
Text Analytics
Translator Text
Speech
Language Understanding (LUIS)
Answer:
C
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Identify features of speech capabilities in Azure Cognitive Services”, the Azure Speech service provides functionality for converting text to spoken words (speech synthesis) and speech to text (speech recognition).
In this scenario, the app must read recipe instructions aloud to assist users with visual impairments. This task is achieved through speech synthesis, also known as text-to-speech (TTS). The Azure Speech service uses advanced neural network models to generate natural-sounding voices in many languages and accents, making it ideal for accessibility scenarios such as screen readers, virtual assistants, and educational tools.
Microsoft Learn defines Speech service as a unified offering that includes:
Speech-to-text (speech recognition): Converts spoken words into text.
Text-to-speech (speech synthesis): Converts written text into natural-sounding audio output.
Speech translation: Translates spoken language into another language in real time.
Speaker recognition: Identifies or verifies a person based on their voice.
The other options do not fit the requirements:
A. Text Analytics – Performs text-based natural language analysis such as sentiment, key phrase extraction, and entity recognition, but it cannot produce audio output.
B. Translator Text – Translates text between languages but does not generate speech output.
D. Language Understanding (LUIS) – Interprets user intent from text or speech for conversational bots but does not read text aloud.
Therefore, based on the AI-900 curriculum and Microsoft Learn documentation, the correct service for converting recipe text to spoken audio is the Azure Speech service.
✅ Final Answer: C. Speech
You are authoring a Language Understanding (LUIS) application to support a music festival.
You want users to be able to ask questions about scheduled shows, such as: “Which act is playing on the main stage?”
The question “Which act is playing on the main stage?” is an example of which type of element?
Options:
an intent
an utterance
a domain
an entity
Answer:
B
Explanation:
In a Language Understanding (LUIS) application, an utterance represents an example of what a user might say to the bot. According to Microsoft Learn – “Build a Language Understanding app”, an utterance is a sample phrase that helps train the LUIS model to recognize user intent.
In the given example — “Which act is playing on the main stage?” — the statement is an utterance that a user might say to find out about show schedules. LUIS uses utterances like this to identify the intent (the user’s goal, e.g., GetShowInfo) and to extract any entities (e.g., main stage) that provide additional details for fulfilling the request.
To clarify the other elements:
Intent: The overall purpose or action (e.g., “FindShowDetails”).
Entity: Specific information in the utterance (e.g., “main stage”).
Domain: A general subject area (e.g., entertainment, events).
Thus, “Which act is playing on the main stage?” is an utterance used to train the LUIS model to understand natural language input.
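The relationship between the three elements can be sketched as a small data structure. The intent and entity names here are illustrative, not a fixed LUIS schema:

```python
# Sketch of how a LUIS training example ties an utterance to an intent
# and its entities. Intent/entity names are illustrative only.
training_example = {
    "utterance": "Which act is playing on the main stage?",  # what the user says
    "intent": "GetShowInfo",                                  # the user's goal
    "entities": [{"type": "Stage", "value": "main stage"}],   # details extracted
}

# The entity text is a span taken from the utterance itself.
assert training_example["entities"][0]["value"] in training_example["utterance"]
print(training_example["intent"])
```

Many such utterances, all mapped to the same intent, are what train the model to generalize to new phrasings.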
You need to build an app that will identify celebrities in images.
Which service should you use?
Options:
Azure OpenAI Service
Azure Machine Learning
conversational language understanding (CLU)
Azure Al Vision
Answer:
D
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official learning path, the appropriate service for recognizing celebrities in images is Azure AI Vision (formerly Computer Vision). This service is part of Azure’s Cognitive Services suite and specializes in analyzing visual content using pretrained deep learning models. One of its built-in capabilities, as documented in Microsoft Learn: “Analyze images with Azure AI Vision”, includes object detection, face detection, and celebrity recognition.
The Azure AI Vision Analyze API can detect and identify thousands of objects, brands, and celebrities. When an image is submitted to the service, the model compares detected faces to a known database of public figures and returns metadata including celebrity names, confidence scores, and bounding box coordinates. This makes it ideal for applications that need to recognize well-known individuals automatically—such as media cataloging, content tagging, or entertainment apps.
The other options are incorrect:
A. Azure OpenAI Service provides generative AI and language models (such as GPT-4); within the scope of AI-900 fundamentals, it is not the service used to analyze image content directly.
B. Azure Machine Learning is for custom model training and deployment, not a prebuilt vision recognition service.
C. Conversational Language Understanding (CLU) processes natural language input, not images.
Therefore, the correct service for identifying celebrities in images is D. Azure AI Vision.
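To make the result concrete, the sketch below parses a hand-written sample of the kind of JSON the Analyze API returns when celebrity detection is included. The field layout follows the documented categories/detail/celebrities shape, but the sample values are invented for illustration:

```python
# Illustrative sketch: parsing the kind of JSON the Azure AI Vision
# Analyze API returns when celebrity recognition is enabled. The
# response below is a hand-written sample, not real service output.
sample_response = {
    "categories": [
        {
            "name": "people_portrait",
            "score": 0.97,
            "detail": {
                "celebrities": [
                    {
                        "name": "Example Celebrity",
                        "confidence": 0.92,
                        "faceRectangle": {"left": 30, "top": 40,
                                          "width": 100, "height": 100},
                    }
                ]
            },
        }
    ]
}

def extract_celebrities(response):
    """Collect (name, confidence) pairs from an Analyze-style response."""
    found = []
    for category in response.get("categories", []):
        for celeb in category.get("detail", {}).get("celebrities", []):
            found.append((celeb["name"], celeb["confidence"]))
    return found

print(extract_celebrities(sample_response))
```

The confidence score and bounding box returned per face are what an application would use to decide whether a match is trustworthy enough to display.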
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:
Yes, No, Yes.
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Identify capabilities of Azure Cognitive Services for Language”, the Azure Translator service is a cloud-based machine translation service used to translate text or entire documents between languages in real time. It uses REST APIs or client libraries to translate text input, detect languages, and support multiple target languages in a single request.
“The following service call will accept English text as an input and output Italian and French text: /translate?from=en&to=it,fr” – Yes. This URL format is correct because the Translator service API allows multiple target languages to be specified in a single to parameter separated by commas. In this case, from=en defines the source language (English), and to=it,fr requests translations into Italian (it) and French (fr). The API would return results in both target languages simultaneously. This syntax is officially documented in Microsoft Learn as the valid format for multi-language translation.
“The following service call will accept English text as an input and output Italian and French text: /translate?from=en&to=fr&to=it” – No. This format is incorrect, as the Translator API does not support repeating the to parameter multiple times. Only one to parameter is valid, and multiple target languages must be provided as a comma-separated list within the same to parameter.
“The Translator service can be used to translate documents from English to French.” – Yes. This statement is true. The Translator service supports both text translation and document translation. The document translation capability allows the translation of whole files such as Word, PowerPoint, or PDF documents while preserving formatting and structure. This feature is included in the official Translator API under “Document Translation.”
In summary, the AI-900 study content clarifies that:
✅ /translate?from=en&to=it,fr → Valid syntax
❌ /translate?from=en&to=fr&to=it → Invalid syntax
✅ Translator can translate full documents between languages
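The comma-separated syntax can be sketched in code. The snippet below builds such a request URL with the standard library only; the endpoint host and api-version value are illustrative assumptions, and no network call is made:

```python
from urllib.parse import urlencode

def build_translate_url(source, targets,
                        endpoint="https://api.cognitive.microsofttranslator.com"):
    # Multiple target languages go in ONE "to" parameter, comma-separated,
    # matching the valid syntax described above. safe="," keeps the comma
    # readable instead of percent-encoding it.
    query = urlencode(
        {"api-version": "3.0", "from": source, "to": ",".join(targets)},
        safe=",",
    )
    return f"{endpoint}/translate?{query}"

print(build_translate_url("en", ["it", "fr"]))
```

A real request would POST the text to this URL with a subscription key in the headers; the sketch only shows how the query string is assembled.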
Which two actions can you perform by using the Azure OpenAI DALL-E model? Each correct answer presents a complete solution.
NOTE: Each correct answer is worth one point.
Options:
Create images.
Use optical character recognition (OCR).
Detect objects in images.
Modify images.
Generate captions for images.
Answer:
A, D
Explanation:
The correct answers are A. Create images and D. Modify images.
The Azure OpenAI DALL-E model is a text-to-image generative AI model that can create original images and modify existing ones based on text prompts. According to Microsoft Learn and Azure OpenAI documentation, DALL-E interprets natural language descriptions to produce unique and creative visual content, making it useful for design, illustration, marketing, and educational applications.
Create images (A) – DALL-E can generate new images entirely from textual input. For example, the prompt “a futuristic city skyline at sunrise” would result in a custom-generated artwork that visually represents that description.
Modify images (D) – DALL-E also supports inpainting and outpainting, allowing users to edit or expand existing images. You can replace parts of an image (for example, changing a background or object) or add new elements consistent with the visual style of the original.
The remaining options are incorrect:
B. OCR is performed by Azure AI Vision, not DALL-E.
C. Detect objects in images is also an Azure AI Vision (Image Analysis) feature.
E. Generate captions for images is handled by Azure AI Vision, not DALL-E, since DALL-E generates—not interprets—visuals.
You are developing a natural language processing solution in Azure. The solution will analyze customer reviews and determine how positive or negative each review is.
This is an example of which type of natural language processing workload?
Options:
language detection
sentiment analysis
key phrase extraction
entity recognition
Answer:
B
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore natural language processing (NLP) in Azure,” sentiment analysis is a core natural language processing (NLP) workload used to determine the emotional tone or attitude expressed in a piece of text. It helps identify whether a statement, review, or comment conveys a positive, negative, neutral, or mixed sentiment.
In this question, the scenario involves analyzing customer reviews and determining how positive or negative each review is. This directly aligns with sentiment analysis, which evaluates subjective text and quantifies the expressed opinion. In Azure, this workload is implemented through the Azure AI Language service (formerly Text Analytics API), where the Sentiment Analysis feature assigns a sentiment score to text inputs and classifies them accordingly.
For example:
“I love this product!” → Positive sentiment
“It’s okay, but could be better.” → Neutral or mixed sentiment
“I’m disappointed with the service.” → Negative sentiment
Let’s analyze why the other options are incorrect:
A. Language detection: Identifies which language (e.g., English, Spanish, French) the text is written in. It doesn’t measure positivity or negativity.
C. Key phrase extraction: Identifies the main topics or keywords in text (e.g., “battery life,” “customer support”), not the emotion.
D. Entity recognition: Detects and categorizes specific entities such as people, locations, organizations, or dates within the text.
Therefore, based on Microsoft’s AI-900 syllabus and Azure AI Language documentation, the workload that analyzes text to determine positive or negative opinions is Sentiment Analysis (Option B). This capability is widely used in customer feedback analysis, brand monitoring, and social media analytics to understand public perception and improve business decisions.
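To make the idea concrete, here is a deliberately simple word-list scorer. It is a toy sketch only: the Azure AI Language service uses trained models rather than word lists, and the vocabularies below are invented for illustration:

```python
# Toy illustration of the idea behind sentiment analysis: score text by
# counting positive and negative words. Real sentiment models are trained
# on labeled data; this sketch only demonstrates the concept.
POSITIVE = {"love", "great", "excellent", "happy"}
NEGATIVE = {"disappointed", "bad", "poor", "terrible"}

def toy_sentiment(text):
    words = {w.strip(".,!?").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(toy_sentiment("I love this product!"))                # positive
print(toy_sentiment("I'm disappointed with the service."))  # negative
```

The real service returns per-sentence confidence scores for positive, neutral, and negative classes instead of a single label.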
Which type of natural language processing (NLP) entity is used to identify a phone number?
Options:
regular expression
machine-learned
list
Pattern-any
Answer:
AExplanation:
In Natural Language Processing (NLP), entities are pieces of information extracted from text, such as names, locations, or phone numbers. According to the Microsoft Learn module “Explore natural language processing in Azure,” Azure’s Language Understanding (LUIS) supports several entity types:
Machine-learned entities – Automatically learned based on context in training data.
List entities – Used for predefined, limited sets of values (e.g., colors or product names).
Pattern.any entities – Capture flexible, unstructured phrases in user input.
Regular expression entities – Use regex patterns to match specific data formats such as phone numbers, postal codes, or dates.
A regular expression is ideal for recognizing phone numbers because phone numbers follow specific numeric or symbol-based patterns (e.g., (555)-123-4567 or +1 212 555 0199). By defining a regex pattern, the AI model can accurately extract phone numbers regardless of text context.
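A pattern covering the two formats mentioned above could be sketched as follows. The exact expression is illustrative; in LUIS, a regular expression entity is configured in the portal rather than written in application code:

```python
# Regex-entity sketch: match the two phone formats cited above,
# (555)-123-4567 and +1 212 555 0199. The pattern is illustrative only.
import re

PHONE_PATTERN = re.compile(
    r"(\(\d{3}\)-\d{3}-\d{4}"          # (555)-123-4567
    r"|\+\d{1,2}(?: \d{3}){2} \d{4})"  # +1 212 555 0199
)

text = "Call (555)-123-4567 or +1 212 555 0199 for support."
print(PHONE_PATTERN.findall(text))
```

Because the format is fixed, the pattern extracts the numbers regardless of the surrounding sentence, which is exactly why regex entities suit this entity type.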
What are three Microsoft guiding principles for responsible AI? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
Options:
knowledgeability
decisiveness
inclusiveness
fairness
opinionatedness
reliability and safety
Answer:
C, D, F
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Describe features of common AI workloads and considerations”, Microsoft has defined six guiding principles for responsible AI. These principles are intended to ensure that AI systems are developed and deployed in ways that are ethical, transparent, and beneficial to all. The six principles are: Fairness, Reliability and Safety, Privacy and Security, Inclusiveness, Transparency, and Accountability.
Let’s break down the three correct options:
Fairness – Microsoft emphasizes that AI systems should treat all individuals fairly and avoid discrimination against people based on gender, race, age, or other characteristics. Fairness ensures that outcomes and decisions from AI systems are equitable across diverse user groups. In the AI-900 learning materials, fairness is explained as a foundational value that ensures algorithms and models do not introduce or amplify societal bias.
Reliability and Safety – This principle ensures that AI systems function as intended under all expected conditions and that they can handle unexpected inputs safely. Microsoft states that AI should be tested rigorously and validated for reliability before deployment. AI systems must perform consistently and avoid causing harm due to errors or failures.
Inclusiveness – Inclusiveness focuses on empowering everyone and engaging people of all backgrounds. Microsoft’s responsible AI guidance stresses designing AI systems that understand and respect cultural, linguistic, and ability differences to make technology accessible and beneficial to all users.
Options A (knowledgeability), B (decisiveness), and E (opinionatedness) are not part of Microsoft’s Responsible AI principles. These terms do not appear in any Microsoft Learn AI-900 curriculum or official responsible AI documentation.
Thus, based on the verified AI-900 study content and Microsoft’s Responsible AI framework, the correct answer is C. Inclusiveness, D. Fairness, and F. Reliability and Safety.
You have the following apps:
• App1: Uses a set of images and photos to extract brand names
• App2: Enables touchless access control for buildings
Which Azure AI Vision service does each app use? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn documentation on Azure AI Vision services, different Azure AI Vision capabilities are suited for different use cases such as object detection, brand recognition, facial recognition, and spatial analysis.
App1: Uses a set of images and photos to extract brand names → Image Analysis. The Azure AI Vision – Image Analysis service (formerly part of Computer Vision) can detect and extract brands, objects, text, and other visual features from images. It uses advanced image classification and object detection models to recognize logos and identify brand names (for example, “Microsoft” or “Coca-Cola”) in photos. The Image Analysis API can also return descriptive tags, scene descriptions, and confidence scores. Therefore, since App1 analyzes static images to extract brand names, it specifically relies on the Image Analysis feature of Azure AI Vision.
App2: Enables touchless access control for buildings → Face. The Azure AI Face service is designed for facial detection, verification, and identification. It can recognize and match faces in real time, making it ideal for access control, identity verification, and attendance tracking systems. A “touchless access control” system uses a camera to detect a person’s face and verify identity against a stored profile, allowing or denying entry without physical interaction.
The other options are not suitable:
Optical Character Recognition (OCR) extracts text, not brand logos.
Spatial Analysis is for detecting movement or presence in video feeds.
Video Analysis is for analyzing dynamic video content rather than still images.
Extracting relationships between data from large volumes of unstructured data is an example of which type of AI workload?
Options:
computer vision
knowledge mining
natural language processing (NLP)
anomaly detection
Answer:
B
Explanation:
Extracting relationships and insights from large volumes of unstructured data (such as documents, text files, or images) aligns with the Knowledge Mining workload in Microsoft Azure AI. According to the Microsoft AI Fundamentals (AI-900) study guide and Microsoft Learn module “Describe features of common AI workloads,” knowledge mining involves using AI to search, extract, and structure information from vast amounts of unstructured or semi-structured content.
In a typical knowledge mining solution, tools like Azure AI Search and Azure AI Document Intelligence work together to index data, apply cognitive skills (such as OCR, key phrase extraction, and entity recognition), and then enable users to discover relationships and patterns through intelligent search. The process transforms raw content into searchable knowledge.
The key characteristics of knowledge mining include:
Using AI to extract entities and relationships between data points.
Applying cognitive skills to text, images, and documents.
Creating searchable knowledge stores from unstructured data.
Hence, B. Knowledge Mining is correct.
The other options—computer vision, NLP, and anomaly detection—deal with image recognition, language understanding, and data irregularities, respectively, not large-scale information extraction.
Match the types of AI workloads to the appropriate scenarios.
To answer, drag the appropriate workload type from the column on the left to its scenario on the right. Each workload type may be used once, more than once, or not at all.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

This question tests understanding of AI workload types, a fundamental topic in the Microsoft Azure AI Fundamentals (AI-900) curriculum. Each workload type—Computer Vision, Natural Language Processing, Machine Learning (Regression), and Anomaly Detection—serves a specific function within the AI landscape, as explained in Microsoft Learn’s module “Describe features of common AI workloads.”
Computer Vision enables computers to “see” and interpret visual information such as images or videos. Identifying handwritten letters requires analyzing image patterns, shapes, and strokes, which is a classic image recognition task. Azure’s Computer Vision API and Custom Vision services are specifically designed for such tasks.
Natural Language Processing (NLP) involves interpreting human language, both written and spoken. Determining the sentiment of a social media post (positive, negative, or neutral) is a typical text analytics use case within NLP, often implemented using Azure’s Text Analytics for Sentiment Analysis.
Anomaly Detection focuses on identifying data points that deviate from normal patterns. Detecting fraudulent credit card payments requires finding transactions that are unusual compared to historical spending behavior. Azure’s Anomaly Detector API applies machine learning to identify such irregularities.
Machine Learning (Regression) is used for predicting continuous numerical outcomes based on historical data. Estimating next month’s toy sales is a regression problem—an example of supervised learning where the model predicts future sales values from past sales data.
Thus, based on Microsoft’s official AI-900 learning objectives, the correct mapping of workloads to scenarios is:
Computer Vision → Identify handwritten letters
NLP → Predict sentiment
Anomaly Detection → Fraud detection
Machine Learning (Regression) → Predict toy sales
You have the process shown in the following exhibit.

Which type of AI solution is shown in the diagram?
Options:
a sentiment analysis solution
a chatbot
a machine learning model
a computer vision application
Answer:
B
Select the answer that correctly completes the sentence.

Options:
Answer:

Explanation:

According to the Microsoft Azure AI Fundamentals (AI-900) official study materials and the Microsoft Learn module “Describe Azure Machine Learning and Automated ML,” Azure Machine Learning designer (formerly known as Azure Machine Learning Studio) is a drag-and-drop, low-code/no-code environment that allows users to create, train, and evaluate machine learning models visually — without the need for extensive programming knowledge.
The designer provides a visual interface, known as the canvas, where users can:
Import and prepare data using modules for data transformation and cleaning.
Split data into training and testing datasets.
Select and configure algorithms (classification, regression, or clustering).
Train and evaluate the model.
Deploy the model as a web service directly from the designer.
The official Microsoft Learn content emphasizes that “Azure Machine Learning designer enables users to build, test, and deploy models by adding and connecting prebuilt modules on a visual interface.” This allows business analysts, data professionals, and beginners to experiment with machine learning workflows without writing code.
By comparison:
Automatically performing common data preparation tasks refers to Automated ML, not the designer.
Automatically selecting an algorithm is also part of Automated ML, which optimizes models algorithmically.
Using a code-first notebook experience applies to Azure Machine Learning notebooks, intended for data scientists familiar with Python and SDKs.
Therefore, as per the AI-900 study guide and Microsoft Learn documentation, the verified and correct answer is:
✅ Adding and connecting modules on a visual canvas, which accurately describes how Azure Machine Learning designer operates.
What should you use to explore pretrained generative AI models available from Microsoft and third-party providers?
Options:
Azure Synapse Analytics
Azure Machine Learning designer
Azure AI Foundry
Language Studio
Answer:
C
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

According to the Microsoft Azure AI Fundamentals (AI-900) study guide and Azure Cognitive Services documentation, the Custom Vision service is a specialized computer vision tool that allows users to build, train, and deploy custom image classification and object detection models. It is part of the Azure Cognitive Services suite, designed for scenarios where pre-built Computer Vision models do not meet specific business requirements.
“The Custom Vision service can be used to detect objects in an image.” → Yes. This statement is true. The Custom Vision service supports object detection, enabling the model to identify and locate multiple objects within a single image using bounding boxes. For example, it can locate cars, products, or animals in photos.
“The Custom Vision service requires that you provide your own data to train the model.” → Yes. This statement is true. Unlike pre-trained models such as the standard Computer Vision API, the Custom Vision service requires users to upload and label their own images. The system uses this labeled dataset to train a model specific to the user’s scenario, improving accuracy for custom use cases.
“The Custom Vision service can be used to analyze video files.” → No. This statement is false. The Custom Vision service works only with static images, not videos. To analyze video files, Azure provides Video Indexer and Azure Media Services, which are designed for extracting insights from moving visual content.
Select the answer that correctly completes the sentence.

Options:
Answer:

Explanation:

According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Describe features of conversational AI workloads,” the correct answer is Power Virtual Agents. This service is part of the Microsoft Power Platform and is specifically designed to enable users to create intelligent chatbots without writing any code.
Power Virtual Agents (PVA) provides a no-code/low-code environment where business users and developers can collaboratively design conversational experiences. It integrates built-in natural language processing (NLP) models to understand user intent and respond intelligently to text or speech inputs. The platform’s interface allows chatbot creators to design dialogues visually, connect to back-end data via Power Automate, and publish bots on websites, Teams, or other communication channels.
This approach is highlighted in Microsoft Learn as an ideal solution for organizations that want to deploy conversational bots quickly without requiring specialized AI or programming expertise. PVA automatically leverages Microsoft’s language understanding models, allowing it to interpret user input and map it to predefined topics or actions.
Let’s analyze the other options:
Azure Health Bot: A specialized solution for the healthcare industry that provides prebuilt medical compliance and healthcare content. It is not a general-purpose, no-code chatbot builder.
Microsoft Bot Framework: A developer-focused SDK for building highly customized bots through code, offering maximum flexibility but not no-code functionality.
Therefore, the most appropriate choice that “can be used to build no-code apps that use built-in natural language processing models” is Power Virtual Agents — the official Microsoft no-code chatbot solution for conversational AI workloads.
What is an example of a regression model in machine learning?
Options:
dividing the student data in a dataset based on the age of the students and their educational achievements
identifying subtypes of spam email by examining a large collection of emails that were flagged by users
predicting the sale price of a house based on historical data, the size of the house, and the number of bedrooms in the house
identifying population counts of endangered animals by analyzing images
Answer:
C
Explanation:
The correct answer is C: predicting the sale price of a house based on historical data, the size of the house, and the number of bedrooms.
In machine learning, regression is a supervised learning technique used to predict continuous numeric values. Microsoft’s AI-900 study guide defines regression models as those that estimate relationships between variables—predicting a continuous outcome variable from one or more input features.
In this case, the house sale price is a continuous numeric value, and inputs such as size, location, and number of bedrooms are the features. Common regression algorithms include linear regression, decision tree regression, and boosted regression.
Other options represent different ML workloads:
A involves segmentation by categories (classification or clustering).
B represents clustering, grouping similar items without predefined labels.
D represents computer vision, counting animals in images rather than predicting a numeric value.
Hence, the verified answer is C. Regression.
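The house-price scenario can be sketched with a minimal ordinary-least-squares fit. The figures below are invented toy data; a real model would use many features, but the principle of predicting a continuous value is the same:

```python
# Minimal regression sketch: fit price = slope * size + intercept by
# ordinary least squares on toy data. Values are invented for illustration.
sizes = [100.0, 150.0, 200.0, 250.0]   # square meters (input feature)
prices = [200.0, 290.0, 410.0, 500.0]  # sale prices in $1000s (labels)

n = len(sizes)
mean_x = sum(sizes) / n
mean_y = sum(prices) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, prices)) \
        / sum((x - mean_x) ** 2 for x in sizes)
intercept = mean_y - slope * mean_x

def predict(size):
    """Predict a continuous sale price: the hallmark of regression."""
    return slope * size + intercept

print(round(predict(180), 1))
```

Because the output is an unbounded number rather than a category, this is regression; swapping the labels for classes like “cheap”/“expensive” would turn the same data into a classification problem.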
You need to identify street names based on street signs in photographs.
Which type of computer vision should you use?
Options:
object detection
optical character recognition (OCR)
image classification
facial recognition
Answer:
B
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Describe features of computer vision workloads on Azure”, Optical Character Recognition (OCR) is a core computer vision workload that enables AI systems to detect and extract text from images or scanned documents.
In this scenario, the goal is to identify street names from street signs in photographs. Since the text is embedded within images, OCR is the correct technology to use. OCR works by analyzing the visual patterns of letters, numbers, and symbols, then converting them into machine-readable text. Azure’s Computer Vision API and Azure AI Vision Service provide OCR capabilities that can extract printed or handwritten text from pictures, documents, and even real-time camera feeds.
Let’s analyze the other options:
A. Object detection: Identifies and locates objects (like cars, people, or street signs) but not the text written on them.
C. Image classification: Classifies an entire image into categories (e.g., “street scene” or “traffic sign”) but doesn’t extract text content.
D. Facial recognition: Identifies or verifies people by analyzing facial features, unrelated to text extraction.
Therefore, identifying street names on street signs is a text extraction problem, making Optical Character Recognition (OCR) the most accurate and verified answer per Microsoft Learn content.
When training a model, why should you randomly split the rows into separate subsets?
Options:
to train the model twice to attain better accuracy
to train multiple models simultaneously to attain better performance
to test the model by using data that was not used to train the model
Answer:
C
Explanation:
When training a machine learning model, it is standard practice to randomly split the dataset into training and testing subsets. The purpose of this is to evaluate how well the model generalizes to unseen data. According to the AI-900 study guide and Microsoft Learn module “Split data for training and evaluation”, this ensures that the model is trained on one portion of the data (training set) and evaluated on another (test or validation set).
The correct answer is C. to test the model by using data that was not used to train the model.
Random splitting prevents data leakage and overfitting, which occur when a model memorizes patterns from the training data instead of learning generalizable relationships. By testing on unseen data, developers can assess true performance, ensuring that predictions will be accurate on future, real-world data.
Options A and B are incorrect because:
A. Training the model twice does not improve accuracy; model accuracy depends on data quality, feature engineering, and algorithm choice.
B. Training multiple models simultaneously refers to model comparison, not the purpose of splitting data.
Thus, the correct reasoning is that random splitting provides a reliable estimate of the model’s predictive power on new data.
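A minimal version of such a split can be written in a few lines of standard-library Python (the 70/30 ratio and the fixed seed are illustrative choices):

```python
# Sketch of a random 70/30 train/test split, the operation described
# above. A fixed seed makes the shuffle repeatable for demonstration.
import random

def split_rows(rows, train_fraction=0.7, seed=42):
    shuffled = rows[:]                     # copy so the caller's data is untouched
    random.Random(seed).shuffle(shuffled)  # random order removes ordering bias
    cut = round(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]  # (training rows, held-out test rows)

train, test = split_rows(list(range(10)))
print(len(train), len(test))  # 7 3
```

Because the two subsets are disjoint, metrics computed on the test rows estimate performance on genuinely unseen data.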
You have an AI solution that provides users with the ability to control smart devices by using verbal commands.
Which two types of natural language processing (NLP) workloads does the solution use? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Options:
text-to-speech
translation
language modeling
key phrase extraction
speech-to-text
Answer:
C, E
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study materials and the Microsoft Learn module “Describe features of Natural Language Processing (NLP) workloads on Azure”, this scenario combines two major capabilities of AI: speech recognition and natural language understanding.
Speech-to-Text (E) – This is the first step in processing verbal commands. The Azure Speech service converts the spoken words of a user into textual data that can be understood and processed by downstream components. This workload is commonly referred to as speech recognition, and it falls under the speech capabilities of Azure Cognitive Services. Without this transcription process, the system could not interpret the user’s voice input.
Language Modeling (C) – After the speech input is converted into text, the next step is to interpret the meaning of the text so the system can take appropriate action. Language modeling, also known as language understanding, is responsible for identifying the user’s intent (for example, “turn on the lights” or “set the thermostat to 72 degrees”) and extracting entities (such as device name or temperature value). In Azure, this function is handled by Language Understanding (LUIS) or Conversational Language Understanding (CLU). These models allow smart systems to process commands and map them to defined actions.
Other options are not correct:
A. Text-to-speech converts text output into spoken language, which is not mentioned as a requirement.
B. Translation converts text from one language to another, irrelevant to this scenario.
D. Key phrase extraction identifies important terms in text but doesn’t interpret or execute commands.
Therefore, the solution uses speech-to-text to transcribe verbal commands and language modeling to understand and act upon them — the two key NLP workloads enabling voice-controlled smart devices.
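The second stage can be illustrated with a toy rule-based interpreter operating on a transcript produced by speech-to-text. The intent names and matching rules here are invented for illustration; CLU/LUIS learn this mapping from labeled example utterances rather than hand-written rules:

```python
# Toy sketch of intent/entity extraction on a speech-to-text transcript.
# Intent names and rules are illustrative, not a real CLU/LUIS model.
import re

def interpret(transcript):
    text = transcript.lower()
    if "turn on" in text:
        # Everything after "turn on the" is treated as the device entity.
        device = text.split("turn on the ")[-1].strip()
        return {"intent": "TurnOnDevice", "entities": {"device": device}}
    match = re.search(r"set the (\w+) to (\d+)", text)
    if match:
        return {"intent": "SetValue",
                "entities": {"device": match.group(1),
                             "value": int(match.group(2))}}
    return {"intent": "None", "entities": {}}

print(interpret("Set the thermostat to 72 degrees"))
```

The structured output (intent plus entities) is what the smart-home back end would act on, completing the voice-command pipeline.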
You need to predict the population size of a specific species of animal in an area.
Which Azure Machine Learning type should you use?
Options:
clustering
regression
classification
Answer:
B
Explanation:
In Azure Machine Learning, regression is a supervised machine learning technique used to predict continuous numerical values based on input data. According to the Microsoft AI Fundamentals (AI-900) study guide and the Microsoft Learn module “Identify common types of machine learning,” regression models are ideal when the goal is to estimate a quantity — such as price, temperature, or, in this case, population size.
In the scenario, the task is to predict the population size of a specific species within a defined area. Population size is a numerical, continuous value that varies depending on multiple factors (like time, environment, and resources). A regression algorithm, such as linear regression or decision tree regression, can be trained on historical data (e.g., species count, area, temperature, food availability) to forecast future population numbers.
Option analysis:
A. Clustering: Used for unsupervised learning, where the goal is to group similar data points into clusters without predefined labels (e.g., grouping animals by behavior or habitat).
C. Classification: Used to predict discrete categories or labels (e.g., “endangered” vs. “not endangered”), not numerical values.
Therefore, the correct machine learning type for predicting a continuous value such as population size is Regression.
You plan to develop a bot that will enable users to query a knowledge base by using natural language processing.
Which two services should you include in the solution? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Options:
Language Service
Azure Bot Service
Form Recognizer
Anomaly Detector
Answer:
A, B
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore conversational AI in Microsoft Azure,” conversational bots are AI applications that can understand and respond to natural language inputs through text or speech. Building such a bot typically involves two key Azure services:
Azure Bot Service (Option B): This service provides the framework and infrastructure needed to create, test, and deploy intelligent chatbots that interact with users across multiple channels (webchat, Teams, email, etc.). It handles conversation flow, integration, and user message management.
Azure Language Service (Option A): This service powers the natural language understanding (NLU) capability of the bot. It enables the bot to interpret user input, extract intent, and query a knowledge base using Question Answering (formerly QnA Maker). This allows the bot to respond intelligently to user questions by finding the most relevant answers.
The other options are incorrect:
C. Form Recognizer is used for extracting structured data from documents like invoices or forms.
D. Anomaly Detector is used for identifying unusual patterns in time-series data.
Hence, to build a bot that understands and answers user questions in natural language, the solution must combine Azure Bot Service for conversation management and Azure Language Service for knowledge-based question answering and natural language understanding.
An app that analyzes social media posts to identify their tone is an example of which type of natural language processing (NLP) workload?
Options:
sentiment analysis
key phrase extraction
entity recognition
speech recognition
Answer:
A
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Describe features of natural language processing (NLP) workloads on Azure,” sentiment analysis is an NLP workload that determines the emotional tone or opinion expressed in a piece of text. This could be positive, negative, or neutral sentiment.
When an app analyzes social media posts to identify their tone, it is performing sentiment analysis, since it aims to understand the emotional context behind user-generated text such as tweets, reviews, or comments. Azure provides this functionality through the Azure Cognitive Services – Text Analytics API, which evaluates text and returns sentiment scores.
Other options are not suitable:
Key phrase extraction identifies main ideas in text but not tone.
Entity recognition identifies names of people, organizations, or locations.
Speech recognition converts spoken words into text, not emotional analysis.
Therefore, analyzing social media tone is an example of sentiment analysis, a key NLP workload in Microsoft’s AI-900 syllabus.
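To make the concept concrete, here is a minimal lexicon-based sentiment scorer: count positive and negative words and label the text accordingly. The word lists and the `sentiment` function are made up for illustration; the actual Azure Language service uses trained models and returns per-document confidence scores for positive, neutral, and negative sentiment.

```python
# Toy lexicon-based sentiment analysis: label text by counting words from
# small positive/negative word lists. Illustrative only -- the Azure service
# uses machine-learned models, not fixed word lists.

POSITIVE = {"great", "love", "excellent", "happy", "good"}
NEGATIVE = {"bad", "hate", "terrible", "awful", "poor"}

def sentiment(text):
    """Return 'positive', 'negative', or 'neutral' for the given text."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))   # positive
print(sentiment("terrible service, very bad"))  # negative
```

Real sentiment models go far beyond word counting (handling negation like "not good", sarcasm, and context), which is why the exam points to a managed cognitive service rather than a hand-built lexicon.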
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Identify features of common machine learning types”, there are three main types of machine learning: supervised learning, unsupervised learning, and reinforcement learning. Within supervised learning, two common approaches are regression and classification, while clustering is a primary example of unsupervised learning.
“You train a regression model by using unlabeled data.” – No. Regression models are trained with labeled data, meaning the input data includes both features (independent variables) and target labels (dependent variables) representing continuous numerical values. Examples include predicting house prices or sales forecasts. Unlabeled data (data without target output values) cannot be used to train regression models; such data is used in unsupervised learning tasks like clustering.
“The classification technique is used to predict sequential numerical data over time.” – No. Classification is used for categorical predictions, where outputs belong to discrete classes, such as spam/not spam or disease present/absent. Predicting sequential numerical data over time refers to time series forecasting, which is typically a regression or forecasting problem, not classification. The AI-900 syllabus clearly separates classification (categorical prediction) from regression (continuous value prediction) and time series (temporal pattern analysis).
“Grouping items by their common characteristics is an example of clustering.” – Yes. This statement is correct. Clustering is an unsupervised learning technique used to group similar data points based on their features. The AI-900 study materials describe clustering as the process of “discovering natural groupings in data without predefined labels.” Common examples include customer segmentation or document grouping.
Therefore, based on Microsoft’s AI-900 training objectives and definitions:
Regression → supervised learning using labeled continuous data (No)
Classification → categorical prediction, not sequential numeric forecasting (No)
Clustering → grouping by similarity (Yes)
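The clustering idea, grouping items by similarity without any labels, can be sketched with a tiny one-dimensional k-means loop. The `kmeans_1d` function and the spend figures below are invented for illustration; Azure ML designer exposes production-grade clustering components such as K-Means Clustering.

```python
# Tiny 1-D k-means sketch: repeatedly assign each value to its nearest
# centroid, then move each centroid to the mean of its assigned values.
# No labels are involved at any point -- this is unsupervised learning.

def kmeans_1d(values, centroids, iterations=10):
    """Cluster numeric values around the given starting centroids."""
    clusters = {}
    for _ in range(iterations):
        clusters = {c: [] for c in centroids}
        for v in values:
            nearest = min(centroids, key=lambda c: abs(v - c))
            clusters[nearest].append(v)
        # Move each centroid to the mean of its cluster (keep it if empty).
        centroids = [sum(vs) / len(vs) if vs else c
                     for c, vs in clusters.items()]
    return clusters

spend = [10, 12, 11, 95, 102, 99]  # e.g., monthly customer spend
groups = kmeans_1d(spend, centroids=[0, 50])
print(sorted(len(g) for g in groups.values()))  # [3, 3]
```

The algorithm discovers two natural groups (low spenders and high spenders) purely from the data, which is exactly the customer-segmentation scenario the AI-900 materials cite for clustering.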
To complete the sentence, select the appropriate option in the answer area.

Options:
Answer:

Explanation:

According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Identify features of regression machine learning”, regression is a type of supervised machine learning used when the target variable (the value you want to predict) is a continuous numeric value.
In this scenario, the task is to predict how many hours of overtime a delivery person will work based on the number of orders received. Both the input (number of orders) and the output (hours of overtime) are numeric variables. Since the goal is to estimate a quantitative value rather than categorize or group data, this is a classic example of a regression problem.
Regression models analyze the relationship between variables to make numerical predictions. For example, the model might learn that each additional 20 orders increases overtime by about two hours. Common algorithms used for regression include linear regression, decision tree regression, and boosted regression models. These models produce outputs such as “expected overtime = 5.6 hours,” which are continuous numeric results.
To contrast with the other options:
Classification is used for predicting categories or labels, such as “overtime required” vs. “no overtime,” or “high-risk” vs. “low-risk.” It deals with discrete outputs rather than continuous numbers.
Clustering is an unsupervised learning approach used to group similar data points based on shared characteristics, such as grouping delivery staff by performance patterns or customer types.
As emphasized in Microsoft’s Responsible AI and Machine Learning Fundamentals learning paths, regression models are ideal for numeric forecasting problems such as predicting sales, revenue, demand, or working hours.
Therefore, the correct answer is: Regression.
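The overtime scenario can be sketched as a simple least-squares linear fit. The data points below are made up to match the "about two hours per 20 orders" pattern described above; real regression training in Azure ML designer would use historical delivery records and components such as Linear Regression.

```python
# Least-squares linear regression sketch: fit overtime hours as a linear
# function of order count, then predict a continuous numeric value.
# The training data here is fabricated purely for illustration.

def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through the data."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

orders   = [20, 40, 60, 80, 100]          # feature: orders received
overtime = [2.0, 4.0, 6.0, 8.0, 10.0]     # label: overtime hours (labeled data)

slope, intercept = fit_line(orders, overtime)
print(round(slope * 70 + intercept, 1))   # predicted overtime for 70 orders: 7.0
```

Note that the output is a continuous number (7.0 hours), not a category or a group, which is the defining characteristic that makes this a regression problem rather than classification or clustering.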