
Oracle 1z0-1110-23 Dumps


Oracle Cloud Infrastructure Data Science 2023 Professional Questions and Answers

Question 1

Which feature of the Oracle Cloud Infrastructure (OCI) Vision service helps you generate indexing tags for a collection of marketing photographs?

Options:

A. Document classification
B. Image classification
C. Text recognition
D. Key Value extraction

Question 2

Which two statements are true about published conda environments?

Options:

A. They are curated by Oracle Cloud Infrastructure (OCI) Data Science.
B. The odsc conda init command is used to configure the location of published conda environments.
C. Your notebook session acts as the source to share published conda environments with team members.
D. You can only create a published conda environment by modifying a Data Science conda environment.
E. In addition to service job run environment variables, conda environment variables can be used in Data Science Jobs.

Question 3

You are preparing a configuration object necessary to create a Data Flow application. Which THREE parameter values should you provide?

Options:

A. The path to the archive.zip file.
B. The local path to your PySpark script.
C. The compartment of the Data Flow application.
D. The bucket used to read/write the PySpark script in Object Storage.
E. The display name of the application.
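
For context, creating a Data Flow application through the ADS SDK looks roughly like this. A minimal sketch: the display name, OCIDs, bucket URIs, shapes, and file names below are all placeholders.

```python
from ads.jobs import DataFlow, DataFlowRuntime, Job

df_app = (
    Job(name="sales-etl")  # the display name of the application
    .with_infrastructure(
        DataFlow()
        .with_compartment_id("ocid1.compartment.oc1..example")  # placeholder OCID
        .with_driver_shape("VM.Standard.E4.Flex")
        .with_executor_shape("VM.Standard.E4.Flex")
        .with_logs_bucket_uri("oci://dataflow-logs@mynamespace/")
    )
    .with_runtime(
        DataFlowRuntime()
        .with_script_uri("oci://dataflow-src@mynamespace/etl.py")  # PySpark script in Object Storage
        .with_archive_uri("oci://dataflow-src@mynamespace/archive.zip")  # dependency archive
    )
)
df_app.create()
```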

Question 4

While reviewing your data, you discover that your data set has a class imbalance. You are aware that the Accelerated Data Science (ADS) SDK provides multiple built-in automatic transformation tools for data set transformation. Which would be the right tool to correct any imbalance between the classes?

Options:

A. sample()
B. suggest_recommendations()
C. auto_transform()
D. visualize_transforms()
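
As background on what "correcting an imbalance" means, here is a plain-pandas illustration of naive random oversampling. A toy sketch with made-up data, independent of whichever ADS helper the question intends:

```python
import pandas as pd

# toy data: binary target with an 8:2 class imbalance
df = pd.DataFrame({"x": range(10), "y": [0] * 8 + [1] * 2})

majority = df[df["y"] == 0]
minority = df[df["y"] == 1]

# naive random oversampling: resample the minority class (with replacement)
# until both classes have the same number of rows
balanced = pd.concat(
    [majority, minority.sample(len(majority), replace=True, random_state=0)]
)
print(balanced["y"].value_counts())  # 0: 8, 1: 8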

Question 5

You have trained three different models on your data set using Oracle AutoML. You want to visualize the behavior of each of the models, including the baseline model, on the test set. Which class should be used from the Accelerated Data Science (ADS) SDK to visually compare the models?

Options:

A. EvaluationMetrics
B. ADSEvaluator
C. ADSExplainer
D. ADSTuner
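
For reference, the usual ADS pattern for comparing several trained models on a held-out set looks roughly like this. A sketch, assuming ADS and scikit-learn are installed; the data and estimators are invented for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from ads.common.data import ADSData
from ads.common.model import ADSModel
from ads.evaluations.evaluator import ADSEvaluator

X, y = make_classification(n_samples=200, random_state=0)
X_train, y_train, X_test, y_test = X[:150], y[:150], X[150:], y[150:]

# wrap plain estimators so ADS can evaluate them side by side
models = [
    ADSModel.from_estimator(LogisticRegression().fit(X_train, y_train)),
    ADSModel.from_estimator(DecisionTreeClassifier().fit(X_train, y_train)),
]

evaluator = ADSEvaluator(ADSData(X_test, y_test), models=models)
evaluator.show_in_notebook()  # renders the comparison plots in a notebook session
```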

Question 6

Which of the following TWO non-open-source JupyterLab extensions has Oracle Cloud Infrastructure (OCI) Data Science developed and added to the notebook session experience?

Options:

A. Environment Explorer
B. Table of Contents
C. Command Palette
D. Notebook Examples
E. Terminal

Question 7

You are building a model and need input that represents data as morning, afternoon, or evening. However, the data contains a time stamp. What part of the Data Science life cycle would you be in when creating the new variable?

Options:

A. Model type selection
B. Model validation
C. Data access
D. Feature engineering
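
To make the scenario concrete, deriving a morning/afternoon/evening variable from a timestamp is a classic step of this kind. A small pandas sketch; the 12:00 and 17:00 cut-offs are arbitrary choices for illustration.

```python
import pandas as pd

df = pd.DataFrame(
    {"ts": pd.to_datetime(["2023-05-01 08:30", "2023-05-01 14:10", "2023-05-01 20:45"])}
)

# bucket the hour of day into a coarser categorical feature
df["time_of_day"] = pd.cut(
    df["ts"].dt.hour,
    bins=[0, 12, 17, 24],
    labels=["morning", "afternoon", "evening"],
    right=False,
)
print(df)
```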

Question 8

You are a data scientist trying to load data into your notebook session. You understand that the Accelerated Data Science (ADS) SDK supports loading various data formats. Which of the following THREE are ADS-supported data formats?

Options:

A. DOCX
B. Pandas DataFrame
C. JSON
D. Raw Images
E. XML
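
For context, loading data through the legacy ADS DatasetFactory looks roughly like this. A sketch in which the file name and the format argument are illustrative assumptions:

```python
import pandas as pd
from ads.dataset.factory import DatasetFactory

# open an in-memory pandas DataFrame as an ADS dataset
ds_from_df = DatasetFactory.open(pd.DataFrame({"a": [1, 2], "b": [3, 4]}))

# open a JSON file (placeholder path; format hint shown for illustration)
ds_from_json = DatasetFactory.open("data.json", format="json")
```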

Question 9

You have an embarrassingly parallel or distributed batch job on a large amount of data that you consider running using Data Science Jobs. What would be the best approach to run the workload?

Options:

A. Create the job in Data Science Jobs and start a job run. When it is done, start a new job run until you achieve the number of runs required.
B. Create the job in Data Science Jobs and then start the number of simultaneous job runs required for your workload.
C. Reconfigure the job run because Data Science Jobs does not support embarrassingly parallel workloads.
D. Create a new job for every job run that you have to run in parallel, because the Data Science Jobs service can have only one job run per job.
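
For reference, the one-job/many-runs pattern looks like this with the ADS jobs API. A sketch assuming an existing job; the OCID, run names, and run count are placeholders.

```python
from ads.jobs import Job

# load an existing Data Science job by OCID (placeholder value)
job = Job.from_datascience_job("ocid1.datasciencejob.oc1..example")

# start several simultaneous job runs of the same job, e.g. one per data shard
runs = [job.run(name=f"batch-shard-{i}") for i in range(10)]
```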

Question 10

As a data scientist, you are working on a global health data set that has data from more than 50 countries. You want to encode three features, such as 'countries', 'race', and 'body organ', as categories.

Which option would you use to encode the categorical features?

Options:

A. OneHotEncoder()
B. DataFrameLabelEncoder()
C. show_in_notebook()
D. auto_transform()
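
As an illustration of what encoding categorical features involves (independent of which ADS helper the question intends), here is a scikit-learn one-hot encoding sketch with invented data; assumes scikit-learn 1.2+ for the sparse_output argument.

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame(
    {
        "country": ["US", "IN", "FR"],
        "race": ["a", "b", "a"],
        "body_organ": ["heart", "lung", "heart"],
    }
)

# one output column per (feature, category) pair; unseen categories are ignored
enc = OneHotEncoder(sparse_output=False, handle_unknown="ignore")
encoded = enc.fit_transform(df)
print(enc.get_feature_names_out())
```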

Question 11

Which Oracle Accelerated Data Science (ADS) classes can be used for easy access to data sets from reference libraries and index websites such as scikit-learn?

Options:

A. DataLabeling
B. DatasetBrowser
C. SecretKeeper
D. ADSTuner
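
For context, browsing datasets bundled with reference libraries from ADS looks roughly like this. A sketch using the scikit-learn catalog; the dataset name is illustrative.

```python
from ads.dataset.dataset_browser import DatasetBrowser

# list the datasets shipped with scikit-learn and open one as an ADS dataset
sklearn_browser = DatasetBrowser.sklearn()
print(sklearn_browser.list())
ds = sklearn_browser.open("iris")
```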

Question 12

You want to write a Python script to create a collection of different projects for your data science team. Which Oracle Cloud Infrastructure (OCI) Data Science interface would you use?

Options:

A. The OCI Software Development Kit (SDK)
B. OCI Console
C. Command line interface (CLI)
D. Mobile App
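
To make the scripting scenario concrete, here is roughly what creating projects with the OCI Python SDK looks like. A sketch assuming a valid ~/.oci/config; the compartment OCID and project names are placeholders.

```python
import oci

# assumes a valid API-key configuration in ~/.oci/config
config = oci.config.from_file()
client = oci.data_science.DataScienceClient(config)

for name in ["churn-analysis", "demand-forecasting"]:  # hypothetical project names
    details = oci.data_science.models.CreateProjectDetails(
        compartment_id="ocid1.compartment.oc1..example",  # placeholder
        display_name=name,
    )
    response = client.create_project(details)
    print(response.data.id)
```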

Question 13

You are a data scientist leveraging the Oracle Cloud Infrastructure (OCI) Language AI service for various types of text analyses. Which TWO capabilities can you utilize with this tool?

Options:

A. Topic classification
B. Table extraction
C. Sentiment analysis
D. Sentence diagramming
E. Punctuation correction

Question 14

You have created a Data Science project in a compartment called Development and shared it with a group of collaborators. You now need to move the project to a different compartment called Production after completing the current development iteration.

Which statement is correct?

Options:

A. Moving a project to a different compartment also moves its associated notebook sessions and models to the new compartment.
B. Moving a project to a different compartment requires deleting all its associated notebook sessions and models first.
C. You cannot move a project to a different compartment after it has been created.
D. You can move a project to a different compartment without affecting its associated notebook sessions and models.

Question 15

You want to make your model more parsimonious to reduce the cost of collecting and processing data. You plan to do this by removing features that are highly correlated. You would like to create a heat map that displays the correlation so that you can identify candidate features to remove. Which Accelerated Data Science (ADS) SDK method would be appropriate to display the correlation between Continuous and Categorical features?

Options:

A. corr()
B. correlation_ratio_plot()
C. pearson_plot()
D. cramersv_plot()
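
For reference, the plotting methods named in the options come from the legacy ADSDataset API; correlation ratio is the measure suited to continuous-vs-categorical pairs, Pearson to continuous pairs, and Cramer's V to categorical pairs. A sketch, assuming a placeholder CSV with a target column, of how such a plot is produced in a notebook:

```python
from ads.dataset.factory import DatasetFactory

ds = DatasetFactory.open("train.csv", target="label")  # placeholder file and target

# heat map of correlation between continuous and categorical feature pairs
ds.correlation_ratio_plot()
```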

Question 16

You loaded data into Oracle Cloud Infrastructure (OCI) Data Science. To transform the data, you want to use the Accelerated Data Science (ADS) SDK. When you applied the get_recommendations() tool to the ADSDataset object, it showed you user-detected issues with all the recommended changes to apply to the dataset. Which option should you use to apply all the recommended transformations at once?

Options:

A. get_transformed_dataset()
B. fit_transform()
C. auto_transform()
D. visualize_transforms()
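
For context, the recommendation workflow in the legacy ADS dataset API looks roughly like this. A sketch with a placeholder file name and target column:

```python
from ads.dataset.factory import DatasetFactory

ds = DatasetFactory.open("train.csv", target="label")  # placeholder file and target

# interactively review the detected issues and the recommended fixes
ds.get_recommendations()

# or apply every recommended transformation in a single call
ds_transformed = ds.auto_transform()
```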

Question 17

You are asked to prepare data for a custom-built model that requires transcribing Spanish video recordings into a readable text format with profane words identified.

Which Oracle Cloud service would you use?

Options:

A. OCI Translation
B. OCI Language
C. OCI Speech
D. OCI Anomaly Detection

Question 18

You are working as a data scientist for a healthcare company. They decide to analyze the data to find patterns in a large volume of electronic medical records. You are asked to build a PySpark solution to analyze these records in a JupyterLab notebook. What is the order of recommended steps to develop a PySpark application in Oracle Cloud Infrastructure (OCI) Data Science?

Options:

A. Launch a notebook session. Install a PySpark conda environment. Configure core-site.xml. Develop your PySpark application. Create a Data Flow application with the Accelerated Data Science (ADS) SDK.
B. Install a Spark conda environment. Configure core-site.xml. Launch a notebook session. Create a Data Flow application with the Accelerated Data Science (ADS) SDK. Develop your PySpark application.
C. Configure core-site.xml. Install a PySpark conda environment. Create a Data Flow application with the Accelerated Data Science (ADS) SDK. Develop your PySpark application. Launch a notebook session.
D. Launch a notebook session. Configure core-site.xml. Install a PySpark conda environment. Develop your PySpark application. Create a Data Flow application with the Accelerated Data Science (ADS) SDK.

Question 19

Which Oracle Cloud Infrastructure (OCI) service should you use to create and run Spark applications using ADS?

Options:

A. Data Integration
B. Vault
C. Data Flow
D. Analytics Cloud

Question 20

Select two reasons why it is important to rotate encryption keys when using Oracle Cloud Infrastructure (OCI) Vault to store credentials or other secrets.

Options:

A. Key rotation allows you to encrypt no more than five keys at a time.
B. Key rotation improves encryption efficiency.
C. Periodically rotating keys makes it easier to reuse keys.
D. Key rotation reduces risk if a key is ever compromised.
E. Periodically rotating keys limits the amount of data encrypted by one key version.

Question 21

You have just completed analyzing a set of images by using Oracle Cloud Infrastructure (OCI) Data Labeling, and you want to export the annotated data. Which two formats are supported?

Options:

A. CoNLL V2003
B. COCO
C. Data Labeling Service Proprietary JSON
D. spaCy

Question 22

You want to ensure that all stdout and stderr from your code are automatically collected and logged, without implementing additional logging in your code. How would you achieve this with Data Science Jobs?

Options:

A. On job creation, enable logging and select a log group. Then, select either a log or the option to enable automatic log creation.
B. Make sure that your code is using the standard logging library and then store all the logs to Object Storage at the end of the job.
C. Create your own log group and use a third-party logging service to capture job run details for log collection and storing.
D. You can implement custom logging in your code by using the Data Science Jobs logging service.
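
For reference, wiring a job to a log group when defining it through ADS looks roughly like this. A sketch with placeholder OCIDs and script name:

```python
from ads.jobs import DataScienceJob, Job, ScriptRuntime

job = (
    Job(name="training-job")
    .with_infrastructure(
        DataScienceJob()
        # pointing the job at a log group (and optionally a specific log)
        # lets the service capture stdout/stderr from job runs automatically
        .with_log_group_id("ocid1.loggroup.oc1..example")  # placeholder
        .with_log_id("ocid1.log.oc1..example")             # placeholder
    )
    .with_runtime(ScriptRuntime().with_source("train.py"))  # placeholder script
)
job.create()
```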

Question 23

You want to evaluate the relationship between feature values and target variables. You have a large number of observations having a near uniform distribution and the features are highly correlated.

Which model explanation technique should you choose?

Options:

A. Feature Permutation Importance Explanations
B. Local Interpretable Model-Agnostic Explanations
C. Feature Dependence Explanations
D. Accumulated Local Effects

Question 24

You are a data scientist working for a manufacturing company. You have developed a forecasting model to predict sales demand in the upcoming months. You created a model artifact that contained custom logic requiring third-party libraries. When you deployed the model, it failed to run because you did not include all the third-party dependencies in the model artifact. What file should be modified to include the missing libraries?

Options:

A. model_artifact_validate.py
B. score.py
C. requirements.txt
D. runtime.yaml
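
As background, when a model artifact is prepared with ADS, the conda environment recorded in the artifact must carry every third-party import used by the scoring logic. A sketch of the prepare step; the estimator is a toy model and the conda pack path is a placeholder.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from ads.model.generic_model import GenericModel

est = LinearRegression().fit(np.array([[0.0], [1.0]]), [0.0, 1.0])  # toy model

model = GenericModel(estimator=est, artifact_dir="./artifact")
# prepare() generates score.py and runtime.yaml in artifact_dir; the conda
# environment referenced here (placeholder path) must contain every
# third-party library that score.py imports at deployment time
model.prepare(
    inference_conda_env="oci://bucket@namespace/conda/myenv/1.0",  # placeholder
    force_overwrite=True,
)
```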
