
Snowflake ARA-C01 Dumps

Page: 1 / 16
Total 162 questions

SnowPro Advanced: Architect Certification Exam Questions and Answers

Question 1

Which organization-related tasks can be performed by the ORGADMIN role? (Choose three.)

Options:

A.

Changing the name of the organization

B.

Creating an account

C.

Viewing a list of organization accounts

D.

Changing the name of an account

E.

Deleting an account

F.

Enabling the replication of a database
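
For context, a minimal sketch of ORGADMIN-level operations referenced by these options; the organization, account, and admin names are illustrative:

USE ROLE ORGADMIN;

-- List the accounts in the organization
SHOW ORGANIZATION ACCOUNTS;

-- Create a new account (illustrative names and edition)
CREATE ACCOUNT sales_dev
  ADMIN_NAME = dev_admin
  ADMIN_PASSWORD = 'ChangeMe123!'
  EMAIL = 'dev_admin@example.com'
  EDITION = ENTERPRISE;

-- Enable database replication for an account in the organization
SELECT SYSTEM$GLOBAL_ACCOUNT_SET_PARAMETER('MYORG.SALES_DEV',
  'ENABLE_ACCOUNT_DATABASE_REPLICATION', 'true');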

Question 2

A company wants to deploy its Snowflake accounts inside its corporate network with no visibility on the internet. The company is using a VPN infrastructure and Virtual Desktop Infrastructure (VDI) for its Snowflake users. The company also wants to re-use the login credentials set up for the VDI to eliminate redundancy when managing logins.

What Snowflake functionality should be used to meet these requirements? (Choose two.)

Options:

A.

Set up replication to allow users to connect from outside the company VPN.

B.

Provision a unique company Tri-Secret Secure key.

C.

Use private connectivity from a cloud provider.

D.

Set up SSO for federated authentication.

E.

Use a proxy Snowflake account outside the VPN, enabling client redirect for user logins.
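
As background, federated authentication (SSO) in Snowflake is configured through a SAML2 security integration; a minimal sketch, assuming a hypothetical identity provider:

USE ROLE ACCOUNTADMIN;

CREATE SECURITY INTEGRATION corp_sso
  TYPE = SAML2
  ENABLED = TRUE
  SAML2_ISSUER = 'https://idp.example.com'
  SAML2_SSO_URL = 'https://idp.example.com/sso/saml'
  SAML2_PROVIDER = 'CUSTOM'
  SAML2_X509_CERT = '<base64-encoded certificate>';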

Question 3

A user, analyst_user, has been granted the analyst_role and is deploying a SnowSQL script to run as a background service to extract data from Snowflake.

What steps should be taken to restrict access to only the allowed IP addresses? (Select TWO).

Options:

A.

ALTER ROLE ANALYST_ROLE SET NETWORK_POLICY = 'ANALYST_POLICY';

B.

ALTER USER ANALYST_USER SET NETWORK_POLICY = 'ANALYST_POLICY';

C.

ALTER USER ANALYST_USER SET NETWORK_POLICY = '10.1.1.20';

D.

USE ROLE SECURITYADMIN;

CREATE OR REPLACE NETWORK POLICY ANALYST_POLICY ALLOWED_IP_LIST = ('10.1.1.20');

E.

USE ROLE USERADMIN;

CREATE OR REPLACE NETWORK POLICY ANALYST_POLICY

ALLOWED_IP_LIST = ('10.1.1.20');
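
For reference, the general two-step pattern the options describe (create a network policy, then attach it to the user); a sketch using the names from the question:

USE ROLE SECURITYADMIN;

CREATE OR REPLACE NETWORK POLICY ANALYST_POLICY
  ALLOWED_IP_LIST = ('10.1.1.20');

ALTER USER ANALYST_USER SET NETWORK_POLICY = 'ANALYST_POLICY';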

Question 4

Which security, governance, and data protection features require, at a MINIMUM, the Business Critical edition of Snowflake? (Choose two.)

Options:

A.

Extended Time Travel (up to 90 days)

B.

Customer-managed encryption keys through Tri-Secret Secure

C.

Periodic rekeying of encrypted data

D.

AWS, Azure, or Google Cloud private connectivity to Snowflake

E.

Federated authentication and SSO

Question 5

What considerations need to be taken when using database cloning as a tool for data lifecycle management in a development environment? (Select TWO).

Options:

A.

Any pipes in the source are not cloned.

B.

Any pipes in the source referring to internal stages are not cloned.

C.

Any pipes in the source referring to external stages are not cloned.

D.

The clone inherits all granted privileges of all child objects in the source object, including the database.

E.

The clone inherits all granted privileges of all child objects in the source object, excluding the database.

Question 6

A healthcare company is deploying a Snowflake account that may include Personal Health Information (PHI). The company must ensure compliance with all relevant privacy standards.

Which best practice recommendations will meet data protection and compliance requirements? (Choose three.)

Options:

A.

Use, at minimum, the Business Critical edition of Snowflake.

B.

Create Dynamic Data Masking policies and apply them to columns that contain PHI.

C.

Use the Internal Tokenization feature to obfuscate sensitive data.

D.

Use the External Tokenization feature to obfuscate sensitive data.

E.

Rewrite SQL queries to eliminate projections of PHI data based on current_role().

F.

Avoid sharing data with partner organizations.

Question 7

An Architect is integrating an application that needs to read and write data to Snowflake without installing any additional software on the application server.

How can this requirement be met?

Options:

A.

Use SnowSQL.

B.

Use the Snowpipe REST API.

C.

Use the Snowflake SQL REST API.

D.

Use the Snowflake ODBC driver.

Question 8

Which SQL alter command will MAXIMIZE memory and compute resources for a Snowpark stored procedure when executed on the snowpark_opt_wh warehouse?

[Answer choices A through D are ALTER statements shown as images.]

Options:

A.

Option A

B.

Option B

C.

Option C

D.

Option D

Question 9

An Architect entered the following commands in sequence:

[The sequence of commands is shown as an image.]

USER1 cannot find the table.

Which of the following commands does the Architect need to run for USER1 to find the tables using the Principle of Least Privilege? (Choose two.)

Options:

A.

GRANT ROLE PUBLIC TO ROLE INTERN;

B.

GRANT USAGE ON DATABASE SANDBOX TO ROLE INTERN;

C.

GRANT USAGE ON SCHEMA SANDBOX.PUBLIC TO ROLE INTERN;

D.

GRANT OWNERSHIP ON DATABASE SANDBOX TO USER INTERN;

E.

GRANT ALL PRIVILEGES ON DATABASE SANDBOX TO ROLE INTERN;
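
For context, the minimal grants a role needs to see and query a table follow this shape; the table name below is illustrative:

GRANT USAGE ON DATABASE SANDBOX TO ROLE INTERN;
GRANT USAGE ON SCHEMA SANDBOX.PUBLIC TO ROLE INTERN;
-- SELECT on the specific table (illustrative table name)
GRANT SELECT ON TABLE SANDBOX.PUBLIC.MY_TABLE TO ROLE INTERN;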

Question 10

A company has a source system that provides JSON records for various IoT operations. The JSON is loaded directly into a persistent table with a variant field. The data is quickly growing to 100s of millions of records and performance is becoming an issue. There is a generic access pattern that is used to filter on the create_date key within the variant field.

What can be done to improve performance?

Options:

A.

Alter the target table to include additional fields pulled from the JSON records. This would include a create_date field with a datatype of timestamp. When this field is used in the filter, partition pruning will occur.

B.

Alter the target table to include additional fields pulled from the JSON records. This would include a create_date field with a datatype of varchar. When this field is used in the filter, partition pruning will occur.

C.

Validate the size of the warehouse being used. If the record count is approaching 100s of millions, size XL will be the minimum size required to process this amount of data.

D.

Incorporate the use of multiple tables partitioned by date ranges. When a user or process needs to query a particular date range, ensure the appropriate base table is used.
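
As an illustration of materializing the key into a typed column, a sketch in which the table and column names (IOT_EVENTS, RAW_RECORD) are assumptions:

-- Assumed names: IOT_EVENTS table with a RAW_RECORD variant column
ALTER TABLE IOT_EVENTS ADD COLUMN CREATE_DATE TIMESTAMP_NTZ;

UPDATE IOT_EVENTS
  SET CREATE_DATE = TO_TIMESTAMP_NTZ(RAW_RECORD:create_date::STRING);

-- Filters on the typed column can take advantage of micro-partition pruning
SELECT COUNT(*) FROM IOT_EVENTS
WHERE CREATE_DATE >= '2024-01-01'::TIMESTAMP_NTZ;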

Question 11

A company has built a data pipeline using Snowpipe to ingest files from an Amazon S3 bucket. Snowpipe is configured to load data into staging database tables. Then a task runs to load the data from the staging database tables into the reporting database tables.

The company is satisfied with the availability of the data in the reporting database tables, but the reporting tables are not pruning effectively. Currently, a size 4X-Large virtual warehouse is being used to query all of the tables in the reporting database.

What step can be taken to improve the pruning of the reporting tables?

Options:

A.

Eliminate the use of Snowpipe and load the files into internal stages using PUT commands.

B.

Increase the size of the virtual warehouse to a size 5X-Large.

C.

Use an ORDER BY command to load the reporting tables.

D.

Create larger files for Snowpipe to ingest and ensure the staging frequency does not exceed 1 minute.
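
For reference, loading a reporting table in sorted order keeps related date ranges together in micro-partitions; a sketch with assumed table and column names:

INSERT INTO REPORTING.SALES_FACT
SELECT *
FROM STAGING.SALES_FACT_STG
ORDER BY SALE_DATE;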

Question 12

A company has an inbound share set up with eight tables and five secure views. The company plans to make the share part of its production data pipelines.

Which actions can the company take with the inbound share? (Choose two.)

Options:

A.

Clone a table from a share.

B.

Grant modify permissions on the share.

C.

Create a table from the shared database.

D.

Create additional views inside the shared database.

E.

Create a table stream on the shared table.

Question 13

A company is using Snowflake in Azure in the Netherlands. The company's analyst team also has data in JSON format, stored in an Amazon S3 bucket in the AWS Singapore region, that the team wants to analyze.

The Architect has been given the following requirements:

1. Provide access to frequently changing data

2. Keep egress costs to a minimum

3. Maintain low latency

How can these requirements be met with the LEAST amount of operational overhead?

Options:

A.

Use a materialized view on top of an external table against the S3 bucket in AWS Singapore.

B.

Use an external table against the S3 bucket in AWS Singapore and copy the data into transient tables.

C.

Copy the data between providers from S3 to Azure Blob storage to collocate, then use Snowpipe for data ingestion.

D.

Use AWS Transfer Family to replicate data between the S3 bucket in AWS Singapore and an Azure Netherlands Blob storage, then use an external table against the Blob storage.

Question 14

An Architect has been asked to clone schema STAGING as it looked one week ago, Tuesday June 1st at 8:00 AM, to recover some objects.

The STAGING schema has 50 days of retention.

The Architect runs the following statement:

CREATE SCHEMA STAGING_CLONE CLONE STAGING at (timestamp => '2021-06-01 08:00:00');

The Architect receives the following error: Time travel data is not available for schema STAGING. The requested time is either beyond the allowed time travel period or before the object creation time.

The Architect then checks the schema history and sees the following:

CREATED_ON|NAME|DROPPED_ON

2021-06-02 23:00:00 | STAGING | NULL

2021-05-01 10:00:00 | STAGING | 2021-06-02 23:00:00

How can cloning the STAGING schema be achieved?

Options:

A.

Undrop the STAGING schema and then rerun the CLONE statement.

B.

Modify the statement: CREATE SCHEMA STAGING_CLONE CLONE STAGING at (timestamp => '2021-05-01 10:00:00');

C.

Rename the STAGING schema and perform an UNDROP to retrieve the previous STAGING schema version, then run the CLONE statement.

D.

Cloning cannot be accomplished because the STAGING schema version was not active during the proposed Time Travel time period.
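
For reference, a minimal sketch of the rename/undrop/clone sequence referenced in the options, using the names from the question:

ALTER SCHEMA STAGING RENAME TO STAGING_CURRENT;

UNDROP SCHEMA STAGING;

CREATE SCHEMA STAGING_CLONE CLONE STAGING
  AT (TIMESTAMP => '2021-06-01 08:00:00'::TIMESTAMP_LTZ);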

Question 15

A media company needs a data pipeline that will ingest customer review data into a Snowflake table, and apply some transformations. The company also needs to use Amazon Comprehend to do sentiment analysis and make the de-identified final data set available publicly for advertising companies who use different cloud providers in different regions.

The data pipeline needs to run continuously and efficiently as new records arrive in the object storage, leveraging event notifications. Also, the operational complexity, maintenance of the infrastructure (including platform upgrades and security), and the development effort should be minimal.

Which design will meet these requirements?

Options:

A.

Ingest the data using COPY INTO and use streams and tasks to orchestrate transformations. Export the data into Amazon S3 to do model inference with Amazon Comprehend and ingest the data back into a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.

B.

Ingest the data using Snowpipe and use streams and tasks to orchestrate transformations. Create an external function to do model inference with Amazon Comprehend and write the final records to a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.

C.

Ingest the data into Snowflake using Amazon EMR and PySpark using the Snowflake Spark connector. Apply transformations using another Spark job. Develop a python program to do model inference by leveraging the Amazon Comprehend text analysis API. Then write the results to a Snowflake table and create a listing in the Snowflake Marketplace to make the data available to other companies.

D.

Ingest the data using Snowpipe and use streams and tasks to orchestrate transformations. Export the data into Amazon S3 to do model inference with Amazon Comprehend and ingest the data back into a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.

Question 16

Files arrive in an external stage every 10 seconds from a proprietary system. The files range in size from 500 KB to 3 MB. The data must be accessible by dashboards as soon as it arrives.

How can a Snowflake Architect meet this requirement with the LEAST amount of coding? (Choose two.)

Options:

A.

Use Snowpipe with auto-ingest.

B.

Use a COPY command with a task.

C.

Use a materialized view on an external table.

D.

Use the COPY INTO command.

E.

Use a combination of a task and a stream.
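
For context, a Snowpipe with auto-ingest is declared roughly as follows; all object names are illustrative:

CREATE OR REPLACE PIPE ANALYTICS.RAW.EVENTS_PIPE
  AUTO_INGEST = TRUE
AS
  COPY INTO ANALYTICS.RAW.EVENTS
  FROM @ANALYTICS.RAW.EVENTS_STAGE
  FILE_FORMAT = (TYPE = JSON);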

Question 17

Which technique will efficiently ingest and consume semi-structured data for Snowflake data lake workloads?

Options:

A.

IDEF1X

B.

Schema-on-write

C.

Schema-on-read

D.

Information schema

Question 18

What integration object should be used to place restrictions on where data may be exported?

Options:

A.

Stage integration

B.

Security integration

C.

Storage integration

D.

API integration

Question 19

When activating Tri-Secret Secure in a hierarchical encryption model in a Snowflake account, at what level is the customer-managed key used?

[The encryption key hierarchy is shown as an image.]

Options:

A.

At the root level (HSM)

B.

At the account level (AMK)

C.

At the table level (TMK)

D.

At the micro-partition level

Question 20

What is a characteristic of Role-Based Access Control (RBAC) as used in Snowflake?

Options:

A.

Privileges can be granted at the database level and can be inherited by all underlying objects.

B.

A user can use a "super-user" access along with securityadmin to bypass authorization checks and access all databases, schemas, and underlying objects.

C.

A user can create managed access schemas to support future grants and ensure only schema owners can grant privileges to other roles.

D.

A user can create managed access schemas to support current and future grants and ensure only object owners can grant privileges to other roles.

Question 21

An Architect clones a database and all of its objects, including tasks. After the cloning, the tasks stop running.

Why is this occurring?

Options:

A.

Tasks cannot be cloned.

B.

The objects that the tasks reference are not fully qualified.

C.

Cloned tasks are suspended by default and must be manually resumed.

D.

The Architect has insufficient privileges to alter tasks on the cloned database.
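
For reference, resuming tasks in a cloned database can be done per task or for a whole task tree; a sketch with illustrative names:

-- Resume a single task
ALTER TASK DEV_CLONE.ETL.LOAD_ORDERS_TASK RESUME;

-- Or resume a root task and all of its dependents
SELECT SYSTEM$TASK_DEPENDENTS_ENABLE('DEV_CLONE.ETL.ROOT_TASK');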

Question 22

Which Snowflake objects can be used in a data share? (Select TWO).

Options:

A.

Standard view

B.

Secure view

C.

Stored procedure

D.

External table

E.

Stream

Question 23

What does a Snowflake Architect need to consider when implementing a Snowflake Connector for Kafka?

Options:

A.

Every Kafka message is in JSON or Avro format.

B.

The default retention time for Kafka topics is 14 days.

C.

The Kafka connector supports key pair authentication, OAuth, and basic authentication (for example, username and password).

D.

The Kafka connector will create one table and one pipe to ingest data for each topic. If the connector cannot create the table or the pipe it will result in an exception.

Question 24

An Architect needs to meet a company requirement to ingest files from the company's AWS storage accounts into the company's Snowflake Google Cloud Platform (GCP) account. How can the ingestion of these files into the company's Snowflake account be initiated? (Select TWO).

Options:

A.

Configure the client application to call the Snowpipe REST endpoint when new files have arrived in Amazon S3 storage.

B.

Configure the client application to call the Snowpipe REST endpoint when new files have arrived in Amazon S3 Glacier storage.

C.

Create an AWS Lambda function to call the Snowpipe REST endpoint when new files have arrived in Amazon S3 storage.

D.

Configure AWS Simple Notification Service (SNS) to notify Snowpipe when new files have arrived in Amazon S3 storage.

E.

Configure the client application to issue a COPY INTO command to Snowflake when new files have arrived in Amazon S3 Glacier storage.

Question 25

    A new table and streams are created with the following commands:

    CREATE OR REPLACE TABLE LETTERS (ID INT, LETTER STRING) ;

    CREATE OR REPLACE STREAM STREAM_1 ON TABLE LETTERS;

    CREATE OR REPLACE STREAM STREAM_2 ON TABLE LETTERS APPEND_ONLY = TRUE;

    The following operations are processed on the newly created table:

    INSERT INTO LETTERS VALUES (1, 'A');

    INSERT INTO LETTERS VALUES (2, 'B');

    INSERT INTO LETTERS VALUES (3, 'C');

    TRUNCATE TABLE LETTERS;

    INSERT INTO LETTERS VALUES (4, 'D');

    INSERT INTO LETTERS VALUES (5, 'E');

    INSERT INTO LETTERS VALUES (6, 'F');

    DELETE FROM LETTERS WHERE ID = 6;

    What would be the output of the following SQL commands, in order?

    SELECT COUNT (*) FROM STREAM_1;

    SELECT COUNT (*) FROM STREAM_2;

    Options:

    A.

    2 & 6

    B.

    2 & 3

    C.

    4 & 3

    D.

    4 & 6
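
To reason about this, the change records currently held by each stream can be inspected directly; for example:

SELECT METADATA$ACTION, METADATA$ISUPDATE, ID, LETTER FROM STREAM_1;
SELECT METADATA$ACTION, METADATA$ISUPDATE, ID, LETTER FROM STREAM_2;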

    Question 26

    What are some of the characteristics of result set caches? (Choose three.)

    Options:

    A.

    Time Travel queries can be executed against the result set cache.

    B.

    Snowflake persists the data results for 24 hours.

    C.

    Each time persisted results for a query are used, a 24-hour retention period is reset.

    D.

    The data stored in the result cache will contribute to storage costs.

    E.

    The retention period can be reset for a maximum of 31 days.

    F.

    The result set cache is not shared between warehouses.

    Question 27

    Two queries are run on the customer_address table:

create or replace TABLE CUSTOMER_ADDRESS ( CA_ADDRESS_SK NUMBER(38,0), CA_ADDRESS_ID VARCHAR(16), CA_STREET_NUMBER VARCHAR(10), CA_STREET_NAME VARCHAR(60), CA_STREET_TYPE VARCHAR(15), CA_SUITE_NUMBER VARCHAR(10), CA_CITY VARCHAR(60), CA_COUNTY VARCHAR(30), CA_STATE VARCHAR(2), CA_ZIP VARCHAR(10), CA_COUNTRY VARCHAR(20), CA_GMT_OFFSET NUMBER(5,2), CA_LOCATION_TYPE VARCHAR(20) );

ALTER TABLE DEMO_DB.DEMO_SCH.CUSTOMER_ADDRESS ADD SEARCH OPTIMIZATION ON SUBSTRING(CA_ADDRESS_ID);

    Which queries will benefit from the use of the search optimization service? (Select TWO).

    Options:

    A.

    select * from DEMO_DB.DEMO_SCH.CUSTOMER_ADDRESS Where substring(CA_ADDRESS_ID,1,8)= substring('AAAAAAAAPHPPLBAAASKDJHASLKDJHASKJD',1,8);

    B.

    select * from DEMO_DB.DEMO_SCH.CUSTOMER_ADDRESS Where CA_ADDRESS_ID= substring('AAAAAAAAPHPPLBAAASKDJHASLKDJHASKJD',1,16);

    C.

select * from DEMO_DB.DEMO_SCH.CUSTOMER_ADDRESS Where CA_ADDRESS_ID LIKE '%BAAASKD%';

    D.

select * from DEMO_DB.DEMO_SCH.CUSTOMER_ADDRESS Where CA_ADDRESS_ID LIKE '%PHPP%';

    E.

select * from DEMO_DB.DEMO_SCH.CUSTOMER_ADDRESS Where CA_ADDRESS_ID NOT LIKE '%AAAAAAAAPHPPL%';

    Question 28

    What is a characteristic of event notifications in Snowpipe?

    Options:

    A.

The load history is stored in the metadata of the target table.

    B.

    Notifications identify the cloud storage event and the actual data in the files.

    C.

Snowflake can process all older notifications when a paused pipe is resumed.

    D.

When a pipe is paused, event messages received for the pipe enter a limited retention period.

    Question 29

    A company has an external vendor who puts data into Google Cloud Storage. The company's Snowflake account is set up in Azure.

    What would be the MOST efficient way to load data from the vendor into Snowflake?

    Options:

    A.

    Ask the vendor to create a Snowflake account, load the data into Snowflake and create a data share.

    B.

    Create an external stage on Google Cloud Storage and use the external table to load the data into Snowflake.

    C.

    Copy the data from Google Cloud Storage to Azure Blob storage using external tools and load data from Blob storage to Snowflake.

    D.

    Create a Snowflake Account in the Google Cloud Platform (GCP), ingest data into this account and use data replication to move the data from GCP to Azure.

    Question 30

    A company is designing high availability and disaster recovery plans and needs to maximize redundancy and minimize recovery time objectives for their critical application processes. Cost is not a concern as long as the solution is the best available. The plan so far consists of the following steps:

    1. Deployment of Snowflake accounts on two different cloud providers.

    2. Selection of cloud provider regions that are geographically far apart.

    3. The Snowflake deployment will replicate the databases and account data between both cloud provider accounts.

    4. Implementation of Snowflake client redirect.

    What is the MOST cost-effective way to provide the HIGHEST uptime and LEAST application disruption if there is a service event?

    Options:

    A.

    Connect the applications using the - URL. Use the Business Critical Snowflake edition.

    B.

    Connect the applications using the - URL. Use the Virtual Private Snowflake (VPS) edition.

    C.

    Connect the applications using the - URL. Use the Enterprise Snowflake edition.

    D.

    Connect the applications using the - URL. Use the Business Critical Snowflake edition.

    Question 31

    How does a standard virtual warehouse policy work in Snowflake?

    Options:

    A.

    It conserves credits by keeping running clusters fully loaded rather than starting additional clusters.

    B.

    It starts only if the system estimates that there is a query load that will keep the cluster busy for at least 6 minutes.

    C.

It starts only if the system estimates that there is a query load that will keep the cluster busy for at least 2 minutes.

    D.

    It prevents or minimizes queuing by starting additional clusters instead of conserving credits.

    Question 32

    Which of the following ingestion methods can be used to load near real-time data by using the messaging services provided by a cloud provider?

    Options:

    A.

    Snowflake Connector for Kafka

    B.

    Snowflake streams

    C.

    Snowpipe

    D.

    Spark

    Question 33

    An Architect runs the following SQL query:

[The query is shown as an image.]

    How can this query be interpreted?

    Options:

    A.

FILEROWS is a stage. FILE_ROW_NUMBER is the line number in the file.

    B.

    FILEROWS is the table. FILE_ROW_NUMBER is the line number in the table.

    C.

    FILEROWS is a file. FILE_ROW_NUMBER is the file format location.

    D.

FILEROWS is the file format location. FILE_ROW_NUMBER is a stage.

    Question 34

    Database DB1 has schema S1 which has one table, T1.

    DB1 --> S1 --> T1

The retention period of DB1 is set to 10 days.

The retention period of S1 is set to 20 days.

The retention period of T1 is set to 30 days.

    The user runs the following command:

    Drop Database DB1;

    What will the Time Travel retention period be for T1?

    Options:

    A.

    10 days

    B.

    20 days

    C.

    30 days

    D.

    37 days
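
For reference, the retention periods described above would be configured as follows, and the effective value for the table can be checked with SHOW PARAMETERS:

ALTER DATABASE DB1 SET DATA_RETENTION_TIME_IN_DAYS = 10;
ALTER SCHEMA DB1.S1 SET DATA_RETENTION_TIME_IN_DAYS = 20;
ALTER TABLE DB1.S1.T1 SET DATA_RETENTION_TIME_IN_DAYS = 30;

SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN TABLE DB1.S1.T1;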

    Question 35

    A Data Engineer is designing a near real-time ingestion pipeline for a retail company to ingest event logs into Snowflake to derive insights. A Snowflake Architect is asked to define security best practices to configure access control privileges for the data load for auto-ingest to Snowpipe.

    What are the MINIMUM object privileges required for the Snowpipe user to execute Snowpipe?

    Options:

    A.

    OWNERSHIP on the named pipe, USAGE on the named stage, target database, and schema, and INSERT and SELECT on the target table

    B.

OWNERSHIP on the named pipe, USAGE and READ on the named stage, USAGE on the target database and schema, and INSERT and SELECT on the target table

    C.

CREATE on the named pipe, USAGE and READ on the named stage, USAGE on the target database and schema, and INSERT and SELECT on the target table

    D.

    USAGE on the named pipe, named stage, target database, and schema, and INSERT and SELECT on the target table
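
For context, a sketch of how such privileges are granted to a dedicated Snowpipe role; the role, user, database, and object names are assumptions:

CREATE ROLE IF NOT EXISTS SNOWPIPE_ROLE;

GRANT USAGE ON DATABASE INGEST_DB TO ROLE SNOWPIPE_ROLE;
GRANT USAGE ON SCHEMA INGEST_DB.RAW TO ROLE SNOWPIPE_ROLE;
GRANT USAGE ON STAGE INGEST_DB.RAW.S3_EVENTS_STAGE TO ROLE SNOWPIPE_ROLE;  -- READ instead for an internal stage
GRANT INSERT, SELECT ON TABLE INGEST_DB.RAW.EVENTS TO ROLE SNOWPIPE_ROLE;
GRANT OWNERSHIP ON PIPE INGEST_DB.RAW.EVENTS_PIPE TO ROLE SNOWPIPE_ROLE COPY CURRENT GRANTS;

GRANT ROLE SNOWPIPE_ROLE TO USER SNOWPIPE_SVC_USER;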

    Question 36

    An Architect on a new project has been asked to design an architecture that meets Snowflake security, compliance, and governance requirements as follows:

    1) Use Tri-Secret Secure in Snowflake

    2) Share some information stored in a view with another Snowflake customer

    3) Hide portions of sensitive information from some columns

    4) Use zero-copy cloning to refresh the non-production environment from the production environment

    To meet these requirements, which design elements must be implemented? (Choose three.)

    Options:

    A.

    Define row access policies.

    B.

    Use the Business-Critical edition of Snowflake.

    C.

    Create a secure view.

    D.

    Use the Enterprise edition of Snowflake.

    E.

    Use Dynamic Data Masking.

    F.

    Create a materialized view.

    Question 37

    A company has a Snowflake environment running in AWS us-west-2 (Oregon). The company needs to share data privately with a customer who is running their Snowflake environment in Azure East US 2 (Virginia).

    What is the recommended sequence of operations that must be followed to meet this requirement?

    Options:

    A.

    1. Create a share and add the database privileges to the share

    2. Create a new listing on the Snowflake Marketplace

    3. Alter the listing and add the share

    4. Instruct the customer to subscribe to the listing on the Snowflake Marketplace

    B.

    1. Ask the customer to create a new Snowflake account in Azure EAST US 2 (Virginia)

    2. Create a share and add the database privileges to the share

    3. Alter the share and add the customer's Snowflake account to the share

    C.

    1. Create a new Snowflake account in Azure East US 2 (Virginia)

    2. Set up replication between AWS us-west-2 (Oregon) and Azure East US 2 (Virginia) for the database objects to be shared

    3. Create a share and add the database privileges to the share

    4. Alter the share and add the customer's Snowflake account to the share

    D.

    1. Create a reader account in Azure East US 2 (Virginia)

    2. Create a share and add the database privileges to the share

    3. Add the reader account to the share

    4. Share the reader account's URL and credentials with the customer

    Question 38

When using the COPY INTO command with the CSV file format, how does the match_by_column_name parameter behave?

    Options:

    A.

    It expects a header to be present in the CSV file, which is matched to a case-sensitive table column name.

    B.

    The parameter will be ignored.

    C.

    The command will return an error.

    D.

    The command will return a warning stating that the file has unmatched columns.
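
For reference, the copy option is normally used with semi-structured formats; a sketch of the syntax shape with illustrative object names and a Parquet source:

COPY INTO ANALYTICS.RAW.TARGET_TBL
FROM @ANALYTICS.RAW.MY_STAGE
FILE_FORMAT = (TYPE = PARQUET)
MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;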

Question 39

    A user has the appropriate privilege to see unmasked data in a column.

    If the user loads this column data into another column that does not have a masking policy, what will occur?

    Options:

    A.

    Unmasked data will be loaded in the new column.

    B.

    Masked data will be loaded into the new column.

    C.

    Unmasked data will be loaded into the new column but only users with the appropriate privileges will be able to see the unmasked data.

    D.

    Unmasked data will be loaded into the new column and no users will be able to see the unmasked data.
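
For context, a sketch of a masking policy applied to a column, followed by a copy into a column with no policy; the policy, table, column, and role names are assumptions:

CREATE OR REPLACE MASKING POLICY SSN_MASK AS (VAL STRING) RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() IN ('PII_ADMIN') THEN VAL ELSE '***MASKED***' END;

ALTER TABLE CUSTOMERS MODIFY COLUMN SSN SET MASKING POLICY SSN_MASK;

-- A user with the appropriate privilege copies the column into an unprotected column
CREATE TABLE CUSTOMERS_COPY AS SELECT SSN FROM CUSTOMERS;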

    Question 40

    How can the Snowpipe REST API be used to keep a log of data load history?

    Options:

    A.

    Call insertReport every 20 minutes, fetching the last 10,000 entries.

    B.

    Call loadHistoryScan every minute for the maximum time range.

    C.

    Call insertReport every 8 minutes for a 10-minute time range.

    D.

    Call loadHistoryScan every 10 minutes for a 15-minutes range.

    Question 41

An Architect is designing a data lake with Snowflake. The company has structured, semi-structured, and unstructured data. The company wants to save the data inside the data lake within the Snowflake system. The company is planning on sharing data among its corporate branches using Snowflake data sharing.

    What should be considered when sharing the unstructured data within Snowflake?

    Options:

    A.

    A pre-signed URL should be used to save the unstructured data into Snowflake in order to share data over secure views, with no time limit for the URL.

    B.

    A scoped URL should be used to save the unstructured data into Snowflake in order to share data over secure views, with a 24-hour time limit for the URL.

    C.

    A file URL should be used to save the unstructured data into Snowflake in order to share data over secure views, with a 7-day time limit for the URL.

    D.

    A file URL should be used to save the unstructured data into Snowflake in order to share data over secure views, with the "expiration_time" argument defined for the URL time limit.

    Question 42

A Developer is having a performance issue with a Snowflake query. The query receives up to 10 different values for one parameter and then performs an aggregation over the majority of a fact table. It then joins against a smaller dimension table. This parameter value is selected by the different query users when they execute it during business hours. Both the fact and dimension tables are loaded with new data in an overnight import process.

On a Small or Medium-sized virtual warehouse, the query performs slowly. Performance is acceptable on a size Large or bigger warehouse. However, there is no budget to increase costs. The Developer needs a recommendation that does not increase compute costs to run this query.

    What should the Architect recommend?

    Options:

    A.

    Create a task that will run the 10 different variations of the query corresponding to the 10 different parameters before the users come in to work. The query results will then be cached and ready to respond quickly when the users re-issue the query.

    B.

    Create a task that will run the 10 different variations of the query corresponding to the 10 different parameters before the users come in to work. The task will be scheduled to align with the users' working hours in order to allow the warehouse cache to be used.

    C.

    Enable the search optimization service on the table. When the users execute the query, the search optimization service will automatically adjust the query execution plan based on the frequently-used parameters.

    D.

    Create a dedicated size Large warehouse for this particular set of queries. Create a new role that has USAGE permission on this warehouse and has the appropriate read permissions over the fact and dimension tables. Have users switch to this role and use this warehouse when they want to access this data.

    Question 43

    A DevOps team has a requirement for recovery of staging tables used in a complex set of data pipelines. The staging tables are all located in the same staging schema. One of the requirements is to have online recovery of data on a rolling 7-day basis.

    After setting up the DATA_RETENTION_TIME_IN_DAYS at the database level, certain tables remain unrecoverable past 1 day.

    What would cause this to occur? (Choose two.)

    Options:

    A.

The staging schema has not been set up for MANAGED ACCESS.

    B.

    The DATA_RETENTION_TIME_IN_DAYS for the staging schema has been set to 1 day.

    C.

    The tables exceed the 1 TB limit for data recovery.

    D.

    The staging tables are of the TRANSIENT type.

    E.

    The DevOps role should be granted ALLOW_RECOVERY privilege on the staging schema.
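
For reference, a sketch of the relevant settings; the database, schema, and table names are illustrative:

ALTER DATABASE PIPELINE_DB SET DATA_RETENTION_TIME_IN_DAYS = 7;

-- A transient table supports at most 1 day of Time Travel, regardless of the database setting
CREATE TRANSIENT TABLE PIPELINE_DB.STAGING.EVENTS_STG (ID INT, PAYLOAD VARIANT);

SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN SCHEMA PIPELINE_DB.STAGING;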

    Question 44

    In a managed access schema, what are characteristics of the roles that can manage object privileges? (Select TWO).

    Options:

    A.

    Users with the SYSADMIN role can grant object privileges in a managed access schema.

    B.

    Users with the SECURITYADMIN role or higher, can grant object privileges in a managed access schema.

    C.

    Users who are database owners can grant object privileges in a managed access schema.

    D.

    Users who are schema owners can grant object privileges in a managed access schema.

    E.

    Users who are object owners can grant object privileges in a managed access schema.
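
For context, a managed access schema is created as follows, and grants in it are issued by the schema owner or a role with the MANAGE GRANTS privilege rather than by individual object owners; the names are illustrative:

CREATE SCHEMA EDW.GOVERNED WITH MANAGED ACCESS;

-- Issued by the schema owner or a role with MANAGE GRANTS (for example, SECURITYADMIN)
GRANT SELECT ON TABLE EDW.GOVERNED.ORDERS TO ROLE ANALYST_ROLE;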

    Question 45

    A Snowflake Architect created a new data share and would like to verify that only specific records in secure views are visible within the data share by the consumers.

    What is the recommended way to validate data accessibility by the consumers?

    Options:

    A.

    Create reader accounts as shown below and impersonate the consumers by logging in with their credentials.

create managed account reader_acct1 admin_name = user1, admin_password = 'Sdfed43da!44T', type = reader;

    B.

    Create a row access policy as shown below and assign it to the data share.

create or replace row access policy rap_acct as (acct_id varchar) returns boolean -> case when 'acct1_role' = current_role() then true else false end;

    C.

Set the session parameter called SIMULATED_DATA_SHARING_CONSUMER as shown below in order to impersonate the consumer accounts.

alter session set simulated_data_sharing_consumer = 'Consumer Acct1';

    D.

    Alter the share settings as shown below, in order to impersonate a specific consumer account.

alter share sales_share set accounts = 'Consumer1' share_restrictions = true;

    Question 46

A Snowflake Architect is working with Data Modelers and Table Designers to draft an ELT framework specifically for data loading using Snowpipe. The Table Designers will add a timestamp column that inserts the current timestamp as the default value as records are loaded into a table. The intent is to capture the time when each record gets loaded into the table; however, when tested, the timestamps are earlier than the load_time column values returned by the copy_history function or the COPY_HISTORY view (Account Usage).

Why is this occurring?

    Options:

    A.

    The timestamps are different because there are parameter setup mismatches. The parameters need to be realigned

    B.

    The Snowflake timezone parameter Is different from the cloud provider's parameters causing the mismatch.

    C.

    The Table Designer team has not used the localtimestamp or systimestamp functions in the Snowflake copy statement.

    D.

The CURRENT_TIME is evaluated when the load operation is compiled in cloud services rather than when the record is inserted into the table.
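
For reference, a sketch of the pattern described in the question: a default timestamp column on the target table, compared against the load history returned by the COPY_HISTORY table function (object names are illustrative):

CREATE OR REPLACE TABLE ANALYTICS.RAW.EVENTS (
  PAYLOAD VARIANT,
  LOAD_TS TIMESTAMP_LTZ DEFAULT CURRENT_TIMESTAMP()
);

SELECT FILE_NAME, LAST_LOAD_TIME
FROM TABLE(INFORMATION_SCHEMA.COPY_HISTORY(
  TABLE_NAME => 'ANALYTICS.RAW.EVENTS',
  START_TIME => DATEADD(HOUR, -24, CURRENT_TIMESTAMP())));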

    Question 47

A user named USER_01 needs access to create a materialized view on the schema EDW.STG_SCHEMA. How can this access be provided?

    Options:

    A.

    GRANT CREATE MATERIALIZED VIEW ON SCHEMA EDW.STG_SCHEMA TO USER USER_01;

    B.

GRANT CREATE MATERIALIZED VIEW ON DATABASE EDW TO USER USER_01;

    C.

    GRANT ROLE NEW_ROLE TO USER USER_01;

GRANT CREATE MATERIALIZED VIEW ON SCHEMA EDW.STG_SCHEMA TO NEW_ROLE;

    D.

    GRANT ROLE NEW_ROLE TO USER_01;

    GRANT CREATE MATERIALIZED VIEW ON EDW.STG_SCHEMA TO NEW_ROLE;

    Question 48

Assuming all Snowflake accounts are using an Enterprise edition or higher, in which development and testing scenarios would copying of data be required, and zero-copy cloning not be suitable? (Select TWO).

    Options:

    A.

    Developers create their own datasets to work against transformed versions of the live data.

    B.

    Production and development run in different databases in the same account, and Developers need to see production-like data but with specific columns masked.

    C.

    Data is in a production Snowflake account that needs to be provided to Developers in a separate development/testing Snowflake account in the same cloud region.

    D.

    Developers create their own copies of a standard test database previously created for them in the development account, for their initial development and unit testing.

    E.

    The release process requires pre-production testing of changes with data of production scale and complexity. For security reasons, pre-production also runs in the production account.
