SnowPro Core Certification Exam Questions and Answers
How does Snowflake describe its unique architecture?
Options:
A multi-cluster shared data architecture using a central data repository and massively parallel processing (MPP)
A multi-cluster shared nothing architecture using a siloed data repository and massively parallel processing (MPP)
A single-cluster shared nothing architecture using a sliced data repository and symmetric multiprocessing (SMP)
A multi-cluster shared nothing architecture using a siloed data repository and symmetric multiprocessing (SMP)
Answer:
A
Explanation:
Snowflake's unique architecture is described as a multi-cluster, shared data architecture that leverages massively parallel processing (MPP). This architecture separates compute and storage resources, enabling Snowflake to scale them independently. It does not use a single cluster or rely solely on symmetric multiprocessing (SMP); rather, it uses a combination of shared-nothing architecture for compute clusters (virtual warehouses) and a centralized storage layer for data, optimizing for both performance and scalability.
References:
- Snowflake Documentation: Snowflake Architecture Overview
What happens when a network policy includes values that appear in both the allowed and blocked IP address list?
Options:
Those IP addresses are allowed access to the Snowflake account as Snowflake applies the allowed IP address list first.
Those IP addresses are denied access to the Snowflake account as Snowflake applies the blocked IP address list first.
Snowflake issues an alert message and adds the duplicate IP address values to both the allowed and blocked IP address lists.
Snowflake issues an error message and adds the duplicate IP address values to both the allowed and blocked IP address lists.
Answer:
B
Explanation:
In Snowflake, when setting up a network policy that specifies both allowed and blocked IP address lists, if an IP address appears in both lists, access from that IP address will be denied. The reason is that Snowflake prioritizes security, and the presence of an IP address in the blocked list indicates it should not be allowed regardless of its presence in the allowed list. This ensures that access controls remain stringent and that any potentially unsafe IP addresses are not inadvertently permitted access.
References:
- Snowflake Documentation: Network Policies
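As a sketch of this behavior, a network policy with overlapping lists might look like the following (the policy name and IP ranges are illustrative, not from the source):

```sql
-- Hypothetical policy: 192.168.1.99 falls inside the allowed range but also
-- appears in the blocked list. Because the blocked list is applied first,
-- connections from that address are denied.
CREATE NETWORK POLICY corp_policy
  ALLOWED_IP_LIST = ('192.168.1.0/24')
  BLOCKED_IP_LIST = ('192.168.1.99');

-- Activate the policy at the account level (requires appropriate privileges).
ALTER ACCOUNT SET NETWORK_POLICY = corp_policy;
```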
Which Snowflake mechanism is used to limit the number of micro-partitions scanned by a query?
Options:
Caching
Cluster depth
Query pruning
Retrieval optimization
Answer:
C
Explanation:
Query pruning in Snowflake is the mechanism used to limit the number of micro-partitions scanned by a query. By analyzing the filters and conditions applied in a query, Snowflake can skip over micro-partitions that do not contain relevant data, thereby reducing the amount of data processed and improving query performance. This technique is particularly effective for large datasets and is a key component of Snowflake's performance optimization features.
References:
- Snowflake Documentation: Query Performance Optimization
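Pruning happens automatically, but its effect can be observed in the Query Profile. A minimal sketch, with hypothetical table and column names:

```sql
-- Snowflake keeps min/max metadata for each column in every micro-partition,
-- so a selective filter lets the engine skip partitions whose value ranges
-- cannot possibly match.
SELECT order_id, amount
FROM orders
WHERE order_date = '2023-06-01';

-- In the Query Profile, compare "Partitions scanned" against
-- "Partitions total" to see how much pruning occurred.
```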
How does Snowflake reorganize data when it is loaded? (Select TWO).
Options:
Binary format
Columnar format
Compressed format
Raw format
Zipped format
Answer:
B, C
Explanation:
When data is loaded into Snowflake, it undergoes a reorganization process where the data is stored in a columnar format and compressed. The columnar storage format enables efficient querying and data retrieval, as it allows for reading only the necessary columns for a query, thereby reducing IO operations. Additionally, Snowflake uses advanced compression techniques to minimize storage costs and improve performance. This combination of columnar storage and compression is key to Snowflake's data warehousing capabilities.
References:
- Snowflake Documentation: Data Storage and Organization
By default, which role has access to the SYSTEM$GLOBAL_ACCOUNT_SET_PARAMETER function?
Options:
ACCOUNTADMIN
SECURITYADMIN
SYSADMIN
ORGADMIN
Answer:
A
Explanation:
By default, the ACCOUNTADMIN role in Snowflake has access to the SYSTEM$GLOBAL_ACCOUNT_SET_PARAMETER function. This function is used to set global account parameters, impacting the entire Snowflake account's configuration and behavior. The ACCOUNTADMIN role is the highest-level administrative role in Snowflake, granting the necessary privileges to manage account settings and security features, including the use of global account parameters.
References:
- Snowflake Documentation: SYSTEM$GLOBAL_ACCOUNT_SET_PARAMETER
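An illustrative call is shown below; the account locator and parameter value are placeholders, and this function is typically used when enabling account replication:

```sql
-- Hypothetical example, run while using the ACCOUNTADMIN role.
SELECT SYSTEM$GLOBAL_ACCOUNT_SET_PARAMETER(
  'MYACCOUNT',                              -- placeholder account locator
  'ENABLE_ACCOUNT_DATABASE_REPLICATION',
  'true');
```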
How does a Snowflake user extract the URL of a directory table on an external stage for further transformation?
Options:
Use the SHOW STAGES command.
Use the DESCRIBE STAGE command.
Use the GET_ABSOLUTE_PATH function.
Use the GET_STAGE_LOCATION function.
Answer:
C
Explanation:
To extract the URL of a directory table on an external stage for further transformation in Snowflake, the GET_ABSOLUTE_PATH function can be used. This function returns the full path of a file or directory within a specified stage, enabling users to dynamically construct URLs for accessing or processing data stored in external stages.
References:
- Snowflake Documentation: Working with Stages
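A sketch of this pattern, assuming a hypothetical external stage with a directory table enabled:

```sql
-- Hypothetical stage; the URL is a placeholder.
CREATE STAGE my_ext_stage
  URL = 's3://my-bucket/data/'
  DIRECTORY = (ENABLE = TRUE);

-- Combine the stage location with each file's relative path to get a
-- full path for downstream transformation.
SELECT GET_ABSOLUTE_PATH(@my_ext_stage, relative_path) AS full_path
FROM DIRECTORY(@my_ext_stage);
```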
What is the only supported character set for loading and unloading data from all supported file formats?
Options:
UTF-8
UTF-16
ISO-8859-1
WINDOWS-1253
Answer:
A
Explanation:
UTF-8 is the only supported character set for loading and unloading data from all supported file formats in Snowflake. UTF-8 is a widely used encoding that supports a large range of characters from various languages, making it suitable for internationalization and ensuring data compatibility across different systems and platforms.
References:
- Snowflake Documentation: Data Loading and Unloading
Which Snowflake data type is used to store JSON key value pairs?
Options:
TEXT
BINARY
STRING
VARIANT
Answer:
D
Explanation:
The VARIANT data type in Snowflake is used to store JSON key-value pairs along with other semi-structured data formats like AVRO, BSON, and XML. The VARIANT data type allows for flexible and dynamic data structures within a single column, accommodating complex and nested data. This data type is crucial for handling semi-structured data in Snowflake, enabling users to perform SQL operations on JSON objects and arrays directly.
References:
- Snowflake Documentation: Semi-structured Data Types
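A minimal sketch of storing and querying JSON in a VARIANT column (table and key names are hypothetical):

```sql
CREATE TABLE events (payload VARIANT);

-- PARSE_JSON converts a JSON string into a VARIANT value.
INSERT INTO events
  SELECT PARSE_JSON('{"user": "alice", "action": "login", "tags": ["web", "mobile"]}');

-- Traverse keys with the : operator, array elements with [], and cast with ::
SELECT payload:user::STRING    AS user_name,
       payload:tags[0]::STRING AS first_tag
FROM events;
```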
What does the TableScan operator represent in the Query Profile?
Options:
The access to a single table
The access to data stored in stage objects
The list of values provided with the VALUES clause
The records generated using the TABLE (GENERATOR (...)) construct
Answer:
A
Explanation:
In the Query Profile of Snowflake, the TableScan operator represents the access to a single table. This operator indicates that the query execution involved reading data from a table stored in Snowflake. TableScan is a fundamental operation in query execution plans, showing how the database engine retrieves data directly from tables as part of processing a query.
References:
- Snowflake Documentation: Understanding the Query Profile
What should be used when creating a CSV file format where the columns are wrapped by single quotes or double quotes?
Options:
BINARY_FORMAT
ESCAPE_UNENCLOSED_FIELD
FIELD_OPTIONALLY_ENCLOSED_BY
SKIP_BYTE_ORDER_MARK
Answer:
C
Explanation:
When creating a CSV file format in Snowflake and the requirement is to wrap columns by single quotes or double quotes, the FIELD_OPTIONALLY_ENCLOSED_BY parameter should be used in the file format specification. This parameter allows you to define a character (either a single quote or a double quote) that can optionally enclose each field in the CSV file, providing flexibility in handling fields that contain special characters or delimiters as part of their data.
References:
- Snowflake Documentation: CSV File Format
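A sketch of such a file format definition (the format name and other options are illustrative):

```sql
-- Fields in the incoming CSV may optionally be wrapped in double quotes;
-- to use single quotes instead, specify '''' as the enclosing character.
CREATE FILE FORMAT my_csv_format
  TYPE = 'CSV'
  FIELD_DELIMITER = ','
  FIELD_OPTIONALLY_ENCLOSED_BY = '"'
  SKIP_HEADER = 1;
```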
Which use case does the search optimization service support?
Options:
Disjuncts (OR) in join predicates
LIKE/ILIKE/RLIKE join predicates
Join predicates on VARIANT columns
Conjunctions (AND) of multiple equality predicates
Answer:
D
Explanation:
The search optimization service in Snowflake supports use cases involving conjunctions (AND) of multiple equality predicates. This service enhances the performance of queries that include multiple equality conditions by utilizing search indexes to quickly filter data without scanning entire tables or partitions. It's particularly beneficial for improving the response times of complex queries that rely on specific data matching across multiple conditions.
References:
- Snowflake Documentation: Search Optimization Service
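An illustrative setup (table and column names are hypothetical): once the service is enabled on a table, queries combining several equality predicates with AND can benefit.

```sql
-- Enable search optimization on the table.
ALTER TABLE customers ADD SEARCH OPTIMIZATION;

-- A conjunction of equality predicates that the service can accelerate.
SELECT *
FROM customers
WHERE country = 'US'
  AND customer_id = 1042;
```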
How can a user get the MOST detailed information about individual table storage details in Snowflake?
Options:
SHOW TABLES command
SHOW EXTERNAL TABLES command
TABLES view
TABLE_STORAGE_METRICS view
Answer:
D
Explanation:
To obtain the most detailed information about individual table storage details in Snowflake, the TABLE_STORAGE_METRICS view is the recommended option. This view provides comprehensive metrics on storage usage, including data size, Time Travel size, Fail-safe size, and other relevant storage metrics for each table. This level of detail is invaluable for monitoring, managing, and optimizing storage costs and performance.
References:
- Snowflake Documentation: Information Schema
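An illustrative query against this view (the database filter is a placeholder):

```sql
-- Per-table storage breakdown: active data, Time Travel, and Fail-safe bytes.
SELECT table_name,
       active_bytes,
       time_travel_bytes,
       failsafe_bytes
FROM SNOWFLAKE.ACCOUNT_USAGE.TABLE_STORAGE_METRICS
WHERE table_catalog = 'MY_DB'
ORDER BY active_bytes DESC;
```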
For directory tables, which type of stage allows for automatic refreshing of metadata?
Options:
User stage
Table stage
Named internal stage
Named external stage
Answer:
D
Explanation:
For directory tables, a named external stage allows for the automatic refreshing of metadata. This capability is particularly useful when dealing with files stored on external storage services (like Amazon S3, Google Cloud Storage, or Azure Blob Storage) and accessed through Snowflake. The external stage references these files, and the directory table's metadata can be automatically updated to reflect changes in the underlying files.
References:
- Snowflake Documentation: External Stages
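A sketch of such a stage, assuming a pre-existing storage integration (all names and the URL are placeholders):

```sql
-- Directory table metadata is refreshed automatically when the cloud
-- provider sends event notifications for the bucket.
CREATE STAGE my_ext_stage
  URL = 's3://my-bucket/files/'
  STORAGE_INTEGRATION = my_s3_int   -- assumed pre-existing integration
  DIRECTORY = (ENABLE = TRUE AUTO_REFRESH = TRUE);
```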
A user wants to add additional privileges to the system-defined roles for their virtual warehouse. How does Snowflake recommend they accomplish this?
Options:
Grant the additional privileges to a custom role.
Grant the additional privileges to the ACCOUNTADMIN role.
Grant the additional privileges to the SYSADMIN role.
Grant the additional privileges to the ORGADMIN role.
Answer:
A
Explanation:
Snowflake recommends enhancing the granularity and management of privileges by creating and utilizing custom roles. When additional privileges are needed beyond those provided by the system-defined roles for a virtual warehouse or any other resource, these privileges should be granted to a custom role. This approach allows for more precise control over access rights and the ability to tailor permissions to the specific needs of different user groups or applications within the organization, while also maintaining the integrity and security model of system-defined roles.
References:
- Snowflake Documentation: Roles and Privileges
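A sketch of the recommended pattern (role, warehouse, and user names are examples):

```sql
-- Create a custom role and grant the extra privileges to it,
-- rather than modifying a system-defined role directly.
CREATE ROLE analyst_wh_role;
GRANT USAGE, OPERATE ON WAREHOUSE analytics_wh TO ROLE analyst_wh_role;

-- Attach the custom role to the role hierarchy and to users.
GRANT ROLE analyst_wh_role TO ROLE SYSADMIN;
GRANT ROLE analyst_wh_role TO USER some_user;
```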
What is it called when a customer managed key is combined with a Snowflake managed key to create a composite key for encryption?
Options:
Hierarchical key model
Client-side encryption
Tri-secret secure encryption
Key pair authentication
Answer:
C
Explanation:
Tri-secret secure encryption is a security model employed by Snowflake that involves combining a customer-managed key with a Snowflake-managed key to create a composite key for encrypting data. This model enhances data security by requiring both the customer-managed key and the Snowflake-managed key to decrypt data, thus ensuring that neither party can access the data independently. It represents a balanced approach to key management, leveraging both customer control and Snowflake's managed services for robust data encryption.
References:
- Snowflake Documentation: Encryption and Key Management
How are network policies defined in Snowflake?
Options:
They are a set of rules that define the network routes within Snowflake.
They are a set of rules that dictate how Snowflake accounts can be used between multiple users.
They are a set of rules that define how data can be transferred between different Snowflake accounts within an organization.
They are a set of rules that control access to Snowflake accounts by specifying the IP addresses or ranges of IP addresses that are allowed to connect
to Snowflake.
Answer:
D
Explanation:
Network policies in Snowflake are defined as a set of rules that manage the network-level access to Snowflake accounts. These rules specify which IP addresses or IP ranges are permitted to connect to Snowflake, enhancing the security of Snowflake accounts by preventing unauthorized access. Network policies are an essential aspect of Snowflake's security model, allowing administrators to enforce access controls based on network locations.
References:
- Snowflake Documentation: Network Policies
Which view can be used to determine if a table has frequent row updates or deletes?
Options:
TABLES
TABLE_STORAGE_METRICS
STORAGE_DAILY_HISTORY
STORAGE_USAGE
Answer:
B
Explanation:
The TABLE_STORAGE_METRICS view can be used to determine if a table has frequent row updates or deletes. This view provides detailed metrics on the storage utilization of tables within Snowflake, including metrics that reflect the impact of DML operations such as updates and deletes on table storage. For example, metrics related to the number of active and deleted rows can help identify tables that experience high levels of row modifications, indicating frequent updates or deletions.
References:
- Snowflake Documentation: TABLE_STORAGE_METRICS View
Which service or feature in Snowflake is used to improve the performance of certain types of lookup and analytical queries that use an extensive set of WHERE conditions?
Options:
Data classification
Query acceleration service
Search optimization service
Tagging
Answer:
C
Explanation:
The Search Optimization Service in Snowflake is designed to improve the performance of specific types of queries, particularly those involving extensive sets of WHERE conditions. By maintaining a search index on tables, this service can accelerate lookup and analytical queries, making it a valuable feature for optimizing query performance and reducing execution times for complex searches.
References:
- Snowflake Documentation: Search Optimization Service
Regardless of which notation is used, what are considerations for writing the column name and element names when traversing semi-structured data?
Options:
The column name and element names are both case-sensitive.
The column name and element names are both case-insensitive.
The column name is case-sensitive but element names are case-insensitive.
The column name is case-insensitive but element names are case-sensitive.
Answer:
D
Explanation:
When querying semi-structured data in Snowflake, the behavior towards case sensitivity is distinct between column names and the names of elements within the semi-structured data. Column names follow the general SQL norm of being case-insensitive, meaning you can reference them in any case without affecting the query. However, element names within JSON, XML, or other semi-structured data are case-sensitive. This distinction is crucial for accurate data retrieval and manipulation in Snowflake, especially when working with JSON objects where the case of keys can significantly alter the outcome of queries.
References:
- Snowflake Documentation: Querying Semi-structured Data
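This distinction can be sketched with a small example (table and key names are hypothetical):

```sql
CREATE TABLE t (src VARIANT);
INSERT INTO t SELECT PARSE_JSON('{"Name": "Alice"}');

SELECT src:Name FROM t;   -- matches the "Name" key
SELECT src:name FROM t;   -- returns NULL: "name" does not match "Name"
SELECT SRC:Name FROM T;   -- column and table names are case-insensitive
```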
Which command can be used to list all the file formats for which a user has access privileges?
Options:
LIST
ALTER FILE FORMAT
DESCRIBE FILE FORMAT
SHOW FILE FORMATS
Answer:
D
Explanation:
The command to list all the file formats for which a user has access privileges in Snowflake is SHOW FILE FORMATS. This command provides a list of all file formats defined in the user's current session or specified database/schema, along with details such as the name, type, and creation time of each file format. It is a valuable tool for users to understand and manage the file formats available for data loading and unloading operations.
References:
- Snowflake Documentation: SHOW FILE FORMATS
What is the MAXIMUM number of clusters that can be provisioned with a multi-cluster virtual warehouse?
Options:
1
5
10
100
Answer:
C
Explanation:
In Snowflake, the maximum number of clusters that can be provisioned within a multi-cluster virtual warehouse is 10. This allows for significant scalability and performance management by enabling Snowflake to handle varying levels of query load by adjusting the number of active clusters within the warehouse.
References:
- Snowflake documentation on virtual warehouses, particularly the scalability options available in multi-cluster configurations.
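An illustrative multi-cluster warehouse definition (the name and size are placeholders; 10 is the documented upper bound for MAX_CLUSTER_COUNT):

```sql
CREATE WAREHOUSE analytics_wh
  WAREHOUSE_SIZE = 'MEDIUM'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 10          -- maximum allowed value
  SCALING_POLICY = 'STANDARD';    -- clusters start/stop based on load
```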
What Snowflake database object is derived from a query specification, stored for later use, and can speed up expensive aggregations on large data sets?
Options:
Temporary table
External table
Secure view
Materialized view
Answer:
D
Explanation:
A materialized view in Snowflake is a database object derived from a query specification, stored for later use, and can significantly speed up expensive aggregations on large data sets. Materialized views store the result of their underlying query, reducing the need to recompute the result each time the view is accessed. This makes them ideal for improving the performance of read-heavy, aggregate-intensive queries.
References:
- Snowflake Documentation: Using Materialized Views
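A sketch of a materialized view that precomputes an expensive aggregation (table and column names are hypothetical):

```sql
-- The aggregation result is stored and kept up to date by Snowflake,
-- so queries against the view avoid rescanning the base table.
CREATE MATERIALIZED VIEW daily_sales_mv AS
  SELECT order_date,
         SUM(amount) AS total_amount,
         COUNT(*)    AS order_count
  FROM sales
  GROUP BY order_date;
```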
What is the MINIMUM permission needed to access a file URL from an external stage?
Options:
MODIFY
READ
SELECT
USAGE
Answer:
D
Explanation:
To access a file URL from an external stage in Snowflake, the minimum permission required is USAGE on the stage object. USAGE permission allows a user to reference the stage in SQL commands, necessary for actions like listing files or loading data from the stage, but does not permit the user to alter or drop the stage.
References:
- Snowflake Documentation: Access Control
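A sketch of the minimal grants involved (all object and role names are placeholders); note that USAGE on the enclosing database and schema is also needed to reference the stage at all:

```sql
GRANT USAGE ON DATABASE my_db TO ROLE reporting_role;
GRANT USAGE ON SCHEMA my_db.my_schema TO ROLE reporting_role;
-- USAGE on the stage itself is the minimum stage privilege required.
GRANT USAGE ON STAGE my_db.my_schema.my_ext_stage TO ROLE reporting_role;
```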
Which Snowflake edition offers the highest level of security for organizations that have the strictest requirements?
Options:
Standard
Enterprise
Business Critical
Virtual Private Snowflake (VPS)
Answer:
D
Explanation:
The Virtual Private Snowflake (VPS) edition offers the highest level of security for organizations with the strictest security requirements. This edition provides a dedicated and isolated instance of Snowflake, including enhanced security features and compliance certifications to meet the needs of highly regulated industries or any organization requiring the utmost in data protection and privacy.
References:
- Snowflake Documentation: Snowflake Editions
What criteria does Snowflake use to determine the current role when initiating a session? (Select TWO).
Options:
If a role was specified as part of the connection and that role has been granted to the Snowflake user, the specified role becomes the current role.
If no role was specified as part of the connection and a default role has been defined for the Snowflake user, that role becomes the current role.
If no role was specified as part of the connection and a default role has not been set for the Snowflake user, the session will not be initiated and the log in will fail.
If a role was specified as part of the connection and that role has not been granted to the Snowflake user, it will be ignored and the default role will become the current role.
If a role was specified as part of the connection and that role has not been granted to the Snowflake user, the role is automatically granted and it becomes the current role.
Answer:
A, B
Explanation:
When initiating a session in Snowflake, the system determines the current role based on the user's connection details and role assignments. If a user specifies a role during the connection, and that role is already granted to them, Snowflake sets it as the current role for the session. Alternatively, if no role is specified during the connection, but the user has a default role assigned, Snowflake will use this default role as the current session role. These mechanisms ensure that users operate within their permissions, enhancing security and governance within Snowflake environments.
References:
- Snowflake Documentation: Understanding Roles
While clustering a table, columns with which data types can be used as clustering keys? (Select TWO).
Options:
BINARY
GEOGRAPHY
GEOMETRY
OBJECT
VARIANT
Answer:
A, C
Explanation:
A clustering key can be defined when a table is created by appending a CLUSTER BY clause, where each clustering key consists of one or more table columns/expressions, which can be of any data type except GEOGRAPHY, VARIANT, OBJECT, or ARRAY.
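A sketch of both ways to define a clustering key (table and column names are hypothetical):

```sql
-- Clustering key defined at table creation.
CREATE TABLE sales (
  sale_date DATE,
  region    STRING,
  amount    NUMBER
)
CLUSTER BY (sale_date, region);

-- A clustering key can also be added or changed later.
ALTER TABLE sales CLUSTER BY (sale_date);
```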
Which data formats are supported by Snowflake when unloading semi-structured data? (Select TWO).
Options:
Binary file in Avro
Binary file in Parquet
Comma-separated JSON
Newline Delimited JSON
Plain text file containing XML elements
Answer:
B, D
Explanation:
Snowflake supports a variety of file formats for unloading semi-structured data, among which Parquet and Newline Delimited JSON (NDJSON) are two widely used formats.
- B. Binary file in Parquet: Parquet is a columnar storage file format optimized for large-scale data processing and analysis. It is especially suited for complex nested data structures.
- D. Newline Delimited JSON (NDJSON): This format represents JSON records separated by newline characters, facilitating the storage and processing of multiple, separate JSON objects in a single file.
These formats are chosen for their efficiency and compatibility with data analytics tools and ecosystems, enabling seamless integration and processing of exported data.
References:
- Snowflake Documentation: Data Unloading
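Illustrative unload commands for both formats (stage and table names are placeholders; unloading with TYPE = 'JSON' produces newline-delimited JSON):

```sql
-- Unload to Parquet files on a named stage.
COPY INTO @my_stage/parquet_out/
FROM my_table
FILE_FORMAT = (TYPE = 'PARQUET');

-- Unload to newline-delimited JSON files.
COPY INTO @my_stage/json_out/
FROM my_table
FILE_FORMAT = (TYPE = 'JSON');
```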
What are characteristics of Snowsight worksheets? (Select TWO).
Options:
Worksheets can be grouped under folders, and folders of folders.
Each worksheet is a unique Snowflake session.
Users are limited to running only one query on a worksheet.
The Snowflake session ends when a user switches worksheets.
Users can import worksheets and share them with other users.
Answer:
A, E
Explanation:
Characteristics of Snowsight worksheets in Snowflake include:
- A. Worksheets can be grouped under folders, and a folder of folders: This organizational feature allows users to efficiently manage and categorize their worksheets within Snowsight, Snowflake's web-based UI, enhancing the user experience by keeping related worksheets together.
- E. Users can import worksheets and share them with other users: Snowsight supports the sharing of worksheets among users, fostering collaboration by allowing users to share queries, analyses, and findings. This feature is crucial for collaborative data exploration and analysis workflows.
References:
- Snowflake Documentation: Snowsight (UI for Snowflake)
What is the Fail-safe retention period for transient and temporary tables?
Options:
0 days
1 day
7 days
90 days
Answer:
A
Explanation:
The Fail-safe retention period for transient and temporary tables in Snowflake is 0 days. Fail-safe is a feature designed to protect data against accidental loss or deletion by retaining historical data for a period after its Time Travel retention period expires. However, transient and temporary tables, which are designed for temporary or short-term storage and operations, do not have a Fail-safe period. Once the data is deleted or the table is dropped, it cannot be recovered.
References:
- Snowflake Documentation: Understanding Fail-safe
When unloading data, which file format preserves the data values for floating-point number columns?
Options:
Avro
CSV
JSON
Parquet
Answer:
D
Explanation:
When unloading data, the Parquet file format is known for its efficiency in preserving the data values for floating-point number columns. Parquet is a columnar storage file format that offers high compression ratios and efficient data encoding schemes. It is especially effective for floating-point data, as it maintains high precision and supports efficient querying and analysis.
References:
- Snowflake Documentation: Using the Parquet File Format for Unloading Data
By default, how long is the standard retention period for Time Travel across all Snowflake accounts?
Options:
0 days
1 day
7 days
14 days
Answer:
B
Explanation:
By default, the standard retention period for Time Travel in Snowflake is 1 day across all Snowflake accounts. Time Travel enables users to access historical data within this retention window, allowing for point-in-time data analysis and recovery. This feature is a significant aspect of Snowflake's data management capabilities, offering flexibility in handling data changes and accidental deletions.
References:
- Snowflake Documentation: Using Time Travel
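A few illustrative Time Travel queries, valid within the retention window (table name and statement ID are placeholders):

```sql
-- State of the table 30 minutes ago.
SELECT * FROM orders AT(OFFSET => -60*30);

-- State just before a specific (hypothetical) statement ran.
SELECT * FROM orders BEFORE(STATEMENT => '8e5d0ca9-005e-44e6-b858-a8f5b37c5726');

-- Recover a table dropped within the retention period.
UNDROP TABLE orders;
```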
Which types of charts does Snowsight support? (Select TWO).
Options:
Area charts
Bar charts
Column charts
Radar charts
Scorecards
Answer:
A, B
Explanation:
Snowsight, Snowflake's user interface for executing and analyzing queries, supports various types of visualizations to help users understand their data better. Among the supported types, area charts and bar charts are two common options. Area charts are useful for representing quantities through the use of filled areas on the graph, often useful for showing volume changes over time. Bar charts, on the other hand, are versatile for comparing different groups or categories of data. Both chart types are integral to data analysis, enabling users to visualize trends, patterns, and differences in their data effectively.
References:
- Snowflake Documentation on Snowsight Visualizations
Which of the following SQL statements will list the version of the drivers currently being used?
Options:
Execute SELECT CURRENT_ODBC_CLIENT(); from the Web UI
Execute SELECT CURRENT_JDBC_VERSION(); from SnowSQL
Execute SELECT CURRENT_CLIENT(); from an application
Execute SELECT CURRENT_VERSION (); from the Python Connector
Answer:
B
Explanation:
The correct SQL statement to list the version of the JDBC (Java Database Connectivity) drivers currently being used is to execute SELECT CURRENT_JDBC_VERSION(); from within SnowSQL or any client application that utilizes JDBC for connecting to Snowflake. Snowflake provides specific functions to query the version of the client drivers or connectors being used, such as JDBC, ODBC, and others. The CURRENT_JDBC_VERSION() function is designed specifically to return the version of the JDBC driver in use.
It's important to note that Snowflake supports various types of drivers and connectors for connecting to different client applications, including ODBC, JDBC, Python, and others. Each of these connectors has its own method or function for querying the current version in use. For JDBC, the appropriate function is CURRENT_JDBC_VERSION(), reflecting the specificity required to obtain version information relevant to the JDBC driver specifically.
References:
- Snowflake Documentation on Client Functions: This information can typically be found in the Snowflake documentation under the section that covers SQL functions, particularly those functions that provide information about the client or session.
Which semi-structured file format is a compressed, efficient, columnar data representation?
Options:
Avro
JSON
TSV
Parquet
Answer:
D
Explanation:
Parquet is a columnar storage file format that is optimized for efficiency in both storage and processing. It supports compression and encoding schemes that significantly reduce the storage space needed and speed up data retrieval operations, making it ideal for handling large volumes of data. Unlike JSON or TSV, which are row-oriented and typically uncompressed, Parquet is designed specifically for use with big data frameworks, offering advantages in terms of performance and cost when storing and querying semi-structured data.
References:
- Apache Parquet Documentation
How does the search optimization service help Snowflake users improve query performance?
Options:
It scans the micro-partitions based on the joins used in the queries and scans only join columns.
It maintains a persistent data structure that keeps track of the values of the table's columns in each of its micro-partitions.
It scans the local disk cache to avoid scans on the tables used in the Query.
It keeps track of running queries and their results and saves those extra scans on the table.
Answer:
B
Explanation:
The search optimization service in Snowflake enhances query performance by maintaining a persistent data structure. This structure indexes the values of table columns across micro-partitions, allowing Snowflake to quickly identify which micro-partitions contain relevant data for a query. By efficiently narrowing down the search space, this service reduces the amount of data scanned during query execution, leading to faster response times and more efficient use of resources.
References:
- Snowflake Documentation on Search Optimization Service
If a virtual warehouse is suspended, what happens to the warehouse cache?
Options:
The cache is dropped when the warehouse is suspended and is no longer available upon restart.
The warehouse cache persists for as long the warehouse exists, regardless of its suspension status.
The cache is maintained for up to two hours and can be restored if the warehouse is restarted within this limit.
The cache is maintained for the auto-suspend duration and can be restored if the warehouse is restarted within this limit.
Answer:
A
Explanation:
When a virtual warehouse in Snowflake is suspended, the cache is dropped and is no longer available upon restart. This means that all cached data, including results and temporary data, are cleared from memory. The purpose of this behavior is to conserve resources while the warehouse is not active. Upon restarting the warehouse, it will need to reload any data required for queries from storage, which may result in a slower initial performance until the cache is repopulated. This is a critical consideration for managing performance and cost in Snowflake.
Which statement describes Snowflake tables?
Options:
Snowflake tables are logical representations of underlying physical data.
Snowflake tables are the physical instantiation of data loaded into Snowflake.
Snowflake tables require that clustering keys be defined to perform optimally.
Snowflake tables are owned by a user.
Answer:
A
Explanation:
In Snowflake, tables represent a logical structure through which users interact with the stored data. The actual physical data is stored in micro-partitions managed by Snowflake, and the logical table structure provides the means by which SQL operations are mapped to this data. This architecture allows Snowflake to optimize storage and querying across its distributed, cloud-based data storage system.
References:
- Snowflake Documentation on Tables
What happens when a suspended virtual warehouse is resized in Snowflake?
Options:
It will return an error.
It will return a warning.
The suspended warehouse is resumed and new compute resources are provisioned immediately.
The additional compute resources are provisioned when the warehouse is resumed.
Answer:
D
Explanation:
In Snowflake, resizing a virtual warehouse that is currently suspended does not immediately provision the new compute resources. Instead, the change in size is recorded, and the additional compute resources are provisioned when the warehouse is resumed. This means that the action of resizing a suspended warehouse does not cause it to resume operation automatically. The warehouse remains suspended until an explicit command to resume it is issued, or until it automatically resumes upon the next query execution that requires it.
This behavior allows for efficient management of compute resources, ensuring that credits are not consumed by a warehouse that is not in use, even if its size is adjusted while it is suspended.
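This sequence can be sketched as follows (the warehouse name and sizes are placeholders):

```sql
ALTER WAREHOUSE analytics_wh SUSPEND;

-- Only the recorded size changes; no compute is provisioned yet,
-- and no credits are consumed while the warehouse stays suspended.
ALTER WAREHOUSE analytics_wh SET WAREHOUSE_SIZE = 'LARGE';

-- The LARGE-sized compute resources are provisioned at resume time.
ALTER WAREHOUSE analytics_wh RESUME;
```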
What does the Activity area of Snowsight allow users to do? (Select TWO).
Options:
Schedule automated data backups.
Explore each step of an executed query.
Monitor queries executed by users in an account.
Create and manage user roles and permissions.
Access Snowflake Marketplace to find and integrate datasets.
Answer:
B, CExplanation:
The Activity area of Snowsight, Snowflake's web interface, allows users to perform several important tasks related to query management and performance analysis. Among the options provided, the correct ones are:
- B. Explore each step of an executed query: Snowsight provides detailed insights into query execution, including the ability to explore the execution plan of a query. This helps users understand how a query was processed, identify performance bottlenecks, and optimize query performance.
- C. Monitor queries executed by users in an account: The Activity area enables users to monitor the queries that have been executed by users within the Snowflake account. This includes viewing the history of queries, their execution times, resources consumed, and other relevant metrics.
These features are crucial for effective query performance tuning and ensuring efficient use of Snowflake's resources.
References:
- Snowflake Documentation on Snowsight: Using Snowsight
Which command is used to upload data files from a local directory or folder on a client machine to an internal stage, for a specified table?
Options:
GET
PUT
CREATE STREAM
COPY INTO
Answer:
BExplanation:
To upload data files from a local directory or folder on a client machine to an internal stage in Snowflake, the PUT command is used. The PUT command takes files from the local file system and uploads them to an internal Snowflake stage (or a specified stage) for the purpose of preparing the data to be loaded into Snowflake tables.
Syntax Example:
PUT file://
This command is crucial for data ingestion workflows in Snowflake, especially when preparing to load data using the COPY INTO command.
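A minimal end-to-end sketch, assuming a hypothetical local CSV file and a table named my_table:

```sql
-- Upload the local file to the table's internal stage (@%table).
PUT file:///tmp/data.csv @%my_table;

-- Load the staged file into the table.
COPY INTO my_table
  FROM @%my_table
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);
```

Note that GET performs the reverse operation, downloading files from a stage to the local file system.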
In which hierarchy is tag inheritance possible?
Options:
Organization » Account » Role
Account » User » Schema
Database » View » Column
Schema » Table » Column
Answer:
DExplanation:
In Snowflake, tag inheritance is a feature that allows tags, which are key-value pairs assigned to objects for the purpose of data governance and metadata management, to be inherited within a hierarchy. The hierarchy in which tag inheritance is possible is from Schema to Table to Column. This means that a tag applied to a schema can be inherited by the tables within that schema, and a tag applied to a table can be inherited by the columns within that table.References: Snowflake Documentation on Tagging and Object Hierarchy
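As an illustration of schema-to-table-to-column inheritance (all object names are hypothetical):

```sql
-- Create a tag and set it at the schema level.
CREATE TAG cost_center;
ALTER SCHEMA sales SET TAG cost_center = 'emea';

-- Tables and columns within the schema inherit the schema-level
-- tag value unless the tag is set directly at a lower level.
ALTER TABLE sales.orders SET TAG cost_center = 'emea-orders';
```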
Which privilege is required on a virtual warehouse to abort any existing executing queries?
Options:
USAGE
OPERATE
MODIFY
MONITOR
Answer:
BExplanation:
The privilege required on a virtual warehouse to abort any existing executing queries is OPERATE. The OPERATE privilege on a virtual warehouse allows a user to perform operational tasks on the warehouse, including starting, stopping, and restarting the warehouse, as well as aborting running queries. This level of control is essential for managing resource utilization and ensuring that the virtual warehouse operates efficiently.
References:
- Snowflake Documentation on Access Control: Access Control Privileges
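For example, granting this capability to a monitoring role might look like the following sketch (role and warehouse names are hypothetical):

```sql
-- OPERATE allows suspending/resuming the warehouse and
-- aborting any queries currently running on it.
GRANT OPERATE ON WAREHOUSE my_wh TO ROLE ops_role;
```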
How many credits does a size 3X-Large virtual warehouse consume if it runs continuously for 2 hours?
Options:
32
64
128
256
Answer:
CExplanation:
In Snowflake, warehouse credit consumption doubles with each size increase: X-Small consumes 1 credit per hour, Small 2, Medium 4, Large 8, X-Large 16, 2X-Large 32, and 3X-Large 64. A 3X-Large virtual warehouse running continuously for 2 hours therefore consumes 64 x 2 = 128 credits.References: Snowflake Pricing Documentation
Authorization to execute CREATE <object> statements comes from which role?
Options:
Primary role
Secondary role
Application role
Database role
Answer:
AExplanation:
In Snowflake, the authorization to execute CREATE <object> statements, such as creating tables, views, databases, etc., is determined by the role currently set as the user's primary role. The primary role of a user or session specifies the set of privileges (including creation privileges) that the user has. While users can have multiple roles, only the primary role is used to determine what objects the user can create unless explicitly specified in the session.
What happens to the privileges granted to Snowflake system-defined roles?
Options:
The privileges cannot be revoked.
The privileges can be revoked by an ACCOUNTADMIN.
The privileges can be revoked by an orgadmin.
The privileges can be revoked by any user-defined role with appropriate privileges.
Answer:
AExplanation:
The privileges granted to Snowflake's system-defined roles cannot be revoked. System-defined roles, such as SYSADMIN, ACCOUNTADMIN, SECURITYADMIN, and others, come with a set of predefined privileges that are essential for the roles to function correctly within the Snowflake environment. These privileges are intrinsic to the roles and ensure that users assigned these roles can perform the necessary tasks and operations relevant to their responsibilities.
The design of Snowflake's role-based access control (RBAC) model ensures that system-defined roles have a specific set of non-revocable privileges to maintain the security, integrity, and operational efficiency of the Snowflake environment. This approach prevents accidental or intentional modification of privileges that could disrupt the essential functions or compromise the security of the Snowflake account.
References:
- Snowflake Documentation on Access Control: Understanding Role-Based Access Control (RBAC)
What action should be taken if a Snowflake user wants to share a newly created object in a database with consumers?
Options:
Use the automatic sharing feature for seamless access.
Drop the object and then re-add it to the database to trigger sharing.
Recreate the object with a different name in the database before sharing.
Use the GRANT <privilege> ... TO SHARE command to grant the necessary privileges.
Answer:
DExplanation:
When a Snowflake user wants to share a newly created object in a database with consumers, the correct action to take is to use the GRANT privilege ... TO SHARE command to grant the necessary privileges for the object to be shared. This approach allows the object owner or a user with the appropriate privileges to share database objects such as tables, secure views, and streams with other Snowflake accounts by granting access to a named share.
The GRANT statement specifies which privileges are granted on the object to the share. The object remains in its original location; sharing does not duplicate or move the object. Instead, it allows the specified share to access the object according to the granted privileges.
For example, to share a table, the command would be:
GRANT SELECT ON TABLE new_table TO SHARE consumer_share;
This command grants the SELECT privilege on a table named new_table to a share named consumer_share, enabling the consumers of the share to query the table.
Automatic sharing, dropping and re-adding the object, or recreating the object with a different name are not required or recommended practices for sharing objects in Snowflake. The use of the GRANT statement to a share is the direct and intended method for this purpose.
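Putting it together, a provider's sharing workflow might look like this sketch (the share, database, and account names are hypothetical):

```sql
CREATE SHARE consumer_share;

-- The share needs USAGE on the containing database and schema
-- before any object-level grant takes effect.
GRANT USAGE ON DATABASE my_db TO SHARE consumer_share;
GRANT USAGE ON SCHEMA my_db.public TO SHARE consumer_share;
GRANT SELECT ON TABLE my_db.public.new_table TO SHARE consumer_share;

-- Make the share visible to a specific consumer account.
ALTER SHARE consumer_share ADD ACCOUNTS = consumer_account;
```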
Which object type is granted permissions for reading a table?
Options:
User
Role
Attribute
Schema
Answer:
BExplanation:
In Snowflake, permissions for accessing database objects, including tables, are not granted directly to users but rather to roles. A role encapsulates a collection of privileges on various Snowflake objects. Users are then granted roles, and through those roles, they inherit the permissions necessary to read a table or perform other actions. This approach adheres to the principle of least privilege, allowing for granular control over database access and simplifying the management of user permissions.
When should a stored procedure be created with caller's rights?
Options:
When the caller needs to be prevented from viewing the source code of the stored procedure
When the caller needs to run a statement that could not execute outside of the stored procedure
When the stored procedure needs to run with the privileges of the role that called the stored procedure
When the stored procedure needs to operate on objects that the caller does not have privileges on
Answer:
CExplanation:
Stored procedures in Snowflake can be created with either 'owner's rights' or 'caller's rights'. A stored procedure created with caller's rights executes with the privileges of the role that calls the procedure, not the privileges of the role that owns the procedure. This is particularly useful in scenarios where the procedure needs to perform operations that depend on the caller's access permissions, ensuring that the procedure can only access objects that the caller is authorized to access.
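A caller's-rights procedure is declared with EXECUTE AS CALLER. This sketch assumes a hypothetical cleanup task:

```sql
CREATE OR REPLACE PROCEDURE purge_old_rows(table_name STRING)
RETURNS STRING
LANGUAGE SQL
EXECUTE AS CALLER  -- runs with the privileges of the calling role
AS
$$
BEGIN
  -- Succeeds only if the caller's role can delete from the table.
  EXECUTE IMMEDIATE 'DELETE FROM ' || table_name ||
    ' WHERE created_at < DATEADD(year, -1, CURRENT_TIMESTAMP())';
  RETURN 'done';
END;
$$;
```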
When sharing data in Snowflake, what privileges does a Provider need to grant along with a share? (Select TWO).
Options:
USAGE on the specific tables in the database.
SELECT on the specific tables in the database.
MODIFY on the specific tables in the database.
USAGE on the database and the schema containing the tables to share.
OPERATE on the database and the schema containing the tables to share.
Answer:
B, DExplanation:
When sharing data in Snowflake, the provider needs to grant the following privileges along with a share:
- B. SELECT on the specific tables in the database: SELECT is the privilege that allows consumers of the share to query the shared tables (USAGE is not a table-level privilege).
- D. USAGE on the database and the schema containing the tables to share: consumers need USAGE at the database and schema levels before any object within them can be accessed.
These privileges are crucial for setting up secure and controlled access to the shared data, ensuring that only the specified objects are exposed through the share.
Reference to Snowflake documentation on sharing data and managing access:
- Data Sharing Overview
- Privileges Required for Sharing Data
Which Snowflake feature or tool helps troubleshoot issues in SQL query expressions that commonly cause performance bottlenecks?
Options:
Persisted query results
QUERY_HISTORY View
Query acceleration service
Query Profile
Answer:
DExplanation:
The Snowflake feature that helps troubleshoot issues in SQL query expressions and commonly identify performance bottlenecks is the Query Profile. The Query Profile provides a detailed breakdown of a query's execution plan, including each operation's time and resources consumed. It visualizes the steps involved in the query execution, highlighting areas that may be causing inefficiencies, such as full table scans, large joins, or operations that could benefit from optimization.
By examining the Query Profile, developers and database administrators can identify and troubleshoot performance issues, optimize query structures, and make informed decisions about potential schema or indexing changes to improve performance.
References:
- Snowflake Documentation on Query Profile: Using the Query Profile
What are the main differences between the account usage views and the information schema views? (Select TWO).
Options:
No active warehouse is needed to query account usage views, but one is needed to query information schema views.
Account usage views do not contain data about tables but information schema views do.
Account usage views contain dropped objects but information schema views do not.
Data retention for account usage views is 1 year but is 7 days to 6 months for information schema views, depending on the view.
Information schema views are read-only but account usage views are not.
Answer:
C, DExplanation:
The account usage views in Snowflake provide historical usage data about the Snowflake account, and they retain this data for a period of up to 1 year. These views include information about dropped objects, enabling audit and tracking activities. On the other hand, information schema views provide metadata about database objects currently in use, such as tables and views, but do not include dropped objects. The retention of data in information schema views varies, but it is generally shorter than the retention for account usage views, ranging from 7 days to a maximum of 6 months, depending on the specific view.References: Snowflake Documentation on Account Usage and Information Schema
What are characteristics of Snowflake network policies? (Select TWO).
Options:
They can be set for any Snowflake Edition.
They can be applied to roles.
They restrict or enable access to specific IP addresses.
They are activated using ALTER DATABASE SQL commands.
They can only be managed using the ORGADMIN role.
Answer:
A, CExplanation:
Snowflake network policies are a security feature that allows administrators to control access to Snowflake by specifying allowed and blocked IP address ranges. These policies apply to all editions of Snowflake, making them widely applicable across different Snowflake environments. They are specifically designed to restrict or enable access based on the originating IP addresses of client requests, adding an extra layer of security.
Network policies are not applied to roles but are set at the account or user level. They are not activated using ALTER DATABASE SQL commands but are managed through ALTER ACCOUNT or ALTER NETWORK POLICY commands. The management of network policies does not exclusively require the ORGADMIN role; instead, they can be managed by users with the necessary privileges on the account.
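A sketch of creating and activating a network policy at the account level (the policy name and IP ranges are hypothetical):

```sql
CREATE NETWORK POLICY corp_policy
  ALLOWED_IP_LIST = ('192.168.1.0/24')
  BLOCKED_IP_LIST = ('192.168.1.99');

-- Activate for the whole account (requires appropriate privileges).
ALTER ACCOUNT SET NETWORK_POLICY = corp_policy;
```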
What information is stored in the ACCESS_HISTORY view?
Options:
History of the files that have been loaded into Snowflake
Names and owners of the roles that are currently enabled in the session
Query details such as the objects included and the user who executed the query
Details around the privileges that have been granted for all objects in an account
Answer:
CExplanation:
Query details such as the objects included and the user who executed the query. The ACCESS_HISTORY view (in the SNOWFLAKE.ACCOUNT_USAGE schema) records, for each query, the user who ran it, the query ID and start time, and the objects that were read or modified. This information is crucial for auditing data access and for regulatory compliance.
Here's how to understand and use the ACCESS_HISTORY view:
- Purpose of ACCESS_HISTORY View: It tracks which tables, views, and columns were accessed (direct_objects_accessed, base_objects_accessed) or modified (objects_modified) by each query, and by whom.
- Querying ACCESS_HISTORY: To access this view, you can use the following SQL query pattern:
SELECT user_name, query_id, direct_objects_accessed, objects_modified FROM SNOWFLAKE.ACCOUNT_USAGE.ACCESS_HISTORY;
- Interpreting the Results: The object columns are JSON arrays identifying each accessed object, making it possible to audit exactly who touched which data and when. Privilege grants, by contrast, are tracked in views such as GRANTS_TO_ROLES, not in ACCESS_HISTORY.
Which function can be used with the COPY INTO <location> command to convert rows from a relational table to a single VARIANT column, and unload rows into a JSON file?
Options:
FLATTEN
OBJECT_AS
OBJECT_CONSTRUCT
TO VARIANT
Answer:
CExplanation:
The OBJECT_CONSTRUCT function builds a VARIANT value (a JSON object) from name/value pairs, and OBJECT_CONSTRUCT(*) converts an entire row into a single VARIANT column whose keys are the column names. Combined with the COPY INTO <location> statement and a JSON file format, it converts rows from a relational table into JSON documents for unloading:
COPY INTO @my_stage/unload/ FROM (SELECT OBJECT_CONSTRUCT(*) FROM my_table) FILE_FORMAT = (TYPE = JSON);
This pattern is often utilized for data integration scenarios, backups, or when data needs to be shared in a format that is easily consumed by various applications or services that support JSON.
References:
- Snowflake Documentation on Data Unloading: Unloading Data
- Snowflake Documentation on Semi-structured Data Functions: OBJECT_CONSTRUCT
Which Snowflake database object can be shared with other accounts?
Options:
Tasks
Pipes
Secure User-Defined Functions (UDFs)
Stored Procedures
Answer:
CExplanation:
In Snowflake, Secure User-Defined Functions (UDFs) can be shared with other accounts using Snowflake's data sharing feature. This allows different Snowflake accounts to securely execute the UDFs without having direct access to the underlying data the functions operate on, ensuring privacy and security. The sharing is facilitated through shares created in Snowflake, which can contain Secure UDFs along with other database objects like tables and views.References: Snowflake Documentation on Data Sharing and Secure UDFs
When unloading data with the COPY INTO <location> command, what is the purpose of the PARTITION BY <expression> parameter option?
Options:
To sort the contents of the output file by the specified expression.
To delimit the records in the output file using the specified expression.
To include a new column in the output using the specified window function expression.
To split the output into multiple files, one for each distinct value of the specified expression.
Answer:
DExplanation:
The PARTITION BY <expression> parameter option in the COPY INTO <location> command is used to split the output into multiple files based on the distinct values of the specified expression. This feature is particularly useful for organizing large datasets into smaller, more manageable files and can help with optimizing downstream processing or consumption of the data. For example, if you are unloading a large dataset of transactions and use PARTITION BY DATE(transactions.transaction_date), Snowflake generates a separate output file for each unique transaction date, facilitating easier data management and access.
This approach to data unloading can significantly improve efficiency when dealing with large volumes of data by enabling parallel processing and simplifying data retrieval based on specific criteria or dimensions.
References:
- Snowflake Documentation on Unloading Data: COPY INTO
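A sketch of partitioned unloading (the stage and table names are hypothetical):

```sql
-- One set of output files is written per distinct transaction date,
-- under a path derived from the expression's value.
COPY INTO @my_stage/transactions/
  FROM transactions
  PARTITION BY ('date=' || TO_VARCHAR(transaction_date, 'YYYY-MM-DD'))
  FILE_FORMAT = (TYPE = PARQUET);
```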
What does Snowflake recommend a user do if they need to connect to Snowflake with a tool or technology that is not listed in the Snowflake partner ecosystem?
Options:
Use Snowflake's native API.
Use a custom-built connector.
Contact Snowflake Support for a new driver.
Connect through Snowflake’s JDBC or ODBC drivers
Answer:
DExplanation:
If a user needs to connect to Snowflake with a tool or technology that is not listed in Snowflake's partner ecosystem, Snowflake recommends using its JDBC or ODBC drivers. These drivers provide a standard method of connecting from various tools and programming languages to Snowflake, offering wide compatibility and flexibility. By using these drivers, users can establish connections to Snowflake from their applications, ensuring they can leverage the capabilities of Snowflake regardless of the specific tools or technologies they are using.References: Snowflake Documentation on Client Drivers
By default, which role can create resource monitors?
Options:
ACCOUNTADMIN
SECURITYADMIN
SYSADMIN
USERADMIN
Answer:
AExplanation:
The role that can by default create resource monitors in Snowflake is the ACCOUNTADMIN role. Resource monitors are a crucial feature in Snowflake that allows administrators to track and control the consumption of compute resources, ensuring that usage stays within specified limits. The creation and management of resource monitors involve defining thresholds for credits usage, setting up notifications, and specifying actions to be taken when certain thresholds are exceeded.
Given the significant impact that resource monitors can have on the operational aspects and billing of a Snowflake account, the capability to create and manage them is restricted to the ACCOUNTADMIN role. This role has the broadest set of privileges in Snowflake, including the ability to manage all aspects of the account, such as users, roles, warehouses, databases, and resource monitors, among others.
References:
- Snowflake Documentation on Resource Monitors: Managing Resource Monitors
Which roles can make grant decisions to objects within a managed access schema? (Select TWO)
Options:
ACCOUNTADMIN
SECURITYADMIN
SYSTEMADMIN
ORGADMIN
USERADMIN
Answer:
A, BExplanation:
- Managed Access Schemas: In a schema created WITH MANAGED ACCESS, object owners lose the ability to make grant decisions on the objects they own; grant management is centralized.
- Roles with Grant Authority: Only the schema owner, or a role with the global MANAGE GRANTS privilege, can grant privileges on objects in a managed access schema. The system-defined SECURITYADMIN role holds MANAGE GRANTS by default, and ACCOUNTADMIN inherits it.
- Important Note: The ORGADMIN role focuses on organization-level management, not object access control.
Which types of subqueries does Snowflake support? (Select TWO).
Options:
Uncorrelated scalar subqueries in WHERE clauses
Uncorrelated scalar subqueries in any place that a value expression can be used
EXISTS, ANY / ALL, and IN subqueries in WHERE clauses: these subqueries can be uncorrelated only
EXISTS, ANY / ALL, and IN subqueries in where clauses: these subqueries can be correlated only
EXISTS, ANY /ALL, and IN subqueries in WHERE clauses: these subqueries can be correlated or uncorrelated
Answer:
B, EExplanation:
Snowflake supports a variety of subquery types, including both correlated and uncorrelated subqueries. The correct answers are B and E, which highlight Snowflake's flexibility in handling subqueries within SQL queries.
- Uncorrelated Scalar Subqueries: These are subqueries that can execute independently of the outer query. They return a single value and can be used anywhere a value expression is allowed, offering great flexibility in SQL queries.
- EXISTS, ANY/ALL, and IN Subqueries: These subqueries are used in WHERE clauses to filter the results of the main query based on the presence or absence of matching rows in a subquery. Snowflake supports both correlated and uncorrelated versions of these subqueries, providing powerful tools for complex data analysis scenarios.
- Examples and Usage:
An uncorrelated scalar subquery used as a value expression:
SELECT * FROM employees WHERE salary > (SELECT AVG(salary) FROM employees);
A correlated EXISTS subquery in a WHERE clause:
SELECT * FROM orders o WHERE EXISTS (SELECT 1 FROM customer c WHERE c.id = o.customer_id AND c.region = 'North America');
What does Snowflake recommend for a user assigned the ACCOUNTADMIN role?
Options:
The ACCOUNTADMIN role should be set as the user's default role.
The user should use federated authentication instead of a password
The user should be required to use Multi-Factor Authentication (MFA).
There should be just one user with the ACCOUNTADMIN role in each Snowflake account.
Answer:
CExplanation:
For users assigned the ACCOUNTADMIN role, Snowflake recommends enforcing Multi-Factor Authentication (MFA) to enhance security. The ACCOUNTADMIN role has extensive permissions, making it crucial to secure accounts held by such users against unauthorized access. MFA adds an additional layer of security by requiring a second form of verification beyond just the username and password, significantly reducing the risk of account compromise.References: Snowflake Security Best Practices
What causes objects in a data share to become unavailable to a consumer account?
Options:
The DATA_RETENTION_TIME_IN_DAYS parameter in the consumer account is set to 0.
The consumer account runs the GRANT IMPORTED PRIVILEGES command on the data share every 24 hours.
The objects in the data share are being deleted and the grant pattern is not re-applied systematically.
The consumer account acquires the data share through a private data exchange.
Answer:
CExplanation:
Objects in a data share become unavailable to a consumer account if the objects in the data share are deleted or if the permissions on these objects are altered without re-applying the grant permissions systematically. This is because the sharing mechanism in Snowflake relies on explicit grants of permissions on specific objects (like tables, views, or secure views) to the share. If these objects are deleted or if their permissions change without updating the share accordingly, consumers can lose access.
The DATA_RETENTION_TIME_IN_DAYS parameter does not directly affect the availability of shared objects, as it controls how long Snowflake retains historical data for time travel and does not impact data sharing permissions.
Running the GRANT IMPORTED PRIVILEGES command in the consumer account is not related to the availability of shared objects; this command is used to grant privileges on imported objects within the consumer's account and is not a routine maintenance command that would need to be run regularly.
Acquiring a data share through a private data exchange does not inherently make objects unavailable; issues would only arise if there were problems with the share configuration or if the shared objects were deleted or had their permissions altered without re-granting access to the share.
A single user of a virtual warehouse has set the warehouse to auto-resume and auto-suspend after 10 minutes. The warehouse is currently suspended and the user performs the following actions:
1. Runs a query that takes 3 minutes to complete
2. Leaves for 15 minutes
3. Returns and runs a query that takes 10 seconds to complete
4. Manually suspends the warehouse as soon as the last query was completed
When the user returns, how much billable compute time will have been consumed?
Options:
4 minutes
10 minutes
14 minutes
24 minutes
Answer:
CExplanation:
Billable compute time is the time the warehouse is running, including idle time before auto-suspend, with a 60-second minimum each time the warehouse resumes. In this scenario, the warehouse runs for 3 minutes for the first query, sits idle for 10 minutes before auto-suspending, then resumes for a 10-second query that is billed at the 60-second minimum before being manually suspended. The total is 3 + 10 + 1 = 14 minutes. References: [COF-C02] SnowPro Core Certification Exam Study Guide
What types of data listings are available in the Snowflake Data Marketplace? (Choose two.)
Options:
Reader
Consumer
Vendor
Standard
Personalized
Answer:
D, EExplanation:
The Snowflake Data Marketplace offers two types of data listings: 'Standard' listings, which provide immediate access to a published dataset, and 'Personalized' listings, which require the consumer to request access so the provider can tailor the data to that specific consumer.
Which statements are true concerning Snowflake's underlying cloud infrastructure? (Select THREE).
Options:
Snowflake data and services are deployed in a single availability zone within a cloud provider's region.
Snowflake data and services are available in a single cloud provider and a single region, the use of multiple cloud providers is not supported.
Snowflake can be deployed in a customer's private cloud using the customer's own compute and storage resources for Snowflake compute and storage
Snowflake uses the core compute and storage services of each cloud provider for its own compute and storage
All three layers of Snowflake's architecture (storage, compute, and cloud services) are deployed and managed entirely on a selected cloud platform
Snowflake data and services are deployed in at least three availability zones within a cloud provider's region
Answer:
D, E, FExplanation:
Snowflake’s architecture is designed to operate entirely on cloud infrastructure. It uses the core compute and storage services of each cloud provider, which allows it to leverage the scalability and reliability of cloud resources. Snowflake’s services are deployed across multiple availability zones within a cloud provider’s region to ensure high availability and fault tolerance. References: [COF-C02] SnowPro Core Certification Exam Study Guide
How many days is load history for Snowpipe retained?
Options:
1 day
7 days
14 days
64 days
Answer:
CExplanation:
Snowpipe retains load history for 14 days. This allows users to view and audit the data that has been loaded into Snowflake using Snowpipe within this time frame3.
What is the minimum Snowflake edition required for row level security?
Options:
Standard
Enterprise
Business Critical
Virtual Private Snowflake
Answer:
BExplanation:
Row level security in Snowflake is available starting with the Enterprise edition. This feature allows for the creation of row access policies that can control access to data at the row level within tables and views
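A sketch of row-level security via a row access policy (available from the Enterprise edition; all object names are hypothetical):

```sql
-- Policy: a row is visible if the current role is mapped
-- to the row's region in a (hypothetical) mapping table.
CREATE ROW ACCESS POLICY region_policy
  AS (region VARCHAR) RETURNS BOOLEAN ->
    'ADMIN' = CURRENT_ROLE()
    OR EXISTS (
      SELECT 1 FROM role_regions m
      WHERE m.role_name = CURRENT_ROLE() AND m.region = region
    );

-- Attach the policy to a table column.
ALTER TABLE orders ADD ROW ACCESS POLICY region_policy ON (region);
```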
A user is preparing to load data from an external stage
Which practice will provide the MOST efficient loading performance?
Options:
Organize files into logical paths
Store the files on the external stage to ensure caching is maintained
Use pattern matching for regular expression execution
Load the data in one large file
Answer:
AExplanation:
Organizing files into logical paths can significantly improve the efficiency of data loading from an external stage. This practice helps in managing and locating files easily, which can be particularly beneficial when dealing with large datasets or complex directory structures1.
If a size Small virtual warehouse is made up of two servers, how many servers make up a Large warehouse?
Options:
4
8
16
32
Answer:
BExplanation:
In Snowflake, each size increase in virtual warehouses doubles the number of servers. Therefore, if a size Small virtual warehouse is made up of two servers, a Large warehouse, which is two sizes larger, would be made up of eight servers (2 servers for Small, 4 for Medium, and 8 for Large)2.
Size specifies the amount of compute resources available per cluster in a warehouse. Each step up in warehouse size (X-Small, Small, Medium, Large, X-Large, and so on) doubles the compute resources of the previous size.
Which Snowflake function will interpret an input string as a JSON document, and produce a VARIANT value?
Options:
parse_json()
json_extract_path_text()
object_construct()
flatten
Answer:
AExplanation:
The parse_json() function in Snowflake interprets an input string as a JSON document and produces a VARIANT value containing the JSON document. This function is specifically designed for parsing strings that contain valid JSON information3.
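A quick sketch of PARSE_JSON producing a VARIANT that can then be queried with path notation (the JSON content is a made-up example):

```sql
-- PARSE_JSON turns the string into a VARIANT JSON document.
SELECT PARSE_JSON('{"name": "snowflake", "tags": ["cloud", "dw"]}') AS doc;

-- VARIANT values support path access and casting.
SELECT PARSE_JSON('{"name": "snowflake"}'):name::STRING;  -- 'snowflake'
```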
When loading data into Snowflake, how should the data be organized?
Options:
Into single files with 100-250 MB of compressed data per file
Into single files with 1-100 MB of compressed data per file
Into files of maximum size of 1 GB of compressed data per file
Into files of maximum size of 4 GB of compressed data per file
Answer:
AExplanation:
When loading data into Snowflake, it is recommended to organize the data into single files with 100-250 MB of compressed data per file. This size range is optimal for parallel processing and can help in achieving better performance during data loading operations. References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which data types are supported by Snowflake when using semi-structured data? (Choose two.)
Options:
VARIANT
VARRAY
STRUCT
ARRAY
QUEUE
Answer:
A, DExplanation:
Snowflake supports the VARIANT and ARRAY data types for semi-structured data. VARIANT can store values of any other type, including OBJECT and ARRAY, making it suitable for semi-structured data formats like JSON. ARRAY is used to store an ordered list of elements
A user created a transient table and made several changes to it over the course of several days. Three days after the table was created, the user would like to go back to the first version of the table.
How can this be accomplished?
Options:
Use Time Travel, as long as DATA_RETENTION_TIME_IN_DAYS was set to at least 3 days.
The transient table version cannot be retrieved after 24 hours.
Contact Snowflake Support to have the data retrieved from Fail-safe storage.
Use the FAIL_SAFE parameter for Time Travel to retrieve the data from Fail-safe storage.
Answer:
BExplanation:
Transient tables support a maximum DATA_RETENTION_TIME_IN_DAYS of 1 and have no Fail-safe period. Three days after creation, the first version of the table is therefore outside the Time Travel window and cannot be retrieved. References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which SQL commands, when committed, will consume a stream and advance the stream offset? (Choose two.)
Options:
UPDATE TABLE FROM STREAM
SELECT FROM STREAM
INSERT INTO TABLE SELECT FROM STREAM
ALTER TABLE AS SELECT FROM STREAM
BEGIN COMMIT
Answer:
A, CExplanation:
The SQL commands that consume a stream and advance the stream offset are those that result in changes to the data, such as UPDATE and INSERT operations. Specifically, ‘UPDATE TABLE FROM STREAM’ and ‘INSERT INTO TABLE SELECT FROM STREAM’ will consume the stream and move the offset forward, reflecting the changes made to the data.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
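As a sketch (object names are hypothetical): only committed DML that reads from the stream advances its offset, while a plain SELECT does not:

```sql
CREATE OR REPLACE STREAM orders_stream ON TABLE orders;

-- Does NOT advance the offset: a read-only query
SELECT COUNT(*) FROM orders_stream;

-- DOES advance the offset once the transaction commits
INSERT INTO orders_audit
SELECT * FROM orders_stream;
```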
In a Snowflake role hierarchy, what is the top-level role?
Options:
SYSADMIN
ORGADMIN
ACCOUNTADMIN
SECURITYADMIN
Answer:
CExplanation:
In a Snowflake role hierarchy, the top-level role is ACCOUNTADMIN. This role has the highest level of privileges and is capable of performing all administrative functions within the Snowflake account
Why does Snowflake recommend file sizes of 100-250 MB compressed when loading data?
Options:
Optimizes the virtual warehouse size and multi-cluster setting to economy mode
Allows a user to import the files in a sequential order
Increases the latency staging and accuracy when loading the data
Allows optimization of parallel operations
Answer:
DExplanation:
Snowflake recommends file sizes between 100-250 MB compressed when loading data to optimize parallel processing. Smaller, compressed files can be loaded in parallel, which maximizes the efficiency of the virtual warehouses and speeds up the data loading process
What is an advantage of using an explain plan instead of the query profiler to evaluate the performance of a query?
Options:
The explain plan output is available graphically.
An explain plan can be used to conduct performance analysis without executing a query.
An explain plan will handle queries with temporary tables and the query profiler will not.
An explain plan's output will display automatic data skew optimization information.
Answer:
BExplanation:
An explain plan is beneficial because it allows for the evaluation of how a query will be processed without the need to actually execute the query. This can help in understanding the query’s performance implications and potential bottlenecks without consuming resources that would be used if the query were run
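For example (table and column names assumed), EXPLAIN compiles the query and returns the plan without executing it:

```sql
EXPLAIN SELECT c_name
FROM customer
WHERE c_custkey = 12345;

-- Output-shape variants:
-- EXPLAIN USING TABULAR ... | EXPLAIN USING JSON ... | EXPLAIN USING TEXT ...
```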
Which of the following is an example of an operation that can be completed without requiring compute, assuming no queries have been executed previously?
Options:
SELECT SUM (ORDER_AMT) FROM SALES;
SELECT AVG(ORDER_QTY) FROM SALES;
SELECT MIN(ORDER_AMT) FROM SALES;
SELECT ORDER_AMT * ORDER_QTY FROM SALES;
Answer:
CExplanation:
Snowflake's cloud services layer keeps metadata statistics, including MIN, MAX, and row counts, for every micro-partition. A query such as SELECT MIN(ORDER_AMT) FROM SALES can therefore be answered directly from this metadata without starting a virtual warehouse. SUM and AVG aggregations and row-level expressions require compute, and with no prior queries there is no result cache to fall back on.
The Snowflake Search Optimization Services supports improved performance of which kind of query?
Options:
Queries against large tables where frequent DML occurs
Queries against tables larger than 1 TB
Selective point lookup queries
Queries against a subset of columns in a table
Answer:
CExplanation:
The Snowflake Search Optimization Service is designed to support improved performance for selective point lookup queries. These are queries that retrieve specific records from a database, often based on a unique identifier or a small set of criteria3.
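For instance (table and column names are illustrative), search optimization is enabled per table and then benefits selective lookups:

```sql
-- Enable search optimization on a large table
ALTER TABLE sales ADD SEARCH OPTIMIZATION;

-- A selective point lookup that can benefit from it
SELECT * FROM sales WHERE order_id = 'SO-000123';
```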
Snowflake supports the use of external stages with which cloud platforms? (Choose three.)
Options:
Amazon Web Services
Docker
IBM Cloud
Microsoft Azure Cloud
Google Cloud Platform
Oracle Cloud
Answer:
A, D, EExplanation:
Snowflake supports the use of external stages with Amazon Web Services (AWS), Microsoft Azure Cloud, and Google Cloud Platform (GCP). These platforms allow users to stage data externally and integrate with Snowflake for data loading operations
What is the default file size when unloading data from Snowflake using the COPY command?
Options:
5 MB
8 GB
16 MB
32 MB
Answer:
CExplanation:
The default file size when unloading data from Snowflake using the COPY INTO <location> command is 16 MB: the MAX_FILE_SIZE copy option defaults to 16777216 bytes. Larger or smaller output files can be produced by setting MAX_FILE_SIZE explicitly in the COPY INTO statement.
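For instance (stage and table names are placeholders), the 16 MB default can be overridden with the MAX_FILE_SIZE copy option:

```sql
COPY INTO @my_stage/unload/
FROM my_table
FILE_FORMAT = (TYPE = CSV)
MAX_FILE_SIZE = 104857600;  -- 100 MB instead of the 16777216-byte default
```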
A table needs to be loaded. The input data is in JSON format and is a concatenation of multiple JSON documents. The file size is 3 GB. A warehouse size small is being used. The following COPY INTO command was executed:
COPY INTO SAMPLE FROM @~/SAMPLE.JSON (TYPE=JSON)
The load failed with this error:
Max LOB size (16777216) exceeded, actual size of parsed column is 17894470.
How can this issue be resolved?
Options:
Compress the file and load the compressed file.
Split the file into multiple files in the recommended size range (100 MB - 250 MB).
Use a larger-sized warehouse.
Set STRIP_OUTER_ARRAY=TRUE in the COPY INTO command.
Answer:
BExplanation:
The error “Max LOB size (16777216) exceeded” indicates that the size of the parsed column exceeds the maximum size allowed for a single column value in Snowflake, which is 16 MB. To resolve this issue, the file should be split into multiple smaller files that are within the recommended size range of 100 MB to 250 MB. This will ensure that each JSON document within the files is smaller than the maximum LOB size allowed. Compressing the file, using a larger-sized warehouse, or setting STRIP_OUTER_ARRAY=TRUE will not resolve the issue of the column size exceeding the maximum allowed. References: COPY INTO Error during Structured Data Load: “Max LOB size (16777216) exceeded…”
Which of the following statements describe features of Snowflake data caching? (Choose two.)
Options:
When a virtual warehouse is suspended, the data cache is saved on the remote storage layer.
When the data cache is full, the least-recently used data will be cleared to make room.
A user can only access their own queries from the query result cache.
A user must set USE_METADATA_CACHE to TRUE to use the metadata cache in queries.
The RESULT_SCAN table function can access and filter the contents of the query result cache.
Answer:
B, EExplanation:
Snowflake’s data caching features include the ability to clear the least-recently used data when the data cache is full to make room for new data. Additionally, the RESULT_SCAN table function can access and filter the contents of the query result cache, allowing users to retrieve and work with the results of previous queries. The other statements are incorrect: the data cache is not saved on the remote storage layer when a virtual warehouse is suspended, users can access queries from the query result cache that were run by other users, and there is no setting called USE_METADATA_CACHE in Snowflake. References: Caching in the Snowflake Cloud Data Platform, Optimizing the warehouse cache
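A short sketch (table and column names are made up) of post-processing a cached result with RESULT_SCAN:

```sql
-- Run a query, then filter its cached result set
SELECT order_id, amount FROM orders WHERE region = 'EMEA';

SELECT order_id
FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()))
WHERE amount > 100;
```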
Which file formats are supported for unloading data from Snowflake? (Choose two.)
Options:
Avro
JSON
ORC
XML
Delimited (CSV, TSV, etc.)
Answer:
B, EExplanation:
Snowflake supports unloading data in JSON and delimited file formats such as CSV and TSV. These formats are commonly used for data interchange and are supported by Snowflake for unloading operations
What is the minimum Snowflake edition that has column-level security enabled?
Options:
Standard
Enterprise
Business Critical
Virtual Private Snowflake
Answer:
BExplanation:
Column-level security, which allows for the application of masking policies to columns in tables or views, is available starting from the Enterprise edition of Snowflake1.
References = [COF-C02] SnowPro Core Certification Exam Study Guide, Snowflake Documentation1
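Column-level security is applied through masking policies; a minimal sketch (role, table, and column names are hypothetical):

```sql
-- Define a masking policy
CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
    CASE
        WHEN CURRENT_ROLE() IN ('PII_READER') THEN val
        ELSE '*** MASKED ***'
    END;

-- Attach it to a column
ALTER TABLE users MODIFY COLUMN email SET MASKING POLICY email_mask;
```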
When should a multi-cluster warehouse be used in auto-scaling mode?
Options:
When it is unknown how much compute power is needed
If the select statement contains a large number of temporary tables or Common Table Expressions (CTEs)
If the runtime of the executed query is very slow
When a large number of concurrent queries are run on the same warehouse
Answer:
DExplanation:
A multi-cluster warehouse should be used in auto-scaling mode when there is a need to handle a large number of concurrent queries. Auto-scaling allows Snowflake to automatically add or remove compute clusters to balance the load, ensuring that performance remains consistent during varying levels of demand
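A warehouse runs in auto-scaling mode when its minimum and maximum cluster counts differ; a sketch (warehouse name is illustrative):

```sql
-- Auto-scaling mode: MIN_CLUSTER_COUNT < MAX_CLUSTER_COUNT
CREATE WAREHOUSE reporting_wh
    WAREHOUSE_SIZE    = 'MEDIUM'
    MIN_CLUSTER_COUNT = 1
    MAX_CLUSTER_COUNT = 4
    SCALING_POLICY    = 'STANDARD';  -- favors starting clusters to reduce queuing
```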
What happens to historical data when the retention period for an object ends?
Options:
The data is cloned into a historical object.
The data moves to Fail-safe
Time Travel on the historical data is dropped.
The object containing the historical data is dropped.
Answer:
CExplanation:
When the retention period for an object ends in Snowflake, Time Travel on the historical data is dropped (C). This means that the ability to access historical data via Time Travel is no longer available once the retention period has expired2.
Which Snowflake architectural layer is responsible for a query execution plan?
Options:
Compute
Data storage
Cloud services
Cloud provider
Answer:
CExplanation:
In Snowflake’s architecture, the Cloud Services layer is responsible for generating the query execution plan. This layer handles all the coordination, optimization, and management tasks, including query parsing, optimization, and compilation into an execution plan that can be processed by the Compute layer.
What actions will prevent leveraging of the ResultSet cache? (Choose two.)
Options:
Removing a column from the query SELECT list
Stopping the virtual warehouse that the query is running against
Clustering of the data used by the query
Executing the RESULTS_SCAN() table function
Changing a column that is not in the cached query
Answer:
B, DExplanation:
The ResultSet cache is leveraged to quickly return results for repeated queries. Actions that prevent leveraging this cache include stopping the virtual warehouse that the query is running against (B) and executing the RESULTS_SCAN() table function (D). Stopping the warehouse clears the local disk cache, including the ResultSet cache1. The RESULTS_SCAN() function is used to retrieve the result of a previously executed query, which bypasses the need for the ResultSet cache.
Which of the following are characteristics of Snowflake virtual warehouses? (Choose two.)
Options:
Auto-resume applies only to the last warehouse that was started in a multi-cluster warehouse.
The ability to auto-suspend a warehouse is only available in the Enterprise edition or above.
SnowSQL supports both a configuration file and a command line option for specifying a default warehouse.
A user cannot specify a default warehouse when using the ODBC driver.
The default virtual warehouse size can be changed at any time.
Answer:
C, EExplanation:
Snowflake virtual warehouses support a configuration file and command line options in SnowSQL to specify a default warehouse, which is characteristic C. Additionally, the size of a virtual warehouse can be changed at any time, which is characteristic E. These features provide flexibility and ease of use in managing compute resources2.
References = [COF-C02] SnowPro Core Certification Exam Study Guide, Snowflake Documentation
Which services does the Snowflake Cloud Services layer manage? (Choose two.)
Options:
Compute resources
Query execution
Authentication
Data storage
Metadata
Answer:
C, EExplanation:
The Snowflake Cloud Services layer manages various services, including authentication and metadata management. This layer ties together all the different components of Snowflake to process user requests, manage sessions, and control access3.
A user created a new worksheet within the Snowsight UI and wants to share this with teammates.
How can this worksheet be shared?
Options:
Create a zero-copy clone of the worksheet and grant permissions to teammates
Create a private Data Exchange so that any teammate can use the worksheet
Share the worksheet with teammates within Snowsight
Create a database and grant all permissions to teammates
Answer:
CExplanation:
Worksheets in Snowsight can be shared directly with other Snowflake users within the same account. This feature allows for collaboration and sharing of SQL queries or Python code, as well as other data manipulation tasks1.
When publishing a Snowflake Data Marketplace listing into a remote region what should be taken into consideration? (Choose two.)
Options:
There is no need to have a Snowflake account in the target region, a share will be created for each user.
The listing is replicated into all selected regions automatically, the data is not.
The user must have the ORGADMIN role available in at least one account to link accounts for replication.
Shares attached to listings in remote regions can be viewed from any account in an organization.
For a standard listing the user can wait until the first customer requests the data before replicating it to the target region.
Answer:
B, CExplanation:
When publishing a Snowflake Data Marketplace listing into a remote region, it’s important to note that while the listing is replicated into all selected regions automatically, the data itself is not. Therefore, the data must be replicated separately. Additionally, the user must have the ORGADMIN role in at least one account to manage the replication of accounts1.
A column named "Data" contains VARIANT data and stores values as follows:
How will Snowflake extract the employee's name from the column data?
Options:
Data:employee.name
DATA:employee.name
data:Employee.name
data:employee.name
Answer:
DExplanation:
In Snowflake, to extract a specific value from a VARIANT column, you use the column name followed by a colon and then the key. The keys are case-sensitive. Therefore, to extract the employee’s name from the “Data” column, the correct syntax is data:employee.name.
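A brief sketch (table name is assumed) showing the path syntax with a cast to a SQL type:

```sql
-- Keys are case-sensitive; cast the VARIANT result to a SQL type
SELECT data:employee.name::STRING AS employee_name
FROM my_table;
```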
Which commands are restricted in owner's rights stored procedures? (Select TWO).
Options:
SHOW
MERGE
INSERT
DELETE
DESCRIBE
Answer:
A, EExplanation:
In owner’s rights stored procedures, certain commands are restricted to maintain security and integrity. The SHOW and DESCRIBE commands are limited because they can reveal metadata and structure information that may not be intended for all roles.
Which Snowflake function is maintained separately from the data and helps to support features such as Time Travel, Secure Data Sharing, and pruning?
Options:
Column compression
Data clustering
Micro-partitioning
Metadata management
Answer:
DExplanation:
Snowflake maintains metadata separately from the table data itself, including statistics such as the value ranges and row counts for each micro-partition. This metadata is what supports features such as Time Travel, Secure Data Sharing, and partition pruning, since these operations can be resolved against metadata without rewriting the underlying data files1.
A user wants to access files stored in a stage without authenticating into Snowflake. Which type of URL should be used?
Options:
File URL
Staged URL
Scoped URL
Pre-signed URL
Answer:
DExplanation:
A Pre-signed URL should be used to access files stored in a Snowflake stage without requiring authentication into Snowflake. Pre-signed URLs are simple HTTPS URLs that provide temporary access to a file via a web browser, using a pre-signed access token. The expiration time for the access token is configurable, and this type of URL allows users or applications to directly access or download the files without needing to authenticate into Snowflake5.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
What is the purpose of a Query Profile?
Options:
To profile how many times a particular query was executed and analyze its usage statistics over time.
To profile a particular query to understand the mechanics of the query, its behavior, and performance.
To profile the user and/or executing role of a query and all privileges and policies applied on the objects within the query.
To profile which queries are running in each warehouse and identify proper warehouse utilization and sizing for better performance and cost balancing.
Answer:
BExplanation:
The purpose of a Query Profile is to provide a detailed analysis of a particular query’s execution plan, including the mechanics, behavior, and performance. It helps in identifying potential performance bottlenecks and areas for optimization
At what level is the MIN_DATA_RETENTION_TIME_IN_DAYS parameter set?
Options:
Account
Database
Schema
Table
Answer:
AExplanation:
The MIN_DATA_RETENTION_TIME_IN_DAYS parameter is set at the account level. This parameter determines the minimum number of days Snowflake retains historical data for Time Travel operations
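Being an account-level parameter, it is set with ALTER ACCOUNT (the value below is illustrative):

```sql
-- Account-level parameter; requires the ACCOUNTADMIN role
ALTER ACCOUNT SET MIN_DATA_RETENTION_TIME_IN_DAYS = 7;
```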
What is the primary purpose of a directory table in Snowflake?
Options:
To store actual data from external stages
To automatically expire file URLs for security
To manage user privileges and access control
To store file-level metadata about data files in a stage
Answer:
DExplanation:
A directory table in Snowflake is used to store file-level metadata about the data files in a stage. It is conceptually similar to an external table and provides information such as file size, last modified timestamp, and file URL. References: [COF-C02] SnowPro Core Certification Exam Study Guide
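A minimal sketch (stage name is made up) of enabling and querying a directory table:

```sql
-- Enable a directory table on a stage, refresh it, then query the metadata
CREATE OR REPLACE STAGE docs_stage DIRECTORY = (ENABLE = TRUE);
ALTER STAGE docs_stage REFRESH;

SELECT relative_path, size, last_modified, file_url
FROM DIRECTORY(@docs_stage);
```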
What is the purpose of the STRIP_NULL_VALUES file format option when loading semi-structured data files into Snowflake?
Options:
It removes null values from all columns in the data.
It converts null values to empty strings during loading.
It skips rows with null values during the loading process.
It removes object or array elements containing null values.
Answer:
DExplanation:
The STRIP_NULL_VALUES file format option, when set to TRUE, removes object or array elements that contain null values during the loading process of semi-structured data files into Snowflake. This ensures that the data loaded into Snowflake tables does not contain these null elements, which can be useful when the “null” values in files indicate missing values and have no other special meaning2.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
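A sketch (format, table, and stage names are hypothetical) of setting the option in a named file format:

```sql
-- File format that drops object/array elements whose value is null
CREATE OR REPLACE FILE FORMAT json_no_nulls
    TYPE = JSON
    STRIP_NULL_VALUES = TRUE;

COPY INTO my_table FROM @my_stage/data/
FILE_FORMAT = (FORMAT_NAME = 'json_no_nulls');
```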
What is a directory table in Snowflake?
Options:
A separate database object that is used to store file-level metadata
An object layered on a stage that is used to store file-level metadata
A database object with grantable privileges for unstructured data tasks
A Snowflake table specifically designed for storing unstructured files
Answer:
BExplanation:
A directory table in Snowflake is an object layered on a stage that is used to store file-level metadata. It is not a separate database object but is conceptually similar to an external table because it stores metadata about the data files in the stage5.
How can a Snowflake user traverse semi-structured data?
Options:
Insert a colon (:) between the VARIANT column name and any first-level element.
Insert a colon (:) between the VARIANT column name and any second-level element.
Insert a double colon (::) between the VARIANT column name and any first-level element.
Insert a double colon (::) between the VARIANT column name and any second-level element.
Answer:
AExplanation:
To traverse semi-structured data in Snowflake, a user can insert a colon (:) between the VARIANT column name and any first-level element. This path syntax is used to retrieve elements in a VARIANT column4.
What will prevent unauthorized access to a Snowflake account from an unknown source?
Options:
Network policy
End-to-end encryption
Multi-Factor Authentication (MFA)
Role-Based Access Control (RBAC)
Answer:
AExplanation:
A network policy in Snowflake is used to restrict access to the Snowflake account from unauthorized or unknown sources. It allows administrators to specify allowed IP address ranges, thus preventing access from any IP addresses not listed in the policy1.
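A minimal sketch (policy name and IP ranges are illustrative); as noted earlier in this guide, blocked entries take precedence when an address appears in both lists:

```sql
-- Blocked entries win over allowed entries when both match
CREATE NETWORK POLICY corp_only
    ALLOWED_IP_LIST = ('203.0.113.0/24')
    BLOCKED_IP_LIST = ('203.0.113.99');

ALTER ACCOUNT SET NETWORK_POLICY = corp_only;
```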
What is the relationship between a Query Profile and a virtual warehouse?
Options:
A Query Profile can help users right-size virtual warehouses.
A Query Profile defines the hardware specifications of the virtual warehouse.
A Query Profile can help determine the number of virtual warehouses available.
A Query Profile automatically scales the virtual warehouse based on the query complexity.
Answer:
AExplanation:
A Query Profile provides detailed execution information for a query, which can be used to analyze the performance and behavior of queries. This information can help users optimize and right-size their virtual warehouses for better efficiency. References: [COF-C02] SnowPro Core Certification Exam Study Guide
What information is found within the Statistic output in the Query Profile Overview?
Options:
Operator tree
Table pruning
Most expensive nodes
Nodes by execution time
Answer:
CExplanation:
The Statistic output in the Query Profile Overview of Snowflake provides detailed insights into the performance of different parts of the query. Specifically, it highlights the "Most expensive nodes," which are the operations or steps within the query execution that consume the most resources, such as CPU and memory. Identifying these nodes helps in pinpointing performance bottlenecks and optimizing query execution by focusing efforts on the most resource-intensive parts of the query.
References:
- Snowflake Documentation on Query Profile Overview: It details the components of the profile overview, emphasizing how to interpret the statistics section to improve query performance by understanding which nodes are most resource-intensive.
QUESTION NO: 582
How do secure views compare to non-secure views in Snowflake?
A. Secure views execute slowly compared to non-secure views.
B. Non-secure views are preferred over secure views when sharing data.
C. Secure views are similar to materialized views in that they are the most performant.
D. There are no performance differences between secure and non-secure views.
Answer: D
Secure views and non-secure views in Snowflake are differentiated primarily by their handling of data access and security rather than performance characteristics. A secure view enforces row-level security and ensures that the view definition is hidden from the users. However, in terms of performance, secure views do not inherently execute slower or faster than non-secure views. The performance of both types of views depends more on other factors such as underlying table design, query complexity, and system workload rather than the security features embedded in the views themselves.
References:
- Snowflake Documentation on Views: This section provides an overview of both secure and non-secure views, clarifying that the main difference lies in security features rather than performance, thus supporting the assertion that there are no inherent performance differences.
QUESTION NO: 583
When using SnowSQL, which configuration options are required when unloading data from a SQL query run on a local machine? {Select TWO).
A. echo
B. quiet
C. output_file
D. output_format
E. force_put_overwrite
Answer: C, D
When unloading data from SnowSQL (Snowflake's command-line client), to a file on a local machine, you need to specify certain configuration options to determine how and where the data should be outputted. The correct configuration options required are:
- C. output_file: This configuration option specifies the file path where the output from the query should be stored. It is essential for directing the results of your SQL query into a local file, rather than just displaying it on the screen.
- D. output_format: This option determines the format of the output file (e.g., CSV, JSON, etc.). It is crucial for ensuring that the data is unloaded in a structured format that meets the requirements of downstream processes or systems.
These options are specified in the SnowSQL configuration file or directly in the SnowSQL command line. The configuration file allows users to set defaults and customize their usage of SnowSQL, including output preferences for unloading data.
References:
- Snowflake Documentation: SnowSQL (CLI Client) at Snowflake Documentation
- Snowflake Documentation: Configuring SnowSQL at Snowflake Documentation
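These options can also be set once in the SnowSQL configuration file (typically ~/.snowsql/config) rather than passed on every invocation; the values below are illustrative:

```ini
[options]
output_file   = /tmp/query_results.csv
output_format = csv
```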
QUESTION NO: 584
How can a Snowflake user post-process the result of SHOW FILE FORMATS?
A. Use the RESULT_SCAN function.
B. Create a CURSOR for the command.
C. Put it in the FROM clause in brackets.
D. Assign the command to RESULTSET.
Answer: A
The output of a SHOW command can be post-processed by querying its result set through the RESULT_SCAN table function. First run SHOW FILE FORMATS, then run:
SELECT * FROM TABLE(RESULT_SCAN(LAST_QUERY_ID(-1)));
QUESTION NO: 585
Which file function gives a user or application access to download unstructured data from a Snowflake stage?
A. BUILD_SCOPED_FILE_URL
B. BUILD_STAGE_FILE_URL
C. GET_PRESIGNED_URL
D. GET STAGE LOCATION
Answer: C
The function that provides access to download unstructured data from a Snowflake stage is:
- C. GET_PRESIGNED_URL: This function generates a presigned URL for a single file within a stage. The generated URL can be used to directly access or download the file without needing to go through Snowflake. This is particularly useful for unstructured data such as images, videos, or large text files, where direct access via a URL is needed outside of the Snowflake environment.
Example usage:
SELECT GET_PRESIGNED_URL('stage_name', 'file_path');
This function simplifies the process of securely sharing or accessing files stored in Snowflake stages with external systems or users.
References:
- Snowflake Documentation: GET_PRESIGNED_URL Function at Snowflake Documentation
QUESTION NO: 586
When should a multi-cluster virtual warehouse be used in Snowflake?
A. When queuing is delaying query execution on the warehouse
B. When there is significant disk spilling shown on the Query Profile
C. When dynamic vertical scaling is being used in the warehouse
D. When there are no concurrent queries running on the warehouse
Answer: A
A multi-cluster virtual warehouse in Snowflake is designed to handle high concurrency and workload demands by allowing multiple clusters of compute resources to operate simultaneously. The correct scenario to use a multi-cluster virtual warehouse is:
- A. When queuing is delaying query execution on the warehouse: Multi-cluster warehouses are ideal when the demand for compute resources exceeds the capacity of a single cluster, leading to query queuing. By enabling additional clusters, you can distribute the workload across multiple compute clusters, thereby reducing queuing and improving query performance.
This is especially useful in scenarios with fluctuating workloads or where it's critical to maintain low response times for a large number of concurrent queries.
References:
- Snowflake Documentation: Multi-Cluster Warehouses at Snowflake Documentation
QUESTION NO: 587
A JSON object is loaded into a column named data using a Snowflake variant datatype. The root node of the object is BIKE. The child attribute for this root node is BIKEID.
Which statement will allow the user to access BIKEID?
A. select data:BIKEID
B. select data.BIKE.BIKEID
C. select data:BIKE.BIKEID
D. select data:BIKE:BIKEID
Answer: C
In Snowflake, when accessing elements within a JSON object stored in a variant column, the correct syntax involves using a colon (:) to navigate the JSON structure. The BIKEID attribute, which is a child of the BIKE root node in the JSON object, is accessed using data:BIKE.BIKEID. This syntax correctly references the path through the JSON object, utilizing the colon for JSON field access and dot notation to traverse the hierarchy within the variant structure.References: Snowflake documentation on accessing semi-structured data, which outlines how to use the colon and dot notations for navigating JSON structures stored in variant columns.
QUESTION NO: 588
Which Snowflake tool is recommended for data batch processing?
A. SnowCD
B. SnowSQL
C. Snowsight
D. The Snowflake API
Answer: B
For data batch processing in Snowflake, the recommended tool is:
- B. SnowSQL: SnowSQL is the command-line client for Snowflake. It allows for executing SQL queries, scripts, and managing database objects. It's particularly suitable for batch processing tasks due to its ability to run SQL scripts that can execute multiple commands or queries in sequence, making it ideal for automated or scheduled tasks that require bulk data operations.
SnowSQL provides a flexible and powerful way to interact with Snowflake, supporting operations such as loading and unloading data, executing complex queries, and managing Snowflake objects from the command line or through scripts.
References:
- Snowflake Documentation: SnowSQL (CLI Client) at Snowflake Documentation
QUESTION NO: 589
How does the Snowflake search optimization service improve query performance?
A. It improves the performance of range searches.
B. It defines different clustering keys on the same source table.
C. It improves the performance of all queries running against a given table.
D. It improves the performance of equality searches.
Answer: D
The Snowflake Search Optimization Service is designed to enhance the performance of specific types of queries on large tables. The correct answer is:
- D. It improves the performance of equality searches: The service optimizes the performance of queries that use equality search conditions (e.g., WHERE column = value). It creates and maintains a search index on the table's columns, which significantly speeds up the retrieval of rows based on those equality search conditions.
This optimization is particularly beneficial for large tables where traditional scans might be inefficient for equality searches. By using the Search Optimization Service, Snowflake can leverage the search indexes to quickly locate the rows that match the search criteria without scanning the entire table.
References:
- Snowflake Documentation: Search Optimization Service at Snowflake Documentation
QUESTION NO: 590
What compute resource is used when loading data using Snowpipe?
A. Snowpipe uses virtual warehouses provided by the user.
B. Snowpipe uses an Apache Kafka server for its compute resources.
C. Snowpipe uses compute resources provided by Snowflake.
D. Snowpipe uses cloud platform compute resources provided by the user.
Answer: C
Snowpipe is Snowflake's continuous data ingestion service that allows for loading data as soon as it's available in a cloud storage stage. Snowpipe uses compute resources managed by Snowflake, separate from the virtual warehouses that users create for querying data. This means that Snowpipe operations do not consume the computational credits of user-created virtual warehouses, offering an efficient and cost-effective way to continuously load data into Snowflake.
References:
- Snowflake Documentation: Understanding Snowpipe
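A sketch (pipe, table, and stage names are made up) of a pipe definition; the wrapped COPY statement is executed by Snowflake-managed serverless compute, not a user warehouse:

```sql
-- A pipe wraps a COPY statement; Snowflake-managed compute executes it
CREATE PIPE raw_events_pipe
    AUTO_INGEST = TRUE
AS
COPY INTO raw_events
FROM @landing_stage
FILE_FORMAT = (TYPE = JSON);
```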
QUESTION NO: 591
What is one of the characteristics of data shares?
A. Data shares support full DML operations.
B. Data shares work by copying data to consumer accounts.
C. Data shares utilize secure views for sharing view objects.
D. Data shares are cloud agnostic and can cross regions by default.
Answer: C
Data sharing in Snowflake allows for live, read-only access to data across different Snowflake accounts without the need to copy or transfer the data. One of the characteristics of data shares is the ability to use secure views. Secure views are used within data shares to restrict the access of shared data, ensuring that consumers can only see the data that the provider intends to share, thereby preserving privacy and security.
References:
- Snowflake Documentation: Understanding Secure Views in Data Sharing
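A sketch (database, schema, view, and share names are hypothetical) of sharing a view; only SECURE views may be added to a share:

```sql
-- Only SECURE views may be granted to a share
CREATE SECURE VIEW sales_db.public.shared_orders AS
    SELECT order_id, region, amount
    FROM sales_db.public.orders
    WHERE region = 'EMEA';

CREATE SHARE orders_share;
GRANT USAGE ON DATABASE sales_db TO SHARE orders_share;
GRANT USAGE ON SCHEMA sales_db.public TO SHARE orders_share;
GRANT SELECT ON VIEW sales_db.public.shared_orders TO SHARE orders_share;
```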
QUESTION NO: 592
Which DDL/DML operation is allowed on an inbound data share?
A. ALTER TABLE
B. INSERT INTO
C. MERGE
D. SELECT
Answer: D
In Snowflake, an inbound data share refers to the data shared with an account by another account. The only DDL/DML operation allowed on an inbound data share is SELECT. This restriction ensures that the shared data remains read-only for the consuming account, maintaining the integrity and ownership of the data by the sharing account.
References:
- Snowflake Documentation: Using Data Shares
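On the consumer side, the read-only nature of an inbound share looks like this in practice (account and share names are hypothetical):

```sql
-- Consumer account: create a database from the inbound share, then query.
CREATE DATABASE shared_sales FROM SHARE provider_acct.orders_share;

SELECT COUNT(*) FROM shared_sales.public.shared_orders;  -- allowed

-- Any INSERT, UPDATE, DELETE, MERGE, or ALTER against objects in
-- shared_sales would fail, because the shared database is read-only.
```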
QUESTION NO: 593
In Snowflake, the use of federated authentication enables which Single Sign-On (SSO) workflow activities? (Select TWO).
A. Authorizing users
B. Initiating user sessions
C. Logging into Snowflake
D. Logging out of Snowflake
E. Performing role authentication
Answer: B, C
Federated authentication in Snowflake allows users to use their organizational credentials to log in to Snowflake, leveraging Single Sign-On (SSO). The key activities enabled by this setup include:
- B. Initiating user sessions: Federated authentication streamlines the process of starting a user session in Snowflake by using the existing authentication mechanisms of an organization.
- C. Logging into Snowflake: It simplifies the login process, allowing users to authenticate with their organization's identity provider instead of managing separate credentials for Snowflake.
References:
- Snowflake Documentation: Configuring Federated Authentication
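Federated authentication is configured in Snowflake with a SAML2 security integration. The sketch below uses entirely hypothetical identity provider values (issuer, SSO URL, and certificate placeholder):

```sql
-- Hypothetical IdP details; SAML2_X509_CERT takes the IdP's certificate
-- body (placeholder shown here, not a real certificate).
CREATE SECURITY INTEGRATION my_idp
  TYPE = SAML2
  ENABLED = TRUE
  SAML2_ISSUER = 'https://idp.example.com'
  SAML2_SSO_URL = 'https://idp.example.com/sso/saml'
  SAML2_PROVIDER = 'CUSTOM'
  SAML2_X509_CERT = 'MIIB...';
```

Once enabled, users see an SSO login option and authenticate through the organization's identity provider to initiate their Snowflake sessions.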
QUESTION NO: 594
A user wants to upload a file to an internal Snowflake stage using a put command.
Which tools and or connectors could be used to execute this command? (Select TWO).
A. SnowCD
B. SnowSQL
C. SQL API
D. Python connector
E. Snowsight worksheets
Answer: B, E
To upload a file to an internal Snowflake stage using a PUT command, you can use:
- B. SnowSQL: SnowSQL, the command-line client for Snowflake, supports the PUT command, allowing users to upload files directly to Snowflake stages from their local file systems.
- E. Snowsight worksheets: Snowsight, the web interface for Snowflake, provides a user-friendly environment for executing SQL commands, including the PUT command, through its interactive worksheets.
References:
- Snowflake Documentation: Loading Data into Snowflake using SnowSQL
- Snowflake Documentation: Using Snowsight
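For reference, a PUT upload from a SnowSQL session looks like the following; the local file path and stage name are hypothetical:

```sql
-- Run inside a SnowSQL session. PUT is executed by the client, which
-- uploads the local file to the named internal stage.
PUT file:///tmp/sales.csv @my_internal_stage AUTO_COMPRESS = TRUE;

-- Verify the upload (the file is gzip-compressed by default).
LIST @my_internal_stage;
```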
Which statements describe benefits of Snowflake's separation of compute and storage? (Select TWO).
Options:
The separation allows independent scaling of computing resources.
The separation ensures consistent data encryption across all virtual data warehouses.
The separation supports automatic conversion of semi-structured data into structured data for advanced data analysis.
Storage volume growth and compute usage growth can be tightly coupled.
Compute can be scaled up or down without the requirement to add more storage.
Answer:
A, E
Explanation:
Snowflake’s architecture allows for the independent scaling of compute resources, meaning you can increase or decrease the computational power as needed without affecting storage. This separation also means that storage can grow independently of compute usage, allowing for more flexible and cost-effective data management.
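This independence is visible in how a warehouse is resized: the operation changes compute capacity only and never touches the data in the centralized storage layer. A minimal sketch, with a hypothetical warehouse name:

```sql
-- Scale compute up for a heavy workload, then back down, without adding
-- or moving any storage (warehouse name is hypothetical).
ALTER WAREHOUSE analytics_wh SET WAREHOUSE_SIZE = 'XLARGE';
ALTER WAREHOUSE analytics_wh SET WAREHOUSE_SIZE = 'XSMALL';
```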
For which use cases is running a virtual warehouse required? (Select TWO).
Options:
When creating a table
When loading data into a table
When unloading data from a table
When executing a show command
When executing a list command
Answer:
B, C
Explanation:
Running a virtual warehouse is required when loading data into a table and when unloading data from a table because these operations require compute resources that are provided by the virtual warehouse.
How is unstructured data retrieved from data storage?
Options:
SQL functions like the GET command can be used to copy the unstructured data to a location on the client.
SQL functions can be used to create different types of URLs pointing to the unstructured data. These URLs can be used to download the data to a client.
SQL functions can be used to retrieve the data from the query results cache. When the query results are output to a client, the unstructured data will be output to the client as files.
SQL functions can call on different web extensions designed to display different types of files as a web page. The web extensions will allow the files to be downloaded to the client.
Answer:
B
Explanation:
Unstructured data stored in Snowflake can be retrieved by using SQL functions to generate URLs that point to the data. These URLs can then be used to download the data directly to a client.
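Snowflake provides several file functions for this purpose. A short sketch, with a hypothetical stage and file path:

```sql
-- Each function returns a URL a client can use to download the staged
-- file (stage name and path are hypothetical).
SELECT BUILD_SCOPED_FILE_URL(@images_stage, 'photos/cat.jpg');

-- A pre-signed URL that expires after the given number of seconds
-- (here, one hour) and needs no Snowflake authentication to use.
SELECT GET_PRESIGNED_URL(@images_stage, 'photos/cat.jpg', 3600);
```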
Which Snowflake command can be used to unload the result of a query to a single file?
Options:
Use COPY INTO <location> FROM <table>
Use COPY INTO <location> FROM <table> with MAX_FILE_SIZE
Use COPY INTO <location> FROM <table> with SINGLE = TRUE
Use COPY INTO <location> FROM <table> with SINGLE = TRUE and MAX_FILE_SIZE
Answer:
C
Explanation:
The Snowflake command to unload the result of a query to a single file is COPY INTO <location> with the copy option SINGLE = TRUE, which prevents Snowflake from splitting the output across multiple files.
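A minimal sketch of such an unload, using hypothetical stage, table, and column names:

```sql
-- SINGLE = TRUE forces one output file; MAX_FILE_SIZE may be raised
-- above the 16 MB default when the result is larger.
COPY INTO @my_stage/result.csv.gz
FROM (SELECT order_id, amount FROM orders WHERE amount > 100)
FILE_FORMAT = (TYPE = 'CSV' COMPRESSION = 'GZIP')
SINGLE = TRUE
MAX_FILE_SIZE = 4900000000;
```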
Which Snowflake database object can be used to track data changes made to table data?
Options:
Tag
Task
Stream
Stored procedure
Answer:
C
Explanation:
A Stream object in Snowflake is used for change data capture (CDC), which records data manipulation language (DML) changes made to tables, including inserts, updates, and deletes.
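A short sketch of a stream in use, with hypothetical table and stream names:

```sql
-- A stream on a table begins recording subsequent DML changes.
CREATE STREAM orders_stream ON TABLE orders;

-- After inserts/updates/deletes on ORDERS, querying the stream returns
-- the changed rows plus metadata columns such as METADATA$ACTION,
-- METADATA$ISUPDATE, and METADATA$ROW_ID describing each change.
SELECT * FROM orders_stream;
```

Consuming the stream in a DML statement (for example, inserting its rows into a target table) advances its offset so the same changes are not returned again.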
Which VALIDATION_MODE value will return the errors across the files specified in a COPY command, including files that were partially loaded during an earlier load?
Options:
RETURN_-1_ROWS
RETURN_n_ROWS
RETURN_ERRORS
RETURN_ALL_ERRORS
Answer:
D
Explanation:
The RETURN_ALL_ERRORS value in the VALIDATION_MODE option of the COPY command instructs Snowflake to validate the data files and return all errors across the specified files, including errors in files that were partially loaded during an earlier load because the ON_ERROR copy option was set to CONTINUE. RETURN_ERRORS, by contrast, does not include errors from those earlier partial loads. References: [COF-C02] SnowPro Core Certification Exam Study Guide
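The difference between the two error-returning modes can be sketched as follows, with hypothetical table and stage names; neither statement actually loads any data:

```sql
-- Validate staged files without loading them.
COPY INTO my_table FROM @my_stage
  VALIDATION_MODE = 'RETURN_ERRORS';      -- errors across the specified files

COPY INTO my_table FROM @my_stage
  VALIDATION_MODE = 'RETURN_ALL_ERRORS';  -- additionally includes errors from
                                          -- files partially loaded earlier
                                          -- with ON_ERROR = CONTINUE
```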
What does a Notify & Suspend action for a resource monitor do?
Options:
Send an alert notification to all account users who have notifications enabled.
Send an alert notification to all virtual warehouse users when thresholds over 100% have been met.
Send a notification to all account administrators who have notifications enabled, and suspend all assigned warehouses after all statements being executed by the warehouses have completed.
Send a notification to all account administrators who have notifications enabled, and suspend all assigned warehouses immediately, canceling any statements being executed by the warehouses.
Answer:
C
Explanation:
The Notify & Suspend action for a resource monitor in Snowflake sends a notification to all account administrators who have notifications enabled and suspends all assigned warehouses. However, the suspension only occurs after all currently running statements in the warehouses have been completed. References: [COF-C02] SnowPro Core Certification Exam Study Guide
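The three trigger actions can be seen side by side in a resource monitor definition. This is a sketch with hypothetical names and thresholds:

```sql
-- NOTIFY alerts administrators; SUSPEND (Notify & Suspend) waits for
-- running statements to finish; SUSPEND_IMMEDIATE cancels them.
CREATE RESOURCE MONITOR monthly_quota
  WITH CREDIT_QUOTA = 100
  FREQUENCY = MONTHLY
  START_TIMESTAMP = IMMEDIATELY
  TRIGGERS
    ON 80  PERCENT DO NOTIFY
    ON 100 PERCENT DO SUSPEND
    ON 110 PERCENT DO SUSPEND_IMMEDIATE;

-- Attach the monitor to a warehouse so the triggers apply to it.
ALTER WAREHOUSE analytics_wh SET RESOURCE_MONITOR = monthly_quota;
```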
What does SnowCD help Snowflake users to do?
Options:
Copy data into files.
Manage different databases and schemas.
Troubleshoot network connections to Snowflake.
Write SELECT queries to retrieve data from external tables.
Answer:
CExplanation:
SnowCD is a connectivity diagnostic tool that helps users troubleshoot network connections to Snowflake. It performs a series of checks to evaluate the network connection and provides suggestions for resolving any issues.
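A typical SnowCD workflow starts inside Snowflake: generate the list of hostnames and ports the account requires, save that JSON to a file, and run the SnowCD client against it. A sketch (the output filename is hypothetical):

```sql
-- Returns the JSON allowlist of endpoints the SnowCD client should test.
-- Save the result to a file, then run the client from a shell, e.g.:
--   snowcd allowlist.json
SELECT SYSTEM$ALLOWLIST();
```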
Which command is used to unload data from a Snowflake database table into one or more files in a Snowflake stage?
Options:
CREATE STAGE
COPY INTO <location>