SnowPro Core Certification Exam Questions and Answers
If a multi-cluster warehouse is using an economy scaling policy, how long will queries wait in the queue before another cluster is started?
Options:
1 minute
2 minutes
6 minutes
8 minutes
Answer: C
Explanation:
In a multi-cluster warehouse with an Economy scaling policy, Snowflake starts an additional cluster only if it estimates there is enough query load to keep the new cluster busy for at least 6 minutes. This conserves credits by letting queries queue briefly rather than immediately adding compute resources. References: [COF-C02] SnowPro Core Certification Exam Study Guide
A view is defined on a permanent table. A temporary table with the same name is created in the same schema as the referenced table. What will the query from the view return?
Options:
The data from the permanent table.
The data from the temporary table.
An error stating that the view could not be compiled.
An error stating that the referenced object could not be uniquely identified.
Answer: A
Explanation:
When a view is defined on a permanent table, and a temporary table with the same name is created in the same schema, the query from the view will return the data from the permanent table. Temporary tables are session-specific and do not affect the data returned by views defined on permanent tables.
What is the MAXIMUM size limit for a record of a VARIANT data type?
Options:
8MB
16MB
32MB
128MB
Answer: B
Explanation:
The maximum size limit for a record of a VARIANT data type in Snowflake is 16MB. This allows for storing semi-structured data types like JSON, Avro, ORC, Parquet, or XML within a single VARIANT column. References: Based on general database knowledge as of 2021.
Which operations are handled in the Cloud Services layer of Snowflake? (Select TWO).
Options:
Security
Data storage
Data visualization
Query computation
Metadata management
Answer: A, E
Explanation:
The Cloud Services layer in Snowflake is responsible for various services, including security (like authentication and authorization) and metadata management (like query parsing and optimization). References: Based on general cloud architecture knowledge as of 2021.
Which URL type allows users to access unstructured data without authenticating into Snowflake or passing an authorization token?
Options:
Pre-signed URL
Scoped URL
Signed URL
File URL
Answer: A
Explanation:
Pre-signed URLs in Snowflake allow users to access unstructured data without the need for authentication into Snowflake or passing an authorization token. These URLs are open and can be directly accessed or downloaded by any user or application, making them ideal for business intelligence applications or reporting tools that need to display unstructured file contents.
Which commands should be used to grant the privilege allowing a role to select data from all current tables and any tables that will be created later in a schema? (Choose two.)
Options:
grant USAGE on all tables in schema DB1.SCHEMA to role MYROLE;
grant USAGE on future tables in schema DB1.SCHEMA to role MYROLE;
grant SELECT on all tables in schema DB1.SCHEMA to role MYROLE;
grant SELECT on future tables in schema DB1.SCHEMA to role MYROLE;
grant SELECT on all tables in database DB1 to role MYROLE;
grant SELECT on future tables in database DB1 to role MYROLE;
Answer: C, D
Explanation:
To grant a role the privilege to select data from all current and future tables in a schema, two separate commands are needed. The first command grants the SELECT privilege on all existing tables within the schema, and the second command grants the SELECT privilege on all tables that will be created in the future within the same schema.
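For illustration, the two correct statements could be issued together as follows (DB1.SCHEMA and MYROLE are the placeholder names from the question):
GRANT SELECT ON ALL TABLES IN SCHEMA DB1.SCHEMA TO ROLE MYROLE; -- covers tables that already exist
GRANT SELECT ON FUTURE TABLES IN SCHEMA DB1.SCHEMA TO ROLE MYROLE; -- covers tables created later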
Which parameter prevents streams on tables from becoming stale?
Options:
MAX_DATA_EXTENSION_TIME_IN_DAYS
MIN_DATA_RETENTION_TIME_IN_DAYS
LOCK_TIMEOUT
STALE_AFTER
Answer: A
Explanation:
The parameter that prevents streams on tables from becoming stale is MAX_DATA_EXTENSION_TIME_IN_DAYS. This parameter specifies the maximum number of days for which Snowflake can extend the data retention period for the table to prevent streams on the table from becoming stale.
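As a brief sketch, the parameter can be set at the table level along these lines (mytable is a hypothetical table name):
ALTER TABLE mytable SET MAX_DATA_EXTENSION_TIME_IN_DAYS = 14; -- allow retention to be extended up to 14 days for streams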
Network policies can be applied to which of the following Snowflake objects? (Choose two.)
Options:
Roles
Databases
Warehouses
Users
Accounts
Answer: D, E
Explanation:
Network policies in Snowflake can be applied to users and accounts. These policies control inbound access to the Snowflake service and internal stages, allowing or denying access based on the originating network identifiers.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
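A minimal sketch of applying a network policy at both levels (the policy name and IP range are hypothetical):
CREATE NETWORK POLICY corp_policy ALLOWED_IP_LIST = ('192.168.1.0/24'); -- define allowed source addresses
ALTER ACCOUNT SET NETWORK_POLICY = corp_policy; -- apply to the whole account
ALTER USER user1 SET NETWORK_POLICY = corp_policy; -- or apply to an individual user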
Which of the following activities consume virtual warehouse credits in the Snowflake environment? (Choose two.)
Options:
Caching query results
Running EXPLAIN and SHOW commands
Cloning a database
Running a custom query
Running COPY commands
Answer: D, E
Explanation:
Running a custom query and running COPY commands consume virtual warehouse credits, because both require an active warehouse to perform the computation. Retrieving cached query results, running EXPLAIN and SHOW commands, and cloning a database are handled by the cloud services layer or are metadata-only operations, so they do not require a running virtual warehouse. References: [COF-C02] SnowPro Core Certification Exam Study Guide
If file format options are specified in multiple locations, the load operation selects which option FIRST to apply in order of precedence?
Options:
Table definition
Stage definition
Session level
COPY INTO TABLE statement
Answer: D
Explanation:
When file format options are specified in multiple locations, the load operation applies the options in the following order of precedence: first, the COPY INTO TABLE statement; second, the stage definition; and third, the table definition.
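For example, options supplied directly in the COPY statement override any format options defined on the stage or table (object names are hypothetical):
COPY INTO mytable FROM @mystage FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1); -- statement-level options take precedence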
Which Snowflake feature will allow small volumes of data to continuously load into Snowflake and will incrementally make the data available for analysis?
Options:
COPY INTO
CREATE PIPE
INSERT INTO
TABLE STREAM
Answer: B
Explanation:
The Snowflake feature that allows for small volumes of data to be continuously loaded into Snowflake and incrementally made available for analysis is Snowpipe, which is created with the CREATE PIPE command. Snowpipe is designed for near-real-time data loading, enabling data to be loaded as soon as it is available in the storage layer.
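A minimal Snowpipe sketch, assuming an external stage with cloud event notifications configured (mypipe, mytable, and mystage are hypothetical names):
CREATE PIPE mypipe AUTO_INGEST = TRUE AS
COPY INTO mytable FROM @mystage FILE_FORMAT = (TYPE = 'JSON'); -- loads files as they arrive in the stage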
How do Snowflake data providers share data that resides in different databases?
Options:
External tables
Secure views
Materialized views
User-Defined Functions (UDFs)
Answer: B
Explanation:
Snowflake data providers can share data residing in different databases through secure views. Secure views allow for the referencing of objects such as schemas, tables, and other views contained in one or more databases, as long as those databases belong to the same account. This enables providers to share data securely and efficiently with consumers. References: [COF-C02] SnowPro Core Certification Exam Study Guide
For non-materialized views, what column in Information Schema and Account Usage identifies whether a view is secure or not?
Options:
CHECK_OPTION
IS_SECURE
IS_UPDATEABLE
TABLE_NAME
Answer: B
Explanation:
In the Information Schema and Account Usage views, the column that identifies whether a view is secure or not is IS_SECURE.
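For example, the column can be inspected with a query along these lines (MYDB and MYSCHEMA are hypothetical names):
SELECT table_name, is_secure FROM mydb.information_schema.views WHERE table_schema = 'MYSCHEMA'; -- YES for secure views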
Which of the following are handled by the cloud services layer of the Snowflake architecture? (Choose two.)
Options:
Query execution
Data loading
Time Travel data
Security
Authentication and access control
Answer: D, E
Explanation:
The cloud services layer of the Snowflake architecture handles various aspects including security functions, authentication of user sessions, and access control, ensuring that only authorized users can access the data and services.
If a virtual warehouse runs for 61 seconds, shuts down, and then restarts and runs for 30 seconds, for how many seconds is it billed?
Options:
60
91
120
121
Answer: D
Explanation:
Snowflake’s billing for virtual warehouses is per-second, with a minimum of 60 seconds each time the warehouse is started or resumed. Therefore, if a warehouse runs for 61 seconds, it is billed for 61 seconds. If it is then shut down and restarted, running for an additional 30 seconds, it is billed for another 60 seconds (the minimum charge for a restart), totaling 121 seconds.
What are benefits of using Snowpark with Snowflake? (Select TWO).
Options:
Snowpark uses a Spark engine to generate optimized SQL query plans.
Snowpark automatically sets up Spark within Snowflake virtual warehouses.
Snowpark does not require that a separate cluster be running outside of Snowflake.
Snowpark allows users to run existing Spark code on virtual warehouses without the need to reconfigure the code.
Snowpark executes as much work as possible in the source databases for all operations including User-Defined Functions (UDFs).
Answer: C, D
Explanation:
Snowpark brings data programmability to Snowflake, enabling developers to write code in familiar languages like Scala, Java, and Python. The code executes directly within Snowflake’s virtual warehouses, eliminating the need for a separate cluster running outside of Snowflake. Additionally, because the Snowpark DataFrame API is similar to Spark’s, users can often reuse existing Spark code with minimal changes.
By definition, a secure view is exposed only to users with what privilege?
Options:
IMPORT SHARE
OWNERSHIP
REFERENCES
USAGE
Answer: B
Explanation:
A secure view in Snowflake exposes its definition only to users whose role holds the OWNERSHIP privilege on the view. This ensures that the view definition and its internal optimizations are hidden from non-owners, even those who are authorized to query the view.
Which statement accurately describes a characteristic of a materialized view?
Options:
A materialized view can query only a single table.
Data accessed through materialized views can be stale.
Materialized view refreshes need to be maintained by the user.
Querying a materialized view is slower than executing a query against the base table of the view.
Answer: A
Explanation:
In Snowflake, a materialized view can query only a single table; joins, including self-joins, are not supported. The other statements are inaccurate: Snowflake maintains materialized views automatically in the background and transparently retrieves any not-yet-refreshed portions from the base table, so data accessed through a materialized view is always current, refreshes are not maintained by the user, and querying the view is generally faster than running the equivalent query against the base table.
How can a Snowflake user optimize query performance in Snowflake? (Select TWO).
Options:
Create a view.
Cluster a table.
Enable the search optimization service.
Enable Time Travel.
Index a table.
Answer: B, C
Explanation:
To optimize query performance in Snowflake, users can cluster a table, which organizes the data in a way that minimizes the amount of data scanned during queries. Additionally, enabling the search optimization service can improve the performance of selective point lookup queries on large tables.
Which privilege must be granted to a share to allow secure views the ability to reference data in multiple databases?
Options:
CREATE_SHARE on the account
SHARE on databases and schemas
SELECT on tables used by the secure view
REFERENCE_USAGE on databases
Answer: D
Explanation:
To allow secure views the ability to reference data in multiple databases, the REFERENCE_USAGE privilege must be granted on each database that contains objects referenced by the secure view. This privilege is necessary before granting the SELECT privilege on a secure view to a share.
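A sketch of the grant sequence (db2, myshare, and the view name are hypothetical):
GRANT REFERENCE_USAGE ON DATABASE db2 TO SHARE myshare; -- required for each additional database the view references
GRANT SELECT ON VIEW db1.sch1.secure_v TO SHARE myshare; -- then share the secure view itself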
What privilege should a user be granted to change permissions for new objects in a managed access schema?
Options:
Grant the OWNERSHIP privilege on the schema.
Grant the OWNERSHIP privilege on the database.
Grant the MANAGE GRANTS global privilege.
Grant ALL privileges on the schema.
Answer: C
Explanation:
To change permissions for new objects in a managed access schema, a user should be granted the MANAGE GRANTS global privilege. This privilege allows the user to manage access control through grants on all securable objects within Snowflake. References: [COF-C02] SnowPro Core Certification Exam Study Guide
Two users share a virtual warehouse named wh_dev_01. When one of the users loads data, the other one experiences performance issues while querying data.
How does Snowflake recommend resolving this issue?
Options:
Scale up the existing warehouse.
Create separate warehouses for each user.
Create separate warehouses for each workload.
Stop loading and querying data at the same time.
Answer: C
Explanation:
Snowflake recommends creating separate warehouses for each workload to resolve performance issues caused by shared virtual warehouses. This ensures that the resources are not overutilized by one user’s activities, thereby affecting the performance of another user’s activities.
What technique does Snowflake use to limit the number of micro-partitions scanned by each query?
Options:
B-tree
Indexing
Map reduce
Pruning
Answer: D
Explanation:
Snowflake uses a technique called pruning to limit the number of micro-partitions scanned by each query. Pruning effectively filters out unnecessary micro-partitions based on the query’s filter conditions, which can significantly improve query performance by reducing the amount of data scanned.
Which query profile statistics help determine if efficient pruning is occurring? (Choose two.)
Options:
Bytes sent over network
Percentage scanned from cache
Partitions total
Bytes spilled to local storage
Partitions scanned
Answer: C, E
Explanation:
Efficient pruning in Snowflake is indicated by the number of partitions scanned compared to the partitions total. If only a small percentage of the total partitions are scanned, it suggests that the pruning process is effectively narrowing down the data, which can lead to improved query performance.
Query parsing and compilation occurs in which architecture layer of the Snowflake Cloud Data Platform?
Options:
Cloud services layer
Compute layer
Storage layer
Cloud agnostic layer
Answer: A
Explanation:
Query parsing and compilation in Snowflake occur within the cloud services layer. This layer is responsible for various management tasks, including query compilation and optimization.
How does a scoped URL expire?
Options:
When the data cache clears.
When the persisted query result period ends.
The encoded URL access is permanent.
The length of time is specified in the expiration_time argument.
Answer: B
Explanation:
A scoped URL expires when the persisted query result period ends, which is when the results cache expires. This is currently set to 24 hours.
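For illustration, a scoped URL is generated with the BUILD_SCOPED_FILE_URL function (the stage and file path are hypothetical):
SELECT BUILD_SCOPED_FILE_URL(@mystage, 'reports/q1.pdf'); -- URL expires when the persisted query result period ends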
What is the minimum Snowflake edition needed for database failover and fail-back between Snowflake accounts for business continuity and disaster recovery?
Options:
Standard
Enterprise
Business Critical
Virtual Private Snowflake
Answer: C
Explanation:
The minimum Snowflake edition required for database failover and fail-back between Snowflake accounts for business continuity and disaster recovery is the Business Critical edition. References: Snowflake Documentation.
Which role has the ability to create and manage users and roles?
Options:
ORGADMIN
USERADMIN
SYSADMIN
SECURITYADMIN
Answer: B
Explanation:
The USERADMIN role in Snowflake has the ability to create and manage users and roles within the Snowflake environment. This role is specifically dedicated to user and role management and creation.
Which Snowflake tool would be BEST to troubleshoot network connectivity?
Options:
SnowCLI
SnowUI
SnowSQL
SnowCD
Answer: D
Explanation:
SnowCD (Snowflake Connectivity Diagnostic Tool) is the best tool provided by Snowflake for troubleshooting network connectivity issues. It helps diagnose and resolve issues related to connecting to Snowflake services.
What statistical information in a Query Profile indicates that the query is too large to fit in memory? (Select TWO).
Options:
Bytes spilled to local cache.
Bytes spilled to local storage.
Bytes spilled to remote cache.
Bytes spilled to remote storage.
Bytes spilled to remote metastore.
Answer: B, D
Explanation:
In a Query Profile, the statistics that indicate a query is too large to fit in memory are "Bytes spilled to local storage" and "Bytes spilled to remote storage". These metrics show that the working data set of the query exceeded the memory available on the warehouse nodes, causing intermediate results to be written first to local disk and then, if necessary, to remote storage.
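As a sketch, these statistics can also be reviewed historically through the ACCOUNT_USAGE.QUERY_HISTORY view:
SELECT query_id, bytes_spilled_to_local_storage, bytes_spilled_to_remote_storage
FROM snowflake.account_usage.query_history
WHERE bytes_spilled_to_remote_storage > 0 -- queries whose working set exceeded local disk as well as memory
ORDER BY start_time DESC;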
A company needs to read multiple terabytes of data for an initial load as part of a Snowflake migration. The company can control the number and size of CSV extract files.
How does Snowflake recommend maximizing the load performance?
Options:
Use auto-ingest Snowpipes to load large files in a serverless model.
Produce the largest files possible, reducing the overall number of files to process.
Produce a larger number of smaller files and process the ingestion with size Small virtual warehouses.
Use an external tool to issue batched row-by-row inserts within BEGIN TRANSACTION and COMMIT commands.
Answer: C
Explanation:
Snowflake recommends splitting large data sets into a greater number of smaller files (roughly 100-250 MB compressed) so that the load can be distributed among the compute resources in an active warehouse and processed in parallel. Because load parallelism scales with the number of files rather than the warehouse size, even Small virtual warehouses can ingest the data efficiently. References: [COF-C02] SnowPro Core Certification Exam Study Guide
What action can a user take to address query concurrency issues?
Options:
Enable the query acceleration service.
Enable the search optimization service.
Add additional clusters to the virtual warehouse
Resize the virtual warehouse to a larger instance size.
Answer: C
Explanation:
To address query concurrency issues, a user can add additional clusters to the virtual warehouse. This allows for the distribution of queries across multiple clusters, reducing the load on any single cluster and improving overall query performance.
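A minimal sketch of enabling multi-cluster scaling on an existing warehouse (mywh is a hypothetical name; multi-cluster warehouses require Enterprise Edition or higher):
ALTER WAREHOUSE mywh SET MIN_CLUSTER_COUNT = 1 MAX_CLUSTER_COUNT = 4 SCALING_POLICY = 'STANDARD'; -- clusters start as concurrency demands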
What are advantages clones have over tables created with CREATE TABLE AS SELECT statement? (Choose two.)
Options:
The clone always stays in sync with the original table.
The clone has better query performance.
The clone is created almost instantly.
The clone will have time travel history from the original table.
The clone saves space by not duplicating storage.
Answer: C, E
Explanation:
Clones in Snowflake have the advantage of being created almost instantly and saving space by not duplicating storage. This is due to Snowflake’s zero-copy cloning feature, which allows for the creation of object clones without the additional storage costs typically associated with data duplication. Clones are independent of the original table and do not stay in sync with it, nor do they inherently have better query performance. A clone also does not carry over the Time Travel history of the source table; its history begins at the moment of cloning.
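For example, a zero-copy clone is created with a single statement (table names are hypothetical):
CREATE TABLE orders_clone CLONE orders; -- near-instant; shares existing micro-partitions until either table changes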
A user needs to create a materialized view in the schema MYDB.MYSCHEMA. Which statements will provide this access?
Options:
GRANT ROLE MYROLE TO USER USER1;
GRANT CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO ROLE MYROLE;
GRANT ROLE MYROLE TO USER USER1;
GRANT CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO USER USER1;
GRANT ROLE MYROLE TO USER USER1;
GRANT CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO USER1;
GRANT ROLE MYROLE TO USER USER1;
GRANT CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO MYROLE;
Answer: A
Explanation:
To provide a user with the necessary access to create a materialized view in a schema, the user must be granted a role that has the CREATE MATERIALIZED VIEW privilege on that schema. First, the role is granted to the user, and then the privilege is granted to the role.
What happens when a database is cloned?
Options:
It does not retain any privileges granted on the source object.
It replicates all granted privileges on the corresponding source objects.
It replicates all granted privileges on the corresponding child objects.
It replicates all granted privileges on the corresponding child schema objects.
Answer: A
Explanation:
When a database is cloned in Snowflake, it does not retain any privileges that were granted on the source object. The clone will need to have privileges reassigned as necessary for users to access it. References: [COF-C02] SnowPro Core Certification Exam Study Guide
What is a characteristic of the Snowflake Query Profile?
Options:
It can provide statistics on a maximum number of 100 queries per week.
It provides a graphic representation of the main components of the query processing.
It provides detailed statistics about which queries are using the greatest number of compute resources.
It can be used by third-party software using the Query Profile API.
Answer: B
Explanation:
The Snowflake Query Profile provides a graphic representation of the main components of the query processing. This visual aid helps users understand the execution details and performance characteristics of their queries.
How does Snowflake allow a data provider with an Azure account in central Canada to share data with a data consumer on AWS in Australia?
Options:
The data provider in Azure Central Canada can create a direct share to AWS Asia Pacific, if they are both in the same organization.
The data consumer and data provider can form a Data Exchange within the same organization to create a share from Azure Central Canada to AWS Asia Pacific.
The data provider uses the GET DATA workflow in the Snowflake Data Marketplace to create a share between Azure Central Canada and AWS Asia Pacific.
The data provider must replicate the database to a secondary account in AWS Asia Pacific within the same organization then create a share to the data consumer's account.
Answer: D
Explanation:
Snowflake allows data providers to share data with consumers across different cloud platforms and regions through database replication. The data provider must replicate the database to a secondary account in the target region or cloud platform within the same organization, and then create a share to the data consumer’s account. This process ensures that the data is available in the consumer’s region and on their cloud platform, facilitating seamless data sharing. References: Sharing data securely across regions and cloud platforms | Snowflake Documentation
A Snowflake user has been granted the CREATE DATA EXCHANGE LISTING privilege through their role.
Which tasks can this user now perform on the Data Exchange? (Select TWO).
Options:
Rename listings.
Delete provider profiles.
Modify listings properties.
Modify incoming listing access requests.
Submit listings for approval/publishing.
Answer: C, E
Explanation:
With the CREATE DATA EXCHANGE LISTING privilege, a Snowflake user can modify the properties of listings and submit them for approval or publishing on the Data Exchange. This allows them to manage and share data sets with consumers effectively. References: Based on general data exchange practices in cloud services as of 2021.
What transformations are supported in a CREATE PIPE ... AS COPY ... FROM (....) statement? (Select TWO.)
Options:
Data can be filtered by an optional where clause
Incoming data can be joined with other tables
Columns can be reordered
Columns can be omitted
Row level access can be defined
Answer: C, D
Explanation:
In a CREATE PIPE ... AS COPY ... FROM (....) statement, the supported transformations include reordering columns and omitting columns. The COPY command supports simple transformations through a SELECT over the staged files, allowing columns to be reordered, omitted, or cast to other data types. Filtering rows with a WHERE clause and joining with other tables are not supported in COPY transformations.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Simple Transformations During a Load
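A sketch of such a pipe, reordering two staged columns and omitting the rest (all object names are hypothetical):
CREATE PIPE mypipe AS
COPY INTO mytable (col_b, col_a)
FROM (SELECT $2, $1 FROM @mystage); -- $2 and $1 reference staged file columns by position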
Query compilation occurs in which architecture layer of the Snowflake Cloud Data Platform?
Options:
Compute layer
Storage layer
Cloud infrastructure layer
Cloud services layer
Answer: D
Explanation:
Query compilation in Snowflake occurs in the Cloud Services layer. This layer is responsible for coordinating and managing all aspects of the Snowflake service, including authentication, infrastructure management, metadata management, query parsing and optimization, and security. By handling these tasks, the Cloud Services layer enables the Compute layer to focus on executing queries, while the Storage layer is dedicated to persistently storing data.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Snowflake Architecture
Which Snowflake feature is used for both querying and restoring data?
Options:
Cluster keys
Time Travel
Fail-safe
Cloning
Answer: B
Explanation:
Snowflake’s Time Travel feature is used for both querying historical data in tables and restoring and cloning historical data in databases, schemas, and tables. It allows users to access historical data within a defined period (1 day by default, up to 90 days for Snowflake Enterprise Edition) and is a key feature for data recovery and management. References: [COF-C02] SnowPro Core Certification Exam Study Guide
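For illustration, both uses of Time Travel (the table name and offset are hypothetical):
SELECT * FROM orders AT (OFFSET => -3600); -- query the table as it was one hour ago
UNDROP TABLE orders; -- restore a dropped table within its retention period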
Which of the following is a valid source for an external stage when the Snowflake account is located on Microsoft Azure?
Options:
An FTP server with TLS encryption
An HTTPS server with WebDAV
A Google Cloud storage bucket
A Windows server file share on Azure
Answer: C
Explanation:
External stages in Snowflake must reference one of the supported cloud storage services: Amazon S3 buckets, Google Cloud Storage buckets, or Microsoft Azure containers. This is true regardless of the cloud platform that hosts the Snowflake account, so an account on Azure can still use a Google Cloud Storage bucket as an external stage. FTP servers, HTTPS servers with WebDAV, and Windows server file shares are not supported as external stage sources. References: [COF-C02] SnowPro Core Certification Exam Study Guide
What data is stored in the Snowflake storage layer? (Select TWO).
Options:
Snowflake parameters
Micro-partitions
Query history
Persisted query results
Standard and secure view results
Answer: B, D
Explanation:
The Snowflake storage layer is responsible for storing data in an optimized, compressed, columnar format. This includes micro-partitions, which are the fundamental storage units that contain the actual data stored in Snowflake. Additionally, persisted query results, which are the results of queries that have been materialized and stored for future use, are also kept within this layer. This design allows for efficient data retrieval and management within the Snowflake architecture.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Key Concepts & Architecture | Snowflake Documentation
Which Snowflake technique can be used to improve the performance of a query?
Options:
Clustering
Indexing
Fragmenting
Using INDEX_HINTS
Answer: A
Explanation:
Clustering is a technique used in Snowflake to improve the performance of queries. It involves organizing the data in a table into micro-partitions based on the values of one or more columns. This organization allows Snowflake to efficiently prune non-relevant micro-partitions during a query, which reduces the amount of data scanned and improves query performance.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Clustering
Which is the MINIMUM required Snowflake edition that a user must have if they want to use AWS/Azure Privatelink or Google Cloud Private Service Connect?
Options:
Standard
Premium
Enterprise
Business Critical
Answer: D
Explanation:
Support for private connectivity to the Snowflake service through AWS PrivateLink, Azure Private Link, or Google Cloud Private Service Connect requires the Business Critical edition (or higher).
Which Snowflake partner specializes in data catalog solutions?
Options:
Alation
DataRobot
dbt
Tableau
Answer: A
Explanation:
Alation is known for specializing in data catalog solutions and is a partner of Snowflake. Data catalog solutions are essential for organizations to effectively manage their metadata and make it easily accessible and understandable for users, which aligns with the capabilities provided by Alation.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake’s official documentation and partner listings
Which COPY INTO command option outputs the data into one file?
Options:
SINGLE=TRUE
MAX_FILE_NUMBER=1
FILE_NUMBER=1
MULTIPLE=FALSE
Answer: A
Explanation:
The COPY INTO <location> command can be configured to unload data into a single file by setting the copy option SINGLE=TRUE. MAX_FILE_NUMBER, FILE_NUMBER, and MULTIPLE are not valid copy options.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Data Unloading
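A minimal unload sketch using this option (stage path and table name are hypothetical):
COPY INTO @mystage/unload/data.csv.gz FROM mytable
FILE_FORMAT = (TYPE = 'CSV' COMPRESSION = 'GZIP')
SINGLE = TRUE; -- produce one output file instead of multiple parallel files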
What happens when an external or an internal stage is dropped? (Select TWO).
Options:
When dropping an external stage, the files are not removed and only the stage is dropped
When dropping an external stage, both the stage and the files within the stage are removed
When dropping an internal stage, the files are deleted with the stage and the files are recoverable
When dropping an internal stage, the files are deleted with the stage and the files are not recoverable
When dropping an internal stage, only selected files are deleted with the stage and are not recoverable
Answer: A, D
Explanation:
When an external stage is dropped in Snowflake, the reference to the external storage location is removed, but the actual files within the external storage (like Amazon S3, Google Cloud Storage, or Microsoft Azure) are not deleted. This means that the data remains intact in the external storage location, and only the stage object in Snowflake is removed.
On the other hand, when an internal stage is dropped, any files that were uploaded to the stage are deleted along with the stage itself. These files are not recoverable once the internal stage is dropped, as they are permanently removed from Snowflake’s storage.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Stages
True or False: Loading data into Snowflake requires that source data files be no larger than 16MB.
Options:
True
False
Answer: B
Explanation:
Snowflake does not require source data files to be no larger than 16MB. In fact, Snowflake recommends that for optimal load performance, data files should be roughly 100-250 MB in size when compressed. However, it is not recommended to load very large files (e.g., 100 GB or larger) due to potential delays and wasted credits if errors occur. Smaller files should be aggregated to minimize processing overhead, and larger files should be split to distribute the load among compute resources in an active warehouse.
References: Preparing your data files | Snowflake Documentation
What is the recommended file sizing for data loading using Snowpipe?
Options:
A compressed file size greater than 100 MB, and up to 250 MB
A compressed file size greater than 100 GB, and up to 250 GB
A compressed file size greater than 10 MB, and up to 100 MB
A compressed file size greater than 1 GB, and up to 2 GB
Answer: C
Explanation:
For data loading using Snowpipe, the recommended file size is a compressed file greater than 10 MB and up to 100 MB. This size range is optimal for Snowpipe’s continuous, micro-batch loading process, allowing for efficient and timely data ingestion without overwhelming the system with files that are too large or too small.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Snowpipe
Which Snowflake object enables loading data from files as soon as they are available in a cloud storage location?
Options:
Pipe
External stage
Task
Stream
Answer: A
Explanation:
In Snowflake, a Pipe is the object designed to enable the continuous, near-real-time loading of data from files as soon as they are available in a cloud storage location. Pipes use Snowflake’s COPY command to load data and can be associated with a Stage object to monitor for new files. When new data files appear in the stage, the pipe automatically loads the data into the target table.
References:
Snowflake Documentation on Pipes
SnowPro® Core Certification Study Guide
https://docs.snowflake.com/en/user-guide/data-load-snowpipe-intro.html
A virtual warehouse's auto-suspend and auto-resume settings apply to which of the following?
Options:
The primary cluster in the virtual warehouse
The entire virtual warehouse
The database in which the virtual warehouse resides
The Queries currently being run on the virtual warehouse
Answer: B
Explanation:
The auto-suspend and auto-resume settings in Snowflake apply to the entire virtual warehouse. These settings allow the warehouse to automatically suspend when it’s not in use, helping to save on compute costs. When queries or tasks are submitted to the warehouse, it can automatically resume operation. This functionality is designed to optimize resource usage and cost-efficiency.
References:
SnowPro Core Certification Exam Study Guide (as of 2021)
Snowflake documentation on virtual warehouses and their settings (as of 2021)
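For example, both settings are configured on the warehouse itself (mywh is a hypothetical name):
ALTER WAREHOUSE mywh SET AUTO_SUSPEND = 300 AUTO_RESUME = TRUE; -- suspend after 300 seconds of inactivity, resume on demand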
A user unloaded a Snowflake table called mytable to an internal stage called mystage.
Which command can be used to view the list of files that have been uploaded to the stage?
Options:
list @mytable;
list @%mytable;
list @%mystage;
list @mystage;
Answer: D
Explanation:
The command list @mystage; is used to view the list of files that have been uploaded to an internal stage in Snowflake. The list command displays the metadata for all files in the specified stage, which in this case is mystage. This command is particularly useful for verifying that files have been successfully unloaded from a Snowflake table to the stage and for managing the files within the stage.
References:
Snowflake Documentation on Stages
SnowPro® Core Certification Study Guide
What SQL command would be used to view all roles that were granted to the user USER1?
Options:
show grants to user USER1;
show grants of user USER1;
describe user USER1;
show grants on user USER1;
Answer: A
Explanation:
The correct command to view all roles granted to a specific user in Snowflake is SHOW GRANTS TO USER <user_name>, for example: show grants to user USER1;
What is a best practice after creating a custom role?
Options:
Create the custom role using the SYSADMIN role.
Assign the custom role to the SYSADMIN role
Assign the custom role to the PUBLIC role
Add _CUSTOM to all custom role names
Answer: B
Explanation:
Assigning the custom role to the SYSADMIN role is considered a best practice because it allows the SYSADMIN role to manage objects created by the custom role. This is important for maintaining proper access control and ensuring that the SYSADMIN role can perform necessary administrative tasks on objects created by users with the custom role.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Section 1.3 - SnowPro Core Certification Study Guide
Which data types does Snowflake support when querying semi-structured data? (Select TWO)
Options:
VARIANT
ARRAY
VARCHAR
XML
BLOB
Answer: A, B
Explanation:
Snowflake supports querying semi-structured data using specific data types that are capable of handling the flexibility and structure of such data. The data types supported for this purpose are:
A. VARIANT: This is a universal data type that can store values of any other type, including structured and semi-structured types. It is particularly useful for handling JSON, Avro, ORC, Parquet, and XML data formats.
B. ARRAY: An array is a list of elements that can be of any data type, including VARIANT, and is used to handle semi-structured data that is naturally represented as a list.
These data types are part of Snowflake’s built-in support for semi-structured data, allowing for the storage, querying, and analysis of data that does not fit into the traditional row-column format.
References:
Snowflake Documentation on Semi-Structured Data
[COF-C02] SnowPro Core Certification Exam Study Guide
What is the purpose of an External Function?
Options:
To call code that executes outside of Snowflake
To run a function in another Snowflake database
To share data in Snowflake with external parties
To ingest data from on-premises data sources
Answer: A
Explanation:
The purpose of an External Function in Snowflake is to call code that executes outside of the Snowflake environment. This allows Snowflake to interact with external services and leverage functionality that is not natively available within Snowflake, such as calling APIs or running custom code hosted on cloud services.
In which use cases does Snowflake apply egress charges?
Options:
Data sharing within a specific region
Query result retrieval
Database replication
Loading data into Snowflake
Answer: C
Explanation:
Snowflake applies egress charges in the case of database replication when data is transferred out of a Snowflake region to another region or cloud provider. This is because the data transfer incurs costs associated with moving data across different networks. Egress charges are not applied for data sharing within the same region, query result retrieval, or loading data into Snowflake, as these actions do not involve data transfer across regions.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Data Replication and Egress Charges
Which ACCOUNT_USAGE views are used to evaluate the details of dynamic data masking? (Select TWO)
Options:
ROLES
POLICY_REFERENCES
QUERY_HISTORY
RESOURCE_MONITORS
ACCESS_HISTORY
Answer: B, E
Explanation:
To evaluate the details of dynamic data masking, the POLICY_REFERENCES and ACCESS_HISTORY views in the ACCOUNT_USAGE schema are used. The POLICY_REFERENCES view provides information about the objects to which a masking policy is applied, and the ACCESS_HISTORY view contains details about access to the masked data, which can be used to audit and verify the application of dynamic data masking policies.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Dynamic Data Masking
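As a sketch, the masking-policy assignments can be listed roughly like this:
SELECT policy_name, ref_entity_name, ref_column_name
FROM snowflake.account_usage.policy_references; -- shows which columns each masking policy protects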
When unloading to a stage, which of the following is a recommended practice or approach?
Options:
Set SINGLE = TRUE for larger files
Use OBJECT_CONSTRUCT ( * ) when using Parquet
Avoid the use of the CAST function
Define an individual file format
Answer: D
Explanation:
When unloading data to a stage, it is recommended to define an individual (named) file format. This ensures that the data is unloaded in a consistent and expected format, which can be crucial for downstream processing and analysis.
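A sketch of unloading with a named file format (format, stage, and table names are hypothetical):
CREATE FILE FORMAT my_unload_format TYPE = 'CSV' FIELD_OPTIONALLY_ENCLOSED_BY = '"';
COPY INTO @mystage/out/ FROM mytable FILE_FORMAT = (FORMAT_NAME = 'my_unload_format'); -- consistent, reusable output format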
Which of the following describes how multiple Snowflake accounts in a single organization relate to various cloud providers?
Options:
Each Snowflake account can be hosted in a different cloud vendor and region.
Each Snowflake account must be hosted in a different cloud vendor and region
All Snowflake accounts must be hosted in the same cloud vendor and region
Each Snowflake account can be hosted in a different cloud vendor, but must be in the same region.
Answer: A
Explanation:
Snowflake’s architecture allows for flexibility in account hosting across different cloud vendors and regions. This means that within a single organization, different Snowflake accounts can be set up in various cloud environments, such as AWS, Azure, or GCP, and in different geographical regions. This allows organizations to leverage the global infrastructure of multiple cloud providers and optimize their data storage and computing needs based on regional requirements, data sovereignty laws, and other considerations.
https://docs.snowflake.com/en/user-guide/intro-regions.html
Will data cached in a warehouse be lost when the warehouse is resized?
Options:
Possibly, if the warehouse is resized to a smaller size and the cache no longer fits.
Yes, because the compute resource is replaced in its entirety with a new compute resource.
No, because the size of the cache is independent from the warehouse size.
Yes, because the new compute resource will no longer have access to the cache encryption key.
Answer: C
Explanation:
When a Snowflake virtual warehouse is resized, the data cached in the warehouse is not lost. This is because the cache is maintained independently of the warehouse size. Resizing a warehouse, whether scaling up or down, does not affect the cached data, ensuring that query performance is not impacted by such changes.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Virtual Warehouse Performance
What Snowflake features allow virtual warehouses to handle high concurrency workloads? (Select TWO)
Options:
The ability to scale up warehouses
The use of warehouse auto scaling
The ability to resize warehouses
Use of multi-clustered warehouses
The use of warehouse indexing
Answer: B, D
Explanation:
Snowflake’s architecture is designed to handle high concurrency workloads through several features, two of which are particularly effective:
B. The use of warehouse auto scaling: This feature allows Snowflake to automatically adjust the compute resources allocated to a virtual warehouse in response to the workload. If there is an increase in concurrent queries, Snowflake can scale up the resources to maintain performance.
D. Use of multi-clustered warehouses: Multi-clustered warehouses enable Snowflake to run multiple clusters of compute resources simultaneously. This allows for the distribution of queries across clusters, thereby reducing the load on any single cluster and improving the system’s ability to handle a high number of concurrent queries.
These features ensure that Snowflake can manage varying levels of demand without manual intervention, providing a seamless experience even during peak usage.
References:
Snowflake Documentation on Virtual Warehouses
SnowPro® Core Certification Study Guide
A company's security audit requires generating a report listing all Snowflake logins (e.g.. date and user) within the last 90 days. Which of the following statements will return the required information?
Options:
SELECT LAST_SUCCESS_LOGIN, LOGIN_NAME
FROM ACCOUNT_USAGE.USERS;
SELECT EVENT_TIMESTAMP, USER_NAME
FROM table(information_schema.login_history_by_user())
SELECT EVENT_TIMESTAMP, USER_NAME
FROM ACCOUNT_USAGE.ACCESS_HISTORY;
SELECT EVENT_TIMESTAMP, USER_NAME
FROM ACCOUNT_USAGE.LOGIN_HISTORY;
Answer: D
Explanation:
To generate a report listing all Snowflake logins within the last 90 days, the ACCOUNT_USAGE.LOGIN_HISTORY view should be used. This view records login attempts, including successful and unsuccessful logins, and retains 365 days of history; the INFORMATION_SCHEMA login_history table function only retains 7 days, which is too short for a 90-day audit.
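For example, the correct statement can be restricted to the last 90 days with a filter such as:
SELECT event_timestamp, user_name
FROM snowflake.account_usage.login_history
WHERE event_timestamp > DATEADD(day, -90, CURRENT_TIMESTAMP()); -- logins within the audit window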
Which of the following commands cannot be used within a reader account?
Options:
CREATE SHARE
ALTER WAREHOUSE
DROP ROLE
SHOW SCHEMAS
DESCRIBE TABLE
Answer: A
Explanation:
In Snowflake, a reader account is a type of account that is intended for consuming shared data rather than performing any data management or DDL operations. The CREATE SHARE command is used to share data from your account with another account, which is not a capability provided to reader accounts. Reader accounts are typically restricted from creating shares, as their primary purpose is to read shared data rather than to share it themselves.
References:
Snowflake Documentation on Reader Accounts
SnowPro® Core Certification Study Guide
A user has 10 files in a stage containing new customer data. The ingest operation completes with no errors, using the following command:
COPY INTO my_table FROM @my_stage;
The next day the user adds 10 files to the stage so that now the stage contains a mixture of new customer data and updates to the previous data. The user did not remove the 10 original files.
If the user runs the same copy into command what will happen?
Options:
All data from all of the files on the stage will be appended to the table
Only data about new customers from the new files will be appended to the table
The operation will fail with the error uncertain files in stage.
All data from only the newly-added files will be appended to the table.
Answer: D
Explanation:
Snowflake maintains load metadata for each table, which records the files that have already been loaded (for 64 days). When the same COPY INTO command is executed again, Snowflake skips the 10 original files because they are marked as already loaded, and ingests only the 10 newly added files. This default behavior prevents duplicate data from being loaded from the same files.
References:
Snowflake Documentation on Data Loading
SnowPro® Core Certification Study Guide
What is a machine learning and data science partner within the Snowflake Partner Ecosystem?
Options:
Informatica
Power BI
Adobe
DataRobot
Answer: D
Explanation:
DataRobot is recognized as a machine learning and data science partner within the Snowflake Partner Ecosystem. It provides an enterprise AI platform that enables users to build and deploy accurate predictive models quickly. As a partner, DataRobot integrates with Snowflake to enhance data science capabilities.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Machine Learning & Data Science Partners
What is a responsibility of Snowflake's virtual warehouses?
Options:
Infrastructure management
Metadata management
Query execution
Query parsing and optimization
Management of the storage layer
Answer: C
Explanation:
The primary responsibility of Snowflake’s virtual warehouses is to execute queries. Virtual warehouses are one of the key components of Snowflake’s architecture, providing the compute power required to perform data processing tasks such as running SQL queries, performing joins, aggregations, and other data manipulations.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Virtual Warehouses
What features does Snowflake Time Travel enable? (Select TWO)
Options:
Querying data-related objects that were created within the past 365 days
Restoring data-related objects that have been deleted within the past 90 days
Conducting point-in-time analysis for Bl reporting
Analyzing data usage/manipulation over all periods of time
Answer: B, C
Explanation:
Snowflake Time Travel is a powerful feature that allows users to access historical data within a defined period. It enables two key capabilities:
B. Restoring data-related objects that have been deleted within the past 90 days: Time Travel can be used to restore tables, schemas, and databases that have been accidentally or intentionally deleted within the Time Travel retention period.
C. Conducting point-in-time analysis for BI reporting: It allows users to query historical data as it appeared at a specific point in time within the Time Travel retention period, which is crucial for business intelligence and reporting purposes.
While Time Travel does allow querying of past data, it is limited to the retention period set for the Snowflake account, which is typically 1 day for standard accounts and can be extended up to 90 days for enterprise accounts. It does not enable querying or restoring objects created or deleted beyond the retention period, nor does it provide analysis over all periods of time.
References:
Snowflake Documentation on Time Travel
SnowPro® Core Certification Study Guide
Which of the following Snowflake features provide continuous data protection automatically? (Select TWO).
Options:
Internal stages
Incremental backups
Time Travel
Zero-copy clones
Fail-safe
Answer: C, E
Explanation:
Snowflake’s Continuous Data Protection (CDP) encompasses a set of features that help protect data stored in Snowflake against human error, malicious acts, and software failure. Time Travel allows users to access historical data (i.e., data that has been changed or deleted) for a defined period, enabling querying and restoring of data. Fail-safe is an additional layer of data protection that provides a recovery option in the event of significant data loss or corruption, which can only be performed by Snowflake.
References:
Continuous Data Protection | Snowflake Documentation
Data Storage Considerations | Snowflake Documentation
Snowflake SnowPro Core Certification Study Guide
Snowflake Data Cloud Glossary
When reviewing a query profile, what is a symptom that a query is too large to fit into the memory?
Options:
A single join node uses more than 50% of the query time
Partitions scanned is equal to partitions total
An AggregateOperator node is present
The query is spilling to remote storage
Answer: D
Explanation:
When a query in Snowflake is too large to fit into the available memory, it will start spilling to remote storage. This is an indication that the memory allocated for the query is insufficient for its execution, and as a result, Snowflake uses remote disk storage to handle the overflow. This spill to remote storage can lead to slower query performance due to the additional I/O operations required.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Query Profile
Snowpro Core Certification Exam Flashcards
Which of the following are best practice recommendations that should be considered when loading data into Snowflake? (Select TWO).
Options:
Load files that are approximately 25 MB or smaller.
Remove all dates and timestamps.
Load files that are approximately 100-250 MB (or larger)
Avoid using embedded characters such as commas for numeric data types
Remove semi-structured data types
Answer: C, D
Explanation:
When loading data into Snowflake, it is recommended to:
C. Load files that are approximately 100-250 MB (or larger): This size is optimal for parallel processing and can help to maximize throughput. Smaller files can lead to overhead that outweighs the actual data processing time.
D. Avoid using embedded characters such as commas for numeric data types: Embedded characters can cause issues during data loading as they may be interpreted incorrectly. It’s best to clean the data of such characters to ensure accurate and efficient data loading.
These best practices are designed to optimize the data loading process, ensuring that data is loaded quickly and accurately into Snowflake.
References:
Snowflake Documentation on Data Loading Considerations
[COF-C02] SnowPro Core Certification Exam Study Guide
A developer is granted ownership of a table that has a masking policy. The developer's role is not able to see the masked data. Will the developer be able to modify the table to read the masked data?
Options:
Yes, because a table owner has full control and can unset masking policies.
Yes, because masking policies only apply to cloned tables.
No, because masking policies must always reference specific access roles.
No, because ownership of a table does not include the ability to change masking policies
Answer: D
Explanation:
Even if a developer is granted ownership of a table with a masking policy, they will not be able to modify the table to read the masked data if their role does not have the necessary permissions. Ownership of a table does not automatically confer the ability to alter masking policies, which are designed to protect sensitive data and require specific privileges (such as the APPLY MASKING POLICY privilege or ownership of the policy) to modify.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Masking Policies
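A minimal masking-policy sketch for context (policy, role, table, and column names are hypothetical):
CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
CASE WHEN CURRENT_ROLE() IN ('ANALYST_FULL') THEN val ELSE '*** MASKED ***' END; -- only the listed role sees raw values
ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY email_mask; -- table ownership alone does not bypass this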
When is the result set cache no longer available? (Select TWO)
Options:
When another warehouse is used to execute the query
When another user executes the query
When the underlying data has changed
When the warehouse used to execute the query is suspended
When it has been 24 hours since the last query
Answer: C, E
Explanation:
The result set cache in Snowflake is invalidated and no longer available when the underlying data of the query results has changed, ensuring that queries return the most current data. Additionally, the cache expires after 24 hours since the last time the query was run.
Which of the following describes external functions in Snowflake?
Options:
They are a type of User-defined Function (UDF).
They contain their own SQL code.
They call code that is stored inside of Snowflake.
They can return multiple rows for each row received
Answer: A
Explanation:
External functions in Snowflake are a special type of User-Defined Function (UDF) that call code executed outside of Snowflake, typically through a remote service. Unlike traditional UDFs, external functions do not contain SQL code within Snowflake; instead, they interact with external services to process data.
What happens when a cloned table is replicated to a secondary database? (Select TWO)
Options:
A read-only copy of the cloned tables is stored.
The replication will not be successful.
The physical data is replicated
Additional costs for storage are charged to a secondary account
Metadata pointers to cloned tables are replicated
Answer: C, D
Explanation:
When a cloned table is replicated to a secondary database in Snowflake, the following occurs:
C. The physical data is replicated: Unlike in the primary database, where a clone shares micro-partitions with its source, the replicated table in the secondary database contains its own full physical copy of the data.
D. Additional costs for storage are charged to the secondary account: Because the physical data is replicated rather than just metadata pointers, the secondary account incurs additional data storage usage and the associated costs.
It is also worth noting that secondary databases are read-only until they are promoted during a failover.
References:
SnowPro Core Exam Prep — Answers to Snowflake’s LEVEL UP: Backup and Recovery
Snowflake SnowPro Core Certification Exam Questions Set 10
What is the only supported character set for loading and unloading data from all supported file formats?
Options:
UTF-8
UTF-16
ISO-8859-1
WINDOWS-1253
Answer: A
Explanation:
UTF-8 is the only supported character set for loading and unloading data from all supported file formats in Snowflake. UTF-8 is a widely used encoding that supports a large range of characters from various languages, making it suitable for internationalization and ensuring data compatibility across different systems and platforms.
References:
Snowflake Documentation: Data Loading and Unloading
Which function is used to convert rows in a relational table to a single VARIANT column?
Options:
ARRAY_AGG
OBJECT_AGG
ARRAY_CONSTRUCT
OBJECT_CONSTRUCT
Answer: D
Explanation:
The OBJECT_CONSTRUCT function in Snowflake is used to convert rows in a relational table into a single VARIANT column that represents each row as a JSON object. This function dynamically creates a JSON object from a list of key-value pairs, where each key is a column name and each value is the corresponding column value for a row. This is particularly useful for aggregating and transforming structured data into semi-structured JSON format for further processing or analysis.
References:
Snowflake Documentation: Semi-structured Data Functions
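For example (mytable is a hypothetical table):
SELECT OBJECT_CONSTRUCT(*) AS row_as_variant FROM mytable; -- each row becomes a JSON object keyed by column name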
How can a user get the MOST detailed information about individual table storage details in Snowflake?
Options:
SHOW TABLES command
SHOW EXTERNAL TABLES command
TABLES view
TABLE_STORAGE_METRICS view
Answer: D
Explanation:
To obtain the most detailed information about individual table storage details in Snowflake, the TABLE_STORAGE_METRICS view is the recommended option. This view provides comprehensive metrics on storage usage, including active data size, Time Travel size, Fail-safe size, and other relevant storage metrics for each table. This level of detail is invaluable for monitoring, managing, and optimizing storage costs and performance.
References:
Snowflake Documentation: Information Schema
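A sketch of querying the view (the database filter is hypothetical):
SELECT table_name, active_bytes, time_travel_bytes, failsafe_bytes
FROM snowflake.account_usage.table_storage_metrics
WHERE table_catalog = 'MYDB'; -- per-table storage broken down by lifecycle stage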
How does a Snowflake stored procedure compare to a User-Defined Function (UDF)?
Options:
A single executable statement can call only two stored procedures. In contrast, a single SQL statement can call multiple UDFs.
A single executable statement can call only one stored procedure. In contrast, a single SQL statement can call multiple UDFs.
A single executable statement can call multiple stored procedures. In contrast, multiple SQL statements can call the same UDFs.
Multiple executable statements can call more than one stored procedure. In contrast, a single SQL statement can call multiple UDFs.
Answer: B
Explanation:
In Snowflake, stored procedures and User-Defined Functions (UDFs) have different invocation patterns within SQL:
Option B is correct: A single executable statement can call only one stored procedure, due to the procedural and potentially transactional nature of stored procedures. In contrast, a single SQL statement can call multiple UDFs, because UDFs are designed to operate more like functions in traditional programming, where they return a value and can be embedded within SQL queries.
References: Snowflake documentation comparing the operational differences between stored procedures and UDFs.
A user wants to add additional privileges to the system-defined roles for their virtual warehouse. How does Snowflake recommend they accomplish this?
Options:
Grant the additional privileges to a custom role.
Grant the additional privileges to the ACCOUNTADMIN role.
Grant the additional privileges to the SYSADMIN role.
Grant the additional privileges to the ORGADMIN role.
Answer:
AExplanation:
Snowflake recommends enhancing the granularity and management of privileges by creating and utilizing custom roles. When additional privileges are needed beyond those provided by the system-defined roles for a virtual warehouse or any other resource, these privileges should be granted to a custom role. This approach allows for more precise control over access rights and the ability to tailor permissions to the specific needs of different user groups or applications within the organization, while also maintaining the integrity and security model of system-defined roles.
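A minimal sketch of the recommended pattern, with hypothetical role and warehouse names:
-- Create a custom role and grant it the extra warehouse privileges
CREATE ROLE reporting_role;
GRANT USAGE, OPERATE ON WAREHOUSE report_wh TO ROLE reporting_role;
-- Attach the custom role to the role hierarchy instead of altering system-defined roles
GRANT ROLE reporting_role TO ROLE SYSADMIN;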
References:
Snowflake Documentation: Roles and Privileges
Which data types optimally store semi-structured data? (Select TWO).
Options:
ARRAY
CHARACTER
STRING
VARCHAR
VARIANT
Answer:
A, EExplanation:
In Snowflake, semi-structured data is optimally stored using specific data types that are designed to handle the flexibility and complexity of such data. The VARIANT data type can store structured and semi-structured data types, including JSON, Avro, ORC, Parquet, or XML, in a single column. The ARRAY data type, on the other hand, is suitable for storing ordered sequences of elements, which can be particularly useful for semi-structured data types like JSON arrays. These data types provide the necessary flexibility to store and query semi-structured data efficiently in Snowflake.
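For illustration (table and values hypothetical), both types can be combined in one table; note that PARSE_JSON must be used in an INSERT ... SELECT rather than a VALUES clause:
-- A table combining both semi-structured data types
CREATE OR REPLACE TABLE events (
    payload VARIANT,  -- full JSON document
    tags    ARRAY     -- ordered list of values
);
INSERT INTO events
SELECT PARSE_JSON('{"type": "click", "page": "/home"}'),
       ARRAY_CONSTRUCT('web', 'prod');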
References:
Snowflake Documentation: Semi-structured Data Types
What information does the Query Profile provide?
Options:
Graphical representation of the data model
Statistics for each component of the processing plan
Detailed information about the database schema
Real-time monitoring of the database operations
Answer:
BExplanation:
The Query Profile in Snowflake provides a graphical representation and statistics for each component of the query's execution plan. This includes details such as the execution time, the number of rows processed, and the amount of data scanned for each operation within the query. The Query Profile is a crucial tool for understanding and optimizing the performance of queries, as it helps identify potential bottlenecks and inefficiencies.
References:
Snowflake Documentation: Understanding the Query Profile
Which activities are included in the Cloud Services layer? (Select TWO).
Options:
Data storage
Dynamic data masking
Partition scanning
User authentication
Infrastructure management
Answer:
D, EExplanation:
The Cloud Services layer in Snowflake is responsible for a wide range of services that facilitate the management and use of Snowflake, including:
D. User authentication: This service handles identity and access management, ensuring that only authorized users can access Snowflake resources.
E. Infrastructure management: This service manages the allocation and scaling of resources to meet user demands, including the management of virtual warehouses, storage, and the orchestration of query execution.
These services are part of Snowflake's fully managed, cloud-based architecture, which abstracts and automates many of the complexities associated with data warehousing.
References:
Snowflake Documentation: Overview of Snowflake Cloud Services
What is the Fail-safe period for a transient table in the Snowflake Enterprise edition and higher?
Options:
0 days
1 day
7 days
14 days
Answer:
AExplanation:
The Fail-safe period for a transient table in Snowflake, regardless of the edition (including Enterprise edition and higher), is 0 days. Fail-safe is a data protection feature that provides additional retention beyond the Time Travel period for recovering data in case of accidental deletion or corruption. However, transient tables are designed for temporary or short-term use and do not benefit from the Fail-safe feature, meaning that once their Time Travel period expires, data cannot be recovered.
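A short, hedged sketch (table and column names hypothetical) showing how a transient table is created; Time Travel can be 0 or 1 day, but Fail-safe is always 0 days:
-- Transient table: configurable Time Travel, no Fail-safe
CREATE TRANSIENT TABLE staging_data (id INT, v VARCHAR)
  DATA_RETENTION_TIME_IN_DAYS = 1;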
References:
Snowflake Documentation: Understanding Fail-safe
Which command can be used to list all the file formats for which a user has access privileges?
Options:
LIST
ALTER FILE FORMAT
DESCRIBE FILE FORMAT
SHOW FILE FORMATS
Answer:
DExplanation:
The command to list all the file formats for which a user has access privileges in Snowflake is SHOW FILE FORMATS. This command provides a list of all file formats defined in the user's current session or specified database/schema, along with details such as the name, type, and creation time of each file format. It is a valuable tool for users to understand and manage the file formats available for data loading and unloading operations.
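For example (the schema name is hypothetical):
-- List all file formats the current role can access
SHOW FILE FORMATS;
-- Or scope the listing to a specific schema
SHOW FILE FORMATS IN SCHEMA mydb.public;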
References:
Snowflake Documentation: SHOW FILE FORMATS
A Snowflake user is writing a User-Defined Function (UDF) that includes some unqualified object names.
How will those object names be resolved during execution?
Options:
Snowflake will resolve them according to the SEARCH_PATH parameter.
Snowflake will only check the schema the UDF belongs to.
Snowflake will first check the current schema, and then the schema the previous query used.
Snowflake will first check the current schema, and then the PUBLIC schema of the current database.
Answer:
DExplanation:
Object Name Resolution: When unqualified object names (e.g., table name without schema) are used in a UDF, Snowflake follows a specific hierarchy to resolve them. Here's the order:
Current Schema: Snowflake first checks if an object with the given name exists in the schema currently in use for the session.
PUBLIC Schema: If the object isn't found in the current schema, Snowflake looks in the PUBLIC schema of the current database.
Note: The SEARCH_PATH parameter influences object resolution for queries, not within UDFs.
References:
Snowflake Documentation: Object Name Resolution
What are characteristics of reader accounts in Snowflake? (Select TWO).
Options:
Reader account users cannot add new data to the account.
Reader account users can share data to other reader accounts.
A single reader account can consume data from multiple provider accounts.
Data consumers are responsible for reader account setup and data usage costs.
Reader accounts enable data consumers to access and query data shared by the provider.
Answer:
A, EExplanation:
Characteristics of reader accounts in Snowflake include:
A. Reader account users cannot add new data to the account: Reader accounts are intended for data consumption only. Users of these accounts can query and analyze the data shared with them but cannot upload or add new data to the account.
E. Reader accounts enable data consumers to access and query data shared by the provider: One of the primary purposes of reader accounts is to allow data consumers to access and perform queries on the data shared by another Snowflake account, facilitating secure and controlled data sharing.
References:
Snowflake Documentation: Reader Accounts
Which role has the ability to create a share from a shared database by default?
Options:
ACCOUNTADMIN
SECURITYADMIN
SYSADMIN
ORGADMIN
Answer:
AExplanation:
By default, the ACCOUNTADMIN role in Snowflake has the ability to create a share from a shared database. This role has the highest level of access within a Snowflake account, including the management of all aspects of the account, such as users, roles, warehouses, and databases, as well as the creation and management of shares for secure data sharing with other Snowflake accounts.
References:
Snowflake Documentation: Roles
A Snowflake user wants to temporarily bypass a network policy by configuring the user object property MINS_TO_BYPASS_NETWORK_POLICY.
What should they do?
Options:
Use the SECURITYADMIN role.
Use the SYSADMIN role.
Use the USERADMIN role.
Contact Snowflake Support.
Answer:
DExplanation:
The MINS_TO_BYPASS_NETWORK_POLICY property can be viewed with DESCRIBE USER, but only Snowflake can set its value. To temporarily bypass a network policy for a set number of minutes, the user must contact Snowflake Support, who will set the property on the user object.
References:
Snowflake Documentation: Network Policies
Which function should be used to insert JSON-formatted string data into a VARIANT field?
Options:
FLATTEN
CHECK_JSON
PARSE_JSON
TO_VARIANT
Answer:
CExplanation:
To insert JSON-formatted string data into a VARIANT field in Snowflake, the correct function is PARSE_JSON. It interprets a JSON-formatted string and converts it into a VARIANT value, Snowflake's flexible type for semi-structured data such as JSON, XML, and Avro. This preserves the structure of the JSON so it can be queried directly.
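A minimal sketch, with a hypothetical table and literal:
-- Insert a JSON string into a VARIANT column
CREATE OR REPLACE TABLE raw_json (v VARIANT);
INSERT INTO raw_json
SELECT PARSE_JSON('{"city": "Berlin", "population": 3700000}');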
References:
Snowflake Documentation: PARSE_JSON
By default, how long is the standard retention period for Time Travel across all Snowflake accounts?
Options:
0 days
1 day
7 days
14 days
Answer:
BExplanation:
By default, the standard retention period for Time Travel in Snowflake is 1 day across all Snowflake accounts. Time Travel enables users to access historical data within this retention window, allowing for point-in-time data analysis and recovery. This feature is a significant aspect of Snowflake's data management capabilities, offering flexibility in handling data changes and accidental deletions.
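As a hedged illustration, the retention period can be inspected and raised at the account level (the value shown is illustrative; retention above 1 day requires Enterprise Edition or higher):
-- Check the current account-level setting
SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN ACCOUNT;
-- Raise the default retention period
ALTER ACCOUNT SET DATA_RETENTION_TIME_IN_DAYS = 7;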
References:
Snowflake Documentation: Using Time Travel
Which common query problems are identified by the Query Profile? (Select TWO.)
Options:
Syntax error
Inefficient pruning
Ambiguous column names
Queries too large to fit in memory
Object does not exist or not authorized
Answer:
B, DExplanation:
The Query Profile in Snowflake can identify common query problems, including:
B. Inefficient pruning: This refers to the inability of a query to effectively limit the amount of data being scanned, potentially leading to suboptimal performance.
D. Queries too large to fit in memory: This indicates that a query requires more memory than is available in the virtual warehouse, which can lead to spilling to disk and degraded performance.
The Query Profile helps diagnose these issues by providing detailed execution statistics and visualizations, aiding in query optimization and troubleshooting.
References:
Snowflake Documentation: Query Profile
How does a Snowflake user extract the URL of a directory table on an external stage for further transformation?
Options:
Use the SHOW STAGES command.
Use the DESCRIBE STAGE command.
Use the GET_ABSOLUTE_PATH function.
Use the GET_STAGE_LOCATION function.
Answer:
CExplanation:
To extract the URL of a directory table on an external stage for further transformation in Snowflake, the GET_ABSOLUTE_PATH function can be used. This function returns the full path of a file or directory within a specified stage, enabling users to dynamically construct URLs for accessing or processing data stored in external stages.
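A brief, hedged example (stage and file path hypothetical):
-- Resolve the absolute path of a staged file
SELECT GET_ABSOLUTE_PATH(@my_ext_stage, 'invoices/2023/inv001.pdf');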
References:
Snowflake Documentation: Working with Stages
Which file function generates a Snowflake-hosted URL that must be authenticated when used?
Options:
GET_STAGE_LOCATION
GET_PRESIGNED_URL
BUILD_SCOPED_FILE_URL
BUILD_STAGE_FILE_URL
Answer:
DExplanation:
The BUILD_STAGE_FILE_URL function generates a Snowflake-hosted file URL for a file in an internal or external stage. Unlike a pre-signed URL, this URL does not expire, and any user or application that uses it must authenticate to Snowflake and hold sufficient privileges on the stage.
Key Points:
Security: access through the URL requires authentication and the appropriate stage privileges.
Use Cases: building permanent, access-controlled links to staged files for custom applications and reports.
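A minimal sketch (stage and path hypothetical):
-- Generate an authenticated, non-expiring file URL for a staged file
SELECT BUILD_STAGE_FILE_URL(@my_stage, '/reports/q1.pdf');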
References:
Snowflake Documentation: BUILD_STAGE_FILE_URL
What criteria does Snowflake use to determine the current role when initiating a session? (Select TWO).
Options:
If a role was specified as part of the connection and that role has been granted to the Snowflake user, the specified role becomes the current role.
If no role was specified as part of the connection and a default role has been defined for the Snowflake user, that role becomes the current role.
If no role was specified as part of the connection and a default role has not been set for the Snowflake user, the session will not be initiated and the log in will fail.
If a role was specified as part of the connection and that role has not been granted to the Snowflake user, it will be ignored and the default role will become the current role.
If a role was specified as part of the connection and that role has not been granted to the Snowflake user, the role is automatically granted and it becomes the current role.
Answer:
A, BExplanation:
When initiating a session in Snowflake, the system determines the current role based on the user's connection details and role assignments. If a user specifies a role during the connection, and that role is already granted to them, Snowflake sets it as the current role for the session. Alternatively, if no role is specified during the connection, but the user has a default role assigned, Snowflake will use this default role as the current session role. These mechanisms ensure that users operate within their permissions, enhancing security and governance within Snowflake environments.
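For illustration (user and role names hypothetical), a default role can be assigned so that it becomes the current role whenever no role is specified at connection time:
-- Set the default role used when the connection specifies none
ALTER USER jsmith SET DEFAULT_ROLE = analyst;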
References:
Snowflake Documentation: Understanding Roles
How long is a query visible in the Query History page in the Snowflake Web Interface (UI)?
Options:
60 minutes
24 hours
14 days
30 days
Answer:
CExplanation:
In the Snowflake Web Interface (UI), the Query History page displays the history of queries executed in Snowflake for up to 14 days. This allows users to review and analyze their query performance, troubleshoot issues, and understand their query patterns over a two-week period. The Query History page is a critical tool for monitoring and optimizing the use of Snowflake.
References:
Snowflake Documentation: Using the Web Interface
When using the ALLOW_CLIENT_MFA_CACHING parameter, how long is a cached Multi-Factor Authentication (MFA) token valid for?
Options:
1 hour
2 hours
4 hours
8 hours
Answer:
CExplanation:
When ALLOW_CLIENT_MFA_CACHING is enabled, the client driver caches the MFA token so that users are not prompted on every connection. A cached MFA token is valid for up to four hours.
References:
Snowflake Documentation: Using MFA token caching to minimize the number of prompts during authentication
What is used to denote a pre-computed data set derived from a SELECT query specification and stored for later use?
Options:
View
Secure view
Materialized view
External table
Answer:
CExplanation:
A materialized view in Snowflake denotes a pre-computed data set derived from a SELECT query specification and stored for later use. Unlike standard views, which dynamically compute the data each time the view is accessed, materialized views store the result of the query at the time it is executed, thereby speeding up access to the data, especially for expensive aggregations on large datasets.
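A short, hedged sketch of the concept (table and column names hypothetical); Snowflake materialized views support queries such as single-table aggregations:
-- Precompute an expensive aggregation for fast repeated access
CREATE MATERIALIZED VIEW daily_sales_mv AS
SELECT sale_date, SUM(amount) AS total_amount
FROM sales
GROUP BY sale_date;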
References:
Snowflake Documentation: Materialized Views
When referring to User-Defined Function (UDF) names in Snowflake, what does the term overloading mean?
Options:
There are multiple SQL UDFs with the same names and the same number of arguments.
There are multiple SQL UDFs with the same names and the same number of argument types.
There are multiple SQL UDFs with the same names but with a different number of arguments or argument types.
There are multiple SQL UDFs with different names but the same number of arguments or argument types.
Answer:
CExplanation:
In Snowflake, overloading refers to the creation of multiple User-Defined Functions (UDFs) with the same name but differing in the number or types of their arguments. This feature allows for more flexible function usage, as Snowflake can differentiate between functions based on the context of their invocation, such as the types or the number of arguments passed. Overloading helps to create more adaptable and readable code, as the same function name can be used for similar operations on different types of data.
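For illustration, two hypothetical SQL UDFs sharing a name but differing in their argument lists:
-- Overloaded UDFs: same name, different signatures
CREATE OR REPLACE FUNCTION area(radius FLOAT)
  RETURNS FLOAT
  AS 'PI() * radius * radius';
CREATE OR REPLACE FUNCTION area(length FLOAT, width FLOAT)
  RETURNS FLOAT
  AS 'length * width';
-- Snowflake resolves the call based on the arguments supplied
SELECT area(2.0) AS circle, area(3.0, 4.0) AS rectangle;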
References:
Snowflake Documentation: User-Defined Functions
Which data formats are supported by Snowflake when unloading semi-structured data? (Select TWO).
Options:
Binary file in Avro
Binary file in Parquet
Comma-separated JSON
Newline Delimited JSON
Plain text file containing XML elements
Answer:
B, DExplanation:
Snowflake supports a variety of file formats for unloading semi-structured data, among which Parquet and Newline Delimited JSON (NDJSON) are two widely used formats.
B. Binary file in Parquet: Parquet is a columnar storage file format optimized for large-scale data processing and analysis. It is especially suited for complex nested data structures.
D. Newline Delimited JSON (NDJSON): This format represents JSON records separated by newline characters, facilitating the storage and processing of multiple, separate JSON objects in a single file.
These formats are chosen for their efficiency and compatibility with data analytics tools and ecosystems, enabling seamless integration and processing of exported data.
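A hedged sketch of both unload paths (stage and table names hypothetical); note that unloading to JSON typically goes through a single VARIANT column, for example via OBJECT_CONSTRUCT:
-- Unload rows as Newline Delimited JSON
COPY INTO @my_stage/json/
FROM (SELECT OBJECT_CONSTRUCT(*) FROM my_table)
FILE_FORMAT = (TYPE = JSON);
-- Unload the same table as Parquet
COPY INTO @my_stage/parquet/
FROM my_table
FILE_FORMAT = (TYPE = PARQUET);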
References:
Snowflake Documentation: Data Unloading
Which Snowflake object does not consume any storage costs?
Options:
Secure view
Materialized view
Temporary table
Transient table
Answer:
AExplanation:
A secure view does not consume storage. A view, secure or standard, is only a stored query definition; it holds no data of its own and therefore incurs no storage costs. By contrast, materialized views store precomputed results, and temporary and transient tables store data for their lifetime. Temporary and transient tables avoid Fail-safe charges, but they are still billed for the storage they use while they exist.
References:
Snowflake Documentation: Understanding Storage Cost
Which Snowflake data type is used to store JSON key value pairs?
Options:
TEXT
BINARY
STRING
VARIANT
Answer:
DExplanation:
The VARIANT data type in Snowflake is used to store JSON key-value pairs along with other semi-structured data formats like AVRO, BSON, and XML. The VARIANT data type allows for flexible and dynamic data structures within a single column, accommodating complex and nested data. This data type is crucial for handling semi-structured data in Snowflake, enabling users to perform SQL operations on JSON objects and arrays directly.
References:
Snowflake Documentation: Semi-structured Data Types
Which privilege is required to use the search optimization service in Snowflake?
Options:
GRANT SEARCH OPTIMIZATION ON SCHEMA
GRANT SEARCH OPTIMIZATION ON DATABASE
GRANT ADD SEARCH OPTIMIZATION ON SCHEMA
GRANT ADD SEARCH OPTIMIZATION ON DATABASE
Answer:
CExplanation:
To utilize the search optimization service in Snowflake, the correct syntax for granting privileges to a role involves specific commands that include adding search optimization capabilities:
Option C: GRANT ADD SEARCH OPTIMIZATION ON SCHEMA
Options A and B do not include the correct verb "ADD," which is necessary for this specific type of grant command in Snowflake. Option D incorrectly mentions the database level, as search optimization privileges are typically configured at the schema level, not the database level.References: Snowflake documentation on the use of GRANT statements for configuring search optimization.
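A brief, hedged example of the grant and a subsequent use of it (object and role names hypothetical):
-- Allow a role to add search optimization within a schema
GRANT ADD SEARCH OPTIMIZATION ON SCHEMA mydb.sales TO ROLE optimizer_role;
-- The role can then enable the service on a table
ALTER TABLE mydb.sales.orders ADD SEARCH OPTIMIZATION;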
If a virtual warehouse runs for 61 seconds, is shut down, and then restarts and runs for another 30 seconds, for how many seconds is it billed?
Options:
60
91
120
121
Answer:
DExplanation:
Snowflake bills virtual warehouse usage per second, with a 60-second minimum each time the warehouse starts or resumes. The first run of 61 seconds exceeds the minimum and is billed as 61 seconds. The second run of 30 seconds falls under a fresh 60-second minimum and is billed as 60 seconds, for a total of 121 seconds.
References:
Snowflake Documentation: Virtual Warehouses Billing
What is it called when a customer-managed key is combined with a Snowflake-managed key to create a composite key for encryption?
Options:
Hierarchical key model
Client-side encryption
Tri-secret secure encryption
Key pair authentication
Answer:
CExplanation:
Tri-secret secure encryption is a security model employed by Snowflake that involves combining a customer-managed key with a Snowflake-managed key to create a composite key for encrypting data. This model enhances data security by requiring both the customer-managed key and the Snowflake-managed key to decrypt data, thus ensuring that neither party can access the data independently. It represents a balanced approach to key management, leveraging both customer control and Snowflake's managed services for robust data encryption.
References:
Snowflake Documentation: Encryption and Key Management
Which command removes a role from another role or a user in Snowflake?
Options:
ALTER ROLE
REVOKE ROLE
USE ROLE
USE SECONDARY ROLES
Answer:
BExplanation:
The REVOKE ROLE command is used to remove a role from another role or a user in Snowflake. This command is part of Snowflake's role-based access control system, allowing administrators to manage permissions and access to database objects efficiently by adding or removing roles from users or other roles.
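For example (role and user names hypothetical):
-- Remove a role from a user and from another role
REVOKE ROLE analyst FROM USER jsmith;
REVOKE ROLE analyst FROM ROLE sysadmin;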
References:
Snowflake Documentation: REVOKE ROLE
How does Snowflake describe its unique architecture?
Options:
A multi-cluster shared data architecture using a central data repository and massively parallel processing (MPP)
A multi-cluster shared nothing architecture using a siloed data repository and massively parallel processing (MPP)
A single-cluster shared nothing architecture using a sliced data repository and symmetric multiprocessing (SMP)
A multi-cluster shared nothing architecture using a siloed data repository and symmetric multiprocessing (SMP)
Answer:
AExplanation:
Snowflake's unique architecture is described as a multi-cluster, shared data architecture that leverages massively parallel processing (MPP). This architecture separates compute and storage resources, enabling Snowflake to scale them independently. It does not use a single cluster or rely solely on symmetric multiprocessing (SMP); rather, it uses a combination of shared-nothing architecture for compute clusters (virtual warehouses) and a centralized storage layer for data, optimizing for both performance and scalability.
References:
Snowflake Documentation: Snowflake Architecture Overview
Why would a Snowflake user decide to use a materialized view instead of a regular view?
Options:
The base tables do not change frequently.
The results of the view change often.
The query is not resource intensive.
The query results are not used frequently.
Answer:
AExplanation:
A Snowflake user would decide to use a materialized view instead of a regular view primarily when the base tables do not change frequently. Materialized views store the result of the view query and update it as the underlying data changes, making them ideal for situations where the data is relatively static and query performance is critical. By precomputing and storing the query results, materialized views can significantly reduce query execution times for complex aggregations, joins, and calculations.
References:
Snowflake Documentation: Materialized Views
There are two Snowflake accounts in the same cloud provider region: one is production and the other is non-production. How can data be easily transferred from the production account to the non-production account?
Options:
Clone the data from the production account to the non-production account.
Create a data share from the production account to the non-production account.
Create a subscription in the production account and have it publish to the non-production account.
Create a reader account using the production account and link the reader account to the non-production account.
Answer:
BExplanation:
To easily transfer data from a production account to a non-production account in Snowflake within the same cloud provider region, creating a data share is the most efficient approach. Data sharing allows for live, read-only access to selected data objects from the production account to the non-production account without the need to duplicate or move the actual data. This method facilitates seamless access to the data for development, testing, or analytics purposes in the non-production environment.
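A hedged sketch of the provider-side steps (database, schema, and account names hypothetical):
-- Create a share and expose the production data to it
CREATE SHARE prod_share;
GRANT USAGE ON DATABASE prod_db TO SHARE prod_share;
GRANT USAGE ON SCHEMA prod_db.public TO SHARE prod_share;
GRANT SELECT ON ALL TABLES IN SCHEMA prod_db.public TO SHARE prod_share;
-- Make the share visible to the non-production account
ALTER SHARE prod_share ADD ACCOUNTS = myorg.nonprod_account;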
References:
Snowflake Documentation: Data Sharing
Which SQL command can be used to verify the privileges that are granted to a role?
Options:
SHOW GRANTS ON ROLE
SHOW ROLES
SHOW GRANTS TO ROLE
SHOW GRANTS FOR ROLE
Answer:
CExplanation:
To verify the privileges that have been granted to a specific role in Snowflake, the correct SQL command is SHOW GRANTS TO ROLE <Role Name>. This command lists all the privileges granted to the specified role, including access to schemas, tables, and other database objects. This is a useful command for administrators and users with sufficient privileges to audit and manage role permissions within the Snowflake environment.
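For example (the role name is hypothetical):
-- List every privilege currently granted to the role
SHOW GRANTS TO ROLE analyst;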
References:
Snowflake Documentation: SHOW GRANTS
What is the Fail-safe retention period for transient and temporary tables?
Options:
0 days
1 day
7 days
90 days
Answer:
AExplanation:
The Fail-safe retention period for transient and temporary tables in Snowflake is 0 days. Fail-safe is a feature designed to protect data against accidental loss or deletion by retaining historical data for a period after its Time Travel retention period expires. However, transient and temporary tables, which are designed for temporary or short-term storage and operations, do not have a Fail-safe period. Once the data is deleted or the table is dropped, it cannot be recovered.
References:
Snowflake Documentation: Understanding Fail-safe
By default, which role has access to the SYSTEM$GLOBAL_ACCOUNT_SET_PARAMETER function?
Options:
ACCOUNTADMIN
SECURITYADMIN
SYSADMIN
ORGADMIN
Answer:
AExplanation:
By default, the ACCOUNTADMIN role in Snowflake has access to the SYSTEM$GLOBAL_ACCOUNT_SET_PARAMETER function. This function is used to set global account parameters, impacting the entire Snowflake account's configuration and behavior. The ACCOUNTADMIN role is the highest-level administrative role in Snowflake, granting the necessary privileges to manage account settings and security features, including the use of global account parameters.
References:
Snowflake Documentation: SYSTEM$GLOBAL_ACCOUNT_SET_PARAMETER
Which service or feature in Snowflake is used to improve the performance of certain types of lookup and analytical queries that use an extensive set of WHERE conditions?
Options:
Data classification
Query acceleration service
Search optimization service
Tagging
Answer:
CExplanation:
The Search Optimization Service in Snowflake is designed to improve the performance of specific types of queries, particularly those involving extensive sets of WHERE conditions. By maintaining a search index on tables, this service can accelerate lookup and analytical queries, making it a valuable feature for optimizing query performance and reducing execution times for complex searches.
References:
Snowflake Documentation: Search Optimization Service
Which solution improves the performance of point lookup queries that return a small number of rows from large tables using highly selective filters?
Options:
Automatic clustering
Materialized views
Query acceleration service
Search optimization service
Answer:
DExplanation:
The search optimization service improves the performance of point lookup queries on large tables by using selective filters to quickly return a small number of rows. It creates an optimized data structure that helps in pruning the micro-partitions that do not contain the queried values. References: [COF-C02] SnowPro Core Certification Exam Study Guide
For which use cases is running a virtual warehouse required? (Select TWO).
Options:
When creating a table
When loading data into a table
When unloading data from a table
When executing a show command
When executing a list command
Answer:
B, CExplanation:
Running a virtual warehouse is required when loading data into a table and when unloading data from a table, because both operations require compute resources that are provided by the virtual warehouse.
Which VALIDATION_MODE value will return the errors across the files specified in a COPY command, including files that were partially loaded during an earlier load?
Options:
RETURN_-1_ROWS
RETURN_n_ROWS
RETURN_ERRORS
RETURN_ALL_ERRORS
Answer:
DExplanation:
The RETURN_ALL_ERRORS value in the VALIDATION_MODE option instructs the COPY command to return all errors across all specified files, including errors from files that were partially loaded during an earlier load (for example, when ON_ERROR = CONTINUE was used). RETURN_ERRORS, by contrast, reports errors only for the files in the current COPY statement. References: [COF-C02] SnowPro Core Certification Exam Study Guide
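A minimal, hedged sketch (table and stage names hypothetical); with VALIDATION_MODE the files are validated but no data is loaded:
-- Report every error across the specified files, including those
-- partially loaded in an earlier run with ON_ERROR = CONTINUE
COPY INTO my_table FROM @my_stage
  VALIDATION_MODE = 'RETURN_ALL_ERRORS';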
Which statements describe benefits of Snowflake's separation of compute and storage? (Select TWO).
Options:
The separation allows independent scaling of computing resources.
The separation ensures consistent data encryption across all virtual data warehouses.
The separation supports automatic conversion of semi-structured data into structured data for advanced data analysis.
Storage volume growth and compute usage growth can be tightly coupled.
Compute can be scaled up or down without the requirement to add more storage.
Answer:
A, EExplanation:
Snowflake’s architecture allows for the independent scaling of compute resources, meaning you can increase or decrease the computational power as needed without affecting storage. This separation also means that storage can grow independently of compute usage, allowing for more flexible and cost-effective data management.
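For illustration (the warehouse name is hypothetical), compute can be resized without touching storage:
-- Scale compute up independently of storage
ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'LARGE';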
Which Snowflake function will parse a JSON-null into a SQL-null?
Options:
TO_CHAR
TO_VARIANT
TO_VARCHAR
STRIP_NULL_VALUE
Answer:
DExplanation:
The STRIP_NULL_VALUE function in Snowflake is used to convert a JSON null value into a SQL NULL value.
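For example (the JSON literal is illustrative):
-- JSON null becomes SQL NULL
SELECT STRIP_NULL_VALUE(PARSE_JSON('{"a": null}'):a) AS sql_null;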
What is a directory table in Snowflake?
Options:
A separate database object that is used to store file-level metadata
An object layered on a stage that is used to store file-level metadata
A database object with grantable privileges for unstructured data tasks
A Snowflake table specifically designed for storing unstructured files
Answer:
BExplanation:
A directory table in Snowflake is an object layered on a stage that is used to store file-level metadata. It is not a separate database object but is conceptually similar to an external table because it stores metadata about the data files in the stage.
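A brief, hedged sketch (the stage name is hypothetical):
-- Enable a directory table on a stage and query its file-level metadata
CREATE OR REPLACE STAGE doc_stage DIRECTORY = (ENABLE = TRUE);
ALTER STAGE doc_stage REFRESH;  -- synchronize the metadata with the stage contents
SELECT relative_path, size, file_url FROM DIRECTORY(@doc_stage);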
Which function unloads data from a relational table to JSON?
Options:
TO_OBJECT
TO_JSON
TO_VARIANT
OBJECT_CONSTRUCT
Answer:
BExplanation:
The TO_JSON function is used to convert a VARIANT value into a string containing the JSON representation of the value. This function is suitable for unloading data from a relational table to JSON format. References: [COF-C02] SnowPro Core Certification Exam Study Guide
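For illustration (the table name is hypothetical), TO_JSON is typically paired with OBJECT_CONSTRUCT when converting relational rows for unloading:
-- Produce a JSON string for each row
SELECT TO_JSON(OBJECT_CONSTRUCT(*)) AS json_row FROM my_table;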
Which Snowflake feature allows administrators to identify unused data that may be archived or deleted?
Options:
Access history
Data classification
Dynamic Data Masking
Object tagging
Answer:
AExplanation:
The Access History feature in Snowflake allows administrators to track data access patterns and identify unused data. This information can be used to make decisions about archiving or deleting data to optimize storage and reduce costs.
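A hedged sketch of how recent access might be reviewed through the ACCOUNT_USAGE schema (the exact analysis for identifying unused data is up to the administrator):
-- Recent read activity recorded in the access history
SELECT query_start_time, user_name, direct_objects_accessed
FROM snowflake.account_usage.access_history
WHERE query_start_time > DATEADD(month, -6, CURRENT_TIMESTAMP());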
Which parameter can be set at the account level to set the minimum number of days for which Snowflake retains historical data in Time Travel?
Options:
DATA_RETENTION_TIME_IN_DAYS
MAX_DATA_EXTENSION_TIME_IN_DAYS
MIN_DATA_RETENTION_TIME_IN_DAYS
MAX_CONCURRENCY_LEVEL
Answer:
CExplanation:
MIN_DATA_RETENTION_TIME_IN_DAYS is the parameter that can be set at the account level to define the minimum number of days for which Snowflake retains historical data for Time Travel. It does not change DATA_RETENTION_TIME_IN_DAYS; the effective retention period for an object is the larger of the two values. DATA_RETENTION_TIME_IN_DAYS, by contrast, sets the default retention period rather than a minimum.
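A minimal, hedged example (the value is illustrative):
-- Enforce an account-wide minimum Time Travel retention
ALTER ACCOUNT SET MIN_DATA_RETENTION_TIME_IN_DAYS = 7;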
What factors impact storage costs in Snowflake? (Select TWO).
Options:
The account type
The storage file format
The cloud region used by the account
The type of data being stored
The cloud platform being used
Answer:
A, CExplanation:
The factors that impact storage costs in Snowflake include the account type (Capacity or On Demand) and the cloud region used by the account. These factors determine the rate at which storage is billed, with different regions potentially having different rates.
What is the purpose of a Query Profile?
Options:
To profile how many times a particular query was executed and analyze its usage statistics over time.
To profile a particular query to understand the mechanics of the query, its behavior, and performance.
To profile the user and/or executing role of a query and all privileges and policies applied on the objects within the query.
To profile which queries are running in each warehouse and identify proper warehouse utilization and sizing for better performance and cost balancing.
Answer:
BExplanation:
The purpose of a Query Profile is to provide a detailed analysis of a particular query’s execution plan, including the mechanics, behavior, and performance. It helps in identifying potential performance bottlenecks and areas for optimization.
Which command is used to unload data from a Snowflake database table into one or more files in a Snowflake stage?
Options:
CREATE STAGE
COPY INTO