Snowflake ARA-C01 Question Answer
An Architect has selected the Snowflake Connector for Python to integrate and manipulate Snowflake data using Python to handle large data sets and complex analyses.
Which features should the Architect consider in terms of query execution and data type conversion? (Select TWO).
A. The large queries will require conn.cursor() to execute.
B. The Connector supports asynchronous and synchronous queries.
C. The Connector converts NUMBER data types to DECIMAL by default.
D. The Connector converts Snowflake data types to native Python data types by default.
E. The Connector converts data types to STRING by default.
Answer: B, D
The Snowflake Connector for Python is designed to integrate Snowflake with Python-based analytics, ETL, and application workloads. One key capability is its support for both synchronous and asynchronous query execution, which allows architects to design scalable pipelines and applications that can submit long-running queries without blocking execution threads (Answer B). This is particularly important for large data sets and complex analytical workloads, where asynchronous execution improves throughput and application responsiveness.
Additionally, the connector automatically converts Snowflake data types into native Python data types wherever possible (Answer D). For example, VARCHAR values are returned as Python strings, numeric values as Python numeric types, and timestamps as Python datetime objects. This default behavior simplifies downstream processing and analysis, eliminating the need for manual casting or parsing in most use cases.
The connector does not convert all values to strings by default, nor does it specifically convert NUMBER to DECIMAL as a required behavior; instead, type conversion is handled intelligently to match Python equivalents. While cursors are used to execute queries, this is standard DB-API behavior and not a distinguishing feature for performance or architecture decisions. For SnowPro Architect candidates, understanding these connector capabilities is essential when designing Python-based data engineering or analytics solutions on Snowflake.
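To make these two behaviors concrete, the minimal sketch below (connection parameters and the sample query are placeholders) uses the connector's documented asynchronous execution pattern and then inspects the native Python types returned by a fetch:

    import time
    import snowflake.connector

    # Placeholder credentials; replace with real account, user, and authentication details.
    conn = snowflake.connector.connect(account="<account>", user="<user>", password="<password>")
    cur = conn.cursor()

    # Asynchronous execution: submit the query and capture its query ID without blocking.
    cur.execute_async("SELECT CURRENT_TIMESTAMP(), 42::NUMBER(38,0), 'abc'")
    query_id = cur.sfqid

    # Poll until the query finishes, then attach the cursor to its result set.
    while conn.is_still_running(conn.get_query_status(query_id)):
        time.sleep(1)
    cur.get_results_from_sfqid(query_id)

    # Values come back as native Python types: datetime, int, and str respectively.
    row = cur.fetchone()
    print([type(v) for v in row])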
=========
QUESTION NO: 7 [Security and Access Management]
Which parameters can only be set at the account level? (Select TWO).
A. DATA_RETENTION_TIME_IN_DAYS
B. ENFORCE_SESSION_POLICY
C. MAX_CONCURRENCY_LEVEL
D. PERIODIC_DATA_REKEYING
E. TIMESTAMP_INPUT_FORMAT
Answer: B, D
Snowflake parameters exist at different levels of the hierarchy, including account, user, session, warehouse, database, schema, and object levels. However, some parameters are intentionally restricted to the account level because they enforce global security or compliance behavior across the entire Snowflake environment.
ENFORCE_SESSION_POLICY is an account-level parameter that determines whether session policies (such as authentication or session controls) are enforced across all users. Because this impacts authentication and session behavior globally, it cannot be overridden at lower scopes (Answer B).
PERIODIC_DATA_REKEYING is another account-level-only parameter. It controls automatic re-encryption (rekeying) of data to meet strict compliance and security requirements. Rekeying affects all encrypted data in the account and must therefore be centrally managed at the account level (Answer D).
By contrast, DATA_RETENTION_TIME_IN_DAYS can be set at multiple levels (account, database, schema, and table). MAX_CONCURRENCY_LEVEL is an object parameter that can be set on individual warehouses as well as at the account level, and TIMESTAMP_INPUT_FORMAT can be set at the account, user, or session level. From a SnowPro Architect perspective, understanding which parameters are global versus scoped is critical for designing secure, compliant, and governable Snowflake architectures.
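As a hedged illustration (reusing the cursor from the earlier sketch and assuming the session holds ACCOUNTADMIN privileges), an architect could check and set an account-only parameter, and contrast it with a session-scoped one:

    # Show the account-level value and definition level of an account-only parameter.
    cur.execute("SHOW PARAMETERS LIKE 'PERIODIC_DATA_REKEYING' IN ACCOUNT")
    print(cur.fetchall())

    # Account-only parameters can be changed solely via ALTER ACCOUNT.
    cur.execute("ALTER ACCOUNT SET PERIODIC_DATA_REKEYING = TRUE")

    # A scoped parameter such as TIMESTAMP_INPUT_FORMAT can instead be set per session.
    cur.execute("ALTER SESSION SET TIMESTAMP_INPUT_FORMAT = 'YYYY-MM-DD HH24:MI:SS'")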
=========
QUESTION NO: 8 [Snowflake Data Engineering]
The source query in a MERGE statement's USING clause returns duplicate values for the column ID, and ID is used in the merge condition. The MERGE statement contains these two clauses:
WHEN NOT MATCHED THEN INSERT
WHEN MATCHED THEN UPDATE
What will be the result when this query is run?
A. The MERGE statement will run successfully using the default parameter settings.
B. If the value of the ID is present in the target table, all occurrences will be updated.
C. If the value of the ID is present in the target table, only the first occurrence will be updated.
D. If the ERROR_ON_NONDETERMINISTIC_MERGE = FALSE parameter is set, the MERGE statement will run successfully.
Answer: D
In Snowflake, MERGE statements require deterministic behavior when matching rows between the source (USING clause) and the target table. If the USING clause contains duplicate values for the join condition (in this case, column ID), Snowflake cannot deterministically decide which source row should update or insert into the target. By default, this results in an error to prevent unintended data corruption.
Snowflake provides the parameter ERROR_ON_NONDETERMINISTIC_MERGE to control this behavior. When set to TRUE (the default), Snowflake raises an error if nondeterministic matches are detected. When this parameter is explicitly set to FALSE, Snowflake allows the MERGE statement to run successfully even when duplicate keys exist in the source, accepting the nondeterministic outcome (Answer D).
Snowflake does not guarantee updating all or only the first occurrence in such cases; instead, the behavior is undefined unless the parameter is adjusted. This question tests an architect’s understanding of data correctness, deterministic processing, and safe data engineering practices—key topics within the SnowPro Architect exam scope.
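A minimal sketch of this behavior, assuming hypothetical tables target_t and source_t that both contain id and val columns, and reusing the cursor from the first example:

    # With the default ERROR_ON_NONDETERMINISTIC_MERGE = TRUE, duplicate source IDs raise an error.
    # Setting it to FALSE lets the MERGE complete, with an arbitrary source row winning each match.
    cur.execute("ALTER SESSION SET ERROR_ON_NONDETERMINISTIC_MERGE = FALSE")
    cur.execute("""
        MERGE INTO target_t t
        USING source_t s
        ON t.id = s.id
        WHEN MATCHED THEN UPDATE SET t.val = s.val
        WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val)
    """)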
=========
QUESTION NO: 9 [Snowflake Data Engineering]
An Architect needs to define a table structure for an unfamiliar semi-structured data set. The Architect wants to identify a list of distinct key names present in the semi-structured objects.
What function should be used?
A. FLATTEN with the RECURSIVE argument
B. INFER_SCHEMA
C. PARSE_JSON
D. RESULT_SCAN
Answer: A
When working with unfamiliar semi-structured data such as JSON, a common first step is to explore its structure and identify all possible keys. Snowflake’s FLATTEN function is specifically designed to explode VARIANT, OBJECT, or ARRAY data into relational form. Using the RECURSIVE option allows FLATTEN to traverse nested objects and arrays, returning all nested keys regardless of depth (Answer A).
This approach enables architects to query and aggregate distinct key names, making it ideal for schema discovery and exploratory analysis. INFER_SCHEMA, by contrast, is used primarily with staged files to infer column definitions for external tables or COPY operations, not for exploring existing VARIANT data already stored in tables. PARSE_JSON simply converts a string into a VARIANT type and does not help identify keys. RESULT_SCAN is used to query the results of a previously executed query and is unrelated to schema discovery.
For SnowPro Architect candidates, this highlights an important semi-structured data design pattern: using FLATTEN (often with RECURSIVE) to explore, profile, and understand evolving data structures before committing to a relational schema or transformation pipeline.
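As a sketch of this discovery pattern (the table raw_events and its VARIANT column payload are assumed names), the distinct key names across all nesting levels can be listed with:

    # RECURSIVE => TRUE walks nested objects and arrays; f.key is NULL for plain array elements.
    cur.execute("""
        SELECT DISTINCT f.key
        FROM raw_events r,
             LATERAL FLATTEN(INPUT => r.payload, RECURSIVE => TRUE) f
        WHERE f.key IS NOT NULL
        ORDER BY f.key
    """)
    for (key_name,) in cur.fetchall():
        print(key_name)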
=========
QUESTION NO: 10 [Security and Access Management]
A global retail company must ensure comprehensive data governance, security, and compliance with various international regulations while using Snowflake for data warehousing and analytics.
What should an Architect do to meet these requirements? (Select TWO).
A. Create a network policy at the column level to secure the data.
B. Use column-level security to restrict access to specific columns.
C. Store encryption keys on an external server to manage encryption manually.
D. Implement Role-Based Access Control (RBAC) to assign roles and permissions.
E. Enable Secure Data Sharing with external partners for collaborative purposes.
Answer: B, D
Snowflake provides built-in governance and security mechanisms that align with global regulatory requirements. Column-level security, implemented through dynamic data masking and external tokenization, allows architects to restrict access to sensitive data at a granular level based on roles or conditions (Answer B). This is essential for compliance with regulations such as GDPR, HIPAA, and similar frameworks that require limiting access to personally identifiable or sensitive data.
Role-Based Access Control (RBAC) is the foundation of Snowflake’s security model and is critical for governing who can access which data and perform which actions (Answer D). By assigning privileges to roles instead of users, organizations can centrally manage permissions, enforce separation of duties, and audit access more effectively.
Snowflake does not support column-level network policies, and encryption keys are managed by Snowflake (or via Tri-Secret Secure), not manually by customers. Secure Data Sharing is useful for collaboration but is not a core requirement for governance and compliance in this scenario. For the SnowPro Architect exam, mastering RBAC and column-level security is essential for designing compliant and secure Snowflake architectures.
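To illustrate both recommendations together (policy, table, column, role, and user names are hypothetical), a masking policy plus role-based grants might look like this when issued through the same connector:

    # Column-level security: mask email values for any role other than PII_ANALYST.
    cur.execute("""
        CREATE OR REPLACE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
            CASE WHEN CURRENT_ROLE() IN ('PII_ANALYST') THEN val ELSE '***MASKED***' END
    """)
    cur.execute("ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY email_mask")

    # RBAC: privileges are granted to roles, and roles are granted to users.
    cur.execute("GRANT SELECT ON TABLE customers TO ROLE reporting_analyst")
    cur.execute("GRANT ROLE reporting_analyst TO USER jane_doe")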

