# Snowflake Audit Logs

## Overview

Panther can fetch [Snowflake](https://www.snowflake.com/en/) audit information by querying the views in the [`ACCOUNT_USAGE` schema](https://docs.snowflake.com/en/sql-reference/account-usage) in the `SNOWFLAKE` database (or similarly named views in a custom database/schema). Data from these views can be enriched with "state data" in Snowflake—learn more on [Snowflake Enrichment](https://docs.panther.com/enrichment/snowflake).

{% hint style="info" %}
You can use this integration to monitor any Snowflake instance. However, to monitor your Panther-connected Snowflake instance, it's recommended to instead use [Scheduled Searches](https://docs.panther.com/search/scheduled-searches)—see [Scheduled Search Examples](https://docs.panther.com/search/scheduled-searches/examples#database-snowflake-monitoring).
{% endhint %}

You can monitor databases in any Snowflake cloud or region, but the instance's cloud and region can affect [generated cost](#cost-considerations).

The available views include:

* [ACCESS\_HISTORY](https://docs.snowflake.com/en/sql-reference/account-usage/access_history)
* [DATA\_TRANSFER\_HISTORY](https://docs.snowflake.com/en/sql-reference/account-usage/data_transfer_history)
* [LOGIN\_HISTORY](https://docs.snowflake.com/en/sql-reference/account-usage/login_history)
* [QUERY\_HISTORY](https://docs.snowflake.com/en/sql-reference/account-usage/query_history)
* [SESSIONS](https://docs.snowflake.com/en/sql-reference/account-usage/sessions)

{% hint style="warning" %}
The [ACCESS\_HISTORY view](https://docs.snowflake.com/en/sql-reference/account-usage/access_history) requires the [Enterprise Edition of Snowflake](https://docs.snowflake.com/en/user-guide/intro-editions) or higher.
{% endhint %}

### Latency

Total data latency is a combination of Snowflake and Panther latency:

* Latency varies for each of the available Snowflake views, and can, in certain cases, be as high as three hours. To verify latency for each view, consult the **Latency** column of the **ACCOUNT\_USAGE views** table in [this Snowflake documentation](https://docs.snowflake.com/en/sql-reference/account-usage#account-usage-views).
* Panther adds at least one hour of latency.
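
As a worked example (figures for illustration only; actual latency varies per view), combining a view whose ACCOUNT\_USAGE latency is up to three hours with Panther's minimum of one hour gives a worst case of roughly four hours:

```python
# Worst-case end-to-end latency, using illustrative figures.
view_latency_hours = 3      # e.g., up to ~3 hours for some ACCOUNT_USAGE views
panther_latency_hours = 1   # Panther adds at least one hour
worst_case_total_hours = view_latency_hours + panther_latency_hours
print(worst_case_total_hours)  # 4
```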

### Cost considerations

Snowflake compute costs incurred by using this integration are affected by various factors, including:

* The [warehouse](https://docs.snowflake.com/en/user-guide/warehouses) you select for Panther to use
  * Panther must execute queries to pull data, so it requires an active warehouse.
  * **You can minimize costs by**: selecting a warehouse that is already running.
* The data refresh interval
  * When setting up the log source in Panther, you will choose how often you'd like to pull data from Snowflake. The interval can be as frequent as every minute or as infrequent as every 24 hours. Set it based on your desired latency-to-cost balance.
  * **You can minimize costs by**: choosing a longer refresh interval.
* Whether the Snowflake instance you're monitoring is in the same cloud and region as your Panther Snowflake instance
  * **You can minimize costs by**: monitoring an instance in the same cloud and region as your Panther Snowflake instance.

Learn more in Snowflake's [Understanding overall cost](https://docs.snowflake.com/en/user-guide/cost-understanding-overall) documentation.
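
To build intuition for how the refresh interval drives warehouse activity, here is a rough sketch. It assumes one query per monitored view per refresh, which is an illustration rather than a documented guarantee:

```python
def pulls_per_day(interval_minutes: int, monitored_views: int) -> int:
    """Rough number of Snowflake queries Panther issues per day,
    assuming one query per monitored view per refresh (an assumption
    for illustration, not a documented guarantee)."""
    return (24 * 60 // interval_minutes) * monitored_views

print(pulls_per_day(interval_minutes=15, monitored_views=5))    # 480
print(pulls_per_day(interval_minutes=1440, monitored_views=5))  # 5
```

Even as a rough model, this shows why a longer interval keeps the warehouse idle more of the day.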

### Limitations

Snowflake audit log events that exceed [Panther's limit of 15 MB](https://docs.panther.com/data-onboarding/..#data-ingestion-size-limit) will be skipped.

## How to onboard Snowflake Audit Logs to Panther

Note that after you've set up the initial integration, you can [rotate the RSA key associated with your Snowflake user](#how-to-rotate-the-rsa-key-of-an-existing-snowflake-audit-logs-source).

### Prerequisites

To configure this integration, you must:

* Have your [Snowflake account identifier](https://docs.snowflake.com/en/user-guide/admin-account-identifier). It should be formatted with a hyphen (not a period), like: `<org_name>-<account_name>`
* Have a Snowflake [warehouse](https://docs.snowflake.com/en/user-guide/warehouses) Panther can use to execute queries to pull data
* In Snowflake, have `CREATE USER`, `CREATE ROLE`, and `GRANT USAGE` permissions
  * This is only required if you will be creating a service user in Snowflake for Panther to use. If you already have a service user Panther can use, you do not need to have these permissions.
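
Because the hyphenated form trips people up, here is a tiny helper (the org and account names are hypothetical) that normalizes the dot-separated identifier some Snowflake surfaces display into the form Panther expects:

```python
def to_panther_account_identifier(identifier: str) -> str:
    """Normalize a Snowflake account identifier to the hyphenated
    <org_name>-<account_name> form Panther expects. Accepts the
    dot-separated form shown in some Snowflake surfaces."""
    return identifier.replace(".", "-", 1)

print(to_panther_account_identifier("myorg.prod_account"))  # myorg-prod_account
print(to_panther_account_identifier("myorg-prod_account"))  # myorg-prod_account
```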

### Step 1: Create a worksheet in Snowsight

{% hint style="info" %}
This step is only required if you need to create a service user in Snowflake that Panther can use to pull data. If you already have a service user Panther can use, skip this step.
{% endhint %}

* In Snowsight, [create a worksheet](https://docs.snowflake.com/en/user-guide/ui-snowsight-worksheets-gs#create-worksheets-in-sf-web-interface) using a role that has the `CREATE USER`, `CREATE ROLE`, and `GRANT USAGE` privileges.

### Step 2: Create a new Snowflake log source in Panther

1. In the left-hand navigation bar of your Panther Console, click **Configure** > **Log Sources**.
2. Click **Create New.**
3. Search for “Snowflake Audit Logs,” then click its tile.
4. On the slide-out panel, click **Start Setup**.

   <figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-fe2dd60002ff79bf23a3685f49dc5afb21d4f162%2FScreenshot%202024-10-09%20at%205.31.42%20PM.png?alt=media" alt="An arrow is drawn from a tile titled &#x22;Snowflake Audit Logs&#x22; to a button labeled &#x22;Start Setup.&#x22;" width="563"><figcaption></figcaption></figure>
5. On the **Configure** page, enter values for the following fields:
   * **Name**: Enter a descriptive name for the source, e.g. `Snowflake Prod`.
   * **Account Identifier**: Enter your Snowflake account identifier in the format `<org_name>-<account_name>`.
     * Use a hyphen, not a period, between the org and account names.
   * **Warehouse**: Enter the Snowflake warehouse Panther will use to execute queries to pull data.
   * (Optional) **Use custom database and schema**: If you have replicated the audit log views into a custom database and schema and would like Panther to query those (instead of the default `SNOWFLAKE` database and `ACCOUNT_USAGE` schema), click this option.
     * **Database**: Enter the name of the custom database Panther will query.
     * **Schema**: Enter the name of the custom schema Panther will query.
   * **Run Every**: Use the **Number** and **Period** fields to choose the interval at which you'd like Panther to pull data from Snowflake.
     * See [Cost considerations](#cost-considerations) to learn how the interval can affect compute costs.
   * **Monitored Log Types**: Select the Snowflake views you'd like Panther to fetch.

{% hint style="warning" %}
Using a custom database and schema is rare. Doing so may appeal to you if you'd like to narrow the permissions of the service role you'll create in a later step to a certain Snowflake database and schema of your own.\
\
If you use a custom database and schema, the names of the views within the schema must exactly match the built-in Snowflake view names (listed in the [Overview](#overview)), as these are the names Panther expects. For example, if you'd like to pull query history logs, your view must be named `query_history`.
{% endhint %}
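
As an illustration (view names below are hypothetical), a quick check that the views in a custom schema use names Panther recognizes:

```python
# Built-in ACCOUNT_USAGE view names Panther expects (see the Overview above).
EXPECTED_VIEWS = {
    "ACCESS_HISTORY",
    "DATA_TRANSFER_HISTORY",
    "LOGIN_HISTORY",
    "QUERY_HISTORY",
    "SESSIONS",
}

def unexpected_views(custom_schema_views):
    """Return view names Panther would not recognize."""
    return {v.upper() for v in custom_schema_views} - EXPECTED_VIEWS

# A view named QUERY_LOGS would be silently ignored:
print(unexpected_views(["query_history", "QUERY_LOGS"]))  # {'QUERY_LOGS'}
```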

6. Click **Setup.**
7. On the **Set Credentials** page, fill in the form fields. Panther will generate an RSA key based on these values.
   * **Username**: The username of the Snowflake user Panther will use to pull data. The default value is `PANTHER_AUDIT_VIEW_USER`, but you may customize this.
     * If you already have a service user for Panther to use (and don't need to create a new one), enter its username here.
   * **Role**: The name of the role possessed by the Snowflake user that Panther will use to pull data. The default value is `PANTHER_AUDIT_VIEW_ROLE`, but you may customize this.
     * If you already have a service role for Panther to use (and don't need to create a new one), enter its name here.
8. If you already have a service user for Panther to use (and don't need to create a new one), click **I want to use my own RSA key**, then upload your RSA key file.

   <figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-eff37af44d0a5c5a422b281d87e631b6344b9943%2FScreenshot%202025-04-14%20at%209.48.39%E2%80%AFAM.png?alt=media" alt="" width="563"><figcaption></figcaption></figure>
9. Click **Next**.
10. On the **Enrichment** page, if you'd like to enrich incoming logs with one or more of the [supported Snowflake enrichment types](https://docs.panther.com/enrichment/snowflake#supported-enrichment-types), click the toggle to `ON` on the tile of each type you'd like to enable, and set its **Refresh period (min)**.

    * The minimum refresh period is 60 minutes. If your data changes infrequently, it's recommended to increase this value.
    * If you toggle any of these enrichment sources on, they will be visible on the **Configure** > **Enrichments** page. Learn more on [Snowflake Enrichment](https://docs.panther.com/enrichment/snowflake).

    <figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-c9ad3f8bad5fe8dc94932ea7335c56c2e5444f6c%2FScreenshot%202025-08-06%20at%204.31.31%E2%80%AFPM.png?alt=media" alt="Under an &#x22;Enrichment settings&#x22; title, there are six tiles, each with a title, description, an ON/OFF toggle, and a Refresh period (min) field."><figcaption></figcaption></figure>
11. Click **Setup**.
12. If you did not upload your own RSA key, use the generated SQL snippet to create a service user for Panther to use. (Panther generates an RSA key on your behalf and surfaces only its public portion.)
    1. Copy the generated SQL snippet.
    2. Run the SQL snippet in a [Snowsight worksheet](https://docs.snowflake.com/en/user-guide/ui-snowsight-worksheets-gs).
    3. Click **Setup**.
13. If everything is correct, you will be directed to a success screen:

    <figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-e55cedf82c6a6adc66ec5c14ebdcb164c3b1dcca%2FScreenshot%202023-08-03%20at%204.33.30%20PM.png?alt=media" alt="The success screen reads, &#x22;Everything looks good! Panther will now automatically pull &#x26; process logs from your account&#x22;" width="281"><figcaption></figcaption></figure>

    * You can optionally enable one or more [Detection Packs](https://docs.panther.com/detections/panther-managed/packs).
    * The **Trigger an alert when no events are processed** setting defaults to **YES**. We recommend leaving this enabled, as you will be alerted if data stops flowing from the log source after a certain period of time. The timeframe is configurable, with a default of 24 hours.

      <figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-c48119abd559990173004bde99ff4907fdd2ded2%2FScreenshot%202023-08-03%20at%204.26.54%20PM.png?alt=media" alt="The &#x22;Trigger an alert when no events are processed&#x22; toggle is set to YES. The &#x22;How long should Panther wait before it sends you an alert that no events have been processed&#x22; setting is set to 1 Day" width="320"><figcaption></figcaption></figure>
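
If you opted to upload your own RSA key in step 8, Snowflake key-pair authentication expects a PKCS#8 private key, with the public portion set on the service user. A minimal sketch using the third-party `cryptography` package (assumed to be installed; consult Snowflake's key-pair authentication documentation for the authoritative procedure):

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a 2048-bit RSA key pair for the Snowflake service user.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Snowflake key-pair authentication expects a PKCS#8 private key.
private_pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.NoEncryption(),
)

# The public portion is what gets associated with the Snowflake user
# (Snowflake expects it without the PEM header/footer lines).
public_pem = key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
```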

## How to rotate the RSA key of an existing Snowflake Audit Logs source

To rotate the RSA key associated with a Snowflake user connected to an existing Snowflake Audit Logs source—without interrupting the incoming flow of logs:

1. In the left-hand navigation bar of your Panther Console, click **Configure** > **Log Sources**.
2. In the **Log Sources** list, locate the Snowflake Audit Logs source you'd like to update, and click its name.
3. On the log source's details page, click the **Configuration** tab, then **Edit**.

   <figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-2acf27b9820b212bcd22aac2aa2a8bd46e4d69a0%2FScreenshot%202025-09-18%20at%2012.39.47%E2%80%AFPM.png?alt=media" alt="Under a &#x22;Snowflake audit logs source,&#x22; an arrow is drawn from a &#x22;Configuration&#x22; tab to an &#x22;Edit&#x22; button."><figcaption></figcaption></figure>
4. Click **Set Credentials**.
5. Click **Rotate RSA Key**.

   <figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-ff3126f2fce5c3d7ed32e0768d131354ce5bb918%2FScreenshot%202025-09-18%20at%2012.42.52%E2%80%AFPM.png?alt=media" alt="An arrow is drawn from a &#x22;Set Credentials&#x22; button to a &#x22;Rotate RSA Key&#x22; button."><figcaption></figcaption></figure>
6. On the **Rotate RSA Key** pop-up modal, click **Rotate RSA Key**.
7. Copy the provided **RSA Key Rotation SQL**, and run it in a [Snowsight worksheet](https://docs.snowflake.com/en/user-guide/ui-snowsight-worksheets-gs) (using a privileged user, like `ACCOUNTADMIN`).
8. In Panther, click **Save**.

## Supported log types

### Snowflake.AccessHistory

```yaml
schema: Snowflake.AccessHistory
description: Snowflake access history log
referenceURL: https://docs.snowflake.com/en/sql-reference/account-usage/access_history
fields:
  - name: BASE_OBJECTS_ACCESSED
    description: List of base objects accessed during the query
    type: array
    element:
      type: json
  - name: DIRECT_OBJECTS_ACCESSED
    description: List of direct objects accessed during the query
    type: array
    element:
      type: json
  - name: OBJECTS_MODIFIED
    description: List of objects modified during the query
    type: array
    element:
      type: json
  - name: POLICIES_REFERENCED
    description: List of policies referenced during the query
    type: array
    element:
      type: json
  - name: OBJECT_MODIFIED_BY_DDL
    description: Object modified by DDL during the query
    type: json
  - name: QUERY_ID
    description: Unique identifier for the query
    type: string
  - name: QUERY_START_TIME
    required: true
    description: The start time of the query
    type: timestamp
    timeFormats:
      - '%Y-%m-%d %H:%M:%S.%f %z'
      - '%a, %d %b %Y %H:%M:%S %z'
    isEventTime: true
  - name: USER_NAME
    description: Name of the user who executed the query
    type: string
    indicators:
      - username
  - name: PARENT_QUERY_ID
    description: The query ID of the parent job or NULL if the job does not have a parent.
    type: string
  - name: ROOT_QUERY_ID
    description: The query ID of the topmost job in the chain or NULL if the job does not have a parent.
    type: string
```
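
The `*_OBJECTS_ACCESSED` fields are arrays of JSON objects. A sketch of pulling the base tables a query read, using a hand-written (hypothetical) event shaped like Snowflake's documented ACCESS\_HISTORY JSON:

```python
# A hypothetical Snowflake.AccessHistory event, trimmed to the fields used here.
event = {
    "USER_NAME": "ETL_USER",
    "BASE_OBJECTS_ACCESSED": [
        {
            "objectDomain": "Table",
            "objectName": "PROD.PUBLIC.CUSTOMERS",
            "columns": [{"columnName": "EMAIL"}, {"columnName": "NAME"}],
        },
    ],
}

def tables_touched(evt):
    """Names of the base tables the query read."""
    return [
        obj["objectName"]
        for obj in evt.get("BASE_OBJECTS_ACCESSED", [])
        if obj.get("objectDomain") == "Table"
    ]

print(tables_touched(event))  # ['PROD.PUBLIC.CUSTOMERS']
```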

### Snowflake.DataTransferHistory

```yaml
schema: Snowflake.DataTransferHistory
description: Snowflake History Of Data Transfers
fields:
    - name: ORGANIZATION_NAME
      required: true
      type: string
    - name: ACCOUNT_NAME
      required: true
      type: string
    - name: ACCOUNT_LOCATOR
      required: true
      type: string
    - name: REGION
      type: string
    - name: USAGE_DATE
      required: true
      type: timestamp
      timeFormats:
        - '%Y-%m-%d %H:%M:%S.%f %z'
      isEventTime: true
    - name: SOURCE_CLOUD
      type: string
    - name: SOURCE_REGION
      type: string
    - name: TARGET_CLOUD
      type: string
    - name: TARGET_REGION
      type: string
    - name: BYTES_TRANSFERRED
      required: true
      type: bigint
    - name: TRANSFER_TYPE
      type: string
```
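
`BYTES_TRANSFERRED` is a plain byte count, so aggregating a batch of events (the sample events below are hypothetical) is straightforward:

```python
def gib_transferred(events):
    """Total BYTES_TRANSFERRED across a batch of events, in GiB."""
    return sum(e.get("BYTES_TRANSFERRED", 0) for e in events) / 2**30

# Hypothetical events, trimmed to the field used here.
batch = [{"BYTES_TRANSFERRED": 2**30}, {"BYTES_TRANSFERRED": 2**29}]
print(gib_transferred(batch))  # 1.5
```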

### Snowflake.LoginHistory

```yaml
schema: Snowflake.LoginHistory
description: Snowflake login history log
fields:
    - name: CLIENT_IP
      description: IP address of the client initiating the login
      type: string
      indicators:
        - ip
    - name: EVENT_ID
      required: true
      description: Unique identifier for the event
      type: string
    - name: EVENT_TIMESTAMP
      required: true
      description: Timestamp of the event
      type: timestamp
      timeFormats:
        - '%Y-%m-%d %H:%M:%S.%f %z'
      isEventTime: true
    - name: EVENT_TYPE
      description: Type of the event (e.g., LOGIN, LOGOUT)
      type: string
    - name: FIRST_AUTHENTICATION_FACTOR
      description: The first authentication factor used
      type: string
    - name: IS_SUCCESS
      description: Indicates if the event was successful (YES/NO)
      type: string
    - name: RELATED_EVENT_ID
      description: Identifier for a related event, if any
      type: string
    - name: REPORTED_CLIENT_TYPE
      description: Type of the client reported
      type: string
    - name: REPORTED_CLIENT_VERSION
      description: Version of the client reported
      type: string
    - name: USER_NAME
      description: Name of the user involved in the event
      type: string
      indicators:
        - username
```
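
Note that `IS_SUCCESS` is the string `YES`/`NO`, not a boolean. A sketch of filtering for failed logins, using hypothetical sample events:

```python
def failed_logins(events):
    """Snowflake records login success as the string 'YES'/'NO' in
    IS_SUCCESS, not as a boolean, so compare against the string."""
    return [e for e in events if e.get("IS_SUCCESS") == "NO"]

# Hypothetical events, trimmed to the fields used here.
batch = [
    {"USER_NAME": "ALICE", "IS_SUCCESS": "YES"},
    {"USER_NAME": "MALLORY", "IS_SUCCESS": "NO"},
]
print([e["USER_NAME"] for e in failed_logins(batch)])  # ['MALLORY']
```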

### Snowflake.QueryHistory

```yaml
schema: Snowflake.QueryHistory
description: Snowflake query history log
fields:
    - name: BYTES_DELETED
      description: Number of bytes deleted
      type: bigint
    - name: BYTES_READ_FROM_RESULT
      description: Number of bytes read from the result
      type: bigint
    - name: BYTES_SCANNED
      description: Number of bytes scanned
      type: bigint
    - name: BYTES_SENT_OVER_THE_NETWORK
      description: Number of bytes sent over the network
      type: bigint
    - name: BYTES_SPILLED_TO_LOCAL_STORAGE
      description: Number of bytes spilled to local storage
      type: bigint
    - name: BYTES_SPILLED_TO_REMOTE_STORAGE
      description: Number of bytes spilled to remote storage
      type: bigint
    - name: BYTES_WRITTEN
      description: Number of bytes written
      type: bigint
    - name: BYTES_WRITTEN_TO_RESULT
      description: Number of bytes written to the result
      type: bigint
    - name: CHILD_QUERIES_WAIT_TIME
      description: Wait time for child queries
      type: int
    - name: CLUSTER_NUMBER
      description: Number of the cluster
      type: int
    - name: COMPILATION_TIME
      description: Time taken for query compilation
      type: int
    - name: CREDITS_USED_CLOUD_SERVICES
      description: Credits used for cloud services
      type: float
    - name: DATABASE_ID
      description: Database identifier
      type: string
    - name: DATABASE_NAME
      description: Name of the database
      type: string
    - name: END_TIME
      description: The end time of the query
      type: timestamp
      timeFormats:
        - '%Y-%m-%d %H:%M:%S.%f %z'
    - name: EXECUTION_STATUS
      description: Status of query execution
      type: string
    - name: EXECUTION_TIME
      description: Time taken for query execution
      type: int
    - name: EXTERNAL_FUNCTION_TOTAL_INVOCATIONS
      description: Total invocations of external functions
      type: int
    - name: EXTERNAL_FUNCTION_TOTAL_RECEIVED_BYTES
      description: Total bytes received by external functions
      type: int
    - name: EXTERNAL_FUNCTION_TOTAL_RECEIVED_ROWS
      description: Total rows received by external functions
      type: int
    - name: EXTERNAL_FUNCTION_TOTAL_SENT_BYTES
      description: Total bytes sent by external functions
      type: int
    - name: EXTERNAL_FUNCTION_TOTAL_SENT_ROWS
      description: Total rows sent by external functions
      type: int
    - name: INBOUND_DATA_TRANSFER_BYTES
      description: Inbound data transfer in bytes
      type: int
    - name: IS_CLIENT_GENERATED_STATEMENT
      description: Whether the statement was generated by a client
      type: boolean
    - name: LIST_EXTERNAL_FILES_TIME
      description: Time taken to list external files
      type: int
    - name: OUTBOUND_DATA_TRANSFER_BYTES
      description: Outbound data transfer in bytes
      type: int
    - name: PARTITIONS_SCANNED
      description: Number of partitions scanned
      type: int
    - name: PARTITIONS_TOTAL
      description: Total number of partitions
      type: int
    - name: PERCENTAGE_SCANNED_FROM_CACHE
      description: Percentage of data scanned from cache
      type: float
    - name: QUERY_ACCELERATION_BYTES_SCANNED
      description: Bytes scanned for query acceleration
      type: int
    - name: QUERY_ACCELERATION_PARTITIONS_SCANNED
      description: Partitions scanned for query acceleration
      type: int
    - name: QUERY_ACCELERATION_UPPER_LIMIT_SCALE_FACTOR
      description: Upper limit scale factor for query acceleration
      type: int
    - name: QUERY_HASH
      description: Hash of the query string
      type: string
    - name: QUERY_HASH_VERSION
      description: Hash version
      type: string
    - name: QUERY_ID
      required: true
      description: Unique identifier for the query
      type: string
    - name: QUERY_LOAD_PERCENT
      description: Load percentage during the query
      type: float
    - name: QUERY_PARAMETERIZED_HASH
      description: Hash of the parameterized query
      type: string
    - name: QUERY_PARAMETERIZED_HASH_VERSION
      description: Hash version of the parameterized query
      type: string
    - name: QUERY_TAG
      description: Tag associated with the query
      type: string
    - name: QUERY_TEXT
      description: Text of the query
      type: string
    - name: QUERY_TYPE
      description: Type of the query
      type: string
    - name: QUEUED_OVERLOAD_TIME
      description: Time spent in queue due to overload
      type: int
    - name: QUEUED_PROVISIONING_TIME
      description: Time spent in queue for provisioning
      type: int
    - name: QUEUED_REPAIR_TIME
      description: Time spent in queue for repair
      type: int
    - name: RELEASE_VERSION
      description: Version of the release
      type: string
    - name: ROLE_NAME
      description: Name of the role
      type: string
    - name: ROLE_TYPE
      description: Type of the role
      type: string
    - name: ROWS_DELETED
      description: Number of rows deleted
      type: int
    - name: ROWS_INSERTED
      description: Number of rows inserted
      type: int
    - name: ROWS_UNLOADED
      description: Number of rows unloaded
      type: int
    - name: ROWS_UPDATED
      description: Number of rows updated
      type: int
    - name: ROWS_WRITTEN_TO_RESULT
      description: Number of rows written to the result
      type: int
    - name: SCHEMA_ID
      description: Identifier for the schema
      type: string
    - name: SCHEMA_NAME
      description: Name of the schema
      type: string
    - name: SECONDARY_ROLE_STATS
      description: Secondary role stats
      type: string
    - name: SESSION_ID
      description: Identifier for the session
      type: string
    - name: START_TIME
      required: true
      description: The start time of the query
      type: timestamp
      timeFormats:
        - '%Y-%m-%d %H:%M:%S.%f %z'
      isEventTime: true
    - name: TOTAL_ELAPSED_TIME
      description: Total elapsed time for the query
      type: int
    - name: TRANSACTION_BLOCKED_TIME
      description: Time the transaction was blocked
      type: int
    - name: TRANSACTION_ID
      description: Identifier for the transaction
      type: string
    - name: USER_NAME
      description: Name of the user
      type: string
      indicators:
        - username
    - name: WAREHOUSE_ID
      description: Identifier for the warehouse
      type: string
    - name: WAREHOUSE_NAME
      description: Name of the warehouse
      type: string
    - name: WAREHOUSE_SIZE
      description: Size of the warehouse
      type: string
    - name: WAREHOUSE_TYPE
      description: Type of the warehouse
      type: string
```
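
The three `QUEUED_*` fields report queue time in milliseconds and may be absent or null on a given event. A sketch of summing them safely:

```python
QUEUE_FIELDS = (
    "QUEUED_OVERLOAD_TIME",
    "QUEUED_PROVISIONING_TIME",
    "QUEUED_REPAIR_TIME",
)

def total_queued_ms(event):
    """Sum the queue-time fields (reported in milliseconds), treating
    absent or null values as zero."""
    return sum(event.get(f) or 0 for f in QUEUE_FIELDS)

# A hypothetical event with one field set and one null:
print(total_queued_ms({"QUEUED_OVERLOAD_TIME": 1200, "QUEUED_REPAIR_TIME": None}))  # 1200
```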

### Snowflake.Sessions

```yaml
schema: Snowflake.Sessions
description: Snowflake session history log
fields:
    - name: AUTHENTICATION_METHOD
      description: Method used for authentication
      type: string
    - name: CLIENT_APPLICATION_ID
      description: ID of the client application
      type: string
    - name: CLIENT_APPLICATION_VERSION
      description: Version of the client application
      type: string
    - name: CLIENT_BUILD_ID
      description: Build ID of the client application
      type: string
    - name: CLIENT_ENVIRONMENT
      description: Environment information of the client application (e.g., OS, version)
      type: json
      isEmbeddedJSON: true
    - name: CLIENT_VERSION
      description: Version of the client
      type: string
    - name: CLOSED_REASON
      description: Reason why the session was closed
      type: string
    - name: CREATED_ON
      required: true
      description: Timestamp when the session was created
      type: timestamp
      timeFormats:
        - '%Y-%m-%d %H:%M:%S.%f %z'
      isEventTime: true
    - name: LOGIN_EVENT_ID
      description: Unique identifier for the login event
      type: string
    - name: SESSION_ID
      required: true
      description: Unique identifier for the session
      type: string
    - name: USER_NAME
      description: Name of the user
      type: string
      indicators: 
        - username
```
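
Because `CLIENT_ENVIRONMENT` is embedded JSON (a JSON object serialized into a string field, per `isEmbeddedJSON` above), it must be parsed before its attributes can be read. A sketch, with a hypothetical sample event:

```python
import json

def client_os(event):
    """Parse the embedded-JSON CLIENT_ENVIRONMENT field and return the
    client operating system, if present."""
    env = event.get("CLIENT_ENVIRONMENT")
    if isinstance(env, str):
        env = json.loads(env)
    return (env or {}).get("OS")

sample = {"CLIENT_ENVIRONMENT": '{"APPLICATION": "PythonConnector", "OS": "Linux"}'}
print(client_os(sample))  # Linux
```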
