Snowflake Audit Logs (Beta)

Panther supports pulling Audit Logs directly from Snowflake's ACCOUNT_USAGE schema

Overview

This feature is in open beta starting with Panther version 1.111, and is available to all customers. Please share any bug reports and feature requests with your Panther support team.

Panther can fetch Snowflake audit information by querying the views in the ACCOUNT_USAGE schema in the SNOWFLAKE database. Data from these views can be enriched with "state data" in Snowflake—learn more on Snowflake Enrichment.
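
For a concrete sense of the data involved, you can query one of these views directly in a Snowsight worksheet. For example, the following illustrative query (Panther issues its own queries automatically on the interval you configure) returns the most recent login events:

    SELECT *
    FROM snowflake.account_usage.login_history
    ORDER BY event_timestamp DESC
    LIMIT 100;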

You can use this integration to monitor any Snowflake instance; however, to monitor your Panther-connected Snowflake instance, it's recommended to instead use Scheduled Searches—see Scheduled Search Examples.

Databases in any Snowflake cloud or region may be monitored, but these factors can affect the cost incurred (see Cost considerations below).

The available views are listed in the Supported log types section below.

Latency

Total data latency is a combination of Snowflake and Panther latency:

  • Latency varies for each of the available Snowflake views, and can, in certain cases, be as high as three hours. To verify latency for each view, consult the Latency column of the ACCOUNT_USAGE views table in this Snowflake documentation.

  • Panther adds at least one hour of latency.

Cost considerations

Snowflake compute costs incurred by using this integration are affected by various factors, including:

  • The warehouse you select for Panther to use

    • Panther must execute queries to pull data, so it needs to use an active warehouse.

    • You can minimize costs by: selecting a warehouse that is already running.

  • The data refresh interval

    • When setting up the log source in Panther, you will choose how often you'd like to pull data from Snowflake. The interval can be as short as one minute or as long as 24 hours. Set this interval based on your desired latency-to-cost balance.

    • You can minimize costs by: choosing a longer refresh interval.

  • Whether the cloud and region of the Snowflake instance you're monitoring are the same as those of your Panther Snowflake instance

    • You can minimize costs by: monitoring an instance in the same cloud and region as your Panther Snowflake instance.

Learn more on Snowflake's Understanding overall cost documentation.
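
To keep an eye on the compute cost of the integration itself, you can sum the credits consumed by the warehouse you assigned to Panther. A minimal sketch, assuming a placeholder warehouse named PANTHER_WH:

    SELECT warehouse_name,
           SUM(credits_used) AS total_credits_used
    FROM snowflake.account_usage.warehouse_metering_history
    WHERE start_time > DATEADD('day', -7, CURRENT_TIMESTAMP())
      AND warehouse_name = 'PANTHER_WH'  -- placeholder: the warehouse you selected for Panther
    GROUP BY warehouse_name;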

How to onboard Snowflake Audit Logs to Panther

Prerequisites

To configure this integration, you must:

  • Have your Snowflake account identifier (see the example query after this list). It should be formatted with a hyphen (not a period), like: <org_name>-<account_name>

  • Have a Snowflake warehouse Panther can use to execute queries to pull data

  • In Snowflake, have CREATE USER, CREATE ROLE, and GRANT USAGE permissions

    • This is only required if you will be creating a service user in Snowflake for Panther to use. If you already have a service user Panther can use, you do not need to have these permissions.
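
If you're unsure of your account identifier, you can retrieve it in the required hyphenated format with a query like:

    SELECT CURRENT_ORGANIZATION_NAME() || '-' || CURRENT_ACCOUNT_NAME() AS account_identifier;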

Step 1: Create a worksheet in Snowsight

This step is only required if you need to create a service user in Snowflake that Panther can use to pull data. If you already have a service user Panther can use, skip this step.

  • In Snowsight, create a worksheet, making sure the role you run it with has the CREATE USER, CREATE ROLE, and GRANT USAGE permissions (see the sketch below).
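
If your role lacks these permissions, account-level grants along the following lines would provide them. This is a sketch assuming a hypothetical role named my_admin_role, run by ACCOUNTADMIN or another sufficiently privileged role:

    GRANT CREATE USER ON ACCOUNT TO ROLE my_admin_role;  -- placeholder role name
    GRANT CREATE ROLE ON ACCOUNT TO ROLE my_admin_role;

Note that granting USAGE on the warehouse to a new role additionally requires ownership of that warehouse, the MANAGE GRANTS privilege, or a prior grant made WITH GRANT OPTION.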

Step 2: Create a new Snowflake log source in Panther

  1. In the left-hand navigation bar of your Panther Console, click Configure > Log Sources.

  2. Click Create New.

  3. Search for “Snowflake Audit Logs,” then click its tile.

  4. On the slide-out panel, click Start Setup.

  5. On the Configure page, enter values for the following fields:

    • Name: Enter a descriptive name for the source, e.g. Snowflake Prod.

    • Account Identifier: Enter your Snowflake account identifier in the format <org_name>-<account_name>.

      • Use a hyphen, not a period, between the org and account names.

    • Warehouse: Enter the Snowflake warehouse Panther will use to execute queries to pull data.

    • Run Every: Use the Number and Period fields to choose the interval on which you'd like Panther to pull data from Snowflake.

    • Monitored Log Types: Select the Snowflake views you'd like Panther to fetch.

  6. Click Setup.

  7. On the Set Credentials page, fill in the form fields. Panther will generate an RSA key based on these values.

    • Username: The username of the Snowflake user Panther will use to pull data. The default value is PANTHER_AUDIT_VIEW_USER, but you may customize this.

      • If you already have a service user for Panther to use (and don't need to create a new one), enter its username here.

    • Role: The name of the role possessed by the Snowflake user that Panther will use to pull data. The default value is PANTHER_AUDIT_VIEW_ROLE, but you may customize this.

      • If you already have a service role for Panther to use (and don't need to create a new one), enter its name here.

  8. If you already have a service user for Panther to use (and don't need to create a new one), click I want to use my own RSA key, then upload your RSA key file.

  9. Click Next.

  10. On the Enrichment page, if you would like to enrich incoming logs with one or more of the supported Snowflake enrichment types, toggle them ON.

    • Snowflake enrichment data is refreshed on the same interval as your log data, which you specified in the Run Every field on the Configure page. If you toggle any of these sources on, they will be visible on the Configure > Enrichment Providers page. Learn more on Snowflake Enrichment.

  11. If you did not upload your own RSA key, create a service user for Panther to use with the generated SQL snippet. Panther generates an RSA key pair and serves only the public key. (An illustrative sketch of such a snippet appears after these steps.)

    1. Copy the generated SQL snippet.

    2. Run the SQL snippet in a Snowsight worksheet.

    3. Click Setup.

  12. If everything is correct, you will be directed to a success screen that reads, "Everything looks good! Panther will now automatically pull & process logs from your account."
    • You can optionally enable one or more Detection Packs.

    • The Trigger an alert when no events are processed setting defaults to YES. We recommend leaving this enabled, as you will be alerted if data stops flowing from the log source after a certain period of time. The timeframe is configurable, with a default of 24 hours.

      The "Trigger an alert when no events are processed" toggle is set to YES. The "How long should Panther wait before it sends you an alert that no events have been processed" setting is set to 1 Day

Supported log types

Snowflake.AccessHistory

schema: Snowflake.AccessHistory
description: Snowflake access history log
fields:
    - name: BASE_OBJECTS_ACCESSED
      required: true
      description: List of base objects accessed during the query
      type: array
      element:
        type: json
    - name: DIRECT_OBJECTS_ACCESSED
      description: List of direct objects accessed during the query
      type: array
      element:
        type: json
    - name: OBJECTS_MODIFIED
      description: List of objects modified during the query
      type: array
      element:
        type: json
    - name: POLICIES_REFERENCED
      description: List of policies referenced during the query
      type: array
      element:
        type: json
    - name: OBJECT_MODIFIED_BY_DDL
      description: Object modified by DDL during the query
      type: json
    - name: QUERY_ID
      description: Unique identifier for the query
      type: string
    - name: QUERY_START_TIME
      required: true
      description: The start time of the query
      type: timestamp
      timeFormats:
        - '%Y-%m-%d %H:%M:%S.%f %z'
      isEventTime: true
    - name: USER_NAME
      description: Name of the user who executed the query
      type: string
      indicators:
        - username
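
The JSON array fields above can be expanded with LATERAL FLATTEN when querying the underlying view. As an illustration, the following sketch (the user name is a placeholder) lists the base objects a given user touched over the past day:

    SELECT query_id,
           user_name,
           obj.value:"objectName"::string AS object_name
    FROM snowflake.account_usage.access_history,
         LATERAL FLATTEN(input => base_objects_accessed) obj
    WHERE query_start_time > DATEADD('day', -1, CURRENT_TIMESTAMP())
      AND user_name = 'SOME_USER';  -- placeholder user name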

Snowflake.DataTransferHistory

schema: Snowflake.DataTransferHistory
description: Snowflake data transfer history log
fields:
    - name: ORGANIZATION_NAME
      required: true
      type: string
    - name: ACCOUNT_NAME
      required: true
      type: string
    - name: ACCOUNT_LOCATOR
      required: true
      type: string
    - name: REGION
      type: string
    - name: USAGE_DATE
      required: true
      type: timestamp
      timeFormats:
        - '%Y-%m-%d %H:%M:%S.%f %z'
      isEventTime: true
    - name: SOURCE_CLOUD
      type: string
    - name: SOURCE_REGION
      type: string
    - name: TARGET_CLOUD
      type: string
    - name: TARGET_REGION
      type: string
    - name: BYTES_TRANSFERRED
      required: true
      type: bigint
    - name: TRANSFER_TYPE
      type: string

Snowflake.LoginHistory

schema: Snowflake.LoginHistory
description: Snowflake login history log
fields:
    - name: CLIENT_IP
      description: IP address of the client initiating the login
      type: string
      indicators:
        - ip
    - name: EVENT_ID
      required: true
      description: Unique identifier for the event
      type: string
    - name: EVENT_TIMESTAMP
      required: true
      description: Timestamp of the event
      type: timestamp
      timeFormats:
        - '%Y-%m-%d %H:%M:%S.%f %z'
      isEventTime: true
    - name: EVENT_TYPE
      description: Type of the event (e.g., LOGIN, LOGOUT)
      type: string
    - name: FIRST_AUTHENTICATION_FACTOR
      description: The first authentication factor used
      type: string
    - name: IS_SUCCESS
      description: Indicates if the event was successful (YES/NO)
      type: string
    - name: RELATED_EVENT_ID
      description: Identifier for a related event, if any
      type: string
    - name: REPORTED_CLIENT_TYPE
      description: Type of the client reported
      type: string
    - name: REPORTED_CLIENT_VERSION
      description: Version of the client reported
      type: string
    - name: USER_NAME
      description: Name of the user involved in the event
      type: string
      indicators:
        - username
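
As an example of how these fields are commonly used in detections, the following illustrative query surfaces repeated failed logins over the past day (the threshold is an arbitrary example):

    SELECT user_name,
           client_ip,
           COUNT(*) AS failed_logins
    FROM snowflake.account_usage.login_history
    WHERE is_success = 'NO'
      AND event_timestamp > DATEADD('hour', -24, CURRENT_TIMESTAMP())
    GROUP BY user_name, client_ip
    HAVING COUNT(*) > 5  -- arbitrary example threshold
    ORDER BY failed_logins DESC;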

Snowflake.QueryHistory

schema: Snowflake.QueryHistory
description: Snowflake query history log
fields:
    - name: BYTES_DELETED
      description: Number of bytes deleted
      type: bigint
    - name: BYTES_READ_FROM_RESULT
      description: Number of bytes read from the result
      type: bigint
    - name: BYTES_SCANNED
      description: Number of bytes scanned
      type: bigint
    - name: BYTES_SENT_OVER_THE_NETWORK
      description: Number of bytes sent over the network
      type: bigint
    - name: BYTES_SPILLED_TO_LOCAL_STORAGE
      description: Number of bytes spilled to local storage
      type: bigint
    - name: BYTES_SPILLED_TO_REMOTE_STORAGE
      description: Number of bytes spilled to remote storage
      type: bigint
    - name: BYTES_WRITTEN
      description: Number of bytes written
      type: bigint
    - name: BYTES_WRITTEN_TO_RESULT
      description: Number of bytes written to the result
      type: bigint
    - name: CHILD_QUERIES_WAIT_TIME
      description: Wait time for child queries
      type: int
    - name: CLUSTER_NUMBER
      description: Number of the cluster
      type: int
    - name: COMPILATION_TIME
      description: Time taken for query compilation
      type: int
    - name: CREDITS_USED_CLOUD_SERVICES
      description: Credits used for cloud services
      type: float
    - name: DATABASE_ID
      description: Database identifier
      type: string
    - name: DATABASE_NAME
      description: Name of the database
      type: string
    - name: END_TIME
      description: The end time of the query
      type: timestamp
      timeFormats:
        - '%Y-%m-%d %H:%M:%S.%f %z'
    - name: EXECUTION_STATUS
      description: Status of query execution
      type: string
    - name: EXECUTION_TIME
      description: Time taken for query execution
      type: int
    - name: EXTERNAL_FUNCTION_TOTAL_INVOCATIONS
      description: Total invocations of external functions
      type: int
    - name: EXTERNAL_FUNCTION_TOTAL_RECEIVED_BYTES
      description: Total bytes received by external functions
      type: int
    - name: EXTERNAL_FUNCTION_TOTAL_RECEIVED_ROWS
      description: Total rows received by external functions
      type: int
    - name: EXTERNAL_FUNCTION_TOTAL_SENT_BYTES
      description: Total bytes sent by external functions
      type: int
    - name: EXTERNAL_FUNCTION_TOTAL_SENT_ROWS
      description: Total rows sent by external functions
      type: int
    - name: INBOUND_DATA_TRANSFER_BYTES
      description: Inbound data transfer in bytes
      type: int
    - name: IS_CLIENT_GENERATED_STATEMENT
      description: Whether the statement was generated by a client
      type: boolean
    - name: LIST_EXTERNAL_FILES_TIME
      description: Time taken to list external files
      type: int
    - name: OUTBOUND_DATA_TRANSFER_BYTES
      description: Outbound data transfer in bytes
      type: int
    - name: PARTITIONS_SCANNED
      description: Number of partitions scanned
      type: int
    - name: PARTITIONS_TOTAL
      description: Total number of partitions
      type: int
    - name: PERCENTAGE_SCANNED_FROM_CACHE
      description: Percentage of data scanned from cache
      type: float
    - name: QUERY_ACCELERATION_BYTES_SCANNED
      description: Bytes scanned for query acceleration
      type: int
    - name: QUERY_ACCELERATION_PARTITIONS_SCANNED
      description: Partitions scanned for query acceleration
      type: int
    - name: QUERY_ACCELERATION_UPPER_LIMIT_SCALE_FACTOR
      description: Upper limit scale factor for query acceleration
      type: int
    - name: QUERY_HASH
      description: Hash of the query string
      type: string
    - name: QUERY_HASH_VERSION
      description: Hash version
      type: string
    - name: QUERY_ID
      required: true
      description: Unique identifier for the query
      type: string
    - name: QUERY_LOAD_PERCENT
      description: Load percentage during the query
      type: float
    - name: QUERY_PARAMETERIZED_HASH
      description: Hash of the parameterized query
      type: string
    - name: QUERY_PARAMETERIZED_HASH_VERSION
      description: Hash version of the parameterized query
      type: string
    - name: QUERY_TAG
      description: Tag associated with the query
      type: string
    - name: QUERY_TEXT
      description: Text of the query
      type: string
    - name: QUERY_TYPE
      description: Type of the query
      type: string
    - name: QUEUED_OVERLOAD_TIME
      description: Time spent in queue due to overload
      type: int
    - name: QUEUED_PROVISIONING_TIME
      description: Time spent in queue for provisioning
      type: int
    - name: QUEUED_REPAIR_TIME
      description: Time spent in queue for repair
      type: int
    - name: RELEASE_VERSION
      description: Version of the release
      type: string
    - name: ROLE_NAME
      description: Name of the role
      type: string
    - name: ROLE_TYPE
      description: Type of the role
      type: string
    - name: ROWS_DELETED
      description: Number of rows deleted
      type: int
    - name: ROWS_INSERTED
      description: Number of rows inserted
      type: int
    - name: ROWS_UNLOADED
      description: Number of rows unloaded
      type: int
    - name: ROWS_UPDATED
      description: Number of rows updated
      type: int
    - name: ROWS_WRITTEN_TO_RESULT
      description: Number of rows written to the result
      type: int
    - name: SCHEMA_ID
      description: Identifier for the schema
      type: string
    - name: SCHEMA_NAME
      description: Name of the schema
      type: string
    - name: SECONDARY_ROLE_STATS
      description: Secondary role stats
      type: string
    - name: SESSION_ID
      description: Identifier for the session
      type: string
    - name: START_TIME
      required: true
      description: The start time of the query
      type: timestamp
      timeFormats:
        - '%Y-%m-%d %H:%M:%S.%f %z'
      isEventTime: true
    - name: TOTAL_ELAPSED_TIME
      description: Total elapsed time for the query
      type: int
    - name: TRANSACTION_BLOCKED_TIME
      description: Time the transaction was blocked
      type: int
    - name: TRANSACTION_ID
      description: Identifier for the transaction
      type: string
    - name: USER_NAME
      description: Name of the user
      type: string
      indicators:
        - username
    - name: WAREHOUSE_ID
      description: Identifier for the warehouse
      type: string
    - name: WAREHOUSE_NAME
      description: Name of the warehouse
      type: string
    - name: WAREHOUSE_SIZE
      description: Size of the warehouse
      type: string
    - name: WAREHOUSE_TYPE
      description: Type of the warehouse
      type: string
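
The byte-count fields above lend themselves to exfiltration-style detections. As an illustration, a query along these lines flags queries that sent unusually large volumes over the network in the past day (the threshold is an arbitrary example):

    SELECT query_id,
           user_name,
           warehouse_name,
           bytes_sent_over_the_network
    FROM snowflake.account_usage.query_history
    WHERE start_time > DATEADD('day', -1, CURRENT_TIMESTAMP())
      AND bytes_sent_over_the_network > 1000000000  -- ~1 GB, arbitrary example threshold
    ORDER BY bytes_sent_over_the_network DESC;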

Snowflake.Sessions

schema: Snowflake.Sessions
description: Snowflake session history log
fields:
    - name: AUTHENTICATION_METHOD
      description: Method used for authentication
      type: string
    - name: CLIENT_APPLICATION_ID
      description: ID of the client application
      type: string
    - name: CLIENT_APPLICATION_VERSION
      description: Version of the client application
      type: string
    - name: CLIENT_BUILD_ID
      description: Build ID of the client application
      type: string
    - name: CLIENT_ENVIRONMENT
      description: Environment information of the client application (e.g., OS, version)
      type: json
      isEmbeddedJSON: true
    - name: CLIENT_VERSION
      description: Version of the client
      type: string
    - name: CLOSED_REASON
      description: Reason why the session was closed
      type: string
    - name: CREATED_ON
      required: true
      description: Timestamp when the session was created
      type: timestamp
      timeFormats:
        - '%Y-%m-%d %H:%M:%S.%f %z'
      isEventTime: true
    - name: LOGIN_EVENT_ID
      description: Unique identifier for the login event
      type: string
    - name: SESSION_ID
      required: true
      description: Unique identifier for the session
      type: string
    - name: USER_NAME
      description: Name of the user
      type: string
