# Panther System Architecture

## Overview of the Panther system

<figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-a3147dcf3d58010e23a536ff026756cd265ec107%2FPanther%20Functional%20Architecture%20Diagram.png?alt=media" alt="A diagram with various box elements and arrows between boxes is shown. Boxes have names like, &#x22;Log Processing,&#x22; Cloud Security Scanning,&#x22; and &#x22;Alerting.&#x22;"><figcaption></figcaption></figure>

The diagram above flows roughly from left to right, and can be read in the following steps:

1. Raw log data flows into Panther from various log sources, including SaaS pullers (e.g., [Okta](https://docs.panther.com/data-onboarding/supported-logs/okta)) and Data Transport sources (e.g., [AWS S3](https://docs.panther.com/data-onboarding/data-transports/aws/s3)). These raw logs are parsed, filtered and normalized in the [Log Processing](#log-processing-subsystem) subsystem.
   * The output of [Log Processing](#log-processing-subsystem) flows into two subsystems: [Data Lake](#data-lake-subsystem) and [Detection](#detection-subsystem).
2. If enabled, [Cloud Security Scanning](https://docs.panther.com/cloud-scanning) will scan onboarded cloud infrastructure, then pass the resources it finds into the [Detection](#detection-subsystem) subsystem.
3. The [Enrichment](#enrichment-subsystem) subsystem optionally adds additional context to the data flowing into the [Detection](#detection-subsystem) subsystem, which can be used to enhance detection efficacy (e.g., [IPinfo](https://docs.panther.com/enrichment/ipinfo), [Okta Profiles](https://docs.panther.com/enrichment/okta)).
4. The [Detection](#detection-subsystem) subsystem applies detections to the following inputs:
   * From [Log Processing](#log-processing-subsystem): Log events
   * From [Scheduled Searches](https://docs.panther.com/search/scheduled-searches): Log events
   * From [Cloud Security Scanning](https://docs.panther.com/cloud-scanning): Infrastructure resources
5. The [AI](#ai-subsystem) subsystem is used for alert triage, and can be configured to trigger automatically when alerts are created. The AI service is available via API and various [Console UI entry points](https://docs.panther.com/ai#where-to-use-panther-ai).
6. If a detection generates an alert, it is sent to the [Alerting](#alerting-subsystem) subsystem for dispatch to its appropriate alert [destinations](https://docs.panther.com/alerts/destinations) (e.g., Slack, Jira, a webhook, etc.). A single alert can be routed to more than one destination.

At the bottom of the diagram, the Control Plane represents the cross-cutting infrastructure responsible for configuring and controlling the subsystems above (the data plane). This is expanded on in the descriptions of each subsystem, below. The [API Server](#api-subsystem) referenced in the upper right corner is the external entry point into the Control Plane.

### General considerations

#### AWS

* Each Panther customer has a Panther instance deployed into a dedicated AWS account.
  * A customer can choose to own the AWS account or have Panther manage the account.
  * No data is shared or accessible between customers.
  * The AWS account forms the permission boundary for the application.
  * There is a single VPC used for services requiring networking.
* Processing is done via AWS Lambda and Fargate instances.
  * A proprietary control plane dynamically picks the best compute to minimize cost (see below).
  * Compute resources do not communicate with one another directly; rather, they communicate via AWS services. In other words, there is no "east/west" network traffic; there is only "north/south" network traffic.
* The [Principle of least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege) is followed by using minimally scoped IAM roles for each infrastructure component.

#### Data lake

Each Panther customer has either a [Snowflake](https://docs.panther.com/search/backend/snowflake) or [Databricks](https://docs.panther.com/search/backend/databricks) instance.

**Snowflake**

* A Panther Snowflake instance is deployed into a dedicated Snowflake account.
  * A customer can choose to own the Snowflake account or have Panther manage the account.
  * No data is shared or accessible between customers.
* Snowflake secrets are managed in AWS Secrets Manager using RSA keys, and rotated daily.

**Databricks**

* The customer must have their own Databricks instance.
  * No data is shared or accessible between customers.

#### Other

* All data is encrypted in transit and at rest.
* All external interactions are conducted using the [API](#api-subsystem):
  * The Panther Console is a React application interfacing with the API server.
  * The public API exposes [GraphQL](https://docs.panther.com/panther-developer-workflows/api/graphql) and [REST](https://docs.panther.com/panther-developer-workflows/api/rest) endpoints.
  * All API actions are logged as [Panther Audit Logs](https://docs.panther.com/data-onboarding/supported-logs/panther-audit-logs), which can then be ingested as a log source in Panther.
* Secrets related to external integrations are managed in [Amazon DynamoDB](https://aws.amazon.com/dynamodb/) using KMS encrypted fields.
* The system scales up and down according to load.
* Panther infrastructure is managed by [Pulumi](https://www.pulumi.com/).
  * All infrastructure is tagged (e.g., resource name, subsystem), enabling effective billing analysis.
  * Customers owning their AWS account [can add their own tags](https://docs.panther.com/system-configuration/panther-deployment-types/cloud-connected#custom-tags-on-aws-resources) to integrate into their larger organization's billing reporting.
* Monitoring is performed using a combination of [CloudWatch](https://aws.amazon.com/cloudwatch/), [Sentry](https://sentry.io/), and [Datadog](https://www.datadoghq.com/).

## API subsystem

<figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-4ad972479c2a76faa52718123345f060c48b1d94%2FPanther%20API%20Data%20Flows.png?alt=media" alt="A flow diagram has various components: API Client, API Services, Log Processing, Detection, and more. There are icons associated to each component, and arrows drawn between components. At the bottom, a Control Plane rectangle runs along the entire width."><figcaption></figcaption></figure>

The [Panther API](https://docs.panther.com/panther-developer-workflows/api) is the entry point for all external interactions with Panther. The Console, [GraphQL](https://docs.panther.com/panther-developer-workflows/api/graphql), and [REST](https://docs.panther.com/panther-developer-workflows/api/rest) clients connect via an AWS ALB. Customers can optionally configure an allowlist for ALB access using IP CIDRs.

API authentication is performed using AWS Cognito. GraphQL and REST clients use [tokens](https://docs.panther.com/panther-developer-workflows/api#how-to-create-a-panther-api-token), while the Panther Console uses [JWTs](https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-using-tokens-with-identity-providers.html) managed by AWS Cognito. The Console supports [Single Sign-On (SSO)](https://docs.panther.com/system-configuration/saml) via AWS Cognito.

There is an internal API server that resolves the requests. Some requests are processed entirely within the API server, while others require one or more calls to other internal services implemented via AWS Lambda functions.
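
As a sketch, a GraphQL client authenticates by sending its token in a request header. The endpoint URL below is a placeholder, and the `X-API-Key` header name and query shape are assumptions to verify against the API token documentation linked above:

```python
import json
import urllib.request

# Placeholder endpoint; substitute your instance's GraphQL URL.
API_URL = "https://api.your-panther-instance.example.com/public/graphql"

def build_graphql_request(token, query, variables=None):
    """Build (but do not send) an authenticated GraphQL request.

    Assumes the API token is passed in an X-API-Key header; confirm the
    header name against the Panther API documentation.
    """
    payload = json.dumps({"query": query, "variables": variables or {}}).encode()
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={"X-API-Key": token, "Content-Type": "application/json"},
        method="POST",
    )

# Illustrative query shape; urllib.request.urlopen(req) would send it.
req = build_graphql_request("YOUR_API_TOKEN", "{ alerts { edges { node { id } } } }")
```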

## Log Processing subsystem

<figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-eec473eff1f19920204cdaac281cab07f1471501%2FPanther%20Log%20Processing%20Diagram.png?alt=media" alt="A flow diagram has various components: Log Processing, Log Events, Event Sampling, and more. There are icons associated to each component, and arrows drawn between components. At the bottom, a Control Plane rectangle runs along the entire width."><figcaption></figcaption></figure>

All data entering this subsystem is delivered via AWS S3 and S3 notifications. Upstream sources that are not S3-based (e.g., SaaS pullers, [HTTP Source](https://docs.panther.com/data-onboarding/data-transports/http), [Google Cloud Storage Source](https://docs.panther.com/data-onboarding/data-transports/google/cloud-storage)) use [Amazon Data Firehose](https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html) to aggregate events into S3 objects. These notifications are routed through a master [Amazon SNS](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) topic. The Log Processing and Event Sampling workflows each subscribe to this SNS topic.

Log Processing computation is implemented with AWS Lambda and Fargate.

{% hint style="info" %}
Dynamic compute cost optimization

Panther uses an efficient, proprietary control plane that orchestrates compute selection, aggregation and scaling.

As traffic increases, additional compute is required. Panther's control plane scales to match traffic, meaning it minimizes the number of compute instances used to maximize aggregation of data and minimize cost.

Lambda is used as Panther's core compute because its low latency allows us to quickly follow traffic variations, which is cost effective for bursty and light traffic loads. However, Lambda's cost per unit time is higher than other compute options. In the case of sustained and predictable traffic, Lambda is not as cost effective as other compute options. This is why, if the control plane detects a high volume of stable traffic, Fargate (Fargate Spot, if available) is used instead of Lambda to minimize costs.
{% endhint %}
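
The trade-off can be illustrated with a toy decision rule. The thresholds and inputs below are invented for explanation only; Panther's actual control plane is proprietary:

```python
# Illustrative only: a toy decision for the Lambda-vs-Fargate trade-off.
def pick_compute(events_per_sec: float, stable_minutes: int) -> str:
    """Prefer Lambda for bursty/light traffic; prefer Fargate (Spot) for
    sustained high volume, where Lambda's per-unit-time cost dominates."""
    SUSTAINED_THRESHOLD = 5_000   # hypothetical events/sec
    STABILITY_WINDOW = 30         # hypothetical minutes of stable traffic
    if events_per_sec >= SUSTAINED_THRESHOLD and stable_minutes >= STABILITY_WINDOW:
        return "fargate-spot"
    return "lambda"

choice_bursty = pick_compute(events_per_sec=100, stable_minutes=5)        # -> "lambda"
choice_sustained = pick_compute(events_per_sec=20_000, stable_minutes=60) # -> "fargate-spot"
```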

For each notification received, the following steps are taken:

1. The integration source associated with the S3 object is looked up in DynamoDB and the associated role is assumed for reading.
2. The data is read from S3.
3. Each event is parsed according to the associated [schema](https://docs.panther.com/data-onboarding/custom-log-types/reference) for that data type.
   * If classification or parsing errors arise, [System Errors](https://docs.panther.com/system-configuration/notifications/system-errors) are generated and the associated "bad" data is stored in the Data Lake within the `classification_failures` table.
4. [Ingestion filters](https://docs.panther.com/data-onboarding/ingestion-filters) and [transformations](https://docs.panther.com/data-onboarding/custom-log-types/transformations) are applied.
5. [Indicator fields](https://docs.panther.com/search/panther-fields#indicator-fields) (`p_any` fields) are extracted, and [standard fields](https://docs.panther.com/search/panther-fields) are inserted.
6. Processed events are written as S3 objects and notifications are sent to an internal SNS topic, which the [Data Lake](#data-lake-subsystem) and [Detection](#detection-subsystem) subsystems are subscribed to.
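
Steps 3 and 5 above can be sketched in miniature. The field names (`p_log_type`, `p_event_time`, `p_any_ip_addresses`) follow the standard- and indicator-field documentation linked above; the parsing logic itself is a toy stand-in for Panther's schema-driven parser:

```python
import re
from datetime import datetime, timezone

IP_RE = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")

def normalize(raw: dict, log_type: str) -> dict:
    """Toy normalization: attach a few standard fields and extract IP indicators."""
    event = dict(raw)
    # Standard fields (a small subset; see the standard-fields docs).
    event["p_log_type"] = log_type
    event["p_event_time"] = raw.get("time") or datetime.now(timezone.utc).isoformat()
    # Indicator-field extraction: collect every IP seen anywhere in the event.
    ips = sorted({m for v in raw.values() if isinstance(v, str) for m in IP_RE.findall(v)})
    if ips:
        event["p_any_ip_addresses"] = ips
    return event

out = normalize(
    {"time": "2024-01-01T00:00:00Z", "src": "10.0.0.1", "msg": "login from 203.0.113.7"},
    "Custom.Example",
)
```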

You can optionally configure an [event threshold alarm](https://docs.panther.com/data-onboarding#configuring-event-threshold-alarms) for each onboarded log source to alert if traffic stops unexpectedly.

{% hint style="info" %}
The S3 notifications also route to the Event Sampling subsystem, which is used for [log schema field discovery](https://docs.panther.com/data-onboarding/custom-log-types#enabling-field-discovery). As new attributes are found in the data, they are analyzed and added automatically to the schema (and associated Data Lake tables).
{% endhint %}

## Enrichment subsystem

<figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-59cd53f5944ca679ced6bdd22f1f4b8bf9fd339f%2FPanther%20Enrichment%20Diagram%20(1).png?alt=media" alt="A flow diagram has various components: Customer Data Provider, Lookup Table Processor, Detections, and more. There are icons associated to each component, and arrows drawn between components. At the bottom, a Control Plane rectangle runs along the entire width."><figcaption></figcaption></figure>

[Enrichment](https://docs.panther.com/enrichment) in Panther is implemented via Lookup Tables (LUTs). A LUT is a table of data associated with a unique primary key. A LUT also has a mapping from schemas to primary keys, which allows for automatic enrichment in the [Detection](#detection-subsystem) subsystem. Detections may also use a function call interface to look up data.

[IPinfo](https://docs.panther.com/enrichment/ipinfo), for example, is a Panther-managed enrichment provider containing geolocation data. IP addresses in a log event will automatically be enriched with location, ASN, and privacy information. Customers can also create their own [custom LUTs](https://docs.panther.com/enrichment/custom) to bring context relevant to their business and security concerns.

LUTs are created either via the Panther Console or in the CLI workflow (using a YAML specification file). Data for the LUT can be made accessible to Panther in a few ways: uploaded in the Console, included as a file in the CLI configuration, or stored as an S3 object. In general, the most useful way to manage LUT data is as an S3 object reference—you can create S3 objects in your own account, and Panther will poll for changes.

The metadata associated with a LUT is stored in DynamoDB. When there is new data, the Lookup Table Processor assumes the specified role from the metadata and processes the S3 data. This creates two outputs: a real-time database in EFS used by the [Detection](#detection-subsystem) subsystem, and tables in the [Data Lake](#data-lake-subsystem). The tables in the Data Lake can be used by [Scheduled Searches](https://docs.panther.com/search/scheduled-searches) to enrich events using joins.
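
In a detection, enrichment matches typically surface under the `p_enrichment` field, keyed by lookup-table name and then by the event field that matched. A sketch, where the LUT name (`ipinfo_location`) and field names are illustrative:

```python
def rule(event: dict) -> bool:
    """Alert when the source address resolves to a watched country."""
    country = (
        event.get("p_enrichment", {})
        .get("ipinfo_location", {})   # LUT name (illustrative)
        .get("srcAddr", {})           # the event field that matched the LUT key
        .get("country", "")
    )
    return country == "KP"

enriched_event = {
    "srcAddr": "203.0.113.7",
    "p_enrichment": {"ipinfo_location": {"srcAddr": {"country": "KP"}}},
}
```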

## Detection subsystem

<figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-cf3e4894220299bdc329893a76dc127fffd9f722%2FPanther%20Detection%20Diagram.png?alt=media" alt="A flow diagram has various components: Log Processing, Cloud Security Scanning, Streaming, Scheduled, and more. There are icons associated to each component, and arrows drawn between components. At the bottom, a Control Plane rectangle runs along the entire width."><figcaption></figcaption></figure>

The streaming detection processor allows Python-based detections to run on log events from [Log Processing](#log-processing-subsystem) and [Scheduled Searches](https://docs.panther.com/search/scheduled-searches), as well as resources from [Cloud Security Scanning](https://docs.panther.com/cloud-scanning). The streaming detection processor runs as an AWS Lambda function (or Fargate instance) optimized for high-speed execution of Python. (The processor is not simply a Python Lambda, although it was in an earlier iteration of Panther's infrastructure; years of experience showed that a naive Python Lambda implementation is neither efficient nor cost effective.)

The streaming detection processor evaluates the following types of detections:

* [Streaming detections](https://docs.panther.com/detections/rules) (rules): Targeted at one or more log schemas (also called `LogTypes`)
* [Scheduled detections](https://docs.panther.com/detections/rules) (scheduled rules): Targeted at the output of one or more [Scheduled Searches](https://docs.panther.com/search/scheduled-searches)
* [Policy detections](https://docs.panther.com/detections/policies): Targeted at resources
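
A minimal streaming detection shows the shape of a Python rule: `rule()` decides whether an event matches, and optional functions such as `title()` and `dedup()` customize the resulting alert. The log fields below are illustrative (an AWS console-login style event), not a specific Panther schema:

```python
def rule(event) -> bool:
    # Match failed console logins.
    return (
        event.get("eventName") == "ConsoleLogin"
        and event.get("responseElements", {}).get("ConsoleLogin") == "Failure"
    )

def title(event) -> str:
    # Human-readable alert title.
    arn = event.get("userIdentity", {}).get("arn", "unknown")
    return f"Failed console login for {arn}"

def dedup(event) -> str:
    # Alerts sharing this string within the deduplication window are grouped.
    return event.get("sourceIPAddress", "unknown")
```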

Processing data from these sources follows these steps:

1. For every active Lookup Table, any matches are applied to the `p_enrichment` field so that the information is available for detections.
2. All detections associated with the given `LogType`, cloud resource, or Scheduled Search are found.
3. Each detection's `rule()` function is run on the event/resource. If it returns `True`, then the other optional functions are run, and an alert is sent to the [Alerting](#alerting-subsystem) subsystem. For rules and scheduled rules, alerts are only sent for the first detection within the detection's [deduplication window](https://docs.panther.com/detections/rules#deduplication-of-alerts).
4. Events associated with the detection are written to an S3 object and an S3 notification is sent to an internal SNS topic.
   * The [Data Lake](#data-lake-subsystem) subsystem subscribes to the SNS topic for data ingestion into the rule matches and [signals](https://docs.panther.com/detections/signals) tables.

When a [Scheduled Search](https://docs.panther.com/search/scheduled-searches) is finished executing, the streaming detection processor Lambda is invoked with a reference to the results of the query. The results are read, and each event is processed according to the steps above.

[Data Replay](https://docs.panther.com/detections/testing/data-replay) allows for testing of detections on historical data. This is implemented via a "mirror" set of infrastructure that is independent of the live infrastructure.

## Data Lake subsystem

<figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-caba14181f1ca7be1c2fd463af31416f16bb8618%2FPanther%20Data%20Lake.png?alt=media" alt="A flow diagram has various components: Database API, Query Execution History, AWS Secrets Manager, Security Data Lake, and more. There are icons associated to each component, and arrows drawn between components. At the bottom, a Control Plane rectangle runs along the entire width."><figcaption></figcaption></figure>

Panther customers can use either [Snowflake](https://docs.panther.com/search/backend/snowflake) or [Databricks](https://docs.panther.com/search/backend/databricks) as their security data lake.

#### Snowflake

Panther uses the [Snowflake Snowpipe service](https://docs.snowflake.com/en/user-guide/data-load-snowpipe-intro) to ingest data into Snowflake. This service uses AWS IAM permissions and is therefore not dependent on Snowflake users configured for queries and management. Onboarding a new data source in Panther triggers the creation of associated tables and Snowpipe infrastructure using the Admin database API Lambda. This Lambda has an associated user with `read/write` permissions to Panther databases and schemas. Note that there is no direct outside connection to invoke this Lambda; rather, it is driven by the internal Control Plane.

Tables use `p_event_time` as a cluster key.

Snowflake secrets are stored in [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html). [RSA key pairs are used and rotated daily](https://docs.snowflake.com/en/user-guide/key-pair-auth).

#### Databricks

Panther has a loader process that listens on an internal SNS topic for S3 notifications for files to load. The loader process manages all tables and will bulk load S3 files using `COPY INTO` commands.

Tables use [liquid clustering](https://docs.databricks.com/aws/en/delta/clustering) on the `p_event_time` column.

[Databricks OAuth](https://docs.databricks.com/aws/en/dev-tools/auth/oauth-u2m#automatic-authorization-with-unified-authentication) credentials are stored in [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) in the customer's Databricks AWS account.

#### Queries

Queries are run using the `read only` database API Lambda. This Lambda has an associated user with `read only` permissions.

Queries are asynchronous. When an API request is made to run a query, the associated SQL is executed in the data lake and a `queryId` is returned. Subsequent API calls use the `queryId` to check the status of the query and read its results. Query execution status is tracked in DynamoDB.
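
The asynchronous pattern can be sketched with stub backend calls. `start_query`, `get_status`, and `get_results` below are hypothetical stand-ins for the real API operations; only the submit-then-poll shape is the point:

```python
import time
import uuid

_QUERIES: dict = {}  # stands in for the DynamoDB status table

def start_query(sql: str) -> str:
    """Stub: submit SQL, get back a queryId immediately."""
    query_id = str(uuid.uuid4())
    _QUERIES[query_id] = {"sql": sql, "polls": 0, "results": [{"n": 1}]}
    return query_id

def get_status(query_id: str) -> str:
    """Stub: report 'running' once, then 'succeeded'."""
    q = _QUERIES[query_id]
    q["polls"] += 1
    return "succeeded" if q["polls"] >= 2 else "running"

def get_results(query_id: str) -> list:
    return _QUERIES[query_id]["results"]

def run_query(sql: str, poll_interval: float = 0.0) -> list:
    """Submit, poll until the query succeeds, then fetch results."""
    query_id = start_query(sql)
    while get_status(query_id) != "succeeded":
        time.sleep(poll_interval)
    return get_results(query_id)

rows = run_query("SELECT 1")
```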

Query results are stored in EFS for 30 days (this retention period is configurable). Customers can use [Search History](https://docs.panther.com/search/search-history) in Panther to view results of past searches.

[Scheduled Searches](https://docs.panther.com/search/scheduled-searches) used by [Detection](#detection-subsystem) are run via an AWS Step Function. Upon query execution completion, the streaming detection processor is invoked with a reference to the query results for further processing.

When [RBAC per logtype](https://docs.panther.com/system-configuration/rbac#rbac-per-log-type-1) (currently only supported by Snowflake) is enabled, there is a unique, managed read-only user per role.

## Alerting subsystem

<figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-c684a2a522c8377715f73b22f2f57008c581616f%2FPanther%20Alerting%20Diagram.png?alt=media" alt="A flow diagram has various components: Detection, Database, Alert Dispatch, Alert &#x22;Storm&#x22; Limiter, and more. There are icons associated to each component, and arrows drawn between components. At the bottom, a Control Plane rectangle runs along the entire width."><figcaption></figcaption></figure>

The [Detection](#detection-subsystem) subsystem inserts alerts into a DynamoDB table, and the alert dispatch Lambda consumes that table's stream. This Lambda uses the [configured integrations](https://docs.panther.com/alerts/destinations) to send alerts to destinations.

To display alerts in the Panther Console, core alert data is retrieved from DynamoDB, while the alert's associated events are retrieved from the Data Lake.

The [alert limiter functionality](https://docs.panther.com/alerts#limiting-alerts) is intended to prevent "alert storms" (typically caused by misconfigured detections) from overloading your destinations. If more than 1,000 alerts are generated in one hour by the same detection, further alerts are suppressed (this limit is configurable). The detection continues to run and store events in the Data Lake (so there is no data loss), but no alerts are created. A [System Error](https://docs.panther.com/system-configuration/notifications/system-errors) is generated to notify the customer, who can manually remove the alert suppression in the Console (perhaps after some detection tuning).
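
A sliding-window limiter of this kind can be sketched as follows. The class, window, and threshold are illustrative, not Panther's implementation:

```python
from collections import defaultdict, deque

class AlertLimiter:
    """Toy per-detection alert limiter over a sliding one-hour window."""

    def __init__(self, limit: int = 1000, window_secs: int = 3600):
        self.limit = limit
        self.window = window_secs
        self.times = defaultdict(deque)  # detection_id -> recent alert timestamps

    def allow(self, detection_id: str, now: float) -> bool:
        q = self.times[detection_id]
        while q and now - q[0] >= self.window:
            q.popleft()                  # drop alerts that fell out of the window
        if len(q) >= self.limit:
            return False                 # suppress: events still stored, no alert
        q.append(now)
        return True

limiter = AlertLimiter(limit=3, window_secs=3600)
decisions = [limiter.allow("rule-1", t) for t in (0, 1, 2, 3)]  # 4th is suppressed
```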

There are special authenticated endpoints for Jira and Slack to "call back" to Panther in order to sync alert state (e.g., to update the status of an alert to `Resolved`).

## AI subsystem

<figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-76c30ece52caa66e4a59f4c317c9eee75c668fb9%2Fimage.png?alt=media" alt="An icon that looks like half a brain and half of a computer network is surrounded by &#x22;AI&#x22; and &#x22;AWS Bedrock.&#x22;"><figcaption></figcaption></figure>

The [Panther AI](https://docs.panther.com/ai) subsystem is not architecturally complex. It consists of:

* Server: Receives requests and orchestrates response and persistence
  * Dynamically generates system prompts depending on the entry point and context
  * Enforces quotas and permissions
  * Interfaces with [Amazon Bedrock](https://aws.amazon.com/bedrock/) APIs
  * Orchestrates [tool](https://docs.panther.com/ai#tools) use (this consists of calling internal Panther APIs)
  * Manages persisting responses in [Amazon DynamoDB](https://aws.amazon.com/dynamodb/)
* [Amazon Bedrock](https://aws.amazon.com/bedrock/): The server communicates with Bedrock for all AI inferences
* [Amazon DynamoDB](https://aws.amazon.com/dynamodb/): Used for persistence of AI responses
  * Responses are deleted after 30 days unless explicitly [saved](https://docs.panther.com/ai/managing-ai-response-history#saving-an-ai-response)

### Panther AI workflows

[Panther AI](https://docs.panther.com/ai) exposes many of the services available to human Panther users as [tools](https://docs.panther.com/ai#tools). Since Panther AI will use tools as needed (i.e., given the context and the task), workflows are often variable. Below, a typical [alert triage](https://docs.panther.com/alerts#panther-ai-alert-triage) workflow is illustrated.

<figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-bd2d0e11948f95c686c33781ab48b19113b2f809%2Fimage.png?alt=media" alt="A diagram with many boxes and arrows is shown. A blue box near the center of the diagram reads, &#x22;AI Processes with specialized alert triage prompts &#x26; tools.&#x22;"><figcaption></figcaption></figure>

### FAQs: Panther AI architecture and security

<details>

<summary>Does Panther use customer data to train AI?</summary>

No, Panther does not use customer data to train AI. Panther AI only performs AI inference using [Amazon Bedrock foundation models](https://docs.aws.amazon.com/bedrock/latest/userguide/foundation-models-reference.html).

</details>

<details>

<summary>Which foundation models does Panther AI use?</summary>

Panther AI uses [Anthropic Claude](https://www.anthropic.com/claude) models.

</details>

<details>

<summary>How are AI responses stored and protected?</summary>

AI responses are stored in [Amazon DynamoDB](https://aws.amazon.com/dynamodb/), in the Panther customer account. Responses are deleted after 30 days unless explicitly [saved](https://docs.panther.com/ai/managing-ai-response-history#saving-an-ai-response).

All AI inference runs under the identity and permissions of the invoking user. This means Panther AI cannot access data, modify resources, or perform actions that the user themselves could not perform in the Console or via API.

Panther AI enforces [log type access restrictions](https://docs.panther.com/system-configuration/rbac#how-to-restrict-log-types-for-a-certain-role), if set for the current user's role. This means:

* Responses for alerts have access restricted based on the user's permissions for alerts.
* Responses for Search results have access restricted based on the user's permissions for the data lake.

The default visibility of AI conversations depends on the entry point — conversations from the Panther AI page default to private (when privacy controls are enabled), while alert triage and other entry points default to shared. Users can toggle any conversation between shared and private at any time. Other users with the **View AI Private Responses** permission can still view private conversations.

</details>

<details>

<summary>Does Panther AI have any guardrails?</summary>

Panther AI has the following cost safety control quotas (which may be changed, if requested):

* Inferences per hour (the default is 100)
* Data lake queries executed per hour (the default is 100)

Additionally, [Cloud Connected](https://docs.panther.com/system-configuration/panther-deployment-types/cloud-connected) customers can implement [Amazon Bedrock Guard Rails](https://aws.amazon.com/bedrock/guardrails/).

Panther also offers [CloudTrail detections](https://github.com/panther-labs/panther-analysis/blob/develop/rules/aws_cloudtrail_rules/aws_bedrock_guardrail_update_delete.py) to monitor Amazon Bedrock.

</details>

<details>

<summary>Does Panther have an audit log of all AI inferences and tool calls?</summary>

Yes, each inference and associated tool call is logged in the [Panther Audit logs](https://docs.panther.com/data-onboarding/supported-logs/panther-audit-logs), which are available in the data lake. Only metadata is logged—no sensitive data.

</details>

<details>

<summary>Can Panther AI be used via API call?</summary>

Yes, [Cloud Connected](https://docs.panther.com/system-configuration/panther-deployment-types/cloud-connected) customers and [SaaS](https://docs.panther.com/system-configuration/panther-deployment-types/saas) customers with "pass-through billing" can use AI API operations. When invoked via API, Panther AI runs with the permissions of the API token used to authenticate the request — [tool](https://docs.panther.com/ai#tools) use, data lake queries, and all other operations respect the token's associated permissions.

</details>

<details>

<summary>Is any AI content present in errors generated by Panther, or in operational logs (e.g., in CloudWatch or Datadog)?</summary>

No, AI content is not logged in Panther errors or operational logs—only metadata is logged.

</details>
