# Panther AI

## Overview <a href="#overview" id="overview"></a>

Panther AI includes a set of generative AI features designed to accelerate your detection and response workflows. It operates with the persona of a security engineer and has access to many of the same [tools](#tools) available to human users of Panther.

Panther AI can quickly assess data, such as alerts and logs, to rapidly deliver insights. You can run predefined workflows or ask your own questions to Panther AI—it will leverage its available tools (such as [querying the data lake](#data-search-and-analysis)) to answer them, generally much faster than a human analyst could.

<figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Flx3hEPreONH4ciOfk4yh%2FScreenshot%202026-01-20%20at%205.25.06%E2%80%AFPM.png?alt=media&#x26;token=581d414f-f6bf-463a-94e3-71f203efa1b0" alt="On the right side is a slide-out panel titled &#x22;ALB Web Scanning Analysis.&#x22; Below, there are various sections, like Summary, Key Findings, and Security Implications."><figcaption></figcaption></figure>

{% hint style="info" %}
[Watch video demos of various Panther AI workflows here](https://docs.panther.com/ai/examples).
{% endhint %}

{% hint style="warning" %}
**Panther AI runs with your permissions.** Every AI operation — including tool invocations, data lake queries, alert access, and detection modifications — executes with the permissions of the Panther user who initiated it. If your role cannot access certain log types, neither can AI. If your role cannot modify detections, AI cannot create or edit them on your behalf. This applies to Console interactions, API calls (where AI runs with the API token's permissions), and [scheduled prompts](https://docs.panther.com/ai/scheduled-ai-prompts) (which always run as the user who created them).
{% endhint %}

Panther AI uses [Claude AI models by Anthropic](https://www.anthropic.com/claude) through [Amazon Bedrock](https://aws.amazon.com/bedrock/). Panther AI does not use your data for AI training—[learn more about data security below](#ai-permissions-and-scope).

When using Panther AI, you may want to view previous responses or rename, pin, save, or delete certain interactions. Learn how to perform these actions in [Managing Panther AI Response History](https://docs.panther.com/ai/managing-ai-response-history).

{% hint style="info" %}
Use of Panther AI features is subject to the [AI disclaimer found on the Legal page](https://docs.panther.com/resources/help/legal#ai-disclaimer).
{% endhint %}

## Getting started

1. **Enable Panther AI** — An admin enables AI in **Settings** > **Panther AI** > **Configuration** and toggles **Enable Panther AI** to `ON`.
2. **Grant permissions** — Assign the **Run Panther AI** and **Read Settings & SAML Preferences** permissions to the appropriate [roles](https://docs.panther.com/system-configuration/rbac#update-a-roles-permissions). The default **Admin** role receives **Run Panther AI** automatically.
3. **Try alert triage** — Navigate to any alert and click **View Panther AI Triage** to see Panther AI analyze the alert and its associated data.
4. **Explore from there** — Ask follow-up questions in the prompt bar, try [search summarization](https://docs.panther.com/search/search-tool#panther-ai-search-results-summary), or create a [scheduled prompt](https://docs.panther.com/ai/scheduled-ai-prompts).

## Using Panther AI agents in the Console

Find Panther AI in the Panther Console in the following locations:

* Panther AI
  * In the left-hand navigation bar, click **Panther AI**. Ask Panther AI anything—because no context-dependent data (like the alerts or log events present at other entry points) is being analyzed, this is a good place to ask general security questions. Here, you'll see [suggested and favorite prompts](#suggested-and-favorite-prompts).

    <figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2FkYYXEoZ0DrhYGSZ2Lznd%2Fimage.png?alt=media&#x26;token=f9f3fa0c-c6b8-4c51-bec1-aeba48c3d396" alt="" width="563"><figcaption></figcaption></figure>
  * [Scheduled AI prompts](https://docs.panther.com/ai/scheduled-ai-prompts): Create prompts that run automatically to triage alerts, analyze security posture, or summarize alert queues.
* Panther AI alert triage
  * [AI alert triage](https://docs.panther.com/alerts#panther-ai-alert-triage): Gather information about and analyze an alert. You can [run AI alert triage on demand](https://docs.panther.com/alerts#run-ai-alert-triage-on-demand), or enable [auto-run](https://docs.panther.com/alerts#auto-run-ai-alert-triage), which runs AI alert triage on new alerts automatically.
  * [Alert list AI triage](https://docs.panther.com/alerts#panther-ai-summary-of-alerts-list): Triage multiple alerts in your alerts list at once.
  * When you triage one or more alerts, you will see a [Risk Classification score](https://docs.panther.com/ai/risk-scoring-and-classification-framework).
* Panther AI search
  * Use AI to [generate PantherFlow queries with natural language](https://docs.panther.com/search/search-tool#ai-powered-pantherflow-query-generation).
  * [Search results AI summary](https://docs.panther.com/search/search-tool#panther-ai-search-results-summary): Summarize a set of result events.
* Panther AI Detection Builder
  * [AI Detection Builder](https://docs.panther.com/detections/rules/ai-builder): Create and modify rules and scheduled rules with AI assistance directly in the rule editor. The AI Detection Builder can generate detection code, add test cases, and explain detection logic.
  * In a follow-up prompt to a [Search results AI summary](https://docs.panther.com/search/search-tool#panther-ai-search-results-summary), you can direct Panther AI to, "Write a Panther detection for this activity."
  * In a follow-up prompt to an [AI alert triage](https://docs.panther.com/alerts#panther-ai-alert-triage), you can ask Panther AI, "How should I tune the detection this alert was triggered by?"
  * On Detection detail pages, in the **Overview** tab, review the [AI-generated summary](https://docs.panther.com/detections#ai-detection-summaries).<br>

    <figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2F3AIcYxJ1esd3hzuSChCv%2FScreenshot%202026-01-20%20at%205.42.43%E2%80%AFPM.png?alt=media&#x26;token=89ac96e4-511e-4959-9d7e-e26ca7194a06" alt="" width="563"><figcaption></figcaption></figure>
* Panther AI schema builder
  * [Infer schemas from sample logs](https://docs.panther.com/data-onboarding/custom-log-types#how-to-infer-a-schema)

There are also AI GraphQL API operations available to [Cloud Connected](https://docs.panther.com/system-configuration/panther-deployment-types/cloud-connected) customers and [SaaS](https://docs.panther.com/system-configuration/panther-deployment-types/saas) customers with pass-through billing—view them in the [GraphQL API schema](https://docs.panther.com/panther-developer-workflows/api/graphql#discover-the-panther-graphql-schema).

In addition to Panther AI, Panther offers an [MCP server](https://docs.panther.com/panther-developer-workflows/mcp-server).

### File attachments

Panther AI supports file attachments to provide additional context for your AI conversations. You can upload images, PDFs, and text files that Panther AI can analyze alongside your prompts.

{% hint style="info" %}
File attachments are only available when Web Access is enabled in your [Panther AI settings](https://docs.panther.com/system-configuration#panther-ai).
{% endhint %}

#### Supported file types

* **Images**: PNG, JPEG, GIF, WebP formats
* **Documents**: PDF files
* **Text files**: Plain text and other text-based formats

#### Attachment limits

* **Images and PDFs**: Up to 5 MB per file
* **Text files**: Up to 10 MB per file
* **Total attachments**: Maximum of 5 files per conversation

#### Using attachments effectively

* **Security analysis**: Upload screenshots of suspicious activity, security alerts, or system logs for AI analysis.
* **Documentation review**: Attach PDFs of security reports, compliance documents, or vendor documentation for AI to reference.
* **Visual evidence**: Include network diagrams, architecture screenshots, or other visual materials to help AI understand your environment.
* **Log samples**: Upload text files containing log samples or configuration files for analysis.

Attachments are processed securely and stored temporarily for the duration of your AI conversation. Once the conversation ends, attachment data is removed. Panther AI can reference and analyze the content of your attachments throughout the conversation.

## Enabling Panther AI <a href="#enabling-panther-ai" id="enabling-panther-ai"></a>

To use Panther AI features, your Panther instance's **Enable Panther AI** setting must be set to `ON` and your user role must have the **Run Panther AI** and **Read Settings & SAML Preferences** permissions.

To enable Panther AI:

1. In the upper-right corner of your Panther Console, click the gear icon (**Settings**) > **Panther AI**.
2. On the Configuration tab, click the **Enable Panther AI** toggle to `ON`.
   * The **Enable Panther AI** setting is set to `OFF` by default, and can only be updated by a user with the **Edit Settings & SAML Preferences** permission. See [System Configuration](https://docs.panther.com/system-configuration#panther-ai) to learn more about Panther AI settings.
   * Once **Enable Panther AI** is set to `ON`, the **Run Panther AI** permission will be:
     * Granted automatically to the [default](https://docs.panther.com/system-configuration/rbac#default-panther-roles) **Admin** role.
     * Available to assign to additional roles. [Learn how to update a role's permissions here](https://docs.panther.com/system-configuration/rbac#update-a-roles-permissions). (A user must additionally have the **Read Settings & SAML Preferences** permission to use Panther AI.)

       <figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-470dd02928565a604ee3ddccf7bf3d5ed0017bce%2FPanther_AI_Config.png?alt=media" alt="On a page titled &#x22;Configuration&#x22; an empty checkbox and its label, Run Panther AI, are circled."><figcaption></figcaption></figure>

## How Panther AI uses your data <a href="#how-panther-ai-uses-your-data" id="how-panther-ai-uses-your-data"></a>

Panther AI does not use your data for AI training. Your prompts, Panther AI responses, and any intermediate data from tool calls all remain in your dedicated, single-tenant AWS account (like your logs). Amazon Bedrock, the underlying inference service, also does not use customer data for model training — [learn more about Bedrock's data handling commitments](https://docs.aws.amazon.com/bedrock/latest/userguide/data-protection.html).

You can enable [Panther-managed detections for Amazon Bedrock](https://github.com/search?q=repo%3Apanther-labs%2Fpanther-analysis+path%3A%2F%5Erules%5C%2Faws_cloudtrail_rules%5C%2F%2F+bedrock\&type=code) to monitor its activity. If you are a [Cloud Connected](https://docs.panther.com/system-configuration/panther-deployment-types/cloud-connected) customer, you can also set up [Amazon Bedrock Guardrails](https://aws.amazon.com/bedrock/guardrails/) for extra protection.

{% hint style="info" %}
Learn more in [FAQs: Panther AI architecture and security](https://docs.panther.com/resources/panther-architecture#faqs-panther-ai-architecture-and-security).
{% endhint %}

## AI permissions and scope <a href="#ai-permissions-and-scope" id="ai-permissions-and-scope"></a>

All AI inference runs under the identity and permissions of the invoking user. In the Console, this is the logged-in user. For API calls, this is the user or API token that initiated the request. For [scheduled prompts](https://docs.panther.com/ai/scheduled-ai-prompts), this is the user who created the prompt.

This means Panther AI cannot access data, modify resources, or perform actions that the invoking user could not perform themselves. This includes:

* [**Log type access restrictions**](https://docs.panther.com/system-configuration/rbac#how-to-restrict-log-types-for-a-certain-role): If a user's role restricts access to certain log types, AI cannot query or summarize those log types.
* **Data lake access**: AI can only execute data lake queries if the user has the appropriate data analytics permissions.
* **Detection modifications**: AI can only create or edit detections if the user has the relevant `PolicyModify` or `RuleModify` permissions.
* **Alert modifications**: AI can only update alerts, add comments, or assign alerts if the user has `AlertModify` permission.

### Response visibility <a href="#response-visibility" id="response-visibility"></a>

The default visibility of AI conversations depends on where they are created:

* **Panther AI page** (left-hand navigation): New conversations default to **private** when privacy controls are enabled.
* **Alert triage, Search summaries, and other entry points**: Conversations default to **shared** (visible to all users).
* **Scheduled prompts**: Conversations default to **shared**, but can be configured as private when creating or editing the prompt.

Users can toggle a conversation between shared and private at any time using the privacy control in the conversation header. Users with the **View AI Private Responses** permission can view private conversations created by others. Regardless of sharing settings, response visibility follows the same role-based access controls as the rest of Panther — if a user cannot access a certain alert, they also cannot view AI triage responses for that alert.

### Tool approval for write operations <a href="#tool-approval" id="tool-approval"></a>

Panther AI includes a human-in-the-loop approval system for tools that perform write operations. Before Panther AI can execute actions that modify your data, you must explicitly approve or deny the operation. This gives you full control over what changes Panther AI makes in your environment.

#### Tools requiring approval

The following [tools](#tools) require explicit user approval before execution:

| Tool                                | Description                                                                                          | Required permission      |
| ----------------------------------- | ---------------------------------------------------------------------------------------------------- | ------------------------ |
| `panther_ai_detections_write`       | Create or update detection rules and policies                                                        | PolicyModify, RuleModify |
| `panther_ai_detections_author`      | Test and validate detection code for rules and policies                                              | PolicyModify, RuleModify |
| `panther_ai_alerts_add_comment`     | Add comments to alerts                                                                               | AlertModify              |
| `panther_ai_alerts_update`          | Update alert status, quality assessment, or context tags                                             | AlertModify              |
| `panther_ai_alerts_assign`          | Assign alerts to users                                                                               | AlertModify              |
| `panther_ai_alerts_bulk_update`     | Update multiple alerts at once                                                                       | AlertModify              |
| `panther_ai_utilities_ask_question` | Ask the user a structured multiple-choice question to gather information needed for the current task | RunPantherAI             |

Additionally, the `panther_ai_utilities_fetch_web` tool requires approval when accessing domains not on the approved domains list, if the **Require Approval for Non-Approved Domains** setting is enabled. Approved domains do not require approval. See [Web Access](https://docs.panther.com/system-configuration#web-access) for configuration details.

#### How tool approval works

When Panther AI attempts to use a [tool that requires approval](#tools-requiring-approval), Panther AI pauses and displays the proposed action, including the tool name and the parameters it intends to use.

<figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2FWY7pYBIMHY6YKJwPYKgI%2Fimage.png?alt=media&#x26;token=f712768e-d73c-4b6c-b5c1-55e70baaf137" alt=""><figcaption></figcaption></figure>

Review the details of the proposed operation, then click **Accept** or **Reject**. If you reject the operation, you can optionally provide a reason for denial. If no decision is made within two minutes, the operation times out and is not executed.

#### Audit logging

All tool approval decisions are recorded in [Panther audit logs](https://docs.panther.com/data-onboarding/supported-logs/panther-audit-logs), including:

* Whether the tool was approved or denied
* The rejection reason (if denied)
* The tool name and parameters
* The user who made the decision
* Timestamp of the decision

This provides a complete audit trail of all write operations performed by Panther AI.

## Panther AI settings

Panther AI configurations are made in two places: on the Panther AI settings page, and in the AI prompt bar itself.

### Panther AI settings page

The Panther AI settings page has settings for enabling Panther AI, auto-running AI alert triage, and configuring web access for Panther AI.

To access your Panther AI settings, click the gear icon in the upper right corner of your Panther Console, then select **Panther AI**. [Learn more about these settings on System Configuration](https://docs.panther.com/system-configuration#panther-ai). After saving changes to AI settings, changes may take up to 10 minutes to take effect due to configuration caching across the platform.

### AI prompt settings

Use AI prompt settings to tailor AI-generated content in Panther to your preferences. AI settings are universally applied to all AI entry points in Panther, but are specific to each Panther user.

To set your AI prompt settings:

1. Navigate to one of the AI prompt bars in the Panther Console.
2. On the right side of the prompt bar, click the **Edit prompt settings** icon: <img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-f23aec81508ed2c8e45f90c3dae7d2690282be14%2Fimage.png?alt=media" alt="" data-size="line">.
3. Set the [reasoning level setting](#reasoning-level).
4. Click **Save Settings**. Changes may take up to 10 minutes to take effect due to configuration caching.

#### **Reasoning level**

The reasoning level setting controls reasoning depth, model selection, and tool invocation limits—not just output length. The setting determines how thoroughly Panther AI analyzes the input and the sophistication of its analysis approach.

The reasoning level AI setting has three possible values:

* **Basic**: runs quickly and produces a brief summary
* **Standard**: recommended for initial [alert triage](https://docs.panther.com/alerts#panther-ai-alert-triage)
* **Advanced**: allows Panther AI to investigate deeply and produce detailed analysis outputs

<figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2FsfkTInst9gGRO6AOD57p%2FScreenshot%202026-01-20%20at%2011.22.31%E2%80%AFAM.png?alt=media&#x26;token=39579b90-1163-4bd9-b176-e27f999409f3" alt="Under an &#x22;Edit prompt settings&#x22; title, there are three radio buttons: Basic, Standard, and Advanced." width="227"><figcaption></figcaption></figure>

{% hint style="info" %}
The reasoning level of auto-run AI triages for alerts triggered by a certain detection can be set by adding a tag to the detection. Learn more in [Auto-run AI alert triage](https://docs.panther.com/alerts#auto-run-ai-alert-triage).
{% endhint %}

### Personal AI preferences

You can customize how Panther AI communicates with you by setting a personal AI preference in your [Profile Settings](https://docs.panther.com/system-configuration#profile-settings). This allows you to specify your preferred communication style, role, expertise level, or other preferences that will be applied to all AI interactions.

To set your personal AI preferences:

1. In the upper-right corner of your Panther Console, click your initials, then select **Profile Settings**.
2. Navigate to the **AI Preferences** tab.
3. Enter your preferred AI communication style in the text area (up to 2048 characters).
   * For example: "Please respond as a senior security analyst with expertise in cloud environments. Use technical language and provide detailed explanations."
4. Click **Save**. Changes to AI preferences may take up to 10 minutes to take effect due to configuration caching.

Your personal AI preferences are combined with your organization's profile settings to provide contextual information that helps Panther AI tailor its responses to your specific needs and communication style.

## Suggested and favorite prompts

When opening Panther AI from the left-hand navigation menu, under **Suggested questions to get started**, you'll see some randomly generated suggested prompts. Click a suggestion to execute it.

<figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-3d9c93798a09789a946ef645a33547e3cbecdc91%2Fimage.png?alt=media" alt=""><figcaption></figcaption></figure>

You can customize this list by favoriting a prompt:

1. Execute a prompt (in any of the Panther AI entry points).
2. To the right of the prompt text, click the star.

   <figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-ea97c7635c8f89637cb81988ffecf080ec41dfb4%2Fimage.png?alt=media" alt=""><figcaption></figcaption></figure>

   * The prompt will be added to your list of favorite prompts, which appears under **Suggested questions to get started**, to the left of suggested prompts.

<figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-3272066d355deec13bd63699082494eb9d3c7c73%2Fimage.png?alt=media" alt=""><figcaption></figcaption></figure>

Favorites are specific to you, and are not shared with any other users. To remove a favorite, in the upper-right corner of the prompt tile, click **X**.

## Citations

When Panther AI aids in triaging or summarizing your data, it will return links to relevant data so you can verify its findings. Citations may include alerts, detections, and/or data queries.

<figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-8dc6ecf00213dd944e05792ee330d5cecab78012%2FScreenshot%202025-05-09%20at%203.47.37%E2%80%AFPM.png?alt=media" alt="Under a &#x22;Panther AI&#x22; header at the top, there is text starting with &#x22;I&#x27;ll help you triage this alert.&#x22; Below, text is circled in two places: one starting with Alert and the other starting with Rule." width="563"><figcaption></figcaption></figure>

## Amazon Bedrock service quotas

If you are leveraging Panther AI often (e.g., you are using [auto-run AI alert triage](https://docs.panther.com/alerts#auto-run-ai-alert-triage)), you may hit [Amazon Bedrock service quotas](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#limits_bedrock). When this happens, Panther AI may not run as expected, or you may see an error in its output.

To remedy this:

* If you are a [Cloud Connected](https://docs.panther.com/system-configuration/panther-deployment-types/cloud-connected) customer, follow [this Amazon documentation to request an increase for Amazon Bedrock quotas](https://docs.aws.amazon.com/bedrock/latest/userguide/quotas-increase.html).
* If you are a [SaaS](https://docs.panther.com/system-configuration/panther-deployment-types/saas) customer with pass-through billing, reach out to Support.

## Tools

Panther AI has access to many of the same tools available to human users of Panther. Panther AI automatically selects which tools to use based on your prompt — you don't need to specify tools, but understanding what's available helps you craft effective prompts. For example, asking "What happened before this alert fired?" will prompt AI to use alert and data lake tools, while "Write a detection for brute-force logins" will use detection authoring tools.

When running tools (either in the Console or programmatically), Panther AI has the same permission set as the current user. When entering your own prompt, you can direct it to use certain tools, if desired.

The most commonly used tools include `panther_ai_datalake_execute_sql` (custom SQL queries), `panther_ai_alerts_get` (alert details), and `panther_ai_detections_get` (detection metadata and code). See [which tools require human approval before execution above](#tool-approval).

### **Alert management**

* `panther_ai_alerts_add_comment`: Add comments to alerts
* `panther_ai_alerts_list`: List recent alerts (default 7 days) with filtering by type, severity, status, log type, source, quality, or context tags
* `panther_ai_alerts_get`: Get detailed alert information, including up to 25 recent comments and sampled events
* `panther_ai_alerts_assign`: Assign alerts to users
* `panther_ai_alerts_bulk_update`: Update up to 100 alerts at once with status, quality, tags, assignee, or comments
* `panther_ai_alerts_list_context_tags`: List all available context tags for categorizing alerts
* `panther_ai_alerts_update`: Update the status of alerts, quality assessment, or context tags

### **Data search and analysis**

* `panther_ai_datalake_summarize_column`: Compute top/bottom unique values with counts for any column or nested field, with results automatically enriched from lookup tables
* `panther_ai_datalake_search_logs`: Simple key/value search for straightforward single-attribute lookups returning complete log records
* `panther_ai_datalake_execute_sql`: Execute database-compatible SQL queries against Panther's data lake
* `panther_ai_datalake_activity_histogram`: Generate activity histograms to identify peak activity times before detailed searches
* `panther_ai_utilities_pantherflow_query`: Submit a PantherFlow query for validation and display
* `panther_ai_utilities_pantherflow_query_skill`: Get PantherFlow query language reference and generation instructions
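
To make concrete what `panther_ai_datalake_summarize_column` computes conceptually — the top unique values of a column with counts — here is a local Python sketch of the same aggregation over sample events. The field names are illustrative and not taken from a real Panther schema:

```python
from collections import Counter

def summarize_column(events, field, top_n=3):
    """Return the most common values of `field` with counts, mirroring
    (conceptually) what panther_ai_datalake_summarize_column reports for
    a column. Events missing the field are skipped."""
    values = (e[field] for e in events if field in e)
    return Counter(values).most_common(top_n)

# Sample log events (illustrative field names, not a real Panther schema):
events = [
    {"srcIp": "10.0.0.1"}, {"srcIp": "10.0.0.1"},
    {"srcIp": "10.0.0.2"}, {"dstIp": "10.0.0.9"},
]
print(summarize_column(events, "srcIp"))  # [('10.0.0.1', 2), ('10.0.0.2', 1)]
```

In practice the tool also enriches results from lookup tables; this sketch only shows the counting step.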

### **Cloud resources**

* `panther_ai_cloud_resources_list_types`: Get a static list of supported AWS resource types for cloud resource queries and policy development
* `panther_ai_cloud_resources_list`: Search and filter cloud resources by type, compliance status, account, or ARN substring
* `panther_ai_cloud_resources_get`: Retrieve detailed resource configuration data for a specific cloud resource
* `panther_ai_cloud_resources_get_sample`: Get a sample resource of a specific type for policy authoring and testing

### **Cloud security scanning**

* `panther_ai_cloud_scanning_get_overview`: Get organization-level compliance posture summary with top failing policies and resources
* `panther_ai_cloud_scanning_describe_policy`: Analyze per-resource compliance results for a specific policy
* `panther_ai_cloud_scanning_describe_resource`: Analyze per-policy compliance results for a specific resource
* `panther_ai_cloud_scanning_list_cis_controls`: Get CIS AWS Foundations Benchmark reference data and control details

### **Detection management**

* `panther_ai_detections_list`: List and search detections (rules, scheduled rules, correlation rules, policies) with filtering by name, severity, log type, tags, MITRE ATT\&CK, status, and author
* `panther_ai_detections_get`: Get complete detection details including Python code, tests, and runbook
* `panther_ai_detections_write`: Create or update a detection directly in Panther. Supports RULE (real-time streaming), SCHEDULED\_RULE (historical), and POLICY (cloud resource compliance) types. Settings not in the schema (enabled state, tags, alert destinations) are preserved automatically on updates.
* `panther_ai_detections_author`: Author a new detection with testing and validation. Tests detection code in Panther's Python execution environment, validating rule(), alert\_context(), title(), dedup() for rules, or policy() for policies. Returns syntax errors, runtime exceptions, and logic errors with details.
* `panther_ai_detections_writer_skill`: Get specific instructions before writing a Panther detection
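
The tools above operate on standard Panther Python detections. As a minimal sketch of the kind of code `panther_ai_detections_author` tests — validating `rule()`, `title()`, and `dedup()` — consider this hypothetical rule; the log fields follow AWS CloudTrail's `ConsoleLogin` event shape, and the detection itself is illustrative, not a shipped Panther rule:

```python
# Hypothetical Panther rule sketch: flag AWS console logins without MFA.

def rule(event):
    # Fire only on console login events where MFA was not used.
    if event.get("eventName") != "ConsoleLogin":
        return False
    return event.get("additionalEventData", {}).get("MFAUsed") == "No"

def title(event):
    # Alert title shown in Panther; falls back when the ARN is absent.
    arn = event.get("userIdentity", {}).get("arn", "unknown principal")
    return f"Console login without MFA by {arn}"

def dedup(event):
    # Group repeated alerts by the principal that logged in.
    return event.get("userIdentity", {}).get("arn", "unknown")
```

When you ask Panther AI to author a detection like this, `panther_ai_detections_author` runs these functions against test events in Panther's Python execution environment and reports syntax, runtime, and logic errors before anything is saved.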

### **Log sources, schemas, and metadata**

* `panther_ai_log_sources_get_sample_data`: Retrieve sample data from a log source to understand data structure and verify ingestion
* `panther_ai_log_sources_list`: List log sources with health status (permissions, data flow, errors), with filtering by log type, health, and integration type
* `panther_ai_log_types_get_schema`: Get complete schema for log types including column definitions and nested field paths for SQL queries
* `panther_ai_log_types_list`: List available log types with table names and descriptions
* `panther_ai_log_types_test_schema`: Test a schema against sample data to validate correctness, returning match/unmatch statistics and error messages. Designed for iterative use until 100% match is achieved.
* `panther_ai_log_types_writer_skill`: Get instructions about schema structure, field types, and best practices before creating schemas
* `panther_ai_log_types_guidance_skill`: Get instructions for analyzing events based on log type
* `panther_ai_utilities_classification_error_fixer_skill`: Get instructions for diagnosing and fixing log classification errors

### **Query (Saved Search) management**

* `panther_ai_datalake_list_saved_queries`: List saved queries (Saved Searches) for discovery and reuse
* `panther_ai_datalake_get_query_results`: Retrieve results of previously executed async queries by query ID
* `panther_ai_datalake_write_saved_query`: Save a SQL query with a descriptive name and description for later reuse. The query is executed first to verify validity before saving.

### **Enrichment and context**

* `panther_ai_enrichments_lookup`: Look up enrichment data for IOCs and indicators (IP addresses, domains, hashes, usernames, email addresses, AWS ARNs)
* `panther_ai_users_list`: List Panther workspace users with IDs, names, and status for referencing in assignments and filters
* `panther_ai_users_get`: Get detailed user information including permissions and roles
* `panther_ai_roles_list`: List Panther roles and their permissions
* `panther_ai_roles_get`: Get details about a specific role, including permissions and log type access
* `panther_ai_utilities_calculate_risk_score`: Calculate a normalized risk score for entities (users, IPs, etc.) based on alert history and security indicators
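
To illustrate the idea of a normalized entity risk score, here is a purely hypothetical weighting scheme — the weights, cap, and formula are assumptions for the sketch and are not Panther's actual [Risk Classification framework](https://docs.panther.com/ai/risk-scoring-and-classification-framework):

```python
def risk_score(alert_severities, weights=None):
    """Illustrative only: combine an entity's alert history into a 0-100
    score. panther_ai_utilities_calculate_risk_score uses Panther's own
    framework; these weights and the cap are assumptions."""
    weights = weights or {"LOW": 5, "MEDIUM": 15, "HIGH": 30, "CRITICAL": 50}
    raw = sum(weights.get(sev, 0) for sev in alert_severities)
    return min(raw, 100)  # normalize by capping at 100

print(risk_score(["LOW", "HIGH", "CRITICAL"]))  # 85
```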

### **Utilities**

* `panther_ai_utilities_ask_question`: Ask the user a structured multiple-choice question to gather information needed for the current task. Presents 2-10 specific options with a built-in "Other" option for custom responses. Supports single-select and multi-select response types.
* `panther_ai_utilities_fetch_web`: Fetch content from web pages and process user-uploaded file attachments. For web URLs, access is restricted to approved domains configured in [Panther AI settings](https://docs.panther.com/system-configuration#web-access), with user approval required for non-approved domains depending on settings. File attachments are stored securely in S3 for the duration of the AI conversation. Supports text pages, images (PNG, JPEG, GIF, WebP), and PDF documents.
* `panther_ai_utilities_panther_docs_skill`: Get instructions for navigating Panther documentation at docs.panther.com

### **AI responses and citations**

* `panther_ai_memory_get_response`: Retrieve a complete AI response by its ID, including parent and child responses for full conversation context. AI responses form a tree structure where conversations branch through follow-ups and related analyses.
* `panther_ai_memory_search_responses`: Search conversation history using semantic search for previous AI responses. Queries can be natural language questions, contextual phrases, or specific indicators (IP addresses, alert IDs, usernames).
* `panther_ai_citations_list`: List citations accumulated during the current conversation (resource references viewed or modified)

## Frequently asked questions <a href="#faq" id="faq"></a>

<details>

<summary>Can Panther AI access data from all my log sources?</summary>

Panther AI can access any log source and log type that the invoking user's role has permission to access. This includes both built-in and custom log types. If [log type access restrictions](https://docs.panther.com/system-configuration/rbac#how-to-restrict-log-types-for-a-certain-role) are configured for a user's role, Panther AI respects those restrictions — it will not query, summarize, or display data from restricted log types.

</details>

<details>

<summary>Can other users see my AI conversations?</summary>

The default visibility depends on the entry point — conversations started from the Panther AI page default to private (when privacy controls are enabled), while alert triage and other entry points default to shared. You can toggle any conversation between shared and private at any time using the privacy control in the conversation header. Other users with the **View AI Private Responses** permission can still view your private conversations. Regardless of sharing settings, users can never view AI responses that reference data they do not have permission to access (such as restricted log types or alerts outside their scope).

</details>

<details>

<summary>Can Panther AI make mistakes?</summary>

Like all AI systems, Panther AI can occasionally produce inaccurate or incomplete results. This is why Panther AI provides [citations](#citations) linking to the underlying data (alerts, detections, queries) so you can verify its findings. Write operations (such as creating detections or updating alerts) require [explicit approval](#tool-approval) before execution, giving you the opportunity to review proposed changes. Use of Panther AI is subject to the [AI disclaimer](https://docs.panther.com/resources/help/legal#ai-disclaimer).

</details>

<details>

<summary>What happens if I disable Panther AI?</summary>

When an admin sets **Enable Panther AI** to `OFF`, all AI features become unavailable across the Console. [Scheduled prompts](https://docs.panther.com/ai/scheduled-ai-prompts) will not execute while AI is disabled, but they are preserved and will resume when AI is re-enabled. Previously generated AI responses remain accessible in the [response history](https://docs.panther.com/ai/managing-ai-response-history) and continue to follow the 30-day retention policy (unless explicitly [saved](https://docs.panther.com/ai/managing-ai-response-history#saving-an-ai-response)).

</details>

<details>

<summary>Does Panther AI work with custom log types?</summary>

Yes. Panther AI works with both built-in and custom log types. It can query, summarize, and analyze data from any log type onboarded to your Panther instance, and can also help you [infer schemas from sample logs](https://docs.panther.com/data-onboarding/custom-log-types#how-to-infer-a-schema) when onboarding new custom log types.

</details>

<details>

<summary>How is Panther AI billed?</summary>

Panther AI usage is powered by Amazon Bedrock. For [Cloud Connected](https://docs.panther.com/system-configuration/panther-deployment-types/cloud-connected) customers, Bedrock inference costs appear on your AWS bill. For [SaaS](https://docs.panther.com/system-configuration/panther-deployment-types/saas) customers, contact your Panther account team for details on AI billing. See [Amazon Bedrock service quotas](#amazon-bedrock-service-quotas) for information about usage limits.

</details>

<details>

<summary>Can Panther AI be used via API?</summary>

Yes. [Cloud Connected](https://docs.panther.com/system-configuration/panther-deployment-types/cloud-connected) customers and [SaaS](https://docs.panther.com/system-configuration/panther-deployment-types/saas) customers with pass-through billing can use AI GraphQL API operations. When invoked via API, Panther AI runs with the permissions of the API token used to authenticate the request. View available operations in the [GraphQL API schema](https://docs.panther.com/panther-developer-workflows/api/graphql#discover-the-panther-graphql-schema).
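
Since the exact AI operation names are defined by the schema, standard GraphQL introspection is one way to discover them. A minimal sketch, assuming a placeholder instance URL and token, and a rough naming heuristic (an `ai` prefix) that you should adjust after inspecting your schema:

```python
import json
import urllib.request

# Hypothetical placeholders -- substitute your instance URL and an API token.
PANTHER_GRAPHQL_URL = "https://YOUR-INSTANCE.runpanther.net/public/graphql"
API_TOKEN = "YOUR-API-TOKEN"  # AI runs with this token's permissions

# Standard GraphQL introspection query: lists every top-level operation.
INTROSPECTION = """
{
  __schema {
    queryType { fields { name description } }
    mutationType { fields { name description } }
  }
}
"""

def list_operations() -> list:
    """Fetch all query and mutation names from the schema."""
    req = urllib.request.Request(
        PANTHER_GRAPHQL_URL,
        data=json.dumps({"query": INTROSPECTION}).encode(),
        headers={"X-API-Key": API_TOKEN, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        schema = json.load(resp)["data"]["__schema"]
    return [
        field["name"]
        for kind in ("queryType", "mutationType")
        for field in schema[kind]["fields"]
    ]

def filter_ai_operations(names: list) -> list:
    # Rough heuristic only; confirm against the published schema.
    return [n for n in names if n.lower().startswith("ai")]
```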

</details>
