Panther AI

Overview

Panther AI includes a set of generative AI features designed to accelerate your detection and response workflows. It operates with the persona of a security engineer and has access to many of the same tools available to human users of Panther.

Panther AI can quickly assess data, such as alerts and logs, to rapidly deliver insights. You can run predefined workflows or ask your own questions to Panther AI—it will leverage its available tools (such as querying the data lake) to answer them, generally much faster than a human analyst would be able to.

[Screenshot: a slide-out panel titled "ALB Web Scanning Analysis," with sections including Summary, Key Findings, and Security Implications.]

Panther AI uses Claude AI models by Anthropic through Amazon Bedrock. Panther AI does not use your data for AI training—learn more about data security below.

When using Panther AI, you may want to view previous responses or rename, pin, save, or delete certain interactions. Learn how to perform these actions in Managing Panther AI Response History.

Note: Use of Panther AI features is subject to the AI disclaimer found on the Legal page.

Using Panther AI agents in the Console

Panther AI is available in multiple locations throughout the Panther Console.

There are also AI GraphQL API operations available to Cloud Connected customers and SaaS customers with pass-through billing—view them in the GraphQL API schema.

In addition to Panther AI, Panther offers an MCP server.

File attachments

Panther AI supports file attachments to provide additional context for your AI conversations. You can upload images, PDFs, and text files that Panther AI can analyze alongside your prompts.

Note: File attachments are only available when Web Access is enabled in your Panther AI settings.

Supported file types

  • Images: PNG, JPEG, GIF, WebP formats

  • Documents: PDF files

  • Text files: Plain text and other text-based formats

Attachment limits

  • Images and PDFs: Up to 5 MB per file

  • Text files: Up to 10 MB per file

  • Total attachments: Maximum of 5 files per conversation
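The limits above can be checked before uploading. The following sketch mirrors the documented limit values; the helper function and its interface are illustrative, not part of Panther's API:

```python
# Illustrative pre-upload check against the documented attachment limits.
# The limit constants come from this page; the helper itself is hypothetical.
IMAGE_PDF_LIMIT = 5 * 1024 * 1024   # 5 MB per image or PDF
TEXT_LIMIT = 10 * 1024 * 1024       # 10 MB per text file
MAX_FILES = 5                       # maximum attachments per conversation

IMAGE_EXTS = {".png", ".jpeg", ".jpg", ".gif", ".webp"}

def check_attachments(files):
    """files: list of (filename, size_in_bytes). Returns a list of error strings."""
    errors = []
    if len(files) > MAX_FILES:
        errors.append(f"Too many attachments: {len(files)} > {MAX_FILES}")
    for name, size in files:
        ext = "." + name.rsplit(".", 1)[-1].lower() if "." in name else ""
        # Images and PDFs get the 5 MB limit; everything else is treated as text.
        limit = IMAGE_PDF_LIMIT if ext in IMAGE_EXTS or ext == ".pdf" else TEXT_LIMIT
        if size > limit:
            errors.append(f"{name}: {size} bytes exceeds the {limit} byte limit")
    return errors
```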

Using attachments effectively

  • Security analysis: Upload screenshots of suspicious activity, security alerts, or system logs for AI analysis.

  • Documentation review: Attach PDFs of security reports, compliance documents, or vendor documentation for AI to reference.

  • Visual evidence: Include network diagrams, architecture screenshots, or other visual materials to help AI understand your environment.

  • Log samples: Upload text files containing log samples or configuration files for analysis.

Attachments are processed securely and stored temporarily for the duration of your AI session. Panther AI can reference and analyze the content of your attachments throughout the conversation.

Enabling Panther AI

To use Panther AI features, your Panther instance's Enable Panther AI setting must be set to ON and your user role must have the Run Panther AI and Read Settings & SAML Preferences permissions.

To enable Panther AI:

  1. In the upper-right corner of your Panther Console, click the gear icon (Settings) > Panther AI.

  2. On the Configuration tab, click the Enable Panther AI toggle to ON.

    • The Enable Panther AI setting is set to OFF by default, and can only be updated by a user with the Edit Settings & SAML Preferences permission. See System Configuration to learn more about Panther AI settings.

    • Once Enable Panther AI is set to ON, the Run Panther AI permission will be:

How Panther AI uses your data

Panther AI does not use your data for AI training. Your prompts and Panther AI responses are stored in your dedicated, single-tenant AWS account (like your logs).

You can enable Panther-managed detections for Amazon Bedrock to monitor its activity. If you are a Cloud Connected customer, you can also set up Amazon Bedrock Guardrails for extra protection.

AI permissions and scope

Panther AI assumes the role and associated permissions of the user running it—i.e., the user logged into the Console where AI operations are being run, or the user executing AI-related API calls.

This means Panther AI will not perform read or write operations the current user could not perform themselves. This includes log type access restrictions, if set for that user role.

Tool approval for write operations

Panther AI includes a human-in-the-loop approval system for tools that perform write operations. Before Panther AI can execute actions that modify your data, you must explicitly approve or deny the operation. This gives you full control over what changes Panther AI makes in your environment.

Tools requiring approval

The following tools require explicit user approval before execution:

  • panther_ai_detections_write: Create new detection rules and policies. Required permission: PolicyModify, RuleModify

  • panther_ai_detections_author: Test and validate detection code for rules and policies. Required permission: PolicyModify, RuleModify

  • panther_ai_alerts_add_comment: Add comments to alerts. Required permission: AlertModify

  • panther_ai_alerts_update: Update alert status, quality assessment, or context tags. Required permission: AlertModify

  • panther_ai_alerts_assign: Assign alerts to users. Required permission: AlertModify

  • panther_ai_alerts_bulk_update: Update multiple alerts at once. Required permission: AlertModify

Additionally, the panther_ai_utilities_fetch_web tool requires approval when accessing domains not on the approved domains list, if the Require Approval for Non-Approved Domains setting is enabled. Approved domains do not require approval. See Web Access for configuration details.

How tool approval works

When Panther AI attempts to use a tool that requires approval, Panther AI pauses and displays the proposed action, including the tool name and the parameters it intends to use.

Review the details of the proposed operation, then click Accept or Reject. If you reject the operation, you can optionally provide a reason for denial. If no decision is made within two minutes, the operation times out and is not executed.
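The accept/reject flow above can be sketched as a blocking gate with the documented two-minute window. This is a generic illustration of a human-in-the-loop gate, not Panther's internal mechanism:

```python
import queue

APPROVAL_TIMEOUT_SECONDS = 120  # documented two-minute approval window

def gate_tool_call(tool_name, params, decision_queue):
    """Block until a reviewer approves or rejects the proposed tool call,
    or the approval window elapses.

    decision_queue delivers ("accept", None) or ("reject", reason).
    Illustrative only; Panther's actual implementation is not shown here.
    """
    try:
        decision, reason = decision_queue.get(timeout=APPROVAL_TIMEOUT_SECONDS)
    except queue.Empty:
        # No decision within the window: the operation times out unexecuted.
        return {"executed": False, "outcome": "timeout"}
    if decision == "accept":
        return {"executed": True, "outcome": "approved"}
    return {"executed": False, "outcome": "rejected", "reason": reason}
```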

Audit logging

All tool approval decisions are recorded in Panther audit logs, including:

  • Whether the tool was approved or denied

  • The rejection reason (if denied)

  • The tool name and parameters

  • The user who made the decision

  • Timestamp of the decision

This provides a complete audit trail of all write operations performed by Panther AI.
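A record capturing the fields listed above might be shaped like the following. Field names here are hypothetical and may not match Panther's actual audit log schema:

```python
from datetime import datetime, timezone

# Illustrative shape of a tool-approval audit record; the field names are
# hypothetical and may differ from Panther's actual audit log schema.
def build_audit_record(tool_name, params, approved, user, reason=None):
    return {
        "tool": tool_name,                  # the tool name and parameters
        "parameters": params,
        "decision": "approved" if approved else "denied",
        "rejection_reason": None if approved else reason,
        "decided_by": user,                 # the user who made the decision
        "decided_at": datetime.now(timezone.utc).isoformat(),  # timestamp
    }
```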

Panther AI settings

Panther AI configurations are made in two places: on the Panther AI settings page, and in the AI prompt bar itself.

Panther AI settings page

The Panther AI settings page has settings for enabling Panther AI, auto-running AI alert triage, and configuring web access for Panther AI.

To access your Panther AI settings, click the gear icon in the upper right corner of your Panther Console, then select Panther AI. Learn more about these settings on System Configuration. After saving changes to AI settings, changes may take up to 10 minutes to take effect.

AI prompt settings

Use AI prompt settings to tailor AI-generated content in Panther to your preferences. AI settings are universally applied to all AI entry points in Panther, but are specific to each Panther user.

To set your AI prompt settings:

  1. Navigate to one of the AI prompt bars in the Panther Console.

  2. On the right side of the prompt bar, click the Edit prompt settings icon.

  3. Click Save Settings.

Reasoning level

The reasoning level setting controls reasoning depth, model selection, and tool invocation limits—not just output length. The setting determines how thoroughly Panther AI analyzes the input and the sophistication of its analysis approach.

The reasoning level AI setting has three possible values:

  • Basic: runs quickly and produces a brief summary

  • Standard: recommended for initial alert triage

  • Advanced: allows Panther AI to investigate deeply and produce detailed analysis outputs

[Screenshot: under an "Edit prompt settings" title, three radio buttons: Basic, Standard, and Advanced.]

Note: The reasoning level of auto-run AI triages for alerts triggered by a certain detection can be set by adding a tag to the detection. Learn more in Auto-run AI alert triage.

Personal AI preferences

You can customize how Panther AI communicates with you by setting a personal AI preference in your Profile Settings. This allows you to specify your preferred communication style, role, expertise level, or other preferences that will be applied to all AI interactions.

To set your personal AI preferences:

  1. In the upper-right corner of your Panther Console, click your initials, then select Profile Settings.

  2. Navigate to the AI Preferences tab.

  3. Enter your preferred AI communication style in the text area (up to 2048 characters).

    • For example: "Please respond as a senior security analyst with expertise in cloud environments. Use technical language and provide detailed explanations."

  4. Click Save. Changes to AI preferences may take up to 10 minutes to take effect.

Your personal AI preferences are combined with your organization's profile settings to provide contextual information that helps Panther AI tailor its responses to your specific needs and communication style.

Suggested and favorite prompts

When opening Panther AI from the left-hand navigation menu, under Suggested questions to get started, you'll see some randomly generated suggested prompts. Click a suggestion to execute it.

You can customize this list by favoriting a prompt:

  1. Execute a prompt (in any of the Panther AI entry points).

  2. To the right of the prompt text, click the star.

    • The prompt will be added to your list of favorite prompts, which appears under Suggested questions to get started, to the left of suggested prompts.

Favorites are specific to you, and are not shared with any other users. To remove a favorite, in the upper-right corner of the prompt tile, click X.

Citations

When Panther AI aids in triaging or summarizing your data, it will return links to relevant data so you can verify its findings. Citations may include alerts, detections, and/or data queries.

[Screenshot: under a "Panther AI" header, a response beginning "I'll help you triage this alert," with citation links to an Alert and a Rule circled.]

Amazon Bedrock service quotas

If you are leveraging Panther AI often (e.g., you are using auto-run AI alert triage), you may hit Amazon Bedrock service quotas. When this happens, Panther AI may not run as expected, or you may see an error in its output.

To remedy this:

Tools

Panther AI has access to many of the same tools available to human users of Panther. When running tools (either in the Console or programmatically), Panther AI has the same permissions set as the current user. In general, Panther AI decides when to use a specific tool based on the task you give it. When entering your own prompt, you can direct it to use certain tools, if desired.

See Tools requiring approval above for the tools that need human approval before execution.

Alert management

  • panther_ai_alerts_add_comment: Add comments to alerts

  • panther_ai_alerts_list: List recent alerts, with filtering options

  • panther_ai_alerts_get: Get detailed alert information, including comments and associated events

  • panther_ai_alerts_assign: Assign alerts to users

  • panther_ai_alerts_bulk_update: Update multiple alerts at once

  • panther_ai_alerts_list_context_tags: List all available context tags for categorizing alerts

  • panther_ai_alerts_update: Update the status of alerts, quality assessment, or context tags

Data search and analysis

  • panther_ai_datalake_summarize_column: Analyze distribution of attribute values

  • panther_ai_datalake_search_logs: Find specific log records by attribute/value pairs

  • panther_ai_datalake_execute_sql: Execute custom SQL queries for complex analysis

  • panther_ai_datalake_activity_histogram: Get time-bucketed histograms of activity across log sources

  • panther_ai_utilities_pantherflow_query: Submit a PantherFlow query for validation and display

  • panther_ai_utilities_pantherflow_query_skill: Get PantherFlow query language reference and generation instructions

Cloud resources

  • panther_ai_cloud_resources_list_types: Get a static list of supported AWS resource types for cloud resource queries and policy development

  • panther_ai_cloud_resources_list: Search and filter cloud resources by type, compliance status, account, or ARN substring

  • panther_ai_cloud_resources_get: Retrieve detailed resource configuration data for a specific cloud resource

  • panther_ai_cloud_resources_get_sample: Get a sample resource of a specific type for policy authoring and testing

Cloud security scanning

  • panther_ai_cloud_scanning_get_overview: Get organization-level compliance posture summary with top failing policies and resources

  • panther_ai_cloud_scanning_describe_policy: Analyze per-resource compliance results for a specific policy

  • panther_ai_cloud_scanning_describe_resource: Analyze per-policy compliance results for a specific resource

  • panther_ai_cloud_scanning_list_cis_controls: Get CIS AWS Foundations Benchmark reference data and control details

Detection management

  • panther_ai_detections_list: List available detections (rules and policies), with filtering options including resource types and compliance status

  • panther_ai_detections_get: Get detection metadata and code

  • panther_ai_detections_write: Create new detections (rules and policies) with support for POLICY analysis type

  • panther_ai_detections_author: Test and validate detection code against sample events or resources, with dedicated policy testing capabilities

  • panther_ai_detections_writer_skill: Get specific instructions before writing a Panther detection

Log sources, schemas, and metadata

  • panther_ai_log_sources_get_sample_data: Retrieve sample log events from a session for schema inference and testing

  • panther_ai_log_sources_list: List onboarded log sources, with health status

  • panther_ai_log_types_get_schema: Get column details for specific log types

  • panther_ai_log_types_list: List available log types

  • panther_ai_log_types_test_schema: Validate a schema against sample data, returning match/unmatch statistics and error messages

  • panther_ai_log_types_writer_skill: Get instructions about schema structure, field types, and best practices before creating schemas

  • panther_ai_log_types_guidance_skill: Get instructions for analyzing events based on log type

  • panther_ai_utilities_classification_error_fixer_skill: Get instructions for diagnosing and fixing log classification errors

Query (Saved Search) management

  • panther_ai_datalake_list_saved_queries: List queries (Saved Searches)

  • panther_ai_datalake_get_query_results: Retrieve query results

  • panther_ai_datalake_write_saved_query: Create a Saved Search for SQL reuse

Enrichment and context

  • panther_ai_enrichments_lookup: Look up entity information (IPs, users, etc.)

  • panther_ai_users_list: List Panther users

  • panther_ai_users_get: Get details about a user

  • panther_ai_roles_list: List Panther roles and their permissions

  • panther_ai_roles_get: Get details about a specific role, including permissions and log type access

  • panther_ai_utilities_calculate_risk_score: Calculate a normalized risk score from benign and risky security indicators
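As a generic illustration of the idea behind the risk score tool above, one simple way to normalize counts of risky and benign indicators into a 0–100 score is a ratio-based formula. This is an assumption for illustration only; it is not the formula used by panther_ai_utilities_calculate_risk_score:

```python
def normalized_risk_score(risky, benign):
    """Normalize counts of risky vs. benign indicators into a 0-100 score.

    Generic example only; this is NOT the formula used by
    panther_ai_utilities_calculate_risk_score.
    """
    total = risky + benign
    if total == 0:
        return 0.0  # no indicators observed, treat as no measurable risk
    return round(100.0 * risky / total, 1)
```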

Utilities

  • panther_ai_utilities_fetch_web: Fetch content from web pages and process user-uploaded file attachments. For web URLs, access is restricted to approved domains configured in Panther AI settings, with user approval required for non-approved domains depending on settings. File attachments are stored securely in S3 for the duration of the AI conversation. Supports text pages, images (PNG, JPEG, GIF, WebP), and PDF documents.

  • panther_ai_utilities_panther_docs_skill: Get instructions for navigating Panther documentation at docs.panther.com

AI responses and citations

  • panther_ai_memory_get_response: Access AI response history

  • panther_ai_memory_search_responses: Search the AI response history database for relevant historical context

  • panther_ai_citations_list: List citations accumulated during the current conversation
