Panther AI (Beta)
Panther AI encompasses a set of generative AI features aimed at accelerating your detection and response workflows. Panther AI uses Claude AI models by Anthropic through Amazon Bedrock. Panther AI does not use your data for AI training—learn more about data security below.
Use Panther AI in the Panther Console when:
Analyzing alerts:
AI alert triage: Gather information about and analyze an alert.
Alert list AI summary: Summarize your alerts list.
Summarizing Search results:
Search results AI summary: Summarize a set of result events.
When using Panther AI triage and summarization, or running your own prompts, you may want to view previous responses, or rename, pin, or delete certain interactions. Learn how to perform these actions in Managing Panther AI Response History.
There are also AI GraphQL API operations available to Cloud Connected customers and SaaS customers with "pass-through billing"—view them in the GraphQL API schema.
Separate from Panther AI, Panther also offers an MCP server.
To use Panther AI features, your Panther instance's Enable Panther AI setting must be set to ON, and your user role must have the Run Panther AI permission. If you have a Cloud Connected Panther instance, you must also enable certain AI models in AWS.
To enable Panther AI:
In the upper-right corner of your Panther Console, click the gear icon (Settings) > General.
On the Panther AI tab, click the Enable Panther AI toggle to ON.
The Enable Panther AI setting is set to OFF by default, and can only be updated by a user with the Edit Settings & SAML Preferences permission.
Once Enable Panther AI is set to ON, the Run Panther AI permission will be:
Granted automatically to the default Admin role.
Available to assign to additional roles. Learn how to update a role's permissions here.
If you have a Cloud Connected Panther instance, follow the instructions on the AWS Add or remove access to Amazon Bedrock foundation models documentation to request access to the following foundation models in the region your Panther instance is deployed in:
Claude 3.5 Sonnet v1 (anthropic.claude-3-5-sonnet-20240620-v1:0)
Claude 3.5 Sonnet v2 (anthropic.claude-3-5-sonnet-20241022-v2:0)
Claude 3.7 Sonnet v1 (anthropic.claude-3-7-sonnet-20250219-v1:0)
Claude 3.5 Haiku v1 (anthropic.claude-3-5-haiku-20241022-v1:0)
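To confirm all four models are enabled, you could compare the required model IDs against those granted in your account. A minimal sketch follows; in practice the granted set would come from the Bedrock ListFoundationModels API (e.g., via boto3) rather than being hard-coded.

```python
# Foundation models Panther AI requires (from the list above).
REQUIRED_MODELS = {
    "anthropic.claude-3-5-sonnet-20240620-v1:0",
    "anthropic.claude-3-5-sonnet-20241022-v2:0",
    "anthropic.claude-3-7-sonnet-20250219-v1:0",
    "anthropic.claude-3-5-haiku-20241022-v1:0",
}

def missing_models(granted: set[str]) -> set[str]:
    """Return required model IDs not yet granted in this region."""
    return REQUIRED_MODELS - granted

# Example: only the two Claude 3.5 Sonnet models granted so far.
granted = {
    "anthropic.claude-3-5-sonnet-20240620-v1:0",
    "anthropic.claude-3-5-sonnet-20241022-v2:0",
}
print(sorted(missing_models(granted)))
```

Run the check in the same AWS region your Panther instance is deployed in, since Bedrock model access is granted per region.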
Panther AI does not use your data for AI training. Your prompts and Panther AI responses are stored in your dedicated, single-tenant AWS account (like your logs).
If you are a Cloud Connected customer, you may enable Amazon Bedrock Guardrails for extra protection. You can enable Panther-managed detections for Amazon Bedrock to monitor its activity.
Panther AI assumes the role and associated permissions of the user running it—i.e., the user logged into the Console where AI operations are being run, or the user executing AI-related API calls.
This means Panther AI will not perform read or write operations the current user could not perform themselves. This includes log type access restrictions, if set for that user role.
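The pass-through model described above can be sketched as a gate that checks the calling user's permissions and log type access before any AI operation runs. The function and log type names below are illustrative, not Panther internals.

```python
# Illustrative sketch of the pass-through permission model:
# Panther AI acts with the permissions of the user running it.
def run_ai_operation(
    user_permissions: set[str],
    allowed_log_types: set[str],
    requested_log_type: str,
) -> str:
    """Run an AI operation only if the calling user could do it themselves."""
    if "Run Panther AI" not in user_permissions:
        raise PermissionError("user lacks the Run Panther AI permission")
    if requested_log_type not in allowed_log_types:
        raise PermissionError(f"user role cannot read log type {requested_log_type!r}")
    return f"AI triage over {requested_log_type}"
```

The key point the sketch captures: there is no separate service identity with broader access, so log type restrictions on the user's role also bound what Panther AI can read.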
Use AI prompt settings to tailor AI-generated content in Panther to your preferences. AI settings are universally applied to all AI entry points in Panther, but are specific to each Panther user.
To set your AI prompt settings:
Navigate to one of the AI prompt bars in the Panther Console.
On the right side of the prompt bar, click the Edit prompt settings icon.
Set the response length setting.
Click Save Settings.
The response length setting determines the size of the AI output and the amount of time Panther AI spends "thinking." The shorter the response length, the less closely Panther AI considers the details of the input, and the faster the model runs.
The response length AI setting has three possible values:
Short: Runs quickly and produces a brief summary.
Medium: Elaborates more than Short, but is usually shorter than five paragraphs.
Long: Allows Panther AI to conduct an intricate analysis, and can produce very long and detailed outputs.
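Conceptually, the three values above trade thoroughness for speed. The mapping below is a hypothetical sketch of that trade-off; the token budgets are invented for illustration and are not documented Panther values.

```python
from enum import Enum

class ResponseLength(Enum):
    SHORT = "short"
    MEDIUM = "medium"
    LONG = "long"

# Hypothetical output budgets -- Panther does not publish the real values.
MAX_OUTPUT_TOKENS = {
    ResponseLength.SHORT: 512,
    ResponseLength.MEDIUM: 2048,
    ResponseLength.LONG: 8192,
}

def output_budget(setting: ResponseLength) -> int:
    """Longer settings allow more detailed (and slower) responses."""
    return MAX_OUTPUT_TOKENS[setting]
```

Because the setting is per user, each analyst can pick the point on this trade-off that suits their workflow without affecting teammates.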
When Panther AI aids in triaging or summarizing your data, it will return links to relevant data so you can verify its findings. Citations may include alerts, detections, and/or data queries.