Panther AI Best Practices
Overview
The security industry is shifting from human-scale to AI-scale, and Panther AI is designed to close the loop. Rather than bolting AI onto a rigid architecture as an afterthought, Panther AI is embedded across triage, investigation, and response so that the right action happens at every step, automatically.
Panther AI can dramatically reduce the time it takes to investigate and resolve alerts, but its effectiveness depends on how it's configured and used. This guide covers the best practices for prompting, environment setup, Auto-Triage configuration, and runbook writing.
Best practices
Write better prompts with the STAR framework
Panther AI responds to the context you give it. Structured prompts consistently produce better results than open-ended ones—especially for complex investigations that require querying multiple log sources or returning a specific output format.
The Specific, Targeted, Actionable, Relevant (STAR) framework is a practical method for structuring any prompt:
S (Specific): Exactly what you want to find or do
T (Targeted): The log sources, entities, or systems the AI should focus on
A (Actionable): The desired output (e.g., "summarize," "count by sourceIP," "visualize as a bar chart")
R (Relevant): Contextual details like time ranges and threat modeling notes (e.g., "in the last 24 hours")
Example prompts by role
Security analyst — alert triage
I'm investigating an impossible travel alert for [email protected]. Determine if the user's session was hijacked by searching for all ConsoleLogin activity for this user across AWS.CloudTrail and Okta.SystemLog over the past 24 hours. Provide a chronological timeline and a BLUF summary.
Threat hunter — proactive search
We received new threat intelligence regarding an adversary's infrastructure. Hunt for any network connections to their known domain by searching for connections to malicious-domain.com across AWS.VPCFlow logs. Group the results by sourceIP and visualize the top 10 targeted internal assets using a stacked bar chart.
Detection engineer — AWS Console brute force
We need to expand our coverage for brute force attacks. Create a new Panther detection for AWS Console brute force attempts. The rule should trigger on more than 5 failed ConsoleLogin events from the same source IP in 10 minutes using AWS.CloudTrail logs, excluding our corporate CIDR 10.0.0.0/8. Output the detection in Python and set the severity to HIGH.
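A prompt like the one above should yield a rule along these lines. This is a minimal sketch, assuming Panther's standard Python rule format (`rule`, `title`, `dedup` functions over a CloudTrail event dict); in practice the 5-failures-in-10-minutes condition is expressed in the detection's metadata (e.g., a threshold and deduplication window), not inside the rule function itself:

```python
from ipaddress import ip_address, ip_network

# Corporate range excluded per the prompt.
CORPORATE_CIDR = ip_network("10.0.0.0/8")


def rule(event):
    # Match failed AWS Console logins originating outside the corporate network.
    # The 5-in-10-minutes threshold lives in the detection's metadata, not here.
    if event.get("eventName") != "ConsoleLogin":
        return False
    if event.get("responseElements", {}).get("ConsoleLogin") != "Failure":
        return False
    source_ip = event.get("sourceIPAddress", "")
    try:
        return ip_address(source_ip) not in CORPORATE_CIDR
    except ValueError:
        # Non-IP sources (e.g., AWS service principals) still warrant review.
        return True


def title(event):
    return f"AWS Console brute force suspected from {event.get('sourceIPAddress')}"


def dedup(event):
    # Group failures by source IP so the threshold counts per attacker.
    return event.get("sourceIPAddress", "unknown")
```

Keeping the CIDR exclusion inside the rule means allowlisted corporate traffic never counts toward the threshold at all.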
See more prompt examples in Panther AI Workflow Examples.
Configure your Organizational Profile
Panther AI becomes significantly more useful when it understands your environment. The Organizational Profile (found under Settings > Panther AI) acts as a persistent context layer embedded into every AI interaction—so you don't have to re-explain your infrastructure in every prompt.
Use your Organizational Profile to:
Define your environment — Map your network topology so Panther AI can reason about what's internal vs. external.
For example: "Internal subnets are 10.0.0.0/8. AWS account 123456789012 is our production environment."
Embed your SOPs — Encode incident response policies directly.
For example: "If lateral movement is detected, your final recommendation must be to page the IR team."
Classify asset criticality — Ensure high-value environments are treated with appropriate urgency regardless of detection severity.
For example: "Any alert involving the prod-payments account should be treated as HIGH priority regardless of detection severity."
Define known-good service accounts — Reduce false positive noise from expected automation.
For example: "The IAM role arn:aws:iam::123456789012:role/DataPipelineRole performs scheduled S3 syncs nightly and is expected."
The more specific your Organizational Profile, the less follow-up context your analysts need to include in individual prompts.
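Putting the four categories above together, a minimal Organizational Profile might read as follows (all values are illustrative, not defaults):

```
Internal subnets are 10.0.0.0/8. AWS account 123456789012 is our production environment.
If lateral movement is detected, the final recommendation must be to page the IR team.
Any alert involving the prod-payments account is HIGH priority regardless of detection severity.
The IAM role arn:aws:iam::123456789012:role/DataPipelineRole performs scheduled nightly S3 syncs and is expected.
```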
Use auto-run AI alert triage strategically
Panther AI can automatically triage new alerts as they're generated—reducing investigation time significantly. However, generative AI is a powerful, compute-heavy resource. Running auto-run triage on every single alert is an anti-pattern that can quickly consume your AI runs. The goal is to run AI only when its deep analytical capabilities are actually required.
Step 1: Tune your detections first
Before configuring auto-run AI alert triage, reduce the volume of low-fidelity signals reaching the AI:
Add allowlists in Python — Explicitly exclude known-good entities like vulnerability scanners, CI/CD service accounts, and routine automation (e.g., [email protected]).
Adjust deduplication settings — Use Panther's event grouping to prevent alert storms from a single noisy source flooding the AI queue.
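An allowlist guard like the one described can be a simple early return at the top of a rule. This is a sketch with illustrative account names (the trigger condition shown is just a stand-in for your real detection logic):

```python
# Known-good identities that should never reach the AI triage queue.
ALLOWLISTED_ACTORS = {
    "vuln-scanner@example.com",  # scheduled vulnerability scans
    "ci-deploy@example.com",     # CI/CD service account
}


def rule(event):
    actor = event.get("userIdentity", {}).get("userName", "")
    if actor in ALLOWLISTED_ACTORS:
        # Expected automation: suppress before it ever becomes an alert.
        return False
    # Stand-in for the detection's real suspicious condition.
    return event.get("eventName") == "DeleteBucket"
```

Suppressing in the rule, rather than in a downstream filter, keeps low-fidelity signals from ever consuming an AI run.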
Step 2: Gate auto-run triage with detection tags
Once your signal quality is high, configure auto-run triage to run only on alerts that genuinely benefit from deep analysis. In Settings > Panther AI, specify the detection tags that should trigger auto-run.
Panther AI will only auto-triage alerts fired by detections that carry at least one of the configured tags.
Recommended tags to get started:
ai-triage:auto — An explicit opt-in tag for any detection you want routed to the AI queue
threat:uba or identity-compromise — User Behavior Analytics and identity alerts are a strong fit for AI: the agent excels at rapidly querying 30-day baselines and cross-referencing activity across Okta and AWS logs
context-heavy — Complex detections that typically require pivoting through 3+ log sources before reaching a conclusion
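Applied in a detection's metadata, the tags might look like this sketch (assuming Panther's standard YAML detection format; the rule ID is illustrative):

```yaml
AnalysisType: rule
RuleID: Okta.Session.Anomaly    # illustrative
Enabled: true
Severity: High
Tags:
  - ai-triage:auto    # explicit opt-in to auto-run triage
  - threat:uba        # behavioral signal suited to AI baselining
  - context-heavy     # requires pivoting across multiple log sources
```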
Learn how to configure Auto-Triage in Alerts & Destinations.
Write AI-native runbooks
When Panther AI triages an alert, it reads the associated detection runbook and executes its steps autonomously. Runbooks have historically been written for human analysts and are often full of vague guidance that doesn't give the AI enough to work with. For Panther AI to perform a meaningful investigation, runbooks need to be explicit, sequential, and machine-readable.
Runbook instruction guidance
| Include | Avoid |
| --- | --- |
| Specific log types to query | Vague instructions like "review the alert events" |
| Named alert fields (e.g., {user_arn}, {sourceIP}) | Generic guidance like "check for similar alerts" |
| Defined time windows (e.g., "6 hours before and after the alert") | Ambiguous language like "investigate further" |
| Known-good values and baseline patterns | Instructions that don't translate to a concrete query |
| True/false positive decision criteria | Steps that assume human judgment without defining the criteria |
Examples
❌ Vague runbook (written for humans)
Search for related user activity. Check if the user did anything weird. Document your findings.
Panther AI cannot execute this. There's no log table to query, no time window, and no definition of "weird."
✅ AI-native runbook
1. Query AWS.CloudTrail for all API calls by {user_arn} in the 6 hours before and after this alert.
2. Check whether the accessed S3 object keys match known sensitive data patterns.
3. If activity falls outside the user's 30-day baseline, verify whether {sourceIP} matches corporate VPN ranges.
4. If {sourceIP} is not in the corporate VPN range and {user_arn} has not performed this action in the past 30 days, treat as a true positive and escalate to the IR team.
This works because it specifies the log source, references actual alert fields, defines the time window, gives the AI clear criteria for classification, and closes the loop on true/false positive determination.
Runbook template
Use this structure when writing runbooks from scratch:
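One possible shape for such a template — an illustrative sketch following the AI-native pattern above, not an official Panther artifact:

```markdown
## Runbook: <detection name>

1. **Scope the activity** — Query <log type> for all events by {actor_field}
   in the N hours before and after the alert.
2. **Establish a baseline** — Compare against the actor's 30-day history for
   the same event types.
3. **Check known-good context** — List allowlisted IP ranges, service
   accounts, and expected automation relevant to this detection.
4. **Decide** — True positive if <explicit criteria>; false positive if
   <explicit criteria>.
5. **Act** — On true positive, <escalation step, e.g., page the IR team>.
```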
Panther AI datalake query limitations
Panther AI is powerful, but it operates under hard architectural constraints. Understanding these limitations helps you write better runbooks, calibrate expectations, and design investigations that work with the system rather than against it.
Panther AI does not query "all" data when asked to investigate a broad time range or data source. Every query is subject to hard limits that affect the completeness of results.
Query results are limited to 100 KB
Each query result set is capped at approximately 100 KB. In high-volume environments, this may represent a fraction of actual activity. This is a safety limit to prevent overwhelming the LLM context window.
Use aggregate queries (e.g., GROUP BY, COUNT) to summarize millions of events within the size cap. Narrow scope to a specific IP or user before fetching raw events. Select only the columns you need — avoid SELECT *.
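An aggregate query of the kind described might look like this sketch (table and column names assume Panther's conventional `panther_logs.public.aws_cloudtrail` table and `p_event_time` partition column; verify against your schema):

```sql
-- Summarize instead of fetching raw events: one small result set
-- can represent millions of underlying rows.
SELECT
    sourceIPAddress,
    COUNT(*) AS failed_logins
FROM panther_logs.public.aws_cloudtrail
WHERE p_event_time >= DATEADD(hour, -24, CURRENT_TIMESTAMP())
  AND eventName = 'ConsoleLogin'
GROUP BY sourceIPAddress
ORDER BY failed_logins DESC
LIMIT 10;
```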
Queries require a time filter
Panther AI rejects queries that do not include a time filter. Panther's data lake is partitioned by event time, and unbounded queries would scan entire tables. Even with a time filter, overly broad ranges (e.g., "past year") on high-volume sources may return incomplete results.
Start with a 24-hour window and widen only if needed. For baselining, anchor on a specific entity (user, IP, resource) rather than pulling all events.
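An entity-anchored baseline query might look like this sketch (same schema assumptions as above; the role ARN is illustrative):

```sql
-- Baseline one entity over a bounded window rather than scanning the table.
SELECT
    eventName,
    COUNT(*) AS occurrences
FROM panther_logs.public.aws_cloudtrail
WHERE p_event_time >= DATEADD(day, -30, CURRENT_TIMESTAMP())
  AND userIdentity:arn = 'arn:aws:iam::123456789012:role/DataPipelineRole'
GROUP BY eventName
ORDER BY occurrences DESC;
```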
Recent events may not yet be queryable due to ingestion lag
Events from the past few seconds to minutes may be absent from query results during active incidents.
For in-progress incidents, use the Panther console's live alert view for the most recent events. Shift to Panther AI for historical correlation once initial events have been ingested.
Write effective prompts for high-volume investigations
When investigating high-volume data sources, the way you structure your prompt determines whether you get a useful summary or hit a query limit. Use aggregations and filters to get meaningful results within Panther AI's query constraints.
✅ "Summarize the top 10 CloudTrail API calls for user [email protected] over the past 7 days, grouped by event count."
❌ "Show me all CloudTrail logs for the past month."
Combining an entity filter (specific user), aggregation (GROUP BY, COUNT), a result limit, and a named log source prevents full-table scans and keeps results actionable.
✅ "Count failed ConsoleLogin events by source IP in the past 24 hours."
❌ "Show me all failed login attempts."
A specific event type, aggregation, and tight time window (24 hours) are the strongest pattern for threat hunting without unbounded scans.
✅ "Show me Cloudflare activity for IP 1.2.3.4 over the past 7 days."
❌ "Find any suspicious Cloudflare activity."
A named indicator (IP address) with a bounded time window and explicit log source is the ideal formulation for indicator-driven investigations.