Panther Analysis Tool Commands

Use PAT to manage your Panther content

Overview

You can manage your Panther detection content using the Panther Analysis Tool (PAT). PAT lets you upload, test, and delete assets, among other actions.

Each of the PAT commands accepts certain options. For example, you can use --filter with several of the commands to narrow the scope of the action.

PAT commands

See the full list of available PAT commands in the following codeblock. Beneath it, find additional information about several of the commands.

PAT commands can be run using either panther_analysis_tool or its shorter alias, pat; the two forms are interchangeable, as shown in the example below.

To understand which Panther permissions you need to execute each PAT command, see Permissions required per command below.
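As a minimal illustration of the alias, assuming a local detections directory named rules, the following two invocations are equivalent:

panther_analysis_tool test --path rules
pat test --path rules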

% panther_analysis_tool -h
usage: panther_analysis_tool [-h] [--version] [--debug] {release,test,publish,upload,delete,update-custom-schemas,test-lookup-table,validate,zip,check-connection,sdk,benchmark,enrich-test-data} ...

Panther Analysis Tool: A command line tool for managing Panther policies and rules.

positional arguments:
  {release,test,publish,upload,delete,update-custom-schemas,test-lookup-table,validate,zip,check-connection,sdk,benchmark,enrich-test-data}
    release             Create release assets for repository containing panther detections. Generates a file called panther-analysis-all.zip and optionally generates panther-analysis-all.sig
    test                Validate analysis specifications and run policy and rule tests.
    publish             Publishes a new release, generates the release assets, and uploads them. Generates a file called panther-analysis-all.zip and optionally generates panther-analysis-all.sig
    upload              Upload specified policies and rules to a Panther deployment.
    delete              Delete policies, rules, or saved queries from a Panther deployment
    update-custom-schemas
                        Update or create custom schemas on a Panther deployment.
    test-lookup-table   Validate a Lookup Table spec file.
    validate            Validate your bulk uploads against your panther instance
    zip                 Create an archive of local policies and rules for uploading to Panther.
    check-connection    Check your Panther API connection
    sdk                 Perform operations using the Panther SDK exclusively (pass sdk --help for more)
    benchmark           Performance test one rule against one of its log types. The rule must be the only item in the working directory or specified by --path, --ignore-files, and --filter. This feature is an extension of Data Replay and is subject to the same limitations.
    enrich-test-data    Enrich test data with additional enrichments from the Panther API.

optional arguments:
  -h, --help            show this help message and exit
  --version             show program's version number and exit
  --debug

test: Running unit tests

Use PAT to load the defined specification files and evaluate unit tests locally:

panther_analysis_tool test --path <folder-name>

To filter rules or policies based on certain attributes:

panther_analysis_tool test --path <folder-name> --filter RuleID=Category.Behavior.MoreInfo

Running pat test for Simple Detections and correlation rules requires an API token. See Authenticating with an API token for more information.
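You can also exclude specific files from testing with --ignore-files. For example (the file paths below are illustrative placeholders):

panther_analysis_tool test --path rules --ignore-files ./rules/deprecated_rule.yml ./rules/wip_rule.yml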

benchmark: Evaluating rule performance

You can use benchmark to test the performance of an existing or draft rule against one hour of data, for one log type. It can be particularly useful when iterating on a rule that is timing out. This is a long-running command intended to be used manually, as needed, not as part of a regular CI/CD workflow.

The API token used with this command must be granted the "Read Panther Metrics" (also known as SummaryRead) and "Manage Rules" (also known as RuleModify) permissions. Because benchmark is an extension of Data Replay, it is subject to the same limitations.

You must provide a single rule to benchmark, either by having just one rule in the working directory (./ or --path), or through the use of --ignore-files or --filter.

If you do not specify an hour of data, the system selects the hour within the last two weeks that has the highest volume of data. To run against a specific hour, use --hour. Most common time formats are supported, e.g., 2023-07-31T09:00:00-7:00; minutes, seconds, and smaller units are truncated. For example:

panther_analysis_tool benchmark --hour <datetime>

For a rule with multiple log types, one must be specified using --log-type. For example:

panther_analysis_tool benchmark --log-type <log-type>

The output of benchmark is written both to stdout and to the directory indicated by the --out option.
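Putting these options together, a full invocation might look like the following sketch. The filter value, timestamp, and log type are illustrative placeholders; remember that the command must resolve to a single rule:

panther_analysis_tool benchmark --filter RuleID=AWS.CloudTrail.MyRule --hour 2023-07-31T09:00:00-7:00 --log-type AWS.CloudTrail --out benchmark_results --api-token <your-api-token> --api-host <your-api-host>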

enrich-test-data: Enriching test data with Enrichment content

Use enrich-test-data to enrich the test content of your Rules and Scheduled Rules with data from connected Enrichment providers and custom Lookup Tables. This allows you to build more sophisticated test cases for detections that rely on enrichment content.

enrich-test-data is simple to use, but it may introduce substantial changes to your analysis YAML files. The command will modify files based on the following criteria:

  • If the Rule or Scheduled Rule does not have test cases, the YAML file will not be modified.

  • If the log type does not support enrichment, the YAML file will not be modified.

  • If the log type supports enrichment and there are test cases:

    • Test cases represented as inline JSON content will be reformatted into YAML.

    • The YAML file will be formatted according to common YAML conventions, using two spaces for indentation.

Similar to other commands, enrich-test-data works from the current directory, recursively. If you run the command at the root directory of your panther-analysis copy, it will attempt to enrich all Rules and Scheduled Rules. To enrich content in a single directory, navigate to that directory before running the command.

You can run enrich-test-data in PAT versions 0.26 and beyond using the following command:

panther_analysis_tool enrich-test-data

The output of the command will be written to stdout, including a list of any Rules or Scheduled Rules that were enriched.
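For example, to enrich only the Rules in a single directory (the directory name is illustrative, and this assumes your API token and host are already configured, e.g., via environment variables or a configuration file):

cd rules/okta_rules
panther_analysis_tool enrich-test-data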

validate: Ensuring detection content is ready to be uploaded

The validate command verifies your detection content is ready to be uploaded to your Panther instance by running the same checks that happen during the upload process. Because some of these checks require configuration information in your Panther instance, validate makes an API call.

To validate your detections against your Panther instance using PAT:

  1. If you have not already, generate an API token in your Panther Console.

  2. Run the following command:

    panther_analysis_tool validate --path <path-to-your-detections> --api-token <your-api-token> --api-host https://api.<your-panther-instance-name>.runpanther.net/public/graphql

You may exclude the --api-token and --api-host options if you are setting configuration values another way, i.e., by using environment variables or a configuration file.
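For example, to validate only policy content, you can scope the check with --filter (the path is an illustrative placeholder):

panther_analysis_tool validate --path rules/ --filter AnalysisType=policy --api-token <your-api-token> --api-host <your-api-host>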

zip: Creating a package to upload to the Panther Console

To create a package for uploading manually to the Panther Console, run the following command:

$ panther_analysis_tool zip --path tests/fixtures/valid_policies/ --out tmp
[INFO]: Testing analysis packs in tests/fixtures/valid_policies/

AWS.IAM.MFAEnabled
	[PASS] Root MFA not enabled fails compliance
	[PASS] User MFA not enabled fails compliance

[INFO]: Zipping analysis packs in tests/fixtures/valid_policies/ to tmp
[INFO]: <current working directory>/tmp/panther-analysis-2020-03-23T12-48-18.zip

Uploading content in the Panther Console

  1. On the left-hand side of the Panther Console, click Detections.

  2. Click the Upload button in the upper right corner.

  3. Drag and drop your .zip file onto the page, or click Select file.

upload: Uploading packages to Panther directly

The upload command uploads your detection content to your Panther instance.

Starting with PAT version 0.22.0, if you have authenticated with an API token, running upload automatically performs an asynchronous bulk upload to prevent timeout issues. If you did not authenticate with an API token, you can use the --batch option (available in PAT versions after 0.19.0).

To use upload:

  1. If you have not already, generate an API token in your Panther Console.

  2. Run panther_analysis_tool test to ensure your unit tests are passing.

  3. Run the following command:

    panther_analysis_tool upload --path <path-to-your-detections> --api-token <your-api-token> --api-host https://api.<your-panther-instance-name>.runpanther.net/public/graphql

You may exclude the --api-token and --api-host options if you are setting configuration values another way, i.e., by using environment variables or a configuration file.

When using upload, detections and Lookup Tables with existing IDs are overwritten. Locally deleted detections will not automatically be deleted in your Panther instance on upload; they must be removed with the delete command (or manually deleted in your Panther Console). When using the CLI workflow, it is recommended to set a detection's Enabled property to false instead of deleting it.

If you update the ID of an entity (i.e., the value of RuleId or PolicyId) and run upload without also running delete to remove the old entity, both versions will exist in your Panther instance. To change an ID without creating a duplicate detection, use delete with the old ID.
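For example, a CI job might chain test and upload, with retries enabled. This is a sketch; the path, minimum test count, and retry count are illustrative:

panther_analysis_tool test --path rules/ --minimum-tests 2
panther_analysis_tool upload --path rules/ --max-retries 3 --api-token <your-api-token> --api-host <your-api-host>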

delete: Deleting Rules, Policies, or Saved Queries

While panther_analysis_tool upload --path <directory> uploads everything from <directory>, it does not delete anything in your Panther instance when you simply remove a local file from <directory>. To remove content from your Panther instance, use the panther_analysis_tool delete command. To delete a specific detection, run the following command:

panther_analysis_tool delete --analysis-id MyRuleId

This command interactively asks for confirmation before deleting the detection. To delete without confirming, add the --no-confirm flag:

panther_analysis_tool delete --analysis-id MyRuleId --no-confirm

You can delete up to 1000 detections at once with PAT.
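For example, to delete several detections and a Saved Query in one non-interactive call (the IDs are illustrative, and this assumes your API token and host are already configured):

panther_analysis_tool delete --analysis-id My.Rule.One My.Rule.Two --query-id My.Saved.Query --no-confirm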

update-custom-schemas: Creating or updating custom schemas

Use update-custom-schemas to create or update custom schemas.

After using this command to create a schema, wait at least 15 minutes before using upload to upload detections that reference the new schema.
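For example, assuming your custom schema YAML files live in a local schemas/ directory:

panther_analysis_tool update-custom-schemas --path ./schemas --api-token <your-api-token> --api-host <your-api-host>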

Permissions required per command

Below is a mapping of permissions required for each command.

  • check-connection: Read Panther Settings Info

  • test (when testing detections that use Inline Filters): Bulk Upload OR Bulk Upload Validate OR View Rules

  • benchmark: Read Panther Metrics

  • validate: Bulk Upload Validate OR Bulk Upload

  • upload: Bulk Upload

  • delete: Manage Policies, Manage Rules, Manage Saved Searches

  • update-custom-schemas: View Log Sources, Manage Log Sources

PAT command options (sub commands)

See the options for each of the PAT commands in the codeblock below.

$ panther_analysis_tool release -h
usage: panther_analysis_tool release [-h] [--api-token API_TOKEN] [--api-host API_HOST] [--aws-profile AWS_PROFILE] [--filter KEY=VALUE [KEY=VALUE ...]]
                                     [--ignore-files IGNORE_FILES [IGNORE_FILES ...]] [--kms-key KMS_KEY] [--minimum-tests MINIMUM_TESTS] [--out OUT] [--path PATH]
                                     [--skip-tests] [--skip-disabled-tests] [--available-destination AVAILABLE_DESTINATION] [--sort-test-results]
                                     [--ignore-table-names]

optional arguments:
  -h, --help            show this help message and exit
  --api-token API_TOKEN
                        The Panther API token to use. See: https://docs.panther.com/api-beta
  --api-host API_HOST   The Panther API host to use. See: https://docs.panther.com/api-beta
  --aws-profile AWS_PROFILE
                        The AWS profile to use when updating the AWS Panther deployment.
  --filter KEY=VALUE [KEY=VALUE ...]
  --ignore-files IGNORE_FILES [IGNORE_FILES ...]
                        Relative path to files in this project to be ignored by panther-analysis tool, space separated. Example ./foo.yaml ./bar/baz.yaml
  --kms-key KMS_KEY     The key id to use to sign the release asset.
  --minimum-tests MINIMUM_TESTS
                        The minimum number of tests in order for a detection to be considered passing. If a number greater than 1 is specified, at least one True and
                        one False test is required.
  --out OUT             The path to store output files.
  --path PATH           The relative path to Panther policies and rules.
  --skip-tests          Skip testing before uploading
  --skip-disabled-tests Skip testing disabled detections before uploading
  --available-destination AVAILABLE_DESTINATION
                        A destination name that may be returned by the destinations function. Repeat the argument to define more than one name.
  --sort-test-results   Sort test results by whether the test passed or failed (passing tests first), then by rule ID
  --ignore-table-names  Allows skipping of table names from schema validation. Useful when querying non-Panther or non-Snowflake tables
  --valid-table-names   VALID_TABLE_NAMES [VALID_TABLE_NAMES ...]
                        Fully qualified table names that should be considered valid during schema validation (in addition to standard Panther/Snowflake tables), space
                        separated. Accepts '*' as wildcard character matching 0 or more characters. Example foo.bar.baz bar.baz.* foo.*bar.baz baz.* *.foo.*
$ panther_analysis_tool test -h   
usage: panther_analysis_tool test [-h] [--filter KEY=VALUE [KEY=VALUE ...]] [--minimum-tests MINIMUM_TESTS] [--path PATH] [--ignore-extra-keys IGNORE_EXTRA_KEYS]
                                  [--ignore-files IGNORE_FILES [IGNORE_FILES ...]] [--skip-disabled-tests] [--available-destination AVAILABLE_DESTINATION]
                                  [--sort-test-results] [--ignore-table-names]

optional arguments:
  -h, --help            show this help message and exit
  --filter KEY=VALUE [KEY=VALUE ...]
  --minimum-tests MINIMUM_TESTS
                        The minimum number of tests in order for a detection to be considered passing. If a number greater than 1 is specified, at least one True and
                        one False test is required.
  --path PATH           The relative path to Panther policies and rules.
  --ignore-extra-keys IGNORE_EXTRA_KEYS
                        Meant for advanced users; allows skipping of extra keys from schema validation.
  --ignore-files IGNORE_FILES [IGNORE_FILES ...]
                        Relative path to files in this project to be ignored by panther-analysis tool, space separated. Example ./foo.yaml ./bar/baz.yaml
  --skip-disabled-tests
  --available-destination AVAILABLE_DESTINATION
                        A destination name that may be returned by the destinations function. Repeat the argument to define more than one name.
  --sort-test-results   Sort test results by whether the test passed or failed (passing tests first), then by rule ID
  --ignore-table-names  Allows skipping of table names from schema validation. Useful when querying non-Panther or non-Snowflake tables
  --valid-table-names   VALID_TABLE_NAMES [VALID_TABLE_NAMES ...]
                        Fully qualified table names that should be considered valid during schema validation (in addition to standard Panther/Snowflake tables), space
                        separated. Accepts '*' as wildcard character matching 0 or more characters. Example foo.bar.baz bar.baz.* foo.*bar.baz baz.* *.foo.*
$ panther_analysis_tool upload -h
usage: panther_analysis_tool upload [-h] [--max-retries MAX_RETRIES] [--api-token API_TOKEN]
                                    [--api-host API_HOST] [--aws-profile AWS_PROFILE] [--auto-disable-base]
                                    [--filter KEY=VALUE [KEY=VALUE ...]] [--minimum-tests MINIMUM_TESTS] [--out OUT] [--path PATH] [--skip-tests]
                                    [--skip-disabled-tests] [--ignore-extra-keys IGNORE_EXTRA_KEYS] [--ignore-files IGNORE_FILES [IGNORE_FILES ...]]
                                    [--available-destination AVAILABLE_DESTINATION] [--sort-test-results] [--batch] [--no-async]
                                    [--ignore-table-names] [--valid-table-names VALID_TABLE_NAMES [VALID_TABLE_NAMES ...]]

optional arguments:
  -h, --help            show this help message and exit
  --max-retries MAX_RETRIES
                        Retry to upload on a failure for a maximum number of times
  --api-token API_TOKEN
                        The Panther API token to use. See: https://docs.panther.com/api-beta
  --api-host API_HOST   The Panther API host to use. See: https://docs.panther.com/api-beta
  --aws-profile AWS_PROFILE
                        The AWS profile to use when updating the AWS Panther deployment.
  --auto-disable-base   If uploading derived detections, set the corresponding
                        base detection's Enabled status to false prior to
                        upload (default: False)
  --filter KEY=VALUE [KEY=VALUE ...]
  --minimum-tests MINIMUM_TESTS
                        The minimum number of tests in order for a detection to be considered passing. If a number greater than 1 is specified, at
                        least one True and one False test is required.
  --out OUT             The path to store output files.
  --path PATH           The relative path to Panther policies and rules.
  --skip-tests
  --skip-disabled-tests
  --ignore-extra-keys IGNORE_EXTRA_KEYS
                        Meant for advanced users; allows skipping of extra keys from schema validation.
  --ignore-files IGNORE_FILES [IGNORE_FILES ...]
                        Relative path to files in this project to be ignored by panther-analysis tool, space separated. Example ./foo.yaml
                        ./bar/baz.yaml
  --available-destination AVAILABLE_DESTINATION
                        A destination name that may be returned by the destinations function. Repeat the argument to define more than one name.
  --sort-test-results   Sort test results by whether the test passed or failed (passing tests first), then by rule ID
  --batch               When set your upload will be broken down into multiple zip files
  --no-async            When set your upload will be synchronous
  --ignore-table-names  Allows skipping of table name validation from schema validation. Useful when querying non-Panther or non-Snowflake tables
  --valid-table-names VALID_TABLE_NAMES [VALID_TABLE_NAMES ...]
                        Fully qualified table names that should be considered valid during schema validation (in addition to standard
                        Panther/Snowflake tables), space separated. Accepts '*' as wildcard character matching 0 or more characters. Example
                        foo.bar.baz bar.baz.* foo.*bar.baz baz.* *.foo.*
$ panther_analysis_tool delete -h
usage: panther_analysis_tool delete [-h] [--no-confirm] [--athena-datalake] [--api-token API_TOKEN] [--api-host API_HOST] [--aws-profile AWS_PROFILE] [--analysis-id ANALYSIS_ID [ANALYSIS_ID ...]] [--query-id QUERY_ID [QUERY_ID ...]]

optional arguments:
  -h, --help            show this help message and exit
  --no-confirm          Skip manual confirmation of deletion
  --athena-datalake     Instance DataLake is backed by Athena
  --api-token API_TOKEN
                        The Panther API token to use. See: https://docs.panther.com/api-beta
  --api-host API_HOST   The Panther API host to use. See: https://docs.panther.com/api-beta
  --aws-profile AWS_PROFILE
                        The AWS profile to use when updating the AWS Panther deployment.
  --analysis-id ANALYSIS_ID [ANALYSIS_ID ...]
                        Space separated list of Detection IDs
  --query-id QUERY_ID [QUERY_ID ...]
                        Space separated list of Saved Queries
$ panther_analysis_tool update-custom-schemas -h
usage: panther_analysis_tool update-custom-schemas [-h] [--api-token API_TOKEN] [--api-host API_HOST] [--aws-profile AWS_PROFILE] [--path PATH]

optional arguments:
  -h, --help            show this help message and exit
  --api-token API_TOKEN
                        The Panther API token to use. See: https://docs.panther.com/api-beta
  --api-host API_HOST   The Panther API host to use. See: https://docs.panther.com/api-beta
  --aws-profile AWS_PROFILE
                        The AWS profile to use when updating the AWS Panther deployment.
  --path PATH           The relative or absolute path to Panther custom schemas.
$ panther_analysis_tool test-lookup-table -h
usage: panther_analysis_tool test-lookup-table [-h]
                                               [--aws-profile AWS_PROFILE]
                                               --path PATH

optional arguments:
  -h, --help            show this help message and exit
  --aws-profile AWS_PROFILE
                        The AWS profile to use when updating the AWS Panther
                        deployment.
  --path PATH           The relative path to a lookup table input file.
$ panther_analysis_tool validate -h
usage: panther_analysis_tool validate [-h] [--api-token API_TOKEN] [--api-host API_HOST] [--filter KEY=VALUE [KEY=VALUE ...]] [--ignore-files IGNORE_FILES [IGNORE_FILES ...]] [--path PATH]

options:
  -h, --help            show this help message and exit
  --api-token API_TOKEN
                        The Panther API token to use. See: https://docs.panther.com/api-beta
  --api-host API_HOST   The Panther API host to use. See: https://docs.panther.com/api-beta
  --filter KEY=VALUE [KEY=VALUE ...]
  --ignore-files IGNORE_FILES [IGNORE_FILES ...]
                        Relative path to files in this project to be ignored by panther-analysis tool, space separated. Example ./foo.yaml ./bar/baz.yaml
  --path PATH           The relative path to Panther policies and rules.
$ panther_analysis_tool zip -h
usage: panther_analysis_tool zip [-h] [--filter KEY=VALUE [KEY=VALUE ...]] [--ignore-files IGNORE_FILES [IGNORE_FILES ...]] [--minimum-tests MINIMUM_TESTS]
                                 [--out OUT] [--path PATH] [--skip-tests] [--skip-disabled-tests] [--available-destination AVAILABLE_DESTINATION]
                                 [--sort-test-results] [--ignore-table-names]

optional arguments:
  -h, --help            show this help message and exit
  --filter KEY=VALUE [KEY=VALUE ...]
  --ignore-files IGNORE_FILES [IGNORE_FILES ...]
                        Relative path to files in this project to be ignored by panther-analysis tool, space separated. Example ./foo.yaml ./bar/baz.yaml
  --minimum-tests MINIMUM_TESTS
                        The minimum number of tests in order for a detection to be considered passing. If a number greater than 1 is specified, at least one True and
                        one False test is required.
  --out OUT             The path to store output files.
  --path PATH           The relative path to Panther policies and rules.
  --skip-tests
  --skip-disabled-tests
  --available-destination AVAILABLE_DESTINATION
                        A destination name that may be returned by the destinations function. Repeat the argument to define more than one name.
  --sort-test-results   Sort test results by whether the test passed or failed (passing tests first), then by rule ID
  --ignore-table-names  Allows skipping of table names from schema validation. Useful when querying non-Panther or non-Snowflake tables
  --valid-table-names   VALID_TABLE_NAMES [VALID_TABLE_NAMES ...]
                        Fully qualified table names that should be considered valid during schema validation (in addition to standard Panther/Snowflake tables), space
                        separated. Accepts '*' as wildcard character matching 0 or more characters. Example foo.bar.baz bar.baz.* foo.*bar.baz baz.* *.foo.*
$ panther_analysis_tool check-connection -h
usage: panther_analysis_tool check-connection [-h] [--api-token API_TOKEN] [--api-host API_HOST]

optional arguments:
  -h, --help            show this help message and exit
  --api-token API_TOKEN
                        The Panther API token to use. See: https://docs.panther.com/api-beta
  --api-host API_HOST   The Panther API host to use. See: https://docs.panther.com/api-beta
$ panther_analysis_tool benchmark -h
usage: panther_analysis_tool benchmark [-h] [--api-token API_TOKEN] [--api-host API_HOST] [--filter KEY=VALUE [KEY=VALUE ...]]
                                       [--ignore-files IGNORE_FILES [IGNORE_FILES ...]] [--path PATH] [--out OUT] [--iterations ITERATIONS] [--hour HOUR]
                                       [--log-type LOG_TYPE]

optional arguments:
  -h, --help            show this help message and exit
  --api-token API_TOKEN
                        The Panther API token to use. See: https://docs.panther.com/api-beta
  --api-host API_HOST   The Panther API host to use. See: https://docs.panther.com/api-beta
  --filter KEY=VALUE [KEY=VALUE ...]
  --ignore-files IGNORE_FILES [IGNORE_FILES ...]
                        Relative path to files in this project to be ignored by panther-analysis tool, space separated. Example ./foo.yaml ./bar/baz.yaml
  --path PATH           The relative path to Panther policies and rules.
  --out OUT             The path to store output files.
  --iterations ITERATIONS
                        The number of iterations of the performance test to perform. Each iteration runs against the selected hour of data. Fewer iterations will
                        be run if the time limit is reached. Min: 1
  --hour HOUR           The hour of historical data to perform the benchmark against, in any parseable format, e.g. '2023-07-31T09:00:00.000-7:00'. Minutes,
                        Seconds, etc will be truncated if specified. If hour is unspecified, the performance test will run against the hour in the last two weeks
                        with the largest log volume.
  --log-type LOG_TYPE   Required if the rule supports multiple log types, optional otherwise. Must be one of the rule's log types.

--filter: Filtering PAT commands

The test, zip, upload, and release commands all support filtering. Filtering works by passing the --filter argument with a list of filters specified in the format KEY=VALUE1,VALUE2. The keys can be any valid field in a policy or rule. When using a filter, only analysis content that matches every specified filter will be considered.

For example, the following command will test only items with an AnalysisType of policy AND a Severity of High:

panther_analysis_tool test --path tests/fixtures/valid_policies --filter AnalysisType=policy Severity=High
[INFO]: Testing analysis packs in tests/fixtures/valid_policies

AWS.IAM.BetaTest
	[PASS] Root MFA not enabled fails compliance
	[PASS] User MFA not enabled fails compliance

The following command will test items with an AnalysisType of policy OR rule, AND a Severity of High:

panther_analysis_tool test --path tests/fixtures/valid_policies --filter AnalysisType=policy,rule Severity=High
[INFO]: Testing analysis packs in tests/fixtures/valid_policies

AWS.IAM.BetaTest
	[PASS] Root MFA not enabled fails compliance
	[PASS] User MFA not enabled fails compliance

AWS.CloudTrail.MFAEnabled
	[PASS] Root MFA not enabled fails compliance
	[PASS] User MFA not enabled fails compliance

When writing policies or rules that refer to global analysis types, be sure to include globals in your filter. You can include an empty string as a value in a filter; this means the filter is only applied when the field exists, so items that do not define the field still match.

The following command will return an error, because the policy in question imports a global, but the global does not have a severity and is therefore excluded by the filter:

panther_analysis_tool test --path tests/fixtures/valid_policies --filter AnalysisType=policy,global Severity=Critical
[INFO]: Testing analysis packs in tests/fixtures/valid_policies

AWS.IAM.MFAEnabled
	[ERROR] Error loading module, skipping

Invalid: tests/fixtures/valid_policies/example_policy.yml
	No module named 'panther'

[ERROR]: [('tests/fixtures/valid_policies/example_policy.yml', ModuleNotFoundError("No module named 'panther'"))]

For this command to work as expected, you need to allow for the Severity field to be absent:

panther_analysis_tool test --path tests/fixtures/valid_policies --filter AnalysisType=policy,global Severity=Critical,""
[INFO]: Testing analysis packs in tests/fixtures/valid_policies

AWS.IAM.MFAEnabled
	[PASS] Root MFA not enabled fails compliance
	[PASS] User MFA not enabled fails compliance

Filters work for the zip, upload, and release commands in the same way they work for the test command.
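For example, to upload only Critical- and High-severity rules, you might run the following (the path is an illustrative placeholder):

panther_analysis_tool upload --path rules/ --filter AnalysisType=rule Severity=Critical,High --api-token <your-api-token> --api-host <your-api-host>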

--minimum-tests: Requiring a certain number of unit tests

You can set a minimum number of unit tests with the --minimum-tests flag. Detections that do not have the minimum number of tests are considered failing, and if --minimum-tests is set to 2 or greater, PAT also enforces that at least one test returns True and at least one returns False.

In the example below, even though the rules passed all their tests, they're still considered failing because they do not have the correct test coverage:

% panther_analysis_tool test --path okta_rules --minimum-tests 2
[INFO]: Testing analysis packs in okta_rules

Okta.AdminRoleAssigned
	[PASS] Admin Access Assigned

Okta.BruteForceLogins
	[PASS] Failed login

Okta.GeographicallyImprobableAccess
	[PASS] Non Login
	[PASS] Failed Login

--------------------------
Panther CLI Test Summary
	Path: okta_rules
	Passed: 0
	Failed: 3
	Invalid: 0

--------------------------
Failed Tests Summary
	Okta.AdminRoleAssigned
		['Insufficient test coverage, 2 tests required but only 1 found.', 'Insufficient test coverage: expected at least one passing and one failing test.']

	Okta.BruteForceLogins
		['Insufficient test coverage, 2 tests required but only 1 found.', 'Insufficient test coverage: expected at least one passing and one failing test.']

	Okta.GeographicallyImprobableAccess
		['Insufficient test coverage: expected at least one passing and one failing test.']
