You can manage your Panther detection content using the Panther Analysis Tool (PAT). PAT lets you upload, test, and delete assets, among other actions.
Each of the PAT commands accepts certain options. For example, you can use --filter with several of the commands to narrow the scope of the action.
PAT commands
See the full list of available PAT commands in the following codeblock. Beneath it, find additional information about several of the commands.
% panther_analysis_tool -h
usage: panther_analysis_tool [-h] [--version] [--debug] {release,test,publish,upload,delete,update-custom-schemas,test-lookup-table,validate,zip,check-connection,sdk,benchmark,enrich-test-data} ...
Panther Analysis Tool: A command line tool for managing Panther policies and rules.
positional arguments:
{release,test,publish,upload,delete,update-custom-schemas,test-lookup-table,validate,zip,check-connection,sdk,benchmark,enrich-test-data}
release Create release assets for repository containing panther detections. Generates a file called panther-analysis-all.zip and optionally generates panther-analysis-all.sig
test Validate analysis specifications and run policy and rule tests.
publish Publishes a new release, generates the release assets, and uploads them. Generates a file called panther-analysis-all.zip and optionally generates panther-analysis-all.sig
upload Upload specified policies and rules to a Panther deployment.
delete Delete policies, rules, or saved queries from a Panther deployment
update-custom-schemas
Update or create custom schemas on a Panther deployment.
test-lookup-table Validate a Lookup Table spec file.
validate Validate your bulk uploads against your panther instance
zip Create an archive of local policies and rules for uploading to Panther.
check-connection Check your Panther API connection
sdk Perform operations using the Panther SDK exclusively (pass sdk --help for more)
benchmark Performance test one rule against one of its log types. The rule must be the only item in the working directory or specified by --path, --ignore-files, and --filter. This feature is an extension of Data Replay and is subject to the same limitations.
enrich-test-data Enrich test data with additional enrichments from the Panther API.
optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
--debug
test: Running unit tests
Use PAT to load the defined specification files and evaluate unit tests locally:
panther_analysis_tool test --path <folder-name>
To filter rules or policies based on certain attributes:
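panther_analysis_tool test --path <folder-name> --filter AnalysisType=policy Severity=High
(For details on filter syntax, see --filter: Filtering PAT commands below; the attribute values here are placeholders.)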
benchmark: Performance testing a rule
You can use benchmark to test the performance of an existing or draft rule against one hour of data, for one log type. It is particularly useful for iterating on a rule that is timing out. This is a long-running command intended to be run manually, as needed, not in a regular CI/CD workflow.
The API token used with this command must be granted the "Read Panther Metrics" (also known as SummaryRead) and "Manage Rules" (also known as RuleModify) permissions. Because benchmark is an extension of Data Replay, it is subject to the same limitations.
You must provide a single rule to benchmark, either by having just one rule in the working directory (./ or --path), or through the use of --ignore-files or --filter.
If you do not specify an hour of data, the system selects the historical hour with the highest volume of data. To run against a specific hour, use --hour. (Most common time formats are supported, e.g., 2023-07-31T09:00:00-7:00; minutes, seconds, and smaller units are truncated.) For example:
panther_analysis_tool benchmark --hour <datetime>
For a rule with multiple log types, one must be specified using --log-type. For example:
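panther_analysis_tool benchmark --log-type <log-type>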
The output of benchmark is written both to stdout and to the directory indicated by the --out option.
enrich-test-data: Enriching test data with Enrichment content
Use enrich-test-data to enrich the test content of your Rules and Scheduled Rules with data from connected Enrichment providers and custom Lookup Tables. This allows you to build more sophisticated test cases for detections that rely on enrichment content.
enrich-test-data is simple to use, but may introduce substantial changes to your analysis YAML files. The command will modify files based on the following criteria:
If the Rule or Scheduled Rule does not have test cases, the YAML file will not be modified.
If the log type does not support enrichment, the YAML file will not be modified.
If the log type supports enrichment and there are test cases:
Test cases represented as inline JSON content will be reformatted into YAML, as shown in the sketch after this list.
The YAML file will be formatted according to common YAML conventions, using two spaces for indentation.
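For illustration, here is a minimal sketch of what an enriched, reformatted test case might look like. The log fields, Lookup Table name, and enrichment values are invented for this example; the exact structure under p_enrichment depends on your enrichment sources.
Tests:
  - Name: Example enriched test    # hypothetical test case
    ExpectedResult: true
    Log:
      srcAddr: 192.0.2.1           # invented log field
      p_enrichment:                # added by enrich-test-data
        my_lookup_table:           # hypothetical Lookup Table name
          srcAddr:
            is_known_bad: true     # hypothetical enrichment value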
Similar to other commands, enrich-test-data works from the current directory, recursively. If you run the command at the root directory of your panther-analysis copy, it will attempt to enrich all Rules and Scheduled Rules. To enrich content in a single directory, navigate to that directory before running the command.
You can run enrich-test-data in PAT versions 0.26 and beyond using the following command:
panther_analysis_tool enrich-test-data
The output of the command will be written to stdout, including a list of any Rules or Scheduled Rules that were enriched.
validate: Ensuring detection content is ready to be uploaded
The validate command verifies your detection content is ready to be uploaded to your Panther instance by running the same checks that happen during the upload process. Because some of these checks require configuration information in your Panther instance, validate makes an API call.
To validate your detections against your Panther instance using PAT:
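panther_analysis_tool validate --path <path-to-your-detections> --api-token <your-api-token> --api-host https://api.<your-panther-instance-name>.runpanther.net/public/graphql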
You may omit the --api-token and --api-host options if you are supplying those configuration values another way, such as through environment variables or a configuration file.
zip: Creating a package to upload to the Panther Console
To create a package for uploading manually to the Panther Console, run the following command:
$ panther_analysis_tool zip --path tests/fixtures/valid_policies/ --out tmp
[INFO]: Testing analysis packs in tests/fixtures/valid_policies/
AWS.IAM.MFAEnabled
[PASS] Root MFA not enabled fails compliance
[PASS] User MFA not enabled fails compliance
[INFO]: Zipping analysis packs in tests/fixtures/valid_policies/ to tmp
[INFO]: <current working directory>/tmp/panther-analysis-2020-03-23T12-48-18.zip
Uploading content in the Panther Console
1. In the left-hand navigation of the Panther Console, click Detections.
2. Click the Upload button in the upper-right corner.
3. Drag and drop your .zip file onto the page, or click Select file.
upload: Uploading packages to Panther directly
Starting with PAT version 0.22.0, if you have authenticated with an API token, the upload command automatically performs an asynchronous bulk upload to prevent timeout issues.
If you did not use an API token to authenticate, you can use the --batch option. The --batch option is only available in versions of PAT after 0.19.0.
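For example:
panther_analysis_tool upload --path <path-to-your-detections> --batch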
The upload command uploads your detection content to your Panther instance.
Run panther_analysis_tool test to ensure your unit tests are passing.
Run the following command:
panther_analysis_tool upload --path <path-to-your-detections> --api-token <your-api-token> --api-host https://api.<your-panther-instance-name>.runpanther.net/public/graphql
You may omit the --api-token and --api-host options if you are supplying those configuration values another way, such as through environment variables or a configuration file.
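For instance, here is a sketch using environment variables. This assumes the PANTHER_API_TOKEN and PANTHER_API_HOST variable names read by recent PAT versions; verify the names against your version.
export PANTHER_API_TOKEN=<your-api-token>    # assumed variable name
export PANTHER_API_HOST=https://api.<your-panther-instance-name>.runpanther.net/public/graphql    # assumed variable name
panther_analysis_tool upload --path <path-to-your-detections>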
When using upload, detections and Lookup Tables with existing IDs are overwritten. Locally deleted detections will not automatically be deleted in your Panther instance on upload; they must be removed with the delete command (or deleted manually in your Panther Console). When using the CLI workflow, it's recommended to set a detection's Enabled property to false instead of deleting it.
If you update the ID of an entity (i.e., the value of RuleId or PolicyId) and run upload without also running delete to remove the old entity, both versions will exist in your Panther instance. To update an ID without creating a duplicate detection, use delete with the old ID.
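For example, if a rule's ID was renamed from MyRule.Old to MyRule.New (hypothetical IDs) and the renamed version has already been uploaded, remove the stale entity with:
panther_analysis_tool delete --analysis-id MyRule.Old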
delete: Deleting Rules, Policies, or Saved Queries
While panther_analysis_tool upload --path <directory> will upload everything from <directory>, it will not delete anything in your Panther instance if you simply remove a local file from <directory>. Instead, you can use the panther_analysis_tool delete command to explicitly delete detections from your Panther instance.
To delete a specific detection, you can run the following command:
panther_analysis_tool delete --analysis-id MyRuleId
This will interactively ask you for a confirmation before it deletes the detection. If you would like to delete without confirming, you can use the following command:
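panther_analysis_tool delete --analysis-id MyRuleId --no-confirm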
You can delete up to 1000 detections at once with PAT.
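To delete several detections in one command, pass multiple IDs, space separated (the IDs here are placeholders):
panther_analysis_tool delete --analysis-id FirstRuleId SecondRuleId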
update-custom-schemas: Creating or updating custom schemas
Use update-custom-schemas to create or update custom schemas.
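For example:
panther_analysis_tool update-custom-schemas --path <path-to-your-schemas>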
After using this command to create a schema, wait at least 15 minutes before using upload to upload detections that reference the new schema.
Permissions required per command
Below is a mapping of permissions required for each command.
PAT command options (sub commands)
See the options for each of the PAT commands in the codeblock below.
$ panther_analysis_tool release -h
usage: panther_analysis_tool release [-h] [--api-token API_TOKEN] [--api-host API_HOST] [--aws-profile AWS_PROFILE] [--filter KEY=VALUE [KEY=VALUE ...]]
[--ignore-files IGNORE_FILES [IGNORE_FILES ...]] [--kms-key KMS_KEY] [--minimum-tests MINIMUM_TESTS] [--out OUT] [--path PATH]
[--skip-tests] [--skip-disabled-tests] [--available-destination AVAILABLE_DESTINATION] [--sort-test-results]
[--ignore-table-names]
optional arguments:
-h, --help show this help message and exit
--api-token API_TOKEN
The Panther API token to use. See: https://docs.panther.com/api-beta
--api-host API_HOST The Panther API host to use. See: https://docs.panther.com/api-beta
--aws-profile AWS_PROFILE
The AWS profile to use when updating the AWS Panther deployment.
--filter KEY=VALUE [KEY=VALUE ...]
--ignore-files IGNORE_FILES [IGNORE_FILES ...]
Relative path to files in this project to be ignored by panther-analysis tool, space separated. Example ./foo.yaml ./bar/baz.yaml
--kms-key KMS_KEY The key id to use to sign the release asset.
--minimum-tests MINIMUM_TESTS
The minimum number of tests in order for a detection to be considered passing. If a number greater than 1 is specified, at least one True and
one False test is required.
--out OUT The path to store output files.
--path PATH The relative path to Panther policies and rules.
--skip-tests Skip testing before uploading
--skip-disabled-tests Skip testing disabled detections before uploading
--available-destination AVAILABLE_DESTINATION
A destination name that may be returned by the destinations function. Repeat the argument to define more than one name.
--sort-test-results Sort test results by whether the test passed or failed (passing tests first), then by rule ID
--ignore-table-names Allows skipping of table names from schema validation. Useful when querying non-Panther or non-Snowflake tables
--valid-table-names VALID_TABLE_NAMES [VALID_TABLE_NAMES ...]
Fully qualified table names that should be considered valid during schema validation (in addition to standard Panther/Snowflake tables), space
separated. Accepts '*' as wildcard character matching 0 or more characters. Example foo.bar.baz bar.baz.* foo.*bar.baz baz.* *.foo.*
$ panther_analysis_tool test -h
usage: panther_analysis_tool test [-h] [--filter KEY=VALUE [KEY=VALUE ...]] [--minimum-tests MINIMUM_TESTS] [--path PATH] [--ignore-extra-keys IGNORE_EXTRA_KEYS]
[--ignore-files IGNORE_FILES [IGNORE_FILES ...]] [--skip-disabled-tests] [--available-destination AVAILABLE_DESTINATION]
[--sort-test-results] [--ignore-table-names]
optional arguments:
-h, --help show this help message and exit
--filter KEY=VALUE [KEY=VALUE ...]
--minimum-tests MINIMUM_TESTS
The minimum number of tests in order for a detection to be considered passing. If a number greater than 1 is specified, at least one True and
one False test is required.
--path PATH The relative path to Panther policies and rules.
--ignore-extra-keys IGNORE_EXTRA_KEYS
Meant for advanced users; allows skipping of extra keys from schema validation.
--ignore-files IGNORE_FILES [IGNORE_FILES ...]
Relative path to files in this project to be ignored by panther-analysis tool, space separated. Example ./foo.yaml ./bar/baz.yaml
--skip-disabled-tests
--available-destination AVAILABLE_DESTINATION
A destination name that may be returned by the destinations function. Repeat the argument to define more than one name.
--sort-test-results Sort test results by whether the test passed or failed (passing tests first), then by rule ID
--ignore-table-names Allows skipping of table names from schema validation. Useful when querying non-Panther or non-Snowflake tables
--valid-table-names VALID_TABLE_NAMES [VALID_TABLE_NAMES ...]
Fully qualified table names that should be considered valid during schema validation (in addition to standard Panther/Snowflake tables), space
separated. Accepts '*' as wildcard character matching 0 or more characters. Example foo.bar.baz bar.baz.* foo.*bar.baz baz.* *.foo.*
$ panther_analysis_tool upload -h
usage: panther_analysis_tool upload [-h] [--max-retries MAX_RETRIES] [--api-token API_TOKEN]
[--api-host API_HOST] [--aws-profile AWS_PROFILE] [--auto-disable-base]
[--filter KEY=VALUE [KEY=VALUE ...]] [--minimum-tests MINIMUM_TESTS] [--out OUT] [--path PATH] [--skip-tests]
[--skip-disabled-tests] [--ignore-extra-keys IGNORE_EXTRA_KEYS] [--ignore-files IGNORE_FILES [IGNORE_FILES ...]]
[--available-destination AVAILABLE_DESTINATION] [--sort-test-results] [--batch] [--no-async]
[--ignore-table-names] [--valid-table-names VALID_TABLE_NAMES [VALID_TABLE_NAMES ...]]
optional arguments:
-h, --help show this help message and exit
--max-retries MAX_RETRIES
Retry to upload on a failure for a maximum number of times
--api-token API_TOKEN
The Panther API token to use. See: https://docs.panther.com/api-beta
--api-host API_HOST The Panther API host to use. See: https://docs.panther.com/api-beta
--aws-profile AWS_PROFILE
The AWS profile to use when updating the AWS Panther deployment.
--auto-disable-base If uploading derived detections, set the corresponding
base detection's Enabled status to false prior to
upload (default: False)
--filter KEY=VALUE [KEY=VALUE ...]
--minimum-tests MINIMUM_TESTS
The minimum number of tests in order for a detection to be considered passing. If a number greater than 1 is specified, at
least one True and one False test is required.
--out OUT The path to store output files.
--path PATH The relative path to Panther policies and rules.
--skip-tests
--skip-disabled-tests
--ignore-extra-keys IGNORE_EXTRA_KEYS
Meant for advanced users; allows skipping of extra keys from schema validation.
--ignore-files IGNORE_FILES [IGNORE_FILES ...]
Relative path to files in this project to be ignored by panther-analysis tool, space separated. Example ./foo.yaml
./bar/baz.yaml
--available-destination AVAILABLE_DESTINATION
A destination name that may be returned by the destinations function. Repeat the argument to define more than one name.
--sort-test-results Sort test results by whether the test passed or failed (passing tests first), then by rule ID
--batch When set your upload will be broken down into multiple zip files
--no-async When set your upload will be synchronous
--ignore-table-names Allows skipping of table name validation from schema validation. Useful when querying non-Panther or non-Snowflake tables
--valid-table-names VALID_TABLE_NAMES [VALID_TABLE_NAMES ...]
Fully qualified table names that should be considered valid during schema validation (in addition to standard
Panther/Snowflake tables), space separated. Accepts '*' as wildcard character matching 0 or more characters. Example
foo.bar.baz bar.baz.* foo.*bar.baz baz.* *.foo.*
$ panther_analysis_tool delete -h
usage: panther_analysis_tool delete [-h] [--no-confirm] [--athena-datalake] [--api-token API_TOKEN] [--api-host API_HOST] [--aws-profile AWS_PROFILE] [--analysis-id ANALYSIS_ID [ANALYSIS_ID ...]] [--query-id QUERY_ID [QUERY_ID ...]]
optional arguments:
-h, --help show this help message and exit
--no-confirm Skip manual confirmation of deletion
--athena-datalake Instance DataLake is backed by Athena
--api-token API_TOKEN
The Panther API token to use. See: https://docs.panther.com/api-beta
--api-host API_HOST The Panther API host to use. See: https://docs.panther.com/api-beta
--aws-profile AWS_PROFILE
The AWS profile to use when updating the AWS Panther deployment.
--analysis-id ANALYSIS_ID [ANALYSIS_ID ...]
Space separated list of Detection IDs
--query-id QUERY_ID [QUERY_ID ...]
Space separated list of Saved Queries
$ panther_analysis_tool update-custom-schemas -h
usage: panther_analysis_tool update-custom-schemas [-h] [--api-token API_TOKEN] [--api-host API_HOST] [--aws-profile AWS_PROFILE] [--path PATH]
optional arguments:
-h, --help show this help message and exit
--api-token API_TOKEN
The Panther API token to use. See: https://docs.panther.com/api-beta
--api-host API_HOST The Panther API host to use. See: https://docs.panther.com/api-beta
--aws-profile AWS_PROFILE
The AWS profile to use when updating the AWS Panther deployment.
--path PATH The relative or absolute path to Panther custom schemas.
$ panther_analysis_tool test-lookup-table -h
usage: panther_analysis_tool test-lookup-table [-h]
[--aws-profile AWS_PROFILE]
--path PATH
optional arguments:
-h, --help show this help message and exit
--aws-profile AWS_PROFILE
The AWS profile to use when updating the AWS Panther
deployment.
--path PATH The relative path to a lookup table input file.
$ panther_analysis_tool validate -h
usage: panther_analysis_tool validate [-h] [--api-token API_TOKEN] [--api-host API_HOST] [--filter KEY=VALUE [KEY=VALUE ...]] [--ignore-files IGNORE_FILES [IGNORE_FILES ...]] [--path PATH]
options:
-h, --help show this help message and exit
--api-token API_TOKEN
The Panther API token to use. See: https://docs.panther.com/api-beta
--api-host API_HOST The Panther API host to use. See: https://docs.panther.com/api-beta
--filter KEY=VALUE [KEY=VALUE ...]
--ignore-files IGNORE_FILES [IGNORE_FILES ...]
Relative path to files in this project to be ignored by panther-analysis tool, space separated. Example ./foo.yaml ./bar/baz.yaml
--path PATH The relative path to Panther policies and rules.
$ panther_analysis_tool zip -h
usage: panther_analysis_tool zip [-h] [--filter KEY=VALUE [KEY=VALUE ...]] [--ignore-files IGNORE_FILES [IGNORE_FILES ...]] [--minimum-tests MINIMUM_TESTS]
[--out OUT] [--path PATH] [--skip-tests] [--skip-disabled-tests] [--available-destination AVAILABLE_DESTINATION]
[--sort-test-results] [--ignore-table-names]
optional arguments:
-h, --help show this help message and exit
--filter KEY=VALUE [KEY=VALUE ...]
--ignore-files IGNORE_FILES [IGNORE_FILES ...]
Relative path to files in this project to be ignored by panther-analysis tool, space separated. Example ./foo.yaml ./bar/baz.yaml
--minimum-tests MINIMUM_TESTS
The minimum number of tests in order for a detection to be considered passing. If a number greater than 1 is specified, at least one True and
one False test is required.
--out OUT The path to store output files.
--path PATH The relative path to Panther policies and rules.
--skip-tests
--skip-disabled-tests
--available-destination AVAILABLE_DESTINATION
A destination name that may be returned by the destinations function. Repeat the argument to define more than one name.
--sort-test-results Sort test results by whether the test passed or failed (passing tests first), then by rule ID
--ignore-table-names Allows skipping of table names from schema validation. Useful when querying non-Panther or non-Snowflake tables
--valid-table-names VALID_TABLE_NAMES [VALID_TABLE_NAMES ...]
Fully qualified table names that should be considered valid during schema validation (in addition to standard Panther/Snowflake tables), space
separated. Accepts '*' as wildcard character matching 0 or more characters. Example foo.bar.baz bar.baz.* foo.*bar.baz baz.* *.foo.*
$ panther_analysis_tool check-connection -h
usage: panther_analysis_tool check-connection [-h] [--api-token API_TOKEN] [--api-host API_HOST]
optional arguments:
-h, --help show this help message and exit
--api-token API_TOKEN
The Panther API token to use. See: https://docs.panther.com/api-beta
--api-host API_HOST The Panther API host to use. See: https://docs.panther.com/api-beta
$ panther_analysis_tool benchmark -h
usage: panther_analysis_tool benchmark [-h] [--api-token API_TOKEN] [--api-host API_HOST] [--filter KEY=VALUE [KEY=VALUE ...]]
[--ignore-files IGNORE_FILES [IGNORE_FILES ...]] [--path PATH] [--out OUT] [--iterations ITERATIONS] [--hour HOUR]
[--log-type LOG_TYPE]
optional arguments:
-h, --help show this help message and exit
--api-token API_TOKEN
The Panther API token to use. See: https://docs.panther.com/api-beta
--api-host API_HOST The Panther API host to use. See: https://docs.panther.com/api-beta
--filter KEY=VALUE [KEY=VALUE ...]
--ignore-files IGNORE_FILES [IGNORE_FILES ...]
Relative path to files in this project to be ignored by panther-analysis tool, space separated. Example ./foo.yaml ./bar/baz.yaml
--path PATH The relative path to Panther policies and rules.
--out OUT The path to store output files.
--iterations ITERATIONS
The number of iterations of the performance test to perform. Each iteration runs against the selected hour of data. Fewer iterations will
be run if the time limit is reached. Min: 1
--hour HOUR The hour of historical data to perform the benchmark against, in any parseable format, e.g. '2023-07-31T09:00:00.000-7:00'. Minutes,
Seconds, etc will be truncated if specified. If hour is unspecified, the performance test will run against the hour in the last two weeks
with the largest log volume.
--log-type LOG_TYPE Required if the rule supports multiple log types, optional otherwise. Must be one of the rule's log types.
--filter: Filtering PAT commands
The test, zip, upload, and release commands all support filtering. Filtering works by passing the --filter argument with a list of filters in the format KEY=VALUE1,VALUE2. The keys can be any valid field in a policy or rule. When a filter is used, only analysis content that matches every specified filter is considered.
For example, the following command will test only items with the AnalysisType of policy AND the severity of High:
panther_analysis_tool test --path tests/fixtures/valid_policies --filter AnalysisType=policy Severity=High
[INFO]: Testing analysis packs in tests/fixtures/valid_policies
AWS.IAM.BetaTest
[PASS] Root MFA not enabled fails compliance
[PASS] User MFA not enabled fails compliance
The following command will test items with the AnalysisType policy OR rule, AND the severity High:
panther_analysis_tool test --path tests/fixtures/valid_policies --filter AnalysisType=policy,rule Severity=High
[INFO]: Testing analysis packs in tests/fixtures/valid_policies
AWS.IAM.BetaTest
[PASS] Root MFA not enabled fails compliance
[PASS] User MFA not enabled fails compliance
AWS.CloudTrail.MFAEnabled
[PASS] Root MFA not enabled fails compliance
[PASS] User MFA not enabled fails compliance
When writing policies or rules that refer to global analysis types, be sure to include globals in your filter. You can include an empty string as a value in a filter; the filter is then applied only to items where the field exists, so items that lack the field entirely are not excluded.
The following command returns an error, because the policy in question imports a global, but the global has no severity, so the filter excludes it:
panther_analysis_tool test --path tests/fixtures/valid_policies --filter AnalysisType=policy,global Severity=Critical
[INFO]: Testing analysis packs in tests/fixtures/valid_policies
AWS.IAM.MFAEnabled
[ERROR] Error loading module, skipping
Invalid: tests/fixtures/valid_policies/example_policy.yml
No module named 'panther'
[ERROR]: [('tests/fixtures/valid_policies/example_policy.yml', ModuleNotFoundError("No module named 'panther'"))]
For this command to work as expected, you need to allow for the Severity field to be absent:
panther_analysis_tool test --path tests/fixtures/valid_policies --filter AnalysisType=policy,global Severity=Critical,""
[INFO]: Testing analysis packs in tests/fixtures/valid_policies
AWS.IAM.MFAEnabled
[PASS] Root MFA not enabled fails compliance
[PASS] User MFA not enabled fails compliance
Filters work for the zip, upload, and release commands in the same way they work for the test command.
--minimum-tests: Requiring a certain number of unit tests
You can set a minimum number of unit tests with the --minimum-tests flag. Detections that lack the minimum number of tests are considered failing. If --minimum-tests is set to 2 or greater, PAT also enforces that at least one test returns True and at least one returns False.
In the example below, even though the rules passed all their tests, they are still considered failing because they do not have the required test coverage:
% panther_analysis_tool test --path okta_rules --minimum-tests 2
[INFO]: Testing analysis packs in okta_rules
Okta.AdminRoleAssigned
[PASS] Admin Access Assigned
Okta.BruteForceLogins
[PASS] Failed login
Okta.GeographicallyImprobableAccess
[PASS] Non Login
[PASS] Failed Login
--------------------------
Panther CLI Test Summary
Path: okta_rules
Passed: 0
Failed: 3
Invalid: 0
--------------------------
Failed Tests Summary
Okta.AdminRoleAssigned
['Insufficient test coverage, 2 tests required but only 1 found.', 'Insufficient test coverage: expected at least one passing and one failing test.']
Okta.BruteForceLogins
['Insufficient test coverage, 2 tests required but only 1 found.', 'Insufficient test coverage: expected at least one passing and one failing test.']
Okta.GeographicallyImprobableAccess
['Insufficient test coverage: expected at least one passing and one failing test.']