Panther Analysis Tool Overview
Using Panther Analysis Tool to test and upload locally managed detections
This page describes how to install the CLI tool Panther Analysis Tool (PAT) and how to use it to test and upload locally managed detections.
The `panther_analysis_tool` is an open source utility for testing, packaging, and deploying Panther detections from source code. It's designed for developer-centric workflows such as managing your Panther analysis packs programmatically or within CI/CD pipelines. For information on managing detection content directly in the Panther Console with UI-based workflows, see Writing Detections.
We recommend using an API token to configure PAT. API token authentication has two benefits: PAT actions are captured in Panther Audit Logs, and the token does not expire.
Please note the following considerations:
- If your organization requires token rotation, you'll need to do so manually.
- In some cases, uploads with a large number of scheduled queries may time out. If you encounter this, try batching your import by detection type (policies, rules, scheduled rules, etc.), as in the sketch below.
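One way to batch by type is to drive `upload` with the `--filter` flag described later on this page. A rough sketch (the path and loop are illustrative, and `PANTHER_API_TOKEN`/`PANTHER_API_HOST` are assumed to be set):

# Upload one analysis type per request to keep each upload small
for analysis_type in policy rule scheduled_rule; do
  panther_analysis_tool upload --path detections/ --filter AnalysisType="$analysis_type"
done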
If you would like to discuss using an AWS IAM role setup for PAT instead, please contact your support team.
If you are not using an IAM role, PAT requires an API Token to authenticate against your Panther instance.
You will then pass this API token as an argument to the `panther_analysis_tool` command for operations such as uploading/deleting detections, custom schemas, saved queries, and more. See the "PAT commands and usage" section below for examples. Below is a mapping of the permissions required for each command.
Command | Permissions |
---|---|
check-connection | Read Panther Settings Info |
upload | Bulk Upload |
delete | Manage Policies, Manage Rules, Manage Saved Queries |
update-custom-schemas | View Log Sources, Manage Log Sources |
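For example, once you have a token with the Read Panther Settings Info permission, you can sanity-check it with `check-connection`. This sketch assumes `check-connection` accepts the same `--api-token`/`--api-host` flags as the other commands; the values shown are placeholders:

# Verify that the API token and host are valid before doing anything else
panther_analysis_tool check-connection \
  --api-token "<your-api-token>" \
  --api-host "https://api.<your-instance>.runpanther.net/public/graphql"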
To install PAT, run this command:
pip3 install panther_analysis_tool
If you'd prefer instead to run from source for development reasons, first set up your environment:
$ make install
$ pipenv run -- pip3 install -e .
If you would rather use the `panther_analysis_tool` outside of the virtual environment, install it directly:
$ make deps
$ pip3 install -e .
PAT can read values from the command line, environment variables, or a configuration file.
The precedence for flag value sources is as follows (highest to lowest):
1. The command line flag value provided by the user
2. Environment variables
3. The Panther configuration file
PAT will read options from a configuration file called `.panther_settings.yml` located in your working directory. An example configuration file, example_panther_config.yml, is included in the repo and contains example syntax for the supported options. All options can be passed in through environment variables by prepending the variable name with `PANTHER_`. For example, the `AWS_TOKEN` argument can be passed in through an environment variable named `PANTHER_AWS_TOKEN`.
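For illustration, here is a sketch of the three ways to supply the same option, using the API host as the example. The YAML key name assumes flags map to snake_case keys as in example_panther_config.yml, and a token would also be required in practice (omitted for brevity):

# 1. Command line flag (highest precedence)
panther_analysis_tool check-connection --api-host https://api.acme.runpanther.net/public/graphql

# 2. Environment variable (flag name prefixed with PANTHER_)
export PANTHER_API_HOST="https://api.acme.runpanther.net/public/graphql"
panther_analysis_tool check-connection

# 3. Configuration file in the working directory (lowest precedence)
cat > .panther_settings.yml <<'EOF'
api_host: https://api.acme.runpanther.net/public/graphql
EOF
panther_analysis_tool check-connection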
To upload detections via PAT:
1. Generate an API token from your Panther Console.
2. Run `panther_analysis_tool test` to ensure your unit tests are passing.
3. Run the following command to upload new detections to your Panther instance:
   panther_analysis_tool upload --path <path-to-your-rules> --api-token <your-api-token> --api-host https://api.<your-panther-instance-name>.runpanther.net/public/graphql

- You can also add `--batch` to your command to split the upload into multiple pieces, which will avoid some timeout issues.
- The environment variables `PANTHER_API_TOKEN` and `PANTHER_API_HOST` are required for these commands to execute successfully.
  - `PANTHER_API_TOKEN` is the value of the API token you obtained in step 1.
  - `PANTHER_API_HOST` is the API URL of your Panther instance. For example, if your Panther domain is https://acme.runpanther.net, then your GraphQL API URL is https://api.acme.runpanther.net/public/graphql.
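As a rough CI sketch combining steps 2 and 3 (the directory name and secret wiring are assumptions; your pipeline will differ):

#!/usr/bin/env bash
set -euo pipefail

# PANTHER_API_TOKEN and PANTHER_API_HOST are expected to be injected by the
# CI system's secret store, so no --api-token/--api-host flags are needed.
panther_analysis_tool test --path detections/
panther_analysis_tool upload --path detections/ --batch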
Analysis items with the same ID are overwritten. Additionally, locally deleted rules and policies will not automatically be deleted in the database; they must be removed manually. For CLI-driven workflows, we recommend setting the Enabled property to false instead of deleting policies or rules.
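For example, a quick way to disable a rule from the command line instead of deleting it (the file path is hypothetical, and this assumes the spec currently contains exactly `Enabled: true`):

# Flip the rule's Enabled flag in its YAML spec instead of deleting the files
# (creates a .bak backup; adjust the path and pattern to your repo)
sed -i.bak 's/^Enabled: true$/Enabled: false/' rules/my_example_rule.yml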
To create a package for uploading manually to the Panther Console, run the following command:
$ panther_analysis_tool zip --path tests/fixtures/valid_policies/ --out tmp
[INFO]: Testing analysis packs in tests/fixtures/valid_policies/
AWS.IAM.MFAEnabled
[PASS] Root MFA not enabled fails compliance
[PASS] User MFA not enabled fails compliance
[INFO]: Zipping analysis packs in tests/fixtures/valid_policies/ to tmp
[INFO]: <current working directory>/tmp/panther-analysis-2020-03-23T12-48-18.zip
While `panther_analysis_tool upload --path <directory>` will upload everything from `<directory>`, it will not delete anything in your Panther instance if you simply remove a local file from `<directory>`. Instead, you can use the `panther_analysis_tool delete` command to explicitly delete detections from your Panther instance.
To delete a specific detection, you can run the following command:
panther_analysis_tool delete --analysis-id MyRuleId
This will interactively ask you for a confirmation before it deletes the detection. If you would like to delete without confirming, you can use the following command:
panther_analysis_tool delete --analysis-id MyRuleId --no-confirm
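As shown in the delete help output below, `--analysis-id` and `--query-id` each accept a space-separated list of IDs. A sketch with placeholder IDs:

# Delete several detections at once without the confirmation prompt
panther_analysis_tool delete --analysis-id MyRuleId1 MyRuleId2 --no-confirm

# Delete a saved query by ID
panther_analysis_tool delete --query-id MyQueryId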
Use the Panther Analysis Tool to load the defined specification files and evaluate unit tests locally:
panther_analysis_tool test --path <folder-name>
To filter rules or policies based on certain attributes:
panther_analysis_tool test --path <folder-name> --filter RuleID=Category.Behavior.MoreInfo
$ panther_analysis_tool -h
usage: panther_analysis_tool [-h] [--version] [--debug] {release,test,publish,upload,delete,update-custom-schemas,test-lookup-table,zip,check-connection} ...
Panther Analysis Tool: A command line tool for managing Panther policies and rules.
positional arguments:
{release,test,publish,upload,delete,update-custom-schemas,test-lookup-table,zip,check-connection}
release Create release assets for repository containing panther detections. Generates a file called panther-analysis-all.zip and optionally generates panther-analysis-all.sig
test Validate analysis specifications and run policy and rule tests.
publish Publishes a new release, generates the release assets, and uploads them. Generates a file called panther-analysis-all.zip and optionally generates panther-analysis-all.sig
upload Upload specified policies and rules to a Panther deployment.
delete Delete policies, rules, or saved queries from a Panther deployment
update-custom-schemas
Update or create custom schemas on a Panther deployment.
test-lookup-table Validate a Lookup Table spec file.
zip Create an archive of local policies and rules for uploading to Panther.
check-connection Check your Panther API connection
optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
Detailed help output for each subcommand (release, test, upload, delete, test-lookup-table, zip, and update-custom-schemas) is shown below.
$ panther_analysis_tool release -h
usage: panther_analysis_tool release [-h] [--api-token API_TOKEN] [--api-host API_HOST] [--aws-profile AWS_PROFILE] [--filter KEY=VALUE [KEY=VALUE ...]] [--ignore-files IGNORE_FILES [IGNORE_FILES ...]] [--kms-key KMS_KEY]
[--minimum-tests MINIMUM_TESTS] [--out OUT] [--path PATH] [--skip-tests] [--skip-disabled-tests] [--available-destination AVAILABLE_DESTINATION]
optional arguments:
-h, --help show this help message and exit
--api-token API_TOKEN
The Panther API token to use. See: https://docs.panther.com/api-beta
--api-host API_HOST The Panther API host to use. See: https://docs.panther.com/api-beta
--aws-profile AWS_PROFILE
The AWS profile to use when updating the AWS Panther deployment.
--filter KEY=VALUE [KEY=VALUE ...]
--ignore-files IGNORE_FILES [IGNORE_FILES ...]
Relative path to files in this project to be ignored by panther-analysis tool, space separated. Example ./foo.yaml ./bar/baz.yaml
--kms-key KMS_KEY The key id to use to sign the release asset.
--minimum-tests MINIMUM_TESTS
The minimum number of tests in order for a detection to be considered passing. If a number greater than 1 is specified, at least one True and one False test is required.
--out OUT The path to store output files.
--path PATH The relative path to Panther policies and rules.
--skip-tests
--skip-disabled-tests
--available-destination AVAILABLE_DESTINATION
A destination name that may be returned by the destinations function. Repeat the argument to define more than one name.
$ panther_analysis_tool test -h
usage: panther_analysis_tool test [-h] [--filter KEY=VALUE [KEY=VALUE ...]]
[--minimum-tests MINIMUM_TESTS]
[--path PATH]
[--ignore-extra-keys IGNORE_EXTRA_KEYS]
[--ignore-files IGNORE_FILES [IGNORE_FILES ...]]
[--skip-disabled-tests]
[--available-destination AVAILABLE_DESTINATION]
optional arguments:
-h, --help show this help message and exit
--filter KEY=VALUE [KEY=VALUE ...]
--minimum-tests MINIMUM_TESTS
The minimum number of tests in order for a detection
to be considered passing. If a number greater than 1
is specified, at least one True and one False test is
required.
--path PATH The relative path to Panther policies and rules.
--ignore-extra-keys IGNORE_EXTRA_KEYS
Meant for advanced users; allows skipping of extra
keys from schema validation.
--ignore-files IGNORE_FILES [IGNORE_FILES ...]
Relative path to files in this project to be ignored
by panther-analysis tool, space separated. Example
./foo.yaml ./bar/baz.yaml
--skip-disabled-tests
--available-destination AVAILABLE_DESTINATION
A destination name that may be returned by the
destinations function. Repeat the argument to define
more than one name.
$ panther_analysis_tool upload -h
usage: panther_analysis_tool upload [-h] [--max-retries MAX_RETRIES] [--api-token API_TOKEN] [--api-host API_HOST] [--aws-profile AWS_PROFILE] [--filter KEY=VALUE [KEY=VALUE ...]] [--minimum-tests MINIMUM_TESTS] [--out OUT] [--path PATH]
[--skip-tests] [--skip-disabled-tests] [--ignore-extra-keys IGNORE_EXTRA_KEYS] [--ignore-files IGNORE_FILES [IGNORE_FILES ...]] [--available-destination AVAILABLE_DESTINATION]
optional arguments:
-h, --help show this help message and exit
--max-retries MAX_RETRIES
Retry to upload on a failure for a maximum number of times
--api-token API_TOKEN
The Panther API token to use. See: https://docs.panther.com/api-beta
--api-host API_HOST The Panther API host to use. See: https://docs.panther.com/api-beta
--aws-profile AWS_PROFILE
The AWS profile to use when updating the AWS Panther deployment.
--filter KEY=VALUE [KEY=VALUE ...]
--minimum-tests MINIMUM_TESTS
The minimum number of tests in order for a detection to be considered passing. If a number greater than 1 is specified, at least one True and one False test is required.
--out OUT The path to store output files.
--path PATH The relative path to Panther policies and rules.
--skip-tests
--skip-disabled-tests
--ignore-extra-keys IGNORE_EXTRA_KEYS
Meant for advanced users; allows skipping of extra keys from schema validation.
--ignore-files IGNORE_FILES [IGNORE_FILES ...]
Relative path to files in this project to be ignored by panther-analysis tool, space separated. Example ./foo.yaml ./bar/baz.yaml
--available-destination AVAILABLE_DESTINATION
A destination name that may be returned by the destinations function. Repeat the argument to define more than one name.
--batch
Breaks uploads into smaller pieces to avoid timeouts
$ panther_analysis_tool delete -h
usage: panther_analysis_tool delete [-h] [--no-confirm] [--athena-datalake] [--api-token API_TOKEN] [--api-host API_HOST] [--aws-profile AWS_PROFILE] [--analysis-id ANALYSIS_ID [ANALYSIS_ID ...]] [--query-id QUERY_ID [QUERY_ID ...]]
optional arguments:
-h, --help show this help message and exit
--no-confirm Skip manual confirmation of deletion
--athena-datalake Instance DataLake is backed by Athena
--api-token API_TOKEN
The Panther API token to use. See: https://docs.panther.com/api-beta
--api-host API_HOST The Panther API host to use. See: https://docs.panther.com/api-beta
--aws-profile AWS_PROFILE
The AWS profile to use when updating the AWS Panther deployment.
--analysis-id ANALYSIS_ID [ANALYSIS_ID ...]
Space separated list of Detection IDs
--query-id QUERY_ID [QUERY_ID ...]
Space separated list of Saved Queries
$ panther_analysis_tool test-lookup-table -h
usage: panther_analysis_tool test-lookup-table [-h]
[--aws-profile AWS_PROFILE]
--path PATH
optional arguments:
-h, --help show this help message and exit
--aws-profile AWS_PROFILE
The AWS profile to use when updating the AWS Panther
deployment.
--path PATH The relative path to a lookup table input file.
$ panther_analysis_tool zip -h
usage: panther_analysis_tool zip [-h] [--filter KEY=VALUE [KEY=VALUE ...]]
[--ignore-files IGNORE_FILES [IGNORE_FILES ...]]
[--minimum-tests MINIMUM_TESTS] [--out OUT]
[--path PATH] [--skip-tests]
[--skip-disabled-tests]
[--available-destination AVAILABLE_DESTINATION]
optional arguments:
-h, --help show this help message and exit
--filter KEY=VALUE [KEY=VALUE ...]
--ignore-files IGNORE_FILES [IGNORE_FILES ...]
Relative path to files in this project to be ignored
by panther-analysis tool, space separated. Example
./foo.yaml ./bar/baz.yaml
--minimum-tests MINIMUM_TESTS
The minimum number of tests in order for a detection
to be considered passing. If a number greater than 1
is specified, at least one True and one False test is
required.
--out OUT The path to store output files.
--path PATH The relative path to Panther policies and rules.
--skip-tests
--skip-disabled-tests
--available-destination AVAILABLE_DESTINATION
A destination name that may be returned by the
destinations function. Repeat the argument to define
more than one name.
$ panther_analysis_tool update-custom-schemas -h
usage: panther_analysis_tool update-custom-schemas [-h] [--api-token API_TOKEN] [--api-host API_HOST] [--aws-profile AWS_PROFILE] [--path PATH]
optional arguments:
-h, --help show this help message and exit
--api-token API_TOKEN
The Panther API token to use. See: https://docs.panther.com/api-beta
--api-host API_HOST The Panther API host to use. See: https://docs.panther.com/api-beta
--aws-profile AWS_PROFILE
The AWS profile to use when updating the AWS Panther deployment.
--path PATH The relative or absolute path to Panther custom schemas.
The `test`, `zip`, and `upload` commands all support filtering. Filtering works by passing the `--filter` argument with a list of filters specified in the format KEY=VALUE1,VALUE2. The keys can be any valid field in a policy or rule. When using a filter, only analysis items that match every specified filter are considered. For example, the following command will test only items with the AnalysisType of policy AND the severity of High:
panther_analysis_tool test --path tests/fixtures/valid_policies --filter AnalysisType=policy Severity=High
[INFO]: Testing analysis packs in tests/fixtures/valid_policies
AWS.IAM.BetaTest
[PASS] Root MFA not enabled fails compliance
[PASS] User MFA not enabled fails compliance
The following command will test items with the AnalysisType policy OR rule, AND the severity High:
panther_analysis_tool test --path tests/fixtures/valid_policies --filter AnalysisType=policy,rule Severity=High
[INFO]: Testing analysis packs in tests/fixtures/valid_policies
AWS.IAM.BetaTest
[PASS] Root MFA not enabled fails compliance
[PASS] User MFA not enabled fails compliance
AWS.CloudTrail.MFAEnabled
[PASS] Root MFA not enabled fails compliance
[PASS] User MFA not enabled fails compliance
When writing policies or rules that refer to `global` analysis types, be sure to include them in your filter. You can include an empty string as a value in a filter, which means the filter is only applied if the field exists (items lacking the field entirely will still match). The following command returns an error, because the policy in question imports a global, but the global does not have a severity and is therefore excluded by the filter:
panther_analysis_tool test --path tests/fixtures/valid_policies --filter AnalysisType=policy,global Severity=Critical
[INFO]: Testing analysis packs in tests/fixtures/valid_policies
AWS.IAM.MFAEnabled
[ERROR] Error loading module, skipping
Invalid: tests/fixtures/valid_policies/example_policy.yml
No module named 'panther'
[ERROR]: [('tests/fixtures/valid_policies/example_policy.yml', ModuleNotFoundError("No module named 'panther'"))]
For this command to work as expected, you need to allow the Severity field to be absent:
panther_analysis_tool test --path tests/fixtures/valid_policies --filter AnalysisType=policy,global Severity=Critical,""
[INFO]: Testing analysis packs in tests/fixtures/valid_policies
AWS.IAM.MFAEnabled
[PASS] Root MFA not enabled fails compliance
[PASS] User MFA not enabled fails compliance
Filters work for the `zip` and `upload` commands in exactly the same way they work for the `test` command.

In addition to filtering, you can set a minimum number of unit tests with the `--minimum-tests` flag. Detections that don't have the minimum number of tests will be considered failing, and if `--minimum-tests` is set to 2 or greater, it will also enforce that at least one test must return True and one must return False. In the example below, even though the rules passed all their tests, they're still considered failing because they do not have the correct test coverage:
% panther_analysis_tool test --path okta_rules --minimum-tests 2
[INFO]: Testing analysis packs in okta_rules
Okta.AdminRoleAssigned
[PASS] Admin Access Assigned
Okta.BruteForceLogins
[PASS] Failed login
Okta.GeographicallyImprobableAccess
[PASS] Non Login
[PASS] Failed Login
--------------------------
Panther CLI Test Summary
Path: okta_rules
Passed: 0
Failed: 3
Invalid: 0
--------------------------
Failed Tests Summary
Okta.AdminRoleAssigned
['Insufficient test coverage, 2 tests required but only 1 found.', 'Insufficient test coverage: expected at least one passing and one failing test.']
Okta.BruteForceLogins
['Insufficient test coverage, 2 tests required but only 1 found.', 'Insufficient test coverage: expected at least one passing and one failing test.']
Okta.GeographicallyImprobableAccess
['Insufficient test coverage: expected at least one passing and one failing test.']
Writing Detections locally means creating Python and metadata files that define a Panther Detection on your own machine. After writing Detections locally, you upload the files to your Panther instance (typically via the Panther Analysis Tool) to control your Detection content.
For information on writing detections locally, please see the documentation page for the type of detection you are writing.
To manage custom detections, you can create a private fork of the Panther Analysis GitHub repo. Upon tagged releases, you can pull upstream changes from this public repo.
When you want to pull in the latest changes from the repository, perform the following steps from your private repo:
# add the public repository as a remote
git remote add panther-upstream [email protected]:panther-labs/panther-analysis.git
# Pull in the latest changes
# Note: You may need to use the `--allow-unrelated-histories`
# flag if you did not maintain the history originally
git pull panther-upstream master
# Push the latest changes up to your forked repo and merge them
git push
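Because upstream changes land in tagged releases, you may prefer merging a specific tag instead of master; the tag name below is illustrative:

# Fetch upstream tags and merge a specific release tag
git fetch panther-upstream --tags
git merge v3.0.0   # substitute the release tag you want to adopt
git push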
Visit the Panther Knowledge Base to view articles about the Panther Analysis Tool that answer frequently asked questions and help you resolve common errors and issues.