Panther Analysis Tool
The panther_analysis_tool (PAT) is an open source utility for testing, packaging, and deploying Panther detections from source code.
It's designed for developer-centric workflows such as managing your Panther analysis packs programmatically or within CI/CD pipelines.
For additional information, please see the README on the PAT GitHub page.

Installation

PAT is distributed as a Python package and can be installed into your current environment with pip:

pip3 install panther_analysis_tool

For information on updating the version, please see the GitHub page.

File Organization

It's best practice to create a fork of Panther's open source analysis repository.
To get started, navigate to the local checkout containing your custom detections.
We recommend creating folders based on log/resource type, such as suricata_rules or aws_s3_policies. Use the open source Panther Analysis packs as a reference.
Each analysis consists of:
  • A Python file containing your detection/audit logic
  • A valid YAML or JSON specification file containing attributes of the detection.
    • By convention, we give this file the same name as the Python file.

Writing Rules

Rules are Python functions to detect suspicious behaviors. Returning a value of True indicates suspicious activity, which triggers an alert.
First, write your rule and save it (in your folder of choice) as my_new_rule.py:
def rule(event):
    # Default to '' so a missing hostName doesn't raise a TypeError
    return 'prod' in event.get('hostName', '')
Then, create a specification file using the template below:
AnalysisType: rule
DedupPeriodMinutes: 60 # 1 hour
DisplayName: Example Rule to Check the Format of the Spec
Enabled: true
Filename: my_new_rule.py
RuleID: Type.Behavior.MoreContext
Severity: High
LogTypes:
  - LogType.GoesHere
Reports:
  ReportName (like CIS, MITRE ATT&CK):
    - The specific report section relevant to this rule
Tags:
  - Tags
  - Go
  - Here
Description: >
  This rule exists to validate the CLI workflows of the Panther CLI
Runbook: >
  First, find out who wrote the spec format, then notify them with feedback.
Reference: https://www.a-clickable-link-to-more-info.com
When this rule is uploaded, each of the fields you would normally populate in the UI will be auto-filled.

Rule Specification Reference

Required fields are in bold.
A complete list of rule specification fields:
| Field Name | Description | Expected Value |
| --- | --- | --- |
| **AnalysisType** | Indicates whether this analysis is a rule, policy, or global | rule |
| **Enabled** | Whether this rule is enabled | Boolean |
| **Filename** | The path (with file extension) to the Python rule body | String |
| **RuleID** | The unique identifier of the rule | String |
| **LogTypes** | The list of log types to apply this rule to | List of strings |
| **Severity** | The severity of this rule | One of the following strings: Info, Low, Medium, High, or Critical |
| Description | A brief description of the rule | String |
| DedupPeriodMinutes | The time period (in minutes) during which similar events of an alert will be grouped together | 15, 30, 60, 180 (3 hours), 720 (12 hours), or 1440 (24 hours) |
| DisplayName | A friendly name to show in the UI and alerts; the RuleID will be displayed if this field is not set | String |
| OutputIds | Static destination overrides, used to route alerts from this rule; takes priority over default routing based on severity | List of strings |
| Reference | The reason this rule exists, often a link to documentation | String |
| Reports | A mapping of framework or report names to the values this rule covers for that framework | Map of strings to lists of strings |
| Runbook | The actions to be carried out if this rule returns an alert, often a link to documentation | String |
| SummaryAttributes | A list of fields that alerts should summarize | List of strings |
| Threshold | The number of matching events required before an alert is sent | Integer |
| Tags | Tags used to categorize this rule | List of strings |
| Tests | Unit tests for this rule | List of maps |
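For example, the optional alerting fields above can be combined in a rule spec like this (the framework section, threshold, and destination values are illustrative placeholders):

Reports:
  MITRE ATT&CK:
    - TA0007:T1078
Threshold: 5
SummaryAttributes:
  - hostName
  - user
OutputIds:
  - my-alert-destination-id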

Scheduled Query Specification Reference

Required fields are in bold.
A complete list of scheduled query specification fields:
| Field Name | Description | Expected Value |
| --- | --- | --- |
| **AnalysisType** | Indicates whether this analysis is a rule, policy, scheduled query, or global | scheduled_query |
| **QueryName** | A friendly name to show in the UI | String |
| **Enabled** | Whether this query is enabled | Boolean |
| Tags | Tags used to categorize this query | List of strings |
| Description | A brief description of the query | String |
| Query | A query that can run on any backend; if this field is specified, do not also specify SnowflakeQuery or an AthenaQuery | String |
| SnowflakeQuery | A query specifically for a Snowflake backend | String |
| AthenaQuery | A query specifically for Athena | String |
| **Schedule** | The schedule on which this query should run, expressed either as a cron expression or in rate minutes; cron and rate minutes are mutually exclusive | Map |
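As a sketch, a minimal scheduled query spec assembled from these fields might look like the following. The table name is hypothetical, and the exact key names accepted inside the Schedule map (shown here as RateMinutes for a rate-based schedule) are assumptions to verify against the Panther documentation:

AnalysisType: scheduled_query
QueryName: Example Login Count Query
Enabled: true
Description: Counts recent logins per user
Query: >
  SELECT userName, COUNT(*) AS logins
  FROM my_datalake.login_events
  GROUP BY userName
Schedule:
  RateMinutes: 30 # assumed key name; a rate and a cron schedule are mutually exclusive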

Rule Tests

Tests help validate that your rule will behave as intended and detect the early signs of a breach. In your spec file, add the Tests key with sample cases:
Tests:
  -
    Name: Name to describe our first test
    LogType: LogType.GoesHere
    ExpectedResult: true or false
    Log:
      {
        "hostName": "test-01.prod.acme.io",
        "user": "martin_smith",
        "eventTime": "June 22 5:50:52 PM"
      }
Try to cover as many test cases as possible, including both true and false positives.
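For the rule above, a matching false-positive case simply flips the hostname and the expected result:

Tests:
  -
    Name: Non-production host does not alert
    LogType: LogType.GoesHere
    ExpectedResult: false
    Log:
      {
        "hostName": "test-01.dev.acme.io",
        "user": "martin_smith",
        "eventTime": "June 22 5:50:52 PM"
      }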

Writing Policies

Policies are Python functions to detect misconfigured cloud infrastructure. Returning a value of True indicates this resource is valid and properly configured. Returning False indicates a policy failure, which triggers an alert.
First, write your policy and save it (in your folder of choice) as my_new_policy.py:
def policy(resource):
    # Resources outside us-east-1 are considered compliant in this example
    return resource['Region'] != 'us-east-1'
Then, create a specification file using the template below:
AnalysisType: policy
Enabled: true
Filename: my_new_policy.py
PolicyID: Category.Type.MoreInfo
ResourceTypes:
  - Resource.Type.Here
Severity: Info|Low|Medium|High|Critical
DisplayName: Example Policy to Check the Format of the Spec
Tags:
  - Tags
  - Go
  - Here
Runbook: Find out who changed the spec format.
Reference: https://www.link-to-info.io

Policy Specification Reference

Required fields are in bold.
A complete list of policy specification fields:
| Field Name | Description | Expected Value |
| --- | --- | --- |
| **AnalysisType** | Indicates whether this specification defines a policy or a rule | policy |
| **Enabled** | Whether this policy is enabled | Boolean |
| **Filename** | The path (with file extension) to the Python policy body | String |
| **PolicyID** | The unique identifier of the policy | String |
| **ResourceTypes** | The resource types this policy applies to | List of strings |
| **Severity** | The severity of this policy | One of the following strings: Info, Low, Medium, High, or Critical |
| ActionDelaySeconds | How long (in seconds) to delay auto-remediations and alerts, if configured | Integer |
| AutoRemediationID | The unique identifier of the auto-remediation to execute in case of policy failure | String |
| AutoRemediationParameters | The parameters to pass to the auto-remediation, if one is configured | Map |
| Description | A brief description of the policy | String |
| DisplayName | A friendly name to show in the UI and alerts; the PolicyID will be displayed if this field is not set | String |
| Reference | The reason this policy exists, often a link to documentation | String |
| Reports | A mapping of framework or report names to the values this policy covers for that framework | Map of strings to lists of strings |
| Runbook | The actions to be carried out if this policy fails, often a link to documentation | String |
| Tags | Tags used to categorize this policy | List of strings |
| Tests | Unit tests for this policy | List of maps |
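For instance, the auto-remediation fields can be combined in a policy spec like this; the remediation ID and parameter names below are hypothetical placeholders, not real remediation names:

ActionDelaySeconds: 300
AutoRemediationID: Your.Remediation.ID
AutoRemediationParameters:
  SomeParameterName: some-value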

Policy Tests

In the spec file, add the following Tests key:
Tests:
  -
    Name: Name to describe our first test.
    ResourceType: AWS.S3.Bucket
    ExpectedResult: true
    Resource:
      {
        "PublicAccessBlockConfiguration": null,
        "Region": "us-east-1",
        "Policy": null,
        "AccountId": "123456789012",
        "LoggingPolicy": {
          "TargetBucket": "access-logs-us-east-1-100",
          "TargetGrants": null,
          "TargetPrefix": "acmecorp-financial-data/"
        },
        "EncryptionRules": [
          {
            "ApplyServerSideEncryptionByDefault": {
              "SSEAlgorithm": "AES256",
              "KMSMasterKeyID": null
            }
          }
        ],
        "Arn": "arn:aws:s3:::acmecorp-financial-data",
        "Name": "acmecorp-financial-data",
        "LifecycleRules": null,
        "ResourceType": "AWS.S3.Bucket",
        "Grants": [
          {
            "Permission": "FULL_CONTROL",
            "Grantee": {
              "URI": null,
              "EmailAddress": null,
              "DisplayName": "admins",
              "Type": "CanonicalUser",
              "ID": "013ae1034i130431431"
            }
          }
        ],
        "Versioning": "Enabled",
        "ResourceId": "arn:aws:s3:::acmecorp-financial-data",
        "Tags": {
          "aws:cloudformation:logical-id": "FinancialDataBucket"
        },
        "Owner": {
          "ID": "013ae1034i130431431",
          "DisplayName": "admins"
        },
        "TimeCreated": "2020-06-13T17:16:36.000Z",
        "ObjectLockConfiguration": null,
        "MFADelete": null
      }
The value of Resource can be a JSON object copied directly from the Policies > Resources explorer.
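Tying the test back to a policy body: because the sample resource above has Versioning set to Enabled, one policy this test case would pass (ExpectedResult: true) is a simple versioning check:

def policy(resource):
    # True for the sample resource above, where Versioning is "Enabled"
    return resource['Versioning'] == 'Enabled'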

Unit Test Mocking

Both policy and rule tests support unit test mocking. In order to configure mocks for a particular test case, add the Mocks key to your test case. The Mocks key is used to define a list of functions you want to mock, and the value that should be returned when that function is called. Multiple functions can be mocked in a single test. For example, if we have a rule test and want to mock the function get_counter to always return a 1 and the function geoinfo_from_ip to always return a specific set of geo IP info, we could write our unit test like this:
Tests:
  -
    Name: Test With Mock
    LogType: LogType.Custom
    ExpectedResult: true
    Mocks:
      - objectName: get_counter
        returnValue: 1
      - objectName: geoinfo_from_ip
        returnValue: >-
          {
            "region": "UnitTestRegion",
            "city": "UnitTestCityNew",
            "country": "UnitTestCountry"
          }
    Log:
      {
        "hostName": "test-01.prod.acme.io",
        "user": "martin_smith",
        "eventTime": "June 22 5:50:52 PM"
      }
Mocking allows us to emulate network calls without requiring API keys or network access in our CI/CD pipeline, and without muddying the state of external tracking systems (such as the Panther KV store).
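As a sketch, the rule exercised by this test might call the mocked helpers like so. The import paths are assumptions (adjust them to wherever get_counter and geoinfo_from_ip live in your global helpers), and note that the folded returnValue above arrives as a string, so the rule parses it with json.loads:

import json

# Assumed import locations; adjust to your global_helpers layout
from panther_oss_helpers import get_counter
from acme_geo_helpers import geoinfo_from_ip  # hypothetical module


def rule(event):
    # The test mock replaces this call with a static value of 1
    if get_counter(event.get('hostName', '')) < 1:
        return False
    # The mocked returnValue is a JSON string; parse it before use
    geo = json.loads(geoinfo_from_ip(event.get('ipAddress', '')))
    return geo.get('country') == 'UnitTestCountry'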

Running Tests

Use the Panther Analysis Tool to load the defined specification files and evaluate unit tests locally:
panther_analysis_tool test --path <folder-name>
To filter rules or policies based on certain attributes:
panther_analysis_tool test --path <folder-name> --filter RuleID=Category.Behavior.MoreInfo
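Filters can also match other spec fields using the same key=value form; for example, to test only the high-severity detections:

panther_analysis_tool test --path <folder-name> --filter Severity=High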

Globals

Global functions allow common logic to be shared across rules and policies. To declare them as code, add them to the global_helpers folder, following the same pattern as rules and policies.
Globals defined outside of the global_helpers folder will not be loaded.
First, create your Python file (global_helpers/acmecorp.py):
from fnmatch import fnmatch

RESOURCE_PATTERN = 'acme-corp-*-[0-9]'


def matches_internal_naming(resource_name):
    return fnmatch(resource_name, RESOURCE_PATTERN)
Then, create your specification file:
AnalysisType: global
GlobalID: acmecorp
Filename: acmecorp.py
Description: A set of helpers internal to acme-corp
Finally, use this helper in a policy (or a rule):
import acmecorp


def policy(resource):
    return acmecorp.matches_internal_naming(resource['Name'])

Uploading to Panther

Panther SaaS customers: Please file a support ticket to gain upload access to your Panther environment.
Make sure to configure your environment with valid AWS credentials prior to running the command below. This command will upload based on the exported value of AWS_REGION.
To upload your analysis packs to your Panther Console, run the following command:
panther_analysis_tool upload --path <path-to-your-rules> --out tmp
Analyses with the same ID are overwritten. Additionally, locally deleted rules/policies will not automatically be deleted in the database and must be removed manually. For CLI-driven workflows, we recommend setting the Enabled property to false instead of deleting policies or rules.

Delete Rules, Policies, or Saved Queries

While panther_analysis_tool upload --path <directory> will upload everything from <directory>, it will not delete anything in your Panther instance if you simply remove a local file from <directory>. Instead, you can use the panther_analysis_tool delete command to explicitly delete detections from your Panther instance. To delete a specific detection, you can run the following command:
panther_analysis_tool delete --analysis-id MyRuleId
This command interactively asks for confirmation before deleting the detection. If you would like to delete without confirming, use the following command:
panther_analysis_tool delete --analysis-id MyRuleId --no-confirm
For more information, please see the README on the PAT GitHub page.

Pack Source

See the Detection Packs documentation for details on using panther_analysis_tool with detection packs and pack sources.