To use `panther_analysis_tool` outside of the virtual environment, install it directly:
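The standard install from PyPI (assuming `pip` is available on your PATH):

```
pip install panther_analysis_tool
```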
Arguments can also be specified in a configuration file named `.panther_settings.yml` located in your working directory. An example configuration file is included in this repo: `example_panther_config.yml`. It contains example syntax for supported options.
For example, if you set `minimum_tests: 2` in the configuration file and `--minimum-tests 1` on the command line, the minimum number of tests will be 2.
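A minimal `.panther_settings.yml` sketch, using only the option discussed above (see `example_panther_config.yml` for the full set of supported keys):

```yaml
minimum_tests: 2
```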
Note that while `panther_analysis_tool upload --path <directory>` will upload everything from `<directory>`, it will not delete anything in your Panther instance if you simply remove a local file from `<directory>`. Instead, use the `panther_analysis_tool delete` command to explicitly delete detections from your Panther instance. To delete a specific detection, run the following command:
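For instance, assuming a detection with ID `My.Rule.ID` and an `--analysis-id` flag (run `panther_analysis_tool delete --help` to confirm the exact flags your version supports):

```shell
panther_analysis_tool delete --analysis-id My.Rule.ID
```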
The `test`, `zip`, and `upload` commands all support filtering. Filtering works by passing the `--filter` argument with a list of filters specified in the format `KEY=VALUE1,VALUE2`. The keys can be any valid field in a policy or rule. When using a filter, only analysis that matches each specified filter will be considered.
To include `global` analysis types, be sure to include them in your filter. You can include an empty string as a value in a filter, which means the filter is only applied if the field exists.
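For example, to test only policies with a severity of High or Critical (the field names here are illustrative):

```shell
panther_analysis_tool test --filter AnalysisType=policy Severity=High,Critical
```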
Filters work for the `upload` command in the exact same way they work for the `test` command, as does the `--minimum-tests` flag. Detections that don't have the minimum number of tests will be considered failing, and if `--minimum-tests` is set to 2 or greater, it will also enforce that at least one test returns `True` and one returns `False`.
Each detection consists of a Python file (with a `.py` extension) containing your detection/audit logic and a specification file (with a `.json` extension) containing metadata attributes of the detection.
For rules, returning `True` indicates suspicious activity, which triggers an alert.
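A minimal rule sketch, assuming a hypothetical event shape with `eventName` and `errorMessage` fields:

```python
def rule(event):
    # Alert (return True) on failed console logins -- field names are illustrative
    return (
        event.get("eventName") == "ConsoleLogin"
        and event.get("errorMessage") is not None
    )
```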
Unit tests are defined in the specification file under a `Tests` key with sample cases:
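A sketch of what that might look like for a rule (the `Log` fields here are hypothetical; policies would use a resource attribute block instead of a log event):

```yaml
Tests:
  - Name: Failed Console Login
    ExpectedResult: true
    Log:
      eventName: ConsoleLogin
      errorMessage: Failed authentication
```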
For policies, returning `True` indicates this resource is valid and properly configured. Returning `False` indicates a policy failure, which triggers an alert.
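A minimal policy sketch, assuming a hypothetical resource shape with an `EncryptionRules` attribute:

```python
def policy(resource):
    # Pass (return True) only when encryption is configured -- field name is illustrative
    return resource.get("EncryptionRules") is not None
```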
To mock a function, add a `Mocks` key to your test case. The `Mocks` key is used to define a list of functions you want to mock, along with the value that should be returned when each function is called. Multiple functions can be mocked in a single test. For example, if we have a rule test and want to mock the function `get_counter` to always return a `1` and the function `geoinfo_from_ip` to always return a specific set of geo IP info, we could write our unit test like this:
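Assuming each mock entry takes an `objectName`/`returnValue` shape, the test case might look like:

```yaml
Tests:
  - Name: Geo Info From IP
    ExpectedResult: true
    Mocks:
      - objectName: get_counter
        returnValue: 1
      - objectName: geoinfo_from_ip
        returnValue: >-
          {"ip": "1.1.1.1", "country": "US", "city": "Sydney"}
    Log:
      sourceIPAddress: 1.1.1.1
```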
Once Methods are defined, create an associated Python file (e.g. `data_models/aws_cloudtrail_datamodel.py`). Note: the `Filename` specification field is required if a `Method` is defined in a mapping. If `Method` is not used in any Mappings, no Python file is required.
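A sketch of such a file (the method name and log field are illustrative):

```python
# data_models/aws_cloudtrail_datamodel.py -- illustrative sketch
def get_source_ip(event):
    # Method referenced by a Mapping in the data model's specification file
    return event.get("sourceIPAddress")
```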
The data model can then be used, including in `Test` cases, via the `event.udm()` method in the Rule's Python logic:
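Sketched below with a minimal stand-in for Panther's event object (the real object is provided by the runtime; `source_ip` is a hypothetical mapping name):

```python
class Event(dict):
    """Minimal stand-in for Panther's event wrapper, for illustration only."""

    _mappings = {"source_ip": "sourceIPAddress"}

    def udm(self, key):
        # Resolve a unified data model field name to the raw log field
        return self.get(self._mappings[key])


def rule(event):
    return event.udm("source_ip") == "127.0.0.1"
```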
Global helper functions are defined in the `global_helpers` folder with a similar pattern to rules and policies.
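For example, a helper module (hypothetical name and contents) that rules can import:

```python
# global_helpers/example_helpers.py -- hypothetical helper module
def is_private_ip(ip):
    # Rough check for RFC 1918 ranges (simplified for illustration)
    return ip.startswith(("10.", "192.168.", "172.16."))
```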
`720` (12 hours), or
The `RuleID` will be displayed if this field is not set.
The `PolicyID` will be displayed if this field is not set.
The `DataModelID` will be displayed if this field is not set.