Testing
Use unit tests to ensure your detections are working as expected
Testing your detections ensures that once deployed, detections will behave as expected, generating alerts appropriately. Testing also promotes reliability and protects against regressions as code evolves over time.
Testing works by defining a test log event for a certain detection, and indicating whether or not you'd expect an alert to be generated when the test event is processed by that detection. Panther doesn't enforce a required minimum or maximum number of tests for each detection, but it's recommended to configure at least two—one false positive and one true positive.
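For instance, a true-positive and false-positive pair can be sketched in plain Python (the rule logic and field names here are hypothetical, not a real Panther schema):

```python
# Hypothetical rule logic and event fields, for illustration only.
def rule(event):
    # Alert on failed login attempts
    return event.get("eventType") == "login" and event.get("outcome") == "failure"

# True positive: the detection should trigger an alert
true_positive = {"eventType": "login", "outcome": "failure"}
# False positive: the detection should not trigger an alert
false_positive = {"eventType": "login", "outcome": "success"}

assert rule(true_positive) is True
assert rule(false_positive) is False
```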
You can create, edit, and delete tests for custom detections (i.e., those that aren't Panther-managed). Tests for Panther-managed detections are maintained by Panther and are read-only.
Panther's Data Replay, which allows you to run historical log data through a rule to preview the outcome, is also useful while testing.
You can create tests for detections that are not Panther-managed in the Panther Console, or in the CLI workflow with the Panther Analysis Tool (PAT).
In the left-hand navigation bar of your Panther Console, click Detections.
Click the name of an existing detection, or create a new detection.
Scroll down to the Test section.
On the right-hand side of the Unit Test tile, click Add New.
If the detection is Panther-managed, Add New will be disabled, as you cannot create new tests.
The newly created test is given a placeholder name. To its right, click the three dots icon and select Rename.
Provide a meaningful name for the test, and press enter, or return, to save it.
Set the The detection should trigger based on the example event toggle to YES or NO.
Compose a test JSON event in the text editor.
To see whether your test passes, click Run Test.
When you are finished, in the upper-right corner of the page, click Update or Save.
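As a sketch, a test event in the text editor is a JSON object matching the log type the detection runs against (the field names below are illustrative, not a real schema):

```json
{
  "eventType": "login",
  "sourceIp": "192.0.2.10",
  "outcome": "failure"
}
```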
You can rename or delete tests for detections that are not Panther-managed.
In the left-hand navigation bar of your Panther Console, click Detections.
Click the name of a detection.
Scroll down to the Test section.
Within the Unit Test tile, locate the test you'd like to rename or delete.
To the right of the test's name, click the three dots icon.
If the detection is Panther-managed, its tests cannot be renamed or deleted. Instead of a three dots icon next to the name of each test, a Panther icon will appear.
Click Rename or Delete.
If renaming, enter the new name and press enter, or return, to save.
If deleting, a Delete Test confirmation modal will pop up.
Select Confirm.
Click Edit in the upper right corner of the page.
Scroll down, below the Rule Function text editor, to the Unit Test text editor.
Keeping with the previous example (in Rules), let's write two tests for this detection:
Name: Successful admin-panel Logins
Test event should trigger an alert: Yes
Name: Errored requests to the access-policy page
Test event should trigger an alert: No
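In plain Python, the pair of tests above might look like the following sketch (the rule logic and field names are assumptions based on the example, not the actual detection):

```python
# Hypothetical rule from the Rules example; field names are assumptions.
def rule(event):
    # Alert on successful logins to the admin panel
    return event.get("path") == "/admin-panel/" and event.get("status_code") == 200

# "Successful admin-panel Logins": should trigger an alert
admin_login = {"path": "/admin-panel/", "status_code": 200}
# "Errored requests to the access-policy page": should not trigger an alert
errored_request = {"path": "/access-policy/", "status_code": 500}

assert rule(admin_login) is True
assert rule(errored_request) is False
```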
Use as many combinations as you would like to ensure the highest reliability of your detections.
It is highly discouraged to make external API requests from within your detections in Panther. In general, detections are processed at a very high scale, and making API requests can overload receiving systems and cause your rules to exceed the 15-second runtime limit.
Panther's testing framework also allows for basic Python call mocking. Both policy and rule tests support unit test mocking.
When writing a detection that requires an external API call, mocks can be utilized to mimic the server response in the unit tests without having to actually execute an API call.
Mocks are defined with a Mock Name and Return Value (or objectName and returnValue, in the CLI workflow), which respectively denote the name of the object to patch and the string to be returned when the patched object is invoked.
Mocks are defined on the unit test level, allowing you to define different mocks for each unit test.
To add a mock to a unit test in the Console:
Within the Unit Test tile, locate the Mock Testing section, below the test event editor.
Add values for Mock Name and Return Value.
To test that your mock is behaving as expected, click Run Test.
Mocks are allowed on the global and built-in Python namespaces. This includes:
Imported Modules and Functions
Global Variables
Built-in Python Functions
User-Defined Functions
Python provides two distinct forms of importing, which are mocked as follows:
import package (Mock Name: package)
from package import module (Mock Name: module)
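The distinction matters because the mock must patch the name the detection actually references. A sketch using the standard library's json module and unittest.mock to mirror the behavior (the detection functions are hypothetical):

```python
import json
from json import dumps
from unittest.mock import patch

def detection_a(event):
    # `import json` convention: the name to patch is `json`
    return json.dumps(event)

def detection_b(event):
    # `from json import dumps` convention: the name to patch is `dumps`
    return dumps(event)

# Equivalent of Mock Name: json
with patch(f"{__name__}.json") as mock_json:
    mock_json.dumps.return_value = "mocked"
    assert detection_a({}) == "mocked"

# Equivalent of Mock Name: dumps
with patch(f"{__name__}.dumps", return_value="mocked"):
    assert detection_b({}) == "mocked"
```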
This example is based on the AWS Config Global Resources detection.
The detection utilizes a global helper function, resource_lookup from panther_oss_helpers, which queries the resources-api and returns the resource attributes. However, the unit test should run without making any external API calls.
This test fails as there is no corresponding resource mapping to the generic example data.
The detection uses the from panther_oss_helpers import resource_lookup convention, which means the mock should be defined for the resource_lookup function.
Mocks provide a way to leverage real-world data to test the detection logic.
The return value used:
While this resource should be compliant, the unit test fails.
Detections that do not expect a string to be returned require a small tweak for mocks.
In order to get this unit test working as expected, the following modifications need to be made:
Once this modification is added, you can now test the detection logic with real data!
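As a sketch of that tweak: since mock return values are strings, a detection expecting a dictionary can parse the mocked value before using it. The resource fields and stand-in function below are assumptions for illustration, not actual Panther code:

```python
import json

def resource_lookup(resource_id):
    # Stand-in for panther_oss_helpers.resource_lookup; under a unit-test mock
    # this returns the configured Return Value string instead of calling the API.
    return '{"IncludeGlobalResourceTypes": true}'

def policy(resource):
    lookup = resource_lookup(resource.get("ConfigurationRecorderName"))
    # Mocks return strings, so convert back to a dict when necessary
    if isinstance(lookup, str):
        lookup = json.loads(lookup)
    return lookup.get("IncludeGlobalResourceTypes") is True
```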
Unit test mocking is also supported with CLI based workflows for writing detections. For details on adding unit test mocks to a CLI based detection, see the unit test mocking section of the PAT documentation.
It's possible to enrich test events with p_enrichment in both the Panther Console and the CLI workflow.
If you are using the lookup() function in your detection, instead follow the instructions in Unit testing detections that use lookup().
If you have Lookup Table(s) configured and have created a detection that leverages them, click Enrich Test Data in the upper right side of the JSON event editor to ensure that p_enrichment is populated correctly.
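As a sketch of what an enriched test event looks like: p_enrichment is keyed by Lookup Table name, then by the matched event field. The table name and fields below are assumptions, not a real configuration:

```python
# Hypothetical enriched test event; table and field names are assumptions.
test_event = {
    "srcAddr": "10.0.0.1",
    "p_enrichment": {
        "corporate_ip_ranges": {      # Lookup Table name
            "srcAddr": {              # event field the lookup matched on
                "is_corporate": True,
            }
        }
    },
}

def rule(event):
    # Alert on traffic from addresses outside the corporate ranges
    match = event.get("p_enrichment", {}).get("corporate_ip_ranges", {}).get("srcAddr", {})
    return not match.get("is_corporate", False)

assert rule(test_event) is False
```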