Use unit tests to ensure your detections are working as expected

Testing your detections ensures that once deployed, detections will behave as expected, generating alerts appropriately. Testing also promotes reliability and protects against regressions as code evolves over time.

Testing works by defining a test log event for a detection and indicating whether you'd expect an alert to be generated when that detection processes the test event. Panther doesn't enforce a minimum or maximum number of tests per detection, but it's recommended to configure at least two: one true positive and one false positive.

You can create, edit, and delete tests for custom detections (i.e., those that aren't Panther-managed). Tests for Panther-managed detections are maintained by Panther and are read-only.

Panther's Data Replay, which allows you to run historical log data through a rule to preview the outcome, is also useful while testing.

Using tests

How to create a test

You can create tests for detections that are not Panther-managed in the Panther Console, or in the CLI workflow with the Panther Analysis Tool (PAT).

  1. In the left-hand navigation bar of your Panther Console, click Build > Detections.

  2. Click the name of an existing detection, or create a new detection.

  3. Scroll down to the Test section.

    • If the detection is Panther-managed, Add New will be disabled, as you cannot create new tests.

  4. Provide a meaningful name for the test, and press enter, or return, to save it.

  5. Set the The detection should trigger based on the example event toggle to YES or NO.

  6. Compose a test JSON event in the text editor.

  7. To see whether your test passes, click Run Test.

  8. When you are finished, in the upper-right corner of the page, click Update or Save.
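In the CLI workflow, tests are defined alongside the detection in its YAML specification rather than in the Console. A minimal sketch, assuming a PAT-style rule spec (the RuleID, Filename, and log type below are hypothetical placeholders):

```yaml
AnalysisType: rule
Filename: admin_panel_logins.py   # hypothetical rule file
RuleID: Custom.AdminPanelLogins   # hypothetical ID
Enabled: true
LogTypes:
  - Nginx.Access                  # assumed log type
Severity: Medium
Tests:
  - Name: Successful admin-panel login
    ExpectedResult: true          # test event should trigger an alert
    Log:
      status: 200
      request: GET /admin-panel/ HTTP/1.1
  - Name: Errored request
    ExpectedResult: false         # test event should not trigger
    Log:
      status: 500
      request: GET /access-policy/ HTTP/1.1
```

Running `panther_analysis_tool test` then executes each test event against the rule and compares the outcome to ExpectedResult.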

How to rename or delete a test in the Panther Console

You can rename or delete tests for detections that are not Panther-managed.

  1. In the left-hand navigation bar of your Panther Console, click Build > Detections.

  2. Click the name of a detection.

  3. Scroll down to the Test section.

  4. Within the Unit Test tile, locate the test you'd like to rename or delete.

  5. To the right of the test's name, click the three dot icon.

    • If the detection is Panther-managed, its tests cannot be renamed or deleted. Instead of a three dot icon next to the name of each test, a Panther icon will appear.

  6. Click Rename or Delete.

    • If renaming, enter the new name and press enter, or return, to save.

    • If deleting, a Delete Test confirmation modal will pop up.

      • Select Confirm.

Test example

To reach the test editor for an existing detection:

  • Click Edit in the upper right corner of the page.

  • Scroll down, below the Rule Function text editor, to the Unit Test text editor.

Keeping with the previous example (in Rules), let's write two tests for this detection:

def rule(event):
  return event.get('status') == 200 and 'admin-panel' in event.get('request', '')

def title(event):
  return f"Successful admin panel login detected from {event.get('remoteAddr')}"

def dedup(event):
  return event.get('remoteAddr')

Name: Successful admin-panel Logins

Test event should trigger an alert: Yes

  {
    "httpReferer": "https://domain1.com/?p=1",
    "httpUserAgent": "Chrome/80.0.3987.132 Safari/537.36",
    "remoteAddr": "",
    "request": "GET /admin-panel/ HTTP/1.1",
    "status": 200,
    "time": "2019-02-06 00:00:38 +0000 UTC"
  }

Name: Errored requests to the access-policy page

Test event should trigger an alert: No

  {
    "httpReferer": "https://domain1.com/?p=1",
    "httpUserAgent": "Chrome/80.0.3987.132 Safari/537.36",
    "remoteAddr": "",
    "request": "GET /access-policy/ HTTP/1.1",
    "status": 500,
    "time": "2019-02-06 00:00:38 +0000 UTC"
  }

Use as many combinations as you would like to ensure the highest reliability with your detections.
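The true-positive/false-positive pair can also be checked outside the Console with plain Python; a minimal sketch of the rule above run against both sample events (`event.get()` on a dict mirrors how the rule reads fields):

```python
# Sketch: the example rule exercised locally against the two sample events.
def rule(event):
    # Default of '' guards against events that lack a 'request' field.
    return event.get('status') == 200 and 'admin-panel' in event.get('request', '')

true_positive = {
    "request": "GET /admin-panel/ HTTP/1.1",
    "status": 200,
}
false_positive = {
    "request": "GET /access-policy/ HTTP/1.1",
    "status": 500,
}

assert rule(true_positive) is True    # should trigger an alert
assert rule(false_positive) is False  # should not trigger
```

This is only a local approximation; the Console and PAT run the same logic inside Panther's test harness.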


Mocks

Panther's testing framework also supports basic Python call mocking in both policy and rule unit tests.

When writing a detection that requires an external API call, mocks can be utilized to mimic the server response in the unit tests without having to actually execute an API call.

Mocks are defined with a Mock Name and Return Value (or objectName and returnValue, in the CLI workflow) which respectively denote the name of the object to patch and the string to be returned when the patched object is invoked.

Mocks are defined on the unit test level, allowing you to define different mocks for each unit test.

How to use mocks

To add a mock to a unit test in the Console:

  1. Within the Unit Test tile, locate the Mock Testing section, below the test event editor.

  2. Add values for Mock Name and Return Value.

  3. To test that your mock is behaving as expected, click Run Test.

Mocks are allowed on the global and built-in Python namespaces, including:

  1. Imported Modules and Functions

  2. Global Variables

  3. Built-in Python Functions

  4. User-Defined Functions

Python provides two distinct forms of importing, which can be mocked as such:

  • import package

    • Mock Name: package

  • from package import module

    • Mock Name: module
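The distinction can be sketched with Python's own unittest.mock, which Panther's mocking resembles conceptually; `sys.modules[__name__]` stands in for the detection module being patched (this is an illustration, not Panther's internal mechanism):

```python
import json
import sys
from unittest import mock

# With "import json", the detection resolves the name "json" at call time,
# so the name to patch is the package name itself.
def detection(raw):
    return json.loads(raw).get("status") == 200

with mock.patch.object(sys.modules[__name__], "json") as fake_json:
    # The mock's return value mimics the parsed server response.
    fake_json.loads.return_value = {"status": 200}
    result = detection("this string is never actually parsed")

assert result is True
```

Had the code used `from json import loads`, the name to patch would be `loads` instead, mirroring the Mock Name rules above.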

Example mock

This example is based on the AWS Config Global Resources detection.

The detection uses the global helper function resource_lookup from panther_oss_helpers, which queries the resources-api and returns the resource attributes. The unit test, however, should run without making any external API calls.

Without a mock, the test fails because no resource corresponds to the generic example data.

Diving into the detection

# --- Snipped ---
from panther_oss_helpers import resource_lookup
# --- Snipped ---
def policy(resource):
    # --- Snipped ---
    for recorder_name in resource.get("Recorders", []):
        recorder = resource_lookup(recorder_name)
        resource_records_global_resources = bool(
            deep_get(recorder, "RecordingGroup", "IncludeGlobalResourceTypes")
            and deep_get(recorder, "Status", "Recording")
        )
        if resource_records_global_resources:
            return True
    return False
    # --- Snipped ---

The detection uses the from panther_oss_helpers import resource_lookup convention, which means the mock should be defined on the resource_lookup function.

Mocks provide a way to leverage real-world data to test the detection logic.

The return value used:

  {
    "AccountId": "012345678910",
    "Name": "Default",
    "RecordingGroup": {
      "AllSupported": true,
      "IncludeGlobalResourceTypes": true,
      "ResourceTypes": null
    },
    "Region": "us-east-1",
    "ResourceId": "012345678910:us-east-1:AWS.Config.Recorder",
    "ResourceType": "AWS.Config.Recorder",
    "RoleARN": "arn:aws:iam::012345678910:role/PantherAWSConfig",
    "Status": {
      "LastErrorCode": null,
      "LastErrorMessage": null,
      "LastStartTime": "2018-10-05T22:45:01.838Z",
      "LastStatus": "SUCCESS",
      "LastStatusChangeTime": "2021-05-28T17:45:14.916Z",
      "LastStopTime": null,
      "Name": "Default",
      "Recording": true
    },
    "Tags": null,
    "TimeCreated": null
  }

While this resource should be compliant, the unit test still fails: mock return values are strings, so detections that expect a non-string value require a small tweak.

In order to get this unit test working as expected, the following modifications need to be made:

# --- Snipped ---
import json
# Another option is to use: from ast import literal_eval
# --- Snipped ---
def policy(resource):
    # --- Snipped ---
        recorder = resource_lookup(recorder_name)
        if isinstance(recorder, str):
            recorder = json.loads(recorder)
    # --- Snipped ---

Once this modification is added, you can now test the detection logic with real data!
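The tweak can be seen in isolation below; `resource_lookup` here is a stand-in for the patched helper, and the mock payload is trimmed to the two fields the policy checks:

```python
import json

# Console mocks return strings, so the patched helper yields JSON text
# rather than a dict; this stand-in mimics that behavior.
MOCK_RETURN_VALUE = (
    '{"RecordingGroup": {"IncludeGlobalResourceTypes": true},'
    ' "Status": {"Recording": true}}'
)

def resource_lookup(resource_id):  # stand-in for the mocked helper
    return MOCK_RETURN_VALUE

recorder = resource_lookup("Default")
if isinstance(recorder, str):      # the tweak from the snippet above
    recorder = json.loads(recorder)

assert recorder["Status"]["Recording"] is True
```

After the string is parsed, deep_get-style lookups on the recorder behave exactly as they would with a real API response.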

Mocks from the CLI

Unit test mocking is also supported with CLI based workflows for writing detections. For details on adding unit test mocks to a CLI based detection, see the unit test mocking section of the PAT documentation.

Enrich test data

You can enrich test events in both the Panther Console and the CLI workflow. If you have one or more Lookup Tables configured and a detection that leverages them, click Enrich Test Data in the upper right side of the JSON event editor to verify that p_enrichment is populated correctly.
