Rules and Scheduled Rules
Panther's rules and scheduled rules are Python functions that detect suspicious activity in logs, then generate alerts
Overview
Rules and scheduled rules are Python functions through which log data is run to detect suspicious activity and generate alerts. Rules and scheduled rules apply to logs, while policies apply to cloud resource configurations. Panther provides a number of Panther-managed rules and scheduled rules, which are already written and continuously updated.
Common examples of rules include analyzing logs for:
Authentication from unknown or unexpected locations
Sensitive API calls, such as administrative changes to SaaS services
Network traffic access to sensitive data sources, such as databases or virtual machines
New, suspicious entries added into a system's scheduled tasks, like cron
Alerts generated from NIDS, HIDS, or other security systems
Rules vs. scheduled rules
Both rules and scheduled rules have a Python rule function through which log events are run, but rules analyze real-time events, while scheduled rules analyze events queried from your data lake.
Rules
Rules, sometimes referred to as real-time rules, are the default mechanism of analyzing data sent to Panther. Rules work by accepting a defined set of log types such as Okta, Box, or your own custom data. Rules have the benefit of low-latency detection and alerting.
Use cases: High-signal logs that do not require joins with other data.
Scheduled rules
Scheduled rules work by accepting individual rows output from an associated scheduled query.
Use cases: Querying windows of time further in the past, running statistical analysis over data, or joining separate data streams.
How rules and scheduled rules work
Rules and scheduled rules each analyze one event at a time. They use event thresholds and deduplication to group matching events into alerts within windows of time. At a minimum, each rule and scheduled rule must contain a rule function. Rules and scheduled rules can also contain title, dedup, alert_context, and severity functions.
Deduplication
Events triggering the same detection within its deduplication period that also share a deduplication string will be grouped together in a single alert.
Each rule and scheduled rule has a default event threshold of 1 and deduplication period of 1h. This means all events returning True from the rule function (with the same deduplication string) will be grouped into a single alert within the hour after first being generated.
A rule or scheduled rule with an event threshold of 5 and deduplication period of 15m would not trigger an alert until five or more events (with the same deduplication string) passed into the rule function returned True within a 15-minute time period.
The order of precedence for setting the deduplication string is as follows:
The output of the detection's dedup function is used.
If dedup is not defined, the output of the detection's title function is used.
If title is not defined, the detection's ID is used.
The detection editor in the Panther Console supports a maximum deduplication period of 24 hours. If you upload your detections via the bulk uploader or the Panther Analysis Tool (PAT), there is no limit on the value of DedupPeriodMinutes.
The deduplication period is not affected by changing the status of an alert. This means, for example, events will continue to be grouped into the same alert for the length of the deduplication period even if an alert's status is changed to Resolved.
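To illustrate the precedence above, here is a minimal, hypothetical sketch; the eventName and sourceIp fields are placeholders rather than a specific Panther log schema. Because dedup is defined, its return value is the deduplication string; if dedup were removed, the output of title would be used, and if both were removed, the detection's ID would be used.

```python
# Minimal sketch (not a Panther-managed rule) showing how the optional title and
# dedup functions interact with deduplication. 'eventName' and 'sourceIp' are
# hypothetical field names used only for illustration.

def rule(event):
    # Match events whose (hypothetical) eventName marks a failed login
    return event.get('eventName') == 'login.failed'

def title(event):
    # Used as the alert title; would also be the dedup string if dedup() were absent
    return f"Failed logins from {event.get('sourceIp', 'unknown IP')}"

def dedup(event):
    # Highest-precedence dedup string: all matching events sharing this value
    # within the deduplication period are grouped into one alert
    return event.get('sourceIp', 'unknown IP')
```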
How to write rules and scheduled rules
You can write rules and scheduled rules in the Panther Console, or you can write them locally. Before you start writing a new rule, check whether an existing Panther-managed rule already meets your needs.
Writing detections locally means creating Python and metadata files that define a Panther detection on your own machine. After writing detections locally, you upload the files to your Panther instance, typically via PAT.
Note: Anything printed to stdout or stderr by your Python code will end up in CloudWatch. For SaaS/CPaaS customers, Panther engineers can see these CloudWatch logs during routine application monitoring.
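For example, a debugging statement like the one in this hypothetical sketch would be written to CloudWatch rather than surfaced in the Panther Console:

```python
def rule(event):
    # This print output goes to stdout, which ends up in CloudWatch logs
    # ('id' is a placeholder field name used only for illustration)
    print(f"DEBUG: processing event with id {event.get('id')}")
    return False
```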
How to write rules
These instructions outline how to set up real-time rules. To configure a scheduled rule, see How to write scheduled rules.
How to write rules in the Panther Console
In the left-hand navigation bar of your Panther Console, click Build > Detections.
Click Create New.
On the New Detection page, select Rule for the detection type.
In the Basic Info section, provide values for the following fields:
Name: Enter a descriptive name for the rule.
ID (optional): Click the pen icon and enter a unique ID for your rule.
In the upper-right corner, click Continue.
On the next page, configure your rule:
In the upper-right corner, the Enabled toggle will be set to ON by default. If you'd like to disable the rule, flip the toggle to OFF.
In the For the Following Source section:
Log Types: Select the log types this rule should apply to.
In the Detect section:
In the Rule Function text editor, write a Python rule function to define your detection. A minimal sketch is shown after these steps.
For detection templates and examples, see the panther_analysis GitHub repository.
In the Set Alert Fields section:
Severity: Select a severity level for the alerts triggered by this detection.
In the Optional Fields section, optionally provide values for the following fields:
Description: Enter additional context about the rule.
Runbook: Enter the procedures and operations relating to this rule.
To see examples of runbooks for built-in rules, see Alert Runbooks.
Reference: Enter an external link to more information relating to this rule.
Destination Overrides: Choose destinations to receive alerts for this detection, regardless of severity. Note that destinations can also be set dynamically, in the rule function. See Routing Order Precedence to learn more about routing precedence.
Deduplication Period and Events Threshold: Enter the deduplication period and threshold for rule matches. To learn how deduplication works, see Deduplication.
Summary Attributes: Enter the attributes you want to showcase in the alerts that are triggered by this detection.
To use a nested field as a summary attribute, use the Snowflake dot notation in the Summary Attribute field to traverse a path in a JSON object:
<column>:<level1_element>.<level2_element>.<level3_element>
The alert summary will then be generated for the referenced object in the alert. Learn more about traversing semi-structured data in Snowflake here.
For more information on Alert Summaries, see Assigning and Managing Alerts.
Custom Tags: Enter custom tags to help you understand the rule at a glance (e.g., HIPAA).
In the Framework Mapping section:
Click Add New to enter a report.
Provide values for the following fields:
Report Key: Enter a key relevant to your report.
Report Values: Enter values for that report.
In the Test section:
In the Unit Test section, click Add New to create a test for the rule you defined in the previous step.
In the upper-right corner, click Save.
After you have created a rule, you can modify it using no-code rule filters.
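As referenced in the Detect step above, the following is a minimal sketch of a rule function you might write in the Rule Function editor. It assumes a hypothetical audit log with eventType and actor fields; substitute the fields of the log types you selected.

```python
# Hypothetical sketch for the Rule Function editor; 'eventType' and 'actor'
# are placeholder field names, not a specific Panther log schema.

def rule(event):
    # Fire on administrative role changes
    return event.get('eventType') == 'user.role.changed_to_admin'

def title(event):
    return f"Admin role granted by {event.get('actor', 'unknown actor')}"
```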
How to write rules locally
If you're writing detections locally (instead of in the Panther Console), we recommend managing your local detection files in a version control system like GitHub or GitLab.
File setup
Each rule and scheduled rule consists of:
A Python file (a file with a .py extension) containing your detection logic.
A YAML specification file (a file with a .yml extension) containing metadata attributes of the detection. By convention, we give this file the same name as the Python file.
Folder setup
If you group your rules into folders, each folder name must contain the string rules in order for them to be found during upload (using either PAT or the bulk uploader in the Console).
We recommend grouping rules into folders based on log/resource type, e.g., suricata_rules or aws_s3_policies. You can use the panther-analysis repo as a reference.
Rules are Python functions to detect suspicious behaviors. Returning a value of True indicates suspicious activity, which triggers an alert.
Write your rule and save it (in your folder of choice) as my_new_rule.py:

```python
def rule(event):
    # Default to an empty string so a missing hostName does not raise an error
    return 'prod' in event.get('hostName', '')
```

Create a metadata file using the template below:

```yaml
AnalysisType: rule
DedupPeriodMinutes: 60 # 1 hour
DisplayName: Example Rule to Check the Format of the Spec
Enabled: true
Filename: my_new_rule.py
RuleID: Type.Behavior.MoreContext
Severity: High
LogTypes:
  - LogType.GoesHere
Reports:
  ReportName (like CIS, MITRE ATT&CK):
    - The specific report section relevant to this rule
Tags:
  - Tags
  - Go
  - Here
Description: >
  This rule exists to validate the CLI workflows of the Panther CLI
Runbook: >
  First, find out who wrote this spec format, then notify them with feedback.
Reference: https://www.a-clickable-link-to-more-info.com
```
When this rule is uploaded, each of the fields you would normally populate in the Panther Console will be auto-filled. See Rule specification reference below for a complete list of required and optional fields.
How to write scheduled rules
Scheduled rules are associated with one or more scheduled queries. If you have not yet created a scheduled query, follow the How to create a Saved and Scheduled Query instructions first, then return here to create the scheduled rule.
If the scheduled query returns multiple rows, each row is processed by the rule function as a separate event. The number of alerts triggered depends on the deduplication settings you've configured on the scheduled rule.
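For example, if a scheduled query aggregated failed logins per user, each returned row would arrive at the rule function as an event whose keys are the query's output columns. The sketch below is hypothetical; the failed_count and username columns are assumptions about your query's output, not a Panther-defined schema.

```python
# Hypothetical sketch: each row from the scheduled query is passed in as 'event',
# with the query's output columns available as keys.

THRESHOLD = 10  # assumed tolerance for failed logins per query window

def rule(event):
    return event.get('failed_count', 0) > THRESHOLD

def title(event):
    return f"{event.get('username', 'unknown user')} exceeded {THRESHOLD} failed logins"
```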
How to write scheduled rules in the Panther Console
In the left-hand navigation bar of your Panther Console, click Build > Detections.
Click Create New.
On the New Detection page, select Scheduled Rule for the detection type.
In the Basic Info section, provide values for the following fields:
Name: Enter a descriptive name for the scheduled rule.
ID (optional): Click the pen icon and enter a unique ID for your scheduled rule.
In the upper-right corner, click Continue.
On the next page, configure your scheduled rule:
In the upper-right corner, the Enabled toggle will be set to ON by default. If you'd like to disable the scheduled rule, flip the toggle to OFF.
In the For the Following Scheduled Queries section:
Scheduled Queries: Select one or more scheduled queries this scheduled rule should apply to.
In the Detect section:
In the Rule Function text editor, write a Python rule function to define your detection.
If all your filtering logic is already taken care of in the SQL of the associated scheduled query, you can configure the rule function to simply return True for each row:

```python
def rule(event):
    return True
```

For detection templates and examples, see the panther_analysis GitHub repository.
In the Set Alert Fields section:
Severity: Select a severity level for the alerts triggered by this detection.
In the Optional Fields section, optionally provide values for the following fields:
Description: Enter additional context about the rule.
Runbook: Enter the procedures and operations relating to this rule.
To see examples of runbooks for built-in rules, see Alert Runbooks.
Reference: Enter an external link to more information relating to this rule.
Destination Overrides: Choose destinations to receive alerts for this detection, regardless of severity. Note that destinations can also be set dynamically, in the rule function. See Routing Order Precedence to learn more about routing precedence.
Deduplication Period and Events Threshold: Enter the deduplication period and threshold for rule matches. To learn how deduplication works, see Deduplication.
Summary Attributes: Enter the attributes you want to showcase in the alerts that are triggered by this detection.
To use a nested field as a summary attribute, use the Snowflake dot notation in the Summary Attribute field to traverse a path in a JSON object:
<column>:<level1_element>.<level2_element>.<level3_element>
The alert summary will then be generated for the referenced object in the alert. Learn more about traversing semi-structured data in Snowflake here.
For more information on Alert Summaries, see Assigning and Managing Alerts.
Custom Tags: Enter custom tags to help you understand the rule at a glance (e.g., HIPAA).
In the Framework Mapping section:
Click Add New to enter a report.
Provide values for the following fields:
Report Key: Enter a key relevant to your report.
Report Values: Enter values for that report.
In the Test section:
In the Unit Test section, click Add New to create a test for the rule you defined in the previous step.
In the upper-right corner, click Save.
Once you've clicked Save, the scheduled rule will become active. Any rows returned by the associated scheduled query (at the interval defined in the query) will be run through the scheduled rule.
After you have created a rule, you can modify it using no-code rule filters.
How to write scheduled rules locally
If you're writing detections locally (instead of in the Panther Console), we recommend managing your local detection files in a version control system like GitHub or GitLab.
File setup
Each scheduled rule consists of:
A Python file (a file with a .py extension) containing your detection logic.
A YAML specification file (a file with a .yml extension) containing metadata attributes of the detection. By convention, we give this file the same name as the Python file.
Folder setup
If you group your rules into folders, each folder name must contain the string rules in order for them to be found during upload (using either PAT or the bulk uploader in the Console).
We recommend grouping rules into folders based on log/resource type, e.g., suricata_rules or aws_s3_policies. You can use the panther-analysis repo as a reference.
Scheduled rules allow you to analyze the output of a scheduled query with Python. Returning a value of True indicates suspicious activity, which triggers an alert.
Write your query and save it as my_new_scheduled_query.yml:

```yaml
AnalysisType: scheduled_query
QueryName: My New Scheduled Query Name
Enabled: true
Tags:
  - Optional
  - Tags
Description: >
  An optional Description
Query: 'SELECT * FROM panther_logs.aws_cloudtrail LIMIT 10'
SnowflakeQuery: 'SELECT * FROM panther_logs.public.aws_cloudtrail LIMIT 10'
AthenaQuery: 'SELECT * FROM panther_logs.aws_cloudtrail LIMIT 10'
Schedule:
  # Note: CronExpression and RateMinutes are mutually exclusive, only
  # configure one or the other
  CronExpression: '0 * * * *'
  RateMinutes: 1
  TimeoutMinutes: 1
```

Write your rule and save it as my_new_rule.py:

```python
# Note: See an example rule for more options
# https://github.com/panther-labs/panther-analysis/blob/master/templates/example_rule.py
def rule(_):
    # Note: You may add additional logic here
    return True
```

Create a metadata file and save it as my_new_schedule_rule.yml:

```yaml
AnalysisType: scheduled_rule
Filename: my_new_rule.py
RuleID: My.New.Rule
DisplayName: A More Friendly Name
Enabled: true
ScheduledQueries:
  - My New Scheduled Query Name
Tags:
  - Tag
Severity: Medium
Description: >
  An optional Description
Runbook: >
  An optional Runbook
Reference: An optional reference.link
Tests:
  - Name: Name
    ExpectedResult: true
    Log:
      { "JSON": "string" }
```
When these files are uploaded, the scheduled query is connected to the scheduled rule, and the fields you would normally populate in the Panther Console are auto-filled. See Rule specification reference below for a complete list of required and optional fields.
Rule errors and scheduled rule errors
Rule errors and scheduled rule errors are types of detection errors generated when a detection's Python code raises an exception.
If no specific routing is configured for rule errors, a rule error alert will route to the same destinations as the detection's other alerts. See Routing order precedence on Alert Destinations for more information.
In the event of a query timeout, the Python code for Destinations will not run.
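A common source of the rule errors described above is accessing fields that may be missing from an event, which raises an exception in the rule function. The sketch below shows one defensive pattern, assuming a hypothetical nested requestDetails object; it is plain Python rather than any specific Panther helper.

```python
# Hypothetical sketch: guard against missing keys so the detection returns
# False instead of raising an exception (which would generate a rule error).

def rule(event):
    # event.get() with a fallback avoids TypeError/KeyError on absent fields
    request = event.get('requestDetails') or {}
    return request.get('path', '').startswith('/admin')
```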
Rule and scheduled rule examples
See templates for rules and scheduled rules in the panther-analysis GitHub repository.
For in-depth detection examples, best practices, and information on functions and features, see Writing and Editing Detections.
Send an alert when an admin panel is accessed on a web server
As an example, let's write a rule to send an alert when an admin panel is accessed on a web server. The following NGINX log will be used:
```json
{
  "httpReferer": "https://domain1.com/?p=1",
  "httpUserAgent": "Chrome/80.0.3987.132 Safari/537.36",
  "remoteAddr": "180.76.15.143",
  "request": "GET /admin-panel/ HTTP/1.1",
  "status": 200,
  "time": "2019-02-06 00:00:38 +0000 UTC"
}
```

A basic rule would look like this:
A rule function that looks for 200 (OK) web requests to any URL containing the admin-panel string. Return type: Boolean.
A title function to say that the admin panel has been logged into from a specific IP address. Return type: String.
A dedup function to group all events by the same IP address. Return type: String.

```python
def rule(event):
    # Default to an empty string so a missing request field does not raise an error
    return event.get('status') == 200 and 'admin-panel' in event.get('request', '')


def title(event):
    return f"Successful admin panel login detected from {event.get('remoteAddr')}"


def dedup(event):
    return event.get('remoteAddr')
```

Then, the following would occur:
An alert would be generated and sent to the set of associated destinations, which by default are based on the rule severity.
The alert would say Successful admin panel login detected from 180.76.15.143.
Similar events with the same dedup string of 180.76.15.143 would be appended to the alert.
The recipient of the alert could then check Panther to view all alert metadata, a summary of the events, and run SQL over all of the events to perform additional analysis.
A unique alert will be generated for each unique deduplication string, which in this case is the IP of the requestor.
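One way to sanity-check the example above outside of Panther is to call its functions directly with the sample NGINX event. This is only a local sketch, separate from the Console unit tests or the YAML Tests block; it assumes the rule, title, and dedup functions from the example are in scope and mimics Panther's event access with a plain dict.

```python
# Local sanity check for the admin-panel example above.
sample_event = {
    "httpReferer": "https://domain1.com/?p=1",
    "httpUserAgent": "Chrome/80.0.3987.132 Safari/537.36",
    "remoteAddr": "180.76.15.143",
    "request": "GET /admin-panel/ HTTP/1.1",
    "status": 200,
    "time": "2019-02-06 00:00:38 +0000 UTC",
}

assert rule(sample_event) is True
assert title(sample_event) == "Successful admin panel login detected from 180.76.15.143"
assert dedup(sample_event) == "180.76.15.143"
```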
Reference
Alert severity
We recommend following these guidelines to define alert severity levels:
| Severity | Exploitability | Description | Examples |
| --- | --- | --- | --- |
| Info | None | No risk, simply informational | Gaining operational awareness. |
| Low | Difficult | Little to no risk if exploited | Non-sensitive information leaking, such as system time and OS versions. |
| Medium | Difficult | Moderate risk if exploited | Expired credentials, missing protection against accidental data loss, encryption settings, best-practice settings for audit tools. |
| High | Moderate | Very damaging if exploited | Large gaps in visibility, directly vulnerable infrastructure, misconfigurations directly related to data exposure. |
| Critical | Easy | Causes extreme damage if exploited | Public data/systems available, leaked access keys. |
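The static Severity field can also be complemented with a dynamic severity function in the detection's Python, as mentioned in "How rules and scheduled rules work" above. The sketch below is hypothetical: sourceIp is a placeholder field, and returning "DEFAULT" to fall back to the rule's static severity is our understanding of the behavior, so verify it against your Panther version's documentation.

```python
# Hypothetical sketch of a dynamic severity() function.
def severity(event):
    if event.get('sourceIp', '').startswith('10.'):
        # Internal traffic: keep the severity defined in the rule metadata
        return "DEFAULT"
    return "CRITICAL"
```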
Rule specification reference
Required fields are in bold.
| Field Name | Description | Expected Value |
| --- | --- | --- |
| **AnalysisType** | Indicates whether this analysis is a rule, scheduled_rule, policy, or global | Rules: rule; scheduled rules: scheduled_rule |
| **Enabled** | Whether this rule is enabled | Boolean |
| **Filename** | The path (with file extension) to the Python rule body | String |
| **RuleID** | The unique identifier of the rule | String |
| **LogTypes** | The list of log types to apply this rule to | List of strings |
| **Severity** | The severity of this rule | One of the following strings: Info, Low, Medium, High, or Critical |
| **ScheduledQueries** (scheduled rules only) | The list of scheduled query names to apply this rule to | List of strings |
| Description | A brief description of the rule | String |
| DedupPeriodMinutes | The time period (in minutes) during which similar events of an alert will be grouped together | 15, 30, 60, 180 (3 hours), 720 (12 hours), or 1440 (24 hours) |
| DisplayName | A friendly name to show in the UI and alerts. The RuleID will be displayed if this field is not set. | String |
| OutputIds | Static destination overrides. These will be used to determine how alerts from this rule are routed, taking priority over default routing based on severity. | List of strings |
| Reference | The reason this rule exists, often a link to documentation | String |
| Reports | A mapping of framework or report names to values this rule covers for that framework | Map of strings to list of strings |
| Runbook | The actions to be carried out if this rule returns an alert, often a link to documentation | String |
| SummaryAttributes | A list of fields that alerts should summarize | List of strings |
| Threshold | How many events need to trigger this rule before an alert will be sent | Integer |
| Tags | Tags used to categorize this rule | List of strings |
| Tests | Unit tests for this rule | List of maps |