Creating PyPanther Detections


Overview

PyPanther Detections are in closed beta starting with Panther version 1.108. Please share any bug reports and feature requests with your Panther support team.

You can import Panther-managed PyPanther Detections and make your own modifications, as well as create your own custom ones. After you've created detections, you can register, test, and upload them.

Before writing PyPanther Detections, you'll need to set up your environment. See Getting started using PyPanther Detections.

This page describes how to create PyPanther Detections in the CLI workflow. If you'd like to create PyPanther Detections in the Console instead, see Managing PyPanther Detections in the Panther Console.

High-level guidelines for creating PyPanther Detections

When working with PyPanther Detections in the CLI workflow:

  • A main.py file controls your entire detection configuration, outlining which rules to register, override configurations, and custom rule definitions. You can either:

    • (Recommended) Define your detections in various other files/folders, then import them into main.py (learn more in the PyPanther Detections Style Guide)

    • Define your detections in main.py

  • A PyPanther rule is defined in a single Python file. Within it, you can import Panther-managed (or your own custom) PyPanther rules and specify overrides. A single Python file can define multiple detections.

  • All PyPanther rules subclass the pypanther Rule class or a parent class of type Rule.

  • Rules must be registered to be tested and uploaded to your Panther instance.

  • All event object functions currently available in v1 detections are available in PyPanther Detections. These include: get(), deep_get(), deep_walk(), and udm().

  • All alert functions available in Python (v1) detections are available in PyPanther Detections, such as title() and severity(). See the Rule auxiliary/alerting function reference.

  • Use the pypanther type-ahead hints in your IDE for tasks like searching for available rules or viewing properties of classes.

Writing a custom PyPanther Detection

A "custom" PyPanther rule is one that you write completely from scratch—i.e., one that isn't built from a Panther-managed rule. Custom PyPanther rules are defined in a Python class that subclasses the pypanther Rule class. In this class, you must:

  • Define certain attributes, such as log_types

  • Define a rule() function

You can optionally:

  • Define any of the other alert functions, like title() or destinations()

  • Define additional attributes, such as threshold or dedup_period_minutes

The id attribute is only required for a rule if you plan to register it.

See the Rule property reference section for a full list of required and optional fields.

from pypanther import Rule, Severity, LogType

# Custom rule for a Panther-supported log type
class MyCloudTrailRule(Rule):
    id = "MyCloudTrailRule"
    tests = True
    log_types = [LogType.AWS_CLOUDTRAIL]
    default_severity = Severity.MEDIUM
    threshold = 50
    dedup_period_minutes = 1

    def rule(self, event) -> bool:
        return (
            event.get("eventType") == "AssumeRole"
            and 400 <= int(event.get("errorCode", 0)) <= 413
        )

Importing Panther-managed rules

You may want to import Panther-managed rules (into main.py or another file) to either register them individually as-is, set overrides on them, or subclass them. You can import Panther-managed rules directly (from the pypanther.rules module) or by using the get_panther_rules() function.

Panther-managed rules currently all have a -prototype suffix (e.g., AWS.Root.Activity-prototype). This is temporary, and will be removed in the future.

To import a Panther-managed rule directly using the rules module, you would use a statement like:

from pypanther.rules.github import GitHubAdvancedSecurityChange

The get_panther_rules() function can filter on any Rule class attribute, such as default_severity, log_types, or tags. When filtering, keys use AND logic, and values use OR logic.
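These AND/OR semantics can be sketched in plain Python. This is an illustrative stand-in for the documented matching behavior, not the real get_panther_rules() implementation; `matches` and the dict-based rule representation are hypothetical:

```python
# Sketch of the documented filter semantics: keys combine with AND,
# the values listed for a key combine with OR.
def matches(rule_attrs, **criteria):
    for key, wanted in criteria.items():
        wanted = wanted if isinstance(wanted, list) else [wanted]
        value = rule_attrs.get(key)
        values = value if isinstance(value, list) else [value]
        # OR across values: at least one wanted value must be present
        if not any(w in values for w in wanted):
            return False  # AND across keys: every key must match
    return True

rule = {"default_severity": "HIGH", "log_types": ["AWS.CloudTrail"]}
assert matches(rule, default_severity=["HIGH", "CRITICAL"])
assert not matches(rule, default_severity=["HIGH"], log_types=["Okta.SystemLog"])
```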

Get all Panther-managed rules using get_panther_rules():

from pypanther import get_panther_rules

all_rules = get_panther_rules()

Get Panther-managed rules with certain severities using get_panther_rules():

from pypanther import get_panther_rules, Severity

important_rules = get_panther_rules(
    default_severity=[
        Severity.HIGH,
        Severity.CRITICAL,
    ]
)

Get Panther-managed rules for certain log types using get_panther_rules():

from pypanther import get_panther_rules, LogType

cloudtrail_okta_rules = get_panther_rules(
    log_types=[
        LogType.AWS_CLOUDTRAIL,
        LogType.OKTA_SYSTEM_LOG
    ]
)

Get Panther-managed rules that meet multiple criteria using get_panther_rules():

from pypanther import get_panther_rules, Severity

rules_i_care_about = get_panther_rules(
    enabled=True,
    default_severity=[Severity.CRITICAL, Severity.HIGH],
    tags=["Configuration Required"],
)

See Using list comprehension for an example of how to use get_panther_rules() with advanced Python.

Applying overrides on existing rules

Once you've imported a Panther-managed rule, you can modify it using overrides or inheritance.

When making overrides on Panther-managed detections, it's recommended to:

  • Outside of main.py, store all of your overrides in apply_overrides() functions.

  • In main.py, call pypanther's apply_overrides() to apply each of your apply_overrides() functions.

Learn more about apply_overrides() in the PyPanther Detections Style Guide.

If your objective is to modify a rule's logic, it's recommended to use filters instead of overriding the rule() function itself. Learn more in Use filters instead of overriding rule().

Overriding single attributes

You can override a single rule attribute in a one-line statement using the override() function:

from pypanther import Severity
from pypanther.rules.aws_cloudtrail import AWSCloudTrailCreated

# Set rule severity to High
AWSCloudTrailCreated.override(default_severity=Severity.HIGH)

Overriding multiple attributes with the override function

It’s also possible to make multi-attribute overrides with the override() function:

from pypanther import Severity
from pypanther.rules.box import BoxContentWorkflowPolicyViolation

BoxContentWorkflowPolicyViolation.override(
    default_severity=Severity.HIGH,
    default_runbook="Check if other internal users hit this same violation"
)

Applying overrides on multiple PyPanther rules

To apply overrides on multiple rules at once, iterate over the collection using a for loop.

This could be useful, for example, when updating a certain attribute for all rules associated with a certain LogType.

from pypanther import get_panther_rules, Severity, LogType

rules = get_panther_rules(
    enabled=True,
    default_severity=[Severity.CRITICAL, Severity.HIGH],
    tags=["Configuration Required"],
)

# Define the destinations override
def aws_cloudtrail_destinations(self, event):
    if event.get('recipientAccountId') == '112233445566':
        # Name or UUID of a configured Slack Destination
        return ['AWS Notifications']
    # Suppress the alert; nothing is delivered
    return ['SKIP']

# Append to the Panther-managed rules' destinations and tags using extend()
for rule in rules:
    rule.extend(
        default_destinations=["Slack #security"],
        tags=["Production"],
    )

    # Update the destinations method for CloudTrail rules
    if LogType.AWS_CLOUDTRAIL in rule.log_types:
        rule.destinations = aws_cloudtrail_destinations

Extending list attributes on existing rules

When making modifications to an existing rule, you might want to add items to a list-type rule attribute (like tags, tests, include_filters, or exclude_filters) while preserving the existing list. Instead of overriding the attribute (using one of the methods in Applying overrides on existing rules), which would replace the existing list value, use the pypanther extend() function to append new values to the list attribute.

from pypanther.rules.box import BoxContentWorkflowPolicyViolation

BoxContentWorkflowPolicyViolation.extend(
    tags=["Production"],
    include_filters=[lambda event: event.get("env") == "prod"]
)

# Because BoxContentWorkflowPolicyViolation already has a "Box" tag, it will now 
# have two tags: "Production" and "Box"

# Because BoxContentWorkflowPolicyViolation does not already define include_filters, 
# it will now have the single filter above
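The append-not-replace behavior of extend() can be sketched in plain Python. This is an illustrative stand-in under the stated assumption that list attributes are concatenated, not pypanther's actual implementation; `PyPantherLikeRule` is hypothetical:

```python
# Sketch of the documented extend() behavior: list-valued attributes
# are appended to, rather than replaced.
class PyPantherLikeRule:
    tags = ["Box"]

    @classmethod
    def extend(cls, **kwargs):
        for name, new_items in kwargs.items():
            # Concatenate onto the existing list (empty if unset)
            setattr(cls, name, list(getattr(cls, name, [])) + list(new_items))

PyPantherLikeRule.extend(
    tags=["Production"],
    include_filters=[lambda e: e.get("env") == "prod"],
)
assert PyPantherLikeRule.tags == ["Box", "Production"]
assert len(PyPantherLikeRule.include_filters) == 1
```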

Creating include or exclude filters

PyPanther Detection filters let you exclude certain events from being evaluated by a rule. Filters are designed to be applied on top of existing rule logic (likely for Panther-managed PyPanther Detections you are importing). The Rule base class has include_filters and exclude_filters attributes, which each contain a list of functions that will be evaluated against the log event.

Each filter defines logic that is run before the rule() function, and the outcome of the filter determines whether or not the event should go on to be evaluated by the rule.

Common use cases for filters include:

  • To target only certain environments, like prod

  • To exclude events that are known false positives, due to a misconfiguration or other non-malicious scenario

There are two types of filters:

  • include_filters: If the filter returns True for an event, the event is evaluated by rule()

  • exclude_filters: If the filter returns True for an event, the event is dismissed (i.e., not evaluated by rule())
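These gating semantics can be sketched in plain Python. This is an illustrative sketch, not pypanther internals, and it assumes every include filter must pass and any matching exclude filter dismisses the event; `should_evaluate` is a hypothetical helper:

```python
# Sketch of how filters gate whether rule() sees an event.
def should_evaluate(event, include_filters=(), exclude_filters=()):
    # All include_filters must return True for the event to proceed
    if not all(f(event) for f in include_filters):
        return False
    # Any exclude_filter returning True dismisses the event
    if any(f(event) for f in exclude_filters):
        return False
    return True

include_filters = [lambda e: e.get("env") == "prod"]
exclude_filters = [lambda e: e.get("user") == "ci-bot"]

assert should_evaluate({"env": "prod", "user": "alice"}, include_filters, exclude_filters)
assert not should_evaluate({"env": "dev", "user": "alice"}, include_filters, exclude_filters)
assert not should_evaluate({"env": "prod", "user": "ci-bot"}, include_filters, exclude_filters)
```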

Examples as standalone functions:

from pypanther.rules.github import GitHubAdvancedSecurityChange

# Include only repos that are production
prod_repos = ["prod_repo1", "prod_repo2"]
GitHubAdvancedSecurityChange.extend(
    include_filters=[lambda e: e.get("repo") in prod_repos]
)

from pypanther.rules.github import GitHubAdvancedSecurityChange

# Exclude repos that are for development
dev_repos = ["dev_repo1", "dev_repo2"]
GitHubAdvancedSecurityChange.extend(
    exclude_filters=[lambda e: e.get("repo") in dev_repos]
)

Example as part of an inherited rule definition:

from pypanther.rules.github import GitHubAdvancedSecurityChange

class GitHubAdvancedSecurityChangeOverride(GitHubAdvancedSecurityChange):
    id = "GitHubAdvancedSecurityChangeOverride"
    # In a real-world scenario, likely only one of the below would be necessary
    # Extend the parent rule's include_filters (super() isn't available in a
    # class body, so reference the parent class directly)
    include_filters = GitHubAdvancedSecurityChange.include_filters + [
        lambda e: e.get("repo") in prod_repos
    ]
    # Override the parent rule's exclude_filters
    exclude_filters = [lambda e: e.get("repo") in dev_repos]
    ...

Filters can also be reused with a for loop to be applied to multiple rules:

rules = [
    # Panther-managed and custom rules
    ...
]

def prod_filter(event):
    return event.get("repo") in prod_repos

for rule in rules:
    rule.extend(include_filters=[prod_filter])

Ensuring necessary fields are set on configuration-required rules

Panther-managed rules that require some customer configuration before they are uploaded into a Panther environment may include a validate_config() function, which defines one or more conditions that must be met for the rule to pass the test command (and function properly). Most commonly, validate_config() verifies that some class variable, such as an allowlist or denylist, has been assigned a value. If the requirements included in validate_config() are not met, an exception will be raised when the pypanther test command is run (if the rule is registered).

Example:

from pypanther import Rule, LogType, Severity

class ValidateMyRule(Rule):
    id = "Validate.MyRule"
    log_types = [LogType.PANTHER_AUDIT]
    default_severity = Severity.HIGH

    allowed_domains: list[str] = []

    def rule(self, event):
        return event.get("domain") not in self.allowed_domains

    @classmethod 
    def validate_config(cls):
        assert (
            len(cls.allowed_domains) > 0
        ), "The allowed_domains field on your rule must be populated"

In this example, if allowed_domains is not assigned a non-empty list, an assertion error will be thrown during pypanther test.

To set this value, you can use a statement like:

ValidateMyRule.allowed_domains = ["example.com"]
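The mechanics of this pattern can be shown without pypanther at all. The class below is a hypothetical, self-contained sketch: a classmethod assertion that fails until the required class variable is populated, which is the same shape validate_config() takes above:

```python
# Standalone sketch of the validate_config() pattern: assert on a
# required class variable, fail fast if it hasn't been configured.
class ConfigRequiredRule:
    allowed_domains: list = []

    @classmethod
    def validate_config(cls):
        assert len(cls.allowed_domains) > 0, (
            "The allowed_domains field on your rule must be populated"
        )

# Before configuration: validate_config() raises an AssertionError
try:
    ConfigRequiredRule.validate_config()
except AssertionError as err:
    print(err)  # The allowed_domains field on your rule must be populated

# After configuration: validation passes silently
ConfigRequiredRule.allowed_domains = ["example.com"]
ConfigRequiredRule.validate_config()
```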

Creating PyPanther rules with inheritance

You can use inheritance to create rules that are subclasses of other rules (that are Panther-managed or custom).

It’s recommended to use inheritance when you’re creating a collection of rules that all share a number of characteristics—for example, rule() function logic, property values, class variables, or helper functions. In this case, it’s useful to create a base rule that all related rules inherit from.

For example, it may be useful to create a base rule for each LogType, from which all rules for that LogType are extended. Inheritance is commonly used with include/exclude filters—i.e., subclassed rules can define additional criteria an event must meet in order to be processed by the rule.

If you don't plan to register a base rule, it's not required to provide it an id property.

Example:

from pypanther import Rule, Severity

# Custom rule for a custom log type — the parent rule
class HostIDSBaseRule(Rule):
    # id not necessary since we're not uploading this parent rule
    log_types = ['Custom.HostIDS']
    default_severity = Severity.HIGH
    threshold = 1
    dedup_period_minutes = 6 * 60  # 6 hours

    def rule(self, event) -> bool:
        return event.get('event_name') == 'confirmed_compromise'

    def host_user_lookup(self, hostname):
        return 'groot'

    def title(self, event) -> str:
        return f"Confirmed compromise from hostname {event.get('hostname')}"

    def alert_context(self, event):
        user = self.host_user_lookup(event.get('hostname'))
        return {
            'hostname': event['hostname'],
            'time': event['p_event_time'],
            'user': user,
        }

# Inherited rule #1
class IDSCommandAndControl(HostIDSBaseRule):
    id = 'IDSCommandAndControl'
    threshold = 19

    # Filter on event_type (in addition to the base rule() function)
    include_filters = [lambda e: e.get('event_type') == 'c2']

    def title(self, event):
        return f"Confirmed c2 on host {event.get('hostname')}"

    # From the parent rule, inherits rule(), alert_context(), log_types,
    # default_severity, and dedup_period_minutes
    # From the pypanther Rule class, inherits other fields (like enabled)

# Inherited rule #2
class HostIDSMalware(HostIDSBaseRule):
    id = 'HostIDSMalware'
    threshold = 2
    default_severity = Severity.CRITICAL

    # Filter on event_type (in addition to the base rule() function)
    include_filters = [lambda e: e.get('event_type') == 'malware']

    def title(self, event):
        return f"Confirmed malware on host {event.get('hostname')}"

    # From the parent rule, inherits rule(), alert_context(), log_types,
    # and dedup_period_minutes
    # From the pypanther Rule class, inherits other fields (like enabled)

Using advanced Python

Because PyPanther rules are fully defined in Python, you can use its full expressiveness when customizing your detections.

Calling super()

For more advanced use cases, you can supplement the logic in functions defined by the parent rule.

from pypanther import Severity

# Creating an inherited rule (AnExistingRule is a placeholder for any
# Panther-managed or custom rule)
class MyRule(AnExistingRule):
    def alert_context(self, event):
        # Preserve the parent rule's alert context and extend it with a new field
        context = super().alert_context(event)
        context["new field"] = "new_value"
        return context

    def severity(self, event):
        # Conditionally increase the parent rule's severity
        if event.get("env") == "prod":
            return Severity.CRITICAL
        return super().severity(event)

Using list comprehension

You can use Python’s list comprehension functionality to create a new list based on an existing list with condensed syntax. This may be particularly useful when you want to filter a list of detections fetched using get_panther_rules().

# Collection of rules that have more than one LogType
rules = [rule for rule in get_panther_rules() if len(rule.log_types) > 1]

# Collection of rules that do not require configuration
rules = [rule for rule in get_panther_rules() if "Configuration Required" not in rule.tags]
