Writing Python Detections

Construct Python detections in the Console or CLI workflow


Overview

You can write your own Python detections in the Panther Console or locally, following the CLI workflow. When writing Python detections, try to follow these best practices, and remember that certain alert fields can be set dynamically. Rules written in Python can be used in detection derivation.

You can alternatively use the no-code detection builder in the Console to create rules, or write them locally as Simple Detections. If you aren't sure whether to write detections locally as Simple Detections or Python detections, see the Using Python vs. Simple Detections YAML section.

Before you write a new Python detection, see if there's a Panther-managed detection that meets your needs (or almost meets your needs; Panther-managed rules can be tuned with Inline Filters). Leveraging a Panther-managed detection not only saves you the effort of writing one yourself, but also provides the ongoing benefit of continuous updates to core detection logic as Panther releases new versions.

It is highly discouraged to make external API requests from within your detections in Panther. In general, detections are processed at a very high scale, and making API requests can overload receiving systems and cause your rules to exceed the 15-second runtime limit.

How to create detections in Python

How to create a rule in Python

You can write a Python rule in both the Panther Console and CLI workflow.

Creating a rule in Python in the Console
  1. In the left-hand navigation bar of your Panther Console, click Detections.

  2. Click Create New.

  3. In the Select Detection Type modal, choose Rule.

  4. On the create page, configure your rule:

    • Name: Enter a descriptive name for the rule.

    • ID (optional): Click the pen icon and enter a unique ID for your rule.

    • In the upper-right corner, the Enabled toggle will be set to ON by default. If you'd like to disable the rule, flip the toggle to OFF.

    • In the For the Following Source section:

      • Log Types: Select the log types this rule should apply to.

    • In the Detect section:

      • In the Rule Function text editor, write a Python rule function to define your detection.

        • For detection templates and examples, see the panther-analysis GitHub repository.

    • In the Create Alert section, set the Create Alert ON/OFF toggle. This indicates whether an alert should be created when there are matches, or only a Signal. If you set this toggle to ON:

      • Severity: Select a severity level for the alerts triggered by this detection.

      • In the Optional Fields section, optionally provide values for the following fields:

        • Description: Enter additional context about the rule.

        • Runbook: Enter the procedures and operations relating to this rule.

          • To see examples of runbooks for built-in rules, see Alert Runbooks.

          • It's recommended to provide a descriptive runbook, as Panther AI alert triage will take it into consideration.

        • Reference: Enter an external link to more information relating to this rule.

        • Destination Overrides: Choose destinations to receive alerts for this detection, regardless of severity. Note that destinations can also be set dynamically, in the rule function. See Routing Order Precedence to learn more about routing precedence.

        • Deduplication Period and Events Threshold: Enter the deduplication period and threshold for rule matches. To learn how deduplication works, see Deduplication.

        • Summary Attributes: Enter the attributes you want to showcase in the alerts that are triggered by this detection.

          • To use a nested field as a summary attribute, use the Snowflake dot notation in the Summary Attribute field to traverse a path in a JSON object:

            <column>:<level1_element>.<level2_element>.<level3_element>

            The alert summary will then be generated for the referenced object in the alert.

          • For more information on Alert Summaries, see Assigning and Managing Alerts.

        • Custom Tags: Enter custom tags to help you understand the rule at a glance (e.g., HIPAA.)

        • In the Framework Mapping section:

          1. Click Add New to enter a report.

          2. Provide values for the following fields:

            • Report Key: Enter a key relevant to your report.

            • Report Values: Enter values for that report.

    • In the Test section:

      • In the Unit Test section, click Add New to create a test for the rule you defined in the previous step.

  5. In the upper-right corner, click Save.

After you have created a rule, you can modify it using Inline Filters.

Creating a rule in Python in the CLI workflow

If you're writing detections locally (instead of in the Panther Console), we recommend managing your local detection files in a version control system like GitHub or GitLab.

We advise that you start your custom detection content by creating either a public fork or a private clone of Panther's open-source panther-analysis repository.

Folder setup

If you group your rules into folders, each folder name must contain the string rules in order for them to be found during upload (using either the Panther Analysis Tool (PAT) or the bulk uploader in the Console).

We recommend grouping rules into folders based on log/resource type, e.g., suricata_rules or aws_s3_policies. You can use the panther-analysis repo as a reference.
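For example, a local detection repository might be organized like this (the folder and file names are illustrative):

aws_cloudtrail_rules/
    aws_console_root_login.py
    aws_console_root_login.yml
suricata_rules/
    suricata_dns_passthrough.py
    suricata_dns_passthrough.yml
aws_s3_policies/
    aws_s3_bucket_encryption.py
    aws_s3_bucket_encryption.yml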

File setup

Each rule and scheduled rule consists of:

  • A Python file (a file with a .py extension) containing your detection logic.

  • A YAML specification file (a file with a .yml extension) containing metadata attributes of the detection.

    • By convention, we give this file the same name as the Python file.

Rules are Python functions to detect suspicious behaviors. Returning a value of True indicates suspicious activity, which triggers an alert.

  1. Write your rule and save it (in your folder of choice) as my_new_rule.py:

    def rule(event):
        return 'prod' in event.get('hostName', '')
  2. Create a metadata file using the template below:

    AnalysisType: rule
    DedupPeriodMinutes: 60 # 1 hour
    DisplayName: Example Rule to Check the Format of the Spec
    Enabled: true
    Filename: my_new_rule.py
    RuleID: Type.Behavior.MoreContext
    Severity: High
    LogTypes:
      - LogType.GoesHere
    Reports:
      ReportName (like CIS, MITRE ATT&CK):
        - The specific report section relevant to this rule
    Tags:
      - Tags
      - Go
      - Here
    Description: >
      This rule exists to validate the CLI workflows of the Panther CLI
    Runbook: >
      First, find out who wrote this spec format, then notify them with feedback.
    Reference: https://www.a-clickable-link-to-more-info.com

When this rule is uploaded, each of the fields you would normally populate in the Panther Console will be auto-filled. See the Python rule specification reference below for a complete list of required and optional fields.

How to create a scheduled rule in Python

You can write a Python scheduled rule in both the Panther Console and CLI workflow.

Creating a scheduled rule in Python in the Console
  1. In the left-hand navigation bar of your Panther Console, click Detections.

  2. Click Create New.

  3. In the Select Detection Type modal, choose Scheduled Rule.

  4. On the create page, configure your scheduled rule:

    • Name: Enter a descriptive name for the scheduled rule.

    • ID (optional): Click the pen icon and enter a unique ID for your scheduled rule.

    • In the upper-right corner, the Enabled toggle will be set to ON by default. If you'd like to disable the scheduled rule, flip the toggle to OFF.

    • In the For the Following Scheduled Queries section:

      • Scheduled Queries: Select one or more Scheduled Searches this scheduled rule should apply to.

    • In the Detect section:

      • In the Rule Function text editor, write a Python rule function to define your detection.

        • If all your filtering logic is already taken care of in the SQL of the associated scheduled query, you can configure the rule function to simply return True for each row:

          def rule(event):
              return True

        • For detection templates and examples, see the panther-analysis GitHub repository.
      • In the Optional Fields section, optionally provide values for the following fields:

        • Description: Enter additional context about the rule.

        • Runbook: Enter the procedures and operations relating to this rule.

        • Reference: Enter an external link to more information relating to this rule.

        • Summary Attributes: Enter the attributes you want to showcase in the alerts that are triggered by this detection.

          • To use a nested field as a summary attribute, use the Snowflake dot notation in the Summary Attribute field to traverse a path in a JSON object:

            <column>:<level1_element>.<level2_element>.<level3_element>

        • Custom Tags: Enter custom tags to help you understand the rule at a glance (e.g., HIPAA.)

        • In the Framework Mapping section:

          1. Click Add New to enter a report.

          2. Provide values for the following fields:

            • Report Key: Enter a key relevant to your report.

            • Report Values: Enter values for that report.

    • In the Test section:

      • In the Unit Test section, click Add New to create a test for the rule you defined in the previous step.

  5. In the upper-right corner, click Save.

Once you've clicked Save, the scheduled rule will become active. The rows returned by the associated scheduled query (at the interval defined in the query) will be run through the scheduled rule (if, that is, any rows are returned).

Creating a scheduled rule in Python in the CLI workflow

If you're writing detections locally (instead of in the Panther Console), we recommend managing your local detection files in a version control system like GitHub or GitLab.

Folder setup

If you group your rules into folders, each folder name must contain the string rules in order for them to be found during upload (using either PAT or the bulk uploader in the Console).

File setup

Each scheduled rule consists of:

  • A Python file (a file with a .py extension) containing your detection logic.

  • A YAML specification file (a file with a .yml extension) containing metadata attributes of the detection.

    • By convention, we give this file the same name as the Python file.

Scheduled rules allow you to analyze the output of a scheduled search with Python. Returning a value of True indicates suspicious activity, which triggers an alert.

  1. Write your query and save it as my_new_scheduled_query.yml:

    AnalysisType: scheduled_query
    QueryName: My New Scheduled Query Name
    Enabled: true
    Tags:
      - Optional
      - Tags
    Description: >
      An optional Description
    Query: 'SELECT * FROM panther_logs.aws_cloudtrail LIMIT 10'
    Schedule:
      # Note: CronExpression and RateMinutes are mutually exclusive, only
      # configure one or the other
      CronExpression: '0 * * * *'
      # RateMinutes: 1
      TimeoutMinutes: 1
  2. Write your rule and save it as my_new_rule.py:

    # Note: See an example rule for more options
    # https://github.com/panther-labs/panther-analysis/blob/master/templates/example_rule.py
    
    def rule(_):
        # Note: You may add additional logic here
        return True
  3. Create a metadata file and save it as my_new_schedule_rule.yml:

    AnalysisType: scheduled_rule
    Filename: my_new_rule.py 
    RuleID: My.New.Rule
    DisplayName: A More Friendly Name
    Enabled: true
    ScheduledQueries:
      - My New Scheduled Query Name
    Tags:
      - Tag
    Severity: Medium
    Description: >
      An optional Description
    Runbook: >
      An optional Runbook 
    Reference: An optional reference.link 
    Tests:
      -
        Name: Name 
        ExpectedResult: true
        Log:
          {
            "JSON": "string"
          }

When this scheduled rule is uploaded, each of these files will connect a scheduled query to a rule, and the fields you would normally populate in the Panther Console will be auto-filled. See the Python rule specification reference below for a complete list of required and optional fields.

How to create a policy in Python

To learn how to create a policy, see the How to write a policy instructions on Policies.
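For orientation only, here is a minimal sketch of a Python policy body. Policies evaluate cloud resource attributes rather than log events; the Versioning attribute below is an illustrative assumption, not a reference to a specific Panther-managed resource schema:

def policy(resource):
    # Returning True marks the resource as compliant;
    # returning False marks it as failing the policy (which can generate an alert).
    # "Versioning" is a hypothetical resource attribute used for illustration.
    return resource.get("Versioning") == "Enabled"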

Python detection syntax

A local Python detection is made up of two files: a Python file and a YAML file. When a Python detection is created in the Panther Console, there is only a Python text editor (not a YAML one); the keys listed under "The YAML file can contain," below, are instead set in fields in the user interface.

The Python file can contain:

  • Detection logic:

    def rule(event): # or def policy(resource):

  • Alert functions (dynamic):

    def severity(event):
    def title(event):
    def dedup(event):
    def destinations(event):
    def runbook(event):
    def reference(event):
    def description(event):
    def alert_context(event):

The YAML file can contain:

  • Filter key:

    InlineFilters:

  • Metadata keys:

    AnalysisType: # rule, scheduled_rule, or policy
    Enabled:
    Filename:
    RuleID: # or PolicyID:
    LogTypes:
    Reports:
    Tags:
    Tests:
    ScheduledQueries: # only applicable to scheduled rules
    Suppressions: # only applicable to policies
    CreateAlert: # not applicable to policies

  • Alert keys (static):

    Severity:
    Description:
    DedupPeriodMinutes:
    Threshold:
    DisplayName:
    OutputIds:
    Reference:
    Runbook:
    SummaryAttributes:

Basic Python rule structure

Only a rule() function and the YAML keys shown below are required for a Python rule. Additional Python alert functions, however, can make your alerts more dynamic, and additional YAML keys are available; see the Python rule specification reference below.

rule.py

def rule(event):
    if event.get("Something"):
        return True
    return False

rule.yml

AnalysisType: rule
Enabled: true
Filename: rule.py
RuleID: my.rule
LogTypes:
    - Some.Schema
Severity: INFO

For more templates, see the panther-analysis repo on GitHub.

InlineFilters

Learn more about using InlineFilters in Python rules on Modifying Detections with Inline Filters.

Alert functions in Python detections

Panther's detection auxiliary functions are Python functions that control analysis logic, the generated alert title, event grouping, alert routing, and metadata overrides. Rules are customizable and can import from standard Python libraries or global helpers.

Applicable to both rules and policies, each function below takes a single argument of event (rules) or resource (policies). Advanced users may define functions, variables, or classes outside of the functions defined below.

Each of the alert functions below is optional, but can add dynamic context to your alerts. If you are using alert deduplication, the first event to match the detection is used as the parameter for these alert functions.

severity()
  • Description: The level of urgency of the alert
  • Default value: In YAML: Severity key. In Console: Severity field.
  • Return value: INFO, LOW, MEDIUM, HIGH, CRITICAL, or DEFAULT

title()
  • Description: The generated alert title
  • Default value: In YAML: DisplayName > RuleID or PolicyID. In Console: Name field > ID field.
  • Return value: String

dedup()
  • Description: The string to group related events with, limited to 1000 characters
  • Default value: In Python/YAML: title() > DisplayName > RuleID or PolicyID. In Console: title() > Name field > ID field.
  • Return value: String

alert_context()
  • Description: Additional context to pass to the alert destination(s)
  • Return value: Dict[String: Any]

description()
  • Description: An explanation about why the rule exists
  • Default value: In YAML: Description key. In Console: Description field.
  • Return value: String

reference()
  • Description: A reference URL to an internal document or online resource about the rule
  • Default value: In YAML: Reference key. In Console: Reference field.
  • Return value: String

runbook()
  • Description: A list of instructions to follow once the alert is generated
  • Default value: In YAML: Runbook key. In Console: Runbook field.
  • Return value: String

destinations()
  • Description: The label or ID of the destinations to specifically send alerts to. An empty list will suppress all alerts.
  • Default value: In YAML: OutputIds key. In Console: Destination Overrides field.
  • Return value: List[Destination Name/ID]

severity

In some scenarios, you may need to upgrade or downgrade the severity level of an alert. The severity levels of an alert can be mapped to INFO, LOW, MEDIUM, HIGH, CRITICAL, or DEFAULT. Return DEFAULT to fall back to the statically defined rule severity.

The severity string is case insensitive, meaning you can return, for example, Critical or default, depending on your style preferences.

In the example below, if an API token has been created, a HIGH severity alert is generated—otherwise an INFO level alert is generated:

def severity(event):
    if event.get('eventType') == 'system.api_token.create':
        return "HIGH"
    return "INFO"

Example using DEFAULT:

def severity(event):
    if event.get('eventType') == 'system.api_token.create':
        return "HIGH"
    return "DEFAULT"

title

The title() function is optional, but it is recommended to include it to provide additional context in an alert.

In the example below, the log type, username, and a static string are sent to the alert destination. The function checks to see if the event is related to the AWS.CloudTrail log type and, if so, appends the AWS account name to the title.

Example:

def title(event):
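    # Note: lookup_aws_account_name is assumed to be a Global Helper function
    # available in your environment; it is not defined in this snippet.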
    # use unified data model field in title
    log_type = event.get("p_log_type")
    title_str = (
        f"{log_type}: User [{event.udm('actor_user')}] has exceeded the failed logins threshold"
    )
    if log_type == "AWS.CloudTrail":
        title_str += f" in [{lookup_aws_account_name(event.get('recipientAccountId'))}]"
    return title_str

dedup

Deduplication is the process of grouping related events into a single alert to prevent receiving duplicate alerts. Events triggering the same detection that also share a deduplication string, within the deduplication period, are grouped together in a single alert. The dedup function is one way to define a deduplication string. It is limited to 1000 characters.

Example:

def dedup(event):
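	# Note: helper_strip_role_session_id is assumed to be a custom Global Helper
	# that strips the session ID suffix from an assumed-role ARN.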
	user_identity = event.get("userIdentity", {})

	if user_identity.get("type") == "AssumedRole":
		return helper_strip_role_session_id(user_identity.get("arn", ""))

	return user_identity.get("arn")

destinations

By default, alerts are sent to specific destinations based on severity level or log type. Each detection can override this default and send its alerts to one or more specific destinations. In some scenarios a destination override is needed to express more advanced routing criteria based on the logic of the rule.

Example:

A rule used for multiple log types utilizes the destinations function to reroute the alert to another destination if the log type is "AWS.CloudTrail". If the log type is not CloudTrail, the alert is suppressed (sent to no destination) by returning ["SKIP"].

def destinations(event):
    if event.get("p_log_type") == "AWS.CloudTrail":
        return ["slack-security-alerts"] ### Name or UUID of destination
    # Do not send alert to an external destination
    return ["SKIP"]

alert_context

This function allows the detection to pass any event details as additional context, such as usernames, IP addresses, or success/failure, to the alert destination(s).

Values included in the alert context dictionary must be JSON-compliant. Examples of non-compliant values include Python's nan, inf, and -inf.

Example:

The code below returns all event data in the alert context.

def rule(event):
    return (
        event.get("actionName") == "UPDATE_SAML_SETTINGS"
        and event.get("actionResult") == "SUCCEEDED"
    )

def alert_context(event):
    return {
        "user": event.udm("actor_user"),
        "ip": event.udm("source_ip")
    }

runbook, reference, and description

These functions can provide additional context around why an alert was triggered and how to resolve the related issue. It's recommended to provide a descriptive runbook, as Panther AI alert triage will take it into consideration. For example:

def runbook(event):
    return f"""
    Query CloudTrail activity from the new access key ({event.deep_get("responseElements", 
    "accessKey", "accessKeyId", default="key not found")}) at least 2 hours after the alert was triggered 
    and check for data access or other privilege escalation attempts using the aws_cloudtrail table.
    """

This would produce a runbook like the following:

Query CloudTrail activity from the new access key (AKIA5FCD6LZQR7OPYQHF) at least 2 hours after the alert was triggered and check for data access or other privilege escalation attempts using the aws_cloudtrail table.

The example below dynamically provides a link within the reference field in an alert.

def reference(event):
    log_type = event.get("p_log_type")
    if log_type == "OnePassword.SignInAttempt":
        return "https://link/to/resource"
    elif log_type == "Okta.SystemLog":
        return "https://link/to/resource/2"
    else:
        return "https://default/link"

Event object functions

In a Python detection, the rule() function and all dynamic alert functions take in a single argument: the event object. This event object has built-in functions to enable simple extraction of event values.

get()

Function signature
def get(self, key, default=None) -> Any:

Use get() to access a top-level event field. You can provide a default value that will be returned if the key is not found. It is also possible to access top-level fields using deep_get() and deep_walk(); see Accessing top-level fields safely, below.

Example:

Example event
{
  "key": "value"
}
Using get()
def rule(event):
    return event.get("key") == "value"

# The above would return true

deep_get()

Function signature
def deep_get(self, *keys: str, default: Any = None) -> Any:

Use deep_get() to return the value of a key that is nested within Python dictionaries. This function is also represented as a global helper, but for convenience it is recommended to use the event object version. If the value you need to retrieve lives within a list, use deep_walk() instead.

Example:

Given an event with the following structure

Example event
{
  "object": {
    "nested": {
       "key": "here"
      }
   }
 }
Using deep_get()
def rule(event):
    return event.deep_get("object", "nested", "key") == "here"
    
# The above would return true

deep_walk()

Function signature
def deep_walk(
        self, *keys: str, default: Optional[str] = None, return_val: str = "all"
    ) -> Union[Optional[Any], Optional[List[Any]]]:

Use deep_walk() to return values associated with keys that are deeply nested in Python dictionaries, which may contain any number of dictionaries or lists. If multiple event fields match, a list of matches is returned; if only one match is made, the value of that match is returned. This function is also represented as a global helper, but for convenience it is recommended to use the event object version.

Example:

Example event
{
  "object": {
    "nested": {
       "list": [
          {
             "key": "first"
          },
          {
             "key": "second"
          }
         ]
      }
   }
 }
Using deep_walk()
def rule(event):
    return "first" in event.deep_walk("object", "nested", "list", "key", default=[])

# The above would return true

lookup()

Function signature
def lookup(self, lookup_table_name: str, lookup_key: str) -> Any:

The lookup() function lets you dynamically access data from Custom Lookup Tables and Panther-managed Enrichment Providers from within your detections. It may be useful if your incoming logs don't contain an exact match for a value in your Lookup Table's primary key column; you can use Python to modify an event value before passing it into lookup() to fetch enrichment data.

lookup() takes two arguments:

  • The name of the Lookup Table

    • The Lookup Table name passed to lookup() must be as it appears on the Enrichment Providers or Lookup Tables pages in the Panther Console. This name may differ syntactically from how it appears in a search query; for example, My-Custom-LUT instead of my_custom_lut.

  • A Lookup Table primary key

If a match is found in the Lookup Table for the provided key, the full Lookup Table row is returned as a Python dictionary. If no match is found, None is returned.

The lookup() function is different from "automatic" event enrichment, which happens when the value of an event field designated as a Selector exactly matches a value in the Lookup Table's primary key column. In that case, the Lookup Table data is appended to the event's p_enrichment field. Learn more in How is data matched between logs and Lookup Tables? on Custom Lookup Tables. If you are using "automatic" enrichment in this fashion, access the nested enrichment data using deep_get() instead.
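As a rough sketch of the difference, automatically appended enrichment data can be read from the p_enrichment field with deep_get(). The Lookup Table name (user_roles) and selector field name (email) below are illustrative assumptions:

def rule(event):
    # p_enrichment is structured as: p_enrichment -> <Lookup Table name> -> <selector field> -> <matched row>
    role = event.deep_get("p_enrichment", "user_roles", "email", "role", default="")
    return role != "admin"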

Example using lookup():

# Imagine you have a Lookup Table named user_roles with the following entries:
# row 1: {"id": "your@email.com", "role": "admin"}
# row 2: {"id": "vistor@email.com", "role": "guest"}

# In this rule, we want to return True if the user has a non-admin role
# We want to fetch role from the Lookup Table, but the event doesn't 
# contain the Lookup Table's primary key (the email address)
def rule(event):
    lookup_table_name = "user_roles"
    # On the event, we *do* have access to the username, from which we can
    # generate the email address
    user_name = event.get("username", "").lower()
    lookup_key = f"{user_name}@email.com" # Dynamically compose the lookup key, or "Selector"
    
    lookup_data = event.lookup(lookup_table_name, lookup_key)
    # If a match occurs, `lookup_data` will contain the full row of data
    # Otherwise it will return None
    
    if (lookup_data and lookup_data.get("role") != "admin") or lookup_data is None:
        return True
        
    return False

Unit testing detections that use lookup()

When unit tests are run, lookup() does not retrieve live data. To emulate the lookup functionality, add a _mocked_lookup_data_ field in the event payload of each unit test to mock the Lookup Table data. You cannot use the enrich test data button or CLI command with lookup().

_mocked_lookup_data_ should be structured like the following example:

Mocking event.lookup()
{
  "_mocked_lookup_data_": {
     "user_roles": { # This key is the name of your Lookup Table
         # The keys in this object should be the Lookup Table key
         # The values in this object should be the Lookup Table data object
         "your@email.com": {"id": "your@email.com", "role": "admin"},
         "vistor@email.com": {"id": "vistor@email.com", "role": "guest"}
      }
   }
}

If you do not specify a _mocked_lookup_data_ field in your unit test, attempts to call lookup() will return None/null.

udm()

Function signature
def udm(self, *key: str, default: Any = None) -> Any:

The behavior of udm() will change when Panther Data Models are removed on September 29, 2025.

The udm() function is primarily intended to allow you to access Data Model and Core Field values, but it can also be used to access event fields. It resolves the key you pass in as follows:

  1. The function first checks whether a Data Model mapping is defined for the key passed in to udm(). If so, the value of the Data Model mapping is returned and the function does not move on to steps 2 or 3. This is true even if the event being evaluated does not contain the key path defined in the Data Model mapping; in that case, null is returned.

  2. If no Data Model mapping is defined for the key, the function then checks whether the event's p_udm struct contains a value with that name. If so, that Core Field value is returned. For udm() to stop here, the event must actually contain the key within its p_udm struct; if a Core Field mapping has been defined with that key but the event does not contain the mapping's associated field path, the event's p_udm struct will not contain the Core Field, and udm() moves on to step 3.

  3. If there is no Data Model mapping defined nor Core Field present for the key, the function checks whether there is an event field with that name. If so, its value is returned. In this case, udm() checks all event fields, even nested ones; its behavior is analogous to deep_get().

This means you can only use udm() to access a Core Field value if there is not also a Data Model mapping defined with the same key, and you can only use it to access an event field if there is neither a Data Model mapping defined nor a Core Field present with the same key.

Here is how the udm() function is used:

Sample udm usage
# Example usage when operating on data models 
def rule(event):
  return event.udm('field_on_data_model')
  
# Example usage when operating on data models with default
def rule(event):
  # The default parameter is only respected when using path-based mappings on
  # your data model. If your data model maps to a function, whatever value your
  # function returns will be respected
  return event.udm('field_on_data_model', default='')
  
# Example usage when operating on p_udm fields
def rule(event):
  # Note: When operating on p_udm fields, the udm function operates like our
  # deep_get function, allowing you to reference nested fields
  #
  # If the deep_get syntax is used and the top most field belongs to a mapped
  # data model field, udm will look at the p_udm field instead
  return event.udm('field_on_udm', 'nested_field', default='')
  
Data Model example
Mappings:
  - Name: source_ip
    Path: nested.srcIp
Example event
{
  "nested": {
    "srcIp": "127.0.0.1"
  }
}
Using udm()
def rule(event):
    return event.udm("source_ip") == "127.0.0.1"

# The above would return true

Python best practices

The Python community publishes resources, such as the Python Enhancement Proposals, on how to cleanly and effectively write and style your Python code. For example, you can use autopep8 to automatically ensure that your written detections all follow a consistent style.

Available Python libraries

The following Python libraries are available to be used in Panther, in addition to boto3, which is provided by AWS Lambda:

  • jsonpath-ng 1.5.2: JSONPath Implementation (Apache v2)
  • policyuniverse 1.3.3.20210223: Parse AWS ARNs and Policies (Apache v2)
  • requests 2.23.0: Easy HTTP Requests (Apache v2)

Python detection writing best practices

Writing tests for your detections

Before enabling new detections, it is recommended to write tests that define scenarios where alerts should or should not be generated. Best practice dictates at least one positive and one negative test to ensure the most reliability.
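For example, in the CLI workflow a rule's YAML specification might include one positive and one negative test, as sketched below (the event fields in the Log payloads are illustrative):

Tests:
  - Name: Access granted triggers the rule
    ExpectedResult: true
    Log:
      {
        "event_type": "ACCESS_GRANTED"
      }
  - Name: Other event types do not trigger the rule
    ExpectedResult: false
    Log:
      {
        "event_type": "LOGIN"
      }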

Casing for event fields

Lookups for event fields are not case sensitive. event.get("Event_Type") or event.get("event_type") will return the same result.
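For example, both of the following lookups reference the same field and return the same value for any event containing an event_type field:

def rule(event):
    # Field-name lookups are not case sensitive, so these two calls are equivalent
    return event.get("Event_Type") == event.get("event_type")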

Understanding top level fields and nested fields

Top-level fields represent the parent fields in a nested data structure. For example, a record may contain a field called user under which there are other fields such as ip_address. In this case, user is the top-level field, and ip_address is a nested field underneath it.

Nesting can occur many layers deep, and so it is valuable to understand the schema structure and know how to access a given field for a detection.
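For instance, the user and ip_address example above corresponds to an event shaped like this (values are illustrative):

{
  "user": {
    "name": "alice",
    "ip_address": "192.0.2.10"
  }
}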

Accessing top-level fields safely

Basic rules match a field's value in the event, and a best practice to avoid errors is to leverage Python's built-in get() function.

The example below is a best practice because it leverages a get() function. get() will look for a field, and if the field doesn't exist, it will return None instead of an error, which will result in the detection returning False.

def rule(event):
    return event.get('field') == 'value'

In the example below, if the field exists, the value of the field will be returned. Otherwise, False will be returned:

def rule(event):
    if event.get('field'):
        return event.get('field')
    return False

Bad practice example

The rule definition below is bad practice because it accesses the field directly with bracket notation. If the field doesn't exist, Python will throw a KeyError:

def rule(event):
    return event['field'] == 'value'

Using Global Helper functions

Once many detections are written, a set of patterns and repeated code will begin to emerge. This is a great use case for Global Helper functions, which provide a centralized location for this logic to exist across all detections.
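As a sketch of the pattern, a shared check can live in a global helper module and be imported by many detections. The module name my_org_helpers and function is_internal_ip below are hypothetical, not Panther-managed helpers:

# Contents of a hypothetical Global Helper module, my_org_helpers.py
def is_internal_ip(ip: str) -> bool:
    # Treat 10.x.x.x and 192.168.x.x addresses as internal
    return ip.startswith("10.") or ip.startswith("192.168.")

# A rule that imports the shared helper
from my_org_helpers import is_internal_ip

def rule(event):
    # Only alert on activity coming from outside the internal network
    return not is_internal_ip(event.get("ip_address", ""))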

Accessing nested fields safely

If you'd like to access a field nested deeply within an event, use the deep_get() and deep_walk() functions available on the event object. These functions are also represented as Global Helper functions, but for convenience, it's recommended to use the event object version instead.

Example:

AWS CloudTrail logs nest the type of user accessing the console underneath userIdentity. Here is a snippet of a JSON CloudTrail root activity log:

{
  "eventVersion": "1.05",
  "userIdentity": {
    "type": "Root",
    "principalId": "1111",
    "arn": "arn:aws:iam::123456789012:root",
    "accountId": "123456789012",
    "userName": "root"
  },
  ...
}

See how to check the value of type safely using both forms of deep_get():

Checking the event value using the event object deep_get() function:

def rule(event):
    return event.deep_get("userIdentity", "type") == "Root"

Checking the event value using the deep_get() Global Helper function:

from panther_base_helpers import deep_get

def rule(event):
    return deep_get(event, "userIdentity", "type") == "Root"

Checking fields for specific values

You may want to know when a specific event has occurred. If it did occur, then the detection should trigger an alert. Since Panther stores everything as normalized JSON, you can check the value of a field against the criteria you specify.

For example, to detect the action of granting Box technical support access to your Box account, the Python below would be used to match events where the event_type equals ACCESS_GRANTED:

def rule(event):
    return event.get("event_type") == "ACCESS_GRANTED"

If the event_type field has a value equal to ACCESS_GRANTED, the rule function returns True and an alert is created.

Checking fields for Integer values

You may need to compare the value of a field against integers. This allows you to use any of Python’s built-in comparisons against your events.

For example, you can create an alert based on HTTP response status codes:

# returns True if 'status_code' equals 404
def rule(event):
    if event.get("status_code"):
        return event.get("status_code") == 404
    else:
        return False

# returns True if 'status_code' greater than 400
def rule(event):
    if event.get("status_code"):
        return event.get("status_code") > 404
    else:
        return False

Reference:

Using the Universal Data Model

Data Models provide a way to configure a set of unified fields across all log types. By default, Panther comes with built-in Data Models for several log types. Custom Data Models can be added in the Panther Console or via the Panther Analysis Tool.

event.udm() can only be used with log types that have an existing Data Model in your Panther environment.

Example:

import panther_event_type_helpers as event_type

def rule(event):
    # filter events on unified data model field ‘event_type’
    return event.udm("event_type") == event_type.FAILED_LOGIN

References:

Using multiple conditions

The and keyword is a logical operator used to combine conditional statements. It is often necessary to match multiple fields in an event using the and keyword. When using and, all statements must be true: "string_a" == "this" and "string_b" == "that"

Example:

To track down successful root user access to the AWS console you need to look at several fields:

from panther_base_helpers import deep_get

def rule(event):
    return (event.get("eventName") == "ConsoleLogin" and
            deep_get(event, "userIdentity", "type") == "Root" and
	    deep_get(event, "responseElements", "ConsoleLogin") == "Success")

The or keyword is a logical operator and is used to combine conditional statements. When using or, either of the statements may be true: "string_a" == "this" or "string_b" == "that"

Example:

This example detects if the field contains either Port 80 or Port 22:

# returns True if 'port_number' is 80 or 22
def rule(event):
    return event.get("port_number") == 80 or event.get("port_number") == 22

Searching values in lists

Comparing event values against a list (containing, for example, IP addresses or users) is quick in Python. It's a common pattern to set your rule logic to not match when an event value also exists in the list. This can help reduce false positives for known behavior in your environment.

When checking whether event values are in some collection, it's recommended to use a Python set. Sets offer faster membership checks than lists and tuples, which require iterating through each item in the collection to check for inclusion.

If the set against which you're performing the comparison is static, it's recommended to define it at the global level, rather than inside the rule() function. Global variables are initialized only once per Lambda invocation. Because a single Lambda invocation can process multiple events, a global variable is usually more efficient than initializing it each time rule() is invoked.

Example:

# Set - Recommended over tuples and lists for performance
ALLOW_IP = {'192.0.0.1', '192.0.0.2', '192.0.0.3'}

def rule(event):
    return event.get("ip_address") not in ALLOW_IP

In the example below, we use the Panther helper pattern_match_list:

from panther_base_helpers import pattern_match_list

USER_CREATE_PATTERNS = [
    "chage",   # user password expiry
    "passwd",  # change passwords for users
    "user*",   # create, modify, and delete users
]

def rule(event):
    # Filter the events
    if event.get("event") != "session.command":
        return False
    # Check that the program matches our list above
    return pattern_match_list(event.get("program", ""), USER_CREATE_PATTERNS)

Matching events with regex

If you want to match events against regular expressions (to match subdomains, file paths, or a prefix/suffix of a general string), import Python's re library and look for a matching value.

In the example below, the regex pattern will match Administrator or administrator against the nested value of the privilegeGranted field.

import re
from panther_base_helpers import deep_get

# The regex pattern is stored in a global variable
# Note: compiling it once here performs better than compiling it inside the rule function, which is evaluated on each event
ADMIN_PATTERN = re.compile(r"[aA]dministrator")

def rule(event):
    # using the deep_get function we can pull out the nested value under the "privilegeGranted" field
    value_to_search = deep_get(event, "debugContext", "debugData", "privilegeGranted", default="")
    # finally we use the compiled regex object to check against our value
    # if there is a match, True is returned
    return bool(ADMIN_PATTERN.search(value_to_search))

In the example below, we use the Panther helper pattern_match:

from panther_base_helpers import pattern_match

def rule(event):
    return pattern_match(event.get("operation", ""), "REST.*.OBJECT")

References:

Python rule specification reference

Required fields are: AnalysisType, Enabled, Filename, RuleID, LogTypes (rules) or ScheduledQueries (scheduled rules), and Severity.

AnalysisType: Indicates whether this analysis is a rule, scheduled_rule, policy, or global. Expected value: rule (for rules) or scheduled_rule (for scheduled rules).

Enabled: Whether this rule is enabled. Expected value: Boolean.

Filename: The path (with file extension) to the Python rule body. Expected value: String.

RuleID: The unique identifier of the rule. Expected value: String (cannot include %).

LogTypes: The list of log types to apply this rule to. Expected value: List of strings.

Severity: The severity of this rule. Expected value: one of Info, Low, Medium, High, or Critical.

ScheduledQueries (Scheduled Rules only): The list of Scheduled Query names to apply this rule to. Expected value: List of strings.

CreateAlert: Whether an alert should be created on rule matches (default: true). Expected value: Boolean.

Description: A brief description of the rule. Expected value: String.

DedupPeriodMinutes: The time period (in minutes) during which similar events of an alert will be grouped together. Expected value: 15, 30, 60, 180 (3 hours), 720 (12 hours), or 1440 (24 hours).

DisplayName: A friendly name to show in the UI and alerts. The RuleID will be displayed if this field is not set. Expected value: String.

OutputIds: Static destination overrides. These will be used to determine how alerts from this rule are routed, taking priority over default routing based on severity. Expected value: List of strings.

Reference: The reason this rule exists, often a link to documentation. Expected value: String.

Reports: A mapping of framework or report names to values this rule covers for that framework. Expected value: Map of strings to list of strings.

Runbook: The actions to be carried out if this rule returns an alert, often a link to documentation. Expected value: String.

SummaryAttributes: A list of fields that alerts should summarize. Expected value: List of strings.

Threshold: How many events need to trigger this rule before an alert will be sent. Expected value: Integer.

Tags: Tags used to categorize this rule. Expected value: List of strings.

Tests: Unit tests for this rule. Expected value: List of maps.

Python Policy Specification Reference

Required fields are: AnalysisType, Enabled, Filename, PolicyID, ResourceTypes, and Severity.

A complete list of policy specification fields:

AnalysisType: Indicates whether this specification is defining a policy or a rule. Expected value: policy.

Enabled: Whether this policy is enabled. Expected value: Boolean.

Filename: The path (with file extension) to the Python policy body. Expected value: String.

PolicyID: The unique identifier of the policy. Expected value: String (cannot include %).

ResourceTypes: The resource types this policy will apply to. Expected value: List of strings.

Severity: The severity of this policy. Expected value: one of Info, Low, Medium, High, or Critical.

Description: A brief description of the policy. Expected value: String.

DisplayName: A friendly name to show in the UI and alerts. The PolicyID will be displayed if this field is not set. Expected value: String.

Reference: The reason this policy exists, often a link to documentation. Expected value: String.

Reports: A mapping of framework or report names to values this policy covers for that framework. Expected value: Map of strings to list of strings.

Runbook: The actions to be carried out if this policy fails, often a link to documentation. Expected value: String.

Suppressions: Patterns of resource IDs to ignore, e.g., aws::s3::*. Expected value: List of strings.

Tags: Tags used to categorize this policy. Expected value: List of strings.

Tests: Unit tests for this policy. Expected value: List of maps.

Troubleshooting Detections

Visit the Panther Knowledge Base to view articles about detections that answer frequently asked questions and help you resolve common errors and issues.
