PyPanther Detections Style Guide

Last updated 3 months ago

PyPanther Detections are in closed beta starting with Panther version 1.108. Please share any bug reports and feature requests with your Panther support team.

Repository structure recommendations

Get up and running quickly by cloning the pypanther-starter-kit repository.

In your code repository where your PyPanther Detections are stored, it's recommended to:

  • Maintain a top-level module, content, in which all of your custom Python code is stored (except for the main.py file).

    The top-level directory we are calling content/ can be named anything except src/, which is a reserved repository name in Panther.

    Within this folder, it's recommended to:

    • Store custom rule definitions in a rules directory.

    • Store logic that makes overrides on Panther-managed rules in an overrides directory.

      • Define an apply_overrides function in each file in overrides.

    • Store custom helpers in a helpers directory.

# Recommended repository structure
.
├── README.md
├── content
│   ├── __init__.py
│   ├── helpers
│   │   ├── __init__.py
│   │   ├── cloud.py
│   │   └── custom_log_types.py
│   ├── overrides
│   │   ├── __init__.py
│   │   ├── aws_cloudtrail.py
│   │   └── aws_guardduty.py
│   ├── rules
│   │   ├── __init__.py
│   │   ├── my_custom_rule.py
│   │   └── my_inherited_rule.py
│   └── schemas
│       └── schema1.yml
└── main.py
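
The main.py example in the next section imports CustomLogType from content/helpers/custom_log_types.py. A minimal sketch of what that helper might contain (the class shape and log type names here are illustrative assumptions, not Panther-defined values):

```python
# content/helpers/custom_log_types.py (illustrative sketch)
# Enumerates the custom log types defined in your Panther instance, so rules
# and main.py can reference them by name instead of repeating string literals.
from enum import Enum


class CustomLogType(str, Enum):
    """Names of custom schemas defined in Panther (hypothetical examples)."""

    INTERNAL_APP = "Custom.InternalApp"
    LEGACY_FIREWALL = "Custom.LegacyFirewall"
```

Because the enum mixes in str, its members compare equal to plain strings, which is what allows the membership check against rule.log_types in the main.py example below to work.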

main.py content recommendations

It's recommended for your main.py file to:

  • Import Panther-managed rules (whether or not you intend to override them) using get_panther_rules

  • Import custom rules using get_rules

  • Call pypanther's apply_overrides() to apply any overrides you've defined in apply_overrides functions. Learn more in Call apply_overrides(), below.

# Example main.py file
from pypanther import LogType, Severity, apply_overrides, get_panther_rules, get_rules, register

from content import rules, overrides
from content.helpers.custom_log_types import CustomLogType

# Load base rules
base_rules = get_panther_rules(
    # log_types=[
    #     LogType.AWS_CLOUDTRAIL,
    #     LogType.AWS_GUARDDUTY,
    #     LogType.PANTHER_AUDIT,
    # ],
    # default_severity=[
    #     Severity.CRITICAL,
    #     Severity.HIGH,
    # ],
)
# Load all local custom rules
custom_rules = get_rules(module=rules)
# Omit rules with custom log types, since they must be present in the Panther instance for upload to work
custom_rules = [rule for rule in custom_rules if not any(custom in rule.log_types for custom in CustomLogType)]

# Apply overrides
apply_overrides(module=overrides, rules=base_rules)

# Register all rules
register(base_rules + custom_rules)

Call apply_overrides()

The pypanther convenience function apply_overrides() lets you efficiently apply, from main.py, all detection overrides defined in a separate file or folder.

apply_overrides() takes an imported package (folder) or module (file) name and an optional list of rules, and runs all functions named apply_overrides() from that package or module against the list of rules. (This is similar to how get_rules() works.) It's recommended to name this package or module overrides.

Each folder containing rules imported with apply_overrides() must contain an __init__.py file.

In the following example, the apply_overrides() functions in general.py and aws_cloudtrail.py are applied when apply_overrides(overrides, all_rules) is called in main.py:

# main.py
from pypanther import apply_overrides, get_panther_rules, register

import overrides

all_rules = get_panther_rules()
apply_overrides(overrides, all_rules)

register(all_rules)

# general.py in /overrides
def apply_overrides(rules):
    for rule in rules:
        rule.override(enabled=True)

# aws_cloudtrail.py in /overrides
from pypanther import Severity
from pypanther.rules.aws_cloudtrail import AWSCloudTrailStopped

def apply_overrides(rules):
    AWSCloudTrailStopped.override(
        default_severity=Severity.LOW,
        default_runbook=(
            "If the account is in production, investigate why CloudTrail was stopped. "
            "If it was intentional, ensure that the account is monitored by another CloudTrail. "
            "If it was not intentional, investigate the account for unauthorized access."
        ),
    )
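
As a mental model for this dispatch behavior, here is a dependency-free sketch of how functions named apply_overrides can be collected from a package's submodules and run against a rule list (a simplified illustration, not pypanther's actual implementation):

```python
import types


def run_all_overrides(package: types.ModuleType, rules: list) -> None:
    # Call apply_overrides() on every submodule attached to the package
    # (simplified model of the dispatch described above).
    for name in dir(package):
        member = getattr(package, name)
        if isinstance(member, types.ModuleType) and hasattr(member, "apply_overrides"):
            member.apply_overrides(rules)


# Build a stand-in "overrides" package with two submodules, mirroring
# the general.py / aws_cloudtrail.py layout shown above.
general = types.ModuleType("general")
general.apply_overrides = lambda rules: rules.append("general ran")
aws_cloudtrail = types.ModuleType("aws_cloudtrail")
aws_cloudtrail.apply_overrides = lambda rules: rules.append("aws_cloudtrail ran")

overrides = types.ModuleType("overrides")
overrides.general = general
overrides.aws_cloudtrail = aws_cloudtrail

calls: list = []
run_all_overrides(overrides, calls)
# calls records that both submodules' apply_overrides functions ran
```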

Best practices for PyPanther Detection writing

Use filters instead of overriding rule()

If you would like to alter the logic of a Panther-managed PyPanther Detection, it's recommended to use include/exclude filters instead of overriding the rule's rule() function. Filters are designed for this purpose: they are applied on top of existing rule logic, and are executed against each incoming event before rule() in order to determine whether the rule should process the event.

If you are significantly altering the rule logic, you might also consider writing a custom rule instead.
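
Conceptually, an include filter is a predicate evaluated before a rule's own logic; only events that pass every filter reach rule(). A dependency-free sketch of that evaluation order (a simplified model with hypothetical field names, not the pypanther filter API):

```python
# Simplified model of how include filters gate a rule (illustrative only;
# see the pypanther filter documentation for the real API).
def evaluate(rule_logic, include_filters, event):
    # Every include filter must pass before the rule logic is consulted.
    if not all(f(event) for f in include_filters):
        return False
    return rule_logic(event)


# Hypothetical filter: only process events from a production account.
def prod_account_only(event):
    return event.get("recipientAccountId") == "111111111111"


# Stand-in for an existing rule() method.
def stop_logging_rule(event):
    return event.get("eventName") == "StopLogging"


matched = evaluate(
    rule_logic=stop_logging_rule,
    include_filters=[prod_account_only],
    event={"recipientAccountId": "111111111111", "eventName": "StopLogging"},
)
# matched is True; the same event from any other account is filtered out
# before the rule logic ever runs
```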

Use upgrade() or downgrade() in severity()

While each PyPanther rule must define a default_severity, it can also define a severity() function, whose value overrides default_severity to set the severity level of resulting alerts.

Within severity(), it's common practice to dynamically set the severity based on an event field value. One way to do this is to add a condition which, if satisfied, returns a hard-coded Severity value. In the example below, if the actor associated with the event has an admin role, Severity.MEDIUM is returned, regardless of the value of self.default_severity:

# Example NOT using upgrade() or downgrade()
class MyRule(Rule):
    ...
    default_severity = Severity.LOW
    ...

    def severity(self, event):
        if event.deep_get("actor", "role") == "admin":
            return Severity.MEDIUM        # returns MEDIUM
        return self.default_severity      # returns LOW

Note that you can reference the default_severity value with self.default_severity.

In the example above, if MyRule's default_severity were ever changed from Severity.LOW to, say, Severity.MEDIUM, you would also need to remember to update the hard-coded Severity.MEDIUM in severity() (presumably to Severity.HIGH) to preserve the escalation.

Using upgrade() instead of hard-coding a Severity

Instead of setting the return value of severity() as a hard-coded Severity value (as is shown in the example above), it's recommended to call the Severity class's upgrade() and downgrade() functions on self.default_severity.

In order to use self.default_severity.upgrade() or self.default_severity.downgrade(), the detection's default_severity value must be a Severity object, not a string literal.

In this model, if your detection's default_severity value ever changes, you won't also need to make changes in the severity() function. In the example below, default_severity has been changed to Severity.MEDIUM, and the return value within the condition automatically adjusts to returning Severity.HIGH because it uses upgrade():

# Example using upgrade()
class MyRule(Rule):
    ...
    default_severity = Severity.MEDIUM
    ...

    def severity(self, event):
        if event.deep_get("actor", "role") == "admin":
            return self.default_severity.upgrade()    # returns HIGH
        return self.default_severity                  # returns MEDIUM

There may be rare instances when using a static value within severity() is preferable—for example, you may always want to return Severity.INFO when an event originates in a dev account, even if the default_severity later changes. This might look like:

    def severity(self, event):
        if is_dev_env(event.get("accountId")):
            return Severity.INFO
        return self.default_severity
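
Conceptually, upgrade() and downgrade() step one level along the ordered severity scale, saturating at the ends. A dependency-free sketch of that behavior (a simplified model, not the pypanther implementation):

```python
# Panther's severity levels, ordered lowest to highest.
LEVELS = ["INFO", "LOW", "MEDIUM", "HIGH", "CRITICAL"]


def upgrade(level: str) -> str:
    # One step up the scale, saturating at CRITICAL.
    return LEVELS[min(LEVELS.index(level) + 1, len(LEVELS) - 1)]


def downgrade(level: str) -> str:
    # One step down the scale, saturating at INFO.
    return LEVELS[max(LEVELS.index(level) - 1, 0)]
```

This is why the earlier example keeps working when default_severity changes: upgrade("LOW") yields "MEDIUM", and after the change upgrade("MEDIUM") yields "HIGH", with no edit to severity().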

Optionally use explicit typing

The pypanther base Rule class is typed, so when you create a custom detection (i.e., inherit from Rule), explicit typing is optional. Still, if you prefer to use explicit types, you can do so.

Example of a custom rule with explicit typing
from time import strptime
from typing import Dict, List

from panther_core.enriched_event import PantherEvent
from pypanther import LogType, Rule, RuleTest, Severity
from pypanther.base import SeverityType
from pypanther.helpers.aws import aws_guardduty_context
from pypanther.severity import SEVERITY_DEFAULT


class MyTypedRule(Rule):
    log_types: List[LogType | str] = [LogType.AWS_GUARDDUTY]
    id: str = "AWS.GuardDuty.HighVolFindings"
    create_alert: bool = True
    dedup_period_minutes: int = 45
    display_name: str = "High volume of GuardDuty findings"
    enabled: bool = True
    threshold: int = 100
    tags: List[str] = ["GuardDuty", "Security"]
    reports: Dict[str, List[str]] = {"MITRE ATT&CK": ["TA0010:T1499"]}

    default_severity: Severity | str = Severity.HIGH
    default_destinations: List[str] = ["slack:my-channel"]
    default_description: str = "This rule tracks high volumes of GuardDuty findings"

    def rule(self, event: PantherEvent) -> bool:
        if event.deep_get("service", "additionalInfo", "sample"):
            # in case of sample data
            # https://docs.aws.amazon.com/guardduty/latest/ug/sample_findings.html
            return False
        return 7.0 <= float(event.get("severity", 0)) <= 8.9

    def title(self, event: PantherEvent) -> str:
        return event.get("title", "GuardDuty finding")

    def severity(self, event: PantherEvent) -> SeverityType:
        # Parse timestamp: "createdAt": "2020-02-14T18:12:22.316Z"
        # Trim to seconds precision, since time.strptime cannot parse the
        # fractional-seconds suffix
        timestamp = strptime(event.get("createdAt", "1970-01-01T00:00:00Z")[:19], "%Y-%m-%dT%H:%M:%S")
        # Increase severity if it's the weekend
        if timestamp.tm_wday in (5, 6):
            return Severity.CRITICAL
        return SEVERITY_DEFAULT

    def alert_context(self, event: PantherEvent) -> dict:
        return aws_guardduty_context(event)

    tests: List[RuleTest] = [
        RuleTest(
            name="High Sev Finding",
            expected_result=True,
            log={
                "schemaVersion": "2.0",
                "accountId": "123456789012",
                "region": "us-east-1",
                "partition": "aws",
                "arn": "arn:aws:guardduty:us-west-2:123456789012:detector/111111bbbbbbbbbb5555555551111111/finding/90b82273685661b9318f078d0851fe9a",
                "type": "PrivilegeEscalation:IAMUser/AdministrativePermissions",
                "service": {
                    "serviceName": "guardduty",
                    "detectorId": "111111bbbbbbbbbb5555555551111111",
                    "action": {
                        "actionType": "AWS_API_CALL",
                        "awsApiCallAction": {
                            "api": "PutRolePolicy",
                            "serviceName": "iam.amazonaws.com",
                            "callerType": "Domain",
                            "domainDetails": {"domain": "cloudformation.amazonaws.com"},
                            "affectedResources": {"AWS::IAM::Role": "arn:aws:iam::123456789012:role/IAMRole"},
                        },
                    },
                    "resourceRole": "TARGET",
                    "additionalInfo": {},
                    "evidence": None,
                    "eventFirstSeen": "2020-02-14T17:59:17Z",
                    "eventLastSeen": "2020-02-14T17:59:17Z",
                    "archived": False,
                    "count": 1,
                },
                "severity": 8,
                "id": "eeb88ab56556eb7771b266670dddee5a",
                "createdAt": "2020-02-14T18:12:22.316Z",
                "updatedAt": "2020-02-14T18:12:22.316Z",
                "title": "Principal AssumedRole:IAMRole attempted to add a policy to themselves that is highly permissive.",
                "description": "Principal AssumedRole:IAMRole attempted to add a highly permissive policy to themselves.",
            },
        ),
        RuleTest(
            name="High Sev Finding As Sample Data",
            expected_result=False,
            log={
                "schemaVersion": "2.0",
                "accountId": "123456789012",
                "region": "us-east-1",
                "partition": "aws",
                "arn": "arn:aws:guardduty:us-west-2:123456789012:detector/111111bbbbbbbbbb5555555551111111/finding/90b82273685661b9318f078d0851fe9a",
                "type": "PrivilegeEscalation:IAMUser/AdministrativePermissions",
                "service": {
                    "serviceName": "guardduty",
                    "detectorId": "111111bbbbbbbbbb5555555551111111",
                    "action": {
                        "actionType": "AWS_API_CALL",
                        "awsApiCallAction": {
                            "api": "PutRolePolicy",
                            "serviceName": "iam.amazonaws.com",
                            "callerType": "Domain",
                            "domainDetails": {"domain": "cloudformation.amazonaws.com"},
                            "affectedResources": {"AWS::IAM::Role": "arn:aws:iam::123456789012:role/IAMRole"},
                        },
                    },
                    "resourceRole": "TARGET",
                    "additionalInfo": {"sample": True},
                    "evidence": None,
                    "eventFirstSeen": "2020-02-14T17:59:17Z",
                    "eventLastSeen": "2020-02-14T17:59:17Z",
                    "archived": False,
                    "count": 1,
                },
                "severity": 8,
                "id": "eeb88ab56556eb7771b266670dddee5a",
                "createdAt": "2020-02-14T18:12:22.316Z",
                "updatedAt": "2020-02-14T18:12:22.316Z",
                "title": "Principal AssumedRole:IAMRole attempted to add a policy to themselves that is highly permissive.",
                "description": "Principal AssumedRole:IAMRole attempted to add a highly permissive policy to themselves.",
            },
        ),
    ]
