Policies

Scan and evaluate cloud infrastructure configurations


Overview

Policies are used to identify misconfigured cloud infrastructure and generate alerts for your team. Panther provides a number of already written and continuously updated Panther-managed policies.

Policies may be written as Python detections; they cannot be written as Simple Detections.

Matches on policies create compliance failures, but not signals. Compliance failures are visible:

  • In Data Explorer, in the panther_cloudsecurity.public database, in the compliance_history table

  • In Search, in the Cloud Security database, in the Compliance History table
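
For example, failures could be reviewed with a query like the one below in Data Explorer. This is a sketch only; the status column and FAIL value are assumptions, so check the compliance_history table schema for the actual field names:

-- List recent compliance failures (column names are assumptions)
SELECT *
FROM panther_cloudsecurity.public.compliance_history
WHERE status = 'FAIL'
LIMIT 100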

How to write a policy

Before you start writing a new policy, remember to check to see if there's an existing Panther-managed policy that meets your needs.

It is highly discouraged to make external API requests from within your detections in Panther. In general, detections are processed at a very high scale, and making API requests can overload receiving systems and cause your rules to exceed the 15-second runtime limit.

Policy Body

The policy body must:

  • Be valid Python3.

  • Define a policy() function that accepts one resource argument.

    • Each policy takes a resource input of a given resource type from the supported resources page.

  • Return a bool from the policy function.

def policy(resource):
  return True

The Python body should name the argument to the policy() function resource. It may also do the following:

  • Import standard Python3 libraries

  • Import from the user-defined aws_globals module

  • Import from the Panther-defined panther module

  • Define additional helper functions as needed

  • Define variables and classes outside the scope of the policy function
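
For example, a policy might combine a module-level constant with a helper function. The sketch below is illustrative only; APPROVED_REGIONS and is_approved_region are hypothetical names, not part of any Panther-provided module:

# Hypothetical constant defined outside the policy function
APPROVED_REGIONS = {'us-east-1', 'us-west-2'}


def is_approved_region(resource):
    # Helper functions may be defined alongside policy()
    return resource.get('Region') in APPROVED_REGIONS


def policy(resource):
    # True marks the resource compliant; False creates a compliance failure
    return is_approved_region(resource)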

Writing policies locally and in the Panther Console

You can write and deploy policies in the Panther Console, or you can write them locally and upload them to Panther using the CLI workflow with the Panther Analysis Tool (PAT). For detection templates and examples, see the panther_analysis GitHub repository.

How to write policies in the Panther Console

  1. In the left-hand navigation bar of your Panther Console, click Detections.

  2. In the upper-right corner, click Create New.

  3. In the Select Detection Type modal, choose Policy.

  4. On the create page, configure your policy:

    • Name: Enter a descriptive name for the policy.

    • ID (optional): Click the pen icon and enter a unique ID for your policy.

    • In the upper-right corner, the Enabled toggle will be set to ON by default. If you'd like to disable the policy, flip the toggle to OFF.

    • In the For the Following Resource Types section:

      • Resource Types: Select one or more resource types this policy should apply to. Leave empty to apply to all resources.

    • In the Detect section:

      • In the Policy Function text editor, write a Python policy function to define your detection.

    • In the Set Alert Fields section:

      • Severity: Select a severity level for the alerts triggered by this detection.

      • In the Optional Fields section, optionally provide values for the following fields:

        • Description: Enter additional context about the policy.

        • Runbook: Enter the procedures and operations relating to this policy. To see examples of runbooks for built-in rules, see Alert Runbooks.

        • Reference: Enter an external link to more information relating to this policy.

        • Ignore Patterns: Enter resource ARN patterns this policy should ignore.

        • Custom Tags: Enter custom tags to help you understand the rule at a glance (e.g., HIPAA).

        • Destination Overrides: Choose destinations to receive alerts for this detection, regardless of severity. Note that destinations can also be set dynamically, in the rule function. See Routing Order Precedence to learn more about routing precedence.

        • In the Framework Mapping section:

          1. Click Add New to enter a report.

          2. Provide values for the following fields:

            • Report Key: Enter a key relevant to your report.

            • Report Values: Enter values for that report.

    • In the Test section:

      • In the Unit Test section, click Add New to create a test for the policy you defined in the previous step.

  5. In the upper-right corner, click Save.

We recommend managing your local detection files in a version control system like GitHub or GitLab. It's best practice to create a fork of Panther's open-source analysis repository, but you can also create your own repo from scratch.

File setup

Each detection consists of:

  • A Python file (a file with a .py extension) containing your detection/audit logic

  • A YAML or JSON specification file (a file with a .yml or .json extension) containing metadata attributes of the detection.

    • By convention, we give this file the same name as the Python file.

Folder setup

If you group your policies into folders, each folder name must contain policies in order for them to be found during upload (using either PAT or the bulk uploader in the Console). We recommend grouping policies into folders based on log/resource type, e.g., suricata_rules or aws_s3_policies. You can use the open-source Panther Analysis repo as a reference.

Writing policies locally

  1. Write your policy and save it (in your folder of choice) as my_new_policy.py:

    def policy(resource):
      return resource['Region'] != 'us-east-1'
  2. Create a specification file using the template below:

    AnalysisType: policy
    Enabled: true
    Filename: my_new_policy.py
    PolicyID: Category.Type.MoreInfo
    ResourceType:
      - Resource.Type.Here
    Severity: Info|Low|Medium|High|Critical
    DisplayName: Example Policy to Check the Format of the Spec
    Tags:
      - Tags
      - Go
      - Here
    Runbook: Find out who changed the spec format.
    Reference: https://www.link-to-info.io

See the full Python Policy Specification Reference for a complete list of required and optional fields.
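
Once both files are saved, you can test and upload them with the Panther Analysis Tool. The commands below are a sketch that assumes PAT is installed and authenticated, and that your files live in a folder named policies; see Panther Analysis Tool Commands for the full CLI reference:

# Run the unit tests defined in your specification files
panther_analysis_tool test --path policies

# Upload the policies to your Panther instance
panther_analysis_tool upload --path policies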

Title of associated alerts

The order of precedence for setting the alert title for policies is the same as it is for Rules and Scheduled Rules; see the How the alert title is set section.

Ignoring specific cloud resources

It's possible to configure a policy to make exceptions for certain cloud resources, meaning the policy will not be run over those resources and alerts will not be generated. This is sometimes referred to as a "policy suppression."

There are three ways to configure a policy suppression:

While it's possible to add a policy suppression from the resource's page in the Panther Console, it is not reversible from there; if you need to later remove it, you must edit the policy via one of the other two methods.

To configure a policy to ignore a resource from the resource's page in the Panther Console:

  1. In the left-hand navigation bar of your Panther Console, click Cloud Resources.

  2. Click the name of the cloud resource you'd like to ignore.

  3. In the Policies section, which contains all the policies that are applied to this resource, locate the policy you'd like to configure to ignore this resource.

  4. On the right side of its row, click the three dots icon, then Ignore.

    • This edits the policy, adding the resource to its Ignore Patterns field.

To configure a policy to ignore a resource from the policy's page in the Panther Console:

  1. In the left-hand navigation bar of your Panther Console, click Build > Detections.

  2. Click the name of the policy you'd like to configure a suppression for.

  3. In the Set Alert Fields section, expand the Optional Fields dropdown.

  4. In the Ignore Patterns field, enter the ARN of the resource(s) you want to ignore.

    • You can ignore multiple resources with similar ARNs using a wildcard pattern. For example, you can exclude all S3 buckets with titles starting with panther by entering arn:aws:s3:::panther*.

To configure a policy to ignore a resource from the policy's local file configuration:

  1. Open the YAML file for your policy.

  2. Add a new field called Suppressions, with type array.

  3. Under Suppressions, add the resource ARN(s) you'd like to ignore as a list.

    • You can ignore multiple resources with similar ARNs using a wildcard pattern. For example, you can exclude all S3 buckets with titles starting with panther by entering arn:aws:s3:::panther*.

Example:

AnalysisType: policy
PolicyID: AWS.S3.CustomPolicy.Example
...
Suppressions:
  - "arn:aws:s3:::test-bucket"
  - "arn:aws:s3:::panther*"

Policy Writing Best Practices

Constructing Test Resources

Manually building test cases can be prone to human error. We suggest one of the following methods:

  • Option 1: In the Panther Console, navigate to Investigate > Cloud Resources. Apply a filter of the resource type you intend to emulate in your test. Select a resource in your environment, and on the Attributes card you can copy the full JSON representation of that resource by selecting the copy button next to the word root.

  • Option 2: Open the Panther Resources documentation, and navigate to the section for the resource you are trying to emulate. Copy the provided example resource. Paste this into the resource editor if you're working in the web UI, or into the Resource field if you are working locally. Now you can manually modify the fields relevant to your policy and the specific test case you are trying to emulate.

Option 1 is best when it is practical, as it provides real test data for your policies. Additionally, it is often the case that you are writing or modifying a policy specifically because of an offending resource in your account. Using that exact resource's JSON representation as your test case can guarantee that similar resources will be caught by your policy in the future.
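
Whichever option you use, the copied JSON becomes the test resource. In a local specification file, unit tests live under a Tests key; the sketch below uses an abbreviated version of the AWS.PasswordPolicy resource shown later on this page:

Tests:
  - Name: Max Password Age Is Enforced
    ExpectedResult: true
    Resource:
      MaxPasswordAge: 90
      MinimumPasswordLength: 14
      ResourceType: AWS.PasswordPolicy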

Debugging Exceptions

Debugging exceptions can be difficult, as you do not have direct access to the Python environment running the policies.

When you see a policy showing the state Error on a given resource, it means the policy threw an exception. The best method for troubleshooting these errors is to use Option 1 in the Constructing Test Resources section above and create a test case from the resource causing the exception.

Running this test case either locally or in the Panther Console should provide more context for the issue, and allow you to modify the policy to debug the exception without having to run the policy against all resources in your environment.

Note: Anything printed to stdout or stderr by your Python code will end up in CloudWatch. For SaaS/CPaaS customers, Panther engineers can see these CloudWatch logs during routine application monitoring.
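
A common source of exceptions is indexing a key that is absent from a particular resource. One defensive pattern, sketched below against the IAM password policy example from later on this page, is to use dict.get() and print context that will land in CloudWatch:

def policy(resource):
    # .get() returns None instead of raising KeyError when the key is missing
    max_age = resource.get('MaxPasswordAge')
    if max_age is None:
        # Printed output ends up in CloudWatch and can help trace the failing resource
        print('MaxPasswordAge missing for', resource.get('ResourceId'))
        return False
    return max_age <= 90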

Policy examples

S3 public read access

In the example below, the policy checks if an S3 bucket allows public read access:

# Grantees that represent public access
GRANTEES = {
    'http://acs.amazonaws.com/groups/global/AuthenticatedUsers',
    'http://acs.amazonaws.com/groups/global/AllUsers'
}
PERMISSIONS = {'READ'}


def policy(resource):
    # Non-compliant if any grant gives READ to a public group
    for grant in resource['Grants']:
        if grant['Grantee']['URI'] in GRANTEES and grant['Permission'] in PERMISSIONS:
            return False

    return True

IAM Password Policy

This example policy alerts when the password policy does not enforce a maximum password age of 90 days or less:

def policy(resource):
    # No maximum password age configured: non-compliant
    if resource['MaxPasswordAge'] is None:
        return False
    return resource['MaxPasswordAge'] <= 90

In the policy() body, returning a value of True indicates the resource is compliant and no alert should be sent. Returning a value of False indicates the resource is non-compliant.

The policy is based on an IAM Password Policy resource:

{
    "AccountId": "123456789012",
    "AllowUsersToChangePassword": true,
    "AnyExist": true,
    "ExpirePasswords": true,
    "HardExpiry": null,
    "MaxPasswordAge": 90,
    "MinimumPasswordLength": 14,
    "Name": "AWS.PasswordPolicy",
    "PasswordReusePrevention": 24,
    "Region": "global",
    "RequireLowercaseCharacters": true,
    "RequireNumbers": true,
    "RequireSymbols": true,
    "RequireUppercaseCharacters": true,
    "ResourceId": "123456789012::AWS.PasswordPolicy",
    "ResourceType": "AWS.PasswordPolicy",
    "Tags": null,
    "TimeCreated": null
}

Reference

Using the schemas in the Resources documentation provides details on all available fields in resources. Top-level keys are always present, although they may contain NoneType values.

See the full Python policy specification reference on Writing Python Detections.
