Data Models

Data Models provide a way to configure a set of unified fields across all log types

Overview

Use Data Models to configure a set of unified fields across all log types by creating mappings between event fields for various log types and unified Data Model names. You can leverage Panther-managed Data Models and create custom ones.

Data Models for detections are different from the Panther Unified Data Model fields (also known as Core Fields). To learn more, see Core Fields vs. Data Models in Python detections.

Data Models use case

Suppose you have a detection that checks for a particular source IP address in network traffic logs, and you'd like to use it for multiple log types. These log types might not only span different categories (e.g., DNS, Zeek, Apache), but also different vendors. Without a common logging standard, each of these log types may represent the source IP using a different field name, such as ipAddress, srcIP, or ipaddr. The more log types you'd like to monitor, the more complex and cumbersome the logic of this check becomes. For example, it might look something like:

(event.get('ipAddress') == '127.0.0.1' or 
event.get('srcIP') == '127.0.0.1' or 
event.get('ipaddr') == '127.0.0.1')

If we instead define a Data Model for each of these log types, we can translate the event's field name to the Data Model name, meaning the detection can simply reference the Data Model version. The above logic then simplifies to:

event.udm('source_ip') == '127.0.0.1'
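
For instance, a minimal sketch of a rule written this way (the IP value and alert title are illustrative) might look like:

SUSPICIOUS_IP = '127.0.0.1'  # illustrative value


def rule(event):
    # 'source_ip' resolves to the correct field name
    # (ipAddress, srcIP, ipaddr, ...) for the event's log type
    return event.udm('source_ip') == SUSPICIOUS_IP


def title(event):
    # the same unified name works when building the alert title
    return 'Traffic from suspicious IP [{}]'.format(event.udm('source_ip'))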

Panther-managed Data Models

By default, Panther comes with built-in Data Models for several log types, such as AWS.S3ServerAccess, AWS.VPCFlow, and Okta.SystemLog. All currently supported Data Models can be found in the panther-analysis repository.

The names of the supported Data Model mappings are listed in the Panther-managed Data Model mapping names table, below.

How to create custom Data Models

Custom Data Models can be created in a few ways: in the Panther Console, using the Panther Analysis Tool (PAT), or with the Panther API. See the sections below for creation instructions for each method.

Each log type can only have one enabled Data Model specified (however, a single Data Model can contain multiple mappings). If you want to change or update an existing Data Model, disable the existing one, then create a new, enabled one.

Your custom Data Model mappings can use the names referenced in Panther-managed Data Models, or your own custom names. Each mapping Name can map to an event field (with Path or Field Path) or a method you define (with Field Method or Method). If you map to a method, you must define the method either in a separate Python file (if working in the CLI workflow), referenced in the YAML file using Filename, or in the Python Module field in the Console.

To create a new Data Model in the Panther Console:

  1. In the left-hand navigation bar of your Panther Console, click Detections.

  2. Click the Data Models tab.

  3. In the upper-right corner, click Create New.

  4. Under Settings, fill in the form fields.

    • Display Name: Enter a user-friendly display name for this Data Model.

    • ID: Enter a unique ID for this Data Model.

    • Enabled: Select whether you'd like this Data Model enabled or disabled.

    • Log Type: Select the log type this Data Model should apply to. Only one log type per Data Model is permitted.

  5. Under Data Model Mappings, create Name/Field Path or Name/Field Method pairs. Learn more about Mappings syntax below, in DataModel Mappings.

  6. If you used the Field Method field, define the method(s) in the Python Module (optional) section.

  7. In the upper-right corner, click Save.

You can now reference this Data Model in your rules. Learn more in Referencing Data Models in a rule, below.

How to create a Data Model in the CLI workflow

Folder setup

All files related to your custom Data Models must be stored in a folder with a name containing data_models (this could be a top-level data_models directory, or sub-directories with names matching *data_models*).
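
For example, a minimal layout matching the file names used in the steps below (only the data_models naming requirement is mandatory):

data_models/
  aws_cloudtrail_datamodel.yml
  aws_cloudtrail_datamodel.py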

File setup

  1. Create your Data Model specification YAML file (e.g., data_models/aws_cloudtrail_datamodel.yml):

    AnalysisType: datamodel
    LogTypes: 
      - AWS.CloudTrail
    DataModelID: AWS.CloudTrail
    Filename: aws_cloudtrail_datamodel.py
    Enabled: true
    Mappings:
      - Name: actor_user
        Path: $.userIdentity.userName
      - Name: event_type
        Method: get_event_type
      - Name: source_ip
        Path: sourceIPAddress
      - Name: user_agent
        Path: userAgent
    • Set AnalysisType to datamodel.

    • For LogTypes, provide the name of one of your log types. Despite this field taking a list, only one log type per Data Model is supported.

    • Provide a value for the DataModelID field.

    • Within Mappings, create Name / Path or Name / Method pairs. See the DataModel specification reference below for a complete list of required and optional fields.

  2. If you included one or more Method fields within Mappings, create an associated Python file (data_models/aws_cloudtrail_datamodel.py), and define any referenced methods.

    • In this case, you must also add the Filename field to the Data Model YAML file. If no Method fields are present, no Python file/Filename field is required.

      from panther_base_helpers import deep_get

      def get_event_type(event):
          # Classify IAM user console logins as successful or failed logins
          if event.get('eventName') == 'ConsoleLogin' and deep_get(event, 'userIdentity', 'type') == 'IAMUser':
              if event.get('responseElements', {}).get('ConsoleLogin') == 'Failure':
                  return "failed_login"
              if event.get('responseElements', {}).get('ConsoleLogin') == 'Success':
                  return "successful_login"
          # Any other event does not map to a known event type
          return None

  3. Upload your Data Model to your Panther instance using the PAT upload command.

You can now reference this Data Model in your rules. Learn more in Referencing Data Models in a rule, below.

How to create a Data Model using the Panther API

See the POST operation on Data Models in the Panther REST API.

Evaluating whether a field exists in Path

Within a Path value, you can include logic that checks whether a certain event field exists. If it does, the mapping is applied; if it doesn't, the mapping doesn't take effect.

For example, take the following Path value from the Panther-managed gsuite_data_model.yml:

  - Name: assigned_admin_role
    Path: $.events[*].parameters[?(@.name == 'ROLE_NAME')].value
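
As a rough sketch, a detection could key off whether that mapping resolved for a given event; this assumes event.udm() returns a falsy value (such as None) when the mapping does not take effect:

def rule(event):
    # Fires only when the event contains a parameter named 'ROLE_NAME',
    # i.e., when the assigned_admin_role mapping applies to this event
    return bool(event.udm('assigned_admin_role'))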

Using Data Models

Referencing Data Models in a rule

To reference a Data Model field in a rule:

  1. In a rule's YAML file, ensure the LogTypes field contains the log type of the Data Model you'd like applied:

    AnalysisType: rule
    DedupPeriodMinutes: 60
    DisplayName: DataModel Example Rule
    Enabled: true
    Filename: my_new_rule.py
    RuleID: DataModel.Example.Rule
    Severity: High
    LogTypes:
      # Add LogTypes where this rule is applicable
      # and a Data Model exists for that LogType
      - AWS.CloudTrail
    Tags:
      - Tags
    Description: >
      This rule exists to validate the CLI workflows of the Panther CLI
    Runbook: >
      First, find out who wrote this spec format, then notify them with feedback.
    Tests:
      - Name: test rule
        ExpectedResult: true
        # Add the LogType to the test specification in the 'p_log_type' field
        Log: {
          "p_log_type": "AWS.CloudTrail"
        }
  2. Add the log type to all the Rule's Test cases, in the p_log_type field.

  3. Use the event.udm() method in the rule's Python logic:

     def rule(event):
         # filter events on unified data model field
         return event.udm('event_type') == 'failed_login'


     def title(event):
         # use unified data model field in title
         return '{}: User [{}] from IP [{}] has exceeded the failed logins threshold'.format(
             event.get('p_log_type'), event.udm('actor_user'),
             event.udm('source_ip'))

Using Data Models with Enrichment

Panther provides a built-in method on the event object called event.udm_path(). It returns the original path that was used for the Data Model.

AWS.VPCFlow logs example

In the example below, calling event.udm_path('destination_ip') will return 'dstAddr', since this is the path defined in the Panther-managed Data Model for AWS.VPCFlow.

from panther_base_helpers import deep_get

def rule(event):
    # Always return True so every event triggers an alert (for demonstration)
    return True

def title(event):
    # Returns the original field path, e.g. 'dstAddr' for AWS.VPCFlow
    return event.udm_path('destination_ip')

def alert_context(event):
    # Look up the enrichment stored under the original field path
    enriched_data = deep_get(event, 'p_enrichment', 'lookup_table_name', event.udm_path('destination_ip'))
    return {'enriched_data': enriched_data}

To test this, we can use this test case:

{
  "p_log_type": "AWS.VPCFlow",
  "dstAddr": "1.1.1.1",
  "p_enrichment": {
    "lookup_table_name": {
      "dstAddr": {
        "datakey": "datavalue"
      }
    }
  }
}

The test case returns an alert with an Alert Context containing the value stored under dstAddr ({"datakey": "datavalue"}) as the value of enriched_data.

Testing Data Models

To test a Data Model, write unit tests for a detection that references a Data Model mapping using event.udm() in its rule() logic.
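
For example, a unit test for the CloudTrail rule above could look like the following sketch (the event values are illustrative; given the get_event_type method defined earlier, this event maps to failed_login):

Tests:
  - Name: IAM User Failed Console Login
    ExpectedResult: true
    Log: {
      "p_log_type": "AWS.CloudTrail",
      "eventName": "ConsoleLogin",
      "userIdentity": {
        "type": "IAMUser",
        "userName": "example-user"
      },
      "responseElements": {
        "ConsoleLogin": "Failure"
      },
      "sourceIPAddress": "192.0.2.1",
      "userAgent": "Mozilla/5.0"
    }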

DataModel specification reference

A complete list of DataModel specification fields:

| Field name | Required | Description | Expected value |
| --- | --- | --- | --- |
| AnalysisType | Yes | Indicates whether this specification is defining a rule, policy, data model, or global | datamodel |
| DataModelID | Yes | The unique identifier of the Data Model | String |
| DisplayName | No | What name to display in the UI and alerts. The DataModelID will be displayed if this field is not set. | String |
| Enabled | Yes | Whether this Data Model is enabled | Boolean |
| Filename | No | The path (with file extension) to the Python Data Model body | String |
| LogTypes | Yes | Which log type this Data Model will apply to | Singleton list of strings. Note: Although LogTypes accepts a list, only one log type per Data Model is supported. |
| Mappings | Yes | Mapping from a source field name or method to a unified Data Model field name | List of Mappings |

DataModel Mappings

Mappings translate LogType fields to unified Data Model fields. Each Mappings entry must define:

  • Name: How you will reference this Data Model field in detections (via event.udm()).

  • One of:

    • Path: The path to the field in the original log type's schema. This value can be a simple field name or a JSON path (jsonpath-ng syntax). For more information about jsonpath-ng, see pypi.org's documentation.

    • Method: The name of the method that computes the value. The method must be defined in the file listed in the Data Model specification's Filename field.

Example:

Mappings:
  - Name: source_ip
    Path: srcIp
  - Name: user
    Path: $.events[*].parameters[?(@.name == 'USER_EMAIL')].value
  - Name: event_type
    Method: get_event_type

The Path value of the user mapping has logic that checks whether the USER_EMAIL event field exists. Learn more in Evaluating whether a field exists in Path, above.

Panther-managed Data Model mapping names

The mapping names are described below. When creating your own Data Model mappings, you may use the names below in addition to custom ones.

| Data Model mapping name | Description |
| --- | --- |
| actor_user | ID or username of the user whose action triggered the event. |
| assigned_admin_role | Admin role ID or name assigned to a user in the event. |
| destination_ip | Destination IP for the traffic. |
| destination_port | Destination port for the traffic. |
| event_type | Custom description for the type of event. Out-of-the-box support for event types can be found in the panther_event_type_helpers.py global. |
| http_status | Numeric HTTP status code for the traffic. |
| source_ip | Source IP for the traffic. |
| source_port | Source port for the traffic. |
| user_agent | User agent associated with the client in the event. |
| user | ID or username of the user that was acted upon to trigger the event. |
