Standard Fields

Panther's log analysis applies normalization fields to all log records

Panther's log analysis applies normalization fields (IPs, domains, etc.) to all log records. These fields provide standard names for attributes across all data sources, enabling fast and easy data correlation.

For example, each data source records the time an event occurred, but sources rarely name that attribute the same way, nor is the associated timestamp guaranteed to use a timezone consistent with other data sources.

The Panther attribute p_event_time is mapped to each data source's corresponding event time and normalized to UTC. This way you can query over multiple data sources, joining and ordering by p_event_time, to properly align and correlate the data despite the disparate schemas of each source.

All appended standard fields begin with p_.

Required Fields

The fields below are appended to all log records:

| Field Name | Type | Description |
| --- | --- | --- |
| p_log_type | string | The type of log. |
| p_row_id | string | Unique id (UUID) for the row. |
| p_event_time | timestamp | The associated event time for the log type is copied here and normalized to UTC. Format: YYYY-MM-DD HH:MM:SS.fff |
| p_parse_time | timestamp | The current time when the event was parsed, normalized to UTC. Format: YYYY-MM-DD HH:MM:SS.fff |
| p_schema_version | integer | The version of the schema used for this row. |
| p_source_id | string | The Panther-generated internal id for the source integration. |
| p_source_label | string | The user-supplied label for the source integration (may change if edited). |
| p_source_file | object | Available for S3 sources only; contains metadata of the file that this event originated from, including the bucket name and object key. |

If an event does not have a timestamp, then p_event_time will be set to p_parse_time, which is the time the event was parsed.

The p_source_id and p_source_label fields indicate where the data originated. For example, you might have multiple CloudTrail sources registered with Panther, each with a unique name (e.g., "Dev Accounts", "Production Accounts", "HR Accounts", etc.). These fields allow you to separate data based on the source, which is beneficial when configuring detections in Panther.
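For instance, a Python rule can branch on p_source_label to apply source-specific logic. A minimal sketch (the source label and event field below are hypothetical):

def rule(event):
    # p_source_label is the user-supplied name of the log source integration,
    # e.g. "Production Accounts" vs. "Dev Accounts".
    if event.get("p_source_label") != "Production Accounts":
        return False
    # Only events from the production source are held to this stricter check.
    return event.get("eventName") == "ConsoleLogin"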

In addition, the fields below are appended to log records of all tables in the panther_rule_matches database:

| Field Name in panther_rule_matches | Type | Description |
| --- | --- | --- |
| p_alert_id | string | Id of the alert related to the row. |
| p_alert_creation_time | timestamp | Time of creation of the alert related to the row. |
| p_alert_context | object | A JSON object returned from the rule's alert_context() function. |
| p_alert_severity | string | The severity level of the rule at the time of the alert. This could differ from the default severity, as it can be set dynamically. |
| p_alert_update_time | timestamp | Time of the last update to the alert related to the row. |
| p_rule_id | string | The id of the rule that generated the alert. |
| p_rule_error | string | The error message if there was an error running the rule. |
| p_rule_reports | map[string]array[string] | List of user-defined rule reporting tags related to the row. |
| p_rule_severity | string | The default severity of the rule. |
| p_rule_tags | array[string] | List of user-defined rule tags related to the row. |

Core Fields

Panther Core Fields will be removed on September 29, 2025. Before then, it is strongly recommended to remove udm mappings from your custom log schemas, as well as references to p_udm fields in your detections and Saved Searches. Learn more about this change here.

Core Fields make up the Panther Unified Data Model (UDM). They normalize data from various sources into a consistent structure while maintaining its context. This makes Core Fields useful for searching and writing detections across log types.

The Panther UDM fields help define user and machine attributes. The user performing the action in the log (i.e., the actor) is represented as user, while machines are represented as either source or destination.

Learn how to map fields in your custom log source schemas to Core Fields below, in Mapping Core Fields in Custom Log Schemas. In certain cases, you may want to map one event field to both an Indicator (p_any) field and a Core Field; learn more in Core Fields vs. Indicator Fields. Core Fields also differ from Data Models for detections; their differences are described below, in Core Fields vs. Data Models in Python detections.

The Panther-managed log types listed below have UDM field mappings configured:

Supported log types with UDM mappings
AWS.ALB
AWS.AWSCloudtrail
AWS.GuardDuty
AWS.S3ServerAccess
AWS.VPCDNS
AWS.VPCFlow
AWS.WAFWebACL
AWS.AmazonEKSAudit

Cloudflare.CloudflareAudit
Cloudflare.CloudflareHTTPRequest
Cloudflare.CloudflareFirewall
Cloudflare.CloudflareSpectrum

Crowdstrike.CrowdstrikeFDREvent
Crowdstrike.CrowdstrikeActivityAudit
Crowdstrike.CrowdstrikeDetectionsSummary
Crowdstrike.CrowdstrikeDNSRequest
Crowdstrike.CrowdstrikeGroupIdentity
Crowdstrike.CrowdstrikeNetworkConnect
Crowdstrike.CrowdstrikeNetworkListen
Crowdstrike.CrowdstrikeProcessRollup2
Crowdstrike.CrowdstrikeUserIdentity
Crowdstrike.CrowdstrikeUserInfo
Crowdstrike.CrowdstrikeUserLogonLogoff

Duo.DuoAdministrator
Duo.DuoAuthentication
Duo.DuoOfflineEnrollment

GCP.GCPAudit
GCP.GCPHTTPLoadBalancer

GitHub.GithubAudit
GitHub.GithubWebhook

GitLab.GitLabAPI
GitLab.GitLabAudit
GitLab.GitLabProduction

Gsuite.GsuiteReports
GSuite.GsuiteActivityEvent

Linux.LinuxAuditd

Microsoft.AuditAzureActiveDirectory
Microsoft.AuditExchange
Microsoft.AuditGeneral
Microsoft.AuditSharepoint
Microsoft.DLP
Microsoft.GraphSecurityAlert

Notion.NotionAudit

Okta.OktaSystemlog

OnePassword.OnePasswordAuditEvent
OnePassword.OnePasswordItemUsage
OnePassword.OnePasswordSignInAttempt

osquery.OSQueryBatch
osquery.OSQueryDifferential
osquery.OSQuerySnapshot
osquery.OSQueryStatus

Panther.PantherAudit

SentinelOne.Activity
SentinelOne.DeepVisibilityv2

Slack.SlackAccess
Slack.SlackAudit
Slack.SlackIntegration

Events ingested prior to the Panther UDM being enabled in your Panther instance will not contain Core Fields.

See the full list of Panther Core Fields (also known as UDM fields) below:

| Field Name | Display Name | Description | Example |
| --- | --- | --- | --- |
| destination.address | Destination Address | Some event destination addresses are defined ambiguously: the event will sometimes list an IP, a domain, or a Unix socket. Always store the raw address in the .address field, then duplicate it to .mac, .ip, or .domain, depending on which one it is. If multiple apply, prefer domain over IP, over MAC. | foo.acme.com, 1.1.1.1, 1.1.1.1:55001 |
| destination.arns | Destination ARNs | ARNs associated with the destination resource. | ["arn:aws:iam::560769337183:role/PantherLogProcessingRole-panther-account-us-west-2"] |
| destination.bytes | Destination Bytes | Bytes sent from the destination to the source. | 192 |
| destination.domain | Destination Domain | The domain name of the destination system. This value may be a host name, a fully qualified domain name, or another host naming format. The value may derive from the original event or be added from enrichment. | foo.acme.com |
| destination.ip | Destination IP | IP address of the destination (IPv4 or IPv6). | 1.1.1.1 |
| destination.mac | Destination MAC | MAC address of the destination. The notation format from RFC 7042 is suggested: each octet (8-bit byte) is represented by two uppercase hexadecimal digits giving the value of the octet as an unsigned integer, and successive octets are separated by a hyphen. | 00-00-5E-00-53-23 |
| destination.port | Destination Port | Port of the destination. | 80 |
| source.address | Source Address | Some event source addresses are defined ambiguously: the event will sometimes list an IP, a domain, or a Unix socket. Always store the raw address in the .address field, then duplicate it to .mac, .ip, or .domain, depending on which one it is. If multiple apply, prefer domain over IP, over MAC. | foo.acme.com, 1.1.1.1, 1.1.1.1:55001 |
| source.arns | Source ARNs | ARNs associated with the source resource. | ["arn:aws:iam::560769337183:role/PantherLogProcessingRole-panther-account-us-west-2"] |
| source.bytes | Source Bytes | Bytes sent from the source to the destination. | 192 |
| source.domain | Source Domain | The domain name of the source system. This value may be a host name, a fully qualified domain name, or another host naming format. The value may derive from the original event or be added from enrichment. | foo.acme.com |
| source.ip | Source IP | IP address of the source (IPv4 or IPv6). | 1.1.1.1 |
| source.mac | Source MAC | MAC address of the source. The notation format from RFC 7042 is suggested: each octet (8-bit byte) is represented by two uppercase hexadecimal digits giving the value of the octet as an unsigned integer, and successive octets are separated by a hyphen. | 00-00-5E-00-53-23 |
| source.port | Source Port | Port of the source. | 80 |
| user.arns | User ARNs | ARNs associated with the user. | ["arn:aws:iam::560769337183:role/PantherLogProcessingRole-panther-account-us-west-2"] |
| user.email | User Email | User email address. | bob.sanders@acme.com |
| user.full_name | User Full Name | User's full name, if available. | Bob Sanders |
| user.name | User Display Name | Short name or login of the user. | b.sanders |
| user.provider_id | User Provider ID | The ID given by another provider for the user, if it exists. | abc123 |

Core Fields vs. Indicator Fields

It may make sense to classify certain event fields as both Core (UDM) and Indicator (p_any) fields. For example, you might map one event field to the destination.ip UDM field, another event field to the source.ip UDM field, and include the ip indicator on both fields, so that the value of each one is included in p_any_ip_addresses.

In general, when a field can be classified as both a UDM and a p_any field, the UDM mapping maintains relationship information, while the p_any field only records that the value appeared somewhere in the event (it does not indicate whether the value came from the side performing the action or the side the action was performed on).

When a field can be classified as both a UDM and a p_any field, it is recommended to create both mappings. In cross-log detections and searches, this lets you either ask whether the value was present in the log at all (using the p_any field) or whether the value came from a particular side of the relationship (using the UDM field).
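As an illustrative sketch of using both mappings together in a Python detection (the IP value is hypothetical):

SUSPICIOUS_IP = "192.0.2.44"  # hypothetical indicator of compromise

def rule(event):
    # p_any_ip_addresses answers: "did this IP appear anywhere in the event?"
    if SUSPICIOUS_IP not in (event.get("p_any_ip_addresses") or []):
        return False
    # The UDM field answers: "was it specifically the source of the activity?"
    return event.udm("source.ip") == SUSPICIOUS_IP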

Core Fields vs. Data Models in Python detections

In addition to Core Fields, Panther supports Data Models for detections, which allow you to define common aliases for event fields across log types that can then be referenced in Python detections. Below are key differences between Data Models for detections and Core Fields:

  • When to use each one:

    • Use Data Models for detections if your objective is only to write detections that can use one alias to reference differently named fields in various log types.

    • Use Core Fields if, in addition to the above ability in detections, you would also like the convenience of being able to use a single field to search your data lake for values in those differently named event fields across log types.

  • How each one is defined:

    • Core Fields are defined in a log schema.

    • Data Models for detections, while specific to a log type, are defined separately from the log schema: in the Data Models tab in the Panther Console, or in a datamodel file in the CLI workflow.

  • How each one transforms an incoming log:

    • When a Core Field is mapped, for each incoming event of that log type, the Core Field/value pair is added within the event's p_udm object.

    • Creating an alias in a Data Model for detections does not alter the structure of incoming events.

  • How you can access each one in a Python detection (see the sketch below this list):

    • To access a Data Model for detections field, use the udm() function on the event object. For example: event.udm(...)

    • To access a Core Field, use either the deep_get() or udm() function on the event object. For example: event.deep_get("p_udm", ...) or event.udm(...). Note that it is only possible to use udm() to access a Core Field if there is not also a Data Model for detections mapping defined with the same key name. Learn more about how udm() works here.

If you are using both Core Fields and Data Models for detections, a naming conflict can arise if your Data Model alias name begins with p_udm. In these cases, the Core Field takes precedence. For example, say you mapped an event field to the source.ip Core Field and defined a Data Model alias called p_udm.source.ip. In your Python detection, when calling event.udm("p_udm.source.ip"), the value of the event field mapped to the Core Field (not the Data Model alias) is used.
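A minimal sketch of both access patterns, assuming a log type whose schema maps an event field to the user.email Core Field and whose Data Model defines a hypothetical actor_email alias:

def rule(event):
    # Core Field: stored within the event's p_udm object (assuming the nested
    # p_udm.user.email layout described above).
    core_email = event.deep_get("p_udm", "user", "email")
    # udm() also resolves Core Fields, provided no Data Model alias shares the name.
    core_email_via_udm = event.udm("user.email")

    # Data Model for detections: an alias resolved at detection time; it does not
    # alter the stored event. "actor_email" is a hypothetical alias name.
    aliased_email = event.udm("actor_email")

    # Illustrative logic: alert when the actor's email is outside the company domain.
    email = core_email or core_email_via_udm or aliased_email
    return bool(email) and not email.endswith("@acme.com")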

Mapping Core Fields in Custom Log Schemas

You can map fields in custom log schemas to Core Fields.

To map fields in a custom log schema to a Core Field:

  1. In the left-hand navigation bar of your Panther Console, click Configure > Schemas.

  2. Locate the custom schema you'd like to map Core Fields within and, in the upper right corner of its tile, click the three dots icon > Edit.

  3. In the code editor, under the existing schema definition, add a udm key.

The udm field takes a list of name and paths pairs:

    • The name key takes one of the values in the "Field Name" column of the Core Fields table above. The value is denoted with JSON dot notation.

    • The paths key takes a list of path keys, whose value is the path to the event key whose value you'd like to set for the UDM field indicated in name. The paths list is evaluated in order, and the first non-null path value is set on the UDM field. The value is denoted with JSON dot notation.

Example:

schema: MySchema
fields:
   - name: actor
     type: object
     fields:
         - name: email
           type: string
         - name: name
           type: string
   - name: eventType
     type: string
   - name: eventName
     type: string

# Below is the new section. All the names here leave off the p_udm prefix.
# For example user.email here will correspond to p_udm.user.email
udm:
- name: user.email
  paths:
  - path: actor.email
  - path: actor.name
- name: user.name
  paths:
  - path: actor.name
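With that mapping in place, an incoming event would roughly be stored as sketched below (a hypothetical event, assuming the nested p_udm layout noted in the comment above):

# Hypothetical MySchema event as ingested...
incoming = {
    "actor": {"email": "bob.sanders@acme.com", "name": "b.sanders"},
    "eventType": "login",
    "eventName": "user.login",
}

# ...and the same event after the udm mapping is applied: actor.email was the
# first non-null path for user.email, and actor.name populated user.name.
stored = {
    **incoming,
    "p_udm": {"user": {"email": "bob.sanders@acme.com", "name": "b.sanders"}},
}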

Indicator Fields

A common security question is, "Was some indicator ever observed in any of our logs?" Panther's Search tool enables you to find the answer by searching across data from all of your various log sources.

As log events are ingested, the indicators field in their corresponding schema identifies which fields should have their values extracted into p_any_ fields, which are appended to and stored with the event. The table below shows which p_any_ field(s) data is extracted into, by indicator. All p_any_ fields are lists.

In order for a field to be designated as an indicator in a schema, it must be of type string.

When constructing a custom schema, you can use the values in the Indicator Name column in the table below in your schema's indicators field. Each of the rows (except for hostname, net_addr, and url) corresponds to a "Panther Fields" option in Search. You may want to map certain event fields to Core (UDM) Fields in addition to Indicator Fields; learn more in Core Fields vs. Indicator Fields above.

Note that field name/value pairs outside of the fields in the table below can be searched with Search's key/value filter expression functionality, though because those fields have not been mapped to corresponding ones (with different syntax) in different log sources, only matches from log sources containing the exact field name searched will be returned.

| Indicator Name | Extracted into fields | Description |
| --- | --- | --- |
| actor_id | p_any_actor_ids | Append value to p_any_actor_ids. |
| aws_account_id | p_any_aws_account_ids | If the value is a valid AWS account id, append to p_any_aws_account_ids. |
| aws_arn | p_any_aws_arns, p_any_aws_instance_ids, p_any_aws_account_ids, p_any_emails | If the value is a valid AWS ARN, append to p_any_aws_arns. If the ARN contains an AWS account id, extract and append it to p_any_aws_account_ids. If the ARN contains an EC2 instance id, extract and append it to p_any_aws_instance_ids. If the ARN references an AWS STS assumed role and contains an email address, extract the email address into p_any_emails. |
| aws_instance_id | p_any_aws_instance_ids | If the value is a valid AWS instance id, append to p_any_aws_instance_ids. |
| aws_tag | p_any_aws_tags | Append value to p_any_aws_tags. |
| domain | p_any_domain_names | Append value to p_any_domain_names. |
| email | p_any_emails | If the value is a valid email address, append it to p_any_emails. The portion of the value that precedes @ is also populated in p_any_usernames. |
| hostname | p_any_domain_names, p_any_ip_addresses | Append value to p_any_domain_names. If the value is a valid IPv4 or IPv6 address, append to p_any_ip_addresses. |
| ip | p_any_ip_addresses | If the value is a valid IPv4 or IPv6 address, append to p_any_ip_addresses. |
| mac | p_any_mac_addresses | If the value is a valid IEEE 802 MAC-48, EUI-48, EUI-64, or 20-octet IP-over-InfiniBand link-layer address, append to p_any_mac_addresses. |
| md5 | p_any_md5_hashes | If the value is a valid MD5 hash, append it to p_any_md5_hashes. |
| net_addr | p_any_domain_names, p_any_ip_addresses | Extracts from values of the form <host>:<port>. Append the host portion to p_any_domain_names. If the host portion is a valid IPv4 or IPv6 address, append to p_any_ip_addresses. |
| serial_number | p_any_serial_numbers | Append value to p_any_serial_numbers. (This indicator is in closed beta starting with Panther version 1.69. To share any bug reports or feature requests, please contact your Panther support team.) |
| sha1 | p_any_sha1_hashes | If the value is a valid SHA-1 hash, append to p_any_sha1_hashes. |
| sha256 | p_any_sha256_hashes | If the value is a valid SHA-256 hash, append to p_any_sha256_hashes. |
| trace_id | p_any_trace_ids | Append value to p_any_trace_ids. Tag fields such as session ids and document ids that are used to associate elements with other logs in order to trace the full activity of a sequence of related events. |
| url | p_any_domain_names, p_any_ip_addresses | Parse the URL and extract the host portion after "http://" or "https://". Append the host portion to p_any_domain_names. If the host portion is a valid IPv4 or IPv6 address, append to p_any_ip_addresses. |
| username | p_any_usernames | Append value to p_any_usernames. This field is also populated with values marked with the email indicator: the portion of the email value that precedes @ is appended to this field. |
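For example, an event field marked with the email indicator would, roughly sketched, produce the following appended fields (the surrounding event is hypothetical):

# Hypothetical event after ingestion: the email indicator extracted the address
# into p_any_emails, and the portion before the @ into p_any_usernames.
event = {
    "actorEmail": "bob.sanders@acme.com",  # original event field
    "p_any_emails": ["bob.sanders@acme.com"],
    "p_any_usernames": ["bob.sanders"],
}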

Enrichment Fields

The Panther rules engine takes the looked-up matches from Lookup Tables and appends that data to the event using the key p_enrichment, in the following JSON structure:

{
    'p_enrichment': {
        <name of lookup table>: {
            <key in log that matched>: <matching row looked up>,
            ...
            <key in log that matched>: <matching row looked up>,
        }
    }
}
| Enrichment Field Name | Type | Description |
| --- | --- | --- |
| p_enrichment | object | Dictionary of lookup results where matching rows were found. |
| p_match | string | p_match is injected into the data of each matching row within p_enrichment. Its value is the value that matched in the event. |
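As a rough sketch (the Lookup Table name and event key here are hypothetical), a Python rule can read these fields with deep_get:

def rule(event):
    # Row that the hypothetical "bad_ips" Lookup Table matched against this
    # event's srcIp field, if any.
    match = event.deep_get("p_enrichment", "bad_ips", "srcIp")
    if not match:
        return False
    # p_match holds the event value that triggered the lookup match.
    return match.get("p_match") == event.get("srcIp")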

The "all_logs" view

Panther manages a view over all data sources with standard fields.

This allows you to answer questions such as, "Was there any activity from some-bad-ip, and if so, where?"

The query below will show how many records, by log type, are associated with the IP address 95.123.145.92:

For Snowflake data lakes:

SELECT
  p_log_type, count(1) AS row_count
FROM panther_views.public.all_logs
WHERE p_occurs_between('2020-1-30', '2020-1-31')
  AND array_contains('95.123.145.92'::variant, p_any_ip_addresses)
GROUP BY p_log_type

For Athena data lakes:

SELECT
  p_log_type, count(1) AS row_count
FROM panther_views.all_logs
WHERE p_occurs_between('2020-1-30', '2020-1-31')
  AND contains(p_any_ip_addresses, '95.123.145.92')
GROUP BY p_log_type

From these results, you can pivot to the specific logs where activity is indicated.

Standard Fields in detections

The Panther standard fields can be used in detections.

For example, the Python rule below triggers when any GuardDuty alert is on a resource tagged as Critical:

def rule(event):
    # p_any_aws_tags collects AWS tag values extracted from the event.
    if 'p_any_aws_tags' in event:
        for tag in event['p_any_aws_tags']:
            # Case-insensitive match so tags like "Critical" or "critical" both trigger.
            if 'critical' in tag.lower():
                return True
    return False