Standard Fields

Panther's log analysis applies normalization fields (IPs, domains, etc.) to all log records. These fields provide standard names for attributes across all data sources, enabling fast and easy data correlation.

For example, each data source records a time at which an event occurred, but different sources are unlikely to name that attribute the same way, nor is the associated time guaranteed to carry a timezone consistent with other sources.

The Panther attribute p_event_time is mapped to each data source's corresponding event time and normalized to UTC. This lets you query over multiple data sources, joining and ordering by p_event_time to properly align and correlate the data despite the disparate schemas of each source.
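As a sketch of what this normalization accomplishes, consider two sources that name their event time differently and use different timezones (the field names eventTime and timestamp below are hypothetical):

```python
from datetime import datetime, timezone, timedelta

# Illustrative sketch: two sources with differently named timestamp
# fields and different timezones, both mapped to a UTC p_event_time.
source_a = {"eventTime": datetime(2020, 1, 30, 9, 0, tzinfo=timezone(timedelta(hours=-5)))}
source_b = {"timestamp": datetime(2020, 1, 30, 14, 30, tzinfo=timezone.utc)}

source_a["p_event_time"] = source_a["eventTime"].astimezone(timezone.utc)
source_b["p_event_time"] = source_b["timestamp"].astimezone(timezone.utc)

# 09:00 UTC-5 is 14:00 UTC, so source_a sorts first despite its
# later-looking local timestamp.
events = sorted([source_a, source_b], key=lambda e: e["p_event_time"])
```

Once both records share a normalized p_event_time, ordering and joining them no longer depends on each source's native timestamp format.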

All appended standard fields begin with p_.

Required Fields

The fields below are appended to all log records:

If an event does not have a timestamp, then p_event_time will be set to p_parse_time, which is the time the event was parsed.

The p_source_id and p_source_label fields indicate where the data originated. For example, you might have multiple CloudTrail sources registered with Panther, each with a unique name (e.g., "Dev Accounts", "Production Accounts", "HR Accounts", etc.). These fields allow you to separate data based on the source, which is beneficial when configuring detections in Panther.
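For instance, a Python detection can be scoped to a single registered source by checking p_source_label. In this hedged sketch, the label "Production Accounts" and the DeleteTrail condition are illustrative assumptions, not Panther-provided values:

```python
# Hypothetical detection: only consider events from the CloudTrail
# source registered under the label "Production Accounts".
def rule(event):
    if event.get("p_source_label") != "Production Accounts":
        return False
    # Example condition: alert on trail deletion in production only.
    return event.get("eventName") == "DeleteTrail"
```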

In addition, the fields below are appended to log records of all tables in the panther_rule_matches database:

Core Fields

Panther Core Fields are in open beta starting with Panther version 1.86, and are available to all customers. Please share any bug reports and feature requests with your Panther support team.

Core Fields make up the Panther Unified Data Model (UDM). They normalize data from various sources into a consistent structure while maintaining its context. This makes Core Fields useful for searching and writing detections across log types.

The Panther UDM fields help define user and machine attributes. The user performing the action in the log (i.e. the actor) is represented as user, while machines are represented as either source or destination.

Learn how to map fields in your custom log source schemas to Core Fields below, in Mapping Core Fields in custom log schemas. In certain cases, you may want to map one event field to both an Indicator (p_any) field, and a Core Field—learn more in Core Fields vs. Indicator Fields. Core Fields also differ from Data Models for detections; their differences are described below, in Core Fields vs. Data Models in Python detections.

The Panther-managed log types listed below have UDM field mappings configured:

Supported log types with UDM mappings
AWS.ALB
AWS.AWSCloudtrail
AWS.GuardDuty
AWS.S3ServerAccess
AWS.VPCDNS
AWS.VPCFlow
AWS.WAFWebACL
AWS.AmazonEKSAudit

Cloudflare.CloudflareAudit
Cloudflare.CloudflareHTTPRequest
Cloudflare.CloudflareFirewall
Cloudflare.CloudflareSpectrum

Crowdstrike.CrowdstrikeFDREvent
Crowdstrike.CrowdstrikeActivityAudit
Crowdstrike.CrowdstrikeDetectionsSummary
Crowdstrike.CrowdstrikeDNSRequest
Crowdstrike.CrowdstrikeGroupIdentity
Crowdstrike.CrowdstrikeNetworkConnect
Crowdstrike.CrowdstrikeNetworkListen
Crowdstrike.CrowdstrikeProcessRollup2
Crowdstrike.CrowdstrikeUserIdentity
Crowdstrike.CrowdstrikeUserInfo
Crowdstrike.CrowdstrikeUserLogonLogoff

Duo.DuoAdministrator
Duo.DuoAuthentication
Duo.DuoOfflineEnrollment
GCP.GCPAudit
GCP.GCPHTTPLoadBalancer

GitLab.GitLabAPI
GitLab.GitLabAudit
GitLab.GitLabProduction

GSuite.GsuiteReports
GSuite.GsuiteActivityEvent

Microsoft.AuditAzureActiveDirectory
Microsoft.AuditExchange
Microsoft.AuditGeneral
Microsoft.AuditSharepoint
Microsoft.DLP

Notion.NotionAudit

osquery.OSQueryBatch
osquery.OSQueryDifferential
osquery.OSQuerySnapshot
osquery.OSQueryStatus

OnePassword.OnePasswordAuditEvent
OnePassword.OnePasswordItemUsage
OnePassword.OnePasswordSignInAttempt

Slack.SlackAccess
Slack.SlackAudit
Slack.SlackIntegration

Events ingested prior to the Panther UDM being enabled in your Panther instance will not contain Core Fields.

See the full list of Panther Core Fields (also known as UDM fields) below:

Core Fields vs. Indicator Fields

It may make sense to classify certain event fields as both Core (UDM) and Indicator (p_any) fields. For example, you might map one event field to the destination.ip UDM field, another event field to the source.ip UDM field, and include the ip indicator on both fields, so that the value of each one may be included in p_any_ip_addresses.

In general, when a field can be classified as both a UDM and p_any field, the UDM mapping maintains relationship information, while the p_any field records that the value was in the event at all (but does not indicate whether it came from the side performing the action or having the action performed on it).

When a field can be classified as both a UDM and p_any field, it is recommended to create both mappings. This will allow you to, in cross-log detections and searches, either include logic that asks if the value was present in the log at all (using the p_any field), or if the value came from a certain side of the relationship (the UDM field).
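As a hedged sketch of that dual use in a Python detection (the watchlist value is illustrative), the p_any field answers "was this IP anywhere in the event?", while the UDM field answers "was it the side performing the action?":

```python
# Hypothetical detection using both mappings of the same value.
WATCHLIST = {"95.123.145.92"}

def rule(event):
    # p_any check: was a watchlisted IP present anywhere in the event?
    if not WATCHLIST & set(event.get("p_any_ip_addresses", [])):
        return False
    # UDM check: only alert when the watchlisted IP was the source.
    return event.get("p_udm", {}).get("source", {}).get("ip") in WATCHLIST
```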

Core Fields vs. Data Models in Python detections

In addition to Core Fields, Panther supports Data Models in detections, which let you define common aliases for event fields across log types that can then be referenced in Python detections. Below are key differences between Core Fields and Data Models for detections:

  • When to use each one:

    • Use Data Models for detections if your objective is only to write detections that can use one alias to reference differently named fields in various log types.

    • Use Core Fields if, in addition to the above ability in detections, you would also like the convenience of being able to use a single field to search your data lake for values in those differently named event fields across detection types.

  • How each one is defined:

    • Core Fields are mapped inside the log schema itself, under a udm key (see Mapping Core Fields in Custom Log Schemas, below).

    • Data Models for detections are defined separately from the schema, as standalone Data Model entities associated with a log type.

  • How each one transforms an incoming log:

    • When a Core Field is mapped, for each incoming event of that log type, the Core Field/value pair will be added within the event’s p_udm object.

    • Creating an alias in Data Models for detections does not alter the event structure.

  • How you would access each one in a Python detection:

    • To access a Core Field in a Python detection, you would use event.deep_get("p_udm", ...).

    • To access a Data Model for detections field, you would use event.udm(...).

If you are using both Core Fields and Data Models for detections, a naming conflict can arise if your Data Models for detections alias name begins with p_udm.. In these cases, the Core Field takes precedence. For example, say you mapped an event field to the source.ip Core Field and also defined a Data Models for detections alias called p_udm.source.ip. In your Python detection, calling event.udm("p_udm.source.ip") will return the value of the event field mapped to the Core Field, not the Data Model for detections alias.
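As a concrete sketch of the access pattern for Core Fields: in a real Panther detection you would call event.deep_get("p_udm", "source", "ip") on the event object; here a plain-dict helper stands in for that method so the behavior can be seen end to end:

```python
# Stand-in for the Panther event object's deep_get method: walk nested
# dict keys, returning a default when any level is missing.
def deep_get(obj, *keys, default=None):
    for key in keys:
        obj = obj.get(key) if isinstance(obj, dict) else None
    return obj if obj is not None else default

# Core Field values live under the appended p_udm object.
event = {"p_udm": {"source": {"ip": "10.0.0.1"}}}
src_ip = deep_get(event, "p_udm", "source", "ip")  # "10.0.0.1"
```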

Mapping Core Fields in Custom Log Schemas

You can map fields in custom log schemas to Core Fields.

To map fields in a custom log schema to a Core Field:

  1. In the left-hand navigation bar of your Panther Console, click Configure > Schemas.

  2. In the code editor, under the existing schema definition, add a udm key.

  3. The udm field takes a list of name and paths pairs.

    • The name key takes one of the values in the "Field Name" column of the Core Fields table above. The value is denoted with JSON dot notation.

    • The paths key takes a list of path keys, each of whose value is the path to the event key whose value you'd like to set for the UDM field indicated in name. The paths list is evaluated in order, and the first non-null path value is assigned to the UDM field. The value is denoted with JSON dot notation.

Example:

schema: MySchema
fields:
- name: actor
  type: object
  fields:
  - name: email
    type: string
  - name: name
    type: string
- name: eventType
  type: string
- name: eventName
  type: string

# Below is the new section. All the names here leave off the p_udm prefix.
# For example user.email here will correspond to p_udm.user.email
udm:
- name: user.email
  paths:
  - path: actor.email
  - path: actor.name
- name: user.name
  paths:
  - path: actor.name
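As an illustrative sketch (not Panther's implementation) of how the mapping above resolves, paths are tried in order and the first non-null value wins, so user.email falls back to actor.name when actor.email is absent:

```python
# Resolve a UDM field value from an ordered list of dot-notation paths:
# the first path that yields a non-null value is used.
def resolve_udm(event, paths):
    for path in paths:
        value = event
        for key in path.split("."):
            value = value.get(key) if isinstance(value, dict) else None
            if value is None:
                break
        if value is not None:
            return value
    return None

resolve_udm({"actor": {"name": "jane"}}, ["actor.email", "actor.name"])  # "jane"
```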

Indicator Fields

A common security question is, “Was some indicator ever observed in any of our logs?” Panther's Search tool enables you to find the answer by searching across data from all of your various log sources.

As log events are ingested, the indicators field in their corresponding schema identifies which fields should have their values extracted into p_any_ fields, which are appended to and stored with the event. The table below shows which p_any_ field(s) data is extracted into, by indicator.

When constructing a custom schema, you can use the values in the Indicator Name column in the table below in your schema's indicators field. Each of the rows (except for hostname, net_addr, and url) corresponds to a "Panther Fields" option in Search. You may want to map certain event fields to Core (UDM) Fields, in addition to Indicator Fields. Learn more in Core Fields vs. Indicator Fields.

Note that field name/value pairs outside of the fields in the table below can be searched with Search's key/value filter expression functionality. However, because those fields have not been mapped to corresponding (differently named) fields in other log sources, only matches from log sources containing the exact field name searched will be returned.
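As an illustrative sketch (not Panther's implementation) of the extraction described above, assume a schema that tags the hypothetical fields srcIp and dstIp with the ip indicator:

```python
# Fields tagged with the "ip" indicator have their values collected,
# de-duplicated, and stored in the appended p_any_ip_addresses field.
def extract_ip_indicators(event, ip_fields):
    values = {event[f] for f in ip_fields if event.get(f)}
    event["p_any_ip_addresses"] = sorted(values)
    return event

event = extract_ip_indicators(
    {"srcIp": "95.123.145.92", "dstIp": "10.0.0.1"}, ["srcIp", "dstIp"]
)
# event["p_any_ip_addresses"] == ["10.0.0.1", "95.123.145.92"]
```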

Enrichment Fields

The Panther rules engine will take the looked-up matches from Lookup Tables and append that data to the event using the key p_enrichment in the following JSON structure:

{
    "p_enrichment": {
        <name of lookup table>: {
            <key in log that matched>: <matching row looked up>,
            ...
            <key in log that matched>: <matching row looked up>
        }
    }
}
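A Python detection can then read the appended enrichment data. In this hedged sketch, the Lookup Table name "ip_intel", the matched key "srcIp", and the "reputation" column are all assumed for illustration:

```python
# Hypothetical detection: alert when the row looked up for the event's
# srcIp value marks the address as malicious.
def rule(event):
    row = (
        event.get("p_enrichment", {})
        .get("ip_intel", {})
        .get("srcIp", {})
    )
    return row.get("reputation") == "malicious"
```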

The "all_logs" view

Panther manages a view over all data sources with standard fields.

This allows you to answer questions such as, "Was there any activity from some-bad-ip, and if so, where?"

The query below will show how many records, by log type, are associated with the IP address 95.123.145.92:

SELECT
  p_log_type,
  count(1) AS row_count
FROM panther_views.public.all_logs
WHERE p_occurs_between('2020-1-30', '2020-1-31')
  AND array_contains('95.123.145.92'::variant, p_any_ip_addresses)
GROUP BY p_log_type

From these results, you can pivot to the specific logs where activity is indicated.

Standard Fields in detections

The Panther standard fields can be used in detections.

For example, the Python rule below triggers when any GuardDuty alert is on a resource tagged as Critical:

def rule(event):
    # p_any_aws_tags holds the AWS tags extracted from the event
    for tag in event.get('p_any_aws_tags', []):
        if 'critical' in tag.lower():
            return True
    return False
