Data Models
Data Models provide a way to configure a set of unified fields across all log types.

Data Model Motivation

Suppose you want to check for a particular source IP address in all events that log network traffic. These LogTypes might span not only different categories (DNS, Zeek, Apache, etc.), but also different vendors. Without a common logging standard, each of these LogTypes may represent the source IP under a different name, such as ipAddress, srcIP, or ipaddr. The more LogTypes you want to monitor, the more complex and cumbersome this simple check becomes:
(event.get('ipAddress') == '' or
event.get('srcIP') == '' or
event.get('ipaddr') == '')
If instead we define a Data Model for each of these LogTypes, we can translate the unified data model field name to each LogType's field name, and our logic simplifies to:
event.udm('source_ip') == ''
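Conceptually, a Data Model is a per-LogType lookup table from unified field names to native field names. The following is a minimal sketch of that idea; the names MAPPINGS and UdmEvent are illustrative scaffolding, not Panther's actual runtime API.

```python
# Toy illustration of the udm concept: each log type carries a mapping
# from unified field names to its own native field names.
MAPPINGS = {
    'AWS.CloudTrail': {'source_ip': 'sourceIPAddress'},
    'Zeek.DNS': {'source_ip': 'id.orig_h'},
}

class UdmEvent(dict):
    """Toy event wrapper: resolves unified names via the per-log-type mapping."""
    def udm(self, unified_name):
        field = MAPPINGS[self['p_log_type']].get(unified_name)
        return self.get(field)

event = UdmEvent({'p_log_type': 'AWS.CloudTrail', 'sourceIPAddress': '10.0.0.1'})
print(event.udm('source_ip'))  # -> 10.0.0.1
```

The same rule logic now works unchanged for a Zeek.DNS event that stores its source IP under id.orig_h.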

Built-in Data Models

By default, Panther comes with built-in data models for several log types, such as AWS.S3ServerAccess, AWS.VPCFlow, and Okta.SystemLog. All currently supported data models can be found here.

Adding New Data Models

New data models are added in the Panther Console or via the Panther Analysis Tool. Each log type can only have one enabled data model specified. If you want to change or update an existing data model, disable the existing one, and create a new, enabled one.

Add New Data Model in the Panther Console

To create a new Data Model:
  1. Log in to your Panther Console and navigate to Data > Data Models.
  2. In the upper right corner, click Create New.
  3. Fill in the fields under Settings and Data Model Mappings.
  4. In the upper right corner, click Save.
You can now access this Data Model in your rule logic with the event.udm() method.

Add New Data Model using Panther Analysis Tool

To add a new data model using the panther_analysis_tool, first create your DataModel specification file (e.g. data_models/aws_cloudtrail_datamodel.yml):
AnalysisType: datamodel
LogTypes:
  - AWS.CloudTrail
DataModelID: AWS.CloudTrail
# Filename is required here because a Method is used in the Mappings below
Filename: aws_cloudtrail_datamodel.py
Enabled: true
Mappings:
  - Name: actor_user
    Path: $.userIdentity.userName
  - Name: event_type
    Method: get_event_type
  - Name: source_ip
    Path: sourceIPAddress
  - Name: user_agent
    Path: userAgent
Then, if any Methods are defined, create the associated Python file in the data_models/ directory (the file named by the Filename field):
from panther_base_helpers import deep_get


def get_event_type(event):
    if event.get('eventName') == 'ConsoleLogin' and deep_get(event, 'userIdentity', 'type') == 'IAMUser':
        if event.get('responseElements', {}).get('ConsoleLogin') == 'Failure':
            return "failed_login"
        if event.get('responseElements', {}).get('ConsoleLogin') == 'Success':
            return "successful_login"
    return None
The Filename specification field is required if a Method is defined in a mapping. If no mapping uses Method, no Python file is required.
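The get_event_type method above can be exercised locally before uploading. This sketch substitutes a minimal stand-in for panther_base_helpers.deep_get (an assumption: the real helper walks nested dicts in the same way) so the snippet is self-contained.

```python
# Local stand-in for panther_base_helpers.deep_get (assumed behavior:
# walk nested dicts key by key, returning default on any miss).
def deep_get(dictionary, *keys, default=None):
    for key in keys:
        if not isinstance(dictionary, dict):
            return default
        dictionary = dictionary.get(key, default)
    return dictionary

def get_event_type(event):
    if event.get('eventName') == 'ConsoleLogin' and deep_get(event, 'userIdentity', 'type') == 'IAMUser':
        if event.get('responseElements', {}).get('ConsoleLogin') == 'Failure':
            return "failed_login"
        if event.get('responseElements', {}).get('ConsoleLogin') == 'Success':
            return "successful_login"
    return None

# A failed IAM-user console login resolves to the 'failed_login' event type
sample = {
    'eventName': 'ConsoleLogin',
    'userIdentity': {'type': 'IAMUser'},
    'responseElements': {'ConsoleLogin': 'Failure'},
}
print(get_event_type(sample))  # -> failed_login
```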
Finally, use this data model in a rule by:
  • Adding the LogType under the rule specification's LogTypes field
  • Adding the LogType to all of the rule's test cases, in the p_log_type field
  • Leveraging the event.udm() method in the rule's Python logic:
AnalysisType: rule
DedupPeriodMinutes: 60
DisplayName: DataModel Example Rule
Enabled: true
RuleID: DataModel.Example.Rule
Severity: High
# Add LogTypes where this rule is applicable
# and a Data Model exists for that LogType
LogTypes:
  - AWS.CloudTrail
Tags:
  - DataModel
Description: >
  This rule exists to validate the CLI workflows of the Panther CLI
Runbook: >
  First, find out who wrote the spec format, then notify them with feedback.
Tests:
  - Name: test rule
    ExpectedResult: true
    # Add the LogType to the test specification in the 'p_log_type' field
    Log: {
      "p_log_type": "AWS.CloudTrail",
      "eventName": "ConsoleLogin",
      "userIdentity": {"type": "IAMUser"},
      "responseElements": {"ConsoleLogin": "Failure"}
    }
def rule(event):
    # filter events on unified data model field
    return event.udm('event_type') == 'failed_login'


def title(event):
    # use unified data model field in title
    return '{}: User [{}] from IP [{}] has exceeded the failed logins threshold'.format(
        event.get('p_log_type'), event.udm('actor_user'), event.udm('source_ip'))
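To see how the rule and title functions behave once the data model resolves unified names, the sketch below wires them to a stub event. StubEvent and its MAPPING table are test scaffolding, not Panther's runtime; the mapping loosely mirrors the AWS.CloudTrail data model shown earlier (with a flattened actor_user field for brevity).

```python
# Scaffolding: resolve unified names against a flat field mapping, with a
# simplified stand-in for the get_event_type method.
MAPPING = {'actor_user': 'userName', 'source_ip': 'sourceIPAddress'}

class StubEvent(dict):
    def udm(self, name):
        if name == 'event_type':
            if self.get('responseElements', {}).get('ConsoleLogin') == 'Failure':
                return 'failed_login'
            return None
        return self.get(MAPPING.get(name))

def rule(event):
    # filter events on unified data model field
    return event.udm('event_type') == 'failed_login'

def title(event):
    # use unified data model field in title
    return '{}: User [{}] from IP [{}] has exceeded the failed logins threshold'.format(
        event.get('p_log_type'), event.udm('actor_user'), event.udm('source_ip'))

event = StubEvent({
    'p_log_type': 'AWS.CloudTrail',
    'userName': 'alice',
    'sourceIPAddress': '1.2.3.4',
    'responseElements': {'ConsoleLogin': 'Failure'},
})
print(rule(event))   # -> True
print(title(event))  # -> AWS.CloudTrail: User [alice] from IP [1.2.3.4] has exceeded the failed logins threshold
```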

DataModel Specification Reference

A complete list of DataModel specification fields:
AnalysisType: Indicates whether this specification is defining a rule, policy, data model, or global. For data models, this must be datamodel.
DataModelID: The unique identifier of the data model.
DisplayName: What name to display in the UI and alerts. The DataModelID will be displayed if this field is not set.
Enabled: Whether this data model is enabled.
Filename: The path (with file extension) to the Python DataModel body.
LogTypes: What log types this data model will apply to. Singleton list of strings.
Mappings: Mapping from source field name or method to unified data model field name. List of maps.

DataModel Mappings

Mappings translate LogType fields to unified data model fields. Each mapping entry must define a unified data model field name (Name) and either a source field (Path) or a method (Method). Path can be a simple field name or a JSON Path expression; Method must be implemented in the Python file listed in the data model specification's Filename field.
Mappings:
  - Name: source_ip
    Path: srcIp
  - Name: user
    Path: $.events[*].parameters[?(@.name == 'USER_EMAIL')].value
  - Name: event_type
    Method: get_event_type
More information about jsonpath-ng can be found here.
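The simple-path case above can be illustrated without any dependencies. This toy resolver handles only bare field names and dotted $.a.b paths; it is an illustration of the idea, not the jsonpath-ng library, which is what actually evaluates these expressions (including [?(...)] filters and field names containing dots, which this sketch does not support).

```python
# Toy resolver for simple paths: a bare field name ('srcIp') or a dotted
# JSON Path ('$.userIdentity.userName'). Real data models use jsonpath-ng.
def resolve_simple_path(event, path):
    parts = path.lstrip('$.').split('.')  # drop the leading '$.' if present
    value = event
    for part in parts:
        if not isinstance(value, dict):
            return None
        value = value.get(part)
    return value

event = {'userIdentity': {'userName': 'alice'}, 'srcIp': '10.0.0.1'}
print(resolve_simple_path(event, '$.userIdentity.userName'))  # -> alice
print(resolve_simple_path(event, 'srcIp'))                    # -> 10.0.0.1
```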

Unified Data Model Field Reference

The initial set of supported unified data model fields:

actor_user: ID or username of the user whose action triggered the event.
assigned_admin_role: Admin role ID or name assigned to a user in the event.
destination_ip: Destination IP for the traffic.
destination_port: Destination port for the traffic.
event_type: Custom description for the type of event. Out-of-the-box supported event types are defined in a global helper.
http_status: Numeric HTTP status code for the traffic.
source_ip: Source IP for the traffic.
source_port: Source port for the traffic.
user_agent: User agent associated with the client in the event.
user: ID or username of the user that was acted upon to trigger the event.

Leveraging Existing Data Models

Rules can be updated to use unified data model field names by leveraging the event.udm() method. For example:
def rule(event):
    return event.udm('source_ip') in DMZ_NETWORK


def title(event):
    return 'Suspicious request originating from ip: ' + event.udm('source_ip')
Update the rule specification to include the pertinent LogTypes:
AnalysisType: rule
Description: A rule that uses datamodels
Severity: High
RuleID: Example.Rule
Enabled: true
LogTypes:
  - Logtype.With.DataModel
  - Another.Logtype.With.DataModel

Using Data Models with Enrichment

Panther provides a built-in method on the event object called event.udm_path. It returns the original path that was used for the Data Model. In the example of AWS.VPCFlow logs, using event.udm_path('destination_ip') will return 'dstAddr', since this is the path defined in the Data Model for that log type. The following example uses event.udm_path:
from panther_base_helpers import deep_get


def rule(event):
    return True


def title(event):
    return event.udm_path('destination_ip')


def alert_context(event):
    enriched_data = deep_get(event, 'p_enrichment', 'lookup_table_name', event.udm_path('destination_ip'))
    return {'enriched_data': enriched_data}
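The p_enrichment traversal in alert_context above can be sketched with plain dictionaries and a local deep_get stand-in (an assumption: panther_base_helpers.deep_get walks nested dicts this way); 'lookup_table_name' mirrors the placeholder table name used in this example.

```python
# Local stand-in for panther_base_helpers.deep_get
def deep_get(dictionary, *keys, default=None):
    for key in keys:
        if not isinstance(dictionary, dict):
            return default
        dictionary = dictionary.get(key, default)
    return dictionary

event = {
    'p_enrichment': {
        'lookup_table_name': {
            'dstAddr': {'datakey': 'datavalue'}
        }
    }
}

# For AWS.VPCFlow, event.udm_path('destination_ip') would return 'dstAddr',
# so the traversal below matches the alert_context logic above.
enriched = deep_get(event, 'p_enrichment', 'lookup_table_name', 'dstAddr')
print(enriched)  # -> {'datakey': 'datavalue'}
```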
This test case was used:
"p_log_type": "AWS.VPCFlow",
  "dstAddr": "",
  "p_enrichment": {
     "lookup_table_name": {
       "dstAddr": {
          "datakey": "datavalue"
The test case returned the following alert: