Data Models

Data Models provide a way to configure a set of unified fields across all log types

Overview

Suppose you want to check for a particular source IP address in all events that log network traffic. These LogTypes might span not only different categories (DNS, Zeek, Apache, etc.) but also different vendors. Without a common logging standard, each of these LogTypes may represent the source IP under a different name, such as ipAddress, srcIP, or ipaddr. The more LogTypes you want to monitor, the more complex and cumbersome this simple check becomes:
(event.get('ipAddress') == '127.0.0.1' or
event.get('srcIP') == '127.0.0.1' or
event.get('ipaddr') == '127.0.0.1')
If instead we define a Data Model for each of these LogTypes, we can translate the unified data model field name to the LogType field name and our logic simplifies to:
event.udm('source_ip') == '127.0.0.1'

Built-in Data Models

By default, Panther comes with built-in data models for several log types, such as AWS.S3ServerAccess, AWS.VPCFlow, and Okta.SystemLog. See Panther's documentation for the complete list of supported data models.

How to add Data Models

New Data Models are added in the Panther Console or via the Panther Analysis Tool. Each log type can only have one enabled data model specified. If you want to change or update an existing data model, disable the existing one, and create a new, enabled one.
Panther Console
Panther Analysis Tool
To create a new Data Model in the Panther Console:
  1. Log in to your Panther Console and navigate to Build > Data Models. The list of Data Models is displayed.
  2. In the upper right corner, click Create New.
  3. Fill in the fields under Settings and Data Model Mappings. The New Data Model screen contains fields for Display Name, ID, and Log Type; under Data Model Mappings there are fields for Name, Field Path, and Field Method.
  4. In the upper right corner, click Save.
You can now access this Data Model in your rule logic with the event.udm() method.
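To see how this plays out in rule logic, here is a minimal, self-contained sketch. FakeEvent and its MAPPINGS table are stand-ins invented for illustration; the real event object and its udm() resolution are supplied by the Panther runtime and backed by the enabled Data Model's mappings.

```python
# Stand-in for Panther's event object, for local experimentation only.
# In Panther, event.udm() is provided by the runtime, not defined by you.
class FakeEvent(dict):
    # Hypothetical mapping: unified field name -> raw LogType field name
    MAPPINGS = {'source_ip': 'srcIP'}

    def udm(self, name):
        return self.get(self.MAPPINGS[name])

def rule(event):
    # Filter on the unified field rather than the raw LogType field name
    return event.udm('source_ip') == '127.0.0.1'

print(rule(FakeEvent({'srcIP': '127.0.0.1'})))  # True
print(rule(FakeEvent({'srcIP': '10.0.0.5'})))   # False
```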

How to create a Data Model using PAT

Folder setup

All files related to your custom Data Models must be stored in a folder with a name containing data_models (this could be a top-level data_models directory, or sub-directories with names matching *data_models*).
  1. Create your Data Model specification file (e.g. data_models/aws_cloudtrail_datamodel.yml):
     AnalysisType: datamodel
     LogTypes:
       - AWS.CloudTrail
     DataModelID: AWS.CloudTrail
     Filename: aws_cloudtrail_data_model.py
     Enabled: true
     Mappings:
       - Name: actor_user
         Path: $.userIdentity.userName
       - Name: event_type
         Method: get_event_type
       - Name: source_ip
         Path: sourceIPAddress
       - Name: user_agent
         Path: userAgent
  2. If any Methods are defined, create the associated Python file (data_models/aws_cloudtrail_data_model.py, matching the Filename field above), as shown below. Note: The Filename specification field is required if a Method is defined in a mapping. If Method is not used in any Mappings, no Python file is required.
     from panther_base_helpers import deep_get

     def get_event_type(event):
         if event.get('eventName') == 'ConsoleLogin' and deep_get(event, 'userIdentity', 'type') == 'IAMUser':
             if event.get('responseElements', {}).get('ConsoleLogin') == 'Failure':
                 return "failed_login"
             if event.get('responseElements', {}).get('ConsoleLogin') == 'Success':
                 return "successful_login"
         return None
  3. Use this Data Model in a rule:
     1. Add the LogType under the Rule specification LogTypes field.
     2. Add the LogType to all the Rule's Test cases, in the p_log_type field.
     3. Leverage the event.udm() method in the Rule's Python logic:
AnalysisType: rule
DedupPeriodMinutes: 60
DisplayName: DataModel Example Rule
Enabled: true
Filename: my_new_rule.py
RuleID: DataModel.Example.Rule
Severity: High
LogTypes:
  # Add LogTypes where this rule is applicable
  # and a Data Model exists for that LogType
  - AWS.CloudTrail
Tags:
  - Tags
Description: >
  This rule exists to validate the CLI workflows of the Panther CLI
Runbook: >
  First, find out who wrote this spec format, then notify them with feedback.
Tests:
  - Name: test rule
    ExpectedResult: true
    # Add the LogType to the test specification in the 'p_log_type' field
    Log: {
      "p_log_type": "AWS.CloudTrail"
    }

def rule(event):
    # filter events on unified data model field
    return event.udm('event_type') == 'failed_login'

def title(event):
    # use unified data model field in title
    return '{}: User [{}] from IP [{}] has exceeded the failed logins threshold'.format(
        event.get('p_log_type'), event.udm('actor_user'),
        event.udm('source_ip'))
See Data Model Specification Reference below for a complete list of required and optional fields.
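The get_event_type mapping method above can be sanity-checked outside Panther. The deep_get below is a local stand-in for the helper that panther_base_helpers provides inside the Panther runtime:

```python
# Local stand-in for panther_base_helpers.deep_get (testing outside Panther)
def deep_get(dictionary, *keys):
    value = dictionary
    for key in keys:
        if not isinstance(value, dict):
            return None
        value = value.get(key)
    return value

def get_event_type(event):
    if event.get('eventName') == 'ConsoleLogin' and deep_get(event, 'userIdentity', 'type') == 'IAMUser':
        if event.get('responseElements', {}).get('ConsoleLogin') == 'Failure':
            return 'failed_login'
        if event.get('responseElements', {}).get('ConsoleLogin') == 'Success':
            return 'successful_login'
    return None

sample = {
    'eventName': 'ConsoleLogin',
    'userIdentity': {'type': 'IAMUser'},
    'responseElements': {'ConsoleLogin': 'Failure'},
}
print(get_event_type(sample))  # failed_login
```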

Using Data Models

Using Data Models in rules

To use your Data Model in a rule:
  • Add the LogType under the Rule specification LogTypes field.
  • Add the LogType to all the Rule's Test cases, in the p_log_type field.
  • Leverage the event.udm() method in the Rule's Python logic:
    def rule(event):
        # filter events on unified data model field
        return event.udm('event_type') == 'failed_login'

    def title(event):
        # use unified data model field in title
        return '{}: User [{}] from IP [{}] has exceeded the failed logins threshold'.format(
            event.get('p_log_type'), event.udm('actor_user'),
            event.udm('source_ip'))

Leveraging existing Data Models

Rules can be updated to use unified data model field names by leveraging the event.udm() method. For example:
def rule(event):
    return event.udm('source_ip') in DMZ_NETWORK

def title(event):
    return 'Suspicious request originating from ip: ' + event.udm('source_ip')
Update the rule specification to include the pertinent LogTypes:
AnalysisType: rule
Filename: example_rule.py
Description: A rule that uses datamodels
Severity: High
RuleID: Example.Rule
Enabled: true
LogTypes:
  - Logtype.With.DataModel
  - Another.Logtype.With.DataModel

Using Data Models with Enrichment

Panther provides a built-in method on the event object called event.udm_path(). It returns the original field path that the Data Model maps a unified field name to.

AWS.VPCFlow logs example

Using event.udm_path('destination_ip') will return 'dstAddr', since this is the path defined in the Data Model for that log type. The following example uses event.udm_path:
from panther_base_helpers import deep_get

def rule(event):
    return True

def title(event):
    return event.udm_path('destination_ip')

def alert_context(event):
    enriched_data = deep_get(event, 'p_enrichment', 'lookup_table_name', event.udm_path('destination_ip'))
    return {'enriched_data': enriched_data}
This test case was used:
{
  "p_log_type": "AWS.VPCFlow",
  "dstAddr": "1.1.1.1",
  "p_enrichment": {
    "lookup_table_name": {
      "dstAddr": {
        "datakey": "datavalue"
      }
    }
  }
}
The test case returns an alert that includes Alert Context with the datakey and datavalue:
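Tracing the lookup by hand: udm_path('destination_ip') resolves to 'dstAddr' for AWS.VPCFlow, so the deep_get call walks p_enrichment → lookup_table_name → dstAddr. A stand-in sketch, with a local deep_get in place of the panther_base_helpers one and the udm_path result written out literally:

```python
# Local stand-in for panther_base_helpers.deep_get
def deep_get(dictionary, *keys):
    value = dictionary
    for key in keys:
        if not isinstance(value, dict):
            return None
        value = value.get(key)
    return value

test_event = {
    "p_log_type": "AWS.VPCFlow",
    "dstAddr": "1.1.1.1",
    "p_enrichment": {
        "lookup_table_name": {
            "dstAddr": {"datakey": "datavalue"}
        }
    },
}

# event.udm_path('destination_ip') returns 'dstAddr' for this log type,
# so the alert_context lookup resolves as follows:
enriched = deep_get(test_event, "p_enrichment", "lookup_table_name", "dstAddr")
print(enriched)  # {'datakey': 'datavalue'}
```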

DataModel Specification Reference

A complete list of DataModel specification fields:

AnalysisType (required): Indicates whether this specification defines a rule, policy, data model, or global. Expected value: datamodel.

DataModelID (required): The unique identifier of the data model. Expected value: String.

DisplayName (optional): The name to display in the UI and alerts. The DataModelID is displayed if this field is not set. Expected value: String.

Enabled (required): Whether this data model is enabled. Expected value: Boolean.

Filename (optional): The path (with file extension) to the Python DataModel body. Expected value: String.

LogTypes (required): The log type this data model applies to. Expected value: Singleton list of strings. Note: Although LogTypes accepts a list of strings, you can only specify one log type per Data Model.

Mappings (required): Mappings from source field names or methods to unified data model field names. Expected value: List of maps.

DataModel Mappings

Mappings translate LogType fields to unified data model fields. Each mapping entry must define a unified data model field name (Name) and either a field path (Path) or a method name (Method). The Path can be a simple field name or a JSON Path expression. The method must be implemented in the file listed in the data model specification's Filename field.
Mappings:
  - Name: source_ip
    Path: srcIp
  - Name: user
    Path: $.events[*].parameters[?(@.name == 'USER_EMAIL')].value
  - Name: event_type
    Method: get_event_type
JSON Path expressions are evaluated with jsonpath-ng; see its documentation on PyPI for the supported syntax.
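For simple paths with no wildcards or filters, the resolution a Path performs amounts to a short walk over the nested event dict. The resolve_path helper below is an illustrative sketch, not part of Panther:

```python
def resolve_path(event, path):
    """Resolve a simple dotted Path (no wildcards/filters) against an event dict."""
    # A leading "$." is optional in simple paths
    if path.startswith('$.'):
        path = path[2:]
    value = event
    for key in path.split('.'):
        if not isinstance(value, dict):
            return None
        value = value.get(key)
    return value

event = {'userIdentity': {'userName': 'alice'}, 'srcIp': '10.0.0.1'}
print(resolve_path(event, '$.userIdentity.userName'))  # alice
print(resolve_path(event, 'srcIp'))                    # 10.0.0.1
```

Filter expressions like the USER_EMAIL example above need a real JSON Path engine and are not covered by this sketch.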

Unified Data Model Field Reference

The initial set of supported unified data model fields is described below.

actor_user: ID or username of the user whose action triggered the event.
assigned_admin_role: Admin role ID or name assigned to a user in the event.
destination_ip: Destination IP for the traffic.
destination_port: Destination port for the traffic.
event_type: Custom description for the type of event. Out-of-the-box support for event types can be found in the panther_event_type_helpers.py global.
http_status: Numeric HTTP status code for the traffic.
source_ip: Source IP for the traffic.
source_port: Source port for the traffic.
user_agent: User agent associated with the client in the event.
user: ID or username of the user that was acted upon to trigger the event.