Custom Enrichments
Enrich events with your own stored data
Overview
Custom enrichments (also referred to as "Lookup Tables") allow you to store and reference custom enrichment data in Panther. This means you can reference this added context in detections and pass it into alerts. It may be particularly useful to create custom enrichments containing identity/asset information, vulnerability context, or network maps.
There are three import method options: a static file, S3 bucket, or Google Cloud Storage (GCS) bucket.
You can associate one or more log types with your custom enrichment—then all incoming logs of those types (that match an enrichment table value) will contain enrichment data. Learn more about the enrichment process in How incoming logs are enriched. It's also possible to dynamically reference enrichment data in Python detections. Learn how to view stored enrichment data here, and how to view log events with enrichment data here.
If your data is only needed for a few specific detections and will not be frequently updated, consider using Global helpers instead of an enrichment. Also note that you can use Panther-managed enrichments like IPinfo and Tor Exit Nodes.
To increase the limit on the number of custom enrichments and/or size of your enrichment tables, please contact your Panther support team.
How incoming logs are enriched
Enrichments in Panther traditionally define both of the following:
A primary key: A field in the enrichment table data.
If the enrichment is defined in the CLI workflow, this is designated by the PrimaryKey field in the YAML configuration file.
One or more associated log types, each with one or more Selectors: A Selector is an event field whose values are compared to the enrichment table's primary key values to find a match.
There are two ways to set log types/Selectors for an enrichment. See How log types and Selectors are set for an enrichment, below.
When a log is ingested into Panther, if its log type is one that is associated to an enrichment, the values of all of its Selector fields are compared against the enrichment's primary key values. When a match is found between a value in a Selector field and a primary key value, the log is enriched with the matching primary key's associated enrichment data in a p_enrichment field. Learn more about p_enrichment below, in p_enrichment structure.
In the example in the image below, the Selector field (in the events in Incoming Logs) is ip_address. The primary key of the enrichment LUT1 is bad_actor_ip. In the right-hand Alert Event, the log is enriched with the enrichment data (including bad_actor_name) because there was a match between the Selector value (1.1.1.1) and a primary key value (1.1.1.1).
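Continuing that example, the enriched event might look like the following sketch. The ip_address, bad_actor_ip, and bad_actor_name fields come from the example above; the bad actor's name value is purely illustrative:

```json
{
  "ip_address": "1.1.1.1",
  "p_enrichment": {
    "LUT1": {
      "ip_address": {
        "p_match": "1.1.1.1",
        "bad_actor_ip": "1.1.1.1",
        "bad_actor_name": "example-bad-actor"
      }
    }
  }
}
```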

How log types and Selectors are set for an enrichment
You can manually set associated log types and Selectors when creating an enrichment (Option 1), and/or let them be automatically mapped (Option 2).
Option 1: Manually choose log types and Selectors
When creating an enrichment, you can choose one or more log types the enrichment should be associated to—and for each log type, one or more Selector fields.
When creating an enrichment in the Panther Console, you can set associated log types and Selectors. Learn more in How to configure a custom enrichment, below.

When creating an enrichment in the CLI workflow, you will create and upload a YAML configuration file. The AssociatedLogTypes value will be a list of objects containing LogType and Selectors fields. See the Custom Enrichment Specification Reference for a full list of fields.
Option 2: Let log types and selectors be automatically mapped by indicator fields
The schema for your enrichment data can mark fields as indicators—for example:
- name: attack_ids
  description: Attack field
  type: array
  element:
    type: string
    indicators:
      - mitre_attack_technique

Then, for each indicator value, Panther automatically:
Finds all Active log schemas (or log types) that designate any event field as that same indicator.
Associates those log types to the enrichment.
For each log type, sets the p_any field associated to the indicator as a Selector.
For example, if your enrichment data's schema designates an attack_ids field as a mitre_attack_technique indicator as is shown above, all log types in your Panther instance that also set a mitre_attack_technique indicator will be associated to the enrichment, each with a p_any_mitre_attack_techniques Selector.
This mapping happens each time an enrichment's data is refreshed.
p_enrichment structure
If your log events are injected with enrichment data, a p_enrichment field is appended to the event and accessed within a detection using deep_get() or DeepKey. The p_enrichment field will contain:
One or more enrichment name(s) that matched the incoming log event
The name of the Selector from the incoming log that matched the enrichment
The data from the enrichment that matched via the enrichment's primary key (including an injected p_match field containing the Selector value that matched)
This is the structure of p_enrichment fields:
'p_enrichment': {
    <name of enrichment1>: {
        <name of selector>: {
            'p_match': <value of Selector>,
            <enrichment key>: <enrichment value>,
            ...
        }
    }
}

Note that p_enrichment is not stored with the log event in the data lake. See Viewing log events with enrichment data for more information.
How to access enrichment data in detections
Option 1 (if log is enriched): Using deep_get()
If your log event was enriched on ingest (as described in How incoming logs are enriched), you can access the data within the p_enrichment field (whose structure is described above) using the deep_get() event object function. Learn more about deep_get() on Writing Python Detections.
See a full example of this method below, in Writing a detection using custom enrichment data.
Option 2: Dynamically using lookup()
It's also possible to dynamically access enrichment data from Python detections using the event.lookup() function. In this way, you can retrieve data from any enrichment, without it being injected into an incoming event as described in How incoming logs are enriched.
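A minimal sketch of this dynamic pattern, assuming a signature like event.lookup(&lt;enrichment name&gt;, &lt;key value&gt;) that returns the matching row or None (check Panther's function reference for the exact signature). The StubEvent class and the account_metadata table below only simulate the Panther runtime for illustration:

```python
# Simulated enrichment table keyed by its primary key (illustrative data)
ACCOUNT_METADATA = {
    "90123456": {"accountID": "90123456", "isProduction": False},
    "12345678": {"accountID": "12345678", "isProduction": True},
}

class StubEvent(dict):
    """Stand-in for Panther's event object; the real lookup() may differ."""
    def lookup(self, table, key):
        # Return the enrichment row whose primary key equals `key`, if any
        tables = {"account_metadata": ACCOUNT_METADATA}
        return tables.get(table, {}).get(key)

def rule(event):
    # Fetch enrichment data on demand instead of relying on p_enrichment
    row = event.lookup("account_metadata", event.get("recipientAccountId"))
    is_production = bool(row and row.get("isProduction"))
    return not event.get("mfaEnabled") and is_production

# A non-MFA login to a production account fires the rule
evt = StubEvent({"recipientAccountId": "12345678", "mfaEnabled": False})
print(rule(evt))  # True
```

Because the lookup happens at evaluation time, the rule still works when the event's key does not exactly match a primary key at ingest, or when you want to consult an enrichment the event's log type is not associated with.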
Prerequisites for configuring a custom enrichment
Before configuring an enrichment, be sure you have:
Enrichment data in JSON or CSV format
JSON files can format events in various ways, including in lines, arrays, or objects.
A schema specifically for your enrichment data
This describes the shape of your enrichment data.
A primary key for your enrichment data
This primary key is one of the fields you defined in your enrichment's schema. The value of the primary key is what will be compared with the value of the selector(s) from your incoming logs.
See the below Primary key data types section to learn more about primary key requirements.
(Optional) Selector(s) from your incoming logs
The values from these selectors will be used to search for matches in your enrichment data.
(CLI workflow): An enrichment configuration file
See Custom Enrichment Specification Reference for a full list of fields.
We recommend you make a fork of the panther-analysis repository and install the Panther Analysis Tool (PAT).
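As a sketch, such a configuration file might look like the following. The field layout reflects the description above and common panther-analysis conventions; the LookupName, schema name, and file path are illustrative, and the exact fields should be checked against the Custom Enrichment Specification Reference:

```yaml
AnalysisType: lookup_table
LookupName: account_metadata        # illustrative name
Enabled: true
FileName: ./account_metadata.json   # path to your enrichment data
Schema: Custom.AccountMetadata      # the schema describing the data's shape
LogTypeMap:
  PrimaryKey: accountID             # field in the enrichment data
  AssociatedLogTypes:
    - LogType: AWS.CloudTrail
      Selectors:
        - recipientAccountId        # event field compared to the primary key
```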
Primary key data types
Your enrichment table's primary key column must be one of the following data types:
String
Number
Array (of strings or numbers)
Using an array lets you associate one row in your enrichment table with multiple string or number primary key values. This prevents you from having to duplicate a certain row of data for multiple primary keys.
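For instance, a single enrichment row keyed on an array might look like the following sketch (all field names and values are illustrative):

```json
{
  "service_ips": ["10.0.1.5", "10.0.1.6", "10.0.1.7"],
  "service_name": "internal-vpn",
  "owner": "netops"
}
```

Here service_ips is the primary key, so a log whose Selector value matches any of the three addresses is enriched with this one row, rather than requiring three duplicate rows.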
How to configure a custom enrichment
After fulfilling the prerequisites, custom enrichments can be created and configured using one of the following methods:
Option 1: Import custom enrichment data via file upload
Best for data that is relatively static, such as information about AWS accounts or corporate subnets.
Example: adding metadata to distinguish developer and production accounts in your AWS CloudTrail logs.
Option 2: Sync custom enrichment data from an S3 bucket or Option 3: Sync custom enrichment data from a Google Cloud Storage (GCS) bucket
Best when you have a large amount of data that updates relatively frequently. Any changes in the S3 or GCS bucket will sync to Panther.
Example: if you wanted to know which groups and permission levels are associated with employees at your company. In this scenario, your company might have an S3 bucket with an up-to-date copy of their Active Directory listing that includes groups and permissions information.
After choosing one of these methods, you can opt to work within the Panther Console or with PAT.
The maximum size for a row in a custom enrichment table is 65535 bytes.
Option 1: Import custom enrichment data via file upload
You can import data via file upload through the Panther Console or PAT:
Option 2: Sync custom enrichment data from an S3 bucket
You can set up data sync from an S3 bucket through the Panther Console or PAT:
Option 3: Sync custom enrichment data from a Google Cloud Storage (GCS) bucket
You can set up data sync from a GCS bucket through the Panther Console or PAT:
Writing a detection using custom enrichment data
After you configure a custom enrichment, you can write detections based on the additional context.
For example, if you configured a custom enrichment to distinguish between developer and production accounts in AWS CloudTrail logs, you might want to receive an alert only if both of the following circumstances are true:
A user logged in who did not have MFA enabled.
The AWS account is a production (not a developer) account.
See how to create a detection using enrichment data below:
Accessing enrichment data the event was automatically enriched with
In Python, you can use the deep_get() helper function to retrieve the looked up field from p_enrichment using the foreign key field in the log. The pattern looks like this:
deep_get(event, 'p_enrichment', <Enrichment name>, <foreign key in log>, <field in Enrichment>)

The rule would become:

from panther_base_helpers import deep_get

def rule(event):
    # If the field you're accessing is stored within a list, use deep_walk() instead
    is_production = deep_get(event, 'p_enrichment', 'account_metadata',
                             'recipientAccountId', 'isProduction')
    return not event.get('mfaEnabled') and is_production

Dynamically accessing enrichment data
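To see this access pattern run end to end, here is a self-contained simulation: a minimal reimplementation of deep_get (the real helper lives in panther_base_helpers and may behave slightly differently) applied to an event carrying a sample p_enrichment payload:

```python
from functools import reduce

def deep_get(dictionary, *keys, default=None):
    # Minimal stand-in for panther_base_helpers.deep_get: walk nested keys,
    # returning `default` as soon as the path breaks
    return reduce(
        lambda d, key: d.get(key, default) if isinstance(d, dict) else default,
        keys,
        dictionary,
    )

def rule(event):
    is_production = deep_get(event, "p_enrichment", "account_metadata",
                             "recipientAccountId", "isProduction")
    return not event.get("mfaEnabled") and bool(is_production)

# Sample enriched event (illustrative values)
event = {
    "mfaEnabled": False,
    "recipientAccountId": "12345678",
    "p_enrichment": {
        "account_metadata": {
            "recipientAccountId": {
                "p_match": "12345678",
                "accountID": "12345678",
                "isProduction": True,
            }
        }
    },
}
print(rule(event))  # True
```

If the event was not enriched (no p_enrichment field, or no match for that Selector), deep_get returns None and the rule safely evaluates to False.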
You can also use the event object's lookup() function to dynamically access enrichment data in your detection. This may be useful when your event doesn't contain an exact match to a value in the enrichment's primary key column.
In a Simple Detection, you can create an Enrichment match expression.
Detection:
  - Enrichment:
      Table: account_metadata
      Selector: recipientAccountId
      FieldPath: isProduction
    Condition: Equals
    Value: true
  - KeyPath: mfaEnabled
    Condition: Equals
    Value: false

The Panther rules engine will take the looked up matches and append that data to the event using the key p_enrichment in the following JSON structure:
{
    "p_enrichment": {
        <name of enrichment table>: {
            <key in log that matched>: <matching row looked up>,
            ...
            <key in log that matched>: <matching row looked up>,
        }
    }
}

Example:
{
"p_enrichment": {
"account_metadata": {
"recipientAccountId": {
"accountID": "90123456",
"isProduction": false,
"email": "[email protected]",
"p_match": "90123456"
}
}
}
}
If the value of the matching log key is an array (e.g., the value of p_any_aws_account_ids), then the lookup data is an array containing the matching records.
{
    "p_enrichment": {
        <name of enrichment table>: {
            <key in log that matched that is an array>: [
                <matching row looked up>,
                <matching row looked up>,
                <matching row looked up>
            ]
        }
    }
}

Example:
{
"p_enrichment": {
"account_metadata": {
"p_any_aws_account_ids": [
{
"accountID": "90123456",
"isProduction": false,
"email": "[email protected]",
"p_match": "90123456"
},
{
"accountID": "12345678",
"isProduction": true,
"email": "[email protected]",
"p_match": "12345678"
}
]
}
}
}

Testing detections that use enrichment
For rules that use p_enrichment, click Enrich Test Data in the upper right side of the JSON code editor to populate it with your Enrichment data. This allows you to test a Python function with an event that contains p_enrichment.

Enrichment (Lookup Table) History Tables

Enrichments will generate a number of tables in the Data Explorer. There are two main types of tables generated:
The current custom enrichment version:
Contains the most up-to-date custom enrichment data
Should be targeted in any Saved Searches, or anywhere you expect to see the most current data
This table name will never change
In the example above, the table is named example
The current History Table version:
Contains a version history of all data uploaded to the current custom enrichment
In the example above, the table is named example_history
The table schema is identical to the current custom enrichment (here named example) except for two additional fields: p_valid_start and p_valid_end
These fields can be used to view the state of the custom enrichment at any previous point in time
When a new schema is assigned to the custom enrichment, the past versions of the custom enrichment and the History Table are both preserved as well.
These past versions are preserved by the addition of a numeric suffix (_###) on both the custom enrichment table and the History Table. This number will increment by one each time the schema associated with the custom enrichment is replaced, or each time the primary key of the custom enrichment is changed.