
Lookup Tables

Overview

Lookup Tables allow you to add important context to your detections and alerts for improved investigation workflows. Use Lookup Tables to enhance alerts with identity/asset information, vulnerability context, network maps, and more. You can associate one or more log types with your Lookup Table, and then all logs of those log types will contain enrichment data from your Lookup Table.
To increase the limit on the number of Lookup Tables or the size of Lookup Tables in your account, please contact your Panther support team.
Consider using Global helpers instead when extra information is only needed for a few specific detections and will not be frequently updated.

How Lookup Tables work

Your configured Lookup Tables are associated with one or more log types, connected by foreign key fields called Selectors. Data enrichment happens before log events reach the detections engine, so every incoming log event with a match in your Lookup Table is enriched. When a match is found, a p_enrichment field is appended to the event, which you can access within a detection using the deep_get function. The p_enrichment field contains:
  • One or more Lookup Table name(s) that matched the incoming log event
  • The name of the selector from the incoming log that matched the Lookup Table
  • The data from the Lookup Table that matched via the Lookup Table's primary key
This is the structure of p_enrichment fields:
'p_enrichment': {
    <name of lookup table1>: {
        <name of selector>: {
            <lookup key>: <lookup value>,
            ...
        }
    }
}

How is data matched between logs and Lookup Tables?

When a Lookup Table is created, a Primary Key is selected from among the columns of the table. When the table is associated with a Log Type, a Selector Key is chosen from the fields of the Log Type.
When Panther parses one of the associated logs, it compares the value of the Selector Key in the log to the values of the Primary Key in the table. If the values match, Panther adds the corresponding row from the Lookup Table to the log event's p_enrichment struct.
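The matching logic is conceptually simple. The following is a minimal Python sketch of the idea, not Panther's actual implementation; the table contents, selector, and field names are hypothetical:
# Conceptual sketch of Lookup Table enrichment (not Panther's real code).
# The table is keyed by its primary key (accountID in this example).
lookup_table = {
    "112233445566": {"accountID": "112233445566", "isProduction": True},
}

def enrich(event: dict, table_name: str, selectors: list) -> dict:
    """Attach matching lookup rows under p_enrichment, keyed by selector."""
    enrichment = {}
    for selector in selectors:
        row = lookup_table.get(event.get(selector))
        if row is not None:
            enrichment[selector] = row
    if enrichment:
        event["p_enrichment"] = {table_name: enrichment}
    return event

event = enrich({"recipientAccountId": "112233445566"},
               "account_metadata", ["recipientAccountId"])
# event["p_enrichment"]["account_metadata"]["recipientAccountId"]["isProduction"] -> True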

How to configure a Lookup Table

Lookup Tables can be created and configured through the Panther Console or with the panther_analysis_tool.

Prerequisites for configuring a Lookup Table

  • A schema specifically for your Lookup Table data.
    • This describes the shape of your Lookup Table data.
  • Selector(s) from your incoming logs.
    • The values from these selectors will be used to search for matches in your Lookup Table data.
  • A primary key for your Lookup Table data.
    • This primary key is one of the fields that you defined in your schema for your Lookup Table. The value of the primary key is what will be compared with the value of the selector(s) from your incoming logs.
  • For local development and CI/CD: ensure you have the necessary configuration files in your environment.
The maximum size for a row in a lookup table is 65535 bytes.
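If you prepare table data programmatically, a rough pre-upload check against that limit might look like the sketch below. The file name is hypothetical, and JSON-serialized size is used only as an approximation of row size:
import csv
import json

MAX_ROW_BYTES = 65535  # maximum size of a single Lookup Table row

# Hypothetical data file; flags rows whose serialized size exceeds the limit.
with open("my_lookup_table_data.csv", newline="") as f:
    for i, row in enumerate(csv.DictReader(f), start=1):
        size = len(json.dumps(row).encode("utf-8"))
        if size > MAX_ROW_BYTES:
            print(f"Row {i} is {size} bytes, exceeding the {MAX_ROW_BYTES}-byte limit")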
There are two methods for creating and configuring Lookup Tables, described below: import via file upload and sync via an S3 source.

Import via file upload

This option is best for data that is relatively static, such as information about AWS accounts or corporate subnets. You may want to set up a Lookup Table via a File Upload in the Panther Console. For example, a possible use case is adding metadata to distinguish developer accounts from production accounts in your AWS CloudTrail logs.
The steps below are for the Panther Console. To configure via panther_analysis_tool instead, see File setup below.
  1. Log in to the Panther Console.
  2. From the left sidebar, click Configure > Lookup Tables.
  3. In the upper right side of the page, click Create New to add a new Lookup Table.
  4. Configure the Lookup Table Basic Information:
    • Enter a descriptive Lookup Name.
      • In the example screenshot, we use account_metadata.
    • Enter a Description (optional) and a Reference (optional). Description is meant for content about the table, while Reference can be used to hyperlink to an internal resource.
    • Next to Enabled?, toggle the setting to Yes. Note: This is required to import your data later in this process.
  5. Click Continue.
  6. Configure the Associated Log Types:
    • Select the Log Type from the dropdown.
    • Type in the names of the Selectors, the foreign key fields from the log type you want enriched with your Lookup Table.
      • You can also reference attributes in nested objects using JSON path syntax. For example, to reference a field in a map you could use $.field.subfield.
    • Click Add Log Type to add another if needed.
      In the example screenshot above, we selected AWS.CloudTrail logs and typed in accountID and recipientAccountID to represent keys in the CloudTrail logs.
  7. Click Continue.
  8. Configure the Table Schema. Note: If you have not already created a new schema, please see our documentation on creating schemas. You can also use your Lookup Table data to infer a schema. Once you have created a schema, you will be able to choose it from the dropdown on the Table Schema page while configuring a Lookup Table. Note: CSV schemas require column headers to work with Lookup Tables.
    • Select a Schema Name from the dropdown.
    • Select a Primary Key Name from the dropdown. This should be a unique column on the table, such as accountID.
  9. Click Continue.
  10. Drag and drop a file or click Select File to choose the file of your Lookup Table data to import. The file must be in .csv or .jsonl format.
  11. Click Finish Setup. A source setup success page will populate.
  12. Optionally, next to Set an alarm in case this lookup table doesn't receive any data?, toggle the setting to YES to enable an alarm.
    • Fill in the Number and Period fields to indicate how often Panther should send you this notification.
    • The alert destinations for this alarm are displayed at the bottom of the page. To configure and customize where your notification is sent, see the documentation on Panther Destinations.
    Note: Notifications generated for a Lookup Table upload failure are accessible in the System Errors tab within the Alerts & Errors page in the Panther Console.
Once finished, you should be returned to the Lookup Table overview screen. Ensure that your new Lookup Table is listed.
Note: Uploading data via Panther Analysis Tool works only for small lookup datasets (under 1 MB) that are mostly static. For larger or frequently changing files, we recommend delivering them via S3.

File setup

A Lookup Table requires the following files:
  • A YAML specification file containing the configuration for the table
  • A YAML file defining the schema to use when loading data into the table
  • A JSON or CSV file containing data to load into the table (optional; see below)
We recommend storing all files related to your Lookup Table in their own subdirectory. If using the Panther Analysis repo as a base, store the above files under lookup_tables/<my_table_name>.

Writing the configuration files

It's usually prudent to begin writing the schema config first, because the table config will reference some of those values.
  1. Create a YAML file for the schema, and save it in the lookup table directory (for example, lookup_tables/my_table/my_table_schema.yml). This schema defines how to read the files you'll use to upload data to the table. If using a CSV file for data, the schema should be able to parse CSV. The table schema is formatted the same as a log schema. For more information on writing schemas, read our documentation on Log Schemas.
  2. Next, create a YAML file for the table configuration. For a Lookup Table with data stored in a local file, an example configuration would look like:
    AnalysisType: lookup_table
    LookupName: my_lookup_table # A unique display name
    Schema: Custom.MyTableSchema # The schema defined in the previous step
    FileName: ./my_lookup_table_data.csv # Relative path to data
    Description: >
      A handy description of what information this table contains.
      For example, this table might convert IP addresses to hostnames
    Reference: >
      A URL to some additional documentation around this table
    Enabled: true # Set to false to stop using the table
    LogTypeMap:
      PrimaryKey: ip # The primary key of the table
      AssociatedLogTypes: # A list of log types to match this table to
        - LogType: AWS.CloudTrail
          Selectors:
            - "sourceIPAddress" # A field in CloudTrail logs
            - "p_any_ip_addresses" # A panther-generated field works too
        - LogType: Okta.SystemLog
          Selectors:
            - "$.client.ipAddress" # Paths to JSON values are allowed
  3. (Optional) If your lookup table data is stored in a local file, ensure the file is placed in the same directory as the configuration files above.
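Before uploading, you can sanity-check a table configuration like the one above. This is a minimal sketch, assuming PyYAML is installed and using a hypothetical file path; the required-field list and the FileName/Refresh exclusivity come from the Lookup Table Specification Reference at the end of this page:
import yaml  # PyYAML

REQUIRED = ["AnalysisType", "LookupName", "Schema", "Enabled", "LogTypeMap"]

with open("lookup_tables/my_table/my_table.yml") as f:  # hypothetical path
    config = yaml.safe_load(f)

missing = [field for field in REQUIRED if field not in config]
if missing:
    print(f"Missing required fields: {missing}")
if "FileName" in config and "Refresh" in config:
    print("FileName and Refresh are mutually exclusive; use only one")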

Update Lookup Tables via Panther Analysis Tool

  1. Locate the YAML configuration file for the Lookup Table in question.
  2. Open the file, and look for the FileName field. You should see a file path that leads to the data file.
  3. Update or replace the file indicated in FileName.
  4. Push your changes to Panther with the following command:
    panther_analysis_tool upload
    Optionally, you can specify to upload only the Lookup Table:
    panther_analysis_tool upload --filter AnalysisType=lookup_table

Sync via S3 source

In some cases, you may want to sync from an S3 source to set up a Lookup Table. For example, suppose you want to know which groups and permission levels are associated with the employees at your company, and your company maintains an S3 bucket with an up-to-date copy of its Active Directory listing that includes groups and permissions information.
This option is best for a larger amount of data that updates more frequently from an S3 bucket. Any changes in the S3 bucket will sync to Panther.
The steps below are for the Panther Console. To configure via panther_analysis_tool instead, see the Prerequisites and Configuring Lookup Table for S3 sections below.
  1. Log in to the Panther Console.
  2. In the left sidebar, click Configure > Lookup Tables.
  3. In the upper right side of the page, click Create New to add a new Lookup Table.
  4. Configure the Lookup Table Basic Information:
    • Enter a descriptive Lookup Name.
    • Enter a Description (optional) and a Reference (optional).
      • Description is meant for content about the table, while Reference can be used to hyperlink to an internal resource.
    • Make sure the Enabled? toggle is set to Yes.
      • Note: This is required to import your data later in this process.
  5. Click Continue.
  6. Configure the Associated Log Types:
    • Select the Log Type from the dropdown.
    • Type in the names of the Selectors, the foreign key fields from the log type you want enriched with your Lookup Table.
    • Click Add Log Type to add another if needed.
      In the example screenshot above, we selected AWS.VPCFlow logs and typed in account to represent keys in the VPC Flow logs.
  7. Click Continue.
  8. Configure the Table Schema. Note: If you have not already created a new schema, please see our documentation on creating schemas. Once you have created a schema, you will be able to select it from the dropdown on the Table Schema page while configuring a Lookup Table.
    1. Select a Schema Name from the dropdown.
    2. Select a Primary Key Name from the dropdown. This should be a unique column on the table, such as accountID.
  9. Click Continue.
  10. On the "Choose Import Method" page, click Set Up next to "Sync Data from an S3 Bucket."
  11. Set up your S3 source.
    • Enter the Account ID, the 12-digit AWS Account ID where the S3 bucket is located.
    • Enter the S3 URI, the unique path that identifies the specific S3 bucket.
    • Optionally, enter the KMS Key if your data is encrypted using KMS-SSE.
    • Enter the Update Period, the cadence at which your S3 source gets updated (defaults to 1 hour).
  12. Click Continue.
  13. Set up an IAM Role.
    • Please see the next section, Creating an IAM Role, for instructions on the three options available to do this.
  14. Click Finish Setup. A source setup success page will populate.
  15. Optionally, next to Set an alarm in case this lookup table doesn't receive any data?, toggle the setting to YES to enable an alarm.
    • Fill in the Number and Period fields to indicate how often Panther should send you this notification.
    • The alert destinations for this alarm are displayed at the bottom of the page. To configure and customize where your notification is sent, see the documentation on Panther Destinations.
Note: Notifications generated for a Lookup Table upload failure are accessible in the System Errors tab within the Alerts & Errors page in the Panther Console.

Creating an IAM Role

There are three options for creating an IAM Role to use with your Panther Lookup Table using an S3 source:

Create an IAM role using AWS Console UI

  1. On the "Set Up an IAM role" page, during the process of creating a Lookup Table with an S3 source, locate the tile labeled "Using the AWS Console UI". On the right side of the tile, click Select.
  2. Click Launch Console UI.
    • You will be redirected to the AWS console in a new browser tab, with the template URL pre-filled.
    • The CloudFormation stack will create an AWS IAM role with the minimum required permissions to read objects from your S3 bucket.
    • Click the "Outputs" tab of the CloudFormation stack in AWS, and note the Role ARN.
  3. Navigate back to your Panther account.
  4. On the "Use AWS UI to set up your role" page, enter the Role ARN.
  5. Click Finish Setup.

Create an IAM role using CloudFormation Template File

  1. On the "Set Up an IAM role" page, during the process of creating a Lookup Table with an S3 source, locate the tile labeled "CloudFormation Template File". On the right side of the tile, click Select.
  2. Click CloudFormation template, which downloads the template so you can apply it through your own pipeline.
  3. Upload the template file in AWS:
    1. Open your AWS console and navigate to the CloudFormation product.
    2. Click Create stack.
    3. Click Upload a template file and select the CloudFormation template you downloaded.
  4. On the "CloudFormation Template" page in Panther, enter the Role ARN.
  5. Click Finish Setup.

Create an IAM role manually

  1. On the "Set Up an IAM role" page, during the process of creating a Lookup Table with an S3 source, click the link that says I want to set everything up on my own.
  2. Create the required IAM role, either manually or through your own automation. The role must be named using the format PantherLUTsRole-${Suffix} (e.g., PantherLUTsRole-MyLookupTable).
    • The IAM role policy must include the statements defined below:
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Action": "s3:GetBucketLocation",
            "Resource": "arn:aws:s3:::<bucket-name>",
            "Effect": "Allow"
          },
          {
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::<bucket-name>/<input-file-path>",
            "Effect": "Allow"
          }
        ]
      }
    • If your S3 bucket is configured with server-side encryption using AWS KMS, you must include an additional statement granting the Panther API access to the corresponding KMS key. In this case, the policy will look something like this:
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Action": "s3:GetBucketLocation",
            "Resource": "arn:aws:s3:::<bucket-name>",
            "Effect": "Allow"
          },
          {
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::<bucket-name>/<input-file-path>",
            "Effect": "Allow"
          },
          {
            "Action": ["kms:Decrypt", "kms:DescribeKey"],
            "Resource": "arn:aws:kms:<region>:<your-account-id>:key/<kms-key-id>",
            "Effect": "Allow"
          }
        ]
      }
  3. On the "Setting up role manually" page in Panther, enter the Role ARN.
    • If you created the role with CloudFormation, the ARN can be found in the "Outputs" tab of the stack in your AWS account.
  4. Click Finish Setup. You will be redirected to the Lookup Tables list page with your new Employee Directory table listed.

File setup

The required files and directory layout are the same as for a file-backed table: see the File setup and Writing the configuration files sections under Import via file upload above. The only difference is that an S3-synced table replaces the FileName field with a Refresh block, as described below.

Prerequisites

Before you can configure your Lookup Table to sync with S3, you'll need to have the following ready:
  1. The ARN of an IAM role in AWS, which Panther can use to access the S3 bucket. For more information on setting up an IAM role for Panther, see the section on Creating an IAM Role.
  2. The path to the file you intend to store data in. The path should be of the following format: s3://bucket-name/path_to_file
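To verify a path matches that format programmatically, here is a small illustrative sketch (the bucket and key shown are hypothetical):
from urllib.parse import urlparse

def parse_s3_path(path: str) -> tuple:
    """Split an s3://bucket-name/path_to_file URI into (bucket, key)."""
    parsed = urlparse(path)
    if parsed.scheme != "s3" or not parsed.netloc or not parsed.path.strip("/"):
        raise ValueError(f"Expected s3://bucket-name/path_to_file, got: {path}")
    return parsed.netloc, parsed.path.lstrip("/")

print(parse_s3_path("s3://my-bucket/lookups/employee_directory.csv"))
# ('my-bucket', 'lookups/employee_directory.csv')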

Configuring Lookup Table for S3

  1. Navigate to the YAML specification file for this Lookup Table.
  2. In the file, locate (or add) the Refresh field.
  3. Specify the RoleARN, ObjectPath, and PeriodMinutes fields. For the allowed values, see our Lookup Table Config File Specification.
  4. Save the config file, then upload your changes with panther_analysis_tool.
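As a final sanity check, the sketch below validates the Refresh fields against the allowed values listed in the Refresh Specification at the end of this page. This is an illustrative script, assuming PyYAML is installed and using a hypothetical file path:
import yaml  # PyYAML

ALLOWED_PERIODS = {15, 30, 60, 180, 720, 1440}  # minutes, per the Refresh spec

with open("lookup_tables/my_table/my_table.yml") as f:  # hypothetical path
    refresh = yaml.safe_load(f).get("Refresh", {})

assert refresh.get("RoleARN", "").startswith("arn:aws:iam::"), "RoleARN must be an IAM role ARN"
assert refresh.get("ObjectPath", "").startswith("s3://"), "ObjectPath must be an s3:// URI"
assert refresh.get("PeriodMinutes") in ALLOWED_PERIODS, "PeriodMinutes must be an allowed value"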

Using Data Explorer with Lookup Tables


Query via Data Explorer

p_enrichment is not stored in the data lake, but you can join the Lookup Table directly against any table in Data Explorer with a query similar to the following:
with logs as (select * from my_logs),
lookup as (select * from my_lookup_table)
select logs.fieldA, lookup.fieldB
from logs join lookup on logs.selector_field = lookup.key_field

View the Lookup Table data with Data Explorer

  1. In your Panther Console, navigate to Configure > Lookup Tables to view your Lookup Tables list.
  2. Click ... in the upper right corner of the Lookup Table you wish to view, then click View In Data Explorer.
For more information on using Data Explorer, please see the documentation: Data Explorer.

Write a detection using Lookup Table data

After you configure a Lookup Table, you can write detections based on the additional context from your Lookup Table.
For example, if you configured a Lookup Table to distinguish between developer and production accounts in AWS CloudTrail logs, you might want to receive an alert only if both of the following are true:
  • A user logged in who did not have MFA enabled.
  • The AWS account is a production (not a developer) account.
You can use the deep_get helper function to retrieve the looked up field from p_enrichment using the foreign key field in the log. The pattern looks like this:
deep_get(event, 'p_enrichment', <Lookup Table name>, <foreign key in log>, <field in Lookup Table>)
The Lookup Table name, foreign key, and field name are all optional parameters. If they are not specified, deep_get returns a hierarchical dictionary with all the enrichment data available; specifying the parameters ensures that only the data you care about is returned, as the short illustration below shows.
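For instance, using a trimmed version of the example event shown later in this section (deep_get comes from panther_base_helpers and is available in Panther's rule environment):
from panther_base_helpers import deep_get

event = {  # trimmed version of the example event shown below
    "p_enrichment": {
        "account_metadata": {
            "recipientAccountId": {"accountID": "90123456", "isProduction": False}
        }
    }
}

# Full path: a single value (False here)
deep_get(event, 'p_enrichment', 'account_metadata', 'recipientAccountId', 'isProduction')

# Without the final field name: the whole matched row as a dict
deep_get(event, 'p_enrichment', 'account_metadata', 'recipientAccountId')

# With no further parameters: all enrichment data on the event
deep_get(event, 'p_enrichment')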
See an example of a Python rule to detect the MFA scenario described above:
from panther_base_helpers import deep_get

def rule(event):
    is_production = deep_get(event, 'p_enrichment', 'account_metadata',
                             'recipientAccountId', 'isProduction')
    return not event.get('mfaEnabled') and is_production
The Panther rules engine will take the looked-up matches and append that data to the event using the key p_enrichment in the following JSON structure:
{
    'p_enrichment': {
        <name of lookup table>: {
            <key in log that matched>: <matching row looked up>,
            ...
            <key in log that matched>: <matching row looked up>,
        }
    }
}
Example:
{
    "p_enrichment": {
        "account_metadata": {
            "recipientAccountId": {
                "accountID": "90123456",
                "isProduction": false,
                "email": "[email protected]"
            }
        }
    }
}

Detection Testing

For rules that use p_enrichment, click Enrich Test Data in the upper right side of the JSON code editor to populate it with your Lookup Table data. This allows you to test a Python function with an event that contains p_enrichment.
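Outside the Console, you can exercise the same logic with a quick local test by handcrafting an event that already contains p_enrichment. Below is a minimal sketch of the MFA rule above, using chained .get calls in place of deep_get so it runs without Panther's helpers:
# A minimal local test; the event is handcrafted to include the
# p_enrichment data that a Lookup Table match would add.
def rule(event):
    is_production = (
        event.get("p_enrichment", {})
        .get("account_metadata", {})
        .get("recipientAccountId", {})
        .get("isProduction")
    )
    return not event.get("mfaEnabled") and bool(is_production)

test_event = {
    "mfaEnabled": False,
    "p_enrichment": {
        "account_metadata": {
            "recipientAccountId": {"accountID": "90123456", "isProduction": True}
        }
    },
}
assert rule(test_event) is True  # production account without MFA should alert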

Lookup Table Examples

Example for translating 1Password UUIDs into human readable names

Please see our guide about using Lookup Tables to translate 1Password's Universally Unique Identifier (UUID) values into human readable names: Using Lookup Tables: 1Password UUIDs.

Example using CIDR matching through Panther Console

Example scenario: Let's say you want to write detections that treat traffic logs from company IP space (e.g. VPNs and hosted systems) differently from logs originating in public IP space.
You have your company's allowed CIDR blocks listed in a .csv file (e.g. 4.5.0.0/16):
cidr,description
10.2.3.0/24,San Francisco Office
20.3.4.0/24,DC Office
30.4.5.0/24,Boston Office

Set up a Lookup Table with the CIDR list

  1. Follow the steps above under "Set up a Lookup Table" to add a new Lookup Table and configure its basic information.
    • The name of the Lookup Table in this example is Company CIDR Blocks.
  2. On the Associated Log Types page, choose the Log Type and Selectors.
    • For this example, we used AWS.VPCFlow logs and associated the source IP (srcAddr) and destination (dstAddr) keys.
  3. Associate a schema for your Lookup Table: Select an existing one from your list or create a new schema.
    • Note: The primary key column which will hold the CIDR blocks needs to have a CIDR validation applied in the schema to indicate that this lookup table will do CIDR block matching on IP addresses. See our log schema reference.
      # Will allow valid ipv6 CIDR ranges
      # e.g. 2001:0db8:85a3:0000:0000:0000:0000:0000/64
      - name: address
        type: string
        validate:
          cidr: "ipv6"
      # Will allow valid ipv4 CIDR ranges e.g. 100.100.100.100/00
      - name: address
        type: string
        validate:
          cidr: "ipv4"
  4. Drag and drop a file or click Select File to choose the file of your CIDR block list to import. The file must be in .csv or .jsonl format. The maximum file size supported is 5MB.
  5. After you successfully import a file, click View in Data Explorer to query that table data, or click Finish Setup to go back to the list of your custom Lookup Tables.

Write a detection

You might like to receive an alert if any VPC traffic comes from a source IP address that is not part of your company's allowed CIDR blocks. Here is an example of a Python rule that will send an alert in this case:
from panther_base_helpers import deep_get

def rule(event):
    if event.get('flowDirection') == 'egress':  # we only care about inbound traffic
        return False
    if event.get('action') == 'REJECT':  # rejected traffic is not a concern here
        return False
    if deep_get(event, 'p_enrichment', 'Company CIDR Blocks', 'srcAddr'):  # approved ranges are ok
        return False
    return True  # alert if NOT from an approved network range
Note: The CIDR validation applied in the Lookup Table schema in this example enables the system to match IP addresses in VPC Flow logs to CIDR blocks in the lookup.
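Conceptually, CIDR matching is a containment check of an IP address against each block. Python's standard ipaddress module can illustrate the idea (a sketch of the concept, not Panther's implementation):
import ipaddress

# The example table's CIDR blocks, mapped to their descriptions.
cidr_blocks = {
    "10.2.3.0/24": "San Francisco Office",
    "20.3.4.0/24": "DC Office",
    "30.4.5.0/24": "Boston Office",
}

def match_cidr(ip: str):
    """Return the description of the first CIDR block containing ip, if any."""
    addr = ipaddress.ip_address(ip)
    for block, description in cidr_blocks.items():
        if addr in ipaddress.ip_network(block):
            return description
    return None

print(match_cidr("10.2.3.45"))  # San Francisco Office
print(match_cidr("8.8.8.8"))    # None -> would trigger the rule above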

Example using IP for Geolocation with Panther Analysis Tool

Let's say you want to know which geographical location your employees are connecting from (e.g., using info like geonames.org). In this scenario, your company has a static file that maps CIDRs to a GeoId, like the one we have in this example_cidr_lookup_content.csv.
> curl https://raw.githubusercontent.com/panther-labs/panther-analysis/master/templates/example_cidr_lookup_content.csv
network,geoname_id
1.0.0.0/24,2077422
1.0.1.0/24,1814991
1.0.2.0/23,1814991
1.0.4.0/22,2077456
1.0.8.0/21,1814991
1.0.16.0/20,1814991
You could use a Lookup Table configuration similar to the following:
AnalysisType: lookup_table # always lookup_table
LookupName: simple_cidr_lookup # str
Enabled: true # bool
Description: Lookup table description # str (Optional)
FileName: ./relative/path/to/content.csv # str (Optional)
Reference: An optional reference link # str (Optional)
Schema: Custom.Simple.Cidr # str (should already exist)
LogTypeMap:
  PrimaryKey: network # str
  AssociatedLogTypes: # [...]
    - LogType: AWS.CloudTrail # str
      Selectors: # [str]
        - 'p_any_ip_addresses'
    - LogType: AWS.VPCFlow
      Selectors:
        - 'p_any_ip_addresses'
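With this table in place, a detection could read the matched geoname_id from p_enrichment. The sketch below is hypothetical: the alert-worthy GeoName IDs are invented for illustration, and the exact shape of enrichment for a list-valued selector like p_any_ip_addresses may differ, so treat the access path as illustrative:
from panther_base_helpers import deep_get  # available in Panther's rule environment

# Hypothetical set of GeoName IDs considered unexpected for this example.
UNEXPECTED_GEONAME_IDS = {"2077422"}

def rule(event):
    matches = deep_get(event, 'p_enrichment', 'simple_cidr_lookup', 'p_any_ip_addresses')
    if not matches:
        return False
    # A list-valued selector may yield multiple matched rows.
    rows = matches if isinstance(matches, list) else [matches]
    return any(str(row.get('geoname_id')) in UNEXPECTED_GEONAME_IDS for row in rows)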

Lookup Table Specification Reference

A complete list of Lookup Table specification fields. Required fields are AnalysisType, Enabled, LookupName, Schema, and LogTypeMap. An asterisk (*) indicates that two fields are mutually exclusive.

| Field Name | Description | Expected Value |
| --- | --- | --- |
| AnalysisType | Indicates that this is a Lookup Table | lookup_table |
| Enabled | Whether this table is enabled | Boolean |
| LookupName | The unique identifier of the table | String |
| Schema | The ID of the schema to use for parsing input data | String |
| LogTypeMap | A mapping of log schema fields to match against this table | Object, see below |
| FileName* | The relative path to the data file. Cannot be used with Refresh! | String |
| Refresh* | The configuration of the S3 Sync functionality. Cannot be used with FileName! | Object, see below |
| Description | A brief description of the table | String |
| Reference | An optional reference link | String |

LogTypeMap Specification

LogTypeMap should be an object with the following fields:

| Field Name | Description | Expected Value |
| --- | --- | --- |
| PrimaryKey | Defines which column of the table to use for matching against events | String |
| AssociatedLogTypes | A list of Log Types and the fields of each to use as Selector Keys | List, see below |

Each item of AssociatedLogTypes must be an object with the following fields:

| Field Name | Description | Expected Value |
| --- | --- | --- |
| LogType | The ID of the Log Schema | String |
| Selectors | A list of fields from the Log Type to be matched against the Primary Key | List of strings |

Refresh Specification

Refresh defines the configuration for an S3 Sync. It must be an object with the following fields:

| Field Name | Description | Expected Value |
| --- | --- | --- |
| RoleARN | The AWS ARN of the role Panther can assume to access the S3 object | String |
| ObjectPath | A URI pointing to the file within the S3 bucket | String |
| PeriodMinutes | The number of minutes to wait between syncs with the S3 object | 15, 30, 60, 180 (3 hours), 720 (12 hours), or 1440 (24 hours) |