Custom Logs

Define, write, and manage custom schemas


Panther allows you to define your own custom schemas. You can ingest custom logs into Panther via a Data Transport, and Panther will then use your custom schema to normalize and classify the data.
This page explains how to define, write, and manage custom schemas, as well as how to upload schemas with Panther Analysis Tool (PAT). For information on how to use pantherlog to work with custom schemas, please see pantherlog CLI tool.
Custom schemas are identified by a Custom. prefix in their name and can be used wherever a natively supported log type is used:
  • Log ingestion
    • You can onboard custom logs through a Data Transport (S3, SQS, Google Cloud Storage, CloudWatch Logs, or Google Cloud Pub/Sub).
  • Detections
    • You can write Rules for custom schemas.
  • Investigations
    • You can query the data in Indicator Search and in Data Explorer. Panther will create a new table for the custom schema once you onboard a source that uses it.

How to define a custom schema

Panther supports JSON and CSV (with or without headers) data formats for custom log types. Note, however, that Panther cannot infer schemas from CSV data without headers.
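For illustration (the data below is made up), here is the same event as CSV with and without a header row:

```
CSV with headers (supported for both schema inference and ingestion):
time,method,status
2019-11-14T13:12:46Z,GET,200

CSV without headers (ingestion only; the schema must be written manually):
2019-11-14T13:12:46Z,GET,200
```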
You can define a schema via the following methods:
Open the tabs below for instructions.
Sample logs
S3 data received in Panther
Historical S3 data

Generating a schema from sample logs

You can generate a schema by uploading sample logs into the Panther Console. If you'd like to use the command line instead, follow the instructions on using the pantherlog CLI tool here.
To get started, follow these steps:
  1. Log in to your Panther Console.
  2. On the left sidebar, navigate to Configure > Schemas.
  3. At the top right of the page next to the search bar, click Create New.
  4. On the New Data Schema page, enter a Schema ID, Description, and Reference URL.
    • The Description is meant for content about the table, while the Reference URL can be used to link to internal resources.
  5. Scroll to the bottom of the page, where you'll find the option to upload sample log files.
  6. Upload a sample set of logs: drag a file from your computer over the "Infer schema from sample logs" box, or click Select file and choose the log file. Note that Panther does not support CSV without headers for inferring schemas.
    • After uploading a file, Panther will display the raw logs in the UI. You can expand the log lines to view the entire raw log. Note that if you add another sample set, it will override the previously uploaded sample.
  7. Select the appropriate Stream Type (view examples for each type here).
    • Lines: Events are separated by a new line character.
    • JSON: Events are in JSON format.
    • JSON Array: Events are inside an array of JSON objects.
    • CloudWatch Logs: Events come from CloudWatch Logs.
  8. Click Infer Schema.
    • Panther will begin to infer a schema from the raw sample logs.
    • Panther will attempt to infer multiple timestamp formats.
    • Once the schema is generated, it will appear in the schema editor box above the raw logs.
  9. To ensure the schema works properly against the sample logs you uploaded and against any changes you make to the schema, click Validate & Test Schema.
    • This test validates that the syntax of your schema is correct and that the log samples you uploaded successfully match against the schema. The results appear below the schema editor box.
    • All successfully matched logs appear under Matched; each log displays the column, field, and JSON view.
    • All unsuccessfully matched logs appear under Unmatched; each log displays the error message and the raw log.
  10. Click Save to publish the schema.
Panther will infer from all logs uploaded, but will only display up to 100 logs to ensure fast response time when generating a schema.
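To illustrate the stream types from step 7, here are two hypothetical events (the data is made up for illustration) encoded for the Lines and JSON Array types:

```
Lines (one event per line):
{"method": "GET", "path": "/a", "time": "2019-11-14T13:12:46Z"}
{"method": "POST", "path": "/b", "time": "2019-11-14T13:12:47Z"}

JSON Array (the same events wrapped in a single array):
[{"method": "GET", "path": "/a", "time": "2019-11-14T13:12:46Z"},
 {"method": "POST", "path": "/b", "time": "2019-11-14T13:12:47Z"}]
```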

Inferring a custom schema from S3 data received in Panther

You can generate and publish a schema for a custom log source from live data streaming from an S3 bucket into Panther. You will first view your S3 data in Panther, then infer a schema, then test the schema.

View raw S3 data

After onboarding your S3 bucket into Panther, you can view raw data coming into Panther and infer a schema from it:
  1. Follow the instructions to onboard an S3 bucket onto Panther without having a schema in place.
    • For the S3 Prefixes & Schemas - Optional step, choose Add prefix(es) & Schema(s) later.
  2. While viewing your log source's Overview tab, scroll down to the Attach a schema to start classifying data section.
  3. Choose from the following options:
    • I want to add an existing schema: Choose this option if you have already created a schema and know the S3 prefix you want Panther to read logs from. Click Start in the tile.
      • An S3 Prefixes & Schemas modal will appear, where you can enter an S3 prefix, add exclusion filters, and add schemas.
    • I want to generate a schema from raw events: Select this option to generate a schema from live data in this bucket and define which prefixes you want Panther to read logs from. Click Start in the tile.
      • Note that you may need to wait up to 15 minutes for data to start streaming into Panther.
      • On the page you are directed to, you can view the raw data Panther has received at the bottom of the screen.
        • This data is displayed from data-archiver, a Panther-managed S3 bucket that retains raw logs for up to 15 days for every S3 log source.
        • Only raw log events that were placed in the S3 bucket after you configured the source in Panther will be visible, even if you've set the timespan to look further back.
        • If your raw events are JSON-formatted, you can view them as JSON by clicking View JSON in the left-hand column.

Infer a schema from raw data

If you chose I want to generate a schema from raw events in the previous section, you can now infer a schema.
  1. Once you see data populating in Raw Events, you can filter the events you'd like to infer a schema from by using the string Search, S3 Prefix, Excluded Prefix, and/or Time Period filters at the top of the Raw Events section.
  2. Click Infer Schema to generate a schema.
  3. On the Infer New Schema modal that pops up, enter the following:
    • New Schema Name: The name of the schema, which will map to the table in the data lake once the schema is published.
      • The name will always start with Custom. and must be followed by a capital letter.
    • S3 Prefix: Use an existing prefix that was set up prior to inferring the schema, or a new prefix.
      • The prefix you choose will filter data from the corresponding prefix in the S3 bucket to the schema you've inferred.
      • If you don't need to specify a prefix, you can leave this field empty to use the catch-all prefix, *.
  4. Click Infer Schema.
    • At the top of the page, you will see '<schema name>' was successfully inferred. Click Done.
    • The schema will then be placed in Draft mode until you're ready to publish it to production after testing.
  5. Review the schema and its fields by clicking its name.
    • Since the schema is in Draft, you can change, remove, or add fields as needed.

Test the schema with raw data

Once your schemas and prefixes are defined, you can proceed to testing the schema configuration against raw data.
  1. In the Test Schemas section at the top of the screen, click Run Test.
  2. On the Test Schemas modal that pops up, select the Time Period you would like to test your schema against, then click Start Test.
    • Depending on the time range and amount of data, the test may take a few minutes to complete.
    • Once the test is started, the results appear with the number of matched and unmatched events.
      • Matched Events represents the number of events that would successfully classify against the schema configuration.
      • Unmatched Events represents the number of events that would not classify against the schema.
  3. If there are Unmatched Events, inspect the errors and the JSON to determine what caused the failures.
    • Click Back to Schemas, make changes as needed, and test the schema again.
  4. Click Back to Schemas.
  5. In the upper right corner, click Save.
    • The inferred schema is now attached to your log source.

Inferring custom schemas from historical S3 data

You can infer and save one or multiple schemas for a custom S3 log source from historical data in your S3 bucket (i.e., data that was added to the bucket before it was onboarded as a log source in Panther).

Prerequisite: Onboard your S3 bucket to Panther

  • Follow the instructions to onboard an S3 bucket onto Panther without having a schema in place.
    • For the S3 Prefixes & Schemas - Optional step, choose Add prefix(es) & Schema(s) later.
    • If you have onboarded the S3 source with a custom IAM role, that role must have the ListBucket permission.

Step 1: View the S3 bucket structure in Panther

After creating your S3 bucket source in Panther, you can view your S3 bucket's structure and data in the Panther Console:
  1. In the Panther Console, navigate to Configure > Log Sources. Click into your S3 log source.
  2. In the log source's Overview tab, scroll down to the Attach a Schema to start classifying the data section.
  3. On the right side of the I want to generate a schema from bucket data tile, click Start.
    • You will be redirected to a folder inspection of your S3 bucket, where you can view and navigate through all folders and objects in the S3 bucket.
    • Alternatively, you can access the folder inspection of your S3 bucket via the success page after onboarding your S3 source in Panther. From that page, click Attach or Infer Schemas.

Step 2: Navigate through your data

  • While viewing the folder inspection, click an object.
    • A preview window will appear, displaying a preview of its events.
If the events fail to render correctly (either generating an error or displaying events improperly), it's possible the wrong stream type has been chosen for the S3 bucket source. If this is the case, click the Selected Logs Format control at the top of the folder inspection view to choose a different stream type.

Step 3: Indicate whether each folder has an existing schema or a new one should be inferred

After reviewing what's included in your bucket, you can determine whether one or multiple schemas are necessary to represent all of the bucket's data. Next, you can select folders that include data with distinct structures and either infer a new schema or assign an existing one.
  1. Determine whether one or more schemas will need to be inferred from the data in your S3 bucket.
    • If all data in the S3 bucket has the same structure (and therefore can be represented by one schema), you can leave the default Infer New Schema option selected at the bucket level. This generates a single schema for all data in the bucket.
    • If the S3 bucket includes data that needs to be classified into multiple schemas, follow the steps below for each folder in the bucket:
      1. Select a folder and click Include.
        • Alternatively, if there is a folder or subfolder that you do not want Panther to process, select it and click Exclude.
      2. If you have an existing schema that matches the data, click the Schema dropdown on the right side of the row, then select the schema.
        • By default, each newly included folder has the Infer New Schema option selected.
  2. Click Infer n Schemas.

Step 4: Wait for schemas to be inferred

The schema inference process may take up to 15 minutes. You can leave this page while the process completes. You can also stop this process early, and keep the schema(s) inferred during the time that the process ran.

Step 5: Review the results

After the inference process is complete, you can view the resulting schemas and the number of events that were used during each schema's inference. You can also validate how each schema parses raw events.
  1. Click the play icon on the right side of each row.
  2. Click the Events tab to see the raw and normalized events.
  3. Click the Schema tab to see the generated schema.

Step 6: Name the schema(s) and save source

Before saving the source, name each of the newly inferred schemas with a unique name by clicking Add name.
After all new schemas have been named, you will be able to click Save Source in the upper right corner.

Adding a Custom Schema manually

To add a Custom Schema manually:
  1. In the Panther Console, navigate to Configure > Schemas.
  2. Click New in the upper right corner.
  3. Enter a name for the Custom Log (e.g., Custom.SampleAPI) and write or paste your YAML Log Schema definition.
  4. Click Validate Syntax at the bottom to verify that your schema contains no errors.
    • Note that syntax validation only checks the syntax of the Log Schema. Saving can still fail due to name conflicts.
  5. Click Save.
You can now navigate to Configure > Log Sources and add a new source or modify an existing one to use the new Custom.SampleAPI Log Type. Once Panther receives events from this Source, it will process the logs and store the Log Events in the custom_sampleapi table.
You can also now write Rules to match against these logs and query them using the Data Explorer.
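As a sketch of what such a definition might contain, a minimal YAML Log Schema for a hypothetical Custom.SampleAPI log type could look like the following (the field names are illustrative assumptions, not a real API's fields; see the Log Schema Reference for the full specification):

```yaml
# Hypothetical minimal schema for Custom.SampleAPI; field names are illustrative.
version: 0
fields:
  - name: time
    description: Event timestamp
    required: true
    type: timestamp
    timeFormats:
      - rfc3339
    isEventTime: true
  - name: method
    description: HTTP method of the API call
    type: string
  - name: sourceIp
    description: IP address the call originated from
    type: string
    indicators: [ ip ]
```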

Writing schemas

See the tabs below for instructions on writing schemas for JSON logs and for text logs.
Note that you can use the pantherlog CLI tool to generate your Log Schema.
Text logs

Writing a schema for JSON logs

To parse log files where each line is JSON you have to define a Log Schema that describes the structure of each log entry.
You can edit the YAML specifications directly in the Panther Console or they can be prepared offline in your editor/IDE of choice. For more information on the structure and fields in a Log Schema, see the Log Schema Reference.
In the example schemas below, the first tab displays the JSON log structure and the second tab shows the Log Schema.
Note: Please leverage the Minified JSON Log Example when using the pantherlog tool or generating a schema within the Panther Console.
JSON Log Example:

```json
{
  "method": "GET",
  "path": "/-/metrics",
  "format": "html",
  "controller": "MetricsController",
  "action": "index",
  "status": 200,
  "params": [],
  "remote_ip": "",
  "user_id": null,
  "username": null,
  "ua": null,
  "queue_duration_s": null,
  "correlation_id": "c01ce2c1-d9e3-4e69-bfa3-b27e50af0268",
  "cpu_s": 0.05,
  "db_duration_s": 0,
  "view_duration_s": 0.00039,
  "duration_s": 0.0459,
  "tag": "test",
  "time": "2019-11-14T13:12:46.156Z"
}
```

Minified JSON log example:

```json
{"method":"GET","path":"/-/metrics","format":"html","controller":"MetricsController","action":"index","status":200,"params":[],"remote_ip":"","user_id":null,"username":null,"ua":null,"queue_duration_s":null,"correlation_id":"c01ce2c1-d9e3-4e69-bfa3-b27e50af0268","cpu_s":0.05,"db_duration_s":0,"view_duration_s":0.00039,"duration_s":0.0459,"tag":"test","time":"2019-11-14T13:12:46.156Z"}
```

Log Schema Example:

```yaml
version: 0
fields:
  - name: time
    description: Event timestamp
    required: true
    type: timestamp
    timeFormats:
      - rfc3339
    isEventTime: true
  - name: method
    description: The HTTP method used for the request
    type: string
  - name: path
    description: The path used for the request
    type: string
  - name: remote_ip
    description: The remote IP address the request was made from
    type: string
    indicators: [ ip ] # the value will be appended to `p_any_ip_addresses` if it's a valid ip address
  - name: duration_s
    description: The number of seconds the request took to complete
    type: float
  - name: format
    description: Response format
    type: string
  - name: user_id
    description: The id of the user that made the request
    type: string
  - name: params
    type: array
    element:
      type: object
      fields:
        - name: key
          description: The name of a Query parameter
          type: string
        - name: value
          description: The value of a Query parameter
          type: string
  - name: tag
    description: Tag for the request
    type: string
  - name: ua
    description: UserAgent header
    type: string
```

Writing a schema for text logs

Panther handles logs that are not structured as JSON by using a 'parser' that translates each log line into key/value pairs and feeds it as JSON to the rest of the pipeline. You can define a text parser using the parser field of the Log Schema. Panther provides the following parsers for non-JSON formatted logs:
  • fastmatch: Match each line of text against one or more simple patterns.
  • regex: Use regular expression patterns to handle more complex matching, such as conditional fields, case-insensitive matching, etc.
  • csv: Treat log files as CSV, mapping column names to field names.
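For example, a fastmatch parser configuration might look like the following sketch (the pattern and field names are illustrative assumptions, not taken from a real log source; consult the Log Schema Reference for the exact parser options):

```yaml
# Illustrative fastmatch parser; the pattern and field names are hypothetical.
version: 0
parser:
  fastmatch:
    match:
      - '%{ts} %{level} %{message}'
fields:
  - name: ts
    description: Event timestamp
    required: true
    type: timestamp
    timeFormats:
      - rfc3339
    isEventTime: true
  - name: level
    description: Log severity level
    type: string
  - name: message
    description: Free-text log message
    type: string
```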

Managing custom schemas

Editing a custom schema

Panther allows custom schemas to be edited. Specifically, you can perform the following actions:
  • Add new fields.
  • Rename or delete existing fields.
  • Edit, add, or remove all properties of existing fields.
  • Modify the parser configuration to fix bugs or add new patterns.
Note: After editing a field's type, any newly ingested data will match the new type while any previously ingested data will retain its type.
To edit a custom schema:
  1. Navigate to your custom schema's details page in the Panther Console.
  2. Click Edit in the upper right corner of the details page.
  3. Modify the YAML.
    • Click Diff View in the upper right corner of the text editor to see additions, edits, and deletions in the code editor. It also includes the ability to copy or revert deleted lines.
  4. Click Update to submit your change.
Click Validate Syntax to check the YAML for structural compliance. Note that the full validation rules are only checked when you click Update; the update will be rejected if the rules are not followed.
Editing schema fields might require updates to related detections and saved queries. Click on the related entities in the alert banner displayed above the schema editor to view, update, and test the list of affected detections and saved queries.

Query implications

Queries will work across changes to a Type provided the query does not use a function or operator which requires a field type that is not castable across Types.
  • Good example: The Type is edited from string to int, where all existing values are numeric (e.g., "1"). A query using the function sum aggregates old and new values together.
  • Bad example: The Type is edited from string to int, where some existing values are non-numeric (e.g., "apples"). A query using the function sum excludes the non-numeric values.

Query castability table

This table shows which Types can be cast as each Type when running a query. Schema editing allows any Type to be changed to another Type.
Several casts are possible only for values that contain numbers only (for example, casting string to int, as in the examples above).

Archiving and unarchiving a custom schema

You can archive and unarchive custom schemas in Panther. You might choose to archive a schema if it's no longer used to ingest data, and you do not want it to appear as an option in various dropdown selectors throughout Panther. In order to archive a schema, it must not be in use by any log sources. Schemas that have been archived still exist indefinitely; it is not possible to permanently delete a schema.
Archiving a schema does not affect any data ingested using that schema already stored in the data lake—it is still queryable using Data Explorer and Indicator Search. By default, archived schemas are not shown in the schema list view (visible on Configure > Schemas), but can be shown by modifying Status, within Filters, in the upper right corner. In the Data Explorer, tables of archived schemas are not shown under Tables.
Attempting to create a new schema with the same name as an archived schema will result in a name conflict, and prompt you to instead unarchive and edit the existing schema.
To archive or unarchive a custom schema:
  1. In the Panther Console, navigate to Configure > Schemas.
    • Locate the schema you'd like to archive or unarchive.
  2. Click the three dots icon in the upper right corner of the tile, and select Archive or Unarchive.
    • If you are archiving a schema that is currently associated with one or more log sources, the confirmation modal will prompt you to first detach the schema. Once you have done so, click Refresh.
  3. On the confirmation modal, click Continue.

Testing a custom schema

The "Test Schema against sample logs" feature found on the Schema Edit page in the Panther Console supports Lines, CSV (with or without headers), JSON, JSON Array, and CloudWatch Logs. See Stream Types for examples.
Additionally, the above log formats can be compressed using the following formats:
  • gzip
  • zstd (without dictionary)
Multi-line logs are supported for the JSON and JSON Array formats.
Need to validate that a custom schema will work against your logs? You can test sample logs by following this process:
  1. In the Panther Console, go to Configure > Schemas.
  2. Click a custom schema.
  3. On the schema details page, scroll to the bottom, where you'll be able to upload sample logs.

Uploading log schemas with the Panther Analysis Tool

If you choose to maintain your log schemas outside of Panther, for example in order to keep them under version control and review changes before updating, you can upload the YAML files programmatically with the Panther Analysis Tool.
The uploader command receives a base path as an argument and then recursively discovers all files with the .yml and .yaml extensions.
It is recommended to keep schema files separate from other, unrelated files; otherwise you may see several unrelated errors from attempts to upload invalid schema files.
panther_analysis_tool update-custom-schemas --path ./schemas
The uploader will check whether a matching schema already exists and update it, or create a new one if no matching schema name is found.
The schema field must always be defined in the YAML file and be consistent with the existing schema name for an update to succeed. For an example, see here.
The uploaded files are validated with the same criteria as Web UI updates.
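For instance, a file under the base path might look like the following sketch (the schema name and fields are hypothetical); note the top-level schema key, which the uploader matches against existing schema names:

```yaml
# schemas/sample_api.yml (hypothetical example)
schema: Custom.SampleAPI  # must match the existing schema name for an update
description: Logs from a sample API
version: 0
fields:
  - name: time
    description: Event timestamp
    required: true
    type: timestamp
    timeFormats:
      - rfc3339
    isEventTime: true
  - name: message
    description: Event message
    type: string
```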

Troubleshooting Custom Logs

Visit the Panther Knowledge Base to view articles about custom log sources that answer frequently asked questions and help you resolve common errors and issues.