Custom Logs

Define, write, and manage custom schemas


Panther allows you to define your own custom schemas. You can ingest custom logs into Panther via a Data Transport, and Panther will then use your custom schema to normalize and classify the data.

This page explains how to define, write, and manage custom schemas, as well as how to upload schemas with Panther Analysis Tool (PAT). For information on how to use pantherlog to work with custom schemas, please see pantherlog CLI tool.

Custom schemas are identified by a Custom. prefix in their name and can be used wherever a natively supported log type is used:

  • Log ingestion

    • You can onboard custom logs through a Data Transport (e.g., HTTP webhook, S3, SQS, Google Cloud Storage, Azure Blob Storage)

  • Detections

  • Investigations

    • You can query the data in Search and in Data Explorer. Panther will create a new table for the custom schema once you onboard a source that uses it.

How to define a custom schema

Panther supports JSON data formats and CSV (with or without headers) for custom log types. Note, however, that schemas cannot be inferred from CSV data without headers.

There are multiple ways to define a custom schema. You can:

Automatically infer the schema in Panther

Instead of writing a schema manually, you can let the Panther Console or the pantherlog CLI tool infer a schema (or multiple schemas) from your data.

When Panther infers a schema, note that if your data sample has:

  • A field of type object with more than 200 fields, that field will be classified as type json.

  • A field with mixed data types (i.e., it is an array with multiple data types, or the field itself has varying data types), that field will be classified as type json.

How to infer a schema

There are multiple ways to infer a schema in Panther:

Inferring a custom schema from sample logs

You can generate a schema by uploading sample logs into the Panther Console. If you'd like to use the command line instead, follow the instructions on using the pantherlog CLI tool here.

To get started, follow these steps:

  1. Log in to your Panther Console.

  2. On the left sidebar, navigate to Configure > Schemas.

  3. At the top right of the page next to the search bar, click Create New.

  4. Enter a Schema ID, Description, and Reference URL.

    • The Description is meant for content about the table, while the Reference URL can be used to link to internal resources.

  5. Optionally enable Field Discovery by clicking its toggle ON. Learn more in Enabling field discovery.

  6. Scroll to the bottom of the page where you'll find the option to upload sample log files.

• After uploading a file, Panther will display the raw logs in the UI. You can expand the log lines to view the entire raw log. Note that if you upload another sample set, it will replace the previously uploaded sample.

  7. Select the appropriate Stream Type (view examples for each type here).

    • Lines: Events are separated by a new line character.

    • JSON: Events are in JSON format.

    • JSON Array: Events are inside an array of JSON objects.

    • CloudWatch Logs: Events came from CloudWatch Logs.

    • Auto: Panther will automatically detect the appropriate stream type.

  8. Click Infer Schema.

    • Panther will begin to infer a schema from the raw sample logs.

    • Panther will attempt to infer multiple timestamp formats.

  9. To ensure the schema works properly against the sample logs you uploaded and against any changes you make to the schema, click Validate & Test Schema.

    • This test will validate that the syntax of your schema is correct and that the log samples you have uploaded into Panther are successfully matching against the schema. You should see the results appear below the schema editor box.

    • All unsuccessfully matched logs will appear under Unmatched; each log will display the error message and the raw log.

  10. Click Save to publish the schema.

Panther will infer from all logs uploaded, but will only display up to 100 logs to ensure fast response time when generating a schema.
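For illustration, here is how two hypothetical events would be laid out under the Lines stream type versus the JSON Array stream type (sample data invented for this example):

```text
# Lines: one event per line
{"action": "login", "status": 200}
{"action": "logout", "status": 200}

# JSON Array: the same events inside a single array
[{"action": "login", "status": 200}, {"action": "logout", "status": 200}]
```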

Create the schema yourself

How to create a custom schema manually

To create a custom schema manually:

  1. In the Panther Console, navigate to Configure > Schemas.

  2. Click Create New in the upper right corner.

  3. Enter a Schema ID, Description, and Reference URL.

    • The Description is meant for content about the table, while the Reference URL can be used to link to internal resources.

  4. Optionally enable Automatic Field Discovery by clicking its toggle ON. Learn more in Enabling field discovery.

  5. In the YAML code block, write or paste your YAML log schema definition.

  6. Click Validate Syntax at the bottom to verify your schema contains no errors.

    • Note that syntax validation only checks the syntax of the Log Schema. It can still fail to save due to name conflicts.

  7. Click Save.
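As a point of reference for step 5, the YAML you paste might look like the following minimal sketch (the fields and types are illustrative, not a Panther-provided example; in the Console, the schema name comes from the Schema ID field rather than the YAML):

```yaml
# Illustrative sketch only; adjust field names, types, and
# timeFormat to match your own data.
fields:
  - name: time
    type: timestamp
    timeFormat: rfc3339
    isEventTime: true   # the timestamp Panther treats as the event time
  - name: method
    type: string
  - name: path
    type: string
  - name: status
    type: int
```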

You can now navigate to Configure > Log Sources and add a new source or modify an existing one to use the new Custom.SampleAPI log type. Once Panther receives events from this source, it will process the logs and store them in the custom_sampleapi table.

You can also now write detections to match against these logs and query them using Search or Data Explorer.

Writing schemas

See the tabs below for instructions on writing schemas for JSON logs and for text logs.

Note that you can use the pantherlog CLI tool to generate your Log Schema.

Writing a schema for JSON logs

To parse log files where each line is JSON, you have to define a log schema that describes the structure of each log entry.

You can edit the YAML specification directly in the Panther Console, or prepare it offline in your editor/IDE of choice. For more information on the structure and fields in a Log Schema, see the Log Schema Reference.

In the example schemas below, the first tab displays the JSON log structure and the second tab shows the Log Schema.

Note: Please leverage the Minified JSON Log Example when using the pantherlog tool or generating a schema within the Panther Console.

{
  "method": "GET",
  "path": "/-/metrics",
  "format": "html",
  "controller": "MetricsController",
  "action": "index",
  "status": 200,
  "params": [],
  "remote_ip": "",
  "user_id": null,
  "username": null,
  "ua": null,
  "queue_duration_s": null,
  "correlation_id": "c01ce2c1-d9e3-4e69-bfa3-b27e50af0268",
  "cpu_s": 0.05,
  "db_duration_s": 0,
  "view_duration_s": 0.00039,
  "duration_s": 0.0459,
  "tag": "test",
  "time": "2019-11-14T13:12:46.156Z"
}
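
A Log Schema matching the sample log above might look like the following abridged sketch (hand-written for illustration, with the field list shortened and types inferred by eye, so treat it as an approximation rather than Panther's generated output):

```yaml
version: 0
fields:
  - name: method
    type: string
  - name: path
    type: string
  - name: status
    type: int
  - name: cpu_s
    type: float
  - name: correlation_id
    type: string
  - name: time
    type: timestamp
    timeFormat: rfc3339
    isEventTime: true
```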

Minified JSON log example:

{"method":"GET","path":"/-/metrics","format":"html","controller":"MetricsController","action":"index","status":200,"params":[],"remote_ip":"","user_id":null,"username":null,"ua":null,"queue_duration_s":null,"correlation_id":"c01ce2c1-d9e3-4e69-bfa3-b27e50af0268","cpu_s":0.05,"db_duration_s":0,"view_duration_s":0.00039,"duration_s":0.0459,"tag":"test","time":"2019-11-14T13:12:46.156Z"}

Schema field suggestions

When creating or editing a custom schema, you can use field suggestions generated by Panther. To use this functionality:

  1. In the Panther Console, click into the YAML schema editor.

    • To edit an existing schema, click Configure > Schemas > [name of schema you would like to edit] > Edit.

    • To create a new schema, click Configure > Schemas > Create New.

  2. Press Command+I on macOS (or Control+I on PC).

    • The schema editor will display available properties and operations based on the position of the text cursor.

Managing custom schemas

Editing a custom schema

Panther allows custom schemas to be edited.

Note: After editing a field's type, any newly ingested data will match the new type while any previously ingested data will retain its type.

To edit a custom schema:

  1. Navigate to your custom schema's details page in the Panther Console.

  2. Click Edit in the upper right corner of the details page.

  3. Modify the YAML.

  4. Click Update to submit your change.

Click Validate Syntax to check the YAML for structural compliance. Note that full validation runs only after you click Update; the update will be rejected if the schema does not pass.

Editing schema fields might require updates to related detections and saved queries. Click on the related entities in the alert banner displayed above the schema editor to view, update, and test the list of affected detections and saved queries.

Query implications

Queries will work across changes to a Type provided the query does not use a function or operator which requires a field type that is not castable across Types.

  • Good example: The Type is edited from string to int where all existing values are numeric (e.g., "1"). A query using the function sum aggregates old and new values together.

  • Bad example: The Type is edited from string to int where some of the existing values are non-numeric (e.g., "apples"). A query using the function sum excludes the non-numeric values.

Query castability table

This table shows which Types can be cast as each Type when running a query. Schema editing allows any Type to be changed to another Type.

From \ To: boolean, string, int, bigint, float, timestamp

(The individual cells of this table were lost in extraction. The recoverable markings show that several casts, including string to the numeric types, are marked "numbers only", meaning only values that are actually numeric can be cast.)

Archiving and unarchiving a custom schema

You can archive and unarchive custom schemas in Panther. You might choose to archive a schema if it's no longer used to ingest data, and you do not want it to appear as an option in various dropdown selectors throughout Panther. In order to archive a schema, it must not be in use by any log sources. Schemas that have been archived still exist indefinitely; it is not possible to permanently delete a schema.

Archiving a schema does not affect any data ingested using that schema already stored in the data lake—it is still queryable using Data Explorer and Search. By default, archived schemas are not shown in the schema list view (visible on Configure > Schemas), but can be shown by modifying Status, within Filters, in the upper right corner. In Data Explorer, tables of archived schemas are not shown under Tables.

Attempting to create a new schema with the same name as an archived schema will result in a name conflict, and prompt you to instead unarchive and edit the existing schema.

To archive or unarchive a custom schema:

  1. In the Panther Console, navigate to Configure > Schemas.

    • Locate the schema you'd like to archive or unarchive.

  2. Click the three dots icon in the upper right corner of the tile, and select Archive or Unarchive.

  3. On the confirmation modal, click Continue.

Testing a custom schema

The "Test Schema against sample logs" feature found on the Schema Edit page in the Panther Console supports Lines, CSV (with or without headers), JSON, JSON Array, CloudWatch Logs, and Auto. See Stream Types for examples.

Additionally, the above log formats can be compressed using the following formats:

  • gzip

  • zstd (without dictionary)

Multi-line logs are supported for the JSON and JSON Array formats.

Need to validate that a custom schema will work against your logs? You can test sample logs by following this process:

  1. In the Panther Console, go to Configure > Schemas.

  2. Click on a custom schema.

  3. In the schema details page, scroll to the bottom of the page where you'll be able to upload logs.

Enabling field discovery

Log source schemas in Panther define the log event fields that will be stored in Panther. When field discovery is enabled, data from fields in incoming log events that are not defined in the corresponding schema will not be dropped—instead, the fields will be identified, and the data will be stored. This means you can subsequently query data from these fields, and write detections referencing them.

Field discovery is currently only available for custom schemas, not Panther-managed ones. See additional limitations of field discovery below.

Handling of special characters in field names

If a field name contains a special character, i.e., a character that is not alphanumeric, an underscore (_), or a dash (-), it will be transliterated using the algorithm below:

  • @ to at_sign

  • , to comma

  • ` to backtick

  • ' to apostrophe

  • $ to dollar_sign

  • * to asterisk

  • & to ampersand

  • ! to exclamation

  • % to percent

  • + to plus

  • / to slash

  • \ to backslash

  • # to hash

  • ~ to tilde

  • = to eq

All other ASCII characters (including space) will be replaced with an underscore (_). Non-ASCII characters are transliterated to their closest ASCII equivalent.

This transliteration affects only field names; values are not modified.
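As a rough sketch, the rules above could be expressed as follows (a hypothetical helper, not Panther's actual implementation; in particular, the closest-ASCII-equivalent mapping for non-ASCII characters is simplified here to an underscore):

```python
# Illustrative sketch of the field-name transliteration rules described
# above -- not Panther's actual implementation.
SPECIAL = {
    "@": "at_sign", ",": "comma", "`": "backtick", "'": "apostrophe",
    "$": "dollar_sign", "*": "asterisk", "&": "ampersand",
    "!": "exclamation", "%": "percent", "+": "plus", "/": "slash",
    "\\": "backslash", "#": "hash", "~": "tilde", "=": "eq",
}

def transliterate_field_name(name: str) -> str:
    out = []
    for ch in name:
        if (ch.isascii() and ch.isalnum()) or ch in "_-":
            out.append(ch)           # alphanumerics, _ and - pass through
        elif ch in SPECIAL:
            out.append(SPECIAL[ch])  # named replacement
        elif ch.isascii():
            out.append("_")          # any other ASCII char, including space
        else:
            out.append("_")          # simplified; Panther maps non-ASCII to
                                     # the closest ASCII equivalent instead
    return "".join(out)

# e.g. transliterate_field_name("db duration") -> "db_duration"
```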


Field discovery currently has the following limitations:

  • The maximum number of top-level fields that can be discovered is 2,000. Within each object field, a maximum of 1,000 fields can be discovered.

    • Beyond these per-level limits, there is no limit on the total number of fields discovered.

  • If your schema uses the csv parser and you are parsing CSV logs without a header, only fields included in the columns section of your schema will be discovered.

    • This does not apply if your schema uses the csv parser and you are parsing CSV logs with a header.

  • If your schema uses the fastmatch parser, only fields defined inside the match patterns will be discovered.

  • If your schema uses the regex parser, only fields defined inside the match patterns will be discovered.

Uploading log schemas with the Panther Analysis Tool

If you choose to maintain your log schemas outside of Panther, for example in order to keep them under version control and review changes before updating, you can upload the YAML files programmatically with the Panther Analysis Tool.

The uploader command receives a base path as an argument and then proceeds to recursively discover all files with extensions .yml and .yaml.

It is recommended to keep schema files separate from other, unrelated files; otherwise, the uploader may report errors when it attempts to process files that are not valid schemas.

panther_analysis_tool update-custom-schemas --path ./schemas

The uploader will check whether a schema with the same name already exists, and will update it or create a new one if no matching schema name is found.

The schema field must always be defined in the YAML file and be consistent with the existing schema name for an update to succeed. For a list of all available CI/CD fields, see our Log Schema Reference.

The uploaded files are validated with the same criteria as Web UI updates.
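For instance, a hypothetical file under ./schemas/ could look like the sketch below (the schema key is the required name field discussed above; the path, name, and fields are illustrative):

```yaml
# schemas/custom_sampleapi.yml (hypothetical path)
schema: Custom.SampleAPI   # required; must match the existing name on updates
description: Sample API request logs
fields:
  - name: time
    type: timestamp
    timeFormat: rfc3339
    isEventTime: true
  - name: status
    type: int
```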

Troubleshooting Custom Logs

Visit the Panther Knowledge Base to view articles about custom log sources that answer frequently asked questions and help you resolve common errors and issues.
