Custom Logs
Define, write, and manage custom schemas
Overview
Panther allows you to define your own custom log schemas. You can ingest custom logs into Panther via a Data Transport, and your custom schemas will then normalize and classify the data.
This page explains how to determine how many custom schemas you need; how to infer, write, and manage custom schemas; and how to upload schemas with the Panther Analysis Tool (PAT). For information on how to use pantherlog to work with custom schemas, see the pantherlog CLI tool documentation.
Custom schemas are identified by a Custom. prefix in their name and can be used wherever a natively supported log type is used:
Log ingestion: You can onboard custom logs through a Data Transport (e.g., HTTP webhook, S3, SQS, Google Cloud Storage, Azure Blob Storage).
Detections: You can write rules and scheduled rules for custom schemas.
Investigations: You can query the data in Search and in Data Explorer. Panther will create a new table for the custom schema once you onboard a source that uses it.
Determine how many custom schemas you need
There is no definitive rule for determining how many schemas you need to represent data coming from a custom source, as it depends on the intent of your various log events and the degree of field overlap between them.
In general, it's recommended to create the minimum number of schemas required for each log type's shape to be represented by its own schema (with room for some field variance between log types to be represented by the same schema). A rule of thumb is: if two different types of logs (e.g., application audit logs and security alerts) have less than 50% overlap in required fields, they should use different schemas.
The example scenarios below include corresponding schema recommendations:

Scenario: You have one type of log with fields A, B, and C, and a different type of log with fields X, Y, and Z.
Recommendation: Create two different schemas, one for each log type. While it's technically possible to create one schema with all fields (A, B, C, X, Y, Z) marked as optional (i.e., required: false), it's not recommended, as downstream operations like detection writing and searching will be made more difficult.

Scenario: You have one type of log that always has fields A, B, and C, and a different type of log that always has fields A, B, and Z.
Recommendation: Create one schema, with fields A and B marked as required and fields C and Z marked as optional.
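For the second scenario, the fields section of the schema might look like the following sketch (placeholder field names, all typed as string purely for illustration):

fields:
  - name: A
    type: string
    required: true
  - name: B
    type: string
    required: true
  - name: C
    type: string
  - name: Z
    type: string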
After you have determined how many schemas you need, you can define them.
If you have deduced that you need more than one schema and you'd like to use Panther's schema inference tools to generate them, it's recommended to do one of the following:
Use the Inferring a custom schema from sample logs method multiple times with samples from different log types
Send differently structured data to separate folders in an S3 bucket, then use the Inferring custom schemas from historical S3 data inference method
If you use either the Inferring a custom schema from S3 data received in Panther or Inferring a custom schema from HTTP data received in Panther methods, you risk Panther generating a single schema that represents all log types sent to the source.
How to define a custom schema
Panther supports JSON and CSV (with or without headers) data formats for custom log types. Note that Panther does not support inferring schemas from CSV without headers.
There are multiple ways to define a custom schema. You can:
Infer one or more schemas from data.
Create a schema manually.
Automatically infer the schema in Panther
Instead of writing a schema manually, you can let the Panther Console or the pantherlog CLI tool infer a schema (or multiple schemas) from your data.
When Panther infers a schema, note that if your data sample has:
A field of type object with more than 200 fields will be classified as type json.
A field with mixed data types (i.e., it is an array with multiple data types, or the field itself has varying data types) will be classified as type json.
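For example, such a field would appear in the inferred schema along these lines (hypothetical field name):

  - name: details
    type: json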
How to infer a schema
There are multiple ways to infer a schema in Panther:
In the Panther Console:
To infer a schema from sample data you've uploaded, see the Inferring a custom schema from sample logs tab, below.
To infer a schema from S3 data received in Panther, see the Inferring a custom schema from S3 data received in Panther tab, below.
To infer one or more schemas from historical S3 data, see the Inferring custom schemas from historical S3 data tab, below.
To infer a schema from HTTP data received in Panther, see the Inferring a custom schema from HTTP data received in Panther tab, below.
In the CLI workflow:
Use the pantherlog infer command.
Inferring a custom schema from sample logs
You can generate a schema by uploading sample logs into the Panther Console. If you'd like to use the command line instead, follow the instructions on using the pantherlog CLI tool here.
To get started, follow these steps:
Log in to your Panther Console.
On the left sidebar, navigate to Configure > Schemas.
At the top right of the page next to the search bar, click Create New.
Enter a Schema ID, Description, and Reference URL.
The Description is meant for content about the table, while the Reference URL can be used to link to internal resources.
Optionally enable Field Discovery by clicking its toggle ON. Learn more in Enabling field discovery.
In the Schema section, in the Infer a schema from sample events tile, click Start.
In the Infer schema from sample logs modal, click one of the radio buttons:
Upload Sample file: Drag a file from your system over the pop-up modal, or click Select file and choose the log file.
Note that Panther does not support CSV without headers for inferring schemas.
After uploading a file, Panther will display the raw logs in the UI. You can expand the log lines to view the entire raw log. Note that if you add another sample set, it will override the previously-uploaded sample.
Select the appropriate Stream Type (view examples for each type here).
Lines: Events are separated by a new line character.
JSON: Events are in JSON format.
JSON Array: Events are inside an array of JSON objects.
CloudWatch Logs: Events came from CloudWatch Logs.
Auto: Panther will automatically detect the appropriate stream type.
Click Infer Schema.
Panther will begin to infer a schema from the raw sample logs.
Panther will attempt to infer multiple timestamp formats.
To ensure the schema works properly against the sample logs you uploaded and against any changes you made to the schema, click Run Test.
This test will validate that the syntax of your schema is correct and that the log samples you have uploaded into Panther are successfully matching against the schema.
All successfully matched logs will appear under Matched; each log will display the column, field, and JSON view.
All unsuccessfully matched logs will appear under Unmatched; each log will display the error message and the raw log.
Click Save to publish the schema.
Panther will infer from all logs uploaded, but will only display up to 100 logs to ensure fast response time when generating a schema.
Create the schema yourself
How to create a custom schema manually
To create a custom schema manually:
In the Panther Console, navigate to Configure > Schemas.
Click Create New in the upper right corner.
Enter a Schema ID, Description, and Reference URL.
The Description is meant for content about the table, while the Reference URL can be used to link to internal resources.
Optionally enable Automatic Field Discovery by clicking its toggle ON. Learn more in Enabling field discovery.
In the Schema section, in the Create your schema from scratch tile, click Start.
In the Parser section, if your schema requires a parser other than the default (JSON) parser, select it. Learn more about the other parser options (such as csv, fastmatch, and regex) on their respective pages.
In the Fields & Indicators section, write or paste your YAML log schema fields.
You can use Panther-generated schema field suggestions.
(Optional) In the Universal Data Model section, define Core Field mappings for your schema.
Learn more in Mapping Core Fields in Custom Log Schemas.
At the bottom of the window, click Run Test to verify your schema contains no errors.
Note that syntax validation only checks the syntax of the Log Schema. It can still fail to save due to name conflicts.
Click Save.
You can now navigate to Configure > Log Sources and add a new source or modify an existing one to use the new Custom.SampleAPI Log Type. Once Panther receives events from this source, it will process the logs and store them in the custom_sampleapi table.
You can also now write detections to match against these logs and query them using Search or Data Explorer.
Writing schemas
See the tabs below for instructions on writing schemas for JSON logs and for text logs.
Note that you can use the pantherlog CLI tool to generate your Log Schema.
Writing a schema for JSON logs
To parse log files where each line is JSON, you must define a log schema that describes the structure of each log entry.
You can edit the YAML specifications directly in the Panther Console, or prepare them offline in your editor/IDE of choice. For more information on the structure and fields in a Log Schema, see the Log Schema Reference.
It's also possible to use the starlark parser with JSON logs to perform transformations beyond those that are natively supported by Panther.
The example below shows a minified JSON log entry and a corresponding Log Schema.
Note: Use minified (single-line) JSON when using the pantherlog tool or generating a schema within the Panther Console.
Minified JSON log example:
{"method":"GET","path":"/-/metrics","format":"html","controller":"MetricsController","action":"index","status":200,"params":[],"remote_ip":"1.1.1.1","user_id":null,"username":null,"ua":null,"queue_duration_s":null,"correlation_id":"c01ce2c1-d9e3-4e69-bfa3-b27e50af0268","cpu_s":0.05,"db_duration_s":0,"view_duration_s":0.00039,"duration_s":0.0459,"tag":"test","time":"2019-11-14T13:12:46.156Z"}
Schema field suggestions
When creating or editing a custom schema, you can use field suggestions generated by Panther. To use this functionality:
In the Panther Console, click into the YAML schema editor.
To edit an existing schema, click Configure > Schemas > [name of schema you would like to edit] > Edit.
To create a new schema, click Configure > Schemas > Create New.
Press Command+I on macOS (or Control+I on PC).
The schema editor will display available properties and operations based on the position of the text cursor.
Managing custom schemas
Editing a custom schema
Panther allows custom schemas to be edited. Specifically, you can perform the following actions:
Add new fields.
Rename or delete existing fields.
Edit, add, or remove all properties of existing fields.
Modify the parser configuration to fix bugs or add new patterns.
Note: After editing a field's type, any newly ingested data will match the new type while any previously ingested data will retain its type.
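For example (hypothetical field), an edit like the following changes only how newly ingested values are stored:

  # before the edit
  - name: status
    type: string
  # after the edit, new events store status as an int
  - name: status
    type: int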
To edit a custom schema:
Navigate to your custom schema's details page in the Panther Console.
Click Edit in the upper-right corner of the details page.
Modify the schema.
You can use Panther-generated schema field suggestions.
To more easily see your changes (or copy or revert deleted lines), click Single Editor, then Diff View.
Click Run Test to check the YAML for structural compliance. Note that some rules are only checked after you click Update; the update will be rejected if they are not followed.
In the upper-right corner, click Update.
Update related detections and saved queries
Editing schema fields might require updates to related detections and saved queries. Click Related Detections in the alert banner displayed above the schema editor to view, update, and test the list of affected detections and saved queries.
Query implications
Queries will continue to work across changes to a Type, provided the query does not use a function or operator that requires a field type that is not castable across Types.
Good example: The Type is edited from string to int where all existing values are numeric (e.g., "1"). A query using the function sum aggregates old and new values together.
Bad example: The Type is edited from string to int where some of the existing values are non-numeric (e.g., "apples"). A query using the function sum excludes values that are non-numeric.
Query castability table
This table shows which Types can be cast as each Type when running a query. Schema editing allows any Type to be changed to another Type. Rows list the original Type of stored values; columns show the Type those values are cast to in a query.

| Original Type | boolean | string | int | bigint | float | timestamp |
| --- | --- | --- | --- | --- | --- | --- |
| boolean | same | yes | yes | yes | no | no |
| string | yes | same | numbers only | numbers only | numbers only | numbers only |
| int | yes | yes | same | yes | yes | numbers only |
| bigint | yes | yes | yes | same | yes | numbers only |
| float | yes | yes | yes | yes | same | numbers only |
| timestamp | no | yes | no | no | no | same |
Archiving and unarchiving a custom schema
You can archive and unarchive custom schemas in Panther. You might choose to archive a schema if it's no longer used to ingest data, and you do not want it to appear as an option in various dropdown selectors throughout Panther. In order to archive a schema, it must not be in use by any log sources. Schemas that have been archived still exist indefinitely; it is not possible to permanently delete a schema.
Archiving a schema does not affect any data ingested using that schema already stored in the data lake—it is still queryable using Data Explorer and Search. By default, archived schemas are not shown in the schema list view (visible on Configure > Schemas), but can be shown by modifying Status, within Filters, in the upper right corner. In Data Explorer, tables of archived schemas are not shown under Tables.
Attempting to create a new schema with the same name as an archived schema will result in a name conflict, and prompt you to instead unarchive and edit the existing schema.
To archive or unarchive a custom schema:
In the Panther Console, navigate to Configure > Schemas.
Locate the schema you'd like to archive or unarchive.
On the right-hand side of the schema's row, click the Archive or Unarchive icon.
On the confirmation modal, click Continue.
Testing a custom schema
The "Test Schema against sample logs" feature found on the Schema Edit page in the Panther Console supports Lines, CSV (with or without headers), JSON, JSON Array, CloudWatch Logs, and Auto. See Stream Types for examples.
Additionally, the above log formats can be compressed using the following formats:
gzip
zstd (without dictionary)
Multi-line logs are supported for JSON and JSONArray formats.
To validate that a custom schema will work against your logs, you can test it against sample logs:
In the left-hand navigation bar in your Panther Console, click Configure > Schemas.
Click on a custom schema's name.
In the upper-right corner of the schema details page, click Test Schema.
Enabling field discovery
Log source schemas in Panther define the log event fields that will be stored in Panther. When field discovery is enabled, data from fields in incoming log events that are not defined in the corresponding schema will not be dropped—instead, the fields will be identified, and the data will be stored. This means you can subsequently query data from these fields, and write detections referencing them.
Field discovery is currently only available for custom schemas, not Panther-managed ones. See additional limitations of field discovery below.
Handling of special characters in field names
If a field name contains a special character—a character that is not alphanumeric, an underscore (_), or a dash (-)—it will be transliterated using the algorithm below:
@ to at_sign
, to comma
` to backtick
' to apostrophe
$ to dollar_sign
* to asterisk
& to ambersand
! to exclamation
% to percent
+ to plus
/ to slash
\ to backslash
# to hash
~ to tilde
= to eq
All other ASCII characters (including space) will be replaced with an underscore (_). Non-ASCII characters are transliterated to their closest ASCII equivalent.
This transliteration affects only field names; values are not modified.
Limitations
Field discovery currently has the following limitations:
The maximum number of top-level fields that can be discovered is 2,000. Within each object field, a maximum of 1,000 fields can be discovered. There is no limitation on the number of overall fields discovered.
If your schema uses the csv parser and you are parsing CSV logs without a header, only fields included in the columns section of your schema will be discovered. This does not apply if your schema uses the csv parser and you are parsing CSV logs with a header.
If your schema uses the fastmatch parser, only fields defined inside the match patterns will be discovered.
If your schema uses the regex parser, only fields defined inside the match patterns will be discovered.
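To illustrate the headerless CSV case, a csv parser configured along these lines (hypothetical delimiter and column names; see the Log Schema Reference for the exact parser options) would only ever discover the fields listed under columns:

parser:
  csv:
    delimiter: ","
    hasHeader: false
    columns:
      - timestamp
      - user
      - action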
Uploading log schemas with the Panther Analysis Tool
If you choose to maintain your log schemas outside of Panther, for example in order to keep them under version control and review changes before updating, you can upload the YAML files programmatically with the Panther Analysis Tool.
The uploader command receives a base path as an argument and then recursively discovers all files with the extensions .yml and .yaml.
It is recommended to keep schema files separate from other unrelated files; otherwise, you may see errors from attempts to upload files that are not valid schemas.
The uploader checks whether a matching schema already exists and updates it, or creates a new one if no matching schema name is found.
The schema field must always be defined in the YAML file and be consistent with the existing schema name for an update to succeed. For a list of all available CI/CD fields, see our Log Schema Reference.
The uploaded files are validated with the same criteria as Web UI updates.
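For example, a minimal schema file suitable for the uploader might look like the following sketch (the schema name and field are placeholders; note the required schema field):

schema: Custom.SampleAPI
description: Sample API request logs
fields:
  - name: time
    type: timestamp
    timeFormat: rfc3339
    isEventTime: true
    required: true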
Troubleshooting Custom Logs
Visit the Panther Knowledge Base to view articles about custom log sources that answer frequently asked questions and help you resolve common errors and issues.