Custom Logs

Define, write, and manage custom schemas

Overview

Panther allows you to define your own custom schemas. You can ingest custom logs into Panther via a Data Transport, and your custom schema will then normalize and classify the data.
This page explains how to define, write, and manage custom schemas, as well as how to upload schemas with Panther Analysis Tool (PAT). For information on how to use pantherlog to work with custom schemas, please see pantherlog CLI tool.
Custom schemas are identified by a Custom. prefix in their name and can be used wherever a natively supported log type is used:
  • Log ingestion
    • You can onboard custom logs through a Data Transport (e.g., HTTP webhook, S3, SQS, Google Cloud Storage, Azure Blob Storage)
  • Detections
  • Investigations
    • You can query the data in Search and in Data Explorer. Panther will create a new table for the custom schema once you onboard a source that uses it.

How to define a custom schema

Panther supports JSON and CSV (with or without headers) data formats for custom log types. Note, however, that Panther cannot infer schemas from CSV without headers.
There are multiple ways to define a custom schema: you can automatically infer the schema in Panther, or create the schema yourself. See the sections below for each approach.

Automatically infer the schema in Panther

Instead of writing a schema manually, you can let the Panther Console or the pantherlog CLI tool infer a schema (or multiple schemas) from your data.
When Panther infers a schema, note that if your data sample has:
  • A field of type object with more than 200 fields, that field will be classified as type json.
  • A field with mixed data types (i.e., it is an array with multiple data types, or the field itself has varying data types), that field will be classified as type json.
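For illustration, here is a minimal sketch of how such fields might be represented in an inferred schema (the field names are hypothetical):

fields:
  - name: payload   # an object with more than 200 nested fields
    type: json
  - name: details   # holds different data types across events
    type: json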

How to infer a schema

There are multiple ways to infer a schema in Panther:
  • Sample logs
  • S3 data received in Panther
  • Historical S3 data
  • HTTP data received in Panther

Inferring a custom schema from sample logs

You can generate a schema by uploading sample logs into the Panther Console. If you'd like to use the command line instead, follow the instructions on using the pantherlog CLI tool here.
To get started, follow these steps:
  1. Log in to your Panther Console.
  2. On the left sidebar, navigate to Configure > Schemas.
  3. At the top right of the page next to the search bar, click Create New.
  4. Enter a Schema ID, Description, and Reference URL.
    • The Description is meant for content about the table, while the Reference URL can be used to link to internal resources.
  5. Optionally enable Field Discovery by clicking its toggle ON. Learn more in Enabling field discovery.
  6. Scroll to the bottom of the page, where you'll find the option to upload sample log files.
  7. Upload a sample set of logs: Drag a file from your computer over the "Infer schema from sample logs" box, or click Select file and choose the log file. Note that Panther does not support CSV without headers for inferring schemas.
    • After uploading a file, Panther will display the raw logs in the UI. You can expand the log lines to view the entire raw log. Note that if you add another sample set, it will override the previously uploaded sample.
  8. Select the appropriate Stream Type (view examples for each type here, and see the brief illustration after these steps):
    • Lines: Events are separated by a new line character.
    • JSON: Events are in JSON format.
    • JSON Array: Events are inside an array of JSON objects.
    • CloudWatch Logs: Events came from CloudWatch Logs.
    • Auto: Panther will automatically detect the appropriate stream type.
  9. Click Infer Schema.
    • Panther will begin to infer a schema from the raw sample logs.
    • Panther will attempt to infer multiple timestamp formats.
    • Once the schema is generated, it will appear in the schema editor box above the raw logs.
  10. To ensure the schema works properly against the sample logs you uploaded, and against any changes you make to the schema, click Validate & Test Schema.
    • This test validates that the syntax of your schema is correct and that the log samples you have uploaded successfully match against the schema. The results appear below the schema editor box.
    • All successfully matched logs will appear under Matched; each log will display the column, field, and JSON view.
    • All unsuccessfully matched logs will appear under Unmatched; each log will display the error message and the raw log.
  11. Click Save to publish the schema.
Panther infers from all uploaded logs, but displays only up to 100 of them to ensure a fast response time when generating a schema.
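To illustrate the difference between two common stream types, the same pair of hypothetical events could arrive as Lines, one event per line:

{"action":"login","user":"alice"}
{"action":"logout","user":"alice"}

or as a JSON Array:

[{"action":"login","user":"alice"},{"action":"logout","user":"alice"}]

See the Stream Types documentation for authoritative examples of each type.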

Inferring a custom schema from S3 data received in Panther

You can generate and publish a schema for a custom log source from live data streaming from an S3 bucket into Panther. You will first view your S3 data in Panther, then infer a schema, then test the schema.

View raw S3 data

After onboarding your S3 bucket into Panther, you can view raw data coming into Panther and infer a schema from it:
  1. Follow the instructions to onboard an S3 bucket onto Panther without having a schema in place.
  2. While viewing your log source's Overview tab, scroll down to the Attach a schema to start classifying data section.
  3. Choose from the following options:
    • I want to add an existing schema: Choose this option if you already created a schema and you know the S3 prefix you want Panther to read logs from. Click Start in the tile.
      • You will see an S3 Prefixes & Schemas popup modal, where you can enter an S3 prefix, add exclusion filters, and add schemas.
    • I want to generate a schema from raw events: Select this option to generate a schema from live data in this bucket and define which prefixes you want Panther to read logs from. Click Start in the tile.
      • Note that you may need to wait up to 15 minutes for data to start streaming into Panther.
      • On the page you are directed to, you can view the raw data Panther has received at the bottom of the screen.
        • This data is displayed from data-archiver, a Panther-managed S3 bucket that retains raw logs for up to 15 days for every S3 log source.
        • Only raw log events that were placed in the S3 bucket after you configured the source in Panther will be visible, even if you've set the timespan to look further back.
        • If your raw events are JSON-formatted, you can view them as JSON by clicking View JSON in the left-hand column.

Infer a schema from raw data

If you chose I want to generate a schema from raw events in the previous section, you can now infer a schema.
  1. Once you see data populating in Raw Events, you can filter the events you'd like to infer a schema from by using the string Search, S3 Prefix, Excluded Prefix, and/or Time Period filters at the top of the Raw Events section.
  2. Click Infer Schema to generate a schema.
  3. On the Infer New Schema modal that pops up, enter the following:
    • New Schema Name: The name of the schema that will map to the table in the data lake once the schema is published.
      • The name will always start with Custom. and must have a capital letter after.
    • S3 Prefix: Use an existing prefix that was set up prior to inferring the schema, or a new prefix.
      • The prefix you choose will filter data from the corresponding prefix in the S3 bucket to the schema you've inferred.
      • If you don't need to specify a prefix, you can leave this field empty to use the catch-all prefix, *.
  4. Click Infer Schema.
    • At the top of the page, you will see '<schema name>' was successfully inferred. Click Done.
    • The schema will then be placed in Draft mode until you're ready to publish it to production after testing.
  5. Review the schema and its fields by clicking its name.
    • Since the schema is in Draft, you can change, remove, or add fields as needed.

Test the schema with raw data

Once your schemas and prefixes are defined, you can proceed to testing the schema configuration against raw data.
  1. In the Test Schemas section at the top of the screen, click Run Test.
  2. On the Test Schemas modal that pops up, select the Time Period you would like to test your schema against, then click Start Test.
    • Depending on the time range and amount of data, the test may take a few minutes to complete.
    • Once the test completes, the results appear with the number of matched and unmatched events.
      • Matched Events represent the number of events that would successfully classify against the schema configuration.
      • Unmatched Events represent the number of events that would not classify against the schema.
  3. If there are Unmatched Events, inspect the errors and the JSON to determine what caused the failures.
    • Click Back to Schemas, make changes as needed, and test the schema again.
  4. Click Back to Schemas.
  5. In the upper right corner, click Save.
    • The inferred schema is now attached to your log source.

Inferring custom schemas from historical S3 data

You can infer and save one or multiple schemas for a custom S3 log source from historical data in your S3 bucket (i.e., data that was added to the bucket before it was onboarded as a log source in Panther).

Prerequisite: Onboard your S3 bucket to Panther

Step 1: View the S3 bucket structure in Panther

After creating your S3 bucket source in Panther, you can view your S3 bucket's structure and data in the Panther Console:
  1. In the Panther Console, navigate to Configure > Log Sources. Click into your S3 log source.
  2. In the log source's Overview tab, scroll down to the Attach a Schema to start classifying the data section.
  3. On the right side of the I want to generate a schema from bucket data tile, click Start.
    • You will be redirected to a folder inspection of your S3 bucket. Here, you can view and navigate through all folders and objects in the S3 bucket.
    • Alternatively, you can access the folder inspection of your S3 bucket via the success page after onboarding your S3 source in Panther. From that page, click Attach or Infer Schemas.

Step 2: Navigate through your data

  • While viewing the folder inspection, click an object.
    • A preview window will appear, displaying a preview of its events.
If the events fail to render correctly (either generating an error or displaying events improperly), the wrong stream type may have been chosen for the S3 bucket source. If this is the case, use the stream type selector shown at the top of the folder inspection view to choose a different one.

Step 3: Indicate whether each folder should use an existing schema or have a new one inferred

After reviewing what's included in your bucket, you can determine whether one or multiple schemas are necessary to represent all of the bucket's data. Next, you can select folders that include data with distinct structures and either infer a new schema or assign an existing one.
  1. Determine whether one or more schemas will need to be inferred from the data in your S3 bucket.
    • If all data in the S3 bucket has the same structure (and therefore can be represented by one schema), you can leave the default Infer New Schema option selected at the bucket level. This generates a single schema for all data in the bucket.
    • If the S3 bucket includes data that needs to be classified into multiple schemas, follow the steps below for each folder in the bucket:
      1. Select a folder and click Include.
        • Alternatively, if there is a folder or subfolder that you do not want Panther to process, select it and click Exclude.
      2. If you have an existing schema that matches the data, click the Schema dropdown on the right side of the row, then select the schema.
        • By default, each newly included folder has the Infer New Schema option selected.
  2. Click Infer n Schemas.

Step 4: Wait for schemas to be inferred

The schema inference process may take up to 15 minutes. You can leave this page while the process completes. You can also stop this process early, and keep the schema(s) inferred during the time that the process ran.

Step 5: Review the results

After the inference process is complete, you can view the resulting schemas and the number of events that were used during each schema's inference. You can also validate how each schema parses raw events.
  1. Click the play icon on the right side of each row.
  2. Click the Events tab to see the raw and normalized events.
  3. Click the Schema tab to see the generated schema.

Step 6: Name the schema(s) and save source

Before saving the source, name each of the newly inferred schemas with a unique name by clicking Add name.
After all new schemas have been named, you will be able to click Save Source in the upper right corner.

Inferring a custom schema from HTTP data received in Panther

You can generate and publish a schema for a custom log source from live data streaming from an HTTP (webhook) source into Panther. You will first view your HTTP data in Panther, then infer a schema, then test the schema.

View raw HTTP data

After creating your HTTP source in Panther, you can view raw data coming into Panther and infer a schema from it:
  1. When creating your HTTP source, do not select a schema during setup.
  2. While viewing your log source's Overview tab, scroll down to the Attach a schema to start classifying data section.
  3. Choose from the following options:
    • I want to add an existing schema: Choose this option if you already created a schema. Click Start in the tile.
      • You will be navigated to the HTTP source edit page, where you can make a selection in the Schemas - Optional field.
    • I want to generate a schema: Select this option to generate a schema from live data. Click Start in the tile.
      • Note that you may need to wait a few minutes after POSTing the events to the HTTP endpoint for them to be visible in Panther.
      • On the page you are directed to, under Raw Events, you can view the raw data Panther has received within the last week.
      • This data is displayed from data-archiver, a Panther-managed S3 bucket that retains raw HTTP source logs for 15 days.

Infer a schema from raw data

If you chose I want to generate a schema in the previous section, you can now infer a schema.
  1. Once you see data populating within Raw Events, click Infer Schema.
  2. On the Infer New Schema modal that pops up, enter the following:
    • New Schema Name: Enter a descriptive name. It will always start with Custom. and must have a capital letter after.
  3. Click Infer Schema.
    • At the top of the page, you will see '<schema name>' was successfully inferred.
  4. Click Done.
    • The schema will be placed in Draft mode until you're ready to publish it, after testing.
  5. Click the draft schema's name to review its inferred fields.
    • Since the schema is in Draft, you can add, remove, and otherwise change fields as needed.

Test the schema with raw data

Once your schema is defined, you can proceed to test the schema configuration against raw data.
  1. In the Test Schemas section at the top of the screen, click Run Test.
  2. In the Test Schemas pop-up modal, select the Time Period you would like to test your schema against, then click Start Test.
    • Depending on the time range and amount of data, the test may take a few minutes to complete.
    • Once the test completes, the results appear with the number of matched and unmatched events.
      • Matched Events represent the number of events that would successfully classify against the schema configuration.
      • Unmatched Events represent the number of events that would not classify against the schema.
  3. If there are Unmatched Events, inspect the errors and the JSON to determine what caused the failures.
    • Click Back to Schemas, make changes as needed, and test the schema again.
  4. Click Back to Schemas.
  5. In the upper right corner, click Save.
    • The inferred schema is now attached to your log source.
    • Log events that were sent to the HTTP source before it had a schema attached, including those used to infer the schema, are then ingested into Panther.

Create the schema yourself

How to create a custom schema manually

To create a custom schema manually:
  1. In the Panther Console, navigate to Configure > Schemas.
  2. Click Create New in the upper right corner.
  3. Enter a Schema ID, Description, and Reference URL.
    • The Description is meant for content about the table, while the Reference URL can be used to link to internal resources.
  4. Optionally enable Automatic Field Discovery by clicking its toggle ON. Learn more in Enabling field discovery.
  5. In the YAML code block, write or paste your YAML log schema definition.
  6. Click Validate Syntax at the bottom to verify your schema contains no errors.
    • Note that syntax validation only checks the syntax of the Log Schema. The schema can still fail to save due to name conflicts.
  7. Click Save.
You can now navigate to Configure > Log Sources and add a new source or modify an existing one to use the new Custom.SampleAPI log type. Once Panther receives events from this source, it will process the logs and store them in the custom_sampleapi table.
You can also now write detections to match against these logs and query them using Search or Data Explorer.
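As a starting point, a minimal manually written schema might look like the following sketch. It assumes a log containing only a timestamp and an action field; adapt the fields to your own log structure:

version: 0
fields:
  - name: time
    description: Event timestamp
    required: true
    type: timestamp
    timeFormats:
      - rfc3339
    isEventTime: true
  - name: action
    description: The action performed
    type: string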

Writing schemas

See the tabs below for instructions on writing schemas for JSON logs and for text logs.
Note that you can use the pantherlog CLI tool to generate your Log Schema.
JSON Logs
Text logs

Writing a schema for JSON logs

To parse log files where each line is JSON, you must define a log schema that describes the structure of each log entry.
You can edit the YAML specifications directly in the Panther Console, or prepare them offline in your editor/IDE of choice. For more information on the structure and fields in a Log Schema, see the Log Schema Reference.
In the example schemas below, the first tab displays the JSON log structure and the second tab shows the Log Schema.
Note: Please leverage the Minified JSON Log Example when using the pantherlog tool or generating a schema within the Panther Console.
JSON Log Example
Log Schema Example
{
  "method": "GET",
  "path": "/-/metrics",
  "format": "html",
  "controller": "MetricsController",
  "action": "index",
  "status": 200,
  "params": [],
  "remote_ip": "1.1.1.1",
  "user_id": null,
  "username": null,
  "ua": null,
  "queue_duration_s": null,
  "correlation_id": "c01ce2c1-d9e3-4e69-bfa3-b27e50af0268",
  "cpu_s": 0.05,
  "db_duration_s": 0,
  "view_duration_s": 0.00039,
  "duration_s": 0.0459,
  "tag": "test",
  "time": "2019-11-14T13:12:46.156Z"
}
Minified JSON log example:
{"method":"GET","path":"/-/metrics","format":"html","controller":"MetricsController","action":"index","status":200,"params":[],"remote_ip":"1.1.1.1","user_id":null,"username":null,"ua":null,"queue_duration_s":null,"correlation_id":"c01ce2c1-d9e3-4e69-bfa3-b27e50af0268","cpu_s":0.05,"db_duration_s":0,"view_duration_s":0.00039,"duration_s":0.0459,"tag":"test","time":"2019-11-14T13:12:46.156Z"}
version: 0
fields:
  - name: time
    description: Event timestamp
    required: true
    type: timestamp
    timeFormats:
      - rfc3339
    isEventTime: true
  - name: method
    description: The HTTP method used for the request
    type: string
  - name: path
    description: The path used for the request
    type: string
  - name: remote_ip
    description: The remote IP address the request was made from
    type: string
    indicators: [ ip ] # the value will be appended to `p_any_ip_addresses` if it's a valid ip address
  - name: duration_s
    description: The number of seconds the request took to complete
    type: float
  - name: format
    description: Response format
    type: string
  - name: user_id
    description: The id of the user that made the request
    type: string
  - name: params
    type: array
    element:
      type: object
      fields:
        - name: key
          description: The name of a Query parameter
          type: string
        - name: value
          description: The value of a Query parameter
          type: string
  - name: tag
    description: Tag for the request
    type: string
  - name: ua
    description: UserAgent header
    type: string

Writing a schema for text logs

Panther handles logs that are not structured as JSON by using a 'parser' that translates each log line into key/value pairs and feeds it as JSON to the rest of the pipeline. You can define a text parser using the parser field of the Log Schema. Panther provides the following parsers for non-JSON formatted logs:

Name      | Description
----------|------------
fastmatch | Match each line of text against one or more simple patterns
regex     | Use regular expression patterns to handle more complex matching, such as conditional fields and case-insensitive matching
csv       | Treat log files as CSV, mapping column names to field names
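For example, a schema for a simple space-delimited text log might use the fastmatch parser. The following is a sketch only; the match pattern and field names are hypothetical and should be adapted to your log format:

version: 0
parser:
  fastmatch:
    match:
      - '%{time} %{user} %{action}'
fields:
  - name: time
    required: true
    type: timestamp
    timeFormats:
      - rfc3339
    isEventTime: true
  - name: user
    type: string
  - name: action
    type: string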

Schema field suggestions

When creating or editing a custom schema, you can use field suggestions generated by Panther. To use this functionality:
  1. In the Panther Console, click into the YAML schema editor.
    • To edit an existing schema, click Configure > Schemas > [name of schema you would like to edit] > Edit.
    • To create a new schema, click Configure > Schemas > Create New.
  2. Press Command+I on macOS (or Control+I on PC).
    • The schema editor will display available properties and operations based on the position of the text cursor.

Managing custom schemas

Editing a custom schema

Panther allows custom schemas to be edited after they have been created.
Note: After editing a field's type, any newly ingested data will match the new type, while any previously ingested data will retain its original type.
To edit a custom schema:
  1. Navigate to your custom schema's details page in the Panther Console.
  2. Click Edit in the upper right corner of the details page.
  3. Modify the YAML.
    • You can use Panther-generated schema field suggestions.
    • Click Diff View in the upper right corner of the text editor to see additions, edits, and deletions in the code editor. Diff View also lets you copy or revert deleted lines.
  4. Click Update to submit your change.
Click Validate Syntax to check the YAML for structural compliance. Note that the full set of rules is only checked when you click Update; the update will be rejected if the rules are not followed.
Editing schema fields might require updates to related detections and saved queries. Click on the related entities in the alert banner displayed above the schema editor to view, update, and test the list of affected detections and saved queries.

Query implications

Queries will work across changes to a Type provided the query does not use a function or operator which requires a field type that is not castable across Types.
  • Good example: The Type is edited from string to int, where all existing values are numeric (e.g., "1"). A query using the function sum aggregates old and new values together.
  • Bad example: The Type is edited from string to int, where some of the existing values are non-numeric (e.g., "apples"). A query using the function sum excludes the non-numeric values.
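In schema terms, such an edit is simply a change to a field's type value, for example (using a hypothetical status field):

# Before the edit
- name: status
  type: string

# After the edit
- name: status
  type: int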

Query castability table

This table shows which Types can be cast as each Type when running a query. Schema editing allows any Type to be changed to another Type.
From → To  | boolean | string | int          | bigint       | float        | timestamp
-----------|---------|--------|--------------|--------------|--------------|-------------
boolean    | same    | yes    | yes          | yes          | no           | no
string     | yes     | same   | numbers only | numbers only | numbers only | numbers only
int        | yes     | yes    | same         | yes          | yes          | numbers only
bigint     | yes     | yes    | yes          | same         | yes          | numbers only
float      | yes     | yes    | yes          | yes          | same         | numbers only
timestamp  | no      | yes    | no           | no           | no           | same

Archiving and unarchiving a custom schema

You can archive and unarchive custom schemas in Panther. You might choose to archive a schema if it's no longer used to ingest data, and you do not want it to appear as an option in various dropdown selectors throughout Panther. In order to archive a schema, it must not be in use by any log sources. Schemas that have been archived still exist indefinitely; it is not possible to permanently delete a schema.
Archiving a schema does not affect any data already ingested using that schema and stored in the data lake; it remains queryable using Data Explorer and Search. By default, archived schemas are not shown in the schema list view (visible on Configure > Schemas), but they can be shown by modifying the Status filter, within Filters, in the upper right corner. In Data Explorer, tables of archived schemas are not shown under Tables.
Attempting to create a new schema with the same name as an archived schema will result in a name conflict, and prompt you to instead unarchive and edit the existing schema.
To archive or unarchive a custom schema:
  1. In the Panther Console, navigate to Configure > Schemas.
    • Locate the schema you'd like to archive or unarchive.
  2. Click the three dots icon in the upper right corner of the tile, and select Archive or Unarchive.
    • If you are archiving a schema and it is currently associated with one or more log sources, the confirmation modal will prompt you to first detach the schema. Once you have done so, click Refresh.
  3. On the confirmation modal, click Continue.

Testing a custom schema

The "Test Schema against sample logs" feature found on the Schema Edit page in the Panther Console supports Lines, CSV (with or without headers), JSON, JSON Array, CloudWatch Logs, and Auto. See Stream Types for examples.
Additionally, the above log formats can be compressed using the following formats:
  • gzip
  • zstd (without dictionary)
Multi-line logs are supported for the JSON and JSON Array stream types.
Need to validate that a custom schema will work against your logs? You can test sample logs by following this process:
  1. In the Panther Console, go to Configure > Schemas.
  2. Click on a custom schema.
  3. On the schema details page, scroll to the bottom of the page, where you'll be able to upload logs.

Enabling field discovery

Log source schemas in Panther define the log event fields that will be stored in Panther. When field discovery is enabled, data from fields in incoming log events that are not defined in the corresponding schema will not be dropped—instead, the fields will be identified, and the data will be stored. This means you can subsequently query data from these fields, and write detections referencing them.
Field discovery is currently only available for custom schemas, not Panther-managed ones. See additional limitations of field discovery below.

Handling of special characters in field names

If a field name contains a special character—a character that is not alphanumeric, an underscore (_), or a dash (-)—it will be transliterated using the algorithm below:
  • @ to at_sign
  • , to comma
  • ` to backtick
  • ' to apostrophe
  • $ to dollar_sign
  • * to asterisk
  • & to ampersand
  • ! to exclamation
  • % to percent
  • + to plus
  • / to slash
  • \ to backslash
  • # to hash
  • ~ to tilde
  • = to eq
All other ASCII characters (including space) will be replaced with an underscore (_). Non-ASCII characters are transliterated to their closest ASCII equivalent.
This transliteration affects only field names; values are not modified.

Limitations

Field discovery currently has the following limitations:
  • The maximum number of top-level fields that can be discovered is 2,000. Within each object field, a maximum of 1,000 fields can be discovered.
    • There is no limitation on the number of overall fields discovered.
  • If your schema uses the csv parser and you are parsing CSV logs without a header, only fields included in the columns section of your schema will be discovered.
    • This does not apply if your schema uses the csv parser and you are parsing CSV logs with a header.
  • If your schema uses the fastmatch parser, only fields defined inside the match patterns will be discovered.
  • If your schema uses the regex parser, only fields defined inside the match patterns will be discovered.
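For instance, in the following sketch of a headerless CSV schema (the parser options, column names, and fields are illustrative), only the fields named in the columns section are candidates for discovery:

version: 0
parser:
  csv:
    delimiter: ","
    hasHeader: false
    columns:
      - time
      - user
      - action
fields:
  - name: time
    required: true
    type: timestamp
    timeFormats:
      - rfc3339
    isEventTime: true
  - name: user
    type: string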

Uploading log schemas with the Panther Analysis Tool

If you choose to maintain your log schemas outside of Panther, for example in order to keep them under version control and review changes before updating, you can upload the YAML files programmatically with the Panther Analysis Tool.
The uploader command receives a base path as an argument and then proceeds to recursively discover all files with extensions .yml and .yaml.
It is recommended to keep schema files separate from other, unrelated files; otherwise, you may see errors when the uploader attempts to process non-schema YAML files.
panther_analysis_tool update-custom-schemas --path ./schemas
The uploader checks whether a matching schema already exists and updates it; if no matching schema name is found, it creates a new one.
The schema field must always be defined in the YAML file and be consistent with the existing schema name for an update to succeed. For a list of all available CI/CD fields, see our Log Schema Reference.
The uploaded files are validated with the same criteria as Web UI updates.
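For example, a schema file in the target path might look like the following sketch (the schema name, URL, and fields are illustrative):

# ./schemas/sample_api.yml
schema: Custom.SampleAPI
description: Logs from an internal sample API
referenceURL: https://example.com/runbooks/sample-api
version: 0
fields:
  - name: time
    description: Event timestamp
    required: true
    type: timestamp
    timeFormats:
      - rfc3339
    isEventTime: true
  - name: action
    description: The action performed
    type: string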

Troubleshooting Custom Logs

Visit the Panther Knowledge Base to view articles about custom log sources that answer frequently asked questions and help you resolve common errors and issues.