Log Schema Reference

PreviousCustom LogsNextTransformations

Last updated 1 month ago

Was this helpful?

In this guide, you will find the common fields used to build YAML-based schemas when onboarding Custom Log Types and Lookup Table schemas.

Required fields throughout this page are in bold.

LogSchema fields

Each log schema contains the following fields:

  • fields ([]FieldSchema)

    • The fields in each Log Event.

  • parser (ParserSpec)

    • A parser that can convert non-JSON logs to JSON and/or perform custom transformations.

CI/CD schema fields

Additionally, schemas defined using a CI/CD workflow can contain the following fields:

  • schema (string)

    • The name of the schema

  • description (string)

    • A short description that will appear in the UI

  • referenceURL (string)

    • A link to an external document which specifies the log structure. Often, this is a link to a 3rd party's documentation.

  • fieldDiscoveryEnabled (boolean)

    • Indicates whether field discovery will be enabled for this schema.

See the Custom Logs page for information on how to manage schemas through a CI/CD pipeline using Panther Analysis Tool (PAT).

Example

The example below contains the CI/CD fields mentioned above.

schema: Custom.MySchema
description: (Optional) A handy description so I know what the schema is for.
referenceURL: (Optional) A link to some documentation on the logs this schema is for.
fieldDiscoveryEnabled: true
parser:
  csv:
    delimiter: ','
    hasHeader: true
fields:
- name: action
  type: string
  required: true
- name: time
  type: timestamp
  timeFormats:
    - unix

ParserSpec

A ParserSpec specifies a parser to use to convert non-JSON input to JSON. Only one of the following fields can be specified:

  • fastmatch (FastmatchParser{}): Use fastmatch parser. Learn more on the Fastmatch Log Parser page.

  • regex (RegexParser{}): Use regex parser. Learn more on the Regex Log Parser page.

  • csv (CSVParser{}): Use csv parser. Learn more on the CSV Log Parser page.

    • Note: The columns field is required when there are multiple CSV schemas in the same log source.

  • script: Use script parser. Learn more on the Script Log Parser page.

See the fields for the fastmatch, regex, csv, and script parsers below.

Parser fastmatch fields

  • match ([]string): One or more patterns to match log lines against. This field cannot be empty.

  • emptyValues ([]string): Values to consider as null.

  • expandFields (map[string]string): Additional fields to be injected by expanding text templates.

  • trimSpace (bool): Trim space surrounding each value.
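As an illustration, a fastmatch parser block built from these fields might look like the sketch below. This is not an authoritative example: the %{field} placeholder pattern is an assumption, so confirm the exact pattern syntax on the Fastmatch Log Parser page.

parser:
  fastmatch:
    # Each pattern is tried against the log line; placeholder syntax is assumed here
    match:
      - '%{timestamp} %{level} %{message}'
    # Treat bare dashes as null values
    emptyValues:
      - '-'
    # Remove whitespace surrounding each captured value
    trimSpace: true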

Parser regex fields

  • match ([]string): A pattern to match log lines against (can be split it into parts for documentation purposes). This field cannot be empty.

  • patternDefinitions (map[string]string): Additional named patterns to use in match pattern.

  • emptyValues ([]string): Values to consider as null.

  • expandFields (map[string]string): Additional fields to be injected by expanding text templates.

  • trimSpace (bool): Trim space surrounding each value.
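Similarly, a regex parser block using these fields might be sketched as follows. The patternDefinitions map and the split match list come from the field descriptions above, but the named-capture syntax and the way named patterns are referenced are assumptions; verify the exact syntax on the Regex Log Parser page before relying on this.

parser:
  regex:
    # Named helper patterns for reuse inside the match pattern (reference syntax assumed)
    patternDefinitions:
      IPADDR: '\d{1,3}(\.\d{1,3}){3}'
    # A single pattern split into parts for readability; capture-group syntax assumed
    match:
      - '(?P<remote_ip>%{IPADDR}) '
      - '(?P<action>\w+)'
    trimSpace: true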

Parser csv fields

  • delimiter (string): A character to use as field delimiter.

  • hasHeader (bool): Use first row to derive column names (unless columns is set also in which case the header is just skipped).

  • columns ([]string, required(without hasHeader), non-empty): Names for each column in the CSV file. If not set, the first row is used as a header.

  • emptyValues ([]string): Values to consider as null.

  • trimSpace (bool): Trim space surrounding each value.

  • expandFields (map[string]string): Additional fields to be injected by expanding text templates.
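For example, a csv parser for header-less files can declare its column names explicitly instead of setting hasHeader. This is a minimal sketch using only the fields listed above; the column names are illustrative.

parser:
  csv:
    delimiter: ','
    # Column names for a file without a header row
    columns:
      - ip
      - username
      - country
    # Treat bare dashes as null values
    emptyValues:
      - '-'
    trimSpace: true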

Parser script fields

  • function (string): The function to run per event, written in Starlark. Learn more on the Script Log Parser page.

FieldSchema

A FieldSchema defines a field and its value. The field is defined by:

  • name (string)

    • The name of the field.

  • required (boolean)

    • Whether the field is required or not.

  • description (string)

    • Some text documenting the field.

  • copy (object)

    • If present, the field's value will be copied from the referenced object.

  • rename (object)

    • If present, the field's name will be changed.

  • concat (object)

    • If present, the field's value will be the combination of the values of two or more other fields.

  • split (object)

    • If present, the field's value will be extracted from another string field by splitting it based on a separator.

  • mask (object)

    • If present, the field's value will be masked.

Its value is defined using the fields of a ValueSchema.
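Putting these attributes together, a single field definition might look like the sketch below, which combines the naming attributes above with a value type from the ValueSchema section; the field name and description are illustrative.

# A required string field that is also scanned as an IP indicator
- name: src_ip
  required: true
  description: The source IP address of the request
  type: string
  indicators: [ ip ]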

ValueSchema

A ValueSchema defines a value and how it should be processed. Each ValueSchema has a type field that can be of the following values:

  • string: A string value

  • int: A 32-bit integer number in the range -2147483648 to 2147483647

  • smallint: A 16-bit integer number in the range -32768 to 32767

  • bigint: A 64-bit integer number in the range -9223372036854775808 to 9223372036854775807

  • float: A 64-bit floating point number

  • boolean: A boolean value (true / false)

  • timestamp: A timestamp value

  • array: A JSON array where each element is of the same type

  • object: A JSON object of known keys

  • json: Any valid JSON value (JSON object, array, number, string, boolean)

The fields of a ValueSchema depend on the value of the type.

  • object

    • fields (required, []FieldSpec): An array of FieldSpec objects describing the fields of the object.

  • array

    • element (required, ValueSchema): A ValueSchema describing the elements of an array.

  • timestamp

    • timeFormats (required, []String): An array specifying the formats to use for parsing the timestamp (see Timestamps).

    • isEventTime (Boolean): A flag to tell Panther to use this timestamp as the Log Event Timestamp.

  • string

    • indicators ([]String): Tells Panther to extract indicators from this value (see Indicators).

    • validate: Validation rules for the string value (see Validate).
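For example, an object field with nested fields and an array field could be declared as follows (a minimal sketch with illustrative field names):

# An object with two known keys
- name: user
  type: object
  fields:
    - name: id
      type: bigint
    - name: name
      type: string
# An array whose elements are all strings
- name: tags
  type: array
  element:
    type: string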

Timestamps

The timeFormats field was introduced in version 1.46 to support multiple timestamp formats in custom log schemas. While timeFormat is still supported for existing log sources, we recommend using timeFormats for all new schemas.

Timestamps are defined by setting the type field to timestamp and specifying the timestamp format using the timeFormats field. Timestamp formats can be any of the built-in timestamp formats:

  • rfc3339: The most common timestamp format. Example: 2022-04-04T17:09:17Z

  • unix_auto: Timestamp expressed in time passed since UNIX epoch time. It can handle seconds, milliseconds, microseconds, and nanoseconds. Examples: 1649097448 (seconds), 1649097491531 (milliseconds), 1649097442000000 (microseconds), 1649097442000000000 (nanoseconds)

  • unix: Timestamp expressed in seconds since UNIX epoch time. It can handle fractions of seconds as a decimal part. Example: 1649097448

  • unix_ms: Timestamp expressed in milliseconds since UNIX epoch time. Example: 1649097491531

  • unix_us: Timestamp expressed in microseconds since UNIX epoch time. Example: 1649097442000000

  • unix_ns: Timestamp expressed in nanoseconds since UNIX epoch time. Scientific float notation is supported. Example: 1649097442000000000

Defining a custom format

You can also define a custom format by using strftime notation. For example:

# The field is a timestamp using a custom timestamp format like "2020-09-14 14:29:21"
- name: ts
  type: timestamp
  timeFormats:
    - "%Y-%m-%d %H:%M:%S" # note the quotes required for proper YAML syntax

Panther's strftime format supports the %N code for parsing nanoseconds. For example:

%H:%M:%S.%N can be used to parse 11:12:13.123456789

Using multiple time formats

When multiple time formats are defined, each of them will be tried sequentially until successful parsing is achieved:

- name: ts
  type: timestamp
  timeFormats:
    - rfc3339
    - unix

Timestamp values can be marked with isEventTime: true to tell Panther that it should use this timestamp as the p_event_time field. It is possible to set isEventTime on multiple fields. This covers the cases where some logs have optional or mutually exclusive fields holding event time information. Since there can only be a single p_event_time for every Log Event, the priority is defined using the order of fields in the schema.
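For example, a schema with two candidate event-time fields can mark both with isEventTime: true; since only one p_event_time is set per event, the order of the fields in the schema determines which value is used (a sketch with illustrative field names):

- name: eventTime
  type: timestamp
  timeFormats:
    - rfc3339
  isEventTime: true
# A second candidate; priority between the two follows the order of fields in the schema
- name: createdAt
  type: timestamp
  timeFormats:
    - unix
  isEventTime: true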

Schema test cases that are used with the pantherlog test command must define the time field value in the result payload formatted as YYYY-MM-DD HH:MM:SS.fffffffff. For backwards compatibility reasons, single time format configurations will retain the same format.

Example:

- name: singleFormatTimestamp
  type: timestamp
  timeFormats:
    - unix
input: >
  {
    "singleFormatTimestamp": "1666613239"
  }
result: >
  {
    "singleFormatTimestamp": "1666613239"
  }

When multiple time formats are defined:

- name: multipleFormatTimestamp
  type: timestamp
  timeFormats:
    - unix
    - rfc3339
input: >
  {
    "multipleFormatTimestamp": "1666613239"
  }
result: >
  {
    "multipleFormatTimestamp": "2022-10-24 12:07:19.459326000"
  }

Indicators

Values of string type can be used as indicators. To mark a field as an indicator, set the indicators field to an array of indicator scanner names (more than one may be used). This will instruct Panther to parse the string and store any indicator values it finds to the relevant p_any_ field. For a list of values that are valid to use in the indicators field, see Standard Fields.

For example:

# Will scan the value as IP address and store it to `p_any_ip_addresses`
- name: remote_ip
  type: string
  indicators: [ ip ]

# Will scan the value as a domain name and/or IP address.
# Will store the result in `p_any_domain_names` and/or `p_any_ip_addresses`
- name: target_url
  type: string
  indicators: [ url ]

Validate

Under the validate key, you can specify conditions for the field's value that must be met in order for an incoming log to match this schema.

It's also possible to use validate on the element key (where type: string) to perform validation on each element of an array value.

allow and deny validation

You can validate values of string type by declaring an allowlist or denylist. Only logs with field values that match (or do not match) the values in allow/deny will be parsed with this schema. This means you can have multiple log types that have common overlapping fields but differ on values of those fields.

# Will only allow 'login' and 'logout' event_type values to match this log type
- name: event_type
  type: string
  validate:
    allow: [ "login", "logout"]
    
# Will match if log has any event_type value other than 'login' and 'logout'
- name: event_type
  type: string
  validate:
    deny: [ "login", "logout"]
    
# Will match logs with a severities field with value 'info' or 'low' 
- name: severities
  type: array
  element:
    type: string
    validate:
      allow: ["info", "low"]

ip and cidr format validation

Values of string type can be restricted to match well-known formats. Currently, Panther supports the ip and cidr formats to require that a string value be a valid IP address or CIDR range.

ip and cidr validation can be combined with allow or deny rules, but doing so is somewhat redundant. For example, if you allow two IP addresses, adding an ip validation only ensures that the validation does not produce false positives when the IP addresses in your list are not valid.

# Will allow valid ipv4 IP addresses e.g. 100.100.100.100
- name: address
  type: string
  validate:
    ip: "ipv4"
    
# Will allow valid ipv6 CIDR ranges 
# e.g. 2001:0db8:85a3:0000:0000:0000:0000:0000/64
- name: address
  type: string
  validate:
    cidr: "ipv6"
    
# Will allow any valid ipv4 or ipv6 address
- name: address
  type: string
  validate:
    ip: "any"
    
# All elements of the addresses array must be valid ipv4 IP addresses
- name: addresses
  type: array
  element:
    type: string
    validate:
      ip: "ipv4"

Using JSON schema in an IDE

If your code editor or integrated development environment (IDE) supports JSON Schema, you can configure it to use this schema file for Panther schemas and this schema-tests file for schema tests. Doing so will allow you to receive suggestions and error messages while developing Panther schemas and their tests.

JetBrains custom JSON schemas

See the JetBrains documentation for instructions on how to configure JetBrains IDEs to use custom JSON Schemas.

VSCode custom JSON schemas

See the VSCode documentation for instructions on how to configure VSCode to use JSON Schemas.

Stream type

While performing certain actions in the Panther Console, such as inferring a custom schema from raw logs or configuring an S3 bucket for Data Transport, you need to select a log stream type. View example log events for each type below.

  • Auto: Panther will automatically detect the appropriate stream type. (No example; n/a.)

  • Lines: Events are separated by a new line character.

  • JSON: Events are in JSON format.

  • JSON Array: Events are inside an array of JSON objects, or inside an array of JSON objects that is the value of a key in a top-level object (also known as an "enveloped array").

  • CloudWatch Logs: Events came from CloudWatch Logs.


"10.0.0.1","user-1@example.com","France"
"10.0.0.2","user-2@example.com","France"
"10.0.0.3","user-3@example.com","France"

JSON examples:

{ 
    "ip": "10.0.0.1", 
    "un": "user-1@example.com", 
    "country": "France" 
}
OR
{ "ip": "10.0.0.1", "un": "user-1@example.com", "country": "France" }{ "ip": "10.0.0.2", "un": "user-2@example.com", "country": "France" }{ "ip": "10.0.0.3", "un": "user-3@example.com", "country": "France" }
OR
{ "ip": "10.0.0.1", "un": "user-1@example.com", "country": "France" }
{ "ip": "10.0.0.2", "un": "user-2@example.com", "country": "France" }
{ "ip": "10.0.0.3", "un": "user-3@example.com", "country": "France"OR
[
	{ "ip": "10.0.0.1", "username": "user-1@example.com", "country": "France" },
	{ "ip": "10.0.0.2", "username": "user-2@example.com", "country": "France" },
	{ "ip": "10.0.0.3", "username": "user-3@example.com", "country": "France" }
]
OR
{ "events": [
        { "ip": "10.0.0.1", "username": "user-1@example.com", "country": "France" },
	{ "ip": "10.0.0.2", "username": "user-2@example.com", "country": "France" },
	{ "ip": "10.0.0.3", "username": "user-3@example.com", "country": "France" }
    ] 
}

CloudWatch Logs example:

{
  "owner": "111111111111",
  "logGroup": "services/foo/logs",
  "logStream": "111111111111_CloudTrail/logs_us-east-1",
  "messageType": "DATA_MESSAGE",
  "logEvents": [
      {
          "id": "31953106606966983378809025079804211143289615424298221568",
          "timestamp": 1432826855000,
          "message": "{\"ip\": \"10.0.0.1\", \"user\": \"user-1@example.com\", \"country\": \"France\"}"
      },
      {
          "id": "31953106606966983378809025079804211143289615424298221569",
          "timestamp": 1432826855000,
          "message": "{\"ip\": \"10.0.0.2\", \"user\": \"user-2@example.com\", \"country\": \"France\"}"
      },
      {
          "id": "31953106606966983378809025079804211143289615424298221570",
          "timestamp": 1432826855000,
          "message": "{\"ip\": \"10.0.0.3\", \"user\": \"user-3@example.com\", \"country\": \"France\"}"
      }
  ]
}