Onboarding Guide

Set up your Panther environment

Overview

Onboarding in Panther includes setting up log sources, detections, and alert destinations, as well as familiarizing yourself with search tools and optionally enabling enrichment capabilities. This guide explains how to complete each of these tasks.

If you need help while onboarding, please reach out to your Panther support team.

Prerequisite

  • You have successfully logged in to your Panther Console.

Step 1: Onboard log sources

The first step in configuring your Panther environment is to onboard log sources, which provide the data Panther analyzes and stores. After identifying valuable sources, you'll onboard each one.

Step 1.1: Identify log sources to onboard

Consider the log-emitting systems in your environment that you'd like to monitor for security. It's recommended to onboard enough sources to come close to your allowed ingest volume. You can use log filtering if you only want to ingest a subset of logs from a given source into Panther.

If you need some ideas of where to get started, review the Supported Logs list. You can also onboard completely custom sources.

Step 1.2: Onboard each log source

For each log source you've identified for ingestion:

  • If the log source is one of Panther's supported sources, onboard it by following the instructions on its documentation page.

  • If the log source is not one of Panther's supported sources:

    1. If the source is able to emit event webhooks:

      If the source is high-volume (emits at least one GB per hour) and/or its payload size exceeds the HTTP payload limit, skip to the next step.

      1. Onboard the source by following the HTTP Source creation instructions (see the test-event sketch below).

      2. Follow the instructions to infer a custom schema from HTTP data received in Panther.

    2. If the source is not able to emit event webhooks but can export events to an S3 bucket:

      1. Onboard the source by following the S3 Source creation instructions.

      2. Follow the instructions to infer a custom schema in one of the following ways:

        • From S3 data received in Panther

        • From historical S3 data

    3. If the source can neither emit event webhooks nor export events to an S3 bucket, but can export events to one of the other Data Transport locations Panther can pull from (e.g., Google Cloud Storage or Azure Blob Storage):

      1. Define a custom schema in one of the following ways:

        • Inferring from sample logs in the Console

        • Inferring using pantherlog infer

        • Creating one manually in the Console

      2. Onboard the source by following the instructions within the documentation for your chosen Data Transport.

    4. If the source can neither emit event webhooks nor export events to any of Panther's Data Transport sources, see Panther's Data Pipeline Tools guides or reach out to your Panther support team for assistance in connecting your data to Panther.

These Step 1.2 instructions are also represented in the flow chart below:

[Flow chart: how to onboard a given log source depending on characteristics of the source, such as whether it can emit webhook events or export events to S3]
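
If you created an HTTP source, you can send it a hand-built test event so Panther has sample data from which to infer a schema. Below is a minimal sketch; the ingest URL, the bearer-token authentication method, and the event fields are placeholders, so copy the real values from your HTTP source's configuration page.

    import json
    import urllib.request

    # Placeholders: use the values shown on your HTTP source's configuration page.
    INGEST_URL = "https://logs.example.runpanther.net/http/REPLACE_ME"
    BEARER_TOKEN = "REPLACE_ME"

    # A sample event shaped like the data your real source will send.
    event = {"action": "user.login", "actor": "alice@example.com", "ip": "203.0.113.7"}

    req = urllib.request.Request(
        INGEST_URL,
        data=json.dumps(event).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {BEARER_TOKEN}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status)  # expect a 2xx if the event was accepted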

(Optional) Step 1.3: Onboard AWS account(s) for Cloud Security Scanning

If you use AWS as a cloud provider, you can use Panther's Cloud Security Scanning feature to monitor the configurations of your cloud resources.

  • If you'd like to use Cloud Security Scanning, onboard one or more AWS accounts by following these instructions.

Log sources: Go further

  • Learn how to monitor the health of your log sources.

  • Learn about field discovery for custom log sources.

  • If you created any custom schemas, designate fields as Indicator Fields to enable cross-log search and detections.

Step 2: Create or enable detections

Now that your data is flowing into Panther, it's time to configure detections. First, you'll choose whether to manage detection content in the Panther Console or CLI workflow. Then, for each source, you'll enable Panther-managed detections or create your own.

After you have created or enabled detections, alerts for matches will be visible in your Panther Console and queryable via the Panther API—but you will not receive alerts in external applications until you complete the next step and set up alert destinations.

Step 2.1: Choose the Console or CLI workflow for detection management

Decide whether you'd like to manage detection content in the Panther Console or in the CLI workflow (performing uploads using the Panther Analysis Tool [PAT], perhaps in a CI/CD pipeline). Detection content includes detection packs and individual detections (rules, scheduled rules, and policies), as well as data models, global helpers, lookup tables, saved searches, and scheduled searches. Managing detection content in both the Console and CLI workflows is unsupported.

You might choose to use the CLI workflow if your team is comfortable using git, command line tools, and CI/CD pipelines. Otherwise, it's recommended to use the Panther Console.

Panther's Simple Detections functionality aims to eventually integrate the Console and CLI workflows. Currently, if your team uses the CLI workflow to manage detection content, changes made to detections using the Simple Detection builder in the Console will still be overwritten on the next upload (except for Inline Filters created in the Console, which are preserved).

Step 2.2: Create or enable rules and scheduled rules for each log source

For each log source you onboarded to Panther in the previous step, you will enable Panther-managed detections or create your own. If the source is one of Panther's Supported Logs, follow the Supported logs section below. Otherwise, follow the Custom logs section.

Supported logs

  • If the source is one of Panther's Supported Logs:

    • Enable a Panther-managed Detection Pack for the source. See the instructions below for enabling a Detection Pack in the Panther Console and in the CLI workflow.

    • If you already enabled a Detection Pack for this log source during onboarding (on the final "Success!" page), move on to the next log source.

Enable a Panther-managed Detection Pack in the Console

  • Follow these instructions to enable a Panther-managed Detection Pack for the source.

Go further:

  • Learn how to customize a Panther-managed detection.

  • Create additional, custom detections for this source.

Enable a Panther-managed Detection Pack in the CLI workflow

  1. If you have not done so already, follow these instructions to clone or fork the panther-analysis repository of detections.

  2. Within the rules directory of your copy of the panther-analysis repository, locate the directory for this source, which contains Panther-managed rules and (possibly) scheduled rules.

  3. For each Panther-managed rule and scheduled rule that you would like to enable, set the following in the detection's corresponding YAML file (to flip many files at once, see the sketch after this list):

    Enabled: True
  4. If there are any rules or scheduled rules in the source's directory that you do not want enabled, set the following in the detection's corresponding YAML file:

    Enabled: False
  5. Upload your detections to Panther manually using PAT, or configure your CI/CD pipeline to upload detection content with PAT.
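
Editing the Enabled flag across dozens of YAML files by hand is tedious. Below is a convenience sketch, not an official Panther tool, that flips Enabled for every detection file under a directory. It assumes PyYAML is installed; re-serializing will normalize formatting and drop comments, so review the git diff before uploading.

    import sys
    from pathlib import Path

    import yaml  # pip install pyyaml

    def set_enabled(directory: str, enabled: bool) -> None:
        """Set the Enabled flag on every detection YAML file under directory."""
        for path in Path(directory).rglob("*.yml"):
            doc = yaml.safe_load(path.read_text())
            if not isinstance(doc, dict) or "Enabled" not in doc:
                continue  # not a detection file
            doc["Enabled"] = enabled
            # safe_dump re-serializes the file, which may reorder keys and
            # drop comments -- review the diff before uploading with PAT.
            path.write_text(yaml.safe_dump(doc, sort_keys=False))
            print(f"{'Enabled' if enabled else 'Disabled'}: {path}")

    if __name__ == "__main__":
        # e.g. python toggle_enabled.py rules/okta_rules true
        set_enabled(sys.argv[1], sys.argv[2].lower() == "true")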

Go further:

  • Create additional, custom detections for this source.

Custom logs

  • If the source is a custom log source:

    • Create your own detections. See the instructions below for creating detections in the Panther Console and in the CLI workflow. While creating detections:

      • Consider leveraging Panther-managed helper functions, or creating your own (see the sketch after this list).

      • Create tests.
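
For example, a custom helper can centralize logic you expect to reuse across several rules. A minimal sketch follows; the module path, function name, and CIDR ranges are illustrative, not Panther-managed content.

    # global_helpers/my_company_helpers.py (illustrative path)
    from ipaddress import ip_address, ip_network

    # Hypothetical internal ranges for your organization.
    INTERNAL_NETWORKS = [ip_network("10.0.0.0/8"), ip_network("192.168.0.0/16")]

    def is_internal_ip(ip: str) -> bool:
        """Return True if ip falls inside a known internal network."""
        try:
            addr = ip_address(ip)
        except ValueError:
            return False
        return any(addr in net for net in INTERNAL_NETWORKS)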

Create rules and scheduled rules in the Console

  • Create one or more rules for the log source.

    • To create a Python rule, follow these instructions.

    • To use the Simple Detection builder, follow these instructions.

  • If necessary, create one or more Scheduled Rules for the log source by following these instructions.

Create rules and scheduled rules in the CLI workflow

  1. If you have not done so already, follow these instructions to clone or fork the panther-analysis repository of Python detections.

  2. Write one or more rules for the log source:

    • To write a Python rule, follow these instructions (a minimal example follows this list).

    • To write a Simple Detection rule, follow these instructions.

  3. If necessary, write one or more Scheduled Rules for the log source by following these instructions.

  4. Upload your detections to Panther manually using PAT, or configure your CI/CD pipeline to upload detection content with PAT.
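
For reference, a Python rule is a file defining a rule(event) function that returns True on a match, plus optional functions such as title(event) and severity(event) that customize the resulting alert. Here is a minimal sketch against a hypothetical custom log type; the field names are assumptions about your schema, not a Panther-managed rule.

    def rule(event):
        # Match failed logins against admin accounts.
        # Field names depend entirely on your log schema.
        return (
            event.get("action") == "login.failed"
            and event.get("target_role") == "admin"
        )

    def title(event):
        # Used as the alert title in the Panther Console.
        return f"Failed admin login by {event.get('actor', 'unknown actor')}"

    def severity(event):
        # Optional dynamic severity: escalate logins from outside 10.0.0.0/8.
        return "HIGH" if not str(event.get("ip", "")).startswith("10.") else "MEDIUM"

In the CLI workflow, each rule's Python file is paired with a YAML file carrying metadata such as RuleID, LogTypes, Severity, and the Enabled flag discussed above.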

(Optional) Step 2.3: Create or enable policies for each Cloud Security Scanning account

If you onboarded one or more AWS accounts for Cloud Security Scanning, enable Panther-managed policies, or create your own.

Enable Panther-managed Policies in the Console

  • Enable the Panther Core AWS Pack in the Panther Console. Note that in addition to Policies, this pack includes rules, helpers, and data models.

    • See instructions for enabling Packs in the Console here.

Create Policies in the Console

  • To create Policies in the Console, follow these instructions.

Enable Panther-managed Policies in the CLI workflow

  • If you have not done so already, follow these instructions to clone or fork the panther-analysis repository of Python detections.

  • Within the policies directory of your copy of the panther-analysis repository, identify the directories covering the AWS resources you want to monitor.

  • In each directory of interest, for each Panther-managed policy that you would like to enable, set the following in the detection's corresponding YAML file:

    Enabled: True
  • In each directory of interest, if there are any policies that you do not want enabled, set the following in the detection's corresponding YAML file:

    Enabled: False
  • Upload your detections to Panther manually using PAT, or configure your CI/CD pipeline to upload detection content with PAT.

Create Policies in the CLI workflow

  • To write Policies in the CLI workflow, follow these instructions; a minimal example is sketched below.
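
A policy looks much like a rule, but its entry point is policy(resource) and it returns True when the resource is compliant; returning False produces a finding. A minimal sketch for a hypothetical S3 bucket encryption check follows; confirm the exact attribute names against the Cloud Resource Attributes pages.

    def policy(resource):
        # Compliant only if the bucket has an encryption configuration.
        # "EncryptionRules" is the attribute name assumed here; check the
        # S3 Bucket resource documentation for the authoritative field list.
        return resource.get("EncryptionRules") is not None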

Detections: Go further

  • If you are using the CLI workflow, configure your CI/CD pipeline to upload to Panther.

  • Use Data Replay to check that your detections match when expected.

  • If you onboarded an AWS account for Cloud Security Scanning, set up real-time monitoring.

Step 3: Configure alert destinations

Set up alert destinations to receive alerts in locations outside of your Panther Console.

Step 3.1: Identify where you want to receive Panther alerts

Where is the best place for your team to receive Panther alerts? Does it make sense to configure multiple destinations, and route alerts of different severities to different locations?

If you need some ideas to get started, check out the list of supported destinations on the Alert Destinations page. You can also create custom destinations.

Step 3.2: Set up destinations

For each alert destination you'd like to set up:

  • If the destination is natively supported by Panther, follow the setup instructions specific to that destination.

  • If the destination is not natively supported by Panther:

    • If the destination can receive HTTP POST requests containing a JSON payload, follow the instructions to use a Custom Webhook Destination (a receiver sketch follows this list).

    • Alternatively, consider polling the Panther API for new alerts on a schedule. Learn more about this option here.
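
To illustrate the Custom Webhook option: Panther delivers each alert as an HTTP POST with a JSON body, so a receiver only needs to accept and parse that request. Below is a minimal standard-library sketch; the payload field names used here (severity, title) are assumptions, so inspect a real delivery or the Custom Webhook Destination page for the exact schema.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class AlertHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            alert = json.loads(body)
            # Field names are assumptions; inspect a real payload to confirm.
            print(f"[{alert.get('severity', '?')}] {alert.get('title', 'untitled alert')}")
            self.send_response(200)  # acknowledge receipt
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), AlertHandler).serve_forever()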

Step 3.3: Ensure at least one destination is receiving System Errors

System Errors notify users when some part of their Panther workflow is not functioning correctly, such as log sources turning unhealthy or alerts failing to deliver. Learn more about System Errors on System Health Notifications.

When setting up each alert destination, you'll select the Alert Types sent to that destination. It's strongly recommended to configure at least one alert destination to receive System Errors.

Alert destinations: Go further

  • Learn how to triage alerts in Panther on Assigning and Managing Alerts.

Step 4: Learn how to use search tools

Before it's time to investigate a security incident, you'll want to be comfortable using Panther's search tools.

  • Practice creating filters and executing a search in the Search tool.

  • If you are comfortable writing SQL, practice running queries in Data Explorer.

    • See example queries in Data Explorer Query Examples.

Search: Go further

  • Create a Scheduled Search, on top of which you can create a Scheduled Rule (a sketch follows).
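
A Scheduled Rule looks like a regular Python rule, except that each event passed to rule() is a row returned by the scheduled query rather than a raw log event. A minimal sketch, assuming a scheduled search that returns hypothetical username and failed_logins columns:

    def rule(event):
        # `event` here is one row of the scheduled query's result set.
        return event.get("failed_logins", 0) > 10

    def title(event):
        return f"{event.get('username')} had {event.get('failed_logins')} failed logins"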

(Optional) Step 5: Set up Enrichment

Panther's Enrichment features can add useful context to log events, enabling you to write higher fidelity detections and generate more informative alerts. These features include:

  • Panther-managed Enrichment Providers like IPinfo, Tor Exit Nodes, and Anomali ThreatStream

  • Identity Provider Profiles like Okta Profiles and Google Workspace Profiles

  • Lookup Tables containing custom data

For each of the above features, determine whether you would like to enable them; if so, follow the setup instructions on their respective pages.
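
Once enrichment is enabled, matching events carry the added context under the p_enrichment key, which your detections can read. A hedged sketch, assuming a custom Lookup Table named my_allowlist matched on the event's ip field; the lookup name, selector, and column are illustrative:

    from panther_base_helpers import deep_get  # Panther-managed helper

    def rule(event):
        # Suppress matches for IPs marked "trusted" in the hypothetical
        # my_allowlist Lookup Table, matched on the event's "ip" field.
        status = deep_get(event, "p_enrichment", "my_allowlist", "ip", "status")
        return event.get("action") == "login.failed" and status != "trusted"

See the Lookup Table Examples page for fully worked configurations.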
