System Errors

Panther's System Errors alert you if the Panther platform is not functioning as expected

Overview

Panther's System Errors alert you when a part of the Panther platform is not functioning as expected. This includes the following:

  • Log Source Health Notifications

    • Log sources turning unhealthy as a result of a failed health check

    • Logs dropping off entirely from a log source

  • Alert Delivery Failure

    • Alerts failing to deliver to the Alert Destination

  • Log Classification Errors

    • Logs failing to classify

  • S3 GetObject Errors

    • Panther failing to fetch S3 objects

  • Cloud Security Scanning Failure

    • Panther failing to scan a cloud resource because of an "access denied" error

  • Query errors in Scheduled Searches not connected to a Scheduled Rule

    • Query errors in Scheduled Searches connected to a Scheduled Rule are surfaced as rule errors

  • Timed out Scheduled Searches

  • Alert limit notifications

These types of alerts are classified as System Errors in Panther. System Errors always have a CRITICAL severity level and are sent to alert destinations configured to receive System Errors, even if those destinations are not configured to receive alerts with a CRITICAL severity. They are generated automatically, with the exception of log drop-off alarms, which you configure manually per log source. System Error alerts are visible in your Panther Console within Alerts & Errors > System Errors.

It's strongly recommended to configure an alert destination to receive the System Error alert type.

System Errors are also a type of in-app notification in your Panther Console. Learn more about notifications on Notifications and Errors.

How to configure System Error alarms

To ensure that you receive alerts for all types of System Errors:

  • Configure an alert destination that receives the System Error alert type.

  • Configure log drop-off alarms for your log sources, which trigger an alert when data is no longer being received.

    • Note that you do not need to enable alerts for Log Classification errors, Alert Delivery failures, S3 GetObject errors, or Cloud Security Scanning failures; these are generated automatically.

Configuring an Alert Destination for System Errors

By default, Panther sends System Error alerts to the Alerts page in your Panther Console. It is also strongly recommended to configure one of your alert destinations to receive them.

Alert destinations configured to receive System Errors will receive them even if the destination is not configured to receive alerts with a CRITICAL severity.

To ensure these alerts are sent to a custom Alert Destination, follow the steps below:

  1. Log in to your Panther Console.

  2. On the left sidebar navigation, click Configure > Alert Destinations.

  3. Choose an existing Alert Destination or add a new Alert Destination.

  4. On the configuration page for the Alert Destination, add System Errors to the Alert Types section.

Configuring log drop-off alarms for log sources

Panther allows you to set up event threshold alarms for individual log sources, which will trigger an alert if data is not received over a specific time interval.

For example, if you set the threshold to 15 minutes, you will receive an alert if no events are processed for 15 minutes.

This can be useful for catching log sources that have been incorrectly linked to Panther or that are experiencing issues outside of Panther.

Note: The alert is only sent one time; there is no re-notification for the event threshold.

You can add an alarm to a new or an existing log source:

Setting up an alarm for a new log source

  1. In the left-hand navigation bar of your Panther Console, click Configure > Log Sources.

  2. In the upper-right corner, click Create New.

  3. Complete each step of the onboarding workflow.

    • See Data Sources and Transports for specific setup instructions by source.

  4. On the success page at the end of the onboarding workflow, the Trigger an alert when no events are processed setting defaults to YES. Leave this enabled.

    • Enter your desired time period by filling in the Number and Period fields next to How long should Panther wait before it sends you an alert that no events have been processed?

Setting up an alarm for an existing log source

  1. In the left-hand navigation bar of your Panther Console, click Configure > Log Sources.

  2. Select the log source to configure an alarm for.

  3. On the log source's details page, in the Overview tab, click the pencil icon to the right of the value in the Drop-off Alarm field.

  4. In the pop-up window, toggle the Trigger an alert when no events are processed setting to ON.

  5. Next to How long should Panther wait before it sends you an alert that no events have been processed?, set a Number and Period.

  6. Click Apply Changes.

Types of System Errors

Log Source Health alerts

Panther performs health checks on log sources to ensure that Panther is correctly linked to the source, has the right credentials, and is receiving data from the source consistently.

Log drop-off alerts

Panther allows you to set up event threshold alarms for individual log sources, which will trigger an alert if data is not received over a specific time interval. For instructions on enabling these alerts, see the section above: Configuring log drop-off alarms for log sources.

It is not possible to set up a log drop-off alarm for Panther audit logs when they are enabled as a log source.
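
If you want to confirm a suspected drop-off manually, or triage a drop-off alert, you can check the most recent event time per source in Data Explorer. The query below is a minimal sketch rather than Panther-provided content: it assumes a Snowflake data lake with the default public schema, relies on Panther's standard p_event_time and p_source_label fields, and uses the aws_cloudtrail table only as a stand-in for whichever log type you are checking.

```sql
-- Latest event received per source for one log type over the past 7 days.
-- The qualified table name is an assumption; substitute the table for the
-- log type you are investigating. A missing row, or an old latest_event
-- value, suggests the source has stopped delivering data.
SELECT
  p_source_label,
  MAX(p_event_time) AS latest_event,
  COUNT(*)          AS events_last_7_days
FROM panther_logs.public.aws_cloudtrail
WHERE p_event_time >= DATEADD(day, -7, CURRENT_TIMESTAMP())
GROUP BY p_source_label
ORDER BY latest_event;
```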

Log Classification alerts

Log classification alerts are generated when logs sent to Panther hit a parsing error and fail to classify. When this happens, the following actions take place by default:

  • Logs that fail to classify are sent to the data lake and are searchable in a table called classification_failures in the panther_monitor database.

  • An alert is generated immediately after the first log fails to classify. The alert will display all log lines that are failing to classify.

An alert's details page in the Panther Console highlights the log lines that fail to parse correctly, to help you determine which lines in the log type's respective schemas need to be corrected or added.

The alert includes a link to the log source's Log Source Operations page, where you can view the rate at which events are failing to classify on the Health tab.

Remediate Classification Failures

After a source has received classification errors for a set of events, you will need to identify which of your source's schemas is failing and why. You can find this information either on the Health tab of the Log Source Operations page or directly from Data Explorer, in a table called classification_failures in the panther_monitor database.
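
For example, you can pull recent failures directly from Data Explorer. The query below is a minimal sketch, assuming a Snowflake data lake that exposes the table under the default public schema; adjust the qualified name to match your deployment.

```sql
-- Review recent classification failures recorded for your sources.
-- The schema qualifier is an assumption; adjust it if your data lake exposes
-- the panther_monitor database differently. If the table carries Panther's
-- standard p_event_time field in your deployment, add an ORDER BY on it to
-- see the newest failures first.
SELECT *
FROM panther_monitor.public.classification_failures
LIMIT 100;
```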

Common causes for Classification Failures include:

  • A field tagged as required did not exist in some of the incoming data

  • A field tagged as int received string data instead

  • A timestamp field has the wrong format definition

After you identify the cause and the schema the failing events belong to, update the affected field(s) in that schema. The schema changes are reflected in your sources automatically.

As a last step, mark the alarm on that source as "Resolved" in the Log Source Operations page.

S3 GetObject Error Notifications

S3 GetObject error alerts are generated when Panther fails to fetch S3 objects. When this happens, the following actions take place by default:

  • Panther stores the S3 objects in the data lake, which can be queried through Data Explorer in a table titled panther_monitor.data_audit (see the query sketch after this list).

  • An alert is generated if Panther fails to fetch any S3 object in the last 24 hours. The alert displays the specific S3 objects that are failing.
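
To review the affected objects yourself, you can query that table from Data Explorer. The query below is a minimal sketch, assuming a Snowflake data lake that exposes the table under the default public schema; the columns available for filtering depend on the table's layout in your deployment.

```sql
-- List recent entries recorded for S3 GetObject failures.
-- The qualified table name is an assumption; adjust it to match how
-- panther_monitor.data_audit appears in your data lake.
SELECT *
FROM panther_monitor.public.data_audit
LIMIT 100;
```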

Alert Delivery Failure

Alert Delivery Failure alerts are generated when Panther fails to deliver an alert to a destination.

If the initial attempt to deliver an alert fails, Panther automatically attempts to re-deliver it. If a certain threshold of delivery failures is breached, a system health alert is generated and sent to any alert destinations configured to receive System Error alerts.

Cloud Security Scanning Failure

Cloud Security Scanning Failure alerts are generated when Panther fails to scan a cloud resource because of an "access denied" error.

This occurs when permissions are not configured properly to allow scanning to occur. This is most commonly caused by one of the following scenarios:

  • Our scanning role (PantherAuditRole) is not configured with sufficient permissions.

    • This is an extremely rare case as the permissions of this role rarely change. This can be resolved by updating the PantherAuditRole to the latest version.

  • An AWS Organizations Service Control Policy (SCP) is preventing our scanning role from carrying out scans.

    • Commonly, this occurs with SCPs that restrict certain regions or services. This can be resolved by either modifying the SCP to add an exception for our scanning role, or by modifying the Cloud Security integration to exclude certain regions or resource types.

  • An AWS resource-based policy is preventing our scanning role from carrying out scans.

    • In AWS, permissions work in both directions: the PantherAuditRole may be granted permission to access a resource, but the resource itself may not grant permission to be accessed by our role. This can be resolved by either modifying the resource-based policy to add an exception for our scanning role, or by modifying the Cloud Security integration to exclude certain resources or resource types.

The alert will indicate which resource the scan failed on and the AWS error that caused it to fail.

You can use this information to pinpoint the exact permissions issue. For example, an error stating that no resource-based policy allows the kms:ListResourceTags action indicates that the issue is related to a resource-based policy.
