Panther System Architecture

Diagrams and explanations of the Panther system architecture

Overview of Panther system

The diagram above flows roughly from left to right, and can be read in the following steps:

  1. Raw log data flows into Panther from various log sources, including SaaS pullers (e.g., Okta) and Data Transport sources (e.g., AWS S3). These raw logs are parsed, filtered and normalized in the Log Processing subsystem.

    • The output of Log Processing flows into two subsystems: Data Lake and Detection.

  2. If enabled, Cloud Security Scanning will scan onboarded cloud infrastructure, then pass the resources it finds into the Detection subsystem.

  3. The Enrichment subsystem optionally adds additional context to the data flowing into the Detection subsystem, which can be used to enhance detection efficacy (e.g., IPinfo, Okta Profiles).

  4. The Detection subsystem applies detections to the following inputs:

    • From Log Processing: Log events

    • From Scheduled Searches: Log events

    • From Cloud Security Scanning: Infrastructure resources

  5. If a detection generates an alert, it is sent to the Alerting subsystem for dispatch to the appropriate alert destinations (e.g., Slack, Jira, a webhook). A single alert can be routed to more than one destination.

At the bottom of the diagram, the Control Plane represents the cross-cutting infrastructure responsible for configuring and controlling the subsystems above (the data plane). This is expanded on in the descriptions of each subsystem below. The API Server referenced in the upper right corner is the external entry point into the Control Plane.

General considerations

AWS

  • Each Panther customer has a Panther instance deployed into a dedicated AWS account.

    • A customer can choose to own the AWS account or have Panther manage the account.

    • No data is shared or accessible between customers.

    • The AWS account forms the permission boundary for the application.

    • There is a single VPC used for services requiring networking.

  • Processing is done via AWS Lambda and Fargate instances.

    • A proprietary control plane dynamically picks the best compute to minimize cost (see below).

    • Compute resources do not communicate with one another directly; rather, they communicate via AWS services. In other words, there is no "east/west" network traffic, only "north/south" network traffic.

  • The principle of least privilege is followed by using minimally scoped IAM roles for each infrastructure component.

Snowflake

  • Each Panther customer has a Panther Snowflake instance deployed into a dedicated Snowflake account.

    • A customer can choose to own the Snowflake account or have Panther manage the account.

    • No data is shared or accessible between customers.

  • Snowflake secrets are managed by AWS Secrets Manager using RSA keys, and are rotated daily.

Other

  • All data is encrypted in transit and at rest.

  • All external interactions are conducted using the API:

    • The Panther Console is a React application interfacing with the API server.

    • The public API exposes GraphQL and REST endpoints.

    • All API actions are logged as Panther Audit Logs, which can then be ingested as a log source in Panther.

  • Secrets related to external integrations are managed in DynamoDB using KMS-encrypted fields.

  • The system scales up and down according to load.

  • Panther infrastructure is managed by Pulumi.

    • All infrastructure is tagged (e.g., resource name, subsystem), enabling effective billing analysis.

    • Customers owning their AWS account can add their own tags to integrate into their larger organization's billing reporting.

  • Monitoring is performed using a combination of CloudWatch, Sentry, and Datadog.

Log Processing subsystem

All data inputted into this subsystem is delivered via AWS S3 and S3 notifications. Upstream sources that are not S3-based (e.g., SaaS pullers, HTTP Source, Google Cloud Storage Source) use Amazon Data Firehose to aggregate events into S3 objects. These notifications are routed through a master Amazon SNS topic. The Log Processing and Event Sampling workflows each subscribe to this SNS topic.

Log Processing computation is implemented with AWS Lambda and Fargate.

Dynamic compute cost optimization

Panther uses an efficient, proprietary control plane that orchestrates compute selection, aggregation and scaling.

As traffic increases, additional compute is required. Panther's control plane scales to match traffic while minimizing the number of compute instances in use, which maximizes data aggregation and minimizes cost.

Lambda is used as Panther's core compute because its low latency allows Panther to follow traffic variations quickly, which is cost-effective for bursty and light traffic loads. However, Lambda's cost per unit of compute time is higher than that of other compute options, so for sustained and predictable traffic it is not the most economical choice. For this reason, if the control plane detects a high volume of stable traffic, Fargate (Fargate Spot, if available) is used instead of Lambda to minimize costs.
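
To make the trade-off concrete, here is a purely illustrative toy heuristic in Python. The threshold, window, and function name are invented for this sketch; Panther's actual selection logic is proprietary and considerably more sophisticated.

```python
# Purely illustrative: a toy heuristic for choosing between Lambda and Fargate.
# The threshold, window, and function name are invented for this sketch;
# Panther's actual control plane logic is proprietary and more sophisticated.
def choose_compute(events_per_minute_history: list[int],
                   sustained_threshold: int = 50_000,
                   window_minutes: int = 30) -> str:
    """Prefer Fargate (Spot when available) for sustained, predictable volume;
    prefer Lambda for bursty or light traffic."""
    window = events_per_minute_history[-window_minutes:]
    if len(window) == window_minutes and min(window) >= sustained_threshold:
        return "fargate_spot"  # high, stable volume: cheaper per unit of compute time
    return "lambda"            # bursty or light traffic: low-latency scaling wins


print(choose_compute([60_000] * 30))        # -> fargate_spot
print(choose_compute([100, 80_000, 200]))   # -> lambda
```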

For each notification received, the following steps are taken (a simplified sketch follows the list):

  1. The integration source associated with the S3 object is looked up in DynamoDB and the associated role is assumed for reading.

  2. The data is read from S3.

  3. Each event is parsed according to the associated schema for that data type.

    • If classification or parsing errors arise, System Errors are generated and the associated "bad" data is stored in the Data Lake within the classification_failures table.

  4. Ingestion filters and transformations are applied.

  5. Indicator fields (p_any fields) are extracted, and standard fields are inserted.

  6. Processed events are written as S3 objects and notifications are sent to an internal SNS topic, which the Data Lake and Detection subsystems are subscribed to.
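
Taken together, the per-notification workflow might be sketched as follows. This is a simplified illustration of the steps above, not Panther's implementation; the DynamoDB table name, output bucket, SNS topic ARN, and parsing stub are all hypothetical.

```python
# Simplified sketch of the per-notification log processing flow described above.
# Table names, bucket names, topic ARNs, and the parsing stub are hypothetical.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
sts = boto3.client("sts")
sns = boto3.client("sns")


def parse_and_normalize(line: bytes, schema: str) -> dict:
    """Stand-in for schema-based parsing, filtering, transformations,
    and extraction of indicator (p_any) and standard fields."""
    event = json.loads(line)
    event["p_log_type"] = schema
    return event


def handle_s3_notification(record: dict) -> None:
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # Step 1: look up the integration source and assume its read role.
    source = dynamodb.Table("log-sources").get_item(Key={"bucket": bucket})["Item"]
    creds = sts.assume_role(
        RoleArn=source["readRoleArn"], RoleSessionName="log-processing"
    )["Credentials"]
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

    # Step 2: read the raw data from S3.
    raw = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

    # Steps 3-5: parse, filter/transform, and extract indicator fields.
    events = [parse_and_normalize(line, source["schema"])
              for line in raw.splitlines() if line]

    # Step 6: write processed events back to S3 and notify the internal SNS
    # topic that the Data Lake and Detection subsystems subscribe to.
    out_key = f"processed/{key}.jsonl"
    boto3.client("s3").put_object(
        Bucket="panther-processed-data",  # hypothetical bucket
        Key=out_key,
        Body="\n".join(json.dumps(e) for e in events).encode(),
    )
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:processed-events",  # hypothetical
        Message=json.dumps({"bucket": "panther-processed-data", "key": out_key}),
    )
```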

You can optionally configure an event threshold alarm for each onboarded log source to alert if traffic stops unexpectedly.

The S3 notifications also route to the Event Sampling subsystem, which is used for log schema field discovery. As new attributes are found in the data, they are analyzed and added automatically to the schema (and associated Data Lake tables).

Enrichment subsystem

Enrichment in Panther is implemented via Lookup Tables (LUTs). A LUT is a table containing data keyed by a unique primary key. A LUT also has a mapping from schemas to the primary key, which allows for automatic enrichment in the Detection subsystem. Detections may also use a function call interface to look up data.

IPinfo, for example, is a Panther-managed enrichment provider containing geolocation data. IP addresses in a log event will automatically be enriched with location, ASN, and privacy information. Customers can also create their own custom LUTs to bring context relevant to their business and security concerns.
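
Inside a Python detection, enriched values appear under the event's p_enrichment field. A minimal sketch, assuming a hypothetical Lookup Table named ipinfo_location matched on a srcAddr field (the table name, match field, and column are illustrative and depend on your configuration):

```python
# Sketch of reading enrichment inside a Python detection.
# The Lookup Table name ("ipinfo_location"), match field ("srcAddr"),
# and column ("country") are illustrative and depend on your configuration.
def rule(event):
    country = event.deep_get("p_enrichment", "ipinfo_location", "srcAddr", "country")
    return country is not None and country not in {"US", "CA"}
```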

LUTs are created either via the Panther Console or in the CLI workflow (using a YAML specification file). Data for the LUT can be made accessible to Panther in a few ways: uploaded in the Console, included as a file in the CLI configuration, or stored as an S3 object. In general, the most useful way to manage LUT data is as an S3 object reference—you can create S3 objects in your own account, and Panther will poll for changes.

The metadata associated with a LUT is stored in DynamoDB. When there is new data, the Lookup Table Processor assumes the specified role from the metadata and processes the S3 data. This creates two outputs: a real-time database in EFS used by the Detection subsystem, and tables in the Data Lake. The tables in the Data Lake can be used by Scheduled Searches to enrich events using joins.

Detection subsystem

The streaming detection processor allows Python-based detections to run on log events from Log Processing and Scheduled Searches, as well as resources from Cloud Security Scanning. The streaming detection processor runs as an AWS Lambda function (or Fargate instance) optimized for high-speed execution of Python. (The processor is, however, not simply a Python Lambda—although it was in an earlier iteration of Panther's infrastructure. After years of experience, we have learned that a naive Python Lambda implementation is neither efficient nor cost-effective.)

The streaming detection processor evaluates the following types of detections:

  • Streaming detections (rules): Targeted at one or more log schemas (also called LogTypes)

  • Scheduled detections (scheduled rules): Targeted at the output of one or more Scheduled Searches

  • Policy detections: Targeted at resources

Processing data from these sources follows these steps (an illustrative rule sketch follows the list):

  1. For every active Lookup Table, any matches are applied to the p_enrichment field so that the information is available for detections.

  2. All detections associated with the given LogType, cloud resource, or Scheduled Search are found.

  3. Each detection's rule() function is run on the event/resource. If it returns True, then the other optional functions are run, and an alert is sent to the Alerting subsystem. For rules and scheduled rules, alerts are only sent for the first detection within the detection's deduplication window.

  4. Events associated with the detection are written to an S3 object and an S3 notification is sent to an internal SNS topic.

    • The Data Lake subsystem subscribes to the SNS topic for data ingestion into the rule matches and signals tables.
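
For reference, a minimal Python rule has the following shape: rule() is required, and the other functions (title(), dedup(), severity(), and so on) are optional and only evaluated when rule() returns True. The log fields referenced here are illustrative.

```python
# Minimal shape of a Python streaming detection. rule() is required; the
# other functions are optional and run only when rule() returns True.
# The field names used here are illustrative.
def rule(event):
    return event.get("eventName") == "ConsoleLogin" and not event.get("mfaUsed")


def title(event):
    return f"Console login without MFA by {event.get('userName', 'unknown user')}"


def dedup(event):
    # Alerts are grouped on this string within the detection's deduplication window.
    return event.get("userName", "unknown user")


def severity(event):
    return "HIGH" if event.get("userType") == "Root" else "MEDIUM"
```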

When a Scheduled Search is finished executing, the streaming detection processor Lambda is invoked with a reference to the results of the query. The results are read, and each event is processed according to the steps above.

Data Replay allows for testing of detections on historical data. This is implemented via a "mirror" set of infrastructure that is independent of the live infrastructure.

Data Lake subsystem

Panther uses the Snowflake Snowpipe service to ingest data into the Data Lake. This service uses AWS IAM permissions and is therefore not dependent on Snowflake users configured for queries and management. Onboarding a new data source in Panther triggers the creation of the associated tables and Snowpipe infrastructure using the Admin database API Lambda. This Lambda has an associated user with read/write permissions to Panther databases and schemas. Note that there is no direct outside connection that can invoke this Lambda; rather, it is driven by the internal Control Plane.

Queries are run using the read-only database API Lambda. This Lambda has an associated user with read-only permissions.

Queries are asynchronous. When an API request is made to run a query, the associated SQL is executed in Snowflake, which returns a queryId. Subsequent API calls are made with the queryId to check the status and read the associated results. The execution status of each query is tracked in DynamoDB.
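
API clients see the same asynchronous pattern when querying the data lake: start the query, poll with the queryId, then read the results. A minimal sketch against a hypothetical client wrapper (the helper method names are placeholders, not the exact Panther API operations):

```python
# Sketch of the asynchronous query pattern described above.
# start_query / get_query_status / get_query_results are hypothetical client
# helpers standing in for the corresponding Panther API operations.
import time


def run_data_lake_query(client, sql: str, poll_seconds: float = 2.0) -> list[dict]:
    query_id = client.start_query(sql)               # SQL is handed to Snowflake; a queryId comes back
    while True:
        status = client.get_query_status(query_id)   # execution status is tracked in DynamoDB
        if status == "succeeded":
            return client.get_query_results(query_id)
        if status in ("failed", "cancelled"):
            raise RuntimeError(f"query {query_id} ended with status {status}")
        time.sleep(poll_seconds)                     # still running; poll again
```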

Query results are stored in EFS for 30 days (this retention period is configurable). Customers can use the Search History in Panther to view the results of past searches.

Scheduled Searches used by Detection are run via an AWS Step Function. Upon query execution completion, the streaming detection processor is invoked with a reference to the query results for further processing.

When RBAC per log type is enabled, there is a unique, managed read-only user per role.

Snowflake secrets are stored in AWS Secrets Manager. RSA keys are used and rotated daily.

Alerting subsystem

The Detection subsystem inserts alerts into a DynamoDB table, and the alert dispatch Lambda consumes that table's DynamoDB stream. This Lambda uses the configured integrations to send alerts to destinations.

To display alerts in the Panther Console, core alert data is retrieved from DynamoDB, while the alert's associated events are retrieved from the Data Lake.

The alert limiter functionality is intended to prevent "alert storms," which typically arise from misconfigured detections, from overloading your destinations. If more than 1,000 alerts are generated in one hour by the same detection, further alerts are suppressed. (This limit is configurable.) Once the limit is reached, the detection continues to run and store events in the Data Lake (so there is no data loss); however, no alerts are created. In this case, a System Error is generated to notify the customer, who can manually remove the alert suppression in the Console (perhaps after some detection tuning).
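
As a toy illustration of the limiter behavior (the in-memory counter and names below are simplified stand-ins; the real limiter and its configurable threshold are internal to Panther):

```python
# Toy illustration of per-detection alert rate limiting. The real limiter's
# storage, threshold handling, and suppression flow are internal to Panther.
from collections import defaultdict
from datetime import datetime, timezone

ALERTS_PER_HOUR_LIMIT = 1000  # default limit described above; configurable
_counts: dict[tuple[str, str], int] = defaultdict(int)


def should_create_alert(detection_id: str, now: datetime | None = None) -> bool:
    now = now or datetime.now(timezone.utc)
    hour_bucket = now.strftime("%Y-%m-%dT%H")
    _counts[(detection_id, hour_bucket)] += 1
    # Events are still stored in the Data Lake either way; only alert
    # creation is suppressed once the hourly limit is exceeded.
    return _counts[(detection_id, hour_bucket)] <= ALERTS_PER_HOUR_LIMIT
```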

There are special authenticated endpoints for Jira and Slack to "call back" to Panther in order to sync alert state (e.g., to update the status of an alert to Resolved).

API subsystem

The Panther API is the entry point for all external interactions with Panther. The Console, GraphQL, and REST clients connect via an AWS ALB. Customers can optionally configure an allowlist for ALB access using IP CIDRs.

API authentication is performed using AWS Cognito: GraphQL and REST clients authenticate with API tokens, while the Panther Console uses Cognito-managed JWTs. The Console supports Single Sign-On (SSO) via Cognito.
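
For example, a script calling the REST API authenticates by sending its API token in a request header. A minimal sketch follows; the header name and endpoint path are assumptions for illustration, so confirm both against the Panther API documentation for your deployment.

```python
# Sketch of an authenticated REST call. The header name and endpoint path are
# assumptions for illustration; confirm both against the Panther API docs.
import os
import requests

PANTHER_HOST = os.environ["PANTHER_API_HOST"]   # e.g. "https://api.<your-instance>"
API_TOKEN = os.environ["PANTHER_API_TOKEN"]

resp = requests.get(
    f"{PANTHER_HOST}/alerts",
    headers={"X-API-Key": API_TOKEN},
    params={"status": "OPEN"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```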

There is an internal API server that resolves the requests. Some requests are processed entirely within the API server, while others require one or more calls to other internal services implemented via AWS Lambda functions.

Flow diagrams accompany this page: an overall system diagram plus one per subsystem (Log Processing, Enrichment via the Lookup Table Processor, Detection, Data Lake, Alerting, and API). Each diagram shows the subsystem's components and the data flows between them, with a Control Plane band running along the bottom.