Alert Destinations

Destinations are integrations that receive alerts from rules and policies in Panther


Last updated 22 days ago


Overview

Destinations are integrations that receive alerts from rules and policies.

By default, alerts are routed based on severity and can dispatch to multiple destinations simultaneously. For example, a single alert might create a Jira ticket, create a PagerDuty incident, and send an email via Amazon Simple Notification Service (SNS).

You can override destinations on a per-rule or per-policy basis by setting the destination in the detection's Python function or its metadata. For a detailed explanation of how routing is determined, see Alert routing scenarios, below.

As of version 1.42, Panther sends alerts from a known static IP address. This allows customers to configure destinations to accept alerts only from this IP address. You can locate the address, listed as Gateway Public IP, in the Panther Console by navigating to Settings > General and scrolling to the bottom of the page.

How to configure destinations

Follow the pages below to learn how to set up specific alert destinations.

Supported destinations

Panther supports integrations for the following destinations:

  • Amazon SNS
  • Amazon SQS
  • Asana
  • Blink Ops
  • Custom Webhook
  • Discord
  • GitHub
  • Google Pub/Sub
  • Incident.io
  • Jira Cloud
  • Jira Data Center
  • Microsoft Teams
  • Mindflow
  • OpsGenie
  • PagerDuty
  • Rapid7
  • ServiceNow (via Custom Webhook)
  • Slack (Webhook)
  • Slack Bot
  • Splunk
  • Tines
  • Torq

Setting up destinations that are not natively supported

If you'd like to receive alerts at a destination that isn't natively supported by Panther, consider using the Custom Webhook or Panther API workflows:

  • Panther's Custom Webhook: Use Panther's Custom Webhook destination to reach additional third parties (with APIs) such as Tines, TheHive, or SOCless.

  • Panther API: If the destination you'd like to reach doesn't have a public API (such as an internal application), you can receive alerts by polling the Panther API for alerts on a schedule. See the available API operations for viewing and manipulating alerts in the REST API (Alerts) and the GraphQL API (Alerts & Errors).

However, because alerts are fetched from Panther on a schedule (every n minutes or hours, say) rather than being sent to the destination as soon as they are created, as with the Custom Webhook and supported integrations, the API polling method can introduce a delay.
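Because successive polls return overlapping windows of alerts, a polling receiver should de-duplicate by alert ID. Below is a minimal Python sketch of only the de-duplication step; the fetch call itself is omitted since it depends on which API workflow you use, and the alertId field name follows the alert payload schema on this page:

```python
def new_alerts(fetched, seen_ids):
    """Return only alerts not seen in earlier polls, updating seen_ids."""
    fresh = [a for a in fetched if a["alertId"] not in seen_ids]
    seen_ids.update(a["alertId"] for a in fresh)
    return fresh

# Two overlapping polls: the second returns one repeat and one new alert.
seen = set()
first = new_alerts([{"alertId": "a1"}, {"alertId": "a2"}], seen)
second = new_alerts([{"alertId": "a2"}, {"alertId": "a3"}], seen)
print([a["alertId"] for a in second])  # only "a3" is new
```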

Modifying or deleting destinations

  1. Log in to the Panther Console.

  2. In the left sidebar menu, click Configure > Alert Destinations.

  3. Click the triple dot icon on the right side of the destination.

    • In the dropdown menu that appears, click Delete to delete the destination.

    • Click Edit to modify the display name, the severity level, and other configurations.

Alert routing scenarios

The destination(s) an alert is routed to depends on the destination configuration on the detection, if any, or the configuration on the destination.

The routing scenarios are explained below, in their order of precedence from highest to lowest. Once one scenario is met, alert routing stops (i.e., the following scenarios are not invoked).

Scenario 1: Dynamically defined destination(s) on the detection

A Python detection can define a destinations() function that determines which destination(s) should be alerted. Destinations defined in this way take precedence over all other configurations.

If the list returned by the destinations() function is empty ([]) or it includes "SKIP", the alert will not be routed to any destination. If the returned list includes "SKIP" among other destination names/UUIDs, those other values will be ignored.

If there is no destinations() function defined in the detection's Python body, or if there is a destinations() function defined but it returns None, Panther will move on to Scenario 2, below, to find alert destinations.
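For example, a destinations() override might route certain events to a dedicated channel and suppress everything else with "SKIP" (a minimal sketch; the destination name and event field are illustrative):

```python
def destinations(event):
    # Route admin-console events to a dedicated destination;
    # returning ["SKIP"] suppresses alert delivery entirely.
    if event.get("eventType") == "admin.console.access":
        return ["slack-security-alerts"]
    return ["SKIP"]
```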

Scenario 2: Statically defined destination(s) on the detection

Panther moves on to this scenario if Scenario 1 is not met.

Static destination overrides can be defined either within a detection's YAML file or in the Console:

  • In the CLI workflow, you can statically define destinations in a detection's YAML file by setting the OutputIds field. Learn more about OutputIds in the rule specification reference for Python detections and for YAML detections.

  • In the Console, destinations are defined within a detection's Rule Settings, using the Destination Overrides field.

"Overrides" means this method of destination definition takes precedence over Scenario 3, below.
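A static override in the CLI workflow might look like the following sketch. Only OutputIds is the subject here; the other field names follow the panther-analysis rule specification, and the rule ID, filename, and destination names are illustrative:

```yaml
AnalysisType: rule
RuleID: Okta.User.LockedOut
Filename: okta_user_locked_out.py
Enabled: true
Severity: Medium
LogTypes:
  - Okta.SystemLog
# Static destination override: alerts from this rule are routed to these
# destinations (by display name or UUID), regardless of alert severity.
OutputIds:
  - dev-alert-destinations
  - tines-okta-user-lock-out
```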

Scenario 3: Destination configuration

If destinations are not defined on the detection (as described in Scenarios 1 and 2), the configurations on destinations themselves are invoked. In order for an alert to be routed to a given destination, the following conditions must be met:

  • The Severity Levels configured on the destination must include the severity level of the alert.

  • The Alert Types configured on the destination must include the type of the alert.

  • If the destination is configured to only accept alerts from certain Log Types, that list must include the log type associated with the alert.

Note that an alert's severity is either statically defined within the detection's Severity key, or dynamically defined by the detection's severity() function (for Python detections) or DynamicSeverities value (for YAML detections).

The Log Types filter on an alert destination applies only to rules (or "real-time rules"). Filtering by log type is not available for Scheduled Rules, policies, correlation rules, or System Errors.

If you want a certain destination to only receive alerts from one specific detection, you can create a destination with no severity levels or log types configured, then configure the detection to point to that destination (using destinations(), OutputIds, or the Destination Overrides field). See Panther's KB article "How do I route a single Panther alert to a specific alert destination?" for more information.
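The dynamic severity mentioned above can be sketched as a Python severity() override. This is a hypothetical example: the userIdentity field is borrowed from CloudTrail events for illustration:

```python
def severity(event):
    # Escalate alerts involving the root identity; otherwise downgrade.
    if event.get("userIdentity", {}).get("type") == "Root":
        return "CRITICAL"
    return "LOW"
```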

Destination example

The following example demonstrates how to receive an alert at a destination based on a user's multiple failed login attempts to Okta.

You have configured the following:

  • Destinations:

    • Slack, configured to receive an alert for rule matches.

    • Tines (set up via Custom Webhook), configured to receive an alert for rule matches.

  • Log source:

    • Okta. Your Panther instance is ingesting Okta logs.

  • Detection:

    • You created a rule called "Okta User Locked Out" to alert you when a user is locked out of Okta due to too many failed login attempts:

      from panther_base_helpers import deep_get
      def rule(event):
          return deep_get(event, 'outcome', 'reason') == 'LOCKED OUT'
      
      def title(event):
          return f"{deep_get(event, 'actor', 'alternateId')} is locked out."
      
      def destinations(event):
          if deep_get(event, 'actor', 'alternateId') == "username@example.com":
              return ['dev-alert-destinations', 'tines-okta-user-lock-out']
          return ['dev-general'] 
      
      def alert_context(event):
          return {
              "actor": deep_get(event, "actor", "displayName"),
              "id": deep_get(event, "actor", "id")
          }
    • The alert_context() function contains the username and the user's Okta ID value.

1. An event occurs

One of your users unsuccessfully attempts to log in to Okta multiple times. Eventually their account is locked out.

2. Panther ingests logs and detects an event that matches the rule you configured

As the Okta audit logs stream through your Panther instance, your "Okta User Locked Out" rule detects that the user is locked out.

3. The rule match triggers an alert

The detected rule match triggers an alert to your Slack destination and to your Tines destination.

Within a few minutes of the event occurring, the alert appears in the Slack channel you configured as a destination.

The alert is also sent to Tines via the Custom Webhook you've configured as a destination. Tines receives the values from the alert_context() function, and it is set up to automatically unlock the user's Okta account, then send a confirmation message in Slack.

Destination schema

Workflow automation

The alert payload generally takes the following form. For custom webhooks, SNS, SQS, or other workflow automation-heavy destinations, this is important for defining how you process the alert.

For native integrations such as Jira or Slack, this is processed automatically into a form that the destination can understand.

{
   "id": string,
   "createdAt": AWSDateTime,
   "severity": string,
   "type": string,
   "link": string,
   "title": string,
   "name": string,
   "alertId": string,
   "description": string,
   "runbook": string,
   "tags": [string],
   "version": string
}
The AWSDateTime scalar type represents a valid extended ISO 8601 DateTime string. It accepts datetime strings of the form YYYY-MM-DDThh:mm:ss.sssZ. The field after the seconds field is a nanoseconds field, which can accept between 1 and 9 digits. The seconds and nanoseconds fields are optional. The time zone offset is compulsory for this scalar: it must either be Z (representing the UTC time zone) or be in the format ±hh:mm:ss. The seconds field in the time zone offset will be considered valid even though it is not part of the ISO 8601 standard.

Example JSON payload:

{
  "id": "AllLogs.IPMonitoring",
  "createdAt": "2020-10-13T03:35:24Z",
  "severity": "INFO",
  "type": "RULE",
  "link": "https://runpanther.io/alerts/b90c19e66e160e194a5b3b94ec27fb7c",
  "title": "New Alert: Suspicious traffic detected from [123.123.123.123]",
  "name": "Monitor Suspicious IPs",
  "alertId": "b90c19e66e160e194a5b3b94ec27fb7c",
  "description": "This rule alerts on any activity outside of our IP address whitelist",
  "runbook": "",
  "tags": [
    "Network Monitoring",
    "Threat Intel"
  ],
  "version": "CJm9PiaXV0q8U0JhoFmE6L21ou7e5Ek0"
}
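As a minimal sketch of consuming this payload in a workflow automation receiver (the severity threshold and routing action are illustrative assumptions, not part of Panther's schema):

```python
import json

# The example alert payload from above, abridged to the fields we use.
payload = '''{
  "id": "AllLogs.IPMonitoring",
  "createdAt": "2020-10-13T03:35:24Z",
  "severity": "INFO",
  "type": "RULE",
  "alertId": "b90c19e66e160e194a5b3b94ec27fb7c",
  "title": "New Alert: Suspicious traffic detected from [123.123.123.123]",
  "tags": ["Network Monitoring", "Threat Intel"]
}'''

alert = json.loads(payload)

# Route on severity: page on-call for HIGH/CRITICAL, otherwise just log.
PAGE_SEVERITIES = {"HIGH", "CRITICAL"}
action = "page" if alert["severity"] in PAGE_SEVERITIES else "log"
print(action, alert["alertId"])
```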

Troubleshooting alert destinations

Visit the Panther Knowledge Base to view articles about alert destinations that answer frequently asked questions and help you resolve common errors and issues.

A Python detection can define a that determines which destination(s) should be alerted. Destinations defined in this way take precedence over all other configurations.

If there is no destinations() function defined in the detection's Python body, or if there is a destinations() function defined, but it returns None, Panther will move on to , below, to find alert destinations.

Panther moves on to this scenario if is not met.

Learn more about OutputIds in the rule specification reference , and .

"Overrides" means this method of destination definition takes precedence over , below.

If destinations are not defined on the detection (as described in and ), the configurations on destinations themselves are invoked. In order for an alert to be routed to a given destination, the following must conditions must be met:

Note that an alert's severity is either statically defined within the detection's Severity key, or dynamically defined by the detection's (for Python detections) or value (for YAML detections).

