Saved and Scheduled Searches

Save and optionally schedule searches

Overview

You can avoid repeatedly creating the same searches by saving them in Panther's Search and Data Explorer. You can also schedule searches created in Data Explorer, which allows you to run their results against a rule and alert on matches. This workflow includes the following features:

  • Saved Search: a preserved search expression.

  • Scheduled Search: a Saved Search that you can schedule to run on a designated interval.

  • Scheduled Rule: a detection that's associated with a Scheduled Search. The data returned each time the search executes is run against the detection, alerting when matches are found.

By default, each Panther account is limited to 10 active Scheduled Searches. This limit is precautionary and can be increased via a support request. Panther does not charge extra for raising this limit; however, you may incur additional charges from your database backend, depending on the volume of data processed.

In the CLI workflow and Panther API, Saved and Scheduled Searches are often referred to as queries.

How to create a Saved Search

A Saved Search is a preserved search expression. Saving the searches your team runs frequently can help reduce duplicated work. You can create Saved Searches in the Panther Console (in either Search or Data Explorer) or using the CLI workflow.

You can also add variables to your Saved Searches, creating Templated Queries. Learn more on Templated Queries and Macros.

How to create a Saved Search in the Panther Console

You can save a search in Panther's Data Explorer or Search. Searches saved in either tool are considered Saved Searches. Follow these instructions for how to save a search in Search, and these instructions for how to save a search in Data Explorer.

How to create a Saved Search in the CLI workflow

Writing a Saved Search locally means creating a file that defines a SQL query on your own machine, then uploading it to your Panther instance (typically via the Panther Analysis Tool (PAT)).

We recommend managing your local detection files in a version control system like GitHub or GitLab.

It's best practice to create a fork of Panther's open-source analysis repository (panther-analysis), but you can also create your own repo from scratch.
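
For example, if you fork panther-analysis on GitHub, cloning your fork locally might look like the following (replace your-org with your GitHub organization or username):

git clone https://github.com/your-org/panther-analysis.git
cd panther-analysis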

File setup

Each Saved Search consists of:

  • A YAML file (.yml or .json extension) containing metadata attributes of the Saved Search.

Folder setup

If you group your queries into folders, each folder name must contain queries (that is, the word "queries" must appear in the folder name) in order for them to be found during upload (using either PAT or the bulk uploader in the Console).

We recommend grouping searches into folders based on log/resource type. You can use the open source Panther Analysis repo as a reference.
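
As an illustration only, a layout like the following (folder and file names are hypothetical) satisfies the naming requirement, because every folder name contains queries:

queries/
  aws_cloudtrail_queries/
    new-saved-search.yml
  okta_queries/
    okta-admin-access.yml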

Write the Saved Search

In your Saved Search file (called, for example, new-saved-search.yml), write your Saved Search, following the template below.

See the full list of available fields in the Saved Search specification reference.

AnalysisType: saved_query
QueryName: MySavedQuery
Description: Example of a saved query for PAT
Query: |-
    Your query goes here
Tags:
  - Your tags
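
As a sketch only, a filled-in Saved Search file could look like the following; the table and field names in the SQL are illustrative and should be replaced with ones from your own data lake:

AnalysisType: saved_query
QueryName: Example Failed Console Logins
Description: Example Saved Search that surfaces failed AWS console logins
Query: |-
    SELECT p_event_time, sourceIPAddress, eventName
    FROM panther_logs.public.aws_cloudtrail
    WHERE eventName = 'ConsoleLogin'
      AND errorMessage IS NOT NULL
    LIMIT 100
Tags:
  - AWS
  - Example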

Upload the content with PAT

  • Use the PAT upload command: panther_analysis_tool upload --path <path-to-your-search> --api-token <your-api-token> --api-host https://api.<your-panther-instance-name>.runpanther.net/public/graphql

    • Replace the values:

      • <your-panther-instance-name> : The fairytale name of your instance (e.g. carrot-tuna.runpanther.net).

      • <path-to-your-search> : The path to your Saved Search on your own machine.

      • <your-api-token> : The API key you generated.
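
For example, with a (hypothetical) fairytale name of carrot-tuna and a search file saved at ./queries/new-saved-search.yml, the command would be:

panther_analysis_tool upload --path ./queries/new-saved-search.yml --api-token <your-api-token> --api-host https://api.carrot-tuna.runpanther.net/public/graphql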

When your Saved Search is uploaded, each of the fields you would normally populate in the Panther Console will be auto-filled. See the Saved Search specification reference for a complete list of required and optional fields.

How to create a Saved Search in the Panther API

See the POST operation on Queries in the REST API documentation.

How to create a Scheduled Search

A Scheduled Search is a Saved Search that has been configured to run on a schedule. In the Panther Console, currently only Saved Searches created in Data Explorer can be scheduled; Saved Searches created in Search (whether written in SQL or PantherFlow) cannot be scheduled. You can alternatively create and upload Scheduled Searches using the CLI workflow.

Note that creating a Scheduled Search alone won't run the returned data against detections or send alerts. To do this, you must also create a Scheduled Rule and associate it with your Scheduled Search.

Customer-configured Snowflake accounts: Your company will incur costs on your database backend every time a Scheduled Search runs. Please make sure that your searches can complete inside the specified timeout period. This does not apply to accounts that use Panther-managed Snowflake.

How to create a Scheduled Search in Data Explorer

To learn how to schedule your Saved Search created in Data Explorer, follow one of the below sets of instructions:

  • If you haven't yet created a Saved Search in Data Explorer, follow the Save a search in Data Explorer instructions, paying attention to Is this a Scheduled Search? in Step 4.

  • If you've already saved the search in Data Explorer, follow the Update a Saved Search in Data Explorer instructions, paying attention to Step 6.

How to create a Scheduled Search in the CLI workflow

Writing a Scheduled Search locally means creating a file that defines a SQL query on your own machine, then uploading it to your Panther instance (typically via the Panther Analysis Tool).

We recommend managing your local detection files in a version control system like GitHub or GitLab.

It's best practice to create a fork of Panther's open-source analysis repository, but you can also create your own repo from scratch.

File setup

Each Scheduled Search consists of:

  • A YAML file (.yml or .json extension) containing metadata attributes of the Scheduled Search.

View an example Scheduled Search YAML file in the panther-analysis repository.

Folder setup

If you group your searches into folders, each folder name must contain queries (that is, the word "queries" must appear in the folder name) in order for them to be found during upload (using either PAT or the bulk uploader in the Console).

We recommend grouping searches into folders based on log/resource type. You can use the open source Panther Analysis repo as a reference.

Write the Scheduled Query

In your Scheduled Search file (called, for example, new-scheduled-search.yml), write your Scheduled Search, following the template below.

See the full list of available fields in the Scheduled Search specification reference.

AnalysisType: scheduled_query
QueryName: ScheduledQuery_Example
Description: Example of a scheduled query for PAT
Enabled: true
Query: |-
    Select 1
Tags:
  - Your tags   
Schedule:
  CronExpression: "0 0 29 2 *"
  RateMinutes: 0
  TimeoutMinutes: 2
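
CronExpression and RateMinutes are mutually exclusive, so set only one of them (see the Schedule field in the Scheduled Search specification reference below). As a sketch, the same Schedule block expressed as a fixed interval rather than a cron expression could look like this:

Schedule:
  RateMinutes: 30
  TimeoutMinutes: 2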

Upload the content with PAT

  • Use the PAT upload command: panther_analysis_tool upload --path <path-to-your-search> --api-token <your-api-token> --api-host https://api.<your-panther-instance-name>.runpanther.net/public/graphql

    • Replace the values:

      • <your-panther-instance-name> : The fairytale name of your instance (e.g. carrot-tuna.runpanther.net).

      • <path-to-your-search> : The path to your Scheduled Search on your own machine.

      • <your-api-token> : The API key you generated.

When your Scheduled Search is uploaded, each of the fields you would normally populate in the Panther Console will be auto-filled. See the Scheduled Search specification reference for a complete list of required and optional fields.

How to create a Scheduled Search in the Panther API

See the POST operation on Queries in the REST API documentation.

How to use the Scheduled Search crontab

Panther's Scheduled Search crontab uses standard crontab notation consisting of five fields: minute, hour, day of month, month, and day of week. Additionally, you will find a search timeout selector (with a maximum value currently set at 10 minutes). The expression runs in UTC.

Scheduled Searches that result in a timeout will generate a System Error to identify that the Scheduled Search was unsuccessful.

The interpreter uses a subset of the standard crontab notation:

┌───────── minute (0 - 59)
│ ┌──────── hour (0 - 23)
│ │ ┌────── day of month (1 - 31)
│ │ │ ┌──── month (1 - 12)
│ │ │ │ ┌── day of week (0 - 6 => Sunday - Saturday)
│ │ │ │ │               
↓ ↓ ↓ ↓ ↓
* * * * *

You can specify ranges of days with dashes (1-5 is Monday through Friday) or list individual days with commas; for example, 0,1,4 in the day-of-week field will run the search only on Sundays, Mondays, and Thursdays. Named days of the week and month names are not currently supported.
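
For illustration, a few expressions in this notation and how they would be interpreted (all times are UTC; the schedules themselves are arbitrary examples):

0 * * * *        every hour, on the hour
30 9 * * 1-5     at 09:30, Monday through Friday
0 8 * * 0,3      at 08:00 on Sundays and Wednesdays
0 0 1 * *        at 00:00 on the first day of every month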

Using the crontab allows you to be more specific in your schedule than the Period frequency option.

Using Saved and Scheduled Searches

How to delete or download a Saved Search

You can delete Saved Searches individually or in bulk. Note that if a Saved Search is scheduled (i.e., it's a Scheduled Search), it must be unlinked from any Scheduled Rules it's associated with before it can be deleted.

  1. In the left-hand navigation bar of your Panther Console, click Investigate > Saved Searches.

  2. In the list of Saved Searches, find the search or searches you'd like to download or delete. Check the box to the left of the name of each search.

  3. At the top of the page, click either Download or Delete.

    • If you clicked Download, a saved_queries.zip file will be downloaded.

    • If you clicked Delete, an Attention! modal will pop up. Click Confirm.

How to deactivate a Scheduled Search

  1. In the left-hand navigation bar of your Panther Console, click Investigate > Saved Searches.

  2. Find the Scheduled Search you'd like to deactivate, and in the upper right corner of its tile, click the three dots icon.

  3. In the dropdown menu, click Edit Search Metadata.

  4. In the Update Search form, toggle the setting Is it active? to OFF to disable the query.

  5. Click Update Query to save your changes.

Update a Saved Search's metadata

To edit a Saved Search's name, tags, description, and default database (and, for Scheduled Searches, whether it's active, and the period or cron expression):

  1. In the left-hand navigation bar of your Panther Console, click Investigate > Saved Searches.

  2. Locate the query you'd like to edit, and click the three dots icon in the upper right corner of its tile.

  3. In the dropdown menu, click Edit Search Metadata.

  4. Make changes in the Update Search form as needed.

  5. Click Update Search.

Search for Saved Searches

On the Saved Searches page, you can search for queries using:

  • The search bar at the top of the queries list

  • The date range selector in the upper right corner

  • The Filters option in the upper right corner

    • Filter by whether the query is scheduled, whether it's active, its type (Native SQL, Search, or PantherFlow Search), or by up to 100 tags.

Click on the name of the Saved Search to be taken directly to Data Explorer (for Native SQL queries) or Search (for Search and PantherFlow Search searches) with the query populated.

Use LIMITs in Scheduled Searches

On the Panther Data Lake settings page, you can optionally enable a setting that checks whether a Scheduled Search has a LIMIT clause specified. Use this option if you're concerned about a Scheduled Search unintentionally returning thousands of results, potentially causing alert delays, Denial of Service (DoS) for downstream systems, and general cleanup overhead from poorly tuned queries.

  1. In the upper right corner of the Panther Console, click the gear icon. In the dropdown menu that appears, click General.

  2. Click the Data Lake tab.

  3. Scroll down to the Scheduled Queries header. Below the header, you will see the LIMIT clause toggle setting.

  4. Toggle the LIMIT Clause for Scheduled Queries setting to ON to start enforcing LIMITs in Scheduled Queries.

When this field is set to ON, any new Scheduled Searches marked as active cannot be saved unless a LIMIT clause is specified in the query definition.

Existing Scheduled Searches without a LIMIT clause will appear with a warning message in the list of Saved Searches, and edits cannot be saved unless a LIMIT clause is included.

The setting only checks for the existence of a LIMIT clause anywhere in the Saved Search. It does not check specifically for outer LIMIT clauses.
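
As an illustration (the table and field names are examples only), a Scheduled Search that satisfies the check could end with an outer LIMIT clause:

SELECT sourceIPAddress, eventName, COUNT(*) AS event_count
FROM panther_logs.public.aws_cloudtrail
WHERE p_event_time >= DATEADD(hour, -24, CURRENT_TIMESTAMP())
GROUP BY sourceIPAddress, eventName
ORDER BY event_count DESC
LIMIT 1000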

Exporting Scheduled Searches from your Panther Console

You can export a .zip file of all of the detections and Scheduled Searches in your Panther Console:

  1. In the left-hand navigation bar of your Panther Console, click Detections.

  2. In the upper-right corner, click Upload.

  3. In the Bulk Uploader modal, click Download all entities.

Saved Search specification reference

Required fields are in bold.

A complete list of Saved Search specification fields:

  • AnalysisType: Indicates whether this analysis is a Rule, Policy, Scheduled Search, Saved Search, or global. Expected value: saved_query

  • QueryName: A friendly name to show in the UI. Expected value: String

  • Tags: Tags used to categorize this query. Expected value: List of strings

  • Description: A brief description of the query. Expected value: String

  • Query: A data query. Must be written in SQL (i.e., cannot be PantherFlow). Expected value: String

Scheduled Search specification reference

Required fields are in bold.

A complete list of Scheduled Search specification fields:

  • AnalysisType: Indicates whether this analysis is a Rule, Policy, Scheduled Search, Saved Search, or global. Expected value: scheduled_query

  • QueryName: A friendly name to show in the UI. Expected value: String

  • Enabled: Whether this query is enabled. Expected value: Boolean

  • Tags: Tags used to categorize this query. Expected value: List of strings

  • Description: A brief description of the query. Expected value: String

  • Query: A data query. Expected value: String

  • Schedule: The schedule on which this query should run, expressed with a CronExpression or in RateMinutes. TimeoutMinutes is required to release the query if it takes longer than expected. Note that CronExpression and RateMinutes are mutually exclusive. Expected value: Map
