Saved and Scheduled Searches
Save and optionally schedule searches
You can avoid repeatedly creating the same searches in Panther's Data Explorer and Search by saving your searches. You can also schedule searches created in Data Explorer, which lets you run the results against a rule and alert on matches. This workflow includes the following features:
- Create a Scheduled Rule, a detection that's associated with a Scheduled Search. The data returned each time the search executes is run against the detection, alerting when matches are found.
By default, each Panther account is limited to 10 active Scheduled Searches. This limit is only precautionary and can be increased via a support request. There is no additional cost from Panther for raising this limit; however, you may incur extra charges from the database backend, depending on the volume of data processed.
A Saved Search is a preserved search expression. Saving the searches your team runs frequently can help reduce duplicated work. You can create Saved Searches in the Panther Console (in either Search or Data Explorer), or using the CLI workflow.
You can also add variables to your Saved Searches, creating Templated Queries. Learn more in Templated Queries and Macros.
Console
CLI
You can save a search in Panther's Data Explorer or Search. Searches saved in both tools are considered Saved Searches. Follow these instructions for how to save a search in Data Explorer, and these instructions for how to save a search in Search.
Writing a Saved Search locally means creating a file that defines a SQL query on your own machine, then uploading it to your Panther instance (typically via the Panther Analysis Tool).
We recommend managing your local detection files in a version control system like GitHub or GitLab.
It's best practice to create a fork of Panther's open-source analysis repository, but you can also create your own repo from scratch.
Each Saved Search consists of a YAML specification file containing the search's metadata attributes and the SQL query itself (see the template below).
If you group your queries into folders, each folder name must contain queries in order for them to be found during upload (using either PAT or the bulk uploader in the Console). We recommend grouping searches into folders based on log/resource type. You can use the open source Panther Analysis repo as a reference.
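For example, a local layout like the following (the repository and folder names here are only illustrative) would be picked up during upload:

```
panther-analysis/
└── queries/
    ├── aws_queries/
    │   └── cloudtrail_admin_activity.yml
    └── okta_queries/
        └── okta_admin_activity.yml
```

Both grouping folders contain the string queries in their names, so the searches inside them are discovered by PAT and the bulk uploader.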
In your Saved Search file (called, for example, new-saved-search.yml), write your Saved Search, following the template below:

```
AnalysisType: saved_query
QueryName: MySavedQuery
Description: Example of a saved query for PAT
Query: |-
  Your query goes here
Tags:
  - Your tags
```
- Use the PAT upload command:
panther_analysis_tool upload --path <path-to-your-search> --api-token <your-api-token> --api-host https://api.<your-panther-instance-name>.runpanther.net/public/graphql
- Replace the values:
  - <your-panther-instance-name>: The fairytale name of your instance (e.g. carrot-tuna.runpanther.net).
  - <path-to-your-search>: The path to your Saved Search on your own machine.
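For instance, with the carrot-tuna example instance above and a search file saved at queries/new-saved-search.yml (both values are illustrative; substitute your own), the filled-in command would look like:

```
panther_analysis_tool upload \
  --path queries/new-saved-search.yml \
  --api-token <your-api-token> \
  --api-host https://api.carrot-tuna.runpanther.net/public/graphql
```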
When your Saved Search is uploaded, each of the fields you would normally populate in the Panther Console will be auto-filled. See Saved Search Specification Reference for a complete list of required and optional fields.
A Scheduled Search is a Saved Search that has been configured to run on a schedule. Using the Panther Console, currently only Saved Searches created in Data Explorer can be scheduled—Saved Searches created in Search cannot be scheduled. You can alternatively create and upload Scheduled Searches using the CLI workflow. Scheduled Searches created in the CLI workflow can use session variables to create dynamic timeframes.
Note that creating a Scheduled Search alone won't run the returned data against detections or send alerts. To do this, you must also create a Scheduled Rule, and associate it with your Scheduled Search.
Customer-configured Snowflake accounts: Your company will incur costs on your database backend every time a Scheduled Search runs. Please make sure that your searches can complete inside the specified timeout period. This does not apply to accounts that use Panther-managed Snowflake.
Data Explorer
CLI
To learn how to schedule your Saved Search created in Data Explorer, follow one of the below sets of instructions:
- If you haven't yet created a Saved Search in Data Explorer, follow the Save a search in Data Explorer instructions, paying attention to Is this a Scheduled Search? in Step 4.
- If you've already saved the search in Data Explorer, follow the Update a Saved Search in Data Explorer instructions, paying attention to Step 6.
Writing a Scheduled Search locally means creating a file that defines a SQL query on your own machine, then uploading it to your Panther instance (typically via the Panther Analysis Tool).
We recommend managing your local detection files in a version control system like GitHub or GitLab.
It's best practice to create a fork of Panther's open-source analysis repository, but you can also create your own repo from scratch.
Each Scheduled Search consists of a YAML specification file containing the search's metadata attributes, its schedule, and the SQL query itself (see the template below).
If you group your searches into folders, each folder name must contain queries in order for them to be found during upload (using either PAT or the bulk uploader in the Console). We recommend grouping searches into folders based on log/resource type. You can use the open source Panther Analysis repo as a reference.
In your Scheduled Search file (called, for example, new-scheduled-search.yml), write your Scheduled Search, following the template below:

```
AnalysisType: scheduled_query
QueryName: ScheduledQuery_Example
Description: Example of a scheduled query for PAT
Enabled: true
Query: |-
  Select 1
Tags:
  - Your tags
Schedule:
  CronExpression: "0 0 29 2 *"
  RateMinutes: 0
  TimeoutMinutes: 2
```
- Use the PAT upload command:
panther_analysis_tool upload --path <path-to-your-search> --api-token <your-api-token> --api-host https://api.<your-panther-instance-name>.runpanther.net/public/graphql
- Replace the values:
  - <your-panther-instance-name>: The fairytale name of your instance (e.g. carrot-tuna.runpanther.net).
  - <path-to-your-search>: The path to your Scheduled Search on your own machine.
When your Scheduled Search is uploaded, each of the fields you would normally populate in the Panther Console will be auto-filled. See Scheduled Search Specification Reference for a complete list of required and optional fields.
Panther's Scheduled Search crontab uses the standard crontab notation consisting of five fields: minute, hour, day of month, month, day of week. Additionally, you will find a search timeout selector (with a maximum value currently set at 10 minutes). The expression is evaluated in UTC.
The interpreter uses a subset of the standard crontab notation:

```
┌───────── minute (0 - 59)
│ ┌──────── hour (0 - 23)
│ │ ┌────── day of month (1 - 31)
│ │ │ ┌──── month (1 - 12)
│ │ │ │ ┌── day of week (0 - 6 => Sunday - Saturday)
│ │ │ │ │
↓ ↓ ↓ ↓ ↓
* * * * *
```
If you want to specify particular days, you can use a dash for a range (1-5 is Monday through Friday) or commas for a list; for example, 0,1,4 in the Day of Week field will run the search only on Sundays, Mondays, and Thursdays. Currently, we do not support using named days of the week or month names. Using the crontab allows you to be more specific in your schedule than the Period frequency option.

When creating a Scheduled Search in the CLI workflow (i.e., writing a SQL expression locally, then uploading it using the Panther Analysis Tool), you can use session variables to create dynamic start and end times in your SQL query. Note that it is not possible to use session variables when creating a Scheduled Search in the Panther Console.
In the Scheduled Search YAML file, include the Lookback and LookbackWindowSeconds keys. To use session variables, Lookback must be set to true, and LookbackWindowSeconds must be given an integer value that is greater than 0 and less than 12096001 (two weeks, in seconds):

```
Lookback: true
LookbackWindowSeconds: 3600
```
Then, in the SQL query, include the $pStartTimeVar and $pEndTimeVar session variables to define a window of time. For example:

```
Query: |-
  SELECT * FROM panther_logs.public.aws_cloudtrail
  WHERE p_event_time between $pStartTimeVar and $pEndTimeVar
  LIMIT 10;
```
The value of these variables will be set according to the following formulas:

$pStartTimeVar = $pEndTimeVar - LookbackWindowSeconds
$pEndTimeVar = <time_of_scheduled_query>
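For example, with LookbackWindowSeconds: 3600, a search that executes at 12:00 UTC has $pEndTimeVar set to 12:00 UTC and $pStartTimeVar set to 11:00 UTC, so the query scans the preceding hour of data.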
In full, the Scheduled Search YAML file would look like:
```
AnalysisType: scheduled_query
QueryName: ScheduledQuery_Example
Description: Example of a scheduled query for PAT
Enabled: true
Lookback: true
LookbackWindowSeconds: 3600
Query: |-
  SELECT * FROM panther_logs.public.aws_cloudtrail
  WHERE p_event_time between $pStartTimeVar and $pEndTimeVar
  LIMIT 10;
Tags:
  - Your tags
Schedule:
  CronExpression: "0 0 29 2 *"
  RateMinutes: 0
  TimeoutMinutes: 2
```
You can download or delete Saved Searches individually or in bulk. Note that if a Saved Search is scheduled (i.e., it's a Scheduled Search), it must be unlinked from any Scheduled Rules it's associated with before it can be deleted.
- 1.In the left-hand navigation bar of your Panther Console, click Investigate > Saved Searches.
- 2.In the list of Saved Searches, find the search or searches you'd like to download or delete. Check the box to the left of the name of each search.
- 3.At the top of the page, click either Download or Delete.
- If you clicked Download, a saved_queries.zip file will be downloaded.
- If you clicked Delete, an Attention! modal will pop up. Click Confirm.
To deactivate a Scheduled Search:
- 1.In the left-hand navigation bar of your Panther Console, click Investigate > Saved Searches.
- 2.Find the Scheduled Search you'd like to deactivate, and in the upper right corner of its tile, click the three dots icon.
- 3.In the dropdown menu, click Edit Search Metadata.
- 4.In the Update Search form, toggle the setting Is it active? to OFF to disable the query.
- 5.Click Update Query to save your changes.
To edit a Saved Search's name, tags, description, and default database (and, for Scheduled Searches, whether it's active, and the period or cron expression):
- 1.In the left-hand navigation bar of your Panther Console, click Investigate > Saved Searches.
- 2.Locate the query you'd like to edit, and click the three dots icon in the upper right corner of its tile.
- 3.In the dropdown menu, click Edit Search Metadata.
- 4.Make changes in the Update Search form as needed.
- 5.Click Update Search.
On the Saved Searches page, you can search for queries using:
- The search bar at the top of the queries list
- The date range selector in the upper right corner
- The Filters option in the upper right corner, which lets you filter by whether the query is scheduled, whether it's active, its type (Native SQL or Search), or by up to 100 tags.

Click on the name of the Saved Search to be taken directly to Data Explorer (for Native SQL queries) or Search (for Search searches) with the query populated.
In the Panther Data Lake settings page, you can optionally enable a setting that checks whether a Scheduled Search has a LIMIT clause specified. Use this option if you're concerned about a Scheduled Search unintentionally returning thousands of results, potentially resulting in alert delays, Denial of Service (DoS) for downstream systems, and general cleanup overhead from poorly tuned queries.

Note: Scheduled Searches that result in a timeout will generate a System Error to identify that the Scheduled Search was unsuccessful.

- 1.In the upper right corner of the Panther Console, click the gear icon. In the dropdown menu that appears, click General.
- 2.Click the Data Lake tab.
- 3.Scroll down to the Scheduled Queries header. Below the header, you will see the LIMIT clause toggle setting:
- 4.Toggle the LIMIT Clause for Scheduled Queries setting to ON to start enforcing LIMITs in Scheduled Queries.
When this field is set to ON, any new Scheduled Searches marked as active cannot be saved unless a LIMIT clause is specified in the query definition.

Existing Scheduled Searches without a LIMIT clause will appear with a warning message in the list of Saved Searches, and edits cannot be saved unless a LIMIT clause is included.

The setting only checks for the existence of a LIMIT clause anywhere in the Saved Search. It does not check specifically for outer LIMIT clauses.
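As a sketch of a query body that satisfies the check (the table and row cap below are illustrative, and the session variables assume Lookback is enabled as described above):

```
SELECT *
FROM panther_logs.public.aws_cloudtrail
WHERE p_event_time between $pStartTimeVar and $pEndTimeVar
LIMIT 100;
```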
You can export a .zip file of all of the detections and Scheduled Searches in your Panther Console:
- 1.In the left-hand navigation bar of your Panther Console, click Build > Bulk Uploader.
- 2.In the upper right side of the Bulk Uploader page, click Download all entities.
Required fields are in bold.
A complete list of Saved Search specification fields:
Field Name | Description | Expected Value
---|---|---
AnalysisType | Indicates whether this analysis is a Rule, Policy, Scheduled Search, Saved Search, or global. | saved_query
QueryName | A friendly name to show in the UI. | String
Tags | Tags used to categorize this query. | List of strings
Description | A brief description of the query. | String
Query | A query that can run on any backend. If this field is specified, you should not specify a SnowflakeQuery or an AthenaQuery. | String
SnowflakeQuery | A query specifically for a Snowflake backend. | String
AthenaQuery | A query specifically for Athena. | String
Required fields are in bold.
A complete list of Scheduled Search specification fields:
Field Name | Description | Expected Value
---|---|---
AnalysisType | Indicates whether this analysis is a Rule, Policy, Scheduled Search, Saved Search, or global. | scheduled_query
QueryName | A friendly name to show in the UI. | String
Enabled | Whether this query is enabled. | Boolean
Tags | Tags used to categorize this query. | List of strings
Description | A brief description of the query. | String
Query | A query that can run on any backend. If this field is specified, you should not specify a SnowflakeQuery or an AthenaQuery. | String
SnowflakeQuery | A query specifically for a Snowflake backend. | String
AthenaQuery | A query specifically for Athena. | String
Schedule | The schedule on which this query should run, expressed with a CronExpression or in RateMinutes. TimeoutMinutes is required to release the query if it takes longer than expected. Note that cron and rate minutes are mutually exclusive. Example: CronExpression: "0 0 29 2 *", RateMinutes: 0, TimeoutMinutes: 2 | Map
Lookback | Whether to use the $pStartTimeVar and $pEndTimeVar session variables for a dynamic timeframe (must be true to use them). | Boolean
LookbackWindowSeconds | The window of time, in seconds, used to derive $pStartTimeVar ($pEndTimeVar - LookbackWindowSeconds). | Integer that is greater than 0 and less than 12096001 (2 weeks, in seconds)
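For reference, a rate-based schedule uses RateMinutes in place of a cron expression (a sketch; the values are illustrative, and this assumes CronExpression is simply omitted when a rate is used):

```
Schedule:
  RateMinutes: 30
  TimeoutMinutes: 5
```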