Saved and Scheduled Queries
Save and optionally schedule queries
You can avoid repeatedly creating the same queries in Panther's Data Explorer and Search by saving your queries. You can also schedule queries created in Data Explorer, allowing you to run the results against a rule and alert on matches. This workflow includes the following features:
- Create a Scheduled Rule, a detection that's associated with a Scheduled Query. The data returned each time the query executes is run against the detection, alerting when matches are found.
By default, each Panther account is limited to 10 active Scheduled Queries. This limit is only precautionary and can be increased via a support request. There is no additional cost from Panther for raising this limit; however, you may incur extra charges from the database backend, depending on the volume of data processed.
A Saved Query is a preserved data query. Saving the queries your team runs frequently can help reduce duplicated work. You can create Saved Queries in the Panther Console (in either Search or Data Explorer), or using the CLI workflow.
You can also add variables to your Saved Queries, creating Templated Queries. Learn more about Templated Queries and Macros.
Console
CLI
You can save a query in Panther's Data Explorer or Search. Queries saved in both tools are considered Saved Queries. Follow these instructions for how to save a query in Data Explorer, and these instructions for how to save a query in Search.
Writing Saved Queries locally means creating metadata files that define SQL queries on your own machine. Upload the files to your Panther instance (typically via the Panther Analysis Tool) to control your Saved Query content.
We recommend managing your local detection files in a version control system like GitHub or GitLab.
It's best practice to create a fork of Panther's open-source analysis repository, but you can also create your own repo from scratch.
Each Saved Query consists of a YAML metadata file that defines the query and its attributes.
If you group your queries into folders, each folder name must contain `queries` in order for them to be found during upload (using either PAT or the bulk uploader in the Console). We recommend grouping queries into folders based on log/resource type. You can use the open source Panther Analysis repo as a reference.
In your Saved Query file (called, for example, `new-saved-query.yml`), write your Saved Query, following the template below:

```yaml
AnalysisType: saved_query
QueryName: MySavedQuery
Description: Example of a saved query for PAT
Query: |-
  Your query goes here
Tags:
  - Your tags
```
- Use the PAT upload command:

  ```shell
  panther_analysis_tool upload --path <path-to-your-query> --api-token <your-api-token> --api-host https://api.<your-panther-instance-name>.runpanther.net/public/graphql
  ```

- Replace the values:
  - `<your-panther-instance-name>`: The fairytale name of your instance (e.g., carrot-tuna.runpanther.net).
  - `<path-to-your-query>`: The path to your Saved Query on your own machine.
When your Saved Query is uploaded, each of the fields you would normally populate in the Panther Console will be auto-filled. See Saved Query Specification Reference for a complete list of required and optional fields.
A Scheduled Query is a Saved Query that has been configured to run on a schedule. Using the Panther Console, currently only Saved Queries created in Data Explorer can be scheduled—Saved Queries created in Search cannot be scheduled. You can alternatively create and upload Scheduled Queries using the CLI workflow. Scheduled Queries created in the CLI workflow can use session variables to create dynamic timeframes.
Remember that creating a Scheduled Query alone won't run the returned data against detections or send alerts. To do this, also create a Scheduled Rule, and associate it with your Scheduled Query.
Customer-configured Snowflake accounts: Your company will incur costs on your database backend every time a Scheduled Query runs. Please make sure that your queries can complete inside the specified timeout period. This does not apply to accounts that use Panther-managed Snowflake.
Data Explorer
CLI
To learn how to schedule your Saved Query created in Data Explorer, follow one of the below sets of instructions:
- If you haven't yet created a Saved Query in Data Explorer, follow the Save a query in Data Explorer instructions, paying attention to Is this a scheduled query? in Step 4.
- If you've already saved the query in Data Explorer, follow the Update a Saved Query in Data Explorer instructions, paying attention to Step 6.
Writing Scheduled Queries locally means creating metadata files that define SQL queries on your own machine. Upload the files to your Panther instance (typically via the Panther Analysis Tool) to control your Scheduled Query content.
We recommend managing your local detection files in a version control system like GitHub or GitLab.
It's best practice to create a fork of Panther's open-source analysis repository, but you can also create your own repo from scratch.
Each Scheduled Query consists of a YAML metadata file that defines the query, its attributes, and its schedule.
If you group your queries into folders, each folder name must contain `queries` in order for them to be found during upload (using either PAT or the bulk uploader in the Console). We recommend grouping queries into folders based on log/resource type. You can use the open source Panther Analysis repo as a reference.
In your Scheduled Query file (called, for example, `new-scheduled-query.yml`), write your Scheduled Query, following the template below:

```yaml
AnalysisType: scheduled_query
QueryName: ScheduledQuery_Example
Description: Example of a scheduled query for PAT
Enabled: true
Query: |-
  SELECT 1
Tags:
  - Your tags
Schedule:
  CronExpression: "0 0 29 2 *"
  RateMinutes: 0
  TimeoutMinutes: 2
```
- Use the PAT upload command:

  ```shell
  panther_analysis_tool upload --path <path-to-your-query> --api-token <your-api-token> --api-host https://api.<your-panther-instance-name>.runpanther.net/public/graphql
  ```

- Replace the values:
  - `<your-panther-instance-name>`: The fairytale name of your instance (e.g., carrot-tuna.runpanther.net).
  - `<path-to-your-query>`: The path to your Scheduled Query on your own machine.
When your Scheduled Query is uploaded, each of the fields you would normally populate in the Panther Console will be auto-filled. See Scheduled Query Specification Reference for a complete list of required and optional fields.
Panther's Scheduled Query crontab uses the standard crontab notation consisting of five fields: minutes, hours, day of month, month, day of week. Additionally, you will find a query timeout selector (with a maximum value currently set at 10 minutes). The expression is evaluated in UTC.
The interpreter uses a subset of the standard crontab notation:

```
┌───────── minute (0 - 59)
│ ┌──────── hour (0 - 23)
│ │ ┌────── day of month (1 - 31)
│ │ │ ┌──── month (1 - 12)
│ │ │ │ ┌── day of week (0 - 6 => Sunday - Saturday)
│ │ │ │ │
↓ ↓ ↓ ↓ ↓
* * * * *
```
You can specify a range of days with a dash (`1-5` in the Day of Week field is Monday through Friday) or list individual days with commas (for example, `0,1,4` in the Day of Week field will execute the command only on Sundays, Mondays, and Thursdays). Currently, we do not support using named days of the week or month names. Using the crontab allows you to be more specific in your schedule than the Period frequency option:

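For example, the range and list syntax above supports Schedule blocks like the following (illustrative values; times are UTC, and RateMinutes is 0 because CronExpression and RateMinutes are mutually exclusive):

```yaml
# Every day at midnight UTC
Schedule:
  CronExpression: "0 0 * * *"
  RateMinutes: 0
  TimeoutMinutes: 2
---
# 09:30 UTC on weekdays (1-5 is Monday through Friday)
Schedule:
  CronExpression: "30 9 * * 1-5"
  RateMinutes: 0
  TimeoutMinutes: 2
---
# Noon UTC on Sundays, Mondays, and Thursdays (0,1,4)
Schedule:
  CronExpression: "0 12 * * 0,1,4"
  RateMinutes: 0
  TimeoutMinutes: 2
```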
When creating a Scheduled Query in the CLI workflow (i.e., writing it locally, then uploading it using the Panther Analysis Tool), you can use session variables to create dynamic start and end times in your SQL query. Note that it is not possible to use session variables when creating a Scheduled Query in the Panther Console.
In the Scheduled Query YAML file, include the `Lookback` and `LookbackWindowSeconds` keys. To use session variables, `Lookback` must be set to `true`, and `LookbackWindowSeconds` must be given an integer value greater than `0` and less than `12096001` (two weeks, in seconds):

```yaml
Lookback: true
LookbackWindowSeconds: 3600
```
Then, in the SQL query, include the `$pStartTimeVar` and `$pEndTimeVar` session variables to define a window of time. For example:

```yaml
Query: |-
  SELECT * FROM panther_logs.public.aws_cloudtrail
  WHERE p_event_time between $pStartTimeVar and $pEndTimeVar
  LIMIT 10;
```
The value of these variables will be set according to the following formulas:

- `$pEndTimeVar` = `<time_of_scheduled_query>`
- `$pStartTimeVar` = `$pEndTimeVar` - `LookbackWindowSeconds`
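In other words, each run queries a fixed-size window that ends at the scheduled run time. A small Python sketch of the formulas above (the helper function is ours, for illustration only):

```python
from datetime import datetime, timedelta, timezone

# Two weeks in seconds: the exclusive upper bound for LookbackWindowSeconds.
MAX_LOOKBACK_SECONDS = 12096001

def lookback_window(run_time: datetime, lookback_window_seconds: int):
    """Compute the ($pStartTimeVar, $pEndTimeVar) pair that would be
    substituted into the query for a given scheduled run time."""
    if not 0 < lookback_window_seconds < MAX_LOOKBACK_SECONDS:
        raise ValueError("LookbackWindowSeconds must be > 0 and < 12096001")
    end = run_time                                             # $pEndTimeVar
    start = end - timedelta(seconds=lookback_window_seconds)   # $pStartTimeVar
    return start, end

# A query scheduled at 2024-01-01 00:00 UTC with LookbackWindowSeconds: 3600
start, end = lookback_window(datetime(2024, 1, 1, tzinfo=timezone.utc), 3600)
print(start.isoformat())  # 2023-12-31T23:00:00+00:00
```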
In full, the Scheduled Query YAML file would look like:

```yaml
AnalysisType: scheduled_query
QueryName: ScheduledQuery_Example
Description: Example of a scheduled query for PAT
Enabled: true
Lookback: true
LookbackWindowSeconds: 3600
Query: |-
  SELECT * FROM panther_logs.public.aws_cloudtrail
  WHERE p_event_time between $pStartTimeVar and $pEndTimeVar
  LIMIT 10;
Tags:
  - Your tags
Schedule:
  CronExpression: "0 0 29 2 *"
  RateMinutes: 0
  TimeoutMinutes: 2
```
You can delete Saved Queries individually or in bulk. Note that if a Saved Query is scheduled (i.e., it's a Scheduled Query), it must be unlinked from any Scheduled Rules it's associated with before it can be deleted.
1. Log in to the Panther Console, then navigate to Investigate > Saved Queries.
2. In the list of Saved Queries, find the query or queries you'd like to download or delete. Check the box to the left of the name of each query.
3. At the top of the page, click either Download or Delete.
   - If you clicked Download, a `saved_queries.zip` file will be downloaded.
   - If you clicked Delete, an Attention! modal will pop up. Click Confirm.
To deactivate a Scheduled Query without deleting it:

1. Log in to the Panther Console, then navigate to Investigate > Saved Queries.
2. Find the Scheduled Query you'd like to deactivate, and in the upper right corner of its tile, click the three dots icon.
3. In the dropdown menu, click Edit Query Metadata.
4. In the Update Query form, toggle the setting Is it active? to OFF to disable the query.
5. Click Update Query to save your changes.
To edit a Saved Query's name, tags, description, and default database (and, for Scheduled Queries, whether it's active, and the period or cron expression):
1. Log in to the Panther Console, then navigate to Investigate > Saved Queries.
2. Locate the query you'd like to edit, and click the three dots icon in the upper right corner of its tile.
3. In the dropdown menu, click Edit Query Metadata.
4. Make changes in the Update Query form as needed.
5. Click Update Query.
On the Saved Queries page, you can search for queries using:
- The search bar at the top of the queries list
- The date range selector in the upper right corner
- The Filters option in the upper right corner
- Filter by whether the query is scheduled, whether it's active, its type (Native SQL or Search), or by up to 100 tags.

Click on the name of the Saved Query to be taken directly to Data Explorer (for Native SQL queries) or Search (for Search queries) with the query populated.
In the Panther Data Lake settings page, you can optionally enable a setting that checks whether a Scheduled Query has a LIMIT clause specified. Use this option if you're concerned about a Scheduled Query unintentionally returning thousands of results, potentially resulting in alert delays, Denial of Service (DoS) for downstream systems, and general cleanup overhead from poorly tuned queries.
1. In the upper right corner of the Panther Console, click the gear icon. In the dropdown menu that appears, click General.
2. Click the Data Lake tab.
3. Scroll down to the Scheduled Queries header. Below the header, you will see the LIMIT clause toggle setting.
4. Toggle the LIMIT Clause for Scheduled Queries setting to ON to start enforcing LIMITs in Scheduled Queries.
When this field is set to ON, any new Scheduled Queries marked as active cannot be saved unless a LIMIT clause is specified in the query definition.

Existing Scheduled Queries without a LIMIT clause will appear with a warning message in the list of Saved Queries, and edits cannot be saved unless a LIMIT clause is included.

The setting only checks for the existence of a LIMIT clause anywhere in the Saved Query. It does not check specifically for outer LIMIT clauses.
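Panther does not publish the exact validation logic, but the behavior described above can be sketched as a simple keyword search (illustrative only, not Panther's actual implementation):

```python
import re

def has_limit_clause(sql: str) -> bool:
    """Return True if a LIMIT keyword appears anywhere in the query.
    Note: an inner LIMIT (e.g., in a subquery) also satisfies the check."""
    return re.search(r"\bLIMIT\b", sql, re.IGNORECASE) is not None

print(has_limit_clause("SELECT * FROM panther_logs.public.aws_cloudtrail"))           # False
print(has_limit_clause("SELECT * FROM panther_logs.public.aws_cloudtrail LIMIT 10"))  # True
print(has_limit_clause("SELECT * FROM (SELECT 1 LIMIT 5) sub"))                       # True
```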
You can export a .zip file of all of the detections and scheduled queries in your Panther Console:

1. On the left-hand side of the Panther Console, click Build > Bulk Uploader.
2. In the upper right side of the Bulk Uploader page, click Download all entities.
Required fields are in bold.
A complete list of saved query specification fields:
Field Name | Description | Expected Value |
---|---|---|
AnalysisType | Indicates whether this analysis is a Rule, Policy, Scheduled Query, Saved Query, or global. | saved_query |
QueryName | A friendly name to show in the UI. | String |
Tags | Tags used to categorize this rule. | List of strings |
Description | A brief description of the rule. | String |
Query | A query that can run on any backend. If this field is specified, you should not specify a SnowflakeQuery or an AthenaQuery. | String |
SnowflakeQuery | A query specifically for a Snowflake backend. | String |
AthenaQuery | A query specifically for Athena. | String |
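For illustration, a Saved Query file that should only run against a Snowflake backend might use SnowflakeQuery in place of the generic Query field (the query name and SQL below are hypothetical):

```yaml
AnalysisType: saved_query
QueryName: Snowflake Backend Example
Description: Uses SnowflakeQuery instead of the backend-agnostic Query field
SnowflakeQuery: |-
  SELECT * FROM panther_logs.public.aws_cloudtrail LIMIT 10
Tags:
  - Example
```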
Required fields are in bold.
A complete list of scheduled query specification fields:
Field Name | Description | Expected Value |
---|---|---|
AnalysisType | Indicates whether this analysis is a Rule, Policy, Scheduled Query, Saved Query, or global. | scheduled_query |
QueryName | A friendly name to show in the UI. | String |
Enabled | Whether this rule is enabled. | Boolean |
Tags | Tags used to categorize this rule. | List of strings |
Description | A brief description of the rule. | String |
Query | A query that can run on any backend. If this field is specified, you should not specify a SnowflakeQuery or an AthenaQuery. | String |
SnowflakeQuery | A query specifically for a Snowflake backend. | String |
AthenaQuery | A query specifically for Athena. | String |
Schedule | The schedule on which this query should run, expressed with a CronExpression or in RateMinutes (the two are mutually exclusive). TimeoutMinutes is required to release the query if it takes longer than expected. | Map |
Lookback | Whether to use session variables to create a dynamic query timeframe. | Boolean |
LookbackWindowSeconds | The size, in seconds, of the dynamic query window. | Integer greater than 0 and less than 12096001 (two weeks, in seconds) |