# Data Explorer

## Overview

The Data Explorer in your Panther Console is where you can view your normalized Panther data and perform SQL queries (with autocompletion).

In Data Explorer, you can:

* Browse collected log data, rule matches, and search standard fields across all data
* [Save, and optionally schedule, your searches](#how-to-manage-saved-searches-in-data-explorer)
  * [Scheduled Searches](https://docs.panther.com/~/changes/Cd1BxbxeaFl8dlynhNpt/search/scheduled-searches) can run through the rule engine
* Create [Templated Searches and macros](https://docs.panther.com/~/changes/Cd1BxbxeaFl8dlynhNpt/search/scheduled-searches/templated-searches)
* Share results with your team through a shareable link, or download results in a CSV
* Select entire rows of JSON to use in the rule engine as unit tests
* Preview table data, filter results, and summarize columns without SQL
* Limit access to the Data Explorer through [Role-Based Access Control](https://docs.panther.com/~/changes/Cd1BxbxeaFl8dlynhNpt/system-configuration/rbac) (RBAC)

{% hint style="warning" %}
Data Explorer results that include `bigint` data exceeding 32-bit precision will be shown rounded due to browser limitations rendering JSON. If you'd like these values to be represented without precision loss, cast them to strings in the SQL command. Actual data stored in the data lake is not affected.
{% endhint %}
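
For example, a large integer column can be cast to a string directly in the query (the table and column names below are hypothetical):

```sql
-- Cast a bigint value to a string so the browser renders it without rounding.
select event_id::string as event_id
from panther_logs.public.example_table -- hypothetical table
limit 100
```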

## Query syntax in Data Explorer

Queries executed in Data Explorer should use the Snowflake SQL syntax described in Snowflake's [SQL Command Reference](https://docs.snowflake.com/en/sql-reference-commands) documentation.

You can also learn about:

* Best practices for searching in Data Explorer, in [Searching effectively in Data Explorer](#searching-effectively-in-data-explorer)
* How to [reference nested fields in Data Explorer](#referencing-nested-fields-in-data-explorer)
* How to use [Data Explorer macros](#how-to-use-data-explorer-macros)

### Referencing nested fields in Data Explorer

When traversing a JSON object, if a key name does not conform to [Snowflake SQL identifier rules](https://docs.snowflake.com/en/sql-reference/identifiers-syntax)—for example if it contains periods or spaces—enclose the value in double quotes.

For example, if you want to run a query for accessing the field `context.ip_address` from the [IPInfo Privacy Enrichment Provider](https://docs.panther.com/~/changes/Cd1BxbxeaFl8dlynhNpt/enrichment/ipinfo), you would write it as `p_enrichment:ipinfo_privacy:"context.ip_address"`.

Learn more in Snowflake's [Querying Semi-structured Data](https://docs.snowflake.com/en/user-guide/querying-semistructured) documentation.
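
As a sketch, a query filtering on that enrichment field might look like the following (the CloudTrail table is used only for illustration):

```sql
-- Double-quote key names containing periods when traversing JSON.
select p_event_time,
       p_enrichment:ipinfo_privacy:"context.ip_address" as enriched_ip
from panther_logs.public.aws_cloudtrail
where p_enrichment:ipinfo_privacy:"context.ip_address" is not null
limit 100
```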

### Searching effectively in Data Explorer

To ensure your results return as quickly as possible, it's recommended to follow these best practices:

* **Use a `LIMIT` clause**
  * Use the `LIMIT` clause to specify the number of records your query will return. Limiting queries can return results more quickly. Panther limits the size of results to 100MB by default.
* **Use a time range filter**
  * Snowflake groups files in S3 in [micropartitions](https://docs.snowflake.com/en/user-guide/tables-clustering-micropartitions.html). When you filter by a time range (such as `p_event_time` or `p_occurs_since()`) in your query, Snowflake will only need to access specific partitions, which returns results more quickly.
  * For more information on macros, see the section below: [How to use Data Explorer macros](#how-to-use-data-explorer-macros).
* **Use p\_any fields**
  * During log ingestion, Panther extracts common security indicators into the `p_any` fields. The `p_any` fields are stored in optimized columns. These fields standardize names for attributes across all data sources, enabling fast data correlation.
  * Learn more on [Standard Fields](https://docs.panther.com/~/changes/Cd1BxbxeaFl8dlynhNpt/search/panther-fields).
* **Query specific fields**
  * Using `SELECT * FROM ...` pulls all columns, which can slow down queries. When possible, query only the fields you need to investigate. For example, `SELECT user_name, event_name FROM ...`.
* **Summarize**
  * Summaries are faster to run than querying full records. This is especially helpful when investigating logs over a large period of time, or in a situation where you are unsure how much data exists over the time you are investigating.
  * Instead of querying the full data set, you can use `count(*)` and `group by` a time range, which will run more quickly and help you determine a more narrow timeframe to subsequently query.
  * For example, if you look back over a day and `GROUP BY hour`, you might determine which specific hour you need to investigate in your data. You can then run a query against that hour to narrow your results further.
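
Putting these practices together, a first-pass summary query might look like this sketch (the CloudTrail table is illustrative):

```sql
-- Count events per hour over the last day to find the hour worth drilling into.
select date_trunc('hour', p_event_time) as event_hour,
       count(*) as row_count
from panther_logs.public.aws_cloudtrail
where p_occurs_since('1 day')
group by event_hour
order by row_count desc
limit 1000
```

Once a busy hour stands out, a follow-up query can select only the fields of interest within a narrow time window.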

If your query is still running slowly after following the best practices above, we recommend the following steps:

* Count the rows to see how much data you are querying.
  * This will help you determine whether it's a large amount of data and expected that it's taking longer.
* Reduce the time range you are querying.
* Reach out to your Panther Support team for additional help.

## How to use Data Explorer

### Access Data Explorer

* In the left-hand navigation bar of your Panther Console, click **Investigate > Data Explorer**.

<figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2FIsUROQyD6cPMovskOAkB%2Fdata-explorer.png?alt=media&#x26;token=759809f6-3b5b-476c-9fc6-77ef4739650e" alt="The Data Explorer page has a column on the left labeled Tables. In the middle of the page, there is a text editor labeled New Query. The bottom of the page has tabs labeled Results and Summarize. "><figcaption></figcaption></figure>

### Preview table data

You can preview example table data without writing SQL. To generate a sample SQL query for that log source, click the eye icon next to the table type:

<figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2FxLTDA6QpNizA60UUzJTE%2Fpreview-table-data.png?alt=media&#x26;token=48557c9d-665f-41ec-b033-58a7e894db61" alt="In Data Explorer, under the Tables column, each log source has an eye icon next to it. The image shows a red circle around the eye icons."><figcaption></figcaption></figure>

### Filter Data Explorer results

You can filter columns from a results set in Data Explorer without writing SQL.

In the upper right corner of the Results table, click **Filter Columns** to select the columns you would like to display in the Results:

<figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2FDJvd7bmbeaeduku51OCS%2Fimage.png?alt=media&#x26;token=32f27b58-a5d5-4e30-824f-699283c9b620" alt="The Results tab of the Data Explorer shows there are 3202 Results. The Query Time was 528 ms, and the Data Scanned was 1.89MB. There is a select dropdown that has &#x22;Filter Columns (3)&#x22; selected, and a &#x22;Download CSV&#x22; button."><figcaption></figcaption></figure>

Note: The filters applied through this mechanism do not apply to the SQL select statement in your query.

### Summarize column data

You can generate a summary (frequency count) of a column from a results set in Data Explorer without writing SQL.

On the column that you want to generate a summary for, click the down arrow and then **Summarize** to display summary results in a separate tab.

<figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2FpWUXMmQl3kBfK9KsDCB6%2Fimage.png?alt=media&#x26;token=34522744-d416-4f44-9fb9-acf88f0e8198" alt="The Results tab of the Data Explorer shows there are 3202 Results. The Query Time was 528 ms, and the Data Scanned was 1.89MB. There is a select dropdown that has &#x22;Filter Columns (3)&#x22; selected, and a &#x22;Download CSV&#x22; button. The first result of the query is shown in table format. The following fields are visible: p_timeline, type, timestamp, elb, clientIp, clientPort, targetIp, targetPort, requestProcessingTime, and targetProcessingTime."><figcaption></figcaption></figure>

Alternatively, after a query is executed, you can generate a summary by switching to the **Summarize** tab and selecting a column from the dropdown.

<figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2FZzMf7gh0GB10tXepQAIX%2Fimage.png?alt=media&#x26;token=b85face0-13f0-4c07-b18f-a2795f34a50d" alt="In the Summarize tab, the type-ahead dropdown allows selecting a column to summarize "><figcaption></figcaption></figure>

The summary results for a selected column are displayed in the Summary tab, with the option to sort results by highest count or lowest count first (default is the highest count first).

<figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2F85jqP0hJIczH4so1qxit%2Fimage.png?alt=media&#x26;token=620d6bb5-b214-4add-9c95-39f308ef574c" alt="row_count represents the frequency of each unique clientIp in the result set"><figcaption></figcaption></figure>

In addition to the `row_count` value, the summary also displays `first_seen` and `last_seen` values if the result data contains the `p_event_time` [standard field](https://docs.panther.com/~/changes/Cd1BxbxeaFl8dlynhNpt/search/panther-fields).

## How to use Data Explorer macros

All the tables in our supported backend databases (Athena and Snowflake) are partitioned by event time to allow for better performance when querying using a time filter.

For efficiency and ease of use, Panther offers macros that will be expanded into full expressions when sent to the database:

* [Current time: `p_current_timestamp`](#current-time-p_current_timestamp)
* [Time range filter: `p_occurs_between`](#time-range-filter-p_occurs_between)
* [Time offset from present: `p_occurs_since`](#time-offset-from-present-p_occurs_since)
* [Filter around a certain time: `p_occurs_around`](#filter-around-a-certain-time-p_occurs_around)

{% hint style="info" %}
These macros are different from template macros. Learn more about template macros on [Templated Searches](https://docs.panther.com/~/changes/Cd1BxbxeaFl8dlynhNpt/search/scheduled-searches/templated-searches).
{% endhint %}

### Macro formatting <a href="#time-offset-format" id="time-offset-format"></a>

#### Time duration format <a href="#time-offset-format" id="time-offset-format"></a>

Some macros take a time duration as a parameter. The format for this duration is a positive integer followed by an optional suffix indicating the unit of time. If no suffix is provided, the number is interpreted as seconds.

Supported suffixes are listed below:

* `s, sec, second, seconds` — seconds
* `m, min, minute, minutes` — minutes
* `h, hr, hrs, hour, hours` — hours
* `d, day, days` — days
* `w, wk, week, weeks` — weeks
* If no suffix is detected, seconds are assumed

Examples:

* `'6 d'` - 6 days
* `'2 weeks'` - 2 weeks
* `900` - 900 seconds
* `'96 hrs'` - 96 hours

#### Timestamp format

Ensure your time expressions can be parsed by the database backend your team is using. Some timestamps that work in Snowflake (e.g. `2021-01-21T11:15:54.346Z`) are not accepted as valid timestamps by Athena. A safe default format looks similar to `2021-01-02 15:04:05.000` and is assumed to be in the UTC time zone.

### Data Explorer macros

#### Current time: `p_current_timestamp`

`p_current_timestamp()`

This macro expands to `current_timestamp` in Data Explorer, but similar to [`p_occurs_since`](#time-offset-from-present-p_occurs_since), when run in a scheduled query it expands to the scheduled time of the query (regardless of when the query is actually executed).
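
A minimal sketch of the macro in use:

```sql
-- Returns the current time in Data Explorer, or the scheduled run time
-- when executed as part of a Scheduled Search.
select p_current_timestamp() as query_run_time
```

For filtering events by time, prefer the dedicated `p_occurs_since()` and `p_occurs_between()` macros, which also target the correct partitions.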

#### Time range filter: `p_occurs_between`

`p_occurs_between(startTime, endTime [, tableAlias [, column]])`

* `startTime` - a time in [timestamp format](#timestamp-format), indicating start of search window
* `endTime` - a time in [timestamp format](#timestamp-format), indicating the end of the search window
* `tableAlias` - an optional identifier that will be used as the table alias if provided
* `column` - an optional identifier that will be used as the column if provided
  * If not present, the default column is `p_event_time`.
  * Indicating a different column (such as `p_parsed_time`) with `column` can lead to significantly longer query times, as without a restriction on `p_event_time`, the entire table is searched.

**Note**: Please ensure that your time expression can be parsed by the database backend your instance is using. For more information see [Timestamp format](#timestamp-format).

The macro `p_occurs_between()` takes a start time and end time in [timestamp format](#timestamp-format) and filters the result set to events in that time range, using the correct partition (minimizing I/O and speeding up the query).

To be used properly this macro should occur within a filter, such as a `WHERE` clause.

The following Snowflake command contains a macro:

```sql
select p_db_name, count(*) as freq from panther_views.public.all_databases
where p_occurs_between(current_date - 1, current_timestamp)
group by p_db_name
limit 1000
```

The macro will be automatically expanded before the query is sent to the database. The form of the expansion is database-specific. In Snowflake, the expansion is straightforward:

```sql
select p_db_name, count(*) as freq from panther_views.public.all_databases
where p_event_time between convert_timezone('UTC',current_date - 1)::timestamp_ntz
    and convert_timezone('UTC',current_timestamp)::timestamp_ntz
group by p_db_name
limit 1000
```

Keep in mind that different database backends allow different date formats and operations. Athena does not allow simple arithmetic operations on dates, so care must be taken to use an Athena-friendly time format:

```sql
select p_db_name, count(*) as freq from panther_views.public.all_databases
where p_occurs_between(current_date - interval '1' day, current_timestamp)
group by p_db_name
limit 1000
```

Because of the structure of allowed indexes on partitions in Athena, the expansion looks different:

```sql
select p_db_name, count(*) as freq from panther_views.all_databases
where p_event_time between cast (current_date - interval '1' day as timestamp) and cast (current_timestamp as timestamp)
  and partition_time between to_unixtime(date_trunc('HOUR', (cast (current_date - interval '1' day as timestamp))))
    and to_unixtime(cast (current_timestamp as timestamp))
group by p_db_name
limit 1000
```

The macro also takes an optional table alias. This can be helpful when referring to multiple tables, such as with a `JOIN`:

```sql
select aws.awsRegion, ata.digestEndTime
from panther_logs.public.aws_cloudtrail as aws
join panther_logs.public.aws_cloudtraildigest as ata ON aws.awsRegion = ata.awsAccountId
where p_occurs_between('2023-01-01', '2023-06-30', aws)
limit 10
```

#### Time offset from present: `p_occurs_since`

`p_occurs_since(offsetFromPresent [, tableAlias[, column]])`

* `offsetFromPresent` - an expression in [time duration format](#time-offset-format), interpreted relative to the present, for example `'1 hour'`
* `tableAlias` - an optional identifier that will be used as the table alias if provided
* `column` - an optional identifier that will be used as the column if provided
  * If not present, the default column is `p_event_time`.
  * Indicating a different column (such as `p_parsed_time`) with `column` can lead to significantly longer query times, as without a restriction on `p_event_time`, the entire table is searched.

The macro `p_occurs_since()` takes an offset parameter specified in [time duration format](#time-offset-format) and filters the result set down to those events between the current time and the specified offset, using the correct partition or cluster key (minimizing I/O and speeding up the query).

The macro also takes an optional table alias which can be helpful when referring to multiple tables, such as a `JOIN`.

To be used properly this macro should occur within a filter, such as a `WHERE` clause.

#### Examples:

```sql
p_occurs_since('6 d')
p_occurs_since('2 weeks')
p_occurs_since(900) -- assumes seconds
p_occurs_since('96 hrs')
```

{% hint style="info" %}
If this is used in a [Scheduled Search](https://docs.panther.com/~/changes/Cd1BxbxeaFl8dlynhNpt/search/scheduled-searches), then rather than using the current time as the reference, the scheduled run time will be used. For example, if a query is scheduled to run at the start of each hour, then the `p_occurs_since('1 hour')` macro will expand using a time range of 1 hour starting at the start of each hour (regardless of when the query is actually executed).
{% endhint %}
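
As a minimal sketch (the CloudTrail table is illustrative), the macro belongs inside the `WHERE` clause:

```sql
-- Scan only the last hour of events, using the time partition.
select p_event_time, eventName, sourceIPAddress
from panther_logs.public.aws_cloudtrail
where p_occurs_since('1 hour')
limit 100
```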

In the following example of a macro with a table alias parameter, we look at CloudTrail logs to identify S3 buckets created and deleted within one hour of their creation, a potentially suspicious behavior. To get this information we do a self-join on the `aws_cloudtrail` table in `panther_logs`, and we use a macro expansion to limit this search to the past 24 hours on each of the two elements of the self-join (aliased `ct1` and `ct2` below):

```sql
select 
ct1.p_event_time createTime, ct2.p_event_time deleteTime,
timediff('s',createTime, deleteTime) timeExtant,
ct1.requestparameters:"bucketName"::varchar createdBucket,
ct1.useridentity:"arn"::varchar createArn, deleteArn,
ct1.useragent createUserAgent, deleteUserAgent
from panther_logs.public.aws_cloudtrail ct1
join (
select p_event_time, requestparameters:"bucketName"::varchar deletedBucket, errorcode,
  eventname deleteEvent, useridentity:"arn"::varchar deleteArn, useragent deleteUserAgent  from panther_logs.public.aws_cloudtrail) ct2
on (ct1.requestparameters:"bucketName"::varchar = ct2.DeletedBucket
    and ct2.p_event_time > ct1.p_event_time
    and timediff('s',ct1.p_event_time, ct2.p_event_time) < 3600)
where ct2.deleteEvent = 'DeleteBucket'
and ct1.eventName = 'CreateBucket'
and ct1.errorCode is null and ct2.errorcode is null
and p_occurs_since('1 day',ct2)  -- apply to ct2
and p_occurs_since('24 hours',ct1)  -- apply to ct1
order by createdBucket, createTime;
```

There are two separate calls to `p_occurs_since`, each applied to a different table, as indicated by the table alias used as a second parameter. This expands into the following Snowflake query:

```sql
select 
ct1.p_event_time createTime, ct2.p_event_time deleteTime,
timediff('s',createTime, deleteTime) timeExtant,
ct1.requestparameters:"bucketName"::varchar createdBucket,
ct1.useridentity:"arn"::varchar createArn, deleteArn,
ct1.useragent createUserAgent, deleteUserAgent
from panther_logs.public.aws_cloudtrail ct1
join (
select p_event_time, requestparameters:"bucketName"::varchar deletedBucket, errorcode,
  eventname deleteEvent, useridentity:"arn"::varchar deleteArn, useragent deleteUserAgent  from panther_logs.public.aws_cloudtrail) ct2
on (ct1.requestparameters:"bucketName"::varchar = ct2.deletedBucket
    and ct2.p_event_time > ct1.p_event_time
    and timediff('s',ct1.p_event_time, ct2.p_event_time) < 3600)
where ct2.deleteEvent = 'DeleteBucket'
and ct1.eventName = 'CreateBucket'
and ct1.errorCode is null and ct2.errorcode is null
and ct2.p_event_time >= current_timestamp - interval '86400 second'
and ct1.p_event_time >= current_timestamp - interval '86400 second'
order by createdBucket, createTime;
```

#### Filter around a certain time: `p_occurs_around`

`p_occurs_around(timestamp, timeOffset [, tableAlias[, column]])`

* `timestamp` - a time in [timestamp format](#timestamp-format), indicating the time to search around
* `timeOffset` - an expression in [time duration format](#time-offset-format), indicating the amount of time to search around the `timestamp`, for example `'1 hour'`
* `tableAlias` - an optional identifier that will be used as the table alias if provided
* `column` - an optional identifier that will be used as the column if provided
  * If not present, the default column is `p_event_time`.
  * Indicating a different column (such as `p_parsed_time`) with `column` can lead to significantly longer query times, as without a restriction on `p_event_time`, the entire table is searched.

**Note**: Please ensure that your time expression can be parsed by the database backend your instance is using. For more information see [Timestamp format](#timestamp-format).

The `p_occurs_around()` macro allows you to filter for events that occur around a given time. It takes a timestamp in [timestamp format](#timestamp-format) indicating the time to search around and an offset in [time duration format](#time-offset-format) specifying the interval to search. The search range is from `timestamp - timeOffset` to `timestamp + timeOffset`.

For example, the macro `p_occurs_around('2022-01-01 10:00:00.000', '10 m')` filters for events that occurred between 09:50 am and 10:10 am UTC on January 1, 2022.

The macro also takes an optional table alias which can be helpful when referring to multiple tables, such as a `JOIN`.

To be used properly this macro should occur within a filter, such as the `WHERE` clause of a SQL statement.

#### Examples:

```sql
p_occurs_around('2022-01-01 10:00:00.000', '6 d')
p_occurs_around('2022-01-01 10:00:00.000', '2 weeks')
p_occurs_around('2022-01-01 10:00:00.000', 900) -- assumes seconds
p_occurs_around('2022-01-01 10:00:00.000', '96 hrs')
```
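
The bare calls above belong inside a `WHERE` clause. For example, this sketch (the CloudTrail table is illustrative) retrieves events within 10 minutes of a time of interest:

```sql
-- Events between 09:50 and 10:10 UTC on January 1, 2022.
select p_event_time, eventName
from panther_logs.public.aws_cloudtrail
where p_occurs_around('2022-01-01 10:00:00.000', '10 m')
limit 1000
```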

## How to manage Saved Searches in Data Explorer

Saving your commonly run searches in Data Explorer means you won't need to rewrite them again and again.

{% hint style="info" %}
Note that the instructions to delete a Saved Search are outlined on [Saved and Scheduled Queries](https://docs.panther.com/~/changes/Cd1BxbxeaFl8dlynhNpt/scheduled-searches#how-to-delete-or-download-a-saved-search).
{% endhint %}

### Save a search in Data Explorer

Below are instructions for how to save a search you've written in Data Explorer. You can also [create a Saved Search from Search](https://docs.panther.com/~/changes/Cd1BxbxeaFl8dlynhNpt/search-tool#creating-a-saved-search).

1. In the left-hand navigation bar of your Panther Console, click **Investigate** > **Data Explorer**.
2. In the SQL editor, write a search using SQL.
   * If you have enabled the system-wide setting to [require a LIMIT clause](#use-limits-in-scheduled-queries), make sure your query includes a LIMIT.\
     ![The image shows an example query written in the Scheduled Query text editor in the Panther Console.](https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2FcjdTT1JaruWkRHFJwj4U%2FScreen%20Shot%202022-03-21%20at%2010.15.07%20AM.png?alt=media\&token=8555cbda-c17b-483c-9780-b026088d80f3)
   * You can create a Templated Search by including variables in your SQL expression. Learn more on [Templated Searches](https://docs.panther.com/~/changes/Cd1BxbxeaFl8dlynhNpt/search/scheduled-searches/templated-searches).
3. Below the SQL editor, click **Save As**.
4. In the **Save Search** modal that pops up, fill in the form:

   * **Search Name**: Add a descriptive name.
   * **Tags**: Add tags to help you group similar searches together.
   * **Description**: Describe the purpose of the search.
   * **Is this a Scheduled Search?**: If you want this Saved Search to run on a schedule (making it a Scheduled Search), switch the toggle to **ON**.
     * When you switch this toggle to **ON**, the options described below will appear.
     * **Is it active?**: If you want this Scheduled Search to start running on your selected schedule, switch the toggle to **ON**.

   \
   If you've toggled **Is this a Scheduled Search?** to **ON**, configure one of the following interval options: Period or [Cron Expression](#how-to-use-the-scheduled-query-crontab).

   * **Period** (select if your query should run on fixed time intervals):
     * **Period(days)** and **Period(min)**: Enter the number of days and/or minutes after which the SQL query should run again. For example, a period of 0 days and 30 minutes means the query will run every 30 minutes.
     * **Timeout(min)**: Enter the timeout period in minutes, with a maximum allowed value of 10 minutes. If your query does not complete inside the allowed time window, Panther will retry 3 times before automatically canceling it.
   * **Cron Expression** (select if your query should run repeatedly at specific dates, and learn more about how to create a cron expression in [How to use the Scheduled Search crontab](https://docs.panther.com/~/changes/Cd1BxbxeaFl8dlynhNpt/scheduled-searches#how-to-use-the-scheduled-search-crontab)):
     * **Minutes** and **Hours**: Enter the time of day for the query to run.
     * **Day** and **Month** (day of month): If you wish to have this query run on a specific day and month, enter the day and month.
     * **Day** (day of week): If you wish to have this query run on a specific day of the week, enter the day.
     * **Timeout**: Enter the timeout period in minutes, with a maximum allowed value of 10 minutes. If your query does not complete inside the allowed time window, Panther will retry 3 times before automatically canceling it.\
       ![The image shows the Search creation screen. There are fields for Search Name, Tags, Description, and Default Database. Next to "Is this a Scheduled Search?" the toggle is set to "on." Next to "Is it active?" the toggle is set to "On." The interval option "Period" is selected. ](https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fs1ZgJnAyFtenWFLldqFm%2FScreenshot%202023-10-02%20at%209.58.13%20AM.png?alt=media\&token=dcbeb4cb-6b80-4a17-8e6c-309bc7067e65)
5. Click **Save Search**.

If you've created a Scheduled Search (by toggling **Is this a Scheduled Search?** to **ON**), you can now [follow the instructions to create a Scheduled Rule](https://docs.panther.com/~/changes/Cd1BxbxeaFl8dlynhNpt/detections/rules#how-to-write-rules-and-scheduled-rules) if you'd like the data returned by your search to be passed through a detection, alerting on matches.

### Update a Saved Search in Data Explorer

1. In your Panther Console, navigate to **Investigate** > **Data Explorer** in the left sidebar.
2. In the upper right corner, click **Open Saved Search**.
   * An **Open a Search** modal will pop up.
3. In the modal, select the Saved Search you'd like to update, and click **Open Search**.
   * The Saved Search will populate in the Data Explorer SQL editor.
4. Make desired changes to the SQL command.
5. Below the editor, click **Update**.\
   ![The bottom of the SQL editor in Data Explorer is shown. Below the editor are two buttons: Run Search and Update. The Update button is circled.](https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2FysxLdfGlXQsYVOvciaF2%2FScreenshot%202023-10-02%20at%2010.04.30%20AM.png?alt=media\&token=d013a05b-dd3e-440e-b881-5dfc5deb1906)
   * An Update Search modal will pop up.
6. Make desired changes to the Saved Search's metadata, including the **Search Name**, **Tags**, **Description**, **Default Database**, and **Is this a Scheduled Search?** (and related fields).
7. Click **Update Search** to save your changes.
