Data Explorer

Use Panther's Data Explorer to view normalized data and perform SQL queries


Overview

The Data Explorer in your Panther Console is where you can view your normalized Panther data and perform SQL queries (with autocompletion).

In Data Explorer, you can:

  • Browse collected log data, rule matches, and search standard fields across all data

  • Create Saved and Scheduled Searches that can run through the rule engine

  • Share results with your team through a shareable link, or download results in a CSV

  • Select entire rows of JSON to use in the rule engine as unit tests

  • Preview table data, filter results, and summarize columns without SQL

  • Limit access to the Data Explorer through Role-Based Access Control (RBAC)

Data Explorer results that include bigint data exceeding 32-bit precision will be displayed rounded, due to browser limitations in rendering JSON. If you'd like these values to be represented without precision loss, cast them to strings in your SQL command. The actual data stored in the data lake is not affected.
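
For example, a minimal sketch of casting a large numeric column to a string before returning it (the table and column names here are only illustrative):

select p_event_time, cast(bytes_transferred as varchar) as bytes_transferred_str
from panther_logs.public.my_custom_log
limit 100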

Query syntax in Data Explorer

Queries executed in Data Explorer should use Snowflake SQL syntax, as described in Snowflake's SQL Command Reference documentation.

The sections below cover referencing nested fields and searching effectively in Data Explorer.

Referencing nested fields in Data Explorer

When traversing a JSON object, if a key name does not conform to Snowflake SQL identifier rules (for example, if it contains periods or spaces), enclose that part of the path in double quotes.

For example, to query the field context.ip_address from the IPinfo Privacy enrichment provider, you would write it as p_enrichment:ipinfo_privacy:"context.ip_address".

Learn more in Snowflake's Querying Semi-structured Data documentation.
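
A minimal query sketch using such a quoted path (the table and the exact enrichment structure here are illustrative and depend on your configuration):

select p_event_time, p_enrichment:ipinfo_privacy:"context.ip_address" as ip_address
from panther_logs.public.aws_cloudtrail
limit 100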

Searching effectively in Data Explorer

To ensure your results return as quickly as possible, it's recommended to follow these best practices:

  • Use a LIMIT clause

    • Use the LIMIT clause to specify the number of records your query will return. Limiting queries can return results more quickly. Panther limits the size of results to 100MB by default.

  • Use a time range filter

    • Snowflake groups files in S3 into micropartitions. When you filter by a time range (such as p_event_time or p_occurs_since()) in your query, Snowflake only needs to access specific partitions, which returns results more quickly. For more information on macros, see the How to use Data Explorer macros section below.

  • Use p_any fields

    • During log ingestion, Panther extracts common security indicators into p_any fields, which are stored in optimized columns. These fields standardize attribute names across all data sources, enabling fast data correlation. Learn more on Standard Fields.

  • Query specific fields

    • Using SELECT * FROM ... pulls all columns, which can slow down queries. When possible, query only the fields you need to investigate. For example, SELECT user_name, event_name FROM ....

  • Summarize

    • Summaries are faster to run than querying full records. This is especially helpful when investigating logs over a large period of time, or in a situation where you are unsure how much data exists over the time you are investigating.

    • Instead of querying the full data set, you can use count(*) and group by a time range, which runs more quickly and helps you determine a narrower timeframe to query next.

    • For example, if you look back over a day and GROUP BY hour, you might determine which specific hour you need to investigate in your data. You can then run a query against that hour to narrow your results further.
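
For instance, here is a sketch that combines several of these practices (a time-range filter, an hourly summary instead of full records, and a LIMIT); the table name is just an example:

select date_trunc('hour', p_event_time) as event_hour, count(*) as row_count
from panther_logs.public.aws_cloudtrail
where p_occurs_since('1 day')
group by 1
order by 1
limit 1000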

If your query is still running slowly after following the best practices above, we recommend the following steps:

  • Count the rows to see how much data you are querying.

    • This will help you determine whether you're querying a large amount of data, in which case a longer runtime is expected.

  • Reduce the time range you are querying.

  • Reach out to your Panther Support team for additional help.

How to use Data Explorer

Access Data Explorer

  • In the left-hand navigation bar of your Panther Console, click Investigate > Data Explorer.

Preview table data

You can preview example table data without writing SQL. In the Tables list on the left side of Data Explorer, click the eye icon next to a table to generate a sample SQL query for that log source.

Filter Data Explorer results

You can filter the columns of a results set in Data Explorer without writing SQL.

In the upper right corner of the Results table, click Filter Columns to select the columns you would like to display in the results.

Note: The filters applied through this mechanism do not apply to the SQL select statement in your query.

Summarize column data

You can generate a summary (frequency count) of a column from a results set in Data Explorer without writing SQL.

On the column you want to generate a summary for, click the down arrow and then Summarize to display summary results in a separate tab.

Alternatively, after a query has executed, you can switch to the Summarize tab and select a column from the dropdown to generate a summary.

The summary results for the selected column are displayed in the Summarize tab, with the option to sort results by highest or lowest count first (the default is highest count first). In addition to the row_count value, the summary also displays first_seen and last_seen values if the result data contains the p_event_time standard field.
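
If you prefer SQL, a roughly equivalent summary can be computed directly; the table and column names below are illustrative:

select clientIp, count(*) as row_count,
  min(p_event_time) as first_seen, max(p_event_time) as last_seen
from panther_logs.public.aws_alb
where p_occurs_since('1 day')
group by 1
order by row_count desc
limit 1000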

How to use Data Explorer macros

All the tables in our supported backend databases (Athena and Snowflake) are partitioned by event time to allow for better performance when querying using a time filter.

For efficiency and ease of use, Panther offers macros that are expanded into full SQL expressions when your query is sent to the database. These macros are different from template macros; learn more about template macros on Templated Searches.

Macro formatting

Time duration format

Some macros take a time duration as a parameter. The format for this duration is a positive integer number followed by an optional suffix to indicate the unit of time. If no suffix is provided the number is interpreted as seconds.

Supported suffixes are listed below:

  • s, sec, second, seconds — macro adds the specified number of seconds to the offset

  • m, min, minute, minutes — macro adds the specified number of minutes to the offset

  • h, hr, hrs, hour, hours — macro adds the specified number of hours to the offset

  • d, day, days — macro adds the specified number of days to the offset

  • w, wk, week, weeks — macro adds the specified number of weeks to the offset

  • if no suffix is detected, the default is seconds

Examples:

  • '6 d' - 6 days

  • '2 weeks' - 2 weeks

  • 900 - 900 seconds

  • '96 hrs' - 96 hours

Timestamp format

Ensure your time expressions can be parsed by the database backend your team is using. Some timestamps that work in Snowflake (e.g., 2021-01-21T11:15:54.346Z) are not accepted as valid timestamps by Athena. A safe default format looks like 2021-01-02 15:04:05.000 and is assumed to be in the UTC time zone.
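
For example, a filter written with explicit timestamp literals in this safe format, using the p_occurs_between macro described below (the dates and table are illustrative):

select p_event_time, eventName
from panther_logs.public.aws_cloudtrail
where p_occurs_between('2021-01-02 15:04:05.000', '2021-01-03 15:04:05.000')
limit 100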

Data Explorer macros

Current time: p_current_timestamp

p_current_timestamp()

This macro expands to current_timestamp in Data Explorer. Similar to p_occurs_since, when it is run in a scheduled query it expands to the scheduled time of the query (regardless of when the query is actually executed).
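
A minimal Snowflake-flavored sketch using this macro in a filter (the table and the 15-minute window are illustrative):

select count(*) as recent_events
from panther_logs.public.aws_cloudtrail
where p_event_time > p_current_timestamp() - interval '15 minutes'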

Time range filter: p_occurs_between

p_occurs_between(startTime, endTime [, tableAlias [, column]])

The p_occurs_between() macro takes a start time and an end time in timestamp format and filters the result set to the events in that time range, using the correct partition (minimizing I/O and speeding up the query).

  • startTime - a time in timestamp format, indicating the start of the search window

  • endTime - a time in timestamp format, indicating the end of the search window

  • tableAlias - an optional identifier that will be used as the table alias if provided

  • column - an optional identifier that will be used as the column if provided

    • If not present, the default column is p_event_time.

    • Indicating a different column (such as p_parsed_time) with column can lead to significantly longer query times, as without a restriction on p_event_time, the entire table is searched.

Note: Ensure that your time expressions can be parsed by the database backend your instance is using. For more information, see Timestamp format.

To be used properly, this macro should occur within a filter, such as a WHERE clause.

The following Snowflake command contains a macro:

select p_db_name, count(*) as freq from panther_views.public.all_databases
where p_occurs_between(current_date - 1, current_timestamp)
group by p_db_name
limit 1000

The macro will be automatically expanded before the query is sent to the database. The form the expansion takes is database-specific. In Snowflake, the expansion is fairly straightforward:

select p_db_name, count(*) as freq from panther_views.public.all_databases
where p_event_time between convert_timezone('UTC',current_date - 1)::timestamp_ntz
    and convert_timezone('UTC',current_timestamp)::timestamp_ntz
group by p_db_name
limit 1000

Keep in mind that different database backends allow different date formats and operations. Athena does not allow simple arithmetic operations on dates, so care must be taken to use an Athena-friendly time format:

select p_db_name, count(*) as freq from panther_views.public.all_databases
where p_occurs_between(current_date - interval '1' day, current_timestamp)
group by p_db_name
limit 1000

Because of the structure of allowed indexes on partitions in Athena, the expansion looks different:

select p_db_name, count(*) as freq from panther_views.all_databases
where p_event_time between cast (current_date - interval '1' day as timestamp) and cast (current_timestamp as timestamp)
  and partition_time between to_unixtime(date_trunc('HOUR', (cast (current_date - interval '1' day as timestamp))))
    and to_unixtime(cast (current_timestamp as timestamp))
group by p_db_name
limit 1000

The macro also takes an optional table alias. This can be helpful when referring to multiple tables, such as with a JOIN:

select aws.awsRegion, ata.digestEndTime
from panther_logs.public.aws_cloudtrail as aws
join panther_logs.public.aws_cloudtraildigest as ata ON aws.awsRegion = ata.awsAccountId
where p_occurs_between('2023-01-01', '2023-06-30', aws)
limit 10

Time offset from present: p_occurs_since

p_occurs_since(offsetFromPresent [, tableAlias [, column]])

The p_occurs_since() macro takes an offset parameter specified in time duration format and filters the result set down to those events between the current time and the specified offset, using the correct partition or cluster key (minimizing I/O and speeding up the query).

  • offsetFromPresent - an expression in time duration format, interpreted relative to the present, for example '1 hour'

  • tableAlias - an optional identifier that will be used as the table alias if provided

  • column - an optional identifier that will be used as the column if provided

    • If not present, the default column is p_event_time.

    • Indicating a different column (such as p_parsed_time) with column can lead to significantly longer query times, as without a restriction on p_event_time, the entire table is searched.

If this macro is used in a Scheduled Search, the scheduled run time is used as the reference instead of the current time. For example, if a query is scheduled to run at the start of each hour, p_occurs_since('1 hour') will expand to a time range of one hour starting at the top of that hour (regardless of when the query is actually executed).

The macro also takes an optional table alias, which can be helpful when referring to multiple tables, such as in a JOIN.

To be used properly, this macro should occur within a filter, such as a WHERE clause.

Examples:

p_occurs_since('6 d')
p_occurs_since('2 weeks')
p_occurs_since(900) // assumes seconds
p_occurs_since('96 hrs')

In the following example of a macro with a table alias parameter, we look at CloudTrail logs to identify S3 buckets created and deleted within one hour of their creation, a potentially suspicious behavior. To get this information we do a self-join on the aws_cloudtrail table in panther_logs, and we use a macro expansion to limit this search to the past 24 hours on each of the two elements of the self-join (aliased ct1 and ct2 below):

select 
ct1.p_event_time createTime, ct2.p_event_time deleteTime,
timediff('s',createTime, deleteTime) timeExtant,
ct1.requestparameters:"bucketName"::varchar createdBucket,
ct1.useridentity:"arn"::varchar createArn, deleteArn,
ct1.useragent createUserAgent, deleteUserAgent
from panther_logs.public.aws_cloudtrail ct1
join (
select p_event_time, requestparameters:"bucketName"::varchar deletedBucket, errorcode,
  eventname deleteEvent, useridentity:"arn"::varchar deleteArn, useragent deleteUserAgent  from panther_logs.public.aws_cloudtrail) ct2
on (ct1.requestparameters:"bucketName"::varchar = ct2.DeletedBucket
    and ct2.p_event_time > ct1.p_event_Time
    and timediff('s',ct1.p_event_time, ct2.p_event_time) < 3600)
where ct2.deleteEvent = 'DeleteBucket'
and ct1.eventName = 'CreateBucket'
and ct1.errorCode is null and ct2.errorcode is null
and p_occurs_since('1 day',ct2)  -- apply to ct2
and p_occurs_since('24 hours',ct1)  -- apply to ct1
order by createdBucket, createTime;

There are two separate calls to p_occurs_since, each applied to a different table, as indicated by the table alias passed as the second parameter. This is expanded into the following Snowflake query:

select 
ct1.p_event_time createTime, ct2.p_event_time deleteTime,
timediff('s',createTime, deleteTime) timeExtant,
ct1.requestparameters:"bucketName"::varchar createdBucket,
ct1.useridentity:"arn"::varchar createArn, deleteArn,
ct1.useragent createUserAgent, deleteUserAgent
from panther_logs.public.aws_cloudtrail ct1
join (
select p_event_time, requestparameters:"bucketName"::varchar deletedBucket, errorcode,
  eventname deleteEvent, useridentity:"arn"::varchar deleteArn, useragent deleteUserAgent  from panther_logs.public.aws_cloudtrail) ct2
on (ct1.requestparameters:"bucketName"::varchar = ct2.deletedBucket
    and ct2.p_event_time > ct1.p_event_Time
    and timediff('s',ct1.p_event_time, ct2.p_event_time) < 3600)
where ct2.deleteEvent = 'DeleteBucket'
and ct1.eventName = 'CreateBucket'
and ct1.errorCode is null and ct2.errorcode is null
and ct2.p_event_time >= current_timestamp - interval '86400 second'
and ct1.p_event_time >= current_timestamp - interval '86400 second'
order by createdBucket, createTime;

Filter around a certain time: p_occurs_around

p_occurs_around(timestamp, timeOffset [, tableAlias [, column]])

The p_occurs_around() macro allows you to filter for events that occur around a given time. It takes a timestamp indicating the time to search around and an offset in time duration format specifying the interval to search. The search range is from timestamp - timeOffset to timestamp + timeOffset.

  • timestamp - a time in timestamp format, indicating the time to search around

  • timeOffset - an expression in time duration format, indicating the amount of time to search around the timestamp, for example '1 hour'

  • tableAlias - an optional identifier that will be used as the table alias if provided

  • column - an optional identifier that will be used as the column if provided

    • If not present, the default column is p_event_time.

    • Indicating a different column (such as p_parsed_time) with column can lead to significantly longer query times, as without a restriction on p_event_time, the entire table is searched.

Note: Ensure that your time expression can be parsed by the database backend your instance is using. For more information, see Timestamp format.

For example, the macro p_occurs_around('2022-01-01 10:00:00.000', '10 m') filters for events that occurred between 09:50 am and 10:10 am UTC on January 1, 2022.

The macro also takes an optional table alias which can be helpful when referring to multiple tables, such as a JOIN.

To be used properly, this macro should occur within a filter, such as the WHERE clause of a SQL statement.

Examples:

p_occurs_around('2022-01-01 10:00:00.000', '6 d')
p_occurs_around('2022-01-01 10:00:00.000', '2 weeks')
p_occurs_around('2022-01-01 10:00:00.000', 900) // assumes seconds
p_occurs_around('2022-01-01 10:00:00.000', '96 hrs')
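
For example, a query sketch that pulls CloudTrail events within 30 minutes on either side of a known incident time (the timestamp and selected fields are illustrative):

select p_event_time, eventName, p_any_ip_addresses
from panther_logs.public.aws_cloudtrail
where p_occurs_around('2023-06-15 14:00:00.000', '30 m')
limit 1000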

How to manage Saved Searches in Data Explorer

Saving your commonly run searches in Data Explorer means you won't need to rewrite them again and again. Below are instructions for saving a search you've written in Data Explorer; you can also create a Saved Search from Search. Instructions for deleting a Saved Search are outlined on Saved and Scheduled Searches.

Save a search in Data Explorer

  1. In the left-hand navigation bar of your Panther Console, click Investigate > Data Explorer.

  2. In the SQL editor, write a search using SQL.

    • If you have enabled the system-wide setting to require a LIMIT clause, make sure your query includes a LIMIT.

    • You can create a Templated Search by including variables in your SQL expression. Learn more on Templated Searches.

  3. Below the SQL editor, click Save As.

  4. In the Save Search modal that pops up, fill in the form:

    • Search Name: Add a descriptive name.

    • Tags: Add tags to help you group similar searches together.

    • Description: Describe the purpose of the search.

    • Is this a Scheduled Search?: If you want this Saved Search to run on a schedule (making it a Scheduled Search), switch the toggle to ON.

      • When you switch this toggle to ON, the options described below will appear.

      • Is it active?: If you want this Scheduled Search to start running on your selected schedule, switch the toggle to ON.

      • If you've toggled Is this a Scheduled Search? to ON, configure one of the following interval options: Period or Cron Expression.

    • Period (select if your query should run on fixed time intervals):

      • Period(days) and Period(min): Enter the number of days and/or minutes after which the SQL query should run again. For example, setting a period of 0 days and 30 minutes means the query will run every 30 minutes, every day.

      • Timeout(min): Enter the timeout period in minutes, with a maximum allowed value of 10 minutes. If your query does not complete inside the allowed time window, Panther will retry 3 times before automatically canceling it.

    • Cron Expression (select if your query should run repeatedly at specific dates; learn more about creating a cron expression in How to use the Scheduled Search crontab):

      • Minutes and Hours: Enter the time of day for the query to run.

      • Day and Month (day of month): If you wish to have this query run on a specific day and month, enter the day and month.

      • Day (day of week): If you wish to have this query run on a specific day of the week, enter the day.

      • Timeout: Enter the timeout period in minutes, with a maximum allowed value of 10 minutes. If your query does not complete inside the allowed time window, Panther will retry 3 times before automatically canceling it.

  5. Click Save Search.

If you've created a Scheduled Search (by toggling Is this a Scheduled Search? to ON), you can follow the instructions to create a Scheduled Rule if you'd like the data returned by your search to be passed through a detection, alerting on matches.

Update a Saved Search in Data Explorer

  1. In your Panther Console, navigate to Investigate > Data Explorer in the left sidebar.

  2. In the upper right corner, click Open Saved Search.

    • An Open a Search modal will pop up.

  3. In the modal, select the Saved Search you'd like to update, and click Open Search.

    • The Saved Search will populate in the Data Explorer SQL editor.

  4. Make desired changes to the SQL command, then click Update below the editor.

    • An Update Search modal will pop up.

  5. Make desired changes to the Saved Search's metadata, including the Search Name, Tags, Description, Default Database, and Is this a Scheduled Search? (and related fields).

  6. Click Update Search to save your changes.
