Templated Searches

Export collections of parameterized SQL expressions for reuse

Overview

Templated Searches are Saved Searches (written in SQL) that contain variables. Template macros are Templated Searches that can be imported into other Saved Searches and called with arguments (similar to function calls).

It is possible to save a Templated Search without defining it as a template macro. In this case, the query cannot be imported into other Saved Searches, and the variable values are provided within the same Saved Search.

Commonly run SQL expressions can be saved as template macros in a single, library-like Saved Search, making it easier to manage and reuse your complex SQL code. You could, for example, define "libraries" of related queries for areas like the following (a sketch of such a library appears after this list):

  • Employee macros

  • Correlating macros

  • Enrichment macros
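
For illustration, a library-like Saved Search of enrichment macros could be structured as follows. This is a minimal sketch using the macro syntax described later on this page; the macro names, arguments, and select bodies are hypothetical placeholders:

-- pragma: template
--hypothetical "enrichment macros" library
{% macro enrich_ip(ip_address) export %}
select '{{ip_address}}' as ip_address
{% endmacro %}

{% macro enrich_user(username) export %}
select '{{username}}' as username
{% endmacro %}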

Templated Searches are defined in Panther by including -- pragma: template as the first line of SQL.

Panther uses the Django template language, which closely mirrors Jinja templates. This means most DBT templates are interoperable.

Example use case

To demonstrate the usefulness of template macros, imagine you have a complex query (spanning, say, 50 lines of SQL) that requires a date range and a username. Without template macros, each time you want to run this search, you must edit it to input a specific date range and username, which can be tedious and error-prone.

Instead, this complex query can be created as a template macro (named user_activity_profile in the example below) in a Templated Search (named user queries in the example below), with variables for the date range and username.
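
The macro definition itself might look like the following condensed sketch, in which the select statement, column names, and table name are hypothetical placeholders standing in for the real 50-line query:

-- pragma: template
{% macro user_activity_profile(username, start_time, end_time) export %}
select *
from your_log_table --placeholder for the real table
where actor = '{{username}}'
  and event_time between '{{start_time}}' and '{{end_time}}'
{% endmacro %}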

You can then create a new query like the one below, which imports the template macro (user_activity_profile) from the Templated Search (user queries) and calls the macro, passing in username and date-time values. You can then save this search (calling it run_user_activity_profile, for example) for even faster access.

-- pragma: template

--loads query user_activity_profile from the saved query 'user queries'
{% import 'user queries' user_activity_profile %}

{{user_activity_profile('bob_smith', '2023-01-01 00:00:00', '2023-01-02 00:00:00')}}

How to use Templated Searches

You can create a Templated Search that uses variables without defining it as a macro. In this case, while the query cannot be imported into other Saved Searches, future execution of the query will be simpler, as all variable values are defined at the top of the file.

Creating a Templated Search

You can create a Templated Search in Data Explorer within the Panther Console, or locally, in the CLI workflow.

How to create a Templated Search in the Panther Console

  1. In the left-hand navigation bar of your Panther Console, click Investigate > Data Explorer.

  2. In the SQL editor, add -- pragma: template at the top.

  3. Create a SQL query that declares variables using {{}} (double curly brackets).

    • Example:

      -- pragma: template
      select {{p}}, {{q}}
  4. To set variable values, add statements like the following at the top of the file:

    {% set <variable one name> = <variable one value> %}
    {% set <variable two name> = <variable two value> %}
    • Example:

      -- pragma: template
      {% set p = "1" %}
      {% set q = "'queue'" %}
      
      select {{p}}, {{q}}
  5. Click Save as, and provide values for the following fields:

    • Query Name: Add a descriptive name.

    • Tags: Add tags to help you group similar queries together.

    • Description: Describe the purpose of the query.

    • Default Database: Choose the database to query from.

    • Is this a Scheduled Query?: Toggle this ON if you would like this query to run on a defined interval.

      • If you toggle this ON, provide a time interval.

  6. Click Save Query.

How to create a Templated Search in the CLI workflow

To create a Templated Search in the CLI workflow, follow the How to create a Saved Search in the CLI workflow instructions. When writing the SQL in the Query key:

  • Include -- pragma: template at the top.

  • Add the query, with variables enclosed in double curly brackets.

Example:

Query: |-
    -- pragma: template
    {% set p = "1" %}
    {% set q = "'queue'" %}
    select {{p}}, {{q}}
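
A complete Saved Search file might look like the following sketch. The Query key and its contents match the example above; the other field names are assumptions based on Panther's Saved Search YAML format, so consult the Saved Search CLI documentation for the authoritative schema:

AnalysisType: saved_query  # assumed field name
QueryName: Templated Search Example  # hypothetical name
Description: Selects two templated values  # hypothetical description
Query: |-
    -- pragma: template
    {% set p = "1" %}
    {% set q = "'queue'" %}
    select {{p}}, {{q}}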

Running a Templated Search in Data Explorer

You can run a previously defined Templated Search in Data Explorer by opening it and providing variable values.

  1. In the left-hand navigation bar of your Panther Console, click Investigate > Data Explorer.

  2. In the upper-right corner, click Open Saved Query.

  3. Select the Templated Search you would like to run, then click Open Query.

  4. Ensure all variables in your SQL expression have values defined at the top of the file, in statements like the following:

    {% set <variable one name> = <variable one value> %}
    {% set <variable two name> = <variable two value> %}
  5. Click Run Query.

Example

In the example below, the p and q variables are given values of 1 and 'queue', respectively.

-- pragma: template
{% set p = "1" %}
{% set q = "'queue'" %}

select {{p}}, {{q}}

How to use template macros

Template macros are Templated Searches that are exported, meaning they can be imported into other searches. It may be particularly useful to group multiple macros in one templated query, effectively creating a "library" of related macros.

Creating template macros

You can create template macros in Data Explorer within the Panther Console, or locally, in the CLI workflow.

How to create template macros in the Panther Console

  1. In the left-hand navigation bar of your Panther Console, click Investigate > Data Explorer.

  2. In the SQL editor, add -- pragma: template at the top.

  3. Create one or more SQL queries that declare variables using {{}} (double curly brackets).

  4. Surround each templated query with:

    • Before the query: {% macro <macro name>(<variable_one>, <variable_two>) export %}

    • After the query: {% endmacro %}

  5. Click Save as, and provide values for the following fields:

    • Query Name: Add a descriptive name.

      • This value will be used later, on import.

    • Tags: Add tags to help you group similar queries together.

    • Description: Describe the purpose of the query.

    • Default Database: Choose the database to query from.

    • Is this a Scheduled Query?: Leave this toggled OFF.

  6. Click Save Query.

Example

-- pragma: template
{% macro my_query1(p, q) export %}
select '{{p}}', '{{q}}'
{% endmacro %}

{% macro my_query2(x) export %}
select {{x}}
{% endmacro %}

How to create template macros in the CLI workflow

To create template macros in the CLI workflow, follow the How to create a Saved Search in the CLI workflow instructions. When writing the SQL in the Query key:

  • Include -- pragma: template at the top.

  • Add one or more queries, with variables enclosed in double curly brackets.

  • Surround each templated query with:

    • Before the query: {% macro <macro name>(<variable_one>, <variable_two>) export %}

    • After the query: {% endmacro %}

Example:

Query: |-
    -- pragma: template
    {% macro my_query1(p, q) export %}
    select '{{p}}', '{{q}}'
    {% endmacro %}

    {% macro my_query2(x) export %}
    select {{x}}
    {% endmacro %}

Calling template macros in other queries

You can import previously created template macros, and run them by passing in variable values.

  1. In the left-hand navigation bar of your Panther Console, click Investigate > Data Explorer.

  2. In the SQL editor, add a -- pragma: template statement at the top.

    If you would like the expanded SQL to be displayed (to aid in debugging, perhaps), include -- pragma: show macro expanded.

  3. Import your previously-defined macro with a statement structured like the following:

    {% import '<name of Saved Search>' <name of macro> %}
  4. Call the macro, passing in values for its arguments, with a statement structured like the following:

    {{<name of macro>('<value of variable one>', '<value of variable two>')}}
  5. Click Run Query.

    • Optionally save the query by clicking Save as.

Full example

-- pragma: template

--loads my_query1 from the Saved Search 'example macros'
{% import 'example macros' my_query1 %}

{{my_query1('1.1.1.1', 'test')}}

Debugging template macros

When running a template macro in Data Explorer, you can view the expanded SQL statement (rather than only the template code) by adding -- pragma: show macro expanded below the -- pragma: template statement. For example, if you run the below in Data Explorer:

-- pragma: template
-- pragma: show macro expanded
{% set p = "1" %}
{% set q = "'queue'" %}

select {{p}}, {{q}}

The template will be replaced with the expanded SQL:

-- pragma: template
-- pragma: show macro expanded

select 1, 'queue'
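
Assuming the pragma behaves the same way for imported macros, adding it to the full example from earlier should likewise display the expanded select statement rather than the template code:

-- pragma: template
-- pragma: show macro expanded

--loads my_query1 from the Saved Search 'example macros'
{% import 'example macros' my_query1 %}

{{my_query1('1.1.1.1', 'test')}}

Given the definition of my_query1 shown above, the expanded SQL would be select '1.1.1.1', 'test'.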
