PantherFlow (Beta)

PantherFlow is Panther's pipelined query language

Overview

PantherFlow is in open beta starting with Panther version 1.110, and is available to all customers. Please share any bug reports and feature requests with your Panther support team.

PantherFlow is Panther's pipelined query language. It's designed to be simple to understand, yet powerful and expressive.

Use PantherFlow to explore and analyze your data in Panther. With its operators and functions, you can perform a variety of data operations, such as filtering, transformations, and aggregations—in addition to visualizing your results as a bar or line chart. PantherFlow is schema-flexible, meaning you can seamlessly search across multiple data sources (including those with different schemas) in a single query.

PantherFlow queries use pipes (|) to delineate data operations, which are processed sequentially. This means the output of a query's first operator is passed as the input to the second operator, and so on. See an example query below:

panther_logs.public.okta_systemlog
| where p_event_time > time.ago(1d)
| search 'doug'
| summarize agg.count() by eventType 

Where to use PantherFlow

Use PantherFlow to query data in Search. Learn how to use PantherFlow in Search in the Search documentation.

To assist your query writing, the PantherFlow code editor in Search has autocomplete, error underlining, hover tooltips, inlay hints, and function signature assistance.

How a PantherFlow query works

The term "PantherFlow query" typically refers to a tabular expression statement, which retrieves a dataset and returns it in some form (in contrast to a let statement, which binds a name for use elsewhere in the query). A tabular expression statement usually contains operators separated by pipes (|). Each operator performs some action on the data (e.g., filtering or transforming it) before passing it on to the next operator. Operator order is important, as PantherFlow statements are processed sequentially.
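As a sketch of how the two statement types relate, a let statement can bind a value that a subsequent tabular expression statement references. The exact binding syntax below (trailing semicolon after the let) is an assumption based on Kusto-style pipelined languages; confirm it against the PantherFlow statement reference:

let window = 1d;
panther_logs.public.aws_alb
| where p_event_time > time.ago(window)
| limit 10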

See an overview of PantherFlow syntax on PantherFlow Quick Reference, or explore syntax topics in more detail.

Example

Let's explore the following PantherFlow query:

panther_logs.public.aws_alb
| where p_event_time > time.ago(1d)
| sort p_event_time
| limit 10

In short, this query reads data from the aws_alb table, filters out events older than one day, sorts the remaining events by time, and returns at most 10 events.

Let's take a deeper look at each line:

  1. panther_logs.public.aws_alb

    • This statement identifies the data source.

    • This query is reading from the panther_logs.public.aws_alb table. If the query contained only this line, all data in the table would be returned.

  2. | where p_event_time > time.ago(1d)

    • The where operator takes an expression to filter the data.

This query is requesting data where the p_event_time field value is greater than the time one day ago. In other words, it's asking for events that occurred within the last day. The time.ago() function subtracts its argument from the current time, and that argument (1d) is a duration constant representing one day.

  3. | sort p_event_time

    • The sort operator lets you order events by one or more field values.

    • This query orders data by p_event_time. Because the default sort order is descending, the most recent event will be returned first.

  4. | limit 10

    • The limit operator defines how many events you'd like returned, at most.

    • This query is requesting no more than 10 events.
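Building on the walkthrough above, the filtered events could instead be aggregated with the summarize operator shown in the Overview example. The targetIp field name here is borrowed from the best-practices examples later on this page and stands in for whichever field you want to group by:

panther_logs.public.aws_alb
| where p_event_time > time.ago(1d)
| summarize agg.count() by targetIp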

See additional query examples.

Limitations of PantherFlow

Best practices when using PantherFlow

To ensure your PantherFlow query results return as quickly as possible (and to minimize Snowflake costs arising from the search), it's recommended to follow these best practices:

  • Use the limit operator

    • Use the limit operator to specify the maximum number of records your query will return.

    • Example: panther_logs.public.aws_alb | limit 100

  • Use a time range filter

    • Use the where operator to filter by a time range (perhaps against p_event_time). A query with a time range filter will access fewer micro-partitions, which returns results faster.

    • Example: panther_logs.public.aws_alb | where p_event_time > time.ago(1d)

  • Use p_any fields

    • During log ingestion, Panther extracts common security indicators into p_any fields, which standardize attribute names across all data sources. The p_any fields are stored in optimized columns. It's recommended to query p_any fields instead of various differently named fields for multiple log types.

    • Learn more on Standard Fields.

    • Example: panther_logs.public.aws_alb | where '10.0.0.0' in p_any_ip_addresses

  • Use the project operator

    • A query without a project operator retrieves all columns, which can slow down queries. When possible, use project to query only the fields you need to investigate.

    • Example: panther_logs.public.aws_alb | project targetIp, targetPort

  • Summarize results

    • Summaries execute faster than queries fetching full log records. Using a summary is especially helpful when you're investigating logs over a long period of time, or when you don't know how much data volume exists for the time range you're investigating.

    • Instead of querying the full data set, use the summarize operator, which will execute faster and help you determine a narrower timeframe to query next.

    • Example: panther_logs.public.aws_alb | summarize count=agg.count() by targetIp

  • Filter data early

    • Filter data before performing expensive operations, such as summarize or join, rather than after.

    • Example:

      • Instead of: panther_logs.public.aws_alb | summarize agg.count() by actor | where actor != nil

      • Use: panther_logs.public.aws_alb | where actor != nil | summarize agg.count() by actor

  • Avoid the search operator, if possible

    • The search operator scans all columns in the specified database/table and can be slow, so avoid it unless necessary. If you know which column (or columns) might contain the text you're looking for, use where with strings.contains() instead of search.

    • Example:

      • Instead of: | search 'alice'

      • Use: | where strings.contains(name, 'alice')
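Putting several of these practices together, here is a sketch of a query that filters by time early, projects only the needed columns, and caps the result size (the field names are reused from the examples above):

panther_logs.public.aws_alb
| where p_event_time > time.ago(1d)
| project p_event_time, targetIp, targetPort
| limit 100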

If your query is still running slowly after implementing the best practices above:

  • Check the number of returned rows to see how much data you're querying.

    • If you're querying a large amount of data, a longer run time is expected.

  • Reduce the time range you're querying.

  • Reach out to your Panther Support team for additional help.
