Operations

Assessing Data Ingest Volume

You can use the Panther API metrics operations to measure the total number of bytes and events that Panther ingested or processed over a specific time period. For other methods of assessing data ingestion volume, see the sections below.

SaaS deployments of Panther

Please reach out to your Panther account team for help accessing this data.

Self-Hosted and CPaaS deployments of Panther

The information below applies to legacy Self-Hosted and CPaaS deployment types. Panther no longer supports these deployment types for new accounts.
The Panther log analysis CloudWatch dashboard provides deep insight into operationally relevant aspects of log processing. Understanding your ingest volume is critically important for forecasting the cost of running Panther.
In the Dashboard of your Panther Console, you can view the volume of logs ingested. This can be used, in combination with your AWS bill, to forecast costs as you scale your data. We suggest you use a month of data to estimate your costs.
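As a rough illustration of the forecasting described above, the hypothetical sketch below extrapolates one observed month of ingest to a yearly figure. The function name and the 840 GB sample value are made up for this example:

```python
# Hypothetical cost-forecasting helper: extrapolate observed uncompressed
# ingest (e.g. from the "Input MBytes (Uncompressed) by Log Type" tile)
# to a 365-day estimate. The figures below are illustrative only.

def observed_to_yearly_gb(observed_gb: float, observed_days: int) -> float:
    """Scale a short observation window to a full-year ingest estimate."""
    return observed_gb / observed_days * 365

# Suppose the dashboard reported 840 GB over the suggested 4-week window:
yearly_gb = observed_to_yearly_gb(840, 28)
print(round(yearly_gb))  # 10950
```

You would then combine this yearly volume with the per-GB figures from your AWS bill to project cost as your data scales.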
To view the CloudWatch dashboard:
  1. Log in to the AWS Console.
  2. Click CloudWatch from the Services menu.
  3. Click Dashboards from the left sidebar of the CloudWatch console.
  4. Click the dashboard name beginning with PantherLogAnalysis.
  5. Click the three dots icon in the upper right corner of the tile titled "Input MBytes (Uncompressed) by Log Type". In the dropdown menu, click View in CloudWatch Insights.
  6. Set the time period to 4 weeks, then click Apply.
  7. At the top of the Logs Insights page, click Run Query.

Tools

Panther comes with operational tools useful for managing Panther infrastructure. The tools are statically compiled executables for Linux, macOS (Darwin), and Windows.
These tools require AWS credentials with sufficient privileges to be set in the environment. We recommend managing these credentials securely with a tool such as AWS Vault.
Do not run any of these tools unless specifically advised by a Panther team member.

How to download Panther's tools

Instructions for downloading the tools differ depending on your Panther version.

Step 1: Confirm your Panther version

  1. Log in to the Panther Console.
  2. In the upper right corner, click your user icon. The Panther version is shown at the bottom of the dropdown menu.

Panther v1.27+

You can download each tool individually from an S3 URL with the following format:
https://panther-community-us-east-1.s3.amazonaws.com/{version}/tools/{os}-{arch}-{tool}.zip
Replace the placeholder text in the download URL as described below:
  • version: The version of Panther you have deployed, e.g. v1.27.0
  • os: Use one of the following: darwin, linux, or windows
  • arch: Use amd64 or arm64
  • tool: The name of the tool you are downloading. See Tool Names for a list of available tools.
An example of a complete tool link: https://panther-community-us-east-1.s3.amazonaws.com/v1.27.0/tools/darwin-amd64-snowconfig.zip
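The URL format above can be sketched as a small helper. This is a hypothetical convenience function, not part of Panther's tooling; it simply substitutes the four placeholders:

```python
# Hypothetical helper: assemble a per-tool download URL for Panther v1.27+.
BASE = "https://panther-community-us-east-1.s3.amazonaws.com"

def tool_url(version: str, os_name: str, arch: str, tool: str) -> str:
    """Build the {version}/tools/{os}-{arch}-{tool}.zip download URL."""
    return f"{BASE}/{version}/tools/{os_name}-{arch}-{tool}.zip"

# Reproduces the example link from the docs:
print(tool_url("v1.27.0", "darwin", "amd64", "snowconfig"))
```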

Panther v1.26.x and Older

In these versions of Panther, all tools are bundled together in a single zipfile. You can download the tools from a link using this format:
https://panther-community-us-east-1.s3.amazonaws.com/{version}/tools/{architecture}.zip
Replace the placeholder text in the download URL as described below:
  • version: The version of Panther you have deployed, e.g. v1.23.3
  • architecture: Use one of the following:
    • darwin-amd64
    • linux-amd64
    • linux-arm
    • windows-amd64
    • windows-arm
Each zip archive will contain the entire set of tools. See Tool Names for a list of all available tools.
An example of a full link to the set of tools: https://panther-community-us-east-1.s3.amazonaws.com/v1.23.3/tools/darwin-amd64.zip
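Analogously to the newer per-tool URLs, the legacy bundle URL can be sketched as a small helper that also validates the architecture against the list above. The function name is hypothetical:

```python
# Hypothetical helper: assemble the all-in-one tools URL for Panther v1.26.x
# and older, checking the architecture against the documented choices.
VALID_ARCHS = {"darwin-amd64", "linux-amd64", "linux-arm",
               "windows-amd64", "windows-arm"}

def bundle_url(version: str, architecture: str) -> str:
    """Build the {version}/tools/{architecture}.zip download URL."""
    if architecture not in VALID_ARCHS:
        raise ValueError(f"unsupported architecture: {architecture}")
    return ("https://panther-community-us-east-1.s3.amazonaws.com"
            f"/{version}/tools/{architecture}.zip")

# Reproduces the example link from the docs:
print(bundle_url("v1.23.3", "darwin-amd64"))
```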

Tool Names

Running these commands with the -h flag will explain usage.
  • analytics-backfiller: backfill backend analytics via an EventBridge bus
  • checker: compares detection entities to every panther-analysis release
  • compact: backfill JSON-to-Parquet conversion of log data
  • cost: generates cost reports using the Cost Explorer API
  • filegen: writes synthetic log files to S3 for use in benchmarking
  • flushrsc: flush "delete pending" entries from the panther-resource table
  • gluerecover: scans S3 for missing AWS Glue partitions and recovers them
  • gluesync: updates Glue table and partition schemas
  • historicalmigrate: migrate historical data from Athena to Snowflake
  • migrate: utility to do a data migration for the gsuite_reports table (log & rule table)
  • opslambda: invokes the panther-ops-tools Lambda function to handle some common ops tasks. Over time, more opstool functionality will move into this function
    • invite: invites a Panther admin user
    • requeue: copy messages from one SQS queue to another, typically to replay DLQ messages
  • pantherlog: parse logs using built-in or custom schemas
  • s3list: lists all objects across all sources and outputs the S3 objects to a file
  • s3queue: lists files under an S3 path and sends them to the log processor input queue for processing (useful for backfilling data)
  • s3sns: lists S3 objects and posts S3 notifications to the Panther log processor SNS topic
  • s3undelete: removes S3 delete markers from a versioned bucket
  • snowconfig: uses an account-admin-enabled Snowflake user to configure the databases and roles for the Panther users
  • snowcopy: uses the shares created by snowshare to copy data into a new account
  • snowcreate: uses the Panther Snowflake ORG admin account and credentials to create new Snowflake accounts
  • snowmelt: uses an account-admin-enabled Snowflake user to destroy the databases and roles for the Panther users
  • snowrepair: generates a DDL file to configure Snowflake to ingest Panther data
  • snowrotate: uses an account-admin-enabled Snowflake user to rotate the credentials for the two Panther users
  • snowshare: creates Snowflake data shares of Panther databases between a source and a target account
  • sources: lists all log sources, optionally validates each log processing role can be assumed and data accessed
  • updater: takes the CSV output of the checker tool and auto-applies the recommended actions

Monitoring

The information below applies to legacy Self-Hosted and CPaaS deployment types. Panther no longer supports these deployment types for new accounts.

Visibility

Panther has 5 CloudWatch dashboards to provide visibility into the operation of the system:
  • PantherOverview: An overview of all errors and performance of all Panther components.
  • PantherCloudSecurity: Details of the components monitoring infrastructure for CloudSecurity.
  • PantherAlertProcessing: Details of the components that relay alerts for CloudSecurity and Log Processing.
  • PantherLogAnalysis: Details of the components processing logs and running rules.
  • PantherRemediation: Details of the components that remediate infrastructure issues.

Alarms

Panther uses CloudWatch Alarms to monitor the health of each component. Edit the deployments/panther_config.yml file to associate an SNS topic you have created with the Panther CloudWatch alarms to receive notifications. If this value is blank then Panther will associate alarms with the default Panther SNS topic called panther-alarms:
MonitoringParameterValues:
  # This is the ARN for the SNS topic you want associated with Panther system alarms.
  # If this is not set, alarms will be associated with the SNS topic `panther-alarms`.
  AlarmSNSTopicARN: 'arn:aws:sns:us-east-1:05060362XXX:MyAlarmSNSTopic'
To configure alarms to send to your team, follow the guides below:
  • Note: PagerDuty cannot handle composite CloudWatch alarms, which Panther uses to avoid duplicate pages to on-call staff. As a workaround, you can use a Custom Event Transformer.
    Follow the instructions using the code below:
    var details = JSON.parse(PD.inputRequest.rawBody);
    var description = "unknown event";
    if ("AlarmDescription" in details) { // looks like a CloudWatch event ...
      var descLines = details.AlarmDescription.split("\n");
      description = (descLines.length > 1) ? descLines[0] + " " + descLines[1] : descLines[0];
    }
    var normalized_event = {
      event_type: PD.Trigger,
      description: description,
      incident_key: description,
      details: details
    };
    PD.emitGenericEvents([normalized_event]);
    Configure the SNS topic to use RawMessageDelivery: true when creating the PagerDuty subscription.