S3 Source

Onboarding AWS S3 as a Data Transport log source in the Panther Console


Overview

Panther supports configuring Amazon S3 as a Data Transport, allowing Panther to pull security logs from your S3 buckets. First you will set up the S3 source in your Panther Console, then you will configure your S3 bucket to send notifications when it receives new data.

Data can be sent compressed or uncompressed. Learn more about compression specifications in Ingesting compressed data in Panther.

If you are a Cloud Connected customer, create any log source infrastructure in a separate AWS account from the one your Panther deployment resides in.

See the diagram below to understand how data flows from your application(s) into Panther using S3 (in SaaS deployments):

[Diagram: Your application(s) write to an S3 bucket, which sends event notifications to an SNS topic and on to Panther's SQS queue. The Panther application, using an IAM role it assumes (with an optional KMS key), reads the objects from the S3 bucket, then parses and normalizes the events, runs real-time detections, generates alerts to alert destinations, and retains the data long-term in Snowflake.]

How to set up an AWS S3 bucket log source in Panther

To set up an S3 log source in Panther, follow the steps below. You can also view the data ingestion video overview for a quick walkthrough of S3 source setup.

The instructions below outline how to set up an S3 integration manually, in the Panther Console. It's also possible to manage your S3 log source using Terraform, or using the Panther API.

Prerequisite

If an Amazon S3 bucket does not already exist, create one by following Amazon's Creating a bucket documentation.
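If you prefer to create the bucket from the CLI rather than the AWS console, the following is a minimal sketch; the bucket name and region are placeholders, not values from this documentation.

    # Minimal sketch: create an S3 bucket to receive logs (name/region are placeholders).
    aws s3api create-bucket \
        --bucket my-security-logs-example \
        --region us-west-2 \
        --create-bucket-configuration LocationConstraint=us-west-2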

Step 1: Configure the S3 bucket source in Panther

  1. In the left-hand navigation bar of your Panther Console, click Configure > Log Sources.

  2. In the upper right corner, click Create New.

  3. Click the Custom Log Formats tile.

  4. In the AWS S3 Bucket tile on the slide-out panel, click Start.

  5. On the "Configure your source" page, enter values for the following fields:

    • Name: Enter a descriptive name for the S3 source.

    • AWS Account ID: Enter the 12-digit AWS Account ID where the S3 buckets are located.

    • Bucket Name: Enter the ID or name of the S3 bucket to onboard.

    • KMS Key ARN (optional): If your data is encrypted using KMS-SSE, provide the ARN of the KMS key.

  6. If you would like to attach schemas for this source and/or configure inclusive or exclusive bucket prefixes, click Configure Prefixes & Schemas (Optional). You can also perform these actions after the source is set up.

    1. In the S3 Prefixes & Schemas popup modal, create combinations of S3 prefixes, schemas, and exclusion filters, according to the structure of your data storage in S3.

      • To attach one or more schemas to all data in the bucket, leave the S3 Prefix field blank. This will create a wildcard (*) prefix.

    2. Click Apply Changes.

  7. Click Setup.

If you add a KMS key to your S3 bucket after creating the S3 log source in Panther, you must recreate the log source in Panther with the KMS key. Editing the original source to add the KMS key will not work.
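The AWS Account ID and Bucket Name fields above can also be looked up from the CLI if you don't have them handy; a small sketch (the bucket name is a placeholder):

    # Print the 12-digit account ID of the credentials in use.
    aws sts get-caller-identity --query Account --output text

    # Confirm the bucket exists and find its region (bucket name is a placeholder).
    aws s3api get-bucket-location --bucket my-security-logs-example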



Step 2: Set up an IAM role

To read objects from your source, Panther needs an AWS IAM role with certain permissions. To set up this role, you can choose from the following options:

  • Using the AWS Console UI

    • If this is the first Data Transport source you are setting up in Panther, select this option.

  • CloudFormation or Terraform File

  • I want to set everything up on my own

Using the AWS Console UI

Launch a CloudFormation stack using the AWS console:

  1. On the Create IAM Role page, on the Using the AWS Console UI tile, click Continue.

  2. Click Launch Console UI.

    • You will be redirected to the AWS console in a new browser tab, with the template URL pre-filled.

    • The CloudFormation stack will create an AWS IAM role with the minimum required permissions to read objects from your source.

    • Click the "Outputs" tab of the CloudFormation stack in AWS, and note the Role ARN.

  3. Navigate back to the Panther Console, and enter values in the fields:

    • (Not applicable if setting up an S3 Source) Bucket name – Required: Enter the outputted S3 bucket name.

    • Role ARN – Required: Enter the outputted IAM role ARN.

  4. Click Setup.

CloudFormation or Terraform File

Use Panther's provided CloudFormation or Terraform templates to create an IAM role:

  1. On the Create IAM Role page, on the CloudFormation or Terraform File tile, click Continue.

  2. On the CloudFormation or Terraform Template File page, depending on which Infrastructure as Code (IaC) provider you'd like to use, select either CloudFormation Template or Terraform Template.

  3. Click Download Template.

    • You can also find the Terraform template at this GitHub link.

  4. In your CLI, run the command(s) in the Workflow section.

  5. After deploying the template in your IaC pipeline, enter values in the fields:

    • (Not applicable if setting up an S3 Source) Bucket name – Required: Enter the outputted S3 bucket name.

    • Role ARN – Required: Enter the outputted IAM role ARN.

  6. Click Setup.
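The Workflow section in the Panther Console shows the exact commands to run. As a rough sketch of what deploying the downloaded CloudFormation template looks like (the file and stack names are placeholders; the template creates an IAM role, so an IAM capability flag is required):

    # Deploy the downloaded template (file/stack names are placeholders).
    aws cloudformation deploy \
        --template-file panther-s3-source-role.yml \
        --stack-name panther-s3-source-role \
        --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM

    # List the stack outputs, including the IAM role ARN to paste into Panther.
    aws cloudformation describe-stacks \
        --stack-name panther-s3-source-role \
        --query "Stacks[0].Outputs"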

I want to set everything up on my own

Create the IAM role manually, then enter the role ARN in Panther. When you set up the IAM role manually, you must also follow the "Manual IAM role creation: Additional steps" instructions below to configure your S3 buckets to send notifications when new data arrives.

  1. On the Create IAM Role page, click I want to set everything up on my own.

  2. Create an IAM role, either manually or through your own automation.

    • The IAM policy, which will be attached to the role, must include the statements defined below:

      {
          "Version": "2012-10-17",    
          "Statement": [
              {
                  "Action": "s3:GetBucketLocation",
                  "Resource": "arn:aws:s3:::<bucket-name>",
                  "Effect": "Allow"
              },
              {
                  "Action": "s3:GetObject",
                  "Resource": "arn:aws:s3:::<bucket-name>/<input-file-path>",
                  "Effect": "Allow"
              }
          ]
      }
    • If your S3 bucket is configured with server-side encryption using AWS KMS, you must include an additional statement granting the Panther API access to the corresponding KMS key. In this case, the policy will look something like this:

      {
          "Version": "2012-10-17",    
          "Statement": [
              {
                  "Action": "s3:GetBucketLocation",
                  "Resource": "arn:aws:s3:::<bucket-name>",
                  "Effect": "Allow"
              },
              {
                  "Action": "s3:GetObject",
                  "Resource": "arn:aws:s3:::<bucket-name>/<input-file-path>",
                  "Effect": "Allow"
              },
              {
                  "Action": ["kms:Decrypt", "kms:DescribeKey"],
                  "Resource": "arn:aws:kms:<region>:<your-account-id>:key/<kms-key-id>",
                  "Effect": "Allow"
              }
          ]
      }
    • In addition to the above, if you want to view the contents of your S3 bucket in the Panther Console (such as to utilize the inferring custom schemas from historical data feature), you will need to add the s3:ListBucket action:

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Action": [
                      "s3:GetBucketLocation",
                      "s3:ListBucket"
                  ],
                  "Resource": "arn:aws:s3:::<bucket-name>",
                  "Effect": "Allow"
              },
              {
                  "Action": "s3:GetObject",
                  "Resource": "arn:aws:s3:::<bucket-name>/<input-file-path>",
                  "Effect": "Allow"
              }
          ]
      }
  3. Add a trust policy to your role with the following AssumeRolePolicyDocument statement so that Panther can assume this role:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": [
              "arn:<aws-partition>:iam::<panther-master-account-id>:root"
            ]
          },
          "Action": "sts:AssumeRole",
          "Condition": {
            "Bool": {
              "aws:SecureTransport": true
            }
          }
        }
      ]
    }
    • Populate <aws-partition> with the partition of the account running the Panther backend (e.g., aws). Note that Panther does not deploy to aws-cn or aws-us-gov.

    • Populate <panther-master-account-id> with the 12-digit account ID where Panther is deployed. To find it, click the gear icon in the upper right corner of the Panther Console to access Settings; the AWS account ID is displayed at the bottom of the page.

  4. In the Panther Console, enter values in the fields:

    • (Not applicable if setting up an S3 Source) Bucket name – Required: Enter the outputted S3 bucket name.

    • Role ARN – Required: Enter the outputted IAM role ARN.

  5. Click Setup.

  6. Proceed to the "Manual IAM role creation: Additional steps" section below.
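As a reference for automation, a minimal CLI sketch of creating this role, assuming the permissions policy and trust policy shown above have been saved locally (the role, policy, and file names are placeholders):

    # Create the role with the trust policy from step 3 (names are placeholders).
    aws iam create-role \
        --role-name panther-s3-ingest-role \
        --assume-role-policy-document file://trust-policy.json

    # Attach the permissions policy from step 2 as an inline policy.
    aws iam put-role-policy \
        --role-name panther-s3-ingest-role \
        --policy-name panther-s3-read \
        --policy-document file://permissions-policy.json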


Manual IAM role creation: Additional steps

If during log source creation you opted to set up the IAM role manually, you must also follow the instructions below to configure your S3 bucket to send notifications when new data arrives.

Step 1: Create or modify an SNS topic

How to create an SNS topic

First, you need to create an SNS topic and SNS subscription to notify Panther that new data is ready for processing.

Only one SNS topic per AWS account is required, meaning multiple S3 buckets within one AWS account can all use the same SNS topic. If you've already created an SNS topic for a different S3 bucket in the same AWS account, you can skip this step.

Note: If you have already configured the bucket to send All object create events to an SNS topic, instead follow the "How to modify an existing SNS topic" instructions below, and subscribe that topic to Panther's input data queue.

  1. Log into the AWS Console of the account that owns the S3 bucket.

  2. Select the AWS Region where your S3 bucket is located and navigate to the CloudFormation console. In the Stacks section, click Create Stack (with new resources).

  3. Under the "Specify template" section, enter the following Amazon S3 URL:

    https://panther-public-cloudformation-templates.s3-us-west-2.amazonaws.com/panther-log-processing-notifications/latest/template.yml
  4. Specify the following stack details:

    • Stack name: A name of your choice, e.g. panther-log-processing-notifications-<bucket-label>

    • MasterAccountId: The 12-digit AWS Account ID where Panther is deployed

    • PantherRegion: The region where Panther is deployed

    • SnsTopicName: The name of the SNS topic receiving the notification. The default value is panther-notifications-topic

  5. Click Next, Next, and then Create Stack to complete the process.

    • This stack has one output: SnsTopicArn.
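The same stack can be launched from the CLI instead of the console; a sketch using the template URL above (the parameter values are placeholders):

    # Launch the notifications stack (parameter values are placeholders).
    aws cloudformation create-stack \
        --stack-name panther-log-processing-notifications \
        --template-url https://panther-public-cloudformation-templates.s3-us-west-2.amazonaws.com/panther-log-processing-notifications/latest/template.yml \
        --parameters \
            ParameterKey=MasterAccountId,ParameterValue=111122223333 \
            ParameterKey=PantherRegion,ParameterValue=us-west-2 \
            ParameterKey=SnsTopicName,ParameterValue=panther-notifications-topic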

How to modify an existing SNS topic

Follow the steps below if you wish to use an existing SNS topic for sending S3 bucket notifications. Note that the SNS topic must be in the same region as your S3 bucket.

Step 1: Enable KMS encryption for the SNS topic

  1. Log in to the AWS console and navigate to KMS.

  2. Select the KMS key you want to use for encryption.

  3. Edit the key's policy to ensure it has the appropriate permissions to be used with the SNS topic and S3 bucket notifications.

    • Example policy:

      {
          "Sid": "Allow access for Key User (SNS Service Principal)",
          "Effect": "Allow",
          "Principal": {
              "Service": "sns.amazonaws.com"
          },
          "Action": [
              "kms:GenerateDataKey*",
              "kms:Decrypt"
          ],
          "Resource": "<SNS-TOPIC-ARN>"
      },
      {
          "Sid": "Allow access for Key User (S3 Service Principal)",
          "Effect": "Allow",
          "Principal": {
              "Service": "s3.amazonaws.com"
          },
          "Action": [
              "kms:GenerateDataKey*",
              "kms:Decrypt"
          ],
          "Resource": "arn:aws:s3:::<bucket-name>"
      }
  4. Click the Encryption tab under the SNS topic.

  5. Click Enable, and specify the KMS key you want to use for encryption.
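From the CLI, encryption can be enabled by setting the topic's KmsMasterKeyId attribute; a sketch with a placeholder topic ARN and key alias:

    # Enable KMS encryption on the SNS topic (ARN and key alias are placeholders).
    aws sns set-topic-attributes \
        --topic-arn arn:aws:sns:us-west-2:111122223333:panther-notifications-topic \
        --attribute-name KmsMasterKeyId \
        --attribute-value alias/my-sns-key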

Step 2: Modify SNS topic Access Policy

Create a subscription between your SNS topic and Panther's log processing SQS queue.

  1. Navigate to the SNS console and select the SNS topic currently receiving events. Note the ARN of this SNS topic.

  2. Click Edit and scroll down to the Access Policy card.

  3. Add the following statement to the topic's Access Policy:

    {
      "Sid": "CrossAccountSubscription",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<PANTHER-MASTER-ACCOUNT-ID>:root"
      },
      "Action": "sns:Subscribe",
      "Resource": "<SNS-TOPIC-ARN>"
    }
    • Populate <PANTHER-MASTER-ACCOUNT-ID> with the 12-digit account ID where Panther is deployed. This AWS account ID can be found in your Panther Console at the bottom of the page after navigating to Settings by clicking the gear icon.

    • Populate <SNS-TOPIC-ARN> with the ARN of the SNS topic you noted in the previous step.
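The same edit can be scripted by reading the topic's Policy attribute, merging in the statement above yourself, and writing it back; a sketch with a placeholder topic ARN:

    # Fetch the current access policy for local editing (ARN is a placeholder).
    aws sns get-topic-attributes \
        --topic-arn arn:aws:sns:us-west-2:111122223333:panther-notifications-topic \
        --query "Attributes.Policy" --output text > topic-policy.json

    # After merging the CrossAccountSubscription statement into topic-policy.json:
    aws sns set-topic-attributes \
        --topic-arn arn:aws:sns:us-west-2:111122223333:panther-notifications-topic \
        --attribute-name Policy \
        --attribute-value file://topic-policy.json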

Step 3: Create SNS subscription to SQS

Create the subscription to the Panther Master account's SQS queue.

  1. From the SNS console, click Subscriptions.

  2. Click Create subscription.

  3. Fill out the form:

    • Topic ARN: Select the SNS topic you would like to use.

    • Protocol: Select Amazon SQS.

    • Endpoint: arn:aws:sqs:<PantherRegion>:<MasterAccountId>:panther-input-data-notifications-queue

    • Enable raw message delivery: Do not check this box. Raw message delivery must be disabled.

  4. Click Create subscription.

If your subscription is in a "Pending" state and does not get confirmed immediately, you must finish setting up this log source in your Panther Console. Panther confirms the SNS subscription only if a Panther log source exists for the AWS account of the SNS topic.
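The equivalent CLI call is sketched below with placeholder account IDs and region; newly created subscriptions have raw message delivery disabled by default, which is what Panther requires:

    # Subscribe Panther's input queue to the topic (ARNs are placeholders).
    aws sns subscribe \
        --topic-arn arn:aws:sns:us-west-2:111122223333:panther-notifications-topic \
        --protocol sqs \
        --notification-endpoint arn:aws:sqs:us-west-2:444455556666:panther-input-data-notifications-queue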

Step 2: Configure event notifications on the S3 bucket

With the SNS topic created, the final step is to enable notifications from the S3 bucket.

  1. Navigate to the AWS S3 console, select the relevant bucket, and click the Properties tab.

  2. Locate the Event notifications card.

  3. Click Create event notification and use the following settings:

    • In the General Configuration section:

      • Event name: PantherEventNotifications

      • Prefix (optional): Limits notifications to objects with keys that start with matching characters

      • Suffix (optional): Limits notifications to objects with keys that end in matching characters

        • Avoid creating multiple filters that use overlapping prefixes and suffixes; otherwise, your configuration will not be considered valid.

    • In the Event Types card, check the box next to All object create events.

    • In the Destination card:

      • Under Destination, select SNS topic.

      • For SNS topic, select the SNS topic you created or modified in an earlier step.

        • If you used the default topic name in the CloudFormation template provided, the SNS topic is named panther-notifications-topic.

4. Click Save.

  • Return to "Step 3: Finish the source setup," above.


If you are using a custom SNS topic, ensure it has the correct policies set and a subscription to the Panther SQS queue.
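If you manage bucket notifications as code rather than in the console, the equivalent API call is sketched below; note that put-bucket-notification-configuration replaces the bucket's entire notification configuration, so include any existing rules in the file. The bucket name and topic ARN are placeholders.

    # notification.json (placeholder values):
    # {
    #     "TopicConfigurations": [
    #         {
    #             "Id": "PantherEventNotifications",
    #             "TopicArn": "arn:aws:sns:us-west-2:111122223333:panther-notifications-topic",
    #             "Events": ["s3:ObjectCreated:*"]
    #         }
    #     ]
    # }

    # Apply the configuration (this overwrites existing event notifications).
    aws s3api put-bucket-notification-configuration \
        --bucket my-security-logs-example \
        --notification-configuration file://notification.json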

Viewing ingested logs

After your log source is configured, you can search ingested data using Search or Data Explorer.

Recommended S3 bucket expiration policy

It is recommended to keep the data added to your S3 bucket for at least seven days before expiring it. Under normal circumstances, Panther processes new objects within minutes of their being added to your S3 bucket; however, if the Panther ingestion service is experiencing availability issues, it could take longer for new objects to be processed.
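A lifecycle rule implementing this recommendation might look like the following sketch; the bucket name is a placeholder, and seven days matches the minimum suggested above:

    # lifecycle.json: expire all objects seven days after creation.
    # {
    #     "Rules": [
    #         {
    #             "ID": "expire-ingested-logs",
    #             "Status": "Enabled",
    #             "Filter": {"Prefix": ""},
    #             "Expiration": {"Days": 7}
    #         }
    #     ]
    # }

    aws s3api put-bucket-lifecycle-configuration \
        --bucket my-security-logs-example \
        --lifecycle-configuration file://lifecycle.json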