You can define your S3 log source entirely in Terraform: create the S3 bucket and associated infrastructure in AWS, then onboard it to your Panther instance using the Panther Terraform provider.
How to define your Panther S3 log source in Terraform
The following sections outline how to define your S3 log source in HashiCorp Configuration Language (HCL). You will define both AWS and Panther infrastructure in HCL.
Prerequisite
Before starting, ensure you have a GraphQL API URL and token with the Manage Log Sources permission. This is required to complete Step 4.
Step 1: Define Terraform variables
Define a *.tf file (for example, variables.tf) with the following AWS and Panther variables.
variable "aws_account_id" { type =string description ="The AWS account ID where the template is being deployed"}variable "panther_aws_account_id" { type =string description ="The AWS account ID of your Panther instance"}variable "panther_aws_region" { type =string default ="us-east-1" description ="The region where the Panther instance is deployed"}variable "panther_aws_partition" { type =string default ="aws" description ="AWS partition of the account running the Panther backend e.g aws, aws-cn, or aws-us-gov"}variable "s3_bucket_name" { type =string description ="The S3 Bucket name to onboard"}variable "log_source_name" { type =string description ="The name of the log source to be created in Panther"}variable "panther_api_token" { type =string}variable "panther_api_url" { type =string}
Step 2: Define Terraform providers
Include both the AWS and Panther Terraform providers.
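A minimal provider configuration might look like the following sketch. The Panther provider source address (`panther-labs/panther`) and the `token` and `url` argument names are assumptions; verify them against the provider's listing in the Terraform Registry.

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    # Assumed source address for the Panther provider; verify in the Terraform Registry.
    panther = {
      source = "panther-labs/panther"
    }
  }
}

provider "aws" {
  region = var.panther_aws_region
}

# The token and url argument names are assumptions; consult the provider docs.
provider "panther" {
  token = var.panther_api_token
  url   = var.panther_api_url
}
```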
Step 3: Define AWS infrastructure
In AWS, you need to create an S3 bucket and an SNS topic. To ingest logs from an S3 bucket, the bucket must publish notifications to the SNS topic on object creation. A subscription on this topic then forwards object information to Panther's input queue.
Define S3 bucket
The following HCL configuration defines the S3 bucket and associated IAM role for accessing its contents. This role requires read permissions on the S3 bucket, as it will be assumed by your Panther instance to read incoming logs.
resource "aws_s3_bucket" "log_bucket" { bucket = var.s3_bucket_name}resource "aws_iam_role" "log_processing_role" { name ="PantherLogProcessingRole-${var.s3_bucket_name}"# Policy that grants an entity permission to assume the role. assume_role_policy =jsonencode({ Version :"2012-10-17", Statement : [ { Action :"sts:AssumeRole", Effect :"Allow", Principal : { AWS :"arn:${var.aws_partition}:iam::${var.panther_aws_account_id}:root" } Condition : { Bool : { "aws:SecureTransport":true } } } ] }) tags = { Application ="Panther" }}# Provides an IAM role inline policy for reading S3 Dataresource "aws_iam_role_policy" "read_data_policy" { name ="ReadData" role = aws_iam_role.log_processing_role.id policy =jsonencode({ Version :"2012-10-17", Statement : [ { Effect :"Allow", Action : ["s3:GetBucketLocation","s3:ListBucket", ], Resource :"arn:${var.aws_partition}:s3:::${aws_s3_bucket.log_bucket.bucket}" }, { Effect :"Allow", Action :"s3:GetObject", Resource :"arn:${var.aws_partition}:s3:::${aws_s3_bucket.log_bucket.bucket}/*" }, ] })}
Define SNS topic
The following HCL configuration creates the SNS topic and related policy for enabling S3 bucket notifications. It also creates a subscription to forward messages to Panther's input data notifications queue.
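The configuration for this step is not shown in the original; the sketch below illustrates what it might look like. The Panther input queue name (`panther-input-data-notifications-queue`) is an assumption and should be verified against your Panther instance's documentation.

```hcl
resource "aws_sns_topic" "panther_notifications_topic" {
  name = "panther-notifications-topic"
}

# Allow S3 to publish bucket notifications, and allow the Panther account to subscribe.
resource "aws_sns_topic_policy" "panther_notifications_topic_policy" {
  arn = aws_sns_topic.panther_notifications_topic.arn

  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Sid       = "AllowS3EventNotifications",
        Effect    = "Allow",
        Principal = { Service = "s3.amazonaws.com" },
        Action    = "sns:Publish",
        Resource  = aws_sns_topic.panther_notifications_topic.arn
      },
      {
        Sid    = "AllowPantherSubscription",
        Effect = "Allow",
        Principal = {
          AWS = "arn:${var.panther_aws_partition}:iam::${var.panther_aws_account_id}:root"
        },
        Action   = "sns:Subscribe",
        Resource = aws_sns_topic.panther_notifications_topic.arn
      }
    ]
  })
}

# Forward messages to Panther's input data notifications queue (queue name assumed).
resource "aws_sns_topic_subscription" "panther_subscription" {
  topic_arn = aws_sns_topic.panther_notifications_topic.arn
  protocol  = "sqs"
  endpoint  = "arn:${var.panther_aws_partition}:sqs:${var.panther_aws_region}:${var.panther_aws_account_id}:panther-input-data-notifications-queue"
}

# Write object-created notifications from the bucket to the topic.
resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = aws_s3_bucket.log_bucket.id

  topic {
    topic_arn = aws_sns_topic.panther_notifications_topic.arn
    events    = ["s3:ObjectCreated:*"]
  }
}
```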
The same SNS topic can be used for multiple S3 bucket integrations.
Step 4: Define the Panther log source
The following HCL configuration defines the S3 log source in Panther. To complete this section, you will need the API URL and token described in the Prerequisite section.
Note that panther_managed_bucket_notifications_enabled is set to false. This indicates that all of the infrastructure related to this log source is being managed externally, in this case through Terraform.
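A sketch of the log source definition follows. The resource type (`panther_s3_source`) and its attribute names are assumptions based on the Panther Terraform provider and may differ across provider versions; check the provider's resource documentation for the exact schema.

```hcl
resource "panther_s3_source" "log_source" {
  # Name and bucket details come from the variables defined in Step 1.
  name                    = var.log_source_name
  aws_account_id          = var.aws_account_id
  bucket_name             = var.s3_bucket_name
  log_processing_role_arn = aws_iam_role.log_processing_role.arn

  # The bucket notification infrastructure (SNS topic, subscription) is
  # managed externally in Terraform, not by Panther.
  panther_managed_bucket_notifications_enabled = false
}
```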