# CloudWatch Logs Source

## Overview

Panther supports configuring CloudWatch Logs as a Data Transport, letting you pull security logs from CloudWatch into your Panther account.

To enable [real-time processing of log data](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Subscriptions.html), Panther creates a [Firehose Delivery Stream](https://aws.amazon.com/kinesis/data-firehose) and an S3 bucket that serves as the Delivery Stream's destination. A [subscription filter](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CreateSubscriptionFilterFirehose.html) is then configured on the CloudWatch Logs log group, with the Firehose Delivery Stream as its destination. Finally, the IAM role Panther assumes is granted the read permissions needed to process the files Firehose writes to the newly created S3 bucket.

More details on this process can be found in Amazon's documentation: [AWS Cloudwatch Logs documentation for subscriptions](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CrossAccountSubscriptions-Firehose.html).
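Firehose delivers CloudWatch Logs subscription data to the S3 bucket as gzip-compressed JSON records. As a rough illustration of the payload shape (the field names follow the CloudWatch Logs subscription format; the sample values below are hypothetical), a record can be decoded like this:

```python
import gzip
import json

# A sample CloudWatch Logs subscription payload (hypothetical values),
# gzip-compressed the way Firehose delivers it to S3.
payload = {
    "messageType": "DATA_MESSAGE",
    "owner": "123456789012",
    "logGroup": "/aws/lambda/example-function",
    "logStream": "2024/01/01/[$LATEST]abcdef",
    "subscriptionFilters": ["panther-subscription-filter"],
    "logEvents": [
        {"id": "0", "timestamp": 1704067200000, "message": "START RequestId"},
    ],
}
compressed = gzip.compress(json.dumps(payload).encode("utf-8"))

# Decoding: what a consumer reading the delivered S3 object would do.
record = json.loads(gzip.decompress(compressed))
print(record["logGroup"])        # /aws/lambda/example-function
print(len(record["logEvents"]))  # 1
```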

{% hint style="warning" %}
If you are a [Cloud Connected](https://docs.panther.com/system-configuration/panther-deployment-types/cloud-connected) customer, create any log source infrastructure (such as S3 buckets or IAM roles) in a separate AWS account from the one your Panther deployment resides in.
{% endhint %}

See the diagram below to understand how data flows from your application(s) into Panther using CloudWatch Logs (in [SaaS](https://docs.panther.com/system-configuration/panther-deployment-types#saas)):

<figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-616f9694b64bc765a742b4219a2a1b527ce38a39%2FData_Transport_CloudWatch.png?alt=media" alt="A diagram shows how data flows from a customer application into Panther, using the CloudWatch Data Transport. The flow is as follows: Application(s), CloudWatch log group, Subscription filter, Kinesis Firehose, S3 bucket, SNS topic, SQS, Panther application, IAM Role (assumed by Panther, S3 bucket, Panther application, parse &#x26; normalize, real-time detections, Long term retention in Snowflake, Alerts generated, Alert destination"><figcaption></figcaption></figure>

## How to set up a CloudWatch log source in Panther

### Step 1: Configure CloudWatch in the Panther Console

1. In the left-hand navigation bar of your Panther Console, click **Configure** > **Log Sources**.
2. In the upper-right corner, click **Create New**.
3. Click the **AWS CloudWatch Logs** tile.
4. On the **Configure** page, fill in the fields:
   * **Name**: Enter a descriptive name for the CloudWatch Logs source.
   * **Log Group Name**: Enter the name of the CloudWatch log group.
   * **AWS Account ID**: Enter the ID of the AWS account that your CloudWatch log group lives in.
   * **Pattern Filter (optional)**: Use this field to filter the log data received from CloudWatch. Read more in [Amazon's documentation on filter and pattern syntax](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html).
   * **Log Types**: Select the Log Types Panther should use to parse CloudWatch logs. You must select at least one Log Type from the dropdown menu.
5. Click **Setup**.
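The pattern filter is evaluated by CloudWatch itself before data reaches Panther. As a rough mental model of the simplest pattern form (a space-separated list of unquoted terms that must all appear in a log line), consider this hypothetical approximation; real evaluation also supports quoted phrases, exclusion, and JSON selectors:

```python
# Rough approximation of CloudWatch's simplest filter-pattern form:
# every unquoted term in the pattern must appear as a word in the
# message. (Actual matching happens server-side in CloudWatch.)
def matches_simple_pattern(pattern: str, message: str) -> bool:
    terms = pattern.split()
    words = message.split()
    return all(term in words for term in terms)

print(matches_simple_pattern("ERROR", "2024-01-01 ERROR timeout"))        # True
print(matches_simple_pattern("ERROR timeout", "2024-01-01 ERROR retry"))  # False
```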

### Step 2: Set up an IAM role

To read objects from your source, Panther needs an AWS IAM role with certain permissions. To set up this role, you can choose from the following options:

* **Using the AWS Console UI**
  * If this is the first Data Transport source you are setting up in Panther, select this option.
* **CloudFormation or Terraform File**
* **I want to set everything up on my own**

<figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-ad7cc4be0a5e6b30ae75119003691a5ea8212510%2FScreenshot%202025-04-08%20at%2012.31.12%E2%80%AFPM.png?alt=media" alt="On the IAM Role Setup page, there are three options: Using the AWS Console UI, CloudFormation or Terraform File, or I want to set everything up on my own"><figcaption></figcaption></figure>

{% tabs %}
{% tab title="Using the AWS Console UI" %}
**Using the AWS Console UI**

Launch a CloudFormation stack using the AWS console:

1. On the **Create IAM Role** page, on the **Using the AWS Console UI** tile, click **Continue**.
2. Click **Launch Console UI**.
   * You will be redirected to the AWS console in a new browser tab, with the template URL pre-filled.
   * The CloudFormation stack will create an AWS IAM role with the minimum required permissions to read objects from your source.
   * Click the **Outputs** tab of the CloudFormation stack in AWS, and note the Role ARN.
3. Navigate back to the Panther Console, and enter values in the fields:
   * (Not applicable if setting up an S3 Source) **Bucket name – Required**: Enter the outputted S3 bucket name.
   * **Role ARN – Required**: Enter the outputted IAM role ARN.
4. Click **Setup**.
   {% endtab %}

{% tab title="CloudFormation or Terraform File" %}
**CloudFormation or Terraform File**

Use Panther's provided CloudFormation or Terraform templates to create an IAM role:

1. On the **Create IAM Role** page, on the **CloudFormation or Terraform File** tile, click **Continue**.
2. On the **CloudFormation or Terraform Template File** page, depending on which Infrastructure as Code (IaC) provider you'd like to use, select either **CloudFormation Template** or **Terraform Template**.
3. Click **Download Template**.
   * You can also find the Terraform template at [this GitHub link](https://github.com/panther-labs/panther-auxiliary/tree/9365346d8698e730bd623086e24ca6f2a34c4b5c/terraform/panther_log_analysis_iam).
4. In your CLI, run the command(s) in the **Workflow** section.
5. After deploying the template in your IaC pipeline, enter values in the fields:
   * (Not applicable if setting up an S3 Source) **Bucket name – Required**: Enter the outputted S3 bucket name.
   * **Role ARN – Required**: Enter the outputted IAM role ARN.
6. Click **Setup**.
   {% endtab %}

{% tab title="I want to set everything up on my own" %}
**I want to set everything up on my own**

Create the IAM role manually, then enter the role ARN in Panther. When you set up the IAM role manually, you must also follow the "Manual IAM role creation: Additional steps" instructions below to configure your S3 buckets to send notifications when new data arrives.

1. On the **Create IAM Role** page, click **I want to set everything up on my own**.
2. Create an IAM role, either manually or through your own automation.
   * The IAM policy, which will be attached to the role, must include the statements defined below:

     ```json
     {
         "Version": "2012-10-17",
         "Statement": [
             {
                 "Action": "s3:GetBucketLocation",
                 "Resource": "arn:aws:s3:::<bucket-name>",
                 "Effect": "Allow"
             },
             {
                 "Action": "s3:GetObject",
                 "Resource": "arn:aws:s3:::<bucket-name>/<input-file-path>",
                 "Effect": "Allow"
             }
         ]
     }
     ```
   * If your S3 bucket is configured with server-side encryption using AWS KMS, you must include an additional statement granting the Panther API access to the corresponding KMS key. In this case, the policy will look something like this:

     ```json
     {
         "Version": "2012-10-17",
         "Statement": [
             {
                 "Action": "s3:GetBucketLocation",
                 "Resource": "arn:aws:s3:::<bucket-name>",
                 "Effect": "Allow"
             },
             {
                 "Action": "s3:GetObject",
                 "Resource": "arn:aws:s3:::<bucket-name>/<input-file-path>",
                 "Effect": "Allow"
             },
             {
                 "Action": ["kms:Decrypt", "kms:DescribeKey"],
                 "Resource": "arn:aws:kms:<region>:<your-account-id>:key/<kms-key-id>",
                 "Effect": "Allow"
             }
         ]
     }
     ```
   * In addition to the above, if you want to view the contents of your S3 bucket in the Panther Console (such as to utilize the [inferring custom schemas from historical data](https://github.com/panther-labs/panther-docs/blob/main/docs/gitbook/data-onboarding/data-onboarding/custom-log-types/README.md#inferring-custom-schemas-from-historical-s3-data) feature), you will need to add the `s3:ListBucket` action:

     ```json
     {
         "Version": "2012-10-17",
         "Statement": [
             {
                 "Action": [
                     "s3:GetBucketLocation",
                     "s3:ListBucket"
                 ],
                 "Resource": "arn:aws:s3:::<bucket-name>",
                 "Effect": "Allow"
             },
             {
                 "Action": "s3:GetObject",
                 "Resource": "arn:aws:s3:::<bucket-name>/<input-file-path>",
                 "Effect": "Allow"
             }
         ]
     }
     ```
3. Add a trust policy to your role with the following `AssumeRolePolicyDocument` statement so that Panther can assume this role:

   ```json
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Principal": {
           "AWS": [
             "arn:<aws-partition>:iam::<panther-master-account-id>:root"
           ]
         },
         "Action": "sts:AssumeRole",
         "Condition": {
           "Bool": {
             "aws:SecureTransport": true
           }
         }
       }
     ]
   }
   ```

   * Populate `<aws-partition>` with the partition of the account running the Panther backend (e.g., `aws`). Note that we do not deploy to `aws-cn` or `aws-us-gov`.
   * Populate `<panther-master-account-id>` with the 12-digit AWS account ID where Panther is deployed. To find this ID, click the **gear** icon in the upper-right corner of the Panther Console to open **Settings**; the AWS account ID is displayed at the bottom of the page.
4. In the Panther Console, enter values in the fields:
   * (Not applicable if setting up an S3 Source) **Bucket name – Required**: Enter the outputted S3 bucket name.
   * **Role ARN – Required**: Enter the outputted IAM role ARN.
5. Click **Setup**.
6. Proceed to the "Manual IAM role creation: Additional steps" section below.
   {% endtab %}
   {% endtabs %}
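Whichever option you choose, the resulting role has the same shape: an access policy scoped to the log bucket, plus a trust policy allowing Panther's master account to assume the role. As a minimal sketch of generating those documents programmatically (the bucket name, prefix, and account ID below are placeholders, not values from your deployment):

```python
import json

def make_access_policy(bucket: str, prefix: str = "*") -> dict:
    """Read-only access policy for the log bucket, mirroring the statements above."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {"Action": "s3:GetBucketLocation",
             "Resource": f"arn:aws:s3:::{bucket}",
             "Effect": "Allow"},
            {"Action": "s3:GetObject",
             "Resource": f"arn:aws:s3:::{bucket}/{prefix}",
             "Effect": "Allow"},
        ],
    }

def make_trust_policy(panther_account_id: str, partition: str = "aws") -> dict:
    """Trust policy letting the Panther master account assume the role over TLS."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": [f"arn:{partition}:iam::{panther_account_id}:root"]},
            "Action": "sts:AssumeRole",
            "Condition": {"Bool": {"aws:SecureTransport": True}},
        }],
    }

# Placeholder values for illustration only.
access = make_access_policy("my-log-bucket", "logs/*")
trust = make_trust_policy("123456789012")
print(json.dumps(access, indent=2))
```

You could feed these documents to your own automation (e.g., Terraform or `iam.create_role`); they are a sketch of the required shape, not Panther's exact templates.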

### Step 3: Finish the source setup

You will be directed to a success screen:

<figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-e55cedf82c6a6adc66ec5c14ebdcb164c3b1dcca%2FScreenshot%202023-08-03%20at%204.33.30%20PM.png?alt=media" alt="The success screen reads, &#x22;Everything looks good! Panther will now automatically pull &#x26; process logs from your account&#x22;" width="281"><figcaption></figcaption></figure>

* You can optionally enable one or more [Detection Packs](https://docs.panther.com/detections/panther-managed/packs).
* The **Trigger an alert when no events are processed** setting defaults to **YES**. We recommend leaving this enabled, as you will be alerted if data stops flowing from the log source after a certain period of time. The timeframe is configurable, with a default of 24 hours.

<figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-c48119abd559990173004bde99ff4907fdd2ded2%2FScreenshot%202023-08-03%20at%204.26.54%20PM.png?alt=media" alt="The &#x22;Trigger an alert when no events are processed&#x22; toggle is set to YES. The &#x22;How long should Panther wait before it sends you an alert that no events have been processed&#x22; setting is set to 1 Day" width="320"><figcaption></figcaption></figure>

* If you have not done so already, click **Attach or Infer Schemas** to attach one or more schemas to the source.

## Envelope field retention

You can optionally preserve CloudWatch Logs envelope metadata (such as `owner`, `logGroup`, and `logStream`) in a `p_header` field on each processed event. This option is available on any source that uses the CloudWatch Logs stream type.

#### Configuring envelope fields for existing CloudWatch log sources

If you have an existing CloudWatch log source and want to enable or disable envelope field retention, you can configure this setting after the source has been created:

1. In the left-hand navigation bar of your Panther Console, click **Configure** > **Log Sources**.
2. Find your CloudWatch log source in the list and click on it.
3. Click on the **Configuration** tab.
4. Click the **edit icon** next to the **Stream Type**.
5. Toggle the **Retain envelope fields in `p_header` field** switch on or off.
6. Click **Apply Selection** to save your changes.

<figure><img src="https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2FyHGi2mq5mutIKVrnGefF%2FScreenshot%202026-03-13%20at%201.23.35%E2%80%AFPM.png?alt=media&#x26;token=8b0e4eba-d9ca-42c7-ae49-94e02f0c4eeb" alt=""><figcaption></figcaption></figure>

#### CloudWatch Logs Envelope Metadata

When envelope field retention is enabled, the `p_header` field contains a JSON object with the following CloudWatch metadata:

* **owner**: The AWS account ID that owns the log group
* **logGroup**: The name of the CloudWatch log group
* **logStream**: The name of the CloudWatch log stream
* **subscriptionFilters**: Array of subscription filter names configured for the log group

Example `p_header` content:

```json
{
  "owner": "123456789012",
  "logGroup": "/aws/lambda/my-function",
  "logStream": "2023/12/01/[$LATEST]abc123...",
  "subscriptionFilters": ["panther-subscription-filter"]
}
```
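With retention enabled, the envelope can be read back from each processed event's `p_header`. For example (the event dict below is a hypothetical parsed row, not real Panther output):

```python
# A hypothetical processed event with envelope retention enabled.
event = {
    "message": "START RequestId: abc-123",
    "p_header": {
        "owner": "123456789012",
        "logGroup": "/aws/lambda/my-function",
        "logStream": "2023/12/01/[$LATEST]abc123",
        "subscriptionFilters": ["panther-subscription-filter"],
    },
}

header = event["p_header"]
source_account = header["owner"]
# The last path segment of a Lambda log group is the function name.
function_name = header["logGroup"].rsplit("/", 1)[-1]
print(source_account, function_name)  # 123456789012 my-function
```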

## Viewing ingested logs

After your log source is configured, you can search ingested data using [Search](https://docs.panther.com/search/search-tool) or [Data Explorer](https://docs.panther.com/search/data-explorer).

## Manual IAM role creation: Additional steps

If during log source creation you opted to set up the IAM role manually, you must also follow the instructions below to configure your S3 bucket to send notifications when new data arrives.

### Step 1: Create or modify an SNS topic

{% tabs %}
{% tab title="Create an SNS topic" %}
**How to create an SNS topic**

Note: If you have already configured the bucket to send `All object create events` to an SNS topic, instead follow the instructions in the "Modify an existing SNS topic" tab to subscribe that topic to Panther's input data queue.

{% hint style="info" %}
Only one SNS topic (per AWS account) is required, meaning multiple S3 buckets within one AWS account can all use the same SNS topic. If you've already created an SNS topic for a different S3 bucket in the same AWS account, you can skip this step.
{% endhint %}

First, create an SNS topic and an SNS subscription to notify Panther that new data is ready for processing.

1. Log into the AWS Console of the account that owns the S3 bucket.
2. Select the AWS Region where your S3 bucket is located and navigate to the **CloudFormation** console.
3. Navigate to the **Stacks** section. Select **Create Stack** (with new resources).\
   ![In the AWS CloudFormation console, there is a "Create Stack" dropdown menu in the upper right. In this image, the menu is expanded and the option "with new resources (standard)" is highlighted.](https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-225de8f479ea23d55add21922cfac223184a1d82%2FScreen%20Shot%202022-09-02%20at%2011.02.26%20AM.png?alt=media)
4. Under the "Specify template" section, enter the following Amazon S3 URL:

   ```
   https://panther-public-cloudformation-templates.s3-us-west-2.amazonaws.com/panther-log-processing-notifications/latest/template.yml
   ```
5. Specify the following stack details:
   * **Stack name**: A name of your choice, e.g. `panther-log-processing-notifications-<bucket-label>`
   * **MasterAccountId**: The 12-digit AWS account ID where Panther is deployed
   * **PantherRegion**: The region where Panther is deployed
   * **SnsTopicName**: The name of the SNS topic receiving the notification. The default value is `panther-notifications-topic`
6. Click **Next**, **Next**, and then **Create Stack** to complete the process.
   * This stack has one output: `SnsTopicArn`.
     {% endtab %}

{% tab title="Modify an existing SNS topic" %}
**How to modify an existing SNS topic**

Follow the steps below if you wish to use an existing SNS topic for sending S3 bucket notifications. Note that the SNS topic must be in the same region as your S3 bucket.

**Step 1: Enable KMS encryption for the SNS topic**

1. Log in to the AWS console and navigate to KMS.
2. Select the KMS key you want to use for encryption.
3. Edit the policy to ensure it has the [appropriate permissions](https://docs.aws.amazon.com/AmazonS3/latest/userguide/grant-destinations-permissions-to-s3.html#key-policy-sns-sqs) to be used with the SNS topic and S3 bucket notifications.
   * Example statements to add to the key policy:

     ```json
     {
         "Sid": "Allow access for Key User (SNS Service Principal)",
         "Effect": "Allow",
         "Principal": {
             "Service": "sns.amazonaws.com"
         },
         "Action": [
             "kms:GenerateDataKey*",
             "kms:Decrypt"
         ],
         "Resource": "*"
     },
     {
         "Sid": "Allow access for Key User (S3 Service Principal)",
         "Effect": "Allow",
         "Principal": {
             "Service": "s3.amazonaws.com"
         },
         "Action": [
             "kms:GenerateDataKey*",
             "kms:Decrypt"
         ],
         "Resource": "*"
     }
     ```
4. Click the **Encryption** tab under the SNS topic.
5. Click **Enable**, and specify the KMS key you want to use for encryption.

**Step 2: Modify SNS topic Access Policy**

Create a subscription between your SNS topic and Panther's log processing SQS queue.

1. Navigate to the [SNS console](https://us-west-2.console.aws.amazon.com/sns/v3/home#/topics) and select the SNS topic currently receiving events.
   * Note the ARN of this SNS topic.
2. Click **Edit** and scroll down to the **Access Policy** card.
3. Add the following statement to the topic's **Access Policy**:

   ```json
   {
     "Sid": "CrossAccountSubscription",
     "Effect": "Allow",
     "Principal": {
       "AWS": "arn:aws:iam::<PANTHER-MASTER-ACCOUNT-ID>:root"
     },
     "Action": "sns:Subscribe",
     "Resource": "<SNS-TOPIC-ARN>"
   }
   ```

   * Populate `<PANTHER-MASTER-ACCOUNT-ID>` with the 12-digit account ID where Panther is deployed. This AWS account ID is displayed at the bottom of the **Settings** page in your Panther Console, reached by clicking the **gear** icon.
   * Populate `<SNS-TOPIC-ARN>` with the ARN of the SNS topic you noted earlier in this process.

**Step 3: Create SNS subscription to SQS**

Create the subscription to the Panther Master account's SQS queue.

1. From the SNS console, click **Subscriptions**.
2. Click **Create subscription**.
3. Fill out the form:
   * **Topic ARN**: Select the SNS topic you would like to use.
   * **Protocol**: Select **Amazon SQS**.
   * **Endpoint**: `arn:aws:sqs:<PantherRegion>:<MasterAccountId>:panther-input-data-notifications-queue`
   * **Enable raw message delivery**: Do not check this box. Raw message delivery must be disabled.
4. Click **Create subscription**.

{% hint style="info" %}
If your subscription is in a "Pending" state and does not get confirmed immediately, you must finish setting up this log source in your Panther Console. Panther confirms the SNS subscription only if a Panther log source exists for the AWS account of the SNS topic.
{% endhint %}
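The subscription created in step 3 can also be expressed programmatically. A sketch of building the endpoint ARN and the parameters you would pass to boto3's `sns.subscribe` (the region, account IDs, and topic ARN below are placeholders; note that `RawMessageDelivery` stays `"false"`):

```python
def panther_queue_arn(panther_region: str, master_account_id: str) -> str:
    """ARN of Panther's input-data SQS queue, per the Endpoint field above."""
    return (f"arn:aws:sqs:{panther_region}:{master_account_id}"
            ":panther-input-data-notifications-queue")

# Placeholder values for illustration.
endpoint = panther_queue_arn("us-west-2", "123456789012")

# The equivalent of the console form, as sns.subscribe parameters.
subscribe_params = {
    "TopicArn": "arn:aws:sns:us-west-2:111122223333:panther-notifications-topic",
    "Protocol": "sqs",
    "Endpoint": endpoint,
    # Raw message delivery must remain disabled.
    "Attributes": {"RawMessageDelivery": "false"},
}
print(subscribe_params["Endpoint"])
```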
{% endtab %}
{% endtabs %}

### Step 2: Configure event notifications on the S3 bucket

With the SNS topic created, the final step is to enable notifications from the S3 bucket.

1. Navigate to the AWS [S3 console](https://s3.console.aws.amazon.com/s3/home), select the relevant bucket, and click the **Properties** tab.
2. Locate the **Event notifications** card.
3. Click **Create event notification** and use the following settings:
   * In the **General Configuration** section:
     * **Event name**: `PantherEventNotifications`
     * **Prefix** (optional): Limits notifications to objects with keys that start with matching characters
     * **Suffix** (optional): Limits notifications to objects with keys that end in matching characters
   * In the **Event Types** card, check the box next to **All object create events**.

{% hint style="info" %}
Avoid [creating multiple filters that use overlapping prefixes and suffixes](https://help.panther.com/articles/1907385559-how-do-i-resolve-cannot-have-overlapping-suffixes-in-two-rules-if-the-prefixes-are-overlapping-for-the-same-event-type-when-setting-up-an-s3-source-for-panther). Otherwise, your configuration will not be considered valid.
{% endhint %}

* In the **Destination** card:
  * Under **Destination**, select **SNS topic**.
  * For **SNS topic**, select the SNS topic you created or modified in an earlier step.
    * If you used the default topic name in the CloudFormation template provided, the SNS topic is named `panther-notifications-topic`.
    * If you are using a custom SNS topic, ensure it has the correct policies set and a subscription to the Panther SQS queue.\
      ![](https://4011785613-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LgdiSWdyJcXPahGi9Rs-2910905616%2Fuploads%2Fgit-blob-34d83679fde29c4bea65c21428e638b118356f6f%2Fs3-source-setup.png?alt=media)

4\. Click **Save**.

* Return to "Step 3: Finish the source setup," above.
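The console settings above correspond to the S3 `NotificationConfiguration` API. A sketch of the equivalent structure you could pass to boto3's `put_bucket_notification_configuration` (the topic ARN and prefix below are placeholders):

```python
# Equivalent of the console settings: send all object-create events,
# optionally filtered by key prefix/suffix, to the SNS topic.
notification_config = {
    "TopicConfigurations": [
        {
            "Id": "PantherEventNotifications",
            "TopicArn": "arn:aws:sns:us-west-2:111122223333:panther-notifications-topic",
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {
                "Key": {
                    "FilterRules": [
                        # Optional: limit notifications to matching keys.
                        {"Name": "prefix", "Value": "logs/"},
                    ]
                }
            },
        }
    ]
}
# Applying it would look like (not executed here):
# s3.put_bucket_notification_configuration(
#     Bucket="my-log-bucket", NotificationConfiguration=notification_config)
print(notification_config["TopicConfigurations"][0]["Events"])
```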
