# Standard Fields

Panther's log analysis applies normalization fields (IPs, domains, etc.) to all log records. These fields provide standard names for common attributes across all data sources, enabling fast and easy data correlation.

For example, each data source records the time an event occurred, but sources rarely name that attribute the same way, nor is the associated time guaranteed to use a timezone consistent with other data sources.

The Panther `p_event_time` attribute is mapped to each data source's corresponding event time and [normalized to UTC](https://docs.panther.com/data-onboarding/custom-log-types/reference#timestamps). This means while querying multiple data sources, you can join and order by `p_event_time`, despite the disparate schemas of each data source.
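As a rough illustration of that normalization, the sketch below converts a timezone-aware timestamp into the documented UTC `p_event_time` format. The function name and input format are illustrative; Panther's actual parser supports many timestamp formats.

```python
from datetime import datetime, timezone

def normalize_event_time(raw: str) -> str:
    """Render an ISO-8601 timestamp (with offset) in UTC using the
    documented p_event_time format: YYYY-MM-DD HH:MM:SS.fff."""
    dt = datetime.fromisoformat(raw)       # e.g. "2023-05-01T08:30:00-04:00"
    dt_utc = dt.astimezone(timezone.utc)   # shift to UTC
    # Append milliseconds (three digits) after the seconds field
    return dt_utc.strftime("%Y-%m-%d %H:%M:%S.") + f"{dt_utc.microsecond // 1000:03d}"
```

Two events recorded at the same instant in different timezones normalize to the same `p_event_time` value, which is what makes cross-source joins on this column meaningful.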

{% hint style="info" %}
All appended standard fields begin with `p_`.
{% endhint %}

## Required Fields

The fields below are appended to all log records:

<table data-header-hidden><thead><tr><th width="227.35377358490564">Field Name</th><th width="150.8477048821628">Type</th><th>Description</th></tr></thead><tbody><tr><td><strong>Field name</strong></td><td><strong>Type</strong></td><td><strong>Description</strong></td></tr><tr><td><code>p_log_type</code></td><td><code>string</code></td><td>The type of log.</td></tr><tr><td><code>p_row_id</code></td><td><code>string</code></td><td>Unique ID (UUID) for the row.</td></tr><tr><td><code>p_event_time</code></td><td><code>timestamp</code></td><td>The associated event time for the log type is copied here and normalized to UTC.<br><br>Format: <code>YYYY-MM-DD HH:MM:SS.fff</code></td></tr><tr><td><code>p_parse_time</code></td><td><code>timestamp</code></td><td>The time when the event was parsed, normalized to UTC.<br><br>Format: <code>YYYY-MM-DD HH:MM:SS.fff</code></td></tr><tr><td><code>p_schema_version</code></td><td><code>integer</code></td><td>The version of the schema used for this row.</td></tr><tr><td><code>p_source_id</code></td><td><code>string</code></td><td>The Panther-generated internal ID for the source integration.</td></tr><tr><td><code>p_source_label</code></td><td><code>string</code></td><td>The user-supplied label for the source integration (may change if edited).</td></tr><tr><td><code>p_source_file</code></td><td><code>object</code></td><td>Available for S3 sources only, this field contains metadata about the file this event originated from, including the bucket name and object key.</td></tr><tr><td><code>p_header</code></td><td><code>object</code></td><td>Contains envelope metadata when <strong>Retain envelope fields</strong> is enabled for <a href="../../data-onboarding/data-transports/aws/cloudwatch#envelope-field-retention">CloudWatch log sources</a>. Includes information such as log group, log stream, owner account, and subscription filters.</td></tr></tbody></table>

{% hint style="info" %}
If an event does not have a timestamp, `p_event_time` is set to `p_parse_time`, the time the event was parsed.
{% endhint %}

The `p_source_id` and `p_source_label` fields indicate where the data originated. For example, you might have multiple CloudTrail sources registered with Panther, each with a unique name (e.g., "Dev Accounts", "Production Accounts", "HR Accounts"). These fields allow you to separate data based on the source, which is beneficial when configuring detections in Panther.
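For instance, a rule can use `p_source_label` to scope its logic to a single source. The sketch below uses a hypothetical label `"Production Accounts"` and a CloudTrail `errorCode` field; substitute your own label and detection logic.

```python
def rule(event):
    # Only consider events ingested from the production CloudTrail source.
    # "Production Accounts" is an example label; use the label you assigned
    # to your source integration.
    if event.get('p_source_label') != 'Production Accounts':
        return False
    # Remaining detection logic runs for production events only.
    return event.get('errorCode') == 'AccessDenied'
```

The example is sketched with plain dicts; in Panther, the `event` object exposes the same `get` interface.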

In addition, the fields below are appended to log records of all tables in the `panther_rule_matches` database:

<table data-header-hidden><thead><tr><th width="279.2877939529675">Field Name</th><th width="195.33333333333334">Type</th><th>Description</th></tr></thead><tbody><tr><td><strong>Field name in panther_rule_matches</strong></td><td><strong>Type</strong></td><td><strong>Description</strong></td></tr><tr><td><code>p_alert_id</code></td><td><code>string</code></td><td>ID of the alert related to the row.</td></tr><tr><td><code>p_alert_creation_time</code></td><td><code>timestamp</code></td><td>Creation time of the alert related to the row.</td></tr><tr><td><code>p_alert_context</code></td><td><code>object</code></td><td>A JSON object returned from the rule's <code>alert_context()</code> function.</td></tr><tr><td><code>p_alert_severity</code></td><td><code>string</code></td><td>The severity level of the rule at the time of the alert. This could be different from the default severity as it can be dynamically set.</td></tr><tr><td><code>p_alert_update_time</code></td><td><code>timestamp</code></td><td>Time of the last update to the alert related to the row.</td></tr><tr><td><code>p_rule_id</code></td><td><code>string</code></td><td>The ID of the rule that generated the alert.</td></tr><tr><td><code>p_rule_error</code></td><td><code>string</code></td><td>The error message if there was an error running the rule.</td></tr><tr><td><code>p_rule_reports</code></td><td><code>map[string]array[string]</code></td><td>Map of user-defined rule reporting tags related to the row.</td></tr><tr><td><code>p_rule_severity</code></td><td><code>string</code></td><td>The default severity of the rule.</td></tr><tr><td><code>p_rule_tags</code></td><td><code>array[string]</code></td><td>List of user-defined rule tags related to the row.</td></tr></tbody></table>
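As a sketch of how `p_alert_context` is populated: whatever dictionary a rule's `alert_context()` function returns is serialized to JSON and stored in that column of `panther_rule_matches`. The event field names below are illustrative.

```python
def rule(event):
    # Illustrative trigger: a CloudTrail DeleteTrail call.
    return event.get('eventName') == 'DeleteTrail'

def alert_context(event):
    # The returned dict becomes p_alert_context on the matching rows.
    # 'actor' and 'event_time' are example keys; choose whatever context
    # is useful for triage.
    return {
        'actor': event.get('userIdentity', {}).get('arn'),
        'event_time': event.get('p_event_time'),
    }
```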

## Indicator Fields

A common security question is, “Was `some indicator` ever observed in *any* of our logs?” Panther's [Search](https://docs.panther.com/search/search-tool) tool enables you to find the answer by searching across data from all of your various log sources.

As log events are ingested, the [`indicators` field](https://docs.panther.com/data-onboarding/custom-log-types/reference#indicators) in their corresponding schema identifies which fields should have their values extracted into `p_any_` fields, which are appended to and stored with the event. The table below shows which `p_any_` field(s) data is extracted into, by `indicator`. All `p_any_` fields are lists.

When constructing a custom schema, you can use the values in the Indicator Name column in the table below in your schema's [`indicators` field](https://docs.panther.com/data-onboarding/custom-log-types/reference#indicators). Each of the rows (except for `hostname`, `net_addr`, and `url`) corresponds to a "Panther Fields" option in [Search](https://docs.panther.com/search/search-tool).

Note that field name/value pairs outside of the fields in the table below can still be searched with [Search's key/value filter expression](https://docs.panther.com/search-tool#key-value-filter-expression) functionality. However, because those fields have not been normalized across log sources, only matches from log sources containing the exact field name searched will be returned.

{% hint style="info" %}
In order for a field to be designated as an indicator in a schema, it must be type `string`.
{% endhint %}

<table><thead><tr><th width="178.33333333333331">Indicator Name</th><th width="213.67435158501445">Extracted into fields</th><th>Description</th></tr></thead><tbody><tr><td>actor_id</td><td>p_any_actor_ids</td><td>Append value to p_any_actor_ids.</td></tr><tr><td>aws_account_id</td><td>p_any_aws_account_ids</td><td>If the value is a valid AWS account ID, append to p_any_aws_account_ids.</td></tr><tr><td>aws_arn</td><td>p_any_aws_arns,<br>p_any_aws_instance_ids,<br>p_any_aws_account_ids,<br>p_any_emails<br></td><td>If the value is a valid AWS ARN, append to p_any_aws_arns.<br>If the ARN contains an AWS account ID, extract and append to p_any_aws_account_ids.<br>If the ARN contains an EC2 instance ID, extract and append to p_any_aws_instance_ids.<br>If the ARN references an AWS STS Assume Role and contains an email address, extract the email address and append to p_any_emails.</td></tr><tr><td>aws_instance_id</td><td>p_any_aws_instance_ids</td><td>If the value is a valid AWS instance ID, append to p_any_aws_instance_ids.</td></tr><tr><td>aws_tag</td><td>p_any_aws_tags</td><td>Append value to p_any_aws_tags.</td></tr><tr><td>cve</td><td>p_any_cves</td><td>Extract any values matching the regex <code>^[Cc][Vv][Ee]-\d{4}-\d+$</code> and append to p_any_cves.</td></tr><tr><td>domain</td><td>p_any_domain_names</td><td>Append value to p_any_domain_names.</td></tr><tr><td>email</td><td>p_any_emails</td><td>If the value is a valid email address, append to p_any_emails.<br>The portion of the value that precedes <code>@</code> is also appended to p_any_usernames.</td></tr><tr><td>hostname</td><td>p_any_domain_names, p_any_ip_addresses</td><td>Append value to p_any_domain_names.<br>If the value is a valid IPv4 or IPv6 address, append to p_any_ip_addresses.</td></tr><tr><td>ip</td><td>p_any_ip_addresses</td><td>If the value is a valid IPv4 or IPv6 address, append to p_any_ip_addresses.</td></tr><tr><td>mac</td><td>p_any_mac_addresses</td><td>If the value is a valid IEEE 802 MAC-48, EUI-48, EUI-64, or 20-octet IP-over-InfiniBand link-layer address, append to p_any_mac_addresses.</td></tr><tr><td>md5</td><td>p_any_md5_hashes</td><td>If the value is a valid MD5 hash, append to p_any_md5_hashes.</td></tr><tr><td>mitre_attack_technique</td><td>p_any_mitre_attack_techniques</td><td>Extract any values matching the regex <code>\b[Tt]\d{4}(?:\.\d{3})?\b</code> and append to p_any_mitre_attack_techniques. For example, if the field value is <code>"Technique: T1234"</code>, p_any_mitre_attack_techniques would have a value of <code>["T1234"]</code>.</td></tr><tr><td>net_addr</td><td>p_any_domain_names, p_any_ip_addresses</td><td>Extracts from values of the form <code>&#x3C;host>:&#x3C;port></code> and appends the host portion to p_any_domain_names.<br>If the host portion is a valid IPv4 or IPv6 address, append to p_any_ip_addresses.</td></tr><tr><td>serial_number</td><td>p_any_serial_numbers</td><td>Append value to p_any_serial_numbers.</td></tr><tr><td>sha1</td><td>p_any_sha1_hashes</td><td>If the value is a valid SHA-1 hash, append to p_any_sha1_hashes.</td></tr><tr><td>sha256</td><td>p_any_sha256_hashes</td><td>If the value is a valid SHA-256 hash, append to p_any_sha256_hashes.</td></tr><tr><td>trace_id</td><td>p_any_trace_ids</td><td>Append value to p_any_trace_ids.<br>Tag fields, such as session IDs and document IDs, that are used to associate elements with other logs in order to trace the full activity of a sequence of related events.</td></tr><tr><td>url</td><td>p_any_domain_names, p_any_ip_addresses</td><td><p>Parse the URL and extract the host portion after "http://" or "https://".<br></p><p>Append the host portion to p_any_domain_names.<br>If the host portion is a valid IPv4 or IPv6 address, append to p_any_ip_addresses.</p></td></tr><tr><td>username</td><td>p_any_usernames</td><td>Append value to p_any_usernames.<br><br>This field is also populated with values marked with the <code>email</code> indicator. The portion of the email value that precedes <code>@</code> is appended to this field.</td></tr></tbody></table>
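To make the extraction behavior concrete, the sketch below mimics how an `email` indicator might populate both `p_any_emails` and `p_any_usernames`. This is not Panther's actual implementation, and the validation regex is deliberately simplified.

```python
import re

# Simplified email validity check for illustration only
EMAIL_RE = re.compile(r'^[^@\s]+@[^@\s]+\.[^@\s]+$')

def extract_email_indicator(value, p_any_emails, p_any_usernames):
    """If value is a valid email address, append it to p_any_emails and
    append the local part (before '@') to p_any_usernames."""
    if not EMAIL_RE.match(value):
        return
    if value not in p_any_emails:
        p_any_emails.append(value)
    username = value.split('@', 1)[0]
    if username not in p_any_usernames:
        p_any_usernames.append(username)
```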

## Enrichment Fields <a href="#enrichmentfields" id="enrichmentfields"></a>

The Panther rules engine takes the looked-up matches from [Lookup Tables](https://docs.panther.com/enrichment/custom) and appends that data to the event under the key `p_enrichment`, in the following JSON structure:

```json
{
    "p_enrichment": {
        "<name of lookup table>": {
            "<key in log that matched>": "<matching row looked up>",
            ...
            "<key in log that matched>": "<matching row looked up>"
        }
    }
}
```

<table><thead><tr><th width="250">Enrichment Field Name</th><th width="122.33333333333331">Type</th><th>Description of Enrichment Field</th></tr></thead><tbody><tr><td><code>p_enrichment</code></td><td>object</td><td>Dictionary of lookup results where matching rows were found.</td></tr><tr><td><code>p_match</code></td><td>string</td><td><code>p_match</code> is injected into the data of each matching row within <code>p_enrichment</code>. Its value is the value that matched in the event.</td></tr></tbody></table>
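A rule can then read these fields directly. In the hypothetical sketch below, `ip_intel` is the Lookup Table name, `srcIp` is the log key that matched, and `is_malicious` is a column in the looked-up row; all three are placeholders for your own configuration.

```python
def rule(event):
    # p_enrichment is keyed first by Lookup Table name, then by the
    # log key that matched; each matched row also carries p_match,
    # the event value that matched.
    enrichment = event.get('p_enrichment', {})
    matched_rows = enrichment.get('ip_intel', {})
    row = matched_rows.get('srcIp')
    return bool(row and row.get('is_malicious'))
```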

## The "all\_logs" view

Panther maintains a view named `all_logs` over all data sources, exposing the standard fields.

This allows you to answer questions such as, "Was there *any* activity from `some-bad-ip`, and if so, where?"

The query below shows how many records, by log type, are associated with the IP address `95.123.145.92`:

```sql
SELECT
    p_log_type, count(1) AS row_count
FROM panther_views.public.all_logs
WHERE p_occurs_between('2020-01-30', '2020-01-31')
    AND array_contains('95.123.145.92'::variant, p_any_ip_addresses)
GROUP BY p_log_type
```

From these results, you can pivot to the specific logs where activity is indicated.

## Standard Fields in detections

The Panther standard fields can be used in detections.

For example, the Python rule below triggers when a GuardDuty finding involves a resource tagged as `Critical`:

```python
def rule(event):
    # p_any_aws_tags is appended by Panther when the event contains AWS tags
    for tag in event.get('p_any_aws_tags', []):
        # Case-insensitive match so "Critical", "critical", etc. all trigger
        if 'critical' in tag.lower():
            return True
    return False
```
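Standard fields can also drive a rule's auxiliary functions. The sketch below uses `p_any_aws_tags` in a `title()` function and `p_source_label` in a dynamic `severity()` function; the label value is an example.

```python
def title(event):
    # Surface the first extracted AWS tag in the alert title, if any.
    tags = event.get('p_any_aws_tags', [])
    return f"GuardDuty finding on resource tagged {tags[0]}" if tags else "GuardDuty finding"

def severity(event):
    # Escalate when the event came from the production source;
    # "Production Accounts" is an example source label.
    if event.get('p_source_label') == 'Production Accounts':
        return 'CRITICAL'
    # 'DEFAULT' keeps the rule's configured severity.
    return 'DEFAULT'
```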

