Scheduled Search Examples
This page contains common use cases and example searches you may want to use while investigating suspicious activity in your logs.
The examples below will require some customization to the local environment to be effective. Note that all queries should control the result size. This can be done with a LIMIT or GROUP BY clause.
Your company will incur costs on your data platform every time a Scheduled Search runs. Please make sure that your queries can complete inside the specified timeout period.
All examples in this section use Snowflake style SQL.
Streaming rules
Panther enables you to run Scheduled Searches (Saved Searches that run on an interval) and, in concert with a Scheduled Rule, allows you to create detections that work over large time spans and aggregate data from multiple sources.
When possible, try to implement detections using Streaming Rules. Streaming Rules are low latency and less expensive than Scheduled Searches. However, there are cases where a detection requires more context, and then a Scheduled Search is the appropriate solution.
Data latency and timing considerations
In order to write effective Scheduled Rules, you need to understand the latency of the data from the time the event is recorded until it reaches Panther. Use this information to adjust the schedule and the window of time accordingly.
For example, AWS CloudTrail data has a latency of about 10 minutes. This is due to AWS limitations and not due to Panther functionality. If you want to analyze the last hour of data, as a best practice we recommend that you schedule the search to run 15 minutes past the hour.
This is not a consideration for Panther streaming rules, since they are applied when the data is first processed as it comes into the system. Since Scheduled Searches periodically look back at accumulated data, timing considerations are important.
Examples
Restrict AWS console access to specific IP addresses
Let's start with a simple end-to-end example to better understand the process of creating a detection using a Scheduled Search and a Scheduled Rule. Let's assume you very carefully restrict access to the AWS console to IP addresses within your company's IP space. In order to verify this control you want to check that all AWS console logins have a sourceIPAddress from within this IP space. You keep a table of these IP blocks in the data lake. NOTE: this example uses a simple equality join operation and assumes that the table of IP blocks contains only /32 addresses. In a realistic implementation the IP block table would hold CIDR blocks and the check would be to find addresses not inside any of the CIDR blocks. This is left as an exercise for the reader.
Let's assume we schedule a rule to run every 15 minutes, checking the previous 30 minutes (we use a long window to handle the inherent delay in data associated with CloudTrail).
Full disclosure: you could implement this detection using a Streaming Rule if you managed the IP allowlist table in DynamoDB or S3. For very high-volume log sources, however, it is more efficient to do a periodic batch join as shown here.
The query to detect these events is:
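A sketch of such a query is shown below. The CloudTrail table name follows Panther's default naming; the company_ips table and its ip column are hypothetical stand-ins for your own table of /32 company addresses.

```sql
-- Console sign-ins whose source IP is not in the company IP table.
-- 'your_db.your_schema.company_ips' is a hypothetical table of /32 addresses.
SELECT ct.p_event_time,
       ct.sourceIPAddress,
       ct.userIdentity
FROM panther_logs.public.aws_cloudtrail ct
LEFT OUTER JOIN your_db.your_schema.company_ips ips
  ON ct.sourceIPAddress = ips.ip
WHERE ct.eventName = 'ConsoleLogin'
  AND p_occurs_since('30 minutes')
  AND ips.ip IS NULL   -- no match: the login came from outside company IP space
LIMIT 1000
```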
Note that p_occurs_since() is a Panther SQL macro that makes creating Scheduled Searches easier.
Since the output of a Scheduled Search flows through a Scheduled Rule (in Python), it is important to keep the number of rows returned carefully controlled. It is recommended to always provide a LIMIT clause or use GROUP BY aggregations that return a limited number of rows (less than a few thousand maximum).
To implement this:
Create a Scheduled Search by following these instructions.
Make sure the Scheduled Search is set to active.
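A Scheduled Rule over this search's output might look like the following minimal sketch. It assumes Panther's rule/title/dedup function interface, with each row of the search delivered as an event:

```python
# Hypothetical Scheduled Rule for the console-login search above.
# Every row the Scheduled Search returns is already a violation,
# so rule() simply returns True for each event (row).

def rule(event):
    return True

def title(event):
    # event is one row from the Scheduled Search output
    return f"AWS console login from non-company IP {event.get('sourceIPAddress')}"

def dedup(event):
    # dedupe on the source IP so at most one alert is created
    # per IP per deduplication period
    return event.get("sourceIPAddress", "")
```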
A Scheduled Rule has all the capabilities of a streaming rule, allowing you to customize alerts and direct them to destinations. Panther's deduplication prevents alert storms; in the above rule we dedupe on sourceIPAddress, which will create only one alert per 30 minutes.
This pattern of joining to a list can also be used for IOC detections (keeping a table of IOCs such as TOR exit nodes, malware hashes, etc.).
Command and Control (C2) beacon detection
In this example we will create a very simple but effective behavioral detection that uses aggregation to find C2 beaconing.
This is an oversimplified detection for illustration purposes only. Using this without refinements such as allowlisting and tuning thresholds can cause excessive false positives (there are many non-malicious processes that "beacon"). That said, on well understood networks and using the appropriate allowlisting, this technique can be very effective.
We will define a C2 beacon as any IP activity that happens at most five times per day and repeats for more than three days.
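One way to sketch this in SQL is the aggregation below over VPC Flow Logs. The table and column names assume Panther's default VPC Flow Log schema, and the one-week lookback and per-day threshold are illustrative values that need local tuning.

```sql
-- Source/destination pairs seen a small, steady number of times per day
-- on more than three distinct days in the past week.
SELECT srcAddr,
       dstAddr,
       COUNT(DISTINCT DATE(p_event_time)) AS active_days,
       COUNT(*) AS total_events
FROM panther_logs.public.aws_vpcflow
WHERE p_occurs_since('7 days')
GROUP BY srcAddr, dstAddr
HAVING COUNT(DISTINCT DATE(p_event_time)) > 3
   AND COUNT(*) / COUNT(DISTINCT DATE(p_event_time)) <= 5  -- at most five events per day
LIMIT 1000
```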
To implement this:
Create a Scheduled Search by following these instructions.
Make sure the Scheduled Search is set to active.
How well is my endpoint monitoring working?
For this hypothetical example, we will assume you are using CrowdStrike as your endpoint monitoring software. Panther is configured to ingest your logs, and you have a populated CMDB that maps the deployed agents to their associated internal user(s).
There are many interesting questions that can be asked of this data but for this example we will specifically ask the question: "Which endpoints have not reported ANY data in the last 24 hours?"
In CrowdStrike logs the unique ID for a deployed agent is called an aid. The CMDB has a mapping of aid to reference data. For this example we will assume it has the attributes employee_name, employee_group, and last_seen. The employee-related attributes help identify who currently uses the endpoint, and last_seen is a timestamp we assume is updated by a backend process that tracks network activity (e.g., VPN access, DHCP leases, authentication, liveness detections, etc.).
To answer this question, we want to find agents in the CMDB that do have network activity in the last 24 hours but do not have any CrowdStrike activity, which may indicate the agent is not running or has been disabled (a coverage gap). The query below will compute a report by employee group that includes the specific suspect endpoints:
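A sketch of such a query follows. The cmdb table, its columns, and the specific CrowdStrike table name are assumptions about your environment:

```sql
-- Agents seen on the network (per the CMDB) but silent in CrowdStrike logs.
WITH active_agents AS (
  SELECT DISTINCT aid
  FROM panther_logs.public.crowdstrike_aidmaster  -- any per-agent CrowdStrike table
  WHERE p_occurs_since('1 day')
)
SELECT cmdb.employee_group,
       COUNT(*) AS num_missing,
       ARRAY_AGG(OBJECT_CONSTRUCT(
         'aid', cmdb.aid,
         'employee_name', cmdb.employee_name)) AS endpoints
FROM your_db.your_schema.cmdb AS cmdb        -- hypothetical CMDB table
LEFT OUTER JOIN active_agents a ON cmdb.aid = a.aid
WHERE cmdb.last_seen > DATEADD(hour, -24, CURRENT_TIMESTAMP())  -- network-active
  AND a.aid IS NULL                                             -- but no CrowdStrike data
GROUP BY cmdb.employee_group
LIMIT 100
```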
To implement this:
Create a Scheduled Search by following these instructions.
Make sure the Scheduled Search is set to active.
Make a Scheduled Rule targeted at the output of the Scheduled Search:
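A minimal sketch of that rule, assuming Panther's rule/title/dedup interface and that each row from the search carries the employee_group and endpoints columns:

```python
# Hypothetical Scheduled Rule over the endpoint-coverage search above.
# Each event is one row: an employee group with its list of suspect endpoints.

def rule(event):
    # alert only for groups that actually have missing endpoints
    return len(event.get("endpoints", [])) > 0

def title(event):
    return (
        f"{len(event.get('endpoints', []))} endpoint(s) in group "
        f"{event.get('employee_group')} sent no CrowdStrike data in 24h"
    )

def dedup(event):
    # at most one alert per employee group per run
    return event.get("employee_group", "")
```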
The events associated with the alert, at most one per employee group, can be reviewed by an analyst. The "hits" are accumulated in endpoints using the employee info for easy vetting. As with all Panther rules, you have the flexibility to customize alert destinations. For example, if the employee_group is C-Suite, that might page the on-call, while the default alerts simply go to a work queue for vetting the next day.
Unusual Okta logins
The Okta logs provide the answers to "who", "with what device" and the "where" questions associated with an event. This information can be used to identify suspicious behavior, for example an attacker using stolen credentials.
The challenge is defining "suspicious". One way to define suspicious is deviation from normal. If we can construct a baseline for each user, then we could alert when there is a significant change.
That sounds good, but now we have to define "significant change" in a way that generates useful security findings (and not many false positives). For this example we will target significant changes to the client information that might indicate stolen credentials. NOTE: the Okta data is very rich in context; this is just one simple example of how to make use of it.
Because of VPNs and proxies, it is often not practical to use specific IP addresses or related geographic information alone to identify suspicious activity. Similarly, users may change their device because they are using a new one, or they may make use of multiple devices. We expect significant variation between legitimate users; however, for any particular user we expect more consistency over time.
For this example, we will characterize "normal" by computing for each actor, for up to 30 days in the past:
unique auth clients used
unique os versions used
unique devices used
unique locations used (defined as: country, state, city)
We will define events that do NOT match ANY of the four dimensions as "suspicious". This means:
We will not get alerts if they get a new device.
We will not get alerts when they change location.
We will get alerts when all of the attributes change at once, and we are assuming this is both anomalous and interesting from a security point of view.
We will also NOT consider actors unless they have at least 5 days of history, to avoid false positives from new employees.
Assume we schedule this to run once a day for the previous day.
This is just an example, and requires tuning like any other heuristic but has the benefit of being self calibrating per actor.
The SQL to compute the above is:
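A sketch of one way to express this is below. The Okta table name follows Panther's default naming, but the JSON paths into actor and client, the location fields, and the use of a table alias with p_occurs_since() are assumptions that may differ in your schema:

```sql
-- Build a 30-day per-actor baseline, then flag the last day's logins
-- that match none of the four dimensions.
WITH baseline AS (
  SELECT actor:alternateId::VARCHAR                     AS actor_id,
         ARRAY_AGG(DISTINCT client:id)                  AS clients,
         ARRAY_AGG(DISTINCT client:userAgent:os)        AS os_versions,
         ARRAY_AGG(DISTINCT client:device)              AS devices,
         ARRAY_AGG(DISTINCT OBJECT_CONSTRUCT(
           'country', client:geographicalContext:country,
           'state',   client:geographicalContext:state,
           'city',    client:geographicalContext:city)) AS locations,
         COUNT(DISTINCT DATE(p_event_time))             AS history_days
  FROM panther_logs.public.okta_systemlog
  WHERE p_occurs_since('30 days')
    -- exclude the detection window so events cannot match themselves
    AND p_event_time < DATEADD(day, -1, CURRENT_TIMESTAMP())
  GROUP BY actor_id
)
SELECT evt.*
FROM panther_logs.public.okta_systemlog evt
JOIN baseline b ON evt.actor:alternateId::VARCHAR = b.actor_id
WHERE p_occurs_since('1 day', 'evt')
  AND b.history_days >= 5  -- skip actors with little history
  AND NOT ARRAY_CONTAINS(evt.client:id, b.clients)
  AND NOT ARRAY_CONTAINS(evt.client:userAgent:os, b.os_versions)
  AND NOT ARRAY_CONTAINS(evt.client:device, b.devices)
  AND NOT ARRAY_CONTAINS(OBJECT_CONSTRUCT(
        'country', evt.client:geographicalContext:country,
        'state',   evt.client:geographicalContext:state,
        'city',    evt.client:geographicalContext:city)::VARIANT, b.locations)
LIMIT 1000
```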
Recomputing these baselines each time the search runs is not very efficient. In the future, Panther will support the creation of summary tables so that methods such as the one described above can be made more efficient.
Detecting password spraying
Password spraying is an attack that attempts to access numerous accounts (usernames) with a few commonly used passwords. Traditional brute-force attacks attempt to gain unauthorized access to a single account by guessing the password. This can quickly result in the targeted account getting locked-out, as commonly used account-lockout policies allow for a limited number of failed attempts (typically three to five) during a set period of time. During a password-spray attack (also known as the “low-and-slow” method), the malicious actor attempts a single commonly used password (such as ‘password123’ or ‘winter2017’) against many accounts before moving on to attempt a second password, and so on. This technique allows the actor to remain undetected by avoiding rapid or frequent account lockouts.
The key to detecting this behavior is to aggregate over time and look at the diversity of usernames with failed logins. The example below uses CloudTrail, but a similar technique can be used with any authentication log. The thresholds chosen will need to be tuned to the target network.
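For CloudTrail console logins, the shape of such a query might be the following sketch; the distinct-user threshold is a placeholder to tune:

```sql
-- IPs whose failed console logins are spread across many distinct usernames.
SELECT sourceIPAddress,
       COUNT(DISTINCT userIdentity:userName) AS distinct_users,
       COUNT(*) AS failed_attempts
FROM panther_logs.public.aws_cloudtrail
WHERE eventName = 'ConsoleLogin'
  AND responseElements:ConsoleLogin::VARCHAR = 'Failure'
  AND p_occurs_since('1 day')
GROUP BY sourceIPAddress
HAVING COUNT(DISTINCT userIdentity:userName) >= 10  -- tune to your environment
ORDER BY distinct_users DESC
LIMIT 100
```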
Detecting DNS tunnels
Since DNS cannot generally be blocked on most networks, DNS-based data exfiltration and C2 can be extremely effective. There are many tools available to create DNS-based tunnels. Not all DNS tunnels are malicious; ironically, many anti-virus tools use DNS tunnels to send telemetry back "home". Most security-minded people find DNS tunnels unnerving, so detecting them on your network is useful. Simple traffic analysis can easily find these tunnels, but because of legitimate tunnels, the example below will require some tuning to the local environment for both thresholds and whitelisting.
We will define a potential DNS tunnel as a DNS server (port 53) that moves enough data to be interesting within an hour's time to only a few unique domains.
Assume we run this search every hour, looking back 1 hour to identify these tunnels:
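A sketch of such a search over DNS query logs is below. The dns_queries table and its src_ip and query_name columns are hypothetical, and both thresholds are placeholders to tune:

```sql
-- Hosts sending a lot of DNS query data to only a few distinct domains.
-- 'your_db.your_schema.dns_queries' is a hypothetical DNS query log table.
SELECT src_ip,
       SUM(LENGTH(query_name)) AS approx_bytes_out,  -- crude proxy for data moved
       COUNT(DISTINCT SPLIT_PART(query_name, '.', -2)
             || '.' || SPLIT_PART(query_name, '.', -1)) AS unique_domains
FROM your_db.your_schema.dns_queries
WHERE p_occurs_since('1 hour')
GROUP BY src_ip
HAVING SUM(LENGTH(query_name)) > 100000  -- "enough data to be interesting"
   AND COUNT(DISTINCT SPLIT_PART(query_name, '.', -2)
             || '.' || SPLIT_PART(query_name, '.', -1)) <= 3
LIMIT 100
```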
Monthly reporting of cloud infrastructure
Given that Panther Cloud Security can report on your AWS infrastructure, you can use the resource_history table to compute activity statistics that may be of interest to operations as well as security.
A simple example is the report below, which can be scheduled to run on the first of the month for the previous month to show the activity in the monitored accounts.
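A sketch of such a report is below; the resource_history table name follows Panther's default naming, while the accountId and changeType columns are assumptions about its schema:

```sql
-- Count of resource changes per account and change type for last month.
SELECT accountId,
       changeType,
       COUNT(*) AS num_changes
FROM panther_cloudsecurity.public.resource_history
WHERE DATE_TRUNC('month', p_event_time::DATE) =
      DATE_TRUNC('month', DATEADD(month, -1, CURRENT_DATE()))
GROUP BY accountId, changeType
ORDER BY accountId, changeType
```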
The resource_history table has detail down to the specific resource, so there are variations of the above search that can be more detailed if desired.
Database (Snowflake) monitoring
Databases holding sensitive data require extensive security monitoring as they are often targets of attack.
These queries require that Panther's read-only role has access to the snowflake.account_usage audit database (this may need to be granted by the Snowflake admins).
This query looks for patterns of failed logins by username and should be run on a regular schedule:
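A sketch of such a query against Snowflake's login_history account-usage view (the 24-hour window and failure threshold are illustrative):

```sql
-- Usernames with repeated failed logins in the last 24 hours.
SELECT user_name,
       COUNT(*) AS failed_logins,
       MIN(event_timestamp) AS first_failure,
       MAX(event_timestamp) AS last_failure
FROM snowflake.account_usage.login_history
WHERE is_success = 'NO'
  AND event_timestamp > DATEADD(hour, -24, CURRENT_TIMESTAMP())
GROUP BY user_name
HAVING COUNT(*) > 2
ORDER BY failed_logins DESC
```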
Snowflake failed logins by single IP looks at login attempts by IP over 24 hours and returns IPs with more than 2 failed logins. This could be scheduled to run every 24 hours to highlight potentially suspicious activity. The effectiveness of this approach may depend on how your enterprise handles company-internal IP addresses.
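That search might be sketched as follows, again using the login_history view:

```sql
-- Client IPs with more than two failed logins in the last 24 hours.
SELECT client_ip,
       COUNT(*) AS failed_logins,
       ARRAY_AGG(DISTINCT user_name) AS attempted_users
FROM snowflake.account_usage.login_history
WHERE is_success = 'NO'
  AND event_timestamp > DATEADD(hour, -24, CURRENT_TIMESTAMP())
GROUP BY client_ip
HAVING COUNT(*) > 2
ORDER BY failed_logins DESC
```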
Grants of admin rights in Snowflake, looking back 7 days. This is not necessarily suspicious, but possibly something the Snowflake admins may want to keep track of.
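A sketch using the grants_to_users view; the set of roles treated as administrative is an assumption to adjust for your account:

```sql
-- Grants of administrative roles to users in the last 7 days.
SELECT created_on,
       grantee_name,
       role,
       granted_by
FROM snowflake.account_usage.grants_to_users
WHERE role IN ('ACCOUNTADMIN', 'SECURITYADMIN')  -- adjust to your admin roles
  AND deleted_on IS NULL
  AND created_on > DATEADD(day, -7, CURRENT_TIMESTAMP())
ORDER BY created_on DESC
```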
Querying account usage views
See Snowflake's documentation for additional examples on querying account usage.