
Using ATT&CK in ThreatConnect to Prioritize, Ask Questions, and Respond Faster


We have exciting news: ThreatConnect now supports the MITRE ATT&CK Framework!

What does this mean for our users? By applying Tags to Indicators and Groups, you’ll be able to classify your intelligence in ThreatConnect using the tactics and techniques of MITRE ATT&CK and, more importantly, derive meaningful conclusions to help you prioritize response and make better decisions.

Directly from the ThreatConnect Platform, you’re able to view all Techniques related to the MITRE Pre-ATT&CK and Enterprise ATT&CK Datasets. Drill down into each Technique to get details mapped directly back to the information provided in the ATT&CK Framework.

In more concrete terms, you can query Intelligence with ATT&CK, create Dashboards with ATT&CK, and drive Playbooks logic with ATT&CK.

View of the Details Page for the ‘Data Obfuscation’ MITRE Enterprise ATT&CK Technique

 

Below, we’ve outlined three practical use cases where leveraging ATT&CK within ThreatConnect will help our users classify indicators, prioritize threats, and automate processes. 

Using ATT&CK to Ask Critical Questions of Your Intel with TQL

  • What can the adversaries I’m up against do?
  • Are there critical techniques I’m blind to?
  • For key tactics, what are the indicators I should be looking out for?

Once you start classifying your intelligence in ThreatConnect, you can start answering all of these questions and more with ATT&CK thanks to TQL.

Before we go further, here’s a quick review of  TQL: ThreatConnect Query Language (TQL) is a SQL-like query language that allows users to build structured queries for advanced search filters, for example “show me Email Address Indicators tied to Incidents that involved ransomware.”

Once you’ve started classifying your intel with ATT&CK, you can use TQL to start asking some complex questions. For example:

typeName in ("File", "User Agent") and dateAdded >= "NOW() - 7 DAYS" and hasTag(summary contains ("per - ent - att&ck"))

This query shows all File or User Agent Indicators that have been added in the past week and are associated with the Persistence Tactic (obviously you’ll want to customize things to what matters to you!). You can take things even further by linking together multiple objects. For example, “show me all URL Indicators tied to Incidents of spearphishing” or “show me all Reports we’ve brought in recently that can help me better understand account manipulation.”
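As a rough, hedged rendering of that first example, approximated with ATT&CK technique tags rather than Incident associations (adjust the tag summary and time window to match your own tagging conventions):

typeName in ("URL") and hasTag(summary contains ("Spearphishing Link")) and dateAdded >= "NOW() - 90 DAYS"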

Note that TQL queries can be saved for later use and, as you’ll see in the next section, used to power Dashboards.

Using ATT&CK in Dashboards*

A key use for ATT&CK is prioritization. By understanding what tactics and techniques you’re facing, you can prioritize response, allocate resources, inform red teaming and adversary emulation exercises, make strategic decisions, invest in training and education, and the list goes on. You already know that ThreatConnect has very customizable and dynamic Dashboards. Paired with the flexibility of our data model and ATT&CK implementation, you can now use Dashboards to better understand the specific adversary tactics and techniques your security team is up against.

For example, the Dashboard card below shows the top Techniques being used by the Adversaries you’re tracking, as well as any Incidents.

ThreatConnect Dashboard Card Showing Top Adversary ATT&CK Techniques

 

If you’d like to create a similar card, the underlying TQL query is:

typeName in ("Adversary", "Incident") and hasTag(summary contains (" - att&ck"))

Then just set the card to query by Groups and show the Top 5 Tags.

We want to point out that with all of these cards, there is an opportunity to drill down further. In the example above, clicking on T1028 will show you the eleven Adversaries and Incidents where the Windows Remote Management Technique was recorded in conjunction with the Execution Tactic.

Because of how we’ve structured our ATT&CK tags, it’s easy to query on specific tactics or techniques as well. Depending on how you choose to classify your intel using ATT&CK, you can create similar cards around Incidents, Indicators, and other data objects in ThreatConnect.
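If you classify Indicators as well as Groups, a comparable Indicator-focused card could be driven by a query along these lines (the tag summary suffix is an assumption about how your ATT&CK tags are named):

typeName in ("Address", "Host", "URL") and hasTag(summary contains (" - att&ck")) and dateAdded >= "NOW() - 30 DAYS"

Then set the card to query by Indicators and show the Top 5 Tags, mirroring the Group-based card above.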

Using ATT&CK in Playbooks*

At ThreatConnect, we’re all about decisions you can actually take action on. The addition of ATT&CK into the ThreatConnect Platform is no exception. Incorporating ATT&CK into ThreatConnect Playbooks lets you automate how the framework gets utilized in your workflow. For example, let’s say you want your physical security team notified whenever a physical exfiltration incident occurs. The Playbook below will send a Slack message to the physical security team whenever an incident is classified as involving T1052 – Exfiltration Over Physical Medium. This is just a very basic example, and we’re looking forward to seeing how our users continue to leverage the framework in an automated capacity.

Example of Playbook that Incorporates ATT&CK Techniques in Orchestration Decisions
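If you also want a retrospective view of every Incident already tagged with that technique, a hedged TQL query along these lines (exact tag summary assumed) would surface them:

typeName in ("Incident") and hasTag(summary contains ("T1052"))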

 

We view ATT&CK as a powerful framework for understanding adversary behavior and potential gaps in defense, and being able to prioritize those is critical to a successful infosec program. As we see more customers adopting the ATT&CK framework, we want to make sure they have the tools they need to understand the data and tie it to relevant intelligence.

This is just the beginning of MITRE ATT&CK support in ThreatConnect. We’re really excited about what the framework enables, so stay tuned as we continue to roll out more ATT&CK-related features in the coming months and beyond!

*Note that custom Dashboards and Playbooks require a paid subscription to a ThreatConnect dedicated cloud or on-premises instance.



Playbook Fridays: Koodous Playbook Components


Today’s post features two Playbook Components designed to query Koodous. The Playbook Components are available on our GitHub repository here.

The first component, named “[Koodous] Request APK Data.pbx”, takes the sha256 hash of a file as input and returns information for this file, if any exists, from Koodous. If you would like to test this component, install the component, create a simple playbook that uses this component (there is an example below), and submit “9be4fb5e337bb0b994c9d2b781355f934a349b76abc34a02a40527ae760eb1f0” as the sha256 (this is the sha256 of the APK here).

 

The second component, named “[Koodous] Search for APKs.pbx”, queries Koodous using the advanced searches documented here. If you would like to test this component, install the component, create a simple playbook using the component (there is an example below), and submit “package_name:”com.whatsapp” -developer:”WhatsApp Inc.”” as the query. This query will return a list of APKs in Koodous where the package name is “com.whatsapp” and the developer is not “WhatsApp Inc.”.
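For context, the two components boil down to two REST calls along these lines. The endpoint paths and Token header below are assumptions based on Koodous’ public API documentation at the time, so check the component source in the GitHub repository for the exact requests it makes:

import requests

KOODOUS_TOKEN = "YOUR_KOODOUS_API_TOKEN"  # assumed token-based auth
HEADERS = {"Authorization": f"Token {KOODOUS_TOKEN}"}

# Mirrors "[Koodous] Request APK Data": look up a single APK by its sha256
sha256 = "9be4fb5e337bb0b994c9d2b781355f934a349b76abc34a02a40527ae760eb1f0"
detail = requests.get(f"https://api.koodous.com/apks/{sha256}", headers=HEADERS)
print(detail.json())

# Mirrors "[Koodous] Search for APKs": run an advanced search query
query = 'package_name:"com.whatsapp" -developer:"WhatsApp Inc."'
search = requests.get("https://api.koodous.com/apks", params={"search": query}, headers=HEADERS)
print(search.json())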

If you have any questions or feedback, feel free to raise an issue. Also, don’t forget to explore our repository of Playbooks, Playbook Components, and Playbook Apps.

 


Building Out ProtonMail Spoofed Infrastructure with Creation Timestamp Pivoting


ThreatConnect Research reviews phishing activity targeting Bellingcat researcher Christo Grozev and identifies a series of ProtonMail-spoofing domains most likely associated with attacks on Russia-focused researchers and journalists.

On July 24th, Bellingcat shared a phishing email from July 23rd that unsuccessfully targeted Christo Grozev, a Bellingcat contributor who focuses on Russia-related security threats and weaponization of information. Using ThreatConnect, our various integrations, and DomainTools’ capabilities, we researched the email and identified a series of ProtonMail-spoofing domains most likely associated with the phishing activity that targeted Bellingcat. This case study highlights the importance of reviewing hosting infrastructure, co-locations, name servers, and WHOIS creation timestamps for malicious domains that are privacy protected. In this case, we identified eleven domains registered since April 11, 2019 most likely associated with the actor behind this activity and possibly used in attacks against other Russia-focused researchers or journalists. These findings have been memorialized in ThreatConnect Incident 20190724A: ProtonMail Spoofed Domains Used in Phishing Against Russian-Focused Researchers.

We’ve been fortunate to previously work with Bellingcat on Fancy Bear activity targeting them following their MH-17 reporting, beginning in 2015 and continuing into at least 2017. In this instance, we don’t know if Fancy Bear is behind this activity. The activity pattern observed in this incident suggests that may be the case, but that assessment is in no way definitive based on our current understanding of the activity as described below.

Phishing Targeting Bellingcat

The phishing email that targeted Bellingcat purported to be from ProtonMail’s support team and claimed that the target’s encryption keys and privacy may have been compromised. The “from” email addresses were most likely spoofed. The email header shows that the message was sent from legitimate Mail.de infrastructure and lists notifysendingservice@mail[.]uk as the return path email address. At this time, we do not know if this is an email address belonging to a legitimate service that the actor leveraged or an actor-controlled account. We have contacted Mail.de for additional information.

The email prompts the target to either change their password or generate new encryption keys at the provided links.

Shared Phishing Email Targeting Bellingcat Contributor

 

Those links are actually for the sites hxxp://mail[.]protonmail[.]sh/password and hxxp://mail[.]protonmail[.]sh/keys, respectively. We were unable to capture the password site live; however, the keys URL redirected to another domain — mailprotonmail[.]ch — as seen below in the Internet Archive. This site hosts a spoofed ProtonMail loading page that prompts the target to enable Javascript. We are still in the process of reviewing the Javascript files that this site attempts to load and will provide an update as we better understand them.

Identified Redirect Between ProtonMail Spoofed Domains and Spoofed ProtonMail Loading Page

 

Hosting Infrastructure

From an infrastructure perspective, at this point we have identified two domains associated with this activity — protonmail[.]sh and mailprotonmail[.]ch. Reviewing the WHOIS for these domains in our DomainTools Spaces App, we can see that both of these sites were registered through Njalla, which provides anonymous domain registrations and protects users “from ferocious domain predators.”

WHOIS Information for Identified Domains

 

Reviewing the hosting history for these domains using our Farsight DNSDB integration, we note that mailprotonmail[.]ch is hosted at 217.182.13[.]249.

Passive DNS Resolutions for mailprotonmail[.]ch

 

This IP address has hosted only three domains in the last two months and all of them spoof ProtonMail. We can reasonably conclude that this IP most likely is exclusive to the actor behind the activity that targeted Bellingcat.

Passive DNS Resolutions for 217.182.13[.]249

 

Iterating the previous research steps for these new domains — mailprotonmail[.]com and protonmail[.]systems — we see that these domains were also registered through Njalla. Additionally, the mailprotonmail[.]com domain was previously hosted at 193.33.61[.]199.

Passive DNS Resolutions for mailprotonmail[.]com

 

As with the 217.182.13[.]249 IP, reviewing passive DNS resolutions for 193.33.61[.]199 with our Farsight DNSDB integration, we see that it has recently hosted domains that all appear to spoof ProtonMail and again most likely is exclusive to the actor behind this activity. The additional co-located domains include protonmail[.]direct, my.secure-protonmail[.]com, and prtn[.]xyz.

Passive DNS Resolutions for 193.33.61[.]199

 

Iterating again with these new domains, we see that protonmail[.]direct was also registered through Njalla while my.secure-protonmail[.]com and prtn[.]xyz were registered through Web4Africa. Notably, these domains were registered on April 11, 2019, suggesting that this campaign may date back much earlier than the recently-identified phishing email targeting Bellingcat.

WHOIS for secure-protonmail[.]com and prtn[.]xyz

 

Creation Timestamp Pivoting

At this point, we’ve exhausted what we can identify from hosting IPs and domain co-locations. Unfortunately, in this case, we don’t have any registrant email domains from WHOIS or start of authority (SOA) records to build out our understanding of this actor’s infrastructure. However, we have a technique that sometimes proves useful for researching such domains — creation timestamp pivoting. This method helps identify other domains that were registered through the same reseller at the same time as the domain in question.

The idea here is that actors will sometimes register groups of domains at a single time. Doing so cuts down on the number of transactions they have to perform and the amount of time they spend procuring infrastructure. Even when using privacy protection services, WHOIS name server and creation timestamp information can often be used to find other domains that may be associated with those you’re researching.

To do this research, we’ll use a DomainTools Iris or Reverse WHOIS query to search for domains that use the name server of the site we’re investigating AND have the same creation timestamp string down to the hour. We then review the WHOIS for the returned domains and identify those that were registered in close temporal proximity to the one we started with. Let’s use the previously identified mailprotonmail[.]com as an example.

WHOIS for mailprotonmail[.]com

In the WHOIS for mailprotonmail[.]com, we see that it was registered at 6:10 UTC on June 27, 2019 through Njalla. An Iris query to pivot on these characteristics would look like the following:

DomainTools Iris Creation Timestamp Pivoting

 

Ultimately, four additional domains are returned. Looking at the WHOIS for these results, we see that two of the additional domains — prtn[.]app and the previously identified protonmail[.]sh — were registered within about 30 seconds of mailprotonmail[.]com.
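If you prefer to script this triage step, the core filtering logic is simple. Below is a minimal sketch, with hypothetical timestamps standing in for the WHOIS values a reverse WHOIS query would actually return:

from datetime import datetime, timedelta

# Hypothetical (domain, WHOIS creation timestamp) pairs returned by a reverse
# WHOIS query scoped to the same name server and the same creation hour.
candidates = [
    ("mailprotonmail[.]com", "2019-06-27T06:10:09Z"),
    ("protonmail[.]sh",      "2019-06-27T06:10:31Z"),
    ("prtn[.]app",           "2019-06-27T06:10:40Z"),
    ("unrelated-site[.]example", "2019-06-27T06:48:02Z"),
]

fmt = "%Y-%m-%dT%H:%M:%SZ"
seed_time = datetime.strptime("2019-06-27T06:10:09Z", fmt)  # the domain we pivoted from
window = timedelta(seconds=90)  # tune per reseller; there is no hard and fast rule

related = [
    domain for domain, created in candidates
    if abs(datetime.strptime(created, fmt) - seed_time) <= window
]
print(related)  # ['mailprotonmail[.]com', 'protonmail[.]sh', 'prtn[.]app']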

WHOIS for protonmail[.]sh and prtn[.]app

 

Iterating through this methodology for the other previously identified domains, we can determine that the following additional infrastructure is most likely associated with the actor we’re investigating:

  • protonmail[.]gmbh
  • prtn[.]app
  • protonmail[.]team
  • protonmail[.]support

It’s important to note that this method is not without caveats:

  • Boutique is Best – Generally, this methodology only works for smaller name servers or registrars. The more widely used a name server or registrar, the more results will show up for the given time you’re investigating.
  • Domain Creations on Intervals – In some cases, the reseller or registrar may not immediately register a domain for a customer and instead create groups of domains from multiple customers at a specified interval.
  • Lack of Results – Sometimes, the creation timestamp information may not be indexed by the capability you’re using, so the lack of additional domains in reverse WHOIS queries does not preclude the actual existence of other, related domains.
  • No Rule of Thumb – There is not a hard and fast rule for how close in temporal proximity domains have to be to be deemed “related.” In this case, we saw domains that were registered seconds apart and up to a minute and a half apart. It’s going to vary between resellers.
  • Coincidence – Two domains registered by different actors could be registered through the same reseller at the same or close to the same time.
  • Probability – Results from this research should always be considered within the larger context of the activity you’re investigating. In this case, all the additional domains spoof ProtonMail. Similar consistencies or lack thereof should be considered when applying probabilistic language to your resulting analysis.

Conclusions

In terms of attribution, based on our current understanding of the activity, we cannot assess who is behind this activity with a reasonable level of confidence. Fancy Bear has previously targeted Bellingcat and used the Njalla and Web4Africa resellers to procure infrastructure; however, none of those characteristics are exclusive to Fancy Bear. So, the shoe fits, but it probably fits others too. Additional information on the Javascript hosted at the aforementioned sites, other targets of this campaign, the extent of the campaign, and the landing pages for other links in the phishing emails could help us better assess who is behind this activity.

At this point, we don’t know if, how, or against whom all of the additional domains from this research have been used. Journalism and think tank organizations — particularly those that investigate Russia-related issues — whose contributors or employees use ProtonMail should review previous emails and monitor for future emails containing links to this infrastructure. Additionally, several of the identified domains have not been hosted to date, and could be used in future operations. Monitoring for passive DNS resolutions for these domains or new subdomains may help identify if or when they are operationalized.

Identified Domains and IPs:

protonmail[.]sh

mail[.]protonmail[.]sh

mailprotonmail[.]ch

mailprotonmail[.]com

protonmail[.]direct

protonmail[.]gmbh

protonmail[.]systems

prtn[.]app

protonmail[.]team

protonmail[.]support

user[.]protonmail[.]support

prtn[.]xyz

secure-protonmail[.]com

my[.]secure-protonmail[.]com

217.182.13[.]249

193.33.61[.]199

 

In ThreatConnect:

20190724A: ProtonMail Spoofed Domains Used in Phishing Against Russian-Focused Researchers

Associated Snort Signature

 


Playbook Fridays: Query Palo Alto Wildfire For New Submissions / Submit Wildfire Binary to VMRay


With these Playbooks, create the sharing and connection between two otherwise segmented products

These two Playbooks allow you to orchestrate the ability to retrieve files deemed malicious by Palo Alto Wildfire and submit them to VMRay for a full malware analysis. They bridge the gap between two malware analysis products and create actionable threat intelligence, both within the Palo Alto Wildfire Playbook and by hooking into other VMRay Playbooks.

These Playbooks create the sharing and connection between two otherwise segmented products. The first Playbook runs on a timer, downloading the Wildfire threat logs and then using the hashes they contain to query Wildfire for the malicious files; the second Playbook automatically triggers when those files are stored as malware in the vault within a Wildfire source.

The Playbooks:

  • are fully automated
  • free SOC personnel from manually reviewing files in Palo Alto Wildfire
  • can be used in conjunction with separate VMRay Playbooks that pull all the analysis results down into the Platform as actionable threat intelligence

How to Set up Playbook 1: Query Wildfire For New Submissions

Initial Wildfire Call: This HTTP client is used to submit the first request to the Wildfire API, specifically to GET Wildfire logs that are malicious – this is achieved by using the Wildfire query language to specify the verdict, as well as the time frame for this log pull. In this case, (verdict neq benign) and (receive_time in last-15-minutes) is used. Set all of these as query parameters in the HTTP client.

Delay 5 seconds: Since Wildfire queues up a job to collect these logs, delay for a short amount of time, as immediately making a second request could result in an empty response if that job didn’t complete.

Extract JobID: This step is simply using a regular expression to extract the JobID from the initial API response.

Second Wildfire Call w/ JobID: This is essentially the same request made to the API as the initial request, but in this instance you are defining the JobID in the query parameters to specifically pull back the logs that you just requested.
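The post doesn’t show the exact endpoint behind these two HTTP Client steps; the job-based pattern matches the PAN-OS XML API’s log retrieval, so here is a rough sketch under that assumption (hostname, parameters, and certificate handling will differ in your environment):

import time
import xml.etree.ElementTree as ET
import requests

HOST = "firewall.example.com"   # hypothetical firewall/Panorama hostname
API_KEY = "YOUR_PANOS_API_KEY"
QUERY = "(verdict neq benign) and (receive_time in last-15-minutes)"

# 1) Enqueue the WildFire log query; the API responds with a job ID
enqueue = requests.get(
    f"https://{HOST}/api/",
    params={"type": "log", "log-type": "wildfire", "query": QUERY, "key": API_KEY},
)
job_id = ET.fromstring(enqueue.text).findtext(".//job")

# 2) Wait briefly for the job to finish, then fetch the results by job ID
time.sleep(5)
logs = requests.get(
    f"https://{HOST}/api/",
    params={"type": "log", "action": "get", "job-id": job_id, "key": API_KEY},
)
print(logs.text)  # XML log entries; convert to JSON if you prefer, as the Playbook does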

Convert JSON <> XML: This is completely optional – in case you prefer working with JSON formatted data rather than XML.

Extract Data: This is utilizing JMESPath to extract the specific data that is relevant to the use case from the returned logs. The JMESPath expression specifically here is response.*.*.*.*[][][][].{filedigest: filedigest, subject: subject, misc: misc}.

Deduplicate all: This is an array operations app that has many different functions to it – in this instance you can use the Unique operation to deduplicate data from what you just extracted in the previous JMESPath step.

Extract Individual Arrays: One more JMESPath pass to get the data perfect. You can use a pretty simple expression to pull the file digest, subject, and misc fields out of the JSON for later use.

Iterate Hashes For Binaries: From the Wildfire logs, pass the following through to iterate on: file hash values, document names, and file names. The rest of the Playbook is contained within the iterator and will execute one pass per file hash value.

Get Wildfire Sample: This is a prebuilt Playbook application that handles all the Wildfire API calls for the end user. Simply supplying the API key and the hash value to retrieve is all that is needed.
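If you ever need to replicate this step outside the Playbook app, a minimal sketch against WildFire’s public API looks like the following (the /publicapi/get/sample endpoint and the apikey/hash parameters are our reading of WildFire’s API documentation; the Playbook app wraps all of this for you):

import requests

WILDFIRE_API_KEY = "YOUR_WILDFIRE_API_KEY"
sample_hash = "<sha256 from the iterator>"  # supplied per iteration by the Playbook

resp = requests.post(
    "https://wildfire.paloaltonetworks.com/publicapi/get/sample",
    data={"apikey": WILDFIRE_API_KEY, "hash": sample_hash},
)
if resp.status_code == 200:
    with open("sample.bin", "wb") as f:
        f.write(resp.content)  # raw sample bytes; the Playbook zips these with a password next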

Compress File: Compress the malware sample via a password protected zip file to store in the malware vault.

Create ThreatConnect Document: This is the standard document creation Playbook application – you are simply passing through the relevant data into each field and letting the ThreatConnect API take care of the rest.

Create ThreatConnect File: Similar to the document creation step, this one adds the extra step of associating the hash Indicator with the document itself.

 

How to Set up Playbook 2: Submit Wildfire Binary to VMRay

This Playbook automatically triggers when the documents from the prior Playbook get created within their respective Wildfire source in ThreatConnect.

Get ThreatConnect Document by ID: This step is simply grabbing the document from the trigger.

Submit Binary to VMRay: Use VMRay’s API to submit the malware sample using the sample_file key. Refer to VMRay’s API documentation for more granular options when submitting files for analysis.
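Under the hood, the submission is a multipart POST. Here is a minimal sketch, assuming a VMRay REST endpoint of /rest/sample/submit and api_key header authentication (verify both against your VMRay API documentation, as the post advises):

import requests

VMRAY_SERVER = "https://vmray.example.com"   # hypothetical VMRay instance URL
VMRAY_API_KEY = "YOUR_VMRAY_API_KEY"

with open("sample.bin", "rb") as sample:
    resp = requests.post(
        f"{VMRAY_SERVER}/rest/sample/submit",
        headers={"Authorization": f"api_key {VMRAY_API_KEY}"},
        files={"sample_file": sample},   # the sample_file key referenced above
    )
print(resp.json())  # includes data.jobs[].job_submission_id, extracted in the next step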

Get Submission ID: In this JMESPath application, you can use the data.jobs[].job_submission_id expression to extract the submission ID.
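For reference, this is what that JMESPath expression does when run outside the Platform with the jmespath Python package (the response shape and ID value here are mocked):

import jmespath

# Mocked response shape; the real submission ID comes back from VMRay
vmray_response = {"data": {"jobs": [{"job_submission_id": 4711}]}}
submission_ids = jmespath.search("data.jobs[].job_submission_id", vmray_response)
print(submission_ids)  # [4711] -- an array, hence the deduplicate/to-string steps that follow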

Deduplicate: This is another array operations application, also using the unique operation to deduplicate if any exist.

To String: The logger application is a great way to convert an array into a string output if a string type is required in subsequent steps.

Formatting ID/Formatting ID Cont.: Use find and replace to remove the brackets around the original submission ID value to make it cleaner.

Attribute VMRay Submission ID to Document: This step is memorializing the submission ID in the VMRay Submission ID attribute within the original document that triggered the playbook execution.

 

If you have any questions or feedback, feel free to raise an issue. Also, don’t forget to explore our repository of Playbooks, Playbook Components, and Playbook Apps.


CAL™ 2.2 Brings Improved Data Hygiene and More Robust Graph Modeling


Right on the heels of our 2.1 CAL update, we’re keeping up the momentum with the release of CAL 2.2!

As a refresher, ThreatConnect’s CAL™ (Collective Analytics Layer) provides anonymized, crowdsourced intel about your threats and indicators. It leverages the collective insight of the thousands of analysts who use ThreatConnect worldwide to provide you with even more context regarding your indicators and threats.

The analytics engine that powers CAL has been improved over time, and is something that you can really think of as the ‘Brain of CAL’.

Once all of the data is collected and aggregated, CAL allows for data classification, and consequently, pivoting across related indicators. This is extremely beneficial when determining relationships between indicators.

The improvements in 2.2 include Better Data Hygiene and More Robust Graph Modeling.

Let’s dig deeper into each.

Better Data Hygiene

To say CAL handles a lot of data would be an understatement. We’re talking nearly half a billion indicators as of June 2019 that are sent to CAL for further analysis. CAL takes those indicators and, through proprietary algorithms leveraging overlaid datasets, creates a threat score to indicate the potential maliciousness of the respective indicator. We combine this score with ThreatConnect’s in-app reputation engine, called ThreatAssess, which gives users a score on a 0-1000 scale to help them make better decisions. Furthermore, CAL can modulate an indicator’s in-app status, reducing clutter from false positives and promoting relevant indicators in analyst workflows.

Keeping that in mind, ThreatAssess is only as reliable as the algorithms and scoring that are in place. In an effort to continuously make our data more reliable and accurate, a few things have been added to allow for even better data hygiene. With every CAL release, we’re adding additional data sources to help with data hygiene. In this release, new capabilities include:

  • The ability for CAL to benefit directly from the ThreatConnect Research team’s curation. Our Research Team is already working to ensure that we’re keeping a clean house in the ThreatConnect cloud; now CAL can benefit from their analysis and pass those insights along to private instances.
  • Dynamic inclusion of Microsoft Office365 networks for better whitelisting. By using some of their newer endpoints, we can keep a finger on the pulse of Microsoft’s entire Office365 infrastructure.  These IP addresses are responsible for tens of millions of noisy observations per month, and CAL’s analytics can deprioritize them appropriately.

More Robust Graph Modeling

To drive its analytics, CAL models the highly relational dataset of the threat landscape at a behemoth scale.  To replicate the analysis that humans make at the scale of hundreds of millions of relationships a day, we needed to improve our ability to model and process the graph that CAL extends every day.

As CAL learns about new indicators and discovers new links, its analytics need to be able to scan deeper and faster across the information model to generate new insights.  This lays a foundation for us to inject even more data into the CAL engine, enabling more sophisticated analytics and insights in the releases to come!

 


Playbook Fridays: Reporting Through Email Attachment


This Playbook streamlines a process for reporting to a threat intel team without asking the reporting party to rework any existing infrastructure, or go too far out of their way to make findings accessible. This also works regardless of who the reporting party is; whether that is a SOC, customer, or industry partner.

The most valuable source of intelligence for any organization is often the intelligence produced by other teams coexisting in the same workspace. This process is extremely important, as it drives communication across all security teams and personnel. This type of internal intelligence, along with premium and OSINT feeds, provides a complete picture for a threat intelligence analyst.

While many teams have an idea of what applications and appliances they would like to integrate with, only extremely mature threat intel teams have working processes for how communication is received and how confirmation is sent to reporting parties. Even mature teams have some manual steps within a process that this Playbook would help automate.

Overall this Playbook provides a simple and professional way of gathering data reported from invested parties outside of the threat intel team, making collaboration a simple and easy task. The Playbook:

  • Saves time by automating the process
  • Provides a uniform process for parties reporting to the threat intel team
  • Gives the threat intel team a uniform structure in which to work new reports

How It Works

This Playbook is triggered when an email is sent to a mailbox created within ThreatConnect. This mailbox can be added to an existing distribution list that is used for reporting data to the threat intel team.

An email with an attached report is sent to the mailbox set up within the trigger of the Playbook. This can also be done with the body of an email through minimal alterations to the Playbook.

This Playbook does not directly use any integrations; however, you can use integrations to enrich data after the Playbook is run.

After receiving the email, the Playbook uses the “Create ThreatConnect Document” app to save the attachment as a Document to maintain a historical record of the report. Then the “Create ThreatConnect Attribute” app is used to record the From field of the email header as an attribute, preserving who sent the report. Following this, we convert the binary to a string (Binary to String app), then use the “Regex Extract” app to extract IOCs. The regex queries for all Indicator types can be found under System Settings.
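As a rough illustration of what the extraction step does (these patterns are simplified examples, not the regexes ThreatConnect ships under System Settings):

import re

# Simplified, illustrative patterns only -- the Playbook uses ThreatConnect's
# built-in indicator regexes found under System Settings.
PATTERNS = {
    "Address": r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
    "File (MD5)": r"\b[a-fA-F0-9]{32}\b",
    "File (SHA256)": r"\b[a-fA-F0-9]{64}\b",
}

def extract_iocs(report_text):
    # Return indicator type -> deduplicated, sorted list of matches
    return {name: sorted(set(re.findall(pattern, report_text)))
            for name, pattern in PATTERNS.items()}

body = "Beaconing to 203.0.113.45; dropper MD5 d41d8cd98f00b204e9800998ecf8427e"
print(extract_iocs(body))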

Once the IOCs have been extracted into a string array, the Playbook converts the string arrays to strings using the “Join Array” app. Now that the IOCs are in strings — respective to their IOC type — a series of “If/Else” statements validates that each IOC output contains data. The If/Else statements only pass logic where the IOC strings do not equal “null”. This allows any IOC types that do not exist within the original report to drop out and consume no further system resources.

Concurrently with this process, the Playbook sends a reply back to the original reporting email acknowledging that the email was received. Once the logic has passed the If/Else statements, proving that the IOC strings contain data, the original string arrays from the Regex Extract app are used to create ThreatConnect Indicators, respective to their IOC types. Once the IOCs have been created, the logic flows into a merge statement with all the string arrays of the Create (Indicator Type) apps’ outputs, and this output feeds into a Send Email app to alert the reporting party that the Indicators have been created within ThreatConnect.

 


Best Practices for Writing Playbooks in ThreatConnect, Part 1


Proper naming conventions, using Descriptions and Labels, and more!

This is the first of a multi-part series of posts on Playbooks best practices.  There’s a lot of material to cover, and it would be too much for a single post. In Part 1,  I’m sticking to the basics – how to use proper naming conventions and how to best use Descriptions and Labels. In the next post,  I’ll go more in-depth with things like efficiency tricks and error handling methods.

Recently, I’ve been surveying Playbooks written not only in ThreatConnect, but in various other orchestration platforms, and I have made a few observations that (I think) are worth sharing. I’ve noticed that a lot of Playbooks appear to have been developed in a haphazard manner – with minimal testing, and then activated without them being cleaned up in order to ready them for production.

It’s typical that Playbooks are created through trial and error.  And, most likely, the progression of a workflow is modeled from a human process brain dump, which is a perfectly acceptable approach.  However, once the Playbook is working, it is still not finished. The next step is to refine the workflow and find efficiencies where human tasks don’t translate directly to Apps (some can be combined in a single App, while others may take multiple Apps). This “refinement” is a critical step to successfully create a production-ready Playbook.

Bottom line, Playbooks should be treated as production-ready automation and orchestration workflows.

With that, let’s dive a bit deeper into some Best Practices!

What’s In A Name?

Your Playbooks should be named in a way that anyone could read the title and have at least a basic understanding of what the Playbook does.

  • IP_VT = Bad
  • Enrich IP Address with VirusTotal = Good

Not only should you be properly naming your Playbook, but also any triggers and apps used within the Playbook itself. When you introduce an App onto the Playbook Canvas, ThreatConnect will always append a number to it.

In the example above, you can see that we’ve got a pretty ambiguously designed Playbook with no properly named triggers or apps. This can get confusing when you start to build Playbooks that may have 5+ copies of the same app performing different tasks. This will make debugging a nightmare. Save yourself the hassle and give each app a name that clearly describes the task it’s performing.

Below is a screenshot of the same Playbook, but with more intuitive app names. As you can see, it’s much easier to understand what may be going on here, so anyone on the team will be able to get up to speed on the team’s processes at an accelerated pace.

Describing Your Playbooks For Fun And Profit

We strongly recommend using the Description field to fully document your Playbook. Producing a Playbook that doesn’t have a Description is actually a detriment to the process and a big oversight. No one wants uncommented code!

An example of a subpar Playbook description (besides a blank one) would be something like:

“VirusTotal Enrichment”.

Although technically accurate, it doesn’t necessarily tell the whole story.

Think about using something like this:

“This Playbook takes a given IP address IOC, enriches it with data from VirusTotal’s Enterprise API, and stores the context as additional attributes. Additionally, the Playbook searches for, parses out, and stores any related IOCs in ThreatConnect for further analysis.”

Put simply:

  • Descriptions should include things like a summary, required dependencies, helpful links, and perhaps some sample data, etc.
  • Summaries should include an excerpt on what the Playbook does and how the user will benefit from deploying it. Additionally, it’s very helpful to include any sample use cases that you may have.

Listing any required information and dependencies is also a really great idea as it’s going to help the user decide early on if they’ll even be able to deploy that particular Playbook. If your Playbook uses a paid API endpoint for a particular enrichment service or enterprise security product that requires an API key, definitely note that upfront. This prevents users who don’t subscribe to that product or service from wasting their time (some 3rd party products offer free versions of their service which can be confusing for a lot of people).

If you’re a generous Playbook developer (hint, hint), you can include additional info such as links to third party product information, API documentation, sample data, etc (if applicable). This info isn’t necessarily required and is more of a “nice to have”, but it really rounds out the Playbook and produces a fully finished product that can be more easily consumed by other users.

Don’t Label Me, Bro

Kidding. Label everything!

Labels, as the name implies, can be thought of as tags or keywords. We recommend strongly that you use Labels to organize your Playbooks.

What does the Playbook do? Does it send a binary to VirusTotal’s automated malware analysis solution? Try using the following Labels:

  • Malware, Sandbox

Another great use case for Labels is to indicate where a Playbook stands in the development and implementation process. Some basic examples of labels for this are:

  • In Design, In Development
  • Pre-Production, UAT, Testing
  • Production

One thing you’ll want to do is make sure to check and see if said Label already exists (when typing the label, if you get an autocomplete dropdown, use it).

Look for more in Best Practices, Part 2. In the meantime, check out our Playbook Fridays blog posts.  Or, if you have any questions or feedback, feel free to raise an issue. Also, don’t forget to explore our repository of Playbooks, Playbook Components, and Playbook Apps.

 

 

 


Operationalizing Threat Intel: On the Importance of “Boring” Dashboards


This blog post is for boots-on-the-ground security analysts. Managers, turn back now! By the end, you’ll be able to create a tailor-made dashboard in ThreatConnect to help inform your day-to-day activities.

“Flash! Ah-ah! Savior of the universe!” -Queen

I cringe a little bit whenever I hear someone say that dashboards are just to show “pretty pictures” to a group of ivory tower executives. All flash, no substance. Yes, bubbling up certain metrics to leadership is important for justifying investments, demonstrating success, and tracking progress, but “pretty pictures” are more the purview of a report, not a dashboard. A dashboard is about monitoring. What do I need to know now — to stay on top of — to do my job?

We’ve covered some dashboarding best practices in the past. Think about your car dashboard: it shows you key stats you need in the moment, like how fast you’re going, or critical warning indicators. It does NOT show a pie chart of the most recent speed limits, or a pew-pew map of where you’ve been honked at. And yet, looking at how most infosec dashboards are advertised, we see something like this, very often animated:

A strange dashboard. The only winning strategy is not to use it.

I get it. There’s a reason that Las Vegas is full of blinky lights and colorful carpets and plinky sounds: our barely modern monkey brains will PAY ATTENTION! THINGS ARE HAPPENING! LOOK AT ME LOOK AT ME LOOK AT ME! And so a bright, beautiful dashboard is going to make people look, and hopefully get curious, and when all the flash is finally peeled away there will be some valuable substance underneath. Unfortunately, this very flash dooms dashboards to misrepresentation as mere pretty pictures.

A Better, Boring-er Way

There are plenty of good reasons to include bar graphs and colors on dashboards if they’re the best way to convey necessary information, but imagine seeing this projected on a 10×10 screen on a tradeshow floor:

Pfft. Data tables? How will I get my bosses to pay attention? What will I animate???

This dashboard is NOT sexy. It will NOT cause your brain to release dopamine just by looking at it. BUT if you create a dashboard like this that’s tailor-made to your workflow, it just might help you do your job more effectively. Let’s see how.

Tabula Rasa

ThreatConnect lets you create special dashboard cards called “datatables.” Using ThreatConnect Query Language (TQL), you can construct these datatables tailor-made to your specific workflow. Ideally, your workflow is not just blindly triaging alerts; you have some method to the madness. Some focus area of expertise, ideally driven by intelligence and infosec requirements linked to business risk. For example:

  • Targeted attacks against your industry vertical (e.g. finance, manufacturing, etc.)
  • Threats related to your cloud infrastructure
  • Physical security matters
  • Specific classes of malware (e.g. ransomware)
  • Different tactics or techniques driven by the ATT&CK framework

You can also create dashboards purpose-built around your or your team’s Priority Intel Requirements (PIRs), which we’ll cover in a future blog article:

A “small multiple” on a PIR dashboard, in this case dealing with threats related to financial institutions in Canada, that shows relevant feeds, new intel, and related topics.

Once you’ve identified the requirements that are driving your day-to-day, you can start creating datatable-driven dashboards that bubble up meaningful results so they’re the first thing you see when you log into ThreatConnect:

  • Are there new indicators related to the threats I care about? (either reported by a feed or alerted on or observed internally)
  • Is there new high-level intelligence I could be acting on? (from a feed or created internally or by a Playbook)

Creating a Relevant Dashboard

Let’s say you’re tracking threats targeting financial institutions. You might want to see intel (both Groups and Indicators)  in ThreatConnect tagged with that industry as well as threat actors and malware that traditionally target your sector. That way you’re constantly and consistently monitoring the data that’s relevant to you. Rather than, “here’s a pretty pie chart that tells me nothing,” it’s “my favorite feed just published a report on an APT I’m tracking and our SIEM recently observed a domain related to a malware variant we’ve been on the lookout for.”

To start, hover over the Dashboard link in ThreatConnect’s main menu, then select “New Dashboard.” Give your dashboard a descriptive name.

You can create as many dashboards as you want!

Then click the prompt to create your first dashboard card, give your card a title, (e.g. “Groups Tagged with Finance Sector Tags”), and select the “New Query” option. On the next screen, specify that you want to create a datatable. This is where the magic happens.

The “Advanced Query” section lets you use TQL to ask simple to exceptionally complex questions of the data in ThreatConnect. Everything from “show me all Indicators tagged ‘banking’” to “show me all Adversaries in China who have been linked to Incidents that leverage ATT&CK Technique T1192 that have Indicators observed in the last 90 days.”

Depending on your query and how you want the data grouped, you’ll be able to pick and choose the fields you want to include in your table:

Options will vary based on your other selections.

For our example dashboard, we want to create four cards: two that show Indicators and Groups explicitly with tags related to Finance, and two that show Indicators and Groups related to threats or adversaries that are known to target Finance. Here’s the TQL for each card:

Groups Tagged with Finance Sector Tags

typeName in ("Threat", "Adversary", "Incident", "Campaign") and tag in ("Finance", "Financial", "Banking", "Banking and Finance", "Finance and Banking") and dateAdded >= "NOW() - 360 DAYS"

Groups Tagged with Threats Relevant to Finance

typeName in ("Threat", "Campaign", "Incident", "Adversary") and tag in ("Carbanak", "Fin7", "Hidden Cobra", "Lazarus Group", "Guardians of Peace", "APT37", "Anunak", "Teleport Crew", "Suckfly", "APT10", "Stone Panda", "menuPass", "Emissary Panda", "APT27", "Molerats", "Locky") and dateAdded >= "NOW() - 360 DAYS"

Indicators Tagged with Finance Sector Tags

typeName in ("Address", "EmailAddress", "File", "Host", "URL", "ASN", "CIDR", "Mutex", "Registry Key", "User Agent") and tag in ("Finance", "Financial", "Banking", "Banking and Finance", "Finance and Banking") and dateAdded >= "NOW() - 360 DAYS"

Indicators Tagged with Threats Relevant to Finance

typeName in ("Address", "EmailAddress", "File", "Host", "URL", "ASN", "CIDR", "Mutex", "Registry Key", "User Agent") and tag in ("Carbanak", "Fin7", "Hidden Cobra", "Lazarus Group", "Guardians of Peace", "APT37", "Anunak", "Teleport Crew", "Suckfly", "APT10", "Stone Panda", "menuPass", "Emissary Panda", "APT27", "Molerats", "Locky") and dateAdded >= "NOW() - 60 DAYS"

You should end up with a dashboard that looks something like this:

Perfect for reviewing each morning over coffee.

And there you have it! It’s not pretty to look at, but it does help you create habits and keep on top of the specific infosec requirements you are responsible for on a daily basis. Spend some time thinking through the types of threats you and your organization care about, then customize the TQL samples above to create dashboards that really matter to you.

Where to Go Next

To learn more about dashboards, check out this overview. As always, for product feedback, please contact me directly at dcole@threatconnect.com. If you’re a customer and would like to share your best practices or ask questions related to dashboards, please reach out in our customer Slack workspace!

Special thanks to Kyle Ehmke on our Research team for providing the TQL and inspiration for this post!

 

 



Playbook Fridays: Component IOC All Data Pull


For all of the applications that ThreatConnect does not have an integration for, the API is the best way to go. Because IOC data is requested repeatedly, the Component gives you all of that data in a format you can map as needed

The idea for this Playbook came from working extensively with API calls to other applications. For all of the applications that ThreatConnect does not have an integration for, the API is the best way to get communication working in a structured manner. Because IOC data is requested repeatedly, the Component gives you all of that data in a single format you can map as needed, and it even lets data updates happen in fewer steps than it would take by hand. (Side note: it’s beneficial to include “Component” in the name of your Components so your exported Playbooks and Components sort in an organized manner.) With the data always in the same format, you can generalize your Playbook actions to work off the data structure rather than attempting to pull data that might not exist.

With this Playbook:

  • Data is formatted to JSON
  • All data is more parseable, and
  • All API data is collected in one step

The use of the Component helps to eliminate the need for additional steps to capture the data. Regardless of whether the data exists, the JSON structure will always be present. Because that structure is there every time, the parsing of the data can always call on the same section. If there is no data, the parse will still happen and succeed; it just will not return anything for a future step to work with.

When working with API calls and the data they return, you then need to parse and map the data to variables. Depending on the data to be sent, different items may need to be collected for different applications. With a one-stop collection of all the IOC data that can be obtained through the API, you eliminate the need to set up specific calls. With all of the data in one JSON, you reduce the number of parsing steps needed, as all the parsing can be done at one time against the combined JSON.

Because this is a Component, it is built to be dropped into a Playbook to simplify the data calls needed. The Component Trigger accepts a String for the incoming Indicator. The output from the Trigger is the JSON structure of all the data that was collected on the Indicator that was fed in at the beginning.

The Playbook starts by calling the ThreatConnect API to check for the IOC in question. If the IOC exists, an initial API call for the IOC data is made. At the same time, an API call captures the available IOC types for the Platform. The API branch for the IOC is parsed out of the results, where it is compared to the IOC type from the initial IOC API call. The next two steps clean up the data from the parse to ensure it is accurate for the future API calls.

Some IOCs have special characters in their summaries. In order to use them in an API call, the special characters must be percent-encoded. The three steps after the API branch variable cleanup take the IOC and encode it in a manner that can be used by the API. Once the IOC is encoded, a set of steps branches out. Each branch handles a specific API call whose data cannot be collected from the main call. Once those specific calls are done, the data from each call is parsed to help with future formatting. All parsed data is pushed into a Logger app along with the data from the main API call. The Logger app is used to create the JSON structure, referencing the parsed data to provide a consistent framework. If the IOC in question does not exist, a separate path runs where a variable is set to 0 (zero). This app and the Logger app join together at the Merge in the Component. Whichever app has data, its output is converted into a new variable. That variable is passed back to the Component Trigger to be used as the output back into the Playbook that contains the Component.

Here’s how it works:

Step 1 – Component Trigger Input

First, start by defining the input you want the Component to ingest and declaring the variable name. I chose String only, as we just want the name of the IOC to come into the Component. If you choose other types, you would need to include additional parsing to capture just the IOC name. I set the option to include $Text Variables and made it a Requirement to ensure that some sort of data is fed into the Component. We will skip the Output section of the Trigger until we have set up the parts it needs.

Step 2 – Main API Call of IOC

With the variable set from the input, we now call on the ThreatConnect API app. Since we plan to allow this API call to be any IOC type, we will use the API path of /v2/indicators only and set the HTTP Method to GET. We set the query parameters includeAdditional to true, includes to tags, includes to attributes, and filters to summary=#variable-name. The #variable-name comes from the Component input.
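To make the configuration concrete, the assembled request is roughly the following (the indicator value is hypothetical, and the ThreatConnect API app handles authentication and URL-encoding of the filter for you):

GET /v2/indicators?includeAdditional=true&includes=tags&includes=attributes&filters=summary%3Dbad-domain.example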

Step 3 – Results Count !=0

To ensure that the future steps only happen if the IOC exists in the Sources the Run As User has access to, we use an IF/Else operator to run a check. The #tc.api.count is referenced from the Main API Call app from before. If that count does not equal 0, then we will want to proceed with the additional steps to collect the data.

Step 4 – Set Result to 0

For the False/Failed connection on Step 3, we will connect the Set Variable app to it. Create a new variable and set the value to 0.

Step 5A – Parse API Results

From the True/Success connection on Step 3, we will connect the JMESPath app to it. The app will reference the #tc.api.result output variable from Step 2 to parse against. We will set up three String Expressions to parse out the data needed later. The first String will have a value of data.indicator[].summary | [0] . This will capture the IOC name again, since for some IOC types there is a variation in how it is stored by the API. The second String will have a value of data.indicator[].type | [0] . This will capture the IOC type from the results, which we will use to look up the API branch. The third String will have a value of data.indicator . This parses the JSON data down to just the IOC information itself, which will be used to build the new JSON later.

Step 5B – API for IOC Types

Off the same True/Success connection that Step 5A uses, we will connect the ThreatConnect API app to run a query at the same time. The API path for the app to run is /v2/types/indicatorTypes . We will use GET as our Method and no other options needed. This will capture all IOC types in the platform along with the related data.

Step 6 – Pull API Branch Based on IOC Type

The success connections from both Steps 5A and 5B tie into a JMESPath app. The JSON Data input for the app will come from the #tc.api.result output variable from Step 5B. We will make a single String Expression to map the needed branch to the IOC. The value to make that mapping is data.indicatorType[?name==’#j.ioc.type ‘].apiBranch . The #j.ioc.type is the second String mapping we made in Step 5A. This expression searches the IOC Types API call we made and lists the apiBranch that matches the name of the IOC type of the IOC brought into the Component. This parse ensures that the apiBranch called will always match the IOC being processed, allowing custom IOC types to be used in this Component.

Step 7 – Drop Brackets from Branch

With the capturing of the correct apiBranch from JMESPath, we connect to a Find and Replace app. The input for the app will be the variable set up in Step 6 that parsed the apiBranch. We use (\[|\]) as our Find and leave the Replace field empty. We check the Regex box, as our query is a regex. The query will look for [ or ] and delete them from the string.

Step 8 – Drop Quotes from Branch

We connect another Find and Replace app to the success of Step 7. This step will take the output variable from the previous Step as the input text. Our Find field will just be a double quote, and Replace will again be empty. This will strip the quotes from the String to ensure they do not interfere in the future API calls.

Step 9 – Change \\ to \ in IOC

With the way the data on the IOC is stored from the API perspective, any \ in the summary name of the IOC gets an additional \ added to act as the escape character for the String. We will take another Find and Replace app and connect it from the success in Step 8. This Find and Replace will leverage the IOC variable that we created in Step5A as the input text. The Find field will be two backslashes and the Replace field will be a single backslash. The Find and Replace will only make changes if the Find query does exist, so the majority of the IOCs may skip this step.

Step 10 – Encode IOC String for API

We will take a String Operations app and connect it to the success of Step 9 for our next step. We’ll use the URL Encode Operation to encode any special characters in the output variable from Step 9. The Strings field will be that output variable to make those changes.

Step 11 – Replace / for Encoding IOC

Just like in Step 9, we will use Find and Replace to help adjust the IOC variable. Take another Find and Replace app and connect it to the success of Step 10. The input for the app will be the output variable #string.outputs.0 from Step 10. This variable is specified as a String, and we want to keep it that way. Our Find field will be the forward slash, and the Replace field will be %2F . The encoding in Step 10 does not handle this character, so this step ensures it is covered in case that character exists in the IOC name.
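For intuition, the behavior described in Steps 10 and 11 mirrors Python’s urllib.parse.quote, which by default treats the forward slash as safe and leaves it unencoded (the comparison to the String Operations app’s internals is an assumption, not documented behavior):

import urllib.parse

ioc = "example.com/path with spaces"   # hypothetical IOC summary
print(urllib.parse.quote(ioc))                       # example.com/path%20with%20spaces -- "/" untouched
print(urllib.parse.quote(ioc).replace("/", "%2F"))   # example.com%2Fpath%20with%20spaces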

Step 12A – Victim Associations Pull

From the success of Step 11, we branch 5 other apps to run concurrently with each other. For 12A, the app is the ThreatConnect API app so that we can pull the potential Victims that may be related to the IOC. Our API path will be: /v2/indicators/#find.replace.output/find.replace.output/victims . The first find.replace.output is the output variable from Step 8 and the second find.replace.output is the output variable from Step 11. We will set the HTTP Method to GET.

Step 12B – Victim Assets Associations Pull

From the success of Step 11, we branch 5 other apps to run concurrently with each other. For 12B, the app is the ThreatConnect API app so that we can pull the potential Victim Assets that may be related to the IOC. Our API path will be: /v2/indicators/#find.replace.output/find.replace.output/victimAssets . The first find.replace.output is the output variable from Step 8 and the second find.replace.output is the output variable from Step 11. We will set the HTTP Method to GET.

Step 12C – Security Labels Pull

From the success of Step 11, we branch 5 other apps to run concurrently with each other. For 12C, the app is the ThreatConnect API app so that we can pull the Security Labels that may be applied to the IOC. Our API path will be: /v2/indicators/#find.replace.output/find.replace.output/securityLabels . The first find.replace.output is the output variable from Step 8 and the second find.replace.output is the output variable from Step 11. We will set the HTTP Method to GET.

Step 12D – Group Associations Pull

Like 12A, this app branches from the success of Step 11 and runs concurrently with the others. For 12D, the app is the ThreatConnect API app so that we can pull the Groups that may be associated with the IOC. Our API path will be: /v2/indicators/#find.replace.output/find.replace.output/groups . The first find.replace.output is the output variable from Step 8 and the second find.replace.output is the output variable from Step 11. We will set the HTTP Method to GET.

Step 12E – IOC to IOC Associations Pull

Like 12A, this app branches from the success of Step 11 and runs concurrently with the others. For 12E, the app is the ThreatConnect API app so that we can pull the other Indicators that may be associated with the IOC. Our API path will be: /v2/indicators/#find.replace.output/find.replace.output/indicators . The first find.replace.output is the output variable from Step 8 and the second find.replace.output is the output variable from Step 11. We will set the HTTP Method to GET.

Step 13A – Victim API Data Parse

From the success of Step 12A, we connect the JMESPath app to it. Our JSON data field will be the #tc.api.result output variable from Step 12A. We will create one String expression with the value set to data.victim. This will help to clean up the data for the future JSON.

Step 13B – Victim Assets API Data Parse

From the success of Step 12B, we connect the JMESPath app to it. Our JSON data field will be the #tc.api.result output variable from Step 12B. We will create one String expression with the value set to data.victimAsset. This will help to clean up the data for the future JSON.

Step 13C – Security Labels API Data Parse

From the success of Step 12C, we connect the JMESPath app to it. Our JSON data field will be the #tc.api.result output variable from Step 12C. We will create one String expression with the value set to data.securityLabel. This will help to clean up the data for the future JSON.

Step 13D – Group API Data Parse

From the success of Step 12D, we connect the JMESPath app to it. Our JSON data field will be the #tc.api.result output variable from Step 12D. We will create one String expression with the value set to data.group. This will help to clean up the data for the future JSON.

Step 13E – IOC Associations API Data Parse

From the success of Step 12E, we connect the JMESPath app to it. Our JSON data field will be the #tc.api.result output variable from Step 12E. We will create one String expression with the value set to data.indicator. This will help to clean up the data for the future JSON.
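Each of the Step 13 apps is a single JMESPath expression applied to the raw API response. For reference, the same extraction looks roughly like this in Python with the jmespath library (the sample response is illustrative):

import json
import jmespath

# Illustrative shape of a #tc.api.result payload from one of the Step 12 calls
api_result = '{"status": "Success", "data": {"group": [{"id": 1, "name": "Phishing Campaign"}]}}'

groups = jmespath.search('data.group', json.loads(api_result))
print(groups)   # [{'id': 1, 'name': 'Phishing Campaign'}]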

Step 14 – All API Calls into New JSON

For Step 14, we will be using a Logger app. We will connect all five apps in Steps 13A-E to the Logger app. We will use Logger’s ability to modify the data to make a new JSON. The Log Message will be:

{
  "indicator": #ioc.api,
  "securityLabels": #label.api,
  "associatedGroups": #group.api,
  "associatedIndicators": #ioc.assoc.api,
  "victims": #victim.api,
  "victimAssets": #victim.asset.api
}

The #ioc.api variable is the output variable from Step 5A.

The #label.api variable is the output variable from Step 13C.

The #group.api variable is the output variable from Step 13D.

The #ioc.assoc.api variable is the output variable from Step 13E.

The #victim.api variable is the output variable from Step 13A.

The #victim.asset.api variable is the output variable from Step 13B.

NOTE: Be mindful when creating this to remove the extra space that gets created when you insert in these variables.
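Conceptually, the Logger is just stitching the five parsed results, plus the original indicator data from Step 5A, into one JSON document. A hedged Python sketch of the same assembly, with placeholder values standing in for the Playbook variables:

import json

# Placeholders for the Playbook variables referenced above
ioc_api = {'summary': 'http://bad.example/dropper.exe', 'rating': 3}   # #ioc.api (Step 5A)
label_api = []           # #label.api        (Step 13C)
group_api = []           # #group.api        (Step 13D)
ioc_assoc_api = []       # #ioc.assoc.api    (Step 13E)
victim_api = []          # #victim.api       (Step 13A)
victim_asset_api = []    # #victim.asset.api (Step 13B)

combined = {
    'indicator': ioc_api,
    'securityLabels': label_api,
    'associatedGroups': group_api,
    'associatedIndicators': ioc_assoc_api,
    'victims': victim_api,
    'victimAssets': victim_asset_api,
}
print(json.dumps(combined, indent=2))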

Step 15 – Join Failure/Success

Now that the flow for both options on the If/Else from Step 3 is complete, we can create a new Merge app. The Merge app will be connected to the success of Step 4 and Step 14. We will use the variable merging option in Merge to ensure one variable is outputted regardless of which path the workflow took. The values for the key will be the #logger.content output variable from Step 14 and the output variable created in Step 4.

Step 16 – Component Trigger Output

Now that all variables have been created, we can go back into the Component and set up the output variable for the Component. The value for that output variable will be the merged output variable from Step 15. This will provide the data collected in the Component as an output variable to be passed along to other apps within the larger Playbook.


The post Playbook Fridays: Component IOC All Data Pull appeared first on ThreatConnect | Intelligence-Driven Security Operations.

The Secret to our (Customer) Success


I recently sat down with Jody Caldwell, the Senior Director of Customer Success at ThreatConnect, to pick his brain and understand the specifics of how we help a customer from initial deployment throughout the entire span of their relationship with ThreatConnect.

Will you describe your role at ThreatConnect? What does a day look like for you and what are your responsibilities?

I’m the Senior Director of Customer Success. Day-to-day what I do is manage support, the customer success team, and the deployment engineers. Key drivers for me are ensuring customers get the most value out of ThreatConnect from day one, and that it continues throughout the entire time of their subscription. We’re making sure that deployment and training get executed, and also that we are meeting short term and long term goals laid out by the customer. Whether that’s on the threat intel side or working with their incident response team, and really any team within the SOC environment, the main driver is making sure they’re getting maximum value out of the platform.

Can you discuss how ThreatConnect supports customers, from deployment to continuing throughout the lifetime of the relationship?

We recognized from the outset that with a platform that offers as many capabilities as ThreatConnect does, not all customers were necessarily going to be as technically proficient or immediately understand all the processes that go into integrating the ThreatConnect Platform into the environment that they’re in. So, originally we started out with Customer Success Engineers, or CSEs as we call them here, who were technically proficient in handling the integration side of things, but also understand the process of bringing ThreatConnect into a new environment and helping customers evolve with ThreatConnect.

From there, we grew from a single point CSE after we recognized we needed a deployment engineer. Deployment engineers primarily help with on-premises customers and provide a dedicated resource to help you initially deploy ThreatConnect, but also assist on the upgrade side of things. For every upgrade, you’ll have access to a deployment engineer to assist with getting everything working in your environment. Finally what we added was a Customer Success Manager, or what we call CSMs. This was more of a strategic position that we wanted to put in place to work with the business owners at the customer organization, and also to ensure some of the project management items were being met throughout the phases and life cycle of the customer’s journey. Our goal is to ensure those points are met with specific deliverables and by meeting timelines, but also by working with the more technical or engineering team members from both the customer and from ThreatConnect. They make sure everyone is aware of the goals. Some of the larger companies obviously offer a program manager on their side, so what we thought was better for companies of all sizes is that we offer the same thing from a ThreatConnect perspective to be sure we are driving value consistently.

Can you talk a little more about the CSM / CSE program at ThreatConnect? What are the goals and responsibilities of those individuals, and some examples of day to day interactions or touchpoints that those folks may have with customers?

From day one once a delivery is executed on our end, each customer is assigned a CSM and a CSE. Both of those roles are equally important to ensure value is being derived from ThreatConnect. The CSE is primarily the technical point of contact, and the CSM is the strategic point of contact and from day one starts to recognize what the priorities are for things like getting integrations deployed, for meeting specific use cases that a customer has requested during the pre sales cycle, and for getting the proper trainings scheduled for all the right people. Both of those individuals will be involved when it comes to training the customer.

We do quarterly business reviews that the CSM is in charge of. They typically happen with the business owner at the customer organization to ensure they recognize the efforts that are being made by both the customer and by ThreatConnect, and the partnership that’s being built. During the QBR, priorities and short- and long-term goals are reviewed, and there is also time to plan. Oftentimes this is where customers have the conversation that starts with, “This is the next initiative I want to do with ThreatConnect.” By ‘initiative’ they could mean additional integrations they may be looking to achieve, or the organization may be looking to expand to another division such as the incident response team, security operations team, vulnerability management, or fraud. Once you start wanting to integrate other divisions within your organization into ThreatConnect, your CSM assists with that, including any additional necessary trainings. Ideally, as we go through a couple QBRs, the business owners are getting a high level of engagement with our team and are able to keep track of where the project is moving.

What type of background do you look for when hiring people for your team? How does that translate to being beneficial to ThreatConnect customers?

A bulk of the CSEs on the team today have experience in some facet, whether it’s them working through government and threat intel services, or they’ve actually held IR or SOC roles, so they understand the need for not only a solution like ThreatConnect, but they are also very keen to understand processes and how those processes drive security operations. They’ve lived it. They get the problems and the necessary solutions. That translates really well with the customers and allows us to have candid, two-way conversations with them about successes and challenges they face in their jobs.

What does the mapping of CSEs and CSMs to customers look like? Is that done by specific territories? Additionally, do you have the same team throughout the lifetime of the relationship?

A little bit is based on geography, but we also look at it from a maturity and industry standpoint. What type of subscription have they procured? And who is really the best asset to address that? I’ve got a good team that focuses on Financial Institutions, I’ve got a good team that focuses on Government related organizations, and one for Oil and Gas. Those things may vary a little bit, and it’s also based on the availability and workload of my current team on who will be assigned to what.

For the duration that you’re a ThreatConnect customer, from day one, you will always have a dedicated CSE and a CSM.

I will say one thing about onboarding, and this may go back to the last point: with onboarding there will typically be at least weekly touchpoints to get things implemented and deployed. After about the first 6 weeks, once we get to a solid state where integrations are up and running, the team is trained, and we all feel comfortable with where things stand, those typically scale back to biweekly. As we move on and customers become more self-sufficient, those may change to monthly. That being said, you can set up ad hoc meetings with the CSE and CSM at any point along the way.

That’s a lot of communication and touchpoints, which is great, but why is so much communication and support necessary for this type of solution?

With ThreatConnect, and a lot of other TIP and SOAR-like solutions, we realize there are a lot of functionalities, capabilities, and potential integrations with the Platform. Ensuring we are providing the most up to date and consistent information is important, so having those regular touchpoints and somebody you can rely on to talk to you – having that identified asset – speaks volumes to the commitment that we’re willing to make. You’re not necessarily having to reach out to a support desk and be in a queue before you get an answer to your issue. We look at every customer as a partnership, so as in any partnership, ensuring there’s effective communication back and forth with both parties involved is key to us. We’ve had a lot of success that we’re very proud of with this program.

What are the ways that a customer can get in contact with ThreatConnect?

We put out as many ways that we can think of to ensure that we are providing unlimited access to both the Customer Success and the Support teams. We offer a ThreatConnect Users Public Slack Channel which is for customers only at this time. We do see engagement between customers in that Slack Channel, as well as with other members of the ThreatConnect team – Customer Success, Product Management, etc. For support issues we do have support@threatconnect.com. Our CSEs and CSMs provide at least the office number where you can reach them, and multiple people on my team offer their cell number out to customers as well. I’d say that frequently gets used before, during, and after office hours which I believe shows the level of commitment we’re providing. For customers, we also offer a Private Slack Space. That is something that’s hosted by ThreatConnect. This gives you access to your customer success team members, the support team, and also to members of our Product Management and engineering teams.

Is everything we’ve discussed here included in the standard ThreatConnect subscription? Is this a unique program or do other vendors do the same?

Yes, absolutely. Most everything we’ve talked about here is included in standard terms. Above and beyond, and for an additional cost, we do offer enhanced support which is 24×7. If you need something at 2am, you can call a 1-800 number and one of our support team members will answer and be available to help you right away.

As far as this program being unique, I have spoken to other Customer Success Directors and people in similar roles, and I think with most of the better companies you get typically one person assigned to help you. But, if you really look at that, if you have one person dedicated to you and they have 30 customers, are you really getting the level of support that you need? So, one thing we’ve done with assigning two folks is break down the level of effort between being tactical and technical, and on the other side, being strategic and ensuring those needs are being met as well.

To use an analogy, you’re really fighting a two front war. You have to win over the analysts and technical team at a customer site, but you also have to win over the business owners and strategic assets and make sure they understand the value proposition and see the value that they’re getting from the money that they’re spending. This whole idea came about a few years ago when we realized our team was really good technically and really good with the analysts, but we didn’t have that strategic partnership at the business level. This led to a lot of frustration with some of our users, because they thought everything was going great, but when looking up the chain, say someone new came in as the head of the department, they may cut the budget and we’re not aware of that before it happens. That means we’re not there to help the actual users, analysts, translate the benefits they’re getting from ThreatConnect into business benefits that the higher-ups can understand.

More information on ThreatConnect’s Customer Success Team here.

 

The post The Secret to our (Customer) Success appeared first on ThreatConnect | Intelligence-Driven Security Operations.

Playbook Fridays: Generate Intelligence Reports


 John Locke, a wise man, once said, “No man’s knowledge here can go beyond his experience.”

The same is true with the latest release of ThreatConnect that includes quite a few new features. The feature that has me excited the most is the AppBuilder functionality. The primary reason is that I can see the full potential, when in the right hands, for someone to develop an application to fully extend the Platform to meet their organization’s needs at a lightning-fast pace. The secondary — selfishly — is that learning Python is one of my goals for this year.

I can’t think of a better way to achieve both of these than by combining them and achieving both goals at the same time. With that, here’s the AppBuilder project, “Generate Intelligence Report”:

Along with the Playbook “Generate Report”:

The custom app itself is very simple in regards to inputs as it takes in:

  • A report ID (this can be either a String or a TCEntity; if a TCEntity is used, Owner is ignored)
  • Owner Name
  • URL for your company’s logo (this defaults to the ThreatConnect logo)

The outputs from the app are:

  • An HTML Report (binary type)
  • An HTML Report (string type)

There are 8 required attributes for the report generation to be successful:

  • Source (used for References), within the document this should look like:
    [1] Retrieved from http://www.somewebsite.com/url
    [2] Retrieved from https://www.someotherwebsite.com/randomthing
  • External ID (used for Report serial number)
  • Additional Analysis and Context (used for Analysis)
  • Course of Action Recommendation (used for Mitigation)
  • Report Revision Date
  • Report Release Date
  • Description (used for Executive Summary)
  • Report Type

These attributes can be uploaded to your instance of ThreatConnect (As a System Admin, ⚙ > System Settings > Attribute Types > Upload) or (alternatively as an Org Admin: ⚙ > Org Config > Attribute Types > Upload) and select the attributes.json

Note: If these are added at the Org level only, this app will only work properly at the Org level and not in any communities or sources. If this app is uploaded to the System Level, it will work across all Orgs, Communities and sources within your instance.

For demonstration purposes we will be generating an abbreviated example of this report: https://web.mhanet.com/SQI/Emergency%20Preparedness/FBI%20Flash%2003-25-16.PDF (FBI MC-000070-MW ).

Below is how this would appear in ThreatConnect prior to executing the UserAction Trigger.

(Note the “Report File” box showing that no file exists)

After importing the Playbook and activating on the same page you would click the User Action Trigger titled “Generate Report”:

After clicking ▶ the expected result looks like this:

Then simply refresh the page and you will now have a report available:

Clicking the “👁 View” button you can now see the generated report:

If you click the 🖨 icon you will get a stripped down view of an Intelligence Report:

You can find this project here on GitHub: https://github.com/ThreatConnect-Inc/threatconnect-playbooks/tree/master/apps/TCPB_-_Generate_Intelligence_Report

The link to the Playbook App (.tcx): https://github.com/ThreatConnect-Inc/threatconnect-playbooks/blob/master/apps/TCPB_-_Generate_Intelligence_Report/Generate%20Intelligence%20Report.tcx

The link to the Playbook (.pbx): https://github.com/ThreatConnect-Inc/threatconnect-playbooks/blob/master/apps/TCPB_-_Generate_Intelligence_Report/Generate%20Intelligence%20Report.pbx

The link to the attributes.json: https://github.com/ThreatConnect-Inc/threatconnect-playbooks/blob/master/apps/TCPB_-_Generate_Intelligence_Report/attributes.json

Look out for the post next Friday for how to customize this app to change the disclaimer and contact information.


The post Playbook Fridays: Generate Intelligence Reports appeared first on ThreatConnect | Intelligence-Driven Security Operations.

Best Practices for Writing Playbooks, Part 2


This is Part 2 of the Best Practices for Writing Playbooks in ThreatConnect blog post series. This time, I wanted to get into  the weeds on some best practices for development and testing. If you haven’t already read it, I highly recommend taking a minute and reading Part 1, here.

To Playbook or not to Playbook, that is the question

To effectively leverage Playbooks, it first helps to understand what Playbooks are really meant to do. ThreatConnect’s Playbooks capability was designed to help analysts orchestrate and automate repetitive workflow-based tasks that would otherwise take up time spent performing analysis. Sometimes we see Playbooks looked to as a catch-all solution for any sort of data processing, and this can definitely lead to a lot of misconceptions and potential performance issues.

Below are a few tasks that Playbooks are NOT the ideal solution to handle:

  • Handling mass quantities of big data
  • Acting as a message bus
  • Performing in-depth automated malware sandboxing (basic triage is totally on the table, see below)

To contrast, here are a few things that Playbooks are a GREAT solution for:

  • Phishing email parsing and analysis
  • IOC enrichment
  • Customized incident ticketing and alerting
  • Basic malware triage (pulling strings, metadata, zipping up and sending to an AMA, etc.)

Pump up the volume…carefully

“I’ve seen things you people wouldn’t believe. Phishing Email Triage Playbooks with 57 apps off the coast of Microsoft Outlook. I watched a hundred thousand REST API calls glitter in the dark near the VirusTotal AV pool. All those Playbook executions will be lost in the queue. Like tears in the rain. Time to reboot.”

-Roy Batty (during his SOC Analyst days)

Playbooks can be a very powerful force multiplier, but with that power comes the potential for very high resource utilization if not employed properly. Playbooks that have a highly complex design path, get thrown an overabundance of data to work through, or get executed too many times have the potential to bring even the beefiest of systems to a crawl. Or worse, crash the ThreatConnect instance entirely (been there, done that, got yelled at by DevOps).

One of the most common Playbooks use cases out there is IOC enrichment, which on the surface sounds pretty cut and dry, right? I mean, who wouldn’t want more enrichment and context for their IOCs?! The issue is that a lot of times, I see analysts writing Playbooks that trigger whenever a particular type of IOC is created in their instance. This can cause issues if you’re triggering on IOC creation in a data source that you don’t really have control over. OSINT feeds, for example, can quite often produce  thousands of new IOCs per day. I’ve also seen some premium intelligence vendor feeds produce tens of thousands of new IOCs per day! This means that if you’ve designed your Playbook to trigger on each new IOC being created, you’re easily gonna queue up a few thousand Playbook executions and not even realize it.

Instead, think about ways you can pare down the execution count. Are the new IOCs being created as part of an associated Group such as a new Report, Campaign, or Incident? If so, you should explore triggering off of said Group creation, then taking the associated set of IOCs and enriching those all at the same time. Doing that, you’re talking one Playbook execution handling multiple IOCs instead of a single execution per IOC.

Additionally, think about triggering on conditions such as IOCs or Groups being tagged with a particular keyword. This is a great method because you’re taking data that already has a good chance of being relevant to you, and enriching it even further.

I can promise you that if you just throw 200k executions of a particular Playbook at ThreatConnect and expect them to be completed immediately, you’re setting yourself up for disappointment!

A Variable Has No Name

Variable naming matters. You should be documenting what the variables that you’re creating are, but ideally a user should be able to have a reasonably good understanding of what it is just by the variable name. Keeping a consistent variable naming scheme is highly recommended. Not only does this make for a cleaner Playbook, but also it’s super helpful when you have to debug failed Playbook executions. Below are a few examples of times where good variable naming should come into play:

  • Using the Merge operator to merge two variables from upstream apps, it’s best practice to prepend the new variable with the letter “m” or “merged” (Figure 1).
    • Example: Merging the two potential outcomes of an API call. In a success scenario, you will get a response, and then likely do some sort of parsing out of a value. There is the potential that the API call will fail, in which case you may want to use something like a Set Variable app to denote said failure.

Figure 1

  • When using the Split String app, it’s a good idea to prepend the newly created variable with something like “s” or “split” (e.g. splitting the String variable “#url.values” would return a StringArray named “#s.url.values” or “#split.url.values”). Same goes for Join Array using “j” or “join”.

Keep Calm And Handle Your Errors

Apps fail, it happens. Don’t take it personally. I’m sure you’re a very nice person.

It’s really easy as Playbook developers to fall into the bad habit of “developing for success”. We will just keep building until we get that sweet, sweet green dot indicating that the Playbook works and then call it a day. As weird as this sounds, this is actually a really bad habit! Obviously, developing with the goal of having a Playbook that runs successfully is great, but the trap lies in the act of stopping development when you have a successfully running Playbook. You’re only halfway done. You should be writing your Playbook to handle the eventuality that things are NOT always going to go according to plan.

Keep your blue dots close, but your orange dots closer

Every app has two possible exit conditions denoted by dots on the right side of the app icon itself (blue for success and orange for failure). The orange dot is there for a reason, so don’t be afraid to use it.

Sometimes you may expect an app to fail, in which case there’s more than likely separate logic you’ll want to run. Just don’t forget to merge the two potential paths. One great example of this is merging the two potential outcomes of an API call. In a success scenario, you will get a response, and then likely do some sort of parsing out of a value. There is the potential that the API call will return no results, in which case you may want to use something like a Set Variable app to denote said failure with a “No API Results Found” (Figure 2).

Figure 2

“You know how you get to Carnegie Hall, don’tcha? Practice.” 

I hope everyone is testing their Playbooks thoroughly before turning them on! If not…well, that’s a bold strategy, Cotton. Let’s see if it pays off. Testing should occur not only during the development process, but especially during QA and UAT. If you remember from Part One of this blog series, Playbooks should be treated like production-level automations. The number one rule of delivering quality products is to perform adequate quality control and testing.

For those of you that ARE doing testing, kudos! That being said, let’s go through some tips and tricks to make sure you’re getting the most out of your testing.

“Use the Logs, Luke.”

ThreatConnect has a VERY robust logging mechanism that’s absolutely crucial for proper development and testing. By default, the logging verbosity is minimal for performance reasons (especially when executing hundreds or thousands of Playbooks each with 5+ apps), but the logging level can be raised to DEBUG or TRACE to see everything that’s happening.

!!!WARNING!!!
Please, for the love of all that is holy, DO NOT leave your Playbook on TRACE logging after your testing is done. Depending on the complexity of the Playbook and amount of executions, you can quite easily produce gigabytes of logs and fill up a hard disk. This kills the ThreatConnect. Ask me how I know.
!!!WARNING!!!

One of the best logging features ThreatConnect offers, in my opinion, is the ability to watch the actual values being passed between Playbook apps. This is extremely useful because you may find yourself in a situation where a certain app is failing and you have no idea why. Then, using the variable value inspection feature you can see what’s actually being sent. You can view both the input values of a given App or Component as well as the output values (Figures 3 and 4):

Figure 3

Figure 4

To give you a real world example; I have personally found numerous mistakes in regular expressions that I’ve written because what was actually being parsed out was different than what I (and other downstream apps) was expecting.

  • Verifying that you’re not passing in NULL data into a “Create X” app is an integral part of development and testing.

 “Efficiency is intelligent laziness.”

As technology practitioners, we should always strive to be as efficient as possible. This need is especially prevalent when dealing with any sort of automation at scale because you can quite easily be dealing with hundreds, thousands, or even tens of thousands of Playbook executions. Any efficiency you can squeeze out of your Playbook is going to immediately pay dividends in the form of better execution throughput.

There’s no magic bullet for making the most efficient Playbooks. Learning how to be efficient with Playbooks comes with time and practice. That being said, below are a few tips that can help get you started:

  • Duplication of effort: Find yourself redoing the same steps over and over again? Try building those steps into a Component for easy reuse! Don’t know what a Component is? Check this video out: https://www.youtube.com/watch?v=6axA-farO1I
  • Excessive branching: Pay attention to how much you branch in a single Playbook. There are limits in ThreatConnect that regulate the number of concurrent branches that can be executed at any one time. Any branches over that limit are going to be queued up and won’t execute simultaneously, therefore slowing the overall execution of the Playbook down. Sometimes branching more than 4 times is unavoidable, but keep in mind that you should consolidate branches where you can.
  • Set Operations vs Iterators: Whenever possible use Set based operations instead of Iterators. When dealing with large amounts of data you pay a performance penalty when using Iterators.
    • e.g. Using the Create ThreatConnect Tag app and sending a StringArray of 100 indicators is going to be MUCH more efficient than using the Iterator to loop through each element of the StringArray and adding the tag one at a time (see the sketch after Figure 5 below).
  • Resource Monitoring: Be mindful of system resource utilization with intensive Playbooks. Resource usage can be monitored via the Activity page (Figure 5). Definitely keep an eye on this when doing your testing.

Figure 5
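To make the set-versus-iterator point above concrete, here is a small Python sketch; the add_tag helper is purely hypothetical and simply stands in for one batched app execution versus one execution per indicator:

# Hypothetical helper that tags one or many indicators in a single request;
# the name and signature are illustrative, not a real ThreatConnect SDK call.
def add_tag(indicators, tag):
    print('1 request tagging {} indicator(s) with "{}"'.format(len(indicators), tag))

indicators = ['indicator-{}.example'.format(i) for i in range(100)]

# Set-based: one app execution / one request for the whole StringArray
add_tag(indicators, 'Phishing')

# Iterator-based: the same work spread across 100 separate executions/requests
for ioc in indicators:
    add_tag([ioc], 'Phishing')   # pays the per-call overhead 100 times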

And with that, we come to the end of this blog series. You should now be able to design, create, name, and test high quality Playbooks right?

Good.

Now, get off my lawn!

The post Best Practices for Writing Playbooks, Part 2 appeared first on ThreatConnect | Intelligence-Driven Security Operations.

Playbook Fridays: Generate Intelligence Reports, Part 2


As promised, below is how to customize this app to change the disclaimer and contact information. However, I encourage you to stick around as I dig in for a deeper dive, explaining in detail all of the code within this app. With that said, let’s go!

Replace everything after the = on Lines 32-34 and Lines 36-40 so that instead of this:

It looks like this:

Add in your text for each in between the ‘ ‘ for each of the variables:

Then click the Released button and your changes will be made and live, so long as you choose a patch or minor release. Otherwise, if you do a new major release, existing Playbooks using the older version will not automatically update.

 

The remainder of this post is a line by line breakdown, detailing how the app operates. If you missed Part 1, check it out first before moving on.

Imports:

Let’s begin by looking at lines 3-8; these are our import statements, which tell the Python interpreter “these are all of the libraries that we need for our code”.

The libraries json, base64 and re are built-in Python libraries.

The json library is used to encode/decode JSON to/from python objects that are of the type dictionary.

The re library is used for parsing regex, in this code we are using it to find and replace characters.

The base64 library is used to encode/decode objects, in this code we base64 encode the associated screenshots to embed them into the report.

The jmespath library is used to filter the JSON responses from the API queries.

The ioc_fanger library is used to de-fang indicators that are retrieved. Special shoutout to Floyd Highertower for this awesomeness!

The jinja2 library is used for the HTML templating engine.
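Pulled together, the import block described above would look something like the following; the ordering and the exact form of the jinja2 import are assumptions on my part:

import json
import base64
import re

import jmespath
import ioc_fanger
from jinja2 import Environment, FileSystemLoader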

Setting initial variables:

This block of code focuses on setting initial variables needed for the remainder of the code.

Line 18 we are getting the user’s input for what they have specified for report_id. This field is special as it will accept one of two different inputs: either a string (report id) OR a TCEntity.
Line 19 we are logging at the info level what the playbook is performing at that particular step.
Line 20 we check which object type report_id is. If it is a TCEntity, it will be True. If it is not, the only other option is a string.
Line 21 at the info level, in the logs it will show that we are using the TCEntity, therefore the Owner selected is ignored.
Line 22 at the debug level, we are logging the contents of the item that was configured for the report_id’s input.
Line 23 we are re-assigning the owner input to be the owner specified from the TCEntity (the python JSON library is used here).
Line 24 we are re-assigning report_id to be the id from the TCEntity that was specified (the python JSON library is used here as well).
Line 25 should the item entered via the UI be a String (meaning the result from Line 20 is False), Lines 21-24 are skipped.
Line 26 we are setting the owner variable to what the user has specified in the UI.
Line 27 we are logging at the info level that a TCEntity was not used, and simply log the supplied report_id and owner.

Note:
  • Playbooks offer several different run-levels for logging: ERROR provides the least amount of information, INFO sits two steps above that, and DEBUG/TRACE has the most verbose output possible. With TCEX, debug is equal to TRACE within Playbooks. One convention that you will see repeatedly throughout this walkthrough is what looks like duplicate logging; however, I’ve tried my best to balance the two, with info simply returning where the execution is at for a particular step, and with debug giving back everything that I can to assist the playbook designer and/or app developer in troubleshooting any issues should they arise during execution.
  • If a TCEntity is used and Line 20 evaluates to True, the Lines 25-27 are ignored.
  • If a TCEntity is not used and Line 20 evaluates to False, Lines 21-24 are ignored.
  • Line 22 makes use of String Formatting within Python to insert where you see {} with the value to the right. So for example, if the report ID provided was “1234” then the log would show: TCEntity Input: 1234

Line 28 we are reading in the URL supplied by the user for the header logo when the report is printed.
Lines 29/30 we are logging at the debug level what the Report ID and Owner being used is.
Lines 32-34 we are assigning contact_info to the string contained to the right.
Lines 36-40 we are assigning disclaimer to the string contained to the right.

Note:
  • The ’ \ at the far right of contact_info and disclaimer are there to break up the text so that it is PEP 8 compliant, but they are not 100% required. You could write the entire string without those and it will function correctly. However, it is best practice to not exceed 80 characters in width to be compliant with PEP 8.
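Compressed into a few lines, the report_id/owner handling amounts to the logic below; looks_like_tcentity is a hypothetical stand-in for the app’s actual type check on Line 20, and the sample inputs are invented:

import json

def looks_like_tcentity(value):
    """Hypothetical stand-in for the app's TCEntity type check (Line 20)."""
    try:
        return 'ownerName' in json.loads(value)
    except (TypeError, ValueError):
        return False

raw_report_id = '{"id": "1234", "type": "Report", "ownerName": "Example Org"}'  # sample input
ui_owner = 'Demo Community'                                                      # sample UI owner

if looks_like_tcentity(raw_report_id):
    entity = json.loads(raw_report_id)   # the JSON-library use noted on Lines 23/24
    owner = entity['ownerName']          # the TCEntity's owner overrides the UI selection
    report_id = entity['id']
else:
    report_id = raw_report_id
    owner = ui_owner

print(report_id, owner)                  # 1234 Example Org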

Custom Functions:

This block of code focuses on creating two internal functions used by the app.

Lines 42-47 make up our function for replacing new lines with HTML line breaks.
Line 42 we are declaring the name (_replace_new_line) and the input that we are assigning to a name to be used within our function (report_data).
Line 43 we are logging at the debug level what action we are performing and the input data.
Lines 44/45 we are searching for a new line or carriage return using regex (re library) and are replacing it with <br>.
Line 46 we are logging at the debug level what the output result is after formatting.
Line 47 we are returning the result of the regex and exiting the function.
Lines 49-53 make up our function for using JMESPath to parse the JSON that is passed into it.
Line 49 we are declaring the name (_jmespath) for our function and are assigning the name to be used for the two inputs (jmespath_query and json_data).
Line 50 we are logging at the debug level both the query and the full dataset that was passed in.
Line 51 we are assigning json_data the result of the jmespath query.
Line 52 we are logging at the debug level what the results of the JMESPath query were.
Line 53 we return the result of the jmespath query and exit the function.

Note:
  • I’ve prefixed my functions with _ to denote that they are intended for internal use only; this is by convention only. Meaning that if someone were to take this app.py and reference it somewhere else, it may not work as they have intended for their code.
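Reconstructed from the description above (logging calls omitted, and the exact regex pattern is an assumption), the two helpers look roughly like this:

import re
import jmespath

def _replace_new_line(report_data):
    """Swap carriage returns / new lines for HTML line breaks (Lines 42-47)."""
    return re.sub(r'(\r\n|\r|\n)', '<br>', report_data)

def _jmespath(jmespath_query, json_data):
    """Run a JMESPath query against already-parsed JSON (Lines 49-53)."""
    return jmespath.search(jmespath_query, json_data)

print(_replace_new_line('line one\nline two'))                                     # line one<br>line two
print(_jmespath('data.report[0].name', {'data': {'report': [{'name': 'Demo'}]}}))  # Demo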

Getting Attributes:

This block of code focuses on getting the attributes and reformatting them with a JMESPath query to convert them into a dictionary.

Line 55 we are logging at the info level the API query that we are about to perform.
Lines 56/57 we are assigning get_attributes to the JSON response returned from the API and are passing in several variables.
Line 58 we are logging at the debug level, the response from the API query.
Line 59 we are assigning report_json to the result of a json.loads version of the get_attributes response.
Lines 61-69 we are assigning attributes to the result of our JMESPath query that is ran against report_json’s contents.
Line 70 we are logging at the debug level the result of the JMESPath query.

Note:
  • Line 59 using json.loads we are converting the object from JSON to a Python dictionary.

Line 71 we are logging at the info level the action that we are about to do (checking if the report type was missing).
Line 72 we are checking if the attribute is equal to None (empty, missing, or null).
Line 73 will only be executed if the result from Line 72 is True, in which case the Playbook app will terminate.
Lines 74/75: otherwise, if the result from Line 72 is False (meaning it exists), we will exit the if and continue to Line 76.
Lines 76/77 we are assigning TLP_get to the API query to get the TLP label that is applied to the report.
Line 78 we are assigning TLP to the JMESPath result for the specified JMESPath query to extract out the specific TLP applied, ex: TLP:RED
Line 79 at the debug level we are logging the result of the TLP Attribute from the JMESPath query.
Line 80 we are logging at the info level that we are creating the TLP mapping.
Lines 81/82 we are checking to see if the TLP exists from the report. If it does not, then we are setting the TLP_hex value to None as well. If this is True, lines 83-91 are skipped.
Lines 83-91 are executed if the TLP is not None, or null/empty.
Lines 84-89 are creating a dictionary in Python to map a specific TLP to an HTML hex value.
Line 90 we are logging at the debug level what the tlp_lookup_result looks like.
Line 91 we are assigning TLP_hex to the value of what it equals from the tlp_lookup_table
Line 92 we are logging at the debug level what the TLP returned as well as the TLP_hex is.
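The TLP handling on Lines 80-92 boils down to a small lookup table; the hex values below are assumptions for illustration, so check the app itself for the exact colors used:

TLP = 'TLP:RED'   # value extracted from the report's security label (may be None)

if TLP is None:
    TLP_hex = None            # Lines 81/82: no label, no color
else:
    tlp_lookup_table = {      # Lines 84-89: TLP name -> HTML hex color (values assumed)
        'TLP:RED': '#FF2B2B',
        'TLP:AMBER': '#FFC000',
        'TLP:GREEN': '#33FF00',
        'TLP:WHITE': '#FFFFFF',
    }
    TLP_hex = tlp_lookup_table[TLP]   # Line 91

print(TLP, TLP_hex)   # TLP:RED #FF2B2B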

Getting Associated Indicators:

This block of code gets all of the associated indicators and then if any are returned – defangs them for the report.

Line 93 we are logging at the info level the action we are about to perform. In this case that we are getting indicators associated with the report name.
Lines 94/95 we are assigning associated_indicators with the result of the API query that we are performing.
Line 96 we are logging at the debug level the response that we received.
Line 97 we are logging at the info level that we are going to extract the summaries (the name of each indicator).
Line 98 we are assigning the variable indicators to the JMESPath query to extract the names.
Lines 99/100 we are logging at the debug level, the JMESPath query and the result of the extraction.
Line 101 we are logging at the info level, that we are extracting the indicator type.
Line 102 we are assigning ioc_type to the result of the JMESPath query to extract the indicator types.
Lines 103/104 we are logging at the debug level the JMESPath query and the types returned.
Line 105 we are logging at the info level that we are extracting the indicator rating.
Line 106 we are assigning get_number_results to the JMESPath query to get the item from the JSON response.
Line 107 we are creating an ioc_rating list.
Lines 108-111 we are populating the ioc_rating list.
Line 108 we are stating that for each item in the number of results, do what is contained in Lines 109-111.
Line 109 we are appending to the list a JMESPath query to get the indicator rating OR, if the rating was null, assigning it to 0 for each item.
Lines 110/111 we are logging at the debug level the JMESPath query and result.
Line 112 we are logging at the info level the action we are about to do, in this case, converting a float to an integer.
Line 113 we are using list comprehension to convert all floats to integer.
Line 114 we are logging at the debug level the ratings after conversion.
Line 115 we are logging at the info level that we are checking if any indicators were returned.
Line 116 we are checking if the first item in the indicators array (indicators[0]) is None.
Line 117 if the result from Line 116 is True, then we log at the info level that no indicators were returned.
Line 118 handles the case where the result from Line 116 is False.
Line 119 we are updating the indicators array, using list comprehension and the ioc_fanger library to defang each indicator.

Note:
  • The API returns the ratings as a float (number with a decimal) and we need it to be an integer (whole number).

Please see this for list comprehension: https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions
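In isolation, the rating conversion and the defanging read roughly as follows; the sample values are mine, not the app’s:

import ioc_fanger

ioc_rating = [3.0, 0, 2.0]                             # ratings as returned (floats, 0 for nulls)
ioc_rating = [int(rating) for rating in ioc_rating]    # Line 113: floats -> whole numbers

indicators = ['http://bad.example/dropper.exe', 'evil.example']
indicators = [ioc_fanger.defang(ioc) for ioc in indicators]   # Line 119: defang for safe display
print(ioc_rating, indicators)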

Formatting Attributes:

This block of code uses the internal function created above to remove any \r or \n characters for HTML representation.

Lines 120-123 use the custom function to replace any \r (carriage return) or \n (new line) with <br> so that the spacing is recognized as HTML by the templating engine (jinja2).

Getting Associated Documents:

This block of code gets the associated documents (screenshots) and converts them into Base64 to add them to the finalized report.

Line 124 we are logging at the info level that we are getting associated documents.
Lines 125-127 we are assigning to the associated_documents variable the results of the API query.
Line 128 at the debug level, we are logging the JSON result of the query.
Line 129 we are assigning document_names to the JMESPath query to extract the document names.
Line 130 at the debug level we are logging the document names returned on Line 129.
Line 131 we are assigning the document_ids variable to the JMESPath query to get the document ids.
Line 132 is an internal comment.
Line 133 we are reversing the document_ids list (see comment on line 132).
Line 134 we are reversing the document contents (see comment on line 132).
Line 135 we are creating an empty list called “b64image”.
Line 136 we are logging at the info level that we are getting document contents.
Line 137 we are iterating over each ID in the document_ids list.
Lines 138-139 we are assigning the temporary variable “t” to the base64-encoded result of the API query that returns the binary document contents.
Line 140 for each of those base64-encoded items from Lines 138/139, we are appending them to the b64image list.
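The document loop can be sketched as below; download_document is a hypothetical stand-in for the API call that returns the raw document contents, since the walkthrough does not spell out that endpoint:

import base64

def download_document(document_id):
    """Hypothetical stand-in for the API call returning a document's binary contents."""
    return b'\x89PNG fake screenshot bytes for document %d' % document_id

document_ids = [43, 42]   # illustrative ids, already reversed to line up with document_names
b64image = []
for doc_id in document_ids:                                  # Line 137
    contents = download_document(doc_id)                     # Lines 138/139: binary body
    b64image.append(base64.b64encode(contents).decode())     # Line 140: embed-ready base64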

Filling in the template:

This block of code gets the template and begins to populate the template.

Line 141 we are creating a variable file_loader to get items in the current directory
Line 142 we are creating an env variable to hold the localized Environment for jinja2.
Line 143 we are logging at the info level that we are loading the template.
Line 144 we are assigning the template variable to the actual file to use.
Line 145 we are assigning the variable output to functions from the jinja2 library. We are also loading in the TLP variable for the template.
Lines 145-160 we are assigning the inputs for the items in the template. See the Template section under Other Notes of Interest below. For Lines 152/153, please see the ZIP section under Other Notes of Interest below.
Line 161 we are logging at the info level that the template rendering is complete.
Line 162 we are logging at the debug level the actual contents of the rendering.

Notes:

Please see here for what zip in the above means: https://docs.python.org/3.3/library/functions.html#zip
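As a rough reconstruction of Lines 141-160 (the template inputs are abbreviated here, and it assumes a template.html file sits alongside the app), the jinja2 flow looks like this:

from jinja2 import Environment, FileSystemLoader

file_loader = FileSystemLoader('.')           # Line 141: load files from the current directory
env = Environment(loader=file_loader)         # Line 142
template = env.get_template('template.html')  # Line 144

output = template.render(                     # Lines 145-160 (abbreviated)
    TLP='TLP:AMBER',
    TLP_hex='#FFC000',
    iocs=zip(['evil.example'], ['Host'], [3]),             # Lines 152/153: parallel lists zipped
    docs=zip(['<base64 image data>'], ['screenshot.png']),
)
print(output[:200])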

Returning the result:

This block of code returns the output from the code.

 

Line 163 we are creating the variable self.html and assigning it to output.
Line 171 we are writing the output for self.html as a Binary with the name report_html.
Line 172 we are writing the output for self.html as a String with the name report_text.

Note:
  • Since our code is inside of a function (def run(self)) in order for us to use it outside of the function we have to prepend self.
  • For my outputs, they are referencing the same variable but I am changing the type. This is for interoperability, allowing the Playbook to be used for multiple use cases.

Other notes of interest:

requirements.txt:

Here is the requirements.txt file. When the app is built, pip will install these to the lib_XXX folder correlating to the version of Python that you have installed. For example, Python 3.6.3 would produce a folder, lib_3.6.3 with these libraries downloaded into it.

Template

Within App Builder you will notice the template.html. The variables on Lines 145-160, on the left-hand side of the operator, refer to items in the template. Ex:
{{ TLP }} or {{ TLP_hex }}.

ZIP:

For the items on Lines 152 and 153:

iocs=zip(indicators, ioc_type, ioc_rating),
docs=zip(b64image, document_names),

As we are performing a zip operation we need to iterate over them. Note the example below for the indicator inputs. This is slightly complicated to explain, so I encourage you to read Python’s zip documentation, but essentially I am combining two lists into tuples so that I can iterate over them in parallel.

As an example given the two lists below:

list1 = ["item1", "item2", "item3"]
list2 = ["item4", "item5", "item6"]

After a zip operation:
combined = zip(list1, list2)

This will be a list of 3 tuples:
[ ("item1", "item4"), ("item2", "item5"), ("item3", "item6") ].

An iteration can then be performed in parallel over the objects, so we could do something like:
for thing1, thing2 in combined:
    print('First item: {}, Second item: {}'.format(thing1, thing2))

Would print out:
First item: item1, Second item: item4
First item: item2, Second item: item5
First item: item3, Second item: item6

Logging:

When logging actions, it’s important to have a fine balance between the various levels. Bearing in mind that the default “info” should not be very verbose and should simply log the steps being performed. Whereas “debug” should give back the most (and relevant) information for troubleshooting purposes to help someone either debug their input or assist the app developer in troubleshooting an error in their logic.



The post Playbook Fridays: Generate Intelligence Reports, Part 2 appeared first on ThreatConnect | Intelligence-Driven Security Operations.

Top 6 Reasons Why You Should Apply Intelligence to Automation and Orchestration


Let’s agree that many security products today perform some level of security automation and orchestration. However, they may only incorporate intelligence to trigger certain workflows, or to be used as enrichment for some context. Most likely, they do not enable adaptation for future runs of their Playbooks or the creation of new intelligence as one of the outputs of the workflow itself. Some platforms allow for aggregation of external data feeds, creation of internal intelligence, and even have many connectors to defensive products for automation of detection and prevention with operational threat intelligence. This is a great first step.

But, organizations need a solution that focuses on getting the most value out of that intelligence by enabling cross-team coordination and orchestrating their workflows.  When you have one platform that includes threat intelligence, orchestration, automation, and response together, you create a holistic system of insight.

Here are 6 reasons why applying threat intel to automation and orchestration is key:

  • Alert, block, and quarantine based on relevant threat intel

Even for tasks like alerting and blocking, having relevant threat intelligence is important. Along with the ability to automate detection and prevention tasks, having multi-sourced, validated threat intel can help ensure that you are alerting and blocking on the right things.

  • Increase your accuracy, confidence, and precision

Situational awareness and historical context is key to decision making. Working directly from threat intelligence allows you to work quicker and prevent attacks before they happen. The more you can automate up front, the more proactive you can be. By eliminating false positives and using validated intelligence you are increasing the accuracy of the actions taken. This accuracy leads to confidence and improves speed and precision.

  • Understand context and improve over time

When you automate tasks based on threat intelligence thresholds such as indicator scores, and memorialize all of that information, you can strategically look at your processes to determine how to improve.

  • Orchestrate with more confidence

Applying in-platform analytical processes to external threat intelligence allows for more accurate and less false-positive-prone alerting, blocking, and quarantine actions. It’s not as simple as being able to ingest lots of threat intel feeds or take action from a shared Indicator of Compromise. It’s making sense of them at scale with adaptable scoring and contextualization to know what action to take, if any, based off of it.

  • Internal intelligence creation from security operations and response

Your own team and data is the best source of intelligence you will ever have. Capture the insights, artifacts, and sightings from operations and response engagements that can be immediately refined into intelligence in the form of new IOCs, adversary tactics and techniques, and knowledge of gaps in your security.

  • Adjust processes automatically as information and context changes

Intelligence-driven orchestration is data first, while security orchestration is action first. When your orchestration capabilities are fully adaptable to new threat capabilities, tactics, techniques, and infrastructure as it’s available from structured threat intelligence, your processes automatically adjust as the threat landscape changes.

 

The post Top 6 Reasons Why You Should Apply Intelligence to Automation and Orchestration appeared first on ThreatConnect | Intelligence-Driven Security Operations.

Playbook Fridays: Query Hashes via Email Submission


We were asked by a customer  to extend the analysis functionality of ThreatConnect to other SOC personnel that didn’t have direct access to the Platform. So we did. This Playbook creates a new process in which non-ThreatConnect users can get on-the-fly analysis and context into potential hash IOCs they’ve encountered, and simplifies the process of using the Threat Intelligence team as a conduit for another team to do analysis.

With this Playbook:

  • No extra user accounts are needed
  • Automated hash analysis occurs
  • The Threat Intelligence team’s value across the enterprise is extended

And it solves:

  • The lack of user accounts to give access to non-essential SOC staff
  • The need to check every source manually, as it uses the API to iterate over any potential owner within the local ThreatConnect instance

The Playbook is triggered when an email with a list of file hashes in the body is sent to an inbox. The mailbox trigger outputs the body of the email to a regex extract step, which pulls all of the included file hashes out and stores them within an array output variable. That array is passed through the main iterator, where the majority of the Playbook logic takes place. The larger concept here is the use of an iterator operator within another iterator. This is because there is a list of file hashes submitted for analysis, and each individual hash could have its own list of owners within the Platform; thus creating a potential array-within-an-array situation.

The main iterator will perform the following steps on each file hash within the array extracted from the email: First, an API call is made to the /owners endpoint to get a list of owners within the ThreatConnect instance that contain the singular hash as an IOC. Once that list is compiled, a JMESPath query will extract all of the owners and pass that as an array to a secondary iterator. This iterator is simpler in nature, as all it is doing is looping through the list of owners and substituting each one into separate API calls for the hash value, but now using the ‘?owner=’ query parameter to gather all relevant information on the hash from each source that owns it within the Platform. The secondary iterator is then closed when this is finished, and the result is an analysis listing for each owner for the single hash value that is an output array from the iterator. This array goes through a quick JSON formatting step, and then to an if/else statement. If the data contains a failure message (meaning no results were found in the previous steps) then it takes the ‘false’ path out of the if/else statement and simply logs that no details were found for that hash. If there is no failure message, it will then take the ‘true’ path out of the if/else to a component.
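To see the nested-iterator idea outside of the Playbook designer, here is a rough Python sketch; tc_get is a hypothetical wrapper standing in for the ThreatConnect API Playbook app (authentication omitted), and only the endpoints described above are used:

import jmespath
import requests

TC_API = 'https://app.threatconnect.com/api'   # illustrative base URL

def tc_get(path, params=None):
    """Hypothetical wrapper around the ThreatConnect API app (HMAC auth omitted)."""
    return requests.get(TC_API + path, params=params).json()

hashes = ['44d88612fea8a8f36de82e1278abb02f']   # illustrative MD5 parsed from the email body

results = []
for file_hash in hashes:                                               # outer iterator
    owners_resp = tc_get('/v2/indicators/files/{}/owners'.format(file_hash))
    owner_names = jmespath.search('data.owner[].name', owners_resp) or []
    for owner in owner_names:                                          # inner iterator
        results.append(tc_get('/v2/indicators/files/{}'.format(file_hash),
                              params={'owner': owner}))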

This component is essentially a series of if/else steps that are checking for the IOC types that we define within its input configuration. Once the respective if/else statement hits a true statement, it will exit to its respective JMESPath query and extract the appropriate data relative to the indicator type. In this case for file hashes, there are subsequent checks to determine what type of hash it is. This is then returned to the component trigger, and outputted as a variable for use downstream of the component in the larger workflow.

In this Playbook’s current configuration this step isn’t strictly needed: since we are solely dealing with file hashes, we could have hard-coded the hash-related JMESPath query instead. But having it does offer the flexibility in the future to leverage other IOC types in this workflow without having to drastically change or add to the logic.

This final result is passed back to the original iterator, thus closing the loop on this first pass on our list of hashes where the result is stored at index 0 of the iterator output, and the entire process repeats again for the second hash in our original email list.

Once this array of results is completed, the main iterator will output the results to a series of formatting steps, and then finally out to a Send Email Playbook app that will send an email containing the results of what the customer’s ThreatConnect instance knows, back to the original sender.

How to Set Up this Playbook:

  • Mailbox Trigger: Nothing special with this. The mailbox name can be changed to something other than the randomly generated name upon creation.
  • Parse Hashes: This is a regex extract application. This outputs arrays of MD5, SHA1, and SHA256 hash values from the body of the email.
  • Union MD5 and SHA1: This is an array operations app utilizing the union function. This is compiling a single array between the MD5 and SHA1 hashes.
  • Union Built Array with SHA256: Identical to the previous step, only this is performing a union between the newly created array and the SHA256 values to produce a single array of hashes.
  • Deduplicate: This is also an array operations step, but it utilizes the unique function. The idea is to remove any duplicates from the array so the Playbook doesn’t process the same data more than once, improving speed and efficiency.
  • Iterator Hashes: This iterator operator has the deduplicated string array as the input, and will loop through each hash value.
  • Get Owners List: This step is using the ThreatConnect API app to make a direct call (GET) to the /v2/indicators/files/{hashVariable}/owners endpoint to get a list of all the owners in the Platform that include the specific hash within their source.
  • Extract Owners Name: This JMESPath app takes the owners list generated in the previous step and isolates just the owner names. ‘Data.owner[].name’ is the expression I am using for this.
  • Iterate Over Owners: With the newly compiled owners list, this iterator takes that array and makes another API call for the same hash as before, but this time with the owner defined as well. This looks like /v2/indicators/files/{hashVariable}?owner={ownerVariable}. The owner can either be hardcoded into the URL as shown here, or set up within the API app as a key-value pair with owner as the key and {ownerVariable} as the value. The outputs from this step go either straight to a merge on success, or to a logger on failure. The logger is there for error checking later on if something were to go wrong. The merge simply recombines the outputs from the logger and the API app into one output to close the loop on the owner iterator.
  • Change Formatting: This encapsulates the iterator result in square brackets if more than one owner is found for any of the given hash values.
  • Check for IOC IF/ELSE: This takes the formatted results from the owner iterator and checks whether they contain a message from the API that an IOC was not found. If they do, it exits out the failure path to log that it found no data for that specific hash. If that API error message isn’t found, the success path is taken to the next step of the Playbook.
  • Extract Results JMES Component: This component is a series of IF/ELSE statements that match the IOC type defined in its configuration. Once it matches, it will redirect the input data to the appropriate JMESPath app that will strip out the relevant data we want to return as the output of the component.
  • Merge Results: This is another simple merge to redirect the output of both the component and the no IOC found logger to one output that is sent back to the first iterator, closing the loop.
  • To string: The output of the iterator by design is a stringArray, but for the next few steps I needed to work with the data as a string data type. This step is doing just that, an array to string conversion.
  • Format JSON (both)/Flatten JSON: These are just steps to clean up the data and reformat it into valid JSON. The iterator wraps each string output it adds to the overall array in double quotes; these steps remove those quotes and then flatten the overall object with a JMESPath ‘[]’ expression.
  • Back to string: The send email app accepts string type data as an input for the body of the email. Much like before, we are simply converting the array into a string so it can be inserted into the response email.
  • Send Results Email: This is the final step of the Playbook, as we are sure you are eagerly awaiting the results of your submission. The results are inserted into the body of the email, and the ‘To’ line is simply pulled from the initial trigger utilizing the trg.mbox.from output variable.
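
As promised above, here is a minimal Python sketch of what the Parse Hashes, Union, and Deduplicate steps amount to. The regex patterns are simply the standard 32-, 40-, and 64-character hex forms of MD5, SHA1, and SHA256; they are illustrative, not copied from the Playbook’s configuration.

import re

MD5_RE    = re.compile(r"\b[a-fA-F0-9]{32}\b")
SHA1_RE   = re.compile(r"\b[a-fA-F0-9]{40}\b")
SHA256_RE = re.compile(r"\b[a-fA-F0-9]{64}\b")

def parse_and_dedupe(email_body):
    """Parse Hashes, the two Union steps, and Deduplicate, rolled into one function."""
    md5    = MD5_RE.findall(email_body)     # Parse Hashes: MD5 array
    sha1   = SHA1_RE.findall(email_body)    # Parse Hashes: SHA1 array
    sha256 = SHA256_RE.findall(email_body)  # Parse Hashes: SHA256 array
    # Union MD5 and SHA1, then union with SHA256, then unique() so no hash is processed twice
    return list(dict.fromkeys(md5 + sha1 + sha256))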

See this Playbook on Github.

The post Playbook Fridays: Query Hashes via Email Submission appeared first on ThreatConnect | Intelligence-Driven Security Operations.


Playbook Fridays: Query Jira for Ticket Information

As someone in Customer Success for ThreatConnect, I am constantly asked to push the limits of our creativity for a customer. The Playbook below is the result of such a request. So without further ado, I present Get all available information from JIRA!

The prerequisites that you will need for this Playbook:

  • URL to JIRA Instance
  • Credentials for JIRA
  • A custom attribute “JIRA Ticket ID” (case sensitive)

These can be configured in the following steps in the Playbook:

Set JIRA Base URL
With your base URL, including the protocol:port designation and without a trailing slash:

Encode JIRA Credentials
With the username/password keypair to authenticate to JIRA:
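
Under the hood, this encoding is presumably the standard HTTP Basic scheme that JIRA accepts. A minimal Python sketch of the equivalent encoding, with placeholder credentials:

import base64

# Placeholder values; substitute the JIRA username and password (or API token) you use.
username, password = "jira-user", "jira-password-or-api-token"
encoded = base64.b64encode(f"{username}:{password}".encode()).decode()
auth_header = {"Authorization": f"Basic {encoded}"}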

What makes it work?

The Playbook starts by retrieving the attribute and validating that it exists on the group or indicator. If it does not, the Playbook will exit and inform the user that the attribute is missing:

If it does exist, the Playbook flows down to the next step, where we query JIRA’s API for all available information it can give for the provided ticket ID:

This JSON response is then passed through two JMESPath apps. The first one extracts the description, any comments, attachment URLs, and attachment filenames. Note that we extract some of this as a String and some as a StringArray for condition handling in follow-on steps.

The second JMESPath filter extracts the Issue Type and Priority contained in the JSON response.
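
A rough Python sketch of what this lookup and the two JMESPath passes boil down to. The issue endpoint is Jira’s standard REST API; the field paths in the expressions below are assumptions based on Jira’s typical issue JSON and may differ from the expressions actually configured in the Playbook.

import requests          # pip install requests jmespath
import jmespath

JIRA_BASE = "https://jira.example.com"   # your base URL, no trailing slash
TICKET_ID = "PROJ-123"                   # value of the 'JIRA Ticket ID' attribute

issue = requests.get(
    f"{JIRA_BASE}/rest/api/2/issue/{TICKET_ID}",
    auth=("jira-user", "jira-password-or-api-token"),
).json()

# First pass: description, comments, attachment URLs, and attachment filenames
details = jmespath.search(
    "{description: fields.description,"
    " comments: fields.comment.comments[].body,"
    " attachment_urls: fields.attachment[].content,"
    " attachment_names: fields.attachment[].filename}",
    issue,
)

# Second pass: Issue Type and Priority
meta = jmespath.search(
    "{issue_type: fields.issuetype.name, priority: fields.priority.name}",
    issue,
)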

The next steps in the Playbook format the extracted items from the JSON into a more human-readable, displayable version.

The next step in the Playbook updates the document in ThreatConnect, if it exists. If it doesn’t exist, we create it and upload the document contents, which are generated to hold the JIRA information extracted previously.

Then we check if there are any attachments to the ticket. If there are attachments we iterate over them, then save and associate them accordingly. If there are no attachments, we create and send the response back to the user initiating the User Action Trigger.

Within the merge that connects back to the trigger, we gather the different messages that can be returned to the user, having handled any errors that could occur.

How do I use this?

After performing the configuration steps above and enabling the Playbook, you will need to ensure the required attribute is on a group or indicator.

The Attribute JIRA Ticket ID
Added to any indicator or group object with a JIRA ticket ID:

After those items are in place, activating the Playbook from the User Action card on any indicator or group object will show you this:

If there are any errors when executing the Playbook, they are returned to you. On a successful execution you will receive a message like the one below:

Clicking the blue “here” link, you will see the newly created group object:

 

For a comparative view, this is how this looks in JIRA:

 

See this Playbook on Github, here.

The post Playbook Fridays: Query Jira for Ticket Information appeared first on ThreatConnect | Intelligence-Driven Security Operations.

ThreatConnect and ServiceNow: More Integrations for Better Context

We’re strengthening our partnership with ServiceNow® by offering more robust integrations with the ServiceNow Orchestration and ServiceNow Security Operations products, as well as launching a new Playbook App for managing table records across all ServiceNow products.

With this update, we’ve added three types of integrations to the ServiceNow and ThreatConnect Platforms, each with its own specific capabilities. Let’s dive into each.

ThreatConnect Activity Pack for ServiceNow Orchestration

The ThreatConnect Activity Pack for ServiceNow Orchestration provides a set of activities that can be leveraged from ServiceNow Orchestration workflows to interact bidirectionally with ThreatConnect’s API and Playbooks. These activities provide a broad set of functionality that can be used for automating processes associated with security operations and incident response. Think of them as predetermined automation actions that allow ServiceNow analysts like you to interact with ThreatConnect in a variety of ways:

  • Create ThreatConnect Incident – This activity creates an Incident in ThreatConnect
  • Create ThreatConnect Indicator – This activity creates an Indicator in ThreatConnect
  • Get ThreatConnect Incident – This activity retrieves  an Incident from ThreatConnect
  • Get ThreatConnect Indicator – This activity retrieves an Indicator from ThreatConnect
  • Filter ThreatConnect Indicators – This activity retrieves multiple Indicators from ThreatConnect
  • ThreatConnect API Client – This activity provides general-purpose access to the ThreatConnect API
  • Run ThreatConnect Playbook – This activity triggers a ThreatConnect Playbook with an HttpLink Trigger (see the sketch after this list)
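
That last activity is worth a quick illustration: a Playbook with an HTTP Link (HttpLink) Trigger exposes a URL, and anything that can make an HTTP request can start it. The Python sketch below is a generic example of such a call; the trigger URL and payload are placeholders, and the real URL comes from the HTTP Trigger’s configuration in your ThreatConnect instance.

import requests

# Placeholder: copy the real URL from the Playbook's HTTP Trigger configuration in ThreatConnect.
TRIGGER_URL = "https://app.threatconnect.com/api/playbook/<trigger-id>"

# Any JSON body your Playbook expects; this one is purely illustrative.
payload = {"observable": "198.51.100.7", "source": "ServiceNow incident INC0010001"}

response = requests.post(TRIGGER_URL, json=payload, timeout=30)
print(response.status_code, response.text)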

Now, you are able to look up intelligence in ThreatConnect and use the results in ServiceNow Orchestration workflows. You can also create ThreatConnect tasks and incidents from ServiceNow and share ServiceNow Incidents and Observables back to ThreatConnect to generate new intelligence which enables a feedback loop.

For those of you focused on security operations or incident response-related tasks, you are now able to trigger a Playbook in ThreatConnect from a ServiceNow workflow. You can then use the results to make further decisions in ServiceNow or update the incident for review, ultimately increasing confidence in automated decisions by leveraging ThreatConnect’s intelligence collection as part of containment and response actions.

ThreatConnect App for ServiceNow Security Operations

The ThreatConnect App for ServiceNow Security Operations provides Threat Lookup and Observable Enrichment capabilities against ThreatConnect intelligence and analytics collections. These features give those of you working inside ServiceNow the information you need to get relevant and actionable insights from intelligence sources within the ThreatConnect Platform. The app allows you to enrich observables, providing detailed context from ThreatConnect in an enrichment table. It also allows you to perform Threat Lookups and automatically produces malicious or unknown ratings.

This means that you can operationalize intelligence from the ThreatConnect Platform in other parts of the security organization, putting relevant and actionable insights in front of the teams that need them.

ServiceNow Playbook App for ThreatConnect

In addition to the added capabilities that can be leveraged from the ServiceNow Platform’s UI, we’ve also updated the ServiceNow Playbook App for ThreatConnect. Straight from ThreatConnect, you’re provided with a set of actions to work with ServiceNow table records and attachments. These actions provide the key building blocks for automating processes between ThreatConnect and ServiceNow.

The following actions are available:

  • List Table Records
  • Get Table Records
  • Create Table Records
  • Update Table Records
  • Add Attachment

This means that you can now manage any ServiceNow table record — built-in or custom — as part of a ThreatConnect Playbook. Security processes vary greatly from organization to organization and even team to team. It was important to match the flexibility of ServiceNow with our Playbook app so that you can automate nearly any process that interacts with ServiceNow from within ThreatConnect.
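
For context, the ServiceNow side of these actions is the platform’s standard Table API. The Playbook app handles the calls for you, but a rough Python sketch of what ‘Create Table Records’ and ‘List Table Records’ amount to against the built-in incident table looks like this (instance URL and credentials are placeholders):

import requests

INSTANCE = "https://your-instance.service-now.com"  # placeholder instance URL
AUTH = ("api-user", "api-password")                  # placeholder credentials
HEADERS = {"Accept": "application/json"}

# Roughly what 'Create Table Records' does against the incident table
created = requests.post(
    f"{INSTANCE}/api/now/table/incident",
    auth=AUTH,
    headers=HEADERS,
    json={"short_description": "Malicious indicator observed",
          "description": "Opened from a ThreatConnect Playbook"},
).json()

# Roughly what 'List Table Records' does, with a query filter and a record limit
open_incidents = requests.get(
    f"{INSTANCE}/api/now/table/incident",
    auth=AUTH,
    headers=HEADERS,
    params={"sysparm_query": "active=true", "sysparm_limit": 10},
).json()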

If you have any questions, please reach out to us at sales@threatconnect.com. Current customers can contact their Customer Success Engineer for any questions.

The post ThreatConnect and ServiceNow: More Integrations for Better Context appeared first on ThreatConnect | Intelligence-Driven Security Operations.

How to Choose the Right SOAR Platform: A Checklist

The great thing about  SOAR is that, if deployed correctly, it gives your organization the platform required to implement an intelligence-driven security strategy.

You can think of SOAR, as it has been defined and implemented so far, as operating very much like an enabler, or a hub for decision making. It provides a centralized location that accepts numerous inputs which drive specific outputs. If you do not have a system that uses existing internal and external intelligence on threats and your operations as part of everything it orchestrates, you have an automation machine that can support various “if this, then that” scenarios, but it is not necessarily improving efficiency or efficacy beyond what you experienced after its initial implementation. With the addition of an engine that interprets and creates intelligence, the SOAR platform becomes smarter, which makes the organization faster and stronger.

Intelligence Empowers Smarter Operations: Start a Feedback Loop between Intel & Ops 

Intelligence does not exist for its own sake; intelligence, including threat intelligence, exists specifically to inform decisions for security operations, tactics, and strategy. This relationship is not a one-way street. Intelligence and operations, as functions of the security team, should be cyclical and symbiotic. Intelligence informs decisions for operations, resulting in actions being taken based on those decisions. Those actions (such as cleanups, further investigations, or other mitigations) beget data and information in the form of artifacts such as lists of targeted or affected assets, identified malware, network-based IOCs, newly observed attack patterns, etc. These artifacts can be refined into intelligence that can then inform decisions for future operations.

While some organizations do not have a formally defined intelligence function on their team, the concept of using what you know about the threat space to inform your operations exists in all organizations. Regardless of whether an explicitly titled threat intelligence analyst is on staff, the relationship between intelligence and operations is fundamental and present in all security teams. Threat intelligence may be the catalyst for taking an action or starting a process, and it informs how the process and decision making are carried out throughout. As threat intelligence drives your orchestrated actions, the results of those actions can be used to create or enhance existing threat intelligence. Thus, a feedback loop is created — threat intelligence drives orchestration, and orchestration enhances threat intelligence.

But implementing an intelligence-driven defense isn’t without its challenges. Fragmentation of information, people, processes, and technologies is a significant hurdle. Our objective has always been to help security teams get the most value out of that intelligence by enabling cross-team coordination and workflows. While industry analysts are still defining the architectural concept of SOAR, we see a need for a platform that brings it all together to automate, orchestrate, and break down fragmentation for seamless coordination: a centralized platform that enables the refinement of relevant data from cases, response engagements, threat investigations, shared communities, and external vendors into intelligence suitable for decision making by any analyst, and that also leverages that newly created intelligence to inform decisions across the security team.

To that end, we have created a checklist for a complete SOAR platform. Look for a solution that provides the following:

Management and Sharing of Intelligence

  • The ability to heavily leverage a REST API and represent data in a way that can be shared among multiple teams and tools
  • Relationships with Information Sharing and Analysis Centers (ISACs) to aid in collaboration with your respective industry.
  • Secure flexibility around who can see what information, for example using the TLP protocol
  • STIX/TAXII support
  • Integrations with multiple OSINT and paid intelligence providers

Team Collaboration

  • Role-based access control
  • Team-based notifications and tasking
  • Commenting and markdown support
  • Escalation management
  • Integrations with communication tools like Slack

Document & Artifact Storage

  • Document indexing, for example using ElasticSearch
  • Extensible storage to meet growing needs
  • The ability to link documents and artifacts to relevant intelligence or other information

Investigative Case Management

Cybersecurity investigations are complex with huge amounts of digital evidence. Look for features that reduce complexity, foster collaboration, and speed up investigatory timelines. Specific capabilities a SOAR solution should include are:

  • Reconstructed timelines of actions taken and decisions made, to provide up-to-date progress reports and to support post-incident reviews
  • The ability to assign tasks to specific team members or groups of users to allow collaboration and management
  • Consistent, repeatable investigations through the use of customizable workflow templates
  • Reduced false positives and dwell time by integrating threat intelligence directly into case reports
  • The ability to quickly link cases and investigations to historical or other ongoing cases

Automated Phishing Handling

Eliminate the burden of manually analyzing and remediating the growing volume of phishing emails with feature capabilities that support the following:

  • The automated collection of potentially malicious emails from end users
  • Automated analysis of email with available threat intelligence
  • Integrations with an email system, sandbox, and ticketing system that provide a process for finding all emails with suspicious links or attachments, so that any copy sent to other users can be quarantined while awaiting a decision to delete it or allow access

Feedback Loop

Leverage the feedback loop to enable faster, more accurate actions as you anticipate and thwart a threat actor’s next move. Focus on solutions that:

  • Reduce false positives and determine level of risk and prioritization based on historical data
  • Help you derive meaningful threat intelligence from operational data

Robust Integration Capabilities

Scale integrations across security tools and processes with solutions that offer:

  • Flexible playbooks to support integration workflows
  • REST API to allow flexibility in integration development
  • Mature, bi-directional SIEM integrations to help reduce false positives
  • Playbook apps can be built without the need for custom development or code

Automation and Orchestration

  • No limits on executions
  • Ability to prioritize mission-critical playbooks
  • Additional servers can be rolled out to meet demand for resiliency and performance
  • Performance can be easily monitored from a central location

Collective Analytics Layer

  • “Ground truth” telemetry from other analysts around the globe is provided anonymously and automatically

Dashboards

There’s no such thing as a one-size-fits-all dashboard, so ensure that the solution allows you to:

  • Create multiple, custom dashboards tailored to different teams
  • Query the data using a variety of parameters to ensure the right information is bubbled up
  • Use your own, custom metrics to measure the key performance indicators you care about

Data Model

  • Flexible data model that supports bespoke indicators
  • Admins can create their own attributes to ensure the data they care about is properly modelled and memorialized
  • Associations can be formed between different objects, for example between threat actors and their capabilities

For more information on SOAR Platforms, see our e-book, SOAR Platforms: Everything you need to know about Security Orchestration, Automation, and Response, here.

The post How to Choose the Right SOAR Platform: A Checklist appeared first on ThreatConnect | Intelligence-Driven Security Operations.

Playbook Fridays: Leveraging ThreatConnect to Enrich Greynoise IOCs

Querying both GreyNoise’s free and paid APIs to retrieve insights on IOCs for alert triaging and filtering purposes

Analysts get inundated with alerts from all sorts of activity, both targeted and part of widespread activity such as mass port scanning, crawlers, and search engines. Our customers wanted a way to use GreyNoise data from within the ThreatConnect Platform to filter out Address IOCs and alerts that are not specifically targeting them.

We tend to see users validate and filter alerts through GreyNoise before performing any subsequent investigation as a way to lower the number of alerts they need to respond to. So we built a Component that can be triggered any way a user wishes. Components are independent pieces that can fit into any larger Playbook workflow. The GreyNoise Components were designed to help analysts leverage GreyNoise data in their workflow so that they can filter out low-priority alerts and not waste time, money, and effort pursuing activity that doesn’t have a high impact on their organization.

The enrichment Components take an Address IOC and query GreyNoise’s API to retrieve any available information GreyNoise may have on the given indicator. These Components were designed to allow a user to query for relevant IP address IOCs as well as retrieve enrichment information that can then be turned into insights to help “filter out the noise”. The GNQL Component allows the user to craft a query using GreyNoise’s built-in query language to retrieve matching addresses.
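
Under the hood, the Components make calls along these lines. The Python sketch below is illustrative: the API key, IP address, and GNQL query are placeholders, and the endpoint paths reflect GreyNoise’s v2 API as documented at the time of writing, so check the current GreyNoise documentation before relying on them.

import requests

GN_API = "https://api.greynoise.io"
HEADERS = {"key": "<your-greynoise-api-key>", "Accept": "application/json"}  # placeholder key

# Enrichment-style lookup: full context on a single Address IOC
ip = "198.51.100.7"  # example address pulled from an alert
context = requests.get(f"{GN_API}/v2/noise/context/{ip}", headers=HEADERS).json()

# GNQL-style lookup: every address GreyNoise has seen that matches a query
gnql = 'classification:malicious tags:"Mirai"'
matches = requests.get(
    f"{GN_API}/v2/experimental/gnql",
    params={"query": gnql},
    headers=HEADERS,
).json()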

Some really useful examples of utilizing GreyNoise data in Playbooks:

  • If benign: decrease the severity of the alert or completely disregard it
  • If the IOC has not been seen in GreyNoise, increase severity and enrich with other sources
  • If seen in GreyNoise AND unknown or malicious AND hitting the perimeter: reduce severity (or, just do not alert)
  • If seen in GreyNoise AND there is egress communication (your network is talking TO an IP in GreyNoise) this is very bad
  • If successful login from GreyNoise malicious IP: this is VERY bad

Below is a screenshot of a sample workflow using the GreyNoise Enterprise Enrichment Component:


The post Playbook Fridays: Leveraging ThreatConnect to Enrich Greynoise IOCs appeared first on ThreatConnect | Intelligence-Driven Security Operations.

CAL 2.3 Brings New Data Sources and Analytics Improvements to ThreatConnect

It’s the best holiday gift we could ask for: CAL 2.3 is live! 

For those of you not familiar, ThreatConnect’s CAL™ (Collective Analytics Layer) provides a way to learn how many times potential threats were identified across all participating Platform instances. CAL anonymously leverages the thousands of analysts worldwide who use ThreatConnect. Taking it one step further, we have built in our own analytics engine powered by that collective insight to answer questions our users have about threat intelligence, sometimes before they even know to ask them.

CAL has its own release cycle that operates on a separate timeline from the core ThreatConnect Platform. This allows for more frequent releases with seamless deployments. CAL 2.3 includes some massive new datasets and cool tradecraft that we’re going to help our customers leverage.

New Data Sources

Making sense of a massive amount of data is a big job, and one we’re happy to do for you. Every night we pull the master ASN listings and their respective CIDR mappings. This new capability adds a massive amount of data: CAL now has a staggering understanding of over 67,000 ASNs and the 700,000 CIDR ranges mapped to them! This robust graph allows CAL to leverage its existing analytics to help identify interesting (and uninteresting) neighborhoods on the internet.

Analytics Improvements

  • Report Cards

CAL enables data to be presented to users in a way that is easily readable and clear to understand. One way we do this is via CAL Report Cards. Report Cards visualize information related to feed performance so you can better understand a feed before you start leveraging the information in it to make decisions related to your security team. As you can see below, the graphic provided is clear and easy to understand. So much so, that oftentimes we may forget about the powerful analytics that power it.

We’ve improved some of the math behind Report Cards, allowing the bars on the bullet charts to evolve naturally with our dataset.  The red, green, and yellow target zones will now dynamically reflect the way our collection of feeds behaves in our users’ ecosystems.  This should simplify user decision-making when selecting and understanding OSINT feeds.

  • Nameserver Analysis

We’ve worked with a member of the ThreatConnect Research Team, Kyle Ehmke, to replicate some of his nameserver analysis techniques in CAL! This happens at scale, every day. We’ve already identified 2 million nameservers (300+ of which are a nexus of malicious activity). By wrapping some of Kyle’s analysis techniques into CAL, we’ve been able to pivot off of those 300+ nameservers to identify over 1,000 novel suspicious hosts that aren’t being reported anywhere else! Stay tuned as ThreatConnect’s Research Team and CAL work together to discover more malicious activity and help you make the right decisions with it!

Our team is already hard at work on the next CAL release. Stay tuned!

The post CAL 2.3 Brings New Data Sources and Analytics Improvements to ThreatConnect appeared first on ThreatConnect | Intelligence-Driven Security Operations.
