Three New Ways to Use HEARTH
What Can I Hunt?, Coverage Map, and Context Graph
HEARTH now has 160+ community-curated hunting hypotheses structured around the PEAK threat hunting framework: Flames for hypothesis-driven hunts, Embers for baselining and exploration, and Alchemy for model-assisted work. That’s a lot of ground to cover, and the flat list view we shipped with was fine for browsing but not for answering the questions analysts actually ask at their desks: What can I run with the logs I already have? Where are my coverage holes? Why does this hunt matter this week?
This release adds three new tabs that each answer one of those questions. None of them change the underlying hunt library — hunts still live as markdown in the repo and flow into hunts-data.json on build. The tabs are different lenses on the same data, plus a context graph layer that ties hunts back to the actors and campaigns driving them.
Here’s what shipped.
What Can I Hunt?
The problem. You have an EDR, Zeek, Okta, and Microsoft 365 logs. You do not have a cloud trail from every CSP, you do not have full packet capture, and you are not going to read 130 markdown files to figure out which hunts are runnable tonight. The default question for any new hunter onboarding to an environment is “given my telemetry, what should I work on first?” — and until now HEARTH made you answer that by hand.
How it works. The left sidebar lists data source categories pulled from a curated datasource-mapping.json — EDR, Network, Identity, Cloud, Email, and so on. Check the boxes for what you actually have. The right pane filters in real time to only hunts whose required data sources you can satisfy, then ranks them with a HuntRanker score that blends three signals:
Prevalence — how many active threat campaigns are currently leveraging the underlying techniques, shown as 🔥 hot, 🌡️ warm, or ❄️ cold.
Actor count — how many tracked threat actors are known to use the technique.
Active campaigns — the live campaign count touching the technique right now.
The five highest-ranked cards float to the top as “highest-impact given current threat activity.” If you have a data source but the techniques under it have zero HEARTH hypotheses, a coverage gap alert fires with a CTA to submit one — this is how the library grows in the places it’s needed. Every card has a “View hypothesis” link that opens the rendered markdown straight from GitHub, so you can go from “what should I hunt?” to reading the full hypothesis, data requirements, and detection logic in one click.
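The filter-and-rank pass described above can be sketched in a few lines. This is an illustrative approximation, not HEARTH’s actual schema or weights: the field names (`data_sources`, `prevalence`, `actor_count`, `active_campaigns`) and the blend coefficients are assumptions.

```python
# Sketch of the "What Can I Hunt?" pass: filter to hunts whose required
# data sources you can satisfy, then rank by a blended HuntRanker-style
# score. Shapes and weights here are illustrative assumptions.
PREVALENCE_SCORE = {"hot": 3, "warm": 2, "cold": 1}

def rank_hunts(hunts, available_sources):
    # Keep only hunts whose required data sources are a subset of
    # what the analyst actually has.
    runnable = [
        h for h in hunts
        if set(h["data_sources"]) <= set(available_sources)
    ]

    def score(h):
        # Blend the three signals: prevalence, actor count, active campaigns.
        return (
            PREVALENCE_SCORE[h["prevalence"]] * 10
            + h["actor_count"] * 2
            + h["active_campaigns"] * 3
        )

    return sorted(runnable, key=score, reverse=True)

hunts = [
    {"id": "H002", "data_sources": ["EDR"], "prevalence": "hot",
     "actor_count": 12, "active_campaigns": 4},
    {"id": "B014", "data_sources": ["Cloud"], "prevalence": "warm",
     "actor_count": 3, "active_campaigns": 1},
    {"id": "H031", "data_sources": ["EDR", "Network"], "prevalence": "warm",
     "actor_count": 5, "active_campaigns": 2},
]

top = rank_hunts(hunts, available_sources=["EDR", "Network", "Identity"])
# B014 drops out (no Cloud telemetry); H002 outranks H031 on prevalence.
```

The subset check is what makes the “tell us your telemetry” pitch work: a hunt only surfaces when every data source it lists is one you checked.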
Who it’s for. SOC analysts and detection engineers onboarding to a new environment, hunt team leads building a sprint backlog, and anyone who wants to stop triaging a 130-row spreadsheet. The pitch is simple: tell us what telemetry you have, and we’ll tell you what’s worth hunting right now.
Coverage Map
The problem. Leadership asks where your hunting program covers ATT&CK and where it doesn’t. You want a single view that shows hunts mapped to techniques, color-coded by hunt type, with gaps called out — not a spreadsheet you have to re-render every quarter.
How it works. The Coverage Map is an interactive node graph that plots hunts against ATT&CK techniques. Nodes are color-coded by type so you can read the map at a glance:
Orange — ATT&CK technique
Purple — Flame hunt (hypothesis-driven)
Blue — Ember hunt (baselining/exploration)
Amber — Alchemy hunt (model-assisted)
Green — Data source
Red (glowing) — Coverage gap
Tactic filter buttons across the top let you narrow to a single kill-chain phase — Credential Access, Persistence, Exfiltration, whatever you’re scoping. Click any node and a sidebar slides out with the full details: linked techniques, required data sources, hunt IDs, description.
The coverage gap nodes are the interesting ones. They surface techniques that public reporting has tied to active campaigns but that HEARTH does not yet have a hypothesis for. That’s your contribution backlog, sorted by relevance instead of by whim.
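The gap nodes fall out of a simple set difference: techniques that active campaigns are using, minus techniques any existing hunt already covers. Here is a minimal sketch of that derivation; the field names are assumptions, not the site’s actual build pipeline.

```python
# Illustrative sketch of deriving coverage-gap nodes: techniques seen in
# active campaigns minus techniques any HEARTH hunt already lands on.
def find_coverage_gaps(campaigns, hunts):
    active_techniques = {
        t for c in campaigns if c["active"] for t in c["techniques"]
    }
    covered = {t for h in hunts for t in h["techniques"]}
    return sorted(active_techniques - covered)

campaigns = [
    {"name": "CampaignA", "active": True,
     "techniques": ["T1071.001", "T1059.001"]},
    {"name": "CampaignB", "active": False,
     "techniques": ["T1003.001"]},
]
hunts = [{"id": "H002", "techniques": ["T1071.001"]}]

gaps = find_coverage_gaps(campaigns, hunts)
# T1059.001 is actively used but has no hypothesis yet;
# T1003.001 only appears in an inactive campaign, so it isn't a gap.
```

Filtering on active campaigns is what makes the backlog “sorted by relevance instead of by whim”: a technique nobody is currently running doesn’t glow red.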
Who it’s for. Detection engineering leads building a coverage strategy, program leads pitching hunting maturity to stakeholders, and contributors looking for the highest-value gap to fill instead of writing yet another Mimikatz hunt.
Context Graph
The problem. A hunt hypothesis on its own tells you what to look for, not why it matters this week. To prioritize, you need the full chain: which actor ran which campaign, which techniques did that campaign use, and which HEARTH hunts land on those techniques. That’s four entity types and a lot of edges, and it does not fit in a flat list.
How it works. The Context Graph is a force-directed graph with four node types and real edges between them:
Threat actors (red)
Campaigns (amber)
ATT&CK techniques (orange)
HEARTH hunts (Flames, Embers, Alchemy in their respective colors)
Edges reflect real relationships derived from public threat intel: actor X ran campaign Y, which used T1071.001, which HEARTH hunt H002 can detect. Hover any node for a tooltip with the metadata that matters — campaign dates, actor aliases, technique ID, hunt description. Click to expand the full sidebar.
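The four-layer chain can be modeled as adjacency lists and walked from any actor down to the hunts that cover its techniques. A minimal sketch, with made-up entity names except T1071.001 and H002, which come from the example above:

```python
# Minimal sketch of the context graph's four layers as adjacency lists.
# Entity names are illustrative; T1071.001 -> H002 mirrors the example
# chain in the text.
actor_campaigns = {"ActorX": ["CampaignY"]}
campaign_techniques = {"CampaignY": ["T1071.001", "T1566.001"]}
technique_hunts = {"T1071.001": ["H002"], "T1566.001": []}

def hunts_for_actor(actor):
    """Follow actor -> campaign -> technique -> hunt edges."""
    hunts = set()
    for campaign in actor_campaigns.get(actor, []):
        for technique in campaign_techniques.get(campaign, []):
            hunts.update(technique_hunts.get(technique, []))
    return sorted(hunts)

# ActorX's campaign uses two techniques; only T1071.001 has a hunt today,
# so the walk also exposes T1566.001 as a contribution candidate.
```

The same walk run in reverse (hunt back to actors) is what lets a card answer “why does this hunt matter this week?”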
The graph is kept up to date by enrichment scripts that pull data from public threat intelligence sources. You’re not looking at a snapshot someone committed last quarter; you’re looking at a live picture of what’s being reported on.
Who it’s for. Threat intel analysts who need to connect reports to action, hunt program leads justifying prioritization to stakeholders, and researchers exploring what a given actor has actually been doing lately.
Putting it together
These tabs are designed to chain. A realistic workflow looks like this:
You read an advisory about an actor getting louder — pick your favorite. You jump into the Context Graph, find the actor node, and expand out to the campaigns and techniques they’ve been using. Two of those techniques look new and relevant to your environment.
From there you switch to the Coverage Map and filter by the tactics those techniques belong to. Three are covered by existing hunts; one is a red gap node. You note the gap as a contribution candidate and move on.
Finally you open 🎯 What Can I Hunt?, confirm your data source boxes are checked, and the three covered hunts sort into your top 5 based on current prevalence and actor count. You click through to the markdown, paste the detection logic into your platform of choice, and you’re running something meaningful by lunch.
That’s the whole point. Context Graph tells you what matters, Coverage Map tells you where you stand, and What Can I Hunt? tells you what to run. Same library, three different questions, one workflow.
Try it
All three tabs are live now at hearth.thorcollective.com. If you find a coverage gap worth filling, the contribution workflow is linked directly from the gap alerts — submit a hypothesis and it’ll flow through the normal PEAK review into the next build.
Feedback, bug reports, and new hunts all welcome. HEARTH is only as useful as the community makes it, and these tabs are meant to make that contribution loop tighter.