Why Barding, Why Now

Cybersecurity has reached a stable equilibrium over the past couple of decades. Attackers are constrained by human limits on speed, scale, and skill, and can only profitably target a subset of organizations. Defenders have built programs calibrated to those constraints. Security budgets, staffing models, and detection and response timelines all assume attackers that move at human speed, operate at human scale, and are bottlenecked by the availability of human skill.

Those assumptions are collapsing. And defenders are adapting to this new reality more slowly than attackers are shaping it.

I’m launching Barding Defense to equip defenders with some of the tools and insights they need to protect our internet- and software-dependent society as the old assumptions collapse.

What’s changing

AI is making offensive cyber operations cheaper, faster, and more capable. I’ve written at length about what this means: the “fracking” of lower-value targets that weren’t previously worth attacking, worm-like intrusions that combine machine speed with human-like reasoning, and a compression of the timelines that defenders rely on to detect and respond.

The market has noticed the first two shifts, in scale and speed, and is responding with a flood of breach-and-attack-simulation and adversarial exposure validation (BAS/AEV) products. These tools are valuable: they validate that technical controls can detect and disrupt attack techniques, and the best of them do some of this adaptively. But their design priorities of coverage and speed create a pattern of activity that looks like a smash-and-grab attacker, not a stealthy one: broad sweeps across many techniques, triggering alerts and preventions along the way.

That footprint doesn’t emulate stealth, and it interacts with your detection stack differently than a real intrusion would. Correlation rules and behavioral analytics respond to a concentrated burst of test activity in ways they wouldn’t against an attacker who spreads those same actions across weeks. The detections that fire during an AEV test aren’t necessarily the ones that would matter during a real intrusion. And scoping and containment aren’t tested at all: there’s no coherent compromise to scope, no persistent adversary to find and evict. AEV validates controls. Red teams validate programs. That’s not a gap better engineering closes. It’s structural.

These tools are also largely missing the third shift: skill. Stealth has historically been a high-investment capability. Quietly compromising an organization, maintaining persistent access, and moving methodically toward high-value objectives has been the domain of state-sponsored operators, because the tradecraft was expensive and slow to develop. Most financially motivated attackers didn’t bother; smash-and-grab tactics like ransomware provided a better ROI.

That cost structure is changing. As stealth gets cheap, the consequences get dramatically worse.

What’s coming

Here’s what we predict the next generation of intrusions will look like: an AI-augmented attacker stealthily compromises dozens of organizations across an industry over weeks or months. No alarms. No indicators shared among victims, because there’s nothing to share yet. Then, in coordination, they detonate ransomware across all of them simultaneously, victimizing an entire industry or region rather than a single company, and demanding a correspondingly massive payout.

Every component exists today. What’s been missing is the ability to operate stealthily at scale without an army of skilled human operators. AI changes that math, and the current state of AI in offensive operations suggests we’re closer to this reality than most people are comfortable admitting. The gating factor isn’t model capability; it’s how quickly attackers build effective harnesses around models that are already capable enough. Stealth no longer requires willing and uniquely capable human operators; it is becoming available to anyone with compute and data.

And while such a large and impactful operation would draw a heightened law enforcement response, I remember when ransoming an entire organization seemed far-fetched, and when our industry was convinced that attacks on critical infrastructure resulting in loss of life were a red line that would trigger a massive government response. Both of those supposed impossibilities have since become commonplace.

This is a hard shift to comprehend and an unpleasant one to consider. But this threat is drawing serious investment and attention, and for good reason: it is real and urgent.

Why red teams

Most security programs are not built for what’s coming. And the hardest part of preparing is that the threat is abstract until it isn’t, until the day an organization discovers it’s been quietly compromised for months, or an entire sector gets hit at once.

Red teams exist to make that threat concrete before attackers do. Through real attacks against real defenses, they answer the question automated tools can’t: how well would we actually handle an intrusion? Not “do our controls fire alerts for known techniques?”; even chained together, individual control tests don’t answer the question that matters. Red teams test whether your program (your people, processes, and tools working together) can detect, scope, and contain an adaptive attacker.

That question has always mattered. It matters far more now. And fortunately, it is becoming more measurable and more accessible to more organizations.

Traditional red teaming can’t scale to meet this moment. Operations take weeks. Documentation eats operator time. Validating that fixes actually work requires expensive re-engagements. The talent pool is tiny. The organizations that most need this insight are the ones least able to afford it.

What we’re building

Barding Defense is building Brumby, a platform that scales red teams by augmenting human operators with AI.

For operators: Brumby is built to support how you already work. It integrates with your existing C2 frameworks (Cobalt Strike, Mythic, Sliver, custom tooling) without replacing them or limiting you to a proprietary implant. Your tradecraft, your tools, your workflows. Brumby wraps around them, automatically capturing structured logs of everything that happens during an operation so you can focus on the operation itself instead of documenting it. The platform is designed to augment operators: it makes them faster and more efficient while enhancing their stealth, OPSEC, and safety, including protecting the target environment from unintended modification or damage during operations.

For security leaders: Red team reports built on Brumby don’t just contextualize individual findings. They give leadership empirical, benchmarked intelligence on how well your security program is performing, tied directly to proof from operations. Because every operation run through the platform generates structured data, Barding can show you how your detection and response capabilities compare against similar organizations. You see where your program held up, where it broke down, and how you compare. You can go to your peers or board with evidence, not opinions, and invest only in the highest-leverage improvements to your program.

For the ecosystem: Every operation run through Brumby generates intelligence that makes the platform smarter, improving recommendations for operators and producing richer benchmarks for defenders. The more teams use the platform, the more valuable it becomes for everyone on it. We take this seriously: we protect the privacy of operators and their customers, and our models are built on abstracted patterns, not raw customer data.

Why me, and why now

I spent the last four years building and leading Target’s red team, and before that I built and led a Department of Defense red team through NSA certification. I’ve seen the same pattern across those environments and every other team I’ve talked with: red teams deliver disproportionate value (and are the only way to safely and accurately answer the question “how well would we handle an intrusion?”), but can’t scale to meet demand. And what red teams provide has never been more necessary.

I’m building Barding because I believe one of our best defenses right now is for red teams to clearly and safely demonstrate the risk, while we still have time to prepare and before attackers force the realization. If defenders can see what AI-augmented intrusions actually look like against their own environments, they have a chance to get ahead of it. If they can’t, we’ll all find out the hard way.

If you’re a red team operator, a CISO thinking about how to prepare for what’s coming, or anyone else working on this problem, I’d love to talk. Reach out at contact @ bardingdefense.com.

— Jackson Reed, Founder, Barding Defense