
The Security Industry Does Not Have a Compute Problem. It Has a Software Efficiency Problem.

2026-03-28 · Trevor Skinner

The industry is demanding more infrastructure. I think the software should earn its footprint first.

There is a conversation happening right now across this country about data centers, power, AI infrastructure, and the future of compute.

Most people are looking at the explosion in demand and assuming the answer is obvious: build more. More facilities. More racks. More power draw. More cooling. More capital. More land. More steel. More concrete. More always-on infrastructure.

I think that framing is wrong.

I do not think the biggest problem is that the industry lacks compute. I think the bigger problem is that too much of the software running on top of that compute is inefficient, bloated, badly architected, or dependent on brute-force infrastructure to compensate for weak design.

That is one of the reasons I built Prometheus.

I did not want to build another security platform that only works if it is backed by an enormous infrastructure footprint. I did not want to build something that treats scale like a synonym for waste. And I did not want to accept the industry’s quiet assumption that protecting large numbers of systems must automatically require giant, always-expanding compute estates.

I wanted to see what would happen if the software got better first.

Not bigger first. Better first.

That distinction matters more than most people realize.

The Question I Keep Asking

Every time I see another discussion about the need for more compute, I keep coming back to the same question:

How much of that new infrastructure is actually necessary — and how much of it is being demanded by inefficient software?

That question matters in cybersecurity even more than it does in most industries, because security platforms do not just consume resources once. They consume them continuously. They sit in the path of telemetry, events, detections, classifications, enrichment, response logic, dashboards, retention, and reporting. If the architecture is inefficient, that waste compounds at scale.

A small inefficiency per endpoint does not stay small for long.

A few extra watts here. A little extra memory there. A few seconds of needless delay. A pipeline that needs oversized hardware to survive mediocre software decisions. At small scale, people shrug. At national scale, that becomes a strategic problem.

That is what drove my thinking. I did not want Prometheus to become another example of software that demands a bigger and bigger machine just to preserve the illusion of capability.

I wanted to see whether I could build something more disciplined.

What I Actually Measured

I recently benchmarked the live running Prometheus stack — real measurements, not a thought experiment — and the results reinforced why I have been thinking this way.

  • 44.3 events per second end-to-end on a single node
  • 383 ms median detection latency on 10-event batches, 836 ms p95
  • 98.4% detection accuracy on attack patterns
  • 99.63% ML ensemble accuracy on the held-out test set

That alone was encouraging. But the part that really got my attention was the efficiency side.

Under benchmark load, the measured GPU power barely moved: 31.75 W idle versus 31.99 W under full pipeline load — a delta of only 0.24 W. VRAM usage sat at about 1.67 GB of 16 GB. The benchmark concluded that the platform is currently CPU-bound, not GPU-bound.

In other words, the bottleneck is not giant model inference. It is CPU-side detection logic and database writes.

That is a huge deal.

It means the expensive part of the stack is not where many people would assume it is. The platform’s machine learning layer is not the thing forcing massive infrastructure expansion. The core AI side of Prometheus is lightweight enough that it is not dragging the rest of the architecture into a bloated footprint.
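For readers who want to sanity-check numbers like these themselves, the readings above can be collected with nvidia-smi's standard query flags (`--query-gpu=power.draw,memory.used --format=csv,noheader,nounits`). The sketch below is illustrative, not the benchmark harness itself: the `parse_sample` helper is my own, and the two sample lines stand in for live polling, mirroring the figures quoted above.

```python
# Minimal sketch: compare GPU power draw idle vs. under load, using the
# CSV output format of:
#   nvidia-smi --query-gpu=power.draw,memory.used --format=csv,noheader,nounits
# The sample strings below stand in for live polled readings.

def parse_sample(line: str) -> tuple[float, float]:
    """Return (power_watts, vram_mib) from one nvidia-smi CSV sample line."""
    power, mem = (field.strip() for field in line.split(","))
    return float(power), float(mem)

idle_w, _ = parse_sample("31.75, 1710")      # sampled while the pipeline is idle
load_w, vram_mib = parse_sample("31.99, 1710")  # sampled under full benchmark load

delta_w = load_w - idle_w
print(f"GPU power delta under load: {delta_w:.2f} W")
print(f"VRAM in use: {vram_mib / 1024:.2f} GiB of 16 GiB")
```

If the delta stays near zero while throughput is at its peak, that is strong evidence the GPU is not the bottleneck.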

The benchmark's measured baseline put total system power at about 182 W under load, with approximately 1,340 endpoints per node at current modeled capacity, or about 0.14 W per endpoint at capacity.

I want to pause on that number, because this is where a lot of people miss the real story.

Why 0.14 Watts Per Endpoint Is a Big Deal

If you are only looking at one endpoint, 0.14 watts sounds tiny. Almost forgettable.

But infrastructure is not a single-endpoint problem. It is a multiplication problem.

If your software is inefficient by even a small amount per endpoint, that inefficiency gets multiplied across every protected system, every tenant, every region, every rack, every expansion cycle, every power bill, every cooling requirement, every redundancy layer, every procurement conversation, and every future scaling decision.

At small scale, people can afford to ignore that. At large scale, they cannot.

When I say Prometheus benchmarked at around 0.14 W per endpoint at capacity, what I am really saying is that the platform is showing signs of being able to protect at scale without dragging a giant energy penalty behind it.
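For transparency, the figure is nothing exotic: it is just the measured node power divided by the modeled endpoint capacity. A two-line check:

```python
# Back-of-envelope check of the per-endpoint figure quoted above:
# ~182 W total system power per node, ~1,340 modeled endpoints per node.
NODE_POWER_W = 182.0        # measured baseline under load
ENDPOINTS_PER_NODE = 1340   # modeled capacity

watts_per_endpoint = NODE_POWER_W / ENDPOINTS_PER_NODE
print(f"{watts_per_endpoint:.2f} W per endpoint")
```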

That changes the conversation. It means efficiency is no longer just a nice engineering bonus. It becomes strategic. It becomes architectural. It becomes economic. It becomes political. It becomes national infrastructure policy — whether people want to admit that or not.

Because if the next generation of security systems can protect more with less, that changes what kinds of compute buildout are actually justified.

What the Scaling Model Says

After benchmarking the live stack, I projected the measured baseline into larger-scale scenarios:

Endpoints      Nodes     Power Draw    Est. Monthly Cost
100,000        ~75       ~13.7 kW      ~$30,000
500,000        ~375      ~68.3 kW      ~$150,000
1,000,000      ~750      ~136.5 kW     ~$300,000
5,000,000      ~3,750    ~682.5 kW     ~$1.5M
10,000,000     ~7,500    ~1.37 MW      ~$3M
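Those rows follow mechanically from the single-node baseline. A minimal projection sketch, assuming the same 182 W per node and 1,340 endpoints per node; note that exact ceiling division lands a few nodes below the table's figures, which round up to leave a little headroom:

```python
# Project the measured single-node baseline into larger-scale scenarios.
# Exact ceilings (e.g. 747 nodes for 1M endpoints) sit slightly below the
# table's rounded-up node counts; the table adds headroom.
import math

NODE_POWER_W = 182.0        # measured baseline per node under load
ENDPOINTS_PER_NODE = 1340   # modeled capacity per node

def project(endpoints: int) -> tuple[int, float]:
    """Return (nodes_required, total_kw) for a given endpoint count."""
    nodes = math.ceil(endpoints / ENDPOINTS_PER_NODE)
    return nodes, nodes * NODE_POWER_W / 1000.0

for n in (100_000, 500_000, 1_000_000, 5_000_000, 10_000_000):
    nodes, kw = project(n)
    print(f"{n:>10,} endpoints -> ~{nodes:,} nodes, ~{kw:,.1f} kW")
```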

To be clear: those larger figures are modeled extrapolations from measured baselines, not direct live proof of 10 million endpoints today. I am not pretending otherwise.

But even as modeled outputs, they tell an important story. At those numbers, Prometheus does not look like a platform that inherently requires some absurd hyperscale footprint just to be viable. It looks like a platform whose efficiency profile could matter a lot if the architecture is scaled intelligently.

I am not anti-scale. I want scale.

I want millions of protected systems. I want Prometheus to become a serious force in this industry. I want it to help protect critical infrastructure, businesses, service providers, and eventually much more than that.

But I do not believe scale should mean waste. I believe scale should mean readiness, efficiency, and discipline.

I Am Not Building for Bigger. I Am Building for Better.

There is a bad habit in technology where people confuse expansion with progress.

Buy more hardware. Light up more racks. Consume more power. Treat infrastructure growth itself as proof that something important is happening.

Sometimes it is. Sometimes it is just a sign that the software layer never became efficient enough to deserve the hardware beneath it.

That is the part I refuse to normalize.

My goal with Prometheus has never been to build a security platform that only works if it is constantly surrounded by excess. My goal is to build a platform that can justify its footprint. A platform that earns its scale. A platform that can grow because the software is efficient — not because the inefficiency is being hidden behind ever-larger procurement.

That is also why my infrastructure philosophy is deliberate. I believe in readiness. I believe in capacity planning. I believe in being able to scale quickly. But I do not believe in wasting operational power just to create the appearance of size.

There is a difference between having capacity prepared and burning resources with no purpose. The distinction matters.

Why This Matters Beyond Prometheus

This is not just a startup opinion piece.

If cybersecurity, AI, and infrastructure continue down a path where every new generation of software assumes it deserves exponentially more compute, then the national response will always be more construction, more energy draw, more operational complexity, and more public tension around whether the footprint is worth it.

But if the software gets dramatically more efficient, the conversation changes.

The country does not just gain cheaper software. It gains optionality.

  • The ability to protect more systems with less expansion
  • The ability to deploy meaningful defensive capability without assuming every answer must come from another giant facility
  • The ability to think about resilience and national digital defense in a way that is not chained to maximum waste by default

That is a big deal. And yes, I think that has implications far beyond one company or one product.

If Prometheus keeps going in this direction, I think it has the potential to help change how people think about cyber defense at scale. Not just by detecting threats well, but by proving that large-scale protection does not have to be architected like a resource bonfire.

The Real Benchmark Story

A lot of people will look at the benchmark and focus on the top-line figures:

  • 44.3 events per second
  • 383 ms median latency
  • 98.4% attack-pattern detection
  • 99.63% ML accuracy
  • Under 0.15 W per endpoint

Those matter. But to me, the deeper story is not just “look at these numbers.”

The deeper story is what the numbers imply:

  • The software is doing meaningful work without dragging huge model overhead behind it
  • The AI layer is light enough that the GPU is barely being exercised under full benchmark load
  • The current bottlenecks are practical engineering bottlenecks — not fantasy-scale constraints
  • The architecture still has room for major gains through async pipeline improvements, database optimizations, and CPU-side work before needing to brute-force its way forward

That is the kind of profile I wanted. Not a platform that needs enormous infrastructure to justify its existence. A platform that earns the right to scale because it is already disciplined at smaller scale.

What I Think the Industry Is Getting Wrong

I think too many people are asking:

How do we build enough compute for the future?

I think the better question is:

How do we make software efficient enough that the future does not require nearly as much compute as we currently assume?

Those are not the same question. One leads to dependency. The other leads to leverage. One assumes the expansion is inevitable. The other challenges whether the expansion is being demanded by reality — or by poor engineering.

That is where I stand. I am not against ambitious infrastructure. I am against lazy architecture hiding behind ambitious infrastructure.

Where I Go From Here

Prometheus is still being built, refined, validated, and hardened. I am not pretending the journey is over.

But these benchmark results matter to me because they validate a deeper belief I have had from the beginning:

The future of cyber defense does not belong to whoever can burn the most resources. It belongs to whoever can turn the least waste into the most protection.

That is what I am trying to build. A platform that is adaptive, technically credible, operationally efficient, and architected with respect for what infrastructure actually costs.

Not just in dollars. In watts. In hardware. In land. In cooling. In operations. In national compute strategy. In public trust.

I think the industry needs more people willing to ask whether the answer is really “more” — or whether the answer is finally “better.”

I know which side I am building for.