From Semifinals to Real-World Pilots: What We Learned
Reaching the Semifinals of the XPRIZE Wildfire competition was a meaningful milestone. But the more valuable outcome was not the placement — it was the clarity we gained about the gap between a technically compelling concept and a system that agencies will actually deploy.
This post is an honest account of what we learned, and how it is shaping the next phase of Pyr-Stop's development.
The Competition Context
XPRIZE Wildfire (Track B) challenged teams to advance the state of the art in wildfire detection and early warning. The competition provided structured evaluation criteria, access to expert reviewers, and — crucially — a forcing function to sharpen our thinking under time pressure.
We entered with a concept grounded in multi-source detection and human-in-the-loop response. We left with that concept intact, but substantially refined.
What the Evaluation Process Revealed
Operational integration is harder than detection
Early in development, we focused heavily on the detection problem — how do you find a fire as early as possible? The evaluation process pushed us to think harder about the step after detection: what happens when an alert is generated, and who does it go to?
Agencies already have dispatch systems, GIS platforms, and communication protocols. A detection tool that cannot connect to those systems creates a new workload rather than reducing one. We came away with a much clearer view of the integration requirements.
False positive tolerance is low — and varies
Different operators have very different thresholds for alert noise. A land manager monitoring a large, remote area may tolerate more alerts than a fire service duty officer who must decide whether to dispatch a crew. Designing a single threshold that works for both is not straightforward.
Our current approach aims to make confidence levels explicit and configurable, so that operators can set thresholds appropriate to their context.
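To make that concrete, here is a minimal sketch of what per-operator thresholds could look like. The profile names, threshold values, and triage categories are illustrative assumptions for this post, not Pyr-Stop's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    source: str        # e.g. "camera", "satellite"
    confidence: float  # model confidence in [0, 1]
    lat: float
    lon: float

@dataclass
class OperatorProfile:
    # Hypothetical per-operator settings: detections below alert_threshold
    # are suppressed; those between the two thresholds are surfaced for
    # monitoring rather than pushed as actionable alerts.
    name: str
    alert_threshold: float
    dispatch_threshold: float

def triage(detection: Detection, profile: OperatorProfile) -> str:
    """Map one detection onto an operator-specific action."""
    if detection.confidence >= profile.dispatch_threshold:
        return "dispatch-review"   # high confidence: surface for a dispatch decision
    if detection.confidence >= profile.alert_threshold:
        return "monitor"           # medium confidence: watch, don't page anyone
    return "suppress"              # below this operator's noise floor

# A remote land manager tolerates more noise than a fire service duty officer.
land_manager = OperatorProfile("land-manager", alert_threshold=0.3, dispatch_threshold=0.8)
duty_officer = OperatorProfile("duty-officer", alert_threshold=0.6, dispatch_threshold=0.9)

d = Detection("camera", confidence=0.85, lat=37.9, lon=-120.4)
print(triage(d, land_manager))  # dispatch-review
print(triage(d, duty_officer))  # monitor
```

The same detection produces different actions for different operators, which is the point: the model's confidence is reported explicitly, and the decision about what that confidence means stays with the operator.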
Human oversight needs to be designed in, not bolted on
We had always intended for the system to keep humans in the decision loop. What the competition process clarified is that this intention needs to be expressed in the interface and workflow design — not just in the architecture documentation.
If the human step is slow, inconvenient, or poorly presented, operators will either ignore it or bypass it. Designing for genuine human oversight means designing for the conditions under which operators actually work.
What We Are Doing Differently
Based on these lessons, we are now prioritising:
- Integration-first design: Building the data pipeline so that alert output can connect to existing agency systems, rather than requiring agencies to adopt a new platform.
- Configurable alert thresholds: Giving operators meaningful control over sensitivity, so alert volume matches each operator's tolerance for noise.
- Workflow-aware UI: Designing the operator interface around realistic duty-officer workflows, including mobile access.
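As one illustration of the integration-first point: rather than asking agencies to log into a new platform, alerts can be serialised into a payload their existing systems can ingest. The sketch below is modelled loosely on the kinds of fields found in Common Alerting Protocol (CAP) messages; the field names and certainty cutoff here are assumptions for illustration, not a real agency schema or Pyr-Stop's production format:

```python
import json
from datetime import datetime, timezone

def to_agency_payload(detection_id: str, confidence: float,
                      lat: float, lon: float) -> str:
    """Serialise one detection as a minimal JSON alert payload.

    Field names are illustrative, inspired by CAP-style alerting
    (id, sent time, event type, certainty, affected area).
    """
    payload = {
        "id": detection_id,
        "sent": datetime.now(timezone.utc).isoformat(),
        "event": "wildfire-detection",
        # Hypothetical mapping from numeric confidence to a coarse
        # certainty label that dispatch systems can filter on.
        "certainty": "Likely" if confidence >= 0.7 else "Possible",
        "confidence": round(confidence, 2),
        "area": {"lat": lat, "lon": lon},
    }
    return json.dumps(payload)

print(to_agency_payload("det-0042", 0.82, 37.9, -120.4))
```

Keeping the payload small and standards-adjacent is a design choice: the less an agency has to translate, the closer the alert gets to an existing dispatch or GIS workflow instead of creating a new one.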
The Path to Pilots
We are currently in conversations with a small number of organisations about scoping initial pilot deployments. We are being deliberate about this — a poorly scoped pilot that does not fit an agency's actual workflow tells us very little.
If you are an agency or land manager interested in discussing what a meaningful pilot might look like for your context, we would like to hear from you.