Better Planning, Doctrine-based Decisions, Learning & More
Profiles in Preparedness #57
Welcome back to The CP Journal, where we break down what it takes to get left of bang.
Preparation for future events is often guided by the loudest voice in the room. We don’t always like to acknowledge it, but it’s a common dynamic in public safety organizations.
That voice is usually backed by years of experience, rank on the collar, and a strong intuitive sense of what will work—and what won’t. To be clear, that intuition isn’t necessarily wrong; it’s just hard to prove, hard to test, and even harder to challenge.
When assumptions remain unstated but strongly defended, it becomes difficult to explore alternative approaches that might work better. But this is where I’m seeing a meaningful shift begin to take place in preparedness work. The process improves when authority no longer determines which ideas get explored.
In this week’s article below, I look at how AI-enabled tools are changing evacuation planning and capability development, drawing on a fascinating conversation with Ladris CEO and co-founder Leo Zlimen.
These tools allow us to quantify and understand the variables that actually drive large, community-wide evacuations—time, behavior, traffic flow, and capacity—rather than relying solely on experience and instinct.
But it’s one thing to hear about this shift conceptually, and it’s another thing to see it materialize in practice.
During a recent evacuation planning project, we used Ladris’ modeling tools to inform the planning process and deliberately challenge long-held assumptions about strategy, decisions, timelines, and tradeoffs.
As options were tested and retested, there was a change in the conversation that I hadn’t really seen before on planning projects. New voices in the room started asking: “What if we tried this?” “What happens if we shift that?” “Can we test another approach?”
Because it only took minutes to re-run a simulation, ideas no longer had to be defended on authority alone. Sometimes intuition proved correct and meaningfully reduced evacuation time. Other times, it didn’t. Either way, the decision-making improved because assumptions were visible, testable, and open to refinement.
Instead of being constrained by the strongest voice in the room, the planning looked more like a learning system where experience still carried weight, but the data and outputs helped everyone see where improvements were possible.
Getting ahead of problems and preparing for an uncertain future is strongest when intuition is complemented—not replaced—by measurement. This is how preparedness moves organizations further left of bang: by stress-testing and improving capabilities before lives depend on their performance.
Inside The CP Journal
Here is what was added to the site this week.
Every evacuation centers on one critical question: can people get out in time? Whenever an incident forces people to leave their homes, neighborhoods, or cities, it sets off a race between two clocks: the time it takes people to reach safety, and the time the incident allows for them to get there.
This article, based on an interview with Leo Zlimen, CEO of Ladris, looks at how technology and AI are improving the level of clarity that cities and counties can achieve while preparing for large-scale evacuations.
This Week’s Reads
Here are a few standout reads from the week with insights or ideas that caught my attention.
👨💻 Article | AI at the Edge: Enhancing Marine Corps Decision-Making Through Doctrine. “War has always tempted societies to believe that a new technology could tame uncertainty.” This article from The Connecting File looks at how AI tools promise to reduce the administrative grind of planning but cannot replace judgment or carry the commander’s intent. Describing a planning process (and the frustrations inherent in it) that transcends military applications and shows up just as readily in business and government disaster planning, the author offers a clear description of the problem we’d love for technology to solve. But for as long as war is a human endeavor, planning and waging it remains humanity’s role and responsibility.
🧑🎓 Article | Student for Life. “If I’ve learned anything from my most recent courses, passive participation never leads to desirable outcomes.” In this recently unlocked article, Kyle Shepard provided the prompt and the draw I needed to return to observation and awareness, learning from those around me and from what their experiences offer. And then, of course, applying what I learn and observe to see what fits my life, my values, and my goals.
👩🏫 Article | Mentoring is a Multiplayer Skill. “Mentoring is often treated like a favor…That framing breaks mentoring before it even starts.” I know there are a lot of leaders subscribed to this newsletter who feel like their time developing others is being used ineffectively, and I know there are many up-and-coming professionals who are actively seeking guidance from experienced people in their field. If either of those statements describes you, give this article a read—it will absolutely make you think about the agreement to teach and to learn, and how to assess whether the relationship is “working” or not.
Before You Go
Found this useful? Share it. Passing this along helps grow a community focused on staying left of bang.
If you want to go deeper, a paid subscription gives you access to advanced courses, playbooks, and exclusive leadership writing.
And if you’re working to strengthen how you get ahead of problems before they become emergencies, that’s our work—from strategy and assessments to planning and exercises.