In this post, I want to extend the thinking I started in Decision Velocity: The Only Competitive Advantage Left. Since writing that article, I have realised that my thinking around world models has been too narrow. My focus had been purely on cause and effect in the physical world – if I kick a ball (action), it will hit that window (future state).
That focus was partly due to my geospatial background, and partly because the examples we see from the main players in the world models space (World Labs, Luma, Decart, Odyssey etc.) all focus on the physical world in one form or another. Why?
Because most are training their models on video or the physics of the real world. That way the models learn actual physics – if I drop a ball, it will fall. But as I began to think about this, I realised I was missing the biggest part of the future market for world models – the non-physical world – if the Fed raises interest rates (action), mortgage demand will drop and construction stocks will slide (future state).
Cause and Effect
At their heart, world models are cause-and-effect engines. LLMs, in contrast, are designed to predict the next word (pattern matching); they do not understand the underlying ‘why’. World models map causal levers, allowing AI to run thousands of what-if simulations (I’ll unpack this a little more in a moment). They are designed to understand complex systems and predict a future state. In the physical world they work with objects like cars and balls; in the non-physical world those objects are variables like inflation, liquidity or consumer sentiment.
The “Action-Perception” Loop
Now might be a good time to provide a non-physical example. Let’s consider a Supply Chain:
Observe: “Our warehouse is 90% full, and a storm is hitting the coast.”
Abstract: The model ignores the weather reporter’s tie and the names of the warehouse workers. It focuses on: Supply (High) + Transport Link (Broken).
Simulate (Cause & Effect): “If I do nothing, we will run out of parts in 4 days. If I reroute to air freight (Action), we stay operational but costs rise 20%.”
Predict: It predicts the “State” of the company’s bank account and inventory for both scenarios.
Decide: It chooses the path that avoids the “crash” (bankruptcy or stock-out).
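The five steps above can be sketched as toy code. This is purely illustrative: the function names and the 20% air-freight cost figure come from the scenario described here, while the scoring values are hypothetical stand-ins for a learned value function.

```python
# Illustrative sketch of the action-perception loop for the warehouse example.
# All variable names and scores are hypothetical.

def abstract(observation):
    """Strip noise; keep only the causal variables."""
    return {
        "supply_pct": observation["warehouse_fill"],      # 90% full
        "transport_ok": not observation["storm_on_coast"],
    }

def simulate(state, action):
    """Toy transition model: predict a future state for one action."""
    if action == "do_nothing" and not state["transport_ok"]:
        return {"days_until_stockout": 4, "cost_increase_pct": 0}
    if action == "reroute_air_freight":
        return {"days_until_stockout": None, "cost_increase_pct": 20}
    return {"days_until_stockout": None, "cost_increase_pct": 0}

def score(future):
    """A stock-out (the 'crash') is far worse than higher costs."""
    if future["days_until_stockout"] is not None:
        return -500
    return -future["cost_increase_pct"]

observation = {"warehouse_fill": 90, "storm_on_coast": True}
state = abstract(observation)
best = max(["do_nothing", "reroute_air_freight"],
           key=lambda action: score(simulate(state, action)))
print(best)  # reroute_air_freight
```

The key design point is that "deciding" is nothing more than scoring simulated futures and picking the best one.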
That is a nice simple workflow. So, how do world models actually work?
3 Engines Working Together
There are 3 parts to world models:
- Physicist (Encoder – defines the “Now”) – This takes messy data and ‘crushes’ it into latent space. It throws away noise (the car’s colour) and keeps the salient information (the car’s speed and direction).
- Captain (Decoder/Policy – proposes the “Maybe”) – The Captain isn’t “smart” on its own – it is goal-oriented. It holds the instructions: “Maximise profit, stay within legal boundaries, and reduce carbon footprint.” It doesn’t know how to do that until it starts asking the Navigator questions. The Captain tells the Navigator to run simulations. Once a future state has been found that looks good (safe, profitable, efficient), the Captain issues a command to act. If the Captain is not ‘smart’, how does it get to this place of recommending the best action? (More on this in a moment.)
- Navigator (Transition Model – predicts the “If”) – This is the heart of a world model: the simulator. It takes the latent state and asks, “If I take action X, how does the world change?” The Navigator holds the “physics”, or “rules”. It generates a mathematical state of the future. It doesn’t have an opinion on what’s “good” or “bad”. It just knows that if you pull Lever A, Result B happens.
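As a rough sketch of the Physicist’s job, here is the encoder idea in miniature. The observation fields are hypothetical; the point is simply that noise is discarded and only the causal variables survive into the latent state.

```python
# Hypothetical sketch of the "Physicist" (encoder): crush a messy
# observation into a small latent state, keeping only causal variables.

raw_observation = {
    "car_colour": "red",      # noise: discarded
    "driver_name": "Alex",    # noise: discarded
    "speed_kmh": 62.0,        # salient: kept
    "heading_deg": 45.0,      # salient: kept
}

SALIENT = ("speed_kmh", "heading_deg")

def encode(observation):
    """Keep only the variables that drive cause and effect."""
    return tuple(observation[key] for key in SALIENT)

latent = encode(raw_observation)
print(latent)  # (62.0, 45.0)
```

In a real system the encoder is a learned neural network and the latent state is a dense vector, not a hand-picked tuple, but the compression principle is the same.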
I hope you are staying with me. So we have noise-free data, a persistent, goal-oriented Captain, and a Navigator which can predict an outcome based on an input. So where is the adult in the room, you might ask? Let me introduce the Critic:
- The Critic (The Judge – assigns “value”) – This is how the “not smart” Captain knows which simulation was actually the “best”. The Critic looks at the future states the Navigator predicts and assigns each a score.
Okay, one last addition. Those of you who read my posts will know I talk about reasoning engines, and it is here they fit in. The Reasoning Engine is not a “thing” you can point to – it is the behaviour that emerges when the Captain, Navigator, and Critic work together.
The Use Case: Global Supply Chain Re-Routing
Let’s pull this all together. Imagine a major port in Singapore closes unexpectedly due to a regional strike. A global electronics company needs to decide: wait it out or fly the parts?
1. The Physicist (The Filter)
Role: Strips the noise.
The Action: It ignores the news anchor’s political commentary and social media “outrage.” It only extracts the salient data: “Port Capacity = 0%” and “Inventory in Transit = 50,000 units.”
2. The Navigator (The Simulator)
Role: Predicts the “What-If.”
The Action: It knows the “physics” of logistics (Lead times, fuel costs, air freight capacity).
- Simulation A: “If we wait 10 days, production stops on Day 4. Revenue Loss: $20M.”
- Simulation B: “If we fly parts tomorrow, costs increase by $5M, but production continues. Revenue Loss: $0.”
3. The Critic (The Judge)
Role: Scores the outcome.
The Action: It looks at the Navigator’s results and tells the Captain: “Based on your goal, this simulation scores the highest.”
- Score A: “Losing $20M and stopping the factory is a ‘Critical Failure’ state. Score: -500.”
- Score B: “Losing $5M in margin but keeping the factory running is a ‘Resilient’ state. Score: +100.”
4. The Captain (The Executor)
Role: Proposes and Executes.
The Action: It tries 50 other variations (e.g., “What if we only fly half the parts?”). Once the Critic gives the highest score to “Fly half, ship half from a different port,” the Captain triggers the booking order in the ERP system.
5. The Reasoning Engine (The Loop)
Role: The “Thinking.”
The Action: This is the high-speed “conversation” where the Captain asks for 50 options, the Navigator simulates them, and the Critic grades them.
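The whole walkthrough can be condensed into a few lines of toy code. The dollar figures and the -500/+100-style scores come from the steps above; the $3M cost for the “fly half, ship half” option is a hypothetical number added so a third candidate exists, and the function names are mine.

```python
# Hypothetical end-to-end sketch of the Singapore port use case.
# Figures follow the walkthrough above; the half-and-half cost is invented.

def navigator(action):
    """Transition model: predict the future state for each candidate action."""
    futures = {
        "wait_10_days":       {"extra_cost_m": 0, "factory_running": False},
        "fly_parts":          {"extra_cost_m": 5, "factory_running": True},
        "fly_half_ship_half": {"extra_cost_m": 3, "factory_running": True},
    }
    return futures[action]

def critic(future):
    """Judge: a stopped factory is a critical failure; otherwise prefer lower cost."""
    if not future["factory_running"]:
        return -500
    return 100 - future["extra_cost_m"]

# The Captain proposes candidate actions and executes the best-scoring one.
candidates = ["wait_10_days", "fly_parts", "fly_half_ship_half"]
best = max(candidates, key=lambda action: critic(navigator(action)))
print(best)  # fly_half_ship_half
```

The Reasoning Engine is exactly this loop: propose, simulate, score, repeat, then act on the winner.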
So we have clean data, a Captain who has a goal and asks the Navigator to run 50 simulations, a Critic who scores those simulations against the Captain’s goal, and finally the Captain, who either triggers an action (automation) or provides a human with the best action (augmentation).
There is much to digest here, but I hope this explains world models in simple terms: what they are, how they work and, most importantly, their value beyond understanding and making decisions in the physical world.
Next up, I am going to discuss the challenges of training world models on non-physical use cases, who is innovating in that space, and the rise of Causal Hubs. Stay tuned.
About Matt Sheehan
For 25 years, I’ve operated at the intersection of complex customer problems and geospatial solutions. Today, my focus has shifted entirely to the rapidly evolving AI landscape—specifically the pursuit of Decision Velocity.
We are in a period of massive disruption. I see my role as an innovator mapping the path forward, helping organizations harness World Models and Reasoning Engines to eliminate latency and drive high-speed, high-stakes decisions.
I unpack these concepts and the architecture of the future every week in The Decision Layer Weekly. Subscribe here: https://lnkd.in/g4HtmYT5
The Decision Flow: A World Model Framework
- The Physicist (Data): Provides Clean Data. It ignores the noise and filters out everything except the variables that actually move the needle.
- The Captain (Intent): Sets the Goal. It holds the mission and asks the Navigator to find the most efficient path to success.
- The Navigator (Simulation): Runs the “What-Ifs.” It acts as a flight simulator, testing 50+ scenarios against the “physics” (rules) of the industry.
- The Critic (Evaluation): Scores the results. It matches the Navigator’s predicted futures against the Captain’s goals to identify the highest-value path.
- The Reasoning Engine (The Loop): The System-2 Thinking. This is the high-speed internal conversation between all roles that occurs before any action is taken.
- The Outcome (Execution): Action or Augmentation. The Captain either triggers an automated solution or provides a human with a “Best Path” recommendation.
Why it Matters for Decision Velocity
Standard AI (LLMs) focuses on prediction based on patterns. This World Model framework focuses on planning based on consequences. By simulating the future in a safe, “latent” space, organizations can eliminate Decision Latency—moving from “I think” to “I know” in milliseconds.