
The Optimization Trap: When Speed Becomes Sabotage
Formula 1 just made an admission that should make every executive pause.
For 2026, F1 is running three pre-season tests instead of the usual one. Not because teams asked for it. Not because sponsors demanded more track time. But because the FIA looked at the regulation changes and concluded that nobody knows how these cars will behave until they're actually running.
The acknowledgment is unprecedented in modern F1. Active aerodynamics replace DRS entirely. The power split shifts from 80/20 combustion/electric to 50/50. Energy recovery quadruples. Cars are 30kg lighter, 200mm shorter, with 30% less downforce and 55% less drag.
Nobody knows how these systems interact in race conditions. Will active aero create new turbulence patterns? How do drivers optimize energy deployment when harvesting 8.5 megajoules per lap instead of 2? What happens to tire degradation with completely different load characteristics?
These aren't questions you answer in simulators. You need track time. Failures. Iterations. Learning.
So F1 did something remarkable: they refused to measure performance before understanding the system.
By 2027, F1 will return to a single test once teams understand the new environment. But 2026 requires exploration before optimization. Learning before execution. Preparation before performance metrics.
If this sounds familiar, like your last market entry that took twice as long as projected, or the acquisition integration that's still struggling eighteen months in, you're recognizing the pattern.
The Mistake Most Leaders Make
Most organizations treat potential as proof of readiness, then punish teams when the environment doesn't cooperate.
The confusion is between capability and capacity. Capability is potential: what a person or organization could accomplish given sufficient time, resources, and learning opportunities. Capacity is current state: what they can accomplish right now with existing knowledge, systems, and conditions.
Your team is probably capable of succeeding in a new market. But their capacity to operate in that environment is limited until they've learned its specific requirements. And their ability to execute won't emerge until capacity develops through exploration.
F1 teams are highly capable. They have brilliant engineers, advanced simulation tools, and decades of experience. But their capacity to optimize 2026 cars is zero until they've explored how these fundamentally different machines behave.
The organizational psychologist Elliott Jaques spent decades studying what he called the "time span of discretion": the longest period into the future that someone can effectively plan and work without direction. His research revealed something uncomfortable: cognitive capability for handling complexity correlates directly with time span, and you can't compress capability development by simply working harder or hiring smarter people. Time-span capacity develops through experience with increasingly complex problems over extended periods.
Three tests don't make F1 teams more capable. They develop capacity through exploration, which enables the ability to emerge.
You can't collapse these phases into one test any more than you can compress a Level 5 strategic thinker's development into a weekend workshop.
Uber's $2 Billion Lesson
Uber's global expansion provides a textbook case of underestimating preparation time.
In 2014-2016, they entered dozens of markets simultaneously, assuming their U.S. playbook would translate with minor adjustments. China proved instructive. Uber committed $2 billion over two years before selling to Didi Chuxing at a massive loss.
The problem wasn't execution; it was insufficient exploration of what success actually required in that regulatory, competitive, and cultural environment.
They optimized before they learned. They set aggressive growth targets before understanding what growth mechanisms worked. They held local teams accountable to metrics designed for U.S. markets operating under completely different conditions.
The exploration period they needed: 12-18 months of smaller-scale operations, regulatory relationship-building, and competitive positioning before scaling.
The timeline they built: 90 days to launch, immediate scale expectations.
The gap between those timelines cost $2 billion and a market position they never recovered.
The Mexico Expansion That Worked
A U.S. manufacturing client was considering opening operations in Mexico. Their initial timeline: 6 months from decision to operational facility. They called me in month four, expecting to discuss executive candidates for a plant manager role they needed filled immediately.
I told them they weren't ready to hire.
Not because candidates weren't available, but because they hadn't done the exploration work to know what kind of leader would succeed in this specific environment. They were optimizing, filling the role, before they understood what the role actually required in Mexico.
After assessment, we restructured the approach:
Months 1-3: Exploration. Regulatory requirements, talent availability, supplier relationships, and cultural dynamics. What don't we know? What assumptions need testing?
Months 4-6: Development. Facility design, hiring strategy, and operational processes adapted to the local context. Based on discoveries, what approaches might work? This is when we defined the leadership profile, not before.
Months 7-12: Execution. Construction, hiring, systems implementation. I placed their VP of Operations during this phase, once we understood the environment well enough to know what success required.
Months 13-18: Optimization. Refining processes, scaling operations. Which approaches deliver the best results?
The extended timeline initially frustrated the CFO. "We're capable of moving faster," he argued.
True, but the capability to execute isn't the same as the capacity to operate in a fundamentally different environment, and that capacity only develops through learning. Hiring an executive before you understand the environment is how you set that executive up to fail.
Eighteen months later, the Mexico facility was exceeding projections while competitors who rushed market entry struggled with regulatory issues, talent retention problems, and operational inefficiencies. The VP we placed is still there, now overseeing expansion to a second facility.
The difference? They allocated time for exploration before optimization. They didn't confuse potential with readiness, not for the organization, and not for the leader they hired.
What F1 Is Teaching Us About Phases
F1's three-test approach maps directly to how complex environments actually work:
First test: Discovery. What don't we know? What assumptions need testing? What does success even look like here? During this phase, "failure" isn't failure; it's information gathering. Performance metrics are premature. Accountability is for learning velocity, not execution results.
Second test: Development. Based on discoveries, what approaches might work? What needs adaptation? This is where capability begins to convert to capacity. You're developing organizational knowledge that enables future optimization.
Third test: Optimization. Now that you understand the environment and have developed approaches, which delivers best results? This is where performance metrics become meaningful. You're optimizing within an understood domain.
Racing: Execution. With capacity developed and approaches optimized, you can meaningfully execute against performance metrics.
The critical insight: exploration should consume 20-30% of the total timeline. If your project plan allocates less than that to understanding the new environment versus implementing predetermined approaches, you're probably underestimating learning requirements.
Most organizations invert this. They allocate 5-10% to exploration ("discovery phase," "stakeholder interviews," "market research") and 90% to execution. Then they wonder why execution keeps failing.
The Optimization Trap
Here's the escalation most leaders miss.
A capable team enters a new environment. They underperform initially, not because they lack talent, but because they're still learning. Leadership interprets this as an execution failure. Pressure increases. Metrics tighten. The team is told to "figure it out faster."
What happens next is predictable.
The best people, the ones with options, leave for organizations that will give them time to succeed. The ones who stay learn to optimize for appearing productive rather than for deep learning. Decisions get made to hit short-term metrics rather than build long-term understanding.
By the time the organization finally understands the environment, the people who could have navigated it are gone.
This isn't inefficiency. It's strategic self-sabotage disguised as accountability.
You've selected against the very capability you needed. And you did it by confusing exploration with underperformance.
The real risk of skipping exploration isn't delay. It's this: teams get labeled weak, leaders get replaced, strategies get abandoned, before the organization ever understands the terrain. You're not just losing time. You're losing the people and the learning that would have made success possible.
The Counterintuitive Truth
Organizations that invest in exploration phases often reach optimization faster than those that rush to execution.
The rushed organization spends months fixing problems, rebuilding approaches, and recovering from premature optimization. The prepared organization spends time up front but executes cleanly.
F1 teams that use all three tests effectively won't just avoid mistakes; they'll develop organizational knowledge that compounds over the season. They'll understand causal relationships between setup changes and performance outcomes. They'll have data supporting strategic decisions. They'll ground their confidence in actual learning rather than hopeful assumptions.
The teams that minimize testing to "get started faster" will spend the season chasing problems they don't understand, making changes that don't work, and wondering why capable people are underperforming.
The time invested in exploration isn't a delay. It's preparation that accelerates everything downstream.
2026 Will Reveal Who Understood This
When F1's season begins in March 2026, we'll see immediately which teams used testing effectively.
Some will have developed a genuine understanding of how these cars work, built organizational capacity for optimization, and positioned themselves for sustained performance.
Others will have rushed through tests focused on looking fast rather than learning deeply. They'll spend the season struggling, and they'll probably blame the drivers.
The variable won't be talent, resources, or capability. It will be whether they allocated adequate time for exploration before expecting optimization.
The same will be true for your organization's next major initiative.
Complex systems reward understanding first, speed second. F1 has accepted that reality. The question is whether your organization still assumes learning can be compressed through effort alone, and what it costs you when it can't.
Planning a Major Initiative?
Whether it's market entry, integration, or executive hiring — a 30-minute conversation can help you distinguish between capability and readiness before the timeline becomes a trap.
Schedule a Confidential Consultation
Frequently Asked Questions
What is the optimization trap?
The optimization trap occurs when organizations skip exploration phases and move directly to execution, then interpret early underperformance as failure rather than learning. Teams get pressured, metrics tighten, and the best people leave — before the organization ever understood what success required in the new environment. It's strategic self-sabotage disguised as accountability.
What's the difference between capability and capacity?
Capability refers to potential — what someone or an organization could accomplish given sufficient time, resources, and learning opportunity. Capacity refers to current state — what they can accomplish right now with current knowledge and conditions. Most organizations confuse the two, treating potential as proof of readiness and then punishing teams when the environment doesn't cooperate.
How much time should be allocated to exploration before optimization?
Exploration should consume 20-30% of total project timeline. If your project plan allocates less than that to understanding the new environment versus implementing predetermined approaches, you're probably underestimating learning requirements. Most organizations invert this — allocating 5-10% to exploration and 90% to execution — then wonder why execution keeps failing.
Why is F1 running three pre-season tests for 2026?
The 2026 F1 regulations represent fundamental changes: active aerodynamics, 50/50 power split between combustion and electric, quadrupled energy recovery, lighter and shorter cars with dramatically different downforce and drag characteristics. The FIA concluded that nobody knows how these systems interact in race conditions, so they're requiring three tests (discovery, development, optimization) before racing begins. They refused to measure performance before understanding the system.
What is Elliott Jaques' concept of time span of discretion?
Elliott Jaques studied the longest period into the future that someone can effectively plan and work without direction. His research revealed that cognitive capability for handling complexity correlates directly with time span — and you can't compress capability development by simply working harder or hiring smarter people. Time-span capacity develops through experience with increasingly complex problems over extended periods.
How does the optimization trap apply to executive hiring?
Organizations often want to hire executives immediately when entering new markets or launching major initiatives. But hiring before you understand the environment is how you set executives up to fail. You need exploration to define what the role actually requires in this specific context — not what it required somewhere else. The leadership profile should emerge from discovery, not precede it.



