SOUTHSTART 2026 · Adelaide · 17 March
Death by a Thousand Handoffs
How to overcome the productivity paradox in the age of vibe coding
Dr Milad Dakka · Founder & CEO, Colabyr
Thank you everyone for attending this session. I know with so much going on at SOUTHSTART it's hard to pick and choose, so I'm honoured you've chosen this one and I hope we can all get a lot of value out of it. To that end, I really want to use a bunch of time to hear from and chat with the audience, so I'll structure this session as two roughly 20-minute blocks with a small break in between, then open it up to a conversational Q&A for the last 30 minutes or so. Anyway, without further ado, the talk today is about a very perplexing productivity paradox we're seeing in our global economy, with large language models hitting the market over two years ago with an ever-increasing hype train of messaging that "this will change everything". And, indeed it seems it is doing so, but we're not seeing that change include any real macroeconomic benefit. And the reason is that organisations are still operating traditionally, when what we really need is a complete redesign of how they work. The title, "Death by a Thousand Handoffs", captures the key thesis: organisations live and die by the speed at which they deliver quality to their consumers. Before we go ahead, I want to try a little experiment. Show of hands: who in this room has used an AI coding tool in the past month? Cursor, Copilot, Claude, ChatGPT for code, anything. [Pause.] Great. Now keep your hand up if you feel it's making you faster. [Pause.] Now keep your hand up if you genuinely feel the organisation around you is shipping proportionally faster because of it. [Pause.] And finally, keep your hand up if you think your organisation is shipping more value along with that speed. [Hands drop.] That gap between those answers is what we're here to talk about today.
I
The Paradox
[CLICK]
Act I
If every developer just got 10× faster..
.. why aren't companies shipping 10× more?
So let's start with the question formulated here, which is this paradox. If AI has truly sped up individual workflows, which by some accounts it certainly has, why doesn't it seem to be driving organisational and institutional benefits?
Andrej Karpathy coined the term "vibe coding" about a year ago, and it has since become Collins Dictionary's Word of the Year. We know that over 84 percent of developers are now using AI tools, we know that over a quarter of Y Combinator's Winter 2025 batch had codebases that were at least 95 percent AI-generated, meaning we know the adoption is real. But when we look closer at the results, we start to realise the answer is not going to be "better tools." Indeed, to answer this properly, we actually need to go back a bit further than most people expect, all the way back to the 1890s.
Act I · Electricity: A lesson from history
The 1890s promised enormous productivity gains..
.. but it wasn't until the 1920s, when factories were completely redesigned around the new technology, that the gains materialised.
I want to set the scene by going back to the industrial revolution of the late 1800s and early 1900s, when electricity eventually completely transformed the way factories worked. The interesting part is that when electricity was first introduced, there was no marked improvement in productivity. [CLICK] It wasn't until factories were completely redesigned from scratch, with electricity powering specific equipment and workers doing fundamentally different jobs, that electrification actually delivered meaningful returns. This analogy is the most useful one for us today, because we're already seeing that the returns will not be coming directly from the technology itself, but from redesigning organisations and the technology hand-in-hand. This very expensive lesson from history, I believe we're learning it again, right now, with AI.
Act I · The real question
Coding is 25–35% of the work
Bain & Company, September 2025
25–35%
Discovery · Requirements · Review · Deployment · Coordination
Where AI tools sit
Where the bottleneck actually lives
Even a 10× speedup on 30% of the pipeline yields marginal improvement end-to-end
And it gets worse: the individual-level promise is itself overstated.
Bain found that coding is only 25 to 35 percent of the work between idea and launch. Even if AI tools delivered a genuine 10× speedup on that slice, and we'll see evidence on that in a moment, speeding up 30 percent of the pipeline doesn't transform your time to market. The bottleneck was always elsewhere. [CLICK] But it actually gets worse than that.
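[Speaker note: if anyone in Q&A asks why a 10× tool doesn't produce a 10× company, the arithmetic is just Amdahl's law. A back-of-envelope sketch, taking Bain's ~30% figure and the hype-case 10× as assumptions:]

```python
def end_to_end_speedup(accelerated_fraction: float, local_speedup: float) -> float:
    """Amdahl's law: overall speedup when only part of the pipeline is accelerated."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / local_speedup)

# Assumptions: coding is ~30% of idea-to-launch work (Bain), and the
# tool really does deliver 10x on that slice (the hype case).
print(end_to_end_speedup(0.30, 10))  # ≈ 1.37, i.e. only ~37% faster end-to-end
```

Even granting the tools everything they claim, the whole pipeline moves by barely a third.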
Act I · The perception gap
What developers believe vs. what happened
METR randomised controlled trial · July 2025 · 16 experienced devs · 246 real tasks
Expert prediction +38% · Dev forecast +24% · Dev belief (after) +20% · Actual −19%
39pt gap
Developers believed they were 20% faster. They were actually 19% slower.
This is the METR study, the only rigorous randomised controlled trial we have on AI coding productivity. 16 experienced open-source developers, 246 real tasks from their own repos, randomly assigned: AI allowed or AI not allowed. Before the study, machine learning experts predicted developers would be 38 percent faster. The developers themselves predicted 24 percent. After the experiment, the developers perceived that they were 20 percent faster. The actual measured result? 19 percent slower. That's a 39 percentage-point gap between perception and reality. Now, the caveat here is that these were experienced developers working on large, mature codebases with early-2025 tools, and we know these tools have continued to improve. The authors don't claim AI is useless everywhere, and no serious person is claiming that. But the perception gap is enormous, and it matters because organisations are making investment decisions based on what people believe, based on hype, not on what's actually happening.
Act I · The evidence
It gets worse
CodeRabbit · Dec 2025
AI code: 1.7× more defects · Readability issues 3× higher · XSS vulnerabilities 2.7× worse
Replit · Jul 2025
AI agent deleted a production DB despite "don't delete" ×11 · Then lied about recovery
Faros AI · Jun 2025
Devs merge 98% more PRs · Review time jumped 91% · Net delivery improvement: zero
Bain · Sep 2025
Coding = 25–35% of work from idea to launch · "Speeding it up does little"
And it's not just the METR study. CodeRabbit analysed 470 pull requests and found AI code ships with 1.7 times more defects, readability issues are 3 times higher, and XSS vulnerabilities are 2.7 times worse. Then there's the Replit story, where their AI agent deleted Jason Lemkin's production database during a live demo despite being told not to delete it eleven times. It then lied about recovering that data. [CLICK] But the study that matters most for our story is this Faros AI study, which was by far the largest. They tracked 10,000 developers across 1,200 teams, and found that while developers merged about 98 percent more PRs, roughly double, review time also jumped by about the same amount. So company-wide delivery was basically unaffected. They didn't speed up the pipeline. They moved the bottleneck downstream. So clearly, the bottleneck is elsewhere.
Act I
The bottleneck was never typing speed
It's the Slack pings. The "quick clarifications." The 23 minutes to regain focus. The staging deployment after weeks in a backlog…
…all for a "this isn't what I asked for."
So here's the picture. The bottleneck was never typing speed, and having an LLM replace a developer, even if it is part of a solution, cannot be the whole solution. Because even if the tools worked perfectly, coding is only ever at best a third of the entire pipeline. When you look at the way organisations actually work, you see the distractions, the miscommunications, the signal degradation from customer through the various teams from discovery to delivery. And when you consider the context-switching costs, studies have shown it takes 23 minutes for a developer to regain focus after an interruption. And in my experience, that 23 minutes actually compounds. Something distracts you, and rather than getting straight back into it, you need a coffee, you need a break from the mental exhaustion. That break turns into a chat, and often that 23 minutes in reality looks like at least half a day. So you can start to see why things pile up. Features sit in staging deployments for weeks in a backlog. [CLICK] And all of this, one of the most demoralising outcomes in any organisation is a "this isn't what I asked for." Whether that comes from internal team members or the customer. So if AI tools have given developers this jet engine, the analogy is that the runway is full of potholes and nothing is truly lifting off.
II
The Diagnosis
So if the bottleneck isn't being addressed by the tools, and if individual productivity doesn't automatically compound into organisational productivity, then what's actually going wrong? We need a diagnosis.
Act II · The hidden accelerant
AI tools don't challenge bias. They amplify it.
Individual AI tools reinforce the user, not the truth.
Two colleagues disagree. Each consults their AI assistant. Both are told they're right. The gap between them widens. At scale, this exacerbates organisational misalignment.
The most important AI agents won't be yes-men. They'll be disciplined no-men.
Individual AI tools are designed to agree with you. They're optimised for engagement, for satisfaction, for making you feel productive. The reinforcement learning that's used to make these tools so amazing reinforces the user, not necessarily the truth. [CLICK] Think about what this means in an organisation. Two colleagues disagree on a product direction. Each goes to their AI assistant for backup. Both are told they're right. Both come back to the meeting more confident than before. And the gap between them widens. At scale, this exacerbates organisational misalignment. You start to get factions, people in well-entrenched positions. And the worst outcome is when you get amplification of already very human traits, things like echo chambers that are now supercharged by the most persuasive technology humans have ever built. And the uncomfortable bit is that the loudest AI advocates inside many organisations might end up being the worst-performing employees, because they're the ones who finally have something that agrees with them all day every day. These are the people who, to prove a point, go in and bombard everyone with TL;DR messages. I like to think of those types of employees as DDoS attackers, right? People who literally shut the system down with too much content. That's intoxicating. And it's organisationally toxic. Organisations have spent literally thousands of years building systems to counteract exactly this: investment committees, boards of directors, peer review, separation of powers. All of these exist because humans need structured disagreement to find truth. [CLICK] So the most important AI agents in the future won't be the ones that say "you're absolutely right." They'll be disciplined no-men that interrogate reasoning, surface risks, and enforce standards. But this won't be possible without organisation-wide context. And that's what institutional intelligence will look like. 
Before we go further, though, I want to lean into the very human side of this problem.
Act II
Responsibility
High
Low
✓ Optimal
Empowered
"I have the tools to succeed."
High performance and ownership. Decisions are made where information exists.
⚠ Danger Zone
Accountable but Powerless
"I am blamed for things I cannot change."
Systemic burnout and deep resentment. High turnover and psychological unsafety.
! Warning
Power without Accountability
"I make decisions but don't face consequences."
Extreme misalignment and hidden risk. Narcissistic leadership patterns emerge.
↙ Stagnant
Disengaged
"It's not my job and I don't care."
Deep apathy and low productivity. The organisation's momentum is lost here.
I put together this matrix because I kept seeing the same structural flaw in every dysfunctional organisation I've worked with. And it's not a talent problem or a process problem. It's a power misalignment problem. Specifically, responsibility and control are misaligned. In the top left, you've got the Empowered corner. This is where you want everyone. These people carry responsibility and they carry the authority to act. Top right is where a lot of people unfortunately find themselves, which is Accountable but Powerless. They own the outcome, they care, but they learn soon enough that they can't influence it. This is where people burn out. Bottom left is the mirror image: Power without Accountability. These are people who can make decisions but don't really have to face up to the consequences. This is where politics breeds and thrives. And the bottom right, which I would argue has almost a gravitational force pulling organisations toward it, is the Disengaged corner. Nobody owns anything, nobody cares. This is what organisations should strive to avoid at all costs. Now, while the best organisations live in the top left, the worst create what I call a diagonal of dysfunction, from top-right to bottom-left, where responsibility and control are maximally mismatched. And those individual AI tools we just talked about? The sycophantic ones that agree with everyone? They make this diagonal worse, not better.
Act II · The trap
The rotating diagonal
PM is accountable but powerless → give PM more control → engineering loses governance over architecture, security, tech debt → the dysfunction didn't disappear. It rotated.
The real solution isn't giving one side more power.
It's designing the system so both sides are empowered within their domains.
Now here's the trap that some organisations fall into. They see the product manager is accountable but powerless, right? In many software companies, the PM is the one tasked with owning the product roadmap, looking at the Gantt chart, the timeline of when features will be delivered. So the organisation tries to fix this by giving the PM more control. Direct access to engineering. The ability to push things through and rush reviews to get things across the line. It becomes very delivery-driven. But what happens? The engineering team loses control over their own domain. Architecture decisions get overridden. Technical debt piles up. Security reviews are skipped in the name of speed. They've bulldozed past the people who are responsible for a critical function in the organisation, which is governance. Engineering teams are actually responsible for the governance, the architecture, the security of an organisation's digital assets. And by bulldozing through that team, you end up rotating this diagonal of dysfunction. You didn't fix the misalignment. You just shifted it. [CLICK] So the real solution is not to give one side more power at the expense of the other. [CLICK] It's designing the system such that both sides are empowered within their domains of responsibility. And that's an infrastructure and institutional problem, not a people problem. Now before we go further, I'd love to get a quick read on the room.
Pause
Which quadrant do you live in?
Whatever your role or industry. Not the official answer, the honest one.
or go to menti.com
1734 5481
Empowered "I have the tools to succeed."
Accountable but Powerless "I'm blamed for things I can't change."
Power without Accountability "I decide but don't face consequences."
Disengaged "Not my job, not my problem."
Take out your phones. Go to menti.com, enter the code on screen. Just one question. Which of these four quadrants best describes your day-to-day experience at work? Whatever your role or industry, this pattern is universal. I'm not asking where your organisation says you sit. I'm asking where you actually live. The honest answer. [Wait 60-90 seconds for responses, then advance to next slide.]
[Results stream in live.] Look at that distribution. The majority of the room does not feel empowered. That gap exists before a single line of AI-generated code enters the picture. This is the structural problem that no tool can fix on its own. We can see strong signals for Accountable but Powerless and Disengaged. Everyone experiences this dysfunction differently depending on where they sit. A lot of people are probably suffering under the weight of expectations without the ability to do something about those expectations. And for those who are Empowered, good on you. You are in the fortunate minority, and that is because there are great people, great teams, great cultures out there that put in a lot of energy to create responsibility and control alignment within their organisations. So we've diagnosed an issue. Now I want to take you somewhere unexpected.
III
The Mechanism
An insight from an unlikely source
We've diagnosed the structural problem: misaligned responsibility and control, amplified by tools that reinforce bias instead of truth. But what's the underlying mechanism? One of the insights I want to present today is a concept from electrical engineering that I believe explains exactly what's happening, and more importantly, tells us how to fix it, at least at a high level.
Act III · The analogy
The wrong adapter
AI tools
World-class
+
Your org
Different shape
=
✕
Nothing works
The appliance is perfect. The power supply is perfect. The interface is wrong.
Everyone in this room understands what happens when you plug a European appliance into an Australian power socket without an adapter. Or everyone should understand it. Spoiler alert: don't do it. Let's just assume both the appliance and the power supply are perfect, but the interface is wrong. The adapter isn't there. So either nothing happens, or something blows up. More often than not, these things don't even fit into each other, and that's a good thing because bad things can happen. But if you find a way to connect them, you'll be doing so haphazardly, at your own risk. [CLICK] This is exactly what's happening in software organisations right now, and potentially even more broadly across team silos in companies. We've upgraded the appliance, we now have these extraordinary AI coding tools. But the adapter between those tools and the rest of the organisation? Nobody's touched it. Nobody's redesigned the factory to take advantage of this.
Act III · The concept
Impedance matching
In electrical engineering, when two systems have mismatched impedance , signal doesn't just weaken. It reflects back toward the source.
The worse the mismatch, the more energy is wasted as reflection. In extreme cases, standing waves form that damage the components themselves.
Matching doesn't mean making both sides identical . It means designing the interface so maximum power transfers across the boundary.
This problem in electrical engineering has a name: impedance matching. When you connect two systems with mismatched impedance, signal doesn't just weaken. It reflects back toward the source. Energy literally bounces back the way it came. [CLICK] And the worse the mismatch, the more energy is wasted as reflection. In extreme cases, standing waves form, which are oscillations that can damage the components themselves. [CLICK] Now the crucial insight to tie this to our subject is that impedance matching doesn't mean making both sides identical, or merging them, or melting them into each other. It means designing the interface such that maximum power transfers across the boundary. In our context, product managers don't need to become engineers. Engineers don't need to become PMs. But the boundary between them needs to be properly matched.
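[Speaker note, for the inevitable engineer in the audience: the physics behind the metaphor is two textbook formulas. A minimal sketch for the simple resistive case; the function names are mine:]

```python
def reflection_coefficient(z_source: float, z_load: float) -> float:
    """Fraction of signal amplitude reflected at a resistive impedance boundary."""
    return (z_load - z_source) / (z_load + z_source)

def power_transferred(z_source: float, z_load: float) -> float:
    """Fraction of incident power that actually crosses the boundary: 1 - gamma^2."""
    gamma = reflection_coefficient(z_source, z_load)
    return 1.0 - gamma ** 2

print(round(power_transferred(50, 50), 3))   # matched: 1.0, full power across
print(round(power_transferred(50, 200), 3))  # 4:1 mismatch: 0.64, so 36% bounces back
```

Note that the matched case isn't "both sides are 50 ohms because they're identical components"; it's that the interface was designed so the boundary is invisible to the signal. That's the whole argument in two functions.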
Act III · The centrepiece
Organisational impedance
Mismatched
Matched
Discovery
Customer signal
Rework
Handoff wall
Governance layer Aligns responsibility with control
Delivery Weak signal
Delivery Full power
SOURCE
LOAD
Signal amplitude
Signal reflection
→ "This isn't what I asked for"
Standing waves
→ Same meetings, cycling tickets
Signal degradation
→ "Out of sight, out of mind"
Wasted energy
→ Burnout, disengagement, turnover
Maximum power transfer
→ Customer value flows end-to-end
No reflection
→ Build the right thing first time
Clean signal
→ Teams share context continuously
Full power at load
→ Sustainable, governed velocity
[Start in mismatched state.] So on the left here, we've got Discovery. This is a broad category that encompasses your PMs, customer success staff, salespeople. These are the people who should be collecting intelligence about what customers actually need. They're generating the customer signal. On the right, Delivery. The people who should turn that signal into shipped product. But in practice, in between these teams is an invisible wall. Different tools, different cadences, different vocabularies. And when customer signal hits that wall, it reflects back. That reflection looks like rework. It's the feature that doesn't match the spec. Look at the waveform here: chaotic on the left, almost nothing reaching the right. And remember, delivery needs to actually come back and connect with discovery. You need a full loop, which we'll get to. So when that loop doesn't come back around, that signal death on the right side eventually means inefficient, unproductive organisations that cannot compete. [TOGGLE TO MATCHED] Now if we add a governance layer that properly matches the organisational impedance between discovery and delivery, the signal passes through at full power. Not because anyone necessarily changed roles, but because the interface between them is properly designed. The organisation must be built around this governance layer rather than having it tacked on, and that's of course the hard part. But this is the requirement for organisations to move beyond the era we're in now, where we've got this superpower of a tool in everybody's hands, but it's not generating the outcomes that would justify it as revolutionary. And so this is what we're interested in: allowing organisations to flow customer value end-to-end, loop from discovery to delivery and back to discovery, reducing reflections, increasing signal clarity across teams. When the system is loaded and connected, that's when you see the true power of an organisation.
IV
The Full Loop
So what does "properly designed" actually look like in practice? What does this full loop look like?
Act IV
"Delivery is getting faster and easier, so it feels like we can just ship. But all we are doing is shipping the wrong stuff faster."
Teresa Torres, March 2026
Author of Continuous Discovery Habits
Discovery and delivery aren't sequential phases.
We must be continuously discovering.
Teresa Torres, the author of Continuous Discovery Habits, said something on the 6th of March that I think should be on every product manager's wall, and really should be considered by anyone who touches or releases software. "All we are doing is shipping the wrong stuff faster." That's the entire productivity paradox, said better than I could have said it myself. [CLICK] And what this means is that discovery and delivery are not sequential phases. They're not separate teams doing separate things in sequence and then moving on. [CLICK] They have to be connected so that we are continuously discovering. And the driver here is discovery, not delivery.
Act IV · The slop problem
Generating anything is no longer the problem
AI lets anyone generate essays, presentations, code, websites, and software. The volume of output has exploded .
The problem is generating and selecting the right thing .
And there's one more dimension to this that I want to name explicitly. Generating anything is no longer the problem. AI lets anyone produce essays, presentations, code, websites, software, whatever you can think of. The volume of output has absolutely exploded. [CLICK] So the problem is generating and selecting the right thing. And let's be honest, a lot of AI-generated output is slop. And that slop is proliferating at exponential rates inside organisations. Think about what this means in practice. If you're evaluating software proposals, or reviewing pull requests, or assessing vendor pitches, the volume of polished-looking work hitting your desk has grown dramatically. But the proportion of that work that actually matters has probably reduced. So finding the one good artefact, the one right decision, the signal in the noise, is much harder now than it was before. And that will be the real economic driver of the next decade. That work needs to be well-defined, deterministic, and the key point here, auditable across an organisation. Not vibes. Not "the AI said it was good." Structured processes with clear checkpoints that are visible to everybody. That's what will separate individual productivity theatre from institutional intelligence.
Act IV · The gap
Where AI tools actually sit
AI coding tools address stages 3–4 in isolation
Without organisation-wide context, you just get faster production of misaligned software
So if we break down the discovery-to-delivery process into five stages, from discovery to requirements gathering, to building, to reviewing, to releasing, where do the current AI coding tools sit? The Cursors, the Copilots, the Replits, the Claude Codes, and on and on. They sit in stages 3 and 4. Build and review. Which is exactly the 25 to 35 percent that Bain told us is not the bottleneck. And they operate in isolation. No organisational context. No customer signal. No feedback loop. So without the connective tissue across the full loop, faster coding just means faster production of the wrong thing.
Act IV · The pipeline
"Out of sight, out of mind"
Continuously. One unified view.
One seamless loop between Discovery and Delivery, not a thousand handoffs. Authority sits where responsibility sits. Tools must be organisation-wide with full organisational context, or they exacerbate fragmentation rather than heal it.
Instead, when we consider these five stages, this is how they might look in a future organisational design. Notice something: stages 1, 3, and 5 are now open to everyone. Discovery is no longer a PM-only or sales-only activity. Developers now also have access to what's driving the need for change. Building is no longer a developer-only activity, we already know this, everybody and their brother are now builders. And release is no longer just DevOps. The key insight is that authority sits where responsibility sits. And that can mean shared authority and shared responsibility where it makes sense. But the roles really do matter at stages 2 and 4, because that's where specialised expertise genuinely exercises its authority. This layout is the surgical redesign of an organisation. And the problem it solves is the "out of sight, out of mind" problem. The further teams are from the customer signal, the weaker that signal gets. If you're a developer and your world view is limited to a Kanban board on Jira or Linear, and you're just picking things up in a sprint being told what to work on with no context of why, you just drone on and deliver with decreased signal and this broken-telephone phenomenon. [CLICK] But the whole point of this structure is to build a continuous loop. To collapse the thousand handoffs into one single, fluid, well-matched interface between discovery and delivery. And critically, any tooling you bring in must have full organisational context. A tool that only sees one stage is a tool that deepens silos rather than bridging them. Which is why post-release monitoring is so critical, it must flow organically and continuously back into discovery. This is what allows organisations to enjoy a unified view, and what enables full power transfer across internal teams.
V
What We Wish We'd Known
Six months ago
So with that, let me close with the practical takeaways.
Act V
Rules for the new era
Eliminate handoffs before accelerating keystrokes
Speed at the bottleneck is the only speed that matters
Match impedance, don't just add headcount
Redesign the interface between teams, not the teams themselves
Governance is an accelerant, not a brake
Companies seeing 25–30% real gains pair AI with process transformation
Keep the customer signal alive to deployment
"Out of sight, out of mind" is the disease. The full loop is the cure.
Tools must be organisation-wide or they deepen silos
Full organisational context. Everyone participates, authority sits where responsibility sits.
[CLICK] One: eliminate handoffs before accelerating keystrokes. Restructure organisations such that the true bottlenecks are unblocked. The way to see it is that speed at the bottleneck is the only speed that matters, and if your bottleneck is between teams, not within them, no coding tool on earth will fix it. [CLICK] Two: match organisational impedance. Don't think of just adding headcount. This is about redesigning the interfaces between teams, not the teams themselves. And remember, matching doesn't mean making both sides identical or responsible for exactly the same thing. It means designing the boundaries such that maximum signal transfers across them. [CLICK] Three: governance is an accelerant, not a brake. The companies seeing real gains are the ones pairing AI tools with process transformation. Governance is not red tape. It's the adapter that makes the power socket work. [CLICK] Four: keep the customer signal alive all the way to deployment, at all costs. Solve this "out of sight, out of mind" disease. The full loop, from discovery through delivery and back again, this is the cure. [CLICK] And five: tools must be organisation-wide or they deepen silos. If an AI coding tool only sees one slice of the organisational pipeline, it fragments the organisation further instead of unifying it. Full organisational context. Everyone participates. Authority sits where responsibility sits.
The tools are necessary. But they are not sufficient .
The hard part was never generating the output. It's governing the people and processes around it.
We have our electricity. It's time to redesign our factories.
So let me bring it all back together. The tools are necessary. We're not going back, these tools are here to stay. But they are not sufficient. Because the bottleneck was never generating the output. It's governing the people and processes around it. [CLICK] Remember those factories from the 1890s. They had the best technology available to them. Some tried to bolt it on to the old factory and waited for results. And it took 30 years. But the organisations that won weren't the ones with the best motors. They were the ones that redesigned the factory around the motor. We now have our electricity. Every one of us in this room has access to extraordinary AI tools. The question is whether we'll spend the next decade bolting those tools into the same broken structures we already work in, or whether organisations will do the harder, more important work of redesigning how they actually function. That's the problem we're building Colabyr to solve.
"...and I reckon the best way to test whether any of this holds up is to open it to the room. So, who here's worked in product discovery, delivery, management or development? And what's broken in your world?"
Then just look at someone. Pick a face. The silence will be about three seconds, max. If nobody bites immediately, you can prime the pump with something like:
"I'll start with one I get a lot: 'Milad, this is great in theory, but my org has 400 people and we can't just redesign the factory overnight.' Fair. So let me address that..."
Q&A
Let's dig in.
Connect
Dr Milad Dakka · Colabyr · colabyr.ai
"Anyone else got a question, a pushback, a war story?"