Relling

Hybrid robotic automation. Built inside a factory we own. Open to strategic partners during the Fuselage bring-up window.
Working draft · v0.5 · April 2026

1 · We've heard this before

If you have spent any real time on a production floor, you have sat through a meeting like this one. PLCs were going to fix it. ERP was going to fix it. MES was going to fix it. The integrators with the six-month deployment slides were going to fix it. Each wave arrived with a payback story. Each wave left with a longer payback than the story said. Capable people lost a lot of nights and a lot of capex to that pattern. They have every reason to be skeptical of the next person walking through their door.

We are walking through your door anyway. We want to be clear about the part that is different and the part that isn't.

What isn't different: this is hard, and most of why it's hard has nothing to do with model quality. It has to do with dirty data, legacy systems, codes that mean one thing in one warehouse and something else in the next, the operator who has been on this line for fifteen years and knows it better than the engineer who designed it, the safety officer who has earned the right to be skeptical, the customer who is calling about a delivery date that has already slipped, and a team whose primary job is to deliver the goods, not to babysit somebody else's tech experiment.

What is different: we are not asking you to host us. We bought a factory. We are running it. We hit the same problems on our own floor, with our own customers, against our own deadlines, and we solve them there before we ever ask you to put one of our cells next to your line.

This whitepaper is for the operator who has been through the cycle. The first three sections are for the part of the meeting where you decide whether we are worth the next forty minutes. They lay out the labor pressure that brought you here, the thirty-year track record of factory tech that did not deliver against its payback story, and the OEE framework you read every morning. None of it should surprise you. If any of it does, walk us out.

2 · The labor pressure that brought you here

The reason a 2026 capex meeting is even discussing flexible automation, instead of postponing it for another two years, is that the labor situation has not eased and is structurally unlikely to. Open manufacturing job postings settled above the pre-2020 baseline and stayed there. Time-to-fill for skilled production roles has roughly doubled. Voluntary turnover among production-line workers sits in the mid-twenties percent annually. Wage inflation outran the rest of the economy through 2022 and remained elevated through 2025. The boomer retirement wave is real and arriving, not a slide-deck claim.

None of these numbers are predictive on their own. They are the operating environment everyone in the room already knows. We list them because manufacturers we have talked to have spent the past four years explaining this to suppliers, integrators, and consultants who showed up unaware of it. We are aware of it. The four-panel chart below is what the picture looks like, indexed against a 2019 baseline where useful.

Fig. 1 · U.S. manufacturing labor pressure, 2020–2026
Four indicators · indexed against 2019 baseline where useful · Sources: BLS JOLTS · NAM workforce surveys · Deloitte/MI
[Four panels: A · open postings, mfg (monthly, thousands; peak ~850K, 2026 ~620K) · B · time-to-fill, skilled roles (median days: 62 in 2020, 98 in 2023, 112 in 2026) · C · voluntary turnover, production line (% of workforce/yr; peak ~31%, 2026 ~26%) · D · wage inflation, mfg (% YoY; 2022 peak ~7.2%)]

Fig. 1 · The labor pressure that drives U.S. manufacturers toward automation has not eased. Open postings settled above pre-2020 baseline, time-to-fill skilled roles nearly doubled, voluntary turnover sits in the mid-twenties percent, wage inflation outran the rest of the economy through 2022 and stayed elevated. The headline numbers are not the whole story — the part that matters is that nobody walks into a 2026 capex meeting believing the labor problem solves itself.

3 · Thirty years of factory tech that didn't deliver

The same operator who has lived through tight labor for the past four years has also lived through seven waves of factory technology that arrived with a payback story and left with a longer one. PLCs are the exception, not the pattern. The two ranges below — industry investment and ROI delivered against the original payback story — are how we read the last thirty years.

Fig. 2 · Thirty years of factory tech: investment vs. ROI delivered
Selected industry waves · 1995 → 2025 · Industry investment vs. ROI delivered against the original payback story
[Paired low/moderate/high ratings of industry investment vs. ROI delivered, per wave: PLCs (1985–), ERP (1995–), MES (2000–), IoT / Industry 4.0 (2010–), predictive maintenance (2015–), cobots (2014–), vision/ML (2018–), foundation-model robotics (2024–)]

Fig. 2 · Every wave of factory tech arrived with a payback story. PLCs delivered. The next thirty years are a more complicated picture. ERP rollouts ran years long and over budget; MES adoption fragmented across sites that never agreed on a system of record; IoT generated dashboards but few hard ROI cases; predictive-maintenance ML pilots rarely cleared the threshold to fleet-wide deployment; cobots delivered for some labor-tight tasks and left others alone; vision/ML still has more demos than deployments.

Foundation-model robotics is the latest wave. It is being pitched against an operator audience that has been told the same thing every five years since 1995. We are aware that we are walking into the same skepticism the last seven walked into. The right way to address that is not louder marketing. It is to show our work and to be honest about what we cannot yet do (§16, §17).

4 · Where the OEE actually goes

Every plant manager reads OEE the same way: availability times performance times quality. The Six Big Losses framework decomposes those three into the categories that actually drive the number on the dashboard. We are not going to explain it to you. We are showing it because the rest of this whitepaper rests on it. Robotic deployments today sit in a 30–50% OEE band that would not pass internal capex review on a classical-automation project. That is the gap.
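As arithmetic, the decomposition is one multiplication. A minimal sketch, using example loss values inside the typical bands quoted in this section — the specific numbers are illustrative, not measurements from any particular line:

```python
# Illustrative OEE calculation using the Six Big Losses decomposition.
# Loss percentages are example values in the typical ranges quoted in
# the text, not measurements from a specific line.

def oee(availability: float, performance: float, quality: float) -> float:
    """OEE = Availability x Performance x Quality."""
    return availability * performance * quality

# A typical classical-automation line.
typical = oee(0.85, 0.90, 0.99)        # ~0.76, the 60-75% industry band

# A learned-policy cell near the bottom of today's band: availability
# collapses under long MTTR and specialist queues.
robotic_today = oee(0.45, 0.75, 0.92)  # ~0.31, the 30-50% band

print(f"typical industry OEE: {typical:.1%}")
print(f"robotic deployment today: {robotic_today:.1%}")
```

The point the rest of the paper turns on: in the robotic band the damage is concentrated in the availability term, which is where MTTR and specialist queues land.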

Fig. 3 · The Six Big Losses · classical OEE decomposition
Nakajima · TPM framework · 1988 · OEE = Availability × Performance × Quality
[Waterfall from scheduled time (100%): availability ≈ 85% typical (equipment failures ~5–10% · setup & changeover ~5–15%) → performance ≈ 90% typical (idling & minor stops ~5–10% · reduced speed below design takt ~3–8%) → quality ≈ 99% typical (defects & rework ~1–3% · startup yield losses ~1–3%) → effective OEE = A × P × Q · typical industry 60–75% · world class 85%+ · robotic deployment today 30–50%]

Fig. 3 · The Six Big Losses framework is the language every plant manager uses. We are not going to tell you what OEE means. We are telling you that we read the same dashboards you do, and that the deployment layer we are building is targeted at the band that would currently fail your internal capex review.

5 · What it costs you when somebody else's tech runs your floor

There are two ways automation lands on a floor. One way: the operator gets more visibility into the operation, more control over the line, more ability to fix things when they break, more leverage when negotiating the next round of capex. The other way: the operator removes a job, inherits a black box, and the floor's throughput now depends on a vendor's roadmap, a vendor's support team, and a vendor's commercial position. Several large, well-respected automation deployments arrived in the second mode. The operators who hosted them have been quiet about how that worked out. The ones who will talk are emphatic that they will not do it again.

We are designing for the first mode. Concretely:

  • Telemetry stays yours. The data the cells produce is on your storage, in formats you control, with retention rules you set. We get the slice we need to support the cell, on terms we agree on per deployment.
  • Diagnosis runs on your bench. The model interrogation surface is built for plant technicians, not for ML engineers. When the policy drifts, the technician reads the intermediate state through tooling that ships with the cell, sees the proximate cause of the drift, and applies the recommended action without escalating.
  • Adjustment without retraining. The steering-vector library lets a technician correct behavior at runtime in minutes. No retraining cycles, no waiting on a model release, no service ticket.
  • No black-box dependency. Every cell ships with the dossier, the runbook, the configuration, and the integration spec that runs it. You can rip the cell out without a contract penalty, and the rest of your line keeps running.
  • No roadmap leverage. Our commercial terms are designed so the only reason you keep working with us is that the cells continue to deliver. Switching cost is real because cells are real. Lock-in is not the strategy.

6 · Hybrid system architecture

What we ship is not a single end-to-end policy. It is a hybrid: classical industrial backbone, plus learned primitives applied where adaptability is required, plus an independent 3D safety monitor running in parallel. The backbone is what your existing automation already speaks. The task scheduler is a finite state machine your maintenance team can read. The safety monitor is independent of all of it, which is what enables a clean safety case.
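A sketch of what "a finite state machine your maintenance team can read" means in practice. The state names, events, and primitive interface below are illustrative, not Relling's actual scheduler API; the point is that transitions are an explicit table and the branching is on success signals, not on model internals:

```python
# Minimal sketch of the task-scheduler idea: an explicit, readable FSM
# that calls primitives and branches on their success signals. States,
# events, and the primitive interface are illustrative placeholders.

from typing import Callable, Dict, Tuple

Primitive = Callable[[], str]  # a primitive returns an event string

def make_fsm(transitions: Dict[Tuple[str, str], str],
             primitives: Dict[str, Primitive]):
    def run(state: str = "PICK", max_steps: int = 20):
        trace = [state]
        for _ in range(max_steps):
            if state in ("DONE", "FAULT"):
                break
            event = primitives[state]()          # call the primitive
            state = transitions[(state, event)]  # deterministic transition
            trace.append(state)
        return trace
    return run

transitions = {
    ("PICK", "ok"): "INSERT",
    ("PICK", "fail"): "FAULT",
    ("INSERT", "ok"): "DONE",
    ("INSERT", "stall"): "RETRY",  # load-cell stall -> deterministic fallback
    ("RETRY", "ok"): "INSERT",     # retract, then re-enter insertion
    ("RETRY", "fail"): "FAULT",
}

calls = iter(["ok", "stall", "ok", "ok"])  # scripted success signals for demo
primitives = {s: (lambda: next(calls)) for s in ("PICK", "INSERT", "RETRY")}

trace = make_fsm(transitions, primitives)()
print(trace)  # -> ['PICK', 'INSERT', 'RETRY', 'INSERT', 'DONE']
```

Debugging this is reading a table and a trace, which is the claim: no ML training required.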

Fig. 4 · Hybrid system architecture
Software stack · per cell · Three layers: classical backbone · learned primitives · independent safety
[Stack: your existing control plane (PLC · MES · SCADA over EtherNet/IP · PROFINET · OPC UA · Modbus TCP; your code stays, no rip-and-replace) → task scheduler, a finite state machine with explicit state transitions, success signals, and deterministic fallback, debuggable without ML training → callable primitives with structured I/O and explicit termination: classical pre-taught motions (structured subtasks) · learned visual servoing (kinematic alignment) · learned imitation learning (contact-rich, deformable) · force-feedback retry fallback (retract 2.5–4 mm, retry). Alongside: 3D safety monitor, always on, independent of the policy — neural occupancy prediction (raw point clouds → collidable-area mask) · PFL slow-down zone speed scheduling per ISO/TS 15066 · protective stop within one cycle on incursion · audited dossier ships with the cell, so your safety officer signs off in days, not weeks]

Fig. 4 · Three layers, stacked. Your existing control plane on top — we plug in over interfaces it already speaks. The task scheduler is a finite state machine your maintenance team can read. Primitives are callable modules with explicit success signals; deterministic fallback handles the most common failure mode. The 3D safety monitor runs alongside, independent of the policy and the scheduler, so an audit can sign off the safety case without inheriting the model's internals.

7 · How the learned primitives work

We use two complementary primitive types. Visual servoing handles kinematic alignment subtasks: pick a part from an unstructured fixture, approach a tool, locate a target hole. Wrist-camera RGB → mask tracker → transformer policy → end-effector relative pose. Iterate at each control step until the action magnitude approaches zero. Imitation learning handles contact-rich and deformable subtasks: insertion against a 0.3 mm clearance, soldering against a moving target, stacking a flexible electrode without inducing a wrinkle. Stereo RGB plus proprioception → mask predictor → transformer policy → pose plus an explicit success probability. Episode terminates when SP > 0.95.
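The visual-servoing termination rule can be sketched in a few lines. Here the transformer policy is replaced by a proportional pull toward the target — an assumption for illustration only; the structure of the loop (predict a relative action, apply it, stop when the action magnitude approaches zero) is the point:

```python
# Sketch of the visual-servoing termination logic: iterate relative-pose
# corrections until the predicted action magnitude approaches zero. The
# "policy" here is a stand-in for wrist-cam RGB -> mask tracker ->
# transformer; we fake it with a proportional pull toward the target.

import math

def servo(start, target, gain=0.5, eps=1e-3, max_steps=100):
    pose = list(start)
    for step in range(max_steps):
        # "Policy": predicted end-effector-relative action toward the target.
        action = [gain * (t - p) for p, t in zip(pose, target)]
        if math.sqrt(sum(a * a for a in action)) < eps:  # action -> 0: done
            return pose, step
        pose = [p + a for p, a in zip(pose, action)]
    return pose, max_steps

pose, steps = servo(start=(0.0, 0.0, 0.10), target=(0.02, -0.01, 0.0))
print(steps, [round(p, 4) for p in pose])
```

The termination condition — "the action the policy wants to take is now negligible" — is what gives the scheduler an explicit success signal rather than a timeout.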

Neither controller uses 3D depth at inference. RGB-D cameras provide stereo capture during data collection; production controllers run on RGB only. This is a deliberate cost decision. High-precision industrial 3D cameras are expensive, prone to noise on thin or reflective surfaces (cables, holes, solder pads), and add a calibration step every time the cell is serviced.

When an imitation-learning insertion stalls, the load cell underneath the workpiece detects the excessive load before the policy does. The cell retracts the end-effector by a small randomly-sampled distance (2.5–4 mm) and retries. This is the most common recovery path on a real production line, and it is intentionally deterministic — it does not consult the policy, because the policy is exactly the thing currently confused.
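That fallback can be written down as a fixed rule. The force threshold and attempt limit below are illustrative values, not Relling's tuning; what matters is that the rule is deterministic and auditable, and never consults the policy:

```python
# Sketch of the deterministic retract-and-retry fallback: when the load
# cell under the workpiece reports excessive force during an insertion,
# retract by a small randomly sampled distance (2.5-4 mm) and retry.
# Threshold and attempt limit are illustrative, not production tuning.

import random

RETRACT_MIN_MM, RETRACT_MAX_MM = 2.5, 4.0
FORCE_LIMIT_N = 15.0  # example threshold; the real value is cell-specific
MAX_ATTEMPTS = 5

def insert_with_fallback(try_insert, read_force, attempts=MAX_ATTEMPTS):
    """try_insert() runs the learned primitive; read_force() reads the load
    cell. The fallback never consults the policy: it is a fixed rule."""
    for attempt in range(1, attempts + 1):
        if try_insert() and read_force() < FORCE_LIMIT_N:
            return ("success", attempt)
        # Stalled: retract by a small randomly sampled distance, then retry.
        retract_mm = random.uniform(RETRACT_MIN_MM, RETRACT_MAX_MM)
        assert RETRACT_MIN_MM <= retract_mm <= RETRACT_MAX_MM
    return ("escalate", attempts)

# Demo: the first two tries stall, the third seats cleanly.
outcomes = iter([False, False, True])
result = insert_with_fallback(lambda: next(outcomes), lambda: 6.0)
print(result)  # -> ('success', 3)
```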

Fig. 5 · How the learned primitives work
Two complementary controllers · RGB only at inference · Both expose explicit success signals to the scheduler
[A · visual servoing (kinematic alignment, RGB-only): wrist-camera RGB → mask tracker (target isolated) → transformer policy (pose prediction) → end-effector pose; iterate until action → 0. B · imitation learning (contact-rich and deformable, stereo RGB + proprioception): stereo RGB + state → mask predictor (ROI crops, centers) → transformer policy → pose + success probability; SP > 0.95 → episode end; load cell triggers retract 2.5–4 mm and retry]

Fig. 5 · Two complementary controllers. Visual servoing iterates end-effector pose against a tracked target until the action magnitude drops to zero — used for kinematic alignment under structured RGB. Imitation learning emits an end-effector pose plus an explicit success probability — used for contact-rich and deformable subtasks. Both run on RGB only at inference. The deterministic retract-and-retry fallback, triggered by load-cell force feedback, handles the most common failure mode without consulting the policy.

8 · How the safety monitor works

The safety case is the gating constraint on whether your risk-management team will sign off. The hard part is that ISO 10218 and ISO/TS 15066 were written for behavior fully specified in advance through deterministic programming. Learned policies break that assumption. Our approach is to make the safety monitor structurally independent of the policy. Three concentric zones around the robot, each enforced by a separate signal path. Nominal speed in the outer zone. Throttled below PFL-compliant limits in the middle slow-down zone. Protective stop on incursion into the inner stop zone. The neural occupancy predictor that drives them takes raw 3D point clouds and outputs a collidable-area mask, refreshed at sensor rate. The signal path from sensor to actuator does not pass through the task scheduler or the policy.
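The zone logic itself is small enough to audit by eye, which is part of the safety case. A sketch with illustrative radii and speed cap — the real values come out of the ISO/TS 15066 risk assessment for the specific cell:

```python
# Sketch of zone-based speed scheduling: map the nearest detected human
# distance to a commanded speed fraction. Zone radii and the PFL speed
# cap are illustrative; real limits come from the cell's risk assessment.

STOP_RADIUS_M = 0.5        # protective stop on incursion
SLOWDOWN_RADIUS_M = 1.5    # PFL-compliant throttled speed
PFL_SPEED_FRACTION = 0.25  # fraction of nominal inside the slow-down zone

def commanded_speed(nearest_human_m: float) -> float:
    """Return speed as a fraction of nominal, given the nearest detected
    person. This path is independent of the scheduler and the policy."""
    if nearest_human_m <= STOP_RADIUS_M:
        return 0.0                 # protective stop
    if nearest_human_m <= SLOWDOWN_RADIUS_M:
        return PFL_SPEED_FRACTION  # throttled below the PFL limit
    return 1.0                     # nominal speed, outer zone

assert commanded_speed(0.3) == 0.0
assert commanded_speed(1.0) == 0.25
assert commanded_speed(3.0) == 1.0
```

Because the function depends only on the occupancy mask's output, validating it does not require characterizing the learned policy at all — which is the structural independence the section describes.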

This is what enables fence-less operation: workers can approach the cell to load materials or remove finished units without stopping the line. The robot slows down or pauses on detection and resumes autonomously when the worker steps back. In published continuous-run deployments, this happens roughly every 10–20 minutes throughout a five-hour run, without operator escalation, while the operator performs downstream electrical QC on completed units in parallel.

Fig. 6 · Safety zones, top-down
Three concentric zones · ISO 10218 · ISO/TS 15066 · Power Force Limiting on slow-down · protective stop on stop-zone incursion
[Top-down view: robot, operator, 3D lidar · stop zone (protective stop on incursion) · slow-down zone (PFL-compliant, throttled below limit) · nominal zone (full speed, no obstacle) · fence-less operation, no physical barrier, validated under ISO 10218 / ISO/TS 15066 acceptance tests · reaction time: slow-down within one sensor frame, protective stop within one control cycle]

Fig. 6 · The 3D safety monitor runs independently of the policy. It evaluates raw point-cloud data through a neural network that predicts the collidable area in real time, comparing operator presence against three concentric zones around the robot. Slow-down speeds are calibrated below the conservative limits set by Power Force Limiting under ISO/TS 15066. The signal path from sensor to actuator does not pass through the task scheduler or the policy, which is what enables a clean safety case.

9 · The four problems integrators never solved

The reason 30 years of robotic deployment in American factories has produced almost no reusable infrastructure for deployment itself is structural, not accidental. The integrator's revenue model is built on engineer-hours, which means every hour an integrator spends building a tool that would reduce future engineer-hours is an hour of revenue the integrator chose to forgo. The four problems below are the result.

  1. ROI never closes inside a year. A representative cell today runs $500K to $2M all-in, with the engineer-hours block dominating the stack. The finance team cannot defend the payback through their standard capex framework. Narrative is required, and that always raises the bar.
  2. Cell takes 6–12 months to bring online. The dependency chain on the customer's floor is serial, circular, and binding. Mechanical integration cannot finish until layout locks. Layout cannot lock until perception is calibrated against actual lighting and surfaces. Perception cannot calibrate until the cell is physically present. Adding people does not compress the timeline.
  3. When the cell drifts, no one on the floor can fix it. A learned-policy failure is encoded in weights, presents downstream of the originating distribution shift, and cannot be diagnosed with the deterministic backward-chain workflows plant technicians have spent fifty years developing. Current deployments run a 24–72 hour MTTR per failure event, and steady-state OEE collapses to 30–50% as a consequence.
  4. Safety bring-up is custom every time. ISO 10218 and ISO/TS 15066 were written for behavior fully specified in advance through deterministic programming. Learned policies break that assumption. The field has not converged on a reusable framework, so safety today is performed as ad hoc behavioral characterization, augmented by additional risk-mitigation hardware, taking 4–8 weeks of expert time per cell.
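To make problem 1 concrete with simple payback arithmetic. The labor-savings figure below is an assumption for the example, not a claim from the deployments discussed here; the cost ranges are the ones quoted above:

```python
# Why the math doesn't close today: simple payback on illustrative
# numbers. The $120K/yr savings figure is an assumed fully loaded
# operator-equivalent; cost ranges are the ones quoted in the text.

def payback_months(all_in_cost: float, annual_savings: float) -> float:
    return 12.0 * all_in_cost / annual_savings

savings = 120_000.0  # assumed annual labor savings, one role

low = payback_months(500_000, savings)     # 50 months at the low end
high = payback_months(2_000_000, savings)  # 200 months at the high end

# Target case: a <$200K cell covering two operator-equivalents.
target = payback_months(200_000, 2 * savings)  # 10 months

print(f"today: {low:.0f}-{high:.0f} months; target case: {target:.0f} months")
```

Even the cheapest end of today's range is four times past a 12-month bar, which is why the capex conversation requires narrative rather than a spreadsheet.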
Fig. 7 · What success looks like: slope chart
Deployment metrics, industry today vs. Fuselage target · Slope = magnitude of improvement
[Four panels of today → target pairs, tabulated in full in Table 1: i. the math closes (payback 36+ mo → 12 mo · IRR on cell capex 10–15% → 30%+ · deployment cost $500K–$2M → <$200K · 5-year TCO vs. human: higher → 40–60% lower) · ii. the cell comes up fast (PO → first unit 6–12 mo → <30 days · specialist hours 500–1,500 → <100 · safety bring-up 4–8 wk → <5 days · retask weeks → minutes) · iii. stays up without specialists (OEE 30–50% → 85%+ · MTTR 24–72 hr → <2 hr · plant-tech resolution <20% → 90%+ · interventions/shift 5–15 → <1) · iv. safety story is shippable (certification 4–8 wk → <5 days · framework rebuilt every cell → productized · incident rate variable → at or below ISO 10218 baseline)]

Fig. 7 · Each row is one deployment metric: industry today on the left, Fuselage target on the right, multiplier between. The numbers are the bar we hold every cell leaving our floor against; baseline ranges are synthesized from public production-deployment data and integrator engagements.

The numbers above are not aspirational. Public production deployments of learning-augmented automation have already demonstrated 99.4% downstream-QC pass rates on multi-hundred-unit continuous runs, less than 20 minutes of real-world demonstration data per task, and cycle time within 13% of human takt under fence-less shared-workspace operation. The bar is set. Most integrators are nowhere near it.

Table 1 — Deployment metrics, detailed

Outcome | Metric | Industry today | Fuselage target
The math closes | Payback period | 36+ months | 12 months
 | IRR on cell capex | 10–15% | 30%+
 | All-in deployment cost per cell | $500K–$2M | <$200K
 | 5-year TCO vs. human equivalent | Higher | 40–60% lower
The cell comes up fast | PO to first production unit | 6–12 months | <30 days
 | Specialist engineer-hours per cell | 500–1,500 | <100
 | Safety bring-up time | 4–8 weeks | <5 days
 | Time to retask cell to a new task | Weeks of re-engineering; often impossible | Minutes to hours; no retraining
Stays up without specialist intervention | Overall Equipment Effectiveness (OEE) | 30–50% | 85%+
 | Mean time to repair (MTTR) | 24–72 hours | <2 hours
 | Plant-technician resolution rate | <20% | 90%+
 | Specialist interventions per shift | 5–15 | <1
The safety story is shippable | Certification time per new cell | 4–8 weeks | <5 days
 | Reusability of safety framework | None; rebuilt every cell | Productized
 | Incident rate vs. ISO 10218 baseline | Variable, often worse | At or below baseline
Fig. 8 · Capability gap: what doesn't exist today
Six capabilities the integrator stack does not produce · Each is a teabag-string problem

  • Failure attribution by plant technicians. Today: opaque without ML expertise. Fuselage: natural-language attribution plus a recommended action.
  • Behavioral adjustment without retraining. Today: days to weeks of retraining cycles. Fuselage: steering vectors applied at runtime in minutes.
  • Cross-cell primitive transfer. Today: each cell is bespoke. Fuselage: primitives accumulated at one cell deploy across the fleet.
  • Heterogeneous fleet coordination. Today: requires identical embodiment and shared programming. Fuselage: different vendors, different policies, single layer.
  • Edge-deployable frontier-quality models. Today: production lags the frontier by 1–2 generations. Fuselage: distilled models retain frontier behavior on edge compute.
  • Deployment data flywheel. Today: doesn't exist at scale; most operators lack the volume or the infrastructure. Fuselage: every operational hour produces data that improves the next deployment.

Fig. 8 · Six capabilities the integrator stack does not produce as productized infrastructure. Each looks small in isolation. The cumulative absence is why robotic deployment is artisanal and why the integrator market sits at ~$30B without ever shipping any of these as buyable components.

10 · Where the 168-hour week goes, on a learned-policy cell

The Six Big Losses framework (§4) is the right vocabulary. The numbers are different on a learned-policy cell. In a 168-hour week of a learned-policy cell at 30% OEE, roughly 50 hours produce parts and 118 hours do not. The 118 hours break into specific blocks, each driven by a specific gap in the deployment layer. Closing those blocks is what moves OEE from the 30–50% band to the 85% band.
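The decomposition is plain arithmetic; the block names and hours below are the ones in Fig. 9:

```python
# The 168-hour decomposition from Fig. 9 as arithmetic. Block names and
# hours are the figure's; "recovered" is the target minus today.

WEEK_HOURS = 168

today = {"running": 50, "no diagnosis path": 18, "specialist queue": 28,
         "slow mode after drift": 22, "quality rework": 14,
         "changeover & retasking": 36}
target = {"running": 143, "no diagnosis path": 2, "specialist queue": 4,
          "slow mode after drift": 6, "quality rework": 5,
          "changeover & retasking": 8}

# Both decompositions must account for every hour of the week.
assert sum(today.values()) == WEEK_HOURS
assert sum(target.values()) == WEEK_HOURS

oee_today = today["running"] / WEEK_HOURS     # ~0.30
oee_target = target["running"] / WEEK_HOURS   # ~0.85
recovered = target["running"] - today["running"]

print(f"{oee_today:.0%} -> {oee_target:.0%} OEE, +{recovered} hrs/week")
```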

Fig. 9 · Where the 168-hour week goes
One week of a deployed cell, 168 hours · Today vs. Fuselage target, with the mechanism that recovers each block
[Stacked bars, 0–168 hrs. Today, ~30% OEE: running 50 · no diagnosis path 18 · specialist queue 28 · slow mode after drift 22 · quality rework 14 · changeover & retasking 36. Fuselage, 85%+ OEE: running 143 · losses 2 + 4 + 6 + 5 + 8 = 25 hrs. How each block compresses: no diagnosis path 18 → 2 hrs (model interrogation surface; technician reads intermediate state) · specialist queue 28 → 4 hrs (90%+ of incidents resolved by plant techs without escalation) · slow mode 22 → 6 hrs (steering-vector library applied at runtime, no retraining) · rework 14 → 5 hrs (observability spine catches drift before it produces bad output) · changeover 36 → 8 hrs (primitive library lets the cell be retasked in minutes, not weeks). Net: +93 productive hours/week, 30% → 85% OEE]

Fig. 9 · OEE decomposition of a 168-hour week. Today (top) and Fuselage target (bottom). Each lost block has a specific cause and a specific recovery mechanism shipped with the cell.

11 · Fuselage, the floor we own

Fuselage is a battery-assembly plant we bought. We did not buy it because we want to be in the assembly business. We bought it because it gives us a production line that runs today, ships to real customers, and has real consequences when it stops running. The pressures of operating a real business at scale (tight margins, uptime that has to hold, SKU changes you didn't plan for, the customer who is unhappy on Tuesday for reasons that have nothing to do with robotics) are the exact same ones that make robot deployment hard everywhere else. You don't get good at solving them from a research lab.

Owning the factory, rather than renting time on someone else's floor, gives us:

  • Day-one revenue and cash flow, not a burn-financed R&D facility.
  • Real customers, real SKUs, real volume pressure. The only environment where the four problems actually bite.
  • A defensible reason the process knowledge can't be replicated: we own the floor rather than renting time on someone else's.
  • Direct research partnerships with the leading private foundation-model labs in robotic manipulation. Cells running on our floor evaluate against the current frontier, not the generation that was production-ready eighteen months ago when an integrator started a project.
  • An observability spine across every cell. Every operating hour produces telemetry that improves the next deployment.
Fig. 10 · Fuselage, top-down
Top-down · battery-assembly plant · 8 production cells · 5 live · 3 deployment R&D
[Floor plan, owned and operated by Relling: 01 cathode coating (live, classical robot) · 02 anode coating (live) · 03 stacking (R&D, learned-policy retrofit) · 04 tab welding (live) · 05 electrolyte fill (live) · 06 formation (R&D) · 07 QC inspection (R&D) · 08 packaging (live) · shared observability, telemetry, safety-monitor, and deployment-layer spine · real customer output. Day one: five live cells pay the bills, three R&D cells run learned-policy retrofits. Year three: live cells migrate to learned-policy one by one as the deployment layer absorbs each operation.]

Fig. 10 · Fuselage is a working battery-assembly plant we own and operate. Five cells run live production today. Three are in deployment R&D, retrofitted with learned-policy controllers under live takt-time pressure. The pressures of operating a real business at scale (tight margins, uptime that has to hold, the SKU mix shifting under your feet) map onto the deployment problems the layer has to solve. The cells are the laboratory. The customers are the deadline.

12 · The deployment cost stack

The unit economics of robotic deployment fail today because the integrator captures most of the value, and the integrator's revenue is built on engineer-hours rather than productized infrastructure. Fuselage breaks that pattern by treating cell bring-up as configuration against stable interfaces, with the integration tooling, data infrastructure, and safety framework built once inside our facility and amortized across every subsequent deployment.
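The amortization argument fits in one function. The one-time build cost and fleet size below are assumptions for illustration; the per-cell component figures are the ones in Fig. 12, with integrator hours and commissioning folded into a single per-cell engineering number:

```python
# The amortization bet: a one-time productized layer spread over N
# deployments vs. engineer-hours re-billed on every cell. The $10M build
# cost and 100-cell fleet are assumptions; per-cell components follow
# the illustrative Fig. 12 figures.

def per_cell_cost(hardware, per_cell_eng, one_time_build=0.0, n_cells=1):
    return hardware + per_cell_eng + one_time_build / n_cells

# Today: hardware $400K + integrator hours ~$700K + commissioning $200K.
today = per_cell_cost(hardware=400_000, per_cell_eng=900_000)

# Fuselage case: cheaper hardware, thin on-site configuration, and the
# assumed one-time layer amortized across an assumed 100 cells.
fuselage = per_cell_cost(hardware=80_000, per_cell_eng=20_000,
                         one_time_build=10_000_000, n_cells=100)

print(f"today: ${today:,.0f} / cell; amortized: ${fuselage:,.0f} / cell")
```

The structural point is the `one_time_build / n_cells` term: integrator economics keep that numerator at zero by re-billing the work, so the per-cell cost never falls with fleet size.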

Fig. 12 · The deployment cost stack collapses from $500K–$2M to <$200K
All-in cost per cell · Integrator hours absorbed by productized assets
[Stacked bars. Today, $500K–$2M range: integrator hours ≈ $700K · hardware ≈ $400K · commissioning ≈ $200K. Fuselage, <$200K: productized layer · hardware · safety, amortized.]

Fig. 12 · The integrator's engineer-hours block is what compresses. Productizing the integration layer once and amortizing across deployments is the bet.

13 · Days, not months

A 6–12 month bring-up is not slow because the work is intellectually difficult. It is slow because the dependency chain is serial and circular, and the work cannot begin until the cell is physically present in the deployment environment. Fuselage breaks the chain by performing most of that work against a real production environment we already own. The handoff to the customer's site is configuration, not engineering against a blank slate.
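The critical-path arithmetic, using the illustrative stage durations from Fig. 11 and taking the low end of the 4–8 week safety range:

```python
# Serial on-site bring-up vs. pre-shipped work done at Fuselage.
# Durations (weeks) are the illustrative Fig. 11 values, with the
# safety stage at the low end of its 4-8 week range.

serial_onsite = {"mechanical integration": 8, "cell layout lock": 8,
                 "perception calibration": 10, "safety bring-up": 4,
                 "acceptance & first unit": 12}

# Today the chain is serial: each stage waits on the one before it,
# so PO -> first unit is the sum, not the max.
today_weeks = sum(serial_onsite.values())  # 42 weeks

# Under Fuselage, layout, perception, and safety are done against our
# own floor before shipment, off the customer's critical path. The
# customer-visible timeline is on-site configuration and acceptance.
onsite_weeks = 4  # < 30 days

print(f"PO -> first unit: {today_weeks} wk serial vs ~{onsite_weeks} wk on-site")
```

This is also why "add more people" fails: parallelizing labor inside one stage does not change a sum over stages that must run in order.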

Fig. 11 · Bring-up timeline · serial today, parallel under Fuselage
Cell bring-up · weeks · 6–12 months → <30 days · The dependency chain is what binds, not labor capacity
[Gantt, weeks 0–52. Customer site today, serial, 6–12 months: mechanical integration 8 wk → cell layout lock 8 wk → perception calibration 10 wk → safety bring-up 4–8 wk → acceptance & first unit 12 wk; PO → first unit ≈ 42 weeks. Fuselage: layout, perception, and safety pre-shipped from our floor; on-site configuration & acceptance <30 days; PO → first unit <30 days.]

Fig. 11 · The serial dependency chain on the customer site is what binds bring-up timelines, not labor capacity. Each task on the top row waits for the previous one to complete before it can start: mechanical integration → cell layout lock → perception calibration → safety bring-up → acceptance and first unit. Adding people on site does not compress the chain. Moving most of that work to Fuselage, where it runs in parallel against a real production line we own, is what compresses it. The customer site sees configuration against a stable interface, not engineering against a blank slate.

14 · Safety: ships with the cell

What shippable safety looks like in practice: the cell arrives at your site with a complete certification dossier already assembled, characterized against thousands of operating hours of identical-architecture cells running inside our facility. A runtime monitor, validated against the failure modes the architecture is known to exhibit, runs alongside the cell. An incident-response runbook your risk-management team can adopt as written. Your safety engineer reviews, runs the standard acceptance tests, and signs off. They do not perform a four-week behavioral characterization study because that study has already been performed.

Fig. 13 · Safety dossier: once, vs. every cell
ISO 10218 / ISO/TS 15066 certification · Specialist hours per cell, by approach
[Paired bars, cells 1–5, against a 5-day (40-hr) target: today ≈ 155–162 specialist hours per cell, bespoke characterization repeated every time · Fuselage ≈ 24–30 hours of dossier review and sign-off, declining across cells.]

Fig. 13 · A reusable framework — dossier template, validated runtime monitor, incident runbook — characterized against thousands of operating hours of identical-architecture cells, collapses each new cell from 4–8 weeks of bespoke behavioral characterization to <5 days of acceptance review. The customer's safety engineer reviews the dossier, runs the standard acceptance tests, and signs off.

15 · Cost, quality, dependability, flexibility

Manufacturing strategy organizes itself around four dimensions any factory has to trade against: cost, quality, dependability, flexibility. A factory cannot maximize all four at once, and the choice of which to optimize defines what kind of factory it is. Classical industrial automation made its choice fifty years ago by optimizing cost and dependability, accepting high quality as a cost of admission, and treating flexibility as a luxury. Markets have changed. Product cycles have collapsed. Customer specifications change inside the calendar year. Flexibility is now the primary axis. The other three have to follow.

Fig. 15 · Four manufacturing dimensions: what classical automation gave up
Skinner / Hayes & Wheelwright · cost · quality · dependability · flexibility · Classical automation vs. learned-policy cell
[Radar chart over the four dimensions (0–1 scale): classical automation scores high on cost and dependability and sacrifices flexibility; the learned-policy cell targets all four.]

Fig. 15Manufacturing strategy organizes around four trade-offs: cost, quality, dependability, flexibility (Skinner; Hayes & Wheelwright). Classical industrial automation chose cost and dependability fifty years ago and treated flexibility as a luxury. Markets changed. Product cycles collapsed. Flexibility is now the primary axis. The other three have to follow.

16Throughput, full-shift projection

Cycle-time analysis at the per-unit level can be misleading on a real production line. A cell that is 13% slower than human nominal takt is not 13% behind a human worker over an eight-hour shift, because the human worker has mandatory breaks and the robot does not. Reference deployments have run a 159 s nominal cycle against a 141 s human takt, projected over a 50-min/10-min schedule, and the robot-alone line crosses the human line near the one-hour mark and stays ahead through the rest of the shift.

The same effect shows up in P20–P80 spread. Human cycle times exhibit higher variance than robot cycle times across a shift, driven by occasional long-tail delays from interruptions, error recovery, and operator fatigue. Extrapolating from the mean takt time alone underrepresents the variability that accumulates over an extended shift.
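The projection is simple enough to check with a few lines of arithmetic. The sketch below uses the reference numbers from this section (141 s human takt, 159 s robot nominal cycle, a 50/10 work-break schedule, an eight-hour shift); it is an illustration of the timing model, not deployment telemetry:

```python
# Full-shift throughput projection: human on a 50/10 work-break schedule at
# 141 s takt vs. a robot running continuously at a 159 s nominal cycle.
# Numbers are the reference-deployment figures quoted in this section.

HUMAN_TAKT_S = 141.0
ROBOT_CYCLE_S = 159.0
WORK_S, BREAK_S = 50 * 60, 10 * 60   # 50-min work / 10-min break blocks
SHIFT_S = 8 * 3600                   # eight-hour shift

def human_units(t_s):
    """Cumulative human output at time t under the 50/10 schedule."""
    block = WORK_S + BREAK_S
    full_blocks, rem = divmod(t_s, block)
    worked = full_blocks * WORK_S + min(rem, WORK_S)  # seconds actually worked
    return worked / HUMAN_TAKT_S

def robot_units(t_s):
    """Cumulative robot output: no breaks, constant cycle."""
    return t_s / ROBOT_CYCLE_S

# First second at which the robot's cumulative output passes the human's.
crossover = next(t for t in range(1, SHIFT_S + 1)
                 if robot_units(t) > human_units(t))
print(f"robot overtakes at {crossover / 60:.0f} min into the shift")
print(f"end of shift: robot {robot_units(SHIFT_S):.0f} u, "
      f"human {human_units(SHIFT_S):.0f} u")
```

At one-second resolution the crossover lands during the first break, just before the one-hour mark, and the robot-alone line finishes the shift roughly eleven units ahead.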

Fig. 14Cumulative throughput · 8-hour shift projection
Three timing models, projected from continuous-run data · robot alone · robot between humans · human (with mandatory breaks)
[Line chart · cumulative units (0–200) vs. hours into shift (0–8). Human · 141 s takt, 50/10 work-break schedule. Robot alone · 159 s nominal cycle. Robot between humans · includes pauses for material loading. The robot overtakes near the 1-hour mark: despite a 12% slower nominal takt, the break-constrained human schedule reduces effective production time, and the P20–P80 spread on human cycle times is wider than on robot cycle times (long-tail delays from interruptions, error recovery, fatigue).]

Fig. 14Cumulative throughput projection over an eight-hour shift. Three timing models: human at 141 s takt with mandatory 50/10 work-break cycles, robot at 159 s nominal cycle, robot-between-humans effective takt that includes pauses for worker material handling. Despite a slower nominal cycle, the robot-alone line overtakes the human line near the one-hour mark and stays ahead. The collaborative configuration also yields labor-allocation benefits not captured by takt-time analysis: the robot's cycle slack lets the same operator perform downstream electrical QC on completed units in parallel.

Beyond timing projections, the collaborative configuration yields labor-allocation benefits not captured by takt-time analysis. The robot does not require continuous human attention. The operator is needed for periodic material loading and removal, approximately every 10–20 minutes. During collaborative operation, the robot's cycle time provides sufficient slack for the operator to perform downstream electrical QC on completed units in parallel. This parallel work does not show up in throughput metrics, but it can increase overall cell-level productivity by reallocating human effort without reducing robot utilization.

17What we cannot yet do

This section is honest about the envelope. We would rather lose your business at the proposal stage than lose your trust at the production-line stage.

  • We are not faster than your best humans on every task. Reference deployments today run 12–15% slower than skilled human takt time on contact-rich manipulation. The full-shift projection (§16) tells the right story; per-unit takt does not. If your operation is single-unit-takt-dominated and human labor is unconstrained, we are not the right fit.
  • We are not zero-data. Bring-up requires field data collection — minutes per task, not hours, but not zero. If your floor cannot accommodate a small forward-deployed engineering team scoping tasks for a few weeks, the engagement will not work.
  • Some tasks are not appropriate for learned controllers. Highly structured, low-variance, high-repeatability operations are still better served by classical waypoint automation. We will tell you when a task is one of those. The honest answer to "should this be learned" is "no" more often than not.
  • Edge-case failure recovery is not solved. The force-feedback retract-and-retry handles the most common failure mode (stalled insertion) deterministically. Less common failure modes still require remote engineering support during early deployments. We are explicit about this rather than letting a partner discover it at 2 a.m.
  • We do not have a turnkey product. Fuselage is being purchased now. The strategic-partner cohort runs in parallel with bring-up. Standard commercial engagements with productized SLAs come later. If your timeline requires a productized solution today, we are the wrong call. If it requires a real partner who is honest about the trade-offs, we may be the right one.
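For concreteness, the deterministic retract-and-retry behavior described above can be sketched as a plain control loop. Every callable here (insert, read_axial_force, retract, jitter_xy) is a hypothetical stand-in for illustration, not Relling's actual controller API:

```python
# Illustrative-only sketch of a deterministic retract-and-retry recovery loop
# for a stalled insertion. All names are hypothetical stand-ins.

STALL_FORCE_N = 25.0   # axial force above this after an insert => stalled
RETRACT_MM = 5.0       # fixed back-off distance along the insertion axis

def attempt_insertion(insert, read_axial_force, retract, jitter_xy,
                      max_retries=3):
    """Try an insertion; on stall, retract, widen alignment search, retry.

    Deterministic by construction: fixed threshold, fixed retry budget, and
    an explicit escalation path (return False) when the budget is spent.
    Returns (success, attempts_used).
    """
    for attempt in range(1, max_retries + 1):
        insert()
        if read_axial_force() < STALL_FORCE_N:
            return True, attempt              # part seated cleanly
        retract(RETRACT_MM)                   # back off deterministically
        jitter_xy(0.2 * attempt)              # widen alignment offset (mm)
    return False, max_retries                 # escalate to remote support

# Toy harness: a part that stalls once, then seats on the second attempt.
state = {"tries": 0}
ok, attempts = attempt_insertion(
    insert=lambda: state.__setitem__("tries", state["tries"] + 1),
    read_axial_force=lambda: 40.0 if state["tries"] < 2 else 5.0,
    retract=lambda mm: None,
    jitter_xy=lambda mm: None,
)
print(ok, attempts)
```

A production loop would read real force-torque telemetry; the point of the sketch is the shape: bounded retries, fixed thresholds, and an explicit handoff to a human when the budget runs out.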

18Strategic partnership

Fuselage comes online over the next twelve to eighteen months. We are spending that window standing it up. Between now and then, we are working with a small number of strategic partners — three to five mid-market American manufacturers — to build the deployment layer's capabilities against real production conditions on their floors, on terms that reflect what those partners are contributing.

The arrangement is honest about what we are and where we are. We are not a vendor with a productized solution. We are a small team building a deployment layer, with a factory we are buying that is months from operational. The strategic-partner cohort is the period in which our capabilities and your operational reality co-evolve. The partners who help shape what the layer becomes are the ones whose verticals get prioritized first when the layer is mature.

This is not a pilot. Pilots are sales motions dressed up as projects, where the integrator absorbs some risk in exchange for a path to a full deployment the customer is implicitly committed to. We are proposing something different. We are asking you to take the risk of being among the first operators to host this kind of work, and we are pricing that risk into terms that reflect what you are contributing to the work.

What you get on day one

  • A small forward-deployed Relling team scoping three to five candidate tasks on your floor with you, no charge, no commitment to deploy.
  • Joint task analysis and joint evaluation of where learned policies actually fit. If a task isn't ready, we say so (§17).
  • A pilot cell scoped against your line, with a written plan covering safety envelope, integration points, labor impact, and budget. The document goes to your finance team and survives that scrutiny.
  • Direct access to the engineering team that will build the cell. Not a support tier.

What you get over the next two to three years, as Fuselage stands up

  • Cells deployed at terms that favor you: capex offsets, deferred payment tied to demonstrated cell performance on your line, no service-margin markup.
  • Priority on every improvement we ship. The patterns we learn at your facility are the patterns we extend into every cell after.
  • Telemetry and observability infrastructure that gives your ops team visibility into the cell that your existing automation does not.
  • Distilled versions of the manipulation foundation models from our lab partners, deployed on edge compute running at takt time.
  • IP and data terms negotiable in your favor.

The floor we run, viewed from your seat

The same plant shown earlier (§11) is also where the work that ends up on your floor gets stress-tested first, against our customers, our deadlines, our margins.

Fig. 17Fuselage, viewed from a partner seat
Top-down · battery-assembly plant · 8 production cells · 5 live · 3 deployment R&D
Fuselage, battery-assembly plant, owned and operated by Relling:
  01 · LIVE · Cathode coating · classical robot, ships today
  02 · LIVE · Anode coating · classical robot, ships today
  03 · R&D · Stacking · learned-policy retrofit
  04 · LIVE · Tab welding · classical robot, ships today
  05 · LIVE · Electrolyte fill · classical robot, ships today
  06 · R&D · Formation · learned-policy retrofit
  07 · R&D · QC inspection · learned-policy retrofit
  08 · LIVE · Packaging · classical robot, ships today
All cells share the observability spine: telemetry, safety monitor, deployment layer. Day one: five live cells pay the bills with real customer output while three R&D cells run learned-policy retrofits. Year three: live cells migrate to learned-policy one by one as the deployment layer absorbs each operation.

Fig. 17The same floor plan, framed for the strategic partner. Cells, integration tooling, safety dossiers, and observability shipped to your facility were stress-tested here, against our customers, our deadlines, our margins. What arrives at your line is configuration against an interface, not engineering against a blank slate.

Why being a strategic partner now is structurally different

The cohort window is finite. The three to five seats we can fill in 2026 will shape what the deployment layer becomes. The patterns those facilities expose are the patterns that get productized first. By the time Fuselage is fully operational and standard commercial engagements open, the playbook is the product, and it is sold at commercial rates with standard SLAs.

Fig. 16Strategic partner vs. future commercial engagement
Cohort window · 2026 → 2027 · what it costs to be late, by category

Cells & commercial terms
  2027 onward · commercial rates, standard SLAs, service-margin markup
  Strategic · capex offsets, deferred payment tied to demonstrated cell performance on your line, no service-margin markup
Engineering team access
  2027 onward · support tier, tickets, release-cadence escalation
  Strategic · the engineers who built the cell are on your line; your point of contact is the person writing the code
Improvement priority
  2027 onward · releases ship on a published cadence; your verticals are queued behind the strategic cohort
  Strategic · the patterns we learn at your facility are the patterns we extend across every cell after; your task footprint shapes what the deployment layer becomes
Cohort capacity
  2027 onward · open
  Strategic · three to five facility slots, sized to what our forward-deployed engineering team can embed in at once
IP & data terms
  2027 onward · standard
  Strategic · negotiable in your favor; telemetry stays on your storage by default
Vertical priority
  2027 onward · verticals are served in the order the strategic cohort shaped them
  Strategic · if you anchor a vertical, it gets prioritized first as the layer scales

Fig. 16The cohort that helps shape the layer is the cohort whose verticals get prioritized first. We can only embed inside a small number of facilities at once. The math of being a strategic partner during the Fuselage bring-up window is structurally different from being a standard customer in 2027.

Who we are looking for

Mid-market American manufacturers with operations classical automation has been unable to serve, where the product variance is high enough that programmed cells are not viable, where the labor constraints are real enough that the operator has actually thought about automation rather than treated it as a future problem, and where the leadership is willing to host an embedded engineering team for long enough that the work compounds.

What we are asking from you

  • A named internal champion who can vouch for us. A person, not a committee. Roughly one to two hours of their time per week, mostly to unblock our team when we need a door opened.
  • Floor access during scheduled visits, not a permanent residence. Our team comes to you on a cadence we agree on. We do not move in.
  • Operational data on the tasks we mutually pick. SOPs, recorded footage, PLC logs for the lines we are scoping. Not the whole shop.
  • Honest feedback when something we propose doesn't fit your operation. The fastest way to ruin this is for the partner to nod politely while disagreeing internally.

That is the entire ask.

19Values

We started this company with the idea that robots should be abundant. Along the way, we've come to a small set of beliefs every person on the team holds.

  1. Safe deployments, above all. A robot that is unsafe is a liability. Every cell we ship clears the safety bar before it clears any other bar.
  2. Genchi genbutsu (go and see). You cannot solve a factory problem from an office, a lab, or a Zoom call. You solve it on the floor.
  3. Seek truth. We would rather hear the answer we don't want than the answer we do. Disagreement is a gift; flattery is a tax.
  4. Make stuff happen. Vision without execution is decoration. We move whatever needs moving (calendars, budgets, egos, ourselves) until the thing exists in the world.
  5. Festina lente (make haste, slowly). Move with urgency, but never at the cost of doing it right.

20Team

Jai (CEO) has two bootstrapped exits and led post-acquisition work at Clara Labs. Chief Product Officer at Rollup (a16z, Thiel). Engineer at AnySignal.

Anya (CTO) was a Controls Engineer at SpaceX and worked in Data Systems at NASA JPL. Employee #4 at Neros. Canadian National Chess Team.

21References

  1. Goldratt, E. M., & Cox, J. (1984). The Goal: A Process of Ongoing Improvement. North River Press. (Theory of constraints.)
  2. Brynjolfsson, E., Rock, D., & Syverson, C. (2017). Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics. NBER Working Paper No. 24001.
  3. Pisano, G. P., & Shih, W. C. (2009). Restoring American Competitiveness. Harvard Business Review, 87(7–8). (Industrial commons.)
  4. Christensen, C. M. (1997). The Innovator's Dilemma. Harvard Business Review Press.
  5. ISO 10218-1:2011. Robots and robotic devices — Safety requirements for industrial robots — Part 1: Robots. International Organization for Standardization.
  6. ISO/TS 15066:2016. Robots and robotic devices — Collaborative robots. International Organization for Standardization. (Power and Force Limiting.)
  7. Skinner, W. (1969). Manufacturing — Missing Link in Corporate Strategy. Harvard Business Review, 47(3). (Cost / quality / dependability / flexibility framework.)
  8. Hayes, R. H., & Wheelwright, S. C. (1984). Restoring Our Competitive Edge: Competing Through Manufacturing. Wiley.
  9. Nakajima, S. (1988). Introduction to TPM: Total Productive Maintenance. Productivity Press. (Six Big Losses framework.)
  10. U.S. Bureau of Labor Statistics. Job Openings and Labor Turnover Survey (JOLTS), 2020–2026 series. Manufacturing detail.
  11. National Association of Manufacturers. Manufacturers' Outlook Survey, quarterly 2020–2026. Workforce sections.
  12. Deloitte & The Manufacturing Institute. Creating pathways for tomorrow's workforce today: Beyond reskilling in manufacturing. 2024 update.
  13. Farrell, H. Various essays on process knowledge as tacit, diffuse, locally-held competence. Crooked Timber, 2022–2024.