AI right now is a digital phenomenon. Models are writing code, handling customer support, doing legal research, and running real parts of real companies, but almost none of this has made it into the physical world in any meaningful way. A factory floor in 2026 runs more or less the same way it did 30 years ago, and the gap between what software can do and what a robot can do keeps getting wider rather than closing.
I. Classical robots, current production
Today's production environments are filled with thousands of "classical robots": machines narrowly preprogrammed for specific tasks, executing the same operation thousands of times a day with high repeatability and precision. When the task changes (say, applying glue to a 10-inch plate instead of a 5-inch plate), these robots must be manually reprogrammed. Because they cannot flex across tasks, bringing them up in a new environment triggers lengthy deliberation and ROI calculations; even a task that looks automatable may not make economic sense to automate once task volume, bring-up time, and ongoing robot maintenance are accounted for.
Robots that have the ability to learn new tasks from demonstration or instruction, while incredibly impressive from a research perspective, have not yet been brought into the manufacturing setting.
The specific technical and operational challenges inherent to deploying autonomous physical systems in environments where robots and humans co-exist cannot be overstated, as the road to autonomous vehicles made clear.
II. Three categories of deployment company
Robotics deployment companies today have primarily been:
- Capability sellers who sell models or skill-specific robots to someone else who runs the operation. The customer takes the integration risk. Ex: Physical Intelligence, Skild, Dyna, Covariant.
- Deployers who drop robots and service into existing customer facilities. They own the deployment and the customer keeps the shop. They convince the customer that their deployment will save them money. Ex: Formic, Path Robotics, Chef Robotics, Symbotic.
- Vertical operators who own the shop and compete with legacy incumbents on end-product margin. Ex: Hadrian and Senra in software-and-automation; Foundry Robotics, Divergent, and Machina Labs in robotics.
The axis is how far down the stack a company goes toward the end customer — model → deployment → finished good. Each step down absorbs more risk and, as a result, captures more margin.
Fig. I: The further a company sits from the finished good, the less risk it owns — and the less margin it sees. None of the three positions structurally produces a deployment layer.
III. Why robots are not abundant
Robotics research itself is advancing quickly. VLAs are getting better at generalization, sim-to-real is working in ways that would have been hard to imagine three years ago, and there's real evidence that scaling laws hold for robotic actions the same way they do for text.
Every production system has a single binding constraint at any given time, and the theory of constraints tells us that improvements anywhere except the constraint don't move system-wide throughput. For the robotics industry today, the binding constraint is not model quality. It's deployment infrastructure — the integration work, the technician interface, the safety framework, the primitive library, the distillation to edge compute, the data pipeline back to training.
The three existing categories of robotics companies are each structurally unable to build this layer, despite each doing important work.
Capability sellers are commercially incentivized to produce the best model and license it, not to solve deployment for any specific customer. Building full deployment infrastructure would dilute their focus and require them to become services companies, which breaks their licensing model. They are structurally forced to assume deployment will be solved by someone else.
Deployers make their money deploying robots into specific customer facilities and capturing a slice of the labor savings. Their deployment stack is optimized for the tasks they've already productized; extending to new tasks requires custom engineering their unit economics cannot support at low volume. They are structurally forced to optimize for a narrow task envelope.
Vertical operators own operations and compete with legacy incumbents on end-product margin. Their commercial incentive is to optimize unit cost of the end product they sell, not to productize their deployment infrastructure. Their operating knowledge stays inside their own four walls because productizing it would require a separate business that competes with their own operations for attention and capital.
None of them is building the deployment layer itself, because their business models force different choices. The layer has to be built by a company whose primary product is the layer, whose primary revenue eventually comes from serving other operators' deployments, and whose primary operational flywheel is sustained immersion in real production under conditions the company controls.
Broadly, the problems that actually block deployment are operational rather than scientific. The corollary is that if you identify and lift the binding constraint, throughput improves system-wide.
Fig. II: Investments in better models do not lift the binding constraint. Throughput improves only when the constraint does. The deployment layer is the constraint — and it is unowned.
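The constraint logic above can be made concrete with a few lines of arithmetic. A serial pipeline moves only as fast as its slowest stage, so improving a non-constraint stage leaves system throughput unchanged. The stage names and throughput numbers (units/hour) below are hypothetical:

```python
def system_throughput(stages: dict[str, float]) -> float:
    """A serial pipeline moves only as fast as its slowest stage."""
    return min(stages.values())

line = {"model_inference": 120.0, "deployment": 20.0, "hardware": 90.0}

baseline = system_throughput(line)  # bound by "deployment"

# Doubling the non-constraint stage changes nothing system-wide.
line["model_inference"] = 240.0
after_model_improvement = system_throughput(line)

# Lifting the constraint is the only change that moves throughput.
line["deployment"] = 60.0
after_constraint_lifted = system_throughput(line)

print(baseline, after_model_improvement, after_constraint_lifted)
# → 20.0 20.0 60.0
```

This is the whole of the argument in miniature: better models are the 240-unit stage feeding a 20-unit deployment bottleneck.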
IV. What is this deployment layer?
Critical deployment problems have yet to be solved:
- The best models are too big to run on the compute inside the robot, so what actually ships to the floor are the smaller, worse ones that were never the point of the research.
- When a robot fails, the technician has no way to ask it why, because there is no code to read — just weights.
- Setting up a new cell is a bespoke engineering project every time. Safety bring-up takes weeks of expert time, and integrating a robot with the rest of the plant is its own separate custom job that has to be redone for every deployment.
- The ROI on deploying and maintaining robots in someone else's factory doesn't actually close today, even though a lot of people will tell you it does.
The integrator market, for example, is a $30B symptom of deployment being artisanal.
V. Process knowledge
None of these problems are intellectually hard in the way that training a frontier model is hard. Each of them, individually, is the kind of problem a competent engineering team could solve in a few months if they were sitting inside an active production operation, with real technicians, real failure modes, and real integration constraints. The reason they aren't solved is not that the industry lacks talent or capital. It's that solving them requires a form of knowledge that the industry hasn't been set up to accumulate.
The distinction between product knowledge (the blueprints and specifications that describe what a thing is) and process knowledge (the tacit, diffuse, locally-held competence that describes how it actually gets built) matters here. Process knowledge is not a commodity that can be bought and sold. You cannot acquire process knowledge by licensing it, by buying a company that has it, or by hiring one person who has it. You can only build it through sustained operational presence over time.
Every one of the deployment problems listed above is, at root, a process knowledge problem. They will be solved by engineers who have spent years inside real production, watching what actually breaks and building the tools that make it not break. That work hasn't happened yet, not because it is impossible, but because the existing structures of the robotics industry are each structurally unable to produce it from where they sit.
It is useful to frame process knowledge as tacit, diffuse, locally-held knowledge that cannot be transferred through market transactions. It can only be built through sustained operational presence over time.
Fig. III: Product knowledge moves through markets. Process knowledge does not. It is built only by standing on the floor — and that is the only door open to building a deployment layer.
VI. Fuselage
We call our facility Fuselage. We're buying a profitable assembly business and turning it into our learning environment. Buying and owning the factory lets us inherit the following:
- Day-one revenue and cash flow, not a burn-financed R&D facility.
- Real customers, real SKUs, real volume pressure — the only environment where the four problems actually bite.
- A defensible reason the process knowledge can't be replicated: we own the floor rather than renting time on someone else's.
Fuselage exists to solve four problems that the rest of the robotics industry has either failed to address or has tried to solve from the wrong position. Each one is a process knowledge problem in the sense that it is solvable only through sustained operational presence, and structurally impossible to solve from a research lab, a systems integrator's office, or a customer's running production line. Each one is also a problem we cannot solve by writing better code or training better models. They have to be lived through, on a real floor, until the patterns that resolve them emerge from the work itself.
The goal of Fuselage is to solve four foundational problems:
- ROI closes within 12 months.
- The cell is brought online in days, not months.
- The cell stays running without specialist intervention.
- Safety is solved as productized infrastructure.
Fig. IV: Each row is a single deployment metric — industry today on the left, Fuselage target on the right. The red figure between them is the multiplier. The gap is the bet.
| Outcome | Metric | Industry Today | Fuselage Target |
|---|---|---|---|
| The math closes | Payback period | 36+ months | 12 months |
| | IRR on cell capex | 10–15% | 30%+ |
| | All-in deployment cost per cell | $500K–$2M | <$200K |
| | 5-year TCO vs. human equivalent | Higher | 40–60% lower |
| The cell comes up fast | PO to first production unit | 6–12 months | <30 days |
| | Specialist engineer-hours per cell | 500–1,500 | <100 |
| | Safety bring-up time | 4–8 weeks | <5 days |
| | Time to retask cell to a new task | Weeks of re-engineering; often impossible | Minutes to hours; no retraining |
| Stays up without specialist intervention | Overall Equipment Effectiveness (OEE) | 30–50% | 85%+ |
| | Mean time to repair (MTTR) | 24–72 hours | <2 hours |
| | Plant-technician resolution rate | <20% | 90%+ |
| | Specialist interventions per shift | 5–15 | <1 |
| The safety story is shippable | Certification time per new cell | 4–8 weeks | <5 days |
| | Reusability of safety framework | None — rebuilt every cell | Productized |
| | Incident rate vs. ISO 10218 baseline | Variable, often worse | At or below baseline |
Fig. V: Each capability looks small. The cumulative absence is the reason robotic deployment is artisanal.
ROI Closes Fast
The unit economics of robotic deployment fail today for a reason that is structural rather than incidental. When an operator buys a cell, they are buying engineer-hours of work that will be performed inside their facility over the next six to twelve months, of which less than a third produces hardware with any continuing use beyond that single deployment, and the rest is consumed by an integration process that will be repeated, almost identically, the next time another operator buys another cell from another vendor. The integrator captures most of the value in this transaction, and the integrator's revenue model is built on engineer-hours rather than on productized infrastructure, which means every hour the integrator spends building a tool that would reduce future engineer-hours is an hour of revenue the integrator is choosing to forgo. This is the central reason that thirty years of robotic deployment in American factories has produced almost no reusable infrastructure for deployment itself.
Fuselage is built to break that pattern by treating cell bring-up as configuration against stable interfaces rather than as engineering against a blank slate, and by building the integration tooling, the data infrastructure, and the safety framework as productized assets that get developed once inside our facility and amortize across every subsequent deployment we run. The target is a representative cell deployed for under two hundred thousand dollars all-in, IRR above thirty percent against the operator's labor baseline, and a payback period inside twelve months that a finance team can defend through their standard capex framework rather than through narrative or analogy. The wager is that the company that builds the integration layer captures the value integrators have been extracting through engineer-hours for thirty years, and that the deployment layer will modularize the way data infrastructure modularized in software in the 2010s.
Fig. VI: The integrator's engineer-hours block is what compresses. Productizing it once and amortizing across deployments is the bet.
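As a sketch of what "the math closes" means in capex terms, here is the payback and IRR arithmetic a finance team would run against a cell. The cell cost and labor-savings figures are illustrative assumptions shaped like the targets above, not Relling data:

```python
def payback_months(capex: float, monthly_savings: float) -> float:
    """Months of labor savings needed to recover the all-in cell cost."""
    return capex / monthly_savings

def irr(capex: float, annual_cash: float, years: int) -> float:
    """Annual IRR via bisection on NPV = -capex + sum(cash / (1+r)^t)."""
    lo, hi = 0.0, 10.0
    for _ in range(100):
        r = (lo + hi) / 2
        npv = -capex + sum(annual_cash / (1 + r) ** t for t in range(1, years + 1))
        if npv > 0:
            lo = r  # still profitable at this rate; push higher
        else:
            hi = r
    return (lo + hi) / 2

# Hypothetical target-shaped cell: $200K all-in, $20K/month labor savings.
print(payback_months(200_000, 20_000))     # prints 10.0 (months)
print(round(irr(200_000, 240_000, 5), 2))  # annual IRR, well above the 30% target
```

The point of the sketch is that at these unit economics the approval conversation runs through a standard capex framework, not through narrative.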
Cell Is Brought Online in Days, Not Months
A cell takes six to twelve months to bring online today, and the reason is not that the work is intellectually difficult or that the engineers performing it are slow. The reason is that the bring-up process is serial, the dependencies between stages are circular, and the work cannot begin until the cell is physically present in the deployment environment. Mechanical integration cannot finish until the cell layout is locked, the cell layout cannot lock until perception is calibrated against the actual environment, and perception cannot calibrate until the lighting and surface conditions of the customer's facility are characterized in situ. Safety bring-up sits on top of all of this and cannot start in earnest until the cell is in something close to its final configuration, which puts it on the calendar at the end of the bring-up window when the customer is most impatient and the integrator is least focused. The result is that adding people to a deployment does not compress the timeline, because the dependency chain is what is binding, and the dependency chain cannot be parallelized in a customer's facility no matter how much labor is thrown at it.
Fuselage breaks the dependency chain by performing the work that currently happens inside the customer's facility against a real production environment that we own. Most of what an integrator does on-site for six months is not work that has to happen in the customer's building; it is work that has to happen against a working production line, and a working production line is a thing we have. Cells get configured, calibrated, validated, and safety-characterized inside our facility, against the operational reality of an actual factory rather than against a benchmark, and what arrives at the customer's site is a cell that has already done the work the customer's site has historically been the place to do. The handoff is configuration against a stable interface, not engineering against a blank slate. The target is purchase order to first production unit in under thirty days, with specialist engineer-hours per cell driven below one hundred, and the calendar time the customer's facility is occupied by integration work measured in days rather than in quarters. The wager is that the bring-up timeline is the variable that determines whether robotic deployment is a custom service or a buyable product, and that compressing it by an order of magnitude is what unlocks the deployment volume the rest of the business depends on.
Fig. VII: The serial dependency chain compresses because the work moves to a place where it can run in parallel — Fuselage's own line — and the customer's site sees only configuration.
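The dependency-chain argument can be sketched as a toy critical-path calculation: bring-up time is the longest path through the task graph, not a function of headcount. The stage names, durations (in weeks), and dependencies below are hypothetical:

```python
def finish_time(tasks: dict[str, tuple[float, list[str]]]) -> float:
    """Earliest finish of a DAG of (duration, dependencies)."""
    done: dict[str, float] = {}
    def finish(name: str) -> float:
        if name not in done:
            dur, deps = tasks[name]
            done[name] = dur + max((finish(d) for d in deps), default=0.0)
        return done[name]
    return max(finish(t) for t in tasks)

# On-site today: everything serialized behind in-situ calibration.
on_site = {
    "characterize_env": (4.0, []),
    "calibrate_perception": (4.0, ["characterize_env"]),
    "lock_layout": (3.0, ["calibrate_perception"]),
    "mechanical_integration": (6.0, ["lock_layout"]),
    "safety_bringup": (6.0, ["mechanical_integration"]),
}

# Pre-staged at Fuselage: only configuration remains on the customer's chain.
pre_staged = {
    "configure_cell": (1.0, []),
    "site_acceptance": (2.0, ["configure_cell"]),
}

print(finish_time(on_site), finish_time(pre_staged))  # → 23.0 3.0
```

Adding labor to the left-hand graph changes nothing, because every edge is a hard dependency; moving the work off-site is what deletes edges from the customer's critical path.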
The Cell Stays Running Without Specialist Intervention
A learned-policy cell fails differently than a programmed cell, and the operational implication is that the customer's plant technicians cannot resolve those failures with the diagnostic workflows they have spent fifty years developing. When a programmed robot fails, the cause is attributable to a specific line of code or a specific sensor reading, and the technician follows a deterministic chain backward from symptom to cause. When a learned-policy robot fails, the cause is encoded in model weights, the proximate symptom typically appears several steps after the originating distribution shift that produced it, and the diagnostic path requires interpretive tooling that plant staff have neither the training nor the access to use. A grasp angle that drifts by two degrees because of seasonal lighting changes in the customer's facility will not present as a missed grasp; it will present as a downstream insertion error six steps later, after the part has been transferred to another station, and the technician diagnosing the insertion failure has no mechanism to trace it backward to the grasp because the model does not expose its own intermediate state. Current learned-policy deployments respond to this by depending on a constant background of remote engineering support, with mean time to repair running between twenty-four and seventy-two hours per failure event, and the cumulative effect drags steady-state OEE into the thirty to fifty percent range that makes the cell economically marginal regardless of how impressive the underlying model is.
Fuselage is where the technician-facing infrastructure that closes this gap gets developed. The model interrogation surface that exposes the policy's intermediate state in a form a maintenance technician can read without ML training, the steering vector library that lets a technician correct behavioral drift at runtime without retraining the model, the primitive-level decomposition that localizes failures to specific composable sub-skills rather than to opaque whole-task policies, and the observability tooling that surfaces the leading indicators of failure before the failure produces downtime — all built against the failure modes that emerge in our facility under sustained operation, and shipped with every cell. The target is steady-state OEE above eighty-five percent, mean time to repair under two hours, ninety percent of incidents resolved by the customer's plant technicians without external escalation, and specialist intervention rates below one event per shift across the deployed fleet. The wager is that the binding constraint on operator trust in learned systems is not model quality but model legibility, and that a learned-policy cell becomes buyable at the moment a maintenance technician can ask it what went wrong and get an answer they can act on.
Fig. VIII: The binding constraint on uptime is not model quality but model legibility. Move MTTR by an order of magnitude and the OEE band moves with it.
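The link between MTTR and OEE is simple availability arithmetic. The MTBF, performance, and quality figures below are illustrative assumptions chosen to land in the bands the section cites:

```python
def availability(mtbf_h: float, mttr_h: float) -> float:
    """Fraction of scheduled time the cell is actually up."""
    return mtbf_h / (mtbf_h + mttr_h)

def oee(avail: float, performance: float, quality: float) -> float:
    """OEE = availability x performance x quality."""
    return avail * performance * quality

MTBF = 50.0              # assumed hours between failure events
PERF, QUAL = 0.95, 0.98  # assumed performance and quality factors

today = oee(availability(MTBF, 48.0), PERF, QUAL)  # 24-72h remote-support MTTR
target = oee(availability(MTBF, 2.0), PERF, QUAL)  # technician-resolvable MTTR

print(round(today, 3), round(target, 3))  # ~0.47 vs. ~0.90: the 30-50% band vs. 85%+
```

Nothing about the model improved between the two lines; only the repair path did. That is the sense in which legibility, not quality, is the binding constraint.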
Safety Is Solved as Productized Infrastructure
The robotic safety standards that govern industrial deployment in American factories — principally ISO 10218 covering robot system design and ISO/TS 15066 covering collaborative operation — were written for robots whose behavior is fully specified in advance through deterministic programming, and the certification process built around them assumes safety can be verified by analyzing the program against the hazard model. Learned policies break this assumption at the foundation. Their behavior is not specified in advance, their state space is too large to be exhaustively enumerated, and their failure modes cannot be characterized through the static analysis that programmed robots permit. The field has not converged on a reusable framework for certifying learned systems, so safety bring-up for a learned-policy cell today is performed as ad hoc behavioral characterization, augmented by additional risk-mitigation hardware, and accompanied by documentation of the operating envelopes the cell is allowed to occupy. This work consumes between four and eight weeks of specialist engineering time per cell, depends on safety engineers who are individually expensive and collectively rare, and exists as tacit practice in the heads of a small number of consultants rather than as productized infrastructure that ships with the cell.
What shippable safety looks like in practice is a cell that arrives at a customer's site with a complete certification dossier already assembled, characterized against thousands of operating hours of identical-architecture cells running inside our facility, with a runtime monitor that has already been validated against the failure modes the architecture is known to exhibit, and an incident response runbook that the customer's risk management team can adopt as written rather than reconstruct locally. The customer's safety engineer reviews the dossier, runs the standard acceptance tests against the runtime monitor, and signs off; they do not perform a four-week behavioral characterization study because that study has already been performed. Fuselage is where that study lives, and where the framework that makes it transferable across deployments gets built. The target is safety bring-up under five days per new cell, a reusable framework that applies across deployments rather than being rebuilt from zero, and incident rates that meet or exceed the ISO 10218 baseline that programmed robots achieve. The wager is that corporate risk management is the gating constraint on deployment scale, that no operator above mid-market will install learned systems at volume until the safety story is shippable rather than custom, and that the company that productizes safety for learned policies in industrial environments captures a position no incumbent in the standards-and-certification ecosystem is structurally able to take.
Fig. IX: Today's certification work runs from zero on every cell. Fuselage's reusable framework — dossier template, validated runtime monitor, incident runbook — collapses each cell to acceptance review.
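As a deliberately simplified illustration of a runtime monitor of the kind described above, here is a sketch that checks commanded motion against a certified operating envelope and latches a protective stop on any violation. The envelope parameters, class names, and trip logic are hypothetical placeholders, not a certified ISO 10218 / ISO/TS 15066 design:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Envelope:
    max_tcp_speed_mm_s: float                 # tool-center-point speed limit
    workspace_mm: tuple[float, float, float]  # half-extents of the allowed box

class RuntimeMonitor:
    """Checks every commanded state against the certified envelope;
    a violation latches a protective stop until a human resets it."""
    def __init__(self, env: Envelope):
        self.env = env
        self.tripped = False

    def check(self, tcp_speed_mm_s: float,
              tcp_pos_mm: tuple[float, float, float]) -> bool:
        in_box = all(abs(p) <= b for p, b in zip(tcp_pos_mm, self.env.workspace_mm))
        ok = tcp_speed_mm_s <= self.env.max_tcp_speed_mm_s and in_box
        if not ok:
            self.tripped = True  # latch: the cell stays stopped until reset
        return not self.tripped

monitor = RuntimeMonitor(Envelope(250.0, (600.0, 600.0, 400.0)))
assert monitor.check(200.0, (100.0, 0.0, 50.0))      # inside envelope: run
assert not monitor.check(400.0, (100.0, 0.0, 50.0))  # over speed: protective stop
assert not monitor.check(100.0, (0.0, 0.0, 0.0))     # latched until human reset
```

The certification dossier's job, in this framing, is to show that the envelope and the latch behavior have already been validated against thousands of operating hours, so the customer's acceptance test exercises the monitor rather than re-deriving it.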
VII. Why Fuselage
We think vertical integration is the wrong long-run shape for this industry.
The pattern: an initially vertically integrated industry unbundles into horizontal layers when its interfaces stabilize. PCs were initially integrated (DEC, IBM), then modular (Intel + Microsoft + OEMs + peripherals). Robotics is vertically integrated today (every robotics company builds hardware, firmware, models, and integration), but it will modularize as interfaces stabilize. Relling is betting that the deployment layer will become one of those horizontal layers, and that owning it early is more valuable than owning any single vertical stack.
The SpaceX/Tesla/Anduril pattern was the right shape for the category when no platform existed, but the emergence of a platform changes the game. The full-stack companies in robotics may still exist and win — Figure, 1X, Skild as full-stack apps — but the larger structural opportunity is now the platform layer itself.
Fig. X: Initially integrated industries unbundle when interfaces stabilize. PCs went modular in fifteen years. Robotics is in the early decade of the same trajectory — and the deployment layer is the layer worth owning.
The Critical Bet
Manufacturing strategy has organized itself around four dimensions that any factory has to trade against: cost, quality, dependability, and flexibility. A factory cannot maximize all four at once, and the choice of which dimension to optimize defines what kind of factory it is. Classical industrial automation made its choice fifty years ago by optimizing for cost and dependability, accepted high quality as a cost of admission, and treated flexibility as a luxury that could be sacrificed because the products that ran through the line did not change often enough for flexibility to matter. The robots were programmed for a specific task, the cell was built for a specific product, and the line ran the same operation for years before being retooled. This bet was correct for its time. Markets in the late twentieth century rewarded long production runs of standardized products, and the cost-and-dependability optimum produced the manufacturing economy that the United States and Japan built their industrial bases on.
That bet no longer holds. Product cycles have collapsed, customer specifications change inside the calendar year, and the operations that need to be automated are increasingly the ones classical automation could never handle, precisely because they require flexibility that programmed cells cannot deliver. The factories that have to be built now, and that the American industrial base has to build at scale to remain competitive, are factories where flexibility is the primary dimension and the others follow. Relling's central bet is that this shift has already happened in the market and is waiting on the deployment infrastructure that makes flexibility achievable without sacrificing cost, quality, or dependability.
The reason flexibility has been treated as a luxury is that the only way to deliver it historically was through human labor, and human labor is expensive, scarce, and dependent on tacit skill that takes years to develop. A cell running a learned policy can handle product variants the cell was never explicitly programmed for, can adapt to changes in input materials without re-engineering, and can be retasked through a primitive library and a steering surface rather than through weeks of integrator labor. The four problems Fuselage exists to solve are the four constraints on letting that flexibility actually express itself in a production environment. The math has to close so that flexibility is not a premium product. Cells have to come online fast so that flexibility extends to new deployments rather than being trapped inside the one cell that already exists. Cells have to stay running without specialist intervention so that flexibility does not collapse the moment something drifts. Safety has to be productized so that flexibility does not stall at the corporate risk approval stage.
Fig. XI: Classical automation chose cost and dependability fifty years ago and treated flexibility as a luxury. Markets changed. The new diamond fills the axis the old one gave up.
Partners
Relling needs operational reality before it has its own facility. Fuselage is the long-run answer to where the deployment layer gets built and proven, but the timeline to a fully operational facility is measured in quarters rather than weeks, and the research and engineering work cannot wait. The four problems Fuselage exists to solve are also the four problems Relling is solving today, in the present tense, against operational conditions wherever those conditions can be found.
In the period before Fuselage is operational, we work directly with partner facilities who are willing to host that work. What this means in practice is that a small forward-deployed engineering team from Relling embeds inside the partner's operation, observes the partner's SOPs at the level of detail that lets us actually understand how the work gets done, deploys cells against specific operations the partner identifies as candidates for automation, and runs the research questions that the engineering team needs to answer against real production conditions rather than against a benchmark. The partner gets cells deployed at terms that significantly favor them, including capex offsets, deferred payment structures tied to demonstrated performance, and direct access to the engineering team that built the deployment. The partner also gets priority on every subsequent improvement we ship, because the patterns we learn inside their facility are the patterns we extend across the rest of our work.
The arrangement is not a pilot in the conventional sense. A pilot is a sales motion dressed up as a project, with the integrator absorbing some of the risk in exchange for a path to a full deployment that the customer is implicitly committed to. We are proposing that the partner take the risk of being among the first operators to host this kind of work, and we are pricing that risk into terms that reflect what the partner is contributing.
We are looking for mid-market American manufacturers with operations that classical automation has been unable to serve, where the product variance is high enough that programmed cells are not viable, where the labor constraints are real enough that the operator has actually thought about automation rather than treated it as a future problem, and where the leadership is willing to host an embedded engineering team for long enough that the work compounds rather than producing a single demo and stopping.
When Fuselage comes online, the work we do inside our own facility produces the integration tooling, the safety framework, and the deployment infrastructure that travels into partner operations, and the forward-deployed teams continue to embed inside customer facilities the way they embed today.
Fuselage Works — What Now?
What Fuselage proves, when it works, is not a factory. It is a deployment layer. The four problems Fuselage exists to solve produce, when solved together, an integration substrate that did not exist before in any form. Fuselage is where that substrate gets built and proven. What happens after it is built is the question of how the substrate scales beyond the building it was created in, and the honest answer is that we do not yet know which scaling path is the right one.
The goal is not to operate thousands of factories. The goal is to enable the thousands of existing factories that already constitute the American industrial base to produce the volumes the next industrial revolution requires them to produce. The distinction matters because the standard tech-meets-manufacturing playbook of the last decade has been to build a vertically integrated operator that competes with incumbents on end-product margin, and the standard outcome has been that companies with tens or hundreds of millions of dollars of funding find themselves unable to compete on margin with family-owned operators who have been running their lines for forty years. Forceful additions of technology to manufacturing operations rarely produce the efficiency they promise on paper, because the operations the technology is being added to are themselves the product of decades of process knowledge that the technology has not yet earned. The factories that already exist do not need to be replaced. They need a deployment layer that lets them automate what classical automation could not.
Two paths run from Fuselage toward that goal, and the period after Fuselage is operational is when we will learn which one is the right one to scale through.
The first path is replication. The blueprint that gets produced inside Fuselage — the integration tooling, the safety framework, the technician interface, the data infrastructure, the playbook for how a deployment actually gets stood up against real operational conditions — is designed from the beginning to be transferable to other facilities that Relling owns and operates. The argument for this path is that the deployment layer is only as general as the operational conditions it has been exercised against, and operating across multiple industrial contexts ourselves is the most direct way to extend that generality. The risk is that owned facilities are capital-intensive, and the rate at which the deployment layer can scale through them is fundamentally constrained by how fast we can stand up new operations.
The second path is the playbook leaving our walls. The deployment layer, by the time it is mature enough, is operable by someone other than us. Other operators get access to the infrastructure Relling has built and use it to stand up cells inside their own operations at the cost and timeline Fuselage has produced. Forward-deployed engineering teams from Relling continue to embed inside those operations, but the work being done has been replaced by configuration against productized infrastructure rather than engineering from scratch. The argument for this path is that scaling the deployment layer through existing operators reaches the thousands of factories that already constitute the American industrial base, which is the goal the company exists to serve in the first place. The risk is that selling into industrial facilities is, in our experience and in the experience of essentially everyone who has tried it, a deeply difficult sales process. Industrial buyers are conservative for legitimate reasons, their procurement cycles are long, their risk tolerance for new technology is low, and the people who sign capex approvals are often several layers removed from the people who will operate the equipment. None of these are problems that better engineering solves on its own. They are problems that have to be worked through one operator at a time, with the kind of trust and patience that no amount of capital can shortcut.
The honest position is that we do not yet know which path is the better one, and the period after Fuselage is operational is when that question gets answered against actual evidence rather than against speculation. We expect the answer will come from the texture of the work itself: the rate at which the playbook compounds across deployments, the durability of the deployment layer when it operates outside our direct control, the actual difficulty of selling into industrial operators at scale, and the unit economics that emerge once the integration work is genuinely productized. The bet is not on a particular scaling configuration. The bet is that the deployment layer is the thing that captures value, that whoever builds and owns it captures the position the rest of the industry runs on top of, and that the configuration through which it scales is downstream of evidence we are still in the process of generating.
The position the company is structurally building toward is the same in either case. The deployment layer is the asset. Fuselage is where that asset gets proven. The factories that exist in 2035 are not the factories that exist now, but the work of bringing those factories into existence is not the work of replacing the operators who run American manufacturing today. It is the work of giving them a deployment layer that classical industrial automation never produced, and letting them put it to work against the operations they understand better than we ever will.
In short, we will continue asking ourselves two questions:
- The goal of Fuselage is to answer the first question: is the cost of deployment declining?
- Post-Fuselage, we answer the second question: is yield improving faster with each successive deployment? In other words, does the deployment layer compound?
Fuselage will allow us to cross the chasm: the specific challenge infrastructure companies face in moving from early adopter customers (who will tolerate rough edges) to early majority customers (who need a complete, polished product).
Fig. XIIThe deployment layer is the asset in either case. Which path scales it is a question Fuselage exists to answer against evidence, not speculation.
VIIIValues
We started this company with the idea that robots should be abundant. Along the way, we've come to a small set of beliefs that every person on this team holds. They are not the only things we believe, but they are the things we will not compromise on.
- Safe deployments, above all. A robot that is unsafe is a liability. Every cell we ship clears the safety bar before it clears any other bar. This is non-negotiable, and it is the reason we exist.
- Genchi genbutsu — go and see. You cannot solve a factory problem from an office, a lab, or a Zoom call. You solve it by standing on the floor, watching the work, and understanding why the thing that should work doesn't. Every hard problem in robotics is a process knowledge problem, and process knowledge lives on the floor.
- Seek truth. We would rather hear the answer we don't want than the answer we do. Disagreement is a gift; flattery is a tax. We measure ideas against reality, not against each other, and we change our minds when reality tells us to.
- Make stuff happen. Vision without execution is decoration. When we commit to something, we move whatever needs moving — calendars, budgets, egos, ourselves — until the thing exists in the world. We are biased toward action and allergic to ceremony.
- Festina lente — make haste, slowly. Move with urgency, but never at the cost of doing it right.
IXTeam
Jai (CEO) has two bootstrapped exits and led post-acquisition work at Clara Labs. He was Chief Product Officer at Rollup (a16z, Thiel) and an engineer at AnySignal.
Anya (CTO) was a Controls Engineer at SpaceX and worked in Data Systems at NASA JPL. She was employee #4 at Neros and is a member of the Canadian National Chess Team.