April 1, 2026

Systems engineering at Woven by Toyota: Building simulation and validation infrastructure for safety-centric SDVs


Ann Arbor, Michigan, United States

As vehicles become increasingly software-defined, validating how software behaves as part of a broader system becomes as important as developing the software itself. Nowhere is that more true than in safety-critical systems in vehicles. In this conversation, Andrew, a systems engineer at Woven by Toyota, explains how simulation and validation infrastructure is built to support that challenge at Toyota scale. He shares how teams carry test intent across simulation, hardware, and real vehicles; why that continuity is difficult; and what it takes to test complex, learning-driven systems with the speed and confidence required for real-world deployment.

Q: Let’s start at the beginning. What drew you to systems engineering?

It was a pretty intentional move. Over my career, I’d worked as a controls engineer, software engineer, integration engineer, even some calibration work. But in every team, there were systems engineers and system architects guiding the overall direction. They were the ones really bringing teams together and helping everyone find a common forward direction.

They weren’t writing every line of code or designing every piece of software or hardware, but they understood how the pieces interact and shaped how the product came together. That end-to-end ownership and being responsible for how the system behaves as a whole was what really appealed to me.

Q: Do people need a specific background to move into systems engineering?

Definitely not. Systems engineers come from all kinds of backgrounds: software, electrical, mechanical, robotics. A lot of the systems we work on are a mix of software and hardware, so having experience in any one area can be a strong foundation. What matters more than your original discipline is your ability to think across boundaries, reason about tradeoffs, and remain grounded in how a system actually behaves in the real world.

Q: What does systems engineering look like within the context of Arene?

When you’re developing vehicle software, you usually move through three main environments: 

  • Pure simulation, which is optimized for speed and scale

  • Bench setups, where some real hardware is introduced, and

  • Full vehicle setups, where everything is finalized. 

Within Arene’s simulation and validation approach, we work across teams to define clear, shared expectations for how the system should behave at each stage. We then ensure those expectations are consistently reflected in the tools and infrastructure teams use, and that they remain consistent and dependable as software moves from early development into a real vehicle. Systems engineering ensures that Arene products remain aligned to stakeholder needs throughout each stage of development.

Maintaining that continuity across environments is one of the hardest challenges in automotive software, and it’s also what makes Woven by Toyota’s engineering relatively rare.


Q: What problems are you ultimately trying to solve?

At a high level, we’re helping teams move faster without losing confidence as vehicle software becomes more complex.

Modern vehicles are made up of many interconnected systems, often developed by different teams and updated independently, all at the same time. A small change in one area can have unexpected effects elsewhere, and teams need to be able to see how changes ripple through the system.

At the same time, by making sure software behaves consistently across the different environments I mentioned a second ago, development teams can iterate more quickly and trust what they’re seeing in test results. At Toyota’s scale, that combination of speed and confidence is what will make a true safety-centric software-defined Toyota vehicle possible.

"At Toyota’s scale, that combination of speed and confidence is what will make a true safety-centric software-defined Toyota vehicle possible."

Q: You mentioned working across pure simulation, partial hardware, and actual vehicle environments is rare. Why is that?

What’s rare is being able to carry the same test intent across all of those environments. 

In a lot of companies, each stage leans on different tools or vendors, which makes it hard to compare results or reproduce issues. A failure you see in a vehicle might be difficult to recreate on a bench, and even harder to trace back to something you can fix quickly in software.

At Woven by Toyota, the focus is on designing tests, scenarios, and interfaces, and in turn, their systems, so they can be replicated and reused across all of those different environments. When an issue shows up in a vehicle, teams can recreate the same conditions in a controlled bench setup or in simulation, where it’s easier and safer to investigate. Once the root cause is understood and fixed, the same test can be rerun step by step, all the way back through the bench and into the vehicle, to confirm the behavior is actually fixed.
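One way to picture carrying the same test intent across environments is a scenario whose definition and pass criterion never change, executed by swappable backends. This is a hypothetical sketch, not Arene's actual API; the scenario fields and the physics-only simulation backend are purely illustrative.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Scenario:
    """One test intent, shared by simulation, bench, and vehicle runs."""
    name: str
    initial_speed_mps: float       # starting vehicle speed
    brake_decel_mps2: float        # commanded deceleration
    expected_stop_within_m: float  # pass criterion, identical in every environment

def run_scenario(scenario: Scenario, backend: Callable[[Scenario], float]) -> bool:
    """Execute the same scenario on any backend.

    A backend takes the scenario and returns the measured stopping
    distance in metres; only the execution environment varies, never
    the test intent or the pass criterion.
    """
    stopping_distance = backend(scenario)
    return stopping_distance <= scenario.expected_stop_within_m

# A pure-physics simulation backend: stopping distance = v^2 / (2a).
def sim_backend(s: Scenario) -> float:
    return s.initial_speed_mps ** 2 / (2.0 * s.brake_decel_mps2)

hard_brake = Scenario("hard_brake", initial_speed_mps=20.0,
                      brake_decel_mps2=8.0, expected_stop_within_m=30.0)
print(run_scenario(hard_brake, sim_backend))  # 400/16 = 25 m <= 30 m, so True
```

A bench or vehicle backend would replace `sim_backend` with code that drives real hardware, while `hard_brake` and its pass criterion stay byte-for-byte the same.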


Q: Double clicking into simulation, what determines whether a simulated component behaves like the real thing?

Two things: protocol fidelity and behavior fidelity.

  • Protocol fidelity means you speak the same language: the same message IDs, payload formats, timing, and state transitions. 

  • Behavior fidelity means the outputs respond correctly to vehicle dynamics and operating modes.

If either is off, downstream ECUs receive bad data, and they often fail in subtle, hard-to-diagnose ways. That’s why we’re obsessive about matching the real component’s contracts, especially when simulation results are used to make safety-critical decisions.
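A protocol-fidelity check of the kind described above might look like the following sketch. The message ID, payload layout, cycle time, and tolerances are invented for illustration; they are not a real Toyota or Arene message set.

```python
import struct

# Illustrative contract for one simulated component's output frame.
EXPECTED_MSG_ID = 0x1A2
EXPECTED_PAYLOAD_FMT = "<Hhh"  # little-endian: uint16 status, int16 x, int16 y
EXPECTED_CYCLE_MS = 20
CYCLE_TOLERANCE_MS = 2

def check_frame(msg_id: int, payload: bytes, dt_ms: float) -> list[str]:
    """Return a list of protocol violations for one received frame."""
    violations = []
    if msg_id != EXPECTED_MSG_ID:
        violations.append(f"wrong message ID: {msg_id:#x}")
    expected_len = struct.calcsize(EXPECTED_PAYLOAD_FMT)
    if len(payload) != expected_len:
        violations.append(f"payload length {len(payload)}, expected {expected_len}")
    if abs(dt_ms - EXPECTED_CYCLE_MS) > CYCLE_TOLERANCE_MS:
        violations.append(f"cycle time {dt_ms} ms outside "
                          f"{EXPECTED_CYCLE_MS}±{CYCLE_TOLERANCE_MS} ms")
    return violations

# A well-formed, on-time frame passes cleanly...
good = check_frame(0x1A2, struct.pack("<Hhh", 1, 100, -50), 20)
print(good)  # []
# ...while a short payload arriving 11 ms late trips two violations.
bad = check_frame(0x1A2, b"\x01\x00", 31)
print(bad)
```

Behavior fidelity is the harder half: a frame can pass every check like this and still carry values that no real sensor would have produced in that driving situation.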

Q: Can you give us an example?

Yes — simulating a sensor during “dynamic” driving events. If the vehicle brakes aggressively, its pitch changes, which directly affects how a real sensor would perceive the world. A believable simulation has to reflect that change immediately and accurately in both the data values it produces and in the timing at which that data is delivered.

If the simulated component lags, skips updates, or behaves differently than the real sensor would, downstream systems may still function, but in ways that are subtly wrong. Those kinds of issues are hard to spot and easy to misinterpret.
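The pitch effect can be shown with a minimal sketch: under braking, the body pitches nose-down, so a static target appears at a different elevation in the sensor frame. The pitch-per-deceleration gain here is an assumed, illustrative number, not a measured vehicle parameter.

```python
# Illustrative body-pitch model: degrees of nose-down pitch per m/s^2
# of deceleration. A real value would come from suspension dynamics.
PITCH_PER_DECEL_DEG = 0.3

def perceived_elevation_deg(target_elevation_deg: float, decel_mps2: float) -> float:
    """Target elevation a pitch-aware simulated sensor would report.

    Braking pitches the sensor nose-down, so a stationary target appears
    higher in the sensor frame. A simulation that ignores this produces
    subtly wrong data during exactly the maneuvers that matter most.
    """
    body_pitch_deg = PITCH_PER_DECEL_DEG * decel_mps2
    return target_elevation_deg + body_pitch_deg

print(perceived_elevation_deg(0.0, 0.0))  # steady cruise: 0.0
print(perceived_elevation_deg(0.0, 8.0))  # hard braking: 2.4 degree apparent shift
```

Downstream perception consumes that shifted value, so a static simulation that always reports 0.0 degrees would quietly diverge from reality the moment the driver brakes hard.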

Q: What’s the most interesting technical challenge you’ve tackled at Woven by Toyota so far?

Running a simulated sensor ECU inside a real vehicle, while still meeting real-time requirements.

In that setup, we had to decouple the physical sensor and feed in synthetic input from an environment simulator. From the vehicle’s perspective, everything else stayed the same, but downstream systems still expected data to arrive at the right rate, in the right format, and with the same behavior as a real sensor, which meant the simulated component had to behave like the real thing in all the ways that mattered.

The hardest part was keeping the simulation in sync with what the vehicle was actually doing. In software-only environments, you can usually relax timing. In a real vehicle, you can’t. For example, during a safety-critical maneuver, downstream systems rely on timely and consistent detection data to decide whether to intervene. If simulated data arrives even slightly out of sync, those decisions can change in ways that wouldn’t happen in the real world.
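The timing-sync concern can be captured by a simple watchdog over frame inter-arrival times. The 20 ms period and 2 ms jitter budget below are assumptions chosen for illustration, not a real requirement.

```python
# Illustrative real-time budget for a simulated sensor feed.
PERIOD_MS = 20.0
JITTER_BUDGET_MS = 2.0

def find_timing_violations(arrival_times_ms: list[float]) -> list[int]:
    """Return indices of frames whose inter-arrival gap breaks the budget."""
    violations = []
    for i in range(1, len(arrival_times_ms)):
        gap = arrival_times_ms[i] - arrival_times_ms[i - 1]
        if abs(gap - PERIOD_MS) > JITTER_BUDGET_MS:
            violations.append(i)
    return violations

# Frame 3 arrives 8 ms late; downstream logic consuming it during a
# safety-critical maneuver could decide differently than it would
# with real, on-time sensor data.
arrivals = [0.0, 20.0, 40.0, 68.0, 88.0]
print(find_timing_violations(arrivals))  # [3]
```

In a software-only run, a late frame like this is often invisible because the whole simulation can simply stall and wait; in a real vehicle, the vehicle keeps moving and the decision is made with whatever data arrived.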

So getting that level of fidelity right was really challenging, but incredibly rewarding when we cracked it. It allowed us to test complex, safety-critical behavior in a controlled way while still operating in a real vehicle. 


“At this scale, failures aren’t just about something breaking outright. They’re often about subtle behavior changes, timing differences, or edge cases that only show up under specific conditions.”

Q: For engineers coming from outside automotive, what might surprise them about engineering in this space?

While many domains create work that impacts the physical world, here the connection between what you build and the real-world outcome is more direct. Even in hybrid environments, what you build becomes tangible and benefits the driver, the passengers, and even people outside the vehicle. There’s a real sense of responsibility that comes with that, and it’s something even I didn’t fully appreciate until I experienced it.


Q: Final question. What excites you most about what you’re building now, and where this leads?

For learning-based systems, traditional pass/fail testing doesn’t really work. You’re often looking at distributions, tendencies, and tradeoffs rather than single correct outputs. The work now is about building ways to probe that behavior intentionally, to ask questions like how stable a model’s decisions are under small changes, how it behaves near operational boundaries, and how those behaviors shift as models are retrained or updated.

Where this leads is a much stronger feedback loop between development and validation. Instead of discovering issues late or reacting to unexpected behavior, teams can explore and understand model behavior earlier and more systematically. Over time, that makes it possible to evolve AI-driven functionality with far more confidence, because you’re shipping behavior you actually understand.

That shift, from treating AI as a black box to something you can interrogate and trust, is what makes this work especially compelling.
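The stability probing described here, measuring how often a decision survives small input perturbations rather than asserting one correct output, can be sketched with a toy stand-in model. The threshold classifier, noise level, and trial count are all illustrative, not a real perception stack or test plan.

```python
import random

def model_decision(distance_m: float) -> str:
    """Toy stand-in for a learned model: brake when an obstacle is close."""
    return "brake" if distance_m < 25.0 else "coast"

def decision_stability(distance_m: float, noise_m: float = 0.5,
                       trials: int = 1000, seed: int = 42) -> float:
    """Fraction of perturbed inputs that keep the nominal decision.

    Instead of pass/fail on one input, we sample small perturbations
    and measure how consistently the model answers: a distribution-style
    probe rather than a single assertion.
    """
    rng = random.Random(seed)
    nominal = model_decision(distance_m)
    same = sum(
        model_decision(distance_m + rng.uniform(-noise_m, noise_m)) == nominal
        for _ in range(trials)
    )
    return same / trials

# Far from the decision boundary, behavior is perfectly stable...
print(decision_stability(40.0))  # 1.0
# ...near the operational boundary it is not, which is exactly
# what this kind of probing is designed to surface.
print(decision_stability(25.1))
```

Tracking how a stability profile like this shifts across retrains is one concrete way to turn "the model changed" into something a validation pipeline can actually measure.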