The comparison with ROS 2 is a bit questionable. Comparing a single-process-only (Copper) approach using shared memory with a multi-process system (ROS 2) using default DDS settings really isn't comparing the same thing. There are ways to make the ROS 2 system much faster if you're willing to be limited to similar constraints (single process with components, or local-system shared-memory transport) but most people don't because those constraints are very limiting in practical applications.
We have ~100x less latency and ~12x faster logging also because we have adopted a data-oriented architecture: the tasks' outputs are written back to back in memory, and all the I/O is linear (we could log straight to a block device; we don't even need a filesystem). I am not sure it is possible to touch this with ROS, simply because of its "everything is asynchronous" design pattern.
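To make the data-oriented logging idea concrete, here is a minimal hand-rolled sketch (not Copper's actual API; the type and method names are made up): each task output is appended back to back into one contiguous buffer with a small length prefix, so flushing the log is a single sequential write, and the sink could just as well be a raw block device.

```rust
use std::io::Write;

// Hypothetical illustration of back-to-back, length-prefixed log records.
struct LinearLog {
    buf: Vec<u8>,
}

impl LinearLog {
    fn new(capacity: usize) -> Self {
        Self { buf: Vec::with_capacity(capacity) }
    }

    /// Append one task output directly after the previous one.
    fn push(&mut self, payload: &[u8]) {
        // A length prefix lets a replayer walk the buffer sequentially.
        self.buf.extend_from_slice(&(payload.len() as u32).to_le_bytes());
        self.buf.extend_from_slice(payload);
    }

    /// One linear write; no per-message syscalls, no filesystem layout needed.
    fn flush(&mut self, sink: &mut impl Write) -> std::io::Result<()> {
        sink.write_all(&self.buf)?;
        self.buf.clear();
        Ok(())
    }
}

fn main() -> std::io::Result<()> {
    let mut log = LinearLog::new(1 << 20);
    log.push(b"imu sample");
    log.push(b"camera frame");
    let mut sink: Vec<u8> = Vec::new(); // stand-in for a file or block device
    log.flush(&mut sink)
}
```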
So the question is more about those limits in practical applications: do you have use cases where you absolutely need every single component deployed as a kind of microservice in a robot?
Having worked on both large ROS systems and large monolithic robotics systems, the monolith wins hands down. The ROS paradigm that every node is its own process is frankly insane.
> every node is its own process
... was SOA about 10 years ago. At this point any comparison to ROS 1 is just a strawman.
Nowadays, nodelets and ROS 2 eschew that approach to a large degree. But you're right: plenty of shops still use ROS 1.
ROS was ... first, so they set the standard, and were AFAICT the introduction to distributed and message-passing systems for most roboticists coming out of school/academia. They all graduated and had to relearn things over time (myself included). A plain-jane shared-memory paradigm is just simpler and easier for most things once you get a basic framework together. There certainly are situations where you want process partitions, e.g., plug-and-play payloads. But even there, ROS plug-and-play is atrocious, since it is akin to a network drop, which ROS just doesn't handle. So everyone writes bridge nodes anyway.
Then don't get me started on multi-agent systems. At one point it was honestly believed that all agents would use a common ROS Master. Laughable!
Another angle: `ROS` and `RTOS` share letters, but not much else!
I gather that robotics is a fusion of embedded and mechanical engineering; I refer to the former.
I would say, though, that there are different "kinds" of roboticists. Just as you have backend and frontend devs in the web world, you might consider people who work on hardware or software to be the robotics equivalent. Hardware people work at a low level with embedded devices and may even program them. But higher up in the robotics stack, most of that embedded know-how (toolchain wrangling, as you put it) is much less important. The full-stack robotics engineer has both hardware and software knowledge/experience, but you don't usually see roles that ask for both, so people tend to specialize in one or the other.
I think if something annoys me enough I'll end up using nanomsg or MQTT as a bridge, but it's a pain.
If I paint with a broad brush:
- embedded is all about low latency and low bandwidth
- computers are all about bandwidth, often with terrible latencies.
A modern robot needs both low latency and high bandwidth.
I also imagine distributed computing (for example: over CAN or another bus) would be a useful pattern.
Deterministic log replay is a killer feature to have baked in from the start - many autonomous vehicle & robotics companies have invested years of effort to achieve that nirvana, while the most popular robotics framework today (ROS) makes it borderline impossible.
Don't you mean "like"? I thought game engines were all about the data-oriented approach.
I don't think Copper claims to offer more than this either, but I can't speak for them.
Is deterministic log replay really a differentiating factor? My naive assumption would be that this is table stakes for pretty much any software.
It's important for safety-critical systems, to be sure, but you can get surprisingly far without it.
Technically, Copper is a compiler that takes your graph, with those constraints in mind, and builds a game loop out of it.
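Roughly what that means, as a hand-written sketch (hypothetical task names, not code Copper actually generates): the pub/sub indirection disappears and the schedule becomes a straight-line, deterministic sequence of calls, exactly like a game loop.

```rust
// Hypothetical tasks standing in for a real pipeline.
struct CameraDriver;
struct Detector;
struct Planner;

impl CameraDriver {
    fn process(&mut self) -> [u8; 64] { [0; 64] } // stand-in for a frame
}
impl Detector {
    fn process(&mut self, frame: &[u8]) -> u32 { frame.len() as u32 } // stand-in detection
}
impl Planner {
    fn process(&mut self, _detection: u32) { /* command actuators */ }
}

fn main() {
    let (mut cam, mut det, mut plan) = (CameraDriver, Detector, Planner);
    // The loop body is what a graph "compiler" would emit: a fixed, ordered
    // schedule with no runtime dispatch or callbacks between tasks.
    for _tick in 0..1000 {
        let frame = cam.process();
        let detection = det.process(&frame);
        plan.process(detection);
    }
}
```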
In terms of architecture, though, it looks like it makes a ton of sense inside something like a tracker (i.e., the entity is basically a track), but for other parts like a vision pipeline, sensor fusion, etc., I don't see how it helps.
Tell me if I am missing something.
Talking about Bevy, fitting Copper within Bevy to build this little simulation example happened super naturally: Copper is a System querying the Entities within the virtual world after each simulation tick.
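For anyone unfamiliar with Bevy, the pattern described above looks roughly like the sketch below (assuming a recent Bevy release; `feed_copper` and its body are made up for illustration, not the actual example code): a system runs after each tick, queries the entities it cares about, and hands the data to the robotics side.

```rust
use bevy::prelude::*;

// Hypothetical system: after each simulation tick, read the entities'
// transforms and feed them to the robotics pipeline as simulated sensor data.
fn feed_copper(query: Query<&Transform>) {
    for transform in &query {
        let _position = transform.translation;
        // ...push into the task graph for the next cycle...
    }
}

fn main() {
    App::new()
        .add_plugins(MinimalPlugins)
        .add_systems(Update, feed_copper)
        .run();
}
```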
As far as how it helps, I was mostly thinking from a dev-ergonomics perspective. For things like ROS/MAVLink it can sometimes be hard (for me) to think through how all the systems are interacting, but for whatever reason ECSs feel like a natural way to think about systems with simultaneous-ish inputs and outputs.
Some might verbally assault me for calling these two things similar, but I mean that loosely. Someone who is new to both might notice that they compose systems which can communicate and respond to inputs, which is conceptually similar. State charts offer far more guarantees, consistency, reliability, and predictability, though.
I suppose one is about propagating data (not state, specifically), the other is about state and state control. Both are hierarchical in a sense, but ECS doesn't place as much importance on hierarchy.
Apologies if I'm dead wrong.
“Copper is a user-friendly runtime engine for creating fast and reliable robots. Copper is to robots what a game engine is to games.”
Other projects should take notice.