So, yes, there are a lot of high-level GC'd languages that "run" on these systems.
For hobbyist one-off projects (where you can overspec the hardware by 10x without any negative financial impact to the project) they are great. For mass production of a device, the extra RAM (and resources in general) that they require tends to be double what you would ordinarily need if written in C (or a crippled dialect of C++).[1]
For the best high-level logic control of embedded devices, you cannot beat esphome, and that's because that project converts the high-level logic (specified in YAML) to native C and/or C++ code, which is then pushed OTA to the device. That approach keeps the firmware as lean as possible.
[1] My best experience with stuffing a more abstract language into something like an ATmega328 (I believe that's what the Arduino uses) was with a special purpose-built compiler. It compiled a PLC-type language of my own design (only for controlling and switching digital IO and reading the ADC) to a bytecode, which was run by a tiny interpreter on the ATmega, using three bytes per instruction+operand.
Such a scheme allowed tons of conditional logic to be stuffed into the flash area, as the 'interpreter' on the chip would read and execute 3 bytes at a time from flash. The native-code version of the same conditional logic could easily be 10x the size.
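To make the scheme concrete, here's a rough host-side Python model of a 3-bytes-per-instruction interpreter like the one described. The opcodes, operand layout, and jump convention are all invented for illustration (the comment doesn't spell out the real instruction set), and the real thing was C running from flash on the ATmega:

```python
# Hypothetical 3-byte instruction format: (opcode, operand, value).
# These opcode names/values are made up for the sketch.
SET_DO, JMP_IF_LOW, READ_DI, HALT = range(4)

def run(program, inputs, outputs):
    """Execute 3-byte instructions; `inputs` maps pin -> level,
    `outputs` collects digital writes (pin -> level)."""
    pc = 0
    flag = 0  # result of the last digital read
    while pc < len(program):
        op, operand, value = program[pc:pc + 3]
        if op == READ_DI:        # flag = level of input pin `operand`
            flag = inputs[operand]
        elif op == JMP_IF_LOW:   # jump to instruction index `value` if flag == 0
            if flag == 0:
                pc = value * 3
                continue
        elif op == SET_DO:       # drive output pin `operand` to `value`
            outputs[operand] = value
        elif op == HALT:
            break
        pc += 3

# "if input pin 2 is high, drive output pin 5 high, else low" in 18 bytes:
prog = bytes([
    READ_DI,    2, 0,   # 0: read pin 2
    JMP_IF_LOW, 0, 4,   # 1: if it was low, jump to instruction 4
    SET_DO,     5, 1,   # 2: pin 5 <- high
    HALT,       0, 0,   # 3
    SET_DO,     5, 0,   # 4: pin 5 <- low
    HALT,       0, 0,   # 5
])
```

The point of the exercise: that branch-and-set logic is 18 bytes of flash, where the equivalent compiled AVR code (register setup, I/O port manipulation, branches) would be considerably larger per rule.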
I do get that if you have to produce millions of widgets that a few cents here and there might add up, but if you just exchange one part for a physically similar but higher capacity part, and the list price difference is really small, why would that be such a hurdle? I'm not talking about redesigning all the power supplies, adding a PMIC or building an entire computer, but even in the WRT54G days you could just solder replacement DRAM and NOR chips and be done for a few cents for a 500% increase in capacity. In later models you could still do that but the NOR became NAND and BGAs are harder, but it's still pretty easy and cost-effective.
In the EE world, designing for manufacture does try to squeeze every fraction of a cent out of everything, while in the software world using 10MB of RAM instead of 1MB is fine as long as you decode your PNGs correctly (per an earlier comment a few days ago about the libpng reference implementation). Even at volume, I doubt that saving a tenth of a cent really matters until you hit some extreme production numbers (even 10 million units would at best save you $10k).
Mechanically there would be something to consider, i.e. connectors vs screws vs solder vs glue etc, those all have a direct impact on how reliable a connection is, how long it will last, and how easy it is to manufacture (and how easy it is to take apart later). But fractions of a cent when compared to all the other aspects?
> I'm interested in learning why, say, a device would be much more expensive if we use 1GiB of DRAM chips instead of 32MB.
Those are different product categories. IoT devices tend to range from ~400KB to around 1MB of RAM. Most have the RAM in-package, not external, so adding more RAM is expensive because it has to be done during fabrication.
The most recent IoT product I delivered for a client was based on an ESP32-C3 (RISC-V) module with under 300KB of usable RAM and 4MB of flash, of which exactly 1MB can be used for the program code.
It cost $5. The one which allows you to have slightly over double the program code (around 2MB in flash, out of 8MB) is $7.50, but it has the same RAM in-package.
At those differences, that extra $2.50 is literally more than the client would make off each device sold!
And, of course, it's a $5 component. Using 32MB/1GiB as a reference point is entering the lower end of rpi territory; basically a different product for a different use.
The second issue is product differentiation. Products are priced based on what the market will bear. The actual cost of production matters very little once a developed product hits the market.
Something with 1GiB of RAM is intended for a very different market than something with 32MB of RAM. A manufacturer who simply adds the extra RAM (and adjusts the sales price to ensure that it balances out) is:
a) Leaving money on the table for those use-cases where the customer is willing to pay twice as much,
b) Missing out on sales because that $5 increase in price moved it out of consideration in the market it was in.
What you'll mostly find if you go ahead and replace 32MB with 1GB (or 1GiB, which is close enough) on some product is that no one uses it for the higher-spec use-case[1] and you've lost some sales on the low end.
[1] Because RAM is not the only upgraded component in the higher-level use-case products. I'm not too familiar with products in the >512KB RAM class, so take the following with a pinch of salt (i.e. I might be wrong). Typically the 1GB RAM products are used for a specific use-case. Those use-cases can't be done on the ~200MHz cores that come with the 16MB/32MB SBCs. It's cheaper to simply switch to an rpi-class computer that comes with all matching components.
A quick search shows that there really aren't many sub-1GB devices anymore, anyway, other than industrial components.
Then charge $21.59, use the cheaper chip, and make additional profit.
I might have something similar lying around on my previous PC which is in storage somewhere, but the "something similar" was for a short course I taught to interns and based on the Arduino itself.
* Gobot (https://gobot.io/)
* TinyGo (https://tinygo.org/)
* GoCV (https://gocv.io/)
Not applicable to the clear majority of IoT hardware; it's for embedded systems which are large enough to run Linux, basically.
From the website:
> Firmware sizes start in the 20-30 MB range.
What I mean is that the performant libraries of the ML/AI python ecosystem are all written in C or another high-speed compiled language. The speed of the underlying execution is not meaningfully affected by the speed of the Python interpreter. If CPython 3.12 came out with a huge performance regression and ran 50% slower than 3.11, these ML programs would run at almost exactly the same speed they do today.
> the libs are all in C
Yeah node uses bindings too, examples:
node-llama-cpp (C++)... node-SDL (C)...
Why wouldn't it also be true for mechatronics? Have some native bindings for C libraries then you can write Node or Python and still use the machine code where needed. Speaking of which:
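As a toy illustration of what such native bindings amount to, Python's stdlib `ctypes` can call straight into a compiled C library; `libm`'s `sqrt` here stands in for the much larger C/C++ cores that NumPy or node-llama-cpp wrap. The interpreter only dispatches the call; the actual work runs as machine code:

```python
import ctypes
import ctypes.util

# Load the C math library (libm) and describe sqrt's C signature
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

# The call crosses into compiled C; the interpreter's speed is irrelevant here
print(libm.sqrt(2.0))
```

In practice you'd use a proper extension module (or cffi/pybind11) rather than hand-declaring signatures, but the division of labor is the same: high-level control flow in the scripting language, hot loops in native code.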
> The speed of the underlying execution is not meaningfully affected by the speed of the Python interpreter
Wouldn't JIT vs. no JIT have an impact on speed, along with some of the heuristic optimization V8 does?
I wouldn't rule out JavaScript for mechatronics, everything ends up in JavaScript.
I suppose my knee-jerk reaction was driven by "interpreted languages are not suitable for low-resource machines," because the fixed overhead of booting up a JavaScript or Python interpreter/JIT is comparatively higher in embedded environments. But the space nowadays is a lot bigger than the traditional "embedded" space (where 4KB of RAM is a luxury), so there's no reason a drone or robot can't have a few gigs of RAM and multiple cores to use. So JavaScript/Python is probably fine for a huge category of mechatronics applications.
Thanks for discussing with me. :)
According to the docs (for both this and the Johnny-Five project), the JS ONLY runs on a PC-class computer. You connect your IoT device to the computer that is running the JS program, and the JS program then controls the device. The IoT device must be tethered to a PC of some sort.
I'm guessing the controlling PC does things like "set GPIO-$X to input", "read GPIO-$X", "set GPIO-$Y to output", "write GPIO-$Y", "read ADC-$A", etc.
Maybe they designed their custom protocol to also handle time-constraints (like clocking a signal at a certain frequency on a particular pin), or maybe counting transitions on a digital input for specific duration, so that you can mostly do what you'd expect to, but I wouldn't bet on it.
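For a sense of what those tethered commands look like on the wire, here's a sketch loosely modeled on the Firmata protocol (which Johnny-Five uses). The helper names are mine and the byte values should be checked against the Firmata spec before relying on them:

```python
# Firmata-style framing (values per the Firmata convention; verify
# against the spec before use).
SET_PIN_MODE = 0xF4   # followed by: pin, mode
MODE_OUTPUT  = 0x01
DIGITAL_MSG  = 0x90   # 0x90 | port, then two 7-bit bytes of pin states

def set_pin_mode(pin, mode):
    return bytes([SET_PIN_MODE, pin, mode])

def digital_write_port(port, port_bits):
    # Payload split into 7-bit bytes: Firmata keeps the high bit free
    # so command bytes are distinguishable from data bytes.
    return bytes([DIGITAL_MSG | port, port_bits & 0x7F, (port_bits >> 7) & 0x01])

# "set pin 13 to output, drive it high": pin 13 is port 1, bit 5
frames = set_pin_mode(13, MODE_OUTPUT) + digital_write_port(1, 1 << 5)
```

Every such frame is a round trip over the serial/radio link per operation, which is exactly why tight timing (clocking a pin at a fixed frequency, counting edges) needs protocol support on the device side rather than per-call commands from the PC.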
My understanding of these types of projects is that they don't compile the input into a state-machine that is downloaded to the device; they send each instruction as and when it occurs.
This is especially problematic considering that the example on the linked page is for a drone taking off, flying, then landing 10s later. You'd better hope that your drone doesn't ascend so fast in that 10s that it is out of range by the time the `land` command is issued.