> Magic Lantern is a free software add-on that runs from the SD/CF card and adds a host of new features to Canon EOS cameras that weren't included from the factory by Canon.
It also backports new features to old Canon cameras that aren't supported anymore, and is generally just a really impressive feat of both (1) reverse engineering and (2) keeping old hardware relevant and useful.
Today, cameras like Blackmagic and editing platforms like DaVinci handle RAW seamlessly, but it wasn't like this even a few years ago.
I think possibly someone thought it sounded a bit like firmware?
I'm the current lead dev, so please ask questions.
Got a Canon DSLR or mirrorless and like a bit of software reverse engineering? Consider joining in; it's quite an approachable hardware target. No code obfuscation, just classic reversing. You can pick up a well supported cam for a little less than $100. Cams range from ARMv5te up to AArch64.
Is this situation still the same? (Apologies for the hazy details -- this was 5 years ago!)
Anyway, I can happily talk you through how to do it. Our discord is probably easiest, or you can ask on the forum. Discord is linked from the forum: https://www.magiclantern.fm/forum/
Whatever code you had back then won't build without some updates. 4000D is a good target for ML, lots of features that could be added.
I'll definitely keep this in mind and hit you up whenever I have a buncha hours to spare. :)
The 4000D is an interesting cam, we've had a few people start ports then give up. It has a mix of old and new parts in the software. Canon used an old CPU / ASIC: https://en.wikipedia.org/wiki/Template:Canon_EOS_digital_cam...
So it has hardware from 2008, but they did update the OS to a recent build. This is not what the ML code expects to find, so it's been a confusing test of our assumptions. Normally the OS stays in sync with the hardware changes, which means when we're reversing it's hard to tell which changes come from the hardware and which from the software.
That said, 4000D is probably a relatively easy port.
> I'm the current lead dev, so please ask questions.
Well, you asked for it!
One question I've always wondered about the project is: what is the difference between a model that you can support, and a model you currently can't? Is there a hard line where ML future compatibility becomes a brick wall? Are there models where something about the hardware / firmware makes you go 'ooh, that's a good candidate! I bet we can get that one working next'?
Also, as someone from the outside looking in who would be down to spend $100 to see if this is something I can do or am interested in, which (cheap) model would be the easiest to grab and load up as a dev environment (or in a configuration that mimics what someone might do to work on a feature), and where can I find documentation on how to do that? Is there a compendium of knowledge about how these cameras work from a reverse-engineering angle, or does everyone cut their teeth on forum posts and official Canon technical docs?
edit: Found the RE guide on the website, gonna take a look at this later tonight
Re what we can support - it's a reverse engineering project, we can support anything with enough time ;) The very newest cams have software changes that make enabling ML slightly harder for normal users, but don't make much difference from a developer perspective. I don't see any signs of Canon trying to lock out reverse engineers. Gaining access and doing a basic port (ML GUI, but no features) is not hard when you have experience.
What we choose to support: I work on the cams that I have. And the cams that I have are whatever I find for cheap, so it's pretty random. Other devs have whatever priorities they have :)
The first cam I ported to was the 200D, unsupported at the time. It took me a few months to get the ML GUI working (with no features enabled), and I had significant help. Now I can get a new cam to that standard in a few days in most cases. All the cams are fairly similar for the core OS. It's the peripherals that change the most as hardware improves, so they take the most time. And the newer the camera, the more the hw and sw have diverged from the best-supported cams.
The cheapest way for you to get started is to use your 5D3 - which you can do in our fork of qemu. You can dump the roms (using software, no disassembly required), then emulate the full Canon OS and ML GUI, which can run your custom ML changes. There are limitations, mostly around emulation of peripherals. It's still very useful if you want to improve / customise the UI.
https://github.com/reticulatedpines/qemu-eos/tree/qemu-eos-v...
Re docs - they're not in great shape. It's scattered over a few different wikis, a forum, and commit messages in multiple repos. Quick discussion happens on Discord. We're very responsive there, and it's the best place for dev questions. The forum is the best single source for reference knowledge. From a developer perspective, I have made some efforts on a Dev Guide, but it's far from complete, e.g.:
https://github.com/reticulatedpines/magiclantern_simplified/...
If you want physical hardware to play with (it is more fun after all), you might be able to find a 650d or 700d for about $100. Anything that's Digic 5 green here is a capable target:
https://en.wikipedia.org/wiki/Template:Canon_EOS_digital_cam...
Digic 4 stuff is also easy to support, and will be cheaper, but it's less capable and will be showing its age generally - depends if that bothers you.
Thanks for your work keeping it going, and for those that have worked on it before.
Could this be a conflict with long exposures? Conceivably AF, too. The intervalometer will attempt to trigger capture every 5s wall time. If the combined time to AF seek, expose, and finish saving to card (etc) is >5s, you will skip a shot.
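In pseudo-C, the failure mode looks roughly like this (an illustrative sketch with made-up helper names, not our actual intervalometer code):

    /* Illustrative only. Assumes a hypothetical camera_busy() flag
     * and a monotonic millisecond clock. */
    #include <stdint.h>
    #include <stdbool.h>

    #define INTERVAL_MS 5000

    extern uint32_t wall_clock_ms(void);  /* hypothetical monotonic clock */
    extern bool camera_busy(void);        /* true during AF / exposure / card write */
    extern void trigger_capture(void);

    void intervalometer_task(void)
    {
        uint32_t next_shot = wall_clock_ms();
        for (;;)  /* a real task would yield between checks */
        {
            if (wall_clock_ms() >= next_shot)
            {
                if (!camera_busy())
                    trigger_capture();
                /* else: AF + exposure + card write overran the 5s slot,
                 * so this shot is simply skipped */
                next_shot += INTERVAL_MS;  /* scheduled on wall time, not completion */
            }
        }
    }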
When the time comes, compare the price of a used 5d3 vs a shutter replacement on the 5d2, maybe you'll get a "free" upgrade :) Thanks for the kind words!
I've done lots of 1/2 second exposures with a 3s interval, and it shoots some at a much shorter interval than 3s and some at 3s+. At one point, the docs said 5s was a barrier; maybe that was the 5D Mk II specifically. All of my cards are rated higher than the 5D can write (but it makes DIT much faster), so I doubt it is write speed interfering. What makes me think it is not the camera is that using a cheap external timer works without skipping a beat.
From what I've seen, the image capture process is state-machine based and tries to avoid sleeps and delays, which makes sense for an RTOS and professional photography.
If you care enough to debug it, pop into the discord and I can make you some tests to run.
I am a hobbyist nature photographer and it helped me capture some incredible moments. Though I have a Canon R7, the Canon 5d3 is my favorite camera because I prefer the feel of DSLR optical viewfinders when viewing wildlife subjects, and I prefer certain Canon EF lenses.
More here:
https://amontalenti.com/photos
When I hang out with programmer friends and demo Magic Lantern to them, they are always blown away.
Please recruit your programmer friends to the cause :) The R7 is a target cam, but nobody has started work on it yet. There is some early work on the R5 and R6. I don't remember the details for the R7, but from the age and tier, this may be one of the new-gen quad-core AArch64 models.
I expect these modern cams to be powerful enough to run YOLO on cam, perhaps with sub 1s latency. Could be some fun things to do there.
It’s been a huge blessing!
I am a compiler dev with decent low level skills, anything in particular I should look at that would be good for the project as well as my ‘new’ 6D? (No experience with video unfortunately)
I have a newer R62 as well, but would rather not try anything with it yet.
I've had a fun idea knocking around for a while for astro. These cams have a fairly accessible serial port, hidden under the thumb grip rubber. I think the 6D may have one in the battery grip pins, too. We can sample LV data at any time, and do some tricks to boost exposure for "night vision". Soooo, you could turn the cam itself into a star tracker, which controlled a mount over serial. While doing the photo sequence. I bet you could do some very cool tricks with that. Bit involved for a first time project though :D
The 6D is a fairly well understood and supported cam, and your compiler background should really help you - so really the question is what would you like to add? I can then give a decent guess about how hard various things might be. I believe the 6D has integrated Wifi. We understand the network stack (surprisingly standard!) and a few demo things have been written, but nothing very useful so far. Maybe an auto image upload service? Would be cool to support something like OAuth, integrate with imgur etc?
It's slow work, but hopefully you don't mind that too much, compilers have a similar reputation.
Hmm, that's a neat idea. The better term for it is 'autoguider'. Autoguiding is basically supplying correction information to the mount when it drifts off.
Most mounts support guiding input and virtually all astrophotographers set up a separate tiny camera, a small scope, and a laptop to auto guide the mount. It would be neat for the main camera to do it. The caveat is that this live view sampling would add extra noise to the main images (more heat, etc). But in my opinion, the huge boost in convenience would make that worth it, given that modern post processing is pretty good for mitigating noise.
The signals that have to be sent to the mount are pretty simple too, so I'll look at this at some point in the future. The bottleneck for me is that I have never got 'real' autoguiding to work reliably with my mount, so if I run into issues it would be tricky, as there's no baseline working version.
> Maybe an auto image upload service?
This sounds pretty useful; even uploading seamlessly to a phone or laptop would be a huge time saver for most people! I'll set up ML on my 6D and try out some of the demo stuff that uses the network stack.
Is there a sorted list of things that people want and no one has got around to implementing yet?
For networking, this module demonstrates the principles: https://github.com/reticulatedpines/magiclantern_simplified/...
A simple Python server that accepts image data from the cam, does some processing, and sends data back. The network protocol is dirt simple. The config file format for holding network creds, IP addr, etc. is really very ugly. It was written for convenience of writing the code, not convenience of making the config file.
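The cam side of that kind of exchange looks roughly like this (an illustrative sketch only: these function names stand in for the reverse-engineered Canon networking functions, and the length-prefixed protocol is made up, not the module's actual one):

    /* Illustrative sketch -- hypothetical names, not the real module. */
    #include <stdint.h>

    extern int socket_create(void);
    extern int socket_connect(int sock, const char *ip, int port);
    extern int socket_send(int sock, const void *buf, uint32_t len);

    /* Made-up protocol: 4-byte length header, then the image bytes. */
    int upload_image(const char *server_ip, const uint8_t *buf, uint32_t len)
    {
        int sock = socket_create();
        if (sock < 0 || socket_connect(sock, server_ip, 8888) < 0)
            return -1;
        uint32_t header = len;  /* assumes server and cam agree on endianness */
        socket_send(sock, &header, sizeof(header));
        socket_send(sock, buf, len);
        return 0;
    }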
You would need to find the equivalent networking functions (our jargon is "stubs"). You will likely want help with this, unless you're already familiar with Ghidra or IDA Pro, and have both a 6D and 200D rom dump :) Pop in the discord when you get to that stage, it's too much detail for here.
There's no real list of things people want (well, they want everything...). The issues on the repo will have some good ideas. In the early days of setting that up I tagged a few things as Good First Issue, but gave up since it was just me working on them.
I would say it's more important to find something you're personally motivated by, that way you're more likely to stick with it. It gets a lot easier, but it doesn't have a friendly learning curve.
I was assuming it would be possible to model the drift over time quite accurately, and adjust the model based on the last image. The model continuously guides the mount, and the lag in updates hopefully wouldn't matter - so you can use saved images, not LV. In fact, we can trigger actions to occur on the in-memory image just before writing out.
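Something like this, per saved frame (an illustrative sketch; the centroid helper and the serial pulse command are assumptions, not existing ML code):

    /* Illustrative drift-correction step -- hypothetical helpers. */
    typedef struct { float x, y; } point_t;

    extern point_t find_star_centroid(const void *frame);   /* assumed image analysis */
    extern void    mount_pulse(char axis, int duration_ms); /* assumed serial command */

    static point_t last_pos;
    static int     have_last;

    /* Hook this to run on the in-memory image just before write-out. */
    void guide_on_frame(const void *frame)
    {
        point_t now = find_star_centroid(frame);
        if (have_last)
        {
            float dx = now.x - last_pos.x;  /* drift since last frame, pixels */
            float dy = now.y - last_pos.y;
            /* Proportional correction; the gain would depend on focal
             * length, pixel pitch and the mount's guide rate. */
            mount_pulse('R', (int)(dx * 10.0f));  /* right ascension */
            mount_pulse('D', (int)(dy * 10.0f));  /* declination */
        }
        last_pos = now;
        have_last = 1;
    }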
This indeed seems like something someone would have written software for!
(I literally only want a raw histogram)
(I also have a 1Dx2 but that's probably a harder port)
Independent of that: how dangerous is ML dev to the cameras themselves (in terms of brick potential)? Permanently bricking a camera in the price range of the 1DX is not exactly my idea of a good time. :-)
I don't think I'd want to learn the ropes on a cam too expensive to psychologically say goodbye to. Maybe save that for the second port.
Thank you and the magic lantern team!
Heh, a little like saying "the main thing you need is to be able to play the violin, which is a small instrument with good tutorials".
Maaaaybe I'm hiding a tradeoff around complexity vs built-in features, but volunteers can work that out themselves later on.
You honestly don't need much knowledge of C to get started in some areas. The ML GUI is easy to modify if you stay within the lines. Other areas, e.g., porting a complex feature to a new camera, are much harder. But that's the life of a reverse engineer.
C genuinely is easy to pick up. It is harder to master. And you're right, for many domains there are better options now, so it may not be worthwhile mastering it.
Because it's an old language, what it lacks in built-in safety features is made up for by decades of very good surrounding tooling. You do of course need to learn that tooling, and choose to use it!
In the context of Magic Lantern, C is the natural fit. We are working with very tight memory limitations, due to the OS. We support single-core 200 MHz targets (ARMv5, no out-of-order or other fancy tricks). We don't include the C stdlib; a small test binary can be < 1kB. Normal builds are around 400kB (this includes a full GUI, debug capabilities, all strings and assets, etc).
Canon code is probably mostly C, some C++. We have to call their code directly (casting reverse engineered addresses to function pointers, basically). We don't know what safety guarantees their code makes, or what the API is. Most of our work is interacting with OS or hardware. So we wouldn't gain much by using a safe language for our half.
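Concretely, calling one of their functions looks something like this (a minimal sketch; the address and signature are invented, and the real values differ per camera and firmware version):

    /* Illustrative only: made-up address and signature. A reversed Canon
     * function is called by casting its ROM address to a function pointer;
     * there is no header and no API contract. */
    #include <stdint.h>

    #define ADDR_AllocateMemory 0xFF012345u  /* hypothetical ROM address */

    typedef void *(*alloc_fn)(uint32_t size);

    void *canon_alloc(uint32_t size)
    {
        /* No compiler-checked prototype: if the reversed signature is
         * wrong, this still compiles but corrupts state at runtime. */
        return ((alloc_fn)ADDR_AllocateMemory)(size);
    }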
I feel like this is a bit of an https://xkcd.com/2501/ situation.
C is considered easy to pick up by the average user posting HN comments because we have the benefit of years. The average comp sci student, who has been exposed to JavaScript and Python and might not know what "pass by reference" even means... I'm not sure they're going to consider C easy.
Honestly, C seems to be one of the easier languages to teach the basics of. It's certainly easier than Java or C++, which have many more concepts.
C has some concepts that confuse the hell out of beginners, and it will let you shoot yourself in the foot very thoroughly with them (much more than say, Java). But you don't tend to encounter them till later on.
I have never said getting good at C is easy. Just that it's easy to pick up.
Just to provide another data point here: C is a little easier to pick up today than it was in the 1990s or 2000s, when all you had was the K&R C book and a Linux shell. I regularly recommend CS50x to newcomers to programming via a guide I wrote up as a GitHub gist. I took the CS50x course myself in 2020 (just to refresh my own memory of C after years of not using it that much), and it is very high quality.
See this comment for more info:
Having a good, logical description of supported features, with a warning that if you do unsupported stuff things may break, is much more important than trying to define every possible action in a predictable way.
The latter approach often leads to an explosion of spec volume and gives way more opportunities for writing bad code: predictable in execution, but with problems in design and logic that are harder to understand, maintain, and fix. My 2c.
It's sad that the dev, who has done great work, has to spend time defending the C language from critters living under a bridge when it's a fixed element that isn't going to change.
Only if all other things are equal, which they never are.
Very impressive! Thankless work. A reminder to myself to chase down some warnings in projects I am a part of...
I have an xcconfig file[0] that I add to all my projects, which turns on treat-warnings-as-errors and enables all warnings. In C, I used to compile with -Wall.
I also use SwiftLint[1].
But these days, I almost never trigger any warnings, because I’ve developed the habit of good coding.
Since Magic Lantern is firmware, I’m surprised that this was not already the case. Firmware needs to be as close to perfect as possible (I used to write firmware. It’s one of the reasons I’m so anal about Quality).
[0] https://github.com/RiftValleySoftware/RVS_Checkbox/blob/main... (I need to switch the header to MIT license, to match the rest of the project. It's been a long time since I used GPL, but I've been using this file forever).
We build with: -Wall -Wextra -Werror-implicit-function-declaration -Wdouble-promotion -Winline -Wundef -Wno-unused-parameter -Wno-unused-function -Wno-format
Warnings are treated as errors for release builds.
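-Werror-implicit-function-declaration earns its keep in a codebase like this: without a prototype in scope, the compiler would silently assume a misspelled call returns int. A tiny illustration (hypothetical function name):

    /* With -Werror-implicit-function-declaration, a call with no
     * prototype in scope is a hard build error, not a warning. */
    extern void some_canon_function(int arg);  /* delete this declaration
                                                  and the call won't build */
    void example(void)
    {
        some_canon_function(42);
    }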
Great work, and good luck!
You only add one at a time, so you only need to fix one at a time, and you understand what you're trying to do.
It is, however, a real bitch to fix all compiler warnings in decade-old code that targets a set of undocumented hardware platforms with which you are unfamiliar. And you just updated the toolchain from GCC 5 to 12.
Unpopular topic. I talk about it anyway, as it's one of my casus belli. I can afford the dings.
BTW: I used to work for Canon's main [photography] competitor, and Magic Lantern was an example of the kind of thing I wanted them to enable, but they were not particularly open to the idea (control freaks).
Also, it's a bit "nit-picky," I know, but I feel that any software that runs on-device is "firmware," and should be held to the same standards as the OS. I know that Magic Lantern has always been good. We used to hear customers telling us how good it was, and asking us to do similar.
I think RED had something like that, as well. I wonder how that's going?
I have done a stint in QA, as well as highly aggressive security testing against a big C codebase, so I too care a lot about quality. And you can do it in C, you just have to put in the effort.
I'd like to get Valgrind or ASAN working with our code, but that's quite a big task on an RTOS. It would be more practical in Qemu, but still a lot of effort. The OS has multiple allocators, and we don't include stdlib.
Re firmware / software, doesn't all software run on a device? So I suppose it depends what you mean by a device. Is a Windows exe on a desktop PC firmware? Is an app from your phones store firmware? We support cams that are much more powerful than low end Android devices. Here the cam OS, which is on flash ROM, brings the hardware up, then loads our code from removable storage, which can even be a spinning rust drive. It feels like they're firmware, and we're software, to me. It's not a clearly defined term.
The main reason I make the distinction is because we get a lot of users who think ML is like a phone rom flash, because that's what firmware is to most people. Thus they assume it's a risky process, and that the Canon menus etc will be gone. But we don't work that way.
But I put as much effort into my mobile apps as I did into my firmware projects (it's been decades since I wrote firmware, BTW; the landscape is quite different these days. This is my first-ever shipped engineering project[0]. Back then, we could still use an ICE to debug our software).
It just taught me to be very circumspect about Quality.
I do feel that any software (in any part of the stack) I write that affects moving parts, needs to be quite well-tested. I never had issues with firmware, but drivers are another matter. I've fried stuff that cost a lot.
[0] https://littlegreenviper.com/TF30194/TF30194-Manual-1987.pdf
I think IoT has seen a resurgence in firmware devs... but regrettably not so much in quality. Too cheap to be worth it, I suppose. I can imagine a microwave could be quite a concerning product to design - there are some fairly obvious risks there!
Certainly, whatever you class ML as, we could damage the hardware. The shutter in particular is quite vulnerable, and Canon made the unusual design choice of flashing an important ROM with settings at every power off. Leaving these settings in an inconsistent state can prevent the cam from booting. We do try to think hard about contingencies, and program defensively. At least for anything we release. I've done some very stupid tests on my own cams, and only needed to recover with UART access once ;)
I haven't used an ICE, but I have used SoftICE. Oh, and we had a breakthrough on locating JTAG pinouts very recently, so we might end up being able to do something similar.
We had to add software dust removal, because the shutter kicked dirt onto the sensor.
I’m assuming that, at some point, the sensor technology will progress to where mechanical shutters are no longer necessary.
By the way, Rift Valley Software? I'm writing to you from Kenya, one of the homes of the Great Rift Valley. It is truly remarkable to drive down the escarpment just north of Nairobi!
Visiting the Rift Valley in Southwest Uganda was one of the most awesome experiences of my childhood. My other company, Little Green Viper, riffs on that, too.
I was born in Africa, and spent the first eleven years of my life, there.
Had to leave Uganda in a hurry, though (1973).
The photography world is mired in proprietary software/formats and locked-down hardware; and while it has always been true that a digital camera is "just" a computer, now more than ever it is painful just how limited and archaic on-board camera software is when compared to what we've grown accustomed to in the mobile phone era.
If I compare photography to another creative discipline I am somewhat familiar with, music production, the latter has way more open software/hardware initiatives, and the freedom of not having to tether yourself to large, slow, user-abusing companies when choosing gear to work with.
Long live Magic Lantern!
cries in .x3f & Sigma Photo Pro
> git clone https://github.com/reticulatedpines/magiclantern_simplified
*No judgement, maintaining a niche and complex reverse-engineering project must be a thankless task
One of those projects I wanted to take on but always backlogged. Wild that they've been on a 5 year hiatus -- https://www.newsshooter.com/2025/06/21/the-genie-is-out-of-t... -- that's the not-so-happy side of cool freeware.
It is actually easier to get started now, as I spent several months updating the dev infrastructure so it all works on modern platforms with modern tooling.
Plus Ghidra exists now, which was a massive help for us.
We didn't really go on hiatus - the prior lead dev left the project, and the target hardware changed significantly. So everything slowed down. Now we are back to a more normal speed. Of course, we still need more devs; currently we have 3.
Because a lot of features that cost a lot of money are only software limitations. On many of the cheaper cameras, the max shutter speed and video capabilities are limited in software to widen the gap with the more expensive cameras. So they do sell hardware - but opening up the software would make their higher-end offerings less compelling.
Camera manufacturers live and die on their reputation for making tools that deliver for the professional users of those tools. On a modern camera, the firmware and software needs to 100% Just Work and completely get out of the photographer's way, and a photographer needs to be able to grab a (camera) body out of the locker and know exactly what it's going to do for given settings.
The more cameras out there running customized firmware, the more likely someone misses a shot because "shutter priority is different on this specific 5d4" or similar.
I'm sure Canon is quietly pleased that Magic Lantern has kept up the resale value of their older bodies. I'm happy that Magic Lantern exists-- I no longer need an external intervalometer! It does make sense, though, that camera manufacturers don't deliberately ship cameras as openly-programmable computational photography tools.
Another thing: Magic Lantern adds optional features which are arbitrarily(?) not present on some models. Perhaps Canon doesn't think you're "pro enough" (e.g., you haven't spent enough money), so they don't switch on focus peaking or whatever on your model.
IIRC none of the EOS DSLRs had focus peaking from the factory, you need Magic Lantern -- Canon didn't program it at all.
Same here. I used to live in a fairly tall building in Manhattan, so found my way to the roof, found an outlet, and would set it up to do timelapses of sunsets over the Hudson.
The camera lens was pretty dirty, so they weren't great, but I enjoyed them: https://www.youtube.com/watch?v=OVpOgP-8c9A
However, a lot of the features exposed are more video oriented. The Canon bodies were primarily photo cameras that could shoot video in a cumbersome way. ML brings features a video shooter would need, like audio metering, without diving into the menus. The older bodies also have hardware limitations on write speed, so people use the HDMI out to external recorders to record a larger frame size/bitrate/codec than natively possible. Also, that feed normally has the camera UI overlay, which prevents clean recordings. ML allows turning that off.
There are just too many features that ML unlocks. You'd really just need to find the camera body you are interested in using on their site, and see what it does for that body. Different bodies have different features. So some effort is required on your part to know exactly what it can do for you.
Frankly: I once tried to maintain a help file and browsed through a lot of lesser known features. Took me days and I didn't even test RAW/MLV recording.
In fact, make this all devices with firmware: printers, streamers, etc.
But forcing it is never the right thing.
Extending this to enable software access by 3rd parties doesn't feel controversial to me. The core intent of copyright and patent seems to be "when the time limit expires, everyone should be able to use the IP". But in practice you often can't, where hardware with software is concerned.
Firmware should be open-source by law. Especially when products are discontinued.
I was pleasantly surprised to find out this was something very different.
https://en.wikipedia.org/wiki/Magic_Leap
> As of December 2024, the Magic Leap One is no longer supported or working, becoming end of life and abruptly losing functionality when cloud access was ended. This happened whilst encouraging users to buy a newer model.
Ah, that’s about how I thought that would end up.
It's not firmware, which is a nice bonus, no risk of a bad rom flash damaging your camera (only our software!).
We load as normal software from the SD card. The cam is running a variant of uITRON: https://en.wikipedia.org/wiki/ITRON_project
We're a normal program running on their OS, DryOS, a variant of uITRON.
This has the benefit that we never flash the OS, removing a source of risk.
The high end cams need ML less, they have more features stock, plus devs need access to the cam to make a good port. So higher end cams tend to be less attractive to developers.
https://www.canonrumors.com/inside-the-canon-eos-1d-c
> I was told by someone at Canon that they would "bring the might of its legal team" to anyone that attempts to modify, at the software level, the features of an EOS-1 camera body
"Someone"'s name was never revealed.
ML was never mentioned directly.*
ML team was never contacted by any Canon official or anyone speaking in their regard.
* Of course it is logical to assume there is a connection to ML.
200D is much newer, but less well supported by ML. I own this cam and am actively working on improving it. 200D has DPAF, which means considerably improved auto-focus, especially for video. Also it can run Doom.
Are there any ML features in particular you're interested in?
So ideally I'd imagine getting a second-hand 600D or 200D and having a similar setup. We did have a setup (previously) where a GoPro or mini-HDMI camera was captured and then processed by a Raspberry Pi 2/3/4, but this seems like overkill compared to the DroidCam setup.
And, of course, the optics on the 600D/200D are expected to be much better corrected than those on an iPhone or similar phone/mobile device.
Thanks for your kind attention.
AF with the 600D in live view: contrast detection only. Focus hunting galore. The 200D comes with usable DPAF.
I prefer 250D for streaming. Dual display support, no 30 minute limit for HDMI out (but cam display will go dark until some button action).
IMHO even 800x600 is okay for most streaming needs. And particularly when sound quality is of primary importance.
But this is actually really cool because, as it turns out, I've got an old Canon EOS DSLR that I haven't used for a long time, and I didn't know this thing existed before.
Around 2020, our old lead dev, a1ex, after years of hard work, left the project. The documentation was fragmentary. Nobody understood the build system. A very small number of volunteers kept things alive, but nothing worked well. Nobody had deep knowledge of Magic Lantern code.
Sounds like a bit of a dick move. Part of being a lead dev is making sure you can get hit by a bus and the project continues. That means documentation, a simple enough and standard build system (it's C, after all), etc. As a lead dev you should also ensure the people on the project get familiar with parts other than their own niche, so that one of them can succeed you. It doesn't take much work to avoid leaving a gigantic pile of trash behind you.
If anything, it's an even more self-responsible thing to do in the OSS world, as there isn't a chain of command, such as in the corporate world, enforcing this.
It's selfish to engage in a group effort with other people, building something together, without a conscious decision about continuity.
A job worth doing is a job worth doing well. Maybe I'm just a gray beard with unrealistic expectations, or maybe I care about quality.
If I put some code out on the internet and some other people find it and start using it, they message me, we talk, and I start adding things they suggest and working with others to improve this code. Then one day I wake up and don't want to do it anymore. At what point did I become obligated? When I published the code? When I first started talking to others about it (building a community)? When I coded their suggestions? When I worked with other coders?
Who gets to decide where the line is?
Or did I, by engaging in a civil conversation with you, implicitly promise to abide by the normal social rules of etiquette, as far as I am reasonably able?
It is similar with software. If you, say, put up a web site (or even just a README.md) containing blurbs about how useful your software is, extolling its virtues, you are implicitly promising future updates and support, to the best of your (limited) ability. If you need to step away from the project, you are expected to do so in an orderly fashion (again, to the best of your – possibly limited – ability), announce it publicly, etc.
If you have no web site, but you have given similar indications in conversations, the same principle is applicable, but you have fewer people to notify.
> Who gets to decide where the line is?
If a user can reasonably feel let down by your actions, or can reasonably feel that you have misled them, then I feel that a line has been crossed.