Sunlight contains copious amounts of 800-nm light, so this is probably completely non-hazardous.
1.2 watts over your entire head is fine.
1.2 watts in an 800-nm-diameter cylindrical path is "for some reason we decided to make the outer few millimetres of your skin explode, but we had to be in contact with your skin to manage that, because that power density of laser would have ionised the air before it reached you".
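The gap between those two scenarios is easy to check with back-of-envelope arithmetic (the head cross-section here is a rough assumption, ~0.07 m²):

```python
import math

P = 1.2  # watts

# 1.2 W spread over a head-sized area (assumed ~0.07 m^2) is benign
head_area = 0.07
head_intensity = P / head_area  # ~17 W/m^2

# The same 1.2 W focused into an 800-nm-diameter diffraction-limited spot
r = 400e-9
spot_area = math.pi * r**2          # ~5e-13 m^2
spot_intensity = P / spot_area      # ~2.4e12 W/m^2

# For comparison, full sunlight at the surface is roughly 1000 W/m^2
print(f"head: {head_intensity:.0f} W/m^2, spot: {spot_intensity:.1e} W/m^2")
```

So the focused spot is about eleven orders of magnitude more intense than the whole-head exposure, and nine orders beyond sunlight.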
Technically it is possible to focus on things a bit smaller than a wavelength, but not by much and some of the options don't even work far from the lens itself.
https://en.wikipedia.org/wiki/Diffraction-limited_system#The...
So it looks like it packs some... Power. But I guess the frequency of the light makes all the difference, or maybe exposure duration?
Static electricity discharges at around 400 to 600 W, but only for an extremely brief time (sub-millisecond, usually), so the actual energy transferred is minimal.
It takes around 1 Wh (about 4.2 kJ) to heat a liter of water by 1 °C, just to put things in perspective.
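Putting rough numbers on that comparison (the discharge power and duration are the assumed figures from above):

```python
# A mid-range static discharge: ~500 W for ~0.5 ms (assumed figures)
discharge_power = 500.0   # W
discharge_time = 0.5e-3   # s
discharge_energy = discharge_power * discharge_time  # joules -> 0.25 J

# Heating 1 litre of water by 1 degree C:
water_energy_J = 4184.0                     # specific heat of water, J/(kg*K)
water_energy_Wh = water_energy_J / 3600.0   # ~1.16 Wh

print(f"discharge: {discharge_energy} J, water: {water_energy_J} J")
```

So a single static zap carries roughly 1/17,000th of the energy needed to warm that litre of water by one degree.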
What makes you say that? By what metric?
Asking Google about one watt lasers inevitably produces junk that does not address the question at all.
Sifting and sorting through that junk is tiresome.
AI provides a succinct answer with as much depth as is requested, and can further clarify or expound on the original findings.
What’s not to like about such interactions? And, furthermore, how is the AI interaction in this case not objectively ‘better’ than a Google search?
The at-best-coincidental correlation with accuracy and truth.
Thinner and denser material makes the beam interact more per unit time, but any induced charge imbalances have closer neighboring material to rebalance with, so maybe you have a better chance of getting through the same number of particles over a shorter distance than over a longer one.
You definitely have a better chance faster than slower. It's when the beam slows to a critical speed that its non-collision, charge-based interactions build up.
Protons almost never collide directly with a nucleus because most of an atom is empty space; a proton beam disrupts the matter it's passing through mostly by electrostatic effect, and that's a function of position and time!
"In 1996, Bugorski applied unsuccessfully for disability status to receive free epilepsy medication. Bugorski showed interest in making himself available for study to Western researchers but could not afford to leave Protvino."
This is just sad all around through and through.
It took heroic efforts to leave for the West in those times. The best most people could swing was finding work in Moscow or Serpuhov and commuting there daily. And this is all considering that it was a 'science town': many of those who lived there worked at or adjacent to the accelerator institute in some way and were fairly well-educated individuals.
That said, in my humble (amateur!) opinion the framing from IEEE leaves a little to be desired, for one simple reason: they don't mention that most of what we're looking for is in the cortex (the outer layer) of the brain anyway!† And it kind of has to be, AFAIK. Namely:
fNIRS[1] is one of the four main brain imaging technologies (that I know of?): EEG, fMRI, fNIRS, and ultrasound. Like fMRI (& ultrasound?), fNIRS measures the oxygenation levels of different parts of the brain, which has been shown to be a close analogue for brain activity (more activity => more respiration, just like muscles). In this context, it's not enough to simply receive the signal you sent through -- you want to infer which emitter the signal came from so that you can infer the oxygenation levels of the regions it passed through/reflected-off-of.
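The core inference step in fNIRS is usually the modified Beer-Lambert law: attenuation at two wavelengths is converted into concentration changes of oxy- and deoxy-hemoglobin. Here's a toy sketch of that; the extinction coefficients, path length, and pathlength factor below are illustrative placeholders, not real calibration values:

```python
import math

# Extinction coefficients (rows: wavelength, cols: [HbO2, HbR]) -- made up,
# but with the right qualitative shape:
E = [[1.5, 3.8],   # ~760 nm: deoxy-Hb absorbs more
     [2.5, 1.8]]   # ~850 nm: oxy-Hb absorbs more
L = 3.0            # cm, source-detector separation (assumed)
DPF = 6.0          # differential pathlength factor (assumed)

def delta_conc(I_baseline, I_now):
    """Recover (dHbO2, dHbR) from intensities at the two wavelengths."""
    # Optical density change at each wavelength
    od = [math.log10(b / n) for b, n in zip(I_baseline, I_now)]
    # od[i] = (E[i][0]*dHbO + E[i][1]*dHbR) * L * DPF -- solve the 2x2 system
    a, b = E[0][0] * L * DPF, E[0][1] * L * DPF
    c, d = E[1][0] * L * DPF, E[1][1] * L * DPF
    det = a * d - b * c
    dHbO = (od[0] * d - b * od[1]) / det
    dHbR = (a * od[1] - od[0] * c) / det
    return dHbO, dHbR
```

This only works because you know which emitter (and hence which path through the tissue) each intensity reading came from, which is exactly why disentangling emitters matters.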
All of that is a very amateur, high-level overview, but hopefully it clearly supports my underlying point/question: how could you possibly make this work with a cross-head emitter-detector setup?? It seems impossible to disentangle more than one emitter's signals, and I'm not sure how you'd map oxygenation levels without more than one. The diagram in the article seems to support this confusion, given how chaotic it is.
Then again, fNIRS and EEG both already rely on some serious statistical wizardry to turn 16-128 1D time series into a 3D model of activity, so perhaps I'm underestimating our tools! For example, the addition of frequency modulation to the fNIRS setup is an ongoing area of frontier research, which seems insanely complex to me.
P.S. In case any of the hackers here haven't heard yet: BCI (Brain-Computer Interface) work is blowing up right now thanks to the unreasonable efficacy of LLMs for decoding brain activity[2][3][4], and it's a very hackable field! There's a healthy open-source community for both fNIRS[5] and EEG[6], and I can personally highly recommend the ~$1000 Unicorn EEG system[7] for hackers.
[1] https://en.wikipedia.org/wiki/Functional_near-infrared_spect...
[2] https://www.nature.com/articles/s42003-025-07731-7
[3] https://arxiv.org/abs/2309.14030v2
[4] https://arxiv.org/pdf/2401.03851
[5] https://openfnirs.org/2024/01/01/continuous-wave-spectroscop...
[7] https://www.gtec.at/product-configurator/unicorn-brain-inter...
†: As a human, you're not even a brain piloting a skeleton -- you're a 3mm wrap around the basic mammalian brain! https://en.wikipedia.org/wiki/Cerebral_cortex
I have friends who do research in this area pretty heavily and my impression is the same, that it's pretty limited to the outer layers of brain, and not super high in resolution.
There are advantages, but they are more practical than anything else. Of course, practical can be critical, but there's a large percentage of applications where it would have little utility. Hopefully things will improve.
"The quantum nature of light is why it's possible to shine a bright light through a human head without setting that head on fire... As long as it's the right color."
"To prevent light from reaching the detector from sources other than light transmitted through the head, the experiment was performed in a light-tight enclosure that surrounded the head. The enclosure was built using black foamboard and covered with two layers of black cloth and a laser safety curtain."
https://doi.org/10.1117/1.NPh.12.2.025014
It's very common to have a CMS feeding images to an LLM that extracts the contents and gives image files a meaningful file name and alt tag.
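The shape of that pipeline is roughly: caption the image with a vision model, then derive a slug and alt text from the caption. A hedged sketch, where `describe_image` is a stand-in for whatever vision-model API the CMS actually calls:

```python
import re

def describe_image(image_bytes: bytes) -> str:
    # Placeholder: a real system would call a vision-capable LLM here.
    return "A photo of a red bicycle leaning against a brick wall"

def slugify(text: str, max_words: int = 8) -> str:
    """Turn a caption into a URL-safe, filename-friendly slug."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words[:max_words])

def annotate(image_bytes: bytes) -> dict:
    caption = describe_image(image_bytes)
    return {"filename": slugify(caption) + ".jpg", "alt": caption}
```

The LLM call and the `.jpg` extension are assumptions; the point is just that one caption feeds both the filename and the alt attribute.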
Non-invasively. No "below threshold of detection". Beyond anything our scientists say is possible.
We're just not advanced enough as a species to do it yet.
We need to keep pushing these boundaries.
I suppose you could flood the brain with nano-machines which would latch onto all the bits and pieces and collect the data? But where would they store it? How would we get them all back out again?
I don't think it's possible to do this with our current understanding of physics. This is not a question of needing better technology, but needing a whole new universe with different physics altogether.
I'm not even sure what is more far fetched, this or superluminal travel. I'm actually leaning towards the former :D
Then consider a further constrained version in which the EMF-generating cubes may only generate EMF in response to external inputs, i.e. as in the Game of Life.
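For concreteness, the "only act in response to inputs" constraint is exactly how Conway's Game of Life works: a cell's next state is purely a function of its neighbors' current states, with no spontaneous activity. A minimal sketch:

```python
from collections import Counter

def life_step(alive):
    """alive: set of (x, y) live cells -> the next generation."""
    # Count how many live neighbors each cell has
    counts = Counter((x + dx, y + dy)
                     for (x, y) in alive
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next step iff it has 3 live neighbors,
    # or 2 live neighbors and was already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in alive)}

# A "blinker" oscillates between horizontal and vertical with period 2
blinker = {(0, 0), (1, 0), (2, 0)}
```

Every state change here is driven entirely by neighboring cells, which is the constraint being proposed for the cubes.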
We've kind of got this whole thing backwards.