JWST reveals its first direct image discovery of an exoplanet
322 points
1 day ago
| 22 comments
| smithsonianmag.com
| HN
GMoromisato
1 day ago
[-]
In case anyone is wondering, we are (sadly) very far from getting an image of this planet (or any extra-solar planet) that is more than 1 pixel across.

At 110 light-years distance you would need a telescope ~450 kilometers across to image this planet at 100x100 pixel resolution--about the size of a small icon. That is a physical limit based on the wavelength of light.
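
That ~450 km figure follows from the Rayleigh criterion. A quick sanity check in Python — the Jupiter-sized planet diameter and 500 nm wavelength are my assumptions, not stated above:

```python
# Rayleigh criterion: smallest resolvable angle theta ~ 1.22 * wavelength / aperture.
# Solve for the aperture that resolves 1/100th of the planet's diameter.
wavelength = 500e-9               # m, middle of the visible band (assumed)
ly = 9.461e15                     # metres per light-year
d_planet = 1.4e8                  # m, Jupiter-sized planet (assumed)
distance = 110 * ly

theta_per_pixel = d_planet / distance / 100   # rad, for a 100x100-pixel image
aperture_km = 1.22 * wavelength / theta_per_pixel / 1000
print(f"~{aperture_km:.0f} km")               # ~450 km
```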

The best we could do is build a space-based optical interferometer with two nodes 450 kilometers apart, but synchronized to 1 wavelength. That's a really tough engineering challenge.

reply
GolfPopper
1 day ago
[-]
We can do better than that! Using the Sun as a gravitational lens[1], and a probe at a focal point of 542 AU, we could get 25km-scale surface resolution on a planet 98 ly away. [2] This would be an immense and time-consuming endeavor, but does seem to be within humanity's current technological capabilities.

1. https://en.wikipedia.org/wiki/Solar_gravitational_lens

2. https://www.nasa.gov/general/direct-multipixel-imaging-and-s...
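
The ~542 AU figure is roughly where light grazing the Sun's limb comes to a focus. A sketch of that calculation, using standard constants and the textbook weak-lensing deflection formula (missions would actually sit a bit beyond this minimum distance):

```python
# Light passing the Sun at impact parameter b is bent by theta = 4GM/(c^2 b),
# so grazing rays converge at roughly F = b / theta = c^2 b^2 / (4 G M).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30       # solar mass, kg
c = 2.998e8        # speed of light, m/s
b = 6.957e8        # solar radius, m (closest usable impact parameter)
AU = 1.496e11      # m

F_AU = c**2 * b**2 / (4 * G * M) / AU
print(f"~{F_AU:.0f} AU")   # ~548 AU
```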

reply
ycui1986
19 hours ago
[-]
There are also alternative proposals to use Earth's atmospheric refraction for focusing, in a geometrically similar fashion to a gravitational lens. It seems more feasible than using the Sun's gravitational lensing.

https://en.wikipedia.org/wiki/Terrestrial_atmospheric_lens

reply
le-mark
10 hours ago
[-]
Does size of the planet matter? How about using Saturn or Jupiter?
reply
Balgair
10 hours ago
[-]
Yes, the larger the object you're using as a lens, the better the image. This follows from the lensmaker's equation: larger objects like Earth, Jupiter, or the Sun have larger radii and therefore give better resolution.
reply
os2warpman
22 hours ago
[-]
A maintenance-free power source capable of lasting the 200 or so years it would take to make it to 542 AU does not seem within humanity's current technological capabilities.

Parker at its highest velocity could make it there in a century, but it doesn't have to slow down and stop. Or station keep.

When we have a power source that can do 5kW (I just doubled Hubble; 542 AU would probably require much more for communications) for 100 years, I'll agree that its design can be refined, its lifespan extended to 200 years, and that 542 AU is within our reach.

reply
dotnet00
20 hours ago
[-]
With distances that big, is it even necessary to slow down much? The depth of focus is probably a couple dozen AU? Even if it takes the probe a century to get there, if you can squeeze a decade or two of observation out of it without slowing down, there's no reason to bother and instead send a new upgraded telescope every decade or so.

As far as power requirements go, assuming a doubled power demand from Hubble might be a bit excessive. A telescope that far out would have to be nuclear powered, so thermal regulation is 'free'/passive and RCS load is reduced (don't have to constantly adjust to point away from the Earth), which I expect are the biggest power draws on Hubble.

If we assume a 150 year lifetime, with a 3kW draw by EOL and current RTG tech... RTGs have ~6% efficiency, so for 3kW electricity, you need 50kW in heat. RTG electricity output drops ~2% per year, so after 150 years, you have 5% of the initial electrical output, and you get ~0.57W/g of Pu-238. Meaning, you need ~600kg of it to power the telescope this way [https://www.mathscinotes.com/2012/01/nuclear-battery-math/].
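
For anyone who wants to poke at these numbers, here is a literal reading of the inputs in Python. Note that applying the ~2%/yr decline to the electrical output, with 0.57 W/g taken as the thermal output of fresh fuel, gives a figure a few times larger than ~600 kg — the answer is sensitive to how the degradation terms are attributed, so treat everything here as rough:

```python
# Back-of-envelope RTG sizing under the assumptions stated above.
p_eol = 3000.0            # W electric required at end of life
years = 150
efficiency = 0.06         # thermal -> electric conversion
retained_per_year = 0.98  # ~2%/yr decline in electrical output
w_thermal_per_g = 0.57    # W (thermal) per gram of fresh Pu-238

p_bol = p_eol / retained_per_year**years   # electric power needed at launch
heat_bol = p_bol / efficiency              # thermal power needed at launch
mass_kg = heat_bol / w_thermal_per_g / 1000
print(f"~{mass_kg:.0f} kg of Pu-238")
```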

That's not a politically feasible amount, but it's not technically impossible with current/near future tech whose development could be spurred on by serious interest in this kind of mission.

'Proper' fission reactors can also do the job: you get higher efficiency and don't have to run the reactors for the entire 150 years, beyond accounting for decay (e.g. an RTG that needs to provide enough power to keep some clocks running, keep the electronics and batteries warm, and trigger whatever mechanism would start up the reactor). Probably less than 100kg of Pu-238 just from better reactor efficiency.

reply
os2warpman
18 hours ago
[-]
I agree with you.

It is indeed spherical frictionless cow-ly possible if we spend a trillion dollars to increase ORNL's annual Pu production capacity so that it doesn't take 200 years to make 600kg of Pu-238.

When someone demonstrates a complex device (let's set aside power generation; how about a valve? Or a capacitor?) that can last a century in space, I'll agree that it is actually possible.

That's what "current level of technology" means. The lego bricks exist, now, today, preferably in stock ready for immediate shipment on Digikey, and can be snapped into place.

reply
Dylan16807
1 hour ago
[-]
> It is indeed spherical frictionless cow-ly possible if we spend a trillion dollars to increase ORNL's annual Pu production capacity so that it doesn't take 200 years to make 600kg of Pu-238.

Oh come on, we used to make so much more of it.

I see estimates that it costs 4 million dollars per pound, plus some scaling costs?

A trillion dollars is not even close to "spherical frictionless cow" when the benchmark is "humanity's current technological capabilities", and a few billion is basically nothing at that scale.

> When someone demonstrates a complex device (let's set aside power generation how about a valve? Or a capacitor?) that can last a century in space I'll agree that it is actually possible.

Is a bunch of stuff lasting 50 years not good evidence? What is your threshold for "demonstrate", do we have to wait 200 years before you can be convinced?

reply
griffzhowl
12 hours ago
[-]
Wouldn't there be a problem putting 600kg (or even 100kg) of Pu-238 together, because of supercriticality? I couldn't think of a plausible design, but I know next to nothing about this area. Basically I've heard that if you put a lot of this stuff together it'll make a big explosion
reply
ben_w
9 hours ago
[-]
Criticality isn't hard to avoid: just split it between e.g. 343 units arranged in a 7x7x7 cube with 10cm gaps each way. Or more; I picked that separation and mass division by guessing.
reply
griffzhowl
6 hours ago
[-]
Yeah, I thought about something like that, but wouldn't that make many parallel power-generating units that only last as long as a single unit? Maybe the individual units could be subdivided further, and the subunits brought together only when a previous unit runs out of power. I don't know enough about how it would actually work.
reply
ben_w
5 hours ago
[-]
I don't think so. The radioisotope decays exponentially either way; decay only speeds up when the material goes critical, not when it's subdivided. The part I'm not sure about is the thermocouple and why it degrades.
reply
griffzhowl
6 minutes ago
[-]
I see. So the maximum time these units could provide power is the time it would take a subcritical mass to decay to the point that it's no longer useful. So the idea is unworkable for powering very long journeys
reply
ycui1986
19 hours ago
[-]
I don't think modern semiconductor devices will last more than 100 years, even without all the radiation. Making something last more than a few decades is very hard.
reply
alex_young
7 hours ago
[-]
Considering that the longest continually operating computer is in Voyager 2 and has been running for nearly 50 years I would be surprised if this was actually a problem. https://www.guinnessworldrecords.com/world-records/635980-lo...
reply
le-mark
10 hours ago
[-]
Does encasing electronics in lead help against high energy cosmic rays? With cheap kg to orbit one could assume the mass budget would be large.
reply
fpoling
8 hours ago
[-]
A Project Orion-type spacecraft can achieve 1,000 km/s and could cover 542 AU within 3 years. And this is absolutely feasible technically, just not politically.
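
A quick check of the transit time at that speed (constant 1,000 km/s, ignoring acceleration and braking):

```python
AU = 1.496e11                    # m
v = 1000e3                       # m/s, i.e. 1,000 km/s
year = 365.25 * 24 * 3600        # s
t_years = 542 * AU / v / year
print(f"~{t_years:.1f} years")   # ~2.6 years
```
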
reply
cubefox
11 hours ago
[-]
> A maintenance-free power source capable of lasting the 200 or so years it would take to make it to 542 AU

It wouldn't take nearly that long. The proposal is to use solar sails. There is a nice video about the details on YouTube: https://www.youtube.com/watch?v=NQFqDKRAROI

reply
perihelions
14 hours ago
[-]
[deleted]
reply
varjag
13 hours ago
[-]
It would then have to brake.
reply
minitoar
9 hours ago
[-]
Or just keep launching more so there’s always a usable one
reply
j_not_j
10 hours ago
[-]
Wouldn't be worth the trouble to try.

Why, you ask?

How do you point it? Where do you point it?

You have a "telescope" with a field of view of one planet's worth of pixels. But the planet is in orbit, so it drifts out of the imaged field of view within minutes.

Meanwhile your sensor is travelling away from the "lens" so transverse velocity would be needed to track the orbit at a delta-v and direction that is unknowable. Unknowable, because you have to know where the planet is, within a radius, to put your "sensor" in the right place in the first place.

Imagine placing a straw in a tree, walking a few km away, focusing a telescope on the straw, and hoping to look through the straw to see an airplane flying past. You have the same set of unknowables.

reply
__MatrixMan__
9 hours ago
[-]
I won't argue that it would be worth the effort, but it would be interesting to set something like that going and just keep scanning. A few years worth of data might turn up interesting things even if it wasn't particularly useful for finding those things a second time.
reply
nandomrumber
21 hours ago
[-]
For scale, Voyager 1 is about 167 AU away.
reply
seanhunter
15 hours ago
[-]
You’re never going to break into popular science reporting with that sort of attitude. If you are going to do the scale of a small thing, you have to compare it to the size of a banana or the width of a hair if it’s very small. For larger things, “football pitches” are the standard, although “blue whales” and “double-decker busses” are also acceptable units in some circumstances.

So, for scale, Voyager 1 is about 2.5 x 10^11 regulation football pitches away although they vary in size so it could be anywhere between 2.08 x 10^11 and 2.8 x 10^11. Now, see how much more relatable that is for a common person?

reply
nandomrumber
14 hours ago
[-]
reply
xg15
14 hours ago
[-]
We should definitely use TeraSmoots more as an astronomical unit.
reply
rishav_sharan
21 hours ago
[-]
I think Kipping of the Cool Worlds YouTube channel did a video arguing that we could just use Earth for the gravitational lensing, and that would be far cheaper:

https://m.youtube.com/watch?v=jgOTZe07eHA

reply
kilroy123
1 day ago
[-]
I was going to post the same exact thing and links.

Of all the possible space probes or missions we could do, I want this one more than any of them!

reply
catlifeonmars
6 hours ago
[-]
reply
JumpCrisscross
1 day ago
[-]
Do we have a recent cost estimate?
reply
bigiain
21 hours ago
[-]
I'd guess less than 1 or 2 hyped AI startup valuations that eventually collapse to nothing.
reply
HPsquared
16 hours ago
[-]
Those are just financial transactions though, not actual loss of much engineering time etc.
reply
nurettin
15 hours ago
[-]
ouch I thought I was cynical
reply
dmos62
18 hours ago
[-]
Thank you for the chuckle.
reply
thiht
15 hours ago
[-]
And more importantly, a story points estimate (t-shirt sizing is obviously XL)
reply
ainiriand
13 hours ago
[-]
Let's get an epic ticket ready.
reply
twothreeone
23 hours ago
[-]
"We used to look up at the sky and wonder at our place in the stars. Now we just look down, and worry about our place in the dirt."
reply
sho_hn
23 hours ago
[-]
It's cynical to assume OP was gunning for "it's too expensive". They might just want to know the size of the challenge to get it done.
reply
JumpCrisscross
5 hours ago
[-]
I’m genuinely curious what it would cost given recent launch-cost and fabrication advances. If above $10bn, we should keep working on those inputs. If below, it strikes me as more promising than another circular collider.
reply
twothreeone
22 hours ago
[-]
And it's ironic to scold others for missing a point while missing their point. All good though.
reply
amanaplanacanal
21 hours ago
[-]
I missed it too. What was your point?
reply
GMoromisato
1 day ago
[-]
Agreed! This might be easier than an interferometer. You just need a lot of delta-v
reply
cedws
20 hours ago
[-]
How do you decelerate once you get there though?
reply
GMoromisato
19 hours ago
[-]
By “delta-v” I mean propellant budget, not initial velocity. So you spend half your delta-v to accelerate out and the other half to decelerate.

But of course, the initial delta-v costs a lot of propellant because it has to push an almost full tank. By the time we have to decelerate the ship will be a lot lighter.

That’s why you needed a full Saturn 3rd stage to send Apollo to the moon, but just the service module to get back to Earth.

I realize now that “a lot of delta-v” is an understatement. 500 AUs is ridiculously far. To get there in under a century you’d need fission-fraction reactors, well beyond our current tech.

reply
kadoban
18 hours ago
[-]
> I realize now that “a lot of delta-v” is an understatement. 500 AUs is ridiculously far. To get there in under a century you’d need fission-fraction reactors, well beyond our current tech.

Voyager 1 is 166 AU away, it launched about 50 years ago. So wouldn't we just have to do about twice as well as that, or launch 2 of them in opposite directions? That sounds _very_ hard (Voyager is amazing), but it can't be beyond our current tech, right? We did fairly close to that 50 years ago.

reply
throw0101c
13 hours ago
[-]
> At 110 light-years distance you would need a telescope ~450 kilometers across to image this planet at 100x100 pixel resolution--about the size of a small icon.

Or use two (or more) telescopes that are 450km apart:

* https://en.wikipedia.org/wiki/Aperture_synthesis

* https://www.nature.com/articles/ncomms7852

reply
perlgeek
15 hours ago
[-]
It would be really cool to have an array of space-based telescopes spaced out evenly in the Earth's orbit around the sun, and use each as relay for the others that cannot directly communicate with Earth, because the path is blocked by the Sun.

Then you could do observations outside the solar system's orbital plane with a 2 AU synthetic aperture. And maybe even do double duty as a gravitational wave observatory.

(And yes, this is currently more science fiction than science, but it's at least plausible that we can build such a thing one day).

reply
nico
1 day ago
[-]
How big would the telescope/mirror/lens need to be to get a picture of something in the Alpha Centauri system, 4.37 light years away?

Also, could the image be created by “scanning” a big area and then composing the image from a bunch of smaller ones?

reply
joshvm
20 hours ago
[-]
It's a lot easier to reason about this using angular resolution, because that's normally what the diffraction limit formula is in reference to. If you know the angular diameter of the system (α) and the wavelength (say λ=500 nm for visible), you can use α ≈ λ/d and solve for the aperture of the telescope (d).

That puts a basic limit on the smallest thing you can resolve with a given aperture. You can use the angular diameter of the planet and the resolution you're after. For Alpha Centauri A it's 8.5 milliarcseconds, so O(1 μas) for a 100px image? That's just for the star!

The Event Horizon Telescope can achieve around 20-25 μas in microwave; you need a planet-scale interferometer to do that. https://en.wikipedia.org/wiki/Event_Horizon_Telescope It's possible to do radio measurements in sync with good clocks and fast sampling/storage, much harder with visible.
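
To put numbers on it, α ≈ λ/d can be inverted to get the baseline a given angular resolution demands. A rough sketch (the 1.22 Rayleigh prefactor is dropped, so figures are order-of-magnitude):

```python
import math

def baseline_for(angular_res_uas, wavelength_m):
    """Baseline needed for a target angular resolution, via alpha ~ lambda / d."""
    alpha_rad = angular_res_uas * 1e-6 / 3600 * math.pi / 180  # uas -> radians
    return wavelength_m / alpha_rad

# EHT: ~20 uas at 1.3 mm needs roughly an Earth-sized baseline (~13,000 km)
print(f"{baseline_for(20, 1.3e-3) / 1e3:.0f} km")
# ~1 uas in visible light (500 nm) needs only ~100 km of separation;
# the hard part is combining the light coherently, not the distance itself
print(f"{baseline_for(1, 500e-9) / 1e3:.0f} km")
```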

I'm not super up to date on visible approaches, but there is LISA which will be a large scale interferometer in space. The technology for synchronising the satellites is similar to what you'd need for this in the optical.

https://www.edmundoptics.com/knowledge-center/application-no...

https://arxiv.org/abs/astro-ph/0303634

reply
schobi
19 hours ago
[-]
How far off are we still for doing this with visual light?

Let's say you build single-photon detectors and ultra-precise time stamping. Would that get us near? Today, maybe we don't have femtosecond time stamping and detectors yet, but that is something I can imagine being built! Timing-reference distribution within fs over 100s of km? Up to now, nobody has needed that, I guess.

reply
joshvm
10 hours ago
[-]
The biggest issue is the sheer separation required. EHT operates in mm-wave light; visible light is 3-4 orders of magnitude shorter in wavelength. There are several smaller-scale interferometers. They can already do quite impressive things, because even a 50m baseline is better than any optical telescope that exists.

The way that timing works for EHT is each station has a GPS reference that's conditioned with a very good atomic clock - for example at SPT we use a hydrogen maser. The readout and timing system is separate from the normal telescope control system, we just make sure the dish is tracking the right spot before we need to start saving data (sampling around 64 Gbps).

I'm not sure what the timing requirements are for visible and how the clock is distributed, but syncing clocks extremely well over long distances shouldn't be insurmountable. LISA needs to solve this problem for gravitational waves and that's a million+ km baseline.

Some problems go away in space. You obviously need extremely accurate station keeping (have a look how LISA Pathfinder does it, very cool), but on Earth we also have to take continental drift into account.

reply
kadoban
18 hours ago
[-]
Is there another limit in terms of just: how many photons from object X even hit a telescope aperture of size Y from distance Z in, say, a year? We can't see the thing if no photons from it even intersect our telescope, right? Or maybe that limit is way, way less restrictive than the other...
reply
hnaccount_rng
18 hours ago
[-]
The number of photons themselves is not too restrictive (I think the Voyager probe still lands 6-ish photons per second on the receiving dish). And we can easily build sensors that detect nearly every photon (efficiencies above 99%). The tricky part will be differentiating between "source photons" and "background photons" (for Voyager we know exactly what to look for; here we wouldn't have any baseline for distinguishing).
reply
GMoromisato
1 day ago
[-]
It's linear, so if it is 25 times closer the telescope can be 25 times smaller. At 4.37 light-years we'd need an 18-kilometer telescope to image a Jupiter-sized planet at 100x100 pixel resolution.

If you only wanted 10x10 resolution you could get by with a 1.8 kilometer telescope.

Wikipedia has more: https://en.wikipedia.org/wiki/Angular_resolution. The Rayleigh criterion is the equation to calculate this.
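
The same Rayleigh-criterion arithmetic with the nearer distance plugged in (Jupiter-sized planet and 500 nm light assumed):

```python
wavelength = 500e-9          # m, visible light (assumed)
ly = 9.461e15                # metres per light-year
d_planet = 1.4e8             # m, Jupiter-sized planet (assumed)

theta = d_planet / (4.37 * ly) / 100           # rad per pixel, 100x100 image
aperture_km = 1.22 * wavelength / theta / 1000
print(f"~{aperture_km:.0f} km")                # ~18 km
```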

reply
yongjik
18 hours ago
[-]
LIGO (the famous gravitational-wave detector) is made of two 4-kilometer arms. According to its website:

https://www.ligo.caltech.edu/page/facts

> At its most sensitive state, LIGO will be able to detect a change in distance between its mirrors 1/10,000th the width of a proton! This is equivalent to noticing a change in distance to the nearest star (some 4.2 light years away) of the width of a human hair.

So I think two telescopes at 450km distance synchronized to "merely" (haha) a visible light's wavelength should be doable, if we throw a fuckton of money on that.

reply
parpfish
11 hours ago
[-]
Do these scientist know they can just say “enhance”?
reply
le-mark
10 hours ago
[-]
As someone who’s sat in meetings with nontechnical people and heard this exact request (“can’t you just enhance the image?”), I felt this.
reply
whitehexagon
15 hours ago
[-]
Even a single pixel in the IR range is pretty cool, but something inside me wants the RGB pixel color in the visible light range.

Is that a case of un-redshifting this pixel, or of needing the optical interferometer you mentioned with multiple single-frequency filters?

Or something new? Like an LHC-style accelerator, or a space-based rail gun, firing off a continuous stream of tiny cubesats towards the target and using the stream itself as a comms channel back.

Yeah, I know, this planet is burning, and all that effort for an RGB wallpaper seems crazy, but 'space stuff' also brings knowledge and hope.

reply
gsliepen
17 hours ago
[-]
If you drop the requirement that the image be taken at wavelengths our eyes are sensitive to, you could image it using radio telescopes. We already have this capability. The problem with radio interferometry, though, is that while you can get an effectively huge aperture, the contrast level will be very low, and I am guessing that after subtracting the signal from the star, the signal from the planet will not be above the noise level. Note that optical interferometers would have the same problem.
reply
bravesoul2
1 day ago
[-]
L2 is moving though right? Or does it need to be simultaneously receiving at the 2 points?
reply
GMoromisato
1 day ago
[-]
Sadly, it has to be simultaneous.

My (tenuous) understanding of interferometry is that you receive light from two points separated by a baseline and then combine that light in such a way that the wavelengths match up and reinforce at appropriate points.

Wikipedia has a decent summary: https://en.wikipedia.org/wiki/Aperture_synthesis

reply
shit_game
18 hours ago
[-]
I just wanna say that this is an exemplary comment. This is the kind of thing I read HN comments for.
reply
fsckboy
18 hours ago
[-]
>In case anyone is wondering, we are (sadly) very far from getting an image of this planet (or any extra-solar planet) that is more than 1 pixel across.

the image on the linked website is more than 1 pixel across: what are you saying? it's false/fake?

reply
quailfarmer
17 hours ago
[-]
The resolution limit of the image (the smallest angle at which two points can be distinguished) is larger than the angular size of the planet, so it appears as a point spread function; no detail can be resolved.
reply
drgiran
13 hours ago
[-]
Synchronization is solvable, and why stop at two? You could have a three-dimensional array of them, spread over very large distances. We have the technology now to pull this off.
reply
littlestymaar
3 hours ago
[-]
Why can't we use the motion of the telescope in space to make a synthetic aperture like SAR imagery satellites do?
reply
vlovich123
19 hours ago
[-]
I thought modern telescopes use software to merge images across a period of time / from multiple telescopes to get a significantly higher resolution than that achieved through the physical limitation of light. At least that’s how all the spy telescopes work and how various ground based telescopes collaborate afaik.

That’s in addition to gravitational lensing effects.

reply
jmyeet
20 hours ago
[-]
Take this even further and it eliminates a whole bunch of possible explanations for the Fermi Paradox.

If, like me, you believe the future of any civilization (including ours) is a Dyson Swarm then you end up with hundreds of millions of orbitals around the Sun between, say, the orbits of Venus and Mars. It's not crowded either. The mean distance between orbitals is ~100,000km.

People often ask why anyone would do this. Easy: two reasons, land area (per unit mass) and energy. With 10 billion people, that'd be a land area about the size of Africa for each person, with each having an energy budget of about the total solar output hitting the Earth, a truly incomprehensibly large amount of energy.

So instead of a telescope 450km wide (via optical interferometry), you have orbitals that are up to ~400 million kilometers apart. The resolution with which you could view very distant worlds is unimaginably high.

Why does this eliminate Fermi Paradox proposed solutions? One idea is that advanced civilizations hide. There is no hiding from a K2 civilization.

reply
behnamoh
1 day ago
[-]
Yet another reminder that space is huge and, no matter how big we can imagine, due to the realities of physics there is a good chance we might never be able to reach the far stars and galaxies.
reply
grues-dinner
1 day ago
[-]
The depressing, if that's the right word, counterpoint to all the "oh my god it's full of stars" deep fields crammed with millions of galaxies per square arcsecond is that the expansion of the universe means that nearly all of them are permanently and irrevocably out of reach even with near-lightspeed travel: they'll literally wink out of observable reality before we could ever get to them, leaving only a few nearby galaxies in the sky. At best you can reach the handful of gravitationally-bound galaxies in the Local Group.

Not that the Milky Way is a small place, but even most sci-fi featuring FTL and all sorts of handwaves has to content itself with shenanigans confined to a single galaxy due to the mindblowing, and accelerating, gaps between galaxies.

reply
sho_hn
23 hours ago
[-]
It's a shame, but in a glass-half-full sense, the fact that this planet is our little boat in the ocean and all that we've got is also a quite helpful focusing reminder and scope constraint.

That the stars are beyond reach might be depressing; how aggressively we are gambling our little boat is, on the other hand, actively scary, and perhaps the dominant limit on humanity's effective reach.

reply
kristopolous
19 hours ago
[-]
There was an article I saw about how long it would take the fastest spacecraft built with "non-speculative" physics - phenomena that have actually been observed in labs or in nature, ignoring any manufacturing and budget infeasibility (as in, no handwaving sci-fi) - and we're still talking about an entire lifetime to the next star.

In a way, we're still like an ancient village that can only travel by boats made of reeds.

reply
runarberg
52 minutes ago
[-]
I think this is the solution to the Fermi paradox: space is simply too big for civilizations across the galaxy to discover each other, let alone interact with each other.

Furthermore, I don't think technologically advanced civilizations will waste their time and resources colonizing new worlds; space is simply too big for that. They would conduct their explorations with telescopes, not probes; space is simply too big for probes.

reply
jodrellblank
12 hours ago
[-]
Might be Charles Stross’s blog post The High Frontier: http://www.antipope.org/charlie/blog-static/2007/06/the-high...
reply
UltraSane
21 hours ago
[-]
Biological humans won't reach the stars but our immortal robotic offspring can.
reply
runarberg
19 hours ago
[-]
Unlikely. There are both economical and moral reasons to never build a self-replicating robotic fleet of probes. I think a sufficiently advanced civilization will always prefer telescopes over probes for anything more distant than the nearest couple of solar systems.

Just to drive the point home: we are technically (but not yet economically) capable of creating small telescopes which use our sun as a gravitational lens, which would be able to take photographs of exoplanets. In the far future we could potentially build very large telescopes which can do the same and see very distant objects at fine resolution. That would be a much better investment than sending out self-replicating robotic probes.

reply
UltraSane
12 hours ago
[-]
"There are both economical and moral reasons to never build a self-replicating robotic fleet of probes."

Such as?

"I think a sufficiently advanced civilization will always prefer telescopes over probes for anything more distant than the nearest couple of solar systems."

What part of "immortal" don't you understand? Traveling at 1% of c doesn't feel slow if you just turn off or slow down your brain during the trip.

reply
runarberg
10 hours ago
[-]
I would expect the probe makers to want some benefit from the fleet of probes they sent; the only benefit I can think of is information about far-away objects, which is of scientific value. The probes' makers will therefore have to keep contact with an ever-expanding fleet of probes and sift through an exponentially increasing amount of information for millions of years. This just does not seem practical when you can simply build a telescope. Time may not pass that slowly from the perspective of the probe, but for the civilization on the homeworld this method is painfully slow. They could have built thousands or millions of telescopes in that time to gather the same information (albeit of lower quality). Which is why you would probably want to probe your nearest neighboring solar systems, but nothing farther.

As for the moral reasons not to send out a fleet of self-replicating probes: they are an extreme pollution hazard. An ever-expanding fleet of robots traveling across the galaxy over millions of years, growing in numbers exponentially, exploiting resources in foreign worlds, with nothing to stop them if something happens to their makers. Over millions of years these things would be everywhere and, in the best case, be a huge nuisance; at worst they would be a risk to the public safety of the worlds they travel to. With these risks, I believe a sufficiently advanced civilization would just build telescopes for their exploration needs.

reply
UltraSane
7 hours ago
[-]
You don't understand. The "probes" WOULD BE the creators. Biological life is far too fragile to survive interstellar travel but AI running on much more durable hardware makes it downright easy.

And they wouldn't have to be inherently self-replicating.

When you can live millions of years your idea of what is "slow" changes pretty drastically.

reply
m3kw9
1 day ago
[-]
Didn’t China manage to shoot lasers to lunar orbit for comms?
reply
aaronbrethorst
1 day ago
[-]
Anne-Marie Lagrange, lead author of the study

What an appropriate name for an astrophysicist. I wonder if she's distantly related to the namesake of the Lagrange point. https://en.wikipedia.org/wiki/Lagrange_point

Incidentally, although I'd never heard of A-M Lagrange before now, she's had an incredible career: https://en.wikipedia.org/wiki/Anne-Marie_Lagrange

reply
kergonath
1 day ago
[-]
> What an appropriate name for an astrophysicist. I wonder if she's distantly related to the namesake of the Lagrange point.

Scopus has 390 profiles of people named Lagrange. It is not a very popular family name but it is not uncommon either and some of them are bound to end up in academia, whether they are descendants of Joseph-Louis or not.

reply
fanatic2pope
10 hours ago
[-]
reply
louthy
1 day ago
[-]
Exactly my thought too, probably nominative determinism striking again
reply
wasabinator
10 hours ago
[-]
Another way to put that 111-light-year distance into perspective: the Voyager space probes have yet to pass 1 light-day from Earth.
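
A quick check, using Voyager 1's ~167 AU distance quoted upthread:

```python
AU = 1.496e11                      # m
c = 2.998e8                        # m/s
light_day = c * 86400              # ~2.59e13 m
dist_ld = 167 * AU / light_day
print(f"~{dist_ld:.2f} light-days")   # ~0.96
```
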
reply
bane
20 hours ago
[-]
I've been bearish on the JWST in the past. I thought it was an investment in science that could have been better made by waiting a bit for cheaper heavy lift and advances in computational imaging.

However, this is the culmination of the construction of a cathedral to science. Every stone laid one atop another, from our first comprehension of the cosmos to our emergence from our long dream as the center of a deity-constructed universe, has resulted in a discipline that can conceive not only of other spheres we could stand on, but of entire other systems of spheres we can now see.

This is magnificent.

reply
bearjaws
3 hours ago
[-]
Why bother doing anything then?

You can apply this logic to pretty much anything. The better thing is just around the corner, might as well wait.

I'm sure many other advancements were made by JWST that can be applied to your theoretical better telescope.

reply
thebruce87m
1 day ago
[-]
> Although there is a slight possibility that the newly detected infrared source might be a background galaxy

I understand the difficulty in what they are doing, but the scale of the error here is amusing. “We think we took a picture of something, but it might have been billions of things much bigger but further away.”

reply
dredmorbius
1 day ago
[-]
With time, orbital motion should distinguish the two possibilities.

Though at a 50 AU orbit around a smallish star, that might take a while.

reply
silverquiet
1 day ago
[-]
That actually makes one wonder if it will move enough within the lifetime of JWST to actually detect that orbital motion.
reply
dredmorbius
23 hours ago
[-]
That should be calculable.

Orbital mechanics, orbital period, and minimum determinable arc of JWST.

Though another thought is that doppler might also reveal velocity, if a spectrum could be obtained. Since the system is nearly perpendicular to the Solar System (we're viewing it face-on rather than from the side), those shifts will be small.

reply
meatmanek
17 hours ago
[-]
The HN title is subtly incorrect: this isn't the first direct image of an exoplanet from JWST. Here's an article from March showing several exoplanet images from JWST: https://science.nasa.gov/missions/webb/nasas-webb-images-you...

The key word "discovery" has been removed from the headline from TFA: "The James Webb Space Telescope Reveals Its First Direct Image Discovery of an Exoplanet". I.e, this is the first time that direct imagery was used to _discover_ a planet we didn't know existed previously.

reply
dang
11 hours ago
[-]
Ok, we've put discovery in there now. Thanks!

Submitted title was "James Webb Space Telescope reveals its first direct image of an exoplanet", which I'm sure was just a good-faith attempt to fit HN's 80 char title limit. I've achieved that by compressing to JWST now :)

reply
jl6
18 hours ago
[-]
> To further support their observations, Lagrange and her colleagues ran computer models that visualized the potential planetary system. The simulations yielded images that aligned with the ones captured by the telescope. “This was really why we were confident that there was a planet,”

Don’t get me wrong, I love that we are doing this work and have no reason to doubt that this is indeed an exoplanet image, but I view this kind of modelling as a pretty weak form of support for a hypothesis. Models are built from assumptions, which are influenced by expectations. They are not data.

reply
fc417fc802
1 hour ago
[-]
It depends on how the model was constructed and how it is used. Ideally you expect the vast majority of possible observations not to fit your model. So if they do then that's a strong indicator that you have what you expect. Whereas if they don't you can't be certain if the model is maybe just not quite right.
reply
GMoromisato
22 hours ago
[-]
Another cool thing is that this technique is biased towards planets far from their star, because it is easier to see a planet the farther it is from its bright star.

In contrast, current techniques are biased towards close-in planets. Both Doppler-shift and light-curve methods tend to detect close-in planets.

We’ll get a better idea of the distribution of planets with both techniques.

reply
BitwiseFool
1 day ago
[-]
The JWST is a marvel of engineering. It is also a machine designed around the restrictions of what the most powerful rockets of the 1990's were capable of. Just imagine how capable future telescopes will be now that we have multiple super-heavy launch vehicles with cavernous payload fairings in development.
reply
adriand
1 day ago
[-]
My fantasy is that at some point we’ll have a sufficiently powerful telescope to cause a galactic “Van Leeuwenhoek moment” where, just like that discoverer of microbes, we will suddenly see the galaxy swarming with spacecraft.
reply
sneak
20 hours ago
[-]
Assume for a moment that happens. Can you possibly imagine the chaos and turmoil that causes on Earth?
reply
unfunco
19 hours ago
[-]
No? I genuinely think most of the world will have moved on and will be caring about something else within a day, the world will be about as chaotic and tumultuous as it was shortly after the discovery of microbes.
reply
booleandilemma
14 hours ago
[-]
Microbes weren't discovered everywhere all at once though. I think if the entire planet found out (through modern media) people would go ballistic.
reply
dylan604
1 day ago
[-]
it's hard to commit to building a JWST type of payload around a not-yet-proven launcher. you'd want to wait until the "in development" becomes proven before planning some decadal mission around it.
reply
lawlessone
1 day ago
[-]
Ariane 5 seems pretty proven to me :D
reply
dylan604
23 hours ago
[-]
yeah, nothing says proven like being retired
reply
WalterBright
1 day ago
[-]
Yes, and too bad a twin or two weren't developed simultaneously, as the additional cost would be minimal - and now we have SpaceX rockets to launch them.
reply
ryanisnan
1 day ago
[-]
This is super exciting. It seems possible to one day receive higher resolution images of this type of find. Perhaps someone who is more familiar with this subject can opine.

The moment we have our first, direct-observation photo of an earth-like exoplanet will be a defining point in our history.

reply
pkaye
1 day ago
[-]
The Nancy Grace Roman Space Telescope is supposed to have an even better coronagraph as a technology demonstrator. They keep finding ways to improve on the technology.
reply
xorbax
1 day ago
[-]
If it's allowed to continue, which seems very shaky at the moment. NASA's wounds from DOGE will result in projects - even mostly completed ones - being trashed.
reply
JumpCrisscross
1 day ago
[-]
China is catching up on optics and launch. The torch of civilisation seems unlikely to be lost if we fuck it up that badly.
reply
ceejayoz
1 day ago
[-]
I’m not sure why this is downvoted. It’s entirely accurate.

https://en.wikipedia.org/wiki/Nancy_Grace_Roman_Space_Telesc...

> In April 2025, the second Trump administration proposed to cut funding for Roman again as part of its FY2026 budget draft. This was part of wider proposed cuts to NASA's science budget, down to US$3.9 billion from its FY2025 budget of US$7.5 billion. On April 25, 2025, the White House Office of Management and Budget announced a plan to cancel dozens of space missions, including the Roman Space Telescope, as part of the cuts.

reply
cryptoz
1 day ago
[-]
That will be done with a solar gravitational lens - there's a recent-ish NASA paper about it. Basically you send your probe to > 550 AU in the opposite direction of your target exoplanet, point it at the Sun and you will get a warped high-res photo of the planet around the Sun. You can then algorithmically decode it into a regular photo.

I think the transit time is likely decades, and the build time is long as well. But in maybe 40-100 years we could have plentiful HD images of 'nearby' exoplanets. If I'm still around when it happens I will be beyond hyped.
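The ~550 AU figure falls out of the light-deflection formula. A rough check, using standard constants and assuming light grazing the solar limb:

```python
# Minimum focal distance of the solar gravitational lens.
# Deflection angle for impact parameter b: theta = 4GM / (c^2 b),
# so rays grazing at b = R_sun converge at d = b / theta = R^2 c^2 / (4GM).
GM_SUN = 1.32712440018e20      # GM of the Sun, m^3/s^2
R_SUN = 6.957e8                # solar radius, m
C = 2.99792458e8               # speed of light, m/s
AU = 1.495978707e11            # astronomical unit, m

focal_m = R_SUN**2 * C**2 / (4 * GM_SUN)
print(f"minimum focal distance ≈ {focal_m / AU:.0f} AU")
```

Anything beyond that distance still works (rays with larger impact parameters focus farther out), which is why proposals quote 550+ AU.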

reply
sanxiyn
1 day ago
[-]
FYI: Direct Multipixel Imaging and Spectroscopy of an Exoplanet with a Solar Gravity Lens Mission. https://arxiv.org/abs/2002.11871
reply
dylan604
1 day ago
[-]
this is one of those where a missed alignment is going to be a huge bummer. an arcsecond of error at 550 AU means looking a long way off from what you wanted. you wouldn't know until you were at minimum distance, which is going to take generations to achieve. voyager 1 is only ~166 AU out and that took >40 years. so if you try to nudge your course, how many more generations would it be before it was aligned correctly?
reply
umeshunni
1 day ago
[-]
an arcsecond at 550AU is "only" 400,125 km. So, in theory, it's correctable in days.
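A small-angle sketch of that number (using the IAU value for the AU; it lands near the 400,000 km quoted):

```python
import math

AU_KM = 1.495978707e8                 # one astronomical unit, km
ARCSEC_RAD = math.pi / (180 * 3600)   # one arcsecond in radians

# Lateral offset of a 1-arcsec pointing error at 550 AU (small-angle approx.)
offset_km = 550 * AU_KM * ARCSEC_RAD
print(f"1 arcsec of error at 550 AU ≈ {offset_km:,.0f} km")
```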
reply
danparsonson
18 hours ago
[-]
I was aware that we have directly imaged exoplanets before, but I didn't know just how many we've seen now: https://en.m.wikipedia.org/wiki/List_of_directly_imaged_exop...

Not to take anything away from JWST - every one of these is an incredible achievement!

reply
neom
1 day ago
[-]
I really liked the image a lot so I emailed the author of the paper to see if she had a version without the clipart. She didn't, but said it was fine to remove it, so: https://s.h4x.club/YEuYLW8z (doesn't render tiffs I guess, so hit download)
reply
rwmj
1 day ago
[-]
So presumably they'll be able to take another photograph in a year or two and the planet will have visibly moved? (Jupiter's orbital period around the Sun is about 12 years, but this planet is about 10 times further from the star and has an estimated orbital period of 550 years.)
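A quick Kepler's-third-law check: taking TWA 7's mass as ~0.46 solar masses (an assumed figure for this sketch; it's an M dwarf) and the planet's ~52 AU separation, the ~550-year estimate does fall out:

```python
# Kepler's third law in convenient units: T [yr] = sqrt(a^3 / M)
# with a in AU and M in solar masses.
a_au = 52        # assumed orbital separation, AU
m_star = 0.46    # assumed stellar mass, solar masses

period_yr = (a_au**3 / m_star) ** 0.5
print(f"orbital period ≈ {period_yr:.0f} years")
```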
reply
monster_truck
1 day ago
[-]
Do NOT trust my napkin math, but I believe TWA 7 moves ~0.6 "pixels" (0.02 arcsec) per Earth-year.
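The napkin math roughly checks out. Assuming a circular, face-on orbit at 52 AU, ~111 ly distance, and a ~550-year period:

```python
import math

dist_pc = 111 / 3.2616               # light-years to parsecs
ang_radius_arcsec = 52 / dist_pc     # 1 AU at 1 pc subtends 1 arcsec
ang_circumference = 2 * math.pi * ang_radius_arcsec
motion_per_year = ang_circumference / 550
print(f"~{motion_per_year:.3f} arcsec of orbital motion per Earth-year")
```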
reply
imoverclocked
18 hours ago
[-]
The article starts using JSWT instead of JWST … is anyone here able to effect an edit?
reply
padjo
18 hours ago
[-]
It’s been truly fascinating to see f_p from the Drake equation go from a guess of maybe 0.5 as an upper bound to an increasingly confident 1 in my lifetime.
reply
mensetmanusman
11 hours ago
[-]
How would you feel if you were a planet mistaken for a galaxy?
reply
ge96
1 day ago
[-]
The star thing made me think "Who's that planetoid?"

edit: but it's the orange thing not the star

reply
grg0
6 hours ago
[-]
If you showed this to Trump, he would believe that is what an actual star looks like. Even a child knows that a star has five pointy ends!
reply
tiahura
1 day ago
[-]
How is it that we can spot a planet 110 light years away, but whether there’s another planet in the solar system past Pluto is a matter of legitimate scientific debate?
reply
meatmanek
1 day ago
[-]
Because exoplanets by definition are going to be found adjacent to stars, which limits the area you need to search. Planets are fairly common, so you don't need to look at that many stars before you find evidence of an exoplanet, provided you have a good-enough telescope.

A hypothetical planet beyond Pluto could be in a huge part of the sky: presumably the orbit of such a planet could be inclined about as much as Pluto's. The 17-degree inclination of Pluto's orbit means it could be in a 34-degree-wide strip of the sky, which, if I'm doing my math right, is about 29% of the full sky. If we allow for up to a 30-degree inclination, then that's half the sky.
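The math checks out: the fraction of the sky within ±i degrees of the ecliptic is sin(i), since the spherical zone between latitudes -i and +i has area 4πR²·sin(i) out of the sphere's 4πR²:

```python
import math

def sky_fraction(incl_deg):
    """Fraction of the celestial sphere within +/- incl_deg of the ecliptic.
    The zone between latitudes -i and +i has area 4*pi*R^2*sin(i),
    so the fraction is simply sin(i)."""
    return math.sin(math.radians(incl_deg))

print(f"{sky_fraction(17):.0%}")  # Pluto-like inclination -> ~29%
print(f"{sky_fraction(30):.0%}")  # 30-degree inclination -> 50%
```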

There's also the matter of object size and brightness. The proposed Planet Nine[1] was supposed to be a few hundred AU away, and around the mass of 4 or 5 Earths. The object discovered in this paper is around 100 M🜨, at around 52 AU from its star. Closer and larger. (Of course, there's a sweet spot for exoplanet discovery, where you want the planet to be close enough to be bright, but far enough away to be outside the glare of the star.)

1. https://en.wikipedia.org/wiki/Planet_Nine

reply
ethan_smith
16 hours ago
[-]
The paradox is explained by different detection methods: exoplanets like this one glow in infrared and are directly visible against the black of space, while Planet Nine would be extremely dim, non-glowing, and lost in the cluttered background of our galaxy's disk.
reply
charlieyu1
1 day ago
[-]
Because we are looking for much smaller planets.
reply
koolala
1 day ago
[-]
Why is it censored?
reply
skybrian
1 day ago
[-]
They have to block out the light of the star so that it doesn't overwhelm the light from the planet.
reply
umeshunni
1 day ago
[-]
Not sure if you're joking, but in case you're not - the star at the center is usually so bright that its light drowns out the light of anything nearby. In such cases, the star is covered so that the dimmer objects nearby are visible.
reply
m3kw9
1 day ago
[-]
Did it come from JPL?
reply
twothreeone
23 hours ago
[-]
Bunch of liberals.. shakes fist

/s

reply
sylware
15 hours ago
[-]
Nice web site: "enable javascript to continue"

Any direct link on the pic?

reply
timmg
1 day ago
[-]
How cool would it be to directly image artificial light on the "dark side" of a planet (like all the photos you see of lights on earth at night)?

I mean, even if there is life it's like 1 in a gazillion. But you could imagine some ML looking through all of its images to find planets, etc.

reply
ripped_britches
1 day ago
[-]
Or imagine another civilization looking at our lights with their telescope
reply
deadbabe
1 day ago
[-]
And imagine that the only reason, the ONLY reason, they haven’t completely blown us away, is because our planet happens to be one of the very rare planets where the ratio of the size of our moon and earth is in such a way that you can witness a total solar eclipse as a black hole in the sky once a year, and they would like to witness this event someday.
reply
qw
12 hours ago
[-]
What if FTL is not possible? In that case the attack will take a long time to reach us, and in the meantime we will be much more advanced technologically and could potentially defend ourselves.

In sci-fi we see warp drives, worm hole travel, phasers, photon torpedos and energy shields around ships. But what if none of that is possible? In that case, we might even have the technology to defend ourselves today if we manage to detect the attack in time.

It's a huge risk for a civilization to attack us. Even if they have capabilities that are beyond our technology, there might still be limitations based on the laws of physics. And if they attack us, they risk a response.

reply
krapp
1 day ago
[-]
That's no reason not to blow us away, eclipses still work if there are no annoying humans around to see them.
reply