This is Zeno's dichotomy paradox [1]. Finitely-defined, infinitely-complex systems (e.g. fractals and anything in chaos theory) are the escape.
[1] https://en.wikipedia.org/wiki/Zeno%27s_paradoxes#Dichotomy_p...
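(As a throwaway sketch of "finitely defined, infinitely complex": the logistic map is one line of arithmetic, yet at r = 4 its orbits never repeat and nearby starting points diverge exponentially. The function name and parameters below are just for illustration.)

```python
# Logistic map: a finite rule whose orbits are aperiodic (chaotic) at r = 4.
def logistic_orbit(x0, r=4.0, steps=50):
    x = x0
    out = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        out.append(x)
    return out

# Two starting points differing by 1e-12 end up O(1) apart after 50 steps:
a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-12)
print(abs(a[-1] - b[-1]))
```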
Sure. The point is the gedankenexperiment proves nothing. We don't need to "[record] an infinite amount of information" to encapsulate the infinity between any pair of real numbers.
A Noether Theorem for discrete Covariant Mechanics, https://arxiv.org/abs/1902.08997
Statements were made that shielding would improve after Ver 1.0; it got worse. Statements were made that sats would go low power over quiet zones; they do not.
Returning to your erudite point "and stuff"
The NASA Cosmic Background Explorer (COBE) satellite orbited Earth in 1989–1996 ...
Inspired by the COBE results, a series of ground and balloon-based experiments measured cosmic microwave background anisotropies on smaller angular scales ...
The sensitivity of the new experiments improved dramatically, with a reduction in internal noise by three orders of magnitude.
~ https://en.wikipedia.org/wiki/Cosmic_microwave_background
Hmmm, it appears the ground-based results were a dramatic improvement over the sat-based data.
Not necessarily so?
https://en.wikipedia.org/wiki/Xuntian
As of 2024, Xuntian is scheduled for launch no earlier than late 2026 on a Long March 5B rocket to co-orbit with the Tiangong space station in slightly different orbital phases, which will allow for periodic docking with the station.
Leaving aside the fact that an optical telescope isn't a microwave array, nor is it a Square Kilometre Array of radio telescopes with each component larger than your example ...
Putting an instrument in orbit has all the costs of development of a ground based instrument, additional costs to space harden and test, additional costs to lift, limited ability to tune, tweak or extend when in orbit, hard constraints on size and weight, and other issues.
Xuntian allows for periodic docking, sure. How will this not be more expensive and limited than (say) walking or driving out daily or weekly to much larger instruments on the ground?
https://en.wikipedia.org/wiki/Xuntian#Instruments <- Terahertz receiver
( https://en.wikipedia.org/wiki/Terahertz_radiation
This band of electromagnetic radiation lies within the transition region between microwave and far infrared, and can be regarded as either. )
> Putting an instrument in orbit has all the costs of development of a ground based instrument, additional costs to space harden and test, additional costs to lift, limited ability to tune, tweak or extend when in orbit, hard constraints on size and weight, and other issues.
Who is to say they won't pull a SpaceX, maybe even overtaking it, going fully reusable? Which allegedly lowers the costs, giving more economical access to space and lessening the constraints on payloads, while giving all the advantages of being in space?
With the additional cost of lifting it to orbit, the additional difficulty of in-orbit maintenance, the additional weight and dimension constraints of going to orbit, the additional costs of over-designing to harden for space, and limited access.
Yes, there are advantages to being in space. They vary by application.
That aside it's still cheaper to build an instrument or instrument array that's deployed on the ground.
Eg: SKA - definitely cheaper on the ground.
Unconvinced. Because building the whole system, not some isolated dishes somewhere, amounted to 1.3 billion EUR, and operating it up to 2030 adds another 0.7 billion EUR. 2 billion total. Chump change for sure.
Now we can compare that with the JWST and typical cost overruns in American boondoggle style, or look at the latest shining star, Euclid. Just 1.4 billion EUR for the latter.
Then there was Gaia at about 740 million EUR, with the orbiting article at 450 million EUR alone, plus another 250 million EUR for the data-processing org.
All of these with more or less conventional rocketry, and not co-orbiting anything for easier maintenance and upgrading.
My gut feeling tells me we will have cheaper and more reliable access to space, with larger payload capacity, necessitating less 'origamics' for the space parts, and that Chinese concept seems sound, too. Very much so, in fact.
How much that will cost I have no clue.
But again, if something like this becomes reality, no matter by whom, some former assumptions about cost and feasibility (because payload weight and dimension constraints are relaxed, needing less 'origamics') will have to be rethought.
That was my point, in general. Not limited to any special application.
In the interim, and as a general rule for all private entities, it'd be nice to not pollute the commons with unnecessary discharges and sparkles and to carry through on pinky promises to maybe do something about that.
There are a large number of continuous physical quantities, not only length (though all continuous quantities depend in one way or another on space or time, which are the primitive continuous quantities). The reason why you cannot encode an arbitrary amount of information into a specific value of such a quantity is that it is impossible to make an object for which such a quantity would have a perfectly constant value. All the values of such quantities are affected by noise-like variations, so you could store information only in the average value of such a quantity, computed over some time, and any such average would still be affected by uncertainties that limit the amount of information that can be stored.
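A toy numeric version of that argument (assuming Gaussian noise; every name and number here is made up for illustration): averaging N noisy readings only shrinks the uncertainty like 1/sqrt(N), so the storable bits grow absurdly slowly with measurement effort.

```python
import math
import random

# Toy model: true "length" L, each reading corrupted by Gaussian noise sigma.
# A value known to uncertainty u within a range R can hold ~log2(R/u) bits.
L, sigma, R = 1.0, 1e-6, 1.0

for n in (1, 100, 10_000, 1_000_000):
    readings = [L + random.gauss(0.0, sigma) for _ in range(n)]
    est = sum(readings) / n
    u = sigma / math.sqrt(n)  # standard error of the averaged value
    print(f"n={n:>9}  est={est:.8f}  u={u:.1e}  ~{math.log2(R / u):.1f} bits")
# Each extra bit of stored information costs ~4x as many measurements.
```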
One of the most constant lengths that has ever characterized an artificial object was the length of the international prototype meter kept in France and used to define the meter until 1960. To minimize length variations, that meter bar was made of platinum-iridium alloy and it was measured at a temperature kept as constant as possible.
Despite the precautions, which included gentle removal of dust and handling with soft grippers, the length of that meter bar fluctuated continuously. Even though the temperature was held as constant as possible, very small fluctuations still caused thermal expansion and contraction. Every time the bar was touched, a few metal atoms were removed from it, while other atoms from the environment remained stuck to its surface, changing the length.
All these continuous variations have nothing to do with the possibility of space being discrete, but they limit the amount of information that can be stored in any such value.
For now there exists absolutely no evidence of space or time being discrete rather than continuous. There have been attempts to make theories based on the discreteness of space and/or time, but until now they have not provided any useful result.
Instead, my way is simpler: it generates an absurd result, namely that if you could build and measure a thing to arbitrary precision you could encode infinite information into it. This is enough for me to reject the counter-factual without going through the messiness of thinking through hypothetical realistic experiments.
The one interesting place to consider is at the Schwarzschild radius of a black hole, where presumably information accumulates to an absurd degree, monotonically over time. I don't really know enough about it to comment intelligently, so I won't, except to note its existence.
The Schrödinger wave-function is expressed in a unit which is the square root of an inverse cubic meter. This fact alone makes clear that the wave-function is an abstraction, forever hidden from our view. Nobody will ever measure directly the square root of an inverse cubic meter.
Freeman Dyson, Why is Maxwell’s Theory so hard to understand?
https://www.clerkmaxwellfoundation.org/DysonFreemanArticle.p...
>> Just because you can't record something...
>>> Who said anything about recording?
Depends on what you're measuring. To illustrate why that isn't a facetious response, consider the difference between 'measuring' pi, 'measuring' a meter and 'measuring' the mass of a proton. (Or, for that matter, the relative mass of three of something to one of it.)
It's worse than that: you also need an unambiguous way of determining whether the needle is overlapping a stripe.
Pick your method. It’s the ratio of a circle’s circumference to its diameter.
I think it's reasonable to say we can't truly measure pi, though.
And you can neither know nor measure a random real.
> We know pi in the sense of "a unique real number satisfying many useful properties".
We know it a lot better than that. We have efficient programs that output the numerical value of pi for as many digits as you want.
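For instance, with the (third-party) mpmath library, arbitrary-precision digits of pi are a two-liner:

```python
from mpmath import mp

mp.dps = 60  # ask for 60 significant decimal digits
print(mp.pi)  # 3.14159265358979323846264338327950288419716939937510582097494
```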
There's a bunch of real numbers we can identify that are far harder to make use of or approximate, and don't have an easy exact description of their value.
Sure, but how would you compare those against a measurement?
So, what would the result of measuring, e.g., the length of something to infinite precision look like? It would look like two particles that are kept at rest relative to each other; the distance between them is the measured distance. Whether this distance has to be commensurable with the Planck scale or not is an interesting question, but it really can go either way.
And how do you do that in the face of Heisenberg uncertainty?
To actually try to answer your question: I don't know. But that's just me; and ain't there some interesting experimental setups with super-cooled crystals? In any case, inability to imagine something is hardly a convincing proof of anything.
Hm.
Momentum space being compact does seem weird..
Of course, if rather than a discrete group for space, you just have a discrete uh, co-compact(? Unsure of term. Meaning, there is a finite radius such that the balls of that radius at each of the sites, covers the entire space [edit: “Delone set” is the term I wanted.]), uh, if you take a Fourier transform of that lattice…
Err… wait, but if the lattice is a subgroup, how does the Fourier transform relate to…
I think the Fourier transform of a Dirac comb is also a Dirac comb (with the spacings being inversely proportional). If you multiply the Dirac comb by something first… Well, if you multiply it pointwise by e^(i x p_0 /hbar), then the Fourier transform will have the whole thing shifted by p_0, and this is periodic in (width of the spacing of the comb in momentum space).
So, if you consider all the pointwise multiples of a Dirac comb in position space (multiplying it by arbitrary functions), then I guess the image of that space under the Fourier transform, is going to in some way correspond to functions on S^1, I guess it would be functions periodic in the width of the comb in momentum space.
So, if instead of a regular comb, you jostle each of the Dirac deltas in the position space comb by a bit first (a different random amount for each)… I’m not sure quite what one would get…
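(A discrete sanity check of the comb-goes-to-comb claim, using a DFT as a stand-in for the continuous transform; grid size and spacing below are arbitrary choices.)

```python
import numpy as np

n, spacing = 1024, 16
comb = np.zeros(n)
comb[::spacing] = 1.0          # spikes every 16 samples

spectrum = np.abs(np.fft.fft(comb))
peaks = np.flatnonzero(spectrum > 1e-9)
print(peaks[:5])               # [  0  64 128 192 256]
print(np.diff(peaks[:5]))      # spacing 1024/16 = 64: a comb again,
                               # with inversely proportional spacing
```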
The operative word being "seems". Position and momentum (and indeed real numbers in general) are mathematical models that predict observations. But the observations themselves are the results of physical interactions that transfer energy, and those can only ever be discrete because energy is quantized.
Maybe one can make the argument that position itself is quantized (thus the position of the mirrors can not be varied continuously), but we do not have experimental reasons to believe space is discrete (and quantum mechanics does not require it to be discrete). And while it is pleasing to imagine it discrete (it is more "mathematically elegant"), we do not have any significant rigorous reasons to believe it is.
Edit: Moreover, if you want to describe (in quantum mechanics) the interaction between a finite system and the open environment around it, the only way to get a mathematical description that matches real-world experiments is to have continuously parameterized energy levels for the systems making up the open environment. If you assume that only discrete values are possible, you will simply get the wrong result. Most quantum optics textbooks have a reasonably good discussion of this. E.g.:
Quantum Optics by Walls and Milburn
Quantum Optics by Scully and Zubairy
Methods in Theoretical Quantum Optics by Barnett and Radmore
Sure, but can you measure those continuously-parameterized energies? I don't see how.
Continuously parameterized energies are no different from continuously parameterized space. They are part of the mathematical model we use to make accurate predictions, but we have no direct access to either, and (AFAICT) we cannot possibly have access to them because that would violate the no-cloning theorem.
The following is a (simplified, abstracted) way to measure an arbitrary energy value.
1. Set the system up so that the carrier of the energy is a photon (e.g. let the two-level system decay[1] or use some form of transduction or whatever).
2. Send that photon to pass by two semi-transparent mirrors at a certain (continuously parameterized) distance between each other.
3. If the photon passes through both mirrors (as detected by a photon detector at the other side), it means its energy is equal to some known constant divided by the distance between the mirrors. If it does not pass it means it has a different energy.
4. Repeat the experiment many times as you slowly vary the distance between mirrors.
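A toy numerical version of steps 2-4 (idealized: lossless mirrors, transmission exactly at the m = 1 resonance E = hc/2d, and a deliberately coarse tolerance so the scan actually hits it; all names and the photon energy are made up):

```python
import numpy as np

H, C = 6.62607015e-34, 2.99792458e8  # Planck constant (J s), c (m/s)

def transmits(energy_j, gap_m):
    # Idealized cavity: clicks when E matches hc/(2d) within the tolerance.
    return np.isclose(energy_j, H * C / (2.0 * gap_m), rtol=1e-4)

unknown = 2.5e-19  # J, the photon energy we pretend not to know
for gap in np.linspace(3.9e-7, 4.1e-7, 2001):  # step 4: scan the spacing
    if transmits(unknown, gap):
        print(f"click at d = {gap:.5e} m -> E = {H * C / (2 * gap):.4e} J")
# Clicks cluster around d ~ 3.973e-7 m, i.e. E ~ 2.5e-19 J.
```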
I guess in point 4 there is an issue that you need to repeat the experiment with a new realization of your photon each time. Does that have bearing on the initial point being discussed?
You are probably seeing this at this point, but just for completeness: this technique is no different from tuning a musical instrument with a tuning fork.
I papered over some details about whether we want to detect transmission or reflection and exactly what type of transparency the mirrors need to be, etc.
[1] Funnily, the usual way in which someone would prove that decay can happen at all does rely on the existence of a continuous spectrum of energies. This is the same topic I raised above when citing Quantum Optics textbooks.
Yes, I get that. But there are two problems. First, you cannot tune an instrument precisely. Precise tuning of a real musical instrument isn't even a meaningful concept because any wave with finite temporal extent has non-zero bandwidth. It's the same with energy. The exact same uncertainty relationship between frequency and time produces the Heisenberg uncertainty relation between energy and time, so it is not possible to produce an isolated photon at a known time with a known energy. The best you can do is produce a lot of photons so you don't have to wait forever for one of them to arrive at your detector. So the problem with the setup you describe is that in step 2 the concept of "that photon" is not well defined.
Second...
> I guess in point 4 there is an issue that you need to repeat the experiment with a new realization of your photon each time.
Yes, that too. But you need to do more than that: in order to get a meaningful result you'd need to produce photons with the same energy, i.e. you'd need to use a laser or some other kind of tuned cavity. But it is not meaningful to identify individual photons emitted from a laser because they are identical bosons.
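To hang rough numbers on the first point above (back-of-envelope only, using ΔE ≈ ħ/(2Δt) for a Fourier-limited emission window):

```python
HBAR, EV = 1.054571817e-34, 1.602176634e-19  # J s, J per eV

for dt in (1e-9, 1e-12, 1e-15):  # ns, ps, fs emission windows
    de_ev = HBAR / (2.0 * dt) / EV
    print(f"dt = {dt:.0e} s  ->  dE >~ {de_ev:.2e} eV")
# A femtosecond photon has ~0.3 eV of irreducible energy spread,
# a sizeable fraction of a visible photon's ~2 eV.
```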
You're assuming spacetime behaves like the set of reals (something with cardinality ℵ1, if you accept the continuum hypothesis), an object that, even if you stay confined within the bounds of pure mathematics, behaves in very, very weird ways.
It may be that spacetime at small scales maps better to a different kind of mathematical object and not even a grid-like one.
Jorge Luis Borges' way of telling a story as an analogy is beautiful and simple.
It takes the resources of the universe to simulate the universe.
The electron might be smaller. Its diameter is known to be smaller than 10^-22m, but could be much smaller than that.
Far below the Planck length, there are strong indications that the universe isn't continuous -- it's discrete. That there's an absolute limit to precision, something really quite analogous to a pixel. This elementary length could be somewhere around 10^-93 m.
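(For scale, the Planck length itself is fixed by three constants; this just evaluates the standard definition l_P = sqrt(ħG/c³) and implies nothing about the 10^-93 m figure.)

```python
import math

G, HBAR, C = 6.67430e-11, 1.054571817e-34, 2.99792458e8
print(f"{math.sqrt(HBAR * G / C**3):.3e} m")  # ~1.616e-35 m
```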
The theory that the Planck length has any significance is just a speculation.
Nobody knows how interactions would behave at distances so small and there are no known methods that could compress anything into volumes so small. There is no basis to believe that extrapolating the behavior from normal distances and sizes to the scale of the Planck length is valid.
There are pure speculations that are interesting, but in my opinion any speculation about the Planck length is not interesting, because nobody has been able to formulate any prediction based on such a speculation that can be verified in any way.
Most speculations about the Planck length are made by people who obviously know very little about the meaning of the so-called fundamental constants or about the significance of the useful natural units for physical quantities, to which the Planck length does not belong.
The Planck length is just one way to express the intensity of the gravitational interaction, i.e. an alternative to Newton's constant of gravitation. Its numeric value does not say anything about any other physical phenomena.
The numeric smallness of Planck's length is just an expression of how weak the gravitational interaction is in comparison with the other interactions. It does not have any other significance.
There are indications discrete space is plausible. It's actively debated.
There are also strong indications space is continuous, e.g. Lorentz symmetry. (This was recently the death knell for a branch of LQG.)
For example, I pound the picnic table. Presumably this is somehow transmitted through the entirety of the Earth, or at least through a tiny portion of it. But is there a cutoff? Where is the cutoff? Where is the effect simply too small to "register" in any conception of reality?
However, there are deeper things around. Seth Lloyd suggested that we use information density to derive general relativity from quantum theory: https://arxiv.org/abs/1206.6559
The article actually seems clear and straightforward to me. I'd only add that I wish there were links at the end regarding what scientists are proposing right now for resolving those mysteries.
That supposes in particular that general relativity is still a valid theory at these minuscule scales, something that I believe has never been experimentally verified.
If general relativity's equations do not work at the Planck scale, we know strictly nothing about black hole formation.
The fundamental challenges these experiments (and others) surface pose a deep challenge to the traditional narratives of Materialism or 'Physicalism' as our understanding of what existence is. In essence, science and human knowledge have leapt forward technologically over the past 400 and especially the past 100 years because we started assuming the world was physical in nature, materially and metaphysically, i.e. that it reduced to fundamentally physical things we could quantify and measure.
Yet, the older I get the more inclined I am to believe in some form of Idealism. Not only in Idealism, but I'm leaning towards the belief that some kind of fundamental universal Consciousness is the only fundamental property or baseline to the universe or to existence.
Time and space are not fundamental. Locality isn't true.
> I get the more inclined I am to believe in some form of Idealism. Not only in Idealism, but I'm leaning towards the belief that some kind of fundamental universal Consciousness is the only fundamental property or baseline to the universe or to existence.
> Time and space are not fundamental. Locality isn't true.
That's interesting, but I'm curious what the basis for that thought is.
Is the problem that the author can't let go of not understanding? That they need everything to be, for lack of a better term, quantifiable? That there must always be no boundary to our ability to measure? Do they demand an answer to why there is a limit to what we can see at the end of the universe (beginning/surface)?
Is this something AI shat out for clicks? Did they fire actual writers at quanta? Did they smoke a bunch of DMT? Are you ok, quantamagazine? Do you need us to call for help? I'm a bit annoyed that I had to read that, thinking there would be some point, that the top thread was exaggerating, but they weren't.
Edit: My understanding is that all bodies are the size that they are because the inner/outer pressure equalizes, and this has many equilibria based on the makeup of the body. Black holes are the ultimate degenerate last stand, where the makeup is basically raw "information" which cannot be compressed any further while still allowing said information to be recovered, which seems to be a fact of our universe. And it just so happens that the amount of information is proportional to the surface area of the black hole rather than its volume, which is probably a statement about how efficiently information can be compressed in our universe. One dimension is redundant?
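That area-scaling is the Bekenstein-Hawking entropy; taking it at face value (one bit per 4·ln2 Planck areas, Schwarzschild geometry assumed; function name is mine), the numbers are easy to run:

```python
import math

G, C, HBAR = 6.67430e-11, 2.99792458e8, 1.054571817e-34
M_SUN = 1.989e30  # kg

def horizon_bits(mass_kg):
    """S/(k ln 2) = A / (4 l_p^2 ln 2), with A = 4 pi r_s^2, r_s = 2GM/c^2."""
    r_s = 2.0 * G * mass_kg / C**2
    area = 4.0 * math.pi * r_s**2
    l_p2 = HBAR * G / C**3  # Planck length squared
    return area / (4.0 * l_p2 * math.log(2))

print(f"{horizon_bits(M_SUN):.1e} bits")  # ~1.5e77 for one solar mass
```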
Hoberman spheres expand and contract via forces that act only along the structures that make up the surface, and this is a simple classical object. I don't see why a more exotic physical object like a black hole couldn't only have properties defined by its surface.
The surface of space doesn't require something in a higher dimension pushing it out. That such an object may appear to have internal volume from our perspective doesn't need to be any more real than the apparent depth behind a mirror.
As you approach the event horizon, your frame of reference slows asymptotically to match that of the black hole while the universe around you fast-forwards toward heat death. I'd expect the Hawking radiation coming out at you to blueshift the closer you got, until it was so bright as to be indistinguishable from a white hole. You'd never cross the event horizon; you'd be disintegrated and blasted outward into the distant future as part of that Hawking radiation.
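(The "universe fast-forwards" half of this is just the static Schwarzschild time-dilation factor, sketched below for a hovering observer; the infalling case is different, as the replies note.)

```python
import math

def dtau_dt(x):
    """Clock rate for an observer hovering at r = x * r_s (Schwarzschild)."""
    return math.sqrt(1.0 - 1.0 / x)

for x in (10, 2, 1.1, 1.01, 1.0001):
    print(f"r = {x:>7} r_s : dtau/dt = {dtau_dt(x):.5f}")
# The rate -> 0 as r -> r_s, so distant clocks appear to race ahead.
```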
For the unfortunate person falling into the black hole, there is nothing special about the event horizon. The spacetime they experience is rotated (with respect to the external observer) in such a way that their "future" points toward the black hole.
In a very real sense, for external observers there isn't really an interior of the black hole. That "inside" spacetime is warped so much that it exists more in "the future" than the present.
Professor Brian Cox also says that from a string theory perspective there isn't really an inside of a black hole, it's just missing spacetime. I tried to find a reference for this but I couldn't find one. Perhaps in his book about black holes.
I'm no physicist so happy to be corrected on any of the above!
This is from a simplified model using black holes with infinite lifetime, which is non-physical. Almost all textbook Penrose diagrams use this invalid assumption and shouldn't be relied upon.
Fundamentally, external observers and infalling observers can't disagree on "what happens", just the timing of events. If external observers never see someone falling in, then they didn't fall in.
This isn't true. As long as the two observers can't communicate with each other, they absolutely can have different results. To put it in simpler terms, the requirement of physics is that an experiment has a unique result according to some rule, but different experiments can have different results even if they break our intuitions.
So, if you measure the position of a particle falling towards a black hole, you will see it disappear at the event horizon, and perhaps be radiated out later as Hawking radiation from that same event horizon. If you measure the position of the same particle while you yourself are passing through the event horizon, your instruments will record no special interaction and you will see the particle moving completely normally. Since you can't perform both experiments at once, and you can't relay any data from one to the other, there is no contradiction.
This is just another case of a duality in physics, similar to how some experiments measure electrons as point-like particles completely localized to a certain place, and others measure them as waves spread out over a very large area.
I don't believe this is the case -- the particle just becomes ever more redshifted.
* generally considered a large-scale fraud,
* perpetrated by (UK's Professor Brian) Cox
Most that I know would say that it was disappointingly too big and too general to make specific predictions tied to this specific universe we occupy, although it had early promise.
Brian Cox didn't even make the Wikipedia page, so it's difficult to claim he had any major role in perpetrating it as a large-scale fraud.
I am, of course, joking but she posts this sort of easy and empty clickbait.
I don’t think that not being able to communicate your results makes it not scientific.
Falling "through" a hologram on the surface would be physically indistinguishable to the person falling from falling into a volume.
In my mind that is what a black hole is, a spherical hole in the fabric of spacetime with matter bunched up around it in a very thin shell. That's why their radius is proportional to their mass, instead of their volume being proportional to it, because there is no volume.
The volume deviation is carried in the Ricci tensor
https://en.wikipedia.org/wiki/Ricci_curvature#Direct_geometr...
http://arxiv.org/pdf/gr-qc/0401099v1 (section 5.2)
https://math.ucr.edu/home/baez/gr/outline2.html (bullet point 9)
The highest-scoring answer at https://physics.stackexchange.com/posts/36411/revisions is a fairly reasonable attempt to calculate the volume deviation for nonspinning ~spherically symmetric bodies with the masses of the Earth (~ 10^2 km^3) and the Sun (~ 10^12 km^3), compared to the Euclidean-Newtonian volumes. Qualitatively, dropping these symmetries and the uniformity of the matter will tend to make the volume deviation larger.
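A quick weak-field cross-check of those orders of magnitude (I'm using δV ~ GMR²/c² and dropping the O(1) coefficient, so only the exponents should be trusted):

```python
G, C = 6.67430e-11, 2.99792458e8

for name, m, r in (("Earth", 5.972e24, 6.371e6), ("Sun", 1.989e30, 6.957e8)):
    dv_km3 = G * m * r**2 / C**2 / 1e9  # 1 km^3 = 1e9 m^3
    print(f"{name}: ~{dv_km3:.1e} km^3")
# Earth ~1.8e2 km^3, Sun ~7.1e11 km^3: same ballpark as the quoted
# ~10^2 and ~10^12 km^3.
```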
> there is no volume
The volume deviation becomes enormous for compact (relativistic) objects, and for black holes one has to exercise care in even defining a volume, since naive choices of coordinates will show a divergence. Typically the choice of a 3-space inside the horizon has a time-dependency, and most choices of 3-space will tend to grow towards the future.
Christodoulou & Rovelli's (C&R) approach: https://arxiv.org/abs/1411.2854 ("it is large" for the largest volume bounded by a BH's area should win some sort of award for understatement). https://arxiv.org/abs/0801.1734 (reference [5] of the 2014 C&R paper) takes a slightly different path to the same conclusion.
YC Ong (several other references, and a number of related later papers) has a nice article at https://plus.maths.org/content/dont-judge-black-hole-its-are... The prize quote: "To give an idea of how large the interior of a black hole could become, this formula estimates that the volume for Sagittarius A*, the supermassive black hole at the centre of our Milky Way Galaxy, can fit a million solar systems, despite its Schwarzschild radius being only about 10 times the Earth-Moon distance. (Sagittarius A* is actually a rotating black hole, so its geometry is not really well-described by the Schwarzschild solution, but this subtlety does not change the result by much.)" And: "These examples show that, in addition to the surprising property that the largest spherically symmetric volume of a black hole grows with time, in general, the idea that volume of a black hole grows with the size of its surface area is wrong. In other words, by comparing two black holes from the outside, we cannot, in general, infer that the 'smaller' black hole contains a lesser amount of volume."
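A toy run of the C&R asymptotic formula V ≈ 3√3 π m² v (geometric units, with the advanced time v crudely set to the age of the universe; function name is mine, and the output is an order of magnitude at best):

```python
import math

G, C, M_SUN = 6.67430e-11, 2.99792458e8, 1.989e30

def cr_volume(mass_kg, age_s):
    """Largest interior 3-volume per Christodoulou-Rovelli,
    ~3*sqrt(3)*pi*m^2*v, with m = GM/c^2 and v = c*t, converted back to SI."""
    m_geom = G * mass_kg / C**2
    return 3.0 * math.sqrt(3.0) * math.pi * m_geom**2 * C * age_s

age = 13.8e9 * 3.156e7  # seconds
print(f"~{cr_volume(4.1e6 * M_SUN, age):.0e} m^3")
# ~1e47 m^3 for Sgr A*, vs a naive 4/3 pi r_s^3 of only ~7e30 m^3.
```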
The area of a Schwarzschild horizon is straightforward to define, and unique for constant mass. (Procedurally you could count the number of unique tangent planes at r_{schwarzschild}, but there are other ways of arriving at the area).
If your sweater "weave" represents a set of orbits around the black hole and your ants free-fall along those rather than walk, you are getting close to a solution of the geodesic equations for a black hole. A free-falling ant will stick quite firmly to geodesic motion around a black hole. However, there are definitely plunging orbits that will take the orbiting ant inside the horizon, and there is an innermost stable circular orbit (ISCO) that isn't solid like the yarn: a small perturbation of an orbiting ant there will knock it into or away from the BH. But an un-knocked ant can circle forever.
The ISCO (at 3 r_{schwarzschild} = 6GM/c^2 for a Schwarzschild black hole) is quite a lot of ant-lengths above the horizon of a BH (at r_{schwarzschild} = 2GM/c^2). Spinning black holes have a narrower gap between the ISCO and the point of no return.
The point of no return for a spinning hole is just that: the ant can't backtrack, but will continue moving "forward" from there, and for a massive enough black hole it could do so for an hour or more before it feels the discomfort that precedes spaghettification. The "no drama" conjecture holds that the freely-falling ant won't even notice crossing the point of no return, although astrophysically it is likely to have noticed things falling inwards on different trajectories even above the point of no return (at ISCO around an astrophysical black hole the ant has a good chance of being knocked by something on an intersecting trajectory).
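(A minimal sketch of where that ISCO comes from: the Schwarzschild effective potential in G = c = 1 units. At L = sqrt(12) M the stable and unstable circular orbits merge, so dV/dr just grazes zero at r = 6M.)

```python
import math

def v_eff(r, L, M=1.0):
    """Schwarzschild effective potential per unit mass (G = c = 1)."""
    return -M / r + L**2 / (2.0 * r**2) - M * L**2 / r**3

L_isco = math.sqrt(12.0)  # angular momentum at the ISCO
for r in (5.9, 6.0, 6.1):
    h = 1e-6
    slope = (v_eff(r + h, L_isco) - v_eff(r - h, L_isco)) / (2.0 * h)
    print(f"r = {r} M : dV/dr = {slope:+.1e}")
# The slope grazes zero at r = 6M (= 3 r_s), well outside the horizon at 2M.
```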
> fabric of spacetime
Misleading terminology. It's not a substance. Spacetime is nothing more than a collection of possible trajectories, and none of them needs to be realized. (Our universe has an enormous number of unrealized trajectories compared to ones on which real bodies move).
> bunched up in a very thin shell
The "thin shell" is just a set of points of no return, and for an astrophysical black hole where exactly each point is can be rather fuzzy since it depends on the outside universe which is filled with moving ants (and galaxies).
Space-time is not Euclidean geometry under GR.
We don’t know this. It has been as far as we’ve measured. But there are compelling reasons to at least consider discrete spacetime.
The former is the boundary, the latter is the interior + boundary. One of the great arbitrary naming conventions of math.
In any spacetime you care to do an ADM split on, there are an infinite number of real-valued smooth scalar fields whose gradient is everywhere non-zero and timelike available to serve as the coordinate time.
In the standard cosmology the at-rest isotropic and homogeneous distribution of matter provides an obvious coordinate time function, but physics still has to work for other inertial and accelerated observers, so there is nothing preventing anyone from using a non-comoving observer's proper time in the ADM split.
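For reference, the 3+1 line element that makes "coordinate time" concrete in an ADM split (standard form; N is the lapse, N^i the shift, h_ij the spatial metric):

```latex
ds^2 = -N^2\,dt^2 + h_{ij}\,\bigl(dx^i + N^i\,dt\bigr)\bigl(dx^j + N^j\,dt\bigr)
```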
Nits:
* Hamiltonian formulation of general relativity
* ADM formalism
The expansion is in the metric, and most visible (and most amenable to interpretation as an expansion of space) when using comoving coordinates. However, we are allowed to work in any system of coordinates, and when we do a general coordinate transform, we lose the interpretation of the metric expansion as an expansion in space.
("We see in both [Milne & de Sitter] cosmologies that the interpretation of redshift as an expansion of space is dependent upon the coordinates one chooses to calculate z." -- https://academic.oup.com/mnras/article/422/2/1418/1036317 final sentence in §3.4 de Sitter space).
The standard cosmology's FLRW spacetime (and also de Sitter space, where the vacuum constant positive scalar curvature makes for an easier-to-understand expansion history) is time-orientable. There are more points in spacetime in the future direction from an arbitrary point anywhere in the entire spacetime, than in that point's past.
However, the real distribution of matter in our universe is lumpy, as is easily demonstrated right here on Earth. Standing on the ground you can tell that locally there is neither isotropy nor homogeneity (you can see quite far up into the sky, but not so far into the ground). Moreover, there are differences in isotropic pressure, convection, heat conduction, and shear stresses between the air and ground, none of which are features of the perfect fluids in the FLRW model. So locally the FLRW metric is not suitable, and since the Earth is very much not vacuum, de Sitter is even less suitable. Consequently it should not offend anyone to read that Manhattan isn't undergoing cosmological expansion. Most things within ten kiloparsecs of here are gravitationally collapsing. It requires coordinate contortions to interpret interplanetary or interstellar space in the Milky Way as expanding, and switching to different systems of coordinates will blow up that interpretation.
A "swiss cheese" universe with collapsing vacuoles in an expanding cosmology reflects that it's only when we go from ~kpc length scales to ~Gpc length scales that the distribution of matter behaves enough like perfect fluids to allow for an exact solution to the Friedmann equations. The collapsing vacuoles are usually modelled like Lemaître-Tolman-Bondi with a thin shell boundary; this results in a vacuole that time-orientable and has fewer spacetime points in the future direction. "Hole" time functions and "cheese" time functions must be different (hole clocks tick slower as they fall towards the centre).
This is just another proof that spacetime as a real entity does not exist. If there are infinitely many spacetimes, spacetime cannot be the fabric of the universe but it can only be fodder for academic careers. There is one universe but an infinite number of spacetimes. So which spacetime is the true one? None of them. You pick and choose one and write a paper on your chosen spacetime and collect your academic points. Another physicist chooses another popular spacetime and she writes a paper on that spacetime. The whole thing is a joke.
It turns out that one can knit together simplified models and build up a good description of a real complex system, but this has been known since at least the dawn of thermodynamics, if not since the time of Newton.
Indeed, NASA and its counterparts have been working for decades with an approximation of the solar system: new bodies inside Neptune's orbit keep being found practically every year. It's not remotely likely that we know all the bodies of the solar system even out to Neptune (much less beyond), let alone their orbital parameters and how those evolve over mere millions of years <https://en.wikipedia.org/wiki/Stability_of_the_Solar_System>. Does that mean publishing the results of studies of long-running models of our solar system "can only be fodder for academic careers"? Even if it spots anomalies that lead to the actual discovery of very dim bodies?
> collect your academic points
The academic points mostly come from being cited by an author poking holes in your paper. Go investigate google scholar.
This is called the academic dialogue. And yes, a variety of scores are kept (e.g. the loathsome h-index).
But I guess you don't care, because you are happy writing obviously ignorant nonsense on hacker news for engagement and upvotes, right?
> There is one universe but an infinite number of spacetimes
There is possibly one unique spacetime that fully describes the universe, but guess what, we simply do not have enough computer power on the planet to validate such a model.
Here's the recent state of the art in computational cosmological simulation, n-body with n in the hundreds of billions:
https://flamingo.strw.leidenuniv.nl/ (their page)
https://skyandtelescope.org/astronomy-news/largest-ever-comp... (decent write up)
There are more than a hundred billion stars in this galaxy; there are more than a hundred billion galaxies in our sky. And there are more than a hundred billion particles in a star. And there are lots of motes of dust and blobs of gas between stars. So we're quite a few orders of magnitude too small in n in our n-body simulations to be able to pick out our own universe, exactly described, from a large set of simulations.
We also obviously don't have an infinite number of observations, since neutrinos and gravitational waves are hard to detect at all (and we only see a small part of the frequency space of both), we've only just started having really good views in the near infrared (JWST), our views in X-Rays and gamma rays are fuzzy because of technological limits, and so on and so forth. We are just at the start of https://en.wikipedia.org/wiki/Multi-messenger_astronomy four centuries after Galileo used a telescope to find the four biggest moons of Jupiter. (Incidentally, three moons of Jupiter were just discovered two years ago, because hey telescopes are not all-powerful and all-seeing. Guess how they figured out where to point the Victor M. Blanco Telescope?).
> The whole thing is a joke
Honestly, it's amazing that you aren't embarrassed by how obvious your ignorant contrarianism is.
Write back if you're actually interested in expanding your knowledge rather than mocking people you don't know whose work you know very very very very little about.
Ha ha just joking, I know you don't care.
This takeaway is a variation on an old theory. Perhaps we are already inside a black hole, and the expansion of spacetime is the rate of the black hole's growth in another universe.
The sum of an infinite series can be finite [1].
[1] https://www.mathcentre.ac.uk/resources/uploaded/mc-ty-conver...
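A two-line check for the Zeno-flavoured series upthread, 1/2 + 1/4 + 1/8 + ...:

```python
from fractions import Fraction

total = sum(Fraction(1, 2**n) for n in range(1, 21))
print(total, 1 - total)  # 1048575/1048576 1/1048576 -> the gap halves
                         # at each step; the infinite sum is exactly 1
```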
Everything is infinite if we think this way.
Is this actually experimentally confirmed?
Sometimes particles arrive from the deep universe with much more energy than anything achieved in labs, and some theories say their energy is enough to create a BH, but decades of observations have given no confirmation, maybe because the scientific method needs tens or better hundreds of appearances in one place to confirm, and we have fewer than half a dozen.
(Saying this as someone who's read Kant twice and agrees with most of what he claimed. Outside his taste in music.)
I assume you mean the Critique of Pure Reason? Kant's Oeuvre is quite vast, though it wouldn't be unreasonable to have read the 3 critiques twice.
>What would Kant add to this discussion that the physicists in the article haven't considered?
Kant himself, I'm not sure, but it's his model of the cosmos that we employ today, and spatio-temporality is a development out of his critical philosophy, especially his aesthetics. If you want to break space and time out of spatiotemporality, it helps to be familiar with the metaphysical undergirding of contemporary physics, since we have not treated them as separate since Einstein, even though Kant originally kept them as completely separate intuitions and did not seek to unify them, but only to see what happens when they are set in relation. That is to say, spatiotemporality is, if we are being good Kantians, an entirely negative, transcendental view of space and time, since it does not appear at the level of the senses but rather as an abstraction from them. But if we treat the second-level abstraction as real then we are bound to make errors about the empirical world, so returning to the critical project would, I believe, greatly help in re-evaluating our empirical methods.
To be clear, you're suggesting that physicists reject general relativity's unification of spacetime because Kant, who obviously had no knowledge of GR nor the empiricism that supports it, did not unify them? Kant's Prize Essay also pre-dates e.g. Gödel. That doesn't mean every modern mathematician must first consider a Kantian slant to their work before rejecting it for well-established and obvious reasons. (Nor that Gödel disproves Kant.)
Deducing that Kant would want us to reject modern science because it's not based on our senses ignores entirely his work as a mathematician. Kant was, in his own time, a modernist. Not a proto flat-earther.
I am suggesting that if someone wants to escape from the current paradigm of physics it may help to understand how it came about instead of wasting a lot of time speculating about what space and time “actually” mean.
And in any case, there is nothing in math that prevents someone from being a flat-earther. If nothing else, set theory and quantum mechanics have done nothing but flatten our empirical world. Kant was Modern but his work was not; one can read his Opus Postumum as an attempt to reconcile his physics with an intuition that would be capable of bringing about the "feeling of life" that arrives from the experience of natural beauty, which, while very odd, clearly follows from the 3rd Critique and the critical project more broadly, unless you throw out the concept of Freedom entirely and all the moral philosophy that follows from it and stay, as some have, in the analytic. But that's clearly not what Einstein did. Einstein was a Schopenhauerian; he based GR heavily in the Aesthetic, but an Aesthetic devoid of this possible "third" intuition, that of the feeling of form, since spatiotemporality annihilates all other possibilities of dimensionality besides space and time. Now what I'm saying is, it's not possible for contemporary physicists to critically interrogate their theories without having a solid intellectual grounding in how they came about. That doesn't mean someone couldn't simply come up with something new and brilliant spontaneously, but not by questioning spatiotemporality itself, rather by developing an entirely new framework that disregards it.
But that’s much more difficult than starting with the basics, right?
For those who aren't in the know, physics is in a crisis where huge portions of theoretical physics are turning out to be complete nonsense.
The whole thing seems like some over-excited marketing person enshittifying the literal idea of static pages of information just to make something "new".
I'm sorry you have issues but I'm glad the world doesn't cater to a single individual's issue.
I can't swim because of a hole in my eardrum from when the nun at the free clinic my poor mother took me to popped that bad boy with an enthusiastic squeeze from an ear syringe, and my tinnitus rings like a son-of-a-bitch when I wear ear plugs, but I don't demand they fill in every swimming pool with concrete. I just walk by on those hot summer days, wistfully jealous of the guy doing a cannonball and the lady doing the handstand thing where your feet are in dry air but your head is 2 feet below the water level.
The analogy is ridiculous, yes. As it is ridiculous to build such a website that disabled people cannot possibly read. You don't have to make it perfect for them, just don't make it impossible.
Should businesses and academia strive to make information accessible? Yes. Should every piece of information be put into accessible formats, damn the art? I don't think so.