- Apple Vision Pro: 3660 x 3200 pixels
- Pimax 8K X: 3840 x 2160
- Pimax Crystal Light: 2880 x 2880
- HTC Vive Pro 2: 2448 x 2448
- HP Reverb G2: 2160 x 2160
- Meta Quest 3: 2064 x 2208
- Sony PS VR2: 2000 x 2040
- Pimax Crystal Super: 3840 x 3840 (57 ppd, unreleased)
The VR180 3D footage is spread across 180° (horizontal), and inside your HMD you see around 70-90° (again horizontal²) at a time. You can see that below 8K per-eye image resolution, you will start noticing a decrease in visual fidelity.
² the Pimax has around 159° horizontal FoV.
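Back-of-the-envelope pixels-per-degree (ppd) math makes the comparison concrete. This is only a sketch: the per-eye widths and FoV figures below are rough assumptions, and real lenses and projections don't spread pixels evenly across the field.

```python
# Rough average pixels-per-degree (ppd), horizontally.
# Assumes pixels are spread evenly across the field of view,
# which real projections and lenses do not quite do.

def footage_ppd(horizontal_pixels, coverage_deg=180):
    """Average horizontal ppd of VR180 footage."""
    return horizontal_pixels / coverage_deg

def headset_ppd(panel_width_px, fov_deg):
    """Average horizontal ppd of an HMD panel."""
    return panel_width_px / fov_deg

print(footage_ppd(7680))       # 8K-per-eye footage over 180°: ~42.7 ppd
print(footage_ppd(3840))       # 4K-per-eye footage over 180°: ~21.3 ppd
print(headset_ppd(2064, 100))  # Quest 3 panel over ~100° FoV:  ~20.6 ppd
```

By this crude measure, 8K-per-eye footage comfortably out-resolves current panels, while 4K-per-eye footage is roughly where the footage, rather than the panel, becomes the bottleneck.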
If it records "naively", where each vertical pixel maps to an angle relative to the horizon, then you get the least detail around the "equator" and the most at the "poles" directly above and below you. Which is not only totally imbalanced, but imbalanced in the worst possible way!
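To put a number on that imbalance, here's a small sketch (my own illustration, not tied to any particular camera) of the solid angle a single pixel covers in such a naive equirectangular mapping:

```python
import math

def pixel_solid_angle(lat_deg, width_px=7680, height_px=3840):
    """Solid angle (steradians) covered by one pixel of an
    equirectangular frame at a given latitude:
    d_lon * d_lat * cos(latitude)."""
    d_lon = 2 * math.pi / width_px
    d_lat = math.pi / height_px
    return d_lon * d_lat * math.cos(math.radians(lat_deg))

equator = pixel_solid_angle(0)
near_pole = pixel_solid_angle(85)
# A pixel near the pole covers only ~9% of the area an equator
# pixel covers, i.e. detail piles up where you rarely look:
print(round(near_pole / equator, 3))  # 0.087
```

The ratio is just cos(latitude), so the over-sampling blows up without bound as you approach the poles.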
Back in 2017 Google introduced a new format that YouTube VR uses:
https://blog.google/products/google-ar-vr/bringing-pixels-fr...
It's called an Equi-Angular Cubemap, so each pixel represents the same amount of area in a VR projection.
Unfortunately, outside of YouTube it still hasn't really taken off as far as I know, although some VR video players support it.
I wonder if Blackmagic records in it? Or more generally, in what shape does the 180° image fall on the rectangular sensor -- and so how does actual resolution vary across the VR projection?
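The EAC idea itself is simple to sketch: a standard cubemap face maps equal pixel steps to tan-spaced angles, which packs extra pixels at the face edges, while EAC maps them to equal angles. A rough illustration (not the exact formulas YouTube uses):

```python
import math

def cubemap_angle(u):
    """Standard cubemap: face coordinate u in [-1, 1] -> angle in
    radians. Equal steps in u give unequal steps in angle."""
    return math.atan(u)

def eac_angle(u):
    """Equi-Angular Cubemap: u in [-1, 1] -> angle, linear by
    construction, so every pixel spans the same angle."""
    return u * math.pi / 4

# Angular size of one pixel at the face center vs. the face edge,
# for a 1000-pixel-wide face:
du = 2 / 1000
center = cubemap_angle(du / 2) - cubemap_angle(-du / 2)
edge = cubemap_angle(1.0) - cubemap_angle(1.0 - du)
print(round(center / edge, 2))  # ~2.0: a standard cubemap spends
                                # ~2x the pixels per degree on edges
```

With EAC the same ratio is exactly 1, which is the whole point of the format.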
https://blog.mikeswanson.com/apples-mysterious-fisheye-proje...
Interesting that it's such a strange transformation, even the author of that article hasn't been able to decipher it exactly. The Google EAC seems so straightforward and "neutral", I'm surprised Apple created their own format (unless Google's format requires licensing, but I don't think it does).
There's nothing surprising here, it's been Apple's style for the entire time I've been working in tech - 15 years or so - to create formats that they control and preferably which only work on Apple devices. It's a big part of why their walled garden is so strong.
> Additionally, the format is undocumented, they haven’t responded to an open question on the Apple Discussion Forums asking for more detail, and they didn’t cover it in their WWDC23 sessions
Unfortunately, this is completely normal behavior from Apple and I've run up against it far too many times.
When finally forced, by regulation or industry pressure, to answer questions, open their format up, or support other open formats, they pay lip service and drag their heels in every way possible.
Assuming no intentional bias, it's hard for me personally to imagine how anyone could be in the industry that long and not understand Apple's contributions to formats and other standards we take for granted.
* Apple was the first major adopter and popularizer of now-de-facto standards like 3.5" floppy drives, USB, Wi-Fi, and DisplayPort, and additionally created (and gave away) DisplayPort Mini
* Apple co-developed and popularized IEEE 1394 (FireWire), USB-C, Thunderbolt, and USB 4
* Apple created the ISO base media file format (ISOBMFF), which is the basis for MPEG-4 and many other time-based and image file formats
* Apple popularized today's most popular compressed media formats, and will do the same for AV1 (with hardware decode in M3 and newer Macs today, and Apple TV soon)
* Apple dropped a proprietary OS for the BSD-based OS used across their product line, from wearables, to mobile, to HMDs, to laptops/PCs
* Apple used its open source WebKit to advance modern web standards, and are one of the few defenders against Google's near-total hegemony of web technologies
* Apple's contributions to and investments in open-source technologies like Clang/LLVM and Swift have helped all developers directly and indirectly
Using an as-yet-undocumented projection format to argue the opposite isn't super-persuasive, since Apple eats their own dog food (sometimes for years) before promoting it to an open, generalized industry standard (e.g. "QuickTime Movie" container format).
It takes someone with a good deal of integrity to display a nuanced view of something, weighing both the good and bad.
Such observations aren't typically wanted in forums. Forums favour a "hive mind" mentality, because that's easy and doesn't require thought. Simply put <x> into the good or bad box. If we play the same game of distilling HN into a singular entity, then we already know what it thinks about Apple.
As for Apple's actions I don't think it deserves the polarised views that we see on HN. I've been around long enough to see how companies come in and out of favour and often the meddling efforts by competitors to sway such public opinion. Some people are just really distracted by team fandom and cheerleading, instead of looking at the more important question "what does <x> do for or against me personally?"
If they do build something on their own, they have a reason. Most often technical. E.g. Lightning was strictly better than any alternative available when they introduced it (and that it didn't become the USB-C form factor is partially due to them wanting high licensing revenue). And really... you don't get why they created their own CPUs? Which hands down beat anything else out there on perf/watt and allow their systems incredible battery runtime on really tiny batteries?
In other words, there could be some Apple in literally every device on the planet, but they decided they didn't want that -- for some reason. That is the part I don't understand. It seems like such a short-sighted play.
But that's the job of a company...
> but they decided they didn't want that -- for some reason.
Because they are a consumer company. They depend on a strong asymmetry between their customers and them. It would be really hard for any other company to rely on Apple as a supplier. Afaik the only relationship that exists in that way are company phones, but those are not handled by Apple directly, but rather through carriers. Apple simply doesn't have any experience in being a supplier (and probably also large aversions to becoming one from their own treatment of their supply chain).
Linus Torvalds has repeatedly said something very insightful about "enterprise grade hardware": he describes it as "over-priced crap that doesn't work". Which is correct in the sense of "it doesn't comply with standards and only works in one specific combination". But that's literally where the value of "enterprise" comes from. Someone provides an in-depth description of a single use case and the appropriate solution, and sells that. It isn't supposed to work in many scenarios; it's supposed to work in one. But for that one you have to guarantee a certain quality level, and if you fail that, you have to pay for it. That is simply not how Apple operates. They are the big dog. Always. That's why they broke up with Nvidia.
But my point was: they didn't introduce any format just for the fun of it. There was always a reason. Nearly always a technical one. Sometimes "only" a business one (which, one could argue, qualifies as "for the heck of it").
By that logic, Google isn't a company despite setting industry standards.
Variants of this principle are seen everywhere throughout their systems and architectural designs, thankfully backfiring often enough that Apple isn't taking over the PC any time soon.
They manage pricing to maximize industry profit share, not marketshare.
Even in technical threads where people are literally struggling to complete work because of Apple's attitude (my experience here mostly relates to support for 3D browser APIs over the last decade), there's still a largely negative response to any perceived Apple criticism.
I'm actually pleasantly surprised that my comment here is no longer downvoted. I guess HN folks are more savvy than typical web devs.
To be honest I'm not sure I'm even criticizing Apple. I don't like it, for sure, and it makes my life harder, but it's clearly a sound business strategy that has served them well.
A pixel is not always a pixel. There's more to the story.
clowns
Here is that thread [0], with mostly professional takes. One interesting take-away:
> I’ve pre-ordered one. Vision Pro sales will be around 1/2M at the one year mark, and there’s a total of about 3 hours of immersive content available on the headset across every app right now.
> That’s a once in a lifetime content opportunity.
[0] https://old.reddit.com/r/cinematography/comments/1hhvwfv/bla...
If you ask me, the Vision Pro’s sales up to this point justify discontinuation. I think that Apple is only making investments in Vision Pro because they don’t really have another long play for “the next great device form factor,” and because Meta hasn’t thrown in the towel yet Apple presumably refuses to sit back while Meta dominates marketshare for a specific type of app platform.
I think that Apple and everyone else are very aware that VR/AR is more likely than not close to its maximum user base. Meta has been doing everything it can to keep the platform within impulse-buy territory, because they know it's not a purchase that potential customers will seriously believe they'll use for hours every day, like a traditional game console or PC graphics card. Meta has to convince you to buy a device that their own telemetry must surely show users only interact with for a handful of hours every week.
The only reason Apple is sticking with it is that it’s a long term play and they have unlimited money. Or maybe because they refuse to give up until Meta gives up. They can’t let Meta own a computing platform out of pure business ego.
What he might be thinking is that there are 400k to 500k people who have already spent $3500 on a device which currently has no content. If he got 10% of them to "spend" $2 on his short immersive video experience, that would cover the cost of the camera + shoot + profit, in his first successful attempt.
How much are you spending on marketing to reach those users?
However, the reason I put "spend" in scare quotes is that it might be the case that these indie immersive content creators get their content subsidized, or bought outright, by either Apple or some content app maker.
source: 100% supposition by someone who has never owned a VR headset.
As it is right now it feels like those early days of film, which seem incredibly awkward because they didn't know how to use it to tell a story. But they were clearly casting about for something they knew was there. It just took a while to find.
Market size of high end VR hardware/content does have a chicken and egg problem, regardless.
It seems like you could have concluded it should be discontinued before it even went on the market, if "poor sales" justifies that for this device.
Doubly crushing for Apple given it sold worse than even their low expectations: they shut down production and cut sales expectations from 800k to 400k.
With the Vision Pro, that kind of enthusiasm just isn't there. If it were going to be the next big thing, there would be a lot of hype for it even though it's rough around the edges. The general public isn't rejecting the Vision Pro because it costs too much and has no apps; they're rejecting it because wearing a computer on their face isn't something they're interested in.
I don't know who you know, but I'd be very surprised if most of them have either of those.
All up at least 10 current active VR users. Usage scale ranges from “an experience 3-4 times a year” to “years of daily active usage.”
Another mate joined just recently who is not really into gaming, but got a Quest because he actually wanted a big TV/projector, which isn't feasible living in shared housing. I actually told him not to; I just didn't think he would enjoy it. And despite enjoying it myself, I don't promote it much to people I consider "normies" because I know it's niche and I don't think it really is for most people. But interestingly, he has been really happy with it, entirely for movies.
The technology might need another decade (or two), but I think it’s very shortsighted to think VR/AR is close to its maximum user base.
That said, I also don't think we're at a time-local maximum of users either.
In the first case you have a type of product that is evidently very useful but isn't ready for the general public. In the second case, you have a product that early adopters can't find a routine use for.
What are you basing this on though? From all accounts they’re pretty close to selling the number of units they could manufacture.
These are sales that are on par with the Nintendo Virtual Boy.
And if they can’t manufacture any more than that, they have an even bigger problem.
It’s now been a year and a half since the first model was announced and there is no sign of a second model to move the product into a more mass appeal device.
We saw critical follow-ups like the iPhone 3G and Apple Watch Series 1/2 come out as quick releases that were in retrospect very important to establishing a practical device that a regular person might consider buying. I think the fact that we haven’t seen one yet is a huge problem.
If Apple couldn’t make another leap in a calendar year it’s clear that they will never catch up to Meta. Meta is out there selling a gazillion Quest 3S bundles to your local Costco impulse buyer.
The million sales mark was from one single report by Kuo. Kuo himself previously said they were limited to ~900K display units which is ~450K devices and other analysts have said the same.
There is no other source saying 1M was the target that I know of that doesn’t trace back to Kuo. If they are at ~500K units then they’ve exceeded the initial sales target that Kuo himself laid out.
For your second point about a follow-up, you're comparing product announcements to product launches. The product itself only launched 10 months ago. You're expecting a second iteration within 10 months, of an entirely new product class for them? Meanwhile other, more popular Apple products often go longer between releases. Even Meta are around two years between products within a device class.
Your last point of comparing to a meta quest is misplaced too. They’re different classes of the same device category. There’s no way Apple are expecting to compete with a device a tenth of its price for total sales.
You're right that the Meta Quest is a different class of the same device category. But that's the problem, isn't it? Apple made a device that is in a different class of the same device category, really far away, at a price point where there just aren't any customers.
That's why Apple actually needed an unusually fast follow-up device, because the first product was too close to being an overpriced barely-working tech demo, similar to the original iPhone and Apple Watch.
I think it's quite safe for us all to assume that Apple isn't releasing a Vision Pro follow-up in the next couple of months. The iPhone 3G dropped the price and outclassed the original iPhone so much that Steve Jobs had to write an apology letter. The Apple Watch got a follow-up that resolved all the gripes with the first model within 12 months. I just think that, as a business strategy, Apple is missing the mark here with the Vision Pro.
To the rest of your point, I’m sure Apple is aware that there aren’t a lot of customers at the 3500 dollar mark. They’ve said as much in interviews, but you’d have to attribute a lot of hubris to them to think they thought they’d steal market share from the Quest line.
Again, all reports say they can’t even manufacture enough right now to be more than a blip on the sales chart.
Perhaps your first paragraph betrays your confusion: you claim it's just a consumption device, and that's partly because Apple's marketing focuses on that, because it's the easiest thing to communicate.
The 3500 price tag is exactly the starting price point of other high end XR headsets that are used in many non-consumption areas. I say starting, because they go up considerably from there (see the Varjo XR prices). That’s the space where it’s not consumption, but work. It’s the same way that a Mac Pro can be used to view Netflix the same way a MacBook Air can, and they have similar capabilities on paper but widely different markets to target.
From all accounts of working with those industries, Apple has eaten that market share. Varjo has stopped being the headset of choice for most of those cases, to the point that even NVIDIA (who were a huge Varjo customer) are doing Vision Pro courses at siggraph and made it the headset they feature for professional work ( https://resources.nvidia.com/en-us-awe-2024/omniverse-apple-... )
Basically, don’t take Apple’s consumer facing marketing as their entire sales pitch. They have completely separate paths for businesses.
None of this is to say that there aren’t areas that Apple has missed on with the headset. There clearly are several, but I don’t think sales strategy is actually one of them. Marketing is perhaps an issue though.
Industrial and medical use, creative platforms etc, Literally the only other headsets I can use for the lines of work I’m involved with are 4x the price of the Vision Pro (the Varjo XRs), yet by your metric wouldn’t be worthwhile.
And this is the problem with people who don’t consider usecases beyond their own.
But the AVP cannot be used for any of these use cases, and as far as I know is not in serious use for professionals in such industries at scale, so why are you acting as if I was blind to the real reason people buy it? None of those are reasons people would buy it today.
Also, you seem to not be aware that the Quest series actually have a professional / industrial business sales setup and do actually make large volumes of sales for business use cases, unlike AVP (https://forwork.meta.com/quest/business-subscription/)
The reason I think you’re not counting those is because you claim equivalency of capability despite cost difference, but there’s clear areas that I know of where a Quest would not cut it.
Your link to metas business use has no relation to my argument. I’m not saying the quest can’t be used for business cases, because I myself have set up professional environments around it. But that’s why I’m confident in saying the Vision Pro allows for a level of fidelity that the Quest cannot provide for today.
Excuse me for being very skeptical.
I don't see a whole lot of Apple marketing material talking about the Vision Pro as a professional device. They have a grand total of one press release that highlights different uses for business. There's no landing page that says "contact sales" or anything like that you'd see for an enterprisey specialized solution.
In my mind the more plausible explanation is that Apple misjudged the pricing strategy for the Vision Pro, a device that it considers to be primarily a consumer content device.
If the Vision Pro was a $1000 product they would have potentially had a hit. But I think what's going to happen is that Apple is going to have a product like that in 2026 and when it comes out the response is going to be quite muted.
Also in terms of press releases, a quick google search shows these two
https://www.apple.com/newsroom/2024/04/apple-vision-pro-brin...
https://www.apple.com/newsroom/2024/03/apple-vision-pro-unlo...
Just because Apple's consumer marketing pages don't target enterprise doesn't mean they don't target enterprise as a product. See the Mac Pro webpage, which also doesn't mention enterprise contacts https://www.apple.com/mac-pro/ but is clearly not targeted at just consumers either.
https://time.com/7093536/surgeons-apple-vision-pro/
https://pubmed.ncbi.nlm.nih.gov/39140319/
But yeah, if Apple can become a medical device company, it would unlock a huge revenue stream for them. Looks like these people are optimistic, I'll take them at their word for that.
I personally don't know if Apple is willing to put the investment into the devices to make them viable as medical devices. There is a reason medical devices have huge costs, and that's largely because of the human effort involved to go multiple rounds with the FDA and other regulatory bodies to get a device approved for use in medical facilities outside of trials and studies.
This is why you’ll often see a range of display devices in operating theatres. You’ll even see iPads or other tablets that aren’t necessarily certified for use.
But I could also see the value of a comparably priced or even higher priced version, with ultra versions of their M chips, going all in as a Mac or MacBook upgrade in terms of user interface.
I use mine as a MacBook Pro screen upgrade. I would love to dispense with the MacBook, while retaining Mac-level (as opposed to limited iOS-style) applications.
Maybe for current device clunkiness and capabilities.
I expect that would change if it could do a good job of replacing desk screens, or let people spend their commute staring at a hud instead of staring at a phone.
But at the same time everyone knows that the tech will get there eventually. A lot of current VR products seem to mostly exist to position companies to be able to exploit the market once the tech gets good enough.
https://www.kandaovr.com/Obsidian-Pro
Panoramic photography for VR is on my bucket list, although I have a huge list of other projects, such as a reliable camera-to-audience system for stereograms I shoot with another camera from that company
https://www.kandaovr.com/qoocam-ego
Note there are cheap pano cameras too
https://www.kandaovr.com/qoocam-3
though my Uni has a resource center for that kind of thing and I can probably talk my way into borrowing one of the better ones.
Stereo panos can be absolutely amazing on a consumer VR headset, I've greatly enjoyed crowd scenes from Paris such as in front of the Louvre and an observation deck on the Eiffel tower.
The 3d economy more fundamentally needs some kind of photo-to-3d technology and that is going to take multiple photographs from different angles, a depth camera helps but in one shot it does not give you the pixels that are only visible on the L or the R channel in a stereogram because of obscuration.
I've got a friend who makes 3-d models using a $265 million camera
https://mastodon.social/@UP8/111915448546172624
one thing we've talked about is where to get the missing pixels that aren't in any of the photographs, it's a tougher problem for him as a scientist than it is for me because he can't make stuff up.
Pano content in VR really is something new.
Apple's lack of vision with the Vision Pro is shocking, as is the arrogance that somehow a $3k headset will revive interest in something people wouldn't pay an extra $5 for at the movies.
With twice the memory and a desktop-grade processor the AVP could trash the Quest 3 at immersive applications, but Apple is stuck on a backwards and conservative vision of mobile apps floating in the air - totally mundane sci-fi (Washuu had this in Tenchi Muyo). A $3k headset has to do it all, not just what one rich dude thinks is stylish.
If you are doing any VR or AR work you realize memory for textures is terribly short and 'more pixels' is the road to nowhere.
Lots of as-well-made-as-possible 3D "stereo" movies on Disney+ etc. that work beautifully on AVP. None of those give the same "you are here now" sense as the Alicia Keys demo.
Agree with you on Apple's seeming reluctance to empower a new UX/UI for the AVP affordances. Having a multi-window iPad strapped to your face is less compelling. Over the past 15 years one notices how much of iOS UI was invented by the market (pull down to refresh, for instance). Perhaps they want to see what people come up with for this.
Not going to happen without jailbreaking the locked-down VisionOS.
Apple could choose to enable for 18 months, then integrate the best use cases into the platform.
Yes, you'd have to have your various "apps" in your same "suite" app (like Microsoft ships Word, Excel, Powerpoint, inside Office for iPadOS), but third party apps wouldn't know your new UX/UI paradigm anyway.
Most people have seen maybe three real 3-D movies from among: the Avatars, a Pixar movie, Hugo, The Hobbit, the Transformers one where they tore up Chicago, and... yeah, I'm hard-pressed to name another movie right now that was shot in 3-D... oh, and Drive Angry. Which no one saw.
The vast majority of movies offered in "3-D" were post-processed junk.
The Blackmagic design is aimed squarely at cinema use, where BM is already one of the industry-standard platforms for color grading and increasingly for editing, and already highly respected for image acquisition. This matters because film distribution agreements increasingly mandate specific technologies for production to mitigate the risk of customer complaints about image quality.
The 3d part of the camera is somewhat relevant for the cinema release market (and VR headset users who want to watch a movie in 3d...but I think this will remain a small market because wearing a helmet/goggles to watch a movie is inherently anti-social), but even if you never plan to release in 3d it's nice to be able to acquire that way for vfx purposes. Recording ground truth 3d information during acquisition is always going to be superior and cheaper to inferring it computationally from a monocular image.
Any projection is bound to separate areas which could be compressed more efficiently together.
A native stereoscopic spherical video encoder could improve compression even more, since side by side views are quite similar in general.
Now that's an interesting problem to solve! (and probably a very hard one)
Existing video formats already support this for interlacing, although you could also let inter-prediction refer to earlier parts of the same frame and get most of the benefit.
Edit: I'll certainly read the rest of the articles!
Thanks!
A setup with a fixed VR camera and a 180 FOV could totally transform the experience, because now with a VR headset I'd be the one tracking the ball with my head movements like in a real stadium.
Many smaller local clubs suffer from low attendance due to local factors like people leaving the area, not having time, or bigger clubs playing at the same time.
This could be overcome with global audiences and live VR recordings (where you're still able to move your head) and potentially be a nice source of income for many clubs selling virtual stadium tickets.
https://www.usa.canon.com/shop/p/rf5-2mm-f2-8-l-dual-fisheye...
Firstly, it’s an actual cine camera so has a lot of cine features for reviewing, output, formats that are recorded etc.
Next is the lenses, this has a wider inter pupillary distance so will feel more dimensional and natural. It also has wider coverage.
Then there's the sensors and readout: this can capture 8K per eye, at 90fps. That's required for the Apple immersive format because the video partially surrounds you, so you need 8K per eye to get good resolution coverage for the portion of the video shown on the ~4K-per-eye display.
There’s no other commercial product that compares to this.
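A quick back-of-the-envelope shows why the readout spec is the hard part. The per-eye dimensions and bit depth below are my assumptions for illustration, not published specs:

```python
# Rough uncompressed data rate for "8K per eye at 90 fps".
# Assumed numbers: 8192 x 8192 per eye, 12-bit raw; the real
# sensor geometry and bit depth may differ.
width, height = 8192, 8192
eyes = 2
fps = 90
bits_per_pixel = 12

bits_per_second = width * height * eyes * fps * bits_per_pixel
gigabytes_per_second = bits_per_second / 8 / 1e9
print(round(gigabytes_per_second, 1))  # ~18.1 GB/s before compression
```

Even with generous compression that's a firehose, which is why the on-board storage and readout engineering matter as much as the lenses.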
I’m no photographer, but it seems like it’d be tough for someone to justify spending $30K on a single purpose camera - when you could just use an existing high end camera like RED + new lens.
The canon fisheye lens does not have the same IPD as this lens.
Beyond that, you'd still be limited by the sensor. The highest-resolution RED sensor is 8K? This is 8K per eye. That's double (actually slightly more than double).
So, you could make something inferior, yes, but not the same capabilities.
Blackmagic does have e.g. a 12K non-stereo Ursa Cine, but, as you hint at, whatever they can offer non-stereo can always be better in stereo: a two-sensor setup gives each eye roughly 4x the sensor area of a single sensor split between both eyes (once you keep the per-eye aspect ratio). Sensor area (for equivalent-class sensors) determines the quality of the recording. When quality is what matters in a professional setting, it doesn't matter (in this market segment) that there is a solution $20k cheaper if it's inferior by design. They don't expect to sell many of these to professionals anyway, so it's fine that it doesn't make cost sense to the average person.
The rest of everything (recording workflows and settings, IPD, framerates, editing software) can all be identical with either approach but the sensor area is sensor area and there is nothing which can be done to fix that.
But that’s still not 16k of pixels. You don’t even need two 8k sensors to make this work. Just aim the stereo lenses at different parts of a 16k sensor. The Canon solution is simply lacking IPD and pixels.
> Sensor area (for equivalent class sensors) determines the quality of the recording.
This is false. Going to get up on my soapbox again here:
Larger sensors actually have more noise (noise is proportional to the square root of the area).
It’s easy to understand the confusion, though: Putting a larger sensor behind the same lens is the opposite of cropping… you get a larger field of view and less image detail. Thus, keeping field of view the same, a larger sensor forces you to use a lens with a longer focal length.
Now, if you re-grind the original lens to have a longer focal length, you encounter another problem: The same physical aperture divided by the new longer focal length means that you have a smaller focal ratio (the number in F/<number> gets bigger). You have a dimmer lens!
So, to keep the same focal ratio (“F-stop”), you need a lens with a larger physical aperture… That larger physical aperture is collecting more light onto your sensor!
That’s why everyone seems to think larger sensors are better. It’s the lens you are forced to use, not the sensor itself.
Since light collected is directly proportional to the area of the lens (and lens area will be proportional to sensor area, see above) and sensor noise is only proportional to sqrt(area), the signal to noise ratio goes as area/sqrt(area) = sqrt(area).
But that’s not the same thing as saying a larger sensor is better… you could have just used a lens with a larger physical aperture in the first place. You don’t need a larger sensor to do that.
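The scaling in the argument above is easy to check numerically under a pure shot-noise model (ignoring read noise and other real-sensor effects):

```python
import math

# Shot-noise model: a pixel collecting N photons has signal N and
# noise sqrt(N), so SNR = N / sqrt(N) = sqrt(N). Scaling the light
# collected by k therefore scales SNR by sqrt(k).
def snr(photons):
    return photons / math.sqrt(photons)

base = 10_000            # photons collected at some reference area
quadrupled = 4 * base    # 4x the light (e.g. 2x the aperture diameter)

print(snr(base))                    # 100.0
print(snr(quadrupled) / snr(base))  # 2.0 -- SNR grows as sqrt(area)
```

Quadrupling the light collected only doubles the SNR, which matches the area/sqrt(area) = sqrt(area) claim.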
You're of course correct that the better lens helps. But a bigger sensor can also be better by itself.
Most optical aberrations increase with high powers of the f-number so it's highly undesirable to make ultra-fast lenses, so it quite quickly becomes cheaper to use a larger sensor with a slower f-number. Try matching a jellybean 85/2 lens on a full-frame sensor on e.g. MFT. It's going to be rather expensive. Then try matching a 85/1.4 or 85/1.2 (nowadays not uncommon) lens and you find yourself at "that's not physically possible".
Coincidentally, full-frame sensors can be made from just two stitched exposures on a regular chip stepper, so they're sort of the largest sensor size before cost explodes. Meanwhile S35/APS-C offers some real cost savings (single exposure).
In stereo you really do have more visual information. It's not unusual for 10% of the pixels in a stereogram (say a close up of a person) to be unique to one channel. On top of that you have left and right eye pixels that are shared which must be equivalent to more than one mono pixel even if they aren't equivalent to two.
Although I get MPO's with two JPEGs in one file from my New 3DS, stereo content is frequently delivered in side-by-side format as one big JPEG. Stereo movies and TV frequently use side-by-side with half horizontal resolution on the assumption that stereo is feeding your eyes and brains more data although it probably doesn't match the original perceived resolution.
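For concreteness, here's a minimal numpy sketch of what a side-by-side container actually holds; the frame here is synthetic, and real half-width SBS would additionally need each eye stretched back to full width on playback:

```python
import numpy as np

def split_sbs(frame):
    """Split a side-by-side stereo frame (H x 2W x C) into
    left/right eye views."""
    h, w2, _ = frame.shape
    w = w2 // 2
    return frame[:, :w], frame[:, w:]

# Synthetic full-width SBS frame: 1080 x 3840 (two 1920-wide eyes).
frame = np.zeros((1080, 3840, 3), dtype=np.uint8)
left, right = split_sbs(frame)
print(left.shape, right.shape)  # (1080, 1920, 3) (1080, 1920, 3)
```

In the half-resolution variant, the same 1920-wide container would carry both eyes, so each eye only gets 960 horizontal pixels before upscaling.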
Which would help with synchronized sensor readout.
Of course it's still possible that's really just one sensor with a logical split, which would be some disappointing marketing.
But very impressive that they have such tight synchronization between sensor readouts to feel comfortable splitting it.
Here’s the RED body list https://www.red.com/productcategory/Camera-BRAINs
There’s no brain for an 8K sensor priced so that you could buy more than one for the cost of this. So you’d have to at the very least compromise on resolution.
You’d also have to construct a multi camera rig which adds to both cost and size/weight/difficulty. So you’d compromise on the ergonomics of it.
Then you’d have to add the lenses. These have them integrated. Finding comparable lenses would set you over your comparable budget.
Okay, then let’s talk storage. This has 8TB on board. Getting an equivalent for the RED would also set you over the budget.
Finally, connectivity. The only REDs that you could maybe bring under budget need additions to add connectivity. So you’re compromising there.
And at the end of the day, 30K for a camera of this caliber is insanely cheap. I think everyone getting caught up on the cost has only dealt with prosumer stuff at best. Anyone at the professional level has been awed by Blackmagic’s ability to bring this and the URSA Cine 17K to market at the price points they have.
Besides, the cost for everything else will far outpace the camera. The camera is the one thing you don’t want to skimp on. You have a bad camera day, you ruin everything else and waste more money than you’d have saved.
I’ll reiterate: 30k is an absolute bargain for this or the 17k.
These are cameras that productions rent by the day for a specific shoot, not something they buy outright. Similar to high-end cine cameras, slow motion cameras, underwater cameras, etc.
I don't think hobbyists are the target market. Isn't that price in-line with any studio-quality camera? (I have no idea if this qualifies as a studio-quality camera, but I can imagine at least a few studios would be willing to try it out).
Pro equipment can reach prices that seem unbelievable to prosumers.
In the digital age zooming video is completely routine and if you've got a picture with absurd megapixels you can do it in a big way.
It's an awesome camera, but note the absence of a price on the web site. I think it retails for around $80K.
[0] https://www.phantomhighspeed.com/products/cameras/4kmedia/fl...
WITHOUT the cost of additional lenses. Then you add in sets, lighting, generators, cast, etc.
All of this is fractions compared to maybe millions of dollars for marketing.
And if you are a small film crew, you rent.
Outright purchasing a camera and equipment vs renting them for a shoot is a waste of money unless you're a production company that is going to use the equipment over and over again until it falls apart. Even then, if you rent, it is on the rental company to handle maintenance and provide replacements in case of equipment breakdown, so renting can still be a good deal for you.
For mass market consumers, you can already shoot in Apple's spatial video format with an iPhone.
Reviews in the tech press do not agree.
> Apple iPhone Spatial Video Looks Amazing on Vision Pro
https://www.cnet.com/tech/mobile/apple-iphone-spatial-video-...
Source: own an AVP and an iPhone 15 Pro
With the example Canon lens approach alberth had linked, the device does not need to natively support stereo. The downsides, and why one might still spend 30k on an alternative, are that said lens approach effectively halves the sensor area, and the optics won't be quite as well designed as a natively optimized system. Also, this device is aimed to match the AVP precisely, e.g. 90 FPS at full ~59 MP resolution.
I doubt they expect to sell many units, but the units they do sell are for top professionals looking for the absolute best stereo quality they could get for the AVP, not for prosumers or average productions, which would be fine taking the slight quality and workflow hit to save 20k.
So is "I, as a non-photographer, openly question the utility of this pro-photographer tool to pro-photographers", though.
> Can you not get this capability with...
Can be interpreted as "this is pointless.. you can do this other thing for way cheaper" OR "help me understand why this exists because I don't understand it". GP is _clearly_ in the latter camp and saying "as a non-photographer" to make that clear.
Since we're already quoting HN guidelines
> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith
The whole point of the Canon Dual Fisheye lens is to record stereoscopic video on a “standard” single sensor camera.
Today I adjust stereograms so that most objects are close to the paper or screen by sliding them horizontally, but I think Lipton is right that it is better to make the cameras converge, though all my stereo cameras are parallel.
Anyway, this feels like the beginning of that.
Strange Days: https://en.wikipedia.org/wiki/Strange_Days_(film)
> The film's SQUID scenes, which offer a point-of-view shot, required multi-faceted cameras and considerable technical preparation.[5] A full year was spent building a specialized camera that could reproduce the effect of looking through someone else's eyes.[5] Bigelow revealed that it was essentially "a stripped-down Arri that weighed much less than the smallest EYMO and yet it would take all the prime lenses.
It's an unfairly forgotten film. Much like Blade Runner, it suffers from a clunky plot but has quite smart world building.
I find GPT quite useful for those “tip of your tongue” type queries, and have used it to name movies and actors quite a few times.
What’s funny is that there are others out there that are thinking the same thing regarding that film. Cheers!
Fragments of a Hologram Rose (1977) also by Gibson already had this.
Does anybody know even earlier instances?
I saved a newspaper clipping from a local theater showing "BRIANSTORM". My father's name is Brian. 7 y/o me thought the misspelling in the ad was hilarious.
Also, see the last episode of season one of Black Mirror.
And "brain dances" in Cyberpunk 2077.
There was a scene where one of the researchers looped a porn scene, and they busted down his door to find him in bed, twitching.
is underrated; the visual depiction of the lab is like Crichton's Looker. In my mind Research Triangle Park is as cool as it is in this movie; in real life it falls short only a little.
Thoughts -> (electric signal) -> LLM decoding and calling a generative model -> (electric signal) -> Brain
Anime has the advantage of being drawn frame-by-frame, thus able to "change" lenses, cameras, etc mid action-packed shots. Using this may allow for shooting two different setups at once, achieving a similar effect.
I would love to see that attempted today again with how much progress we've made in terms of screen resolution and camera quality.
The iPhone 15 and 16 pro models can take 3D photos and videos right now https://support.apple.com/guide/iphone/spatial-photos-record...
I would love to have a 3d tv that works without glasses even if it was a limited depth thing (like multiple screens on top of each other to create real depth within a confined space) but I think the technology of the 3DS screen wouldn't scale to larger screens.
iirc this is effectively what Looking Glass displays do [0], or at least the early prototypes I saw: split a projected beam across 16 or so panes of glass. I've only seen the little one in real life but it was pretty enchanting. They go up to 32" and 64", though at something like $20,000 [1]; I don't know if they've actually made any sales of the larger formats.
I was recently googling whether these displays support Apple Spatial Video, and the answer was yes, after some third-party conversion and playing media back straight from the iPhone it was recorded on. Sounded annoying but feasible [2].
[0] https://lookingglassfactory.com/about
[1] https://www.pcmag.com/news/looking-glass-unveils-second-gen-...
[2] https://stereoscopy.blog/2024/09/22/how-to-use-the-looking-g...
But back then everyone just said "3D is a gimmick, I hate it, I just want a normal TV" and the fad died.
https://ja.wikipedia.org/wiki/SH-12C
The iPhone recently added support for actually shooting spatial photos in addition to videos so I need to try that out.
Sure, I could get one from Germany. However I did that with a Google Pixel 6 Pro and that turned out to be hell when I needed to claim warranty on it. Which required an address in Germany. So I'm not really inclined to go down that road again.
We've had techniques for editing videos on underpowered PCs since the 1990s. Possibly earlier.
You use something called a "proxy workflow": For each 8K source video, generate a 480p "proxy" with the same frame timing but a much more manageable amount of data. You edit the entire film using the 480p videos. Then once you're happy you "render" the video - which swaps the high quality sources back in and produces an output file. The final render might take all weekend for an hour-long video - but you've only got to do it once.
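That proxy step is a one-liner per clip nowadays; something along these lines (a sketch: the file names are made up, and the exact ffmpeg flags are one reasonable choice, not the only one):

```python
def proxy_cmd(src, dst):
    # Build an ffmpeg invocation that produces a 480p proxy with the
    # same frame timing as the source: frames are scaled, never dropped,
    # so timecode-based edit decisions transfer 1:1 back to the original.
    return [
        "ffmpeg", "-i", src,
        "-vf", "scale=-2:480",            # 480 px tall, keep aspect ratio
        "-c:v", "libx264", "-crf", "28",  # small, seek-friendly encode
        "-c:a", "copy",                   # leave audio untouched
        dst,
    ]

print(" ".join(proxy_cmd("shot_0001_8k.mov", "shot_0001_proxy.mp4")))
```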
Then a person called the negative cutter would go through the list, duplicate the editing decisions on a high-quality negative without the generational loss, and that would go on to become the final print.
That’s why sometimes you’ll see a deleted scene from a movie whose picture quality looks quite poor. That was most likely taken from the workprint, and never went through negative cutting or any finishing.
Is it really? I haven't touched "high end cinema cameras" but if my consumer camera can generate ~1TB/hour and it's a "normal" consumer camera, I'd easily expect 4x that in high end cinema gear for 3D video (multiple videos stitched into one essentially)
But again, haven't used any of those or looked it up, so what do I know. It doesn't sound outlandish to me though.
If your consumer camera generates 1TB/hour then you're generating data as fast as a Red Komodo [1] recording at 6K "VFX, Extreme Detail Scenes"
Consumer quality? A high-end iPhone can record 4K 60FPS video, and an hour's footage takes up 24 gigabytes.
And you're watching 4K 60fps video on Netflix? YouTube? Maybe 12 gigabytes an hour.
According to https://support.apple.com/en-us/109041 4k60 recording in ProRes needs 220 MB/s storage, so an hour would be ~792 GB. Sure, you can choose to throw away most of that data with more lossy compression, but the barely-acceptable bitrates used by streaming services are not at all the right point of comparison here.
Of course most people wouldn't shoot at 60 fps for historical reasons, and raw video codecs are intra-only so data rate scales linearly with fps. They're just relatively heavily lossily compressed raw images in a box, basically.
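The arithmetic behind that 792 GB figure is just rate times time (a sketch; `hourly_gb` is my own helper):

```python
def hourly_gb(mb_per_s):
    # One hour of footage at a constant data rate, in decimal gigabytes.
    return mb_per_s * 3600 / 1000

print(hourly_gb(220))      # 792.0 -- Apple's 220 MB/s ProRes 4K60 figure
print(hourly_gb(220) / 2)  # 396.0 -- same codec at 30 fps (intra-only, scales linearly)
```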
Does anyone have better ideas?
There are options from caldigit on the low end: https://www.caldigit.com/t4/
or qnap on the mid end: https://www.qnap.com/en/product/tvs-h874t
To prevent causing issues upstream you would want to write to a fast NVMe SSD first before backing up to an HDD array. Unfortunately, it doesn't support this use case, as the NAS is designed for movie streaming, offices, security cameras, etc.
What's the point of integrating two cameras into one unit, when you can just capture with two cameras. It's a software problem.
Say you want a stereogram that shows you Manhattan or the Grand Canyon as if it were a model laid out on a table before you. A human-like stereoscopic shot of those places will not produce the depth.
(Maybe something can be computed, but that's a separate discussion.)
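The geometry behind that: the angular disparity between the two viewpoints falls off with distance, so at canyon scale a human-spaced baseline produces essentially none, while a huge "hyperstereo" baseline gives the tabletop-model look. A sketch (`disparity_deg` is my own helper, and the ~0.01° perception threshold is only a rough rule of thumb):

```python
import math

def disparity_deg(baseline_m, distance_m):
    # Angle subtended by the camera baseline as seen from the subject --
    # the stereo disparity available for a point at that distance.
    return math.degrees(2 * math.atan(baseline_m / (2 * distance_m)))

print(disparity_deg(0.065, 1000))  # human eyes at 1 km: ~0.004 deg, reads as flat
print(disparity_deg(50, 1000))     # 50 m hyperstereo baseline: ~2.9 deg, "model" look
```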
This is the page you want https://www.blackmagicdesign.com/media/release/20241217-01
What I find interesting is that there appears to be no way to view the live video on a set of goggles for the camera operator, or the director. At least, it's not mentioned in the link above.
Also, it seems like Apple must have contributed to Blackmagic's investment in this product, right? There are ~300k Vision Pros, so maybe Blackmagic will sell a couple hundred of these units? Without Apple's involvement, how could they have justified the investment in hardware and the new version of Resolve?
My background was in film where I also worked on stereo for certain big projects. I know some anti-Apple folk will criticize my comments below so I want to be clear I’m talking about 3D video specifically.
I think it’s a bet on the future. Even though Apple aren’t high volume, they’ve dramatically shifted the professional stereo video landscape more than anything else in the last decade.
This is everything from bringing full resolution stereo videos for home viewing, to making a seemingly standardized format for 180 videos. Even if the latter is just restricted to their platform.
If I was BMD, I’d be seeing how everyone else is now following Apple in this specific area. Even though Meta were first, they’re undeniably also following Apple in some key areas. Same with Android XR. You can just look at their software releases/announcements over the last year as evidence.
If DaVinci can output to a range of formats, then it reduces the issue of it being apple specific. It’s a bet that they’ll be effectively the only professional game in town when all the brands (Apple, meta, Google) want to start driving content.
Beyond that, I don’t think the outlay for hardware is that high. It’s largely based off the Cine 17k, so most of the investment is amortized there.
Also even beyond the VR space, there’s the market for immersive experiences like projection events, the Vegas sphere, theme parks etc…
Their color science too is very nice and I think they’re making good moves with the 17k.
> This is everything from bringing full resolution stereo videos for home viewing, to making a seemingly standardized format for 180 videos. Even if the latter is just restricted to their platform.
I'd assume porn already achieved all of the above. The formats seem to have mostly settled, and the volumes produced are relevant.
Apple might succeed in the "not first but best" approach, but do they have that much of an impact on the landscape right now? In particular, while this camera is marketed toward AVP movies, Apple being an early partner and probably footing the bill for most of it is a weaker signal than Blackmagic doing it on its own as a forward investment.
https://www.blackmagicdesign.com/products/blackmagicursacine...
which seems to be more in line with what the article talks about
Then again... Maybe if AVP owners represent an audience that you'd like to target it wouldn't be a bad decision. Everyone that owns one will probably be starved for special content and I'd imagine they'd be willing to buy something specifically made for their niche platform.
But based on the Vision Pro demo I would expect to see them prioritise non-fiction content e.g. Planet Earth style movies, concerts, athlete profiles etc.
>the world’s first advanced cinema camera designed to shoot for Apple Immersive Video
I think they are tapping early into an emerging "new" video format.
Converting to the "Apple Vision Pro" format is the last step on the pipeline, after editing.
0: https://www.blackmagicdesign.com/media/release/20241217-01
CinemaDNG is not a compressed format. It is a directory with DNG files. DNG is an open raw photo format. Both DNG and CinemaDNG predate REDCODE.
My camera records 4K 12-bit CinemaDNG with no compression and is in the same price segment.
If BM, given the options they had (which also include things like “pay RED” or “recall products”), chose to silently remove support for CinemaDNG from cameras that they sold advertising CinemaDNG support, I doubt blaming RED is anything but a PR tactic.
https://www.blackmagicdesign.com/uk/media/release/20241217-0...
The article you linked mentions it uses BRAW which is indeed supported in Resolve today already.
You’ll turn your head and the image will just stay fixed in 3D in front of you?
But of course the quality is probably a lot lower than with this camera
Um, okay, what is supported and what is achievable are two entirely different things. Even with the fattest of pipes, uploading media content to the cloud is only considered fast if you're a turtle or a snail. Even with a 12Gbps connection it takes 10-12 minutes to transfer 250GB files.
https://www.blackmagicdesign.com/products/blackmagiccloudsto...
Firstly, it is not passthrough video.
Secondly, you cannot currently have the same experience on the quest. You can have lower quality versions of it, but immersive video is 8k per eye at 90fps.
There have literally never been cameras available to consumers to capture that until this specific camera, unless you built professional custom camera rigs.
As someone who owns both a Quest and a Vision Pro, and has worked in stereo for a large portion of my career, the two experiences are not remotely comparable when it comes to video today. The Quest excels in other areas, but this is one where Meta has very weak coverage.
Moreover, it's not really a proprietary format and you can already play them officially on Quest.
Nobody has been able to extract Apple's immersive videos yet, and I'm not convinced the Quest has the decoding power for it.
It's a lot of pixels to decode (16K at 90fps), while also doing reprojection of the frames (https://hackaday.com/2024/04/18/unraveling-the-secrets-of-ap...), and I don't believe their Qualcomm chip has enough juice left over to do that.
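For a rough sense of that decode load, here's the pixel-throughput arithmetic (a sketch: the square ~8K per-eye resolution is my assumption based on the "8K per eye" figure in this thread, not a published spec):

```python
def gpix_per_s(width, height, fps):
    # Raw decoded pixel throughput in gigapixels per second.
    return width * height * fps / 1e9

immersive = 2 * gpix_per_s(7680, 7680, 90)  # two ~8K square eye buffers at 90 fps
uhd60 = gpix_per_s(3840, 2160, 60)          # ordinary 4K60 for comparison

print(round(immersive, 1))       # ~10.6 Gpx/s, before any reprojection work
print(round(immersive / uhd60))  # ~21x a 4K60 stream
```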
That's not to say it cannot be used for other things. Blackmagic frequently market all their cameras for prosumer/professional film-making, but you can use the cameras for so much more than just recording films, although the marketing is geared towards film-makers. That doesn't make it misleading.
Sometimes tech is amazing.
The equivalent to the Cine from other makers starts at the $30k and goes up depending on what options you want. Except, at those prices, you're only getting 4K. Red, Arri, Sony, etc won't even get out of bed for anything less than $30k.
That's just BMD's DNA: give the customer lots of bang for their buck. Everything they offer has much lower MSRPs than competitors'. I remember when they first released Resolve for Mac, for free, after BMD acquired DaVinci. Of course it couldn't do much without a $20k Mac Pro build, but the software was free. This was running right next to the $50k Resolve Linux build, so naturally it was jaw-dropping.
https://en.wikipedia.org/wiki/List_of_3D_films_(2005%E2%80%9...
Of the 13 3D movies released this year only four are native 3D, and those are all fully CGI animation, so none of them used 3D cameras. Avatar 2 (2022) was the last movie to use 3D cameras for live action shots and Avatar 3 is the only upcoming movie known to be using them. It's beyond niche at this point unless your name is James Cameron.
If you pay attention in their work you can see how they try to hide the hard drive array that was required, and sometimes also the accountant holding on to it so that it doesn’t get swept away.
The Vision Pro has been out for nearly a year now. I don't think it got traction anywhere close to the hype when it was first announced. Not even among VR enthusiasts, let alone the mass consumer market.
There is an Apple Store near where I live. When I walk by it, 9 out of 10 times there is nobody around the Vision Pro booth, when many people are playing with iPhones and iPads.
In the first year, they were constrained by the number of displays Sony could produce.
> Sony, the supplier of Vision Pro's ultra high resolution OLED microdisplays, can't manufacture more than 900,000 displays per year. Apple needs two displays per headset, so this bottleneck would impose severe limitations on how many Vision Pros can be produced.
https://www.uploadvr.com/apple-vision-pro-production-severel...
As far as I've seen, their sales were in line with the number of units they could be expected to build, at least until Sony is able to ramp up production.
> 2024 Apple Vision Pro Shipments Estimated Between 500–600 Thousand Units, Micro OLED Key to Cost and Volume
https://www.trendforce.com/presscenter/news/20240118-12003.h...
My friend has one and I’ve got a Quest 2 and I was absolutely blown away by the AVP, significantly better VR experience in my opinion.
Vision Pro supporting PSVR2 controllers will help a lot.
The final piece is getting something like ALVR or Virtual Desktop to support PCVR without requiring fiddling.
The Q2 has horrible lenses that induce a terrible experience. I had mine for all of a week before I sold it and decided to wait another gen. The Q3 with better lenses is a significantly better product than Q2.
I’ve not had any issues with my Q2 though, I can play for quite extended amounts of time and it tends to be my arms and legs that stop me playing!
It's more that the Vision Pro deliberately prioritized certain things that Meta or Vive or Valve or Sony have not: geometrically stable pass through, wide library of popular 3D movies via AppleTV and Disney+, high resolution immersive environments, seamless keyboard/mouse/trackpad migration between PC and native apps, strong iOS/iPadOS ecosystem integration, high fps / low latency wireless ultra-wide virtual displays for the Mac, etc.
In some ways it focuses on what the Oculus Go was trying to do but was underpowered to really do it. It's meant to replace other iOS devices for general productivity and entertainment, and to complement a Mac.
It's not focused on VR gaming though it can do that.
I have an Oculus Rift dev kit, Oculus Go, Quest, Quest 2, Valve Index, and PSVR2. The AVP is a much better experience on almost every level but three: too much motion blur when moving your head (this isn't bad when watching high fps video), lack of controller support, and not-so-great hand tracking (which the Quest had to do well due to lack of eye tracking). The controller support should be fixed with the Sony PSVR2 partnership. Motion blur and hand tracking I suspect will be fixed in software as they evolve to prioritize active fitness with the AVP.
https://techcrunch.com/2024/10/13/apple-might-release-a-2000...
People that bought the Vision Pro at $3,500 are not using it all that much. A lower price will just result in more headsets gathering dust.
VR has no product-market fit except for a couple of game niches. Far from the “next computing platform” that justified investment of tens of billions of dollars a year.
Headsets and platforms need fundamental rethinking before optimizing for price.
People that bought Vision Pro are often using it for multiple hours a day. I am sure some collect dust, but many are heavily used.
The Meta Quest is outselling the Xbox series. VR clearly has product market fit, but it doesn't yet have iPhone or iPad levels of market fit.
Also sales /= usage and retention. Engagement is what you need to grow a platform.
Your numbers about Xbox sales might be true for a brief period between the Quest 2 and Quest 3 releases. Still, what matters is engagement and retention.
As mentioned, the only product-market fit (albeit niche) for VR has been some game subgenres. Can you point to any other applications with significant numbers?
I have no more data than you do when you say a "shrinking tiny fraction" of AVP buyers. I've been in the industry for 30 years. We both have our anecdata.
Sometimes it’s ok to make a Lamborghini, and it’s not a failure to say it has fewer owners than the Corolla.
This is so that people can begin making content for the Apple AR headset that comes out 3 years from now, not the $3500 devkit.
People also forget how bleak the iPad app market was for the first year or so. They also forget that VR has existed, via the Quest's lineage, for the better part of 12 years and there are... 4? 5? very good apps.
Even now there's nothing -incredibly- compelling for the Quest. I'm not a hater, I've owned 5 of them starting at DK1.
No such equivalent exists for AVP, it's a new type of device for pretty much everybody.
Who builds half a million units of a dev kit?
Some stuff you can just tell is going to flop. This is one of them. Apple still doesn't get that people don't want to put on highly conspicuous headsets to watch a movie or play a game; they're fine using a phone or tablet for that. Zuckerberg still pretends he didn't spend 2 years and untold billions trying to will Horizon Worlds into relevance. Similarly, nobody talks about immersive video on AVP as some kind of gamechanger, not even the usual Apple consumer strategy whisperers like Daring Fireball.
Apple is not Sony, who were happy to keep investing in their ecosystem even if people didn't buy it (Betamax, MiniDisc, GPS addon for the PSP).
Citation needed. The AVP is priced for supply constraints.
'A "gamechanger" in Apple-ese means tens of millions of units shifted'
Not at all. That may be an external party's definition of success. It is not Apple's.
"Apple is in the business of selling Big Macs, not wagyu steaks."
I can't even begin to describe how wrong this statement is, even based on a cursory glance of their current product line.
This is a 5-10 year strategy, not a 1-2 year one.
The closer you get to that 5-10 years, the more these types of capital intensive projects start looking non-viable (think Apple EV) compared to cash cows like the App Store and iCloud.
Pornographers
IMO the Vision Pro is like the first iPhone/iPad, and in a few years, if they keep refining it, there will be larger adoption.
I think the main thing is that it should support full macOS apps without tethering to an external MacBook/Mac mini. They need to move the compute out of the headset itself and into the battery module. Apple would probably never do this, but imagine if you bought a Mac mini-sized compute module that could go on an external display or connect to a Vision Pro. If the compute were separate, the headset would be significantly lighter and more comfortable.