Direct Sockets API in Chrome 131
200 points
2 months ago
| 27 comments
| chromestatus.com
| HN
modeless
2 months ago
[-]
I think a lot of people don't realize it's possible to use UDP in browsers today with WebRTC DataChannel. I have a demo of multiplayer Quake III using peer-to-peer UDP here: https://thelongestyard.link/

Direct sockets will have their uses for compatibility with existing applications, but it's possible to do almost any kind of networking you want on the web if you control both sides of the connection.
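For reference, a minimal sketch of the UDP-like DataChannel setup (the channel label, STUN server, and `handlePacket` are illustrative names, and real code also needs offer/answer signaling via some server):

```javascript
// UDP-like semantics in the browser: an unordered, unreliable DataChannel.
// ordered: false avoids head-of-line blocking; maxRetransmits: 0 means
// lost packets are simply dropped, as with UDP.
function udpLikeChannelOptions() {
  return { ordered: false, maxRetransmits: 0 };
}

// Browser-only part (sketch; signaling is still required to connect peers):
// const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.l.google.com:19302" }] });
// const dc = pc.createDataChannel("game", udpLikeChannelOptions());
// dc.binaryType = "arraybuffer";
// dc.onmessage = (e) => handlePacket(e.data); // handlePacket is hypothetical
```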

reply
ignoramous
2 months ago
[-]
> Direct sockets will have their uses for compatibility with existing applications...

In fact runtimes like Node, Deno, Cloudflare Workers, Fastly Compute, Bun et al run JS on servers, and will benefit from standardization of such features.

  [WinterCG] aims to provide a space for JavaScript runtimes to collaborate on API interoperability. We focus on documenting and improving interoperability of web platform APIs across runtimes (especially non-browser ones).
https://wintercg.org/
reply
synctext
2 months ago
[-]
This slowly alters the essence of The Internet: the permissionless nature of running any self-organising system like Bittorrent or Bitcoin. This is NOT in Android, just isolated Web Apps on desktops at this stage[0]. The "direct socket access" creep moves forward again. First, IoT without any security standards. Now Web Apps.

With direct socket access to TCP/UDP you can build anything! You lose the constraints of JS-only servers, costly WebRTC server hosting, and the missing listen-socket feature in WebRTC DataChannel.

<self promotion>NAT puncturing is already solved in our lab, even for mobile 4G/5G. This might bring back the cyberpunk dreams of Peer2Peer... In our lab we bought 40+ SIM cards for the big EU 4G/5G networks and got the carrier-grade NAT puncturing working[1]. Demo blends 4G/5G puncturing, TikTok-style streaming, and Bittorrent content backend. Reading the docs, these "isolated" Web Apps can even do SMTP STARTTLS, IMAP STARTTLS and POP STLS. wow!

[0] https://github.com/WICG/direct-sockets/blob/main/docs/explai... [1] https://repository.tudelft.nl/record/uuid:cf27f6d4-ca0b-4e20...

reply
Uptrenda
2 months ago
[-]
Hello, I wanted to say I've been working on a peer-to-peer library and I'm very much interested in your work on symmetric NAT punching (which, as far as I know, is novel). Your work is exactly what I was looking for. Good job on the research; it will have far-reaching applications. I'd be interested in implementing your algorithms, depending on the difficulty, some time. Are they patented or is this something anyone can use?

Here's a link to an overview of my system: https://p2pd.readthedocs.io/en/latest/p2p/connect.html

My system can't handle symmetric-to-symmetric. But it could in theory handle other NAT types to symmetric, depending on the exact NAT types and delta types.

reply
ignoramous
2 months ago
[-]
I read OP's thesis (which focuses on CGNAT), and one of the techniques discussed therein is similar to Tailscale's: https://tailscale.com/blog/how-nat-traversal-works

  ...with the help of the birthday paradox. Rather than open 1 port on the hard side and have the easy side try 65,535 possibilities, let’s open, say, 256 ports on the hard side (by having 256 sockets sending to the easy side's ip:port), and have the easy side probe target ports at random.
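To see why a few hundred sockets plus random probing works, here's a quick back-of-the-envelope sketch (the specific numbers below are illustrative, not from the thesis):

```javascript
// Probability that at least one of k distinct random probes hits one of
// m open ports out of n candidate ports (probes drawn without replacement).
function hitProbability(n, m, k) {
  let missAll = 1;
  for (let i = 0; i < k; i++) {
    missAll *= (n - m - i) / (n - i); // i-th probe misses every open port
  }
  return 1 - missAll;
}

// With 256 ports open on the hard side, roughly a thousand random probes
// already give better than 95% odds of a hit:
// hitProbability(65535, 256, 1024)
```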
reply
Uptrenda
2 months ago
[-]
this comment section has been the most useful and interesting thing I've seen for my own work in a very long time. And completely random, too. Really not bad. To me this represents the godly nature of this website. Where you have extremely well informed people posting high quality technical comments that would be hard to find anywhere else on the web. +100 to all contributors.
reply
synctext
2 months ago
[-]
Indeed, Tailscale was the first to realise this.

We added specific 4G and 5G mobile features. These carrier-grade boxes often have non-random port allocations. "By relying on provider-aware IPv4 range allocations, provider-aware port prediction heuristics, high bandwidth probing, and the birthday paradox we can successfully bypass even symmetric NATs."

reply
3np
2 months ago
[-]
> By leveraging provider-aware (Vodafone,Orange,Telia, etc.) NAT puncturing strategies we create direct UDP-based phone-to-phone connectivity.

> We utilise parallelism by opening at least 500 Internet datagram sockets on two devices. By relying on provider-aware IPv4 range allocations, provider-aware port prediction heuristics, high bandwidth probing, and the birthday paradox we can successfully bypass even symmetric NATs.

U mad. Love it!

reply
eternityforest
1 month ago
[-]
What if someone finds your IP address and sends you a bunch of crap? It would be very easy to use someone's entire monthly data allowance.

Plus, it only works if you can afford and have access to cell service, and in those cases you already have access to normal Internet stuff.

Unless cell towers are able to route between two phones when their fiber backend goes down. That would make this actually pretty useful in emergencies if a tower could work like a ham repeater, assuming it wasn't too clogged with traffic to have a chance.

reply
savolai
2 months ago
[-]
I don’t understand the topic deeply. Is this futureproof, or likely to be shut down in a cat-and-mouse game if it gets widespread, like it needs to for a social network?
reply
noduerme
2 months ago
[-]
Can you explain further... how does this improve upon websockets and socketIO for node?
reply
arlort
2 months ago
[-]
Without a middleman you can only use WebSockets to connect to an HTTP server.

So, for instance, if I want to connect to an MQTT server from a webpage I have to use a server that exposes a WebSocket endpoint. With direct sockets I could connect to any server using any protocol.
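Based on the WICG explainer, that connection would look roughly like this (a sketch: `TCPSocket` is proposed for Isolated Web Apps only and its shape may still change; the SMTP greeting helper is an illustrative example of a non-HTTP protocol):

```javascript
// Sketch: read the greeting banner from a raw TCP service (no HTTP involved).
// Runs only in an Isolated Web App with the direct-sockets permission.
async function readGreeting(host, port) {
  const socket = new TCPSocket(host, port);
  const { readable } = await socket.opened;
  const reader = readable.getReader();
  const { value } = await reader.read();
  reader.releaseLock();
  await socket.close();
  return new TextDecoder().decode(value);
}

// Pure helper, runnable anywhere: SMTP servers greet with a "220" line,
// which is how a client knows it may proceed (e.g. before STARTTLS).
function isSmtpGreeting(line) {
  return /^220[ -]/.test(line);
}
```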

reply
bpfrh
2 months ago
[-]
You can also use WebTransport, with streams for TCP-like and datagrams for UDP-like transport: https://developer.mozilla.org/en-US/docs/Web/API/WebTranspor...
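A rough sketch of the datagram side (the URL argument and `encodePing` helper are placeholders I've made up; WebTransport requires an HTTP/3 server on the other end):

```javascript
// Sketch: fire a lossy, UDP-like datagram over WebTransport (browser-only).
async function sendLossyPing(url) {
  const wt = new WebTransport(url); // requires an HTTP/3 endpoint
  await wt.ready;
  const writer = wt.datagrams.writable.getWriter();
  await writer.write(encodePing(Date.now())); // may be dropped, like UDP
  writer.releaseLock();
  wt.close();
}

// Pure helper, runnable anywhere: pack a millisecond timestamp into 8 bytes.
function encodePing(ms) {
  const buf = new Uint8Array(8);
  new DataView(buf.buffer).setBigUint64(0, BigInt(ms), false); // big-endian
  return buf;
}
```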
reply
IshKebab
2 months ago
[-]
Not peer to peer though presumably?
reply
jauntywundrkind
2 months ago
[-]
There was some traction & interest in https://github.com/w3c/p2p-webtransport but haven't seen any activity in a while now.

I'm pretty certain a whole industry of p2p enthusiasts would spring up, building cool new protocols and systems on the web in rapid time, if this ever showed up.

reply
arthurcolle
1 month ago
[-]
Perfect timing with realtime AGI happening. Need lots of focus on realtime streaming protocols
reply
modeless
2 months ago
[-]
Yes and not in Safari yet either. Someday I hope that all parts of WebRTC can be replaced with smaller and better APIs like this. But for now we're stuck with WebRTC.
reply
bedatadriven
1 month ago
[-]
This is a very early draft I'm following: https://wicg.github.io/local-peer-to-peer/
reply
dboreham
2 months ago
[-]
WebRTC depends on some message transport (using HTTP) existing first between peers before the data channel can be established. That's far from equivalent capability to direct sockets.
reply
modeless
2 months ago
[-]
Yes, you do need a connection establishment server, but in most cases traffic can flow directly between peers after connection establishment. The reality of the modern internet is even with native sockets many if not most peers will not be able to establish a direct peer-to-peer connection without the involvement of a connection establishment server anyway due to firewalls, NAT, etc. So it's not as big of a downgrade as you might think.
reply
huggingmouth
2 months ago
[-]
That changed (ahm.. will change) with IPv6. I was surprised to see that I can reach residential IPv6 LAN hosts directly from the server. No firewalls, no NAT. This remains true even with abusive ISPs that only give out /64 blocks.

That said, I agree that peer to peer will never be seamless, thanks mostly to said abusive ISPs.

reply
kelnos
2 months ago
[-]
> I was surprised to see that I can reach residential ipv6 lan hosts directly from the server. No firewalls, no nat

No NAT, sure, that's great. But no firewalls? That's not great. Lots of misconfigured networks waiting for the right malware to come by...

reply
theamk
2 months ago
[-]
I sure hope not, this will bring in a new era for internet worms.

If some ISPs are not currently firewalling all incoming IPv6 connections, it's a major security risk. I hope some security researcher raises noise about that soon, and firewalls will go closed by default.

reply
immibis
2 months ago
[-]
My home router seems to have a stateful firewall and so does my cellphone in tethering mode - I don't know whether that one's implemented on the phone (under my control) or the network.

Firewalling is back in the control of the user in most cases - the other day on IRC we told someone how to unblock port 80 on their home router.

reply
1oooqooq
2 months ago
[-]
It kind of has already begun
reply
modeless
2 months ago
[-]
Has there been a big ipv6 worm? I thought that the defense against worms was that scanning the address space was impractical due to the large size.
reply
1oooqooq
2 months ago
[-]
I don't think they scan the entire space. But even before that there were worms abusing Bonjour/UPnP, which is what Chrome will bring back with this feature.
reply
apitman
2 months ago
[-]
IPv6 isn't going to happen. Most people's needs are met by NAT for clients and SNI routing for servers. We ran out of IPv4 addresses years ago; if it was actually a problem it would have happened then. It makes me sad for the p2p internet, but it's true.
reply
justahuman74
2 months ago
[-]
> If it was actually a problem

It became a problem precisely the moment AWS started charging for IPv4 addresses.

"IPv4 will cost our company X dollars in 2026, supporting IPv6 by 2026 will cost Y dollars, a Z% saving"

There's now a tangible motivator for various corporate systems to at least support IPv6 everywhere - which was the real IPv6 impediment.

Residential ISPs appear to be very capable of moving to v6: there are lots of examples of that happening in their backends, and they've already demonstrated that they're plenty capable of giving end users boxes that just so happen to do IPv6.

reply
apitman
2 months ago
[-]
Yes and setting up a single IPv4 VPS as load balancer with SNI routing in front of IPv6-only instances solves that.

Most people are probably using ELB anyway

reply
immibis
2 months ago
[-]
What do you mean not going to happen? It's already happening. It's about 45% of internet packets.
reply
apitman
2 months ago
[-]
The sun is about 45% of the way through its life.
reply
paulddraper
2 months ago
[-]
Not happening for 55%.

Try to connect to github.com over IPv6.

reply
remram
2 months ago
[-]
It doesn't work now so it's never going to work?
reply
paulddraper
2 months ago
[-]
If it doesn't work for a website as large and technically forward as GitHub in 2024, the odds are not looking good.
reply
apitman
2 months ago
[-]
GitHub might work someday. Wide enough adoption that you can host a service without an IPv4 address will never happen.
reply
sroussey
2 months ago
[-]
Honestly, it could be a feature rather than a bug…
reply
immibis
2 months ago
[-]
Yes, that's one of the rare exceptions of a company trying to obsolete itself. It's actually one reason a bunch of people are moving away from Github.
reply
ElijahLynn
2 months ago
[-]
"We are introducing a new charge for public IPv4 addresses. Effective February 1, 2024 there will be a charge of $0.005 per IP per hour for all public IPv4 addresses"

https://aws.amazon.com/blogs/aws/new-aws-public-ipv4-address...

reply
apitman
2 months ago
[-]
Yes and setting up a single IPv4 VPS as load balancer with SNI routing in front of IPv6-only instances solves that.

Most people are probably using ELB anyway.

reply
lifthrasiir
2 months ago
[-]
Not only that, but DTLS is mandated for any UDP connections.
reply
modeless
2 months ago
[-]
Is that a problem? Again, I'm talking about the scenario where you control both sides of the connection, not where you're trying to use UDP to communicate with a third party service.
reply
lifthrasiir
2 months ago
[-]
I think all three comments including mine are essentially saying the same but in different viewpoints.
reply
typedef_struct
2 months ago
[-]
This looks to use Web Sockets, not WebRTC, right? I don't see any RTCPeerConnection, and the peerServer variable is unused.

I ask because I've spent multiple days trying to get a viable non-local WebRTC connection going with no luck.

view-source:https://thelongestyard.link/q3a-demo/?server=Seveja

reply
modeless
2 months ago
[-]
Web sockets are only used for WebRTC connection establishment. The code that creates the RTCPeerConnection is part of the Emscripten-generated JavaScript bundle. I'm using a library called HumbleNet to emulate Berkeley sockets over WebRTC.

The code is here: https://github.com/jdarpinian/ioq3 and here: https://github.com/jdarpinian/HumbleNet. For example, here is the file where the RTCPeerConnection is created: https://github.com/jdarpinian/HumbleNet/blob/master/src/humb...

I feel your pain. WebRTC is extremely difficult to use.

reply
evbogue
2 months ago
[-]
Check out Trystero[1], it makes WebRTC super simple to develop with.

[1] https://github.com/dmotz/trystero

reply
flohofwoe
2 months ago
[-]
There's also this new WebTransport thingie based on HTTP/3:

https://developer.mozilla.org/en-US/docs/Web/API/WebTranspor...

I haven't tinkered with it yet though.

reply
modeless
2 months ago
[-]
Yeah, not in Safari yet and no peer-to-peer support. Maybe someday though! It will be great if all of WebRTC's features can be replaced by better, smaller-scoped APIs like this.
reply
eternityforest
1 month ago
[-]
Doesn't WebRTC still require a secure server somewhere?

Direct sockets will be amazing for IoT, because it will let you talk directly to devices.

With service workers you can make stuff that works 100% offline other than the initial setup.

Assuming anyone uses it and we don't just all forget it exists, because FF and Safari probably won't support it.

reply
mhitza
2 months ago
[-]
Longest Yard is my favorite Q3 map, but for some reason I cannot use my mouse (?) in your version of the Quake 3 demo.
reply
modeless
2 months ago
[-]
Interesting, what browser and OS?
reply
mhitza
2 months ago
[-]
Brave browser (Chromium via Flatpak) on the Steam Deck (Arch Linux) in Desktop mode with bluetooth connected mouse/keyboard.
reply
modeless
2 months ago
[-]
Hmm, I bet the problem is my code expects touch events instead of mouse events when a touchscreen is present. Unfortunately I don't have a computer with both touchscreen and mouse here to test with so I didn't test that case. I did implement both gamepad and touch controls, so you could try them to see if they work.
reply
topspin
2 months ago
[-]
Same browser on win10. Mouse works after you click in the window and it goes full screen. However, it hangs after a few seconds of game play.

Stopped hanging... then input locks up somehow.

Switched to chrome on win10, same issue: input locks up after a bit.

reply
modeless
2 months ago
[-]
Yeah that issue I have seen, but unfortunately haven't been able to debug yet as it isn't very reproducible and usually stops happening under a debugger.
reply
topspin
2 months ago
[-]
Even with the problems, just the few seconds of playing before the crash+input hang got me hooked. So, off to GOG to get q3a for $15. Also, quake3e with all the quality, widescreen, aspect ratio and FPS tweaks... chatgpt 4o seems to know everything there is to know about quake3e, for some reason.

Talk about getting nerd sniped.

reply
mhitza
2 months ago
[-]
Works in Firefox, on the same system.
reply
nmfisher
2 months ago
[-]
I can't use mouse either, macos/Chrome. Otherwise, cool!
reply
yesthisiswes
2 months ago
[-]
Awesome demo. I’ve really missed that map it’s been too long.
reply
nightowl_games
2 months ago
[-]
Yeah we use WebRTC for our games built on a fork of Godot 3.

https://gooberdash.winterpixel.io/

tbh WebRTC gives us basically the same network performance as WebSockets and was way more complicated to implement. Maybe the WebRTC perf is better in other parts of the world or something...

reply
modeless
2 months ago
[-]
Yeah WebRTC is a bear to implement for sure. Very poorly designed API. It can definitely provide significant performance improvements over web sockets, but only when configured correctly (unordered/unreliable mode) and not in every case (peer-to-peer is an afterthought in the modern internet).
reply
nightowl_games
2 months ago
[-]
We got it in unreliable/unordered and it still barely moves the needle on network perf over WebSockets, from what we see in North America connecting to another server in North America.
reply
modeless
2 months ago
[-]
I wouldn't expect a big improvement in average performance but the long tail of high latency cases should be improved by avoiding head-of-line blocking. Also peer-to-peer should be an improvement over client-server-client in some situations. Not for battle royale though I guess.

Edit: Very cool game! I love instant loading web games and yours seems very polished and fun to play. Has the web version been profitable, or is most of your revenue from the app stores? I wish I better understood the reasons web games (reportedly) struggle to monetize.

reply
nightowl_games
1 month ago
[-]
Thanks! The web versions of both of our mobile/web games do about the same as the IAP versions. We don't have ads in the mobile versions, so the ad revenue is reasonable. We're actually leaning more into smaller web games as a result of that. As for profit on this game specifically, I think it deserves better. I think Goober Dash is a great game, but it's not crushing it like I'd hoped.
reply
modeless
1 month ago
[-]
That's really interesting, thanks! I agree Goober Dash deserves to be successful.
reply
windows2020
2 months ago
[-]
I would say WebRTC is both a must and only worth it if you need UDP, such as in the case of real-time video.
reply
saurik
2 months ago
[-]
I mean, the only cases where UDP vs. TCP are going to matter are 1) if you experience packet loss (and maybe you aren't for whatever reason) and 2) if you are willing to actively try to shove other protocols around and not have a congestion controller (and WebRTC definitely has a congestion controller, with the default in most implementations being an algorithm about as good as a low-quality TCP stack).
reply
modeless
2 months ago
[-]
Out-of-order delivery is another case where UDP provides a benefit.
reply
winrid
2 months ago
[-]
Runs smoother than the Android home screen. :)
reply
justin66
2 months ago
[-]
Not really peer to peer though, is it? The q3 server is just running in the browser session that shares a URL with everyone else?
reply
modeless
2 months ago
[-]
Yes, it is. The first peer to visit a multiplayer URL hosts the Quake 3 server in their browser. Subsequent visitors to the same multiplayer URL send UDP traffic directly to that peer. The packets travel directly between peers, not bouncing off any third server (after connection establishment). If your clients are on the same LAN, your UDP traffic will be entirely local, not going to the Internet at all (assuming your browser's WebRTC implementation provides the right ICE candidates).

It won't work completely offline unfortunately, as the server is required for the connection establishment step in WebRTC. A peer-to-peer protocol for connection establishment on offline LANs would be awesome, but understandably low priority for browsers. The feature set of WebRTC is basically "whatever Google Meet needs" and then maybe a couple other things if you're lucky.

reply
justin66
1 month ago
[-]
This is neat. A little perverse, but neat.
reply
chocolatkey
2 months ago
[-]
When reading https://github.com/WICG/direct-sockets/blob/main/docs%2Fexpl..., it's noted this is part of the "isolated web apps" proposal: https://github.com/WICG/isolated-web-apps/blob/main/README.m... , which is important context, because the obvious reaction to this is that it's a security nightmare.
reply
crote
2 months ago
[-]
That doesn't really make it any better, if you ask me.

The entire Isolated Web Apps proposal is a massive breakdown of the well-established boundaries provided by browsers. Every user understands two things about the internet: 1) check the URL before entering any sensitive data, and 2) don't run random stuff you download. The latter is heavily enforced by both Chrome and Windows complaining quite a bit if you're trying to run downloaded executables - especially unsigned ones. If you follow those two basic things, websites cannot hurt your machine.

IWA seems to be turning this upside-down. Chrome is essentially completely bypassing all protections the OS has added, and allowing Magically Flagged Websites to do all sorts of dangerous stuff on your computer. No matter what kind of UX they provide, it is going to be nigh-on impossible to explain to people that websites are now suddenly able to do serious harm to your local network.

Browsers should not be involved in this. They are intended to run untrusted code. No browser should be allowed to randomly start executing third-party code as if it is trustworthy, that's not what browsers are for. It's like the FDA suddenly allowing rat poison into food products - provided you inform consumers by adding it to the ingredients list of course.

reply
apitman
2 months ago
[-]
> Every user understands two things about the internet: 1) check the URL before entering any sensitive data, and 2) don't run random stuff you download

I think you're severely overestimating the things every user knows.

reply
derefr
2 months ago
[-]
Does it help to think of it less as Chrome allowing websites to do XYZ, and more as a PWA API for offering to install full-fat browser-wrapper OS apps (like the Electron kind) — where these apps just so happen to “borrow” the runtime of the browser they were installed with, rather than shipping with (and thus having to update) their own?
reply
rad_gruchalski
2 months ago
[-]
The last time I used Chrome was about 3 years ago. You have a choice.
reply
pseudosavant
2 months ago
[-]
Only kind of. If you are on Mac you can use Safari. On Windows your options are Firefox or other versions of Chrome (Edge, Opera, Brave, etc), and Firefox won't work right often enough, which will drive you to a version of Chrome.
reply
eitland
2 months ago
[-]
Something always breaks my streak, but since last year or so I feel I am down to twice a year or something.
reply
girvo
2 months ago
[-]
Unfortunately this is the future. Handing the world wide web's future to Google was a mistake, and the only remedy is likely to come from an (unlikely) antitrust breakup or divestment.
reply
rad_gruchalski
2 months ago
[-]
> Handing the world wide webs future to Google

Nobody handed anything to anyone. They go with the flow. The flow is driven by people who use their products. The browser is how Google delivers their products so it’s kinda difficult to blame them for trying to push the envelope but there are alternatives to Chrome.

reply
troupo
2 months ago
[-]
> They go with the flow.

The ancient history of just 10-15 years ago shows Google aggressively marketing Chrome across all of its not inconsiderable properties like search and Youtube, and sabotaging other browsers while they were at it: https://archive.is/2019.04.15-165942/https://twitter.com/joh...

reply
rad_gruchalski
2 months ago
[-]
Indeed. There was a time I myself used it as my primary browser and recommended it to everyone around. That changed when they started insisting on signing into the account to "make the most out of it", so I went back to Firefox. Since then I stopped caring. I know, virtue signalling. My point is: nobody handed anything over to Google. At the time the alternatives sucked, so Chrome won the market. But today we have great alternatives.
reply
pjmlp
1 month ago
[-]
And some developers shipping Chrome alongside their apps, instead of learning proper Web development.
reply
bloomingkales
2 months ago
[-]
I doubt websites as we know them will be what we'll be dealing with going forward anyway.

What is a browser if we just digest all the HTML and spit out clean text in the long run?

We handed over something of some value I guess, once upon a time.

reply
mschuster91
2 months ago
[-]
> If you follow those two basic things, websites cannot hurt your machine.

Oh yes they can. Quite a bunch of "helper" apps - printer drivers are a bit notorious IME - open up local HTTP servers, and not all of them enforce CORS properly. Add some RCE or privilege escalation vulnerability in that helper app and you got yourself an 0wn-from-the-browser exploit chain.

reply
BenjiWiebe
2 months ago
[-]
How often does that actually happen?
reply
phildenhoff
2 months ago
[-]
Interesting — the Firefox team’s response was very negative, but didn’t (in my reading) address use of the API as being part of an otherwise essentially trusted app (as opposed to being an API available to any website).

In reading their comments, I also felt the API was a bad idea. Especially when technology like Electron or Tauri exist, which can do those TCP or UDP connections. But IWA serves to displace Electron, I guess

reply
nzoschke
2 months ago
[-]
I'm hacking on a Tauri web app that needs to bridge to UDP protocols literally as we speak.

While Tauri seems better than ever for cross-platform native apps, it's still a huge lift to give my web app lower-level access: Rust toolchain, Tauri plugins, sidecar processes, code gen, JSON RPC, all to let my web app talk to my network.

Seems great that Chrome continues to bundle these pieces into the browser engine itself.

Direct sockets plus WASM could eat a lot of software...

reply
1oooqooq
2 months ago
[-]
With so many multiplatform GUI toolkits today, Tauri and Electron are really bad choices.
reply
montymintypie
2 months ago
[-]
What's your recommendation? I've tried so many multiplatform toolkits (including GTK, Qt, wxWidgets, Iced, egui, imgui, and investigated slint and sciter) and nothing has come close to the speed of dev and small final app size of something like Tauri+Svelte.
reply
nzoschke
2 months ago
[-]
I've also tried Flutter, React Native, Kotlin multiplatform, Wails.

I'm landing on Svelte and Tauri too.

The other alternative I dabble with is using the Android Studio, XCode to write my own WebView wrappers.

reply
bpfrh
2 months ago
[-]
What did you dislike about kotlin multiplattform?
reply
1oooqooq
2 months ago
[-]
Of course dev speed will be better with Tauri plus the literal ton of JavaScript transpilers we use today.

But for us an in-house pile of egui helpers allows for fast applications that are closer to native speeds, and Flutter for mobile (using neither Cupertino nor Material).

reply
montymintypie
2 months ago
[-]
Glad to hear that egui is working for you, but in my experience it's not accessible, it's difficult to render accurate text (including emoji and colours), it's very frustrating to extend inbuilt widgets, and it's quite verbose. One of my most recent experiences was making a fairly complex app at work in egui, then migrating to Tauri because it was such a slog.
reply
api
2 months ago
[-]
The web stack is now the desktop UI stack. I think the horse has left the barn.

It’s not great but there’s just no momentum or resources anywhere to work on native anymore outside platform specific libraries. Few people want to build an app that can only ever run on Mac or Windows.

reply
cageface
2 months ago
[-]
The cross platform desktop gui toolkits all have some very big downsides and tend to result in bad looking UIs too.
reply
rubymamis
2 months ago
[-]
I've built my app[1] using Qt (C++ and QML), and I think the UI looks decent. There's still a long way for it to feel truly native, but I've got some cool ideas.

[1] https://get-notes.com/

reply
rty32
2 months ago
[-]
You are probably not solving the same problems many other people are facing.

Many such applications are accessible on the web, often with the exact same UI. They may even have a mobile/iPad version. They may be big enough that they have a design system that needs to be applied in every UI (including the company website). Building C++ code on all platforms and running all the tests may be too expensive. The list goes on.

reply
rubymamis
2 months ago
[-]
I just started prototyping a mobile version of my app (which shares the code with my desktop app) and the result looks promising (still work-in-progress tho).

Offering a web app is indeed not trivial. Maybe Qt WebAssembly will be a viable option if I can optimize the binary and users don't mind a long first load time (after which the app should be cached for instant loads). Or maybe I could build a read-only web app using web technology.

Currently, my focus is building a good native application, and I think most of my users care about that. But in the future, I can see how a web app could be useful for more users. One thing I would like to build is a web browser that could load both QML and HTML files (using a regular web engine), so I could deploy my app simply by serving my QML files over the internet, without the binary.

reply
cageface
2 months ago
[-]
That's definitely one of the best looking Qt apps I've seen.
reply
rubymamis
2 months ago
[-]
Thank you! I think Qt is absolutely great. One needs to put in a little effort to make it look and behave nicely. I wrote a blog post about it[1], if you're interested.

[1] https://rubymamistvalove.com/block-editor

reply
chrismorgan
2 months ago
[-]
> but didn’t (in my reading) address use of the API as being part of an otherwise essentially trusted app

That’s what the Narrower Applicability section is about <https://github.com/mozilla/standards-positions/issues/431#is...>. It exposes new vulnerabilities because of IP address reuse across networks, and DNS rebinding.

reply
mmis1000
2 months ago
[-]
- It is possible, if not likely, that an attacker will control name resolution for a chosen name. This allows them to provide an IP address (or a redirect that uses CNAME or similar) that could enable request forgery.

This is quite trivial, not merely possible. DNS is quite a simple protocol: writing a DNS server that reflects every request for aaa-bbb-ccc-ddd.domain.test to the IP aaa.bbb.ccc.ddd won't even take you a day. And in fact this already exists in the wild.
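For illustration, the name-to-address mapping such a reflecting server performs is pure string work (a sketch; DNS wire-format handling omitted; public wildcard-DNS services like nip.io and sslip.io behave this way):

```javascript
// Map a hostname whose first label encodes an IPv4 address to that address,
// e.g. "10-0-0-1.domain.test" -> "10.0.0.1". Returns null if the label
// doesn't look like an address. A rebinding DNS server answers every A
// query for such names with the encoded IP.
function reflectedIp(hostname) {
  const m = /^(\d{1,3})-(\d{1,3})-(\d{1,3})-(\d{1,3})\./.exec(hostname);
  if (!m) return null;
  const octets = m.slice(1, 5).map(Number);
  return octets.every((o) => o <= 255) ? octets.join(".") : null;
}
```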

reply
rty32
2 months ago
[-]
Have Isolated Web Apps / Web Bundles gained any traction over the past few years? I just realized that this thing existed and there were some discussions around it -- I had almost completely forgotten about it.

I did a search, and most of what comes up is from a few years ago.

reply
angra_mainyu
2 months ago
[-]
It makes much more sense to bundle a binary + web extension (w/ native messaging) to handle bridging the browser isolation in a sensible manner.

It's a minimal amount of extra work and would mean you cross browser isolation in a very controlled manner.

reply
meiraleal
2 months ago
[-]
It is used by chromeOS
reply
rty32
2 months ago
[-]
You mean apps written by Google as "native apps"?

Any use cases outside that?

If not, it is probably fair to say nobody uses this.

reply
meiraleal
2 months ago
[-]
A PWA is an IWA, so lots of people are using it besides Google
reply
chrisvenum
2 months ago
[-]
I found this issue indicating that it's considered a bad idea for end-user safety:

https://github.com/mozilla/standards-positions/issues/431

reply
hoherd
2 months ago
[-]
Mozilla won't even support webusb[1][2][3] due to security reasons, so there's no way they'd support raw sockets.

[1] https://developer.mozilla.org/en-US/docs/Web/API/USB#browser...

[2] https://wiki.mozilla.org/WebAPI/Security/WebUSB

[3] https://mozilla.github.io/standards-positions/#webusb

reply
jeswin
2 months ago
[-]
I prefer web apps to native apps any day. However, web apps are limited in what they can do.

But what they can do is not consistent - for example, a web app can take your picture and listen to your microphone if you give it permission, but it can't open a socket. Another example: Chrome came out with a File System Access API [2] in August; it's fantastic (I am using it) and it allows a class of native apps to be replaced by web apps. As a user, I don't mind having to jump through hoops and giant warning screens to accept that permission - but I want this ability on the Web Platform.

For web apps to be able to compete with native apps, we need more flexibility, Mozilla. [1]

[1]: https://mozilla.github.io/standards-positions/ [2]: https://developer.chrome.com/docs/capabilities/web-apis/file...

reply
1oooqooq
2 months ago
[-]
nah. we need even less. i prefer webapps because of the limitations. much less to worry about
reply
Uptrenda
2 months ago
[-]
I saw this proposal years ago now and was initially excited about it. But seeing how people envisioned the APIs, usage, etc, made me realize that it was already too locked down. Being able to have something that ran on any browser is the core benefit here. I get that there are security concerns but unfortunately everyone who worked on this was too paranoid and dismissive to design something open (yet secure.) And that's where the proposal is today. A niche feature that might as well just be regular sockets on the desktop. 0/10
reply
mlhpdx
2 months ago
[-]
I’m excited, and anticipate some interesting innovation once browser applications can “talk UDP”. It's been a long time in the making. Gaming isn't the end of it - being able to communicate with local network services (hardware) without an intervening API is very attractive.
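Based on the WICG explainer, talking UDP to a local device would look roughly like the sketch below. The API shape may still change, the device address is hypothetical, and `UDPSocket` only exists inside an Isolated Web App, so the snippet is guarded to be inert elsewhere:

```javascript
// Sketch: send one UDP datagram to a local network device via the
// Direct Sockets API (WICG draft shape, Isolated Web Apps only).
async function pingDevice(address, port) {
  const socket = new UDPSocket({ remoteAddress: address, remotePort: port });
  const { writable } = await socket.opened;
  const writer = writable.getWriter();
  // Datagrams are written as message objects wrapping a byte payload.
  await writer.write({ data: new TextEncoder().encode("status?") });
  writer.releaseLock();
  await socket.close();
}

if (typeof UDPSocket !== "undefined") {
  // Hypothetical printer on the local network.
  pingDevice("192.168.1.50", 9100);
}
```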
reply
immibis
2 months ago
[-]
Indeed. I'll finally be able to connect to your router and change your wifi password, all through your browser.
reply
lazyasciiart
2 months ago
[-]
Shhh, you’re giving my parents unrealistic expectations of how much remote tech support I can do.
reply
Spivak
2 months ago
[-]
Anything that moves the web closer to its natural end state— the J(S)VM is a win in my book. Making web apps a formally separate thing from pages might do some good for the web overall. We could start thinking about taking away features from the page side.
reply
remram
2 months ago
[-]
This is beyond that, it's more a move to remove the VM than make JS a generic VM.
reply
fhdsgbbcaA
2 months ago
[-]
Great fingerprinting vector. Expect nothing less from Google.
reply
hipadev23
2 months ago
[-]
What about WebTransport? I thought that was the http/3 upgrade to WebSockets that supported unreliable and out-of-order messaging
reply
mmis1000
2 months ago
[-]
I think WebRTC data channels are a good alternative if you want peer-to-peer connections. WebTransport is strictly for client-server architectures.
reply
troupo
2 months ago
[-]
Status of specification: "It is not a W3C Standard nor is it on the W3C Standards Track."

Status in Chrome: shipping in 131

Expect people to claim this is a vital standard that Apple is not implementing because they don't want web apps to compete with the App Store. Also expect sites like https://whatpwacando.today/ to uncritically include it

reply
meiraleal
2 months ago
[-]
Expect Apple to claim this is not a vital standard and to not implement it because they don't want web apps to compete with the App Store. Also expect sites like https://whatpwacando.today/ to obviously include it
reply
troupo
2 months ago
[-]
Which part of "is not a w3c standard and not any standards track" do you not understand?

I am not surprised sites like that include Chrome-only non-standards; they've done this for years while claiming impartiality

reply
meiraleal
2 months ago
[-]
Cry me a river. Apple doesn't need you to defend their strategic and intentional PWA boycott.
reply
troupo
2 months ago
[-]
Which part of "is not a w3c standard and not any standards track" do you not understand?

Do you understand that for something to become a standard, it needs two independent implementations? And a consensus on API?

Do you understand that "not on any standards track" means it's Chrome and only Chrome pushing this? That Firefox isn't interested in this either?

Do you understand that blaming Apple for everything is borderline psychotic? And that Chrome implementing something at breakneck pace doesn't make it a standard?

Here's Mozilla's extensive analysis and conclusion "harmful" that Google sycophants and Apple haters couldn't care less about: https://github.com/mozilla/standards-positions/issues/431#is...

reply
nulld3v
2 months ago
[-]
There are a lot of reasons why people have such extreme differing opinions on this.

I, for one, am still salty about the death of WebSQL due to "needing independent implementations". Frankly, I think that rule is entirely BS and needs to be completely removed.

Sure, there is only one implementation of WebSQL (SQLite) but it is extremely well audited, documented and understood.

Now that WebSQL is gone, what has the standards committee done to replace it? Well, now they suggest using IndexedDB or bringing your own SQLite binary using WASM.

IndexedDB is very low level, which is why almost no one uses it directly. It also has garbage performance, to the point where it's literally faster to run SQLite on top of IndexedDB instead: https://jlongster.com/future-sql-web

So ultimately if you want to have any data storage on the web that isn't just key-value, you now have to ship your own SQLite binary or use some custom JS storage library.

So end users now have to download a giant binary blob that is also completely unauditable. And since there is no standard storage solution, everybody uses a slew of different libraries to emulate SQL/NoSQL storage. That storage is emulated on top of IndexedDB/LocalStorage, so all of these libraries are mangling high-level data into key-value storage, and it ends up being incredibly difficult to inspect as an end user.

As a reminder: when the standards committee fails to create a good standard, the result is not "everybody doesn't do this because there is no standard", it is "everybody will still do this but they will do it 1 million different ways".
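To make the "low level" complaint above concrete: a single keyed write in raw IndexedDB needs an upgrade handler, a transaction, and several callbacks, which is why everyone reaches for a wrapper library. A minimal sketch (database and store names are made up; browser-only, so the setup is guarded to be inert elsewhere):

```javascript
// Sketch: one key-value put in raw IndexedDB.
function putUser(db, user, done) {
  const tx = db.transaction("users", "readwrite");
  tx.objectStore("users").put(user); // key taken from the store's keyPath
  tx.oncomplete = () => done(null);
  tx.onerror = () => done(tx.error);
}

if (typeof indexedDB !== "undefined") {
  const open = indexedDB.open("app-db", 1);
  open.onupgradeneeded = () =>
    open.result.createObjectStore("users", { keyPath: "id" });
  open.onsuccess = () =>
    putUser(open.result, { id: 1, name: "ada" }, () => {});
}
```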

reply
troupo
2 months ago
[-]
> Frankly put, I think that rule is entirely BS and needs to be completely removed.

That's what Google is essentially doing: they put up a "spec", and then just ship their own implementation, all others be damned.

Here's the most egregious example: WebHID https://github.com/mozilla/standards-positions/issues/459

--- start quote ---

- Asked for position on Dec 1, 2020

- One month later, on Jan 4, 2021, received input: this is not even close to being a draft for a standard

- Two months later, on March 9, 2021, enabled by default and shipped in Chrome 89, and advertised it as fait accompli on web.dev

- Two more months later: added 2669 lines of text, "hey, there's this "standard" that we enabled by default, so we won't be able to change it since people probably already depend on it, why don't you take a look at it?"

--- end quote ---

The requirement to have at least two independent implementations is there to try and prevent this thing exactly: the barreling through of single-vendor or vendor-specific implementations.

Another good example: Constructible Stylesheets https://github.com/WICG/construct-stylesheets/issues/45

Even though several implementations existed, the API was still in flux, and the spec had a trivially reproduced race condition. Despite that, Google said that their own project needed it and shipped it as is, and they wouldn't revert it.

Of course over the course of several years since then they changed/updated the API to reflect consensus, and fixed the race condition.

Again, the process is supposed to make such behavior rare.

What we have instead is Google shitting all over standards processes and people cheering them on because "moving the web forward" or something.

---

As for WebSQL: I'm also sad it didn't become a standard, but ultimately I came to understand and support Mozilla's position. Short version here: https://hacks.mozilla.org/2010/06/beyond-html5-database-apis... Long story here: https://nolanlawson.com/2014/04/26/web-sql-database-in-memor...

There's no actual specification for SQLite. You could say "fuck it, we ship SQLite", but then... which version? Which features would you enable? What would be your upgrade path as SQLite evolves? etc.

reply
meiraleal
2 months ago
[-]
What part of "cry me a river" did you not understand? Don't go crazy because at least one of the browsers proposes things that move the web forward. Geez, you should take a break from the internet. So many "?"
reply
troupo
2 months ago
[-]
> Don't go crazy because at least one of the browsers propose things that move the web forward.

No, they shape the web in an image that is beneficial to Google, and Google only.

> Geez, you should take a break from the internet. So many "?"

Indeed, so many "?", because, as you showed, Google sycophants cannot understand why these questions are important.

reply
pjmlp
1 month ago
[-]
A generation lost in Internet Explorer....
reply
badgersnake
2 months ago
[-]
It’s pretty clear Google are building an operating system, not a browser.
reply
pjmlp
1 month ago
[-]
It is called ChromeOS, and its spread is helped by everyone who keeps pushing Electron all over the place.
reply
bloomingkales
2 months ago
[-]
Can a browser run a web server with this?
reply
melchizedek6809
2 months ago
[-]
Since it allows for accepting incoming TCP connections, this should allow HTTP servers to run within the browser, although listening directly on port 80/443 might not be supported everywhere. (I can't see it mentioned in the spec, but from what I remember, on most *nix systems only root can listen on ports below 1024 - though I might be mistaken, since it's been a while.)
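Going by the WICG explainer, an in-browser server would look roughly like this. This is a sketch of the draft API shape only (it may change, and `TCPServerSocket` is exposed solely to Isolated Web Apps, so the call is guarded); it also sidesteps the privileged-port question by using a high port:

```javascript
// Sketch: accept one TCP connection in the browser and answer it with a
// hard-coded HTTP response, per the Direct Sockets draft.
async function serveOnce(port) {
  const server = new TCPServerSocket("0.0.0.0", { localPort: port });
  const { readable } = await server.opened;
  const reader = readable.getReader();
  const { value: connection } = await reader.read(); // incoming TCPSocket
  const { writable } = await connection.opened;
  const writer = writable.getWriter();
  await writer.write(new TextEncoder().encode(
    "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"));
  await writer.close();
}

if (typeof TCPServerSocket !== "undefined") {
  serveOnce(8080); // high port: no root/admin privileges needed
}
```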
reply
apitman
2 months ago
[-]
I assume they would limit it to clients.
reply
arzig
2 months ago
[-]
The inner platform effect intensifies.
reply
westurner
2 months ago
[-]
From "Chrome 130: Direct Sockets API" (2024-09) https://news.ycombinator.com/item?id=41418718 :

> I can understand FF's position on Direct Sockets [...] Without support for Direct Sockets in Firefox, developers have JSONP, HTTP, WebSockets, and WebRTC.

> Typically today, a user must agree to install a package that uses L3 sockets before they're using sockets other than DNS, HTTP, and mDNS. HTTP Signed Exchanges is one way to sign webapps.

But HTTP Signed Exchanges is cancelled, so arbitrary code with sockets if one ad network?

...

> Mozilla's position is that Direct Sockets would be unsafe and inconsiderate given existing cross-origin expectations FWIU: https://github.com/mozilla/standards-positions/issues/431

> Direct Sockets API > Permissions Policy: https://wicg.github.io/direct-sockets/#permissions-policy

> docs/explainer.md >> Security Considerations : https://github.com/WICG/direct-sockets/blob/main/docs/explai...

reply
demarq
2 months ago
[-]
Something tells me this is more to do with a product Google wants to launch rather than a genuine attempt to further the web.

I’ll keep my eyes on this one, see where we are in a year

reply
FpUser
2 months ago
[-]
All nice and welcome. But at what point does the browser become a full-blown OS with the same functionality and associated vulnerabilities, yet still less performant, since it sits on top of another OS and goes through more layers? And of course run and driven by one of the largest privacy invaders and spammers in the world
reply
anilgulecha
2 months ago
[-]
> At what point browser becomes full blown OS.

Happened over a decade ago - ChromeOS. It's also the birthplace of other similar tech: WebMIDI, WebUSB, Web Bluetooth, etc.

reply
revskill
2 months ago
[-]
That means we can connect directly to a remote Postgres server from the web browser?
reply
zamadatix
2 months ago
[-]
So long as you do it from an isolated web app rather than a normal page.
reply
kureikain
2 months ago
[-]
This means that we can finally do gRPC directly from browser.
reply
Asmod4n
2 months ago
[-]
Thank god they plan to limit this to Electron-type apps.
reply
sabbaticaldev
2 months ago
[-]
so with this I would be able to create a server in my desktop web app and sync all my devices using webrtc
reply
hexo
2 months ago
[-]
Game over for security.
reply
tjoff
2 months ago
[-]
Great, so now one mis-click and your browser will have a field day infecting your printer, coffee machine, and all the other crap that was previously shielded by NAT and/or a firewall.
reply
jeroenhd
2 months ago
[-]
As long as they don't change the spec, this will only be available to special locally installed apps in enterprise ChromeOS environments. I don't think their latest weird app format is going to make it to other browsers, so this will remain one of those weird Chrome only APIs that nobody uses.
reply
fensgrim
2 months ago
[-]
> special locally installed apps in enterprise ChromeOS environments

There was https://developer.chrome.com/docs/apps/overview though, so this seems to be planned feature creep after deprecating the former one? "Yeah, our enterprise partners now totally need this, you see, no reasoning needed"

reply
grishka
2 months ago
[-]
Can we please stop this feature creep in browsers already?
reply
pjmlp
2 months ago
[-]
Yet another small step into ChromeOS take over.
reply
huqedato
2 months ago
[-]
Just now, when I have only recently switched permanently to Firefox...
reply
Jiahang
2 months ago
[-]
nice!
reply
xenator
2 months ago
[-]
Can't wait to see it working.
reply
revskill
2 months ago
[-]
Why wait? What can you do with it? Can't wait to wait for you.
reply