I can see why local UI development fell out of favor, but it is still better (faster, established tooling, and computers already have it, so there's nothing to download). I can't help but feel like a lighter-weight VM (versus Electron) is what we actually want. Or at least what _I_ want: something like UXN but just a little more fully featured.
I'm writing some personal stuff for DOS now, with the idea that every platform has an established DOS emulator and the development environment is decent. Don't get me wrong, this is probably totally crazy and a dead end, but it's fun for now. Still, a universally available emulator of a bare-bones, simple machine could solve many computing problems at once, so the idea is tempting. Using DOS as that machine is maybe crazy, but it's widely available and self-hosting. To make networked applications, though, advancing the state of emulators like DOSBox would be critical.
Actually, on local-first: maybe the way forward is first to take a step back. What were we trying to accomplish again? What are computers for?
It wouldn't be such a mistake if it were done well. For all the bells and whistles web UIs have, they still lack a lot of what we had on the desktop 30 years ago: good keyboard shortcuts, tab support in forms, good input masks, and so on.
What are computers for?
Local-first could be the file format that lets us use our devices without cloud services. Back to file-first, back to vendors selling software as apps, back to paying for software once (or for a year of updates, or whatever).
It's a new horizon.
CRDTs allow for conflict-free edits where raw files start falling short (calendars, TODOs, etc). I'd love to see something like Syncthing for CRDTs, so that local-first can take the next logical step forward and go cloud-free.
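To make that concrete, here's a minimal sketch of a last-writer-wins register, about the simplest CRDT there is; the names below are made up for illustration, not taken from any particular library:

```typescript
// Minimal last-writer-wins (LWW) register. Names are illustrative only.
interface LWWRegister<T> {
  value: T;
  timestamp: number; // time of the last write
  nodeId: string;    // tie-breaker so replicas can't disagree
}

function write<T>(value: T, nodeId: string): LWWRegister<T> {
  return { value, timestamp: Date.now(), nodeId };
}

// Merging is commutative, associative and idempotent, so replicas can
// exchange state in any order, any number of times, and still converge.
function merge<T>(a: LWWRegister<T>, b: LWWRegister<T>): LWWRegister<T> {
  if (a.timestamp !== b.timestamp) return a.timestamp > b.timestamp ? a : b;
  return a.nodeId > b.nodeId ? a : b; // deterministic tie-break
}
```

Real CRDT libraries (Automerge, Yjs, etc.) are far more sophisticated, but that merge-anywhere-and-converge property is exactly what would let a "Syncthing for CRDTs" stay cloud-free.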
The hardest trick would be securing it, in particular how you define an application boundary so that the same application gets the same roamingStorage while bad-actor applications can't spoof your app and exfiltrate data from it. My riffing hasn't found an easy/simple/dumb solution for that: if you want offline apps you maybe can't just rely on the website URL the way localStorage mostly does today, and that's before you get into the confusion of multiple users sharing the same browser instance. But I assume it's a solvable problem if there were interest in it at the browser level.
I'm also firmly in the native app camp. And again, Apple did this right. The web interface to iCloud works great from both Firefox and Chromium, even on OpenBSD, even with E2EE enabled (you have to authorise the session from an Apple device you own, but that's actually a great way to protect it and I don't mind the extra step).
It's probably harder to answer those questions if you can't build the solution around a device with a secure element. But there's a lot of food for thought here.
Then you are answering the wrong question. I want a "web native" answer and proposed a simple modification of existing Web APIs. As a mixed iOS/Windows/Linux user, I have selfish reasons to want a cross-device solution that works at the Firefox standardized level. Even outside of the selfish reason, the kinds of "apps" I've been building that could use simple device-to-device sync have just as many or sometimes more Android users than Apple device users. I've also seen some interesting mixes there too among my users (Android phone, iPadOS device, Windows device; all Chrome browser ecosystem though).
> It's probably harder to answer those questions if you can't build the solution around a device with a secure element.
Raw Passkey support rates are really high. Even devices stuck on Windows 10 because they have no TPM 2.0 still often have reasonably secure TPM 1.0 hardware.
Piggybacking on Passkey roaming standards may be a possibility here, though mixed ecosystem users will need ways to "merge" Passkey-based roaming for the same reasons they need ways to register multiple Passkeys to an app. (I need at least two keys, sometimes three, for my collection of devices today/cross-ecosystem needs, again selfishly at least.)
I don't see why this mechanism shouldn't be available both on the web and in native apps. The libraries would just implement the same protocol spec and use equivalent APIs. Just like with WebRTC, RSS, iCal, etc. And again, ideally with P2P capability.
> [...] that works at the Firefox standardized level.
What about a W3C standard? Chrome hijacked the process by implementing whatever-the-hell they like and forcing it upon Firefox & Safari through sheer market share. It would be good to reinforce the idea that vendor-specific "standards" are a no-no.
It also just doesn't work the other way: Firefox tried the same thing with DNT, nobody respected it.
> Piggybacking on Passkey roaming standards may be a possibility here [...]
WebAuthn sounds good, that kinda covers the TPM/SEP requirement. Native apps already normalised using webviews for auth. I wonder if there's a reasonable way to cover headless devices as well, but self-hosted/P2P apps like Syncthing also usually have a web UI.
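For reference, a hedged sketch of what registering a platform-bound credential looks like with the WebAuthn API; the RP name, user details and challenge handling below are placeholders, not a real enrollment flow:

```typescript
// Rough sketch of registering a platform-bound (TPM / Secure Enclave backed)
// credential with WebAuthn. RP name, user info and challenge are placeholders;
// in practice the challenge comes from whatever party later verifies the key.
async function registerDeviceKey() {
  return navigator.credentials.create({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // placeholder
      rp: { name: "Example Local-First App" },
      user: {
        id: crypto.getRandomValues(new Uint8Array(16)),      // placeholder handle
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],   // ES256
      authenticatorSelection: { authenticatorAttachment: "platform" },
    },
  });
}
```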
> [...] again selfishly at least.
No problem with being "selfish". Every solution should start with answering a need.
That's basically the JVM, isn't it?
It's interesting to think how some of the reasons it sucked for desktop apps (performance, native UI) are also true of Electron. Maybe our expectations are just lower now.
It's not completely crazy. Software was developed by much smaller teams and got out the door quickly in that era.
What we need is a device-transparent way to see our *data*. We got so used to the idea that web applications let us work from "dumb terminals" that we failed to realize that *there is no such thing as a dumb terminal anymore*. With multi-core smartphones, many of them with 4, 8, 12, or 16 GB of RAM, it's not too hard to notice that the actual bottlenecks in mobile devices are battery life and (intermittent and still relatively expensive) network connectivity. These are problems that can be solved by appropriate data synchronization, not by removing the data from the edge.
One of the early jokes about web2.0 was that to have a successful company you should take a Unix utility and turn it into a web app. This generation of open source developers is reacting to this by looking at successful companies and building "self-hosted" versions of these web apps. What they didn't seem to realize is that **we don't need them**. The utilities and the applications still work just fine; we just need to manage the data and how to sync it between our mobile/edge devices and our main data storage.
If you are an open source developer and you are thinking of creating a web app, do us all a favor and ask yourself first: do I need to create yet-another silo or can I solve this with Syncthing?
[0]: https://raphael.lullis.net/thinking-heads-are-not-in-the-clo...

I recently found out that Syncthing is ending Android support. :(
> Now, of course, there are many advantages to this shift: collaboration, backups, multi-device access, and so on. But it’s a trade! In exchange, we’ve lost the ability to work offline, user experience and performance, and indeed true dominion over our own data.
I’ve decided that the advantages of storing my bookmarks locally far outweigh the chance that I'll want to access them from a different device or collaborate with someone else on them. Yes, it means I've created something of a 'silo', but I'm starting to think that's not a bad thing.
Why don't you store those bookmarks as markdown files and then upload them to a private repo you can read on other devices and even your phone through the GitHub mobile app? If they're bookmarks, they'll work perfectly as links in markdown which you can click in the GitHub app.
Note: I pay $4/mo for GitHub private repos and I absolutely defy anyone to show me a better deal in any SaaS company. I open the GitHub mobile app at least 10 times a day. This is the only subscription service that is inarguably worth it as far as I'm concerned.
What’s your current solution for local bookmarks?
I'm right in the middle of setting it up. Basically, curl and a bunch of bash scripts.
In our case we're building a local-first multiplayer "IDE for tasks and notes" [1] where the syncing or "cloud" component adds features like real-time collaboration, permission controls and so on.
Local-first ensures the principles mentioned in the article, like guaranteed access to your data and no spinners. But that's just the _data_ part. To really add longevity to software, I think it would be cool if it were also possible to guarantee that the _service_ part remains available. In our setup we'll allow users to "eject" at any time by saving a .zip of all their data and simply downloading a single executable (like "server.exe" or "server.bin"). The idea is that you can then easily switch to the self-hosting backend on your own computer or a server if you want (or reverse the process and go back to the cloud version).
Signed up for early access and looking forward to it!
Too few people are taking advantage of Redbean <https://redbean.dev/> and what it can do. Look into it.
A recent project is https://shrink.video, which is using the WASM version of ffmpeg to shrink or convert video in the user's browser itself, for privacy and similar reasons mentioned before.
I store the user data (progress, etc.) in an indexedDB in the user’s browser and I have to say:
> No spinners: your work at your fingertips
is not true at all. indexedDB can be frustratingly slow on some devices. It also has a frustratingly bad DX (even if you use a wrapper library like idb or dexie) due to limitations in its design, which force you into bad database designs that further slow things down for the user (and increase the total storage consumption on the user's device).
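For what it's worth, here is roughly what the progress storage looks like through the idb wrapper mentioned above (database, store and key names are made up):

```typescript
import { openDB } from 'idb';

// Database/store/key names here are made up for illustration.
async function saveProgress(videoId: string, seconds: number) {
  const db = await openDB('my-app', 1, {
    upgrade(db) {
      db.createObjectStore('progress');
    },
  });
  // Even a simple write like this goes through IndexedDB's async,
  // transactional machinery, which is where the latency shows up.
  await db.put('progress', { seconds, updatedAt: Date.now() }, videoId);
}

async function loadProgress(videoId: string) {
  const db = await openDB('my-app', 1);
  return db.get('progress', videoId);
}
```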
I also wish browsers offered a standard way to sync data between each other. Even though you can share your Firefox tabs between your phone and computer, you can't get the data from indexedDB on the same site between your computer and phone. Instead you have to either use a server or download and drop the data between the two clients.
indexedDB feels like a broken promise of local first user experience which browser vendors (and web standard committees) have given up on.
When I was at Apple (2010-2015) every app was Local First. In fact, they legally had to be in order to be sold in Germany, where iCloud cannot be mandatory (the country has a history of user data being abused).
You'll notice that when the network goes down, all your calendars, email, contacts and photos are still there. The source of truth is distributed.
Client side apps writing to local db with background sync (application specific protocols) works excellently. You just don’t write your UI as a web page.
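A rough sketch of that pattern, not Apple's actual implementation: write to the local store first, queue the mutation, and flush the queue when connectivity returns (the /sync endpoint is hypothetical):

```typescript
// Rough sketch of "write locally, sync in the background".
// Illustrative only, not any vendor's actual sync protocol.
type Mutation = { id: string; table: string; payload: unknown; ts: number };

const pending: Mutation[] = [];

function writeLocal(mut: Mutation) {
  // 1. Apply to the local store immediately (source of truth for the UI).
  localStorage.setItem(`${mut.table}:${mut.id}`, JSON.stringify(mut.payload));
  // 2. Remember that it still has to be pushed upstream.
  pending.push(mut);
}

async function flush() {
  while (pending.length > 0) {
    const mut = pending[0];
    try {
      // Hypothetical endpoint; real apps use an app-specific protocol.
      await fetch('/sync', { method: 'POST', body: JSON.stringify(mut) });
      pending.shift(); // only drop it once the server has acknowledged it
    } catch {
      break; // still offline; try again on the next 'online' event
    }
  }
}

// Retry whenever connectivity comes back.
window.addEventListener('online', flush);
```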
This I've wondered about for a while. There is plenty of talk about server-side rendering, which I don't think is useful for many apps out there. SSR is quite wasteful of the client-side resources that could be put to use. And I've seen many apps being developed with "use client" littered all over, which makes you wonder why you even want SSR in your app.
In Next.js, "use client" is used to force the rendering to take place on the client, because many components cannot be rendered on the server. For example maps. In that case, it's unnecessary to use an SSR framework.
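For example, any component that touches browser-only APIs has to opt out of server rendering; a minimal sketch (geolocation standing in for a map widget):

```tsx
'use client';

import { useEffect, useState } from 'react';

// This component reads browser-only APIs (navigator.geolocation), so it can
// only render on the client; in Next.js that's what the 'use client'
// directive at the top of the file declares.
export default function UserLocation() {
  const [coords, setCoords] = useState<string>('locating…');

  useEffect(() => {
    navigator.geolocation.getCurrentPosition((pos) =>
      setCoords(`${pos.coords.latitude}, ${pos.coords.longitude}`)
    );
  }, []);

  return <p>You are at {coords}</p>;
}
```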
The issue with local-first web dev, in my experience, is twofold:
1. It's super hard. The problem of syncing data is very hard to solve and there is little support from modern tooling, mostly because how you should handle it differs vastly from app to app.
2. Very few people in the West, or people who pay for software, are offline for long stretches. There are simply few to no requests for making apps work offline. I would argue the opposite: it is probably mostly bad business, because you get stuck on technical details due to #1 being so hard to solve. Even people who say they will use the app offline are never offline in my experience (from working on a real app that worked offline).
I work on an app that has clear offline benefits, although pretty much no one will use it offline in practice, and where I live people have 5G access nowadays even in places that used to be offline, i.e. trains, tunnels, etc. Even so, I plan to make my app work offline, but only after I have success with it.
Unfortunately that means if you download an HTML file and double-click on it, it can send anything in your downloads folder anywhere else.
I really don't understand what this means.
I too like local HTML and would love to see it restored without requiring a local server. If I spin up a local Webrick to serve anything from my file system, I’m no better off (except maybe that it’s scoped to a particular directory and children).
IMO the problem is rooted in co-mingling documents and applications on a web page / in an HTML file. Let the user save documents as .html: then they should not be able to do any harm, because a document is a digital sheet of paper. And save web applications as, say, .hta: then the user should not expect any more isolation than for a downloaded .exe or .sh file, and the client program should treat it with due care when downloading, e.g. by putting it in a separate subfolder, setting an SELinux context, etc.
python -m http.server 8080
If you're stuck with Python 2: python -m SimpleHTTPServer 8080
At least this doesn't introduce the security issues you'd see with any file:// resource being able to load other file:// resources.

https://docs.python.org/3/library/http.server.html#http-serv...
You should also add a `--bind 127.0.0.1` so you don't expose the site to everyone on your network.
To be safe, this should be used instead: `-p 127.0.0.1:8000:8000`
> Docker routes container traffic in the nat table, which means that packets are diverted before it reaches the INPUT and OUTPUT chains that ufw uses. Packets are routed before the firewall rules can be applied, effectively ignoring your firewall configuration.
So Docker is "effectively" ignoring your firewall in the case of ufw. I don't see how it can be considered not to be ignoring your firewall when it ignores the rules you've set up.
Does Docker violate the principle of least surprise? Yes. Was I bitten by this behavior? Definitely. Does it bypass the firewall? No.
To be frank, it feels like the kind of technical nitpick argument I'd read from a Docker Inc employee trying to somehow defend ignoring the user's firewall.
The end result is that you set up rules in UFW, and Docker ignores them.
Windows doesn't ship with Python, but it does have a silly thing where, if you type python in a command shell of your choice, it will try to auto-install Python. (When it works it's kind of magic; when it doesn't, it's a pain to debug the PATH problem.)
It's not exactly a one-liner, but in an off-the-shelf Windows install it is a short script to create a simple static file web server inside PowerShell.
Also there are npm and deno packages/CLIs that are easy to install/run for one-liners if you want to install something more "web native" than Python. `npx http-server` is an easy one-liner with any recent Node install.
https://tomlarkworthy.moldable.app/index.html
You can turn off network and the download works.
Because I can't think of a way to do that without serviceworkers. I mean a way that doesn't involve the end user reading a paragraph of instructions, based on their "OS".
Anyhow, sorry, I just can't tell what it does from the confines of that demo and a cellphone browser...
I recall you can't use web workers, but just loading a JS file with <script> tags is fine.
Though it's easy enough to run a local Caddy server. It was easier in the past: in earlier Caddy versions you could just place the caddy binary in PATH and launch `caddy` from any directory to serve it automatically; now it requires more arguments or a config file.
about:config -> security.fileuri.strict_origin_policy -> false
source: https://stackoverflow.com/questions/58067499/runing-javascri...
+1 for having your own local 'tab home page'!
OP said
> a way to securely host local servers without root but also without just praying that another app doesn't port-squat me
Binding to port 0 is a solution. I tested it and I always get a port not used by other programs.
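For example, in Node the OS picks a free ephemeral port for you and you read it back after listening; a minimal sketch:

```typescript
import { createServer } from 'node:http';
import type { AddressInfo } from 'node:net';

const server = createServer((_req, res) => res.end('hello'));

// Port 0 asks the OS to pick any free ephemeral port, which avoids
// colliding with (or being squatted by) other local apps.
server.listen(0, '127.0.0.1', () => {
  const { port } = server.address() as AddressInfo;
  console.log(`listening on http://127.0.0.1:${port}`);
});
```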
hosts file (the hosts NSS database) does not speak DNS.
However, I guess this line of thought comes from web developers wanting a number of *.dev.myproject.net domain names to use in URLs, each handled by a different web listener process. Why not just run nginx as a reverse proxy for all of your *.dev.myproject.net domains? Update your port number ↔ domain name mapping in the nginx config; a reload is quite cheap.
As far as I know, the hosts file does not afford a port number, so a solution would look like a proxy on a known port such as 443: the hosts file routes a particular domain name to localhost, and the proxy routes it to the service on whatever port. You'd also need to set up a local CA to sign a cert for each of the hosts-file domain names...
I generally use fing in a pinch, or nmap, or ask the DHCP server for all the hostname <-> IP mappings and then nmap -A -T5 (that's threat level five, the system I use).
Is this thing on
I think it was Logstash, or Summit Patchy Project, that let you use IRC as a sink for redirecting logs or making a copy. So I made a PCI-compliant logging service: an immutable ircd (read-only fs) that logged to itself via eggdrop locally. All the little cattle VMs would have their logs sourced and synced via Logstash to an IRC channel for the vertical the VM "belonged to". All the IRC stuff was on an append-only mount on the ircd server, so for an audit you'd just snapshot that mount and hand it over.
NOC was in the channels. I think some teams set up dashboard widgets that triggered on irc events.
All that was a POC, done entirely to prove that it could be done using only stuff from pre-1996 RFCs. I'm sure the company I built it for went with some vendor product instead, and I don't blame them; I wouldn't have wanted to maintain that. Have you ever set up ircd? The volume of traffic just for the logs was already fun to deal with.
Pedants: I'm working from 15 year old memories and I changed a few details to protect the innocent. I can't remember the project name other than it was a play on "BI" for business intelligence.
Listening on 127.0.0.1:80 is local first and not being routable is in fact local-only. Its common alias ‘localhost’ even has ‘local’ in the name.
(Alternately you could bundle multiple files into a single ePub, though that requires a few adjustments compared to simple HTML. It's a widely supported format and allows for scripting.)
And combine that with the JS File APIs for working with data files.
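A sketch of the classic load/save round trip with the standard File API (the newer File System Access API is nicer but not available everywhere); the `#open` file input element is assumed to exist in the page:

```typescript
// Load: read a user-chosen file into memory.
// Assumes an <input type="file" id="open"> somewhere in the document.
const input = document.querySelector<HTMLInputElement>('#open')!;
input.addEventListener('change', async () => {
  const file = input.files?.[0];
  if (!file) return;
  const data = JSON.parse(await file.text());
  console.log('loaded document', data);
});

// Save: offer the current document back to the user as a download.
function save(doc: unknown) {
  const blob = new Blob([JSON.stringify(doc, null, 2)], { type: 'application/json' });
  const a = document.createElement('a');
  a.href = URL.createObjectURL(blob);
  a.download = 'document.json';
  a.click();
  URL.revokeObjectURL(a.href);
}
```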
And native JS alone is adequate for the tiny, local-first web. Clients do care about milliseconds vs. 5-second load times.
Ironically, if you use something browser-native like indexedDB plus JS instead, you are also likely using SQLite indirectly, as far as I know.
I also don't think you understand the complexity of the features that TinyBase is offering. It's possible you don't personally need these features, but critiquing the software for not being totally minimalistic is a bit silly.
The only annoying part was detecting when the browser was back online; I used a hacky solution since there was no native API for it in any browser at the time, but it worked.
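These days there is at least navigator.onLine plus the online/offline events, though they only report whether a network interface is up, not whether the connection actually works; a minimal sketch:

```typescript
// navigator.onLine and the online/offline events exist in all major browsers
// today, but they only say whether a network interface is up, not whether
// you can actually reach anything.
window.addEventListener('online', () => console.log('back online, start syncing'));
window.addEventListener('offline', () => console.log('offline, queue writes locally'));

if (!navigator.onLine) {
  console.log('starting in offline mode');
}
```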
Welcome to the world a huge chunk of the population lives in. It drives me crazy how quick people were to jump onto cloud computing without ever asking the question "how well does it work when my Internet connection sucks?"
Next they'll discover native applications! Innovation!
Maybe after that they'll even discover that you can give a shit about the user's battery/power consumption and ram consumption again!
The target platform for many local first apps is browser because you don’t have to mess with EXE/DMG/AppImages.
The goal is to ship, not to ship the most efficient application possible.
Like this little PHP script I had that spat out custom gradients before CSS had them; it's weird how fondly I remember that... it wasn't even special or complicated, but it was my grd.php and I used it everywhere :) And I'm sure it's still on some old pages I never got around to replacing! Once it works, it just works.
Stay away from frameworks and pre-processors, but also study them (!) and make your own and you'll be fine (that is, you will have a lot less pointless churn). If in doubt, don't put sensitive info on it and don't allow visitors to. There is so much you can do where security really literally doesn't matter, because it's just something cool to make and show people.