Why haven't local-first apps become popular?
490 points
2 days ago
| 120 comments
| marcobambini.substack.com
| HN
crazygringo
1 day ago
[-]
> The Solution: CRDTs. The right approach is CRDTs (Conflict-Free Replicated Data Types)... This means you can apply messages in any order, even multiple times, and every device will still converge to the same state.

This is very much "draw the rest of the owl".

Creating a CRDT model for your data that matches intuitive user expectations and obeys consistent business logic is... not for the faint of heart.

Also remember it turns your data model into a bunch of messages that then need to be constantly reconstructed into the actual current state of data. It's a gigantic enormous headache.

reply
Aurornis
1 day ago
[-]
Almost every time I see CRDTs mentioned it’s used as a magic device that makes conflicts disappear. The details, of course, are not mentioned.

Technically an algorithm that lets the last writer win is a CRDT because there is no conflict.

Making a true system that automatically merges data while respecting user intent and expectations can be an extremely hard problem for anything complex like text.

Another problem is that in some situations using a CRDT to make the conflict disappear isn’t even the right approach. If two users schedule a meeting for the same open meeting room, you can’t avoid issues with an algorithm. You need to let them know about the conflict so they can resolve the issue.

reply
josephg
1 day ago
[-]
> Technically an algorithm that lets the last writer win is a CRDT because there is no conflict.

Yes. It’s called a LWW register in the literature. Usually a MV (multi value) register is more useful. When there is a conflict, MV registers store all conflicting values and make any subsequent reader figure out what to do.

But for my money, usually what you want is something operation based. Like, if we both append to a transaction list, the list should end up with both items. Operation based CRDTs can handle any semantics you choose - so you can mix and match different merging approaches based on the application. And the storage for LWW and MV is the same, so you can start simple and grow your app as needed.

IMO the reason local first software is still not popular is the same reason encrypted messaging apps took awhile. There’s a lag from good CS research enabling a class of applications and good implementations being available with good UX. Most CRDT implementations are still pretty new. Automerge has been really slow until quite recently. Most libraries don’t support ephemeral data or binary blobs well. And there aren’t a lot of well defined patterns around user login and authentication. Local first apps today have a lot of work ahead of them to just exist at all. Give it some time.

reply
LanceH
1 day ago
[-]
The details appear to be not giving a damn about changes to data. I personally wouldn't describe that as "converging". As it says, this is "Last Write Wins", which is great if the last writer is always correct.

"If it’s older → ignore" -- yea, I guess that's a solution but I would really have to look for the problem.

I've gone down this road and considered github (or similar) as the backing database. In the end, I just don't have an answer that isn't specific to nearly every field that might be entered. A notes field might be appended. First in might be important, or last in (if it can be trusted as correct). Usually it's something like, "which of these two did you meant to capture?"

reply
mcv
1 day ago
[-]
Funny thing is that the article gives an example where "last write wins" is quite clearly a bad solution.

Balance = 100 A: balance = 120 B: balance = 80

Clearly, these are transactions, and it doesn't matter in which order they're applied, but it does matter that they're both executed. End balance should be 100, not 80 or 120.

reply
LanceH
1 day ago
[-]
I was thinking about this overnight and maybe my beef is the article feels like it's written as a solution to "offline", when really it's a much narrower solution.

This solution doesn't nearly move us toward making local-first apps more popular, which was nominally the theme.

reply
rogerrogerr
1 day ago
[-]
I’ve never really thought about this - how does Outlook handle this? Has anyone received a “sorry, that room you reserved actually wasn’t available; talk to this other dude who reserved it too” message after reserving a meeting room?

Or does it just double book the room? Or is there a global lock on any transaction affecting a meeting room, so only one goes through? (Feels like it doesn’t scale)

reply
appreciatorBus
1 day ago
[-]
In Google Workspace, rooms are resources with calendars that can be configured to auto accept any invitation unless they’ve already booked. So it’s basically first come first serve. Even if two people are literally trying to book the room, at the same time, simultaneously, one request will go through first and will be accepted and the second will be declined. I imagine outlook is similar.
reply
yccs27
1 day ago
[-]
In other words, Google sacrifices availability/latency here - they don't accept the request until they can be sure it's still available.
reply
coldtea
1 day ago
[-]
They can accept the request (accept as in receive for processing).

They just can't send the acknowledgement of "succesfully booked" yet.

reply
account42
1 day ago
[-]
> Or is there a global lock on any transaction affecting a meeting room, so only one goes through? (Feels like it doesn’t scale)

Why wouldn't it scale? How many meetings are booked per second in your organization???

reply
throwaway4226
1 day ago
[-]
I think the potential for abuse is high. With a locking system, someone could (and probably would) click (manually or with a script) on a time slot to "reserve" a room just in case they needed it.
reply
jon-wood
1 day ago
[-]
These are physical meeting rooms within a company. The resolution to this sort of abuse doesn't need to be automated, first it's a person in the facilities team having a quiet chat with the person doing that and asking them not to, eventually it gets escalated through various managers until it's a very final chat with HR before being asked to leave the building and not come back.
reply
coldtea
1 day ago
[-]
And if they did it a lot, they're scolded or fired.

That's not a real problem - at least not in the "book a corporate meeting room" space.

reply
kirici
1 day ago
[-]
Clearly you fell for the premature measuring fallacy, everyone knows to optimize for web-scale first.
reply
bmm6o
1 day ago
[-]
Exchange server accepts or rejects meeting requests. There's no offline room reservation so it's pretty simple.
reply
bootsmann
1 day ago
[-]
Presumably exchange server is not a single node?
reply
immibis
1 day ago
[-]
Then it does whatever is needed to make it safe. For example, it might use a hash ring to assign each meeting room to a single node, and that node processes one request at a time. Most distributed systems are like this.

A traditional database funnels all your data changes down to one leader node which then accepts or rejects them, and (if A+C in the case of single node failure is desired) makes sure the data is replicated to follower nodes before accepting.

A distributed database is similar but different pieces of data can be on different leaders and different follower sets.

This comment was rate-limited.

reply
Aurornis
1 day ago
[-]
> I’ve never really thought about this - how does Outlook handle this?

Simple: It’s server based. These problems are trivial when the server is coordinating responses and the clients can only reserve a room if the server confirms it.

This is the problem that gets hand waved away with local first software that has multi user components: It doesn’t take long before two users do something locally that conflicts. Then what do you do? You have to either force a conflict resolution and get the users to resolve it, or you start doing things like discarding one of the changes so the other wins.

reply
coldtea
1 day ago
[-]
It doesn't scale universely, but it doesn't need to: it only needs to cover a specific company/organization/department. So it's trivial to work at that scale.

Hell, it's so feasible, it can even done manually IRL by some person (like discussions where a person holds the "talking stick" and only there are allowed to speak until they pass it to another person - that's a lock).

reply
tylerchilds
1 day ago
[-]
You can resolve it with an algorithm, like so

- prefer seniority - prefer pay scale - prefer alphabetical - roll dice

That’s how a business would probably do it since the first two alone align with how the business already values their Human Resources, which would translate to “the objects that the Human Resources compete for”

reply
arccy
1 day ago
[-]
and the interns get the blame for what they can't book why rooms, but for the people managing them it's just so easy.
reply
tylerchilds
1 day ago
[-]
Incorrect.

In a well designed system, the intern will be delegated with “room booking authority” on behalf of their most senior manager on the calendar invite.

Using something like this, that would be in the CRDT resolution algorithm.

https://w3c-ccg.github.io/zcap-spec/

Company culture will recognize it is an HR problem.

reply
motorest
1 day ago
[-]
> Technically an algorithm that lets the last writer win is a CRDT because there is no conflict.

Your comment shows some ignorance and a complete misunderstanding of the problem domain.

The whole point of CRDTs is that the set o operations supported is designed to ensure that conflict handling is consistent and deterministic across nodes,and the state of all nodes involved automatically converge to the same state.

Last-write-wins strategies offer no such guarantees. Your state diverges uncontrollably and your system will become inconsistent at the first write.

> Making a true system that automatically merges data while respecting user intent and expectations can be an extremely hard problem for anything complex like text.

Again, this shows a complete misunderstanding of the problem domain. CRDTs ensure state converges across nodes, but they absolutely do not reflect "user intent". They just handle merges consistently. User intent is reflected by users applying their changes, which the system then propagates consistently across nodes.

The whole point of CRDTs is state convergence and consistency.

reply
zovirl
1 day ago
[-]
I think the parent was complaining about mentions of CRDTs which don’t acknowledge that the problem domain CRDTs work in is very low level, and don’t mention how much additional effort is needed to make merging work in a way that’s useful for users.

This article is a perfect example: it says syncing is a challenge for local-first apps, logical clocks and CRDTs are the solution, and then just ends. It ignores the elephant in the room: CRDTs get you consistency, but consistency isn’t enough.

Take a local-first text editor, for example: a CRDT ensures all nodes eventually converge on the same text, but doesn’t guarantee the meaning or structure is preserved. Maybe the text was valid English, or JSON, or alphabetized, but after the merge, it may not be.

My suspicion, and I might be going out on a limb here, is that articles don’t talk about this because there is no good solution to merging for offline or local-first apps. My reasoning is that if there was a good solution, git would adopt it. The fact that git stills makes me resolve merge conflicts manually makes me think no one has found a better way.

reply
account42
1 day ago
[-]
There is definitely no general solution but for some domains there may be acceptable solutions.

Git is a good example though as we can definitely write merge algorithms that get good results in many more cases than git's default but with something like code it's preferable to let the human user decide what is the correct merge solution except trivial cases. Still, a language aware merge algorithm could do a lot better than git in both automatically merging more cases and refusing to merge nonsensical combinations of commits that don't touch the same lines.

reply
preommr
1 day ago
[-]
Not the parent comment, but I'll respond.

> Your comment shows some ignorance and a complete misunderstanding of the problem domain.

oof, this is a very strong position to take, and one would assume you have a very convincing follow up to back it up.

And unfortunately I don't think you do. CRDTs can definitely be implemented as a last-write implementaiton. This should be obvious for state-based crdts. The problem is that it's a horrible UX because somebody could type a response to something that's out of date, and then just see it dissapear as they get the most recent message.

Resolving "user intent" by choosing how to structure the problem domain (e.g. storing ids for lines, and having custom merging solutions) so that it reflects what the user is trying to do is the main challenge.

I am quite frankly baffled at how arrongant your tone is given how far off the mark you seem to be. Genuinely makes me think that I am missing something given your confidence, but I don't see what point you're making tbh.

reply
coldtea
1 day ago
[-]
>Your comment shows some ignorance and a complete misunderstanding of the problem domain

Imagine how better your comment would be if you ommited the above line, which adds nothing to the correction you try to make, but comes off as stand-offish.

reply
teleforce
1 day ago
[-]
There is next gen web standards initiative namely BRAID that will make web to be more human and machine friendly with a synchronous web of state [1],[2],[3].

"Braid’s goal is to extend HTTP from a state transfer protocol to a state sync protocol, in order to do away with custom sync protocols and make state across the web more interoperable.

Braid puts the power of operational transforms and CRDTs on the web, improving network performance and enabling natively p2p, collaboratively-editable, local-first web applications." [4]

[1] A Synchronous Web of State:

https://braid.org/meeting-107

[2] Braid: Synchronization for HTTP (88 comments):

https://news.ycombinator.com/item?id=40480016

[3] Most RESTful APIs aren't really RESTful (564 comments):

https://news.ycombinator.com/item?id=44507076

[4] Braid HTTP:

https://jzhao.xyz/thoughts/Braid-HTTP

reply
jwr
1 day ago
[-]
The author also assumes that users are rational, make no mistakes, and there exists a logical non-conflicting ordering of their updates that makes sense. This is naive, speaking from a perspective of someone who has spent the last 10 years supporting a SaaS mostly for engineers.
reply
perlgeek
1 day ago
[-]
Also, when you get a new requirement that needs a modification of the data model, you have to both remodel your CRDTs and make sure you have a migration strategy.

After doing this a few times, your stakeholders are probably fed up with the crawling pace of development, and the whole local-first app is scrapped again and replaced by a more traditional app. Maybe the architect is fired.

reply
kobieps
1 day ago
[-]
Agreed @ not for the faint of heart.

There is at least one alternative "CRDT-free" approach for the less brave among us: https://mattweidner.com/2025/05/21/text-without-crdts.html

reply
josephg
1 day ago
[-]
Matt Weindner is a really smart guy, but I really disagree with his reasoning with that one. I implemented his fuguemax crdt in just 250 lines of code or so. It’s small, simple and fast. In that blog post he proposes a different approach that might let you save 50 lines of code at the expense of always needing a centralised server.

Seems like a terrible trade to me. Just use a crdt. They’re good.

https://github.com/josephg/crdt-from-scratch

reply
quotemstr
1 day ago
[-]
> Difference from CRDTs

The author has made a CRDT. He denies that his algorithm constitutes a CRDT. It's a straightforward merge, not a "fancy algorithm".

What specific aspect of a CRDT does this solution not satisfy? The C? The R? The D? The T?

reply
justinpombrio
1 day ago
[-]
I was going to say that that's not a CRDT because it requires a centralized server (the conflict resolution is "order in which the server received the messages", and clients aren't allowed to share updates with each other, they can only get updates from the server). But now I'm looking at definitions of CRDTs and it's not clear to me whether this is supposed to count or not.

Still, every algorithm that's actually labeled a CRDT shares a magical property: if my replica has some changes, and your replica has some changes, our replicas can share their changes with each other and each converge closer to the final state of the document, even if other people have been editing at the same time, and different subsets of their changes have been shared with you or I. That is, you can apply peoples' changes in any order and still get the same result. I don't think it's useful to call anything without that property a CRDT.

reply
immibis
1 day ago
[-]
The C in CRDT means the order doesn't matter, which means you can just put all the gossiped changes into a big bag of changes and if everyone knows the same changes, they have the same final document, so a simple gossip protocol that just shares unshared data blobs will eventually synchronize the document. If order matters, it's not a CRDT. This one isn't a CRDT because the order matters if two clients insert text at the same position.
reply
yen223
1 day ago
[-]
There's the additional headaches of a) managing auth in a distributed manner, and b) figuring out how to evolve the data model across all participating clients.

CRDTs are a complicated way to solve a problem that most services don't really have, which is when you want to sync data but don't want to have any one client be the ultimate source of truth.

reply
BatteryMountain
1 day ago
[-]
Yeah I must say, this one (crdt) has been an impossible one to solve for me in practical terms. Every single time we end up with something like this: dataflow has 3 modes/directions: downstream only, upstream only and bi-directional. The majority of systems ends up needing bi-directionality (unless dealing with sensors/iot data) at some point, which means you are forced to deal with conflicts. Which means you end up having to compromise and the simplest compromise for 95% of applications is to say "last one wins", which works near perfect in the real world and it is simpler to maintain and debug. The remaining 5% has a hard constraint where you can either go down an academic rabbit hole with crdt's and come out the other end with some grey hairs, or, you still use your normal data flows but have multistep commits (not in a database sense, but a workflow/saga sense), so you have some supervising object that makes sure both sides agree after x amount of time or revert (think about banks, semi-realtime & distributed, but forced to be consistent).

And for the younger devs: please consider if you need these kinds of sync systems at all (distributed & offline modes), sometimes a simple store-forward of events and/or cache is all you need. If you have some leverage, try to advocate that your application has a hard requirement on an internet connection as often it is cheaper to install fibre redundancies than dealing with the side effects of corrupted or lost data. Might save you from early grey hairs.

ps: the above is written with business/LoB applications in mind, with some offline mobile/desktop apps in the mix, not things like control systems for factories or real time medical equipment.

reply
motorest
1 day ago
[-]
> Also remember it turns your data model into (...)

I don't this is true at all. A CRDT is not your data model. It is the data structure you use to track and update state. Your data model is a realization of the CRDT at a specific point in time. This means a CRDT instance is owned by a repository/service dedicated to syncing your state, and whenever you want to access anything you query that repository/service to output your data model.

Sometimes problems are hard. Sometimes you create your own problems.

reply
crazygringo
4 hours ago
[-]
A CRDT requires its own data model that becomes, in a sense, the "main" data model. Because its design and constraints effectively become the design and constraints and requirements on the downstream data snapshot.

CRDT's generally require you to track a lot more stuff in your snapshot model than you would otherwise, in order to create messages that contain enough information to be applied and resolved.

E.g. what was previously an ordered list of items without ID's may now need to become a chain of items each with their own ID, so you can record an insert between two other items rather then just update the list object directly.

So yes, the CRDT is effectively your data model.

reply
cyberax
1 day ago
[-]
We have a local-first app. Our approach?

Just ignore the conflicts. The last change wins.

No, really. In practice for most cases the conflicts are either trivial, or impossible. Trivial conflicts like two people modifying the same note are trivial for users, once you have a simple audit log.

And impossible conflicts are impossible to solve automatically anyway and require business processes around them. Example: two people starting to work on the same task in an offline-enabled task tracker.

reply
moggers123
1 day ago
[-]
>Example: two people starting to work on the same task in an offline-enabled task tracker. Wouldn't this just mean both people are working on it?

I agree that this means humans intervening.. It sounds like there was a comms breakdown. But rather than doing a first-in-best-dressed, it sounds like accurately recording that both users are in fact working on the same thing is the best option since it surfaces that intervention is required (or maybe its intentional, tools insisting that only one person can work on an item at once annoys me). Sounds much better than quietly blowing away one of the user's changes.

In principle, local-first to me means each instance (and the actions each user carries out on their instance) is sacrosanct. Server's job is to collate it, not decide what the Truth is (by first-in-best-dressed or otherwise).

reply
cyberax
1 day ago
[-]
Sure. But then you need to notify users when they come back online that there's a conflict, so they can resolve what to do. You likely need to have a report on the frequency of such occasions for the managers, and so on.

These kinds of conflicts simply can not be solved by CRDTs or any other automated process. The application has to be designed around that.

> In principle, local-first to me means each instance (and the actions each user carries out on their instance) is sacrosanct. Server's job is to collate it, not decide what the Truth is (by first-in-best-dressed or otherwise).

This makes sense only for some applications, though.

And we have not yet started talking about permissions, access control, and other nice fun things.

reply
moggers123
12 hours ago
[-]
I doubt you'll ever see this.. Oh well..

I probably should have been explicit in that I'm not arguing in favor of CRDTs, just that the adverse doesn't need to be "send it and accept the collateral".

Draw The Rest Of The Owl energy here, but at least its a nice northern star.

reply
evelant
1 day ago
[-]
I’ve been experimenting with this, it’s a very interesting problem space!

https://github.com/evelant/synchrotron

Idea is to sync business logic calls instead of state. Let business logic resolve all conflicts client side. Logical clocks give consistent ordering. RLS gives permissions and access control. No dedicated conflict resolution logic necessary but still guarantees semantic consistency and maximally preserves user intentions. That’s the idea at least, requires more thought and hacking.

reply
giancarlostoro
1 day ago
[-]
Wouldn't it be less of an issue if you track the change history, and let users pick a specific historical version? Then it doesn't matter who wins, the end-user can go in and change it. Version control is one of the best parts about Google Docs.
reply
kobieps
1 day ago
[-]
Who is the audience of your app? Is it an internal app for a company, or is it a public facing consumer app?
reply
cyberax
1 day ago
[-]
Public app used by professionals in the field, often with poor or no connectivity. Even having a local read-only copy of data is often helpful for them.
reply
kobieps
1 day ago
[-]
Cool. Yeah in my experience last-write-wins is sufficient for 95% of use cases, and if you add audit trails to help resolve any disputes it gets you to 98%
reply
sakesun
1 day ago
[-]
Just have audit log. No need to try solving every trivial cases. Make something useful.
reply
kazinator
1 day ago
[-]
One solution is to make it so that people see their literal keystrokes in real time. Then they solve the conflict themselves. Like, "stop typing into this text because bob is typing into it".

It's like Ethernet conflict resolution: just access the shared medium and detect collisions in real time.

reply
avemg
1 day ago
[-]
How will you know that Bob is typing into it if you're offline?
reply
kazinator
1 day ago
[-]
That's a fair question; we here being under a submission aout local-first apps, and al.

Of course, you know the answer: if you're offline, you're not online. Bob gets to type whatever Bob wants, and until you go online, you don't get to overtype anything.

reply
ongy
1 day ago
[-]
But the offline enabled property allows exactly that.

Both sides type offline and only sync later. Neither would like their change to just be discarded.

reply
kazinator
1 day ago
[-]
I was responding only to the idea of having no conflict resolution: last edit wins (proposedin a great grandparent comment):

https://news.ycombinator.com/item?id=45341335 "We have a local-first app. Our approach? Just ignore the conflicts. The last change wins."

if you can see the edits being made in real time, keystroke by keystroke, that pretty much solves that problem.

As for offline editing, either don't support it (then you're not local-anything obviously) or you can have some lame workflow like "the document was changed by another user ..."

reply
cyberax
1 day ago
[-]
It's fine if you're talking about a text editor or an Excel table. And it's one of the few cases where CRDTs make sense.

If you have a CRM-like application with a list of users? Not so much.

reply
__MatrixMan__
1 day ago
[-]
It sort of depends on the owl though, right? If your CRDT is nothing more than a set of tuples updated on the basis of: these are what my peers have... Is there an abyss of complexity that I'm overlooking here or are simple CRDTs in fact quite simple.
reply
dusted
1 day ago
[-]
They used to be really popular, back in the ancient times when I was young and full of excitement for all things compute, almost all software was local-first, and.. only :)

But since the entire world economy has turned to purely optimizing for control and profit, there's just no good reason to not screw people over as much and as often as possible, what'll they do ? Switch to someone who won't ? And who would that be ?

reply
Aurornis
1 day ago
[-]
> But since the entire world economy has turned to purely optimizing for control and profit, there's just no good reason to not screw people over as much and as often as possible, what'll they do ?

I worked on a somewhat well-known (at the time) product that used on-site hosting.

One of our biggest customer complaints was that we didn’t have a cloud hosted option. Many companies didn’t want to self-host. They’d happily pay a premium to not have to host it.

I think HN underestimates the demand for cloud hosted solutions. Companies don’t want to deal with self-hosted if they can pay someone else a monthly fee to do it.

reply
seec
7 hours ago
[-]
Yes because its a responsability and generally those are costly. If they don't have real upsides you don't want them. So if you can get the same software utility with none of the management responsability you are very much willing to pay a bit more.
reply
account42
1 day ago
[-]
That's a different situation than with offline first software though. With on-prem hosted solutions you'll have someone whose job it is to maintain that hosting and of course they'll want to push that work off to some service provider.
reply
rkomorn
1 day ago
[-]
I can't imagine wanting to self-host something like Jira, GitHub, or some wiki product unless there's a very big financial cost difference that more than offsets my time and hardware investment.

Otherwise it seems like I'm just spending time and effort achieving the exact same result.

reply
nightfly
1 day ago
[-]
I work in an org with 8ish FTEs, a handful of student workers, and like 200 volunteers. Almost every service wants $5 or more per user per month, that's $1,140 per month per service. We selfhost open source solutions for everything we can and sometimes have to write something in-house to meet our needs.
reply
rkomorn
1 day ago
[-]
That sounds an awful lot like, to you, that is the "very big financial cost difference" I mentioned.
reply
account42
1 day ago
[-]
That's short-term thinking. By making your business dependent on cloud solutions you are agreeing to future disruptions from forced changes and price increases that you can't foresee and won't be able to do much about when you learn about them.
reply
rkomorn
1 day ago
[-]
That factors into financial incentives, doesn't it?

There are also products that still have licensing costs even when you self host.

I've worked at a large company that self-hosted Atlassian products that were a big part of a full-time team's job.

I've worked at a large company that built virtually all their internal tooling in house.

I've worked at a large company that used cloud-based vendors.

They all had tradeoffs.

One of those companies even forced a migration from cloud based CI to internal CI for cost reasons then stopped us halfway through because they couldn't scale up our hosted CI to keep up fast enough.

I could argue your answer is just as short-term thinking when your critical tools end up costing you more hardware, data center, and salary opex than you planned for (which I have seen).

reply
raxxorraxor
1 day ago
[-]
I think Gitea is superior to Github to organize your repos. I deploy it in the corp I work for too and everyone is very happy with it. It is blazingly fast running on a small virtual machine.

Granted, this is a business that needs on-premise infrastructure anyway because we have inhouse production. So we have a domain controller that can be used for everything auth. We use a combination of that and Entra (shitty name).

I wouldn't want to host Jira because I don't like to work with it. Our wiki is self-hosted as well.

Sadly, we also buy into MS infrastructure and use SharePoint and co. It is soo incredibly slow to work with...

While you can be cloud only, it isn't an environment I would like to work in. Local alternatives are almost maintenance free these days and the costs are so much less for a faster service.

reply
rkomorn
1 day ago
[-]
For me it's just a question of where I would want to invest my org's time.

For example: how much time do I want us to spend looking after a task/ticketing system? About zero. How much time do I want my org to invest in DR planning for our self-hosted wiki? About zero.

So unless there's a convincing difference (cost, features, etc), cloud-hosted SaaS would be my choice there.

The answers also probably change a lot based on how the company operates. Would I have been comfortable with a self-hosted wiki when I worked at Facebook and we had a pretty damn good team providing DBs as a service? Yes. Would I have wanted to do the same when I was the sole infra person in my group at another job? No.

reply
raxxorraxor
12 hours ago
[-]
I think some time investment is very sensible for any form of decision about infrastructure. Today we have companies complaining about their software dependence, software license costs have heavily increased.

Also, an experienced admin can setup a repository server in a day. This is probably even less time investment than getting offers for solutions. In my opinion the maintenance amount isn't less with SaaS at all as most maintenance work is integrating data.

We do have a self-hosted wiki. We don't even need to think about it if we want to integrate document search or analysis. We own our data completely here and I would argue that data is quite an important chunk of wealth for a company. Some SaaS companies know that as well and they basically take your data hostage. And as a plus, as many things are on a local network, any access to that data is pretty much instant.

To save time on infrastructure decision overall is a mistake in my opinion. You wouldn't do that for your ERP or CRM solutions either.

reply
palata
1 day ago
[-]
Have you ever checked the cost of GitHub runners? Quickly offsets self-hosting ones.
reply
rkomorn
1 day ago
[-]
Runners? Yes. There are definitely some compute/resource heavy things that are more efficient to self host. Until you run out of capacity and getting more capacity involves something like having go buy more hardware and data center space (which I've had to do, though not for CI reasons specifically).

GPU-heavy tasks were also heavily in favor of buying hardware and self-hosting at the time I was making purchasing decisions at my job.

Not everything falls in that bucket and the examples in my comment don't (GitHub isn't just runners).

Edit: I'll also add a question: what part of "unless there's a very big financial cost difference that more than offsets my time and hardware investment" did you think would not cover "have you checked the cost of GitHub runners?"

reply
palata
21 hours ago
[-]
I didn't say you were wrong, I was just mentioning that I was absolutely amazed when I discovered how much my startup was spending in GitHub runners.
reply
rkomorn
14 hours ago
[-]
For sure. Everything metered on time scales up shockingly quickly (cost-wise).

In general, I do prefer fixed costs, so self-hosting resource-intensive things makes sense.

reply
friendzis
1 day ago
[-]
Good luck finding vendor that supports isolation of tenants with sensitive data.
reply
rkomorn
1 day ago
[-]
I'm not going to spend time trying to fix problems I don't have.

Obviously if your constraints are different, do what works for you.

reply
jonahx
1 day ago
[-]
> there's just no good reason to not screw people over as much and as often as possible, what'll they do ? Switch to someone who won't ? And who would that be ?

That argument flies in the face of basic economics.

The problem is that the people don't seem to care about being screwed. If they did, it would very profitable to provide the non-screwing, and lots of people would.

The optimist in me believes this is a just a problem of education, understanding, and risk-assessment (ie, the screwing is hidden to them). But even if we grant this, that turns out be a very hard problem to solve.

reply
dusted
8 hours ago
[-]
> That argument flies in the face of basic economics.

The first think any economist will tell you about the foundational rationale of basic economics is it's inability to explain the discrepancy between behavior as predicted by basic economics and behavior as carried out by human beings in the actual real world.

The hard truth that I, a human, do not behave rationally and in my own best interest at all time, or at least in those situations in which the risk or benefit to myself is the greatest, is a hard one to swallow, and many people are innately unable to even consider swallowing that, and as a result, to maintain their own illusion of rationality, they must take the stance that humans are as a rule, rational and therefore will act rationally and in their own best interest, if nothing else, as a preemptive defense against the question "if everyone around you is irrational, then what's the chance that you're the only rational one?"

reply
sheepybloke
1 day ago
[-]
> Switch to someone who won't? And who would that be?

The issue is that it's not as simple as just "switching" and giving another company your money. How would you migrate your 5-10 years of Confluence pages and Jira tickets if you wanted to switch from Atlassian? You're going to put all of your members through the hassle of switching a booking service/payment process? You know you're being screwed, but the cost to switch is often more than the increased cost. The modern economy is balancing cost increases to your customers with the cost to switch to a competitor.

reply
Eisenstein
1 day ago
[-]
I think you say 'basic economics' but actually mean 'ideal free market'. Economics is the science of economies, which do not have to be free or market based.

The problem with people's basic understanding of free markets is that it is heavily simplified. We are looking at it from the perspective of 'what if human nature wasn't ever a problem, everyone was always perfectly rational, and everyone had all of the information they needed to make a rational decision'. Those criteria do not and have never existed, which is why the free market fails sometimes at doing some basic things.

Don't get me wrong, it is a great idea and it solves all sorts of problems and we should keep using it -- but don't come to the conclusion that because it all works out in the theory part, then if something in the real world is a certain way then we have to accept that it is efficient and inevitable.

reply
jonahx
1 day ago
[-]
You've made a general argument that shows not all theoretical economic theories about free markets can be trusted. Fair enough. But my claim is much narrower.

It merely relies on the love of money of real people in our current economy, and the premise that there is enough information flow that, if people cared, they would find and pay for products that don't screw their privacy, control, etc. I think both those premises are undeniably true.

reply
ksynwa
1 day ago
[-]
> and the premise that there is enough information flow that, if people cared, they would

That's a terrible premise. Why are you assuming that this flow exists and that billions of people are failing individually rather than the simpler one this flow not existing?

reply
jonahx
1 day ago
[-]
One reason: Whenever I've made the case personally to friends/family, people who are smart but not interested in messing around with tech, I am usually met with a giant shoulder shrug. Or perhaps, in a best case scenario, "Yeah that doesn't sound great, but there's no way I'm installing X to deal with it".

We can always say the case hasn't been made well enough, and maybe it hasn't it, but at what point do you just start believing people?

reply
Eisenstein
1 day ago
[-]
So you are saying that because people are being screwed in the current market that is undeniable proof that people are OK with being screwed by the market?

If I misunderstand you please correct me.

If that is what you contend, then you have no addressed whether or not the market allows them to do otherwise. You take for granted that the market is free and that people are willingly choosing this outcome with perfect knowledge and without any incentives or forces which would compel them to act despite not wanting to be screwed. For instance, if you do not have bargaining power with a company over your contract, you have no choice but to accept it. Can you enter into negotiations before agreeing to a EULA?

There are forces that are not free market forces which constrain actions, there is human tendency to prioritize immediate needs over long term benefits, etc which all play a role.

The fact that:

1. The market exists 2. We have a conception of how the market should work in an ideal 3. People act in a certain way

do not all combine to make it true that 'people prefer it this way'.

That's the point I am making in counter to your assertion.

reply
jonahx
1 day ago
[-]
You are making all these theoretical points... do you doubt that people are demonstrably lazy and willing to give up their privacy and control for free or cheap or convenient stuff? I don't see how this is even a contentious point to make.

You're bringing up all these theoretical counterpoints that either obviously don't apply, or only apply very partially. There are many local only, FOSS options available for every type of software, right now, many free, for people that care about privacy and control in their computing. They generally have tiny market share. If they were made even more convenient, I don't believe the impact would be substantial. By that I mean stuff like brand recognition, what friends are using, and so on would still be more important decision factors than privacy/control features.

This is a people problem, not a "free market not behaving like its theoretical model" problem. Either people don't fully understand the meaning and importance of their privacy, or their revealed preference of not caring that much is just... sadly... true.

reply
account42
1 day ago
[-]
The root of the cause is that we allow companies to run mass psychological manipulation campaigns to get people to act against their bests interests. It's not a people problem, it's a corporate propaganda problem.
reply
Eisenstein
1 day ago
[-]
Right, I guess am focusing more on your using the 'economics' as proof that it is a people problem, because I see it used that way all the time without regard to structures and human elements. Basically people use 'but the market is obviously working so it must just be the way it is'. They see it as the result of a theoretical market structure in which people make purely rational decisions in their best interest on an even playing field with everyone else, instead of a situation which is affected by laws (or lack of them), specific cultural events, and psychology among many other things.

Sorry if I was talking past you instead of with you, but I have to say that I don't think it is fair to call my responses 'theoretical counterpoints'. What I am doing is pointing out that on the face of your claim, you are engaging in what could reasonably be called 'begging the question'. That is, assuming the conclusion in your own argument without actually showing that that conclusion has a logical basis. Saying 'the current situation exists, and because people have mechanisms by which they can affect that situation that means they are in complete control of the situation' is not logically valid unless you can account for those things which they do not have mechanisms to control being irrelevant to the outcome.

reply
Arch-TK
1 day ago
[-]
Either people are broadly ok with being screwed (my personal experience suggests this) or there is a grand conspiracy to prevent anyone who is not screwing their customers from competing in the market.

Maybe it is the latter, who knows. But what I do know is that the non-screwing options exist, but are often less popular and more expensive (either in price, time, or effort).

And this annoys me to no end. Because _I_ don't want to be screwed. But whether I get screwed or not increasingly depends on how much those around me are willing to get screwed or not.

reply
account42
1 day ago
[-]
It's not that people are OK with being screwed over but rather that they have been conditioned into being helpless about it. Big corporations hire psychological experts that know exactly how to manipulate you into thinking you need their products or otherwise act against your own best interests, whether that's through advertisement, peer pressure or whatever else they can come up with.

You yourself admit that while you don't want to be screwed you only have the option of not being screwed if those around you also choose not being screwed yet somehow you conclusion that others are different and must be OK with being screwed. Presumably you also often choose being screwed over being socially ostracized? Do you really make sure that all those around you have options to still interact without without being screwed?

Yes people often technically have options of not getting screwed but those options almost exist in a different world and in order to choose them you have to abandon the one you are living in now. That people cannot afford to do that does not mean that they are OK with being screwed.

reply
jonahx
1 day ago
[-]
I am partially with you on this one.

"Yes people often technically have options of not getting screwed but those options almost exist in a different world and in order to choose them you have to abandon the one you are living in now."

But the question that remains is this: If the true situation is people who'd desperately like not to be screwed, and would pay the same or more for this privilege, but are made helpless by corporate propaganda and market dominance, why do we not see new players rushing in to fill this need? They could take massive amounts of market share.

There are only two explanations I can see:

1. Monopoly forces or similar at work.

2. This is not the actual situation.

Regarding 1, you can make the argument for a network effect/coldstart problem. That seems possible to me as an alternative explanation, and as a way out. Still, in my personal experience, 90% of people just don't care that much, and so are vulnerable to essentially being "bribed" by short-term corporate incentives. The free/privacy-respecting alternatives would have to match this force, and also match the marketing.

reply
Arch-TK
19 hours ago
[-]
> Presumably you also often choose being screwed over being socially ostracised?

No, I only choose being screwed over being homeless, or jobless.

Which didn't use to be a particularly likely scenario but the tides are turning.

I don't care about being socially ostracised for refusing to ever use WhatsApp for example.

We teach children not to cave to peer pressure as if it was a choice they could make, and now you're claiming that caving to peer pressure is not something people choose.

reply
pnathan
1 day ago
[-]
I personally have needed to sync something between 3 and 6 devices over the past 5 years on a daily/weekly basis.

I invite you to figure out how to algorithmically figure this out in a general case without needing Git Merge levels of complexity for data structures far more complicated than lines of code. Peer to peer device merging of arbitrary files for n > 2 conflicts.

The answer - the algorithmically simple and correct answer - is to have a central hoster and the devices operate as clients. And if you have a central hoster, you now gave yourself a job as sysadmin with requisite backup/failover/restore with data management responsibility handling the decade+ case. Or I could hire a central cloud company to manage my tiny slice of sysadmin needs and _not_ have a part time job at home just dealing with file management and multifile merges.

All models that are "I have a laptop and a backup hard disk" are _broken_.

reply
qzx_pierri
1 day ago
[-]
> what'll they do ? Switch to someone who won't ? And who would that be ?

FOSS

reply
throw10920
16 hours ago
[-]
> But since the entire world economy has turned to purely optimizing for control and profit

Citation needed.

reply
xorvoid
2 days ago
[-]
I believe the lack of popularity is more of an economics problem. There are established business models for SaaS apps or freemium with ads. But, the business model for local-first apps is not as lucrative. Those who like the local-first model value features like: data-sovereignty, end-to-end encryption, offline usage, etc. These properties make existing business models hard-to-impossible to apply.

My current thinking is that the only way we get substantial local-first software is if it's built by a passionate open-source community.

reply
godshatter
2 days ago
[-]
It's crazy that we live in a time when "pay with money and your data" or "pay with your eyeballs" are the only viable options and "pay with your money without your data" can't even be considered.
reply
account42
1 day ago
[-]
That's because we allow corporations to into misleading that products are free even though you are still paying for them by letting yourself be manipulated into giving third parties your money who then give some of it back to the original service provider. This really needs to be considered a kind of fraud unless the full cost is displayed upfront, not unlike when a vendor charges your card more than the agreed price.
reply
chaostheory
2 days ago
[-]
Someone has to prove that there’s a demand for paid local first subscriptions. Open source and tailscale can’t shoulder it all if you want more adoption.
reply
nenenejej
1 day ago
[-]
Or split into 2 things:

Most people have a Dropbox, Apple Storage, Google Storage or similar.

A lot of people used to happily pay for desktop software.

It is sort of a combo of those 2 things economically.

Dropbox could sweep up here by being the provider of choice for offline apps. Defining the open protocol and supporting it. adding notifications and some compute.

You then use Dropbox free for 1, 5, 10 offline apps (some may need free some paid) and soon you'll need to upgrade Storage like any iPhone user!

reply
ghaff
1 day ago
[-]
>A lot of people used to happily pay for desktop software.

More or less no one used to "happily" pay. Absent pirating software, they did pay often hundreds of dollars for all sort of software sight unseen (though shareware did provide try before you bought) which often came with minimal updates/upgrades unless they paid for such.

But expectations have largely changed.

reply
nenenejej
1 day ago
[-]
Happy as you can be when paying for something!
reply
account42
1 day ago
[-]
Making people dependent on a cloud subscription isn't exactly in the spirit of spreading offline programs...
reply
nenenejej
21 hours ago
[-]
Dependent isn't the intent here. Obviously big corps are incentivised to do that so there is that danger. But ideally it is all based on open standards.
reply
data-ottawa
1 day ago
[-]
iCloud kind of does this and is the suggested way to store app data files.

It’s not immune to file conflicts across your devices though.

reply
chaostheory
1 day ago
[-]
Given the anemic sales of macOS apps vs online subscriptions, I would disagree. I’m sure it’s the same story on windows. Offline only makes sense when your public infrastructure is garbage. Otherwise most people will choose convenience over control.
reply
scarface_74
1 day ago
[-]
Adobe and Microsoft might disagree…
reply
chaostheory
1 day ago
[-]
MS favors Office 365 online.

Adobe is keen to emphasize that their products are cloud based

reply
mulmen
1 day ago
[-]
The problem with local apps is actually a problem with closed-source software. I refuse to rely on closed-source software to access my data because then I am beholden to the vendor of that software to access my data. It’s only slightly better than putting my data in the cloud. What I really want is the source code to that local app so I can guarantee the ability to continue accessing my data forever. This can be done with open source software but very few companies want to sell their product as open source. Some version of source-available may help but you still have the problem of the company discontinuing support so you need some escape hatch on the license in that case and as far as I know nobody has tried.
reply
walterlw
1 day ago
[-]
wouldn't it be enough for the underlying user data to be stored in a well-documented and widely supported format? I don't care if Obsidian, Logseq or similar are open or closed source if my data is just a folder of markdown and jpeg/pngs.
reply
account42
1 day ago
[-]
In simple cases maybe but in general how the format is actually interpreted matters more than what some spec says. Markdown is a great example because in practice almost every markdown renderer does things a bit differently.
reply
asherdavidson
1 day ago
[-]
Would you feel the same way about a closed source local-first app that used sqlite as the underlying database?

That would let you access your data forever, albeit you might still need to write your own scripts to port it to another app.

reply
account42
1 day ago
[-]
In theory, open formats would be enough. In practice you still end up depending on peculiarities of the software handling those formats more often than not so I agree that having access to the source code and permission to modify it when the original vendor's interests no longer align with yours is the only solution.
reply
RicoElectrico
1 day ago
[-]
We're living in the world of Dubai chocolate and labubu so this tells you everything you need to know about consumer behavior.
reply
Loughla
1 day ago
[-]
Fads and trends have always existed. Literally as long as we've had culture.

What point are you trying to make?

reply
wolvesechoes
1 day ago
[-]
But before it wasn't as easy to create them every few days.
reply
zelphirkalt
1 day ago
[-]
If I had to guess, I would say the GP wants to express that the mass of consumers acts in uninformed silly ways, and with such people local-first has a very low adoption rate, because they usually don't spend a thought about their digital foot/fingerprint or who really owns their data or how they do their personal computing and whether they are independent of anyone else in their personal computing. That there is this huge part of our society, that again and again creates incentives for enshittification.
reply
account42
1 day ago
[-]
Is it really a huge part of our society or is it just one that megacorporations amplify as loud as they can because that's how they want people to behave.
reply
zelphirkalt
1 day ago
[-]
That's a good question actually. I don't know for sure. I tend to think, that for many people things like the Internet are mysteriously working and they have no idea how it works, and as a consequence rarely they go further putting up requirements of how it should work for them. They just accept how things online are, status quo of that which is most visible. Ergo complete victims of the network effects in their social bubbles.
reply
account42
1 day ago
[-]
You mean influencer behavior. Tiktok/instagram/etc. personalities are very different from how most people behave in the real world. I don't know anyone who has bought into either of these products.
reply
immibis
1 day ago
[-]
Labubu obsession is a surefire sign of economic depression: https://www.youtube.com/watch?v=l1O6bN2zWSM

The really crazy thing is that everyone just forgot a couple of years ago "Dubai chocolate" meant something a lot more gross.

reply
account42
1 day ago
[-]
> The really crazy thing is that everyone just forgot a couple of years ago "Dubai chocolate" meant something a lot more gross.

It's called damage control and yes it's crazy that we blindly allow this kind of society-wide manipulation.

reply
roncesvalles
1 day ago
[-]
I'm not really sure about this because far too often I see ads for various services/courses/etc that I want to buy, but I don't end up buying because it's a subscription and I just don't have the bandwidth currently to spend time on the thing. I want to buy it and keep it on my shelf until I find the time to get to it, like a book.

And the price they give me from clicking the ad is a limited-time discount so then I'm turned off from ever coming back later and paying the full price i.e. the sucker's price.

Surely this isn't the optimal business model for many of the products that have adopted it.

reply
xmprt
1 day ago
[-]
You're not the average person. You probably also don't have significant credit card debt or rely on buy-now-pay-later for making purchases. Subscriptions are a win-win for the average company and user - users pay less upfront when evaluating the product, and companies can rely on a steady cash flow to continue paying for development and ongoing maintenance costs.
reply
account42
1 day ago
[-]
It's not a win for the user. The user is tricked into paying more than a fair price by deceptive business practices. The word for that is fraud, but somehow it's OK because everyone does it.
reply
xmprt
22 hours ago
[-]
> paying more than a fair price by deceptive business practices

I see product X is $10/month. I subscribe. I'm not sure where the deception is there? The alternative is likely either the cost of the product is exorbitantly high like $500 for lifetime. Or the developer makes it more affordable but sales peter out and they end up having to go out of business and can't afford to maintain the product after a couple of years. Likely both. And hackernews will complain either way.

The only sustainable model I've seen is lifetime licenses but updates for a single year.

reply
tokioyoyo
1 day ago
[-]
Every time I looked at customer spending graphs at my previous jobs, I realized how my habits have nothing in common with an average consumer. We’re extreme minority in most of the cases.
reply
canpan
2 days ago
[-]
Yes, I don't think replicated data structures are the problem.

Look at single player video games, cannot get more ideal for local-first. Still you need a launcher and internet connection.

reply
SilverbeardUnix
2 days ago
[-]
No you don't. There are plenty of games you can buy and go into the wilderness and play just fine offline. Just because game developers WANT you to be online so they can get data doesn't mean you NEED to be online.
reply
johnnyanmac
1 day ago
[-]
That's what confuses me about this whole topic. If you want a local app, make one. Nothing "requires" online if the features don't drive it.

We drove everything online for logistics and financial reasons. Not because the tech requires online connections for everything. it isn't changing because people don't see always-online as a big enough deterrent to change their habits.

reply
Reubachi
2 days ago
[-]
Your point is correct, but OP is correct.

There are currently tens of thousands of games that are unplayable due to requiring pinging to a network/patch server which long ago was deprecated.

Forsaking patch requirements, just as many games are no longer playable due to incompatibility/abandoned OS, codebase, gamebreaking bugs.

In both of these scenarios, my "lifetime license" is no longer usable through no action of my own, and breaks the lifetime license agreement. I shouldn't need to be into IT to understand how to keep a game I bought 5 years ago playable.

The solution to this "problem" for user, as offered by the corporate investment firms in control, is to offer rolling subscriptions that "keep your license alive", for some reason. Rather than properly charge for a service at time of purchase.

TLDR: Why move the goal posts further in favor of tech/IT/Videogame Investment firms?

reply
rpdillon
2 days ago
[-]
I think this thread is an example of a fascinating class of miscommunication I've observed on HN, but I want to say it out loud to see if I'm understanding it.

Two people meet in an HN thread, and they both dislike the status quo in a particular way (e.g. that copyright is awful, DRMed games suck, whatever). They both want to fight back against the thing that they dislike, but they do it in different ways.

One person finds alternatives to the mainstream and then advertises them and tell people: Look, here's the other way you can do it so you can avoid this terrible mess! That messaging can sometimes come across as downplaying the severity of the problem.

The second person instead wants to raise awareness of how awful the mess is, and so has to emphasize that this is a real problem.

The end result is two people that I think agree, but who appear to disagree because one wants to emphasize the severity of the problem and the other wants to emphasize potential solutions that the individual can take to address it.

Concretely, I think that's what happened here. I think everybody in this thread is pissed that single-player games would have activation and online DRM. Some people like to get around that by buying on marketplaces like GOG or playing open source games, and others want to change the policy that makes this trend possible, which means insisting that it really is a problem.

Sorry for all the meta commentary. If I got it wrong, I'd be interested to understand better!

reply
Loughla
1 day ago
[-]
Welcome to people management. Communication and miscommunication is about 99.999% of it.
reply
mrandish
1 day ago
[-]
Indeed. Most often due to divergence in definitions, scope, prior knowledge, assumptions, time frame, budget, share of burden, objective and/or incentives.
reply
Loughla
1 day ago
[-]
Assumptions. It's almost always assumptions.
reply
account42
1 day ago
[-]
Is that conclusion based on data? ;)
reply
grues-dinner
1 day ago
[-]
reply
nenenejej
1 day ago
[-]
You need a launcher and internet connection in the same way as you "need" to read a 15Mb wall of text to use your iPhone. They need to bully you into it.
reply
account42
1 day ago
[-]
I think video games are actually a counter-example: people are still willing to pay for single-player video games and the business model of those doesn't actually rely on Steam working the way it does.
reply
dtkav
1 day ago
[-]
I'm building a file-over-app local-first app call Relay [0] (it makes Obsidian real-time collaborative) and I agree with you.

We have a business model that I think is kind of novel (I am biased) -- we split our service into a "global identity layer"/control plane and "Relay Servers" which are open source and self-hostable. Our Obsidian Plugin is also open source.

So while we have a SaaS, we encourage our users to self-host on private networks (eg. tailscale) so that we are totally unable to see their documents and attachments. We don't require any network connection between the Relay Server and our service.

Similar to tailscale, the global identity layer provides value because people want SSO and straightforward permissions management (which are a pain to self-host), but running the Relay Server is dead simple.

So far we are getting some traction with businesses who want a best-in-class writing experience (Obsidian), google-docs-like collaboration, but local-first. This is of particular interest to companies in AI or AI safety (let's not send our docs to our competitors...), or for compliance/security reasons.

[0] https://relay.md

reply
Ingon
1 day ago
[-]
In some ways this reminds me of what I'm trying to do with connet [0] - give users the choice to completely self-host (all is open source), identity host (e.g. control server as cloud solution, host relays themselves) or even full cloud [1].

[0] https://github.com/connet-dev/connet

[1] https://connet.dev

reply
antonvs
2 days ago
[-]
This is the primary reason. The heaviest push for SaaS came from Silicon Valley, which wanted a recurring revenue stream model.
reply
bsder
1 day ago
[-]
> I believe the lack of popularity is more of an economics problem.

It's also a programming complexity problem.

A local-first app has to run on a zillion different configurations of hardware. A cloud-first app only has to run on a single configuration and the thing running on the user hardware is just a view into that singular cloud representation.

reply
account42
1 day ago
[-]
This is really not that much of a problem, most hardware differences are either irrelevant or easily abstracted away (if that isn't already done by the OS).
reply
johnnyanmac
1 day ago
[-]
It's not easy, but if the money flowed that way devs would figure stuff out. As is, devs who may have an interest in working on it will simply be rejected.
reply
moffkalast
1 day ago
[-]
Agreed. In the pre-internet days you had rigid version releases, there was a time investment in version 1, it released, people bought it and used it, no bugfixes, patches or anything. Then a year or two later, there was version 2, and you could buy that to get the improvements, or just use your old version just fine.

That sort of model collapses where software needs to be constantly updated and maintained lest it rots and dies as the rest of the ecosystem evolves and something dynamically linked changes by a hair. So what's left is either doing that maintenance for free, i.e. FOSS, or charging for it on a monthly basis like SAAS. We've mostly done this bullshit to ourselves in the name of fast updates and security. Maybe it was inevitable.

reply
tonymet
1 day ago
[-]
I wish we could get app developers to stop going online for every piece of content. Even my Tesla GPS map refuses to cache tiles it already has, so when connectivity goes down, my maps are blank.

Or streaming media apps (like Peacock & Kanopy) reloading the previous screen from the server instead of keeping the rendered media list object resident.

95% of the content is already on the device, let's please encourage it's use.

Dirty writes can be handled easily just mark the UI dirty until the write succeeds.

My point is that we could easily fix 95% of the offline challenges with no major changes to the app design -- just better habits.

reply
themafia
1 day ago
[-]
We solved a ton of issues by ensuring Cache-Control was being correctly used on API responses and that our application network layer respected them. The advantage is we can change our mind about cache lifetimes on the server side and no app update is required to start using them.
reply
tonymet
1 day ago
[-]
I love hearing about easy wins like this. very true on Android and iOS the Data Access Objects will handle content invalidation quite magically when used properly.
reply
m463
1 day ago
[-]
> no major changes to the app design

I'm pretty convinced they just want more data.

for example, apple allows offline maps, but they expire the data because they want you dependent on them.

I'm pretty sure the tesla (google) tile data has hidden motives.

reply
tonymet
1 day ago
[-]
I've also wondered. Having worked on similar apps, I think it's usually due to a bug where data is inconsistent , someone important complains, so the devs just invalidate all data to prevent the bug ever happening again. Inconsistent/stale data is more evident during testing than the caching issues (testers are usually using stable office wifi)
reply
materielle
1 day ago
[-]
It doesn’t even have to be a bug. Having some rule like “invalidate all data older than 6 months” makes it easier to reason about and test for backwards compatibility.

I’m sure the data format of Apple Maps is constantly changing to support new features and optimizations.

reply
m463
1 day ago
[-]
apple maps data expires after 30 days.

if you create offline maps for some vacation away from reliable cellphone service, when 30 days passes the (gigabytes of) maps just disappear. Unusable even if you are in a remote village.

reply
pkaye
1 day ago
[-]
Google maps (on my Android) actually has offline maps for a while now but you have to manually select the region. And you can cache multiple regions at once. I've used it in the past when driving to national parks and remote areas with no phone network in some spots.
reply
workfromspace
11 hours ago
[-]
As a frequent traveller I can say Google Maps (offline mode) sucks!

If you are fully offline, then it works as intended.

However, if you have even a tiny bit of connection, even if your connection is spotty or very slow, Google Maps refuses to use its offline data, crippling the experience and making it unusable.

I agree with the sibling comment. When I travel, I usually use 3 apps and download the offline maps of the regions I visit: Apple Maps (for iOS apps and embedded features), Google Maps, and thank god: Organic Maps.

reply
tonymet
1 day ago
[-]
I do appreciate this feature. I wish more apps provided & tested online access.

I have also tested Gaia & All Trails maps online features, and have had mixed results with those.

I just don't think app developers take the time to test their apps in offline or poor connectivity conditions. They are used to testing procedures while using office wifi

reply
Archelaos
1 day ago
[-]
I can recommend organic maps: https://organicmaps.app

The quality of the maps depends on the region, though. But for me it is typically good enough. I not only like the local maps, but also that I can save waypoints locally. And I can contribute things like points of interest to Open Street Map directly via the app. In my opinion, the biggest disadvantage is that there is no traffic information.

reply
Ostrogoth
22 hours ago
[-]
Another vote for Organic Maps. I use it as a lightweight maps app for backcountry or traveling in foreign countries where I don’t have a sim card. You can also record tracks in the app, or import .gpx files. In airplane mode it has low impact on battery consumption.

I was also pleasantly surprised to find out iOS Star Chart app (https://apps.apple.com/us/app/star-chart/id345542655) functions entirely offline. Recently used it while camping, and it just needed a GPS coordinates fix to adjust sky map to location.

reply
pabs3
1 day ago
[-]
Might be better to switch to the CoMaps fork:

https://www.comaps.app/news/2025-07-03/Announcing-Navigate-w...

reply
mcv
1 day ago
[-]
Interestingly, f-droid doesn't initially show CoMaps but does show Organic Maps because apparently CoMaps doesn't pass an antifeature filter for depending on a tethered service on codeberg. I don't quite understand why this is an issue; they both need to download their maps from somewhere, don't they?
reply
Ostrogoth
22 hours ago
[-]
Thanks for recommending. I was not aware that a fork of Organic Maps had been created, or that longtime contributors to OM had concerns about the project.
reply
mcv
1 day ago
[-]
This is the big reason why I prefer OSM apps. They all download the maps, and your map will work regardless of connectivity.
reply
bawolff
1 day ago
[-]
Because when your selling point is that its local first (or distributed, or any other politically trendy thing), you concentrate on that over the app's core value proposition. The end result is you make an app that does what people think they want at the expense of doing as good a job as possible at the thing people actually want.
reply
WD-42
1 day ago
[-]
When I switched to Immich, I thought I was going to give up a lot for the sake of self hosting. To my surprise, it’s actually better than anything I’ve used from Apple or Google. Unicorns exist, they are just rare.
reply
brianpan
1 day ago
[-]
> Why haven't local-first apps become popular?

The Immich "Quick Start" step 1 is mkdir and wget commands. Step 2 is "Populate the .env file with custom values"

I get it, when the prerequisites are 1) run a server and 2) have Docker installed, but this isn't inspiring confidence that local-first apps will become popular.

reply
reilly3000
1 day ago
[-]
Self-hosted services and local-first apps are distinct topics. Local-first is about keeping data on clients- orchestrated via a centralized or distributed system. Usually it means installing an app that works offline, and might sync to desktop. Self-hosting is about running servers in owned or rented infrastructure, and that is never a turnkey proposition.
reply
stavros
1 day ago
[-]
While Immich is fantastic enough for me to have paid the $100 just to support them, it's self-hosted, not offline-first. The two are different.
reply
0xffff2
2 days ago
[-]
I feel like I'm taking crazy pills. How on Earth could anyone consider the example in #2 "conflict-free"? You haven't removed the conflict, you're just ignored it! Anything can be conflict free in that case.

Obviously not every problem will have such an obvious right answer, but given the example the author chose, I don't see how you could accept any solution that doesn't produce "100" as a correct result.

reply
WorldMaker
2 days ago
[-]
I also think this is a place where CRDTs in general got stuck on the name "Conflict-Free" for way too long assuming it was fate that if they worked hard enough they find the magic data structures to eliminate conflicts altogether but real life data is a lot more more complicated than that and real life expectations of data semantics that a data type itself can't encode. I think we are just now getting to the point of seeing CRDT libraries understand some conflicts happen, and some conflicts still need to bubble up to a more complex semantic model or even/especially a user. I don't think there are any CRDT libraries that are strong for that yet, but the work seems to starting into those next steps at least.
reply
dwaltrip
2 days ago
[-]
I remember like 10 years ago seeing some people getting really excited by CRDTs. I was deeply confused about how the software would magically know what the correct thing to do would be when 2 people made edits that directly conflicted with each other…

You can’t know what to do without talking to the people involved, as they have to decide what makes sense for end goal. It’s mostly a social / collective action problem, not a purely technical one.

reply
WorldMaker
1 day ago
[-]
Yeah, it's hard to fault people from wanting to find as many technical solutions as possible to reduce social problems to a minimum. I can fault CRDTs for thinking that they could solve all of them technically, enough so that they put "conflict-free" in the name, because yeah, there are always problems you can't solve technically, there is always user knowledge you can't just assume the technically best approach is the semantically, socially, or even politically best approach (again as much as we might want to wish that we could solve these things technically, which is a beautiful dream sometimes).
reply
crazygringo
1 day ago
[-]
Yeah, "conflict-free" is from a theoretical perspective, that it always generates a final result guaranteed to match. Which, to be fair, is an accomplishment in and of itself, when the operations might be out of order or repeated.

It is absolutely not "conflict-free" from the user perspective, nor is it even necessarily logical. Tools like Google Docs manage this by providing a version history so if the merge messes up, you can still go back and see each recent change to grab whatever data got lost.

reply
Aldipower
1 day ago
[-]
And the example #1 isn't even better. Maybe, I am not smart enough, but when the physical clock of user B is _way_ off in the wrong direction, you still get the wrong ordering. Example 1 assumes that the users clocks are going more or less correct, which is sometimes just not the case.
reply
AlienRobot
1 day ago
[-]
Yeah, that's an insane things for a program to do. The only correct thing for a software to do in case of conflict is to warn the user there is a conflict and provide them with the tools to fix the conflict. Assuming a solution is the last thing anyone wants.
reply
alkonaut
1 day ago
[-]
This whole article reads like normal desktop software or mobile apps are some strange edge case of software, rather than still being a super common way of delivering software.

Doing local-first _in a browser_ still feels like taking all the drawbacks of a browser like difficult interop with the rest of the host system and getting only one benefit (ease of distribution).

reply
russnewcomer
2 days ago
[-]
I'm not sure CRDTs are actually the right answer here for your example of #2, Marco. A double-entry accounting system might actually be more ideal. In that case, what you are keeping in sync is the ledger, but depending on your use-case, that might actually be easier since you can treat them as a stream-of-data, and you would get the 'correct' answer of 100.

In this case, you would need two accounts, a credit and debit account, and then device A would write +20 to the credit account and -20 to the debit account, device B would write -20 to the credit account and +20 to the debit account, then using a HLC (or even not, depending on what your use-case is again), you get back to the 100 that seems from the description of the problem that it is the correct answer.

Obviously, if you are editing texts there are very different needs, but this as described is right in the wheelhouse of double-entry accounting.

reply
michaelsalim
1 day ago
[-]
That's not what a double-entry accounting system is for. If all you're doing is keeping track of one account/balance, then double-entry doesn't add anything. You might want to still implement it that way for future proofing though if you're implementing an accounting system.

The main thing to takeway is to store transactions like you mentioned (+20, -20). And in the simplest case, just apply all of them based on time.

reply
russnewcomer
1 day ago
[-]
You're not entirely wrong that double-entry accounting doesn't add much to keeping track of one balance. And the example provided in the article was very simple, just like mine was very simple. Transactions do help, but if you are trying to keep track of a balance and understanding how that balance is changing, double-entry accounting is helpful.
reply
miki123211
1 day ago
[-]
Local-first apps haven't become popular because users want shareable links, cross-device sync and maybe security.

For cross-device sync, you need a server. Either you use iCloud (limiting yourself to Apple platforms), host one (which encourages subscriptions and defeats the point of local-first) or ask users to self-host (which is difficult and error-prone).

Shareable links also need a server. You can't use iCloud for those, and if you're asking users to self-host, you now need to ensure their server is exposed publicly and has a domain. This adds yet another complication to an already complicated process, not to mention the security implications.

Security (where some users are restricted from seeing some of the data and performing some of the actions) requires much more "smarts" in the server, and makes local sync infinitely more complicated.

For local-first apps to be viable, we need a lot more primitives first. Apple provides some, but not nearly enough, and only for their own platforms. Nobody else even bothers.

reply
chris_money202
1 day ago
[-]
Creating a server devices can access on the same network is trivial the only issue you would run into is maybe what port to use and there are discovery processes for this.
reply
miki123211
1 day ago
[-]
For that to work, you need a device which is online 24/7, always stays on the network, and does not have energy efficiency requirements, which justifiably limit background app activity.

The only (consumer) devices that fulfill these requirements are routers and some smart gadgets, like fridges or smart speakers. None of them are easily programmable to a sufficient degree.

This still doesn't solve the problem of sharing among users on different networks and accessing your data when you're not home.

All these problems are solvable, but they'd require massive coordination among device vendors, and coordination is one of the hardest problems out there.

reply
VikingCoder
2 days ago
[-]
I want many more Local-Only apps, thanks. Self-Hosted.

Or Federated apps, again Self-Hosted.

And I think network infrastructure has been holding us back horribly. I think with something like Tailscale, we can make local-only apps or federated apps way, way easier to write.

reply
ChadNauseam
1 day ago
[-]
To me, writing a local-first app using CRDTs is the only way I ever want to build apps from now on. It's by far the most fun approach I've found. And here's why: the CRDTs I work with require you to maintain an event log, AKA a complete history of everything that's happened which you then replay to compute the current state. This allows events from other devices to be inserted at their proper chronological point in the stream, so that when you replay everything you get the correct current state with that event included.

In practice, it has an unexpected benefit. Whenever I hit a decision point where I'm not totally sure how something should behave, I can just ship it anyway without stressing about it. Later on, I can retroactively change how that action works by updating the function that processes events into state changes. It's hard to convey just how liberating this is, but trust me, this situation comes up way more often than you'd expect.

reply
michaelsalim
1 day ago
[-]
Agreed. I'm building a presentation software[1] so it needs to work locally (can't afford lost of connection). But I also want it to be accessible from anywhere. Unfortunately those two things don't mesh that easily.

The tech is getting there though. With things like hole punching or webrtc, P2P is getting pretty easy to implement. Though it's still a challenge to implement it in an actual project.

I do believe that we're going to start seeing quality local-first software w/ great networking pop up more and more soon. I'm working on that, but I'm sure there are plenty others who are doing the same.

[1] Open-source, see my profile for info

reply
VikingCoder
1 day ago
[-]
Really, why not use Tailscale?
reply
michaelsalim
1 day ago
[-]
Never said you shouldn't. Tailscale uses a bunch of different techniques including hole punching for it to work. If that's what you need, go ahead. I opted for Iroh for a few different reasons but Tailscale is an awesome choice too.
reply
hasanhaja
2 days ago
[-]
I've been having fun exploring this actually: https://news.ycombinator.com/item?id=45333494

I've found it to be a fun way to build apps.

reply
b_e_n_t_o_n
1 day ago
[-]
Perhaps I'm missing something, but aren't local-first apps the overwhelming norm? Like if I think about the apps I use, my friends use etc, most of them are local. Unless the author means "local first web apps", which would make more sense. And I think the oxymoronic nature of a "local first web app" gives a clue as to their unpopularity.
reply
crazygringo
1 day ago
[-]
The context should be clear that this is talking exclusively about apps that store their data in the cloud.

Local-first here means starting with a local data model that syncs to the cloud from the start, rather than an app that only works online.

reply
b_e_n_t_o_n
1 day ago
[-]
What are some examples of popular apps that do this which aren't web apps, and could feasibly work offline? Every native app I use is local-first. Eg. Photoshop, Blender, Figma, Xcode, Zed, Kitty, Affinity Photo/Designer, Notes, Music, Calendar, Messenger, Maps, Email, etc.
reply
rbits
1 day ago
[-]
Notion is a big one. I tried it out but didn't end up using it because it doesn't work offline.

(I know you said not web apps, but it does have a desktop version)

reply
b_e_n_t_o_n
1 day ago
[-]
Is the desktop version more than just a webview? :P
reply
crazygringo
1 day ago
[-]
I already explained: "exclusively about apps that store their data in the cloud".

So Photoshop, Blender, etc. -- these are not apps that store their data in the cloud. They're using filesystems. There's no sync. They're not local-first, they're just local.

But the Apple apps -- Notes, Music, Calendar -- they are very much local-first. Their source of truth is in iCloud when you activate that, but they fully work offline as well and then sync once there's a connection. This is completely different from e.g. Photoshop.

reply
zahlman
1 day ago
[-]
> Like if I think about the apps I use, my friends use etc, most of them are local.

Ones that you have to pay for directly?

Aside from game devs it's hard for me to think of who the major players are in that space any more. And now even when the game has a single-player mode it seems to demand an Internet connection, whether for DRM reasons, "anti-cheat" (why do you care?), updates etc.

reply
johnnyanmac
1 day ago
[-]
They aren't major players, but there are still plenty of apps that are more "app with options to save to cloud" than "thin client with maybe some caching options". A quick skim on my phone shows it mostly pertains to apps that connect to other hardware (my router/modem and smart bulb, for instance), utility apps (file exploreres, calculators, and task lists), and local media apps.

But one enables continual revenues streams and the other succumbed to extremely rampant piracy in the app space. As such, even many games on mobile are service games rather than a local single player game.

reply
b_e_n_t_o_n
1 day ago
[-]
Even the apps I pay (or paid) for directly are local first. Eg the Adobe suite, Unity Pro, Affinity. Some, like Copilot, do require internet but there is no feasible way to get offline access for an online service.
reply
gt0
1 day ago
[-]
I think developers overestimate how much everyday users care about local-first, or working offline.

At home and in the office, we have the Internet, so it's only on the move in places where you phone doesn't work where this matters and most of us probably don't work in those places enough for it to matter.

I worked on an app a while back, a synchronising drawing app, and ran into the issues mentioned in the article that synchronising is hard and is solving a problem very few people actually have, the need to work offline.

I'm a sort of "desktop app believer" in that I think you tend to get a better UX from native desktop apps than the web, but convincing enough people to believe you or give you money for it is another matter.

reply
Mordisquitos
1 day ago
[-]
> I think developers overestimate how much everyday users care about local-first, or working offline.

And that's because (many) everyday users are not even aware that being online is not essential to perform the functionality they need from their applications. It's not that users don't care that they cannot work offline. It's that they don't even understand that requiring an internet connection is not a technical necessity, but rather an artificial limitation imposed by business interests or incompetence.

reply
crazygringo
1 day ago
[-]
> Offline-first apps sound like the future: ...no more spinning loaders on flaky connections.

Even offline-friendly apps get spinning loaders on flaky connections. Google Docs will be stuck checking to see if a doc has been updated before it loads it, and will only actually open it when I turn on Airplane Mode. Similarly, Spotify will be stuck loading a saved playlist or album because it's trying to load extra information it loads when connected to the internet, and again only Airplane Mode will skip that so it knows it's offline-only.

Flaky connections are the bane of existence of apps that work offline, either because apps want to load extra online-only data whenever possible, or simply ensure everything is synced before taking a big action.

reply
hnaccountme
1 day ago
[-]
The article is wrong. Its not popular because of greed.

We used to have offline everything and everything just worked. Now we are forced to sync with the cloud weather we like it or not. A subscription model shows a continuous cash flow and businesses like that.

reply
sbt
1 day ago
[-]
This is the correct answer. Consumers also don’t seem willing to pay for privacy, sadly, when they can get a product for «free» in exchange for data.
reply
sebastianz
1 day ago
[-]
People want to sync their data, work on and see the same thing on their phone, their laptop and their desktop. This certainly did not "just work" when "we used to have offline everything".
reply
earthnail
1 day ago
[-]
Unfortunately the solution presented in the article is locked to a closed cloud offering. I have nothing against those, Firebase for example is like that, but I dislike the fact that it’s hidden. “It’s just an sqlite extension”, oh and it syncs with our commercial cloud offering only.

Other vendors like Powersync and ElectricSQL have a similar offering but are upfront about it. And at least in Powersync’s case (don’t know about Electricsql) you can also run their offering yourself.

reply
kobieps
1 day ago
[-]
It's kind of hard for me to think of a devtool as local-first or not, since the actual definition of local-first [1] talks about end-user software, not devtools.

So the question is whether it's possible to build software that adheres to the local-first principles using sqlite-sync.

edit: I'm aware people are using the term "local-first" very loosely, so perhaps my reply here is a bit off-topic

p.s. yes you can self-host ElectricSQL

p.p.s. I really need to keep my list of "sqlite sync" things updated: I have SQLSync [2], SQLiteSync [3] and now SQLite-Sync [4]. Simple!

[1] https://www.inkandswitch.com/essay/local-first/

[2] https://github.com/orbitinghail/sqlsync

[3] https://ampliapps.com/sqlite-sync/

[4] https://github.com/sqliteai/sqlite-sync

reply
SamLeBarbare
1 day ago
[-]
CRDTs and HLCs often feel like over-engineering. In most business/management apps, it’s rare for two people to edit the same piece of data at the exact same time.

A simpler approach works surprisingly well:

Store atomic mutations locally (Redux-like) in SQLite.

Sync those mutations to a central server that merges with last-writer-wins.

Handle the few conflicts that do occur through clear ownership rules and some UI/UX design.

This makes local-first behave more like Git: clients work offline, push their “commits,” and the server decides the truth. Most of the complexity disappears if the data model is designed with collaboration and ownership in mind.

reply
kobieps
1 day ago
[-]
Agreed @ overengineering for most use cases.

Still, where a simpler approach gets tricky is if you only want to sync a subset of the backend database to any client's SQLite

reply
czx111331
2 days ago
[-]
I believe that in the right contexts—specifically where eventual consistency is acceptable—the local-first paradigm is highly valuable and will gradually become mainstream. A major factor limiting adoption today is that existing local-first solutions are incomplete: when building such applications, developers must handle many problems that are trivial under strong-consistency or traditional models. This raises the learning cost and creates significant friction for paradigm shifts.

Our recent work on Loro CRDTs aims to bridge this gap by combining them with common UI state patterns. In React, developers can keep using `setState` as usual, while we automatically compute diffs and apply them to CRDTs; updates from CRDTs are then incrementally synced back into UI state [1]. This lets developers follow their existing habits without worrying about consistency between UI state and CRDT state. Paired with the synchronization protocol and hosted sync service, collaboration can feel as smooth as working with a purely local app. We’ve built a simple, account-free collaborative example app[2]. It only has a small amount of code related to synchronization; the rest looks almost the same as a purely local React app.

[1]: https://loro.dev/blog/loro-mirror

[2]: https://github.com/loro-dev/loro-todo

reply
canadiantim
1 day ago
[-]
Loro is very cool. I'm currently testing out building with it. Hadn't seen loro-mirror before, looks very nice too. Is loro-mirror supposed to be used with rich text editors like prosemirror, etc.?

Thanks for your great work with loro

reply
czx111331
1 day ago
[-]
loro-prosemirror[1] offers much better support for integrating Loro with ProseMirror/Tiptap.

In theory, loro-mirror could also be used to integrate Loro with other rich-text editors, but that wasn’t its original design goal and it may need further refinement to work well.

[1] https://github.com/loro-dev/loro-prosemirror

reply
MathMonkeyMan
2 days ago
[-]
The free software evangelist in me says "because local-first gives more to the user of the software," which will tend not to happen when the user is not in control of the software.

Realistically the reason is probably that it's easier to make changes if you assume everything is phoning home to the mother ship for everything.

Also, an unrelated nit: "Why Local-First Apps Haven’t Become Popular?" is not a question. "Why Local-First Apps Haven’t Become Popular" is a noun phrase, and "Why Haven't Local-First Apps Become Popular?" is a question. You wouldn't say "How to ask question?" but instead "How do you ask a question?"

reply
robenkleene
2 days ago
[-]
Apple is practically the most antithetical to "free software" company around, yet Apple maintains perhaps the largest fleet of local-first apps in existence, e.g., off the top of my head: Calendar, Contacts, Keynote, Mail, Notes, Numbers, Photos, and Pages (these are all examples of apps that support multi-device sync and/or real-time collaboration).

I think the truth of your statement is more that free software tends towards what you might call "offline" software (e.g., software that doesn't sync or offer real-time collaboration), because there's more friction for having a syncing backend with free software.

reply
MathMonkeyMan
1 day ago
[-]
Maybe the distinction is in that word "app." We started calling programs "apps" when smart phones came out. Smart phones are remote-first, and it makes sense (or it did) as long as you think of a phone as your terminal into... something.

Your examples are all programs that predate mobile, even though they are available on mobile (are they local-first on mobile too?).

reply
robenkleene
1 day ago
[-]
Not sure I'm following, what's the importance of the term "app"?

(And yes, they're all local first on mobile as well.)

(Also Notes and Photos were mobile apps on the iPhone first [not that it really matters, just FYI].)

Apple continues to release new apps under this model today (i.e., local first), e.g., https://en.wikipedia.org/wiki/Freeform_(Apple). In my mind, the evidence just points to Apple thinking local-first is the best approach for the software on their devices.

reply
jkaplowitz
1 day ago
[-]
The Windows desktop version of Microsoft's suite is probably bigger, more widely used, and just as local-first than Apple's suite, with the exception of the new version of Outlook that's still very far from replacing the traditional and local-first version. (They haven't tried to do any such replacement of the rest of their suite.)
reply
dredmorbius
1 day ago
[-]
Apple sells hardware.

(And, increasingly, entertainment and services.)

Software is a positive complement to those revenue sources.

reply
montebicyclelo
1 day ago
[-]
> One of the simplest CRDT strategies is Last-Write-Wins (LWW):

> Each update gets a timestamp (physical or logical).

> When two devices write to the same field, the update with the latest timestamp wins.

Please also have a robust synced undo feature, so it's easy to undo the thing you don't want that gets written. Apps that sync often seem to be stingy about how much "undos" they store/sync (if any).

reply
hinkley
1 day ago
[-]
Editing on phone. Phone dies. Scrounge up tablet, edit again, only rethink some of the work. Hit save. Plug in cellphone. Cellphone turns on. Now what?

Depends on the granularity of updates. Did the last changes get sent immediately? Are they gated by a save button? Are they periodically pushed?

Some of those don’t need a durable undo, but the rest definitely benefit, and undo has other uses.

reply
fireflash38
1 day ago
[-]
Or they save as soon as they launch, so no matter what local is newer and you'll lose whatever was on remote.
reply
throwmeaway222
1 day ago
[-]
The title doesn't match the content well, everyone's responding to the title!

The syncing issue is also handled better by making the logical unit that gets changed as small as possible. If you have two people editing a document, instead of locking the document, and wiping out changes from other people, think of it as a series of edits to a paragraph, or smaller a sentence, or even the exact word being changed. If someone has text highlighted, then lock anyone else out from editing it because there is a high chance they're going to erase it or paste over it.

Lastly, as AI moves forward, it would be an interesting experiment to start having AI auto-resolve conflicts in the small. (user has been replacing the word taco with burrito, and two users started typing in the same spot TaBurrcoito. Maybe Burrito would win.

reply
dpe82
1 day ago
[-]
You've essentially described Google Wave style Operational Transformation.
reply
PaulHoule
2 days ago
[-]
Lotus Notes solved syncing for object databases but the world forgot

https://en.wikipedia.org/wiki/HCL_Notes

reply
codegeek
2 days ago
[-]
You just reminded me of the nightmare Lotus Notes was. Great idea but absolutely horrendous implementation. Probably the worst piece of software I have ever used and I have been in the industry for 21+ years now.
reply
mschuster91
2 days ago
[-]
Ahhh Lotus Notes... in many ways ahead of its time, timeless and horribly outdated at the same time.
reply
echelon
2 days ago
[-]
I still remember the crazy password screen with the symbols that changed as you typed.

If that was deterministic, that was a very bad idea.

reply
AlexandrB
2 days ago
[-]
IIRC the symbols were basically a hash of the password that let you know if you typed the password correctly without showing it.
reply
patwolf
2 days ago
[-]
From my time using Notes I remember lots of manual replication config to get anything to properly work offline, and even then I struggled to get it to work reliably. So while they might have solved it, I don't think their solution was very good.
reply
lordnacho
2 days ago
[-]
Local-first was the first kind of app. Way up into the 2000s, you'd use your local excel/word/etc, and the sync mechanism was calling your file annual_accounts_final_v3_amend_v5_final(3).xls

But also nowadays you want to have information from other computers. Everything from shared calendars to the weather, or a social media entry. There's so much more you can do with internet access, you need to be able to access remote data.

There's no easy way to keep sync, either. Look at CAP theorem. You can decide which leg you can do without, but you can't solve the distributed computing "problem". Best is just be aware of what tradeoff you're making.

reply
marginalia_nu
2 days ago
[-]
> There's no easy way to keep sync, either. Look at CAP theorem. You can decide which leg you can do without, but you can't solve the distributed computing "problem". Best is just be aware of what tradeoff you're making.

Git has largely solved asynchronous decentralized collaboration, but it requires file formats that are ideally as human understandable as machine-readable, or at least diffable/mergable in a way where both humans and machines can understand the process and results.

Admittedly git's ergonomics aren't the best or most user friendly, but it at least shows a different approach to this that undeniably works.

reply
jordanb
2 days ago
[-]
I feel like git set back mainstream acceptance of copy-and-merge workflows possibly forever.

The merge workflow is not inherently complicated or convoluted. It's just that git is.

When dvcses came out there were three contendors: darcs, mercurial and git.

I evaluated all three and found darcs was the most intuitive but it was very slow. Git was a confused mess, and hg was a great compromise between fast and having a simple and intuitive merge model.

I became a big hg advocate but I eventually lost that battle and had to become a git expert. I spent a few years being the guy who could untangle the mess when a junior messed up a rebase merge then did a push --force to upstream.

Now I think I'm too git-brained to think about the problem with a clear head anymore, but I think it's a failure mostly attributable to git that dvcs has never found any uptake outside of software development and the fact that we as developers see dvcs as a "solved problem" outside more tooling around git is a failure of imagination.

reply
robenkleene
2 days ago
[-]
> The merge workflow is not inherently complicated or convoluted. It's just that git is.

What makes merging in git complicated? And what's better about darcs and mercurial?

(PS Not disagreeing just curious, I've worked in Mercurial and git and personally I've never noticed a difference, but that doesn't mean there isn't one.)

reply
WorldMaker
2 days ago
[-]
Darcs is a special case because it coevolved a predecessor/fork/alternative to CRDTs [0] (called "Patch Theory"). Darcs was slow because darcs supported a lot of auto-merging operations git or mercurial can't because they don't have the data structures for it. Darcs had a lot of smarts in its patch-oriented data structures, but sadly a lot of those smarts in worst cases (which were too common) led to exponential blowouts in performance. The lovely thing was that often when Darcs came out of that slow down it had a great, smart answer. But a lot of people's source control workflows don't have time to wait on their source control system to reason through an O(n ^ 2) or worse O(n ^ n) problem space. To find a CRDT-like "no conflict" solution or even a minimal conflict that is a smaller diff than a cheap three-way diff.

[0] Where CRDTs spent most of a couple of decades shooting for the stars and assuming "Conflict-Free" was manifest destiny/fate rather than a dream in a cruel pragmatic world of conflicts, Darcs was built for source control so knew emphatically that conflicts weren't avoidable. We're finally at the point where CRDTs are starting to take seriously that conflicts are unavoidable in real life data and trying new pragmatic approaches to "Conflict-Infrequent" rather that "Conflict-Free".

reply
account42
1 day ago
[-]
At the end of the day all of these have the user start with state A turn that into state B and then commit that. How the that operation is stored internally (as a snapshot of the state or as a patch generated at commit time) is really irrelevant to the options that are available for resolving conflicts at merge time.

Auto-merging code is also a double-edged sword - just because you can merge something at the VCS-level does not mean that the result is sensible at the format (programming language) or conceptual (user expectation) levels.

reply
WorldMaker
1 day ago
[-]
Having used darcs for a while and still being a fan of it despite having followed everyone to git, the data storage is not irrelevant and does affect the number of conflicts to resolve and the information to resolve it.

It wasn't just "auto-merging" that is darcs' superpower, it's in how many things that today in git would need to be handled in merges that darcs wouldn't even consider a merge, because its data structure doesn't.

Darcs is much better than git at cherry picking, for instance, where you take just one patch (commit) from the middle of another branch. Darcs could do that without "history rewriting" in that the patch (commit) would stay the same even though its "place in line" was drastically moved. That patch's ID would stay the same, any signatures it might have would stay the same, etc, just its order in "commit log" would be different. If you later pulled the rest of that branch, that also wouldn't be a "merge" as darcs would already understand the relative order of those patches and "just" reorder them (if necessary), again without changing any of the patch contents (ID, signatures, etc).

Darcs also has a few higher level patch concepts than just "line-by-line diffs", such as one that tracks variable renames. If you changed files in another branch making use of an older name of a variable and eventually merge it into a branch with the variable rename, the combination of the two patches (commits) would use the new name consistently, without a manual merge of the conflicting lines changed between the two, because darcs understands the higher level intent a little better there (sort of), and encodes it in its data structures as a different thing.

Darcs absolutely won't (and knows that it can't) save you from conflicts and manual merge resolution, there are still plenty of opportunities for those in any normal, healthy codebase, but it gives you tools to focus on the ones that matter most. Also yes, a merge tool can't always verify that the final output is correct or builds (the high level rename tool, for instance, is still basically a find-and-replace and can be over-correct false positives and and miss false negatives). But it's still quite relevant to merges the types of merges you need to resolve in the first place, and how often they occur, and what qualifies as a merge operation in the first place.

Though maybe you also are trying to argue the semantics of what constitutes a "merge", "conflicts", and an "integration"? Darcs won't save you from "continuous integration" tools either, but it will work to save your continuous integration tools from certain types of history rewriting.

"At the end of the day" the state-of-the-art of VCS on-disk representation and integration models and merge algorithms isn't a solved problem and there are lots of data structures and higher level constructs that tools like git haven't applied yet and/or that have yet to be invented. Innovation is still possible. Darcs does some cool things. Pijul does some cool things. git was somewhat intentionally designed to be the "dumb" in comparison to darcs' "smart", it is even encoded in the self-deprecating name (from Britishisms such as "you stupid git"). It's nice to remind ourselves that while git is a welcome status quo (it is better than a lot of things it replaced like CVS and SVN), it is not the final form of VCS nor some some sort of ur-VCS which all future others will derive and resembles all its predecessors (Darcs predates git and was an influence in several ways, though most of those ways are convenience flags that are easy to miss like `git add -p` or tools that do similar jobs in an underwhelming fashion by comparison like `git cherry-pick`).

reply
TylerE
2 days ago
[-]
That git won over hg is a true tragedy. The hg ux/ui is so much better.
reply
marginalia_nu
2 days ago
[-]
Yeah I mostly agree with this. I'm mostly talking about git the model, rather than git the tool when I say git has solved the problem of asynchronous decentralized collaboration.

For local-first async collaboration on something that isn't software development, you'd likely want something that is a lot more polished, and has a much more streamlined feature set. I think ultimately very few of git's chafing points are due to its model of async decentralized collaboration.

reply
jorvi
2 days ago
[-]
Hey, another hg enjoyer! I miss it too. So much simpler.

Apparently 'jujutsu' makes the git workflow a bit more intuitive. Its something that runs atop git, although I don't know how much it messes up the history if you read it out with plain git.

All in all I'm pretty happy with git compared to the olden days of subversion. TortoiseSVN was a struggle haha.

reply
rpdillon
2 days ago
[-]
Ah, I miss hg. Another cool aspect is that because it was written in Python and available as a library, I was able to write a straightforward distributed wiki based on hg in a single Python script. So much fun.
reply
jjcob
2 days ago
[-]
Git works, but it leaves conflict resolution up to the user. It's good for a tool for professional users, but I don't see it being adopted for mainstream use.
reply
PaulHoule
2 days ago
[-]
The funny thing about it is I see git being used in enterprise situation for non-dev users to manage files, often with a web back end. For instance you can tell the average person to try editing a file with the web interface in git and they're likely to succeed.

People say git is too "complex" or "complicated" but I never saw end users succeeding with CVS or Mercurial or SVN or Visual Sourcesafe the way they do with Git.

"Enterprise" tools (such as business rules engines) frequently prove themselves "not ready for the enterprise" because they don't have proper answers to version control, something essential when you have more than one person working on something. People say "do you really need (the index)" or other things git has but git seemed to get over the Ashby's law threshold and have enough internal complexity to confront the essential complexity of enterprise version control.

reply
jjcob
2 days ago
[-]
> you can tell the average person to try editing a file with the web interface

Yes, but then you are not using a "local first" tool but a typical server based workflow.

reply
criddell
2 days ago
[-]
How can you avoid leaving conflict resolution up to the user?
reply
robenkleene
2 days ago
[-]
The problem with "human understandable" with respect to resolving syncing conflicts, is that's not an achievable goal for anything that's not text first. E.g., visual and audio content will never fit well into that model.
reply
marginalia_nu
2 days ago
[-]
I can undo and redo edits in these mediums. Why can't these edits be saved and reapplied?

Not saying this would be in any way easy, but I'm also not seeing any inherent obstacles.

reply
robenkleene
2 days ago
[-]
Nothing. But that's not what the comment I was replying to was suggesting:

> It requires file formats that are ideally as human understandable as machine-readable, or at least diffable/mergable in a way where both humans and machines can understand the process and results.

What you're proposing is tracking and merging operations rather than the result of those operations (which is roughly the basis of CRDTs as well).

I do think there's some problems with that approach as well though (e.g., what do you do about computationally expensive changes like 3D renders?). But for the parts of the app that fit well into this model, we're already seeing collaborative editing implemented this way, e.g., both Lightroom and Photoshop support it.

To be clear though, I think the only sensible way to process merges in this world is via a GUI application that can represent the artifact being merged (e.g., visual/audio content). So you still wouldn't use Git to merge conflicts with this approach (e.g., a simple reason why is that what's to stop an underlying binary asset that a stack of operations is being applied to from having conflicting changes if you're just using Git?). Even some non-binary edits can't be represented as "human readable" text, e.g., say adding a layer of a vector drawing of rabbit.

reply
WorldMaker
2 days ago
[-]
Git's original merge algorithm was intentionally dumb, it was mostly just a basic three-way diff/merge. (Git's merge algorithms have gotten smarter since then.)

Three-way merges in general are easier to write than the CRDTs as the article suggests. They are also far more useful than just the file formats you would think to source control in get; it's a relatively easy algorithm to apply to any data structure you might want to try.

For a hobby project I took a local-first-like approach even though the app is an MPA, partly just because I could. It uses a real simple three-way merge technique of storing the user's active "document" (JSON document) and the last known saved document. When it pulls an updated remote "document" it can very simply "replay" the changes between the active document and the last known saved document to the active document to create a new active document. This "app" currently only has user-owned documents so I don't generally compute the difference between the remote update and the last saved to mark conflicted fields to the user, but that would be the easy next step.

In this case the "documents" are in the JSON sense of complex schemas (including Zod schemas) and the diff operation is a lot of very simple `===` checks. It's an easy to implement pattern and feels smarter than it should with good JSON schemas.

The complicated parts, as always, are the User Experience of it, more than anything. How do you try to make it obvious that there are unsaved changes? (In this app: big Save buttons that go from disabled states to brightly colored ones.) If you allow users to create drafts that have never been saved next to items that have at least one save, how you visualize that? (For one document type, I had to iterate on Draft markers a few times to make it clearer something wasn't yet saved remotely.) Do you need a "revert changes" button to toss a draft?

I think sometimes using a complicated sync tool like CRDTs makes you think you can escape the equally complicated User Experience problems, but in the end the User Experience matters more than whatever your data structure is and no matter how complicated your merge algorithm is. I think it's also easy to see all the recommendations for complex merge algorithms like CRDTs (which absolutely have their place and are very cool for what they can accomplish) and miss that some of the ancient merge algorithms are simple and dumb and easy to write patterns.

reply
taeric
2 days ago
[-]
Git does no such thing. Plain text files with free form merging capabilities somewhat solves the idea that you can merge things. But, to make that work, the heavy lifting has to be done by the users of the system.

So, sure, if you are saying "people trained to use git" there, I agree. And you wind up having all sorts of implicit rules and guidelines that you follow to make it more manageable.

This is a lot like saying roads have solved how to get people using dangerous equipment on a regular basis without killing everyone. Only true if you train the drivers on the rules of the road. And there are many rules that people wind up internalizing as they get older and more experienced.

reply
poszlem
2 days ago
[-]
Git solved this by pushing the syncing burden onto people. It’s no surprise, merge conflicts are famously tricky and always cause headaches. But for apps, syncing really ought to be handled by the machine.
reply
marginalia_nu
2 days ago
[-]
If you want local-first, conflict resolution is something you're unlikely to be able to avoid. The other option is to say "whoops" and arbitrarily throw away a change when there is a conflict due to a spotty wifi or some such.

Fortunately, a lot of what chafes with git are UX issues more than anything else. Its abstractions are leaky, and its default settings are outright bad. It's very much a tool built by and for kernel developers with all that entails.

The principle itself has a lot of redeemable qualities, and could be applied to other similar syncing problems without most of the sharp edges that come with the particular implementation seen in git.

reply
recursivedoubts
2 days ago
[-]
“solved”

imagine asking a normie to deal with a merge conflict

reply
marginalia_nu
2 days ago
[-]
That's an UX issue with git, not really what's being discussed.
reply
recursivedoubts
2 days ago
[-]
I don’t agree at all. Merging conflicts correctly is often incredibly hard and requires judgement and understanding of semantics and ramifications that are difficult for even skilled developers.
reply
balamatom
2 days ago
[-]
Who did what when why? Everyone has understanding of those semantics.

It's literally entirely on a computer. If that somehow makes it harder to answer basic human questions about the complex things we're using it for, well that means we've got a problem folks.

The problem is with comprehensibility, and it's entrenched (because the only way for a piece of software to outlive its 50 incompatible analogs and reach mass recognition is to become entrenched; not to represent its domain perfectly)

The issue lies in how the tools that we've currently converged on (e.g. Git) represent the semantics of our activity: what information is retained at what granularity determines what workflows are required of the user; and thence what operations the user comes to expect to be "easy" or "hard", "complex" or "simple". (Every interactive program is a teaching aid of itself, like how when you grok a system you can whip together a poor copy of it in a couple hours out of shit and sticks)

Consider Git's second cousin the CRDT, where "merges" are just a few tokens long, so they happen automatically all the time with good results. Helped in application context by how a "shared editor" interface is considerably more interactive than the "manually versioned folder" approach of Git. There's shared backspace.

Git was designed for emailing patches over dialup, there it obviously pays to be precise; and it's also awesome at enabling endless bikeshedding on projects far less essential than the kernel, thanks to the proprietary extension that are Pull Requests.

Probably nobody has any real incentive to pull off anything better, if the value proposition of the existing solution starts with "it has come to be expected". But it's not right to say it's inherently hard, some of us have just become used to making it needlessly hard on ourselves, and that's whose breakfast the bots are now eating (shoo, bots! scram)

reply
recursivedoubts
2 days ago
[-]
also, maybe it's a really hard problem
reply
balamatom
1 day ago
[-]
Given enough people working on a solution, every problem is hard. It's all in how you scope it.
reply
recursivedoubts
1 day ago
[-]
man
reply
balamatom
1 day ago
[-]
yes bot?
reply
account42
1 day ago
[-]
So in other words it requires the skills you need to make an edit in the first place.
reply
threatofrain
2 days ago
[-]
Gits approach is to make people solve it with organic intelligence.
reply
JustExAWS
2 days ago
[-]
Git hasn’t solved it in a way that any normal person would want to deal with.
reply
paldepind2
2 days ago
[-]
> Local-first was the first kind of app. Way up into the 2000s, you'd use your local excel/word/etc, and the sync mechanism was calling your file annual_accounts_final_v3_amend_v5_final(3).xls

To be precise, these apps where not local-_first_, they where local-_only_. Local-first implies that the app first and foremost works locally, but also that it, secondly, is capable of working online and non-locally (usually with some syncing mechanism).

reply
account42
1 day ago
[-]
That sync mechanism was called save to floppy and hand it to whoever you want to share your changes with.
reply
magicalhippo
2 days ago
[-]
> There's no easy way to keep sync, either. Look at CAP theorem.

Sure there is, you just gotta exploit the multiverse[1]. Keep all the changes in their own branch aka timeline, and when there's some perceived conflict you just say "well in the timeline I'm from, the meeting was moved to 4pm".

[1]: https://www.reddit.com/r/marvelstudios/comments/upgsuk/expla...

reply
neets
2 days ago
[-]
Those diffs increase software complexity subjectively a magnitude more
reply
bryanlarsen
2 days ago
[-]
There's no perfect general solution, but the number of conflicts generated by small teams collaborating in an an environment where the internet almost always works is going to be miniscule. No need to let the perfect be the enemy of the good.
reply
tcoff91
2 days ago
[-]
CRDTs are a pretty good experience for many data types when it comes to collaborative editing.
reply
madcaptenor
2 days ago
[-]
This is why I just use timestamps if I am doing local-first. (yyyy-mm-dd date format, for sortability)
reply
card_zero
2 days ago
[-]
> nowadays you want to have information from other computers.

Do I? What sort of information ...

> shared calendars

OK yes that would be a valid use, I can imagine some stressed executive with no signal in a tunnel wanting to change some planned event, but also to have the change superceded by an edit somebody else makes a few minutes later.

> the weather

But I don't usually edit the weather forecast.

> a social media entry

So ... OK ... because it's important that my selfie taken in a wilderness gets the timestamp of when I offline-pretend-posted it, instead of when I'm actually online and can see replies? Why is that? Or is the idea that I should reply to people offline while pretending that they can see, and then much later when my comments actually arrive they're backdated as if they'd been there all along?

reply
bluGill
2 days ago
[-]
Nearly everything I do is on a shared computer.

At work: I write code, which is in version control. I write design documents (that nobody reads), and put them on a shared computer. I write presentations (you would better off sleeping through them...) and put them on a share computer. Often the above are edited by others.

Even at home, my grocery list is shared with my wife. I look up recipes online from a shared computer. My music (that I ripped from CDs) is shared with everyone else in the house. When I play a game I wish my saved games were shared with other game systems (I haven't had time since I had kids, more than 10 years ago). When I take notes about my kid's music lessons they are shared with my wife and kids...

reply
raincole
2 days ago
[-]
> offline-pretend-posted

It's a far, far more complicated mental model than simply posting it. It'd be a huge barrier for normal users (even tech-savvy users, I'd say). People want to post it online and that's it. No one wants an app what requires its users to be aware of syncing state constantly unless they really have no choice. We pretend we can step on gas instead of mixing the gas with air and ignite it with a spark plug until we need to change the damn plug.

reply
dundarious
2 days ago
[-]
Email clients had the outbox that was local only and then you pushed to send them all. Hiding the outbox is why some of these things seem fiddly to use, despite being conceptually very simple. This model would seem to work very well at least for non-collaborative changes like IG posts.
reply
tetralobita
2 days ago
[-]
in our company some team is trying to solve offline selling items dropping from stock, when become offline it syncs, there are price and stock changes to be sync.
reply
Sharlin
2 days ago
[-]
Well, the first kind of PC app, anyway. For decades before that, programs were run on time-sharing mainframes via remote terminals.
reply
elzbardico
2 days ago
[-]
I remember the concept of departmental server/database/apps from that age. Lots of big companies still had mainframe applications running on their HQ, that had a consolidated view of their enterprise, and a lot of data transfers and batch jobs running off hours to synchronize everything.

It was the first practical manner to downsize mainframe applications.

reply
jakelazaroff
2 days ago
[-]
I wouldn't call local Excel/Word/etc "local-first". The "-first" part implies that the network is used for secondary functionality, but the apps you're talking about are fully offline. IMO local-first definitionally requires some form of networked multi-device collaboration or sync.
reply
robenkleene
2 days ago
[-]
Excel and Word both support real-time collaboration and sync, e.g., https://support.microsoft.com/en-us/office/collaborate-on-wo...
reply
jakelazaroff
2 days ago
[-]
Sure, now. I'm responding to this from OP:

> Local-first was the first kind of app. Way up into the 2000s, you'd use your local excel/word/etc, and the sync mechanism was calling your file annual_accounts_final_v3_amend_v5_final(3).xls

reply
fragmede
1 day ago
[-]
Google docs is the gold standard here.
reply
jakelazaroff
1 day ago
[-]
Google Docs is not "local" in any meaningful sense of the word.
reply
fragmede
1 day ago
[-]
it's not!
reply
echelon
2 days ago
[-]
Local-first apps aren't common because teams building web apps are primarily targeting online, database backed, SaaS.

With the exception of messenger clients, Desktop apps are mostly "local-first" from day one.

At the time you're beginning to think about desktop behavior, it's also worth considering whether you should just build native.

reply
guywithahat
2 days ago
[-]
I also suspect it's more portable. You build one site with one API, and then you just interact with that api across all the devices you support. If you write it locally it has to get rewritten for each platform
reply
ChadNauseam
1 day ago
[-]
Websites can be local in the sense of local-first, especially with modern techn like progressive web apps (which allow you to make your webapp work offline).
reply
MangoToupe
2 days ago
[-]
> There's no easy way to keep sync, either.

There's no easy way to merge changes, but if you design around merging, then syncing becomes much less difficult to solve.

reply
bluGill
2 days ago
[-]
We have gone back and forth several times in history.

It started with single computers, but they were so expensive nobody had them except labs. You wrote the program with your data, often toggling it in with switches.

From there we went to batch processing, then shared computers, then added a networking, with file sharing and RPC. Then the personal computer came and it was back to toggling your own programs, but soon we were running local apps, and now our computers are again mostly "smart terminals" (as opposed to dumb terminals), and the data is on shared computers again.

Sometimes we take data off the shared computer, but there is no perfect solution so distributed computing and since networks are mostly reliable nobody wants that anyway. What we do want is control of our data and that we don't get (mostly)

reply
encom
1 day ago
[-]
>calling your file annual_accounts_final_v3_amend_v5_final(3).xls

My last job was balls deep in the Google ecosystem, and all the collaborative editing, syncing and versioning and whatnot did nothing to stop that practice.

On a related note, I used to hate Gmail (I still do, but I used to too), until I had to use Outlook and all the other MS crap at my new job. Jesus christ. WTF even is Teams? Rhetorical question; I don't care.

reply
dustingetz
2 days ago
[-]
also, AI
reply
kelvinjps
2 days ago
[-]
I think just doing files and then syncing the files with another solution it's okay.

For example my local apps: Synthing syncs the files between my computer and phone for Note taking(just markdown and org files) obsidian on my phone, emacs/vim on my PC. Todo and reminders: org mode in emacs (desktop) orgzly on mobile. Password manager: KeePassxc desktop, keepassdx mobile. Calendar: just sync the ics file. Photos: just sync files use the default app. I can continue with this with every app, I don't know why people overcomplicate things

reply
internet_points
1 day ago
[-]
This is more or less what I do too. I sync my whole /storage/emulated/0 on Android, which gives me Photos and Downloads etc. to my laptop. I also put Syncthing on an always-on tiny computer at home so if either laptop or phone is off, I can still sync from the other one (so that little home computer is my "cloud", didn't require anything more than plugging in eth and running syncthing).

Syncthing uses "disovery servers" so no need for public IP's or wireguard or anything like that (you can set up your own if you don't trust syncthing https://docs.syncthing.net/users/stdiscosrv.html but they're only used for that discovery part, which is one of the main hurdles when you start thinking about peer-to-peer systems).

(However, I use Fastmail's calendar, for handling invites and since it keeps my subscriptions up-to-date. Considering the move from bitwarden to keepassxc.)

reply
kevincox
1 day ago
[-]
The problem is that this has effectively no conflict resolution. With naive file-based sync the best Syncthing can do is save both files. Then you are stuck merging yourself. Maybe if all of your devices are reliably online and you only use one at a time. But eventually you get a delayed sync resulting in a state fork and you end up with a mess.

For photos and some other type of data this likely isn't much of a problem. But for more complex data it definitely is.

reply
kelvinjps
1 day ago
[-]
You can tell synching how to manage conflicts or the application can do it, for example KeePass has good conflict resolution.
reply
R_Spaghetti
2 days ago
[-]
This is the reason why I'm reading this site - this is such a brilliant idea, super simple and without any vendor lockin at all.
reply
fruitworks
1 day ago
[-]
What do you use to integrate ICS into your phone's calendar. This is what i'm suck on. Apps like icsx5 are read-only for ics
reply
aborsy
2 days ago
[-]
I second Syncthing, it’s amazing!
reply
rpdillon
2 days ago
[-]
Been doing this for years. It's so utterly amazing and effective.
reply
robmccoll
2 days ago
[-]
I wonder about the categories of apps for which offline first with potentially infinitely delayed sync provides a better experience and how large those really are.

It seems like most of those are apps where I'm creating or working on something by myself and then sharing it later. The online part is almost the nice-to-have. A lot of other apps are either near-real-time-to-real-time communication where I want sending to succeed or fail pretty much immediately and queueing a message for hours and delivering it later only creates confusion. Or the app is mostly for consuming and interacting with content from elsewhere (be that an endless stream of content a la most "social media", news, video, etc. or be it content like banking apps and things) and I really mostly care about the latest information if the information is really that important at all. The cases in those apps where I interact, I also want immediate confirmation of success or failure because it's really important or not important at all.

What are the cases where offline-first is really essential? Maybe things that update, but referencing older material can be really useful or important (which does get back to messaging and email in particular, but other than something that's designed to be async like email, queueing actions when offline is still just nice-to-have in the best cases).

Otherwise the utility of CRDTs, OT, et al. is mostly collaborative editing tools that still need to be mostly online for the best experience.

reply
blacklion
2 days ago
[-]
> It seems like most of those are apps where I'm creating or working on something by myself and then sharing it later.

It is interesting. I've thought about things I don in non-messaging apps (which are online0first for obvious reasons), and all of them create something, which can be EXPORTED to on-line presence, but doesn't require connected app.

Code? I write it locally and use separate app to share it: git. Yes, code is collaborative creation (I'm working in the team), but it is still separate tool and I like it, as I control what I'll publish for my colleagues.

Photos? Of course I want to share result, but I'm working on RAW files with non-destructive editing and I want to share final bitmap (as JPEG) and not RAW data and editing steps.

Same with music, if I create one (I doesn't).

Texts must be polished in solitude and presented as final result (maybe, as typographically set one, as PDF).

All my "heavy" applications are and should be offline-first!

reply
skydhash
2 days ago
[-]
Or offline only. I can use an online drive and share a link to the result. Email is there for easy commenting. Or chat for quick collaboration.
reply
crote
2 days ago
[-]
Yeah, I feel like for most applications the online part acts more like a backup than an interactive sync. How often do you really work on the same file on different devices? And if you do so, how often do you want to continue working on an old version rather than first forcing a sync of the latest revision? After all, without a sync you can't continue your work but only edit completely unrelated parts...

I think most real-world applications fall under either "has to be done online", or "if there are conflicts, keep both files and let the user figure it out". Trying to automatically merge two independent edits can quickly turn into a massive mess, and I really don't want apps to do that automagically for me without giving me git-like tooling to fix the inevitable nightmare.

reply
wngr
2 days ago
[-]
Apart from small software studios with non-SaaS business models, there is just no viable local-first (offline-first with infinitely delayed sync) application category, in the consumer space. Consider military applications with heterogenous and constraint networks, where assets might need to coordinate p2p under adversarial jamming. There might be applications, where AP (choosing Availability and Partion Tolerance, cf CAP theorem) makes sense.
reply
em-bee
19 hours ago
[-]
you are missing one key aspect: high latency. i am in a rural area with bad internet access. especially bad latency. every request to hackernews for example takes seconds. even if it doesn't make sense for hackernews to be offline. updating new messages, posting replies, upvotes, downvotes. etc, would all work better if they happened in the background, so that i don't have to wait for them.

that's why local first also makes sense for server bound applications. it's not offline first, but by moving online activities into the background, all my interaction with the site would be snappy and smooth, and i could read and comment and move on to the next action without waiting for the browser to reload the page

reply
andsoitis
2 days ago
[-]
Lotus Notes and later Groove Networks (both brought to us courtesy Ray Ozzie) both provided a platform to create apps with data synchronizing as a first-class citizen.

The technology behind Groove now powers OneDrive and Microsoft 365.

Notes: https://en.wikipedia.org/wiki/HCL_Notes

Groove: https://en.wikipedia.org/wiki/Groove_Networks

reply
GMoromisato
1 day ago
[-]
I was a developer on both Lotus Notes and Groove, and I think both died for different reasons.

Notes was replaced by the web (and Outlook). Notes was, IMHO, ahead of its time. It was one of the first client-server systems where the client was treated as a full peer in the network. You could work offline for as long as you wanted and then the client would automatically synchronize with the server. It had military-grade encryption, a built-in development environment, integrated full-text search, and programmable agents for scheduled data processing, alerting, etc.

But the web was much cheaper and benefited from network scaling laws (the more sites, the more value it accrued). The web is a perfect example of "worse is better". The complexity of Lotus Notes kept the price high (both financially and in terms of time-commitment).

For Groove we doubled-down on peer-to-peer synchronization. Notes had synchronization at the "note" level. If two people edited the same note while offline then there were conflicts that had to be resolved manually. In contrast, Groove had custom synchronization systems for different kinds of data. For text files we could synchronize at the character level. Other data structures could synchronize at whatever level was appropriate.

We used a change-log merge system, not too different from block chain.

The problem with Groove was that the advantages (offline editing) never compensated for the disadvantages (lower performance, lack of visibility in sync state, and difficulty in onboarding).

The use cases that really needed Groove were rare, and not enough to build a viable business.

reply
gorfian_robot
2 days ago
[-]
Groove was cool for a hot minute before MS got their grubby hands on it
reply
flymasterv
1 day ago
[-]
I had my first internship at Groove. I did a ton of the I18n work, and something with the chess demo.
reply
sylens
2 days ago
[-]
There's no money in making a local first app. Businesses want your data, they want you to be dependent on them, and they want to be able to monetize your behavior and attention
reply
zsoltkacsandi
2 days ago
[-]
There is money in local first apps, businesses are just greedy.
reply
observationist
2 days ago
[-]
They want everything to be cloud applications and services that run on terminals you rent, managed through portals you have subscribe to. They want to sell your attention and screenshare on top of making you pay for the privilege of being nickel and dimed, and they want to surveil every aspect of your life, free from liability or accountability for what they do with the data, or who they sell it to.
reply
JustExAWS
2 days ago
[-]
Today you can still buy Microsoft Office as a one time purchase.
reply
Loughla
1 day ago
[-]
But it still prompts you to sign into your account and "finish setting up Windows" by using onedrive and whatever other cloud bullshit.

And it will continue to do that forever.

Source: when we lose power and the machine restarts unexpectedly, I die a little inside when it goes to the windows set up screen just because I haven't signed in to a Microsoft account.

reply
JustExAWS
1 day ago
[-]
Do you care about not having a subscription? If so that solves that problem. But in 2025, I have three devices where I for whatever reason may want to have access to my document - my phone, tablet and personal MacBook. That’s not even counting in a pinch, editing on the web if I’m at another trusted computer or on my work computer.
reply
1123581321
1 day ago
[-]
There is increasing awareness of CRDTs but it’s still unclear to many developers how to integrate them with sql. It seems like a CRDT requires developing a new app, or at least a new data model, rather than benefitting from their existing knowledge.

This should change over time; the libraries and documentation keep getting better.

I recommend Automerge 3.0’s documentation as an introduction to the concepts and how to bridge with traditional data. https://automerge.org/docs/hello/

reply
to11mtm
1 day ago
[-]
I think the biggest challenge is brownfield.

In greenfield it's easier (so long as you have the grit to build the base pattern); Events get put into tables (TBH this can be optional, but if you're new to it just do it b/c it will make debugging easier,) and if you want a friendly clean 'view' state you have some form of map-reduce replication running on the event set.

And yeah you can even remix that idea to work on 'existing' tables but it's gonna be more work and you'll probably want some form of optimistic concurrency check...

reply
ciju
1 day ago
[-]
We have been building a local-first browser app (PWA) for personal finance, based on double-entry accounting. https://finbodhi.com/

It's not always offline. We do use online services like firebase for auth and subscription, and some service to fetch commodity prices etc, but rest of the data is stored in browser storage (sqlite) and backed to local disk and dropbox. We also syncs data across devices, always encrypting data in transit. We use Evolu for sync.

For most personal applications, this model seems to fit. If you figure out sync, the development model is actually nicer than web apps. There is no need for dealing with calls over network for each action. It does make some things more difficult. Debugging, migrations etc.

reply
Aldipower
1 day ago
[-]
If you go on step further with double-entry accounting you will need to book the "stack" to the "journal" as required by the tax office. I do not see how this should work with you local-first approach, because you cannot really guarantee the correct order. If something is in the "journal" it has to be "canceled", if the action was wrong beforehand. This is an additional booking step and not a "revert" of the previous action. No conflict resolution will help you here. Especially the area of double-entry accounting, is something where I never would choose such an local-first/sync approach. Local-first in the classic desktop app single source of thruth sense, sure, but not this way.
reply
candiddevmike
2 days ago
[-]
IMO, offline read-only is enough of a compromise. How many times are users truly offline, AND want to be able to edit at that time (and deal with the potential conflicts, which by the nature of the operation, won't have a good UX)?
reply
ninalanyon
2 days ago
[-]
In my case quite a lot of the time. Which is why Google Keep as a web app is useless but the Android version is very useful.

Also I don't understand why so many people on HN are concentrating on the simultaneous editing scenario, for most ordinary people this is actually quite a rare event especially in their private lives. Google Keep on Android seems to work pretty well in his context my family uses it to share shopping lists and other notes very successfully even hough several of us are online only intermittently.

reply
arrowtrench
2 days ago
[-]
Offline read-only would already be a great feature for me, knowing that my data is always close to machine. But, my guess is that this isn't enough of a killer feature for most people to care.
reply
ComputerGuru
2 days ago
[-]
Any time looking at the RO data directly triggers a need to edit it. Eg, you notice a typo or a mistake, you get the idea for a follow up or clarification, etc.
reply
luplex
2 days ago
[-]
Apps that let you manipulate artifacts are often local. MS Office, Photoshop, Blender, CAD tools. But it turns out that actually, humans rarely work or live alone, and you can't facilitate communication through a local-first app.
reply
Sharlin
2 days ago
[-]
Yet somehow people managed to do that before everything went to the cloud.
reply
ninalanyon
2 days ago
[-]
Surely an email client like Thunderbird is local first? Even the email apps on mobile devices work offline and send and receive when the get a connection.
reply
rkapsoro
2 days ago
[-]
The use case I always think of is the developer experience for regular hobbyist and workaday devs writing their apps with local-first sync.

Apple comes close with CloudKit, in that it takes the backend service and makes it generic, basically making it an OS platform API, backed by Apple's own cloud. Basically cloud and app decoupled. But, the fundamental issue remains, in that it's proprietary and only available on Apple devices.

An open source Firebase/CloudKit-like storage API that requires no cloud service, works by p2p sync, with awesome DX that is friendly to regular developers, would be the holy grail for this one.

Dealing with eventually consistent data models is not so unusual these days, even for devs working on traditional cloud SAAS systems, since clouds are distributed systems themselves.

I would be very happy to see such a thing built on top of Iroh (a p2p network layer, with all the NAT hole punching, tunnelling and addressing solved for you) for example, with great mobile-first support. https://github.com/n0-computer/iroh

reply
lifty
2 days ago
[-]
Off topic, but why would I choose iroh instead of libp2p which seems to have much better language coverage compared to iroh?
reply
marcusestes
1 day ago
[-]
Your holy grail is probably Fireproof: https://fireproof.storage/

Open source, uses object storage without a web server dependency, syncs, and has great DX.

reply
gritzko
2 days ago
[-]
The author’s journey is probably just starting. I had this exact mindset about 10 years ago. Long story short: distributed systems are hard. A linear log of changes is an absolute lie, but it is a lie easy to believe in.
reply
aleph_minus_one
2 days ago
[-]
> distributed systems are hard.

While this may be true, the central issue is a different one: most users and/or developers are not very privacy-conscious, so they don't consider it to be worth the effort to solve the problems that go in hand with such distributed systems.

reply
Romario77
2 days ago
[-]
In the case the author presents there is one solution - last write wins. Which while simple to implement is not really satisfactory for many use cases.

Someone could write a whole slew of changes locally and someone else can eliminate all their work because they have an outdated copy locally and they made a simple update overriding the previous person's changes.

That's why git has merges and conflicts - it doesn't want to lose changes and it can't automatically figure out what should stay in case of a conflict.

reply
fellowniusmonk
2 days ago
[-]
I think anyone going down the path of distributed editing would be well served to start with Loro and an instance of chatgpt to answer questions.
reply
MangoToupe
2 days ago
[-]
> A linear log of changes is an absolute lie

Compared to what?

reply
ainiriand
2 days ago
[-]
A graph of changes in a distributed system, I assume.
reply
MangoToupe
2 days ago
[-]
...how is that any less of a lie? You can project a linear log into a graph trivially.
reply
Jaxan
2 days ago
[-]
But not the other way around!
reply
MangoToupe
1 day ago
[-]
I don't understand what you appear to view as obvious. Can you explain?

All you need is a way to resolve conflicts and you can serialize any distributed set of actions to a log.

reply
immibis
1 day ago
[-]
All you need is a time machine and then you can prevent the holocaust. All you need is a philosopher's stone and then immortality is within your grasp. All you need is an infinite source of energy and then the finiteness of the universe need not concern you. All you need is a magic genie granting infinite wishes and you can end world hunger.

If all you need to solve a problem is something impossible, then you haven't solved it.

reply
goodpoint
1 day ago
[-]
> distributed systems are hard

Everything is hard if you don't know how to do it.

reply
dboreham
2 days ago
[-]
While the article is about eventual consistency, the real reason local web apps haven't become popular is that current browsers don't support them. Any code running in the browser has to come from some url. The server at that url is assumed to control everything. Any data stored locally might vanish any time, and can't be accessed by code loaded from any other url. So while you might like the idea of "just running code on my computer via the browser to do something useful" what you're actually doing is providing a way for whoever controls the web server to do whatever they want on your computer (modulo what Google wants). Since any idea of an independent app running on your computer is a fiction, there's no need to even attempt to solve the eventual consistency problems. Just have the server that's in total control do your consistency.
reply
sehugg
1 day ago
[-]
It doesn't help that app developers don't have a clear idea of how long persistent data will be kept around, as eviction policies are based on complex and often-changing rules/heuristics, despite the availability of navigator.storage APIs.
reply
poisonborz
2 days ago
[-]
The goals have shifted. What is "local" nowadays? ~Everybody uses multiple devices, and things are expected to be in sync. What users now (should) need is easy self-hosting apps and centralised storage, with clients that can work/cache offline locally. A good example is Bitwarden.
reply
carlosjobim
2 days ago
[-]
What problem are you solving with that? I can not think of a single workflow that would be advantaged by self-hosting apps vs using local apps with sync.
reply
arrowtrench
2 days ago
[-]
Maybe the OP is saying that the advantage of "self-hosted [traditional/non-local-first] apps" over "local-first apps synced to someone else's server" is that with the former you get to control the server. Of course, there's nothing stopping you from using a local-first app that syncs to your own server, though.
reply
poisonborz
2 days ago
[-]
I meant both. What do you sync with if no central server (main client)? Anything else is unreliable, on/off all the time. And some simpler workflows don't need native apps.
reply
carlosjobim
2 days ago
[-]
Both what? What problem are you solving?
reply
poisonborz
2 days ago
[-]
I don't really get "local apps with sync". Sync to what, each other? In most cases there needs to be an always-on central authority (where backup also happens). My point is that "some device sitting alone in a corner with software and data stored on it" model is outdated (this is the "problem", I hope the "why" is obvious). But "local" should live on, just in the form of users owning most of their infrastructure.
reply
carlosjobim
2 days ago
[-]
Sync through iCloud or through the app manufacturers servers. Most apps I use are real apps that live on my local device. Then some of them sync through iCloud if I want to use sync. This is something I as the user can enable or disable in the app, and it costs me no subscription fee.

I think that this is much superior to using fake apps that live on some web server. It comes with great drawbacks even when it is on the web server of the app manufacturer. Ten times more pain if I have to host it myself.

reply
poisonborz
2 days ago
[-]
Meaning total trust with your cloud provider / app manufacturer. That is your choice, I wouldn't do it. Selfhosting is kind of a pain still, but gets easier every year (and for simple file sync that you mention, it's already rather easy).
reply
carlosjobim
1 day ago
[-]
If syncing is through iCloud, you'd only have to trust Apple and nobody else, if I understand things right. And yes, I trust Apple a thousand times than I'd trust a web hosting company or all the open source projects involved in self hosting.

Self hosting is an unsurpassable mountain for 99% of people who need to use computers. The risk to security and data loss is much higher than from syncing your apps through iCloud or through the app manufacturer.

reply
socalgal2
1 day ago
[-]
The article is about syncing an app that runs locally with other instances of the same app on other machines. Ok. What does that have to do with the title?

> Why haven't local-first apps become popular? (

I think that the title wants to be is "Why haven't more developers written local-first apps for their syncing solution?" because the the answer to title itself is because users don't care. Users (most) want apps that sync between their phone, their tablet, and their laptop/desktop. Up to some point, they don't care how it happens, just that it does and that they don't have to give it a second thought.

reply
vahid4m
2 days ago
[-]
I’ve been working on https://with.audio which is a local app.

I think there should be way more local apps with sync capabilities. I haven’t finished the sync feature in WithAudio and you have very nice ideas there. Specially the eventual consistency. That’s what will work.

But I must say for sure the most difficult part of local apps is debugging customer issues. For someone who is used to logs and traces and metrics and most of users using a one version of the code in the backend, debugging an issue in customers computer on an old version without much insight (without destroying all your premises of privacy) is very challenging.

reply
em-bee
2 days ago
[-]
i recently discussed the development of a new application for a customer. one of the important aspects was that the app would handle data that would have to stay local. despite that my thinking went like this:

if i develop it as a webapplication, then i can do all the work on my computer, test with various browsers and deliver a working result. if the customer has issues, i can likely reproduce them on my machine.

but if it was a desktop application, my feeling was that testing would be a nightmare. i would have to set up a machine like my client, or worse visit the client and work with them directly, because my own machine is just to different from what the client uses. also not to forget distribution and updates

in short: web -> easy. desktop -> difficult.

reply
jlarocco
2 days ago
[-]
As a user the simple answer is that I'll always use a regular application instead of a "local-first" web app when I have the ability.

"Local-first apps" are the worst of everything - crappy, dumbed down web UI; phoning home, telemetry, and other privacy violations; forced upgrades; closed source, etc.

At work, I don't have a choice, so it's Google Docs or Office 365 or whatever. And in that context it actually makes sense to have data stored on some server somewhere because it's not really my data but the company's. But at home I'll always choose the strictly offline application and share my data and files some other way.

reply
knubie
2 days ago
[-]
> crappy, dumbed down web UI; phoning home, telemetry, and other privacy violations; forced upgrades; closed source

What does any of this have to do with local first? Most online only apps have this stuff too.

reply
em-bee
2 days ago
[-]
"Local-first apps" are the worst of everything - crappy, dumbed down web UI along with no privacy.

which apps are you talking about here? that description doesn't make any sense to me.

reply
jlarocco
2 days ago
[-]
Aren't "local-first apps" in this context the ones where you vist "example.com", and it caches a bunch of HTML and Javascript and saves data locally using the "local storage" APIs? And then periodically makes requests back to "example.com" to check for updates, sync data, etc.?
reply
shortercode
2 days ago
[-]
Kinda yes kinda no? Most PWAs have some idea of offline support but it tends to be an afterthought. The argument of local first is that you design the app to work against a local database ( normally stashed in indexeddb or OPFS ) meaning that you don’t have to wait for data to load or network requests to complete. Your backend is then just a dumb storage engine which accepts changes and sends push messages to the client to indicate things have changed.

The only “big” local first app I’m aware of is Linear.

reply
miladyincontrol
2 days ago
[-]
While I hesitate to agree with that definition of local-first, I agree with your notion thats what the author by large is talking about and unfortunately many such an web-app have the worst of both worlds.

Most uses of the term local-first I see regularly mean to say, "an app that doesnt require 3rd party servers outside of your control to function", within some level of reason. Sometimes agnostic to how it's data is synced, sometimes meaning its an exercise left to the user entirely, sometimes meaning fully self-hosted and hardly local to the device being used to access the app.

reply
sumuyuda
2 days ago
[-]
Local first isn’t limited to web apps. It’s a style of application development in which you locally store and operate on the data rather than fetching it from a cloud backend every time. For native apps that means a local database or individual files on the file system.

> phoning home, telemetry, and other privacy violations; forced upgrades; closed source, etc.

This describes proprietary software developed by capitalist companies. This has nothing to do with local first.

reply
b_e_n_t_o_n
1 day ago
[-]
I think most native apps (at least that I use) are local first. I can't think of a single one that requires a consistent internet connection to function, that otherwise could run locally. The whole "local-first" trend that has cropped up lately was at least partially influenced by web apps like Linear and Figma, and if you look at the libraries and writing being produced on this topic it's almost exclusively Javascript and about making crud web apps local-first.
reply
afavour
2 days ago
[-]
The primary reason IMO is simply that users don’t care that much. If they did, offline functionality would be a selling point. But it isn’t, and it isn’t immediately simple to implement, so it isn’t worth it.
reply
radarsat1
2 days ago
[-]
I spent some time learning about PouchDB a little while ago, it seems to be a nice solution at least for a No SQL approach. Although I still need some more practical experience to understand the security model, because it feels weird to just sync a database that was updated on some web page, really you want to ensure there is validation and treat it as an untrusted source to some extent. Still not sure of the best way to deal with that without implementing most of the application as server side validation scripts, or maybe that is just the way to do it.
reply
DotaFan
1 day ago
[-]
Been working on offline apps (Android) for the past couple of years. Syncing is hard. Some takes:

- There was never a budget for CRDT's

- Conflicts were never our primary focus, instead our focus were business problems

- Event-sourced arch + offline first app was quite a fun. Basically I've recorded all the events in sequence in SQLite, queued them up when there was a connection and had network retry policy in place + reverting an event locally if retry failed n-th amount of times.

reply
otikik
1 day ago
[-]
The article focuses on the technical aspects, but I think the economical ones play a bigger role here.

They are misaligned here. What's good for the app user is not necessarily good for the people making the app.

I know several companies whole second or third round of investment was very much conditional on them making the pivot from "on premise" to "SASS". On paper, an on-premise app that I can manage on my own infrastructure seems more interesting for me as a consumer. But for the investors of those companies I just mentioned, having a SASS offering is seen as "having a bigger hold on your customers".

reply
trashb
2 days ago
[-]
This is not a systems issue this is a UI issue, the state status needs to be communicated to the user. It is ridiculous to retrofit locally first to a app that pretends to the user that it is always online.

Social media could allow you to make posts as a "draft" and automatically send it when you have a connection think of a email outbox. Or even give you a notification if after syncing with the master and it turns out the comment you replied to changed.

If you look at the web a lot of older (fundamental)protocols and applications have local-first build in. Often the design requirements required communication over a connection that is not as available. A few I can think of of HTTP, EMAIL, FINGER, GOPHER, FIDONET, NEWGROUP and more. A shared state is managed for a lot of different domains (code, gaming, chat, message boards) so I feel like it is already quite a solved problem, but there is no one size fits all solution. IMHO that's the sane thing to do for a networked application as you can never grantee that either party is available, and you want to still serve your user the best you can.

It can also have huge benefits for the data provider, especially at scale. - You can lower your bandwidth required drastically. - You can keep a consistent state for your user. - If you are evil you can send a lot more telemetry data or valuable (harvested) data instead.

the LWW is only really useful if you have a value where you can discard the previous updates for the latest, it's a bit like using UDP instead of TCP for communicating your position in a multiplayer game. Or if you don't mind forcing your user to resubmit.

reply
phkahler
1 day ago
[-]
We are both offline with identical copies of the same document. I modify paragraph 3 of 5. You replace the same paragraph with a completely different one.

No clocks or CRDTs are going to automatically fix this. The right solution could be my edits, or your new replacement, or some merge, but there isn't any automatic way to do this is there?

reply
willtemperley
1 day ago
[-]
Finding high quality libraries that are suitable for desktop / mobile distribution is currently very difficult.

I'd like to use DuckDB to remotely query Parquet. Unfortunately I can't use it in an iOS app because I'd have to work around the dynamic loading of extensions and bundle OpenSSL to support Httpfs in a static build. I'm building my own library instead.

That's just one example. I'd like to use certain LGPL licensed libraries but I can't because the legal risks are too high, as is the cognitive load in dealing with license uncertainty.

Neither of these things are an issue for web-apps.

I think the economics for local-first software could really work out for businesses that use significant compute. Cloud-compute is not cheap and comes with significant financial tail-risk.

reply
HeyLaughingBoy
1 day ago
[-]
How will they pay for it?

Subscription-based software made a lot of products affordable. $49.95 a month indefinitely is a lot easier for many small businesses to sustain than a lump sum of $1,500 for a software package.

Considering how undercapitalized most small businesses are, it's not hard to see why many of them may prefer to rent the software instead of buying it.

reply
api
2 days ago
[-]
I feel like a broken record here: it is not an engineering problem.

Local-first and decentralized apps haven't become popular because SaaS has a vastly superior economic model, and more money means more to be invested in both polish (UI/UX) and marketing.

All the technical challenges of decentralized or local-first apps are solvable. They are no harder than the technical challenges of doing cloud at scale. If there was money in it, those problems would be solved at least as well.

Cloud SaaS is both unbreakable DRM (you don't even give the user the code, sometimes not even their data) and an impossible to evade subscription model. That's why it's the dominant model for software delivery, at least 90% of the time. The billing system is the tail that wags the dog.

There are some types of apps that have intrinsic benefits to being in the cloud, but they're the minority. These are apps that require huge data sets, large amounts of burstable compute, or that integrate tightly with real world services to the point that they're really just front-ends for something IRL. Even for these, it would be possible to have only certain parts of them live in the cloud.

reply
Leftium
2 days ago
[-]
Epicenter[1] is attempting to go against the grain here[2]. I'm not sure if it will require one of these SaaS to become a client for this business model to work...

> The long-term direction is for Epicenter to become a foundational data framework for building apps where users truly own and control their own data. In other words, the Epicenter framework becomes an SSO for AI applications. Users use Epicenter to plug into any service, and they'll own their data, choose their models, and replace siloed apps with interoperable alternatives. Developers will use our framework to build highly customized experiences for said users. To pursue that goal seriously, we also need a sustainable model that honors our commitment to open source.

> ...The entire Epicenter project will be available under a copyleft license, making it completely free for anyone building open-source software. On the other hand, if a company or individual wants to incorporate the framework into their closed-source product, they can purchase a license to build with Epicenter without needing to open source their changes and abide by the copyleft terms. This is the model used by Cal.com, dub.sh, MongoDB, etc.

[1]: https://hw.leftium.com/#/item/44942731

[2]: https://github.com/epicenter-md/epicenter/issues/792

reply
DevItMan
1 day ago
[-]
Two things make local‑first frameworks break out: a boring hosted “default” and one killer app. Even if the vision is user‑owned data, ship a managed option with sane defaults so teams can try it in 5 minutes, then let them flip a switch to self‑host or bring their own storage/keys later. Pair that with a flagship app that proves the value (e.g., a shared notes/CRM/mail client where offline + conflict‑free collab is obviously better). Frameworks without a hero use case tend to stall because devs can’t justify the integration time.

On the business model, dual license works if you de‑risk the integration: stable plugin ABI, permissive SDKs, and paid “closed‑source embedding” tier with SLAs and on‑prem support. Where I’ve seen revenue actually land: (1) paid sync/relay with zero data retention, (2) enterprise key management and policy controls, and (3) priority support/migration bundles. One caution: “privacy” alone doesn’t convert; solve a concrete ops pain. I built CrabClear to handle the obscure brokers others miss, and the lesson was the same—privacy sells when it eliminates a specific, recurring headache. If Epicenter can quantify one such headache and make it vanish out‑of‑the‑box, the model becomes much easier to sustain.

reply
cosmic_cheese
2 days ago
[-]
Cloud SaaS services are often extremely heavily marketed, too, often with a VC-backed ad spend pool. That’s difficult if impossible to compete with.

There’s also an upcoming generation that doesn’t know what a filesystem is which also doesn’t help matters.

reply
api
2 days ago
[-]
> There’s also an upcoming generation that doesn’t know what a filesystem is which also doesn’t help matters.

This is why I sometimes think it's hopeless. For a while there -- 90s into the 2000s -- we were building something called "computer literacy." Then the phones came out and that stopped completely. Now we seem to have inverted the old paradigm. In that era people made jokes about old people not being able to use tech. Today the older people (30s onward) are the ones who can use tech and the younger people can only use app centric mobile style interfaces.

The future is gonna be like: "Hey grandpa, can you help me figure out why my wifi is down?"

reply
cosmic_cheese
2 days ago
[-]
I know some of us technically inclined millennials intend to make a point of (or are already in the process of) ensuring that their kids are computer-literate. I’m not a parent but should that change I would certainly plan to. Whether or not that gets passed down to the grandkids or not is out of our control, but gotta do what you can right?
reply
jjcob
2 days ago
[-]
I don't know. I think it's a lucky coincidence, but I genuinely think that cloud based solutions are better.

Local first tends to suck in practice. For example, Office 365 with documents in the cloud is so much better for collaborating than dealing with "conflicted copy" in Dropbox.

It sucks that you need an internet connection, but I think that drawback is worth it for never having to manually merge a sync conflict.

reply
api
2 days ago
[-]
Those technical problems are largely a result of trying to shoehorn collaboration onto older local-only PC era apps that store data in the form of simple files. For really rich collaboration you want something designed for it from the ground up, and Office is not. Office pre-dates even the Internet.

That has nothing to do with where the code lives and runs. There are unique technical challenges to doing it all at the edge, but there are already known solutions to these. If there was money in it, you'd have a lot of local first and decentralized apps. As I said, these technical challenges are not harder than, say, scaling a cloud app to millions of concurrent users. In some cases they're the same. Behind the scenes in the cloud you have all kinds of data sync and consistency enforcement systems that algorithmically resemble what you need for consistent fluid interaction peer to peer.

reply
jjcob
2 days ago
[-]
It's not a technical challenge, it's a fundamental problem.

When multiple people work on a document at the same time, you will have conflicts that will become very hard to resolve. I have never seen a good UI for resolving non-trivial changes. There is no way to make this merging easy.

The only way to avoid the merge problem is to make sure that the state is synchronised before making changes. With cloud based solutions this is trivial, since the processing happens on the server.

The local first variant of this would be that you have to somehow lock a document before you can work on it. I worked on a tool that worked like that in the early 2000s. Of course that always meant that records remained locked, and it was a bit cumbersome. You still needed to be online to work so you could lock the records you needed.

reply
api
1 day ago
[-]
It’s barely different from having multiple fat single page web apps editing a file in the cloud. All of those have local replicas of the data and present as if everyone is editing at once.

There are multiple ways to do this, like CRDTs plus raft based leader signaling for conflict resolution. The latter signaling requires almost no bandwidth. Raft based time skew adjustment works too if your problem domain can accept a small amount of uncertainty.

Like I said a lot of these same kinds of algorithms are used cloud side. All the big cloud stuff you use is a distributed system. Where the code runs is irrelevant. The cloud is just another computer.

reply
jjcob
13 hours ago
[-]
All these algorithms arent going to help if user A changes a headline from Arial to Helvetica while user B changes it to Helvetica Neue.

There is no way to tell which of the changes should win.

That's why most applications decided to require users be online for edits. when you have to be online, the chance of simultaneous edits becomes so small that you can just show an error message instead of trying to merge.

The online requirement also ensures that you are notified of conflicts immediately, which is vastly preferable to users. Nothing worse than working on a document for hours and discovering someone else also worked on the same document and now you have two edited copies that someone needs to consolidate.

That's the reason why offline first is becoming increasingly unpopular.

reply
dajtxx
1 day ago
[-]
I think Fusion360 is one of the worst things I've seen. It presents as local but it feels like every mouse click causes a web request to happen.

Something that should be local and snappy is just an awful experience.

reply
dibrale
1 day ago
[-]
I prize local execution for confidentiality and trust reasons, but this mindset seems to put me in a distinct minority. I've just accepted that I will end up having to build my own solutions from time to time.

That said, 'local' can mean a number of different things. On-device local, LAN local, intranet local... You get the idea. I chose to go with an approach of: 'assume resources are constrained and build for that'.

The result was a local-first agentic system (https://github.com/dibrale/Regions) that uses explicit resource sharing and execution patterns to make use of arbitrarily distributed compute. That way, local can be whatever I want it to be, so long as there's an endpoint.

reply
nitwit005
1 day ago
[-]
> When you build a local-first app, you’ve effectively created a distributed system.

This is true, but any time there's more than one computer involved it's a distributed system. It doesn't matter if you're "local first" or not.

Plenty of apps that resolve everything server side do a terrible job handling conflicts.

reply
hamstergene
1 day ago
[-]
Sometimes I think the way CRDT research formulates the problem, itself obstructs evolution of local-first.

That obsession with Google Docs-like collaborative real-time text editing, a pretty marginal use case, derails the progress from where local-first apps really need it:

- offline-enabled, rather than realtime/collaborated

- branching/undoing/rebasing, rather than combining edits/messages

- help me create conflict-aware user workflows, rather than pursue conflict-free/auto-converging magic

- embeddable database that syncs, not algorithms or data types

CRDT research gives us `/usr/bin/merge` when local-first apps actually need `/usr/bin/git`. I don't care much how good the merge algorithm is, I need what calls it.

reply
gizmo
2 days ago
[-]
I think that the database layer is the wrong layer for reconciliation of change sets.

The main problem with any sync system that allows extensive offline use is in communicating how the reconciliation happens so users don't get frustrated or confused. When all reconciliation happens as a black box your app won't be able to do a good job at that.

reply
KaiserPro
1 day ago
[-]
I think the biggest issue is that if you are putting that effort into syncing, its far nicer for the end user if you manage it for them.

If your using the "local" shared storage, then you're on the hook for any failure or setup, so there is a huge amount of debug and reputational damage in store for not very much gain.

If you are supporting multi device makers, and a normal audience, (ie people who don't know what CRDTs are) then you doing the hard work is far easier.

And thats before we get into ACLs for sharing.

reply
donatj
1 day ago
[-]
I have a note keeping app I have been building for well over a decade at this point. It's never going to release and it's chock full of features only I seem to care about. No one I've shown it to has cared.

Recently I went to the grocery store and presented with no signal had to try to recall my grocery list from memory. In all my years using my own app I think this is the first time I'd ever wished it were "local first"

reply
schickling
1 day ago
[-]
Host of localfirst.fm and author of LiveStore here. Your post strongly resonates and I particularly like your conclusion in regards to SQLite.

I've been exploring how to build a sync layer around SQLite for the last 5 years as part of LiveStore and settled on embracing event sourcing following a similar architecture to Git but for your application data.

reply
birdfood
1 day ago
[-]
I’m building a journaling app primarily for myself but with a view that others might want to use it to. I’ve built it in rails and deployed it. The experience has been great and it’s the first “app” I’ve deployed outside of work. But I haven’t shared it with anyone because I don’t want to be responsible for hosting their data considering the intention of this app. The only “business model” (method of covering costs) I’m interested in is pay to use. I don’t want to do ads or tracking. I think saas is the wrong fit for it. So I’m just this week thinking about making it a macOS / iOS app instead and work out how to do syncing without involving a server.
reply
degamad
1 day ago
[-]
"Without using a server" is possible, but hard.

An alternative is "using a server that the user has", e.g. saving to their iCloud/Dropbox/Google Drive/GitHub/etc.

(There used to be old notes applications which would save your notes in an IMAP folder on your email server; almost every user of your app will have email.)

reply
dajtxx
1 day ago
[-]
Such a weird and absurd conversation to be having from the PoV of someone who started using computers a long time ago.

Offline first apps 'sound like the future'? I agree they're far better than the trash we have now, but it's not like we can't build them now.

We should be building them now. The web was never meant to be an app delivery platform :(

reply
bvan
1 day ago
[-]
Bring back gold old desktop applications, I say. Apps are for your phone.
reply
wartywhoa23
2 days ago
[-]
Because Big Money are behind Local-Last Apps
reply
pbronez
2 days ago
[-]
This is content marketing for a open-source SQLite extension called `sqlite-sync` [0]. It seems pretty cool, implementing CRDTs and some networking directly in SQLite.

Unfortunately this extension "is tightly integrated with SQLite Cloud." It doesn't support peer-to-peer syncing or self-hosting a central source of truth. The project is really just a client for the sqlite.ai cloud service.

[0] https://github.com/sqliteai/sqlite-sync

reply
gejose
2 days ago
[-]
Shameless self plug, but my workout tracking app[1] uses a sync engine and it has drastically simplified the complexities of things like retry logic, intermittent connectivity loss, ability to work offline etc.

Luckily this is a use case where conflict resolution is pretty straightforward (only you can update your workout data, and Last Write Wins)

[1] https://apps.apple.com/us/app/titan-workout-tracker/id644949...

reply
pbronez
2 days ago
[-]
Which sync engine are you using?
reply
hasanhaja
2 days ago
[-]
I've only started playing with local first recently to learn the Service Worker API and IndexedDB, and I'm looking forward to learning about CRDTs more. Here's a little todo app that I built:

[1] GitHub: https://github.com/hasanhaja/tasks-app/ [2] Deployed site: https://tasks.hasanhaja.com/

reply
reclusive-sky
2 days ago
[-]
I've been working on a local-first distributed app and found it difficult to design something that is easy to install and maintain. Important prerequisites and user-managed dependencies (i.e. Redis, Docker) are nonstarters for most potential users. It seems cloud hosting is always the end-state when trying to optimize primarily for less-technical users. FWIW I also ended up arriving at CRDTs as the solution for my app.
reply
marapuru
2 days ago
[-]
I always start with a local first application. And then find that I want to access List X, Design Y, Scribble Z on the road and on my phone. Resulting in a frenzied search for a mobile / desktop alternative that includes syncing. Then finding a solution that requires a subscription and then falling back into fiddling with Google Drive, or something in the Google workspace.

I guess I should bring my devices back to exactly 1 device. Or just take a subscription on one service.

reply
sroerick
1 day ago
[-]
Does anybody remember the architecture that account _unwriter was creating on Bitcoin? You could write a transaction from anywhere using and store data in it. Then you could read it through an API of either a full node or a node that was only subscribed to your unique hash ID. It seemed like a really good architecture to me. He kind of disappeared and most of the work lost
reply
tracker1
2 days ago
[-]
My approach in a similar situation where individual sites may be disconnected from a main hub for potentially days is to basically treat things closer to a CQRS transaction/command model, where all transactions are recorded as such, there's a local "summary" table for current state, but this is overridden on sync with the main hub after transactions are evaluated there.
reply
mhuffman
2 days ago
[-]
Some people might say that the browser you are using to look at your webapp or ask chatgpt for something is a local-first app ... making it pretty damn popular.
reply
pixelpoet
2 days ago
[-]
That's a bit like saying we live in outer space because earth is in outer space.
reply
trashb
2 days ago
[-]
no, this does not make sense. The "outer" in outer space suggests an inner space. In this case earth because we have historically held a earth centrist view since humans primarily live on earth. The case that earth is inside of outer space is assumed.

Also I believe the original comment is right as your browser (HTTP) operates fundamentally on a request reply basis. You request resources, you receive an answer, while you are displaying the most recent answer the state on the server might change. You can have a browser that caches the resources for you and only retrieve the new state after for example a timeout or for example a archiving proxy.

reply
pixelpoet
2 days ago
[-]
Yes, and being a web browser that accesses websites from the internet doesn't make sense to call "local first"; the internet isn't local.
reply
trashb
2 days ago
[-]
I can browse my private my-site.html perfectly without ever having a network adapter in my device. Now granted some modern webapps (chatgpt) are not designed to operate locally, but there are plenty of webapps that work fine from local storage.

A browser is (in my definition) a tool to display what you call a websites, usually a collection of html/css/js resources. One of the options is to retrieve the resources through the network, if you want the latest news for instance. But it is not required and that makes it local-first in my view. I don't think local first means the app can not connect or sync with a remote server.

In the same way as saying we live on earth doesn't mean we are not in space, earth is in space and we are on earth. It is not mutually exclusive.

reply
pbronez
2 days ago
[-]
There's a bit of No True Scottsman happening here. Isn't every iOS app backed by iCloud's CloudKit [1] a "Local First App"?

[1] https://developer.apple.com/icloud/cloudkit/

reply
PLenz
2 days ago
[-]
Harder to rent-seek with non-saas software
reply
flkiwi
1 day ago
[-]
"local first" to me translates almost immediately into the following:

- What if I don't have that device? - How do I reliably sync between devices? - How do I deal with local storage limits of a device?

Silverbullet has become very nearly my illustration of the perfect app, because it has a local, offline feature but syncs back to a server controlled by me. The weakness with SB specifically is any limits on browser storage, though that's less likely an issue for a note-taking tool that is, for me, 99.9% text.

Compare Logseq, which used to be browser-based and self-hostable, to their more recent "local-first, app-based" model, with a $15/mo charge to sync and severe usability restrictions in a world where I cannot install arbitrary software at work.

So, local-first has for me become synonymous with inconvenience, proprietary, and revenue channel for the dev rather than security and flexibility.

reply
Aldipower
1 day ago
[-]
Local first web apps - the hybris of the modern web application developer.

Sure, for some solutions, local first even makes sense, but for the most, not.

reply
setnone
2 days ago
[-]
Just because "personal server" hasn't happened yet in the way personal computer did
reply
RajT88
2 days ago
[-]
That is an interesting thought.

How do you make self-hosting appealing to more than weird nerds?

reply
crote
2 days ago
[-]
Make it a comprehensive service you can rent at a fixed monthly cost. Which of course defeats the whole "self-hosting" part.

Regular people don't like the Magic Box Which Makes Things Work. They'll begrudgingly shove it in a cupboard and plug it in, but even that is already asking a lot. If it needs any kind of regular maintenance or attention, it is too much effort. "Plug in a harddrive once a month for backups"? You'll have just as much asking them to fly to Mars and yodel the national anthem while doing a cartwheel.

reply
tomxor
2 days ago
[-]
> 1. Unreliable Ordering

If you think this is only a problem for distributed systems, I have bad news for you.

reply
grigio
2 days ago
[-]
Local first apps are a niche, like offline maps (like tomtom, organics maps)
reply
xvrqt
2 days ago
[-]
>The reason is simple: syncing is hard.

The reason is simple: control is how you extract rent

reply
auggierose
1 day ago
[-]
Doing it properly is just hard. There is also this CRDT craze going around, but in the end I think operational transformation is a more general and flexible solution (CRDT is just a special case of operational transformation). I have been looking at the various solutions in this space for a few years now, and in the last two or three months started implementing my own experimental solution as a database using operational transformation. It doesn't help that operational transformation in the literature is somewhat confused (there are not many papers about it that are actually correct), although in the end it is really simple.
reply
elzbardico
2 days ago
[-]
Network speeds are of sufficient bandwidth and latency to allow cloud first applications, which are vastly simpler to develop than Local-first whenever you need to add any remote/collaborative feature.
reply
johannes1234321
1 day ago
[-]
> Network speeds are of sufficient bandwidth and latency

They are, as long as you in reach of a base station.

As soon as you travel through a tunnel, hike outside, are on an airplane or are in Australia with latency to U.S or European servers things are different.

Not to mention choice of being offline, either to disable distraction or for reducing roaming costs for some devices while travelling.

reply
wellpast
2 days ago
[-]
It’s a least-common denominator effect.

I.e., most people don’t care.

Local-first is optimal for creative and productivity apps. (Conversely, non-local-first are terrible for these.)

But most people are neither creative nor optimally productive (or care to be).

reply
chii
2 days ago
[-]
> most people don’t care.

it's not that they "don't care", but that they dont know this is an issue that needs to be cared about. Like privacy, they didnt think they need it until they do, but by then it's too late.

reply
ChaoPrayaWave
1 day ago
[-]
Most people don’t ask whether an app is local first or cloud first. They just want to know: “Can I open it?” and “Will my data be safe?”
reply
self_awareness
2 days ago
[-]
> Offline-first apps sound like the future: instant loading, privacy by default, and no more spinning loaders on flaky connections.

The future? I thought all apps were like this before this web2.0 thing ruined it.

reply
takluyver
2 days ago
[-]
The implied context for that is that whatever information you have in the apps should be kept in sync between devices and between people. Classic desktop applications are more like offline-only. Web 2.0 traded that approach for data living on a server, so it stays in sync because there's only one source of truth, but you can only edit it when online.

'Offline-first' is trying to combine the benefits of both approaches.

reply
threetonesun
2 days ago
[-]
What's a little funny to me is that most of Web 2.0 was supposed to solve things like collaborating on a Word doc on a shared server, ideally getting us around the X+1 versions of files problem, but collaboration is a human problem, so now we have X+1 versions, but in the cloud.
reply
gethly
1 day ago
[-]
I think this might have something to do with the data being tied to he application itself. In other words no portability/vendor lock-in type of thing.
reply
syntaxing
2 days ago
[-]
It’s a hard balance, local first a lot of times feel local only. I really like the model on Home Assistant/Nabu Casa. Local first and you can pay a subscription to cloud access.
reply
codeulike
2 days ago
[-]
Every problem is a sync problem these days.

Sync problems are harder than just 'use a CRDT'.

What counts as 'consistent' depends on the domain and the exact thing that is being modelled.

reply
jakelazaroff
2 days ago
[-]
The real answer to the question isn't technical. Local-first apps haven't become popular because companies recognize that their value comes from controlling your data.

In a talk a few years ago [1], Martin Kleppman (one of the authors of the paper that introduced the term "local-first") included this line:

> If it doesn't work if the app developer goes out of business and shuts down the servers, it's not local-first.

That is obviously not something most companies want! If the app works without the company, why are you even paying them? It's much more lucrative to make a company indispensable, where it's very painful to customers if the company goes away (i.e. they stop giving the company money).

[1] https://speakerdeck.com/ept/the-past-present-and-future-of-l...

reply
ktosobcy
2 days ago
[-]
I try to use those as much as possible. We ended up in this situation mostly because of the Google push to make everything an "web-app" with web-technologies instead of protocols...

Anyone uses IMAP email? Works just fine (save for IMAP design but that's another story).

Same with CalDAV.

For stydy I use Anki and it has brilliant sync (it can even automatically/automagaically merge study changes when I study some items on mobile and others on desktop).

Many seem to claim that it's impossible to sync correctly in "colaborative environment" as in it would always involved dozens of people constantly working and editing the document (which would be utterly difficult to track the evolution of)… Most of the time it's not that colaborative and having the data locally makes it easier to work with.

OTOH not everything has to be (web-)app…

reply
Anonyneko
2 days ago
[-]
With Anki the sync works perfectly until it doesn't and the app asks you to choose the copy you want to keep. Thankfully it doesn't happen often and hasn't caused any trouble for me, but it (seemingly) has no options for manual conflict resolution.
reply
ktosobcy
2 days ago
[-]
Hmm... it requires full sync if you edit the fileds of template.

In the past it required full sync due to conflict but in the last ~2 years (maybe) I haven't run into such probelm (I guess periodic sync in the background eliminating to much of a backlock probably helps)

reply
Anonyneko
1 day ago
[-]
Last time it asked me to do a full sync when I changed a single card to use a different template (because I had mistakenly used a wrong template when creating it). The fact that everyday sync works perfectly is still mighty impressive though.
reply
jjcob
2 days ago
[-]
IMAP works fine until you try to continue working on a draft email that you started on another computer... somehow it always gets broken and I end up with 5 copies of the message in the drafts folder...
reply
isaachinman
2 days ago
[-]
We fixed this at Marco, FYI.

https://marcoapp.io

reply
jjcob
13 hours ago
[-]
How?
reply
Xelbair
2 days ago
[-]
Technical reasons are honestly overblown - it all boils down to one, business, reason - control.

When you do serverside stuff you control everything. What users can do, and cannot do.

This lets you both reduce support costs as it is easier to resolve issues even by ad-hoc db query, and more importantly - it lets you retroactively lock more and more useful features behind paywall. This is basically The DRM for your software with extra bonus - you don't even have to compete with previous version of your own software!

i want my local programs back, but without regulatory change it will never happen.

reply
gwbas1c
2 days ago
[-]
> Technical reasons are honestly overblown

Having built a sync product, it is dramatically simpler (from a technical standpoint) to require that clients are connected, send operations immediately to central location, and then succeed / fail there. Once things like offline sync are part of the picture, there's a whole set of infrequent corner cases that come in that are also very difficult to explain to non-technical people.

These are silly things like: If there's a network error after I sent the last byte to a server, what do I do? You (the client that made the request) don't know if the server actually processed the request. If you're completely reliant on the server for your state, this problem (cough) "doesn't exist", because when the user refreshes, they either see their change or they don't. But, if you have offline sync, you need to either have the server tolerate a duplicate submission, or you need some kind of way for the client to figure out that the server processed the submission.

reply
Xelbair
1 day ago
[-]
but it is nothing that cannot be solved - if it was more profitable we would all be doing it.
reply
gwbas1c
1 day ago
[-]
Oh, it's very solvable. (As I pointed out.)

The bigger issue is naivety. A lot of these corner cases mean that it's unlikely someone can just put together a useful prototype in a weekend or two.

> if it was more profitable we would all be doing it.

More like, if there was more demand we would all be doing it. Most of us have reliable internet connections and don't go out of service often enough to really care about this kind of thing.

reply
hobs
2 days ago
[-]
Yep at every company I have ever worked at the question is not only how to assert control, but how to maintain it for the long term. Even if the company isn't exploiting you today, they want the option for later.
reply
JustExAWS
2 days ago
[-]
Well, I have three devices I would like to work with depending on what I am holding in my hand - my phone, tablet and computer. I want all of my devices to be in sync.

Right now, I can throw my phone in the ocean, go to the Apple Store, sign in and my new phone looks and acts like my old phone with all of my data available, my apps and my icons being in the same place.

My wife and I can share calendar events, spreadsheets, photo libraries etc.

That’s not to mention work.

reply
winrid
1 day ago
[-]
Time. If it takes more than like 30 seconds to setup the masses will never adopt it.
reply
thephyber
1 day ago
[-]
CRDTs may solve most of the technical side (albeit with lots of complexity), but how do you solve the “how to explain what happened to the user?” issue? Not all actions are intuitive while offline.

For example, if 2 users are both offline and both buy the same last item in the retailer’s inventory. The race condition has to be solved sometime, and the non-winning user will need a communication from customer service apologizing for overselling the item. Those are pretty frustrating conversations when they come from airlines or hotels.

reply
SilverbeardUnix
2 days ago
[-]
1. People don't work alone and need to collaborate

2. People value convenience over privacy and security

3. Cloud is easy.

reply
arrowtrench
2 days ago
[-]
Is local-first bad / more difficult for collaboration because of conflict resolution? (E.g., two users edit the same document when they're offline and then, during syncing, they find that their versions diverge too much for them to be merged cleanly.) If so, isn't it possible to mark certain assets as "undivergable" which would effectively mean that the program would act like a traditional cloud-first type of app for that specific asset? This middle ground could introduce too much complexity and some inconsistency, but it could prove useful in certain cases.
reply
jonathanstrange
1 day ago
[-]
Merge conflicts arise just as easily in a cloud-first app than in a local-first app once you go beyond any simple "last edit wins" consistency model (and if you have that model, it's also no problem for local-first). What counts as a conflict depends entirely on the data domain in both cases, and that's the hard part.

It's just easier to implement cloud-first because it's just CRUD with a centralized database on a server. It's still extremely hard to reliably connect two apps directly peer to peer without some centralized server as an intermediary and since you need a centralized server anyway and in addition have to do the peer to peer syncing, "local first with syncing" is inherently more complex than just syncing to a master database. But potential merge conflicts are the same in both.

reply
weego
2 days ago
[-]
Absolutely.

I can't believe so many replies are struggling with the easy answer: privacy, security, "local first", "open source", "distributed", "open format" etc etc etc are developer goals projected onto a majority cohort of people who have never, and will never, care and yet hold all the potential revenue you need.

reply
RajT88
2 days ago
[-]
Do not forget - cloud makes apps easier to monetize via ads and selling user data.

Business trumps perfect software engineering almost every time.

reply
tmpfs
1 day ago
[-]
I am totally sold on local first apps. I think the recent enshitification of many services (which is also inevitable for the new wave of AI services) plus the inherent privacy risks with allowing third parties access to our personal data is no longer a trade-off I am willing to make.

I set up a little Pi NAS and have moved all my git repositories there thanks to gitea (I still mirror to github for community interaction) and am gradually migrating everything to be stored locally with encrypted cloud backups.

I've also been working on a local-first, open-source, eventually consistent password manager[0] (using last write wins) for the last 3 years as I can't think of anything more important that we should have control over.

It will be hard for local-first to become more commonplace as SaaS and cloud has become so entrenched but I will keep forging towards a future where we take back ownership of our personal data.

[0]: https://saveoursecrets.com/

reply
AlienRobot
1 day ago
[-]
I feel like the problem is simply... it should just be a file.

And if it were a file, you could sync it with dropbox or OneDrive.

And if you can do that, they can't make money selling their own cloud service for their local-first app.

More specifically, if you can edit different parts of a same document on different devices, then the document should be split across multiple files that can be synced independently, e.g. a photoshop document where each layer is a separate file and a "main" document simply loads them into a single layer stack.

In fact there are too many document types nowadays that are composites of sub-files and are even actually just a zipped folder under the hood. It feels like we should have just been using files all along, or some sort of file-folder hybrid with OS-level support instead of using zipped folders to contain all the files of a single document.

reply
mweidner
1 day ago
[-]
> More specifically, if you can edit different parts of a same document on different devices, then the document should be split across multiple files that can be synced independently

A more robust idea is to store a log of changes in files that are synced with Dropbox/OneDrive/etc. To prevent conflict warnings from Dropbox, you'll want each device to write to a separate file. Readers re-assemble the logs into (some) canonical total order, then reduce over that to get the document state.

The hard part is re-architecting your app to record all changes, instead of just writing out the current state to disk. However, once you do that, it can form the basis for other features like undo/redo, a view of the file's history, etc.

(The changes don't need to be CRDT/OT messages - anything deterministic works, though it's a good idea to make them "rebase-able", i.e., they will still do something reasonable when replayed on top of a collaborator's concurrent changes.)

reply
MangoToupe
1 day ago
[-]
> And if it were a file, you could sync it with dropbox or OneDrive.

How do these services resolve conflicts?

reply
dhilish
1 day ago
[-]
Local first apps are slow when compared to cloud hosted
reply
bariumbitmap
1 day ago
[-]
Obsidian.md has over a million users in five years, that sounds fairly popular to me.

https://www.linkedin.com/pulse/how-obsidians-contrarian-play...

reply
whartung
1 day ago
[-]
I dunno.

Storing your files on a synced folder (i.e. DropBox or something similar) can handle a lot of heavy lifting.

The use case is not simultaneous editing, folks typically aren’t working on things on different systems at the same time.

Given that as a basis, next, “save early, save often”. The app is “cloud ignorant”. It just sees a local folder. Most of the syncing systems sync up quite quickly, so even if you save it on your desktop and pick up your phone, it’s probably already synced up. “For free”.

Finally your app needs to be cognizant that the file can change behind its back. When it fires up it reloads, or prompts the user, or whatever.

Adding in auto save and versioning like Apple does with their apps, and magic happens (though I honestly don’t know how the Apple apps respond if the saved document is changed underneath it).

There’s a difference between simultaneous and background changes. No reason to over complicate things for the vast majority of use cases.

reply
brainzap
2 days ago
[-]
Does Apple Notes count as local first? it is an amazing experience.
reply
ForHackernews
2 days ago
[-]
If they're building this on top of sqlite, probably worth considering adopting https://rqlite.io/ that has already done a lot of work for a clustered sqlite
reply
farseer
1 day ago
[-]
They were popular until the internet was invented.
reply
anon1395
2 days ago
[-]
Personally I think PWAs are a bit confusing for consumers as they think "It's a website, i won't be able to use it offline." and if they get reminded that they can use it offline they usually forget.
reply
alextingle
2 days ago
[-]
Why would they think "its a web-site"? PWAs just look like normal phone apps.

In fact, many "normal" phone apps are basically just a web-site inside a thin wrapper. So the difference is largely academic in many cases.

reply
crote
2 days ago
[-]
Because a user first encounters it on the web - where it acts exactly like a website. The only difference is that the bookmark-button-thingy says "add app to home screen" instead of "add to home screen".

It's not giving a "I'm installing an application" vibe, it is giving "I am creating a shortcut to a website" vibes. Apps are installed via the app store, not as weird quasi-bookmarks in your browser.

reply
anon1395
2 days ago
[-]
Most ordinary people thing that they are just adding a shortcut to the website, whereas these wrappers convince the user they are actually installing an app app.
reply
Fire-Dragon-DoL
2 days ago
[-]
Because lock in and walled gardens was my assumption
reply
lenerdenator
2 days ago
[-]
They're extra work.

Now that people are used to having someone in a data center do their backing up and distributing for them, they don't want to that work themselves again, privacy be damned.

reply
5kyn3t
1 day ago
[-]
local first: could actually be a downloadable index.html file, that runs locally only your browser.
reply
cadamsdotcom
1 day ago
[-]
If you spend effort being local-first, you can’t spend that on features or UX.

You have a disadvantage vs. competitors who focus solely on the latter.

reply
vkou
1 day ago
[-]
Is local-first local-only actually harder to build and secure than a four-nines uptime service?

I understand that it's less profitable, which is the real reason nobody does it.

reply
cadamsdotcom
1 day ago
[-]
Building and securing local first is definitely harder than four-nines.

Want 99.99% uptime? Easy.

Cloudflare, AWS, fly.io, Vercel etc. have poured decades of person-years of engineering into platform reliability and operational excellence so you can get it with a few quick CLI commands.

reply
vkou
1 day ago
[-]
Just because the infra you built on top of is 99.99% uptime doesn't mean that your application is.
reply
sbinnee
1 day ago
[-]
Answer: internet
reply
hulitu
1 day ago
[-]
> Why haven't local-first apps become popular?

Because FAANG needs your data. Because 5 Eyes needs your data.

reply
Joel_Mckay
1 day ago
[-]
The world went mobile, and Faustian software licenses undermined actual ownership.

Thus, customers get software companies that are SaaS even when they appear to sell some product. Accordingly, most "Computers" sold today are "information-appliances" with constrained predefined use-cases the dominant OS publishers allow people to use.

Few FOSS projects survive contact with such ecosystems, and likewise software App sellers drive platform decay due to shifting priorities that mess with users.

This is also why many modern AAA games have online game-play even when it makes no sense to do so... And why we see 147 fart Apps on the App store. =3

reply
jrm4
2 days ago
[-]
Ideas like this miss the mark when they don't factor in "the forces against us?"

This isn't like choice of fast-food burger where it's slightly harder to drive further to get something a little better; nearly the entire economy of Silicon Valley et al works every day to stop and slow down things like "local first."

reply
dhilish
1 day ago
[-]
Local first apps are slow when compared to others
reply
moralestapia
1 day ago
[-]
You mean like Microsoft Office?
reply
Sharlin
2 days ago
[-]
> Offline-first apps sound like the future

...and here we go again. Time is a flat circle.

reply
fennecbutt
1 day ago
[-]
Money.
reply
bongodongobob
1 day ago
[-]
They used to be. But if your app is cloud based, that means the businesses you're selling to don't have to have staff and hardware to manage that stuff on prem and can charge a higher subscription fee. For better or worse.

The pendulum will swing back, I think, for security reasons. "I don't want company X training AI on our data!" (Whatever that means)

reply
mrtesthah
1 day ago
[-]
Standard notes is a good example of a local-first web app that it super-easy to use without a sign-up.
reply
rvz
1 day ago
[-]
"It's open source, build it yourself" - Can't build it for my machine, but will try prebuilt executable.

"It works everywhere" - Doesn't run on my GPU, OS combination, will tweak settings for a bit.

"Add your API Key here" - Won't do it. "privacy reasons"

"You can run it locally then" - I'm really GPU poor and it runs 5 tok/s.

#1 Reason? - Lots of friction.

reply
insane_dreamer
1 day ago
[-]
The number of people willing to pay for local-first, self-hosted, or federated apps, and the amounts they're willing to pay, is just not enough compared to vast majority of people who don't care.
reply
dismalaf
1 day ago
[-]
Honestly, apps that require the web to actually do the thing you want, but also work offline, are more or less useless. You're basically just faking that it works offline, it doesn't do the thing you want, and it's a PITA for everyone involved. And to try get this right when most of the world has reliable data connections all the time is a colossal waste of effort.
reply
OhMeadhbh
1 day ago
[-]
Not really sure what you mean. I still run emacs locally.
reply
LAC-Tech
1 day ago
[-]
It's genuinely too difficult for most developers.

It may sound smug but I spent a lot of time trying to understand how to sync things offline. I'm not saying I am incredibly talented, just that I put in the hard work here.

It's very, very obvious to me that a lot of people in the space just aren't willing to do that. To be frank it's obvious in a few of these posts.

So yeah, it takes a bit of effort. If you don't want to genuinely learn some new shit, dig into the papers, be confused - it's probably not for you.

reply
cramcgrab
2 days ago
[-]
Because cloud. And local hardware and software lags now.
reply
kachapopopow
2 days ago
[-]
I don't understand why we have servers in the first place for any kind of local appliance, they ARE a server already, 256mb of flash costs pennies for storing hundreds of thousands of metrics - almost enough for a year!
reply
kachapopopow
2 days ago
[-]
Not sure why this is getting downvoted I assume because it has nothing to do with the article but these problems don't exist in applications that never have to reach a server in the first place. Remote access can be solved with a simple relay appliance (ex what homepod does for apple) for the average joe.
reply
Andrew_nenakhov
2 days ago
[-]
Server-centric computing is just more efficient, and it usually becomes less popular only at because means of communication aren't up to the task at the moment.

1. In the beginning, there were mainframes and terminals. You saved resources by running apps on a server and connecting to them with cheap terminals

2. Then, PCs happened. You could run reasonably complex programs on them, but communication capabilities were very limited: dialup modem connections or worse

3. Then, internet happened, and remote web apps overtook local apps in many areas (most of those that survived required massive usage of graphics, like games, which is difficult even with modern internet)

4. Then, smartphones happened. At the time of their appearance they didn't have ubiquitous network coverage, so many first apps for these platforms where local. This is eroding too, as communication coverage improves.

So if you look at this, it is clear that main share of computing oscillated back and forth between server and local, moving to local only when communication capabilities do not permit remote running, and once comms catch up, the task of running apps moves back to servers.

reply