Deploying this requires running 5 different open source servers (databases, proxies, etc), and 5 different services that form part of this suite. If I were self-hosting this in a company I now need to be an expert in lots of different systems and potentially how to scale them, back them up, etc. The trade-offs to be made here are very different to when architecting a typical SaaS backend, where this sort of architecture might be fine.
I've been going through this myself with a hobby project. I'm designing it for self-hosting, and it's a radically different way of working to what I'm used to (operating services just for my company). I've been using SQLite and local disk storage so that there's essentially just 2 components to operate and scale – application replicas, and shared disk storage (which is easy to backup too). I'd rather be using Postgres, I'd rather be using numerous other services, background queue processors, etc, but each of those components is something that my users would need to understand, and therefore something to be minimised far more strictly than if it were just me/one team.
Huly looks like a great product, but I'm not sure I'd want to self-host.
- Cheap and easy: embed into one executable file SQLite, a KV store, a queue, and everything else. Trivial to self-host: download and run! But you're severely limited in the number of concurrent users, ways to back up the databases, visibility / monitoring. If a desktop-class solution is good for you, wonderful, but be aware of the limitations.
- Cheap and powerful: All open-source, built from well-known parts, requires several containers to run, e.g. databases, queues, web servers / proxies, build tools, etc. You get all the power, can scale and tweak to your heart's content while self-hosting. If you're not afraid to tackle all this, wonderful, but be aware of the breadth of the technical chops you'll need.
- Easy and powerful: the cloud. AWS / Azure / DO will manage things for you, providing redundancy, scaling, and very simple setup. You may even have some say in tuning specific components (that is, buying a more expensive tier for them). Beautiful, but it will cost you. If the cost is less than the value you get, wonderful. Be aware that you'll store your data on someone else's computers though.
There's no known (to me) way to obtain all three qualities.
Easy on the magic powder. Cloud vendors manage some things, but they mostly abstract away the hardware and package management, and that's about it. Hosting a PostgreSQL DB on RDS instead of on a VM somewhere or on bare metal doesn't change much. Sure, redundancy and scaling are easy to set up. But you still have to stay up to date with best practices, secure it at the network level, choose and set your backup policy, schedule upgrades, plan the potential downtime, track what is deprecated and what is new, and watch for when the price will skyrocket because AWS doesn't want many customers still running that old version. The same applies to many individual technologies sold as "managed" by cloud vendors.
A whole lot of admin overhead is not removed by running some individual tech as a managed service.
You only remove a significant part of the management when the whole stack of what comprises your user facing application is managed as a single entity but that comes with another problem. Those managed apps end up being very highly coveted targets by black hat hackers. All the customers over the world usually end up being pwnable at the same time thanks to the same bug/security hole being shared. It becomes a question of when, not if, your users accounts will become public data.
I don't think there's any reason the same codebase can't support different trade-offs here.
Maybe I'm just looking at the past through rose-coloured glasses, but it seems to me that was the norm before we standardized on distributing apps as an entire stack through a docker-compose.yml file.
Your app depends on redis for caching. If no redis instance is configured, go ahead and just instantiate a "null" caching implementation (and print a warning to the log if it makes you feel better) and carry on.
You're using minio for object storage. Is there any reason you couldn't, I don't know, use a local folder on disk instead?
Think of it as "progressive enhancement" for the ops stack. Let your app run simply in a small single node deploy, but support scaling up to something larger.
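A minimal sketch of that null-cache fallback, assuming a Python app and made-up class names (the Redis branch is deliberately stubbed out, since it's not the point):

```python
import os


class NullCache:
    """Fallback used when no cache backend is configured: every read misses."""

    def get(self, key):
        return None  # always a miss; callers fall through to the source of truth

    def set(self, key, value, ttl=None):
        pass  # silently drop writes


class InMemoryCache:
    """Single-process dict cache; fine for a small single-node deploy."""

    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value, ttl=None):
        self._data[key] = value


def make_cache():
    """Pick a backend from the environment; default to the null object."""
    if os.environ.get("REDIS_URL"):
        # in a real app: return RedisCache(os.environ["REDIS_URL"])
        raise NotImplementedError("redis backend not shown in this sketch")
    if os.environ.get("CACHE") == "memory":
        return InMemoryCache()
    return NullCache()
```

The same pattern applies to the object-storage case: put a local-disk store behind the same interface as your S3/minio client and select it when no bucket is configured.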
For the first case, dev should just build on SQLite or use application code. For the latter case, choose a single storage engine and use it for everything (Postgres?).
I have 2 systems where in the first (route optimization platform), 1 user would, as part of just a normal 10 minute session:
- read ~100MB from the database
- utilize 100% of a 32-core machine's CPU (and 64GB of RAM)
- resulting in thousands of writes to the database
- and side-effect processing (analytics, webhooks, etc.)
Over a course of a day, it would likely be ~10x for that single user.
In the other system - appointment scheduling - 1 user would, in 1 day, read ~1MB of data from the database, and make 2-3 writes from a single action, with maybe 1 email triggered.
I.e. load and show stuff from databases (but nothing compute intensive).
Insert / update performance is quite another story: https://stackoverflow.com/a/48480012 Even in WAL mode, one should remember to use BEGIN CONCURRENT and be ready to handle rollbacks and retries.
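Worth noting that BEGIN CONCURRENT only exists on a separate SQLite branch, not in stock builds, so with the standard library the practical version is WAL mode plus a retry loop on busy errors. A rough sketch (Python stdlib `sqlite3`; function names and backoff values are illustrative):

```python
import sqlite3
import time


def connect(path):
    con = sqlite3.connect(path, timeout=5.0, isolation_level=None)
    con.execute("PRAGMA journal_mode=WAL")   # readers no longer block the writer
    con.execute("PRAGMA busy_timeout=5000")  # wait up to 5s before raising BUSY
    return con


def write_with_retry(con, sql, params=(), attempts=5):
    """Run a write inside an IMMEDIATE transaction, retrying on lock errors."""
    for attempt in range(attempts):
        try:
            con.execute("BEGIN IMMEDIATE")   # take the write lock up front
            con.execute(sql, params)
            con.execute("COMMIT")
            return
        except sqlite3.OperationalError:
            try:
                con.execute("ROLLBACK")
            except sqlite3.OperationalError:
                pass  # BEGIN itself failed; nothing to roll back
            time.sleep(0.05 * (attempt + 1))  # simple linear backoff
    raise RuntimeError("gave up after %d attempts" % attempts)
```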
A big organization likely would just buy their hosted solution.
If your business uses said tool for all client work, and it's hosted on a single server (be it physical or a VM), that is a SPoF. Any downtime and your business grinds to a halt because none of your project or customer information is accessible to anyone.
It feels like you have to be a very big business for this to be a problem that is worth the extra engineering effort vs a single instance with an SQLite file that you backup externally with a cron job
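For what it's worth, that cron-job backup is safest via SQLite's online backup API rather than copying the raw file, since a copy taken mid-write can be corrupt. A minimal sketch in Python (paths are placeholders):

```python
import sqlite3


def backup_db(src_path, dest_path):
    """Copy a consistent snapshot of a live SQLite database."""
    src = sqlite3.connect(src_path)
    dest = sqlite3.connect(dest_path)
    try:
        src.backup(dest)  # online backup: consistent even with concurrent writers
    finally:
        src.close()
        dest.close()
```

The sqlite3 CLI equivalent is `sqlite3 app.db ".backup /backups/app-$(date +%F).db"`, which drops straight into a crontab line.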
Most stock trading systems, and even many banking systems, have availability like 40%, going offline outside business hours. But during those business hours they are 99.999999% available.
Usually operation without trouble is operation with plenty of resources to spare, and in the absence of critical bugs.
Personally for me the issue with all these new project management platforms is that the target demographic is either:
- they're so small they can't afford to self-host
- they're big enough they can afford the pro plans of more established tools
Small companies can get by perfectly with the free/cheap plans of other tools, like Trello+Google Workspace. Heck if you're a one-man team with occasional collaborators free Trello + Google Workspace (~$6/mo) is enough.
A box to provision the Huly stack might come out at more per month...
In theory, the components for a universal runtime are pretty apparent by now. Wouldn't it be wonderful if truly portable and end-user friendly cloud computing was a thing.
What about lowering the number of dependencies your application uses, like only depending on a database? Running a database isn't that hard, and it also greatly simplifies the overhead of running 5 different services.
I would agree, though, that many software installations never need to scale to that point. Perhaps most.
You can easily handle 100-200 concurrent active users on a decent CPU with SQLite, if you don't do anything crazy. And if you need a project management solution that needs more than that, you probably are not too concerned about the price.
So "cheap and powerful" just looks like "powerful", at which point you may as well make it easy, too, and go with a managed or hybrid solution.
It seems like you look at the cost of things only through the lens of licensing and not the cost of people to run/maintain them.
I have nothing against OSS per se, but in my experience, the financial analysis of OSS vs paid software is much more subtle.
That's interesting, for us there was actually no trade-off in that sense. Having operated another SaaS with a lot of moving parts (separate DB, queueing, etc), we came to the conclusion rather early on that it would save us a lot of time, $ and hassle if we could just run a single binary on our servers instead. That also happens to be the experience (installation/deployment/maintenance) we would want our users to have if they choose to download our backend and self-host.
Just download the binary, and run it. Another benefit is that it's also super helpful for local development, we can run the actual production server on our own laptop as well.
We're simply using a Go backend with SQLite and local disk storage and it pretty much contains everything we need to scale, from websockets to queues. The only #ifdef cloud_or_self_hosted will probably be that we'll use some S3-like next to a local cache.
How do you make it highly available like this? Are you bundling an SQLite replication library with it as well?
In our experience a simple architecture like this almost never goes down and is good enough for the vast majority of apps, even when serving a lot of users. Certainly for almost all apps in this space. Servers are super reliable, upgrades are trivial, and for truly catastrophic failures recovery is extremely easy: just copy the latest snapshot of your app directory over to a new server and you're done. For scaling, we can simply shard over N servers, but that will realistically never be needed for the self-hosted version with the number of users within one organization.
In addition, our app is offline-first, so all clients can work fully offline and sync back any changes later. Offline-first moves some of the complexity from the ops layer to the app layer of course but it means that in practice any rare interruption will probably go unnoticed.
In practice, wouldn't you need a load balancer in front of your site for that to be somewhat feasible? I can't imagine you're doing manual DNS updates in that scenario because of propagation time.
We have clearly worked in very different places.
I’ve been prototyping my app with a sveltekit user-facing frontend that could also eventually work inside Tauri or Electron, simply because that was a more familiar approach. I’ve enjoyed writing data acquisition and processing pipelines in Go so much more that a realistic way of building more of the application in Go sounds really appealing, as long as the stack doesn’t get too esoteric and hard to hire for.
S3 is pretty helpful though even on self-hosted instances (e.g. you can offload storage to a cheap R2/B2 that way, or put user uploads behind a CDN, or make it easier to move stuff from one cloud to another etc). Maybe consider an env variable instead of an #ifdef?
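The env-variable switch could be as simple as choosing a backend at startup; here's a sketch with a local-disk default (Python; `S3Store` is hypothetical and stubbed out):

```python
import os
import pathlib


class DiskStore:
    """Store blobs as files under a base directory."""

    def __init__(self, base):
        self.base = pathlib.Path(base)
        self.base.mkdir(parents=True, exist_ok=True)

    def put(self, key, data):
        (self.base / key).write_bytes(data)

    def get(self, key):
        return (self.base / key).read_bytes()


def make_store():
    """Runtime switch instead of a compile-time #ifdef."""
    if os.environ.get("S3_BUCKET"):
        # in a real app: return S3Store(os.environ["S3_BUCKET"])  (boto3 or similar)
        raise NotImplementedError("S3 backend not shown in this sketch")
    return DiskStore(os.environ.get("BLOB_DIR", "./blobs"))
```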
That is the nature of the beast for most feature rich products these days. The alternative is to pay for cloud service and outsource the maintenance work.
I don't mind a complex installation of such a service, as long as it is properly containerized and I can install and run it with a single docker-compose command.
> In terms of how much thought and design you put into the self-hosted story: this is one of the things that we've been slowly realizing, that every time the product gets more complex, the infrastructure required to run it gets more complex. As soon as you include a time series database, that's another ratchet up of complexity if you're self-hosting it, even more so outside of Docker. How much did you have that in your mind when you were designing the platform?
> I would say that's changed a lot over time. Early on, Sentry’s goal was to adapt to infrastructure. We use Django. We’ve got support for different databases out of the box: SQLite, Postgres, MySQL. MySQL was going away, along with all this other stuff, at some point too. We had that model early on. ... Our whole goal, and this is still the goal of the company, is that we want everybody to be able to use Sentry. That's why the open source thing is also critical for us. Adapting to infrastructure is one way to do that. Now, we changed our mind on that, because what that turned into is this mess of: we have to support all these weird edge cases, and nobody is informed about how to do things. We're like, “That's all dead. We only support Postgres.” Key-value is still a little bit flexible; it's hard to mess up key-value stores. We're like, “You can write an adapter. We only support Redis for this thing. We use a technology called Cortana. We only support ClickHouse for this thing.” We're over this era of “can we adapt it because it looks similar?”, and it made our lives so much better.
Great for short/medium-term but unsustainable long-term.
Postgres is a relatively uncontroversial one, but I had the benefit of working for a company already operating a production postgres cluster where I could easily spin up a new database for a service. I went with SQLite/on disk storage because for most companies, providing a resilient block storage device, with backups, is likely trivial, regardless of which cloud they're on, being on bare metal, etc.
Infra, ops and dev intersect at different points in different orgs. Some may provide a custom Kubernetes operator and abstract the service away, some orgs may provide certain "managed" building blocks, e.g. a Postgres instance. Some will give you a VM and an iSCSI volume. Some will allocate credits in a cloud platform.
Having the ability to plug preexisting service into the deployment is an advantage in my book.
The options are either to minimise the dependencies (the approach I advocated for in my parent comment), or to maximise the flexibility of those dependencies, like requiring a generic-SQL database rather than Mongo, requiring an S3 compatible object store rather than whatever they picked. This is however far more work.
> there's an interesting tension here with applications that target self-hosting
Being already set up in Docker simplifies this quite a bit for smaller installs. But I notice a second tension: the introduction of some new tool on every project. I'm reasonably proficient with Docker, been using it for over a decade. But until now I've never encountered "rush". And it did not surprise me to find some new tool - actually I probably would have been surprised to not find some new tool. Every project seems to forgo established, known tools for something novel now. I mean, I'm glad it's not "make", but the overhead of learning completely new tools to understand what I'm introducing into the company is attrition.
Take a look at all the configs and moving parts checked in this very repo that are needed to run a self-hosted instance. Yes, it is somewhat nicely abstracted away, but that doesn't change the fact that in the kube directory alone [1] there are 10 subfolders with even more config files.
1: https://github.com/hcengineering/huly-selfhost/tree/main/kub...
That's just what you get with Kubernetes, most of the time. Although powerful and widely utilized, it can be quite... verbose. For a simpler interpretation, you can look at https://github.com/hcengineering/huly-selfhost/blob/main/tem...
There, you have:
- mongodb (supporting service)
- minio (supporting service)
- elastic (supporting service)
- account (their service)
- workspace (their service)
- front (their service)
- collaborator (their service)
- transactor (their service)
- rekoni (their service)
I still would opt for something simpler than that, and developing all of the above services would keep multiple teams busy, but the Compose format is actually nice when you want to easily understand what you're looking at. Which brings me back to the initial question: is this complexity and are these external dependencies really needed? For a decently decomposed, highly scalable microservice architecture, maybe. For an open-source (likely) single-tenant management platform? Unlikely.
It highlights the problem of clashing requirements of different target user groups.
These “moving parts” are implementation details which (iiuc) require no maintenance apart from backing up via some obvious solutions. Didn’t they make docker to stop worrying about exactly this?
And you don’t need multiple roles, specialists or competences for that; it’s a one-time task for a single sysop who can google and read man pages. These management-spoiled outfits will hire one guy for every explicitly named thing. Tell them you’re using echo and printf and they'll rush to search for an output-ops team.
> We can also take a look at the linux kernel that powers the docker instances and faint in terror.
Sure, and computers are rocks powered by lightning - very, very frightening. That doesn't invalidate criticism about the usability and design of this very product, my friend.
Maybe they won’t change or migrations will be backwards-compatible. We don’t know that in general. Pretty sure all the software installed on my PC uses numerous databases. But somehow I never upgraded them manually. I find the root position overdefensive at best.
If it were a specific criticism, fine. But it uses lots of assumptions as far as I can tell, cause it references no mds, configs, migrations, etc. It only projects a general idea about issues someone had at their org in some situation. This whole “moving parts” idiom is management speak. You either see a specific problem with a specific setup, or have to look inside to see it. Everything else is fortune telling.
I think that self hosted has two meanings, and not every person that self hosts wants to use docker.
Unfortunately, JavaScript-based apps can make it quite convoluted just to output HTML and JavaScript.
Until something goes wrong, or the business side of the house asks for some kind of customization.
Docker can be a win for packaging an application.
Appwrite is a good example of packaging a complex app nearly flawlessly with docker and making updates a little more seamless.
I continue to have my reservations about docker having used it for a long time but some applications are helpful.
It’s unrealistic to eliminate it on the basis of it not being perfect for any and all scenarios.
It makes software available to more people to be able to run locally, and I’m not sure that’s a bad thing.
That's the main reason why I've gone with Apache SkyWalking instead, even if it's a bit jank and has fewer features.
It's kind of unfortunate, either you just have an RDBMS and use it for everything (key-value/document storage, caches, queues, search etc.), or you fan out and have Valkey, RabbitMQ and so on, increasing the complexity.
That's also why at the moment I use either OpenProject (which is a bit on the slow side but has an okay feature set) or Kanboard (which is really fast, but slim on features) for my self-hosted project management stuff. Neither solution is perfect, but at least I can run them.
My metric for this is something like Portainer, where its installation instructions tend to be "install this Helm chart on the K8s cluster you already have" or "run this script which will install 10 services via a bunch of bash (because it's basically some developer's dev environment)".
Whereas what I've always done for my own stuff is use whatever suits, and then deploy via all-in-one fat container images using runit to host everything they need. That way the experience (at most scales I operate at) is just "run exactly 1 container". Which works for self-hosting.
Then you just leave the hooks in to use the internal container services outside the service for anyone who wants to "go pro" about it, but don't make a bunch of opinions on how that should work.
On Linux.
The container-everything trend also has the effect of removing non-Linux POSIX-ish options from the table.
Postgres’ inability to be embedded in Java (i.e. SQLite can run inside a Java jar, but PG is written in C and must be installed separately) is a major, major drawback for shipping. If you ship installable software to customers, you’ll either have to Dockerize it or publish lots of guidelines, not to mention customers will claim they can only install Oracle and then you’ll have to support 4 DBs.
How have you found SQLite’s performance for large-scale deployments, like if you imagine using it for the backend of Jira or any webapp?
You could certainly embed PostgreSQL in a jar; you can do something similar to this:
https://github.com/electric-sql/pglite
I don’t think there’s that much interest, but it is doable.
H2: https://www.h2database.com/html/main.html
HSQLDB: https://hsqldb.org/
Apache Derby: https://db.apache.org/derby/
Though those would certainly be somewhat niche approaches, so you'll find fewer examples of using them online.
Although you do lack the simple on-disk db format that way.
I think this pattern isn't that harsh if they provide a script that reliably sets up a k8s cluster, or something of that sort.
This right here. Shitty architecture choices as a deliberate moat.
Postgres vacuuming comes to mind as an example. Pre-built docker images of it often just ignore the problem and leave it up to the defaults, but the defaults rarely work in production and need some thought. Similarly, you can tune Postgres for the type of storage it's running on, but a pre-built container is unlikely to come with that tuning process automated, or with the right tuning parameters for your underlying storage. Maybe you build these things yourself, but now you need to understand Postgres, and that's the key problem.
Containers do mostly solve running the 5 built-in services, at least at small scale, but may still need tuning to make sure each pod has the right balance of resources, has the right number of replicas, etc.
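To make the tuning point concrete, these are the sorts of postgresql.conf knobs a generic image leaves at defaults. The values below are illustrative starting points only, not recommendations for any particular workload:

```ini
# postgresql.conf fragments (values are workload-dependent examples)

# Autovacuum: the defaults can fall behind on write-heavy tables
autovacuum_vacuum_scale_factor = 0.05   # vacuum after ~5% of a table changes (default 0.2)
autovacuum_vacuum_cost_limit = 1000     # let each vacuum cycle do more work

# Storage: SSDs make random reads nearly as cheap as sequential ones
random_page_cost = 1.1                  # default 4.0 assumes spinning disks
effective_io_concurrency = 200          # allow more parallel prefetch on SSD/NVMe
```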
From a user's point of view: if I'm interested in a project, I usually try to run it locally for a test drive. If you make me jump through many complex hoops just to get the equivalent of a "Hello World" running, that sucks big time.
From a customer's point of view: ideally you want both local and cluster deployment options. Personally I prefer a compose file and a Helm chart.
In this specific case I'd argue that if you're interested in running an OSS project management product, you're likely a small/medium business that doesn't want to shell out for Atlassian - so it's also likely you don't have k8s cluster infrastructure, or people that would know how to operate one.
The «хули» (transliterated directly as "huly") means "what the hell", or actually something a bit spicier: "what the f@ck". The phrase is common among Russian tradies who don't bother to know anything except where the nearest bottle shop is and how much time is left until the end of the work shift.
The name reminded me of the PizData project from the Russian-speaking community.
What. A. Joke.
To me it's a clear message that the author doesn't respect anybody. Also, it seems applicable to potential projects people will build using that Huly tool?
I guess my question would be -- how on earth it's possible people trust authors like this and commit into using product built by them?
> as in lunatic a person who lacks good sense or judgment
Not marked as rude or profane.
> a person, especially a man, who is stupid or unpleasant
Or [1]:
> If you refer to another person as a git, you mean you dislike them and find them annoying. [British, offensive, disapproval]
[0] https://dictionary.cambridge.org/us/dictionary/english/git
[1] https://www.collinsdictionary.com/us/dictionary/english/git
I guess these kinds of things are inevitable...
Also, if you look at their Twitter account, they're clearly aware of this naming clash and are actually embracing it hahaha
Best of luck to Huly, this seems pretty cool. I've never gone all-in on a fully-integrated stack like this (issue tracking + docs + chat + calls), but it seems like in an ideal world, that's what you would want? Huge integration with O365 seems like the one thing people do actually like about MS Teams, for example.
Also, I'm a sucker for cool laser animations, so I'm saving the home + pricing pages to an inspiration folder for sure.
The idea of every service charging $15-30 per user per month is a myth perpetuated by companies who themselves have that budget to spend out of their VC funding.
Evernote once had a valuation of nearly 2 billion, and like 400 employees.
I replaced it with Obsidian which gives me more value and it was mostly just made by two people, now they list 9 employees, one of whom is the office cat.
Each company for me was just syncing some text and maybe a few larger things like PDFs. The actual cost of that is pennies per year.
The problem is that they realised they could make more money by trying to lock companies into a proprietary API definition platform – they want the design, testing, QA, documentation, etc, all to happen in Postman.
If you want an obvious example, look at Apple.
Somebody hasn't experienced Salesforce pricing
I am not a fan of Atlassian products, but what retains customers nowadays isn't the quality of the products themselves, but the integration and plugin ecosystem plus the difficulty of exporting the data. Nearly every tool has an integration for either Jira, Bitbucket, Confluence, or all of them. And you would usually dismiss any tool that doesn't have them if you are an Atlassian customer already. Once you have set all that up but decide you are paying too much for it, good luck telling your users they will surely lose data/formatting/integrations when migrating to some other tool. This, plus having to train people to use another tool, while companies usually take for granted that their users won't get lost in Jira (which really isn't true).
Ultimately it becomes more of a tax than a price.
— <Why> not?
— OK, I've named the repo just that.
I think trying to replace all these Apps
> serves as an all-in-one replacement of Linear, Jira, Slack, and Notion.
is not a wise move. I can see Linear and Jira and maybe Notion, but fighting Slack is just an uphill battle. There are so many chat platforms out there, many of them open source too.
Why would yours be the one that's at least on par with the likes of Slack? That hasn't really happened for the others either.
JetBrains tried this with JetBrains Space [1]: it was a knowledge platform and chat platform combined with code hosting, a CI platform, and a little more. But even an experienced dev company like JetBrains gave that up and focused on the core.
I think they should remove the chat part and stick to being a Jira + Notion replacement.
Looking back at those days, I'd still take IRC over Slack any time. Just for the self-hosting part, but also because of better integration, simpler interface, simpler way of extending it to fit company's needs.
I'd also take GitWeb over Gitlab, Bitbucket or Gitea. None of the extras offered by these tools add any real value. The only reason I want any Web interface to Git at all is to be able to link to the code from Wiki / chat.
But, Jenkins turned out to be a really bad pick. And so is Redmine. I've tried many popular CI tools, and they are all bad, so, who knows maybe this project will do it better?.. but they don't seem to tell much about how they see things working. I also haven't found a good bug-tracking tool yet. So, maybe this project will make something better? -- Let's hope.
----
Ideally, I'd prefer it if tools around project management standardized around some basic interfaces, so that no single company was tempted to create "package deals" where you choose to use crappy tools because they are packaged together with what you really want. But, I don't think it's clear to anyone what such interfaces might look like. So, this isn't happening probably in this decade... at least.
With such an "Enterprise" system, the choice of chat tool is much more of a top-down decision than the chat in a social group.
And it isn't like slack is perfect in all regards either ...
Similar to how Notion became popular to people who wanted to use the same app for journaling, knowledge management and CRM, for home and work.
Hopefully Huly pulls it off, having so many features that work well isn’t an easy task.
As shown in the rendered billboard, they agree with you…
"Huly: the app for – anything – and nothing"
Best of Luck. Looking good so far from the first browse and scroll-around.
I am blown away at all of his work. He has some graphics and animation that I want to mimic for a "solar" theme I am working on.
Linear is not quite flexible enough for what I need as it is too strongly in the task management camp, so I can't define a proper candidate pipeline without hacking what the "status" of each "issue" means and remembering that.
Most small companies don't need dedicated solutions for project management, ats and crm, so I'd love to combine them. But I'd be looking for something more lightweight than this, maybe even local without cloud, but at least single service with sqlite or similar.
Would be amazing if it was very customizable to let people build or customize their own pipelines from some basic building blocks, still keeping the slick and fast UI from Linear.
I agree going after Slack at the same time is too ambitious.
Landing page: " serves as an all-in-one replacement of Linear, Jira, Slack, and Notion."
Github repo: "alternative to Linear, Jira, Slack, Notion, Motion"
Github org: "Alternative to Jira, Linear, Slack, Notion, Motion, and Roam by Huly Labs"
For wiki / notion we use Outline. For slack, Matrix. For CRM, Espo. For code repositories, forgejo, but we don't self host it, we use codeberg.
I think this stack would probably cost us a couple thousand dollars a month if we paid for SaaS, since we have around a hundred members. Instead we pay something like $30/month for a couple of Hetzner servers through Elestio.
When I work full time and get to use all the bells and whistles of SaaS products like Linear, sometimes I think it's cool that the tool will do something like point out I'm about to create a duplicate ticket... But thousands of dollars a month cooler? Not sure, but not my money!
Like ORMs, every programmer needs to write one.
Pick one. Stick with it.
EDIT: Ahh it's a video, https://huly.io/videos/pages/pricing/plans/common.mp4
[1]: https://ziglang.org/
Isn’t a PM platform mostly text-like?
Not criticising (gitlab manages same) just curious what’s driving that.
although I'm sure mongo isn't helping matters, either < https://github.com/hcengineering/huly-selfhost/blob/1d97e9ed...>
Any suggestions?
Disclaimer aside, I've heard very positive things about the search that ships with PostgreSQL, and similarly positive things for the few search extensions one can add (assuming extensions are available where you're running PG)
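For reference, the built-in Postgres search mentioned above needs no extension at all for basic full-text work; a sketch (table/column names are invented; syntax per standard PostgreSQL 12+):

```sql
-- Generated tsvector column plus a GIN index
ALTER TABLE issues ADD COLUMN search tsvector
    GENERATED ALWAYS AS (
        to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, ''))
    ) STORED;
CREATE INDEX issues_search_idx ON issues USING GIN (search);

-- Query with ranking
SELECT id, title
FROM issues
WHERE search @@ websearch_to_tsquery('english', 'database backup')
ORDER BY ts_rank(search, websearch_to_tsquery('english', 'database backup')) DESC
LIMIT 20;
```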
https://github.com/quickwit-oss/tantivy#features seems to be a good starting place if you want to embed search, and its https://github.com/quickwit-oss/quickwit friend comes up a lot, but be sure to bear its AGPL license in mind
Toshi is one of the ones which is striving for ES compatibility at the search level, making it a drop-in replacement https://github.com/toshi-search/Toshi#example-queries although last time I played with it, they were QUITE A WAYS away, depending on how deep one is in the ES query DSL
And then just last week <https://news.ycombinator.com/item?id=41797041> we were blessed with Nixiesearch <https://github.com/nixiesearch/nixiesearch>, which is a pretty specialized engine but will almost certainly be less resource intensive than Elastic (or its FOSS friend OpenSearch)
Browsing https://github.com/topics/search-engine I was reminded about a bunch that I either haven't tried or I tried so long ago I don't recall their pro/con lists
https://github.com/meilisearch/meilisearch
https://github.com/typesense/typesense
https://github.com/qdrant/qdrant
I'd try a more modular approach because IMO a substantial part of the problem is "horses for courses": PMs and developers have very different skills and requirements. Even inside those categories there is substantial variation.
I see no reason why the UI for developers has to be same as for PMs and higher ups. My ideal PM solution would involve the CLI and the notion of committing changes and being able to organize information cleverly. Efficiency, bare-boned, no fluff, no pixels that add zero information. These would all be things I am interested in.
My boss' ideal PM solution would probably involve some unholy marriage of Salesforce, Excel and CSVs without any organization whatsoever in a screen that explodes with fireworks and deliberately slows everything down and adds lag and loading screens so you feel you are doing important work. You can tell I am jaded, but my point is, fine. Let them have it. I see no reason to approach this problem with one, fixed, set of interaction patterns.
It's a common theme these days for me. Why does everything have to be so monolithic? Why is everything so samey?
There is some need for differentiation, but also a need to make sure there is a shared understanding of priorities and state.
In fact, thanks to your comment, I strengthened my conviction that the ontology needs to be foreground and center in all cases instead of the appearance(s), which receive substantial, and IMO excessive, attention.
found the tweet:
> We charged $89,775 for this new @huly_io landing page