Implementing the UI for one exact use case is not much trouble, but figuring out what that use case is, is difficult. And defending that use case from the line of people who want "that + this little extra thing", or the "I just need ...", is difficult too. It takes a single strong-willed defender, or some sort of onerous management structure, to prevent the interface from quickly devolving back into the million options or schisming into other projects.
Simply put, it is a desirable state, but an unstable one.
I think it's because they are not using the product they are designing. A lot of the problems you typically see in modern UIs would have been fixed before release if the people writing them were forced to use them daily for their job.
For example, dropdown menus with 95 elements and no search/filter function that are too small and only let you see 3 lines at a time. Engineering tools (e.g. a FEM solution) that don't allow you to directly copy-paste data into tables are another one they would never tolerate if they were the ones forced to manually type it all in or go through a convoluted import/export process that requires a dozen clicks.
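The galling part is that the missing piece is usually trivial. A minimal sketch of the kind of type-to-filter function such a dropdown needs (names and structure hypothetical, not from any particular product):

```typescript
// Hypothetical sketch: the filter a 95-item dropdown is missing.
// Given the full option list and whatever the user has typed so far,
// return only the matching entries, case-insensitively.
function filterOptions(options: string[], query: string): string[] {
  const q = query.trim().toLowerCase();
  if (q.length === 0) return options; // empty query shows everything
  return options.filter((opt) => opt.toLowerCase().includes(q));
}
```

Wire that to a text input above the list and the 95-element menu stops being a scrolling exercise; it's an afternoon of work that never happens because the people shipping it never have to scroll it themselves.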
This is the part where people get excited about AI. I personally think they're dead wrong on the process, but strongly empathize with that end goal.
Giving people the power to make the interfaces they need is the most enduring solution to this issue. We had attempts like HyperCard, Delphi, or Access forms. We still get Excel forms, Google Forms, etc.
Having tools to incrementally try stuff without having to ask the IT department is IMHO the best way forward, and we could look at those as prototypes for more robust applications to build from there.
Now, if we could find a way to aggregate these ad hoc apps in an OSS way...
The usual situation is that the business department hires someone with a modicum of talent or interest in tech, who then uses Access to build an application that automates or helps with some aspect of the department's work. They then leave (in a couple of cases these people were just interns) and the IT department is then called in to fix everything when it inevitably goes wrong. We're faced with a bunch of beginner spaghetti code [0], utterly terrible schema, no documentation, no spec, no structure, and tasked with fixing it urgently. This monster is now business-critical because in the three months it's been running the rest of the department has forgotten how to do the process the old way, and that process is time-critical.
Spinning up a proper project to replace this application isn't feasible in the short term, because there are processes around creating software in the organisation, for very good reasons learned painfully from old mistakes, and there just isn't time to go through that. We have to fix what we can and get it working immediately. And, of course, these fixes cause havoc with the project planning of all our other projects because they're unpredictable, urgent, and high priority. This delays all the other projects and helps to give IT a reputation as taking too long and not delivering on our promised schedules.
So yeah, what appears to be the best solution from a non-IT perspective is a long, long way from the best solution from an IT perspective.
[0] and other messes; in one case the code refused to work unless a field in the application had the author's name in it, for no other reason than vanity, and they'd obfuscated the code that checked for that. Took me a couple of hours to work out wtf they'd done and pull it all out.
Part of the problem is that the novices who create these applications don't consider all the edge cases and gnarly non-golden-path situations, but the experienced devs do. So the novice slaps together something that does 95% of the job with 5% of the effort, but when it goes wrong the department calls in IT to fix it, and that means doing the remaining 95% of the effort. The result is that IT is seen as slow and bureaucratic, when in fact they're just doing the fecking job properly.
Which is a huge reason that learning a RAD (rapid application development - emphasis on rapid) tool is a pretty useful skill.
If you want a developer to write good code quickly, put them in an isolated silo and don't disturb them.
If you want a developer to engage with the business units more, be prepared for their productivity to drop sharply.
As with all things in tech, it's a trade-off.
IT should not be focusing on the theoretical, platonic Business Process. It never exists in practice anyway. They should focus on streamlining the actual workflow of actual people. I.e., the opposite of the usual advice. Instead of understanding what users want and doing it, just do what they tell you they want. The problem with the standard advice is that the thing you seek to understand is emergent, no one has a good definition of it, and it will change three times before you finish your design doc.
To help the company get rid of YOLOed hacks in Excel and such made by interns, IT should YOLO better hacks. Rapid delivery and responsiveness, but much more robust and reliable because of the actual developer expertise behind it.
If you streamline a shitty process, you will have diarrhea...
Unfortunately, most processes suck and need improvement. It isn't actually IT's job to improve processes. But almost always, IT is the only department that is able to change those processes nowadays since they are usually tied to some combination of lore, traditions, spreadsheets and misused third-party software.
If you just streamline what is there, you are cementing those broken processes.
It's because of that condescending, know-it-all attitude that people actively avoid getting IT involved in anything, and prefer half-assed Excel hacks. And they're absolutely right.
Work with them and not over them, and you may get an opportunity to improve the process in ways that are actually useful. Those improvements aren't apparent until you're knee-deep in mud yourself, working hand in hand with the people you're trying to help.
Also, if you have ever worked with anyone trying to get specifications worked out, you’ll see that most people (including devs) rely on intuition rather than checklists and will always forget to tell you something that is critical.
The thing is that the cost of a change in the business can be a simple memo. But for software that usually means a redesign.
That's an illusion. The reality is, it's all hacky solutions on top of hacky solutions. Even in manufacturing: the spec may be fixed, and the factory line produces identical products by the million - but the spec was developed through an ad-hoc process, and the factory line itself is a pile of hacks that needs continued tuning to operate. And there is no perfectly specced out procedure for retooling a factory line to support the newest spec that came out of design department - retooling is, in itself, a small R&D project.
> Also, if you have ever worked with anyone trying to get specifications worked out, you’ll see that most people (including devs) rely on intuition rather than checklists and will always forget to tell you something that is critical.
This is the dirty truth about the universe - human organizations are piles of hacks, always in flux; and so is life itself. The sameness and harmony we see in nature is an illusion brought on by scale (in two ways - at large scale, because we live too short to see changes happening; at small scale, because the chaos at biomolecular level averages out to something simpler at the scale we can perceive).
Order and structure are not the default. It takes hard work to create and maintain them, so it had better be worth the cost. The prevalence of Excel-based hacks in corporate settings is proof positive that, for internal software, it usually isn't, despite what the IT department thinks.
> The thing is that cost of changes in the business can be a simple memo. But for software that usually means redesign.
Which is why you shouldn't be building cathedrals that need expensive rework every other week because of some random memo. Instead, go down to where people work; see them tweaking their shovels, take those shovels and make the tweak they want the proper way.
A bit of a tangent: I think the idea of coddling users is what’s leading to the complexity of all those systems. We’re building cathedrals when we need tents. Instead of having small, sharp software tools that can be adjusted easily, we’re developing behemoths that are supposed to handle everything under the sun (systemd, a lot of modern package managers, languages tied to that one IDE, …)
I'm not really arguing for mega-tools with locked-down workflows. But there's usually some happy-ish medium between chaos and rigid monoliths.
I disagree: it's a business prioritisation issue (not necessarily a problem). Ultimately, a lot of the processes are there because the wider business (rightly) wants IT to work on the highest impact issues. A random process that 3 people suffer from probably isn't the highest impact for the business as a whole.
Also, because it's not high impact, it makes sense that an intern is co-opted to make life easier (also as a learning experience), however it also causes the issues OP highlighted.
The problem is solvable, I think, but it's not easily solvable!
You need structure if you have an org of 100+ employees. If it is smaller than that, I don’t believe you get a dev department.
Most of these teams only want a straightforward spec, to shut themselves off from distractions, just to emerge weeks or months later with something that completely misses the business case. And yet, they will find ways to point fingers at the product owner, project manager, or client for the disaster.
The huge majority of devs want to understand the business and develop high quality software for it.
In one business I worked for, the devs knew more about the actual working of the business than most of the non-IT staff. One of the devs I worked with was routinely pulled into high-level strategy meetings because of his encyclopaedic knowledge of the details of the business.
I.e. done right, it should be not just possible but completely natural for a random team lead in the mail room to call IT and ask, "hey, we need a yellow highlighter in the sheet for packages that Steve from ACME Shipping needs to pick on extra evening run, can you add it?", and the answer should be "sure!" and they should have the new feature within an hour.
Yes, YOLO development straight on prod is acceptable. It's what everyone else is doing all the time, in every aspect of the business. It's time for developers to stop insisting they're special and normal rules don't apply to them.
The main reason you want a computer is cheap emulation (CAD, DAWs, …) or fast (and reliable) automation. Both require a great deal of specification to get right.
The single most valuable tool is user testing. However, it really takes quite a few rounds of actually creating a design and seeing how wrong you were about the other person’s capabilities to grok how powerful user testing is in revealing your own biases.
And at its core it’s not hard at all. The most important lesson really is a bit of humility. Actually shutting up and observing what real users do when you don’t intervene.
Shameless plug, my intro to user testing: https://savolai.net/ux/the-why-and-the-how-usability-testing...
I assume those processes weren't applied when deciding to use this application, why? Was there a loophole because it was done by an intern?
The loophole is that if you have Office or similar, you already have a variety of development environments. IT/compliance/finance don't care what files you produce with the applications you have, and no one else is paying attention initially either, but they would have a say (and a procedure for you to follow) if you wanted to bring in or create a new application. The usual process is bypassed.
This is more commonly associated with Excel, but it applies to Access too (less so than it used to, but there are still plenty of people out there who rely on it daily).
Once the demo/prototype/PoC is there, it is a lot easier to “fix up” than to spin up a project in anything else, or to bring in something else that is already available, for the same reasons it was done in Excel/Access in the first place, plus the added momentum: the job is already at least part way done, so using something else would be a more complete restart, and you need to justify that time as well as any other costs and risks.
[Note: other office suites exist and have spreadsheets & simple DBs with similar capabilities, or at least a useful subset of them, of course, but MS Office's Excel & Access are, for better or worse, fairly ubiquitous]
Then think again of those managers getting paid manager salaries who couldn't figure this out themselves - or worse, the ones who want to shut it all down because someone didn't "follow the procedure" (the procedure of not doing anything useful???)
This reminds me of the "just walk confidently into their office and ask for a job to get one!" advice. It sounded like bullshit to me until I got to stay with some parts of a previous company, where the hiring process really wasn't far off that.
That's also the kind of company where contracts and vendor choices get negotiated on golf courses, and where the CEO's buddies could just as well be running the company and it would be the same.
I feel for you.
Love the assumption "when it inevitably goes wrong." In real life, many of these applications work perfectly for years and assist employees tremendously. The program doesn't fail, but the business changes - new products, locations, marketing, payment types, inventory systems, tons of potential things.
And yes, after the original author is gone, nobody is left to update the program. Of course, a lot of programmers or IT folks probably could update it, but ew, why learn and write Access when we can create a new React app with a microservices-based backend including Postgres in the cloud and spin up a Kubernetes cluster to run it.
And then you need to implement that, which is never an easy task, and maintain the eternal vigilance to both adhere to the vision but also fit future changes into that vision (or vice versa).
All of that is already hard to do when you're trying to build something. Only harder in a highly collaborative voluntary project where it's difficult or maybe even impossible to take that sort of ownership.
Is this an inherently bad thing if the software architecture is closely aligned with the problem it solves?
Maybe it's the architecture that was bad. Of course there are implementation details the user shouldn't care about and it's only sane to hide those. I'm curious how/why a user workflow would not be obviously composed of architectural features to even a casual user. Is it that the user interface was too granular or something else?
I find that just naming things according to the behavior a layperson would expect can make all the difference. I say all this because it's equally confusing when the developer hides way too much. Those developers seem to lack experience outside their own domain and overcomplicate what could have just been named better.
Not at all. Talented human artists still impress me as doing the same level of deep "wizardry" that programmers are stereotyped with.
I don't think that's entirely true, what I usually see is people that think AI art is just as good as many artists.
You can be impressed by something and still think a machine can do it just as well. People who can do complex mental arithmetic are impressive, even if that skill has been made mostly obsolete by calculators.
I also remember the hostility of my university's informal IT chat groups. Newbs were insulted for not knowing basic stuff instead of being helped. A truly confident person does not feel the need to do that. (And it was amazing having a couple of those people writing very helpful responses in the middle of all the insulting garbage.)
Other engineering disciplines are simpler because you can only have complexity in three dimensions, while in software complexity can be everywhere.
Crazy to believe that
Cost, safety, interaction between subsystems (developed by different engineering disciplines), tolerances, supply chain, manufacturing, reliability, the laws of physics, possibly chemistry and environmental interactions, regulatory, investor forgiveness, etc.
Traditional engineering also doesn't have the option of throwing arbitrary levels of complexity at a problem, which means working within tight constraints.
I'm not an engineer myself, but a scientist working for a company that makes measurement equipment. It wouldn't be fair for me to say that any engineering discipline is more challenging, since I'm in none of them. I've observed engineering projects for roughly 3 decades.
Are there any good resources for developing good UX for necessarily complex use cases?
For teasing apart complex workflows I'd suggest Holtzblatt and Beyer's Contextual Design book, I taught a user-centered research and design class many years ago and used that as our textbook, hopefully it still holds up.
For organizing complex applications I like to start with affinity diagrams, card sorts, and collaborative whiteboard sessions. And of course once you have a working prototype, spend as much time as possible quietly watching people interact with your software.
The best method I have found is to use the interface and fix the parts that annoy me. After decades of games and internet I think we all know what good interfaces feel like. Smooth and seamless to get a particular job done. If it doesn't feel good to use it is going to cause problems with users.
That said, I see the software they use on the sales side. People will learn complexity if they have to.
The toughest hurdle to overcome as a developer is not thinking about the gui as a thin client for the application, because to the user, the gui is the application. Developers intuitively keep state in their head and know what to look for in a complex field of information, and often get frustrated when not everything is visible all at once. Regular users are quite different— think about what problems people use your software to solve, think about the process they’d use to solve them, and break it down into a few primary phases or steps, and then consider everything they’d want to know or be able to do in each of those steps. Then, figure out how you’re going to give focus to those things… this could be as drastic as each step having its own screen, or as subtle as putting the cursor in a different field.
Visually grouping things, by itself, is a whole thing. Important things to consider that are conceptually simple but difficult to really master are informational hierarchy and how to convey that through visual hierarchy, gestalt, implied lines, type hierarchy, thematic grouping (all buttons that initiate a certain type of action, for example, might have rounded corners.)
You want to communicate the state of whatever process, what’s required to move forward and how the user can make that happen, and avoid unintentionally communicating things that are unhelpful. For example, putting a bunch of buttons on the same vertical axis might look nice, but it could imply a relationship that doesn’t exist. That sort of thing.
A book that helps get you into the designing mindset even if it isn’t directly related to interface design is Don Norman’s The Design of Everyday Things. People criticize it like it’s an academic tome — don’t take it so seriously. It shows a way of critically thinking about things from the users perspective, and that’s the most important part of design.
Nor can the design world, for that matter. They think that making slightly darker gray text on gray background using a tiny font and leaving loads of empty space is peak design. Meanwhile my father cannot use most websites because of this.
That's part of the problem, they'll defend their poorly visible choice by lawyering "but this meets the minimal recommended guideline of 2.7.9"
It's like dark patterns are the ONLY pattern these days... Where did we go wrong?
Win95 was peak UI design.
I don’t understand modern trends.
Then the world threw away the menus, adopted an idiotic “ribbon” that uses more screen real estate. Unsatisfied, we dumbed down desktop apps to look like mobile apps, even though input technology remains different.
Websites also decided to avoid blue underlined text for links and be as nonstandard as possible.
Frankly, developers did UI better before UI designers went off the deep end.
A few days ago I had trouble charging an electric rental car. When plugging it in, it kept saying "charging scheduled" on the dash, but I couldn't find out how to disable that and make it charge right away. The manual seemed to indicate it could only be done with an app (ugh, disgusting). Went back to the rental company, they made it charge and showed me a video of the screen where to do that. I asked "but how on earth do you get to that screen?". Turned out you could fucking swipe the tablet display to get to a different screen! There was absolutely no indication that this was possible, and the screen even implied that it was modal because there were icons at the bottom which changed the display of the screen.
So you had: zero affordances, modal design on a specific tab, and the different modes showed different tabs at the top, further leading me to believe that this was all there was.
99% of the users are not using the mobile version.
Developers built a web UI for creating containers for the labs, taking the advice from this (then future) article too literally. Their app could only build containers, in the approved way. Yet, not all labs were possible to run in containers, and the app did not account for that (it was a TODO). Worse, people responsible for organizing the labs did not know that not all labs are compatible with containers.
Lab coordinators thus continued to create containers even in cases where it didn't make sense, despite the explicit warning "in cases X, Y, Z, do not proceed, call Alexander instead".
So if you make one button you better make that it is always the right button. People follow the happy-but-wrong path way too easily if there is no other obvious one.
In this example I wonder if the tool was too "MVP" and they didn't evaluate what minimum viable would mean for the users?
Later the missing pieces were added, we had "two buttons" and the resulting user confusion because they did not know and could not be taught whether a container makes sense for a particular lab.
(Commercial software is far from immune to this as well: professional tools like CAD are notoriously arcane and often have a huge number of special-purpose features, and they're not incentivised to improve their UI model because it would alienate their existing users, as well as not show up on the feature lists which are often used to drive purchasing decisions)
Most people just keep the default. When the default is Linux (say, the Steam Deck), most people just keep Linux.
Yeah, no, that isn't it.
or, simply put, nerds
it takes a different background, approach, and skillset to design UX and interfaces
if anything, FOSS should figure out how to attract skilled artists so the majority of designs and logos don't look so blatantly amateurish.
It's difficult to get those kinds of creatives to donate their time (trust me on this, I'm always trying).
I'm an ex-artist, and I'm a nerd. I can definitively say that creating good designs is at least as difficult as creating good software, but it seldom earns the kind of margin that you can make from software, so misappropriation hurts artists a lot more than programmers.
I don't know if that qualifies as "getting ripped off", but it's not exactly paying me either.
Developers seem to have a product that people can actually attach a value to, but art and music, not so much. They seem to be in different Venn circles.
In all of it, we do stuff because of the love of the craft. One of the deeper satisfactions, for me, is when folks appreciate my work (payment is almost irrelevant; except for "keeping score"). It's pretty infuriating, to have someone treat my work as if it is a cheap commodity. There's a famous Star Trek scene, where Scotty and his crew are being disciplined for a bar fight with some Klingons[0], and Scotty throws the first punch. I can relate.
This says more of your perception I think. Many people attach value to art and music. Many people do not attach value to software.
Software people love writing software to a degree where they’ll just give it away. You just won’t find artists doing the same at the same scale. Or architects, or structural engineers. Maybe the closest are some boat designs but even those are accidental.
It might just be that we were lucky to have some Stallmans in this field early.
Ego is likely involved. I love my babies, but what others think of my work isn't that important (which is good, because others aren't very impressed).
I make tools that I use, mostly.
Not sure how that happens with a painting, even a digital one.
But professional graphic designers train to work in product-focused teams. They are also able to create collaborative suites of deliverables.
Most developers will find utility in the work of graphic designers, as opposed to fine artists.
I think this is because there are plenty of software nerds with an interest in typography who want to see more free fonts available.
But more importantly, most of them don't really care beyond "oh copyright's the thing that lets me sue big company man[0]".
The real impediment to CC-licensed creative works is that creativity resists standardization. The reason why we have https://xkcd.com/2347/ is because software wants to be standardized; it's not really a creative work no matter what CONTU says. You can have an OS kernel project's development funded entirely off the back of people who need "this thing but a little different". You can't do the same for creativity, because the vast majority of creative works are one-and-done. You make it, you sell it, and it's done. Maybe you make sequels, or prequels, or spinoffs, but all of those are going to be entirely new stories maybe using some of the same characters or settings.
[0] Which itself is legally ignorant because the cost of maintaining a lawsuit against a legal behemoth is huge even if you're entirely in the right
Another thing is that the vast amount of fan fiction out there has a hub-and-spoke model forming an S_n graph around the solitary 'original work' and there are community norms around not 'appropriating' characters and so on, but you're right that community works like the SCP Foundation definitely show that software-like property of remixing of open work.
Anyway, all to say I liked your comment very much but couldn't reply because you seem to have been accidentally hellbanned some short while ago. All of your comments are pretty good, so I reached out to the HN guys and they fixed it up (and confirmed it was a false positive). If you haven't seen people engage with what you're saying, it was a technical issue not a quality issue, so I hope you'll keep posting because this is stuff I like reading on HN. And if you have a blog with an RSS feed or something, it would be cool to see it on your profile.
I don't, as a rule, ever ask artists to contribute for free, but I still occasionally get gifted art from kind folks. (I'm more than happy to commission them for one-off work.)
Artists tragically undercharge for their labor, so I don't think the goal should be "coax them into contributing for $0" so much as "coax them into becoming an available and reliable talent pool for your community at an agreeable rate". If they're enthusiastic enough, some might do free work from time to time, but that shouldn't be the expectation.
There’s a very good reason for me to be asking for gratis work. I regularly do tens of thousands of dollars’ worth of work for free.
It’s a matter of Respect. It’s really amazing, how treating folks with simple Respect can change everything.
I like working in teams, but I also participate in an organization, where we’re all expected to roll up our sleeves, and pitch in; often in an ad hoc fashion.
If it is your job, then go do it as a job. But we all have jobs. Free software is what we do in our free time. Artists don't seem to have this distinction. They expect to be paid to do a hobby.
It usually involves developing a design language for the app, or sometimes, for the whole organization (if, like the one I do a lot of work for, it's really all about one app). That's a big deal.
Logo design is also a much more difficult task than people think. A good logo can be insanely valuable. The one we use for the app I've done a lot of work on, was a quick "one-off," by a guy who ended up running design for a major software house. It was a princely gift.
> Logo design is also a much more difficult task than people think. A good logo can be insanely valuable. The one we use for the app I've done a lot of work on, was a quick "one-off," by a guy who ended up running design for a major software house. It was a princely gift.
A lot of developers also tend to invest quite an insane amount of work into their preferred open-source project and they do know how complicated their work is, and also how insane the value is that they provide for free.
So, where is the difference?
Are you quoting someone? Yeah it's a real job, and so is programming. I don't think anyone in this conversation is being dismissive about either job.
As a programmer, working with a good graphic designer can be very frustrating, as they can demand that I make changes that seem ridiculous to me but, after the product ships, make all the difference. I've never actually gotten used to it.
That's also why it's so difficult to get a "full monty" treatment, from a designer, donating their time.
Which other comment?
If you mean the one saying it's not harder than programming, that's not calling it easy.
Very different skillset. There was a comment about how ghastly a lot of software-developed graphical assets can be.
Tasteful creativity does not grow on trees.
And even if they're wrong about which one is typically harder, they weren't saying it was easy, and weren't saying it was significantly easier than programming.
The same can be said for any vocation that generates a product. An expertly-crafted duck decoy can reflect the same level of experience and skill as a database abstraction.
I have had the privilege to work with some of the top creatives, as well as scientists and engineers, in the world, and have seen the difference.
It’s not like graphic design is harder than programming.
I’d rather have crappy graphics than pay designers instead of programmers for free OSS.
Because it's a different job!
Your post is like asking, "Why is breathing free but food costs money?"
Yeah it's a different job but they're both jobs. Why should one be free and one not be free?
That said, I don't think it's as simple as that. Coding is a kind of puzzle-solving that's very self-reinforcing and addictive for a certain type of person. Coders can't help plugging away at a problem even if they're not at the computer. Drawing, on the other hand, requires a lot more drudgery to get good, for most people anyway, and likely isn't as addictive.
Visual art is millennia older and has found many more niches, so, besides there being a very clear history and sensibility for what is actually fundamental vs industry smoke and mirrors, for every artist you encounter, the likelihood that their goals and interests happen to coincide with "improve the experience of this software" is proportionately lower than in development roles. Calling it drudgery isn't accurate because artists do get the bug for solving repetitive drawing problems and sinking hours into rendering out little details, but the basic motive for it is also likely to be "draw my OCs kissing", with no context of collaboration with anyone else or building a particular career path. The intersection between personal motives and commerce filters a lot of people out of the art pool, and the particular motives of software filters them a second time. The artists with leftover free time may use it for personal indulgences.
Conversely, it's implicit that if you're employed as a developer, there is someone else you are talking to who depends on your code and its precise operation, and the job itself is collaborative, with many hands potentially touching the same code and every aspect of it discussed to death. You want to solve a certain issue that hasn't yet been tackled, so you write the first attempt. Then someone else comes along and tries to improve on it. And because of that, the shape of the work and how you approach it remains similar across many kinds of roles, even as the technical details shift. As a result, you end up with a healthy amount of free-time software that is made to a professional standard simply because someone wanted a thing solved so they picked up a hammer.
It's really not deep.
This seems like a self selection problem. It’s not about forcing people to work for free. It’s about finding designers willing to work for free (just like everyone else on the project).
UI and UX are, for all intents and purposes, lost arts. No one is sitting on the other side of a two-way mirror any more and watching people use their app...
This is how we get UIs that work but suck to use. This is how we allow dark patterns to flourish. You can and will happily do things your users/customers hate if it makes a dent in the bottom line and you don't have to face their criticisms directly.
Which is also why UI/UX on open source projects are generally going to suck.
There's certainly no money to pay for that kind of experiment.
And if you include telemetry, people lose their goddamn minds, assuming the open source author isn't morally against it to begin with.
The result is you're just getting the author's intuitive guesswork about UI/UX design, by someone who is likely more of a coder than a design person.
> You can and will happily do things your users/customers hate if ... you don't have to face their criticisms directly.
A lot of software developers can't take criticism well when it comes to their pet projects. The entire FreeCAD community, for instance, is based entirely around the idea that FreeCAD is fine and the people criticising it are wrong and have an axe to grind, when that is exactly backwards.
Pretty much everyone is a power user of SOME software. That might be Excel, that might be their payroll processor, that might be their employee data platform. Because you have to be if you work a normal desk job.
If Excel was simpler and had an intuitive UI, it would be worthless. Because simple UI works for the first 100 hours, maybe. Then it's actively an obstacle because you need to do eccentric shit as fast as possible and you can't.
Then, that's where the keyboard shortcuts and 100 buttons shoved on a page somewhere come in. That's where the lack of whitespace comes in. Those aren't downsides anymore.
Excel is a simple intuitive UI.
I use 10% of Excel. I don't even know the 90% of what it's capable of.
It hides away its complexity.
For people that need the complex stuff, they can access it via menus/formulas.
For the rest of us, we don't even know it's there.
Whereas, Handbrake shoves all the complexity in your face. It's overwhelming for first time users.
This means they want to add features they couldn't get anywhere else, and already know how to use the existing UI. Onboarding new users is just not their problem or something they care about; they are interested in their own utility, because they aren't getting paid to care about someone else's.
It's not a "nerd" thing.
I think the bigger issue is that power users' use cases are different from non-power users'. Not a skillset problem, but an incentive one.
Overly simplified example:
"Can you make this button do X?" where the existing button in so many ways is only distantly connected to X. And then they get stuck on the idea that THAT button has to be where the thing happens, and they stick with it even if you explain that the usual function of that button is Y.
I simplified it saying button, but this applies to processes and other things. I think users sometimes think picking a common thing, button or process that sort of does what they want is the right entry point to discuss changes and maybe they think that somehow saves time / developer effort. Where in reality, just a new button is in fact an easier and less risky place to start.
I didn't say that very well, but I wonder if that plays a part in the endless adding of complexity to UI where users grasp onto a given button, function, or process and "just" want to alter it a little ... and it never ends until it all breaks down.
In other words, if you need expert help, trust the expert. Ask for what you need, not how to do it.
But sometimes power structures don't allow for it. I worked tech support in a number of companies. At some companies we were empowered to investigate and solve problems... sometimes that took work, and work from the customer. It had much better outcomes for the customer, but fixes were not quick. Customers / companies with good technical staff in management understood that dynamic.
Other companies were "just fix it", and the company, the customers, and management all treated tech support as annoying drones. They got a lot more "you got exactly what you asked for" treatment ... because management and even customers will take the self-defeating quick and easy path sometimes.
On the other hand, if you are using your car for a decade and feel it needs a new belt, then get a new belt. Worst case scenario: you lose some money but learn a bit more about an item you use every day.
Experts don't have your instincts as a user.
So when they would come in asking for a specific part to be replaced with no context I used to tell them that we wouldn't do that until we did a diagnosis. This is because if we did do as they asked and, like in most cases, it turned out that they were wrong they would then become indignant and ask why we didn't do diagnosis for free to tell them that they were wrong.
Diagnosis takes time and, therefore, costs money. If the user was capable of it then they would also be capable enough to carry out the repair. If they're capable of carrying out the diagnosis and the repair then they wouldn't be coming to me. This has proved to be true over many years for everyone from kids with their first car to accountants and even electrical engineers working on complex systems for large corporations as their occupation. That last one is particularly surprising considering that an engineer should know the bounds of their knowledge and understand how maintenance, diagnosis and repair work on a conceptual level.
Don't trust your instincts in areas where you have no understanding. Either learn and gain the understanding or accept that paying an expert is part of owning something that requires maintenance and repair.
I agree, though it sets up a weird dynamic where folks might come back to the expert and complain a problem isn't fixed, but that's not what they asked for; they broke the typical expert/customer dynamic.
If your mechanic is too stupid to recognize the problem after you explain it then you don't have a mechanic, the set of hands you are directing is basically unskilled labor.
That seems like a way to just be a stuck in the mud and wrong the whole time.
Sometimes the best solution is not the most widely-encouraged one.
I generally try to answer the Y but also indicate that it suggests there may be an X that could be better achieved some other way, and mention Z if I'm reasonably confident in what X is. It might increase the chance that the person asking just does Y anyway even if Z would be better, but frankly that's not really my business.
The point of XY problems isn't to call people out on supposedly bad behaviour, it's to push them in the right direction and provide more context.
Sometimes it's just me firing up some SQL queries and discovering "Well this happened 3 times ... ever ..." and we do nothing ;)
In my experience it may be solved by both parties spending the effort and time to first understand what is being asked... assuming they are both willing to stomach the costs. Sometimes it isn't worth it, and it's easier to pacify than respectfully and carefully dig.
That is, either determine the optimal set of features from the outset, design around that, and freeze; or organically reach the optimum and then freeze. After implementing the target feature set, nearly all engineering resources are dedicated to bug fixes and efficiency improvements. New features can be added only after passing through a rigorous gauntlet of reviews that determine whether the value of the feature's addition is worth the inherent disruption and impact to stability and resource consumption, and if so, its integration into the existing UI is approached holistically (as opposed to the usual careless bolt-on approach).
Naturally, there are some types of software where requirements are too fast-moving for this to be practical, but I would hazard a guess that it would work for the overwhelming majority of use cases which have been solved problems for a decade or more and the required level of flux is in reality extremely low.
But it is not a problem unique to the free software world. It is there in the industry as well. The marketplace, though, serves as a reality check, and kills egregious cases.
[1] Granted, "Political non sense" is a dual-purpose skill. In our context, it can be used both for "defending simplicity", as well as "resisting meaningful progress". It's not easy to tell the difference.
In the case of handbrake, I'd just see how I personally use it. Am I doing one thing 99% of the time? Maybe others are too. Let's silo that workflow.
Having people to man a 1-800 number is one way to get that feedback loop. Professional user testing is another. Telemetry / analytics / user tracking, or even being able to pull out statistics from a database on your server, is yet another. Professional software usually has at least two of these, sometimes all four. Free software usually has none.
There are still FLOSS developers out there who think that an English-only channel on Libera.chat (because Discord is for the uneducated n00bs who don't know what's good for them) is a good way to communicate with their users.
What developers want from software isn't what end users want from software. Take Linux for example. A lot of things on Linux can only be done in the terminal, but the people who are able to fix this problem don't actually need it to be fixed. This is why OSS works so well for dev tools.
It is hard to overestimate the difference between creating tools for people who use the tools for hours every day and creating tools for people who use tools once a week or less.
The casual user just wants a tool to crop screenshots and maybe draw simple shapes/lines/arrows. But once they do that they start to think of more advanced things and the simple tool starts to be seen as limiting.
Silksong Daily News went from videos of a voiceover saying "There has been no news for today" over a static image background to (sometimes) being scripted stop-motion videos.
I see people using DAWs, even "pro" ones made by companies presumably interested in their bottom lines. In all cases I have no idea how to use it.
Do I complain about intuitiveness etc? Of course not. I don't know how to do something. That's my problem. Not theirs.
Well, if people fail at that first five minutes, the subsequent thousand hours most often never happens.
And that's why designers are using Photoshop and not Microsoft paint.
Photoshop is good UI design. A normie can use photoshop the same way they use MS paint.
It just loads slower.
A normie doesn't need all the bells and whistles. They can just use photoshop like a glorified MS paint.
You can't do that with GIMP. It's actually really fucking annoying, if you try to use GIMP to do a MS paint job.
There are heaps of Photoshop tutorials on YouTube, which wouldn't be necessary if what you said were true.
I used GIMP to do MS paint stuff years ago when I used it fairly regularly.
GIMP is always a whipping boy for UI design on forums like this and I think it is pretty unfair. It is a pretty good program comparatively. If you want to see bad UI design a much better example is something like Visual Studio. What a mess.
Yeah, big button "Create project" and another, albeit smaller, button for "Run" puts a really high bar for the user to jump over.
Nothing as good as plain old cc followed by a bunch of cryptic flags.
Longer term I wonder if complex apps with lots of features might integrate AI in such a way that users can ask it to generate a UI matching their needs. Some will only need a single button, some will need more.
I'd say it's even more than you've stated. Not only for defending an existing project, but even for getting a project going in the first place a dictator* is needed.
I'm willing to be proven wrong, and I know this flies in the face of common scrum-team-everybody-owns approaches.
* benevolent or otherwise
Taking the Handbrake example, providing a default "simple" interface (as Magicbrake does) would be trivial to implement, maintain, and defend. The existing default "super user" interface could be just a toggle away (and make the toggle sticky so a power user doesn't have to touch it but once).
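A sticky toggle like that is mostly a matter of persisting one preference. A minimal sketch in Python, where the config file name and the mode names are made up for illustration (this is not how Handbrake or Magicbrake actually store settings):

```python
import json
from pathlib import Path

# Hypothetical per-user config location for the toggle.
CONFIG = Path.home() / ".magicbrake.json"

def load_mode(config_path=CONFIG):
    """Return the saved UI mode, defaulting to 'simple' for new users."""
    try:
        return json.loads(config_path.read_text()).get("mode", "simple")
    except (FileNotFoundError, json.JSONDecodeError):
        # Missing or corrupt config: fall back to the beginner-friendly default.
        return "simple"

def save_mode(mode, config_path=CONFIG):
    """Persist the toggle so a power user only has to flip it once."""
    config_path.write_text(json.dumps({"mode": mode}))
```

The point of the sketch is the default: a fresh install shows the simple view, and the one-time cost of switching falls on the power user, who is best equipped to pay it.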
I used to work with an engineer who loved to remind us that a "perfect" interface would have a single button, and that button should be delivered pre-pushed. Always seemed like wise words to me.
I do feel quite strongly that this should be implemented in the app though.
There must be examples of this approach already being used?
FreeCAD is too complicated. Too many ways to accomplish the same task (nevermind only certain ways work too.)
So everything is simple and only 1 way to create gcode. No hidden menus. No hidden state.
Microsoft for a loooong time had that figured out pretty well:
- The stuff that people needed every day and liked to customize the most was directly reachable: right-clicking the desktop offered a shortcut to the CPL for display and desktop symbols.
- More detailed stuff? A CPL that could be reached from the System Settings
- Stuff that was low level but still needed to be exposed somewhat? msconfig.
- Stuff that you'd need to touch very rarely, but absolutely needed the option to customize it for entire fleets? Group Policy.
- Really REALLY exotic stuff? Registry only.
In the end it all was Registry under the hood, but there were so many options to access these registry keys depending on what level of user you were. Nowadays? It's a fucking nightmare, the last truly decent Windows was 7, 10 is "barely acceptable" in my eyes and Windows 11 can go and die in a fire.
My wife is not particularly tech savvy. She is a Linux user, however. When we started a new business, we needed certain applications that only run on Windows and since she would be at the brick and mortar location full time, I figured we could multi-purpose a new laptop for her and have her switch to Windows.
She hated it and begged for us to get a dedicated Windows laptop for that stuff so she could go back to Linux.
Some of you might suggest that she has me for tech support, which is true, but I can't actually remember the last time she asked me to troubleshoot something for her with her laptop. The occasions that do come to mind are usually hardware failure related.
Obviously the thing about generalizations is that they're never going to fit all individuals uniformly. My wife might be an edge case. But she feels at home using Linux, as it's what she's used to ... and strongly loathed using Windows when it was offered to her.
I feel that kind of way about Mac vs PC as well. I am a lifelong PC user, and also a "power user." I have extremely particular preferences when it comes to my UI and keyboard mappings and fonts and windowing features. When I was forced to use a Mac for work, I honestly considered looking for a different position because it was just that painful for me. Nothing wrong with Mac OS X, a lot of people love it. But I was 10% as productive on it when compared to what I'm used to... and I'm "old dog" enough that it was just too much change to be able to bear and work with.
In fact, when I had a similar experience I ended up making a short list (which I since lost) of things that seemed terribly wrong UI wise.
True, overall Mac is just different. The issue that I have with that ecosystem is that too many people consider it "perfect" and don't even consider discussing issues and complaining about things. Every product has pluses and minuses, but if the user "believes blindly" that "there is only one way", that is probably not good for anybody.
After a couple of weeks I adapted just fine to using the Mac, but I surely don't miss it either.
That is becoming less and less true. More and more of the most ardent Apple fans have been complaining about the direction of macOS for years. Developer sentiment is low.
Honestly I’ve really enjoyed the swap. But man I really miss having iMessages across my devices as well as the shared clipboard. By far the two things I missed the most. Everything else I’ve kind of moved on from and can’t even think of off the top of my head anymore
My mom had no trouble adjusting to it. It was all just computer to her in some ways.
If "it [Xubuntu] works like Windows" offended you, I'd like to point out that normies don't care about how operating system kernels are designed. You're part of the problem this simplified Handbrake UI tries to solve. Normies care about things like a start menu, and that the X in the corner closes programs. The interface is paramount for non-technical users.
I currently work in the refurb division of an e-waste recycling company.[0] Most everyone else there installs Ubuntu on laptops (we don't have the license to sell things with Windows), and I started to initially, but an error always appeared on boot. Consider unpacking it and turning it on for the first time, and an error immediately appears: would you wonder if what you just bought is already broken? I eventually settled on Linux Mint with the OEM install option.
They always answered me "it works well".
But what I found during my next visit is a paper with a telephone number of computer helpers, and the laptop was running a fresh copy of Windows, presumably installed by these helpers.
Years later, I built a gaming machine so obviously I needed Windows. Got Win10 and eventually upgraded to 11 and it's just so jarring how unusable it is.
In older Windows I could click on My Computer to see all my drives and files. Now I have no idea where it is, so I just click the little folder icon at the bottom, which opens (I think) my home directory, then I have to click somewhere else to see my C and D drives. I can probably make a desktop shortcut or something, but the point is that it's unintuitive. And PowerShell is not a great terminal, and I haven't found a good one for Windows.
SO, after learning that gaming works well on Linux, I recently switched to Ubuntu, and I haven't looked back. Gaming, AI workflows, everything I need works just perfectly, and if it doesn't I can easily customize things to work the way I want it to. I'm not treated as a criminal for installing software on my computer. It's awesome.
It's one of those situations where "close enough" isn't. The fine details matter.
Windows isn't the way it is because of some purposeful design or anything. No, it's decades of poor decisions after poor decisions. Nothing, and I do mean nothing, is intuitive on Windows. It's familiar! But it is not intuitive.
If you conform to what these commercial offerings do, you are actively making your software worse. On purpose. You're actively programming in baggage from 25 years ago... in your greenfield project.
The advantage to Free Software is that you don't have to change everything with Windows, Apple, Adobe, or Google demand you do (unless they grab control of a FOSS project, like in Firefox's case.) There are a number of writers who recommend Linux and Free software only for that reason - that once you get a workflow going, you don't want to change it according to corporate whims.
> practically never requires its user to fire up a terminal window
This can be a problem. But it will be less of a problem with LLMs. We need to encourage amateur (and proficient) Linux adopters and users to lean on AI to deal with anything giving them problems. I had an LLM walk me through updating a .deb package in MATE to match HEAD upstream, and to do it in a way that would be replaced when Debian updated the package itself. This is something I've been carefully avoiding learning for a decade, and if I had taken the effort to try to learn, it would be weeks of research and I'd have messed up the system multiple times along the way. Instead, after a few false starts, I did it and gained the knowledge to do it again.
The DE needs to be as close to a drop-in replacement as possible while remaining legally distinct. The less the user needs to relearn the better.
Totally agree. My first distro was Elementary because it was sold to me as Mac-like. It’s…sort of that, but it was enough for me to stick with it and now I’ve tried 3 other distros! Elementary is still in place in my n150 server. Bazzite for my big gaming machine. Messed with Mint briefly, wasn’t for me but I appreciated what it was.
Familiarity is so important.
I share this aversion. I have a Mac book work sent me, sitting next to me right now, that I never use. Luckily I’m able to access the vpn via Linux and all the apps I need have web interfaces (office 365).
I hated OS X when I first used it. A lot, actually. I didn't consider leaving my job over it (I couldn't have afforded it at the time even if I had wanted to), but I did think about trying to do an ultimatum with that employer to tell them to buy me a computer with Windows or let me install Linux on the Macbook (this was 2012 so it had the Intel chip). I got let go from that job before I really got a chance (which itself is a whole strange story), but regardless I really hated macOS at the time.
It wasn't until a few years later and a couple jobs after that I ended up growing to really like macOS, when Mavericks released, and a few years later, I actually ended up getting a job at Apple and I refuse to allow anyone to run Windows in my house.
My point is, I think people can actually learn and appreciate new platforms if they're given a chance.
I know, techies love to love or hate the OS. Here there are endless threads waxing lyrical about Windows, macOS or a dozen Linux installs. But 99% of users couldn't care less.
It's kinda like cars. Petrol heads will talk cars for ages. Engine specs. What brand of oil. Gearbox ratios. Whereas I'm like 99% of people - I get in my car to go somewhere. Pretty much the only "feature" a car needs is to make me not worry about getting there.
So for 97% of people the "best" OS is the one they don't notice. The one that's invisible because they want to run a program, and it just runs.
The problem with switching my mom to Linux is not the OS. It's all the programs she uses. And while they might (or might not) be "equivalent" they're not the same. And I'm not interested in re-teaching her every bit of software, and she's not interested in relearning every bit of software.
She's not on "a journey" of software discovery. She has arrived. Changing now is just a waste of time she could be gardening or whatever.
The reason it'll never be the year for Linux Desktop is the same reason it's always been - it's not there already.
I don't see Windows as having much of an edge there. Lots of things seem to change on Windows just for change's sake. I get so tired of the churn on Windows versions and finding how to disable the new crummy features. If you want to avoid relearning all the time, something simple like XFCE is going to be way better.
I feel the need to constantly reiterate this; if someone who works on Windows Update reads this, please consider a different career, because you are categorically terrible at your job. There are plenty of jobs out there that don't involve software engineering.
The biggest brick event in recent times was Shockwave, not Windows. Personally I've never seen a bricked machine, not at home, not at work, not at family.
Of course my anecdata is meaningless, as is your anecdata. YMMV.
This is the second time this has happened to my family from Windows, on different computers.
To the average user, it absolutely will. Unless they happen to run on particularly well-supported hardware, the days of console tinkering aren't gone, even on major distros.
What's fixable to the average Linux user and what's fixable to the average person (whose job is not to run Linux) are two very, very different things.
But they specifically said it is not about OS, but about programs on this OS. There is Windows-based software that looks the same as 2 decades ago.
> I refuse to allow anyone to run Windows in my house
So you don’t care for peoples preferences unless they match your own? I don’t get that. You were in the same position. Why don’t you just let people use what they like?
There are only three people in my house, two of which appear to be happy with macOS and one (me) is happy with Linux.
I am not upset with the job requiring people to use macOS. That job was awful for a whole bunch of wonderful reasons, but that wasn’t one of them. If people expect me to play tech support in my house I want an OS that I understand and isn’t terrible to do it.
Linux was like a racing car. Raw and refined. Every control was directly connected to powerful mechanical components, like a throttle, clutch and steering rack. It did exactly what you told it to do, but to be good at it requires learning to finesse the controls. If you failed, the lessons were delivered swiftly and harshly: you would lose traction, spin and crash.
Windows was more like a daily driver. Things were "easier", but at the cost of having less raw control and power, like a clutch with a huge dual-mass flywheel. It's not like you can't break a daily driver; any experienced computer guy has surely broken Windows more than once. But you can just do more within the confines of the sandbox. Linux required you to leave.
It's different now. Distros like Ubuntu do almost everything most people want without having to leave the sandbox. The beautiful part about Linux, though, is it's still all there if you want it, and nice to use if you get there, because it's built and designed by people who actually do that stuff. Nowadays I tend to agree it is mostly just what you're used to and what you've already learnt.
That’s not always a possibility. See for example:
https://www.theverge.com/2020/5/20/21262302/ap-test-fail-iph...
Those people didn’t need or want Photoshop or a complicated program with tons of options to convert image formats from anything to anything. Even a simpler app like Preview implies knowing where to look for the conversion part and which format to pick. They could have done instead with a simple window which said “Drop File Here”, and if it’s an HEIC, convert it to JPEG. Could even have an option to delete the original, but that’s about it.
There’s an Alfred workflow which is basically that. It does have some configuration, but the defaults are sensible and doesn’t let you screw up the important part of “detect this format which is causing me trouble and produce a format which will work”
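The core of such a tool is one tiny decision: is this a format that breaks uploads, and if so, what should it become? A sketch of just that decision in Python (the format list is an assumption for illustration, and actually decoding HEIC would need a library such as pillow-heif, not shown here):

```python
from pathlib import Path
from typing import Optional

# Formats that commonly trip up upload forms; JPEG is the safe target.
# This set is an illustrative guess, not an exhaustive list.
PROBLEM_FORMATS = {".heic", ".heif", ".tiff", ".webp"}

def conversion_target(path: str) -> Optional[str]:
    """Return the JPEG path to convert to, or None if the file is already fine."""
    p = Path(path)
    if p.suffix.lower() in PROBLEM_FORMATS:
        return str(p.with_suffix(".jpg"))
    return None
```

Everything else in the hypothetical app (the drop zone, the optional "delete original" checkbox) is UI chrome around this single function.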
But then you have to remember the names of 200 distinct software that all do this one thing, so you make a meta-software to manage and organize them, and you're back to square one only with more indirection
When we moved to Canada from the UK in 2010 there was no real way to access BBC content in a timely manner. My dad learned how to use a VPN and Handbrake to rip BBC iPlayer content and encode it for use on an Apple TV.
You had to do this if you wanted to access the content. The market did not provide any alternative.
Nowadays BBC have a BritBox subscription service. As someone in this middle space, my dad promptly bought a subscription and probably has never fired up Handbrake since.
> we have somebody who has somehow obtained a "weird" video file
Why are you arriving at the conclusion that this requires complex software, rather than just a simple UI that says "Drop video file here" and "Fix It" below? E.g., instead of your conclusion "stick to your walled-garden-padded-room reality where somebody else gives you a video file that works", another possibility is the simple UI I described? That seemed to me the point of the post.
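A one-button "Fix It" needs very little logic; the whole job is choosing safe defaults on the user's behalf. A hedged sketch that only builds the command line (the ffmpeg flags are common compatibility defaults, not anything Handbrake actually ships; actually running it would require ffmpeg to be installed):

```python
def fix_it_command(input_path: str) -> list:
    """Build an ffmpeg argv for a one-button 'make this file play anywhere' fix.

    H.264 video + AAC audio in an MP4 container is about the most broadly
    compatible combination; every choice here is a plausible default picked
    for illustration.
    """
    output_path = input_path.rsplit(".", 1)[0] + "_fixed.mp4"
    return [
        "ffmpeg",
        "-i", input_path,            # the "weird" file the user dropped
        "-c:v", "libx264",           # near-universal video codec
        "-preset", "fast",
        "-c:a", "aac",               # near-universal audio codec
        "-movflags", "+faststart",   # lets playback begin before the full download
        output_path,
    ]
```

The point is that the hard part of such a tool isn't code at all: it's a maintainer willing to commit to one opinionated set of defaults and defend it.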
This is really just my read for why this sort of software isn't more common. Go ahead and make it, and if it ends up being popular I'll look the fool.
I don't think that's true at all. The tool linked here is exactly the kind of utility that does one single task and that people are happy to download. Most people use software to solve a problem, not to play around with it and figure out if they have a use for it.
To be fair, the Audacity UX designer made a massive video about the next UX redesign and how he tried to get rid of "modes" and the "Audacity says no" problem:
https://www.youtube.com/watch?v=QYM3TWf_G38
So this problem should get better in the future. Good UX (doesn't necessarily have to have a flashy UI, but just a good UX) in free software is often lacking or an afterthought.
Now he got rid of the modes by adding handles and border actions, so: 1) it wasted some space that could be used for information; 2) it requires more precision from users, because to do the action you must now target a tiny handle/border area; 3) same, but for other actions, as you now have to avoid those extra areas to do other tasks.
While this might be fine for casual users as it's more visible, the proper way out is, of course,... MODES and better ones! Let the default be some more casual mode with your handles, but then let users who want more ergonomics use a keybind to allow moving the audio segment by pressing anywhere in that segment, not just in the tiny handle at the top. And then you could also add all those handles to visually indicate that now segments are movable or turn your pointer into a holding hand etc.
Same thing in the example - instead of creating a whole new separate app with a button you could have a "1-button magicbrake" mode in handbrake
What is there to see? You add a bar that takes space. That space can be taken up by something useful. Just like you have apps that hide app title bar and app menus so you can have more space for your precious content. This is especially useful for high-info-density apps like these audio/video/photo authoring ones. Note how tiny those handles are in the video, why do you think that is?
> tradeoff is an incredible degree of customisation
You don't have that tradeoff; neither of the two solutions is anywhere close to "incredible customization", so you can pick either without it.
> In terms of precision, they're working on accessibility issues
Working towards what magic solution?
> but I'm not sure how this change is any special than any other UI.
why does it have to be special? Just a bog standard degradation common to any UI (re)design, nothing special about it.
> the modes were horrid
Of course they were. Just like they were horrid in that MS Paint app the dev worked on before. But you can make any UI primitive horrid, even buttons, that's no reason to remove them, but to improve them!
1. Open a file; 2. Click the start button in the toolbar.
The space is used for information. The fact clips in Audacity finally have names and you can see those names is a fantastic improvement. The space taken up by the clip title is the handle.
"Audio 1 #1" "Audio 1 #1" "Audio 1 #1"
"Audio 1 #1" "Audio 1 #1"
names replacing the audio wave height!
But sure, if you need names permanently right there and are ok to lose space to show them, and if the handle is inconveniently small to only fit the text, then yes, you wouldn't lose space in that case. You'd only have other issues.
But that coupling likely has other design implications, e.g., you're unlikely to get an option to only show names on hover instead of having a bar, or to show names as an overlay (in many cases the names aren't long enough to need the height of the whole segment)
You're making an application for yourself, and somewhere down the pipeline you decide it could benefit others, so you make it open-source.
People growl at you: "It's ugly UX, but nice features" - when it was originally designed for your own tastes. Or the reverse: people growl at you for "not having X feature, but nice UX".
Your own personal design isn't one-size-fits-all, and designing mocks takes effort. Mental strain and stress; pleasing folks is hard. You now continue developing and redesign the foundations.
A theming engine, you think. This becomes top priority, since integrating one later becomes a PITA once it has to couple with future features.
That itself becomes a black hole of hows and schematics. So now you're forever doomed to create something you never wanted, for people who will probably never use it. This causes your project to fail, but at least you have multiple revisions of the theming engine. Or you strike it lucky and gain a volunteer.
Pre-emptive anti-snark: yes, the old version will still exist... if you can dig up the right github commit and still make it compile in 2030.
1. Free software is developed for the developer's own needs and developers are going to be power users.
2. The cost to expose options is low so from the developer's perspective it's low effort to add high value (perceiving the options as valuable).
3. The developer doesn't know who the customer is and rather than research/refine just tries to hit all the boxes.
4. The distribution of the software itself means anyone who successfully installs it themselves really is a power user and does like the options. Installing it for family and friends doesn't work.
Probably many other factors!
I think it’s essentially survivorship bias. The simple applications don’t get traction and later get abandoned.
i have seen many comments by lay people saying Sonobus [0] is superb at what it does and impressive for being 100% free. that's a niche case that, if it were implemented in Ardour, could run into the same problem OP describes
however i can't see where the problem of FOSS UX scaring normal people is. someone getting a .h264 and a .wav file out of a video recording isn't normal after all. there are plenty of converters on the web; i dunno if they run ffmpeg on their server but i wouldn't be surprised. the problem lies in the whole digital infrastructure running on FOSS without returning anything back. power-user software shouldn't simplify stuff. tech literacy can hopefully be a thing, and quickly learning how to import and export a file in a complex program feels better than installing 5 different limited programs over the years as your demands grow
* Free software which gains popularity is developed for the needs of many people - the users who make requests and complaints and the developers.
* Developers who write for a larger audience naturally think of more users' needs. It's true that they typically cater more to making features available than to simplicity of the UI and ease of UX.
> 2. The cost etc.
Agreed!
> 3. The developer doesn't know who the customer is and rather than research/refine just tries to hit all the boxes.
The developer typically knows what the popular use cases would be. Like with the handbrake example. They also pretty much know how newbie users like simplified workflows and hand-holding - but it's often a lot of hassle to create the simplified-with-semi-hidden-advanced-mode interface.
> 4. The distribution of the software itself means anyone who successfully installs it themselves really is a power user
Are people who install, say, the Chrome browser on their PC to be considered power users? They downloaded and installed it themselves, after all... no, I believe you're creating a false dichotomy. Some users will never install anything; some users might install common software they've heard about from friends; and some might actively look for software to install - even though they don't know much about it or about how to operate the apps and OS facilities they already have. ... and all of these are mostly non-power-users.
If you don't want that, it provides you with presets.
I do understand that many users are too lazy to read a manual, do a google search, or put any minimal amount of effort into solving an issue, but making software worse or more convoluted the second you need a minuscule degree of extra control seems pointless.
Too many people look at FOSS the same way they look at commercial end-user software: they think they are the equivalent of a customer and should get a premium, customer-like experience. But that's false. Most of the time FOSS is the result of fixing a developer's problem, or fixing the problems of somebody who also happens to be a developer, and then sharing it all with the world.
You cannot be surprised when software made by a developer offers every single possible setting, because that's exactly what it was meant to do in the first place.
The solution is not "making software easier", the solution is to RTFM.
I'm the kind of person who wants to understand how everything I touch works (and the kind of person that does a lot of video recompression and cares about quality and filesize tradeoff) and even I hate Handbrake's UI. Every time I set up a new machine I spend an hour or two looking for an internet guide to what settings mean what to configure a preset, and then I use that preset basically exclusively. I'd much rather have a couple of drop targets, one of which is "Bluray 1080p quality".
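For what it's worth, the "configure once, reuse forever" preset described above can also live as a saved one-liner. This is only a sketch: the flag values below are illustrative assumptions, not Handbrake's actual "Bluray 1080p" preset settings.

```shell
# Illustrative "high quality 1080p" preset, kept in a notes file or script.
# -crf 18 targets quality rather than bitrate; -preset slow trades encode
# time for better compression. Values here are assumptions, not a standard.
ffmpeg -i input.mkv -c:v libx264 -preset slow -crf 18 -c:a aac -b:a 192k output.mp4
```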
The majority of your users are not lazy, they have gazillions of other things to think about and to do. Even when they are developers.
Now, that's up to you, as a maker, to decide who your tool is serving, and for what purpose: it's totally fine and valid to target the power user, or your specific use case. Just don't act surprised, or reject the responsibility, that someone else will find a bigger audience with a lesser product, only because it is more affordable, cognitively speaking.
Actual good UI/UX design isn't trivial and it tends to require a tight feedback loop between testers, designers, implementers, and users.
A lot of FOSS simply doesn't have the resources to do that.
I'm not disagreeing with your basic take, but I think this part is a little more subtle.
I'd argue that 80% of users (by raw user count) do want roughly the same 20% of functionality, most of the time.
The problem in FOSS is that the average user in the FOSS ecosystem is not remotely close to the profile of that 80%. The average FOSS user is part of the 1% of power users. They actively want something different and don't even understand the mindset of the other 80% of users.
When someone comes along to a FOSS project and honestly tries to rebuild it for the 80% of users, they often end up getting a lot of hate from the established FOSS community because they just have totally different needs. It's like they don't even speak the same language.
It was something like:
- almost everybody only uses about 20% of the features of Word
- everybody's 20% is different, but
- ~80% of the 20% is common to most users.
- on the other hand, the remaining 20% of the 20% is widely distributed and covers basically all of the product.
So if you made a version of Word with 16% of its feature set you would almost make everybody happy. But really, nobody would be happy. There's no small feature set that makes most people happy.
But many other projects, perhaps the majority, that is not their goal. By devs for devs, and I don't think there is anything wrong with that.
Pleasing customers is incredibly difficult and a never-ending treadmill. If it's not the goal then it's not a failure.
There are other times I want cropping or something similar, but it's really only 10-30% of the time. If people want a more custom workflow, they can use an advanced UI.
Some FOSS projects attempt something like this, but it can become a self-reinforcing feedback loop: When you're only testing on current users, you're selecting for people who already use the software. People who already use the software were not scared away by the interface. So the current users tend to prefer the current interface.
Big software companies have the resources to gather (and pay) people for user studies to see what works and what does not for people who haven't seen the software before, or at least don't have any allegiances. If you only ever get feedback from people who have been using the software for a decade, they're going to tell you the UI must not change because they know exactly how to use it by now.
Projects like GNOME, Elementary, Blender, Krita, KDE Plasma, Penpot, and MuseScore seem to attract contributions from designers.
I suspect it's because designers are like any other open source contributor: they want to work on projects that they use themselves and where their contributions will be appreciated.
How many of them are paid? I know MuseScore, Penpot, and Blender have paid for almost all of their design work (because they have paid staff).
Not just a relevancy problem, it's much easier to get free development work in OSS than design work. It's a decades-long problem.
Whereas with complicated GUI tools, you have to watch a video to learn how to do it.
Well, there are different issues.
Reading a manual is the best you can do, theoretically. But Linux CLI tools have terrible manuals.
I read over the ssh man page multiple times looking for functionality that was available. But the man page failed to make that clear. I had to learn about it from random tutorials instead.
I've been reading lvm documentation recently and it shows some bizarre patterns. Stuff like "for more on this see [related man page]", where [related man page] doesn't have any "more on this". Or, here's what happens if you try to get CLI help:
1. You say `pvs --help`, and get a summary of what flags you can provide to the tool. The big one is -o, documented as `[ -o|--options String ]`. The String defines the information you want. All you have to do is provide the right "options" and you're good. What are they? Well, the --help output ends with this: "Use --longhelp to show all options and advanced commands."
2. Invoke --longhelp and you get nothing about options or advanced commands, although you do get some documentation about the syntax of referring to volumes.
3. Check the man page, and the options aren't there either. Buried inside the documentation for -o is the following sentence: "Use -o help to view the list of all available fields."
4. Back to the command line. `pvs -o help` actually will provide the relevant documentation.
Reading a manual would be fine... if it actually contained the information it was supposed to, arranged in some kind of logically-organized structure. Instead, information on any given topic is spread out across several different types of documentation, with broken cross-references and suggestions that you should try doing the wrong thing.
I'm picking on man pages here, but actually Microsoft's official documentation for their various .NET stuff has the same problem at least as badly.
From which I was able to then say, "Can I have the equivalent source code?" and it did that too, from which I was able to spot the mistake in my original attempt (the KDF was using md5, not sha).
I'm willing to bet that LLMs are also just as good at coming up with the right ffmpeg or imagemagick commands with just a vague notion of what is wanted.
Like, can we vignette the video and then add a green alien to the top corner? Sure we can (NB: I've not actually verified the result here) : https://claude.ai/share/5a63c01d-1ba9-458d-bb9d-b722367aea13
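Not verified against the linked conversation, but a plausible shape for such a command is a filter graph chaining `vignette` into `overlay`; the `alien.png` input is a hypothetical stand-in for the overlay image.

```shell
# Hypothetical sketch: vignette the main video, then overlay an image
# ("alien.png" is a placeholder) in the top-right corner with 10px margins.
ffmpeg -i input.mp4 -i alien.png \
  -filter_complex "[0:v]vignette[v];[v][1:v]overlay=W-w-10:10[out]" \
  -map "[out]" -map 0:a? -c:a copy output.mp4
```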
they are. ive only used ffmpeg via llm, and its easy to get the LLM to make the right incantation as part of a multi-step workflow.
my own lack of understanding of video formats is still a problem, but getting ffmpeg to do the right thing only takes a vague notion
    ffmpeg -i input.avi output.mp4

1) They have to know how to get to a command line somewhere/somehow (most of this group of users would be stymied right here and get no further along);
2) They now have to change the current directory of their CLI that they did get open to the location in their filesystem where the video is actually stored (for the tiny sliver who get past #1 above, this will stymie most of them, as they have no idea exactly where on disk their "Downloads" [or other meta-directory item] is actually located);
3) For the very few who actually get to this step, unless they already have ffmpeg installed on their PATH, they will get a command not found error after typing the command, ending their progress unless they now go and install ffmpeg;
4) For the very very few who would make it here, almost all of them will now have to accurately type out every character in "a-really_big_filename with spaces .mov", as they will not know anything about filename completion to let the shell do this for them. And if the filename does have spaces, and many will, they now need to somehow know 4a) that they have to escape the spaces and 4b) how to go about escaping the spaces, or they will instead get some ffmpeg error (hopefully just 'file not found', but with the extra parameters that unescaped spaces will create, it might just be a variant of "unknown option switch" error instead).
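To make step 4 concrete, here's a minimal sketch of the word-splitting behavior (the filename is made up): quoted, the shell passes the name as one argument; unquoted, it splits into several.

```shell
f="a really big filename.mov"
# Quoted: the shell passes ONE argument, so printf prints one line.
printf '%s\n' "$f" | wc -l
# Unquoted: word splitting yields FOUR arguments, so printf prints four lines.
printf '%s\n' $f | wc -l
```

This splitting is exactly why an unquoted `ffmpeg -i a really big filename.mov` fails: ffmpeg sees `a` as the input file and the remaining words as stray options or output names.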
They can right-click in the folder view of their OS file viewer. On Windows they can also just type the command into the path bar.
When you tell them the command, you could also just install it. Also you could just tell them to type the name of the app 'ffmpeg' into the OS store and press install. They do this on their phone all the time.
But, if we assume the user has never seen a graphical application before, then likely all GUI tools will be useless too. What is clicking? What does this X do? What's a desktop again? I don't understand, why do I need 1 million pixels to change an mp3 to an avi? What does that window looking thing in the corner do? Oh no, I pressed the little rectangle at the top right and now it's gone, it disappeared. No not the one with the X, I think it was the other one.
Pretty much all computer use secretly relied on hundreds if not thousands of completely arbitrary decisions and functionality you just have to know. Of all of that, CLI tools rely on some of the least amount of assumptions by their nature - they're low fidelity, forced to be simple.
Heck, even computing education (and the profession even!) has been propped up by GUIs. After my first year in CS, there were like only three to five of us in a section of forty to fifty who could compile Java from the command line, who would dare edit PATH variables. I'm pretty sure that number didn't improve by much when we graduated. A lot of professionals wouldn't touch a CLI either. I'm not saying they are bad programmers but fact of the matter is there are competent professional programmers who pretty much just expect a working machine handed to them by IT and then expect DevOps to fix Jenkins when it's borked out.
Remember: HN isn't all programmers. There are more out there.
> But, if we assume the user has never seen a graphical application before, then likely all GUI tools will be useless too.
We don't even need to assume, we just need to look at history. GUIs came with a huge amount of educational campaigning behind it, be it corporate (i.e., ads/training programs that teach users how to use their products) or even government campaigns (i.e., computer literacy classes, computer curriculum integrated at school). That's of course followed by man-years upon man-years of usability studies and the bigger vendors keeping consistent GUI metaphors across their products.
Before all of this, users did ask the questions that you enumerated and certain demographics still do to this day.
> Of all of that, CLI tools rely on some of the least amount of assumptions by their nature - they're low fidelity, forced to be simple.
"Everything should be made simple, but not simpler." Has it occurred to you that maybe CLI tools assume too little?
Few people are able to see through the eyes of a beginner, when they are a master.
The 4th one is a pain to teach. Every other file and directory has spaces... so I encourage liberal use of the TAB key for beginners.
One of the things I've noticed is that people trying to help the true beginners vastly overestimate their skill level, and when you get a couple of people all trying to help, each of them is making a completely different set of suggestions which doesn't end up helpful at all. Recently, I was helping somebody who was struggling with trying to compile and link against a C++ library on Windows, and the second person to suggest something went full-bore down the "just install and use a Linux VM cause I don't have time to help you do anything on Windows."
Two decades ago, users understood what "C:\Documents and Settings\username\My Documents" meant and navigated those paths easily. Yet, we decided they were too "stupid" to deal with files and file paths, hiding them away. This conveniently locked users into proprietary platforms. Your point #2 reflects a lie we've collectively accepted as reality. Sadly, too many people now can’t even imagine that a straightforward way to exchange data among different software once existed, but that's a situation we're deliberately perpetuating.
This needs to change. Users deserve the opportunity to learn and engage with their tools rather than being treated as incapable. It’s time we started empowering users for a change.
Sorry, that's pretty damn indecipherable.
"If you only care about converting media without tweaking anything"
Handbrake is easier in that I don't have to think too hard about how to use it, but I end up thinking a lot more critically about the options that I choose.
FFmpeg is easier when I go in knowing the exact options I want; often pulled from notes that I've stored away, or combined with scripts.
Someone who only wants to convert from one format to another, and isn't accustomed to CLIs, is far better served by "drag the file here -> type an output filename and extension in the text box".
The problem (and the reason both FFMpeg and Handbrake exist) is that tons of people "only" want to do two or three specific tasks, all in the same general wheelhouse, but with terrible overlap.
It struck me as a weird example in the OP because I don't really think of Handbrake as a power user tool.
I think in the context of this thread, we shouldn't overgeneralize or underestimate "normal people".
for certain types of tooling, UIs should be cheap, disposable and task/workflow specific.
Exactly what golden era of computing are you harking back to, and what are you doing that requires 14 different commands?
The idea behind a cheap UI is not constant change, but that you have a shared engine and "app" per activity.
The particular workflow/ui doesn't need to ever change, it's more of a app/brand per activity for non-power users.
This is similar to how some apps historically (very roughly lotus notes springs to mind) are a single app but have an email interface/icon to click, or contacts, or calendar, all one underlying app but different ui entry points.
See Don Norman's Design of Everyday things.
https://www.nngroup.com/articles/progressive-disclosure/
https://www.nngroup.com/videos/positive-constraints-in-ux-wo...
Definitionally, it means you're hiding (non-disclosing) features behind at least 1 secondary screen. Usually, it means hiding features behind several layers of disclosures.
Making a very simple product more powerful via progressive disclosure can be a good way to give more power to non-power users.
Making a powerful product "simpler" via progressive disclosure can annoy the hell out of power users who already use the product.
It's certainly possible they can't read. But more likely they're perfectly intelligent and simply don't appreciate being forced to deal with unnecessary complexity to complete a simple task.
(Someone else's comment reminded me of the CHI video of Allen Newell and Ron Kaplan, two brilliant AI pioneers, struggling with a poorly-designed copy machine https://athinkingperson.com/2010/06/02/where-the-big-green-c...)
I wonder who is responsible for the existence of such people.
It's a little harder to make an easy version
Making the progressive version is very difficult. Where you can please one audience with the powerful and easy versions, you can often disappoint both with the progressive version despite it taking much more effort.
In my personal experience, you're lucky if free software has the budget (time or money) to get to easy. There's very little free software that makes it to progressive.
So yes, it is hard to make the simple version. You have to have a very good understanding of what the user wants out of your product. Until you have this clarity, every feature seems important. Once you have this clarity you understand what the important features are. You make those features more prominent by giving them prime real estate, then tuck away the less important features in a less visible place. Simple things should be simple. Complex things only need to be possible.
Logic Pro has a “masking tape” mode. If you don’t turn on “Complete Features” [0], you get a simplified version of the app that’s an easier stepping stone from GarageBand. Then check the box and bam, full access to 30 years’ accumulation of professional features in menus all over the place.
[0] https://support.apple.com/guide/logicpro/advanced-settings-l...
The really hard problem to solve is making an UI where the simple thing is easy and the complex thing is still possible.
But that requires 1) strong leadership 2) people with the correct skill set for UX design.
Making a clean UI means cutting things out. And that's not easy in many large OSS projects because every menu item and every button is someone's pet feature. The leaders in the projects are often the most senior _developers_, not UX experts. It's (I'm assuming) a lot less common to have good telemetry and user labs in OSS than it is for commercial software. So you also might not know exactly what features people use and how they use them, making it even harder to remove features.
I think the author is absolutely right. If Handbrake is intimidating, or hard to use without reading a manual in order to do just the simple use case that 80% of users have, then they have failed in making a good UX. A good UX would make the simple thing easy without sacrificing power for the complex case. And the author is absolutely having the right idea when making a simpler wrapper for the powerful software.
For a little history on this design, see https://athinkingperson.com/2010/06/02/where-the-big-green-c...
The machine came with Windows 10 by default and what he needed was Firefox and FreeCell, and FreeCell on Windows10 was atrocious.
He likes it a lot, and the only thing was that I had to set the same wallpaper he had on Windows 7 and put the icons in the same places.
The "linux is to complicated" argument just doesn't hold true anymore! It's nice and easy, including the installation (even I was surprised)
On the one hand, I want the UI to be simple and minimal enough so even non savvy users can use it.
But on the other hand, I do need to support more advanced features, with more configuration panels.
I learned that the solution in this case is “progressive disclosure”. By default, the app only show just enough UI elements to get the 90% cases done. For the advanced use cases, it takes more effort. Usually to enable them in Settings, or an Inspector pane etc. Power users can easily tinker around and tweak them. While non savvy users can stick with the default, usual UX flow.
Though even with this technique, choosing what to show by default is still not easy. I learned that I need to be clear about my Ideal Customer Profile (ICP) and optimize for that profile only.
[0]: https://boltai.com
Free and open source developers are free to build whatever the heck software they want on the basis of "I find this software neat/useful/funny, you can use it too if you'd like".
The way it works is you grab the sources, make a fork, implement your desired changes (e.g. hide 80% of the features), and do a release that is in line with your own vision. "But that's a lot of work" someone might say, ... and yet people routinely think someone else should do such volumes of work for them in order to ensure a project is in line with their needs.
TFA is making the point that, if your goal is to make software that is broadly useful (and maybe that's not your goal), then you should try to make the most common use-cases easy and intuitive, even if that means compromising on complex ones or edge-cases.
I know about 10 people in real life who use Handbrake. And all 10 of them use it to rip Blu-ray discs and store media files on their NAS. It will piss them off if you hide all the codec settings and replace the main screen with a giant "convert to Facebook compatible video" button.
Instead, do it like how iina[1] packages mpv[2].
I dread "Can you add a button..." Or worse, "Can you add a check box..." Not only does that make it worse for other users, it also makes it worse for you, even if you don't realize it yet.
What you need is to take their use case and imagine other ways to get there. Often that means completely turning their idea on its head. It can even help if you're not in the trenches with them, and can look at the bigger picture rather than the thing that is interfering with their current work flow.
E.g. an app to prevent macOS going to sleep. I want a checkbox to also stop an external display sleeping. I don't need my entire usage of the app and my computer-feature desires analysed and interpreted in a way that would make a psychoanalyst look like a hack.
But yes in a professional setting people do use "Can we add a button" to attempt to subvert your skillset, your experience, to take control of the implementation, and to bypass solid analysis and development practices
The problem with why so many OSS/free software apps look bad can be demonstrated by the (still ongoing) backlash to Gnome 3+. It just gets exhausting defending every decision to death.
Sometimes projects need the spine to say "no, we're not doing that."
There's nothing wrong with an opinionated desktop environment or even an opinionated Linux distribution. But, prior to GNOME 3, the project was highly configurable. Now it is not.
When people start up new highly opinionated projects (e.g. crunchbang, Omarchy), the feedback is generally more positive because those who try it and stick with it are the ones who like the project's opinions. The people who don't like those opinions just stop using it. There isn't a large, established base of longstanding users who are invested in workflows, features, and options.
Gnome 3 was a big update, and adding options (which does happen) is not free. There were big changes between Gnome 2 and 3, and adding some options "back" from Gnome 2 often really means asking for that feature to be rewritten from scratch (not all the time, but a lot of the time).
That the Gnome team has different priorities from other DEs, one of them being "keep the design consistent and sustainable," is completely valid and preferred by many users like myself.
A new design metaphor can be "completely valid" and simultaneously an aggravating rug pull if it is pitched as a new numbered version of an existing program---especially an existing desktop environment that many people use daily---rather than as a new program.
> Gnome 3 was a big update and adding options, which does happen, is not free. There were changes from Gnome 2 and 3 and adding some options "back" from Gnome 2 is really asking for that feature to be rewritten from scratch (not all the time, but a lot of the time).
If the "next version" of a software project is really a near-total rewrite, such that many-to-most features from the previous version must be rewritten from scratch, then you should start a new project and pick a new name if you do not want your current users comparing the new version to the previous version. Whether or not those users appreciate the cost of feature rewrites to the developers' satisfaction, they already have working software with a long list of features and a version number one lower.
KDE had a similar problem moving from 3.5 to 4 (also a major rewrite), but the initial wave of complaints had more to do with instability and heavy resource use because they didn't abandon the desktop metaphor entirely. They also explicitly had a roadmap of building back in many of 3.5's features as time went on, which made it a less radical transition.
It's all ancient history at this point, at least on software timescales. But the Gnome 2 to Gnome 3 transition is up there with Python 2.7 to Python 3 as an example of how not to manage or implement a major change to a widely used piece of software.
The funniest thing to me is I used to think of Gnome 3 as an effort toward Apple-esque design which wasn't as well put together as MacOS... But now, even though Gnome today is still not up to the consistency of 2011 MacOS, it is more consistent than 2025 MacOS because Apple has been driving drunk on desktop software design for a decade and a half.
After dealing with this kind of stuff for 14 years, it shouldn't be surprising that you don't have a lot of folk left who are willing to extend good faith to Gnome devs.
Eventually, I found the bug report that was filed against the guideline itself. The person who wrote that part of the guideline had responded that he made the decision based on a poll (presumably of people in the mailing list), and that no-one really had a strong opinion on it. He asserted that it was no big deal, and refused to reconsider the guideline.
Now, I think it is perfectly OK to make the wrong decision when it comes to something outside your expertise. If you are a backend software expert, it is OK for you to do the wrong thing when it comes to the UI for a project you are supporting for free. But when someone who does know that field makes a reasoned suggestion, you should not really be doubling down.
A UI/UX designer in this situation is not exactly going to be prepared to fork and maintain a whole stack over this. It just means that the experience will be worse for users.
The "advanced mode" rarely actually covers all the needs of an advanced user (because software is never quite everything to everyone), but it's at least better at handling both types of users.
Not all free software has this problem... Mozilla and Thunderbird I've had my parents on for years. It's not a ton to learn, and they work fine.
Taking the case of Photoshop vs. Gimp - I don't think the problem is complexity, lol. It's having to relearn everything once you're used to photoshop. (Conversely, I've never shelled out for Adobe products, and now don't want to have to relearn how to edit images in photoshop or illustrator)
Let's do another one. Windows Media Player (or more modern - "Movies & TV"). Users want to click on a video file and have it play with no fuss. VLC and MPC work fine for that! If you can manage to hold onto the file associations. That's why Microsoft tries so hard to grab and maintain the file associations.
I could go on... I think the thesis of this article is right for some pieces of software, but not all. It's worth considering - "all models are wrong, but some are useful".
I don't think this comparison is really accurate. Adobe's suite is designed for professionals who work in the program for hours daily (e.g., ~1000 hours annually for a creative professional). There are probably some power users of The GIMP who hit similar numbers, but Creative Cloud has ~35-40 million subscribers; these are entirely different programs for entirely different classes of users.
Therefore: If you want lots of users, design for the median user; if you don't, this doesn't apply to you
The thing is, this takes a lot of resources to get right. FOSS developers simply don't have the wherewithal - money, inclination or taste - to do this. So, by default, there are no simple things. Everything's complex, everything needs training. And this is okay because the main users of FOSS software are others of a similar bent to the developers themselves.
I also heard that, once you try to apply this concept, you see that everyone needs a different 20%. Any thoughts on this?
You should know the common retort - but it's a different 20% for everyone! So you can't create a one-button UI that covers 80%.
But the challenge is real, though mostly "unsolvable" as there is too much friction in making good easily customizable UIs
Huge amounts of dumbed-down software that won't do interesting things is made. There's no need to present this challenge.
> a person who needs or wants that stuff can use Handbrake.
That's the part that is often ignored: providing the version with the features.
It worked exactly for TV remote controls - or rather, it did before everybody had an HDMI player or a smart TV. It doesn't work for TV remotes now either.
Handbrake is a bit like TV remotes at the turn of the century. That's an exception even among free software, and absolutely no mainstream DE is like that.
They are actually, literally, removing features. That's not an opinion, that's what is actually happening, repeatedly.
Now, maybe you say good riddance. Fine. However, it is indisputable that now the desktop is slightly less capable. The software can do less stuff than before.
Note that I've always been a KDE user...
1. Some day, those users think "Hey, I'm not happy with some setting, what do I do?" and they can do nothing.
2. The users who need more functionality can't get it - and feel like they have to constantly wrestle with the app, and that it constantly confounds and trips them up, even when they express their clear intents. It's not like the GNOME apps have a "simple mode" and an "advanced mode"
3. The GNOME apps don't even really go along those lines. You see, even non-savvy users enjoy consistency and clarity; and the GNOME people have made their icons inscrutable, taken over the window manager's decorations to "do their own thing", hidden items behind a hamburger menu as though you're on a mobile phone with no screen space, etc. So, even for the non-savvy users - the UX is kind of bad. And just think of things like the GTK file picker. Man, that's a little Pandora's box right there, for the newbie and power user alike.
One could say the same about people who don't bother to learn ffmpeg CLI.
By using GNOME and staying with it as it changed, they suffered more changes than they would have by switching to KDE at any point.
For telling software devs to embrace traditional design wisdom, using TV remotes is an interesting example - cause aside from the commonly used functionality people actually care about (channels, volume, on/off, maybe subtitles/audio language) the rest should just be hidden under a menu and the fact that this isn't the case demonstrates bad design.
It's probably some legacy garbage, along the lines of everyone having an idea for what a TV remote is "supposed" to look like and therefore the manufacturers putting on buttons in plain view that will never get used and that you'd sometimes need the manual to even understand.
At the same time, it might also be possible that the FOSS software that's made for power users or even just people with needs that are slightly more complex than the baseline is never going to be suited for a casual user - for example, dumbing down Handbrake and hiding functionality power users actually do use under a bunch of menus would be really annoying for them and would slow them down.
You can try to add "simple" and "advanced" views of your UI, but that's the real disconnect here - different users. Building simplified versions with sane defaults seems nice for when there is a userbase that needs it.
Remotes are fine. Except the modern ones that have a touchpad and, like, 8 buttons, four of which are ads.
People can handle many buttons just fine. Even one-year-old kids don't have a problem with this, which becomes apparent if you ever watch a small child play. The only people who have a problem here are UX designers high on a paternalistic approach to users.
Maybe? But, like, what's the value added by a bunch of useless bullshit buttons that will never be used? Greebles to make the customers think they're buying something cool?
Just put a menu in there for the 1% of the time when you actually have to use that functionality, and don't do the equivalent of putting 30 widgets on your weather homepage.
It's not like anyone is losing their life or finds TVs to be unusable due to the status quo, but it also feels like the equivalent of cutting off the ends of your roast before putting it in the pan because that's how your mom did it, which she learnt from her mom - without either of you knowing that the real reason is that your grandmother just didn't have a big enough roasting pan, and it's just habit.
Related problem is that software business, and through it UI/UX, is obsessed about gaining new users, so everything is designed for ease of onboarding, at the expense of ergonomics and efficiency of continued use. It's backwards and dumb, and subscriptions should theoretically prevent this, but the truth is, software products are pretty much unique (there's rarely an actual competitor to switch to with the same set of features you need), and most products die or get killed before they move past the growth phase, so incentives align to catering for first-timers instead of already paying users.
And then UI/UX gets trotted as some industrial design wisdom, and this kind of backwards approach starts infecting design of physical products...
When it comes to software intended for the general public it doesn't take bravery to decide that every user should only ever be allowed to do things exactly how you'd want them done. I might be more likely to attribute that to arrogance. Really, for something like converting audio/video I'd just see the inflexible software with few if any options as too limited for my needs (current, future, or both) and go looking for something more powerful.
It's better not to invest my time in software that is overly restrictive when more useful options are available, even if I don't need all of those options right now - it'll save me the trouble of having to hunt down something better later, since I'll already have what I need on my systems.
Dev: I'm too brave to let you do that
Being super simple ducks the problem.
I do not readily empathize with people who are scared of software, because my generation grew up tinkering with software. I'd like to understand why people would become scared of software in the first place.
The world is a complicated place, and there is a veritable mountain of things a person could learn about nearly any subject. But sometimes I don't need or want to learn all those things - I just want to get one very specific task done. What I really appreciate is when an expert who has spent the time required to understand the nuances and tradeoffs can say "just do this."
When it comes to technology 'simple' just means that someone else made a bunch of decisions for me. If I want or need to make those decisions myself then I need more knobs.
And scared is the word used by the original author in the title. I want to understand that emotion. I don't need someone to tell me we can't learn everything.
Computer damage is one potential consequence on the extreme end. On the conservative end, the software might just not work the way you want and you waste your time. It’s a mental model you have to develop. Even as a technical power user though, I want to reduce the risk of wasting my time, or even confront the possibility that I might waste my time, if I don’t have to.
For handbrake you can pick a preset and see what happens. Or don't even do that: when you open it it'll make you pick a video file, then you can just jam the green start button and see if it gives you what you need. Very little time spent.
And as far as time goes, it only takes a few seconds in either scenario. You hit go, you see the progress bar is moving, you check your file a few minutes later.
the decision to ignore this signal is a learned behavior that you and i have, is all i’m saying
You seem comfortable with the idea of a child not having this learned skill. I don't know why you don't extend that empathy towards the inexperienced in general.
people aren't afraid of doing 2^n stuff, it's just that we have a gut sense that it's gonna take more time than it's worth. i'm down to try 10-100 things, but if it's gonna be 100 million option combinations i have to tinker with, that's just not worth it.
It's been hard pushing back and saying no to all the new features. We've started work on a plugin API so that people can add features and opt in to the complexity.
BUT with time and variable effort a regular user can get accustomed to the new philosophy and be successful - either by persistent use, by using different OSS apps in series, or by touching the command line. Happy user of Firefox, LibreOffice, Avidemux, Virt-manager (sic).
I think the other big reason is availability of talent. FOSS is made by devs who usually are already well off and have time to contribute. You won't find as many artists or graphic designers with the same privilege, so if there are no designers on a project you get the bare basics.
Intuitive UX for the average non-nerd user is task-based. You start with the most common known goals, like sending someone money, or changing the contrast of a photo, and you put a nice big button or slider somewhere on the screen that either makes the goal happen directly or walks you through it step by step.
Professional tools are workbench-based. You get a huge list of tools scattered around the UI in various groups. Beginners don't know what most of the tools do, so they have to work out what the tools are for before they can start using them. Then, and only then, can they start using the tools in a goal-based way. Professionals already know the tradecraft, so they have the simpler - but still hard - "Which menu item does what I need?" problem.
Developer culture tends to be script-based. It's literally just lists of instructions made of cryptic combinations of words, letters, and weird punctuation characters. Beginners have to learn the words, the concepts behind them, and the associated underlying computer fundamentals at multiple levels - just to get started. And if you start with a goal - let's say you want a bot that posts on social media for you - the amount of learning if you're coming to it cold is beyond overwhelming.
FOSS has never understood this. Yes, in theory you can write your own almost anything and tinker with the source code. But the learning curve for most people is impossibly steep.
AI has some chance of bridging the gap. It's not reliable yet, but it's very obvious now that it has a chance to become a universal UI, creating custom code and control panels for specific personal goals, generating workbench UIs and explaining what the tools do if you need a more professional approach, and explaining core concepts and code structures if you want to work at that level.
It's scary for folks who are used to transactional relationships to encounter these different mindsets.
The difficult interface was a good filter.
https://contemporary-home-computing.org/RUE/
That's what "UX" is all about. "Scripting the users", minimizing and channeling their interactions within the system. Providing one button that does exactly what they want. No need to "scare" them with magical computer technology. No need for them to have access to any of it.
It's something that should be resisted, not encouraged. Otherwise you get generations of technologically illiterate people who don't know what a directory is. Most importantly, this is how corporations justify locking us out of our own devices.
> We are giving up our last rights and freedoms for “experiences,” for the questionable comfort of “natural interaction.” But there is no natural interaction, and there are no invisible computers, there only hidden ones.
> Every victory of experience design: a new product “telling the story,” or an interface meeting the “exact needs of the customer, without fuss or bother” widens the gap in between a person and a personal computer.
> The morning after “experience design:” interface-less, disposable hardware, personal hard disc shredders, primitive customization via mechanical means, rewiring, reassembling, making holes into hard disks, in order to delete, to logout, to “view offline.”
I don't want most consumer electronics to act like a computer; for me that is a deficiency. I chose a "dumb" Linux-based eBook reader instead of an Android-based one, because I want it to read books, full stop.
The problem is nobody makes this distinction for some reason. In my mind there's two types of software - the kind for doing things, and the kind for mostly consuming. As the wise Britney Spears once said, "there's only two types of people in the world: those that entertain, and the ones that observe"
It makes no sense for your CAD program you're building a company out from to be dumbed down.
I use it mostly for work and academic papers, not for amusement.
Most of the regular simple pdf viewers on the PC don't have this kind of productivity functionality in mind. They might have some, but in general they are not designed to work with read-only text.
Always have been.
We could argue about the exact value of N, but in the present universe nearly anything that scares at least 1 out of N normies into coming back to their senses is literally a heroic act.
EDIT: You linked a great post btw. Consider: "Rich User Experience" as the experience of being a rich person who uses. It's all right there in the etymology, after all words are also designed artifacts. "You wanna be rich? You gotta be our customer."
I'm no dentist, I go to dentists. I let them work, and try not to be too annoying. I learn the minimum that I need to know to follow the directions that they deliberately make very simple for me.
This will result in generations of generally dentistry ignorant people, but I am not troubled by this.
As technologically competent people, one of our desires should be to help people maintain the ignorance level that they prefer, and at every level steer them to a good outcome. Let them manage their own time. If they want privacy and control, let's make sure they can have it, rather than lecturing them about it. My grandmother is in her 90s and she doesn't want people reading her emails, listening to her calls or tracking her face. She is not prepared to deal with more than a couple of buttons, and they should be large and hopefully have pictures on them that explain what they do. It's my job to square that circle.
WHY IS THERE CODE??? MAKE A FUCKING .EXE FILE AND GIVE IT TO ME. these dumbfucks think that everyone is a developer and understands code. well i am not and i don't understand it. I only know to download and install applications. SO WHY THE FUCK IS THERE CODE? make an EXE file and give it to me. STUPID FUCKING SMELLY NERDS"
https://old.reddit.com/r/github/comments/1at9br4/i_am_new_to...
UI/UX (which the article is about) is part of the broader approach.
Whereas I think they are in fact describing 'average' people.
I suspect most HNers self-assess as normal, but are also self-aware enough to acknowledge (rightly or wrongly) they're not average.
Certainly I have felt overwhelmed with the 'why have they broken so many conventions?' sensation with, invariably, audio apps on microsoft platforms - but OTOH the implied expectation that all software should have a comparably broken UI as mildly modern microsoft word versions expresses a collective poverty of expectations.
Maybe handbrake was never meant to be used by people who need the one button solution? That one button solution exists all over the place.
It has nothing to do with free vs not-free
https://en.wikipedia.org/wiki/The_Free_Software_Definition#T...
> The freedom to study how the program works, and change it to make it do what you wish (freedom 1). Access to the source code is a precondition for this.
Free software can be audited for backdoors. Closed can not. Their backdoors will stay there indefinitely.
That aside, OP was complaining about software written by "random people". Thing is, people working in companies that write proprietary software are equally "random" in that sense. We know that some of them are North Korean agents, for example.
I’ve been ripping old DVDs recently. I just want something that feels simple from Handbrake: a video file I can play on my Apple TV that has subtitles that work (not burned in!) with video and audio quality indistinguishable from playing the DVD (don’t scale the video size or mess with the frame rate!), at as small a file size as is practical. I’m prepared for the process to be slow.
I’ve been messing with settings and reading forum posts (probably from similarly qualified neophytes) for a day now and think I’ve got something that works - though I have a nagging suspicion the file size isn’t as small as it could be and the quality isn’t as good as it could be. And despite saving it as a preset, I for some reason have to manually stop the subtitles from being burned in for every new rip.
Surely what I want is what almost everyone wants‽ Is there a simple way to get it? (I think this is a rhetorical question but would love it not to be…)
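For what it's worth, a hedged sketch of the same job in plain ffmpeg, assuming the DVD title has already been extracted to an MKV (filenames and the CRF value are illustrative, not a recommendation). DVD subtitles are bitmap streams, so they can be copied as a selectable soft track into an MKV with `-c:s copy`, but turning them into text subtitles for an .mp4 would need OCR:

```python
import subprocess

def dvd_rip_cmd(src: str, dst: str, crf: int = 19) -> list[str]:
    """Constant-quality re-encode that leaves resolution, frame rate,
    audio, and subtitles exactly as they are on the disc."""
    return [
        "ffmpeg", "-i", src,
        "-map", "0",                                      # keep every stream: video, audio, subs
        "-c:v", "libx264", "-crf", str(crf), "-preset", "slow",
        "-c:a", "copy",                                   # audio untouched
        "-c:s", "copy",                                   # subtitles stay soft, not burned in
        dst,
    ]

cmd = dvd_rip_cmd("title01.mkv", "title01_small.mkv")
# subprocess.run(cmd, check=True)  # uncomment to actually run the encode
```

Lower CRF means higher quality and a bigger file; somewhere around 18-20 is usually transparent for DVD sources, but that's a rule of thumb, not gospel.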
Which kind of fulfills the best of both worlds: Welcoming for beginners, but full-powered for advanced users.
More software should be designed this way.
Please assume I'm smarter than I actually am -- I will figure it out, no problem. I like complex interfaces that allow me to do my task faster, especially if I'm using them often (e.g. for work).
Handbrake is only popular _because_ it is so powerful, not in spite of it.
So I'd like to welcome the author to make more apps based on FOSS.
We're getting there. I run Linux Mint with an XFCE desktop -- an intentionally minimal setup. The system performs automatic updates and the desktop layout/experience resembles older Windows desktops before Microsoft began "improving" things. No ads, no AI.
I'm by no means an end user, but in Linux I see incremental progress toward meeting the needs of that audience. And just in time too, now that Microsoft is more aggressively enshittifying Windows.
What's really missing are online fora able to help end users adjust to Linux -- helpful without being superior or condescending. Certainly true for Windows, not yet true for Linux.
claude-code actually does this really well - I've used it to set up GNOME on my phone and fix all my problems without having to learn anything
That can be a problem with Linux but in my experience searching for Windows help is usually not good either.
I do have a Linux box, and I only have complaints about small things. Double screen works, VSCode works, Firefox works too. Not much to complain about for a personal dev box. The ability to just `apt install` a bunch of stuff and then start compiling is pretty nice.
But again, I'm pragmatic, so if I'm doing something Windows related, I'd definitely use my Windows box.
Concise
Focus only on the parts that are really needed, necessary, or most important; hide what is currently unnecessary or secondary, or simply remove what is truly unnecessary and secondary.
Maybe it can help you
I proactively stopped that decades ago.
"Oh, you use Windows? Sorry, I haven't used it in over a decade so I can't help. If you have any Linux questions, let me know!"
These kinds of UIs are extremely hard to make.
I wanted to write an article or short blog post about how Windows 10, menus and javascript, increasingly tuck away important tools/buttons in little folds. This was many months ago.
I want to write it and title it "What the tuck?" - because tuck refers exactly to the kind of hidden menus that make those so-called sleek and simple UIs for the 80% of users.
The problem is that it stupefies computing literacy, especially mobile web versions.
Perhaps not every casual web browser needs to sit at a desk to learn website navigation. Then again, they may never learn anything productive on their own.
True but with a caveat: Those people rarely need the same 20% of your features.
using the remote analogy, the taped versions are useful for (many!) specific people, but shipping the remote in that configuration makes no sense
i think normal people don't want to install an app for every specific task either
maybe a solution can look like a simple interface (with good defaults!!) but with an 'advanced mode' that gives you more options... though i can't say i've seen a good example of this, so it might be fundamentally flawed as well
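One cheap way to prototype that split, sketched here with Python's argparse (the flag names and options are made up for illustration): the simple surface is one input with everything defaulted, and an advanced flag is the only thing that reveals the knobs.

```python
import argparse

def build_parser(advanced: bool = False) -> argparse.ArgumentParser:
    """Progressive disclosure: same program, two surfaces.
    The simple surface never shows the knobs; the advanced one adds
    them without changing what the defaults do."""
    p = argparse.ArgumentParser(prog="convert")
    p.add_argument("input", help="file to convert")
    if advanced:
        p.add_argument("--crf", type=int, default=21, help="quality (lower = better)")
        p.add_argument("--preset", default="medium", help="speed/size tradeoff")
    return p

simple = build_parser().parse_args(["movie.mkv"])
power = build_parser(advanced=True).parse_args(["movie.mkv", "--crf", "18"])
```

The property worth preserving is that both paths drive the same backend with the same defaults, so the "simple" build stays a view of the program rather than a fork of it.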
And IMO, Handbrake is more complicated than CLI ffmpeg. It's really chaotic.
My criticism of Free Software is exactly the reverse. There isn't enough of that kind of stuff on Linux!
Though to be sure, the Mac category (It Has One Button) is even more underserved there, and I agree that there should be more! Heck, most of the stuff I've made for myself has one button. Do one thing and do it well! :)
> Do one thing and do it well!
This does not necessarily mean that it would have only one button (or any buttons). Depending on what is being done, there may be many options for how it is done, although there might also be default settings. A command-line program might do it better, such that you only need to specify the file name; but if there are options (what options and how many there are depends on what program it is) then you can also specify those if the default settings are not suitable.
For example, I'd love Gimp to have a Basic mode that would hide everything except the basic tools like Crop and Brush. Like basic mspaint.
I am in favour of simplified apps like this; maybe it can be a simple toggle switch in the top right corner between simple and advanced - similar to that stupid new version of Outlook that I constantly have to switch back from to the old version.
BTW, if all you can do is drag-and-drop a file, do you need the "convert" button?
It’s like creating a new TV remote with fewer options.
(The usual problem with remotes happens to be that they tend to have buttons that nobody has ever used or wanted and which don't even do anything. And there's still menus and state. And the one button that has to be pushed for every feature is the one that dies first.)
In this particular case I'd just tell people to download and use VLC Player. But I get the point.
FOSS's issue isn't that they trust users too much, it's that they aren't taking different types of users into account.
Corporate-built software that's locked down or limited like iCloud is 100% about not trusting the users.
This is so common, to the point that it's FOSS misconception #1 for me. People can't get that a developer can write the software to solve only their specific problem and not be interested in support, feature contributions, or other improvements and use cases.
    ffmpeg -i example.mkv example.mp4
In almost all cases I don’t want to mess with the defaults, because I know diddly about video formats.
Grab an LLM and make a nice single button app they can use.
LLMs writing code, plus free software, a match made in heaven.
It’s also a new take on Unix “do one thing well” philosophy.
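As a sketch of how small such a single-purpose app really is (the filename is hypothetical; this just wraps the ffmpeg one-liner from the sibling comment):

```python
import pathlib
import subprocess

def one_button(path: str) -> list[str]:
    """Every decision is a default: same name, .mp4 container,
    ffmpeg's own codec choices. The whole 'app' is one command."""
    src = pathlib.Path(path)
    return ["ffmpeg", "-i", str(src), str(src.with_suffix(".mp4"))]

cmd = one_button("holiday.mkv")
# subprocess.run(cmd, check=True)  # wire this to a drop target and you're done
```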
The things the user does most frequently need to be the easiest things to do.
You expose the stuff the user needs to do quickly without a lot of fuss, and you can bury the edge cases in menus.
Sadly a lot of software has this inverted.
You can always cherry pick apps to fit a narrative.
FOSS apps with simple interfaces: Signal, Firefox, VLC, Gnome [1], Organic Maps, etc, the list goes on and on.
[1] it’s not a simple app but I think there’s a good argument to be made that it’s simpler/cleaner than commercial competitors.
Yes, but those 80% all use a different subset of the 20% of features. So if you want to make them all happy, you need to implement 100% of the features.
I see the pattern so often. There is a "needlessly complicated" product. Someone thinks we can make it simpler, so we rewrite it/refactor the UI. Super clean and everything. But user X really needs that one feature! Oh, and maybe let's implement Y. A few years down the line you are back to having a "needlessly complicated" product.
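The "different 20%" claim is easy to make quantitative with a toy simulation (the numbers are illustrative; real feature demand is skewed toward a popular core, which softens but doesn't erase the effect):

```python
import random

random.seed(0)
N_FEATURES, SUBSET, USERS = 100, 20, 10_000

# Worst-case toy model: every user needs their own random 20% of the features.
user_needs = [set(random.sample(range(N_FEATURES), SUBSET)) for _ in range(USERS)]

shipped = set(range(SUBSET))  # ship only "the 20% everyone supposedly needs"
fully_served = sum(need <= shipped for need in user_needs)
avg_coverage = sum(len(need & shipped) for need in user_needs) / (SUBSET * USERS)

print(fully_served)            # 0 - no user's entire subset fits
print(round(avg_coverage, 2))  # ~0.2 - each user finds about a fifth of what they need
```

Under this (admittedly extreme) model, shipping any fixed 20% leaves every single user missing features, which is exactly the pressure that drags the "super clean" rewrite back toward complexity.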
If you think it could easily be done better, you don't understand the problem domain well enough yet. Real simplicity looks easy but is hard to achieve.
You can't do anything wrong with it. There's no UI to fiddle with WiFi. It's all pre-configured to work automatically in the local WLAN (only; outside, all that's needed is to borrow someone's phone to look at the list of wifi networks in the area and type the name of the selected network into /etc/wpa_supplicant/wpa_supplicant.conf). But there's rarely any need to go out anyway, so this is almost never an issue.
There are no buttons to click, ANYWHERE. Windows don't have confusing colorful buttons in the header. You open the web browser by pressing Alt + [. It shows up immediately after about 5 seconds of loading time. So the user action <-> feedback loop is rather quick. You close it with Alt + Backspace (like deleting the last character when writing text, simple, everyone's first instinct when you want to revert last action)
The other shortcut that closes the UI picture is Alt + ]. That one opens the terminal window. You can type to the computer there, to tell it what you want. Which is usually poweroff, reboot, reboot -f (as in reboot faster). It's very simple and relatable. You don't click on your grandma to turn it off, after all. You tell it to turn off. Same here.
All in all, Alt + [ opens your day. Alt + ] gives you a way to end it. Closing the lid sometimes even suspends the notebook, so it discharges slightly more slowly in between.
It's glorious. My gf uses it this way and has no issues with it (anymore). I just don't understand why she doesn't want to migrate to Linux on her own notebook. Sad.
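For reference, the wpa_supplicant edit mentioned above really is small; a minimal known-network entry looks roughly like this (the SSID and password are placeholders):

```
# /etc/wpa_supplicant/wpa_supplicant.conf
ctrl_interface=/run/wpa_supplicant
update_config=1

network={
    ssid="CafeNetwork"        # the name read off the borrowed phone's wifi list
    psk="the-password-here"
}
```

After saving, `wpa_cli reconfigure` (or just a reboot, in the spirit of this setup) picks up the new network.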
Why? The same kind of people who find Handbrake intimidating also find the concept of files and juggling multiple applications intimidating.
That's why I use 3D Max, Photoshop, and Excel. For the simplicity.
That’s very much dependent on the author correctly judging what should go there though
The simplicity Apple had was always a mistake, an artifact of their hubris.
> Normal people often struggle with converting video. ... the format will be weird. (Weird, broadly defined, is anything that won’t play in QuickTime or upload to Facebook.)
Except normal people don't use macOS and don't even know what QuickTime is. Even including the US, the macOS user share is apparently ~11%. Take the US out, and that drops to something like... 6%, I guess? And Macs are expensive - prohibitively expensive for people in most countries. So normal people use Windows, I'm afraid.
https://gs.statcounter.com/os-market-share/desktop/worldwide
In fact, if you disregard the US, Linux user share seems to be around half of the Mac share, making the perspective of "normal people use Macs not FOSS" even sillier.
-----
PS - Yes, I know the quality of such statistics is low, if you can find better-quality user share analysis please post the link.
Seems like a win-win, take-my-money solution; for some reason the market (and I guess that means investors) is not pursuing this as a consumer service?
Dunno why people assume that FOSS developers are just dummies lacking insight but otherwise champing at the bit to provide the same refinement and same customer service experience as the "open source" projects that are really just loss leaders of some commercial entity.
In addition to this issue, I've also had good conversations with a business owner about why he chose a Windows architecture for his company. Paying money to the company created a situation where the company had a "skin-in-the-game" reason to offer support (especially back when he founded the company, because Microsoft was smaller at the time). He likes being able to trust that the people who build the architecture he relies on for his livelihood won't just get bored and wander off and will be responsive to specific concerns about the product, and he never had the perception that he could rely on that with free software.
While I agree that people generally feel better by getting something with little effort, I think that there is a longer-term disservice here.
Once upon a time, it used to be understood that repeated use of a tool would gradually make you better at it - while starting with the basics, you would gradually explore, try more features and gradually become a power user. Many applications would have a "tip of the day" mechanism that encouraged users to learn more each time. But then this "Don't Make me Think" book and mentality[0] started catching on, and we stopped expecting people to learn about the stuff that they're using daily.
We have a high percentage of "digital native" kids now reaching adulthood without knowing what a file is [1] or how to type on a keyboard [2]. Attention spans are falling rapidly, and even the median time in front of a particular screen before switching tasks is apparently down from 2.5 minutes in 2004 to 40 seconds in 2023 [3] (I shudder to think what it is now). We as a civilization have been gradually offloading all of our technical competency and agency onto software.
This is of course leading directly to agentic AI, where we (myself included) convince ourselves that the AI is allowing us to work at a higher level, deciding the 'what', while the computer takes care of the 'how' for us, but of course there's no clear delineation between the 'what' and 'how', there's just a ladder of abstraction, and as we offload more and more into software, the only 'what' we'll have left is "keep me fed and entertained".
We are rapidly rolling towards the world of Wall-E, and at this pace, we might live to see the day of AIs asking themselves "can humans think?".
[0] https://en.wikipedia.org/wiki/Don%27t_Make_Me_Think
[1] https://futurism.com/the-byte/gen-z-kids-file-systems , https://news.ycombinator.com/item?id=30253526
[2] https://www.wsj.com/lifestyle/gen-z-typing-computers-keyboar... , https://news.ycombinator.com/item?id=41402434
[3] https://www.apa.org/news/podcasts/speaking-of-psychology/att...
The majority of users probably want the same small subset of features from a program and the rest are just confusing noise.
Not 5 minutes after that, someone else in the comments went on a weird rant about how allegedly Inkscape and all FOSS were "communist" and "sucked", and capitalist proprietary stuff was "superior".
In this particular case someone thinks more competition is communist...
The disaster that is "modern UX" is serving no one. Infantilizing computer users needs to stop.
Computer users hate it - everything changes all the time for the worse, everything gets hidden by more and more layers until it just goes away entirely and you're left with just having to suck it up.
"Normal people" don't even have computers anymore, some don't even have laptops, they have tablets and phones, and they don't use computer programs, they use "apps".
What we effectively get is:
- For current computer users: A downward spiral of everything sucking more with each new update.
- For potential new computer users: A decreasing incentive to use computers: "Computers don't really seem to offer anything I can't do on my phone, and if I need a bigger screen I'll use my tablet with a BT keyboard"
- For the so-called "normal people" the article references (I believe the article is really both patronizing and infantalizing the average person), there they're effectively people who don't want to use computers, they don't want to know how stuff works, what stuff is, or what stuff can become, they have a problem they cannot put into words and they want to not have the problem because the moving images of the cat should be on the place with the red thing. - They use their phones, their tablets, and their apps, their meager and unmotivated desire to do something beyond what their little black mirror allow them is so week that any obstacle, any, even the "just make it work" button, is going to be more effort than they're willing (not capable of, but willing) to spend.
Thing is, regardless of the particular domain, doing anything in any domain requires some set of understanding and knowledge of the stuff you're going to be working with. "No, I just want to edit video, I don't want to know what a codec is" well, the medium is a part of the fucking message! NOTHING you do where you work with anything at all allows you to work with your subject without any understanding at all of what makes up that subject. You want to tell stories, but you don't want to learn how to speak, you want to write books, but you don't want to learn how to type, write or spell? Yes, you can -dictate- it, which is, in effect, getting someone competent to do the thing for you.. You want to be a painter, but you don't care about canvas, brushes, techniques, or the differences between oil, acrylic and aquarelle, or colors or composition, and just want to make the picture look good? You go hire a fucking painter, you don't go whining about how painting is inherently harder than it ought to be and how it's elitist that they don't just sell a brush that makes a nice painting. (Well, it _IS_ elitist, most people would be perfectly satisfied with just ONE brush, and it should be as wide as the canvas, and it should be pre-soaked in BLUE color, come on, don't be so hard on those poor people, they just want to create something, they shouldn't have to deal with all your elitist artist crap!) yeah, buy a fucking poster!
I'm getting so sick and tired of this constant attack on the stuff _I_ use every day, the stuff _I_ live and breathe, and see it degenerated to satisfy people who don't care, and never will.. I'm pissed, because, _I_ like computers, I like computing, and I like to get to know how the stuff works, _ONCE_ and gain a deep knowledge of it, so it fits like an old glove, and I can find my way around, and then they go fuck it over, time and time again, because someone who does not want to, and never will want to, use computers, thinks it's too hard..
Yeah, I really enjoy _LISTENING_ to music, I couldn't produce a melody if my life depended on it (believe me, I've tried, and it's _NOT_ for lack of amazingly good software), it's because I suck at it, and I'm clearly not willing to invest what it takes to achieve that particular goal.. because, I like to listen to music, I am a consumer of it, not a producer, and that's not because guitars are too hard to play, it's because I'm incompetent at playing them, and my desire to play them is vastly less than my desire to listen to them.
Who is most software written for? - People who hate computers and software.
What's common about most software? - It kind of sucks more and more.
There's a reason some of the very best software on the planet is development tools, compilers, text editors, debuggers.. It's because that software is made by people who actually like using computers, and software, _FOR_ people who actually like using computers and software...
Imagine if we made cars for people who HATE to drive, made instruments for people who don't want to learn how to play.. Wrote books for people who don't want to read, and movies for people who hate watching movies. Any reason to think it's a reasonable idea to do that? Any reason to think that's how we get nice cars, beautiful instruments, interesting books and great movies ?
Fuck it. Just go pair your toaster with your "app", whatever seems particularly important.
Maybe we should just say free software is amazing and not a tool for home users, in order to get home users to use it.
One of the truest things I've read on HN. I've also tried to visit this concept with a small free image app I made (https://gerry7.itch.io/cool-banana). Did it for myself really, but thought others might find it useful too. Fed up with too many options.
It also opens up opportunities for money-making, and employment in Free Software for people who do not program. The kind of hand-holding that some people prefer or need in UX is not easy to design, and the kind of marketing that leads people to the product is really the beginning of that process.
Nobody normal cares that it's a thin layer over the top of a bunch of copyleft that they wouldn't understand anyway (plenty of commercial software is a thin layer over permissively licensed stuff.) Most people I know barely know what files and directories are, and the idea of trying to learn fills them with an anxiety akin to math-phobia. Some (most?) people get a lot of anxiety about being called stupid, and they avoid the things that caused it to happen.
They do want privacy and the ownership of their own devices as much as everyone else, however; they just don't know how much they're giving up when they do a particular software thing, or (like all of us) know that it is seriously difficult if not impossible to avoid the danger.
Give people mock EULAs to click through, but ones that enumerate the software's obligations to them, not their obligations to the software. Help them remain as ignorant as they want about how everything works, other than emphasizing the assurances that the GPL gives them.
For those of you thinking "which 20%?" after that article from the other day: this is where good product sense comes in, knowing which 80% of people you want to use it first. You could either tack on more stuff from there to appeal to the remaining 20% of people, or you could launch another app/product/brand that appeals to a different 80% of people. (e.g. shampoo for men, pens for women /s)
Q: Why does God allow so much suffering?
A: What? There is no God. We invented him.
Q: Doesn't this mean life has no purpose?
A: Create your own purpose. Eliminate the middleman.
Q: But doesn't atheism allow evil people free rein?
A: No, it's religion that does that. A religious evil person can always claim God either granted him permission or forgave him after the fact. And he won't be contradicted by God, since ... but we already covered that.
Hmm. If it works for HandBrake, it might work for life.