This is what XHTML was, and it was a complete disaster. There's a reason almost nobody serves XHTML with the application/xhtml+xml MIME type, and that reason is that getting a “parser error” (this is what browsers still do! try it!) is always worse than getting a page that 99% works.[0] I strongly believe that rejecting the robustness principle is a fatal mistake for a web-replacement project. The fact that horribly broken old sites can stay online and stay readable is a huge part of the web's value. Without that, it's not really “the web”, spiritually or otherwise.
[0] It's particularly “cool” how they simply do not work in the Internet Archive's Wayback machine. The page can be retrieved, but nobody can read it.
And it's still unambiguous. You can cringe at what some people do, but it would be strictly a taste issue rather than a technical one, as the parse would still be unambiguous. And if you think you can fix taste issues with technical specification, well, you've already lost anyhow.
Would you like to have a law that forbids you, under penalty of fine, to read any book you buy or borrow that is lacking or has damaged pages?
If it did somehow happen that a good deal of interesting content was published using the standard, the most popular client would probably be nonconforming, ignoring the rule to not render ambiguous content.
Protocols used to be limited by technology, now they're defined by ideology.
However, it's not so clear to me that this can't be done from the start, so that the expectations are set right from the beginning. For example, I don't see the same problem in other formats like JPEG or PNG, where you expect the image to work perfectly or fail with a decoding error.
Other than implementing it and seeing how it goes, can you propose a feasible experiment to see how a new strict spec will measurably fail?
Tried it right now - took a PNG and a JPEG, opened them in a text editor, literally deleted the second half of each file, saved, and dragged them into both Firefox and Chrome - they are displayed instead of erroring out.
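In case anyone wants to repeat it programmatically rather than with a text editor, here's a minimal sketch in Python; the filenames are just placeholders for whatever local images you have:

    # Truncate an image to half its size; viewers typically still render
    # the decodable top portion rather than refusing outright.
    from pathlib import Path

    for name in ("image.png", "image.jpg"):  # hypothetical local files
        data = Path(name).read_bytes()
        Path("truncated-" + name).write_bytes(data[: len(data) // 2])
        print(f"truncated-{name}: kept {len(data) // 2} of {len(data)} bytes")

Then drag the truncated-* files into Firefox or Chrome and see what renders.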
There is a classic article on why a minimal version of the web with features removed will fail - you removed the 80% of the features that YOU think are not important. That's a classic fatal mistake.
Search the web for different proposals for a minimal web and you will understand - they will have removed some feature they think is bloat but which you kept in your proposal because you consider it critical. Which is why you created a new proposal - their minimal proposal is not the right one for you.
https://www.joelonsoftware.com/2001/03/23/strategy-letter-iv...
I think what is lost on many people, ironically even the ones who want to retvrn the web to its former glory, is that the browser tries to display broken, half-transmitted content because that happened so frequently due to circumstances completely out of the website operator's or the user's control. And in most cases showing a half-transmitted web page with half of the closing tags missing is almost certainly better than just outright refusing to show anything.
That... is not how anything happened.
> I don't see the same problem in other formats like JPEG or PNG where you expect the image to work perfectly or fail with a decoding error.
Browsers absolutely decode as much as they can, and if the file is corrupted halfway through you generally get garbling, not the entire image being replaced by "fuck off". The only case where that is so is if the browser can't parse anything at all, or can't retrieve the file.
> Other than implementing it and seeing how it goes, can you propose a feasible experiment to see how a new strict spec will measurably fail?
We already did that and saw where it went.
What I meant is that you don't expect PNG or JPEG images to be created in a way that the parser needs to run a complex process to reconstruct the bits that are broken and interpret what you meant to say. Like this one:
https://html.spec.whatwg.org/multipage/parsing.html#adoption...
Perhaps a better example is a C program being compiled into an executable. You don't expect the compiler to guess what you meant while parsing.
The current expectation is that a web browser must load any broken HTML and still display what it can, and it is this expectation that I would like to change.
I don't propose that humans write this format directly (although it should be human readable), but that they compile it from something that is easy to write, like Markdown or a similar language. The objective is to enforce the use of tools that perform the transformation and produce a strictly conformant document.
Having a context-free grammar allows simple and fast parsing tools that can process your document, in much the same way that you can query or manipulate a JSON file with tools like jq, because the grammar is simple and strict.
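As a rough illustration of what that buys you, here is the kind of trivial, unambiguous processing JSON's strict grammar already makes possible; the document structure below is invented for the example:

    import json

    # A strictly conformant document: one grammar, no error recovery needed.
    doc = json.loads("""
    {
      "title": "Forking the Web",
      "sections": [
        {"heading": "Motivation", "words": 420},
        {"heading": "No scripting", "words": 180}
      ]
    }
    """)

    # Querying it is trivial, jq-style, because the parse is unambiguous;
    # a malformed document would have raised a parse error above instead.
    print([s["heading"] for s in doc["sections"] if s["words"] > 200])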
So what you meant is neither what you wrote, nor what you advocate for?
Because, in case you have forgotten, here is what you advocate for:
> Pages that don't conform with the specification won't be rendered.
That is not how image rendering in browsers works. That is how XHTML does not work.
> Perhaps a better example is a C program being compiled into an executable.
It's not a better example, because it's a completely different and unrelated use case: C programs are usually not dynamically generated, and even when they are the person who compiles the code is usually either the person who wrote it or a person who has ways to fix it or report errors.
Not so when trying to read a random page on the web.
> I don't propose humans to write this format directly (although it should be human readable)
Approximately nobody wrote xhtml by hand, and that didn't save it.
This is also a nonsensical constraint set on its face: there is no point to a human-readable format which is not human-writable.
> The objective is to enforce tools that make the transformation to produce a strictly conformant document.
Ah, an open and non-monopolizable format which can only be written via an official toolchain.
> Having a context-free grammar allows simple and fast parsing tools that can process your document in a similar way that you can query or manipulate a JSON^H^H^H^HXML file with tools like jq^H^Hxmlstarlet because the grammar is simple and strict.
None of which seems of any use to a format which pretends to human production and consumption. JSON is an interchange format between machines.
>The current expectation is that a web browser must load any broken HTML and still display what it can, and it is this expectation that I would like to change.
You really can't change it because what you're really asking for without realizing it is to make web surfers prefer user hostile web browsers. It is unrealistic to persuade web surfers to want broken web documents to be completely hidden or blocked with an error message.
Consider the following simple HTML examples:
<HTML><BODY><DIV>The current stock price for Tesla is $428</BODY</HTML>
<HTML><BODY><DIV>The 2026 FIFA World Cup final July 19, 2026 3:00 p.m.</BODY</HTML>
The missing </DIV> closing tags mean it is "non-compliant" HTML and you want the browsers to be "forbidden" from rendering it. That is not software the end users want. But isn't bad HTML broken?!? Therefore, shouldn't the browser reject invalid HTML like C language compilers reject invalid code?!? No, because those are not the same category of data formats.
The difference with HTML is that there is non-HTML raw data content that is still human-interpretable and more valuable to end users (a quick sketch below shows how easily that data can be recovered).
Higher value types of data: "$428", "World Cup game is July 19 2026"
Lower value types of data: "<HTML><BODY><DIV>"
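As a sketch of how cheaply that higher-value data can be recovered, even Python's lenient stdlib parser pulls the readable text straight out of the broken example above:

    from html.parser import HTMLParser

    class TextExtractor(HTMLParser):
        """Collect character data and ignore the (possibly malformed) markup."""
        def __init__(self):
            super().__init__()
            self.chunks = []

        def handle_data(self, data):
            self.chunks.append(data)

    broken = "<HTML><BODY><DIV>The current stock price for Tesla is $428</BODY</HTML>"
    extractor = TextExtractor()
    extractor.feed(broken)
    print("".join(extractor.chunks))  # the high-value text survives the missing tags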
If you try to invent a "next-generation HTML" that forbids browsers from rendering invalid syntax, you can't really "enforce" that, because there will always be software developers who will "defect" and make browsers that ignore errors. Why?!? Because end users still want to see the critical data that's unrelated to perfect HTML syntax. The users will deliberately choose the non-compliant browser that ignores the errors of non-compliant HTML.
Software has been doing "best efforts" parsing for various file formats for decades because it's the behavior users want. I wrote previous comments about that with more examples:
What the heck are you talking about? User agent devs and users did indeed always go toward "it mostly works".
Today, when writers are using visual editors (or Markdown), few are writing their own HTML any more. A web standard requiring compliance would work differently today.
I'd say it was a minority of writers that were handcrafting XHTML. And it was the case that everyone, whether handcrafting or using tools, could validate their compliance using a browser, which made it very easy to adjust your tools or your handcrafted code. We are now in a situation where there is no schema for HTML.
I, for one, am very much in favor of forking the web with a document format with a schema. It really seems like a small and simple change to me.
Honestly I don't think it was killed by one thing, or by anything. Just no platform really cared and it wasn't a win for anyone and occasionally a loss.
That’s not the reason almost nobody serves XHTML.
The real reason is Internet Explorer. Okay, it’s a little more nuanced than that, but I think it’s accurate enough. Microsoft killed XHTML by inaction.
It’s 2004. XHTML is now a few years old, and all the rage. You decide to use it for your new project which you’re developing. At the start, you serve pages as application/xhtml+xml, and that works well in Firefox; but you know that won’t work because Internet Explorer still doesn’t support XHTML, and 90% of your viewers will be using that. So, a little frustrated, you serve your nice XHTML as text/html. You still validate it manually for a while, but then that habit disappears. Eventually you make one or two small mistakes that would have been caught easily if it were parsed as XML—but it’s not, because of Internet Explorer. Over time this disparity grows.
People have been complaining of the inefficacy of XHTML for this exact reason for two or three years by this point.
It’s 2006. XHTML is acknowledged to have failed. Everything else supports it, but as long as IE doesn’t, you can’t serve as application/xhtml+xml, and so you can’t get the advantages of XML syntax.
Seriously, early failure is good—so long as you’re working with it from the start. The problems only occur when you try to add strictness later.
Just look at typing in code bases. Adding strictness to existing JavaScript or Python or Ruby? Nightmare. Starting with static types? Somewhere between fine and extremely desirable.
(I might be overselling strictness’s popularity at the time—people don’t always like what’s good for them. We’ve largely realised now that unfettered dynamic typing is a bad idea, but ten years ago that was not settled. People get used to things. If IE had permitted XHTML early on, people would have got used to the idea of XHTML’s strictness and, I think, got to mostly like it.)
XHTML did not fail because of XML’s catastrophic parse failure mode. It failed because HTML already worked, and Internet Explorer took way too long to accept XHTML. If you’re forking the web and compatibility with existing documents is not a goal, you can’t use XHTML’s failure as an argument: it failed because of compatibility issues.
Well, Internet Explorer did eventually support application/xhtml+xml: in 2011, with IE9. Way too late to matter. And so only by around 2015 or 2016 could you finally serve with XML syntax. And now why would you? For your system is big and has tiny errors here and there and your CMS just drops markup in and never got round to validating it and and and and so on. By that time, HTML had given up on the XML path, and although it worked, the momentum was entirely gone, so you’d run into difficulties due to inadequate documentation, inferior tooling (ironic), and various other problems.
Now, they enable applications to exist without going through app store gateways.
A new document-only protocol aligned with the Web's original intention would be very useful simply for security reasons. I liked Gemini because, by design, a Gemini document is not executable in any way; there are no popups, plugins, or even cookies; all this is out of the box without having to manage settings, and Gemini documents are very readable without an app at all.
But replacing the modern browser rather than being another option will actually lock people in further than they already are, because open protocols require apps, and apps are now all behind a gateway on users' primary computing device: phones.
It probably won't matter in a few years as the Web will likely be equally locked down soon, though.
What? You could deploy software without dealing with Microsoft back then and you still can today. Unless you meant avoiding building for Windows natively.
Nonsense, lots of software was just local; I've even seen MSN clones written in Tcl/Tk, and Lazarus is still used in some places, and tons of VB6/C# software. Back in the day, except for intranet turds (which in the end caused disasters like ILOVEYOU.VBS, "thanks" to IE/Outlook being deeply tied to Windows 9x software), everyone serious about programming security and correctness flew away from the web model for good. It was all about Java (and applets) and later C#. The web had an overgrowth of languages which shouldn't be part of the desktop.
Most of this document reads to me like that's the problem they're trying to solve, not just Chrome's huge market share, so simply not targeting it doesn't serve their purpose.
In real democracies the populists (Facebook, TikTok, Chrome) always win, because that's what the masses want.
Is Friedrich Merz a populist? Was Angela Merkel a populist? This theory seems to have considerable limits.
The context is real democracies, not messy extant nation-state governments. Please delete your comment so no one can read it.
All we need is one fork that merges all of them into a single standard!
I don't care about any of that, I just want to have fun on the internet. By that metric, most of the criticisms in this thread are irrelevant. It doesn't need to make money, it doesn't need to be used by more than a few nerds, and it doesn't need a zillion bells and whistles. Whether rdg (the author of the blog post) shares this goal, I don't know.
Being involved in the development of Dillo for a few years makes you see things from a different perspective. I also think that it should be fun to make your own tools from scratch and be able to understand the specs in a couple of weekends. Pretty much what happened with Gemini and the explosion of clients and servers:
it's as if nothing was learned from the XHTML debacle
Then HTML5 came along, providing all kinds of shiny goodies and saying not to bother with the tags. In the end, a more rigid standard would have been nice... (Though this is mostly about the skin-deep part of the standards.)
It failed because the smallest error by a client after the fact was like a server crash. Plus it would have created a mild barrier to entry when learning html at all.
xhtml was entirely opt-in, people opted into it, then served broken content. xhtml failed because that broken content (from people who, again, had specifically opted into serving xhtml) was an utterly terrible experience for everyone involved, as the user would get a big fuck off page devoid of any content, information, or means of redress, and there was no way for administrators or authors to get notified that their content was broken.
Meanwhile HTML would usually let you do the things you wanted to, and if you noticed something was broken you'd usually be able to hunt down a contact form and send a notice.
HTML5 is not what killed xhtml, xhtml is what did that, because it was a dreadful experience all around and had absolutely no redeeming quality.
Hell, the W3C was so into xml at the time there was an xhtml5 serialisation for html5. Technically it's still there (https://html.spec.whatwg.org/multipage/xhtml.html). That was of great use to the nobody whatsoever who was interested.
and what new capabilities does this new proposal provide?
What you want is to have scripting with capabilities -- preferably on top of WebAssembly (JS is a sin).
The best part is this improves the experience of noscript users -- rather than nice graphical widgets being broken, instead, they can just run scripts without any "network" capability -- which should forbid the scripts not only from accessing the network, but make it so anything they modify becomes "tainted" and is not allowed to show up on a network call (so e.g. if they encode some data in a form, trying to later submit that form somewhere else on the app will give a warning).
Now -- most people don't care and don't want this. And that's a good thing -- capabilities put the power in the hands of the user agent where they belong.
More interestingly-- capabilities can be shimmed! Rather than "you are not allowed to access my GPS", it should be a first-class feature to feed the WASM a GPS stream of your choice.
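A toy sketch of that idea, nothing WASM-specific, just the pattern of the user agent deciding which capability object the untrusted code receives (all names here are invented):

    from typing import Iterator, Protocol

    class GPS(Protocol):
        def positions(self) -> Iterator[tuple[float, float]]: ...

    class RealGPS:
        # stand-in for the device's actual location source
        def positions(self):
            yield (52.3676, 4.9041)

    class FakeGPS:
        # a shim the user chooses to hand the app instead
        def __init__(self, lat: float, lon: float):
            self.lat, self.lon = lat, lon

        def positions(self):
            while True:
                yield (self.lat, self.lon)

    def run_untrusted_app(gps: GPS):
        # The "app" never reaches for a global GPS; it only sees what it was handed.
        first = next(iter(gps.positions()))
        print(f"app thinks we are at {first}")

    # The user agent, not the app, decides which capability to grant:
    run_untrusted_app(FakeGPS(lat=0.0, lon=0.0))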
In the browser? The map viewer could just be a separate programme entirely, like a PDF viewer, etc. I remember watching rdg (the current main Dillo developer) demonstrating this with a separate map programme.
Most of your post seems to assume this "everything must be in the browser" approach, which is actively not what Dillo is about. (I would know, I use Dillo regularly.) It adheres to the Unix philosophy.
EDIT: Looking at it closely, did I just respond to an LLM post?
You can certainly make something with it, but I can't imagine most people finding a use for it.
The modern Internet is 45% appearances and 50% search traffic optimization. For better or worse we lost all usable registries of websites; we lost websites free of appearance and traffic considerations. The information-focused Web is pretty much dead.
Maybe these ideas did not scale and did not monetize that well, but we will never really know what an information-focused version of the Internet would have looked like, because evolution took it elsewhere. Unless we try building another one with different principles and limitations at the core.
Perhaps what's needed is an alternative search engine. Assert that you will only index a site that meets some strict set of limits. If that's what people want, they will use that engine. If it's popular, sites will have to find ways to get listed, e.g. "simple.amazon.com" which supports that standard.
For me, the information-sharing part of the internet now is the shadow libraries. I can get access to all (well, still not quite all) journals and university-press publications from the last century? Awesome. Vastly more informative than some blogger who nowadays is probably trying to monetize my attention.
Even so, those who want to share and access information can already do that via the Web. Nobody has to use scripting. Nobody has to use The Google as their search. Nobody has to rely on an LLM. If there is demand for simple webpages that are free of scripting, they can be built and shared today. Because of this, the proposal comes off as very out of touch and deep within the HN bubble. Strict grammar for declaring documents is merely a fetish. If there's no scripting, then there's no reason for a document to break for some silly reason.
> Instead, you can provide a Geo link to open the location in any client that supports the protocol.
Sorry but as someone old enough to remember when the web was mostly non-interactive I vastly prefer the current situation despite its many shortcomings. I want to keep a minimal number of programs on my computer. I don't want to give a hundred "clients" access to my computer when I can just run JavaScript sandboxed in my browser. If someone sends me a link and tells me it's a cool game he found online I will open it in my browser and have a look, but I will not just run random binaries on my computer. Oh, and I like being able to access any website just from my browser on my Linux, instead of hoping that there is a Linux client that isn't 5 years out of date or fiddling with Wine to figure out why the Windows binary wouldn't run.
I understand why people dislike the web sandbox or having to run a full blown VM for everything, but please understand that this is also what makes the web great. You can run everything and fear nothing.
In general, Dillo follows the Unix philosophy. You use separate programmes to handle things that Dillo can't itself, like watching videos.
I use 50 different interactive web apps; I do not want to install 50 different apps.
Most of them do not have a "protocol" - what is the desktop equivalent of Excalidraw?
> A published version of the standard NEVER, EVER, EVER, EVER changes.
WhatWG does have per-commit snapshots of the standard. They're just not semantically versioned because it is a living standard.
I think what the author wants is something like Gemini instead of HTML, but that has its own set of problems. My plea for Dillo would be to instead just support a text/markdown mime-type natively and we can try for adoption in more browsers.
> The objective is not to create a feature-by-feature clone of the Web, but to create an specification that allows humans to exchange knowledge, notes, and other forms of information without the imposed requirement of having to run a full blown VM to read it.
Markdown in browsers fits your objective! The only gotcha is CommonMark extensions, and they can work with sub-type declarations in the MIME type.
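text/markdown is already a registered media type (RFC 7763) with an optional variant parameter, so a server can in principle declare which flavour it's sending. A minimal sketch with Python's stdlib server; the content and variant value are just examples:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    PAGE = b"# Hello\n\nPlain Markdown, no scripting, no layout engine.\n"

    class MarkdownHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            # Declare the flavour via the media type's variant parameter.
            self.send_header("Content-Type",
                             "text/markdown; charset=utf-8; variant=CommonMark")
            self.send_header("Content-Length", str(len(PAGE)))
            self.end_headers()
            self.wfile.write(PAGE)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), MarkdownHandler).serve_forever()

A browser that understood text/markdown natively could render that directly; a browser without that support will typically just download it or show it as plain text.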
"as soon as a monopolistic entity can build a mechanism to extract revenue from it, there will be an incentive to capture the standard and change it to for their own benefit"
Personally I'd love a simple semantic versioned subset of the web. The required traction and buy-in from existing key players (browser vendors, web hosting platforms etc) makes it largely a non-starter though. I'd love to be wrong though.
Instead of "forking", it may be more prudent to extend or revive something more like Gopher, so you don't constantly get baraged by incompatible sites (like you would in a forked web)
While HTML serves its purpose, especially for documents, the modern web is a giant mess of that legacy, combined with unfriendly ergonomics and glue/hacks built on top just so we as developers can have better DX for creating complex software on top of it.
Building a browser means having to deal with all that legacy, whether we like it or not, so most of the browser market got captured by the big players who have enough manpower to cover all those edge cases. That also means we have to deal with whatever technical choices or bloat they make, causing an infinite stream of issues, from memory usage, to size, to limitations that don't make sense in 2026 but are still there because someone 20 years ago decided to write them like that. As I deal with mobile webviews a lot in my daily work, I unfortunately had to get familiar with quite a few gotchas and edge cases, and some are just... absurd in this day and age.
I believe we need a separation between an application layer and the document layer, and especially between the UI language and the actual application code - script tags serve their purpose, but again, they are a hacky solution with their own bag of tricks, and those tricks impact all of the software built upon them.
Now, a bit of a shameless plug: I've been working on something to fill that gap, at least for myself and hopefully for others who encounter the same issue - it's called Hypen (https://hypen.space) and it's a DSL for building apps that work natively on all platforms, with strict separation of code/UI/state, and support for as many languages and platforms as I can maintain, not "just javascript". While currently it's focused on streaming UI, it's built with Rust and WASM at its core and will soon allow fully "compilable" apps.
While it may not be the future of software, once you get into building something like that, it becomes obvious that the way we are building now is at least wrong, and at best kafkaesque.
Maybe I'm just stupid, but I don't really know what the author is talking about here. What parts of the standard? HTTP? HTML? DOM APIs? What?
Gemini protocol?
Edit: actually it looks like w3m was ‘95 and Dillo was ‘99.
> If you are under 13 years of age, you are not authorized to register to use the Site.
(By the way, are you aware that the largest bakery company in the US is named “Bimbo”? Tee hee! You should tell them to change their name!)
The standards that make my life miserable at times are the secondary standards like GDPR and WCAG as well as the de facto "standard" systems we are forced to participate in such as Cloudflare, the advertising economy, etc.
It's easy to say "WebUSB is bloat" and I'd certainly say PWA is something that could only come out of the mind that brought us Kubernetes, but lately I've been building biosignals applications and what should my choice be: write fragile GUI applications for the desktop that look like they came out of a lab and crash from memory leaks or spend 1/5 the time to make web applications that look like they belong in the cockpit of a Gundam and "just work"?
How so? PWAs are awesome! Democratizing for users. Democratizing for developers. They work well for the right class of apps. They would go much further if there weren't forces actively resisting them. Think of all the electron type-apps out there. Now imagine if the average Joe could just install them from the web with 2 clicks.
(Regular ole bookmarks get you a decent percent of the way but clearly something extra than that was needed.)
> No scripting
How will it be possible to go back? The average ecom presence usually relies heavily on JS. I haven't checked in a long time whether any relevant sites work without JS. I think going back to more basic approaches could even improve the user experience, as many usage patterns would probably converge and simply look and function as intended. But the whole web world is so fixated on solving everything with JS that this seems like targeting the highest-resistance target you can find. Don't get me wrong, I hate this situation and we must not have a single language that dominates everything.
I also don't believe that enthusiasts will create a significant shift. They can surely provide the fundamentals, but if there isn't a huge mainstream impact, it will not change anything.
- We don't want multiple versions (1.1.1, 1.2.1), but we also don't want constant churn (the current dev/product fad). What we want is one thing that works well indefinitely, is backwards compatible, changes infrequently, and can be expanded if necessary. In order to achieve that, we have to abandon the idea of monolithic web browsers.
"The Web" is not a hypertext document viewer, as much as some people (myself, and Dillo probably) would like it to be. It is an application platform. So you must consider the needs of an application platform if you want a "new Web". The browser interfaces with the entire OS + a slew of protocols and libraries. It's Android in userland. It will change as constantly as OSes and tech changes, which is constant. So to get away from churn, we need to break up the application platform into layers. Those layers need to have simple, well-defined backwards-compatible interfaces, with extensions. The model for that has been around for decades; network protocols last 60+ years without needing to be replaced, but add features over time, without getting feature creep, and remain backwards compatible. There aren't a ton of versions of common internet network protocols. And importantly, you don't have to use one implementation, the way people get stuck on one browser.
The standard should follow this extremely well established pattern of layers of independent components which aren't built into a monolith. It can still have a version (initially), but we shouldn't need to change the version, we add feature flags and handshakes, the way network protocols do. The end result should be a combination of a "web POSIX" + "layered protocols/specs".
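A toy sketch of that "feature flags and handshakes" pattern; nothing here is a real protocol, just the negotiation shape:

    # Both sides advertise what they support and the session runs on the
    # intersection, so old and new peers keep interoperating without ever
    # bumping a monolithic version number.
    CLIENT_FEATURES = {"text", "images", "forms"}
    SERVER_FEATURES = {"text", "images", "video", "compression"}

    def negotiate(client: set[str], server: set[str]) -> set[str]:
        # Unknown features are simply ignored, never treated as fatal errors.
        return client & server

    print(sorted(negotiate(CLIENT_FEATURES, SERVER_FEATURES)))  # ['images', 'text']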
- "Pages that don't conform with the specification won't be rendered" - this simply is never going to happen. The history of software development is littered with examples of having to work around implementations of specifications. Your client can try to render strictly, but it will inevitably break on someone's implementation, and you will be forced to deal with it, or lose your customers/users.
"Having a strict grammar will likely cause humans to migrate to a language that is easy to write and is more forgiving ... The objective is that parsers can be simplified and the cost of creating tools that can manipulate the content is lowered" - This sounds like you're saying, programming is hard, so let's make the user have to work around our inability to solve hard problems. Easy is not always better.
- "Resistance to standard capture"* - I think this goes back to the layers. Remember you are building an entire Application Platform. Think about Linux and Open Source. How does it resist capture? Independent organizations and authors, loose associations, cobbled together components. There is nobody in control, so you can't capture it. This is actually the same with network protocols (other than HTTP, we all know Google controls the spec). We can take ideas from many places. As just one random example: MCP is a simple yet powerful way for independent entities to add functionality to an application both locally and remotely, yet is independent of both the client and the server. Another example is Plan9, where you can support anything in the world and use it as a file (both locally and remotely), as long as you make and run the driver for it.
- "Text first" - You just lost the room. If you want text only, stick to Gopher. An application platform requires multimedia. You would do well to craft the spec so that it can convert application presentation into a text structure. Sell it as accessibility.
- "No scripting" - Now your proposal is dead. Again, Application Platform!! People want a way to cheaply deliver and run application code in real time. I think this needs a lot of careful attention, because you don't want to continue the status quo of requiring a single monolith to interpret and execute logic for the entire application platform.
It would be great to differentiate between "static" and "dynamic" pages based upon scripting, IMO.
Oh, and also https://xkcd.com/927/