Imagine clicking a checkbox, which adds the `checked` attribute to its element, then using Hyperclay to globally persist this version of `document.body.outerHTML`, so that it's there next time someone visits the page. There's automatic versioning and read/write permissioning.
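A minimal sketch of that flow (illustrative, not Hyperclay's actual code — note that clicking a checkbox flips the `checked` property, not the attribute, so you have to reflect it into the markup before serializing):

    // Reflect checkbox state into the DOM so outerHTML captures it:
    document.querySelectorAll('input[type=checkbox]').forEach(box => {
      box.addEventListener('change', () => {
        box.toggleAttribute('checked', box.checked);
        persist(); // illustrative helper, see below
      });
    });

    function persist() {
      const html = document.body.outerHTML; // the state that gets stored
      // ...POST it to the save endpoint, which handles versioning/permissions...
    }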
It's a pretty cool project! I'll definitely try it for my own personal tools.
Do note that, from my understanding, it's most useful when there's one developer who is also the only content editor. Otherwise you'll have editors overwriting each other's changes, and if there are multiple copies there's no easy way for the developer to push a change to all copies.
Note: we are working on a way for a developer to push "DOM-based schema migrations" to all forked apps.
So I'm trying to understand the difference, the payoff. I understand that local web APIs are ass and you very quickly run into the need for a server.
But I'm wondering about the utility of combining the two approaches. It seems like a contradiction in terms: a server to help you with a dev setup oriented around not needing a server.
I guess the main win would be cross device access? You have it online and you can edit it easily.
I'm editing my stuff on my phone in a text editor. And syncing it to my laptop with a sync app.
ETA: Wikipedia has reminded me the feature was called UniversalXPConnect, and it was a Firefox thing and wasn't cross-browser. It still sucks that it was removed without sensible replacement.
I did a deep dive on TiddlyWiki recently. No, it doesn't need a server; an entire site can be self-contained in one HTML file.
If you want to run it as a multiuser, web accessible wiki, it runs on top of NodeJS.
I thought opening index.html in the browser was basically just a demo.
The other mode is with a nodejs server (it's what I'm using). This allows me to access the wiki from all devices.
I forget that File -> Save is even a thing for websites.
It is a common gotcha that new users will lose some of their work while they learn persistence.
With TiddlyWiki you had to essentially File -> Save As and save the HTML back over itself. There were other ways too, but they were all workarounds for the fact that browsers don't allow direct access to the local file.
Your notes are the HTML file! You can keep it in your documents folder, sync it via any service, track it in version control, etc. It’s for folks who know what the filesystem is, don’t know how to host a server (or don’t want to), but want a website-like experience. Works offline, too!
The file itself provides both the dynamic functionality and the data storage, but you need an engine (like Obsidian) to make the persistence and the dynamic parts work together. I.e. if there is a button that adds a task to a todo app, your engine modifies the HTML file with the new content.
Edit: actually it looks more like a library/framework to make such apps. And now I'm not sure if it needs a backend component or nodejs or not.
hyperclay is a web server that stores and serves versions of html files
I don't get it.
- local HTML file cannot import local JS modules, so I have to use legacy writing style
- local HTML file cannot open other local files (for example, audio files)
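For example, the module limitation in practice (a sketch; module scripts fail under file:// because file URLs count as opaque origins for CORS):

    <!-- Fails when opened via file:// -->
    <script type="module" src="./app.mjs"></script>

    <!-- Works under file:// — the "legacy writing style": a classic script
         that hangs everything off globals instead of import/export -->
    <script src="./app.js"></script>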
I understand that there is a risk if we allow local HTML files to access anything, but there should be some way - for example, a file or directory might have some suffix that allows access to it.
I do not want to use a web-server because it feels like overengineering and I don't want to open a terminal, navigate to a directory and start a server every time, it takes too much time. I just want to type the URL and have my app running.
You can have a zero-build zero-dependency offline-only app which users could theoretically just save as a page, but copy buttons will not work, so you have to detect the API not being available and replace buttons with popover textareas. Clunky.
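A sketch of that detection and fallback (names illustrative):

    async function copyOrFallback(text, button) {
      try {
        await navigator.clipboard.writeText(text); // may be missing or rejected on file:// pages
      } catch {
        // The clunky fallback: a textarea the user copies from manually
        const ta = document.createElement('textarea');
        ta.value = text;
        ta.readOnly = true;
        button.insertAdjacentElement('afterend', ta);
        ta.select();
      }
    }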
As for running a local server, VS Code devcontainers are the solution. Open your workspace, it runs the devcontainer, and you have a server up without extra hassle. Add a command to generate certs and you can have HTTPS locally.
Can somebody help me understand what’s going on?
> In particular, the user agent SHOULD treat file URLs as potentially trustworthy.
> User agents which prioritize security over such niceties MAY choose to more strictly assign trust in a way which excludes file.
A potentially trustworthy URL is a secure context: https://html.spec.whatwg.org/multipage/webappapis.html#secur...
So this is a matter of browsers not implementing it, probably because there’s just not a lot of demand for it.
Such things seem to be cycles.
Today a lot of browsers support .MHT which is a similar format, but also worse in many other ways. (The M stands for MIME and wrapping a website like an email seems somehow sillier and weirder to me than wrapping it in a ZIP file, though I get that MIME wrappers are ancient internet tech with an ancient track record.)
Then we see all the millions of apps in PWAs and Electron downloads.
At some point it feels like we should have better solutions and cut some of the gordian knot cycling between "local apps are too much of a security risk" and "local apps should be complicated collections of Service Workers to get offline support" and "local apps should just embed a full browser and all its security implications/risks rather than allowing browsers to directly open local apps" and back and forth. .HTA and .MHT both showcase possible directions back to "simpler" than PWAs/Electron, they just have such fascinating and weird histories.
Why does it block local pages? Well, what benefit would it be to Apple or Google if it were easier to make good localhost webapps?
Try deleting Safari site data (IndexedDB etc.) for your localhost site. You won't be able to. Hell, even deleting data for a specific public site is hilariously painful. Try adding a certificate authority to your iPhone. Try playing locally cached audio blobs via web APIs on an iPhone. There are probably 1000 more.
it's not easy to install VS Code, let alone use it. VS Codium (and VS Code) are Electron apps, and it is difficult to sandbox them, as Electron (and Chromium as well) uses a suid helper - a privileged binary that launches as root - and I won't allow suid binaries in a sandbox. Also, it requires GPU access for fast rendering, which is difficult to provide in a safe way, or you have to switch to software rendering (I couldn't figure it out). Electron apps are a pain to install.
One could use a virtual machine, but it would use more resources.
Obviously, that's not VS Code's issue, it is a problem with Linux which doesn't come with a good sandbox.
It is much easier to type python -m http.server to launch a server, but you need to open a terminal and navigate to a directory first, and that takes a lot of time I would rather spend on something different (like figuring out how to work around issues with suid binaries in Electron apps). And this looks like a hacky workaround: launching a web server only to gain more privileges in a browser.
It does?
There's also Orca for building sandboxed wasm apps, but it removes the browser and the DOM and only gives you a canvas to work with.
> I do not want to use a web-server because it feels like overengineering and I don't want to open a terminal, navigate to a directory and start a server every time, it takes too much time. I just want to type the URL and have my app running
I'd love for there to be an "offline only" mode for browsers where you can either access the local filesystem or remote pages, but not both. I don't think this would completely solve the use case you describe, since there would still presumably be circumstances where it would be helpful to reference something external, but it feels like it would be pretty useful to be able to use a browser as a limited server for static files, and it would be relatively simple and intuitive compared to needing to have something in the files themselves indicate the intended semantics.
But! It is still very open (you complain about audio and JavaScript, but they both work). Also, I've found that it isn't really much of a problem to start a python/node webserver for the things I'm not allowed to do. Literally a fraction of a second. Just set up a "webserver_here" command in your terminal, or keep one running constantly. Also, I'm starting to fear local HTML more and more; I'd actually be happier with a much stricter boundary.
Hyperclay has given me some ideas. What I want is something like [3] but that the user only needs to install once. One electron app that can load our mini-apps.
[1] - https://news.ycombinator.com/item?id=44930814
To be honest, I wanted this for myself and felt guilty not making something more serious out of it since I liked the idea.
I have a shell script that does these steps - including opening the browser with the target url. I use it on a regular basis.
But the src-included files must be in the same directory as the root html file (or a descendant directory)
I used this in my macOS app Pocket Log to output a local html audio log (https://enzom.dev).
Is it a lot of words to talk about localStorage? How exactly are the changes persisted to the HTML file? Is it using the FileSystem API to overwrite the previous HTML file? How can they implement it seamlessly for the user without them having to choose the proper file in the "Save As.." dialog?
1. Hosted: You get a bunch of "HTML Apps" that persist themselves by calling their own /save endpoint. We grab the HTML and overwrite their-app-name.html, making a backup/version along the way. (Each user can edit their own app only, but they can also enable signups so that other people can fork their app. We also have plans to allow them to ship optional updates to forked apps.)
2. Local: You download the open-source Hyperclay Local [0] and you can have your own personal, local HTML apps that also call the /save endpoint and make backups. You're also free to extract the core code from this to host your own personally malleable apps on your own server (just implement some kind of auth)
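Conceptually, the client side of a save is tiny; a sketch (the real endpoint does more around versioning and auth):

    async function save() {
      await fetch('/save', {
        method: 'POST',
        headers: { 'Content-Type': 'text/html' },
        body: '<!DOCTYPE html>\n' + document.documentElement.outerHTML,
      });
    }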
https://developer.mozilla.org/en-US/docs/Web/API/MutationObs...
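i.e. presumably the "whenever the page changes" half: watch the DOM and debounce a save. A sketch, with saveDocument() left illustrative:

    let timer;
    new MutationObserver(() => {
      clearTimeout(timer);
      timer = setTimeout(saveDocument, 500); // save 500ms after the last change
    }).observe(document.body, {
      subtree: true,
      childList: true,
      attributes: true,
      characterData: true,
    });

    function saveDocument() {
      // e.g. POST document.documentElement.outerHTML to a /save endpoint
    }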
AFAICT this is another cycle of: a decent system becoming overcomplicated because someone wanted to make it multi-tenant, and the re-discovery that 90% of the "improvements/advancements" are essentially bloat in the context & freedom you find/create.
> It would be a great to ignore all the noise of modern web dev and just build the experience I want
sandwiched in between meme images that intersperse short bursts of text, as if the reader needs constant distraction from the act of reading.
The experience I want is a short prose description up front, backstory that flows, and diagrams only where they actually illustrate a concept that needs it.
We lost editing for two reasons:
1) The HTTP PUT method didn't exist yet, so edited HTML files could only be saved locally.
2) Mosaic built a cross-platform web browser that defined what the WWW was for 99% of users, and they didn't include editing because that would have been too complex to build from scratch in their multi-platform code base.
But also, it's a distinctly different answer for each page to build its own toolkit for the user (Hyperclay) vs TBL's read-write web. The user-agent ought, imo, to afford standard tools that work across web pages and extend the user's agency whatever site they are visiting.
https://www.w3.org/groups/wg/lws/
That would likely have some overlap.
If you were to have an accepted w3c proposal and working implementation in local browser forks, you could potentially chat with the browser teams to add the experimental feature first through a flag users would manually have to turn on, and then later potentially it could get integrated.
Isn't that basically Wikipedia? I can't imagine a much simpler system that could work at modern web scale.
Saying "we don't need this because Wikipedia already solved it" is kind of like saying in 1976: "Nobody needs the Apple II, we already have IBM mainframes that have solved every useful problem in computing much better."
I think it was not just an appealing idea but Amaya itself was a solid implementation for a "testbed" (again, their words).
I can see why it died but I still think it is a bit of a shame it did.
> It supports HTML 4.01, XHTML 1.0, XHTML Basic, XHTML 1.1, HTTP 1.1, MathML 2.0, many CSS 2 features, and SVG.
Perfect. Doesn't need anything else.
Imagine having a nice UNIX workstation on your desk at a university. This would resolve to machine.department.university.ac.uk rather than be hidden behind a router. If you wanted then you could run an x window on it or transfer files to and from it.
With standard-issue Netscape of the era you could save an HTML file locally and it would be fully accessible anywhere on the web.
The university would have the skills to setup the network for this, which was a difficult skill at the time.
In reality you would not save everything locally. The main department server would have a network share for you, mounted with NFS. So your files would be 'saved locally' over the NFS share, resolving to department.university.ac.uk/user.
You could also login to any workstation in the department for it to mount your NFS shares, with PCs of the era usually capable of running x windows and NFS in the university setting.
Servers physically existed in the server room rather than in the cloud.
I much preferred this model as, on SGI workstations, you had it all working out of the box. All you needed was some help with the networking.
Also important is that the web was originally all about structuring information rather than hacking it to make it pretty. It was up to the client to sort out the presentation, not the server.
In time we lost the ability to structure information in part because the 'section' element did not make it into HTML. Everyone wanted the WYSIWYG model that worked with word processors, where it was all about how it looked rather than how it worked.
We proceeded to make HTML that started as a soup of nested tables before the responsive design people came along and made that a soup of nested div elements.
Eventually we got article, section and other elements to structure information, but, by then it was too late.
It is easy to design something that is incredibly complicated and only understood by a few. It is far harder to design something that is simple and can be understood by the many.
We definitely lost the way with this. Nowadays people only care about platforms such as social media; nobody is writing HTML the TBL way, with everything kept simple. HTML has become a specialist skill with lots of pigeonholed sub-specialities. Nobody on these teams of specialists has read the HTML spec, and no human-readable HTML gets written, even though this is fully doable.
It seems that you are one of the few that understands the original context of HTML.
Ummmm all the browsers I know of are also editors... Are there any that aren't?
Edit - does no one use dev tools anymore? No HTML? No vanilla JS and CSS? Everyone just using TS, React and gluing things together? Like, you literally have an entire IDE in your browser (assuming you use anything derived from Chrome, Firefox or Safari) that can code a web page live...
Also Chrome does have stuff like SSH extensions.
That being said, some of the computing paradigms of the 80's and early 90's were very cool and I wish they caught on... Lisp machines, Smalltalk, early web ideas were interesting...
As a sidenote, does manipulating forms count as editing?
Like, Netscape Composer came OUT of Navigator...
There were other browser "dev tools" before firebug.
https://www.otsukare.info/2020/08/06/browser-devtools-timeli...
Then the save button downloads document.documentElement.outerHTML with line 2 replaced by the current state. No server required.
https://github.com/mcteamster/white/blob/main/src/lib/data.t...
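A sketch of that no-server save (the linked code also splices current state back into a specific line of the source; that part is omitted here):

    function downloadSelf() {
      const html = '<!DOCTYPE html>\n' + document.documentElement.outerHTML;
      const blob = new Blob([html], { type: 'text/html' });
      const a = document.createElement('a');
      a.href = URL.createObjectURL(blob);
      a.download = 'index.html'; // user saves it back over the original file
      a.click();
      URL.revokeObjectURL(a.href);
    }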
After a quick look at the site, I like the idea. But I wonder where its limitations start to get in the way.
How about security: if I can modify the page, who else can? And who controls that?
How much code and logic does it handle before getting difficult to maintain? And how much data?
If I make a useful app with it, say to track beers, can I share the app with other people so they can track their own beers, without sharing my personal data?
2. Who can modify: You can modify any app you create. You can also "enable signups", which allows other users to easily fork your app, but they all trace back to your source app. We're making a plan right now where you can ship updates to forked apps.
3. Difficult to maintain: Pieter Levels (of NomadList) famously codes his $60k/month apps in single index.php files, so I suppose it matters how you organize your code and what level of navigating-through-the-cruft you're comfortable with.
4. Other people can fork your app and track their own beers. We also want to integrate collaboration features, so 2 people can have control over the same page simultaneously, but for now it's best for single-user apps.
So now we have a webapp. The webapp connects to a backend. The webapp stores changes in the backend. The webapp loads changes from the backend.
The original problem still persists?! State can be stored in the browser with localStorage, or on the device with file access. That is about it. Across devices, you need online storage and an access key.
I feel like this was a writeup on the problem with vanilla HTML app development.
Also, can we all agree that calling a webpage an app is an affront to all webpages? We should have two categories: webpages and webapps. A webpage can contain a webapp. A webapp is anything where the content changes and where it connects to a backend to store or load data. So interactive HTML stuff is not a webapp, but JS literally changing the data makes it a webapp; even a simple HTML page with a JS clock makes that portion of the page a webapp.
Only similar in the vaguest sense.
I love the idea of a local-first Hyperclay. Offline editing is one of the pillars of personal software and I'd like to head in that direction.
Would you be open to hopping on a video call at some point? I'd love to compare notes.
But the ultimate goal is to have an ecosystem where you can host/deploy/use HTML apps, including other competing services.
But in my experience the potential audience shrinks significantly once anything git related is expected from a user.
I've been thinking for a while that the web really suffers from not having a built-in concept of (ideally fairly anonymous) identity. I shouldn't need to maintain a whole authentication system and a database full of PII just to let you see the same data across your laptop and your phone...
The browser should be able to vend me an (opaque, anonymous) token that identifies you as an individual. If your mobile and desktop browser vend the same token, then the website sees you as having the same identity on both platforms.
And it's a poor enough solution that we have to build extra layers around it (for example, Apple's auth "login with Apple ID", which lets you hide your real email address behind an anonymous relay)
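To sketch the wished-for API (purely hypothetical — nothing like navigator.identity exists in any browser today):

    // Hypothetical: an opaque, per-site token, stable across the user's devices
    const token = await navigator.identity.requestToken({ site: location.origin });

    // The site keys its data on the token instead of an account full of PII:
    await fetch('/notes', { headers: { 'X-Anon-Identity': token } });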
Do we need a story with illustrations to understand how a new framework works? What's the plain-markdown two-to-three-paragraph explanation of the concept?
Edit: here it is. https://docs.hyperclay.com/docs/docs-tldr-paste-in-llm/#how-...
> Whenever the page changes—or the user explicitly saves the page—we grab all the HTML, make a few modifications, and then POST it to the backend’s “save” endpoint.
Wait, so instead of storing JSON we store HTML, with all its verbosity and all its tags that have nothing to do with the user edit (e.g. a small profile description change)? What about if the webmaster then wants to change the HTML title of the profile description block? The user's version just diverged from the webmaster's?
Yes. In exchange, we get a portable, malleable, self-contained application. That's the tradeoff.
> What about if the webmaster then wants to change the HTML title
1. The webmaster owns my-app.hyperclay.com (or somecustomdomain.com).
2. The user forks their version and gets user-version.hyperclay.com (or user-version.somecustomdomain.com).
You need to fork before editing. In the future, we'll have support for shipping updates to forked applications that can be accepted or denied by the end users.
I wish I could change the name from Hyperclay to TiddlyApp :)
The trick TiddlyWiki does with data URLs (IIRC?) (https://tiddlywiki.com/#Saving%20with%20the%20HTML5%20saver) seems pretty close to me.
I was actually just playing with a similar concept, except as an Obsidian plugin: https://bsky.app/profile/ezhik.jp/post/3lwoazfypx22j
I've done something similar using IPFS for persistence [1]. The format is different - it's a tutorial starting out with the dev tools that guides you to build an editor + site.
I wish that IPFS was more usable and mature as a general web technology! I've considered trying this with Git as the backend, but I haven't done much research into what the tradeoffs or complications would be.
[1] - https://www.thebacklog.net/projects/agregore-web-apps/
I'm thinking about the Kanban board example I saw in the demo video. It looks like column re-ordering wasn't supported yet. What if I fork the app, put time into creating my Kanban, then I want to update to a new version that supports column re-ordering?
source: https://github.com/est/gitweets/
demo: https://f.est.im/ (renders via github API)
Basically it renders its own git repo's commit history as a feed timeline.
Never lose your data on microblogging service providers again! Clone the repo anywhere anytime, make tweets by "git commit --allow-empty".
Everything is placed inside a single HTML file.
If it was marketed to startups as a way to reduce development overhead by eliminating separate build pipelines or servers, letting small teams focus on core features rather than infrastructure, it'd be a definite winner. Not sure how much you're into AWS, but this made me think of an AWS partner we use, HostJane (mainly tech startup clients - https://www.hostjane.com), which could use Hyperclay to bundle cost-optimized spot instances for testing, then push out seamless CI/CD via CodePipeline and global distribution through CloudFront. Hyperclay could enable clients to scale from proof-of-concept to production affordably while maintaining full control over their app's evolution without vendor lock-in. Potentially doing away with complex databases or backend frameworks... amazing, well done!
I recently (as an experiment) exclusively vibe-coded an Asteroids clone (with a couple of nifty additions), all in a single HTML file, including a unit test suite, which also works on either desktop or mobile: https://github.com/pmarreck/vibesteroids/blob/yolo/docs/inde...
Playable version (deployed via github docs) is here: https://pmarreck.github.io/vibesteroids/ Hit Esc to pause and see instructions (pause should be automatic on mobile; can re-pause by tapping top center area).
Type shift-B (or shake mobile device... you also would have had to approve its ability to sense that on iOS) to activate the "secret" ability (1 per life)
No enemy UFOs to shoot (yet), but the pace does quicken on each level, which feels fun.
It doesn't update itself, however... (and I just noticed it has a test fail, and a test rendering bug... LOL, well at least the tests are valid!)
Are there any open source alternatives like this? First time I hear about this idea. However, I can imagine it wouldn't take much effort to implement the basics. Chromium even has a design mode you can activate by typing `document.designMode='on'` in the console. Then you would just need to write a little javascript that handles auth, a save button, and a backend to persist the altered html.
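A sketch of those basics (#save, /save, and the missing auth are all illustrative):

    // Make the whole page editable in place:
    document.designMode = 'on';

    // A save button that ships the edited DOM to a small backend:
    document.querySelector('#save').addEventListener('click', () => {
      fetch('/save', {
        method: 'POST',
        headers: { 'Content-Type': 'text/html' },
        body: document.documentElement.outerHTML,
      });
    });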
Your ideal customer a) is extremely technically proficient, such that they are even capable of finding this in the first place, and their brain doesn't glaze over at "jQuery is Your Starting Point" - the opening line of your docs. b) They for some reason would rather pay for someone else to do the world's easiest hosting job and deal with whatever baggage and limitations come with this.
Or am I misunderstanding? Like it's a nodejs server on some aws box. Charging people for this is fine, but not allowing them to do it themselves seems... ridiculous?
You gotta eat, I know, but I'm wondering who it is that is ok paying for someone else to do the easiest part of what they do for a living.
I don't think that hosting is necessarily "part of what they do for a living" for people who write the code.
The `/save` endpoint looks almost trivial. Knocking up a mimic wouldn't take much. The client libs will be interesting, but from the looks of things they're not quite there yet.
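A sketch of such a mimic in Express, assuming the endpoint just takes raw HTML (auth and path validation deliberately omitted — don't run this as-is):

    import express from 'express';
    import fs from 'node:fs/promises';

    const app = express();
    app.use(express.static('.')); // serve the HTML apps themselves

    // Take the page's own HTML, keep a timestamped backup, overwrite the file
    app.post('/save/:name', express.text({ type: 'text/html', limit: '10mb' }), async (req, res) => {
      const file = `${req.params.name}.html`;
      await fs.copyFile(file, `${file}.${Date.now()}.bak`).catch(() => {}); // first save has no backup
      await fs.writeFile(file, req.body);
      res.sendStatus(204);
    });

    app.listen(3000);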
I really liked this when it launched and thought it had a great deal of potential. I think the main difference is that it's more focused on content-editing, not updating the code of the page itself.
I have a feeling that a lot of these little tools people make with low-code vibe AI apps do not require more than a single HTML page with JS imports.
(I also suspect that there is a ton of duplication in what people create, but, of course, I have no data to back it up.)
Pricing page returns a 404 as of now, though.
I built something along those lines as well. My problem was that I couldn't synchronize localStorage data, so I made htmlsync.io.
Any suggestions how to overcome this? I believe it's a security setting, not allowing localStorage to be set.
I've imagined our internal claim cases to be standalone html pages, making them easily versioned for when new regulations come.
Simplicity is a good goal to have, and these guys have it.
If it does this, then it is very exciting indeed, but I could not see it.
I experimented a bit with the concept to build my resume; I had naively been using Gatsby a while back. "Just use a plain HTML file!" - it was a real "duh!" moment. You just edit the embedded markdown and voilà. Want a PDF? Press Cmd+P.
Webservers are a pain in the ass and a legitimate barrier to entry. Wouldn't it be great if you could literally send around a single file, especially to non-technical users who can't run a web server, to run your apps?
Problems with the approach:
* Customers still ask you why stuff doesn't work, except now it's their hardware
* Observability is not as strong on self-hosted solutions
* There is no true universal binary, so no matter what you have to put constraints up (only runs on Windows, Mac, etc.)
* Updates are way harder
Browsers are built to browse the web; trying to take the web out of browser apps seems illogical
*The tech: Hyperclay is a NodeJS server and frontend JS library*
So... it's not just an HTML file.
ps. Way back there was a project called TiddlyWiki that was a self-modifying HTML file and was pretty popular. I think initially there was no way to save a new version except to save the file back to disk, either forking it or replacing itself.
1) it says it’s a vanilla HTML file but it needs a server running
2) it’s just a landing page that collects your email to give you “early access”.
I really wonder what this is doing on the home page.
And I'm out!
Honestly I was nodding along with the landing page. But including a dependency to an old timey JS library which is now largely unnecessary given new native web APIs? That seems contrary to the spirit of this project.
Now I am wondering whether the same thing could be achieved with just nginx & WebDAV?
(but seriously, very cool)
The playground has lots of unique features, CLI agent integration for AI to let you collaborate with Claude Code or Codex CLI, etc. You can deploy straight to https://RTEdge.net
The playground has HTML I ↔ O iframe with mostly reload-free 2-way IO sync, built-in service/edge workers with URL imports of ESM, and a whole lot more. You can see some examples under Code > New…
Here is a recent one for accessible multi-OS keyboard shortcuts: https://kbd.rt.ht/?io