Alas it looks like it's web/electron based. :/ Downloading it and yep, 443.8 MB on MacOS. The Linux one is a bit better at 183.3 MB.
Electron really should get a kickback from disk manufacturers! ;)
Shameless plug: I've been working on an HTML-inspired lightweight UI toolkit because I'm convinced we can make these sorts of apps and they should only be ~10-20 MB [1], with nice syntax, animation, theming, etc. I'm finally making a suite of widgets. Maybe I can make a basic clone of this! Bet I could get it in < 10 MB. :)
python -m http.server
But variations exist for a lot of languages. PHP has one built in too.
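For instance, PHP's built-in development server (the same one-liner shows up again further down the thread):

php -S localhost:8080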
#!/usr/bin/env bash
set -e; [[ $TRACE ]] && set -x

port=8080
dir="."

if [[ "$1" == "-h" || "$1" == "--help" ]]; then
  echo "usage: http-server [PORT] [DIRECTORY]"
  echo "  PORT       Port to listen on (default: 8080)"
  echo "  DIRECTORY  Directory to serve (default: .)"
  exit 0
fi

if [[ -n "$1" ]]; then
  port=$1
fi

if [[ -n "$2" ]]; then
  dir=$2
fi

python3 -m http.server --directory "$dir" --protocol HTTP/1.1 "$port"
That whole script could just be the last line! Maybe you could add defaults like
"${port:-8080}"
port="${1:-8080}"
dir="${2:-.}"
python3 -m http.server "${1:-8080}" --directory "${2:-.}"
python3 -mhttp.server "${1:-8080}" -d "${2:-.}"
#!/bin/bash
while :; do nc -l 80 < index.html; done
HTTP/0.9 web browser sends:
GET /
Netcat sends: <!doctype html>
...
Nowadays a browser will send `GET / HTTP/1.1` and then a bunch of headers, which a true HTTP/0.9 server may be able to filter out and ignore, but of course this script will just send the document and the browser will still assume it's a legacy server. (I'm sure if I dug into the http.server documentation I could find all those options too.)
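A slightly friendlier sketch (untested; traditional netcat wants `-l -p 80` instead of `-l 80`, and binding port 80 needs root) prepends a minimal HTTP/1.0 status line so the browser isn't left guessing:

while :; do { printf 'HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n'; cat index.html; } | nc -l 80; done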
The Sovereign Tech Agency, under the Federal Ministry for Economic Affairs and Climate Action, funds OpenJS, specifically for improving the state of open source in JavaScript.
Electron is now part of OpenJS.
I thought everyone tried this? It speaks HTML, and tossing in the few things the spec requires is peanuts to an LLM.
For me this contradicts the claim of being simple. As opposed to this:
python -m http.server 8080
I recently explored both Tauri [1] and Wails [2]. Wails especially is a lot of fun. The simplicity of Go paired with the fast iteration I get from using React just feels awesome. And the final application is ~10 MB in size!
It's almost as if web crap is optimizing for developer experience at the expense of users.
Just like many native apps will also be horrible in terms of UX. Good apps are good. And I believe that it's entirely possible to build an amazing app with Electron.
Although not everyone might agree, IMO VSCode is a great example of that.
Personally though I'm just greedy. I want the best of Qt and Electron. Figuro is my attempt at realizing that. ;)
https://x.com/daniel_nguyenx/status/1734495508746702936
Further discussion can be found here: https://www.macstories.net/linked/is-electron-really-that-ba... and in the linked video.
You say at the expense of users, but when even Apple doesn't go all native, it's telling.
For what it's worth, my Figuro library does pretty well for live-updating text and scrolling! And I haven't even optimized layouts yet; it currently re-lays out the entire tree every frame.
macOS: too busy rewriting it in SwiftUI before Apple pulls the plug on Obj-C.
Windows: https://old.reddit.com/r/Windows10/comments/o1x183/
I use one that is a 99K static binary.
Even the full-featured TLS/HTTPS forward proxy I use, linked with bloated OpenSSL, is still less than 10MB static binary: 8.7MB. When linked to WolfSSL it's only 4.6MB static binary. The proxy can serve small, static HTML pages, preloading them into memory at startup.
It's not only easy: it runs (or ran) huge sites in production.
I want to try adding a JavaScript runtime with a simple DOM built on Figuro nodes instead. But there are some issues with Nim's memory management and QuickJS.
server {
    listen 80;
    server_name ~^(?<sub>\w+)(\.|-)(?<port>\d+).*;  # projectx-20201-127-0-0-1.nip.io
    root sites/$sub/public_html;
    try_files $uri @backend;

    location @backend {
        proxy_pass http://127.0.0.1:$port;
        access_log logs/$sub.access;
    }
}
Configuration is done via the domain name, like projectx-20205-127-0-0-1.nip.io, which specifies the directory and port. All you need to do is create a junction (mklink /J domain folder_path), which maps the domain to a folder.
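A hedged sketch of the workflow (Windows cmd; the folder path is hypothetical, and the target is assumed to contain the public_html that the config's root points at):

mklink /J sites\projectx C:\projects\projectx
:: now http://projectx-20201-127-0-0-1.nip.io/ serves sites\projectx\public_html, with misses proxied to 127.0.0.1:20201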
I think proxy_pass will forward traffic even when the root and try_files directives fail because the junction/symlink doesn't exist? And "listen 80" binds on all interfaces, doesn't it, not just on localhost?
Is this clever? Sure. But this is also the thing you forget about in 6 months and then when you install any app that has a localhost web management interface (like syncthing) you've accidentally exposed your entire computer including your ssh keys to the internet.
server_name ~^(?<service>(?:lubelogger|wiki|kibana|zabbix|mail|grafana|git|books|zm))\.domain\.example$;

location / {
    resolver 127.0.0.1;
    include proxy.conf;
    proxy_set_header Authorization "";
    proxy_set_header Host $service.internal;
    proxy_set_header Origin http://$service.internal;
    proxy_redirect http://$proxy_host/ /;
    proxy_pass http://$service.internal;
}
Basically, any regex-matched subdomain is extracted, resolved as $service.internal, and proxy-passed to it. For this to work, any new service of course has to be registered in internal DNS. Adding whitelisted IPs and basic auth is also a good idea (which I have, just removed from the example).

I've been using SimpleWebServer there for six years or so and it just works.
GNOME does this; you can develop apps in TypeScript.
But they started to migrate some of their own apps to TypeScript and immediately received backlash from the community [0]. Although granted, the Phoronix forums can be quite toxic.
My observation is that there is just a big disconnect between younger devs who just want to get the job done and the more old-school community that cares about resource efficiency. Both are good intentions that I can understand, but they clash badly. This unfortunately hinders progress on this point.
[0] https://www.phoronix.com/forums/forum/phoronix/latest-phoron...
I agree that this is, at least often, a case of where your roots lie. What's most shocking to me is that the likes of Apple and Microsoft don't seem to be interested in, or capable of, building an actually good framework.
I feel like Microsoft tried with .NET MAUI, but that really isn't a viable choice if you go by developer sentiment.
I come from an async/lock-free C++ and then Rust background, but I'm using TypeScript quite a bit these days. Rust is data-race free because of the borrow checker. Swift async actors are too, by construction (similar to other actor-based frameworks like Orleans). TypeScript is trivially data-race free (only one thread). Very few other popular languages provide that level of safety these days. Golang certainly does not.
I was benchmarking some single-threaded WASM Rust code and couldn't figure out why half the runs were 2x slower than the other half. It turns out I was accidentally running native half the time, so those runs were faster. I'm shocked the single-core performance difference is so low.
Anyway, as bad as JavaScript used to be, TypeScript is actually a nice language with a decent ecosystem. Like Rust and C++, its type system is a joy to work with, and it greatly improves productivity vs. languages like Java, C#, etc.
I have been coding since 1986, nowadays most of the UIs I get paid to work on are Web based, yet when I want to have fun in side projects I only code for native UIs, if a GUI is needed.
Want to code like VB and Delphi? Plenty of options are available, and yes, they do scalable layouts, just like they already did back in the 1990s, for anyone who actually bothered to read the programming manuals.
The big player these days seems to be web-based (Electron and friends), though the JVM stack with a native theme for Win/Mac is certainly usable in an environment where you can rely on Java being around.
I think the best option would be some kind of cross-application client-side HTML etc. renderer that apps could use for their user interaction. We could call it a "browser". That avoids the problem of 10 copies of the whole electron stack for 10 apps.
Years ago, Microsoft had their own version of this called HTA (HTML Application), where you could delegate UI to the built-in browser (IE) and get native-looking controls. Something like that but cross-platform would be nice, especially as one motivation for this project is that Chrome apps are no longer supported, so "Web Server for Chrome" is going away. So the "like Electron but most of the overhead is handled by Chrome" option is actively being discontinued.
I think Tauri is trying to go for this: a web app without the whole of Chromium bundled, using a native web view instead.
You want to write separate versions for MacOS, Linux, and Windows Visual .NET#++ and maintain 3 separate source trees in 3 languages and sync all their features and deal with every bug 3 times?
That would certainly make this more useful than `python3 -m http.server`.
There does seem to be a weird limitation that you can't enable both HTTP and HTTPS on the same port for some reason. That should be easy enough to code a fix for though.
Do any real web servers support this?
It's the same transport (TCP, assuming something like HTTP/1.1), and trying to mix HTTP and HTTPS seems like a difficult thing to do correctly and securely.
Doing anything other than disconnecting or returning an error seems like a bad idea though.
The basic difference between SMTP and HTTP in this context is that email addresses do not contain enough information for the client to know whether it should be expecting encrypted transport or not (hence MTA-STS and SMTP/DANE [1]), so you need to negotiate it with STARTTLS or the like, whereas the https URL scheme tells the client to expect TLS, so there is no need to negotiate, you can just start in with the TLS ClientHello.
In general, it would be inadvisable at this point to try to switch hit between HTTP and HTTPS based on the initial packets from the client, because then you would need to ensure that there was no ambiguity. We use this trick to multiplex DTLS/SRTP/STUN and it's somewhat tricky to get right [2] and places limitations on what code points you can assign later. If you wanted to port multiplex, it would be better to do something like HTTP Upgrade, but at this point port 443 is so entrenched, that it's hard to see people changing.
[0] https://www.rfc-editor.org/rfc/rfc7230#section-6.7. [1] https://datatracker.ietf.org/doc/html/rfc8461 https://datatracker.ietf.org/doc/html/rfc7672 [2] https://datatracker.ietf.org/doc/html/rfc7983
Exactly my original point. If you really understand the protocols, there is probably zero ambiguity (I'm assuming here). But with essentially nothing to gain from supporting this, it's obvious to me that any minor risk outweighs the (lack of) benefits.
In general, the way that works is: the user navigates to http://contoso.com, which implicitly uses port 80. The Contoso server/CDN listening on port 80 redirects them, through whatever means, to https://contoso.com, which implicitly uses 443.
I don't see the value in both being on the same port. Why would I ever want to support this when the http: or https: scheme essentially defines the default port?
Now of course someone could go to http://contoso.com:443, but WHY would they do this? Again, I'm failing to see a reason for this.
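Roughly, with curl (contoso.com standing in as the placeholder domain; the exact status code and Location header are whatever the server/CDN chooses):

curl -sI http://contoso.com/    # port 80: typically answered with a 301/302 and "Location: https://contoso.com/"
curl -sI https://contoso.com/   # port 443: the actual site over TLS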
The "why/value" is usually in clearly handling accidents in hardcoding connection info, particularly for local API/webdev environments where you might pass connection information as an object/list of parameters rather than normal user focused browser URL bar entry. The upside is a connection error can be a bit more opaque than an explicit 400 or 302 saying what happened or where to go instead. That's the entire reason webservers tend to respond with an HTTP 400 in such scenarios in the first place.
Like I said though, once I remembered this was more a "hacky" type solution to give an error than built-in protocol upgrade functionality, I'm not so sure the small amount of juice would actually be worth the relatively complicated squeeze for such a tool anymore.
These days just using `caddy` might be easier though.
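For instance (a sketch; check `caddy file-server --help` for the exact flags in your version):

caddy file-server --root . --listen :8080 --browse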
cargo install --locked miniserve
miniserve path/to/directory
[1]: https://github.com/svenstaro/miniserve

php -S localhost:8080
Many helpful options are available, just check the docs...
bun index.html
and even if you do want something dynamic, a lot of dynamic features can be had by embedding custom tags in html. still no code required.
if you have concerns about pike as a language for the server implementation itself, pike is a very performant language with a long history of being used in high-profile sites and services. both pike and roxen go back to the early 90s.
and if you do want to create custom features and need help you can hire me. i am looking for work (pike,js,ts,python,php,ruby,go,... ;-)
Anyway, Web UI != easy to administer.
docker run --rm -p 8080:80 -v "$(pwd):/usr/share/nginx/html:ro" nginx:alpine
will host the current directory on 8080.

Python 2 had a similar function (`python -m SimpleHTTPServer`). I know there's a Python 3 equivalent, but I don't have it memorized.
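For reference (the Python 3 form appears earlier in the thread):

python -m SimpleHTTPServer 8080   # Python 2
python3 -m http.server 8080       # Python 3 equivalent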
and I used this. Though I would prefer a way to keep downloading from where it left off, because this method isn't reliable for a 40-gig transfer.
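One hedged option, assuming the server honours Range requests (Python's http.server doesn't, as far as I know, so you'd need something like nginx or caddy in front); the host and filename here are made up:

wget -c http://192.168.1.10:8080/big-file.iso       # -c resumes a partial download
curl -C - -O http://192.168.1.10:8080/big-file.iso  # -C - continues from where the local file ends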
I am also wondering about this comment in the gist that olejorgenb linked (gist.github.com/willurd/5720255):
Limiting request to a certain interface (eg.: do not allow request from the network)
python -m http.server --bind 127.0.0.1
Like, what does that really do? Maybe it's also nice, IDK?
I usually do the opposite - 0.0.0.0 - which allows connections from any device.
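Concretely (both use only documented http.server options):

python3 -m http.server --bind 127.0.0.1 8080   # only reachable from this machine
python3 -m http.server --bind 0.0.0.0 8080     # reachable from other devices on the network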
mini_httpd -p 8080 # and no need to install a whole python interpreter