> Are our tools just worse now? Was early 2000s PHP actually good?
Not sure how rhetorical that was, but of course? PHP is a super efficient language that is tailor-made for writing dynamic web sites, unlike Go. The author mentions a couple of the features that made the original version easier to write and easier to maintain; they are built for the use case, like $_GET.
And if something like a template engine is needed, like it will be if the project is a little bit bigger, then PHP supports that just fine.
> Max didn't need a request router, he just put his PHP file at the right place on the disk.
The tendency to abstract code away leads to complexity, while a genuinely useful abstraction minimizes complexity. Here, the placement of PHP files on disk makes things easier, so it's a good abstraction.
And that's why the original code is so much better.
This also elides a bit of complexity; if I assume I already have the Nginx and gunicorn process then my Python web server isn’t much worse. (Back in the day, LAMP stack used Apache.)
I’ll for sure grant the templating and web serving language features though.
> To be perfectly honest, as a teenager I never thought Max was all that great at programming. I thought his style was overly-simplistic. I thought he just didn't know any better. But 15 years on, I now see that the simplicity that I dismissed as naive was actually what made his code great.
They’ve all been solved 100x over by founders who’ve been funded on this site. It used to make sense to have a directory or cgi-bin of helpful scripts. Now it only makes sense as a bit of nostalgia.
I miss the days when we had less, could get less done in a day… but felt more ownership over it. Those days are gone.
I’m kind of getting tired of software made by “founders,” who are just looking to monetize me and get their exit, as opposed to software written by normal users just wanting to be useful. I know I’m on the wrong website for this, but the world could use fewer “founders” trying to turn my eyeballs and attention into revenue.
> They’ve all been solved 100x over by founders who’ve been funded on this site. It used to make sense to have a directory or cgi-bin of helpful scripts. Now it only makes sense as a bit of nostalgia.
Why does it make more sense to learn the syntax for someone else's helper scripts than to roll my own, if the latter is as easy or easier, and afterwards I know how to solve the problem myself?
That's true, but it was also true before. To the extent that solving a problem to learn the details of solving it was ever worthwhile, which I think is and was quite a lot, I'd say it's still true now, even though there are lots of almost-but-not-quite solutions out there. That doesn't mean that you should solve all problems on your own, but I think you also shouldn't always use someone else's solution.
But they're personal itches, not productizable itches. The joy is still there, though.
I don't know for sure what the problem was (I have my theories), or why we never got to a place where most people build their own custom products.
User interfaces became more user-friendly [0], while developer experience - though simpler in many ways - also became more complex, to handle the complex demands of modern software while maintaining a smooth user experience. In isolation both of these things make sense. But taken together, it means that instead of developer and user experience converging into a middle place where tools are a bit easier to learn and interfaces a bit more involved, they've diverged further, to where all the cognitive load is placed on the development side and the user expects an entirely frictionless experience.
Specialization is at the core of our big interconnected society, so it's not a surprising outcome if you look at the past century or two of civilization. But at the same time I think there's something lost when roles become too segregated. In the same way homesteading has its own niche popularity, I believe there's a latent demand for digital homesteading too; we see its fringes in the slow rise of things like Neocities, the indie web, and open source software over the past few years.
Personally I think we just have yet to see the 'killer app' for digital homesteading, some sort of central pillar or set of principles to grow around. The (small) web is the closest we have at the moment, but it carries a lot of technical baggage with it, too much to be able to walk the fine line needed between approachability and flexibility.
Anyway, that's enough rambling for now. I'll save the rest for a blog post.
[0] user-friendly as in being able to use it without learning anything first; not that that's necessarily in the user's best interest
I certainly consider it a good idea, now that it has come to mind, and it will work very well.
It's not like this person was ever going to pay someone to make a cartoon drawing so nobody lost their livelihood over it. Seems like a harmless visual identifier (that helps you remember if you read the article if you stumble across it again later).
Is it really such a bad thing when people use generative AI for fun or for their hobbies? This isn't the New York Times.
When the project becomes more complex, things change for the worse.
Also, you need to protect modules not only from errors, but from the other programmers in your team.
Re: hardening - I guess I deployed a lot of "insecure" LAMP-style boxes. My experience, mainly w/ Fedora Core and then CentOS, was to turn off all unnecessary services, apply security updates, limit inbound and outbound connectivity to only the bare minimum necessary w/ iptables, make sure only public key auth was configured for SSH, and make sure no default passwords or accounts were enabled. Depending on the application grubbing thru SELinux logs and adjusting labels might be necessary. I don't recall what tweaks there were on the default Apache or PHP configs, but I'm sure there were some (not allowing overrides thru .htaccess files in user-writeable directories, making sure PHP error messages weren't returned to clients, not allowing directory listings in directories without a default document, etc).
Everything else was in the application and whatever stupidity it had (world-writeable directories in shitty PHP apps, etc). That was always case-by-case.
It didn't strike me as a horribly difficult thing to be better-than-average in security posture. I'm sure I was missing a lot of obvious stuff, in retrospect, but I think I had the basics covered.
Today many devs (and not programmers) are always suspicious, and terrified of the potential of something going wrong because someone will point a finger, even if the error is harmless or improbable.
My experience is that many modern devs are incapable of assigning significance or probabilities; they are usually not creative, fearful of "not using best practices", and do not take the human aspect of software into consideration.
My 2 cents
The end state of running 15-year-old unmaintained PHP is that you accumulate webshells on your server, or it gets wiped. Or you just lose it or forget about it, or the server stops running, because the same dev practices that got you the PHP mean you probably don't bother with things like backups, config management, version control, IaC, etc. (I don't mean the author, who probably does care about those things, I just mean in general).
If these things are not a big deal (often they are not! and it's fun!) then absolutely go for it. In a non-work context I have no issues.
TBH I'm not 100% sure that either the PHP version _or_ the Go version of that code is free from RCE-style problems. I think it depends on server config (modern PHP defaults are probs fine), binary versions (like an old exiftool would bone you), OS (Windows path stuff can be surprising) and internal details about how the commands handle flags and paths. But as you point out, it probably doesn't matter.
Am I just doing the meme? :)
An unsafe string can be abused as an attack vector against your system.
There's your explanation for why it could be so simple.
I’ve said before and will say again: error handling is most of what’s hard about programming (certainly most of what’s hard about distributed systems).
I keep looking for a programming language that makes error handling a central part of the design (rather than focusing on non-error control flow of various kinds), but honestly I don’t even know what would be better than the current options (Java/Python’s exceptions, or Go’s multiple returns, or Rust’s similar-seeming Result<T, E>). I know Linus likes using goto for errors (though I think it just kind of looks like try/catch in C) but I don’t know of much else.
It would need to be the case that code that doesn’t want to handle errors (like Max’s simple website) doesn’t have any error handling code, but it’s easy to add, and common patterns (e.g. “retry this inner operation N times, maybe with backoff and jitter, and then fail this outer operation, either exiting the program or leaving unaffected parts running”) are easy to express.
https://gigamonkeys.com/book/beyond-exception-handling-condi... is a nice introduction; https://news.ycombinator.com/item?id=24867548 points to a great book about it. I believe that Smalltalk ended up using a similar system, too.
> It would need to be the case that code that doesn’t want to handle errors (like Max’s simple website) doesn’t have any error handling code, but it’s easy to add, and common patterns (e.g. “retry this inner operation N times, maybe with backoff and jitter, and then fail this outer operation, either exiting the program or leaving unaffected parts running”) are easy to express.
Lisp’s condition system can handle that! Here’s a dumb function which signals a continuable error when i ≤ 3:
(defun foo ()
  (loop for i from 0
        do (if (> i 3)
               (return (format nil "good i: ~d" i))
               (cerror "Keep going." "~d is too low" i))))
If one runs (foo) by hand then i starts at 0 and FOO signals an error; the debugger will include the option to continue, then i is 1 and FOO signals another error, and one may choose to continue again. That's good for interactive use, but kind of a pain in a program. Fortunately, there are ways to retry, and even to ignore errors completely. If one wishes to retry up to six times, one can bind a handler which invokes the CONTINUE restart:
(let ((j 0))
  (handler-bind ((error #'(lambda (c)
                            (declare (ignore c))
                            ;; only retry six times
                            (unless (> (incf j) 6)
                              (invoke-restart 'continue)))))
    (foo)))
If one wants to ignore errors, then (ignore-errors (foo)) will run and handle the error by returning two values: NIL and the first error.

That's the simplicity argument here too: sometimes we only want to write the success case, and are happy with platform defaults for error reporting. (Another thing that PHP handled out-of-the-box because its domain was so constrained; it started with strong default HTML output for error conditions that's fairly readable and useful for debugging. It's also a source of information-disclosure leaks, which is why the defaults and security best practices have shifted so much from the early days of PHP, when even phpinfo() was turned on by default and easy to run to debug some random cgi-bin server you were assigned by the hosting company that week.)
Most of the problems with try/catch aren't even really problems with that form of error handling, but with the types of the errors themselves. In C++/Java/C#/others, when an error happens we want stack traces for debugging, and stack walks are expensive and may require pulling symbol data from somewhere else, which costs even more. But that's not actually inherent to the try/catch pattern. You can throw cheaper error types. (In JS you don't have to throw the nice Error family that does stack traces; you could throw a cheap string, for instance. Python has some stack-walking tricks that keep its Exceptions somewhat cheaper and a lot lazier, because Python expects try/except to be a common flow-control idiom.)
We also know from Haskell do-notation and now async/await in so many languages (and some of Rust's syntax sugar, etc) that you can have the try/catch syntax sugar but still power it with Result/Either monads. You can have that cake and eat it, too. In JS, a Promise is a future Either<ResolvedType, RejectedType> but in an async/await function you are writing your interactions with it as "normal JS" try/catch. Both can and do coexist in the same language together, it's not really a "battle" between the two styles, the simple conceptual model of try/catch "footnotes" and the robust type system affordances of a Result/Either monad type.
(If there is a war, it's with Go doing a worst-of-both-worlds and not using a true flat-mappable monad for its return type. But then that would make try/catch easy syntax sugar to build on top of it, and that seems to be the big thing they don't want, for reasons that seem as much obstinacy as anything to me.)
There's a similar effect in transactional databases - or transactional anything. If you run into any problem, you just abort the transaction and you don't have to care about individual cleanup steps.
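In Go, for instance, that property usually shows up as `defer tx.Rollback()` on a `database/sql` transaction, which becomes a no-op once Commit succeeds, so every early return is covered automatically. The effect is easy to mimic outside a database too: buffer writes and apply them only on commit, so aborting means just dropping the buffer. A toy sketch (the `txMap` type is invented for illustration, not how any real database works):

```go
package main

import "fmt"

// txMap is a toy transactional map: writes are buffered in pending and
// applied to base only on commit. Aborting is simply dropping the txMap;
// no per-step undo logic is needed.
type txMap struct {
	base    map[string]int
	pending map[string]int
}

func begin(base map[string]int) *txMap {
	return &txMap{base: base, pending: map[string]int{}}
}

// set records a write without touching the underlying map.
func (t *txMap) set(k string, v int) { t.pending[k] = v }

// commit applies all buffered writes at once.
func (t *txMap) commit() {
	for k, v := range t.pending {
		t.base[k] = v
	}
}

func main() {
	m := map[string]int{"a": 1}
	tx := begin(m)
	tx.set("a", 2)
	// Something went wrong mid-transaction: just stop and never commit.
	fmt.Println(m["a"]) // the underlying map still holds 1
}
```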
On the second point, make errors part of the domain, and treat them as a kind of result outside the scope of the expected. Be like jazz musician Miles Davis and instead of covering up mistakes, make something wrong into something right. https://www.youtube.com/watch?v=FL4LxrN-iyw&t=183
files := r.MultipartForm.File["upload"]
for _, file := range files {
    src, err := file.Open()
    filename := fmt.Sprintf("%d%s", imgNum, filepath.Ext(file.Filename))
    dst, err := os.Create(ORIGINAL_DIR + "/" + filename)
    _, err = io.Copy(dst, src)
Hmmm... can an attacker upload a file named "../../../etc/profile.d/script.sh" or similar ideas, i.e. path traversal?

This. The hardest part of solving a problem is to think about the problem and then come up with the right solution. We actually do the opposite: we write code and then think about it.
This is how "features" get added to most Microsoft products these days :thumbsup:
I love the simplicity and some of the great tools that PHP offers out of the box. I do believe that it only works in some cases. I use Go because I need the error handling, the goroutines and the continuously running server to listen for Kafka events. But I always, always try to keep it simple, sometimes preferring a longer function over a useless abstraction that will only add more constraints. This is a great reminder to double my efforts when it comes to KISS!
Why count lines of code? Error handling is nothing to sniff at, especially in prod. Imagebin had a small handful of known users. Open it up to the world and most of the error handling in Go comes in handy.
For PHP, quite a bit was left on the shoulders of the HTTP server (e.g. routing). The final Go result is a binary which includes the server. The comparison is not fully fair, unless I'm missing something.
Knowing a figure like that, you can reason that it's too big for a single developer. Therefore, you'll likely need at least two; and maybe a few thousand marketing people to sell it.
PHP, a Real Programming Tool.
That's it. That's the story.

array_sort
sortArray
Even if you can answer that off the top of your head, consider how ridiculous it is that you needed to memorize that at some point. This is not the only example of such a thing a PHP dev needed to remember to be effective, either.
Any programming language can be wielded in a simple way. Perl, for example, is superior to PHP in every way that is important to me.
Go is as well, even though it’s slightly more verbose than PHP for the author’s imagebin tool.
We don’t do things simply, because we’ve all been taught that complexity is cool and that using new tools is better than using old tools and so on.
My employer creates pods in Kubernetes to run command line programs on a fixed schedule that could (FAR MORE SIMPLY) run in a cronjob on a utility server somewhere. And that cronjob could email us all when there is a problem. But instead I have to remember to regularly open a bookmark to an opensearch host with a stupidly complex 75-character hostname somewhere, find the index for the pod, search through all the logs for errors, and if I find any, I need to dig further to get any useful contextual information about the failure … or I could simply read an email that cron automatically delivered directly to my inbox. We stumble over ourselves to shove more things like that into Kubernetes every day, and it drives me nuts. This entire industry has lost its goddamned mind.
Yep, stay-with-the-fad pressures mean people need to farm experience using those fads.
It won't change until the industry is okay with slowing down.
I like doing things properly and almost no one else at my enormous employer does. Certainly no one on my team does, and it is extremely stressful. I feel like I am talking to a wall when I talk to my team members. No one understands. No one wants to.
But for a service with 1 user, it's fine.
Please get it running on at least PHP 8.3. Running PHP 5 or 7 on servers available to the public is negligent in 2025.
Ok, I actually cried at this part.
The example with the image sharing is pretty good, because it only needs to share images. In, shall we say, more commercial settings, it would grow to handle metadata, scaling, comments, video sharing, account management and everything in between. When that happens Max's approach breaks down.
If you keep your systems as "image sharing", "comments" and "blog" and just string them together via convention or simply hard-coded links, you can keep the simple solutions. This is at the cost of integration, but for many uses that's perfectly fine.
Edit: Oh, that Mel.
I think for a kid, Max's code was great but ultimately you do need to learn to think about things like error handling, especially if your code is intended to go into "production" (i.e., someone besides yourself will use/host it).
Lots of software projects don't have this luxury, sadly.
c'mon, you're talking about 200 LoC here. anything except BrainFuck would be maintainable at this scale.
have you ever had to fix a non-trivial third-party WordPress plugin? the whole API used to be a dumpster fire of global state and magic functions. i don't know what it is now, but 15 years ago it was a total nightmare.
Also, the image kinda looks like me. It's not me though. I don't think.
I think of "straight-line code" as a distinct sort of code. It's the sort of code that does a thing, then does the next thing, then does the next thing, and if anything fails it basically just stops and yields some kind of error because there's nothing else to do. Programmers feel like they ought to do something about it, like this is bad, but I think there's actually great value in matching the code to the task. Straight-line code is not necessarily improved by some sort of heavyweight "command" pattern implementation that abstracts it into steps, or bouncing around a dozen functions, or through many objects in some other pattern. There's a time and a place for that too; for instance, if these must be configured that may be superior. But a lot of times, if you have a straight-line task, straight-line code is truly the best solution. You have to make sure it doesn't become hairy, there are some traps, but there's also a lot of traps in a lot of the supposed "fixes", many of them that will actually bite you worse.
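A minimal Go illustration of that straight-line style (the `slugify` task is invented for the example): each step runs in order, and the first failure simply ends the whole task.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// slugify does one thing after another: trim, validate, lowercase,
// replace spaces, truncate. Any failure just stops the task with an
// error; there is no orchestration layer, and none is needed.
func slugify(title string) (string, error) {
	trimmed := strings.TrimSpace(title)
	if trimmed == "" {
		return "", errors.New("empty title")
	}
	lowered := strings.ToLower(trimmed)
	slug := strings.ReplaceAll(lowered, " ", "-")
	if len(slug) > 40 {
		slug = slug[:40]
	}
	return slug, nil
}

func main() {
	s, err := slugify("  My First Post  ")
	fmt.Println(s, err) // prints: my-first-post <nil>
}
```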
For many years now I've been banging on the drum that if you've been living solely in the dynamic scripting language world for over a decade, you might want to look back at static languages to put one in your tool belt. When the dynamic scripting languages first came out, they would routinely be estimated at using 1/10th the lines of static languages, and at the time I would have called that a pretty good estimate. However, since then, the gap has closed. In 1998, replacing a 233-line PHP script with a mere 305-line static-code replacement would have been unthinkable. And that's Go, with all its inline error-handling; an exception-based, modern static language might have been able to effectively match the PHP! Post this in the late 90s and people are going to be telling you how amazing it was that the static code didn't take over 2000 lines. This doesn't represent our tools falling behind... this represents a staggering advance! And the Go code is also likely going to be faster. Probably not in a relevant way to this user, but at scale it would be highly relevant.
A final observation is that in the early PHP era, everything worked that way. Everything functioned by being a file that represented a program on the disk corresponding to that specific path. If you want to get fancy you had a path like "/cgi-bin/my.cgi/fake/path/here" and had your "my.cgi" receive the remainder of the path as a parameter, and that was a big deal. It took the web world more-or-less a decade to get over the idea that a URL ought to literally and physically correspond to something on the disk. We didn't get rid of that because we all hate fun and convenience. We got rid of that because it produces a lot of big problems at even a medium scale and it's not a good way to structure things in general. It's not something to mourn for, it's something we've had better ways of doing now for so long that people can forget why they're the better way.
People soon found out that it was not very good at complex web apps, though.
These days, there's almost no demand for very simple web apps, partially because common use cases are covered by SaaS providers, and those with a need and the money for custom web apps have seen all the fancy stuff that's possible and want it.
So it's no surprise that today's languages and frameworks are more concerned with making complex web apps manageable, and don't optimize much (or at all) for the "very simple" case.
I dunno about that.
In 2000, one needed a cluster of backends to handle, say, a webapp built for 5000 concurrent requests.
In 2025, a single monolith running on a single VM, using a single DB on another instance can vertically scale to handle 100k concurrent users. Put a load balancer in front of 10 instances of that monolith and use RO DB followers for RO queries, and you can easily handle 10x that load.
> So it's no surprise that today's languages and frameworks are more concerned with making complex web apps manageable, and don't optimize much (or at all) for the "very simple" case.
Maybe the goal is to make complex web apps manageable, but in practice what I see is even very simple webapps being made with those frameworks.
I see the current state of web development as a spiral of complexity with a lot of performance pitfalls. Over-engineering seems to be the default.
Definitely not. PHP lost far more market share to Java,C# and Ruby on Rails than to node.js
> PHP is still very usable for server-side rendering and APIs.
Not "is still", but "has become". It has changed a lot since the PHP 3 days.
> You say "very simple" as if you can't have complex systems with PHP.
With early 2000s PHP, you really couldn't, not without suffering constantly from the language's inadequacies.
> I see the current state of web development as a spiral of complexity with a lot of performance pitfalls. Over-engineering seems to be the default.
I don't disagree, but that seems to happen most of all in the frontend space.
They eventually made it fit for purpose with Laravel ;-)
(Actually, the reworked story in free verse style, which is its most popular form)
TFA is cute but it kinda misses the point, because the original Mel didn't write code that was simple or easy to understand. It was simple to him, and arguably there was some elegance to it once you understood it, but unlike the PHP from the updated story, Mel's code was machine code, really hard to understand or modify, and the design was all in his mind.
Mel would also scoff at PHP.
Simple is robust.