Taking his contribution to Photoshop into account, one could say that if you've seen mainstream motion or still pictures in the Western world in the last three decades, you've probably seen something influenced by him in one way or another.
"There are only a few comments in the version 1.0 source code, most of which are associated with assembly language snippets. That said, the lack of comments is simply not an issue. This code is so literate, so easy to read, that comments might even have gotten in the way."
"This is the kind of code I aspire to write.”
I'm looking at the code and just cannot agree. If I look at a command like "TRotateFloatCommand.DoIt" in URotate.p, it's 200 lines long without a single comment. I look at a section like this and there's nothing literate about it. I have no idea what it's doing or why at a glance:
pt.h := BSR (r.left + ORD4 (r.right), 1);
pt.v := BSR (r.top + ORD4 (r.bottom), 1);
pt.h := pt.h - BSR (width, 1);
pt.v := pt.v - BSR (height, 1);
pt.h := Max (0, Min (pt.h, fDoc.fCols - width));
pt.v := Max (0, Min (pt.v, fDoc.fRows - height));
IF width > fDoc.fCols THEN
  pt.h := pt.h - BSR (width - fDoc.fCols - 1, 1);
IF height > fDoc.fRows THEN
  pt.v := pt.v - BSR (height - fDoc.fRows - 1, 1);
Just breaking up the function with comments delineating its four main sections and what they do would be a start. As would simple things like commenting e.g. what purpose 'pt' serves -- the code block above is where it is first defined, but you can't guess what its purpose is until later when it's used to define something else.

Good code does not make comments unnecessary or redundant or harmful. This is a myth that needs to die. Comments help you understand code much faster, understand the purpose of variables before they get used, understand the purpose of functions and parameters before reading the code that defines them, etc. They vastly aid in comprehension. And those are just "what" comments I'm talking about -- the additional necessity of "why" comments (why the code uses x approach instead of seemingly more obvious approach y or z, which were tried and failed) is a whole other subject.
pt == point, r == rect, h/v == horizontal/vertical; BSR(..., 1) is a fast integer divide by 2; ORD4 promotes an expression to an unsigned 4-byte integer.
The algorithms are extremely common for 2D graphics programming. The first is to find the center of a 2D rectangle, the second offsets a point by half the size, the third clips a point to be in the range of a rectangle, and so on.
Converting the idiomatic math into non-idiomatic words would not be an improvement in clarity in this case.
(Mac Pascal didn't have macros or inline expressions, so inline expressions like this were the way to go for performance.)
It's like using i, j, k for loop indexes, or x, y, z for graphics axes.
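For what it's worth, a minimal C sketch of those two idioms (hypothetical helpers and types, assuming the 32-bit promotion described above; not taken from the actual source):

#include <stdint.h>

/* BSR(x, n): shift right, i.e. a fast integer divide by 2^n
   (the coordinates involved here are non-negative, so the shift is safe). */
static inline int32_t BSR(int32_t x, int n) { return x >> n; }

/* ORD4(x): promote a 16-bit value to a 4-byte integer, so a sum like
   r.left + r.right can't overflow 16 bits before the divide. */
static inline int32_t ORD4(int16_t x) { return (int32_t)x; }

With those, BSR(r.left + ORD4(r.right), 1) is simply the horizontal midpoint of the rect.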
You seem to be missing my point. It's not about improving "clarity" about the math each line is doing -- that's precisely the kind of misconception so many people have about comments.
It's about, how long does it take me to understand the purpose of a block of code? If there was a simple comment at the top that said [1]:
# Calculate top-left point of the bounding box
then it would actually be helpful. You'd understand the purpose, and understand it immediately. You wouldn't have to decode the code -- you'd just read the brief remark and move on. That's what literate programming is about, in spirit -- writing code to be easily read at levels of the hierarchy. And very specifically not having to read every single line to figure out what it's doing.

The original assertion that "This code is so literate, so easy to read" is demonstrably false. Naming something "pt" is the antithesis of literate programming. And if you insist on no comments, you'd at least need to name it something like "bbox_top_left". A generic variable name like "pt", that isn't even introduced in the context of a loop or anything, is a cardinal sin here.
Part of figuring out a reasonable level of commenting (and even variable naming) is a solid understanding of your audience. When in doubt aiming low is good practice, but keep in mind that this was 2D graphics software written at a 2D graphics software company.
There's no context in those names to help you understand them, you have to look at the code surrounding it. And even the most well-intentioned small loop with obvious context right next to it can over time grow and accumulate additional index counters until your obvious little index counter is utterly opaque without reading a dozen extra lines to understand it.
(And i and j? Which look so similar at a glance? Never. Never!)
> There's no context in those names to help you understand them, you have to look at the code surrounding it.
Hard disagree. Using "meaningful" index names is a distracting anti-pattern, for the vast majority of loops. The index is a meaningless structural reference -- the standard names allow the programmer to (correctly) gloss over it. To bring the point home, such loops could often (in theory, if not in practice, depending on the language) be rewritten as maps, where the index reference vanishes altogether.
The issue isn't the names themselves, it's the locality of information. In a 3-deep nested loop, i, j, k forces the reader to maintain a mental stack trace of the entire block. If I have to scroll up to the for clause to remember which dimension k refers to, the abstraction has failed.
Meaningful names like row, col, cell transform structural boilerplate into self-documenting logic. i, j, k may be standard in math-heavy code, but in most production code bases, optimizing for a 'low-context' reader is not an anti-pattern.
That was my "vast majority" qualifier.
For most short or medium sized loops, though, renaming "i" to something "meaningful" can harm readability. And I don't buy the defensive programming argument that you should do it anyway because the loop "might grow bigger someday". If it does, you can consider updating the names then. It's not hard -- they're hyper local variables.
But once you nest three deep (as in the example that kicked off this thread), you're defining a coordinate space. Even in a 10-line block, i, j, k forces the reader to manually map those letters back to their axes. If I see grid[j][i][k], is that a bug or a deliberate transposition? I shouldn't have to look at the for clause to find out.
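A toy sketch of the difference (hypothetical names and dimensions, nothing from the codebase):

enum { DEPTH = 4, ROWS = 8, COLS = 16 };

/* Structural indexes: to confirm that grid[i][j][k] really means
   [depth][row][col], you have to walk back up to the for clauses. */
void clear_structural(int grid[DEPTH][ROWS][COLS]) {
    for (int i = 0; i < DEPTH; i++)
        for (int j = 0; j < ROWS; j++)
            for (int k = 0; k < COLS; k++)
                grid[i][j][k] = 0;
}

/* Named axes: the mapping is visible at the use site, and an accidental
   transposition like grid[row][layer][col] would stand out immediately. */
void clear_named(int grid[DEPTH][ROWS][COLS]) {
    for (int layer = 0; layer < DEPTH; layer++)
        for (int row = 0; row < ROWS; row++)
            for (int col = 0; col < COLS; col++)
                grid[layer][row][col] = 0;
}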
> (And i and j? Which look so similar at a glance? Never. Never!)
This I agree with.
Also, breaking things down into more atomic functions wasn't the best idea for performance-sensitive code in those days, as compilers were not as good about knowing when to inline and when not to: compiler capabilities are a lot better today than they were 35 years ago...
For clarity and to demonstrate, this is basically what this function is doing, but in css:
.container {
  position: relative;
}

.obj {
  position: absolute;
  left: 50%;
  top: 50%;
  transform: translate(-50%, -50%);
}

Because it's quite clear, everything is well named, and the filename also gives the context.
ORD4 = cast to a 32-bit integer.
BSR(x,1) simply meant x divided by 2. This was a very common coding idiom back in those days, when compilers didn't do any optimization and a bitwise shift was much faster than division.
The snippet in C would be:
/* midpoint of the rect r, widened to 32 bits before the add: */
pt.h = (r.left + (int32_t)r.right) / 2;
pt.v = (r.top + (int32_t)r.bottom) / 2;

/* back off by half the size, so the result is centered on that point: */
pt.h -= (width / 2);
pt.v -= (height / 2);

/* clamp into [0, fDoc.fCols - width] x [0, fDoc.fRows - height]: */
pt.h = max(0, min(pt.h, fDoc.fCols - width));
pt.v = max(0, min(pt.v, fDoc.fRows - height));

/* if wider/taller than the document, shift so it is centered over it: */
if (width > fDoc.fCols) {
    pt.h -= (width - fDoc.fCols - 1) / 2;
}
if (height > fDoc.fRows) {
    pt.v -= (height - fDoc.fRows - 1) / 2;
}

If I understand it correctly, it was calculating the top-left point of the bounding box.
I'm sure the code would be immediately obvious to anyone who would be working on it at the time.
Comments aren't unnecessary, they can be very helpful, but they also come with a high maintenance cost that should be considered when using them. They are a long-term maintenance liability because, by design, the compiler ignores them, so it's very easy to change or refactor code, miss updating a comment, and end up with a comment that is misleading or just plain wrong.
These days one could make some sort of case (though I wouldn't entirely buy it, yet) that an LLM-based linter could be used to make sure comments do not get disconnected from the code they are documenting, but in 1990? not so much.
Would I have used longer variable names for slightly more clarity? Today, sure. In 1990, probably not. Temporal context is important and compilers/editors/etc have come a long way since then.
Every comment is a line of code, and every line of code is a liability; worse, comments are a liability waiting to rot, to be missed in a refactor, and to become a source of confusion. They're an excuse to name things poorly, because "good comment." The purpose of variables should be in their name, including units if it's a measurement. Parameters and return values should only be documented when not obvious from the name or type -- for example, if you're returning something like a generic Pair, especially if left and right have the same type. We've been living with autocomplete for decades; you don't need to keep variable names short to type.
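A minimal sketch of that naming style (hypothetical names, in C):

#include <stddef.h>

/* Units live in the name, so no comment is needed at the use site: */
static const double request_timeout_ms = 250.0;
static const double sensor_distance_cm = 42.5;

/* A generic pair is where a comment earns its keep, because "first" and
   "second" say nothing on their own: returns the (minimum, maximum) of
   xs[0..n-1], assuming n >= 1. */
struct double_pair { double first, second; };

struct double_pair min_max(const double *xs, size_t n) {
    struct double_pair p = { xs[0], xs[0] };
    for (size_t i = 1; i < n; i++) {
        if (xs[i] < p.first)  p.first  = xs[i];
        if (xs[i] > p.second) p.second = xs[i];
    }
    return p;
}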
The problem with AI-generated code is that the myth that good code is thoroughly commented code is so pervasive that the default output mode for generated code is to comment every darn line it generates. After all, in software education, they don't deduct points for needless comments, and students think their code is now better with the comments, because they almost never teach writing good code. Usually you get kudos for extensive comments. And then you throw away your work. The computer science field is littered with math-formula-influenced, space-saving one- or two-letter identifiers, barely with any recognizable semantic meaning.
A name and signature is often not sufficient to describe what a function does, including any assumptions it makes about the inputs or guarantees it makes about the outputs.
That isn't to say that it isn't necessary to have good names, but that isn't enough. You need good comments too.
And if you say that all of that information should be in your names, you end up with very unwieldy names, that will bitrot even worse than comments, because instead of updating a single comment, you now have to update every usage of the variable or function.
This is exactly my view. Comments, while they can be helpful, can also interrupt the reading of the code. They are also not verified by the compiler; it's curious that, in an era when everyone goes crazy for Rust safety, there is nothing as unsafe as a comment, because comments are completely ignored.
I do not oppose comments. But they should be used only when needed.
> comments are a liability waiting to rot, to be missed in a refactor, and waiting to become a source of confusion
This gets endlessly repeated, but it's just defending laziness. It's your job to update comments as you update code. Indeed, they're the first thing you should update. If you're letting comments "rot", then you're a bad programmer. Full stop. I hate to be harsh, but that's the reality. People who defend no comments are just saying, "I can't be bothered to make this code easier for others to understand and use". It's egotistical and selfish. The solution for confusing comments isn't no comments -- it's good comments. Do your job. Write code that others can read and maintain. And when you update code, start with the comments. It's just professionalism, pure and simple.
(Please note: I'm not arguing against comments. I'm simply arguing that trusting comments is problematic. It is understandable why some people would prefer to have clearly written code over clearly commented code.)
That doesn't justify matching their sloth.
Lead by example! Write comments half a page long or longer, explaining things, not just expanding identifier names by adding spaces in between the words.
> 2. Restrictions. Except as expressly specified in this Agreement, you may not: (a) transfer, sublicense, lease, lend, rent or otherwise distribute the Software or Derivative Works to any third party; or (b) make the functionality of the Software or Derivative Works available to multiple users through any means, including, but not limited to, by uploading the Software to a network or file-sharing service or through any hosting, application services provider, service bureau, software-as-a-service (SaaS) or any other type of services. You acknowledge and agree that portions of the Software, including, but not limited to, the source code and the specific design and structure of individual modules or programs, constitute or contain trade secrets of Museum and its licensors.
Edit: Disappointed is really not the right word but I am failing at finding the right word.
I think Adobe decided to release the code because they knew it was only valuable from a historical standpoint and wouldn't let anyone actually compete with Photoshop. If you wanted to start a new image editor project from an existing codebase, it would be much easier to build off of something like Pinta: https://www.pinta-project.com/
1) these historical source code releases really are largely historical interest only. The original programs had constraints of memory and cpu speed that no modern use case does; the set of use cases for any particular task today is very different; what users expect and will tolerate in UI has shifted; available programming languages and tooling today are much better than the pragmatic options of decades past. If you were trying to build a Unix clone today there is no way you would want to start with the historical release of sixth edition. Even xv6 is only "inspired by" it, and gets away with that because of its teaching focus. Similarly if you wanted to build some kind of "streamlined lightweight photoshop-alike" then starting from scratch would be more sensible than starting with somebody else's legacy codebase.
2) In this specific case the licence agreement explicitly forbids basically any kind of "running with it" -- you cannot distribute any derivative work. So it's not surprising that nobody has done that.
I think Doom and similar old games are one of the few counterexamples, where people find value in being able to run the specific artefact on new platforms.
> When will we get the linux port of Photoshop 1.0?
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
This makes the license transitive so that derived works are also MIT licensed.
[1] https://en.wikipedia.org/wiki/MIT_License?wprov=sfti1#Licens...
*: which unfortunately most users of MIT libraries do not follow as I often have an extremely difficult time finding the OSS licenses in their software distributions
AGPL and GPL are, on the other hand, as you describe.
You also could not legally remove the MIT license from those files and distribute with all rights reserved. My original granting of permission to modify and redistribute continues downstream.
Words have meaning and all that.
Ironic put down when “open source” consists of two words which have meaning, but somehow doesn’t mean that when combined into one phrase.
Same with free software, in a way.
Programmers really are terrible at naming things.
:)
* If a country doesn't have "closed borders" then many foreigners can visit if they follow certain rules around visas, purpose, and length of stay. If instead anyone can enter and live there with minimal restrictions we say it has "open borders".
* If a journal isn't "closed access" it is free to read. If you additionally have permissions to redistribute, reuse, etc then it's "open access".
* If an organization doesn't practice "closed meetings" then outsiders can attend meetings to observe. If it additionally provides advance notice, allows public attendance without permission, and records or publishes minutes, then it has “open meetings.”
* A club that doesn't have "closed membership" is open to admitting members. If anyone can join, provided they meet relevant criteria (if any), then it has "open membership".
EDIT: expanded this into a post: https://www.jefftk.com/p/open-source-is-a-normal-term
* A set that is open can also be closed.
Just supporting a modern OS's graphical API (The pre-OSX APIs are long dead and unsupported) is a major effort.
https://fsck.technology/software/Silicon%20Graphics/Software...
And, for purity/completeness, avoid Maxx Desktop and/or NSCDE; EMWM with XMToolbar is close enough to SGI's Irix desktop.
It nailed it, first try.
I cannot, unfortunately, share a link to the website it created because of the license.
LLM translation of historical software to modern platforms is a solved problem. Try it, you'll see.
I used https://exe.dev/ and their Shelley agent to drive Claude. Give it a try, it is jaw dropping.
Here is the prompt I gave it:
"Use wasm and go and a 68000 emulator to get the Photoshop 1.0.1 software at https://d1yx3ys82bpsa0.cloudfront.net/source/photoshop-v.1.0... to run correctly. You should not require an operating system, instead implement the system calls that Photoshop makes in the context of wasm. Because Go compiles to wasm, you might try writing some kind of translator from the pascal to go and then compile for wasm. Or you might be able to find such a thing and use it."
You can give it a try yourself, or contact me for a private link to it (see the CHM license for why I can't make it public).
It wasn't even broadband that destroyed that experience: when CDs came around, developers realised they had space to just stick a PDF version of the manual on the CD itself, and put in a slip that told you to insert the CD, run autorun.exe if it didn't already, and refer to the manual on the CD for the rest!
They weren’t like textbooks, which have knowledge that tends to be relevant for decades. You’d get a new set with every software release, making the last 5-20 lbs of manuals obsolete.
You did lose some of the readability of an actual book. Hard-copy manuals were better for that. But for most software manuals, I did more “look up how to do this thing” than reading straight through. And with a pdf on a CD you had much better search capabilities. Before that you’d have to rely on the ToC, the book index and your own notes. For many manuals, the index wasn’t great. Full text search was a definite step up.
Even the good ones, like the 1980s IBM 2-ring binder manuals, which had good indexes, were a pain to deal with and couldn’t functionally match a PDF or text file on a CD for searchability.
Even some well-documented modern software is obviously documented by the programmers and programmer-adjacent.
You might expect now and again to get some optional updates/patches later, but that was rare - and rarer still for most people to even know about them.
These days, software is never complete. Nothing is done. It's just a point-in-time state with a laundry list of bugs and TODOs that just roll out whenever. The software is just whatever git tag we're pointing to today.
I understand how/why it has become like this - but it still makes me sad.
The Office 4.3 set of manuals were large too, but didn't have the information density the AutoCAD ones did.
I think all floppies are magical :)
Back in the day, black ones were ordinary, and only the white/grey ones were for licensed software, and thus more desirable.
https://computerhistory.org/wp-content/uploads/2019/08/photo...
E.g: https://c7.alamy.com/comp/2AA9BC4/ajaxnetphoto-2019-worthing...
OMG. Booch?? The father of UML is still around? Given that UML is a true crime against humanity, it just goes to show there is no justice in the world. (I want a lifespan refund for the amount of time I spent learning UML and Design Patterns back in the bad old Enterprise Java days. Oof)
It is like the YAML junk that gets pushed nowadays to the detriment of proper schemas and the validation tools we have in XML.
For trivial CRUD apps, and maintaining modified versions of the generated code was a nightmare.
It is also a great way to document existing architectures.
Note this is a toxic license. Accepting it and/or reading the code has potential for legal liability.
Still, applaud releasing the source code, even if encumbered. Preservation is most important, and any legal teeth will eventually expire with the copyright.
How would this potentially expose you to legal liability?
I feel like that has changed? Even Blender felt good the last time I used it, Firefox became kinda fine, though these are probably bad examples as they are both mainstream software. But what about OSS that is used primarily by OSS enthusiasts? What about GIMP now?
Whereas Photoshop and other "mainstream" software use terms and procedures non-programmers are more likely to be familiar with: heal this area with a patch, clone something with a clone stamp, scissors/lasso to cut something out (not saying GIMP doesn't have those)...
Unfortunately, designers are rare among the FOSS community. You can't attract real casual or professional users if you don't recognize the value of professional UI/UX.
> To change GIMP to single-window mode (merging panels into one window), go to "Windows" in the top menu and select or check "Single-Window Mode"; this merges all elements like the Toolbox, Layers, and History into one unified view.
and having the source available didn't help so far either :-))
Could you please show me a good text tool plugin for GIMP, then?
You can check their forums & other sites: the text tool is at the top of their discussion lists.
So can you expand on why you think the text tool is bad?
Reddit: https://www.reddit.com/r/GIMP/comments/1fecr6u/suggestion_im...
It's just the first two results from the top of Google.
Maybe the tool was improved in version 3.0; I'm running an older 2.x version. I will check it next time.
The versions I used were difficult in:
- applying font sizes
- random loss / reset of settings
- issues with the preview when editing
- font preview before selection
etc.
The strange font sizes and setting reset was mostly fixed as part of the 2020 massive refactor [0]. There are still some minor inconsistencies between the two font editor panels, but they're being worked on.
Thankfully, you shouldn't have had any random setting changes since about the 2018 builds.
And that's the irony covered in my post: even though the source is available, it so far hasn't motivated anyone enough to create a better version.