My family had a bunch of "Dr. Dobb’s Journal of Computer Calisthenics & Orthodontia"[0] and similar things (BYTE, COMPUTE!). (Which seem slightly drier, but maybe more like Paged Out.)
[0]:https://archive.org/details/dr_dobbs_journal_vol_01/mode/2up
I've never heard of this. It's a pity the article doesn't go into details.
If you want to learn more about query-based compilers as a concept, I highly recommend ollef's article: https://ollef.github.io/blog/posts/query-based-compilers.htm...
If you want to learn how to implement a query based compiler, I have a tutorial on that here: https://thunderseethe.dev/posts/lsp-base/ (which I also highly recommend but that might be more obvious since I wrote it)
Note that you can link to pages in a PDF with a hash like #page=64 (for example) in the URL.
https://pagedout.institute/download/PagedOut_008.pdf#page=64
EDIT: Fixed. It wasn't the tags - it was a trailing space we had in the "database". I honestly thought I'd handled that case, but apparently not.
Worth noting that the HTML tag in the title was stripped from the PDF table of contents as well, so the title for that article in the contents is missing a word. No big deal, but good to know for future submissions!
E.g. "Integer Comparison is not Deterministic", in the C standard you can't do math on pointers from different allocations. The result in the article is obvious if you know that.
Also, in the Logistic Map in 8-Bit article, there is a statement:
> While implementing Algorithm 1 in modern systems is trivial, doing so in earlier computers and languages was not so straightforward.
Microsoft BASIC did floating point. Every 8-bit machine of the era was able to do this calculation easily. I did it on my Franklin ACE 1000 in 1988, in BASIC, while reading the book Chaos.
I suppose what I'm saying is the premises of these articles seem click-baity, and I find that off-putting.
In general when selecting articles we assume that the reader is an expert in some field(s), but not necessarily in the field covered by this article. As such, things which are simple for an expert in the specific domain can still be surprising to learn for folks who aren't experts in that domain.
What I'm saying is that we don't try to be a cutting-edge scientific journal - rather, we publish even the smallest trick that we think someone may not know about and might find fun/interesting to learn.
The consequence of that is that, yeah, some articles have somewhat clickbaity titles for some of the readers.
On the flip side, as we know from meme-t-shirts, there are only 2 things hard in computer science, and naming is first on the list ;)
P.S. Sounds like you should write some cool article btw :)
First, as for "serialization" vs "deserialization", it can be argued that the word "serialization" can be used in two ways. One is on the "low level" to denote the specific action of taking the data and serializing it. The other one is "high level", where it's just a bag where you throw in anything related (serialization, deserialization, protocols, etc) - same as it's done on Wikipedia: https://en.wikipedia.org/wiki/Serialization (note how the article is not called "Serialization and deserialization" for exactly these reasons). So yes, you can argue that the author could have written "deserialization", but you can also argue that the author used the "high level" interpretation of the word and therefore used it correctly.
As for insertion not happening and balancing stuff - my memory might be failing me, but I do remember it actually happening during serialization. I think there even was a "delete" option when constructing the "serialized buffer", but it had interesting limitations.
Anyway, not sure how deep you went into how it works (beyond what's in the article), but it's a pretty cool and clever piece of work (and yes, it does have its limitations, but I can also see it having its applications - e.g. when sending data from a more powerful machine to a tiny embedded one).
No, I'm not giving spoilers, except that there might be some polyglot files.
I was an avid follower of 2600, Phrack, etc. from the mid-'90s up through the mid-2010s, and it seemed to me that the 2600 community always sort of stuck to itself, never really growing or shrinking.
I get the 2600 zine at a local book store and I like it but there's a lot of articles that I don't really care about.
It might be a good thing though.
I'm surprised that they're now offering a digital format as, at one point, they were taking a hard stance to not provide one. I guess they changed their mind within the last 10 years or so.
Notice how Paged Out is libre/free licensed, requiring a CC0, CC-BY, or CC-BY-SA license for each of its articles. 2600 is locked under copyright.
https://www.kicksecure.com/wiki/Sdwdate https://tails.net/contribute/design/Time_syncing/
> Obviously the used fonts should be readable (and ideally their name shouldn't start with "Comic" and end with "Sans", though there might be some article topics that justify even that!), and while almost any font meets this requirement, please be careful when selecting a non-standard font.
I kinda want to see such an article, but taken seriously: discussing the history of the font, its design and purpose, its evolution, and related/derivative font families.
If you like polyglot files, see https://www.alchemistowl.org/pocorgtfo/
I believe it’s a dual use tool, hence a polyglot.
For example, one could argue that running a modern grammar checker over an article and accepting its comma fixes should already be marked with "AI was used to create this article". But reading a statement like that makes folks think "AI slop", which would not be the case at all and would be insanely unfair towards the author. Even creating a scale of "no AI was used at all" → "a bit was used" → "..." wouldn't solve the misinterpretation issue, because regardless of how well we would define the scale, I have zero hope that more than a handful of people would ever read our definitions (and understand them the way we intended) posted somewhere on our website (or even in the zine itself).
Another example would be someone doing research for their article and using AI as a search engine (to get leads on what more to read on the topic). On one hand this is AI usage; on the other, it's pretty similar to just using a classical search engine. Yet someone could still argue that the article should be marked as "being AI enhanced".
There are also more popular use-cases for AIs, like just doing wordsmithing/polishing the language. A great majority of authors (including me) are not native English speakers, yet folks do want their articles to present well (some readers are pretty unforgiving when it comes to typos and grammar errors). LLMs are (if used correctly) good tools to help with the language layer. So, should an article where the author has written everything themselves and then used AI to polish it be grouped in the same bag with fully AI generated slop? From my PoV the answer is a pretty clear "no".
Anyway, at the end of the day I decided that any kind of marking on the articles wouldn't work as intended, and outright banning any and all AI usage wouldn't work either (it would be hard to detect / it makes no sense in some cases / there are reasons to allow some AI usage). But - as you know, since you refer to our AI policy - I still decided to draw the line in roughly the same place where some universities draw it.
I highly recommend it if you enjoy writing. It was painless and fun.
A nice break from writing blogs.
It claims clang is NOT "a pipeline that runs each pass of the compiler over your entire code before shuffling its output along to the next pass."
What I think the author is talking about is primarily AST parsing and clangd, whereas "any compiler tome" is still highly relevant to the actual work of building a compiler.
https://news.ycombinator.com/item?id=11685317
https://lobste.rs/s/dwf2yn/sixten_s_query_based_compiler
https://ericlippert.com/2012/06/08/red-green-trees/
Rust's salsa, etc.
Related search terms are incremental compilation and red-green trees. It's primarily an IDE-driven workflow (well, the original use case was driven by IDEs), but the principles behind it are very interesting.
You can grok the difference by comparing, say, invoking `g++` on the command line (pull in every header, compile object files, redo all template deduction, and so on) with a model where editing a single line in a single file barely changes the underlying data structure and doesn't force a full recompilation. This doesn't even require owning the editing experience via UI hooks or keylogging: a directory watcher can treat each file diff as a patch and send it to the server in patch form. The observation is that compiling an O(n)-size file is usually far more expensive than a program that scans the file a few times and generates a patch.
ASTs are similar to these kinds of trees only insofar as syntax trees are the underlying data structure for understanding programming languages.
I've always wanted to get into this stuff but it's hard!
Some of these articles I wish I could read more of (e.g. the IDA Database one) :)
So great to find that spirit again!
Am I to understand that Aga is an AI bot? I see nothing mentioned about this in the FAQs or the webpage. Makes me wonder if this zine may be written by AI agents reproducing the old hacker magazine aesthetic.
Or is "bot-in-chief" some kind of tongue-in-cheek formulation that I can find nothing about online? Aga is listed as "Editor-in-Chief" on the About page.