Then I moved into backend development, where I was doing all Java, Scala, and Python. It was... dare I say... easy! Sure, these kinds of languages bring with them other problems, but I loved batteries-included standard libraries, build systems that could automatically fetch dependencies -- and oh my, such huge communities with open-source libraries for nearly anything I could imagine needing. Even if most of the build systems (maven, sbt, gradle, pip, etc.) have lots of rough edges, at least they exist.
Fast forward 12 years, and I find myself getting back into Xfce. Ugh. C is such a pain in the ass. I keep reinventing wheels, because even if there's a third-party library, most of the time it's not packaged on many of the distros/OSes our users use. Memory leaks, NULL pointer dereferences, use-after-free, data races, terrible concurrency primitives, no tuples, no generics, primitive type system... I hate it.
I've been using Rust for other projects, and despite it being an objectively more difficult language to learn and use, I'm still much more productive in Rust than in C.
--> src/main.rs:45:34
|
45 | actions.append(&mut func(opt.selected));
| ---- ^^^^^^^^^^^^ expected `&str`, found `String`
| |
| arguments to this function are incorrect
|
help: consider borrowing here
|
45 | actions.append(&mut func(&opt.selected));
|
I even had to cheat a little to get that far, because my editor used rust-analyzer to flag the error before I had the chance to build the code.
Also, I highly recommend getting into the habit of running `cargo clippy` regularly. It's a wonderful tool for catching non-idiomatic code. I learned a lot from its suggestions on how I could improve my work.
When I say Rust is harder to use (even after learning it decently well), what I mean is that it's still easier to write a pile of C code and get it to compile than it is to write a pile of Rust code and get it to compile.
The important difference is that the easier-written C code will have a bunch more bugs in it than the Rust code will. I think that's what I mean when I say Rust is harder to use, but I'm more productive in it: I have to do so much less debugging when writing Rust, and writing and debugging C code is more difficult and takes more time overall than writing the Rust code (and doing whatever little debugging is still necessary there).
> Also, I highly recommend getting into the habit of running `cargo clippy` regularly. It's a wonderful tool for catching non-idiomatic code.
That's a great tip, and I usually forget to do so. On a couple of my personal projects, I have a CI step that fails the build if there are any clippy messages, but I don't use it for most of my personal projects. I do have a `cargo fmt --check` in my pre-commit hooks, but I should add clippy to that as well.
-- nvim-lspconfig: have rust-analyzer run clippy on save instead of plain `cargo check`
require("lspconfig").rust_analyzer.setup({
settings = {
["rust-analyzer"] = {
checkOnSave = {
command = "clippy",
allFeatures = true,
},
},
},
})
Neither is cargo (nor npm, nor any other package manager, for that matter).
I'm not sure what value being that paranoid is buying you in the long run.
https://doc.rust-lang.org/cargo/reference/build-scripts.html
Regarding Clippy, you can also crank it up with `cargo clippy -- -Wclippy::pedantic`. Some of the advice at that level gets a little suspect. Don't just blindly follow it. It offers some nice suggestions though, like:
warning: long literal lacking separators
--> src/main.rs:94:22
|
94 | if num > 1000000000000 {
| ^^^^^^^^^^^^^ help: consider: `1_000_000_000_000`
|
that you don't get by default.
Why 1_000_000_000_000? What does that number mean?
It costs nothing to write:
let my_special_thing = 1_000_000_000_000;
since the compiler will just inline it.
The readability problem was never the lack of separators, since that number might be the wrong number regardless.
As someone who is more familiar with Rust than C: only if you grok the C build system(s). For me, getting C to build at all (esp. if I want to split it up into multiple files or use any kind of external library) is much more difficult than doing the same in Rust.
Rust doesn't magically make the vast majority of bugs go away. Most bugs are entirely portable!
So for example today I dealt with a synchronization issue. This turned out not to be a code bug but a human misunderstanding of a protocol specification (a whole saga), which was not possible to encode into a type system of any sort. The day before was a constraint network specification error. In both cases the code was entirely irrelevant to the problem.
Literally all I deal with are human problems.
My point is Rust doesn't help with these at all, however clever you get. It is no different to C, but C will give you a superset of vulnerabilities on top of that.
Fundamentally Rust solves no problems I have. Because the problems that matter are human ones. We are too obsessed with the microscopic problems of programming languages and type systems and not concentrating on making quality software which is far more than just "Rust makes all my problems go away" because it doesn't. It kills a small class of problems which aren't relevant to a lot of domains.
(incidentally the problems above are implemented in a subset of c++)
Maybe not in a reasonable language no, but there are advances in type systems that are making ever larger classes of behaviours encodable into types. For example, algebraic effects (can this function throw, call a remote service etc)
https://koka-lang.github.io/koka/doc/index.html
linear types (this method must be called only once etc)
https://ghc.gitlab.haskell.org/ghc/doc/users_guide/exts/line...
dependent typing (f(1) returning a different type from f(2), verifiable at compile time),
https://fstar-lang.org/index.html
Some of these features will eventually make it to “normal” PL. For example, Scala now has dependent types,
https://dotty.epfl.ch/docs/reference/new-types/match-types.h...
and Java can support linear type checking,
I run into those things nearly daily, so... ok then.
You might say that the C and Rust code will have the same number of logic errors, but I'm not convinced that's the case either. Sure, if you just directly translate the C to Rust, maybe. But if you rewrite the C program in Rust while making good use of Rust's type system, it's likely you'll have fewer logic errors in the Rust code as well.
Rust has other nice features that will help avoid bugs you might write in a C program, like most Result-returning functions in the stdlib being marked #[must_use], or match expressions being exhaustive, to name a couple things.
Actually it's a bit cleverer than that, and some people might benefit from knowing this. The Result type itself is marked #[must_use]. If you're writing a Goat library and you are confident that just discarding a Goat is almost always a mistake, regardless of the context in which they got a Goat, you too should mark your Goat type #[must_use = "this `Goat` should be handled properly according to the Holy Laws of the Amazing Goat God"], and now everybody is required to do that or explicitly opt out, even for their own Goat code.
Obviously don't do this for types which you can imagine reasonable people might actually discard, only the ones where every discard is a weird special case.
Types I like in Rust which help you avoid writing errors the compiler itself couldn't possibly catch:
Duration - wait, are these timeouts in seconds or milliseconds? It's different on Windows? What does zero mean, forever or instant?
std::cmp::Ordering - this Doodad is Less than the other one
OwnedFd - it's "just" a file descriptor, in C this would be an integer, except, this is always a file descriptor, it can't be "oops, we didn't open a file" or the count of lines, or anything else, we can't Add these together because that's nonsense, they're not really integers at all.
[1]: https://www.chromium.org/Home/chromium-security/memory-safet...
> akshually they have same amount of bugs
You literally segfault by dereferencing an invalid pointer.
But only if you are LUCKY! The worst case is when you DON'T segfault on dereferencing an invalid pointer.
I have written a lot of C and Rust. The notion that identical projects written in both languages would have identical numbers of (or even severity of) bugs is laughable on its face.
I mean, literally just not being able to deref a NULL pointer by itself is enormous.
let p: *const i32 = std::ptr::null(); // Pointer to a signed 32-bit integer, but it's null
unsafe { some_local = *p }; // Dereferencing the null pointer, bang
If we instead use references they're nicer to work with, don't need unsafe, perform the same, and are never null. So obviously that's what real Rust programmers almost always do.
In practice I have never encountered a null pointer in Rust. Most Rust programmers have never encountered a null pointer. For most non-FFI practical purposes you can act as if they don't exist.
I won't disagree that correct C is harder to write, but it's not 2005 anymore and standard tooling gives you access to things like asan, msan, ubsan, tsan, clang-tidy...
You can also have that hooked up to the editor, just like `cargo check` errors. I find this to be quite useful, because I have a hard time getting into habits, especially for things that I'm not forced to do in some way. It's important that those Clippy lints are shown as soft warnings instead of hard errors though, as otherwise they'd be too distracting at times.
Granted, I can still crank out a Python program faster that kinda works, but god forbid you need to scale it or use any sort of concurrency at all.
* In Rust, you will have to deal with a lot of unnecessary errors. The language is designed to make its users create a host of auxiliary entities: results, options, futures, tasks and so on. Instead of dealing with the "interesting" domain objects, the user of the language is mired in the "intricate interplay" between objects she doesn't care about. This is, in general, a woe of languages with extensive type systems, but in Rust it's a woe on a whole new level. Every program becomes a Sisyphean struggle to wrangle through all those unnecessary objects to finally get to write the actual code. Interestingly though, there's a tendency in a lot of programmers to like solving these useless problems instead of dealing with the objectives of their program (often because those objectives are boring or because programmers don't understand them, or because they have no influence over them).
What it needs to say is something along the lines of "a function f is defined with type X, but is given an argument of type Y": maybe the function should be defined differently, maybe the argument needs to change -- it's up to the programmer to decide.
I'm not aware of an existing tool to produce blind-friendly output, but this would at least be a part of that!
I buy a fruit mixer from Amazon.com ; I send it back along with a note: expected a 230VAC mixer, found a 110VAC mixer.
The code tends to be loaded with primitives that express ownership semantics or error handling. Every time something changes (for instance, you want not just to read but also to modify values referenced by the iterator) you have to change code in many places (you will have to invoke 'as_mut' explicitly even if you're accessing your iterator through a mutable ref). This could be attributed (partially) to the lack of function overloading. People believe that overloading is often abused, so it shouldn't be present in a "modern" language. But in languages like C++ overloading also helps with const correctness and move semantics. In C++ I don't have to invoke 'as_mut' to modify a value referenced by a non-const iterator, because the dereferencing operator has const and non-const overloads.
Async Rust is on another level of complexity compared to anything I've used. The lifetimes are often necessary and everything is wrapped into multiple layers; everything is Arc<Mutex<Box<*>>>.
What? It tells the user exactly what's wrong
> Every program becomes a Sisyphean struggle to wrangle through all those unnecessary objects to finally get to write the actual code
That is the cost of non-nullable types and correctness. You still have to do the Sisyphean struggle in other programming languages, but without hints from the compiler.
For example with C++ the language offers enough functionality that you can create abstractions at any level, from low level bit manipulation to high level features such as automatic memory management, high level data objects etc.
With C you can never escape the low level details. Cursed to crawl.
Back in 1994/95, I wrote an API, in C, that was a communication interface. We had to use C, because it was the only language that had binary/link compatibility between compilers (the ones that we used).
We designed what I call "false object pattern." It used a C struct to simulate a dynamic object, complete with function pointers (that could be replaced, in implementation), and that gave us a "sorta/kinda" vtable.
Worked a charm. They were still using it, 25 years later.
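For readers who haven't run into the pattern, here is a minimal sketch of that kind of struct-with-function-pointers setup; the names (comm_iface, send, close) are made up for illustration and are not the original 1994 API:

/* A minimal sketch of the "false object pattern" described above.
   Names (comm_iface, send/close) are illustrative, not the original API. */
#include <stdio.h>

typedef struct comm_iface comm_iface;

struct comm_iface {
    void *ctx;                                     /* implementation-specific state */
    int  (*send)(comm_iface *self, const char *);  /* replaceable "methods"...      */
    void (*close)(comm_iface *self);               /* ...a sorta/kinda vtable       */
};

static int stdout_send(comm_iface *self, const char *msg) {
    (void)self;
    return printf("%s\n", msg);
}

static void stdout_close(comm_iface *self) { (void)self; }

int main(void) {
    comm_iface iface = { NULL, stdout_send, stdout_close };
    iface.send(&iface, "hello");   /* "method call" through the function pointer */
    iface.close(&iface);
    return 0;
}

The struct plays the role of both the object and its vtable; an implementation fills in the pointers, and callers invoke "methods" through them.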
That said, I don't really miss working at that level. I have been writing almost exclusively in Swift, since 2014.
You were not alone in this. It is the basis of glib's GObject which is at the bottom of the stack for all of GTK and GNOME.
You don't have to think about exceptions, overloaded operators, copy constructors, move semantics etc.
You'll also still need to think about when to copy and move ownership, only without a type system to help you tell which is which, and good luck ensuring resources are disposed correctly (and only once) when you can't even represent scoped objects. `goto` is still the best way to deal with destructors, and it still takes a lot of boilerplate.
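For concreteness, a sketch of that goto-based cleanup boilerplate (the function and its resources are invented for the example):

/* Each acquired resource gets a cleanup label; failures jump to the
   right depth, and the success path falls through them in reverse order. */
#include <stdio.h>
#include <stdlib.h>

int process(const char *path) {
    int rc = -1;
    char *buf = NULL;

    FILE *f = fopen(path, "r");
    if (!f)
        goto out;

    buf = malloc(4096);
    if (!buf)
        goto close_file;

    /* ... use f and buf ... */
    rc = 0;

    free(buf);        /* hand-rolled "destructors", run exactly once */
close_file:
    fclose(f);
out:
    return rc;
}

int main(void) {
    return process("example.txt") == 0 ? 0 : 1;
}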
The beauty of C is that it allows you to pick your level of complexity.
That pretty much is the definition of C. It was designed to be a system language, that was "one step beyond" (my own Madness reference) the assembler.
It's a dangerous tool, and should be wielded by experts. The same goes for all kinds of similar stuff.
And "experts" is kind of a darwinian thing. There's not really that many folks that can successfully wield it in large quantities (a certain cranky Finn comes to mind). There's a ton of folks that are surfing the Dunning-Kruger wave, that think they are up to it, but they tend to face-plant.
For myself, I learned to do it right, but it was difficult, stressful work, and I don't miss it. Working in Swift is a joy. I don't have to worry about things like deallocations, and whatnot.
Higher-level languages often make it so you can focus mostly on the language. With C and assembly, I think, you need a lot of context: what is running the code, and how.
you can abstract away perfectly fine low level details and use your own high level constructs to build what you want in a memory safe way...
high level data objects?
sure it doesn't have garbage collection, so it motivates you not to leave garbage lying around for some slow background process to collect.
actually you can build this in C, you just do not want to...
you can give all your objects reference counters and build abstractions to use that, you can implement 'smart pointers' if you want, and have threads do garbage collection if you want... why not? what exactly is stopping you?
maybe it's less convenient to go that route. but impossible? nope.
connect GDB to your C++ program and see how it works... it's not like C++ suddenly doesn't become machine code. and C translates perfectly fine to machine code...
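To make that "you can build it yourself in C" claim concrete, here is a toy, hand-rolled reference-counted buffer of the kind being described; it's only a sketch (not thread-safe; a real version would use atomics, and all names are invented):

#include <stdlib.h>
#include <string.h>

typedef struct {
    int    refs;
    size_t len;
    char   data[];      /* flexible array member holding the payload */
} rc_buf;

static rc_buf *rc_buf_new(const char *s) {
    size_t len = strlen(s);
    rc_buf *b = malloc(sizeof *b + len + 1);
    if (!b) return NULL;
    b->refs = 1;
    b->len = len;
    memcpy(b->data, s, len + 1);
    return b;
}

static rc_buf *rc_buf_retain(rc_buf *b) { b->refs++; return b; }

static void rc_buf_release(rc_buf *b) {
    if (b && --b->refs == 0)
        free(b);
}

int main(void) {
    rc_buf *a = rc_buf_new("shared");
    if (!a) return 1;
    rc_buf *b = rc_buf_retain(a);   /* two owners now              */
    rc_buf_release(a);              /* still alive, refs == 1      */
    rc_buf_release(b);              /* last owner, actually freed  */
    return 0;
}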
Compare in C++ where you can have higher level types and compiler helpfully provides features that let the programmer do stuff such as RAII or unique_ptr.
Huge difference.
I suffered writing those for many years. I finally simply learned not to do them anymore. Sort of like there's a grain of sand on the bottom of my foot and the skin just sort of entombed it in a callus.
This all just comes off incredibly arrogant, if I'm being honest.
It's not all that different from learning any challenging task. I can't skateboard to save my life, but the fact that people can do it well is both admirable and the result of hundreds or thousands of hours of practice.
Skilled people can sometimes forget how long it took to learn their talent, and can occasionally come off as though the act is easy as a result. Don't take it too harshly.
You can only do that in code you control. You can not control the entire call stack of your functions in cooperative environments.
That's true. But I try by doing code reviews, and I try to lead the team by example and suggestion.
When I `git blame` segfaults in the D compiler repository, the aforementioned assumption -- that you can't control faults introduced by someone else -- seems to be largely true. We do use Go/Rust at work (albeit an easier flavor of it), because I don't trust my juniors.
I know, but it's the truth. Consider another case. I worked in a machine shop in college trying to make my steam engine. The man who ran the shop, was kind enough to show me how to operate the machines.
Those machines were like C, built during WW2. They had no user friendly features. The milling machine was a particular nightmare. It was festooned with levers and gears and wheels to control every aspect. There was no logic to the "user interface". Nothing was labeled. The worst one was a random lever that would reverse the operation of all the other levers. My terror was wrecking the machine by feeding the tool mount into the cutting bit.
I would make a part, and it would come out awful - things like the surface finish was a mess. If he had some time, he'd come over and help me. He'd turn a wheel, "ting" the blade on a grinding wheel, adjust the feed, and make a perfect part (all by eyeball, I might add). The man was just incredible with those machines. I was just in awe. He never made a mistake. Have you ever tried to get a part centered properly in a 4-jaw chuck? Not me. He could on the first try every time.
But he'd been doing it every day for 40 years.
In this instance Walter is correct - the mistakes he listed are very rarely made by experienced C programmers, just as ballet dancers rarely trip over their own feet walking down a pavement.
The problem of those errors being commonplace in those that are barely five years in to C coding and still have another five to go before hitting the ten year mark still exists, of course.
But it's a fair point that given enough practice and pain those mistakes go away.
So true. I know dancers who have been at it for decades. They "make it look easy", and it is easy for them. But try to do it yourself, and you look like a moose.
A friend of mine trains dancers for a living. He says the most effective teaching tool is the video camera. But he doesn't bring it out until his student is "hooked" on dancing, because prematurely showing them the video of them dancing just crushes them.
P.S. You can tell a ballet dancer just by the way they walk down the street. I once was at the airport and saw a lady unloading her bags. I just said Hi, you must be a ballet dancer! She replied nope, I'm an ice dancer, off to a competition. LOL.
What about walking down a busy construction site? The most charitable and correct interpretation I can think of is "I'm a professional. Seatbelts and OSHA destroy my productivity."
Coordinated people with some years of experience pay attention to the ground and overhead cranes and conveyor belts and survive walking through construction sites, mine sites, aviation hangars, cattle yards, musters, et al on a routine basis. I'm 60+ and have somehow navigated all those environs - including C for critical system control.
These are dangerous environments. No one denies this. It's still true that the longer you inhabit such spaces the safer your innate learned behaviour is.
C has seatbelts and OSHA - valgrind, et al tools abound for sanity checking.
Walter's GP statement is literally little more than "eventually you grow out of making the simple basic mistakes" - eventually, after some years of practice - which is a real problem with C: it takes time to stop making the basic mistakes. And after all that, there's always room, in C, in Rust, whatever, to make non-basic, non-obvious mistakes.
Correct, I guess. The number of relatively obvious mistakes should decrease with experience. And it stands to reason that eventually it settles near zero for some part of developer community.
How close to zero, and for which part of the community? Statistics are scarce.
> C has seatbelts and OSHA - valgrind, et al tools abound for sanity checking.
Optional tools with no general enforcement. That is more like elective vaccination or travel advisories. That is, no, no seatbelts and no OSHA.
Yes, and the kind of mistakes I make have changed. Now they are usually a failure to understand the problem correctly, or are simply typos.
> Optional tools with no general enforcement
That's right. The tools don't work unless you use them. With D the idea is to build the tool into the language itself. I once annoyed the Coverity folks by saying D's purpose is to put Coverity out of business.
D is designed with seatbelts (like array overflow protection), and they work. I regularly show how C could add them with only minor additions.
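This is not Walter's actual proposal, but as a rough illustration of the kind of "seatbelt" being talked about, here is how array overflow protection can be approximated in today's C with a fat pointer and a checked accessor (all names invented):

#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int    *ptr;
    size_t  len;
} int_slice;

static int slice_get(int_slice s, size_t i) {
    if (i >= s.len) {                      /* the "array overflow protection" */
        fprintf(stderr, "index %zu out of bounds (len %zu)\n", i, s.len);
        abort();
    }
    return s.ptr[i];
}

int main(void) {
    int data[3] = {1, 2, 3};
    int_slice s = { data, 3 };
    printf("%d\n", slice_get(s, 2));   /* ok    */
    printf("%d\n", slice_get(s, 3));   /* traps */
    return 0;
}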
Always pairing the creation of free() code and functions with every malloc() is one discipline.
Another, for a class of C utilities, is to never free() at all .. "compute anticipated resource limits early, malloc and open pipes in advance, process data stream and exit when done" works for a body of cases.
In large C projects of times past it's often the case that resource management, string handling, etc. are isolated and handled in dedicated subsections that resemble the kinds of safe handling methods baked into modern 'safe' languages.
And another - always use size_t for anything that is used as an index.
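A small sketch of the first and last of those disciplines together: every _create() written alongside its matching _destroy(), and size_t used for anything that indexes (the trace type here is invented for the example):

#include <stdlib.h>

typedef struct {
    double *samples;
    size_t  count;
} trace;

trace *trace_create(size_t count) {
    trace *t = malloc(sizeof *t);
    if (!t) return NULL;
    t->samples = calloc(count, sizeof *t->samples);
    if (!t->samples) { free(t); return NULL; }
    t->count = count;
    return t;
}

void trace_destroy(trace *t) {      /* written at the same time as _create() */
    if (!t) return;
    free(t->samples);
    free(t);
}

int main(void) {
    trace *t = trace_create(1024);
    if (!t) return 1;
    for (size_t i = 0; i < t->count; i++)   /* size_t for anything used as an index */
        t->samples[i] = 0.0;
    trace_destroy(t);
    return 0;
}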
Do you know how fucking obnoxious it is when 200 people like you come into every thread to tell 10 C or Javascript developers that they can't be trusted with the languages and environments they've been using for decades? There are MILLIONS of successful projects across those two languages, far more than Rust or Typescript. Get a fucking grip.
[1] https://www.reddit.com/r/Python/comments/1iqytkf/python_type...
Feel free to downvote any and all posts I make.
zig seems like someone wanted something between C and "the good parts" of C++, with the generations of cruft scrubbed out
rust seems like someone wanted a haskell-flavoured replacement for C++, and memory-safety
i would expect "zig for C++" to look more like D or Carbon than rust. and i'd expect "rust for C" to have memory safety and regions, and probably steal a few ocaml features
Not everyone likes RAII by itself. Allocating and deallocating things one at a time is not always efficient. That is not the only way to use RAII but it's the most prevalent way.
If you destroy an object that outlives the lexical scope it was created in, then you have to clean up manually.
comptime is a better version of C++ templates.
The power and expressiveness of the C++ compile-time capabilities are the one thing I strongly miss when using other languages. The amount of safety and conciseness those features enable makes not having them feel like a giant step backward. Honestly, if another systems language had something of similar capability I’d consider switching.
Out of curiosity, do you happen to have any examples of what you describe, where C++ is more powerful and expressive than Zig?
Talking about how much you can do with C++ templates made me think of that.
I like strong, featureful type systems and functional programming; Zig doesn't really fit the bill for me there. Rust is missing a few things I want (like higher-kinded types; GATs don't go far enough for me), but it's incredible how they've managed to build so many zero- and low-cost abstractions and make Rust feel like quite a high-level language sometimes.
If anything is the modern C++, it's D. It has template metaprogramming, while Rust has generics and macros.
Rust doesn't have constructors or classes or overloading. I believe its type system is based on Hindley–Milner like ML or Haskell, and traits are similar to Haskell type classes. Enums are like tagged union/sum types in functional programming, and Rust uses Error/Option types like them. I believe Rust macros were inspired by Scheme. And finally, a lot of what makes it unique was inspired by Cyclone (an obscure experimental language that tried to be a safer C) and other obscure research languages.
I guess Rust has RAII; that's one major similarity to C++. And there are probably some similarities in low-level memory access abstraction patterns.
But I'd describe Rust as more of an imperative, non-GC offshoot of the MLs than a modern C++ evolution.
I do think Zig is a worthy successor to C and isn’t trying to be C++. I programmed in C for a long time and Zig has a long list of sensible features I wish C had back then. If C had been like Zig I might never have left C.
Rust and C++ both use RAII, both have a strong emphasis on type safety, Rust just takes that to the extreme.
I would like to even hope both believe in 0 cost abstractions, which contrary to popular belief isn't no cost, but no cost over doing the same thing yourself.
In many cases it's not even 0 cost, it's negative cost since using declarative programming can allow the compiler to optimise in ways you don't know about.
Rust feels like taking an imperative language and adding ML features till you cannot anymore.
> There is no question that libstdc++'s implementation of <regex> is not well optimized. But there is more to it than that. It's not that the standard requirements inhibit optimizations so much as the standard requirements inhibit changes.
The answer's one comment expands on this, it sounds like they're not able to add a sophisticated optimising regex engine into the libstd++ shared library (i.e. non-inline) as this would be an ABI break.
Perhaps other implementations of the C++ standard library perform better.
Of course, non-std implementations of regexes in C++ don't have this issue. We're strictly talking about the standard library one.
I realize a lot of people don't want to use it; and that's fine, don't use it.
https://github.com/Michael-K-GH/RoseOS
https://vollragm.github.io/posts/kernelsharp/
https://www.microsoft.com/en-us/research/project/singularity... (asm, C, C++, largely C# (Sing#))
You're also still very much free to write either language purely, and "glue" them together easily using Cython.
Yeah, it has new features, but you're stuck working on a C89 codebase, good luck!
I don't know a great answer to that. I almost feel like languages should cut and run at some point and become a new thing.
The problem is that I want a language where things are safe by default. Many of the newer stuff added in C++ makes things safe, perhaps even to the level of Rust's guarantees -- but that's only if you use only these new things, and never -- even by accident -- use any of the older patterns.
I'd rather just learn a language without all that baggage.
For example: I still have no idea what clone() does. How does it interact with memory, on the heap or stack? Does it create a new instance, or just modify metadata of that object? Sometimes creating a new instance is a big no-no because it takes a lot of memory.
Same thing with "ownership transfer": is the variable freed at that moment, etc.?
I bet I could find answers on the internet, but Rust has like 500 std functions, so the task is tedious.
Regarding ownership transfer, it is even worse in C: what if you forget, after moving an object out of a variable, to set that variable to NULL, and then free that variable? That's a use after free. At least in C++ you have move semantics, although it is still error prone. In Rust it's a compiler error.
Copy and Clone are the same idea, and both are opt-in for your own types; by default the only options are move or reference for your own types. In C and C++ the default is copy, which again leads to use after free in the situations you complained about.
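Here is a sketch of that C pitfall with invented names: nothing in the language flags the forgotten NULL, and the equivalent move in Rust simply wouldn't compile.

/* Deliberately buggy: "moving" a pointer into a new owner but forgetting
   to NULL the old handle, then freeing through both. */
#include <stdlib.h>
#include <string.h>

struct message { char *body; };

static void message_take_body(struct message *dst, struct message *src) {
    dst->body = src->body;
    /* forgot: src->body = NULL;  -- the compiler says nothing */
}

int main(void) {
    struct message a = { malloc(16) };
    struct message b = { NULL };
    if (!a.body) return 1;
    strcpy(a.body, "hello");

    message_take_body(&b, &a);   /* ownership "moved" to b, but a still points there */

    free(a.body);                /* frees the buffer b is still using */
    free(b.body);                /* ...and now a double free          */
    return 0;
}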
I feel if these are your complaints you will actually benefit from spending some time in Rust.
If your preferred language is a higher-level language with GC that is reference-by-default, I encourage you to try any of the systems-level programming languages; the things you complain about are things it is important for a language to have constructs for. Reference semantics by default causes many issues, and becomes untenable in the parallel world we live in.
The std in C is simple and explicit. For example: I can make an educated guess at how memcpy() works by looking at its signature. It takes pointers to src and destination, and a size, so I can guess it does not allocate any new memory (or if it does, it has to be for some kind of optimization reason).
Another example is strstr(): it returns a pointer to a piece of memory I provided to it, so I can safely do some pointer math with the return value.
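For instance, that strstr() reasoning looks like this in practice (a trivial sketch):

#include <stdio.h>
#include <string.h>

int main(void) {
    const char *line = "key=value";
    const char *eq = strstr(line, "=");   /* points into `line`, no allocation */
    if (eq) {
        size_t key_len = (size_t)(eq - line);   /* pointer math: offset of '=' */
        printf("key is %.*s, value is %s\n", (int)key_len, line, eq + 1);
    }
    return 0;
}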
It's true that I do not spend much time in Rust, so maybe I'm missing some fundamental things. I guess my mistake is trying to apply my knowledge of C to Rust.
But still, it's kind of irritating not knowing (or having to guess) how a function works just by looking at its signature.
> piece of memory i provided to it, so i can safely do some pointer math with the return value.
Except pointer math is never going to be safe in C, a particular case might be bugfree but it is not safe. Moreover, nothing in the C language says that the provenance of the output pointer matches the provenance of the input pointer (it could take a reference to a pointer from some other thread while that other thread is freeing that memory). In rust you will pass a pointer and get a pointer back with the same lifetime, then you can safely use the pointer in that lifetime bound: the provenance is part of the signature. So in this case, you are incorrectly assuming things from the C signature while the corresponding rust signature definitively tells you.
So yeah, if you learn more about rust then you will see that in fact it tells you more than the corresponding C signatures.
Does the pointer provided by src get altered in any way? Might it be NULL after calling memcpy? What happens if I pass NULL to dst? Is size in bytes, or in units of whatever the pointer is pointing to?
The moment you need to read a man page to answer any of those, you might as well read the docs for clone and get all the information you would need.
> Another example is strstr(), it returns pointer to a piece of memory i provided to it
This is not at all clear from the signature. From the signature alone it might allocate and return a new string entirely that you would need to deallocate; the only way to know is to read the docs, which runs into the same problem again.
And again, there is no indication that the pointers passed into the function are not mutated in any way, other than convention and documentation.
Rust makes all of these things explicit with a compile error if you fail an invariant of the type systems.
Btw it's possible to encode the same invariants in C++, but it isn't the default most of the time.
I also like that C forces me to do stuff myself. It doesn't hide the magic and complexity. Also, my typical experience is that if you have to write your standard data structures on your own, you not only learn much more, but you also quickly see possibly performance improvements for your specific use case, that would have otherwise been hidden below several layers of library abstractions.
This has put me in a strange situation: everyone around me is always trying to use the latest feature of the newest C++ version, while I increasingly try to get rid of C++ features. A typical example I have encountered several times now is people using elaborate setups with std::string_view to avoid string copying, while exactly the same functionality could've been achieved with less code, using just a simple raw const char* pointer.
The only community I know that produces developers that know this kind of stuff intimately are console game programmers, and then only the people responsible for maintaining the FPS at 60. I expect the embedded community knows this too, but is too small for me to know many of them to get a sense of their general technical depth.
Actually Rust goes a long way towards a similar strategy, by avoiding inheritance and instead relying on structs and traits. That already avoids a lot of BS programming. I am very glad they threw out inheritance and classes. Great design decision right there. I wish FP was made more convenient/possible in Rust though.
Do `#include <gc.h>` then just use `GC_malloc()` instead of `malloc()` and never free. And add `-lgc` to linking. It's already there on most systems these days, lots of things use it.
You can add some efficiency by `GC_free()` in cases where you're really really sure, but it's entirely optional, and adds a lot of danger. Using `GC_malloc_atomic()` also adds efficiency, especially for large objects, if you know for sure there will be no pointers in that object (e.g. a string, buffer, image etc).
There are weak pointers if you need them. And you can add finalizers for those rare cases where you need to close a file or network connection or something when an object is GCd, rather than knowing programmatically when to do it.
But simply using `GC_malloc()` instead of `malloc()` gets you a long long way.
You can also build Boehm GC as a full transparent `malloc()` replacement, and replacing `operator new()` in C++ too.
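As a concrete sketch of that workflow, assuming the Boehm collector (libgc) and its headers are installed:

/* Build with:  cc demo.c -lgc */
#include <gc.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    GC_INIT();                                   /* recommended before first allocation */

    for (int i = 0; i < 100000; i++) {
        char *s = GC_malloc(64);                 /* instead of malloc()...     */
        if (!s) return 1;
        snprintf(s, 64, "object %d", i);         /* ...and never free it       */

        char *buf = GC_malloc_atomic(4096);      /* pointer-free data: cheaper to scan */
        if (buf) memset(buf, 0, 4096);
    }

    printf("heap size: %zu bytes\n", GC_get_heap_size());
    return 0;
}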
> Do `#include <gc.h>` then just use `GC_malloc()` instead of `malloc()` and never free.
Even more liberating (and dangerous!): do not even malloc, just use variable length-arrays:
void f(float *y, float *x, int n)
{
float t[n]; // temporary array, destroyed at the end of scope
...
}
This style forces you to alloc the memory at the outermost scope where it is visible, which is a nice thing in itself (even if you use malloc).
In practice, stack sizes used to be quite limited and system-dependent. A modern Linux system will give you several megabytes of stack by default (128MB in my case, just checked on my Linux Mint 22 Wilma). You can check it using "ulimit -a", and you can change it for your child processes using "ulimit -s SIZE_IN_KB". This is useful for your personal usage, but may pose problems when distributing your program, as you'll need to set up the environment where your program runs, which may be difficult or impossible. There's no ergonomic way to do that from inside your C program, that I know of.
AFAIK Ada is typically more flexible, but that has to do with the language actually giving you enough facilities to avoid heap allocations in more cases - e.g. you can not only pass VLAs into a function in Ada, but also return one from a function. So it becomes idiomatic, and compilers then have to support this (usually by maintaining a second "large" stack).
It is easy enough to increase, but it does add friction to using the software as it violates the default stack size limit on most linux installs. Not even sure why stack ulimit is a thing anymore, who cares if the data is on the stack vs the heap?
Similar to what Ada does with access types which are lexically scoped.
Which just punts the problem from a mature and tested runtime library to some code you just make up on the spot.
The problem is that once it's there, people start using it as the proverbial hammer, and everything looks like a nail even if it isn't.
Note though that "allocate all of memory into a buffer at startup" is a lot more viable if you scope it not to the start of the app, but to the entrypoint of some code that needs to make a complicated calculation. It's actually not all that uncommon to need something heap-like to store temporary data as you compute - e.g. a list or map to cache intermediary results - but which shouldn't outlive the computation. Ada access types give you exactly that - declare them inside the top-level function that's your entrypoint, allocate as needed in nested functions as they get called, and know that it'll all be cleaned up once the top-level function returns.
And that aside, there are still many apps that are more like "serve a web page". Most console apps are like that. Many daemons are, too.
My preferred solution is definitely to use the GC. With some help if you want. You can GC the nursery each time around the event loop. You can create and destroy arenas.
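A toy bump arena along those "create and destroy arenas" lines, as a sketch (names invented): allocations are O(1) pointer bumps, and everything allocated during one request or tick dies with a single free.

#include <stdlib.h>
#include <stdio.h>

typedef struct {
    char  *base;
    size_t used;
    size_t cap;
} arena;

static arena arena_create(size_t cap) {
    arena a = { malloc(cap), 0, cap };
    return a;
}

static void *arena_alloc(arena *a, size_t n) {
    n = (n + 15) & ~(size_t)15;          /* keep allocations 16-byte aligned */
    if (!a->base || a->used + n > a->cap)
        return NULL;
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

static void arena_destroy(arena *a) {    /* one free() for the whole request/tick */
    free(a->base);
    a->base = NULL;
}

int main(void) {
    arena a = arena_create(1 << 20);
    int *counts = arena_alloc(&a, 100 * sizeof *counts);
    char *name = arena_alloc(&a, 32);
    if (counts && name)
        printf("scratch allocations live until arena_destroy()\n");
    arena_destroy(&a);
    return 0;
}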
I think the only other language that has a similar property is Zig.
> Odin is a manual memory management based language. This means that Odin programmers must manage their own memory, allocations, and tracking. To aid with memory management, Odin has huge support for custom allocators, especially through the implicit context system.
https://odin-lang.org/docs/overview/#implicit-context-system
Not that I actually think this is a good idea (I think the explicit style of Zig is better), but it is an idea nonetheless.
At least thinking in Scala as a base syntax here, it would mean:
def foo(name: String)(implicit allocator: MemoryAllocator): StringBuf = {
val buf = allocator.new(100, String)
buf.append("Hello ")
buf.append(name)
buf
}
println(foo("John")) // implicit allocator from context, default to GC, no need to free
implicit val allocator: MemoryAllocator = allocators.NewAllocator()
println(foo("Alice")) // another implicit allocator, but now using the manual allocator instead of GC
defer allocator.free()
val explicitAllocator = allocators.NewArenaAllocator()
foo("Marie")(explicitAllocator) // explicit allocator only for this call
defer explicitAllocator.free()
I don't think there is any language right now that has this focus, since Odin seems to be focused on manual allocation.
Doing that means that I lose some speed and I will have to wait for GC collection.
Then why shouldn't I use C# which is more productive and has libraries and frameworks that comes with batteries included that help me build functionality fast.
I thought that one of the main points of using C is speed.
Malloc() and free() aren't free, and in particular free() does a lot more bookkeeping and takes more time than people realise.
I never liked that you have to choose between this and C++ though. C could use some automation, but that's C++ in "C with classes" mode. The sad thing is, you can't convince other people to use this mode, so all you have is either raw C interfaces which you have to wrap yourself, or C++ interfaces which require galaxy brain to fully grasp.
I remember growing really tired of "add member - add initializer - add finalizer - sweep and recheck finalizers" loop. Or calculating lifetime orders in your mind. If you ask which single word my mind associates with C, it will be "routine".
C++ would be amazing if its culture wasn't so obsessed with needless complexity. We had a local joke back then: every C++ programmer writes heaps of C++ code to pretend that the final page of code is not C++.
The most inconvenient aspect for me is manual memory management, but it’s not too bad as long as you’re not dealing with text or complex data structures.
In my experience, whether it's software architecture or programming language design, it's easy to make things complicated, but it takes vision and discipline to keep them simple.
One of these things is not like the others! Python's complexity has been increasing rapidly (e.g. walrus operator, match statement, increasingly baroque type hints) - has this put you off the language at all?
I like functional programming and procedural programming. Fits better to how I think about code. Code is something that takes data and spits data. Code shouldn't be forced into emulating some real life concepts.
C++ can avoid string copies by passing `const string&` instead of by value. Presumably you're also passing around a subset of the string, and you're doing bounds and null checks, e.g.
const char* Buf = "Hello World" ;
print_hello(Buf, 6);
string_view is just a char* + len, which is what you should be passing around anyway.
Funnily enough, the problem with string_view is actually C APIs, and this problem exists in C. Here's a perfect example (I'm using fopen, but pretty much every C API has this problem):
FILE* open_file_from_substr(const char* start, int len)
{
    (void)len;                  /* there is no way to hand fopen a length...  */
    return fopen(start, "r");   /* ...it wants a NUL-terminated path          */
}

void open_files()
{
    const char* buf = "file1.txt file2.txt file3.txt";
    for (int i = 0; i < 30; i += 10)  /* three 9-char names, space separated */
    {
        open_file_from_substr(buf + i, 9); // nope.
    }
}
> When I develop methods in pure C, I always enjoy that I can concentrate 100% on algorithmic aspects instead of architectural decisions which I only have to decide on because of the complexity of the language
I agree this is true when you develop _methods_, but I think this falls apart when you design programs. I find that you spend as much time thinking about memory management and pointer safety as you do algorithmic aspects, and not in a good way. Meanwhile, with C++, Go and Rust, I think about lifetimes, ownership and data flow.
Then I try actually going through the motions of writing a production-grade application in C and I realise why I left it behind all those years ago. There's just so much stuff one has to do on one's own, with no support from the computer. So many things that one has to get just right for it to work across edge cases and in the face of adversarial users.
If I had to pick up a low-level language today, it'd likely be Ada. Similar to C, but with much more help from the compiler with all sorts of things.
So, now, after a long time, Ada is starting to catch on???
When Ada was first announced, back then, my favorite language was PL/I, mostly on CP67/CMS, i.e., IBM's first effort at interactive computing with a virtual machine on an IBM 360 instruction set. Wrote a little code to illustrate digital Fourier calculations, digital filtering, and power spectral estimation (statistics from the book by Blackman and Tukey). Showed the work to a Navy guy at the JHU/APL and, thus, got "sole source" on a bid for some such software. Later wrote some more PL/I to have 'compatible' replacements for three of the routines in the IBM SSP (scientific subroutine package) -- converted 2 from O(n^2) to O(n log(n)) and the third got better numerical accuracy from some Ford and Fulkerson work. Then wrote some code for the first fleet scheduling at FedEx -- the BOD had been worried that the scheduling would be too difficult, some equity funding was at stake, and my code satisfied the BOD, opened the funding, and saved FedEx. Later wrote some code that saved a big part of IBM's AI software YES/L1. Gee, liked PL/I!
When I started on the FedEx code, was still at Georgetown (teaching computing in the business school and working in the computer center) and in my appartment. So, called the local IBM office and ordered the PL/I Reference, Program Guide, and Execution Logic manuals. Soon they arrived, for free, via a local IBM sales rep highly curious why someone would want those manuals -- sign of something big?
Now? Microsoft's .NET. On Windows, why not??
Where C was clearly designed to be a practical language with feedback from implementing an operating system in C. Ada lacked that kind of practical experience. And it shows.
I don't know anything about modern day Ada, but I can see why it didn't catch on in the Unix world.
You might have heard about the SPARK variant of Ada. I recall reading in an article many years ago that the original version of SPARK was based on Ada83 because it is a very safe language with a lot less undefined behaviors, which is key to trying to statically prove the correctness of a program.
I'm curious about this list, because it definitely doesn't seem that way these days. It'd be interesting to see how many of these are still possible now.
If a variable is declared and not given an initial value then great care must be taken not to use the undefined value of the variable until one has been properly given to it. If a program does use the undefined value in an uninitialised variable, its behaviour will be unpredictable; the program is said to be erroneous.
Money and hardware requirements.
Finally there is a mature open source compiler, and our machines are light years beyond those beefy workstations required for Ada compilers in the 1980's.
Related-- I'm curious what percentage of Rust newbies "fighting the borrow checker" is due to the compiler being insufficiently sophisticated vs. the newbie not realizing they're trying to get Rust to compile a memory error.
This was made all the worse by the fact that I frequently, eventually, succeeded in "winning". I would write unnecessary and unprofiled "micro-optimizations" that I was confident were safe and would remain safe in Rust, that I'd never dare try to maintain in C++.
Eventually I mellowed out and started .clone()ing when I would deep copy in C++. Thus ended my fight with the borrow checker.
For example some tree structures are famously PITA in Rust. Yes, possible, but PITA nonetheless.
This must have been a very very long time ago, with optimizing compilers you don't really know even if they will emit any instructions.
I wouldn't dare guess what a compiler does to a RISC target.
(But yes, this was back in the early-to-mid 2000s I think. Whether that is a long time ago I don't know.)
[1]: https://entropicthoughts.com/python-programmers-experience
Just let your C(++) compiler generate assembly on an ARM-64 platform, like Apple Silicon or iOS. Fasten your seat belt.
C source files for demoscene and games were glorified macro assemblers full of inline assembly.
There's a big cloud of hype at the bleeding edge, but if you dare to look beyond that cloud, there are many boring and well matured technologies doing fine.
There are a lot of things that are so USEFUL, but maddening.
C is one. make is another.
They serve a really valid purpose, but because they are stable, they have also not evolved at all.
From your Ada example, I love package and package body. C has function prototypes, but they are almost meaningless.
Everyone seems to think C++ is C grown up, but I don't really like it. It is more like systemd: people accept it but don't love it.
Is that not the problem rust was created to solve?
For me the main benefit of C and C++ is the availability of excellent and often irreplaceable libraries. With a little bridging work, these tend to just work with Swift.
def route = fn (request) {
if (request.method == GET ||
request.method == HEAD) do
locale = "en"
slash = if Str.ends_with?(request.url, "/") do "" else "/" end
path_html = "./pages#{request.url}#{slash}index.#{locale}.html"
if File.exists?(path_html) do
show_html(path_html, request.url)
else
path_md = "./pages#{request.url}#{slash}index.#{locale}.md"
if File.exists?(path_md) do
show_md(path_md, request.url)
else
path_md = "./pages#{request.url}.#{locale}.md"
if File.exists?(path_md) do
show_md(path_md, request.url)
end
end
end
end
}
[1] https://git.kmx.io/kc3-lang/kc3/_tree/master/httpd/page/app/...
Otherwise I use Go if a GC is acceptable and I want a simple language, or Rust if I really need performance and safety.
It's difficult because I do believe there's an aesthetic appeal in doing certain one-off projects in C: compiled size, speed of compilation, the sense of accomplishment, etc. but a lot of it is just tedious grunt work.
When I simplify and think in terms of streams, it starts getting nice and tidy.
Yes it is unsafe and you can do absurd things. But it also doesn't get in the way of just doing what you want to do.
With C segmentation fault is not always easy to pinpoint.
However the tooling for C, with sone if the IDEs out there you can set breakpoints/ walk through the code in a debugger, spot more errors during compile time.
There is a debugger included with Perl, but after trying to use it a few times I have given up on it.
Give me C and Visual Studio when I need debugging.
On the positive side, shooting yourself in the foot with C is a common occurrence.
I have never had a segmentation fault in Perl. Nor have I had any problems managing the memory, the garbage collector appears to work well. (at least for my needs)
There's your clue right there...
During Unix's early days, AT&T was still under this decree, meaning that it would not sell Unix like how competitors sold their operating systems. However, AT&T licensed Unix, including its source code, to universities for a nominal fee that covered the cost of media and distribution. UC Berkeley was one of the universities that purchased a Unix license, and researchers there started making additions to AT&T Unix which were distributed under the name Berkeley Software Distribution (this is where BSD came from). There is also a famous book known as The Lions' Book (https://en.wikipedia.org/wiki/A_Commentary_on_the_UNIX_Opera...) that those with access to a Unix license could read to study Unix. Bootleg copies of this book were widely circulated. The fact that university students, researchers, and professors could get access to an operating system (source code included) helped fuel the adoption of Unix, and by extension C.
When the Bell System was broken up in 1984, AT&T still retained Bell Labs and Unix. The breakup of the Bell System also meant that AT&T was no longer subject to the 1956 consent decree, and thus AT&T started marketing and selling Unix as a commercial product. Licensing fees skyrocketed, which led to an effort by BSD developers to replace AT&T code with open-source code, culminating with 4.3BSD Net/2, which is the ancestor of modern BSDs (FreeBSD, NetBSD, OpenBSD). The mid-1980s also saw the Minix and GNU projects. Finally, a certain undergraduate student named Linus Torvalds started work on his kernel in the early 1990s when he was frustrated with how Minix did not take full advantage of his Intel 386 hardware.
Had AT&T never been subject to the 1956 consent decree, it's likely that Unix might not have been widely adopted since AT&T probably wouldn't have granted generous licensing terms to universities.
After I finished I was puzzled, "what is the author trying to communicate to the reader here?"
As near as I can determine, enough people weren't using the author's program/utility because it was written in a language that hasn't been blessed by the crowd? It is hinted at that there might be issues involving memory consumption.
The author does not write lessons learned or share statistics of user uptake after the re-write.
No new functionality was gained, presumably this exercise was done as practice reps because the author could do it and had time.
No argument was made that the author has seen the light and will write only C from this point on.
Then, I decided to move to Common Lisp and start gaining less and less money
Then, I decided to move to C and got Nerd Sniped"
Well, at least he seems happier xD
C is cool though
What is your killer app? What does CL have to do with no one running it? What problem did you have with garbage collectors? Why is C the only option? Are you sure all those RCEs are because of VMs and containers and not because it's all written in C? "There are no security implications of running KC3 code" - are you sure?
Could he have jumped right into C and had amazing results, if not for the Journey learning Lisp and changing how he thought of programming.
Maybe learning Lisp is how to learn to program. Then other languages become better by virtue of how someone structures the logic.
A proper virtual machine is extremely difficult to break out of (but it can still happen [1]). Containers are a lot easier to break out of. If virtual machines were more efficient in either CPU or RAM, I would want to use them more, but it's the worst of both.
[1] https://www.zerodayinitiative.com/advisories/ZDI-23-982/
Assuming everyone has a CL installed is going to limit the audience pretty drastically.
And what about dependencies? Assume they have quicklisp installed as well?
Like I said, I love Common Lisp, but every language is some kind of compromise.
Your brain works a certain way, but you're forced to evolve into the nightmare half-done complex stacks we run these days, and it's just not the same job any more.
Just pick the right projects and the language shines.
C++'s stdlib contains a lot of convenient features, writing them myself and pretending they aren't there is very difficult.
Disabling exceptions is possible, but will come back to bite you the second you want to pull in external code.
You also lose some of the flexibility of C, unions become more complicated, struct offsets/C style polymorphism isn't even possible if I remember correctly.
I love the idea though :)
I've never understood the motivation behind writing something in C++, but avoiding the standard library. Sure, it's possible to do, but to me, they are inseparable. The basic data types and algorithms provided by the standard library are major reasons to choose the language. They are relatively lightweight and memory-efficient. They are easy to include and link into your program. They are well understood by other C++ programmers--no training required. Throughout my career, I've had to work in places where they had a "No Standard Library" rule, but that just meant they implemented their own, and in all cases the custom library was worse. (Also, none of the companies could articulate a reason for why they chose to re-implement the standard library poorly--It was always blamed on some graybeard who left the company decades ago.)
Choosing C++ without the standard library seems like going skiing, but deliberately using only one ski.
You can productively use C++ as C-with-classes (and templates, and namespaces, etc.) without depending on the library. That leaves you no worse off than rolling your own support code in plain C.
Plenty of code bases also predate it, when I started coding C++ in 1995 most people were still rolling their own.
The way he writes about his work in this article, I think he's a true master. Very impressive to see people with such passion and skill.
we also had to build a CPU from discrete bit-slice components and then program it. One of the most time intensive courses I took at CMU. Do computer engineers still have to do that?
I would certainly encourage all computer engineers, and perhaps even software engineers, to learn the "full stack".
But as to programming and C, I haven't done that in almost 30 years. It would be an interesting experiment to see how much of that skill if any I still possess.
I am fast becoming a Zig zealot.
What I've discovered is that while it does regularize some of the syntax of C, the really noticeable thing about Zig is that it feels like C with all the stuff I (and everyone else) always end up building on my own built into the language: various allocators, error types, some basic safety guardrails, and so forth.
You can get clever with it if you want -- comptime is very, very powerful -- but it doesn't have strong opinions about how clever you should be. And as with C, you end up using most of the language most of the time.
I don't know if this is the actual criterion for feature inclusion and exclusion among the Zig devs, but it feels something like "Is this in C, or do C hackers regularly create this because C doesn't have it?" Allocators? Yes. Error unions? Yes. Pattern matching facilities? Not so much. ADTs? Uh, maybe really stupid ones? Generics, eh . . . sometimes people hack that together when it feels really necessary, but mostly they don't.
Something like this, it seems to me, results in features Zig has, features Zig will never have, and features that are enabled by comptime. And it's keeping the language small, elegant, and practical. I'm a big time C fan, and I love it.
forth, you say?
They do (in Debug and ReleaseSafe modes)
For a C programmer, learning and becoming productive in Zig should be a much easier proposition than doing the same for Rust. You're not going to get the same safety guarantees you'd get with Rust, but the world is full of trade offs, and this is just one of them.
Rust is double expensive in this case. You have to memorize the borrow checker and be responsible for all the potential undefined behavior with unsafe code.
But I am not a super human systems programmer. Perhaps if I was the calculus would change. But personally when I have to drop down below a GC language, it is pretty close to the hardware.
Zig simply solves more of my personal pain points... but if rust matures in ways that help those I'll consider it again.
Correct me if I am wrong, but Rust at least has a borrow checker while in C (and Zig) one has to do the borrow checking in their head. If you read a documentation for C libraries, some of them mention things like "caller must free this memory" and others don't specify anything and you have to go to the source code to find out who is responsible for freeing the memory.
As I have always bought into Dennis Ritchie's loop programming concepts, iterator invalidation hasn't been a problem.
Zig has defer, which makes it trivial to place cleanup right next to the allocation, and the resource is released when the scope exits.
Since building a linked list, dealing with bit fields, ARM peripherals, etc. all require disabling the Rust borrow checker's rules, you don't benefit from them at all in those cases, IMHO.
It is horses for courses, and the Rust project admits they chose a very specific horse.
C is what it is, and people who did assembly on a PDP7 probably know where a lot of that is from.
I personally prefer Zig to C... but I will use C when it makes the task easier.
C and Zig do not even let you specify who is responsible for freeing an allocation, and every time you use a library you need to either search the docs or (more often) reverse-engineer the code. This is probably why writing or checking C code is so slow. I end up adding annotations to function prototypes, but nobody is going to validate them, so they only serve as documentation.
C and Zig have exactly the same facility to let you specify who's responsible for an allocation - owning vs non-owning types for resources; it's just that you have to write them by hand, and for types that are owning, destructor calls must be made explicitly (but still, the pattern of "if I got an instance of OwnedFoo, I need to call destroy() on it" is much more straightforward than chasing the docs for each function).
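A minimal C sketch of that hand-written convention (all names hypothetical): the owning type carries the obligation to call an explicit destroy function, and the non-owning view type signals "don't free this."

#include <stdlib.h>
#include <string.h>

typedef struct {            /* owning: whoever holds this must call foo_destroy() */
    char *name;             /* heap-allocated, owned by this struct */
} owned_foo;

typedef struct {            /* non-owning view: never free through this */
    const char *name;
} foo_view;

static owned_foo *foo_create(const char *name) {
    owned_foo *f = malloc(sizeof *f);
    if (!f) return NULL;
    f->name = strdup(name);
    if (!f->name) { free(f); return NULL; }
    return f;
}

static foo_view foo_borrow(const owned_foo *f) {  /* caller must NOT free */
    foo_view v = { f->name };
    return v;
}

static void foo_destroy(owned_foo *f) {           /* the explicit "destructor" */
    if (!f) return;
    free(f->name);
    free(f);
}

The compiler won't enforce any of it, which is the parent's point, but the naming convention at least makes the contract visible at every call site.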
I think it's a pretty normal pattern I've seen (and been through) of learning-oriented development rather than thoughtful engineering.
But personally, AI coding has pushed me full circle back to Ruby. Who wants to mentally interpret generated C code, which could have optimisations and could also have fancy-looking bugs? Why would anyone want to try disambiguating those when they could just read Ruby like English?
This happened to me too. I’m using Python in a project right now purely because it’s easier for the AI to generate and easier for me to verify. AI coding saves me a lot of time, but the code is such low quality there’s no way I’d ever trust it to generate C.
Given that low quality code is perhaps the biggest time-sink relating to our work, I'm struggling to reconcile these statements?
Also there’s often a spectrum of importance even within a project, eg maybe some internal tools aren’t so important vs a user facing thing. Complexity also varies: AI is pretty good at simple CRUD endpoints, and it’s a lot faster than me at writing HTML/CSS UI’s (ie the layout and styling, without the logic).
If you can isolate the AI to code that doesn't need to be high quality, and write the code that does yourself, it can be a big win. Or if you use AI for an MVP that will be incrementally replaced by higher quality code if the MVP succeeds, it can be quite valuable, since it allows you to test ideas quicker.
I personally find it to be a big win, even though I also spend a lot of time fighting the AI. But I wouldn’t want to build on top of AI code without cleaning it up myself.
There are also some tasks I’ve learned to just do myself: eg I do not let the AI decide my data model/database schema. Data is too important to leave it up to an AI to decide. Also outside of simple CRUD operations, it generates quite inefficient database querying so if it’s on a critical path, perhaps write the queries yourself.
Because they're implementing Ruby, for example?
Even in ecosystems where you don't have a way to opt out of GC, there are strategies for dramatically minimizing the impact. Having some aspects of the product rely on it should not be fatal unless we are being a bit hyperbolic about the purity of the tech. If your allocation rate is <1 megabyte per minute and you are expressly forcing GC to run along the grain of the allocations (i.e., after each frame or sim tick), then it is exceedingly unlikely you will experience the canonical multiple-second, stop-the-world GC boogeyman.
Additionally, I don't think this is a fair take on how an RTS game would be engineered in practice. Each unit should be part of a big contiguous array in memory somewhere. The collection of garbage can be made largely irrelevant when managing game state if you are willing to employ approximately the same approach you'd use in many other languages. Nothing stops you from newing up a 2 GB byte array and handing out spans to consumers just because you have GC turned on.
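A sketch of that layout (hypothetical names, written in C only because that's this thread's lingua franca): all units live in one preallocated contiguous block, and consumers get indices into it, so per-unit allocation -- and therefore per-unit garbage -- never happens.

#include <stddef.h>
#include <stdint.h>

#define MAX_UNITS 4096

typedef struct {
    float   x, y;
    int32_t hp;
    uint8_t alive;
} unit;

typedef struct {
    unit   units[MAX_UNITS];  /* one contiguous block, allocated once up front */
    size_t count;
} unit_pool;

/* Hand out a slot in the pool; no per-unit heap allocation ever occurs. */
static int unit_spawn(unit_pool *p, float x, float y) {
    if (p->count >= MAX_UNITS) return -1;
    unit *u = &p->units[p->count];
    u->x = x; u->y = y; u->hp = 100; u->alive = 1;
    return (int)p->count++;
}

In a GC runtime the same idea is one big array or buffer of plain value records created at startup, which the collector has very little work to do with because nothing in the hot path allocates.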
I looked at C++, but it seems that despite being more feature-rich, it also cannot auto-generate functions/methods for working with structures?
Also, returning errors as dynamically allocated strings (and freeing them) makes functions bloated; a sketch of that pattern follows after this comment.
Also Gnome infrastructure (GObject, GTK and friends) requires writing so much code that I feel sorry for people writing Gnome.
Also, how do you install dependencies in C? How do you lock a specific version (or range of versions) of a dependency with specific build options, for example?
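For the error-string point above, here is a sketch (hypothetical names) of the pattern being complained about: the function allocates a message, the convention says the caller must free it, and every call site grows the corresponding cleanup code.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Returns 0 on success. On failure returns -1 and sets *err to a
   heap-allocated message that the CALLER must free(). */
static int parse_port(const char *s, int *out, char **err) {
    char *end = NULL;
    long v = strtol(s, &end, 10);
    if (*s == '\0' || *end != '\0' || v < 1 || v > 65535) {
        size_t n = strlen(s) + 32;
        *err = malloc(n);
        if (*err) snprintf(*err, n, "invalid port: '%s'", s);
        return -1;
    }
    *out = (int)v;
    return 0;
}

int main(void) {
    int port;
    char *err = NULL;
    if (parse_port("http", &port, &err) != 0) {
        fprintf(stderr, "%s\n", err ? err : "error");
        free(err);   /* the freeing burden the comment refers to */
        return 1;
    }
    printf("port = %d\n", port);
    return 0;
}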
If you try to write the same complicated mess in C as you would in any other language it's going to hurt.
Not having a package manager can be a blessing; it depends on your perspective.
Not that I write bug free software by any means; no one does, though some like to pretend.
I do not want to be rude, but C has some error-prone syntax: if you forget a *, you will be in trouble. If you make a one-byte offset error on an array on the stack, you get erratic behavior -- a core dump if you are lucky (a sketch after this comment shows how little it takes).
Buffer overflows also pose security risks.
try...catch is not present in C, and it is one of the most powerful additions of C++ for code structuring.
Thread management and async programming come without support from the language (which is fine, but if you look at Erlang or Java, they have far more support, e.g. thread monitors).
That said, there are very high quality libraries in C (pthreads, memory management and protection, lib-eventio, etc.) which can overcome most of these limits, but... it is still error-prone.
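A tiny sketch of the off-by-one mentioned above: the loop writes one element past the end of a stack array, and depending on stack layout it may silently corrupt a neighbouring variable or crash.

#include <stdio.h>

int main(void) {
    int guard = 42;
    int buf[4];
    for (int i = 0; i <= 4; i++)    /* BUG: should be i < 4 */
        buf[i] = i;
    printf("guard = %d\n", guard);  /* may print 4 instead of 42, or crash */
    return 0;
}

Compiling with -fsanitize=address turns this from erratic behavior into an immediate, readable diagnostic.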
I refuse to touch anything else, but I keep an eye on the new languages that are being worked on, Zig for example.
If only a tiny fraction of the resources (effort, research, time, money, etc.) poured into ML/AI were directed toward the best possible design of a high-performance GC, it would make the software world a much better place. The fact that we have very few books dedicated to GC design and thousands of books now dedicated to AI/ML is quite telling.
For a real-world example and analogy: the automotive industry dedicated its resources to designing high-performance automatic transmissions, and now automatics are faster than manuals for rally and racing. For normal driving, automatic is the default and what's readily available; most cars are no longer even sold in a manual transmission version.
> Linux is written in C, OpenBSD is written in C, GTK+ is object-oriented pure C, GNOME is written in C. Most of the Linux desktop apps are actually written in plain old C. So why try harder ? I know C
C is the lingua franca of all other programming languages, including Python, Julia, Rust, etc. Period.
The D language has already bitten the bullet and made C built-in by natively supporting it. Genius.
D also has GC by default for more sane and intuitive programming; opting out is your call. It is also one of the languages with the fastest compilation and execution times in existence.
From the KC3 language website, "KC3 is a programming language with meta-programmation and a graph database embedded into the language."
Why would you want a graph database embedded into the language? Just support associative arrays as a built-in, since they have been proven to be the basis of all the common data representations: spreadsheets, SQL, NoSQL, matrices, graph databases, etc.
[1] Associative Array Model of SQL, NoSQL, and NewSQL Databases:
https://arxiv.org/pdf/1606.05797
[2] Mathematics of Big Data: Spreadsheets, Databases, Matrices, and Graphs:
https://mitpress.mit.edu/9780262038393/mathematics-of-big-da...
GLib is a batteries-included C library I really like. Does anyone have any others they prefer?
Yes, my life is still full of segfaults -- as many segfaults as ignorance and impatience allow. So it's delightful, because it helps me overcome those two things, slowly and steadily =)
- Get the `cdecl` tool to build intuition about function signatures. What does "int (*(*foo)(void))[3]" mean? (It's unpacked in the sketch after this list.)
- Write it yourself.
- Be disciplined. Develop good hygiene with compiler flags, memory/address checks, and even fuzzing.
- Read good source code such as the linux kernel. This is an amusing header file from the git source code that defines some banned functions. This is wisdom if you choose to follow it: https://github.com/git/git/blob/master/banned.h
- Push the language to its limits. Play with memory and data structures. Inspect everything. The book "Data-Oriented Design" by Richard Fabian is a great one to explore as well. It's about organizing your data for efficient processing.
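For the `cdecl` bullet: that declaration reads as "foo is a pointer to a function taking no arguments and returning a pointer to an array of 3 ints." Here is a small, admittedly contrived C sketch (hypothetical names) of it in use:

#include <stdio.h>

static int storage[3] = { 1, 2, 3 };

/* A function returning a pointer to an array of 3 ints. */
static int (*get_triplet(void))[3] {
    return &storage;
}

int main(void) {
    int (*(*foo)(void))[3] = get_triplet;  /* the declaration in question */
    int (*arr)[3] = foo();
    printf("%d %d %d\n", (*arr)[0], (*arr)[1], (*arr)[2]);
    return 0;
}

For the hygiene bullet, a baseline such as gcc -Wall -Wextra -g -fsanitize=address,undefined plus occasional fuzzing catches a surprising amount.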
¹https://www.amazon.com/Programming-Language-2nd-Brian-Kernig...
That book is legendary only in the magnitude of financial damages it (directly and indirectly) enabled, caused by people who read it and thought that they could now write C.
I worked through this and felt well-prepared to actually use C in anger when I had to.
The last bit is also true for K&R The C Programming Language (avoid at all costs).
You're better off with:
- Modern C (whatever edition is current)
- Seacord's Effective C: Introduction to professional C programming
- C Programming: A Modern Approach, 2nd edition (bible)
a bit LOL, isn't it?
Also the part about Terraform, Ansible, and the other stuff.
Your work is genius! I hope KC3 can be adopted widely, there is great potential.
Archived at https://archive.is/zIZ8S
I wanted to do this on Linux, because my main laptop is a Linux machine after my children confiscated my Windows laptop to play Minecraft with the only decent GPU in the house.
And I just couldn't get past the tooling. I could not get through to anything that felt like a build setup that I'd be able to replicate on my own.
On Windows, using Visual Studio, it's not that bad. It's a little annoying compared to a .NET project, and there are a lot more settings to worry about, but at the end of the day VS keeps the two from feeling all that different from each other.
I actually didn't understand that until I tried to write C++ on Linux. I thought C++ on Windows was worlds different than C#. But now I've seen the light.
I honestly don't know how people do development with it on Linux. Make, CMake, all of that stuff is so bad.
IDK, maybe someone will come along and tell me, "oh, no, do this and you'll have no problems". I hope so. But without that, what a disgusting waste of time C and C++ is on Linux.
I just don't understand why people want to live like that in C++ land. It almost feels like masochism. Especially considering VC++ basically "just works" in comparison. Why do Linux users hate DevEx so much?
I believe it. And I'd love to see it and hack on it, if it were open source.
This whole kc3 thing looks pretty interesting to me. I agree with the premise. It's really just another super-C that's not C++, but that's a pretty good idea for a lot of things because the C ABI is just so omnipresent.
Has it been fuzzed? Have you had someone who is very good at finding bugs in C code look at it carefully? It is understandable if the answer to one or both is "no". But we should be careful about the claims we make about code.