Defer available in gcc and clang (gustedt.wordpress.com)
205 points by r4um 4 days ago | 14 comments
kjgkjhfkjf
7 hours ago
[-]
The article is a bit dense, but what it's announcing is effectively golang's `defer` (with extra braces) or a limited form of C++'s RAII (with much less boilerplate).

Both RAII and `defer` have proven to be highly useful in real-world code. This seems like a good addition to the C language that I hope makes it into the standard.

reply
Zambyte
7 hours ago
[-]
Probably closer to defer in Zig than in Go, I would imagine. Defer in Go executes when the function deferred within returns; defer in Zig executes when the scope deferred within exits.
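
Roughly, assuming the brace syntax from the C defer proposal, the scope-based behavior looks like this sketch:

    #include <stdlib.h>

    void f(void) {
        for (int i = 0; i < 3; i++) {
            char *p = malloc(16);
            if (!p) continue;
            defer { free(p); }   /* runs when this iteration's block exits */
            /* ... use p ... */
        }   /* unlike Go's defer, p is freed at the end of every iteration,
               not accumulated until the function returns */
    }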
reply
CodesInChaos
3 hours ago
[-]
I wonder what the thought process of the Go designers was when coming up with that approach. Function scope is rarely what a user needs, has major pitfalls, and is more complex to implement in the compiler (need to append to an unbounded list).
reply
0xjnml
56 minutes ago
[-]
> I wonder what the thought process of the Go designers was when coming up with that approach.

Sometimes we need block scoped cleanup, other times we need the function one.

You can turn the function scoped defer into a block scoped defer in a function literal.

AFAICT, you cannot turn a block scoped defer into the function one.

So I think the choice was obvious - go with the more general(izable) variant. Picking the alternative, which can do only half of the job, would be IMO a mistake.

reply
mort96
3 hours ago
[-]
I hate that you can't call defer in a loop.

I hate even more that you can call defer in a loop, and it will appear to work, as long as the loop has relatively few iterations, and is just silently massively wasteful.

reply
usrnm
2 hours ago
[-]
The Go way of dealing with it is wrapping the block with your defers in a lambda. Looks weird at first, but you can get used to it.
reply
mort96
2 hours ago
[-]
I know. Or in some cases, you can put the loop body in a dedicated function. There are workarounds. It's just bad that the wrong way a) is the most obvious way, and b) is silently wrong in such a way that it appears to work during testing, often becoming a problem only when confronted with real-world data, and often surfacing only as being a hard-to-debug performance or resource usage issue.
reply
rwmj
4 hours ago
[-]
This is the crucial difference. Scope-based is much better.

By the way, GCC and Clang have __attribute__((cleanup)) (which is the same, scope-based clean-up) and have done for over a decade, and this is widely used in open source projects now.
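
For example (a minimal sketch; the cleanup function receives a pointer to the variable that is going out of scope):

    #include <stdlib.h>

    static void free_charp(char **p) { free(*p); }   /* free(NULL) is a no-op */

    void demo(void) {
        __attribute__((cleanup(free_charp))) char *buf = malloc(64);
        /* ... use buf ... */
    }   /* free_charp(&buf) runs here, on every exit path */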

reply
bashkiddie
3 hours ago
[-]
I would like to second this.

In Golang, if you iterate over a thousand files and call

    defer f.Close()
in the loop body, your OS will run out of file descriptors before the function returns.
reply
L-4
5 hours ago
[-]
Both defer and RAII have proven to be useful, but RAII has also proven to be quite harmful in cases, in the limit introducing a lot of hidden control flow.

I think that defer is actually limited in ways that are good - I don't see it introducing surprising control flow in the same way.

reply
fauigerzigerk
4 hours ago
[-]
But of course what you call "surprising" and "hidden" is also RAII's strength.

It allows library authors to take responsibility for cleaning up resources in exactly one place rather than forcing library users to insert a defer call in every single place the library is used.

reply
gpderetta
3 hours ago
[-]
RAII also composes.
reply
throwaway27448
3 hours ago
[-]
This certainly isn't RAII—the term is quite literal, Resource Acquisition Is Initialization, rather than calling code as the scope exits. This is the latter of course, not the former.
reply
mort96
3 hours ago
[-]
People often say that "RAII" is kind of a misnomer; the real power of RAII is deterministic destruction. And I agree with this sentiment; resource acquisition is the boring part of RAII, deterministic destruction is where the utility comes from. In that sense, there's a clear analogy between RAII and defer.

But yeah, RAII can only provide deterministic destruction because resource acquisition is initialization. As long as resource acquisition is decoupled from initialization, you need to manually track whether a variable has been initialized or not, and make sure to only call a destruction function (be that by putting free() before a return or through 'defer my_type_destroy(my_var)') in the paths where you know that your variable is initialized.

So "A limited form of RAII" is probably the wrong way to think about it.

reply
usrnm
2 hours ago
[-]
In my opinion, it's the initialization part of RAII which is really powerful and still missing from most other languages. When implemented properly, RAII completely eliminates a whole class of bugs related to uninitialized or partially initialized objects: if all initialization happens during construction, then you either have a fully initialized, correct object, or you exit via an exception; there is no third state. Additionally, tying resources to constructors makes the correct order of freeing these resources automatic. If you consume all your dependencies during construction, then destructors just walk the dependency graph in the correct order without you even thinking about it. Agreed, writing your code like this requires some getting used to and isn't even always possible, but it's still a very powerful idea that goes beyond simple automatic destruction.
reply
mort96
2 hours ago
[-]
This sounds like a nice theoretical benefit to a theoretical RAII system (or even a practical benefit to RAII in Rust), but in C++, I encounter no end of bugs related to uninitialized or partially initialized objects. All primitive types have a no-op constructor, so objects of those types are uninitialized by default. Structs containing members of primitive types can be in partially initialized states where some members are uninitialized because of a missing '= 0'.

It's not uncommon that I encounter a bug when running some code on new hardware or a new architecture or a new compiler for the first time because the code assumed that an integer member of a class would be 0 right after initialization and that happened to be true before. ASan helps here, but it's not trivial to run in all embedded contexts (and it's completely out of the question on MCUs).

reply
usrnm
2 hours ago
[-]
You're talking about the part of C++ that was inherited from C. Unfortunately, it was way too late to fix by the time RAII was even invented
reply
mort96
2 hours ago
[-]
And the consequence is that, at least in C++, we don't see the benefit you describe of "objects can never be in an uninitialized or partially-initialized state".

Anyway, I think this could be fixed, if we wanted to. C just describes the objects as being uninitialized and has a bunch of UB around uninitialized objects. Nothing in C says that an implementation can't make every uninitialized object 0. As such, it would not harm C interoperability if C++ just declared that all variable declarations initialize variables to their zero value unless the declaration initializes it to something else.

reply
throwaway27448
2 hours ago
[-]
> and make sure to...call a destruction function

Which removes half the value of RAII as I see it—needing to know when and how to unacquire the resource is half the battle, a burden that using RAII removes.

Of course, calling code as the scope exits is still useful. It just seems silly to call it any form of RAII.

reply
usrnm
2 hours ago
[-]
To be fair, RAII is so much more than just automatic cleanup. It's a shame how misunderstood this idea has become over the years
reply
randusername
24 minutes ago
[-]
Can you share some sources that give a more complete overview of it?

I got out my 4e Stroustrup book and checked the index, RAII only comes up when discussing resource management.

Interestingly, the verbatim introduction to RAII given is:

> ... RAII allows us to eliminate "naked new operations," that is, to avoid allocations in general code and keep them buried inside the implementation of well-behaved abstractions. Similarly "naked delete" operations should be avoided. Avoiding naked new and naked delete makes code far less error-prone and far easier to keep free of resource leaks

From the embedded standpoint, and after working with Zig a bit, I'm not convinced about that last line. Hiding heap allocations seems like it makes it harder to avoid resource leaks!

reply
LexiMax
7 hours ago
[-]
A long overdue feature.

Though I do wonder what the chances are that the C subset of C++ will ever add this feature. I use my own homespun "scope exit" which runs a lambda in a destructor quite a bit, but every time I use it I wish I could just "defer" instead.

reply
pjmlp
4 hours ago
[-]
Never, you can already do this with RAII, and naturally it would be yet another thing to complain about C++ adding features.

Then again, if someone is willing to push it through WG21 no matter what, maybe.

reply
mort96
3 hours ago
[-]
C++ implementations of defer are either really ugly thanks to using lambdas and explicitly named variables which exist only to hold a scoped object, or they depend on macros, which either need a long, manually namespaced name or risk stepping on the toes of a library. I had to rename my defer macro from DEFER to MYPROGRAM_DEFER in a project due to a macro collision.

C++ would be a nicer language with native defer. Working directly with C APIs (which is one of the main reasons to use C++ over Rust or Zig these days) would greatly benefit from it.

reply
pjmlp
3 hours ago
[-]
Because they are all the consequence of holding it wrong, avoiding RAII solutions.

Working with native C APIs in C++ is akin to using unsafe in Rust, C#, Swift...: it should be wrapped in type-safe functions or classes/structs, never used directly outside implementation code.

If folks actually followed this more often, there would be far fewer CVE reports in C++ code caused by calling into C.

reply
mort96
3 hours ago
[-]
If I'm gonna write RAII wrappers around every tiny little thing that I happen to need to call once... I might as well just use Rust and make the wrappers do FFI.

If I'm constructing a particular C object once in my entire code base, calling a couple functions on it, then freeing it, I'm not much more likely to get it right in the RAII wrapper than in the one place in my code base I do it manually. At least if I have tools like defer to help me.

reply
leni536
3 hours ago
[-]
Not to mention that the `scope_success` and `scope_failure` variants have to use `std::uncaught_exceptions()`, which is hostile to codegen and also has other problems, especially in coroutines. C++ could get exception-aware variants of language defer.
reply
anilakar
4 hours ago
[-]
Various macro tricks have existed for a long time but nobody has been able to wrap the return statement yet. The lack of RAII-style automatic cleanups was one of the root causes for the legendary goto fail;[1] bug.

[1] https://gotofail.com/

reply
uecker
3 hours ago
[-]
I do not see how defer would have helped in this case.
reply
Davidbrcz
3 hours ago
[-]
People manually doing resource cleanup by using goto.

I'm assuming that using defer would have prevented the gotos in the first place, and the bug.

reply
anilakar
2 hours ago
[-]
To be fair, there were multiple wrongs in that piece of code: avoiding typing with the forward goto cleanup pattern; not using braces; not using autoformatting that would have popped out that second goto statement; ignoring compiler warnings and IDE coloring of dead code or not having those warnings enabled in the first place.

C is hard enough as is to get right and every tool and development pattern that helps avoid common pitfalls is welcome.

reply
mort96
2 hours ago
[-]
The forward goto cleanup pattern is not something "wrong" that was done to "avoid typing". Goto cleanup is the only reasonable way I know to semi-reliably clean up resources in C, and is widely used among most of the large C code bases out there. It's the main way resource cleanup is done in Linux.

By putting all the cleanup code at the end of the function after a cleanup label, you have reduced the complexity of resource management: you have one place where the resource is acquired, and one place where the resource is freed. This is actually manageable. Before you return, you check every resource you might have acquired, and if your handle (pointer, file descriptor, PID, whatever) is not in its null state (null pointer, -1, whatever), you call the free function.
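
In sketch form (names and sizes are placeholders):

    #include <stdio.h>
    #include <stdlib.h>

    int do_work(const char *path) {
        int ret = -1;
        char *buf = NULL;
        FILE *f = NULL;

        buf = malloc(4096);
        if (!buf) goto cleanup;
        f = fopen(path, "r");
        if (!f) goto cleanup;
        /* ... actual work, more acquisitions, more failure paths ... */
        ret = 0;
    cleanup:
        if (f) fclose(f);
        free(buf);   /* free(NULL) is a no-op */
        return ret;
    }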

By comparison, if you try to put the correct cleanup functions at every exit point, the problem explodes in complexity. Whereas correctly adding a new resource using the 'goto cleanup' pattern requires adding a single 'if (my_resource is not its null value) { cleanup(my_resource) }' at the end of the function, correctly adding a new resource using the 'cleanup at every exit point' pattern requires going through every single exit point in the function, considering whether or not the resource will be acquired at that time, and if it is, adding the cleanup code. Adding a new exit point similarly requires going through all resources used by the function and determining which ones need to be cleaned up.

C is hard enough as it is to get right when you only need to remember to clean up resources in one place. It gets infinitely harder when you need to match up cleanup code with returns.

reply
uecker
2 hours ago
[-]
I don't see this. The problem was a duplicate "goto fail" statement where the second one caused an incorrect return value to be returned. A duplicate defer statement could directly cause a double free. A duplicate "return err;" statement would have the same problem as the "goto fail" code. Potentially, a defer based solution could eliminate the variable for the return code, but this is not the only way to address this problem.
reply
mort96
2 hours ago
[-]
Is that true though?

Using defer, the code would be:

    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        return err;
        return err;
This has the exact same bug: the function exits with a successful return code as long as the SHA hash update succeeds, skipping further certificate validity checks. The fact that resource cleanup has been relegated to defer so that 'goto fail;' can be replaced with 'return err;' fixes nothing.
reply
anilakar
2 hours ago
[-]
It would have resulted in an uninitialized variable access warning, though.
reply
uecker
2 hours ago
[-]
I don't think so. The value is set in the assignment in the if statement even for the success path. With and without defer you nowadays get only a warning due to the misleading indentation: https://godbolt.org/z/3G4jzrTTr (updated)
reply
mort96
2 hours ago
[-]
No it wouldn't. 'err' is declared and initialized at the start of the function. Even if it wasn't initialized at the start, it would've been initialized by some earlier fallible function call which is also written as 'if ((err = something()) != 0)'
reply
surajrmal
5 hours ago
[-]
In many cases that's preferred as you want the ability to cancel the deferred lambda.
reply
jonhohle
7 hours ago
[-]
It’s pedantic, but in the malloc example, I’d put the defer immediately after the assignment. This makes it very obvious that the defer/free goes along with the allocation.

It would run regardless of whether malloc succeeded or failed, but calling free on a NULL pointer is safe (defined to be a no-op in the C spec).
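
Concretely, assuming the proposed defer syntax, that ordering is just (a sketch):

    char *p = malloc(n);
    defer { free(p); }   /* right next to the allocation; safe even on
                            failure, since free(NULL) is a no-op */
    if (!p) return -1;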

reply
flakes
5 hours ago
[-]
I'd say a lot of users are going to borrow patterns from Go, where you'd typically check the error first.

    resource, err := newResource()
    if err != nil {
        return err
    }
    defer resource.Close()
IMO this pattern makes more sense, as running the exit behavior usually only makes sense once you have actually acquired the resource in the first place.

free may accept a NULL pointer, but it also doesn't need to be called with one either.

reply
maccard
4 hours ago
[-]
This example is exactly why RAII is the solution to this problem and not defer.
reply
mort96
4 hours ago
[-]
I love RAII. C++ and Rust are my favourite languages for a lot of things thanks to RAII.

RAII is not the right solution for C. I wouldn't want C to grow constructors and destructors. So far, C only runs the code you ask it to; turning variable declaration into a hidden magic constructor call would, IMO, fly in the face of why people may choose C in the first place.

reply
miguel_martin
4 hours ago
[-]
defer is literally just explicit RAII in this example. That is, it's just unnecessary boilerplate to wrap the newResource handle in a struct in this context.

In addition, RAII has its own complexities that need to be dealt with, i.e. move semantics, which obviously C does not have, nor likely ever will.

reply
maccard
1 hour ago
[-]
> RAII has it's own complexities that need to be dealt with now, i.e. move semantics, which obviously C does not have nor will it likely ever.

In the example above, the question of "do I put defer before or after the `if err != nil` check" is deferred to the programmer. RAII forces you to handle the complexity, defer lets you shoot yourself in the foot.

reply
mort96
2 hours ago
[-]
Free works with NULL, but not all cleanup functions do. Instead of deciding whether to defer before or after the null check on a case-by-case basis based on whether the cleanup function handles NULL gracefully, I would just always do the defer after the null check regardless of which pair of allocation/cleanup functions I use.
reply
masklinn
5 hours ago
[-]
It seems less pedantic and more unnecessarily dangerous due to its non-uniformity: in the general case the resource won't exist on error, and breaking the pattern for malloc adds inconsistency without any actual value gain.
reply
krautsauer
2 hours ago
[-]
3…2…1… and somebody writes a malloc macro that includes the defer.
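
Something like this, presumably (a purely hypothetical macro, invented for illustration; note it expands to two statements, so it can't be the body of an unbraced if):

    #define MALLOC_DEFER(var, size) \
        void *var = malloc(size);   \
        defer { free(var); }

    /* usage: MALLOC_DEFER(buf, 64); */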
reply
t43562
2 hours ago
[-]
In C I just used goto - you put a cleanup section at the bottom of your code and your error handling just jumps to it.

  #define RETURN(x) result=x;goto CLEANUP

  int myfunc() {
    int result=0;
    if (commserror()) {
      RETURN(0);
    }
     .....
    /* On success */
    RETURN(1);

    CLEANUP:
    if (myStruct) { free(myStruct); }
    ...
    return result;
  }
The advantage being that you never have to remember which things are to be freed at which particular error state. The style also avoids lots of nesting because it returns early. It's not as nice as having defer but it does help in larger functions.
reply
vasama
2 hours ago
[-]
> The advantage being that you never have to remember which things are to be freed at which particular error state.

You also don't have to remember this when using defer. That's the point of defer - fire and forget.

reply
vbezhenar
2 hours ago
[-]
One small nitpick: you don't need a check before the `free` call; `free(NULL)` is fine.
reply
t43562
1 hour ago
[-]
You're right that it's not needed in my example but sometimes the thing that you're freeing has pointers inside it which themselves have to be freed first and in that case you need the if.

There are several other issues I haven't shown like what happens if you need to free something only when the return code is "FALSE" indicating that something failed.

This is not as nice as defer but up till now it was a comparatively nice way to deal with those functions which were really large and complicated and had many exit points.

reply
Arch-TK
35 minutes ago
[-]
If you have something which contains pointers, you should have a destructor function for it, which itself should check if the pointer is not NULL before attempting to free any fields.
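
Something like this sketch (names invented for illustration):

    #include <stdlib.h>

    typedef struct { char *name; char *data; } thing;

    void thing_destroy(thing *t) {
        if (!t) return;
        free(t->name);   /* free(NULL) is fine, so partially built */
        free(t->data);   /* objects clean up correctly too */
        free(t);
    }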
reply
jagged-chisel
2 hours ago
[-]
But it does keep one in the habit of using NULL checks.
reply
david-gpu
1 hour ago
[-]
It is pointless, because in Linux all you get is a virtual address. Physical backing is only allocated on first use.

In other words, the first time you access a "freshly allocated" non-null pointer you may get a page fault due to insufficient physical memory.

reply
3836293648
1 hour ago
[-]
By default, yes. You can configure it to not overcommit
reply
baq
2 hours ago
[-]
defer is a stack, is scope-local, and lets cleanup code sit visually close to initialization code.
reply
bytejanitor
2 hours ago
[-]
But you have to give this cleanup jump label a different name for every function.
reply
spiffyk
2 hours ago
[-]
You don't. Labels are local to functions in C.
reply
pocksuppet
1 hour ago
[-]
This looks like a recipe for disaster: you'll free something in the return path that shouldn't be freed because it's part of the function's result, or forget to free something in a success path. Just write

    result = x;
    goto CLEANUP;
if you meant

    RETURN(x);
At least then you'll be able to follow the control flow without remembering what the magic macro does.
reply
cmovq
5 hours ago
[-]
Would defer be considered hidden control flow? I guess it’s not so hidden since it’s within the same function unlike destructors, exceptions, longjmp.
reply
fuhsnn
4 hours ago
[-]
It's one of the most commonly adopted features among C successor languages (D, Zig, Odin, C3, Hare, Jai); given how opinionated some of them are on these topics, I think it's safe to say it's generally well regarded in PL communities.
reply
ZoomZoomZoom
2 hours ago
[-]
In Nim too, although the author doesn't like it much.
reply
Ygg2
3 hours ago
[-]
What I always hated about defer is that you can simply forget it or place it in the wrong position. Destructors or linear types (must-use-once types) are a much better solution.
reply
Someone
1 hour ago
[-]
It breaks the idea that statements get executed in the order they appear in the source code, but it ‘only’ moves and sometimes deduplicates (in functions with multiple exit points) statements, it doesn’t hide them.

Of course, that idea already isn’t correct in many languages; function arguments are evaluated before a function is called, operator precedence often breaks it, etc, but this moves entire statements, potentially by many lines.

reply
ozgrakkurt
4 hours ago
[-]
Can somebody explain why this is significantly better than using goto pattern?

Genuinely curious, as I only have a small amount of experience with C and found goto to be ok so far.

reply
mort96
4 hours ago
[-]
I feel like C people, out of anyone, should respect the code gen wins of defer. Why would you rely on runtime conditional branches for everything you want cleaned up, when you can statically determine what cleanup functions need to be called?

In any case, the biggest advantage IMO is that resource acquisition and cleanup are next to each other. My brain understands the code better when I see "this is how the resource is acquired, this is how the resource will be freed later" next to each other, than when it sees "this is how this resource is acquired" on its own or "this is how the resource is freed" on its own. When writing, I can write the acquisition and the free at the same time in the same place, making me very unlikely to forget to free something.
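
For example, with a scope-based defer registered only after a successful acquisition (a sketch, assuming the proposal's syntax):

    FILE *f = fopen(path, "r");
    if (!f) return -1;
    defer { fclose(f); }   /* only reached on the success path, so the
                              compiler needs no runtime "if (f)" at scope exit */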

reply
qsera
2 hours ago
[-]
>"this is how the resource is acquired, this is how the resource will be freed later"

Lovely fairy tale. Now can you tell me how you love to scroll back and examine all the defer blocks within a scope when it ends to understand what happens at that point?

reply
mort96
2 hours ago
[-]
I don't typically do that. In 99.999% of cases, 'defer free(something)' was done because it's the correct thing to do at every exit point, so I don't need to think about it at the end of the block.
reply
qsera
10 minutes ago
[-]
If you only ever work in your own code bases, sure.
reply
kev009
4 hours ago
[-]
It allows you to put the deferred logic near the allocation/use site, which I noticed was helpful in Go: it becomes muscle memory to write the cleanup as you write a new allocation, and it is hinted by autocomplete these days.

But it adds a new dimension of control flow, which in a garbage collected language like Go is less worrisome whereas in C this can create new headaches in doing things in the right order. I don't think it will eliminate goto error handling for complex cases.

reply
uecker
4 hours ago
[-]
The advantage is that it automatically adds the cleanup code to all exit paths, so you cannot forget it for some. Whether this is really that helpful is unclear to me. When we looked at defer originally for C, Robert Seacord had a list of examples showing how they looked before and after rewriting with defer. At that point I lost interest in this feature, because the new code wasn't generally better in my opinion.

But people know it from other languages, and seem to like it, so I guess it is good to have it also in C.

reply
ozgrakkurt
3 hours ago
[-]
Thanks, seems like the document is this one

http://robertseacord.com/wp/2020/09/10/adding-a-defer-mechan...

reply
uecker
3 hours ago
[-]
If I remember correctly he had many more examples somewhere, but I am not sure this was ever shared in public.
reply
dapperdrake
4 hours ago
[-]
Compare the recent bug related to goto error handling in OpenSSH, where an "additional" error return value wasn't caught and allowed a security bypass that accepted a failed key.

Cleanup is good. Jumping around with "goto" confused most people in practice. It seems highly likely that most programmers model "defer" differently in their minds.

EDIT:

IIRC it was CVE-2025-26465. Read the code and the patch.

reply
uecker
4 hours ago
[-]
It is not clear to me that defer helps here. The issue is management of state (the return value) not control flow.
reply
anal_reactor
4 hours ago
[-]
1. Goto pattern is very error-prone. It works until it doesn't and you have a memory leak. The way I solved this issue in my code was a macro that takes a function and creates an object that has said function in its destructor.

2. Defer is mostly useful for C++ code that needs to interact with C APIs, because the two are fundamentally different. A C API usually exposes "create_something" and "destroy_something" functions, while the C++ pattern is to have an object with "create_something" hidden inside its constructor and "destroy_something" inside its destructor.

reply
ozgrakkurt
3 hours ago
[-]
I found that some error-prone cases are harder to express with defer in Zig.

For example, if I have an FFI function that transfers the ownership of some allocator in the middle of the function.

reply
joexbayer
5 hours ago
[-]
A related article discussing Gustedt’s first defer implementation, which also looks at the generated assembly:

https://oshub.org/projects/retros-32/posts/defer-resource-cl...

reply
babalark
7 hours ago
[-]
Yes!! One step closer to having defer in the standard.

Related blog post from last year: https://thephd.dev/c2y-the-defer-technical-specification-its... (https://news.ycombinator.com/item?id=43379265)

reply
bjackman
2 hours ago
[-]
The Linux kernel has been using __attribute__((cleanup)) for a little while now. So far, I've only seen/used it in cases where the alternative (one goto label) isn't very bad. Even there it's basically welcome.

But there are lots of cases in the kernel where we have 10+ goto labels for error paths in complex setup functions. I think when this starts making its way into those areas it will really start having an impact on bugs.

Sure, most of those bugs are low impact (it's rare that an attacker can trigger the broken error paths), but still, this is basically free software quality; it would be silly to leave it on the table.

And then there's the ACTUAL motivation: it makes the code look nicer.

reply
gignico
6 hours ago
[-]
I’m just going to start teaching classes of C programming to university first-year CS students. Would you teach `defer` straight away to manage allocated memory?
reply
zffr
5 hours ago
[-]
My suggestion is no - first have them do it the hard way. This will help them build the skills to do manual memory management where defer is not available.

Once they do learn about defer they will come to appreciate it much more.

reply
orlp
5 hours ago
[-]
In university? No, absolutely not straight away.

The point of a CS degree is to know the fundamentals of computing, not the latest best practices in programming that abstract the fundamentals.

reply
jurf
5 hours ago
[-]
My university also taught best practices alongside that, every time. I am very grateful for that.
reply
leni536
6 hours ago
[-]
It's still only in a TS, not in ISO C, if that matters.
reply
flohofwoe
5 hours ago
[-]
No, but also skip malloc/free until late in the year. When it comes to heap allocation, don't use example code which allocates and frees single structs; instead introduce concepts like arena allocators to bundle many items with the same max lifetime, pool allocators with generation-counted slots, and other memory management strategies.
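
For instance, a toy arena along these lines (a sketch, not production code):

    #include <stddef.h>

    /* items share one lifetime and are released together by a single reset */
    typedef struct {
        _Alignas(16) unsigned char buf[64 * 1024];
        size_t used;
    } arena;

    void *arena_alloc(arena *a, size_t n) {
        n = (n + 15) & ~(size_t)15;   /* keep allocations 16-byte aligned */
        if (n > sizeof a->buf - a->used) return NULL;
        void *p = a->buf + a->used;
        a->used += n;
        return p;
    }

    void arena_reset(arena *a) { a->used = 0; }   /* "frees" everything at once */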
reply
brabel
1 hour ago
[-]
Are there any C tutorials you know of that do that, so I can try to learn how to do it?
reply
kibwen
5 hours ago
[-]
If you're teaching them to write an assembler, then it may be worth teaching them C, as a fairly basic language with a straightforward/naive mapping to assembly. But for basically any other context in which you'd be teaching first-year CS students a language, C is not an ideal language to learn as a beginner. Teaching C to first-year CS students just for the heck of it is like teaching medieval alchemy to first-year chemistry students.
reply
NooneAtAll3
5 hours ago
[-]
I think I heard this in some cppcon video, from a uni teacher who had to teach students both C and Python, so he experimented for several years:

learning Python first is the same difficulty as learning C first (because the main problem is the whole concept of programming)

and learning C after Python is harder than learning Python after C (because of pointers)

reply
gignico
4 hours ago
[-]
Absolutely, it's not their first language. In our curriculum C programming is part of the Operating Systems course and comes after Computer Architecture where they see assembly. So its purpose is to be low level to understand what's under the hood. To learn programming itself they use other languages (currently Java, for better or worse, but I don't have voice on that choice).
reply
adhamsalama
2 hours ago
[-]
C is the best language to learn as a beginner.
reply
junon
4 hours ago
[-]
No. They need to understand memory failures. Teach them what it looks like when it's wrong. Then show them the tools to make things right. They'll never fully understand those tools if they don't understand the necessity of doing the right thing.
reply
uecker
5 hours ago
[-]
IMHO, it is in the best interest of your students to teach them standard C first.
reply
thayne
5 hours ago
[-]
There is a technical specification, so hopefully it will be in standard C in the next version, given that gcc and clang already have implementations (and gcc has had a way to do it for a long time, although the syntax is quite different).
reply
uecker
4 hours ago
[-]
It is not yet a technical specification, just a draft for one, but this will hopefully change this year, and the defer patch has not been merged into GCC yet. So I guess it will become part of C at some point if experience with it is good, but at this time it is an extension.
reply
gignico
4 hours ago
[-]
I was under the wrong assumption that defer was approved for the next standard already.
reply
uecker
3 hours ago
[-]
We will likely decide in March that it will become an ISO TS. Given the simplicity of the feature and its popularity, I would assume that it will become part of the standard eventually.
reply
gignico
3 hours ago
[-]
That’s great news!
reply
Panzerschrek
7 hours ago
[-]
Such an addition is great. But there is something even better: destructors in C++. Anyone who writes C should consider using C++ instead, where destructors provide a more convenient way of freeing resources.
reply
alextingle
6 hours ago
[-]
C++ destructors are implicit, while defer is explicit.

You can just look at the code in front of you to see what defer is doing. With destructors, you need to know what type you have (not always easy to tell), then find its destructor, and all the destructors of its parent classes, to work out what's going to happen.

Sure, if the situation arises frequently, it's nice to be able to design a type that "just works" in C++. But if you need to clean up reliably in just this one place, C++ destructors are a very clunky solution.

reply
Panzerschrek
5 hours ago
[-]
Implicitness of destructors isn't a problem, it's an advantage - it makes code shorter. Freeing resources in an explicit way creates too much boilerplate and is bug-prone.

> With destructors, you need to know what type you have (not always easy to tell), then find its destructor, and all the destructors of its parent classes, to work out what's going to happen

Isn't it a code quality issue? It should be clear from class name/description what can happen in its destructor. And if it's not clear, it's not that relevant.

reply
L-4
5 hours ago
[-]
> Implicitness of destructors isn't a problem

It's absolutely a problem. Classically, you spend most of your time reading and debugging code, not writing it. When there's an issue pertaining to RAII, it is hidden away, potentially requiring looking at many subclasses etc.

reply
flohofwoe
5 hours ago
[-]
Destructors are only comparable when you build an OnScopeExit class which calls a user-provided lambda in its destructor, which then does the cleanup work - so it's more like a workaround to build a defer feature using C++ features.

The classical case of 'one destructor per class' would require to design the entire code base around classes which comes with plenty of downsides.

> Anyone who writes C should consider using C++ instead

Nah thanks, been there, done that. Switching back to C from C++ about 9 years ago was one of my better decisions in life ;)

reply
amluto
6 hours ago
[-]
I think destructors are different, not better. A destructor can’t automatically handle the case where something doesn’t need to be cleaned up on an early return until something else occurs. Also, destructors are a lot of boilerplate for a one-off cleanup.
reply
Panzerschrek
5 hours ago
[-]
> A destructor can’t automatically handle the case where something doesn’t need to be cleaned up on an early return

It can. An object with a destructor doing clean-up should be created only once such clean-up is needed. In the case of a file, for example, a file object should be created at file opening, so that it can close the file in its destructor.

reply
mathisfun123
6 hours ago
[-]
i write C++ every day (i actually like it...) but absolutely no one is going to switch from C to C++ just for dtors.
reply
kibwen
5 hours ago
[-]
No, RAII is one of the primary improvements of C++ over C, and one of the most ubiquitous features that is allowed in "light" subsets of C++.
reply
flohofwoe
4 hours ago
[-]
> but absolutely no one is going to switch from C to C++ just for dtors

The decision would be easier if the C subset in C++ were compatible with modern C standards instead of being a non-standard dialect of C stuck in ca. 1995.

reply
gpderetta
2 hours ago
[-]
Of course not! Those that would have, already did!
reply
Pay08
6 hours ago
[-]
Weren't dtors the reason GCC made the switch?
reply
uecker
3 hours ago
[-]
I don't think so. As a contributor to GCC, I also wish it hadn't.
reply
Pay08
27 minutes ago
[-]
Why do you think so?
reply
uecker
23 minutes ago
[-]
For two reasons: First, where C++ features are used, they make the code harder to understand rather than easier. Second, it requires newer and more complex toolchains to build GCC itself. Some people still maintain the last C version of GCC just to keep the bootstrap path open.
reply
Someone
6 hours ago
[-]
For the cases where a destructor isn’t readily available, you can write a defer class that runs a lambda passed to its constructor in its destructor, can’t you?

Would be a bit clunky, but that can (¿somewhat?) be hidden in a macro, if desired.

reply
userbinator
6 hours ago
[-]
As others have commented already: if you want to use C++, use C++. I suspect the majority of C programmers neither care nor want stuff like this; I still stay with C89 because I know it will be portable anywhere, and complexities like this are completely at odds with the reason to use C in the first place.
reply
laserbeam
6 hours ago
[-]
I would say the complexity of implementing defer yourself is a bit annoying for C. However defer itself, as a language feature in a C standard, is pretty reasonable. It's a very straightforward concept and fits well within the scope of C, just as it fits within the scope of Zig. As long as it's the Zig defer, not the Golang one…

I would not introduce Zig's errdefer though. That one would need additional semantics changes in C to express errors.

reply
qsera
6 hours ago
[-]
>pretty reasonable

It starts out small. Then before you know it, the language is total shit. Python is a good example.

I am observing a very distinct phenomenon where the internet makes very shallow ideas mainstream and ruins many, many good things that stood the test of time.

I am not saying this is one of those instances, but what the parent comment says makes sense to me. You can see another comment whose author now wants to go further and wants destructors in C. Because of the internet, such voices can now reach out to each other, gather, and cause a change. But before, such voices would have to go through a lot of sensible heads before they would be able to reach each other. In other words, bad ideas got snuffed out early before the internet, but now they go mainstream easily.

So you see, it starts out slow, but then more and more stuff gets added which diverges more and more from the point.

reply
Galanwe
5 hours ago
[-]
I get your point, though in the specific case of defer, it looks like we both agree it's really a good move. No more spaghetti of goto err_*; in complex initialization functions.
reply
qsera
5 hours ago
[-]
>we both agree it's really a good move

Actually I am not sure I do. It seems to me that even though `defer` is more explicit than destructors, it still falls under "spooky action at a distance" category.

reply
Galanwe
38 minutes ago
[-]
I don't understand why destructors enter the discussion. This is C; there are no destructors. Are you comparing "adding destructors to C" vs "adding defer to C"?

The former would bring so much into C that it wouldn't be C anymore.

And if your point is "you should switch to C++ to get destructors", then it seems off topic. By very definition, if we're talking about language X and your answer is "switch to Y", this is an entirely different subject, of very little interest to people programming in X.

reply
qsera
23 minutes ago
[-]
Sorry, I had some other thread that involved destructors in my head.

But the point is that `defer` is still in the "spooky action at a distance" category that I generally don't want in programming languages, especially in C.

reply
duckerude
5 hours ago
[-]
That comment is saying to use C++, not to add destructors to C.
reply
ubercore
4 hours ago
[-]
Modern Python is great :shrug:
reply
lich_king
5 hours ago
[-]
> I still stay with C89 because I know it will be portable anywhere

With respect, that sounds a bit nuts. It's been 37 years since C89; unless you're targeting computers that still have floppy drives, why give up on so many convenience features? Binary prefixes (0b), #embed, defined-width integer types, more flexibility with placing labels, static_assert for compile-time sanity checks, inline functions, declarations wherever you want, complex number support, designated initializers, countless other things that make code easier to write and to read.
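
A few of those in one snippet (a sketch; the binary literal needs a C23 compiler):

    #include <assert.h>   /* static_assert */
    #include <stdint.h>   /* fixed-width types (C99) */

    static_assert(sizeof(uint32_t) == 4, "sanity check");

    typedef struct { int x, y; } point;

    int demo(void) {
        uint32_t mask = 0b1011;         /* binary literal (C23) */
        point p = { .y = 2, .x = 1 };   /* designated initializer (C99) */
        for (int i = 0; i < 4; i++)     /* declaration inside for (C99) */
            mask ^= 1u << i;
        return p.x + p.y + (int)mask;
    }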

Defer falls in roughly the same category. It doesn't add a whole lot of complexity, it's just a small convenience feature that doesn't add any runtime overhead.

reply
robinsonb5
3 hours ago
[-]
To be honest I have similar reservations.

The one huge advantage of C is its ubiquity - you can use it on the latest shiny computer / OS / compiler as well as some obscure embedded platform with a compiler that hasn't been updated since 2002. (That's a rare enough situation to be unimportant, right? /laughs in industrial control gear.)

I'm wary of anything which fragments the language and makes it inaccessible to subsections of its traditional strongholds.

While I'm not a huge fan of the "just use Rust" attitude that crops up so often these days, you could certainly make an argument that if you want modern language features you should be using a more modern language.

(And, for the record, I do still write software - albeit recreationally - for computers that have floppy drives.)

reply
uecker
3 hours ago
[-]
C has its unique advantages that make some of us prefer it to C++ or Rust or other languages. But it also has some issues that can be addressed. IMHO it should evolve, but very slowly. C89 is certainly a much worse language than C99 and I think most of the changes in C23 were good. It is fine to not use them for the next two decades, but I think it is good that most projects moved on from C89 so it is also good that C99 exists even though it took a long time to be adopted. And the same will be true for C23 in the future.
reply
Mond_
6 hours ago
[-]
I think a lot of the really old school people don't care, but a lot of the younger people (especially those disillusioned with C++ and not fully enamored with Rust) are in fact quite happy for C to evolve and improve in conservative, simple ways (such as this one).
reply
flohofwoe
5 hours ago
[-]
> still stay with C89

You're missing out on one of the best-integrated and most useful features ever added to a language as an afterthought (C99 designated initialization). Even many modern languages (e.g. Rust, Zig, C++20) don't get close when it comes to data initialization.
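
To illustrate what that buys you (a small sketch, with made-up types):

    typedef struct { float r, g, b, a; } color;
    typedef struct { color clear; int width, height; } pass;
    typedef struct { pass passes[4]; } desc;

    /* C99: out-of-order, nested, and array-indexed designators in one
       literal; everything not mentioned is zero-initialized */
    desc d = {
        .passes[2].clear = { .r = 1.0f, .a = 1.0f },
        .passes[0] = { .width = 640, .height = 480 },
    };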

reply
pjmlp
4 hours ago
[-]
You mean what Ada and Modula-3, among others, already had before it came to C99?
reply
ablob
3 hours ago
[-]
Who cares who had it first, what matters is who has it, and who doesn't...
reply
pjmlp
2 hours ago
[-]
Apparently some do, hence my reply.
reply
masklinn
5 hours ago
[-]
Just straight up huffing paint are we.
reply
flohofwoe
5 hours ago
[-]
Explain why? Have you used C99 designated init vs other languages?

E.g. neither Rust, Zig nor C++20 can do this:

https://github.com/floooh/sokol-samples/blob/51f5a694f614253...

Odin gets really close but can't chain initializers (which is ok though):

https://github.com/floooh/sokol-odin/blob/d0c98fff9631946c11...

reply
phicoh
3 hours ago
[-]
In general it would help if you would spend some text on describing what features of C99 are missing in other languages. Giving some code and assuming that the reader will figure it out is not very effective.

As far as I can tell, Rust can do what is in your example (with different syntax of course) except for this particular way of initializing an array.

To me, that seems like a pretty minor syntax issue that could be added to Rust if there were a lot of demand for initializing arrays this way.

reply
flohofwoe
2 hours ago
[-]
I can show more code examples instead:

E.g. notice how here in Rust each nested struct needs a type annotation, even though the compiler could trivially infer the type. Rust also cannot initialize arrays with random access directly, it needs to go through an expression block. Finally Rust requires `..Default::default()`:

https://github.com/floooh/sokol-rust/blob/f824cd740d2ac96691...

Zig has most of the same issues as Rust, but at least the compiler is able to infer the nested struct types via `.{ }`:

https://github.com/floooh/sokol-zig/blob/17beeab59a64b12c307...

I don't have C++ code around, but compared to C99 it has the following restrictions:

- designators must appear in order (a no-go for any non-trivial struct)

- cannot chain designators (e.g. `.a.b.c = 123`)

- doesn't have the random array access syntax via `[index]`

> ...like a pretty minor syntax issue...

...sure, each language only has a handful of minor syntax issues, but these papercuts add up to a lot of syntax noise to sift through when compared to the equivalent C99 code.

reply
phicoh
1 hour ago
[-]
In Rust you can do "fn new(field: Field) -> Self { Self { field } }". This is in my experience the most common case of initializers in Rust. You don't mention one of the features of the Rust syntax: that you only have to specify the field name when you have a variable with the same name. In my experience, that reduces clutter a lot.

I have to admit, the ..Default::default() syntax is pretty ugly.

In theory Rust could do "let x: Foo = _ { field }" and "Foo { field: _ { bar: 1 } }". That doesn't even change the syntax. It's just a question of whether enough people care.

reply
majke
6 hours ago
[-]
Not necessarily. In classic C we often build complex state machines to handle errors - especially when there are many things that need to be initialized (malloced) one after another and each might fail. Think the infamous "goto error".

I think defer{} can simplify these flows sometimes, so it can indeed be useful for good old style C.

reply
mort96
3 hours ago
[-]
Defer is a very simple feature where all code is still clearly visible and nothing is called behind your back. I write a lot of C++, and it's a vastly more complex language than "C with defer". Defer is so natural to C that compilers have long had non-standard ways of mimicking it (e.g. __attribute__((cleanup))).

If you want to write C++, write C++. If you want to write C, but want resource cleanup to be a bit nicer and more standard than __attribute__((cleanup)), use C with defer. The two are not comparable.

reply
rwmj
4 hours ago
[-]
That ship has sailed. Lots of open source C projects already use __attribute__((cleanup)) (which is the same thing).
reply
ozgrakkurt
4 hours ago
[-]
Isn't a goto cleanup label good enough anyway?

Goto approach also covers some more complicated cases

reply
pjmlp
4 hours ago
[-]
Then why not even better, K&R C with external assembler, that is top. /s
reply
Am4TIfIsER0ppos
1 hour ago
[-]
> external assembler

Is that supposed to emphasize how poor that choice is? External assembly is great.

reply
pjmlp
52 minutes ago
[-]
When talking about K&R C and the assembler provided by UNIX System V, yes.

Even today, the only usable assemblers on UNIX platforms were born on the PC or the Amiga.

reply
sp1rit
4 hours ago
[-]
I quite dislike the defer syntax. IMO the cleanup attribute is the much nicer method of dealing with RAII in C.
reply
avadodin
3 hours ago
[-]
I think C should be reset to C89 and then go over everything including proposals and accept only the good&compatible bits.

If you can't compile K&R, you should label your language "I can't believe it's not C!".

I don't have time to learn your esolang.

reply
nananana9
5 hours ago
[-]
I took some shit in the comments yesterday for suggesting "you can do it with a few lines of standard C++" in another similar thread, but yet again here we are.

Defer takes 10 lines to implement in C++. [1]

You don't have to wait 50 years for a committee to introduce basic convenience features, and you don't have to use non-portable extensions until they do (and in this case the __attribute__((cleanup)) has no equivalent in MSVC), if you use a remotely extensible language.

[1] https://www.gingerbill.org/article/2015/08/19/defer-in-cpp/

reply
mort96
3 hours ago
[-]
Why is this a relevant comment? We're talking about C, not C++. If you wanted to suggest using an alternative language, you're probably better off recommending Zig: defer takes 0 lines to implement there, and it's closer to C than what C++ is.
reply
nananana9
1 hour ago
[-]
Everyone reading this (you included) knows full well that unlike Zig/Rust/Odin/whatever, C++ has the special property that you can quite literally* write C code in it, AND you can implement whatever quality of life fixes you need with targeted usage of RAII+templates+macros (defer, bounds checked accesses, result types, whatever).

My comment is targeted towards the programmer who is excited about features like this - you can add an extra two characters to your filename and trivially implement those improvements (and more) yourself, without any alterations to your codebase or day to day programming style.

reply
mort96
1 hour ago
[-]
C in C++ is a pretty terrible experience. The differences your asterisk alludes to are actually quite significant in practice:

C++ doesn't let you implicitly cast from void* to other pointer types. This breaks the way you typically heap-allocate variables in C: instead of 'mytype *foo = malloc(sizeof(*foo))', you have to write 'mytype *foo = (mytype *)malloc(sizeof(*foo))'. This adds a non-trivial amount of friction to something you do every handful of lines.

String literals in C++ are 'const char *' instead of 'char *'. While this is more technically correct, it means you have to add casts to 'char *' all over the place. Plenty of C APIs are really not very ergonomic when string literals aren't 'char *'.

And the big one: C++ doesn't have C's struct literals. Lots of C APIs are super ergonomic if you call them like this:

    some_function(&(struct params) {
        .some_param = 10,
        .other_param = 20,
    });
You can't do that in C++. C++ has other features (such as its own, different kind of aggregate initialization with designated initializers, and the ability for temporaries to be treated as const references) which make C++ APIs nice to work with from C++, but C APIs based around the assumption that you'll be using struct literals aren't nice to use from C++.

If you wanna use C, use C. C is a much better C than C++ is.

reply
nananana9
45 minutes ago
[-]
I'll give you the last bullet you missed, you're also giving up the { [3] = ..., [5] = ... } array initializer syntax.

I like both these syntax-es (syntacies? synticies?) and I hope they make their way to C++, but if we're honestly evaluating features -- take their utility, multiply it by 100 and in my book it's still losing out against either defer or slices/span types.

If you disagree with this, then your calculus for feature utility is completely different than mine. I suspect for most people that's not the case, and most of the time the reason to pick C over C++ is ideological, not because of these 2 pieces of syntax sugar.

"I like it" is a good enough reason to use something, my original comment is to the person who wants to use C but gets excited about features like this - I don't know how many of those people are aware of how trivially most of these things can be accomplished in C++.

reply