Though I do wonder what the chances are that the C subset of C++ will ever add this feature. I use my own homespun "scope exit" which runs a lambda in a destructor quite a bit, but every time I use it I wish I could just "defer" instead.

Then again, if someone is willing to push it through WG21 no matter what, maybe.
Various macro tricks have existed for a long time, but nobody has been able to wrap the return statement yet. The lack of RAII-style automatic cleanups was one of the root causes of the legendary goto fail; bug [1].

[1] https://gotofail.com/
The article is a bit dense, but what it's announcing is effectively golang's `defer` (with extra braces) or a limited form of C++'s RAII (with much less boilerplate).
Both RAII and `defer` have proven to be highly useful in real-world code. This seems like a good addition to the C language that I hope makes it into the standard.
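For illustration, here's a minimal sketch of how this reads, using the draft-TS `defer { ... }` syntax (the function and buffer size are made up):

```c
#include <stdio.h>
#include <stdlib.h>

// Hypothetical example; syntax per the defer draft TS.
int slurp(const char *path) {
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    defer { fclose(f); }      // runs on every exit from this scope

    char *buf = malloc(4096);
    if (!buf) return -1;      // fclose(f) still runs here
    defer { free(buf); }

    /* ... read into buf ... */
    return 0;                 // deferred blocks run in reverse order
}
```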
Probably closer to defer in Zig than in Go, I would imagine. Defer in Go executes when the function deferred within returns; defer in Zig executes when the scope deferred within exits.
This is the crucial difference. Scope-based is much better.
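To make the difference concrete, a sketch under the same draft-TS syntax (names are hypothetical):

```c
#include <stdio.h>

// With scope-based defer (as proposed for C, and as in Zig), the
// cleanup runs at the end of each loop iteration.
void process_all(int nfiles, const char *names[]) {
    for (int i = 0; i < nfiles; i++) {
        FILE *f = fopen(names[i], "rb");
        if (!f) continue;
        defer { fclose(f); }   // closes this iteration's file
        /* ... read f ... */
    }
    // Go-style function-scoped defer would instead keep all nfiles
    // handles open until this function returns.
}
```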
By the way, GCC and Clang have `__attribute__((cleanup))` (which is the same, scope-based cleanup) and have had it for over a decade, and it is widely used in open source projects now.
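For reference, a small sketch of how that extension is typically used (the helper names here are made up for illustration):

```c
#include <stdio.h>
#include <stdlib.h>

// The cleanup function receives a pointer to the variable that is
// going out of scope.
static void free_charp(char **p)  { free(*p); }
static void close_filep(FILE **f) { if (*f) fclose(*f); }

int demo(const char *path) {
    __attribute__((cleanup(close_filep))) FILE *f = fopen(path, "rb");
    if (!f) return -1;

    __attribute__((cleanup(free_charp))) char *buf = malloc(4096);
    if (!buf) return -1;   // close_filep(&f) runs automatically here

    /* ... */
    return 0;              // both cleanups run at scope exit, in reverse order
}
```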
Both defer and RAII have proven to be useful, but RAII has also proven to be quite harmful in some cases, in the limit introducing a lot of hidden control flow.
I think that defer is actually limited in ways that are good - I don't see it introducing surprising control flow in the same way.
But of course what you call "surprising" and "hidden" is also RAII's strength.
It allows library authors to take responsibility for cleaning up resources in exactly one place rather than forcing library users to insert a defer call in every single place the library is used.
It’s pedantic, but in the malloc example, I’d put the defer immediately after the assignment. This makes it very obvious that the defer/free goes along with the allocation.
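I.e. something like this (a sketch in the draft-TS syntax; `len` is hypothetical):

```c
#include <stdlib.h>

int example(size_t len) {
    char *buf = malloc(len);
    defer { free(buf); }   // pairs visually with the allocation
    if (!buf) return -1;   // safe: free(NULL) is a defined no-op
    /* ... use buf ... */
    return 0;
}
```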
It would run regardless of whether malloc succeeded or failed, but calling free on a NULL pointer is safe (defined as a no-op in the C spec).

free may accept a NULL pointer, but it doesn't need to be called with one either.
I love RAII. C++ and Rust are my favourite languages for a lot of things thanks to RAII.
RAII is not the right solution for C. I wouldn't want C to grow constructors and destructors. So far, C only runs the code you ask it to; turning variable declaration into a hidden magic constructor call would, IMO, fly in the face of why people may choose C in the first place.
defer is literally just explicit RAII in this example. That is, it's just unnecessary boilerplate to wrap the newResource handle into a struct in this context.

In addition, RAII has its own complexities that need to be dealt with, e.g. move semantics, which C obviously does not have, nor likely ever will.
It seems less pedantic and more unnecessarily dangerous due to its non-uniformity: in the general case the resource won't exist on error, and breaking the pattern for malloc adds inconsistency without any actual gain.
I feel like C people, out of anyone, should respect the code gen wins of defer. Why would you rely on runtime conditional branches for everything you want cleaned up, when you can statically determine what cleanup functions need to be called?
Genuinely curious, as I only have a small amount of experience with C and have found goto to be OK so far.

In any case, the biggest advantage IMO is that resource acquisition and cleanup are next to each other. My brain understands the code better when I see "this is how the resource is acquired, this is how the resource will be freed later" next to each other than when it sees "this is how this resource is acquired" on its own or "this is how the resource is freed" on its own. When writing, I can write the acquisition and the free at the same time in the same place, making me very unlikely to forget to free something.
It allows you to put the deferred logic near the allocation/use site, which I noticed was helpful in Go: it becomes muscle memory to write the cleanup as you write a new allocation, and it's even hinted by autocomplete these days.
But it adds a new dimension of control flow, which in a garbage collected language like Go is less worrisome whereas in C this can create new headaches in doing things in the right order. I don't think it will eliminate goto error handling for complex cases.
The advantage is that it automatically adds the cleanup code to all exit paths, so you cannot forget it on some of them. Whether this is really that helpful is unclear to me. When we looked at defer originally for C, Robert Seacord had a list of examples showing how they looked before and after rewriting with defer. At that point I lost interest in this feature, because the new code wasn't generally better, in my opinion.
But people know it from other languages, and seem to like it, so I guess it is good to have it also in C.
Cf. the recent bug related to goto error handling in OpenSSH, where an "additional" error return value wasn't caught and allowed a security bypass that accepted a failed key.
Cleanup is good. Jumping around with "goto" confused most people in practice. It seems highly likely that most programmers model "defer" differently in their minds.
EDIT:
IIRC it was CVE-2025-26465. Read the code and the patch.
1. Goto pattern is very error-prone. It works until it doesn't and you have a memory leak. The way I solved this issue in my code was a macro that takes a function and creates an object that has said function in its destructor.
2. Defer is mostly useful for C++ code that needs to interact with C APIs, because the two are fundamentally different. A C API usually exposes functions "create_something" and "destroy_something", while the C++ pattern is to have an object with "create_something" hidden inside its constructor and "destroy_something" inside its destructor.

Related blog post from last year: https://thephd.dev/c2y-the-defer-technical-specification-its... (https://news.ycombinator.com/item?id=43379265)
It's one of the most commonly adopted features among C successor languages (D, Zig, Odin, C3, Hare, Jai); given how opinionated some of them are on these topics, I think it's safe to say it's generally well regarded in PL communities.
I'm just about to start teaching C programming classes to first-year university CS students. Would you teach `defer` straight away to manage allocated memory?
No. They need to understand memory failures. Teach them what it looks like when it's wrong. Then show them the tools to make things right. They'll never fully understand those tools if they don't understand the necessity of doing the right thing.
My suggestion is no - first have them do it the hard way. This will help them build the skills to do manual memory management where defer is not available.
Once they do learn about defer they will come to appreciate it much more.
No. But also skip malloc/free until late in the year, and when it comes to heap allocation, don't use example code that allocates and frees single structs; instead introduce concepts like arena allocators that bundle many items with the same max lifetime, pool allocators with generation-counted slots, and other memory management strategies.
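For instance, the core of an arena allocator is only a few lines (an illustrative sketch, not a production allocator):

```c
#include <stddef.h>
#include <stdlib.h>

typedef struct {
    unsigned char *base;   // one big block obtained up front
    size_t cap, used;
} arena_t;

static void *arena_alloc(arena_t *a, size_t n) {
    n = (n + 15) & ~(size_t)15;            // keep 16-byte alignment
    if (a->used + n > a->cap) return NULL; // out of arena space
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

// Every object allocated from the arena shares one lifetime:
// reset with a->used = 0, or release everything with free(a->base).
```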
If you're teaching them to write an assembler, then it may be worth teaching them C, as a fairly basic language with a straightforward/naive mapping to assembly. But for basically any other context in which you'd be teaching first-year CS students a language, C is not an ideal language to learn as a beginner. Teaching C to first-year CS students just for the heck of it is like teaching medieval alchemy to first-year chemistry students.
Absolutely, it's not their first language. In our curriculum, C programming is part of the Operating Systems course and comes after Computer Architecture, where they see assembly. So its purpose is to be low-level, to show what's under the hood. To learn programming itself they use other languages (currently Java, for better or worse, but I don't have a voice in that choice).

Learning Python first is about the same difficulty as learning C first (because the main problem is the whole concept of programming), and learning C after Python is harder than learning Python after C (because of pointers).

The point of a CS degree is to know the fundamentals of computing, not the latest best practices in programming that abstract the fundamentals away.
There is a technical specification, so hopefully it will be standard C in the next version. And gcc and clang already have implementations (gcc has had a way to do it for a long time, although the syntax is quite different).
It is not yet a technical specification, just a draft for one, but this will hopefully change this year, and the defer patch has not been merged into GCC yet. So I guess it will become part of C at some point if experience with it is good, but at this time it is an extension.
Such an addition is great. But there is something even better: destructors in C++. Anyone who writes C should consider using C++ instead, where destructors provide a more convenient way of freeing resources.
C++ destructors are implicit, while defer is explicit.
You can just look at the code in front of you to see what defer is doing. With destructors, you need to know what type you have (not always easy to tell), then find its destructor, and all the destructors of its parent classes, to work out what's going to happen.
Sure, if the situation arises frequently, it's nice to be able to design a type that "just works" in C++. But if you need to clean up reliably in just this one place, C++ destructors are a very clunky solution.
Implicitness of destructors isn't a problem, it's an advantage - it makes code shorter. Freeing resources in an explicit way creates too much boilerplate and is bug-prone.
> With destructors, you need to know what type you have (not always easy to tell), then find its destructor, and all the destructors of its parent classes, to work out what's going to happen
Isn't it a code quality issue? It should be clear from class name/description what can happen in its destructor. And if it's not clear, it's not that relevant.
It's absolutely a problem. Classically, you spend most of your time reading and debugging code, not writing it. When there's an issue pertaining to RAII, it is hidden away, potentially requiring looking at many subclasses etc.
Destructors are only comparable when you build an OnScopeExit class which calls a user-provided lambda in its destructor, which then does the cleanup work - so it's more like a workaround to build a defer feature out of C++ features.

The classical case of 'one destructor per class' would require designing the entire code base around classes, which comes with plenty of downsides.
> Anyone who writes C should consider using C++ instead
Nah thanks, been there, done that. Switching back to C from C++ about 9 years ago was one of my better decisions in life ;)
I think destructors are different, not better. A destructor can’t automatically handle the case where something doesn’t need to be cleaned up on an early return until something else occurs. Also, destructors are a lot of boilerplate for a one-off cleanup.
> A destructor can’t automatically handle the case where something doesn’t need to be cleaned up on an early return
It can. An object whose destructor does the clean-up should be created only once such clean-up becomes necessary. In the case of a file, for example, the file object should be created when the file is opened, so that it can close the file in its destructor.
> but absolutely no one is going to switch from C to C++ just for dtors
The decision would be easier if the C subset in C++ would be compatible with modern C standards instead of being a non-standard dialect of C stuck in ca. 1995.
For the cases where a destructor isn’t readily available, you can write a defer class that runs a lambda passed to its constructor in its destructor, can’t you?
Would be a bit clunky, but that can (¿somewhat?) be hidden in a macro, if desired.

https://oshub.org/projects/retros-32/posts/defer-resource-cl...
As others have commented already: if you want to use C++, use C++. I suspect the majority of C programmers neither care nor want stuff like this; I still stay with C89 because I know it will be portable anywhere, and complexities like this are completely at odds with the reason to use C in the first place.
I would say the complexity of implementing defer yourself is a bit annoying in C. However defer itself, as a language feature in a C standard, is pretty reasonable. It's a very straightforward concept and fits well within the scope of C, just as it fits within the scope of Zig. As long as it's the Zig defer, not the Golang one…

I would not introduce Zig's errdefer though. That one would need additional semantic changes in C to express errors.
It starts out small. Then before you know the language is total shit. Python is a good example.
I am observing a very distinct phenomenon where the internet makes very shallow ideas mainstream and ruins many good things that have stood the test of time.

I am not saying this is one of those instances, but what the parent comment says makes sense to me. You can see another commenter who now wants to go further and have destructors in C. Because of the internet, such voices can now reach each other, gather, and cause a change. Before, such voices would have had to get through a lot of sensible heads before they could reach each other. In other words, bad ideas got snuffed out early before the internet; now they go mainstream easily.
So you see, it starts out slow, but then more and more stuff gets added which diverges more and more from the point.
I get your point, though in the specific case of defer, looks like we both agree it's really a good move. No more spaghetti of goto err_*; in complex initialization functions.
Actually I am not sure I do. It seems to me that even though `defer` is more explicit than destructors, it still falls under "spooky action at a distance" category.
I think a lot of the really old school people don't care, but a lot of the younger people (especially those disillusioned with C++ and not fully enamored with Rust) are in fact quite happy for C to evolve and improve in conservative, simple ways (such as this one).
You're missing out on one of the best-integrated and most useful features ever added to a language as an afterthought (C99 designated initialization). Even many modern languages (e.g. Rust, Zig, C++20) don't come close when it comes to data initialization.

E.g. neither Rust, Zig nor C++20 can do this:

https://github.com/floooh/sokol-samples/blob/51f5a694f614253...

Odin gets really close but can't chain initializers (which is OK though):

https://github.com/floooh/sokol-odin/blob/d0c98fff9631946c11...
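For readers without C99 exposure, a small made-up example of the nested, order-independent initialization being referred to (the types here are hypothetical, in the spirit of the linked sokol examples):

```c
typedef struct { float r, g, b, a; } color_t;
typedef struct {
    int width, height;
    color_t clear_color;
    const char *title;
} window_desc_t;

window_desc_t desc = {
    .width  = 640,
    .height = 480,
    .clear_color = { .r = 0.2f, .g = 0.3f, .b = 0.3f, .a = 1.0f },
    .title  = "demo",
    // unnamed members are zero-initialized; order doesn't matter
};
```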
> I still stay with C89 because I know it will be portable anywhere
With respect, that sounds a bit nuts. It's been 37 years since C89; unless you're targeting computers that still have floppy drives, why give up on so many convenience features? Binary prefixes (0b), #embed, defined-width integer types, more flexibility with placing labels, static_assert for compile-time sanity checks, inline functions, declarations wherever you want, complex number support, designated initializers, countless other things that make code easier to write and to read.
Defer falls in roughly the same category. It doesn't add a whole lot of complexity, it's just a small convenience feature that doesn't add any runtime overhead.
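A few of those conveniences in one place (C11/C23 for static_assert, C23 for the binary literal):

```c
#include <assert.h>
#include <stdint.h>

static_assert(sizeof(uint32_t) == 4, "compile-time sanity check");  // C11

static inline uint32_t low_nibble(uint32_t x) {  // C99 inline
    return x & 0b1111;                           // C23 binary literal
}

static const int lut[8] = { [0] = 1, [7] = -1 }; // C99 designated initializers
```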
Not necessarily. In classic C we often build complex state machines to handle errors - especially when there are many things that need to be initialized (malloced) one after another and each might fail. Think the infamous "goto error".
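I.e. the familiar shape (a hypothetical init function; sizes are made up):

```c
#include <stdlib.h>

// The classic unwind ladder that defer is meant to replace.
int init_all(void) {
    int rc = -1;
    char *a = malloc(64); if (!a) goto out;
    char *b = malloc(64); if (!b) goto out_a;
    char *c = malloc(64); if (!c) goto out_b;

    /* ... use a, b, c ... */
    rc = 0;

    free(c);
out_b: free(b);
out_a: free(a);
out:   return rc;
}
```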
I think defer{} can simplify these flows sometimes, so it can indeed be useful for good old-style C.

The goto approach also covers some more complicated cases.
I took some shit in the comments yesterday for suggesting "you can do it with a few lines of standard C++" in another similar thread, but yet again here we are.

Defer takes 10 lines to implement in C++. [1]

You don't have to wait 50 years for a committee to introduce basic convenience features, and you don't have to use non-portable extensions until they do (and in this case __attribute__((cleanup)) has no equivalent in MSVC), if you use a remotely extensible language.

[1] https://www.gingerbill.org/article/2015/08/19/defer-in-cpp/