> You effortlessly wield clever programming techniques today that would've baffled your younger self. (If not, then I'm afraid you stopped evolving as a programmer long ago.)
I think a better assessment of how well you've evolved as a programmer is how simple you can make the code. It takes real intelligence and flair to simplify the problem as much as possible, and then write the code to be boringly simple and easy to follow by a junior developer or AI agent.
If you're wielding increasingly clever programming techniques, then you're evolving in the wrong direction.
The best code, e.g. for embedded systems, is as simple as it can possibly be, so that it's maintainable and, e.g., so the compiler can optimise it well, possibly across multiple targets. Sometimes something very clever is needed, but the scope of that cleverness should always be minimised and weighed against the downsides.
Let me tell you about a key method in the root pricing class for the derivs/credit desk of a major international bank that was all very clever ... and wrong ... as was its sole comment ... and not entirely coincidentally that desk has gone and its host brand also...
This feels like a lot of rationalization for the purpose of excusing writing exactly the sort of code that Kernighan advised against.
Advising against writing complex code is not advising against learning.
The person who solves a hard problem correctly using simple code has generally spent more time learning than the person who solves it using complex code.
Looking at all he has done, I don't think he means "complex" when he says "clever". He's not advocating for (and most likely against) the architecture-astronautism of overengineering that some people seem to be associating with "clever" here.
He means code that appears indecipherable at first glance, but then once you see how it works, you're enlightened. Simple and efficient code can be "clever".
Good code should not be immediately understandable. Machines that make pasta do not look like humans making pasta. The same goes for code: good code does things in a machine way, and it won't look natural.
Example: convert RGB to HSV. If you look around for a formula, you'll likely find one that starts so:
cmin = min(r, g, b);
cmax = max(r, g, b);
Looks very natural to a human. Thing is, as we compute 'cmin', we also compute, or nearly compute, 'cmax'. So if we rewrite this for a machine, we should merge the two into something that will be far less clear at first glance. Yet it will be better and perform fewer operations (the rest of the conversion is even more interesting, but won't fit into a comment).
Yes, I agree this is true in some (many?) cases. But it is also true that sometimes the more complex solution is better, either for performance reasons or because it makes things simpler for users/API callers.
Yes, there's a valid argument that simple code doesn't always give the best performance. Optimizing simple code usually makes it more complex.
But I think the main point stands. There's an old saying that giving a 60-minute presentation is easy, while doing one in 15 minutes is hard. In other words, writing "clever" (complicated) code is easy. Distilling it down to something simple is hard.
So the final result of any coding might be "complex", "simplified from complex", or "optimized from simple".
The first and third iterations are superficially similar, although likely different in quality.
While I agree with the point about improving skills, I think there's a distinction to be made between artistic code and engineering code. Linus Åkesson writes some exceptionally clever code, but it's artistic code. The cleverness is both essential to the artistic effect and unlikely to break anything important.
But I wouldn't want my OS written like that. In engineering code, the only benefit of cleverness is better performance, and the risk is unreliability. My previous computer was a lot slower and it already did everything I need, so I'm willing to sacrifice a lot of performance for reliability. Most software is written so wastefully that it's usually possible to make up for the lost performance without cleverness anyway.
I like this insight, even though I think they are pushing Kernighan's quip a little too far.
I take away two ideas:
1. Always be learning. I think everyone believes this, but we often come up with plausible reasons to stick to what we know. This is a good reminder that we should fight that impulse and put in the effort to learn.
2. Always be fearless. This, I think, is the key insight. Fear is easy. We fear the unknown, whether they be APIs or someone else's code. We fear errors, particularly when they have real-world consequences. And we fear complexity, because we think we might not be able to deal with it. But the opposite of fear isn't recklessness, it's confidence. We should be confident that we will figure it out. And even if we don't figure it out, we should be confident that we can revert the code. Face your fears and grow.
> Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.
It's worse than that. It might not be you who has to debug it, but someone else. Maybe after you've already left the company. Maybe at 3 AM after a pager alert in production...
Why does everybody conflate "twice as hard" with "need to be twice as clever"? Why does nobody contemplate taking twice the time, a team of twice as many people, or debugging tools twice as powerful or twice as expensive?
If debugging is twice as hard as writing code, we have at least two choices. One is to write simpler code. But the other is not to debug code at all, which may be achieved by using a programming language far better than C, one that allows catching (almost) all bugs at compile time.
Fifty years of widespread C usage has shown that just trying to write error-free C doesn't work. But surprisingly, some people still believe it's possible.
> Fifty years of widespread C usage has shown that just trying to write error-free C doesn't work.
Millions upon millions of lines of C code have, over decades, controlled (and still control) things around you that could kill you or fail catastrophically: cars, microwaves, industrial machinery, munitions, aircraft systems ... with so few errors attributable to C that I can only think of one prominent example.
So sure, you can get bugs written in C. In practice, the development process is more important to fault-reduction than the language chosen. And yes, I speak from a place of experience, having spent considerable parts of my career in embedded systems.
Writing without errors using other languages also doesn't work. And if you go towards formal verification (which also does not completely avoid errors), C has good tools.
By using a better language you avoid the errors typical of C, which usually require debugging. Logical errors may still happen, but they are easy to identify without even running a debugger.
I am very happy and sad for people who will never debug their own code for days to figure out subtle bugs. Happy because they won't endure the torture, sad because an LLM took away their opportunity to learn and better themselves.
This article can be summarised in one word: learning. I've noticed over the years that there seems to be a growing divide amongst programmers, between those who believe in learning, and those who don't (and actively try to avoid it); unfortunately the latter has become a majority position, but I still try to show others this article when they don't understand code that I've written and would rather I stoop to their level.
A look around the site at what else he has accomplished, should be enough evidence that he isn't just a charlatan, unlike some others who have made a consulting career out of spouting pompous hot air about methodology.
> You effortlessly wield clever programming techniques today that would've baffled your younger self. (If not, then I'm afraid you stopped evolving as a programmer long ago.)
... Perhaps if we allow that "clever techniques" can yield simpler results than my former self did.