These may be objectively superior (I haven't tested), but I have come to realize (like so many others) that if you ever change your OS installation, set up VMs, or SSH anywhere, preferring these is just an uphill battle that never ends. I don't want to have to set these up in every new environment I operate in, or even use a mix of these on my personal computer and the traditional ones elsewhere.
Learn the classic tools, learn them well, and your life will be much easier.
Some people spend the vast majority of their time on their own machine. The gains of convenience can be worth it. And they know enough of the classic tools that it's sufficient in the rare cases when working on another server.
Not everybody is a sysadmin manually logging into lots of independent, heterogeneous servers throughout the day.
Yeah, this is basically what I do. One example: using neovim with a bunch of plugins as a daily driver, but whenever I'm on a server that has neither it nor my settings/plugins, it isn't a huge problem to run vim or even vi; most stuff works the same.
Same goes for a bunch of other tools that have "modern" alternatives but the "classic" ones are already installed/available on most default distribution setups.
Some are so vastly better that it's worth whatever small inconvenience comes with getting them installed. I know the classic tools very well, but I'll prefer fd and ripgrep every time.
For my part, the day I was confused about why "grep" couldn't find some files that were obviously there, only to realize that "ripgrep" was ignoring files listed in the gitignore, was the day I removed "ripgrep" from my system.
I never asked for such behaviour, and I have no time for pretty "modern" opinions in base software.
Often, when I read "modern", I read "immature".
I am not ready to replace my stable base utilities for some immature ones having behaviour changes.
The scripts I wrote 5 years ago must work as is.
The very first paragraph in ripgrep's README makes that behaviour very clear:
> ripgrep is a line-oriented search tool that recursively searches the current directory for a regex pattern. By default, ripgrep will respect gitignore rules and automatically skip hidden files/directories and binary files. (To disable all automatic filtering by default, use rg -uuu.)
https://github.com/BurntSushi/ripgrep
You did ask for it though. Because ripgrep prominently advertises this default behavior. And it also documents that it isn't a POSIX compatible grep. Which is quite intentional. That's not immature. That's just different design decisions. Maybe it isn't the software you're using that's immature, but your vetting process for installing new tools on your machine that is immature.
Because hey guess what: you can still use grep! So I built something different.
When I got my first Unix account [1] I was in a Gnu emacs culture and used emacs from 1989 to 2005 or so. I made the decision to switch to vi for three reasons: (1) less clash with a culture where I mostly use GUI editors that use ^S for something very different than what emacs does, (2) vim doesn't put in continuation characters that break cut-and-paste, (3) often I would help somebody out with a busted machine where emacs wasn't installed, the package database was corrupted, etc and being able to count on an editor that is already installed to resolve any emergency is helpful.
[1] Not like the time one of my friends "wardialed" every number in my local calling area and posted the list to a BBS and I found that some of them could be logged into with "uucp/uucp" and the like. I think Bell security knew he rang everybody's phone in the area but decided to let billing handle the problem because his parents had measured service.
One of the reasons I really like Nix: my setup works basically everywhere (as long as the host OS is either Linux or macOS, but those are the only 2 environments that I care about). I don't even need root access to install Nix, since there are multiple ways to install Nix rootless.
But yes, in the occasional case where I don't have Nix, I can very much use the classic tools. It is not a binary choice; you can have both.
That goes against the UNIX philosophy IMO. Tools doing "one thing and doing it well" also means that tools can and should be replaced when a superior alternative emerges. That's pretty much the whole point of simple utilities. I agree that you should learn the classic tools first as it's a huge investment for a whole career, but you absolutely should learn newer alternatives too. I don't care much for bat or eza, but some alternatives like fd (find alt) or sd (sed alt) are absolute time savers.
apt-get/pacman/dnf/brew install <everything that you need>
You'll need to install those and other tools (your favorite browser, your favorite text editor, etc.) anyway if you're changing your OS.
> or SSH anywhere
When you connect through SSH you don't have a GUI, but that's not a reason to avoid using GUI tools, for example.
> even use a mix of these on my personal computer and the traditional ones elsewhere
I can't see the problem, really. I use some of those tools and they are convenient, but it's not as if I can't work without them. For example, bat: it doesn't replace cat, it only outputs data with syntax highlighting; it makes my life easier, but if I don't have it, that's OK.
> apt-get/pacman/dnf/brew install <everything that you need>
If only it were so simple. Not every tool comes from a package with the same name (delta is git-delta, "z" is zoxide, which I'm not sure I'd remember off the top of my head when installing on a new system). On top of that, you might not like the defaults of every tool, so you'll have config files that you need to copy over or recreate (and hopefully sync between the computers where you use these tools).
That said, I do think nix provides some good solutions for this. It gives you a nice clean way to list the packages you want in a nixfile and also to set their defaults and/or provide some configuration files. It does still require some maintenance (and I choose to install the config files as editable, which is not very nix-y, but I'd rather edit them and then commit the changes to my configs repo for future deploys than have to edit and redeploy for every minor or exploratory change), but I've found it's much better than trying to maintain some sort of `apt-get install [packages]` script.
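To illustrate the naming drift: on Debian/Ubuntu, for example, the install line ends up looking something like the sketch below, with fd packaged as fd-find and delta as git-delta (and bat's binary even gets installed as batcat there); exact package names vary by release, so treat this as approximate:

    sudo apt-get install ripgrep fd-find bat git-delta zoxide fzf jq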
Strongly agreed. I don't understand why I'd want to make the >99% of my time doing things less convenient in order to make the <1% of the time, when I'm on a machine where I can't install things even in a local directory for the user I'm ssh'd into, feel less bad by comparison. It's not even a tradeoff where I'm choosing which part of the curve to optimize for; it's literally flattening the high part to make the lower overall convenience level constant.
> When you connect through SSH you don't have GUI and that's not a reason for avoiding using GUI tools, for example.
One major difference can emerge from the fact that using a tool regularly inevitably builds muscle memory.
You’re accustomed to a replacement command-line tool? Then your muscle memory will punish you hard when you’re logged into an SSH session on another machine because you’re going to try running your replacement tool eventually.
You’re used to a GUI tool? Will likely bite you much less in that scenario.
> You’re accustomed to a replacement command-line tool?
Yes.
> Then your muscle memory will punish you hard
No.
I'm also used to pt-br keyboards; it's easier to type in my native language, but it's OK if I need to use US keyboards. In terms of muscle memory, keyboards are far harder to adapt to.
A non-tech example: if I go to a Japanese restaurant, I'll use chopsticks and I'm ok with them. At home, I use forks and knives because they make my life easier. I won't force myself to use chopsticks everyday only for being prepared for Japanese restaurants.
The point is that sometimes you're SSHing to a lightweight headless server or something and you can't (or can't easily) install software.
Because 'sometimes' doesn't mean you should needlessly handcuff yourself the other 80% of the time.
I personally have an ansible playbook to set up all my commonly used tooling on pretty much any CLI environment I will use significantly; (almost) all local installs to avoid the need for root. It runs in about a minute, and I have all the niceties. If it's not worth spending that minute to run, then I won't be on the machine long enough for it to matter.
That's a niche case. And if you need to frequently SSH into a lightweight server, you'll probably be OK with the default commands even though you have the others installed in your local setup.
> Learn the classic tools, learn them well, and your life will be much easier.
Agreed, but that doesn't stop you from using/learning alternatives. Just use your preferred option, based on what's available. I realise this could be too much to apply to something like a programming language (despite this, many of us know more than one) or a graphics application, but for something like a pager, it should be trivial to switch back and forth.
Awk and sed.
I like the idea of new tools though. But knowing the building blocks is useful. The “Unix power tools” book was useful to get me up to speed.. there are so many of these useful mini tools.
Miller is one I’ve made use of (it also was available for my distro)
IMO this is very stupid: don't let the past dictate the future. UNIX is history. History is for historians; it should not be the basis that shapes the environment for engineers living in the present.
The point is that we always exist at a point on a continuum, not at some fixed time when the current standard is set in stone. I remember setting up Solaris machines in the early 2000s with the painful SysV tools that they came with and the first thing you would do is download a package of GNU coreutils. Now those utils are "standard", unless of course you're using a Mac. And newer tools are appearing (again, finally) and the folk saying to just stick with the GNU tools because they're everywhere ignore all of the effort that went into making that (mostly) the case. So yes, let's not let the history of the GNU tools dictate how we live in the present.
Well, even “Unix” had some differences (BSD switches vs SysV switches). Theoretically, POSIX was supposed to smooth that out, but it never went away. Today, people are more likely to be operating in a GNU Linux environment than anything else (that's just a market share fact, not a moral judgement, BSD lovers). Thus, for most people, GNU is the baseline.
I started a new job and spent maybe a day setting up the tools and dotfiles on my development machine in the cloud. I'm going to keep it throughout my employment so it's worth the investment. And I install most of the tools via nix package manager so I don't have to compile things or figure out how to install them on a particular Linux distribution.
Learn Ansible or similar, and you can be ~OS (OSX/Linux/even Windows) agnostic with relatively complex setups. I set mine up before agentic systems were as good as they are now, but I assume it would be relatively effortless now.
IMO, it's worth spending some time to clean up your setup for a smooth transition to new machines in the future.
Only to feel totally handicapped when logging into a busybox environment.
I'm glad I learned how to use vi, grep, sed..
My only change to an environment is the keyboard layout. I learned Colemak when I was young. Still enjoying it every day.
I have some of these tools, they are not "objectively superior". A lot of them make things prettier with colors, bargraphs, etc... It is nice on a well-configured terminal, not so much in a pipeline. Some of them are full TUIs, essentially graphical tools that run in a terminal rather than traditional command line tools.
Some of them are smart but sometimes I want dumb, for example, ripgrep respects gitignore, and often, I don't want that. Though in this case, there is an option to turn it off (-uuu). That's a common theme with these tools too, they are trying to be smart by default and you need option to make them dumb.
So no, these tools are not "objectively superior", they are generally more advanced, but it is not always what you need. They complement classic tools, but in no way replace them.
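For reference, that escalation is a single repeatable flag; as far as I remember the levels break down like this:

    rg pattern        # default: respects .gitignore, skips hidden and binary files
    rg -u pattern     # also search files matched by .gitignore (--no-ignore)
    rg -uu pattern    # also search hidden files/directories (--hidden)
    rg -uuu pattern   # also search binary files; roughly equivalent to a plain recursive grep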
Never will I ever set up tools and a home environment directly on the distro. Only in a rootfs that I can proot/toolbx/bwrap into. Not only do I not want to set things up again on a different computer, but distro upgrades have nuked "fancy" tools enough times for it not to be worth it.
Wow, that is so cool. This looks a lot more approachable than other sandboxing tools.
I know my way around vi well enough, because although XEmacs was my editor during the 1990's when working on UNIX systems, when visiting customers there was a very high probability that they only had ed and vi installed on their server systems.
Many folks nowadays don't get how lucky they are, not having to do UNIX development on a time-sharing system, although cloud systems kind of replicate the experience.
Agreed, but some are nice enough that I'll make sure I get them installed where I can. 'ag' is my go to fast grep, and I get it installed on anything I use a lot.
I indeed would not want to feel stranded with a bespoke toolkit. But I also don't think shying away from good tools is the answer. Generally I think using better tools is the way to go.
Often there are plenty of paths open to getting a decent environment as you go:
Mostly, I rely on ansible scripts to install and configure the tools I use.
One fallback I haven't seen mentioned, and one you can get a lot of mileage from: use sshfs to mount the target system locally. This allows you to use your local tools & setup effectively against another machine!
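A minimal sketch of the sshfs route (host and paths invented for the example):

    mkdir -p ~/mnt/buildbox
    sshfs devuser@buildbox:/srv/project ~/mnt/buildbox
    rg TODO ~/mnt/buildbox          # local tools, remote files
    fusermount -u ~/mnt/buildbox    # unmount when done (plain umount on macOS)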
Along those lines, Dvorak layouts are more efficient, but I use qwerty because it works pretty much everywhere (are small changes like AZERTY still a thing? Certainly our French office uses an "international" layout, and generally the main pains internationally are "@" being in the wrong place and \ not working -- for the latter you can use user@domain when logging into a Windows machine, rather than domain\user)
As someone who logs into hundreds of servers in various networks, from various customers/clients, there is so little value in using custom tooling, as they will not be available on 90% of the systems.
I have a very limited set of additional tools I tend to install on systems, and they are in my default ansible-config, so will end up on systems quickly, but I try to keep this list short and sweet.
95% of the systems I manage are debian or ubuntu, so they will use mostly the same baseline, and I then add stuff like ack, etckeeper, vim, pv, dstat.
"servers" is the key word here. Some of the tools listed on that page are just slightly "improved" versions of common sysadmin utilities, and indeed, those are probably not worth it. But some are really development tools, things that you'd install on the small number of machines where you do programming. Those might be.
The ones that leap out at me are ripgrep (a genuinely excellent recursive grepper), jq (a JSON processor - there is no alternative to this in the standard unix toolkit), and hyperfine (benchmarking).
In my last role rg and jq were included as part of our standard AMI as well as our base container images. It broadens our CVE exposure but it was undoubtedly worth it.
What's the relevance of these "as someone who ..." posts? Nobody cares that these tools don't happen to fit into your carefully curated list of tools that you install on remote computers. You can install these on your local computer to reap some benefits.
Another reason emacs as an OS (not fully, but you know) is such a great way to get used to things you have on systems. Hence the quote: "GNU is my operating system, linux is just the current kernel".
As a greybeard linux admin, I agree with you though. This is why when someone tells me they are learning linux the first thing I tell them is to just type "info" into the terminal and read the whole thing, and that will put them ahead of 90% of admins. What I don't say is why: Because knowing what tooling is available as a built-in you can modularly script around that already has good docs is basically the linux philosophy in practice.
Of course, we remember the days when systems only had vi and not even nano was a default, but these days we do idempotent CI/CD configs, so adding a TUI editor of choice should be trivial.
What are you talking about? I'm still living those days in modern-day AWS with the latest EC2 machines!
You're again confusing this website with your personal email inbox. This is a public message board, all messages you see haven't been written for you specifically - including this blog post.
Actual LOL. Indeed. I was working for a large corporation at one point and a development team was explaining their product. I asked what its differentiators were versus our competitors. The team replied that ours was written in Go. #facepalm
The Rust rewrites can become tiresome, they have become a meme at this point, but there are really good tools there too.
An example from my personal experience: I used to think that oxipng was just a faster optipng. I took a closer look recently and saw that it is more than that.
See: https://op111.net/posts/2025/09/png-compression-oxipng-optip...
If a new tool has actual performance or feature advantages, then that's the answer to "what problem does it solve", regardless of what language it's in.
That is a differentiator if your competitors are written in Python or Ruby or Bash or whatever. But yeah obviously for marketing to normal people you'd have to say "it's fast and reliable and easy to distribute" because they wouldn't know that these are properties of Go.
You can write slow unmaintainable brittle garbage in any language though. So even if your competition is literally written in Bash or whatever you should still say what your implementation actually does better - and if it's performance, back it up with something that lets me know you have actually measured the impact on real world use cases and are not just assuming "we wrote it in $language therefore it must be fast".
> You can write slow unmaintainable brittle garbage in any language though.
Sure. You can drive really slowly in a sports car. But if you're looking at travel options for a long-distance journey, are you going to pick the sports car or the bicycle?
Also I have actually yet to find slow unmaintainable brittle garbage written in Go or Rust. I'm sure it's possible but it's vastly less likely.
No. The differentiator is whatever benefits such an implementation might deliver (e.g., performance, reliability, etc.). Customers don’t start whipping out checkbooks when you say, “Ours is written in Go.”
Many of the entries do include this detail — e.g. "with syntax highlighting", "ncurses interface", and "more intuitive". I agree that "written in rust", "modern", and "better" aren't very useful!
Some of this just makes me think that they are compared against the wrong tool though. E.g.
> cat clone with syntax highlighting and git integration
doesn't make any sense because cat is not really meant for viewing files. You should be comparing your tool with the more/less/most family of tools, some of which can already do syntax highlighting or even more complex transforms.
Yup, I made that same point in another comment. Out of interest, though, how do you get syntax highlighting from any of those pagers? None of them give it to me out of the box.
I always enjoy these lists. I think most folks out there could probably successfully adopt at least one or two of these tools. For me, that’s ripgrep and jq. The former is a great drop-in replacement for grep and the latter solves a problem I needed solving. I’ll try out a few of the others on this list, too. lsd and dust both appeal to me.
I just enjoy seeing others incrementally improve on our collective tool chest. Even if the new tool isn’t of use to me, I appreciate the work that went into it. They’re wonderful tools in their own right. Often adding a few modern touches to make a great tool just a little bit better.
Thank you to those who have put in so much effort. You’re making the community objectively better.
I think many of us linux admins have such a list. Mine in particular is carefully crafted around GPL-izing my stack as much as possible. I really like the format of this ikrima.dev one though! The other stuff is great too, worth a peruse.
I'd like to read this list, but the color scheme is among the least accessible that I've ever come across. Dark, greyish-blue text with dark, bluish-grey highlighting over a dark grey background. Wow.
If any fledgling designers are here, then take note and add this to your list of examples to avoid.
I basically live in the terminal. However, every single one of these tools offers a solution to a problem that I don't have; aren't installed on my system; and mysteriously have many tens of thousands of github stars.
I genuinely don't know what is going on here.
> I basically live in the terminal. However, every single one of these tools offers a solution to a problem that I don't have; aren't installed on my system; and mysteriously have many tens of thousands of github stars.
> I genuinely don't know what is going on here.
I basically live in my music library. However, every single pop artist offers songs that I don't like, are not in my library, and mysteriously have many millions of albums sold.
I genuinely don't know what is going on here.
Joking aside, have you ever tried to use some of these tools? I used to not understand why people were using vim until I really tried.
No.
> I used to not understand why people were using vim until I really tried.
There's your problem. I respectfully suggest installing Emacs.
The core Unix toolset is so good that you can easily get by with it. Many of these tools are better, but still not necessary, and they certainly aren't widely available by default.
Out of curiosity, how would you recursively grep files, ignoring hidden files (e.g., `.git`), while only matching a certain file extension? (E.g., `rg -g '*.foo' bar`.)
I use the command line a lot too and this is one of my most common commands, and I don't know of an elegant way to do it with the builtin Unix tools.
(And I have basically the same question for finding files matching a regex or glob [ignoring the stuff I obviously don't want], e.g., `fd '.foo.*'`.)
Depends on how big the directory is. If it only contains a few files, I'd just enumerate them all with `find`, filter the results with `grep`, and perform the actual `grep` for "bar" using `xargs`:
find . -type f -name "*.foo" | grep -v '/\.' | xargs grep bar
(This one I could do from muscle memory.)
If traversing those hidden files/directories were expensive, I'd tell `find` itself to exclude them. This also lets me switch `xargs` for `find`'s own `-exec` functionality:
find . -path '*/\.*' -prune -o -type f -name "*.foo" -exec grep bar {} +
(I had to look that one up.)
Thanks, yeah, this is a good example of why I prefer the simpler interface of `rg` and `fd`. Those examples would actually be fine if this were something I only did once in a while (or in a script). But I search from the command line many times per day when I'm working, so I prefer a more streamlined interface.
For the record, I think `git grep` is probably the best builtin solution to the problem I gave, but personally I don't know off-hand how to only search for files matching a glob and to use the current directory rather than the repository root with `git grep` (both of which are must haves for me). I'd also need to learn those same commands for different source control systems besides git (I use one other VCS regularly).
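For what it's worth, and only inside a repository: if I'm remembering the pathspec syntax right, `git grep` can be limited to a glob and to the current directory at the same time, since it searches from the working directory down by default; something like:

    cd some/subdir
    git grep bar -- '*.foo'    # tracked *.foo files under the current directory only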
>Those examples would actually be fine if this were something I only did once in a while (or in a script). But I search from the command line many times per day when I'm working, so I prefer a more streamlined interface.
Makes sense. If I had to do this frequently, I'd add a function/alias encapsulating that `find` incantation to my .bashrc, which I keep in version control along with other configuration files in my home directory. That way, when moving to a new environment, I can just clone that repo into a fresh home directory and most of my customizations work out-of-the-box.
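For illustration, the wrapper can be tiny; something like this (name and interface made up for the example):

    # usage: srch '*.foo' bar   -> grep for "bar" in non-hidden *.foo files
    srch() {
        local glob=$1; shift
        find . -path '*/\.*' -prune -o -type f -name "$glob" -exec grep "$@" {} +
    }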
Yeah I do the same sometimes, at the risk of going too deep into personal preference. A couple of notes about that approach:
1. I don't recommend using shell functions or aliases for this (e.g., in `bashrc`) because then these scripts can't be called from other contexts, e.g., Vim's and Emacs's built-in support for shell commands. This can easily be solved by creating scripts that can be called from anywhere (my personal collection of these scripts is here: https://github.com/robenkleene/Dotfiles/tree/master/scripts). Personally, I only use Bash functions for things that have to do with Bash's runtime state (e.g., augmenting PATH is a common one).
2. The more important part, though, is that I don't always want to search in `*.foo`; I want a flexible, well-designed API that allows me to decide on the fly what to search.
#2 is particularly important and drifts into the philosophy of tooling: a mistake I used to make was building my workflow of today into customizations like scripts. This is a bad idea because then the scripts aren't useful as your tasks change, and hopefully your tasks are growing in complexity over time. I.e., don't choose your tools based on your workflow today, otherwise you're building in limitations. Use powerful tools that will support you no matter what task you're performing, that scale practically infinitely. "The measure of a bookshelf is not what has been read, but what remains to be read."
The one issue with this approach is that it would still traverse all hidden folders, which could be expensive (e.g. in a git repo with an enormous revision history in `.git/`). `-not -path ...` just prevents entities from being printed, not being traversed. To actually prevent traversal, you need to use `-prune`.
Curious if that answers the "I genuinely don't know what is going on here" then? Not searching hidden files (or third-party dependencies, which `rg` also skips automatically with its ignore parsing) isn't just a nice-to-have, it's mandatory for a number of tasks a software engineer might be performing on a code base?
Hits in hidden files are not really a pain point for me.
I've seen an online radio player in Go which was unusably slow on my Atom N270 due to its badly coded ANSI audio visualization FX using floating-point math. Meanwhile, with Cava or another visualizer and mpd+mpc, I could do the same using 200x less resources.
Many of us just don't use JSON in our day jobs, weird I know, but true.
The only thing I use JQ for at work is parsing the copilot API response so I remember what the model names are - that's it! TBH, I could just skip it and read the json
I find the opposite to be true. Most of these are really just reinventing the wheel of foundational GNU tools that are really powerful provided one has spent some time on them.
It's like people don't even know why people use or want these "modern" tools.
It's called "sane defaults", and improved UX.
Those "foundational GNU tools" just suck, sure, people are familiar with them and they are everywhere, but they just plain suck.
For many common operations you'd want to do by default with grep/find and so on, you have to type mountains of random gibberish to get it done. And that random gibberish isn't something that rolls off your tongue either, so at minimum you'd define a truckload of aliases.
OR you can use a tool(s) that has marginally "sane defaults" and marginally sane UX out of the box.
It really isn't that complicated.
This has nothing to do with "rust".
It looks quite fancy, but I actually like it more for its functionality, particularly its tree view for navigating the process list. I'm not a big fan of full multicolor in these kinds of tools, and so appreciate how easy it is to flip to a greyscale mode from the built-in colour schemes (even from the TUI settings menu).
So much talk about paying open source developers and when someone actually does something about it and try to make some money, it's again not good enough.
On the contrary, that's exactly what “modern” sounds like. I wonder when all those tools will go unmaintained. Coreutils, with all their problems, are maintained since before authors of many listed tools were born.
I’m on a Mac, and some of the default tooling feels dated: GNU coreutils and friends are often stuck around mid-2000s versions. Rather than replace or fight against the system tools, I supplement them with a few extras. Honestly, most are marginal upgrades over what macOS ships with, except for fzf, which is a huge productivity boost. Fuzzy-finding through my shell history or using interactive autocompletion makes a noticeable difference day to day.
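The fzf part amounts to a couple of lines of shell setup (the exact incantation depends on the fzf version and how it was installed); roughly:

    eval "$(fzf --bash)"   # recent fzf; older installs ship key-bindings/completion scripts to source instead
    # Ctrl-R: fuzzy history search, Ctrl-T: fuzzy file picker, **<Tab>: fuzzy completion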
>some of the default tooling feels dated: GNU coreutils and friends are often stuck around mid-2000s versions
That’s because they’re not GNU coreutils, they’re BSD coreutils, which are spartan by design. (FWIW, this is one of my theories for why Linux/GNU dominated BSD: the default user experience of the former is just so much richer, even though the system architecture of the latter is arguably superior.)
qq should be on this list. It's like jq but works with multiple file formats, including JSON, YAML, XML, &c. and has a really cool interactive TUI mode.
https://github.com/JFryy/qq
Every time such a list is posted, it tends to generate a lot of debate, but I do think there are at least 2 tools that are really a good addition to any terminal:
`fd`: first, I find that the argument semantics are way better than `find`'s, but that is more a bonus than a real killer feature. It being much, much faster than `find` on most setups, I would consider a valuable feature. But the killer feature for me is the `-x` argument. It allows calling another command on each individual search result, which `find` can also do with `xargs` and co. But `fd` provides a very nice placeholder syntax[0], which removes the need to mess with `basename` and co. to parse the filename and make a new one, and it executes in parallel. For example, it makes converting a batch of images a fast and readable one-liner: `fd -e jpg -x cjxl {} {.}.jxl`
`rg` a.k.a. `ripgrep`: honestly, it is just about the speed. It is so much faster than `grep` when searching through a directory that it opens up a lot of possibilities. Like, searching for `isLoading` on my frontend (~3444 files) is instant with rg (less than 0.10s) but takes a few minutes with grep.
But there is one other thing that I really like with `ripgrep` and that I think should be a feature of any "modern" CLI tool: it can format its output as JSON. Not that I am a big fan of JSON, but at least it is a well-defined exchange format. "Classic" CLI tools just output a "human-readable" format which might happen to be "machine-readable" if you mess with `awk` and `sed` enough, but that makes piping and scripting that much more annoying and error- and bug-prone. Being able to output JSON, `jq` it, and feed it to the next tool is so much better and feels like the missing link of the terminal.
The big advantage of the CLI is that it is composable and scriptable by default. But it is missing a common exchange format to pass data, and this is what you have to wrangle with a lot of the time when scripting. Having JSON, never mind all the gripes I have with that format, really ties everything together.
Also, an honorable mention for `zellij`, which I find to be a much saner, UX-wise, alternative to `tmux`, and the `helix` text editor, which for me is neovim but with, again, a better UX (especially for beginners) and a lot more batteries-included features, while remaining faster (in my experience) than nvim with the matching plugins for feature parity.
EDIT: I would also add difftastic ( https://github.com/Wilfred/difftastic ), which is a syntax-aware diff tool. I don't use it much, but it does make some diffs so much easier to read.
[0] https://github.com/sharkdp/fd?tab=readme-ov-file#placeholder...
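As a small sketch of the JSON point above (field names as I recall them from ripgrep's --json output, so treat this as approximate):

    rg --json isLoading | jq -r 'select(.type == "match") | .data.path.text' | sort -u   # unique files with a match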
> Like, searching for `isLoading` on my frontend (~3444 files) is instant with rg (less than 0.10s) but takes a few minutes with grep.
grep will try to search inside .git. If your project is Javascript, it might be searching inside node_modules, or .venv if Python. ripgrep ignores hidden files, .gitignore and .ignore. You could try using `git grep` instead. ripgrep will still be faster, but the difference won't be as dramatic.
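If you want to put numbers on that comparison, hyperfine (also on the list) makes it easy; assuming the pattern actually matches so every command exits cleanly:

    hyperfine --warmup 1 'rg isLoading' 'git grep isLoading' 'grep -r isLoading .'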
> But the killer feature for me is the `-x` argument. It allows calling another command on each individual search result, which `find` can also do with `xargs` and co. But `fd` provides a very nice placeholder syntax[0], which removes the need to mess with `basename` and co. to parse the filename and make a new one, and it executes in parallel. For example, it makes converting a batch of images a fast and readable one-liner: `fd -e jpg -x cjxl {} {.}.jxl`
That was inherited from find, which has "-exec". It even uses the same placeholder, {}, though I'm not sure about {.}.
`find` only supports `{}`; it does not support `{/}`, `{//}`, `{.}`, etc., which is why you often need to do some parsing magic to replicate basic things such as "the full path without the extension", "only the filename without the extension", etc.
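A side-by-side of the jpg-to-jxl example makes that concrete; the `find` version below is one way to get a similar effect (modulo fd's ignore rules and parallelism), leaning on an inline shell loop for the suffix handling:

    fd -e jpg -x cjxl {} {.}.jxl
    find . -name '*.jpg' -exec sh -c 'for f; do cjxl "$f" "${f%.jpg}.jxl"; done' sh {} +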
It would be good to have an indicator of whether a tool is available with your distro by default, or what package you'll need to install it, since all tools are only as useful as they are available…
duf is pretty good for drive space; it has some nice colours and graphs. But it's also not as useful for feeding into other tools.
btop has been pretty good for watching a machine to get an overview of everything going on, the latest version has cleaned up how the lazy CPU process listing works.
zoxide is good for cding around the system to the same places. It remembers directories so you avoid typing full paths.
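For anyone who hasn't tried zoxide, the day-to-day usage is roughly this (after adding its shell hook):

    eval "$(zoxide init bash)"   # in .bashrc; zsh/fish equivalents exist
    z frontend                   # jump to the best-ranked directory matching "frontend"
    zi frontend                  # interactive picker (via fzf) when several directories match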
Modern doesn't always mean better. A better replacement for mplayer was mpv, and in some cases mplayer was faster than mpv (think about legacy machines).
- bat is a useless cat. Cat concatenates files. ANSI colour breaks that.
- alias ls='ls -Fh', problem solved. Now you have * for executables, / for directories and so on.
- ncdu is fine, perfect for what it does
- iomenu is much faster than fzf and it works almost the same
- jq is fine, it's a good example of a new Unix tool
- micro is far slower than even vim
- instead of nnn, sff https://github.com/sylphenix/sff with soap(1) (an xdg-open replacement) from https://2f30.org creates a mega-fast environment. Add MuPDF and sxiv, and nnn and friends will look really slow compared to these.
Yes, you need to set config.h under both sff and soap, but they will run much, much faster than any Rust tool on legacy machines.
> bat is a useless cat. Cat concatenates files. ANSI colour breaks that.
It's useless as a cat replacement, I agree. The article really shouldn't call it that, although the program's GitHub page does self-describe it as "a cat clone". It's more of a syntax highlighter combined with a git diff viewer (I do have an issue with that; it should be two separate programs, not one).
> bat is a useless cat. Cat concatenates files. ANSI colour breaks that.
From the README:
>Whenever bat detects a non-interactive terminal (i.e. when you pipe into another process or into a file), bat will act as a drop-in replacement for cat and fall back to printing the plain file contents
bat works as normal cat for normal uses of cat and a better cat for all those "useless cat" situations we find ourselves in.
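In other words, both behaviours come out of the same binary; a quick illustration (file name made up):

    bat src/main.rs              # interactive: syntax highlighting, line numbers, pager
    bat src/main.rs | grep fn    # piped: plain contents, effectively cat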
I can't see bat as a "useless cat" or a replacement for cat, except for reading source code in the terminal. It's more like a less with syntax highlighting, or a read-only vim.
I agree with this. cat is great for "cating"; bat is great for throwing shit on the terminal in a fashion that makes it semantically easier to reason with. Two different use cases.
I think that's because it's super common to use cat to quickly view a file. It has the nice property of using your terminal's scrollback rather than putting you into a pager application. For that use-case it is an alternative to cat.
That said, I've never really cared much about missing syntax highlighting for cases where I'm viewing file contents with cat. So the tool doesn't really serve a purpose for me and instead I'll continue to load up vim/neovim if I want to view a file with syntax highlighting.
E.g. I have ls aliased to eza as part of my custom set of configuration scripts. eza pretty much works like ls in most scenarios.
If I'm in an environment which I control and is all configured as I like it, then I get a shinier ls with some nice defaults.
If I'm in another environment then ls still works without any extra thought, and the muscle memory is the same, and I haven't lost anything.
If there's a tool which works very differently to the standard suite, then it really has to be pulling its weight before I consider using it.
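One way to express that kind of alias so it degrades gracefully, assuming the usual shell-config approach (the exact eza flags here are a matter of taste):

    if command -v eza >/dev/null 2>&1; then
        alias ls='eza --group-directories-first'
    fi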
https://difftastic.wilfred.me.uk/
It's a huge improvement over purely character-based diffs.
exa modern replacement for ls/tree, not maintained
"not maintained" doesn't smell "modern" to me...
eza: https://github.com/eza-community/eza
Yeeeah, nope.
Damned if you do and damned if you don't.
Then I tried them and it was such a night and day performance difference that they're now immediate installs on any new system I use.
I know I have hyperfine, fd, and eza on my Windows 11, and maybe some more I cannot remember right now.
They are super easy to install too, using winget.
Got featured here on HN a few weeks ago.