ls -l | awk '{print $3}'
That’s typical usage of Awk, where you use it in place of cut because you can’t be bothered to remember the right flags for cut. But Awk, by itself, can often replace entire pipelines. Reduce your pipeline to a single Awk invocation! The only drawback is that very few people know Awk well enough to do this, which means that if you write non-trivial Awk code, nobody on your team will be able to read it.
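For instance (with a made-up log file; field 1 is a status code), a classic cut | sort | uniq -c counting pipeline collapses into a single awk process with an associative array:

```shell
# Hypothetical sample data: a status code, then a path.
printf '200 /a\n404 /b\n200 /c\n500 /d\n404 /e\n' > /tmp/access.log

# Pipeline version: three extra processes.
cut -d' ' -f1 /tmp/access.log | sort | uniq -c

# Single awk invocation: count into an associative array.
# (Iteration order of "for (c in count)" is unspecified.)
awk '{ count[$1]++ } END { for (c in count) print count[c], c }' /tmp/access.log
```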
Every once in a while, I write some tool in Awk or figure out how to rewrite some pipeline as Awk. It’s an enrichment activity for me, like those toys they put in animal habitats at the zoo.
Before I learned Perl, I used to write non-trivial awk programs. Associative arrays and other features are indeed very powerful. I'm no longer fluent, but I think I could still read a sophisticated awk script.
Even sed can be used for some fancy processing (i.e., scripts), if one knows regex well.
Sort of! A lot of AWK is easy to read even if you don't remember how to write it. There are a few quirks like how gsub modifies its target in-place (and how its default target is $0), and of course understanding the overall pattern-action layout. But I think most reasonable (not too clever, not too complicated) AWK scripts would also be readable to a typical programmer even if they don't know AWK specifically.
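The two quirks mentioned, made concrete (inputs here are made up):

```shell
# gsub(re, repl [, target]) edits its target in place and returns the
# number of substitutions; with no third argument, the target is $0.
echo 'foo bar foo' | awk '{ n = gsub(/foo/, "baz"); print n, $0 }'
# prints: 2 baz bar baz

# Rewriting $0 also makes awk re-split the record, so the fields
# reflect the edited text afterwards:
echo 'foo bar foo' | awk '{ gsub(/foo/, "baz"); print $3 }'
# prints: baz
```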
I then re-wrote it in awk out of curiosity and it looked almost the same.
Crazy bash expansion syntax and command-line parser abuse were replaced by actual proper functions, but the whole thing, when done, was almost a line-by-line in-place replacement, so almost the same LOC and structure.
Both versions share most of the same advantages over something like Python. Both are single-binary interpreters that are always already installed. Both versions will run on basically any system, any platform, any version (going forward at least) without needing to install anything, let alone anything as gobsmackingly ridiculous as pip or venv. (1)
But the awk version is actually readable.
And unlike bash, awk already pretty much stopped changing very much decades ago, so not only is it forward compatible, it's pretty backwards compatible too.
Not that that is generally a thing you have to worry about. We don't make new machines that are older than some code we wrote 5 years ago. Old bash or awk code always works on the next new machine, and that's all you ever need(2).
There is gnu vs bsd vs posix vs mawk/nawk but that's not much of a problem and it's not a constantly breaking new-version problem but the same gnu vs posix differences for the last 30 years. You have to knowingly go out of your way to use mawk etc.
(1) With bash, for example, everything is on bash 5 or at worst 4, except that a brand-new Mac today still ships with bash 3, so you can actually run into backwards compatibility problems in bash.
(2) And bash does actually have plugins and extensions, and they do vary from system to system, so you do have things you either need to avoid using or run into exactly the same breakage as Python or Ruby or whatever.
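The gnu vs posix vs mawk point is mostly about extensions: a sketch of where the line sits (gensub() is a gawk extension, so it's the usual way a script accidentally stops being portable).

```shell
# POSIX awk only: sub()/gsub()/substr() behave the same in gawk,
# mawk, nawk/BWK awk, and BusyBox awk.
echo 'a-b-c' | awk '{ gsub(/-/, ":"); print }'
# prints: a:b:c

# gawk-only extensions such as gensub() are where you knowingly
# opt out of portability (commented out here):
#   echo 'a-b-c' | gawk '{ print gensub(/-/, ":", "g") }'
```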
For writing a program vs gluing other programs together, really awk should be the goat.
let's have a bash and bash that backwards compatibility in bash.
Plain text accounting program in awk https://github.com/benjaminogles/ledger.bash
Literate programming/static site generator in awk https://github.com/benjaminogles/lit
Although the latter just uses awk as a weird shell and maintains a couple of child processes for converting md to html and executing code blocks, with output piped into the document.
Even if you remember the flags, cut(1) will not be able to handle ls -l, or any command that uses spaces for aligning text into fixed-width columns.
Unlike awk(1), cut(1) only works with delimiters that are a single character. Meaning, a run of spaces will be treated as several empty fields. And, depending on factors you don't control, every line will have a different number of fields in it, and the data you need to extract will be in a different field.
You can either switch to awk(1), because its default field separator treats runs of spaces as one, or squeeze them with tr(1) first:
ls -l | tr -s ' ' | cut -d' ' -f3
You don't have to use fields.
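To make the empty-field problem concrete (the ls -l line below is a made-up sample; "alice" is hypothetical):

```shell
line='drwxr-xr-x  2 alice staff'

# cut sees the run of two spaces as an empty field, shifting everything:
echo "$line" | cut -d' ' -f3
# prints: 2

# Squeezing the spaces first restores the expected field numbering:
echo "$line" | tr -s ' ' | cut -d' ' -f3
# prints: alice

# awk's default field splitting treats the run of spaces as one separator:
echo "$line" | awk '{ print $3 }'
# prints: alice
```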
$ ls -l | cut -c 35-41
22
4096
4096
4096
4096
4096
4096
68
456
690
7926
8503
19914
$ ls -l | cut -c 35-41
6 Nov 1
6 Nov
6 Nov 1
6 Nov 1
What were you expecting? That your character ranges in ls would match mine?
I would expect the command to work in any directory. Try a few different directories on your computer and you'll see that it won't work in some of them.
But ... why expect that? That's not what "character ranges" mean.
I mean, I was only trying to clarify that `cut` is not limited to fields only.
very fast, highly underrated language
I'm not sure how good it would be for pipelines, if a step should fail, or if a step should need to resume, etc.
This pipeline may be significantly reduced by replacing the cut invocations with awk, folding grep into awk, and using awk's gsub in place of tr.
$ echo token:abc:def | grep -E ^token | cut -d: -f2
abc
$ echo token:abc:def | awk -F: '/^token/ { print $2 }'
abc
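As for gsub in place of tr, a small sketch (made-up input):

```shell
# tr version: translate both separators to colons in a second process.
echo 'a,b;c' | tr ',;' '::'
# prints: a:b:c

# awk version: gsub does the same substitution in the same process.
echo 'a,b;c' | awk '{ gsub(/[,;]/, ":"); print }'
# prints: a:b:c
```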
Conditions don't have to be regular expressions. For example:
$ echo "$CSV"
foo:24
bar:15
baz:49
$ echo "$CSV" | awk -F: '$2 > 20 { print $1 }'
foo
baz
//d
You can get a list of them with a single Awk line:
awk -F'//d[[:space:]]*' 'NF > 1 {print FILENAME ":" FNR " " $2}' source/*.c
You can even create a GDB script pretty easily. (IMO, it's easier still to configure your editor to support breakpoints, but I'm not the one who chose to do it this way.)
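A sketch of the GDB-script idea, assuming the //d tagging convention above (file paths and contents here are hypothetical): awk emits standard "break file:line" commands, which gdb can load with its -x flag.

```shell
# A made-up source file with //d-tagged lines.
mkdir -p /tmp/dsrc
cat > /tmp/dsrc/main.c <<'EOF'
int main(void) {
    int x = 1; //d check initial value
    return x; //d check return
}
EOF

# One GDB breakpoint command per tagged line:
awk '/\/\/d/ { print "break " FILENAME ":" FNR }' /tmp/dsrc/main.c > /tmp/breaks.gdb
cat /tmp/breaks.gdb
# break /tmp/dsrc/main.c:2
# break /tmp/dsrc/main.c:3
# Then: gdb -x /tmp/breaks.gdb ./prog
```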
Would you have //d<0xA0>rest of comment?
Or some fancy Unicode space made using several UTF-8 bytes?
Because it’s the one I remembered first, it worked, and I didn’t think that it needed any improvement. In fact, I still don’t think it needs any improvement.
If tabs are supported,
[ \t]
is still shorter than [[:space:]]
and if we include all the "isspace" characters from ASCII (vertical tab, form feed, embedded carriage return) except for the line feed, which would never occur since it separates lines, we just break even on pure character count: [ \t\v\f\r]
TVFR all fall under the left hand, backslash under the right, and nothing requires Shift. The resulting character class does exactly the same thing under any locale.
The isblank function tests for any character that is a standard blank character or is one of a locale-specific set of characters for which isspace is true and that is used to separate words within a line of text. The standard blank characters are the following: space (’ ’), and horizontal tab (’\t’). In the "C" locale, isblank returns true only for the standard blank characters.
[:blank:] is only the same thing as [\t ] (tab space) if you run your scripts and Awk and everything in the "C" locale.
cat /etc/passwd | \
awk -v n=10 '{ lines[NR] = $0 }
END{
for (i = NR - n + 1; i <= NR; i++)
if (i > 0) print lines[i]
}'
And there's also tail -f; how would you go about doing that? A while loop that sleeps and reopens the file? Yuck.
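The END-loop version keeps the whole file in the lines array; a ring buffer indexed by NR % n (same idea, just a sketch) caps memory at n lines:

```shell
# Keep only the last n lines: slot NR % n always holds the most
# recent line whose number is congruent to it modulo n.
seq 1 100 | awk -v n=10 '
  { ring[NR % n] = $0 }
  END {
    for (i = NR - n + 1; i <= NR; i++)
      if (i > 0) print ring[i % n]
  }'
# prints 91 through 100
```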
> ls -l | get user
┌────┬──────┐
│ 0 │ cube │
│ 1 │ cube │
│ 2 │ cube │
│ 3 │ cube │
│ 4 │ cube │
│ 5 │ cube │
│ 6 │ cube │
│ 7 │ cube │
│ 8 │ cube │
│ 9 │ cube │
│ 10 │ cube │
│ 11 │ cube │
│ 12 │ cube │
│ 13 │ cube │
│ 14 │ cube │
│ 15 │ cube │
└────┴──────┘
You don't need to memorize bad tools' quirks. You can just use good tools.
https://nushell.sh - try Nushell now! It's like PowerShell, if it was good.
MIT licensed.
https://learn.microsoft.com/en-us/powershell/scripting/insta...
For TSV, use the --separator flag.
So, I'm curious. What's the Nushell reimplementation of the 'crash-dump.awk' script at the end of the "Awk in 20 Minutes" article on ferd.ca ? Do note that "I simply won't deal with weirdly-structured data." isn't an option.
The structure can be a bit confusing if you've only seen one-liners, because it has a lot of defaults that kick in when things are left unspecified.
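Those defaults in one place (sample input made up):

```shell
# A missing action defaults to { print $0 }:
printf 'one\ntwo\nthree\n' | awk '/t/'
# prints: two, three

# A missing pattern matches every line:
printf 'one\ntwo\nthree\n' | awk '{ print NR }'
# prints: 1 2 3, one per line

# "1" is a pattern that is always true, hence the classic cat-in-awk:
printf 'one\ntwo\n' | awk '1'
```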
The pleasant surprise from learning to use awk was that bpftrace suddenly became much more understandable and easier to write as well, because it's partially inspired by awk.
Aside from AWK being a handy language to know, understanding the ideas behind it from a language design and use case perspective can help open your eyes to new constructs and ideas.
Certainly sed will outlive me
sed is a time-saver, enabling computer users to make the most of the time they have left
AWK goodies (git clone --recursive):
https://git.luxferre.top/nnfc/
stat -c %U *
However, given what I've been able to accomplish with Claude Code, I no longer find it necessary to know any details, tips, or tricks, or to really learn anything more (at least for the types of projects I am involved in for my own benefit).
Update: Would love to know why this was downvoted...
Making a buck off the disinterested is ok, being disinterested yourself isn't.
The reason is (yes I will be so bold as to speak for all on this one) both using ai to do your thinking for you, and essentially advocating to any readers to do the same simply by writing how well it works for you. Some people find this actively bad, of negative value, and some find it merely utterly uninteresting, of no value, and both responses produce downvotes.
But it's automatic that you can not see this. If you recognized any problem, you would not be doing it, or at the very least would not describe it as anything but an embarrassing admission, like talking about a guilty pleasure vs a wholesome good thing.
So don't bother asking "What's wrong with using this tool that works vs any other tool that works?" If you have to ask... There are several things wrong, not just one.
Or for some it could just be that "I used to use awk but now I just use ___" just doesn't add anything to a discussion about awk. "I used to use awk a lot but now I just use ruby". Ok? So what? Some people go as far as to downvote for that.
Also, now that you've whined about downvotes, I wouldn't be surprised if that itself isn't the cause of some, because it absolutely does deserve it.
There might possibly also be at least some just from "I'm not a programmer but here's my thoughts on this programming topic", though that isn't very wrong in my own opinion. You even say you've actually used awk a lot, so as far as I'm concerned you can absolutely talk about awk and probably don't need to be so humble as to deny yourself as a programmer. It's admirable to avoid making claims about yourself, but I bet a bystander would call you at least a programmer, even if we'll leave the actual level of sophistication unspecified.
Since I wrote this comment, I did not up- or downvote you myself. But for the record, I would have downvoted for the AI.