Not on macOS. You can do it easily by just invoking /usr/bin/python3, but you’ll get a (short and non-threatening) dialog to install the Xcode Command-Line Developer Tools. For macOS environments, JavaScript for Automation (i.e. JXA, i.e. /usr/bin/osascript -l JavaScript) is a better choice.
However, since Sequoia, jq is now installed by default on macOS.
Like: printing all but one column somewhere in the middle. It turns into long, long commands that really pull away from the spirit of quick, throwaway Unix experimentation.
jq and sql both have the same problem :)
cut -d " " -f1-2,4-5 file.txt
where file.txt is: one two three four five
and the return is: one two four five
$ echo "one two three four five" | awk '{$3="";print}'
one two  four five
(note the doubled space left behind: assigning to $3 makes awk rebuild $0 with a single OFS between every field, including the now-empty one)
echo "one two three four five" | awk '{$3="\b"; print}'
Inserts the backspace character (^H), which you can then remove, along with the preceding space, with a substitution: awk '{$3="\b"; sub(/ \b/, ""); print}'
You can, of course, use an arbitrary sentinel value for the field to be deleted. Should work in gawk and BWK awk.
sed 's/  / /g'
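To illustrate the sentinel approach: the sentinel string here (`@@@`) is an arbitrary choice, assumed not to occur in the data.

```shell
# Replace the third field with a sentinel, then delete the sentinel together
# with the preceding space; "@@@" is a made-up marker, any unused string works.
echo "one two three four five" | awk '{$3="@@@"; sub(/ @@@/, ""); print}'
# → one two four five
```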
awk '{$1=$1};1'
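That idiom is the classic whitespace normalizer, and is easy to demonstrate:

```shell
# Assigning $1 to itself forces awk to rebuild $0, joining the fields with a
# single OFS (a space); the trailing `1` is a true pattern that prints each line.
printf 'one   two\t three \n' | awk '{$1=$1};1'
# → one two three
```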
Whence perl.
The following command is handy for grepping the output:
cat mydata.json | json_pp
Within infrastructures where perl is installed on 95% of hosts, that 5% really bites you in the ass and leads to infrastructure rot very quickly. You're kinda stuck writing and maintaining two separate scripts to do the same thing.
With Perl, I find that a base installation is almost always available, but many packages might not be.
(Once had a coworker reject a PR I wrote because I included a bash builtin in a deployment script. He said that python is more likely to be installed than bash, so we should not use bash. These debates are funny sometimes.)
That's why I use Perl instead (besides some short one liners in awk, which in some cases are even shorter than the Perl version) and do my JSON parsing in Perl.
This
diff -rs a/ b/ | awk '/identical/ {print $4}' | xargs rm
is one of my often used awk one liners. Unless some filenames contain e.g. whitespace, then it's Perl again
And I use this "historic" one liner only when I know about the contents of both directories. As soon as I need a "safer" solution I use a Perl script and pattern matching, as I said.
diff -rs a/ b/ | perl -ane '/identical$/ && unlink $F[3]'
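A self-contained demo of that pipeline (awk variant) in a throwaway sandbox; the directory and file names are made up and, per the caveat above, contain no whitespace:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/a" "$tmp/b"
echo same  > "$tmp/a/dup.txt";  echo same  > "$tmp/b/dup.txt"   # identical pair
echo left  > "$tmp/a/diff.txt"; echo right > "$tmp/b/diff.txt"  # differing pair

# "Files a/dup.txt and b/dup.txt are identical" -> $4 is the b/ copy
( cd "$tmp" && diff -rs a/ b/ | awk '/identical$/ {print $4}' | xargs rm )

ls "$tmp/b"   # dup.txt is gone, diff.txt survives
```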
Yes, shell is definitely too weak to parse JSON!
(One reason I started https://oils.pub is because I saw that bash completion scripts try to parse bash in bash, which is an even worse idea than trying to parse JSON in bash)
I'd argue that Awk is ALSO too weak to parse JSON
The following code assumes that it will be fed valid JSON. It has some basic validation as a function of the parsing and will most likely throw an error if it encounters something strange, but there are no guarantees beyond that.
Yeah I don't like that! If you don't reject invalid input, you're not really parsing
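A real parser makes rejection observable in its exit status. A quick sketch with Perl's core JSON::PP (the truncated document below is intentionally malformed):

```shell
# decode_json dies on invalid JSON, so perl exits nonzero and the || branch runs.
echo '{"truncated":' \
  | perl -MJSON::PP -0777 -ne 'decode_json($_)' 2>/dev/null \
  && echo "accepted" || echo "rejected"
# → rejected
```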
---
OSH and YSH both have JSON built-in, and they have the hierarchical/recursive data structures you need for the common Python/JS-like API:
osh-0.33$ var d = { date: $(date --iso-8601) }
osh-0.33$ json write (d) | tee tmp.txt
{
"date": "2025-06-28"
}
Parse, then pretty print the data structure you got:
osh-0.33$ cat tmp.txt | json read (&x)
osh-0.33$ = x
(Dict) {date: '2025-06-28'}
Create a JSON syntax error on purpose: osh-0.33$ sed 's/"/bad/"' tmp.txt | json read (&x)
sed: -e expression #1, char 9: unknown option to `s'
sed 's/"/bad/"' tmp.txt | json read (&x)
^~~~
[ interactive ]:20: json read: Unexpected EOF while parsing JSON (line 1, offset 0-0: '')
(now I see the error message could be better)
Another example from wezm yesterday: https://mastodon.decentralised.social/@wezm/1147586026608361...
YSH has JSON natively, but for anyone interested, it would be fun to test out the language by writing a JSON parser in YSH
It's fundamentally more powerful than shell and awk because it has garbage-collected data structures - https://www.oilshell.org/blog/2024/09/gc.html
Also, OSH is now FASTER than bash, in both computation and I/O. This is despite garbage collection, and despite being written in typed Python! I hope to publish a post about these recent improvements
IMO the real problem is that JSON doesn't work very well as a data interchange format, because its core abstraction is objects. It's a pain to deal with in pretty much every statically typed non-object-oriented language unless you parse it into native, predefined data structures (think annotated Go structs, Rust, etc.).
Parsing is trivial, rejecting invalid input is trivial; the problem is representing the parsed content in a meaningful way.
> bash completion scripts try to parse bash in bash
You're talking about ble.sh, right? I investigated it as well.
I think they made some choices that eventually led to the parser being too complex, largely due to the problem of representing what was parsed.
> Also, OSH is now FASTER than bash, in both computation and I/O.
According to my tests, this is true. Congratulations!