"I want to be clear here, I am not advocating writing “proper” scripts, just capturing your interactive, ad-hoc command to a persistent file."
What's the difference? Why not version control it and share it with colleagues? Imagine writing a unit test for a new feature and then deleting it when you're done: what a waste. OK, it's not exactly the same, because you aren't using these scripts to catch regressions, but all of that useful learning and context can be reused.
I don't think the language you use for scripting is too important, as long as the runtime is pinned and easily available on all engineers' machines, perhaps using a toolchain manager like... mise[3].
[1] https://mise.jdx.dev/tasks/
[2] https://mise.jdx.dev/shell-aliases.html
[3] https://mise.jdx.dev/dev-tools/
At least nowadays LLMs can rewrite Bash to JS/Python/Ruby pretty quickly.
- when ls started quoting filenames with spaces (add -N)
- when perl stopped being installed by default in CentOS and AlmaLinux (had to add dnf install -y perl)
- when egrep alias disappeared (use grep -E)
The best way is a scripting language with a locked-down dependency spec inside the script itself. Weirdly, .NET is leading the way here.
Bash is glue, and for me glue code must survive the passage of time. The moment you use a high-level language for glue code, it stops being glue code.
As soon as you have state accumulating somewhere, branching, or loops, it becomes chaotic too quickly.
No repetitive short sentences, no "Not X, just Y." patterns, and lots of opinionated statements, written confidently in the first person.
Please more of this.
Another quite standard way of saving your command history in a file, one I have seen used in all ecosystems, is called "make". It even saves you a few characters when you have to type it, people don't have to discover your custom system, autocomplete works out of the box, etc.
I quite like make or just as a task runner, since the syntax/indentation/etc. overhead is a lot lower. I haven't yet tried to introduce it in any JS-based projects though, because it adds yet another tool.
One very big upside of using package.json for me is that we use pnpm, which has a very sophisticated way of targeting packages with --filter (like "run tests from packages that have modifications compared to master, plus all their transitive dependents"), which is often exactly what you want to do.
Coming from a web background, my usual move is to put all scripts in the package.json, if present. I'd use make for everything, but it's overkill for a lot of stuff and is non-standard in a lot of the domains I work in.
Same!
My usual move used to be putting everything in a Makefile, but after getting traumatized time and time again by ever-growing complexity, I've started to embrace Just (https://github.com/casey/just), which is basically just a simpler Make. I tend to work across teams a lot, and make/just seems easier for people to spot at a glance than scripts inside of a package.json, which mostly frontend/JavaScript/TypeScript people know to take a look at.
But in the end I think it matters less what specifically you use, as long as you have one entry point that collects everything; it could be a Makefile, a Justfile or package.json, as long as everything lives under the same thing. Could be a .sh for all I care :)
Just integrate fzf into your shell and use ctrl-r to instantly summon a fuzzy shell history search and re-execute any command from your history!
I cannot imagine going back to using a terminal without this.
I still write plenty of scripts if I need to repeat multi-command processes, but for one-liners I just use fzf to re-execute them.
Also in a shared project you can ignore script files with .git/info/exclude instead of .gitignore so you don’t have to check in your personal exclusion patterns to the main branch.
Seriously, people: if you use a terminal, you need the following tools to dominate the shell:
ripgrep, zoxide, fzf, fd
I made a function called y that is like the z function but is git worktree / jj workspace aware. So useful!
Though I generally run these scripts using Bun (and the corresponding `$` in Bun); basically the same thing, but I just prefer Bun over Deno.
Anyway, what kills this for me is the need to add await before every command.
Edit: zero-dependency Python.
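For context, the per-command await being complained about looks roughly like this with Bun's built-in shell. This is a minimal sketch from memory of the Bun Shell API (the `$` export from "bun", automatic escaping of interpolated values, and the `.text()` helper), so verify against the Bun docs:

// Bun Shell sketch: every $`...` expression is promise-like, so each command
// needs an await, otherwise the script moves on before the command finishes.
import { $ } from "bun";

const dir = "./src";
const files = await $`ls ${dir}`.text(); // .text() captures stdout as a string
console.log(files);

await $`git status`; // even a fire-and-forget command still wants its await

That one await per line is exactly the friction being described above.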
Stopped using Python for scripting for this reason.
I've started using Python for many more tasks after I discovered this feature. I'm primarily a JS/TS developer, but the ability to write a "standalone" script that can pull in third-party dependencies without affecting your current project is a massive productivity boost.
life() {
python3 << EOF
print(42)
EOF
}

Historically we had to use pip, which was super janky. uv solves most of pip's issues, but you still have to deal with venvs, and one issue it doesn't solve is that you can't do imports by relative file path, which is something you always end up wanting for ad-hoc scripting. You can use relative package paths, but that's totally different.
Just add the targeted path to sys.path, or write your own import handler; importlib might help there. But true, out of the box, imports in Python 3 are a bit wacky for more flexible usage.
No, they don't. Tooling is fine with those things.
I'm not sure about that. All those 'await's and parentheses really kill my mojo. Why do you find it better than Python?
I said already: the main reason is that you can import files by relative file path.
You can get close to the Deno UX with uv and a script like this:
#!/usr/bin/env -S uv run --script
#
# /// script
# requires-python = ">=3.12"
# dependencies = ["httpx"]
# ///
import httpx
print(httpx.get("https://example.com"))
But you still have to deal with the venv, e.g. for IDE support, linting and so on. It's just more janky than Deno.

I wish someone would make a nice modern scripting language with arbitrary-precision integers, static types, file path imports, third-party dependencies in single files, etc. Deno is the closest thing I've found, but in spite of how good TypeScript is, there are still a ton of JavaScript warts you can't get away from (`var`, `==`, the number format, the prototype system, janky map/reduce design, etc.).
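To make the comparison concrete, here is a rough sketch of the Deno side. The `./utils/log.ts` helper is hypothetical (it only exists to show a relative file path import), and `npm:zod@^3` just stands in for any third-party dependency declared inline:

#!/usr/bin/env -S deno run --allow-net
// Single-file Deno script: relative file path imports plus inline third-party
// dependencies, with no package.json, no venv, and no separate install step.
import { logLine } from "./utils/log.ts"; // hypothetical sibling file, imported by path
import { z } from "npm:zod@^3";           // third-party dependency pinned inline

const Page = z.object({ status: z.number() });
const res = await fetch("https://example.com");
logLine(Page.parse({ status: res.status }));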
iwr https://example.com
You also have arbitrary precision integers and all the other stuff from .NET: `$b = [BigInt]::Parse('10000000000000000000000000000000000000000000000000')`.

This is the same sort of 'works for me' philosophy as in matklad's post though; it's so heavily opinionated and personalized that I don't expect other people to pick it up, but it makes my day-to-day work a lot easier (especially since I switch multiple times between macOS, Linux and Windows on a typical day).
I'm not sure if Bun can do it too, but the one great thing about Deno is that it can directly import without requiring a 'manifest file' (e.g. package.json or deno.json); you can do something like this right in the code:
import { Bla } from 'jsr:@floooh/bla@^1';
This is just perfect for this type of command line tool.

The article used Dax instead, which also looks fine! https://github.com/dsherret/dax
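For reference, a dax-flavoured sketch of the same kind of script. The JSR specifier and the `.text()` helper are from memory of dax's docs, so treat them as assumptions and double-check (and pin a version in real scripts):

#!/usr/bin/env -S deno run -A
// dax sketch: a tagged-template $ that runs commands cross-platform and
// escapes interpolated values, so paths with spaces need no manual quoting.
import $ from "jsr:@david/dax";

const dir = "./src";
await $`ls ${dir}`;                                             // run a command
const branch = await $`git rev-parse --abbrev-ref HEAD`.text(); // capture stdout
console.log(`on branch ${branch.trim()}`);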
One of my projects actually uses Bun Shell to call a Rust binary from the website itself, and I really liked this use case.
...and that was also the one concrete example where it makes sense to have an extra dependency and abstraction layer on top of a shell script :)
Say you know TS: even if you walk back to where $ is defined, can you tell immediately why $`ls ${dir}` gets executed and not just logged?
So how does it get executed?
Unless it was just an example and you are supposed to swap in $ from some third-party library... which is another dependency in addition to Deno... and which can be shai-huluded at any time, or you may be offline and unable to install it when you run the script?
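To answer the mechanical part: $ is a tagged template function, so $`ls ${dir}` is a function call, not a string that something later decides to run. Here is a toy illustration of the idea (not zx/dax/Bun's actual implementation):

// Toy "$" tag: the template literal is handed to this function immediately,
// so evaluating $`ls ${dir}` already runs the command; nothing is "just logged".
import { spawn } from "node:child_process";

function $(strings: TemplateStringsArray, ...values: unknown[]): Promise<string> {
  // Interleave the literal chunks with the interpolated values.
  const cmd = strings.reduce(
    (acc, chunk, i) => acc + chunk + (i < values.length ? String(values[i]) : ""),
    "",
  );
  return new Promise((resolve, reject) => {
    const child = spawn(cmd, { shell: true }); // real libraries escape values instead of doing this
    let out = "";
    child.stdout?.on("data", (chunk) => (out += chunk));
    child.on("close", (code) => (code === 0 ? resolve(out) : reject(new Error(`exit code ${code}`))));
  });
}

const dir = ".";
console.log(await $`ls ${dir}`); // executes here, because the tag function is called here

Real libraries differ in the details (escaping, laziness, piping), but the "why does it execute" part is simply that a tagged template is a call expression.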