The standard-ish Unix shell is a disjointed monstrosity riddled with incongruous features agglomerated from inconsistent design decisions made at various times for unknown reasons by an unknowable multitude of developers.
It is also one of the most important pieces of software in the world today.
Sure would be nice if it didn’t suck.
Digital wizardry
Early in my Unix journey, I was in awe of the shell. Watching my friend who had grown comfortable with the shell was like watching a master work with his tools. The data yielded to his precise commands like putty in his hands. The arcane symbols and abstruse output flew by, totally unintelligible to me, but clearly full of meaning for him, as he worked to configure some webserver on a Linux machine. Was he a wizard? No. He was a hacker. I wanted to be one too.
Awe is the right word for what I felt here. We use awesome quite casually these days, but to say we’re in awe is reserved for a more serious, almost spiritual experience. Up until then, I had only really dealt with windowed GUIs. GUIs were obviously more approachable, but they were limited in that you could only ever do actions the GUI developer had specifically programmed. These shell prompts seemed so abstract and unbounded by comparison. If I used a shell instead, I would not be limited by the software. I could do whatever I wanted, provided I could craft the right command. Here was real power, right at my fingertips, if only I could read the symbols.
It sounds crazy to attribute this admiration to shell, right? It’s just goofy lines of text on the screen. But this is really a big part of what got me into computers. It’s why I installed Linux on my parents’ computer when I was 13 and made it crash. It’s why I learned to program in high school. Digital alchemy was available to those who could learn how to make these things work.
This image isn’t totally unique to programmer-types. Even Hollywood loves to make shell and code fly by on a hacker’s screen as a visual shorthand for their technical wizardry. There’s a reason why these depictions work. It’s alluring to think there’s secret knowledge available to us just beyond our reach: that if we could master the arcane incantations it requires, we could bend reality to our will. So the trope persists. Regular humans use GUIs. Real hackers use shell.
An indispensable tool of the trade
This isn’t just a silly affectation. I genuinely use shell in my work more than almost any other tool. And I’m not just banging out one-liners into an interactive prompt, although I do plenty of that. No, I commit shell scripts to repositories. I write tests for shell scripts. I use shell scripts as fundamental elements of production workflows.
As a SiteReliabilityDevSecOpsPlatformInfra Engineer, a huge amount of my work involves bridging disparate domains and configuring big, complex pieces of software in such a way that they work together. Shell is the near-ideal tool for this. It’s ubiquitous. It’s approachable. It’s malleable. It’s the perfect glue language.
Perhaps calling it glue is underselling it. We typically don’t think of glue as super important in construction projects—although you might be surprised how critical adhesives are in industrial applications—nor do we think of glue as having much internal complexity. Shell ends up being more like custom couplings between various parts of a system: small pieces improvised to get a component wired up to another or to shuttle some critical pieces of data across a gap.
You could think of conventional software as the major components of a car: the engine could be a database, the transmission could be a backend server, and the tires could be frontend applications. As important as these bits are, if that’s all we got, we’re not going anywhere. We need linkages between these components to create a full powertrain. In a typical software deployment, this is going to be a combination of off-the-shelf open-source support tools and a ton of configuration files. And anywhere there’s a disconnect between what the components expect and the reality of their environment, there’s bound to be a little script to bridge that chasm.
The upshot is that shell, along with other glue technologies, becomes critical infrastructure quite easily. Because it’s so ubiquitous you can rely on it to fill any gap you might find on a Unix machine. Well, I mean any gap between executable programs on that machine.
You see, shell is really just a domain-specific language for calling other programs. That may not sound like much, but it’s incredibly powerful. It’s this feature that makes it perfect for glue in a way other languages are not. Most of the things you type into a shell script are just going to be treated as names of other programs or arguments passed to them. The rest is mostly capturing program output, composing programs together, and interpolating variables into program inputs. There are some other interesting abilities, but these are the big ones.1
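To make that concrete, here’s a minimal sketch of the kind of glue I mean (the database name and paths are made up): capture one program’s output, interpolate it into arguments for another, and compose two more with a pipe.

```sh
# Capture a program's output into a variable...
backup_dir="/var/backups/$(date +%F)"

# ...pass it along as an argument to another program...
mkdir -p "$backup_dir"

# ...and compose programs with a pipeline: dump, compress, write.
pg_dump mydb | gzip > "$backup_dir/mydb.sql.gz"
```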
It’s this limited but powerful set of abilities that makes it so effective in the glue role. Unlike languages like Lua, Python, or Ruby, which can also function in this space, there’s not much incentive to start making larger programs or components that themselves need to be glued together. In contrast, Lua, Python, and Ruby are languages that are designed to make it easy to express business or domain logic, often with the help of extensive libraries and third-party code. They are general-purpose languages in every sense of that term.
But being general-purpose is different from fit-for-purpose. Domain-specific languages exist for a reason. There’s no reason I couldn’t express configuration or vector graphics or web markup as Python code, but we seldom do that because there are better tools for these jobs. Hell, Python used to express package metadata in itself (`setup.py`) until it was conceded that there are a lot of problems with using an executable script to express these things, and the ecosystem moved to a static config file. The new TOML format may have its problems, but it doesn’t allow or expect arbitrary code execution, and it ensures a declarative and relatively clean layout of relevant information that is easy to parse visually. By giving up certain features, we actually gain confidence and understanding.
The same is true of shell. It’s not convenient to express complex business logic in shell, so we delegate that to larger programs. We can use shell to calculate and pass arguments to these larger programs, or to compose programs together in useful ways. The limited feature set becomes a forcing function for modular, component-based design. Sure, you can end up with too much shell this way, and it can become hard to follow, but that difficulty itself is usually the code telling you it’s time to create a new component. This is analogous to discovering abstractions as opposed to creating them. We don’t create whole-cloth glue components until we have concrete use cases that show us how we need them.
This work pattern has made me a better programmer in general. Maybe that’s because my role is more of a compositor than many developers, but I think a lot of folks could benefit from this more pragmatic approach. It certainly helps avoid doing more work than necessary, and I will always contend that avoiding unnecessary work is a far more valuable skill as a developer than being able to crank out lines-of-code at a breakneck pace.
Cracks in the Mortar
For all my love of the shell-first approach, I actually think the Bourne shell and its descendants kinda suck. I mean, they get the core paradigm right—basic string interpolation, output capture, and pipelines are insanely powerful tools—but they are also old and crufty, invented in a different era of computing, and they have failed to evolve to keep up with the times. I called shell a “domain-specific language for calling other programs”, and while that’s true, if you take a step back, you can see that it leaves a lot to be desired in that role.
A DSL for calling programs should be able to gracefully handle the results of calling programs. I mean the successful results, and the failures. This is the most glaring problem with shell code. It should be able to handle errors in a sensible way, but 50+ years into using it, the best we’ve got is `set -e`, which still kinda sucks.
Let me cut right to the chase: ignoring errors is a terrible default. It was a design mistake in 1979 when the Bourne shell was introduced—Lisp REPLs had existed for 15 years at that point and experimented with multiple ways to handle errors in an interactive context2—and it is a goddamn travesty that we still have to live with this inane behavior. I understand it’s for backward compatibility, but it feels like a behavior you could opt into for compatibility mode, rather than keeping it as the primary behavior of a shell.
I’m probably not being fair to the language designers. This is, after all, how C works. But it’s widely viewed as one of the biggest problems with C. While programmers don’t always agree on how errors should be handled, we seem to all agree these days that it should be hard to just ignore them. This consensus had mostly solidified by the time I sat down to code. I have been programming for 20 years, and I don’t think I’ve ever written a line of code with the assumption that the line directly above it might fail and my code would just execute anyways.
This doesn’t make sense even for human algorithms. If I’m explaining to you how to mow the lawn, I might say:
- Start the lawnmower.
- Push the lawnmower around the whole surface of the lawn.
- Turn off the lawnmower.
What if the lawnmower failed to start? Would you push it around the lawn anyways? No! You’d come get me and tell me there’s a problem and it won’t start. Imagine how dense you’d have to be to ignore failure at the first step. Typical human narratives tend to assume that sequence implies causality. In this case, starting the lawnmower causes a condition that makes pushing the lawnmower useful. This implication is so strong in lists of instructions that we have to specifically call out when steps are optional.
In shell, as far as I can tell, this behavior of ignoring errors seems to exist to support the interactive case, although I don’t have the history on it and I could be wrong. But whatever the reason, it’s still a bad design choice, at least in retrospect. Yet it persists.
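A contrived sketch of the default behavior, just to make the stakes obvious (the directory is made up):

```sh
#!/bin/sh
cd /tmp/staging-area   # suppose this fails because the directory doesn't exist
rm -rf ./*             # ...this still runs, in whatever directory we actually ended up in
```

Every line executes regardless of what happened on the line before it, which is exactly the lawnmower scenario above.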
“But wait!” you interject. “All of this can be fixed with `set -e`.” Well, bless your adorable little heart, if only it were so simple. This `errexit` feature works by simply checking the exit code after a command runs, and exiting the shell process if it is non-zero. This would work, except that there are critical shell features that conflict with this principle in non-obvious ways. Pipelines run commands in parallel, meaning we have to interpret the semantics of multiple exit codes. Outer statements can suppress the exit code of inner statements in command substitution scenarios. The `if` statement interprets exit codes as booleans, where 0 is true and non-zero is false, leading to swallowed errors. That means `grep text /dev/null` and `grep text /nonexistent` are both interpreted as `false` in a boolean context, even though the second is clearly a file-not-found error. And all of this subtle behavior can vary between shell flavors, versions, and which options are set. Don’t even get me started on the interaction between all these options and subshells.
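Here’s a small bash sketch of what I mean; every one of these runs merrily to completion under `set -e` even though something has clearly gone wrong:

```bash
#!/bin/bash
set -e

# Pipelines: only the last command's exit code counts (without `set -o pipefail`),
# so the failing producer is ignored.
grep pattern /nonexistent | sort

# Conditionals: any non-zero status is just "false", so a real error
# (file not found) is indistinguishable from "no match".
if grep pattern /nonexistent; then
  echo "found it"
fi

# Command substitution: cat fails, but its exit status vanishes;
# the test just sees an empty string.
if [ -n "$(cat /nonexistent)" ]; then
  echo "non-empty"
fi
```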
Actually, I will get started a little bit: subshells themselves are exemplary of the quirky and non-intuitive way shell languages can futz with your shiz. It seems simple enough that shell might `fork()` at times for various reasons, and there are certainly times you want this, e.g. explicit subshells like `(cd foo-dir || exit; bar-cmd baz-arg; ...)`. But there are several constructs that do this without it being obvious, like pipelines and redirection. You’ll know what I mean if you’ve ever tried to `read` command output through a pipe. Or if you’ve tried to mutate variables in a function and have that stop working when you change the call site from a simple function call `my_func` to a command substitution `result=$(my_func)`, which makes all your `my_func` changes happen in a separate process.
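If you haven’t hit these before, here’s roughly what they look like in bash’s default configuration (no `lastpipe`):

```bash
#!/bin/bash
# The while loop runs in a subshell because it's on the right side of a pipe,
# so the counter never makes it back to the parent shell.
count=0
printf 'a\nb\nc\n' | while read -r line; do
  count=$((count + 1))
done
echo "$count"   # prints 0, not 3

# Mutating a variable in a function works fine... until you capture its output.
state="old"
my_func() { state="new"; echo "done"; }
my_func               # runs in the current shell: state is now "new"
result=$(my_func)     # runs in a subshell: the assignment never escapes
```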
There are, of course, workarounds for most of these issues. But the exact behavior of your shell also varies depending on which shell flavor you’re running (`sh`, `bash`, `dash`, `zsh`, `ksh`, etc.), which version of that shell you’re using, and which options and modes are enabled (e.g., POSIX mode, `lastpipe`). I’m exhausted just explaining this nonsense. It’s ten times worse trying to write code and remember all this minutiae while also, you know, keeping your actual problem in mind.
That’s where this stuff adds up to detract from the usefulness of shell. It’s not impossible to work with once you understand the main quirks and gotchas, but all that stuff takes up valuable mindspace where your domain problem should be. The bizarre syntax doesn’t help either. Wasn’t shell about simply gluing together the odd bits of our other solutions and smoothing over those rough edges? I’m sorry, I can’t see whether those edges are smooth because my eyes are still bleeding from trying to visually parse `${#options[*]}`, `${values[@]^^?}`, `${!var####}`, and `:(){ :|:& };:`.
Sure, there’s an element of subjectivity in determining what is and isn’t hard to read. But Bourne Shell-derived languages combine a strange feature set with a difficult syntax in a way that takes up far more cognitive real estate than a tool in its space should. And this awkward unapproachability discourages many a developer from embracing the utility of shell coding. We really need something that gets out of our way.
Some of this gratuitous syntax can be avoided. Because of shell pipelines and the near-universal availability of some common tools, we can send our strings through other programs to do our text manipulation for us. There’s the ever-popular `sed`, which is used mostly to perform replacements in files and streams. That’s neat, but `sed` is also its own whole-ass language with its own design quirks and inconsistencies between versions. Have you ever done shell argument pre-processing to conditionally pass different replacement patterns to `sed` to do text transformation you couldn’t figure out how to do directly in shell? It’s starting to feel like the cure is worse than the disease. And `sed` really isn’t that powerful, or at least its super terse syntax makes expressing more complicated things so cumbersome that its power is unavailable to mere mortals.
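For the uninitiated, a typical `sed` moment (the file and hostnames are made up), plus one of the version inconsistencies I’m talking about:

```sh
# Replace a hostname everywhere in a config file, writing to a new file.
sed 's/db-old\.internal/db-new.internal/g' app.conf > app.conf.new

# In-place editing is where portability bites: GNU sed takes an optional
# backup suffix, while BSD/macOS sed requires one (even if it's empty).
sed -i 's/foo/bar/g' app.conf       # GNU sed
sed -i '' 's/foo/bar/g' app.conf    # BSD/macOS sed
```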
“Don’t worry,” you tell me, “you can always use AWK!” Bruh, stop naming text-processing mini-languages. You’re scaring the hoes. Anyways, `awk` is another 1970s Bell Labs creation that has made its way onto virtually every base Unix install everywhere. It has its own quirks and version inconsistencies and strengths and weaknesses that you apparently need to know in order to read other people’s shell code, because it’s been around so long and enough people have gotten used to it that it’s now critical infrastructure. Fun times.
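A couple of representative `awk` one-liners, assuming the usual column layout of `ps aux` and `ls -l` output:

```sh
# Print the PID and command of any process using more than 50% CPU.
ps aux | awk '$3 > 50 {print $2, $11}'

# Sum the sizes in the fifth column of `ls -l` output (skipping the "total" line).
ls -l | awk 'NR > 1 {bytes += $5} END {print bytes}'
```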
It’s super neat that shell can reach out to other tools when it lacks those abilities itself. And parsing complex text is certainly a more complicated task than one should rightly expect to see included in shell. But we shouldn’t need to parse arbitrary text formats. Back here in the future, programs consume and output structured data represented in a handful of well-known exchange formats. Like JSON. At some point in the 2000s, everybody agreed that you only need a handful of primitive data types, and two compound ones, and you can encode almost anything you want in a format that is a decent compromise between human readability and machine parser-friendliness.
Oh, but shell lacks these data types. It mostly has strings and arrays of strings. Even then, arrays aren’t supported in POSIX shell, and newer shells’ support for them varies greatly. This makes sense: this is 1970s technology we’re talking about here. It probably saved a ton of time in the interpreter to skip all type checks. But in the modern age it just feels like an egregious lack, because it means we’ll never be able to deal with pre-structured data.
“Hey, you can use `jq` for dealing with JSON!” you interject, like a smart-ass. What did I say about naming more mini-languages? STOP IT! Yes, this does work, but it has all the problems of `sed` and `awk`: namely, that I have to learn a whole ’nother language and all its quirks. Sadly, unlike `sed` and `awk`, which predate the fall of disco, `jq` and its brother-from-another-mother `curl` are rarely installed in “base” OS builds. You have to figure out how to install them before you use them. This can be solved fairly easily by containers in some cases, but not always when you need portability. This is a lot of junk to remember.
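This is the pattern that comes up constantly in my work; the URL and field name here are made up, but the shape is always the same:

```sh
# Hit an API and pull a single field out of the JSON response.
latest=$(curl -sSf https://api.example.com/releases/latest | jq -r '.tag_name')
echo "deploying release $latest"
```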
All of this might have made sense at some past time, when the idea of querying a server over HTTP and extracting information from the response was some rare and niche use case. But it is 2025 and everything is an API now. It seems like our glue code should be able to keep up with that.
Shell: the good parts?
Let’s check the score here. I called shell a “DSL for calling other programs.” What makes it good for that role?
- Calling other programs is syntactically obvious and natural.
- Composing programs together is extremely easy.
- It is challenging to “overdo it” in shell.
What makes it terrible for that role?
- It is extremely challenging to handle errors properly and consistently.
- Loads of quirky syntax and surprising behaviors crowd out domain logic.
- It lacks a way to represent or process JSON data types, which are the lingua franca of modern computing.
It really seems like it should be simple to solve these problems. But a strong desire for backwards-compatibility in the shell world has made it all but impossible. There are fundamental design problems with how Bourne Shell-derived languages work. The base semantics of the language have no way to distinguish between results and output streams, other than a single byte of data that we call the exit code, which is itself overloaded with both error and boolean meanings. This is not a problem you can patch.
I won’t deny that working with `zsh` or modern `bash` is vastly superior to using a classic shell. Features like `errexit` and `lastpipe` do generally reduce the pain of common shell behaviors, even if they can also lead to some surprising issues. And while I think it’s safe to say that these shells have made improvements on POSIX shell, I wonder if some of these improvements just make it easier to start tackling harder problems with a language that is ill-suited for them. Do I really want associative arrays if I can’t easily tell whether a key is present or not? Were coprocs really the missing feature we needed to make concurrent programming tractable in shell?
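Case in point: as far as I know, checking whether an associative array key exists in bash means reaching for parameter-expansion tricks or the newer `-v` test, neither of which is exactly obvious:

```bash
#!/bin/bash
declare -A config=([host]="db1" [port]="5432")

# Expands to "set" only if the element exists (even if its value is empty).
if [ -n "${config[timeout]+set}" ]; then
  echo "timeout is set"
else
  echo "timeout is not set"   # this branch runs
fi

# bash 4.3+ also lets you ask directly:
if [[ -v config[timeout] ]]; then
  echo "timeout is set"
fi
```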
What we actually need are some things “regular” programming languages have had for decades: return values and exceptions. These features essentially solve the error-handling problem outright, and provide the building blocks (structured data types) that you need to solve the JSON problem. But, since these semantics would utterly break compatibility with POSIX shell, they haven’t been pushed in any mainstream Unix shell implementation I’m aware of. Outside the mainstream, folks have been trying to re-integrate features like this into shell since at least as early as 1993.
But for the rest of us, all this utility gets sacrificed on the altar of POSIX compatibility. Which… I mean, do we really even need to be POSIX compatible? Why do we care so much? Hell, I’d argue a lot of shell code writers don’t even realize when they are using Bashisms (or Zshisms for that matter). The times when we actually need to write very compatible shell code are quite limited, and you can use ShellCheck to check for many of the most common issues. Going further, you could look for a minimal POSIX-compatible shell to develop against when you need this. That combined with, you know, actual testing will cover you for compatibility.
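In practice, that compatibility check can be as boring as this (assuming a hypothetical `deploy.sh`):

```sh
# Lint for bashisms and common portability traps...
shellcheck --shell=sh deploy.sh

# ...and actually exercise the script under a minimal POSIX-ish shell.
dash deploy.sh
```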
Alternative Shells
I don’t want to pretend compatibility isn’t important. There are clear times when various Unix-ecosystem constructs expect POSIX-compatibility. But it is a limited set of things and is probably disjoint with 95% of your work unless you’re a distro package maintainer. You are free to use non-compatible shell for most of your life, and drop into a simple Bourne-shell thingy in the rare event that you need it.
So where can you use whatever shell you want? Well, you have control on your personal machines. You almost certainly control what shell runs in your development environment. You might be able to influence what runs on your team’s servers. These are all chances to try out new tools. But what tool to try?
This gets a bit tricky because there are a ton of options and no clear community consensus, as is always the case when veering away from mainstream technology; if there were consensus, we’d see a new branch in the mainstream, not an “alternative shell”. But there are a few generally good options.
I personally use Fish as my daily driver, even though it only fixes one of my three main shell gripes (the syntax one). But it comes with such a great out-of-the-box interactive experience that I can forgive it for not solving the others. I don’t need perfect error handling at the command line as often, since there’s a human in the loop. But I do love how it gets rid of POSIX shell eye-bleed in favor of a simple syntax and useful builtins. A lot of the `${var[@]##%}` gobbledygook gets replaced with the `string` and `path` commands. Completion works like a charm, with little gray suggestions quietly extending beyond your cursor, gingerly offering themselves to you as you type, without getting pushy and hyperactive. Quoting behavior becomes a non-issue, and variables have the values you expect, with no subshells and clear variable scopes.
That said, I still think `fish` is a pretty lousy scripting language, because it doesn’t solve the error/output problem. This is a big deal, and it’s why I’ve never bothered trying to fight with my team to install fish on servers or even use it in our repositories. But if you’re used to POSIX-descendant shells, it’s a great demonstration that it isn’t that hard to ditch all that historical baggage. Fish is easy to get used to, and with the compatibility layer `bass` installed, you can consume bash scripts with ease, so you don’t get locked out of the shell-script ecosystem. It will give you a taste of what a better experience is like, with sensible defaults out of the box, without having to install an enormous framework or trying to sell you cringe T-shirts. Anecdotally, it’s also much faster than most `bash`- or `zsh`-based plugin things I’ve tried, probably because most of the salient features are implemented in native code.
Windows users would never forgive me if I didn’t mention PowerShell, which is definitely not POSIX compatible and was never intended to be. I stopped working with Windows before I ever really learned it, so I’m not sure what the fuss is about, but it does seem to have structured data and is oriented around pipelines. And what do you know, it’s available on MacOS and Linux too! But it requires the (very large) Common Language Runtime VM and God-knows-what-else .NET gunk to run. So I’m going to pass on PowerShell for Unix. But if I ever end up in a project working with Windows again, I think it would be worth investing time to learn it instead of trying to hack it out with Unix shells on WSL.
In that vein, if you’re interested in the structured data pipelines emphasis, Nushell has you covered. Drawing on the design of PowerShell, Nushell aims to be cross-platform by default, and generally seems to be a bit tidier and more parsimonious than the shell that inspired it. It focuses on tabular, structured data, and DSLs for manipulating that. To this end, it reimplements a lot of typical Unix-style programs directly in the shell, to make them more aware of this data format. This does give me a bit of pause though. I’ve become skeptical of tools that become so convinced of the power of a particular worldview that they start contorting everything around them to fit that model. In a shell, if this made interacting with regular programs feel second-class, then it starts to move away from that “DSL for calling programs” sweet spot, and more into a “systems data processing language.” But I haven’t really tried it yet, so I can’t say if this is a valid criticism or just armchair architecture critique. Overall, with some neat ideas and a lot of polish put on it, Nushell definitely looks worth trying.
Another candidate for a replacement shell is Elvish. In some ways it seems similar to Nushell, but with less emphasis on the table presentation layer and more emphasis on being like a functional dynamic programming language. It really aims to feel like a big boy language, except with sigils and barewords-as-strings. At first glance, I’m getting “readable Perl” or “restrained Ruby” vibes, which I mean as compliments. Both of those languages arose out of frustrations with handling more complex tasks in shell scripts. It also seems to avoid re-implementing common commands the way Nushell does, which makes me suspect it will feel a bit more natural interacting with tools you find in the wild. My only worry is that all this PL-focus—there are mentions of work on package managers and type systems and macros!—will detract from the simple, raw domain logic you want in shell scripts. Advanced features like this make it much easier to build a big goopy mess. A big ball of mud is still a big ball of mud even if it’s really elegant mud.
The last alternative shell I’ll discuss is the Oils Project, which aims to provide an “upgrade path from Bash to a better language and runtime.” It does this by providing a Bash-compatible mode, `osh`, and then allowing you to layer on options until you get to `ysh`, a target language that ticks all the boxes I have, without even sacrificing compatibility. Seriously. It has exception-based error handling, actual return values, structured data, native JSON capabilities, and a Bash-compatible mode. Granted, the way it achieves this is by essentially having two language modes embedded into one: command mode, which is more like classic shell, and expression mode, which is more like Python or JavaScript. This does make it a bit more complex, and one of my big criticisms of Oils has got to be the syntax, which I’d suggest looks quite busy.
It’s also sorely lacking in polish, and I’m not sure it will ever be “finished” in any meaningful sense. As of today, there is no cute and easy installer; you have to build it from source to prove that you are worthy.3 The interactive experience is somewhat humdrum. The docs are incomplete. For a project that has been going on for 8-9 years, that seems suspect. There are still important design decisions being worked out in the open. But at the same time, it’s almost refreshing to see that someone is willing to be so deliberate when designing a language. And you can follow all of that design thinking in the Oils blog, which goes back for years. So it’s worth reading up on the thought process that went into this, even if you don’t end up using it.
For me, Oils is simultaneously the most promising and most frustrating option. It legitimately aims to provide a path for us to “get off the Bell Labs timeline”, but I don’t know where this path leads. And I’m not sure the Oils folks know either! That could be exciting, but I feel like I need more reassurance that a project will start to stabilize before I start leaning on it for load-bearing elements of my work.
Final Thoughts
This may seem like a long, long post for such a lukewarm take—I am certainly neither the first nor the last person to claim the Bourne/POSIX shell sucks—but it’s a damn travesty that we are stuck here. The utility and power of the shell was what drew me into computing. It’s still one of the main selling points of Unix, sitting right at the core of the Unix philosophy: it is (generally) better to have many simple tools and a rich way to compose them than to have few complex tools that can’t work together. This notion is held back by weaknesses in the composition language.
I’d go so far as to argue that the creation of many modern overly-complex god-programs is in some way attributable to the weaknesses of shell. Do you like to hate on `systemd`? I sure do! Well, it replaced a bunch of shitty, hard-to-maintain shell scripts for the folks who do nuts-and-bolts Linux distro work. Who knows? Maybe better shell could have prevented it. Are you tired of editing YAML for a living? A lot of that YAML is replacing things that could be expressed in shell, except shell can’t express structured data well so we have to embed shell inside of config files. Oops, I typed `${var}` when I meant `{{.var}}`. Or was it `${{var}}`? How silly of me to confuse one of the 5 templating languages I work with for another!
We are seriously still living in the past in so many ways because of this nonsense. And unlike some other ancient Unix jank like signals, we could mostly get rid of this! We generally get to pick the programs we write code in. There are some exceptions (login shells need to be able to process `/etc/profile`, and some tools like Vi or GDB expect `$SHELL` to reference a POSIX shell), but for the most part you can write your shell scripts in whatever you want if you add a `#!/she/bang`, so what’s stopping us?
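The mechanism is literally the first line of the file; a sketch, with the interpreter swapped for whichever shell you’ve actually installed:

```bash
#!/usr/bin/env bash
# Swap the interpreter above for the shell you actually want, e.g.:
#   #!/usr/bin/env fish
#   #!/usr/bin/env ysh
echo "this script runs under whatever the first line names"
```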
Try one of these alternative shells. Seriously! Give it a go. If you’re not a big shell user, then maybe having a shell language with decent semantics will open your eyes to a new way of working. If you use shell all the time, it might be hard to break from usual habits, but you may end up liking it! If none of the alternatives I listed are appealing to you, take a look at the Oils wiki’s list of alternative shells and find something you like.
To move on from the Bell Labs timeline, we need to gain a critical mass of developers who regularly and confidently use non-POSIX shells. One of the better arguments for sticking with these old-timey shells is that they are well-worn and battle-tested. They have their faults, but we know where they are. The only way to counter that is to start testing an alternative in new battles! Do this locally and privately at first.
Don’t start jamming it into shared load-bearing infrastructure at work without talking to your team. No one likes being surprised with shiny new tech they don’t know and weren’t consulted about. This kinda thing leaves a bad taste in people’s mouths. It is perfectly reasonable to be skeptical of new technology, and perfectly understandable to be resentful when it is shoved down your throat. I don’t think any developers I know love POSIX shell, but many reasonable and intelligent folks think it’s the best choice given all the trade-offs. So I’m not going to fight to install a neoshell on our servers until I fully understand the implications of what that means.
That said, if you find yourself liking one of these, show it off to your team! Feature it during demo day, either live, or by scripting up some actions that you want to show off. Use it to solve problems you found difficult. Talk about it. Write about it. Emit microblog status updates about it. Show the world that there are alternatives. As I said before, I don’t think people love POSIX shell; I think they are just resigned to the sludge, convinced that it is the least bad option for the task. Can we show them some good options?
Call me a dreamer, but I think it is possible for us to finally change. We don’t all have to program in C anymore. Maybe one day we won’t all have to script in `sh`.
I’m aware that shell is Turing complete and thus technically capable of writing any program. Thanks for reminding me, nerd. But I’m talking more about what shell’s design encourages and makes easy than what is technically possible. ↩︎
I could probably levy this criticism at almost every language. There have been more features pioneered in Lisp than you can probably imagine, and there’s still some great ones that have yet to go mainstream. Did you know you can make exception handling code that prompts the user to choose options to fix the error condition? ↩︎
It’s not complicated and it builds fast, but this will be a hurdle to adoption for a lot of folks who’d rather just copy `curl -sSf https://oils.pub/get-oils | sh` into a terminal and press enter. ↩︎