Introduction
💡 The Core Idea
Shell-Cell is a lightweight containerized shell orchestrator that turns simple YAML blueprints into instant, isolated shell sessions.
It comes in handy when you want a secure, isolated place for your development work.
🏛️ Architecture concepts
- The Blueprint (scell.yml). Everything starts with the configuration file. It defines your Shell-Cell targets (the environment layers).
- Shell-Cell targets. Think of targets as named functions: instead of one giant, monolithic Dockerfile, Shell-Cell encourages you to break your setup into logical pieces.
- “Shell Server” Model. Unlike a standard container that runs a single task and exits, a Shell-Cell is designed to hang. By using the hang instruction, the container stays alive in the background, acting as a persistent server. This lets you attach multiple Shell-Cell sessions to a warm, ready-to-use environment instantly, preserving the container’s state across different sessions.
Install and Configure Shell-Cell
Shell-Cell requires a running instance of either the Docker or Podman daemon. Install and start Docker or Podman first, whichever you prefer.
Install
- Build for Unix
curl -fsSL https://github.com/Mr-Leshiy/shell-cell/releases/latest/download/shell-cell-installer.sh | sh
- Build from source (any platform)
cargo install shell-cell --locked
UNIX socket configuration (UNIX)
On UNIX-based operating systems, Shell-Cell interacts with the Docker or Podman daemon over a UNIX socket connection.
The URL of this socket is read from the DOCKER_HOST environment variable.
Before running Shell-Cell, set DOCKER_HOST to the proper value:
export DOCKER_HOST="<unix_socket_url>"
To find the *.sock URL, you can run:
- for Docker
docker context inspect | grep sock
- for Podman
When you start a Podman virtual machine with
podman machine start, it prints the socket URL to stdout, e.g.:
Starting machine "podman-machine-default"
API forwarding listening on: /var/folders/5m/2c6173tx1nb6m5mnkjz27gk00000gn/T/podman/podman-machine-default-api.sock
The system helper service is not installed; the default Docker API socket
address can't be used by podman. If you would like to install it, run the following commands:
sudo /opt/homebrew/Cellar/podman/5.8.0/bin/podman-mac-helper install
podman machine stop; podman machine start
You can still connect Docker API clients by setting DOCKER_HOST using the
following command in your terminal session:
export DOCKER_HOST='unix:///var/folders/5m/2c6173tx1nb6m5mnkjz27gk00000gn/T/podman/podman-machine-default-api.sock'
Machine "podman-machine-default" started successfully
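As a small sketch (plain POSIX shell, using the sample path from the output above; yours will differ), the printed "API forwarding listening on:" line can be turned into a DOCKER_HOST value like this:

```shell
# Extract the socket path from the line podman prints.
# The path below is the sample from the output above; yours will differ.
line='API forwarding listening on: /var/folders/5m/2c6173tx1nb6m5mnkjz27gk00000gn/T/podman/podman-machine-default-api.sock'
sock="${line#API forwarding listening on: }"   # strip the fixed prefix, keep the path
export DOCKER_HOST="unix://${sock}"
echo "$DOCKER_HOST"
```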
Shell-Cell CLI Reference
Now that you’ve installed and configured Shell-Cell, you’re ready to launch your very first session!
To get started, create a blueprint scell.yml file in your project directory.
This file defines the environment your shell will live in.
(For a deep dive into the blueprint specification, check out the Blueprint Guide).
Example scell.yml:
main:
from: debian:bookworm
workspace: workdir
shell: /bin/bash
hang: while true; do sleep 3600; done
Once your file is ready, simply open your terminal in that directory and run:
scell
That’s it, simple as that!
Shell-Cell will automatically look for a file named scell.yml in your current location and start the Shell-Cell session on the spot.
It will try to locate the entry point target, main.
If you want to specify an entry point target other than main,
pass the -t, --target CLI option:
scell -t <other-entrypoint-target>
If your configuration file is located elsewhere and you don’t want to change directories, you can point Shell-Cell directly to it.
scell ./path/to/the/blueprint/directory
Commands
ls — List Shell-Cell Containers
scell ls
Displays an interactive table of all existing Shell-Cell containers.
stop — Stop All Running Shell-Cell Containers
scell stop
Stops all running Shell-Cell containers (only Shell-Cell related containers, not any others).
Press Ctrl-C or Ctrl-D to abort early.
cleanup — Remove Orphan Containers and Images
scell cleanup
Cleans up orphan Shell-Cell containers and corresponding images. A container is considered an orphan when it is no longer associated with any existing scell.yml blueprint file (e.g., the blueprint was deleted or moved, or the blueprint contents changed so the container hash no longer matches).
❓ Need more help?
If you want to explore the full list of commands, flags, and capabilities, our built-in help menu is always there for you:
scell --help
Shell-Cell Blueprint Reference
Shell-Cell builds your environment by reading a set of instructions from a scell.yml file.
scell.yml is a YAML-formatted file that contains everything needed to configure your session.
Here is a minimal functional example:
main:
from: debian:bookworm
workspace: workdir
shell: /bin/bash
hang: while true; do sleep 3600; done
Shell-Cell follows a strict logic when building your image.
It parses your target definitions into a Directed Linear Graph,
moving from your entry point (main) down to the base “bottom” target.
The actual image building process, on the contrary, happens backwards:
it starts from the “bottom” target and works its way up to your entry point (main):
bottom_target → target_3 → target_2 → target_1 → main
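As an illustration of that ordering, here is a hypothetical two-target blueprint: the bottom target is built first, then the entry point on top of it. The target names are illustrative, and the same-file reference form ./scell.yml+tools is an assumption based on the from reference syntax described later in this guide.

```yaml
# Hypothetical two-target blueprint: "tools" is the bottom target,
# "main" is the entry point built on top of it.
tools:
  from: debian:bookworm
  build:
    - apt-get update && apt-get install -y git

main:
  from: ./scell.yml+tools
  shell: /bin/bash
  hang: while true; do sleep 3600; done
```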
Shell-Cell target
A Shell-Cell blueprint is composed of a series of target declarations and recipe definitions.
<target-name>:
<recipe>
...
A valid target name must start with a lowercase letter and contain only lowercase letters, digits, hyphens, and underscores (pattern: ^[a-z][a-z0-9_-]*$).
Inside each target, during the Shell-Cell image building process, the instructions are executed in a specific, strict order:
workspace → from → env → copy → build
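For example, a target using every build-time instruction (all values here are illustrative) is processed in that fixed order, regardless of how the keys are arranged in the YAML:

```yaml
main:
  workspace: /app                  # 1. working directory
  from: debian:bookworm            # 2. base image
  env:
    - APP_ENV=dev                  # 3. environment variables
  copy:
    - src .                        # 4. files copied into the image
  build:
    - apt-get update && apt-get install -y curl   # 5. build commands run last
  shell: /bin/bash
  hang: while true; do sleep 3600; done
```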
from
Similar to the Dockerfile FROM instruction,
it specifies the base of the Shell-Cell image.
It can be either a plain image or a reference to another Shell-Cell target:
- Image with tag
from: <image>:<tag>
- Shell-Cell target reference
from: path/to/file+<target_name>
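For example, a blueprint can base itself on a target defined in another blueprint file. The path and target name below are hypothetical:

```yaml
main:
  from: ../shared/scell.yml+base   # reuse the "base" target from another blueprint
  shell: /bin/bash
  hang: while true; do sleep 3600; done
```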
shell
The path to the shell that will be available in the built image and the running container.
This shell is used for the Shell-Cell session.
Only the first shell statement encountered in the target graph (starting from the entry point) is used.
shell: /bin/bash
hang
This instruction ensures your container stays active and doesn’t exit immediately after it starts. This effectively transforms your Shell-Cell container into a persistent “shell server” that remains ready for you to jump in at any time.
Only the first hang statement encountered in the target graph (starting from the entry point) is used.
To work correctly, you must specify a command that keeps the container running indefinitely. The recommended approach is a simple infinite loop:
hang: while true; do sleep 3600; done
This command is placed as the Dockerfile ENTRYPOINT instruction.
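Conceptually, the hang command above ends up in the generated image as something like the fragment below. The exact generated Dockerfile is an internal detail, and the shell-form wrapping here is an assumption:

```dockerfile
ENTRYPOINT ["/bin/sh", "-c", "while true; do sleep 3600; done"]
```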
workspace (optional)
Similar to the Dockerfile WORKDIR instruction.
workspace: /path/to/workspace
copy (optional)
Copies files into the Shell-Cell image.
Similar to the Dockerfile COPY instruction.
copy:
- file1 .
- file2 .
- file3 file4 .
env (optional)
Sets environment variables in the Shell-Cell image.
Similar to the Dockerfile ENV instruction.
Each item follows the list format <KEY>=<VALUE>:
env:
- DB_HOST=localhost
- DB_PORT=5432
- DB_NAME=db
- DB_DESCRIPTION="My Database"
build (optional)
Executes arbitrary commands to create a new layer on top of the current image
during the image building process.
Similar to the Dockerfile RUN instruction.
build:
- <command_1>
- <command_2>
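A sketch of a target that provisions tooling at build time (the package names are illustrative):

```yaml
main:
  from: debian:bookworm
  shell: /bin/bash
  hang: while true; do sleep 3600; done
  build:
    - apt-get update
    - apt-get install -y git curl
```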
config (optional)
Runtime configuration for the Shell-Cell container.
Unlike build, copy, and workspace, which affect the image building process,
config defines how the container behaves when it runs.
All config statements are optional.
Only the first config statement encountered in the target graph (starting from the entry point) is used.
config:
mounts:
- <host_path>:<container_absolute_path>
ports:
- "<host_port>:<container_port>"
mounts
Bind-mounts host directories into the running container.
Each mount item follows the format <host_path>:<container_absolute_path>.
- The host path can be relative (resolved relative to the scell.yml file location) or absolute. Relative host paths are canonicalized during compilation, so the referenced directory must exist.
- The container path must be an absolute path.
config:
mounts:
- ./src:/app/src
- /data:/container/data
ports
Publishes container ports to the host. Partially follows the Docker Compose short form syntax.
Each item can be one of:
| Format | Description |
|---|---|
| HOST_PORT:CONTAINER_PORT | Map a specific host port to a container port |
| HOST_IP:HOST_PORT:CONTAINER_PORT | Map with a specific host IP and port |
| HOST_IP::CONTAINER_PORT | Bind to a host IP with a random host port |
Append /tcp or /udp to any format to specify the protocol (default: tcp).
config:
ports:
- "8080:80"
- "127.0.0.1:9000:9000"
- "6060:6060/udp"