Introduction
💡 The Core Idea
Shell-Cell is a lightweight containerized-shell orchestrator that turns simple CUE blueprints into instant, isolated shell sessions.
It comes in handy whenever you want a secure, isolated place for your development work.
🏛️ Architecture concepts
- The Blueprint (scell.cue).
Everything starts with the configuration file. It describes how your environment should be built, how it should behave at runtime, and what data or resources you expose to it.
- Shell-Cell targets.
Think of targets as named functions — instead of one giant, monolithic Dockerfile, Shell-Cell encourages you to break your setup into logical pieces. Targets are chained together via from, forming a linear graph resolved from your entry point down to the root. At the root level, the chain must terminate with either a registry/locally-built image (from_image) or a Dockerfile (from_docker):
graph TD
R1["📦 Registry / Local Image (from_image)"]
R2["📄 Dockerfile (from_docker)"]
T2["🔧 base-target"]
T1["🔧 target"]
M["🔧 main"]
R1 & R2 --> T2 --> T1 --> M
- “Shell Server” Model.
Unlike a standard container that runs a single task and exits, a Shell-Cell is designed to hang. By using the hang instruction, the container stays alive in the background, acting as a persistent server. This lets you attach multiple Shell-Cell sessions to a warm, ready-to-use environment instantly, while preserving the container’s state across different sessions.
graph TD
C["🐳 Shell-Cell Container"]
C --> S1["💻 Session 1"]
C --> S2["💻 Session 2"]
C --> S3["💻 Session N"]
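The “shell server” workflow above can be sketched with the `scell` commands described later in this guide: pre-warm the container once, then attach to it from several terminals.

```shell
# Terminal 0: build the image (if needed) and start the container
# in the background, without opening an interactive shell
scell -d

# Terminal 1: attach a session to the warm container
scell

# Terminal 2: attach a second, independent session to the same container
scell
```

Each session gets its own shell process, but all sessions share the same container filesystem and state.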
Install and Configure Shell-Cell
Shell-Cell requires a running Docker or Podman daemon, so first install and prepare Docker or Podman, whichever you prefer.
Install
- Install prebuilt binaries (Unix)
curl -fsSL https://github.com/Mr-Leshiy/shell-cell/releases/latest/download/shell-cell-installer.sh | sh
- Build from source (any platform)
Prerequisites: Go 1.24+ (the Go toolchain is required).
cargo install shell-cell --locked
UNIX socket configuration
To interact with the Docker or Podman daemon, Shell-Cell uses a UNIX socket connection on UNIX-based operating systems.
The URL of this socket is read from the DOCKER_HOST environment variable.
Before running Shell-Cell, set DOCKER_HOST to the proper value:
export DOCKER_HOST="<unix_socket_url>"
To find out the *.sock URL, you can run:
- for Docker
docker context inspect | grep sock
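If you prefer to extract the socket URL in one step, `docker context inspect` also accepts a Go-template `--format` flag. Assuming the default context layout, something like this should work:

```shell
# Read the Docker endpoint of the current context and export it
export DOCKER_HOST="$(docker context inspect --format '{{.Endpoints.docker.Host}}')"
echo "$DOCKER_HOST"
```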
- for Podman
When you start a Podman virtual machine with
podman machine start, the socket URL is printed to stdout, e.g.
Starting machine "podman-machine-default"
API forwarding listening on: /var/folders/5m/2c6173tx1nb6m5mnkjz27gk00000gn/T/podman/podman-machine-default-api.sock
The system helper service is not installed; the default Docker API socket
address can't be used by podman. If you would like to install it, run the following commands:
sudo /opt/homebrew/Cellar/podman/5.8.0/bin/podman-mac-helper install
podman machine stop; podman machine start
You can still connect Docker API clients by setting DOCKER_HOST using the
following command in your terminal session:
export DOCKER_HOST='unix:///var/folders/5m/2c6173tx1nb6m5mnkjz27gk00000gn/T/podman/podman-machine-default-api.sock'
Machine "podman-machine-default" started successfully
Shell-Cell CLI Reference
Now that you’ve installed and configured Shell-Cell, you’re ready to launch your very first session!
To get started, you need a blueprint scell.cue file in your project directory.
This file defines the environment your shell will live in.
(For a deep dive into the blueprint specification, check out the Blueprint Guide).
The quickest way to get one is to let Shell-Cell generate it for you:
scell init
This creates a minimal, ready-to-use scell.cue in the current directory.
You can then open and adjust it to your needs.
Commands
run — Start a Shell-Cell Session
scell
Shell-Cell will automatically look for a file named scell.cue in your current location and start the Shell-Cell session on the spot.
Custom entry point (-t, --target)
By default, Shell-Cell tries to locate an entry point target named main.
If you want to use a different entry point, pass the -t, --target option.
scell -t <other-entrypoint-target>
Detach mode (-d, --detach)
If you want to start the container without attaching to the shell session,
pass the -d, --detach flag.
scell -d
This is useful for pre-warming containers in the background — the container will be started and kept alive, but no interactive shell will be opened.
Custom blueprint path
If your configuration file is located elsewhere and you don’t want to change directories, you can point Shell-Cell directly to it.
scell ./path/to/the/blueprint/directory
init — Create a Blueprint
scell init
Creates a minimal, functional scell.cue blueprint in the current directory (or in the directory passed as an argument).
Returns an error if a scell.cue already exists at that location.
scell init ./path/to/directory
ls — List Shell-Cell Containers
scell ls
Displays an interactive table of all existing Shell-Cell containers.
stop — Stop All Running Shell-Cell Containers
scell stop
Stops all running Shell-Cell containers (only Shell-Cell related containers, not any others).
Press Ctrl-C or Ctrl-D to abort early.
cleanup — Remove Orphan Containers and Images
scell cleanup
Removes orphan Shell-Cell containers together with their corresponding images, as well as orphan images on their own.
An item is considered an orphan when it is no longer associated with any existing scell.cue blueprint file
(e.g., the blueprint was deleted or moved, or the blueprint contents changed so the container hash no longer matches).
❓ Need more help?
If you want to explore the full list of commands, flags, and capabilities, our built-in help menu is always there for you:
scell --help
Shell-Cell Blueprint Reference
Shell-Cell builds your environment by reading a set of instructions from a scell.cue file.
scell.cue is a CUE-formatted file that contains everything needed to configure your session.
To learn more about CUE capabilities go to the original docs.
The full formal definition of the blueprint schema is available at
src/scell/types/scell_schema.cue.
Here is a minimal functional example:
main: {
from_image: "debian:bookworm"
workspace: "workdir"
shell: "/bin/bash"
hang: "while true; do sleep 3600; done"
}
Shell-Cell follows a strict logic when building your image.
It parses your target definitions into a chain,
moving from your entry point (main) down to the base “bottom” target.
The actual image building process, by contrast, runs in the opposite direction:
it starts from the “bottom” target and works its way up to your entry point (main):
bottom_target → target_3 → target_2 → target_1 → main
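As an illustrative sketch (the target names here are made up), a three-target chain could look like this. Shell-Cell resolves main → tools → base, then builds base first:

```cue
base: {
	from_image: "debian:bookworm"
	workspace:  "/app"
}

tools: {
	from:  "+base"
	build: ["apt-get update && apt-get install -y git"]
}

main: {
	from:  "+tools"
	shell: "/bin/bash"
	hang:  "while true; do sleep 3600; done"
}
```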
Shell-Cell target
A Shell-Cell blueprint is composed of a series of target declarations and recipe definitions.
"<target-name>": {
<recipe>
...
}
A valid target name must start with a lowercase letter and contain only lowercase letters, digits, hyphens, and underscores (pattern: ^[a-z][a-z0-9_-]*$).
Inside each target, during the Shell-Cell image building process, the instructions are executed in a specific, strict order:
workspace → from/from_image/from_docker → env → copy → build
Statement groups
Every statement in a target belongs to one of three groups, depending on what it influences:
| Group | Statements | Influences |
|---|---|---|
| Image | from, from_image, from_docker, workspace, env, copy, build, hang | The built Docker image. Any change to an image statement produces a different image and triggers a rebuild. |
| Container | config | How the container is started and kept alive. Changes here cause the existing container to be replaced. |
| Session | shell | The interactive shell session attached to the running container. Changes here take effect on the next session without affecting the image or container. |
from, from_image, from_docker
Similar to the Dockerfile FROM instruction,
these statements specify the base of the Shell-Cell layer.
Exactly one of these statements must be present in each Shell-Cell target definition.
Either from_image or from_docker is required somewhere in the target chain — without one of them
there is no way to specify the basis of the image. from on its own only delegates to another target
and must eventually resolve to a from_image or from_docker.
from_image
Uses a Docker registry image as the base layer.
from_image: "<image>:<tag>"
from_docker
Uses a Dockerfile on the filesystem as the base layer.
The path is resolved relative to the scell.cue file.
from_docker: "path/to/Dockerfile"
from
References another Shell-Cell target, resolved recursively.
Use +<target_name> to reference a target in the same file, or path/to/dir+<target_name>
to reference a target in another scell.cue.
from: "+<target_name>"
from: "path/to/dir+<target_name>"
shell
The path to the shell that will be available in the built image and running container.
This shell is used for the Shell-Cell session.
Only the first shell statement encountered in the target chain (starting from the entry point) is used.
shell: "/bin/bash"
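As a small sketch (the target names are invented here), if both the entry point and its base define shell, the entry point’s value wins because it is encountered first in the chain:

```cue
base: {
	from_image: "debian:bookworm"
	shell:      "/bin/sh" // ignored: overridden further up the chain
}

main: {
	from:  "+base"
	shell: "/bin/bash" // first from the entry point, so the session uses bash
	hang:  "while true; do sleep 3600; done"
}
```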
hang
This instruction ensures your container stays active and doesn’t exit immediately after it starts. This effectively transforms your Shell-Cell container into a persistent “shell server” that remains ready for you to jump in at any time.
Only the first hang statement encountered in the target chain (starting from the entry point) is used.
To work correctly, you must specify a command that keeps the container running indefinitely. The recommended approach is a simple infinite loop:
hang: "while true; do sleep 3600; done"
This command is set as the Dockerfile ENTRYPOINT instruction.
workspace (optional)
Similar to the Dockerfile WORKDIR instruction.
workspace: "/path/to/workspace"
copy (optional)
Copies files into the Shell-Cell image.
Similar to the Dockerfile COPY instruction.
copy: [
"file1 .",
"file2 .",
"file3 file4 .",
]
env (optional)
Sets environment variables in the Shell-Cell image.
Similar to the Dockerfile ENV instruction.
Each item follows the list format <KEY>=<VALUE>:
env: [
"DB_HOST=localhost",
"DB_PORT=5432",
"DB_NAME=db",
"DB_DESCRIPTION=\"My Database\"",
]
build (optional)
Executes arbitrary commands to create a new layer on top of the current image
during the image build.
Similar to the Dockerfile RUN instruction.
build: [
"<command_1>",
"<command_2>",
]
config (optional)
Runtime configuration for the Shell-Cell container.
Unlike build, copy, and workspace, which affect the image building process,
config defines how the container behaves when it runs.
All config statements are optional.
Only the first config statement encountered in the target chain (starting from the entry point) is used.
config: {
mounts: [
"<host_path>:<container_absolute_path>",
]
ports: [
"<host_port>:<container_port>",
]
services: {
"<service_name>": {
from_image: "<image>:<tag>"
shell: "<shell>"
hang: "<hang_command>"
}
}
}
mounts
Bind-mounts host directories into the running container.
Each mount item follows the format <host_path>:<container_absolute_path>.
- The host path can be relative (resolved relative to the scell.cue file location) or absolute. Relative host paths are canonicalized during compilation, so the referenced directory must exist.
- The container path must be an absolute path.
config: {
mounts: [
"./src:/app/src",
"/data:/container/data",
]
}
ports
Publishes container ports to the host. Partially follows the Docker Compose short form syntax.
Each item can be one of:
| Format | Description |
|---|---|
| HOST_PORT:CONTAINER_PORT | Map a specific host port to a container port |
| HOST_IP:HOST_PORT:CONTAINER_PORT | Map with a specific host IP and port |
| HOST_IP::CONTAINER_PORT | Bind to a host IP with a random host port |
Append /tcp or /udp to any format to specify the protocol (default: tcp).
config: {
ports: [
"8080:80",
"127.0.0.1:9000:9000",
"6060:6060/udp",
]
}
Extra Arguments (.scell_args.cue)
Shell-Cell supports a companion file .scell_args.cue placed in the same directory as scell.cue.
When present, its CUE values are unified with the blueprint before compilation, allowing you to supply
concrete values for CUE constraints declared in scell.cue.
This is useful for parameterizing a blueprint — keeping the blueprint generic and checked in, while
supplying environment-specific or personal overrides through a gitignored .scell_args.cue file.
Typical uses include machine-specific paths, image tags, and secrets such as API keys or tokens
that should never be committed to version control.
To learn more about CUE capabilities go to the original docs.
How it works
Declare open constraints (string fields) in scell.cue:
_from_image_arg: string
_workspace_arg: string
_env_arg: int
main: {
from_image: _from_image_arg
workspace: _workspace_arg
shell: "/bin/bash"
hang: "while true; do sleep 3600; done"
env: [
"SOME_ENV=\(_env_arg)"
]
}
Then provide the concrete values in .scell_args.cue. Since the file is full CUE, you can use
all CUE features — including string interpolation to compose values from other fields:
_from_image_arg: "debian:bookworm"
_workspace_arg: "/app"
_env_arg: 10
At compile time, Shell-Cell unifies the two files. The result is equivalent to having written
those values directly in scell.cue.
main: {
from_image: "debian:bookworm"
workspace: "/app"
shell: "/bin/bash"
hang: "while true; do sleep 3600; done"
env: ["SOME_ENV=10"]
}
Notes
- .scell_args.cue is optional. If it is absent, the blueprint is compiled as-is.
- The file is looked up only in the same directory as scell.cue; there is no recursive search.
- Any CUE unification error (e.g. a value that conflicts with a constraint) is reported as a user error.
- Add .scell_args.cue to .gitignore to keep secrets and personal overrides out of version control.