Services
Deploy containerized applications as web services, background workers, or cron jobs.
Services are the core compute unit in SMLL. You deploy Docker containers as services, and SMLL handles networking, scaling, and lifecycle management.
Service types
| Type | Description |
|---|---|
| Web | Receives HTTP traffic via a public URL. Default port 8080, health check at /healthz. |
| Worker | Background process with no inbound traffic; ideal for queue consumers and data pipelines. |
| Cron | Scheduled job using standard cron syntax (e.g. 0 * * * * for hourly). |
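To make the web defaults concrete, here is a minimal sketch of a container entrypoint that listens on the default port 8080 and answers the health check at /healthz, using only the Python standard library. The handler and `serve` helper are illustrative, not an SMLL requirement; any server that responds 200 on /healthz works.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

PORT = 8080  # default port for SMLL web services


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            # Respond 200 so the platform considers the service healthy.
            body = b"ok"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)


def serve(port: int = PORT) -> None:
    # Bind on all interfaces so the platform's router can reach the container.
    HTTPServer(("", port), Handler).serve_forever()
```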
Deployment modes
| Mode | Description |
|---|---|
| Always on | Service runs continuously with the configured number of replicas. Supports HPA autoscaling. |
| On demand | Scales to zero when idle. Wakes automatically on incoming traffic (web) or schedule (cron). |
On-demand services use KEDA for scale-to-zero. See Scaling for details.
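One pattern that fits on-demand workers is to process messages while they arrive and then exit cleanly after a quiet period, letting the platform scale the service back to zero. This is a sketch of that pattern, not an SMLL API: `fetch_message` and `handle` are placeholders for your actual queue client and business logic.

```python
import time

IDLE_LIMIT = 5  # consecutive empty polls before exiting


def run(fetch_message, handle, poll_interval: float = 1.0) -> None:
    """Drain the queue, then exit once it has been idle for a while."""
    idle = 0
    while idle < IDLE_LIMIT:
        msg = fetch_message()  # returns None when the queue is empty
        if msg is None:
            idle += 1
            time.sleep(poll_interval)
        else:
            idle = 0
            handle(msg)
```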
Instance sizes
| Plan | CPU | Memory |
|---|---|---|
| smll.nano | 0.25 cores | 512 MiB |
| smll.small | 0.5 cores | 1 GiB |
| smll.medium | 1 core | 2 GiB |
| smll.large | 2 cores | 4 GiB |
| smll.xlarge | 4 cores | 8 GiB |
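Picking a plan amounts to finding the smallest entry in the table that covers your workload. The figures below come straight from the table; the helper function itself is an illustrative sketch, not an SMLL API.

```python
# (name, cpu cores, memory in MiB), ordered smallest to largest.
PLANS = [
    ("smll.nano",   0.25,  512),
    ("smll.small",  0.5,  1024),
    ("smll.medium", 1.0,  2048),
    ("smll.large",  2.0,  4096),
    ("smll.xlarge", 4.0,  8192),
]


def smallest_plan(cpu_cores: float, memory_mib: int) -> str:
    """Return the cheapest plan that satisfies both requirements."""
    for name, cpu, mem in PLANS:
        if cpu >= cpu_cores and mem >= memory_mib:
            return name
    raise ValueError("no plan is large enough")
```

For example, a job needing 1 core and 1 GiB lands on smll.medium, since smll.small caps out at 0.5 cores.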
Creating a service
- Navigate to your VPC
- Click Create Service
- Configure:
  - Name: alphanumeric and hyphens
  - Type: web, worker, or cron
  - Image: from your VPC's private registry
  - Instance size: choose a plan
  - Deployment mode: always on or on demand
  - Replicas: minimum and maximum replica count
- Click Create
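If you want to validate names before submitting the form, a simple check for the "alphanumeric and hyphens" rule can be sketched as below. The additional restriction on leading and trailing hyphens is an assumption borrowed from DNS-label conventions, not something the steps above state.

```python
import re

# Alphanumerics and hyphens; assumed (not confirmed by SMLL) to disallow
# leading/trailing hyphens, as DNS labels do.
NAME_RE = re.compile(r"[a-z0-9]([a-z0-9-]*[a-z0-9])?", re.IGNORECASE)


def is_valid_name(name: str) -> bool:
    return NAME_RE.fullmatch(name) is not None
```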
Service status
Services transition through these states:
creating → running, stopped, or error. A redeploy moves the service to deploying, which resolves to running, error, or crashed.
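The lifecycle can be modeled as a small transition table, which is handy when scripting against service state. This is one reading of the diagram above, not a platform API, and it lists only the transitions the diagram shows.

```python
# Allowed state transitions, per the lifecycle diagram above.
TRANSITIONS = {
    "creating":  {"running", "stopped", "error"},
    "running":   {"deploying"},
    "stopped":   {"deploying"},
    "error":     {"deploying"},
    "deploying": {"running", "error", "crashed"},
}


def can_transition(src: str, dst: str) -> bool:
    return dst in TRANSITIONS.get(src, set())
```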
Persistent storage
You can attach a persistent disk to any service:
- Size: configurable in GB
- Mount path: defaults to /data
The disk persists across restarts and redeployments.
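As a quick sketch of what "persists across restarts" means in practice, the snippet below keeps a restart counter on the disk. The default path matches the /data mount above; it is parameterized so the snippet runs anywhere, and the file name is a hypothetical example.

```python
from pathlib import Path


def bump_restart_count(data_dir: str = "/data") -> int:
    """Increment and return a counter stored on the persistent disk."""
    counter = Path(data_dir) / "restarts.txt"  # hypothetical file name
    count = int(counter.read_text()) + 1 if counter.exists() else 1
    counter.write_text(str(count))
    return count
```

Because the disk survives redeployments, the counter keeps climbing across service restarts instead of resetting with each new container.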