Project Setup

How Computalot sets up your project. For the basic project lifecycle, see Projects.

How It Works

Projects run as sandboxed OCI containers. Your tarball includes a Dockerfile and computalot.project.json manifest. On push, Computalot builds the container image and publishes the revision immediately. Jobs then trigger runtime preparation on demand; /init is optional if you want to prepare currently available workers ahead of time.

See Project Manifest for the full manifest schema.

Init Flow

When you call POST /api/v1/projects/:name/init:

  1. Computalot prepares the OCI container image on eligible workers
  2. Manifest validation checks run (executables, files, commands)
  3. The active revision becomes ready_for_jobs after preparation succeeds

Init is asynchronous. You do not need to wait for it before submitting jobs. Use GET /api/v1/projects/:name/status when you want to inspect whether the active revision is merely published or already warm.
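Since init is asynchronous, warming a project before a latency-sensitive run means polling the status endpoint. A minimal sketch in Python: `fetch_status` is a stand-in for an authenticated GET to `/api/v1/projects/:name/status`, and only the `ready_for_jobs` field is taken from the response.

```python
import time

def wait_until_warm(fetch_status, timeout_s=300, interval_s=5):
    """Poll project status until the active revision reports
    ready_for_jobs, or give up after timeout_s seconds."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status()  # stand-in for GET .../status
        if status.get("ready_for_jobs"):
            return True
        time.sleep(interval_s)
    return False

# Stubbed fetcher that becomes warm on the third poll:
calls = iter([{"ready_for_jobs": False},
              {"ready_for_jobs": False},
              {"ready_for_jobs": True}])
print(wait_until_warm(lambda: next(calls), timeout_s=10, interval_s=0))  # True
```

Remember that waiting is optional: jobs submitted before the project is warm simply trigger preparation on demand.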

can_accept_new_jobs: true means the latest project revision is published and jobs can be submitted now. ready_for_jobs: true means Computalot finished platform-side runtime preparation for the current content hash. Neither field guarantees your application imports or credentials are valid. Use manifest validation checks and run one smoke job after changes.
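The two booleans combine into three practical states. A small sketch that maps a status payload onto a readable label; the field names follow the status endpoint as described above, everything else is illustrative:

```python
def describe(status: dict) -> str:
    """Map the two readiness booleans onto a human-readable state."""
    if status.get("ready_for_jobs"):
        return "warm: runtime prepared for the current content hash"
    if status.get("can_accept_new_jobs"):
        return "published: jobs accepted now, first job may trigger preparation"
    return "not ready: latest revision not published yet"

print(describe({"can_accept_new_jobs": True, "ready_for_jobs": False}))
```

Neither label says anything about your application code; that is what the smoke job is for.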

Project readiness is per worker, not a single global switch. One node can already be warm while another is still initializing or failed. Use GET /api/v1/projects/:name/status/details when you need the per-node picture.
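To see the per-node picture at a glance you can aggregate the details response. The payload shape below (a `workers` list with `node` and `state` fields) is a hypothetical illustration, not the documented schema:

```python
# Hypothetical shape for the per-node details payload; the real schema may differ.
details = {
    "workers": [
        {"node": "gpu-a", "state": "ready"},
        {"node": "gpu-b", "state": "initializing"},
        {"node": "gpu-c", "state": "failed"},
    ]
}

def summarize(workers):
    """Count workers per preparation state."""
    counts = {}
    for w in workers:
        counts[w["state"]] = counts.get(w["state"], 0) + 1
    return counts

print(summarize(details["workers"]))  # {'ready': 1, 'initializing': 1, 'failed': 1}
```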

Project init is free but requires at least $5 of available balance. If init returns 402 Payment Required, fund the account and retry.
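In automation, the 402 case is worth handling explicitly rather than failing the pipeline. A sketch under stated assumptions: `call_init` returns an HTTP status code and `fund_account` tops up the balance; both are stand-ins for your own client code.

```python
def init_with_funding_retry(call_init, fund_account, max_attempts=2):
    """Call init; on HTTP 402 Payment Required, fund the account and retry.
    call_init and fund_account are stand-ins for real client calls."""
    for _ in range(max_attempts):
        code = call_init()
        if code != 402:
            return code
        fund_account()  # e.g. top up so available balance is at least $5
    return 402

# Stub: first attempt is short on balance, retry succeeds.
codes = iter([402, 202])
print(init_with_funding_retry(lambda: next(codes), lambda: None))  # 202
```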

Minimal Project Structure

my-project/
├── Dockerfile
├── computalot.project.json
└── job.py

Dockerfile

Install your dependencies and set up the runtime in your Dockerfile:

FROM python:3.11-slim
WORKDIR /workspace
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

computalot.project.json

Minimal manifest:

{
  "version": 1,
  "runtime": {
    "kind": "oci",
    "sandbox": "gvisor",
    "workdir": "/workspace"
  },
  "entrypoint": {
    "command": ["python", "job.py"]
  }
}
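A malformed manifest fails at push time, so a quick local sanity check can save a round trip. The sketch below only checks JSON syntax and the top-level keys from the minimal example above; it is not the platform's validator and the required-key list is an assumption:

```python
import json

REQUIRED_TOP_LEVEL = ("version", "runtime", "entrypoint")  # assumed, from the minimal example

def check_manifest(text: str) -> list:
    """Return a list of problems found in a computalot.project.json string."""
    try:
        manifest = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    problems = [f"missing key: {k}" for k in REQUIRED_TOP_LEVEL if k not in manifest]
    if not manifest.get("entrypoint", {}).get("command"):
        problems.append("entrypoint.command must be a non-empty list")
    return problems

minimal = '{"version": 1, "runtime": {"kind": "oci"}, "entrypoint": {"command": ["python", "job.py"]}}'
print(check_manifest(minimal))  # []
```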

See Project Manifest for the full schema including validation, cache mounts, data sources, and services.

Worker Environment

Tasks run inside your container image. Include all dependencies in your Dockerfile. GPU-capable workers have NVIDIA drivers and CUDA available.

Setup and validation may run again on reused workers. Treat your project bootstrap as idempotent: if you create a .venv, cache directory, or generated file during setup, your setup path must tolerate partial previous state and rebuild it cleanly when needed.
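One way to make a bootstrap step idempotent is a completion marker: if the artifact directory exists without the marker, a previous run was interrupted, so rebuild from scratch. A minimal sketch; the path and marker names are illustrative:

```python
import os
import shutil

def ensure_cache(path=".cache", marker=".complete"):
    """Idempotent setup step: if a previous run left a partial cache
    (directory exists but no completion marker), rebuild it from scratch."""
    marker_path = os.path.join(path, marker)
    if os.path.isdir(path) and not os.path.exists(marker_path):
        shutil.rmtree(path)  # discard partial state from an interrupted run
    if not os.path.isdir(path):
        os.makedirs(path)
        # ... populate the cache here ...
        open(marker_path, "w").close()  # written only after setup succeeds
    return os.path.exists(marker_path)
```

The marker is written last, so a crash at any earlier point leaves a state the next run recognizes as partial.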

Do not use worker setup as your dev shell by default. Avoid commands like uv sync --extra dev unless those dev-only packages are required at runtime. Install production/runtime dependencies in the image, keep dev extras separate, and prefer a smaller CPU runtime for screening/eval jobs when your GPU training stack is much larger.

For OCI + sandboxed jobs, worker disk usage is usually much larger than your source tarball. Budget for the built runtime, any downloaded weights/data, writable caches, checkpoints, temp files, and the per-task sandbox copy of the prepared runtime.
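A back-of-the-envelope budget can be sketched as a sum of those components; all numbers below are illustrative, and the one-sandbox-copy-per-task assumption is taken from the paragraph above:

```python
def disk_budget_gb(image_gb, weights_gb, cache_gb, checkpoints_gb, tmp_gb,
                   concurrent_tasks=1):
    """Rough worker-disk estimate: built runtime plus downloaded data,
    caches, checkpoints, temp files, and one sandbox copy of the
    prepared runtime per concurrent task. Inputs are illustrative."""
    sandbox_copies = image_gb * concurrent_tasks
    return image_gb + weights_gb + cache_gb + checkpoints_gb + tmp_gb + sandbox_copies

print(disk_budget_gb(image_gb=8, weights_gb=20, cache_gb=5,
                     checkpoints_gb=10, tmp_gb=2))  # 53
```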

Updating Code

Push a new tarball. The new revision is published immediately; use invalidate only if you want to discard old prepared runtimes, and use init only if you want to prepare currently available workers ahead of time:

tar czf code.tar.gz Dockerfile computalot.project.json job.py

curl -sS "$BASE_URL/api/v1/projects/my-project/push" \
  -X POST -H "Authorization: Bearer $TOKEN" --data-binary @code.tar.gz

curl -sS "$BASE_URL/api/v1/projects/my-project/invalidate" \
  -X POST -H "Authorization: Bearer $TOKEN"

curl -sS "$BASE_URL/api/v1/projects/my-project/init" \
  -X POST -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" -d '{}'