{"name":"computalot","status":"private_beta","version":"v2","auth":{"type":"bearer","header":"Authorization: Bearer <token>","note":"Anonymous callers can use /health, /docs, /llms.txt, /llms-full.txt, /api/v1/docs/*, POST /api/v1/feedback, POST /api/v1/auth/register, POST /api/v1/auth/wallet/challenge, and POST /api/v1/auth/wallet/verify. GET /metrics is operator-gated; /metrics requires admin auth, a dedicated metrics token, or a local request. Other endpoints require one of the two supported beta access paths: an admin-issued API key or a wallet session from an admin-whitelisted wallet. Note: POST /api/v1/auth/register currently returns 403 because self-service API key issuance is disabled.","wallet_auth":"POST /api/v1/auth/wallet/challenge to create a wallet challenge for a wallet that an admin has approved in the dashboard whitelist, sign the returned message, then POST /api/v1/auth/wallet/verify to receive a short-lived fls_-prefixed session token. If the wallet is not yet allowlisted, join the waitlist at / or ask an admin to whitelist it before relying on wallet auth or x402.","how_to_get_a_key":"API keys are admin-issued only for now. Wallet auth and x402 currently work only for admin-whitelisted wallets. If you do not already have beta access, start at / and join the waitlist, or ask an admin to create a user and issue a flk_-prefixed API key via the admin API.","roles":{"member":"Create/manage own projects and jobs.","viewer":"Read-only access to own projects and jobs"},"scoping":"Endpoints are scoped by account ownership. API keys and wallet sessions both resolve to an account. You see projects you own and jobs submitted under that account. Infrastructure identities are not exposed in public responses because placement is managed internally by Computalot."},"description":"Computalot is a distributed compute platform. Submit jobs, get structured JSON results. GPU and CPU workers.","docs":{"index":"/api/v1/docs","python_sdk":"/api/v1/docs/python-sdk","recipes":"/api/v1/docs/recipes","workflows":"/api/v1/docs/workflows","skill":"/skill.md","llm":"/llms.txt","llm_full":"/llms-full.txt","web":"/docs"},"controller_url":"https://computalot.com","api":{"auth":[{"path":"/api/v1/auth/register","response":"403: {error, recommended_action, details}. Join the waitlist at / if you need beta access, use POST /api/v1/auth/wallet/challenge + /verify only after your wallet is allowlisted, or ask an admin to issue an API key.","body":{"name":"string (ignored while disabled)","email":"string (ignored while disabled)"},"method":"POST","purpose":"Self-service API key issuance is currently disabled. No auth required, but the endpoint returns 403 with beta-access guidance toward the landing-page waitlist, admin-whitelisted wallet auth, or admin-issued keys."},{"path":"/api/v1/auth/wallet/challenge","response":"201: {challenge: {id, chain, wallet_address, nonce, message, status, expires_at}}. 422 for malformed chain/address input. 403 for wallets that are not allowlisted.","body":{"chain":"string (default base)","wallet_address":"string"},"method":"POST","purpose":"Create a wallet auth challenge for an autonomous agent wallet. No auth required, but non-whitelisted wallets are denied with next-step guidance."},{"path":"/api/v1/auth/wallet/verify","response":"201: {account, wallet, session, token}. 401 for invalid signatures or wallet/chain mismatch. 409 for reused challenges. 410 for expired challenges. 422 for malformed verify input. token is an fls_-prefixed bearer token.","body":{"signature":"string","wallet_address":"string","challenge_id":"string"},"method":"POST","purpose":"Verify a signed wallet challenge and mint a short-lived session token. No auth required, but verification is still beta-gated by wallet allowlisting."}],"ops":[{"path":"/health","method":"GET","purpose":"Liveness probe (no auth). Returns {\"status\":\"ok\",\"app\":\"computalot_api\"}."},{"path":"/live","method":"GET","purpose":"Liveness probe (no auth). Same as /health."},{"path":"/ready","method":"GET","purpose":"Readiness probe (no auth). 200 with {checks: {repo, api_supervisor, controller_core}} when the controller core is up; 503 otherwise."},{"path":"/metrics","method":"GET","purpose":"Prometheus metrics (operator-gated). Requires a local request, admin auth, or a dedicated metrics token."}],"results":[{"path":"/api/v1/results/:job_id","response":"200: {job_id, project, status, client_ref, tags, meta, variant?, recipe_cache?, artifact_ids, links, summary, aggregate_result, aggregate_aliases, completeness, result_persisted, output_persisted, results: [{task_id, status, payload, result, artifact_ids, result_artifact_id?, result_present, result_quality, result_warnings, output, output_present, error, project_content_hash?, started_at, completed_at}], count, result_count, output_count}","method":"GET","note":"Recommended way to read task outcomes. The 'result' field contains the JSON your runner wrote to $COMPUTALOT_TASK_RESULT. 'summary' includes aggregate_result, aggregate_aliases, completeness, task_outcome_counts, and result_persisted/output_persisted flags. Weighted fan-out jobs also expose completeness coverage fields such as weight_field, expected_weight, completed_weight, and pending_weight. Chunk fan-out payload aliases like chunk_index, chunk_count, and seed_range stay visible per task. 'artifact_ids' lists files produced by the task or large spilled result blobs; download those with GET /api/v1/artifacts/:id, or list follow-up files through GET /api/v1/artifacts. 'client_ref' and 'tags' help you search, but job_id remains the canonical lookup key. For live retry-loop diagnostics, prefer GET /api/v1/jobs/:id/output and GET /api/v1/jobs/:id/tasks because they preserve the latest failed-attempt output/error even before the next attempt finishes. Placement infrastructure is not part of the public result surface, so provider IDs, raw runtime paths, and image refs/digests stay redacted.","purpose":"Get per-task completion records for one job, with structured result/output presence flags, artifact IDs, follow-up links, and quality scores. Reads from PG — available immediately after task completion."},{"path":"/api/v1/results","response":"200: {results, count, limit, offset, applied_filters, result_guide, group_by?, groups?}. 422 when limit/offset are malformed (non-integer, limit <= 0, offset < 0).","method":"GET","query_params":"?limit=20&offset=0&job_id=job_...&ids=job_a,job_b&project=my-proj&client_ref=batch_123&tag=experiment_alpha&user_id=42&recipe_cache_key=abc&recipe_cache_scope=user&recipe_cache_hit=true&group_by=project&include_tasks=false","note":"Defaults to terminal statuses (completed, partial, failed, cancelled) rather than only completed jobs. Pagination uses limit (1-100, default 20) and offset (>= 0, default 0); malformed values return 422 with a specific error message, and the response echoes both back. Returns applied_filters, a result-guide block, and per-job links so users can pivot directly to /jobs/:id, /jobs/:id/tasks, /jobs/:id/stream, or /results/:job_id.","purpose":"List terminal jobs in result-oriented form so you can find finished work before drilling into GET /api/v1/results/:job_id."}],"account":[{"path":"/api/v1/account/balance","method":"GET","purpose":"Get account credit summary: ledger balance, held funds, available funds, and open quote count. This is the canonical balance snapshot."},{"path":"/api/v1/account/ledger","method":"GET","purpose":"List settled credit ledger entries for the current account"},{"path":"/api/v1/account/holds","method":"GET","purpose":"List active and historical holds for the current account"},{"path":"/api/v1/account/quotes","method":"GET","purpose":"List funding and shortfall quotes for the current account"},{"path":"/api/v1/account/quotes/topup","method":"POST","purpose":"Create an x402 funding quote. Returns 402 Payment Required with PAYMENT-REQUIRED, or 422 when amount_usd is malformed, not positive, or exceeds the $10,000 per-top-up cap."},{"path":"/api/v1/account/quotes/:quote_id/pay/x402","method":"POST","purpose":"Settle an x402 quote using PAYMENT-SIGNATURE and credit the account on success. First settlement returns 201, replay-safe repeats return 200 with replay=true, malformed PAYMENT-SIGNATURE returns 422, and settlement failures return 402 with machine-readable payment details."}],"artifacts":[{"path":"/api/v1/artifacts","response":"201: {id, sha256, size, source: local}","headers":"X-Artifact-Filename: name.ext, X-Artifact-Job-Id: job_external_id (optional, links artifact to job for access control)","method":"POST","note":"Body is streamed directly to disk — no memory buffering. Computalot packages all files in COMPUTALOT_ARTIFACT_DIR and uploads them as a single archive.","purpose":"Upload artifact (streaming, max 2GB). Computalot also auto-uploads task artifacts as tar.gz."},{"path":"/api/v1/artifacts/direct","response":"201: {source: \"r2\", sha256, size, filename, content_type, upload: {method, url, headers, key, expires_in_s}, complete: {method, path}}","body":{"size":"non-negative integer bytes","filename":"string","job_id":"optional job external_id","sha256":"64-char lowercase hex digest","content_type":"optional string","ttl_seconds":"optional positive integer"},"method":"POST","purpose":"Request a presigned direct-upload URL for object storage. Use this for large datasets and model bundles so the controller is not in the byte path."},{"path":"/api/v1/artifacts/direct/complete","response":"201: {id, sha256, size, source: \"r2\", url, ...}","body":{"size":"non-negative integer bytes","filename":"string","job_id":"optional job external_id","sha256":"64-char lowercase hex digest","content_type":"optional string"},"method":"POST","purpose":"Finalize a previously uploaded direct object-store artifact after the client PUT succeeds and Computalot verifies the object exists."},{"path":"/api/v1/artifacts/multipart","response":"201: {source: \"r2\", sha256, size, filename, content_type, upload_id, key, part_upload, list_parts, complete, abort}","body":{"size":"non-negative integer bytes","filename":"string","job_id":"optional job external_id","sha256":"64-char lowercase hex digest","content_type":"optional string"},"method":"POST","purpose":"Start a resumable multipart direct upload for large artifacts and receive an upload_id plus the follow-up multipart endpoints."},{"path":"/api/v1/artifacts/multipart/part","response":"201: {source: \"r2\", sha256, upload_id, part_number, upload: {method, url, headers, expires_in_s}}","body":{"job_id":"optional job external_id","sha256":"64-char lowercase hex digest","ttl_seconds":"optional positive integer","upload_id":"string","part_number":"positive integer"},"method":"POST","purpose":"Get a presigned PUT URL for one multipart chunk. Repeat per part_number and upload directly to object storage."},{"path":"/api/v1/artifacts/multipart/parts","response":"200: {source: \"r2\", sha256, upload_id, key, parts: [{part_number, etag, size}]}","method":"GET","query_params":"?sha256=<digest>&upload_id=<upload_id>&job_id=<job_external_id>","purpose":"List uploaded multipart chunks so interrupted clients can resume from the existing upload_id."},{"path":"/api/v1/artifacts/multipart/complete","response":"201: {id, sha256, size, source: \"r2\", url, ...}","body":{"size":"non-negative integer bytes","filename":"string","job_id":"optional job external_id","parts":"[{part_number, etag}, ...]","sha256":"64-char lowercase hex digest","content_type":"optional string","upload_id":"string"},"method":"POST","purpose":"Finalize a multipart upload in object storage and register the resulting artifact after Computalot verifies the completed object."},{"path":"/api/v1/artifacts/multipart/abort","response":"200: {ok: true, aborted: true, source: \"r2\", sha256, upload_id}","body":{"job_id":"optional job external_id","sha256":"64-char lowercase hex digest","upload_id":"string"},"method":"POST","purpose":"Abort an in-flight multipart upload when a client no longer intends to complete it."},{"path":"/api/v1/artifacts/external","body":{"filename":"string","url":"string","sha256":"optional string"},"method":"POST","purpose":"Register external URL (no upload)"},{"path":"/api/v1/artifacts","method":"GET","purpose":"List your artifacts (includes artifacts from your jobs, even if they were uploaded automatically during task completion)"},{"path":"/api/v1/artifacts/:id","method":"GET","purpose":"Download artifact binary (access: own artifacts + artifacts from own jobs). Authenticated GET /api/v1/artifacts/:id requests stream bytes through the controller, and artifact metadata may expose a signed object-store URL when a client wants object-store details."},{"path":"/api/v1/artifacts/:id/meta","method":"GET","purpose":"Get artifact metadata"},{"path":"/api/v1/artifacts/:id","method":"DELETE","purpose":"Delete artifact"}],"leases":[{"path":"/api/v1/leases/:job_id","response":"200: {leases?: {total, done, failed, requeued}, reservation?: {mode, parallelism, guaranteed_for_s, max_wait_s, routing_profile, requirements, active, admitted_at, expires_at, available_parallelism}}","method":"GET","purpose":"Get lease/chunk status and guaranteed-capacity reservation status for a job"}],"jobs":[{"path":"/api/v1/jobs","response":"201: job object with id, status, type, requirements, reservation, checkpointing, and summary.billing_estimate/billing_admission. When funded, summary.billing_hold is also present. 402: PAYMENT-REQUIRED shortfall quote when the account cannot admit the request yet; fund the account, then retry the same submit request. Public responses do not include placement infrastructure.","body":{"priority":"optional string — high | normal | low. Default normal. Biases scheduling between otherwise comparable jobs without exposing infrastructure details.","type":"structured_runner | sweep | map_reduce | benchmark","split":"object — {field, start, total, chunks} (map_reduce only)","state":"object (optional) — shared orchestration helpers. state.publish can write selected terminal values into project-scoped shared state, e.g. {publish: {best_score: %{path: \"results.0.score\"}, ready: %{value: true}}}.","reduce":"map of field -> operator (map_reduce only). Operators: sum, mean, max, min, weighted_avg:<weight_field>, concat, count, collect.","tags":"array of strings — labels for grouping/filtering (max 20). Query with GET /jobs?tag=sweep_72.","gpu_required":"bool (default false)","payload":"object — task input (structured_runner, map_reduce). Written to $COMPUTALOT_TASK_PAYLOAD.","recipe":"optional string — alias for a shared platform recipe. If set, Computalot resolves the recipe's fixed runtime bundle and entrypoint.","fan_out":"object — {by: \"field\"}, {items: [%{...}, ...]}, or {chunks: N, total: N} (structured_runner only). These shapes are mutually exclusive: mixing `by`, `items`, or `chunks` + `total` returns 422. `batch_size` / `batch_per_task` groups multiple fan-out items into one dispatched task while preserving batch metadata in payload._batch.","project":"string — registered project name. Public platform recipes can also be targeted here by name.","parameters":"map of param_name -> [values] (sweep only). Cartesian product, max 1000 combos.","requirements":"object (optional) — minimum cpu, memory_mb, storage_gb, gpu_count, gpu_memory_mb, profile for each task","reservation":"object (optional) — {mode, parallelism, guaranteed_for_s, max_wait_s}. Guaranteed mode is immediate admit-or-reject.","max_retries":"int (default 0) — auto-retry failed tasks","checkpointing":"object (optional, structured_runner only) — {enabled, resume_from_latest, payload_key}. When enabled, tasks can emit progress/result checkpoint maps under `checkpoint`; Computalot durably publishes artifact-backed checkpoints when an `artifact_id` is present or a checkpoint path is publishable, and retries recover that state into the next task payload.","depends_on":"array of job IDs — DAG dependency (waits for all to complete)","callback_url":"string — URL to POST webhook when job reaches terminal status","candidates":"map of name -> config (benchmark only). Min 2 candidates.","fixed_payload":"object merged into every task (sweep only)","merge_strategy":"collect | keyed | weighted_avg (structured_runner only)","rank_by":"string — result field to rank/sort by (sweep, benchmark)","rank_order":"asc | desc (default desc) (sweep, benchmark)","replicas":"int (default 1) — runs per candidate (benchmark only)","runner_command":"array — e.g. [\"python\", \"script.py\"]. Required for user-controlled runner submissions; omitted for recipe-backed sealed jobs.","runtime_hint_s":"optional int — expected runtime hint used by hold estimation. timeout_s remains the hard ceiling.","shared_payload":"object merged into every task (benchmark only)","timeout_s":"int (default 3600)","reliability_mode":"optional string — best_effort | strict_complete. Recommended for research-sensitive fan-out work.","result_schema":"object (optional) — custom result validation. {required_fields: [\"field1\"], field_types: {\"field1\": \"number\"}, field_ranges: {\"field1\": [0.0, 1.0]}}. Extends the job type's default schema. Tasks missing required fields get low quality scores."},"method":"POST","purpose":"Submit a job"},{"path":"/api/v1/jobs","response":"200: {jobs, total, count, limit, offset}. Each job includes: payload (extracted from request), user_id, has_error (bool), has_output (bool), error_snippet (string|null, ~120 char extract). Heavy fields (request, output, summary, error) are stripped — use GET /jobs/:id for full data.","method":"GET","query_params":"?status=queued&project=my-proj&type=structured_runner&tag=sweep_72&limit=50&offset=0","purpose":"List jobs (your projects + jobs you submitted)"},{"path":"/api/v1/jobs/:id","method":"GET","purpose":"Get full job state. Access: own projects and own jobs. Poll until status is terminal. Includes requirements, reservation, checkpointing, feedback_summary, and checkpoint summary when enabled; does not expose placement infrastructure."},{"path":"/api/v1/jobs/:id/output","response":"200: {output, error}","method":"GET","note":"If a platform/runtime failure happens before the user process starts, output/error can contain preflight stderr from Computalot rather than user stdout/stderr.","purpose":"Read stdout/stderr output. During auto-retry, this preserves the most recent failed attempt's diagnostics until the current attempt emits its own output."},{"path":"/api/v1/jobs/:id/tasks","response":"200: {tasks: [...], count}","method":"GET","note":"Each task includes: status, result_present, output_present, output (full stdout, up to 10KB), error (last ~1000 chars on failure), result (structured JSON), result_quality, result_warnings, live_feedback, latest_progress, checkpoint, resume_state, runtime_s, stale_for_s, health_status, started_at, completed_at. During auto-retry, queued/running tasks can continue showing the previous failed attempt's output/error until the current attempt emits its own diagnostics. Checkpoint state can include durable publication fields like artifact_id, artifact_source, publish_status, and published_at; resume payloads can include artifact-backed checkpoint download metadata. For failed tasks, result may be a machine-readable failure payload with fields such as failure_kind, exit_code, command, cwd, and combined_output. Public task responses keep the submitted task payload contract, but they do not expose current_node, provider IDs, raw runtime paths, or image refs/digests.","purpose":"List tasks with individual statuses, result/output presence flags, output, and errors"},{"path":"/api/v1/jobs/:id/events","method":"GET","query_params":"?limit=200","purpose":"Lifecycle events (state changes, progress, errors)"},{"path":"/api/v1/jobs/:id/stream","method":"GET","purpose":"Authenticated SSE stream for live job feedback. Emits snapshot, job, task, event, done, and timeout frames."},{"path":"/api/v1/jobs/watch","query":"ids=id1,id2,... (comma-separated, max 100)","method":"GET","note":"More efficient than opening one /stream per job. Best for batch submissions where you need to track 2-100 jobs. Idle periods emit ping keepalives. Snapshot and terminal job frames include client_ref, tags, meta, variant, plus the same public summary fields exposed by GET /api/v1/results/:job_id, including aggregate_result, aggregate_aliases such as avg_edge when available, completeness coverage, and result_persisted/output_persisted flags.","purpose":"Watch multiple jobs via a single SSE connection (max 100 IDs). Emits snapshot, per-job deltas, and done when all are terminal."},{"path":"/api/v1/jobs/:id/metrics","method":"GET","purpose":"Aggregate metrics"},{"path":"/api/v1/jobs/:id/cancel","body":{"reason":"string"},"method":"PUT","purpose":"Cancel a job. Kills running tasks and releases reserved or inflight capacity."}],"recipes":[{"path":"/api/v1/recipes","response":"200: {count, recipes: [{name, project, runtime_source_kind, runtime_kind, default_execution_policy, default_placement_policy, supported_placement_policies, default_requirements, payload_schema, artifact_inputs, operations, entrypoint, manifest, cache_policy, runtime_version, created_at, updated_at}]}","method":"GET","purpose":"List public platform recipes. Recipes are platform-owned sealed compute steps backed by fixed tarball runtimes and fixed entrypoints."},{"path":"/api/v1/recipes/:name","response":"200: recipe object with fixed runtime metadata, default_requirements, payload_schema, entrypoint, manifest, and supported placement policies","method":"GET","purpose":"Inspect one public platform recipe"},{"path":"/api/v1/recipes/:name/jobs","response":"201: job object. The controller resolves the recipe to its fixed entrypoint and sealed runtime contract.","body":{"type":"optional string — omit it for recipe submissions. The API infers structured_runner when recipe is set, and the runtime plus entrypoint are fixed by the recipe.","placement_policy":"optional string — placement policy must be one of the recipe's supported_placement_policies.","payload":"object — task input","requirements":"optional object — minimum task resources","reservation":"optional object — {mode, parallelism, guaranteed_for_s, max_wait_s}"},"method":"POST","purpose":"Submit a sealed job against one public recipe without referencing the underlying project directly"}],"projects":[{"path":"/api/v1/projects","response":"201: project object with name, remote_dir, env, setup_timeout_s, runtime_kind, image_ref, image_digest, manifest, cache_policy, runtime_version, content_hash, created_at, updated_at","body":{"env":"optional object of project-level runtime env vars merged after env files and before meta.env","name":"string (1-64 chars, a-z0-9_-)","remote_dir":"string (absolute path where Computalot prepares the project environment)","image_digest":"optional OCI image digest","runtime_kind":"optional string, must be oci for public execution","image_ref":"optional OCI image reference","manifest":"optional object for runtime contract metadata","cache_policy":"optional object for explicit cache policy","runtime_version":"optional positive integer runtime contract version","setup_timeout_s":"optional int > 0. Overrides the default 600s project setup timeout."},"method":"POST","purpose":"Register a new project"},{"path":"/api/v1/projects","response":"200: {count, projects: [...]}","method":"GET","purpose":"List your projects"},{"path":"/api/v1/projects/:name","response":"200: project object with runtime metadata plus init_status {project_ready, can_accept_new_jobs, ready_for_jobs, init_state, availability_status, progress_phase, pending_init, requested_state?, content_hash, status_message, next_action, latest_issue?, last_ready_at, latest_activity_at?, latest_failure_at?, ready_replicas, initializing_replicas, failed_replicas, stale_replicas, total_replicas, queue_depth, queued_tasks, active_jobs, active_tasks}","method":"GET","purpose":"Get project config plus Computalot-managed readiness status"},{"path":"/api/v1/projects/:name","response":"200: updated project object. 404 if not found. 422 if you send tarball/code fields; use POST /api/v1/projects/:name/push for code updates.","body":{"env":"optional object of project-level runtime env vars","remote_dir":"optional string","image_digest":"optional OCI image digest","runtime_kind":"optional string, must be oci for public execution","image_ref":"optional OCI image reference","manifest":"optional object for runtime contract metadata","cache_policy":"optional object for explicit cache policy","runtime_version":"optional positive integer runtime contract version","setup_timeout_s":"optional int > 0"},"method":"PUT","purpose":"Update project metadata only (owner only)"},{"path":"/api/v1/projects/:name","method":"DELETE","note":"Blocked if project has active (queued/running) jobs — cancel them first.","purpose":"Delete project + tarball (owner only)"},{"path":"/api/v1/projects/:name/push","response":"200: {status, content_hash, size_bytes, runtime_kind, runtime_version, manifest_present, tarball_diff?, ready_for_jobs, status_message, next_action, init_status}. 409: {error, init_state, ready_for_jobs, next_action}. 422: {error, details}","method":"POST","note":"Raw gzip binary body (not multipart/form-data). Max 100MB. Include Dockerfile and computalot.project.json in your tarball. Returns 400 if body is not valid gzip, 409 if already initializing, 422 if invalid tarball or manifest.","purpose":"Upload code tarball (owner only)"},{"path":"/api/v1/projects/:name/init","response":"200: {status, ready_for_jobs, status_message, init_status}. 402: PAYMENT-REQUIRED shortfall quote when available balance is below the init funded floor; after funding, retry the same POST /api/v1/projects/:name/init.","body":{"max_nodes":"optional int"},"method":"POST","note":"Async and optional — poll GET /projects/:name/status for progress. Push already builds the OCI image; this endpoint only prepares the runtime on currently available workers and does not provision fresh capacity by itself. If the funded floor is missing, returns a shortfall quote — fund the account and retry.","purpose":"Prepare the published project runtime on currently available matching workers (owner only)"},{"path":"/api/v1/projects/:name/invalidate","response":"200: {status, ready_for_jobs, status_message, next_action, init_status}","method":"POST","purpose":"Mark prepared runtime state as stale for the latest revision so future jobs or optional manual init rebuild it cleanly (owner only)"},{"path":"/api/v1/projects/:name/cancel-queued","response":"200: {status, project, tag?, queued_before, cancelled_jobs}","body":{"reason":"optional string","tag":"optional string"},"method":"PUT","purpose":"Cancel queued or planning jobs for one project without listing them individually (owner only)"},{"path":"/api/v1/projects/:name/kv","response":"200: {project, prefix, entries: [{key, value, updated_at, ttl_s?, meta}], count}","method":"GET","query_params":"?prefix=checkpoint&limit=100","purpose":"List project-scoped shared state entries for orchestration and cross-job coordination (owner only)"},{"path":"/api/v1/projects/:name/kv/:key","response":"200: {project, key, value, updated_at, ttl_s?, meta}","body":{"value":"any JSON value","ttl_s":"optional positive integer"},"method":"PUT","purpose":"Write a small JSON shared state value for one project (owner only)"},{"path":"/api/v1/projects/:name/kv/:key","response":"200: {project, key, value, updated_at, ttl_s?, meta}","method":"GET","purpose":"Read one project-scoped shared state value (owner only)"},{"path":"/api/v1/projects/:name/kv/:key","response":"200: {status: \"deleted\", project, key}","method":"DELETE","purpose":"Delete one project-scoped shared state value (owner only)"},{"path":"/api/v1/projects/:name/status","response":"200: {project, project_ready, can_accept_new_jobs, ready_for_jobs, init_state, availability_status, progress_phase, pending_init, requested_state?, content_hash, status_message, next_action, latest_issue?, last_ready_at, latest_activity_at?, latest_failure_at?, ready_replicas, initializing_replicas, failed_replicas, stale_replicas, total_replicas, queue_depth, queued_tasks, active_jobs, active_tasks}","method":"GET","note":"This is the public readiness view for the active revision. Clients should use progress_phase plus the replica counts to distinguish queued init, active init, and failed-with-no-active-attempt states. Machine identities remain internal.","purpose":"Public readiness status for the project"},{"path":"/api/v1/projects/:name/status/details","response":"200: {project, project_ready, can_accept_new_jobs, ready_for_jobs, init_state, availability_status, progress_phase, pending_init, requested_state?, content_hash, status_message, next_action, latest_activity_at?, latest_failure_at?, ready_replicas, initializing_replicas, failed_replicas, stale_replicas, total_replicas, queue_depth, queued_tasks, active_jobs, active_tasks, diagnostics: [{id, status, phase, message, log_tail, content_hash, inserted_at, updated_at, initialized_at, recommended_action}], last_ready_at}","method":"GET","purpose":"Public readiness plus sanitized diagnostics for debugging setup or refresh issues"},{"path":"/api/v1/projects/:name/stream","response":"200 text/event-stream. Events: snapshot (initial active/queued/recent jobs), job (delta on any change), timeout (after 1h — reconnect).","method":"GET","note":"Recommended for clients that submit many jobs to the same project. Open one stream, submit jobs, and watch them complete — no per-job polling needed. Reconnects automatically after server timeout.","purpose":"SSE stream of all job activity in a project. One persistent connection replaces per-job polling. Scoped to the caller's API key (admins see all jobs)."}],"presets":[{"path":"/api/v1/presets","method":"GET","purpose":"List available resource presets, including common training shapes, requirements, and reservation defaults"}],"batch":[{"path":"/api/v1/jobs/batch","response":"201/207: {jobs: [{index, id, status, payload, meta, variant?, ...}], submitted: N, errors: [{index, error, recommended_action?}], error_count: N}","body":{"jobs":"[array of job submission objects]"},"method":"POST","purpose":"Submit multiple jobs at once (max 200)"}],"feedback":[{"path":"/api/v1/feedback","body":{"type":"bug | feature_request | provisioning | job_type_request","description":"string","title":"string"},"method":"POST","purpose":"Submit feedback (no auth required)"}],"artifact_workflow":{"description":"Tasks can produce artifacts by writing files to $COMPUTALOT_ARTIFACT_DIR and declaring uploads in payload._artifacts.upload. Worker-managed uploads now prefer presigned direct or multipart object-store transfer and only fall back to controller relay if the direct path is unavailable. For user-managed large files, request a presigned direct upload URL with POST /api/v1/artifacts/direct, or use the resumable multipart flow starting at POST /api/v1/artifacts/multipart for very large uploads. Artifact IDs appear in task results under 'artifact_ids'.","env_vars":{"COMPUTALOT_ARTIFACT_DIR":"Directory for task output files. Write files here and declare uploads in payload._artifacts.upload; managed uploads prefer direct object-store transfer and fall back to controller relay only when needed.","COMPUTALOT_TASK_RESULT":"Path to write JSON result file (artifact_ids are auto-appended)."},"access_control":"Artifacts are accessible to: (1) the uploader, (2) the owner of the job that produced them.","retention":"Artifacts are retained for 7 days by default (configurable via ARTIFACT_TTL_HOURS env var). Download important artifacts promptly."}},"project_runtime":{"setup":"Include a Dockerfile and computalot.project.json in your tarball. Computalot builds the OCI image on push and publishes the project revision immediately. Jobs trigger runtime preparation on demand; POST /init is optional if you want to prepare currently available workers ahead of time. See /docs/projects/project-manifest for the manifest schema.","summary":"Projects run as sandboxed OCI containers. Push a tarball with your code, Dockerfile, and computalot.project.json manifest. Computalot builds a container image and runs tasks in a sandboxed environment.","validation":"Use the manifest validation section for runtime checks (executables, files, commands). Non-zero exit fails fast.","task_env":"Each task starts from the container image environment, then loads project env files and applies meta.env overrides."},"result_quality":{"summary":"Each task result includes result_quality (0.0-1.0) and result_warnings. Jobs with suspect results (quality < 0.5) are marked 'partial'.","custom_validation":"Add result_schema to your job submission with required_fields, field_types, and field_ranges. Quality scoring is advisory — all results are stored regardless."},"project_manifest":{"docs":"/docs/projects/project-manifest","summary":"Container-based projects use a computalot.project.json manifest to define the runtime contract.","docs_absolute":"https://computalot.com/docs/projects/project-manifest"},"resource_requirements":{"fields":{"cpu":"minimum CPU cores per task","profile":"\"cpu\" or \"gpu\". CPU jobs can spill onto idle GPU-capable capacity; GPU jobs require GPU-capable capacity.","gpu_count":"minimum GPU count per task","gpu_memory_mb":"minimum GPU memory in MB per task","memory_mb":"minimum RAM in MB per task","storage_gb":"minimum free disk in GB per task"},"summary":"Submit minimum resource needs with a job instead of targeting infrastructure directly. Computalot may place the work on any larger matching runtime.","example":{"requirements":{"cpu":8,"profile":"gpu","gpu_count":1,"gpu_memory_mb":12288,"memory_mb":16384,"storage_gb":40}}},"feedback":{"types":["bug","feature_request","provisioning","job_type_request"],"endpoint":"POST /api/v1/feedback","note":"No auth required. Report bugs, request features, share ideas."},"sdk":{"package":"computalot","python":{"install":"python3 -m pip install --user --break-system-packages https://computalot.com/docs/downloads/computalot-0.2.0-py3-none-any.whl","minimum_version":"3.10","quickstart":["from computalot import ComputalotClient","client = ComputalotClient(controller_url='https://computalot.com', token='YOUR_TOKEN')","docs = client.docs_index()","recipes = client.list_recipes()","jobs = client.list_jobs(limit=5)"]}},"docs_absolute":{"index":"https://computalot.com/api/v1/docs","python_sdk":"https://computalot.com/api/v1/docs/python-sdk","recipes":"https://computalot.com/api/v1/docs/recipes","workflows":"https://computalot.com/api/v1/docs/workflows","skill":"https://computalot.com/skill.md","llm":"https://computalot.com/llms.txt","llm_full":"https://computalot.com/llms-full.txt","web":"https://computalot.com/docs"},"billing":{"summary":"Computalot uses one account-level credit system for API-key callers and wallet-authenticated agents.","account_endpoints":{"balance":"/api/v1/account/balance","holds":"/api/v1/account/holds","quotes":"/api/v1/account/quotes","ledger":"/api/v1/account/ledger"},"inspection":{"balance":"GET /api/v1/account/balance is the canonical balance snapshot: ledger_balance_usd, held_usd, available_usd, and open quote counts.","holds":"GET /api/v1/account/holds lists each active or historical hold so you can see which job admissions are still reserving funds.","quotes":"GET /api/v1/account/quotes lists open top-up and shortfall quotes so clients can inspect pending funding actions before retrying blocked work.","ledger":"GET /api/v1/account/ledger is the settled transaction history for credits, debits, and other posted account activity."},"shortfall_remedy":{"job_submit":"If POST /api/v1/jobs returns 402 Payment Required with a shortfall quote, inspect the same account surfaces, fund the account, then retry the same submit request to POST /api/v1/jobs.","project_init":"If POST /api/v1/projects/:name/init returns 402 Payment Required with a shortfall quote because the funded floor is missing, fund the account and retry `POST /api/v1/projects/:name/init`.","shared_retry_rule":"A shortfall response blocks admission before work starts. Do not mutate the project or payload first — once the funding gap is fixed, retry the same request."},"supported_access_paths":{"api_key":"Use an admin-issued API key when beta access was provisioned for you directly. It authenticates the same account surfaces, including balance, holds, ledger, quotes, project init, and job submit.","wallet_session":"Use an admin-whitelisted wallet session when you want the wallet-auth + x402 flow. The wallet signs in through challenge/verify, can pay x402 quotes, and reaches the same account billing truth."},"v1_policy":"Job execution reserves a submit-time hold and settles to actual internal cost-ledger totals. Project init and artifact download remain free in v1, but are still metered internally. Project init currently requires a minimum funded balance floor of $5.","x402":"Create an x402 quote with POST /api/v1/account/quotes/topup, then settle it with POST /api/v1/account/quotes/:quote_id/pay/x402 using PAYMENT-SIGNATURE. Submit-time or init-time insufficient-balance responses may also include a shortfall quote with PAYMENT-REQUIRED."},"choosing_a_job_type":[{"when":"You have a script, a JSON payload, and want 1 task (or simple fan-out over a list field).","use":"structured_runner"},{"when":"You want to try every combination of parameters and rank results. Grid search / hyperparameter tuning.","use":"sweep"},{"when":"You want to split a numeric range into chunks, process in parallel, and aggregate with operators (sum, mean, max, etc).","use":"map_reduce"},{"when":"You want to compare 2+ named candidates with replicas for statistical significance.","use":"benchmark"}],"command_validation":{"allowed_executables":["python","python3","node","deno","bun","ruby","julia","Rscript","uv","pip","npm","npx","cargo","rustc"],"note":"runner_command must be an array with an allowed executable.","blocked_executables":"bash, sh, zsh, and other shell executables are blocked.","example_invalid":["bash","-c","python train.py"],"example_valid":["python","train.py"]},"debugging_failures":{"summary":"When a job or task fails, check these endpoints.","common_issues":{"402 Payment Required":"Fund the account and retry the same request.","Setup fails":"Fix Dockerfile/manifest or runtime issue, POST /invalidate, then submit a job normally or call POST /init to prepare currently available workers.","Tasks stuck in queued":"Check project status — the revision may be published but still waiting on runtime capacity or initialization.","exit_code_1 with no useful error":"Check per-task output field for full 10KB output."},"steps":["1. GET /api/v1/jobs/:id — check status, error, and recommended_action","2. GET /api/v1/jobs/:id/tasks — per-task error and output (up to 10KB)","3. GET /api/v1/jobs/:id/output — aggregated stdout/stderr","4. GET /api/v1/projects/:name/status — project readiness","5. GET /api/v1/projects/:name/status/details — diagnostics and recovery steps"]},"getting_started":{"overview":"Computalot runs jobs on managed CPU/GPU capacity. Supported beta access today is either an admin-whitelisted wallet session or an admin-issued API key. Billing truth lives on the account balance/holds/ledger/quotes endpoints. Fund the account when needed, set up a project environment, then submit jobs with the resource minimums and guarantees you need. Computalot handles placement internally.","job_type_examples":[{"name":"structured_runner — single task or fan-out","example":{"request":{"path":"/api/v1/jobs","body":{"type":"structured_runner","payload":{"model":"gpt4","dataset":"test_v3"},"project":"my-proj","runner_command":["python","evaluate.py"],"timeout_s":600},"method":"POST"},"note":"Single task. For parallelism, add fan_out: {by: \"models\"} to split a list field into N tasks, or fan_out: {items: [%{...}, %{...}]} for one explicit payload object per task."}},{"name":"sweep — grid search","example":{"request":{"path":"/api/v1/jobs","body":{"type":"sweep","gpu_required":true,"project":"ml-training","parameters":{"batch_size":[32,64,128],"learning_rate":[0.001,0.01,0.1]},"fixed_payload":{"dataset":"cifar10","epochs":5},"rank_by":"accuracy","rank_order":"desc","runner_command":["python","train.py"],"timeout_s":3600},"method":"POST"},"note":"Creates 9 tasks (3x3 grid). Each task receives one parameter combination in $COMPUTALOT_TASK_PAYLOAD. Results ranked by accuracy."}},{"name":"map_reduce — chunked parallelism with aggregation","example":{"request":{"path":"/api/v1/jobs","body":{"type":"map_reduce","split":{"start":0,"total":10000,"chunks":50,"field":"seed"},"reduce":{"max_dd":"max","sharpe":"weighted_avg:sample_count","total_pnl":"sum"},"payload":{"strategy":"momentum"},"project":"monte-carlo","runner_command":["python","simulate.py"],"timeout_s":7200},"method":"POST"},"note":"Creates 50 tasks. Each gets {seed_start, seed_count} in payload. Results aggregated with per-field operators. You can also use split.ranges for explicit non-contiguous ranges, e.g. %{field: \"seed\", ranges: [%{start: 860791000, count: 1000}, %{start: 200000000, count: 1000}] }."}},{"name":"benchmark — candidate comparison with replicas","example":{"request":{"path":"/api/v1/jobs","body":{"type":"benchmark","project":"my-proj","candidates":{"baseline":{"model":"random"},"strategy_a":{"model":"gpt4","temperature":0.7},"strategy_b":{"model":"claude","temperature":0.5}},"rank_by":"score","replicas":3,"runner_command":["python","evaluate.py"],"shared_payload":{"dataset":"test_set_v3","n_trials":100},"timeout_s":1800},"method":"POST"},"note":"Creates 9 tasks (3 candidates x 3 replicas). Each gets candidate config + _candidate + _replica in payload. Leaderboard with mean/std/min/max."}}],"quick_start":["1. Authenticate through one supported beta path: use an admin-issued API key, or POST /api/v1/auth/wallet/challenge for an admin-whitelisted wallet, sign the challenge, then POST /api/v1/auth/wallet/verify to get a session","2. Inspect account billing truth on GET /api/v1/account/balance, GET /api/v1/account/holds, GET /api/v1/account/ledger, and GET /api/v1/account/quotes","3. POST /api/v1/account/quotes/topup and settle it with POST /api/v1/account/quotes/:quote_id/pay/x402 if your account needs credits","4. POST /api/v1/projects — register a project with name and remote_dir","5. Create tarball with your code, Dockerfile, and computalot.project.json: tar czf code.tar.gz Dockerfile computalot.project.json script.py","6. POST /api/v1/projects/:name/push — upload tarball (raw binary body)","7. POST /api/v1/jobs — submit job with optional requirements/reservation. If submit returns a shortfall quote, fund the account and retry the same submit request.","8. Optional: POST /api/v1/projects/:name/init if you want to prepare currently available workers ahead of time. If it returns a shortfall quote, fund the account and retry the same init request.","9. GET /api/v1/projects/:name/status — inspect whether the revision is merely published or already ready_for_jobs","10. GET /api/v1/jobs/:id — poll until terminal status","11. GET /api/v1/results/:id — read per-task structured results (recommended), or GET /api/v1/results?project=my-proj&client_ref=batch_123 to find finished jobs","12. Use GET /api/v1/jobs/:id/stream, GET /api/v1/jobs/watch?ids=id1,id2, or GET /api/v1/projects/:name/stream for live progress instead of polling many endpoints","13. GET /api/v1/artifacts — list your artifacts (includes artifacts from your jobs), then GET /api/v1/artifacts/:id to download files referenced by result artifact_ids"]},"heavy_job_guidance":{"summary":"For GB-scale datasets, large checkpoints, and long training runs, treat the job payload as control-plane data only.","outputs":["Write checkpoints and other task-produced files under $COMPUTALOT_ARTIFACT_DIR.","Use _artifacts.upload when you want named uploads or direct upload to external object storage plus Computalot registration.","If a structured JSON result is too large to store inline, Computalot spills it to an artifact and returns result_spilled, result_artifact_id, and result_filename."],"inputs":["Do not embed large datasets, archives, or model weights in payload JSON. Submit small metadata in payload and move large inputs through artifacts.","Use _artifacts.download for large inputs. Workers download and cache these files before launch.","For reusable remote datasets or model weights, declare manifest data_sources so the worker prepares them before launch instead of downloading ad hoc inside the runner.","For Hugging Face-hosted immutable inputs, declare a manifest data_source with source=huggingface. Use delivery=mount when you want worker-managed hf-mount rather than a runner-side snapshot_download call.","Downstream jobs can resolve dependency-produced named artifacts at dispatch time with refs like {_artifacts: {download: {dataset: %{job_id: \"job_...\", artifact: \"dataset\"}}}} when that upstream job is listed in depends_on.","For smaller dependency outputs or coordination flags, use payload._shared.resolve with refs like %{best_score: %{job_id: \"job_...\", path: \"results.0.score\"}} or %{dataset_ready: %{key: \"dataset_ready\"}}.","Project-scoped shared state lives behind GET/PUT/DELETE /api/v1/projects/:name/kv/:key and is resolved into payload._shared.values plus COMPUTALOT_SHARED_<NAME> env vars at dispatch time.","Resolved artifact paths are injected into payload._artifacts.local_paths. Single-file entries also get COMPUTALOT_ARTIFACT_<NAME> env vars."],"operational_defaults":["Prefer external/object storage artifacts for multi-GB datasets and model bundles.","Declare cache_mounts for writable package/model caches your code populates at runtime. For Hugging Face or Transformers downloads, use a huggingface cache mount so HF_HOME and TRANSFORMERS_CACHE persist per worker.","hf-mount only applies to manifest-declared Hugging Face data_sources. If your runner downloads from Hugging Face directly, it will not use hf-mount unless you route that data through data_sources or a declared cache mount.","Enable checkpointing.resume_from_latest for long jobs and emit periodic durable checkpoints so retries can resume instead of restarting from zero.","Set timeout_s with headroom above expected wall-clock runtime for training jobs.","Start project setup before the run so initialization does not consume the first training attempt."]},"job_lifecycle":{"output":"GET /api/v1/jobs/:id/output — aggregated stdout/stderr","stream":"GET /api/v1/jobs/:id/stream — SSE stream for one job","cancel":"PUT /api/v1/jobs/:id/cancel","statuses":["planning","queued","running","completed","partial","failed","cancelled"],"terminal":["completed","partial","failed","cancelled"],"watch":"GET /api/v1/jobs/watch?ids=... — SSE stream for multiple jobs (max 100)","results":"GET /api/v1/results/:job_id — per-task results with artifact IDs and quality metadata","retention":"Terminal jobs are queryable for 30 days. Artifacts retained 7 days.","auto_retry":"Set max_retries on submission. Failed tasks auto-requeue up to N times.","polling":"GET /api/v1/jobs/:id — poll every 2-5s until terminal"},"jobs_vs_tasks":{"summary":"A job is a unit of work you submit. Tasks are the parallel units Computalot creates from your job. When you need to run the same script on 1000 different inputs, you can either submit 1 job that fans out into 1000 tasks, or submit 1000 separate single-task jobs. This section helps you choose.","many_jobs":{"description":"Submit each unit of work as its own job. Each gets an independent ID, status, and lifecycle.","behaviors":["Each job has its own ID, status, and lifecycle — one failure does not affect the others.","Cancel, retry, or inspect any job independently.","Each job fires its own webhook on completion.","Fine-grained depends_on: build DAGs where specific jobs depend on specific predecessors.","Per-job status in list views — see '743 completed, 12 running, 245 queued' at a glance.","Computalot schedules across jobs fairly by default. You can also set priority: high | normal | low to bias scheduling between otherwise comparable jobs; guaranteed reservations still win first."],"best_for":["Independent work items that should not affect each other on failure (e.g. processing unrelated customer requests)","Work where you need to cancel, retry, or inspect individual items independently","Pipelines with fine-grained DAG dependencies (job B depends on job A, job C depends on job B)","Work where each item needs its own webhook callback","Submissions that arrive over time rather than all at once"]},"one_job_many_tasks":{"description":"Use fan_out, sweep parameters, map_reduce split, or benchmark candidates to let Computalot expand a single job into many tasks.","behaviors":["One job ID to track. Poll or stream a single endpoint.","Cancel once to stop everything.","One webhook fires when the entire job finishes.","Job status reflects all tasks: 'completed' means all succeeded, 'partial' means some failed or had low quality, 'failed' means all failed.","Results are accessed per-task via GET /api/v1/results/:job_id, but they belong to one job.","depends_on references this single job ID — downstream work starts when all tasks finish.","Computalot aggregates results automatically for sweep (leaderboard), map_reduce (reduced values), and benchmark (statistics)."],"best_for":["Logically related work that should succeed or fail together (e.g. a grid search, a chunked simulation, a benchmark comparison)","Work where you want a single aggregated result (sweep leaderboard, map_reduce aggregation, benchmark statistics)","Batch processing where individual items don't need independent lifecycle management","High-throughput pipelines — one job submission is faster than many"]},"rule_of_thumb":"If your 1000 inputs are one logical batch and you want one answer at the end, use one job with fan_out/split/parameters. If your 1000 inputs are independent requests that should succeed or fail on their own, use 1000 jobs. When in doubt, start with one job — it's simpler to manage and faster to submit."},"platform_model":{"summary":"End users interact with Computalot through projects and jobs. Node provisioning, placement, runtime preparation, and mixed-hardware allocation are internal Computalot concerns.","placement_note":"Public API responses do not expose infrastructure identities. Placement decisions, warm homes, and cluster topology are managed internally by Computalot.","user_visible_primitives":["projects define code and environment","jobs define work to run","requirements define minimum hardware per task","reservation defines best-effort or guaranteed admission behavior","checkpointing defines whether structured-runner retries receive the latest checkpoint state"]},"project_lifecycle":{"setup":"Include a Dockerfile and computalot.project.json manifest. Computalot builds the OCI image on push and publishes the project revision immediately. Jobs trigger runtime preparation on demand; POST /init is optional. See /docs/projects/project-manifest for the manifest schema.","summary":"Register -> Push tarball (with Dockerfile + computalot.project.json + your code) -> Submit jobs. Optional: call init to prepare currently available workers ahead of time.","debugging_init":"Use GET /api/v1/projects/:name/status for readiness, then GET /api/v1/projects/:name/status/details for diagnostics and recovery steps.","update_flow":"Push new tarball -> optional POST /invalidate if you want to discard old prepared runtimes -> submit jobs normally; the first job on the new revision may cold-start while runtime preparation catches up"},"project_readiness":{"fields":{"init_state":"\"not_initialized\", \"published\", \"initializing\", \"ready\", \"refreshing\", or \"attention_required\"","content_hash":"current project upload hash; for image-backed OCI projects, readiness is still checked against the prepared image_digest internally","availability_status":"\"cold\", \"warming\", \"ready\", \"degraded\", or \"mixed\" aggregate capacity summary for the active revision","requested_state":"controller-side init intent when present; useful for distinguishing a requested refresh from steady-state readiness","can_accept_new_jobs":"true when the latest project revision is published and can be submitted; the first job may still pay a cold-start cost while runtime capacity is prepared","failed_replicas":"count of replicas currently marked failed for the active revision","initializing_replicas":"count of replicas actively initializing now","last_ready_at":"ISO8601 timestamp of the most recent ready state, when available","project_ready":"true when Computalot currently considers the project runnable","ready_replicas":"count of currently ready replicas for the active revision","stale_replicas":"count of replicas that are stale relative to the active revision or unavailable worker set","total_replicas":"count of tracked replicas contributing to readiness","status_message":"human-readable summary of what Computalot is doing now","ready_for_jobs":"true when Computalot considers the active revision platform-ready to admit work without waiting for runtime preparation; this is not by itself proof that your application imports, credentials, or wrapped subprocesses are correct unless validation covers them","progress_phase":"\"queued\", \"running\", \"failed\", \"refreshing\", \"refresh_pending\", \"ready\", or \"idle\" so clients can tell waiting-for-capacity from active setup and terminal failure","latest_activity_at":"most recent sanitized init activity timestamp across diagnostics, when available","latest_failure_at":"most recent sanitized failed-attempt timestamp, when available","next_action":"human-readable guidance for what to do next","pending_init":"true when Computalot currently has project init work queued or in flight","latest_issue":"optional sanitized setup error summary when attention is required"},"summary":"Project readiness is exposed as active revision truth managed by Computalot, not machine counts or per-machine deployment state.","diagnostics":"GET /api/v1/projects/:name/status/details returns the same active-revision readiness summary plus sanitized diagnostics entries for setup or refresh issues: id, status, phase, message, log_tail, content_hash, inserted_at, updated_at, initialized_at, and recommended_action. Use the top-level status for runnable truth and status/details when you need recovery guidance."},"recommended_endpoint":"Use https://computalot.com as the default Computalot API origin. Fetch https://computalot.com/api/v1/docs or https://computalot.com/llms.txt first so agents and clients anchor themselves on the live public endpoint.","reservations":{"fields":{"parallelism":"number of matching task slots to reserve for this job","mode":"\"best_effort\" or \"guaranteed\"","guaranteed_for_s":"reservation lifetime in seconds","max_wait_s":"accepted and stored for future queue-aware admission. Current production behavior is still immediate admit-or-reject."},"modes":{"best_effort":"default. Job is accepted and runs when matching capacity is available.","guaranteed":"Computalot admits the job only if it can reserve the requested matching parallelism immediately."},"summary":"Reservations control admission behavior for a submitted job, not a user-managed infrastructure lease.","example":{"reservation":{"parallelism":2,"mode":"guaranteed","guaranteed_for_s":1800,"max_wait_s":0}},"current_behavior":{"inspect":"GET /api/v1/leases/:job_id returns reservation.active, parallelism, expires_at, and available_parallelism.","submit_failure":"POST /api/v1/jobs returns 409 with error, recommended_action, and details when guaranteed capacity cannot be admitted.","submit_success":"POST /api/v1/jobs returns 201 when a guaranteed reservation can be admitted."}},"result_guide":{"search":"Use GET /api/v1/results to search terminal jobs by job_id, ids, project, client_ref, tag, type, user_id, and recipe_cache_* filters.","canonical_lookup":"Use job_id as the canonical identifier. GET /api/v1/results/:job_id is the default endpoint for one job's per-task results.","identifiers":{"job_id":"Canonical job identifier used by /jobs/:id and /results/:job_id.","tags":"Search labels only. Use /results?tag=... to filter.","client_ref":"Client-supplied search/grouping label. Not a result identifier.","artifact_id":"Artifact download identifier. Use GET /api/v1/artifacts/:id for files referenced by result payloads."},"live_updates":"Use GET /api/v1/jobs/:id/stream for one job, GET /api/v1/jobs/watch?ids=id1,id2 for many jobs, and GET /api/v1/projects/:name/stream for one project's full feed."},"runner_protocol":{"progress":"Print COMPUTALOT_PROGRESS:{json} to stdout for live progress updates.","summary":"All job types use the runner protocol. Your script receives input and writes output via environment variables.","env_vars":{"COMPUTALOT_ARTIFACT_DIR":"Directory for output files (models, checkpoints). Auto-uploaded on completion.","COMPUTALOT_TASK_PAYLOAD":"Path to JSON input file. Read this.","COMPUTALOT_TASK_RESULT":"Path to write your JSON output. Computalot reads this after exit.","COMPUTALOT_TASK_SCRATCH_DIR":"Writable temp directory for scratch files."},"example":"import json, os\npayload = json.load(open(os.environ['COMPUTALOT_TASK_PAYLOAD']))\nresult = {'score': 0.95}\njson.dump(result, open(os.environ['COMPUTALOT_TASK_RESULT'], 'w'))","exit_code":"Exit 0 = success. Non-zero = failure."},"status_note":"Computalot is in private beta. We're actively building and want your feedback — use POST /api/v1/feedback (no auth) to report bugs and request features. Access requires an admin-issued API key or admin-whitelisted wallet session. If you do not already have beta access, join the waitlist at /. Install the skill from /skill.md to get started."}