# Remote CLI and API access
This guide is for operators and users who already have orlojd reachable on a network (self-hosted, VPS, Kubernetes, or internal URL) and need to call the API from orlojctl, scripts, or CI. It complements the quickstart, which focuses on a single-machine dev loop.
For deeper security context (generation, rotation, threat model), see Control plane API tokens.
## Install orlojctl locally
You need the CLI on your machine (or in CI), not inside the server container. The easiest path is the standalone binary from GitHub Releases: download `orlojctl_<tag>_<os>_<arch>`, verify it against `checksums.txt` from the same release, extract, and add the binary to your `PATH`. Details and naming conventions are in Install: CLI only for hosted deployments. If you have the repo cloned and Go installed, `go run ./cmd/orlojctl` works the same way against a remote `--server`.
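The verify-and-install steps can be sketched end to end. The block below uses a local stand-in file instead of a real download, since the actual archive name and URL depend on the tag and platform; the `sha256sum` pattern is the same either way:

```shell
# Sketch of verify-and-install, with a stand-in file in place of the real
# orlojctl_<tag>_<os>_<arch> download (names depend on the release).
cd "$(mktemp -d)"
printf 'stand-in binary' > orlojctl       # the extracted binary in practice
sha256sum orlojctl > checksums.txt        # the release ships checksums.txt
sha256sum -c checksums.txt                # prints: orlojctl: OK
mkdir -p "$HOME/.local/bin"               # any directory on your PATH works
install -m 0755 orlojctl "$HOME/.local/bin/orlojctl"
```

If `sha256sum -c` reports anything other than `OK`, discard the download and fetch it again.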
## API tokens (shared secret)
Orloj does not issue API tokens from the web console. The operator generates a random string and configures the same value on the server and on every client that authenticates with `Authorization: Bearer <token>`.
```shell
openssl rand -hex 32
```

Store the value in your secrets manager or deployment environment, not in git.
On the server, set `orlojd --api-key=...` or `ORLOJ_API_TOKEN=...` (or `ORLOJ_API_TOKENS` for multiple `token:role` pairs). See Control plane API tokens for details.
## Server-side wiring
Where you set `ORLOJ_API_TOKEN` depends on how you run `orlojd`:
- Docker Compose / systemd — env var or secret in the service definition (e.g. VPS deployment).
- Kubernetes / Helm — `runtimeSecret` or equivalent env injection (see Kubernetes deployment).
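As a sketch of the Compose route, a service definition might inject the token like this (service name and omitted fields are illustrative, not the project's shipped file):

```yaml
services:
  orlojd:
    # image, ports, volumes omitted
    environment:
      # same value every client sends as: Authorization: Bearer <token>
      ORLOJ_API_TOKEN: ${ORLOJ_API_TOKEN:?set in the deployment environment}
```

The `:?` form makes Compose fail fast if the variable is missing rather than starting the server without auth configured.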
## Client-side: environment and flags
From any machine that should talk to the API:
| Mechanism | Purpose |
|---|---|
| `ORLOJ_SERVER` | Default API base URL when `--server` is omitted |
| `ORLOJCTL_SERVER` | Same default; takes precedence over `ORLOJ_SERVER` |
| `ORLOJ_API_TOKEN` | Bearer token |
| `ORLOJCTL_API_TOKEN` | Same token; checked before `ORLOJ_API_TOKEN` by the CLI |
| `orlojctl --api-token <token>` | Overrides env for that process |
| `orlojctl --server <url>` | Overrides the default server for that command |
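For a shell session or CI job, the environment route from the table usually looks like this (the URL and token value are placeholders):

```shell
# Illustrative environment wiring; take the real token from a secrets manager.
export ORLOJ_SERVER="https://orloj.example.com"
export ORLOJ_API_TOKEN="example-token"
# With these set, orlojctl commands need no --server/--api-token flags:
#   orlojctl <subcommand>
```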
## Precedence
Token (first match wins):

1. `orlojctl --api-token ...`
2. `ORLOJCTL_API_TOKEN`
3. `ORLOJ_API_TOKEN`
4. Active profile: `token` field, else the value of the env var named by `token_env`
Default `--server` when the flag is omitted (first match wins):

1. `ORLOJCTL_SERVER`
2. `ORLOJ_SERVER`
3. Active profile `server`
4. `http://127.0.0.1:8080`
An explicit `--server` on a subcommand always overrides the default above.
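The token order can be mimicked with plain shell fallbacks. This is a sketch of the first-match-wins rule, not the CLI's actual implementation; `profile-token` stands in for the active profile's value:

```shell
# First-match-wins token lookup, mirroring the precedence list above.
resolve_token() {
  # $1 stands in for --api-token; fall through the env vars in order.
  printf '%s\n' "${1:-${ORLOJCTL_API_TOKEN:-${ORLOJ_API_TOKEN:-profile-token}}}"
}

export ORLOJ_API_TOKEN=tok-generic
resolve_token            # prints tok-generic
export ORLOJCTL_API_TOKEN=tok-cli
resolve_token            # prints tok-cli (checked before ORLOJ_API_TOKEN)
resolve_token tok-flag   # prints tok-flag (flag overrides env)
```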
## `orlojctl config` and `config.json`
Named profiles are stored as JSON:
- Path: `orlojctl config path` (typically `~/.config/orlojctl/config.json` on Unix).
- Permissions: the file is written with mode `0600` when created or updated.
The file does not exist until the first successful save (for example `orlojctl config set-profile <name> ...`). Until then, only environment variables and flags apply; if you open the path early, an empty or missing file is normal.
Commands:

```shell
orlojctl config path
orlojctl config set-profile production --server https://orloj.example.com --token-env ORLOJ_PROD_TOKEN
orlojctl config use production
orlojctl config get
```

`set-profile` creates or updates a profile. The first profile you create also becomes `current_profile` if none was set. Prefer `--token-env` so the token is not stored in the JSON file.
## Example `config.json`
The shape matches what the CLI reads and writes (field names as they appear in the JSON):
```json
{
  "current_profile": "production",
  "profiles": {
    "local": {
      "server": "http://127.0.0.1:8080"
    },
    "production": {
      "server": "https://orloj.example.com",
      "token_env": "ORLOJ_PROD_TOKEN"
    }
  }
}
```

You can hand-edit this file if you prefer; invalid JSON will cause `orlojctl` to error on load.
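Since invalid JSON makes the CLI error on load, it is worth validating after a hand edit; `python3 -m json.tool` is one widely available checker. The temp file below stands in for your real config path:

```shell
# Write the example shape to a temp file and check that it parses as JSON.
cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
{
  "current_profile": "production",
  "profiles": {
    "production": {
      "server": "https://orloj.example.com",
      "token_env": "ORLOJ_PROD_TOKEN"
    }
  }
}
EOF
python3 -m json.tool "$cfg" > /dev/null && echo "valid JSON"
```

On invalid input, `json.tool` exits non-zero and prints the parse error, so the `valid JSON` message never appears.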
## Local UI auth vs API tokens
If you use `--auth-mode=native`, the web UI uses an admin username/password and session cookies. That is separate from API access: `orlojctl` and automation should use the bearer token configured with `ORLOJ_API_TOKEN` / `--api-key` on the server, not the UI password. See Control plane API tokens and CLI reference: orlojctl.
## Related docs
- CLI reference — full command list and flags
- Configuration — `orlojd`/`orlojworker` environment variables
- VPS deployment — single-node Compose + systemd
- Kubernetes deployment — Helm and manifests