This guide covers deploying Oxy on Kubernetes using our official Helm chart. The Helm approach provides a production-ready deployment with sensible defaults and extensive customization options.
Prerequisites: You’ll need a Kubernetes cluster (>= 1.25), Helm 3.x installed, and kubectl configured to access your cluster.
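Before proceeding, you can verify these prerequisites from your workstation:

```shell
# Check client tool versions
kubectl version --client
helm version --short

# Confirm cluster access; the server version must be >= 1.25
kubectl version
kubectl cluster-info
```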
For the most up-to-date chart documentation and configuration options, visit the official chart README at github.com/oxy-hq/charts.

Oxy Architecture Overview

Before diving into the deployment, let’s understand how Oxy’s architecture translates to Kubernetes components:

Core Architecture Principles

Oxy is a Rust-based framework for agentic analytics that follows these design principles:
  • Stateful by design: Requires persistent storage for agent state and git repositories
  • Single-tenant: Each deployment serves one workspace/organization
  • Database-backed: Uses PostgreSQL for data persistence
  • Git-integrated: Can sync with git repositories for configuration and workflows

Kubernetes Component Mapping

| Oxy Component | Kubernetes Resource | Purpose |
| --- | --- | --- |
| Oxy Server | StatefulSet | Main application server with persistent identity |
| State Storage | PersistentVolumeClaim | Stores agent state and git repos |
| PostgreSQL Database | External Service/Secret | Connection to the PostgreSQL database |
| Configuration | ConfigMap + Secret | Optional environment variables and sensitive data |
| Git Sync | Init Container | Optionally synchronizes git repositories on startup |
| Web Interface | Service + Ingress | Exposes the web UI and API endpoints |

Why StatefulSet?

The Helm chart uses a StatefulSet instead of a Deployment because:
  1. Persistent Identity: Oxy maintains state across restarts
  2. Stable Storage: Requires consistent volume attachment for databases and workspace data
  3. Ordered Deployment: Git sync (when enabled) needs to complete before the main app starts
  4. Single Replica: Oxy is designed as a single-tenant application
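After installation, you can confirm this mapping on a live cluster. The commands below assume the release name oxy-app and namespace oxy used in the installation examples later in this guide:

```shell
# Expect a single-replica StatefulSet with its PVC and Service
kubectl get statefulset,pvc,svc -n oxy

# Inspect the StatefulSet, including any init containers (e.g., git sync)
kubectl describe statefulset oxy-app -n oxy
```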

Helm Charts

Oxy provides two Helm charts depending on your deployment model:
| Chart | Description | OCI Reference |
| --- | --- | --- |
| oxy-app | Standard deployment using oxy serve; requires external PostgreSQL | ghcr.io/oxy-hq/helm-charts/oxy-app |
| oxy-start | Self-contained deployment using oxy start; runs PostgreSQL and other services in-pod via Docker-in-Docker | ghcr.io/oxy-hq/helm-charts/oxy-start |

Installation Methods

Option 1: OCI Registry

Install directly from GitHub Container Registry:
# Install oxy-app chart (requires external PostgreSQL)
helm install oxy-app oci://ghcr.io/oxy-hq/helm-charts/oxy-app \
  --namespace oxy \
  --create-namespace \
  --version 0.4.2 \
  --values values.yaml

Option 2: Classic Helm Repository

# Add the GitHub Pages Helm repository
helm repo add oxy https://oxy-hq.github.io/charts/
helm repo update

# Install Oxy
helm install oxy-app oxy/oxy-app \
  --namespace oxy \
  --create-namespace \
  --values values.yaml

Configuration

Basic Configuration

Create a values.yaml file for customization:
# values.yaml
app:
  image: "ghcr.io/oxy-hq/oxy"
  imageTag: "latest"
  replicaCount: 1

# Persistent storage configuration (required)
persistence:
  enabled: true
  size: "20Gi"
  storageClassName: "gp3" # Adjust for your cluster
  accessMode: "ReadWriteOnce"

# Resource limits
resources:
  requests:
    cpu: "250m"
    memory: "512Mi"
  limits:
    cpu: "1000m"
    memory: "2Gi"

# Environment variables (optional - can use env, configMap, or secrets)
env:
  OXY_STATE_DIR: "/workspace/oxy_data"
  # Add other environment variables as needed

# Service configuration
service:
  type: ClusterIP
  port: 80
  targetPort: 3000

# Ingress configuration (optional)
ingress:
  enabled: false
  # Uncomment and configure for external access
  # className: "nginx"
  # hosts:
  #   - host: oxy.your-domain.com
  #     paths:
  #       - path: "/"
  #         pathType: Prefix

# Git sync (optional - disabled by default)
gitSync:
  enabled: false
  # repository: "git@github.com:your-org/your-oxy-workspace.git"
  # branch: "main"
  # sshSecretName: "oxy-git-ssh"
Then install with your custom values:
helm install oxy-app oxy/oxy-app \
  --namespace oxy \
  --create-namespace \
  --values values.yaml
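Before installing, you can render the chart locally to review the manifests Helm will apply. This is a read-only preview; nothing is changed on the cluster:

```shell
# Render the manifests locally without installing
helm template oxy-app oxy/oxy-app \
  --namespace oxy \
  --values values.yaml

# Or ask the server to validate the release without persisting it
helm install oxy-app oxy/oxy-app \
  --namespace oxy \
  --create-namespace \
  --values values.yaml \
  --dry-run
```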

Database Configuration

PostgreSQL Database (Required)

Oxy requires a PostgreSQL database. You can use a managed PostgreSQL service or deploy PostgreSQL in your cluster.

Option 1: Direct Connection String

# Configure the PostgreSQL connection
env:
  OXY_DATABASE_URL: "postgresql://user:password@postgres-service:5432/oxy"

Option 2: External PostgreSQL

For production deployments with external PostgreSQL, refer to the official PostgreSQL documentation for setup and configuration details.
database:
  external:
    enabled: true
    host: "postgres.example.com"
    port: 5432
    database: "oxy"
    user: "oxy_user"
    password: "your-secure-password"
    # Or use connection string
    # connectionString: "postgresql://oxy_user:password@postgres.example.com:5432/oxy"

env:
  OXY_DATABASE_URL: "postgresql://oxy_user:password@postgres.example.com:5432/oxy"

Option 3: Bundled PostgreSQL

database:
  postgres:
    enabled: true
    postgresqlDatabase: "oxydb"
    postgresqlUsername: "oxy"
    postgresqlPassword: "postgres"
The bundled PostgreSQL is not recommended for production use. Use an external managed database service instead.
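Whichever option you choose, you can verify database connectivity from inside the cluster with a throwaway psql pod before starting Oxy. The hostname and credentials below are the placeholders from the examples above:

```shell
# Run a one-off psql client pod; it is deleted on exit (--rm)
kubectl run psql-check -n oxy --rm -it --restart=Never \
  --image=postgres:16 \
  --env=PGPASSWORD=your-secure-password \
  -- psql -h postgres.example.com -U oxy_user -d oxy -c "SELECT version();"
```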

Git Integration

Enable git synchronization to automatically sync your Oxy configuration from a repository:
Git sync is disabled by default. When disabled, Oxy starts in --readonly mode and git setup is handled by the app UI.
Option A: GitHub App

GitHub Apps provide fine-grained access control and are the recommended approach for production:
gitSync:
  enabled: true
  repository: "https://github.com/your-org/your-oxy-workspace.git"
  branch: "main"
  githubApp:
    secretName: "oxy-github-app" # K8s secret containing GitHub App credentials
    privateKeyKey: github_app_private_key
    applicationIdKey: github_app_application_id
    installationIdKey: github_app_installation_id
Create the GitHub App secret:
kubectl create secret generic oxy-github-app \
  --namespace oxy \
  --from-literal=github_app_application_id="<app-id>" \
  --from-literal=github_app_installation_id="<installation-id>" \
  --from-file=github_app_private_key=path/to/private-key.pem

Option B: SSH Deploy Key

gitSync:
  enabled: true
  repository: "git@github.com:your-org/your-oxy-workspace.git"
  branch: "main"
  sshSecretName: "oxy-git-ssh"
  period: "15s" # Sync interval
Create the SSH secret:
# Generate SSH key
ssh-keygen -t ed25519 -C "oxy-k8s@your-org.com" -f oxy-deploy-key

# Create Kubernetes secret
kubectl create secret generic oxy-git-ssh \
  --namespace oxy \
  --from-file=ssh-privatekey=oxy-deploy-key \
  --from-file=ssh-publickey=oxy-deploy-key.pub
Add the public key to your Git repository’s deploy keys.
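If you use the GitHub CLI, the deploy key can be added from the command line instead of the web UI. The repository name is the placeholder from the example above:

```shell
# Add the public key as a read-only deploy key
gh repo deploy-key add oxy-deploy-key.pub \
  --repo your-org/your-oxy-workspace \
  --title "oxy-k8s-deploy-key"
```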

External Secrets Integration

For production deployments with external secret management, refer to the External Secrets Operator documentation for setup and configuration details.
externalSecrets:
  create: true
  storeRef:
    name: "vault-secret-store"
    kind: "SecretStore"

  # Environment variables from external secrets
  envSecretNames:
    - "oxy-api-keys"
    - "oxy-database-credentials"

  # Files from external secrets
  fileSecrets:
    - secretName: "oxy-config-files"
      mountPath: "/workspace/secrets"
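The storeRef above must point at an existing SecretStore (or ClusterSecretStore). A minimal sketch for a Vault-backed store follows; the Vault address, secrets path, and token secret name are assumptions to adapt to your environment:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-secret-store
  namespace: oxy
spec:
  provider:
    vault:
      server: "https://vault.example.com" # assumption: your Vault address
      path: "secret"                      # assumption: KV mount path
      version: "v2"
      auth:
        tokenSecretRef:
          name: vault-token # assumption: secret holding a Vault token
          key: token
```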

Advanced Configuration Options

Extra Containers

The chart supports custom init containers and sidecars:
# Add custom init containers
extraInitContainers:
  - name: custom-init
    image: busybox:1.35
    command: ["sh", "-c"]
    args:
      - |
        echo "Custom initialization"
        # Your custom init logic here
    volumeMounts:
      - name: workspace
        mountPath: /workspace

# Add custom sidecar containers
extraSidecars:
  - name: log-shipper
    image: fluent/fluent-bit:2.1.9
    command: ["/fluent-bit/bin/fluent-bit"]
    args: ["--config=/etc/fluent-bit/fluent-bit.conf"]
    volumeMounts:
      - name: workspace
        mountPath: /workspace
        readOnly: true
Each container entry should be a valid Kubernetes container spec. If your containers need additional volumes, add them using the chart’s supported volume fields and reference them in your container’s volumeMounts.

Ingress Configuration

Expose Oxy externally using an Ingress:
ingress:
  enabled: true
  className: "nginx" # or your ingress controller
  hosts:
    - host: oxy.your-domain.com
      paths:
        - path: "/"
          pathType: "Prefix"
  tls:
    - secretName: oxy-tls
      hosts:
        - oxy.your-domain.com

# If using cert-manager
# annotations:
#   cert-manager.io/cluster-issuer: "letsencrypt-prod"
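If you use the cert-manager annotation above, the referenced ClusterIssuer must already exist. A minimal ACME issuer sketch is shown below; the email address and solver ingress class are assumptions:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@your-domain.com # assumption: your contact email
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: nginx # assumption: matches your ingress controller
```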

Base + Override Values Pattern

For managing multiple deployments or environments, use a base values file with per-deployment overrides:
# _base.yaml — shared defaults across all deployments
serviceAccount:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
    prometheus.io/path: "/metrics"

service:
  port: 3000

resources:
  requests:
    cpu: 250m
    memory: 512Mi
  limits:
    cpu: 1000m
    memory: 2Gi

persistence:
  enabled: true
  storageClass: gp3
  accessMode: ReadWriteOnce
  size: 5Gi
  mountPath: /workspace

nodeSelector:
  workload-type: oxy-app

tolerations:
  - key: dedicated
    operator: Equal
    value: oxy
    effect: NoSchedule

# Allow 5 minutes for startup (DuckLake, Postgres, git workspace initialization)
startupProbe:
  httpGet:
    path: /
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 10
  failureThreshold: 30

readinessProbe:
  httpGet:
    path: /
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 10
  failureThreshold: 3

ingress:
  enabled: true
  ingressClassName: nginx
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-expires: "86400"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "86400"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
# values.yaml — deployment-specific overrides
app:
  image: ghcr.io/oxy-hq/oxy
  imageTag: "0.5.21"
  imagePullPolicy: IfNotPresent
  command: ["oxy", "serve", "--enterprise"]

serviceAccount:
  create: true
  name: oxy-app-sa

semanticEngine:
  enabled: true

persistence:
  size: 10Gi

gitSync:
  enabled: true
  repository: "https://github.com/your-org/your-oxy-workspace.git"
  branch: main
  githubApp:
    secretName: oxy-app-github-app
    privateKeyKey: github_app_private_key
    applicationIdKey: github_app_application_id
    installationIdKey: github_app_installation_id

env:
  OXY_STATE_DIR: "/workspace/oxy_data"
  OXY_DATABASE_URL: "postgresql://oxy:password@postgres-rw.postgres.svc.cluster.local:5432/oxydb"

externalSecrets:
  envSecretNames: ["oxy-env-secret"]

ingress:
  hosts:
    - host: oxy.your-domain.com
      paths: []
Install with both files:
helm install oxy-app oci://ghcr.io/oxy-hq/helm-charts/oxy-app \
  --namespace oxy-app \
  --create-namespace \
  --version 0.4.2 \
  --values _base.yaml \
  --values values.yaml

oxy-start: Self-Contained Deployment

The oxy-start chart runs all services (PostgreSQL, ClickHouse, OTel Collector) inside the pod using Docker-in-Docker. This is ideal for single-tenant deployments where you want a fully self-contained instance with no external database dependency.
# oxy-start values.yaml
app:
  image: ghcr.io/oxy-hq/oxy
  imageTag: "0.5.23"
  imagePullPolicy: IfNotPresent

# Docker-in-Docker sidecar — runs PostgreSQL and other services
dind:
  enabled: true
  storageSize: 40Gi
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 2000m
      memory: 4Gi

# oxy start configuration
oxyStart:
  enterprise: true # Enables ClickHouse and OTel Collector
  clean: false # Set true to wipe data on restart

persistence:
  enabled: true
  size: 20Gi
  mountPath: /workspace

gitSync:
  enabled: true
  repository: "https://github.com/your-org/your-oxy-workspace.git"
  branch: main
  githubApp:
    secretName: oxy-github-app
    privateKeyKey: github_app_private_key
    applicationIdKey: github_app_application_id
    installationIdKey: github_app_installation_id

env:
  OXY_STATE_DIR: "/workspace/oxy_data"

externalSecrets:
  envSecretNames: ["oxy-start-env-secret"]

resources:
  requests:
    cpu: 500m
    memory: 1Gi
  limits:
    cpu: 2000m
    memory: 4Gi

ingress:
  enabled: true
  ingressClassName: nginx
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-expires: "86400"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
  hosts:
    - host: oxy.your-domain.com
      paths: []
Install the oxy-start chart:
helm install oxy-start oci://ghcr.io/oxy-hq/helm-charts/oxy-start \
  --namespace oxy-app \
  --create-namespace \
  --version 0.2.1 \
  --values oxy-start-values.yaml

GitOps Deployment with ArgoCD

For production environments, we recommend managing Helm deployments via ArgoCD using the App-of-Apps or ApplicationSet pattern. This provides automated sync, drift detection, and rollback.

Single Application

# argocd-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: oxy-app
  namespace: argocd
spec:
  project: default
  sources:
    # Values files from your infrastructure repo
    - repoURL: https://github.com/your-org/infrastructure
      targetRevision: HEAD
      path: k8s/oxy/values
      ref: values
    # Helm chart from OCI registry
    - repoURL: ghcr.io/oxy-hq/helm-charts
      chart: oxy-app
      targetRevision: "0.4.2"
      helm:
        valueFiles:
          - $values/k8s/oxy/values/_base.yaml
          - $values/k8s/oxy/values/oxy-app.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: oxy-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true

ApplicationSet for Multiple Instances

Use an ApplicationSet to manage multiple Oxy instances (e.g., prod, staging, embed) from a single definition:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: oxy-instances
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - name: oxy-app
          - name: oxy-staging
          - name: oxy-embed
  template:
    metadata:
      name: "{{.name}}"
      namespace: argocd
    spec:
      project: default
      sources:
        - repoURL: https://github.com/your-org/infrastructure
          targetRevision: HEAD
          path: k8s/oxy/values
          ref: values
        - repoURL: ghcr.io/oxy-hq/helm-charts
          chart: oxy-app
          targetRevision: "0.4.2"
          helm:
            valueFiles:
              - $values/k8s/oxy/values/_base.yaml
              - $values/k8s/oxy/values/{{.name}}.yaml
      destination:
        server: https://kubernetes.default.svc
        namespace: oxy-app
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
          - ServerSideApply=true

Upgrade and Maintenance

Upgrading Oxy

# Update Helm repository
helm repo update

# Check current version
helm list -n oxy

# Upgrade to latest version
helm upgrade oxy-app oxy/oxy-app \
  --namespace oxy \
  --values production-values.yaml \
  --atomic

# Rollback if needed
helm rollback oxy-app 1 --namespace oxy

Backup and Restore

For comprehensive backup and restore capabilities, we recommend using Velero, which provides backup and disaster recovery for Kubernetes clusters. To set up Velero for backing up your Oxy deployment:
  1. Install Velero following the official installation guide
  2. Create a backup of your Oxy namespace:
# Create a backup of the oxy namespace
velero backup create oxy-backup --include-namespaces oxy

# Schedule regular backups
velero schedule create oxy-daily --schedule="0 2 * * *" --include-namespaces oxy
  3. Restore from backup when needed:
# List available backups
velero backup get

# Restore from backup
velero restore create --from-backup oxy-backup

Monitoring

Monitor your Oxy deployment:
# Check pod status
kubectl get pods -n oxy

# View logs (all pods in StatefulSet)
kubectl logs -f statefulset/oxy-app -n oxy
# Or view logs for a specific pod (e.g., the first replica)
kubectl logs -f oxy-app-0 -n oxy

# Check persistent volume
kubectl get pvc -n oxy

# Port forward for local access
kubectl port-forward svc/oxy-app 3000:80 -n oxy

Troubleshooting

App starts in readonly mode: This is the default behavior when gitSync.enabled=false; git setup is handled through the app UI. For other issues, review the logs and check the pod status. File an issue at GitHub Issues if you need further assistance.
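The following commands usually surface the root cause quickly. The namespace and pod names follow the examples in this guide, and the git-sync init container name is an assumption; check the describe output for the actual name:

```shell
# Pod status and recent events
kubectl get pods -n oxy
kubectl get events -n oxy --sort-by=.lastTimestamp

# Details for a failing pod, including init container state
kubectl describe pod oxy-app-0 -n oxy

# Logs from a crashed git-sync init container (container name is an assumption)
kubectl logs oxy-app-0 -n oxy -c git-sync --previous
```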