Forjinn Docs


Documentation v2.0
Last updated: 12/9/2025

Docker & Kubernetes Configuration Guide

This reference details how to harness Docker and Kubernetes for robust, scalable deployments of InnoSynth-Forjinn, including advanced configuration, secure secrets handling, multi-service orchestration, and upgrade patterns.


Docker Configuration

Basics

  • Use the official images, or build your own with the supplied Dockerfile
  • Set up environment files (.env) and either bind-mount them as volumes or pass them in via Compose `env_file`
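
If you build your own image, a minimal Dockerfile might look like the following. This is a sketch only: the Node base image, file layout, and `server.js` entry point are assumptions (the app serves on port 3000, per the Compose example below), not the supplied Dockerfile itself.

```dockerfile
# Illustrative Dockerfile; adapt to the project's actual toolchain
FROM node:18-alpine
WORKDIR /app
# Install dependencies first to keep layer caching effective
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```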

Docker Compose

Example:

version: '3.8'
services:
  app:
    image: innosynth/forjinn:latest
    env_file: ./docker/.env
    ports:
      - "3000:3000"
    volumes:
      - ./data:/app/data
      - ./logs:/app/logs
    depends_on:
      - redis
      - db
  redis:
    image: redis:6.2
    ports: [ "6379:6379" ]
  db:
    image: postgres:13
    environment:
      POSTGRES_DB: forjinn
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: <secure>
    volumes:
      - ./pg:/var/lib/postgresql/data

Secrets

  • For production, avoid plain .env files; use Docker secrets or file-based secrets mounted under /run/secrets
  • Mount credentials as read-only files and point your configuration at those paths
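
With Docker Compose, file-based secrets can be wired up like this (the secret name and file path are illustrative):

```yaml
services:
  app:
    image: innosynth/forjinn:latest
    secrets:
      - db_password   # mounted read-only at /run/secrets/db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt   # keep this file out of version control
```

The application then reads the credential from `/run/secrets/db_password` instead of an environment variable, which keeps it out of `docker inspect` output and process listings.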

Healthchecks

  • Add a Docker healthcheck so the engine can detect an unresponsive app, e.g. `curl --fail http://localhost:3000/health || exit 1`
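
In Compose, that check can be declared on the service (the `/health` endpoint is taken from the command above; intervals are reasonable defaults, not project requirements):

```yaml
services:
  app:
    healthcheck:
      test: ["CMD-SHELL", "curl --fail http://localhost:3000/health || exit 1"]
      interval: 30s      # how often to probe
      timeout: 5s        # fail the probe if it hangs
      retries: 3         # consecutive failures before marking unhealthy
      start_period: 10s  # grace period while the app boots
```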

Kubernetes (K8s)

Helm Chart/Manifest Use

  • Use the provided Helm chart, or write custom manifests, for flexible multi-pod deployment
  • Override values per environment, e.g. for the service, ingress, and volume claims
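
A values override file might look like the following; the key names here are illustrative conventions, so check the chart's actual `values.yaml` for the real ones:

```yaml
# values-prod.yaml (key names are illustrative, not the chart's actual schema)
replicaCount: 3
service:
  type: ClusterIP
  port: 3000
ingress:
  enabled: true
  hosts:
    - forjinn.example.com
persistence:
  enabled: true
  size: 10Gi
```

Applied with something like `helm upgrade --install forjinn ./chart -f values-prod.yaml`.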

Secrets & ConfigMaps

  • Store sensitive values as Kubernetes Secrets; mount or inject as env
  • Use ConfigMaps for non-sensitive, shared configuration
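
As a sketch, a Secret holding the database password could be defined and injected like this (the Secret name and Deployment fragment are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: forjinn-db
type: Opaque
stringData:
  POSTGRES_PASSWORD: change-me   # set per environment; never commit real values
---
# Referenced from the Deployment's container spec, e.g.:
# containers:
#   - name: app
#     envFrom:
#       - secretRef:
#           name: forjinn-db
```

Mounting the Secret as a file volume instead of `envFrom` avoids exposing values in the container's environment.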

Storage

  • Use persistent volume claims (PVCs) for data, logs, and uploads
  • Multiple storageClass options: local, NFS, cloud-native (AWS EFS, GCP Filestore)
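
A PVC for the data directory might look like this (the claim name, storage class, and size are assumptions to adjust per cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: forjinn-data
spec:
  accessModes: ["ReadWriteOnce"]   # single-node read-write; use RWX for NFS/EFS
  storageClassName: standard       # swap for your cluster's class (local, NFS, EFS, ...)
  resources:
    requests:
      storage: 10Gi
```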

Network & Ingress

  • Expose web/API via Ingress Controller; enforce HTTPS
  • Enable sticky sessions (if required for user experience)
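
With the NGINX Ingress Controller, HTTPS plus cookie-based sticky sessions can be sketched as follows (hostname, TLS secret, and service name are placeholders, and the annotation is NGINX-specific):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: forjinn
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"   # sticky sessions (NGINX ingress only)
spec:
  tls:
    - hosts: ["forjinn.example.com"]
      secretName: forjinn-tls
  rules:
    - host: forjinn.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: forjinn
                port:
                  number: 3000
```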

Horizontal/Vertical Scaling

  • Set pod resource requests/limits per workload
  • Use Horizontal Pod Autoscaler (HPA) for stateless scaling; explicitly scale workers separately for batch workloads
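
For the stateless web tier, an HPA along these lines scales on CPU utilization (the Deployment name, replica bounds, and 70% target are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: forjinn
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: forjinn            # the stateless web/API Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```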

Security Configuration

  • Set network policies for pod-to-pod and external communication
  • Enable liveness/readiness probes for fast failover and detection
  • Run containers as non-root wherever possible
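
The probe and non-root settings can be combined in the container spec; this fragment assumes the `/health` endpoint on port 3000 from the Docker section, and the UID is a placeholder:

```yaml
# Container spec fragment (inside a Deployment's pod template)
livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 15   # give the app time to boot before killing it
  periodSeconds: 20
readinessProbe:
  httpGet:
    path: /health
    port: 3000
  periodSeconds: 10         # gate traffic until the app reports ready
securityContext:
  runAsNonRoot: true
  runAsUser: 1000           # any unprivileged UID your image supports
  allowPrivilegeEscalation: false
```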

Upgrades & Zero Downtime

  • Use rolling updates with Helm or kubectl
  • Blue/green deployment (swap service endpoints after verifying new version)
  • Always backup DB/data pre-upgrade
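
A rolling update with kubectl might look like this (the Deployment name, container name, and tag are placeholders):

```
# Roll out a new image version (names/tags are placeholders)
kubectl set image deployment/forjinn app=innosynth/forjinn:v2.1.0
kubectl rollout status deployment/forjinn

# Roll back if the new version misbehaves
kubectl rollout undo deployment/forjinn
```

`rollout status` blocks until the update completes or times out, so it fits naturally into CI pipelines as a gate before cutover.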

Troubleshooting

  • Use kubectl logs, docker logs, and platform error logs
  • Monitor with Prometheus, Grafana, or your cloud provider’s dashboards
  • Debug pod restarts/crashes by checking OOMKilled status, probe failures, or log errors
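
Typical commands for that investigation (pod names are placeholders):

```
# Why did the pod restart? Check last state, exit code, and OOMKilled flag
kubectl describe pod forjinn-abc123

# Logs from the crashed container, not the restarted one
kubectl logs forjinn-abc123 --previous

# Recent cluster events (probe failures, scheduling issues) in time order
kubectl get events --sort-by=.lastTimestamp
```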

Best Practices

  • Always use unique, per-environment secrets
  • Scale horizontally for peak API or batch workload times
  • Test upgrades on staging before production
  • Regularly prune unused images/volumes to save disk and reduce attack surface

Robust container and cluster config is the foundation for reliable, scalable, and secure AI automation in any environment.