
The 12-Factor App Methodology

Standardizing our approach ensures that our applications are portable, scalable, and easy to maintain. We adhere to the 12-Factor App methodology to keep our development and production cycles smooth.

  1. One app, one repo.

    We keep things simple. A single repository tracks the codebase, ensuring that the exact same code deployed to Staging is the one that lands in Production.

  2. Explicitly declare and isolate dependencies.

    We avoid “it works on my machine” surprises. Every dependency must be explicitly declared and pinned to a specific version (e.g., package-lock.json for Node.js, pom.xml for Maven).
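
As a sketch, a Node.js project pins exact versions in package.json (the project and version below are placeholders) and commits the generated package-lock.json so every build resolves identically:

```json
{
  "name": "example-service",
  "dependencies": {
    "express": "4.18.2"
  }
}
```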

  3. Store config in the environment.

    Keep secrets secret and configuration flexible. Store config in environment variables, never in the code or hardcoded files. This allows us to switch settings between environments without rebuilding the image.
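
A minimal sketch of this pattern in Python, using the standard library only (the variable names DATABASE_URL, LOG_LEVEL, and PORT are assumptions, not a mandated schema):

```python
import os

def load_config() -> dict:
    """Read all configuration from environment variables,
    with safe defaults for local development."""
    return {
        "database_url": os.environ.get("DATABASE_URL", "postgres://localhost:5432/dev"),
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
        "port": int(os.environ.get("PORT", "8080")),
    }
```

Because the code never hardcodes these values, the same image runs in Staging and Production with different environment variables injected at deploy time.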

  4. Treat backing services as attached resources.

    Databases, caches, and queues should be treated like pluggable resources. Your app should be loosely coupled enough that switching from PostgreSQL to MySQL is mostly a configuration change.
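
To illustrate the loose coupling, a sketch where the backing database is identified only by a URL from config, so swapping engines changes the URL, not the code (the hostnames below are hypothetical):

```python
from urllib.parse import urlparse

def describe_backend(database_url: str) -> str:
    """Identify the attached resource purely from its connection URL."""
    parsed = urlparse(database_url)
    return f"{parsed.scheme} at {parsed.hostname}:{parsed.port}"

# Same code, two different attached resources:
# describe_backend("postgresql://db.internal:5432/app")
# describe_backend("mysql://db.internal:3306/app")
```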

  5. Strictly separate build and run stages.

    • Build: We use Cloud Build to create a lean Docker image (binary + essentials).
    • Release: ArgoCD combines the image with the environment config.
    • Run: Helm Charts execute the app in the cluster.
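
The build stage above can be sketched as a multi-stage Dockerfile that ships only the binary and essentials (the module path, binary name, and registry paths are placeholders):

```dockerfile
# Build stage: compile with the full toolchain.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Final stage: only the binary, nothing else.
FROM gcr.io/distroless/static
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```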

  6. Execute the app as one or more stateless processes.

    Any data that needs to survive a restart (like user sessions) must live in a shared store like Redis, not in the pod’s memory. This allows Kubernetes to scale our replicas up and down freely without data loss.
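
A sketch of the rule in Python. In production the shared store would be Redis; here an in-memory stand-in keeps the example self-contained, and the function names are hypothetical:

```python
class SharedSessionStore:
    """Stand-in for an external shared store such as Redis."""
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

store = SharedSessionStore()

def handle_login(session_id: str, user: str) -> None:
    # State is written to the shared store, never kept in the
    # process's own memory, so it survives pod restarts.
    store.set(f"session:{session_id}", user)

def handle_request(session_id: str):
    # Any replica can serve this request, because the session
    # lives outside the pod.
    return store.get(f"session:{session_id}")
```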

  7. Export services via port binding.

    The application must be self-contained and listen on a specific port (configurable via env vars). This ensures Kubernetes Services and Ingress controllers can reliably route traffic to your pods.
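
As a minimal stdlib sketch (PORT as the variable name is an assumption), the app binds its own port rather than relying on an external web server:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def get_port() -> int:
    # The listen port comes from the environment, so the platform
    # decides where traffic arrives without a rebuild.
    return int(os.environ.get("PORT", "8080"))

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    # Self-contained: the process itself exports the service
    # by binding the port and serving HTTP.
    HTTPServer(("0.0.0.0", get_port()), Handler).serve_forever()
```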

  8. Scale out via the process model.

    Design your code assuming multiple copies will run simultaneously. Ensure your logic (especially Pub/Sub listeners) handles locking or idempotency so that multiple instances don’t process the same message at the same time.
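
An idempotency sketch in Python: each handler checks a ledger of processed message IDs before acting, so a redelivered message is applied once. A local set stands in for an atomic shared store (e.g., a Redis SETNX in production); the function names are hypothetical:

```python
# Stand-in for an atomic shared store; in production this check
# must be shared across replicas (e.g., Redis SETNX).
processed_ids = set()

def handle_message(message_id: str, apply) -> bool:
    """Apply a message exactly once; return False for duplicates."""
    if message_id in processed_ids:
        return False  # already handled, safe to ack and skip
    processed_ids.add(message_id)
    apply()
    return True
```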

  9. Maximize robustness with fast startup and graceful shutdown.

    • Shutdown: Handle the SIGTERM signal to finish active requests before the pod closes.
    • Startup: Keep boot times short. Long startup routines hinder the autoscaler during traffic spikes.

  10. Keep development, staging, and production similar.

    By using the same Docker images and swapping only environment variables, we keep environments aligned: code that passes in Staging behaves the same way in Production.

  11. Treat logs as event streams.

    • Format: Use JSON formatting so aggregation tools can parse them.
    • Content: Log meaningful info. Avoid cluttering Production logs with high-frequency “I’m healthy” checks.
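
A minimal JSON formatter for Python's standard logging module, as one way to meet the format rule (the field names are a suggested convention, not a mandated schema):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each log record as a single machine-parseable JSON line."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

# Logs go to stdout/stderr as an event stream; the platform,
# not the app, handles routing and aggregation.
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])
```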

  12. Run admin/management tasks as one-off processes.

    Need to run a database migration? Run it as a separate Kubernetes Job. It should use the same image and config as the app but execute independently.
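
A hedged sketch of such a Job manifest (the names, image tag, ConfigMap, and migrate command are all placeholders): it reuses the app's image and config but runs once and exits.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  template:
    spec:
      containers:
        - name: migrate
          image: registry.example.com/app:1.2.3   # same image as the app
          command: ["./migrate", "up"]            # one-off task, then exit
          envFrom:
            - configMapRef:
                name: app-config                  # same config as the app
      restartPolicy: Never
```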