steps.sh doesn’t belong in your Kubernetes entrypoint
I set GUNICORN_CMD_ARGS="--workers 2" on a deployment. Four workers booted. I set WEB_CONCURRENCY=2. Four workers. The pod was on a 4GB node with three other tenants doing the same thing, and the node died from memory pressure because twelve gunicorn workers were loading weasyprint simultaneously.
The Dockerfile’s CMD was bash -c "source ./steps.sh && migrate_and_serve". Inside steps.sh:
function run_server() {
    gunicorn -w 4 -k uvicorn.workers.UvicornWorker app.main:app -b 0.0.0.0:8000 &
    GUNICORN_PID=$!
    wait $GUNICORN_PID
}
Hardcoded -w 4. Gunicorn resolves the worker count from the command line first, GUNICORN_CMD_ARGS second, WEB_CONCURRENCY third. The -w 4 in the shell script wins every time. The env vars I set from terraform were dead on arrival.
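That precedence chain can be sketched as a small shell function. This is a paraphrase of the behavior, not gunicorn's actual code; resolve_workers and its single argument (the value of an explicit -w flag, empty string if none was passed) are made up for illustration:

```shell
# Paraphrase of gunicorn's precedence for the worker count.
# resolve_workers is hypothetical; "$1" stands in for an explicit
# -w on the command line (empty string = flag not passed).
resolve_workers() {
    cli_workers="$1"
    if [ -n "$cli_workers" ]; then
        # 1. a -w on the command line wins unconditionally
        echo "$cli_workers"
    elif printf '%s' "$GUNICORN_CMD_ARGS" | grep -q -- '--workers'; then
        # 2. GUNICORN_CMD_ARGS is parsed as extra CLI arguments
        printf '%s\n' "$GUNICORN_CMD_ARGS" | sed 's/.*--workers[= ]*\([0-9]*\).*/\1/'
    else
        # 3. WEB_CONCURRENCY is the default for the workers setting
        echo "${WEB_CONCURRENCY:-1}"
    fi
}

# With the hardcoded -w 4 from steps.sh, the env vars never get a look:
export GUNICORN_CMD_ARGS="--workers 2" WEB_CONCURRENCY=2
resolve_workers 4    # prints 4
resolve_workers ""   # prints 2 -- GUNICORN_CMD_ARGS finally applies
```

Drop the hardcoded flag and the orchestrator's environment takes over, which is the whole fix below.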
This pattern makes sense in docker-compose, where the CMD in the Dockerfile is the only entrypoint you have and a helper script keeps the startup sequence readable. On Kubernetes, terraform owns the deployment spec. The command field in the pod spec is the entrypoint. Baking deployment decisions into a shell script inside the image hides them from the system that manages the deployment.
I replaced it with an inline command in terraform:
command = ["bash", "-c",
    "alembic upgrade head && exec gunicorn -w \"$${WEB_CONCURRENCY:-2}\" -k uvicorn.workers.UvicornWorker app.main:app -b 0.0.0.0:8000"
]
Migrations, then gunicorn with a worker count the orchestrator controls. No shell script, no hidden overrides. Two workers on a 4GB node instead of four, and the node stopped dying.
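The exec in that command matters too. Without it, bash stays alive as the container's PID 1, and the SIGTERM the kubelet sends on pod shutdown stops at the shell instead of reaching gunicorn, which then gets SIGKILLed after the grace period. A quick sketch of what exec changes, in plain sh with nothing gunicorn-specific:

```shell
# Without exec, the inner command is a child of the wrapper shell:
# two different PIDs print. (The trailing `:` stops bash from
# silently exec-ing the last command as an optimization.)
bash -c 'echo $$; bash -c "echo \$\$"; :'

# With exec, the inner command replaces the wrapper shell and keeps
# its PID -- so a signal sent to PID 1 lands on it directly:
bash -c 'echo $$; exec bash -c "echo \$\$"'   # prints the same PID twice
```

The `alembic upgrade head && exec gunicorn ...` shape gets both: the shell sticks around just long enough to run migrations, then hands its PID to gunicorn for the rest of the pod's life.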