Docker builds in CI were taking 10-15 minutes per run. With dozens of repos building multiple times a day, this was burning hours of developer time and CI compute.
I got most builds under 40 seconds. The fix wasn’t faster runners — it was where the cache lives.
## Why CI Docker Builds Are Slow
Every CI run gets a fresh machine. No local Docker cache. `docker build` starts from scratch — pulling base images, installing dependencies, compiling code. Even if nothing changed except one line of application code, it rebuilds every layer.
The standard fix is GitHub Actions cache (`actions/cache`), which stores layers as a tarball. But it has limits: 10GB per repo, slow upload/download for large images, and cache eviction after 7 days of no access.
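BuildKit can also target the Actions cache service directly through its `gha` cache backend, which avoids the manual tarball dance but keeps the same size and eviction limits. A minimal sketch for comparison (the image tag is a placeholder):

```yaml
- name: Build and push
  uses: docker/build-push-action@v5
  with:
    push: true
    tags: registry.example.com/app:${{ github.sha }}
    cache-from: type=gha
    cache-to: type=gha,mode=max
```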
## The Better Fix: Registry-Based Caching
Instead of caching layers locally or in GitHub’s cache, push them to a container registry:
```yaml
- name: Build and push
  uses: docker/build-push-action@v5
  with:
    push: true
    tags: registry.example.com/app:${{ github.sha }}
    cache-from: type=registry,ref=registry.example.com/app:cache
    cache-to: type=registry,ref=registry.example.com/app:cache,mode=max
```
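One prerequisite: the registry cache exporter isn't supported by the default Docker driver, so the workflow needs a Buildx setup step before the build to switch to the `docker-container` driver:

```yaml
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v3
```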
`cache-from` pulls the previous build's layers from the registry. `cache-to` pushes the new build's layers back. `mode=max` caches all layers, including those from intermediate build stages, not just the final image layers.
The result: every CI run — regardless of which runner picks it up — has access to the full layer cache from the last build. Base image layers, dependency installation, compilation outputs — all cached and reusable.
## The Numbers
| Before | After |
|---|---|
| 10-15 min per build | 30-40 seconds |
| No cache between runs | Full layer cache from registry |
| GitHub Actions cache (10GB limit, slow) | Registry cache (no practical limit, fast pull) |
The biggest wins were on builds with heavy dependency installation steps (Go modules, Node.js packages, Python pip). Those layers rarely change but take minutes to rebuild.
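To get those cache hits, the Dockerfile has to copy dependency manifests before application code, so the expensive install layer is only invalidated when the manifests change. An illustrative Node.js sketch (paths and base image are examples, not the actual project):

```dockerfile
FROM node:20
WORKDIR /app

# Copy only the manifests first: this layer (and the npm ci layer
# below it) stays cached until package-lock.json actually changes.
COPY package.json package-lock.json ./
RUN npm ci

# Application code changes invalidate only these layers onward.
COPY . .
RUN npm run build
```

The same pattern applies to Go (`COPY go.mod go.sum` then `go mod download`) and Python (`COPY requirements.txt` then `pip install`).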
## What Else Helped
- Merged redundant jobs — some workflows ran `docker build` twice (once to test, once to push). Combined into a single build-test-push step.
- Dedicated cloud runners — replaced GitHub-hosted runners with self-hosted runners closer to the registry. Reduced image pull/push latency.
- Multi-stage builds — separated build dependencies from runtime. The cached build stage rarely changes even when application code does.
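A multi-stage split might look like this (an illustrative Go example — the module layout and base images are assumptions, not the actual project):

```dockerfile
# Build stage: heavy toolchain. With mode=max these layers are
# exported to the registry cache and reused across CI runs.
FROM golang:1.22 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

# Runtime stage: only the compiled binary ships, so the final
# image stays small and independent of the build toolchain.
FROM gcr.io/distroless/static-debian12
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```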
## Takeaway
If your CI Docker builds are slow, check where your cache lives. GitHub Actions cache works for small images but doesn't scale. Registry-based caching (`cache-from`/`cache-to` with `type=registry`) gives every CI run access to the previous build's layers regardless of runner. The cache is persistent, fast, and doesn't count against GitHub's 10GB limit.