TL;DR
- Non-technical QA testers shouldn't need to understand Docker, git, or package managers to test your app
- I first tried a fully Dockerized setup with ngrok tunnels — it worked but was painfully slow
- I then built a single bash script that checks/installs prerequisites, checks out branches, runs migrations, and starts everything
- The QA tester runs one command and gets the full stack running locally
- Self-updating script distributed via a GitHub Gist
Requirements
- macOS (uses Homebrew for dependency management)
- A project with separate frontend and backend directories
- Basic terminal access (just enough to paste a command)
The Problem
Our QA tester doesn't have a dev background. They're great at finding bugs, writing test cases, and verifying features — but asking them to run `git checkout`, install runtime dependencies, set up virtual environments, start Docker containers, run database migrations, and then start multiple services? That's a full day of onboarding that ends in frustration.
The "normal" answer is a staging environment. But staging deployments aren't instant — you push a branch, wait for CI to build, wait for the deploy to finish, and only then can QA start testing. For a quick bug fix, that's 10-15 minutes of dead time. Multiply that by every branch that needs testing and it adds up fast. And if two people need to test different branches? You're either queuing or spinning up multiple staging environments.
I needed a way for them to test any branch of our app with one command, instantly, without waiting for a deploy. No dev knowledge required.
Attempt 1: Docker Compose + ngrok
My first idea was to containerize everything and expose it through an ngrok tunnel. The QA tester wouldn't even need to clone the repo — they'd just open a URL.
The setup used Docker Compose to orchestrate all the services:
```yaml
services:
  db:
    image: postgres:17
    environment:
      POSTGRES_DB: app_db
      POSTGRES_USER: app_user
      POSTGRES_PASSWORD: localdev123
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app_user -d app_db"]
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:6
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 5s
      retries: 5

  backend:
    build:
      context: ../backend
    env_file:
      - backend.env.docker
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    command: >
      sh -c "npm install &&
             npm run migrate &&
             npm start"

  frontend:
    build:
      context: ../frontend
      dockerfile: ../docker-environment/Dockerfile.frontend
    env_file:
      - frontend.env

  nginx:
    image: nginx:alpine
    ports:
      - "3000:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - backend
      - frontend

# Named volumes must be declared at the top level or compose refuses to start
volumes:
  postgres_data:
```

An nginx reverse proxy sat in front to route /api to the backend and everything else to the frontend:
```nginx
server {
    listen 80;

    location /api/ {
        proxy_pass http://backend:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location / {
        proxy_pass http://frontend:8080;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

Then a start script would build everything and optionally open an ngrok tunnel:
```bash
#!/bin/bash
docker compose -f docker-compose.qa.yml up --build -d

echo "Waiting for services to start..."
sleep 10
echo "QA environment is ready at http://localhost:3000"

# Optional: expose via ngrok
if command -v ngrok &> /dev/null; then
  read -p "Start ngrok tunnel? (y/N): " -n 1 -r
  echo
  if [[ $REPLY =~ ^[Yy]$ ]]; then
    ngrok http 3000
  fi
fi
```

Why I Moved Away From This
It worked. The QA tester could open an ngrok URL and test. But there were real problems:
- ngrok was painfully slow — API calls that took 50ms locally took 500ms+ through the tunnel. The app felt broken even when it wasn't.
- Free ngrok URLs change every session — the QA tester had to get a new URL from me every time.
- I had to run it on my machine — since the QA tester didn't have Docker, I was the one running the Docker stack and keeping the tunnel alive, which defeated the purpose.
- Debugging was harder — logs were buried inside containers, and the QA tester couldn't share their browser console easily.
The Docker approach is still useful for CI/CD or staging environments, but for day-to-day QA testing, I needed something that ran on the tester's own machine.
Attempt 2: The QA Runner Script
The better approach: a single bash script that sets up everything on the QA tester's machine. They don't need to understand what it's doing — they just run it.
Design Principles
- Zero prior knowledge required — the script checks for every prerequisite and installs what's missing
- One command to start — `./qa-runner.sh -f main -b main` (flag parsing is sketched below)
- Safe by default — destructive operations (like wiping the database) require explicit flags and confirmation
- Self-updating — the script can update itself from a Gist
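The post doesn't show the script's own argument handling, so here is a minimal sketch of what the flag parsing could look like. The flag names match the ones used throughout this post, but the variable names and structure are assumptions, not the verbatim script:

```bash
# Sketch of the flag parsing (assumed structure, not the verbatim script)
FRONTEND_BRANCH="main"
BACKEND_BRANCH="main"
FRESH_DATABASE=false
SEED_DATA=false
RESET_ALL=false

while [[ $# -gt 0 ]]; do
  case "$1" in
    -f) FRONTEND_BRANCH="$2"; shift 2 ;;    # frontend branch to test
    -b) BACKEND_BRANCH="$2"; shift 2 ;;     # backend branch to test
    --fresh) FRESH_DATABASE=true; shift ;;  # wipe the DB and rerun migrations
    --seed) SEED_DATA=true; shift ;;        # load test data after migrating
    --reset-all) RESET_ALL=true; shift ;;   # also remove installed dependencies
    --update) update_script; exit 0 ;;      # self-update from the Gist (shown later)
    --stop) stop_services; exit 0 ;;        # shut everything down (sketched later)
    *) echo "Unknown option: $1"; exit 1 ;;
  esac
done
```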
Prerequisite Checking
The script checks for every tool it needs and installs missing ones automatically:
```bash
check_homebrew() {
  if ! command -v brew &> /dev/null; then
    echo "Homebrew is not installed!"
    echo "Please install it by running:"
    echo '/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"'
    return 1
  fi
  echo "Homebrew is installed"
}

check_docker() {
  if ! command -v docker &> /dev/null; then
    echo "Installing Docker Desktop via Homebrew..."
    brew install --cask docker
    echo "Please open Docker Desktop, complete setup, then run this script again."
    return 1
  fi

  # Check if Docker daemon is actually running
  if ! docker info &> /dev/null; then
    echo "Docker Desktop is not running. Starting it..."
    open -a Docker

    # Wait for it to be ready
    local attempt=1
    while ! docker info &> /dev/null; do
      if [ $attempt -ge 60 ]; then
        echo "Docker failed to start"
        return 1
      fi
      sleep 2
      ((attempt++))
    done
  fi
}

check_node() {
  if ! command -v node &> /dev/null; then
    brew install node@${REQUIRED_NODE_VERSION}
    echo "export PATH=\"/opt/homebrew/opt/node@${REQUIRED_NODE_VERSION}/bin:\$PATH\"" >> ~/.zshrc
    export PATH="/opt/homebrew/opt/node@${REQUIRED_NODE_VERSION}/bin:$PATH"
  fi

  local version=$(node -v | sed 's/v//' | cut -d'.' -f1)
  if [ "$version" -lt "$REQUIRED_NODE_VERSION" ]; then
    echo "Node.js version too old, upgrading..."
    brew install node@${REQUIRED_NODE_VERSION}
  fi
}
```

Each check follows the same pattern: check if it exists, install if not, verify the version. The QA tester sees green checkmarks or gets clear instructions for anything that needs manual action (like opening Docker Desktop for the first time).
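Since every check shares the same skeleton, the existence test and install step could also be factored into one helper. This is a hypothetical refactor to illustrate the pattern, not code from the actual script:

```bash
# Hypothetical helper: verify a tool exists, brew-install it if missing
require_tool() {
  local tool=$1 formula=$2
  if command -v "$tool" &> /dev/null; then
    echo "✓ $tool is installed"
  else
    echo "Installing $tool via Homebrew..."
    brew install "$formula"
  fi
}

require_tool node node   # args: command name, Homebrew formula
require_tool jq jq
```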
Branch Management
The QA tester specifies which branches to test — usually a feature branch for either the frontend or backend:
```bash
update_branch() {
  local dir=$1
  local branch=$2
  local name=$3

  cd "$dir"
  git fetch --all --prune

  if ! git rev-parse --verify "origin/$branch" &> /dev/null; then
    echo "Branch '$branch' does not exist!"
    git branch -r | grep -v HEAD | sed 's/origin\// /' | head -20
    exit 1
  fi

  # Stash local changes so we don't lose anything
  if [ -n "$(git status --porcelain)" ]; then
    git stash push -m "Auto-stash by qa-runner on $(date)"
  fi

  git checkout "$branch" 2>/dev/null || git checkout -b "$branch" "origin/$branch"
  git pull origin "$branch"

  echo "$name is on branch '$branch' ($(git rev-parse --short HEAD))"
  cd ..
}
```

If the branch doesn't exist, it shows available branches instead of a cryptic git error. Local changes get stashed automatically so nothing is lost.
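In the main flow the helper runs once per repository. A plausible pair of call sites, using the branch variables from the flag-parsing sketch above:

```bash
# Assumed call sites; the variable names are illustrative
update_branch "$FRONTEND_DIR" "$FRONTEND_BRANCH" "Frontend"
update_branch "$BACKEND_DIR" "$BACKEND_BRANCH" "Backend"
```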
Starting Everything
The script starts all services as background processes and shows a clean summary:
```bash
run_services() {
  # Start database and cache (Docker), then wait for Postgres to accept connections
  cd "$BACKEND_DIR"
  docker compose -f "$DOCKER_COMPOSE_FILE" up -d
  while ! docker compose -f "$DOCKER_COMPOSE_FILE" exec -T db pg_isready &>/dev/null; do
    sleep 1
  done
  cd ..

  # Start backend and frontend as background processes.
  # Each subshell keeps its cd local, so the script's own working directory never changes.
  (cd "$BACKEND_DIR" && exec $BACKEND_START_CMD) &
  BACKEND_PID=$!
  (cd "$FRONTEND_DIR" && exec $FRONTEND_START_CMD) &
  FRONTEND_PID=$!

  echo "Services are running!"
  echo "  Frontend: http://localhost:${FRONTEND_PORT}"
  echo "  Backend:  http://localhost:${BACKEND_PORT}"
  echo ""
  echo "Press Ctrl+C to stop all services"

  # Clean shutdown on Ctrl+C
  trap cleanup SIGINT SIGTERM
  wait
}

cleanup() {
  kill $FRONTEND_PID $BACKEND_PID 2>/dev/null
  cd "$BACKEND_DIR" && docker compose -f "$DOCKER_COMPOSE_FILE" down
  echo "All services stopped"
  exit 0
}
```

The trap ensures a clean shutdown — killing background processes and stopping Docker containers when the tester presses Ctrl+C.
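The `--stop` flag that appears later runs in a separate invocation, where `$FRONTEND_PID` and `$BACKEND_PID` no longer exist. One plausible way to handle that, and an assumption on my part since the real script may do it differently, is to write PID files at startup and read them back:

```bash
# Hypothetical stop_services for a separate invocation.
# Assumes run_services wrote PID files, e.g.: echo $BACKEND_PID > .qa/backend.pid
stop_services() {
  for pidfile in .qa/*.pid; do
    [ -f "$pidfile" ] || continue
    kill "$(cat "$pidfile")" 2>/dev/null
    rm "$pidfile"
  done
  (cd "$BACKEND_DIR" && docker compose -f "$DOCKER_COMPOSE_FILE" down)
  echo "All services stopped"
}
```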
Database Management
Sometimes the QA tester needs a fresh database (testing onboarding flows) or seeded test data:
```bash
# Fresh database — wipes everything and reruns migrations
./qa-runner.sh -f main -b main --fresh

# Fresh database with test data
./qa-runner.sh -f main -b main --fresh --seed

# Nuclear option — removes all dependencies too
./qa-runner.sh -f main -b main --reset-all
```

Destructive operations show a warning and require confirmation:
```bash
confirm_action() {
  echo "WARNING: $1"
  read -p "Are you sure? (y/N): " -n 1 -r
  echo
  if [[ ! $REPLY =~ ^[Yy]$ ]]; then
    echo "Cancelled."
    exit 0
  fi
}

if [ "$FRESH_DATABASE" = true ]; then
  confirm_action "This will ERASE ALL DATA in your local database!"
fi
```
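The confirmation gate is shown above, but not what `--fresh` itself does. A plausible sketch, reusing the Docker volume setup and the `npm run migrate` command from the compose file earlier (the `npm run seed` script name is an assumption):

```bash
# Hypothetical --fresh implementation: drop the Postgres volume and remigrate
fresh_database() {
  confirm_action "This will ERASE ALL DATA in your local database!"
  cd "$BACKEND_DIR"
  docker compose -f "$DOCKER_COMPOSE_FILE" down -v   # -v removes the postgres_data volume
  docker compose -f "$DOCKER_COMPOSE_FILE" up -d
  npm run migrate                                    # rerun all migrations from scratch
  if [ "$SEED_DATA" = true ]; then
    npm run seed                                     # assumed seed script name
  fi
  cd ..
}
```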
Self-Updating
The script is hosted as a GitHub Gist. When the QA tester runs `./qa-runner.sh --update`, it downloads the latest version and replaces itself:
```bash
GIST_RAW_URL="https://gist.githubusercontent.com/your-user/abc123/raw/qa-runner.sh"

update_script() {
  local temp_file=$(mktemp)
  if curl -fsSL "$GIST_RAW_URL" -o "$temp_file"; then
    # Verify it's actually a script, not an HTML error page
    if head -1 "$temp_file" | grep -q "^#!/bin/bash"; then
      local new_version=$(grep "^SCRIPT_VERSION=" "$temp_file" | cut -d'"' -f2)
      if [ "$new_version" != "$SCRIPT_VERSION" ]; then
        cp "$temp_file" "$0"
        chmod +x "$0"
        echo "Updated to version $new_version"
      else
        echo "Already on latest version"
      fi
    fi
  fi
  rm "$temp_file"
}
```

This is a nice trick — the script checks that the downloaded file actually starts with `#!/bin/bash` before replacing itself. This prevents a failed download (which might return an HTML error page) from bricking the script.
The QA Tester's Workflow
Here's what the tester's daily workflow looks like now:
```bash
# First time setup (one-time)
# 1. Clone both repos into a folder
# 2. Download the script from the Gist
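#    For example, using the placeholder Gist URL from the self-update section
#    (an illustration; substitute your own Gist URL):
#    curl -fsSL https://gist.githubusercontent.com/your-user/abc123/raw/qa-runner.sh -o qa-runner.sh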
# 3. Make it executable: chmod +x qa-runner.sh

# Daily testing
./qa-runner.sh -f feature/new-login -b main

# Testing a backend change
./qa-runner.sh -f main -b fix/api-validation

# Need fresh test data
./qa-runner.sh -f main -b main --fresh --seed

# Something went wrong, nuclear reset
./qa-runner.sh -f main -b main --reset-all

# Stop everything
./qa-runner.sh --stop
```

They don't need to know what a package manager is, what migrations are, or how Docker works. They paste a command, wait for green checkmarks, and open `localhost:3000`.
Which Approach Should You Use?
| | Staging Deploy | Docker + Tunnel | Local Runner Script |
|---|---|---|---|
| Wait time | 10-15 min per deploy | Build time + tunnel latency | Seconds (already running) |
| QA needs to install tools | No | No (if you host it) | Yes (automated) |
| Performance | Full speed | Slow (tunnel overhead) | Native speed |
| QA independence | Medium (waits for deploys) | Low (depends on you) | High (runs on their machine) |
| Branch switching | New deploy cycle | Rebuild containers | Git checkout + restart |
| Parallel testing | Need multiple environments | One tunnel at a time | Each tester runs their own |
| Setup complexity | High (CI/CD pipeline) | High (Dockerfiles, nginx) | Medium (one bash script) |
| Best for | Final verification, demos | Remote testers, CI/CD | Day-to-day QA testing |
Staging environments are the standard answer, but they're slow feedback loops — every branch change means another deploy cycle. The Docker + tunnel approach removes the deploy wait but introduces tunnel latency and keeps QA dependent on you. The local script gives QA full independence with native speed.
Lessons Learned
- Don't make QA depend on you — if the tester needs to ask you to start the environment, you're the bottleneck
- Install tools for them — prerequisite checkers that auto-install save hours of back-and-forth
- Safe defaults, explicit destruction — never wipe data without `--fresh` and a confirmation prompt
- Show what's happening — colored output with checkmarks and progress indicators builds trust; the tester knows the script is working, not hanging (a minimal sketch follows this list)
- Self-update mechanism — distributing the script via a Gist with `--update` means you can push fixes without the tester doing anything
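The colored output is simple to do in bash. A minimal sketch, with escape codes and helper names that are my own illustration rather than lifted from the script:

```bash
# Hypothetical status helpers using ANSI escape codes
GREEN='\033[0;32m'; RED='\033[0;31m'; NC='\033[0m'   # NC resets the color

ok()   { echo -e "${GREEN}✓${NC} $1"; }
fail() { echo -e "${RED}✗${NC} $1"; }

ok "Homebrew is installed"
fail "Docker Desktop is not running"
```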
The whole script is about 800 lines of bash. It took a day to write and has saved weeks of "hey, can you help me set up the app?" messages.
The full source for both approaches (local runner script + Docker environment) is on GitHub: RielJ/qa-runner