Ditching Vercel: Self-Hosting a Next.js Site on a $7 VPS

I ran this site on Vercel for six months. It worked fine. Deploys were instant, the CDN was global, and I never thought about infrastructure.
Then the bill came. And the lock-in became obvious. And I realized I was paying a premium for a platform that was actively making architectural decisions for me, whether I wanted them or not.
So I moved everything to a $6.65/month Hetzner VPS. Two vCPUs, 2GB RAM, 40GB SSD. Ubuntu 24.04. The entire stack runs on a machine that costs less than a single Vercel Pro seat.
This is the complete walkthrough. Not a sanitized tutorial. The actual decisions, configurations, and tradeoffs I encountered migrating a production Next.js site to self-hosted infrastructure.
#Why I Left Vercel
Three reasons, in order of importance.
1. Control. Vercel abstracts away the server. That's the selling point and the problem. When I needed custom security headers with a restrictive Content Security Policy, I had to work within Vercel's middleware constraints. When I wanted to run PostgreSQL locally alongside the app, I needed an external database provider. When I wanted to understand exactly how my application was being served, I was reading Vercel documentation instead of my own config files.
On a VPS, I control the entire stack. The reverse proxy config is 6 lines I wrote. The process manager is a tool I understand. The database is running on the same machine. Nothing is abstracted. Nothing is magic.
2. Cost. Vercel's free tier is generous for side projects. But the moment you need anything beyond the basics (analytics, additional bandwidth, team features), pricing escalates quickly. A Hobby plan works until it doesn't, and the jump to Pro is $20/month per developer.
My Hetzner VPS costs $6.65/month. That includes the server, bandwidth, and enough headroom to run the database, the app, and a reverse proxy simultaneously. The entire annual cost is less than four months of Vercel Pro.
3. Education. I'm an engineer. Understanding the infrastructure my code runs on isn't optional. Running a VPS forces you to understand DNS, TLS, process management, reverse proxies, and system administration. These aren't abstractions I want to outsource.
#The Stack
Here's what runs on the production server:
| Component | Version | Purpose |
|---|---|---|
| Ubuntu | 24.04 LTS | Operating system |
| Node.js | v24.13.0 | Next.js runtime |
| Caddy | Latest | Reverse proxy, automatic TLS |
| PM2 | 6.0.14 | Process manager, auto-restart |
| PostgreSQL | 16.11 | Database |
| Cloudflare | N/A | DNS, CDN proxy, DDoS protection |
The server itself is a Hetzner CPX11: 2 vCPU, 2GB RAM, 40GB SSD, located in Ashburn, Virginia (us-east). Total cost including VAT is approximately $6.65/month.
#Server Provisioning: The First 30 Minutes
I spin up the VPS from Hetzner's Cloud console. Ubuntu 24.04 LTS, SSH key authentication only. The first thing I do after connecting is lock down access.
>Create a deploy user
Never run applications as root. Create a dedicated user:
adduser deploy
usermod -aG sudo deploy
>Disable root login and password authentication
Edit /etc/ssh/sshd_config:
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
Restart SSH (on Ubuntu the systemd unit is named ssh):
systemctl restart ssh
From this point forward, the only way in is SSH with an Ed25519 key as the deploy user.
>Firewall
UFW, three ports:
ufw allow 22 # SSH
ufw allow 80 # HTTP (Caddy redirect)
ufw allow 443 # HTTPS
ufw enable
Everything else is blocked. The database listens on localhost only. No external access.
#Caddy: The Best Decision I Made
I evaluated three reverse proxy options: Nginx, Traefik, and Caddy. I chose Caddy for one reason: automatic TLS with zero configuration.
Nginx requires certbot, cron jobs for renewal, and manual configuration. Traefik is powerful but overengineered for a single-site setup. Caddy handles TLS certificate provisioning, renewal, and OCSP stapling automatically. You just tell it the domain name.
Here's the entire Caddyfile:
thechosenvictor.com {
reverse_proxy localhost:3000
}
www.thechosenvictor.com {
redir https://thechosenvictor.com{uri} permanent
}
Six lines. That's the complete reverse proxy and TLS configuration.
Caddy sees the domain name, provisions a Let's Encrypt certificate, configures HTTPS, and proxies traffic to the Next.js server running on port 3000. The www block handles the canonical redirect. No certbot. No cron jobs. No renewal scripts.
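Caddy also ships response compression in the standard build, no plugins required. If I wanted gzip/zstd handled at the proxy layer, it would be one more directive (a sketch, not my production file):

```caddyfile
thechosenvictor.com {
    # Compress responses; zstd is preferred when the client supports it
    encode zstd gzip
    reverse_proxy localhost:3000
}
```

In practice Cloudflare compresses at the edge anyway, so I left the file minimal.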
>The Cloudflare Interaction
There's one subtlety. I use Cloudflare for DNS and CDN proxying. The DNS records:
| Record | Type | Value | Proxy |
|---|---|---|---|
| @ | A | YOUR_VPS_IP | Proxied |
| www | CNAME | thechosenvictor.com | Proxied |
With Cloudflare proxying enabled, traffic flows: User -> Cloudflare CDN -> My VPS -> Caddy -> Next.js.
The SSL/TLS mode must be set to Full (Strict). This means Cloudflare expects a valid certificate on my origin server (which Caddy provides automatically). If you set it to "Flexible," Cloudflare connects to your origin over plain HTTP, which defeats the purpose.
#PM2: Process Management
Next.js needs a Node.js process running permanently. If the process crashes, it needs to restart automatically. If the server reboots, the process needs to start on boot. PM2 handles all of this.
Starting the application:
cd ~/app
pm2 start node_modules/next/dist/bin/next \
--name "tcv" \
-- start
This starts the Next.js production server (after next build has been run). The --name "tcv" flag gives the process a human-readable identifier.
Key PM2 commands I use daily:
pm2 status # Check if the process is running
pm2 logs tcv # Tail application logs
pm2 restart tcv # Restart after deploy
pm2 save # Save current process list
pm2 startup # Generate boot script
pm2 startup is critical. It generates a systemd service that restarts PM2 (and all saved processes) on server reboot. Without it, a server restart means your site is down until you manually SSH in and start the process.
>Memory Management
2GB of RAM is tight. Next.js with React Server Components isn't exactly lightweight. I added 2GB of swap space as a safety net:
fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab
In practice, the Next.js process hovers around 200-400MB of RAM. PostgreSQL adds another 100-200MB. With 2GB physical + 2GB swap, there's enough headroom for build processes and occasional traffic spikes.
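Swap that gets used eagerly would tank latency, so it's worth telling the kernel to prefer RAM and treat swap as a last resort. A common server tweak (the value of 10 is a conventional choice, not something I benchmarked):

```
# /etc/sysctl.d/99-swappiness.conf
vm.swappiness = 10
```

Apply with sysctl --system. The default of 60 is tuned for desktop workloads, where swapping out idle applications is fine; on a server hosting a latency-sensitive process, it's too aggressive.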
#PostgreSQL: Local Database
Running the database on the same machine as the application eliminates network latency for database queries. On Vercel, you'd need an external provider (Supabase, PlanetScale, Neon). Each adds latency, another service to manage, and often another bill.
PostgreSQL 16.11 runs locally. Installation on Ubuntu:
apt install postgresql postgresql-contrib
Create the application database and user:
sudo -u postgres psql
CREATE USER tcv_app WITH PASSWORD 'secure_password_here';
CREATE DATABASE tcv OWNER tcv_app;
\q
The connection string lives in /home/deploy/.db_credentials, not in the application code or environment files committed to git. The deploy script sources this file during build.
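The pattern looks like this (the file path matches the one above; the variable name and connection string are illustrative, not the exact production values):

```shell
# The credentials file holds one export per line and is readable
# only by the deploy user.
CRED_FILE="${CRED_FILE:-$HOME/.db_credentials}"

cat > "$CRED_FILE" <<'EOF'
export DATABASE_URL="postgresql://tcv_app:secure_password_here@localhost:5432/tcv"
EOF
chmod 600 "$CRED_FILE"

# deploy.sh sources the file before `npm run build`, then fails fast
# if the variable somehow didn't make it into the environment.
. "$CRED_FILE"   # POSIX spelling of bash's `source`
: "${DATABASE_URL:?DATABASE_URL is not set -- aborting deploy}"
echo "credentials loaded"
```

The `:` / `${VAR:?}` idiom is the cheapest possible guard: it aborts with a clear message instead of letting the build proceed with an empty connection string.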
>Backups
Daily automated backups via cron:
# /etc/cron.d/pg-backup
0 3 * * * deploy pg_dump -U tcv_app tcv | gzip > /home/deploy/backups/tcv-$(date +\%Y\%m\%d).sql.gz
Runs at 3 AM daily. One detail the cron line hides: cron can't prompt for a password, so pg_dump needs non-interactive authentication; a ~/.pgpass entry for tcv_app (or local peer/trust auth) is the standard way. A companion cleanup script retains 7 days of backups:
find /home/deploy/backups/ -name "*.sql.gz" -mtime +7 -delete
Is this enterprise-grade backup infrastructure? No. Is it better than having no backups at all (which is what many Vercel deployments effectively have for their database)? Yes.
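A backup you've never tested is a hope, not a backup. The cheapest check is verifying the newest archive decompresses cleanly. Demonstrated here against a throwaway directory so it's runnable anywhere; on the server, BACKUP_DIR would be /home/deploy/backups and the first two lines go away:

```shell
# Stand-in backup so the check below is self-contained.
BACKUP_DIR=$(mktemp -d)
echo 'CREATE TABLE posts (id int);' | gzip > "$BACKUP_DIR/tcv-20260101.sql.gz"

# Newest backup first, then verify the gzip stream is intact.
latest=$(ls -1t "$BACKUP_DIR"/*.sql.gz | head -n 1)
gzip -t "$latest" && echo "backup OK: ${latest##*/}"

# A full restore into a scratch database (illustrative names):
#   createdb tcv_restore_test
#   gunzip -c "$latest" | psql -U tcv_app tcv_restore_test
```

Restoring into a scratch database rather than over the live one means a bad backup is discovered without making anything worse.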
#The Deploy Script
Deployment is a single command:
ssh deploy@YOUR_VPS_IP "bash ~/deploy.sh"
The script on the server:
#!/bin/bash
set -euo pipefail
APP_DIR="$HOME/app"
echo "=== Pulling latest changes ==="
cd "$APP_DIR"
git pull origin main
echo "=== Installing dependencies ==="
npm ci --production=false
echo "=== Building ==="
npm run build
echo "=== Restarting PM2 ==="
pm2 restart tcv
echo "=== Deploy complete ==="
pm2 status
set -euo pipefail is essential. It ensures the script stops on any error. Without it, a failed npm ci would still proceed to npm run build, which would fail with a cryptic error, and pm2 restart would restart the old build. The pipefail flag catches failures in piped commands.
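The difference pipefail makes is easy to demonstrate in isolation:

```shell
# Without pipefail, a pipeline's exit status is that of its LAST command,
# so a failure upstream is silently swallowed.
false | true
echo "without pipefail: $?"

# With pipefail, any failing stage fails the whole pipeline.
set -o pipefail
false | true
echo "with pipefail: $?"
```

The first echo reports 0, the second reports 1. In the deploy script, this is what keeps a failure inside a piped command (like the pg_dump | gzip backup line) from being reported as success.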
npm ci (not npm install) is deliberate. ci does a clean install from the lockfile, ensuring the server's node_modules exactly matches what was tested locally. install can resolve differently and introduce phantom bugs.
The --production=false flag ensures dev dependencies (TypeScript, ESLint, PostCSS) are installed, because next build needs them; npm 9 and later spell the same intent --include=dev. After the build, you could prune dev dependencies with npm prune --omit=dev, but on a 40GB SSD with a single app, the disk savings aren't worth the added complexity.
>GitHub Deploy Key
The VPS clones the repository via SSH using a read-only deploy key. This key is stored at /home/deploy/.ssh/github_deploy and is scoped to the single repository. If the server is compromised, the attacker can read the repo (which is public anyway) but can't push to it or access any other repositories.
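Generating such a key looks like this (the comment string is illustrative; the public half gets pasted into the repository's Settings -> Deploy keys, leaving "Allow write access" unchecked so it stays read-only):

```shell
KEY_DIR="${KEY_DIR:-$HOME/.ssh}"
mkdir -p "$KEY_DIR" && chmod 700 "$KEY_DIR"

# Generate once; -N "" means no passphrase, since no human is at the
# keyboard when the deploy script pulls.
[ -f "$KEY_DIR/github_deploy" ] || \
  ssh-keygen -t ed25519 -N "" -C "tcv-deploy" -f "$KEY_DIR/github_deploy" -q

# ~/.ssh/config entry so git picks this key for github.com:
#   Host github.com
#     IdentityFile ~/.ssh/github_deploy
#     IdentitiesOnly yes

cat "$KEY_DIR/github_deploy.pub"   # this line goes into GitHub
```

IdentitiesOnly matters if the deploy user ever accumulates multiple keys: it stops SSH from offering every key it knows about until one happens to work.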
#Next.js Configuration for Self-Hosting
Next.js on Vercel gets automatic optimizations. On a VPS, you configure them yourself.
>Security Headers
All security headers are set in next.config.ts via the headers() function:
async headers() {
return [
{
source: '/:path*',
headers: [
{
key: 'Strict-Transport-Security',
value: 'max-age=63072000; includeSubDomains; preload'
},
{
key: 'Content-Security-Policy',
value: "default-src 'self'; upgrade-insecure-requests; ..."
},
{
key: 'X-Frame-Options',
value: 'SAMEORIGIN'
},
{
key: 'X-Content-Type-Options',
value: 'nosniff'
},
{
key: 'Permissions-Policy',
value: 'camera=(), microphone=(), geolocation=()'
}
]
}
];
}
On Vercel, some of these are set automatically or via vercel.json. Self-hosting means you own every header. The upside is complete control over the Content Security Policy. The downside is that you have to actually write it.
The CSP is restrictive by design: default-src 'self' blocks everything not explicitly allowed. Scripts, styles, images, fonts, and connections each have their own allowlist. This prevents XSS attacks even if an attacker finds an injection point. Getting the CSP right took three iterations of "deploy, check the console for blocked resources, adjust the policy."
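For reference, a policy in the same restrictive spirit (illustrative, not the exact production value, which is elided above):

```
Content-Security-Policy:
  default-src 'self';
  script-src 'self';
  style-src 'self' 'unsafe-inline';
  img-src 'self' data:;
  font-src 'self';
  connect-src 'self';
  frame-ancestors 'self';
  upgrade-insecure-requests
```

Each directive is an allowlist; anything not matched falls back to default-src and gets blocked. The frame-ancestors 'self' directive duplicates what X-Frame-Options: SAMEORIGIN does, which is deliberate: modern browsers prefer the CSP directive, older ones fall back to the header.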
>Build-Time Environment Validation
The site uses Zod to validate environment variables at build time:
// next.config.ts
import "./src/env";
The src/env.ts file defines a Zod schema for every required environment variable. If a variable is missing or malformed during next build, the build fails with a clear error message. This catches configuration drift before it reaches production.
On Vercel, environment variables live in the dashboard. On the VPS, they live in the deploy user's shell profile and the .db_credentials file. Validating them at build time ensures the deploy script doesn't produce a broken build from missing config.
#Monitoring: The Crude but Effective Approach
I run a health check cron that pings the site every 5 minutes:
*/5 * * * * deploy curl -sf --max-time 10 https://thechosenvictor.com > /dev/null || pm2 restart tcv
If the curl fails (an HTTP error status, or a connection failure or timeout), PM2 restarts the process. This isn't Datadog or PagerDuty. It's a single cron job that catches the most common failure mode (process crash or hang) and recovers automatically.
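One refinement worth considering: a single slow response shouldn't bounce the process. A small retry wrapper makes the check more patient (a sketch, not what my cron currently runs):

```shell
# Run a command up to N times, pausing briefly between attempts,
# before giving up.
retry() {
  tries=$1; shift
  n=0
  while [ "$n" -lt "$tries" ]; do
    "$@" && return 0
    n=$((n + 1))
    sleep 1
  done
  return 1
}

# Saved as e.g. ~/healthcheck.sh, the check itself becomes:
#   retry 3 curl -sf --max-time 10 https://thechosenvictor.com > /dev/null
# and cron calls the script instead of curl directly:
#   */5 * * * * deploy bash /home/deploy/healthcheck.sh || pm2 restart tcv
```

Cron can't invoke a shell function directly, hence the wrapper script; the hypothetical healthcheck.sh path is an assumption, not part of the setup described above.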
For deeper monitoring, I check PM2 logs periodically:
pm2 logs tcv --lines 100
And system resources:
htop
df -h
Could I set up Prometheus, Grafana, and alerting? Sure. For a personal site that serves a few hundred visitors a day, a health check cron and periodic log review are sufficient. Engineering discipline is knowing when not to over-engineer.
#What I Gained
Full transparency. I know exactly how every request is served. There's no black box between the user and my application. DNS resolves to Cloudflare, Cloudflare proxies to my VPS, Caddy terminates TLS and proxies to Node.js, Next.js renders the page. Every layer is visible and configurable.
Cost efficiency. $6.65/month for the entire infrastructure. No per-seat pricing. No bandwidth surprises. No feature gating. The VPS handles everything: web server, database, process management.
Learning. In six months of self-hosting, I've learned more about DNS propagation, TLS certificate chains, process management, and Linux system administration than I did in two years of deploying to Vercel. This knowledge transfers to every future project.
Performance. Counterintuitively, the VPS is faster for my use case. Vercel's Edge Network is distributed, but cold starts on serverless functions add latency. My VPS keeps the Node.js process warm permanently. Time to First Byte is consistently under 100ms for server-rendered pages.
#What I Lost
Automatic preview deployments. Vercel creates a unique URL for every pull request. On the VPS, I preview locally. This is fine for a solo project. It would be a problem for a team.
Global CDN. Vercel serves assets from the nearest edge node worldwide. My VPS is in Ashburn, Virginia. Cloudflare's CDN proxying helps with static assets, but server-rendered pages come from a single origin. For a site with primarily US-based traffic, this is acceptable. For a global audience, it wouldn't be.
Zero-config deploys. git push to deploy is addictive. My current workflow requires an SSH command after pushing. It's one extra step, but it's a step I didn't have before.
Managed scaling. If this site suddenly got 100,000 concurrent visitors, Vercel would handle it. My 2-vCPU VPS wouldn't. This is a tradeoff I'm comfortable with because the probability of that traffic spike is effectively zero for a personal portfolio.
#The Decision Framework
Self-hosting isn't for everyone. Here's how I think about the decision:
Self-host if:
- You want to understand your infrastructure deeply
- Cost matters and your traffic is predictable
- You need a colocated database without external service dependencies
- You want unrestricted control over your server configuration
- You're building a personal project or small team product
Stay on Vercel (or similar) if:
- You need preview deployments for team collaboration
- Your traffic is global and unpredictable
- You want zero operational overhead
- You don't want to be on-call for your infrastructure
- Your time is better spent on product than on server administration
There's no universal correct answer. For me, at this stage, with this project, self-hosting was the right call. If this were a startup with a team of five and paying customers, I would probably still be on Vercel.
#The Commands I Run Most Often
For reference, the commands that make up my daily operational workflow:
# Deploy
ssh deploy@YOUR_VPS_IP "bash ~/deploy.sh"
# Check status
ssh deploy@YOUR_VPS_IP "pm2 status"
# Tail logs
ssh deploy@YOUR_VPS_IP "pm2 logs tcv --lines 50"
# Restart (without deploy)
ssh deploy@YOUR_VPS_IP "pm2 restart tcv"
# Check disk space
ssh deploy@YOUR_VPS_IP "df -h"
# Check memory
ssh deploy@YOUR_VPS_IP "free -h"
Six commands. That's the entire operational surface of self-hosting a production Next.js site.
The server has been running for three months with no unplanned downtime. The health check cron has caught two process hangs and auto-recovered both. Total time spent on infrastructure maintenance: maybe two hours per month, and most of that is reviewing logs out of curiosity rather than necessity.
For $6.65/month and a few hours of initial setup, I've got a production environment I fully understand, fully control, and can fully explain to anyone who asks.
That's the trade. It's a good one.
Last updated: March 5, 2026