Setting Up a Production-Ready VPS Infrastructure from Scratch

October 15, 2024

Learn how to set up a production-ready VPS infrastructure from scratch, including choosing the right VPS provider, configuring the server, securing the server, and deploying your applications.

Recently, I have been working on my brand-new startup project. As a founding engineer, I was responsible for setting up the production infrastructure from scratch. Fortunately, deploying applications to the cloud is pretty easy these days: with modern platforms like Railway, Vercel, and Netlify, you can deploy with a single click. However, the business model behind these serverless platforms can cost you a fortune if you have a high-traffic application.

Thus, a VPS (Virtual Private Server) is still a good choice for deploying your applications: you get consistent billing and better control over your infrastructure. In this post, I will guide you through setting up a production-ready VPS infrastructure from scratch, without using any Infrastructure as Code (IaC) tools like Terraform or Ansible, or an open-source PaaS like Coolify.

What are the requirements for "Production-Ready"?

Before we dive into the details, let's define what "Production-Ready" means. Here are some of the key requirements for a production-ready VPS infrastructure:

  • Domain Name
  • App Running
  • TLS + HTTPS + Auto-Renewal
  • OpenSSH Hardening
  • Firewall
  • Load Balancer + High Availability
  • Automatic Deployment
  • Monitoring

Choosing the Right VPS Provider

Today, we will be using Hostinger as our VPS provider. However, you can choose any provider you like; they all offer similar features and pricing.

Configuring the Server

The first step is to create a new VPS instance on Hostinger. Once the instance is created, you will be prompted to select the operating system. We will be using Ubuntu 24.04 LTS (Long Term Support) for this tutorial.

Then, you will need to create a root password and add an SSH key for secure access to the server. You can generate an SSH key with the following command:

# Local Machine
ssh-keygen -t rsa -b 4096 -C "your_email@example.com"

Then, navigate to the ~/.ssh directory and copy the public key to the clipboard:

# Local Machine
cat ~/.ssh/id_rsa.pub

Next, paste the public key into the Hostinger dashboard by navigating to the Add SSH Key section.

After that, the VPS instance will be created, and you will be provided with an IP address to access the server.

Login to the Server

To login to the server, use the following command:

# Local Machine
ssh root@your_server_ip

Now, you are logged in to the server as the root user. However, it is not recommended to use the root user for day-to-day operations. Thus, we will create a new user with sudo privileges.

# Server
adduser your_username # You can skip the prompts by pressing Enter
usermod -aG sudo your_username

Let's test it out by switching to the new user:

# Server
su - your_username

Run the following command to verify that the new user has sudo privileges:

# Server
sudo whoami

Domain Name

The next step is to configure the domain name for your application. You can purchase one from any domain registrar, such as Namecheap or GoDaddy.

Once the purchase goes through, you will need to point the DNS settings at your VPS IP address. First, delete any existing A and CNAME records. Then, add a new A record with the following settings:

  • Type: A
  • Name: @
  • Points to: Your VPS IP address
  • TTL: 1 Hour

By the way, you can check your server's IP address by running the following command:

# Server
ip addr

Now, wait for the DNS settings to propagate. You can check the status by running the following command:

# Local Machine
dig your_domain_name # or
nslookup your_domain_name
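Propagation can take anywhere from minutes to hours. As a small illustration, here is a hypothetical helper that compares a resolved A record against the expected address (the function name and the documentation IPs below are made up for the example):

```shell
#!/bin/sh
# check_match: compare a resolved A record with the expected server IP.
check_match() {
  # $1: resolved IP, $2: expected IP
  if [ "$1" = "$2" ]; then
    echo "DNS propagated"
  else
    echo "still propagating"
  fi
}

# In practice, feed it live data, e.g.:
#   check_match "$(dig +short your_domain_name A | head -n1)" "your_server_ip"
check_match "203.0.113.10" "203.0.113.10" # prints "DNS propagated"
```

Re-run it every few minutes until the record matches.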

Remove password authentication

Because many automated bots are constantly attempting SSH brute-force attacks against public servers, it is recommended to disable password authentication and use SSH keys exclusively.

Before we disable password authentication, let's ensure our non-root user we created earlier can access the server using SSH keys. First, copy the SSH key from your local machine to the server:

# Local Machine
ssh-copy-id your_username@your_server_ip # Enter your password when prompted

Let's test if everything is working as expected:

# Local Machine
ssh your_username@your_server_ip # You should be able to login without entering the password

Now, let's disable password authentication by editing the SSH configuration file:

# Server
sudo vim /etc/ssh/sshd_config

Find the following lines and set them as shown:

PasswordAuthentication no
PermitRootLogin no
UsePAM no

Also, on Hostinger, there is another file, /etc/ssh/sshd_config.d/50-cloud-init.conf, that re-enables password authentication. You can delete it directly:

# Server
sudo rm /etc/ssh/sshd_config.d/50-cloud-init.conf

Finally, validate the configuration and reload the SSH service to apply the changes:

# Server
sudo sshd -t # check for syntax errors before reloading
sudo systemctl reload ssh

We should now be able to login to the server using SSH keys only.

# Local Machine
ssh root@domain_name # Disabled
ssh your_username@domain_name # Enabled
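To save some typing, you can also add a host alias to ~/.ssh/config on your local machine. The alias name ("myvps") and key path below are placeholders; adjust them to your setup:

```
# ~/.ssh/config (Local Machine)
Host myvps
    HostName your_domain_name
    User your_username
    IdentityFile ~/.ssh/id_rsa
```

Now `ssh myvps` connects directly as your non-root user.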

Running the App

Now that we have configured the server and domain name, it's time to deploy our application with Docker. For this tutorial, we will be deploying a Next.js application with PostgreSQL as the database.

Follow this video for more details: https://www.youtube.com/watch?v=sIVL4JMqRfc

Install Docker

First, install Docker on the server by running the following commands:

# Server
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

Then, install Docker:

# Server
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Verify the installation
sudo systemctl enable docker
sudo usermod -aG docker your_username # log out and back in for the group change to take effect
docker ps # You should see a list of running containers, which might be empty now

Install Node.js

Next, install Node.js on the server:

# Server
# installs nvm (Node Version Manager)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.0/install.sh | bash

# download and install Node.js (you may need to restart the terminal)
nvm install 20

# verifies the right Node.js version is in the environment
node -v # should print `v20.18.0`

# verifies the right npm version is in the environment
npm -v # should print `10.8.2`

Now, let's deploy the Next.js application with Docker, using the with-docker example from the official Next.js documentation.

# Server
npx create-next-app --example with-docker nextjs-docker
cd nextjs-docker
docker build -t nextjs-docker .
docker run -p 3000:3000 nextjs-docker

Now, you should be able to access the Next.js application by navigating to http://your_domain_name:3000.
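Since the stack also includes PostgreSQL, it can be handy to run both containers together with Docker Compose. Here is a minimal sketch, not a production-tuned setup; the service names, credentials, and volume name are assumptions you should adapt:

```yaml
# docker-compose.yml -- minimal sketch; change the credentials before using
services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://app:change_me@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: change_me
      POSTGRES_DB: app
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```

Start both services with `docker compose up -d`; the named volume keeps the database data across container restarts.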


Firewall

The next step is to configure the firewall to restrict access to the server. We will be using UFW (Uncomplicated Firewall) for this tutorial.

As we are using Ubuntu, UFW is already installed by default. You can check its status by running the following command:

# Server
sudo ufw status

What we want to do is to allow incoming traffic only on ports 22 (SSH), 80 (HTTP), and 443 (HTTPS), and allow all outgoing traffic.

# Server
sudo ufw default deny incoming # Deny all incoming traffic by default
sudo ufw default allow outgoing # Allow all outgoing traffic by default

❗Now, there is a very important step that you should not miss. You should allow incoming traffic on port 22 (SSH) before enabling the firewall. Otherwise, you will be locked out of the server.

# Server
sudo ufw allow OpenSSH # Allow incoming traffic on port 22
sudo ufw enable # Enable the firewall
sudo ufw status # Verify the status

Now, you should not be able to access the server on any port other than 22. Try visiting http://your_domain_name:3000 in your browser, and you should see a connection error.

Wait, what? We can still access the server on port 3000? That's because Docker manipulates iptables directly, and its rules take precedence over the ones UFW manages. Unfortunately, this is a well-known issue, and there is no easy fix for it.

You can read more about it by simply Googling "Docker UFW issue".

Thus, instead of exposing port 3000 to the public, we will put a reverse proxy in front and route traffic to the Next.js application through it.

Reverse Proxy

A reverse proxy is a server that sits between the client and the application server and forwards client requests to the application server. We will be using Nginx as the reverse proxy for this tutorial.

First, install Nginx on the server:

# Server
sudo apt update
sudo apt install nginx
sudo systemctl start nginx
sudo systemctl enable nginx

Then, adjust the firewall settings:

# Server
sudo ufw app list # List the available applications
sudo ufw allow 'Nginx Full' # Allow incoming traffic on ports 80 and 443
sudo ufw status # Verify the status

Set up the reverse proxy by creating a new configuration file:

sudo vim /etc/nginx/sites-available/nextjs-docker

Add the following configuration to the file:

server {
    listen 80;
    server_name your_domain_name;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Then, create a symbolic link to the sites-enabled directory:

sudo ln -s /etc/nginx/sites-available/nextjs-docker /etc/nginx/sites-enabled/

Finally, restart Nginx to apply the changes:

sudo nginx -t # test the configuration
sudo systemctl reload nginx

Let's further secure the server by enabling TLS and HTTPS.

TLS + HTTPS + Auto-Renewal

TLS (Transport Layer Security) is a cryptographic protocol that provides secure communication over a computer network. HTTPS (Hypertext Transfer Protocol Secure) is an extension of HTTP that uses TLS to encrypt the data transferred between the client and the server.

We will be using Let's Encrypt to generate a free TLS certificate for our domain name.

First, install Certbot on the server:

# Server
sudo apt install certbot python3-certbot-nginx

Then, generate the TLS certificate:

# Server
sudo certbot --nginx -d your_domain_name

Follow the prompts to generate the certificate. Once the certificate is generated, Certbot will automatically configure Nginx to use the certificate. You can test it by visiting https://your_domain_name in your browser.

Finally, verify auto-renewal (Certbot installs a systemd timer that renews certificates automatically, so a dry run is enough to confirm it works):

# Server
sudo certbot renew --dry-run

Now, let's stop the previous container and run the app again, this time binding it to localhost only:

# Server
docker ps # find the container ID of the running app
docker stop <container_id>
docker run -d -p 127.0.0.1:3000:3000 nextjs-docker

To put it all together, here is our final setup:

  • Nginx will handle incoming traffic on ports 80 and 443 and route the traffic to the Next.js application running on localhost:3000.
  • UFW will restrict access to the server to only ports 22, 80, and 443.
  • TLS certificates are configured for secure HTTPS access.

Load Balancer + High Availability

The next step is to set up a load balancer for high availability. A load balancer distributes incoming traffic across multiple instances so that no single one is overwhelmed. We will be using Kubernetes for this tutorial. Keep in mind that true high availability requires more than one server; on a single VPS, Kubernetes mainly buys you multiple app replicas, self-healing, and rolling deployments.

First, install the Kubernetes tools on the server. Note that the legacy apt.kubernetes.io repository has been deprecated, so we use the community-owned pkgs.k8s.io repository instead:

# Server
sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt install -y kubeadm kubelet kubectl

Then, initialize the Kubernetes cluster (the pod network CIDR below is Calico's default):

# Server
sudo kubeadm init --pod-network-cidr=192.168.0.0/16

Once the cluster is initialized, kubeadm prints instructions for setting up kubectl access, along with a kubeadm join command for adding other nodes later. Since we only have one node for now, also remove the control-plane taint so workloads can be scheduled on it:

# Server
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl taint nodes --all node-role.kubernetes.io/control-plane-

Next, install a CNI (Container Network Interface) plugin to enable networking between pods:

# Server
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Finally, deploy the Next.js application to the Kubernetes cluster. Note that the cluster cannot see images that only exist in the local Docker daemon, so push the image to a registry first (here assumed to be Docker Hub under your_username):

# Server
kubectl create deployment nextjs-docker --image=your_username/nextjs-docker
kubectl expose deployment nextjs-docker --type=LoadBalancer --port=80 --target-port=3000

One caveat: on a bare VPS there is no cloud controller to provision a load balancer, so the service's external IP will stay pending unless you install something like MetalLB. A common workaround is to expose the service as a NodePort and keep routing public traffic through Nginx.

Now, you should be able to access the Next.js application by navigating to http://your_domain_name.
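The imperative kubectl commands above can also be written as a declarative manifest, which is easier to version-control and tweak. Here is a minimal sketch; the names, labels, and image path are assumptions matching the commands above:

```yaml
# nextjs-docker.yaml -- hypothetical manifest; adjust the image to your registry
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextjs-docker
spec:
  replicas: 2                # two pods for basic redundancy
  selector:
    matchLabels:
      app: nextjs-docker
  template:
    metadata:
      labels:
        app: nextjs-docker
    spec:
      containers:
        - name: nextjs-docker
          image: your_username/nextjs-docker
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: nextjs-docker
spec:
  selector:
    app: nextjs-docker
  ports:
    - port: 80
      targetPort: 3000
```

Apply it with `kubectl apply -f nextjs-docker.yaml`; bumping `replicas` is then a one-line change.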

Automatic Deployment

The next step is to set up automatic deployment for the Next.js application. We will be using GitHub Actions for this tutorial.

First, create a new GitHub repository for the Next.js application. Then, create a new workflow file in the .github/workflows directory:

# .github/workflows/deploy.yml
name: Deploy

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Build and push Docker image
        run: |
          docker build -t nextjs-docker .
          echo ${{ secrets.DOCKER_PASSWORD }} | docker login -u ${{ secrets.DOCKER_USERNAME }} --password-stdin
          docker tag nextjs-docker ${{ secrets.DOCKER_USERNAME }}/nextjs-docker
          docker push ${{ secrets.DOCKER_USERNAME }}/nextjs-docker

      - name: Deploy to Kubernetes
        run: |
          kubectl set image deployment/nextjs-docker nextjs-docker=${{ secrets.DOCKER_USERNAME }}/nextjs-docker

One caveat: the Deploy step assumes the GitHub Actions runner can reach your cluster, for example via a kubeconfig stored as a repository secret. Then, create the secrets for the Docker username and password:

# Local Machine
gh secret set DOCKER_USERNAME
gh secret set DOCKER_PASSWORD

Finally, push the changes to the GitHub repository to trigger the deployment workflow.

Monitoring

The final step is to set up monitoring for the VPS infrastructure. We will be using Prometheus and Grafana for this tutorial.

First, install the Prometheus Operator on the cluster. The operator bundle already includes all of its CRDs. Note that Grafana is not part of it; you can deploy Grafana separately, or use the kube-prometheus project, which bundles the whole monitoring stack:

# Server
kubectl create -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/main/bundle.yaml # use create rather than apply: the CRDs are too large for client-side apply

Then, access the Prometheus and Grafana dashboards, which listen on ports 9090 and 3000 by default. Since our firewall only allows ports 22, 80, and 443, reach them through kubectl port-forward or a dedicated Nginx location block rather than exposing the ports publicly.

That's it! You have successfully set up a production-ready VPS infrastructure from scratch. You can now deploy your applications with confidence knowing that your infrastructure is secure, scalable, and monitored.

I hope you found this post helpful. If you have any questions or feedback, please feel free to leave a comment below. Happy coding!

What do you think? Let me know by sending me an email at b10705052@ntu.edu.tw.