In this post, I am going to document how I deployed a Node.js application on a DigitalOcean VPS with Nginx acting as a reverse proxy.

By the end, we will be able to access the site from a custom domain using HTTPS.

Before starting

What I have is a simple Node.js application that shows a welcome message when someone visits the home page. For all other pages, the application returns a 404 status code with a message saying that the page doesn’t exist.
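
For reference, here is a minimal sketch of what such an app could look like (the actual repository may differ; the messages are illustrative, and the port matches the reverse proxy setup later in this post):

// app.js — a minimal demo server
const http = require('http');

const server = http.createServer((req, res) => {
    if (req.url === '/') {
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end('Welcome!');
    } else {
        // Any other path gets a 404 with a short message
        res.writeHead(404, { 'Content-Type': 'text/plain' });
        res.end('Sorry, this page does not exist.');
    }
});

server.listen(8080, () => console.log('Listening on port 8080'));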

Create a server

To deploy our Node.js application, we need a fresh Ubuntu server. You can get one from any cloud service provider like DigitalOcean, Linode, AWS, etc. Since it is a straightforward process, I won’t go into the details here.

Pointing a Domain Name

The next step is to point a domain name to the IP address of the server. For this example, I am going to use a subdomain.

You might need to log in to your domain registrar’s DNS settings section to add the relevant A records. DNS edits can take a couple of hours to take effect. If you are using Cloudflare, things can be quicker.
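
For example, an A record for the subdomain used in this post could look something like this (the IP address below is a placeholder; use your server’s actual address):

nodejs-demo.codingreflections.com.    300    IN    A    203.0.113.10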

Initial Server Configuration

After launching the server and pointing the domain, make sure you can connect to it via the terminal using SSH. Optionally, you can do the following things:

  • Create a non-root user with sudo privileges
  • Set up key-based authentication
  • Disable password authentication
  • Disable root login
  • Add the host credentials to the SSH config file for easier access (see the example below).

Linux and macOS support SSH from the terminal out of the box. If you are using Windows, I would suggest using Git Bash instead of the Command Prompt.
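
For instance, an entry in your local ~/.ssh/config could look like this (the host alias, IP address, username, and key path are all placeholders):

Host nodejs-demo
    HostName 203.0.113.10
    User abhinav
    IdentityFile ~/.ssh/id_ed25519

With that in place, running ssh nodejs-demo is enough to connect.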

Installing Node & NPM

Ubuntu’s default package manager, APT, does provide Node and NPM packages. However, the versions available in the repository are often much older than the current stable release. As I am writing this post, Node.js is at v18, while the official Ubuntu repository still provides v10 or so.

For that reason, we will install Node.js from NodeSource, which offers the current stable version.

The below command fetches the setup script from the remote source, then executes it with root privileges. The pipe symbol in the middle channels the output of the curl command to the bash command without writing it to a file.

curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -

The above command adds the new repository to the sources list. After that, you can run the apt-get command to install Node.js.

sudo apt-get install -y nodejs

Finally, check the versions to make sure you’ve successfully installed both Node.js and NPM on your system.

node -v; npm -v

You should see something like this (the exact versions will differ):
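
v18.12.1
8.19.2

(Here, 8.19.2 is the npm version bundled with Node 18.12.1.)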

Installing Git

Our application’s code resides in a GitHub repository, so we need Git installed on our machine to pull it.

Git will most likely be available already on Ubuntu machines. Otherwise, you can install it from the APT repository:

sudo apt install git

Verify the installation:

git --version

Cloning the repository

GitHub provides two URLs for a repository: one is the HTTPS URL, while the other is the SSH URL that starts with git@github.com.

The SSH URL requires setting up authentication before we can pull any repository, so we’ll use the HTTPS one.

Don’t forget to replace the repo URL below with your own.

git clone https://github.com/iabhinavr/nodejs-welcome

The clone command downloads the remote repo to our VPS. Later, we can bring in updates like this:

git fetch --all
git reset --hard origin/master

Note: simply running the git pull origin master command may not work if there are any modified files on the server. That’s why we’re using the reset command, which discards local changes and makes the working tree match origin/master exactly.

Installing the NPM packages

Now we have the code downloaded to our machine, but the application is still not complete, because we usually don’t commit the node_modules folder to a GitHub repository. Without the dependencies, our application won’t work.

If you check the files in the repository, you can see a file called .gitignore, which contains the list of files or folders to exclude from commits.

Since the packages inside the node_modules folder are just dependencies, there’s no need to commit them to the remote repo. Instead, Node.js projects use package.json and package-lock.json files, which contain the list of dependencies our application needs.
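
For instance, the relevant part of a package.json could look something like this (the module and version here are purely illustrative):

{
    "dependencies": {
        "express": "^4.18.2"
    }
}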

The npm install command uses these JSON files to recreate the node_modules directory tree.

cd nodejs-welcome
npm install

Now our application should be successfully installed.

Note: In some cases, you may need to install additional system dependencies for the application to work. For instance, one of my applications uses the Puppeteer module, which needs the libx11-xcb1 package to run. It was already installed on my local machine, so the application worked flawlessly on localhost. But when I installed it on the VPS, I got errors because these additional packages were not available. So I had to install all the dependencies mentioned in this troubleshooting guide to solve the issue.

After installing new packages or making other changes on the server (not code edits), don’t forget to restart PM2, which we’ll set up in the next section:

pm2 restart app --watch

The --watch option tells PM2 to watch for file changes and reload automatically.

Starting the Application using PM2

On the local machine, we were using Nodemon to enable live reloading for Node.js applications. It automatically watches for file changes and reloads the Node.js server.

But in production, PM2 is the most widely recommended process manager for Node.js servers. You can install it globally from the NPM repository.

Then start the process using the start command.

sudo npm install pm2 -g
pm2 start app.js
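
You can verify that the process is up with:

pm2 status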

You may need PM2 to start automatically on system restarts. For that, run the command:

pm2 startup

Then you will get a command that you need to copy-paste and run (replace /home/abhinav with your home directory):

sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u abhinav --hp /home/abhinav
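
After that, save the current process list so PM2 can resurrect it on boot, a step that is easy to miss:

pm2 save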

Now, our Node.js app should be successfully running on the port you set in the http.createServer() or app.listen() function. In my case, it is port 8080. So the app should be available at http://serveripaddress:8080 (unless you limit it to localhost).
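
You can test this from the server itself:

curl http://localhost:8080

This should print the app’s welcome message.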

But we don’t want to make the Node server available directly to the public. Instead, we are going to place Nginx in front of it for serving user requests.

Installing Nginx

Nginx is also available in the APT repository, so you can install it right away:

sudo apt install nginx -y

Nginx comes with a welcome page, which is enabled by default. You can view it by going to your server’s IP address in your browser.

Similar to virtual hosts in Apache, Nginx uses server blocks to define hosts. The above welcome page is defined in the default server block located at /etc/nginx/sites-available/default, while the HTML file itself lives at /var/www/html.

At the bottom of the default server block file, you can also find some commented code that shows you how to define additional server blocks (or virtual hosts). We are going to use that for our Node.js application.

Creating a Server Block for the Reverse Proxy

Open the default server block file in the nano editor:

sudo nano /etc/nginx/sites-available/default

At the top of the file, you can find the default server block. Below that, add the following to define our reverse proxy:

server {
        listen 80;
        listen [::]:80;

        server_name nodejs-demo.codingreflections.com;

        location / {
                proxy_pass http://localhost:8080;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection 'upgrade';
                proxy_set_header Host $host;
                proxy_cache_bypass $http_upgrade;
        }
}

There are mainly three things you should know about the above code:

  • listen: Port 80 is the default HTTP port. So ‘listen 80’ asks Nginx to watch port 80 for any requests.
  • server_name: The name of the server. Enter the domain name here.
  • location: The location directive sets the request URI to match. The slash (/) matches all requests to the domain name (set in the server_name directive). Inside the location block, we use the proxy_pass directive to redirect the requests internally to the Node.js server running on port 8080. It is accessible at the URL – http://localhost:8080.

In addition to that, we also set a few more options. One of them is proxy_http_version, which is set to 1.1; the available options are 1.0 and 1.1. Since this is an internal connection, HTTP/2 doesn’t make much sense here, and Nginx doesn’t support it for proxied connections anyway.

However, we will be enabling http/2 for incoming client requests after installing SSL and enabling HTTPS.

After making the changes, press Ctrl+O to save and Ctrl+X to exit the nano editor.

Finally, test the Nginx configuration and restart it.

sudo nginx -t
sudo systemctl restart nginx
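
If the configuration is valid, the test should print something like:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful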

Now, our API is available at http://nodejs-demo.codingreflections.com.

Installing an SSL Certificate using Certbot

Next, we need to secure our API using HTTPS. For that, we will install a Let’s Encrypt SSL certificate on our server for the domain name specified in the server_name directive.

Certbot is a piece of software that makes managing Let’s Encrypt certificates easier. We want to install the certbot package as well as its Nginx plugin.

sudo apt install certbot
sudo apt install python3-certbot-nginx
certbot --version

Then we can generate the certificate by running the certbot command with the --nginx option.

sudo certbot --nginx -d nodejs-demo.codingreflections.com

This will automatically edit our server block to add the additional directives for HTTPS and SSL. You can add the ‘http2’ option to the ‘listen’ directives to enable HTTP/2; the default is HTTP/1.1.
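
If you add it, the SSL listen directives could then look like this (assuming an Nginx version that still uses the listen-level http2 flag; newer releases use a separate ‘http2 on;’ directive instead):

listen [::]:443 ssl http2;
listen 443 ssl http2;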

Certbot can take care of certificate renewal as well. You can verify it by performing a dry run:

sudo certbot renew --dry-run

Note: Before Certbot can successfully issue the certificates, you need to point the domain name to the server’s IP address. If you don’t want to do that, there are also plugins available, such as certbot-dns-cloudflare, which allows domain verification using API tokens.

Now our web page should be accessible using HTTPS: https://nodejs-demo.codingreflections.com

Configuring UFW Firewall

Finally, we can tighten the security further by enabling the built-in UFW firewall. It is inactive by default. You can check its status with:

sudo ufw status
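
If it hasn’t been enabled yet, this simply prints:

Status: inactive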

By default, UFW blocks all incoming requests when enabled. So you need to open the required ports one by one by specifying the port and the protocol.

A couple of ready-made profiles are also available. For instance, Nginx automatically registers the Nginx Full, Nginx HTTP, and Nginx HTTPS profiles when we install the web server. OpenSSH is another one, which is required to connect to the server using SSH. You can list them with:

sudo ufw app list
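
On a setup like this one, the output should look something like:

Available applications:
  Nginx Full
  Nginx HTTP
  Nginx HTTPS
  OpenSSH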

We will be enabling OpenSSH (port 22), Nginx Full (ports 80 and 443), and port 8080 (where our Node.js server is listening).

sudo ufw allow 'OpenSSH'
sudo ufw allow 'Nginx Full'
sudo ufw allow 8080/tcp

By the way, UFW blocks only external requests by default, and Nginx reaches port 8080 from the same machine via localhost. So the last rule is not necessary unless you want to open port 8080 for public access.
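
In fact, if you don’t want the app exposed at all, you can bind it to the loopback interface in the application code (a one-line change, assuming the plain http module as in the sketch earlier):

server.listen(8080, '127.0.0.1');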

Finally, enable the firewall:

sudo ufw enable

Important: Don’t forget to allow OpenSSH before enabling UFW. Otherwise, you will be locked out of the server.