nginx CentOS 8 quickstart

About nginx

NGINX is an alternative to Apache (apache2 / httpd).
It is generally faster and less complex, and it is widely used in production environments.
Commonly used features are: reverse proxying, SSL/TLS termination, load balancing and caching (static files, FastCGI, uWSGI).
By default it creates one worker process per CPU core, and each worker can easily handle around 10,000 connections.
With a 2-core CPU you have 3 processes: 1 master and 2 workers. With HTTPD (Apache2) the same setup can end up with ~220 processes / worker threads.
NGINX uses far less RAM and far fewer processes than apache2.
Warning: NGINX has no equivalent of .htaccess files (and is faster partly because of that), so you have to deny access to folders in the nginx config instead of an .htaccess file.
It is also considerably harder to install third-party modules into nginx than into apache2.

Read more: https://en.wikipedia.org/wiki/Nginx
Official site: https://www.nginx.com/company/ (official partners include IBM, Microsoft, Red Hat, AWS and more)

Installation

Install nginx via sudo yum install nginx
You may have to start and enable nginx manually via sudo systemctl enable nginx --now
The default configuration is at /etc/nginx/nginx.conf
The default web pages are located in /usr/share/nginx/html
If you can't connect, check your firewall; it may be blocking HTTP traffic on port 80.
To enable the HTTP and HTTPS via firewalld run the following:
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload
You should now be able to reach your website from your browser.

Config & Commands

Configurations for domains or subdomains are stored in /etc/nginx/conf.d/ (on Debian in /etc/nginx/sites-available). On Debian these files are then linked into the /etc/nginx/sites-enabled directory via the ln -s command.
To create your first file for your domain (we use the example.com domain here) simply create the file and paste your config in there: sudo nano /etc/nginx/conf.d/example.com.conf (on Debian: sudo nano /etc/nginx/sites-available/example.com)
Warning: On CentOS the config files have to end with .conf, or else they won't load.
Content of the file:

server {
    listen 80;
    listen [::]:80;
    server_name example.com;
    root /var/www/example.com;
    index index.html;
    location / {
        try_files $uri $uri/ =404;
    }
}
Please be careful with the root /var/www/example.com line of the config.
This folder has to exist! This folder is your web folder, so everything in there will be publicly available via the server_name example.com.
For Debian users: To "activate" the config you have to symlink the file to the sites-enabled folder. To do this simply use the following command: sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/example.com
To test your config file you can use the nginx test-config command: sudo nginx -t
If there are no problems you can reload the config via sudo nginx -s reload

HTTPS

Having an HTTPS web server is standard nowadays and easy to set up as well.
Install certbot to get SSL certificates for HTTPS: sudo yum install certbot python3-certbot-nginx
To create the certificates (in our example for example.com and test.example.com) just run:
sudo certbot --nginx
For this command to work you need a config file with your server_name already in the conf.d/ directory, because certbot reads all server_name entries and obtains HTTPS certificates for them automatically. You will be asked a few questions; fill them out correctly. If certbot asks whether you want an automatic redirect from HTTP to HTTPS, say Yes (the second option) so that all traffic runs over HTTPS only.
On the newest certbot versions this redirect is configured automatically.
Afterwards check your firewall if you are allowing https traffic.
Then restart nginx with sudo nginx -s reload and you are good to go.
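For reference, after a successful run certbot typically rewrites your server block to something like the following sketch (the certificate paths under /etc/letsencrypt/live/ are the usual defaults, but check your own config):

```nginx
server {
    server_name example.com;
    root /var/www/example.com;
    index index.html;

    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
}

# HTTP-to-HTTPS redirect added by certbot
server {
    listen 80;
    listen [::]:80;
    server_name example.com;
    return 301 https://$host$request_uri; # managed by Certbot
}
```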

Reverse-Proxy

With nginx you can redirect URLs on your server to internal services on different ports / sockets.
This makes it easier to handle HTTPS (SSL) certificates for your services, because nginx can handle the HTTPS connections and internally redirect them to your applications in plain HTTP.
This way you don't have to install SSL modules into your applications because nginx will handle that.

For reverse-proxy to work under CentOS you have to allow it in SELinux. Execute the following command for it to work:

sudo setsebool -P httpd_can_network_connect 1

Now you can, for example, run a NodeJS HTTP server that listens on port 3000. To make this web service available through your website at the example.com/myapp/ URL, use the following location block:
location /myapp/ {
    proxy_pass http://localhost:3000/;
}
Now people can access your NodeJS app via example.com/myapp/ instead of example.com:3000/.
Everything after /myapp/ is forwarded to your NodeJS backend app. Example: example.com/myapp/user/me will be "proxied" to localhost:3000/user/me.
This means you can now close port 3000 with your firewall and let all traffic go through your nginx.
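The trailing slash in the proxy_pass line matters: with it, nginx replaces the matched /myapp/ prefix with /; without it, the full original URI is passed to the backend. A quick comparison:

```nginx
# with trailing slash: /myapp/user/me  ->  http://localhost:3000/user/me
location /myapp/ {
    proxy_pass http://localhost:3000/;
}

# without trailing slash: /myapp/user/me  ->  http://localhost:3000/myapp/user/me
location /myapp/ {
    proxy_pass http://localhost:3000;
}
```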

This location block has to be added inside the server { block that listens on port 443.


IP Whitelisting
To only allow specific IPs you can use the following allow/deny commands in your location block:
location /myapp/ {
    allow 1.1.1.1;
    deny all;
    proxy_pass http://localhost:3000/;
}
This would allow the IP 1.1.1.1 to access this location, while all others get an HTTP 403 error instead.
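allow and deny also accept CIDR ranges, so you can whitelist a whole subnet (the addresses below are just examples):

```nginx
location /myapp/ {
    allow 192.168.1.0/24;   # a whole internal subnet
    allow 1.1.1.1;          # a single address
    deny all;               # everyone else gets 403
    proxy_pass http://localhost:3000/;
}
```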

Real-IP of request
By default the backend application only sees localhost addresses (127.0.0.1 and ::1) as the client, because the requests originate locally from nginx.
To fix this you can set the following headers in your location block, which pass the client's real IP on to the backend:
location /myapp/ {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://localhost:3000/;
}
Now your backend application will see the real IP of the requestor and not the localhost IP of the nginx server.

Socket.io
If you want to use socket.io with a nginx reverse-proxy you have to add the following location block:
location ~* \.io {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy false;

    proxy_pass http://localhost:3001;
    proxy_redirect off;

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
Don't forget to change the port in the proxy_pass line.
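A common refinement from the nginx WebSocket proxying docs is to only send Connection: upgrade when the client actually requested an upgrade, using a map in the http block:

```nginx
# in the http { } block of /etc/nginx/nginx.conf
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
```

With this in place, the proxy_set_header Connection "upgrade"; line in the location block can be changed to proxy_set_header Connection $connection_upgrade;.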

Caching responses

NGINX allows you to cache responses from your proxied services.
This means you don't have to implement caching into your backend webservers.
The nginx doc website for caching: https://docs.nginx.com/nginx/admin-guide/content-cache/content-caching/
The nginx site for the proxy cache commands used: https://nginx.org/en/docs/http/ngx_http_proxy_module.html

First put the proxy_cache_path directive into your http config block, which is located in the main config file /etc/nginx/nginx.conf:

http {
    # ...
    
    proxy_cache_path /opt/nginx/cache keys_zone=mycache:10m;
    
    # ...
}
The above config creates a cache named "mycache", which will be stored in the folder "/opt/nginx/cache". Note that 10m is the size of the shared-memory zone holding the cache keys (roughly 8,000 keys per megabyte), not the size of the cached data; to limit the data on disk, add the max_size parameter.
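To actually bound the disk usage, proxy_cache_path can be extended with additional parameters (the values below are illustrative, not recommendations):

```nginx
http {
    # 10m of shared memory for keys, at most 1g of cached data on disk;
    # entries not accessed for 60 minutes are evicted
    proxy_cache_path /opt/nginx/cache levels=1:2 keys_zone=mycache:10m
                     max_size=1g inactive=60m use_temp_path=off;
}
```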

Next you can enable this cache in your server blocks with the proxy_cache directive. Then use the proxy_cache_key and proxy_cache_valid directives to define the cache key and how long responses with a given status code should be cached.
Example:
# MyService
proxy_cache mycache;
location /myservice/ {
    # ...
    proxy_cache_key "$scheme$proxy_host$request_uri";
    proxy_cache_valid 200 302 5m;
    proxy_cache_valid 404 1m;
    proxy_cache_valid any 10s;
    add_header X-Cache-Status $upstream_cache_status;
    proxy_pass http://localhost:3000/;
}
The above example caches the requests to /myservice/, keyed by scheme, host and request URI. It caches 200 and 302 HTTP responses for 5 minutes, 404s for 1 minute, and everything else for 10 seconds. Clients (browsers) can see whether a response came from the cache by inspecting the "X-Cache-Status" header in the response (HIT, MISS, etc.).

By default only responses to GET and HEAD requests are cached. To cache additional methods you could use e.g.:
proxy_cache_methods GET HEAD POST;
To control the cache time from inside your backend, set the "X-Accel-Expires" response header to the number of seconds this specific response should be cached.
An example in Go with the gin web framework:
package main

import "github.com/gin-gonic/gin"

func main() {
    router := gin.Default()
    router.GET("/getnothing", func(c *gin.Context) {
        // tell nginx to cache this response for 600 seconds
        c.Writer.Header().Set("X-Accel-Expires", "600")
        c.String(200, "Nothing")
    })
    router.Run(":3000")
}
No matter what is configured in the nginx server or location config, this example sets the cache time for this specific response to 600 seconds, i.e. 10 minutes.

PHP with nginx

PHP is not that easy to install with nginx. It's not difficult, but definitely more work than with apache2.
First install the PHP backend php-fpm, which will most likely be version 7.2 for you. On CentOS: sudo yum install php-fpm (on Debian: sudo apt-get install php-fpm)
Now check whether php-fpm is actually running with the following command:

sudo systemctl status php-fpm
If it says enabled and running then you are good to go. If it doesn't state that then enable and start it via sudo systemctl enable php-fpm --now.
Now you have to configure nginx to use that PHP instance. Edit your nginx domain config (probably located at /etc/nginx/conf.d/example.com.conf on redhat-based systems or /etc/nginx/sites-available/example.com on debian-based systems) and add the following code in your server block:

Redhat-based:
location ~ \.php$ {
    include /etc/nginx/fastcgi_params;
    fastcgi_pass unix:/run/php-fpm/www.sock;
}
Debian-based:
location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/run/php/php7.4-fpm.sock;
}
It is highly recommended to just use the code that should already be present in your nginx default config for enabling PHP.

Check your nginx config with sudo nginx -t.
If there are no problems then you can reload the config with sudo nginx -s reload.
Now you can create a test file and check that everything works:
simply create a file like /usr/share/nginx/html/test.php containing <?php phpinfo(); and save it.
If you now visit that file on your website (e.g.: example.com/test.php) it should print the PHP info page.

Don't forget to change your "index" entry on your nginx config to include php files:
server {
  index index.html index.php;
  ...
}

Password Protect a folder

A simple password-protected folder can be achieved via .htpasswd files.

location /admin/ {
    auth_basic "Admin Login";
    auth_basic_user_file /etc/nginx/.htpasswd;
}
If you want to have the .htpasswd file in your web folder then deny access to it:
location = /.htpasswd {
    deny all;
    return 404;
}
To create this file you can use the apache2-utils (Debian) or httpd-tools (CentOS) package.
After installing it you can use the following command to create the file and the first user: sudo htpasswd -c /etc/nginx/.htpasswd chris
You will be prompted for the password on the next line via an interactive console.
To create additional users remove the -c flag: sudo htpasswd /etc/nginx/.htpasswd lukas
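If the apache tools are not available, an htpasswd-compatible entry can also be generated with openssl (a sketch; the user name, password and fixed salt here are only for illustration, in practice omit -salt so a random one is chosen):

```shell
# generate an htpasswd-compatible (apr1/MD5) entry for user "chris"
HASH=$(openssl passwd -apr1 -salt abcdefgh mypassword)
echo "chris:$HASH"   # append this line to /etc/nginx/.htpasswd
```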

Performance limits

The nginx server is fast, but it also has its limits.
One limit is Linux's "max open files" limit: a process can only have a limited number of file descriptors open at once. You can see these numbers with the ulimit -a command (the soft limits); the hard limits can be seen by appending an -H to the command. On Raspbian these are 1024 (soft) and 1048576 (hard).
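For example, to check just the file-descriptor limits for the current shell:

```shell
# soft limit on open file descriptors
ulimit -n
# hard limit on open file descriptors
ulimit -Hn
```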

Another big limit is the maximum number of connections a system can handle, because only a limited range of local ports is available. To see the port range you can use for connections, run: cat /proc/sys/net/ipv4/ip_local_port_range. On Raspbian it is 32768-60999, i.e. about 28,000 usable ports, so roughly 28,000 simultaneous connections to one upstream can be open at once.
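The port math can be checked directly against the kernel setting:

```shell
# count the usable local ports from the kernel's ephemeral port range
read LOW HIGH < /proc/sys/net/ipv4/ip_local_port_range
echo $((HIGH - LOW + 1))   # with the common default 32768-60999 this is 28232
```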
Your PHP-FPM or other backend application can be a limit too: your nginx worker processes may be fast, but if you run a PHP website and PHP-FPM is slow, then everything gets slowed down as well.