
RaspberryPi Nginx Secure Reverse Proxy Server

Intro

So we have our RaspberryPi ready, with MinibianPi installed and secured. If you’re like me, you already have a ton of ideas about the kinds of systems you want to set up in your house, just from browsing and reading some awesome blog posts. What many people forget is that when you’re about to jump on the “smart home” or “personal cloud” bandwagon, there’s one thing I believe you should have ready from day one: a secure reverse proxy.

Enter the RaspberryPi-Nginx-SSL combo.

[Image: RaspberryPi + Nginx + SSL]

Disclaimer

I am by no means an expert in server installation or maintenance. I do this for fun, and sometimes I get lazy and cut corners. Please don’t treat this as a you-must-do-this guide, but more as a this-is-what-I-did that I’d like to share with you. If you spot any mistakes, please let me know and I’ll do my best to address them immediately.


Table of Contents

  1. Intro
  2. Table of Contents
  3. Objectives
  4. Why Nginx?
  5. Prerequisite
  6. Setting Up the Firewall
  7. Installing Nginx from Source
  8. Optimizing Nginx Configuration
  9. Configuring the Website
  10. Securing Connections with Encryption
  11. Setting Up Nginx Virtual Host to Support SSL
  12. Testing
  13. What’s Next?
  14. References and Further Reading

Objectives

What does a reverse proxy do, you say? Here’s a good read. Now, why would a home user like me need a reverse proxy? Here are my reasons:

  1. I want to have a single point of entry for all inbound traffic for my internal systems, using a single domain (or subdomain). All other systems (running on the same RaspberryPi or another unit downstairs) will be assigned a “virtual directory”.
  2. I want all said inbound (and outbound) traffic encrypted using the strongest and latest cipher suites, with all reasonable optimization and security features enabled.
  3. I want all internal traffic to be sent in cleartext to reduce the load of encryption protocols on my internal systems, and have a termination proxy do all the encryption work.

Your reasons may be different, and your mileage will surely vary. As with any tutorial you’ll find on this site, this is more of a “this-is-how-I-did-it” than a “you-should-follow-my-example”. Proceed with your own considerations.


Why Nginx?

That question will probably spark a debate longer than this article. I’ve heard that Nginx is much more lightweight than Apache, and more suitable as a reverse proxy. But don’t take my word for it. Googling “nginx vs apache” will get you situated.


Prerequisite

This build is based on my previous article on how to set up a secure MinibianPi-based server. It’s up to you if you need as much security as I do.


Setting Up the Firewall

Previously, we’ve put a firewall in place that only allows SSH connections. We’ll have to change it so that it also allows HTTP and HTTPS connections (ports 80 and 443) from anywhere.

  1. Open the firewall rules file
    sudo nano /etc/iptables.firewall.rules
  2. Add the following lines anywhere in the file, as long as they’re above the default deny statements (the “Drop all other inbound” line); see the excerpt after this list for placement
    # Allow HTTP and HTTPS connections
    -A INPUT -p tcp -m state --state NEW --dport 80 -j ACCEPT
    -A INPUT -p tcp -m state --state NEW --dport 443 -j ACCEPT
  3. Save and exit, then put the rules into effect
    sudo iptables-restore < /etc/iptables.firewall.rules
  4. Check to make sure the rules are in place
    sudo iptables -L
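
For reference, the relevant part of my /etc/iptables.firewall.rules ends up looking something like this. The SSH and drop rules come from the previous article, so yours may differ slightly; the important thing is that the two new lines sit above the final drop rule:

# Allow SSH connections
-A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT

# Allow HTTP and HTTPS connections
-A INPUT -p tcp -m state --state NEW --dport 80 -j ACCEPT
-A INPUT -p tcp -m state --state NEW --dport 443 -j ACCEPT

# Drop all other inbound traffic
-A INPUT -j DROP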

Installing Nginx from Source

Matt Wilcox has written an excellent article about this on his blog, but I’ll re-write the steps here anyway.

  1. Install Nginx and libssl-dev (just in case we need it later) using apt-get
    sudo apt-get install nginx libssl-dev -y
  2. Remove Nginx using apt-get (installing and then removing like this may look odd, but it leaves the packaged dependencies and service plumbing in place for the version we’re about to build ourselves)
    sudo apt-get remove nginx
  3. Download the script and then open it for editing
    cd
    wget https://gist.githubusercontent.com/MattWilcox/402e2e8aa2e1c132ee24/raw/b88e2a85b31bcca57339c8ed8858f42557c3cf53/build_nginx.sh
    nano build_nginx.sh
  4. Perform the following edits:
    • First, go to the top of the file, you should see version numbers there saved as variables
      #!/usr/bin/env bash
      
      # names of latest versions of each package
      export VERSION_PCRE=pcre-8.35
      export VERSION_OPENSSL=openssl-1.0.1g
      export VERSION_NGINX=nginx-1.7.1

      We need to check the version numbers and make sure that they’re the ones we want to use. To check, go to their respective websites, which we can find immediately below the code above:

      # URLs to the source directories
      export SOURCE_OPENSSL=https://www.openssl.org/source/
      export SOURCE_PCRE=ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/
      export SOURCE_NGINX=http://nginx.org/download/

      At the time of writing, the latest versions were PCRE 8.35, OpenSSL 1.0.2, and Nginx 1.7.10, so that’s what I used (see the snippet right after this list for the edited result).

    • Remove the 64-bit flag. Raspbian Wheezy doesn’t qualify as a 64-bit operating system, nor does the RaspberryPi as a 64-bit computing device. So, we’ll find this line…
      ./config --prefix=$STATICLIBSSL no-shared enable-ec_nistp_64_gcc_128 \

      and change it to this:

      ./config --prefix=$STATICLIBSSL no-shared \
  5. To be able to run the script, we need to make it executable. Let’s make it executable only for us and deny access to anyone else, because we’re paranoid like that.
    sudo chmod 700 build_nginx.sh
  6. Now all that’s left is to run the script. This will take about 20 minutes, possibly more. If you want to grab a beer, now’s the time.
    sudo ./build_nginx.sh
  7. After the build is done, check the output message and make sure that there are no errors. No errors? Great! Let’s open our browser and browse to the Pi’s address to check. If you see the Nginx welcome page, you’re all set. If not, recheck your build output, you might’ve missed something there.
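
For reference, after my edits the version block at the top of build_nginx.sh looked roughly like this; these were simply the latest releases at the time, so check the source sites yourself and use whatever is current for you:

#!/usr/bin/env bash

# names of latest versions of each package
export VERSION_PCRE=pcre-8.35
export VERSION_OPENSSL=openssl-1.0.2
export VERSION_NGINX=nginx-1.7.10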

That’s it! Now you have Nginx installed and ready to use. But of course the vanilla installation always has loopholes that we must fix. Onwards!


Optimizing Nginx Configuration

We can tune our Nginx installation to suit our needs. What I mention here is what I think is beneficial for my setup. Your mileage will vary, so please take it with a grain of salt, do your research, and make an informed decision about your own setup.

All of the modifications in this section are done in the nginx.conf file, so let’s open it up:

sudo nano /etc/nginx/nginx.conf
  1. First on my checklist is the number of worker processes. This is probably best set equal to the number of cores available on your processor, so that each worker process can be “attached” to a core. I use a RaspberryPi 2 Model B, which has a quad-core processor, so I’ll be using 4 worker processes (if you’re not sure how many cores you have, see the note after this list):
    worker_processes 4;
  2. Next, I want to optimize gzip compression, which will allow our server to save a bit of bandwidth when serving our content. Most of the settings are already there; I’m just adding a minimum length and some content types. Again, do your research and add content types or tweak settings as you see fit:
    gzip			on;
    gzip_disable		"msie6";
    
    gzip_min_length		1100;
    gzip_vary		on;
    gzip_proxied		any;
    gzip_comp_level		6;
    gzip_buffers		16 8k;
    gzip_http_version	1.1;
    gzip_types		text/plain
    			text/css
    			application/json
    			application/x-javascript
    			text/xml
    			application/xml
    			application/xml+rss
    			text/javascript
    			image/svg+xml
    			application/x-font-ttf
    			font/opentype
    			application/vnd.ms-fontobject;
    

    If you’re wondering, gzip_comp_level is how much compression we want. It accepts values from 1 to 9; 1 is the least compressed but fastest to compute and 9 is the most compressed but slowest to compute. With the limited power of our minuscule processor, I’m going with a middle ground value of 6.

  3. Now, I want to have basic protection against DDoS attacks. I do this by limiting the timeouts for several events. Note that some of the settings below may already be present in the file, so do make sure that you’re not mentioning any duplicate settings (Nginx configtest will complain about it if you do, so don’t worry):
    client_header_timeout	10;
    client_body_timeout	10;
    keepalive_timeout	10 10;
    send_timeout		10;
    
  4. Last, to support longer domain names, I want to increase the hash bucket size. I find that 128 works best, but set it as you see fit:
     server_names_hash_bucket_size 128;
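
By the way, if you’re not sure how many cores your board has (for the worker_processes setting in step 1), you can ask the system itself; either of these should work on Raspbian:

nproc
grep -c ^processor /proc/cpuinfo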

That’s all there is to it. Let’s proceed with configuring our website.


Configuring the Website

This section assumes you already have dynamic DNS set up for your public IP address. If you haven’t got that set up yet, you can read my article on the topic and come back here to proceed.

It’s probably good practice not to mess with the “default” config; just copy it and work with a new file. That way if anything goes awry we can always revert to the default. Here’s the complete set of commands that should get you set up, all in one go:

cd /etc/nginx/sites-available \
&& sudo cp default mysite \
&& cd /etc/nginx/sites-enabled \
&& sudo rm default \
&& sudo ln -s /etc/nginx/sites-available/mysite

After executing that command we should have a copy of “default”, which is named “mysite”, and it is enabled. Now let’s open it up and edit it:

cd /etc/nginx/sites-available && sudo nano mysite

I don’t like too many comments in my config, so I just delete all lines that begin with “#”, then save and close.
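
If you’d rather not delete those lines by hand, a quick (and admittedly lazy) one-liner like this should do it; double-check the result before moving on, since it simply strips every line whose first non-whitespace character is “#”:

cd /etc/nginx/sites-available && sudo sed -i '/^\s*#/d' mysite

After that, let’s do a config test: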

sudo service nginx configtest

If no errors show up, let’s reload Nginx to make the new settings take effect:

sudo service nginx reload

All done!


Securing Connections with Encryption

SSL Certificates

To encrypt communications between our server and the internet, we need to have an SSL certificate in place. I’m not going to go too deep into explaining what an SSL certificate is, because (1) I’m not that well-versed in cryptography and (2) this article is supposed to focus on the technicalities of implementing such encryption. If you’d like to know more, there are a lot of articles that you can read elsewhere.

Preparing a Private Key

To obtain an SSL certificate, we must first create a private key. This key must be known to no one other than ourselves and must be kept secret in a safe place, preferably off-site and offline, to prevent unauthorized access. But that’s for the big players. For us, keeping it on a USB flash drive that’s only plugged in when needed will probably be sufficient.

Here’s the set of commands we need to issue to generate our private key, which will reside in the /etc/nginx/ssl/ directory:

sudo mkdir /etc/nginx/ssl \
&& cd /etc/nginx/ssl \
&& sudo openssl req -new -newkey rsa:2048 -nodes -keyout mysite.key -out mysite.csr

I recommend replacing ‘mysite’ with the domain name the certificate will be issued for to avoid further confusion.

The command starts the process of CSR and Private Key generation. The Private Key will be required for certificate installation. You will be prompted to fill in the information about your Company and domain name.

It is strongly recommended to fill in all the required fields. If a field is left blank, the CSR can be rejected during activation. For certificates with domain validation it is not mandatory to specify “Organization” and “Organization Unit”; you may fill those fields with ‘NA’ instead. In the Common Name field you need to enter the domain name the certificate should be issued for.

Please use only English alphanumeric characters, otherwise the CSR can be rejected by the Certificate Authority. If the certificate should be issued for a specific subdomain, you need to specify the subdomain in the Common Name, for example ‘sub1.ssl-certificate-host.com’. I just used: mysite.com.

Once all the requested information is filled in, you should have *.csr and *.key files in the folder where the command has been run.

The *.csr file contains the CSR code that you need to submit during certificate activation. It can be opened with a text editor. Usually it looks like a block of code with the header: “-----BEGIN CERTIFICATE REQUEST-----”. It is recommended to submit the CSR with the header and footer included.

The *.key file is the Private Key, which will be used for decryption during SSL/TLS session establishment between the server and a client. It has the header: “-----BEGIN RSA PRIVATE KEY-----”. Please make sure the private key is stored somewhere safe and secure, as it will be impossible to install the certificate on the server without it.
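
If you want to sanity-check what you just generated, OpenSSL can print both files back to you; these commands only read the files and change nothing (sudo is needed because we created them as root):

sudo openssl req -in mysite.csr -noout -text
sudo openssl rsa -in mysite.key -noout -check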

Generating/Requesting a Signed Certificate

There are a few options for getting a signed SSL certificate.

  1. If you’re OK with a self-signed certificate (and the annoying warnings you get everywhere you connect to your home network), then this is the way to go. To sign your own certificate, issue the following command:
    sudo openssl x509 -req -days 365 -in mysite.csr -signkey mysite.key -out mysite.crt
  2. If you want to have a Certificate Authority sign your SSL certificate, you can apply for it at one of the Certificate Authorities. Just google “cheap ssl certificates” and you’ll get a list of candidates you might want to look at. I use one from GoGetSSL which at the time of writing costs $12.35 and is valid for 3 years. Just register at the provider of your choice and follow their instructions. However, keep in mind that you should never send your private key to anyone. Only submit your CSR.

Either way, we should end up with a *.crt file in our hands, and this is what we will use to secure our server using SSL/TLS. Before we can use the certificate, though, we must chain it together with the CA’s certificates. Here’s a pretty good article on COMODO’s site on how to do that.
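
The exact file names and order depend on your CA, but the chaining itself is just concatenation: your own certificate first, followed by the CA’s intermediate certificate(s). A rough sketch, with placeholder file names:

cd /etc/nginx/ssl \
&& sudo sh -c 'cat mysite.crt intermediate.crt > mysite.chained.crt'

Whatever you end up calling it, the chained file is what ssl_certificate should point at later on.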

Setting Up Nginx for SSL

After we have our certificate signed and ready, it’s time to configure Nginx to use those certificates. To make things easy to maintain, we’ll create another file that we will include in the main configuration file. The file will reside in the /etc/nginx/conf.d/ directory; Nginx is preconfigured to scan that location for additional configuration files.

If you would like to use different security settings or different certificates for your virtual hosts, you can put the additional configuration file outside of /etc/nginx/conf.d/ and include it manually in the relevant server block in /etc/nginx/sites-available/mysite. Let’s create the configuration file:

sudo nano /etc/nginx/conf.d/security.conf

The list below will explain bit by bit what each setting will do.

  1. Set Supported Protocols
    We do this to support TLSv1 as the minimum standard because anything below that would compromise our security.

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  2. Set Server Cipher Preference
    This will tell browsers to let our server decide what cipher suites to use throughout the session.

    ssl_prefer_server_ciphers on;
  3. Set Supported Cipher Suites
    This defines what cipher suites we are supporting. I researched quite a bit for this list of cipher suites, and I finally settled on using the compatibility cipher suite examples from Mozilla (that article is a pretty darn good resource, so please read it when you have the chance).

    ssl_ciphers 'ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4';
  4. Enable Forward Secrecy
    Forward Secrecy (or Perfect Forward Secrecy) means that servers and clients communicate using session keys that are never sent over the wire, negotiated with what’s called a Diffie-Hellman handshake. This ensures that should our private key be compromised, an attacker will still not be able to decipher past communications. During a Diffie-Hellman handshake, the public parameters (including a large prime) are exchanged in cleartext, and the Diffie-Hellman parameter file determines the size of that prime. The bigger, the safer, but generating a bigger file takes a lot longer. Mozilla recommends a minimum of 2048 bits, but I decided to use 4096 bits. It takes forever to generate on a Pi, so decide for yourself whether you want that many bits. To use the parameter file, we include this in our security configuration file:

    ssl_dhparam /etc/nginx/conf.d/dh4096.pem;

    This setting only defines the dhparam file location. We need to generate the dhparam file by issuing the following command:

    cd /etc/nginx/conf.d && sudo openssl dhparam -out dh4096.pem 4096
  5. Enable HTTP Strict Transport Security (HSTS)
    HTTP Strict Transport Security is a feature that will make browsers remember to always connect to our server using HTTPS. The first time a browser connects, it can use HTTP, but all subsequent connections will be “forced” to use HTTPS. The max-age here is set for 365 days (equivalent to 31536000 seconds). Mozilla recommends a minimum max-age of 6 months.

    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";
  6. Enable OCSP Stapling
    OCSP Stapling is a mechanism whereby our own server “staples” up-to-date certificate status information to the TLS handshake, so clients don’t have to contact the Certificate Authority themselves every time they need to validate our certificate. This has the benefit of not sharing any request details with the Certificate Authority (thus increasing our visitors’ privacy) and relieving the Certificate Authority of precious clock cycles it would otherwise spend answering a status check for every connection to our server.

    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/nginx/ssl/comodo.trusted.pem;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;

    This tells Nginx to enable OCSP stapling and verification of OCSP responses, sets the location of our trusted certificates file, and configures Google’s public DNS as our resolver. Details of each setting can be found in the Nginx documentation. To use this feature, we must also prepare the comodo.trusted.pem file (COMODO is my CA, yours may be different). We do this by chaining all certificates like we did before except our entity certificate (the innermost certificate).

  7. Enable Session Resumption
    Session resumption is a feature that allows the full SSL handshake to be abbreviated on subsequent requests. This shaves precious milliseconds off, which decreases latency and improves efficiency. Here’s a primer you can read to get acquainted with the concept. To enable session resumption both by cache and by session tickets, add the following lines:

    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets on;
  8. Enable HTTP Public Key Pinning (HPKP)
    I would like to start off by mentioning that at the time of writing this feature is not yet widely used, and I’m not even sure that it’s already a standard. Here’s the draft document if you’re interested in the hairy details. Suffice it to say for now that you should enable this feature only if you’re sure you know what you’re doing and what the consequences are. Please don’t blame me if your site’s visitors come complaining to you about authentication errors that only time can fix. You’re still on board? Well let’s go then! To generate your certificate’s SPKI pin, issue the following command:

    openssl req -inform pem -pubkey -noout < mysite.com.csr | openssl pkey -pubin -outform der | openssl dgst -sha256 -binary | base64
    

    Take note of the result. Now, before we actually commit to setting the HPKP header, I strongly urge you to generate a backup set of private key and CSR (a sketch of this follows after this list). This is considered a best practice: it helps you out if your current private key is ever compromised and you need to revoke your certificate. If you don’t set a backup pin now and don’t have a backup key, your visitors could get stuck with invalid pins and would have to wait until the HPKP header expires before they can reach your site with a new certificate. Finally, to set the header, insert the following in your security.conf file:

    add_header Public-Key-Pins 'pin-sha256="[YOUR_PRIMARY_PIN_HERE]"; pin-sha256="[YOUR_BACKUP_PIN_HERE]"; max-age=15768000; includeSubdomains';
    

    This should force all HPKP-compliant browsers to stop the SSL handshake if they detect that your public key has changed or been tampered with.
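
As for the backup pin mentioned above, a rough sketch of generating a spare key pair and extracting its pin could look like this (file names are placeholders; keep the backup key offline, just like the primary):

cd /etc/nginx/ssl \
&& sudo openssl req -new -newkey rsa:2048 -nodes -keyout mysite_backup.key -out mysite_backup.csr \
&& sudo openssl req -in mysite_backup.csr -pubkey -noout | openssl pkey -pubin -outform der | openssl dgst -sha256 -binary | base64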

The security settings are now in place and ready to go, so let’s save and close the file. Next, we’ll configure the Nginx virtual host to use HTTPS exclusively.
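
Before moving on, here’s roughly what the assembled security.conf ends up looking like if you followed the steps above, all in one place for reference (the cipher string, pins, certificate paths, and resolver are specific to your own setup):

# /etc/nginx/conf.d/security.conf

# protocols and ciphers
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers '...'; # the full cipher string from step 3

# forward secrecy
ssl_dhparam /etc/nginx/conf.d/dh4096.pem;

# HSTS
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";

# OCSP stapling
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/nginx/ssl/comodo.trusted.pem;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;

# session resumption
ssl_session_cache shared:SSL:10m;
ssl_session_tickets on;

# HPKP
add_header Public-Key-Pins 'pin-sha256="[YOUR_PRIMARY_PIN_HERE]"; pin-sha256="[YOUR_BACKUP_PIN_HERE]"; max-age=15768000; includeSubdomains';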


Setting Up Nginx Virtual Host to Support SSL

This section is fairly easy. Let’s open up the virtual host file that we set up before:

sudo nano /etc/nginx/sites-available/mysite

We’ll introduce the following changes into the file:

  1. Set Existing Server Block to Listen to Port 443 (HTTPS)
    The existing server block we have in place listens to port 80 (HTTP). We need to change this so that it listens to port 443 (HTTPS) instead, and while we’re at it why not throw in SPDY support as well:

    server {
    	listen 443 ssl spdy;
    	# listen [::]:443 default_server ipv6only=on;
    
    	...
    
    }
  2. Add New Server Block to Listen to Port 80 (HTTP)
    We can’t have a server that doesn’t listen on port 80 (HTTP), because most users will type in an address and the browser will assume it’s HTTP. So, we create a new server block and place our redirection directive there:

    server {
    	listen 80;
    	server_name mysite.com;
    	return 301 https://$server_name$request_uri; #enforce https
    }
  3. Set Site Name and Certificates
    In order for our site to work with HTTPS, we need to set the correct site name (FQDN) and certificates that go along with it. Let’s set it up in our HTTPS server block:

    # Make site accessible from https://mysite.com
    server_name mysite.com;
    
    # certificate locations
    ssl_certificate /etc/nginx/ssl/mysite.crt;
    ssl_certificate_key /etc/nginx/ssl/mysite.key;
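
Putting the three changes together, the skeleton of the edited virtual host ends up looking roughly like this (the “...” stands for the root, index, and location settings carried over from the default config, and the domain and certificate paths are of course yours to change):

server {
	listen 80;
	server_name mysite.com;
	return 301 https://$server_name$request_uri; #enforce https
}

server {
	listen 443 ssl spdy;
	server_name mysite.com;

	ssl_certificate /etc/nginx/ssl/mysite.crt;
	ssl_certificate_key /etc/nginx/ssl/mysite.key;

	...
}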

After we’ve made the changes above, let’s save the file and exit, and perform a configuration test:

sudo service nginx configtest

If that went well, we can go ahead and reload the configuration:

sudo service nginx reload

Now, we’re ready to test it.


Testing

Finally, after all the security settings are in place (don’t forget to make copies of your private keys and store them somewhere safe!), it’s time to test. I like to use SSL Labs to test my settings, and I suggest you do too. Just head over to https://www.ssllabs.com/ssltest/index.html and enter your domain there. If you spot anything out of place, they have a lot of articles that can help you address it appropriately.
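
If you want a quick sanity check from the command line first (run it from a machine that resolves your domain to your public address), something like this works; the first command should answer with a 301 redirect to the https:// URL, and the second should show the Strict-Transport-Security header we configured. Add -k to the second command if you went with a self-signed certificate:

curl -I http://mysite.com
curl -sI https://mysite.com | grep -i strict-transport-security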

Here are my test results:

[Screenshot: Qualys SSL Labs test result showing an A+ rating]

If you’d like to see the details of the test, just go over to SSL Labs and run a test on this website.


What’s Next?

Now we have a secured Nginx reverse proxy that handles all incoming HTTP and HTTPS requests. We can tweak these settings as we see fit, and as more internal applications need outside access, but the scope of this article stops here.
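
To give a taste of what that looks like: when the first internal application needs outside access, objective #1 (each system getting its own “virtual directory”) translates into a proxied location inside the HTTPS server block, along these lines. The path, address, and port here are made up purely for illustration:

location /homeapp/ {
	proxy_pass http://192.168.1.50:8080/;
	proxy_set_header Host $host;
	proxy_set_header X-Real-IP $remote_addr;
	proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
	proxy_set_header X-Forwarded-Proto $scheme;
}

The upstream connection stays in cleartext on the LAN (objective #3), while the Pi handles all the TLS work at the edge.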

As I write more articles based on this setup, I’ll update and post links in this section. So, stay tuned!


References and Further Reading

These articles have been instrumental in my writing this article and some of them may provide more insight for you if you’d like to dive deeper into the “why” instead of the “how”:

  1. How to set up a secure Raspberry Pi web server, mail server and Owncloud installation — pestmeester.nl
  2. Setting up a (reasonably) secure home web-server with Raspberry Pi — Matt Wilcox
  3. Security/Server Side TLS — Mozilla Wiki
  4. Strong SSL Security on nginx — Remy van Elst
  5. How To Configure OCSP Stapling on Apache and Nginx — DigitalOcean Community Tutorials
  6. Configuring Apache, Nginx, and OpenSSL for Forward Secrecy — Qualys Blog
