All posts filed under “Geeky Stuff”


Pimp My PC: From HDD to SSD

It’s been a while since I’ve done anything to upgrade my PC. My last upgrade was “out-of-the-box”: a Ducky Shine 3 full-sized mechanical keyboard to replace my flimsy Rapoo wireless keyboard that kept me away from any serious games. This time, I’m addressing a more performance-related aspect of my PC.

What’s The Issue

I consider my PC good enough in all other areas: Intel Core i5-3570K, 2×4 gigs of Corsair Vengeance, a 2-gig MSI HD7850, a 250-gig WD Blue for the system and 3 terabytes of Hitachi for the primary storage.

Everything is great for now. Gaming is okay, with enough frames per second to prevent me from damaging my Ducky. But I’m not fond of the idea of shutting it down after a day’s work (well, a day’s play).

It takes about 21 seconds for my PC to display the Windows 8.1 login screen from a cold boot. This, I don’t mind. What I do mind is the time it takes from login to ready. Get this: 4 minutes plus. I could fall asleep in half of that! I have enough RAM and definitely enough juice in my i5 to crunch through the startup process easily. The bottleneck is clearly my WD Blue.

Choices, Choices

The course of action is clear on this one: ditch the HDD and get an SSD. It has to be (1) big enough that I can migrate my existing HDD without problems, (2) fast enough that grumpy is replaced with happy, and (3) cheap enough that I don’t cry when I look at my bank account later on.

Five minutes of googling will tell you that in mid-2015, one of the best all-around performing SSDs is the Samsung 850 Pro. I quickly found a unit at a computer store near my home. At about $148 (or IDR 2 million) at the time of writing, it’s also not that expensive considering its performance.

Without really thinking much, I grabbed a unit and here it is:

Samsung 850 Pro 256GB


I tried to follow the steps outlined here. On second thought, the new drive is actually larger than my old one, so I can skip the backup and the move-here-then-move-back routine, right? Awesome! Opening up my Corsair K550 to find a spot for the SSD was a walk in the park, and after fumbling with the box my P8Z77-V Pro came in to find an extra SATA cable, the installation was straightforward.

When I turned the PC on, the BIOS detected the SSD and I logged into Windows as usual. I skipped the backup, cloned my WD Blue to the 850 Pro, reconfigured the boot priority, and it was good to go.


The first thing I wanted to do after installing was test it out. I grabbed a copy of AS SSD Benchmark, and here are my initial test results:

AS SSD Benchmark: Samsung 850 Pro with 3 Gb/s SATA cable on ASMedia 6 Gb/s SATA III port

Hmmm… Something’s not right. The benchmarks I found online suggest the drive is capable of sequential write speeds of around 500 MB/s, so why is this unit lagging behind? I decided to check the cable, and my hunch was right: I was using a 3 Gb/s SATA cable plugged into an ASMedia SATA port. After swapping in a proper 6 Gb/s cable and plugging it into an Intel SATA port, the tests revealed numbers closer to the 500 MB/s mark:

AS SSD Benchmark: Samsung 850 Pro with 6 Gb/s SATA cable on Intel 6 Gb/s SATA III port

Benchmarks are just benchmarks, though; I never believe that any benchmark could supersede real-life performance. So before installing the 850, I measured the time it took for my PC to boot to login, and then from login to ready (which I set as the point at which my Rainlendar widgets show up). Here are the numbers:

Condition                       | BIOS Beep to Login | Login to Ready  | Total
--------------------------------|--------------------|-----------------|------------------
Initial (WD Blue)               | 25 seconds         | About 4 minutes | 4-5 minutes
Samsung 850 Pro w/ 3 Gb/s cable | 21 seconds         | About 1 minute  | 90-ish seconds
Samsung 850 Pro w/ 6 Gb/s cable | 21 seconds         | 31 seconds      | Under 60 seconds

These numbers (especially the Samsung’s) were not taken on the first successful boot. Right after I switched to the SSD, and whenever I plug the SSD into a different SATA III port on my motherboard, the first boot takes forever. Well, no, actually it’s about 5 minutes. I’m assuming Windows 8.1 is doing its thing, but you should google it if you really want to know.


You can’t go wrong with an SSD upgrade. My total boot time improved from 4-5 minutes down to less than 60 seconds. That’s roughly a 5× speedup. I bet Adobe Lightroom and everything else on my PC loads that much faster too.

This is by far the easiest and most convenient way for me to squeeze that extra oomph out of my existing hardware. It may be a bit on the pricey side, especially for some people who contact me and say “hey, can you build me a PC for 250 bucks?” (you know who you are). I already spent about $2000 on my PC, so yeah, this is an enthusiast’s upgrade.

All said, I couldn’t be happier with the decision to join the SSD bandwagon. Also, I’m glad I’ll have an extra 250 gigs of storage to stash my music or whatever. This has been fun, and I’d recommend the 850 Pro for enthusiasts who want the extra bit of performance.


RaspberryPi Nginx Secure Reverse Proxy Server


So we have our RaspberryPi ready, with MinibianPi installed and secured. If you’re like me, you already have a ton of ideas about what kind of systems you want to set up in your house, just from browsing and reading some awesome blog posts. But when you’re about to jump on the “smart home” or “personal cloud” bandwagon, there’s one thing many people forget that I believe you should have ready from day one: a secure reverse proxy.

Enter the RaspberryPi-Nginx-SSL combo.



I am by no means an expert in server installation or maintenance. I do this for fun, and sometimes I get lazy and cut corners. Please don’t treat this as a you-must-do-this guide, but more of a this-is-what-I-did and I’d like to share it with you. If you spot any mistakes, please let me know and I’ll do my best to address it immediately.

Table of Contents

  1. Intro
  2. Table of Contents
  3. Objectives
  4. Why Nginx?
  5. Prerequisite
  6. Setting Up the Firewall
  7. Installing Nginx from Source
  8. Optimizing Nginx Configuration
  9. Configuring the Website
  10. Securing Connections with Encryption
  11. Setting Up Nginx Virtual Host to Support SSL
  12. Testing
  13. What’s Next?
  14. References and Further Reading


What does a reverse proxy do, you say? Here’s a good read. Now, why would a home user like me need a reverse proxy? Here are my reasons:

  1. I want to have a single point of entry for all inbound traffic for my internal systems, using a single domain (or subdomain). All other systems (running on the same RaspberryPi or another unit downstairs) will be assigned a “virtual directory”.
  2. I want all said inbound (and outbound) traffic encrypted using the strongest and latest cipher suites, with all reasonable optimization and security features enabled.
  3. I want all internal traffic sent in cleartext to reduce the load of encryption protocols on my internal systems, with a termination proxy doing all the encryption work.
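To make the first and third points concrete, here’s a rough sketch of what such a termination proxy looks like in Nginx terms. The “virtual directory” paths and internal LAN addresses here are made up for illustration, and the certificate/cipher directives are left out (they’re covered later in this article):

```nginx
# Hypothetical sketch: TLS terminates at this server; traffic to the
# internal systems travels in cleartext over the LAN.
server {
    listen 443 ssl;
    # (ssl_certificate and cipher settings come later in this article)

    # "Virtual directory" for a system running on this same Pi
    location /calendar/ {
        proxy_pass;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # "Virtual directory" for another unit downstairs
    location /nas/ {
        proxy_pass;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Each internal system gets its own location block, and only the proxy itself ever speaks TLS to the outside world.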

Your reasons may be different, and your mileage will surely vary. As with any tutorial you’ll find on this site, this is more of a “this-is-how-I-did-it” rather than “you-should-follow-my-example”. Proceed with your own considerations.

Why Nginx?

That question will probably spark a debate longer than this article. I’ve heard that Nginx is much more lightweight than Apache, and more suitable as a reverse proxy. But don’t take my word for it. Googling “nginx vs apache” will get you situated.


Prerequisite

This build is based on my previous article on how to set up a secure MinibianPi-based server. It’s up to you whether you need as much security as I do.

Setting Up the Firewall

Previously, we put a firewall in place that only allows SSH connections. We’ll have to change it so that it also allows HTTP and HTTPS (ports 80 and 443) connections from anywhere.

  1. Open the firewall rules file
    sudo nano /etc/iptables.firewall.rules
  2. Add the following lines anywhere in the file as long as it’s above the default deny statements (the “Drop all other inbound” line)
    # Allow HTTP and HTTPS connections
    -A INPUT -p tcp -m state --state NEW --dport 80 -j ACCEPT
    -A INPUT -p tcp -m state --state NEW --dport 443 -j ACCEPT
  3. Save and exit, then put the rules into effect
    sudo iptables-restore < /etc/iptables.firewall.rules
  4. Check to make sure the rules are in place
    sudo iptables -L
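For reference, after the edit the INPUT section of my rules file looks roughly like this. The SSH rule and the drop rule shown here are assumptions carried over from the previous article; your surrounding rules may differ:

```
# Allow SSH connections
-A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT

# Allow HTTP and HTTPS connections
-A INPUT -p tcp -m state --state NEW --dport 80 -j ACCEPT
-A INPUT -p tcp -m state --state NEW --dport 443 -j ACCEPT

# Drop all other inbound
-A INPUT -j DROP
```

The only hard requirement is that the two ACCEPT rules for ports 80 and 443 sit above the drop rule; iptables evaluates rules top to bottom.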

Installing Nginx from Source

Matt Wilcox has written an excellent article about this on his blog, but I’ll re-write the steps here anyway.

  1. Install Nginx and libssl-dev (just in case we need it later) using apt-get; this pulls in the dependencies and scripts our source build will rely on
    sudo apt-get install nginx libssl-dev -y
  2. Remove Nginx itself using apt-get (its dependencies stay behind)
    sudo apt-get remove nginx
  3. Download the script and then open it for editing
  4. Perform the following edits:
    • First, go to the top of the file, where you should see the version numbers saved as variables
      #!/usr/bin/env bash
      # names of latest versions of each package
      export VERSION_PCRE=pcre-8.35
      export VERSION_OPENSSL=openssl-1.0.1g
      export VERSION_NGINX=nginx-1.7.1

      We need to check the version numbers and make sure they’re the ones we want to use. To check, go to their respective websites, which we can find immediately below the code above:

      # URLs to the source directories
      export SOURCE_OPENSSL=
      export SOURCE_PCRE=
      export SOURCE_NGINX=

      At the time of writing, I used the latest, which is PCRE v8.35, OpenSSL v1.0.2, and Nginx v1.7.10.

    • Remove the 64-bit flag. Raspbian Wheezy doesn’t qualify as a 64-bit operating system, nor does the RaspberryPi as a 64-bit computing device. So, we’ll find this line…
      ./config --prefix=$STATICLIBSSL no-shared enable-ec_nistp_64_gcc_128 \

      and change it to this:

      ./config --prefix=$STATICLIBSSL no-shared \
  5. To be able to run the script, we need to make it executable. Let’s make it executable only for us and deny access to anyone else, because we’re paranoid like that.
    sudo chmod 700
  6. Now all that’s left is to run the script. This will take about 20 minutes, possibly more. If you want to grab a beer, now’s the time.
    sudo ./
  7. After the build is done, check the output message and make sure that there are no errors. No errors? Great! Let’s open our browser and browse to the Pi’s address to check. If you see the Nginx welcome page, you’re all set. If not, recheck your build output, you might’ve missed something there.

That’s it! Now you have Nginx installed and ready to use. But of course the vanilla installation always has loopholes that we must fix. Onwards!

Optimizing Nginx Configuration

We can tune our Nginx installation to suit our needs. The settings below are the ones I think benefit my setup. Your mileage will vary, so take them with a grain of salt, do your research, and make an informed decision about your own setup.

All of the modifications in this section are done in the nginx.conf file, so let’s open it up:

sudo nano /etc/nginx/nginx.conf
  1. First on my checklist is the number of worker processes. This is probably best set equal to the number of cores available on your processor, that way a worker process can be “attached” to a core. I use RaspberryPi 2 Model B, which has a quad-core processor, so I’ll be using 4 worker processes:
    worker_processes 4;
  2. Next, I want to optimize gzip compression, which will allow our server to save a bit of bandwidth when serving our content. Most of the settings are already there, I’m just adding a minimum length and some content types. Again, do your research and add content types or tweak settings as you see fit:
    gzip              on;
    gzip_disable      "msie6";
    gzip_min_length   1100;
    gzip_vary         on;
    gzip_proxied      any;
    gzip_comp_level   6;
    gzip_buffers      16 8k;
    gzip_http_version 1.1;
    gzip_types        text/plain

    If you’re wondering, gzip_comp_level is how much compression we want. It accepts values from 1 to 9; 1 is the least compressed but fastest to compute and 9 is the most compressed but slowest to compute. With the limited power of our minuscule processor, I’m going with a middle ground value of 6.

  3. Now, I want to have basic protection against DDoS attacks. I do this by limiting the timeouts for several events. Note that some of the settings below may already be present in the file, so do make sure that you’re not mentioning any duplicate settings (Nginx configtest will complain about it if you do, so don’t worry):
    client_header_timeout 10;
    client_body_timeout   10;
    keepalive_timeout     10 10;
    send_timeout          10;
  4. Last, to support longer domain names, I want to increase the hash bucket size. I find that 128 works best, but set it as you see fit:
     server_names_hash_bucket_size 128;

That’s all there is to it. Let’s proceed with configuring our website.

Configuring the Website

This section assumes you already have dynamic DNS set up for your public IP address. If you haven’t, you can read my article on the topic and come back here to proceed.

It’s probably considered a good practice to not mess with the “default” settings; just copy it and work with a new file. That way if anything goes awry we can always revert to default. Here’s the complete set of commands that should get you set up, all in one go:

cd /etc/nginx/sites-available \
&& sudo cp default mysite \
&& cd /etc/nginx/sites-enabled \
&& sudo rm default \
&& sudo ln -s /etc/nginx/sites-available/mysite

After executing that command we should have a copy of “default”, which is named “mysite”, and it is enabled. Now let’s open it up and edit it:

cd /etc/nginx/sites-available && sudo nano mysite

I don’t like too many comments in my config, so I just delete all lines that begin with “#”, then save and close. After that, let’s do a config test:

sudo service nginx configtest

If no errors show up, let’s reload Nginx to make the new settings take effect:

sudo service nginx reload

All done!

Securing Connections with Encryption

SSL Certificates

To encrypt communications between our server and the internet, we need to have an SSL certificate in place. I’m not going to go too deep into explaining what an SSL certificate is, because (1) I’m not that well-versed in cryptography and (2) this article is supposed to focus on the technicalities of implementing such encryption. If you’d like to know more, there are a lot of articles that you can read elsewhere.

Preparing a Private Key

To obtain an SSL certificate, we must first create a private key. This key must be known to no one but ourselves and must be kept in a safe place, preferably off-site and offline, to prevent unauthorized access. But that’s for the big players. For us, safekeeping on a USB flash drive that’s only plugged in when needed will probably be sufficient.

Here is the set of commands we need to issue to generate our private key, which will reside in the /etc/nginx/ssl/ directory:

sudo mkdir /etc/nginx/ssl \
&& cd /etc/nginx/ssl \
&& openssl req -new -newkey rsa:2048 -nodes -keyout mysite.key -out mysite.csr

I recommend replacing ‘mysite’ with the domain name the certificate will be issued for to avoid further confusion.

The command starts the CSR and private key generation process. The private key will be required for certificate installation. You will be prompted to fill in information about your organization and domain name.

It is strongly recommended to fill in all the required fields. If a field is left blank, the CSR can be rejected during activation. For certificates with domain validation it is not mandatory to specify “Organization” and “Organizational Unit”; you may fill those fields with ‘NA’ instead. In the Common Name field, enter the domain name the certificate should be issued for.

Please use only English alphanumeric characters, otherwise the CSR can be rejected by the Certificate Authority. If the certificate should be issued for a specific subdomain, you need to specify the subdomain in ‘Common Name’. For example ‘’. I just used:

Once all the requested information is filled in, you should have *.csr and *.key files in the folder where the command has been run.

The *.csr file contains the CSR code that you need to submit during certificate activation. It can be opened with a text editor. Usually it looks like a block of code with the header “-----BEGIN CERTIFICATE REQUEST-----”. It is recommended to submit the CSR with the header and footer included.

The *.key file is the private key, which will be used for decryption during SSL/TLS session establishment between a server and a client. It has the header “-----BEGIN RSA PRIVATE KEY-----”. Please make sure that the private key is stored somewhere safe and secure, as it will be impossible to install the certificate on the server without it.

Generating/Requesting a Signed Certificate

There are a few options where we can get a signed SSL certificate.

  1. If you’re OK with a self-signed certificate (and the annoying warnings you get everywhere you connect to your home network), then this is the way to go. To sign your own certificate, issue the following command:
    openssl x509 -req -days 365 -in mysite.csr -signkey mysite.key -out mysite.crt
  2. If you want to have a Certificate Authority sign your SSL certificate, you can apply for it at one of the Certificate Authorities. Just google “cheap ssl certificates” and you’ll get a list of candidates you might want to look at. I use one from GoGetSSL which at the time of writing costs $12.35 and is valid for 3 years. Just register at the provider of your choice and follow their instructions. However, keep in mind that you should never send your private key to anyone. Only submit your CSR.

Either way, we should end up with a *.crt file in our hands, and this is what we will use to secure our server using SSL/TLS. Before we can use the certificate, though, we must chain it together with the CA’s certificates. Here’s a pretty good article on COMODO’s site on how to do that.
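In case you skip that article: the chaining itself is just concatenation, with your own (leaf) certificate first, followed by the CA’s intermediate certificate(s), working up toward the root. The file names below are illustrative; use whatever your CA actually sent you:

```shell
# Leaf certificate first, then the intermediate(s) from the CA.
# Order matters: Nginx serves the file top to bottom.
cat mysite.crt intermediate.crt > mysite.chained.crt
```

You’d then point ssl_certificate at the chained file instead of the bare *.crt.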

Setting Up Nginx for SSL

After we have our certificate signed and ready, it’s time to configure Nginx to use those certificates. To make things easy to maintain, we’ll create another file that we will include in the main configuration file. The file will reside in the /etc/nginx/conf.d/ directory; Nginx is preconfigured to scan that location for additional configuration files.

If you would like to use different security settings or different certificates for your virtual hosts, you can put the additional configuration file outside of /etc/nginx/conf.d/ and include it manually in your http block in /etc/nginx/sites-available/mysite. Let’s create the configuration file:

sudo nano /etc/nginx/conf.d/security.conf

The list below will explain bit by bit what each setting will do.

  1. Set Supported Protocols
    We do this to set TLSv1 as the minimum standard, because anything below it (SSLv3 and earlier) is known to be broken and would compromise our security.

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  2. Set Server Cipher Preference
    This will tell browsers to let our server decide what cipher suites to use throughout the session.

    ssl_prefer_server_ciphers on;
  3. Set Supported Cipher Suites
    This defines which cipher suites we support. I researched this list quite a bit, and finally settled on the modern compatibility cipher suite example from Mozilla. (That article is a pretty darn good resource in general, so please read it when you have the chance.)

  4. Enable Forward Secrecy
    Forward Secrecy (or Perfect Forward Secrecy) means servers and clients communicate using session keys that are never sent over the wire, negotiated with what’s called a Diffie-Hellman handshake. This ensures that should our private key be compromised, an attacker will not be able to decipher past communications. To perform Diffie-Hellman handshakes, both the client and server need to exchange parameters based on a prime number sent in cleartext. A Diffie-Hellman parameter file determines the size of the prime that is used. The bigger the safer, but a bigger one takes a lot longer to generate. Mozilla recommends a minimum of 2048 bits, but I decided to use 4096 bits. It takes forever to generate on a Pi, so decide for yourself whether you want that many bits. To use the parameter file, we include this in our security configuration file:

    ssl_dhparam /etc/nginx/conf.d/dh4096.pem;

    This setting only defines the dhparam file location. We need to generate the dhparam file by issuing the following command:

    cd /etc/nginx/conf.d && sudo openssl dhparam -out dh4096.pem 4096
  5. Enable HTTP Strict Transport Security (HSTS)
    HTTP Strict Transport Security is a feature that makes browsers remember to always connect to our server over HTTPS. The first time a browser connects, it can use HTTP, but all subsequent connections will be “forced” to use HTTPS. The max-age here is set to 365 days (31536000 seconds); Mozilla recommends a minimum max-age of 6 months.

    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";
  6. Enable OCSP Stapling
    OCSP Stapling is a mechanism whereby our server periodically fetches a signed OCSP response from the Certificate Authority and “staples” it to the TLS handshake, so clients can skip contacting the Certificate Authority themselves. This has the benefit of not sharing any request details with the Certificate Authority (thus increasing our visitors’ privacy) and relieving the Certificate Authority of precious clock cycles it would otherwise spend on every connection to our server.

    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/nginx/ssl/comodo.trusted.pem;
    resolver valid=300s;
    resolver_timeout 5s;

    This tells Nginx to enable OCSP stapling and verification of OCSP responses, sets the location of our trusted certificates file, and configures Google’s public DNS as our resolver. Details of each setting can be found in the Nginx documentation. To use this feature, we must also prepare the comodo.trusted.pem file (COMODO is my CA, yours may be different). We do this by chaining all certificates like we did before except our entity certificate (the innermost certificate).

  7. Enable Session Resumption
    Session resumption is a feature that allows the full SSL handshake to be abbreviated on subsequent requests. This shaves precious milliseconds off, which can decrease latency and improve efficiency. Here’s a primer you can read to get acquainted with the concept. To enable session resumption both by cache and by session tickets, add the following lines:

    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets on;
  8. Enable HTTP Public Key Pinning (HPKP)
    I’d like to start off by mentioning that at the time of writing this feature is not yet widely used, and I’m not even sure it’s a finalized standard. Here’s the draft document if you’re interested in the hairy details. Suffice it to say that you should enable this feature only if you’re sure you know what you’re doing and what the consequences are. Please don’t blame me if your site’s visitors come complaining about authentication errors that only time can fix. Still on board? Well, let’s go then! To generate your certificate’s SPKI pin, issue the following command:

    openssl req -inform pem -pubkey -noout < | openssl pkey -pubin -outform der | openssl dgst -sha256 -binary | base64

    Take note of the result. Now, before we actually commit to setting the HPKP header, I strongly urge you to generate a backup private key and CSR. This is considered a best practice: it helps when your current private key is compromised and you need to revoke your certificate. If you don’t do this now and your key is compromised, your visitors will be stuck with invalid pins and will have to wait until the HPKP header expires before they can access your site with new certificates. Finally, to set the header, insert the following in your security.conf file:

    add_header Public-Key-Pins 'pin-sha256="[YOUR_PRIMARY_PIN_HERE]"; pin-sha256="[YOUR_BACKUP_PIN_HERE]"; max-age=15768000; includeSubDomains';

    This should force all HPKP-compliant browsers to stop the SSL handshake if they detect that your public key has changed or been tampered with.
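If you’d like to rehearse the pin pipeline from step 8 before touching your real certificate, you can run it against a disposable key and CSR. All file names and the subject here are throwaways; the result is always a 44-character base64 string (a SHA-256 hash is 32 bytes):

```shell
# Generate a disposable key and CSR to practice on
openssl req -new -newkey rsa:2048 -nodes \
    -keyout throwaway.key -out throwaway.csr \
    -subj "/"

# Derive the SPKI pin from the CSR
openssl req -inform pem -pubkey -noout < throwaway.csr \
    | openssl pkey -pubin -outform der \
    | openssl dgst -sha256 -binary \
    | base64
```

Once you’re comfortable with the mechanics, repeat it with your real CSR and your backup CSR to get the two pins for the header.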

The security settings are now in place and ready to go, so let’s save and close the file. Next, we’ll configure the Nginx virtual host to use HTTPS exclusively.

Setting Up Nginx Virtual Host to Support SSL

This section is fairly easy to do. To do this, let’s open up the virtual host file that we’ve set up before:

sudo nano /etc/nginx/sites-available/mysite

We’ll introduce the following changes into the file:

  1. Set Existing Server Block to Listen to Port 443 (HTTPS)
    The existing server block we have in place listens to port 80 (HTTP). We need to change this so that it listens to port 443 (HTTPS) instead, and while we’re at it why not throw in SPDY support as well:

    server {
    	listen 443 ssl spdy;
    	# listen [::]:443 default_server ipv6only=on;
  2. Add New Server Block to Listen to Port 80 (HTTP)
    We can’t have a server that doesn’t listen on port 80 (HTTP), because most users will type in a bare address and the browser will assume it’s HTTP. So, we create a new server block and place our redirection directive there:

    server {
    	listen 80;
    	return 301 https://$server_name$request_uri; #enforce https
  3. Set Site Name and Certificates
    In order for our site to work with HTTPS, we need to set the correct site name (FQDN) and certificates that go along with it. Let’s set it up in our HTTPS server block:

    # Make site accessible from
    # certificate locations
    ssl_certificate /etc/nginx/ssl/mysite.crt;
    ssl_certificate_key /etc/nginx/ssl/mysite.key;
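Putting the three edits together, the skeleton of the ‘mysite’ file ends up roughly like this. This is a sketch: the certificate file names are the placeholders from earlier, and the rest of your existing server block (root, index, location blocks) carries over unchanged:

```nginx
# New block: redirect all plain HTTP traffic to HTTPS
server {
    listen 80;
    return 301 https://$server_name$request_uri; # enforce https
}

# Existing block, now serving HTTPS (and SPDY) only
server {
    listen 443 ssl spdy;

    ssl_certificate     /etc/nginx/ssl/mysite.crt;
    ssl_certificate_key /etc/nginx/ssl/mysite.key;

    # ... root, index, location blocks, etc. ...
}
```

The 301 redirect plus the HSTS header from the previous section together ensure visitors end up on HTTPS and stay there.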

After we’ve made the changes above, let’s save the file and exit, and perform a configuration test:

sudo service nginx configtest

If that went well, we can go ahead and reload the configuration:

sudo service nginx reload

Now, we’re ready to test it.


Testing

Finally, after all the security settings are in place (don’t forget to make copies of your private keys and store them somewhere safe!), it’s time to test. I like to use SSL Labs for this, and I suggest you do too. Just head over to and enter your domain there. If you spot anything out of place, they have a lot of articles that can help you address it appropriately.

Here are my test results:


If you’d like to see the details of the test, just go over to SSL Labs and run a test on this website.

What’s Next?

Now we have a secured Nginx reverse proxy that handles all incoming HTTP and HTTPS requests. We can tweak these settings as we see fit, and as more internal applications need outside access, but the scope of this article stops here.

As I write more articles based on this setup, I’ll update and post links in this section. So, stay tuned!

References and Further Reading

These articles have been instrumental in my writing this article and some of them may provide more insight for you if you’d like to dive deeper into the “why” instead of the “how”:

  1. How to set up a secure Raspberry Pi web server, mail server and Owncloud installation —
  2. Setting up a (reasonably) secure home web-server with Raspberry Pi — Matt Wilcox
  3. Security/Server Side TLS — Mozilla Wiki
  4. Strong SSL Security on nginx — Remy van Elst
  5. How To Configure OCSP Stapling on Apache and Nginx — DigitalOcean Community Tutorials
  6. Configuring Apache, Nginx, and OpenSSL for Forward Secrecy — Qualys Blog

Preparing a Secured Minimum System for Your RaspberryPi Server

Table of Contents

  1. Intro
  2. Operating System Selection
  3. Installing MinibianPi
  4. Creating New User With Administrative Privileges
  5. Securing The Operating System
  6. Getting The System Up-to-Date


A RaspberryPi isn’t the most powerful computer around, nor is it equipped to perform server-grade mission-critical tasks. But a lot of people use it as their home-grown server, some have multiple units or clusters of them, and some companies even offer RaspberryPi data centers.

I’m a tinkerer. This means I make mistakes, I ruin one of my servers, and — more often than not — I re-install it from scratch because I don’t know how to fix it. Then I do it all over again. Setting up a base system that I can build on has become a somewhat mundane task. This post documents just that: my endeavor to create a secured minimum system from which I can build a RaspberryPi server.


I am by no means an expert in server installation or maintenance. I do this for fun, and sometimes I get lazy and cut corners. Please don’t treat this as a you-must-do-this guide, but more of a this-is-what-I-did and I’d like to share it with you. If you spot any mistakes, please let me know and I’ll do my best to address it immediately.

Operating System Selection

The objective of this build is to prepare a minimum system that is reasonably secured and can be used as a base system from which we can build a RaspberryPi-based server. Note the keywords: minimum and server. We won’t need to have a desktop environment, let alone Minecraft. Let’s leave all unnecessary things out of the build.

To achieve the objective, I chose MinibianPi. It’s a stripped-down version of Raspbian which is quite up-to-date, and even supports Pi 2. I think it’s a great place to start.

Installing MinibianPi

  1. Download MinibianPi image.
  2. Write the image to an SD card (there are instructions for Windows users, Mac users, and Linux users).
  3. Log in as root (the password is raspberry), either directly or via SSH, using Terminal if you’re on a Mac or PuTTY if you’re on Windows. The image has SSH enabled by default.
  4. Install raspi-config. We could just re-partition the SD card ourselves, but I would personally try to not mess with partitions.
    apt-get install raspi-config -y
  5. Run raspi-config, then choose “Expand Filesystem”. We do this now to avoid running out of disk space when we start installing other stuff.
  6. Reboot

Creating New User With Administrative Privileges

I believe this is one of the most important steps: the default ‘pi’ user on the Raspbian image is vulnerable enough when exposed to the internet, let alone a system with a known root password.

  1. First we need to update our repositories. I found out the hard way that if we do this any later, some installations just won’t work.
    apt-get update
  2. Next, we need to install sudo so that we can grant administrative privileges to non-root users.
    apt-get install sudo
  3. Alright, with that done, let’s add the new user.
    adduser USERNAME

    We have to set a password for this new user, but all other fields can be left blank.

  4. To grant administrative privileges, we can add USERNAME to the sudo group.
    adduser USERNAME sudo
  5. With that done, we can reboot and login as USERNAME.

Securing The Operating System

Regenerating Unique Host Keys

  1. The very first thing we need to do is regenerate the unique host keys.
    sudo rm /etc/ssh/ssh_host_* && sudo dpkg-reconfigure openssh-server
  2. Next we need to verify that the keys have indeed changed and make sure we’re still able to login. If you’ve been doing all this via SSH, try to login using another Terminal or PuTTY session. If you’re using PuTTY, you should see a warning and getting past it is simple. If you’re using Terminal or a Linux machine, you need to remove the host key from your known hosts file. Here’s how you do it if you previously logged in using the Pi’s IP address:
    ssh-keygen -R [Pi's-IP-address-here]

    If that doesn’t work, and you’re okay with resetting your entire known hosts file, just delete it:

    rm ~/.ssh/known_hosts

Setting Up Key Pair Authentication for SSH

Next, we should prefer key pair authentication for SSH over the default password authentication (as long as we don’t lose our private key, that is). If you’re a Mac or Linux user, follow along. If you’re stuck with Windows, like I am, here‘s a good article about setting up key pair authentication using PuTTY.

  1. On your own terminal (not on the Pi), do this to create your private and public key pair:
    ssh-keygen -t rsa
  2. Follow the on-screen instructions to create the SSH keys on your computer. To use key pair authentication without a passphrase, press Enter when prompted for a passphrase. After completing this, 2 files will be created in the directory ~/.ssh (or /home/USERNAME/.ssh). The file id_rsa is your private key, and id_rsa.pub is your public key. Now, we need to upload the public key to the Pi. Let’s use SCP for this:
    scp ~/.ssh/id_rsa.pub USERNAME@[Pi's-IP-address-here]:

    Change [Pi's-IP-address-here] to your Pi’s reserved IP address.

  3. Then, using the Pi’s console, on the Pi’s home directory, let’s create the .ssh directory:
    mkdir .ssh
  4. Next, move and rename the uploaded public key into the .ssh directory, and change permissions to secure it:
    mv ~/id_rsa.pub ~/.ssh/authorized_keys
    chown -R USERNAME:USERNAME .ssh
    chmod 700 .ssh
    chmod 600 .ssh/authorized_keys

After that’s done, our Pi will be able to authenticate us using our public key. To try this out, log out from your Pi and try to log in again. Instead of the usual SSH password prompt, you will be asked for your key’s passphrase if you set one. If you didn’t set a passphrase, you’ll be logged in immediately.

Disabling Password Authentication and Root Login

When we’re able to log in using key pair authentication, we should disable password authentication. While we’re at it, let’s disable root login as well:

  1. Edit the file /etc/ssh/sshd_config
    sudo nano /etc/ssh/sshd_config
  2. Set ‘PasswordAuthentication’ value to ‘no’.
  3. Set ‘PermitRootLogin’ value to ‘no’.
  4. Restart SSH service.
    sudo service ssh restart
  5. Test by logging in without your private key; the attempt should be rejected.

Installing Iptables and Setting Up Basic Firewall Rules

To protect against attackers who take advantage of open ports, we should install iptables and set our default firewall rules. For now, we’ll just block everything except localhost, network pings, and SSH on port 22.

  1. Install iptables using apt-get
    sudo apt-get install iptables -y
  2. Create a new file for our default firewall settings
    sudo nano /etc/iptables.firewall.rules
  3. Put the following text into the file, then save it and exit
    *filter
    # Allow all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
    -A INPUT -i lo -j ACCEPT
    -A INPUT -d 127.0.0.0/8 -j REJECT
    # Accept all established inbound connections
    -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    # Allow all outbound traffic - you can modify this to only allow certain traffic
    -A OUTPUT -j ACCEPT
    # Allow SSH connections
    # The --dport number should be the same port number you set in sshd_config
    -A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT
    # Allow ping
    -A INPUT -p icmp --icmp-type echo-request -j ACCEPT
    # Log iptables denied calls
    -A INPUT -m limit --limit 5/min -j LOG --log-prefix "[NETFILTER] denied: " --log-level 7
    # Drop all other inbound - default deny unless explicitly allowed policy
    -A INPUT -j DROP
    # Drop all forwarded traffic
    -A FORWARD -j DROP
    COMMIT
  4. Save and exit nano, then apply the rules
    sudo iptables-restore < /etc/iptables.firewall.rules
  5. Check to see that the rules are in place
    sudo iptables -L

    You should get an output like this:

    Chain INPUT (policy ACCEPT)
    target     prot opt source               destination
    ACCEPT     all  --  anywhere             anywhere
    REJECT     all  --  anywhere             reject-with icmp-port-unreachable
    ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
    ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:ssh
    ACCEPT     icmp --  anywhere             anywhere             icmp echo-request
    LOG        all  --  anywhere             anywhere             limit: avg 5/min burst 5 LOG level debug prefix "[NETFILTER] denied: "
    DROP       all  --  anywhere             anywhere
    Chain FORWARD (policy ACCEPT)
    target     prot opt source               destination
    DROP       all  --  anywhere             anywhere
    Chain OUTPUT (policy ACCEPT)
    target     prot opt source               destination
    ACCEPT     all  --  anywhere             anywhere
  6. All good! We just need to make sure those rules get loaded on every reboot. To do that, we’ll need to create a script:
    sudo nano /etc/network/if-pre-up.d/firewall
  7. Put the following code in the file:
    /sbin/iptables-restore < /etc/iptables.firewall.rules
  8. Save and exit nano, then make the file executable by root and not accessible by anyone else.
    sudo chmod 700 /etc/network/if-pre-up.d/firewall
  9. Test by rebooting and checking iptables right after you’re able to login again.
  10. After confirming that the firewall is automatically restored on boot, let’s make the system logger divert all iptables-related messages to a separate log file. This will come in handy when we need to analyze the logs, saving us from sifting the firewall entries out of everything else. First, let’s create a new file:
    sudo nano /etc/rsyslog.d/iptables.conf

    In the file, enter the following:

    :msg, contains, "[NETFILTER]" /var/log/netfilter.log
    :msg, contains, "[NETFILTER]" ~

    Save and exit, and then restart rsyslog service:

    sudo service rsyslog restart

    After doing this, you should have all log messages carrying the prefix [NETFILTER] stored in /var/log/netfilter.log.

NOTE: This build will eventually evolve into a server, which means we will need more ports opened for our services to function properly. When you have a service up and running locally but can’t access it from your network, remember to check your iptables settings and make sure they aren’t blocking your service port.
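As an illustration, if I later ran a web server, I’d add a rule like this to /etc/iptables.firewall.rules, above the final "-A INPUT -j DROP" line (port 80 here is just an assumption for the example):

```text
# Allow HTTP for a (hypothetical) web server - keep this above the final DROP
-A INPUT -p tcp -m state --state NEW --dport 80 -j ACCEPT
```

Then reapply the rules with sudo iptables-restore < /etc/iptables.firewall.rules.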

Installing fail2ban

The last item in our security checklist is installing a piece of software called fail2ban. It will monitor our log files for failed login attempts. After an IP address has exceeded the maximum number of authentication attempts, it will be blocked at the network level and the event will be logged in /var/log/fail2ban.log. To install it, issue the following command:

sudo apt-get install fail2ban -y
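fail2ban watches SSH out of the box, but its defaults can be overridden in /etc/fail2ban/jail.local instead of editing jail.conf directly. A minimal sketch; the values shown here are assumptions for illustration, not recommendations:

```text
# /etc/fail2ban/jail.local - overrides for the SSH jail
[ssh]
enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 3
bantime  = 3600
```

Restart the service with sudo service fail2ban restart to apply the overrides.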

Getting The System Up-to-Date

Now the system is relatively secure and almost ready, but before we proceed with actually building the server, let’s upgrade everything first.

  1. Update all installed software packages:
    sudo apt-get upgrade -y && sudo apt-get dist-upgrade -y
  2. Update the Pi’s firmware while we’re at it
    sudo apt-get install rpi-update -y
    sudo rpi-update

That’s it, now we have a secured minimum system ready to build. What servers can we build from this starting point? That depends on your imagination. Enjoy!

References and Further Reading

These articles are a good read if you’re into setting up your own Raspberry Pi-based server. I believe they use the standard Raspbian Wheezy image for their tutorials, but most of the commands will run just fine.

  1. How to set up a secure Raspberry Pi web server, mail server and Owncloud installation —
  2. Setting up a (reasonably) secure home web-server with Raspberry Pi — Matt Wilcox

Please have a look at them if you’re interested in learning more. Meanwhile, I’ll be documenting my own endeavor to build my server based on MinibianPi (as opposed to standard Raspbian Wheezy).


To The Clouds We Go!

Google’s datacenter, lit by thousands of blue LEDs from the servers’ status panels.

I’ve been messing around with my WordPress installation somewhat in the past few weeks, and I did that while I was gawking at the prospect of getting into the Game of Clouds. You see, the thought of having my own VPS for me to tinker with has refused to leave my mind these past few months.

Then the unthinkable happened: I deactivated my Jetpack plugin, only to realize that I couldn’t reactivate it because the hosting company blocked access to xmlrpc.php. I began emailing the administrator, almost begging him to open up access to my xmlrpc.php, if only for a short while. But nothing happened. I guess he and the admins before his time had put in way too many security checks for xmlrpc.php ever to be accessible.


I thought this was my chance to migrate, because I never did have a good enough reason to do it in the past. The cheapest option I could find (shoutout to you good people at DigitalOcean) still costs more than four times my current service subscription, but I eventually decided that the benefits would outweigh the cost n-fold. Moving into the cloud allows me to spawn (and kill) any server configuration I like, and have root access (with reasonable security measures in place, of course) to do anything I could possibly want (even wipe out my entire filesystem if I were stupid enough).

So, this day marks the day I officially move to DigitalOcean’s cloud. The move was rather reckless, and I killed my own blog site for a few days in the process, but the experience was worth it.

To the clouds we go!

PS: I’ll be publishing the migration process within a few days. If that kind of stuff is right up your alley, please stay tuned 🙂

PPS: Even though I complain about xmlrpc.php being blocked on my previous subscription, I do know that it was for the good of everybody who paid to be on that shared hosting server. If you’re wondering how xmlrpc.php could be used to make your site (and thousands like it, together) bring down another server without you ever knowing anything about it, here‘s a good read.

Dynamic DNS for Your Pi

My last two posts talked about how I used a Raspberry Pi to act as a low-power home server that now mainly serves two functions: as a NAS box and as a Torrentbox. Now, there are a multitude of other things that we can do with the Raspberry Pi, and a lot of them are controllable using a web-based interface. This gives us ease of control using a lot of devices, but still limits us to the confines of our local network. This post will attempt to address that.

Why I Did It

I don’t have a static IP assigned to my home internet connection. This means that each time something forces my modem to disconnect and then reconnect, I will have a different public IP address. Not good if I want to set up remote access for, let’s say, my Deluge web interface, because I’d have to first figure out what my public IP address changes to, which means I’d have to go home and… well, you get the idea.

Hence, Dynamic DNS, or DDNS.

Port Forwarding

The first thing I need to do to make this work is to enable port forwarding. If you don’t know what it is, a simple Google search will get you all the info you need. For the sake of example, I’ll be forwarding port 22 to my Raspberry Pi’s port 22 so that I can access its SSH service from outside my network.

Next, I need to set up the Dynamic DNS service. What it does is basically route all requests for a particular hostname to the IP address of my choice, even as that address changes. There are two ways to go about doing this.

Free Dynamic DNS Service

A popular free Dynamic DNS service, DynDNS, has apparently stopped its free offering. One of the free alternatives that many people have started to use is NoIP. After signing up for free, I get to choose my subdomain.

Why a subdomain? Because it’s free.

After I have my subdomain set up, I can go ahead and download the DUC Client for Linux. There are plenty of instructions on the internet on how to set this up on Raspbian Wheezy. After I got it up and running on boot on my Pi, I can sit back, relax, and have my Pi update its public IP address to the NoIP service. I can then access my Pi’s secure shell from anywhere by instructing PuTTY to connect to [mysubdomain].[my-no-ip-domain] at port 22.

There’s a catch, though. I’d have to reactivate the free subdomain every 30 days. Well, they gotta do something to get me to buy their paid services, right?

But wait… there’s another way.

Free Dynamic DNS Service With My Own Domain

One of the perks of having my own domain and hosting at IdWebHost is that I can have as many subdomains as I’d like, and it’s dirt cheap. Check their pricing tables for more info, and I also have to say kudos to their customer support team that runs 24/7.

Before I do anything else, I setup a subdomain and confirm that it is indeed accessible. I also check my A records using the Zone DNS Editor. This could be different for each hosting company, so I won’t go into its details here; it’s best to ask their customer support directly.

The Dynamic DNS service of my choice is CloudFlare. I’ve heard a lot about them, and most reviews are positive. Most important, though, is that they offer a free service package.

After the usual sign-up – confirm email – activate 2-factor authentication routine, I proceeded with setting up my domain. I let CloudFlare scan my DNS records, and make sure that the new subdomain I added earlier is detected. If not, I just manually add a new A record, copying settings from my domain’s A record.

Next, I go and read an article about how to set up a custom script both on my hosted web and on my Pi to automatically check for public IP changes every 5 minutes. Just for fun, I changed the PHP script a bit so that it sends me an email every time it detects that my public IP address has changed.
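The gist of such an update script can be boiled down to a small shell function: only touch the DNS record (and send the email) when the public IP actually differs from the one cached on disk. This is a hypothetical sketch, not the article’s script; the function name, the cache file, and the commented-out CloudFlare call are my own assumptions:

```shell
#!/bin/sh
# update_if_changed CACHE_FILE CURRENT_IP
# Prints "updated" and refreshes the cache when the IP changed, else "unchanged".
update_if_changed() {
  cache_file="$1"
  current_ip="$2"
  last_ip=""
  [ -f "$cache_file" ] && last_ip="$(cat "$cache_file")"
  if [ "$current_ip" != "$last_ip" ]; then
    # A real script would update the A record here (e.g. via CloudFlare's API)
    # and, in my tweaked version, also send the notification email.
    echo "$current_ip" > "$cache_file"
    echo "updated"
  else
    echo "unchanged"
  fi
}

# A 5-minute cron job would run something like (icanhazip.com is one of
# several public "what is my IP" echo services):
# update_if_changed "$HOME/.last_public_ip" "$(curl -s https://icanhazip.com)"
```

The point of the cache file is that the DNS provider only gets called on actual changes, not every five minutes.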

Finally, I have to actually make the internet use CloudFlare’s DNS to make this all work. So I go to my IdWebHost management page and set my nameservers to point to CloudFlare’s nameservers instead of IdWebHost’s. People usually say that DNS update propagation is slow as a slug, taking up to 24 hours to complete. Not the case with CloudFlare. I just needed to issue an “ipconfig /flushdns” on my Windows command prompt a few times to force an update.

A word of caution for this setup: because some web hosting companies set up their CPanel sites as [domain]:[port] (in other words, CPanel doesn’t use a subdomain), you may need to tinker a little bit with your DNS settings so that CPanel doesn’t run through CloudFlare’s CDN. I chose to completely bypass CloudFlare’s CDN, except for this blog, because really the only thing I’m using from them is the Dynamic DNS service. Your mileage may vary.


Now I have access to my home network, which I can extend simply by configuring port forwarding on my router, and I don’t have to worry about my public IP address changing after a blackout (which unfortunately is a common occurrence on this side of the world).

As a matter of fact, my house just experienced a blackout today at about 9 a.m., at which time everybody was out working. At about 3.15 p.m., I was able to access my torrentbox from my cellphone. A query on my Pi’s uptime showed 12 minutes, which means that within 12 minutes of my Pi starting up, my new public IP was successfully propagated. Awesome!

Raspberry Pi as a Torrentbox

This article is a follow-up to my earlier post about how to run a Raspberry Pi box as an always-on NAS box. I figured that since I only have a theoretical maximum of 4 concurrent clients in the house, and none of them are intensive users, I might as well squeeze some more juice out of the Pi and run it as an always-on torrentbox. Internet speeds in Indonesia are notoriously slow, and even downloading the Raspbian Wheezy image takes quite a while. Scheduling downloads for off-peak hours (middle of the night, or mid-day when nobody is in the house) seems reasonable. I could even schedule the machine to be a seeder during those hours and, for once, give back to the torrent community. (Confession: I’m usually a leecher and stop seeding once my download is completed.)

Choice of Clients

I like the idea of thin client access, which minimizes resource usage on the Pi itself. That’s why I chose Deluge. I did try using Transmission, and the fact that I need to run the Transmission desktop client on X so that I can access its web UI appalled me. So there you go, Deluge FTW!

Thin Client Installation

To install the Deluge daemon and its console client, I did this as the pi user:

sudo apt-get install deluged
sudo apt-get install deluge-console

I log in as mysambauser to do everything, so that the thin client will run as mysambauser. Why? Because I think running it as a user who already has access to the USB HDD mounts makes a lot more sense than adding pi to the sambausers user group. If I wanted to be more careful, I’d create an entirely new user, let’s say deluge-user, put it in the sambausers group, and work from there. But I’m too lazy for that. Ha!

Alright, let’s login as mysambauser:

su mysambauser

Now let’s run deluged so that it generates a config file, and then kill it afterwards:

deluged
pkill deluged

Alright, that should do it. Now, let’s create a backup of the config file:

cp ~/.config/deluge/auth ~/.config/deluge/auth.old
sudo pico ~/.config/deluge/auth

Go to the end of the file and add the user/password/access level of your liking. I went like this:

USERNAME:PASSWORD:10
Not the best, I know, but for the sake of example it’ll do. Now, go into the console:


Once inside, I went:

config -s allow_remote True

After that, I went like this to check that the setting is indeed changed:

config allow_remote

After I was sure remote access is allowed, I exit the deluge console by, sure enough, typing exit. Then, I restart the deluge daemon:

pkill deluged
deluged
Deluge Console
Setting up remote access.

Right. All done for the Pi for now. Time to install Deluge client on my PC. Downloads are available at the Deluge website. After installing it, I fire it up and go to preferences:

Deluge Desktop 01
Preferences menu.

I then disable classic mode by unchecking that first checkbox:

Deluge Desktop 02
Disable classic mode.

I then have to restart the Deluge client to leave classic mode, after which I’m presented with the connection manager.

Deluge Desktop 03
Connection manager.

Next, I enter mysambauser’s credentials (leaving the port alone for now), and click connect.

Deluge Desktop 04
Host credentials.

Alright, so now I can control my Pi’s torrents from the comfort of my PC without having to keep my PC on to do the actual download. Cool bananas!

Web Interface Setup

Now, I’d like to have web access to my Deluge thin client. First, I log out of mysambauser’s shell to get back to pi’s shell by pressing Ctrl-D, then:

sudo apt-get install python-mako
sudo apt-get install deluge-web

Next, I switch back to mysambauser:

su mysambauser

Then I run deluge-web and kill it, to make it create a configuration file that I can edit:

deluge-web -f
pkill deluge-web

The -f flag up there tells deluge-web to fork into the background. If you don’t use it, the console will just sit there and wait, and you’d have to exit by hitting Ctrl-C. Not pretty. Deluge should’ve given some warning about this. Anyway, I go and edit the configuration file:

sudo pico ~/.config/deluge/web.conf

I just change the port. I don’t like using the default port. Alright then, now I start deluge-web again:

deluge-web -f

Next, I browse from my PC to [Pi’s-IP-address-here]:[deluge-web’s-port]. I log in using the default password “deluge” and choose to change it immediately. Done.

Download Locations

I’d want my downloads to go to my external hard drive, lest my SD card become crowded and eventually run out of space. I do this by going to Preferences, Downloads, and set up my directories there. This is the reason why I use the mysambauser user instead of the default pi user, because pi doesn’t have access to my external hard drive.

I create four directories under /media/USBHDD1/Torrents: Backup, Completed, Incomplete, and Watch. Next, I add a samba share location to the Torrents directory. This way I can just drop torrents into the Watch folder or copy completed torrents from my Completed folder. Easy.

To test, simply copy a torrent file into the Watch directory. It’ll immediately disappear and be whisked to the Backup directory, and if you’re watching your desktop client or web UI, the torrents will show up and immediately start downloading. Delightful!

Setting Up Deluge to Start On Boot

To have the Pi start Deluge on boot, I’m going to skip the scripts and just use the download link the guys at provided:

sudo wget -O /etc/default/deluge-daemon
sudo chmod 755 /etc/default/deluge-daemon
sudo nano /etc/default/deluge-daemon

I change the user to mysambauser, and then do this to download the actual init script:

sudo wget -O /etc/init.d/deluge-daemon
sudo chmod 755 /etc/init.d/deluge-daemon
sudo update-rc.d deluge-daemon defaults

Alright, time to reboot now:

sudo reboot

After rebooting, if you can’t find the web UI, something is wrong with either the permissions, the hard drive mounting, or the init script itself. I won’t get into it, as I experienced no problems at this point.

So, there you have it, I now have a Pi box capable of leeching (and seeding torrents) automatically just by dropping a torrent file into the Watch folder, and can be controlled either with a desktop client or web UI. As before, any comments or suggestions are welcome. Cheers!

Raspberry Pi as a NAS Box

I finally bought myself a Raspberry Pi Rev B from Element 14. I got it from an importer in Bandung, and I decided I’d go for a little shopping myself to the computer store to get me the tidbits. So here’s my kit.

My Raspberry Pi Kit
  1. Raspberry Pi Revision B (Element 14 UK)
  2. 8GB Class 10 SDHC with NOOBS pre-installed
  3. Local-made acrylic casing
  4. 1.5m USB cable
  5. USB power adapter rated at 2.1A
  6. CAT-5e UTP cable
  7. Ralink RT5730 USB WiFi dongle with rotatable antenna (not shown)
The Pi, made in the U.K.

The plan is to get it to work with my existing Western Digital MyBook 2TB external hard drive. I found so many tutorials on the interwebs about how to do this, but I guess I’ll just share my experience.

Hardware Assembly

The hardware assembly is pretty straightforward. The acrylic case snaps together without any problems, and voilà, my Pi’s got a casing.

Pi In A Box

Operating System

Next, I need to flash the SD card. It’s already got NOOBS on it, but I won’t be using it because I want to use the latest Raspbian Wheezy. I downloaded it from the Raspberry Pi website and flashed my SD card using Win32 Disk Imager. No problems here.

First Boot

OK, so here’s the first problem. I have two 22″ monitors connected to my PC, and nothing else. So I had to hijack the TV in the living room to do the first boot. Luckily I have a wireless mouse/keyboard combo handy, so fewer cables, and it plugs into just one USB port on the Pi.

My Raspberry Pi Rev B, with 8GB SD Card, connected via RCA to my TV, controlled using a wireless keyboard. Yes, that’s a clay cat.
I had to do this to install the Pi and enable SSH. Yes, I have three clay cats. Meow.

Default username is “pi” and password is “raspberry”. Upon first login, the Pi will automatically enter the raspi-config utility. Here’s my checklist on first boot (and second boot, for that matter):

  1. Expand the filesystem. This needs a reboot. Do this to enter raspi-config:
    sudo raspi-config
  2. Enable SSH. To do this go to Advanced, SSH, then Enable SSH.
  3. Exit raspi-config, and run the following command:
    ifconfig

    Take note of the hardware address shown, because it’ll come in handy later on when I reserve an IP address for the Pi.
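Fishing the hardware address out of the network configuration output can also be done with grep. A small sketch; the sample line mimics the Wheezy-era ifconfig format, and the MAC shown is made up:

```shell
# mac_of: print the first MAC address found in the given text
mac_of() {
  echo "$1" | grep -oE '([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}' | head -n 1
}

sample='eth0      Link encap:Ethernet  HWaddr b8:27:eb:12:34:56'
mac_of "$sample"   # prints b8:27:eb:12:34:56
```

On the Pi itself you’d feed it the real output, e.g. mac_of "$(ifconfig eth0)".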

The Pi is now capable of running headless, so I shut it down.

sudo halt

Then I moved it to a better position, and connected it via UTP to my router. I also plugged in my external hard drive. I’ll be continuing on the setup remotely from the comfort of my PC. Before powering it up again, I decided to go ahead and reserve an IP address for the Pi, using the web-based router management UI. I won’t go into this because every router is different, but the point is to “map” the MAC address of the Pi’s network interface card to an IP address typically assigned by the router’s DHCP server. Alright, now power up!

System Update

The first thing I did after getting the Pi accessible by SSH (I use PuTTY for this) is to do a complete system update.

sudo apt-get update
sudo apt-get dist-upgrade

The upgrade process required that I download 460MB worth of Debian packages. After the update completed, I decided to create a backup of the image using Win32 Disk Imager, the same tool I used to write the system image to the SD card. This way, when I want to do a fresh install, I have the Pi updated to a more recent version and ready to run headless.

After saving the updated image, I decided to run raspi-config again to change my hostname. This is probably a good idea because if I buy another Raspberry Pi and run it using the image I just created, the two of them won’t have the same hostname.

Formatting The Hard Drive

I use a 2TB external hard drive connected via USB. I chose to format it using EXT3 for reasons I’ll explain later. I just went along with instructions I found using Google Almighty. First, I go into fdisk:

sudo fdisk /dev/sda

I deleted the one and only partition existing on the hard drive, and created a new primary partition that spans the entire disk. This is probably the scenario if you’re using a new hard drive. If you are going this route, just make sure you don’t have any data worth keeping on the disk. You have been warned.

Command (m for help): p

Disk /dev/sda: 2000.4 GB, 2000365289472 bytes
58 heads, 60 sectors/track, 1122690 cylinders, total 3906963456 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00064002

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048  3906963455  1953480704   83  Linux

Command (m for help): d
Selected partition 1

Command (m for help): p

Disk /dev/sda: 2000.4 GB, 2000365289472 bytes
58 heads, 60 sectors/track, 1122690 cylinders, total 3906963456 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00064002

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-3906963455, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-3906963455, default 3906963455):
Using default value 3906963455

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Explanation of commands in the order they are issued:

  • p: list partitions
  • d: delete partition
  • p: list partitions, to check that the partition is indeed marked for delete
  • n: new partition, and then just accept default values for a single primary partition
  • w: write the changes to disk

I then check that the partition is indeed available.

sudo fdisk -l

That command spits out the following result:

Disk /dev/mmcblk0: 15.7 GB, 15720251392 bytes
4 heads, 16 sectors/track, 479744 cylinders, total 30703616 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000b5098

        Device Boot      Start         End      Blocks   Id  System
/dev/mmcblk0p1            8192      122879       57344    c  W95 FAT32 (LBA)
/dev/mmcblk0p2          122880    30703615    15290368   83  Linux

Disk /dev/sda: 2000.4 GB, 2000365289472 bytes
38 heads, 36 sectors/track, 2855967 cylinders, total 3906963456 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00064002

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048  3906963455  1953480704   83  Linux

Alright, now that I have the disk recognized as /dev/sda1, I need to make the filesystem.

sudo mkfs -t ext3 /dev/sda1

It takes a while to make the filesystem, so grabbing a cup of joe at this point might be a good idea.

Mounting and Sharing

After mkfs, the drive is ready to use but not yet mounted. So I prepared a directory for the share mount point:

sudo mkdir /media/USBHDD1

I decided to just add an entry into fstab and reboot the Pi so that I know for sure it works.

sudo pico /etc/fstab

In the editor, I added the following entry:

/dev/sda1       /media/USBHDD1  ext3    noatime           0       0
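Each fstab line has six whitespace-separated fields (device, mount point, filesystem type, options, dump flag, fsck pass order), so a quick sanity check before rebooting is to count them:

```shell
# The entry added above; awk's NF counts whitespace-separated fields
line='/dev/sda1       /media/USBHDD1  ext3    noatime           0       0'
nf=$(echo "$line" | awk '{print NF}')
[ "$nf" -eq 6 ] && echo "fstab entry has the expected 6 fields"
```

A sturdier dry-run is sudo mount -a, which mounts everything listed in fstab that isn’t mounted yet, without rebooting.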

Next, I reboot the machine.

sudo reboot

After logging in, I tried running df to see if the hard drive is properly mounted.

df
The following output thus emerges:

Filesystem      1K-blocks    Used  Available Use% Mounted on
rootfs           14988412 2785492   11548708  20% /
/dev/root        14988412 2785492   11548708  20% /
devtmpfs           219764       0     219764   0% /dev
tmpfs               44788     220      44568   1% /run
tmpfs                5120       0       5120   0% /run/lock
tmpfs               89560       0      89560   0% /run/shm
/dev/mmcblk0p1      57288    9864      47424  18% /boot
/dev/sda1      1922698040   69088 1824954920   1% /media/USBHDD1

Now that I’m sure I have the drive mounted properly on boot, it’s time to install samba. I found some instructions on how to do it, but they use NTFS-formatted hard drives, which differs from my setup. The samba part is, however, pretty much the same.

sudo apt-get install samba samba-common-bin

Next, it’s always a good idea to store a backup of any config file before editing it.

sudo cp /etc/samba/smb.conf /etc/samba/smb.conf.old

On to the config!

sudo pico /etc/samba/smb.conf

First order of business: activate user-based security. So I find the following line:

####### Authentication #######

# "security = user" is always a good idea. This will require a Unix account
# in this server for every user accessing the server. See
# /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/ServerType.html
# in the samba-doc package for details.
#   security = user

I uncommented the last line to activate user-based security by removing the hash sign:

# "security = user" is always a good idea. This will require a Unix account
# in this server for every user accessing the server. See
# /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/ServerType.html
# in the samba-doc package for details.
   security = user

To add shared folders, I add the following sections to the very end of the configuration file:

[Public]
comment = Public Shared Folder
path = /media/USBHDD1/Public
valid users = @sambausers
read only = no

[Apps]
comment = Applications Repository
path = /media/USBHDD1/Apps
valid users = @sambausers
read only = yes
write list = mysambauser

[MediaCenter]
comment = Media Center Repository
path = /media/USBHDD1/MediaCenter
valid users = @sambausers
read only = yes
write list = mysambauser

[mysambauser]
comment = mysambauser's private folder
path = /media/USBHDD1/Users/mysambauser
valid users = mysambauser
read only = no

I repeated the private folder for every user I plan to have in the network to provide them with personal space. This would probably not be necessary if I had a huge SD card because I could just activate their home folders with read-and-write access. Unfortunately, that is not the case.
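Since the private-folder block is identical for every user, it can be generated instead of typed. A hypothetical helper (emit_user_share is my own name; review the output before appending it to smb.conf):

```shell
# Print one smb.conf share section for the given user's private folder
emit_user_share() {
  u="$1"
  printf '[%s]\n' "$u"
  printf "comment = %s's private folder\n" "$u"
  printf 'path = /media/USBHDD1/Users/%s\n' "$u"
  printf 'valid users = %s\n' "$u"
  printf 'read only = no\n\n'
}

# One section per planned user; redirect to a file and paste into smb.conf
for u in mysambauser; do
  emit_user_share "$u"
done
```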

Now, I could just go ahead and create users who will have access to the samba shares, but I like to put them in a group together so I can easily assign privileges as a group. Thus the new user group sambausers:

sudo addgroup sambausers

Next, the users that will be in the group:

sudo useradd -m -G sambausers mysambauser

The -m flag creates a home directory for the user (in case we need it later), and the -G flag adds the new user to the sambausers group. I do this for every samba user I have. After that’s done, I set UNIX passwords and samba passwords for them (also repeat for all intended users):

sudo passwd mysambauser
sudo smbpasswd -a mysambauser
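Since every one of those steps has to be repeated per user, a tiny dry-run loop helps keep things consistent. Here alice and bob are placeholder usernames, and it only prints the commands so nothing changes by accident:

```shell
# Dry run: print the provisioning commands for each intended samba user.
# Usernames are placeholders; review the output before actually running it.
for u in alice bob; do
  echo "sudo useradd -m -G sambausers $u"
  echo "sudo passwd $u"
  echo "sudo smbpasswd -a $u"
done
```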

Now, I need to change ownership of the mount point, set its permissions, and create the correct directory structure inside it to match the samba configuration file.

sudo chown -R mysambauser:sambausers /media/USBHDD1
sudo chmod -R 775 /media/USBHDD1

I won’t go into the details of creating the directories, but I will point out that this will need to be done as the mysambauser user. All done! Now I can log out as mysambauser (and return to pi user) and start the samba service.

sudo /etc/init.d/samba start

Now I can browse to \\HOSTNAME from any Windows machine and map a network drive to the shares I just set up.

NOTE: If you use a firewall of some sort, please make sure your firewall allows access to ports 137, 138, 139, and 445 using TCP, and also port 137 using UDP. These ports are in use either by Samba itself or Windows hostname resolution (NetBIOS).
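If the firewall in question happens to be ufw, the note above translates to something like this. It's a sketch that only prints the rules; drop the echo to actually apply them:

```shell
# Print (not run) the ufw rules for the Samba/NetBIOS ports listed above.
for rule in 137/udp 137/tcp 138/tcp 139/tcp 445/tcp; do
  echo "sudo ufw allow $rule"
done
```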

Performance and Conclusion

This article actually documents the second build, which is why you may have noticed that the SD card in use is 16GB instead of 8GB. I also chose a wired connection to my router; I'm saving the Ralink adapter for a second unit later on.

Performance is a lot better on this build. Here are the figures:

Test                                       | Wireless Connection             | Wired Connection
Copy 3GB file from desktop to Pi           | 400~600 kB/s, 600 kB/s~1 MB/s   | 600 kB/s~1.5 MB/s, 700 kB/s~3 MB/s
Copy several files (2~10 MB each)          | 450~800 kB/s, 700 kB/s~1.2 MB/s | 700 kB/s~1.8 MB/s, 700 kB/s~4 MB/s

The figures are approximations only and should be taken with a grain of salt. I am by no means an experienced tester, and these are just some real-world results that I’ve got from my week-long use of the Pi unit. Further testing should include simultaneous read/write operations serving anywhere from 2-10 clients concurrently, but I’ll leave that to Anand or Linus or whoever else wants to do the testing.

Any comments or suggestions are welcome. Until next time. Cheers!

How I Setup My PHP Dev Env: Part 2

This post is the second part of a 2-part series on how I set up my PHP development environment, using Windows 7 as the base operating system and a basic LAMP server running on Ubuntu inside Oracle VirtualBox. (You can read the first part here.)

Section 1: phpMyAdmin

To be able to manage MySQL conveniently, we will install phpMyAdmin. We will need an internet connection and perform the following steps:

  1. Fire up your virtual machine and login.
  2. Run sudo apt-get install phpmyadmin to install the package.
  3. Direct your browser to localhost:8080/phpmyadmin. You should see the phpMyAdmin login page there. Try to log in using the root account and the password you designated to it when you installed Ubuntu LAMP server.
  4. If you did not set a root password for MySQL, run cd /etc/phpmyadmin to switch to the phpMyAdmin installation directory and then run sudo pico config.inc.php to edit the configuration file. Press Ctrl-W and type in allownopassword to find the keyword, and then uncomment the line. Press Ctrl-X and then Y to save the file. You should now be able to log in as root without using a password.

That’s it, we should now be able to use phpMyAdmin.

Section 2: NetBeans PHP

Now, we’re going to install NetBeans PHP as our IDE of choice. You’re free to install any other IDE to your liking, but I’m only going to explain how to use NetBeans PHP.

  1. Go to the shared folder you have set up previously (see Part 1), create a new folder for your new project, and create a new index.php file there. Just fill it up with some HTML or some code.
  2. Download the Java SE Development Kit if you haven’t already done so. This is a prerequisite to install the NetBeans IDE. Install it.
  3. Download the installation file from the NetBeans website. Install it.
  4. Open NetBeans, then go to File –> New Project.
  5. Select PHP Application with Existing Sources. Click Next.
  6. Set the Sources Folder to the folder you created in step 1. Select your preferred PHP version, then click Next.
  7. Set the Run As option to Local Web Site, and click Finish. Make sure to edit your site URL to add your forwarded port number (set to 8080 in Part 1).

We are now ready to start writing code. Try creating a PHP file and write some echoes or whatever in it. Save it and then click the play button on the top toolbar to open a new browser window/tab directed at your site.

Section 3: Apache/PHP Configuration

The apache2 server works almost out of the box, but usually we would want it to be properly configured and equipped with some of the more common modules. Here’s how to do some basic configuration:

  1. Run sudo pico /etc/php5/apache2/php.ini to open and edit the PHP configuration file.
  2. Find the date.timezone parameter and set it to your liking. You can find the list of valid timezones here.
  3. Find the short_open_tag parameter and set it to Off. This is to prevent you from coding with short open tags, which might not be supported on some servers. Press Ctrl-X and then Y to save and close the file.
  4. Run sudo pico /etc/apache2/sites-available/default to open and edit the apache2 virtual host configuration file.
  5. In the <Directory> tag for root (/) and /var/www/, set AllowOverride to FileInfo Indexes. Save and close the file.
  6. Run sudo a2enmod rewrite proxy proxy_http proxy_ftp proxy_connect to enable the listed modules needed to support mod_rewrite.
  7. Run sudo service apache2 restart to restart the apache2 server.
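With mod_rewrite enabled and AllowOverride in place, a minimal .htaccess can now be used. A sketch along the lines of what a CodeIgniter-style front controller needs:

```apache
# Sketch: route every request that isn't a real file or directory
# through index.php (the usual CodeIgniter front-controller pattern).
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php/$1 [L]
```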

That’s it. Now we’re practically ready to start cracking. I might write some more about the actual code I’m working on, which is based on CodeIgniter 2 integrated with Doctrine ORM version 2, but I can’t make any promises as of now.

How I Setup My PHP Dev Env: Part 1

This is a post long overdue. What originally started as a favor to a good friend of mine has now also turned into a personal quest to document setting up my PHP development environment on my MacBook.

DISCLAIMER: This is some geeky stuff, you have been warned.

So, without further ado, let’s get to it!

Section 1: The Operating System

I have an Apple MacBook A1342, so the documentation pertains to that specific hardware. If you have another type or brand of computer, or if you choose another operating system, the following instructions should not apply to you. You should go ahead and install your operating system as you would normally.

Here's what you'll need:
  1. Apple MacBook A1342, or a variant of it
  2. Apple Mac OS X installation DVD that came with the laptop
  3. Single-disc Windows 7 64-bit installation DVD
  4. Internet connection (either via LAN or broadband modem)

And here's what to do:
  1. Fire up your MacBook and let it enter Mac OS X as usual. Find Bootcamp Assistant in your Applications folder and open it. Set up your BOOTCAMP partition. As I’ve done this 2 years ago, I don’t need to do it again, and I can’t remember the exact steps, so please go visit Apple’s support site if you need more detailed information.
  2. Insert your Windows 7 64-bit installation DVD and restart your MacBook. Hold down the Option key while booting up to show the boot options, and select the DVD.
  3. Install Windows 7 as you would normally. Select BOOTCAMP as the partition to install Windows 7 on.
  4. At this point you should be able to log into Windows 7. Now, insert your Mac OS X installation DVD, find Bootcamp for Windows, and install it.
  5. Install any additional drivers and software as required.
  6. Voila! Your base operating system is now ready.

Section 2: The Virtual Machine

I choose to use a virtual machine to host my PHP and MySQL server. Why?

  1. I can emulate the deployment server environment as closely as possible without having to compromise my development environment. This way I can install only the server software that's necessary, perform server configuration as if I were doing it on the real server, and reinstall the entire server if necessary without having to reinstall my base operating system.
  2. I can take the VM with me and run it on another computer if need be.
  3. I don’t have to install server software directly on my base operating system. This keeps it lean and clean for gaming and such 😀

Here's what you'll need:
  1. Newest stable release of Oracle VirtualBox
  2. Installation DVD (or disk image file) of a guest operating system of your choice, or preferably an image of it. I tend to like the latest server version, so I use Ubuntu 11.10 Oneiric Ocelot Server Edition (64-bit), which is downloadable from the Ubuntu website.
  3. A preferably fast and reliable internet connection.

And here's how to set it up:
  1. Install Oracle VirtualBox. Just click next next next done. Simple as that.
  2. Fire up Oracle VirtualBox and click File –> Preferences.
  3. Set your Default Machine Folder, or leave it as it is if you prefer to do so. Click OK.
  4. Specify a name for your new virtual machine, set Operating System to Linux and Version to Ubuntu (64 bit). Click Next.
  5. Set the RAM allocated for the virtual machine. Usually 512MB is enough for servers, but you’d probably want to use more if you intend to install the desktop version of Ubuntu. Click Next.
  6. Create a new virtual hard drive. Leave all options as they are and click Next to bring up the virtual hard drive dialog. Select VDI, Dynamically Allocated, and set the size to your liking. I use 20GB.
  7. Click Next, then click Create. After the hard drive dialog disappears, click Create again.
  8. Insert your Ubuntu installation DVD or set the disk image file in Settings –> Storage –> IDE Controller and start your virtual machine.

At this point you should be booted into the Ubuntu installation DVD. Let’s move on:

  1. Select English and then select Install Ubuntu Server.
  2. Select your language and location, and configure the keyboard if necessary. I use English (U.S.).
  3. Specify your hostname.
  4. Set up your partitions. To keep it simple, I use guided partitioning using the entire disk.
  5. Set up your user name, password, and home folder encryption (if you want to).
  6. When asked for a proxy configuration, leave it blank. The installer should start downloading stuff using apt.
  7. Choose your automatic security update setting.
  8. In the software selection, select LAMP Server and Mail Server. Select Internet Site for the mail server setting.
  9. It’s safe to install GRUB as recommended, go ahead and do it.

Section 3: VirtualBox Guest Additions

Proceed until the virtual machine restarts, then shut it down. Remove the Ubuntu installation DVD (image) from the (virtual) drive and insert the VBoxGuestAdditions.iso image from C:\Program Files\Oracle\VirtualBox. Now it's time to play with the command line:

  1. Run sudo apt-get update to update the repository to the latest source.
  2. Run sudo apt-get dist-upgrade to upgrade all installed packages to the latest version.
  3. Run sudo apt-get install dkms to install Dynamic Kernel Module Support (DKMS), which will be used to install VirtualBox guest additions on your Linux machine.
  4. Run sudo mount /dev/cdrom /media/cdrom to mount the VBoxGuestAdditions media.
  5. Run cd /media/cdrom to switch directory to the mounted media.
  6. Run sudo sh ./VBoxLinuxAdditions.run to install the VirtualBox Guest Additions.

Section 4: Shared Folders

Now that we have VirtualBox Guest Additions installed, we can start adding shared folders to the virtual box:

  1. Shut down the virtual box, go to Settings –> Shared Folders, and create a shared folder referencing a directory of your choice on the host machine. Take note of the share name.
  2. Run mkdir /home/[your_username]/public_html to create your own public html directory. This will be the mount point of the shared folder.
  3. Run id [your_username] to find out your uid and gid. Take note of this.
  4. Run cd /etc/init.d to change to the startup script directory.
  5. Run sudo pico vboxshare-automount to open pico and create a new file called vboxshare-automount.
  6. Write mount -t vboxsf -o uid=[your_uid],gid=[your_gid] [your_shared_folder_name] /home/[your_user_name]/public_html in the file. Press Ctrl-X and then Y, enter, to save the file.
  7. Run sudo chmod +x vboxshare-automount to make the file executable.
  8. Run sudo update-rc.d vboxshare-automount defaults to make the script run on every startup.
  9. Restart your virtual box, then login and run cd public_html. If you run ls, you should see the contents of your shared folder there.
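Putting steps 5 and 6 together, the finished startup script would look something like the file this snippet writes out. The uid/gid (1000), share name (public_html_share), and username (myuser) are placeholders for the values from steps 1–3, and the LSB header block keeps update-rc.d from complaining about missing runlevel information:

```shell
# Write the finished vboxshare-automount startup script and show it.
# uid/gid, share name, and username below are placeholders.
cat > vboxshare-automount <<'EOF'
#!/bin/sh
### BEGIN INIT INFO
# Provides:          vboxshare-automount
# Required-Start:    $local_fs
# Required-Stop:
# Default-Start:     2 3 4 5
# Default-Stop:
# Short-Description: Mount VirtualBox shared folder into public_html
### END INIT INFO
mount -t vboxsf -o uid=1000,gid=1000 public_html_share /home/myuser/public_html
EOF
chmod +x vboxshare-automount
cat vboxshare-automount
```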

Section 5: Port Forwarding, User Directories, and PHP Parsing

Alright, we now have a shared folder that we can use to directly edit our PHP files. Now we need to enable our host machine to access the virtual box’s PHP server:

  1. Shut down your virtual box if it's not already off.
  2. Go to Settings –> Network –> Adapter 1. It should be set to NAT by default.
  3. Click on Advanced and then Port Forwarding.
  4. Click on the add icon, then set rule name to http, protocol is TCP, host port 8080, and guest port 80.
  5. Start up your virtual box.
  6. Fire up your browser and go to http://localhost:8080. You should see the default apache2 page saying that it works.
  7. Now go to the virtual box’s console and run sudo a2enmod userdir to enable per-user directory service.
  8. Run sudo pico /etc/apache2/mods-available/php5.conf to edit the PHP5 module configuration. Comment out the part between the <IfModule> tags to enable PHP parsing in home directories.
  9. Run sudo /etc/init.d/apache2 restart to restart the apache server.
  10. On your host machine, create a new file info.php in the root of the shared folder. Fill it with the usual phpinfo(); code.
  11. Direct your browser to localhost:8080/~[your_user_name]/info.php. You should see the PHP information page there.
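Step 10 is small enough to do from a terminal sitting in the shared folder root (the location is wherever your share lives on the host); info.php only needs a single phpinfo() call:

```shell
# Create the info.php test page from step 10 and show its contents.
cat > info.php <<'EOF'
<?php phpinfo();
EOF
cat info.php
```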

That’s it. Now we have a working Apache/PHP server running on Ubuntu and accessible seamlessly through our Windows machine. Next we should check if we have phpMyAdmin and everything else we need to really start cracking. But I think I’ll save that for another post: Part 2.

Ambidextrous No More

So I’ve been visiting RTC Denpasar twice, annoying my friend about twice a week for three weeks, and scouring the interwebs to get a hold of this:

Logitech Attack 3
Logitech Attack 3

Sadly, nobody had it. Word is that supply has been halted from somewhere up the chain. Bottom line: there's no telling when a resupply will take place.

A couple of friends suggested some other series that I might be interested in, particularly from Genius. I wasn’t interested, because I had no knowledge of the product quality. Yokes from Saitek were also offered, but they were way beyond my price range. A few days later I found Attack 3 at an online store and I immediately ordered it. Sadly, it was out of stock, too, and the warehouse admin probably forgot to update its status on the page. The sales guy offered to bump it up a class and I only had to add about $16 for it, so I said sure. So this is what I got:

Logitech Extreme 3D Pro
Logitech Extreme 3D Pro

This baby fails one requirement I made for myself: it is not ambidextrous. I thought I was going to have problems flying with my right hand, because (1) I'm left-handed and (2) I couldn't access the mouse while I played. As it turns out, my right hand works just fine. I guess the same thing happened when I first learned to use the mouse with my right hand, to save everybody the trouble of providing a left-handed mouse. As for the mouse, it turns out I didn't need it that badly, and whenever I did I could always revert to the trackpad with my left hand, which was free anyway.

So there you have it, my new joystick. I’ve flown a couple of times using it on 737s, 747s, Cessnas, various vintage planes, and even an F-14. I’ve also managed to land a 737 manually in a crosswind, not to mention F-14 night landings (which were particularly hard because of the high approach speed). I’m overall satisfied, and although I do hope that someday I’ll have my own virtual cockpit, this little guy is more than enough for now.