Set up a Minecraft server on Google Compute Engine with Terraform

    11 May 2016

    My children like to play Minecraft, often with friends and cousins who are remote. In the past, to make that work I would set up my laptop at the house, configure port forwarding on the router, and so on. This often failed: the router would not accept the changes, my laptop firewall would get in the way, etc. Instead I decided to shift all of this to the cloud. In this particular example I will be using Google Compute Engine since it allows you to have persistent disks. To minimize costs I will automate creation and destruction of the Minecraft server(s) using HashiCorp’s Terraform.

    All the Terraform templates and files can be found in this GitHub repo:

    https://github.com/vvuksan/terraform-playground

    You will need to sign up for a Google Cloud account. You may also optionally buy a domain name from a registrar so that you don’t need to enter IP addresses in your Minecraft client. If you do so, rename dns.tf.disabled to dns.tf and change this section:

    variable "domain_name" {
      description = "Domain Name"
      default     = "change_to_the_domain_name_you_bought.xyz"
    }
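
    With DNS enabled, the template can point a hostname at the server so players don’t have to type an IP. A minimal sketch of what such a record could look like (the resource, zone, and instance names here are illustrative, not necessarily what the repo uses):

    resource "google_dns_record_set" "minecraft" {
      # Assumes a Cloud DNS managed zone for your domain already exists
      managed_zone = "minecraft-zone"
      name         = "mc.${var.domain_name}."
      type         = "A"
      ttl          = 60
      # Point the record at the instance's ephemeral external IP
      rrdatas      = ["${google_compute_instance.minecraft.network_interface.0.access_config.0.nat_ip}"]
    }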
    

    As described in the README, this set of templates creates a persistent disk where your gameplay is stored and spins up a Minecraft server only for the time you are playing. When you want to play you will need to type

    make create
    

    and when you are done playing, type

    make destroy
    

    The cost of this should be minimal. In the Terraform template I’m setting a persistent disk size of 10 GB (change that in main.tf if you need to). That will cost you approximately $0.40 per month. On top of that you’d be paying for the g1-small instance, which is about $0.02 per hour. You can certainly opt for a faster instance by adjusting the instance size in the main.tf file. Also, if you are using DNS there will be DNS query costs, but those should be minimal.
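
    For reference, the knobs mentioned above would live in main.tf in roughly this shape (a simplified sketch; resource names and exact attributes may differ from the actual repo, which likely keeps the persistent disk out of the create/destroy cycle):

    # Persistent disk that holds the world data
    resource "google_compute_disk" "minecraft" {
      name = "minecraft-disk"
      type = "pd-standard"
      zone = "${var.gce_zone}"
      size = 10                      # GB; increase if your world outgrows it
    }

    resource "google_compute_instance" "minecraft" {
      name         = "minecraft"
      machine_type = "g1-small"      # bump to e.g. n1-standard-1 for a faster server
      zone         = "${var.gce_zone}"

      # Boot disk; pick a current image name
      disk {
        image = "debian-8-jessie-v20160511"
      }

      # Attach the persistent disk; auto_delete=false preserves the world data
      disk {
        disk        = "${google_compute_disk.minecraft.name}"
        auto_delete = false
      }

      network_interface {
        network = "default"
        access_config {}             # ephemeral external IP for clients to connect
      }
    }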

    Have fun.


    Rsyslog server TLS termination

    10 May 2016

    I was working with a customer trying to configure Fastly’s Log Streaming to ship logs to their rsyslog server. Fastly supports sending syslog over TLS; however, the TLS handshake was not succeeding and we would end up with gibberish in the logs, e.g.

    May  3 13:22:08 192.168.0.10 #001#000#000M#033#000#020#023#000#001#000#000#016log.domain.com#000#002#000#005#001#000#000#000#000 
    

    I looked over a number of different guides with no luck. After trying a number of different things I ended up with the following configuration. This was tested on rsyslog 7 and 8.

    auth,authpriv.*                 /var/log/auth.log
    *.*;auth,authpriv.none          -/var/log/openandclick.log
    kern.*                          -/var/log/kern.log
    mail.*                          -/var/log/mail.log
    
    #
    # Emergencies are sent to everybody logged in.
    #
    *.emerg                                :omusrmsg:*
    
    # Setup disk assisted queues
    $WorkDirectory /var/log/spool # where to place spool files
    $ActionQueueFileName fwdRule1     # unique name prefix for spool files
    $ActionQueueMaxDiskSpace 1g       # 1gb space limit (use as much as possible)
    $ActionQueueSaveOnShutdown on     # save messages to disk on shutdown
    $ActionQueueType LinkedList       # run asynchronously
    $ActionResumeRetryCount -1        # infinite retries if host is down
    
    #RsyslogGnuTLS
    # CA certificate store. Uses generic Debian/Ubuntu CA store
    $DefaultNetstreamDriverCAFile /etc/ssl/certs/ca-certificates.crt
    $DefaultNetstreamDriverCertFile /etc/letsencrypt/archive/log.domain.com/fullchain1.pem
    $DefaultNetstreamDriverKeyFile /etc/letsencrypt/archive/log.domain.com/privkey1.pem
    $DefaultNetstreamDriver gtls
    
    module(load="imtcp"
           streamdriver.mode="1"
           streamdriver.authmode="anon")
    input(type="imtcp" port="5144" name="tcp-tls")
    

    It will use the TLS certificate from /etc/letsencrypt and listen for TLS connections on port 5144. There is no client authentication, i.e. authmode is anon. If you want to authenticate clients you will need to change the authmode, e.g.

    streamdriver.authMode="name" 
    streamdriver.permittedpeer=["test1.example.net", "test.example.net"]
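
    For reference, a client shipping to this server needs a matching TLS forwarding configuration on its side. A minimal sketch in the same legacy syntax, assuming the same CA store and anonymous authentication as above:

    # Client side: forward everything to the TLS listener above
    $DefaultNetstreamDriverCAFile /etc/ssl/certs/ca-certificates.crt
    $DefaultNetstreamDriver gtls            # use the GnuTLS netstream driver
    $ActionSendStreamDriverMode 1           # require TLS for the forwarding action
    $ActionSendStreamDriverAuthMode anon    # matches authmode="anon" on the server
    *.* @@log.domain.com:5144               # @@ means TCP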
    

    Ganglia Web frontend in Ubuntu 16.04 install issue

    03 May 2016

    Ubuntu 16.04 Xenial comes with the Ganglia web frontend 3.6.1 included; however, the package doesn’t pull in all of its dependencies. If you get an error like this

    Sorry, you do not have access to this resource. "); } try { $dwoo = new Dwoo($conf['dwoo_compiled_dir'], $conf['dwoo_cache_dir']); } catch (Exception $e) { print "
    

    you are missing the Apache PHP module (libapache2-mod-php7.0) and the php7.0-xml module. To correct that, execute the following commands

    sudo apt-get install libapache2-mod-php7.0 php7.0-xml ; sudo /etc/init.d/apache2 restart
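
    To verify the modules actually got loaded (assuming the php7.0-cli package was pulled in as a dependency), you can check with:

    # php7_module should appear among the loaded Apache modules
    apache2ctl -M | grep -i php
    # the XML/DOM extensions should be listed by the PHP CLI
    php -m | egrep -i 'xml|dom'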
    

    If you don’t have the Ganglia web frontend enabled in Apache, all you need to do is type

    sudo ln -s /etc/ganglia-webfrontend/apache.conf /etc/apache2/sites-enabled/001-ganglia.conf
    sudo /etc/init.d/apache2 restart
    

    Google Compute Engine Load balancer Let's Encrypt integration

    18 April 2016

    Let’s Encrypt (LE) is a new service started by the Internet Security Research Group (ISRG) to offer free SSL certificates. It’s intended to be automated so that you can obtain a certificate quickly and easily. Currently, however, LE requires installation of their client software, which makes a request to their API for the domain you want to secure and then places a validation token at a web path it specifies so that LE backend servers can check it. In a nutshell, to get a certificate for the domain myhost.mydomain.xyz the LE client will require you to serve predetermined text at a URL they provide, e.g.

    http://myhost.mydomain.xyz/.well-known/jdoiewerhwkejhrwehrheuwhruewh

    If that matches, you have validated that you are the owner of the domain and LE issues you a certificate. More detail on how it works can be found here.

    The difficulty is that in order to automate this process you have to do one of the following:

    • allow the LE client to control your web server (currently only Apache) - this may disrupt your traffic in case of any issues
    • allow it to drop files into a web root, which may be problematic if your domain is behind a load balancer and you need to copy the validation content to all nodes
    • use the standalone method, where LE spins up its own standalone server but requires you to shut down your web server
    • devise a different method

    In the following section I will describe how to do this with the Google Compute Engine (GCE) load balancer, since it supports conditional URL path matching. You could also do something very similar with other load balancers such as Varnish or HAProxy.

    Conceptually what we’ll do is

    • Modify the GCE Load balancer URL map to send all traffic intended for LE to a special backend e.g. any URL with /.well-known/ will be sent to a custom backend
    • Spin up a minimal VM with Apache on GCE
    • Use the LE client Docker image to manage the signing process or simply install the LE client

    To make the configuration easy I will be using https://www.terraform.io since it greatly simplifies this process. This also assumes you are already running a GCE load balancer in front of the domain you are trying to secure.

    First we’ll need to create an instance template. I am using the Google Container Engine images as they already come with Docker installed.

    variable "gce_image_le" {
        description         = "The name of the image for Let's Encrypt."
        default             = "google-containers/container-vm-v20160321"
    }
    
    resource "google_compute_instance_template" "lets-encrypt" {
        name                = "lets-encrypt"
        machine_type        = "f1-micro"
        can_ip_forward      = false
        tags                = [ "letsencrypt", "no-ip" ]
    
        disk {
            source_image    = "${var.gce_image_le}"
            auto_delete     = true
        }
    
        network_interface {
            network         = "${var.gce_network}"
            # No ephemeral IP. Use bastion to log into the instance
        }
    
        metadata {
            startup-script  = "${file("scripts/letsencrypt-init")}"
        }
    
    }
    

    You will notice I am using a startup script (scripts/letsencrypt-init) inside this instance template, which looks like this

    #!/bin/bash
    # Preinstall Apache and pull the Let's Encrypt client image
    apt-get update
    apt-get install -y apache2
    rm -f /var/www/index.html
    touch /var/www/index.html
    docker pull quay.io/letsencrypt/letsencrypt:latest

    # Directory for issued certificates, plus a default client config
    mkdir /root/ssl-keys
    echo "email = myemail@mydomain.com" > /root/ssl-keys/cli.ini
    

    Basically I’m just preinstalling Apache and pulling the Let’s Encrypt client Docker image.

    The next step is to create an Instance Group Manager (IGM) and an autoscaler. The instance group manager defines which instance template is going to be used and the base instance name, whereas the autoscaler starts up instances in the IGM and makes sure there is one replica running. The last step is to define the backend service and attach the IGM to it.

    resource "google_compute_instance_group_manager" "lets-encrypt-instance-group-manager" {
        name                = "lets-encrypt-instance-group-manager"
        instance_template   = "${google_compute_instance_template.lets-encrypt.self_link}"
        base_instance_name  = "letsencrypt"
        zone                = "${var.gce_zone}"
    
        named_port {
            name            = "http"
            port            = 80
        }
    
    }
    
    resource "google_compute_autoscaler" "lets-encrypt-as" {
        name                = "lets-encrypt-as"
        zone                = "${var.gce_zone}"
        target              = "${google_compute_instance_group_manager.lets-encrypt-instance-group-manager.self_link}"
        autoscaling_policy = {
            max_replicas    = 1
            min_replicas    = 1
            cooldown_period = 60
            cpu_utilization = {
                target = 0.5
            }
        }
    }
    
    resource "google_compute_backend_service" "lets-encrypt-backend-service" {
        name                = "lets-encrypt-backend-service"
        port_name           = "http"
        protocol            = "HTTP"
        timeout_sec         = 10
        region              = "us-central1"
    
        backend {
            group           = "${google_compute_instance_group_manager.lets-encrypt-instance-group-manager.instance_group}"
        }
    
        health_checks       = ["${google_compute_http_health_check.fantomtest.self_link}"]    
        
    }
    

    The next thing we’ll need to do is change the URL map for the load balancer. Basically, we’ll send anything matching /.well-known/* to our LE backend service. My URL map is called fantomtest and by default it uses the fantomtest backend service. This means any requests that don’t match /.well-known/ will end up on my default backend service (which is what we want).

    resource "google_compute_url_map" "fantomtest" {
        name                = "fantomtest-url-map"
        description         = "Fantomtest URL map"
        default_service     = "${google_compute_backend_service.fantomtest.self_link}"
    
        # Add Letsencrypt
        host_rule {
            hosts           = ["*"]
            path_matcher    = "letsencrypt-paths"
        }
    
        path_matcher {
            default_service = "${google_compute_backend_service.fantomtest.self_link}"
            name            = "letsencrypt-paths"
            path_rule {
                paths       = ["/.well-known/*"]
                service     = "${google_compute_backend_service.lets-encrypt-backend-service.self_link}"
            }
        }
    
    }
    

    Run terraform apply, and if you have been successful you should see the letsencrypt backend service become healthy.
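
    It’s worth sanity-checking that the URL map really routes /.well-known/ to the new backend before invoking the client. A quick test with a made-up file name:

    # On the letsencrypt instance: serve a throwaway file from the webroot
    mkdir -p /var/www/.well-known
    echo "routing-ok" > /var/www/.well-known/test.txt

    # From anywhere: this request should land on the letsencrypt backend
    curl http://www.mydomain.xyz/.well-known/test.txt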

    Now log into the instance running the LE client and run

    docker run -it -v "$(pwd)/ssl-keys:/etc/letsencrypt" -v "/var/www:/var/www" quay.io/letsencrypt/letsencrypt:latest \
      certonly --webroot -w /var/www -d www.mydomain.xyz
    

    If you get

    - Congratulations! Your certificate and chain have been saved at
       /etc/letsencrypt/live/www.mydomain.xyz/fullchain.pem. Your
       cert will expire on 2016-07-17. To obtain a new version of the
    

    You are done and your certificate will be found in ssl-keys/live/www.mydomain.xyz/fullchain.pem. By default LE issues certificates with a validity of 90 days, and starting 30 days before expiration they will nag you to renew them. I will leave it as an exercise to the reader to automate this. Do note that if you are going to automate pushing certificates, make sure you validate the full chain to confirm things look good.
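
    One possible shape for that automation is a cron entry on the LE instance that re-runs the client monthly with forced renewal. Treat this as a sketch: the --renew-by-default flag and exact behavior depend on the client version in the image you pulled, so verify against your image first.

    # /etc/cron.d/letsencrypt-renew (sketch): re-issue on the 1st of each month
    0 3 1 * * root docker run --rm -v "/root/ssl-keys:/etc/letsencrypt" -v "/var/www:/var/www" quay.io/letsencrypt/letsencrypt:latest certonly --renew-by-default --webroot -w /var/www -d www.mydomain.xyz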


    Signing AWS Lambda API calls with Varnish

    15 April 2016

    A number of months ago Stephan Seidt (@evilhackerdude) asked on Twitter whether it was possible to use Fastly to sign requests going to AWS Lambda. For those who do not know what AWS Lambda is, here is Wikipedia’s succinct explanation:

    AWS Lambda is a compute service that runs code in response to events and automatically manages the compute resources required by that code. The purpose of Lambda, as opposed to AWS EC2, is to simplify building smaller, on-demand applications that are responsive to events and new information. AWS targets starting a Lambda instance within milliseconds of an event.

    AWS Lambda was designed for use cases such as image upload, responding to website clicks or reacting to output from a connected device. AWS Lambda can also be used to automatically provision back-end services triggered by custom requests.

    Unlike Amazon EC2, which is priced by the hour, AWS Lambda is metered in increments of 100 milliseconds.

    Initially I thought this was not going to be possible, since I believed I could only make asynchronous calls; however, Stephan pointed out that there is a way to make synchronous calls as well, since that is what AWS API Gateway does to expose Lambda functions.
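
    Concretely, a synchronous call is just the Lambda Invoke API: an HTTP POST to the function’s invocations path with the X-Amz-Invocation-Type header set to RequestResponse, signed with SigV4. Roughly, the request Varnish has to produce looks like this (all values are placeholders):

    POST /2015-03-31/functions/test/invocations HTTP/1.1
    Host: lambda.us-east-1.amazonaws.com
    X-Amz-Invocation-Type: RequestResponse
    X-Amz-Date: 20160415T120000Z
    Content-Type: application/json
    Authorization: AWS4-HMAC-SHA256 Credential=AKIDEXAMPLE/20160415/us-east-1/lambda/aws4_request, SignedHeaders=content-type;host;x-amz-date, Signature=<hex-encoded hmac>

    {"name": "world"}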

    In order to be able to send requests to Lambda you need to sign them. AWS has gone through a number of versions of their signing API, but for most services today you will need to use Signature Version 4 (SigV4). SigV4 relies on a number of HMAC and hashing functions that are not in stock Varnish but are available in the libvmod-digest VMOD. If you are deploying your VCL on Fastly, this VMOD is already built in.

    Code

    You can find the full VCL for signing requests to Lambda here:

    https://github.com/vvuksan/misc-stuff/blob/master/lambda/lambda.vcl

    This code has some Fastly-specific macros and functions which you can upload as custom VCL; however, most of the heavy lifting is done inside the aws4_lambda_sign_request subroutine, so if you are using stock Varnish copy that. Things to change in vcl_recv are

    set req.http.access_key = "CHANGEME";
    set req.http.secret_key = "CHANGEME";
    

    Change those to your AWS credentials that have access to Lambda. You can also change the region where your functions run. In addition you will need to come up with a way to map incoming URLs to Lambda functions. In my sample VCL I am using Fastly’s Edge Dictionaries, e.g.

    table url_mapping {
        "/": "/2015-03-31/functions/homePage/invocations",
        "/test": "/2015-03-31/functions/test/invocations",
    }
    
    # If there is no match, req.url will be set to /LAMBDA_Not_Found
    set req.url = table.lookup(url_mapping, req.url.path, "/LAMBDA_Not_Found");
    
    # If the page has not been found we just throw a 404
    if ( req.url == "/LAMBDA_Not_Found" ) {
        error 404 "Page not found";
    }
    

    Pros and Cons

    Pros:

    • You get the power of VCL to route requests to different backends including Lambda
    • You may be able to cache some of the responses coming back from Lambda
    • Lower costs since API Gateway can be pricey

    Cons:

    • Only POST requests with a payload of up to 2 kbytes and GET requests with no query arguments are supported
      • In order to compute the signature we need to calculate a hash of the payload. Unfortunately Varnish exposes only 2 kbytes of the payload inside the VCL. This is a tunable if you run your own Varnish; you can adjust it by running
        varnishadm param.set form_post_body 16384 
        
      • Any request other than POST needs to be rewritten as a POST, hence a GET can carry no query arguments
    • You can output straight HTML; however, the returned payload will end up with leading and trailing ‘ characters. You will also need to fix up the returned Content-Type since it comes back as application/json. You can set the Content-Type in VCL by doing the following in vcl_deliver, e.g.
      set resp.http.Content-Type = "text/html";
      
    • Currently it’s impossible to craft a POST request from scratch

    Future work

    Look into using something like the libvmod-curl VMOD to create POST requests on the fly.