[Insert Job Title]

PHP, MySQL and Servers.

Sunday, 17 May 2015

Mecer Laptop Xpression W940TU: Review

I recently purchased a new Mecer Xpression W940TU laptop from Computer Mania, as my (not so) beloved MacBook Pro started giving me trouble. Seeing that a full repair would cost around R5000, I decided to give a new laptop a try, as I was in need of a fallback device anyway.

The Mecer laptop was the cheapest laptop I could buy on the spot at one of the Computer Mania franchises - for only R3600.
Mecer itself seems to be a brand of Mustek Limited, which does not appear to be affiliated with the Mustek I know from Germany.

Unfortunately there is not a lot of tech reviewing going on in South Africa, so it was difficult to find any information or opinions about this specific model or similar ones. It was a risky buy, but so far I am happy.

The laptop comes with the following specs:

  • Intel Celeron N2840
  • 500GB HDD
  • 14" - 1366x768 - 16:9 (they call it "HD" here)
  • Windows 8 pre-installed (mine even came with a pre-configured user... yay)
  • UEFI
  • No CD/DVD player

I had never heard of this kind of CPU before, but I am pretty impressed - it definitely lags way behind my older i5, but since I only use it for work and nothing multimedia-heavy it is perfectly fine. The power consumption is very low, so it does not really generate heat even under high load - which is great!

The quality of the chassis is decent - it feels cheap but also very durable. Same goes for the keyboard, which I actually enjoy using. The touchpad is unfortunately disappointing - especially the buttons.

Of course it was clear from the beginning that the specs would not be enough for my usage - so I had to open it up and upgrade it.

This also gave me the chance to look inside, and damn, it looks cheap. Anyway:
  • replaced the 2GB RAM module with 2x4GB DDR3L (low voltage; I got 1600 modules, but they clock down to 1333 automatically anyway)
  • Samsung EVO 840 250GB SSD
  • removed Windows 8 in favour of Debian 8 (+ MATE)
Works like a charm, pretty fast, and the only bottleneck now is really the CPU.


Pros:
  • Absolutely NO fucking vendor lock-in - want to upgrade the hard drive or replace the RAM? Just do it!
  • Pretty simple technology - you can replace many things yourself, so no exotic parts soldered or glued on (well, except the battery, see cons)
  • You can get a replacement for pretty much every part (at least here in South Africa; it won't always be the original, though) - and that's because they did not build anything "special behind closed doors" but just jammed popular components together


Cons:
  • Being used to a MacBook: jesus, the power connector on the laptop itself is fucked up - I need to jam it in with force, and taking it out feels like I am ripping out some inner parts
  • the battery is unfortunately not easily changeable - similar to other laptops of this form factor - but it is still much better than glued-in: it looks like a generic one I can get from RS, so no special build
  • the battery quality is very, very low - it reports the wrong status under Debian, and once it hits 50% the laptop shuts off
  • keyboard is not backlit

Conclusion: a cheap, but not the cheapest, laptop. Having nothing special built in and no vendor lock-in means upgrades and repairs will be easy and cheap to do.

Monday, 27 April 2015

Laravel Queues with Supervisor on ElasticBeanstalk

Job and/or message queues are an important component of a modern web application. Simple calls like sending verification emails should always be pushed to a queue instead of executed directly, as these calls are expensive and will make the user wait for the page to finish loading.

In this blog post I will show how to keep a stable queue worker running on an ElasticBeanstalk environment with the help of a watchdog: Supervisor.

First check out queues.io for a list of queue daemons, and of course Laravel 5's own documentation page about queues, so you know what's coming up.

You will then most probably come to the conclusion that you need to run the following command for your queue to be actually processed:

$ php artisan queue:listen

Now, I have already seen the weirdest setups, but the most prominent is probably something like this:

$ nohup php artisan queue:listen &

The ampersand at the end sends the call to the background, and the preceding nohup makes sure it keeps running even after you exit your shell.
Personally I would always do something like this inside a screen session for various reasons - especially convenience.

Anyway, on your server you will want this to run stably, for as long as possible, and to restart automatically after crashes or server reboots.
This is especially true on ElasticBeanstalk, Amazon's poor but unfortunately popular implementation of a "Platform as a Service":
  • Nothing really has a state - instances can go down and up independently of the application
    • This is especially true when AutoScaling is configured
  • Deploying can crash the queue-listener
  • The server could reboot for various reasons
  • Your queue-listener could crash for various reasons (this happens the most)
    • Application error (PHP exception, for example while working off a malformed payload)
    • SQS is down (yup, it happens!)
To get a grip on this you definitely need some kind of watchdog. You can either go with monit or use Supervisor, which I found easier to configure.

Use the following .ebextension to achieve the following (in the abstract - check out the source ;) ):
  1. Install Supervisor
  2. Make sure it runs after a reboot
  3. stop the queue-worker shortly before a new application version goes live
  4. start the queue-worker shortly after a new application version went live

You will notice that you have to set a new param SUPERVISE to "enable" for the script to run. This allows me to switch it on or off per environment, e.g. if the script is causing problems.
Also be aware that this will only work with newer ElasticBeanstalk versions (1.3+).
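
The original .ebextension lived in an external gist; as a rough sketch of the four steps above (file names, program name, and paths here are my assumptions, not the original source), such a config could look like:

```yaml
# .ebextensions/supervise.config -- hypothetical file name
files:
  # Supervisor program definition for the Laravel queue worker
  "/etc/supervisor/conf.d/laravel_queue.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      [program:laravel_queue]
      command=php artisan queue:listen --tries=3
      directory=/var/app/current
      user=webapp
      autostart=true
      autorestart=true
      stdout_logfile=/var/log/laravel_queue.log
      redirect_stderr=true

commands:
  01_install_supervisor:
    # install Supervisor and make sure it starts after a reboot
    command: "easy_install supervisor && chkconfig supervisord on"
    test: "[ \"$SUPERVISE\" = enable ]"

container_commands:
  01_reload_queue_worker:
    # container_commands run shortly before the new version goes live;
    # reread/update picks up the config and (re)starts the worker
    command: "supervisorctl reread && supervisorctl update && supervisorctl restart laravel_queue"
    test: "[ \"$SUPERVISE\" = enable ]"
```

The `test:` lines are what make the SUPERVISE param act as an on/off switch per environment.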

I almost forgot to mention the following commands (do not run them as root!) that will help you along.

Display last Worker Output
$ supervisorctl tail -1000 laravel_queue

Display last Worker Errors
$ supervisorctl tail -1000 laravel_queue stderr

Display Worker Status
$ supervisorctl status

Start Worker
$ supervisorctl start laravel_queue

Stop Worker
$ supervisorctl stop laravel_queue

Sunday, 22 March 2015

Logstash recipe: Akamai on ELK

One of the perks of working for the new company is the use of cool tools and providers. One of the new providers is Akamai - a state-of-the-art CDN/EdgeCache provider - and also the first one to exist.
The cool new tool: Kibana4!

Just a quick introduction: Kibana belongs to the ELK stack (Elasticsearch, Logstash and Kibana) - and, as you spotted correctly, comes last, as it forms the web/user interface on top of the underlying Elasticsearch database. Logstash sits somewhere in between and is a powerful tool for parsing many log formats and injecting them into Elasticsearch. Elasticsearch itself holds the data and offers a search engine/index.

Why do you need ELK? In a multi-server environment you will want your logs centralized somewhere, so you do not need to log into each box. You may also want some kind of web interface so you can do simple tasks without command-line-fu - like filtering for all failed cronjobs.
There are other great tools that can achieve this as well, like syslog-ng or Graylog.

Wanna see what you are going to get? Here you go:

BTW, yes, this is only a demo dashboard, meaning a lot of it is probably redundant to Google Analytics - nevertheless it is more exact, as it will also capture bots and file requests where no JavaScript is loaded. The possibilities are of course far more extensive.

This recipe will take care of three major points:
  • Actual grok filter to match the logs
  • Fix @timestamp to be parsed directly from the log line (as the logs come in batches and often not in chronological order)
  • Apply GeoIP (via maxmind) filter to ClientIP so we can create cool looking maps on our dashboard

1) First things first

Currently there are two options to receive logs from Akamai: via FTP or via email. You will want to receive them via FTP, so I would suggest setting up an FTP server on your ELK box.
Akamai will send the log files either gzipped or GPG encrypted - both formats that Logstash cannot read natively - so you will need a script that translates them into plain text.
Be smarter than me and choose an FTP daemon that supports upload scripts, like pure-ftpd or proftpd. It is much easier than using vsftpd + some funky logfile-analyzer-upload-script.
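
If your FTP daemon supports upload scripts, the translation step for the gzipped case can be a one-liner. A minimal sketch (paths and file layout are my assumptions, and the GPG case is left out):

```shell
#!/bin/sh
# Hypothetical upload hook for pure-ftpd/proftpd: the daemon calls it with
# the path of the freshly uploaded file as $1. It decompresses gzipped
# Akamai logs in place so Logstash can pick up the plain-text version.

decompress_upload() {
  file="$1"
  case "$file" in
    *.gz)
      # write the decompressed copy next to the upload, then drop the .gz
      gunzip -c "$file" > "${file%.gz}" && rm -f "$file"
      ;;
  esac
}

decompress_upload "$1"
```

Note that the Logstash input below excludes `*.gz`, so only the decompressed files are picked up.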

2) Setup Akamai Log Delivery Service (LDS)

  • Log into your Luna Control Center
  • Configure > Log Delivery
  • Select your Object > "Begin Delivery"
  • Make sure you choose "Combined + Cookie + Host Header" as the log format - this will give us the possibility to distinguish between different projects later in Kibana
My settings look approx. like this:

3) Use the following Logstash configuration

$ sudo nano /etc/logstash/conf.d/11-akamai-access.conf
input {
  file {
    path => "/home/logs/incoming/akamai/access_*"
    exclude => "*.gz"
    type => "akamai-access"
  }
}

filter {
  if [type] == "akamai-access" {
    grok {
      match => { "message" => "%{IP:clientip} - - \[%{HTTPDATE:timestamp}\] %{HOSTNAME:hostname} \"%{WORD:verb} /%{HOSTNAME:origin}%{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:response:int} %{NUMBER:bytes:int} \"(?:%{URI:referrer}|-)\" %{QS:agent} %{QS:cookie}" }
    }
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
    if [clientip] {
      geoip {
        source => "clientip"
        target => "geoip"
        add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
        add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
      }
      mutate {
        convert => [ "[geoip][coordinates]", "float" ]
      }
    }
  }
}

4) That's it!

  • Restart/reload Logstash
  • Wait a bit for the log files to come in (it might take some hours)
  • Explore the data and create some nice visuals!

Some final notes: there are some major advantages (and also disadvantages) to analyzing logs directly from the CDN/EdgeCache:

  1. You will get the actual client IP (you might be able to pass it through your ELB all the way down to your EC2 instances - but that can be a hell of a job)
  2. You will get more accurate data, as in the best scenario your actual webserver only gets hit once a day ;)
One of the disadvantages: you do not get the data in real time (though there might be products for that).

Friday, 30 January 2015

Compile OpenSSH 6.7 with LibreSSL on OSX (10.10 / Yosemite)

Let's say you want to use the newest version of OpenSSH on your MacBook / OSX for reasons like:
  • your current version is too old for newer ciphers, key exchanges, etc.
  • you trust LibreSSL more than some OSSLShim
  • you are just some hipster who wants to have cipherli.st running
No worries - in this short tutorial I will show you how to compile OpenSSH 6.7p1 from source without replacing the ssh implementation currently shipped with OSX.

We will be using LibreSSL instead of OpenSSL; it is easier to compile and might be more secure than OpenSSL itself.

Some of the hints I took from this gist: https://github.com/Homebrew/homebrew-dupes/blob/master/openssh.rb

Get sources

$ wget \
http://mirror.is.co.za/mirror/ftp.openbsd.org/OpenSSH/portable/openssh-6.7p1.tar.gz \
http://www.nlnetlabs.nl/downloads/ldns/ldns-1.6.17.tar.gz \
http://ftp.openbsd.org/pub/OpenBSD/LibreSSL/libressl-2.1.3.tar.gz

Compile LibreSSL

$ tar xvfz libressl-2.1.3.tar.gz
$ cd libressl-2.1.3
$ ./configure --prefix=/opt/libressl --with-openssldir=/System/Library/OpenSSL --with-enginesdir=/opt/libressl
$ make
$ sudo make install

Compile ldns

$ tar xvfz ldns-1.6.17.tar.gz
$ cd ldns-1.6.17
$ ./configure --with-ssl=/opt/libressl
$ make
$ sudo make install

Compile OpenSSH

$ tar xvfz openssh-6.7p1.tar.gz
$ cd openssh-6.7p1

$ wget \
https://trac.macports.org/export/131258/trunk/dports/net/openssh/files/0002-Apple-keychain-integration-other-changes.patch \
https://gist.githubusercontent.com/sigkate/fca7ee9fe1cdbe77ba03/raw/6894261e7838d81c76ef4b329e77e80d5ad25afc/patch-openssl-darwin-sandbox.diff \

$ patch -p1 < 0002-Apple-keychain-integration-other-changes.patch
$ patch -p1 < patch-openssl-darwin-sandbox.diff
$ patch -p1 < launchd.patch

$ autoreconf -i
$ export LDFLAGS="-framework CoreFoundation -framework SecurityFoundation -framework Security"
$ ./configure \
--prefix=/opt/openssh \
--sysconfdir=/etc/ssh \
--with-zlib \
--with-ssl-dir=/opt/libressl \
--with-pam \
--with-privsep-path=/opt/openssh/var/empty \
--with-md5-passwords \
--with-pid-dir=/opt/openssh/var/run \
--with-libedit \
--with-ldns \
--with-kerberos5 \
--without-xauth
$ make
$ sudo make install

Use newly installed ssh-agent

$ sudo nano /System/Library/LaunchAgents/org.openbsd.ssh-agent.plist
(replace /usr/bin/ssh-agent with /opt/openssh/bin/ssh-agent)
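
For orientation, after the edit the relevant part of the plist should look roughly like this (the surrounding keys stay unchanged; exact contents vary by OSX version):

```xml
<key>ProgramArguments</key>
<array>
        <string>/opt/openssh/bin/ssh-agent</string>
        <string>-l</string>
</array>
```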

$ sudo launchctl unload /System/Library/LaunchAgents/org.openbsd.ssh-agent.plist
$ sudo launchctl load /System/Library/LaunchAgents/org.openbsd.ssh-agent.plist

Set alias

$ echo "alias ssh=/opt/openssh/bin/ssh" >> ~/.bash_profile


(verify with "ssh -V")