[Insert Job Title]

PHP, MySQL and Servers.

Sunday, 22 March 2015

Logstash recipe: Akamai on ELK

One of the perks of working for the new company is getting to use cool tools and providers. One of the new providers is Akamai - a state-of-the-art CDN/EdgeCache provider - and also the first one to ever exist.
The cool new tool: Kibana4!

Just a quick introduction to Kibana: Kibana belongs to the ELK Stack (Elasticsearch, Logstash and Kibana) - and as you spotted correctly it comes last, as it forms the web interface on top of the underlying Elasticsearch database. Logstash sits somewhere in between and is a powerful tool to parse many log formats and inject them into Elasticsearch. Elasticsearch itself holds the data and offers a search engine/index.

Why do you need ELK? In a multi-server environment you will want your logs centralized somewhere, so you do not need to log into each box. You will probably also want some kind of web interface so you can do simple tasks - like filtering for all failed cronjobs - without some command-line-fu.
There are some great tools that can achieve this as well, like syslog-ng or Graylog.

Wanna see what you are going to get? Here you go:



BTW, yes, this is a demo dashboard only, meaning a lot of it is most probably redundant with Google Analytics - nevertheless it is more accurate, as it also captures bots and file requests where no JavaScript is loaded. The possibilities are of course far more extensive.

This recipe will take care of three major points:
  • The actual grok filter to match the logs
  • Fix @timestamp so it is parsed directly from the log line (as the logs come in batches and often not in chronological order)
  • Apply the GeoIP (via MaxMind) filter to the client IP so we can create cool-looking maps on our dashboard

1) First things first

Currently there are two options to receive logs from Akamai: via FTP and via email. You will want to receive them via FTP, so I would suggest setting up an FTP server on your ELK box.
Akamai will send the log files either gzipped or GPG encrypted - both formats that Logstash cannot read natively - so you will need to build a script that translates them into plain text.
Be smarter than me and choose an FTP daemon that supports upload scripts, like pure-ftpd or proftpd. It is much easier than using vsftpd plus some funky logfile-analyzer upload script. A minimal sketch of such an upload script follows below.
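Just to illustrate the idea, here is a minimal sketch of such an upload script in PHP - assuming pure-ftpd's pure-uploadscript, which calls the script with the absolute path of the uploaded file as its first argument; the paths are placeholders and the GPG case is left out:

#!/usr/bin/env php
<?php
// Hypothetical upload hook: pure-uploadscript hands us the uploaded file as $argv[1].
// Gzipped Akamai logs are unpacked into the directory that Logstash watches.
$watchDir = '/home/logs/incoming/akamai'; // same path as in the Logstash input below

if ($argc < 2) {
    exit(1);
}
$uploaded = $argv[1];

if (substr($uploaded, -3) === '.gz') {
    $target = $watchDir . '/' . basename($uploaded, '.gz');
    $in  = gzopen($uploaded, 'rb');
    $out = fopen($target, 'wb');
    while (!gzeof($in)) {
        fwrite($out, gzread($in, 8192));
    }
    gzclose($in);
    fclose($out);
    // GPG-encrypted deliveries would need an extra decryption step here instead.
}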

2) Set up Akamai Log Delivery Service (LDS)


  • Log into your Luna Control Center
  • Configure > Log Delivery
  • Select your Object > "Begin Delivery"
  • Make sure you choose "Combined + Cookie + Host Header" as the log format - this will give us the possibility to distinguish between different projects later on in Kibana
My settings look approx. like this:



3) Use the following Logstash configuration


$ sudo nano /etc/logstash/conf.d/11-akamai-access.conf
input {
  file {
    path => "/home/logs/incoming/akamai/access_*"
    exclude => "*.gz"
    type => "akamai-access"
  }
}

filter {
  if [type] == "akamai-access" {
    grok {
      match => { "message" => "%{IP:clientip} - - \[%{HTTPDATE:timestamp}\] %{HOSTNAME:hostname} \"%{WORD:verb} /%{HOSTNAME:origin}%{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:response:int} %{NUMBER:bytes:int} \"(?:%{URI:referrer}|-)\" %{QS:agent} %{QS:cookie}" }
    }
    
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
    }
  }
  
  if [clientip] {
    geoip {
      source => "clientip"
      target => "geoip"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
}
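
Note that this file only contains an input and filters; I am assuming you already have an Elasticsearch output defined in another file under conf.d. If not, a minimal output along these lines is needed (the host is a placeholder - adjust it to your setup and Logstash version):

output {
  elasticsearch {
    host => "localhost"
  }
}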


4) That's it!


  • Restart/reload logstash
  • Wait a while for the log files to come in (it might take some hours)
  • Explore the data and create some nice visuals!

Some final notes: there are some major advantages (and also some disadvantages) to analyzing logs directly from the CDN/EdgeCache:

  1. You will get the actual client IP (you might be able to pass it through your ELB all the way down to your EC2 instances - but that can be a hell of a job)
  2. You will get more accurate data, as in the best-case scenario your actual webserver will only get hit once a day ;)
One of the disadvantages: you do not (though there might be products for that) get the data in real time.
Friday, 30 January 2015

Compile OpenSSH 6.7 with LibreSSL on OSX (10.10 / Yosemite)

Let's say you want to use the newest version of OpenSSH on your MacBook / OSX for reasons like:
  • your current version is too old for newer ciphers, key exchanges, etc.
  • you trust LibreSSL more than some OSSLShim
  • you are just some hipster that wants to have cipherli.st running
No worries, in this short tutorial I will show you how to compile OpenSSH 6.7p1 from source without replacing the currently installed ssh implementation shipped with OSX.

We will be using LibreSSL instead of OpenSSL; it is easier to compile and might be more secure than OpenSSL itself.

Some of the ideas I took from here: https://github.com/Homebrew/homebrew-dupes/blob/master/openssh.rb

Get sources


$ wget \
http://mirror.is.co.za/mirror/ftp.openbsd.org/OpenSSH/portable/openssh-6.7p1.tar.gz \
http://www.nlnetlabs.nl/downloads/ldns/ldns-1.6.17.tar.gz \
http://ftp.openbsd.org/pub/OpenBSD/LibreSSL/libressl-2.1.3.tar.gz

Compile LibreSSL


$ tar xvfz libressl-2.1.3.tar.gz
$ cd libressl-2.1.3
$ ./configure --prefix=/opt/libressl --with-openssldir=/System/Library/OpenSSL --with-enginesdir=/opt/libressl
$ make
$ sudo make install

Compile ldns


$ tar xvfz ldns-1.6.17.tar.gz
$ cd ldns-1.6.17
$ ./configure --with-ssl=/opt/libressl
$ make
$ sudo make install

Compile OpenSSH


$ tar xvfz openssh-6.7p1.tar.gz
$ cd openssh-6.7p1

$ wget \
https://trac.macports.org/export/131258/trunk/dports/net/openssh/files/0002-Apple-keychain-integration-other-changes.patch \
https://gist.githubusercontent.com/sigkate/fca7ee9fe1cdbe77ba03/raw/6894261e7838d81c76ef4b329e77e80d5ad25afc/patch-openssl-darwin-sandbox.diff \
https://trac.macports.org/export/131258/trunk/dports/net/openssh/files/launchd.patch

$ patch -p1 < 0002-Apple-keychain-integration-other-changes.patch
$ patch -p1 < patch-openssl-darwin-sandbox.diff
$ patch -p1 < launchd.patch

$ autoreconf -i
$ export CPPFLAGS="-D__APPLE_LAUNCHD__ -D__APPLE_KEYCHAIN__ -D__APPLE_SANDBOX_NAMED_EXTERNAL__"
$ export LDFLAGS="-framework CoreFoundation -framework SecurityFoundation -framework Security"
$ ./configure \
--prefix=/opt/openssh \
--sysconfdir=/etc/ssh \
--with-zlib \
--with-ssl-dir=/opt/libressl \
--with-pam \
--with-privsep-path=/opt/openssh/var/empty \
--with-md5-passwords \
--with-pid-dir=/opt/openssh/var/run \
--with-libedit \
--with-ldns \
--with-kerberos5 \
--without-xauth \
--without-pie
$ make
$ sudo make install


Use newly installed ssh-agent



$ sudo nano /System/Library/LaunchAgents/org.openbsd.ssh-agent.plist
(in that file, replace the path /usr/bin/ssh-agent with /opt/openssh/bin/ssh-agent)

$ sudo launchctl unload /System/Library/LaunchAgents/org.openbsd.ssh-agent.plist
$ sudo launchctl load /System/Library/LaunchAgents/org.openbsd.ssh-agent.plist

Set alias


$ echo "alias ssh=/opt/openssh/bin/ssh" >> ~/.bash_profile


Reboot!


(verify with "ssh -V")
Saturday, 22 November 2014

Hunspell spell checking under PHP with enchant

The spell checking that works perfectly on Google Chrome, OpenOffice and Mozilla Firefox is available to you and PHP as well - all thanks to open source software!

The above-mentioned apps use the "Hunspell" library, which can be used directly from PHP without ugly (and insecure) exec/system calls.

I did the following steps on my OSX MBP (10.10 / Yosemite), but they will be very similar on any Linux/Unix system (it might even be easier on Ubuntu or Debian via their package systems).
Just make sure you use at least libenchant 1.5.


Compile and Install hunspell 


$ wget http://downloads.sourceforge.net/hunspell/hunspell-1.3.3.tar.gz 
$ tar xvfz hunspell-1.3.3.tar.gz
$ cd hunspell-1.3.3
$ ./configure
$ make
$ sudo make install

Compile and Install libenchant 


$ wget http://www.abisource.com/downloads/enchant/1.6.0/enchant-1.6.0.tar.gz 
$ tar xvfz enchant-1.6.0.tar.gz 
$ cd enchant-1.6.0
$ ./autogen.sh
$ ./configure
$ make
$ sudo make install

Compile and Install php-enchant (in this case as shared lib)


(there is currently a bug in the configure file that will not recognize your libenchant version and thus will not give you some of the newer features; the patch is here)

$ cd php-5.5.14/ext/enchant/
$ phpize
$ ./configure
$ make
$ sudo make install

Then add extension=enchant.so to your php.ini and restart your webserver / PHP-FPM.

Dictionaries

$ cd Dicts
$ sudo wget https://chromium.googlesource.com/chromium/deps/hunspell_dictionaries/+archive/master.tar.gz 
$ sudo tar xvfz master.tar.gz

Sample usage 
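A minimal sketch of what this could look like from PHP - the en_US tag and the test word are just examples, assuming you extracted a matching dictionary in the step above:

<?php
// Minimal enchant example: check a word and fetch suggestions.
$broker = enchant_broker_init();

// Optionally point enchant at your dictionary directory:
// enchant_broker_set_dict_path($broker, ENCHANT_MYSPELL, '/path/to/your/Dicts');

if (!enchant_broker_dict_exists($broker, 'en_US')) {
    die("No en_US dictionary found\n");
}

$dict = enchant_broker_request_dict($broker, 'en_US');

$word = 'recieve';
if (!enchant_dict_check($dict, $word)) {
    echo "Did you mean: " . implode(', ', enchant_dict_suggest($dict, $word)) . "\n";
}

enchant_broker_free_dict($dict);
enchant_broker_free($broker);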

Wednesday, 12 November 2014

Compile libffi under OSX (10.10 / Yosemite)

Sometimes when playing around and compiling stuff you can mess up your system badly.

In this case I was compiling libffi on my OSX 10.10 system without knowing that other apps were linking to it - especially Adobe Acrobat Reader (but it seems Skype also depends on it). Unfortunately it links against a 32-bit version of that library and thus crashes on startup (though the screenshot is actually from after I deleted the library out of frustration).


Libffi is distributed by Apple/OSX directly, so it won't help to re-install Adobe Acrobat Reader or Skype - instead you will just have to recompile it!

I would not consider it a common problem, as most MacBook fanboys do not even know what a terminal is, but just in case, here are the steps to create a fat file (i.e. a universal binary supporting both 32-bit and 64-bit architectures) of libffi for your OSX 10.10 (Yosemite) system:

 

Download libffi and prepare

$ wget ftp://sourceware.org/pub/libffi/libffi-3.1.tar.gz 
$ tar xvfz libffi-3.1.tar.gz
$ cd libffi-3.1
$ rm -rf ~/libffi
$ mkdir -p ~/libffi/32 ~/libffi/64

Compile libffi as 32bit

$ make clean
$ CXXFLAGS=-m32 CFLAGS=-m32 LDFLAGS=-m32 ./configure --prefix=/usr --build=i386-apple-darwin14.0.0
$ make
$ cp i386-apple-darwin14.0.0/.libs/* ~/libffi/32

Compile libffi as 64bit

$ make clean
$ CXXFLAGS=-m64 CFLAGS=-m64 LDFLAGS=-m64 ./configure --prefix=/usr --build=x86_64-apple-darwin14.0.0
$ make
$ cp x86_64-apple-darwin14.0.0/.libs/* ~/libffi/64

Create Fat File/Lib aka "Universal Binary" and Install (poor man's version)

$ lipo -create ~/libffi/{32,64}/libffi.a -output /usr/lib/libffi.a
$ lipo -create ~/libffi/{32,64}/libffi.6.dylib -output /usr/lib/libffi.6.dylib
$ ln -s /usr/lib/libffi.6.dylib /usr/lib/libffi.dylib
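
To double-check the result, you can run "lipo -info /usr/lib/libffi.6.dylib" - it should now list both i386 and x86_64.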


All done! Now go enjoy your working system again... oh, of course you could also just use MacPorts, I guess...