
/brain/dump

Random thoughts, bright ideas and interesting experiments. In short, the ramblings of a full-time nerd.


Playgrounds with Vagrant and Ansible

Jakob Westhoff

As a Mac user I often have to fight the difficulty of installing new kinds of services, databases and other daemons which have been crafted for a *nix-like environment. Even though OS X technically belongs to this family of systems, certain types of software don't feel at home instantly. Usually most services can be convinced to do their deed on OS X as well, even though in certain cases it is a quite time-consuming process. As I am in the business of presenting at conferences and providing training services for companies, I often need a clean demonstration environment which is guaranteed to work, regardless of system updates or things I changed while playing around with new technologies. While researching solutions for this problem, I got used to working with VMs running a simple Linux guest system containing all the different environments and services I need. As you might have guessed, the last thing I wanted to do was to manage all those VMs manually. Fortunately Vagrant in combination with Ansible allows for easy automation of all the necessary steps.

While recently cleaning up some of my VMs, which involved creating proper Ansible playbooks for provisioning as well as structuring their contents, I started thinking that those virtual machines might be useful for other developers as well. They make it possible to keep a clean host system without having to give up the flexibility of installing all kinds of different services for project or research work.

Because of that I created a repository on GitHub and started putting playground VM definitions there. At the time of writing this blog post the repository hosts a VM with a recent version of Elasticsearch, as well as one containing an nvm-managed Node.js environment with all kinds of different versions installed. Expect more playgrounds to pop up in the future: cleaning up all my VMs is an ongoing process, and I will push each of them to the repository as soon as it is ready.

Using a playground

Using any of the playgrounds is quite simple. First of all you need to check out the repository:

$ git clone https://github.com/jakobwesthoff/playgrounds.git

All playgrounds are contained inside this single repository. Don't worry about the file size: the different playgrounds only contain Ansible definitions, which list all the needed downloads and installation steps to correctly provision the VM. Therefore the repository itself is quite small.

Each playground is put into a separate directory, whose name already hints at the purpose of the corresponding VM. Inside each directory a detailed README file can be found. It documents why I created the VM in the first place, as well as what is installed into it once it is provisioned. Further information like IP addresses, exposed services and ports is documented there as well.

After picking a playground to instantiate, change into its directory and tell Vagrant to bring it to life:

$ cd Node.js
$ vagrant up

In order to successfully execute the whole provisioning process you need recent versions of Vagrant and Ansible installed on your system. Mac OS X users can install those two quite easily through Homebrew and homebrew-cask:

$ brew install ansible
$ brew cask install vagrant

For other environments please consult your system's package manager or download and install those tools manually from the corresponding web pages.

The aforementioned call to vagrant up will fetch the base box for the corresponding playground if it is not already in place. At the time of writing this article all playgrounds use Ubuntu 14.04 LTS as the guest operating system.

After fetching the base box, a new instance of it will be created. Once the VM has been correctly configured and booted, Vagrant triggers Ansible in order to provision it. Ansible will execute each step to download, install and configure the playground's environment.

After the provisioning process has finished, the playground is ready to be used. While you are inside its directory you can easily create a connection to it by calling vagrant ssh. This command will drop you straight into a prompt inside the VM. You may either use it from there, or simply talk to the exposed services from your host operating system. Details about the different usage possibilities are usually provided within the playground's README file.
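For example, a first contact with a freshly provisioned playground might look like this (the port is only an assumption for illustration; the actual addresses and ports are listed in each playground's README):

$ vagrant ssh                  # drops you into a shell inside the VM

# or, from the host OS, assuming the Elasticsearch playground
# exposes Elasticsearch's default port 9200 (hypothetical example):
$ curl http://localhost:9200/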

Some of the playgrounds, like the Node.js one, utilize shared folders. Those folders allow easy transport of files between the VM and the host OS. They are automatically connected through the virtualization software used (VirtualBox by default). Each playground's README details all of its shared folders and where exactly they are placed within the VM. For example the Node.js playground contains a Shared folder, which is automatically synced with the /home/vagrant/Shared folder inside of the VM.
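As a quick sanity check, a file created inside the shared folder on the host should immediately be visible inside the VM:

$ echo "hello from the host" > Shared/test.txt
$ vagrant ssh -c 'cat /home/vagrant/Shared/test.txt'   # prints: hello from the host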

Future playgrounds

As I mentioned before, I am going to publish some more playgrounds in the future. Nevertheless, you can help to improve things as well. If you are missing something inside an environment already present, don't hesitate to add it to the provisioning scripts and submit a pull request. If you take a look at the corresponding Ansible playbooks and roles you will notice that they are not that complicated. As a matter of fact most of the Ansible configuration is based on YAML files. The configuration of each playground is well structured, clean and self-documenting. It should be quite easy to use what is already there in order to create your own playgrounds containing all the services and environments you may desire. I am looking forward to your pull requests containing new playgrounds and all kinds of amazing things :).

How I seed a new JavaScript Project

Jakob Westhoff

I often need to start a new JavaScript project quickly. There are a lot of different reasons to start a new project:

  • Playing around with a new library or technology.
  • Developing a new utility library to be used in other projects.
  • Setting up a new project for a customer.

Of course there are further possible reasons, but I think you get the picture. In each of those cases starting from scratch isn't desirable. I want a certain basic structure, which is quite similar across all my latest projects. Let's take a look at the needed components: Static Code Analysis, Testing, Dependency Management, Packaging and Minifying. Of course being easily extensible with custom or library-specific tasks is important as well. To solve all of those different tasks I currently tend to utilize the following tools/libraries:

  • Static Code Analysis: Currently only linting with jshint
  • Testing: Unit tests with karma-runner and PhantomJS and/or other browsers
  • Dependency Management: npm and bower
  • Packaging: During development require.js inside the browser; for production r.js in combination with almond.js
  • Minifying: Usually done by uglifyjs
  • The build system combining all the tasks is currently grunt (mainly because plugins are available for almost anything)

Creating a generic Seed

Setting up a running environment in which all those tools cooperate isn't hard, but it always takes quite some time. As I did it for the nth time last month, I decided to create a basic seed project for myself, to be reused every time I need such an environment. As I thought some other people might have a use for this as well, I put the whole seed up on GitHub.

Shortly after pushing the seed I had the first use case for it: I wanted to develop something with Facebook's React. React is a little bit special regarding its build process, as it utilizes JSX to allow for a neat HTML-inline syntax during development, while still creating fast, pure JavaScript code for production. I therefore needed to adapt the seed to properly handle this in conjunction with require.js and all the other parts of my toolset, so I branched the seed (framework/react) and integrated all the necessary changes. Using branches for those special adaptations makes it easy to backport new features into the main branch, as well as to import future additions of the main seed into the framework seeds.

You might wonder why I didn't use Yeoman and its generators for all of this. The answer is simple: Yeoman is quite close to what I want, but doesn't provide the exact combination of tools I utilize out of the box. Writing a special generator for my needs wouldn't have been a problem, but I simply didn't see any real reason for it, as a git repository works quite well for me :).

Utilizing the Seed

I don't want to replicate the documentation of all the different tools used, but let's take a look at my basic workflow for using the seed:

  1. git clone the seed to a directory intended to be used for the new project.
  2. Optionally check out a specific project branch.
  3. Delete the .git folder inside the newly created seed.
  4. Adapt package.json as well as bower.json to your needs, especially name and description ;)
  5. Install the basic dependencies using npm and bower: npm install && bower install.
  6. Create basic symlinks of all needed libraries for access during development: grunt symlink:www
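Taken together, those initialization steps boil down to something like the following (the seed repository URL and project name are placeholders):

$ git clone <seed-repository-url> my-project
$ cd my-project
$ git checkout framework/react     # optional: use a framework-specific branch
$ rm -rf .git                      # detach the new project from the seed's history
$ $EDITOR package.json bower.json  # adjust name, description, ...
$ npm install && bower install
$ grunt symlink:www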

After those initialization steps you can put all your JavaScript sources under the src directory using AMD's module syntax. The require.js configuration for arbitrary third-party libraries is stored at src/require.conf.js.

Opening up www/index_dev.html will dynamically load all the dependencies and execute them. After changing base dependencies or installing further assets you may need to call grunt symlink:www again to update the symlinks inside the www directory, making them available for dynamic loading. However, this should not be required most of the time, as the base directories are linked, not their contents individually.

To create minified and combined versions, a simple call to grunt build will create a production-ready result inside the dist directory. It will include everything needed to run your library/application.

To run separate steps of the build individually, either call grunt tasks to show all available tasks, or take a look at the Gruntfile.js. Most of the basic tasks are defined as watch tasks as well. For example, if you want to automatically build a new production version of the software every time a relevant file changes, just call grunt watch:build.
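In day-to-day use that boils down to a handful of calls:

$ grunt tasks        # list all available tasks
$ grunt build        # create a production-ready build inside dist
$ grunt watch:build  # rebuild automatically whenever a relevant file changes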

If you have any comments, questions or maybe additions to the seed I am always happy to accept pull requests. :)

Migrating my services into the cloud

Jakob Westhoff

For quite a long time I have been maintaining my own servers for all the different services I needed on a daily basis. Those included webpages, email, an IRC bouncer and source code version control. Keeping those setups up to date certainly required some of my time, while I was doing less and less with the servers every day. In the past I hosted those services not only for myself, but for other people as well. Lately, however, I migrated more and more of them to other providers, as I didn't want to maintain all this stuff anymore.

Once I was only hosting my own stuff, I decided it was time to stop administrating all of this on my own. Therefore I looked for service providers which would accommodate my needs. After thinking about what exactly those needs are, I came up with the following list:

  1. My Blog
  2. EMail
  3. Source Code Version Control
  4. Project pages
  5. Miscellaneous PHP-Scripts
  6. IRC-Bouncer

Let's take a look at my decisions on where to host the different services in the future. As more fine-grained requirements led me to the decision for each of the services, I will detail them in the following sections.

tl;dr

If you don't want to know exactly why I chose which service, here is a short mapping of the providers I am now using.

  1. Blog: Squarespace
  2. EMail: Uberspace
  3. Source Code Version Control: GitHub and Bitbucket
  4. Project Pages: GitHub
  5. Miscellaneous PHP-Scripts: Uberspace
  6. IRC-Bouncer: Uberspace

Blog: Squarespace

I decided to utilize Squarespace for hosting my blog. Squarespace is a hosting provider which not only provides regular website hosting, but also a fully fledged graphical user interface to build and publish content.

As I was going to migrate my blog anyway, I wanted to revamp its design as well. The content should be emphasized. Furthermore I wanted a responsive design which would look nice on every display size and device. Squarespace helped me with this, as it provides a lot of base templates to be used in order to create content and pages. Those templates are of course only a starting point, as almost any part of them may be customized using Squarespace's really amazing in-site editor. If that does not fit your needs, you may of course create and include custom CSS rules.

I ended up using a basic template and started adapting it to my needs, integrating a blog, a proper about-me page as well as some legally required information. Furthermore I integrated links to all my social profiles as well as my GitHub pages. All of this was quickly done using the in-site editor. Especially the formatting and creation of more complex layouts like the about-me or legal page was really easy and fun using the WYSIWYG editor.

Another requirement for my blog engine was the capability to use either reStructuredText or Markdown as an input format. Squarespace supports Markdown for creating content: simply add a Markdown block to any page and it will be rendered into HTML. For more complex layouts which can't be completely represented using Markdown, you may combine those blocks with any other layout block supported by the editor.

In addition to a proper input format, I wanted to be able to export the pages and the blog to a reusable format as well. For one, I wanted to be able to back up blog content; even more importantly, I wanted to be able to leave Squarespace again, should I ever want to, taking my blogged content with me. Currently the system allows a complete export of all sites into a WordPress-compatible XML format. Even though I am not a big fan of WordPress, this export function gives me the opportunity to retrieve all my content in an easily parseable format. For importing, Squarespace supports WordPress, Tumblr, Blogger, Shopify and Big Cartel.

If you are interested in using this for your own blog, you can test Squarespace for 14 days free of charge by simply signing up.

EMail: Uberspace

I decided to have Uberspace handle mail hosting for me. You might ask why I didn't use something like Google Apps or another dedicated mail provider. In order to explain this, I need to detail some of my mail-related requirements.

One of the main reasons I kept hosting all the different services myself was my set of requirements for email handling. Email is one of the main communication channels I use, so I have quite a number of incoming and outgoing messages each day. In order to keep up with all the information flowing through this medium I need a quite sophisticated set of filters. As I use different devices to access my mail, I needed a proper IMAP setup with server-side filtering as well.

For historical reasons I still have different mail accounts at other providers which I need to check regularly. As I don't want to manage those services separately, I want my mail server to fetch all of those mails every x minutes and integrate them into my inbox.

Of course I want proper spam filtering capabilities for all my mail, using all of the currently battle-proven techniques.

Filtering

Let's discuss my mail filtering needs a little more. Until now I have been using Sieve to define my filter rules, which are executed directly on the server for each newly received message. Using this technique, mail clients don't need any knowledge of the filters. This provides properly filtered and sorted mail on any device, regardless of its software or capabilities. As my set of filters is quite extensive, I wanted my new mail solution to support Sieve as well.

Furthermore I use specialized mail addresses in order to quickly filter stuff into folders without creating explicit rules for each of them. For example, mails to an address like jakob@example.com are sent to my inbox, while mails to jakob-squarespace@example.com are sent to the company/squarespace folder instead.
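A Sieve rule implementing exactly this example could look roughly like the following snippet (a minimal sketch, not my actual rule set):

require ["fileinto"];

# file mails addressed to jakob-squarespace@example.com into company/squarespace
if address :is "to" "jakob-squarespace@example.com" {
    fileinto "company/squarespace";
}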

Uberspace

Uberspace is a quite fancy hosting provider residing in Germany. You could say it is a geek-compatible shared hoster. Uberspace provides you with an account on one of their shared hosting servers, giving you SSH access by default. They explicitly allow you to install custom daemons and services. In addition to giving you this kind of flexibility, they provide you with a default LAMP (Linux, Apache, MySQL and PHP) stack. Furthermore you get a complete mail infrastructure as well, which is based on netqmail.

Utilizing this infrastructure I realized all the mail handling I need. I installed GNU Mailutils to have a sieve command ready for execution. After that I let qmail pipe all my mails through sieve using a .qmail file. The whole setup took maybe 20 minutes; after that I had my rules up and running again.

In order to integrate the aforementioned external mail accounts, I installed fetchmail and configured it through the .fetchmailrc file to retrieve my external messages every 15 minutes, which is quite enough for my use case.
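The relevant part of such a .fetchmailrc could look roughly like this (host and credentials are placeholders):

set daemon 900    # wake up every 15 minutes (900 seconds)

poll imap.example.com protocol IMAP
    user "jakob" password "secret" ssl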

What I really like about my new mail setup is the fact that I have full flexibility through sieve as well as .qmail files, without having to take care of the basic mail infrastructure, including spam filtering and keeping everything up to date.

Furthermore Uberspace takes care of my backup needs. On top of all that, their support is unbelievably fast and responsive: any problem I encountered was solved within minutes.

Source Code Version Control: GitHub and Bitbucket

I decided to migrate all of my source code repositories to GitHub and Bitbucket. In order to explain why I am using two different services for this, let me describe what my situation was before.

I stored a lot of source code repositories of different types on my servers. Fortunately I had migrated all the CVS stuff to Subversion years ago. For new projects I mostly use Git now. Still, there were some Subversion repositories with stuff I hadn't needed to change for quite a while but still use from day to day.

Some of the repositories are meant to be publicly available; others are private, accessible only to myself or to a specific group of people. For my private repositories this group is usually quite small.

My first step was to convert all of my still existing Subversion repositories into Git repositories. The git svn command made this quite easy.
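For a repository with the standard trunk/branches/tags layout the conversion is essentially a one-liner (the repository URL is a placeholder):

$ git svn clone --stdlayout https://svn.example.com/myproject myproject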

After that I pushed all the public repositories to my GitHub account. Regarding the private repositories, I thought about paying GitHub for some private repositories; however, they are actually quite pricey, and their payment model is based on the number of private repositories you are using. I evaluated Bitbucket as an alternative and was quite surprised that for my use case their hosting is actually free of charge. The reason for this is that Bitbucket uses a different payment model than GitHub: they ask you to pay for the number of collaborators who may access the repositories, in contrast to the number of repositories. Up to 5 people are free of charge, with an unlimited amount of private repos. As none of my private repositories has more than 5 collaborators, this model works quite well for me. I therefore pushed all my private repositories to Bitbucket. Their REST API even allowed me to easily automate the creation of the corresponding repositories before pushing to them.
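At the time, creating a repository through Bitbucket's REST API (v1.0) was a single authenticated POST request, roughly like this (username and repository name are placeholders):

$ curl -X POST -u jakob https://api.bitbucket.org/1.0/repositories/ \
    -d name=myproject -d is_private=true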

Project pages: GitHub

Some of my projects had dedicated pages in the past. Either those were part of my website (westhoffswelt.de), or they were separate static websites with their own domain (e.g. ineedmoretime.org). All of those pages are now either realized using GitHub Pages or have been replaced by detailed README files within their corresponding GitHub repositories. This is a perfect fit for me, as the content of the pages was static anyway. Now the hosting is completely done by GitHub, which, by the way, works with custom domains as well. In the end I had the same static pages up and running in no time, while not having to maintain any of the servers behind them.

Miscellaneous PHP-Scripts: Uberspace

In addition to my mail setup I utilize Uberspace for the miscellaneous PHP scripts I had running before. Some of them are simple triggers I can reach from the outside to initiate certain tasks. For example, I have a simple script running which allows me to push arbitrary messages to my mobile devices using an HTTP request. Other scripts are called at certain intervals to process some data for me, for example a script which scans one of my IMAP folders for Amazon orders and pushes their shipment tracking numbers to an app on my iPhone, which then informs me about each package's tracking status.
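Triggering such a script is then just one HTTP request away, for example (hypothetical endpoint and parameter):

$ curl -d "message=Backup finished" https://example.com/push-to-phone.php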

Before deploying those scripts to Uberspace I created a second account there, as I did not want to mix anything up with my mails. After that, the deployment and configuration of the whole set of scripts took me maybe 30 minutes.

IRC-Bouncer: Uberspace

Besides email, one of my main communication channels is IRC. In order to allow for asynchronous communication through this medium I need an IRC bouncer, which records messages for me while I am not connected to the network. Furthermore I wanted the bouncer to push messages to my iPhone if they pass a certain set of filters, like certain channels and highlighted keywords (username, company name, ...).

I realized all of this easily by installing ZNC on the same Uberspace account that contains my PHP trigger scripts. To make ZNC push to my phone I am using the ZNC Push module in combination with the Pushover service.
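Once the module is loaded, wiring it up to Pushover is done from within any attached IRC client; following the ZNC Push documentation, the commands look roughly like this (the keys are placeholders):

/msg *push set service pushover
/msg *push set username <pushover-user-key>
/msg *push set secret <pushover-api-token>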

In order to reach the bouncer from the outside using my phone, I needed the service to be exposed to the public by Uberspace. Usually they block everything besides default services in their firewall. I simply wrote a mail to the Uberspace support asking whether something like this would be possible. Within minutes I was assigned a port which they opened to the outside world for me.

Conclusion

I am definitely happy with my new setup so far. I have already started canceling most of my server contracts, as they are not needed anymore. There is still one server which currently handles stuff for my sister ;), but this one will be migrated to another provider in the future as well. It feels good to cancel those servers: I no longer need to manage them, and I can dedicate that time to other projects. A quite nice side effect of switching to all those alternative services is that I am actually saving money as well.