No(t)(only)SQL and Internet of Things (IOT)

Thursday, 19 May 2016

IoRIT, Internet of Really Important Things | Dr.Dot's Daily Dose

A reblog of an article by one of my favorite people. If I could tell the future I would spend far more time drinking beers with Graham at the Kiara Park swimming pool. What a wasted opportunity, to chat to Graham and drink beers!

IoRIT, Internet of Really Important Things


Just recently I published a post entitled “The Three A’s of Predictive Maintenance”, essentially discussing the importance of maintaining assets in these economically volatile times. The post does contain some references to IoT (Internet of Things), but here I want to concentrate on what is really important, so I am going to borrow the phrase from Mr Harel Kodesh, Vice President and Chief Technology Officer at GE, who introduced it in his keynote speech at the Cloud Foundry Summit in May 2015.
We build huge assets to support our way of living, and these assets are the REALLY important things that, without maintenance, will disrupt everything if left to a “fix it when it breaks” mentality. Mr Kodesh uses two examples, which I have explained in the table below: the Commercial Internet and the Industrial Internet. Both are equally important, but the impacts on business and the environment are much greater in the Industrial Internet and could have far-reaching consequences.


When we wake in the morning we tend to think about having a shower and getting ready for work, cooking our breakfast on electric or gas. We don’t think about the water distribution system. We don’t think about power generation or its distribution, and we certainly don’t think about gas extraction or its distribution. We don’t think about the fuel, or where it was made, for the flight across the world for us to do business in another country. We are not sure where the petrol or diesel comes from that powers our cars or trucks.
Well, it’s reasonably simple to define: all of these commodities come from huge assets that may power other assets and have to be maintained. We are talking here about oil and gas drilling and production platforms, oil refineries, and power stations. All of these assets include other assets which also have to be maintained.

Above is a good example of what we are talking about, and one that I was intimately involved with. Some 195 miles out to sea, the first concrete platform (a Condeep, built by Aker in Stavanger, Norway), the Beryl Alpha, was given a life expectancy of 20 years when it was installed by Mobil, now part of ExxonMobil, on the Beryl oilfield (Block 9/13-1) in 1975. Now, 41 years on and having been purchased from ExxonMobil by the Apache Corporation, there is no sign of it being decommissioned, and the addition in 2001 of facilities to process gas from the nearby Skene gas field has given it a new lease of life.


At its peak in 1984, Beryl Alpha was producing some 120,000 bpd. It is still pumping an average of 90,000 to 100,000 barrels of Beryl, a high-quality crude named after Beryl Solomon, wife of Mobil Europe president Charles Solomon. Gas production is around 450 million cubic feet per day, representing nearly 5% of total British gas demand, or the needs of 3.2 million households. Today, “The challenge is the interface between technology 41 years old and new technology.”

So here we are, thinking now about “The Internet of Really Important Things” and how we can use the technology of today with the technology of yesteryear. Doing more with less, sweating the assets, to coin a phrase! Compliance with specifications, rules and regulations is where we need tools and techniques such as Predictive Maintenance (PdM). The linked specifications are a snapshot of the specifications for the Beryl; monitors and sensors ensure that data is captured which can be used to highlight problems before they occur, and this information is collected in real time.
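To make that concrete, here is a minimal, hypothetical sketch (in Node.js, matching the tutorials elsewhere on this blog) of the kind of check a PdM system might run against a realtime sensor stream. The window size and threshold are illustrative, not taken from the Beryl specifications:

// Hypothetical drift detector: alert when a reading strays more than
// nSigma standard deviations from a rolling baseline of recent samples.
function makeDriftDetector(windowSize, nSigma) {
  var samples = [];
  return function check(reading) {
    var alert = false;
    if (samples.length === windowSize) {
      var mean = samples.reduce(function(a, b) { return a + b; }, 0) / samples.length;
      var variance = samples.reduce(function(a, b) { return a + (b - mean) * (b - mean); }, 0) / samples.length;
      var sigma = Math.sqrt(variance);
      alert = sigma > 0 && Math.abs(reading - mean) > nSigma * sigma;
      samples.shift(); // drop the oldest sample
    }
    samples.push(reading);
    return alert;
  };
}

var check = makeDriftDetector(100, 3); // 100-sample baseline, 3-sigma alert
// check(nextBearingTemperature) === true means flag the asset for inspection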
To achieve what is called World Class Maintenance (WCM), it is necessary to improve the maintenance processes that have been adopted. Various tools available today have adopted the word maintenance. It is important to note that these are not new types of maintenance but tools that allow the application of the main types of maintenance.

IoRIT, Internet of Really Important Things | Dr.Dot's Daily Dose:


Monday, 25 April 2016

IoT Ecosystem - Internet of Things Forecasts & Business Opportunities - Business Insider

The Internet of Things (IoT) has been labeled "the next Industrial Revolution" because of the way it will change how people live, work, entertain themselves, and travel, as well as how governments and businesses interact with the world.
In fact, the revolution is already starting. 
That brand new car that comes preloaded with a bunch of apps? Internet of Things. Those smart home devices that let you control the thermostat and play music with a few words? Internet of Things. That fitness tracker on your wrist that lets you tell your friends and family how your exercise is going? You get the point.
But this is just the beginning.
BI Intelligence, Business Insider's premium research service, has tracked the growth of the IoT for more than two years, specifically how consumers, businesses, and governments are using the IoT ecosystem. John Greenough and Jonathan Camhi of BII have compiled an exhaustive report that breaks down the entire IoT ecosystem and forecasts where the burgeoning IoT market is headed. You can learn more and purchase the report here: The Internet of Things Ecosystem Research Report
During the creation of this report, they created the infographic below to show how the IoT ecosystem functions and to demonstrate how the IoT is poised to explode by 2020.
If you found this infographic to be valuable, you will LOVE our extensive IoT Ecosystem Research Report. 
Here are some key points from the report: 
  • In total, we project there will be 34 billion devices connected to the internet by 2020, up from 10 billion in 2015. IoT devices will account for 24 billion, while traditional computing devices (e.g. smartphones, tablets, smartwatches, etc.) will comprise 10 billion.
  • Nearly $6 trillion will be spent on IoT solutions over the next five years.
  • Businesses will be the top adopter of IoT solutions. They see three ways the IoT can improve their bottom line: 1) lowering operating costs; 2) increasing productivity; and 3) expanding to new markets or developing new product offerings.
  • Governments are focused on increasing productivity, decreasing costs, and improving their citizens’ quality of life. We believe they will be the second-largest adopters of IoT ecosystems.
  • Consumers will lag behind businesses and governments in IoT adoption. Still, they will purchase a massive number of devices and invest a significant amount of money in IoT ecosystems.
In full, the report:
  • Distills the technological complexities of the Internet of Things into a single ecosystem
  • Explains the benefits and shortcomings of many networks, including mesh (e.g. ZigBee, Z-Wave, etc.), cellular (e.g. 3G/4G, Sigfox, etc.), and internet networks (e.g. Wi-Fi, Ethernet, etc.)
  • Discusses analytics systems, including edge analytics, cloud analytics, and more
  • Examines IoT security best practices
  • Details the four IoT market drivers and four IoT market barriers
  • Forecasts IoT investment by six layers: connectivity, security, data storage, system integration, device hardware, and application development
  • Analyzes how the IoT ecosystem is being used in a number of industries
  • Defines Internet of Things terminology within a glossary
Interested in getting the full report? Here are two ways to access it:
  1. Purchase & download the full report from our research store. >> Purchase & Download Now
  2. Subscribe to an All-Access pass to BI Intelligence and gain immediate access to this report and over 100 other expertly researched reports. As an added bonus, you'll also gain access to all future reports and daily newsletters to ensure you stay ahead of the curve and benefit personally and professionally. >> Learn More Now
The choice is yours. But however you decide to acquire this report, you’ve given yourself a powerful advantage in your understanding of the fast-moving world of the IoT.

IoT Ecosystem - Internet of Things Forecasts & Business Opportunities - Business Insider:


Tuesday, 5 January 2016

11 Internet of Things (IoT) Protocols You Need to Know About » DesignSpark

There exists an almost bewildering choice of connectivity options for electronics engineers and application developers working on products and systems for the Internet of Things (IoT).
Many communication technologies are well known, such as WiFi, Bluetooth, ZigBee and 2G/3G/4G cellular, but there are also several new emerging networking options, such as Thread as an alternative for home automation applications, and Whitespace TV technologies being implemented in major cities for wider-area IoT-based use cases. Depending on the application, factors such as range, data requirements, security, power demands and battery life will dictate the choice of one technology or some combination of technologies. These are some of the major communication technologies on offer to developers.


An important short-range communications technology is of course Bluetooth, which has become very important in computing and many consumer product markets. It is expected to be key for wearable products in particular, again connecting to the IoT albeit probably via a smartphone in many cases. The new Bluetooth Low-Energy (BLE) – or Bluetooth Smart, as it is now branded – is a significant protocol for IoT applications. Importantly, while it offers similar range to Bluetooth it has been designed to offer significantly reduced power consumption.

However, Smart/BLE is not really designed for file transfer and is more suitable for small chunks of data. It certainly has a major advantage over many competing technologies in the more personal device context, given its widespread integration in smartphones and many other mobile devices. According to the Bluetooth SIG, more than 90 percent of Bluetooth-enabled smartphones, including iOS, Android and Windows based models, are expected to be ‘Smart Ready’ by 2018.

Devices that employ Bluetooth Smart features incorporate the Bluetooth Core Specification Version 4.0 (or higher – the latest is version 4.2, announced in late 2014) with a combined basic-data-rate and low-energy core configuration for an RF transceiver, baseband and protocol stack. Importantly, version 4.2, via its Internet Protocol Support Profile, will allow Bluetooth Smart sensors to access the Internet directly via 6LoWPAN connectivity (more on this below). This IP connectivity makes it possible to use existing IP infrastructure to manage Bluetooth Smart ‘edge’ devices. More information on Bluetooth 4.2 is available here, and a wide range of Bluetooth modules are available from RS.
  • Standard: Bluetooth 4.2 core specification
  • Frequency: 2.4GHz (ISM)
  • Range: 50-150m (Smart/BLE)
  • Data Rates: 1Mbps (Smart/BLE)
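As a taste of how accessible Smart/BLE is from a developer's machine, here is a hedged sketch of scanning for BLE advertisements from Node.js. It assumes the community noble package, which is my addition and not something the article mentions:

var noble = require('noble');

noble.on('stateChange', function(state) {
  if (state === 'poweredOn') {
    noble.startScanning([], true); // no service UUID filter, allow duplicates
  } else {
    noble.stopScanning();
  }
});

noble.on('discover', function(peripheral) {
  // localName and rssi come from the advertisement packet
  console.log(peripheral.advertisement.localName, peripheral.rssi + ' dBm');
});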



ZigBee, like Bluetooth, has a large installed base, although perhaps traditionally more in industrial settings. ZigBee PRO and ZigBee Remote Control (RF4CE), among other available ZigBee profiles, are based on the IEEE802.15.4 protocol, an industry-standard wireless networking technology operating at 2.4GHz and targeting applications that require relatively infrequent data exchanges at low data rates over a restricted area, within a 100m range such as a home or building.

ZigBee/RF4CE has some significant advantages in complex systems offering low-power operation, high security, robustness and high scalability with high node counts and is well positioned to take advantage of wireless control and sensor networks in M2M and IoT applications. The latest version of ZigBee is the recently launched 3.0, which is essentially the unification of the various ZigBee wireless standards into a single standard. An example product and kit for ZigBee development are TI’s CC2538SF53RTQT ZigBee System-On-Chip IC and CC2538 ZigBee Development Kit. 
  • Standard: ZigBee 3.0 based on IEEE802.15.4
  • Frequency: 2.4GHz
  • Range: 10-100m
  • Data Rates: 250kbps


Z-Wave is a low-power RF communications technology that is primarily designed for home automation, for products such as lamp controllers and sensors among many others. Optimized for reliable and low-latency communication of small data packets with data rates up to 100kbit/s, it operates in the sub-1GHz band and is impervious to interference from WiFi and other wireless technologies in the 2.4GHz range such as Bluetooth or ZigBee. It supports full mesh networks without the need for a coordinator node and is very scalable, enabling control of up to 232 devices. Z-Wave uses a simpler protocol than some others, which can enable faster and simpler development, but the only chip maker is Sigma Designs, compared with the multiple sources available for other wireless technologies such as ZigBee.
  • Standard: Z-Wave Alliance ZAD12837 / ITU-T G.9959
  • Frequency: 900MHz (ISM)
  • Range: 30m
  • Data Rates: 9.6/40/100kbit/s 


A key IP (Internet Protocol)-based technology is 6LoWPAN (IPv6 over Low-power Wireless Personal Area Networks). Rather than an IoT application protocol technology like Bluetooth or ZigBee, 6LoWPAN is a network protocol that defines encapsulation and header compression mechanisms. The standard has freedom of frequency band and physical layer and can also be used across multiple communications platforms, including Ethernet, Wi-Fi, 802.15.4 and sub-1GHz ISM. A key attribute is the IPv6 (Internet Protocol version 6) stack, which has been a very important introduction in recent years to enable the IoT. IPv6 is the successor to IPv4 and offers approximately 5 x 10^28 addresses for every person in the world, enabling any embedded object or device in the world to have its own unique IP address and connect to the Internet. Especially designed for home or building automation, for example, IPv6 provides a basic transport mechanism to produce complex control systems and to communicate with devices in a cost-effective manner via a low-power wireless network.
Designed to send IPv6 packets over IEEE802.15.4-based networks and implementing open IP standards including TCP, UDP, HTTP, CoAP, MQTT, and WebSockets, the standard offers end-to-end addressable nodes, allowing a router to connect the network to IP. 6LoWPAN supports robust, scalable, self-healing mesh networks: mesh router devices can route data destined for other devices, while hosts are able to sleep for long periods of time. An explanation of 6LoWPAN is available here, courtesy of TI.
  • Standard: RFC6282
  • Frequency: adapted and used over a variety of other networking media, including Bluetooth Smart (2.4GHz), ZigBee, and sub-1GHz low-power RF
  • Range: N/A
  • Data Rates: N/A
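Because 6LoWPAN nodes are ordinary IP endpoints, the standard application protocols listed above work unchanged. A minimal sketch, assuming the mqtt npm package and a reachable broker (the broker address here is hypothetical):

var mqtt = require('mqtt');
var client = mqtt.connect('mqtt://'); // hypothetical broker address

client.on('connect', function() {
  client.subscribe('home/livingroom/temperature');
  client.publish('home/livingroom/temperature', '21.5');
});

client.on('message', function(topic, message) {
  console.log(topic + ': ' + message.toString());
});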



A very new IP-based IPv6 networking protocol aimed at the home automation environment is Thread. Based on 6LoWPAN, and like it, Thread is not an IoT application protocol like Bluetooth or ZigBee. However, from an application point of view, it is primarily designed as a complement to WiFi: it recognises that while WiFi is good for many consumer devices, it has limitations for use in a home automation setup.
Launched in mid-2014 by the Thread Group, the royalty-free protocol is based on various standards, including IEEE802.15.4 (as the wireless air-interface protocol), IPv6 and 6LoWPAN, and offers a resilient IP-based solution for the IoT. Designed to work on existing IEEE802.15.4 wireless silicon from chip vendors such as Freescale and Silicon Labs, Thread supports a mesh network using IEEE802.15.4 radio transceivers and is capable of handling up to 250 nodes with high levels of authentication and encryption. A relatively simple software upgrade should allow users to run Thread on existing IEEE802.15.4-enabled devices.
  • Standard: Thread, based on IEEE802.15.4 and 6LowPAN
  • Frequency: 2.4GHz (ISM)
  • Range: N/A
  • Data Rates: N/A


WiFi connectivity is often an obvious choice for many developers, especially given the pervasiveness of WiFi within home LANs. It requires little further explanation, except to state the obvious: there is a wide existing infrastructure, and it offers fast data transfer and the ability to handle high quantities of data.
Currently, the most common WiFi standard used in homes and many businesses is 802.11n, which offers serious throughput in the range of hundreds of megabits per second. This is fine for file transfers, but may be too power-consuming for many IoT applications. A series of RF development kits designed for building WiFi-based applications are available from RS.
  • Standard: Based on 802.11n (most common usage in homes today)
  • Frequencies: 2.4GHz and 5GHz bands
  • Range: Approximately 50m
  • Data Rates: 600Mbps maximum, but 150-200Mbps is more typical, depending on channel frequency used and number of antennas (the latest 802.11ac standard should offer 500Mbps to 1Gbps)


Any IoT application that requires operation over longer distances can take advantage of GSM/3G/4G cellular communication capabilities. While cellular is clearly capable of sending high quantities of data, especially on 4G, the expense and power consumption will be too high for many applications; however, it can be ideal for sensor-based low-bandwidth-data projects that send very small amounts of data over the Internet. A key product in this area is the SparqEE range of products, including the original tiny CELLv1.0 low-cost development board and a series of shield connecting boards for use with the Raspberry Pi and Arduino platforms.
  • Standard: GSM/GPRS/EDGE (2G), UMTS/HSPA (3G), LTE (4G)
  • Frequencies: 900/1800/1900/2100MHz
  • Range: 35km max for GSM; 200km max for HSPA
  • Data Rates (typical download): 35-170kbps (GPRS), 120-384kbps (EDGE), 384kbps-2Mbps (UMTS), 600kbps-10Mbps (HSPA), 3-10Mbps (LTE)


NFC (Near Field Communication) is a technology that enables simple and safe two-way interactions between electronic devices. It is especially applicable to smartphones, allowing consumers to perform contactless payment transactions, access digital content and connect electronic devices. Essentially it extends the capability of contactless card technology and enables devices to share information at a distance of less than 4cm. Further information is available here.
  • Standard: ISO/IEC 18000-3
  • Frequency: 13.56MHz (ISM)
  • Range: 10cm
  • Data Rates: 100–420kbps


An alternative wide-range technology is Sigfox, which in terms of range sits between WiFi and cellular. It uses the ISM bands, which are free to use without the need to acquire licences, to transmit data over a very narrow spectrum to and from connected objects. The idea behind Sigfox is that for many M2M applications running on a small battery and only requiring low levels of data transfer, WiFi’s range is too short, while cellular is too expensive and consumes too much power. Sigfox uses a technology called Ultra Narrow Band (UNB) and is only designed to handle low data-transfer speeds of 10 to 1,000 bits per second. It consumes only 50 microwatts, compared to 5,000 microwatts for cellular communication, and can deliver a typical standby time of 20 years with a 2.5Ah battery, versus only 0.2 years for cellular.
Already deployed in tens of thousands of connected objects, the network is currently being rolled out in major cities across Europe, including ten cities in the UK for example. The network offers a robust, power-efficient and scalable network that can communicate with millions of battery-operated devices across areas of several square kilometres, making it suitable for various M2M applications that are expected to include smart meters, patient monitors, security devices, street lighting and environmental sensors. The Sigfox system uses silicon such as the EZRadioPro wireless transceivers from Silicon Labs, which deliver industry-leading wireless performance, extended range and ultra-low power consumption for wireless networking applications operating in the sub-1GHz band. 
  • Standard: Sigfox
  • Frequency: 900MHz
  • Range: 30-50km (rural environments), 3-10km (urban environments)
  • Data Rates: 10-1000bps 
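A quick sanity check of those battery figures, as a back-of-envelope model that assumes a constant average draw (real duty cycles vary):

// A 2.5 Ah battery lasting 20 years implies roughly a 14 microamp average
// draw, which is consistent with the ~50 microwatt figure at a ~3 V supply.
var capacityAh = 2.5;
var hours = 20 * 365 * 24; // 20 years = 175,200 hours
console.log(((capacityAh / hours) * 1e6).toFixed(1) + ' uA average draw'); // ~14.3 uA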


Similar in concept to Sigfox and operating in the sub-1GHz band, Neul leverages very small slices of the TV White Space spectrum to deliver high-scalability, high-coverage, low-power and low-cost wireless networks. Systems are based on the Iceni chip, which communicates using the white space radio to access the high-quality UHF spectrum now available due to the analogue-to-digital TV transition. The communications technology is called Weightless, a new wide-area wireless networking technology designed for the IoT that largely competes against existing GPRS, 3G, CDMA and LTE WAN solutions. Data rates can be anything from a few bits per second up to 100kbps over the same single link, and devices can consume as little as 20 to 30mA from 2xAA batteries, meaning 10 to 15 years in the field.
  • Standard: Neul
  • Frequency: 900MHz (ISM), 458MHz (UK), 470-790MHz (White Space)
  • Range: 10km
  • Data Rates: Few bps up to 100kbps


Again, similar in some respects to Sigfox and Neul, LoRaWAN targets wide-area network (WAN) applications and is designed to provide low-power WANs with features specifically needed to support low-cost, mobile, secure bi-directional communication in IoT, M2M, smart city and industrial applications. Optimized for low power consumption and supporting large networks with millions of devices, it offers data rates ranging from 0.3 kbps to 50 kbps.
  • Standard: LoRaWAN
  • Frequency: Various
  • Range: 2-5km (urban environment), 15km (suburban environment)
  • Data Rates: 0.3-50 kbps.
More about the Internet of Things in our IOT Design Centre


11 Internet of Things (IoT) Protocols You Need to Know About » DesignSpark:


Tuesday, 8 December 2015

Tiny wireless sensor never needs a battery

The internet of things is a nice idea, but there's one big catch: you have to power all those smart devices, which is no mean feat when some of them might not even have room for a battery. Dutch researchers think they have a solution, though. They've built an extra-small (2 square millimeters) wireless temperature sensor that gets its power from the radio waves that make up its wireless network. All it needs is energy from a nearby router -- once there's enough, it powers up and starts working.
Right now, the sensor can't be further than an inch from its host, which isn't exactly practical. Thankfully, this isn't the end of the story. The team hopes to extend that range to nearly 10 feet within a year, and ultimately to 16 feet. If the network-based power takes off, you could see smart homes full of virtually invisible sensors that control all your devices. You could have lights that turn on the moment you enter any room (not just those you care about the most), or heating that shuts off as each room warms up. The best part may be that these sensors would be very cheap, at about 20 cents each. At that price, it wouldn't cost a fortune to make the upgrade.
[Image credit: Bart van Overbeeke/Eindhoven University of Technology]

Tiny wireless sensor never needs a battery:


Saturday, 25 April 2015

the painful journey of painless deployments: from github to aws ebs and docker

the painful journey of painless deployments: from github to aws ebs and docker:


This document shows the result of a few-weeks-long journey of finding the best solution to automatically deploy a project hosted on GitHub to Amazon Web Services. This is not a theoretical document, but a guide that aims to help you deploy your code faster and more frequently.
I'm not a devOps guy. I'm a developer who went through some pain to deploy his code in a painless and automated way, to free up time for more interesting things (playing ping pong with co-workers, more coding, working on stuff that may or may not involve drones).
First, we will build a small test application, an API really. I'm using the nodejs framework hapi here and connecting it to a Neo4j database which is hosted for free on Graph Story.
For a great build experience, I've signed up for a Startup Account at Shippable, which at the time of this writing is $12/year.
I'm building the application image on a dedicated t2.small AWS EC2 instance.
For the actual deployment of the different application environments (just development and production here) we're utilizing AWS Elastic Beanstalk.
If everything works out, a docker image gets created and pushed to Docker Hub, either as a public or a private repository.
Setting this infrastructure up takes a little bit of time, but the big payoff is that a commit to the master branch will result in the deployment to the production environment, while a commit to the develop branch will result in the deployment to the development environment.
Say good-bye to manual deployments.

The Application

The Database

While I could fill pages talking about how just awesome Neo4j is, I'm just letting you know that you have to take a look at this database. Graph databases are not just for social networks. For the sake of simplicity, the application is not going to utilize any of the great benefits of Neo4j or graph databases in general. The focus of this post is after all on deploying software and not how to design and write it.
After signing up and logging into the free Graph Story account, you should create a database and click on the Neo4j Web UI button to access the built-in web interface of Neo4j. [Image: Graph Story dashboard]
Once we're in the web interface, let's create some nodes (or vertices, as they're more commonly called in graph theory) with Cypher.
CREATE (u:User {name: 'Matthias Sieber', email: ''}),(v:User {name: 'Test User', email: '', isFake: true}) RETURN u,v
This query will create two nodes with a user label and some properties. We're also returning those nodes, so you'll have something to look at.
[Image: the created user nodes]
And that's actually all the data we're creating now.
On to our hapijs app.


We will call our nodejs application Epione. It has become a practice at the companies I work at to name our projects after gods, spirits and other mythological beings. According to Wikipedia, Epione was the goddess of the soothing of pain.
As with any node application, I'm starting with a new directory and the package.json. Here are the contents for this sample application.
  "name": "epione",
  "private": true,
  "version": "0.0.1",
  "repository": {
    "type": "git",
    "url": "git://"
  "description": "Epione is the goddess of soothing of pain.",
  "author": "Matthias Sieber <>",
  "dependencies": {
    "boom": "^2.7.0",
    "hapi": "^8.4.0",
    "joi": "~6.1.0",
    "request-promise-json": "^1.0.4"
  "devDependencies": {
    "better-console": "^0.2.4",
    "chai": "^2.2.0",
    "gulp": "^3.8.11",
    "gulp-env": "^0.2.0",
    "gulp-lab": "^1.0.5",
    "gulp-nodemon": "^2.0.2",
    "lab": "^5.5.1"
  "engines": {
    "node": "0.12"
  "scripts": {
    "test": "gulp test",
    "start": "gulp serve"
Since this is a rather simple application, I'm going to put all the server logic into the server.js.
'use strict';

var Hapi = require('hapi');
var Boom = require('boom');
var rp = require('request-promise-json');
var constants = require('src/config/constants.js');
var host = constants.application.host;
var port = constants.application.port;
var commitURL = constants.database.commitUrl;
var server = new Hapi.Server({
  connections: {
    routes: {
      cors: {
        origin: ['*'] // to allow API requests from our front end later
      }
    }
  }
});

server.connection({ host: host, port: port });

server.route({
  method: 'GET',
  path: '/realusers',
  handler: function(request, reply) {
    var query = 'MATCH (user:User) WHERE NOT HAS(user.isFake) RETURN user';
    var options = {
      uri: commitURL,
      method: 'POST',
      body: { 'statements': [{ 'statement': query }] }
    };
    return rp.request(options).then(function(result) {
      if (result.results.length > 0) {
        return reply(result.results[0].data);
      } else {
        return reply(Boom.notFound());
      }
    });
  }
});

server.route({
  method: 'GET',
  path: '/fakeusers',
  handler: function(request, reply) {
    var query = 'MATCH (user:User) WHERE HAS(user.isFake) RETURN user';
    var options = {
      uri: commitURL,
      method: 'POST',
      body: { 'statements': [{ 'statement': query }] }
    };
    return rp.request(options).then(function(result) {
      if (result.results.length > 0) {
        return reply(result.results[0].data);
      } else {
        return reply(Boom.notFound());
      }
    });
  }
});

if (!module.parent) {
  server.start(function() {
    console.log('Server running at: ',;
  });
}

module.exports = server;
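With the development environment variables described in the next section exported, a quick smoke test from another terminal looks something like this (the port assumes you set EPIONE_DEVELOPMENT_NODE_PORT to 8000; adjust to your own value):

npm start
curl http://localhost:8000/realusers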
If you take a look at the source code, you might have noticed that we're referencing a constants.js. This is a configuration file that's especially beneficial when we're developing locally against different environments. That way we can store environment variables in our ~/.zshenv, or wherever you store the environment variables for your shell of choice.
Create this file in src/config/constants.js:
'use strict';

module.exports = (function() {

  var env = process.env.NODE_ENV || 'development';

  var databaseConfig = function() {
    return {
      'production': {
        'protocol': process.env.EPIONE_DB_PROTOCOL,
        'host': process.env.EPIONE_DB_HOST,
        'user': process.env.EPIONE_DB_USER,
        'password': process.env.EPIONE_DB_PASS,
        'port': process.env.EPIONE_DB_PORT
      },
      'development': {
        'protocol': process.env.EPIONE_DEVELOPMENT_DB_PROTOCOL,
        'host': process.env.EPIONE_DEVELOPMENT_DB_HOST,
        'user': process.env.EPIONE_DEVELOPMENT_DB_USER,
        'password': process.env.EPIONE_DEVELOPMENT_DB_PASS,
        'port': process.env.EPIONE_DEVELOPMENT_DB_PORT
      },
      'test': {
        'protocol': process.env.EPIONE_TEST_DB_PROTOCOL,
        'host': process.env.EPIONE_TEST_DB_HOST,
        'user': process.env.EPIONE_TEST_DB_USER,
        'password': process.env.EPIONE_TEST_DB_PASS,
        'port': process.env.EPIONE_TEST_DB_PORT
      }
    };
  };

  var applicationConfig = function() {
    return {
      'production': {
        'url': 'http://' + process.env.EPIONE_NODE_HOST + ':' + process.env.EPIONE_NODE_PORT,
        'host': process.env.EPIONE_NODE_HOST,
        'port': process.env.EPIONE_NODE_PORT
      },
      'development': {
        'url': 'http://' + process.env.EPIONE_DEVELOPMENT_NODE_HOST + ':' + process.env.EPIONE_DEVELOPMENT_NODE_PORT,
        'host': process.env.EPIONE_DEVELOPMENT_NODE_HOST,
        'port': process.env.EPIONE_DEVELOPMENT_NODE_PORT
      },
      'test': {
        'url': 'http://' + process.env.EPIONE_TEST_NODE_HOST + ':' + process.env.EPIONE_TEST_NODE_PORT,
        'host': process.env.EPIONE_TEST_NODE_HOST,
        'port': process.env.EPIONE_TEST_NODE_PORT
      }
    };
  };

  var dbConstants = databaseConfig();
  var appConstants = applicationConfig();

  var obj = {
    application: {
      url: appConstants[env].url,
      host: appConstants[env].host,
      port: appConstants[env].port
    },
    database: {
      host: dbConstants[env].host,
      user: dbConstants[env].user,
      password: dbConstants[env].password,
      port: dbConstants[env].port,
      commitUrl: dbConstants[env].protocol + '://' + dbConstants[env].user + ':' + dbConstants[env].password + '@' +
                 dbConstants[env].host + ':' + dbConstants[env].port +
                 '/db/data/transaction/commit' // Neo4j transactional endpoint
    },
    server: {
      defaultHost: 'http://localhost:8001'
    }
  };

  if (! {
    throw new Error('Missing constant Check your environment variables.');
  } else if (!obj.application.port) {
    throw new Error('Missing constant application.port. Check your environment variables.');
  } else if (! {
    throw new Error('Missing constant Check your environment variables.');
  } else if (!obj.database.port) {
    throw new Error('Missing constant database.port. Check your environment variables.');
  } else if (!obj.database.user) {
    throw new Error('Missing constant database.user. Check your environment variables.');
  } else if (!obj.database.password) {
    throw new Error('Missing constant database.password. Check your environment variables.');
  }

  return obj;
})();
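Because the module throws as soon as a required variable is missing, a quick way to verify your shell environment before running the app is a one-liner along these lines:

NODE_PATH=. NODE_ENV=development node -e "require('src/config/constants.js'); console.log('constants OK')"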
Since we only want to deploy our code when all the tests pass, we need to create some tests first.
Create a simple test for each of our two end points in test/api/user.js:
'use strict';

var Lab = require('lab');
var lab = exports.lab = Lab.script();
var server = require('server');
var assert = require('chai').assert;

lab.experiment('Email/pw authentication', function() {
  lab.test('Returns real users', function(done) {
    var options = {
      method: 'GET',
      url: '/realusers'
    };
    server.inject(options, function(response) {
      assert.equal(response.statusCode, 200);
      var result = JSON.stringify(response.result);
      var expected = JSON.stringify([{"row":[{"name":"Matthias Sieber","email":""}]}]);
      assert.strictEqual(expected, result);
      done();
    });
  });
  lab.test('Returns fake users', function(done) {
    var options = {
      method: 'GET',
      url: '/fakeusers'
    };
    server.inject(options, function(response) {
      assert.equal(response.statusCode, 200);
      var result = JSON.stringify(response.result);
      var expected = JSON.stringify([{"row":[{"name":"Test User","email":"","isFake":true}]}]);
      assert.strictEqual(expected, result);
      done();
    });
  });
});
Now we need a nice way to test the app and serve it as well. We're using gulp for this. In your project root, create a gulpfile.js:
'use strict';

var gulp = require('gulp');
var env = require('gulp-env');
var nodemon = require('gulp-nodemon');
var lab = require('gulp-lab');
var betterConsole = require('better-console');

gulp.task('test', ['set-test-env', 'set-node-path'], function() {
  return gulp.src([
      'test/**/*.js' // reconstructed glob; our tests live under test/
    ], { read: false })
    .pipe(lab({
      args: '-c -t 85', // enforce code coverage of at least 85%
      opts: {
        emitLabError: true
      }
    }));
});

gulp.task('set-test-env', function() {
  return env({
    vars: {
      NODE_ENV: 'test'
    }
  });
});

gulp.task('set-node-path', function() {
  return env({
    vars: {
      NODE_PATH: '.'
    }
  });
});

gulp.task('serve', function() {
  env({
    vars: {
      NODE_PATH: '.',
      NODE_ENV: 'development'
    }
  });

  nodemon({
    script: 'server.js',
    ext: 'js',
    nodeArgs: ['--debug']
  });
});
And that's it for now with the application.
Now run npm install to install all the dependencies. Save your Graph Story Neo4j environment variables into your ~/.zshenv (don't forget to source it afterwards) or prepend them to gulp test, for example (host, user and password are your own Graph Story credentials):

EPIONE_TEST_NODE_HOST='' EPIONE_TEST_NODE_PORT='8000' EPIONE_TEST_DB_PROTOCOL='https' EPIONE_TEST_DB_PORT='7473' EPIONE_TEST_DB_HOST='your-neo4j-host' EPIONE_TEST_DB_USER='your-user' EPIONE_TEST_DB_PASS='your-password' gulp test

Your two tests should pass and the code coverage should be over 85%.

Push to GitHub

Now that your application is locally tested, we can push it to GitHub.
I won't go over the git workflow, but after you've initialized your git repository via git init, it's a good idea to create a .gitignore file and exclude some files and directories. My minimal .gitignore for a node project on a Mac using vim usually looks like this:

node_modules/
.DS_Store
*.swp
*.swo
npm-debug.log

After adding the files, I commit and push to the master branch of my newly created private GitHub repository.
Now let's make it so that whenever we push to master (or the yet-to-be-created develop branch), a build/deployment process will take place.

AWS & Shippable

We've decided to host our application on AWS primarily for scalability reasons. We will build a dedicated host that takes care of our automated deployments and utilize Elastic Beanstalk to share our application with the world.
Shippable is a containerized continuous integration platform. Because we want to build a docker image for our application from the official node docker image later on, I recommend signing up for the Startup account. This tutorial requires it (at least at this point), and the current $12/year is money well spent.

Setting up the dedicated host

Now is the time to set up a dedicated host that will communicate with Shippable to make seamless deployment possible. For this example, I've chosen to set up a t2.small EC2 instance in North Virginia (USA), also known as us-east-1. Note: Shippable won't run on a t2.micro, as 2 GB of RAM are required.
Use the Ubuntu 14.04 image on a t2.small instance (or better) with 30 GB of storage (or more). Everything else can stay the same. You'll also need SSH access, so be sure to generate a new key pair or choose an existing one you can use. [Image: Ubuntu image selection] After the instance has launched, I recommend giving it an Elastic IP, as a fixed public IP address is required by Shippable.
Speaking of Shippable, let's head over there. Once you've set up your Startup account for yourself or your organization, we need to make our AWS instance available to Shippable. [Image: setting up the dedicated host in Shippable]
Now click Add Node, connect via SSH to the AWS EC2 instance that's going to be the dedicated host, copy the command shown on the modal and paste it into your SSH session. You can then exit the SSH session. Now fill out the connection details and hit save. [Image: adding the node]
After the node is added, we can initialize it by hitting the power icon. This process might take a few minutes. After that, there's one more icon to hit to deploy the builder. [Image: deploying the builder]
Your AWS EC2 instance is now ready to build and deploy.

Enable GitHub repo for Shippable

Let's find our GitHub repo in Shippable's repository list and enable the repo where our application code is hosted. [Image: enabling the repo]
Now the repository has been activated, and Shippable uses webhooks to start a build process. As you can see, no builds have been run yet. We're about to change that. [Image: repository activated]
In order to do that, we need to switch between AWS, Shippable and our application source code quite a bit.

Get AWS credentials

If you haven't done so already, create a user in the IAM Management Console for the build server. You will need the credentials in a bit.

Create AWS Elastic Beanstalk

Now we're preparing our Elastic Beanstalk application and environments. I invite you to create a new EBS application in us-east-1.
I fill out the Application name with epione. On the next page I select Create Web Server. One step further, the correct settings for this scenario are Docker under "Predefined configuration" and Load balancing, auto scaling under "Environment type". I confirm those settings and also accept the default for "Application version" on the next page, which is the sample application.
On the next page we're setting up our production environment. We are in luck, as the name epione is not taken yet. [Image: setting up the environment]
On the following pages you can set up this environment to your liking. I stuck with the defaults in this case.
While our production environment is launching with a sample application, we should go ahead and set up another environment for development. [Image: creating the new environment]
The steps are the same, but you'll need a different environment name and URL. Also, you will probably use less powerful instances in development than in production. [Image: the development environment]

Preparing the app for AWS EBS

So let's actually get back to our application source code. In order for our git push to have the desired effect of deploying to AWS, we need to do just a little bit of preparation.
First, let's create and work on a develop branch: git checkout -b develop
In our project's root, create a .elasticbeanstalk directory: mkdir .elasticbeanstalk
In the newly created directory, create a config.yml with these contents:
branch-defaults:
  develop:
    environment: epione-dev
  master:
    environment: epione
  default:
    environment: epione-dev
global:
  application_name: epione
  default_ec2_keyname: null
  default_platform: Docker 1.5.0
  default_region: us-east-1
  profile: eb-cli
  sc: git
Up next is the shippable.yml:
build_image: node
language: node_js
node_js:
  - "0.12"
branches:
  only:
    - master
    - develop
before_install:
  - apt-get install -y python-dev
  - pip install awsebcli
  - npm install
  - mkdir -p ~/.aws
  - echo '[profile eb-cli]' > ~/.aws/config
  - echo "aws_access_key_id = $AWSAccessKeyId" >> ~/.aws/config
  - echo "aws_secret_access_key = $AWSSecretKey" >> ~/.aws/config
script:
  - npm test
commit_container: manonthemat/epione
after_success:
  - eb init && eb deploy --timeout 20
env:
  global:
    - secure: YOURSECRETENVVARIABLESENCODEDHERE
notifications:
  email: false
Some applications might take longer to deploy and while the deployment will still succeed, Shippable will return an error due to EBS not responding in time. To avoid that scenario I recommend increasing the timeout from the default of 10 minutes to 20 minutes with the timeout option in the deploy step as shown above.
Shippable allows you to encrypt the environment variable definitions and keep your configurations private using the secure tag as shown above. To do so, browse to the organization dashboard or individual dashboard page from where you have enabled your project and click on the ENCRYPT ENV VARS button. [Image: encrypting env variables]
I've encrypted these environment variables:
  • AWSAccessKeyId
  • AWSSecretKey
Replace the line - secure: YOURSECRETENVVARIABLESENCODEDHERE in the shippable.yml with your encrypted environment variables.
[Image: encoding secrets]
If you have your own IRC server and/or channel, feel free to add Shippable notifications to your channel, too. At the end of the file, add a line in the format of irc: "". Make sure its indentation matches the disabled email notifications.


Create the docker hub repo

Replace the commit_container: manonthemat/epione with your own Docker Hub repository. If you want it to be a private repository, you have to create it first on Docker Hub and set that repository (not automated build) to private before you initiate the build process. [Image: creating a private Docker Hub repository]
Also note that when you are using a private repository, you have to give Shippable access to your Docker Hub account. You can do that from the same screen where you started setting up your dedicated host and encrypted your env variables. [Image: Docker Hub access in Shippable]


Now we're creating the Dockerfile:
FROM node:0.12

MAINTAINER Matthias Sieber <>

# All application env variables on a single line; host, user and password
# values here are placeholders, so substitute your own
ENV EPIONE_NODE_HOST= EPIONE_NODE_PORT=8000 EPIONE_DB_PROTOCOL=https EPIONE_DB_PORT=7473 EPIONE_DB_HOST=your-neo4j-host EPIONE_DB_USER=your-user EPIONE_DB_PASS=your-password EPIONE_DEVELOPMENT_NODE_HOST= EPIONE_DEVELOPMENT_NODE_PORT=8000 EPIONE_DEVELOPMENT_DB_PROTOCOL=https EPIONE_DEVELOPMENT_DB_PORT=7473 EPIONE_DEVELOPMENT_DB_HOST=your-neo4j-host EPIONE_DEVELOPMENT_DB_USER=your-user EPIONE_DEVELOPMENT_DB_PASS=your-password

COPY . /data
WORKDIR /data
RUN npm install

EXPOSE 8000

CMD ["npm","start"]
Since the resulting Docker image will be deployed to AWS EBS in either production or development, we don't include the env variables for test. As before, substitute your own values. It is important to set the node host to for proper access.
Notice also how we're putting all the environment variables in one line to reduce the layers in the resulting Docker image.
We also need a minimal for AWS Elastic Beanstalk to expose port 8000:
  "AWSEBDockerrunVersion": "1",
  "Ports": [
      "ContainerPort": "8000"


Now let's commit these changes to GitHub to kick off our build process and deployment to AWS EBS and the creation of a Docker image.
git add .
git commit -m 'ready to deploy'
git push origin develop
Since Shippable registered a webhook with GitHub, Shippable will be notified about the recent push of the develop branch and start building the image. You can watch the live process by clicking into the build group icon. [Image: the first deploy]
When the build is successful (which also means our application tests passed), the image gets deployed to AWS Elastic Beanstalk. [Image: deploying to AWS EBS]
You can check that in your AWS EBS dashboard.

Develop deployed

Browse to your equivalent of the epione-dev environment URL to see the develop branch of your application live. [Image: develop deployed]
We're happy with this so we're going to merge the develop branch into master and push that to GitHub, so the app will be deployed to the production environment as well.

Ready to launch

Without changing anything, let's go ahead and merge the develop branch into master.
In our application source code directory:
git checkout master
git merge develop
git push origin master
Once again, GitHub notifies Shippable about the changes to our repository, and the build & deployment process starts again. While the application is being deployed, you can check that the old version (the sample application) of the production environment is still live. This is known as a rolling deployment, and it's good!
Once AWS is finished with the deployment, go ahead and test your live application.

But wait, there's more!

One of the reasons we're using docker is that we can have an exact match of our application on a local machine.
One of the many use cases: I can give my front-end colleagues access to the docker repository of my API. They can then develop against an immutable docker container and not have to mess with git for that application.
The last step in our build/deployment process is the upload of our docker image to docker hub. A developer can then go ahead and run a specific version locally and use that to further develop his/her own application against it.
docker run -p 8000:8000 manonthemat/epione:master.2
will run the application locally and map the application's port 8000 to port 8000 on the host machine.
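If port 8000 is already taken on your machine, remember that the left-hand side of -p is the host port; for example:

docker run -p 3000:8000 manonthemat/epione:master.2

maps the container's port 8000 to port 3000 on the host.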

Testing with the Docker image

Try running the tests:
docker run manonthemat/epione:master.2 npm test
Instead of actually running the tests, this should fail, as the node environment gets switched to test and we haven't included any test environment variables in the Dockerfile. So let's set the expected environment variables for the docker container:
docker run -e EPIONE_TEST_NODE_HOST='' -e EPIONE_TEST_NODE_PORT='8000' -e EPIONE_TEST_DB_PROTOCOL='https' -e EPIONE_TEST_DB_PORT='7473' -e EPIONE_TEST_DB_HOST='' -e EPIONE_TEST_DB_USER='graphstoryuser' -e EPIONE_TEST_DB_PASS='graphstorypassword' manonthemat/epione:master.2 npm test
Now the tests should pass and everything is awesome.

Things to do when you share your Docker image with the world

There's one thing to keep in mind when you want to share your created Docker images with the world, or when you don't want to expose environment variables that are critical to the app running on AWS. The environment variables you define in your Dockerfile are readable by anyone downloading that docker image. Since you're running Docker as the platform for your Amazon Elastic Beanstalk application, you still need these environment variables, but there's an easy fix for this scenario.
First, let's get rid of the env variables in our Dockerfile, so this is all that's left in it.
FROM node:0.12

MAINTAINER Matthias Sieber <>

COPY . /data
WORKDIR /data
RUN npm install

EXPOSE 8000

CMD ["npm","start"]
Without the environment variables your docker image is safe, but if you now deploy your application to AWS Elastic Beanstalk, nginx will probably greet you with an HTTP status code 502 "Bad Gateway". A look into the logs and you'll notice that you have not supplied the environment variables needed for our hapi app.
The easy solution is to go to your local git repository and add an .ebextensions directory in the root of your project. In this directory, create a file env.config and fill it with your environment variables (the ones you had in your Dockerfile) using this format.
    value: "https"
    value: "7473"
    value: ""
    value: "8000"
    value: ""
    value: "graphstoryuser"
    value: "graphstorypass"
  - option_name: EPIONE_DB_PROTOCOL
    value: "https"
  - option_name: EPIONE_DB_PORT
    value: "7473"
  - option_name: EPIONE_NODE_HOST
    value: ""
  - option_name: EPIONE_NODE_PORT
    value: "8000"
  - option_name: EPIONE_DB_HOST
    value: ""
  - option_name: EPIONE_DB_USER
    value: "graphstoryuser"
  - option_name: EPIONE_DB_PASS
    value: "graphstorypass"
Now add this directory and the file to your git commit and push it. Your deployment should be successful and your created docker image should not contain any environment variables you don't want the outside world to know.

Where to go from here

This deployment strategy saved my development teams a lot of pain, stress and all-around suffering. Because deploying is now really simple, we can focus on developing software again.
One thing that I want to do is to include an after_failure step.
For example: when the build breaks, the Release The Drones protocol will be activated and will find the developer who's responsible for breaking the build. This developer will then get attacked by the swarm and a stationary Nerf turret.

If this article gets shared over 2500 times via Airpair's social media widget, I shall write a follow up article on that topic.
