Archive

JavaScript

A problem I found when setting up my Node.js services on a new Raspberry Pi (or resetting one that had gotten into a bad state) was keeping track of each service's individual port number. A typical list looked something like this:

  • News Headlines Backend – 5000
  • Tube Status Backend – 5050
  • LED Server – 5001
  • Backlight Server – 5005
  • Attic – 5500
  • Spotify Auth – 5009

…and so on. This wasn’t only a problem during setup, but also when maintaining the numerous config.json files for each app that needed to talk to any of the others.

So to do something about it, I decided to have a go at implementing a central message broker service (nominally called Message Bus, or MBus) from scratch (building things from scratch is a key feature of my hobbyist development, since you tend to learn a lot more this way). This new service had to be generic, allowing all kinds of messages, defined by the services themselves, to flow between them. It had to be fault tolerant, and so should use JSON Schema to make sure all messages are of the correct format. And lastly, it shouldn’t care what the connection details are for each app at startup.

 

Client Registration and Message Exchange

To solve this last problem, each app uses a common Node.js module that knows the port of a local instance of MBus and requests a port assignment. MBus responds with a randomly rolled port number from a range (making sure it isn’t already allocated to another app), and the client app then creates an Express server that listens on the allocated port. If MBus receives a message with a known client app as the destination, it simply forwards it to that port on the local machine, where the client app will be listening as expected. These two processes are summarised below:
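As a flavour of what the allocation step involves, here is a minimal sketch (the port range and bookkeeping structure are my assumptions here, not the actual MBus code):

const ALLOCATED_PORTS = {};  // app name -> assigned port
const PORT_RANGE = { min: 10000, max: 20000 };  // assumed range

// Roll random ports until one is found that isn't already allocated
function allocatePort(appName) {
  let port;
  do {
    port = PORT_RANGE.min + Math.floor(Math.random() * (PORT_RANGE.max - PORT_RANGE.min));
  } while (Object.values(ALLOCATED_PORTS).includes(port));

  ALLOCATED_PORTS[appName] = port;
  return port;
}

// Later, messages addressed to a known app are forwarded to its local port
function getDestinationPort(appName) {
  return ALLOCATED_PORTS[appName];
}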

 

Client Implementation

To implement a new client that talks to MBus, the app includes the mbus.js common module and registers itself at runtime. It also specifies the message schemas it expects to receive via MBus, using conventional JSON Schema:

const mbus = require('../node-common').mbus();

const GET_MESSAGE_SCHEMA = {
  type: 'object',
  required: [ 'app', 'key' ],
  properties: {
    app: { type: 'string' },
    key: { type: 'string' }
  }
};

const SET_MESSAGE_SCHEMA = {
  type: 'object',
  required: [ 'app', 'key', 'value' ],
  properties: {
    app: { type: 'string' },
    key: { type: 'string' },
    value: {}
  }
};

async function setup() {
  await mbus.register();

  mbus.addTopic('get', require('../api/get'), GET_MESSAGE_SCHEMA);
  mbus.addTopic('set', require('../api/set'), SET_MESSAGE_SCHEMA);
}

Once this is done, config.json is also updated to specify where the app can find the central MBus instance, and the name it will be identified by when messages are destined for it:

{
  "MBUS": {
    "HOST": "localhost",
    "PORT": 5959,
    "APP": "LedServer"
  }
}

The mbus.js module also takes care of the message metadata, and the server checks the overall packet against its own schema:

const MESSAGE_SCHEMA = {
  type: 'object',
  required: [ 'to', 'from', 'topic', 'message' ],
  properties: {
    status: { type: 'integer' },
    error: { type: 'string' },
    to: { type: 'string' },
    from: { type: 'string' },
    topic: { type: 'string' },
    message: { type: 'object' },
    broadcast: { type: 'boolean' }
  }
};
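For reference, a complete packet that satisfies this schema might look like the following (the app names, topic, and value here are just examples):

{
  "to": "LedServer",
  "from": "Monitor",
  "topic": "set",
  "message": {
    "app": "Monitor",
    "key": "lastCheck",
    "value": 1519034400
  }
}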

 

Example Implementations

You can find the code for MBus in the GitHub repository, and also check some example clients including Attic, LED Server, and Monitor.

Barring a few client app updates (luckily, no seriously user-facing apps depend on these services for core functionality right now), all the main services now use MBus to talk to each other. The image below shows these setups for the main machines they are deployed on:

Finally, over the next few months I’ll be updating the remaining clients to talk to their remote counterparts in this manner, and also taking advantage of the fact that it is now easy to add and address future services the same way, without needing to configure ports and addresses for every individual service.


Adding Raspberry Pi-based backlighting to my desktop PC with backlight-server, and then moving to a new flat, gave me an interesting idea – add an API to the backlight server to set the lights to the dominant colour of whatever album is playing on my Spotify account. How hard could it be?

The first step was to read up on the Spotify API. I quickly found the ‘Get the User’s Currently Playing Track’ API, which fit the bill. Since it deals with user data, I had to authenticate with their Authorization Code Flow, which requires multiple steps as well as a static address for a callback containing the authorization code granted to my application. I experimented with giving the Spotify Developer site the IP address of my Server Pi, but that could change, which would mean editing the application listing each time it did, and that was unacceptable for the seamless ‘set up and forget’ experience I was aiming for.

The solution was to resurrect my DigitalOcean account to host a small Node.js app with a simple task – receive the callback from Spotify containing the authorization code, with which access and refresh tokens would be granted, then fetch the album art currently playing and determine its dominant colour. This service would in turn be used by backlight-server to light up my living room with the appropriate colour.

This authorization flow took a long time to get right, both from a code perspective (I used the spotify-web-api-node npm package to make things programmatically easier) and from a behavioural perspective (when should the token be refreshed? How should errors propagate through the different services? How can the app know it is authorized at any given time?), but once it worked, it was very cool to see the room change colour as my playlist shuffled by.
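For the curious, the core of that flow with spotify-web-api-node and Express boils down to something like this sketch (the credentials, redirect URI, endpoint paths, and refresh interval are placeholders, and the real service does a lot more error handling):

const express = require('express');
const SpotifyWebApi = require('spotify-web-api-node');

// Placeholder credentials - the real values live in untracked configuration
const spotifyApi = new SpotifyWebApi({
  clientId: '<client id>',
  clientSecret: '<client secret>',
  redirectUri: 'http://<droplet address>/callback',
});

const app = express();

// Step 1 - send the user to Spotify's consent page
app.get('/login', (req, res) => {
  res.redirect(spotifyApi.createAuthorizeURL(['user-read-currently-playing'], 'state'));
});

// Step 2 - Spotify redirects back with the authorization code, which is
// exchanged for access and refresh tokens
app.get('/callback', async (req, res) => {
  const data = await spotifyApi.authorizationCodeGrant(req.query.code);
  spotifyApi.setAccessToken(data.body.access_token);
  spotifyApi.setRefreshToken(data.body.refresh_token);
  res.send('Authorized!');
});

// Step 3 - refresh the access token periodically so later requests keep working
setInterval(async () => {
  const data = await spotifyApi.refreshAccessToken();
  spotifyApi.setAccessToken(data.body.access_token);
}, 50 * 60 * 1000);

app.listen(80);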

I had a half-hearted attempt at figuring out the dominant colour myself using buckets and histograms, but in the end decided to preserve my sanity and use the node-vibrant package instead, which worked like magic!
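If it helps, the node-vibrant step amounts to roughly the sketch below (assuming the album art has already been downloaded to a local file, and treating the ‘Vibrant’ swatch as a stand-in for the dominant colour):

const Vibrant = require('node-vibrant');

// Extract a representative colour from an already-downloaded album art image
async function getDominantColour(imagePath) {
  const palette = await Vibrant.from(imagePath).getPalette();
  const [r, g, b] = palette.Vibrant.getRgb();
  return [Math.round(r), Math.round(g), Math.round(b)];
}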

So this is basically how the whole thing works, and you can see the code for the spotify-auth microservice on GitHub. The diagram below may also help explain:

So what next? Well, those smart RGB light bulbs are looking a lot more interesting now…

With not a lot going on in terms of my Pebble apps (still very much an ‘if it ain’t broke’ situation), my hobbyist attentions in recent months turned to my Raspberry Pi. Without many exciting ideas for hardware hacking, it occurred to me that software applications of the device might be a bit more interesting.

Beginning with moving the backend services for News Headlines and Tube Status from a $5 Digital Ocean Droplet to a $0 Raspberry Pi under my desk (with a few forwarded ports, of course), I’ve steadily refined the standard pattern used to write and maintain these apps. At most there have been six, but today there are five:

  • News Headlines Backend – pushing headline pins.
  • Tube Status Backend – pushing delay alert pins.
  • LED Server – providing a localhost RESTful interface to the Blinkt! hat on the physical Pi, shared between apps.
  • Attic – a new app, serving and receiving simple JSON objects for storage, backed by a Gist.
  • Monitor – responsible for monitoring uptime of the other services, and providing Greater Anglia and TfL Rail outage alerts to me via my watch. Monitor actually just schedules regular invocations of its plugins’ update interface function, making it extremely extensible (see the sketch below).
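That scheduling idea is simple enough to sketch in a few lines (the plugin names and interval below are made up for illustration):

// Each plugin exports an update() function - Monitor just schedules them all
const plugins = [
  require('./plugins/uptime'),
  require('./plugins/rail-outages'),
];

const UPDATE_INTERVAL_MS = 5 * 60 * 1000;

setInterval(() => {
  plugins.forEach(async (plugin) => {
    try {
      await plugin.update();
    } catch (e) {
      console.error(`Plugin update failed: ${e.message}`);
    }
  });
}, UPDATE_INTERVAL_MS);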

With my adventures in Node, and having discovered convenient or standardised ways of doing things like modules, data storage/sharing, and soft configuration, these apps have all been refined to use common file layouts, common modules, and a standard template. With this relatively stable state of maturity reached, I’d like to share the pattern with readers now!

What? It’s not February 2017 anymore? The pattern has matured even further, but I’ve only now found the time to write this blog post? Well, OK then, we can make some edits…

Disclaimer: This isn’t an implementation of any actual accepted standard process/pattern I know of, just the optimum solution I have reached and am happy with under my own steam. Enjoy!

File Layout

As you can see from any of the linked repositories above, the basic layout for one of my Node apps goes as follows:

src/
  modules/
    app-specific-module.js
  common/
    config.js
    log.js
  main.js
package.json
config.json
.gitignore   // 'config.json'

The src folder contains modules (modules that are specific to the app), common (modules shared between all apps, such as log.js, which provides a standard logger, pid logging, and uncaughtException & unhandledRejection handlers), and main.js, which initialises the app.

This pattern allows all apps to use common modules that are guaranteed not only each other’s presence, but also that of a common config.json from which they can all draw configuration information (such as log level, API keys, latitude and longitude, and so on).

Soft Configuration

Of particular interest is the config.js module, which all modules that use config.json information include instead of reading config.json directly. The config.json file is untracked in git, and so can safely contain sensitive keys and other values. On top of that, config.js provides some additional benefits:

  • Ensuring the config.json file is present.
  • Allowing modules that include it to declare, via requireKeys, the keys they themselves require to be present in config.json (an example follows below).
  • Stopping app launch if any of these keys are not present.
  • Allowing access to the app’s launch directory context.

For example, a fictitious module may require an API key to be present in the ENV member of config.json:

const config = require('../common/config');

config.requireKeys('fictitious.js', {
  ENV: {
    API_KEY: ''
  }
});

Thanks to the way config.js behaves, if this structure is not present in config.json, the app will not start, and will tell the operator (i.e. me!) which value should be provided. Handy!
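Under the hood, the behaviour amounts to something like the sketch below (not the real config.js, just the general idea):

// Sketch of the requireKeys idea - not the real config.js
const config = require(`${process.cwd()}/config.json`);

function ensureKeys(expected, actual, path) {
  Object.keys(expected).forEach((key) => {
    const keyPath = path.concat(key).join('.');
    if (!(key in actual)) throw new Error(`config.json is missing ${keyPath}`);

    if (typeof expected[key] === 'object' && expected[key] !== null) {
      ensureKeys(expected[key], actual[key], path.concat(key));
    }
  });
}

function requireKeys(moduleName, expected) {
  try {
    ensureKeys(expected, config, []);
  } catch (e) {
    console.log(`${moduleName}: ${e.message}`);
    process.exit(1);  // Stop app launch if a required key is absent
  }
}

module.exports = Object.assign({ requireKeys }, config);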

Standard Modules

Any of these Node apps (and any new apps that come along in the future) can make use of a library of drop-in standard modules (many of which can be seen in action in the repositories linked at the top of this post), including:

  • event-bus.js – Provide a pub/sub ‘event bus’ style of communication between modules (sketched at the end of this list)
  • fcm.js – Send an event to Firebase Cloud Messaging to show me a notification
  • led-server-client.js – Communicate with the localhost Blinkt! LED Server instance
  • scraper.js – Scrape some text using a series of ‘before’ markers, and one ‘after’ marker
  • config.js – Access ‘smart’ configuration with additional capabilities
  • gist-sync.js – Synchronise a local JSON file/set with a remote Gist
  • leds.js – Directly drive the connected Blinkt! hat
  • db.js – Emulate a simple get/set/exists interface with a local JSON file
  • ip.js – Look up the address of the ‘mothership’ server (either Server Pi or a Digital Ocean Droplet)
  • log.js – Standard logger, asserts, uncaught/unhandled catching.
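As a flavour of how small these drop-in modules tend to be, here is a sketch of the pub/sub idea behind event-bus.js (illustrative only, not the actual module):

// Minimal pub/sub event bus sketch
const subscribers = {};

function subscribe(topic, callback) {
  if (!subscribers[topic]) subscribers[topic] = [];
  subscribers[topic].push(callback);
}

function broadcast(topic, data) {
  (subscribers[topic] || []).forEach((callback) => callback(data));
}

module.exports = { subscribe, broadcast };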

Wrapping Up

So this standard pattern for my Node apps makes it a lot easier to manage the common modules as they are updated and improved, to manage SCM-untracked soft configuration values (as well as make sure I provide them after migration!), and to keep modules as drop-in as possible. As with most/all of my hobbyist programming, these approaches and modules are the result of personal refinement, not of any accepted standard, which is my preferred style when I am the only consumer. Maximise the learnings!

Expect more sporadic information as these apps develop, and enjoy the pins!

For some just beginning their programming journeys, a common example to conquer is blinking an LED, which usually goes something like this:

digitalWrite(13, HIGH);
delay(1000);
digitalWrite(13, LOW);

For me, I decided to try a much harder approach, in a fiddly effort that could be regarded as virtually pointless. Nevertheless, I persisted, because I thought it would be cool.

The idea: blink a Blinkt LED on Server Pi whenever it serviced a request from the outside.

For those unfamiliar with my little family of Raspberry Pi minions, here is a brief overview:

  • Server Pi – A Raspberry Pi 3 running three Node.js processes for various Pebble apps (News Headlines pin pusher, Tube Status pin pusher, unreleased notification and discovery service).
  • Backlight Pi – Another Raspberry Pi 3 with a single Node.js Express server that allows any device in the house to HTTP POST a colour to be shown behind my PC.
  • Monitor Pi – A Raspberry Pi Zero W (W, as of today) that pings the three processes running on Server Pi via the GitHub Gist discovery mechanism to give me peace of mind that they’re still up. It also checks the weather for ice and rain, and whether or not Greater Anglia have fallen over before I’ve taken the trouble of leaving for work at 7AM.

Maintaining this small fleet is a joy and a curse (one or both of “my own mini infrastructure, yay!” and “It’s all fallen over because Node exceptions are weird, noo!”), but since I started versioning it all in Git and adding crontab and boot scripts, it’s become a lot easier. However, for this particular task, I found that only one process can usefully control the Blinkt LEDs on top of Server Pi. Since this is a parameterised (services only) instance of Monitor, it must be this process that does the blinking when a request is processed.

Since I’m already a big fan of modular Node.js apps, I just added another module that sets up a single-endpoint Express server, and have each of the other three Server Pi processes POST to it whenever they service a request with their own Express servers. Neat!
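The receiving module boils down to something like the sketch below (the endpoint path, port, and the leds helper call are assumptions for illustration):

const express = require('express');
const leds = require('../common/leds');  // assumed helper that drives the Blinkt!

const app = express();

// The other Server Pi processes POST here whenever they service a request
app.post('/blink', async (req, res) => {
  await leds.blink([0, 0, 255]);  // hypothetical call - flash blue briefly
  res.status(200).send('OK');
});

app.listen(5600);  // assumed port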

An hour of synchronising and testing four processes locally and on-device later, and I now have a blue blinking LED whenever a request is serviced. Sadly the activity isn’t as high as it was in the News Headlines heyday late last year, when it was tasked with converting news story images to Pebble-friendly 64-colour thumbnails and feeding an experimental analytics service, but with the interesting tentative steps the unreleased notification service is taking, Server Pi may end up seeing a bit more action than simple status checks and app news lookups in the future.

With all this work done, it’s also time for another diagrammatic mess that I like to call my infrastructure…

Update: Added changed IP facility details.

Update: Added status watchapp details.

Two of my Pebble apps push pins to the timeline to enhance their experience beyond the apps themselves:

  • News Headlines – Posts the top headline (if it’s new) every four hours. Used to push notifications and serve decoded PNG images, but that went away. Maybe someday they will return. But not for now.
  • Tube Status – Checks the TfL feed every five minutes, and pushes a pin if there is a change in the delay status. This can be a new delay, a delay that has ended, or ‘all clear’ (no delays anymore).

Both servers also respond to GET /status to show app users whether they’re up, and this has proved useful when they occasionally went down. Thanks to a ‘do node index.js forever’ loop script, this is now very rarely an issue.

Up until now, these pins were served from a $5 Digital Ocean instance which essentially spends 99.9% of its time doing absolutely nothing! After coming back to the UK and making progress towards cancelling subscriptions and emptying my US bank account, I had a better idea – use my dusty Raspberry Pi instead!

As part of my new job at EVRYTHNG, a natural avenue of exploration for getting to grips with the IoT is using a Raspberry Pi, which can run Node.js, as it turns out. Perfect! The pin servers for the two apps above use Node.js with Express.

So after a bit of code/dependency cleanup, I set up both servers on the Pi with screen and put plenty of stickers around warning against turning it off or rebooting the router.

So far, so good! What could go wrong?

The new ‘Pin Pusher’ Raspberry Pi in its native habitat – under the family computer desk.

Followup: Getting a Changed Router IP while Out of the House

In the eventuality that the family router’s IP changes and I have to update it for the apps to use in their status checks (otherwise they think the servers have gone down, which is bad for users!), I used to have a Python script email me its public IP address. Sadly, Google doesn’t like this unauthenticated use of my GMail account, so I devised an alternative.

I set up my Pi as an EVRYTHNG Thng, gave it an ‘ip’ property, and wrote the following Python script to update this property in the EVRYTHNG cloud when it boots up. This way, all I have to do is ask whoever’s in to reboot the Pi, and then wait for the updated IP address! I may also make it run periodically to cover the ‘router randomly restarted’ scenario.


#!/usr/bin/python

import requests
import json

user_api_key = "<key>" # Probably shouldn't publish this!
thng_id = "<id>"

def get_public_ip():
  # Scrape the public IP address from canyouseeme.org
  r = requests.get("http://www.canyouseeme.org")
  spool = r.text
  start_str = "name=\"IP\" value=\""
  start_index = spool.index(start_str) + len(start_str)
  spool = spool[start_index:]
  end_index = spool.index("\"")
  return spool[:end_index]

def main():
  ip = get_public_ip()
  print("IP: {}".format(ip))

  # Update the 'ip' property of the Thng in the EVRYTHNG cloud
  headers = {
    "Authorization": user_api_key,
    "Content-Type": "application/json",
    "Accept": "application/json"
  }
  payload = [{
    "value": ip
  }]
  r = requests.put("https://api.evrythng.com/thngs/{}/properties/ip".format(thng_id),
                   headers=headers, data=json.dumps(payload))
  print(r.text)

main()

Followup: Checking Status Conveniently

Each of the two apps mentioned above has a built-in server monitoring feature in its settings screen, but that’s a lot of scrolling. To put my mind at ease, I have also created a simple monitoring app that uses the same backend mechanism:


Once again, it’s been a while! The last update talked about updating apps for Chalk (Pebble Time Round), and it was around that time that I was aiming for stability on the ‘Big Three’ apps (namely Dashboard, News Headlines, and Wristponder), as well as a few of the more popular watchfaces (Thin, Beam Up, Isotime, and so on), so that I would not have to be doing Pebble development all day and all night.

Happily, I eventually achieved this after a few weekend sessions, and all was good. With some interesting developments in the world of app configuration (see Clay), I added vastly improved color-selection configuration pages to those watchfaces. Color pickers beat manually entering hex strings any day of the week!

Since I’m no longer doing developer documentation/other general advocacy for Pebble (perhaps the massive Guides rewrite was my parting gift?), I have decided to try and pick it up again as a hobby, like I was doing before getting hired. I found it great fun, and very rewarding when I saw people using my apps. In general, they start life as apps I want on my own watch, then I polish and publish them so others can find them useful.

The trouble I kept running into was finding time to meet the maintenance demands of bugs and feedback from users, but now I have more time for that. Indeed, I’ve picked up a few processes/skills from my time managing my projects at Pebble that should make this much easier. It remains to be seen whether Sheets is more efficient for a single person than JIRA, but I think I know what the answer is…


Anyway, just now I released version 3.6 of News Headlines. For some time, I’ve received the question “Can it show news from outside the UK?”. Since it started life as ‘BBC News’, that question makes sense. Yesterday I saw that the BBC has feeds for multiple regions, and so a fun exercise in adding a new feature presented itself, with a lot of potential value for users who aren’t interested in the latest scandal at Westminster.

In adding this new feature, I was reminded how complicated News Headlines is as an app, but it made the end result that much more satisfying. The process went something like this:

  • Add new enumerations for the region values.
  • Add new defaults and internal APIs for passing around the region value.
  • Add new UI items and logic to the Settings Window.
  • Add new keys for AppMessage and Persistent Storage APIs.
  • Add region-passing to the initial sync communication phase.
  • Generalise the JS feed download to choose either a selected region, or a ‘category’ if the region is ‘UK’ (see the sketch after this list).
  • Ensure all these things play nicely for both new users and upgrading users (the latter being where I’ve been stung far too many times before).
  • And as usual get massively sidetracked with refactoring and code style updates.
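The feed-selection part on the JS side comes down to a choice like the one sketched below (the BBC URL patterns and the mapping are simplified from memory, so treat them as illustrative):

// Simplified sketch of choosing a BBC feed by region or category
function getFeedUrl(region, category) {
  // The UK 'region' keeps the original category-based behaviour
  if (region === 'UK') {
    return `http://feeds.bbci.co.uk/news/${category}/rss.xml`;
  }

  // Other regions map to their own regional feed
  return `http://feeds.bbci.co.uk/news/${region}/rss.xml`;
}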

So now we have that. Readers around the world can make their headlines-reading experience a tad more localised if they wish. Another request I’ve been getting recently in general is to accept donations. Historically (excepting the paid version of Watch Trigger) I’ve not dabbled in donations, but since I’m not paid by Pebble anymore I will use this update to do a little experimentation. It can always be removed if nothing happens. Another experiment is making a /r/pebble subreddit post, so we’ll also see how that’s received.

After integration of the Isometric and WebSocket modules (previously ‘additional’) into PGE, I took some time to do something I’d wanted to do for a while: make the repo usable directly after a git clone. Previously, the repo was an example project that could be cloned and played around with, but using the engine in a new game required knowing which files to copy into the new project.

After re-organization, the repo can now be git cloned directly into a new project’s src directory, and requires no further manipulation to be compiled. The previous asteroids example has been moved to the asteroids branch of a new pge-examples repository, which also hosts a new example ‘game’ for PGE WS, the WebSockets module that aims to let developers send and receive multiplayer data in as few lines as possible. The example allows each player who installs it to trigger a vibration on all the other players’ watches while they are running the game, once the server is hosted.

For an overview of how to use the new WebSockets module, check out the docs for PGE WS, which summarize how to set up the server (which forwards all data in both directions automatically by default), the JS client, and the C client, which needs only to connect, send, and receive data.
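To give a feel for the ‘forwards all data in both directions’ behaviour, here is a minimal sketch of that kind of relay using the ws package (illustrative only, not the actual PGE WS server):

const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 5050 });  // assumed port

wss.on('connection', (ws) => {
  ws.on('message', (data) => {
    // Relay every message to every other connected player
    wss.clients.forEach((client) => {
      if (client !== ws && client.readyState === WebSocket.OPEN) {
        client.send(data);
      }
    });
  });
});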