It’s been some time since the last post and now some new apps are available! The lineup now includes:

  • Upcoming – A simple card-based app that shows the title, description, time, and date of the next ten events on your primary Google calendar.
  • News Headlines – A new design for the tried and true News Headlines app on Pebble (one of my oldest), which re-uses the card UI built for Upcoming.

Since both these new apps share the same layout and code style, I’m considering turning that layout into a UI framework that can easily be reused with other data sets… perhaps one for the future.

These apps were the two I most looked forward to completing on FitBit: one replaces the extremely important ability to quickly see which meetings I have coming up (which Pebble timeline used to give me), and the other lets me stay vaguely up to date with the world (this weekend’s wedding excepted!) in small glimpses.

So what’s next? As mentioned in the last post, I’d like to try to get back to basics with some new watchface concepts, and also explore bringing some Dashboard functionality to FitBit, depending on which data is available. I’m thinking web APIs for networks and battery level to get started, but sadly no actuation for the time being.

As always, you can see the code for yourself on GitHub.

A few months and a couple of FitBit Ionic releases later, here is the first batch of watchfaces and apps for both Ionic and the new Versa! Reviewed and released right now are:

  • Elemental – An original watchface, and my first completed for Ionic.
  • Tube Status – Ported from Pebble, but with an updated pager UI.
  • Isotime – Ported from Pebble, with higher resolution digits, though sadly no longer rendered in individual blocks with PGE.
  • Beam Up – Ported from Pebble, the classic (and one of my oldest!) animated watchface, complete with inverting beams (this time faked with clever timings, instead of using an inverter layer or framebuffer hack).

The development experience has gotten much better: developer connection reliability has improved greatly with the updates paving the way for Versa, and the FitBit OS Simulator closes the iterative gap from minutes to seconds!

So what’s next?

The News Headlines port needs to be completed, though recreating the Pebble app’s UI is proving to be a layout challenge, so I may opt to scrap it and build a new one; similarly for Tube Status.

I also want to create some more original watchfaces for FitBit OS to take advantage of the gorgeous full-color screens these watches have. So look out for more!

I’d also love to port Dashboard (as I still use it regularly, and many have found it an invaluable remote and automation agent), but that will have to wait until an equivalent of PebbleKit Android is released by FitBit, or some other Intent-based mechanism for receiving app messages in a third party Android app.

In the meantime, you can find the source for all my FitBit OS apps and watchfaces in my fitbit-dev GitHub repo.

With not a lot going on in terms of my Pebble apps (still very much an ‘if it ain’t broke’ situation), my hobbyist attention in recent months has turned to my Raspberry Pi. Without many exciting ideas for hardware hacking, it occurred to me that software applications of the device might be a bit more interesting.

Beginning with moving the backend services for News Headlines and Tube Status from a $5 Digital Ocean Droplet to a $0 Raspberry Pi under my desk (with a few forwarded ports, of course), I’ve steadily refined the standard pattern used to write and maintain these apps. At most there have been six; today there are five:

  • News Headlines Backend – pushing headlines pins.
  • Tube Status Backend – pushing delay alerts pins.
  • LED Server – providing a localhost RESTful interface to the Blinkt! hat on the physical Pi, shared between apps.
  • Attic – a new app, serving and receiving simple JSON objects for storage, backed by a Gist.
  • Monitor – responsible for monitoring uptime of the other services, and providing Greater Anglia and TfL Rail outage alerts to myself via my watch. Monitor actually just schedules regular invocations of its plugins’ update interface function, making it extremely extensible.
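Monitor’s plugin approach can be sketched like this. Note this is an illustrative sketch rather than Monitor’s actual code; the plugin names, return values, and interval are invented:

```javascript
// Sketch of a plugin scheduler: each plugin exposes an update() function,
// and Monitor simply invokes them all on a timer.
const plugins = [
  { name: 'uptime', update: () => 'services OK' },
  { name: 'outages', update: () => 'no rail alerts' },
];

function runAllPlugins() {
  const results = [];
  for (const plugin of plugins) {
    try {
      results.push(`${plugin.name}: ${plugin.update()}`);
    } catch (err) {
      // One faulty plugin must not stop the others from running
      results.push(`${plugin.name}: failed (${err.message})`);
    }
  }
  return results;
}

// The real app would schedule this regularly, e.g.:
// setInterval(runAllPlugins, 5 * 60 * 1000);
```

Adding a new check is then just a matter of dropping another object with an update() function into the list, which is what makes the pattern so extensible.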

Through my adventures in Node and discovering convenient or standardised ways of doing things (modules, data storage/sharing, soft configuration, etc.), these apps have all been refined to use common file layouts, common modules, and a standard template. With the pattern in a relatively stable state of maturity, I’d like to share it with readers now!

What? It’s not February 2017 anymore? The pattern has matured even further, but I’ve only now found the time to write this blog post? Well, OK then, we can make some edits…

Disclaimer: This isn’t an implementation of any actual accepted standard process/pattern I know of, just the optimum solution I have reached and am happy with under my own steam. Enjoy!

File Layout

As you can see from any of the linked repositories above, the basic layout for one of my Node apps goes as follows:

.gitignore     // ignores 'config.json'
config.json    // untracked soft configuration
src/
  main.js      // initialises the app
  modules/     // modules specific to this app
  common/      // modules shared between all apps (log.js, config.js, etc.)

The src folder contains modules (modules that are specific to the app) and common (common modules shared between all apps, such as log.js, a standard logger with pid logging and uncaughtException & unhandledRejection handlers), as well as main.js, which initialises the app.
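Based on that description, a common log.js might look something like the following. This is an illustrative sketch only; the real module’s API may differ:

```javascript
// Sketch of a shared log.js: timestamped, pid-tagged log lines plus
// last-resort handlers so a stray exception is logged before the app dies.
function format(level, msg) {
  // e.g. "[2018-05-01T10:00:00.000Z pid=1234 info] App started"
  return `[${new Date().toISOString()} pid=${process.pid} ${level}] ${msg}`;
}

function log(msg) { console.log(format('info', msg)); }
function error(msg) { console.error(format('error', msg)); }

process.on('uncaughtException', (err) => error(`uncaughtException: ${err.stack}`));
process.on('unhandledRejection', (reason) => error(`unhandledRejection: ${reason}`));

module.exports = { log, error, format };
```

Because every app includes the same module, log output across the whole fleet stays greppable in one consistent format.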

This pattern guarantees every app not only the presence of the common modules, but also of a common config.json from which they can all draw configuration information (such as log level, API keys, latitude and longitude, etc.).

Soft Configuration

Of particular interest is the config.js module, which all modules that use config.json information include instead of reading config.json directly. The config.json file is untracked in git, and so can safely contain sensitive keys and other values. config.js also guarantees that the keys modules require are present, and provides some additional benefits:

  • Ensuring the config.json file is present.
  • Allowing modules that include it to declare, via requireKeys, the keys in config.json that they themselves require.
  • Stopping app launch if any of these keys are not present.
  • Allowing access to the app’s launch directory context.

For example, a fictitious module may require an API key to be present in the ENV member of config.json:

const config = require('../common/config');

config.requireKeys('fictitious.js', {
  ENV: {
    API_KEY: '',
  },
});

Because of the way config.js behaves, if this structure is not present in config.json the app will not start, and will tell the operator (i.e. me!) which value should be provided. Handy!
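One way the requireKeys guarantee could be implemented is a recursive walk over the expected structure. This is a sketch, not the actual config.js (which reads config.json itself); here the config object is passed in so the example is self-contained:

```javascript
// Walk the expected key structure and throw if anything is missing,
// which stops the app at launch with a message naming the culprit.
function requireKeys(moduleName, expected, actual) {
  for (const key of Object.keys(expected)) {
    if (!(key in actual)) {
      throw new Error(`${moduleName} requires config key '${key}'`);
    }
    if (typeof expected[key] === 'object' && expected[key] !== null) {
      requireKeys(moduleName, expected[key], actual[key]);
    }
  }
}

// The fictitious module's requirement from above, checked against a config
const goodConfig = { ENV: { API_KEY: 'abc123' } };
requireKeys('fictitious.js', { ENV: { API_KEY: '' } }, goodConfig);  // passes
```

The values in the expected structure are ignored; only the key shape matters, which is why an empty string is a fine placeholder.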

Standard Modules

Any of these Node apps (and any new apps that come along in the future) can make use of a library of drop-in standard modules, many of which can be seen in action in the repositories linked at the top of this post, including:

  • event-bus.js – Provide a pub/sub ‘event bus’ style of communication between modules
  • fcm.js – Send an event to Firebase Cloud Messaging to show me a notification
  • led-server-client.js – Communicate with the localhost Blinkt! LED Server instance
  • scraper.js – Scrape some text using a series of ‘before’ markers, and one ‘after’ marker
  • config.js – Access ‘smart’ configuration with additional capabilities
  • gist-sync.js – Synchronise a local JSON file/set with a remote Gist
  • leds.js – Directly drive the connected Blinkt! hat
  • db.js – Emulate a simple get/set/exists interface with a local JSON file
  • ip.js – Look up the address of the ‘mothership’ server (either Server Pi or a Digital Ocean Droplet)
  • log.js – Standard logger, asserts, uncaught/unhandled catching.
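As an example of how small these drop-in modules can be, here is a minimal pub/sub sketch in the spirit of event-bus.js. This is illustrative only; the real module’s function names may differ:

```javascript
// Minimal pub/sub 'event bus': modules communicate by topic name
// instead of importing each other directly.
const subscribers = {};

function subscribe(topic, callback) {
  (subscribers[topic] = subscribers[topic] || []).push(callback);
}

function broadcast(topic, data) {
  (subscribers[topic] || []).forEach((callback) => callback(data));
}

// Example: one module reacts to another's event without a direct import
// (topic name invented for illustration)
subscribe('leds/blink', (color) => console.log(`blinking ${color}`));
broadcast('leds/blink', 'blue');
```

Keeping modules decoupled this way is what makes them easy to drop into a new app unchanged.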

Wrapping Up

With this standard pattern for my Node apps, it becomes a lot easier to manage the common modules as they are updated and improved, to manage SCM-untracked soft configuration values (and to make sure I provide them after migration!), and to keep modules as drop-in as possible. As with most of my hobbyist programming, these approaches and modules are the result of personal refinement rather than any accepted standard, which is my preferred style when I am the only consumer. Maximise the learnings!

Expect more sporadic information as these apps develop, and enjoy the pins!

For some just beginning their programming journeys, a common example to conquer is blinking an LED, which usually goes something like this:

digitalWrite(13, HIGH);  // LED on
delay(500);
digitalWrite(13, LOW);   // LED off
delay(500);

For me, I decided to try a much harder approach, in a fiddly effort that could be regarded as virtually pointless. Nevertheless, I persisted, because I thought it would be cool.

The idea: blink a Blinkt LED on Server Pi whenever it serviced a request from the outside.

For those unfamiliar with my little family of Raspberry Pi minions, here is a brief overview:

  • Server Pi – A Raspberry Pi 3 running three Node.js processes for various Pebble apps (News Headlines pin pusher, Tube Status pin pusher, unreleased notification and discovery service).
  • Backlight Pi – Another Raspberry Pi 3 with a single Node.js Express server that allows any device in the house to HTTP POST a colour to be shown behind my PC.
  • Monitor Pi – A Raspberry Pi Zero W (W, as of today) that pings the three processes running on Server Pi via the GitHub Gist discovery mechanism to give me peace of mind that they’re still up. It also checks the weather for ice and rain, and whether or not Greater Anglia have fallen over before I’ve taken the trouble of leaving for work at 7AM.

Maintaining this small fleet is a joy and a curse (one or both of “my own mini infrastructure, yay!” or “it’s all fallen over because Node exceptions are weird, noo!”), but since I started versioning it all in Git and adding crontab and boot scripts, it’s become a lot easier. However, for this particular task, I found that only one process can usefully control the Blinkt LEDs on top of Server Pi. Since this is a parameterised (services only) instance of Monitor Pi, it must be this process that does the blinking when a request is processed.

Since I’m already a big fan of modular Node.js apps, I just added another module that sets up a single-endpoint Express server, and have each of the other three Server Pi processes POST to it whenever they service a request with their own Express servers. Neat!

An hour of synchronising and testing four processes (locally and on-device) later, I now have a blue LED blink whenever a request is serviced. Sadly the activity isn’t as high as in News Headlines’ heyday late last year, when it was tasked with converting news story images into Pebble-friendly 64-colour thumbnails and feeding an experimental analytics service, but with the interesting tentative steps the unreleased notification service is taking, Server Pi may end up seeing a bit more action than simple status checks and app news lookups in the future.

With all this work done, it’s also time for another diagrammatic mess that I like to call my infrastructure…

This one took a while. Weighing in at 38 versions and I-don’t-know-how-many reflected and hacked APIs, Dashboard is now open-source for all to see. This is probably the app that has taken the most development time to date (still on-going!), and I think the one I’m most proud of.

I’m liking the idea of doing future releases via self-approved pull requests. Could be interesting!

>>> Dashboard source code <<< 

In my last post, I promised I would open-source the remainder of my apps to promote community learning and collaboration. Well, I’m following through, and the first to be shown naked to the world (please don’t judge my code, it’s accumulated over at least two years!) is News Headlines!

Included here is the code for the watchapp, as well as the backend server that’s been serving pins obediently for the last year and a half or so.

The only things left out are API keys and the current public server URL, but these are configurable through simple config.json files if you want to roll your own.

>>> News Headlines source code  <<<

This post was originally going to be a lot gloomier, but the official announcement yesterday (after a few days of utter FUD) has proven that the worst-case scenario has not come to be, and there’s reason to be optimistic about Pebble’s future.

So what better time to summarise my part of the Pebble story?

The Beginning

I backed the original Pebble in the first Kickstarter campaign; after a few weeks on the fence, I was finally convinced by the promise of an open SDK. I’d had a bit of experience with C as part of my degree course, and played with Java in the second year (including prodding the Android SDK to see if I could make it do anything interesting). Why not try to make my watch do some cool things?

After the now legendary delays, I finally got my watch. It had screen tearing from the moment I turned it on, but I found that by pressing a certain part of the case I could get it to behave (Pebble replaced it within two weeks, so props to them for that). The original 1.x SDK was a bit harder to grasp than the one we have now, but even so, I eventually got my first app working:


What a moment! I could put any message I wanted on my wrist! Over the next couple of weeks I worked on a couple of watchfaces, most notable of which was Split Horizon. Back then we had MyPebbleFaces in place of an official app store, which involved the community uploading their built PBW files and users downloading and installing them via the Android/iOS apps, which were also quite primitive at the time. Due to the lack of app configuration, watchfaces were released as multiple listings, so Split Horizon has a Seconds Edition (animation every 15 seconds), Minutes Edition, and Plain Edition.

Once SDK 1.12 was released (two-way communication, woo!) I was able to use the first version of PebbleKit Android to do interesting things with Android APIs. The first outcome of this was Watch Trigger, the first app that allowed you to capture a photo remotely using your Pebble as the remote. It has since been superseded by better efforts (PebbleCam, etc.), but it was a big thing for me at the time to have this futuristic capability. I then went on to a paid-app experiment by offering a video capture upgrade (Watch Trigger +) for £0.99, and the main lesson learned was that 95% of my users loved free stuff! Here’s the second iteration (the first was just the logo!):


Hot on the heels of this was my exploration of other Android APIs. At the time, I had an Android phone (Galaxy S) that would suck power if it was connected to the wrong network. To solve this problem, I created Data Toggle to allow me to turn off WiFi when I went outside, and switch over to 3G. This would later become Dashboard as we know it today.




At the same time I also began work on Wristponder, to allow initiating and replying to SMS messages (before that was integrated into the firmware!):


The SDK Tutorials

It was that summer I started working on my SDK tutorials. Little did I know that these pieces (drafted on a notepad in Tuscany, and originally quite popular with other community members, being the only real tutorials at the time) would literally change my life.

Not shown in the image below is the pad on which I was feverishly writing my ideas for structured learning content that would guide readers through the exciting opportunities of the Pebble SDK:


When SDK 2.0 came out (along with PebbleKit JS, localStorage, and a better C-style API) I wrote another whole multi-part series out of the same motivation as the 1.x tutorial – now that I’d learned how to make this revolutionary device do my bidding, I wanted to help everyone else do the same to theirs. I still maintain that the ability to make Pebble fit into your own lifestyle (down to news stories, train delay alerts, even scheduling when your phone switches to Silent for the night) was its most potent feature. Especially the potential of being paired with the whole Android platform, which I hope Dashboard and the Dash API demonstrate as well as I was able.

Getting Hired

After I was almost done with the SDK 2.0 tutorials, I was contacted out of the blue on Twitter by Pebble’s lead Developer Evangelist – Thomas Sarlandie:


Originally the deal was to write some tutorials for Pebble’s Developer Blog, but that quickly turned into a full job offer. I was torn – I was in my last year of University with no job lined up, but I’d never lived out of the country before. It was such a huge opportunity I had no idea what to do. My mind was made up when one of my best uni friends said “You know your friends who are off doing amazing things travelling the world? This is your opportunity to do that too. Take it!”.

So I did.

After a few months of sorting visas, I arrived in Palo Alto and was greeted into the arranged shared accommodation by Joseph Kristoffer. The effect was incredible. I’d practically run myself into the ground finishing my fourth year of university – physically and emotionally. Being on the other side of the world in sunny California surrounded by people who wanted me there so badly was very good for me. A chance to start again, make new friends, and new first impressions.

The welcome was extremely friendly wherever you looked at Pebble; everyone wanted to know who I was and how I’d come to be in the office. I was quickly given a Macbook (which I had very little idea of how to use) and tasked with migrating the original drab docs to the colourful Pebble Developer site design you see now:


After a few weeks, and a very quick crash course in git, we did it, and shipped the new site. It was bright and colourful and full of opportunity for new content. We had a company retreat and the famous 2014 Developer Retreat (with ROBOTS!!1). I got to go to Maker Faire, see the East Coast and New York on my way to YHack in Connecticut. I was having the time of my life, and knew how lucky I was at every turn.

Pebble Time

But there was little time to rest. After SDK 2.5 and the Compass API, the company threw itself into the Pebble Time project, which gradually sifted out from the Design team to the whole company. It would be more powerful, with a colour screen, a web API, and a microphone! I think the excitement started to climb when the film crew came in to film the second Kickstarter video, which if you look really carefully, you can see me in:


We launched the campaign, and there was much celebration with every million the campaign earned. It was a sign of how passionate everybody was, and how badly they wanted to make this awesome new kind of Pebble a reality, if only for themselves. Many of the engineers worked long hours and weekends, and famously took no voluntary holidays, their passion was that great. These were the same engineers who managed to fit firmware 3.x into the original hardware with mere bytes to spare. After manufacturing started, we got some samples in the office and tried building some colour apps. I say ‘tried’, because the SDK was on the bleeding edge of whatever firmware functionality was built each and every day. Each new API brought more possibilities, such as the block game demo, and an early version of Isotime:


Finally, the backers started receiving watches, and the developer community responded admirably. Every day someone would be going round showing off the latest cool colour app they’d found on the app store, and I worked in my spare time to update all my apps and watchfaces to use the new colour functionality. After this we finished working on timeline, culminating in a 4AM final merge of a monstrous documentation Pull Request.

Product Owner

Sometime while writing the Smartstrap guides and the Design guides I began planning my own work and execution, with input from the rest of the team. This was completely new to me, but with a few well-maintained Google Sheets, project after project came together without issue. It was good to be more at the helm of the documentation, and being able to help all developers with useful guides, tutorials, and example apps. I also loved (and still do love!) chatting with the more active developer community members in the then Slack chat (now on Discord) and giving one on one feedback and help as much as I could.

It was during this time that we had the 2015 Developer Retreat in San Francisco, and I did fresh re-writes of the Big Four (Dashboard, News Headlines, Wristponder, and Beam Up) to make them more modular and maintainable. I’m glad I did now! I can dive in, change some things, and only have to look at small parts of the app at any one time. I took great pleasure in perfecting my modular pattern and module interfaces, such as data_get_news_story() or splash_window_reload_data(), allowing easy exchange of data and actions from anywhere in an app. I guess that was the result of getting better with each app I made, which is a natural part of software development, apparently.

Moving Up

Right as the office moved from Palo Alto to Redwood City (and I got my first apartment in RWC), we were already two months into a complete re-write of the Guides section: reducing 78 guides crammed into ageing categories in inconsistent styles to about 60 new ones, written from the ground up to show how to do everything in the same manner, from button input, to JS/Android/iOS communication (including images!), to bespoke framebuffer drawing. I did such a thorough job that even now I frequently find myself using the snippets from those new guides in my own apps. MenuLayer? Sure, chuck that snippet in. No problem!

This last huge project was completely planned and executed by me, and I consider it my last great gift to the community in an official capacity. I’m very proud of it, I won’t lie! Here’s where the magic happened, until the end:


Moving On

In March, Pebble made 25% of staff (about 40 people) redundant, but it was made very clear this was not from a lack of good work. At the time it felt like a cost-saving measure, and now we can look back with full clarity. Since my visa was tied to my job, I had to leave the country, my apartment, my bills/utilities, furniture rental, etc., as soon as possible. I had the option of trying to transfer my visa by getting another job in the Valley, but I was quite put out by the shock of it all, so I decided to pack it all in and come home. I also had to say rushed goodbyes to about 100 people I’d come to know over the previous two years. Hardest of all was leaving the Developer Relations team, with whom I’d shared many adventures, days out, and travel trips. It was very hard to do, but it had to be done.

I came back to the UK with everything I could fit into two airport style cases, and it was all I wanted – except my beloved walnut bass guitar, which was two inches too large (and would have cost half its value to transport properly), so I left it behind.


After Easter (which I’d spookily already booked flights and leave for), I went back to the US for a gallop around Yosemite with my Dad. We’d planned it in November 2015, intending to use my apartment to lessen the cost, but we decided to go anyway, and boy, was it worth it!


Coming back from this trip I had no idea what I was going to do with my Pebble development. I’d sunk in so much time, and accrued too many thousands of users, to stop completely. But my sudden ejection back to the UK left me without any energy to do anything. Days blurred into weeks. Eventually I got it together and started looking for jobs. After about 12 attempts, I found an extremely warm and welcoming home at EVRYTHNG. I can say with confidence that I wouldn’t have this job if it weren’t for Thomas taking a chance on hiring me for Pebble and giving me the credential on my CV! I also created the Dash API to let C app developers use Android APIs, which was an interesting extension to the ecosystem.

Keeping My Hand In

I decided to maintain my apps, and only make improvements if I got the burst of energy and inspiration required to crack the dusty covers off monsters like Dashboard or News Headlines and gently coerce the insides into accepting new features. I reconnected with the developer community in my original role as a third-party developer, but with some insight into how Pebble worked. Even so, I still didn’t see the recent acquisition coming. With so many days of mere rumours to go on, the community admirably began simultaneously panicking and trying to preserve everything it could in case the servers and SDK ecosystem vanished overnight. Happily, that did not happen, but we don’t know how long it will last.

The Future

We’re at a crossroads. It’s time for developers to keep the flame alive, as I know they can and want to do. I foresee a time when the servers are gone (no app login, timeline, lockers, dictation, etc.), but we can still keep going with side-loaded apps (remember MyPebbleFaces? Ahead of its time, perhaps) until the watches die!

And that’s what I intend to do. I’ll still maintain my apps (since I happen to use the most popular ones myself every day) as long as it is possible to do so with the SDK ecosystem. I used to have an Android app to distribute the PBW files for my Pebble apps and watchfaces, but it was a nightmare to keep in sync with the app store. Now that the latter may one day disappear (or it may not!), I will dust it off and use it to preserve my offerings for all who are interested.

In addition to this maintenance, I will also be completing my open-source collection – including the Big Four! Well, Beam Up is already open source, so that leaves Dashboard, News Headlines, and Wristponder. Understand that this isn’t because I’m abandoning them – this recent shift has put emphasis on the community carrying the torch, and this is the best way to keep contributing to the whole and helping others learn how things are done. And maybe now it’ll force me to clean the code up! So look out for those in the next few weeks, when I get round to them in my free time. And I’ll save time by not needing to upgrade them to Emery’s display… bittersweet.

For now, you can see all my open source apps on my GitHub account.


I hope it’s become clear in reading this piece how much of a personal impact Pebble has had on my life. The experience of living and working in Palo Alto patched me up after my gruelling final year of uni. I got to see and experience things and places I never would have otherwise. I got to meet and make friends with so many Team Pebble members, and so many Pebble Developer community members too, who I very much hope to keep collaborating with into 2017 and hopefully beyond. So many people, places, and occasions captured in so many photos – I would never be able to post them all. But I am lucky to be able to look back fondly on all the good times.


In the words of what I imagine Eric said at the company’s inception: “Let’s see how far we can take this thing!”



Here’s a little something to take everybody’s mind off things.


This is a neat idea I had a while ago but only just got around to doing –

“What would a map of all the interconnections and services that my Pebble apps rely upon look like?”

Well, thanks to the neat tool that is Google Drawings, here it is. Scary dependencies!


Of course, this isn’t the full picture. The Server Pi and Monitor Pi provide me with useful services I use in my day-to-day life, such as train delay timeline pins, weather alerts, and updates on the health of the services the apps rely upon. Those details aren’t shown here for brevity, but would increase the complexity of the drawing by about 50%.

Update: Added changed IP facility details.

Update: Added status watchapp details.

Two of my Pebble apps push pins to the timeline to enhance their experience beyond the apps themselves:

  • News Headlines – Posts the top headline (if it’s new) every four hours. It used to push notifications and serve decoded PNG images, but that went away. Maybe someday they will return, but not for now.
  • Tube Status – Checks the TfL feed every five minutes, and pushes a pin if there is a change in the delay status. This can be a new delay, a delay that’s ended, or ‘all clear’ (no delays any more).
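The ‘push a pin only when the state changes’ check above can be sketched like this. This is illustrative only; the real backend polls the TfL feed every five minutes and pushes a timeline pin whenever this kind of check returns true:

```javascript
// Remember the last delay state; report true only when it changes.
let lastState = null;

function delayStateChanged(currentState) {
  if (lastState === null) {
    // First reading: remember it, but don't announce anything
    lastState = currentState;
    return false;
  }
  const changed = currentState !== lastState;
  lastState = currentState;
  return changed;
}
```

So a sequence of readings like ‘Good Service’, ‘Minor Delays’, ‘Minor Delays’, ‘Good Service’ would push pins for the second and fourth readings only.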

Both servers also respond to GET /status to show app users whether they’re up, which proved useful when they occasionally went down. Thanks to a ‘do node index.js forever’ loop script, this is now rarely an issue.

Up until now, these pins were served from a $5 Digital Ocean instance which essentially spends 99.9% of its time doing absolutely nothing! After coming back to the UK and making progress towards cancelling subscriptions and emptying my US bank account, I had a better idea – use my dusty Raspberry Pi instead!

As part of my new job at EVRYTHNG, a natural avenue of exploration for getting to grips with the IoT is using a Raspberry Pi, which can run Node.js, as it turns out. Perfect! The pin servers for the two apps above use Node.js with Express.

So after a bit of code/dependency cleanup, I set up both servers on the Pi with screen and put plenty of stickers around warning against turning it off or rebooting the router.

So far, so good! What could go wrong?


The new ‘Pin Pusher’ Raspberry Pi in its native habitat – under the family computer desk.

Followup: Getting a Changed Router IP while Out of the House

In the eventuality that I have to update the IP of the family router for apps to use in their status checks (otherwise they think the servers have gone down – bad for users!), I used to have a Python script email me its own IP address. Sadly, Google doesn’t like this unauthenticated use of my Gmail account, so I devised an alternative.

I set up my Pi as an EVRYTHNG Thng, gave it an ‘ip’ property, and wrote the following Python script to update this property in the EVRYTHNG cloud when it boots up. This way, all I have to do is ask whoever’s in to reboot the Pi, and then wait for the updated IP address! I may also make it run periodically to cover the ‘router randomly restarted’ scenario.


import requests
import json

user_api_key = "<key>" # Probably shouldn't publish this!
thng_id = "<id>"

def get_ip_address(ifname):
  # 'ifname' is unused here; the public IP is scraped from the router's
  # admin page instead (URL elided)
  r = requests.get("")
  spool = r.text
  start_str = "name=\"IP\" value=\""
  start_index = spool.index(start_str) + len(start_str)
  spool = spool[start_index:]
  end_index = spool.index("/>") - 1
  return spool[:end_index]

def main():
  ip = get_ip_address("eth0")
  print("IP: {}".format(ip))

  headers = {
    "Authorization": user_api_key,
    "Content-Type": "application/json",
    "Accept": "application/json"
  }
  payload = [{
    "value": ip
  }]
  r = requests.put("{}/properties/ip".format(thng_id), headers=headers, data=json.dumps(payload))
  print(r.text)

main()


Followup: Checking Status Conveniently

Each of the two apps mentioned above has a built-in server monitoring feature in its settings screen, but that’s a lot of scrolling. To put my mind at ease I have also created a simple monitoring app that uses the same backend mechanism: