Archive

Raspberry Pi

A quick post about moving my Node app tests to Mocha and Chai. These are frameworks used where I work, and I figured I may as well learn something new and have more confidence in my Node apps when I deploy them, so how hard can it be?

Turns out, quite easy. These testing frameworks are designed to be flexible and to read like plain-English test descriptions. In fact, the new testing code looks a lot like my home-grown test framework. Have a look below for a comparison (taken from the led-server project):

(You can see the implementation of testBed module in the node-common project)

Home-grown

async function testSetAll(expected) {
  const response = await testBed.sendConduitPacket({
    to: 'LedServer',
    topic: 'setAll',
    message: { all: [ 25, 25, 52 ] }
  });

  testBed.assert(response.status === 200 && response.message.content === 'OK',
    'setAll: response contains status:200 and content:OK');
}

async function testSetPixel(expected) {
  const response = await testBed.sendConduitPacket({
    to: 'LedServer',
    topic: 'setPixel',
    message: {
      '0': [ 25, 25, 52 ],
      '1': [ 100, 100, 100 ]
    }
  });

  testBed.assert(response.status === 200 && response.message.content === 'OK',
    'setPixel: response contains status:200 and content:OK');
}
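For context, the kind of assert helper used above can be sketched in a few lines (a hypothetical stand-in, not the actual node-common implementation):

```javascript
// Hypothetical sketch of a home-grown assert helper in the style of
// testBed.assert (not the actual node-common implementation).
const results = { passed: 0, failed: 0 };

function check(condition, label) {
  if (condition) {
    results.passed += 1;
    console.log(`PASS: ${label}`);
  } else {
    results.failed += 1;
    console.log(`FAIL: ${label}`);
  }
  return condition;
}
```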

Mocha/Chai

describe('Conduit topic: setPixel', () => {
  it('should return 200 / OK', async () => {
    const response = await testBed.sendConduitPacket({
      to: 'LedServer',
      topic: 'setPixel',
      message: {
      '0': [10, 20, 30],
      '1': [30, 50, 60]
      }
    });

    expect(response.status).to.equal(200);
    expect(response.message.content).to.equal('OK');
  });
});

describe('Conduit topic: setAll', () => {
  it('should return 200 / OK', async () => {
    const response = await testBed.sendConduitPacket({
      to: 'LedServer',
      topic: 'setAll',
      message: { all: [64, 64, 64] }
    });

    expect(response.status).to.equal(200);
    expect(response.message.content).to.equal('OK');
  });
});

As a result, my script that runs all the test suites of each Node app (after booting them all together locally) produces a lot of output like this: all green, and no red!

Hopefully this new skill will enable me to write better code both personally and professionally in the future – I may even try out TDD for my next project!

A problem I had found when setting up my Node.js services on a new Raspberry Pi (or resetting one that had gotten into a bad state) was keeping track of the individual port numbers of each one. This might typically look like this:

  • News Headlines Backend – 5000
  • Tube Status Backend – 5050
  • LED Server – 5001
  • Backlight Server – 5005
  • Attic – 5500
  • Spotify Auth – 5009

…and so on. This wasn’t only a problem during setup, but also when maintaining the numerous config.json files for each app that needed to talk to any of the others.

So to do something about it, I decided to have a go at implementing a central message broker service (nominally called Message Bus, or MBus) from scratch (building things from scratch is a key feature of my hobbyist development, as you tend to learn a lot more this way). This new service had to be generic, allowing all kinds of messages to flow between the services, with formats the services define themselves. It had to be fault tolerant, and so should use JSON Schema to make sure the messages are all of the correct format. And lastly, it shouldn’t care what the connection details are for each app at startup.

 

Client Registration and Message Exchange

To solve this last problem, each app uses a common Node.js module that knows the port of a local MBus instance and requests a port assignment. MBus responds with a randomly rolled port number from a range (making sure it isn’t already allocated to another app), and the client app then creates an Express server that listens on the allocated port. If MBus receives a message with a known client app as the destination, it simply forwards it to that port on the local machine, where the client app is listening as expected. These two processes are summarised below:
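The port-assignment roll itself is simple enough to sketch (the range here is an assumption for illustration, not MBus’s actual configuration):

```javascript
// Sketch of MBus-style random port assignment. The range is an
// assumption for illustration, not the real MBus configuration.
const PORT_MIN = 20000;
const PORT_MAX = 30000;
const allocated = new Set();

function allocatePort() {
  let port;
  do {
    port = PORT_MIN + Math.floor(Math.random() * (PORT_MAX - PORT_MIN + 1));
  } while (allocated.has(port)); // re-roll until unallocated
  allocated.add(port);
  return port;
}
```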

 

Client Implementation

To implement a new client that talks to MBus, the app includes the mbus.js common module and registers itself at runtime. It also specifies the message schemas it expects, using conventional JSON Schema:

const mbus = require('../node-common').mbus();

const GET_MESSAGE_SCHEMA = {
  type: 'object',
  required: [ 'app', 'key' ],
  properties: {
    app: { type: 'string' },
    key: { type: 'string' }
  }
};

const SET_MESSAGE_SCHEMA = {
  type: 'object',
  required: [ 'app', 'key', 'value' ],
  properties: {
    app: { type: 'string' },
    key: { type: 'string' },
    value: {}
  }
};

async function setup() {
  await mbus.register();

  mbus.addTopic('get', require('../api/get'), GET_MESSAGE_SCHEMA);
  mbus.addTopic('set', require('../api/set'), SET_MESSAGE_SCHEMA);
}

Once this is done, config.json is also updated to specify where the client can find the central MBus instance, and the name it is identified by when messages are destined for it:

{
  "MBUS": {
    "HOST": "localhost",
    "PORT": 5959,
    "APP": "LedServer"
  }
}

The mbus.js module also takes care of the message metadata, and the server checks each packet against an overall schema:

const MESSAGE_SCHEMA = {
  type: 'object',
  required: [ 'to', 'from', 'topic', 'message' ],
  properties: {
    status: { type: 'integer' },
    error: { type: 'string' },
    to: { type: 'string' },
    from: { type: 'string' },
    topic: { type: 'string' },
    message: { type: 'object' },
    broadcast: { type: 'boolean' }
  }
};
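A full JSON Schema library does the real validation; the essence of the required-key and type checks against this packet schema can be sketched as:

```javascript
// Minimal sketch of required-key and type checks in the spirit of the
// packet schema above (a real service would use a JSON Schema library;
// optional fields like status/error are skipped for brevity).
const REQUIRED = ['to', 'from', 'topic', 'message'];
const TYPES = { to: 'string', from: 'string', topic: 'string', message: 'object' };

function isValidPacket(packet) {
  const hasRequired = REQUIRED.every((key) => key in packet);
  const typesOk = Object.entries(TYPES)
    .every(([key, type]) => !(key in packet) || typeof packet[key] === type);
  return hasRequired && typesOk;
}
```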

 

Example Implementations

You can find the code for MBus in the GitHub repository, and also check some example clients including Attic, LED Server, and Monitor.

Barring a few client app updates (luckily, no seriously user-facing apps depend on these services for core functionality right now), all the main services now use MBus to talk to each other. The image below shows these setups for the main machines they are deployed on:

Finally, over the next few months I’ll be updating the remaining clients to talk to their remote counterparts in this manner, and taking advantage of the fact that it is now easy to add and address future services without needing to configure ports and addresses for every individual one.

Adding Raspberry Pi based backlighting to my desktop PC with backlight-server, and moving to a new flat gave me an interesting idea – add an API to the backlight server to set the lights to the dominant colour of whatever album is playing in my Spotify account. How hard could it be?

The first step was to read up on the Spotify API. I quickly found the ‘Get the User’s Currently Playing Track’ API, which fit the bill. Since it deals with user data, I had to authenticate with their Authorization Code Flow, which requires multiple steps as well as a static address for a callback containing the authorization code granted to my application. I experimented with giving the Spotify Developer site the IP address of my Server Pi, but that could change, which would mean editing the application listing each time it happened; unacceptable for the seamless ‘set up and forget’ experience I was aiming for.

The solution was to resurrect my DigitalOcean account to host a small Node.js app with a simple task: receive the callback from Spotify containing the authorization code, exchange it for access and refresh tokens, and fetch and determine the dominant colour of the album art currently playing. This service would in turn be used by backlight-server to light up my living room with the appropriate colour.

This authorization flow took a long time to get right, both from a code perspective (I used the spotify-web-api-node npm package to make things programmatically easier) and a behavioural perspective (when should the token be refreshed? How should errors propagate through different services? How can the app know it is authorized at any given time?), but once it worked, it was very cool to see the room change colour as my playlist shuffled by.
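The ‘when should the token be refreshed?’ question boiled down to refreshing a safety margin before expiry. A sketch of that decision (the one-minute margin is an assumption):

```javascript
// Sketch of the token-refresh decision: refresh a safety margin before
// the access token's expiry time. The one-minute margin is an assumption.
const REFRESH_MARGIN_MS = 60 * 1000;

function shouldRefresh(expiresAtMs, nowMs = Date.now()) {
  return nowMs >= (expiresAtMs - REFRESH_MARGIN_MS);
}
```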

I made a half-hearted attempt at figuring out the dominant colour myself using buckets and histograms, but in the end decided to preserve my sanity and use the node-vibrant package instead, which worked like magic!
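For the curious, the buckets-and-histograms idea goes roughly like this: quantise each channel, count pixels per bucket, and return the centre of the most popular bucket (node-vibrant does this far better in practice):

```javascript
// Sketch of the abandoned buckets/histograms approach to dominant
// colour (node-vibrant does this far better in practice).
const BUCKET_SIZE = 64; // four buckets per channel

function dominantColour(pixels) {
  const counts = new Map();
  pixels.forEach(([r, g, b]) => {
    const key = [r, g, b].map((c) => Math.floor(c / BUCKET_SIZE)).join(',');
    counts.set(key, (counts.get(key) || 0) + 1);
  });

  let best;
  counts.forEach((count, key) => {
    if (!best || count > best.count) best = { key, count };
  });

  // Return the centre of the winning bucket as an [r, g, b] triple
  return best.key.split(',').map((n) => (Number(n) * BUCKET_SIZE) + (BUCKET_SIZE / 2));
}
```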

So this is basically how the whole thing works, and you can see the code for the spotify-auth microservice on GitHub. The diagram below may also help explain:

So what next? Well, those smart RGB light bulbs are looking a lot more interesting now…

With not a lot going on in terms of my Pebble apps (still very much in a ‘if it ain’t broke’ situation), my hobbyist attentions in recent months turned to my Raspberry Pi. With not a lot of exciting ideas for hardware hacking, it occurred to me that software applications of the device might be a bit more interesting.

Beginning with moving the backend services for News Headlines and Tube Status from a $5 Digital Ocean Droplet to a $0 Raspberry Pi under my desk (with a few forwarded ports, of course), I’ve steadily refined the standard pattern used to write and maintain these apps. At most there have been six, but today there are five:

  • News Headlines Backend – pushing headlines pins.
  • Tube Status Backend – pushing delay alerts pins.
  • LED Server – providing a localhost RESTful interface to the Blinkt! hat on the physical Pi between apps.
  • Attic – a new app, serving and receiving simple JSON objects for storage, backed by a Gist.
  • Monitor – responsible for monitoring uptime of the other services, and providing Greater Anglia and TfL Rail outage alerts to myself via my watch. Monitor actually just schedules regular invocations of its plugins’ update interface function, making it extremely extensible.

Through my adventures in Node, discovering convenient or standardised ways of handling modules, data storage/sharing, soft configuration, and so on, these apps have all been refined to use a common file layout, common modules, and a standard template. Now that the pattern has reached a relatively stable state of maturity, I’d like to share it with readers!

What? It’s not February 2017 anymore? The pattern has matured even further, but I’ve only now found the time to write this blog post? Well, OK then, we can make some edits…

Disclaimer: This isn’t an implementation of any actual accepted standard process/pattern I know of, just the optimum solution I have reached and am happy with under my own steam. Enjoy!

File Layout

As you can see from any of the linked repositories above, the basic layout for one of my Node apps goes as follows:

src/
  modules/
    app-specific-module.js
  common/
    config.js
    log.js
  main.js
package.json
config.json
.gitignore   # contains 'config.json'

The src folder contains modules (modules specific to the app), common (modules shared between all apps, such as log.js, which provides a standard logger, pid logging, and uncaughtException & unhandledRejection handlers), and main.js, which initialises the app.

This pattern allows all apps to use common modules that are guaranteed not only each other’s presence, but also a common config.json from which they can all draw configuration information (such as log level, API keys, latitude and longitude, etc.).

Soft Configuration

Of particular interest is the config.js module, which all modules include instead of reading config.json directly. The config.json file is untracked in git, so it can safely contain sensitive keys and other values. config.js also provides some additional benefits:

  • Ensuring the config.json file is present.
  • Allowing modules that include it to declare, via requireKeys, the keys they themselves require to be present in config.json. Here is an example.
  • Stopping app launch if any of these keys are not present.
  • Allowing access to the app’s launch directory context.

For example, a fictitious module may require an API key to be present in the ENV member of config.json:

const config = require('../common/config');

config.requireKeys('fictitious.js', {
  ENV: {
    API_KEY: ''
  }
});

If this structure is not present in config.json, config.js will prevent the app from starting, and will tell the operator (i.e. me!) which value should be provided. Handy!
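A minimal sketch of how such a check might work (hypothetical; the real implementation lives in node-common):

```javascript
// Hypothetical sketch of config.js's requireKeys: walk the expected
// structure and throw (stopping app launch) if a key is missing.
const config = { ENV: { API_KEY: 'abc123' } }; // stand-in for config.json

function requireKeys(moduleName, spec, target = config) {
  Object.entries(spec).forEach(([key, value]) => {
    if (!(key in target)) {
      throw new Error(`${moduleName} requires config key: ${key}`);
    }
    if (value && typeof value === 'object' && !Array.isArray(value)) {
      requireKeys(moduleName, value, target[key]);
    }
  });
}
```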

Standard Modules

Any of these Node apps (and any new apps that come along in the future) can make use of a library of drop-in standard modules (many of which can be seen in action in the repositories linked at the top of this post), including:

  • event-bus.js – Provide a pub/sub ‘event bus’ style of communication between modules
  • fcm.js – Send an event to Firebase Cloud Messaging to show me a notification
  • led-server-client.js – Communicate with the localhost Blinkt! LED Server instance
  • scraper.js – Scrape some text using a series of ‘before’ markers, and one ‘after’ marker
  • config.js – Access ‘smart’ configuration with additional capabilities
  • gist-sync.js – Synchronise a local JSON file/set with a remote Gist
  • leds.js – Directly drive the connected Blinkt! hat
  • db.js – Emulate a simple get/set/exists interface with a local JSON file
  • ip.js – Look up the address of the ‘mothership’ server (either Server Pi or a Digital Ocean Droplet)
  • log.js – Standard logger, asserts, uncaught/unhandled catching.
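As a flavour of how small these modules can be, a pub/sub bus in the spirit of event-bus.js can be sketched in a dozen lines (a hypothetical implementation, not the actual module):

```javascript
// Tiny pub/sub sketch in the spirit of event-bus.js (hypothetical,
// not the actual node-common implementation).
const subscribers = {};

function subscribe(topic, callback) {
  (subscribers[topic] = subscribers[topic] || []).push(callback);
}

function broadcast(topic, data) {
  (subscribers[topic] || []).forEach((callback) => callback(data));
}
```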

Wrapping Up

This standard pattern makes it a lot easier to manage the common modules as they are updated and improved, to manage SCM-untracked soft configuration values (and to make sure I provide them after migration!), and to keep modules as drop-in as possible. As with most of my hobbyist programming, these approaches and modules are the result of personal refinement rather than any accepted standard, which is my preferred style when I am the only consumer. Maximise the learnings!

Expect more sporadic information as these apps develop, and enjoy the pins!

For some just beginning their programming journeys a common example to conquer is blinking an LED, which usually goes something like this:

digitalWrite(13, HIGH);
delay(1000);
digitalWrite(13, LOW);

For me, I decided to try a much harder approach, in a fiddly effort that could be regarded as virtually pointless. Nevertheless, I persisted, because I thought it would be cool.

The idea: blink a Blinkt LED on Server Pi whenever it serviced a request from the outside.

For those unfamiliar with my little family of Raspberry Pi minions, here is a brief overview:

  • Server Pi – A Raspberry Pi 3 running three Node.js processes for various Pebble apps (News Headlines pin pusher, Tube Status pin pusher, unreleased notification and discovery service).
  • Backlight Pi – Another Raspberry Pi 3 with a single Node.js Express server that allows any device in the house to HTTP POST a colour to be shown behind my PC.
  • Monitor Pi – A Raspberry Pi Zero W (W, as of today) that pings the three processes running on Server Pi via the GitHub Gist discovery mechanism to give me peace of mind that they’re still up. It also checks the weather for ice and rain, and whether or not Greater Anglia have fallen over before I’ve taken the trouble of leaving for work at 7AM.

Maintaining this small fleet is a joy and a curse (one or both of “my own mini infrastructure, yay!” or “it’s all fallen over because Node exceptions are weird, noo!”), but since I started versioning it all in Git and adding crontab and boot scripts, it’s become a lot easier. However, for this particular task, only one process can usefully control the Blinkt LEDs on top of Server Pi. Since that process is a parameterised (services-only) instance of Monitor Pi, it must be the one that does the blinking when a request is processed.

Since I’m already a big fan of modular Node.js apps, I just added another module that sets up a single-endpoint Express server, and have each of the other three Server Pi processes POST to it whenever they service a request with their own Express servers. Neat!

An hour of synchronising and testing four processes locally and on-device later, and I now have a blue blinking LED whenever a request is serviced. Sadly the activity isn’t as high as it was in the News Headlines heyday when it was tasked with converting news story images to Pebble-friendly 64 colour thumbnails and an experimental analytics service late last year, but with the interesting tentative steps the unreleased notification service is taking, Server Pi may end up seeing a bit more action than simple status checks and app news lookups in the future.

With all this work done, it’s also time for another diagrammatic mess that I like to call my infrastructure…

Here’s a little something to take everybody’s mind off things.

 

This is a neat idea I had a while ago but only just got around to doing –

“What would a map of all the interconnections and services that my Pebble apps rely upon look like?”

Well, thanks to the neat tool that is Google Drawings, here it is. Scary dependencies!

services-architecture

Of course, this isn’t the full picture. The Server Pi and Monitor Pi provide me with useful services I use in my day-to-day life, such as train delay timeline pins, weather alerts, and updates on the health of the services the apps rely upon. Those details aren’t shown here for brevity, but would increase the complexity of the drawing by about 50%.

I’m a big fan of Blinkt light hats for Raspberry Pi. I have one showing me server status, rail delays, and weather conditions.

_20161016_153730

Server down!

I have another at work showing the status of the last link check on our ReadMe.io site.

img_20161117_165458

And now I have one as a dynamic backlight for my new PC build.

img_20161120_135026

And the best part? This last one has an API! It has five modes, powered by a Node.js Express server and the node-blinkt NPM package:

  • /set { "to": [r, g, b] } – Set a color instantly.
  • /fade { "to": [r, g, b] } – Fade from the current colour to this one.
  • ‘CPU’ – Fade to a colour on an HSV scale from green to red depending on current CPU load.
  • ‘Screen’ – Take an average of the four screen corners, and set to that colour ten times a second.
  • ‘Demo’ – Fade to a random colour from the rainbow every 15 seconds.
  • ‘Test’ – A control panel check: ping the Pi, and turn the ‘Test’ button green if it responds HTTP 200 with ‘OK’.
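The ‘CPU’ mode’s green-to-red mapping can be sketched as a hue slide from 120° down to 0°, followed by an HSV-to-RGB conversion (simplified here to cover only that hue range):

```javascript
// Sketch of the 'CPU' mode colour mapping: 0% load is green (hue 120°),
// 100% load is red (hue 0°). hsvToRgb is simplified to the 0-120° range.
function hsvToRgb(h, s, v) {
  const c = v * s;
  const x = c * (1 - Math.abs(((h / 60) % 2) - 1));
  const [r, g, b] = h < 60 ? [c, x, 0] : [x, c, 0];
  return [r, g, b].map((n) => Math.round(n * 255));
}

function loadToRgb(load) {
  return hsvToRgb(120 * (1 - load), 1, 1);
}
```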

The last three are driven by a Java control panel that permanently lives on the new PC.

controller

Thanks to the motherboard supplying power after the PC turns off, I can use ‘Demo’ as a nightlight! Not that I need one…

Update: Added changed IP facility details.

Update: Added status watchapp details.

Two of my Pebble apps push pins to the timeline to enhance their experience beyond the apps themselves:

  • News Headlines – Posts the top headline (if it’s new) every four hours. Used to push notifications and serve decoded PNG images, but that went away. Maybe someday they will return. But not for now.
  • Tube Status – Checks the TFL feed every five minutes, and pushes a pin if there is a change in the delay status. This can be a new delay, a delay that’s ended, and ‘all clear’ (no delays anymore).

Both servers also respond to GET /status to show app users if they’re up, and this has proved useful when they occasionally went down. Thanks to a ‘do node index.js forever’ loop script, this is now rarely an issue.

Up until now, these pins were served from a $5 Digital Ocean instance which essentially spends 99.9% of its time doing absolutely nothing! After coming back to the UK and making progress towards cancelling subscriptions and emptying my US bank account, I had a better idea – use my dusty Raspberry Pi instead!

As part of my new job at EVRYTHNG, a natural avenue of exploration for getting to grips with the IoT is using a Raspberry Pi, which can run Node.js, as it turns out. Perfect! The pin servers for the two apps above use Node.js with Express.

So after a bit of code/dependency cleanup, I set up both servers on the Pi with screen and put plenty of stickers around warning against turning it off or rebooting the router.

So far, so good! What could go wrong?

img_20160911_143438

The new ‘Pin Pusher’ Raspberry Pi in its native habitat – under the family computer desk.

Followup: Getting a Changed Router IP while Out the House

If the family router’s IP changes, the apps need the new address for their status checks (otherwise they think the servers have gone down, which is bad for users!). I used to have a Python script email me the Pi’s own IP address, but sadly Google doesn’t like this unauthenticated use of my GMail account, so I devised an alternative.

I set up my Pi as an EVRYTHNG Thng, gave it an ‘ip’ property, and wrote the following Python script to update this property in the EVRYTHNG cloud when it boots up. This way, all I have to do is ask whoever’s in to reboot the Pi, and then wait for the updated IP address! I may also make it run periodically to cover the ‘router randomly restarted’ scenario.


#!/usr/bin/python

import requests
import socket
import fcntl
import struct
import json

user_api_key = "<key>" # Probably shouldn't publish this!
thng_id = "<id>"

def get_ip_address(ifname):
  # Note: ifname is unused here; this fetches the router's external IP
  # by scraping canyouseeme.org
  r = requests.get("http://www.canyouseeme.org")
  spool = r.text
  start_str = "name=\"IP\" value=\""
  start_index = spool.index(start_str) + len(start_str)
  spool = spool[start_index:]
  end_index = spool.index("/>") - 1
  return spool[:end_index]

def main():
  ip = get_ip_address("eth0")
  print("IP: {}".format(ip))

  headers = {
    "Authorization": user_api_key,
    "Content-Type": "application/json",
    "Accept": "application/json"
  }
  payload = [{
    "value": ip
  }]
  r = requests.put("https://api.evrythng.com/thngs/{}/properties/ip".format(thng_id), headers=headers, data=json.dumps(payload))
  res = r.text
  print(res)

main()

Followup: Checking Status Conveniently

Each of the two apps mentioned above has a built-in server monitoring feature in its settings screen, but that’s a lot of scrolling. To put my mind at ease I have also created a simple monitoring app that uses the same backend mechanism:

img_20160911_225459

I wrote a while ago about a mechanism to locate and connect to a headless Raspberry Pi over Ethernet using an LCD display and some start-up code.

Well, today I broke it while preparing to move house (and use it in its intended situation!), which was bad news. Listen to your GND markings, people!

But a moment’s search for a replacement strategy yielded another idea. Nothing original by any means, but something new to my programming adventures thus far: Get the IP address by e-mail on boot!

Looking at a Raspberry Pi as it boots, you will see the Ethernet port is initialized pretty early on in the boot procedure. A quick Google search revealed the existence of the 'smtplib' module included with Python, which I leveraged to make this happen. Here is the final code (get_ip_address() found here):

import smtplib
import struct
import socket
import fcntl


def get_ip_address(ifname):
	s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
	return socket.inet_ntoa(fcntl.ioctl(
		s.fileno(), 0x8915, # SIOCGIFADDR
		struct.pack('256s', ifname[:15])
	)[20:24])

fromaddr = "<from address>"  # placeholder
toaddr = "<to address>"  # placeholder

msg = """RPi IP is  %s""" % get_ip_address('eth0')

username = "<username>"  # placeholder
password = "<password>"  # placeholder

print("Sending IP: %s to %s" % (get_ip_address('eth0'), toaddr))

print("Opening connection...")
server = smtplib.SMTP('smtp.gmail.com:587')
server.ehlo()
server.starttls()
server.ehlo()

print("Logging in...")
server.login(username, password)

print("Sending message: %s" % msg)
server.sendmail(fromaddr, toaddr, msg)

print("Closing connection...")
server.close()

print("Sent.")

The next step is to make it run on boot, which only involved writing this script (called ipmailboot.sh):

#!/bin/sh

sudo python ~/python/ipmail.py

Then changing the permissions for execution:

sudo chmod 755 ipmailboot.sh

And then registering it to run on boot using update-rc.d ipmailboot.sh defaults.

So, nothing revolutionary, but when I move house and find the router is nowhere near a TV, all I’ll have to do is connect the Pi to the network with an Ethernet cable, power it on, and wait for the email to arrive (on my Pebble watch, no less!).

While developing a number of different minor projects that involved communication between Android and my laptop, Pebble, or Raspberry Pi, I found myself almost writing a new app for each project, with names like ‘PebbleButtonDemo’ or ‘AndroidTestClient’.

It occurred to me: why keep reinventing the wheel? The vast majority of these situations simply called for a TCP socket connection that sent pre-defined commands as text strings. With that in mind, I conspired to create a general-purpose app that did these things and did them well.

But in order to solve the problem of writing a new app for every application when the underlying mechanism remained the same, it needed more customisability than simply an EditText for address and port. The next logical step is to allow each project/application (server, that is) to customise the general client on the Android phone to its purpose, and this first incarnation offers the following ‘customisables’:

  • Android Activity title in the Action bar by sending “TITLE” + the new title String
  • A set of three customizable Buttons by sending “BUTTON” followed by the button number, button text, and the command it should trigger to be sent back to the application server.
  • Protocols agreed for these actions in both Java and Python servers I’ve written.

More features are possible with a more advanced protocol, such as sending vectors to draw on the remote Canvas, or even images, but those will have to come later when a specific project calls for it.

So, upon opening the app, this is what is seen:

Screenshot_2013-07-25-22-37-13

The key UI elements to note are:

  • The customizable Activity title.
  • The ‘I/O Traffic’ section, which contains a customized SurfaceView element (actually a subclass of my Android Game Engine), which fills up green when a connection is established and animates blue ‘bits’ left and right whenever data is sent or received.
  • The ‘Connection Settings’ section, which contains EditText fields for host address and port number, a Spinner for language selection on the application server side, and connect/disconnect Buttons.
  • The ‘Log History’ section contains a ScrollView housing a TextView that shows all events that take place, be they received data, sent commands or local events such as IOExceptions and disconnects.
  • The ‘Custom Buttons’ section, which houses the three customizable Buttons that can be setup from the application server side with details I’ll now detail below.

To continue the spirit of a general purpose app, I created static methods for setting up these customizable UI elements, shown below:

public class GClientTools {
	//Protocol configuration
	private static final String PROTOCOL_TITLE = "TITLE";
	private static final String PROTOCOL_BUTTON = "BUTTON";
	private static final String PROTOCOL_SEP_1 = ":";
	private static final String PROTOCOL_SEP_2 = ";";
	private static final String PROTOCOL_SEP_3 = ".";

	/**
	 * Use the GClient syntax to set the GClient Activity title
	 * @param inStream 	Established output stream.
	 * @param inTitle	Title to set to.
	 */
	public static void setTitle(PrintStream inStream,String inTitle) {
		String packet = PROTOCOL_TITLE + PROTOCOL_SEP_1 + inTitle;
		inStream.println(packet);
		System.out.println("Title now '" + inTitle + "'. (" + packet + ")");
	}

	/**
	 * Configure a GClient custom button
	 * @param inStream		Established output stream.
	 * @param inButtonNumber	Which button to customise. No range checking.
	 * @param inText		Text to display on the button.
	 * @param inCommand		Command the button will send back to this server.
	 */
	public static void setCustomButton(PrintStream inStream, int inButtonNumber, String inText, String inCommand) {
		String packet = PROTOCOL_BUTTON + PROTOCOL_SEP_1 + inButtonNumber + PROTOCOL_SEP_2 + inText + PROTOCOL_SEP_3 + inCommand;
		inStream.println(packet);
		System.out.println("Button " + inButtonNumber + " now '" + inText + "' --> <" + inCommand + ">. (" + packet + ")");
	}

}
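On the client side, decoding these packets is a matter of splitting on the separators. A sketch of the decoding logic (shown in JavaScript for brevity; the real client is the Android/Java app):

```javascript
// Sketch of client-side decoding of GClientTools packets, e.g.
// "TITLE:My App" or "BUTTON:2;Lights On.LIGHTS_ON". The real client
// is written in Java; JavaScript is used here for brevity.
function parsePacket(packet) {
  const sepIndex = packet.indexOf(':');
  const keyword = packet.slice(0, sepIndex);
  const payload = packet.slice(sepIndex + 1);

  if (keyword === 'TITLE') return { type: 'title', title: payload };

  if (keyword === 'BUTTON') {
    const [number, rest] = payload.split(';');
    const dotIndex = rest.lastIndexOf('.');
    return {
      type: 'button',
      number: Number(number),
      text: rest.slice(0, dotIndex),
      command: rest.slice(dotIndex + 1),
    };
  }

  return { type: 'unknown' };
}
```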

As a test case, I wrote a quick application server that accepts the GClient connection and makes use of these static methods to set the Activity title and one custom Button. The I/O Traffic bar has filled up green and the Log History shows all events:

Screenshot_2013-07-25-22-36-50

That’s it for now. The major things I learned writing this app were:

  • A much more stable and UI friendly threaded approach to networking, using four threads (UI, sending, receiving and connecting)
  • Precise Android XML UI design including nested layouts and more features of the RelativeLayout.
  • Setting Android UI views to use 9-Patch images and custom background styles and colours.

First version source code is available here! The GClientTestServer port is a constant field in the class file. The GClientTestServer also contains the GClientTools class in the util package, which I’ll be next using for adapting current project servers and eliminating a few test apps altogether!