Category Archives: Ideas

Hardware projects of the week

As an avid backer of quite a few interesting Kickstarter projects, I have early access to a number of new technologies. Two projects in particular are Bluetooth LE based, a field which we believe will be a multi-million-dollar industry in the next couple of years, and one deals with wearable computing, which is set to explode.

The first project that we would like to talk about is Spark, a programmable networked core that can report data from numerous sensors via HTTP. The dev kit includes a number of useful sensors out of the box, as well as a high-voltage relay for controlling appliances in your home via REST calls. In a simple project, this means you could turn on your coffee machine from your phone on the way home from work, and have a fresh pot brewed as you walk in the door. There are obviously numerous other applications, which we will be exploring in the coming weeks. Spark cores can also make use of an Arduino shield adapter, which will allow you to add on any Arduino-compatible shield to expand their capabilities.
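As a concrete sketch of that REST idea: the Spark Cloud exposed function-call endpoints of the form `/v1/devices/<id>/<function>` at the time. The device ID, function name and token below are placeholders, so treat this as illustrative rather than copy-paste ready.

```python
import urllib.parse

# Spark Cloud REST base URL (assumption; check the current Spark docs)
SPARK_API = "https://api.spark.io/v1/devices"

def build_function_call(device_id, function_name, access_token, args=""):
    """Compose the URL and POST body for calling a function on a Spark core."""
    url = "%s/%s/%s" % (SPARK_API, device_id, function_name)
    body = urllib.parse.urlencode({"access_token": access_token, "args": args})
    return url, body

# Hypothetical core with a "relay" function wired to the coffee machine:
url, body = build_function_call("0123456789abcdef", "relay", "my-token", "on")
# POSTing `body` to `url` (urllib.request, requests, curl...) fires the relay.
```

From there, the phone app is just a button that fires that POST.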

Next up is the MetaWear dev board from MbientLab. MetaWear is a tiny dev board that can power wearable computing devices, and it reports back to your phone via Bluetooth LE. It can be used to quickly create fitness bands or any other wearable device. It is also relatively inexpensive, already has APIs for Android and iOS, and ships with a few sample apps. MetaWear can make use of any I2C-compliant add-on, so the applications are almost limitless!

The third product that we would like to bring to your attention is the PowerUp 3.0.
This device is sold as a toy: a Bluetooth LE receiver that pairs with your phone to create a remote-controlled paper aeroplane. The interesting part is that we can see many applications for it, both as a toy and beyond!

Another Bluetooth device that we are currently working on, with a Raspberry Pi as the back end, will allow us to transmit data of all sorts via a cheap commodity Bluetooth transmitter. It is very similar in nature to an Apple iBeacon, and we are imagining a world where these things are attached to billboards at busy traffic intersections. Users will be able to receive updates on entertainment schedules, interact with sports games or download advertising clips with special offers while they wait in traffic. Utilising a Wi-Fi breakout, we can then also create networks of users, with information at your fingertips!

Using OpenCV for great customer service

OpenCV is an Open Source Computer Vision library that can be used in a variety of applications. There are a few wrappers for it that will expose the OpenCV API in a number of languages, but we will look at the Python wrapper in this post.

One application that I was thinking could be built very quickly and easily would be to use facial recognition to look up a customer before serving them. This can be achieved with a simple, cheap webcam mounted at the entrance of a service centre that captures people’s faces as they enter the building. Each face can then be looked up against a database of images to identify the customer and pull up all their details immediately on the service centre agent’s terminal. If the customer is new, the agent could capture their info for next time.

Privacy issues aside, this should be relatively easy to implement.

import sys
import cv2.cv as cv
from optparse import OptionParser

# Parameters for haar detection
# From the API:
# The default parameters (scale_factor=2, min_neighbors=3, flags=0) are tuned
# for accurate yet slow object detection. For a faster operation on real video
# images the settings are:
# scale_factor=1.2, min_neighbors=2, flags=CV_HAAR_DO_CANNY_PRUNING,
# min_size=<minimum possible face size

min_size = (20, 20)
image_scale = 2
haar_scale = 1.2
min_neighbors = 2
haar_flags = 0

def detect_and_draw(img, cascade):
    # allocate temporary images
    gray = cv.CreateImage((img.width,img.height), 8, 1)
    small_img = cv.CreateImage((cv.Round(img.width / image_scale),
                                cv.Round(img.height / image_scale)), 8, 1)

    # convert color input image to grayscale
    cv.CvtColor(img, gray, cv.CV_BGR2GRAY)

    # scale input image for faster processing
    cv.Resize(gray, small_img, cv.CV_INTER_LINEAR)

    cv.EqualizeHist(small_img, small_img)

    t = cv.GetTickCount()
    faces = cv.HaarDetectObjects(small_img, cascade, cv.CreateMemStorage(0),
                                 haar_scale, min_neighbors, haar_flags, min_size)
    t = cv.GetTickCount() - t
    print "detection time = %gms" % (t/(cv.GetTickFrequency()*1000.))
    if faces:
        for ((x, y, w, h), n) in faces:
            # the input to cv.HaarDetectObjects was resized, so scale the
            # bounding box of each face and convert it to two CvPoints
            pt1 = (int(x * image_scale), int(y * image_scale))
            pt2 = (int((x + w) * image_scale), int((y + h) * image_scale))
            cv.Rectangle(img, pt1, pt2, cv.RGB(255, 0, 0), 3, 8, 0)

    cv.ShowImage("result", img)

if __name__ == '__main__':

    parser = OptionParser(usage = "usage: %prog [options] [filename|camera_index]")
    parser.add_option("-c", "--cascade", action="store", dest="cascade", type="str", help="Haar cascade file, default %default", default = "../data/haarcascades/haarcascade_frontalface_alt.xml")
    (options, args) = parser.parse_args()

    cascade = cv.Load(options.cascade)

    if len(args) != 1:
        parser.print_help()
        sys.exit(1)

    input_name = args[0]
    if input_name.isdigit():
        capture = cv.CreateCameraCapture(int(input_name))
    else:
        capture = None

    cv.NamedWindow("result", 1)

    if capture:
        frame_copy = None
        while True:
            frame = cv.QueryFrame(capture)
            if not frame:
                break
            if not frame_copy:
                frame_copy = cv.CreateImage((frame.width,frame.height),
                                            cv.IPL_DEPTH_8U, frame.nChannels)
            if frame.origin == cv.IPL_ORIGIN_TL:
                cv.Copy(frame, frame_copy)
            else:
                cv.Flip(frame, frame_copy, 0)

            detect_and_draw(frame_copy, cascade)

            if cv.WaitKey(10) >= 0:
                break
    else:
        image = cv.LoadImage(input_name, 1)
        detect_and_draw(image, cascade)
        cv.WaitKey(0)


So as you can see, by using the frontal face Haar cascade XML files bundled with OpenCV, we are almost there already! Try it with:

python ./ -c /usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_alt.xml 0

Where 0 is the index of the camera you wish to use.
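The missing piece for the service-centre idea is the lookup itself. Here is a minimal sketch, assuming a hypothetical recognizer that turns each detected face into a numeric embedding; the function names, embeddings and threshold are all illustrative, not part of OpenCV.

```python
def lookup_customer(face_embedding, customers, threshold=0.6):
    """Return the id of the best-matching known customer, or None if new.

    `customers` maps a customer id to the embedding produced by whatever
    recognizer you train (e.g. Eigenfaces over the cropped face regions).
    """
    best_id, best_dist = None, threshold
    for cust_id, embedding in customers.items():
        # plain Euclidean distance between the two embeddings
        dist = sum((a - b) ** 2 for a, b in zip(face_embedding, embedding)) ** 0.5
        if dist < best_dist:
            best_id, best_dist = cust_id, dist
    return best_id
```

A hit pulls the customer’s record onto the agent’s terminal; a miss triggers the “capture info for next time” flow.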

Low Earth Orbit ads

This may seem a little nuts, but it is an idea that I have been throwing around for a few years now.

Imagine a large company being able to launch a large billboard into Low Earth Orbit (LEO) and have it fly over entire countries displaying an advert. Companies like Coca-Cola could easily afford the costs, and the returns on a nice equatorial orbit could be great!

I would approach it something like this (leave comments if you have better ideas!):

  • Mylar (space blankets) can be printed with the company logo etc.
  • Mylar also has the advantage of being thermally resistant as well as being very light to keep launch costs low.
  • Current launch costs are around USD 4 000 – 20 000, depending on weight. This is completely affordable for most large advertisers.
  • The billboard would burn up on re-entry, so the chance of persistent debris is lowered.
  • Careful monitoring of these objects would be simple, due to the reflective nature of the mylar.
  • Somebody could make a whole lot of dough if they ever get past the regulatory issues about putting very large pieces of junk into LEO.

Just an idea, but I think it may work. These should be low enough orbits so as to not interfere with things, and be very short lived.

What do you think?

Shopping mobile apps

Alright, seeing as WordPress decided to discard this post before, I will attempt to rewrite it now.

There have been a number of mobile apps from supermarkets that help users look out for specials and get better deals. I would like to focus mainly on the Pick ‘n Pay “Smart Shopper” campaign, as this is the one I personally use the most.

I must applaud Pick ‘n Pay on making Smart Shopper well worth my while, as I have already benefited greatly from using it, and remain a loyal customer mainly due to the programme.

There are, however, some ways that I feel that it could be improved.

  • Add a barcode scanner to the mobile app. This would let me scan the barcodes of products that I need and automatically populate a shopping list. If, for example, I need more coffee, I could scan the code on the old bag and have it added as a list item to buy. I could also spend, say, 500 Smart Shopper points on a personal shopper service in store, where a Pick ‘n Pay employee would have a trolley filled with the stuff I need waiting for me, or have it delivered to my home.
  • Have the Smart Shopper vouchers applied automatically. Going to the terminal at the store entrance to print vouchers, and then remembering to use them at the till, is a pain. Why not have an online check at the till that applies all the valid vouchers automatically and notes at the end of the slip what was used? It would certainly save on printing silly receipts!
  • Profile my spending habits properly. I, more often than not, receive vouchers that are completely irrelevant to my shopping habits. Why collect all that data, and then not use it?
  • Allow me to transfer points to other people. Pick ‘n Pay already allows you to donate points to charities, but what if, for example, I have a brother at university who could do with a decent meal? I could transfer points to him to use!
  • Have an online Smart Shopper store. Allow me to buy stuff with points on an online store (like Christmas gifts etc) using points instead of cash.
  • Add in possible geo location and contextual ads/vouchers. If I can select the store that I am in at the moment, and have contextual vouchers and notifications sent to me while in the store, I would appreciate that. Again, this depends on my profile data.
  • Allow me to opt in for push notifications or SMS of specials and deals. Example: “Hey, if you spend over R500 on Saturday, you will get triple points!” or “This Friday, earn double points on cheese”. Stuff like that.

There are many more ideas that I have, simple things that could make a huge difference, but let’s get the basics done first eh?

This is not meant to be a rant, just constructive ideas. If you have any, please leave a comment too!

iBeacons and Raspberry Pi

A while back, I came across an article from Radius Networks with a very simple set of instructions for making your own iBeacon with a Raspberry Pi and a cheap Bluetooth dongle.
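The recipe in that article boils down to handing the dongle a raw 30-byte advertising payload. Here is a sketch of how those bytes are laid out; the UUID, major/minor numbers and TX power below are just example values.

```python
import struct
import uuid

def ibeacon_payload(proximity_uuid, major, minor, tx_power):
    """Assemble the 30-byte BLE advertisement an iBeacon broadcasts."""
    flags = bytes([0x02, 0x01, 0x1A])                    # AD structure: BLE flags
    body = bytes([0xFF, 0x4C, 0x00, 0x02, 0x15])         # manufacturer data: Apple, iBeacon marker
    body += uuid.UUID(proximity_uuid).bytes              # 16-byte proximity UUID
    body += struct.pack(">HHb", major, minor, tx_power)  # major, minor, calibrated TX power
    return flags + bytes([len(body)]) + body             # length prefix on the second AD structure

payload = ibeacon_payload("2f234454-cf6d-4a0f-adf2-f4911ba9ffa6", 1, 1, -59)
```

`hciconfig`/`hcitool` then broadcast this payload from the dongle, and any phone in range sees the UUID/major/minor triple.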

These things should be ultra cheap to roll out, so I am expecting to see a lot of applications, such as the contextual apps that I have spoken about before, rolling out quite soon.

The possibilities of these things for guerrilla marketing in malls and large spaces are huge, especially in food courts and bigger shops.

Imagine contextual ads according to your preferences, that should be pretty easy actually.

Mark my words, these things will either be a complete flop (as regular Bluetooth was) or huge (thanks to the iCrowd).

Time will tell!


Writing a BSON data store

I decided to have a crack at writing a BSON-based data store. I know there are many around already, but I really wanted to see how BSON worked at a lower level, and, of course, whether I could actually pull something off.

The code that I came up with today is hosted online. It is nothing much (yet), but I hope to be able to add more to it soon.

Basically, the way it works is that you supply a key as a String and a value as a document. The document itself can be just about anything, including JSON documents. Documents are then serialized to BSON and stored (for now) in an in-memory HashMap. I realise that this is neither the best nor the fastest approach, but I have limited document sizes to a maximum of 200KB in order to make it as efficient, and as useful, as possible in the few hours I have dedicated to it so far.
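To get a feel for what that serialization step produces, BSON framing is simple enough to hand-roll. Here is a minimal encoder for string-only documents (the store itself uses a proper BSON library in Java; this Python sketch is purely to show the wire format):

```python
import struct

def bson_encode(doc):
    """Encode a flat dict of str -> str as a BSON document (string elements only)."""
    body = b""
    for key, value in doc.items():
        vb = value.encode("utf-8") + b"\x00"
        # element: type 0x02 (UTF-8 string), cstring key, int32 length, value, NUL
        body += b"\x02" + key.encode("utf-8") + b"\x00" + struct.pack("<i", len(vb)) + vb
    # document: int32 total length (including itself), elements, trailing NUL
    return struct.pack("<i", len(body) + 5) + body + b"\x00"
```

For example, `bson_encode({"hello": "world"})` produces the canonical 22-byte document from the BSON specification.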

The code includes a “local” store, which you can simply include in your Java projects and use via the dead-simple API, as well as a “server” mode, which spawns a threaded socket connection on the specified port. You can then telnet to the server and use the server API to put and get documents by key.


$ telnet localhost 11256
Connected to localhost.
Escape character is '^]'.
put/1/Hello World!
Put 1 Hello World!
get/1
Getting key 1
Hello World!

Very simple!
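The command format in that session can be mimicked with a tiny in-memory sketch (Python here rather than the store’s Java, and plain strings rather than BSON, just to show the line protocol):

```python
class TinyStore:
    """In-memory stand-in for the put/get line protocol shown above."""

    def __init__(self):
        self.docs = {}

    def handle(self, line):
        if line.startswith("put/"):
            _, key, value = line.split("/", 2)
            self.docs[key] = value
            return "Put %s %s" % (key, value)
        if line.startswith("get/"):
            _, key = line.split("/", 1)
            return self.docs.get(key, "Key not found")
        return "Unknown command"
```

Wrap `handle()` in a threaded socket accept loop and you essentially have the “server” mode.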

The next steps will be to:

  • add persistent storage mechanism
  • make it faster
  • optimise the code a bit
  • write some tests
  • probably rewrite it in C

DSTV decoders and set top boxes in general

Reposted from old site – original date: Tuesday 20 December 2011

DSTV has an opportunity to do something awesome with their decoders. Many people have them in their homes already, and with the next iteration of undersea cables coming along, more people will be able to make better use of broadband connections. Put these things together and we have some ideas that may or may not be worth exploring…

Imagine the following: a set top box with a programmable API. Sure, keep the proprietary signal-decoder bit proprietary, but open up an on-device API for 3rd-party apps. Create an app store concept where developers can lodge apps and make them downloadable via the satellite TV connection. This could open up a whole new world for people to explore and have more fun with their TV, which, after all, will translate into more folks signing up for the services and spending more time using them.

The tech could be simple. Android, possibly, or a simple Python interpreter, would make the barrier to entry really low, and the app store ecosystem would flourish quickly. Paid-for apps offering value-added services could be deployed quickly and easily, as well as maintained from a central repository.

With something like this, there could be an app to tweet minute-by-minute football results! No need to watch your phone all the time; just set it going and relax!

What do you think?

Laptop tracking system (cheap)

Reposted from old site – original date: Wednesday 7 September 2011

Following on from a discussion on a work forum, I decided to have a stab at writing a super simple laptop tracking system. The main aim of the system will be to track stolen laptops and (hopefully) recover them somehow.

After a little consideration, I decided to take a client/server approach with a MongoDB back end. The following is a simple but workable system that uses the Python bindings to the D-Bus messaging system on Linux (tested on Ubuntu 11.04), so I am doubtful that this exact client could be used anywhere else. That said, the client simply needs to send through Wi-Fi access points and MAC addresses to get a triangulation on the laptop’s position, so I am pretty sure this can be achieved with relative ease on other platforms.

Triangulation is done via a Google API, and the coordinates, as well as the accuracy level, are then inserted into the MongoDB back end for mapping or whatever else needs to be done with the data. Features that the client should probably also support include bricking the machine remotely, as well as possibly sending messages or SOS tweets to let people know that it is not in its rightful owner’s possession.

Enough rambling! To the code!


As I said, the code uses python-dbus and the standard json and urllib modules. You can install the bindings with apt-get install python-dbus.

import dbus
import json
import urllib

NM = 'org.freedesktop.NetworkManager'
NMP = '/org/freedesktop/NetworkManager'
NMI = NM + '.Device'
PI = 'org.freedesktop.DBus.Properties'

def list_ssids():
    bus = dbus.SystemBus()
    nm = bus.get_object(NM, NMP)
    nmi = dbus.Interface(nm, NM)
    wifi = []  # collect results across all wifi devices
    # Iterate over the devices queried via the interface
    for dev in nmi.GetDevices():
        # get each and bind a property interface to it
        devo = bus.get_object(NM, dev)
        devpi = dbus.Interface(devo, PI)
        # DeviceType 2 means a wifi device
        if devpi.Get(NM + '.Device', 'DeviceType') == 2:
            wdevi = dbus.Interface(devo, NMI + '.Wireless')
            for ap in wdevi.GetAccessPoints():
                apo = bus.get_object(NM, ap)
                api = dbus.Interface(apo, PI)
                ssid = api.Get(NM + '.AccessPoint', 'Ssid', byte_arrays=True)
                mac = api.Get(NM + '.AccessPoint', 'HwAddress', byte_arrays=True)
                wifi.append({'ssid': ''.join(['%c' % b for b in ssid]),
                             'mac': ''.join(['%c' % b for b in mac])})
    return wifi

if __name__ == '__main__':
    ap = list_ssids()
    data = json.dumps(ap)
    params = urllib.urlencode({'wifi': data})
    f = urllib.urlopen("", params)

You will need to modify the post URL at the end to somewhere meaningful for yourself of course.

Initially, I did the client using PHP’s ext/dbus, but decided against that because it is hard to install and nobody really uses it…

The server side is just as simple. You will need a MongoDB instance running on the server, and then a simple script to catch the POSTs. NOTE: this script is just a concept, so if you are actually going to do something like this, clean it up!

$wifi = $_POST['wifi'];
$wifi = json_decode($wifi);
$request = array( 'version' => '1.1.0', 'host' => '', 'wifi_towers' => $wifi );
$c = curl_init();
curl_setopt( $c, CURLOPT_URL, '' );
curl_setopt( $c, CURLOPT_POST, 1 );
curl_setopt( $c, CURLOPT_POSTFIELDS, json_encode( $request ) );
curl_setopt( $c, CURLOPT_RETURNTRANSFER, true );
$result = json_decode( curl_exec( $c ) )->location;

$fields = array(
            'laptopid' => urlencode('16cptl-pscott'),
            'lat'      => $result->latitude,
            'lon'      => $result->longitude,
            'accuracy' => $result->accuracy,
);

// connect
$m = new Mongo();
// select a database
$db = $m->laptoptrack;
$collection = $db->lappoints;

// add a record
$obj = array(
    "loc" => array("lon" => floatval($fields['lon']), "lat" => floatval($fields['lat'])),
    "lon" => floatval($fields['lon']),
    "lat" => floatval($fields['lat']),
    "accuracy" => intval($fields['accuracy']),
    "laptopid" => $fields['laptopid'],
    "wifi" => json_encode($wifi)
);
$collection->ensureIndex(array('loc' => "2d"));
$collection->insert($obj, array('safe'=>true)); 

So from the above code you will see that we also create a 2d geospatial index on the MongoDB collection. Not sure if this is essential, but it will probably help speed up queries like “Gimme all the laptops that Company X owns in area Y”, if that is something you would like to add to your query interface, of course…

Also, I am not 100% sure of the legality of storing SSIDs with a location, so check that one first too!

Dead simple, works well.

I would say that the client bit should run from a cron job or similar that pings the service every hour or so.

Remember: MongoDB works best on a 64-bit OS. If you are using a 32-bit arch, you will only be able to store around 2GB of data at a time. Depending on the need for historical records etc., keep that in mind…

Most importantly, HAVE FUN!

Is infrastructure really a barrier to education?

Reposted from old site – original date: Tuesday 23 August 2011
A few folks over the years have asked me what the biggest barrier to (online) education is, and the pretty stock-standard answer has normally been infrastructure.

This does not need to be the case. Over the past few years, I have been thinking about an infrastructure deployment that would work in most areas of the developing world, as well as in more developed regions.

The answer is mesh networks. Yes, a few people are experimenting with mesh-connected devices for education already, but I do not believe they are seeing the true potential here. Most are small scale, underfunded, and reliant on donor grants. That should not really be the case, as there is an alternative approach that can work with the cooperation of banks/loan vendors or some corporate backing. Let us have a look at what I am on about, shall we?

Small, wireless thin client devices. I have done some research into these things and it seems that the shopping list below is not a huge ask, it just needs for someone to actually do it!

1. Small to very small form factor. The device should be small enough to be stuck to the back of a screen.
2. 12V power supply. This means the device can also be powered by solar panels, small wind turbines or a car battery.
3. Resistant to dust.
4. Operates in an ambient temperature range of around 5-40 degrees Celsius.
5. Has multiple USB ports (at least 2, but preferably 6).
6. Has wifi mesh capability. This is the most important one. Mesh networks can achieve high throughput when many nodes operate in close proximity.
7. Outputs for a (wireless) keyboard and mouse, as well as an (HD/HDMI) monitor.
8. NO moving parts (possibly a fan, but limited to that). Easy to maintain and replace parts.
9. Open hardware and BIOS/ROM if possible. This will allow folks to maintain the devices easily with minimal training.
10. Audio in and out.

With these basics, we can set up a mesh of devices in proximity-based urban conditions (like informal settlements and tower apartments), both vertically and horizontally. Devices should be given to people at very low cost or free of charge in order to minimise theft and essentially flood the market. All devices within a proximity area should connect to each other in multiple ways, and then utilise a wired backhaul to the internet on a high-speed connection. Server setup, maintenance and connectivity can be run as a business by a single person, and all connected devices can optionally subscribe to an internet connection at a small monthly charge to offset costs.

What these kinds of networks will provide is:

1. A community-based network where skills can be shared and learned on an eLearning system hosted on the server.
2. IP phones within the network (and optionally to the internet via the backhaul), plus IP TV streamed directly from the server. IP TV could be a paid service as well.
3. Access to educational resources and Creative Commons licensed courseware to learn new skills and possibly get a better job.
4. Government services via an eGov initiative (especially social services, HIV education, etc.).
5. Community bulletin board style communication to post notices, services on offer and possible job openings.
6. A bunch of other good things.

The next steps really involve putting together a proof of concept, which shouldn’t be too challenging. Other things to consider include creating a standard business plan that people could take to banks to fund the server/backhaul setup as an SMME, and then just doing it!

Please don’t steal this idea

Reposted from old site – original date: Thursday 2 June 2011

I have been tossing around the idea of an “anti-curation” application. Now, we all see that curation is really hot right now, and everybody and their grandmother uses some sort of curation system to build up little newsletters or pages to share with their friends. Usually stuff that is interesting to you and your in-group.

To me, these kinds of curation systems are a little flat. The problem of contextual realtime (search for that on this site for more detail) becomes more and more of an issue as people curate other people’s links and pages. There is also very little social-mindedness in these types of applications. I, for one, would like to take a slightly different approach to curation itself.

My idea here would be to get your friends to curate stuff for you: a group of trusted people you know curating decent stuff that you will be interested in. That is anti-curation. This would work OK, except how do you ever find people with similar interests to you? Ah, well, therein lies the tricksy bit. I think that a Hadoop cluster would be a good thing here. Think about a billion tweets a week going into a Hadoop cluster. You then run almost user-defined Map/Reduce jobs against that store and get some simple metrics from it, much the same way that I presume “Trending Topics” on Twitter works. OK, nothing too spectacular yet. That should give you the top things that your group is interested in across a time scale (e.g. weekly, daily, hourly, whatever). You also get the advantage that you see stuff once. A retweeted URL that did the rounds in your group could get annoying, but a system like this would reduce that to a single occurrence of each “thought”. So this now sounds like a realtime social proxy/filter system. Good. We can make it better.
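As a toy version of that counting step (no Hadoop, just the shape of the job), deduplicating per person before ranking gives exactly the “see it once” behaviour:

```python
from collections import Counter

def trending(shares, top_n=3):
    """Rank shared items, counting each (person, item) pair only once.

    `shares` is a list of (person, item) tuples; a URL retweeted five times
    by the same friend still contributes a single count.
    """
    deduped = set(shares)
    counts = Counter(item for _, item in deduped)
    return [item for item, _ in counts.most_common(top_n)]
```

At scale, the dedup step is the map phase and the counting is the reduce phase of the Hadoop job.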

At some point, your trusted group of friends may become a little stale, and your interests may change slightly. We then have the opportunity to run Euclidean distance calculations across the entire data store to look for the closest relatives to your interests. These can also be almost user-defined, so you could search for a hashtag, a keyword, a user, a URL, almost anything. A simple interface to refine your interests would be easy to put together as well, so that filtering and anti-curation can be done in just about as near as damnit realtime. Cool!
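The distance step itself is straightforward. A minimal sketch over tag-count vectors (user names and tags below are made up for illustration):

```python
def distance(a, b):
    """Euclidean distance between two users' tag-count dicts."""
    tags = set(a) | set(b)
    return sum((a.get(t, 0) - b.get(t, 0)) ** 2 for t in tags) ** 0.5

def closest_relative(user, candidates):
    """Pick the candidate whose interest vector sits nearest to `user`'s."""
    return min(candidates, key=lambda name: distance(user, candidates[name]))
```

Run as a Map/Reduce job over the whole store, this is what would surface fresh people to anti-curate for you.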

Some of the cool things about this would be that it is not restricted to a single web presence like Twitter OR Facebook OR whatever; anyone can curate anything for anyone, anywhere. A simple browser extension would suffice for that.
Another very cool thing is that the distance algorithm need not only work on folksonomy (tags etc.), but also on images, videos, audio and so on.

The same system will easily work as a curation system, a social bookmark service and an anti-curation service. What a win!

Data can be sent back to users in almost any way too. Web, XMPP feeds, RSS feeds, tweets, facebook status updates, and more.

Sound exciting? Thoughts?