Author Archives: paulscott56

Full Raspberry Pi (Raspbian) emulation with qemu

I wanted to do some experimental hacking on my Raspberry Pi, specifically to try a bit of fun with talking to Arduino and Spark Cores. My ultimate aim was to have a go at doing something fun with the meArm robotic arm.

I started off compiling OpenCV and OpenNI on the physical pi, but quickly realised I didn’t have a big enough SD card lying around. I momentarily thought about stealing one of my wife’s pro camera SD cards, but then thought about the consequences… I then decided to emulate the whole thing and then buy an SD card when the project was done.

First off, you need a qemu environment. I’ll assume you have a basic qemu installation going, but if not, get started with

sudo apt-get install qemu-system qemu-user-static binfmt-support

Next, you will need to download the latest raspbian release image. Make a directory to use, and then grab it

mkdir ~/qemu_vms
cd ~/qemu_vms

You also need a kernel:


XEC Design maintains a qemu kernel with the ARMhf patches already applied, so use that, or build your own if you prefer.

You will need to extract the zip archive that you just downloaded, and you should be left with something like:

~/qemu_vms$ ls
2014-06-20-wheezy-raspbian.img kernel-qemu

which means you are ready to start doing cool stuff! (If you are reading this later, the .img filename has probably changed, so adjust the commands accordingly!)

Let's boot this thing up!

qemu-system-arm -kernel kernel-qemu -cpu arm1176 -m 256 -M versatilepb -no-reboot -serial stdio -append "root=/dev/sda2 panic=1 rootfstype=ext4 rw init=/bin/bash" -hda 2014-06-20-wheezy-raspbian.img

which should start up qemu and drop you straight into a root shell (we passed init=/bin/bash, so no login is needed yet; on normal boots the default credentials are user: pi, pass: raspberry). Have a cookie for getting this far.

Now, not everything can be emulated by qemu, so you need to stop the Pi-specific libcofi_rpi library from being preloaded. Edit /etc/ like this

nano /etc/
# Comment out the libcofi_rpi object

Now you need to create a new udev rules file (this is a new file!) and add the following to it:

KERNEL=="sda", SYMLINK+="mmcblk0"
KERNEL=="sda?", SYMLINK+="mmcblk0p%n"
KERNEL=="sda2", SYMLINK+="root"
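These rules just alias qemu's SCSI-style disk names to the SD-card names Raspbian expects. To make the naming scheme concrete, here is a hypothetical Python sketch of the substitution udev performs (this only illustrates the mapping; it is not udev itself, and the function name is mine):

```python
# Illustration only: mimic the mmcblk0 symlinks the udev rules above create.
def qemu_symlinks(kernel_name):
    """Return the symlink names udev would add for a given kernel device."""
    links = []
    if kernel_name == "sda":
        links.append("mmcblk0")                      # whole disk
    elif kernel_name.startswith("sda") and kernel_name[3:].isdigit():
        links.append("mmcblk0p" + kernel_name[3:])   # %n = partition number
    if kernel_name == "sda2":
        links.append("root")                         # rootfs alias
    return links

print(qemu_symlinks("sda"))    # whole-disk alias
print(qemu_symlinks("sda2"))   # partition 2 is also the root filesystem
```

So tools that look for /dev/mmcblk0p2 (as they would on real Pi hardware) find qemu's /dev/sda2 instead.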

Now you should halt/shutdown the system, and prepare for your first real boot!

Boot up again with

qemu-system-arm -kernel kernel-qemu -cpu arm1176 -m 256 -M versatilepb -no-reboot -serial stdio -append "root=/dev/sda2 panic=1 rootfstype=ext4 rw" -hda 2014-06-20-wheezy-raspbian.img

Do a df -h and notice with horror that you have almost no space to work with!

Resizing the image “disk” is pretty easy though.

First close down the emulator again, then

qemu-img resize 2014-06-20-wheezy-raspbian.img +4G

This will grow the image by 4GB, to roughly 7GB in this case (add more if you like…), which is plenty of space and still fits onto a relatively cheap 8GB SD card.

Now boot up your emulator again and do:

sudo ln -snf mmcblk0p2 /dev/root
sudo raspi-config

Choose the first option to expand the root filesystem, and it will tell you to reboot. Great, once everything is halted, manually restart your emulator and do another df -h. SURPRISE! It now looks like this:

Filesystem      Size  Used Avail Use% Mounted on
rootfs          6.6G  2.1G  4.2G  33% /
/dev/root       6.6G  2.1G  4.2G  33% /
devtmpfs        125M     0  125M   0% /dev
tmpfs            25M  204K   25M   1% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            50M     0   50M   0% /run/shm
/dev/sda1        56M  9.5M   47M  17% /boot

You are done! Great job!

Have fun!

Making a temperature monitor with Spark Core, and a bit of PHP

I have recently started making biltong at home, and coincidentally also received a Spark Core dev kit. A quick and simple biltong temperature monitor seemed like a good starter project, so I set out to do just that.

I grabbed a wiring diagram as a starter, straight from the Spark docs, and built a simple Spark Core temperature sensor (thermometer).




As soon as your sensor is built, you can then proceed to the Spark Online IDE and flash some code to it

#include <math.h>

// -----------------
// Read temperature
// -----------------

// Create a variable that will store the temperature value
int temperature = 0;
double voltage = 0.0;
double tempinC = 0.0;

char vStr[10];
char tStr[10];

void setup()
{
  // Register a Spark variable here
  Spark.variable("temperature", &temperature, INT);
  Spark.variable("voltage", &voltage, DOUBLE);
  Spark.variable("tempinC", &tempinC, DOUBLE);

  // Connect the temperature sensor to A7 and configure it to be an input
  pinMode(A7, INPUT);
}

void loop()
{
  // Keep reading the temperature so when we make an API
  // call to read its value, we have the latest one
  temperature = analogRead(A7);
  voltage = (temperature * 3.3)/4095;
  tempinC = (voltage - 0.5)*100;
}

/*
 * The API request will look something like this:
 * GET /v1/devices/{DEVICE_ID}/temperature
 * # Core ID is 0123456789abcdef
 * # Your access token is 123412341234
 * curl -G \
 * -d access_token=123412341234
 */
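The arithmetic in loop() is easy to sanity-check off-device. Here is the same conversion as a small Python sketch (the constants 3.3, 4095, 0.5 and 100 are taken straight from the code above; the function name is mine):

```python
def adc_to_celsius(reading):
    """Convert a raw 12-bit Spark Core ADC reading to degrees Celsius."""
    voltage = (reading * 3.3) / 4095   # 3.3 V reference, 4095 = full scale
    return (voltage - 0.5) * 100       # 0.5 V offset, 10 mV per degree C

# A reading of 0 pins the sensor at -50 C; a reading of 1241 is about 50 C.
print(adc_to_celsius(0))
print(round(adc_to_celsius(1241), 1))
```

Handy for checking that the numbers coming back over the API are sane.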

You should then be able to test it out by sending some HTTP requests to it. Generally I would use curl, but I am doing some Win8.1 dev and didn’t want to bother with it, so I simply used a REST client to fiddle with my Spark Core.

To get set up, you need a copy of your access_token and deviceID from your Spark Core. Both are available in your online IDE sidebar.

Set the access_token parameter, plus whichever variable name you registered in your code. In the above example that is “tempinC”, “voltage”, or “temperature”. We will mostly be concentrating on tempinC though.

Set your URL with your device ID as part of the path, as in the comment block above. You may now execute the request, and it should return some JSON from your Spark Core!

Right, now that we know that it works and is sending us back a valid temperature, we can proceed with a bit of PHP.

Data is not really useful until you do something with it, so let’s build a very quick and VERY dirty PHP/MySQL app to house our data. PLEASE NOTE that this code is awful, so make it better…

First up, the index.php file:

// Get cURL resource
$curl = curl_init();
// Set some options - we are passing in a useragent too here
curl_setopt_array($curl, array(
    CURLOPT_RETURNTRANSFER => 1,
    CURLOPT_URL => '',   // your Spark variable URL goes here
    CURLOPT_USERAGENT => 'Biltong monitor'
));
// Send the request & save response to $resp
$resp = curl_exec($curl);
// Close request to clear up some resources
curl_close($curl);

$data = json_decode($resp);
$temp = $data->result;
echo "<h3>Current temperature is: " . $temp . " C</h3>";

// MySQL stuff... sigh
$dbhost = 'localhost:3036';
$dbuser = 'username';
$dbpass = 'password';
$conn = mysql_connect($dbhost, $dbuser, $dbpass);
if(! $conn )
  die('Could not connect: ' . mysql_error());
$sql = 'INSERT INTO biltong '.
       '(temperature, humidity, currtime) '.
       'VALUES ( '.$temp.', "0", NOW() )';

$retval = mysql_query( $sql, $conn );
if(! $retval )
  die('Could not enter data: ' . mysql_error());
//echo "Entered data successfully\n";

// check out the nifty graph...
echo '<h3>Temperatures in the Biltong maker over time</h3> <img src="graph.php" />';

This basically grabs the data by making a cURL request, then adds it to a MySQL database. Dead simple.
Next we want the file that this file refers to (graph.php)

include('phpgraphlib.php');

$graph = new PHPGraphLib(1100,700); 
$dataArray = array();

$dbhost = 'localhost:3036';
$dbuser = 'username';
$dbpass = 'password';

$link = mysql_connect($dbhost, $dbuser, $dbpass)
   or die('Could not connect: ' . mysql_error());
mysql_select_db('sparkdata') or die('Could not select database');

//get data from database
$sql = "SELECT temperature, id, currtime FROM biltong LIMIT 150";
$result = mysql_query($sql) or die('Query failed: ' . mysql_error());
if ($result) {
  while ($row = mysql_fetch_assoc($result)) {
      $temperature = $row["temperature"];
      $count = $row["currtime"];
      //add to data array
      $dataArray[$count] = $temperature;
  }
}

//configure graph and render it
$graph->setGradient("lime", "green");
$graph->addData($dataArray);
$graph->createGraph();

Done! You will also need PHPGraphLib for the graphing.

Now, whenever a request is sent to the index file, it will grab the latest reading and graph it. You can also put the curl request in a cron job to collect data at regular intervals.

As I say, this is a simple example, but demonstrates how easy it is to start building some very cool sensors with Spark Core!

Hardware projects of the week

As an avid backer of quite a few interesting Kickstarter projects, I have early access to a number of new technologies. Two projects in particular are Bluetooth LE based, which we believe will be a multi-million-dollar industry in the next couple of years, and one deals with wearable computing, which is set to explode.

The first project that we would like to talk about is Spark, a programmable networked core that can be made to report data from numerous sensors via HTTP. With the dev kit, we get a number of useful sensors out of the box, as well as a high-voltage relay to control appliances in your home via REST method calls. In a simple project, this means you could turn on your coffee machine from your phone on the way home from work, and have a fresh pot brewed as you walk in the door. There are obviously numerous other applications, which we will be exploring in the coming weeks. Spark Cores can also make use of the Shield Shield adapter, which will allow you to add on any Arduino-compatible shield to expand capabilities.

Next up is the MetaWear dev board from MbientLab. MetaWear is a tiny dev board that can be used to power wearable computing devices, and which reports back to your phone via Bluetooth LE. It can be used to quickly create fitness bands or any other wearable device. It is also relatively inexpensive and already has APIs for Android and iOS, as well as a few sample apps. MetaWear can make use of any I2C-compliant add-on, so the applications are almost limitless!

The third product that we would like to bring to your attention is the PowerUp 3.0 device.
It is sold as a toy, and pairs a Bluetooth LE receiver with your phone to create a remote-controlled paper aeroplane. The interesting part of this device is that we can see many interesting applications for it, both as a toy and otherwise!

Another Bluetooth device that we are currently working on, with the Raspberry Pi as a back end, will allow us to transmit data of all sorts via a cheap commodity Bluetooth transmitter. This is very similar in nature to an Apple iBeacon, and we are imagining a world where these things are attached to billboards at busy traffic intersections. The user will be able to receive updates on entertainment schedules, interact with sports games, or download advertising clips with special offers while they wait in traffic. Utilising a Wi-Fi breakout, we can then also create networks of users with information at your fingertips!

QEMU Linux/MIPSel emulation

This tutorial will assume that you have a running QEMU environment and that it works. These are old notes that I am simply capturing here as a reference, so things may have changed regarding links etc. Please exercise some caution with copying and pasting!

First step is to create a disk image to use with our new OS:

qemu-img create -f qcow hda.img 10G 

This will hold all the files related to our MIPSel machine.

Now go and grab the kernel image:


Now you should be able to start the installation process with

qemu-system-mipsel -M mips -kernel vmlinux-2.6.18-6-qemu -initrd initrd.gz -hda hda.img -append "root=/dev/ram console=ttyS0" -nographic 

Once you have gone through the debian installer, you can boot your new system with

qemu-system-mipsel -M mips -kernel vmlinux-2.6.18-6-qemu -hda hda.img -append "root=/dev/hda1 console=ttyS0" -nographic 

Using OpenCV for great customer service

OpenCV is an Open Source Computer Vision library that can be used in a variety of applications. There are a few wrappers for it that will expose the OpenCV API in a number of languages, but we will look at the Python wrapper in this post.

One application that I was thinking could be done very quickly and easily, would be to use facial recognition to look up a customer before servicing them. This can easily be achieved using a simple cheap webcam mounted at the entrance to a service centre that captures people’s faces as they enter the building. This can then be used to look up against a database of images to identify the customer and all their details immediately on the service centre agent’s terminal. If a customer is a new customer, the agent could then capture the info for next time.

Privacy issues aside, this should be relatively easy to implement.
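Before the detection code, it is worth being clear about the lookup step: once a face has been detected and encoded as a feature vector, identifying the customer is just a nearest-neighbour search against stored vectors. A hypothetical sketch (the toy 3-number "encodings", names, and threshold are all made up; a real system would use proper OpenCV descriptors):

```python
def identify_customer(encoding, database, threshold=0.6):
    """Return the name of the closest stored face, or None for a new customer.

    `encoding` and the database values are feature vectors; real systems
    would use OpenCV face descriptors, not toy 3-vectors.
    """
    best_name, best_dist = None, float("inf")
    for name, stored in database.items():
        # Euclidean distance between the two encodings
        dist = sum((a - b) ** 2 for a, b in zip(encoding, stored)) ** 0.5
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

db = {"alice": [0.1, 0.2, 0.3], "bob": [0.9, 0.8, 0.7]}
print(identify_customer([0.12, 0.19, 0.31], db))  # close to alice
print(identify_customer([5.0, 5.0, 5.0], db))     # unknown -> new customer
```

The detection part, below, is what feeds such a lookup.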

import sys
import cv2.cv as cv
from optparse import OptionParser

# Parameters for haar detection
# From the API:
# The default parameters (scale_factor=2, min_neighbors=3, flags=0) are tuned
# for accurate yet slow object detection. For a faster operation on real video
# images the settings are:
# scale_factor=1.2, min_neighbors=2, flags=CV_HAAR_DO_CANNY_PRUNING,
# min_size=<minimum possible face size>

min_size = (20, 20)
image_scale = 2
haar_scale = 1.2
min_neighbors = 2
haar_flags = 0

def detect_and_draw(img, cascade):
    # allocate temporary images
    gray = cv.CreateImage((img.width,img.height), 8, 1)
    small_img = cv.CreateImage((cv.Round(img.width / image_scale),
                   cv.Round (img.height / image_scale)), 8, 1)

    # convert color input image to grayscale
    cv.CvtColor(img, gray, cv.CV_BGR2GRAY)

    # scale input image for faster processing
    cv.Resize(gray, small_img, cv.CV_INTER_LINEAR)

    cv.EqualizeHist(small_img, small_img)

    if cascade:
        t = cv.GetTickCount()
        faces = cv.HaarDetectObjects(small_img, cascade, cv.CreateMemStorage(0),
                                     haar_scale, min_neighbors, haar_flags, min_size)
        t = cv.GetTickCount() - t
        print "detection time = %gms" % (t/(cv.GetTickFrequency()*1000.))
        if faces:
            for ((x, y, w, h), n) in faces:
                # the input to cv.HaarDetectObjects was resized, so scale the
                # bounding box of each face and convert it to two CvPoints
                pt1 = (int(x * image_scale), int(y * image_scale))
                pt2 = (int((x + w) * image_scale), int((y + h) * image_scale))
                cv.Rectangle(img, pt1, pt2, cv.RGB(255, 0, 0), 3, 8, 0)

    cv.ShowImage("result", img)

if __name__ == '__main__':

    parser = OptionParser(usage = "usage: %prog [options] [filename|camera_index]")
    parser.add_option("-c", "--cascade", action="store", dest="cascade", type="str", help="Haar cascade file, default %default", default = "../data/haarcascades/haarcascade_frontalface_alt.xml")
    (options, args) = parser.parse_args()

    cascade = cv.Load(options.cascade)

    if len(args) != 1:
        parser.print_help()
        sys.exit(1)
    input_name = args[0]
    if input_name.isdigit():
        capture = cv.CreateCameraCapture(int(input_name))
    else:
        capture = None

    cv.NamedWindow("result", 1)

    if capture:
        frame_copy = None
        while True:
            frame = cv.QueryFrame(capture)
            if not frame:
                cv.WaitKey(0)
                break
            if not frame_copy:
                frame_copy = cv.CreateImage((frame.width,frame.height),
                                            cv.IPL_DEPTH_8U, frame.nChannels)
            if frame.origin == cv.IPL_ORIGIN_TL:
                cv.Copy(frame, frame_copy)
            else:
                cv.Flip(frame, frame_copy, 0)

            detect_and_draw(frame_copy, cascade)

            if cv.WaitKey(10) >= 0:
                break
    else:
        image = cv.LoadImage(input_name, 1)
        detect_and_draw(image, cascade)
        cv.WaitKey(0)


So as you can see, by using the bundled OpenCV Haar detection XML documents for frontal face detection, we are almost there already! Try it with:

python ./ -c /usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_alt.xml 0

Where 0 is the index of the camera you wish to use.
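The one subtlety in detect_and_draw() above is that detection runs on a half-size image, so every box has to be scaled back up before drawing. Isolated as a small Python sketch (the function name is mine; the arithmetic is exactly what the loop above does):

```python
def scale_box(box, image_scale):
    """Map an (x, y, w, h) box found on a downscaled image back to the
    original image, returning the two corner points cv.Rectangle needs."""
    x, y, w, h = box
    pt1 = (int(x * image_scale), int(y * image_scale))
    pt2 = (int((x + w) * image_scale), int((y + h) * image_scale))
    return pt1, pt2

# A face found at (10, 20) sized 30x40 on the half-size image
# sits at (20, 40)-(80, 120) on the full-size frame.
print(scale_box((10, 20, 30, 40), 2))
```

Getting this wrong draws rectangles in the top-left quadrant only, which is a common gotcha with downscaled detection.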

An introduction to Apache Spark

What is Apache Spark?

Apache Spark is a fast and general engine for large-scale data processing.

Related documents

You can find the latest Spark documentation, including a programming guide, on the project webpage.


Spark needs to be downloaded and installed on your local machine. Spark requires Scala 2.10. The project is built using Simple Build Tool (SBT). If SBT is installed, the system version of sbt is used; otherwise the build will attempt to download it automatically. To build Spark and its example programs, run:

./sbt/sbt assembly

Once you’ve built Spark, the easiest way to start using it is the Scala shell:

./bin/spark-shell
Or, for the Python API, the Python shell (`./bin/pyspark`).

Spark also comes with several sample programs in the `examples` directory.
To run one of them, use `./bin/run-example <class> <params>`. For example:

./bin/run-example org.apache.spark.examples.SparkLR local[2]

will run the Logistic Regression example locally on 2 CPUs.

Each of the example programs prints usage help if no params are given.

All of the Spark samples take a `<master>` parameter that is the cluster URL
to connect to. This can be a mesos:// or spark:// URL, or “local” to run
locally with one thread, or “local[N]” to run locally with N threads.


Testing first requires building Spark. Once Spark is built, tests
can be run using:

`./sbt/sbt test`

Hadoop versions

Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported
storage systems. Because the protocols have changed in different versions of
Hadoop, you must build Spark against the same version that your cluster runs.
You can change the version by setting the `SPARK_HADOOP_VERSION` environment
when building Spark.

For Apache Hadoop versions 1.x, Cloudera CDH MRv1, and other Hadoop
versions without YARN, use:

# Apache Hadoop 1.2.1
$ SPARK_HADOOP_VERSION=1.2.1 sbt/sbt assembly

# Cloudera CDH 4.2.0 with MapReduce v1
$ SPARK_HADOOP_VERSION=2.0.0-mr1-cdh4.2.0 sbt/sbt assembly

For Apache Hadoop 2.2.X, 2.1.X, 2.0.X, 0.23.x, Cloudera CDH MRv2, and other Hadoop versions
with YARN, also set `SPARK_YARN=true`:

# Apache Hadoop 2.0.5-alpha
$ SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt/sbt assembly

# Cloudera CDH 4.2.0 with MapReduce v2
$ SPARK_HADOOP_VERSION=2.0.0-cdh4.2.0 SPARK_YARN=true sbt/sbt assembly

# Apache Hadoop 2.2.X and newer
$ SPARK_HADOOP_VERSION=2.2.0 SPARK_YARN=true sbt/sbt assembly

When developing a Spark application, specify the Hadoop version by adding the
"hadoop-client" artifact to your project's dependencies. For example, if you're
using Hadoop 1.2.1 and build your application using SBT, add this entry:

"org.apache.hadoop" % "hadoop-client" % "1.2.1"

If your project is built with Maven, add this to your POM file's `<dependencies>` section:

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>1.2.1</version>
</dependency>
Spark could be very well suited for more in depth data mining from social streams like Twitter/Facebook
Spark has an advanced DAG execution engine that supports cyclic data flow and in-memory computing.
Write applications quickly in Java, Scala or Python.
Spark offers over 80 high-level operators that make it easy to build parallel apps. And you can use it interactively from the Scala and Python shells.
Combine SQL, streaming, and complex analytics.
Spark powers a stack of high-level tools including Shark for SQL, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these frameworks seamlessly in the same application.
Spark can run on Hadoop 2's YARN cluster manager, and can read any existing Hadoop data.
If you have a Hadoop 2 cluster, you can run Spark without any installation needed. Otherwise, Spark is easy to run standalone or on EC2 or Mesos. It can read from HDFS, HBase, Cassandra, and any Hadoop data source.


Once Spark is built, open an interactive Scala shell with

./bin/spark-shell
You can then start working with the engine

We will do a quick analysis of an apache2 log file (access.log)

// Load the file up for analysis
val textFile = sc.textFile("/var/log/apache2/access.log")
// Count the number of lines in the file
textFile.count()
// Display the first line of the file
textFile.first()
// Find the lines containing PHP
val linesWithPHP = textFile.filter(line => line.contains("PHP"))
// Count the lines with PHP
linesWithPHP.count()
// Do the classic MapReduce WordCount example
val wordCounts = textFile.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey((a, b) => a + b)
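If the Scala is unfamiliar, the same flatMap/map/reduceByKey pipeline can be written out longhand, here in plain Python over a list of log lines, just to show the shape of the computation (the sample log lines are invented):

```python
from collections import Counter

def word_counts(lines):
    """flatMap(split) -> map(word, 1) -> reduceByKey((a, b) => a + b), in plain Python."""
    pairs = [(word, 1) for line in lines for word in line.split(" ")]  # flatMap + map
    counts = Counter()
    for word, n in pairs:   # reduceByKey: sum the 1s per key
        counts[word] += n
    return dict(counts)

log = ["GET /index.php HTTP/1.1", "GET /about.php HTTP/1.1"]
print(word_counts(log)["GET"])       # each line contributes one GET
print(word_counts(log)["HTTP/1.1"])
```

Spark runs the same shape of computation, but distributed across partitions of the file.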

Apps should be run as either Maven-packaged Java apps or Scala apps. Please refer to the documentation for a HOWTO.


Overall a good product, but some Hadoop expertise is required for successful setup and operation.
A mid-level or senior developer is required.
Some Scala language expertise is advantageous.

Customising F-Droid client (Android repo)

This is part 2 of a 2 part series about rolling out your own “Play Store” based on F-Droid for Android devices.

Server set up instructions can be found at

The first thing that we need to do is to get hold of the source for the client at Gitorious.

Do a git clone of the code with

git clone fdroid-client

which will give you a checkout in the fdroid-client directory.
Directly after the checkout completes, cd into your new fdroid-client directory and grab the git submodules with:

git submodule update --init

and then we need to prepare for our ant builds with:

./ # This runs 'android update' on the libs and the main project
ant clean release

Which will then build the stock release version of the client. Now, the point here is to do some customising right? OK, so let’s start with that then…

The first step here is to make sure that you can install your new package manager client alongside the stock F-Droid client. In order to do this, edit the renaming script in tools/ and change the parameters that you need to (package name and identifier):

FDROID_NAME=${2:-Your FDroid}

Change these values to something closer to what you want i.e.

FDROID_NAME=${2:-Pauls FDroid Client}

Execute the package name changer script and get yourself a cookie for getting this far!

Now would be a good time to import the code into an IDE like Eclipse or Android Studio. Note, however, that due to the way this project is built and laid out, it will probably not compile, and Eclipse will moan about it. No matter though: you are only modifying code in Eclipse, and will rebuild via the command line once done.

One thing that you really should change is to edit res/values/default-repo.xml and modify it to at least include your new server repo that you set up in part 1.
You probably also want to change no-trans.xml and the “about” clause to reflect your own installations too.
If you are rolling out a complete solution, don’t forget to check out strings.xml as well, so that your application makes sense!

Once that is done, you are free to look through the code to modify whatever else you think may be useful, but I think that the above should do for most situations. If you want to change the launcher icon etc., then create a new Android icon project in Eclipse and do it that way, as it is far easier than doing it by hand.

Once all your changes are done, rebuild and ship your brand new client!

ant clean emma debug install [test]


Android AsyncTask

I have seen with many apps that the main thread is sometimes (ab)used by doing too many asynchronous tasks in it. This is very easily resolved by making use of the AsyncTask class in android.os.AsyncTask.

A simple example. Let us assume that you want to call a set of WordPress REST URL’s and get a bunch of JSON back to work with. This is actually really simple and non-blocking if you do it properly with AsyncTask.

The key here is that you need to subclass AsyncTask for each of your calls. The subclass will need to override at least one method:

doInBackground(Params ...)

and in some cases, you will probably want to override one or more of

onPreExecute()
onProgressUpdate(Progress...)
onPostExecute(Result)

I think that the method names are sufficient description for what they do in this case.
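AsyncTask is Android-specific, but the shape of the pattern is general: heavy work on a background thread, result consumed afterwards. As a language-neutral illustration, here is the same shape using Python's concurrent.futures (the function names mirror the Android callbacks but are otherwise made up, as is the "post-" payload):

```python
from concurrent.futures import ThreadPoolExecutor

def do_in_background(*params):
    """The slow work: runs on a worker thread, like doInBackground(Params...)."""
    return ["post-" + p for p in params]

def on_post_execute(result):
    """Consumes the finished result, like onPostExecute(Result)."""
    return "done: " + ", ".join(result)

with ThreadPoolExecutor(max_workers=1) as pool:
    # roughly: new PostListTask().execute("492", "491")
    future = pool.submit(do_in_background, "492", "491")
    print(on_post_execute(future.result()))  # main thread picks the result up
```

The point is simply that the caller never blocks inside the worker; it hands off parameters and collects a typed result later.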

With that in mind, let’s create our “Task” class:

package com.myapp.tasks;

import java.util.ArrayList;
import java.util.List;

import com.myapp.PostLister;
import com.myapp.model.PostInfo;

import android.os.AsyncTask;
import android.util.Log;

public class PostListTask extends AsyncTask<String, Integer, List<PostInfo>> {

	private static final String TAG = "PostListTask";

	@Override
	protected List<PostInfo> doInBackground(String... params) {
		List<PostInfo> posts = new ArrayList<PostInfo>();
		for (String urlid : params) {
			if (isCancelled()) {
				Log.e(TAG, "User cancelled listing");
				break;
			}
			PostLister postlist = new PostLister();
			PostInfo post = postlist.getURL(urlid);
			posts.add(post);
		}
		Log.i(TAG, "Post list done..");
		return posts;
	}
}

Once that is done, we need to fill in the missing classes

package com.myapp;

import org.json.JSONException;
import org.json.JSONObject;

import com.myapp.model.PostInfo;
import com.myapp.util.JSONParser;

public class PostLister {

	public static final String url="";
	public static final String urlid = "";
	public static final String TAG_CONTENT = "content";
	public static final String TAG_TITLE = "title";
	public static final String TAG_LINK = "link";
	public static final String TAG_ID = "ID";
	public static final String TAG_SLUG = "slug";
	public static final String TAG_DATE = "date";

	JSONParser jParser = new JSONParser();

	public PostInfo getURL(String urlid) {
		JSONObject json = jParser.getJSONFromUrlByGet(url + urlid);
		try {
			String str_content = json.getString(TAG_CONTENT);
			String str_title = json.getString(TAG_TITLE);
			String str_link = json.getString(TAG_LINK);
			String str_ID = json.getString(TAG_ID);
			String str_slug = json.getString(TAG_SLUG);
			String str_date = json.getString(TAG_DATE);
			return new PostInfo(str_content, str_title, str_link, str_ID, str_slug, str_date);
		} catch (JSONException e) {
			// whatever...
		}
		return null;
	}
}

As you can see, we are only using a few fields from the JSON produced by the WP-JSON plugin, but you get the gist right?
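The parsing that PostLister does can be checked in isolation before wiring it into the app. A quick Python sketch with a canned response (the field names match the TAG_ constants above; the values and URL are invented):

```python
import json

FIELDS = ["content", "title", "link", "ID", "slug", "date"]

def parse_post(body):
    """Pull the same six fields PostLister reads out of a WP-JSON payload."""
    data = json.loads(body)
    return {field: data[field] for field in FIELDS}

canned = json.dumps({
    "ID": "492", "title": "Hello", "content": "<p>Hi</p>",
    "link": "", "slug": "hello", "date": "2014-05-01",
    "author": "paul",   # extra fields are simply ignored
})
print(parse_post(canned)["slug"])
```

A missing field raises a KeyError here, just as json.getString() throws JSONException in the Java above.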

Now for a model

package com.myapp.model;

public class PostInfo {
	private String content;
	private String title;
	private String link;
	private String id;
	private String slug;
	private String date;

	public PostInfo(String content, String title, String link, String id,
			String slug, String date) {
		this.content = content;
		this.title = title; = link; = id;
		this.slug = slug; = date;
	}

	public String getContent() {
		return content;
	}

	public void setContent(String content) {
		this.content = content;
	}

	public String getTitle() {
		return title;
	}

	public void setTitle(String title) {
		this.title = title;
	}

	public String getLink() {
		return link;
	}

	public void setLink(String link) { = link;
	}

	public String getId() {
		return id;
	}

	public void setId(String id) { = id;
	}

	public String getSlug() {
		return slug;
	}

	public void setSlug(String slug) {
		this.slug = slug;
	}

	public String getDate() {
		return date;
	}

	public void setDate(String date) { = date;
	}
}
Pretty standard stuff.

Once we are ready to fire it off, we simply invoke the AsyncTask with

AsyncTask<String, Integer, List<PostInfo>> posts = new PostListTask().execute("492", "491");

Which you can pretty much do whatever you want with:

try {
	List<PostInfo> res = posts.get();
	for (PostInfo post : res) {
		String content = post.getContent();
		Log.d(TAG, content);
	}
} catch (InterruptedException e) {
	e.printStackTrace();
} catch (ExecutionException e) {
	e.printStackTrace();
}

Where the String array we send is a list of the posts that we want to retrieve.

Dead simple, fast and efficient! Yay!

Android Volley – HTTP async Swiss Army Knife

This serves as a post to help you get started with Android Volley. Volley is used for all sorts of HTTP requests, and supports a whole bang of cool features that will make your life way easier.

It is a relatively simple API to implement, and allows request queuing, which comes in very useful.

The code below is a simple example to make a request to this blog and get a JSON response back, parse it and display it in a simple TextView widget on the device.


import org.json.JSONObject;

import android.os.Bundle;
import android.util.Log;
import android.view.LayoutInflater;
import android.view.Menu;
import android.view.MenuItem;
import android.view.View;
import android.view.ViewGroup;
import android.widget.TextView;

import com.android.volley.Request;
import com.android.volley.RequestQueue;
import com.android.volley.Response;
import com.android.volley.VolleyError;
import com.android.volley.toolbox.JsonObjectRequest;
import com.android.volley.toolbox.Volley;

public class MainActivity extends ActionBarActivity {

    private TextView txtDisplay;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        if (savedInstanceState == null) {
            getSupportFragmentManager().beginTransaction()
                    .add(, new PlaceholderFragment())
                    .commit();
        }
        txtDisplay = (TextView) findViewById(;

        RequestQueue queue = Volley.newRequestQueue(this);
        String url = "";   // your JSON endpoint goes here

        JsonObjectRequest jsObjRequest = new JsonObjectRequest(Request.Method.GET, url, null,
                new Response.Listener<JSONObject>() {

            @Override
            public void onResponse(JSONObject response) {
                String txt = response.toString();
                Log.i("volleytest", txt);
                txtDisplay.setText(txt);
            }
        }, new Response.ErrorListener() {

            @Override
            public void onErrorResponse(VolleyError error) {
                // TODO Auto-generated method stub
            }
        });

        // Nothing happens until the request is added to the queue
        queue.add(jsObjRequest);
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        // Inflate the menu; this adds items to the action bar if it is present.
        getMenuInflater().inflate(, menu);
        return true;
    }

    @Override
    public boolean onOptionsItemSelected(MenuItem item) {
        // Handle action bar item clicks here. The action bar will
        // automatically handle clicks on the Home/Up button, so long
        // as you specify a parent activity in AndroidManifest.xml.
        int id = item.getItemId();
        if (id == {
            return true;
        }
        return super.onOptionsItemSelected(item);
    }

    /**
     * A placeholder fragment containing a simple view.
     */
    public static class PlaceholderFragment extends Fragment {

        public PlaceholderFragment() {
        }

        @Override
        public View onCreateView(LayoutInflater inflater, ViewGroup container,
                Bundle savedInstanceState) {
            View rootView = inflater.inflate(R.layout.fragment_main, container, false);
            return rootView;
        }
    }
}

Maven for Android

This is a quick howto on setting up and using Maven for your Android projects. Maven Android integration is not yet excellent, but is coming along nicely, and if you are familiar with Maven projects, will make managing your dependencies a lot easier!

I will be working with Ubuntu, but your set up will be similar. Just adapt paths etc for your setup as you need.

Ubuntu ships with Maven2, but we need Maven-3.0.5 at least in order to work with Android. I prefer to install maven manually because you don’t need to stress about pinning and other such nonsense from a binary distro. I also usually install stuff in /opt/ so that is where we will be working from.

The first thing that you need to do, is to grab the maven distribution file. I used 3.2.1, but anything later than 3.0.5 should work OK.


Extract the archive and copy it to /opt/

sudo cp -R apache-maven-3.2.1 /opt/

Great! First steps completed! You are doing well so far!
I am assuming that you have a semi-recent JDK installed, in our case we need JDK 6+. Check for your JDK version with

java -version

If all comes back OK, we are ready to proceed.

Get the path to your JDK now with

locate bin/java | grep jdk

and make a note of it. Mine is at


Edit your bashrc file (located at /etc/bash.bashrc on Ubuntu) and add the following parameters (modify according to your paths) to the end of the file:

export ANDROID_HOME=/opt/android-sdk-linux
export M3_HOME=/opt/apache-maven-3.2.1
export M3=$M3_HOME/bin
export PATH=$M3:$PATH
export JAVA_HOME=/opt/java7/jdk1.7.0_45
export PATH=$JAVA_HOME/bin:$PATH:/opt/java7/jdk1.7.0_45

Load up your new bashrc file with

source /etc/bash.bashrc

and check that everything is OK.
You should now be able to test your brand new Maven3 installation with

mvn -version

If that seems OK, you are ready to install the Android m2e connector in Eclipse. Please note that this works best in Eclipse Juno or later (I use Kepler).

Open up Eclipse, and choose to install software from the Eclipse Marketplace. This is found in Help -> Eclipse Marketplace. Do a search for “android m2e” and install the Android configurator for M2E 0.4.3 connector. It will go ahead and resolve some dependencies for you and install.

You should now be able to generate a new Android project in Eclipse with New Project -> Maven -> new Maven project; in the archetype selection, look in the Android catalogue and choose the android quickstart project.

If this fails, you can also generate a new project on the command line and simply import it to Eclipse.

mvn archetype:generate \
  -DarchetypeGroupId=de.akquinet.android.archetypes \
  -DarchetypeArtifactId=android-quickstart \
  -DarchetypeVersion=1.0.11
Once all of that is complete, dev carries on as usual. Remember that dependencies now live in your pom.xml document, so check that out first and ensure that you have some basics in there:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns=""
         xmlns:xsi=""
         xsi:schemaLocation="">

	<dependencies>
		<dependency>
			<artifactId>android</artifactId>
		</dependency>
		<!-- Androlog is a logging and reporting library for Android -->
		<dependency>
			<groupId>de.akquinet.android.androlog</groupId>
			<artifactId>androlog</artifactId>
			<version>1.0.5</version>
		</dependency>
		<dependency>
			<groupId>joda-time</groupId>
			<artifactId>joda-time</artifactId>
			<version>2.3</version>
		</dependency>
		<dependency>
			<groupId>com.actionbarsherlock</groupId>
			<artifactId>actionbarsherlock</artifactId>
			<version>4.4.0</version>
			<type>apklib</type>
		</dependency>
	</dependencies>
</project>
As you can see, I have included some other stuff, like ActionBarSherlock and JodaTime, as they are generally really useful, and it may save you some time just copying the dependency information!

Have fun!