Blog Archive

This page contains every post on the site, going back to antiquity. If you only want to see what’s recent, please check out the main blog page.

First Day in Seattle

I’ve now been in Seattle for almost a full day. The flight was great, except that I couldn’t sleep properly. I’ve got a car (a Toyota Corolla) and an apartment (200m from work :-/), but I’ve managed to bust the internet already and have to wait until tomorrow to get it fixed. Luckily there’s free WiFi in the guest lounge as well, so that’s where I am right now.

Interesting points from day one:

Doggie care area at LAX

A fenced-off area where you can look after your dog at LAX airport. Why would you take your dog to LAX? I especially like that they have put a hydrant in there, just in case your pooch is picky about what he pisses on.

First Amazon Fresh Truck

I saw an Amazon Fresh (the place where I’ll be working) delivery truck while I was out getting a feel for the neighbourhood. Good to see that presence!

Bicycle polo! Played in the park in Capitol Hill.

And finally, I saw a guy shouting at himself “Don’t forget me, motherfucker!” over and over. At least there’s one thing that reminds me of Melbourne :)

Categories: me and seattle

Moving On

Happy New Year!

With the formalities now over, I turn my attention to my favourite subject: Me. I jet off tomorrow for the United States, where I will be taking up a position with Amazon, working on Amazon Fresh. Melissa will join me in a few months. We’re very excited by the opportunity to live and work in a new place, and to work with one of the world’s top tech companies.

If you have any suggestions for things to do in Seattle or weekend trips that we can take, I’m all ears :)

I’ll miss Melbourne, and I’ll miss the friends I have made here. I don’t think the move is permanent. We are planning on returning to Australia in a few years with a bunch of awesome stories :) ¡Viva la aventura!

Categories: me, employment, news and amazon

Decrypting data encrypted by openssl on Java/Android

I’m posting this little snippet up because I spent ages trying to work out how to do this, and thought that other Googlers might benefit from it.

I’ve got an Android application that stores some commercially valuable information on the SD card, and we don’t want our competitors simply walking away with it. The security doesn’t need to be too tight, so I’m happy with a password-based encryption scheme that keeps the password in the application’s code. If a competitor really wants to get at the data, they can, but it will stop casual theft.

To do this, I have written a simple little script in Ruby to generate and then encrypt the data files. I didn’t want to write the encryption code twice, so I needed something that would work across both Java and Ruby. OpenSSL to the rescue! OpenSSL provides a simple command line tool to encrypt files. So, to encrypt the files, I used this snippet:

	system("openssl enc -aes-128-cbc -in encrypted#{i+1}.txt -out encrypted#{i+1}.dat -pass pass:ThisIsMyPassword")

I could also have used Ruby’s built-in OpenSSL library, but I couldn’t work out how to get it to save the salt/IV in the file along with the encrypted data, so I just left it as a command line call. Perhaps somebody can suggest an improvement.

Android provides a pretty comprehensive encryption library through the BouncyCastle Project. The documentation isn’t very helpful though, and it took me a while to work out the correct combination of parameters to decrypt it on the Java side. Here’s the result:

	Security.addProvider(new BouncyCastleProvider());

	byte[] encrypted = read(args[0]);  // Whole encrypted file.
	String password = "ThisIsMyPassword";

	Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding", "BC");

	// OpenSSL puts "Salted__" then the 8-byte salt at the start of the file.  We simply copy it out.
	byte[] salt = new byte[8];
	System.arraycopy(encrypted, 8, salt, 0, 8);
	SecretKeyFactory fact = SecretKeyFactory.getInstance("PBEWITHMD5AND128BITAES-CBC-OPENSSL", "BC");
	c.init(Cipher.DECRYPT_MODE, fact.generateSecret(new PBEKeySpec(password.toCharArray(), salt, 100)));

	// Decrypt the rest of the byte array (after stripping off the salt)
	byte[] data = c.doFinal(encrypted, 16, encrypted.length-16);

The key line is the SecretKeyFactory.getInstance() call, which sets up the cipher to accept the OpenSSL data. Once we’ve got that, it’s trivial. Nothing to it really, once you know the magic string!
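For completeness, the snippet above assumes a read() helper that slurps the whole encrypted file into a byte array. The name and signature are my own; a minimal version might look like this:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ReadHelper {
    // Hypothetical helper assumed by the decryption snippet above:
    // read an entire file into a byte array (one-liner on Java 7+).
    static byte[] read(String path) throws IOException {
        return Files.readAllBytes(Paths.get(path));
    }
}
```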

Categories: Encryption, Java and Snippet

CSS media selectors for mobile web - making it work on Android

I’m not a very good graphics artist/web designer. That doesn’t stop me trying though, as you’ll see through this site, and a plethora of others. I’m getting better all the time, but I’m still not up to a decent grade. This article is about one of those “getting better” moments, and I thought I’d share.

One of the things I like my sites to do is to behave differently on different sized screens, mobiles in particular. Often this is wrapped into the concept known as Responsive Web Design, but really it’s all about making your sites work on teeny tiny screens. So far, I’ve been doing this using CSS media selectors, which deliver different CSS styles to the browser depending on helpful information it provides, such as screen size. Although some people say that CSS media selectors are fool’s gold, I’m more inclined to think that they are sufficient for the majority of cases.

The basic principle is that when you create your CSS, you include some additional hints to the browser as to when it should be applied. Below is the usually accepted method, in which you place the following HTML tags in the <head> of your document:

<link rel="stylesheet" href="/css/style.css?v=2">
<link rel="stylesheet" media="only screen and (max-device-width: 480px)" href="/css/mobile.css" type="text/css" />

The idea is that you have your standard CSS first, which applies to the desktop version. The mobile version, which is only matched when the browser passes the media test, will override any settings that need to change the site to work properly on mobiles. This is the most basic media selector. You can also look at the orientation (landscape/portrait), the media it uses (printer, screen) and so on. I use this on this site, and in much greater depth on the jewellist’s portfolio site.

The problem is, they’re inconsistent, and they don’t always work. Many devices lie about these properties to try to give users the right experience. In the example above, we’ve specified that the style will match on devices with a maximum width of 480px. An iPhone 4 has a maximum resolution much wider than 480 pixels, thanks to its lovely retina display. In order to give users a consistent experience, however, Apple has made the iPhone lie about its resolution. Some browsers (notably Windows Phone 7’s) don’t even respect the media attribute. This means that developers who want to use these queries have to understand the quirks of the devices rather than write meaningful tests.

In an even more dangerous turn, there’s been a bug in the Android implementation of WebKit (its built-in browser) which means it sometimes decides it won’t apply the style sheet until you refresh the browser. This has infuriated me for some time now, but I have found a workaround, thanks to this lovely post.

To make it work correctly on Android, you have to add an additional query parameter, which also matches any screens based upon their window width, in addition to the device width.

<link rel="stylesheet" href="/css/style.css?v=2">
<link media="handheld, only screen and (max-width: 480px), only screen and (max-device-width: 854px)" href="/css/mobile.css" type="text/css" rel="stylesheet" />

This is a hack to be sure, but it does work. It also shows another challenge with this method. Notice how I’ve changed the max-device-width parameter to 854px? That’s because we want it to work in landscape orientation too, on devices that don’t lie about their resolution like the iPhone does; some Android phones are 854px along their long edge. This works great for current devices, but what about newer devices with higher resolutions? The next generation of phones is likely to have 1280x720px screens, so these pixel-based approaches will once again fall down (unless the phones lie).
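The same media tests can also live inside a single stylesheet rather than on the link tag. A minimal sketch (the .sidebar class is invented for illustration):

```css
/* Desktop default */
.sidebar { width: 300px; float: right; }

/* Applied only when the media test matches, overriding the rules above */
@media only screen and (max-width: 480px), only screen and (max-device-width: 854px) {
    .sidebar { width: auto; float: none; }
}
```

This saves a second HTTP request, at the cost of mobile users downloading the desktop rules too.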

All of this seems to push me in the direction of using User-Agent matching to deliver different content, rather than using CSS selectors. Most big sites seem to take this approach, having a desktop site and a separate mobile site. This also gives users the option to switch over to the desktop version if they choose. It makes sense, but requires more server-side logic, which kind of goes against my static website philosophy. I’ll need to look more into that…

Categories: HTML, CSS and web design

Slides for presentation

As I am presenting to MobSIG on Tuesday about Android widget programming, I thought I should put together a slide pack. The session will be mostly coding, so there’s not too much to it, but here are the slides anyway. I’ve decided to do the slide pack totally in HTML5. The skeleton for the slides was shamelessly stolen from HTML5 Rocks. I hope that in the future I’ll be able to tweak the presentation a bit more to make it work really well, and fit in with the theme of the site.

Please note that as the slideshow is written in HTML5, it won’t work with older versions of browsers, in particular Internet Explorer. If you really want to see the presentation and you use IE, let me know.

The links that are important for the presentation are also included below:

Categories: presentation, android, widgets and experiment

A new layout for a new job

As I mentioned the other day, I am moving on from Unico and becoming an independent consultant (a grandiose title I give myself… Really I’m a contractor). To coincide with this, I am also revamping the layout of my site, with additional information about the sort of work that I do, community engagement, and how to get in touch. The site should now render better on mobile devices as well.

Is it an improvement on what was there before? Maybe. Hopefully I’ll be tweaking it more in the coming weeks.

On Monday, I start my first contract, working at Transtech.

Categories: news, site and job.

Mob SIG Presentation 2nd Aug

I have been asked to present a talk to the Melbourne Mob SIG, to be held at the Telstra Conference Centre on the 2nd of August. It will be a technical presentation, where I open up Eclipse and show people how widgets work on the Android platform. This could be considered a basic topic, but it is one of the most requested on the Build Mobile site, so I thought it’d be a good choice.

If you have any other suggestions for presentations, or articles that I could write for BuildMobile, please let me know.


Telstra Conference Centre Conference Room 2 Level 1/242 Exhibition Street, Melbourne


Tuesday 2nd August 2011


5:30PM for a 6PM Start


Ever wanted to know how to program a widget for Android?

Join us as we go through a live example of how to create one using the Android Development Kit and Eclipse.

We’ll go through the basics of creating a widget, then move on to updating its contents on a frequent basis, and finish up with how to show complex graphics which otherwise wouldn’t be allowed.

More details can be found at the ACS web site.

Categories: news, android and presentation

Resignation from Unico

After four years working at Unico Computer Systems, I handed in my resignation this morning. I have been working in the enterprise space performing integration architecture and development work. I have found, however, that my own interests have shifted more and more towards working with mobile devices, and this has become a real passion of mine.

As a result, I have now accepted a contract in the logistics industry, working with mobile devices to make truckies’ lives easier. In the long term, I hope to grow this work into a business operating across the mobility industry, on the (rapidly disappearing) boundary between enterprise systems and mobility solutions.

I’d like to thank Unico for its support over the last four years. They gave me a job when I moved to Melbourne, and I have learned a lot in the period that I have been here. Furthermore, I have made a lot of friends, whom I hope to be able to continue to work with in the future.

And now on to the exciting future!

Categories: news and employment

Apple's iCloud: awwww

So the speculation was wrong. It’s not terribly surprising. Requiring people to buy a new device to sync their media would have been an impost, and Apple have a brand new shiny data center which will do the job nicely. It’s a shame, however; I was hoping for something a bit more personal. Wishful thinking!

Opinions seem to be that whilst it’s a welcome addition and will integrate seamlessly, it’s hardly anything revolutionary: just well executed.

Categories: mobile agents, apple and rumour

Speculation on Apple's iCloud: Magical game changing mobile agents around the corner?

Rumours are circulating before WWDC that Apple will supply its iCloud service using a new version of its Time Capsule router-cum-backup device. The idea is that the new version will contain a processor similar to that found in the iPhone and iPad, and that it will run iOS and apps.

The ability to sync my apps and music using iCloud sounds great, but I’m beginning to get excited about the other opportunities that a device like this would bring. Would it finally provide the impetus for people to have a personal server in their homes? People have long tried to get this sort of capability going, originally through their own PCs but more recently through PVRs, SmartMeter devices and appliances such as the PogoPlug. None have had the business model or the customer appeal to really gain a foothold with consumers. If this rumour turns out to be real, it smells like it has the possibility to work.

I have ranted recently about the possibilities of having collaborative apps (agents) running on “the cloud”. One of the advantages I postulate is the ability to take ownership of your data again. The idea is that each person has their own stable of applications/servers that can do things in the background but don’t have to run directly on the person’s phone. I think an app-enabled Time Capsule provides exactly the right sort of platform to perform these tasks.

Hopefully, the way it works is that when you install an app, companion apps automatically get installed on the Time Capsule, or there is a way an app can direct companion apps to be installed. The app on the Time Capsule does all the repetitive, background and possibly battery-draining tasks, so that the mobile devices don’t have to. If they can make the cooperative nature of the apps seamless, then this will be a game-changing, “magical” move by Apple.

So does Google have an answer to this brave new world? Possibly. We know that Google are consolidating their Android and GoogleTV products to form a single code base, so the GoogleTV could provide these capabilities. That’s not supposed to happen until the end of the year, though, and the rumours claim that Apple’s solution is ready to go now. I think Apple might really have a game changer on their hands here, and I’m excited by the prospect. What remains to be seen is how open they make it for 3rd party developers. If it is limited to Apple-provided applications, then it won’t be anywhere near as exciting. Perhaps the next must-have gadget will be a Time Capsule?

For those interested in the topic, I suggest looking at the video of Steve Jobs’ thoughts on the matter from 1997 in the rumour article.

Categories: mobile agents, apple and rumour

Classloading from Google App Engine's data store

I have been playing around with the idea of mobile processing agents lately, and to do this I need to execute code that has been uploaded to my processing environment, rather than traditionally “deployed” code. Java has this ability built in, via ClassLoaders: at any point, a Java application can create classes from byte arrays. I wanted to do this on Google App Engine, but GAE does not support the traditional methods of loading classes. There are no Files or direct access to URLs in GAE. The only real storage available is the Data Store, which provides an HTTP-based file upload and storage service through its own proprietary interface.

Creating a ClassLoader which took advantage of the Data Store wasn’t difficult however. I made use of the (currently experimental) File Store API to access the code once it has been uploaded to GAE’s datastore using the file upload facility. Once I’d gotten that sorted out, it was simply a matter of writing the classloader to read in the data.

To make things a bit more interesting, I’ve also added an AppLoader, which can set up a classloader for a contained application with extra JARs and a manifest, similar in structure to a WAR file. For example, suppose we uploaded a JAR file to GAE with the following structure:

	classes/         compiled classes for the agent
	lib/             additional JAR files
	agent-inf.yaml   the agent manifest

The AppLoader would construct a classloader with the contents of the classes folder, plus any JAR files in the lib dir. The agent-inf.yaml file specifies which class is the “main” class for the agent, which will be used for execution.

There is one limitation to the classloader I’ve written. Sometimes Java code refers to non-class files stored on the classpath, usually referred to as resources. They are loaded by the Class.getResource() and Class.getResourceAsStream() methods; getResource() returns a URL which points to the resource asked for. The problem is that GAE does not support URLs properly, and certainly doesn’t allow you to register your own URL handlers. As a result, getResource() does not work in my classloader, which may break some libraries. getResourceAsStream() does work, however.
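To illustrate the shape of the loader, here is a simplified, in-memory sketch (not the actual Data Store-backed code): a ClassLoader that serves classes and resources out of a map of byte arrays, which is what the GAE loader looks like once the bytes have been fetched from the Data Store. Note how getResourceAsStream() can be overridden directly, while getResource() would need a custom URL handler:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.Map;

// Sketch only: entries maps paths like "com/example/Agent.class" or
// "agent-inf.yaml" to their raw bytes.
public class MapClassLoader extends ClassLoader {
    private final Map<String, byte[]> entries;

    public MapClassLoader(Map<String, byte[]> entries, ClassLoader parent) {
        super(parent);
        this.entries = entries;
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        byte[] bytes = entries.get(name.replace('.', '/') + ".class");
        if (bytes == null) throw new ClassNotFoundException(name);
        return defineClass(name, bytes, 0, bytes.length);
    }

    @Override
    public InputStream getResourceAsStream(String name) {
        byte[] bytes = entries.get(name);
        if (bytes != null) return new ByteArrayInputStream(bytes);
        return super.getResourceAsStream(name);  // fall back to the parent
    }
}
```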

One final warning about isolation and ClassLoaders. Letting people upload arbitrary code to your app server is a dangerous undertaking. Make sure you understand who is uploading classes, and make sure they can’t do anything they shouldn’t be doing. In my agent code, I want each of the agents to be completely isolated from each other. The challenge is that each one, if it has access to the DataStore API, will be able to overwrite the others, which isn’t cool. I’m still thinking about ways to fix that.

In case anyone else is looking for a similar solution, I thought I’d post it here. The code for this example is available at my GitHub repo.

Now, back to writing that mobile processing agent system…

Categories: google, appengine, java, code example, technical and not for you richard

BuildMobile

A mini-site dedicated to the building of mobile applications in all their forms, named BuildMobile, has just launched. Inexplicably, they have chosen to feature my little application NodeDroid as their first featured app, and even more inexplicably they have asked me to contribute some stories as well. The featured app piece is up now, and my article will be posted in a few days.

There’s nothing like having a wider audience (not to mention a deadline) to inspire you to write, so hopefully I’ll be a bit more regular with my postings there than I have been here :)

Categories: nodedroid, site and news

NodeDroid source code released

When I started writing NodeDroid, I did it to learn about writing mobile applications and their associated technologies. One of those technologies was advertising. I added AdMob advertising to the bottom of the application, and waited for the megabucks to roll in :). In the little over 3 months that it’s been available, it has netted a grand total of US$13.88. That’s enough to cover hosting costs, but that’s about it.

A number of people have asked if I can support their provider. I have variously been asked to support Optus Cable, Exetel, Telstra, and a bunch of others. In order for me to do this properly, I’d need to have access to an account to be able to perform testing. People have quite understandably been reluctant to share this information with me, which means we are stuck with the providers that NodeDroid already has.

I’d like NodeDroid to support as many providers as possible. I’d like it to be better than Consume, the iPhone application. I don’t think I can do that on my own. As a result, I’ve decided to open source NodeDroid, in the hope that other geeks out there will write support for their own providers and then contribute the code back to the application.

I can’t ask people to work for free on something I could make money from (even if it is a paltry amount), so I have also decided to remove all advertising from the application. I have just uploaded a new version with no AdMob.

The code for NodeDroid can now be found on GitHub at the NodeDroid repository. If you are a developer, please feel free to check out the code and get in contact with me. If you have a bug you want to report, you can raise issues there too.

Categories: nodedroid, news, android, opensource and releasenote

Showing a post tree using jekyll

I’ve been playing with Jekyll to create my website over the past few days. Primarily I’m doing it to play with Ruby, but it’s also nice to have a new website :)

Jekyll produces a static site, but does so using templates and markup. It’s remarkably easy to set up a site and give it the look and feel you want. Any dynamic capability can be provided by external services (e.g. I use Disqus for comments) and JavaScript.

One thing I wanted was a post tree in the sidebar. By post tree, I mean a tree showing all my previous posts, broken down by year and month. Have a look to the right here and you should see it. Jekyll doesn’t provide the ability to do this out of the box, but it is very easy to extend, so I thought I’d write a plugin.

Here’s the code I wrote. The first file is the Ruby plugin, which goes in _plugins/postsintree.rb. Its responsibility is to set up the data in a format that is easy for the template to output:

	module Jekyll
	  # Extends Site to have a field that gives you a map of posts by year.
	  class Site
	    def postsbyyear
	      # Create a tree of the posts by year and then month
	      tree = {}
	      self.posts.each do |post|
	        year = post.date.year
	        month = post.date.month
	        if tree[year] == nil
	          tree[year] = { "number" => year,
	                         "count"  => 0,
	                         "months" => {} }
	        end
	        if tree[year]["months"][month] == nil
	          tree[year]["months"][month] = { "number" => month,
	                                          "count"  => 0,
	                                          "posts"  => [] }
	        end
	        tree[year]["months"][month]["posts"] << post
	      end

	      # Turn the tree into sorted arrays, so it is easier to interpret
	      # in liquid
	      years = tree.values.sort { |x, y| y["number"] <=> x["number"] }

	      # Calculate counts of posts and sort each of the months as well
	      years.each do |year|
	        year["months"] = year["months"].values.sort { |x, y| y["number"] <=> x["number"] }

	        year["months"].each do |month|
	          month["count"] = month["posts"].size
	          month["posts"] = month["posts"].sort { |x, y| x.date <=> y.date }
	        end

	        sum = 0
	        year["months"].each { |month| sum += month["count"] }
	        year["count"] = sum
	      end
	      return years
	    end

	    # Redefine site_payload to include our posts by year.  This is ugly
	    # but I don't know how else to do this without changing the jekyll code
	    # itself.  #rubynoob
	    def site_payload
	      {"site" => self.config.merge({
	          "time"        => self.time,
	          "posts"       => self.posts.sort { |a, b| b <=> a },
	          "pages"       => self.pages,
	          "html_pages"  => self.pages.reject { |page| !page.html? },
	          "categories"  => post_attr_hash('categories'),
	          "tags"        => post_attr_hash('tags'),
	          "postsbyyear" => self.postsbyyear })}
	    end
	  end
	end

Now that we have the data in the right format, it’s just a matter of altering our page template to show the tree. This is facilitated by the following HTML:

    <ul id="posttree">
	{% for year in site.postsbyyear %}
	<li>{{ year.number }} ({{ year.count }})
	    <ul>
	    {% for month in year.months %}
	    <li>{{ month.number }} ({{ month.count }})
		<ul>
		{% for post in month.posts %}
		<li><a href="{{ post.url }}">{{ post.title }}</a></li>
		{% endfor %}
		</ul>
	    </li>
	    {% endfor %}
	    </ul>
	</li>
	{% endfor %}
    </ul>
The list is then translated into a clickable, expandable tree using the jQuery Treeview plugin.


All of this is available in the Git repository of my website, available at GitHub.

Categories: site, html and jekyll

Blog Migration

Earlier this month, I posted a new website for my hobby company. As part of this change, I have decided to host my personal blog here as well, and my blog will henceforth redirect to the new address. All URLs will continue to retrieve the correct posts, but with the new layout. There shouldn’t be any disruption to service, but who knows. RSS feeds should automatically switch over as well. I apologise if it re-posts everything I’ve ever done…

So why have I done this? Blogger is a great platform, but I’ve been experimenting with Jekyll as a website creation method, and thought I’d try it out for a bit. If I don’t like it, I think I’ll move to WordPress, which I’ve been mulling over for a while, so perhaps this is a simple way of testing the waters. In the end, I did it so that I had something new to play with. Yeah, I know… I’m sad…

I appreciate your patience.

Categories: site and news

Small update & Merry Christmas

I have just posted a new version of NodeDroid, which incorporates only one small change:

  • Bugfix for issue caused by interrupting a running fetch. Future fetches may not have worked correctly.

I’m currently working on a widget, which I hope to release before the new year. In the meantime, I wish you a Merry Christmas!

Categories: nodedroid, releasenote and news

New Layout

This site is primarily intended to let me play around with new stuff. As part of this, I’m playing with Jekyll, a simple, Ruby-based generator that creates static sites programmatically.

Some advantages:

  • Simpler hosting requirements. All you need is a web server like Apache.
  • If you want dynamic capabilities, they can be added via JavaScript.
  • Will be able to survive a slashdotting (not that that is ever likely to happen to me).
  • Not hackable through attacks on the CMS product, because there isn’t one.
  • Uses Markdown, which makes writing the content much easier. You can always fall back to HTML when you need it though.
  • Want to cross-post? That’s easy. Simply symlink between your different blogs and re-publish.

This also gives me the opportunity to replace the old design for 8bitcloud with a new one. I really enjoy doing web site designs. What a pity I’m no good at it :)

Big props to Dlimiter, who showed me Jekyll.

Categories: nodedroid, site, html and news

New version of NodeDroid, now with Optus

I'm the author of a usage tracking utility on Android called NodeDroid. Originally it only supported Internode, but I am now expanding it to support other ISPs and telcos. The first one I want to try out is Optus. I've just uploaded the new version of the application, and I'm hoping some of you would like to try it out.

The application works by screen-scraping the Optus web site and presenting the data in a better format, along with usage graphs and the normal sort of thing you would expect from a usage meter. In the future, I hope to add the ability to see bills, as well as provide warnings when your quota is being reached. That sort of thing.

Because there are a large number of plans, and they all have different rules, I imagine I will need to take a large number of factors into account. At the moment, I've only been able to work from my own plan, which is an Extreme Cap. It should work quite well for other caps, but it will probably break on prepaid and other account types.

If you would like to participate, you must already have a login to the Optus account page. If my application can't read your usage, it would be useful for me to see what your usage page on the Optus portal looks like, with all the usage lines expanded. If you can provide a screenshot or the page source, that would be beneficial (remember to black out the phone numbers first though!!).

To join in, please fetch the application from the market by searching for NodeDroid, or through AppBrain.

I will also be expanding the program to support other providers in the near future. Vodafone Mobile prepaid broadband will be easiest for me, as I have an account, as will Telstra prepaid, but if you are extra keen on getting something, please send me a whim.

I would appreciate any feedback you can give, either here or on my website. The website will be updated soon with details of the new beta. I have also started a thread on Whirlpool where people can discuss it (or not...)

Categories: brucecoopernet, optus, nodedroid, 8bitcloud, news, android and usage meter

Do it on the device, or do it on the server?

This weekend, I thought I'd extend my little Android usage tracking application to work on more ISPs than the one (Internode) it already supports. As my phone is (sadly) on Optus, I thought I'd write a scraper for that.

Internode was easy to add, as they have a documented API for accessing usage counters, which is ideal for computer consumption. Optus, on the other hand, only provide a web application interface to check usage, necessitating the use of a web scraper. A web scraper is an application that pretends to be a user on a web page: it makes all the appropriate calls (and fudges any JavaScript calls that are necessary) to get the results it needs, then parses the (often non-compliant) HTML that comes back to extract data. I have no problem doing this, and have done so on several occasions before, but it is not easy work and can be quite fiddly.  Parsing the HTML is often the most difficult part, as it is usually not well-formed XML, so you can't just use a DOM parser.

In short order, I had a working prototype that used JTidy to clean up the HTML into something I could parse properly, and then XPath to extract the elements of the document I needed.  It works great, except that the clean-up and parsing into a DOM takes a really long time on a resource-constrained device such as a phone.  It takes about 20 seconds to clean up and parse the document on my development emulator, which is too slow for a good mobile experience, especially if you have to parse multiple documents, as I do.

So now I'm faced with a choice.  I could write a man-in-the-middle service: the phone sends the user's login details to it, it performs the parsing on the user's behalf, and it sends the results on to the phone.  But there are a number of drawbacks to this:
  1. It means the user is sending their login details to a 3rd party, which is a security no-no.
  2. It introduces a single point of failure into the equation.  If my app gets popular the middle man service could get slammed.  If Optus decides that they don't like what I'm doing, they could easily block it.
  3. It means I need to host a service, which means additional expense.
I don't want to do this, so what I'm left with is a more hacky solution: using regular expressions to find what I want in the HTML documents retrieved from the provider.  This will take me longer to code, will be more prone to failure, and is just generally nasty.  I'm not happy.  Devices these days are very powerful, and there should be no need for intermediary servers to help with processing.
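As a sketch of what the regex approach looks like (the markup, class name and helper here are invented for illustration; the real Optus page will differ, which is exactly why this approach is fragile):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Pull a single figure straight out of raw HTML, skipping the expensive
// tidy-then-DOM step entirely.  One pattern per field we care about.
public class UsageScraper {
    private static final Pattern USAGE =
        Pattern.compile("<td class=\"usage\">\\s*([\\d.]+)\\s*MB\\s*</td>");

    // Returns the usage in MB, or null if the page no longer matches
    // (i.e. the provider changed their layout and the scraper needs updating).
    public static Double extractUsageMb(String html) {
        Matcher m = USAGE.matcher(html);
        if (m.find()) {
            return Double.parseDouble(m.group(1));
        }
        return null;
    }
}
```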

Of course, this would all be much easier if the providers published web services interfaces to their data, rather than just web applications.  This has been the mantra of SOA and internet-connected businesses since the terms were coined.  It doesn't even cost them much more to do, and would lead to better designed web applications, but that's a subject for another rant.  Optus doesn't do this because there's no economic incentive for them to do so.  They gain nothing directly from publishing a usable interface, so they can't be bothered... bah!

To be fair to Optus, they aren't the only ones that don't get it.  As far as I can tell, no ISP or telco other than Internode provides any decent interface.
Categories: brucecoopernet, rant, computers, nodedroid, programming, 8bitcloud, news and android

I wrote an android application

I purchased an Android phone a few weeks ago. Part of the reason that I got it was that I wanted to see what the differences were between Android and iPhone.  This extends out to how to program them as well, so I had to write an application, just like I did for iPhone.

Last night, I released my little application.  It's a usage meter for my ISP, Internode. I deliberately chose something quite simple so that I could cut my teeth on the platform, and I must say that I'm very impressed. I found it very easy to write the application for Android, especially because it uses the same tools and libraries that I am used to using in my day job.  Another difference I notice is that it is much more obvious what is going on inside an Android application.  The documentation describes things clearly and gives you full visibility.  Apple, on the other hand, likes to keep its platform a little more mysterious.  There are plenty of good documents on how to do things, but you still get the impression that there's something going on under the hood that you don't quite understand.  Perhaps it's just that I'm more familiar with the Java ecosystem.

If you'd like to have a look at it, check out its site at 8-bit cloud.  I've also been playing with the web site, and I will be improving it and hopefully making it more fun.

PS: I was tempted to call the application NodePony after a recent meme that Internode has got going with its cute little plush toys.  In the end, I decided that I shouldn't try to cash in on it.  It would spoil the meme...
Categories: brucecoopernet, computers, nodedroid, announcement, Internode, programming, nodepony, 8bitcloud, news, android and usage meter

What do I do now?

I've been trying to work out what I want to do with my career recently.  When I've been speaking with mentors and colleagues, the first question that comes up is quite reasonably always "What is it you want to do?".  I have to admit that this question has had me stumped for some time.  For the last three years, I've been working as a principal consultant in the system integration space for medium-to-large businesses, and I have not found it satisfying.  I do the work well enough (some say even exemplary), but I haven't been able to summon the passion I can put into work that allows me to excel.  I find myself compelled to seek more responsibility and higher pay, but it's just not working for me.

Yesterday, I went to see Stephen Fry speak.  It was a very entertaining journey through Stephen's life, and the things that make him tick.  He called it his personal WWW: the things that inspire and drive him that, for the contrivance of his topic, all start with the letter W.  One of the Ws that he spoke of was Writing.  He doesn't just like to write.  It's not that he is paid to write.  He is compelled to write.  His love of language and how it can move people is his passion, his reason for getting up in the morning.

For many people, working in IT is just a job.  They have other things in their life that they call their passions, and they just come to the office to get a pay cheque.  I have many interests and hobbies, but my central passion is programming computers.  I view it as a creative pursuit.  Not art per se, but a craft that can produce elegant and useful things.  When I come home at night, more often than not I end up still camped in front of a computer.  So for me, computers are my passion, my raison d'être.  As with all passions, this gives me an edge.

It goes beyond being a code monkey, however. Sometimes when I discuss the topic, people say "oh, you'd get bored just doing programming", because I've worked as a solution architect for so long, and I can operate at a business level as well as a technical one.  This is true, but not for the reason people think.  I'm a bloody good programmer, and nothing gives me more pleasure than solving a technical challenge, but that's not enough to make something that works.  Creating modern software is a complex task, and doing it correctly requires a mix of communication, leadership and technical expertise. I'd arrogantly like to think that I can perform all of these tasks.  I like the challenge of setting up simple, streamlined processes and teams that get jobs done.  It'd be nice if I got to do a bit of hands-on work, because a true leader is a doer as well as a manager, but it's more of an oversight, training and review function.

Which technology I work with is largely irrelevant.  There are some that are more interesting to work with than others, and some I have more experience with than others, but all are interesting to me.  Because technology is my passion, I pick new techniques and languages up very easily.  My company recently had a need to develop an iPhone application.  It just so happened I had been playing with iPhone programming in my spare time.  Where other engineers would have said "no, I haven't done that before" or asked to go on training, I relished the opportunity to pick up something new and got stuck in.  I trained a small team and led it to success.

So much of consulting consists of going into organisations that do not operate effectively.  They are rendered moribund by internal politics, people of limited talent in positions of power, and the sheer difficulty of organising a large workforce (usually too large) that is quite often afraid of change.  This is the reason that they bring in outside expert help.  These organisations get the job done, quite often a boring one, and they turn a profit.  But they do not produce exceptional results.

There are people that are very good at going into dysfunctional situations and turning them around.  I have immense admiration for these people because they do a very difficult and often thankless task.  They turn failure into a bare pass.  They can take pride in the fact that they put the hard yards in to get a result, but the end product is rarely anything to rave about.

I have performed project recovery work successfully before, but I am rarely given the opportunity to effect any real change.  I'm brought in as a technical specialist, often very late in the piece when many of the decisions have been made, to solve a particular problem, often within ludicrous constraints that don't make any sense in the context of producing something that works.  I've lost count of how many times I've been asked to work on a "platform project" to put in place tools and procedures for an organisation that does not have any function to put on that platform yet. How pointless.

So here's what I want from my job:  I want to create good software that does useful things. To do this, I want the freedom to be a good chief nerd (or lead engineer, or architect, or whatever else you want to call it).  I know what I'm doing.  Just get out of my way and let me do it.

My challenge now is to engineer the opportunity to do this.  To quote a review of yesterday's performance that appeared in The Age: "Fry proves that we can take power over, and joy in, the role that is ourselves."  Time to prove all the outrageous claims that I've just made :)
Categories: brucecoopernet, rant, computers and work

Samsung Galaxy S - 3 weeks in

My ageing iPhone 3G didn't take the update to iOS 4.0 very well. It still worked, but it seemed clunky and slow after the upgrade. Six weeks ago, I (conveniently) took this as a sign that I needed a new phone, and started considering my options. I could wait for the new shiny iPhone 4, or I could look at Android. It has been reported that Android has caught up to Apple now, both in terms of hardware and software. And of course, the thing everyone touted as the biggest advantage of moving to Android was freedom. Being a developer and nerd, this appealed to me.

So I went out and bought a Samsung Galaxy S I9000 (now there's a mouthful of a name...). It's got a big Super AMOLED 4-inch screen, a very fast processor (reportedly faster than that in any other current phone), and came for free on a very reasonably priced Optus plan. I've had the phone for three weeks now, long enough for me to sort out any learning curves and kinks in the system, so I thought I'd post my impressions.

The biggest thing I noticed buying this phone was that it didn't "just work" like an iPhone does. It turns out that there was a performance problem which detracted from the new-phone feel that I expected. In order to get it working properly, I needed to go onto a bunch of technical forum sites and work out how to flash a new (leaked beta) firmware to fix the problem. Flashing was made even more difficult because the flashing tools were Windows-only, and they wouldn't work in a virtual machine on my Mac. In the end, I needed to drag an old clunker machine out of the cupboard to get flashing to work.

The headphone jack also uses a different configuration to iPhones.  This means that I can't use my existing Sennheiser headset on the phone.  If I do, the sound ends up muddy and the lyrics almost disappear.  The supplied headphones are ok, but I'd still prefer to use my aftermarket headset.  Apparently Samsung have used the same configuration as Nokia, which makes some sense, but it still pisses me off that I need to use an adapter to get my headphones working.

Now that I've got past those problems, I can start to see why this is such a great phone. I now get Twitter & Yammer notifications on the phone in an unobtrusive way, and there's no waiting for updates either.  On the iPhone, when there was a new message, a notification would appear, and when you clicked "view" it took you to the application, which then loaded the message (or whatever) from the internet.  This could take a few (up to 10) seconds.  On Android, it loads the message first, then shows you the notification.  No waiting!

There's also widgets on the home screen. If I don't like the default home screen that Samsung provides (TouchWiz 3.0), I can replace it with a number of open source alternatives that are much whizzier and glitzier. There's a cool little active background that makes the screen look like you can see through to a circuit board. There's just so much stuff!

Of course, there's also a downside to this. I installed a couple of different music players to try them out. When I pressed the "play music" button on my headset, all of them started at once :S. I worked out how to fix this, but it's another example of not "just working".  I also need to do a bit of research when I want an application, because there isn't just one way of doing things.

For work, there's exchange support built in, so it works just as well as an iPhone in that department.  There's also an office docs reader built in, and all the applications we are used to such as dropbox work as well, if not better than on the iPhone.

Battery life is okay.  Its taken a while to settle down into a pattern, but I'm now getting almost two days between charges, including moderate usage.  The firmware upgrade has been a big help here.

Finally, there's the looks. I have to admit that one reason for moving away from iPhone was that everyone has one now. Sadly, the I9000 doesn't help here, because it looks exactly like an iPhone 3GS.  They even copied the packaging, with the phone coming in a little black cardboard box.  It's so similar I'm surprised Apple didn't sue them. Oh well, it's a pretty vain reason to want to change.

The good:
  1. The screen is gorgeous.
  2. There's a lot of choice of applications
  3. Lots of storage - 16GB internal, which I have since augmented with a 32GB MicroSD card to give me a total of 48GB.
  4. SWYPE is a great text input method
  5. Google integration is great
  6. When Android 2.2 comes out, it will get much faster.

The bad:
  • Performance problem - now solved!
  • Doesn't like OS X very much
  • There's a lot of choice of applications (which cuts both ways, as the music player story shows)
  • The Games aren't as good.
  • Only available on Optus at the moment.  I would have preferred to stay on Telstra Prepaid

All in all, now that I've sorted out my performance problem, I'm very happy with my new phone. I like the choice that I get, and I'm willing to invest the time it takes to tweak the experience to my taste. In the end, I suppose that's the difference between the masses and the nerds: some are willing to be told the best way to do something, whilst others prefer to tread their own path (or at least the path shown to them by a bunch of other nerds on a forum).

Now, to work out how to program the thing!
Categories: brucecoopernet, geekery, review, phones and android

I'm giving a lecture on Cloud Computing

This post is basically just to try out embedding a google wave post in a blog post :) There is some news however.

I am giving a lecture to a Masters of IT class at Monash. This wave is the place where I am gathering information about what I will present. I will also provide access to the students during the class, if they wish to participate or continue the discussion at a later date.

Date and Time: Friday 4th June 2010
Location: Room H125, Monash University, Caulfield Campus

EDIT: Hmmm, it seems that Disqus, the service I use for managing comments on this site, includes the script that is used to view google wave. This means that there will be an unfortunate doubling of the content below.

Google Reader also strips off the google wave component, which makes sense considering the content is added by javascript. We wouldn't want nasty javascript content sneaking out there into reader land, would we?

Good to know.

Google wave frame removed as it attracts focus...
Categories: brucecoopernet, computers, monash, lecture, Presentation, me me me and cloud computing

Distributed social networking

With Facebook's recent stumbles in online privacy, a lot of people are now calling for distributed information-sharing systems, which let people interoperate between different servers and providers, or run their own if they so choose. The Diaspora project has been started by a bunch of grad students to do exactly that, and it looks like they've gone viral.  At the time of writing, they had raised over $160,000, when all they were looking for was $10,000.  Not bad for a first round of funding, especially considering all they've done so far is produce a video.

After talking about this at the pub on Friday (the pub is where I do most of my best theorising :)), a friend suggested that I look at the Friend of a Friend project, pointing out that distributed social networking has been around for ages.  Unfortunately, FOAF does not deal properly with security or webs of trust, so I feel that it is useless for what most people would want.

There's no technical impediment to stop distributed social networking from working.  In the end, it would be a much more secure, robust and scalable solution, and these are the original principles upon which the Internet was founded.  As we've seen a number of times, Twitter sometimes struggles with stability, and when it goes down, all of Twitter goes down.  The problem is that modern, slick systems take a lot of resources to put together.  Many people and companies don't view the internet as a way of speaking together in an interoperable fashion any more; they want to make money, which means not being interoperable.  Google, Yahoo, Twitter and Facebook provide these services so that they can make money off the information we give them.  We make a choice to trade our privacy for the service that we are provided.

I wish the Diaspora guys good luck.  The biggest hurdle they will face is getting the masses to care.  In order to get enough take-up of their solution, they will need to either convince your average user to host their own server, or find companies that are willing to host the service but not get access to any of the data. I can't think of any big company that would want to do that.  There's nothing in it for them if they can't mine all of your data.  Most ISPs will host email for you, but they do it as a method of lock-in: once you're on, it provides a reason to stay with that ISP rather than taking your email with you.  Besides, email was well entrenched before the rise of the ISP.  Every ISP had to provide email; it was just expected.  There's no reason to start hosting a social networking server unless everyone else does, which means you've got a chicken-and-egg problem.

It's something I've been thinking about a lot this weekend.  How do we make the general populace care enough about their personal data to protect it?  I don't know the answer.
Categories: brucecoopernet, social networking, facebook and privacy

Easy Exchange Email Extraction

Okay, that should be forwarding, but I wanted some alliteration :)

Some time ago, I posted a very technical approach to forwarding email properly using Microsoft Exchange, which is useful if you work at an organisation that uses it. At the time, I was aware that the steps involved were too technical for some people, so I didn't really expect too many people to take it up.

Some colleagues at work expressed some interest in using it themselves, but didn't want to go to the effort of setting up their own man-in-the-middle server to fix the emails. To combat this, and to make it easy enough for anyone to set up email forwarding for Exchange, I updated my forwarder to work for any email address. I now present this work to the public, in the form of the Crimson Cactus Exchange Redirector. If you would like to perform Exchange redirection easily, this might be of some use to you.

In fact, I set this up quite some time ago, but I've been too frightened to publicise it until now because I was concerned that it might become a source of spam or too much traffic on my web server. I've finally gotten up enough courage to try it out now, but please consider it a beta service. It may have to be turned off at short notice if it is abused.

Please give me your feedback if you find this service useful.

Categories: brucecoopernet, computers, crimson cactus, email and Exchange


I'm presenting a session at EJA tomorrow, entitled "Choose Your Cloud". I'll be comparing the different cloud vendors, with a particular focus on Google App Engine.

As an experiment, I've done my presentation in Prezi. I'm not sure if its cool or distracting. I'll check with my audience tomorrow to find out :)

Categories: brucecoopernet, computers, Presentation, amazon, me me me, cloud computing, google, microsoft and EJA

How predictable are you?

Ars Technica has posted about research into predicting your location based upon data gathered from mobile phone towers.  The research shows that it is possible to predict where you will be at any given time with 93% accuracy, even before additional information such as calendars is taken into consideration.  That is spectacularly good, and just goes to show how rich the information being gathered by our gadgets really is.  What this article makes me think about is how good adaptive user interfaces will be, once provided with the appropriate data.  I want it... now.
Categories: brucecoopernet, computers, phones and adaptive user interfaces

How I would design a forms workflow system for google wave

Yesterday, I posted a quick reply to a post on the Google Wave developer blog about creating a form-based workflow system for Google Wave.  I was quite busy at the time and I think in my haste I may have been a bit brusque.  By way of making amends, here is how I would set up such a system.

A forms workflow engine needs a number of components in order to operate correctly.  In particular:

  1. A way of specifying relationships between people.  The manager-employee relationship is probably the most important here, but there are others too (like who is in the HR department).
  2. A way of specifying what information is needed in the form.  What fields, how they are validated, that sort of thing.  Calculated fields would be important here.
  3. A way of specifying the workflow for the form, and working out which relationships to use and which form to use in that workflow.
  4. A way for a participant in a group to select which form he wants to fill in.  A particular person may be engaged in multiple organisations, so having a single form library to choose from isn't enough.
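To make those four components concrete, here's a rough sketch of the data model the bot would keep in its database. All of the names are mine, invented for illustration; none of this comes from the Wave APIs.

```java
import java.util.List;
import java.util.Map;

// Rough data model for the four pieces above (hypothetical names).

class RelationshipModel {
    Map<String, String> managerOf;       // employee -> manager
    Map<String, List<String>> groups;    // e.g. "HR" -> its members
}

class FormField {
    String name;
    String validationRegex;   // how the field is validated
    String formula;           // non-null for calculated fields
}

class FormModel {
    String title;
    List<FormField> fields;
}

class WorkflowStep {
    String formTitle;         // which form this step presents
    String routeVia;          // which relationship routes it, e.g. "manager"
}

class WorkflowModel {
    List<WorkflowStep> steps;
    RelationshipModel people; // the relationship model it is linked to
}
```

The point of the sketch is the linkage: a workflow references forms by title and routes steps via relationships, which is what lets the bot work out whose wave to add next.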
All of this could be managed in Google Wave, but it would not be trivial.  If set up correctly, it could work quite smoothly, however.  First, we need a bot that performs multiple functions, or multiple bots that co-operate to provide the functions listed above.  I think it would be more understandable for users to just have one bot, so let's use that model.  For the purposes of this discussion, I'll call it FormBot.

Next, we need a way of creating the models specified above.  An operator would create a wave, tag it with its function (so that the bot can tell what purpose the wave serves) and then add FormBot to the wave.  FormBot adds a gadget to the wave which allows the operator to edit the data.

For the relationship diagram, the gadget/bot combo would be able to work out which users they were dealing with by looking at the participants of the wave.  This effectively becomes your personnel database, in lieu of having it hooked up to an LDAP server.  The gadget would then allow the operator to drop people into relationships.  The robot would detect any changes, and then suck up the information into its own database (this bit is quite important, as it is how the bot, when processing an instance of a form, will be able to find the information it needs).

The Form & Workflow models are linked together in a one-to-one relationship, so they can be added on another wave, this time tagged with workflowmodel.  There would be a couple of gadgets to allow the models to be edited, which are then sucked into the database again.

So finally, how does the bot process an instance of a workflow form?  A user would have an extension installed on his client which adds an entry to the "new wave" menu which would say "New Form".  This would create a new wave and add FormBot.  FormBot would check its database to see what Form Waves it has which are linked to relationship models with the user as a participant.  It would then present these as a list to the user to allow him to choose which form he wanted to fill in.  The Bot would then create the form and allow the user to edit it and progress through the workflow.  

In summary, what we've done here is create a couple of settings waves that the bot uses to configure the application, and to allow it to provide different form libraries to different groups of people.  I think it's quite elegant and would work really well.  There'd be a lot of work to get it implemented though.  The gadgets to allow the editing of the models would be the key bit.  I'd love to build it, but I doubt I'll find enough time to do it myself.  I did create a proof of technology some time ago, but it wasn't configurable, nor did it hook up to a proper user database.
Categories: brucecoopernet, google wave, computers and workflow

Google wants a workflow engine/robot for wave too.

Google just posted on their Wave Developer Blog that their "wishlist" includes a way to process document workflow using Google Wave.  This reminds me of a post I wrote a while back on what Google Wave could be used for.  I did a proof of concept, but taking the concept to production would take too much effort for one guy in his spare time.  Probably the biggest barrier is integrating with HR systems: how does the system plug into your corporate HR system to work out who your manager is when you post?

Either way, it's good to see that other people see the same value in Wave that I do.
Categories: brucecoopernet, google wave and workflow

Toshiba announces 'digital secretary' functionality

Last week, I ranted about how our cellphones will start adapting to what we are doing based upon the information they can gather on our behaviour.  Right on cue, Toshiba have now announced that they are building exactly this technology, and it will be available by the end of the year.  I'm impressed.  I wonder if they will introduce it only into the Japanese market, or more widely on Android or something similar.
Categories: brucecoopernet, computers, phones, adaptive user interfaces and android

The time is now for inference engines in user interfaces

I've been thinking the last couple of days about the future of phone user interfaces, and I suppose the future of interfaces in general.  At the moment we have fairly static interfaces, with a scrolling list of applications and the occasional widget to tell us the weather forecast or whatnot.  We set it up how we like it, and that's it.  Some user interfaces (in particular, I'm thinking of Android's pages) provide different screens to cater for different use cases, but it is still a manual affair.

What I'm looking forward to is the day when my phone can infer things about what I'm doing.  Our phones (and our extended computer networks) know an incredible amount of information about us.  They know where we are and whether we are moving, and they can remember where we have been before and at what times.  They have our calendar, and they know what time it is now.  That is enough information to begin to infer things.

For example, I tend to finish work at roughly the same time every day.  When I finish, I walk to Flinders St train station and catch a train home.  During the day, I couldn't give a tinker's cuss what the train timetables look like, but when I walk out the door, all of a sudden I'm keen.  My phone, based upon my previous history of movement, detecting when I start moving away from the building I've been in for the last few hours, could easily infer that I am leaving, and that my most likely destination is home.  Wouldn't it be cool if it could alter its home page to show timetable information, because that's where it thinks I'm going?

We can get more sophisticated here by adding in additional information too.  If I throw my calendar into the equation, the destination guesser has more information available to it.  If I have an appointment at 5:30pm in a different part of town, it can logically infer that I'm not going home, but rather that I'm going to this location.  Instead of showing me the train timetables for my home line, it could show me a tram route to my meeting, or the location of the nearest taxi rank.  It could even choose which option to show me based upon the time until my meeting.  If I've left myself lots of time then I can take public transport, but if time is running short perhaps I should take a taxi.  My phone becomes a true digital assistant, rather than a window on to information that I have to instruct how to operate.
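Stripped of the calendar smarts, the core of such a destination guesser is just a frequency model over movement history. A toy sketch (class and method names are mine; a real assistant would also weigh in calendar entries and current position, as described above):

```java
import java.util.HashMap;
import java.util.Map;

public class DestinationGuesser {
    // hour-of-day -> (destination -> how often I've gone there at that hour)
    private final Map<Integer, Map<String, Integer>> history = new HashMap<>();

    /** Record one observed trip, e.g. record(17, "home"). */
    public void record(int hourOfDay, String destination) {
        history.computeIfAbsent(hourOfDay, h -> new HashMap<>())
               .merge(destination, 1, Integer::sum);
    }

    /** Most frequent destination for this hour, or null if no history. */
    public String guess(int hourOfDay) {
        Map<String, Integer> counts = history.get(hourOfDay);
        if (counts == null) return null;
        return counts.entrySet().stream()
                     .max(Map.Entry.comparingByValue())
                     .map(Map.Entry::getKey)
                     .orElse(null);
    }
}
```

Even this trivial model captures the "leaving work at 5pm, probably going home" case; the interesting engineering is in feeding it richer signals and knowing when its confidence is too low to show anything.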

I want my phone to do this now, and there's no reason it couldn't be done.  I'm sorely tempted to get venture capital funding and go do this.  It's where the action is going to be in the near future, in my opinion.  I've got a whole bunch of ideas about what sorts of information could be fed into an inference engine.

We do need to be careful to remember the lesson of Microsoft's Clippy, however.  In order for an inference engine to work, it needs to be accurate.  It needs to provide value to the user.  It should also be unobtrusive: if the user simply wants to get to his email or his web browser, it should be no more difficult to reach than it is on today's interfaces.  Clippy failed on both counts, and was widely lampooned and hated for it.  The amount of information the Clippy inference engine had to work with was limited, so the assistance it could provide was worthless.

No inference engine will be perfect.  If I walk out the door and start walking towards the train station, I might not actually be going home.  Instead, I might be going to a bar on the way to the train station to meet a friend who just phoned me up.  There's no way my phone is going to be able to guess that (unless it monitors my phone conversation and understands what we are talking about, and we can't do that yet...).  The secret here is to be unobtrusive.  The phone should offer information on what it thinks I'm doing, but in such a way that if I want to do something completely different it doesn't get in the way.  Phone user interfaces are getting sophisticated now (e.g. SlideScreen), so I see no reason why inferred information cannot be incorporated.

One final thought about this is privacy. In order for this to work properly, our computer/phone needs to collect, store (for pattern analysis) and cross-check a lot of really private information.  Where does this information get stored and analysed?  At Google/Yahoo/Facebook?  The inference engine will then construct a model of my behaviour to predict what I'm doing.  I'm not sure I want a big corporation like Google to be able to make these sorts of inferences about me... but that's naive.  Big corporations like Google already construct models of our behaviour, and every time we sign up to a new service like Google Calendar or Gmail we give them more information to model us.  I'd prefer to run such a service on my own hardware, but that isn't the way the industry is going.

But as I ranted recently, perhaps that isn't an issue any more.  Are we willing to give up fundamental privacy in order to get the advantages?  I'm not sure...
Categories: brucecoopernet, computers, phones and adaptive user interfaces

My google wave post just broke :( but google fixed it :)

About 3 months ago, I wrote a Google Wave gadget which allows users to collaboratively edit a mind map within Google Wave.  I've gotten a bit of publicity out of it and it's all been great.  Sadly, tonight, the wave that I created to document my gadget has crapped itself, collapsing under the weight of its own popularity, with more than 200 blips and 500 participants.  The main blip in the wave now has no content whatsoever, and I can't view its history to repair it.

This is a bad sign.  I like Google Wave, and I want to support it as a new way of communicating that is easy, effective and efficient.  The problem is that I can't trust it.  If it's going to destroy my information, how can I rely upon it?  Sure, it's in beta at the moment, and I probably shouldn't complain, but I'm disappointed.  The wave has a truckload of participants now, and I understand that brings a lot of complexity, but I don't want to lose data.  If I could restore its state it would be fine... The really sad thing is that I'm due to deliver a presentation on my gadget tomorrow night, espousing how good Google Wave is and how it can be used.  What am I going to do now?

Time to address reliability issues Google.

Update: I've asked google to look into the problem, and it looks like somebody accidentally deleted all of the content in the wave.  This could have been a simple PEBKAC, or a more complex interaction of network latency causing the google wave software to do something unexpected.  It is beta software after all, so these things are to be expected.  It wouldn't be a problem at all, except I can't get access to playback to retrieve an older copy.

Never fear however, the dynamic chaps at google have had a look at my wave, and apparently there are over 30000 revisions to it.  This was a bit much for the javascript engine's tiny little brain and it broke.  They've fixed it now, and all is back as it was.  Thanks Christian!
Categories: brucecoopernet, google wave, computers, reliability and oh bugger what will I do tomorrow night?

I'm part of a presentation on developing Google Wave extensions, Wed 27th of Jan at RMIT

The local Melbourne chapter of GTUG is hosting a meeting on developing extensions for Google Wave this Wednesday at RMIT.  The big presenter there will be Pamela Fox, Google's developer relations person, but I've been asked to give a quick rundown at the end on my experiences developing a mind map gadget.  If I get time, I might also theorise about how Wave could be used to provide ad-hoc communications-focused tools inside business.

If you'd like to come along, please register here.  It's open to everyone.
Categories: brucecoopernet, google wave, computers and google

On Privacy

Facebook's CEO, Mark Zuckerberg, recently said that privacy is no longer a social norm.  Google's Eric Schmidt has also said that if you want to do things online that you want to keep private, then you really shouldn't use online services such as Google, due to laws requiring identification and retention of data.

There are benefits to treating your privacy as a commodity.  The most obvious examples are the personalities that have achieved fame and riches through living their lives in a very public fashion, such as Paris Hilton or Kim Kardashian, but each and every one of us makes a decision to trade our privacy each time we go online, whether we do it knowingly or not.

I recently showed my father-in-law how to use Picasa's web albums feature as a way that he could easily catalogue his photos from a trip, upload them to the web and then share them with people.  He was delighted and immediately started pestering his friends to view the web album, which of course involves creating a Google account (if you want to restrict viewing to a group of people).  One of his friends refused, citing that he didn't want Google to know about his every move online.

This goes to show the difference between not caring about privacy online (my father-in-law) and having enough knowledge to be scared but not enough to fully understand (his friend).  My father-in-law knows in an abstract sense that Google can track his activities, but he doesn't care.  He has made an implicit (and some would say uninformed) decision to trade some of his privacy for the additional features that Picasa gives him.   His friend doesn't want to share this information, and understands that an account is a tracking mechanism.  What he doesn't understand is that sites like Google routinely issue web cookies which, when tied to server logs, are almost as good at tracking people as an account.  Even if you don't sign up to Google, they will be storing information about you and your browsing habits.  It won't be as easy to pin the usage directly to you as a named person, but it definitely can be, and is, done.  Google do this so that they can target adverts tailored to you, but that doesn't mean that the info can't be used for other means too.

In order to be completely private online, a user needs to go to extreme lengths, using cookie-blocking software and IP-anonymising routing.  If you do this, however, many features that we have come to rely upon no longer work.  You can't browse online email.  You can't one-click share photos with friends, and you can't use social networking sites.  Even if you could, there would no longer be any incentive for software giants to produce cool software for us to use, because they could no longer make money off us.  The vast majority of us seem to be quite happy to make this trade.

If privacy becomes a commodity, then I would like to have control of that information.  I tend to agree with both Mark and Eric, and I am quite open on the internet, but I want to be able to control what information is used, and what isn't.  If my privacy is a commodity, that means it has value, and I'd like to see whether I am getting value for it.  It is possible to envisage an architecture for the net whereby all personal information is stored locally, on a server that each of us controls, or encrypted on central servers in such a way that only the people we allow can have access to that information.  In order for this to work, there would need to be legislation to enforce this separation, as there is a cost associated with implementing things this way.

And there's the problem.  There's no incentive for companies to give us privacy, or to give us control over our privacy, as it will lose them money.  There's no incentive for governments to give us control over privacy, as they want to collect information on us too.  "The People" are unlikely to get their shit together, as they are too easily distracted by the latest shiny product released by Google, or Yahoo, or Facebook.   As a result I think we are doomed to a future where information is routinely collected on every aspect of our lives.

This conversation has been going on for ages, and it's good to see people were thinking about it decades ago.  Credit card companies have been constructing models of us consumers for years and years, based upon our purchasing history.  They then sell this information back to department stores and marketing companies.  A particularly good scenario for how this could end up is played out as a side story in David Marusek's excellent novel Mind Over Ship.  It's a sequel to Counting Heads, so you might like to read that first if you are interested.  Now that I think about it, there's also an interesting plot in there about the relative privacy of the rich and powerful vs the middle class.  In both novels the middle class is routinely scanned for information, and their personal AIs (called Mentars) are incapable of keeping the information gatherers out.  The rich have much better Mentars and as a result are able to navigate their way through life with relative anonymity (but of course the heroes undergo a lot more scrutiny because they are the focus of big events).  Perhaps that is our future...
Categories: brucecoopernet, computers, online rights, google and privacy

New Version of Google Wave MindMap Gadget available

I've just updated my Google Wave Mind Map gadget to a new version.  New features include:
  • It renders graphically now, so things are a little prettier
    • It won't work on Firefox versions earlier than 3.0.  If you are running an earlier version I strongly suggest that you upgrade
  • There is now the ability to edit properties, accessible via the properties menu button.
  • There is a context menu, accessible via the right mouse button, which provides another way to reach the menu.
  • You can change the colour of nodes. There's also the ability to edit the background colour, but this is currently unused.
  • You can change the outline of nodes:
    • Underline
    • Circle
    • Cloud
  • You can change the direction of layout of the root node. Where it places nodes is a bit random, and I do plan on fixing this.
  • You can specify whether nodes are collapsed upon initial view or not.
  • Upload/Download now supports non-ASCII characters, such as Cyrillic
It's not perfect, and there are a few things that need to be fixed, but it is an improvement on the old version so I thought I'd release it. If there are any problems, please do not hesitate to let me know. I can easily revert to the old version if it's really broken, or issue quick patches.

I'd be delighted to hear your feedback.
Categories: brucecoopernet, google wave, computers and mind map

Why don't more engineers follow the KISS principle?

I was having a drink with some colleagues last night, and the subject of the over-use of high-availability environments came up.   At too many customer sites we see requirements that the system must have five nines uptime (approximately 15 minutes of downtime a year) when there is patently no reason for such a requirement.  As a result, we end up spending far more time, hardware, and software licences on the solution than is required.  This not only hurts the project during development, but also during maintenance, as a more complex solution requires more maintenance.  This problem isn't limited to HA; it extends to all areas of design.  In my opinion, this tendency to over-complicate projects is more responsible for project overruns and failures than any other cause.  To make things even worse, organisations that decide to do things "just because" tend to under-invest in them, which means that the result is half-arsed and doesn't work properly anyway.

In a quick survey that I conducted of the people around the table at the pub, all of us subscribed to the KISS principle, and I suspect that most engineers would agree with us.  So why do so many projects end up bloated and lumbering?  The knee-jerk reaction that everyone other than myself is an idiot just doesn't hold water.  Chances are the next enterprise architect I meet will be almost as smart as me (even if my ego tells me otherwise), so what is it?  Here are a couple of theories.

Firstly, keeping things simple means that you will be done quickly, but you may need to come back later to make changes or to add functionality.  This is normal and expected within an Agile development methodology, but more problematic in big institutions that take ages to approve budgets and have difficulty dealing with change.  In many organisations it is easier to ask for $1M in one go, rather than ask for 10 $100K budgets. This sucks, but it is the way things go.

Secondly, companies contain many individuals, each of whom has their own view on what is important. The reporting guy thinks that his reports are the most important thing.  The IT guy thinks that good data architecture is the be-all and end-all.  If you construct a plan by consulting everybody in the organisation, you will end up with gold plating and a very, very long build cycle.

In order for KISS to reign, a project needs ruthless leadership.  Its leaders need to make sure that their staff understand the principles by which the project is being run, and the intended goal. Knowing what you want to achieve is very important.

P.S. Happy new year!
Categories: brucecoopernet, computers, high availability, solution design and why is everyone so stupid?

I oppose the mandatory internet filter proposed in Australia

It will come as no surprise to anyone that I oppose the idiotic mandatory internet filter being proposed by the Australian federal government at the moment.  I took the time today to write to my local member, Michael Danby, to oppose the policy.  I suggest that anyone who agrees with me that the filter is stupid (which should be anyone who understands how the internet works) does the following things:

  1. Sign the petition against the policy at GetUp
  2. Write a letter to your own Member of Parliament complaining about the policy.  Most if not all members of parliament will have web sites with feedback forms.  Who knows if they actually read them, but it will only take 10 minutes of your time.  Besides, you'll feel better after doing it.
Below is the content of the message I wrote to Mr Danby

Good afternoon Mr Danby, I am a voter registered in your electorate, and would like to speak with you regarding the proposed mandatory internet filter announced by Stephen Conroy and in the media eye at the moment. I am opposed to any such measures, not because I oppose censorship (which I do), but because the scheme will inflict significant penalties on normal people accessing the internet and it will simply not work.
When the trial was introduced, a 16 year old boy managed to circumvent the measures put in place within 30 minutes. Any user capable of using Google will be able to bypass the filter by using a foreign proxy (of which there are many) or an encrypting router (such as The Onion Router). This indicates that anyone who wants to get around the filter can do so trivially. Children growing up right now already have the skills to circumvent these filters. Given that this is the case, all the filter will do is affect people that are viewing any other sort of content.
As another has put much more eloquently than I, the use of the filter also directly contradicts the interests of the NBN. The filter has not been trialled at any speeds above 8 megabits/sec, 1/12th of the proposed bandwidth of the NBN. The federal government is sending out a mixed message: trying to usher in a new era of connectivity in this country while attempting to hold it back with ineffectual and intrusive measures.  
I have worked in the I.T. industry for 16 years, and this is the single most stupid technology policy that I have seen in my working career. I implore you to oppose this policy within your party and within parliament. I would welcome the opportunity to speak directly with you on the topic. I can be reached by email or by phone on xxxxxxxxx  
Thank you for your time,

It's not poetry, but I hope it gets the point across.
Categories: brucecoopernet, australia, censorship, internet, government and stupidity

I want to invest in Social Business for Christmas

My extended family is beginning to ask the standard Christmas question: what do you want for a present?  I'm having difficulty thinking of a gaudy trinket to ask for, so I've decided to ask for a donation to a worthy cause.

We saw Muhammad Yunus on Andrew Denton's Elders show the other night, and I was very impressed with his vision for improving the lot of the world's poor.  He makes it seem like solving poverty isn't hard, which makes a great contrast to the things that some people say.  I especially like the concept of a Social Business, whose purpose is to serve the people in a locally logical way rather than to turn a profit.  Interestingly, one would invest in a social business with the intention of getting the investment back at some point, although not necessarily with a profit.

So now I'm all fired up, and I'm going to ask people to invest in Social Business instead of buying me presents.  The only problem is I don't know how to let people do that.  I'm going to have to do some research to find out how it can be done.
Categories: brucecoopernet, christmas and social business

TI introduces a customisable watch which does HRM out of the box

Engadget have just reported that TI have released a hackable watch, which can do all sorts of things, including HRM, straight out of the box.  This is really interesting.  I wonder if I can make it work with Bleep.  Of course, I should probably concentrate on finishing Bleep first. At $49 I reckon it's a steal!  I might ask Santa for one for Christmas.
Categories: brucecoopernet, crimson cactus, heart rate monitors and hacking

Google Wave for EJA Enterprise Futures Forum 09

A lot of people are saying that using Google Wave to discuss conferences live is the new hotness. Given that there will be a special Google Wave announcement at tomorrow's Enterprise Futures Forum in Melbourne, I'm willing to guess there will be waves for the conference.

In anticipation, I have created a wave for the tech discussion session I am running on local implications for cloud computing. There's not much there at the moment. I'm hoping to bulk it out a bit this afternoon and tonight in prep for tomorrow's conference.
Categories: brucecoopernet, google wave, conference, cloud computing, melbourne and EJA

Abbe May at the Wesley Anne

Melissa, Courtenay and I went to see Abbe May perform a blues/rock solo gig at the Wesley Anne last night. We always try and see her when she's in town, and as usual she delivered. The style was a bit different this time, as she was on her own and had to adapt some of her songs to fit the format. Abbe also performed a bunch of covers, as she explained afterwards, to "keep it fun for me", and you could tell from the performance. Especially good were the two covers originally by Willie Dixon, whom Abbe cites as a major influence. 'Twas great.

She's playing the Edinburgh Castle (in Melbourne) next Friday, and she's back at the Wesley Anne on the 27th.  Go see her.  It's a bargain.
Categories: brucecoopernet, pub, music, melbourne and gig

EJA Futures Forum, Nov 17th

Enterprise Java Australia are holding a conference on the 17th of November in Melbourne, with keynote speeches on the Broadband initiative, Green IT, SOA, and Google Wave.  I will be facilitating one of the afternoon tech sessions on Cloud Computing.  If you're at the event, come and say hi to me.

There's a 2-for-1 registration offer open until midday on Friday.
Categories: brucecoopernet, conference, SOA, cloud computing, melbourne and EJA

Melissa's name is on the wall

It's not nearly as bad as all that.  In fact it's a good thing.  Melissa's first solo show outside of the university system opened last night at Metalab in Sydney, and was very successful.  She sold some pieces, chatted with lots of people, and we had some fun.  The proprietors of Metalab are very welcoming and friendly, and we all went out for Vietnamese food afterwards, which I thought was a nice touch.

Apparently numbers for the opening were a little down on usual, but that is because both artists opening that night are from out of state, so the locals weren't brought in by the artists themselves.  Still, there were lots of people there, even if they weren't spilling into the street, so I thought it went very, very well.

If you are in Sydney and you'd like to check it out, the exhibition is on till the 26th of November 2009 at 10B Fitzroy Place, Surry Hills, NSW 2010

Categories: brucecoopernet, jewellery, Family, exhibition and melissa

What should I do when google wave topics become too popular/cluttered?

The other day, I released a mind mapping gadget for Google Wave, and it's proven to be quite popular. Popular for something I knocked together quickly, anyway. There's an active wave discussing features, which also serves as the main description of the gadget. It's getting a bit long now, and I'm aware that there is a limit to how big waves can get before they start to slow down. It also gets to the point where I want to simplify things so that a new reader coming upon the wave doesn't get confused by the threads of conversation there.

The way that I am using wave at the moment is that there is a single shared document at the top (the root blip) which contains the topic of discussion, and in this case, a mind map of features and votes for features. The blips that come afterwards are a discussion list, much in the same way that comments can be added to blog posts. Whilst they form an important part of the wave, the value of information they contain decreases as they become less topical. It is really the latest comments and blips that are the important bit, at least to people that are returning to long running conversations.

Other systems show the most recent comments at the top, and show older comments on separate pages to stop the page from getting too big. I'm tempted to suggest that Wave should do the same thing, but I wonder if that's because we're all still figuring out the best way to use Wave. Are there different usage patterns for waves that mean it makes sense to have every single blip on the screen, even if it means the page is three miles long? I'm still mulling that one, but in the meantime I think there is a need to come up with a way of managing long-running waves. Here are my thoughts on how it could be done.

Option 1: Create a new wave to "Continue Discussion". This is what most people seem to be doing at the moment, but it means that anybody who has linked to the page is now linking to (or embedding) a dead version of it, and would then need to click through to see the new version. It also breaks the fundamental principle of URLs: that a URL represents an object for its lifetime.

Option 2: Delete the old crufty posts. That's not very nice to the people who wrote those posts in the first place. Besides, those comments provide useful context for a new reader to be able to catch up with the rest of the people on the wave. In the end, you are removing information from the system rather than presenting it in an accessible way, and that's never a good idea.

Option 3: Have an archival bot participant on the wave. This bot would monitor the wave, and when the number of blips starts to get high, it would progressively copy the older blips into an archive wave and subsequently delete them from the original. It would also add a link to the end of the root blip showing people where the archive wave is.

I've had a quick look to see if anyone has done this yet, and I haven't found anything, but I think it's a great idea. The only technical issue I see with this approach at the moment is that the archived blips would not have the same author(s) as the originals, as it would be the bot that authored them. The bot could add some text indicating who the authors were, but it's not quite the same.
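To make Option 3 concrete, here is a minimal sketch of the archival logic only. The real Google Wave Robots API works through events and blip operations rather than plain data structures, so everything here (the thresholds, the dict shapes, the function name) is invented for illustration; it just models a wave as a list of blips and shows the move-and-annotate step the bot would perform.

```python
# Sketch of the archival-bot logic (Option 3). A wave is modelled as a
# dict with an "id" and a list of "blips"; the real Wave Robots API
# would supply these via events instead. All names are illustrative.

ARCHIVE_THRESHOLD = 200  # start archiving once a wave exceeds this many blips
KEEP_RECENT = 50         # recent blips to leave behind in the original wave


def archive_old_blips(wave, archive_wave):
    """Move the oldest blips out of `wave` into `archive_wave`.

    Keeps the root blip and the most recent KEEP_RECENT blips in place.
    Returns the number of blips moved.
    """
    if len(wave["blips"]) <= ARCHIVE_THRESHOLD:
        return 0
    to_move = wave["blips"][1:-KEEP_RECENT]
    for blip in to_move:
        # Record the original author, since the bot would otherwise
        # appear as the author of the archived copy.
        archive_wave["blips"].append({
            "author": blip["author"],
            "text": "[originally by %s] %s" % (blip["author"], blip["text"]),
        })
    wave["blips"] = [wave["blips"][0]] + wave["blips"][-KEEP_RECENT:]
    # Point readers at the archive from the root blip.
    wave["blips"][0]["text"] += "\n(Older blips archived in wave %s)" % archive_wave["id"]
    return len(to_move)
```

The authorship problem mentioned above shows up here too: the best this sketch can do is prepend "[originally by ...]" to the archived text.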

I really like the idea of the archival bot. Once I get back from Sydney I might give it a go.
Categories: brucecoopernet, google wave, archival and bots

Mind Map Gadget for Google Wave

As you can probably tell by my recent posts, I've been mucking about with Google Wave for the last week or so.  It shows a lot of promise, but we still need to work out the best way to use it.

Some colleagues and I were discussing some practice development the other day.  One of them said that they had created a mind map on Mind42 and shared it with us so that we could map out some ideas.  Mind42 is a great tool and normally I would jump straight on it, but it seemed unnatural to leave the context which we had created in the wave.  It would have been much cooler if we could have had the mind map directly in the wave.

Google has thought of this, and has included the ability to incorporate gadgets into your wave, which allows essentially any web application to participate in waves.  A mind map is a natural tool to include in waves, as it forms the start of a lot of collaborations, which is exactly what Wave is for.  Rather than wait for Mind42 to change their application so that it could be embedded in Google Wave, I decided to write my own.  Mind maps are relatively straightforward applications, and I wanted an excuse to use GWT in anger.

The result is my newly released Google Wave component.  I've uploaded it to the Google Wave samples gallery, but if you have access to the Wave preview then you can go directly to the source (with a sample) at this wave link.  Install it, have a play, and make suggestions for improvements.  I'm hoping we'll start using it within our organisation as well.

Here's a video of it in action:

Categories: brucecoopernet and google wave

Does Google Wave herald the arrival of natural language interaction with computers?

I've been spending some time recently thinking about Google Wave, and how it can be useful as a method of communicating and working with multiple participants at the same time (which is what Wave is for), but with a robot as one of those participants.  Wave provides an easy way to incorporate a computer participant in a conversation with people, getting it to receive all updates to the wave and provide its own input.  Doing this makes performing workflow-like tasks much easier, and brings the computer system into the conversation with the humans, rather than having the humans go to the computer system.

In my example, everybody has a "manager" to which their leave application must be sent before it can be approved.  The bot would detect when you were finished with your application and automatically add in your manager for approval.  This works great, but it needs to know who your manager is.  This information is stored in the HR system, which has its own user interface and logins and whatnot.  If you change manager, somebody with admin privileges on the HR system would need to go in and change your record in that UI.  This breaks the paradigm of bringing the computer into the conversation with humans.

For my demo, I was planning on writing a little user interface for my bot.  It was going to be a web application which allowed any user to go in and change their manager, so that they could try out the different permutations of the workflow.  Then I got thinking: why should somebody need to leave the conversation in Wave to make changes to the leave system?  Why should they have to interact with yet another user interface?  I had already put in some basic capability for the bot to tell you your leave balance if you asked it, so why not extend that even further?

Imagine the following scenario:
Bob is having a discussion with Jane on Google Wave about planning for the next release of their product.  They have a query about upcoming leave for the staff, because they might need to cancel leave in order to meet deadlines.  To answer it, they have to get a list of users on the project, log onto the HR system, and perform queries on each of those users.  They went to the HR system's world, rather than bringing the HR system into their conversation.
Wouldn't it be cool if Bob could talk to the HR system instead?  He could add the HR bot to the wave, and say something like:
Hey HRBot, what is the upcoming leave for the following users:
  • Mary 
  • John
  • Simon
The bot could then parse the question, send the request to the HR system's web services, and provide a response directly in the wave so that both Bob and Jane can see the results.  Any leave requests would be listed as links to the waves that submitted those requests, so Bob could quickly check how important that leave is, or ask the users whether it is okay to cancel it.
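Within a narrow context like this, the parsing step doesn't need to be clever at all. Here is a minimal sketch of how the hypothetical HRBot might recognise its one command shape (a trigger phrase followed by a bulleted list of names); the trigger wording, the bot name and the bullet handling are all assumptions for illustration, not any real Wave API.

```python
import re

# Minimal narrow-context parser for the hypothetical HRBot. It only
# understands one command shape: a greeting line addressed to the bot,
# followed by a bulleted list of user names.
COMMAND = re.compile(r"hey\s+hrbot[,:]?\s+what is the upcoming leave", re.IGNORECASE)


def parse_leave_query(blip_text):
    """Return the list of user names queried, or None if the blip
    isn't addressed to the bot."""
    if not COMMAND.search(blip_text):
        return None
    names = []
    for line in blip_text.splitlines():
        line = line.strip()
        # Wave renders bullets; accept a few common bullet markers.
        if line.startswith(("•", "-", "*")):
            names.append(line.lstrip("•-* ").strip())
    return names
```

Given the example blip above, this would hand back the names Mary, John and Simon, which the bot could then look up against the HR system's web services.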

People have been trying to get natural language systems going for decades, and it's still really, really hard.  When I was a postgraduate student, I entered the Loebner Prize competition with a colleague of mine, Jason Hutchens.  The purpose of the Loebner Prize is to encourage people to write a bot that is capable of fooling humans into thinking they are talking to a person instead of a computer.  This is impossibly difficult of course, and pretty silly really, as Marvin Minsky has pointed out, but it's a good bit of fun to write a chatterbot.  The bot that comes closest to being human wins an annual prize of $2000.  Natural language processing was Jason's area of research, and he had won the prize before, so we decided we'd have a crack.

We lost (curse you, Ellen DeGeneres!), but sold the software to a German company that wanted to put chatterbots on companies' web sites as a level-0 helpdesk, for a sum considerably more than we would have won as prize money.  In the end, they had difficulty selling the software and I don't think it ever really went anywhere.   The range of language that the bot was expected to deal with was very broad, which makes parsing exponentially more difficult, plus web sites in those days weren't really sophisticated enough to support that level of interaction.

These days, I think there is more of a case for using natural-language-parsing bots within narrow contexts (to keep the complexity down).  Salesforce have an interesting demo which shows the sort of thing that I am talking about within Wave.  They talk about customer interaction, but I think it will be more useful within an organisation.

In the future, I think we'll see more and more of these bots participating in our business practices.  They can enrich the information we type (for example, by putting in contextual links into what we type, bringing up supporting reports and figures when we are performing tasks, that sort of thing), plus they can become active participants in the procedures while still giving us control over what is going on.

P.S. I feel terrible that I have used cancelling leave as an example here.  I would never advocate cancelling leave as a valid project management technique.  It's a terrible thing to do to your staff, and you should never do it.  It's only used as an example.
Categories: brucecoopernet, google wave and natural language

Who plays the part of transformation in mashups?

In the last week, two people have independently told me about an Australian government sponsored conference to create interesting mashup applications from government data.  I love the idea, and I'm really glad that the government believes that its data should be freely available.  I think most app providers are now realising the power of providing open access to their data to drive adoption.  In my opinion, however, independent transformation of data between web applications is still missing as a generic tool for mashup creators.

Generally, in enterprise as in mash-up applications, the source data is not in the correct format to be directly consumed by the final application.  As an example, I am writing an iPhone application which takes heart rate monitor recordings of your exercise and stores them as a Google spreadsheet.  The reason I chose to store the information in an online spreadsheet instead of a bespoke database service is that Google already provide all of the tools to make the data in the spreadsheet easily available as XML for others to consume.  It does this using the Atom protocol, which is great, but hardly easy to consume.

Traditionally, a mash-up is seen as the combination of data from one or more external sources with a JavaScript-driven user interface.  The data flow looks something like this.

This is great; however, it introduces a lot of coupling between the components.  The mashup provider needs to communicate directly with the data sources, transform them into its own native format, then consume them.  There's no opportunity to substitute in a different data source if one becomes available, or to easily fix things if the source data format changes slightly.  In enterprise applications, this has long been recognised as an issue, and ESBs were developed as a way of handling it.  When an ESB is used correctly, the source data (or application) is abstracted from the destination by a transformation process, usually performed by XSLT.

I think that the same approach should be used for mash-up style applications.  The big advantage this brings is that it releases the data from the application (and the user interface).  More importantly, it allows the application itself to fetch data from different sources.  It is no longer limited to the sources that the programmers put in.  A sufficiently talented user can take any data source, transform it into the format that the application expects, and then point the application at that transformed source.

For this to work, there is a need for a generic XSLT service that can take a data feed and an XSL stylesheet and produce the desired output.  W3C provide a service which does exactly this.  Unfortunately, the bandwidth requirements of any large enterprise use would crush their server, so they have put limitations on the service to restrict it to personal use.  This is a shame.  I've written a very similar service for Bleep, but it runs as a free Google App Engine app, which has quite severe resource limitations of its own.   I reckon Google should release a transformation service of its own.  It would be very useful in many of its apps.  There's no way to make advertising revenue off it though :-/
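The shape of such a transform step is easy to sketch. Python's standard library has no XSLT engine (a library such as lxml provides one), so to keep this self-contained the stylesheet is replaced by a hard-coded mapping; the point is the decoupling, not the mechanism. The Atom fragment and the `<rows>` output format are both made up for illustration.

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"


def transform_feed(atom_xml):
    """Transform an Atom feed into the flat <rows> format a mashup
    front end might expect. A real transform service would apply a
    caller-supplied XSL stylesheet here (e.g. via lxml.etree.XSLT)
    instead of this hard-coded mapping."""
    feed = ET.fromstring(atom_xml)
    rows = ET.Element("rows")
    for entry in feed.findall(ATOM_NS + "entry"):
        row = ET.SubElement(rows, "row")
        row.set("title", entry.findtext(ATOM_NS + "title", default=""))
        row.text = entry.findtext(ATOM_NS + "content", default="")
    return ET.tostring(rows, encoding="unicode")
```

Because the consuming application only ever sees `<rows>`, the Atom source could be swapped for any other feed by changing the transform alone, which is exactly the decoupling an ESB gives enterprise applications.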

It's not really within the average person's skill set to write a transform; many software engineers do not know how to do it properly.  In the future, I'd like to think a web app will be written which brings it more within the reach of normal internet consumers.

To bring this back to the govhack conference I mentioned at the beginning of this post: I think it's good that the government wants people to make mashups, but in some ways they are a little misguided.  It's not just about the applications.  Most people will just put some data up on a Google map, which is hardly innovative.  Instead, I would like to see people taking the source data, transforming it, and correlating it against other data sources to produce new data sets.  Then it's possible for any number of people to take that data and visualise it in cool and impressive ways.

For Bleep, some of my random thoughts on data transformation and visualisation have been gathered at this page

Categories: brucecoopernet, mashups, bleep, crimson cactus, XML and transformation

Using Google Wave for Workflow tasks

I've been thinking over the last few days about what Google Wave could be used for. Obviously it can be used as a document collaboration and review platform. It can also be used as a multi-user chat program, although there are probably other existing programs that are just as good at that. Some people have claimed that it isn't anything revolutionary. In and of itself, this is true, as it just takes concepts already available in email, instant messaging and collaborative documents and puts them together. What I'm more interested in, however, is what we can do now that it has brought those technologies together.

In a blog post, Daniel Tenner quite succinctly outlines where Google Wave will be useful: primarily, it's going to be used by people working together. Why couldn't we use it to do workflow-related stuff too, especially when there is an automated component to it? I decided to look at how we could do leave forms. The first question I asked was "why would you want to do leave forms in Wave?". There are already web applications for doing workflow. The problem I see with those is that they are still fairly rigid in their operation. Whenever your manager is on leave, or you want to go slightly outside the pre-defined workflow, the procedure becomes very brittle and you need to find a user with superuser access to get things done. In addition, all notifications that you have new forms to attend to, or that a form's status has changed, go out by email. Email shouldn't be a notification system; we just use it that way because it's familiar, and it is the tool we spend most of our time in.

Assuming that Google Wave becomes popular enough that we use it each day, wouldn't it be nice if the workflow/collaboration tool lived inside our messaging tool? That way we wouldn't need to log into a separate tool to manage things. It would be there in front of us, and allow us to deal with the situation immediately.

To test this theory, and also to play with Google Wave bots, I have written an extension to Google Wave which gives users the option to create a leave application in Wave. Interestingly enough, I found it easiest to document the procedure directly in Wave, which would make it very easy to introduce new users to our procedures. The procedure wave is available at Leave Application Procedure. One nice thing about this is that if HR changes the procedure, it automatically pops back up in people's inboxes so that they see the changes. There's no need for notifications to be sent out, as the change to the document (to which everyone is subscribed) automatically gets distributed. Likewise, if a staff member has a question on the procedure, it can be asked directly in the wave itself (privately if necessary) to give the conversation context.

When a user creates a new leave form wave, it automatically includes a bot which is responsible for progressing the wave through the workflow. This is done by a series of buttons (or actions) at the bottom of the document which take the standard approval route (Draft -> Submitted -> Approved -> Processed). It is flexible, however: anyone who needs to deviate from the process, say to get additional approval from another team leader because the employee is working with another team, can simply add that team leader to the wave and have a chat. All of the context associated with the process is kept with the wave, and it can easily be searched later on to see what happened.

It's also possible for the bot to take a greater role in the process itself. It can check leave balances (assuming the leave system is available to it), add leave to the company leave calendar, and perform any number of other integration tasks, because it is the thing managing the workflow itself. It's very flexible, easy to change, and completely under the control of the organisation. One thing I played with was getting the bot to understand (as best as bots can) natural language. If you wanted to query your leave balance, for example, you could start a wave with the bot and ask it "What is my leave balance?". It could then look up your balance and reply. This would free up HR staff from having to perform mundane tasks. Obviously bots have a long way to go before they can understand our language properly, but if queries conform to simple grammar rules then it should work.
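The bot's core really is just a small state machine. Here's a minimal sketch of the approval route described above — the class and names are made up for illustration, and the real bot would of course also talk to the Wave robot API:

```python
# Illustrative sketch of the leave-form workflow a bot could enforce.
# Names and rules are hypothetical, not the real bot's code.

TRANSITIONS = {
    "Draft": "Submitted",
    "Submitted": "Approved",
    "Approved": "Processed",
}

class LeaveForm:
    def __init__(self, employee, manager):
        self.employee = employee
        self.manager = manager
        self.state = "Draft"

    def advance(self, actor):
        """Move the form one step along Draft -> Submitted -> Approved
        -> Processed. Only the manager may perform the approval step."""
        if self.state not in TRANSITIONS:
            raise ValueError("Form already processed")
        next_state = TRANSITIONS[self.state]
        if next_state == "Approved" and actor != self.manager:
            raise PermissionError("Only the manager can approve")
        self.state = next_state
        return self.state

form = LeaveForm("alice", "bob")
form.advance("alice")        # Draft -> Submitted
form.advance("bob")          # Submitted -> Approved (manager only)
print(form.advance("bob"))   # Approved -> Processed
```

The point is that the rigid part (the transition table) is tiny, so everything outside it — side conversations, extra approvers, questions — can live in the wave itself rather than in code.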

If anyone would like to have a play with what I have written, let me know and I will add you to the procedure wave, which will allow you to install the extension and create a dummy leave application. It puts me in as the approver for everyone (except for me, for whom it uses another bloke) as their manager. Only the manager can approve/reject a leave application. If you say anything to the bot, it replies with a message about your leave balance (which is bogus; the bot isn't connected to our HR system).

What does everyone think? Is this a good idea for managing workflow? Until we have broader access for people to get on Wave it will be difficult to tell, but I think it is a good use.

P.S. I wrote this post in Wave originally, but had to copy and paste it here because the embedding API doesn't allow anonymous viewing yet.  Google Wave is great, but it is still definitely a beta product.

UPDATE: I've created a screencast demonstration of how the flow will work.  Remember this is intended as a proof of concept rather than the full thing, but it gets the point across.

Categories: brucecoopernet, google wave, automation and workflow

Interesting article on app pricing

Gizmodo have an interesting article on the price of iPhone applications, how they have dragged consumers' expectations of app pricing down, and how this might not be good in the long run.

I can certainly say that I won't be expecting to make much money out of Bleep. It's taken longer than expected to develop, and I don't think I'm going to sell lots of copies. How anyone can make a business out of developing these things is beyond me... I suppose we'll see :)
Categories: brucecoopernet, bleep and crimson cactus

Another Heart Rate monitor device/app for the iPhone

There's another company producing a heart rate monitor device for the iPhone, and it looks great. It's exactly the sort of thing that would render Bleep irrelevant. Sadly, it's not going to be made into a product at this stage :(
Categories: brucecoopernet, bleep, crimson cactus and Fitness

New blog format for the crimson cactus

When I started Crimson Cactus, I started up a blog for it. It seemed to make sense, and I could keep the company posts separate from holiday pictures and musings and whatnot. As it's turned out, I find that I want to cross-post; that is, I want to be able to post to both of my blogs.

There are a couple of ways of doing this, but I think it just goes to show that I'm doing it wrong. What I really want is to continue posting to my personal blog, tag those posts that are of interest to Crimson Cactus, and then have the company web page pick up that tag. This is exactly what I've done now: all of the old posts have been imported into my personal blog, and that's what you'll be seeing from now on.

For those of you that follow via RSS/Atom, The new link to use is
Categories: brucecoopernet and crimson cactus

Fitbit review on engadget

Engadget have released a review of the Fitbit networked pedometer. I remember first seeing this about 12 months ago, when it had just launched at TechCrunch 50. I like the idea of the device, but it is yet another thing to carry around, and yet another thing to charge.

It makes me think of the belt valet computers in David Marusek's Counting Heads (thanks for the recommendation, @doctorow). One day, not too far away, we will have computers that we carry, strapped to our person (perhaps in belt form), that can handle all of the biometric sensing that we want. They will be able to count our steps, work out our heartbeat, and record everything that we hear and see for future reference.

I'm really excited about the prospect, even if it does open up a lot of privacy concerns. It will be important that we as individuals retain control of the information being collected. I'm pretty sure that Google having access to all of my heartbeat information (which is basically an EKG) would be a bad idea. David's sequel, Mind Over Ship, shows us a very good example of the misuse of complete information on people.
Categories: brucecoopernet, bleep, crimson cactus, Fitness and gadget

Costumes from fancy dress party

Melissa and I went to a Royalty themed fancy dress party last night, so we went as Louis XVI and Marie Antoinette. Here's a couple of photos.
Categories: brucecoopernet, party, bruce and fancy dress melissa

Gizmodo AU running a blog theme of fitness for geeks this week

Gizmodo are running a theme of playing with balls this week, which is right up Bleep's alley. In the linked article, they mention heart rate monitor gadgets in particular. At least my approach will be relatively inexpensive. What a pity that Bleep isn't ready for publication yet. I'll be following their posts with interest.
Categories: brucecoopernet, bleep, crimson cactus and Fitness

Ahh, there _are_ heart rate monitor accessories for the iPhone already

I was operating under the impression that nobody had created a heart rate monitor system for the iPhone yet. This seemed illogical to me as it is such an obvious thing to do.

As it turns out, there is one. I found it today at Smheartlink's site. It looks like a great product, but it is a lot of money to spend, especially after you have already purchased a HRM belt.

It's a bit disappointing to see this considering my app will do much the same stuff, but I still see a niche for mine, as it doesn't require you to charge and carry another device around with you, plus it will be a bit cheaper :)
Categories: brucecoopernet, bleep, crimson cactus and Fitness

Apple approves how many apps a day?!?

I saw an article on Gizmodo a few minutes ago that says Apple approved almost 1400 iPhone apps last Friday. Even on slow days they approve hundreds of applications.

It's massively impressive that Apple have attracted so many applications. It just goes to show how much of a runaway success they have on their hands. I can't help wondering, though, how hard it will be for anyone to find the app they want when there are so many to choose from. I suppose that's why app review sites are popping up now, like an App Store equivalent of Gizmodo... Here's hoping I can get them to review my app when it's finished.

In Bleep development news, I got it running on my iPhone 3G last night (rather than just the simulator) and it works okay. There are some performance problems which I think I can rectify fairly easily, and then there is just user interface tweaking to go.
Categories: brucecoopernet, bleep, crimson cactus and Fitness

Progress on Bleep

I had hoped to finish Bleep to the point where it could be submitted to Apple over the weekend.  Sadly, I've run into some problems causing the application to crash on the iPhone, even though it runs fine in the simulator.  I'm also trying to polish the user interface to make it a better experience for users.

In the meantime, I've uploaded a sneak peek video of Bleep in action.  Bear in mind that this is an early version of the software, and it is still being tweaked. Have a look:

Categories: brucecoopernet, bleep, crimson cactus and Fitness

Why is requesting resources from other hosts such a big problem in Javascript?

Recently, I found out that Yarra Trams has published a web service interface for finding out information on tram arrival times. This is awesome, and I've been trying to think of cool little apps I can write to take advantage of it. I'm also playing with GWT at the moment, so perhaps I could do something that way... There is a problem, however: the Same Origin Policy will block any attempt by my JavaScript code to access the web service, as it will come from a different origin.

There's a couple of things that I could do about this:
  1. Ask Yarra Trams if they could host my work, but that kind of defeats the purpose of a mashup, doesn't it? Plus, to be honest, the chances of me ever coming up with something that they would want to host are minimal :)
  2. Write a proxy service that I host myself, and send any requests through it. This kind of seems pointless, as there is already a perfectly good webservice out there that I can use. Besides, if what I wrote ended up becoming popular, I'd be up for a bandwidth bill for all the traffic to the web service.
  3. Try and convince Yarra Trams to use a cross-site friendly interface, such as is provided by Google Data. Fat chance of that happening. Besides, there's nothing wrong with web services; this callback-based approach seems like a hack, probably because it is.
  4. Use a new HTML5 extension called CORS, which allows for fetching resources from other origins.
The first three options are all really nasty. The fourth seems perfect: it is specifically designed to allow fetching resources from differing sites. However, it requires the destination server to include a header specifying which external domains are allowed to access it, so it still won't work for me :(

I've been thinking about the SOP restrictions and how CORS works, and I must admit I'm confused. The restrictions it places on requests just seem over the top. I understand that it is important to stop cross-site attacks, but there should be a way for the JavaScript programmer to explicitly state that (s)he understands the risks and promises to treat the requested data as potentially dangerous. Requiring any form of server-side change means that almost all mashup-style applications can't operate correctly.

To come back to my example: if I want to POST to a web service which returns XML, which I then parse and use as data to, say, plot on a map, I can't see any security problem with allowing me to make an XMLHttpRequest to do exactly that. If I were to eval() the result, I could potentially open my application up to injection attacks, but I'm not that stupid.

Now perhaps I haven't spent enough time thinking about this, but I don't see why we can't have an override flag on XMLHttpRequest to let us disable the SOP for an individual request. That would give me the flexibility to decide where I want to perform an unsafe request, and to take the appropriate steps to make sure I don't get burned by the results. Perhaps the security experts out there can educate me as to what I've missed.
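For what it's worth, the server-side change CORS asks for is small — it's just a response header. Here's a simplified sketch (in Python, ignoring preflight, credentials and the finer wildcard rules) of the check a browser performs before letting script read a cross-origin response:

```python
def cors_allows(response_headers, requesting_origin):
    """Simplified version of the browser's CORS check: the response's
    Access-Control-Allow-Origin header must name the requesting origin
    (or be the wildcard '*') for script to read the response."""
    allowed = response_headers.get("Access-Control-Allow-Origin")
    return allowed == "*" or allowed == requesting_origin

# If the tram arrival service sent this header, browser mashups from
# any origin would work (the origin below is a placeholder):
headers = {"Access-Control-Allow-Origin": "*"}
print(cors_allows(headers, "http://my-mashup.example.com"))  # True
```

Which is exactly the frustration: one header on the server unlocks everything, but the client has no say in it.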
Categories: brucecoopernet, javascript, GWT, Single Origin Policy, AJAX, tramtracker, web services and CORS

Just got back from a week of Snowboarding

I just got back from 5 days in the snow at Mt Hotham in Victoria. Despite the weather forecasts looking like complete arse all week, we got to ski 4 days out of 5, which is not bad for Australia. Only Friday let us down, with bucketing rain and howling wind. Given we were due to head back to Melbourne on Friday anyway, we packed it in and headed back early.

One of these days, I'm going to go to a country with real snow and find out what Snowboarding is really about :)
Categories: brucecoopernet, snow, hotham and holiday

TramTracker has a Web Service!

I'm a fan of the TramTracker iPhone application. It's a little doohickey that fetches information from Yarra Trams' real-time tram arrival service to tell you when your next tram will be coming. Having used it a lot over the last couple of months, I wondered last night how the app got its data. What service did it contact?

So I hooked up a logging proxy to my iPhone and traced the calls it was making. It turns out TramTracker has a fully functioning and documented web service just sitting there, begging to be used. I'm now trying to think of what other nice apps could be made. My first thought was an Android version of the TramTracker application, but that's just a clone (plus I don't own an Android handset)... How about a dashboard widget? Oh, turns out there already is one! It's good to see that there is a quango out there doing its IT right for a change...

What other sorts of mashups could we make? One other thought I've had is making a Google Transit data source from this information, but I suspect Google is already talking to Yarra Trams about that.
Categories: brucecoopernet, trams, tramtracker, web services and melbourne

Google Apps allows for custom SMTP servers

I've been using my Google account to store mail for a while now, including forwarded mail from client organisations, where this is allowed. The only problem I've had so far is that when I'm sending mail as another identity (like a client one), it would always come up as "from on behalf of me@the.right.domain". It's a bit annoying, but that's what Google had to do in order to be good email citizens and not get everything marked as spam. As it turned out, many Exchange servers automatically mark such email as spam anyway, so some people lost mail that I sent them.

Google recently introduced a new feature whereby you can specify your own SMTP server to use when sending mail, and you can specify a different one for each identity. This is great, as you no longer get the on-behalf-of bullshit, and your mail doesn't get marked as spam any more. I was originally stumped by the fact that I couldn't get to the SMTP servers of the organisations I wanted to send mail from, but then it dawned on me that I could use any SMTP server that would let me relay mail as other identities. I used my ADSL provider's, and all is good. Of course, if the organisations I'm sending mail as had SPF rules set up to disallow this sort of thing I'd still be in trouble, but so far that hasn't been a problem.
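In code terms, what the relay setup amounts to is just composing a message whose From: is the other identity and handing it to an SMTP server willing to accept that. A sketch with Python's standard smtplib and email modules (all addresses and hostnames below are placeholders; the real configuration is done in Gmail's settings UI, not code):

```python
import smtplib
from email.message import EmailMessage

def build_message(from_addr, to_addr, subject, body):
    """Compose a message whose From: is a client identity, to be
    relayed through an SMTP server that permits that identity."""
    msg = EmailMessage()
    msg["From"] = from_addr
    msg["To"] = to_addr
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

msg = build_message("me@client.example", "them@other.example",
                    "Status update", "Sent via a relay, no 'on behalf of'.")

# Actually sending requires a relay that accepts the From identity
# (and the identity's domain must not have an SPF record forbidding it):
# with smtplib.SMTP("mail.my-isp.example", 587) as smtp:
#     smtp.send_message(msg)
print(msg["From"])  # me@client.example
```

The SPF caveat in the paragraph above is exactly why this works with a permissive ISP relay but would fail for a domain that publishes strict sender rules.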
Categories: brucecoopernet, email, gmail and Exchange

The Brown Paper Collective Launches

Melissa and some of her colleagues are launching their new collaboration, The Brown Paper Collective, at 'this is not a design market' in Melbourne in July.

The Brown Paper Collective is a group of artists from the fields of drawing, glass and jewellery, and the market at which they will make their first group outing will be happening in Melbourne on Sunday the 19th of July at The Factory, 500 La Trobe St, Melbourne from 10am - 5pm.

Categories: brucecoopernet, jewellery and melissa

Online mind maps

I was going to write a blog post yesterday about where integration platforms are going, which seems to me to be online web apps without the need for an IDE at all. Some products, like Oracle's AquaLogic ESB, are pretty much already there. I couldn't quite gather my thoughts properly though, so I thought I would do a mind map. I was going to use FreeMind, but considering I was talking about web apps, I thought I'd search for what's out there. It turns out there are heaps of online mind map editors, but most of them charge for anything but basic capabilities.

I did find mind42, though, which appears to be quite good. I think I'll be recommending its use in the future. Below is the mind map that I'm working on. As you can see, I still haven't gathered my thoughts very well :-/ One of the nice things about using an online service is that we can post links (just like the one below) in our wiki documentation and have them update live when the mind map changes. Brilliant!

Categories: brucecoopernet, computers, product endorsement and mind map

The king is alive, and he's riding the tram network in Melbourne

Moments after taking this shot, the passenger folded up his Elvis cut out and got off the tram. Given that the cutout had well worn fold marks at the hips and knees I can't help wondering if he takes the king with him on all tram rides for company (or deterring other people from sitting next to him)

Either way it's hilarious
Categories: brucecoopernet, trams, silly and melbourne

How to automatically forward email from Exchange without losing headers

UPDATE: I've now created a service to make forwarding email much easier. If you are interested in this, please have a look at the service's site.

I've got a million email accounts. Every time I start work at a new client site, I get given yet another email account. It's a pain in the butt to manage all of these, so wherever possible I forward the mail on to my main Gmail account, where it can get filtered, stored and searched easily.

This works great for sites with Unix-based email, but more often than not my clients use Microsoft Exchange for their mail. You can set up a redirecting rule to forward the email (as long as the server has been set up to allow this), but when it does so it strips the To: and CC: headers from the message when it sends it on. For example, if Bob Log had sent a message to several recipients, when it was forwarded on to my Gmail account it would still appear to come from Bob Log, but the To: would just be my address, not the original recipients. This won't work!

I've toyed with a number of solutions to this problem, including writing a bot that uses EWS to poll the server, re-form the message and send it on. But that requires a bot running on the client's LAN, and it may not even be able to forward the message on, as getting access to Exchange to send mail can sometimes be difficult.

I now have a solution that works without requiring any additional software to run on the client's network. It involves forwarding the message to an intermediate account as an attachment (which preserves the headers), filtering the message back out of the attachment at the intermediate account, then sending that message on to Gmail in all its original glory. To do this, you require an intermediate email account that you can use purely for the purposes of filtering the mail, and which is capable of piping incoming mail to a Perl filter script. Generally, these aren't that hard to come by. I happen to have a getting-started plan with Cove here in Melbourne which fits the bill nicely. It's free to run (although they do have a $2 setup fee) and its servers are in Australia, which is a benefit for me. If you live elsewhere, there are probably other options that would do just as well.

To set it up, there are three steps:

step 1: create the filter script
Below is a Perl script which takes an email address as a parameter and reads a MIME-encoded email on STDIN. It looks for a part with a content type of message/rfc822, which is an embedded message, and streams that message out to sendmail with the supplied email address as the final destination. This file should be uploaded onto your server somewhere where the mail filter can get hold of it.

I originally wrote a script that used the MIME::Parser Perl module, but I've found that most hosting providers don't have that module installed, so it was easier to just do it from scratch. I'm not really a Perl programmer, nor have I spent a lot of time on this script, so it could definitely be improved, but it works!

#!/usr/bin/perl
use strict;
use warnings;

my $recipient = $ARGV[0];
my $boundary = '';
my $endMarker;
my $sendmail = "/usr/sbin/sendmail -oi $recipient";

# Reads a line from STDIN, and makes sure it isn't the EOF
sub fetchLine {
    my $txt = <STDIN> or die "Reached end of file prematurely";
    chomp $txt;
    return $txt;
}

# Reads a message part, looking for an rfc822 message. If it finds one, it
# forwards it on to the recipient. When it finds the part end, it returns 1
# if there are more parts, or 0 if it is the end of the message.
sub parsePart {
    my $isMessage = 0;
    my $returnCode = -1;

    # First, read the part headers, looking for a content type
    while ((my $text = &fetchLine()) ne '') {
        if ($text eq 'Content-Type: message/rfc822') {
            $isMessage = 1;
            open(SENDMAIL, "|$sendmail") or die "Cannot open $sendmail: $!";
        }
    }

    # Then read the body, streaming it out if it is a message.
    # End the loop when we find a boundary or the end marker.
    while ($returnCode == -1) {
        my $text = &fetchLine();
        if ($text eq $boundary) {
            $returnCode = 1; # Meaning we still have parts to parse
        } elsif ($text eq $endMarker) {
            $returnCode = 0; # Meaning we are finished parsing the message
        } elsif ($isMessage) {
            print SENDMAIL "$text\n";
        }
    }
    close(SENDMAIL) if $isMessage;
    return $returnCode;
}

# First, read the top-level headers, looking for the multi-part content
# type and boundary separator
while ((my $text = &fetchLine()) ne '') {
    if ($text =~ m/^Content-Type: (.*)/i) {
        my $nextline = &fetchLine();
        $nextline =~ m/\s+boundary="(.*)"/i or die "Could not get boundary after content type";
        $boundary = "--$1";
        $endMarker = "--$1--";
    }
    # We don't care about any other headers
}

# Check to see that we have a boundary
die "No boundary found" if $boundary eq '';

# Skip until the first part separator
while ((my $text = &fetchLine()) ne $boundary) { }

# Parse the message, looking for a part with a type of message/rfc822
while (&parsePart()) { }

exit 0;

step 2: set up the filter in cpanel (or whatever else you use)
On your server, you now need to set up mail filtering so that any incoming mail from your work account that isn't a bounced message gets sent to your filter script. In cPanel, I did this by setting up an email filter for all mail, which looked something like this:

step 3: enable forwarding of mail in Exchange
Now that you've got your email forwarding filter set up, all that remains is to set up Exchange to forward any incoming mail to your filter account. You do this by selecting Rules and Alerts

And then setting up a rule that looks like below:

Be careful with what you put in your rule definition, as some rules are "client only", which means they will only run when Outlook is open. As an example, I tried to make the rule also mark the message as read when it forwards, but that made the whole rule "client only", so it wouldn't run unless Outlook was open :(

Once you've got that set up, you can test. Send a message to your work email and see if it makes it through. If anything goes wrong, it should bounce with a message telling you what went wrong. One thing I did notice is that if I send the test message from Gmail itself, Gmail tends to ignore the forwarded copy because it already has one (in Sent), so I send test messages from an alternate account just to be safe.

Okay, so in summary: it's possible to forward messages on from Exchange, but it requires a man in the middle to extract the message contents, and it's a bit fiddly. If you like this tip, let me know.
Categories: brucecoopernet, computers, forwarding, email, cove and Exchange

Could I use my iPhone to work on?

I'm an IT consultant. As a result, I spend the vast majority of my time at work doing one of the following
  1. Reading or Composing Email
  2. Reading or writing Word Documents
  3. Editing our corporate Wiki
  4. Researching stuff (or skiving off) on the Web
  5. Looking at Microsoft Project plans
  6. Very occasionally coding... very occasionally
To perform these tasks, I lug around a quite heavy laptop. It's not a particularly special laptop, but it does the job. I would like to exchange it for something lighter and easier to work with in order to save my back, especially when I ride to work. I originally thought about getting a netbook. They seem to fit the bill nicely, except for a couple of annoying things:
  1. The screen is a tad small to be using all day long
  2. The keyboard could be considered small to be using all day long.
  3. They don't really have enough grunt to do coding
The first two problems can easily be solved by using an external keyboard, mouse, and screen. I always work in offices, so it generally isn't hard to find something I can appropriate while there. The third problem is a little more tricky. One way I've thought about solving it is using remote desktop to a server. Given that I generally need a server to code on anyway (I do enterprise SOA work), this seems to make sense: I simply log into the server (either via ssh or VNC/RDP) and I can do anything I would have originally wanted to do on my laptop, albeit with a little more lag. An RDP server would also let me do those Windows-only tasks that I occasionally need, without bloating my netbook with software.

This sounds great, and I might just do it, but why should I carry a wee little laptop around if I'm never going to use it as a laptop? I'd use my iPhone when I'm on the road, and plug the netbook into a KVM when I'm in an office. Why not just use the iPhone? I love my iPhone, and I carry it with me everywhere. It can do most of what I need just as well as a netbook, but it suffers from the smallness problems even worse than a netbook does. Why couldn't Apple make a docking station for the iPhone that allows it to work with an external keyboard, mouse and screen? That way, I could carry my phone around with me, get to work, plug it into the docking station, and work directly on my phone.

I think the docking station would need the following features to be successful:
  1. Be relatively small, so that it can be transported if necessary
  2. Provide charge to the iPhone while operating
  3. Have the following connectors:
    1. 1x Power - I would prefer an integrated transformer, but a wall wart would work too
    2. 3x USB - one for keyboard, one for mouse, one spare for something else...
    3. 1x DisplayPort, or DVI, or whatever to connect up a monitor
    4. 1x Ethernet port
    5. 2x Speakers
    6. Audio jacks for external speakers and mic
    7. RCA/Composite video out, so that it can do everything a current iPod dock does
    8. IR receiver for those cute little remotes.
    9. Possibly a phone handset to allow it to be used as a phone while docked. Perhaps just a jack to allow a handset (or hands free kit) to be plugged in
  4. Provide at least 1920x1200 resolution screen - this would probably involve improving the graphics card of the iPhone
  5. Be capable of receiving calls while in the dock. If the user removes the phone from the dock to receive a call, the user's session should be saved so that he can pick up where he left off when it is re-docked. Likewise, if I pull it out of the dock in the evening, take it home, and dock it again, my session should pop straight back up
  6. Be capable of running faster (at a higher clock speed) when plugged into power. iPhones are deliberately left running at a low clock speed to conserve battery power, but when plugged in they could easily ramp up.
I realise that iPhone apps, as they currently stand, would not be suitable for use on a large screen, but they could be adapted. Alternatively, the phone side and the desktop side could be kept largely separate, with dedicated desktop applications (just ports of the normal OS X versions) alongside the mobile versions. They would still need to synchronise app data (bookmarks, for example), but that wouldn't be difficult to achieve.

I don't think this is an especially original idea. I know other people have thought about it for ages. I just wish we could convince Apple to produce it as a product. Here's how I think we do it: tie-ins to .Mac. .Mac is a good service, but most people don't want to fork out for what they can get for free elsewhere. If the iPhone-plus-dock had better integration with .Mac, it would be a much more compelling offering. iDisk is a perfect example: devices with limited storage need online storage. Voilà!

Ahh apple, I doubt you will ever read this, but if you do, please make this device! I'll buy two. Lots of people I know will buy them. It'll be awesome.
Categories: brucecoopernet, computers, iPhone, mobile computing and apple

Tasmanian Holiday 2009

Melissa and I just got back from our holiday in Tasmania. We spent 5 nights at Cradle Mountain, 2 in Launceston, and travelled on the Spirit of Tasmania. As long as you don't mind losing a night, the Spirit of Tasmania is not a bad way of travelling. Below are some blurry iPhone camera shots from the trip

Categories: brucecoopernet, Family and holiday

Melissa's Thesis Corrections

Melissa got her thesis corrections back today. One correction in total, and that was for a typo.

Congratulations Mel. That's a great effort!
Categories: brucecoopernet, jewellery, w00t, thesis and melissa

Melissa's Jewellery exhibition

My wife Melissa's final Master of Fine Art exhibition is happening on the 1st of April (no, it's not a joke) at Monash Uni in Melbourne. Click on the image for more info and a slideshow.
Categories: brucecoopernet, jewellery, shameless plug, melbourne and melissa

More bike lanes for Melbourne

I saw this in The Age this morning. It's good to see that cycling as a form of transport is getting some more attention these days. Melbourne used to be consistently rated as one of the world's best cities to live in, based in no small part on its excellent transport options. Due to decades of under-investment we can no longer lay claim to that title. Maybe we can start heading back in the right direction.
Categories: brucecoopernet, Fitness, cycling and melbourne

Present Time

When I started trying to lose weight, I told myself that if I reached a weight milestone, I would get myself a present, paid for by my tax return. It couldn't be a present that would undo all of my good work, so chocolates and wine were out; rather, it had to be something that would excite me but also help me to keep getting fitter and losing weight.

The reward weight I chose is 80kg. It's not my goal weight, as I would need to go further still to get rid of my belly; rather, it was intended as an interim reward.

Well, I was getting close to the reward weight yesterday, so I went down to Goldcross Cycles in Richmond, Victoria and bought a 2008 Fuji Team bike, on the assumption that I wouldn't be able to pick it up straight away and that by then I would probably have hit my weight. To my surprise, I got there today, so I'm very happy.
Categories: brucecoopernet, Fitness, cycling and weight

Cute Design Poster

Available from Etsy
Categories: brucecoopernet

Cloud Computing in Development

Virtualisation is a pretty commonly known practice these days. IT operations staff use it as a method of consolidating servers and getting rid of old legacy hardware. Now we are being presented with on-demand virtualisation facilities, usually referred to as Cloud Computing. These allow any user to create a virtual machine as a clone of a disk image at any time, use it for a while, and then throw it away. This presents some interesting opportunities to streamline the development process, especially on large distributed systems such as one would find in a SOA-style architecture.

When developers are working on a given task, they will generally wish to work in their own environment, one which is not subject to any other changes (and starts and stops) that other developers may be making at the same time. This becomes more and more important as the size of a development team increases, as more and more changes will be put into the system every day. Traditionally, each developer will run a copy of the software on his own development machine, or on a shared development server. Every couple of days, if the Continuous Integration system indicates that the software is in a working state, the developer will update his system with everyone else's changes, and can continue developing on an up-to-date copy of the software. Depending on the level of changes, this could take a significant amount of time.

If each developer in the team has their own environment, we quickly reach the point where there are dozens of environments, all running slightly different versions of the system. If we add in the system testing environments, it all becomes very complicated very quickly. On a recent project, we had a total of 40 developers working on a distributed Web Service based system, co-operating to provide a business capability. Performing a full build from scratch took approximately one and a half hours. Every few days, each developer would spend this time getting his system up to date. We also had two engineers working almost full time on keeping the system test environments up to date, along with managing the other aspects of environment management (operating system updates, testing bug fixes provided by software vendors, etc.). This is a lot of time spent just keeping environments up to date. To make things worse, if a problem is discovered during the build process or a bug sneaks into the system, the person maintaining the environment faces the prospect of going through the entire build process again to revert to an older copy of the software.

This is where Cloud Computing can help, or rather two important concepts from it: disk snapshots, and the ability to quickly create computing environments. The concept is quite simple: when a user (developer or tester) wants an up-to-date environment, he goes and finds the disk image of the last known good Continuous Integration build, clones it, runs it as a virtualised environment and voilà! No waiting around for one and a half hours. No worrying about whether the build has completed successfully or not. No wondering whether the little experiment you did last Thursday has affected the operation of your system, because you have just created a completely fresh environment.

When a new tool is required for the development environment, or a new version is released, instead of instructing each developer to install it separately, all that has to be done is to update the base image; the next time each developer creates an environment, he will automatically pick up any changes that have been made.

The situation is just as good for the system testing environments, as generally we want all of our system testing environments to be identical. Instead of having to build each environment separately, we simply build it once, and clone it as many times as we need it.

Snapshots and Cloud Computing also have the advantage of making very efficient use of the available computing resources, particularly disk storage. By using copy-on-write volumes, each environment only requires disk storage for what has changed relative to the base image it was cloned from. Because the environments will be 99% the same, each one uses very little storage at all. One-terabyte disks are commonplace now, and would be capable of storing hundreds of disk images.

But the best advantage of this approach is that it drastically reduces the workload on the project's environment engineers. If a change is required, or a particular developer needs another environment (say, to test operation in a clustered configuration), he can do it himself. The few tasks the engineer still needs to perform also scale much better: he can manage a project with 100 engineers almost as easily as one with just 10.

Of course, there are a few things that need to be done to your environments to support operating in a cloud computing system, especially in the presence of cloning. For example, Oracle's application server OC4J stores the hostname and IP address of the server in its configuration. This will need to be changed each time the disk image is cloned. Many cloud computing environments (including EC2) do not support multicast either, so alternative methods must be found for managing clusters. None of these problems are insurmountable, however.

More of a problem is the licensing arrangement for Application Servers. Some vendors do not charge license fees for development environments, which is great. Others, such as Oracle, charge for each server, or each named developer. It is difficult to reconcile this licensing model with a cloud computing system, where environments come and go very often.

The final challenge is organisational. Some development shops are not set up in a way that makes it easy to use cloud computing services. It may not be possible to get to EC2 from your intranet, or management (or even your client) may be nervous about running software on computers that are not under their direct control. To get around this, you might be able to set up your own virtualisation cloud within your organisation. It's not that hard to do, and depending on how sophisticated you make the setup, you may get most of the benefits you would see from a real cloud computing provider.

First up, we need some shared network storage on which to keep our disk images. Because different people on different computers will want access to the images, we need a way of reaching them over the network. ATA over Ethernet (AoE) lets other computers (the virtualised ones) access a central storage server over the network and treat its disk images as normal drives. iSCSI is another option: it is more standards-compliant and works over routed networks instead of just the local Ethernet segment, but it is a little more resource hungry. Both are well supported by Linux, so either should be easy to use.

Whatever storage solution we choose should also support copy-on-write snapshots of disk images. Linux LVM has snapshot support, but performance drops as the number of snapshots increases. A better solution is ZFS, which comes with OpenSolaris. ZFS has very good snapshot support, along with other new and exciting storage features, but OpenSolaris only supports iSCSI, not AoE. That's fine, as the xVM virtualisation solution we are about to talk about has iSCSI support out of the box. Once the image snapshot box is set up, it is important to keep spare parts and take backups, as it becomes a central point of failure for all of your developers.
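As a concrete sketch of the snapshot-and-clone step, here is roughly the ZFS command sequence a CI hook might issue after a good build. This is illustrative only: the commands are printed rather than executed, the tank/devimage dataset name is made up, and shareiscsi was the OpenSolaris-era ZFS property for exposing a volume as an iSCSI target (later Solaris releases use a different mechanism).

```shell
# Sketch only: prints (rather than runs) the ZFS commands a CI hook might
# issue after a good build. The dataset name tank/devimage is hypothetical.
zfs_publish_cmds() {
  build=$1   # CI build label, e.g. build-1234
  dev=$2     # developer clone name, e.g. dev-bruce
  # Take a read-only snapshot marking the last known good image
  echo "zfs snapshot tank/devimage@${build}"
  # Give the developer a copy-on-write clone of that snapshot
  echo "zfs clone tank/devimage@${build} tank/${dev}"
  # Expose the clone as an iSCSI target (OpenSolaris shareiscsi property)
  echo "zfs set shareiscsi=on tank/${dev}"
}

zfs_publish_cmds build-1234 dev-bruce
```

Because the clone is copy-on-write, each developer's environment initially consumes almost no extra space on the storage box.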

So, now that we've got our snapshot storage sorted out, how are we going to do the virtualisation? If we were going to set up a full cloud, we could use Eucalyptus. That would allow us to centralise all of our virtual environments and provide proper scalability. Each developer is likely to need only one environment though, and even laptops these days have enough oomph to run at least one virtual machine. So why don't we let our developers run the virtualised environment directly on their own machines? Sun provide a virtualisation solution called xVM VirtualBox which can source its disk images via a built-in iSCSI driver. It also has a command line utility, which makes it very easy to drive from scripts. Perfect.
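To show what the scripted side might look like on a developer's machine, here is a sketch that prints the VBoxManage calls needed to register a VM backed by the iSCSI clone and boot it headless. The VM name and NAS hostname are placeholders, and the exact VBoxManage sub-command syntax differs between VirtualBox versions, so treat this as the shape of the script rather than a recipe.

```shell
# Sketch only: prints the VirtualBox CLI calls that would attach a
# developer's iSCSI-backed clone and boot it. The VM name, NAS hostname
# and iSCSI qualified name (IQN) are all hypothetical placeholders.
vbox_boot_cmds() {
  dev=$1    # VM / clone name, e.g. dev-bruce
  nas=$2    # storage box hostname, e.g. filer.example.local
  # Create and register an empty VM definition
  echo "VBoxManage createvm --name ${dev} --register"
  # Attach the iSCSI target backing this developer's ZFS clone
  echo "VBoxManage addiscsidisk --server ${nas} --target iqn.2009-01.local.example:${dev}"
  # Boot it without a GUI window
  echo "VBoxManage startvm ${dev} --type headless"
}

vbox_boot_cmds dev-bruce filer.example.local
```

Wrapping these calls in a script means Continuous Integration, or the developer himself, can stand up a fresh environment with one command.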

All that is left now is to produce the tools that will allow users (and Continuous Integration) to manipulate disk images from their desktops, and to set up a base development image. As the technique is intended for internal development, we have chosen to use shell scripts and SSH, with a web application fronting them to allow for manual management. Setting up the images can be tricky. Luckily, Oracle have already published some Oracle Fusion images. You can download an image and convert it to a VDI image (instructions to follow).

So, we can set up a virtualised environment with snapshot management for use in development using only one extra server and a few cheap drives. This provides a very easy way of setting up this sort of development environment without having to go down the sometimes difficult path of convincing your company to fully embrace cloud computing.
Categories: brucecoopernet, SOA, development, virtualisation and cloud computing

UDDI registries and Mocking

This is a re-post of an earlier article I wrote. Any links will now be busted... Sorry.

UDDI registries provide a number of features. Primarily they are billed as governance mechanisms for enterprises running SOA environments. They also provide endpoint indirection capabilities, which are useful from a governance perspective, but can also be used for testing. This is an extension of the dependency injection pattern, commonly used in object oriented programming, into the distributed world. When a developer unit tests, he wants to test only his component, and not necessarily its dependencies. This leads to more directed testing, and means that a high level component can be developed in parallel with lower level components, or even before them.

By using dependency injection, we can abstract away the dependencies of a component and replace the code that we are calling with an interface which gets updated at run time to point to the implementation that we want: either the real implementation (for production) or a testing stub implementation that suits the test that we are running. This approach is generally called Mocking and is in widespread use throughout the industry.

In the SOA world, our interfaces are WSDLs, so we don't really need to change our development practices to support dependency injection; all we need is a mechanism. Oracle's BPEL product provides a test tool that allows the developer to run his orchestration process in a special mode where any invoke/receive step can be replaced by a mocked response. This is great, but it is not a general solution, and the testing tool has several... well, bugs... that make it more difficult to use. UDDI registries also provide the capability to indirect endpoints, so they could equally well be used in this regard. They have the advantage of being ubiquitous: wherever you invoke web services, you can use UDDI. Those endpoints can be changed to point to your mock services, possibly written in SoapUI. What this requires is runtime access to the UDDI server. The general process for running a unit test would then be:
  1. Set up your UDDI registry to contain your consumed interface (the target service)
  2. Code and deploy your consumer service to use this UDDI service
  3. Run your unit test
    1. Start up your Mock Service
    2. Update the UDDI endpoint to point to the mock
    3. Run the test code
    4. Restore the UDDI key to its original value (optional)
  4. Report your Results
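The inner steps could be wrapped in a small driver script along these lines. This is a sketch: mockservicerunner.sh is the mock service runner that ships with SoapUI (its arguments may differ by version), while uddi-set-endpoint.sh and the CustomerService name are hypothetical stand-ins for whatever endpoint-update tooling your UDDI registry provides. The commands are echoed rather than run.

```shell
# Sketch only: prints the commands a mocked unit-test run might execute.
# mockservicerunner.sh is SoapUI's mock runner; uddi-set-endpoint.sh and
# CustomerService are hypothetical placeholders for your own tooling.
run_mocked_test_cmds() {
  service=$1    # UDDI service name/key to repoint
  mock_url=$2   # URL the SoapUI mock listens on
  # 1. Start up the mock service in the background
  echo "mockservicerunner.sh -m ${service}Mock mock-project.xml &"
  # 2. Update the UDDI endpoint to point at the mock
  echo "uddi-set-endpoint.sh ${service} ${mock_url}"
  # 3. Run the test code
  echo "mvn test"
  # 4. Restore the UDDI key to its original value (optional)
  echo "uddi-set-endpoint.sh ${service} --restore"
}

run_mocked_test_cmds CustomerService http://localhost:8088/mock
```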

For this to work correctly your tests must have control over the UDDI server. For the duration of the tests, the endpoints must be set correctly on your UDDI server for the tests to work. If you have multiple developers all working on the same system, this means that they each need their own environment. If a common environment was used, one developer could update the key for a service that another developer was using, thus making the results inconsistent.

Now product licensing rears its ugly head. Commercial products need to be licensed to be used, and most SOA toolsets do not offer free licenses for these tools. The thinking is that these are enterprise tools and should be charged for appropriately. This is unfortunate, as it makes dependency injection and unit testing using services very difficult. jUDDI is an open source implementation of the UDDI v2 specification. On my project we are currently evaluating whether it would be appropriate for use as a dependency injection tool. If it is, that will be great, but I still don't think it's ideal. Not all products are created equal, and we will need to perform additional testing to make sure that our system supports our commercial, expensive UDDI registry as well as the free jUDDI. It also means that we are restricted to the lowest common denominator of functionality between the two products.

Life would be much easier if the SOA tools were freely licensed for development purposes. I'm probably hoping against hope, but to me it makes good business sense. If you let developers use your products for development, they will get used more in production.

Categories: brucecoopernet, SOA and Oracle

Error Handling in Oracle's ESB

This is a re-post of an earlier article I wrote. Any links to the old version will now be busted... Sorry.

In our project, we follow a strict governance process for our services. We identify the services and their operations from our Business Process Model, then proceed to producing a WSDL and associated XSDs to represent each service. Only once this is done do we proceed to implementation. This is called top down design and is generally a good thing: we end up with a clean design that represents the ideal, pure business requirements, rather than being technology driven, as is often the case with bottom up designs. Nothing new here.

Sooner or later the rubber hits the road, and we end up implementing the service using some sort of technology. In a recent case, Oracle's ESB product was selected as the implementation technology for some entity services that we are developing. It allowed us to provide a SOAP interface to our entities, whilst still preserving transactions and speedy performance by tying into WSIF and Oracle's optimised message delivery capabilities. So far, everything was looking rosy.

Then we got to implementing the fault handling. I think most people would agree that having a variety of different faults for an operation is a good idea: that way, the caller can distinguish between the different types of fault that can happen. In particular, this service had these faults:
  1. PersistenceFault, if there was a general technical fault with the service (e.g. the database was down)
  2. IllegalUpdateFault, if the caller has attempted to modify a field that they shouldn't be.
Now we find a limitation of the ESB product. It turns out that the current version doesn't support operations which can return multiple faults. All products have their limitations, so all we need here is a workaround, right? What are the possible solutions? Here's what we came up with:
  1. Create some sort of XSL mapping in the ESB to try and fudge multiple faults: this is impossible, as normally faults would have different messages, and the ESB will only route to one destination message.
  2. Use a common fault type, and distinguish between the different faults using a fault code, either numeric or enumeration based. This will work, but there is no way of including structured data in the fault, as it will need to be generic
  3. Use a common fault type which is a <choice> of the different faults, and get the caller to work out which one it was.
  4. Use a common fault type, and then extend it via XSD methods, and use polymorphism to tell the difference.
Options 3 and 4 are discussed further below:

Option 3.

<xsd:complexType name="SimpleFaultType">
  <xsd:sequence>
    <xsd:element name="faultstring" type="xsd:string"/>
    <xsd:element name="detail" type="xsd:string"/>
  </xsd:sequence>
</xsd:complexType>

<xsd:element name="CombinedFault">
  <xsd:complexType>
    <xsd:choice>
      <xsd:element name="PersistenceFailureFault" type="SimpleFaultType"/>
      <xsd:element name="IllegalUpdateFault" type="SimpleFaultType"/>
    </xsd:choice>
  </xsd:complexType>
</xsd:element>

Option 4.

<xsd:complexType name="BaseFaultType">
  <xsd:sequence>
    <xsd:element name="faultstring" type="xsd:string"/>
    <xsd:element name="detail" type="xsd:string"/>
  </xsd:sequence>
</xsd:complexType>

<xsd:complexType name="PersistenceFailureFaultType">
  <xsd:complexContent>
    <xsd:extension base="BaseFaultType">
      <xsd:sequence>
        <!-- Place additional fields in here -->
      </xsd:sequence>
    </xsd:extension>
  </xsd:complexContent>
</xsd:complexType>

<xsd:complexType name="IllegalUpdateFaultType">
  <xsd:complexContent>
    <xsd:extension base="BaseFaultType">
      <xsd:sequence>
        <!-- Place additional fields in here -->
      </xsd:sequence>
    </xsd:extension>
  </xsd:complexContent>
</xsd:complexType>

Using this approach, the fault message/element in the WSDL will be of type BaseFaultType. The caller can then interrogate the xsi:type attribute to work out exactly which fault has been thrown.

Our entity pattern calls for using the ESB to wrap EJB functionality, and will need to route exceptions back from the EJB into the Core Data Model version of the fault. This will need to preserve any polymorphic faults, which represents additional work both for the Java developer and for the ESB designer, who must write a non-standard XSLT file to map the exceptions. This approach seems quite neat, but it does sacrifice type safety. Unlike Option 3, a consumer looking at the WSDL will not know exactly which faults a web service can throw, as in theory it could throw any of the extensions of the base fault type. Instead, he will need to look at additional documentation (we call this the CSP, or Consumer Service Profile) to work out which faults can be thrown by the service. Developer tools such as JDeveloper will also not be able to use wizards to interrogate parts of the fault message, for exactly the same reason. Instead, the developer will need to examine the xsi:type attribute, then copy the XML element into a variable that represents the right fault (this is as close as BPEL gets to casting). If we were to take the xsd:choice approach, the WSDL would represent all of the fault information in the XSD, and JDeveloper would be able to pick up the types and work with them. In addition to all of this, a special BaseFaultType would need to be added to the Canonical Data Model (CDM). This would not be necessary with Option 3, as the CombinedFault element is specific to each service and lives in its local namespace.

The Wash Up

Either option 3 or option 4 will work. They are effectively the same solution, but use different mechanisms to shoehorn multiple exceptions into the one message. Due to the enhanced type safety, my position is that Option 3 is the way to go. Either solution is a pain in the butt, for a few reasons. The consumer is going to have to perform logic when a fault occurs to work out which error occurred. This will unnecessarily clutter up our BPEL or Java processes, especially if the different faults have different scopes. For example, we might want to deal with an IllegalUpdateFault within a tight context, but bubble a PersistenceFault out to a wider context. The fault handling code in the consumer will need to deal with that. We also sacrifice readability of the interface itself, as it becomes difficult to see which faults are being thrown where. But the really annoying thing is that our technology is now dictating the terms of the interface. We cannot participate fully in top down design because our tool doesn't support all of the ways that we may want to shape our WSDL. For something as fundamental as multiple fault handling to be left out is, in my opinion, unforgivable.
Categories: brucecoopernet, SOA and Oracle

Weight Loss

I'm trying to lose weight, and I'm tracking it using Google Docs

Here's a graph of my current weight. Due to some Google Docs charts weirdness, I can't show the actual weight, so you'll have to read 0 as 80, 1 as 81, etc. Hopefully this graph will change as I update the data. That's what I'm checking out.

How am I achieving this? Work kindly provides a gym, and has hired personal trainers to come in each day during the lunch break. By going to these I'm managing to drop some weight, plus I'm finding it has some motivational benefits as well. I feel happier in the afternoons and more willing to work with a smile on my face :)

Nobody cares, except for me of course, but isn't blogging the height of narcissism anyway?
Categories: brucecoopernet, Fitness and mememe

I give in

I have run my own blog, on and off, for some time now. I was paying for hosting and setting up my own WordPress install.

The first one got deleted due to user error (PEBKAC).
The second one got defaced by remarkably funny Russians.
The third one had its database corrupted.

I don't care if Google has my information any more. You win, it's easier to do it this way... oh well.
Categories: brucecoopernet and rant

Final Stop-Mo

It's the end of Movember, and now all of us Mo-Bros can finally get rid of the itchy, food-catching monstrosities!

Here's the final Stop-Mo video

Categories: brucecoopernet, silly, stop motion and movember

The right sunglasses for a man with a mo

Here's lookin' at you shorty!

Categories: brucecoopernet, silly and movember


That's right, I'm growing a mo for charity.

Get with it!

Movember - Sponsor Me
Categories: brucecoopernet, charity and movember


It's been a good week for music. In addition to seeing Darren Hanlon, we also went to the first filming session for season 5 of RocKwiz. Melissa was the nominated rock-brain from our table, and competed for a position on the show. Unfortunately she wasn't fast enough for the other freaks. A boatload of beer and a load of fun was had by all. I heartily recommend it to anyone who likes music.

Categories: brucecoopernet, rockwiz and gig

Darren Hanlon

We went to see Darren Hanlon at the Ruby Lounge Belgrave on Wednesday.

It rocked

Categories: brucecoopernet, music and gig

We're Engaged

Melissa and I got engaged at 1:30 am on Christmas eve!

Categories: brucecoopernet and Family

Out listening to the DJ DJ

Narelle's Boyfriend Dean is in town, and we went to see his DJ set at Luxe last night. Good work Deej, you had me dancing. He is playing again tonight (New Years) at 12am at the Bakery

Categories: brucecoopernet, friends and gig

Seasons Greetings

Merry Christmas everyone. Thanks to Mrs Yabuka for the lovely cake, which was gleefully consumed at a dinner party at Spaz's on Friday.
Categories: brucecoopernet, friends and holiday

New Job

I doubt anyone cares about this, but I've left my position at Thales Australia, and am now working as a software engineer contractor. My first contract is with BankWest.
Categories: brucecoopernet and work

Wedding Video

This has been posted for the wrong date. Our wedding was months ago... I must have just received this

Categories: brucecoopernet, Family and Wedding

It's My Birthday

And I'm not going to work. No siiree!
Categories: brucecoopernet and Family

Killer Dog

I was walking to get lunch today, past some nice houses in Nedlands. One property I passed had a dog in the front yard, presumably guarding the valuable property

Perhaps not. He looks vicious :)
Categories: brucecoopernet and silly

The Mountain Goats.... Again!

Yes, that's right. They keep on coming back, and we keep on going to see them. Another good performance from the boys. John snapped a G string (ohhh vicar!) in the first song, and to the crowd's dismay, didn't have a replacement. Luckily the support band had an acoustic guitar, so the entire gig was played with a borrowed guitar. They played Jenny so I was a happy camper.

Categories: brucecoopernet, The Mountain Goats and gig

And now for something completely different.... well mostly the same

Just to break the monotony of wedding photos, I thought I would post some honeymoon photos instead!

Categories: brucecoopernet, Family and Wedding

April's photos from the wedding

From Flickr

Thanks a bomb April!

Categories: brucecoopernet, Family and Wedding

Narelle's Photos from the wedding

Categories: brucecoopernet, Family and Wedding

We're married

Well, we've done it now....... we're married.

Melissa and I would just like to say thank you to all of the people who made yesterday such a special day. We're off on our honeymoon now, but we'll post photos when we get back.

If anyone has taken their own shots, please feel free to email them to me, or send a flickr link. We'll collate them all here.

We loves you guys!
Categories: brucecoopernet, Family and Wedding

And first in with photos for the wedding

Is John Boyland

Categories: brucecoopernet, Family and Wedding

The Automasters at Mojos

Last night, Melissa, Jason, Dzung, Jason's parents and I went to see The Automasters play at Mojos in Fremantle. The singer wearing the sunglasses is Jason's brother Brendan Hutchens, who is a TV presenter on the ABC by day. They were ably supported by Petanque, whom we have also seen playing with the Burgers of Beef. Gotta love a band with a Moog in it.

The automasters will be playing at Mojos every Tuesday night for the month of January

Categories: brucecoopernet, friends and gig

IMPORTANT: Drink Responsibly this Christmas ...One drink at a time !

Categories: brucecoopernet and silly

Girl Power

My girlfriend's sister's best friend (Hey Star Man!) drives a ute around and is very proud of it. I saw this sticker on the back of another Kingswood ute today, and immediately thought of her
Categories: brucecoopernet and silly


While in Florence, there was some sort of weird art installation whereby people had decorated life-size fibreglass cows and placed them throughout the city. These are some of the ones that I took photos of, not necessarily because I really liked them, but because they were everywhere.

Categories: brucecoopernet and europe

Back From Europe

Categories: brucecoopernet, europe and holiday

I'm off to France and the U.K.

I have been sent by work to France and the U.K. and will be away for 6 weeks.
Categories: brucecoopernet and europe

The Mountain Goats' Triumphant Return

Last night, Melissa, Courtenay and myself went to see The Mountain Goats at the Rosemount Hotel in North Perth. We've seen John Darnielle live before, but not with his bass player, and not since he became quite popular.

Last time, John wasn't even the headline act, and as a result there weren't as many fans there. It had a very intimate feel to it. He played material from most of his back catalogue, including my favourite, "All Hail West Texas"

This time, the venue was much larger and there were a lot more people there. John got a lot more love from the audience, and he reflected it with a high intensity, high energy (for John) performance. He mostly played songs from his latest album, The Sunset Tree, but still managed to finish off with "The Best Ever Death Metal Band in Denton", which was very good. John's bass player was very good too. His playing was very minimalist, but placed just perfectly to fatten out the sound coming from this duo. In a departure from two-piece convention, he didn't play purely rhythm either, quite often playing melody lines of his own. At one point he launched into a brief bass solo, the only one I have ever heard that didn't come from the slap bass school of playing.
Categories: brucecoopernet, The Mountain Goats and gig

Is this the coolest ute ever?

I saw this ute parked out on the street near my house today.

I like it! I wonder what its fuel efficiency is?

Categories: brucecoopernet and silly

My first HTPC

I've been trying to put together an HTPC for quite a long time now, with little success until recently.

Recently, some of my workmates have started working on HTPCs of their own, which has spurred me on to try again. For a change, I succeeded, mostly thanks to the really good MythTV distribution KnoppMyth. It takes all of the hard work out of configuring Linux to work properly as a set-top box, especially for obscure hardware like the VIA EPIA-M motherboards

So, I now have a working setup to record digital TV programs... Yay!

Here are some pictures

The hardware list:
  • An old DVD-Player case (Thanks Peter!)
  • VIA EPIA-ME6000 motherboard
  • 60W DC-DC power supply
  • 256MB DDR400 RAM (Thanks Mike!)
  • 300GB Seagate Barracuda 7200.8 hard drive
  • ATI RF remote
  • Logitech diNovo Bluetooth keyboard/mouse
  • Netgear WG311 802.11g PCI card
  • Hauppauge Nova-T DVB tuner card
Things that are still wrong, or I need to fix:
  • The hard drive runs too hot, and you can hear it click when it seeks. I think I will replace it with a notebook drive
  • The Tuner card is too slow.
  • I think the power supply is a little too weak, as sometimes it struggles to boot up. I think changing the hard drive will help a lot
  • The EPIA-M boards, even with their hardware decoder, cannot decode HD (1080i) signals properly. I am considering building another machine using a Pentium-M solution
  • No DVD Drive, but that is just a matter of buying one and sticking it in
Categories: brucecoopernet, computers and HTPC

Yesterday was my 24th birthday

Yesterday (Thursday the 24th) was my birthday. We went out for a curry dinner with some mates, which was very nice. It also happened to be the date of my brother's new band's first performance at the Rosemount Hotel here in Perth. The band is called "Dobson and Fitch". It's a two-piece band, with Mark on guitar and his mate on drums. The guitar gets fed through a laptop to give it all sorts of harmonic distortion, which gives the sound a much broader feel.

It was actually really good. This is the first time I have seen my brother perform live. The harmonics give the music a complex and free flowing feel, yet the melody still runs strongly through the music. This gives it appeal both from an experimental electronic music and a straight up rock and roll perspective.

He could do with looking up from his guitar every now and then, though. He looked like Cousin Itt for most of the performance.
Categories: brucecoopernet and Family

Trail Map

Here's the Trail map from Perisher Blue

Categories: brucecoopernet and snow

Ski Trip Photos

For the last week, my girlfriend and I, her family, and a bunch of mates have been skiing at Perisher Blue in the Snowy Mountains.

Not much photographic evidence I'm afraid, as I was too busy snowboarding, but here is what I did get.

Ski Trip 2005
Categories: brucecoopernet, friends and snow

We're going on a ski trip

Melissa, her family, a few friends, and myself are all going skiing next week. As a commemorative piece, and to make it feel more like a school outing, we have screen-printed t-shirts for the expedition.
Categories: brucecoopernet, friends and snow

The Grates are Great!

Melissa, Courtenay and I went to see the Grates (supported by the Fuzz) at the Amplifier Bar on Thursday. It turned out to be a great gig, mainly thanks to the energy of the Grates' lead singer, Patience. She danced around for the entire gig with a gigantic smile on her face. Her energy rubs off on the crowd, which made it a memorable night.

Apologies for the poor quality. I haven't mastered the art of gig photography yet.
Categories: brucecoopernet and gig

flowers!!!..... okay leaves

I have a cyclamen pot plant, which has been through the wars over the years. As a result, it produces the most interesting shaped leaves that I have ever seen.
Categories: brucecoopernet and silly

Silly 60s Lambretta ad

Check out this fantastic Lambretta scooter ad, recently featured on Boing Boing.
Categories: brucecoopernet and silly

Otto von Bismarck reborn in Australia....

My nephew recently did a bit where he was telling his classmates about Otto von Bismarck. Isn't he cute?
Categories: brucecoopernet and Family

Oi! That's my Suit!

You know you are all grown up when your father starts borrowing your suit to wear to a wedding. He's the one in the darker suit. Still, he doesn't look too bad in it.
Categories: brucecoopernet and Family

Cheryl and J.P. get hitched.

It was a family weekend this weekend, with a wedding, an engagement party, and a dinner party for Mother's Day. The only photographic evidence I have is of the wedding, so here it is.
Categories: brucecoopernet and Family

Beer! In glasses - Brilliant!

While half pints of Guinness are cute, I know which of the two I'd rather be drinking.
Categories: brucecoopernet and silly


Melissa thinks my website needs doodles on it. This is her suggestion.
Categories: brucecoopernet and silly

Love a bit of Cake!

We went to a Cake gig last night. It was fabo! First we went to the newly refurbished and re-opened Vivace restaurant for some fine dining. Kirsten had volunteered to drive us to the gig, then she was going to go out clubbing. By the time we got there, we had convinced her to pay for a ticket at the door and come in.

A good thing too. Despite a crappy sound system at the Lookout and the lead singer, John McCrea, having a sore throat, they were fantastic. Everyone in the crowd was going nuts and singing at the top of their lungs. And that was just the first song - "Sheep Go to Heaven" (sheep go to heaven, goats go to hell).
Categories: brucecoopernet, Cake, Family, holiday and gig

Card for Gordon

A Card for Gordon's Birthday
Categories: brucecoopernet and Family