Circles Of Causality

The Misnomer of ‘Move Fast and Break Things’

For a while I’ve wanted to visualise the pain points of the development cycle so I could better explain to product & business owners why their new features take time to deliver, and also to debunk Facebook’s overused “Move fast and break things” mantra.

So some of you may know that Facebook recently realised this isn’t actually sustainable once your product is stable and established. It might work in the early conception of a product, but later on it will come back to haunt you.

At the F8 Developers conference in 2014, Facebook announced they are now embracing the motto “Move Fast With Stable Infra.”

“We used to have this famous mantra … and the idea here is that as developers, moving quickly is so important that we were even willing to tolerate a few bugs in order to do it,” Zuckerberg said. “What we realized over time is that it wasn’t helping us to move faster, because having to slow down to fix these bugs was slowing us down and not improving our speed.”

I’ve recently been reading Peter Senge’s “The Fifth Discipline: The Art and Practice of the Learning Organization” and thought circles of causality would be a good way to express this subject. Using systems thinking and looking at the whole picture can really help you stand back and see what’s going on.

(Diagram: circles of causality 1)

Be it growing user acquisition or sales, a good place to start is asking what you are trying to achieve and how to increase it exponentially. In this case let’s say that as a business you want to drive user engagement and grow your DAU. One possible way is to add new features to your product.

So following the circle in this diagram, we assume that by delivering new features we increase user engagement, which in turn grows our DAU/virality.

Let’s assume the product has been soft launched: you’re acquiring users, A/B testing initial features and growing a list of improvements and new features. Let’s create a ‘Backlog’ of all this work, prioritise it, and plan how to deliver it quickly using an Agile Scrum framework.

We want to deliver an MVP as quickly as possible, so let’s do two-week ‘Sprints’. The product team have lots of great ideas, far too many to fit into one sprint, and some of the new features require several sprints. Product owners & other business leaders debate the product roadmap and you have your sprint planning… simple, right? Well, to begin with, yes.

(Diagram: circles of causality 2)

So let’s look at what the size of the backlog does to delivery. In this diagram you see that the size of the backlog directly affects how soon improvements/features are made. Why? Because the backlog is also made up of bug fixes and technical debt, often inherited from your prototyping phase and from deploying your MVP.

You’d love to tell the business that you’ll just work on new stuff; but hey, worst case we deliver something every two weeks, though some features could take months to appear.

So with a relatively small backlog we are OK. Yes, some business leaders are a bit frustrated that their feature won’t get deployed in this sprint, but the short-term roadmap is clear, right?

The dev team get their heads down and get on with the sprint… but in the background product/business owners have moved on from yesterday’s must-have feature to yet another shiny new idea or potential crisis; the backlog grows and the roadmap changes. Features get pushed further down the priority list.

So the situation is that we have a fixed amount of resource, and over time the business gets frustrated at the pace of delivering changes to the product. Weeks have passed and only 25% of these ideas/features have shipped.

The Symptomatic Solution

So there are two potential fixes for this: move faster and drop quality so we can ship stuff quicker, or throw more people at it. Let’s look at what happens with the “Move fast, break things” mantra. To increase delivery speed we cut corners: drop some testing and code reviews, push developers towards hacky solutions, etc.

(Diagram: circles of causality 3)

As you see in this diagram, as you do this you create more bugs and the QA process takes longer. Any initial advantage is lost as this builds up.

Now we have also added a ‘side effect’: more bugs increase the size of the backlog, creating the opposite of the effect you intended in the first place.

So let’s put in more man hours (overtime) to get those bugs down and reduce this growing backlog. More overtime increases fatigue & reduces the quality of the work. Developers get burnt out, they make more mistakes, quality suffers, more bugs appear and everyone is even more demoralised.

Let’s look at the result of this on staff & the complexity of their work. In this diagram we see that by reducing quality we also increase code complexity, which generates technical debt, which again slows down development. Tech debt is pretty demoralising, as usually no one is invested in fixing it and in most cases you just work around it.

Adding more developers has a different outcome, with equally diminishing returns. A big team in an Agile framework isn’t always a great idea. The typical strategy is to organise your larger team into a collection of smaller teams, and the most effective way to do so is around the architecture of your system.

The harder you push, the harder the system pushes back

When you look at the whole system, each part has a cause and effect. The harder you push one part, the more the other parts are affected. Each of these parts needs to be balanced against the others so that the system runs efficiently. It’s also important to step back and make sure you are solving the actual problem, not just treating a symptom.

Balance

In this example the perceived view is that the team is moving slowly, whereas in fact they are moving at a pace that balances the system. “Move fast with stable infra” is the sensible option. Use system diagrams like this to seek out counterbalances to reinforcing circles.

Reference – Peter Senge, “The Fifth Discipline: The Art and Practice of the Learning Organization”, Chapter 5, “A Shift of Mind”.

Mindcandy is looking for an exceptional Android Engineer…

Mind Candy is looking for an exceptional Android Engineer to join the team building the world’s most exciting kids’ social networking app! The team moves rapidly, especially during the experimental phase of the project, and so will you. You’ll work closely with frontend designers, server engineers and members of the senior management team to design, implement and A/B test new features to make it fun for kids to connect with other kids. Culture is important to us – you’ll be humble but have huge aspirations. You’ll have a great work ethic, will thrive in a fast-paced environment and you’ll enjoy both autonomy and responsibility.

Responsibilities

Design and implement high quality, visually rich Android applications that work on a wide range of devices

Integrate with first- and third-party online services for in-app purchases, analytics, A/B testing and contacts

Collaborate with other mobile engineers to identify best practices

Requirements

BS degree in Computer Science or equivalent experience

Professional experience building Android applications

Advantages

Experience working in an Agile environment.

Experience with static analysis tools.

Understanding of Gradle build system.

Apply Now

Great British Summer Game Jam

https://www.facebook.com/GBSGameJam/timeline

Ready… Steady… JAM!
Autodesk and Mind Candy are combining forces to host an epic GAME JAM at Mindcandy’s famous central London HQ.
Keep a very British stiff upper lip whilst you get to grips with the best Game Developer technology in the business, such as Autodesk MAYA LT and Scaleform for Unity, and be ready to get your next game discovered as you put your skills to the test in this year’s most exciting game jam!

Tell your friends, form your team, learn the tools – then compete to win!
The full agenda of speakers is still to be announced, but make sure you save the date and stay tuned to Twitter (@GBSGameJam) and right here on Facebook, as the #GBSGameJam is not something to be missed!

Mindcandy Techcon 2014

On 11th February 2014 we held our own mini tech conference in London at the Rich Mix Cinema in Shoreditch.

Today we streamlined our jelly beans, expanded our guilds, sprinkled DevOps everywhere, emphasised our polyglotism, then un-cheated our backends!

In 2011 & 2012 we held a company Techcon to get all Tech Mindcandies together to share our technical experiences and technologies. It gives us an opportunity to share knowledge & gain insights into different teams and products that we wouldn’t normally get time for.

Jeff Reynar

As Jeff, our new CTO, joined us in the new year, it was a good opportunity for him to talk to all of us about our tech & strategy going forward, especially how we can build a great tech culture here so we can all grow and learn great things.

We had some great talks from all the teams & learnt some new things. One talk even caused an outbreak of nosebleeds… too much data!

A learning organisation

Collaboration was the focus of the day, where we talked about how to collaborate better across cross-functional teams & disciplines. And so the “Guilds” were born & there was much rejoicing.

DevOps soon followed, with a healthy smattering of dev and ops hugging. We have been practising DevOps methodologies here for a while, so we showed the fruits of our labour: from shared tools and infrastructure as code to automation & sharing the PagerDuty rota. We saw the future & it was continuous delivery… & there was much rejoicing.

The Middleware team splashed us with more water-themed services, Plunger & Pipe Cleaner. We are safe in the knowledge that our events can make it through the pipeline quickly. We learnt how we use Fluentd, AWS SQS & Redshift to get gazillions of events from our games into our data warehouse, and they showed improvements made to our identity & A/B testing services.

The Tools team had a vision… and it was “make things less crappy”. They talked about deployment tools & automation with the promise of making everyone productive & happy. We were happy… and less crappy. The future would be filled with tools that are whizz-bang and swishy like Iron Man… I have raised a JIRA ticket for my flying suit, it’s in the backlog, people!

The NetOps team talked about our implementation of autoscaling using CloudFormation stacks & how we manage dynamic, disposable infrastructure. Also, we got introduced to the Moshling Army, who will tirelessly automate and keep tidy our AWS accounts. The Bean Counter app would also keep all the product teams updated on a daily basis with their Amazon AWS costs.

We then had presentations by all the product teams, QA & IT OPS.

The product teams deep-dived into their front & back end architecture. We were shown how we load tested one of our game backends using Gatling, blasting an AWS Auto Scaling group with 1.2 million req/min and breaking Cassandra & RDS Postgres along the way. The monster 244GB, 88 ECU RDS instance took it in the end. (Err, time to scale out before we need that, methinks.)

This was swiftly followed by the “Bastardisation of Unity” & how we mashed it, wrung its neck & made awesome 3D on mobile devices. Apache Thrift made an appearance, & we learnt how we use the binary communication protocol in one of our apps.

The Moshi Monsters web team talked about the lessons learnt over the last 5-6 years of managing a complicated codebase. They revealed the pain of deployments back in the day & how they have been streamlined with “The Birth Monster” deploy tool. Wisdom was imparted about tech debt, code reviews & knowledge sharing.

Our Brighton studio dazzled us with their game architecture & visually jaw-dropping in-game graphics. The front end tools they built to improve workflow were awesome. Using timelines to combine animation, audio, cameras, game logic, UI & VFX means they can build stuff super fast. They also talked about cheat detection & the best ways to tackle it.

The QA team told us to pull our socks up! Together we should strive to always finish backlog stories, improve TDD & automation, and make sure we have plenty of time for regression testing. Fortunately, they are working like crazy on acceptance criteria for stories, testing & improving communication.

Finally, the IT Ops team joined the DevOps march by turning their Mac builds from manual to automated nirvana, using Munki & Puppet to handle the software and configuration of all our company Apple Macs. Amazed!

Also we learnt… never go on extended holiday.

Or this happens….

Looking forward to the next Mindcandy Techcon !

Migrating the Moshi Monsters backend from SVN to Git

Currently at Mindcandy we use a combination of SVN and Git for all our code. This is because storing lots of frequently changed binary Flash assets in Git is a pretty bad idea. There was also some legacy code that would benefit from moving to Git, but finding the time to do anything about it had been difficult.

Thanks to some recent cleanup and changes, some of those barriers have been disappearing, so over the last couple of days I’ve been getting the Moshi Monsters backend migrated over. It ended up being quite involved, and as we’ll hopefully be migrating more code over, I decided to write up how I did it.

Problem

Migrating a large SVN repository to Git can cause issues when it contains a large amount of history, tags and branches. This is primarily due to the differences in the way that SVN and Git handle commits and branches.

In short, when using git-svn to migrate, it’s necessary to pull down each commit from SVN, have Git calculate a commit hash for it, then re-commit that to the local repository. Furthermore, because SVN works by copying files for branches and doesn’t merge changes back into trunk in the same way as Git, it is also necessary to track back through every commit in a branch and calculate the commit information. Tags are awkward for similar reasons.

In a repository like the Moshi backend, with a little over 6 years’ history and plenty of old branches and tags, this can result in Git taking a lot of time and CPU to calculate this information, much of which is so old that it isn’t needed.

Interestingly though, if we ignore all the branches and tags and just pull down trunk into Git then the process takes about 10 minutes.

Solution

The decision was made to not migrate across all the branches and tags, but instead to get the entirety of the trunk history, and just a select number of the recent branches and tags. Unfortunately, this is a little fiddly to do with git-svn and requires a bit of config magic.

I’ll cover the commands necessary for doing this; however, there is some other information that is useful when doing an SVN migration that this won’t go over, primarily dealing with commit author name transformations. The article at http://git-scm.com/book/en/Git-and-Other-Systems-Migrating-to-Git covers that in more detail.
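
As a quick aside, one way to handle those author transformations (a rough sketch; the usernames and addresses below are made up) is to build a mapping file and pass it to git-svn with the --authors-file option:

# authors.txt – one line per SVN username
jbloggs = Joe Bloggs <joe.bloggs@example.com>
asmith = Anna Smith <anna.smith@example.com>

git svn clone -T trunk --authors-file=authors.txt http://svn.url/svn/repo/project/ project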

Commands

The first step is simple enough thankfully, and just requires cloning the SVN repository trunk folder, making sure to tell git-svn that this is just trunk. You can do this by using the -T or --trunk flag, which will make sure Git knows that there could be other folders containing the tags or branches.

git svn clone -T trunk http://svn.url/svn/repo/project/ project

It is worth pointing out that there may be multiple remotes with the same name, but followed by “@” and a revision number. This happens when a branch was copied from a subfolder in the repository, and is not necessarily a whole-repository copy. For example, when cloning our backend project I got this :-

remotes/trunk 0f6ddda [maven-release-plugin] prepare for next development iteration
remotes/trunk@8127 cbef06a Fixing Bug

Going back through the SVN history, it’s possible to see that revision 8128 was where /TRUNK was copied to /trunk. These should be safe to remove, because once Git has pulled everything it will track the history in its own commits. We’ll cover getting rid of them later.

Branches

Once we have this, we need to manually add each branch we want to pull down by adding an svn-remote to our Git config. This needs to have a URL and a fetch ref so Git knows what to get from where.

git config --add svn-remote.mybranch.url http://svn.url/svn/repo/
git config --add svn-remote.mybranch.fetch branches/mybranch:refs/remotes/mybranch

With that done we can fetch it from SVN and create a local branch.

git svn fetch mybranch
git checkout -b local-mybranch remotes/mybranch
git svn rebase mybranch

The fetch may also take a while but once the above is done you have a normal-looking Git branch, ready to be pushed to our new remote Git repository.

Tags

Adding specific tags is pretty similar to adding branches; in fact, git-svn treats SVN tags like branches, because really they are just copies of the entire project up to a certain revision. This means that once they’ve been fetched, we’re going to have to convert them to Git tags.

git config --add svn-remote.4.9.9.url http://svn.url/svn/repo/
git config --add svn-remote.4.9.9.fetch tags/4.9.9:refs/remotes/tags/4.9.9
git svn fetch 4.9.9

So now we need to turn this into a real Git tag. We’ll make this an annotated tag and mention that it’s been ported from SVN as well. If you were going to continue working with this repo against SVN then you’d probably want to delete the remote branch, but since we’re just doing a migration I won’t bother.

git tag -a 4.9.9 tags/4.9.9 -m "importing tag from svn"

At this point, if you go back and look at the tag in the Git history, you’ll see that actually it is pointing to a commit that’s sitting off on its own, and not part of the branch history. This is because SVN created a new commit just for the tag, unlike Git which creates tags against existing commits. If you really don’t like this then you could create the tag against the previous commit using :-

git tag -a 4.9.9 tags/4.9.9^ -m "importing tag from svn"

Pushing to Git

With that done, we can now push our repository up to our Git host and not have to worry about SVN again.

git remote add origin 
git push origin --all
git push origin --tags

Now we have a Git repository with all of the trunk history in it and only those branches and tags we specifically wanted. At this point you probably want to set the old SVN repo to be read-only and get everybody moved over to Git.
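
One common way to make the old repository effectively read-only (a sketch, assuming you have admin access to the SVN server; the wording of the message is up to you) is a pre-commit hook that rejects every new commit:

#!/bin/sh
# <svn-repo>/hooks/pre-commit – remember to make it executable
echo "This repository has migrated to Git and is now read-only." >&2
exit 1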

Cleaning up

If you aren’t using this repo for migrations, and are instead just wanting to use git-svn to interact with your SVN repository, then you will probably want to clean up the remotes a little. As I mentioned earlier, when Git pulls everything out of SVN, it will create extra remotes for tags and branches at revisions where there were incomplete repository copies. Once the data is in Git you don’t need these, so we can safely remove them.

To get a list of them we can use the Git plumbing command for-each-ref.

git for-each-ref --format="%(refname:short)" refs/remotes/ | grep "@"

With this we can iterate through and delete them.

git for-each-ref --format="%(refname:short)" refs/remotes/ | grep "@" | while read ref
do 
  git branch -rd $ref
done
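
Once the loop has run, a quick sanity check (just a suggestion) is to list the remaining remote refs and confirm that only the branches and tags you expect are left:

git branch -r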

Other options

There are a few other options to git-svn that can be useful when migrating over, though it’s worth investigating them before setting a script running for two days so you don’t end up with a repository that doesn’t contain what you were expecting.

The --no-follow-parent option can be passed when cloning or fetching so that Git won’t follow the commit history all the way back. This will result in things being much quicker, but it also means that, according to the git-svn docs:

branches created by git-svn will all be linear and not share any history

In practice I found that this gave me a linear Git history with nothing in the places I expected. On the plus side, it was way quicker! Worth looking at but use with caution.

The other option worth knowing about is --no-metadata, which will stop Git adding the git-svn-id metadata to each commit. This will result in cleaner commit logs, but means you won’t be able to commit back to the SVN repository. It’s fine if you’re making a clean break from SVN, but dangerous otherwise. I’m also not sure how well it works with pulling down separate branches from SVN to merge into Git. That investigation is left as an exercise for the reader! :)
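
For reference, a clone combining these options might look like the following (reusing the example URL from earlier; given the caveats above it’s worth trying on a throwaway clone first):

git svn clone -T trunk --no-metadata --no-follow-parent http://svn.url/svn/repo/project/ project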

Automating

So it’s all well and good being able to add our branches and tags, but we don’t want to do this by hand for each one when we can write a script to do it for us.

Combining everything we’ve done so far, this shell script should do the job for us and leave us with a nice looking, ready to push, Git repository. I’m doing the cleanup step in the middle just to make sure there’s no ambiguity with which branches and tags are being created, and also so it’s easier to see what’s been created once all the dust settles.

#! /bin/bash

SVNURL='http://svn.url/svn/repo/'
FOLDER_NAME='gittosvn'
BRANCH_FOLDER='branches'
TAG_FOLDER='tags'

BRANCHES='branch1
branch2
branch3'

TAGS='tag1
tag2
tag3'

git svn clone -T trunk $SVNURL $FOLDER_NAME

cd $FOLDER_NAME

for bname in $BRANCHES; do

    git config --add svn-remote.svn-$bname.url $SVNURL
    git config --add svn-remote.svn-$bname.fetch $BRANCH_FOLDER/$bname:refs/remotes/svn-$bname

    git svn fetch svn-$bname

done

for tname in $TAGS; do

    git config --add svn-remote.$tname.url $SVNURL
    git config --add svn-remote.$tname.fetch $TAG_FOLDER/$tname:refs/remotes/tags/$tname

    git svn fetch $tname

done

git for-each-ref --format="%(refname:short)" refs/remotes/ | grep "@" | while read ref; do 

    git branch -rd $ref

done

for bname in $BRANCHES; do

    git checkout -b $bname remotes/svn-$bname
    git svn rebase svn-$bname

done

for tname in $TAGS; do

    git tag -a $tname tags/$tname -m "importing tag from svn"

done

Conclusion

So migrating SVN to Git isn’t too tricky, but there are a few things worth knowing and it can certainly take a long time if you have a lot of history and branches. There are probably some mistakes and useful things I missed so feel free to get in contact if so.

Kanban at Mindcandy

Operations & Agile Kanban

Here at Mind Candy we use an Agile development process called Scrum, and it suits our business needs very well. But it suits the development process, not IT operations. Because we find it hard to commit to a two-week sprint, iterations don’t make much sense for us. Our work is a mix of reactive & proactive maintenance, and mostly we are working on whatever is most urgent for the day/week. Yes, we do work on infrastructure projects which are more iterative, but team members aren’t dedicated to projects, so they can still get pulled off project tasks to deal with any urgent issues.

We needed a workflow that was more adaptive but complemented the Agile processes we already had in place. Welcome to the world of Kanban.

I’m not going to explain Kanban in detail, given that there is an excellent minibook already written by Henrik Kniberg and Mattias Skarin. It can be downloaded here: http://www.infoq.com/minibooks/kanban-scrum-minibook.

When deciding to use Kanban, some of its key differences from Scrum were really important in making it work for our team:

  • Timeboxed iterations are optional & tasks on the board can be event driven

  • Estimation of time to complete is optional

  • A Kanban board is persistent & not reset every sprint

  • We can add and remove tasks from the board as capacity or business needs change

  • Tasks can easily feed in from different development teams on diverse projects

  • We can prioritise our own work alongside other parts of the business.

Backlog

If you use Scrum you will have a backlog of tasks that need to be prioritised & fed into the next Sprint. You will need to estimate points/time for each story & break them down into smaller tasks that you can complete within one iteration. For Kanban you can have a prioritisation scheme or just go with the flow. You may or may not have a backlog, but here we use Jira* and we feed the backlog for the week from this. Users can raise tickets and set importance using the defined Jira priority scheme. However, not all tasks have to be in Jira; for example, frequent maintenance tasks, quick-fire issues & deploys can be added to the board directly. Priorities are negotiable and I will arbitrate between teams.

*At the time of adopting Kanban we were still using Trac, before moving to Jira. Greenhopper is mentioned later on.

The Board

A Kanban board doesn’t look much different to a Scrum board: in its basic form it has ‘To Do’, Work In Progress (WIP) & Done columns, and tasks flow from left to right. I wanted to break it down into different lanes so I could focus on different types of tasks or a current project. The lanes should always be reconfigurable, so I set the board to flow vertically rather than left to right so I could add more lanes.

The big difference from Scrum is the limits on each state. In Scrum there is no rule limiting how many tasks are ‘Work In Progress’; the limit is defined in sprint planning with a fixed scope… you can’t add more tasks part way through. (Well, you shouldn’t!) Kanban sets limits per workflow state, so in this case we defined a ‘To Do’ limit of 15 and a WIP limit of 6. In our first Kanban board we used different colour post-its to indicate difficulty/effort as well.

Green = Tasks < 1 hour

Orange = 1 Day

Yellow = 3 Days

Purple = 5 Days

Each post-it had a start and finish date written on it so I could track cycle time and efficiency. (This was prior to using Jira and Leankit.) Also, we broke down the backlog into ‘This week’ & ‘Next Week’ so we could see upcoming date-related tasks way in advance.

We added magnetic Avatars and used these to indicate the owner of the task.

( First attempt )

The Rules of the jungle …

  • Always take the top item from the To Do list

  • Take any item

  • Don’t take consecutive purple tasks ( let others share the big ones ! )

  • Work on one ticket at a time, unless it’s green – then go for it

  • If you start another task and the current one is on hold, move it to the right place

  • No more than 6 post-its in WIP

  • Every week a dedicated person to deal with the deploy lane

  • Every week a dedicated person to deal with the Tech support lane

  • Daily standup at 10am

First thoughts

So we were relatively happy with the Kanban process but felt we could add some further improvements and tweaks.

  • Writing out post-its became a bore

  • People would forget to go to the board and move post-its

  • Post-its fall off. Even super sticky ones !

  • Anyone working remotely couldn’t see the board

  • It looked messy

  • Tracking efficiency was a pain.

Heelllllloooo Virtual Whiteboard….

I started looking around for a way to replace the board with an electronic version. I’d used Greenhopper before in Jira, but at this point we hadn’t yet adopted Jira. I looked at a few software solutions as well as online ones. I re-visited Jira but found Greenhopper not configurable enough for my needs… simply put, I wanted an exact replica of my whiteboard with post-its.

Welcome to Leankit http://leankit.com/


Leankit is a great tool, with lots of board templates for Scrum or Kanban, and it can be configured very easily. I can create, remove and adjust lanes on the fly, and totally customise the format of cards. Here are some of the features which have transformed the way we use the board.

Card features such as avatars, colour coding, priority indicators, task size & date make the board easy to read and use.

Mini-boards – You can break down bigger tasks into sub-tasks, each with its own mini board.



Reports

Conclusion

After adopting Kanban we found it greatly improved the visibility of our day-to-day work, and gave other teams a better understanding of how we worked. The daily stand-ups encouraged better team communication through task reviews, sharing issues & bouncing ideas off each other.

Having the limits on the workflow meant we never over-committed, and it made it easier to negotiate with project managers regarding last-minute requests. In turn it helped them understand that something had to give in order to facilitate an expedited task.

Building testing tools against REST APIs with node.js and Coffeescript

During the dev cycle here at Mind Candy it is useful to have tools to help automate some of the more repetitive tasks, for example registering a user. Test tools are also a great excuse to try out new technologies! In this blog post, I’ll be telling you about a tool we wrote to register new users. It uses node.js, the non-blocking I/O server-side JavaScript platform powered by Google’s V8 VM.

The reasons why I chose node.js for this project are as follows:

  1. It makes working with I/O operations an absolute breeze. Every I/O operation is required to be non-blocking, so it’s perfectly valid to keep a request open from a client such as a web browser while you make a bunch of (sometimes concurrent) calls off to REST APIs, databases, external systems etc., wait for them to call back to your code and then return a response. All of this happens without occupying a thread per request, unlike your standard Java servlet. In fact, node.js is single-threaded and uses an event loop, so as long as your expensive I/O operations are non-blocking, the event loop just keeps ticking over and your system stays responsive.
  2. While node.js is a comparatively new technology, it has a HUGE and vibrant community behind it. There are so many 3rd party modules that if you need to interface with any other thing, there is most likely a module already written! Node also has an awesome package manager in npm, which makes declaring and downloading modules super easy.
  3. The holy nirvana – the same language running on the server and the client. Since the app is going to be deployed for the web, and therefore going to be powered by javascript, there are no problems with things such as serialising and deserialising objects between client and server (they all natively speak JSON), or different paradigms between languages giving you an impedance mismatch.

The whole project is actually written in a language called Coffeescript. For those who haven’t heard of it, Coffeescript is a language that compiles down to javascript. It abstracts away some of the more grisly parts of javascript, has a nice, clear and concise syntax and has a bunch of useful syntax features built in, such as loop comprehensions, conditional and multiple assignment and a simple class system. It’s like a bunch of best practices for javascript!

So let’s have a look at some of the code. For example, here is how we talk to the Adoption endpoint:

request = require('request')
xmlbuilder = require('xmlbuilder')

class Adoption
    constructor: (@host) ->

    start: (username, password, email, cb) ->
        adoption = xmlbuilder.create()

        adoption.begin('adoption')
            .ele('email').txt(email).up()
            .ele('password').txt(password).up()
            .ele('username').txt(username).up()

        adoptionXml = adoption.toString({pretty: true})

        request({
            method: 'POST',
            uri: "http://#{@host}/my/rest/service",
            body: adoptionXml,
            headers: {
                'Content-type': 'application/xml'
            }
        }, (err, resp, body) ->
            return cb(err) if err

            if(body.search(/<error name="username" rule="is_not_unique"\/>/) > 0)
                usernameError = {
                    message: 'username is already taken'
                }

            cb(usernameError)
        )

module.exports = Adoption

First thing to note is that indenting and whitespace are important! You MUST use spaces instead of tabs here, otherwise the compiler complains! Here we’re creating a ‘class’ called Adoption. Javascript doesn’t really have classes, but Coffeescript translates that into some javascript that gives you class-like behaviour. At line 5, we declare the class constructor function. In Coffeescript, functions are a very small declaration: just the function arguments in brackets and then an arrow. Anything after the arrow is the function body. The constructor is very simple: all it’s doing is setting the arguments of the function as member variables of that object. Looking at the javascript generated by the Coffeescript compiler illustrates this:

function Adoption(host) {
    this.host = host;
}

The start function (line 7) takes a bunch of parameters of data from the user and a callback function as the last argument. In node.js, if we are doing any async operation such as calling a REST endpoint, we cannot return the response data from that function since that would block the event loop. Instead, we are provided with a callback function which we can then call with the response once the server responds.

On line 10 we build up the XML payload for the Moshi Monsters REST endpoint, using a module called xmlbuilder. It would be a lot simpler if the endpoint accepted JSON! Next (line 17) we send the request to the endpoint itself. Here, we use the excellent request module. If you are familiar with how you perform Ajax requests with jQuery, this should look quite familiar to you! It’s another example of how node.js makes full stack development a lot less trouble for your developers, as so many of the techniques used on the client side can be applied to your server-side code.

The request function takes an object with the options for that request, and a callback that gets called upon error or success. The convention in node.js is to always expect the first argument to your callback function as a possible error, since if the async operation fails, you can check that parameter for the exception. On line 25 there is an example of Coffeescript’s postfix form of the if operator.

We then check for some response XML with a regular expression (line 27) and call the callback with the possible error object if the regex matches. Notice we do not have to declare variables before we use them. If we did this in raw javascript, they would end up becoming global variables, but Coffeescript handily declares them up front for us.

The last line is how we expose our class to the rest of the program. Node.js uses the CommonJS module system, so every file loaded is self-contained as a module. We can expose our class by assigning it to the module.exports variable. This allows us to instantiate an Adoption object in another file:

Adoption = require('./path/to/adoption.coffee')

I used the brilliant express http server to serve this webapp. It has the concept of ‘middleware’ – effectively a bunch of functions that every request and response pass through. This means it is super easy to add functionality to express like caching, static file serving, and even asset pipelining as we will see later! We can set up a handler for our adoption request like so:

express = require('express')
Adoption = require('./adoption') #note that the file extension is optional!
app = express.createServer()

app.set('view engine', 'jade')

app.use(express.bodyParser())

adoption = new Adoption('www.moshimonsters.com')

app.post('/adopt', (req, res) ->
    username = req.body.username
    password = req.body.password
    email = req.body.email
    adoption.start(username, password, email, (err) ->
        if(err)
            res.json(err.message, 500)
        else
            res.send()
   )
)

app.listen(3000)

We’re using a template language called jade for the client side HTML markup. Jade is a simple, lightweight alternative to html. Here’s an example straight from their website:

doctype 5
html(lang="en")
  head
    title= pageTitle
    script(type='text/javascript')
      if (foo) {
         bar()
      }
  body
    h1 Jade - node template engine
    #container
      if youAreUsingJade
        p You are amazing
      else
        p Get on it!

Jade is nice and lightweight, but you can also do more hardcore stuff like scripting in the templates if you wish. It supports layouts and partials and all kinds of other nice stuff. Check out the website to learn more about it!

The client side javascript for this tool is written in Coffeescript too! But how does the browser understand it? The answer is that it doesn’t – we have to compile it first into javascript. You could do this as part of the build, but we have a better solution available to us.

There is a middleware module for express called connect-assets. This middleware adds asset pipelining to connect, so that you can write your code in Coffeescript and it will compile it on the fly and serve it to the browser, without you having to do anything! It can even minify the resulting javascript. You add it like this:

connectAssets = require('connect-assets')

...

# set build to true to minify the js
app.use(connectAssets({build: false}))

…and then we add a macro into our jade template:

doctype 5
html(lang="en")
  head
    // add macro in the head of your html document
    != js('adoption')

    // rest of your markup below

…passing in the name of your Coffeescript file (minus the .coffee extension).
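
If you want to run a tool like this yourself, a rough sketch of the setup might look like the following (the entry-point name server.coffee is made up, and the snippets above use the Express API that was current at the time of writing, so package versions may need pinning):

npm install express jade request xmlbuilder connect-assets
npm install -g coffee-script   # provides the coffee command
coffee server.coffee           # the server above then listens on port 3000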

Obviously this is not the whole of the source code of the tool, but hopefully it has been a taster of how awesome modern javascript development can be! In the last few years, javascript has gone from being an unloved toy language into something a lot more powerful and expressive. Here at Mind Candy we hope to leverage amazing new tools like node.js and coffeescript in our future work to allow us to become a happier and more productive development team!

Testing Unity’s Flash Export on a Large Project – Part 1

Over the last weekend, I’ve been hard at work trying to get an unannounced Mind Candy project (made in Unity) to export to Flash. I thought it would be useful to share some details from the experience since most of the issues I’ve encountered would probably be avoidable if your project is architected in a way that lends itself to Flash export.

During the Christmas holidays, I made a game for Unity’s Flash in a Flash contest. It wasn’t the most exciting game, but it worked. The core mechanic of that game (including new 3.5 features such as nav mesh) exported to Flash well. The reason this game worked is because I had been paying close attention to the Flash export and knew what features wouldn’t work at that time. I avoided anything overly complicated and developed the game with the limitations in mind. Fundamentally, I decided to make a new, simple game rather than trying to port an existing one.

Now, I’m doing the opposite. I’m trying to get an existing game to publish to Flash. This project is a relatively large one. The game has been in development for quite a long time. It contains a lot of complex C# code and most importantly: a lot of features that don’t yet work in the Flash export. Trying to get this game to export to Flash is no easy task. I’ve spent numerous weekends on this since the 3.5 beta was made available and I still haven’t got it to work.

Despite the export (currently) not working for our project, there are a lot of lessons to be learned. Hopefully these will be of use to other people attempting the same task, and will be a good reference point for myself when I inevitably try to export the game again at a later date.

Currently Unsupported Features

Unity have already listed some of the unsupported features in the Flash export as part of the 3.5 preview FAQ. Some of these features (and the ones that have proved most problematic for me) are:

  • LINQ
  • Terrains
  • Asset Bundles
  • WWW
  • RakNet networking

If you’re using these features, then you’ll encounter a lot of errors as soon as you try to get Unity to build to Flash. Some example errors I’ve seen are:

Networking

  • error CS0246: The type or namespace name `ConnectionTesterStatus’ could not be found. Are you missing a using directive or an assembly reference?
  • error CS0246: The type or namespace name `NetworkView’ could not be found. Are you missing a using directive or an assembly reference?
  • error CS0246: The type or namespace name `BitStream’ could not be found. Are you missing a using directive or an assembly reference?
  • error CS0246: The type or namespace name `WWW’ could not be found. Are you missing a using directive or an assembly reference?

MovieTextures

  • error CS0246: The type or namespace name `MovieTexture’ could not be found. Are you missing a using directive or an assembly reference?

These errors are effectively a checklist of all the classes you’re using that aren’t yet supported, and there’s only one thing you can do: remove them from your build. There are numerous ways to do this (depending on what you’re trying to achieve), from brute-force deletion to telling Unity to skip these sections in a Flash build. You can do the latter by using platform-dependent compilation.

All you need to do is wrap your Flash-specific code in a platform check such as:

#if UNITY_FLASH
Debug.Log("Flash build");
#endif

In my case, the first thing I had to do was to try and remove these unsupported features. MovieTextures were easy to take out, as they’re not vital to our game. Networking, however, was more problematic. And this is my first (and most important) lesson…

Lesson 1 – Separation of Networking Code

Our game currently uses the inbuilt RakNet networking solution. These networking elements are fundamental to our game, and as such the networking code exists in many different areas of our codebase. When publishing to the web player or a standalone app/exe build this is fine. For Flash export, this suddenly creates a big problem, because the networking solution isn’t yet supported.

As an example, if your game uses RPCs across clients to update core data in your game, then you’re going to have problems. I’m sure that there are other solutions which are better suited to Flash export, but this doesn’t fix my immediate problem: we have a game where our chosen networking solution won’t publish to Flash. Unity suggest that you can use Flash networking instead of RakNet, but since I’m doing this export with tight time constraints (self-imposed, by the mere fact it’s a weekend), that solution is not feasible for this test.

This has left me with one option in my mission to get our game working: rip out RakNet. This is not ideal, but luckily, our game copes with it ok.

This raises an interesting point in that the networking code should be as decoupled from the core mechanic of your game as possible. In some cases this can’t be done, but if you can find a way to make your networking layer easily removed/changed, then you’ll be in a much better place than I was regarding Flash export. It will also help you if you ever decide to switch to a different networking solution.

At this point, I’m going to gloss over about 10 other failed builds. It takes a few attempts at building to clear up this first wave of errors. Once you’ve cleared that first wave, you can breathe a sigh of relief and ready yourself for wave two: Attempting an actual build…

Attempting a Build

Once you’ve fixed/removed/hacked-out all the unsupported features, you’ll get to a point where the build process will now try to publish your game to Flash. The type of errors you get now will be more complex than those in wave one. Below is a screenshot of one of my build attempts at this stage:

You’ll note that these errors are more complicated than the “you can’t use ClassX because it’s unsupported” ones. In the case of these errors, it’s up to you to go into each of these classes and try to simplify your code as much as possible.

Some areas where our build failed were where we’d used generics. For example, we had fairly complex code to randomise the order of elements in an array. It wasn’t vital, so it went in the bin. This seems to be a common trend in trying to get this project to build to Flash. I’m slowly, over time, discarding features to the point where it’s a very stripped-down version of the game.

There are a couple of errors regarding our audio library in the above screenshot. This library wouldn’t convert at all (I got multiple waves of errors). My only solution at present has been to remove it.

The last item in that list is log4net. This caused a lot of issues. Rather than spending ages resolving them for this test, I decided it should also be removed. Since we used the logging in a lot of our code, I’ve ended up writing my own logging classes based on the log4net interfaces. This meant that I only had to fix up the imports in the class and our existing logging would still work using Unity’s own Debug.Log etc.

A few more iterations and build attempts occurred before wave 2 was complete. All in all, the first two waves have taken out large chunks of our features, and as a result the game feels somewhat unstable.

Akin to a game of [insert zombie survival FPS game of your choice here], we’ve just about survived the first few waves. We’re battered, we’re bruised, but most importantly, we’re not defeated! We’re now ready for the next wave. Bring on the boss; the tank; the last major hurdle in the Flash export – the conversion of your code to ActionScript.

Converting your code to ActionScript

At this stage, when you try to build, Unity will attempt to convert your source to ActionScript. Having previously spent years as a Flash developer, I find this part of the build rather exciting. The guys at Unity have done a fantastic job of getting this process to the stage it’s at.

That said, this is probably the toughest part of the process. Ripping out features and (to some extent) fixing the errors in the previous stage is easy. Trying to work out why the generated ActionScript doesn’t work is much more difficult. Luckily, when a build fails, you can find all the AS classes in a temp folder in your project (/Temp/StagingArea/Data/ConvertedDotNetCode/global/). This will enable you to look at them (if you wish) and try to understand where it might be going wrong, such that you can adjust your C# or js accordingly.
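
For example, on a failed build you can poke around in that folder straight from the project root (the path is the one mentioned above, relative to your Unity project):

ls Temp/StagingArea/Data/ConvertedDotNetCode/global/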

In my first attempt at this stage, I was left with 87 errors. The following are a small selection of these to give you an idea of the kind of problems I’ve seen:

Conversion Error 1

  • Error: Access of possibly undefined property $Type through a reference with static type Class.

This error seems to be very common and occurs when reflection is used (and probably in other situations). Unfortunately, a lot of our core libraries use reflection, and as such, this is a large problem to try and fix.

Conversion Error 2

  • Error: Call to a possibly undefined method IComparable$1_CompareTo_T through a reference with static type Number.

This has occurred because we’re trying to compare two values whose classes implement IComparable. In our case, this could be worked around relatively easily.

Conversion Error 3

  • Error: Type was not found or was not a compile-time constant: ReadOnlyCollection$1

In some of our classes we’re providing access to ReadOnlyCollections. It seems that we can’t use these at present and we could work round this by simply returning a standard Collection.

Conversion Error 4

  • Error: Call to a possibly undefined method String_Constructor_Char_Int32 through a reference with static type String.

A common style of conversion error that’s quite tricky to work out. I saw a lot of errors similar to this one.

These are just 4 of the 87 errors which need fixing. I expect that if/when all 87 are resolved, I’d have another wave or two to get through before the game would actually build. For now though, it’s Sunday night and I’ve run out of time to work on this test.

Next Steps…

My next challenge in this Flash export test is to go through the aforementioned 87 conversion errors and try to resolve them. I’m hoping that I’ll be able to get the game to build after another solid few days working on the export.

If that task proves too difficult then I will try a different approach of starting from a clean project and adding features one by one. In theory, that should be easier to get working, although that’s not how we’d want to export to Flash in the long run.

If I do get the export to work, I shall write a follow-up post with a walkthrough of some conversion errors. For these, I’ll include (where possible) the raw C#, the converted AS, and examples of how the errors can be avoided/solved.

For now though, I’m going to give up and play some well-earned Killing Floor! :D

99 Bottles of JMeter on the wall

I’ve recently had to do some performance testing on a couple of our new web services. I know of a few handy open source tools available for this: Tsung, Grinder and JMeter spring to mind. I find that I can get up and running in JMeter quicker than I can with the other tools, and it tends to be the one I use most. One of the web services I wanted to test required some dynamic content to be generated in the body of the HTTP POST. Normally I’ve been able to get away with using the standard counters and config variables provided by JMeter, but this time I needed a bit more control. Luckily, JMeter allows you to do some scripted preprocessing of HTTP requests, and I was able to use this to generate dynamic content within JMeter. I found the JMeter documentation to be a bit lacking in this area, so I created this blog post to give a quick explanation of how I managed to do it.

I’ve created a demo project that you can follow along with to see how it all works. Download the source code here: https://github.com/groodt/99bottles-jmeter. Follow the README on GitHub to get everything set up and running. All you need is git, Python and JMeter. Open up the file “Test Plan.jmx” in JMeter to follow along.
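
For example, getting the demo onto your machine is just a clone away (the URL is the one above; the demo server itself is started as described in the README):

git clone https://github.com/groodt/99bottles-jmeter.git
cd 99bottles-jmeter
# follow the README to start the demo server, then open "Test Plan.jmx" in the JMeter GUI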

The demo
The demo project is a simple web service that parses JSON payloads and prints a modified version of the “99 Bottles of beer on the wall” song onto the console. The JSON payload looks something like this:

{"drink":"beer", "bottles":"99", "date":"1321024778956", "thread":"4"}

The server then parses these payloads and prints them out to the console:

JMeter then aggregates the response times in the summary report:

The HTTP POST
If you navigate to the “HTTP Request” node in the example you can see the JSON POST body being constructed:

The variables ${drink}, ${bottles}, ${date} and ${thread} are generated dynamically by a script that JMeter executes for each request.

The BSF PreProcessor
The BSF PreProcessor is run before each HTTP request to generate the dynamic content mentioned earlier. The BSF PreProcessor allows you to run Javascript, Python, Tcl and a few other languages inside JMeter. I decided to write my script in Javascript.

If you navigate to the “BSF PreProcessor” node in the example you can see the script that is used:

The Javascript
The simple Javascript basically places 4 variables in scope that are then available for JMeter.

// Calculate number
var count=vars.get("count");
var bottles=99-count;
vars.put("bottles",bottles);

// Calculate drink
var random=new Packages.java.util.Random();
var number = random.nextInt(4);
var drink = vars.get("drink"+number);
vars.put("drink", drink);

// Calculate date
var date=new Packages.java.util.Date().getTime();
vars.put("date",date);

// Calculate thread
var thread=ctx.getThreadNum();
vars.put("thread",thread);

  • In lines 1 to 4, the counter is read from JMeter, subtracted from 99, and the result is placed into scope under the name “bottles”.
  • In lines 6 to 10, a random number from 0 up to (but not including) 4 is generated using java.util.Random; this number is then used as a lookup into the names of drinks (beer, wine, mead, cider) defined in the JMeter general variables, and the result is stored in a variable named “drink”.
  • In lines 12 to 14, java.util.Date is used to generate a timestamp in milliseconds. This value is stored in a variable named “date”.
  • In lines 16 to 18, the JMeter thread number is read from the JMeter context and stored in a variable named “thread”.

Executing Java libraries within the scripts
As you may have noticed in the script above, the Java libraries are exposed in JMeter under Packages.*. This allows you to execute any of the Java standard libraries or Java code on the classpath of JMeter. I think you can also write your own Java code, place it in the JMeter classpath and expose it in the same way.
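
As a rough sketch (the jar name is made up, and JMETER_HOME is assumed to point at your JMeter install), dropping a utility jar into JMeter’s lib directory puts it on the classpath the next time JMeter starts, after which its classes can be reached from the script via Packages.*:

cp my-test-utils.jar $JMETER_HOME/lib/
# restart JMeter, then something like Packages.com.example.MyUtils is available to the BSF script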

Putting it all together
Putting all of that together gives you a handy way of doing reasonably complex performance testing in JMeter and I hope you find it useful.
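
As a closing aside, once a plan like this works in the GUI it can also be run headless from the command line, which is handy for dedicated load-injection boxes (the file names here match the demo project):

jmeter -n -t "Test Plan.jmx" -l results.jtl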

NOSQL Exchange

This is a quick run-through of the NOSQL Exchange that Ciaran & I attended on Nov 2 at SkillsMatter. It featured 8 speakers, and links to all the talks are included.

A lot of people were asking which NOSQL solution they should use.

This was the advice given by the speakers: there is no silver bullet. Is there a need for reading/writing lots of big data? Think about the shape of the data and how you are going to query it, to help understand which NOSQL solution fits best. Also understand the trade-offs when you choose your solution. Finally, at the talks there was a lot of evidence of people using NOSQL solutions when a SQL solution would have sufficed.

1) THE STATE OF NOSQL TODAY by Emil Eifrem
This was the best talk of the day and anyone interested in NOSQL should watch the talk.

NOSQL stands for Not Only SQL.

Main types of NOSQL:

  1. Key-value stores, originated from Amazon’s paper on Dynamo, e.g. Riak, Voldemort (used at LinkedIn)
  2. Column family stores, e.g. Cassandra, HBase, Hypertable
  3. Document databases (most popular), descended from Lotus Notes, e.g. CouchDb & MongoDb
  4. Graph databases (nodes with properties), originated from Euler and graph theory, e.g. InfiniteGraph, Neo4J

Document stores are a superset of key-value stores. Graphs are a superset of documents, and thus of all the others. Does this imply you should use a graph NOSQL solution for all your NOSQL concerns? The graph NOSQL advocates think so.

Trends:

  • Acidity is increasing, e.g. MongoDb adding durable logging storage, Cassandra adding stronger consistency
  • More query languages – Cassandra: CQL, CouchDb: UnQL, Neo4J: Cypher, Mongo
  • Potentially more schemas?

NoSql challenges:

  • Mindshare
  • Tool support
  • Middleware support

Oracle is now adopting NOSQL with a key-value solution, despite debunking NOSQL in May this year. NOSQL seems to be following similar historical trends to SQL, which had many vendors to begin with and over time consolidated into 4 large vendors. Could NOSQL end up in a similar situation in the near future?

2) HANDLING CONFLICTS IN EVENTUALLY CONSISTENT SYSTEMS by Russell Brown
Key quote from this talk: “Large systems are always in some degree of failure”

Problem: According to CAP: Consistency, Availability & Partition tolerance – you can’t have all 3. Have to compromise by picking 2.
PACELC:
In the case of a partition (P), trade availability (A) for consistency (C)
Else (E) trade latency for consistency (C)

Riak is inspired by Dynamo and built in Erlang/OTP. It has features such as MapReduce, links and full-text search. It uses vector clocks, not timestamps, and Statebox for automating conflict resolution.
Uses a wheel for storing clustered data.

3) MONGODB + SCALA: CASE CLASSES, DOCUMENTS AND SHARDS FOR A NEW DATA MODEL by Brendan McAdams (creator of Casbah)

MongoDb is not suited to highly transactional applications or ad-hoc intelligence that requires SQL support. MongoDb revolves around memory-mapped files. Mongo has an autosharding system.

Things to remember:
The datastore is a servant to the application not vice-versa
Don’t frankenshard

4) REAL LIFE CASSANDRA by Dave Gardner (from Cassandra user group)

  • Elastic – read/write throughput increases as you scale horizontally.
  • Decentralised no master node.
  • Based on Amazon’s Dynamo paper
  • Rich data set model
  • Tunable
  • High write performance

If your requirements are big data, high availability and a high number of writes, then use Cassandra.
When data modelling, start from your queries and work backwards.
Has expiring columns.
Avoid read-before-write & locking by safely mutating individual columns.
Avoid super columns; instead use composite columns.
Use Brisk (which uses Hadoop) for analysing data directly from the Cassandra cluster.

5) DOCTOR WHO AND NEO4J by Ian Robinson
Although it was a fairly slick presentation, it seemed to focus too much on modelling Doctor Who and his universe as a working example of graphs & Neo4J. Could this be to hide some shortcomings in Neo4J?

  • Neo4J is a fully ACID replacement for MySQL/Oracle.
  • Neo4j is a NOSQL solution that tries to sell itself as the most enterprise ready solution.
  • Has master/slave nodes.
  • Has 3 licenses: Community/Advanced/Enterprise.

With mentions of two-phase commits, and other than the advantage of modelling relationships such as social networks, there seemed little benefit in moving away from a relational database.
Having spoken to the Neo4J guys afterwards, it seems that the DB loses its ACIDity once you cluster it, and becomes another eventually-consistent store!

6) BUILDING REAL WORLD SOLUTION WITH DOCUMENT STORAGE, SCALA AND LIFT by Aleksa Vukotic

CouchDb:

  • Written in Erlang; has Lift support (Scala framework)
  • Exposes REST/JSON endpoints
  • Eventually consistent
  • Versioning, with append-only updates
  • MapReduce for querying using views

7) ROBERT REES ON POLYGLOT PERSISTENCE
A muddled presentation about mixing a graph NOSQL solution with a document-based one.

8) THE FUTURE OF NOSQL AND BIG DATA STORAGE by Tom Wilkie
Rather than using the out-of-the-box storage engines for NOSQL solutions, there can be dramatic throughput gains from using alternative storage engines such as Tokutek and Acunu (Castle).