A Good Stand-Up

The light goes green, the door opens and I walk into the secure area. My team area is immediately to my left, and already I can feel the buzz, the passion, the energy coming from my kiddies as they discuss, code and pair.

The clock ticks by and there is electric excitement in the room as they make their way to the stand-up area; I follow.

The story wall is to my left as I walk with purpose into the room. The team are gathered in a semicircle around the board, awaiting my arrival.

Standing in front of the team I see in their eyes an alertness that leaves behind the night before, the early morning starts, the arguments with partners – their focus is on me. With my arms clasped behind my back, I open with 'Welcome to the morning stand-up, team. Today we will be awesome', and I follow this with an intense and bright smile.

My eyes fix on each team member as I watch their body language, their eyes, the position of their stance, looking for anything that might be a tell, or out of place, or that might indicate to me that this person is not 100% ready for the day's tasks – something I, as team leader, may have to deal with. I ask each member to describe how he feels in terms of weather; I hardly notice the verbal replies as I watch their body language. My team is healthy, ready, charged, energised – no weak points this morning.

Each member quickly talks about what they did yesterday, what they will do today and any blockers, which I take actions to handle.

I say to them, 'Let's play, be awesome out there and remember: we are the elite.' The team leave the room, talking and chatting about the cards they will work on and the challenges they will conquer today.

I meet with my senior team and talk to them briefly about how we must look after the team, about the deadlines, about the meetings I had with management the day before, and about the medium to long term projections. The seniors leave the room empowered.

Top Level Points 

# Always be on time

# It doesn't matter how you're feeling – be full of energy and be positive; your team draw their energy from you

# Your main job is not to police the cards, it is to LOOK AFTER your team and to spot issues before they happen

# Become an expert in body language – get to a point where you can read your team

# Be prepared to proactively go to bat for a team member

# Support your team and remember there are no bad teams, only bad team leads – if there is a problem in your team, find it and deal with it; don't let it spread like fungus

# Above all remember these are your people, your kiddies, your responsibility. You're not there to police deliveries or process; you're there to support and grow your team – the deliveries and process will come as a consequence of this

# In a team, to the outside business it must never be an individual's fault – the team, including you, takes responsibility for everything that happens, good or bad

# Don't take credit – give it to the team, and the team will make you shine

Becky Martin

New Social Site – To be hosted on Azure

Recently I have started working on a new social network engine with a built-in communications network, using Knockout and SignalR with ASP.NET MVC on .NET 4.5.

This site is currently in the planning stages but will be built to be hosted on Azure. 

Sample page layouts below

Social Nest front page with colour block for home, v4

Chat page 

Chat section, basic release 1 (22/05/2012)

22nd of June – Azure conference

I will be speaking at this event – this is the place to be on 22nd June 2012.




Patterns, Request and Response – Why

Recently I was asked by members of my team why we use the Request and Response messaging pattern. It led to an interesting conversation from which we resolved a number of high-level points, and I thought I might share them here.

When dealing with two systems with defined edges, such as a group of services which perhaps form an SOA [Service Oriented Architecture], it is quite often advantageous to implement strict communication standards.

In the physical world, when we enter into a conversation with another person it is considered good manners to wait for a response when we have made a statement or asked a question. In short, we have spoken and set, by convention, an expectation that we will receive some form of response.

We often find that this form of collaborative communication is more constructive than constantly communicating in such a fashion as to demand things from our audience, as if they are in fact slaves to our every wish as opposed to collaborators in a well-mannered interchange.

These two social interchange patterns can be summarised in the following definitions:

# Ask don’t tell
# Tell don’t ask

We see both these communication constructs in the software world.

Tell Don’t Ask
Traditionally in classic code we consume objects and call their methods directly, passing them parameters. The expectation with this style of programming is that the object will just do what we say, when we say it.
The object becomes a slave to the consuming code.
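As a sketch of the 'tell' style (Python here rather than our usual C#; the Invoice class and its numbers are invented for illustration):

```python
# "Tell Don't Ask": the consumer commands the object directly and
# inspects its state afterwards; the object reports nothing back.
class Invoice:
    def __init__(self, total):
        self.total = total

    def apply_discount(self, percent):
        # The object simply obeys the caller's instruction.
        self.total -= self.total * percent / 100

invoice = Invoice(200)
invoice.apply_discount(10)  # we tell, the object does
print(invoice.total)        # 180.0 - we must go and look at its state ourselves
```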

Ask Don’t Tell
There is an alternative to the demanding style of code we demonstrated with the 'Tell Don't Ask' methodology. We can make an active choice to communicate with other objects in a way which treats them with good manners.

We can request that the object perform an operation for us and then wait for a well-mannered response from it.

When we work in this way we can establish a convention that says, in simple dialect:

'When I send a request to an object, that object is guaranteed to make a response, even if it has nothing to say back to me other than that it has completed my request, and the status of that request at the point it was completed.'
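A minimal sketch of that convention (again Python, with invented names rather than any particular messaging framework) might look like:

```python
# "Ask Don't Tell": we hand the collaborator a Request object and it is
# guaranteed to answer with a Response, even when it can only report status.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DiscountRequest:
    invoice_id: str
    percent: float

@dataclass
class DiscountResponse:
    success: bool
    status: str
    new_total: Optional[float] = None

class InvoiceService:
    def __init__(self):
        self._totals = {"INV-1": 200.0}

    def handle(self, request: DiscountRequest) -> DiscountResponse:
        total = self._totals.get(request.invoice_id)
        if total is None:
            # Even a failure is a well-mannered Response, never silence.
            return DiscountResponse(False, "unknown invoice")
        new_total = total - total * request.percent / 100
        self._totals[request.invoice_id] = new_total
        return DiscountResponse(True, "discount applied", new_total)

service = InvoiceService()
response = service.handle(DiscountRequest("INV-1", 10))
print(response.status, response.new_total)  # discount applied 180.0
```

The consumer only ever sees a Request going in and a Response coming out, so the service is free to change its internals without breaking its callers.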

Why would we want to do this?
By engaging in a collaborative relationship with other objects within the domain space we achieve a number of things.

# We protect the object's interface and give the object room to grow and evolve, as all it is expecting is a Request and all we are expecting from it is a Response.

# We make the object a collaborator in a domain relationship, and therefore we honour its right to control its own state and to hide its implementation from us.

# We guarantee a response from the object by convention; therefore we are free to engage in other conversations while listening for our object to come back and talk to us.

# We can ask the object to enrol in a group chat where perhaps we have multiple collaborators who are responsible to each other, and to our initial object, for the overall context of the conversation. We can even become the orchestrator of the group and direct the flow of the conversation to achieve a collective result.

# We are implementing a true black-box SoC [Separation of Concerns] paradigm at an object level with this pattern.

# Messages can be made durable and persisted if need be.

# Messages can take part in a retry-type pattern.

# We are modelling the real world with this relationship.
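The durability and retry points follow naturally once a call is reified as a message: a request that fails in transit can simply be kept and handed over again. A small sketch (invented handler, no particular messaging library):

```python
# Because the call is just a message object, retrying it is trivial:
# keep the request and re-send it until the collaborator responds.
def send_with_retry(handler, request, attempts=3):
    """Try the handler up to `attempts` times, returning the first response."""
    last_error = None
    for _ in range(attempts):
        try:
            return handler(request)
        except ConnectionError as err:  # the kind of transient fault worth retrying
            last_error = err
    raise last_error

calls = {"count": 0}

def flaky_handler(request):
    # Fails twice with a transient error, then responds normally.
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("transient failure")
    return {"status": "completed", "request": request}

result = send_with_retry(flaky_handler, {"action": "save"})
print(result["status"], calls["count"])  # completed 3
```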

I hope this post gives food for thought and provokes deeper conversations about collaborative code relationships as opposed to slave-based enforced patterns.

Automating the generation of service certificates in Windows Azure


Rocking and Rolling with Cloudy Team Foundation Server and Visual Studio 11

Part 2 of this set of posts, written by John Mitchell, also from Blush Packages, can be found here: http://blushpackages.com/community/2012/04/14/rocking-and-rolling-with-cloudy-team-foundation-server-and-visual-studio-11-part-2/

As an Agile developer team working in the Microsoft space, we have been waiting with bated breath for Team Foundation Server and Microsoft's other ALM offerings to be made available in the cloud, so it was with much excitement that I activated our Cloudy TFS trial.

As we know, a lot of people have wanted access to the previews but for one reason or another haven't managed to get an activation code or get set up, so we thought we would add to the growing amount of material on the internet about Cloudy TFS and contribute a post.

Enter Stage Right Cloudy Team Foundation Server


Create Your First Cloudy TFS project

This is simply a case of clicking Create Team Project and following through the wizard. You also have to pick a template; I have a love/hate relationship with the standard MS Agile templates, and the best one I have found so far out of the stock selection is 'MSF for Agile Software Development 6.0', in this case marked Preview 3.



We have success

Project successfully created

Once your team project is created you will have access to the Dashboard view


Adding Your Team

Next you will want to invite your team; everything in Cloudy TFS is tied to a Live ID.



Next we click the Add button and are taken through a short invite wizard; if all goes well we can then see the member added on the portal page.


Adding a Story as a Work Item

Next let's add a Story. To do this, head over to the Work Items tab and select New; from this list you can add different item types, and in our case we added a User Story.


Next we are presented with the Work Item Editor; you can see I have supplied the basic story and a title for the card.


Please pay careful attention to filling in the metadata on the right hand side.


OK, let's save our Story.


The cards by default get added to the backlog ready to be planned. In our case we wanted to move ours into an iteration, so head on over to the Backlog tab, click on the card and move it.


If we look at our Work Items view we can now see our card, and we have a number of options here, all of which are self-explanatory.


Add a Task to your Story Card

The next step is to add a task to the user story. TFS has always taken a task-based approach to Agile, and there are a number of different views on this; for ourselves we have no wish to fight the tool or the template, so we will push in our first task.

Tasks can be added from a number of different views, including Visual Studio Team Explorer; for ourselves we are going to do it from the card board view.


We are now presented with the Task Editor view.



The completed task.

Lets Tool Up


We now have a configured instance of Cloudy TFS. There is much more we could do with it, but we now have the minimum set to start thinking about development integration and the tools we need.

First out of the starting blocks is Visual Studio 11. Cloudy TFS will work with Visual Studio 2010, but in reality it is built to work with Visual Studio 11, and it is this partnership that really shines and makes you hugely effective. In addition, our team would not step out into the rain without ReSharper; it's fortunate that there is an EAP build of ReSharper that is set up to work with Visual Studio 11.


As always with pre-release software, the risk in using these tools is yours, and you need to weigh up the gains and disadvantages of installing this software on a given machine or VM.

Hopefully you are now set to install the tools, so let's move forward. First of all let's install Visual Studio 11 Beta. I have installed this alongside Visual Studio 2010, including the April updates, and have not suffered any problems so far, though this is not to say that this statement should be taken as a safety certificate.


Once Visual Studio is installed it is time to install ReSharper.

ReSharper EAP is released on a nightly basis, so it's important to keep on top of releases. It's also a good idea to feed back any issues and to check the version notes prior to downloading – as with all good agile teams, this is 'as is' software.





Once Visual Studio is started you will be asked to select a programming language of choice; for my team this would be C#.


Connecting all the Pieces

Once Visual Studio has started, you are fully configured and you have agreed to the ReSharper trial licence, we next need to set up connectivity to Team Foundation Server. You will find a link on the home page; click it, then click the Servers button and enter the HTTPS URL for your Cloudy TFS instance.



Remember, everything in Cloudy TFS uses Live ID authentication.


Then select the projects you want to work with



Congratulations you are now connected to Cloudy TFS.

OK, so now you need to:

  • Map a Local source root
  • Mark the task as active
  • Create and add your project
  • Check it in
  • Update the Task


As you can see, Visual Studio plus Cloudy TFS gives us full integration and brings project management into Visual Studio.


Map the workspace to a local folder: click the local path marked 'Not Mapped'.


We then perform an initial Get.


This will then give us a source structure on our file system, which you can configure as per your coding standards and push back in.



As you can see here, you can in fact associate a Work Item, including your story, with your check-ins.


As you can see, the partnership of Visual Studio 11 and TFS (cloud) is highly cohesive and very powerful as a full life-cycle ALM solution. But where, I hear you cry, is CI and build? Well, it turns out TFS has a build engine, and my colleague John Mitchell will be talking about this in the second half of this blog; for now, here is the tantalising Build tab inside Visual Studio 11.


Building Azure Applications using Agile Methodologies and Continuous Integration


There’s been much chatter on user group forums about the use of agile methodologies within cloud projects. We, at Blush Packages, have successfully completed a project and wanted to share our architecture with the wider community. This article is based on talks we’re currently giving to various user groups up and down the country. We hope it gives you pause for thought on how to approach cloud projects. Whilst this article refers to a TeamCity build environment, all principles can equally be applied to TFS. To further the agenda of developers using the new cloudy TFS Preview we’ll be talking on this at the UK Windows Azure Group conference in June.

Put simply, agile development practice begins with a TestFixture.

Once the first test is written using TDD principles then the whole project can begin in earnest.

A modern Microsoft web technology stack begins with ASP.NET MVC Razor for the presentation layer. We defined a production and test stack using the following key tools and libraries. The architecture and choices should come as no surprise because this is a common and powerful application stack.

The test stack we chose:

o NUnit

o NUnit Fluent Extensions

o NSubstitute

And the application stack:


· Razor

· Services Business Layer

· POCO Domain Layer

· Entity Framework 4.1

· Repository Pattern

Evolutionary Point

Once the stack has been determined it's a fairly trivial exercise to begin thinking about how to engineer Windows Azure into a solution using best practice.

To this point you generally have several passing tests and a functioning web application. In order to make room for Azure, the testing approach needs to be modified. This entails refactoring controller-based unit tests to become UI tests that are browser-driven.

The adjusted Testing stack will now include:

· NUnit

· StoryQ

· Watin

The Windows Azure SDK comes with a compute and storage emulator which contains 90% of the code you would find in the cloud. It's essential therefore that we also plan to test against our "devfabric". The Azure emulator should be invoked in the Test/ClassInitialize methods and torn down at the end of the TestFixture runs. Some example code to do this may look like the following:
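The original code sample has been lost from this post. As a stand-in, here is a sketch of the commands the fixture drives (Python rather than the original C#; the /devfabric switches belong to the SDK's csrun.exe, while the helper names and paths are invented):

```python
# Sketch of the emulator lifecycle the test fixture manages.
# Assumes csrun.exe from the Windows Azure SDK is on the PATH;
# each command list would be run with subprocess.check_call(cmd).
CSRUN = "csrun"

def class_initialize_commands(package, config):
    """ClassInitialize: start the devfabric, then deploy the packaged project."""
    return [
        [CSRUN, "/devfabric:start"],
        [CSRUN, package, config],
    ]

def fixture_teardown_commands():
    """End of the TestFixture run: stop the devfabric and kill DFService
    so it cannot hold resources locked against the next run."""
    return [
        [CSRUN, "/devfabric:shutdown"],
        ["taskkill", "/F", "/IM", "DFService.exe"],
    ]
```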



We also have some appsettings in our test config file:
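(That sample is also missing; the key names below are hypothetical stand-ins for the kind of settings such a fixture needs – the package and configuration paths for csrun, and the base URL the tests hit.)

```xml
<!-- Hypothetical appSettings; the real key names were lost with the original post -->
<appSettings>
  <add key="AzurePackagePath" value="C:\PathToPackage\OurPackage.cspkg" />
  <add key="ServiceConfigPath" value="C:\PathToPackage\ServiceConfiguration.cscfg" />
  <add key="EmulatorBaseUrl" value="http://127.0.0.1:81/" />
</appSettings>
```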


The csrun command-line tool is used to start up the devfabric and run a packaged Azure project within the emulator. The corresponding DFService is killed afterwards to free up any locked resources that the emulator would otherwise hold open and that would prevent subsequent test runs from completing.

The architecture we briefly touched on in the application stack was embodied in a set of web and worker roles as illustrated in the diagram below. As you’ll appreciate this is a common Service Oriented Architecture (SOA) approach with adapters used by web and worker role clients. This architecture is in fact a model Microsoft architecture depicted in many best practice guides.


Nightly Build Goals

TeamCity can be utilised fairly easily to do check-in builds. It will check for changes in a source control repository (in our case we used Subversion) and, on detecting a change, will do the following in sequence:

· Remove any existing source local to the CI

· Get the latest source code from subversion

· Build that source

· Run any configured unit tests

· Send an email detailing build success and a summary of the changes made

At this stage none of the above had any interaction with Azure. Any deployments to Azure were done manually on demand from Visual Studio, and would be performed prior to exposing the deployment to its intended audience (typically the business).

Although a perfectly good solution at the start of a project, it quickly becomes untenable as your functionality footprint increases.

In order to “Continually Integrate” as agile best practice describes it’s important to ensure that code is pushed to the intended deployment target, which in this case is Windows Azure. As such this cycle needs to be automated such that “Integration Tests” can be performed directly against an Azure deployed host as part of the test lifecycle.

The aim of the nightly build was to extend our check-in build to push all the way to Azure, followed by the requisite testing.


The following tools are a common approach to this problem. Many of you will have had experience of several or all of these:

· JetBrains TeamCity

· Cerebrata CmdLets

· Gallio test runner

· Powershell

· Windows cmd files

Using JetBrains TeamCity

There are a number of reasons to choose TeamCity over other solutions:

1. Familiarity. We have used TeamCity on a number of prior projects which gave us a negligible learning curve to get up and running.

2. Ease of use. A Web based UI for all tasks (configuration, monitoring etc.)

3. Cost. For a single project with a small number of developers, TeamCity is free.

4. We had to use something. Sounds obvious but we had nothing in our toolset that offered CI as part of its functionality so we had to look externally.

5. Confidence. All of the above plus choosing a tool from a respected supplier minimised the risk that we would encounter issues around the tooling.

Using Cerebrata Cmdlets

For just over $100.00 you get over 100 CmdLets to automate all aspects of your Azure deployment and its ongoing management.

If you were to code the functionality required yourself you would probably spend several days or weeks, end up with a non-generic solution, and know it would not be reusable in the way the CmdLets are. The choice is one of simple economics and time budgeting. Writing the functionality would also be a shift in focus away from the prime goal of the task.

The CmdLets have multi-purpose uses and cover all aspects of the Service Management and Storage Services APIs, so they become invaluable in the project.

The Build

Please note this is not an instructional step by step guide on using TeamCity or any of the other tools.

In fact the steps we use here should be transferable to any build tool. Reading this will not make you an expert on any of the tools used, but it will hopefully give you an idea of what you can achieve with them. We also do not discuss installation and configuration of these tools, although we intend to in a more detailed walkthrough post planned for the near future.

It is worth noting that if any of the build steps fail, the build will stop. i.e. you should not get a bad deployment because we continued ignoring an error.

Build Triggering

When you create a TeamCity build, you need to tell it when to run. In our case we want the build to run at night, preferably at a time when no one will be checking in any source code (we are Ninjas… we check in all night!). Our build is triggered to kick off at 2.00am.


You will also note that, in the rare event that nothing has changed, the build will not run.

Source Control Integration

The configuration settings for source control checkout (Subversion) are made once per installation. You can then use this connection in your build. (The VCS Root in this case).

It's good practice in the build to tell it where the latest source should be checked out to (the Checkout directory) and whether the folder should be cleaned first.



We chose a Visual Studio build step to build our solution. This could easily have been MSBuild, but in the early stages of our CI life it helps to have Visual Studio installed on the CI machine to troubleshoot any issues that arise.

The configuration is as shown:


Backup Source

Not really a step that you would see in a build, as there are more obvious ways to do this. The nightly build, however, is a convenient spot in the development life cycle where you know you will consistently have up-to-date source in a single place, so it's worth taking the time to back it up to the CI server.

Run Unit Tests

The unit tests are NUnit tests that exercise the functionality sitting below our controllers. They test services, any facades over those services (the adapters in the architecture diagram) and any helper classes. We make heavy use of mocking (NSubstitute in our case) to isolate tests to the functionality they should be testing, and the majority, if not all, of these tests come about from a "test-first" (TDD) style development process.

It should be noted that developers do not commit source without first checking the stack of tests are "Green".

The configuration here is to tell TeamCity:

· Which test runner to use,

· What version of .NET is in play

· A list of assemblies containing test fixtures (classes marked with the [TestFixture] attribute)


Package For Azure

If you get a clean compile/build and the unit tests all run green, the build can be deployed to the Azure staging environment. This environment looks just like the live one, and allows you to deploy and test without touching the current live setup.

By default, a Visual Studio build does not package your application for deployment to Azure. This is part of the deployment step that you would manually trigger when you deploy from Visual Studio.

Therefore we have an additional build step that utilises MSBuild to do this. The build file path points at your Azure project file; there is no requirement here to create an MSBuild file, as the .ccproj is itself an MSBuild file.

The key to creating the publish files is the CorePublish target.


Publish to Azure

This is where we make use of a Cerebrata CmdLet to take the assets created in the previous step and publish them to the Azure staging environment.

From a build step perspective, this is just a call to a PowerShell script. The -ExecutionPolicy Unrestricted command line option instructs PowerShell to ignore any default PowerShell execution policy restrictions.


Deployment Script

# Subscription Id
$subscriptionId = "OurId";
# Following example illustrates how to create a certificate object for a certificate
# present in certificate store.
$certificate = Get-ChildItem -path cert:\CurrentUser\My\OurCertId
# Name of the hosted service
$serviceName = "OurServiceName";
# Slot (Production or Staging)
$slot = "Staging";
# Package file (.cspkg) location. It could be a file on the local computer or a file stored in blob storage.
$packageLocation = "C:\PathtoPackage\OurPackage.ccproj.cspkg";
# Configuration file (.cscfg) location. It is a file on the local computer.
$configFileLocation = "C:\PathtoPackage\bin\Debug\app.publish\OurPackage.Cloud.cscfg";

# Label for deployment
$label = "Nightly Build";
# Upgrade mode. It could either be "Auto" or "Manual"
$mode = "Auto";
echo "Running Azure Deployment to staging"
Update-Deployment -ServiceName $serviceName -Slot $slot -PackageLocation $packageLocation -ConfigFileLocation $configFileLocation -Label $label -SubscriptionId $subscriptionId -Certificate $certificate
echo "Azure deployment to staging complete"

Get Url of Staging Instance

The URL for a staging instance of Azure is not guaranteed to be consistent between deployments. Cerebrata provides a CmdLet to get deployment metadata, and its payload contains the staging instance URL for a web role.

Here is the TeamCity configuration for the build step:


The PowerShell script to call the CmdLet is again largely unmodified from the Cerebrata sample script apart from pushing the output to a file for further parsing.

Deployment Script

# Subscription Id
$subscriptionId = "OurId";
# Following example illustrates how to create a certificate object for a certificate
# present in certificate store
$certificate = Get-ChildItem -path cert:\CurrentUser\My\OurCertId
# Name of the hosted service
$serviceName = "OurServiceName";
# Slot (Production or Staging)
$slot = "Staging";
echo "Running GetDeployment"
echo $subscriptionId
echo $slot
echo $serviceName
echo $certificate
Get-Deployment -ServiceName $serviceName -Slot $slot -SubscriptionId $subscriptionId -Certificate $certificate | Out-File StagingDeployment.txt
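The further parsing of StagingDeployment.txt is then simple string work; a sketch in Python (the layout of Get-Deployment's text output, and the sample below, are assumptions):

```python
import re

def extract_staging_url(deployment_text):
    """Pull the first http(s) URL out of the dumped Get-Deployment output."""
    match = re.search(r"https?://[^\s\"']+", deployment_text)
    return match.group(0) if match else None

# Invented sample resembling a property dump from the CmdLet
sample = "Name : 1234\nUrl  : http://abc123.cloudapp.net/\nStatus : Running"
print(extract_staging_url(sample))  # http://abc123.cloudapp.net/
```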

Transform UI Specification configuration

This is probably a common problem for most CI unit-test assemblies, where the configuration might have to change from environment to environment. If this were a web.config file you could utilise the Visual Studio web.config transformations out of the box, but not in this case.

Fortunately for us this problem has been solved by… and we use his config utilities to perform the environment changes we need for our config file.

The step configuration looks like:


The .cmd file looks like:

@echo off
set ProjectPath=C:\OurRootPath
set Source=%ProjectPath%\OurSpecificationsPath\app.config
set Transform=%ProjectPath%\OurSpecificationsPath\Transform.config
set Target=%ProjectPath%\OurSpecificationsPath\bin\debug\OurSpecifications.UI.Specifications.dll.config
C:\OurRootPath\utils\ctt.exe s:%Source% t:%Transform% d:%Target%
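For reference, the transform file that ctt.exe consumes is a standard XDT document; a hypothetical Transform.config that repoints a base URL from the emulator to the staging host might look like:

```xml
<!-- Hypothetical Transform.config (XDT); the key and value are illustrative -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <add key="BaseUrl" value="http://ourservice.cloudapp.net/"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>
```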

Run integration tests

All the previous steps lead to this. The integration tests are a set of WatiN tests (WatiN is a library that lets you drive a browser programmatically from NUnit tests and check assertions against the result, e.g. that the page contains certain text). These are run against the newly deployed Azure instance. We could have used the NUnit test runner from our unit tests step, but instead we used Gallio for its enhanced test output.

The build step looks like:


The command puts the output, in HTML format, into a known directory on the CI machine. This content is web-enabled, and a link to it is sent in our final email on build completion.

The output looks something like this:


VIP Swap

Assuming everything is green with our tests and our build is good enough for a "live" deployment, we run a final Cerebrata Cmdlet to perform a Virtual IP Swap (VIP Swap) from staging to production.

The build step looks like:


The Cerebrata based PowerShell script looks like:

# Subscription Id
$subscriptionId = "OurId";
# Following example illustrates how to create a certificate object for a certificate
# present in certificate store.
$certificate = Get-ChildItem -path cert:\CurrentUser\My\ourCertId
# Name of the hosted service
$serviceName = "OurServiceName";
# Script below moves a deployment in staging slot to production slot if production slot is empty
# otherwise it swaps staging and production slot.
Move-Deployment -ServiceName $serviceName -SubscriptionId $subscriptionId -Certificate $certificate


The above set of steps and explanations should give you a broad overview of what is required to publish your applications to Azure in an automated way and run your integration tests. We are available to answer any questions on this topic. A fuller description of the above will be presented by us at the UK Windows Azure User Group meeting in Manchester on the 4th April. Registration for this can be done at http://www.ukwaug.net



John Mitchell

John Mitchell has worked in technical roles with a number of brand names including Tesco.com, Vodafone, Volkswagen Finance and Royal Bank of Scotland. Recently John was responsible for delivering the ticketing system for the Abu Dhabi Grand Prix. John specialises in Agile development on the Microsoft stack and is currently engaged with Integrity Software as a consultant on their Azure SaaS solution.

Rebecca Martin

Becca Martin is currently the development lead for Integrity Software. She has worked on a number of Azure projects but is mainly known within the Agile space. Becca is an active speaker at user groups and events in both the UK and USA. Becca has worked on software for end customers such as Toyota, Coca-Cola, Elateral, Enta Ticketing, ThoughtWorks, Dot Net Solutions and The Carbon Trust. Becca recently set up the Windows Azure Inside Solutions group.