Castle: We Still XML The Old Way

Post 4 in a 101 series on the Castle IOC project called Windsor. See part one for background

XML Config Based Registration

Although demoware may show us that inline registration of components is easy, it may not be the best thing for your application. The previous posts showed registration of types in C#, which means we must recompile if we change any part of our registration. That may be considered a bad thing; it may also be considered a bad thing that the assembly performing all of this registration knows about (and has references to) a lot of possibly inappropriate libraries and projects. One way around all of this is to use the XML configuration.

Using our same basic console app I add an app.config file and put the following in it:




<configuration>
  <configSections>
    <section name="castle"
             type="Castle.Windsor.Configuration.AppDomain.CastleSectionHandler, Castle.Windsor" />
  </configSections>
  <castle>
    <components>
      <component lifestyle="transient"
                 id="Writer"
                 service="CastleIocConsole.IWriter, CastleIocConsole"
                 type="CastleIocConsole.HelloWriter, CastleIocConsole" />
      <component id="Gday"
                 service="CastleIocConsole.IWriter, CastleIocConsole"
                 type="CastleIocConsole.GdayWriter, CastleIocConsole" />
      <component id="List"
                 service="CastleIocConsole.IWriter, CastleIocConsole"
                 type="CastleIocConsole.ListWriter, CastleIocConsole" />
      <component id="MessageService"
                 service="CastleIocConsole.IMessageService, CastleIocConsole"
                 type="CastleIocConsole.GuidMessageService, CastleIocConsole" />
    </components>
  </castle>
</configuration>



Several things to note:

  • You should specify the fully qualified name any time you declare types in XML, that is: Full.NameSpace.Type, Assembly
  • All components have an id
  • You don’t need to specify a service; if the type has no interface, or is the base class, you can just declare the type.
  • You can declare lifestyles (the transient lifestyle specified is not important in this example, it’s just there to show it can be done)

In addition to this you can specify which concrete dependencies you want, assign component properties and declare interceptors for AOP-style runtime injection of code blocks (more on this to come).
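As a sketch of that dependency wiring (the parameter name below matches the ListWriter constructor from the previous post; the ${id} reference notation is Castle's standard syntax, but check it against your version's docs), a component can be pointed at a specific registered dependency like so:

```xml
<!-- Sketch only: forces the ListWriter to use the component
     registered with id "MessageService" for its messageService
     constructor parameter -->
<component id="List"
           service="CastleIocConsole.IWriter, CastleIocConsole"
           type="CastleIocConsole.ListWriter, CastleIocConsole">
  <parameters>
    <messageService>${MessageService}</messageService>
  </parameters>
</component>
```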

Using this configuration we could do something like this:

using System;
using Castle.Core.Resource;
using Castle.Windsor;
using Castle.Windsor.Configuration.Interpreters;

namespace CastleIocConsole
{
    class Program
    {
        static void Main(string[] args)
        {
            IWindsorContainer container =
                new WindsorContainer(
                    new XmlInterpreter(
                        new ConfigResource("castle")));
            var writer1 = container.Resolve<IWriter>();
            writer1.Write();
            var writer2 = container.Resolve<IWriter>("List");
            writer2.Write();
            Console.ReadKey();
        }
    }
}

which would give the output of (see previous post for component definitions):

Hello World
122eb0e3-0812-4611-a580-7cf976039a89
5a916589-f7a5-4121-9fb6-a51cf24a6f43
977ec6f9-8110-4074-b537-047f21e6a70f
a136ae03-0caf-49fb-8511-92e3f2d39446
3aa1db17-78c8-438f-84a9-91ef14753ca7

As with any XML there seem to be a few catches, so here are some gotchas that may help:

  • All components need an id
  • First in, best dressed: the first component registered is the default for that service (interface)
  • Standard .Net XML config notation applies, eg no line breaks in type definitions, generics marked up with back tick notation etc
  • There is no official XSD schema but there are user defined ones; find one and put it in the VS schema folder (eg C:\Program Files\Microsoft Visual Studio 9.0\Xml\Schemas) to give you intellisense
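To illustrate that back tick notation: a closed generic service or type is written with its CLR name, ie the back tick plus the number of type parameters, with each type argument's assembly-qualified name in double brackets. The component below is purely hypothetical, shown only for the markup:

```xml
<!-- Hypothetical component, just to show the generic markup -->
<component id="StringList"
           service="System.Collections.Generic.IList`1[[System.String, mscorlib]], mscorlib"
           type="System.Collections.Generic.List`1[[System.String, mscorlib]], mscorlib" />
```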

That is the last in the 101 series for now. Soon I would like to cover:

  • Interception and how you can use AOP with the Castle framework
  • Runtime config without XML (Binsor)

Rhys

Back to Post 3

Castle: Control Your Dependents!

Post 3 in a 101 series on the Castle IOC project called Windsor. See part one for background

What About the Dependencies?

An IoC container is pretty much no use if it does not resolve a component's dependencies, so we will show how this can work.

We introduce a new component, the ListWriter, which uses constructor-based dependency injection to receive an IMessageService. This dependency is used when writing our message:

class ListWriter : IWriter
{
    private readonly IMessageService messageService;

    public ListWriter(IMessageService messageService)
    {
        this.messageService = messageService;
    }

    public void Write()
    {
        foreach (var message in messageService.GetMessages())
            Console.WriteLine(message);
    }
}

interface IMessageService
{
    IEnumerable<string> GetMessages();
}

class GuidMessageService : IMessageService
{
    public IEnumerable<string> GetMessages()
    {
        for (int i = 0; i < 5; i++)
        {
            yield return Guid.NewGuid().ToString();
        }
    }
}

Because we have established an obvious dependency by exposing a constructor parameter, the container is smart enough to know it will have to find a registered component that fits that service. This means we also have to register the new service we depend on:

class Program
{
    static void Main(string[] args)
    {
        var container = new WindsorContainer();
        container.AddComponent<IWriter, ListWriter>();
        container.AddComponent<IMessageService, GuidMessageService>();
        var writer1 = container.Resolve<IWriter>();
        writer1.Write();
        Console.ReadKey();
    }
}

Which when run will resolve the IWriter service as the ListWriter component, realise the component cannot yet be constructed as it has a dependency on IMessageService, so resolve that too (as the GuidMessageService component we registered on the third line of the method), inject it into the ListWriter and then call Write(), giving the expected output; 5 random GUIDs in their string representation:

122eb0e3-0812-4611-a580-7cf976039a89
5a916589-f7a5-4121-9fb6-a51cf24a6f43
977ec6f9-8110-4074-b537-047f21e6a70f
a136ae03-0caf-49fb-8511-92e3f2d39446
3aa1db17-78c8-438f-84a9-91ef14753ca7

This shows that the container can now worry about creating objects with complex dependencies and deep object graphs, allowing you to have a loosely coupled application, ready for any change that may come your way.

Next up : Using XML configuration

Back to Post 2

Castle: Pick Your Lifestyle

Post 2 in a 101 series on the Castle IOC project called Windsor. See part one for background

Lifestyle Management

Often you will want to control the lifestyle of the objects you create; a common but overused lifestyle is the singleton. Using an IoC container means you do not have to manage this in your code, which means other dependencies don’t have to implicitly know they are dealing with a singleton (or whatever lifestyle is being implemented). This is good: it drastically cleans up production code and makes it a lot easier to test.

So let’s have a look at a new implementation, the MultiWriter; a concrete implementation (or component) that tells you how often this instance has greeted you.

class MultiWriter : IWriter
{
    private int timesGreeted = 0;

    public void Write()
    {
        Console.WriteLine("I have greeted you " + timesGreeted + " time(s) before");
        timesGreeted++;
    }
}

We change the program class so we register our new component and resolve the service a couple of times to see the output:

using System;
using Castle.Windsor;

namespace CastleIocConsole
{
    class Program
    {
        static void Main(string[] args)
        {
            var container = new WindsorContainer();
            container.AddComponent<IWriter, MultiWriter>();
            var writer1 = container.Resolve<IWriter>();
            writer1.Write();
            var writer2 = container.Resolve<IWriter>();
            writer2.Write();
            var writer3 = container.Resolve<IWriter>();
            writer3.Write();
            Console.ReadKey();
        }
    }
}

The output is somewhat unexpected for newcomers:

I have greeted you 0 time(s) before
I have greeted you 1 time(s) before
I have greeted you 2 time(s) before

Even though we have resolved the service to a new variable each time, the count keeps incrementing. This is because Castle resolves types as singletons by default. What we can do, if this lifestyle is inappropriate, is specify that we want the component to have a different lifestyle, transient for example, which gives the behaviour associated with calling the component's constructor each time we resolve the service (ie var writer = new MultiWriter()). This requires a slightly different way of registering our components: we use the AddComponentWithLifestyle method and the Castle.Core.LifestyleType enumeration to select our desired lifestyle.

using System;
using Castle.Windsor;
using Castle.Core;

namespace CastleIocConsole
{
    class Program
    {
        static void Main(string[] args)
        {
            var container = new WindsorContainer();
            container.AddComponentWithLifestyle<IWriter, MultiWriter>(LifestyleType.Transient);
            var writer1 = container.Resolve<IWriter>();
            writer1.Write();
            var writer2 = container.Resolve<IWriter>();
            writer2.Write();
            var writer3 = container.Resolve<IWriter>();
            writer3.Write();
            Console.ReadKey();
        }
    }
}

The output is now the expected:

I have greeted you 0 time(s) before
I have greeted you 0 time(s) before
I have greeted you 0 time(s) before

This is because a brand new instance is created each time we resolve. The other lifestyle I like is PerWebRequest (for obvious reasons).
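If I remember the setup correctly, PerWebRequest needs one extra bit of plumbing: an HTTP module registered in your web.config. Something like the following, though check it against your Castle version's documentation:

```xml
<!-- Required for LifestyleType.PerWebRequest to work in ASP.Net -->
<httpModules>
  <add name="PerRequestLifestyle"
       type="Castle.MicroKernel.Lifestyle.PerWebRequestLifestyleModule, Castle.MicroKernel" />
</httpModules>
```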

Next we control our dependencies

Back to Post 1

Castle: The .Net framework of choice

Post 1 in a 101 series on the Castle IOC project called Windsor

I have long been an advocate of the Castle stack, stemming from my days at Change Corp. At the time we were using Spring.Net for our IoC container, but we were also using NHibernate. I noticed that we had Castle DLLs in the bin and realised that NHibernate was using them for object creation. I looked into the Castle project and, despite its lacklustre documentation, was able to get IoC working pretty quickly. One of the things I personally like is that you do not NEED to use XML for your container registration (i.e. defining which concrete type to use for which abstract request). On top of this, the MonoRail project (the original MVC framework for ASP.Net) and ActiveRecord (the AR pattern sitting on top of NHibernate) struck a chord as being particularly handy tools.

Years later the stack remains pretty much the same. There are some offshoot developments, but by and large the same stuff I went to Castle for is why I still like to use it… well, at least I thought so.

Yesterday when writing the DI blog post I realised I had not directly used Castle in over a year. I have a wrapper that I use to abstract my interaction with the container, and the OSS libraries that I use/play with hide the details too (MassTransit, SutekiShop etc). So I have decided to give a crash course, for myself and the rest of the world, on how to set up and exploit some of the more basic but super handy Castle features.

Castle, IoC And You: The set up

First and foremost you need to download the latest Castle libraries. Currently these are RC3, last updated in September 2007. Get them here.

Install the binaries; these will be jammed in the GAC for safe keeping. You can also use the straight DLLs if you need to for source control etc, just make sure to reference everything you need (i.e. secondary dependencies).

Right, with that sorted we are able to do the world’s silliest IoC demo.

Create a console application and include references to  Castle.Core.dll, Castle.MicroKernel.dll, Castle.Windsor.dll & Castle.DynamicProxy.dll.

I have created a very basic interface called IWriter which has one method: void Write()

I have two classes that implement it: HelloWriter and GdayWriter. It is probably a good time to note that in the Castle world a “service” generally refers to the interface or abstract type that you are calling, and a “component” is a concrete implementation. So IWriter would be our service, and HelloWriter and GdayWriter are considered components.

interface IWriter
{
    void Write();
}

class HelloWriter : IWriter
{
    public void Write()
    {
        Console.WriteLine("Hello World");
    }
}

class GdayWriter : IWriter
{
    public void Write()
    {
        Console.WriteLine("G'day world!");
    }
}

Right, these are not the most useful of classes, but they will help show you how to use Castle to control creation of dependencies. In our Program class place the following:

using System;
using Castle.Windsor;

namespace CastleIocConsole
{
    class Program
    {
        static void Main(string[] args)
        {
            var container = new WindsorContainer();
            container.AddComponent<IWriter, HelloWriter>();
            var writer = container.Resolve<IWriter>();
            writer.Write();
            Console.ReadKey();
        }
    }
}

Hit F5 to debug and see that

Hello World

was output, as the HelloWriter component was resolved and its implementation of Write was called on the second to last line, before Console.ReadKey().

Next up we show how to get named instances of concrete types. Say, for example, you have a default type, but on the odd occasion you need a different one; instead of breaking the notion of DI and using concrete dependencies, you can ask for an implementation by key. In this example we register two concrete types against the same interface, but one has a key specified.

static void Main(string[] args)
{
    var container = new WindsorContainer();
    container.AddComponent<IWriter, HelloWriter>();
    container.AddComponent<IWriter, GdayWriter>("gday");
    var writer1 = container.Resolve<IWriter>();
    var writer2 = container.Resolve<IWriter>("gday");
    Console.Write("writer1 output: ");
    writer1.Write();
    Console.Write("writer2 output: ");
    writer2.Write();
    Console.ReadKey();
}

The output is:

writer1 output: Hello World
writer2 output: G'day world!

I find that most of the time I expect a default implementation, so I do not explicitly set a key at registration time, but it’s nice to know it’s there when you need it.

Next we manage LifeStyle

Real World Dependency Injection

On Thursday I gave a talk on Real World TDD at the Perth .Net Community of Practice. I’m not sure what people were expecting, but the turnout was incredible: standing room only. Hopefully I gave the punters what they were hoping for. A lot of secondary topics were raised, one being the notion of design, and allowing TDD to help shape good design by following the SOLID principles. Dependency Injection was probably the most notable, and for some people this may have been somewhat unusual. This article hopes to explain why we use DI and how we can use it in the real world.

What is Dependency Injection

Dependency Injection, or dependency inversion, is the idea that we depend on abstractions, not concrete implementations. The example I gave in the talk was a trivial one using a logger. Most of us use some sort of logging in our code, so a lot of our code has a concrete dependency on a logging class, eg:

public void DoSomething()
{
    Logger.Log("Entering Do Something");
    //Do the thing you wanted
    Logger.Log("Exiting Do Something");
}

Why do we use it

In the example above, if we wanted to change the logger we use we would have to go into this code and change it; in fact we would most likely have to change it in every method call for something as prolific as logging. That is a lot of changes. Not only this, but the reuse of this code without DI is limited, and you will have to pass the concrete logger as a reference to anything that wants to use it. That is not a big deal for logging, but what if you had some potentially reusable components? Those concrete dependencies will become painful, reducing reusability very quickly. An example that parallels DI, and that many .Net developers will be familiar with, is the Plug In pattern. Asp.Net membership is an example of this, in which we specify what type of provider we are going to use for membership. Our code doesn’t change; all we need to do is change the config in one place and the implementation is swapped.

When do we use it

I will use DI anywhere that I have a dependency that is not static. (This may not be the best rule of thumb, but it is how I do it; the number of static helper classes I have is minimal, so this is not a big deal for me.)

Take, for example, the MVP pattern: the presenter has a dependency on a view and on the model. The view, however, does not have a dependency on the presenter (despite what many coding samples out there may say), and the model does not have a dependency on the presenter, but may depend on other models, data access components, services etc. Assume we have decided that the presenter is not in a valid state without these dependencies; we therefore decide that they should be injected as constructor arguments.

public class FooPresenter
{
    private readonly IFooView view;
    private readonly IFooModel model;

    public FooPresenter(IFooView view, IFooModel model)
    {
        //DBC checks eg null checks etc
        this.view = view;
        this.model = model;
    }
    //rest of the presenter
}

Note: by stipulating the dependencies are readonly, they must be assigned by the end of the constructor. To me this helps clarify the intention, and I tend to use it over private setters, eg:

//I don't like this for constructor dependencies,
//but it is valid
public IFooView View { get; private set; }

The presenter is now only dependent on the interfaces of the view and model. We can change from WinForms to WPF without any dramas; we can move from a web service based model to an EF model… we don’t care about how the dependencies do their job, they just have to adhere to the contract that is the interface.

To me this is a critical part of TDD. It allows me to focus on the task at hand, eg writing the presenter, without having to worry about coding the model and view at the same time; all I have to do is create the interfaces that they have to adhere to while I create the presenter. This is typically all done at the same pace. I may decide I need to raise an event from the view, say “AddFooRequest”. I create a test to listen for that event in the presenter test, create the event on the view interface, create a handler for the event in the presenter and wire it up. I have not written any concrete code in the view, just the view interface, the concrete presenter and its test fixture. This to me is already a huge benefit of DI.
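To sketch that workflow in code (only “AddFooRequest” comes from the description above; the other member names are made up for illustration), the interface-first approach ends up looking something like:

```csharp
using System;

//Only the interface needs to exist for the presenter and its test;
//no concrete view has been written yet.
public interface IFooView
{
    event EventHandler AddFooRequest;
}

public class FooPresenter
{
    private readonly IFooView view;

    public FooPresenter(IFooView view)
    {
        this.view = view;
        //wire the view event to a presenter handler
        view.AddFooRequest += OnAddFooRequest;
    }

    private void OnAddFooRequest(object sender, EventArgs e)
    {
        //respond to the request, eg call the model
    }
}
```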

How do we use it

The first problem people run into when using DI is plugging the pieces together. Basically we have stated that the object should not know about its concrete dependencies, but something has to! Is it the object that is calling it? It would be silly if presenter A did not know about its own dependencies, but knew about presenter B’s dependencies so it could construct presenter B when navigating to it… Well, there are two realistic ways around this: “Poor Man’s DI” and Inversion of Control containers.

Poor man’s DI is a way of saying “I don’t want to code to implementations, but I will anyway”. It’s actually a pretty good place to start when DI is still a relatively new concept. You basically have defaults in your class and allow them to be overridden, eg:

public class FooPresenter
{
    private readonly IFooView view;
    private readonly IFooModel model;

    //Poor man's DI
    public FooPresenter()
        : this(new FooView(), new FooModel())
    {}

    //Complete ctor allowing for DI
    public FooPresenter(IFooView view, IFooModel model)
    {
        //DBC checks eg null checks etc
        this.view = view;
        this.model = model;
    }
    //rest of the presenter
}

The problem with this is that if you are using DI everywhere then (in this example) the presenter will have to know how to create the model, or the model will have to have poor man’s DI too.

A better option is to have one place that knows how to construct things, like a big object factory. This is your Inversion of Control container, usually a third-party API that becomes a critical part of your framework. Such containers are godsends in large projects and allow you to change dependencies for whole solutions very quickly and cleanly. The common containers are StructureMap, Castle (Windsor), Spring.Net, Unity, Ninject etc. At first you will most likely use a tiny portion of these containers’ capabilities, generally resolving and setting the concrete type to construct for a given abstraction, eg:

//set a default for the whole application
Container.AddComponent<IFooPresenter, FooPresenter>();
Container.AddComponent<IFooView, FooWinFormView>();
Container.AddComponent<IFooModel, FooNhibernateRepository>();

//Retrieve a concrete instance from the container
var presenter = Container.Resolve<IFooPresenter>();

The Container.Resolve call on the last line asks for the concrete instance of IFooPresenter. We have defined that above as being the concrete FooPresenter. We know that FooPresenter has dependencies on IFooView and IFooModel, so the container will look for registered components for those too and try to instantiate them. We have specified that we want the container to create a FooNhibernateRepository any time we ask for an IFooModel, and a FooWinFormView any time we ask for an IFooView. Most IoC containers will tend to take the greediest constructor (the one with the most parameters), so you can even migrate from poor man’s DI to an IoC implementation quite smoothly.

NB: Container here is a generic wrapper that I use, as I tend to use a different container depending on the employer/contract I work on, so I use an adapter pattern to isolate me from the varying syntax. If this approach appeals then check out the Common Service Locator on CodePlex.
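For what it’s worth, the wrapper is nothing clever. A minimal sketch (the class and member names are mine, and it sits over Windsor here; swap the guts out for your container of choice) looks roughly like:

```csharp
using Castle.Windsor;

//A static facade over whichever container is in use; the rest of
//the code base only ever sees AddComponent and Resolve.
public static class Container
{
    private static readonly IWindsorContainer inner = new WindsorContainer();

    public static void AddComponent<TService, TComponent>()
        where TComponent : TService
    {
        //key, service type, concrete type
        inner.AddComponent(typeof(TService).FullName,
                           typeof(TService),
                           typeof(TComponent));
    }

    public static T Resolve<T>()
    {
        return (T)inner.Resolve(typeof(T));
    }
}
```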

I hope this helps people move towards better designed software and an easier to use and looser, decoupled API.

Rhys

Presenter Tests 101

Want to write a test to make sure the presenter loads data from a service/model/repository to the view?

IFooEditView view;
IFooService service;

[TestInitialize]
public void MyTestInitialize()
{
    view = MockRepository.GenerateMock<IFooEditView>();
    service = MockRepository.GenerateMock<IFooService>();
}

[TestCleanup]
public void MyTestCleanup()
{
    //global assertion
    view.VerifyAllExpectations();
    service.VerifyAllExpectations();
}

[TestMethod]
public void CanInitialiseFooEditPresenter()
{
    //arrange
    var id = 1;
    var record = new Foo(id);
    view.Expect(v => v.LoadRecord(record));
    service.Expect(s => s.RetrieveFooRecord(id)).Return(record);
    //act
    var pres = new FooEditPresenter(view, service);
    pres.Initialize(id);
    //assert - mock verification in tear down
}

NB: I really should stop posting code by writing straight into the blogger create-post screen. Just lazy…

PowerShell to save the day!

I have been doing a fair bit of build script work over the last couple of months. I guess it started when we were having big problems with the build process at my last contract. (I had been using Nant for about a year prior, but it never really did anything other than clean, rebuild and run my tests. That’s cool, it’s all it needed to do.)
We really needed to look at our build process, as it took about 3 hours to do a deploy, which we were doing up to 2 times a week… 6 hours a week of a London based .Net contractor: that is some serious haemorrhaging of cash. I started really looking into build servers and properly configuring build scripts. Most places I work at are very M$ friendly and not overly fond of OSS, so I tend to be stuck with MSBuild if it is a shared script. So goodbye Nant.
Fast forward to a few weeks ago: I have moved country and company and am working with a great team of developers who are incredibly pragmatic and receptive to new or different ideas. We set up a build server and installed JetBrains TeamCity, pointing at VSS and a basic MSBuild script that was a port of my Nant script. It worked; it did what we needed, which was take what was checked in, rebuild, test, send a zip of the output to a network folder and let us know if the whole process succeeded or not. Simple and sweet.
Enter ClickOnce. Ahhh. Ok, so ClickOnce is a great idea in that it manages your company’s deployments of smart client software. No longer do you have to worry whether users are running the correct version of your software; the latest will always be on their machine. Personally I think this is a great idea and can see why managers would love it. It’s also really easy to deploy… if you are using Visual Studio… and if you only have one deployment environment. Unfortunately I don’t want to use VS (I want to do this from a build server using a potentially automated process) and we deploy to Dev, Test, UAT and Prod. MSBuild really struggles when it comes to this… it basically just can’t do it.
The biggest problem was that I needed to be able to change assembly names so the ClickOnce deployments don’t get mixed up (I want to be able to install Test and Prod on the same box). Changing the exe assembly name in MSBuild changes all the assembly names, which is not too good.
After struggling with MSBuild I realised I was hitting the limits of what MSBuild is supposed to do; it was either change my approach or enter hack town.
Initially I thought Boo, Python or Ruby would be my saviours… then quickly rethought. Although they would be fine in MY mind, other people have to use this and those options are not real M$ friendly… yet. I don’t know why I didn’t think of it earlier, but PowerShell was the obvious answer. I downloaded PowerShell and after playing with it for a couple of minutes I was super impressed. All the stuff I was struggling with in my bat files or my MSBuild scripts was trivial in PowerShell.
Variable assignment, loops, switches etc are all trivial. It extends .Net, so you can handle exceptions and interact with web services, Ado.Net, Active Directory… the sky is the limit.
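To give a flavour of what I mean (this snippet is purely illustrative; the folder path and environment names are made up), the variables, loops and switches that fight you in a bat file or MSBuild script are one-liners in PowerShell:

```powershell
# Illustrative only: per-environment deploy folders in a few lines
$environments = "Dev", "Test", "UAT", "Prod"
foreach ($env in $environments)
{
    $folder = "C:\Deploy\$env"
    # create the folder if it does not already exist
    if (-not (Test-Path $folder)) { New-Item $folder -ItemType Directory }
    switch ($env)
    {
        "Prod"  { Write-Host "$env deploy needs sign off" }
        default { Write-Host "deploying to $env" }
    }
}
```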

Anyway, if you haven’t played with PS, go download it, get the manual and get the cheat sheets.

Documents
http://www.microsoft.com/downloads/details.aspx?FamilyId=B4720B00-9A66-430F-BD56-EC48BFCA154F&displaylang=en

Cheat Sheets
http://blogs.msdn.com/powershell/attachment/1525634.ashx
http://refcardz.dzone.com/assets/download/refcard/5f78fd3b70e077cb9a5b3782356a8a14/rc005-010d-powershell.pdf

And check out PSake from James on CodePlex if you are keen on incorporating PS into your build cycle.

Rhys

NB: I hope to post my revised ClickOnce build strategy… as my last one was a bit of a failure. Sorry if I led anyone astray.

EDIT: Check out PowerShell GUI if you want a nice free IDE.

Still Learning CI

Today some colleagues and I were discussing prospective deployment options. We have all worked on many projects; isn’t it funny how often the deployment process, the most important part, is a complete afterthought. After mentioning it this morning to the PM/BA/non-techies we realised there was no actual deployment procedure in place (this is my first real release at the current company). We decided to let them come up with a standardised plan for their end (“put last build into production” was an option we had to take away from them) while we sorted out our plan.

I am still very much learning about builds and CI, so this is by no means the authoritative answer; this post basically describes what we came up with.

Preface

One thing we are doing well is actually using ClickOnce, a technology that is perfectly suited to the large corporate environment we work in and deliver smart client apps to. We want to continue to do this but make sure it is done properly.

We have very loose processes outside of the people actually writing code. The non-techs are not au fait with agile; they are not too good at requirements, planning, resourcing… well, you get it. So we need to insulate ourselves from any curve balls these guys throw. We also need to seriously cover our ass, because when the proverbial hits the fan around here it slides down the ranks very fast; we want to make sure they have no way of letting it get to us, unless of course we actually deserve it.

We don’t have full control over deployments. There is a system team that will copy our applications up to the next environment (ie Test -> UAT), and it has to be the same application that gets pushed up. This is fair enough, however it doesn’t work well for ClickOnce: by default anyone testing on a lower level would not get updated*, so we actually need a separate application for each environment. We also have only one chance for deployments: if we mess up even one deployment, the process is no longer automatically approved and every change must go through a 2 week change management process.

The Plan

In the time I have been at my new contract I have managed to get a couple of pretty big wins. We are now completely TDD, we have a build server, we do (close to) weekly deployments to a dev test environment and we are getting up to speed on Scrum. What I really wanted next was “one click deployments”.

The plan basically is:

  • We will decide a release date/time for moving from Dev to Test. We are currently doing 7 day sprints and trying (Scrum is new) to deliver a working, production-quality piece of software each week. ATM this is Wednesday 2pm (for a variety of reasons).
  • We run the deploy script**. The deploy script runs our standard build, which performs unit and integration tests, static analysis etc; then it modifies the config for the Test environment and publishes to the pre-deploy network location (then repeats for UAT and Prod).
  • We now have all the releases of each version in a known pre-deployment folder. From here the Test ClickOnce is copied to the real deployment folder.
  • The testers test away and of course there are no bugs [ 😉 ] and they approve release of version x.y.z.b. Because we have all the releases for each environment produced at the same time, and they are the exact same build (other than 3 small config files), the system lads can do their Test->UAT (or UAT->Prod) deployment based on the version number that has been approved.

This means

  • We have every release in pre-deployment for Test, UAT and Prod
  • The testers can let the system guys know the exact version number that has been approved. It is then up to the system guys to copy the correct versions up to the next environment.
  • We can’t modify the files, as that would break the manifest and render the app useless. This keeps the system guys happy.
  • We are removed from the deployment process, which means we don’t have to be at work at 9pm when the deployment takes place.
  • Multiple versions of the application can be held on the users’ workstations, each one assured it is the latest for its given environment. This keeps PMs, BAs, Testers & UAT testers very happy.

This process takes about a minute. A lot happens, but it is totally repeatable and completely versioned. This is certainly a better/faster/more reliable option than the 3 hour deploys we did at my last place of work. To be honest I’m pretty happy with it. This should also work well with ASP.Net deployments, however there would have to be a versioning folder “hand created”*** (I believe) to get the same effect.

So I haven’t quite got my “one click deployments”, but half a dozen clicks and some automated scripts that run in under 5 minutes (most of that is watching static analysis and tests run) is a bloody good start. Plus it’s a good time for me to sit back and have a coffee; I’m almost looking forward to deployments 🙂

 

For you nosey bastards, the deployment part of the script looks like:




<AssemblyInfo CodeLanguage="CS"
              OutputFile="$(ApplicationPropFolder)\AssemblyInfo.cs"
              AssemblyTitle="$(AssemblyTitle)"
              AssemblyDescription="$(AssemblyDescription)"
              AssemblyCompany="YOURCOMPANY"
              AssemblyProduct="$(AssemblyProduct)"
              AssemblyCopyright="Copyright © YOURCOMPANY"
              ComVisible="false"
              CLSCompliant="true"
              Guid="$(WinUiProjectGuid)"
              AssemblyVersion="$(BUILD_NUMBER)"
              AssemblyFileVersion="$(BUILD_NUMBER)" />

<PropertyGroup>
  <DevFolderLocation>$(RootClickOnceDeploymentLocation)\DEV\$(ProjectName)\</DevFolderLocation>
  <TestFolderLocation>$(RootClickOnceDeploymentLocation)\TEST\$(ProjectName)\</TestFolderLocation>
  <UatFolderLocation>$(RootClickOnceDeploymentLocation)\UAT\$(ProjectName)\</UatFolderLocation>
  <ProdFolderLocation>$(RootClickOnceDeploymentLocation)\PROD\$(ProjectName)\</ProdFolderLocation>
</PropertyGroup>

<MSBuild Projects="$(SolutionFile)"
Targets="Publish"
Properties="Configuration=Release;
PublishDir=$(DevFolderLocation);
PublishUrl=$(DevFolderLocation);
InstallUrl=$(DevFolderLocation);
UpdateUrl=$(DevFolderLocation);
ApplicationVersion=$(BUILD_NUMBER);
"/>
<MSBuild Projects="$(SolutionFile)"
Targets="Publish"
Properties="Configuration=Release;
PublishDir=$(TestFolderLocation);
PublishUrl=$(TestFolderLocation);
InstallUrl=$(TestFolderLocation);
UpdateUrl=$(TestFolderLocation);
ApplicationVersion=$(BUILD_NUMBER);
"/>
<MSBuild Projects="$(SolutionFile)"
Targets="Publish"
Properties="Configuration=Release;
PublishDir=$(UatFolderLocation);
PublishUrl=$(UatFolderLocation);
InstallUrl=$(UatFolderLocation);
UpdateUrl=$(UatFolderLocation);
ApplicationVersion=$(BUILD_NUMBER);
"/>
<MSBuild Projects="$(SolutionFile)"
Targets="Publish"
Properties="Configuration=Release;
PublishDir=$(ProdFolderLocation);
PublishUrl=$(ProdFolderLocation);
InstallUrl=$(ProdFolderLocation);
UpdateUrl=$(ProdFolderLocation);
ApplicationVersion=$(BUILD_NUMBER);
"/>

 

NB: You will need the MSBuild Community Tasks download to update the AssemblyInfo.cs. The M$ one is a bit flaky (apparently). I used the community tasks for other things anyway, so I figured I would use what is there.

NB: the BUILD_NUMBER is being passed in as a parameter to the script.

As my MSBuild skills are not the sharpest, I have left the repeated code in. I spent a couple of minutes trying to figure out how to stop repeating myself in MSBuild, but to no avail. If you have any ideas (short of Rake) let me know. I am also still figuring out ClickOnce, so some of those URLs are probably not necessary; have a play yourself… this is just my first run.
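One way to collapse the four Publish calls into one, which I haven’t run against this script so treat it as a sketch, is MSBuild task batching: put the environments in an ItemGroup and reference them with %() metadata so the task runs once per item:

<ItemGroup>
  <!-- one item per target environment -->
  <DeployEnv Include="DEV;TEST;UAT;PROD" />
</ItemGroup>

<Target Name="PublishAll">
  <!-- %(DeployEnv.Identity) batches the task: MSBuild invokes it once per item -->
  <MSBuild Projects="$(SolutionFile)"
           Targets="Publish"
           Properties="Configuration=Release;
                       PublishDir=$(RootClickOnceDeploymentLocation)\%(DeployEnv.Identity)\$(ProjectName)\;
                       PublishUrl=$(RootClickOnceDeploymentLocation)\%(DeployEnv.Identity)\$(ProjectName)\;
                       InstallUrl=$(RootClickOnceDeploymentLocation)\%(DeployEnv.Identity)\$(ProjectName)\;
                       UpdateUrl=$(RootClickOnceDeploymentLocation)\%(DeployEnv.Identity)\$(ProjectName)\;
                       ApplicationVersion=$(BUILD_NUMBER);" />
</Target>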

*because, for example, the Dev version would be higher than the Test version, so when clicking on the latest Test ClickOnce application it would deem that the application does not need to be updated (if they were being run on the same machine). The applications have to be separate applications for each environment. We still need to confirm this is the case with today’s changes. Yeah… a bit of a pain, but worth it I guess.

**we actually stop the build server for the project we are deploying, reset the build counter, increment the release version, run the script, then restart the server.

*** MakeDir is not really hand created, but you get my drift 😉

MSTest XSLT

I am sure someone out there may find this useful. It’s an XSLT to transform the TRX file that is output by the MSTest runner. It’s a slightly better visual depiction than my MSBuild output of thousands of lines of Courier New text…

This is very much “It works on my computer”, I am running VS 2008 Team edition (?). Let me know if this works for you.
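To apply it outside the build, the framework’s built-in XslCompiledTransform does the job; this is a minimal sketch, and the file names are just examples:

using System.Xml.Xsl;

class TrxToHtml
{
    static void Main()
    {
        // Compile the stylesheet, then transform the MSTest TRX output to HTML.
        // "MSTestReport.xslt" and "TestResults.trx" are placeholder file names.
        var xslt = new XslCompiledTransform();
        xslt.Load("MSTestReport.xslt");
        xslt.Transform("TestResults.trx", "TestResults.html");
    }
}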

<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:vs="http://microsoft.com/schemas/VisualStudio/TeamTest/2006">
  <xsl:output method="html" indent="yes"/>

  <xsl:template match="/">
    <html>
      <body>
        <h1>Test Results Summary</h1>
        <table border="1">
          <tr>
            <td>Run Date/Time</td>
            <td><xsl:value-of select="//vs:Times/@start"/></td>
          </tr>
          <tr>
            <td>Results</td>
            <td><xsl:value-of select="//vs:Deployment/@runDeploymentRoot"/></td>
          </tr>
        </table>

        <h2>Coverage Summary</h2>

        <h2>Test Summary</h2>
        <table border="1">
          <tr>
            <th>Total</th><th>Failed</th><th>Passed</th>
          </tr>
          <tr>
            <td><xsl:value-of select="//vs:Counters/@total"/></td>
            <td><xsl:value-of select="//vs:Counters/@failed"/></td>
            <td><xsl:value-of select="//vs:Counters/@passed"/></td>
          </tr>
        </table>

        <h2>Unit Test Results</h2>
        <table border="1">
          <tr>
            <th>Test Name</th><th>Result</th>
          </tr>
          <xsl:for-each select="//vs:UnitTestResult">
            <tr>
              <!-- colour the row by outcome -->
              <xsl:attribute name="style">
                <xsl:choose>
                  <xsl:when test="@outcome = 'Failed'">background-color:pink;</xsl:when>
                  <xsl:when test="@outcome = 'Passed'">background-color:lightgreen;</xsl:when>
                  <xsl:otherwise>background-color:yellow;</xsl:otherwise>
                </xsl:choose>
              </xsl:attribute>
              <td><xsl:value-of select="@testName"/></td>
              <td>
                <xsl:choose>
                  <xsl:when test="@outcome = 'Failed'">FAILED</xsl:when>
                  <xsl:when test="@outcome = 'Passed'">Passed</xsl:when>
                  <xsl:otherwise>Inconclusive</xsl:otherwise>
                </xsl:choose>
              </td>
            </tr>
          </xsl:for-each>
        </table>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
 

Rhino Mocks: AAA vs Record- Playback

Rhino Mocks is one of my favourite pieces of open source software. It has, more than any other piece of code, changed the way I code, for the better, I hope.
Many moons ago I first played with it and liked the fact that it was strongly typed. NMock2, the mock framework I was using at the time, is string based, which can lead to havoc when refactoring.
Back in those days Rhino Mocks was record-playback only, and to be honest that style never felt natural to me. Due to popular demand the framework was extended to allow either record-playback or, IMO, the more natural AAA syntax.
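For contrast, record-playback looked roughly like this. This is a sketch from memory of the Rhino Mocks 3.x record-playback scopes; IWriter and Greeter are made-up types for illustration, not from this series’ code:

// Record-playback style: expectations are recorded, then verified on playback.
// IWriter and Greeter are hypothetical types used only for illustration.
var mocks = new MockRepository();
var writer = mocks.StrictMock<IWriter>();

using (mocks.Record())
{
    // every call made in this scope becomes an expectation
    writer.Write("Hello");
}
using (mocks.Playback())
{
    // the code under test must satisfy the recorded expectations
    new Greeter(writer).Greet();
}

Note how the “expectation” is an ordinary-looking method call whose meaning depends entirely on which scope you are in; that context switch is what never felt natural to me.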

Arrange – Act – Assert

Arrange, Act, Assert helps me break up the way I write my tests and makes it very clear what I am trying to achieve. I even have a code snippet that auto-populates my test: I type “mstest” and I get

[TestMethod]
public void Can()
{
    //Arrange

    //Act

    //Assert
    Assert.Inconclusive("Test not completed");
}

I also feel this allows newcomers to see what is going on more clearly, and helps them write tests first.
How?
Well, in my mind the hardest thing to do when starting TDD is knowing what to write! If you have the code stubbed out with comments as above, it gives you a visual guide to nudge you into progress.

I also find it helps if n00bs actually write the ACT part first, not the ARRANGE. Typically this involves writing two lines of code:

  • create the object and
  • call the method you want to test

eg:


[TestMethod]
public void CanValidateCustomerFromCustomerAddPresenter()
{
    //Arrange

    //Act
    var presenter = new CustomerPresenter(view, service);
    presenter.Validate(customer);
    //Assert
    Assert.Inconclusive("Test not completed");
}

The fact that the above code won’t even compile is irrelevant. It shows intent. Now the developer writing the test has a clear direction on what they need to do. Often this way of TDD fleshes out new tests. To me this (incomplete and fictitious) test is straight away crying out for complementary tests, e.g. CanNotValidateCustomerFromCustomerAddPresenterWithNullCustomer etc.
The fact that I have not even defined what a customer is means my mind is still open to possibilities.
On top of the benefits of writing the ACT first, I think the AAA syntax makes tests more readable in terms of maintaining code bases, as it has the top-down procedural look that coders are used to (even OO code reads top down).

[TestMethod]
public void CanValidateCustomerFromCustomerAddPresenter()
{
    //Arrange - Set up mocks (put in your TestInitialize)
    //ICustomerView and ICustomerService are assumed interface names
    var view = MockRepository.GenerateMock<ICustomerView>();
    var service = MockRepository.GenerateMock<ICustomerService>();
    //Arrange - Set up your parameters & return objects
    var customer = TestFactory.CreateValidCustomer();
    //Arrange - Set up your expectations on your mocks
    view.Expect(v => v.ShowValidation(customer));
    service.Expect(s => s.Validate(customer)).Return(ValidationFactory.Success);
    //Act
    var presenter = new CustomerPresenter(view, service);
    presenter.Validate(customer);
    //Assert
    view.VerifyAllExpectations();
    service.VerifyAllExpectations();
}

Now, I have not run this through a compiler, I just threw it down, but to me this is pretty readable. I used record-playback for only a few months and found it a little confusing; perhaps my pitiful little brain was maxing out on simple syntax, but hey.
If you are not using AAA try it out, it works great with the C# lambda expressions too (as above) which, to me, means you have incredibly readable tests.

*please ignore the fact that the test is odd… I am trying to show readability as opposed to how to write a crap object 😉
**is it incredibly obvious that I am writing MVP triplets? 😉