Real World TDD Talk

I am talking at next month's Perth .Net Community of Practice on TDD in the real world.

The reason I have gone with this topic is that the Perth community as a whole has been very reluctant to adopt TDD; hopefully this talk will break down those barriers. I feel this is something that can help developers on any technology, so I hope there is a good turnout.

The talk is on March the 5th in the Perth CBD at Excom Education, starting @ 5:30pm.

Continuous Integration : 101

Continuous integration is a term with mixed meanings, usually intended to describe an automated build process associated with a development group's source control system. Typically the build server polls the source control repository; on check-in it gets the latest source code and proceeds to build it. Notifications can be sent on success or failure of the build, meaning we always have a very recent view of the health of the code base.

CI is a very helpful part of an agile team's delivery: it can greatly improve build/deployment times and, just as importantly, keep the quality of the source code high. Unfortunately there seems to be a large number of agile teams not embracing this very simple and beneficial process; there appears to be a high perceived point of entry, so let's try to eliminate that. This whole process takes less than 30 minutes.

*NOTE: this is not going to be a ground-breaking article. However if you have never set up a CI environment then this is, I am sure, a worthwhile read. It is also not a generalisation. This is just how I do it. Feel free to adapt and modify at will. This example just uses the set-up I have for the contract I am on at the moment: VS2008, MSTest, VSS, MSBuild, TeamCity… I am sure it is known that these are not my preferred tests, source control or build script… so use your initiative and figure out those specifics if yours are different… it's really not very hard.

Step 1: Set up a Solution Build configuration

Assuming you have a solution already set up, open the solution in VS, right-click on the solution and select Configuration Manager.

img1

Set up a new solution configuration that inherits from debug:

img2

Name this new configuration AutomatedDebug and make sure you create a new solution configuration for it (the check box). As a note, I prefer to have it inherit from Debug.

img3

Now set the active build configuration in VS to AutomatedDebug (the drop-down, usually next to the play/Debug button, should read AutomatedDebug).
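A quick sanity check that the new configuration is wired up (the solution name is a stand-in for yours) is to build it from a VS2008 command prompt:

```shell
msbuild MySolution.sln /p:Configuration=AutomatedDebug
```

If MSBuild complains the configuration is unknown, revisit the Configuration Manager step above.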

Step 2: Configure your Automated Build

I believe part of having an easy-to-use solution is having everything required to build and run it in source control. I find the following file structure a good start:

img4

In the Lib folder are all my references. I try to keep non-framework things out of the GAC; it just leads to headaches. Basically any external DLL, XML or config file that you do not control and that is required to build goes here.

In the Tools folder is everything that is not code that runs: things like code-gen tools, build tools, templates, snippets, static analysis tools, test runners. Some think this is overkill and a waste of disk space. Disk space is cheap; my time is not. I want to be able to get latest, hit build, and have it work.

In the Src folder is all the source code; basically your solution sits in here. Be sure to include SQL: it is code that runs, so it belongs in source control.

I prefer to have all of my output in one build folder, so I go through all of the projects' properties and change the output path for the AutomatedDebug build to ..\..\Build\AutomatedDebug\, with the XML docs in the same place. I turn on "Treat Warnings as Errors", set the warning level to 4, and set FxCop to the desired settings (personally I don't care about localisation, mobility etc.).
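Inside each .csproj this ends up as a property group along these lines (paths follow the structure above; the exact flags saved may differ slightly on your set-up):

```xml
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'AutomatedDebug|AnyCPU' ">
  <OutputPath>..\..\Build\AutomatedDebug\</OutputPath>
  <DocumentationFile>..\..\Build\AutomatedDebug\MyCompany.MySolutionName.xml</DocumentationFile>
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
  <WarningLevel>4</WarningLevel>
  <!-- FxCop on build; rule exclusions (localisation etc.) go in the project's Code Analysis tab -->
  <RunCodeAnalysis>true</RunCodeAnalysis>
</PropertyGroup>
```

Easiest to set these through the project properties UI once and then eyeball the .csproj to confirm.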

My test projects are set up a little differently: I like my tests in a separate folder, so test projects output to ..\..\Build\AutomatedDebugTest\, with no increased warnings, FxCop or XML docs.

As everything in the Build folder is build output, you do not need to put it in source control, and it is perfectly OK to delete this folder at any time.

*NB: when adding a project to the solution, the project-specific version of this build configuration will not automatically be added; you have to create it yourself. Go into the solution configuration manager again and, for the new project, create a new config to match the others.

Step 3: Create a Build Script

This can be the tedious part of the process, so I am just going to give you one ready to go. Place the build script in your Tools folder, as it is in effect a tool. This build script is MSBuild, as this is the build tool I get the least friction with in terms of management acceptance. Other options are NAnt, Rake, PSake and Bake.

<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="AllTests" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

    <PropertyGroup>
        <SolutionFile>..\MySolutionName.sln</SolutionFile>
        <OutputDirectory>..\Build</OutputDirectory>
        <Configuration>AutomatedDebug</Configuration>
        <TestOutputDirectory>$(OutputDirectory)\$(Configuration)Test</TestOutputDirectory>
        <ReportOutputDirectory>$(OutputDirectory)\$(Configuration)Reports</ReportOutputDirectory>
        <DefaultNamespace>MyCompany.MySolutionName</DefaultNamespace>
    </PropertyGroup>

    <!-- one item per test assembly; the names below are placeholders,
         add your own test projects here as they are created -->
    <ItemGroup>
        <UnitTestProject Include="/testcontainer:$(TestOutputDirectory)\$(DefaultNamespace).Tests.dll" />
        <IntegrationTestProject Include="/testcontainer:$(TestOutputDirectory)\$(DefaultNamespace).IntegrationTests.dll" />
    </ItemGroup>

    <PropertyGroup>
        <UnitTests>@(UnitTestProject, ' ')</UnitTests>
        <IntegrationTests>@(IntegrationTestProject, ' ')</IntegrationTests>
    </PropertyGroup>

    <Target Name="Clean">
        <RemoveDir Directories="$(OutputDirectory)"
                   Condition="Exists('$(OutputDirectory)')" />
    </Target>

    <Target Name="Build" DependsOnTargets="Clean">
        <MSBuild Projects="$(SolutionFile)"
                 Properties="Configuration=$(Configuration)" />
    </Target>

    <Target Name="UnitTest" DependsOnTargets="Build">
        <RemoveDir Directories="$(ReportOutputDirectory)"
                   Condition="Exists('$(ReportOutputDirectory)')" />
        <MakeDir Directories="$(ReportOutputDirectory)" />
        <!-- MSTest path assumes a default VS2008 install; adjust to suit -->
        <Exec Command="&quot;$(VS90COMNTOOLS)..\IDE\mstest.exe&quot; $(UnitTests) /resultsfile:$(ReportOutputDirectory)\UnitTests.trx" />
    </Target>

    <Target Name="IntegrationTest" DependsOnTargets="Build">
        <MakeDir Directories="$(ReportOutputDirectory)"
                 Condition="!Exists('$(ReportOutputDirectory)')" />
        <Exec Command="&quot;$(VS90COMNTOOLS)..\IDE\mstest.exe&quot; $(IntegrationTests) /resultsfile:$(ReportOutputDirectory)\IntegrationTests.trx" />
    </Target>

    <Target Name="AllTests" DependsOnTargets="UnitTest;IntegrationTest" />

</Project>
Quick overview of what the build file does (skip this if you are comfortable with MSBuild/NAnt):

  1. We define that this is an MSBuild XML file with a default target.
  2. We set up all the variables. The tags are not special tags; MSBuild allows you to use self-defined XML tags as the variable names. Note we can use other variables to define further variables. Of special note is the way the test project names are collated together: the "UnitTests" and "IntegrationTests" nodes in the "PropertyGroup" node use MSBuild syntax to concatenate the project names previously specified with the given format so we can run MSTest later.
  3. We define a target. The first we have called "Clean", and it deletes the build output folder. That's it.
  4. The "Build" target builds our solution with the predefined build configuration, "AutomatedDebug", as specified in the variables section. Also note that "Build" depends on the "Clean" target; this means "Clean" will always run before it.
  5. We define a "UnitTest" and an "IntegrationTest" target that both run tests and depend on Build (and therefore Clean). These targets are very specific to MSTest; most of the other runners are much easier to set up. This also requires VS on the build machine. To do this without VS installed, check out the Gallio project.

Be sure to add test project names as they are added, or your build server will not be running those tests! Create a Build.bat file in the Tools folder next to your build script and insert the following text so you can run the script:

@C:\Windows\Microsoft.NET\Framework\v3.5\MSbuild.exe AutomatedDebug.build /t:AllTests /l:FileLogger,Microsoft.Build.Engine;logfile="AllTests.log"
@pause

This assumes you are using .Net 3.5 and that your build file is named AutomatedDebug.build and resides in the same folder. The last part gives a log file of the build output, which I find handy, although verbose.

I really try to stress using very slim bat files to call my build scripts. They should be one-liners that call a target in the build script. Your targets are like methods: each target should be very specific, do one thing and do it well… like your very well written code 😉 If you need to do lots of things, create a target that depends on the other targets you need to run (see the build file).

Step 4: Set up Source Control

I am not going to tell you how to set up source control; you should be able to do this by yourself. I am currently using VSS *gasp*, but this process works with SVN, TFS, Git… the source control tool really doesn't matter in terms of this specific process. I do, however, prefer the build server to have a read-only login to retrieve the code.

One more reminder: DO NOT CHECK IN THE BUILD FOLDER! This is a throw-away folder that should not be in source control.

Step 5: Set up your Build Server

TeamCity is my new favourite toy. In 7 minutes I had downloaded, installed and set up a build server. Awesome. CC.Net is the other big name in .Net CI, but its XML config has caused many a headache; TC was so easy I have to recommend it. Follow this video to get up and running. Typically this is done on a separate machine. It does not need to be a flash machine; in fact I would recommend whatever is lying around not getting used. Mine is a 4-year-old single-core laptop.

Now to set up notifications. I personally like the tray notification tool, clean and out of the way.

Now is a good place to add the extra tasty stuff to your build script, like code analysis tools (Simian, NDepend, code coverage/NCover), deployment (ClickOnce, WiX packaging), documentation (Sandcastle) etc. These things normally get left off the shopping list as they add to the build time. Well, now another machine is doing the build and it doesn't affect me. Be sure to add these as separate targets in the script to keep things nice and clean.
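As a sketch, each extra tool gets its own target so the CI server can call (or skip) it independently; the tool paths and file names here are stand-ins for whatever actually lives in your Tools folder:

```xml
<Target Name="Duplication" DependsOnTargets="Build">
  <!-- Simian duplicate-code finder; exe name/location is illustrative -->
  <Exec Command="..\Tools\Simian\simian.exe &quot;..\Src\**\*.cs&quot;" />
</Target>

<Target Name="Docs" DependsOnTargets="Build">
  <!-- Sandcastle documentation build, wrapped in its own script under Tools -->
  <Exec Command="..\Tools\Sandcastle\BuildDocs.bat" />
</Target>
```

Because they are separate targets, a developer's local Build stays fast while the build server runs the lot.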

Wrap up

Now that was pretty trivial, wasn't it? To be honest the hardest part is setting up the solution folder structure and the build script. This set-up has served me well for a while now; I'm sure there are many others doing things differently, but this is as good a place to start as any.

Key points are:

  • Have a separate build config that is set up for high code quality, with a separate output folder for test projects
  • Keep a clean file system; preferably use a root build output folder for all build, test and report output (code coverage, test results, static analysis etc.)
  • Keep build project targets focused
  • Use TeamCity; it's the easiest part of the whole process, but it is pointless if the rest is not set up ready to go.

Plans for the Future

The last year was a big year for me professionally, highlighted by working in London, attending the Seattle and London Alt.Net conferences, and the move back to Perth. Much of my growth came from realising that big companies and big cities do not necessarily mean good quality of work (in fact, quite the opposite), and, after meeting the hard-core Alt.Net'ers, realising how much I didn't know about the fundamentals of coding. I was basically a Microsoft coder, which is not a bad thing, but it is bad not to know the alternatives.

Since moving back to Perth I have managed to pick up a great contract with a very open-minded and technically skilled team. Fortunately they are taking on board a lot of the design and process concepts I have picked up over the last 2-3 years and managed to solidify over the last 18 months. It has actually been really good, because it has forced me to put my money where my (particularly large) mouth is.

With this I have realised that I have something more to give the community than I am perhaps currently giving. So I intend to do more presentations at user-group-type meetings, and I have plans for a Back to Basics coding day in which I would like to share with the community the core concepts behind things like TDD, IoC, SOLID, DDD, AOP etc. that I think are not being picked up by the masses, especially here in Perth. The locals have an excellent understanding of the cool new whizz-bang tech (like Azure, WPF etc.), but without core fundamental programming knowledge and design principles you will still be building a potential house-of-cards architecture.

I also plan on releasing the Artemis West Stack as a guidance package that allows for flexible data-driven and domain-driven smart client applications that are technology agnostic. This has been months in the making, with many revisions and reworks (and lots of swearing at the GAT). I do not intend for this to be OSS at the moment (AW is not an OS company); however I will chat with the other directors to see what direction this will take.

Last year I took time out to learn two new languages, Ruby and Python, as well as dabbling in jQuery. This year I plan on learning F# and actually putting the Python and Ruby to use… learning a language and not using it is not really any good, e.g. I studied French for 5 years but am conversational at best now due to lack of use.

I also think it may be of benefit to AW for me to get certified. So I will probably do some of the dirty M$ certifications and hopefully a scrum master course if they do one in Perth.

On top of looking for a house to buy, 2009 looks to be another big year 🙂

TeamCity – Late adoption

TeamCity is a build server put out by the wonderful JetBrains team, possibly best known in the .Net community for ReSharper.
TeamCity basically takes on the likes of CC.Net, but aims to make the process a little less painful in terms of set-up.
"A little less painful" is probably an understatement. TeamCity rocks!
In about 10 minutes I had a build server up and running, including install time! It was completely trivial. I use a build script (NAnt or MSBuild) anyway, so all I had to do was point it at source control, point it at the script, and I'm done. Completely painless.
For those not completely au fait with what a build server is, this is what TC is doing for me:
When someone checks code in to source control:

  • It gets the latest from source control
  • Builds the code using my config
  • Performs static code analysis
  • Runs unit tests
  • Runs integration tests
  • Deploys to a drop location, so our "tester" can always get a copy of what we are currently working on.

That's a pretty basic build process, but it is fine for me and our team. I am well happy with the last 20 minutes of work; cheers JetBrains!

Unity – OMG… it just works!

I have started at a new company back in Perth and am so far feeling pretty good about things. Initially I thought I might be back in "waterfall land", but the guys are all super receptive to new ideas and very keen on moving to a more agile (or at least less waterfall) process.
The other devs and I have come up with a nice MVP/Presenter First framework with a repository pattern for the DAL, and we are currently using services with DTOs in the middle. All good so far.
Then I learnt we were using EF… ahhh ok…
Well, luckily the guys here have enough experience with it and have managed to put together a nice, usable repository implementation using EF that is agnostic enough that any non-EF implementation should be able to come along and replace it… happy days.
Next step for me was to introduce IoC and AOP to the stack. These are somewhat new concepts here, so I wasn't too sure how they would go down. I have a wrapper Container that I use to abstract away all the container-specific gunk and jargon that you can get with some containers. As we were very much in the EF/PnP/EntLib camp here, I thought I had better at least look into Unity to see if it is a viable option.
My last dealing with M$ IoC was ObjectBuilder in CAB… what a f&?king pig.
Needless to say I was not expecting anything special. I was, however, pleasantly surprised. Unity is super usable and slotted in perfectly to my abstracted container adapter. If you are in a M$-centric software house I HIGHLY recommend trying out Unity. Far too many M$ devs don't use IoC, often because a 3rd-party framework that is not from M$ would be required… the number of times I have been told I cannot use OSS on a project… grrr… well, now there is no excuse. To see how painless it is, check out PnP guru David Hayden's screencasts and posts. Actually, if you use any of the EntLib or PnP stuff you should be subscribed to DH's blog; he is the ninja in this field and pragmatic enough to use (and have extensive knowledge of) other 3rd-party frameworks such as the wonderful Castle stack.
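To show the adapter shape I mean, here is a minimal sketch (the `IContainer` interface and all names are mine, not from any framework); Unity then becomes just one implementation of the interface, delegating to `UnityContainer.RegisterType`/`Resolve`:

```csharp
using System;
using System.Collections.Generic;

// Minimal container adapter: consuming code depends on this interface only,
// so the concrete container (Unity, Windsor, ...) can be swapped freely.
public interface IContainer
{
    void Register<TService, TImpl>() where TImpl : TService, new();
    TService Resolve<TService>();
}

// Trivial reference implementation; a Unity-backed version would just
// forward these two calls to a UnityContainer instance.
public class SimpleContainer : IContainer
{
    private readonly Dictionary<Type, Func<object>> factories =
        new Dictionary<Type, Func<object>>();

    public void Register<TService, TImpl>() where TImpl : TService, new()
    {
        // store a factory so each Resolve gets a fresh instance
        factories[typeof(TService)] = () => new TImpl();
    }

    public TService Resolve<TService>()
    {
        return (TService)factories[typeof(TService)]();
    }
}
```

The point is that application code only ever sees `IContainer`, which is why trying Unity cost nothing.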
Next step is to investigate the PnP AOP implementation namely the Policy Injection Application Block… will keep y’all posted

DDD : Value Types

*A follow on from : http://rhysc.blogspot.com/2008/09/when-to-use-enums-vs-objects.html

This is a brief demo of mapping, constructing and using value types in the domain. To stick with the clichés we will use orders and order status. To give some structure we will lay out some basic business rules:

  1. When an order is created it has the status In Process
  2. It is then Approved
  3. Then Shipped
  4. Cancelled and Backordered need to be in the mix too

Ok, not the most robust order system, but that’s not the point.
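For reference, the enum version of the status is nothing more than a list of names, taken straight from the rules above:

```csharp
// The enum assumed by the switch example that follows; names come
// straight from the business rules listed above.
public enum OrderStatus
{
    InProcess,
    Approved,
    Shipped,
    Backordered,
    Cancelled
}
```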

Let's first look at how the domain logic could be handled using an enum… bring on the switch statement!!!!

Ok, so let's see if we can edit an order when the status is an enum:

public class Order
{
    //...
    public bool CanEdit()
    {
        switch (this.OrderStatus)
        {
            case OrderStatus.InProcess:
                return true;
            case OrderStatus.Approved:
                return false;
            case OrderStatus.Shipped:
                return false;
            //etc etc
            default:
                return false;
        }
    }
    //...
}

Ok, that is not exactly scalable… the more statuses we get, the more case statements we have to add. If we add a status we also have to find every place there is a switch statement using this enum and add the new status as a case. Think about this for a second… really think about it: how many enums do you have that have functionality tied to them? Right.

Now let's look at the same code using "real" objects; exit the switch and enter the strategy pattern:

public class Order
{
    //...
    public bool CanEdit()
    {
        return this.OrderStatus.CanEditOrder();
    }
    //...
}

Now obviously there needs to be some know-how in this non-enum enum. Let's have a look at how I have done this in the past.

    /// <summary>
    /// Sales Order Status Enumeration
    /// </summary>
    public abstract class SalesOrderStatus
    {
        #region Statuses

        /// <summary>InProcess</summary>
        public static SalesOrderStatus InProcess = new InProcessSalesOrderStatus();

        /// <summary>Approved</summary>
        public static SalesOrderStatus Approved = new ApprovedSalesOrderStatus();

        /// <summary>Backordered</summary>
        public static SalesOrderStatus Backordered = new BackorderedSalesOrderStatus();

        /// <summary>Rejected</summary>
        public static SalesOrderStatus Rejected = new RejectedSalesOrderStatus();

        /// <summary>Shipped</summary>
        public static SalesOrderStatus Shipped = new ShippedSalesOrderStatus();

        /// <summary>Cancelled</summary>
        public static SalesOrderStatus Cancelled = new CancelledSalesOrderStatus();

        #endregion

        #region Protected members

        /// <summary>The status description</summary>
        protected string description;

        #endregion

        #region Properties

        /// <summary>
        /// Gets the description of the order status
        /// </summary>
        /// <value>The description.</value>
        protected virtual string Description
        {
            get { return description; }
        }

        #endregion

        #region Public Methods

        /// <summary>
        /// Determines whether this instance allows the editing of its parent order.
        /// </summary>
        /// <returns>
        /// true if this instance's parent order can be edited; otherwise, false.
        /// </returns>
        public abstract bool CanEditOrder();

        #endregion

        #region Child Statuses

        private class InProcessSalesOrderStatus : SalesOrderStatus
        {
            public InProcessSalesOrderStatus()
            {
                description = "In Process";
            }

            public override bool CanEditOrder()
            {
                return true;
            }
        }

        private class ApprovedSalesOrderStatus : SalesOrderStatus
        {
            public ApprovedSalesOrderStatus()
            {
                description = "Approved";
            }

            public override bool CanEditOrder()
            {
                return false;
            }
        }

        private class BackorderedSalesOrderStatus : SalesOrderStatus
        {
            public BackorderedSalesOrderStatus()
            {
                description = "Back ordered";
            }

            public override bool CanEditOrder()
            {
                return true;
            }
        }

        private class RejectedSalesOrderStatus : SalesOrderStatus
        {
            public RejectedSalesOrderStatus()
            {
                description = "Rejected";
            }

            public override bool CanEditOrder()
            {
                return false;
            }
        }

        private class ShippedSalesOrderStatus : SalesOrderStatus
        {
            public ShippedSalesOrderStatus()
            {
                description = "Shipped";
            }

            public override bool CanEditOrder()
            {
                return false;
            }
        }

        private class CancelledSalesOrderStatus : SalesOrderStatus
        {
            public CancelledSalesOrderStatus()
            {
                description = "Cancelled";
            }

            public override bool CanEditOrder()
            {
                return false;
            }
        }

        #endregion
    }

Note this is especially good for value objects in a DDD sense, and they can be easily mapped to the database. A further benefit is that I do not have to hit the DB to get a status. As they are value objects and have no need for an ID (in the domain), we only map the ID in the mapping files; the POCO objects know nothing of IDs. I can also create lists for drop-down binding if required… with no need to retrieve from the DB.

I have had some people raise the point: "but what if we need a change in the DB for a new status?" Well, that sounds like new logic to me, and should mean reworking the logic and a recompile anyway; however now we are being very clear about how we handle each status, as the object possesses its own logic.

If you are using NHibernate, the mapping would look something like this (a sketch: the status is persisted as a plain ID column via a custom type, so the domain object never sees the ID; "SalesOrderStatusUserType" is a made-up name for that IUserType implementation):

    <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">

        <class name="Order" table="SalesOrder">

            <id name="Id" column="OrderId">
                <generator class="native" />
            </id>

            <property name="OrderStatus"
                      column="OrderStatusId"
                      type="MyCompany.MySolutionName.SalesOrderStatusUserType, MyCompany.MySolutionName" />

        </class>

    </hibernate-mapping>

The above SalesOrderStatus abstract class can now have static methods on it to do things you may normally hit the DB for, e.g. getting lists of statuses; however now you are confined to the realms of the domain. This makes life easier IMO, as there are fewer external dependencies. I have found I use enums very rarely in the domain, and usually only have them in the UI for display objects or in DTOs across the wire (e.g. error codes, as an enum falls back to its underlying universal int type).
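A trimmed-down sketch of the kind of static helper I mean (only two statuses here to keep it short; `GetAll` is my naming, nothing standard):

```csharp
using System.Collections.Generic;

// Compact version of the SalesOrderStatus idea above, with a static list
// for UI binding -- no database call needed to populate a drop-down.
public abstract class StatusSketch
{
    public static readonly StatusSketch InProcess = new InProcessStatus();
    public static readonly StatusSketch Shipped = new ShippedStatus();

    protected string description;

    public string Description { get { return description; } }

    public abstract bool CanEditOrder();

    // everything the UI needs is already in memory
    public static IList<StatusSketch> GetAll()
    {
        return new List<StatusSketch> { InProcess, Shipped };
    }

    private class InProcessStatus : StatusSketch
    {
        public InProcessStatus() { description = "In Process"; }
        public override bool CanEditOrder() { return true; }
    }

    private class ShippedStatus : StatusSketch
    {
        public ShippedStatus() { description = "Shipped"; }
        public override bool CanEditOrder() { return false; }
    }
}
```

Bind `GetAll()` straight to a combo box and you never round-trip to the DB for reference data.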

Try it out, see if you like it and let me know how it goes.

T4 GAX & GAT: Revisited

I have dabbled with T4, GAX and most specifically GAT before and never really got any traction. It's a great idea, but it is very intricate. Nothing by itself is overly complicated, but there are lots of little things that can quickly put you off.

I am trying to set up default MVP solutions for myself. I have a default architecture that I have used for several commercial applications and would like a quick way to replicate it. Typically I follow a Presenter First pattern and interact with a service layer for the model. The service layer may be a proxy for a distributed app or it may be a thin tier for interaction with the model; it doesn't really matter. The fact is I have very similar classes, very similar tests and very similar structure in many of these apps. This is a perfect candidate for generating these frameworks. One of the big things I want out of this exercise is to get my default build configurations and build scripts predefined. This is a fiddly aspect that I hate doing, but I always do it because of the time it saves in the long run.

So attempt one will be a WinForms MVP solution without a domain project. I will use MSTest, Rhino Mocks and MSBuild on the 3.5 version of the framework. Not sure what IoC container I will use yet.

As this is something I want to reuse wherever I work, I don't want to make NH a default aspect. I may include an NH model project later.

So far the whole process has not been overly pleasant. I have had files (as in dozens of them) just get deleted on an attempt to register a project, projects trying to compile templates that are marked as content (i.e. not for compilation), packages that just decide they are no longer packages… so I decided to set up a VM to contain the madness. Unfortunately I only have a Vista 64 install on me and VPC can only host 32-bit OSs… oh well, the PnP (Pain 'n Phailures?) impedance continues.

Wish me luck…

DDD Confusion

This is mainly a comment on this post:
http://the-software-simpleton.blogspot.com/2008/12/twat-of-ddd-increasing-complexity-in.html

My points:

  • DDD is not needed in every situation.
  • DDD is used when there is a significant amount of business logic. If you are writing CRUD screens, DDD is probably not the best option.
  • DDD is hugely beneficial in an enterprise solution. This is because there is business logic in business applications.
  • DDD is not hard, if done right. Start simple and add complexity AS REQUIRED.
  • DDD scales. I have a base framework that I use for DDD solutions which lets me get up and running within a few minutes. I still have to write the domain objects, but if these are simple objects this takes a trivial amount of time, and it still leaves me open to scaling to a large solution if and when necessary.

Like most architectures, the majority of people get a whiff of it and run with the idea without properly implementing it. This is when you run into problems.
Of the last 5 web applications I have done, DDD was involved in only 1 of them. The other 4 took less than a week to complete.
Here is something a little more concrete: I would use my NH-based DDD stack for anything I thought would take more than 2 weeks of dev time to complete.
Like anything, the more you do it the more you learn; you pick up good ideas and recognise bad ones. The problem with DDD is you can't just read the book and know it; you have to go through the pain of doing whole projects to get that knowledge.