DDD eXchange 2010 – London

Unfortunately I did not attend this event (which looked like an awesome event), however the Skills Matter guys were good enough to post the videos of the event… sweet!

DDD EXCHANGE 2010 – PODCASTS, SLIDES AND PICTURES

They have recorded all the DDD eXchange 2010 talks. The slides and podcasts can be viewed here:

You can also find some photos taken at the conference here.

Thanks SkillsMatter!

Test Fakes – Solving a Domain Entity Issue

A follow-on from: Testing with Domain Driven Design

Background on the design in question

  • We are using NHibernate for persistence on Oracle.
  • We have services that accept coarse-grained commands and perform a clearly defined business function; they are not chatty services.
  • We are using the notion of aggregate roots, so all child objects are created and managed by the owning aggregate root. If you need to access a child, it must be done via the AR.
  • The domain is relatively large and complex.

Background problem

We had issues where we were dealing with domain objects in tests and hitting problems when interrogating child collections. For example, we sometimes need to update a child of an aggregate root, and we were using the id of the child (within our command object) to indicate which child to update. We are using a basic database identity field (an Oracle sequence) for setting the ids on all of our user-defined entities.
Herein lies our problem:* in our tests we create an aggregate root complete with child objects. We then want to test, for example, updating a child of the aggregate root via a DTO-based command (i.e. using an id to define the child to update), and we run into issues because all of the child objects have the same id of 0, as they are not persisted (it's a unit test). Now this would never happen in real life: why would you issue a command to update something that has not been persisted? How would you even know about that object?
The quick solution that I have seen used a lot is to set the ids of all of the domain objects prior to running the test. I don't like this if it is done by exposing the id setter on the domain object: that opens up the API of our domain for testability and is potentially the start of a slippery slope into a pit of hell. An easy way around this is to use fakes. These objects are just subclasses of the domain objects in question that expose stuff the domain shouldn't; in this case, the id setter.
The other alternative is to set the id on creation of the fake, so the setter of the id is not exposed. This can also work, but it means your fake will always have an id set. For the situation I was in this was not suitable.

The end solution

Every domain object has a fake created for it. All of the fakes implement the interface ISettableId<TId>, defined as below:


public interface ISettableId<TId>
{
    bool HasIdBeenSet();
    void SetId(TId id);
}

With an example implementation (its id type is an integer):

public class FooFake : Foo, ISettableId<int>
{
    public FooFake(string name, TypeRequiredForValidConstruction myDependency)
        : base(name, myDependency)
    {}

    public bool HasIdBeenSet()
    {
        return _id != 0;
    }

    public void SetId(int id)
    {
        _id = id;
    }
}

This means we can now create fake objects and emulate persistence later by reflecting down the object graph and setting all of the ids. This is much faster than hitting the database and has proved to be a very worthwhile exercise, as we can now run tests against both transient and persisted versions of the object graph without needing a db connection.
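To make that walk concrete, here is a minimal sketch of the idea, assuming integer ids and the ISettableId<int> fakes above (the PersistenceEmulator name and the property-walking details are made up for illustration; the real implementation can be smarter about what it visits):

using System.Collections;
using System.Linq;

// Emulate persistence: walk the graph, stamp ids on the fakes, and use
// HasIdBeenSet() to skip already-visited objects so cycles terminate.
public static class PersistenceEmulator
{
    private static int _nextId;

    public static void AssignIds(object entity)
    {
        var settable = entity as ISettableId<int>;
        if (settable == null || settable.HasIdBeenSet())
            return; // not a fake, or already visited: stops us going in circles

        settable.SetId(++_nextId);

        // Recurse into readable, non-indexed properties to reach the children.
        foreach (var value in entity.GetType().GetProperties()
            .Where(p => p.CanRead && p.GetIndexParameters().Length == 0)
            .Select(p => p.GetValue(entity, null)))
        {
            var children = value as IEnumerable;
            if (children != null && !(value is string))
            {
                foreach (var child in children)
                    AssignIds(child);
            }
            else if (value != null)
            {
                AssignIds(value);
            }
        }
    }
}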
One additional thing I should mention is that the creation of child objects must now be exposed as a hook. For example, when we create child objects I do not just new up the object and add it to a collection; I call a protected virtual method that creates the child object, and then add that to the list. This allows my fake to override the return type that is added to the list, so children can also be fakes. This has not increased the exposure of the domain, but it has provided a hook that allows me to create fakes for my child collections.
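Something like this (the names here are illustrative, not our actual domain):

using System.Collections.Generic;

// The aggregate routes child creation through a protected virtual factory
// method, so a fake subclass can substitute fake children.
public class Parent
{
    private readonly IList<Child> _children = new List<Child>();

    public void AddChild(string name)
    {
        // Never new up the child inline; go through the hook.
        _children.Add(CreateChild(name));
    }

    protected virtual Child CreateChild(string name)
    {
        return new Child(name);
    }
}

public class Child
{
    public Child(string name) { Name = name; }
    public string Name { get; private set; }
}

public class ParentFake : Parent
{
    // Children created via the normal domain API are now fakes too,
    // so the whole graph can have its ids set.
    protected override Child CreateChild(string name)
    {
        return new ChildFake(name);
    }
}

public class ChildFake : Child, ISettableId<int>
{
    private int _id; // in the real design this lives on the entity base class

    public ChildFake(string name) : base(name) {}
    public bool HasIdBeenSet() { return _id != 0; }
    public void SetId(int id) { _id = id; }
}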

Caveats:

Fakes should not include any logic. The fakes I am using only allow the setting of ids. The only logic they have is to check whether the id is set (this allows us to skip objects that have already had their ids set when we walk the object graph, and not go around in circles). The only other methods that are required are the overrides for the creation of child objects.
Other methods you may find in the fakes are builder methods. Instead of having big factories for creating test objects, you may choose to put the object creation closer to the owner by putting it in the fake class itself.
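For example, something like this sitting on the fake (the method name and constructor arguments are made up for illustration):

// A hypothetical builder method on the fake itself, keeping test object
// creation next to its owner rather than in one big object-mother factory.
public static FooFake CreateValidFoo()
{
    // Whatever constitutes a valid, fully constructed Foo goes here.
    return new FooFake("test foo", new TypeRequiredForValidConstruction());
}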

Summary

As I mentioned, we have a pretty large and complex domain, and we were running into issues where the difference between persisted and non-persisted object graphs was becoming problematic. Using the fakes has allowed us to keep the domain clean and only as open as necessary, while allowing us to emulate persistence without the cost of hitting a database. Overall I'm pretty happy with the result.

*I'm sure there are other ways to solve this higher-level problem.

Alt.Net, Openspaces, whatever: it's coming…

Google Groups and Twitter have been busy today after I seem to have started some hype about an Alt.Net-styled openspaces event (#ozopenspace).

PLEASE spread the word. We need ideas on when people can come and where they would go (Australia is a big place; we are all very spread out).

The wiki looks like it's going to be based here: http://ozopenspace.pbworks.com

Register your interest here. Seriously, please do this; otherwise we cannot estimate or cater for your needs/wants etc. This will also show sponsors whether we are serious or not, and may help convince overseas guests to attend too, of which there is already interest 🙂

The discussion is on the Oz Alt.Net Google group here.

I can't wait!

PowerShell and the web

Lately I have been playing with PowerShell to see if it can help with some uptime/availability checks for our web sites and services at work. Yes, I know what you are thinking, but I'm not a sys-admin and I need to know if my stuff is working without having access to the server… long story, I'm sure you will appreciate.

In my reading I have seen lots of flaky posts about PowerShell and web services. I think people need to remember that web services still conform to the HTTP protocol; there is no magic. You do not need to reference any Visual Studio dll or do any weird crap like that, you just need to hit the url and receive a payload:


$wc = new-object system.net.WebClient
$webpage = $wc.DownloadData($url)

That's it. No weird proxies, no importing custom DLLs.

Here's an example calling the Google Maps API (which is a nice API, BTW) to get the latitude and longitude of an address, specifically Perth's craziest bar, Devilles Pad (think Satan living in a trailer park who hangs out at a go-go club that used to be a James Bond villain's lair; yeah, it's that cool):


$url = "http://maps.google.com/maps/api/geocode/xml?address=3+Aberdeen+St,+Perth,+WA+6000,+Australia&sensor=false"
$wc = new-object system.net.WebClient
$webpage = $wc.DownloadString($url) # Edit - thanks Ken
$xmldata = [xml]$webpage
$lat = $xmldata.GeocodeResponse.result.geometry.location.lat
$lng = $xmldata.GeocodeResponse.result.geometry.location.lng
write-host "Latitude = $lat - Longitude = $lng"

Like that? Go play: Google Maps API Doco

MSBuild 101

Lately, for some reason, I have seen a fair bit of tension around MSBuild. I guess this is a good thing: more people are using build tools like PSake, Rake etc, and most of the time, if you are building .Net stuff, sooner or later you will have to call into MSBuild, unless you feel bold enough to use csc… um… not for me.
MSBuild is, to be honest, very basic.
There are typically two things that I use MSBuild for:
-Defining a build script
-Running a build
The term MSBuild is a little confusing to some as it covers two things: a file type and an executable.
Firstly, we need to be aware that all .Net solution files (*.sln) and project files (*.*proj) are MSBuild files; that should hopefully ease some tension, as it is now pretty obvious that you are already using MSBuild! MSBuild files are XML and therefore human readable and modifiable. The files typically define properties and targets. The properties can reference other properties, and they can be collections too. The targets can have dependencies on other targets (i.e. Test requires Build, which requires Clean), and the targets can make use of the properties we have defined in our script, as well as define items, which are basically list variables. This is starting to sound just like procedural programming, which I hope we are all au fait with.
It is probably also a good time to note that pretty much all build tools use the notion of dependencies. This basically means that if you call a target and it has a dependent target, then that one will be called first. You can have chained dependencies like the example mentioned before (i.e. Test requires Build, which requires Clean), and you can also have multiple dependencies for one given target (Deploy requires Package, Document and Test).
After reading that, it is quite clear that MSBuild is actually very simple. It is the syntax and general XML noise that scares most people, including myself.

Running MSBuild

To get even the most trivial usage out of MSBuild, we need to know how the executable works so we can actually call MSBuild to do something. MSBuild actually ships with the .Net framework rather than with Visual Studio (note the Framework path in the example below), and it is bound to a framework version.
To start with, I will show the most trivial example of how to use MSBuild, and that is to just build a solution:
@C:\Windows\Microsoft.NET\Framework\v3.5\MSbuild.exe MySolution.sln
Assuming that this command is run in the directory where MySolution.sln resides, this will build that solution. It can't get much easier than that. Note this will not do anything clever; it will just build to your default location, i.e. bin/Debug for each project in the solution. Personally, for me this offers little value other than being a bit faster than building in Visual Studio.
Typically, if I am building from the command line it is because I am building from a build tool. Build tools like PSake allow me a lot more flexibility, as they are not constrained by the bounds of XML and have powerful functions I can use that may be associated with builds and deployments and might not otherwise exist in MSBuild. If you are using a tool like PSake or Rake as your build script, then it is more likely that you will use the following syntax:
&$msbuild "$solution_file" /verbosity:minimal /p:Configuration="Release" /p:Platform="Any CPU" /p:OutDir="$build_directory"\\ /logger:"FileLogger,Microsoft.Build.Engine;logfile=Compile.log"
This is from a PSake script, so anything with a $ in front of it is a PowerShell variable (i.e. not MSBuild syntax). Walking through this: I have defined the location of the MSBuild exe ($msbuild) and the sln file. Note that the sln file is not associated with a switch; it is the first argument, which indicates that this is the file we are building. Other arguments are prefixed with a switch. These switches begin with a forward slash and end in a colon, and contain either full descriptive words (e.g. /verbosity:minimal) or shorthand for a word (e.g. /p:Platform="Any CPU", where /p is short for property and in this example defines the Platform property).
FYI: the & at the start of the line is a PowerShell construct that says "run this command, don't just print it to the screen".

Defining Build Scripts with MSBuild

First up, we will walk through a very basic build script that just cleans and builds a solution. It will be pretty obvious that this script really offers little value, but we will get to some of the nitty-gritty stuff soon.
<Project DefaultTargets="Clean" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="3.5">
  <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets"/>

  <PropertyGroup>
    <ApplicationName>MySolution</ApplicationName>
    <SolutionFile>..\$(ApplicationName).sln</SolutionFile>
  </PropertyGroup>
  <Target Name="Clean">
    <MSBuild Targets="Clean" Projects="$(SolutionFile)"/>
  </Target>
  <Target Name="Build" DependsOnTargets="Clean">
    <MSBuild Targets="Rebuild" Projects="$(SolutionFile)" Properties="Configuration=Release"/>
  </Target>
</Project>
Let’s go over this line by significant line:
The XML document's root is the Project element, where we define the XML namespace, the MSBuild tools version and the default target. Make sure your tools version works with your .Net framework version. This first line is pretty standard for me: I like having a default target, and I'm typically working with .Net 3.5 at the moment, so this is all good.
Next up we are importing an external extension. This is an example of importing the very useful MSBuild Community Tasks toolset. If you are making use of MSBuild a lot then you will want this toolset. Another import I often make is for Gallio, a very handy test runner. Please note that the import demonstrated here is not actually used in this script, so it could safely be removed; it is just here to show the syntax.
Next we define our PropertyGroup section. The nodes in here are user defined; MSBuild does not have "ApplicationName" in its schema, I made that name up. These properties are, in effect, your global variables. You will also note that you can reference other properties from within properties, as shown in the SolutionFile property. This is often very handy when dealing with file paths. The syntax for referencing a single property is $(MyProperty).
Next up we define our first target. Targets are just like methods in that they define functionality. The Clean target just calls the MSBuild task, which cleans the solution. Not very impressive, I know. The next target shows how we can specify dependencies: any time we call the Build target, the Clean target must run first.
Having a build file is all very nice, but we need to be able to run it. As it is not a solution or project file, I tend to treat these a little differently to how I would call those files directly: I typically prefer to be explicit in calling targets, even though it is a DRY violation.
Below is an example extracted from a bat file that calls into an MSBuild script, to show you the syntax (this is the same syntax as if you were to run it from the cmd line):

@C:\Windows\Microsoft.NET\Framework\v3.5\MSbuild.exe MyMSBuildFile.build /t:UnitTests /l:FileLogger,Microsoft.Build.Engine;logfile="UnitTests.log"
From this you can see:
-I am calling MSBuild 3.5; MSBuild is slightly different for each .Net version, so be sure you are not calling the .Net 2 version for your new shiny .Net 4 solutions!
-MyMSBuildFile.build is in the location I am running this command from. If it was not, I would need a relative or full path.
-The extension of an MSBuild script is not important. I personally prefer to call my MSBuild files *.build, to make it clear that they are in fact build scripts.
-The "/t:" switch defines the target I am calling: UnitTests. You do not need to specify this if you have defined the default target (DefaultTargets) in the Project tag of your build file. I recommend defining a default target, and make sure it is a safe one… you don't want to accidentally deploy something to production "by default".
-By using the "/l:" switch I can define a logger, so I don't lose the info once the console window disappears. I pretty much only use this exact syntax and just change the name of the output file.
For more help you can just call the MSBuild exe with the /? switch for a detailed description of what the exe can do. I recommend doing this 🙂

Stepping Up

So everything I have shown you so far is all well and good, but it is of little value in the real world. So here is a quick-fire list of problems and MSBuild solutions; a cheat sheet of sorts.
Note: the command-line syntax used below is from PowerShell, and to get the $msbuild variable assigned I have called:

[System.Reflection.Assembly]::Load('Microsoft.Build.Utilities.v3.5, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a') | Out-Null
$msbuild = [Microsoft.Build.Utilities.ToolLocationHelper]::GetPathToDotNetFrameworkFile("msbuild.exe", "VersionLatest")

Change a property from outside the script

You have a property, say the application name, defined in a script; you want to reuse the script without having to edit it for everything that uses it.
Solution: use the property switch:
&$msbuild "$build_file" /p:ApplicationName="RhysCNewApp"

Directories with spaces are failing when passing them to MSBuild

If you are passing in paths from a build tool that requires you to use quotes (e.g. the path has spaces in it), then MSBuild may throw its toys and blow up on you. To dodge this, use double backslashes (\\) at the end of the argument to escape the nastiness that is MSBuild:
&$msbuild "$solution_file" /p:OutDir="c:\Rhys Hates Spaces\In His\Directory Paths\"\\

 

Inside your Scripts

Display a message to the console

I want to display the paths of my Visual Studio tools folders for '05 and '08. Note VS80COMNTOOLS and VS90COMNTOOLS are environment variables, so yeah, MSBuild can pick those up too 🙂
<Target Name="VS">
  <Message Text="***********$(VS80COMNTOOLS)***********" />
  <Message Text="***********$(VS90COMNTOOLS)***********" />
</Target>

I want to reference a whole Item group

Sometimes we have collections of related items and want to reference all of those items:
<ItemGroup>
  <Items Include="Foo" />
  <Items Include="Bar" />
  <Items Include="Baz" />
</ItemGroup>
<Target Name="Message">
  <Message Text="@(Items)" />
</Target>
This will produce:
Foo;Bar;Baz

I want a conditional execution

I want to delete a folder, but only if it exists
<Target Name="Clean">
  <RemoveDir Directories="$(OutputDirectory)"
       Condition="Exists('$(OutputDirectory)')" />
</Target>
This shows the use of the Exists() condition function and the RemoveDir task.

Error Messages

I want to display a meaningful error when an exception should be raised in my target. Solution: use the Error task within your target:
<Target Name="UnitTests" DependsOnTargets="Build">
  <Gallio
      Assemblies="@(UnitTestProject)"
      IgnoreFailures="true"
      EchoResults="true"
      ReportDirectory="$(ReportOutputDirectory)"
      ReportTypes="$(ReportTypes)"
      RunnerType="$(TestRunnerType)"
      ShowReports="$(ShowReports)">

    <Output TaskParameter="ExitCode" PropertyName="ExitCode"/>
  </Gallio>
  <Error Text="Tests execution failed" Condition="'$(ExitCode)' != 0" />
</Target>
This example shows the use of the Gallio MSBuild task and how we can raise an error if the exit code is not 0 when the command is run.

Conditional Properties

I want properties to have values dependent on conditions:
<PropertyGroup>
  <ReportTypes Condition="'$(ReportTypes)'==''">Html-Condensed</ReportTypes>
</PropertyGroup>

Swap out Configuration files

This is last as it is a personal preference on how to manage environmental differences. I basically have all my configuration files in one directory and have N+1 files per file type, where N is the number of environments. I may have a ConnectionString.Config, and right next to it will be ConnectionString.Config.DEV, ConnectionString.Config.TEST, ConnectionString.Config.UAT and ConnectionString.Config.PROD, with each environment's values in the respective file. My build only acknowledges the ConnectionString.Config file and copies that one; the rest are not included in the build. So when I build for an environment, I swap out the config files as part of the build. A contrived example of how to do this is below:
<Target Name="SetDevConfigFiles">
  <ItemGroup>
    <SourceFiles Include="$(ApplicationConfigFolder)\*.Dev"/>
  </ItemGroup>
  <Copy SourceFiles="@(SourceFiles)"
        DestinationFiles="@(SourceFiles->'$(ApplicationConfigFolder)\%(Filename)')" />
</Target>
This grabs all of the files from the application config folder that end in .Dev and copies them to the same location without the .Dev extension.
Explanation: the %(Filename) metadata means I get the file name without the extension. MSBuild basically just removes anything after the last "." in the file name, which means I overwrite ConnectionString.Config with the contents of ConnectionString.Config.DEV. It's handy, it works for me; take it or leave it.

Creating your own MSBuild Task

This was way easier than I thought it was going to be. To create a task you just need to inherit from Microsoft.Build.Utilities.Task, override the Execute method and add some properties. Here is a SQL migration build task I built (please excuse the logging and the IoC cruft). The code to do all the leg work was already done, and it had a console to fire it up. At the time I was calling it via MSBuild's Exec task when I thought: I could probably just make a plug-in for this! Below is the result:
using System;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;
using Microsoft.Practices.ServiceLocation;
namespace SqlVersionMigration.MsBuildTask
{
    public class RunAllScripts : Task
    {
        [Required]
        public string ScriptLocation { get; set; }
        [Required]
        public string ConnectionString { get; set; }
        public override bool Execute()
        {
            try
            {
                ContainerRegistration.Init();
                var versionMigrationService = ServiceLocator.Current.GetInstance<VersionMigrationService>();
                TaskHelper.LogEntry(Log, ScriptLocation, ConnectionString);
                var result = versionMigrationService.RunAllScriptsInOrder(ConnectionString, ScriptLocation);
                TaskHelper.LogResult(Log, result);
                return !result.Isfailed;
            }
            catch (Exception ex)
            {
                TaskHelper.LogError(Log, ex);
                return false;
            }
        }
    }
}

Some hints to get the most out of MSBuild: 

Download the MSBuild Community Tasks extensions; there is lots of good stuff in there.
Use plugins where it is a logical idea, as opposed to shelling out to exes. This can be done by using the Import tag at the start of your file:
<Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets"/>
Know when to use MSBuild. It is a good starting point if you are learning about automated builds, especially as you use it every day (if you are a .Net dev) whether you like it or not (remember, all sln and proj files are MSBuild files). Be warned, though, that it is not really the best option for stuff outside of a pure build. Once you are comfortable with MSBuild you will soon hit a very low ceiling; at that point, investigate other tools like PSake and Rake. Due to the XML nature of MSBuild, certain tasks are just messy or impossible to perform; loops are a good example, just don't do it.

I hope that is enough to get you up and running; if not, let me know and I will try to fill in any of the gaps.

Castle – Reborn!

It looks like Castle has had a shake-up! I don't *know* who is behind all this, but I know my Castle go-to guy has his sticky fingers in it, and I'm glad he does!
Of late there is a new Castle wiki and, thanks to Andy Pike, a bunch of new screencasts dedicated to getting up and running with the various aspects of the Castle framework.

So the links are:

Please check them out, and if you want to add anything, I have been told that the wiki is a real wiki, so contribute!
Thanks to Krzysztof Koźmic for the heads up 🙂

*OK so Castle is not reborn, just the doco 😉

Soft Deletes in NHibernate

The following was an email to my dev colleagues. Sorry about the code formatting; I'm using the Blogger web interface (OS X really needs a WLW equivalent!):

Fellow nerds;
There are frequent requirements for soft deletes. These things are a pain, especially with an ORM, as they can start becoming invasive to your domain code.

If you want soft deletes to be as transparent as possible in the normal workings of the domain, you can use a delete listener to intercept the deletion and, well, soften it.
Below is some code to show a parent and child relationship (ParentSampleDomainObject and SampleDomainObject).

Key points:
• The entities need to have a flag to show they are deleted, e.g. IsDeleted
• The mapping on the parent needs to say that it is not interested in retrieving deleted items, by using a where clause
• The mapping also needs to say it wants to delete orphan children (we won't actually delete them, but this needs to be here)
• We need an NHibernate delete listener to intercept the cascading delete and, instead of deleting the entity, mark it as deleted. This is why we need cascading deletes in the mapping
• We need to register the listener with the config.

The test code will be checked into the core soon.

Rhys

NHibernate Configuration:




<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <property name="dialect">NHibernate.Dialect.MsSql2005Dialect</property>
    <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
    <property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property>
    <property name="proxyfactory.factory_class">NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle</property>
    <property name="connection.connection_string">Data Source=.\SqlExpress;Database=CoreTests;Integrated Security=True;</property>
    <property name="connection.isolation">ReadCommitted</property>
    <property name="default_schema">CoreTests.dbo</property>
  </session-factory>
</hibernate-configuration>

Parent and child domain entities' mapping file (in outline; the important bits are the where clause that hides deleted children and the delete-orphan cascade, and the column names here are illustrative):

<?xml version="1.0" encoding="utf-8"?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
  <class name="ParentSampleDomainObject">
    <id name="Id">
      <generator class="native" />
    </id>
    <!-- cascade must include delete-orphan so the listener can intercept and soften the delete -->
    <bag name="Children" cascade="all-delete-orphan" inverse="true" where="IsDeleted = 0">
      <key column="ParentId" />
      <one-to-many class="SampleDomainObject" />
    </bag>
  </class>

  <class name="SampleDomainObject">
    <id name="Id">
      <generator class="native" />
    </id>
    <property name="Name" />
    <property name="IsDeleted" />
    <many-to-one name="Parent" column="ParentId" />
  </class>
</hibernate-mapping>

[Serializable]
public class ParentSampleDomainObject : BaseEntity<int>
{
    public ParentSampleDomainObject()
    {
        Children = new List<SampleDomainObject>();
    }

    public virtual IList<SampleDomainObject> Children { get; set; }

    public virtual void AddChild(SampleDomainObject child)
    {
        Children.Add(child);
        child.Parent = this;
    }

    public virtual void RemoveChild(SampleDomainObject child)
    {
        Children.Remove(child);
    }
}

[Serializable]
public class SampleDomainObject : BaseEntity<int>, ISoftDeletable
{
    public virtual string Name { get; set; }
    public virtual bool IsDeleted { get; set; }

    public virtual void MarkAsDeleted()
    {
        IsDeleted = true;
    }

    public virtual ParentSampleDomainObject Parent { get; protected internal set; }
}
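The ISoftDeletable contract referenced above is not shown in the email; its shape, implied by SampleDomainObject, is something like:

// Implied by the entity and listener code around it; the real definition
// lives in the core library.
public interface ISoftDeletable
{
    bool IsDeleted { get; }
    void MarkAsDeleted();
}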

public class SoftDeleteListener : DefaultDeleteEventListener
{
    protected override void DeleteEntity(IEventSource session, object entity,
        EntityEntry entityEntry, bool isCascadeDeleteEnabled,
        IEntityPersister persister, ISet transientEntities)
    {
        if (entity is ISoftDeletable)
        {
            var e = (ISoftDeletable)entity;
            e.MarkAsDeleted();

            if (entity is IAuditableRecord)
            {
                var a = (IAuditableRecord)entity;
                AuditListener.SetAuditInfo(a); // need a log of when this was actually deleted; probably the intent of the soft delete!
            }

            CascadeBeforeDelete(session, persister, entity, entityEntry, transientEntities);
            CascadeAfterDelete(session, persister, entity, transientEntities);
        }
        else
        {
            base.DeleteEntity(session, entity, entityEntry, isCascadeDeleteEnabled,
                persister, transientEntities);
        }
    }
}
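And one way to register the listener (a sketch; you can equally wire it up in the XML config):

using NHibernate.Cfg;
using NHibernate.Event;

// Replace the default delete listener with the soft-delete one before the
// session factory is built.
var cfg = new Configuration().Configure();
cfg.EventListeners.DeleteEventListeners =
    new IDeleteEventListener[] { new SoftDeleteListener() };
var sessionFactory = cfg.BuildSessionFactory();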

Beginning with the end in mind

A long time ago, in what feels like a previous life, I used to train bodybuilders. Yeah, weird, I know. I actually learnt a lot from this subculture: discipline, sacrifice and hard work are things you cannot escape in that life. One huge lesson I picked up that many missed was "begin with the end in mind". The art of bodybuilding is often confused with weightlifting. The number of competitors that complained that a physically weaker contestant beat them amazed me. Being able to bench more than your opposition counts for nothing.
In the sport of bodybuilding the winner is decided by a panel of judges; that's right, you are judged by humans. It is the image you present that they must judge you on. You may even have a better physique, but if you do not display it better than the others you can lose. For this reason all of my athletes posed at every training session: in the middle of the gym, in front of mirrors, down to their underwear, unashamedly posing as if they were in front of a panel of judges. We would critique, take photos, film it, change the lighting… I never saw any other bodybuilders do this. I am sure that is why, collectively, they won several national and international championships.

Now ask yourself: Are you training for game day?
For us this is deploying to the production environment.
Are you practicing it every day?
You should be.

If you are doing manual deployments you are a weightlifter in a bodybuilding show. You will lose.
To be honest, we have it better than bodybuilders. We have no competitors and we have predefined requirements. We can measure our performance; they can't, they can only contrast and compare.

So what do we need to do to prepare for the big day? Well, first and foremost, don't make it a big day. Make it just like every other day. Make deployments part of your daily routine and start deploying from day one. Personally, I like to have automated deployments working prior to writing any business code. Infrastructure cruft should be done in iteration zero, and deployments are infrastructure cruft.

A daily routine makes deployments so easy they are a non-event. This usually means defining everything you need to do to perform a deployment, then scripting it. Writing the best code in the world means nothing if the deployment is botched… and manual deployments get botched.

Deployments should also cover all the steps you will perform on production deployment day, not just what needs to get done to make the application work on a developer's box.

Define where your production environment is at and then work back from there. If your testing environments are not the same, you should question why. The more similar they are, the less likely you are to have deployment issues. Where I currently work we have 5 environments:

  1. Developers machine
  2. “Development” Environment – automated CI deployment
  3. Test
  4. UAT
  5. Production

The developer's machine I consider an environment. I often get latest and (sometimes) hit F5 to run the application, and I expect the application to work! This means I require a database that is in a valid state, external services to work, file paths to be correctly set up, etc. For this reason I like developers to have a local sandbox, which includes local databases. Nothing pisses me off more than when I am running an app and some bugger has cleared the dev database for testing a script, breaking my flow. Having your own database also forces you to properly script database changes in a sane manner. Checking in those changes, getting latest and running your build script should get you up and running every time. See Tarantino from Eric Hexter or RoundhousE from the ChuckNorris framework for a simple way to get database migrations working cleanly in a .Net/SQL world.

The Development environment is a development environment in name only: no development is done on it, but it is owned by the developers. We may use it for debugging on a production-like environment if things go pear-shaped; I just never have. Its main purpose, IMO, is automated deployments from our build server. If anything breaks here, the red light on the build server goes on and the check-in for that build fails. The steps to do this include:

  1. Cleaning the server: getting rid of the last deployment and backing it up
  2. Setting up the server, including:
    • creating app pools and virtual and physical directories,
    • ensuring dependencies are present, e.g. MSMQ, DTC etc,
    • ensuring the dependencies can run, i.e. queues are set up, services are running,
    • setting up account privileges
  3. Deploying the packages to the server and installing them if applicable
  4. Running SQL scripts, including:
    • creating users, roles and permissions,
    • creating the database objects (tables, views, keys, constraints, triggers etc),
    • creating required reference data
  5. Testing the deployment:
    • creating test data
    • running high-level acceptance and smoke tests

If you can get to this stage, then is it not obvious that doing a test deployment is going to be next to trivial? Migrating to the Test environment should be the same as migrating to UAT, and therefore the same as Production. Production deployments should then be just a matter of going through the motions.

This also means that you may need various scripts, or at least functions within those scripts, to carry out these various steps. Obviously if the production environment is already set up we do not need to set it up again, and the deployment scripts should reflect that. Just like in normal code, use pre-conditions and post-conditions to enforce a correct deployment. If certain set-ups are not required, log it and move on; just make sure it is part of the agreed process.

Get the DBAs involved and decide how you want to manage your deployments, and keep reminding the team that this should be streamlined.
One thing that often trips up teams is permission issues. Personally, I prefer not having access to anything outside of the development environments (I'm pretty sure I am alone on this one). As far as I am concerned, the testers can deploy their own stuff. It will be the same script that the sys-admins and DBAs will run in UAT and production, so why should I do it? I have code to write! They can have permission to run the scripts in their own environment, making sure that no manual step has been introduced by any developer along the way. This separation, I feel, further reduces the risk of failed deployments. If the deployment to Test does fail, then they can raise a bug, roll back and tell the developers what they think of them. Sure, this will be embarrassing, and it does happen, but would you rather it happen in the confines of the IT department or in full view of the customers?

This brings us back to what we are here for: to fix a customer's problem. I assume this typically means delivering working software. Working software on your development machine has not fixed the customer's problem; that's like being a weightlifter in a bodybuilding show. Don't be that guy. Think about the end game and make sure that each day you are working towards that end goal of providing your customer with a business solution, in the cleanest possible way.

*Sorry for putting the images of mostly nude men in your mind; it (probably) won't happen again.

Continuous Improvement – Bringing back UML

Continuous improvement is something we should all endeavour to pursue on a daily basis, irrelevant of our chosen field. Me, I am not a natural geek. I never built my own computer and never wrote my own gaming engine. I am much more interested in business than computers. What I love about computers is the way they can streamline our processes and, ironically, how unforgiving they are: they do what they are told, not what you meant. To me this poses a nice daily challenge and keeps my brain from turning to mush at work. Because I am not a natural geek, I have to actively push myself to learn things with regard to IT.

When PowerShell came out a few years ago I looked at it in fear. I have never really been a shell guy. My 'nix mates were always playing on their green screens, and I secretly knew there was a lot of power there, but it was too much of a mind shift from the comfort of my drag-n-drop/IntelliSense-laced world… But I knew it was a weakness, and I jumped in feet first.

Although I am still a complete n00b at PowerShell, I can get what I need done with it. I am comfortable with the tools and have got a real return on my learning investment. I got uncomfortable and it paid off in spades. I have even moved on to PSake and replaced most of my build and deployment steps with completely PowerShell/PSake-based ones, a feat I am pretty proud of. It has also helped immensely in picking up Rails and Git, both of which are command-line driven.

For me, the next thing I really have to tackle is UML. I have dodged it for years.
I draw diagrams a lot. I use a whiteboard every day and like to draw while I talk. I feel that in mixed audiences it helps get points across by using multiple forms of communication at the same time: voice, body language and diagrams. However I am communicating, I should be adhering to common practices where possible (as my last post pushed): when there is a common language, you should use it.
For diagrams in nerd world, it's all about UML.

My dislike for UML stems from my dislike of concrete documentation. I have inherited too many projects that had documentation that did not marry up to the implementation, with UML typically being one of the worst offenders. It is incredibly frustrating and, most importantly, wasteful: wasted time, money and trees (paper has to come from somewhere)!

So I sit here thinking that perhaps I was throwing the baby out with the bath water: UML is not bad, inaccurate documentation is bad. If I am to communicate with diagrams then I should use the common language. I also understand there is a time to be pragmatic and a time to be dogmatic. Most of the time a couple of boxes with some lines joining them will suffice, but the ability to step up to the more accurate implementation is worthwhile.
So my UML journey begins… or at least is reborn.

Common language and Patterns

Imagine if craftsmen from other industries called the basic patterns they used by different names: instead of a butterfly stitch, a doctor calls it "the sticky skin healer"; a carpenter calls a dovetail joint "the zig-zag corner"… it just wouldn't fly.
Well, unfortunately, in our industry this is not just common, it is the norm.
Firstly I want to define what “our industry” is:
It is not computer science.
It is, if anything, Information Systems.
We push data around and make it easy for users to persist and transmit data so that systems or other users can make decisions, business decisions. All of this is intended to somehow improve the bottom line of the client and/or the company you are working for. People forget this. Our job is, 99.999% of the time, about making money. The wiki on Cucumber's GitHub site has a good blurb about defining features and asking "why?" with regard to implementing them; you should get to one of the following reasons:

  • Protect revenue
  • Increase revenue
  • Manage cost
  • Increase brand value
  • Make the product remarkable
  • Provide more value to your customers

Now, this list is clearly for a product; I have no problem with a team defining what the underlying drivers for their project, department or business are, but being able to see how a feature enhances those drivers is important.

Anyway… back to language.
Language and the words we use are very underrated. I think it is possible that, in our industry, communication skills are so rare (hey, we are nerds!) that we trivialise the importance of the words we use. Evans makes it very clear that the ubiquitous language is a key aspect of DDD, and the same can be said about whole industries and the language used by the individuals that make up each industry.
This is why patterns books are popular. I frequently want to have a discussion with a developer at another firm to bounce questions off them, and if we do not share the same language then we spend half the time getting our terms aligned. There is a way to avoid this: understand the common language!
How? Well, I'm sorry, but you are going to have to pick up a book or two… OK, that's a lie. You are going to have to read a crap-load. Hey, we get paid well; I expect senior developers and consultants (again, in our industry) to have read the books I am about to describe.

This is an ordered list of pattern books. It focuses on patterns, i.e. language-based terms that describe common approaches to solving problems. The language is as important as the fix.

  • Refactoring – Fowler
  • PEAA – Fowler
  • Head First Design Patterns -or- Refactoring To Patterns
  • APPP – Uncle Bob
  • Refactoring Databases – Ambler
  • GoF
  • XUnit Test Patterns – Meszaros
  • EIP – Hohpe
  • DDD – Evans

There is also one I really can't wait for, and that is the forthcoming Presentation Patterns by JDM, which should round out the whole stack.
To be honest this reading list would take the average developer that has a family and a life about a year to read. I’m sorry. Suck it up.

The order I have presented this list in is very deliberate. People will argue that GoF should be first. I heartily disagree. GoF is a hard book to read; without the understanding of refactoring and the softening blow of "Head First", the book is too much for the average nerd. I know this because it was my first patterns book. I have read it 4 times, and only the last 2 times have I actually got anything out of it.

Refactoring
I feel this book is the minimum in a developer's arsenal of pattern books. With this you can at least understand why ReSharper is doing what it is doing. It defines some basic terms that are low-level enough to be applicable to most developers, irrelevant of the layers they work in. Some common terms, like parameter object, are introduced to the developer's language.

PEAA
Although the book references a metric tonne of other books, I think its aggregation of the basics is its strength. The two-part approach is great, and the patterns introduced are basic and high-level enough that they can be picked up quickly. Read in conjunction with conversations with an experienced developer that uses these patterns regularly, and the reader should be very familiar with them very quickly. This is a great book for book-club type discussions too (hint hint @wolfbyte).

Head First Design Patterns -or- Refactoring To Patterns
These two books are a softer introduction to the lower-level patterns when compared to the godfather of pattern books, GoF. They are much easier to read, and depending on the developer, one may be preferred over the other. Ideally read both, but read at least one of these prior to continuing down the stack. You may find it odd that the list starts with a low-level pattern book (Refactoring), then a high-level book (PEAA), then something in between. The patterns described in HFDP and RTP are more abstract and require much more thought than the high-level patterns in PEAA, e.g. specific patterns like Table Gateway or ActiveRecord. I also find it easier to show what tool implements a PEAA pattern, e.g. Rails ActiveRecord or Castle ActiveRecord, or NHibernate as a DataMapper (and all of them as examples of lazy loading) etc.

APPP
This book is great at helping developers step up to actually writing clean, readable, maintainable code by showing the patterns that you can use on a daily basis. S.O.L.I.D is introduced to the reader. Unfortunately, SOLID has just become another rote-learnt acronym that is infrequently applied. I have found it does, however, allow code reviews to have a common language, and not just be "um, this code looks ugly" type code reviews, which I have been guilty of in the past.

Refactoring Databases
This book is in here by necessity. I doubt many .Net developers would get much from it. We have been force-fed SQL drizzled in a table sauce and views jus with a side of stored procs for years. We know databases, at times it feels too well. However, if you are from a non-database-driven framework then this is a necessary read. I'm talking to you, Ruby and Java kids. You lucky bastards get ORMs by default. In the twilight of the SQL age this book may seem a bit legacy, but if you are truly an enterprise developer you will be dealing with SQL for years to come. I'm sorry, I really am 😉

GoF
Ahhh, the serotonin of pattern books. If you have insomnia, this is for you. It's a great yet boring read; hmm, what a paradox. This is just developer tax, a rite of passage, a necessity. Please note that just because a pattern exists, it does not mean it must always be used, that it should ever be used, or that it is not hard-wired into your language. I'll let the reader guess/figure out what I'm referring to.

If you have got through all of these books, in my mind you have done your due diligence and are now allowed near an IDE… OK, I'm just kidding, but seriously, I believe all of the previous books are minimum requirements for someone who leads a team of other software developers or consults. Below are the ones you should continue on to if you want to be an above-average developer.

XUnit Test Patterns
By this stage you should be testing your software. The XUnit book, although not my favourite testing book (that goes to The Art of Unit Testing), is the best at describing the patterns of unit testing. Understanding these patterns will help you help others learn how to correctly add TDD to their skill set, and firm up the team's language around key terms like doubles, fakes, stubs, mocks and spies; all things that are commonly misunderstood.

EIP
This book covers the various messaging patterns that relate to integrating systems. Ideally you should read it prior to using any ESB-type framework like NServiceBus or MassTransit. If you are using these patterns without having read or understood the previous books, I fear you may have a big distributed mess on your hands: the systems you would tend to build using these patterns are most likely systems that should have been built on the patterns from the other books. If you do any integration you owe it to yourself to read this, which means, unless you are only doing the integration, reading all the other books first 😉

DDD
My favourite, and I believe the most misunderstood, book of the lot. This book is not just a code book, it is a process book. It raises the issue of language and introduces the notion of a ubiquitous language that the whole team, including the stakeholders/SMEs/users, understands.
I believe this book has pushed what I can produce for a customer to the next level. The code feels more aligned with the business, as opposed to just getting a task done. The closer code matches business processes and concepts, the easier it can change with the business too.

So there it is: my list of pattern books that I assume a tech lead or consultant has read, understands and implements. These are not the be-all and end-all of pattern books, but I believe they cover the bases. Please note there is a suite of other books that are not pattern books that I believe are also "essential reads", but that is perhaps another blog post.

Would love to hear feedback,
cheers Rhys