Just a reminder that we are meeting at 43below on Barrack Street in the city at 5:30 tonight (11th Feb 09).
Be there or… well, or don't be there.
Stuff to play with
More stuff I am looking at:
- Android dev… very exciting, I will finally be able to do some real-world dev in my OS X environment
- I will be starting to use Git* due to Android dev with my Java mates… should be interesting. *This brings me up to using 4 different SCMs at the moment… bloody hell.
- AutoMapper from Jimmy Bogard. This looks to be a great help with the mismatch between DTOs and domain objects. Used with the NH fluent interface for mapping, life could be significantly easier 🙂
Some links:
Fellow OzAlt.netter's post on Git: http://www.paulbatum.com/2009/02/im-starting-to-git-it.html
Singleton Pattern
I am not a fan of the singleton pattern. This may come as a surprise to some, as the very first thing people may notice when using my code is that the wrapper I have for my IoC container acts as a singleton.
So why do I, along with many others, not like the singleton? Because it is usually used incorrectly and is hard to test.
The first time I saw massive singleton abuse was when I had to go on a consulting gig to help “finish” a project, i.e. the final sprint prior to go-live. The whole data access layer was made up of a mess of singletons. There was no need for it; none of the objects had state, let alone a requirement to hold state in a single instance, but they would not let us, the hired consultants, refactor it out. Bizarre*. Since then I have seen a singleton butcher-job at just about every contract I have had. It seems to be the first pattern people use and the first to be abused.
So when do I use a singleton? Well, when an object should only ever have one instance. The notion of a singleton implies there is only one logical instance of that type that can exist at a time. I think this is the fundamental problem I regularly see: most times I see a singleton used, this is just not the case.
To highlight this even more, often the object itself does not even have state. If the type has no possible (instance) state, then there is surely no need for singular state! In that case the object is better off as a static class. In the same way it is OK to use singletons, it is OK to use static classes; just make sure the circumstances fit your choice.
One annoyance is when singletons are used so they can be “thread safe” and then the construction of the object is not thread safe. Please investigate how to do this if it is actually a concern. Even better, use an IoC container! By using an IoC container the object becomes easily testable and your infrastructure concerns are hidden from the consumer. To me, this is a good thing. 🙂
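For the record, if you genuinely do need a singleton without a container, construction can be made thread safe by leaning on the CLR's static initialisation guarantees. A minimal sketch (the class name is just illustrative):

public sealed class ConfigurationCache
{
    // The CLR runs the static field initialiser exactly once, so construction
    // is thread safe without any explicit locking.
    private static readonly ConfigurationCache instance = new ConfigurationCache();

    // An explicit static constructor stops the compiler marking the type
    // beforefieldinit, keeping the lazy semantics predictable.
    static ConfigurationCache() { }

    private ConfigurationCache() { }

    public static ConfigurationCache Instance
    {
        get { return instance; }
    }
}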
*That project is still going, still not live and apparently still has singletons used inappropriately in the data access layer. Oh well.
Loose and Free – Hippie tools for Hippie code
I don't know about you but I like loose coupling. Being able to code against abstractions is great; I can do TDD and focus on SRP and proper SoC. Maintenance and extensibility become much easier and the code is visibly cleaner. There are tools out there that already help me do this, such as various mock frameworks, IoC containers and ESBs, but I thought I would introduce a couple of relative newcomers to the scene that can help loosen things up a bit.
A few months (maybe years) ago I set up an abstraction over the IoC containers I use (Spring.Net, Castle, StructureMap and Unity) so I didn't have to remember all the different semantics for each container on each contract/project I worked on. Well, a few months ago M$ came up with the Common Service Locator. It is a good idea. I like it and may replace my abstraction with this soon (only for the sake of people maintaining my code). If you are a contractor/consultant or use various IoC libraries then this could be a godsend. IoC is a crucial part of my work and many of the patterns I use are very hard to implement without even a basic IoC container. This will hopefully break down some barriers for newcomers who can't decide what to start with.
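A rough sketch of how the consuming side looks. The bootstrap class, IOrderRepository and the adapter variable are illustrative; the actual adapter comes from whichever container you pick (Castle, StructureMap, Unity etc. all have one):

using Microsoft.Practices.ServiceLocation;

public interface IOrderRepository { /* ... */ }

public static class Bootstrapper
{
    // Called once at start-up with your container wrapped in its CSL adapter.
    public static void Wire(IServiceLocator containerAdapter)
    {
        ServiceLocator.SetLocatorProvider(() => containerAdapter);
    }
}

public class OrderService
{
    public void Ship(int orderId)
    {
        // Consumers resolve through the abstraction, not a specific container.
        var repository = ServiceLocator.Current.GetInstance<IOrderRepository>();
        // ... use repository ...
    }
}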
MEF is the Managed Extensibility Framework, which allows post-deployment extensions. Think about that: it allows me to have hooks in my application so that later on I, or a third party, can easily extend it, like an improved way of bolting ReSharper onto VS or, as Glenn mentions in this video, like what can be done in World of Warcraft with various plug-ins. MEF is in its 4th release as of this week. Check it out; it has a kick-ass team working on it (Glenn and Hammet both get 2 thumbs up from me).
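A rough sketch of the attributed model, using the API as it settled in later drops (names may differ slightly across the previews); IReportPlugin and the Extensions folder are illustrative:

using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

// The contract the host exposes; plug-in authors implement it after deployment.
public interface IReportPlugin { string Name { get; } }

[Export(typeof(IReportPlugin))]
public class CsvReportPlugin : IReportPlugin
{
    public string Name { get { return "CSV export"; } }
}

public class ReportHost
{
    [ImportMany]
    public IEnumerable<IReportPlugin> Plugins { get; set; }

    public void LoadPlugins()
    {
        // Any assembly dropped into .\Extensions can contribute exports.
        var catalog = new DirectoryCatalog("Extensions");
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);
    }
}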
Live and code free… be a hippie coder, get loose man…
Some existing tools and libraries I use and recommend are:
Mock Framework: RhinoMocks
IoC: Castle, Unity & StructureMap
ESB: MassTransit
Perth Alt.Net!
Yay! I have just found out that Perth is setting up an Alt.Net group… I'm super psyched about this… it's exactly what this town needs, a good shake-up about core coding fundamentals and best practices 😉
Cheers
Real World TDD Talk
I am talking at next month's Perth .Net Community of Practice on TDD in the real world.
The reason I have gone with this topic is that the Perth community as a whole has been very reluctant to adopt TDD; hopefully this talk will break down those barriers. I feel this is something that can help developers on any technology, so I hope there is a good turnout.
The talk is on March the 5th in the Perth CBD at Excom Education, starting @ 5:30pm.
Continuous Integration : 101
Continuous integration is a term with mixed meanings, usually intended to describe an automated build process associated with a development group's source control system. Typically, on check-in the build server polls the source control repository, gets the latest source code and proceeds to build it. Notifications can be sent on success or failure of the build, meaning we always have a very recent view of the health of the code base.
CI is a very helpful part of an agile team's delivery and can greatly aid in improving build/deployment times and, just as importantly, can keep the quality of the source code high. Unfortunately there seems to be a large number of agile teams not embracing this very simple and beneficial process; it seems there is a high perceived point of entry, so let's try to eliminate that. This whole process takes less than 30 minutes.
*NOTE: this is not going to be a ground-breaking article. However, if you have never set up a CI environment then this is, I am sure, a worthwhile read. It is also not a generalisation; this is just how I do it. Feel free to adapt and modify at will. This example just uses the set-up I have for the contract I am on at the moment: VS2008, MS Test, VSS, MSBuild, TeamCity… I am sure it is known that these are not my preferred test framework, source control or build tooling… so use your initiative and figure out those specifics if yours are different… it's really not very hard.
Step 1: Set up a Solution Build configuration
Assuming you have a solution already set up, open the solution in VS, right-click on the solution and select the Configuration Manager.
Set up a new solution configuration that inherits from Debug:
Name this new configuration AutomatedDebug and make sure you create a new solution configuration for it (the check box). As a note, I prefer it to inherit from Debug.
Now set the active configuration in VS to AutomatedDebug (the drop-down, usually next to the play/Debug button, should read AutomatedDebug).
Step 2: Configure your Automated Build
I believe part of having an easy-to-use solution is having everything required to run and build the solution in source control. I find the following file structure a good start:
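Roughly, the layout looks like this (the names just follow the conventions described below):

MySolution\
    Build\    (build output only, never checked in)
    Lib\      (third-party DLLs, XML and config we do not control)
    Src\      (the solution, projects and SQL)
    Tools\    (build scripts, test runners, code gen and analysis tools)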
In the Lib folder are all my references. I try to keep non-framework things out of the GAC; it just leads to headaches. Basically any external DLL, XML or config that you do not control and that is required to build goes here.
In the Tools folder is everything that is not the code you ship: things like code gen tools, build tools, templates, snippets, static analysis tools and test runners. Some think this is overkill and a waste of disk space. Disk space is cheap, my time is not. I want to be able to get latest, hit build, and have it work.
In Src is all the source code; basically your solution sits in here. Be sure to include SQL… it is code that runs, so it belongs in source control.
I prefer to have all of my output in one Build folder, so I go through all of the projects' properties and change the output path for the AutomatedDebug configuration to ..\..\Build\AutomatedDebug\, with the XML docs in the same place. I turn on “warnings as errors” and set the warning level to 4, plus I set FxCop to the desired settings (personally I don't care about localisation, mobility etc.).
My test projects are set up a little differently; I like my tests in a separate folder, so test projects output to ..\..\Build\AutomatedDebugTest\, with no increased warnings, FxCop or XML docs.
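For reference, this is roughly the sort of PropertyGroup that ends up in each csproj once the configuration is set up (the paths and XML doc file name here are illustrative):

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'AutomatedDebug|AnyCPU' ">
  <OutputPath>..\..\Build\AutomatedDebug\</OutputPath>
  <DocumentationFile>..\..\Build\AutomatedDebug\MyCompany.MyProject.XML</DocumentationFile>
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
  <WarningLevel>4</WarningLevel>
  <RunCodeAnalysis>true</RunCodeAnalysis>
</PropertyGroup>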
As anything in the Build folder is build output, you do not need to put it in source control and it is perfectly OK to delete this folder at any time.
*NB: when adding a project to the solution, the project-specific version of that build configuration will not automatically be added; you have to create it yourself. Go into the solution Configuration Manager again and, for the new project, create a new configuration to match the others.
Step 3: Create a Build Script
This can be the tedious part of the process, so I am just going to give you one ready to go. Place the build script in your Tools folder as it is, in effect, a tool. This build script is MSBuild, as that is the build tool I get the least friction with in terms of management acceptance. Other options are NAnt, Rake, PSake and Bake.
<Project DefaultTargets="AllTests" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <SolutionFile>..\MySolutionName.sln</SolutionFile>
    <OutputDirectory>..\Build</OutputDirectory>
    <Configuration>AutomatedDebug</Configuration>
    <TestOutputDirectory>$(OutputDirectory)\$(Configuration)Test</TestOutputDirectory>
    <ReportOutputDirectory>$(OutputDirectory)\$(Configuration)Reports</ReportOutputDirectory>
    <DefaultNamespace>MyCompany.MySolutionName</DefaultNamespace>
    <MsTest>"$(ProgramFiles)\Microsoft Visual Studio 9.0\Common7\IDE\mstest.exe"</MsTest>
  </PropertyGroup>
  <ItemGroup>
    <UnitTestProject Include="/testcontainer:$(TestOutputDirectory)\$(DefaultNamespace).Tests.dll" />
    <IntegrationTestProject Include="/testcontainer:$(TestOutputDirectory)\$(DefaultNamespace).IntegrationTests.dll" />
  </ItemGroup>
  <Target Name="Clean">
    <RemoveDir Directories="$(OutputDirectory)" Condition="Exists('$(OutputDirectory)')" />
  </Target>
  <Target Name="Build" DependsOnTargets="Clean">
    <MSBuild Projects="$(SolutionFile)" Properties="Configuration=$(Configuration)" />
  </Target>
  <Target Name="UnitTest" DependsOnTargets="Build">
    <RemoveDir Directories="$(ReportOutputDirectory)" Condition="Exists('$(ReportOutputDirectory)')" />
    <MakeDir Directories="$(ReportOutputDirectory)" />
    <Exec Command="$(MsTest) @(UnitTestProject, ' ') /resultsfile:$(ReportOutputDirectory)\UnitTests.trx" />
  </Target>
  <Target Name="IntegrationTest" DependsOnTargets="Build">
    <MakeDir Directories="$(ReportOutputDirectory)" />
    <Exec Command="$(MsTest) @(IntegrationTestProject, ' ') /resultsfile:$(ReportOutputDirectory)\IntegrationTests.trx" />
  </Target>
  <Target Name="AllTests" DependsOnTargets="UnitTest;IntegrationTest" />
</Project>
A quick overview of what the build file does (skip this if you are comfortable with MSBuild/NAnt):
- We define that this is an MSBuild XML file with a default target.
- We set up all the variables. The tags are not special tags; MSBuild allows you to use self-defined XML tags as the variable names. Note we can use other variables to define further variables. Of special note is the way the test project names are collated: the “UnitTestProject” and “IntegrationTestProject” items are concatenated using MSBuild's @(ItemName, ' ') syntax so we can pass them to MS Test later.
- We define a target. The first we have called “Clean” and it deletes the build output folder. That's it.
- The “Build” target specifies our solution to build with the predefined build configuration, “AutomatedDebug”, as specified in the variables section. Also note that Build depends on the Clean target. This means “Clean” will always run before this target.
- We define a “UnitTest” and an “IntegrationTest” target that both run tests and depend on Build, and therefore Clean. These targets are very specific to MS Test; most of the other runners are much easier to set up. This also requires VS on the build machine. To do this without VS installed, check out the Gallio project.
Be sure to add test project names as new test projects are created or your build server will not be running those tests! Create a Build.bat file in the Tools folder alongside your build script and insert the following text so you can run the script:
@C:\Windows\Microsoft.NET\Framework\v3.5\MSbuild.exe AutomatedDebug.build /t:AllTests /l:FileLogger,Microsoft.Build.Engine;logfile="AllTests.log"
@pause
This assumes you are using .NET 3.5 and that the name of your build file is AutomatedDebug.build, which resides in the same folder. The last part gives a log file of the build output, which I find handy, although verbose.
I really try to stress using very slim bat files to call my build scripts. They should be one-liners that call a target in the build script. Your targets are like methods: a target should be very specific, do one thing and do it well… like your very well written code 😉 If you need to do lots of things then create a target that depends on the other targets you need to run (see the build file).
Step 4: Set up Source Control
I am not going to tell you how to set up source control; you should be able to do this by yourself. I am currently using VSS *gasp*, but this process works with SVN, TFS, Git… the source control tool really doesn't matter in terms of this specific process. I do prefer the build server to have a read-only login to retrieve the code, however.
One more reminder: DO NOT CHECK IN THE BUILD FOLDER! This is a throw away folder that should not be in source control.
Step 5: Set up your Build Server
TeamCity is my new favourite toy. In 7 minutes I had downloaded, installed and set up a build server. Awesome. CC.Net is the other big name in .Net CI, but its XML config has caused many a headache; TeamCity was so easy I have to recommend it. Follow this video to get up and running. Typically this is done on a separate machine. It does not need to be a flash machine; in fact I would recommend whatever is lying around not getting used. Mine is a 4-year-old single-core laptop.
Now to set up notifications. I personally like the tray notification tool, clean and out of the way.
Now is a good place to add the extra tasty stuff to your build script, like code analysis tools (Simian, NDepend, code coverage/NCover), deployment (ClickOnce, WiX packaging), documentation (Sandcastle) etc. These things normally get left off the shopping list as they add to the build time. Well, now another machine is doing the build and it doesn't affect me. Be sure to add these as separate targets in the script to keep things nice and clean.
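As an illustrative example only, an extra target in the build script might look something like this (the tool name is a placeholder for whatever analysis tool you keep in the Tools folder):

<Target Name="Analysis" DependsOnTargets="Build">
  <!-- SomeAnalysisTool.exe is a placeholder; swap in Simian, NDepend, NCover or similar -->
  <Exec Command="SomeAnalysisTool.exe $(OutputDirectory)\$(Configuration)" />
</Target>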
Wrap up
Now that was pretty trivial, wasn't it? To be honest the hardest thing is setting up the solution folder structure and the build script. This set-up has served me well for a while now; I'm sure there are many others doing things differently, but this is as good a place to start as any.
Key points are:
- Have a separate build config that is set up for high code quality with a separate output folder for test projects
- Keep a clean file system; preferably use a root build output folder for all build, test and report output (code coverage, test results, static analysis etc.)
- Keep build project targets focused
- Use TeamCity; it's the easiest part of the whole process, but it is pointless if the rest is not set up ready to go.
Plans for the Future
The last year was a big year for me professionally, highlighted by several things such as working in London, attending the Seattle and London Alt.Net conferences, and the move back to Perth. I have grown largely from realising that big companies and big cities do not necessarily mean good quality of work… in fact quite the opposite, and after meeting the hard-core Alt.Net'ers I realised how much I didn't know about the fundamentals of coding… I was basically a Microsoft coder (which is not a bad thing, but it is bad not to know the alternatives). Since moving back to Perth I have managed to pick up a great contract with a very open-minded and technically skilled team. Fortunately they are taking on board a lot of the new design and process concepts I have picked up over the last 2-3 years and managed to solidify over the last 18 months. It has actually been really good because it has forced me to put my money where my (particularly large) mouth is.

With this I have realised that I have more to give the community than I am perhaps currently giving. So I intend to do more presentations at user-group-type meetings, and I have plans for a Back to Basics coding day in which I would like to share with the community the core concepts behind things like TDD, IoC, SOLID, DDD, AOP etc. that I think are not being picked up by the masses, especially here in Perth. The locals have an excellent understanding of the new cool whizz-bang tech-based stuff (like Azure, WPF etc.), but without core fundamental programming knowledge and design principles you will still be building a potential house-of-cards architecture.
I also plan on releasing the Artemis West Stack as a guidance package that allows for flexible data-driven and domain-driven smart client applications that are technology agnostic. This has been months in the making with many revisions and reworks (and lots of swearing at the GAT). I do not intend for this to be OSS at the moment (AW is not an OSS company); however I will chat with the other directors to see what direction this will take.
Last year I took time out to learn two new languages, Ruby and Python, as well as dabbling in jQuery. This year I plan on learning F# and actually putting the Python and Ruby to use… learning a language and not using it is not really any good, e.g. I studied French for 5 years but I am conversational at best now due to lack of use.
I also think it may be of benefit to AW for me to get certified, so I will probably do some of the dirty M$ certifications and hopefully a Scrum Master course if they run one in Perth.
On top of looking for a house to buy, 2009 looks to be another big year 🙂
DDD : Value Types
*A follow on from : http://rhysc.blogspot.com/2008/09/when-to-use-enums-vs-objects.html
This is a brief demo of mapping, constructing and using value types in the domain. To stick with the clichés we will use orders and order statuses. To give some structure we will lay out some ground business rules:
- When an order is created it has the status In Process
- It is then Approved
- Then Shipped
- Cancelled and Backordered need to be in the mix too
Ok, not the most robust order system, but that’s not the point.
Let's first look at how the domain logic could be handled using an enum… bring on the switch statement!!!!
OK, so let's see if we can edit an order when the status is an enum:
public class Order
{
    //...
    public bool CanEdit()
    {
        switch (this.OrderStatus)
        {
            case OrderStatus.InProcess:
                return true;
            case OrderStatus.Approved:
                return false;
            case OrderStatus.Shipped:
                return false;
            //etc etc
            default:
                return false;
        }
    }
    //...
}
OK, that is not exactly scalable… the more statuses we get, the more case statements we have to add. If we add a status we also have to find every place there is a switch statement using this enum and add the new status as a case. Think about this for a second… really think about it: how many enums do you have that have functionality tied to them? Right.
Now let's look at the same code using “real” objects; exit the switch and enter the strategy pattern:
public class Order
{
    //...
    public bool CanEdit()
    {
        return this.OrderStatus.CanEditOrder();
    }
    //...
}
Now obviously there needs to be some know-how on this non-enum enum. Let's have a look at how I have done this in the past.
/// <summary>
/// Sales Order Status Enumeration
/// </summary>
public abstract class SalesOrderStatus
{
    #region Statuses
    /// <summary>InProcess</summary>
    public static SalesOrderStatus InProcess = new InProcessSalesOrderStatus();
    /// <summary>Approved</summary>
    public static SalesOrderStatus Approved = new ApprovedSalesOrderStatus();
    /// <summary>Backordered</summary>
    public static SalesOrderStatus Backordered = new BackorderedSalesOrderStatus();
    /// <summary>Rejected</summary>
    public static SalesOrderStatus Rejected = new RejectedSalesOrderStatus();
    /// <summary>Shipped</summary>
    public static SalesOrderStatus Shipped = new ShippedSalesOrderStatus();
    /// <summary>Cancelled</summary>
    public static SalesOrderStatus Cancelled = new CancelledSalesOrderStatus();
    #endregion

    #region Protected members
    /// <summary>The status description</summary>
    protected string description;
    #endregion

    #region Properties
    /// <summary>Gets the description of the order status</summary>
    /// <value>The description.</value>
    protected virtual string Description
    {
        get { return description; }
    }
    #endregion

    #region Public Methods
    /// <summary>
    /// Determines whether this instance allows the editing of its parent order.
    /// </summary>
    /// <returns>
    /// true if this instance's parent order can be edited; otherwise, false.
    /// </returns>
    public abstract bool CanEditOrder();
    #endregion

    #region Child Statuses
    private class InProcessSalesOrderStatus : SalesOrderStatus
    {
        public InProcessSalesOrderStatus()
        {
            description = "In Process";
        }
        public override bool CanEditOrder()
        {
            return true;
        }
    }
    private class ApprovedSalesOrderStatus : SalesOrderStatus
    {
        public ApprovedSalesOrderStatus()
        {
            description = "Approved";
        }
        public override bool CanEditOrder()
        {
            return false;
        }
    }
    private class BackorderedSalesOrderStatus : SalesOrderStatus
    {
        public BackorderedSalesOrderStatus()
        {
            description = "Back ordered";
        }
        public override bool CanEditOrder()
        {
            return true;
        }
    }
    private class RejectedSalesOrderStatus : SalesOrderStatus
    {
        public RejectedSalesOrderStatus()
        {
            description = "Rejected";
        }
        public override bool CanEditOrder()
        {
            return false;
        }
    }
    private class ShippedSalesOrderStatus : SalesOrderStatus
    {
        public ShippedSalesOrderStatus()
        {
            description = "Shipped";
        }
        public override bool CanEditOrder()
        {
            return false;
        }
    }
    private class CancelledSalesOrderStatus : SalesOrderStatus
    {
        public CancelledSalesOrderStatus()
        {
            description = "Cancelled";
        }
        public override bool CanEditOrder()
        {
            return false;
        }
    }
    #endregion
}
Note this is especially good for value objects in a DDD sense and they can be easily mapped to the database. More benefits include that I do not have to hit the DB to get a status. As they are value objects and have no need for an ID (in the domain), we only map the ID in the mapping files; the POCO objects know nothing of IDs. I can also create lists for drop-down binding if required… with no need to retrieve from the DB.
I have had some people raise the point: “but what if we need a change in the DB for a new status?”. Well, that sounds like new logic to me and should mean reworking the logic and a recompile anyway; however, now we are being very clear about how each status is handled, as the object possesses its own logic.
If you are using NHibernate, the mapping can look something like this:
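A minimal sketch, assuming the status value object is persisted as a single column through a custom IUserType; the class, table, column and type names here are illustrative:

<?xml version="1.0" encoding="utf-8"?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                   assembly="MyCompany.Orders" namespace="MyCompany.Orders.Domain">
  <class name="Order" table="Orders">
    <id name="Id" column="OrderId">
      <generator class="native" />
    </id>
    <!-- the value object is stored as a simple column; the IUserType translates it -->
    <property name="OrderStatus" column="OrderStatusId"
              type="MyCompany.Orders.Data.SalesOrderStatusUserType, MyCompany.Orders.Data" />
  </class>
</hibernate-mapping>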
The above SalesOrderStatus abstract class can now have static methods on it to do things you may normally hit the DB for, e.g. getting lists of statuses; however now you are confined to the realms of the domain. This makes life easier IMO as there are fewer external dependencies. I have found I use enums very rarely in the domain and usually only have them in the UI for display objects or in DTOs across the wire (e.g. error codes, as an enum falls back to its underlying universal int type).
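As a sketch of that idea (illustrative only, and assuming a using for System.Collections.Generic in the class file), a static helper inside SalesOrderStatus could expose the statuses for binding:

/// <summary>All known statuses, e.g. for binding a drop-down without a DB call.</summary>
public static IList<SalesOrderStatus> GetAll()
{
    return new List<SalesOrderStatus> { InProcess, Approved, Backordered, Rejected, Shipped, Cancelled };
}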
Try it out, see if you like it and let me know how it goes.
T4 GAX & GAT: Revisited
I have dabbled with T4, GAX and most specifically the GAT before and never really got any traction. It's a great idea but it is very intricate. Nothing by itself is overly complicated, but there are lots of little things that can quickly put you off.
I am trying to set up default MVP solutions for myself. I have a default architecture that I have used for several commercial applications and would like a quick way to replicate it. Typically I follow a Presenter First pattern and interact with a service layer for the model. The service layer may be a proxy for a distributed app or it may be a thin tier for interaction with the model; it doesn't really matter. The fact is I have very similar classes, very similar tests and very similar structure in many of these apps. This is a perfect candidate for generating these frameworks. One of the big things I want out of this exercise is to get my default build configurations and build scripts predefined. This is a fiddly aspect that I hate doing, but I always do it because of the time it saves in the long run.
So attempt one will be a WinForms MVP solution without a domain project. I will use MSTest, RhinoMocks and MSBuild on the 3.5 version of the framework. Not sure what IoC container I will use yet.
As this is something I want to reuse wherever I work, I don't want to make NH a default aspect. I may include an NH model project later.
So far the whole process has not been overly pleasant. I have had files (as in dozens of them) just get deleted on an attempt to register a project, projects trying to compile templates that are marked as content (i.e. not for compilation), packages that just decide they are no longer packages… so I decided to set up a VM to contain the madness. Unfortunately I only have a Vista 64 install on me and VPC can only host 32-bit OSs… oh well, the PnP (Pain 'n Phailures?) impedance continues.
Wish me luck…