Beginning with the end in mind

A long time ago, in what feels like a previous life, I used to train bodybuilders. Yeah, weird, I know. I actually learnt a lot from that subculture: discipline, sacrifice and hard work are things you cannot escape in that life. One huge lesson I picked up that many missed was “Begin with the end in mind”. The art of bodybuilding is often confused with weightlifting. I was amazed by the number of competitors who complained that a physically weaker contestant had beaten them. Being able to bench more than your opposition counts for nothing.
In the sport of bodybuilding the winner is decided by a panel of judges; that’s right, you are judged by humans. It is the image you present that they must judge you on. You may even have the better physique, but if you do not display it better than the others you can lose. For this reason all of my athletes posed at every training session: in the middle of the gym, in front of mirrors, down to their underwear, unashamedly posing as if they were in front of a panel of judges. We would critique, take photos, film it, change the lighting… I never saw any other bodybuilders do this, and I am sure that is why, collectively, they won several national and international championships.

Now ask yourself: Are you training for game day?
For us this is deploying to the production environment.
Are you practicing it every day?
You should be.

If you are doing manual deployments, you are a weightlifter in a bodybuilding show. You will lose.
To be honest, we have it better than bodybuilders. We have no competitors and we have predefined requirements. We can measure our performance; they can’t, they can only contrast and compare.

So what do we need to do to prepare for the big day? Well, first and foremost, don’t make it a big day. Make it just like every other day: make deployments part of your daily routine and start deploying from day one. Personally, I like to have automated deployments working prior to writing any business code. Infrastructure cruft should be done in iteration zero, and deployments are infrastructure cruft.

A daily routine makes deployments so easy they are a non-event. This usually means defining everything you need to do to perform a deployment, then scripting it – writing the best code in the world means nothing if the deployment is botched… and manual deployments get botched.

Deployments should also cover all the steps you will perform on production deployment day, not just what needs to be done to make the application work on a developer’s box.

Define what your production environment looks like and then work back from there. If your testing environments are not the same, you should question why. The more similar they are, the less likely you are to have deployment issues. Where I currently work we have five environments:

  1. Developer’s machine
  2. “Development” Environment – automated CI deployment
  3. Test
  4. UAT
  5. Production

The developer’s machine I consider an environment. I often get latest, (sometimes) hit F5 and run the application, and I expect the application to work! This means I require a database that is in a valid state, external services that work, file paths that are correctly set up, etc. For this reason I like developers to have a local sandbox, which includes local databases. Nothing pisses me off more than when I am running an app and some bugger has cleared the dev database to test a script, breaking my flow. Having your own database also forces you to properly script database changes in a sane manner. Checking in those changes, getting latest and running your build script should get you up and running every time. See Tarantino from Eric Hexter or RoundHouse from the ChuckNorris framework for a simple way to get database migrations working cleanly in a .Net–SQL world.
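To make this concrete, here is a rough sketch of a dead-simple migration runner (hypothetical code; this is not the Tarantino or RoundHouse API, and real tools also handle things like GO batch separators and transactions):

using System.Data.SqlClient;
using System.IO;
using System.Linq;

public static class MigrationRunner
{
    public static void Run(string connectionString, string scriptsFolder)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // Assumes a SchemaVersion table that records which scripts have run.
            Execute(conn, @"IF OBJECT_ID('SchemaVersion') IS NULL
                            CREATE TABLE SchemaVersion (ScriptName nvarchar(255) PRIMARY KEY)");

            // Scripts are named so they sort in execution order, eg 0001_CreateCustomer.sql
            foreach (var file in Directory.GetFiles(scriptsFolder, "*.sql").OrderBy(f => f))
            {
                var name = Path.GetFileName(file);
                using (var check = new SqlCommand(
                    "SELECT COUNT(*) FROM SchemaVersion WHERE ScriptName = @name", conn))
                {
                    check.Parameters.AddWithValue("@name", name);
                    if ((int)check.ExecuteScalar() > 0)
                        continue; // already applied on this database, skip it
                }

                Execute(conn, File.ReadAllText(file));

                using (var record = new SqlCommand(
                    "INSERT INTO SchemaVersion (ScriptName) VALUES (@name)", conn))
                {
                    record.Parameters.AddWithValue("@name", name);
                    record.ExecuteNonQuery();
                }
            }
        }
    }

    private static void Execute(SqlConnection conn, string sql)
    {
        using (var cmd = new SqlCommand(sql, conn))
            cmd.ExecuteNonQuery();
    }
}

Every developer runs the same thing against their local sandbox, and the build server runs it against the Development environment, so the scripts get exercised constantly.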

The Development environment is development in name only: no development is done on it, but it is owned by the developers. We may use it for debugging in a production-like environment if things go pear-shaped; I just never have. Its main purpose, IMO, is automated deployments from our build server. If anything breaks here, the red light on the build server goes on and the check-in for that build fails. The steps to do this include:

  1. Cleaning the server – ie getting rid of the last deployment and backing it up
  2. Setting up the server, including:
    • creating app pools and virtual and physical directories,
    • ensuring dependencies are present, eg MSMQ, DTC etc
    • ensuring the dependencies can run, ie queues are set up, services are running
    • setting up account privileges
  3. Deploying the packages to the server and installing them if applicable
  4. Running SQL scripts, including:
    • creating users, roles and permissions
    • creating the database objects (tables, views, keys, constraints, triggers etc)
    • creating required reference data
  5. Testing the deployment
    • creating test data
    • running high level acceptance and smoke tests

If you can get to this stage, is it not obvious that doing a test deployment is going to be next to trivial? Migrating to the Test environment should be the same as migrating to UAT, and therefore the same as Production. Production deployments should then be just a matter of going through the motions.

This also means that you may need various scripts, or at least functions within those scripts, to carry out these various steps. Obviously if the Production environment is already set up we do not need to set it up again, and the deployment scripts should reflect that. Just as in normal code, use pre-conditions and post-conditions to enforce a correct deployment. If certain set-up steps are not required, log it and move on; just make sure it is part of the agreed process.
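A trivial sketch of what I mean (hypothetical step, base class library only): each step checks its pre-condition, skips or acts, logs, and then asserts its post-condition.

using System;
using System.IO;

public static class DeploymentSteps
{
    // Ensures the physical directory for the site exists.
    public static void EnsureSiteDirectory(string path)
    {
        if (Directory.Exists(path))
        {
            // Pre-condition already satisfied: log it and move on.
            Console.WriteLine("SKIP: {0} already exists", path);
        }
        else
        {
            Directory.CreateDirectory(path);
            Console.WriteLine("DONE: created {0}", path);
        }

        // Post-condition: fail the deployment loudly rather than limping on.
        if (!Directory.Exists(path))
            throw new InvalidOperationException("Post-condition failed: " + path + " was not created");
    }
}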

Get the DBAs involved and decide together how you want to manage your deployments. Keep reminding the team that this should be streamlined.
One thing that often trips up teams is permission issues. Personally, I prefer not having access to anything outside of the development environments (I’m pretty sure I am alone on this one). As far as I am concerned, the testers can deploy their own stuff. It will be the same script that the SysAdmins and DBAs will run in UAT and Production, so why should I do it? I have code to write! They can have permission to run the scripts in their own environment, making sure that no manual step has been introduced by any developer along the way. I feel this separation further reduces the risk of failed deployments. If the deployment to Test does fail, they can raise a bug, roll back and tell the developers what they think of them. Sure, this will be embarrassing, and it does happen, but would you rather it happen in the confines of the IT department or in full view of the customers?

This brings us back to what we are here for: to fix a customer’s problem. I assume this typically means delivering working software. Working software on your development machine has not fixed the customer’s problem; that’s like being a weightlifter in a bodybuilding show. Don’t be that guy. Think about the end game and make sure that each day you are working towards that end goal of providing your customer with a business solution, in the cleanest possible way.

*Sorry for putting the images of mostly nude men in your mind, it (probably) won’t happen again

Explicit interfaces

Further to our team’s discussions with Greg Fox, and following on from Colin Scott’s blog post, I thought I would highlight this:

It is a compile-time error for an explicit interface member implementation to include access modifiers, and it is a compile-time error to include the modifiers abstract, virtual, override, or static.

Explicit interface member implementations have different accessibility characteristics than other members. Because explicit interface member implementations are never accessible through their fully qualified name in a method invocation or a property access, they are in a sense private. However, since they can be accessed through an interface instance, they are in a sense also public.
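A quick example of what this means in practice:

using System;

public interface IReader
{
    string Read();
}

public class Document : IReader
{
    // Explicit implementation: note that no access modifier is allowed here.
    string IReader.Read()
    {
        return "contents";
    }
}

public class Demo
{
    public static void Main()
    {
        Document doc = new Document();
        // doc.Read();                   // compile-time error: not visible on the class
        IReader reader = doc;
        Console.WriteLine(reader.Read()); // fine: accessible through the interface
    }
}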

For more information see the MSDN documentation.

That’s all; interesting, though…

Coding Guidelines

Last night I presented to the Perth .Net Community on an upcoming tool called Pex. There were a couple of mentions in the talk of “allowable exceptions”, backed up by references to the .Net Framework Design Guidelines.
I was asked by a few people afterwards what the book was, and whether I had just made these guidelines up 😉
I was under the impression that this book was widely read, but it is clearly not as common knowledge as I may have thought.
Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries (2nd Edition)
is a must-read for .Net devs who are writing code that is consumable by others (ie anything that exposes public or protected members).

I would highly recommend it, as it also gives a lot of the background as to the “why” behind the recommendations. It is also nice to read the comments from the authors of various parts of the .Net framework as they point out many things, including their own mistakes.

The book is made available online for free (not sure if it is in its entirety) at MSDN here.

The allowable exceptions comment was in reference to section 7.3.5 (page 237), or a cut-down version here.
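I won’t reproduce the section here, but as I understand it the “allowable exceptions” guidance boils down to throwing the most specific framework exception type for the failure, rather than throwing (or catching) System.Exception itself. A rough illustration of that style (my sketch, not the book’s):

using System;

public class Account
{
    private decimal balance;

    public void Withdraw(decimal amount)
    {
        // The Argument* family for bad inputs...
        if (amount <= 0)
            throw new ArgumentOutOfRangeException("amount", "Amount must be positive");

        // ...InvalidOperationException when the object is in the wrong state.
        if (amount > balance)
            throw new InvalidOperationException("Insufficient funds");

        balance -= amount;
    }
}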

Oh, the links to the Pex stuff are here:

Thanks to everyone who came (especially those who bought beers afterwards) 😉

TeamCity – Late adoption

TeamCity is a build server put out by the wonderful JetBrains team, possibly best known in the .Net community for ReSharper.
TeamCity basically takes on the likes of CC.Net, but aims to make the process a little less painful in terms of set-up.
“A little less painful” is probably an understatement. TeamCity rocks!
In about 10 minutes I had a build server up and running, including install time! It was completely trivial. I use a build script (NAnt or MSBuild) anyway, so all I had to do was point it at source control, point it to the script, and I was done. Completely painless.
For those not completely au fait with what a build server is, this is what TeamCity is doing for me:
When someone checks code in to source control, it:

  • Gets the latest from source control
  • Builds the code using my config
  • Performs static code analysis
  • Runs unit tests
  • Runs integration tests
  • Deploys to a drop location, so our “tester” can always get a copy of what we are currently working on.

That’s a pretty basic build process, but it is fine for me and our team. I am well happy with the last 20 minutes of work. Cheers, JetBrains!

PostSharp and sanity checks

While playing with PostSharp for a validation framework I stumbled upon this.
It is a great little code block that stops sneaky team members referencing layers they should not be referencing. A compile-time error will ensue and let them know, for example, that they cannot access the DAL from the View projects… happy days!
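I won’t repost the PostSharp code here, but for comparison, a poor man’s version of the same sanity check can be written as a plain unit test that inspects assembly references (the assembly names below are hypothetical). It catches the problem at test time rather than compile time, but the build still goes red:

using System.Linq;
using System.Reflection;
using NUnit.Framework;

[TestFixture]
public class LayeringTests
{
    [Test]
    public void View_assembly_does_not_reference_the_DAL()
    {
        // Load the UI assembly and inspect what it references.
        Assembly view = Assembly.Load("MyApp.View");

        bool referencesDal = view.GetReferencedAssemblies()
                                 .Any(a => a.Name == "MyApp.DataAccess");

        Assert.IsFalse(referencesDal, "The View layer must not reference the DAL directly");
    }
}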

Loose coupling: The Illusion

I must say I totally agree with the sentiment Jeremy has here. Loose coupling is an illusion on most of the projects I have worked on. The project I currently work on has the client UI in a distributed system knowing that we use NHibernate as our ORM. Unbelievable. Needless to say, unit testing this solution is VERY hard! To me, this is the first place where tight coupling rears its ugly head. If you cannot unit test a class due to concrete dependencies, then either:

  • Deal with the fact that you are not, in fact, loosely coupled or
  • Fix it.

As mentioned in Jeremy’s post, having separate assemblies is not loose coupling. At best this forces a direction of flow of control; at worst it hides circular references or creates the illusion of loose coupling. Jeremy doesn’t break his solution down to quite the project granularity I do, and nor do others (JP Boodhoo, for example, is known to have very few projects in a solution). The notion is that you are not trying to hide behind the perception that more assemblies == less coupling. You can also split the single project into multiple assemblies in your build scripts if required. It then becomes a gentleman’s agreement amongst the team that the coupling rules will be adhered to. This is much easier to police with a tool like NDepend.

Currently I am without NDepend, so I still break up my solution into multiple projects, for a couple of reasons. I like to be able to see visually, and quickly, what is going on, and I like the default namespacing that is then applied (sure, this can be done with folders too). Probably the aspect I like most, however, is that I can see what references what at any given time by checking the references (again, we don’t have NDepend on this project). By opening the UI project I can now see whether Data Access references are made, or whether there are WCF references in a data access project. Without NDepend this is my last resort for policing the silly things that go on in the current (and no doubt future) projects.

With NDepend I would certainly be moving toward smaller projects. *Thinking out loud* I think a common assembly for the server side (with my default and abstract data access/repository stuff), a server-side assembly and a client-side assembly. It kind of makes sense. If none of that server-side stuff will ever be accessed by another application or assembly, then why break it up? Hmmm.

Anyway, on the path to loose(r) coupling, consider a couple of things:

  • Are you using a test-first approach? Although it is not strictly necessary, it tends to flush out dependencies early on.
  • Are you using a means of dependency injection? If you are new’ing up dependencies inline then you have just tightly coupled yourself to an implementation. Check out this for a start on DI, including poor man’s DI, which is still infinitely better than no DI, IMO (see the sketch after this list).
  • Code to interfaces, not implementations. I always thought this was a pretty obvious statement, but apparently not. Your DEPENDENCIES should be interfaces. Anything you interact with in terms of actions (ie methods or delegates/events) should ideally be via the interface. I very rarely see the point of having DTOs implement an interface…
  • Streamline code that interacts with the unmockable (file system, DB, WCF/Windows services etc); it should be as thin as possible. Try to get your working code out of these classes. This is more of a testing issue, but it will also lead to better design.
  • Get NDepend. It is a kick-ass tool that I wish my boss would get for me 😦
  • Do code reviews. Get out of trouble at the first sign; it’s very hard to “loosen up” an app once the tight coupling has set in.
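Here is that poor man’s DI sketch (hypothetical names): the class depends on an interface, and a default constructor chains to the production implementation while tests can inject a fake.

public interface IOrderRepository
{
    void Save(Order order);
}

public class Order { /* ... */ }

public class SqlOrderRepository : IOrderRepository
{
    public void Save(Order order) { /* hits the database */ }
}

public class OrderService
{
    private readonly IOrderRepository repository;

    // Poor man's DI: the default constructor supplies the production
    // dependency, but anything (including a test) can inject its own.
    public OrderService() : this(new SqlOrderRepository()) { }

    public OrderService(IOrderRepository repository)
    {
        this.repository = repository;
    }

    public void Place(Order order)
    {
        // business logic here...
        repository.Save(order);
    }
}

No container required, the class is unit testable from day one, and swapping in a proper IoC container later is straightforward.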

Loose coupling is a design goal we should strive for, but I believe it deserves a bit more than just lip service. Get the team on board and explain the benefits. The earlier this begins in the project timeline, the better, obviously.

When to use enums vs objects

Enums are a touchy point with .Net developers. There are the pure OO types that detest the use of them, and then the perhaps more MS-inclined that love the little buggers.
I will admit that I am more of the latter, but I have been rethinking my use of them lately and think I have settled on a few rules of thumb that I may start to follow, which of course I would like your thoughts on.

Enums in the domain
Enums map easily to reference tables in most ORMs, so there is an easy win here. Unfortunately, I am starting to lean towards not using enums in the domain. The presence of enums usually means different handling for different scenarios, and instead of using ugly switch statements in the domain I am going to try moving to objects over enums, which may help with using a more robust strategy pattern.
These objects are still easily mapped using discriminators, and this allows domain functionality to live in these new, more DDD-styled value types.
Possibly one approach is to start with enums in the initial stages of mapping and, as functionality grows, refactor to objects as necessary.

Enums over the wire
Enums over the wire I am completely OK with. Provided the enums are well documented, these little buggers just go across as the underlying value type you have assigned (commonly int). This keeps message sizes down and allows the client to create an enum on the receiving side to map to the given values. NServiceBus is an example of where this happens (for error codes, IIRC).

Enums in the application
I think this is where I would be most pragmatic with my approach. A lot of application developers, especially in the .Net world, are more than happy to deal with enums, and a small switch statement in the application may actually be easier for many to maintain. They may also be easier to deal with in UI displays, like drop-downs, as many people have standardised helpers for manipulating enums. Again, it really depends on the situation and how much logic is dealt with on the client/application.

Again, I hope I will take a reasonably pragmatic approach to this. Hard and fast rules often mean you are unnecessarily painting yourself into a corner.

For those wondering what the hell I am talking about when using objects as enums, this nasty code gives a vague idea. Note that you can now subclass the type, providing type-specific logic.

class Program
{
    static void Main(string[] args)
    {
        Person bob = new Person(OccupationType.Developer, OccupationEnum.Developer);
        // do other stuff…
    }
}

public class Person
{
    private readonly OccupationType occupation;
    private readonly OccupationEnum occupationEnum;

    public Person(OccupationType occupation, OccupationEnum occupationEnum)
    {
        this.occupation = occupation;
        this.occupationEnum = occupationEnum;
    }
}

// The "object as enum" version: a fixed set of well-known instances.
// The protected constructor stops arbitrary instances being created
// while still allowing subclassing.
public class OccupationType
{
    public static readonly OccupationType RockStar = new OccupationType();
    public static readonly OccupationType Developer = new OccupationType();
    public static readonly OccupationType BusDriver = new OccupationType();
    public static readonly OccupationType Maid = new OccupationType();

    protected OccupationType() { }
}

// The plain enum equivalent, for comparison.
public enum OccupationEnum
{
    RockStar,
    Developer,
    BusDriver,
    Maid
}
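And to show the payoff hinted at above, here is a variation of OccupationType (hypothetical behaviour, purely for illustration) where a subclass carries the type-specific logic, so no switch statement is needed:

public class OccupationType
{
    public static readonly OccupationType RockStar = new RockStarType();
    public static readonly OccupationType Developer = new OccupationType();

    protected OccupationType() { }

    // Default behaviour; subclasses override where they differ.
    public virtual decimal CalculateBonus(decimal salary)
    {
        return salary * 0.10m;
    }

    private class RockStarType : OccupationType
    {
        // The type-specific logic lives with the type, not in a switch statement.
        public override decimal CalculateBonus(decimal salary)
        {
            return salary * 0.50m;
        }
    }
}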

Setting up a Greenfields project

My last post talked about technical investment. Well, like financial investment, it is usually better to invest early.

Lee recently posted about Standards and Success Criteria. I think this is crucial for project success, and hopefully this post is a concrete example of what I think the standards should be when beginning a project.

Setting up your solution well early can make life a lot easier. Some of the main concerns to me when initially setting up are as follows:

Naming
Establish a naming convention early on. It does not really matter what that convention is; just define it. There are some .Net guidelines already established, so use these if you don’t already have your own (iDesign’s coding standard is a pretty good starting place).
Be sure to talk to other developers and stakeholders about the names you use, and change them as needed. It will be much easier to change these names early on than it will be later, when multiple devs are in the mix and releases are already being made.
Use the correct names for commonly known things. One of the key areas where I often see problems is pattern names used incorrectly. Make sure you understand what the pattern is if you are going to use the term. Every self-respecting developer should own GoF (or at least have read it).

Libraries & tools
Define what core and 3rd-party libraries and frameworks you are going to use at the start of the project. If you are going to use .Net 3.5 and WCF, use them from the beginning and use them properly.
Libraries you need to consider include: test/mock libraries, persistence/ORM, communication/ESB, UI, logging and frameworks (IoC, AOP etc).
Tools include: source control, analysis tools, code gen, IDE, CI, build, test runners/coverage, IDE plugins and documentation generators.
This is also a good chance to be an early adopter. If there is a new beta release of a product you are interested in, and its release is scheduled for before your release, why not start using it now? Sure, you will have to weigh this up yourself, but this has proved successful for me in the past.
One caveat here: make sure you know how to use the library before jumping on board. It is only after you have learnt the library that you know how steep the learning curve was. Personally, I found it easier to learn a whole new language (Python) than to pick up an ORM (NHibernate). Don’t drag your team through hell because you wanted to play with the latest toy.

Build process
Establish a build process early on and set your team up for success. You may not have CI set up yet, but that does not mean you can’t have the solution ready to go. Have your build scripts set up before you have any significant code base, and use the build script.
Ensure the build script is doing everything it should; you should not have to move DLLs or copy config files by hand. It should do it all.

Project Processes
Define how the project will be run from day one and do it properly. The term Agile is so flippantly thrown around, yet so rarely done properly… I find it quite annoying. To me it’s like saying I drive a Lexus when I am rollin’ in a Toyota. There is nothing wrong with a Toyota, and it may be the best vehicle for the job; just call it what it is.

Tests
Whether you are doing TDD or not, I assume you are writing some sort of automatable tests, whether they are strictly unit tests or otherwise. Integrate these into your build process. As we are talking about greenfield projects here, define your expected test coverage up front. Now, this is a double-edged sword, but I think it is worthwhile. Also encourage your devs to truly embrace TDD and correct UNIT testing using mocks/fakes/stubs.
On a side note, I was talking to a Java mate yesterday, and apparently there is a plug-in for IntelliJ that runs your tests in a background thread every time you build in the IDE. With the multi-processor machines we use nowadays, why not? It got me thinking: I could pretty easily set up something like that for my local machine too.

Follow coding best practices
The first port of call in doing this is setting warnings as errors and setting the warning level as high as it can go in each of your project files in the solution.
Get FxCop and use it. If there are rules you don’t like, then document that and remove the warning, but start with it on.
Some will disagree with me on this, but on this subject I say f*^k ’em. Not having these on just sets up a slippery slope. I would rather deal with the issues as they come up than retro-fix them.

Follow design best practices
Terms like IoC, AOP and TDD are not just cool buzzwords; they are best practices there to make YOUR life easier. Using IoC makes TDD easier. Using TDD encourages the use of IoC. AOP cleans up your code so the business problem at hand is not cluttered with technical sideshow concerns. Use these, embrace these, understand these and introduce them early on. Create a pit of success.
Also be sure to use a metrics tool like NDepend to check on your solution’s health. Again, dealing with issues early may help stop some nasty heads from popping up later.

Defining future Standards and Success Criteria
This is pretty broad and somewhat recursive, but there will be personal issues that you are concerned with and project-specific things that should be addressed early on in the picture.

Domain object mapping

I have been using NHibernate now for almost two years and, to be honest, for most of that time I really missed the point of using an ORM. I think a lot of this stems from the M$-influenced mindset of building from the DB up. This is not necessarily a bad thing and, to be honest, is often a necessity if you are using an existing DB or working disparately from the DBA or owner. However, as I explore the DDD world I see the benefit of building from the domain out, especially in complex domains that are not just CRUD operations.
For most of my time using an ORM I have just been using it as a glorified typed data set. The object would contain data in the form of properties that mapped to the structure of the table it was mapped to. This really is not using the full strength of an ORM and, to be honest, if this is all you are doing then a code-gen option like NetTiers is probably a much better path (faster, less stress, easier).
Now, however, I feel the ORM really is a way of abstracting your domain from your persistence mechanism. Unfortunately it took a nasty database to show me the light.
I don’t mind admitting my mistakes (usually it’s in a less public forum); these are mistakes I see frequently, and I don’t know if the authors are aware they are making them.
So here are some guidelines that I am now trying to follow. Take them for what they are.

Properties != Columns
Column and property names do not need to be the same. I have often seen the suffix “Flag” for Boolean columns; however, this is less readable, IMO, than the “Is” prefix in managed code. Do not feel the need to map names directly; use the most suitable domain representation.

Incorrect Mapping of Types
I have seen properties that are of type string because the underlying table has a char(1) for Y/N flags. There is then a second property that interrogates the string flag property and returns a bool depending on the result… yuck. Why not use the mapping correctly?

This also applies to numeric types and strings. There are more numeric types than int, so use the most appropriate one. If a string has a max length of 30 in the DB, enforce that in your mapping and in your class.
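For example, compare the two shapes below (a sketch; if memory serves, NHibernate’s YesNo type will do the char(1) Y/N conversion for you in the mapping, so the translation never leaks into the class):

// What I keep seeing: the column type leaks into the domain.
public class Customer
{
    // "Y" or "N" straight from the char(1) column
    public string DeletedFlag { get; set; }

    // Translation logic smeared across the domain
    public bool IsDeleted
    {
        get { return DeletedFlag == "Y"; }
    }
}

// What I want: map the column to the right type and the right name,
// and let the ORM do the conversion.
public class BetterCustomer
{
    public bool IsDeleted { get; private set; }
}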

Inappropriate use of the public keyword
The public keyword for some reason seems to be the default accessor. It should only be used when you actually want to expose that aspect of your code.
Classes
If you have a class that is in no way relevant to the outside world, do not mark it as public. Many mapping classes fall into this category. I don’t want to see Foo.FooBar[0].Bar… this shows me implementation details that I don’t care about and just adds noise. If you need the join class but it serves no public function, encapsulate it by providing a Bar collection property that traverses the join classes to give Foo.Bar[0]. You may want to consider whether your loading strategy for this relationship is appropriate (ie not lazy) if you are using it out of session.
Bottom line: ORMs are there to provide DB/persistence ignorance, not to highlight the DB structure.
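A sketch of that encapsulation, using the hypothetical Foo/Bar/FooBar from above:

using System.Collections.Generic;
using System.Linq;

public class Bar { }

// The join entity is an implementation detail, so keep it out of the public API.
internal class FooBar
{
    public Bar Bar { get; set; }
}

public class Foo
{
    private readonly IList<FooBar> fooBars = new List<FooBar>();

    // Callers enumerate Foo.Bars and never see the join class.
    public IEnumerable<Bar> Bars
    {
        get { return fooBars.Select(fb => fb.Bar); }
    }
}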
Properties
Properties that are mapped to columns should not always be publicly settable. There may be times when, if you update property A, then B must be considered too. Don’t make these setters public, or you give the impression that changing them freely is OK. Use a method to populate the properties, so you can also encapsulate the business logic associated with them.
Methods
This point follows on from the preceding one. Your domain objects do not have to be buckets designed to carry data; they can, and should, have behaviour associated with them. Not doing so leaks your logic out of the domain and results in duplicated logic and hard-to-maintain uber-services. Don’t let this happen.
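Putting the last two points together (an illustrative sketch, not from any real project):

using System;

public class Order
{
    // Mapped columns, but the setters stay private: changing Status and
    // CompletedOn independently of each other would make no sense.
    public string Status { get; private set; }
    public DateTime? CompletedOn { get; private set; }

    public Order()
    {
        Status = "Open";
    }

    // The method keeps the two properties consistent and gives the
    // business rule a home inside the domain object.
    public void Complete()
    {
        if (Status == "Completed")
            throw new InvalidOperationException("Order is already completed");

        Status = "Completed";
        CompletedOn = DateTime.Now;
    }
}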

Fear of refactoring
It is often difficult to change a database, especially in post-live situations. This does not mean your domain has to reflect old or incorrect data structures. Feel free to remove mapped columns from your classes and mapping files if they no longer make sense in the domain. Just because one area of development is wrong, it does not mean it has to leak out into the other areas.

Light Handed Mappings
That title doesn’t really make sense; what I really mean is be heavy-handed with your mappings, classes and tables. Assert your assumptions. If a column cannot be null, assert that in the DB, in the maps and in the managed code. If you know that class A will always need a reference to its collection of class B, then set lazy loading to false. Ignorance is a common excuse, but it’s one I am trying to minimise.

Redundant Repositories
A repository is generally only required for an aggregate root. Define your boundaries and remove redundant repositories. Do you really need an ICustomerAddressRepository? Get rid of it!
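In other words (hypothetical types), addresses are reached through their aggregate root, and only the root gets a repository:

using System;
using System.Collections.Generic;

public class Address { }

// Customer is the aggregate root; addresses live inside the aggregate.
public class Customer
{
    public Guid Id { get; set; }
    public IList<Address> Addresses { get; private set; }

    public Customer()
    {
        Addresses = new List<Address>();
    }
}

// One repository per aggregate root. No ICustomerAddressRepository.
public interface ICustomerRepository
{
    Customer GetById(Guid id);
    void Save(Customer customer);
}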

Confusing the purpose of a Repository
This may be more of a design issue than an ORM issue. A repository is a store for entities. It does not do reporting. It does not do complex SQL searches that return weird and wonderful datasets. I “like” Greg Young’s notion* (which I initially thought was wildly over the top, but it goes to prove a point) of just saving the id of the aggregate root and then serialising the object graph into the other column of the table. That database is not for reporting; it is for serving the domain.
Although your approach may be a little less drastic, this highlights a point: if you can’t think of your repository as an in-memory store, then you are not using it like a repository. In fact, my current thought is that you should easily be able to inject an in-memory store that implements the I[Foo]Repository interface and have everything work exactly as intended, albeit with a smaller set of data (eg for testing). I may be going a bit OTT here and would like feedback.
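A sketch of what I mean (hypothetical names; the interface is the ICustomerRepository from the previous sketch): the in-memory version drops in wherever the interface is expected, which is exactly what makes it handy for testing.

using System;
using System.Collections.Generic;

// Drop-in fake: same contract as the real repository, no database.
public class InMemoryCustomerRepository : ICustomerRepository
{
    private readonly IDictionary<Guid, Customer> store = new Dictionary<Guid, Customer>();

    public Customer GetById(Guid id)
    {
        return store[id];
    }

    public void Save(Customer customer)
    {
        store[customer.Id] = customer;
    }
}

If your code can’t work against something like this, it is probably because reporting-style queries have crept into the repository interface.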

*I hope I am not misquoting Greg.

Hopefully that has clarified some of the errors I have made and seen repeatedly in the past. I would like to hear feedback. Have I yet again missed a point? Am I being over the top?
Lay it on 🙂