Holidays are good

I have just had a 12-day holiday catching up with family and friends in Sydney (Aus) and Auckland, Hamilton and Whangarei (NZ). It was a lovely trip, and nice to see all the new additions to the family: kids, husbands, wives etc.!

Back to work now, but I feel good and well relaxed, hoping to get right back into the fun stuff: coding!

FYI, there looks to be a Coding Dojo being organised by Mike (Alt.Net Perth) to which I may be contributing, so keep April the 8th earmarked if you are a Perth local. For those unfamiliar, a dojo is a way of cross-pollinating skills by actually coding with others, not just watching. The theme will be TDD.

Ain’t That the Truth!

A man in a hot air balloon, realizing he was lost, reduced altitude
And spotted a woman below. He descended further and shouted to the lady
“Excuse me, can you help me? I promised a friend I would meet him an
hour ago, but I don’t know where I am”

The woman below replied, “You’re in a hot air balloon, hovering
approximately 30 feet above the ground. You’re between 40 and 41
degrees north latitude and between 59 and 60 degrees west longitude.”

“You must be in IT,” said the balloonist.

“Actually I am,” replied the woman, “How did you know?”

“Well,” answered the balloonist, “everything you have told me is
technically correct but I’ve no idea what to make of your information
and the fact is I’m still lost. Frankly, you’ve not been much help at
all. If anything, you’ve delayed my trip.”

The woman below responded, “You must be in Management.”

“I am,” replied the balloonist, “but how did you know?”

“Well,” said the woman, “you don’t know where you are or where you’re
going. You have risen to where you are due to a large quantity of hot
air. You made a promise, which you’ve no idea how to keep, and you
expect people beneath you to solve your problems. The fact is you are
in exactly the same position you were in before we met, but now, somehow,
it’s my f***ing fault…”

Yeah, an oldie but a goodie…

Castle: We Still XML The Old Way

Post 4 in a 101 series on the Castle IoC project called Windsor. See part one for background

XML Config Based Registration

Although demoware may show us that inline registration of components is good, it may not prove to be the best thing for your application. The previous posts have shown registration of types in C#, which means we must recompile if we change any part of our registration. This may be considered a bad thing; it may also be considered a bad thing that the assembly you are performing all of this registration in knows about (and has references to) a lot of possibly inappropriate libraries and projects. One way around all of this is to use the XML configuration.

Using our same basic console app I add an app.config file and put the following in it:




<configuration>
  <configSections>
    <section name="castle"
             type="Castle.Windsor.Configuration.AppDomain.CastleSectionHandler, Castle.Windsor" />
  </configSections>
  <castle>
    <components>
      <component lifestyle="transient"
                 id="Writer"
                 service="CastleIocConsole.IWriter, CastleIocConsole"
                 type="CastleIocConsole.HelloWriter, CastleIocConsole" />
      <component
                 id="Gday"
                 service="CastleIocConsole.IWriter, CastleIocConsole"
                 type="CastleIocConsole.GdayWriter, CastleIocConsole" />
      <component
                 id="List"
                 service="CastleIocConsole.IWriter, CastleIocConsole"
                 type="CastleIocConsole.ListWriter, CastleIocConsole" />
      <component
                 id="MessageService"
                 service="CastleIocConsole.IMessageService, CastleIocConsole"
                 type="CastleIocConsole.GuidMessageService, CastleIocConsole" />
    </components>
  </castle>
</configuration>



Several things to note:

  • You should specify the fully qualified name any time you declare types in XML, that is: Full.NameSpace.Type, Assembly
  • All components have an id
  • You don't need to specify a service; if the type has no interface, or is the base class, you can just declare the type
  • You can declare lifestyles (the transient lifestyle specified is not important in this example; it's just there to show it can be done)

In addition to this you can specify which concrete dependencies you want, assign component properties and declare interceptors for AOP-style run-time injection of code blocks (more on this to come).
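For example, constructor dependencies and properties can be pinned down per component with a parameters block; the component id and parameter names below are illustrative (each child element matches a constructor parameter or property name, and ${id} references another registered component):

```xml
<component id="SpecialList"
           service="CastleIocConsole.IWriter, CastleIocConsole"
           type="CastleIocConsole.ListWriter, CastleIocConsole">
  <parameters>
    <!-- force this component to use the component registered as "MessageService" -->
    <messageService>${MessageService}</messageService>
  </parameters>
</component>
```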

Using this config we could do something like this:

using System;
using Castle.Core.Resource;
using Castle.Windsor;
using Castle.Windsor.Configuration.Interpreters;

namespace CastleIocConsole
{
    class Program
    {
        static void Main(string[] args)
        {
            IWindsorContainer container =
                new WindsorContainer(
                    new XmlInterpreter(
                        new ConfigResource("castle")));
            var writer1 = container.Resolve<IWriter>();
            writer1.Write();
            var writer2 = container.Resolve<IWriter>("List");
            writer2.Write();
            Console.ReadKey();
        }
    }
}

which would give the output of (see previous post for component definitions):

Hello World
122eb0e3-0812-4611-a580-7cf976039a89
5a916589-f7a5-4121-9fb6-a51cf24a6f43
977ec6f9-8110-4074-b537-047f21e6a70f
a136ae03-0caf-49fb-8511-92e3f2d39446
3aa1db17-78c8-438f-84a9-91ef14753ca7

As with any XML there seem to be a few catches, so here are some gotchas that may help:

  • All components need an id
  • First in, best dressed: the first component registered is the default for that service (interface)
  • Standard .Net XML config notation applies, e.g. no line breaks in type definitions, generics marked up with back-tick notation etc.
  • There is no official XSD schema, but there are user-defined ones; find one and put it in the VS schema folder (e.g. C:\Program Files\Microsoft Visual Studio 9.0\Xml\Schemas) to give you IntelliSense
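To illustrate the back-tick notation from the list above, a generic service in XML uses the CLR type-name form rather than angle brackets (the ids and types here are made up for the example):

```xml
<component id="StringRepository"
           service="MyApp.IRepository`1[[System.String, mscorlib]], MyApp"
           type="MyApp.StringRepository, MyApp" />
```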

That is the last in the 101 series for now. Soon I would like to cover:

  • Interception and how you can use AOP with the Castle framework
  • Runtime config without XML (Binsor)

Rhys

Back to Post 3

Castle: Control Your Dependents!

Post 3 in a 101 series on the Castle IoC project called Windsor. See part one for background

What About the Dependencies?

An IoC container is pretty much no use if it does not resolve a component's dependencies, so we will show you how this can work.

We introduce a new component, the ListWriter, which uses constructor-based dependency injection to acquire an IMessageService. This dependency is used when writing our message:

class ListWriter : IWriter
{
    private readonly IMessageService messageService;

    public ListWriter(IMessageService messageService)
    {
        this.messageService = messageService;
    }

    public void Write()
    {
        foreach (var message in messageService.GetMessages())
            Console.WriteLine(message);
    }
}

interface IMessageService
{
    IEnumerable<string> GetMessages();
}

class GuidMessageService : IMessageService
{
    public IEnumerable<string> GetMessages()
    {
        for (int i = 0; i < 5; i++)
        {
            yield return Guid.NewGuid().ToString();
        }
    }
}

Because we have established an obvious dependency by exposing a constructor parameter, the container is smart enough to know that it is going to have to find a registered component that fits that service. This means we also have to register the new service that we depend on:

class Program
{
    static void Main(string[] args)
    {
        var container = new WindsorContainer();
        container.AddComponent<IWriter, ListWriter>();
        container.AddComponent<IMessageService, GuidMessageService>();
        var writer1 = container.Resolve<IWriter>();
        writer1.Write();
        Console.ReadKey();
    }
}

Which, when run, will resolve the IWriter service as the ListWriter component, realise the component cannot yet be constructed as it has a dependency on the IMessageService, resolve that (as the GuidMessageService component we registered on the third line of the method), inject it into the ListWriter component and then call Write(), giving the expected output: 5 random GUIDs in their string representation:

122eb0e3-0812-4611-a580-7cf976039a89
5a916589-f7a5-4121-9fb6-a51cf24a6f43
977ec6f9-8110-4074-b537-047f21e6a70f
a136ae03-0caf-49fb-8511-92e3f2d39446
3aa1db17-78c8-438f-84a9-91ef14753ca7

This shows that the container can now worry about creating objects with complex dependencies and deep object graphs, allowing you to have a loosely coupled application, ready for any change that may come your way.

Next up: Using XML configuration

Back to Post 2

Castle: Pick Your Lifestyle

Post 2 in a 101 series on the Castle IoC project called Windsor. See part one for background

Lifestyle Management

Often you will want to control the lifestyle of the objects you create; a common but overused lifestyle, for example, is the singleton. Using an IoC container means you do not have to manage this in your code, which means other dependencies don't have to implicitly know they are dealing with a singleton (or whatever lifestyle is being implemented). This is good: it drastically cleans up production code and makes it a lot easier to test.

So let's have a look at a new implementation. Here we have the MultiWriter: a concrete implementation (or component) that tells you how often this instance has greeted you.

class MultiWriter : IWriter
{
    private int timesGreeted = 0;

    public void Write()
    {
        Console.WriteLine("I have greeted you " + timesGreeted + " time(s) before");
        timesGreeted++;
    }
}

We change the Program class to register our new component and resolve the service a couple of times to see the output:

using System;
using Castle.Windsor;

namespace CastleIocConsole
{
    class Program
    {
        static void Main(string[] args)
        {
            var container = new WindsorContainer();
            container.AddComponent<IWriter, MultiWriter>();
            var writer1 = container.Resolve<IWriter>();
            writer1.Write();
            var writer2 = container.Resolve<IWriter>();
            writer2.Write();
            var writer3 = container.Resolve<IWriter>();
            writer3.Write();
            Console.ReadKey();
        }
    }
}

The output is somewhat unexpected for newcomers:

I have greeted you 0 time(s) before
I have greeted you 1 time(s) before
I have greeted you 2 time(s) before

Even though we have resolved the service each time to a new variable, the count is incrementing. This is because Castle resolves types as singletons by default. What we can do, if this lifestyle is inappropriate, is specify that we want the component to have a different lifestyle, transient for example, which gives the same behaviour as calling the constructor of the component each time we resolve the service (i.e. var writer = new MultiWriter()). This requires a slightly different way of registering our components: use the AddComponentWithLifestyle method and the Castle.Core.LifestyleType enumeration to select the desired lifestyle.

using System;
using Castle.Windsor;
using Castle.Core;

namespace CastleIocConsole
{
    class Program
    {
        static void Main(string[] args)
        {
            var container = new WindsorContainer();
            container.AddComponentWithLifestyle<IWriter, MultiWriter>(LifestyleType.Transient);
            var writer1 = container.Resolve<IWriter>();
            writer1.Write();
            var writer2 = container.Resolve<IWriter>();
            writer2.Write();
            var writer3 = container.Resolve<IWriter>();
            writer3.Write();
            Console.ReadKey();
        }
    }
}

The output is now the expected:

I have greeted you 0 time(s) before
I have greeted you 0 time(s) before
I have greeted you 0 time(s) before

This is because each time we resolve, a brand-new instance is created. The other lifestyle I like is PerWebRequest (for obvious reasons).

Next we control our dependencies

Back to Post 1

Castle: The .Net framework of choice

Post 1 in a 101 series on the Castle IoC project called Windsor

I have long been an advocate of the Castle stack, stemming from my days at Change Corp. At the time we were using Spring.Net for our IoC container but were also using NHibernate. I noticed that we had Castle DLLs in the bin and realised that NHibernate was using them for object creation. I looked into the Castle project and, despite its lacklustre documentation, was able to get IoC working pretty quickly. One of the things I personally like is that you do not NEED to use XML for your container registration (i.e. defining which concrete type to use for which abstract request). On top of this, the MonoRail project (the original MVC framework for ASP.Net) and ActiveRecord (the AR pattern sitting on top of NH) struck a chord as being particularly handy tools.

Years later the stack remains pretty much the same. There are some offshoot developments, but by and large the same stuff I went to Castle for is why I still like to use it… well, at least I thought.

Yesterday when writing the DI blog post I realised I had not directly used Castle in over a year. I have a wrapper that I use to abstract my interaction with the container, and the OSS libraries that I use/play with hide the details too (MassTransit, SutekiShop etc). So I have decided to give a crash course for myself and the rest of the world on how to set up and exploit some of the more basic but super handy Castle features.

Castle, IoC And You: The set up

First and foremost you need to download the latest Castle libraries. Currently these are RC3, last updated in Sep 07. Get them here.

Install the binaries; these will be jammed in the GAC for safe keeping. You can also use the straight DLLs if you need to (for source control etc.), just make sure to reference everything you need (i.e. secondary dependencies).

Right, with that sorted we are able to do the world's silliest IoC demo.

Create a console application and include references to Castle.Core.dll, Castle.MicroKernel.dll, Castle.Windsor.dll & Castle.DynamicProxy.dll.

I have created a very basic interface called IWriter which has one method: void Write().

I have two classes that implement it: HelloWriter and GdayWriter. It is probably a good time to note that in the Castle world a “service” generally refers to the interface or abstract type that you are calling, and a “component” is a concrete implementation. So IWriter would be our service, and HelloWriter and GdayWriter are considered components.

interface IWriter
{
    void Write();
}

class HelloWriter : IWriter
{
    public void Write()
    {
        Console.WriteLine("Hello World");
    }
}

class GdayWriter : IWriter
{
    public void Write()
    {
        Console.WriteLine("G'day world!");
    }
}

Right, these are not the most useful of classes, but they will help show you how to use Castle to control the creation of dependencies. In our Program class place the following:

using System;
using Castle.Windsor;

namespace CastleIocConsole
{
    class Program
    {
        static void Main(string[] args)
        {
            var container = new WindsorContainer();
            container.AddComponent<IWriter, HelloWriter>();
            var writer = container.Resolve<IWriter>();
            writer.Write();
            Console.ReadKey();
        }
    }
}

Hit F5 to debug and see that

Hello World

was output, as the HelloWriter component was resolved and its implementation of Write was called on the second-to-last line, before Console.ReadKey().

Next up we show how to get named instances of concrete types. Say, for example, you have a default type but on the odd occasion need a different one; instead of breaking the notion of DI and using concrete dependencies, you can ask for an implementation by key. In this example we register two concrete types against the same interface; however, one has a key specified.

static void Main(string[] args)
{
    var container = new WindsorContainer();
    container.AddComponent<IWriter, HelloWriter>();
    container.AddComponent<IWriter, GdayWriter>("gday");
    var writer1 = container.Resolve<IWriter>();
    var writer2 = container.Resolve<IWriter>("gday");
    Console.Write("writer1 output: ");
    writer1.Write();
    Console.Write("writer2 output: ");
    writer2.Write();
    Console.ReadKey();
}

The output is:

writer1 output: Hello World
writer2 output: G'day world!

I find that most of the time I expect a default implementation, so I do not explicitly set a key at registration time, but it's nice to know it's there when you need it.

Next we manage LifeStyle

Real World Dependency Injection

On Thursday I gave a talk on Real World TDD at the Perth .Net Community of Practice. I'm not sure what people were expecting, but the turnout was incredible: standing room only. Hopefully I gave the punters what they were hoping for. A lot of secondary topics were raised, one being the notion of design and allowing TDD to help shape good design by following the SOLID principles. Dependency Injection was probably the most notable, and for some people this may have been somewhat unusual. This article hopes to explain why we use DI and how we can use it in the real world.

What is Dependency Injection

Dependency Injection, or dependency inversion, is the idea that we depend on abstractions, not concrete implementations. The example I gave in the talk was a trivial one using a logger. Most of us use some sort of logging in our code, so a lot of our code has a concrete dependency on a logging class, e.g.:

public void DoSomething()
{
    Logger.Log("Entering Do Something");
    //Do the thing you wanted
    Logger.Log("Exiting Do Something");
}

Why do we use it

In the example above, if we wanted to change the logger we used, we would have to go into this code and change it; in fact we would most likely have to change every method call for something as prolific as logging. That is a lot of changes. Not only this, but the reuse of this code without DI is limited, and you have to pass around the concrete logger as a reference to anything that wants to use it. This is not a big deal for logging, but what if you had some potentially reusable components? Those concrete dependencies will become painful, reducing reusability very quickly. An example that parallels DI that many .Net developers will be familiar with is the Plug-In pattern. ASP.Net membership is an example of this, in which we specify what type of provider we are going to use for membership. Our code doesn't change; all we need to do is change the config in one place and the implementation is swapped.
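To make that concrete, swapping the membership implementation is just a web.config change; the provider name and connection string name below are illustrative:

```xml
<membership defaultProvider="SqlProvider">
  <providers>
    <clear />
    <!-- change this one entry to swap the implementation application-wide -->
    <add name="SqlProvider"
         type="System.Web.Security.SqlMembershipProvider"
         connectionStringName="MembershipDb" />
  </providers>
</membership>
```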

When do we use it

I will use DI anywhere I have a dependency that is not static. (This may not be the best rule of thumb, but it is how I do it; the number of static helper classes I have is minimal, so this is not a big deal for me.)

Take, for example, the MVP pattern: the presenter has a dependency on a view and on the model. The view, however, does not have a dependency on the presenter (despite what many coding samples out there may say), and the model does not have a dependency on the presenter, but may depend on other models, data access components, services etc. Assume we have decided that the presenter is not in a valid state without these dependencies; we therefore decide that they should be injected as constructor arguments.

public class FooPresenter
{
    private readonly IFooView view;
    private readonly IFooModel model;

    public FooPresenter(IFooView view, IFooModel model)
    {
        //DBC checks e.g. null checks etc
        this.view = view;
        this.model = model;
    }
    //rest of the presenter
}

Note: by stipulating that the dependencies are read-only, they must be assigned by the end of the constructor. To me, this helps clarify the intention, and I tend to use it over private setters, e.g.:

//I don't like this for constructor dependencies,
//but it is valid
public IFooView View { get; private set; }

The presenter is now only dependent on the interfaces of the view and model. We can change from WinForms to WPF without any dramas; we can move from a web-service-based model to an EF model… we don't care about how the dependencies do their job, they just have to adhere to the contract that is the interface.

To me this is a critical part of TDD. It allows me to focus on the task at hand, e.g. writing the presenter, without having to worry about coding the model and view at the same time; all I have to do is create the interfaces they have to adhere to while I create the presenter. This is typically done all at the same pace. I may decide I need to raise an event from the view, say “AddFooRequest”. I create a test to listen for that event in the presenter test, create the event on the view interface, create a handler for the event in the presenter and wire it up. I have not written any concrete code in the view, just the view interface, the concrete presenter and its test fixture. This to me is already a huge benefit of DI.
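The event wiring just described might look something like this (a sketch only; the names are the hypothetical Foo types from this example):

```csharp
using System;

public interface IFooView
{
    //the view raises this; the presenter handles it
    event EventHandler AddFooRequest;
}

public class FooPresenter
{
    private readonly IFooView view;

    public FooPresenter(IFooView view)
    {
        this.view = view;
        //wire the view's event to the presenter's handler
        view.AddFooRequest += OnAddFooRequest;
    }

    private void OnAddFooRequest(object sender, EventArgs e)
    {
        //handle the add request; testable without any concrete view
    }
}
```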

How do we use it

The first problem people run into when using DI is when they try to plug the pieces together. Basically, we have stated that the object should not know about its concrete dependencies, but something has to! Is it the object that is calling it? It would be silly if presenter A did not know about its own dependencies, but knew about presenter B's dependencies so it could construct presenter B when navigating to it… There are two realistic ways around this: “Poor Man's DI” and “Inversion of Control” containers.

Poor Man's DI is a way of saying “I don't want to code to implementations, but I will anyway”. It's actually a pretty good place to start when DI is still a relatively new concept. You basically have defaults in your class and allow them to be overridden, e.g.:

public class FooPresenter
{
    private readonly IFooView view;
    private readonly IFooModel model;

    //Poor man's DI
    public FooPresenter()
        : this(new FooView(), new FooModel())
    {}

    //Complete ctor allowing for DI
    public FooPresenter(IFooView view, IFooModel model)
    {
        //DBC checks e.g. null checks etc
        this.view = view;
        this.model = model;
    }
    //rest of the presenter
}

The problem with this is that if you are using DI everywhere then (in this example) the presenter has to know how to create the model, or the model has to use Poor Man's DI too.

A better option is to have one place that knows how to construct things, like a big object factory. This is your Inversion of Control container, usually a third-party API that becomes a critical part of your framework. Such containers are godsends in large projects and allow for changing dependencies across whole solutions very quickly and cleanly. The common containers are StructureMap, Castle (Windsor), Spring.Net, Unity, Ninject etc. At first you will most likely use a tiny portion of these containers' capabilities, generally resolving and setting the concrete type to construct for a given abstraction, e.g.:

//set defaults for the whole application
Container.AddComponent<IFooPresenter, FooPresenter>();
Container.AddComponent<IFooView, FooWinFormView>();
Container.AddComponent<IFooModel, FooNhibernateRepository>();

//Retrieve a concrete instance from the container
var presenter = Container.Resolve<IFooPresenter>();

The Container.Resolve call on the last line asks for the concrete instance of IFooPresenter. We have defined that above as being the concrete FooPresenter. We know that FooPresenter has dependencies on IFooView and IFooModel, so the container will look for registered components for those too and try to instantiate them. We have specified that we want the container to create a FooNhibernateRepository any time we ask for an IFooModel and a FooWinFormView any time we ask for an IFooView. Most IoC containers tend to take the greediest constructor (the one with the most parameters), so you can even migrate from Poor Man's DI to an IoC implementation quite smoothly.

NB: Container here is a generic wrapper that I use, as I tend to use a different container depending on the employer/contract I work on; I use an adapter pattern to isolate me from the varying syntax. If this approach appeals, check out the Common Service Locator at CodePlex.
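Such a wrapper can be as small as a static facade over a couple of delegates. This is only a sketch of the idea, not the actual wrapper mentioned above; the class and member names are illustrative:

```csharp
using System;

// Minimal sketch of a container-agnostic facade. At application start-up
// the chosen IoC container (Windsor, StructureMap, etc.) supplies the two
// delegates; the rest of the code base only ever talks to this class.
public static class Container
{
    private static Func<Type, object> resolve;
    private static Action<Type, Type> register;

    public static void Configure(Func<Type, object> resolveFunc,
                                 Action<Type, Type> registerAction)
    {
        resolve = resolveFunc;
        register = registerAction;
    }

    public static void AddComponent<TService, TComponent>()
        where TComponent : TService
    {
        register(typeof(TService), typeof(TComponent));
    }

    public static TService Resolve<TService>()
    {
        return (TService)resolve(typeof(TService));
    }
}
```

Swapping containers then means rewriting only the Configure call, not every registration site.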

I hope this helps people move towards better-designed software and an easier-to-use, loosely coupled API.

Rhys

Presenter Tests 101

Want to write a test to make sure the presenter loads data from a service/model/repository to the view?

IFooEditView view;
IFooService service;

[TestInitialize]
public void MyTestInitialize()
{
    view = MockRepository.GenerateMock<IFooEditView>();
    service = MockRepository.GenerateMock<IFooService>();
}

[TestCleanup]
public void MyTestCleanup()
{
    //global assertion
    view.VerifyAllExpectations();
    service.VerifyAllExpectations();
}

[TestMethod]
public void CanInitialiseFooEditPresenter()
{
    //arrange
    var id = 1;
    var record = new Foo(id);
    view.Expect(v => v.LoadRecord(record));
    service.Expect(s => s.RetrieveFooRecord(id)).Return(record);
    //act
    var pres = new FooEditPresenter(view, service);
    pres.Initialize(id);
    //assert – mock verification in tear down
}

NB: I really should stop posting code by writing straight into the Blogger create-post screen. Just lazy…

PowerShell to save the day!

I have been doing a fair bit of build-script work over the last couple of months. I guess it started when we were having big problems with the build process at my last contract. (I had been using NAnt for about a year prior, but it never really did anything other than clean, rebuild and run my tests. That's cool; it's all it needed to do.)
We really needed to look at our build process, as it took about 3 hours to do a deploy, which we were doing up to 2 times a week… 6 hours a week of a London-based .Net contractor: that is some serious haemorrhaging of cash. I started really looking into build servers and properly configuring build scripts. Most places I work at are very M$-friendly and not overly fond of OSS, so I tend to be stuck with MSBuild if it is a shared script. So goodbye NAnt.
Fast forward to a few weeks ago: I have moved country and company and am working with a great team of developers who are incredibly pragmatic and receptive to new or different ideas. We set up a build server and installed JetBrains TeamCity pointing at VSS, with a basic MSBuild script that was a port of my NAnt script. It worked; it did what we needed, which was take what was checked in, rebuild, test, send a zip of the output to a network folder and let us know whether the whole process succeeded. Simple and sweet.
Enter ClickOnce. Ahhh. OK, so ClickOnce is a great idea in that it manages your company's deployments of smart-client software. No longer do you have to worry whether users are on the correct version of your software; the latest will always be on their machine. Personally I think this is a great idea and can see why managers would love it. It's also really easy to deploy… if you are using Visual Studio… and if you only have one deployment environment. Unfortunately I don't want to use VS (I want to do this from a build server using a potentially automated process) and we deploy to Dev, Test, UAT and Prod. MSBuild really struggles when it comes to this… it basically just can't do it.
The biggest problem was I needed to be able to change assembly names so the ClickOnce deployments don't get mixed up (I want to be able to install Test and Prod on the same box). Changing the exe assembly name in MSBuild changes all the assembly names, which is no good.
After struggling with MSBuild I realised I was hitting the limits of what MSBuild is supposed to do; it was either change my approach or enter hack town.
Initially I thought Boo, Python or Ruby would be my saviours… then quickly rethought. Although they would be good in MY mind, other people have to use this, and those options are not real M$-friendly… yet. I don't know why I didn't think of it earlier, but PowerShell was the obvious answer. I downloaded PowerShell and after playing with it for a couple of minutes I was super impressed. All the stuff I was struggling with in my bat files or my MSBuild scripts was trivial in PowerShell.
Variable assignment, loops, switches etc. are all trivial. It extends .Net, so you can handle exceptions and interact with web services, Ado.Net, Active Directory… the sky is the limit.
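As a taste, the sort of per-environment loop that was painful in a bat file is a few lines of PowerShell (the names and paths here are made up for the example):

```powershell
# Loop over deployment environments, branch on one, and catch .Net exceptions
$environments = @("Dev", "Test", "UAT", "Prod")
foreach ($environment in $environments) {
    $assemblyName = switch ($environment) {
        "Prod"  { "MyApp" }
        default { "MyApp.$environment" }
    }
    try {
        Write-Host "Building $assemblyName for $environment"
        # a real script would shell out to msbuild / mage here
    }
    catch {
        Write-Host "Build for $environment failed: $_"
    }
}
```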

Anyway, if you haven't played with PS, go download it, get the manual and get the cheat sheets.

Documents
http://www.microsoft.com/downloads/details.aspx?FamilyId=B4720B00-9A66-430F-BD56-EC48BFCA154F&displaylang=en

Cheat Sheets
http://blogs.msdn.com/powershell/attachment/1525634.ashx
http://refcardz.dzone.com/assets/download/refcard/5f78fd3b70e077cb9a5b3782356a8a14/rc005-010d-powershell.pdf

And Check out PSake from James on codeplex if you are keen on incorporating PS into your build cycle.

Rhys

NB: I hope to post my revised ClickOnce build strategy… as my last one was a bit of a failure; sorry if I led anyone astray.

EDIT: Check out PowerShell GUI for a nice free IDE

Please be aware of P&P guidance

First and foremost, I think the idea of a P&P team at M$ is a good thing. It is a team that is supposed to give guidance on how to use proven practices to build enterprise-grade solutions. Unfortunately that is not always the case. Normally I wouldn't care if someone was giving dodgy advice; however, when it is a team that people follow blindly, it can be a bit aggravating.
I personally have nothing to add to the mess that is P&P. I have used various things that have come out of P&P, including EntLib from 1 to 4.1 and guidance packages such as SCSF. Largely they are not best of breed but “enough to get you by”. 99% of the time there is an OSS version that is better (Castle, Log4Net/NLog), and when there is not something I want, I create my own rather than overuse the out-of-the-box products. Ironically, I use GAX and GAT to create my own software factory, which I feel provides a much more usable version of SCSF/CAB for the majority of users and projects out there.
Well, apparently they have done it again… I rarely even read what comes out from them now; my disillusionment is such that I don't think it is worth my time. Sebastien, AKA SerialSeb, has put out a post recently that highlights his concerns. Really, a lot of it is semantics; however, when you have proclaimed to the world that you are an authority on a subject, as P&P have, then those semantics actually matter. If you muddy the means of communicating your message then your intent can be interpreted in many ways. In this case I don't think P&P are trying to be ambiguous or abstract; I just think they don't have the level of understanding of the problem space that is required when giving this type of advice. For example, I understand agile and I use it, but that does not make me an authority on the subject, and therefore it is not appropriate for me to give best-practice advice on it. P&P guidance is perceived by the masses to be just that. My stance now, unfortunately, is to assume the worst and hope for the best when it comes to M$ or P&P guidance, because they have got it wrong so often.
How can this be changed?
I have to be honest: my understanding of the inner workings of P&P is pretty much nothing. I have met a bunch of the guys; they are ALL super nice, friendly and generally knowledgeable about M$ and .Net stuff.
But I don't want any of those things. All I want is for them to be super experts in architecture, design patterns and frameworks: specifically the field they are giving advice on. If they have not personally rolled out production code using those best practices and had it peer-reviewed (a peer is not the guy you wrote it with), then how can it be Proven Best Practices?

*The following is all opinion; do not construe any of this as me stating facts*
Smart Client Software Factory to me was a mistake. It should have been preceded by a lightweight application framework, and the SCSF should have been kept for the M$ consultants. Huge red writing should be placed all over the download page telling you this is a big, bloated framework. Every project I have come across that has used it has failed. Miserably. Why? Because it was decided that M$ had “recommended” SCSF and it was therefore best practice. The fact that neither M$ nor P&P ever actually recommended it is beside the point; in fact many times they recommend you seriously consider whether it is the best option. Unfortunately, the people who make the decisions on what software I use are not coders, don't like reading docs and have more faith in M$ than in their own team. They also have egos and believe their project is a big enterprise system that needs the biggest and best framework. I find this is not the exception; it is the rule. It usually takes months of “good behaviour” at a new contract for employers to have faith in my decision-making skills. By then we are usually up to our eyes in whatever technical decision management made for us many months ago. SCSF, EntLib and EF are a few of the pain points I have had to bear.

So if you are embarking on a new greenfields project and the mentality at your firm is still M$ == best of breed, please rethink and make an educated decision on your tool of choice. It may be the case that the M$ product is the best choice for you, but then again you won't know that unless you do a bit of research. As for P&P, it really is time to pull your socks up. Like anything in life, if you don't know, ask. Don't publish junk… please.