MassTransit Host & Setup

[Intro]

The glue that ties together all of MassTransit's moving pieces has to be applied when starting your application. We need to configure the service so it knows what to start up, how to find it and in what context to run it.

MassTransit has split the host service into a separate project, namely TopShelf. You will see TopShelf being used to set up our MassTransit programs in the entry points of our applications, typically in the Program.Main(string[] args) method.

The basic set up steps for creating a runner configuration are:

  • Describe the Service
  • Instruct how the service will be run
  • Configure the service

Once you have done this, the TopShelf Runner can host the service.

Describing the Service means giving the service a name, a display name and a description. The display name and description are visible from the Service Control Manager, while the service name is intended for command line interactions.

Instructing how the service will be run: define any known dependencies (MSMQ, IIS, SqlServer etc), any actions that should be performed prior to running the service/host, and also how the service is to be run, i.e. what credentials the service will run under. We can also use UseWinFormHost&lt;T&gt;, where we supply the WinForm type that acts as the host. This is great for demos, but I am not sure if it is intended for production use… Chris and Dru may care to comment on this; either way it is handy when getting to terms with the stack.

Next we need to configure the service(s) we are hosting. In here we can define delegates for certain events in the service life cycle (WhenStarted, WhenStopped etc) and we can also weave some of our IoC voodoo magic by defining our service locator. Again the authors have decided to use Castle Windsor for the sample, however I believe you can use any of the CommonServiceLocator containers. As this method needs to return something that implements IServiceLocator, using the DefaultMassTransitContainer type makes life a little easier as it does a fair bit of the plumbing for you, including setting the current service locator to itself.

[STAThread]
private static void Main(string[] args)
{
    //from Starbucks.Barista.Program.Main(string[] args) - modified for readability
    var cfg = RunnerConfigurator.New(configurator =>
    {
        //Describe the Service
        configurator.SetServiceName("StarbucksBarista");
        configurator.SetDisplayName("Starbucks Barista");
        configurator.SetDescription("A Mass Transit sample service for making orders of coffee.");

        //Instruct how the service will be run
        configurator.DependencyOnMsmq();
        configurator.RunAsFromInteractive();
        configurator.BeforeStart(a => { });

        //Configure the service(s)
        configurator.ConfigureService<BaristaService>(serviceConfigurator =>
        {
            serviceConfigurator.CreateServiceLocator(() =>
            {
                //Use MassTransit's built in container (Castle Windsor specific), described earlier
                IWindsorContainer container = new DefaultMassTransitContainer("Starbucks.Barista.Castle.xml");

                //Add the components to the container
                container.AddComponent("sagaRepository", typeof(ISagaRepository<>), typeof(InMemorySagaRepository<>));
                container.AddComponent<DrinkPreparationSaga>();
                container.AddComponent<BaristaService>();

                //Tracing - not super important in this context
                Trace.Listeners.Add(new TextWriterTraceListener(Console.Out));
                StateMachineInspector.Trace(new DrinkPreparationSaga(CombGuid.Generate()));

                //Return the current ServiceLocator, which has been assigned in the DefaultMassTransitContainer ctor
                return ServiceLocator.Current;
            });
            //Define delegates (specifically service methods) to fire on given ServiceConfigurator events
            serviceConfigurator.WhenStarted(baristaService => baristaService.Start());
            serviceConfigurator.WhenStopped(baristaService => baristaService.Stop());
        });
    });
    Runner.Host(cfg, args);
}

MassTransit End Points

[Intro]

Many people will be familiar with the notion of an "End Point", especially those who use WCF or other web service frameworks. An end point is "the entry point to a service, a process, or a queue or topic destination". My WCF background has had the ABC drilled into me (Address, Binding and Contract) as the 3 things that basically define an end point. MT is pretty much the same. Also like WCF, the endpoints are a configuration aspect of the solution, so it seems valid to put this information in a config file. The MT boys are clearly Castle fans (although other IoC frameworks can be used) and in most of the samples they have chosen to use Castle Windsor to configure the endpoints.

SIDE NOTE: For those unaware of Castle Windsor (an IoC implementation), it allows you to write loosely coupled code and specify the concrete implementation detail via config, a little bit like the Asp.Net Membership Provider, which is a plug in pattern. Using MT without understanding IoC may prove to be difficult… in fact I would say you are almost certainly biting off more than you can chew. Look into the Castle stack, it really is a great OSS framework to help pick up good habits.

Moving on…

The defining of the endpoints should not be confused with the Castle implementation; it is just as easy to do this in code. Anyway, let's walk through a typical Castle config file for MT:

From the Starbucks Sample (Starbucks.Customer.Castle.xml):

<facility id="masstransit"
          type="MassTransit.WindsorIntegration.MassTransitFacility, MassTransit.WindsorIntegration">
  <bus id="customer"
       endpoint="msmq://localhost/mt_client">
  </bus>
  <transports>
    <transport>MassTransit.Transports.Msmq.MsmqEndpoint, MassTransit.Transports.Msmq</transport>
  </transports>
</facility>
First and foremost this is a Castle config. The name of the file "Starbucks.Customer.Castle.xml" is a pretty good hint, and I know "facilities" is a Castle concept. MassTransit have embraced the concept of facilities, which you can investigate here. MassTransit have their own facility, namely MassTransit.WindsorIntegration.MassTransitFacility, which helps us get up and running without having to know about all the plumbing. In this MassTransit specific facility we define the Bus and the Transports. The transports child node is equivalent to our "Binding"; it is essential so we know what transport mechanism to use. You will see the standard .Net notation for expressing a type in XML, i.e. "Fully.Qualified.Namespace.TypeName, Assembly.Name". This type must implement the interface MassTransit.IEndpoint. Currently there are adapters for MSMQ, NMS, Amazon SQS and WCF.

The other child node in the facility defines the Bus. Here we give the Bus an identifier and its end point. These are both mandatory. The end point will be the URI the bus will receive communication from when the application publishes a message. The id suggests that multiple buses can be configured, which indeed they can. The bus can also have several child nodes, specifically:

  • controlBus
  • dispatcher
  • subscriptionService
  • managementService

The Control Bus is involved in managing the disparate system. For example the Starbucks sample uses a control bus to manage the interaction amongst the server side consumers: the Cashier and the Barista. For more info on a control bus see page 540 of Enterprise Integration Patterns.

The Dispatcher is a means to control the use of threads. High volume message interaction can be handled using multithreading, specifically via the attributes maxThreads and readThreads, both of which are self explanatory integer values.

The Subscription Service is the common service that provides an endpoint for subscriptions. The only value required is the end point attribute.

The Management Service allows for specifying a heartbeat monitor to check the health of your services' queues. The samples use the SubscriptionManagerGUI to show the queues that are being listened to and the health of the subscriptions.

I do not believe any of these bus child nodes are mandatory; from looking into the code the only requirements are that the bus must have an id and end point, and the facility must have a defined transport.
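To give a feel for the shape, a bus node with some of the optional children filled in might look something like the following. Treat this as a sketch only: the dispatcher and subscriptionService attributes come straight from the descriptions above, but the queue names are made up and I have not confirmed the controlBus and managementService attributes, so they are left as a comment.

<bus id="customer"
     endpoint="msmq://localhost/mt_client">
  <subscriptionService endpoint="msmq://localhost/mt_subscriptions" />
  <dispatcher maxThreads="4" readThreads="1" />
  <!-- controlBus and managementService nodes would sit alongside these -->
</bus>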

There are a couple of notes for newcomers to Castle and MassTransit. Like most config files, the XML file shown above should have its build action set to "Content, Copy Always". The queues that each service uses also need to be set up (e.g. in MSMQ) before they can be used. Luckily the exception handling in MassTransit is pretty good and will let you know when a required endpoint is not set up; just be sure to read the queue name correctly. I spent about 15 minutes trying to figure out why a sample subscription was failing when the exception was saying I had not set up "mt_server1". I thought it was saying "mt_server". If in doubt, read the exception! We will cover how the Castle config is tied up in the Host and End Points post.

End points and their configuration may be a bit tricky for newcomers, but if you break each piece down it becomes more manageable.

MassTransit Publishers

[Intro]

So we feel we have something that the world needs to know about: we have messages to publish. This is what kicks off the events that make up the Pub/Sub system. The IT division have told you they are sick of modifying the HR application to call a growing number of web services to let those services know about new or updated employee information. You decide this may be a good candidate for some Pub/Sub love. We will start with new employees: firstly we need to create a suitable message to publish, say "NewEmployeeNotificationMessage", which holds all the relevant info. As part of the creation process all we need to do is create a message of the given type and publish it.
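A minimal sketch of what such a message might look like (the properties here are illustrative; the real message carries whatever the subscribers need to know):

[Serializable]
public class NewEmployeeNotificationMessage
{
    private readonly string _employeeId;
    private readonly string _name;

    public NewEmployeeNotificationMessage(string employeeId, string name)
    {
        _employeeId = employeeId;
        _name = name;
    }

    //immutable dumb DTO: just data, no behaviour
    public string EmployeeId
    {
        get { return _employeeId; }
    }

    public string Name
    {
        get { return _name; }
    }
}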

var message = CreateNewEmployeeMessage();
_serviceBus.Publish(message);

That is it. Well… it's not, but as far as the publishing code goes that is all there is to it. There is a little bit of infrastructure set up that goes on at start up, but to publish a message really is that simple.

There are times where you may want to hear back from a subscriber that sends a response; this can be done by setting a response address in a delegate as part of the publish, e.g.:

_bus.Publish(message, x => x.SetResponseAddress(_bus.Endpoint.Uri));

If a response is expected then the service publishing the message should also be a consumer of the response message type; see the consumers post.

The bus is a MassTransit.IServiceBus that is injected into the service. We will cover setting up the bus later on in the series.

* This may be a bit of an over the top example. If you are building enterprise wide services and integrating systems, perhaps MT is a little too lightweight; judge for yourself. Personally I am angling at using it for intra component messaging.

MassTransit Consumers/Subscribers

[Intro]

A messaging system does not make a lot of sense if no one or nothing is listening, consuming or subscribing to those sent messages. If you are interested in a particular event that a message represents, then you subscribe to that event.

Continuing on with the idea of a new employee at a company, let's assume that head office has decided that all staff members must do a new online intranet based safety course, and any new employees must do the safety course as part of their induction. We can create this online application and send out the notifications to all existing staff, but how do we ensure all new staff do the course? Well, we know that HR publishes a New Employee Notification when an employee joins the company, so we decide to subscribe to that message; our application then notifies the new employee and his supervisor that this course must be completed as part of their induction.

Ok, so how do we do this in MassTransit?

Well, one option is to create a consumer: a service that subscribes to the message and acts on it when it arrives.

public class NewEmployeeService :
    Consumes<NewEmployeeNotificationMessage>.All, IDisposable
{
    private IServiceBus _serviceBus;
    private UnsubscribeAction _unsubscribeToken;

    public void Consume(NewEmployeeNotificationMessage message)
    {
        //Notify user and supervisor of course requirement
    }

    public void Dispose()
    {
        _serviceBus.Dispose();
    }

    public void Start(IServiceBus bus)
    {
        _serviceBus = bus;
        _unsubscribeToken = _serviceBus.Subscribe(this);
    }

    public void Stop()
    {
        _unsubscribeToken();
    }
}

A couple of things to note here:

The NewEmployeeService implements the "Consumes<NewEmployeeNotificationMessage>.All" interface. This means we are subscribing to any published message of type T, in this case NewEmployeeNotificationMessage. By doing so we must implement Consume(T message); this is the method that will be called when the message arrives. Start and Stop are methods we have defined that get called when the host starts up the hosting service (we will cover this in later posts). More importantly, and something that may not be obvious, is the unsubscribe token. When subscribing to the bus, the Subscribe method returns an UnsubscribeAction delegate that can be called when the subscription is no longer required. Therefore calling this delegate on the stopping of the service would be a good idea 🙂

A service can subscribe to many messages by specifying and implementing more of the consume interfaces; as it is not a base class you are not limited to single inheritance. So you may want to define the class as:

public class NewEmployeeService :
    Consumes<NewEmployeeNotificationMessage>.All,
    Consumes<EmployeeDetailsUpdatedMessage>.All //the second message type is illustrative
{
    //...etc
It is also worthwhile to note that the message can be responded to:

CurrentMessage.Respond(responseMessage);

This will send the message back to the response address specified by the client; see the Starbucks example: CashierSaga.ProcessNewOrder(..) and OrderDrinkForm. NB: The OrderDrinkForm also implements the consume interface for the response message, otherwise it would not know what to do with the message.
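To put that line in context, a sketch of a consumer replying to the publisher (the message and class names here are mine, not from the sample):

public class NewOrderService : Consumes<NewOrderMessage>.All
{
    public void Consume(NewOrderMessage message)
    {
        //process the order, then reply; the response goes to the
        //response address the publisher set when publishing
        var responseMessage = new OrderAcceptedMessage(message.CorrelationId);
        CurrentMessage.Respond(responseMessage);
    }
}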

MassTransit Messages

[Intro]

Messages are the backbone of MassTransit; without them there would not really be a need for the solution. Messages IMO should be a verb: "Customer" is not a suitable message name as it has no intent, while "NewCustomerCreated" is a more suitable name. As far as MassTransit goes, a message just needs to be a class that is marked as [Serializable]. For most scenarios I have encountered I actually want to track a specific message, i.e. I want to know its identity (which we will cover soon), so I have my message implement the interface "MassTransit.CorrelatedBy<Guid>", which gives the message a correlation id so I can track it. It is probably a good time to mention that messages are immutable dumb DTOs. I have worked on several systems now that try to ignore this and every time it has ended in trouble. The message is a trigger; it should never be the entity you are manipulating.

An Example from the MassTransit Pub/Sub Sample is below:

[Serializable]
public class RequestPasswordUpdate :
    CorrelatedBy<Guid>
{
    private readonly string _newPassword;
    private readonly Guid _correlationId;

    public RequestPasswordUpdate(string newPassword)
    {
        _correlationId = Guid.NewGuid();
        _newPassword = newPassword;
    }

    public string NewPassword
    {
        get { return _newPassword; }
    }

    public Guid CorrelationId
    {
        get { return _correlationId; }
    }
}

Using the correlation id means that later on, when I want to listen for associated messages, I can. This will be covered in [Consumers/Publishers].
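As a taste of what this enables, and going from memory of the MT source (so treat the shape as indicative rather than gospel): alongside Consumes<T>.All there is a Consumes<T>.For<K> interface, which only delivers messages whose correlation id matches the one the consumer exposes:

public class PasswordUpdateService :
    Consumes<RequestPasswordUpdate>.For<Guid>
{
    private readonly Guid _correlationId;

    public PasswordUpdateService(Guid correlationId)
    {
        _correlationId = correlationId;
    }

    //only messages correlated by this id are delivered here
    public Guid CorrelationId
    {
        get { return _correlationId; }
    }

    public void Consume(RequestPasswordUpdate message)
    {
        //act on the correlated message
    }
}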

Models inc

Recently I have noticed there seems to be confusion over what the term model means. I am happy to admit I am not the guru on all things code, but I am happy enough to put it out there that there is not necessarily one model per system. I have a feeling that a lot of this confusion has come from the MVC, MVP and DDD wave currently sweeping the world. A multitude of examples showing the possible ways of creating an MVC application will use just the one model, i.e. the controller will talk directly to the repositories and the objects retrieved are passed to the view. This scenario is fine for web applications that have little need to scale and are effectively bound to a 1-2 tier architecture. The new ASP.Net MVC + Linq to SQL is a fantastic candidate for this and allows you to get a testable solution up and running in no time.

But what if you are using WPF, with an application server using an ORM for persistence and WCF to get the info to the client? Reusing the ORM specific objects is asking for a messy solution. To me this is where it becomes very important to define your models. In this common three tier type application architecture I can immediately see 3 types of "models". First and foremost is the domain model. These are the business classes that make up the model that reflects the "business truths"; this is where the business rules are enforced at the utmost. Your ORM will interact with these domain entities and value types and map them appropriately to your persistence layer, which is most likely your relational database. This closely follows the core concepts of DDD. This is all well and good, but it is often where people stop in terms of models. Objects such as Customer and OrderLine in my mind belong within the bounds of the domain. I have been hit, as have many, by trying to send these objects to the client to be "reused". This is a bad idea.

Let's play devil's advocate and say that we will distribute these domain objects. Let's say we are also using an ORM that allows attribute defined mapping. Let's also say we wish to mark up the object the old school way, defining the object's data and member contracts for WCF. Let's also say we are using WPF and want our binding support. Straight away the class is going to be bloated with infrastructural concerns. It is going to look like a mess. What if this object is sent to the client and a property that is marked as lazy loaded is referenced? How is this object going to get that information? Is it going to jump back across the wire and get the data… just for a lazy load operation?

I am obviously pushing for something here: separate your concerns. The domain objects should remain domain objects. The objects that get passed across the wire should be DTOs. These are incredibly simple and are just data holders. The service layer will convert the domain object to the DTO depending on the operation. For example, when returning a list of objects it may not be important to send all the object information; perhaps just the Id, Name and Description would suffice. However, if you are returning a single item then it is likely that more detailed information is required. This conversion scares people off. It sounds like too much hard work. It is not. It is trivial and easily tested. Please do not use the excuse of "it is too much overhead", as this is the easiest code you will write. Further to this, you may find yourself using specific DTOs for specific service interactions. This is a good thing. Intention revealing interfaces are good. Creating these will most likely save you a lot of maintenance time later on down the track. For example you may have a CustomerDto for lists and a CustomerDetailedDto for single instances etc (possibly not the best names used here, sorry).
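To show how trivial the conversion really is, a sketch (all names are illustrative):

public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
}

public static class CustomerAssembler
{
    //the whole "overhead": a straight copy that is easily unit tested
    public static CustomerDto ToListDto(Customer customer)
    {
        return new CustomerDto
        {
            Id = customer.Id,
            Name = customer.Name,
            Description = customer.Description
        };
    }
}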

Once the DTOs are passed over to the application tier they can be used as-is, or, if there are more application specific needs than a simple DTO can provide, an Application Model can be created. This application model is, as the name suggests, specific to the application. A web application model will most likely be subtly different to a WPF application model, with infrastructural additions (i.e. data binding concerns) being the most prominent differences.

This certainly appears to be a fair bit of extra work, however you will end up with a design that is incredibly simple at each layer and very simple to maintain. Each concern now has only one reason to change and you can easily facilitate multiple people working on vertical and horizontal aspects of the stack. To me the ease of working with a stack like this and the significantly reduced maintenance costs push me to consider this approach very early on if the application is moving towards a 3+ tier design. I would strongly recommend it if you are doing the same.

MEF: The Double edged sword

I am currently investigating the workings of MEF, the forthcoming Managed Extensibility Framework that aims to allow for easy facilitation of plug-ins for frameworks that lend themselves to being open to such extensions. Visual Studio is likely to pop up and be used as a typical example, as much of what MEF is doing is to be used in VS2010 and should be a great way for the M$ lads to dog food MEF.

What I have been running into is the blurring of the lines between MEF and IoC, which I think will hit a lot of people. A large reason for this is the similarity in the usage of MEF and a typical IoC container:

var things = container.GetExportedObjects<IThing>(); //IThing is an illustrative contract

My take on MEF, and I am paraphrasing somewhat from Glenn Block, is that I will want to use MEF to help me deal with unknown components while I will let IoC deal with the known. Unfortunately what I see is the use of MEF as just another IoC container. Now the demos that are out there are trivial in nature so it is not really fair to pick them apart, but it seems that there are people out there asking things like "should I use StructureMap or MEF on my next project?"… to me that is quite an odd question, as they are not mutually exclusive.

  • MEF should be used when there may (or may not) be extensions available for your host application to consume; these parts are unknown and should be treated as such.
  • IoC should be used when there should be implementations of a given service contract* that your application needs to consume. Generally these are well defined in their contracts and it is the implementation details we are trying to separate.

Another way to look at it is that IoC should deal with the internal wiring up and MEF bolts extra stuff on. The pain I am currently feeling is how do I wire up (in terms of IoC) my extensions? The host application should certainly not explicitly know about the extension components… would I have a wiring up module in my extensions? At the moment I am almost tempted to have an export part called IoCRegistration that has an IoC container registration aspect to it that will be called on app start up…. hmmm… I will have to think about this.
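To make that idea concrete, a sketch of what it might look like; the IIoCRegistration contract is hypothetical, and the composition call just mirrors the GetExportedObjects usage above:

//the contract each extension would export
public interface IIoCRegistration
{
    void Register(IWindsorContainer container);
}

//at application start up, after composing the MEF container,
//let each discovered extension wire its own components into the IoC container
foreach (var registration in mefContainer.GetExportedObjects<IIoCRegistration>())
{
    registration.Register(windsorContainer);
}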

I really hope a lot of the dust settles from MEF with Preview 5 being released; this needs to be clearly defined prior to being released to the masses. IoC is currently a buzz word, which means it is "cool" and therefore dangerous. Once its use settles in the .Net world, sanity should prevail again. Hopefully MEF is not so close to this that it gets dragged in.

* I use the term Service in the Castle sense: the Service is the interface and the Component is the implementation.

Explicit roles and pipelining strategies

After watching an excellent presentation by Udi Dahan this morning I have rethought some of my infrastructure concerns and the way I can handle certain aspects of the generic stack that I heavily lean on. One example that is relatively low hanging fruit is the persistence mechanism.

As a bit of background: I use the service locator pattern heavily in my code where dependency injection is not appropriate, which keeps things clean, lessening the knowledge required of the underlying mechanisms and infrastructure concerns. One good example of where this is currently used is in my application presentation level code to assist in navigation. We call a basic method:

NavigateTo<T>(Action<T> preInit) where T : IPresenter

The service locator gets a presenter of type T and the DI container (which is the same thing as the service locator) instantiates the presenter with its view and any other dependencies. Based on the type of view the presenter has, the navigation implementation displays it accordingly. As the application expands we can extend this to do more specific actions, however the code calling NavigateTo does not have to know how the views are arranged.
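A rough sketch of how that helper might be implemented (the _shell display call is a stand-in for whatever arranges the views in the real application):

public void NavigateTo<T>(Action<T> preInit) where T : IPresenter
{
    //the DI container builds the presenter with its view and dependencies
    var presenter = ServiceLocator.Current.GetInstance<T>();

    //let the caller prime the presenter before it is shown
    preInit(presenter);

    //display logic varies based on the type of view the presenter has
    _shell.Display(presenter.View);
}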

The part that I am most interested in is a very specific example that Udi raised in his talk: persistence.

I have been involved in a couple of projects that did exactly what Udi's old school example did. We had an IEntity interface with a Validate method contract on it. Every one of our classes had to override this and it was messy when it came to validating children, for the exact reason Udi mentioned. I think at one stage we even had reflection getting jammed in… it was a mess. Looking back, a lot of this could have been cleaned up by implementing the IValidate<T> that Udi proposed. This validator can be incorporated in the concrete persistence mechanism as part of a persistence pipeline.

Calling IRepository.Persist(IEntity entity) would, under the covers, also potentially call a bunch of other infrastructure concerns:

=> ILog.Log("IRepository.Persist(IEntity entity)", entity, user)
=> IPersistSecurity.Auth(user)
=> IValidate.Validate(entity)
=> NHibernateSession.Save(entity)
=> IAudit.Audit(entity, user)
=> ILog.Log("IRepository.Persist(IEntity entity)", entity, user)

Each one of these infrastructure concerns can be left generic, allowing a service locator to give you the concrete implementation for the type. E.g. the IValidate<Entity> implementation may be a customer validator that just calls the Validate method on the customer itself… or it may interrogate the customer's getters and evaluate based on those values. It may even ask the service locator for another instance of IValidate<Entity> and validate each of the orders in the customer that it has been passed. How it is done is not up to the customer any more, and it is certainly not up to the persistence mechanism… it is now separated cleanly into its own role.
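Pulling this together, a rough sketch of how the pipeline might look; the generic interface shapes are my reading of the pseudo code above, and _user and _session are assumed fields of the repository:

public void Persist<T>(T entity) where T : IEntity
{
    var log = ServiceLocator.Current.GetInstance<ILog>();

    log.Log("IRepository.Persist(IEntity entity)", entity, _user);
    ServiceLocator.Current.GetInstance<IPersistSecurity<T>>().Auth(_user);
    ServiceLocator.Current.GetInstance<IValidate<T>>().Validate(entity);

    //NHibernate does the actual persistence
    _session.Save(entity);

    ServiceLocator.Current.GetInstance<IAudit<T>>().Audit(entity, _user);
    log.Log("IRepository.Persist(IEntity entity)", entity, _user);
}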

NB: The fact that when saving an address the service locator asks for an IPersistSecurity<Address> type, and that type may not exist, is great! If there is no IPersistSecurity<Address> defined then we can explicitly provide a default of "IsValid = true" using a null type (or however you want to implement it). The infrastructure concerns can be pushed aside and dealt with if and when required.
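The null type could be as simple as a do-nothing implementation that always permits the operation (names again indicative):

public class NullPersistSecurity<T> : IPersistSecurity<T>
{
    public void Auth(IUser user)
    {
        //no security concern defined for T: allow everything
    }
}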

This also raises the question of AOP and policy injection. The pseudo code above implies that we call methods on each of these interfaces. This does not have to be inline code; it can be added at run time and configured on the fly depending on the application in question.

Now our Entity can focus on what it needs to do and not worry about the myriad of other infrastructure concerns that can be dumped onto it. I am looking forward to this simple modification that should clean things up nicely.

Agile Documentation and Stakeholder Engagement

My battle to heavily restrict documentation continues at my current place of work.

We are a very waterfall oriented business by the sheer nature of the sector we operate in. We are not a software development company; we get oil and gas out of the ground. Exploration, building oil rigs, getting the resources out and selling them, and cleaning up afterwards when there are no more resources left is by nature very waterfall-ish. It requires big design up front.

Fortunately I work in the small projects team where we have a reasonable amount of freedom for self governance. Since I have been involved (and most likely for several months prior), the developers I work with and I have been pushing for an improved engagement and development process, as the waterfall approach has only ever failed to deliver. We have tried to implement more and more agile techniques with great (but isolated) success. TDD, DDD, CI and some aspects of Scrum have been taken up, primarily by the developers. Engaging the non technical staff has been problematic to say the least. This primarily revolves around the engagement and documentation processes and the slow uptake and lack of interest of the non technical staff in process improvement.

Firstly there is the personal disagreement over the amount of documentation that is required for a project. I have no problem with this, as I believe healthy conflict, and the healthy resolution of those conflicts, usually results in a better working environment. My thoughts are: given we are a small projects team (I have been involved in 3 projects already, 2 of which are deployed, one is half done), there should be minimal documentation and the code should serve as the vast majority of the detailed documentation.
The default documentation the developers have proposed is:

  • A Vision Scope document: Define why we are even doing this project, the business outcomes and risk and very high level requirements
  • Architectural design, with a design document if the design deviates from our standard web or smart client architecture. All integration points must be defined (i.e. SAP, JDE, service buses, web services etc) along with how they will be subscribed to or published to. The detail of this document is heavily reliant on the project itself. It could be as simple as a class diagram or a full blown, very detailed design document.
  • Use cases/user stories. Definition of the business problem with the desired business functionality required to solve it. A high level work item is probably broken down into several use cases. I personally don't care what format these are in; if a B.A. prefers one style over another that is fine, as long as all the necessary information is captured. One key aspect here is I do not want unnecessary technical information in this document. The person writing this document probably has a comparatively low technical comprehension when compared to the person delivering it. Don't tell me how to do my job!!!! If I ever see another proposed database table or stored procedure in a use case I will make the author eat it.

As far as documentation goes, that is it. The Vision and Scope is about 3 pages, and if this can not be delivered then the PM/BA/stakeholder has no right to engage the team. Once these practices are agreed upon I would like to think that I won't even engage a project unless it fits our minimum templated requirements.

Architectural design is done by the technical lead of the project. As we are lucky enough to have very skilled developers on our team (not that the company has acknowledged it yet), this is most likely done in a quick workshop with the PM, BA & SA. Other stakeholders may be invited, especially as we often work with other teams such as reporting; their level of engagement is largely determined by whether they are considered a technical owner or not. This workshop, for 80% of our work, will be done in about 30 minutes. Many of our applications are basic applications with only a few integration points that are well known. This document should be signed off by at least one other approved technical person.

From here use cases and customer estimates can be done. I am still trying to push for an iterative approach, which is slowly sinking in. Typically we do 4 x 1 week sprints and release monthly. As the project moves on we may increase releases to fortnightly or weekly as functionality snowballs. This is a major benefit of a reusable architecture and reusable build and deployment scripts combined with Continuous Integration.

We are now at the point of iteration zero. Our iteration zero should be about 1 day, including all of the interruptions we get. We have a custom software factory that allows us to have our infrastructure and application architecture standardised. This means within about 5 minutes we can have a proven, architecturally sound application skeleton checked into source control and running off the build server, ready for deployments. If only our non technical brothers were this organised… don't worry though, because we (the techies) have even written the templates for the Vision and Scope and Use Cases for them; all they have to do is fill in the blanks. To be honest I could probably write a Power Shell script to replace most of our non development staff…  ;p ^

As for breaking down the actual work that a dev does: a use case will most likely be broken into multiple development tasks until each task has an estimate of less than 1 day, preferably 0.5 of a day. These tasks can be very briefly described, e.g.

  • create edit customer view – est. 30 minutes
  • create edit customer presenter logic & tests – est. 45 minutes
  • etc

Typically these will be described at the start of a sprint and added as sub tasks in the task tracker (e.g. TFS) by the developers so the PM has visibility of development progress. This is in no way an essential part of the documentation but I believe it aids in:

  • assigning responsibility (and therefore accountability),
  • increasing visibility of project progress,
  • improving estimation of what can be achieved in a sprint, and
  • increasing the ease of assigning bugs to people and to associated work items.

An excellent overview of agile documentation that is almost completely inline with my feelings of documentation is found here:
http://www.agilemodeling.com/essays/agileDocumentation.htm

Especially the following points:

  1. The fundamental issue is communication, not documentation.
  3. You should understand the total cost of ownership (TCO) for a document, and someone must explicitly choose to make that investment.
  7. Documentation should be concise: overviews/roadmaps are generally preferred over detailed documentation.
  9. With high quality source code and a test suite to back it up you need a lot less system documentation.
  10. Documentation should be just barely good enough.
  12. Comprehensive documentation does not ensure project success, in fact, it increases your chance of failure.
  13. Models are not necessarily documents, and documents are not necessarily models.
  14. Your team's primary goal is to develop software, its secondary goal is to enable your next effort.
  16. The benefit of having documentation must be greater than the cost of creating and maintaining it.
  17. Each system has its own unique documentation needs, one size does not fit all.
  19. Ask whether you NEED the documentation, not whether you want it.
  21. Create documentation only when you need it at the appropriate point in the lifecycle.

We define our measure of success in terms of production quality deployed software. For us as developers to move towards this we must provide a suitable engagement process for the non techies to follow. I believe the documents outlined can be seen as a bare minimum, but they are enough to deliver software. Any addition to this set of documents should be justified and deliver a significant increase in business value; if not, eliminate it.

Too much documentation is a waste of time. Inaccurate or poorly maintained documentation is costly. Don’t do it!

Recommend reading:

Software Requirements, Second Edition: Wiegers

Writing Effective Use Cases: Cockburn

Agile Project Management with Scrum: Schwaber

^ My disdain for the non technical people is not personal at all; I actually get on very well socially with them. I count myself lucky to work with a bunch of very nice people. What I don't like is the fact that these people are paid very well and I expect them to be not only competent, but experts in their field. Unfortunately, while the developers are on a path of constant improvement, that passion is not shared by our colleagues, which is a shame. We have very good developers running at about 30% efficiency as we spend too much time on non technical aspects of the SDLC.

Ain’t That the Truth!

A man in a hot air balloon, realizing he was lost, reduced altitude and spotted a woman below. He descended further and shouted to the lady, "Excuse me, can you help me? I promised a friend I would meet him an hour ago, but I don't know where I am."

The woman below replied, “You’re in a hot air balloon, hovering
approximately 30 feet above the ground. You’re between 40 and 41
degrees north latitude and between 59 and 60 degrees west longitude.”

“You must be in IT,” said the balloonist.

“Actually I am,” replied the woman, “How did you know?”

“Well,” answered the balloonist, “everything you have told me is
technically correct but I’ve no idea what to make of your information
and the fact is I’m still lost. Frankly, you’ve not been much help at
all. If anything, you’ve delayed my trip.”

The woman below responded, “You must be in Management.”

“I am,” replied the balloonist, “but how did you know?”

“Well,” said the woman, “you don’t know where you are or where you’re
going. You have risen to where you are due to a large quantity of hot
air. You made a promise, which you’ve no idea how to keep, and you
expect people beneath you to solve your problems. The fact is you are
in exactly the same position you were in before we met, but now, somehow,
it’s my f***ing fault…”

Yeah, an oldie but a goodie…