Loose coupling: The Illusion

I must say I totally agree with the sentiment Jeremy has here. Loose coupling is an illusion on most of the projects I have worked on. The project I currently work on has the client UI in a distributed system knowing that we use NHibernate as our ORM. Unbelievable. Needless to say, unit testing this solution is VERY hard! To me, this is the first place where loose coupling rears its head. If you can not unit test a class due to concrete dependencies rearing their ugly head, then you either:

  • Deal with the fact that you are not, in fact, loosely coupled, or
  • Fix it.

As mentioned in Jeremy’s post, having separate assemblies is not loose coupling. At best this forces a direction of flow of control; at worst it hides circular references or creates the illusion of loose coupling. Jeremy doesn’t break his solution down to quite the project granularity I do, and nor do others (JP Boodhoo, for example, is known to have very few projects in a solution). The point is that you should not be hiding behind the perception that more assemblies == less coupling. You can also separate the single project into multiple assemblies in your build scripts if required. It then becomes a gentleman’s agreement amongst the team that coupling rules will be adhered to. This is much easier to police with a tool like NDepend.

Currently I am without NDepend, so I still break up my solution into multiple projects, for a couple of reasons. I like to be able to see visually what is going on quickly, and I like the default namespacing that is then applied (sure, this can be done with folders too). Probably the aspect I like most, however, is that I can see what references what at any given time by checking the references (again, we don’t have NDepend on this project). By opening the UI project I can now see whether Data Access references are made, or whether there are WCF references in a data access project. Without NDepend this is my last resort for policing the silly things that go on in the current (and no doubt future) projects.

With NDepend I would certainly be moving toward smaller projects. *Thinking out loud* I think a common assembly for the server side (with my default and abstract data access/repository stuff), a server-side assembly and a client-side assembly. It kinda makes sense. If none of that server-side stuff will ever be accessed by another application or assembly, then why break it up? Hmmm.

Anyway, on the path to loose(r) coupling consider a couple of things:

  • Are you using a test-first approach? Although it is not necessary, it tends to flush out dependencies early on.
  • Are you using a means of dependency injection? If you are new’ing up dependencies inline then you have just tightly coupled yourself to an implementation. Check out this for a start on DI, including poor man’s DI, which is still infinitely better than no DI, IMO (there is a small sketch after this list).
  • Code to interfaces, not implementations. I always thought this was a pretty obvious statement, but apparently not. Your DEPENDENCIES should be interfaces. Anything you interact with in terms of actions (i.e. methods or delegates/events) should ideally be via the interface. I very rarely see the point of having DTOs implement an interface…
  • Streamline code that interacts with the unmockable (file system, DB, WCF/Windows services, etc.); it should be as thin as possible. Try to get your working code out of these classes. This is more of a testing issue, but will also lead to better design.
  • Get NDepend. It is a kick-ass tool that I wish my boss would get for me 😦
  • Code reviews. Get out of trouble at the first sign; it’s very hard to “loosen up” an app once the tight coupling has set in.
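To make the DI and “code to interfaces” points a little more concrete, here is a minimal sketch of poor man’s DI with a thin wrapper around the file system. All the names here (IFileStore, ReportLoader, etc.) are made up for illustration, not taken from any real project:

public interface IFileStore
{
    string ReadAllText(string path);
}

// Thin wrapper around the unmockable file system.
public class FileStore : IFileStore
{
    public string ReadAllText(string path)
    {
        return System.IO.File.ReadAllText(path);
    }
}

public class ReportLoader
{
    private readonly IFileStore _fileStore;

    // Real DI: the caller (or a container) supplies the implementation.
    public ReportLoader(IFileStore fileStore)
    {
        _fileStore = fileStore;
    }

    // Poor man's DI: default to the concrete implementation,
    // but tests can still pass a fake in via the other constructor.
    public ReportLoader() : this(new FileStore())
    {
    }

    public string LoadReport(string path)
    {
        return _fileStore.ReadAllText(path);
    }
}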

Loose coupling is a design goal we should strive for, but I believe it deserves a bit more than just lip service. Get the team on board and explain the benefits. The earlier this begins in the project timeline, obviously, the better.

Coding style and preferences

There are a couple of things I try to keep in mind when writing code; well, there are plenty, but specifically for this post, two:

Re-usability and Reliability.

I do all that I can to fail fast. I want my errors to come up as I type; ideally your IDE will support you on this, and if not, plug-ins like ReSharper help. Otherwise I want build errors to tell me, “Hey mate, your code is junk; this does not make sense!”. The last thing I want is run-time errors. This opinion is obviously not shared by everyone.

In my mind, using strings for business logic is a last resort, and if it has to be done, there are always constants.

Casting is also a last resort, and usually only done from a known parent class to a subclass.

I would much rather use enumerated constants, generics or type checks to perform this business logic or flow control.

I have spotted a few places where effective type checks are being done using strings:

if (Object.symbolX == "Fully.Qualified.Name.Space.ClassName")
{…}

If someone changes that namespace then every check now has to be changed, and the compiler will not pick it up.

It would have been just as easy (actually easier) to write:

if (Object.symbolX == typeof(Fully.Qualified.Name.Space.ClassName).FullName)
{…}
or even just do a bloody type check!
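For what it’s worth, an actual type check reads something like this (someObject is a hypothetical variable; the point is that the compiler now cares if the type is renamed or moved):

if (someObject is Fully.Qualified.Name.Space.ClassName)
{
    // flow control based on the real type, not a magic string
}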

Secondly, re-usability:
I am currently working as a contractor on an application that was many, many weeks behind schedule, largely due to an overworked, non-functioning UI. Another contractor and I worked very hard to rebuild the UI framework (i.e. throw all the old stuff out and create new projects for the client side!). We knew it didn’t have to be the best-looking code; it just had to work so that the user could see the application. 14 weeks and no visible app is pretty bad. 2 weeks later we had an app that functioned, very basically, but you could do stuff. We are now almost up to schedule (6 weeks to catch up 14 weeks, with 2 guys fired, is not too bad).
The other developer with whom I wrote the UI framework was reworking some of my code and asked if he could change (or overload) the constructor so he could inject the view’s controller into the view. I explained that I prefer not to, as it means the view becomes more tightly coupled to the controller, and I don’t want the views to know anything other than that they are Win or Web Forms.
I could understand where he was coming from: calling the controller from the view means it is easier and faster to write code, i.e. to save from the view just type this.IXXXXController.Save(XXXXX);
His points were valid. It is easier to code against, and it is kinda loosely coupled, as the view only knows of the common interface assembly. I was starting to doubt why I go to the bother of creating events and delegates and event handlers (even though most of them get reused)…
Then I had to make a change to a view and I realised why I follow this approach.
Raising events means that ANY controller can use the view, and I am reusing a lot of my views. Having knowledge of the controller leads to spaghetti code… As the other developer I currently work with is a pretty good dev, I am not really worried that he is going to go down that path; worst comes to worst, we can retrofit the events when/if we get the time. However, when we got another contract dev involved, it went pear-shaped fast. He is gone now, thank god.
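A rough sketch of the event-based wiring I am talking about (the interface and class names are invented for this post):

// The view only exposes events and data; it knows nothing about controllers.
public interface ICustomerView
{
    event System.EventHandler SaveRequested;
    string CustomerName { get; }
}

// ANY controller can wire itself up to ANY ICustomerView implementation.
public class CustomerController
{
    private readonly ICustomerView _view;

    public CustomerController(ICustomerView view)
    {
        _view = view;
        _view.SaveRequested += View_SaveRequested;
    }

    private void View_SaveRequested(object sender, System.EventArgs e)
    {
        // save logic lives here; the view never calls the controller directly
    }
}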

Benefits of using events

  • They can go on your interfaces
  • As many other classes can subscribe to the event as needed
  • Separation of concerns
  • Changing the EventArgs doesn’t change the signature of the event, meaning you won’t break the build (provided nothing of importance is removed!)
  • It’s clear where calls are coming from and where they are going. If I see a method in my controller called SuperView_RhysKicksAss(object sender, AssKickingEventArgs e){…} it is pretty obvious where this method is being fired from and why it is being fired (because I am Kicking Ass!), but you may want to check it is actually assigned a handler if you didn’t write the code…


Negatives

  • You have to write your EventArgs and event handler. This is perceived as a headache by pretty much everyone I have worked with. When I explain that the xxxEventArgs is just a DTO and the handler is a one-liner they usually ease up a bit, BUT it is still an extra dozen or so lines that need to be created (see the sketch after this list).
  • It’s easy to write sh1te event code if the devs are not aware of coding standards, meaning it is hard to understand what is going on. Follow conventions to save headaches later.
  • Testing frameworks offer crap support for testing events.
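To show how small the “extra dozen or so lines” actually are, here is a hypothetical EventArgs DTO and its one-line handler declaration (the Customer names are made up for illustration):

// The xxxEventArgs really is just a DTO.
public class CustomerSavedEventArgs : System.EventArgs
{
    public int CustomerId { get; set; }
}

// The handler declaration is the one-liner (or skip it and use EventHandler<CustomerSavedEventArgs>).
public delegate void CustomerSavedEventHandler(object sender, CustomerSavedEventArgs e);

public class CustomerPresenter
{
    // Adding more data to the args later does not break this signature.
    public event CustomerSavedEventHandler CustomerSaved;

    protected void OnCustomerSaved(CustomerSavedEventArgs e)
    {
        if (CustomerSaved != null) CustomerSaved(this, e);
    }
}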

After weighing it all up, I think for the meantime I am going to stick to events when calls need to go out of scope.

Using AOP

I have been dodging AOP for a while now, for the main reason that it “looks too hard”, especially when you have to explain it to PMs and other devs who are still trying to get to grips with basic IoC and ORM concepts; after all, projects are not solo efforts.
However, I have now found a nice-looking framework that may help.
PostSharp is an attribute-driven, lightweight AOP framework that modifies the IL as a post-build event in the .NET framework.
This article here has got me very excited about implementing standardised Logging, Security, Exception Handling and maybe even Design by Contract.
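As a taste, this is roughly what a logging aspect looks like using PostSharp’s OnMethodBoundaryAspect. Be warned that the exact namespaces and argument types have shifted between PostSharp versions, so treat this as a sketch rather than copy-paste code:

using System;
using PostSharp.Aspects;

[Serializable]
public class TraceAttribute : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args)
    {
        Console.WriteLine("Entering " + args.Method.Name);
    }

    public override void OnExit(MethodExecutionArgs args)
    {
        Console.WriteLine("Leaving " + args.Method.Name);
    }
}

public class OrderService
{
    // The logging is woven into the IL at build time; the method body stays clean.
    [Trace]
    public void PlaceOrder(int orderId)
    {
        // business logic only
    }
}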

Once I get my home development PC back from the dead I will post more about how easy it is (or isn’t) and how it affects the project at runtime.
Should be interesting.

DB Normalisation and other DB points of view

Over the last couple of months I have heard the comments “it’s a very normalised database” or various implications that database X is “too” normalised. I often found this odd.
Why?

  1. By looking at a DB schema how do you know how well indexed the DB is?
  2. How do you know how many rows are in it?
  3. How do you know about its performance?

I have always been of the opinion that you should:

  1. Normalise first; create a beautiful piece of art that is a nicely and appropriately normalised database.
  2. Add indexes.
  3. Write your stored procs.
  4. Check the performance on test data.
  5. Denormalise if necessary; however, if there were issues I would go over everything with a fine-tooth comb first to ensure keys, indexes, constraints and triggers are all in place where appropriate, working as intended and not adding unnecessary overhead. Then I would start denormalising.

Yes, I am a fan of stored procs, especially for anything that actually is DB intensive. As Gumble also points out (frequently), it adheres to a basic concept we use daily: encapsulation. Normal objects and layers/tiers can’t see into the next object/layer/tier, so why should this be broken at the data access layer? Using SPs also (IMO) aids in security and data control (well, it can; anyone can butcher code) ;). The OO coders should not have to know the underlying data structure; they know what they want, they just need a means to get it. The DBA may be adding a whole bunch of stuff that the OO boys don’t need to know about: inactive flags, triggers, logging, other extra columns, how the data is retrieved, etc. This also means performance is left in the DB world and can easily be tested without any managed code interfering.
One thing I do prefer is isolation of tests; I don’t want to be unable to test individual units. A stored proc is a unit; a BusObj -> Repository -> DA -> DB -> DA -> Repository -> BusObj round trip is not a very succinct unit. If I want to test that vertical I still can, however I want to be able to break it down too.
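As an aside, keeping the data access that thin also keeps it boring. A hypothetical example (the proc name and the shape of the result are made up) of a DA method that knows nothing except the stored proc it calls:

using System.Data;
using System.Data.SqlClient;

public class CustomerDataAccess
{
    private readonly string _connectionString;

    public CustomerDataAccess(string connectionString)
    {
        _connectionString = connectionString;
    }

    public DataTable GetActiveCustomers()
    {
        // The schema, flags, triggers and logging stay encapsulated behind the proc.
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand("dbo.GetActiveCustomers", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            var table = new DataTable();
            new SqlDataAdapter(command).Fill(table); // Fill opens and closes the connection itself
            return table;
        }
    }
}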

Then there is the point of ORMs; I’m all for them: anything that gets rid of boring DA code, sweet! But sometimes you have complex result sets and DB calls that do not suit your run-of-the-mill ORMs. I do believe that for basic databases, and when fast turnover of code is of higher importance, something like NHibernate (perhaps with AR) can be handy… but I think it needs to be reviewed on a case-by-case basis, unlike some of my peers who believe NHibernate is the silver bullet for all solutions…

No doubt more will come…