Intention revealing interfaces

In the argument of “Intention revealing interfaces” vs “Reusable DTO/message”, I will, 99% of the time, lean heavily toward the “Intention revealing interfaces” side.

I have just encountered code where I can see a (somewhat) noble intention of trying to keep the number of messages available in an assembly down to a minimum. Considering the system(s) mainly pass entities across boundaries, messages are not common currency.
Unfortunately I don’t think enough thought was put into the decision. Leaving aside the notion of passing entities around as a separate issue, the fact that this DTO is now being used as a request AND a response object in MULTIPLE service functions means the intention of each DTO is very unclear.
There are multiple fields on the DTO that are not used by the service, and there is a magical “ObjectPacket” property of type object (as in System.Object) that is actually quite important. The reason the property’s type was left as object is “reuse”: the different functions each do different things with that property.
OMG.
Many hours, nay days, of bug fixing could have been saved if this DTO had been split into just four classes: two requests and two responses. These could have been generic enough for the remainder of the functions and would have revealed, along with the service function names*, the intention of each call.
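To make that concrete, here is a rough sketch of the kind of split I mean. All the names (Widget, GetWidgetRequest and so on) are mine for illustration, not from the actual code; the point is that each message says what it is for and carries only what that call needs.

public class Widget { /* the entity/DTO being moved around */ }

// Request and response for the read operation: the intention is obvious from the names.
public class GetWidgetRequest
{
    public int WidgetId;
}

public class GetWidgetResponse
{
    public Widget Result; // strongly typed, no magical "ObjectPacket" of type object
}

// Request and response for the write operation.
public class SaveWidgetRequest
{
    public Widget WidgetToSave;
}

public class SaveWidgetResponse
{
    public bool Succeeded;
    public string FailureReason;
}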

Think of others when you name stuff (classes, components, services, methods and parameters), and think about whether reusability is actually a benefit before bolting bonus properties onto a transfer object. 😉

*Yes, this is a JBOS interaction.

Greenfields project

I am setting up a greenfields ASP.NET service-based app for a “friend”. It is not a completely selfless act. The work I do on a day-to-day basis bores me to tears, so this is an opportunity for me to play with new tools, frameworks, ideas etc. and stop me hating being a coder.

Also, as a contractor I often come in after the initial set up has occurred, which means I suck at setting up solutions. I am using the TreeSurgeon project and Mike Roberts’ article to guide me toward best practice here, so I have a single batch file that calls a single NAnt build script to build and test all my stuff. As there are a bunch of empty projects and empty tests this is pretty bloody fast!!! LOL

So straight off the bat I have in my development IDE:

  • VS2005 (waiting for my copy of 2008)
  • TestDriven.Net
  • Ghost Doc
  • ReSharper (old version, but still happy with it as I am not on 3.5 yet)

.Net aspects I am using

  • WCF
  • Old-skool ASP.NET WebForms. The first web layer will probably be throwaway, especially as it uses ASP.NET Security, which doesn’t suit the requirements. I may move to MVC later, but I honestly have never had issues with WebForms, mainly because I usually keep everything very simple. Plus I wasn’t doing TDD last time I did a web app, certainly not in the web projects at least.
  • SQL 2005

Frameworks I am using

  • Castle stack for IOC & AOP
  • NHibernate 2.0 (alpha)
  • MbUnit (because I haven’t used it)
  • Rhino Mocks (because I am yet to use it properly), possibly TypeMock too for odd stuff
  • NLog/log4net… who cares, some sort of logging… actually probably log4net, as it is in other assemblies already.

Deployment and source control

  • NAnt 0.86
  • SVN
  • CC.Net or TeamCity. Not set up yet.
  • Assembla.com for svn and project management

Quality Control Tools

  • FxCop
  • NDepend
  • NCover
  • Sandcastle

I would also like to use Spec#, but it’s too much of a hassle to have it in “real” code.
I think that pretty much covers it.
Thoughts?

I am keen to discuss a lot of the architectural side of things in weeks to come. I am probably going to post a few of the questions on the altdotnet list (Yahoo group) too.

Rhys

DBC within the C# language – Spec#

Spec# or SpecSharp
I have been hoping for something like this for quite a while. I am hoping this will be something that aids in compile-time assistance for Design by Contract programming.
I have been looking into third-party apps and even building my own libraries; however, what I really wanted was compile-time errors as opposed to runtime errors, which, although tests would usually catch them, don’t really help other developers as they leverage my code.
Currently I have comments and runtime checks, but this still does not force consumers of my methods to adhere to the contract; it only throws runtime exceptions when they break it. The benefit here is that they fail fast and get a more meaningful exception message as to WHY the parameters are not valid. But it often also means that I have code that looks like:

public class Foo : IFoo
{
    public void Bar(object param1)
    {
        Check.IsNotNull(param1, "Foo.Bar: param1 can not be null");

        if (param1 != null) // ***this line is effectively redundant
        {
            // do something with param1...
        }
    }
}

The null check is still in place to prevent FxCop errors arising from not checking the object’s state before using it.
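For reference, Check above is a guard helper (home-grown or third-party), not something in the BCL. A minimal sketch of what such a class might look like, purely as an illustration of the pattern:

using System;

public static class Check
{
    public static void IsNotNull(object value, string message)
    {
        // Fail fast, with a message explaining WHY the argument is invalid.
        if (value == null)
        {
            throw new ArgumentNullException("value", message);
        }
    }
}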
I really hope the Spec# additions to C# will aid in this.

When is a bug not a bug?

I am sure the tester at the place I currently work at secretly hates me. He is far too nice and far too professional to ever say so… but I am sure of it.
He is in a position where he knows what the application we are building should do and how it should do it. Unfortunately this is only ever communicated to me after I have built the module. He doesn’t write the use cases, so he doesn’t get a chance to forewarn me.
These changes are raised as bugs (as we only have the concept of a bug, no concept of a change request), and so it looks like my code sucks, as I have a couple of dozen bugs logged against my name (which is standard across the team). I don’t like the idea of my code sucking. While looking through the bug list today, after reading the first 12 and realising that they were either A) not bugs, but changes, or B) not my bugs to fix, I got a little stroppy.
*begin blowing trumpet*
Now, when I get a use case I assume that this is what is required… I know, what an idiot… so I write my unit tests and do the whole Red, Green, Refactor like a good TDD agile boy,
*end blowing trumpet*
however I forget that we only call ourselves agile; we are, in fact… dreamers.
Huge numbers of tests fail and are left unattended, iteration after iteration… our scope is non-existent… our use cases are guesses at what we kinda, maybe want, and are completely OK to change at any time, with the expectation to deliver on time still intact.
So I have tried to push back.
- A bug is raised:
  > If it is in the use case ==> FIX NOW!
  > Else ==> it can be done in the next iteration and is logged as “Not in spec / functional change”.

I thought this might ruffle some feathers and hopefully mean the use cases would become a little more robust. It also means the actual bugs get higher priority, as I think they should.
No…
Now we just do non-functional iterations where we do “bug fixes” on all the functionality that was never originally asked for.
As there are a few developers (6-12), all on UK rates (not exactly cheap), and one BA (still on UK rates, but only one of him), I would think it would make sense to focus the effort on the up-front work; hell, maybe even hire another “BA” so the development team don’t have to handle code 2, 3, 4+ times.
It also means the tester (again on UK rates) has to test and then retest every time a change is made… how do you spell D.R.Y.???
To say this annoys me is somewhat of an understatement. It is pretty hard to be focused and passionate about what you are doing, knowing full well that in only a few hours/days/weeks it will all get thrown out because someone threw together a use case without putting more than 5 minutes of thought into it.
More time scoping => less time “bug fixing”*

*i.e. retrofitting missing functionality

Now I know Agile encompasses the ability to “handle change”, but never getting the original scope correct… ever, through laziness, is not Agile; it’s just software cowboy bollocks.

Unfortunately nothing is going to change. This could be a really good project, even a flagship project for the company, as it uses new and exciting tools for the company (.NET 3.0, NHibernate etc.) and it would honestly only require the smallest extra bit of effort. But it won’t, and that’s a shame.

end of yet another rant…

Coding style and preferences

There are a couple of things I try to keep in mind when writing code. Well, there are plenty, but specifically for this post, two:

Reliability and re-usability.

I do all that I can to fail fast. I want my errors to come up as I type; ideally your IDE will support you on this, and if not, plug-ins like ReSharper help. Otherwise I want build errors to tell me, “Hey mate, your code is junk, this does not make sense!”. The last thing I want is runtime errors. This attitude is obviously not shared by everyone.

In my mind, using strings for business logic is a last resort, and if you must, there are always constants.

Casting is also a last resort, and usually only done from a known parent class to a subclass.

I would much rather use enumerated constants, generics or type checks to perform this business logic or flow control.

I have spotted a few places where effective type checks are being done using strings:

if (Object.symbolX == "Fully.Qualified.Name.Space.ClassName")
{ … }

If someone changes that namespace then every check has to be changed, and the compiler will not pick it up.

It would have been just as easy (actually easier) to write:

if (Object.symbolX == typeof(Fully.Qualified.Name.Space.ClassName).FullName)
{ … }
or even just do a bloody type check!
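If you have the actual object in hand rather than a string holding its type name, the type check really is a one-liner. A hedged sketch, where someObject is just a placeholder of mine:

if (someObject is Fully.Qualified.Name.Space.ClassName)
{
    // ...
}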

Secondly re-usability:
I am currently working as a contractor on an application that was many, many weeks behind schedule, largely due to an overworked, non-functioning UI. Myself and another contractor worked very hard to rebuild the UI framework (i.e. throw all the old stuff out and create new projects for the client side!). We knew it didn’t have to be the best-looking code; it just had to work so that the user could see the application. 14 weeks and no visible app is pretty bad. 2 weeks later we had an app that functioned, very basically, but you could do stuff. We are now almost back up to schedule (6 weeks to catch up 14 weeks, with 2 guys fired, is not too bad).
The other developer I wrote the UI framework with was reworking some of my code and asked if he could change (or overload) the constructor so he could inject the view’s controller into the view. I explained I prefer not to, as it means the views become more tightly coupled to the controllers, and I don’t want the views to know anything other than that they are Win or Web Forms.
I could understand where he was coming from: calling the controller from the view means it is easier and faster to write code, i.e. to save from the view just type this.IXXXXController.Save(XXXXX);
His points were valid. It is easier to code against, and it is kinda loosely coupled as the view only knows of the common interface assembly. I was starting to doubt why I go to the bother of creating events and delegates and event handlers (even though most of them get reused)…
Then I had to make a change to a view and I realised why I follow this approach.
Raising events means that ANY controller can use the view, and I am reusing a lot of my views. Having knowledge of the controller leads to spaghetti code… As the other developer I currently work with is a pretty good dev, I am not really worried that he is going to go down that path; worst comes to worst, we can retrofit the events when/if we get the time. However, when we got another contract dev involved, it went pear-shaped fast. He is gone now, thank god.

Benefits of using events

  • They can go on your interfaces
  • As many other classes as needed can subscribe to the event
  • Separation of concerns
  • Changing the EventArgs doesn’t change the signatures of events, meaning you won’t break the build (provided nothing of importance is removed!)
  • It’s clear where calls are coming from and where they are going. If I see a method in my controller called SuperView_RhysKicksAss(object sender, AssKickingEventArgs e){…}, it is pretty obvious where this method is being fired from and why it is being fired (because I am Kicking Ass!), but you may want to check it is actually assigned as a handler if you didn’t write the code…


Negatives

  • You have to write your own EventArgs and event handler. This is perceived as a headache by pretty much everyone I have worked with. When I explain that the xxxEventArgs is just a DTO and the handler is a one-liner, they usually ease up a bit. BUT it is still an extra dozen or so lines that need to be created.
  • It’s easy to write sh1te event code if the devs are not aware of coding standards, meaning it is hard to understand what is going on. Follow conventions to save headaches later.
  • Testing frameworks offer crap support for testing events

After weighing it all up, I think for the meantime I am going to stick to events when calls need to go out of scope. A quick sketch of the pattern is below.
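For what it’s worth, here is a rough sketch with made-up names (IWidgetView, SaveRequested and friends are mine for illustration, not from the project). The view only raises an event and knows nothing about any controller; any controller that cares simply subscribes, and using the generic EventHandler<T> even saves you writing a custom delegate.

using System;

// The EventArgs is just a DTO.
public class SaveRequestedEventArgs : EventArgs
{
    private readonly int _widgetId;

    public SaveRequestedEventArgs(int widgetId)
    {
        _widgetId = widgetId;
    }

    public int WidgetId
    {
        get { return _widgetId; }
    }
}

// The event can go on the interface.
public interface IWidgetView
{
    event EventHandler<SaveRequestedEventArgs> SaveRequested;
}

public class WidgetView : IWidgetView
{
    public event EventHandler<SaveRequestedEventArgs> SaveRequested;

    // Called from, say, a button click; raising the event is a one-liner.
    protected void OnSaveRequested(int widgetId)
    {
        if (SaveRequested != null)
        {
            SaveRequested(this, new SaveRequestedEventArgs(widgetId));
        }
    }
}

// Any controller that cares simply subscribes.
public class WidgetController
{
    public WidgetController(IWidgetView view)
    {
        view.SaveRequested += WidgetView_SaveRequested;
    }

    private void WidgetView_SaveRequested(object sender, SaveRequestedEventArgs e)
    {
        // The handler name makes it obvious where the call came from.
        // ... do the save for e.WidgetId ...
    }
}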

Dependency on self?

I am worried that DI is the new singleton: a pattern with good intentions, fantastically helpful when used properly, but slipping into the realm of the abused.
When leaving Australia I was working on a project where there were dozens and dozens of unnecessary singletons. They really just had no reason to be there other than to slow things down.
Recently I have seen quite a bit of code using “constructor injection”, which is no problem in itself. But when you pass an object in, and that object in turn depends on the thing it was handed to, you are in effect creating a circular reference. Using interfaces, this is allowed; e.g. the sketch below.
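All the names in this sketch are hypothetical: the view is handed its controller, the controller holds the view, and because each concrete class only references the shared interfaces the compiler is perfectly happy with the circle.

using System;

public interface IOrderView { }

public interface IOrderController
{
    void Save();
}

// The view is given its controller...
public class OrderView : IOrderView
{
    private readonly IOrderController _controller;

    public OrderView(IOrderController controller)
    {
        _controller = controller;
    }

    private void SaveButton_Click(object sender, EventArgs e)
    {
        _controller.Save(); // ...and calls straight back into it.
    }
}

// ...while the controller holds the view: view -> controller -> view.
public class OrderController : IOrderController
{
    private readonly IOrderView _view;

    public OrderController(IOrderView view)
    {
        _view = view;
    }

    public void Save()
    {
        // pulls state back off _view, etc.
    }
}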

My problem is: why?

I am seeing this specifically in the MVC pattern.
I don’t need my views to know anything about my controller, so why pass the controller in as a constructor parameter? If the view needs to communicate with the controller, it raises an event; if the controller cares about that event, it will subscribe to it.
I believe people are dodging events because they seem tricky and DI is “cooler”.
Bollocks to that, I say.

Strings

Strings are really beginning to annoy me. People need to think, when they design APIs, whether strings are really the best thing to use.
Strings are for reading, for humans to read.
I am using a system where most of the strings should be replaced with enums or custom value types; the reduction in runtime errors would be significant.
In one example I have to compare timecodes, and these are in string format. I now need to know and understand the other application’s timecode format so I can do a comparison, instead of just using
if (timecodeA > timecodeB)
{
    // DO LOGIC
}
which would be the logical thing to do.
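What I would much rather have is a small custom value type that owns the format in one place and makes the comparison above legal. A hedged sketch follows; the HH:MM:SS:FF layout and the fixed frame rate are assumptions purely for illustration, not the other application’s actual format.

using System;

public struct Timecode : IComparable<Timecode>
{
    private readonly int _totalFrames;

    public Timecode(int totalFrames)
    {
        _totalFrames = totalFrames;
    }

    public static Timecode Parse(string text, int framesPerSecond)
    {
        // "HH:MM:SS:FF" -> total frames; the format knowledge lives in ONE place.
        string[] parts = text.Split(':');
        int hours = int.Parse(parts[0]);
        int minutes = int.Parse(parts[1]);
        int seconds = int.Parse(parts[2]);
        int frames = int.Parse(parts[3]);
        return new Timecode(((hours * 60 + minutes) * 60 + seconds) * framesPerSecond + frames);
    }

    public int CompareTo(Timecode other)
    {
        return _totalFrames.CompareTo(other._totalFrames);
    }

    public static bool operator >(Timecode a, Timecode b) { return a.CompareTo(b) > 0; }
    public static bool operator <(Timecode a, Timecode b) { return a.CompareTo(b) < 0; }
}

Then the calling code really can just write if (Timecode.Parse(a, 25) > Timecode.Parse(b, 25)) and get on with the logic.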

Using AOP

I have been dodging AOP for a while now, for the main reason that it “looks too hard”, especially when you have to explain it to PMs and other devs who are still trying to get to grips with basic IoC and ORM concepts; after all, projects are not solo efforts.
However, I have now found a nice-looking framework that may help.
PostSharp is an attribute-driven, lightweight AOP framework that modifies the IL as a post-build event in the .NET framework.
This article here has got me very excited about implementing standardised Logging, Security, Exception Handling and maybe even Design by Contract.
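I have not actually written any of this yet, so treat the sketch below as my reading of the PostSharp samples rather than tested code; OnMethodBoundaryAspect, OnEntry/OnExit and the PostSharp.Laos namespace are assumptions from the docs. The idea is that a single attribute gives you the standardised logging without cluttering the business method.

using System;
using PostSharp.Laos; // assumption: the lightweight aspect base classes live here

// A rough sketch only: an attribute-driven logging aspect.
[Serializable]
public class LogCallAttribute : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionEventArgs eventArgs)
    {
        Console.WriteLine("Entering " + eventArgs.Method.Name);
    }

    public override void OnExit(MethodExecutionEventArgs eventArgs)
    {
        Console.WriteLine("Leaving " + eventArgs.Method.Name);
    }
}

public class OrderService
{
    [LogCall] // the post-build weaver wraps this method with the logging aspect
    public void PlaceOrder(int orderId)
    {
        // business logic only, no logging noise
    }
}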

Once I get my home development PC back from the dead I will post more about how easy it is (or isn’t) and how it affects the project at runtime.
Should be interesting.

"Public" API’s

I have for a long time prided myself on writing code that other developers would want to use. It was my brother who actually gave me that definition of good code, and I think it is a good way to think about how to write code.

I have recently been using “legacy” code and am constantly amazed that none of the public methods/classes are commented or documented or have argument checks. What I find slightly amusing is the screeds of code that are commented out in certain code files. I have no problem with commenting out code while running tests etc., but delete it before you check it in; that’s what source control is there for! Certainly do not release code with all that crap in there…

If you write C# on the .NET framework you probably use VS. Turn on FxCop.
For god’s sake, it’s there to make you a better coder, not to just randomly show arbitrary messages at the bottom of your screen. If I am working on a project I prefer to set warnings to errors from the start; then the code stays clean.


Use ReSharper. Your code will be cleaner. I am the only one using it at my current place of employment, and the HUGE amount of redundant code that shows up is unbelievable! It may go some way to explaining why, after 14 weeks, the U.I. was still not up and running.


Use GhostDoc. It makes commenting a completely trivial task and also helps you name methods in a more appropriate manner. If I notice the comments don’t make sense, then instead of rewriting the comment I now rewrite the name of the method until the comment makes sense. The more you use it the less this occurs.


Check your arguments on all public methods. This is called Design by Contract. I basically check everything that isn’t a Boolean.

These things all reduce the chance of you being one of those developers… the ones I curse at as I work with their code.

Look at your code.
Is it clean to look at, without even reading it?
Can you, at any stage, generate meaningful documentation from your code?
Do you know off the top of your head what the code coverage is likely to be on your current coding project (assuming this is greenfields)? Is it appropriate?

Some colleagues I have worked with think this is gold plating.
Bullshit. We get paid good money to do our job.
Number one priority is to develop working code on time, sure.
But I believe, certainly as a contractor who may never be seen again, it is also my duty to make it as easy and intuitive as possible for the next guy, of whatever ability, to use, refactor, debug and modify my code.
These practices do not detract from our number one priority. In fact, I have found that the above practices, along with TDD, usually mean you are more likely to handle code once and not have to rummage through it again later looking for bugs. And if you do, I guarantee it will be a lot faster than if you had not put these practices in place.

* certainly since I have been a contractor!

Measuring Productivity

LOC is possibly the most stupid way to measure productivity. It really doesn’t show anything, and it possibly encourages bloated code from the not-so-noble coder keen to grease up to the project manager.
Possibly a better option is measuring based on passing Unit Tests.
Why?

  • A unit test usually defines one piece of functionality.
  • It encourages the developer to write unit tests and encourages more robust coding.

This is kind of a brain dump, but I think it may have merit… possibly.