Functional .Net : Closures

One of the more commonly used functional techniques available in C# is the closure; if you are currently using lambdas, you may be using closures inadvertently. My understanding of closures may differ from others' as there seem to be so many subtly different definitions, especially when comparing languages. Anyway, in my mind the notes on Javascript closures best align with my understanding (http://www.jibbering.com/faq/faq_notes/closures.html)

A closure is a delegate that has references to a variable not passed to it, declared in a scope outside the delegate's immediate scope.

Like any delegate, its definition and its execution are not the same thing. You can define a closure and never use it, or call it later.

The simplest closure example I can think of is:

static void Main(string[] args)
{
    var timesToRepeat = 100;
    //Declare the Action (Action<string> so the compiler knows 'text' is a string)
    Action<string> print = text => //text (string) is the only parameter
    {
        //using a variable declared outside of the Action
        for (int i = 0; i < timesToRepeat; i++)
        {
            Console.WriteLine(text);
        }
    };
    timesToRepeat = 3;//Let's modify the variable
    print("Hello!");//Call the action/evaluate the expression
    //Prints:
    //Hello!
    //Hello!
    //Hello!
}

Note that the timesToRepeat variable is declared outside of the declaration of the lambda statement. Think about this: the Action ‘print’ can be passed outside of this scope; it could be passed to another class which does not have visibility of the locally declared variable. The ‘print’ expression is bound to that variable declared outside of its scope. This obviously has ramifications in terms of holding a reference to that object. Please also note that the expression ‘print’, like all delegates, is evaluated when it is called, not when it is declared; stepping over the above code will not print anything when declaring the ‘print’ Action, only at the last line when it is called. One last thing to note is that the variable timesToRepeat is modified after defining the print Action, and this carries through when we call ‘print’ in the last line; “Hello!” is printed 3 times, not 100 times as the variable's value implied when the closure was declared.
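To make the “passed outside of this scope” point concrete, here is a minimal sketch of my own (the MakeCounter name is hypothetical): the captured local survives after the method that declared it has returned, because the closure still references it.

using System;

class ClosureEscapes
{
    static Func<int> MakeCounter()
    {
        var count = 0; //local to MakeCounter
        return () => ++count; //the closure captures 'count' itself, not a copy of its value
    }

    static void Main()
    {
        var next = MakeCounter(); //MakeCounter has returned, yet 'count' lives on
        Console.WriteLine(next()); //1
        Console.WriteLine(next()); //2
        Console.WriteLine(next()); //3
    }
}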

You may have been using closures without knowing it. Javascript and the associated libraries like jQuery use this technique a lot, as do many open source libraries such as Topshelf, MassTransit etc.

Functional .Net : First Class Functions

One thing I notice in .Net is that many developers do not think of functions as first class citizens. I guess in the OO world classes, or more appropriately their instance representations, are the real heroes; however, in my mind, functions deserve much more appreciation than they perhaps get.

Delegates have been around since .Net 1.0 and I still think many developers do not fully understand how they work. I have previously made a post with regard to delegates showing how they can be used in a real world way to save code duplication here. I guess one of the first steps to being comfortable with functional programming is being comfortable with functions as first class citizens; the best way for a typical C# developer to do this is to get comfortable with delegates. Before I continue on with my functional programming journey I want others following along to be on the same page. Please be sure you understand what a method and a delegate are; I feel I describe them reasonably well in the previously mentioned post.
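As a quick taste of what “first class” means (a trivial sketch of mine, not from that post): a method can be stored in a variable and passed as an argument, just like any other value.

using System;

class FirstClassFunctions
{
    static int Double(int x) { return x * 2; }

    //a function that takes another function as a parameter
    static int Apply(Func<int, int> f, int value)
    {
        return f(value);
    }

    static void Main()
    {
        Func<int, int> f = Double; //a method held in a variable
        Console.WriteLine(Apply(f, 21)); //42
        Console.WriteLine(Apply(x => x + 1, 41)); //42 - or pass a lambda directly
    }
}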

Functional .Net: The Beginning

Of late I have (along with a few colleagues and friends) started to make a bit more of a concerted effort to up-skill in the area of functional programming. I admit that my knowledge of functional programming is high level (at best), although I have inadvertently been using several of the core concepts thanks to the language features I am exposed to in C#, which I use on a day to day basis.

What has spurred me on is the series of talks from Dr Erik Meijer on Channel 9 (the first of 13 can be found here). The talks tackle functional programming by working through the benchmark functional book: Programming in Haskell.

I am keen to see how the series progresses. I am only up to episode 2 but am already seeing value, more in the “why” as opposed to the “how”, which is fine at this early stage of my journey.

I also want a bit of commercial return on investment in relation to what I can do in my day to day job with functional programming. As I have mentioned, C# actually handles several of the functional paradigms (although perhaps not as elegantly as F# and the like) and, thanks to .Net's resident functional voice-to-the-masses, a bunch of functional programming samples in C# can be found here to download; cheers Matt! Along with the raw C# code he has a bunch of Wiki links to highlight what each example is actually doing; you may be surprised that you are inadvertently using some of these techniques!

Anyway, I will keep you posted as to how I progress as I move forward on the beginning of what is hopefully a fruitful journey!

TDD Mind Shift

Ok so I am not the fastest adopter, however recent situations have almost forced me into a new way of thinking. Recently I was going over a colleague's code and was about to add a test to one of his existing fixtures. He had previously mentioned that he felt there were a lot of tests; I told him not to worry, as when you are new to TDD it often seems like a lot of extra code… I had underestimated his comments. He was right: there were a ton of tests, thousands upon thousands of lines of tests, in one fixture. I had clearly not been doing my job and should have been helping him out.

The tests were broken down into one fixture per class under test, using regions to separate out test groupings, typically one for each method under test. There was some very basic common set up, however there was still a lot of set up in each test. It became quite clear that certain common set ups were recurring. Although fixture-per-class is common, it is usually not the best way of keeping your tests together, especially if you are being thorough in your testing. What my refactoring produced was akin to something I have been following for a while but not really embraced: context based tests. The tests were broken up so those with common set ups were grouped in their own fixtures, i.e. those with the same defined stubs. Although in this case it was done to make maintenance easier, it highlighted for me the benefits of this approach.

Fixtures become specific to the context in which they apply, not blindly and solely to the class they are testing. This will lead to multiple test fixtures per class and even multiple test fixtures per method. This is what initially turned me off the approach; fixtures would become too fine grained. However, I have changed tack. Although it may not always be appropriate, I can see it being beneficial in many situations. For example you may emulate the Repository having no records in it (via a stubbed method call) and run a bunch of tests with that context. This encourages you to test not just method calls but to think in terms of a given scenario, something that I see unit testing sometimes missing in the pursuit of just achieving code coverage. The fixtures tend to have tests that are very small and very clear as to what they are asserting, often only a couple of lines per test.

Another thing that originally put me off the notion of context specification and BDD was the perception of tooling… RSpec and the like are not required; it is the thought patterns that I think are more important. Setting up a specification can be done using a basic setup method, with each fixture defining a specific context. Test inheritance can be very helpful here too, as the sketch below shows.
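To illustrate, here is a minimal sketch of a context based fixture using NUnit and a hand rolled stub; all of the types here are hypothetical, invented for the example:

using System.Collections.Generic;
using NUnit.Framework;

public class Customer { }

public interface ICustomerRepository
{
    IList<Customer> GetAll();
}

//the context: a repository with no records in it
public class StubEmptyCustomerRepository : ICustomerRepository
{
    public IList<Customer> GetAll() { return new List<Customer>(); }
}

public class CustomerService
{
    private readonly ICustomerRepository repository;
    public CustomerService(ICustomerRepository repository) { this.repository = repository; }
    public IList<Customer> FindAll() { return repository.GetAll(); }
}

[TestFixture]
public class When_the_repository_has_no_customers
{
    private CustomerService service;

    [SetUp]
    public void EstablishContext()
    {
        //one context per fixture; a sibling fixture would stub a populated repository
        service = new CustomerService(new StubEmptyCustomerRepository());
    }

    [Test]
    public void Should_find_no_customers()
    {
        Assert.AreEqual(0, service.FindAll().Count);
    }
}

The fixture name reads as the scenario and the setup is the context; pushing the shared setup into a base fixture gets you the test inheritance mentioned above.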

Although context specification and BDD are not the same thing, I believe they are a movement in the same direction: moving away from blind testing to defining scenarios that we need to test. Tests become closer to one of the goals of being “readable documentation”.

If you do want to read up a bit more, check out:

Article introducing BDD styled tests with the notion of Context Specifications : http://www.code-magazine.com/article.aspx?quickid=0805061&page=1

A nice coding example showing one way of thinking in a context spec way: http://www.lostechies.com/blogs/rssvihla/archive/2009/05/21/context-spec-style-testing-and-my-approach-to-bdd.aspx

MSpec with Boo looks to be very cool too: (requires git client): http://github.com/olsonjeffery/machine.specifications.boo

Although the shift for me has not been huge, it has been significant. I would encourage you to at least investigate whether some of these principles can be applied in your testing.

When Easier is Better

Most of the people I currently work with or have worked with know that I have a strong preference for NHibernate as my persistence mechanism. I typically use a repository pattern, often with services hiding the repositories from the outside world. This is great as I get enterprise-scalable domain-driven solutions up and running pretty quickly and it helps me focus on fixing business problems, not spinning my wheels with infrastructure details. However, sometimes having DTOs, services, repositories, mapping files, anti-corruption/translation layers etc is just overkill. This is where I would typically say “use Linq2Sql” if someone asked me what they should use, but it's probably not what I would use. I like the idea of having the flexibility of moving from a simple domain to a complex domain without too much trouble. Enter Castle ActiveRecord.

Active record is in no way a new concept (PEAA p160) but it is heavily underused in the .Net realm. AR is a great pattern when you are fleshing out a domain. You can very quickly start building up relationships and have screens up and running for a client very quickly. This is great for spikes but also for writing real code. The database can be generated from the code (Castle AR sits on top of NHibernate) so it is a great fit for a fast moving agile project, especially in the initial sprints. The thing I like most about it is that if I decide I want a more complex domain, all is not lost; I remove the Castle attributes and references, wrap a repository pattern around it and I am done. It really is that easy. All my existing domain unit tests should still pass. In a matter of hours you could switch from a 2 tier app to an enterprise-ready scalable architecture.
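For a flavour of what this looks like, here is a minimal sketch of an AR entity; the class and property names are mine, invented for the example (see the getting started link below for the real details):

using Castle.ActiveRecord;

[ActiveRecord("Customers")]
public class Customer : ActiveRecordBase<Customer>
{
    [PrimaryKey]
    public int Id { get; set; }

    [Property]
    public string Name { get; set; }
}

//elsewhere, once ActiveRecordStarter has been initialised:
//new Customer { Name = "Rhys" }.Save(); //persisted by AR, no repository in sight

Stripping the attributes and the ActiveRecordBase inheritance later is exactly the “remove the Castle attributes and references” step described above.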

Basically AR is great if you are lazy (or need results now).

To prove all of this I plan on presenting Castle AR at an upcoming Perth Alt.Net meeting. Originally I had wanted to show NHibernate, however I think the progression from AR to NH will help show both of their benefits. This may also allow us to show the limits of both… time will tell.

As a side note, the Castle stack (as well as NHibernate) has had a “proper” release this year, so if you haven't had a look in a while check it out: http://www.castleproject.org/castle/download.html

Links – Articles

http://www.castleproject.org/ActiveRecord/gettingstarted/index.html

All you wanted to know about Castle ActiveRecord – Part I

All you wanted to know about Castle ActiveRecord – Part II

Links – Videos

Ayende Rahien – Using Active Record to write less code (Oredev 2008) << Watch me I'm great!

Ayende & Hammet – Painless Persistence with Castle ActiveRecord (JAOO)

The Build XML Divorce – Part II

OK, so as I continue to play with Rake I am very quickly seeing what is happening. I am in effect building up a bunch of reusable scripts that can manage task dependencies but really are just orchestrating other applications and system admin tasks. The syntax is nice, Ruby is a very clean, fluent language, however it is becoming abundantly clear that all I am really doing is reorganising my build flow from PowerShell and MSBuild to Rake and MSBuild… It quickly dawned on me that I probably had not understood how PSake really works. I had briefly looked into PSake, only because James Kovacs (who I hold in very high regard) was the author, and quickly pushed it to the back of my to do list as it looked more like a pet project whose only intention was to add to the variants of Make. The problem was I didn't really understand what it was supposed to do. At the time, in my mind, all PSake provided was a means to have a dependent task hierarchy written in PowerShell… that's it… but that's all it needs to be! It should be calling out to MSBuild or csc.exe to build assemblies, it should be calling out to your test runners and analysis tools. The (R/B/M/Ps)ake tool is (IMCO) just a way to facilitate tasks and control their dependencies.

Ok so why the big rant? Well, it was becoming obvious to me that the things I was trying to do in Rake last night were things that I could easily do in PowerShell. Not only easily, but arguably much more appropriately done in PowerShell; things like file and directory manipulations. My build process is really pretty basic and can be done completely in an XML based tool like NAnt or MSBuild. It's what I do after the most rudimentary clean/build/test that requires a bit more muscle, and this is where I have been using PowerShell anyway, so using PSake just makes sense. PSake is just PowerShell with a nice clean API to declare tasks with dependencies. Anything you can do in PowerShell you can do in PSake.

This is good news. So the next step is to refactor my PowerShell bootstrapper scripts into PSake tasks, pull some of my MSBuild tasks into PSake tasks and keep the MSBuild file down to the bare minimum that MSBuild does well… namely: building. One thing I would have thought 6 months ago is that Rake/Ruby would produce soooo much cleaner, sexier code… but no, I actually think the PowerShell code is very nice and very well suited to these types of tasks. Sure it's got a few bugs to iron out, but my affection for PowerShell continues.

Sorry Rake, it's been a fun 3 days, but it's over… it's not you, it's me.

The Build XML Divorce

Like many .Net Dev’s is have been using NAnt and MSBuild a lot of the last few year to speed up my own local build and to create a suite of task for my build server to do when I check code in. If you have been doing the same I am sure you will run into issues as soon as you surpass the most basic of clean->build->test->analyse type scripts. For some reason I like to build deployable versioned  packages for each environment when I deploy to Test. We only deploy to test every day or 2 and I want to know that version x on Test will have the exact same complied code as version x on UAT and Prod… it sounds obvious, however its no surprise that this is not always the case in may software departments. Having these versioned packages ready to go also means when it is time to push to UAt or prod it a matter of seconds before they could be live (not hours or days like some places i have worked at). Doing this more detailed versioned pre-deployment packaging meant my XML based build soon became messy and I turned to PowerShell to bootstrap some of the processes and loop thru things like swapping out config’s etc. This is fine but it was becoming a little confusing  for the other Dev’s who had not been as involved in the process as any of us would have liked (especially as a bat file was kicking the whole thing off for local builds). It also means they now have to know MSBuild and PowerShell…

Ok so this blog post is not going to be anything ground breaking for those out there who are au fait with the Ruby community, however I have had an itch to check out rake (properly) for a while now. I have finally got a home project that I am sinking my teeth into and I thought this is a great opportunity to bring rake into the fold… finally!

Right, so a quick brief on Rake: it's loosely based on Make, it's a build tool written in Ruby, it's much cleaner than the XML based options and you are writing real code, so you can do what you want (including loops, which in Ruby are oh-so-clean)!

Here is a super simple skeleton rakefile.rb script. The rake file should be somewhere in your solution directory structure. Just calling rake from the cmd line in this directory will call the default task.

task :default => ["build:test"]

namespace :build do
  desc "Clean Solution"
  task :clean do
    puts "Cleaning..."
  end

  desc "Build Solution"
  task :buildsln => :clean do
    puts "Building..."
  end

  desc "Test Solution"
  task :test => :buildsln do
    puts "Testing..."
  end
end

namespace :deploy do
  desc "Publish Soln"
  task :publish do
    puts "Publishing..."
  end
end

So say this is in “c:\rhysc\rakefile.rb”; I just open a cmd window, change the dir to “c:\rhysc”, type rake and the following will be printed:

Cleaning...
Building...
Testing...

Right, so let's look at the above rakefile.rb document:

  • First we define our default task, the thing that will run if the rake command is not given any parameters. This says the test task in the build namespace is the default task to run.
  • Next we define a namespace (build); this is standard Ruby.
  • Next we document our tasks. If we type “rake -T” we get to see the list of available tasks with their descriptions. Personally I think this is fantastic.
  • Next we define a task! These tasks are pretty silly as they only print to the console what they should be doing, but it helps show the basic structure.

Note the buildsln and test tasks have the => notation. This shows dependencies, i.e. test depends on buildsln which depends on clean; so calling test will mean the tasks that are run (in order) are clean, buildsln then test.

Also note that we have quote marks in the default dependency ([“build:test”]). My knowledge of Ruby is poor at best (I'll get there!), but the quotes are needed because the namespaced task name contains a colon, which a plain symbol cannot express. If test was not in a namespace the line could read:

task :default => [:test]

To call the publish task as well as all the build tasks we would just call:

rake "build:test" "deploy:publish"

Clearly this is a very light taste of what rake does. I intend on posting more scripts as I continue to build real scripts* to incorporate into my code base, however for now the link below may be a good starting place… as well as reading the docs… I'm so looking forward to losing this XML bride 😉

Links:

(you obviously need Ruby installed… it's a one-click installer so it's pretty painless)

*don't worry work colleagues; I don't intend on inflicting this onto you… yet… we'll keep to our Bat File/MSBuild/PS cocktail for now 😉

AOP with Delegates

In the past I have made mention of the notion of Aspect Oriented Programming (AOP) in regards to reducing the noise that can occur when cross cutting concerns, like logging, invade business logic. Unfortunately most of the posts I have made have been in reference to tools and the assistance they can offer. Tools like PostSharp, Unity, Castle etc provide some “magic” to eliminate the code clutter. Unfortunately many of the people I talk to just do not implement tools like this at the place they work and want a POCO option to deliver such results. Well, this is actually simpler than many people realise, and it also points to the common misunderstanding of delegates, anonymous methods and lambdas, as well as the huge amount of code reuse they can provide.

First I will show how a method could look if we were using AOP; second, an example of the same “typical” business code with all of its noise intact; and later on, a clean version that mixes POCO techniques with other forms of AOP.

class AopEnabledSampleService : ITransferable //from Wiki
{
    public void Transfer(Account fromAcc, Account toAcc, int amount)
    {
        if (fromAcc.getBalance() < amount)
        {
            throw new InsufficientFundsException();
        }
        fromAcc.withdraw(amount);
        toAcc.deposit(amount);
    }
}
class NoAopSampleService : ITransferable
{
    private string OP_TRANSFER = "Transfer";
    private Database database = new Database();
    private Logger systemLog;

    public void Transfer(Account fromAccount, Account toAccount, int amount)
    {
        if (!getCurrentUser().canPerform(OP_TRANSFER))
        {
            throw new SecurityException();
        }

        if (amount < 0)
        {
            throw new NegativeTransferException();
        }

        if (fromAccount.getBalance() < amount)
        {
            throw new InsufficientFundsException();
        }

        Transaction tx = database.newTransaction();
        try
        {
            fromAccount.withdraw(amount);
            toAccount.deposit(amount);

            tx.commit();
            systemLog.logOperation(OP_TRANSFER, fromAccount, toAccount, amount);
        }
        catch (Exception)
        {
            tx.rollback();
            throw; //rethrow without losing the stack trace
        }
    }
    //...more code
}

It is quite clear that the AOP code is much cleaner to look at, however there is a lot potentially happening that we do not know about. You have to know that the AOP injection or interception is catering for all of the things that the second example dealt with. This is a fundamental problem with AOP: it is not explicit. This obviously can make it very hard to debug and can be confusing for the developer maintaining the code. One way you can get around this is by marking up methods or classes with attributes; this at least gives the user of the code a hint as to what is going on. Many of the AOP providers allow for this. However, sometimes you are just shifting the noise from inside the method to an attribute. How you deal with this is up to you and your team; later on I will offer some ideas on how to manage this.

The purpose of this post is to show how we can achieve the functionality of the verbose code above with reduced noise, yet still be maintainable and somewhat explicit. What we will eventually be using is lambdas to achieve the same functionality. Many .Net Devs use lambdas on a semi regular basis, but many do not know how to write a basic API that uses them, or even what is really going on when they are using a lambda. Bear with me now while we have a code school moment and cover methods, delegates, anonymous methods and lambdas (closures will be covered in another post). If you are comfortable with all of these then I don't really know why you are reading this post; you should know how to solve this problem already.

Method

Right, we all know what a method is; it's a function, something that does something, typically a command or a query. You can pass in parameters and you can get something back from a method. The way we typically use a method is in the named sense, i.e. 5.ToString(); we are calling the ToString method on the integer object 5. The name of the method is “ToString”.

Delegate

A delegate is to a method what a class is to an object: a class defines an object as a delegate defines a method. Typically most code will never need to define a delegate for a given method unless it is passing the method around like an object… read that again; you can pass methods around like objects. This is where delegates become powerful, and this is where the notion of delegates is often misunderstood and often not even known! We will cover more of this later… but for now, here is how you define a delegate and what a method would look like that adheres to a delegate.

public class UsingDelegates
{
    public delegate void MyDelegate();

    public void Main()
    {
        UseADelegate(this.MyMethod);
    }

    private void MyMethod()
    {
        Console.WriteLine("This is My Method!");
    }

    private void UseADelegate(MyDelegate myDelegate)
    {
        Console.WriteLine("Before using my delegate");
        myDelegate();
        Console.WriteLine("After using my delegate");
    }
}
/*Output is:
Before using my delegate
This is My Method!
After using my delegate*/

In this code we expose the public method Main which then calls the UseADelegate method, passing in the address of the MyMethod method. Note that the parameter passed in to the UseADelegate method does not have the typical parentheses associated with the method; that is because we want to pass the method as a delegate, not the returned value of the method. This is hugely significant. You will also notice that the UseADelegate method takes in a variable of type MyDelegate. We have defined MyDelegate as a delegate at the start of the class. When you define a delegate you are defining a signature of a method. The name does not matter (except for readability); the only things that matter are A) whatever can use it must be able to access it (an appropriate accessor) and B) the return type and parameter types are consistent with the methods that you intend to use as the delegate. To me this is similar to classes using interfaces: you don't care what the name of the class that implements the interface is, it just has to implement what the interface says to implement. Delegates are similar, however they are not explicit. A method does not say it implements a delegate in the same way a class says it implements an interface.

The syntax for defining a delegate is

[accessor] delegate [return type] [Custom Delegate Name] ([parameter list]);

e.g. public delegate List<Customer> CustomerFilterDelegate(string filter);

Now any method that returns a list of customers and takes in one string parameter is compliant with this delegate.
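For example (GetCustomersByName and the allCustomers field are hypothetical, just to show the shape; it assumes a Customer class with a Name property):

private List<Customer> allCustomers = new List<Customer>();

private List<Customer> GetCustomersByName(string filter)
{
    var results = new List<Customer>();
    foreach (var customer in allCustomers)
    {
        if (customer.Name.Contains(filter))
        {
            results.Add(customer);
        }
    }
    return results;
}

//elsewhere: the method name alone is assignable to the delegate type
CustomerFilterDelegate filterCustomers = GetCustomersByName;
List<Customer> smiths = filterCustomers("Smith");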

Right, now that I have told you how to define a delegate, I am going to throw a spanner in the works and tell you to never do so… sorry. The reason is that .Net now gives us reusable delegates in the form of Func and Action. Action specifies a delegate with a return type of void, so each of its generic parameters describes a parameter in the signature it is defining. Func is used the same way, however the last generic argument is the return type.

You can now define any reasonable delegate signature with these two generic delegate types. For example the delegate we defined above would now be used as Func<string, List<Customer>> instead of CustomerFilterDelegate. See Framework Design Guidelines for more info.
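A few sketches of how these generic delegates map onto method signatures (the variable names are mine, for illustration):

Action doSomething;                   //matches: void DoSomething()
Action<string> printText;             //matches: void PrintText(string text)
Func<int> getCount;                   //matches: int GetCount()
Func<int, int, int> add;              //matches: int Add(int a, int b)
Func<string, List<Customer>> filter;  //matches: List<Customer> Filter(string filter) - the CustomerFilterDelegate equivalent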

An example of the above code rewritten to be guideline compliant:

public class UsingDelegatesCorrectly
{
    public void Main()
    {
        UseAnAction(this.MyMethod);
    }

    public void MyMethod()
    {
        Console.WriteLine("This is My Method!");
    }

    public void UseAnAction(Action myDelegate)
    {
        Console.WriteLine("Before using my delegate");
        myDelegate();
        Console.WriteLine("After using my delegate");
    }
}

Anonymous Delegates

An anonymous delegate is a method without a name, i.e. it has a body but no name… hmm. As we have mentioned, the name of a method has no relevance to whether it adheres to a delegate definition; it is its signature that counts. Previously we were only using the method name as an effective pointer to the method body. What many people don't know is that you can create a method body without a name, commonly known as an “anonymous method”, “anonymous delegate” or “inline method”, e.g.:

Action myDelegate = delegate()
{
    Console.WriteLine("Hello, World!");
};
myDelegate();//writes "Hello, World!" to the console

You can use an anonymous delegate anywhere you would typically use a named delegate, however you define the method at the point you wish to use it. The syntax for defining an anonymous delegate is

[delegate type] x = delegate([parameter list]) { [body of method including the return statement] };

Note that the return type is not declared; it is checked against the presence and type of the return value in the body of the anonymous method. If there is no return value the delegate is considered to have a return type of void. (Note also that you need an explicit delegate type on the left hand side; var will not compile here as the compiler has nothing to infer the delegate type from.) Below we show how the code above would have been written using anonymous delegates:

public class UsingAnonymousDelegates
{
    public void Main()
    {
        UseADelegate(delegate()
            { Console.WriteLine("This is My Method!"); }
        );
    }

    private void UseADelegate(Action myDelegate)
    {
        Console.WriteLine("Before using my delegate");
        myDelegate();
        Console.WriteLine("After using my delegate");
    }
}

This case shows that we do not have to define a delegate signature (the .Net built-in Action type is suitable) and we do not even need to create a named method!

Lambdas

Anonymous delegates were great when they came out; they saved a lot of code rewriting and promoted better code reuse. However, they were ugly. The majority of the signature still had to be declared and, worst of all, while it was reasonably easy to write anonymous delegates it was almost impossible to read them, making maintenance a PITA.

Introducing lambdas: lambdas are exactly the same as anonymous delegates in functionality, however they have a very different and more readable syntax. Lambdas basically allow the writer of the code to leave out a lot of the method signature without explicitly declaring it. The reason this can be done is because often the signature is already defined, so the lambda can make use of it. Enough chat, let's see what the previous anonymous delegate would look like as a lambda:

Action myDelegate = () => Console.WriteLine("Hello, World!");
myDelegate();

Ok so not a huge difference; we have dropped the keyword “delegate” and added an arrow-looking thing. Perhaps I should show something a little more complex. Firstly let's define a more realistic anonymous delegate using the method from the first example:

Action<Account, Account, int> transfer = delegate(Account fromAccount, Account toAccount, int amount)
{
    if (fromAccount.getBalance() < amount)
    {
        throw new InsufficientFundsException();
    }
    fromAccount.withdraw(amount);
    toAccount.deposit(amount);
};

as a lambda:

Action<Account, Account, int> transfer = (fromAccount, toAccount, amount) =>
{
    if (fromAccount.getBalance() < amount)
    {
        throw new InsufficientFundsException();
    }
    fromAccount.withdraw(amount);
    toAccount.deposit(amount);
};

As you can see the method body is the same; it is just the definition of the parameters that is different, and that is because the types are inferred. Again this may not seem like much at the moment, but the heavily reduced noise has allowed for much more readable framework usage. I would hate to think how my current tests would look in RhinoMocks if I was not using lambdas!

A couple of things I should mention:

  • when using a lambda expression that takes in no parameters, use empty parentheses to signal this is the case, e.g. () => //method body
  • if the method body is a one liner you do not need the curly brackets {}, but you do if there is more than one line!
  • you do not need the parentheses around the parameter name if there is only one parameter; you do if there is more than one
  • if the returned value is a single expression without curly braces you do not even need the return keyword!

(a) =>
{
    return "bob";
}

can be written as

a => "bob"

Just to keep things consistent, here is the Console.WriteLine example using lambdas:

public class UsingLambdas
{
    public void Main()
    {
        UseADelegate(
            () => Console.WriteLine("This is My Method!")
        );
    }

    private void UseADelegate(Action myDelegate)
    {
        Console.WriteLine("Before using my delegate");
        myDelegate();
        Console.WriteLine("After using my delegate");
    }
}

Using Delegation to Achieve AOP-like Coding

Alright, the whole point of this post was to show how you can use plain .Net, without any other libraries, to do AOP-like activities.

Firstly, using lambdas is not as clean as interception, but it is a lot cleaner than the copy and paste (right click inheritance) I see so often. I want to help create better code too, so here are some thoughts on where to use AOP and where to use delegation:

  • Use delegation when you want to be specific and explicit about your intentions (e.g. transactions)
  • Use interception/injection based AOP for things that are truly behind the scenes (e.g. logging)
  • Use attribute based (i.e. explicit) AOP when you want the developer maintaining your code to know that some aspect is taken care of (e.g. security) but you do not want it polluting the method body

Below is an example of what the first example could look like if using a combination of lambdas and AOP:

public class SampleService : BaseService, ITransferable
{
    [SecurityCheck]
    public void Transfer(Account fromAcc, Account toAcc, int amount)
    {
        TransactionWrapper(() =>
        {
            if (fromAcc.getBalance() < amount)
            {
                throw new InsufficientFundsException();
            }

            fromAcc.withdraw(amount);
            toAcc.deposit(amount);
        });
    }
}
//BASE CLASS
internal abstract class BaseService
{
    //shared by derived services (mirrors the field in the verbose example)
    protected Database database = new Database();

    protected void TransactionWrapper(Action wrappedDelegate)
    {
        Transaction tx = database.newTransaction();
        try
        {
            wrappedDelegate();
            tx.commit();
        }
        catch (Exception)
        {
            tx.rollback();
            throw; //rethrow without losing the stack trace
        }
    }
}

Note

  • The logging is nowhere to be seen. I personally hate seeing logging code; it should be hidden away. To me it is pure noise. This would have been taken care of by the AOP framework of choice.
  • Security is kept as subtle as possible without leaving it off the radar. This is not always possible, but if I can I keep it out of the method body and in an attribute.
  • The transaction is dealt with by a separate method that takes in a delegate. This method can now be reused, allowing any other method to take advantage of the pre-existing transaction handling. It can now be pushed into a base class or, if a standard .Net transaction is being created, a static method that anything can use (a sketch of this follows below).
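As a sketch of that static variant, using the standard System.Transactions API (the Transact class name is mine, invented for the example):

using System;
using System.Transactions;

public static class Transact
{
    public static void Run(Action wrappedDelegate)
    {
        using (var scope = new TransactionScope())
        {
            wrappedDelegate();
            scope.Complete(); //commit; disposing without Complete() rolls back
        }
    }
}

//usage from anywhere:
//Transact.Run(() => { fromAcc.withdraw(amount); toAcc.deposit(amount); });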

Personally I like the last example the most. However, implementing it does require a reasonably detailed knowledge of AOP (so interception can be done with or without attributes) and it does require a basic understanding of delegation. Hopefully this post has helped with the latter. Next time you start to see repeated code in your code base, think about whether you could use delegation to clean your code up and start making it more reusable.

Rhys

Many to Many Joins – Revisited

I did a Google search a year later, found my own post and I don't like it.
There are several issues with that post:

Firstly, sending objects over the wire was always a bad idea; the post was trying to dodge that as the team did not want to move to DTOs, which I maintain would have saved us time in the end. The real fix is to map the entities to DTOs and send those over the wire, specific to the service call's needs.
Secondly, many-to-many joins are not cool. There are very few places where many-to-many actually exists. Hiding the join in the entity should have been done, not eliminating the mapping and the joining classes. Redoing this I would have kept the joining class and mapping as a one-to-many/many-to-one relationship.

e.g. to expose a customer's favourite restaurants:

public class Customer
{
    //stuff
    public IEnumerable<Restaurant> GetFavouriteRestaurants()
    {
        foreach (var customerRestaurant in CustomerRestaurants)
        {
            if (customerRestaurant.IsValid())//some check if required
                yield return customerRestaurant.Restaurant;
        }
    }
}

This now hides the notion of a CustomerRestaurant entity from the outside world, as it can be contained within the realm of the domain entity classes (those being Customer and Restaurant).

Well, I guess it's good to review one's work. I'm not happy that this was a decision I made, however acknowledging one's mistakes is an opportunity for growth.

Gallio : Why? When? How?

In a time when TDD and Continuous Integration are becoming commonplace, Gallio is a great tool to have in the tool belt. I have been a fan of the related MbUnit for several years, but only in the last 6 months have I really seen the light in the separation of the Gallio project and why it is such a good thing.

Let's back the truck up a bit and shed some light on what exactly (in my mind) Gallio really is; then we can talk about why you would want to use it.

Gallio is basically an interop facility that acts as a generic test runner. Sure, it can be much more than that, but at the end of the day 99.99% of the people using it will be using it as a means to execute tests. Gallio is actually a project that has broken away from the MbUnit project to provide a neutral test runner for other test frameworks.

So what the hell is a test runner? First consider how we would normally run a (unit) test. We would typically choose a test framework to write tests in; the common APIs that fall into this category are NUnit, MbUnit, xUnit.net and MSTest. These allow us to write classes and methods with attributes that describe what and how we wish to test the system under test (SUT). Writing these tests does not run the tests; we still need something to kick the process off. This is where test runners come into play. TestDriven.Net, ReSharper, the Visual Studio test window and the various separate GUIs that come with the frameworks (e.g. the NUnit GUI runner and Icarus for MbUnit) allow us to select which tests we wish to execute. Unfortunately there is some degree of coupling present here, i.e. the Visual Studio test runner may or may not run your given test framework, and the NUnit GUI surely doesn't run xUnit.net tests. There is also the very large issue of being able to run these from the command line or a script; this is pretty important for continuous integration. This is where Gallio fits.

Gallio provides a “neutral system for .NET that provides a common object model, runtime services and tools (such as test runners) that may be leveraged by any number of test frameworks.” What this means is Gallio can sneak in between your chosen test runner and the test API, providing an abstraction between the two. When I first understood this I was underwhelmed… who cares? Well, apparently I do!

You see, at my current place of work we, like many .Net teams, use MSTest as our test framework. Being the good kids we are, we were keen to get CI up and running, and without TFS properly installed (at the time) we decided to use TeamCity as our build server. It's a great tool and I have no regrets in using it. Unfortunately trying to get MSTest tests to run from a script is a little fiddly and requires an install of a version of Visual Studio that has MSTest on it, on the machine that wants to run the test script. Obviously we want our build server to run the tests for the solution too, so now we had to install Visual Studio onto our build server… this is not cool:

  1. It takes up a lot of space. We had to fight to get a VM created for us to have a build server, and installing VS took up most of the space we were given.
  2. We had just used up one of our licences of Visual Studio. VS is not cheap. Sure, I work for a huge company that haemorrhages cash, but wasting money is still wasting money.

Enter Gallio. With a minor adjustment* to our build script I can now use Gallio to run my MSTest tests from my MSBuild script. This is pretty cool. What this means is that I now have a test framework agnostic build script. If we converted all of our tests to MbUnit I would not have to change my build scripts; MbUnit is supported by Gallio so I am covered. This also means I have nice reports generated for me without crazy MSTest stuff spewing all over my hard drive. The reports are very clean, configurable and human readable. I can show my department manager (who may or may not be technical) the test reports for all of our projects and he can see what state they are in. Having a clean readable report seriously helps in promoting our good work, something an MSBuild log file or nasty MSTest XML would not do so well.

OK so who should be interested in Gallio?

People who “do” CI: having a free test runner on the build server may save you cash, which is a big benefit; I would say, however, that having a neutral runner means easier maintenance, and that is the biggest win here. The build scripts will all use the same syntax. Gallio works with the above-mentioned test frameworks but also integrates with MSBuild, NAnt, NCover, PowerShell, CC.Net and TeamCity.

People who use (or may potentially use) more than one test framework: having Gallio in the mix means running NUnit from Visual Studio is very simple. Pick your poison; TD.Net, ReSharper and VS can all now run that or any other Gallio supported framework.

People who want good consistent Test Reports: This is certainly my opinion, but I really like the Gallio reports. They are clear, easy to navigate and if you are using multiple frameworks you can now have a consistent format to display your reports.

Something to get you started – an MSBuild template for using the Gallio.MSBuildTasks assembly:

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Test">

  <!-- The project skeleton, UsingTask, PropertyGroup and ItemGroup here are assumed
       boilerplate for Gallio.MSBuildTasks; adjust the paths to suit your solution -->
  <UsingTask AssemblyFile="Gallio.MSBuildTasks.dll" TaskName="Gallio" />

  <PropertyGroup>
    <ReportDirectory>Reports\</ReportDirectory>
  </PropertyGroup>

  <ItemGroup>
    <TestAssemblies Include="bin\**\*.Tests.dll" />
  </ItemGroup>

  <Target Name="Test">
    <Gallio
        Assemblies="@(TestAssemblies)"
        IgnoreFailures="true"
        ReportDirectory="$(ReportDirectory)"
        ReportTypes="html"
        ShowReports="true">
      <!-- This tells MSBuild to store the output value of the task's ExitCode property
           into the project's ExitCode property -->
      <Output TaskParameter="ExitCode" PropertyName="ExitCode" />
    </Gallio>
  </Target>

</Project>

Hopefully this helps shed some light on the Gallio project and how it may fit into your build and test process.

*The minor adjustment was actually cleaning up the script, which is also a good thing. It is much clearer what is happening. The MSTest hacks involved small amounts of wand waving.