Monday, December 29, 2008

Ayende - Open Closed Principle

Bored?
Read this 10 page epic blog post:
http://ayende.com/Blog/archive/2008/12/30/discussion-oo-101-solutions-and-the-open-close-principle-at.aspx

Monday, December 22, 2008

Unity - OMG... it just works!

I have started up at a new company back in Perth and am so far feeling pretty good about things. Initially I thought I may be back in "waterfall land", but the guys are all super receptive to new ideas and very keen on moving to a more agile (or at least less waterfall) process.
The other devs and I have come up with a nice MVP/Presenter First framework with a repository pattern for the DAL and we are currently using services with DTOs in the middle. All good so far.
Then I learnt we were using EF... ahhh ok...
Well, luckily the guys here have enough experience with it and have managed to put together a nice, usable repository implementation using EF that is agnostic enough that any non-EF implementation should be able to come along and use it... happy days.
Next step for me was to introduce IoC and AOP to the stack. These are somewhat new concepts here so I wasn't too sure how they would go down. I have a wrapper Container that I use to abstract away all the container-specific gunk and jargon that you can get with some containers. As we were very much in the EF/PnP/EntLib park here I thought I had better at least look into Unity to see if it is a viable option.
My last dealing with M$ IoC was ObjectBuilder in CAB... what a f&?king pig.
Needless to say I was not expecting anything special. I was, however, pleasantly surprised. Unity is super usable and slotted in perfectly to my abstracted container adapter. If you are in a M$ centric software house I HIGHLY recommend trying out Unity. Far too many M$ devs don't use IoC, often because a 3rd party framework that is not from M$ would then be required... the number of times I have been told I can not use OSS on a project... grrr... well now there is no excuse. To see how painless it is, check out PnP guru David Hayden's screencasts and posts. Actually if you use any of the EntLib or PnP stuff you should be subscribed to DH's blog; he is the ninja in this field and pragmatic enough to use (and have extensive knowledge of) other 3rd party frameworks such as the wonderful Castle stack.
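To give an idea of what I mean by the abstracted container adapter, here is a minimal sketch; the IContainer and UnityContainerAdapter names are mine for illustration, only the RegisterType/Resolve calls are Unity's own:

using Microsoft.Practices.Unity;

public interface IContainer
{
    void Register<TContract, TImplementation>() where TImplementation : TContract;
    TContract Resolve<TContract>();
}

public class UnityContainerAdapter : IContainer
{
    // All the Unity-specific gunk stays behind this one class.
    private readonly IUnityContainer container = new UnityContainer();

    public void Register<TContract, TImplementation>() where TImplementation : TContract
    {
        container.RegisterType<TContract, TImplementation>();
    }

    public TContract Resolve<TContract>()
    {
        return container.Resolve<TContract>();
    }
}

Application code only ever sees IContainer, so swapping Unity for Castle (or anything else) becomes a one-class change.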
Next step is to investigate the PnP AOP implementation namely the Policy Injection Application Block... will keep y'all posted

DDD : Value Types

*A follow on from : http://rhysc.blogspot.com/2008/09/when-to-use-enums-vs-objects.html

This is a brief demo of mapping, constructing and using a Value type in the domain. To stick with the cliches we will use orders and order status. To give some structure we will lay out some ground business rules:

  1. When an order is created it is of the status In Process
  2. It is then Approved
  3. Then Shipped
  4. Cancelled and Backordered need to be in the mix too

Ok, not the most robust order system, but that's not the point.

Let's first look at how the domain logic could be handled using an enum... bring on the switch statement!

Ok, so let's see if we can edit an order when the status is an enum:

public class Order
{
    //...
    public bool CanEdit()
    {
        switch (this.OrderStatus)
        {
            case OrderStatus.InProcess:
                return true;
            case OrderStatus.Approved:
                return false;
            case OrderStatus.Shipped:
                return false;
            //etc etc
            default:
                return false;
        }
    }
    //...
}


Ok, that is not exactly scalable... the more statuses we get, the more case statements we have to add. If we add a status we also have to find every place that there is a switch statement using this enum and add the new status as a case. Think about this for a second... really think about it: how many enums do you have that have functionality tied to them? Right.



Now let's look at the same code but using "real" objects; exit the switch and enter the strategy pattern:



public class Order
{
    //...
    public bool CanEdit()
    {
        return this.OrderStatus.CanEditOrder();
    }
    //...
}



Now obviously there needs to be some know-how in this non-enum enum. Let's have a look at how I have done this in the past.



/// <summary>
/// Sales Order Status Enumeration
/// </summary>
public abstract class SalesOrderStatus
{
    #region Statuses
    /// <summary>
    /// InProcess
    /// </summary>
    public static SalesOrderStatus InProcess = new InProcessSalesOrderStatus();

    /// <summary>
    /// Approved
    /// </summary>
    public static SalesOrderStatus Approved = new ApprovedSalesOrderStatus();

    /// <summary>
    /// Backordered
    /// </summary>
    public static SalesOrderStatus Backordered = new BackorderedSalesOrderStatus();

    /// <summary>
    /// Rejected
    /// </summary>
    public static SalesOrderStatus Rejected = new RejectedSalesOrderStatus();

    /// <summary>
    /// Shipped
    /// </summary>
    public static SalesOrderStatus Shipped = new ShippedSalesOrderStatus();

    /// <summary>
    /// Cancelled
    /// </summary>
    public static SalesOrderStatus Cancelled = new CancelledSalesOrderStatus();
    #endregion

    #region Protected members
    /// <summary>
    /// The status description
    /// </summary>
    protected string description;
    #endregion

    #region Properties
    /// <summary>
    /// Gets the description of the order status
    /// </summary>
    /// <value>The description.</value>
    protected virtual string Description
    {
        get { return description; }
    }
    #endregion

    #region Public Methods
    /// <summary>
    /// Determines whether this instance allows the editing of its parent order.
    /// </summary>
    /// <returns>
    ///     <c>true</c> if this instance's parent order can be edited; otherwise, <c>false</c>.
    /// </returns>
    public abstract bool CanEditOrder();
    #endregion

    #region Child Statuses
    private class InProcessSalesOrderStatus : SalesOrderStatus
    {
        public InProcessSalesOrderStatus()
        {
            description = "In Process";
        }

        public override bool CanEditOrder()
        {
            return true;
        }
    }

    private class ApprovedSalesOrderStatus : SalesOrderStatus
    {
        public ApprovedSalesOrderStatus()
        {
            description = "Approved";
        }

        public override bool CanEditOrder()
        {
            return false;
        }
    }

    private class BackorderedSalesOrderStatus : SalesOrderStatus
    {
        public BackorderedSalesOrderStatus()
        {
            description = "Back ordered";
        }

        public override bool CanEditOrder()
        {
            return true;
        }
    }

    private class RejectedSalesOrderStatus : SalesOrderStatus
    {
        public RejectedSalesOrderStatus()
        {
            description = "Rejected";
        }

        public override bool CanEditOrder()
        {
            return false;
        }
    }

    private class ShippedSalesOrderStatus : SalesOrderStatus
    {
        public ShippedSalesOrderStatus()
        {
            description = "Shipped";
        }

        public override bool CanEditOrder()
        {
            return false;
        }
    }

    private class CancelledSalesOrderStatus : SalesOrderStatus
    {
        public CancelledSalesOrderStatus()
        {
            description = "Cancelled";
        }

        public override bool CanEditOrder()
        {
            return false;
        }
    }
    #endregion
}



Note this is especially good for Value objects in a DDD sense and they can be easily mapped to the database. Another benefit is that I do not have to hit the DB to get a status. As they are value objects and have no need for an ID (in the domain), we only map the id in the mapping files; the POCO objects know nothing of ids. I can also create lists for drop-down binding if required... with no need to retrieve from the DB.



I have had some people raise the point "but what if we need a change in the DB for a new status?". Well, that sounds like new logic to me and should mean reworking the logic and a recompile anyway; the difference is that now we are being very explicit about how we handle each status, as the object possesses its own logic.



If you are using NHibernate the mapping would look like this:



<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
  <class name="Sample.SalesOrderStatus,Sample" table="Sales.SalesOrderStatus" abstract="true">
    <id column="SalesOrderStatusID" type="Int32" unsaved-value="0">
      <generator class="native"/>
    </id>
    <discriminator column="SalesOrderStatusID" />
    <property column="Description" type="String" name="Description" not-null="true" length="50" />
    <subclass discriminator-value="1" extends="Sample.SalesOrderStatus,Sample" name="Sample.SalesOrderStatus+InProcessSalesOrderStatus,Sample"/>
    <subclass discriminator-value="2" extends="Sample.SalesOrderStatus,Sample" name="Sample.SalesOrderStatus+ApprovedSalesOrderStatus,Sample"/>
    <subclass discriminator-value="3" extends="Sample.SalesOrderStatus,Sample" name="Sample.SalesOrderStatus+BackorderedSalesOrderStatus,Sample"/>
    <subclass discriminator-value="4" extends="Sample.SalesOrderStatus,Sample" name="Sample.SalesOrderStatus+RejectedSalesOrderStatus,Sample"/>
    <subclass discriminator-value="5" extends="Sample.SalesOrderStatus,Sample" name="Sample.SalesOrderStatus+ShippedSalesOrderStatus,Sample"/>
    <subclass discriminator-value="6" extends="Sample.SalesOrderStatus,Sample" name="Sample.SalesOrderStatus+CancelledSalesOrderStatus,Sample"/>
  </class>
</hibernate-mapping>



The above SalesOrderStatus abstract class can now have static methods on it to do things you may normally hit the DB for, e.g. to get lists of statuses, but now you are confined to the realms of the domain. This makes life easier IMO as there are fewer external dependencies. I have found I use enums very rarely in the domain and usually only have them in the UI for display objects or in DTOs across the wire (e.g. error codes, as an enum falls back to its underlying universal int type).
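To make that concrete, a minimal sketch of such a static helper (the GetAll name is mine, not from the code above), which builds the list from the static instances with no database round trip:

public static IList<SalesOrderStatus> GetAll()
{
    // No DB hit required; the statuses are just the static value objects above.
    return new List<SalesOrderStatus>
    {
        InProcess, Approved, Backordered, Rejected, Shipped, Cancelled
    };
}

(This assumes the method sits inside SalesOrderStatus and System.Collections.Generic is in scope.)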



Try it out, see if you like it and let me know how it goes.

Sunday, December 14, 2008

T4 GAX & GAT: Revisited

I have dabbled with T4, GAX and most specifically the GAT before and never really got any traction. It's a great idea but it is very intricate. Nothing by itself is overly complicated but there are lots of little things that can quickly put you off.

I am trying to set up default MVP solutions for myself. I have a default architecture that I have used for several commercial applications and would like a quick way to replicate it. Typically I follow a Presenter First pattern and interact with a service layer for the model. The service layer may be a proxy for a distributed app or it may be a thin tier for interaction with the model, it doesn't really matter. The fact is I have very similar classes, very similar tests, and very similar structure on many of these apps. This is a perfect time to look at generating these frameworks. One of the big things I want out of this exercise is to get my default build configurations and build scripts predefined. This is a fiddly aspect that I hate doing, but always do because of the time it saves in the long run.

So attempt one will be a WinForms MVP solution without a domain project. I will use MSTest, Rhino Mocks and MSBuild on version 3.5 of the framework. Not sure what IoC I will use yet.

As this is something I want to reuse wherever I work, I don't want to make NH a default aspect. I may include an NH model project later.

So far the whole process has not been overly pleasant. I have had files (as in dozens of them) just get deleted on an attempt to register a project, projects trying to compile templates that are marked as content (i.e. not for compilation), packages that just decide they are no longer packages... so I decided to set up a VM to contain the madness... unfortunately I only have a Vista 64 install with me and VPC can only host 32 bit OSs... oh well, the PnP (Pain 'n Phailures?) impedance continues.

Wish me luck...

Wednesday, December 10, 2008

DDD Confusion

This is mainly a comment on this post:
http://the-software-simpleton.blogspot.com/2008/12/twat-of-ddd-increasing-complexity-in.html

My points:
  • DDD is not needed in every situation.
  • DDD is used when there is a significant amount of business logic. If you are writing CRUD screens DDD is probably not the best option.
  • DDD is hugely beneficial in an Enterprise Solution. This is because there is business logic in business applications.
  • DDD is not hard, if done right. Start simple and add complexity AS REQUIRED.
  • DDD scales. I have a base framework that I use for DDD solutions which lets me get up and running within a few minutes. I still have to write the domain objects, but if these are simple objects this takes a trivial amount of time, yet still leaves me open to scaling to a large solution if and when it is necessary.
Like most architectures, the majority of people get a whiff of it and run with the idea without properly implementing it. This is when you run into problems.
Of the last 5 web applications I have done DDD was involved in only 1 of them. The other 4 took less than a week to complete.
Here is something a little more concrete: I would use my NH based DDD stack for anything I thought would take more than 2 weeks of dev time to complete.
Like anything, the more you do it the more you learn; you pick up good ideas and recognise bad ones. The problem with DDD is you can't just read the book and know it, you have to go through the pain of doing whole projects to get that knowledge.

Thursday, December 4, 2008

Asp.Net

Things I have forgotten about asp.net and web dev:

ASP.Net thinks Safari is old skool and not capable. This is annoying when you can't change the server's .browser files (http://www.big-o.org/?p=20)... meaning some of the web controls don't work too well.

CSS can be a pain in the ass. In fact I hate UI altogether. Relying on customers to give you pictures and content is painful too

Linq 2 XML is a godsend for customers that don't want to pay for a database but still want some dynamic features... my XML and file system interrogation based web control library is growing fast ;)
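For anyone that hasn't tried it, a rough sketch of the sort of thing I mean (the products.xml layout and element names here are made up for illustration):

using System.Linq;
using System.Xml.Linq;

public static class ProductCatalogue
{
    // Serve "dynamic" content from a flat XML file instead of a database.
    public static string[] GetProductNames(string xmlPath)
    {
        return XDocument.Load(xmlPath)
                        .Descendants("product")
                        .Select(p => (string)p.Attribute("name"))
                        .ToArray();
    }
}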

In saying that, building web sites in general is fast. Unfortunately it is all the tedious things that take time. Seriously, I can get a site up in under a day but it's the finicky crap that will annoy me... font changes, image alignment, cross browser quirks... arrrgh!

One site I always forget about is http://www.html-kit.com/favicon/ - it's great for creating your favicon.

So if anyone needs a Ma and Pa web site, holla. I don't promise anything uber wicked, but I can give a quick turnaround... plus my girlfriend's handbags don't pay for themselves... pffft ;)

Tuesday, November 25, 2008

Interfaces are implementation details

I believe hiding away the implementations of a code base is a good thing. I generally try to minimise my interactive surface and make it as user friendly as possible, with descriptive names, considerate arguments etc.

One thing I have not considered until recently is the very fact that calling my interfaces IFoo and not just Foo is, in a sense, giving away an implementation detail. Is this a .Net thing? Possibly. When I asked my Java mate for his coding standard he informed me that it is common for quite the reverse to happen in his workplace. The implementations of the interface are suffixed with "impl" to imply it is an implementation, with the interface having the more user friendly name. Will I change my style? Probably not, as I generally work on other people's code bases and I prefer to stick to language/framework standards, even if they may not be the best (well, sometimes).

On a whole other tangent but still discussing interfaces as implementation details Scott Bellware and Greg Young (2 very smart and very opinionated guys) are tussling over this on twitter as we speak.. well as I write..(25/11/2008 ~7am GMT)

Monday, November 24, 2008

Technical Reviews

As you may know I have recently moved back from London to Perth, due to many reasons including a credit crisis that means London is not the best place for us to be living.

I have been job hunting with favourable feedback, but not a lot of cash in my pocket. The state of affairs the world is in means a lot of companies are waiting for the new year to hire people. This, needless to say, is not so good for me as I need to eat before then.

I am in talks with a few companies that I hold in reasonably high regard and would be happy to have any of them on my CV in years to come, however the interview process is really getting dragged out.

I have been involved in more interview tests this year than in all my previous years combined. This includes writing them, completing them as a benchmark for my current employer and of course completing them in hopes of a new job.

My disdain for them however continues, yet I understand they are somewhat of a necessary evil.

The problems I have are:

  • Most test technical knowledge, i.e. my knowledge of a library, something that becomes obsolete very quickly. I do not hold this in very high regard. Core concepts are much more important to me; I can get someone productive in ASP.Net in a day or two, I can't teach them OO coding that quickly.
  • Tests that test coding ability tend to be short and inane. Ironically these to me are more beneficial, because I get to see the workings of the candidate's mind and their coding style. Still, I have never done a test that dealt with interaction, which is unrealistic for an enterprise developer, and is a place where most intermediate devs fall over. SoC is still something many struggle with.
  • I have never seen a test that tests the ability to write enterprise code... which is what I do; the technologies are implementation details. Design, interaction and domain logic are much more important to me.

How would I fix these? Well, my first approach would be by talking to the candidate. The time it takes to mark the test could be spent talking to the candidate, which you are going to have to do anyway. Keep it brief; in 10-20 minutes I would have a very good understanding of what the candidate knows.

Secondly I would sit with them and get them to write some code. Again, in 20 minutes I will have ascertained what I need. The type of person is, to me, of utmost importance.

If you are taking an interview-type test, be sure to get something out of it. You are investing your time so there should be some sort of return, ideally a job, but at least feedback.

You should be able to tell straight away what you are not comfortable with; today I realised my knowledge of IIS is not up to scratch. I do not need to see my results to know that, I know I got the answers wrong. I have highlighted a weakness. Now it is up to me to decide if that is a weakness I want to address; time is a limited resource, do I spend it on learning IIS? Well I am going to have to, my ignorance is unacceptable. Will I go for guru status? No chance, I don't value it that much.

It is also important to push the interviewer for feedback, critical feedback; personally I want them to be borderline nasty. If they tell me I was awesome and don't give me a job, well then something is wrong. Find out what. If they say you were crap/"not right for the position" then find out what set the others apart from you and what you would need to work on. Be forceful. HR will be fluffy, ask for concrete reasons. Was it because I suck at IIS? Asking for too much in terms of remuneration? Do I smell funny?

Note to recruiters/employers:

Please update your tests at least yearly. Asking questions about

  • IIS 5 when IIS 7 is out,
  • ADO (not ADO.Net)
  • SQL 2000
  • Operating systems that are 8 years old

may give insight on the candidate and their knowledge of legacy stuff, but it also reflects a certain something about your company and what the candidate will come to expect. I will certainly be favouring the company asking about TDD, ORM and C# 3.0 features over one grilling me on VB 6.0, COM interop and Access.

Process refinement: Lean

Lean* is a buzz word I have been hearing for a while, and I know guys like Dave Laribee are right into it, which is really reason enough to have a look. My knowledge of Lean was only anecdotal until I met a lovely young lady in Melbourne who was in fact a Lean specialist. She is an old friend of my partner (Tori) and much to Tori's disgust we talked nerd a fair bit. It was cool to know that she actually had a software and manufacturing background and is applying Lean in the traditional manufacturing sense with a very large manufacturing company.

So after speaking to this Lean specialist and seeing her enthusiasm for the process, I picked up a book I have glanced over a few times on the bookshelves: Implementing Lean Software Development. It is part of the Kent Beck series and all of the books in that series I have read so far have been worth the effort and cost, so I thought "why not?".

What I like about Lean is how it works well with Agile & XP practices; it defines things we already know work, but often the act of defining or providing a framework is in itself beneficial. It is something that can be easy for managers to understand and easy for devs to apply. For the (second to) last project I worked on in the UK this book would have been a welcome addition to the library. Errors in our process were evident, but you still need to define and identify them to fix them; this book would have helped in this regard.

Anyway, my understanding is still growing, but so far so good; the book is well written and should be done by the end of this week, just in time to swap with whatever book Gumble brings me from London... hint hint...

*Uber brief background for those not too familiar with Lean:

Toyota came up with some cool ways to make stuff better; these principles and practices have led Toyota to being one of the most efficient manufacturers on the planet. Core concepts have been extracted from Toyota, and Lean is one of those by-products. Since software dev has similar parallels to manufacturing, some of these principles and practices have been applied to define Lean software development.

Sunday, November 23, 2008

Postsharp and sanity checks

While playing with Postsharp for a validation framework I stumbled upon this.
This is a great little code block that stops sneaky team members referencing layers they should not be referencing. A compile time error will ensue and let them know, for example, that they can not access the DAL via the View projects... happy days!

Thursday, November 20, 2008

Yay!! New books!

Having left the UK and my library behind *weep*, I am now virtually bookless... other than WPF books... So new additions have been added, specifically "Implementing Lean Software Development" and "Applying Domain-Driven Design and Patterns", 2 books I have been looking forward to reading for a while. As I am stuck on public transport or away from my computers a fair bit, conceptual books are probably a better investment than purely technical books; learning WPF without a PC is not that easy and a little hard to retain!

So back to antisocial book reading in the car/train/bus/waiting room/restaurant/movies... um, well yeah.

Wednesday, November 19, 2008

Easy performance increases

Patrick has posted an interesting article on foreach performance stating that a for loop over an array is up to 5 times faster than the same code executed in a foreach on List<T> (assuming the contents are the same, just the container type is different). This won't affect most of us, however if you have a situation with nested loops this could be a significant performance improvement.
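A rough sketch of the kind of comparison Patrick makes (the loop bodies and sizes here are mine; the 5x figure is from his post, not from running this):

using System;
using System.Collections.Generic;
using System.Diagnostics;

class LoopTimings
{
    static void Main()
    {
        int[] array = new int[10000000];
        List<int> list = new List<int>(array);
        long sum = 0;

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < array.Length; i++)
            sum += array[i];
        Console.WriteLine("for over array:       " + sw.ElapsedMilliseconds + "ms");

        sw = Stopwatch.StartNew();
        foreach (int item in list)
            sum += item;
        Console.WriteLine("foreach over List<T>: " + sw.ElapsedMilliseconds + "ms");
    }
}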

Tuesday, November 18, 2008

Web Development

My original roots in .Net were very much in ASP.Net. My first real dev role was in .Net 1.0 on an early e-commerce app that had its early prerelease roots in classic ASP. I really enjoyed ASP.Net but found myself moving away from the front end and more into SQL and middleware as I worked on larger projects.

Over the last couple of years I have not had a lot of commercial involvement with asp.net front end. I had the (dis)pleasure of some win-forms front ends on my last contract but mainly middleware/domain/messaging level stuff.

So I thought I had really fallen out of touch the other day when my partner mentioned that a company she was working for was paying $X for what was basically a brochure-ware web site. This value was around the monthly wage of a dev in this part of the world. I could not believe it. I told her to make sure they got a price breakdown and said (more as an example) that I could probably do it in 2-3 days. The price breakdowns were just silly and I was shown the concept of what they wanted. This was on Monday night. Tuesday afternoon I had a fully functioning site that met all of the requirements and looked pretty good. I gave a demo to the missus and with a few CSS changes the site was done. Hardly a month's work. The other benefit is the site is built in a framework that allows for extension of functionality. The company may need email marketing, blogs, newsletters and basic site management, something that was not available from the other vendor. I could not believe it, the cheeky little buggers!

It was nice to know that my web skills haven't completely gone out the window... CSS, how I have missed you... lol!

Sunday, November 9, 2008

DbC edging closer to mainstream!

As you may be aware I am a big DbC fan. So I am obviously stoked to see a couple of things pop up of late (sorry if this is old news, I am on a big holiday at the moment so my finger is not really on the pulse):

1: The System.Diagnostics.Contracts namespace. Thank goodness this is getting put into the core libraries.
2:http://social.msdn.microsoft.com/Forums/en-US/pex/thread/14115b4d-52c1-4e93-89cd-19db3fd86756/
A temp forum leaning on the PEX* team ;)

My first impressions:
Well, it's good that it is no longer 3rd party (i.e. Spec#), or at least it's moving out of the lab and into real .Net.
The code is in the method body, which I think is ok as an option, but I would like to have it more visible, or perhaps more logically located. Spec#'s incarnation looked pretty good to me.
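For anyone who hasn't seen it, a minimal sketch of what that in-method-body style looks like with System.Diagnostics.Contracts (the Account/Withdraw example is mine, not from the announcement):

using System.Diagnostics.Contracts;

public class Account
{
    private decimal balance;

    public void Withdraw(decimal amount)
    {
        // Preconditions and postconditions live at the top of the method body.
        Contract.Requires(amount > 0);
        Contract.Requires(amount <= balance);
        Contract.Ensures(balance == Contract.OldValue(balance) - amount);

        balance -= amount;
    }
}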

Anyway, this along with TDD should be making for some pretty robust libraries. Looking forward to getting my PC back and having a play.

*PEX is a whitebox testing framework that looks pretty cool too :)

Tuesday, October 28, 2008

Technical Competence and the Interview Process

As I begin the job hunt all over again I am renewed with the realisation that sales skills, although possibly only rarely used in day to day coding, are as important as ever. Over the last 2 days I have been talking with friends, former colleagues, recruiters, HR and connections from all of the above in the Perth IT market regarding jobs.
The people who don't know me just want to know what in-vogue technical skills I have. Buzz-wordy stuff that passes with the wind like WPF, MOSS, Asp.Net MVC etc etc.
These technologies are all great at helping us do our job faster, prettier and more consistently but they only HELP. Core coding concepts, IMO, are of a much greater importance. I am not a WPF guru. Can I learn it? Bet your ass I can, in fact I am now. I give myself 6 weeks to do so and to become good at it. Why six weeks? As I have been around the block a few times I have picked up a few technologies, APIs & languages over the years. What I find is the more technologies, APIs & languages I learn, the faster I can learn the next one. There is such a thing as "learning to learn".
In the past 2 years I feel I have become a much better developer in general.
A lot of that knowledge has come from reading the dozens of books and thousands of blog/forum posts, watching multitudes of videos and seminars and attending and presenting at gigs like the Alt.Net open spaces... basically all the things that have stolen quality time from my friends and loved ones. A lot of that knowledge has come from using 3rd party APIs that both kick ass (like StructureMap) and suck (like Infragistics). You learn from both; however it is with continual exposure to new APIs that you are exposed to overall patterns and styles that you begin to associate as good or bad.
For example: over the last couple of weeks I have found myself talking to a few friends and colleagues about the Law of Demeter. Infragistics is a perfect example of why this law (or guideline) exists. Having 5 properties chained off to change a setting in the grid is stupid and shows poor API design. Had I not used this API I may not have "got" that specific law (lemons => lemonade). ;)

Learning languages like Python, Boo, F# and Ruby highlights the strengths and weaknesses of my own day to day language, C#. It also highlights the fact that Java, VB.Net and C# (or whatever other C-based managed language) are basically the same, and people who argue C# over VB.net or Java over VB.net need to learn a new language. Possibly they just mean they believe the .Net framework or libraries are better than Java's (or vice versa etc) and don't realise it.
Anyway...
Having been in the position of both interviewee and interviewer now, I can see that what I look for is basically the opposite of what recruiters look for... well, to a degree. I assume by the time the candidate has got to me the recruiter has figured out the candidate has
-used .Net for a number of years
-has used the basic technology we are interested in (win, web, services, messaging etc)

What I look for is whether the candidate has just done the one technology, e.g. just Asp.Net and never WinForms. I mean they can not help that their employer dictates what they use, but they can, in their own time, investigate other angles. In this instance I would also ask:
-Have you used an MVC framework such as Monorail or ASP.net MVC?
-When did you use it and which version?
-What did you find different/better?
-Did you ever use it in production?
-Have you used JSP, Ruby on Rails or Groovy on Grails or any other web framework in another language?
-What CMSs have you used... etc etc
I don't really care what the answers here are, I am just fishing to see if this guy is a 9-5er or someone genuinely interested in his job. A 9-5er would take what the boss has given him and stick with it. Someone serious about their job investigates things outside of their comfort zone, finds out why an alternative exists and evaluates if it can help them perform better.

I also look for basic OO skills. If the candidate has never used Asp.Net but has ninja coding skills and is passionate and enthusiastic I would take him on. I can teach a monkey Asp.Net in 3 weeks. I can't teach enthusiasm. By the same notion I also look for knowledge of test and mock APIs and the use of framework libraries (e.g. Castle and Spring) as they tend to show the candidate understands the benefits and quality of code that can come with using such a tool.

Unfortunately this logic does not sit well with recruiters who want to pattern match. So the sales hat goes on to please the gate keepers. My problem is I won't lie to get a job. It has probably cost me some pretty kick ass roles/pay packets, but you still have to look yourself and your co-workers in the eye every day. The problem is the recruiters encourage it. My tack is to try and sell the global picture, it's just not that easy. The same is true for interviews with non-technicals such as PMs. They don't care if I know the intricacies of 5 different IoC containers, they want to know if I delivered production ready software on time, and so they should! However they still have the notion of pattern matching. My lack of an ultra specific skill may have cost me a job yesterday (you will never guess what that skill is!). The problem is the guy would have given the job to someone of much lesser ability who had the skill listed on his resume from a previous job. He has been hunting for months for this person... which to me would mean alarm bells ringing if I was him. If I fell into that situation I would look for good people that you can train. If the good person has the skill set, awesome! If not, get the best you can and train them, FAST.

Fortunately I know people in the market and luckily I haven't burnt any bridges and have forged some pretty good relationships here in Perth. Which is lucky, because Perth is a small, very well connected network. So hopefully mouths start moving and the word gets out that a new developer is in town ;). Otherwise I may follow a good friend's advice and make it a leisurely summer of freelance development and lying on the beach, corona in hand... things could be worse ;)

I would like to hear comments from others... from the perspective of both the candidate and the employer.

cheers
Rhys

Monday, October 20, 2008

Announcing Gallio and MbUnit v3.0.4

Jeff Brown has recently announced the release of the new MbUnit and Gallio suite.

It looks to have quite a large number of improvements. For more details see Jeff's write-up here.

"Im leaving on a jet plane..

..don't know when I'll be back again"

Yep, so it's my last week here in the UK as I pack my stuff and get ready to head back to the land of Aus. First up will be Perth to catch up with my better half, then we decide whether to stay in Perth or venture eastwards and see if Sydney/Melbourne are better suited for us and our plans of world domination.

The plan is to head back to London in the new year, but with the current climate London is not boding well for us (my partner is in finance), so we would rather be in the sun while riding the "crunch" out.

I would like to thank the team at Channel 4 for having me for most of the time I was in the UK. I would especially like to thank the team for placing their faith in me with the VAPS project and letting me run wild on the new architecture & design. I really think it is going to be a lot easier to work with, test and maintain, so stick with it. :)

I would also like to say thanks to the "London .Net Meet Up" guys, the "London DNUG" and most of all the Alt.Net guys, who have been a great sounding board.

Fingers crossed that the world is in a better state and London is a viable option for us in the New Year. Until then it's back to hot sun, cold beer and water polo... oh, and some code here and there ;)

Thursday, October 16, 2008

Loosen Your Domain

Really good 101 on using messaging in a system.
http://www.viddler.com/explore/PhatBoyG/videos/2/

Monday, October 13, 2008

DTO's from XSD's

Something I found out today: xsd.exe from the VS command prompt can create your C# XML representations for you. It's lightning fast and means you can use code like this to get your XML to a CLR type:
MessageBase messageLoader = xmlSerializer.Deserialize(new StringReader(messageDocument.InnerXml)) as MessageBase;

Where MessageBase is my base abstract message class (this is obviously for getting XML messages off a queue). Of course this casting works with interfaces too.
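For anyone who hasn't used it, a rough sketch of the round trip (the schema name and OrderMessage type are made up; the classes would normally come straight out of xsd.exe rather than being hand written):

// xsd.exe Messages.xsd /classes   <-- generates the CLR types from the schema
using System.IO;
using System.Xml.Serialization;

public abstract class MessageBase { }               // stand-in for the generated base type
public class OrderMessage : MessageBase             // stand-in for a generated message type
{
    public int OrderId;
}

public static class MessageReader
{
    public static MessageBase Load(string rawXml)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(OrderMessage));
        return serializer.Deserialize(new StringReader(rawXml)) as MessageBase;
    }
}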

Awesome! Thanks to Jamie for the heads up :)

Sunday, October 12, 2008

Friction on new projects

As the last post mentioned, I have been involved in a new project at work, so it has been a good opportunity to introduce the team to some new concepts and investigate a few things myself.

Firstly the good:

  • The last post mentioned that I have cleaned up the way we do stuff in terms of "code on the page". It's always good to be able to clean up coding standards
  • Introduction of IoC using StructureMap. I chose SM as I felt it was one of the most user friendly for those unfamiliar with IoC. I have used Castle a fair bit; however, what is good for me is not necessarily best for the team.
  • Introduction of Rhino Mocks. Even though at work we are still on .Net 3.0, we felt Rhino Mocks was a good choice over NMock2 as it has more forward compatibility and the strong typing of the delegates is well received (I hate strings)
  • A much more refined, user friendly architecture and overall structure

The Bad

The friction with TFS, MSTest and MSBuild has been, to be honest, amazing. If the same project was using SVN, NAnt and (N/Mb/x)Unit the CI would have been trivial: 30 mins if all went well and maybe a couple of hours if things got a bit hairy. Not so with this nasty little threesome. TFS in itself is pretty good at combining check-ins with bugs/tasks and it has a nice interface. However the price tag, and the pain in the ass it becomes when anything else needs to be done, is just silly. MSTest, well, does anyone like it? As for MSBuild, well I don't mind it, I just prefer NAnt as my main build script.

So if you are going to go Greenfield be sure to check out what you need to do to get the whole system up and running. Unfortunately the powers that be are petrified of any OSS entering the realm of their MS world... I should probably keep quiet that the ORM, IoC, AOP, Logger and who knows what else all fall into the banished category...

So this week I guess I will have the fun of putting my fingers into all of this mess. I am not really sure where to start... CI Factory? Until then we just have a continuous compiler.

Cleaning up bad NHibernate/DDD practices

I have of late managed to be involved in a new green-fields project at work. I am sure everyone can relate to the elation you get when you have the opportunity to work on something new and "do it properly this time". I have had the privilege of being able to set everything up how I wanted. Not being a complete ass, I have tried to get the team involved as much as possible and explain why/how I am doing things along the way. I think the thing I have found most enjoyable is setting up the means to creating a decent domain.

For the first time we have

  • A clear delineation between value types and entities
  • real value types
  • generic repositories that only allow entities*
  • Protected empty constructors on all entities, forcing you to construct the entity with the public "greedy" constructor that actually puts the entity in a valid state (see the sketch after this list)
  • Corrected sub classing of entities when behaviour is different for different types
  • Non insertable type field on subclasses (that match the discriminator column) so the guys can still query "the old way"**
  • Protected setters by default
  • DBC that matches our DB & business constraints, down to the setter level
  • A serious drive to push DOMAIN logic in to the DOMAIN objects
  • Specifications
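As referenced above, a minimal sketch of the constructor and repository points, under my own illustrative names (Customer, IEntity and IRepository are not from the actual project):

public interface IEntity
{
    int Id { get; }
}

public class Customer : IEntity
{
    protected Customer() { }                        // for NHibernate hydration/proxies only

    public Customer(string name, string email)      // the "greedy" constructor: valid state or nothing
    {
        Name = name;
        Email = email;
    }

    public virtual int Id { get; protected set; }
    public virtual string Name { get; protected set; }
    public virtual string Email { get; protected set; }
}

// The generic constraint is what keeps value types (and anything else that
// isn't an entity) out of the repositories.
public interface IRepository<T> where T : IEntity
{
    T GetById(int id);
    void Save(T entity);
}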

This is all pretty basic, but these key aspects were not in our last big project. It may sound like having one repository per aggregate root would be a hard thing, but to be honest it's a good thing, and the headaches it could have saved me on the last project: NHibernate issues (transient state, different object same id, etc.) and, more importantly, undetected business rule errors in the code.

What I am really hoping is that now that the hard work has been done, the team can more clearly see why we do things the way we do them.

*I need buy-in from the team on the whole Entity/Value type thing before dropping the Aggregate Root bombshell

**Again this is perhaps not the best way to do it, but I feel for the moment it's the best way for the team, while moving in the right direction

Tuesday, October 7, 2008

Ruby continues its war path

Ruby is now supported in VS thanks to the guys at SapphireSteel Software

More info:
http://www.infoq.com/news/2008/10/rubyinsteel-personal

So now I can have Python and Ruby from VS... nice.

Sunday, October 5, 2008

MSTest == Fail

I know I have already blogged about my dislike of MSTest, but this POC is really getting to me. The amount of hoops you have to jump through to get it working in anything other than VS is just beyond me. I am seriously thinking of pulling my team over to MbUnit because this is getting out of hand.

  • It's slow
  • Requires Visual Studio. This makes it pretty dirty when setting up build servers. So now our build server has an instance of VS on it... well done M$, you just got an extra license because you suck.
  • Poor integration with MSBuild, M$'s own build script language. Why the hell does this not have native MSTest tasks? Hell, go one step further and make a pluggable interface so any test framework can be directly integrated.
  • Using config settings just doesn't work. MS in all their wisdom decided to exclude files from the build dir that you have explicitly said to include. Well done, morons. Now I have to put dirty hacks in my tests to pick up 3rd party config settings.

Compare this with MbUnit. Um... there are no issues, well none that I have come across. It's fast, it integrates well, it has no weird application requirements... I just don't understand how one of the biggest software manufacturers in the world can balls this up so badly. The fact that the software is for developers just means any mistake will be amplified; you would think they would make it better than anything under the sun... jeez, they even had JUnit, NUnit, MbUnit (and many more) to copy from. You would think a product that you are paying for would at least be as good as the free alternatives.
My disappointment continues.

Thursday, October 2, 2008

Books, Books And Books

I have decided to throw up a list of books I have read over the last year on my website. Up until .Net 3.0 I never really read a lot of tech books; the Google machine answered most of my questions. On the pending release of .Net 3.0 I realised that the web was not going to offer me the answers I wanted and the head start I needed, so I got closer to the source. I grabbed a few of the books and realised the benefit of actually reading up on info BEFORE you need it.

My usual reason for reading articles, blogs etc was because I had a problem and needed to get around it. To be honest that's a pretty bad way to approach your profession. A doctor doesn't go and read a heart surgery blog when he has accidentally torn the bicuspid... well, I hope not.

Another push toward my now ever growing library was Alt.Net Seattle earlier this year. Sometimes you just don't know how much you don't know. Meeting guys like Martin Fowler, Brad Abrams, Udi Dahan, Greg Young, Ayende etc and talking with them face to face, you soon get a reality check. Luckily I was not the only one there that felt this way.

A few of us (and I don't want to drag these guys down to my level) started interrogating the Big Guns on what we need to do to get to the next level. Now we are not schmuck devs, but we realised the way to get up to the next level is some structured learning. The likes of Dru Sellars, Greg Young, Ian Cooper, Udi Dahan and Jarrod Ferguson were incredibly helpful in passing on their recommendations.

Since then I have stepped up my reading from about 1-2 books a year to almost 20 since late April, that's just shy of a book a week! I feel like a sponge, sucking up everything I can get my hands on.
The improvement is clear: the quality of my code has improved, and my ability to acknowledge that MY code may need to be refactored is now apparent. My TDD skills are far superior to the start of the year. DDD is something I actually understand and can implement (whether I do it well is up for debate). Because of my improved domains, Service Oriented Architectures are easier to create and evolve. I am also aware that as a senior/lead dev, coding is only one small area of my job. The ability to release good quality, testable, stable applications, and to do it fast, is something we must manage.
The only problem is what the hell do I do with all these books? Being the travelling man I am, I now have dozens of kilos of books that I can not (cost effectively) take with me back to Australia. Damn, because many of these are books you want to keep around the office or at least easily accessible at home.
Anyway the page is not up yet but it should be under http://www.fullstack.co.uk/articles/library.aspx soon. Check it out. I am only going to put up books I think are worthwhile reading, so if it's on there, go buy the book.

.Net and cloud computing

Looks like Windows (and therefore .Net sans Mono) and MS-SQL will be available soon on the Amazon cloud:
http://www.infoq.com/news/2008/10/EC2-Windows

Wednesday, October 1, 2008

Loose coupling: The Illusion

I must say I totally agree with the sentiment Jeremy has here. Loose coupling is an illusion in most of the projects I have worked on. The project I currently work on has the client UI in a distributed system knowing that we use NHibernate as our ORM. Unbelievable. Needless to say, unit testing this solution is VERY hard! To me, this is the first place where the lack of loose coupling rears its head. If you can not unit test a class due to concrete dependencies rearing their ugly heads, then you either:
  • Deal with the fact that you are not, in fact, loosely coupled or
  • Fix it.
As mentioned in Jeremy's post, having separate assemblies is not loose coupling. At best this forces a direction of flow of control, at worst it hides circular references or creates the illusion of loose coupling. Jeremy doesn't break his solution down to quite the project granularity I do, and nor do others (JP Boodhoo for example is known to have very few projects in a solution). The notion is that you are not trying to hide behind the perception of more assemblies == less coupling. You can also separate the single project into multiple assemblies in your build scripts if required. It then becomes a gentleman's agreement amongst the team that coupling rules will be adhered to. Now this is much easier to police with a tool like NDepend.

Currently I am without NDepend so I still break up my solution into multiple projects, for a couple of reasons. I like to be able to visually see what is going on quickly and I like the default namespacing that is then applied (sure, this can be done with folders too). Probably the aspect I like most however is that I can see what references what at any given time by checking the references (again, we don't have NDepend on this project). By opening the UI project I can now see if Data Access references are made, or if there are WCF references in a Data Access project. Without NDepend this is my last resort for policing the silly things that go on in the current (and no doubt future) projects.

With NDepend I would certainly be moving toward smaller projects. *Thinking out loud* I think a common assembly for the server side (with my default and abstract data access/repository stuff), a server side assembly and a client side assembly. It kinda makes sense. If none of that server side stuff will ever be accessed by another application or assembly then why break it up? Hmmm...

Anyway, on the path to loose(r) coupling consider a couple of things:
  • Are you using a test-first approach? Although it is not necessary, it tends to flush out dependencies early on
  • Are you using a means of Dependency Injection? If you are new'ing up dependencies in line then you have just tightly coupled yourself to an implementation. Check out this for a start on DI, including poor man's DI, which is still infinitely better than no DI, IMO (see the sketch at the end of this post)
  • Code to interfaces, not implementations. I always thought this was a pretty obvious statement, but apparently not. Your DEPENDENCIES should be interfaces. Anything you interact with in terms of actions (i.e. methods or delegates/events) should ideally be via the interface. I very rarely see a point in having DTOs implement an interface...
  • Streamline code that interacts with the unmockable (file system, DB, WCF/Windows services etc); it should be as thin as possible. Try to get your working code out of these classes. This is more of a testing issue, but will also lead to better design.
  • Get NDepend. It is a kick ass tool that I wish my boss would get for me :(
  • Code reviews. Get out of trouble at the first sign; it's very hard to "loosen up" an app once the tight coupling has set in.
Loose coupling is a design goal we should strive for, but I believe it deserves a bit more than just lip service. Get the team on board and explain the benefits. The earlier this begins in the project timeline, obviously, the better.
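To be clear about the poor man's DI point above, a minimal sketch using illustrative types (OrderService, IOrderRepository and friends are mine, not from Jeremy's post):

public class Order { }

public interface IOrderRepository
{
    void Save(Order order);
}

public class NHibernateOrderRepository : IOrderRepository
{
    public void Save(Order order) { /* real persistence lives here */ }
}

public class OrderService
{
    private readonly IOrderRepository repository;

    // The real constructor: the dependency is an interface, supplied from outside
    // (by hand, or by a container such as StructureMap or Castle).
    public OrderService(IOrderRepository repository)
    {
        this.repository = repository;
    }

    // Poor man's DI: a default constructor that picks the production implementation.
    // Not as flexible as a container, but infinitely better than new'ing the
    // dependency up inside every method.
    public OrderService() : this(new NHibernateOrderRepository()) { }

    public void PlaceOrder(Order order)
    {
        repository.Save(order);
    }
}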

Tuesday, September 30, 2008

JQuery + VS

Wow!
http://weblogs.asp.net/scottgu/archive/2008/09/28/jquery-and-microsoft.aspx

Open source projects: Current Favourites

MbUnit: my new preferred test framework
RhinoMocks 3.5: Liking the new syntax
MassTransit: Even if just for making Asynch and MSMQ easier
Suteki: ASP.Net MVC CMS (can I get any more TLAs in there?). I'm currently trying to port it from MVC Preview 3 to Preview 5, but the demos look cool.
MVC Contrib: Even if it's just so I can bolt Castle in... so sweet!
NAnt: I still haven't had any reason to move to the "cooler" *ake tools as NAnt is just easy.

Things that are not Cool:
MSTest, TFS
I hate both of you. :p

Thursday, September 25, 2008

TDD: Fail fast

The failures of TDD for the masses have been brought up numerous times over the last few months. It has been proposed that the masses are not in fact test driven. It seems the majority of those failing to get the benefits of TDD fall into a couple of camps.

One camp is throwing in the occasional test if it happens to cover an area of concern. This is usually not Test Driven but retrofitting tests.

Another camp is the "over testing" camp, going for the "holy grail" of high code coverage, often leading to:

a) Tests that don't test anything that needs to be tested. Common examples I see are testing frameworks you didn't write (e.g. NHibernate or System.IO) or testing that property assignment worked, etc.

b) Brittle, unmaintainable tests that soon become a mass of failing tests.

For some reason it appears the barrier to entry to real TDD is too high. What I plan to cover here are all the mistakes you will probably make when picking up TDD. By acknowledging these mistakes early, hopefully you can overcome them and improve the way you approach testing.

  1. The first mistake many make is not knowing at all what they are doing. Get Kent Beck's TDD book and read it. Don't stop at the 3rd chapter; it really is one of the easiest books I have read in the last few years. It's a quick read, in plain English, and covers 80% of the tests you will be writing. It amuses me that people are too pig-headed to learn how to do something correctly that they are supposed to be doing all day (and are paid to do!)
  2. Decide how you are going to be using tests as part of your design process. Be honest with yourself. If you know that you are not going to be truly TDD and write tests first then don't pretend you will. No one is watching over your back, karma won't bite you. But seriously, if you haven't tried it, start up a side project at home and give it an honest bash. Make the mistakes on something that "doesn't matter" so there is no boss looming over you as you rewrite brittle tests. If you are not going to use TDD you can probably stop reading now. :)
  3. Make TDD part of your evolving, agile design process. TDD for me now is also a major part of my design process and one of the key reasons I use it. I generally have an idea of what I want but often TDD pushes me to a better design than I had first envisaged. Classes and methods become cleaner and more focused, interactions more expressively conveyed.
  4. Decide if you are going to use doubles*. This is an extra aspect of TDD to learn and is usually where things go pear shaped. Not using doubles in an enterprise environment usually means you are doing integration tests, test that call multiple classes, methods and layers. Integration tests can be slow as they often interact with databases, file systems and other services. They can also be brittle as they rely on the environment being in a constant state. Changing something in the database or a class several layers down may incorrectly break a test which means test that are hard to maintain.
  5. Understand Doubles. Whether you use them or not you should try to learn to understand what they mean. Doubles help you write Unit tests when a piece of code has dependencies. A unit test should test a unit, not all the stuff hanging off of it. I am not going to go into great detail here as it is covered by Kent, Martin, Roy, Gerard and the rest of the TDD community with an opinion. The two I use most commonly is the Mock and the Stub. A stub is a place holder that returns stuff when you call a dependencies specific method. Stubs don't cause test to fail, they just allow them to proceed with a more shallow dependency than you would otherwise use. A mock, like a stub, mimics a dependency in returning predefined results from a specific call, however mocks can break a test. Using a mock you are saying I EXPECT this method to be called on this dependency and if it is not, this test should break. This is where lots of people go pear shaped. People tend to "Over Mock". If it is not critical that a dependencies method is called then it is not a mock expectation, it is probably just a stub. See Ayende's write up on how Rhino Mock 3.5 moves to help with this. If you are not using Rhino Mocks, give it a go. The dynamic mock is worth it alone.
  6. Don't over Assert. An Assert is the test frameworks way of asserting the SUT's state. A good clean test will be short and ideally have one assert. Now this is not always the case, however if you are seeing tests with dozens of asserts becoming common place it is time you have a closer look at what you are testing.
  7. Don't over Expect. Following in the same vein as above, if you have more than, say, 3 mock expectations in one test you may need to re think your design as that is a lot of interaction for one piece of code. Again I try to keep my expectations to 1 or 2 per test.
  8. Run test often. Now many of you will be using Visual Studio Team System and therefore may be inclined to use MSTest (the built in test framework). That's cool, as long as it doesn't slow you down. For me, its way too slow. I am currently running a NAnt build script with MbUnit + RhinoMocks to build and test. This thing is fast and only runs my units test, not my integrations test. I run this script every couple of minutes, as that's how long it should take to either write a test or fill in the code to make the test pass. If your "build and test" turn around is more than a few seconds, you probably wont be doing it to often, which obviously affects you adoption to TDD. Some easy wins include: Having a smaller solution (with only projects you need in the build & test process), using a build script instead of VS to build & test and of course minimising your integration tests, focusing on faster unit tests. Its probably wise for me to say that I do have integrations tests, but I try to minimise them and only run them a couple of times a day, not every build cycle. A perfect time to run them would be as part of your check in process (which I assume you do reasonably often).
  9. When a test breaks: Tools down, fix the problem! If you have just made code changes and new Red light come on, this is bad! Only one red (failing test) at a time please! Enough said.
  10. Refactor! Refactoring  means better code. More readable and better performing sounds like great things to me, especially in the safety of a test suite to assure you the end result is still the same. It also highlights if you have brittle tests. When I first wrote tests I didn't refactor that much, as it often broke the tests. This was a mistake. Instead of "not refactoring" I should have addressed the root issue, which was brittle tests. Flying solo while doing this can be hard. That's what forums & user groups are for. Show the world your mistakes so you can stop making them**. You will probably find dozens of other doing the same thing and not realising it.
  11. Help team members get better at testing. Teaching others also helps you gets better as it highlights gaps in your own knowledge. The main benefit is the cross pollination of ideas and concepts with in your team. If one team member is spending a lot of time rewriting tests it is a sign that they are missing a concept. Maybe they are not using doubles correctly? maybe they are retrofitting tests
  12. Keep set ups and tear downs small. If your set ups are massive then your SUT is probably too coarse and needs to be broken up into more manageable bites. Typically my set ups have a fixture-wide mock assignment (for fixtures that focus on the same dependencies) and my teardowns only have a mock verification, if I have one at all (see the sketch after this list).
  13. Don't think TDD will come in a couple of hours. It doesn't. I have heard comments from others, and tend to agree, that to make all the mistakes and get to the point where TDD is a natural progression takes about 6 months of standard 9-5 developer time. If you are an uber nerd maybe shorter, as you may be writing some code at home, or you just bash out more, or you get concepts faster. I wrote my first NUnit test about 18 months ago and first played with a mocking framework 5-6 months later. For most of that time I was running blind and really had no idea what I was doing. Only in the last 6 months have I become very confident in my TDD ability; by reviewing my own and other people's tests you can see where and why you are doing things wrong. I am not a TDD guru, but there is not much that I write now that I can't test (that I want to test!).
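
To tie a few of these points together (doubles, one assert per test, small set ups), here is a rough sketch. It assumes Rhino Mocks 3.5 style syntax and MbUnit attributes, and the IPriceService/IAuditLog/InvoiceCalculator types are invented purely for illustration, so treat it as a sketch rather than a canonical recipe.

using MbUnit.Framework;   // the test framework mentioned above
using Rhino.Mocks;        // assuming the 3.5 AAA-style syntax

// Invented collaborators, purely for illustration.
public interface IPriceService { decimal GetPrice(string sku); }
public interface IAuditLog { void Record(string message); }

public class InvoiceCalculator
{
    private readonly IPriceService prices;
    private readonly IAuditLog audit;

    public InvoiceCalculator(IPriceService prices, IAuditLog audit)
    {
        this.prices = prices;
        this.audit = audit;
    }

    public decimal Total(string sku, int quantity)
    {
        decimal total = prices.GetPrice(sku) * quantity;
        audit.Record("Calculated total for " + sku);
        return total;
    }
}

[TestFixture]
public class InvoiceCalculatorTests
{
    private IPriceService priceStub;
    private IAuditLog auditMock;
    private InvoiceCalculator calculator;

    [SetUp]
    public void SetUp()
    {
        // Fixture-wide double assignment keeps each test small.
        priceStub = MockRepository.GenerateStub<IPriceService>();
        auditMock = MockRepository.GenerateMock<IAuditLog>();
        calculator = new InvoiceCalculator(priceStub, auditMock);
    }

    [Test]
    public void Total_multiplies_unit_price_by_quantity()
    {
        // Stub: just supplies data, it can never fail the test.
        priceStub.Stub(p => p.GetPrice("ABC")).Return(10m);

        decimal total = calculator.Total("ABC", 3);

        // One assert per test keeps the intent obvious.
        Assert.AreEqual(30m, total);
    }

    [Test]
    public void Total_writes_an_audit_entry()
    {
        priceStub.Stub(p => p.GetPrice("ABC")).Return(10m);

        calculator.Total("ABC", 1);

        // Mock: the interaction itself is what we care about.
        auditMock.AssertWasCalled(a => a.Record(Arg<string>.Is.Anything));
    }
}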

Although this is by no means a comprehensive list, it does give you pointers as to where you may have problems with your tests. I figure I would rather fail fast (if I am going to fail at all), so hopefully these pointers give you an idea of which paths you should not be going down.

As a side note: it is now habit for me to write tests as I go. This doesn't come at first, but push through the initial barriers and you will get there. I don't always follow "Red, Green, Refactor", as sometimes I just write the test, write the code and run green. But when I first started I found the red step helpful.

Hope this helps get someone onto the TDD bandwagon a little easier and a little faster.

RhysC

*Doubles include fakes, mocks, spies, dummies and stubs.

**Not all your mistakes, then the world just thinks you are a clown

Tuesday, September 23, 2008

Eset + Asp.net = :(

I installed Eset Smart Security after years of benefit from Nod32 on my XP dev machine. Well, the results were not quite the same. Within 10 days it got uninstalled... 5 of those days I was not even in the same country as my dev box... disappointed to be honest. :(

Alt.Net UK Conference

This weekend the UK Alt.Net conference was held in London. It was a good meet up with some interesting topics. I must say I didn't really know what to expect, as there were not as many "names" attending as there were at Seattle (which is to be expected). Being my second Open Spaces event I felt more comfortable in the type of environment and was able to engage with a bit more confidence, which was good.

Friday night basically covered Alt.Net and what it means to various people. I can't really say I got much out of this other than that some people feel there is an identity crisis and too much bickering. I believe we are just pragmatic, aware and evolving developers, nothing more. I also feel that some of the bickering is just a side effect of the type of people involved (i.e. driven, intelligent and probably very confident) and the means of communication (written, not verbal, in which tone can be ascertained). I think we should just deal with it, minimise it and move on.

Saturday held the actual sessions. I attended:

  • Build/Deployment/CI
  • OSS ESBs (NServiceBus & MassTransit) and BizTalk
  • DDD
  • BDD and acceptance testing

From the Build talk I learnt about some DB tools to help with deployment, which is a stress point for us. It also sounds like TeamCity is fast becoming a preferred CI tool. The point was raised that many teams do not have any means to automatically test post-deployment, i.e. is what we have rolled out actually working correctly? I thought this was particularly important, mainly because I have never really done it.

Unfortunately the ESB talk was a bit light. I was hoping (like most) that there would be some users with pretty deep knowledge of either NServiceBus or MassTransit. There were only two of us that had even used them, so we gave a brief overview of what they are and how we have found using them. I felt the talk got sidelined into when to use asynch operations and then comparing them to BizTalk, which I feel is like comparing apples to oranges, especially given the price tag. A point that was made is that they (NSB & MT) make using MSMQ a lot easier, and to be honest, I think they validate themselves on this functionality alone. Some of the guys wanted a brief demo of MT: check it out here. The original samples are in the download here.

I enjoyed the DDD talk most, where Ian Cooper took the reins a bit, walking through the concepts as the vast majority had not actually read the blue book. This to me was good as it kept everyone on the same page. The number of DDD talks I have had with people that don't know what an aggregate root is is somewhat annoying (and that's only this year). I should clarify that it's not that it's annoying that they don't know, it's annoying that they are using the term DDD inappropriately. It was then highlighted that implementing all aspects of DDD is overkill for many applications, which is rightly so. Aggregate Root, Specification, Anti-corruption Layer, Shared Kernel and Bounded Context were all defined for those not familiar with the book. Discussions about Generic Repositories came up and how marker interfaces could be used to limit their use to Aggregate Roots, which I like (see the sketch below). Messaging came up as a means of not letting domain concerns leak into the application and cleaning up responsibilities. I am a huge ANTI-fan of entities in the application. Use messages where possible, especially in a disconnected scenario. Refactoring is easier, maintenance is easier and the intent is much more clearly defined. A point was raised that NHibernate intrinsically leads to an anaemic domain, which I don't buy for one second. NHibernate does not stop you changing the accessibility of properties (thereby avoiding property chaining), stop you adding rich functionality to entities, or force you to push logic to services. I think I either misunderstood the sentiment or there is a misconception about the functionality of ORMs.
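
On the marker interface point: a minimal sketch of the idea (the interface and entity names here are mine, purely for illustration) is to constrain the generic repository so it can only be closed over aggregate roots:

using System;

// Marker interface: only aggregate roots implement it.
public interface IAggregateRoot
{
}

// The generic repository can only be created for aggregate roots.
public interface IRepository<T> where T : IAggregateRoot
{
    T GetById(Guid id);
    void Save(T item);
}

// Customer is the root of its aggregate, so it gets a repository.
public class Customer : IAggregateRoot
{
    public Guid Id { get; set; }
}

// Address is reached via its root; IRepository<Address> will not compile.
public class Address
{
    public string Line1 { get; set; }
}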

The BDD discussion basically turned into an acceptance testing talk. I am not going to lie, I was tired by then and wanted to go. The chat was relevant, but I had been up since 6am packing, as we were also moving house that day. I think BDD is something that hasn't hit the mainstream as it seems to have too high a barrier to entry. Whether this perception is accurate, I am not sure. I like the idea of BDD but I am yet to jump in feet first; I will take from it what I want and continue along my happy way.

Thanks to everyone who attended, it was enjoyable.

Tuesday, September 16, 2008

SOA and DDD

There was an article posted summarising some views on the intersection of SOA and DDD.
It is funny, but the same discussion was had over the weekend. I felt SOA could flourish from a DDD approach, certainly in my experience. If a well constructed domain has been created then the services can expose the business functionality at the granularity required. As DDD is such an organic process it feels like the services naturally expose themselves. Now this is unfortunately not really the way SOAs are approached, with good reason. You can't just give a dev team an application and ask them to tell the business/architects what services came out of "the evolution of the domain"; we want to be a bit more concrete than that.
I have been lucky (?!?!) in that the application I am working on is to be part of an SOA structure. The definition is largely up in the air at the moment, so as a dev/designer I have significant influence. We can actually define the services exposed based on the upfront and ongoing analysis and user feedback on the existing functionality. Will this bode well when it comes to implementation of the publicly available service? I don't know. I certainly hope so. Although I won't be here when that eventuates, I am very keen to hear about the progress.
Currently we are making progress, albeit with a stack that I don't feel is optimal; the amount of friction we get on a daily basis is hardly bearable:

Domain
  ↓
Synchronous Messaging
  ↓
Application layer

I am interested to hear that others feel the application layer should be behind the messaging system; I would like to investigate this more.

If something is significant enough to warrant a Service Orientated Architecture then I feel a DDD approach is a great fit, with SOA as the higher level architecture and DDD as the design principles guiding the developers to correctly model the business process.

Refactoring to Specification Pattern and Predicates

Code like this doesn't mean anything:


public bool CanBePulled
{
    get
    {
        bool canPull = false;
        if (this.IsOnPlatform)
        {
            IJobItemStatus status = this.jobItemStatus;
            if (status != null)
            {
                canPull =
                    !status.IsDeliveringToChops &&
                    !status.IsDeliveringToPlatform &&
                    !status.IsPullingFromChops &&
                    !status.IsCancelled &&
                    !status.IsPulled &&
                    !status.IsPullRequested &&
                    !status.IsRetired;
            }
        }
        return canPull;
    }
}


It just lists a bunch of boolean properties that don't relay why we care about these fields. Although the following could syntactically do with some reshuffling, I think the underlying business intent is much more easily conveyed:


public bool CanBePulled
{
    get
    {
        AbleToPull spec = new AbleToPull();
        return spec.IsSatisfiedBy(this);
    }
}


using the private specification class:

private class AbleToPull : CompositeSpecification
{
    public override bool IsSatisfiedBy(JobItem candidate)
    {
        ISpecification ableToPull = new OnPlatform().And(new HasProcessed().Not());
        return ableToPull.IsSatisfiedBy(candidate);
    }
}


Here it is now obvious that to be "pulled" you need to be "On Platform" and "Not Processed". To me this conveys much more business intent and therefore becomes much more usable to the developer consuming or maintaining the API.
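
The CompositeSpecification base and its And/Not combinators are not shown above; a minimal sketch of what they might look like (shapes guessed from the usage above, reusing the JobItem type from the snippet, so treat it as illustrative) is:

public interface ISpecification
{
    bool IsSatisfiedBy(JobItem candidate);
}

public abstract class CompositeSpecification : ISpecification
{
    public abstract bool IsSatisfiedBy(JobItem candidate);

    // Both this specification and the other must be satisfied.
    public ISpecification And(ISpecification other)
    {
        return new AndSpecification(this, other);
    }

    // Inverts this specification.
    public ISpecification Not()
    {
        return new NotSpecification(this);
    }

    private class AndSpecification : CompositeSpecification
    {
        private readonly ISpecification left;
        private readonly ISpecification right;

        public AndSpecification(ISpecification left, ISpecification right)
        {
            this.left = left;
            this.right = right;
        }

        public override bool IsSatisfiedBy(JobItem candidate)
        {
            return left.IsSatisfiedBy(candidate) && right.IsSatisfiedBy(candidate);
        }
    }

    private class NotSpecification : CompositeSpecification
    {
        private readonly ISpecification inner;

        public NotSpecification(ISpecification inner)
        {
            this.inner = inner;
        }

        public override bool IsSatisfiedBy(JobItem candidate)
        {
            return !inner.IsSatisfiedBy(candidate);
        }
    }
}

// Leaf specifications like OnPlatform and HasProcessed would each just test
// one fact about the candidate, e.g. return candidate.IsOnPlatform;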


http://en.wikipedia.org/wiki/Specification_pattern

UPDATE: I hate Office as my blog poster... WLW is not functional at work either, so sorry about the nasty formatting... it kinda defeats the purpose of the post.

Thursday, September 11, 2008

Build Configurations

More build fun!
I was having a chat with the lads at work and was surprised that the Debug/Release option in Visual Studio was not a well understood build feature.
Debug and Release are just build configurations that can be modified. You can also add build configurations to fit your needs.
I follow the TreeSurgeon pattern (now out of habit) and create a new configuration, e.g. AutomatedDebug, and set up my build preferences there.
What are the benefits of doing this, I hear you ask?
• Ease of deployment
• Have a configuration for each deployment environment

Those for me are enough reason to do this. This build config and my NAnt script mean I can:
• Build the solution quickly
• Optionally run tests based on category and optionally use code analysis tools (FxCop, NCover etc)
• Have all the dlls in one place
• Have all test dlls in one place, separate from the production dlls
• Have all my test reports in one place
• Have a distribution zip file with all I need (and nothing I don't) to deploy the solution (dlls, config etc)
• Have the containing folder of all of this delete-able, able to be regenerated by the next build, and not required to be in source control

It takes one click to do this.
Well, this probably sounds like it's pretty hard to set up… I know it did to me, because I was very reluctant to set it up… but it really wasn't that bad, and the time I have saved has paid me back 10 fold and continues to do so.

If you want to try this approach on a clean soln, just download TreeSurgeon from CodePlex and get to it. You may have some issues with NCover (I did). There are solutions on the web to fix it, but if you can't be bothered then you can just use NUnit and not worry about code coverage.

Alright, to do this with an existing soln, right click on your soln in VS and select "Configuration Manager…".

Under “Active Solution Config” drop-down select NEW.


You will get a pop up. Give the config a name, e.g. AutomatedDebug, and inherit from Debug (my preference). Make sure the "Create New Project Config" check box is checked so all the projects get access to this build config.



Hit OK and you will now see the active config is AutomatedDebug (or whatever you called it). Close the Configuration Manager and you will also notice that the drop down box for soln config (in the VS tool bar) has been set to AutomatedDebug.

The next step I personally take is setting up all the projects in the solution to build to a standardised place. At the root of the solution (i.e. in the trunk folder) I have a bin folder that is NOT under source control.
In this folder, after a build using the AutomatedDebug configuration, I have "AutomatedDebug", "AutomatedDebugTests", "AutomatedDebugReports" and "AutomatedDebugDist" folders. In each of my production projects (i.e. not test projects etc) I set the build location as a relative path to the AutomatedDebug bin folder, e.g.
..\..\..\bin\AutomatedDebug\
Whereas all of my test projects are built to
..\..\..\bin\AutomatedDebugTests\
I do this to keep things clean for me; you can just dump everything in one folder if you want.


This now means you can call a build on your soln file using the new config from NAnt (or whatever build tool you use).
Snippet:

<target name="compile" description="Compiles using the AutomatedDebug Configuration">
<msbuild project="${solnpath}\Fullstack.ExampleBuild.sln">
<property name="Configuration" value="AutomatedDebug" />
</msbuild>
</target>


The other folders are created when running tests and compressing the relevant files into a zip file.

Deploying is now very easy, as is rerunning tests, as all your tests are in one place; however, to be honest, the targets I use by default build and run all tests every time.

Application != Domain

After a week of major refactoring to the architecture and general design of the application I am currently working on, I have noticed, amongst many things, the confusion of application logic and domain logic.
The application in question is reasonably complex, dealing with somewhat complex work flows, legal requirements, provider specific implementations of certain tasks etc, and so "Domain Driven Design" is an ideal approach. Unfortunately a little knowledge is a bad thing. There is a general bad habit at my current work place of using terms inappropriately. Terms like "Domain" (used as: server side) and "Agile" (used as: make it up as we go and don't document anything) etc are thrown around without many of the people involved understanding what they truly are. It is a situation we are trying to change, however communication amongst us all needs to improve first…

Anyway, one of the major things I have noticed is that what we have created, server side, is a pseudo domain. It has its flaws (e.g. too much logic in services, creating an unnecessarily anaemic domain) but it basically works; it does what is required. Unfortunately it exposes too much of its inner workings to the outside world. This was originally a design decision to make an application that "could be connected or disconnected", so the client used domain entities and, when disconnected, passed entities across the wire. This meant a client proxy that mirrored the domain services... which also led to lazy creation of services, exposing unnecessary options to the client. --NB: this is not my idea, just the reason given to me as to why it was done like this.--
What this also led to was the leaking of domain logic even further up the stack into the actual application. This to me is completely unacceptable. I may preach on about this, but to me it is another reason to use DTOs.

DTOs separate the application from the intricate workings of the domain, providing what is required for the given operation. They do not convey actions, only state.
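
As a tiny illustration (the names are invented), a DTO is just a bag of state shaped for one operation:

// Carries only what a "customer summary" screen needs; no behaviour, no domain references.
public class CustomerSummaryDto
{
    public CustomerSummaryDto(int customerId, string displayName, int openJobCount)
    {
        CustomerId = customerId;
        DisplayName = displayName;
        OpenJobCount = openJobCount;
    }

    // State only; private setters keep the DTO from being abused as a working object.
    public int CustomerId { get; private set; }
    public string DisplayName { get; private set; }
    public int OpenJobCount { get; private set; }
}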

Correctly set up DTOs will have suitable setters and getters so should not be easily abused. This also allows the application developer, if required/desired, to transform these DTOs into application "entities" that have smarts etc. Using DTOs, to many, sounds like a tonne of unnecessary work.
I (obviously) disagree.
Although I am a contractor (read: mercenary) I still believe in delivering value to the client, which in my case is first and foremost the company I work for at the given time, and then their respective client. Value is not only given by building an application, in a time frame, that also works*; I place a huge importance on maintainability. The couple dozen guys sitting in front of me are all maintaining applications written over the last 10 years. Looking at the quality of code in the company, well written software could possibly have halved that number. I believe correct separation of concerns is key to the goal of maintainable software.

Separating the application logic now becomes even more important. Application logic deals much more with user experience and flow.
The users don’t care that when they create X
• Y must be notified
• Z is processed
and nor should the application.
Domain logic should deal with fundamental business concerns. The example of "when you create X then Y must be notified and Z is processed" is, to me, quite clearly a domain issue. The application then only needs to care about giving the domain what it needs to correctly create X.
With this type of separation the company in general can move closer to providing a legitimate SOA styled architecture, which can never be achieved with such a leaky domain.

Now none of this is ground breaking stuff, but it amazes me that this mistake occurs so often. Anything larger than a basic web site should probably be separating the application and domain logic. Certainly for anything Enterprise level, this is the first thing I would be doing.

For more information about this read DDD by Eric Evans. Anyone involved in Enterprise level, SOA or Distributed systems needs to read it.


*and is tested and is documented and ….

When to use Enums vs objects

Enums are a touchy point with .Net developers. There are the pure OO types that detest the use of them, and then the perhaps more MS inclined that love the little buggers.
I will admit that I am more of the latter, but I have been rethinking my use of them lately and think I have settled on a few rules of thumb that I may start to follow, which of course I would like your thoughts on.

Enums in the domain
Enums easily map to reference tables in most ORMs, so there is an easy win here. Unfortunately I am starting to lean towards not using Enums in the domain. The presence of Enums usually means different ways of handling certain scenarios, and instead of using ugly switch statements in the domain I am going to try to move to using objects over Enums, which may help with using more robust strategy patterns.
These objects are still easily mapped using discriminators, which allows domain functionality in these new, more DDD styled value types.
Possibly one approach is to start using Enums in the initial stages of mapping and, as functionality grows, refactor to objects as necessary (a rough sketch of what that enables follows below).
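
As a rough sketch of what that refactoring buys you (OccupationKind and RequiresTimesheet are invented purely for illustration), the behaviour moves onto the type itself instead of living in switch statements scattered around the domain:

public abstract class OccupationKind
{
    public static readonly OccupationKind Developer = new DeveloperKind();
    public static readonly OccupationKind BusDriver = new BusDriverKind();

    // Behaviour lives on the type, so callers never switch on a code.
    public abstract bool RequiresTimesheet { get; }

    private class DeveloperKind : OccupationKind
    {
        public override bool RequiresTimesheet { get { return true; } }
    }

    private class BusDriverKind : OccupationKind
    {
        public override bool RequiresTimesheet { get { return false; } }
    }
}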

Enums over the wire
Enums over the wire I am completely OK with. Provided the Enums are well documented, these little buggers just go across as the underlying value type you have assigned (commonly int). This keeps message sizes down and allows the client to create an Enum on the receiving side to map to the given values. NServiceBus is an example of where this happens (for error codes, IIRC).
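
A small, invented example of the idea: only the underlying int travels over the wire, and the receiving side simply casts it back onto its own copy of the enum.

// Both sides document these values; only the int travels over the wire.
public enum ErrorCode
{
    None = 0,
    Timeout = 1,
    ValidationFailed = 2
}

// The message keeps the payload small by carrying the raw value.
public class ProcessingFailedMessage
{
    public int ErrorCode;
}

public class ClientSideHandler
{
    public void Handle(ProcessingFailedMessage message)
    {
        // Re-create the enum on the receiving side.
        ErrorCode code = (ErrorCode)message.ErrorCode;
        // ...react to the code
    }
}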

Enums in the application
I think this is where I would be most pragmatic with my approach. A lot of application developers, especially in the .Net world, are more than happy to deal with Enums, and small switch statements in the application may actually be easier for many to maintain. Enums may also be easier to deal with in UI displays, like drop downs, as many people have standardised helpers to manipulate them (see the sketch below). Again it really depends on the situation and how much logic is dealt with on the client/application.
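
For example, a typical helper (hypothetical, but built only on the standard Enum API) that shapes an enum for a drop down might look like:

using System;
using System.Collections.Generic;

public static class EnumHelper
{
    // Returns name/value pairs suitable for binding to a drop down list.
    public static IDictionary<string, int> ToDisplayList<TEnum>()
    {
        var items = new Dictionary<string, int>();
        foreach (TEnum value in Enum.GetValues(typeof(TEnum)))
        {
            items.Add(value.ToString(), Convert.ToInt32(value));
        }
        return items;
    }
}

// Usage: bind EnumHelper.ToDisplayList<OccupationEnum>() to a drop down's data source.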


Again, I hope I will take a reasonably pragmatic approach to this. Hard and fast rules often mean you are unnecessarily painting yourself into a corner.

For those wondering what the hell I am talking about when using objects as Enums, this nasty code gives a vague idea. Note that you can now subclass the type, providing type specific logic.

class Program
{
    static void Main(string[] args)
    {
        Person bob = new Person(OccupationType.Developer, OccupationEnum.Developer);
        //do other stuff...
    }

    public class Person
    {
        OccupationType occupation;
        OccupationEnum occupationEnum;

        public Person(OccupationType occupation, OccupationEnum occupationEnum)
        {
            this.occupation = occupation;
            this.occupationEnum = occupationEnum;
        }
    }

    public class OccupationType
    {
        public static OccupationType RockStar = new OccupationType();
        public static OccupationType Developer = new OccupationType();
        public static OccupationType BusDriver = new OccupationType();
        public static OccupationType Maid = new OccupationType();
    }

    public enum OccupationEnum
    {
        RockStar,
        Developer,
        BusDriver,
        Maid
    }
}