Saturday, April 25, 2009

Models inc

Recently I have noticed there seems to be some confusion over what the term "model" means. I am happy to admit I am not the guru on all things code, but I am happy enough to put it out there that there is not necessarily one model per system. I have a feeling that a lot of this confusion has come from the MVC, MVP and DDD wave currently sweeping the world. A multitude of examples showing a possible way of creating an MVC application will use just the one model, i.e. the controller talks directly to the repositories and the objects retrieved are passed to the view. This scenario is fine for web applications that have little need to scale and are effectively bound to a 1-2 tier architecture. The new ASP.NET MVC + LINQ to SQL is a fantastic candidate for this and allows you to get a testable solution up and running in no time.

But what if you are using WPF, with an application server using an ORM for persistence and WCF to get the info to the client? Reusing the ORM-specific objects is asking for a messy solution. To me this is where it becomes very important to define your models. In this common three tier style of application architecture I can immediately see three types of "models". First and foremost, the Domain Model. These are the business classes that make up the model reflecting the "business truths"; this is where the business rules are enforced at their utmost. Your ORM will interact with these domain entities and value types and map them appropriately to your persistence layer, which is most likely your relational database. This closely follows the core concepts of DDD. This is all well and good, but it is often where people stop in terms of models. Objects such as Customer and OrderLine in my mind belong within the bounds of the domain. I have been hit, as have many, by trying to send these objects to the client to be "reused". This is a bad idea.

Let's play devil's advocate and say that we will distribute these domain objects. Let's say we are also using an ORM that allows attribute-defined mapping. Let's also say we wish to mark up the objects the old school way, with data and member contracts for WCF. Let's also say we are using WPF and want our binding support. Straight away the class is going to be bloated with infrastructural concerns. It is going to look like a mess. What if this object is sent to the client and a property that is marked as lazy loaded is referenced? How is this object going to get that information? Is it going to jump back across the wire and get the data... just for a lazy load operation?

I am obviously pushing for something here: separate your concerns. Domain objects should remain domain objects. The objects that get passed across the wire should be DTOs. These are incredibly simple and are just data holders. The service layer will convert the domain object to the DTO depending on the operation. For example, when returning a list of objects it may not be important to send all of the object's information; perhaps just the Id, Name and Description would suffice. If you are returning a single item, however, it is likely that more detailed information is required. This conversion scares people off. It sounds like too much hard work. It is not. It is trivial and easily tested. Please do not use the excuse of "it is too much overhead", as this is the easiest code you will write. Further to this, you may find yourself using specific DTOs for specific service interactions. This is a good thing. Intention revealing interfaces are good. Creating these will most likely save you a lot of maintenance time later on down the track. For example you may have a CustomerDto for lists and a CustomerDetailedDto for single instances, etc. (possibly not the best names used here, sorry).
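To make the translation concrete, here is a minimal sketch of what a service layer conversion might look like. The Customer/CustomerDto names and properties are illustrative only, not from any real project:

```csharp
// Domain object: lives behind the service layer, enforces business rules.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    // ... child collections, invariants, behaviour etc. stay here
}

// Wire-level DTO: a dumb data holder, cheap to serialize.
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
}

// The service layer owns the (trivial, easily tested) translation.
public static class CustomerTranslator
{
    public static CustomerDto ToDto(Customer customer)
    {
        return new CustomerDto
        {
            Id = customer.Id,
            Name = customer.Name,
            Description = customer.Description
        };
    }
}
```

That really is all there is to it; the "overhead" is a handful of property assignments per operation.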

Once the DTOs are passed over to the application tier they can then be used as-is, or, if there are more application specific needs than a simple DTO can provide, you can create an Application Model. This application model is, as the name suggests, specific to the application. A web application model will most likely be subtly different to a WPF application model, with infrastructural additions (i.e. data binding concerns) being the most prominent difference.
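For a WPF client, the application model's main job is usually binding support. A sketch of one way to wrap a DTO, assuming a CustomerDto with a Name property (all names here are hypothetical):

```csharp
using System.ComponentModel;

// Hypothetical DTO as it arrived over the wire.
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// WPF application model: wraps the DTO and adds change notification
// so the view can data bind to it.
public class CustomerModel : INotifyPropertyChanged
{
    private readonly CustomerDto _dto;

    public CustomerModel(CustomerDto dto)
    {
        _dto = dto;
    }

    public string Name
    {
        get { return _dto.Name; }
        set
        {
            if (_dto.Name == value) return;
            _dto.Name = value;
            OnPropertyChanged("Name");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```

A web application model would carry none of this; that difference is exactly why the application model belongs in the application tier and not on the wire.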

This certainly appears to be a fair bit of extra work, however you will end up with a design that is incredibly simple at each layer and very easy to maintain. Each concern now has only one reason to change, and you can easily facilitate multiple people working on vertical and horizontal aspects of the stack. To me the ease of working with a stack like this, and the significantly reduced maintenance costs, will push me to consider this approach very early on if the application is moving towards a 3+ tier design. I would strongly recommend you do the same.

Friday, April 24, 2009

Simple reusable DTO factory methods

I have just found a little bug in my app that came down to code duplication: a DTO was not being properly hydrated when it was translated from a domain object in the service layer. My DTOs are just objects with auto properties; no business logic methods, just data carriers. I sometimes need a basic DTO with just simple info (i.e. for lists) and a more detailed DTO including the object's child collections (as DTO collections) for more detailed views. The problem was that when I added a field to the domain object I then had to modify my factory (and test) to ensure the new field was mapped. What I forgot to do was to also do it for my detailed DTO. I wrote the test and realised that I was doing the exact same work in two places, which was one of the reasons behind the bug. As I prefer not to use anything other than a default constructor for DTOs I was in a bit of a quandary. Other than using JBogard's AutoMapper I was not sure how to tackle this.
My goodness, i may actually have to engage my brain!
Well the result was incredibly simple.
The detailed DTO (xyzDetailedDto) inherits from the normal DTO (xyzDto), so I just created a private generic factory method in the publicly exposed extension method class:

private static T Create<T>(xyzDomainObject domainObj) where T : xyzDto, new()
{
    return new T
    {
        Id = domainObj.Id,
        Name = domainObj.Name,
        Details = domainObj.Details,
        OtherThing = domainObj.OtherThing
    };
}
As xyzDetailedDto inherits from xyzDto I can reuse the creation method and get back the correct type.
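The publicly exposed extension methods can then both delegate to that single private factory, with only the detailed version doing the extra child-collection work. A full sketch using the same hypothetical xyz names (the property set here is trimmed down for brevity):

```csharp
using System.Collections.Generic;

public class xyzDomainObject
{
    public int Id { get; set; }
    public string Name { get; set; }
    public List<string> Children { get; set; }
}

public class xyzDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class xyzDetailedDto : xyzDto
{
    public List<string> Children { get; set; }
}

public static class xyzDtoExtensions
{
    public static xyzDto ToDto(this xyzDomainObject domainObj)
    {
        return Create<xyzDto>(domainObj);
    }

    public static xyzDetailedDto ToDetailedDto(this xyzDomainObject domainObj)
    {
        var dto = Create<xyzDetailedDto>(domainObj);
        // only the detailed DTO hydrates the child collections
        dto.Children = domainObj.Children;
        return dto;
    }

    // the single place where the shared fields are mapped;
    // add a field to the domain object and you update this once
    private static T Create<T>(xyzDomainObject domainObj) where T : xyzDto, new()
    {
        return new T
        {
            Id = domainObj.Id,
            Name = domainObj.Name
        };
    }
}
```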
This all seems very simple and silly now, but it has drastically cleaned up my translation layer :)

Monday, April 13, 2009

MEF: The Double edged sword

I am currently investigating the workings of MEF, the forthcoming Managed Extensibility Framework that aims to make it easy to facilitate plug-ins for frameworks that lend themselves to being open to such extensions. Visual Studio is likely to pop up as a typical example, as much of what MEF is doing will be used in VS2010 and should be a great way for the M$ lads to dog-food MEF.

What I have been running in to is the blurring of the lines of MEF and IoC, which I think will hit a lot of people. A large reason for this is the similarity in the usage of MEF and a typical IoC container:

var things = container.GetExportedObjects<IThingIWant>();

My take on MEF, and I am paraphrasing somewhat from Glenn Block, is that I will want to use MEF to help me deal with unknown components while I let IoC deal with the known. Unfortunately what I see is MEF being used as just another IoC container. Now, the demos out there are trivial in nature so it is not really fair to pick them apart, but there are people asking things like "should I use StructureMap or MEF on my next project?"... to me that is quite an odd question, as the two are not mutually exclusive.

  • MEF should be used when there may (or may not) be extensions available for your host application to consume; these parts are unknown and should be treated as such.

  • IoC should be used when there are implementations of a given service contract* that your application needs to consume. Generally these are well defined in their contracts and it is the implementation details we are trying to separate.

Another way to look at it: IoC should deal with the internal wiring, and MEF bolts extra stuff on. The pain I am currently feeling is how to wire up (in terms of IoC) my extensions. The host application should certainly not explicitly know about the extension components... would I have a wiring-up module in my extensions? At the moment I am almost tempted to have an export part called IoCRegistration that has an IoC container registration aspect to it and will be called on app start up... hmmm... I will have to think about this.
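One possible shape for that idea is sketched below. IIoCRegistration and the container interface are my own invention, and the MEF composition call is only hinted at in a comment, so treat this purely as a thought experiment rather than a recommendation:

```csharp
using System;
using System.Collections.Generic;

// Minimal stand-in for whatever IoC container the host uses.
public interface IIoCContainer
{
    void Register(Type service, Type component);
}

// The one contract the host knows about; each extension exports
// an implementation of this (e.g. via MEF's [Export] attribute).
public interface IIoCRegistration
{
    void Register(IIoCContainer container);
}

// Inside a hypothetical extension assembly:
public interface IReportService { }
public class ReportService : IReportService { }

public class ReportingModuleRegistration : IIoCRegistration
{
    public void Register(IIoCContainer container)
    {
        // the extension wires its own services into the host's container
        container.Register(typeof(IReportService), typeof(ReportService));
    }
}

// On app start-up the host would compose the registrations with MEF
// and run each one, something like:
//   foreach (var reg in mefContainer.GetExportedObjects<IIoCRegistration>())
//       reg.Register(iocContainer);
```

The host stays ignorant of the concrete extension components; it only ever sees the registration contract.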

I really hope a lot of the dust settles for MEF with Preview 5 being released; this needs to be clearly defined prior to being released to the masses. IoC is currently a buzz word, which means it is "cool" and therefore dangerous. Once its use settles in the .Net world, sanity should prevail again. Hopefully MEF is not so close to the buzz that it gets dragged in.

* I use the term Service in the Castle sense: the Service is the interface and the Component is the implementation.

Saturday, April 11, 2009

Explicit roles and pipelining strategies

After watching an excellent presentation by Udi Dahan this morning I have rethought some of my infrastructure concerns and the way I handle certain aspects of the generic stack that I heavily lean on. One example that is relatively low hanging fruit is the persistence mechanism.

As a bit of background: I use the service locator pattern heavily in my code where dependency injection is not appropriate, which keeps things clean by lessening the knowledge of the underlying mechanisms and infrastructure concerns. One good example of where this is currently used is in my application presentation level code to assist in navigation. We call a basic method:

NavigateTo<T>(Action<T> preInit) where T: IPresenter

The service locator gets a presenter of type T and the DI container (which is the same thing as the service locator) instantiates the presenter with its view and any other dependencies. Based on the type of the view the presenter has, the navigation implementation displays it accordingly. As the application expands we can extend this to do more specific actions, however the code calling NavigateTo does not have to know how the views are arranged.
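A rough sketch of how NavigateTo can sit on top of the locator. IPresenter, the Resolve call and the display step are all placeholders for whatever your stack actually provides:

```csharp
using System;

public interface IPresenter
{
    // view wiring etc. would live here in a real stack
}

// Stand-in for the real service locator / DI container.
public static class ServiceLocator
{
    public static T Resolve<T>()
    {
        // the real container would build the presenter with its
        // view and any other dependencies injected
        return Activator.CreateInstance<T>();
    }
}

public static class Navigator
{
    public static void NavigateTo<T>(Action<T> preInit) where T : IPresenter
    {
        var presenter = ServiceLocator.Resolve<T>();

        // caller-supplied initialisation before the view is shown
        if (preInit != null)
            preInit(presenter);

        Display(presenter);
    }

    private static void Display(IPresenter presenter)
    {
        // display logic keyed off the view type lives here,
        // hidden from the calling code
    }
}
```

Callers just write `Navigator.NavigateTo<CustomerPresenter>(p => p.Load(customerId));` and never learn how views are arranged.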

The part that I am most interested in is a very specific example that Udi raised in his talk: persistence.

I have been involved in a couple of projects that did exactly what Udi's old school example did. We had an IEntity interface with a Validate method contract on it. Every one of our classes had to override this, and it was messy when it came to validating children, for the exact reason Udi mentioned. I think at one stage we even had reflection getting jammed in... it was a mess. Looking back, a lot of this could have been cleaned up by implementing the IValidate<IEntity> that Udi proposed. This validator can be incorporated into the concrete persistence mechanism as part of a persistence pipeline.

Calling IRepository<IEntity>.Persist(IEntity entity) would under the covers also potentially call a bunch of other infrastructure concerns:

=> ILog.Log("IRepository<IEntity>.Persist(IEntity entity)", entity, user)

=> IValidateEntity<IEntity>.Validate(entity)

=> IPersistSecurity<IEntity>.IsValid(entity, user)

=> IAudit<IEntity>.Audit(entity, user)
Each one of these infrastructure concerns can be left generic, allowing a service locator to give you the concrete implementation for the type. E.g. the IValidateEntity<Customer> may be a customer validator that just calls the Validate method on the customer itself... or it may interrogate the customer's getters and evaluate based on those values. It may even ask the service locator for an instance of IValidateEntity<Order> and validate each of the orders in the customer it has been passed. How it is done is no longer up to the Customer, and it is certainly not up to the repository; validation is now separated cleanly into its own role.

NB: The fact that saving an address means the service locator asks for an IPersistSecurity<Address> type, which may not exist, is great! If there is no defined IPersistSecurity<Address> then we can explicitly say there is a default return value of "IsValid = true" using a null type (or however you want to implement it). The infrastructure concerns can be pushed aside and dealt with if and when required.
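The null type fallback can be sketched like this; IValidateEntity follows the naming in the post, while the method shape and the toy locator are assumptions of mine:

```csharp
using System;
using System.Collections.Generic;

public interface IValidateEntity<T>
{
    bool IsValid(T entity);
}

// Null object: used when no validator has been registered for T.
// "No rules defined" is explicitly treated as valid.
public class NullValidator<T> : IValidateEntity<T>
{
    public bool IsValid(T entity)
    {
        return true;
    }
}

// A toy locator illustrating the fallback behaviour.
public static class ValidatorLocator
{
    private static readonly Dictionary<Type, object> Registered =
        new Dictionary<Type, object>();

    public static void Register<T>(IValidateEntity<T> validator)
    {
        Registered[typeof(T)] = validator;
    }

    public static IValidateEntity<T> Resolve<T>()
    {
        object validator;
        if (Registered.TryGetValue(typeof(T), out validator))
            return (IValidateEntity<T>)validator;

        // nothing defined for T: fall back to the null object
        return new NullValidator<T>();
    }
}
```

The persistence pipeline always gets *some* validator back and never needs a null check or a special case.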

This also raises the question of AOP and policy injection. The pseudo code above implies that we call methods on each of these interfaces inline. This does not have to be inline code; it can be added at run time and configured on the fly depending on the application in question.

Now our Entity can focus on what it needs to do and not worry about the myriad of other infrastructure concerns that can be dumped onto it. I am looking forward to this simple modification that should clean things up nicely.

Saturday, April 4, 2009

Agile Documentation and Stake Holder Engagement

My continuing battle to heavily restrict documentation continues at my current place of work.

We are a very waterfall oriented business by the sheer nature of the sector we operate in. We are not a software development company; we get oil and gas out of the ground. Exploration, building oil rigs, getting the resources out and selling them, and cleaning up afterwards when there are no resources left is, by nature, very waterfall-ish. It requires big design up front.

Fortunately I work in a small projects team where we have a reasonable amount of freedom for self governance. Since I have been involved (and most likely for several months prior), the developers I work with and I have been pushing for an improved engagement and development process, as the waterfall approach has only ever failed to deliver. We have tried to implement more and more agile techniques with great (but isolated) success. TDD, DDD, CI and some aspects of Scrum have been taken up, primarily by the developers. Engaging the non technical staff has been problematic to say the least. This primarily revolves around the engagement and documentation processes, and the slow uptake of and lack of interest in process improvement among the non technical staff.

Firstly there is the personal disagreement over the amount of documentation that is required for a project. I have no problem with this, as I believe healthy conflict, and the healthy resolution of those conflicts, usually results in a better working environment. My thoughts are: given we are a small projects team (I have been involved in 3 projects already, 2 of which are deployed and one is half done), there should be minimal documentation and the code should serve as the vast majority of the detailed documentation.
The default documentation the developers have proposed is:

  • A Vision Scope document: defines why we are even doing this project, the business outcomes and risks, and very high level requirements.
  • Architectural design, with a design document if the design deviates from our standard web or smart client architecture. All integration points must be defined (i.e. SAP, JDE, service buses, web services etc.) along with how they will be subscribed or published to. The detail of this document is heavily dependent on the project itself; it could be as simple as a class diagram or a full blown, very detailed design document.
  • Use cases/user stories. Definition of the business problem with the desired business functionality required to solve it. A high level work item is probably broken down into several use cases. I personally don't care what format these are in; if a BA prefers one style over another that is fine, as long as all the necessary information is captured. One key aspect here is that I do not want unnecessary technical information in this document. The person writing it probably has a comparatively low technical comprehension compared to the person delivering it. Don't tell me how to do my job!!!! If I ever see another proposed database table or stored procedure in a use case I will make the author eat it.

As far as documentation goes, that is it. The Vision and Scope is about 3 pages, and if this cannot be delivered then the PM/BA/stakeholder has no right to engage the team. Once these practices are agreed upon I would like to think that I won't even engage a project unless it fits our minimum templated requirements.

Architectural design is done by the technical lead of the project. As we are lucky enough to have very skilled developers on our team (not that the company has acknowledged it yet) this is most likely done in a quick workshop with the PM, BA & SA. Other stakeholders may be invited, especially as we often work with other teams such as reporting; their level of engagement is largely determined by whether they are considered a technical owner or not. For 80% of our work this workshop will be done in about 30 minutes. Many of our applications are basic applications with only a few integration points that are well known. This document should be signed off by at least one other approved technical person.

From here, use cases and customer estimates can be done. I am still trying to push for an iterative approach, which is slowly sinking in. Typically we do 4 x 1 week sprints and release monthly. As the project moves on we may increase to fortnightly or weekly releases as functionality snowballs. This is a major benefit of a reusable architecture and reusable build and deployment scripts combined with Continuous Integration.

We are now at the point of iteration zero. Our iteration zero should be about 1 day, including all of the interruptions we get. We have a custom software factory that allows us to standardise our infrastructure and application architecture. This means that within about 5 minutes we can have a proven, architecturally sound application skeleton checked into source control and running off the build server, ready for deployments. If only our non technical brothers were this organised... don't worry though, because we (the techies) have even written the templates for the Vision and Scope and the use cases for them; all they have to do is fill in the blanks. To be honest I could probably write a PowerShell script to replace most of our non development staff... ;p ^

As for breaking down the actual work that a dev does: a use case will most likely be broken down into multiple development tasks until each task has an estimate of less than 1 day, preferably half a day. These tasks can be very briefly described, e.g.

  • create edit customer view - est 30 minutes
  • create edit customer presenter logic & tests - est 45 minutes
  • etc

Typically these will be described at the start of a sprint and added as sub tasks in the task tracker (e.g. TFS) by the developers so the PM has visibility of development progress. This is in no way an essential part of the documentation, but I believe it aids in:

  • assigning responsibility (and therefore accountability),
  • increasing visibility of project progress,
  • improving estimation of what can be achieved in a sprint, and
  • increasing the ease of assigning bugs to people and to associated work items.

An excellent overview of agile documentation that is almost completely in line with my feelings on documentation is found here:

Specifically the following points:

  1. The fundamental issue is communication, not documentation.
  3. You should understand the total cost of ownership (TCO) for a document, and someone must explicitly choose to make that investment.
  7. Documentation should be concise: overviews/roadmaps are generally preferred over detailed documentation.
  9. With high quality source code and a test suite to back it up you need a lot less system documentation.
  10. Documentation should be just barely good enough.
  12. Comprehensive documentation does not ensure project success, in fact, it increases your chance of failure.
  13. Models are not necessarily documents, and documents are not necessarily models.
  14. Your team’s primary goal is to develop software, its secondary goal is to enable your next effort.
  16. The benefit of having documentation must be greater than the cost of creating and maintaining it.
  17. Each system has its own unique documentation needs, one size does not fit all.
  19. Ask whether you NEED the documentation, not whether you want it.
  21. Create documentation only when you need it at the appropriate point in the lifecycle.

We define our measure of success in terms of production quality deployed software. For us as developers to move towards this we must provide a suitable engagement process for the non techies to follow. I believe the documents outlined above are a bare minimum, but they are enough to deliver software. Any addition to this set of documents should be justified and deliver a significant increase in business value; if not, eliminate it.

Too much documentation is a waste of time. Inaccurate or poorly maintained documentation is costly. Don't do it!

Recommend reading:

Software Requirements, Second Edition: Wiegers

Writing Effective Use Cases: Cockburn

Agile Project Management with Scrum: Schwaber

^ My disdain for the non technical people is not personal at all; I actually get on very well socially with them. I count myself lucky to work with a bunch of very nice people. What I don't like is that these people are paid very well, and I expect them to be not only competent but experts in their field. Unfortunately, while the developers are on a path of constant improvement, that passion is not shared by our colleagues, which is a shame. We have very good developers running at about 30% efficiency because we spend too much time on non technical aspects of the SDLC.