Monday, November 22, 2010

Agile Tools That Just Work

Here are some nice tools that I use which generally make life easier in an agile world.

Scrumy

http://www.scrumy.com
Free kanban board for managing projects (and life).

WebSequenceDiagrams

http://www.websequencediagrams.com/
A quick way to create sequence diagrams. The input is plain text, which makes it easy to share with others who don't have tools like Visio or EA. I actually prefer this to Visio, and our scrum team used it a lot in our last delivery.

SpecFlow

http://specflow.org/
I like FitNesse but there is too much friction (not a lot, but enough) in the development cycle (e.g. its odd source control management). SpecFlow is close enough to Cucumber for me (in the .Net world) and does not rely on Ruby being installed. Ruby installs on Windows are still a complete PITA as far as I'm concerned and I'll generally do anything to avoid them. SpecFlow also runs with your existing (x/n/mb)unit testing tool of choice. I must say I didn't like this tool at first, but having come back to it, either the experience is cleaner or I have opened up a bit more.
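To give a flavour of what a SpecFlow test looks like, here is a minimal sketch (the Calculator class and step names are invented for illustration; the scenario lives in a plain-text .feature file and binds to C# methods by regex):

// Calculator.feature (plain Gherkin text):
//   Scenario: Add two numbers
//     Given I have entered 4 into the calculator
//     And I have entered 6 into the calculator
//     When I press add
//     Then the result should be 10

using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class CalculatorSteps
{
    private readonly Calculator _calculator = new Calculator(); // hypothetical class under test
    private int _result;

    [Given(@"I have entered (\d+) into the calculator")]
    public void GivenIHaveEntered(int number)
    {
        _calculator.Enter(number);
    }

    [When(@"I press add")]
    public void WhenIPressAdd()
    {
        _result = _calculator.Add();
    }

    [Then(@"the result should be (\d+)")]
    public void ThenTheResultShouldBe(int expected)
    {
        Assert.AreEqual(expected, _result);
    }
}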

Gist

https://gist.github.com/
Like Pastie but associated with my GitHub account. I'm starting to use it more and more for brain dumps of simple code snippets (e.g. PowerShell functions) that are handy to reuse but not really worth their own repo.

If I were running an external commercial project (e.g. at home) I would seriously consider using LowDown. It fits my development style very well, without the clutter and bollocks associated with other ALM/project management tools. The Git integration makes it very interesting too.

Monday, October 18, 2010

More AutoTesting - Powershell extensions and addins

There is a nice library called PS-Eventing that makes the event handling a whole lot cleaner and generally a nicer experience. The script in this post isn't any smaller and doesn't really do anything the last post's script didn't, other than cleanly deregistering the event, but it is much more user friendly.
I will also post my code as a Gist, as I am not and have not been at my real machine for so long... I miss it... sorry.
Here's the link: http://gist.github.com/632749

Wednesday, October 13, 2010

Auto test - Thanks Powershell

I'm doing some funky NUnit stuff* at the moment and sometimes the ReSharper or TD.Net runner won't pick up my inner fixtures; the console runner, however, runs them just fine. Here is a PS script to have auto-test running on build:


$Script:initialdir = "C:\myroot\"

$fsw = New-Object System.IO.FileSystemWatcher
$fsw.Path = "c:\mysln\myproj\Bin\"
$fsw.EnableRaisingEvents = $true

$action = {
    $filter = "*Tests*.dll"
    $file = $Event.SourceEventArgs.FullPath
    if ($file -like $filter)
    {
        $nunit = "C:\3rdParty\NUnit-2.5.2\nunit-console"
        $currentdate = Get-Date -Format g
        Write-Host "Running tests on $file $currentdate" -ForegroundColor Yellow
        # $_ is not available inside the event action; use $Event instead
        $name = $Event.SourceEventArgs.Name
        & "$nunit" /xml:$name-testresult.xml $file
        if ($LastExitCode -ne 0)
        {
            Write-Host "Tests have failed for $file" -ForegroundColor Red
        }
        else
        {
            Write-Host "Tests have passed for $file" -ForegroundColor Green
        }
        #$null = Remove-Event $Event.EventIdentifier
    }
}

Register-ObjectEvent -InputObject $fsw -EventName Changed -SourceIdentifier "FileChanged" -Action $action
Register-ObjectEvent -InputObject $fsw -EventName Created -SourceIdentifier "FileCreated" -Action $action


Note that you will either have to explicitly unregister your event handlers or kill PowerShell to get rid of them. Cancel the watcher by hitting Ctrl+C and then unregister by running these lines:

Unregister-Event "FileCreated"
Unregister-Event "FileChanged"



*The stuff we are doing involves inheritance and inner test fixtures to give us a BDD experience without the drama of the BDD frameworks, which personally I still don't like. We are still a fair way off the TDD experience of the Ruby kids.

Tuesday, September 7, 2010

Threading - I am on a Journey

Recently it has become more and more apparent that I had a crack in my understanding of a key aspect of not only the CLR but computing in general. No prizes for guessing that it was threading and the general parallel programming constructs that go along with it.
Now I am not a complete n00b in the threading world; I have written code that uses threads on both the client and server side, but it was typically handing off a single task to a single thread and, looking back, it was probably pretty delicate and by now in need of some refactoring love.
The first real kick in the pants was playing with my 8 core machine and never seeing all 8 cores light up in the task manager. How wasteful... it's like all the Aston Martins I see driving over Tower Bridge; what is the point?
The next major trigger for me was a computationally heavy project that, up until recently, I was working on. My colleagues and I knew that splitting up tasks could make or break this system; however, given a distinct lack of human resources and not exactly compelling hardware, time was never allocated to fleshing out the best approach.
Well, I am no longer at that job (it's somewhat unfortunate, as the project could have been epic had it not been butchered by red tape and egos), so I have had time to scrub up on my threading knowledge... well, I thought I would just be scrubbing up. Sometimes you just don't know how much you don't know.

So, in an attempt to shortcut anyone else's learning curve on the wonderful world of threading and parallel programming in the .Net world, here are some great places to get started.

As always, C# in a Nutshell is a great place to start with anything C# related. I have both the 3.0 and 4.0 editions; Joe is a great author, and IMO every C# developer should have the latest copy of this book on their desk.

Next up, the deceptively good CLR via C#. This book looks like it's going to be as much fun as having teeth pulled; however, it is surprisingly good and dives deep in the areas you want and would expect. I read the 2nd edition but will be buying the 3rd as soon as I can find it.
Obviously these books are not completely dedicated to threading, but they have some great chapters on it. If you are after a more in-depth reference, I have been recommended Joe Duffy's book: Concurrent Programming on Windows.

Next up is my favourite learning tool: Tekpub. There is a threading production under way; it only has one episode so far, but it is great. As I have a yearly subscription it was a no-brainer. For me this was valuable as it has a bunch of code and a great visual comparison of auto and manual reset events, semaphores and mutexes. I highly recommend this video if these are confusing you at all.
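If those primitives are hazy, this little sketch (mine, not Tekpub's) shows the key behavioural difference: an AutoResetEvent releases exactly one waiter per Set() and closes again, while a ManualResetEvent stays open for all waiters until Reset() is called.

using System;
using System.Threading;

class ResetEventDemo
{
    static readonly AutoResetEvent Auto = new AutoResetEvent(false);
    static readonly ManualResetEvent Manual = new ManualResetEvent(false);

    static void Main()
    {
        for (int i = 0; i < 3; i++)
        {
            new Thread(() => { Auto.WaitOne(); Console.WriteLine("through the auto gate"); }).Start();
            new Thread(() => { Manual.WaitOne(); Console.WriteLine("through the manual gate"); }).Start();
        }

        Thread.Sleep(500);
        Auto.Set();   // releases exactly ONE waiting thread, then resets itself
        Manual.Set(); // releases ALL waiting threads until Reset() is called
        Thread.Sleep(500); // give the released threads time to print
    }
}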

For a more casual and much briefer take on threading in the pre-.Net 4 days, check out Jon Skeet's articles. I have found the multiple angles these authors take on the same topic good for bashing it into my head.

The piece that I am currently reading is the new patterns & practices book Parallel Programming with Microsoft .NET, which is also available as a hard copy and an eBook. Code snippets for the text are available here.

The last piece, which is semi-related, is my reading of the Manning book Real-World Functional Programming. Although functional programming technically has no direct link to threading, its basic tenets strongly support parallel programming, and as such I have been endeavouring to improve my understanding and use of functional constructs where appropriate. So far the book is very good; it covers both F# and C#, so it has been a good (re)introduction to F# for me.

With all of the movement in not only hardware (hyper-threaded and multi-core processors) but also architectures (grid, cloud, ESBs etc.), the software frameworks we use (.Net's TPL, Rx etc.) and how we use them (i.e. the movement towards functional programming in distributed/parallel systems), I think it is becoming very clear that as .Net developers we need to know how this stuff works, how we can use it and what it means for the systems we build and the users that use them.

As tempting as it may be to run off and just blindly use the new TPL, I strongly urge you to get a good understanding of the history and the older APM (Asynchronous Programming Model) practices; I am sure this will ensure you make better choices when you write your next parallel task.
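To make the contrast concrete, here is a rough sketch (my example, not from any of the books above) of the same web call written against the old APM Begin/End pattern and then wrapped in a .Net 4 Task via Task.Factory.FromAsync:

using System;
using System.Net;
using System.Threading.Tasks;

class ApmVersusTpl
{
    static void Main()
    {
        // Old-school APM: a Begin/End pair plus a callback.
        var apmRequest = WebRequest.Create("http://example.org/");
        apmRequest.BeginGetResponse(asyncResult =>
        {
            using (var response = apmRequest.EndGetResponse(asyncResult))
                Console.WriteLine("APM got: " + response.ContentType);
        }, null);

        // TPL: wrap the same APM pair in a Task and compose with continuations.
        var tplRequest = WebRequest.Create("http://example.org/");
        Task<WebResponse> task = Task<WebResponse>.Factory.FromAsync(
            tplRequest.BeginGetResponse, tplRequest.EndGetResponse, null);
        task.ContinueWith(t =>
        {
            using (var response = t.Result)
                Console.WriteLine("TPL got: " + response.ContentType);
        });

        Console.ReadLine(); // keep the console alive while the async calls complete
    }
}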
As for me I'm back to reading, I have a long way to go...

Sunday, July 25, 2010

Getting started in Asp.Net

I recently met a guy who was new to town and had picked up a job doing development for a place that appeared to be moving towards Microsoft-focused solutions, meaning he was likely to move away from his LAMP (Linux, Apache, MySQL, PHP) background to a .Net/SQL Server environment.

I realised after talking to him that there are a lot of ways you could go wrong coming into the .Net realm from a LAMP/Java/Python/Ruby background, and this post, although intended for him, is generic enough that it may save other web developers valuable time in getting up and running and producing valuable software on the M$ stack.

Asp.Net

Asp.net is a different beast to the many other frameworks out there, and you could spend years (and many have) learning all the ins and outs of the beast.
Don't.
I would highly recommend skipping WebForms altogether as IMO it is a dead framework. I have been doing MS MVC for the last year and did MonoRail (an open source ASP MVC framework) on the side prior to that, and I have never looked back. Some of the significant benefits (with a small controller sketch after the list):
  • Matches the way the web works (you explicitly work with HTTP verbs)
  • Clean HTML
  • Easy to learn
  • Pluggable
  • Testable
  • Simple
  • And finally, IMO, it encourages better coding practices, which WebForms simply does not.
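To make the first point concrete, here is a minimal, hypothetical controller (the [HttpGet]/[HttpPost] attributes are from MVC 2; MVC 1 used [AcceptVerbs] for the same thing):

using System.Web.Mvc;

public class ProductController : Controller
{
    // GET /product/edit/5 - render the edit form
    [HttpGet]
    public ActionResult Edit(int id)
    {
        return View();
    }

    // POST /product/edit/5 - only reachable via an HTTP POST
    [HttpPost]
    public ActionResult Edit(int id, FormCollection form)
    {
        // ... update the product, then redirect (Post-Redirect-Get)
        return RedirectToAction("Index");
    }
}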

I will assume you trust me and accept that MVC is the way forward, so I will go ahead and recommend how to start.

Asp.Net MVC in action
First port of call is to pick up this book. It will give you the no-BS version of the truth and covers some third party tools that will help you along the way. There is a second edition on the way; however, if you are using VS 2008 then this book is for you anyway. Manning (the publisher) often has good discounts, so register for emails or follow them on Twitter to know when promos are running.

TekPub : Build your own blog with MVC
Tekpub is one of my favourite resources at the moment. It is great value for money, and video is an efficient way to learn new concepts. I bought several series, now have the yearly membership, and many of my colleagues have done the same. Go check out BYOB, which shows how Rob* built a blog in MVC. There are also MVC 2.0 and various other Asp.net series available. Buy one, buy a bunch, or get a monthly or yearly subscription; go on, you know you want to ;)
*Rob was on the team that actually built Asp.net MVC

Asp.net
Yes, the website is a good place to get lots of info too! There are lots of little articles and videos from smart people who live and breathe this stuff. Just be aware that there is sometimes a very Microsoft slant to the information; not a bad thing, but something to be aware of.

Next there is the Database

SQL Server

If you have a background in Oracle or MySQL then picking up M$ SQL Server should be pretty easy. The standard practices apply, e.g. don't run your web app as System Administrator. Nothing much to say here, to be honest.


Joining the web and data world is your data access strategy

Linq2Sql or NHibernate

The two standout choices for newcomers IMO are the lightweight data access strategy from Microsoft called Linq2Sql (pronounced "Link to Sequel", just don't ever write it like that) and the open source, more heavyweight NHibernate. If you are from a Java background NH may be a good tool; if you are from a Rails background then Castle ActiveRecord (which sits on top of NH) may even be a good idea. However, for most, Linq2Sql is probably the best bet when you are getting started, and NH is the best if you are building up more complex domains. The caveat with NH is that there is a steep learning curve. If you are new to it then check out SharpArchitecture or one of the abstractions on NHforge.com to help bootstrap yourself. If you have a simple domain or are going to be building from the database up (i.e. create a database table and then the C# code to use it), Linq2Sql is a better fit anyway.
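For a taste of Linq2Sql at its simplest (the table, columns and connection string below are invented; normally the designer generates these mapped classes from your database):

using System;
using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Linq;

[Table(Name = "Customers")]
public class Customer
{
    [Column(IsPrimaryKey = true)] public int Id;
    [Column] public string Name;
    [Column] public string City;
}

class Program
{
    static void Main()
    {
        var connection = @"Data Source=.\SqlExpress;Initial Catalog=Shop;Integrated Security=True";
        using (var db = new DataContext(connection))
        {
            // The query is translated to SQL and executed when enumerated.
            var perthCustomers = from c in db.GetTable<Customer>()
                                 where c.City == "Perth"
                                 orderby c.Name
                                 select c;

            foreach (var customer in perthCustomers)
                Console.WriteLine(customer.Name);
        }
    }
}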

Tools

If you are playing with the M$ stack at home then you will most likely not want to shell out cash for the full blown enterprise versions of Visual Studio and SQL Server. For this reason they have the Express versions. I have only ever used the SQL Express edition (which is great) so have no idea what VS Express is like. In saying that, if I did not have the full version of VS I probably would not be using .Net and would just be doing Rails/Sinatra dev (but that's just me).
Assuming you are working with the full version at work, be sure to invest in the ReSharper VS plug-in. It's not super cheap, but the time it will save you means it will quickly pay for itself.

In terms of source control, TFS is the M$ standard, but I personally don't recommend it. I think tools like Git or even SVN are better tools that integrate better with other platforms/tools and are free. TFS, however, does more than source control, so if you are using it make sure you use all the good ALM bug tracking bits etc.

Extra links

Follow the team behind the framework, most importantly Scott Gu (all .Net devs should read his blog) and Scott Hanselman. Mr Hanselman also has a good podcast called Hanselminutes that is not Asp.Net specific but a good subscription nonetheless.
StackOverflow is a great Q&A community for all sorts of developer questions. There are lots of .Net and Asp.Net people hanging out there, so most of your initial questions will most likely have already been answered. The site itself is even built on top of Asp.Net MVC and Linq2Sql, so you can see that this combination is a very real-world and scalable solution.

If you are in Perth (like this person was), check out the Perth .Net Community of Practice and the Alt.Net lads, who have monthly beers and discuss things that may or may not be from the Microsoft mother ship that affect .Net devs.

Wednesday, June 23, 2010

Perth Alt.Net - RavenDb

Last night I did a "talk" on RavenDb at the Perth Alt.Net beers. I must say I was really stoked at the turnout. Lots of banter, comments, queries, theories and general discussion. I sincerely hope that everyone else enjoyed it as much as I did. I also felt that last night moved closer to what I hope these events should be. These shouldn't be one-way dialogues; IMO the presenter is there just to start discussion, and then it becomes a group affair: there should be lots of interjecting, lots of crowd participation. Last night there certainly was, and best of all it stayed on track; we didn't veer off into off-topic subjects (much), and I think the night was better for it.

I really do feel lucky to be in the company of some very smart people, and I want to thank everyone for letting me bleat on and for not giving me too much grief for being woefully underprepared.

However, I put it out there to the people who are comfortable enough to ask the hard questions (and there were a few): please take the reins, we want to hear from you! I'm a strong believer that you can learn as much about someone from the questions they ask as from the questions they answer, and there were some gems last night.
Please contact Mike (@wolfbyte) or myself if you think you may be interested in starting a discussion, or even if you just want to hear about stuff, especially if it is edgy and possibly not suitable for the standard M$ .Net community of practice stream (Mitch Wheat is the contact for that if you do have M$-specific stuff, however). Any feedback is good. The hardest thing for organisers is trying to figure out what the crowd wants, and we nerds are not the most vocal people, which makes it even harder to run nerd events.

A further incentive: we now have sponsorship. If you get up and present you get swag, swag you want. I worked out that since I have been back in Perth I have received several thousand bucks worth of cool stuff from presenting at or attending various .Net events. It's worthwhile getting involved!

Garry Stewart is up next time (in about 2 weeks), talking about lightweight IDEs, specifically Vim, and letting go of our addiction that is Visual Studio.

One big note I also want to make: take the talks for what they are. The whole undercurrent of Alt.Net is "options". We need to be aware there are options. We are not saying you must throw away SQL and use RavenDB exclusively, or that Vi will replace Studio; we just want people to open their minds. If last night's talk got you thinking about how you could get your Python web stack talking to Cassandra, sweet. Most of what we are talking about is concepts; if there is a tool involved, it's just there because it facilitates a different way of thinking. That's all.

See you next time :)

Wednesday, June 16, 2010

DDD eXchange 2010 - London

Unfortunately I did not attend this event (which looked like an awesome event); however, the Skills Matter guys were good enough to post the videos of the event... sweeet!

DDD EXCHANGE 2010 - PODCASTS, SLIDES AND PICTURES

They have recorded all the DDD eXchange 2010 talks. The slides and podcasts can be viewed here:

You can also find some photos taken at the conference here.

Thanks SkillsMatter!

Thursday, June 10, 2010

Test Fakes - Solving a Domain Entity Issue

Follow on from: Testing with Domain Driven Design

Background on the design in question

  • We are using NHibernate for persistence on Oracle.
  • We have services that accept coarse-grained commands that perform a clearly defined business function; they are not chatty services.
  • We are using the notion of aggregate roots, so all child objects are created and managed by the owning aggregate root. If you need to access a child it must be done via the AR.
  • The domain is relatively large and complex.

Background problem

We had issues where we were dealing with domain objects in tests and running into problems when interrogating child collections. For example, we sometimes need to update a child of an aggregate root, and we were using the id of the child (within our command object) to indicate which child to update. We are using a basic database identity field (an Oracle sequence) for setting the ids on all of our user-defined entities.
Herein lies our problem:* in our test we create an aggregate root complete with child objects. We then want to test, for example, updating a child of the aggregate root via a DTO-based command (i.e. using an id to define the child to update), and we run into issues because all of the child objects have the same id of 0, as they are not persisted (it's a unit test). Now this would never happen in real life: why would you issue a command to update something that has not been persisted; how would you even know about that object?
The quick solution that I have seen used a lot is to set the ids of all of the domain objects prior to running the test. I don't like this if it is done by exposing the ID setter on the domain object. That opens up the API of our domain for testability and is potentially the start of a slippery slope into a pit of hell. An easy way around this is to use fakes. These objects are just subclasses of the domain objects in question that expose stuff the domain shouldn't; in this case the ID setter.
The other alternative is to set the id on creation of the fake so the setter of the id is not exposed. This can also work, but it means your fake will always have an id set. For the situation I was in this was not suitable.

The end solution

Every domain object has a fake created for it. All of the fakes implement an interface, ISettableId<TId>. This interface is defined below:

public interface ISettableId<TId>
{
    bool HasIdBeenSet();
    void SetId(TId id);
}

With an example implementation (its id type is an integer):
public class FooFake : Foo, ISettableId<int>
{
    public FooFake(string name, TypeRequiredForValidConstruction myDependency)
        : base(name, myDependency)
    {}

    public bool HasIdBeenSet()
    {
        return _id != 0;
    }

    public void SetId(int id)
    {
        _id = id;
    }
}

This means we can now create fake objects and emulate persistence later by reflecting down the object graph and setting all of the ids. This is much faster than hitting the database and has proved to be a very worthwhile exercise, as we can now run tests against transient and persisted versions of the object graph without needing a db connection.
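The walker itself isn't shown here, but a rough sketch of the idea (assuming integer ids and that children are exposed as IEnumerable properties) looks something like this:

using System.Collections;
using System.Reflection;

public static class FakePersister
{
    private static int _nextId = 1;

    // Walk a graph of fakes, giving every ISettableId<int> an id,
    // emulating what a round trip to the database would have done.
    public static void EmulatePersistence(object entity)
    {
        var settable = entity as ISettableId<int>;
        if (settable == null) return;
        if (settable.HasIdBeenSet()) return; // already visited - stops cycles

        settable.SetId(_nextId++);

        foreach (PropertyInfo property in entity.GetType().GetProperties(BindingFlags.Public | BindingFlags.Instance))
        {
            if (property.GetIndexParameters().Length > 0) continue; // skip indexers
            object value = property.GetValue(entity, null);

            var children = value as IEnumerable;
            if (children != null && !(value is string))
            {
                foreach (object child in children)
                    EmulatePersistence(child); // child collections (e.g. the AR's children)
            }
            else
            {
                EmulatePersistence(value); // single child references
            }
        }
    }
}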
One additional thing I should mention is that the creation of child objects must now be exposed as a hook. For example, when we create child objects I do not just new up the object and add it to a collection; I call a protected virtual method that creates the child object and then adds it to the list. This allows my fake to override the type that gets added to the list, so children can also be fakes. This has not increased the exposure of the domain, but it has given me a hook that lets me create fakes for my child collection.
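The hook itself looks roughly like this (Parent/Child names invented for the sketch):

using System.Collections.Generic;

public class Child
{
    public Child(string name) { Name = name; }
    public string Name { get; private set; }
}

public class Parent
{
    private readonly IList<Child> _children = new List<Child>();

    public void AddChild(string name)
    {
        // The hook: children are only ever created through CreateChild.
        _children.Add(CreateChild(name));
    }

    protected virtual Child CreateChild(string name)
    {
        return new Child(name);
    }
}

public class ChildFake : Child // would also implement ISettableId<int>
{
    public ChildFake(string name) : base(name) { }
}

public class ParentFake : Parent // likewise ISettableId<int> in the real thing
{
    // Override just the hook so the whole graph is built from fakes.
    protected override Child CreateChild(string name)
    {
        return new ChildFake(name);
    }
}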

Caveats:

Fakes should not include any logic. The fakes I am using only allow the setting of ids. The only logic they have is checking whether the id is set (this allows us to skip objects that have already had their ids set when we walk the object graph, so we don't go around in circles). The only other methods required are the overrides for the creation of child objects.
Other methods you may find in the fakes are builder methods. Instead of having big factories for creating test objects, you may choose to put the object creation closer to the owner by putting it in the fake class itself.

Summary

As I mentioned, we have a pretty large and complex domain, and we were running into issues where the difference between persisted and non-persisted object graphs was becoming problematic. Using the fakes has allowed us to keep the domain clean and only as open as necessary, while allowing us to emulate persistence without the cost of hitting a database. Overall I'm pretty happy with the result.


*I'm sure there are other ways to solve this higher-level problem

Thursday, May 27, 2010

Alt.Net, Openspaces, whatever; it's coming...

Google Groups and Twitter have been busy today after I seem to have started some hype about an Alt.Net-styled openspaces event (#ozopenspace).

PLEASE spread the word. We need ideas on when people can come and where they would go (Australia is a big place; we are all very spread out).

The wiki looks like its going to be based here: http://ozopenspace.pbworks.com

Register your interest here. Seriously, please do this; otherwise we cannot estimate or cater for your needs/wants etc. This will also show sponsors whether we are serious or not, and may help convince overseas guests to attend too, of which there is already interest :)


The discussion is on the Oz Alt.Net Google group here

I can't wait!

Wednesday, May 26, 2010

Powershell and the web

Lately I have been playing with PowerShell to see if it can help with some uptime/availability checks for our web sites and services at work. Yes, I know what you are thinking, but I'm not a sys-admin and I need to know if my stuff is working without having access to the server... long story, I'm sure you will appreciate.

In my reading I have seen lots of flaky posts about PowerShell and web services. I think people need to remember that web services still conform to the HTTP protocol; there is no magic. You do not need to reference any Visual Studio dll or do any weird crap like that; you just need to hit the url and receive a payload.

    $wc = new-object system.net.WebClient
    $webpage = $wc.DownloadData($url)     

That's it. No weird proxies, no importing custom dll's.

Here's an example calling the Google Maps API (which is a nice API BTW) to get the latitude and longitude of an address, specifically Perth's craziest bar, Devilles Pad (think Satan living in a trailer park who hangs out at a go-go club that used to be a James Bond villain's lair; yeah, it's that cool):

    $url = "http://maps.google.com/maps/api/geocode/xml?address=3+Aberdeen+St,+Perth,+WA+6000,+Australia&sensor=false"
    $wc = new-object system.net.WebClient
    $webpage = $wc.DownloadString($url)#Edit -Thanks Ken
    $xmldata = [xml]$webpage
    $lat = $xmldata.GeocodeResponse.result.geometry.location.lat
    $lng = $xmldata.GeocodeResponse.result.geometry.location.lng
    write-host "Latitude = $lat - Longitude = $lng" 

Like that? Go play: Google Maps API Doco

Friday, May 7, 2010

MSBuild 101

Lately, for some reason, I have seen a fair bit of tension around MSBuild. I guess this is a good thing: more people are using build tools like PSake, Rake etc., and most of the time, if you are building .Net stuff, sooner or later you will have to call into MSBuild, unless you feel bold enough to use csc directly... um... not for me.

MSBuild is, to be honest, very basic.
There are typically two things that I use MSBuild for:
-Defining a build script
-Running a build

The term MSBuild is a little confusing to some, as it covers a few things: a file type and an executable.
Firstly, we need to be aware that all .Net solution files (*.sln) and project files (*.*proj) are MSBuild files; that should hopefully ease some tension, as it is now pretty obvious that you are already using MSBuild! MSBuild files are XML and therefore human readable and modifiable. The files typically define properties and targets. The properties can reference other properties, and they can be collections too. The targets can have dependencies on other targets (i.e. Test requires Build, which requires Clean), and a target can make use of the properties we have defined in our script, as well as define items that are basically target-level variables. This is starting to sound just like procedural programming, which I hope we are all au fait with. It is probably also a good time to note that pretty much all build tools use the notion of dependencies. This basically means that if you call a target that has a dependent target, the dependency will be called first. You can have chained dependencies like the example mentioned before (i.e. Test requires Build, which requires Clean), and you can also have multiple dependencies for one given target (Deploy requires Package, Document and Test).

After reading that, it should be quite clear that MSBuild is actually very simple. It is the syntax and general XML noise that scares most people, myself included.

Running MSBuild

To achieve even the most trivial usage of MSBuild we need to know how the executable works, so we can actually call MSBuild to do something. MSBuild ships with the .Net framework itself rather than with Visual Studio (note the C:\Windows\Microsoft.NET\Framework path in the examples below), and each copy is bound to a framework version.

To start with I will show the most trivial example of how to use MSBuild and that is to just build a solution.

@C:\Windows\Microsoft.NET\Framework\v3.5\MSbuild.exe MySolution.sln

Assuming this command is run in the directory where MySolution.sln resides, it will build that solution. It can't get much easier than that. Note this will not do anything clever; it will just build to your default location, i.e. bin/Debug for each project in the solution. Personally, this offers me little value other than being a bit faster than building in Visual Studio.

Typically, if I am building from the command line it is because I am building from a build tool. Build tools like PSake allow me a lot more flexibility, as they are not constrained by the bounds of XML and have powerful functions associated with builds and deployments that might not otherwise exist in MSBuild. If you are using a tool like PSake or Rake as your build script then it is more likely that you will use the following syntax:

&$msbuild "$solution_file" /verbosity:minimal /p:Configuration="Release" /p:Platform="Any CPU" /p:OutDir="$build_directory"\\  /logger:"FileLogger,Microsoft.Build.Engine;logfile=Compile.log"

This is from a PSake script, so anything with a $ in front of it is a PowerShell variable (i.e. not MSBuild syntax). Walking through it: I have defined the location of the MSBuild exe ($msbuild) and the sln file. Note that the sln file is not associated with a switch; it is the first argument, which indicates that this is the file we are building. The other arguments are prefixed with a switch. These switches begin with a forward slash, are separated from their value by a colon, and contain either full descriptive words (e.g. /verbosity:minimal) or shorthand for a word (e.g. /p:Platform="Any CPU", where /p is short for property and in this example defines the platform property).

FYI: the & at the start of the line is a PowerShell construct that says "run this command, don't just print it to the screen".

Defining Build Scripts with MSBuild

First up, we will walk through a very basic build script that just cleans and builds a solution. It will be pretty obvious that this script offers little real value, but we will get to the nitty gritty stuff soon.
<Project DefaultTargets="Clean" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="3.5">
  <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets"/>

  <PropertyGroup>
    <ApplicationName>MySolution</ApplicationName>
    <SolutionFile>..\$(ApplicationName).sln</SolutionFile>
  </PropertyGroup>
  <Target Name="Clean">
    <MSBuild Targets="Clean" Projects="$(SolutionFile)"/>
  </Target>
  <Target Name="Build" DependsOnTargets="Clean;">
    <MSBuild Targets="Rebuild" Projects="$(SolutionFile)" Properties="Configuration=Release;" />
  </Target>
</Project>

Let's go over this line by significant line:
The XML document's root is the Project element, where we define the XML namespace, the MSBuild tools version and the default target. Make sure your tools version works with your .Net framework version. This first line is pretty standard for me: I like having a default target, and I'm typically working with .Net 3.5 at the moment, so this is all good.
Next up we import an external extension. This is an example of importing the very useful MSBuild Community Tasks toolset. If you are making use of MSBuild a lot then you will want this toolset. Another import I often make is for Gallio, a very handy test runner. Please note that the import demonstrated here is not actually used in the script, so it could safely be removed; it is just here to show the syntax.
Next we define our property group section. The nodes in here are user defined; MSBuild does not have "ApplicationName" in its schema, I made that name up. These properties are in effect your global variables. You will also note that you can reference other properties from within properties, as shown in the SolutionFile property. This is often very handy when dealing with file paths. The syntax for referencing a single property is $(MyProperty).
Next up we define our first target. Targets are just like methods in that they define functionality. The Clean target just calls the MSBuild task to clean the solution. Not very impressive, I know. The next target shows how we can specify dependencies. This means any time we call the Build target, the Clean target must run first.

Having a build file is all very nice, but we need to be able to run it. As it is not a solution or project file, I tend to treat these a little differently to how I would call those files directly. I typically prefer to be explicit in calling targets, even though it is a DRY violation.

Below is an example extracted from a bat file that calls into an MSBuild script, to show you the syntax (this is the same syntax as if you were to run it from the cmd line):

@C:\Windows\Microsoft.NET\Framework\v3.5\MSbuild.exe MyMSBuildFile.build /t:UnitTests /l:FileLogger,Microsoft.Build.Engine;logfile="UnitTests.log"

From this you can see:
-I am calling MSBuild 3.5; MSBuild is slightly different for each .Net version, so be sure you are not calling the .Net 2 version for your shiny new .Net 4 solutions!
-MyMSBuildFile.build is in the location I am running this command from. If it was not, I would need a relative or full path.
-The extension of an MSBuild script is not important. I personally prefer to call my MSBuild files *.build to make it clear that they are in fact build scripts.
-The "/t:" switch defines the target I am calling: UnitTests. You do not need to specify this if you have defined the default target in the Project tag of your build file. I recommend defining a default target, and make sure it is a safe one... you don't want to accidentally deploy something to production "by default".
-By using the "/l:" switch I can define a logger so I don't lose the info once the console window disappears. I pretty much only use this syntax and change the name of the output file.
For more help you can just run the MSBuild exe with the /? switch for a detailed listing of what it can do. I recommend doing this :)

Stepping Up

So everything I have shown you so far is all well and good, but it is of little value in the real world. So here is a quick-fire list of problems and MSBuild solutions, a cheat sheet of sorts.
Note the command line syntax used below is from PowerShell, and to get the $msbuild variable assigned I have called:
[System.Reflection.Assembly]::Load('Microsoft.Build.Utilities.v3.5, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a') | Out-Null
 $msbuild = [Microsoft.Build.Utilities.ToolLocationHelper]::GetPathToDotNetFrameworkFile("msbuild.exe", "VersionLatest")

Change a property from outside the script

You have a property, say the application name, defined in a script; you want to reuse the script without having to edit it for everything that uses it.
Solution: use the property switch.
&$msbuild "$build_file" /p:ApplicationName="RhysCNewApp"

Directories with spaces are failing when passing them to MSBuild

If you are passing in paths from a build tool that requires you to use quotes (e.g. the path has spaces in it) then MSBuild may throw its toys and blow up on you. To dodge this, use double backslashes (\\) at the end of the argument to escape the nastiness that is MSBuild:
&$msbuild "$solution_file"  /p:OutDir="c:\Rhys Hates Spaces\In His\ Directory Paths\"\\ 

 

Inside your Scripts

Display a message to the console

I want to display the paths of my Visual Studio tools folders for '05 and '08. Note that VS80COMNTOOLS and VS90COMNTOOLS are environment variables, so yeah, MSBuild can pick those up too :)
<Target Name="VS">
  <Message Text="***********$(VS80COMNTOOLS)***********" />
  <Message Text="***********$(VS90COMNTOOLS)***********" />
</Target>

I want to reference a whole Item group

Sometimes we have collections of related items and want to reference all of those items:
<ItemGroup>
  <Items Include="Foo" />
  <Items Include="Bar" />
  <Items Include="Baz" />
</ItemGroup>
<Target Name="Message">
  <Message Text="@(Items)" />
</Target>
This will produce:
Foo;Bar;Baz

I want conditional execution

I want to delete a folder, but only if it exists
<Target Name="Clean">
  <RemoveDir Directories="$(OutputDirectory)"
       Condition="Exists($(OutputDirectory))" />
</Target>
This shows the use of Exists() and the MSBuild RemoveDir task.

Error Messages

I want to display a meaningful error when an exception should be raised in my target. Solution: use the Error task within your target.
<Target Name="UnitTests" DependsOnTargets="Build">
  <Gallio
      Assemblies="@(UnitTestProject)"
      IgnoreFailures="true"
      EchoResults="true"
      ReportDirectory="$(ReportOutputDirectory)"
      ReportTypes="$(ReportTypes)"
      RunnerType="$(TestRunnerType)"
      ShowReports="$(ShowReports)">
    <Output TaskParameter="ExitCode" PropertyName="ExitCode"/>
  </Gallio>
  <Error Text="Tests execution failed" Condition="'$(ExitCode)' != 0" />
</Target>
This example shows the use of the Gallio MSBuild task and how we can raise an error if the exit code is not 0 when the command is run.

Conditional Properties

I want properties to have values dependent on conditions:
<PropertyGroup>
  <ReportTypes Condition="'$(ReportTypes)'==''">Html-Condensed</ReportTypes>
</PropertyGroup>

Swap out Configuration files

This is last as it is a personal preference on how to manage environmental differences. I basically have all my configuration files in one directory and have N+1 files per file type, where N is the number of environments. I may have a ConnectionString.Config and right next to it ConnectionString.Config.DEV, ConnectionString.Config.TEST, ConnectionString.Config.UAT and ConnectionString.Config.PROD, with each environment's values in the respective file. My build only acknowledges the ConnectionString.Config file and copies that one; the rest are not included in the build. So when I build for an environment, I swap out the config files as part of the build. A contrived example of how to do this is below:
<Target Name="SetDevConfigFiles">
  <ItemGroup>
    <SourceFiles Include="$(ApplicationConfigFolder)\*.Dev"/>
  </ItemGroup>
  <Copy SourceFiles="@(SourceFiles)"
        DestinationFiles="@(SourceFiles->'$(ApplicationConfigFolder)\%(Filename)')" />
</Target>

This grabs all of the files from the application config folder that end in .Dev and copies them to the same location without the .Dev extension.
Explanation: %(Filename) gives the file name without its extension; MSBuild basically just removes everything after the last "." in the file name. This means I overwrite ConnectionString.Config with the contents of ConnectionString.Config.DEV. It's handy, it works for me; take it or leave it.

Creating your own MSBuild Task

This was way easier than I thought it was going to be. To create a task you just need to inherit from Microsoft.Build.Utilities.Task, override the Execute method and add some properties. Here is a SQL migration build task I built (please excuse the logging and the IoC cruft). The code to do all the leg work was already done and it had a console to fire it up. At the time I was using the exec functionality of MSBuild to call it when I thought: I could probably just make a plug-in for this! Below is the result.
using System;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;
using Microsoft.Practices.ServiceLocation;

namespace SqlVersionMigration.MsBuildTask
{
    public class RunAllScripts : Task
    {
        [Required]
        public string ScriptLocation { get; set; }
        [Required]
        public string ConnectionString { get; set; }

        public override bool Execute()
        {
            try
            {
                ContainerRegistration.Init();
                var versionMigrationService = ServiceLocator.Current.GetInstance<VersionMigrationService>();
                TaskHelper.LogEntry(Log, ScriptLocation, ConnectionString);
                var result = versionMigrationService.RunAllScriptsInOrder(ConnectionString, ScriptLocation);
                TaskHelper.LogResult(Log, result);
                return !result.Isfailed;
            }
            catch (Exception ex)
            {
                TaskHelper.LogError(Log, ex);
                return false;
            }
        }
    }
}

Some hints to get the most out of MSBuild: 

Download the MSBuild Community extensions; there is lots of good stuff there.
Use plugins where it is a logical idea, as opposed to calling out to exes. This can be done by using the Import tag at the start of your file:
<Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets"/>

Know when to use MSBuild. It is a good starting point if you are learning about automated builds, especially as you have to use it every day (if you are a .Net dev) whether you like it or not (remember, all sln and proj files are MSBuild files). Be warned, though, that it is not really the best option for stuff outside of a pure build. Once you are comfortable with MSBuild you will soon hit a very low ceiling. At that point, investigate other tools like PSake and Rake. Due to the XML nature of MSBuild, certain tasks are just messy or impossible to perform; loops are a good example, just don't do it.


I hope that is enough to get you up and running; if not, let me know and I will try to fill in any of the gaps.

Tuesday, April 27, 2010

Castle - Reborn!

It looks like Castle has had a shake up! I don't *know* who is behind all this, but I know my Castle go-to-guy has his sticky fingers in it and I'm glad he does!
Of late there is a new Castle wiki and, thanks to Andy Pike, a bunch of new screencasts dedicated to getting up and running with the various aspects of the Castle framework.

Please check out the wiki and the screencasts, and if you want to add anything, I have been told that the wiki is a real wiki, so contribute!
Thanks to Krzysztof Koźmic for the heads up :)

*OK, so Castle is not reborn, just the doco ;)

Soft Deletes in NHibernate

The following was an email to my dev colleagues; sorry about the code formatting, I'm using the Blogger web interface (OS X really needs a WLW equivalent!):


Fellow nerds;
There are frequently requirements for soft deletes. These things are a pain, especially with an ORM, as they can become invasive to your domain code.

If you want soft deletes to be as transparent as possible in the normal workings of the domain, you can use a DeleteListener to intercept the deletion and, well, soften it.
Below is some code showing a parent and child relationship (ParentSampleDomainObject and SampleDomainObject).

Key points:
• The entities need a flag to show they are deleted, e.g. IsDeleted
• The mapping on the parent needs to say that it is not interested in retrieving deleted items, by using a where clause
• The mapping also needs to say it wants to delete orphan children (we won't actually delete them, but this needs to be here)
• We need an NHibernate delete listener to intercept the cascading delete and, instead of deleting the entity, mark it as deleted. This is why we need cascading deletes in the mapping
• We need to register the listener in the config.

The test code will be checked into the core soon.

Rhys

NHibernate configuration:
<?xml version="1.0" encoding="UTF-8"?>
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
    <session-factory name="applicationregister">
        <property name="dialect">NHibernate.Dialect.MsSql2005Dialect</property>
        <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
        <property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property>
        <property name="proxyfactory.factory_class">NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle</property>
        <property name="connection.connection_string">Data Source=.\SqlExpress;Database=CoreTests;Integrated Security=True;</property>
        <property name="connection.isolation">ReadCommitted</property>
        <property name="default_schema">CoreTests.dbo</property>
        <mapping assembly="RhysC.NHibernate.Tests"/>
        <event type="save-update">
            <listener class="RhysC.NHibernate.AuditListener,RhysC.NHibernate"/>
        </event>
        <event type="save">
            <listener class="RhysC.NHibernate.AuditListener,RhysC.NHibernate"/>
        </event>
        <event type="delete">
            <listener class="RhysC.NHibernate.SoftDeleteListener,RhysC.NHibernate"/>
        </event>
    </session-factory>
</hibernate-configuration>

Parent domain entity's mapping file:
<?xml version="1.0" encoding="UTF-8"?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="NHibernate.Tests" assembly="NHibernate.Tests">
    <class name="ParentSampleDomainObject" table="ParentSampleDomainObject">
        <id name="Id" column="ParentSampleDomainObjectId">
            <generator class="identity"/>
        </id>
        <bag name="Children" inverse="true" lazy="true" cascade="all-delete-orphan" where="IsDeleted = 0">
            <key column="ParentSampleDomainObjectId" foreign-key="ParentSampleDomainObjectId_FK"/>
            <one-to-many class="SampleDomainObject"/>
        </bag>
    </class>
</hibernate-mapping>

[Serializable]
public class ParentSampleDomainObject : BaseEntity<int>
{
    public ParentSampleDomainObject()
    {
        Children = new List<SampleDomainObject>();
    }

    public virtual IList<SampleDomainObject> Children { get; set; }

    public virtual void AddChild(SampleDomainObject child)
    {
        Children.Add(child);
        child.Parent = this;
    }

    public virtual void RemoveChild(SampleDomainObject child)
    {
        Children.Remove(child);
    }
}

[Serializable]
public class SampleDomainObject : BaseEntity<int>, ISoftDeletable
{
    public virtual string Name { get; set; }
    public virtual bool IsDeleted { get; set; }

    public virtual void MarkAsDeleted()
    {
        IsDeleted = true;
    }

    public virtual ParentSampleDomainObject Parent { get; protected internal set; }
}

public class SoftDeleteListener : DefaultDeleteEventListener
{
    protected override void DeleteEntity(IEventSource session, object entity,
        EntityEntry entityEntry, bool isCascadeDeleteEnabled,
        IEntityPersister persister, ISet transientEntities)
    {
        if (entity is ISoftDeletable)
        {
            var e = (ISoftDeletable)entity;
            e.MarkAsDeleted();

            if (entity is IAuditableRecord)
            {
                var a = (IAuditableRecord)entity;
                // We need a log of when this was actually deleted -
                // probably the intent of the soft delete!
                AuditListener.SetAuditInfo(a);
            }

            CascadeBeforeDelete(session, persister, entity, entityEntry, transientEntities);
            CascadeAfterDelete(session, persister, entity, transientEntities);
        }
        else
        {
            base.DeleteEntity(session, entity, entityEntry, isCascadeDeleteEnabled,
                persister, transientEntities);
        }
    }
}
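
For completeness, the ISoftDeletable interface the listener checks for isn't shown above, but from the usage it needs to be no more than this (my reconstruction):

public interface ISoftDeletable
{
    bool IsDeleted { get; }
    void MarkAsDeleted();
}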