Tuesday, September 30, 2008
Open source projects: Current Favourites
RhinoMocks 3.5: Liking the new syntax
MassTransit: Even if just for making async and MSMQ easier
Suteki: ASP.NET MVC CMS (can I get any more TLAs in there?). I'm currently trying to port it from MVC Preview 3 to Preview 5, but the demos look cool.
MVC Contrib: Even if it's just so I can bolt Castle in... so sweet!
NAnt: I still haven't had any reason to move to the "cooler" *ake tools as NAnt is just easy.
Things that are not Cool:
MSTest, TFS
I hate both of you. :p
Thursday, September 25, 2008
TDD: Fail fast
The failures of TDD for the masses have been brought up numerous times over the last few months. It has been proposed that the masses are not in fact test driven. It seems the majority of those failing to get the benefits of TDD fall into a couple of camps.
One camp is throwing in the occasional test if it happens to cover an area of concern. This is usually not Test Driven but retrofitting tests.
Another camp is the "over testing" camp, going for the "holy grail" of high code coverage, often leading to:
a) Tests that don't test anything that needs to be tested. Common examples I see are testing frameworks you didn't write (e.g. NHibernate or System.IO) or testing that property assignment worked, etc.
b) Brittle, unmaintainable tests that soon become a mass of failing tests.
For some reason it appears the barrier to entry to real TDD is too high. What I plan to cover here are all the mistakes you will probably make when picking up TDD. By acknowledging these mistakes early, hopefully you can overcome them and improve the way you approach testing.
- The first mistake many make is not knowing at all what they are doing. Get Kent Beck's TDD book and read it. Don't stop at the 3rd chapter; it really is one of the easiest books I have read in the last few years. It's a quick read, in plain English, and covers 80% of the tests you will be writing. It amuses me that people are too pig-headed to learn how to do something correctly that they are supposed to be doing all day (and are paid to do!)
- Decide how you are going to be using tests as part of your design process. Be honest with yourself. If you know that you are not going to be truly TDD and write tests first, then don't pretend you will. No one is watching over your back, karma won't bite you. But seriously, if you haven't tried it, start up a side project at home and give it an honest bash. Make the mistakes on something that "doesn't matter" so there isn't a boss looming over you as you rewrite brittle tests. If you are not going to use TDD you can probably stop reading now. :)
- Make TDD part of your evolving, agile design process. TDD for me now is also a major part of my design process and one of the key reasons I use it. I generally have an idea of what I want but often TDD pushes me to a better design than I had first envisaged. Classes and methods become cleaner and more focused, interactions more expressively conveyed.
- Decide if you are going to use doubles*. This is an extra aspect of TDD to learn and is usually where things go pear shaped. Not using doubles in an enterprise environment usually means you are doing integration tests: tests that call multiple classes, methods and layers. Integration tests can be slow, as they often interact with databases, file systems and other services. They can also be brittle, as they rely on the environment being in a constant state. Changing something in the database or a class several layers down may incorrectly break a test, which means tests that are hard to maintain.
- Understand doubles. Whether you use them or not, you should learn to understand what they mean. Doubles help you write unit tests when a piece of code has dependencies. A unit test should test a unit, not all the stuff hanging off of it. I am not going to go into great detail here as it is covered by Kent, Martin, Roy, Gerard and the rest of the TDD community with an opinion. The two I use most commonly are the mock and the stub. A stub is a placeholder that returns stuff when you call a specific method on a dependency. Stubs don't cause tests to fail; they just allow them to proceed with a more shallow dependency than you would otherwise use. A mock, like a stub, mimics a dependency in returning predefined results from a specific call; however, mocks can break a test. Using a mock you are saying I EXPECT this method to be called on this dependency, and if it is not, this test should break. This is where lots of people go pear shaped. People tend to "over mock". If it is not critical that a dependency's method is called, then it is not a mock expectation; it is probably just a stub. See Ayende's write-up on how Rhino Mocks 3.5 moves to help with this. If you are not using Rhino Mocks, give it a go. The dynamic mock is worth it alone. (There is a short sketch of the stub/mock split at the end of this list.)
- Don't over Assert. An Assert is the test framework's way of asserting the SUT's state. A good clean test will be short and ideally have one assert. Now this is not always the case; however, if you are seeing tests with dozens of asserts becoming commonplace, it is time you had a closer look at what you are testing.
- Don't over Expect. Following in the same vein as above, if you have more than, say, 3 mock expectations in one test, you may need to rethink your design, as that is a lot of interaction for one piece of code. Again I try to keep my expectations to 1 or 2 per test.
- Run tests often. Now many of you will be using Visual Studio Team System and therefore may be inclined to use MSTest (the built-in test framework). That's cool, as long as it doesn't slow you down. For me, it's way too slow. I am currently running a NAnt build script with MbUnit + Rhino Mocks to build and test. This thing is fast and only runs my unit tests, not my integration tests. I run this script every couple of minutes, as that's how long it should take to either write a test or fill in the code to make the test pass. If your "build and test" turnaround is more than a few seconds, you probably won't be doing it too often, which obviously affects your adoption of TDD. Some easy wins include: having a smaller solution (with only the projects you need in the build & test process), using a build script instead of VS to build & test, and of course minimising your integration tests, focusing on faster unit tests. It's probably wise for me to say that I do have integration tests, but I try to minimise them and only run them a couple of times a day, not every build cycle. A perfect time to run them would be as part of your check-in process (which I assume you do reasonably often).
- When a test breaks: tools down, fix the problem! If you have just made code changes and a new red light comes on, this is bad! Only one red (failing test) at a time please! Enough said.
- Refactor! Refactoring means better code. More readable and better performing sound like great things to me, especially with the safety of a test suite to assure you the end result is still the same. It also highlights if you have brittle tests. When I first wrote tests I didn't refactor that much, as it often broke the tests. This was a mistake. Instead of "not refactoring" I should have addressed the root issue, which was brittle tests. Flying solo while doing this can be hard. That's what forums & user groups are for. Show the world your mistakes so you can stop making them**. You will probably find dozens of others doing the same thing and not realising it.
- Help team members get better at testing. Teaching others also helps you get better, as it highlights gaps in your own knowledge. The main benefit is the cross-pollination of ideas and concepts within your team. If one team member is spending a lot of time rewriting tests, it is a sign that they are missing a concept. Maybe they are not using doubles correctly? Maybe they are retrofitting tests?
- Keep set ups and tear downs small. If your set ups are massive then your SUT is probably too coarse and needs to be broken up into more manageable bites. Typically my set ups have a fixture-wide mock assignment (for fixtures that focus on the same dependencies) and my teardowns only have a mock verification, if I have one at all. (There is a second sketch of this shape at the end of the list.)
- Don't think TDD will come in a couple of hours. It doesn't. I have heard comments from others and tend to agree that making all the mistakes and reaching the point where TDD is a natural progression takes about 6 months of standard 9-5 developer time. If you are an uber nerd, maybe shorter, as you may be writing some code at home, or you just bash out more, or you get concepts faster. I wrote my first NUnit test about 18 months ago and first played with a mocking framework 5-6 months later. For most of that time I was running blind and really had no idea what I was doing. Only in the last 6 months have I become very confident in my TDD ability; by reviewing my own and other people's tests you can see where and why you are doing things wrong. I am not the TDD guru, but there is not much that I write now that I can't test (that I want to test!)
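As promised above, here is a rough sketch of the stub/mock split. The ITaxService, IAuditLog and InvoiceCalculator types are invented purely for illustration, and the syntax assumes Rhino Mocks 3.5's AAA style with MbUnit:

using MbUnit.Framework;
using Rhino.Mocks;

// Illustrative types only - not from any real project.
public interface ITaxService { decimal GetRate(string countryCode); }
public interface IAuditLog { void Record(string message); }

public class InvoiceCalculator
{
    private readonly ITaxService taxService;
    private readonly IAuditLog auditLog;

    public InvoiceCalculator(ITaxService taxService, IAuditLog auditLog)
    {
        this.taxService = taxService;
        this.auditLog = auditLog;
    }

    public decimal Total(decimal net, string countryCode)
    {
        decimal total = net * (1 + taxService.GetRate(countryCode));
        auditLog.Record("Invoice calculated"); // the interaction we actually care about
        return total;
    }
}

[TestFixture]
public class InvoiceCalculatorTests
{
    [Test]
    public void Total_AddsTax_AndRecordsAudit()
    {
        // Stub: only feeds data in, can never fail the test.
        ITaxService taxService = MockRepository.GenerateStub<ITaxService>();
        taxService.Stub(t => t.GetRate("UK")).Return(0.175m);

        // Mock: we EXPECT an interaction with it, so it can fail the test.
        IAuditLog auditLog = MockRepository.GenerateMock<IAuditLog>();

        InvoiceCalculator calculator = new InvoiceCalculator(taxService, auditLog);
        decimal total = calculator.Total(100m, "UK");

        Assert.AreEqual(117.5m, total);                                  // one state assert
        auditLog.AssertWasCalled(a => a.Record("Invoice calculated"));   // one expectation
    }
}

Note it also keeps to one assert and one expectation, which ties in with the two points above.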
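And a second sketch, again with made-up types, of the kind of small set up and tear down I mean - a fixture-wide mock assignment in SetUp and nothing but verification in TearDown:

using MbUnit.Framework;
using Rhino.Mocks;

public interface IExchangeRateFeed { decimal GetRate(string from, string to); }

public class PricingService
{
    private readonly IExchangeRateFeed feed;
    public PricingService(IExchangeRateFeed feed) { this.feed = feed; }

    public decimal Convert(decimal amount, string from, string to)
    {
        return amount * feed.GetRate(from, to);
    }
}

[TestFixture]
public class PricingServiceTests
{
    private IExchangeRateFeed rateFeed;

    [SetUp]
    public void SetUp()
    {
        // Fixture-wide mock assignment: every test gets a fresh double.
        rateFeed = MockRepository.GenerateMock<IExchangeRateFeed>();
    }

    [TearDown]
    public void TearDown()
    {
        // Verification only - keep it this small.
        rateFeed.VerifyAllExpectations();
    }

    [Test]
    public void Convert_UsesTheCurrentRate()
    {
        rateFeed.Expect(f => f.GetRate("GBP", "USD")).Return(2.0m);

        PricingService service = new PricingService(rateFeed);

        Assert.AreEqual(20m, service.Convert(10m, "GBP", "USD"));
    }
}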
Although this is by no means a comprehensive list, it does give you pointers as to where you may have problems with your tests. I figure I would rather fail fast (if I am going to fail at all), so hopefully these pointers give you an idea of which paths you should not be going down.
As a side note: it is now habit for me to write tests as I go. This doesn't come at first, but push through the initial barriers and you will get there. I don't always follow "Red, Green, Refactor" as sometimes I just write the test, write the code and run green. But when I first started I found the red aspect helpful.
Hope this helps get someone onto the TDD bandwagon a little easier and a little faster.
RhysC
*Doubles include fakes, mocks, spies, dummies and stubs.
**Not all your mistakes, or the world will just think you are a clown.
Tuesday, September 23, 2008
Eset + Asp.net = :(
I installed Eset Smart Security after years of benefit from NOD32 on my XP dev machine. Well, the results were not quite the same. To be honest, within 10 days it got uninstalled... 5 of those days I was not even in the same country as my dev box... disappointed, to be honest. :(
Alt.Net UK Conference
This weekend the UK Alt.Net conference was held in London. It was a good meet-up with some interesting topics. I must say I didn't really know what to expect, as there were not as many "names" attending as there were at Seattle (which is to be expected). Being my second Open Spaces event, I felt more comfortable with the type of environment and was able to engage with a bit more confidence, which was good.
Friday night basically covered Alt.Net and what it means to various people. I can't really say I got much out of this other than that some people feel there is an identity crisis and too much bickering. I believe we are just pragmatic, aware and evolving developers, nothing more. I also feel that some of the bickering is just a side effect of the type of people involved (i.e. driven, intelligent and probably very confident) and the means of communication (written rather than verbal, so tone can't easily be ascertained). I think we should just deal with it, minimise it and move on.
Saturday was the actual sessions. I attended
- Build/Deployment/CI
- OSS ESBs (NServiceBus & MassTransit) and BizTalk
- DDD
- BDD and acceptance testing
From the Build talk I learnt about some DB tools to help with deployment, which is a stress point for us. It also sounds like TeamCity is fast becoming a preferred CI tool. The point was raised that many teams do not have any means to automatically test post-deployment, i.e. is what we have rolled out actually working correctly? I thought this was particularly important, mainly because I have never really done it.
Unfortunately the ESB talk was a bit light. I was hoping (like most) that there were some users who had some pretty deep knowledge of either NServiceBus or MassTransit. There were only two of us that had even used them, so we gave a brief overview of what they are and how we have found using them. I felt the talk got sidetracked into when to use async operations and then comparing to BizTalk, which I feel is like comparing apples to oranges, especially the price tag. A point that was made is that they (NSB & MT) make using MSMQ a lot easier, and to be honest, I think they validate themselves in this functionality alone. Some of the guys wanted a brief demo of MT: check it out here. The original samples are in the download here.
I enjoyed the DDD talk most, where Ian Cooper took the reins a bit, walking through the concepts as the vast majority had not actually read the blue book. This to me was good as it kept everyone on the same page. The number of DDD talks I have had with people that don't know what an aggregate root is is somewhat annoying (and that's only this year). I should clarify that it's not that it's annoying that they don't know; it's annoying that they are using the term DDD inappropriately. It was then highlighted that implementing all aspects of DDD is overkill for many applications, which is rightly so. Aggregate Root, Specification, Anti-corruption Layer, Shared Kernel and Bounded Context were all defined for those not familiar with the book. Discussions about generic repositories came up and how marker interfaces could be used to limit their use to Aggregate Roots, which I like (see the sketch below). Messaging came up as a means of not letting domain concerns leak into the application and cleaning up responsibilities. I am a huge ANTI-fan of entities in the application. Use messages where possible, especially in a disconnected scenario. Refactoring is easier, maintenance is easier and the intent is much more clearly defined. A point was raised that NHibernate intrinsically leads to an anaemic domain, which I don't buy for one second. NHibernate does not stop you changing accessibility on properties (thereby avoiding property chaining), stop you adding rich functionality to entities, or force you to push logic to services. I think I either misunderstood the sentiment or there is a misconception about the functionality of ORMs.
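As a rough sketch of that marker-interface idea (the types below are invented, not from the talk):

// The marker interface has no members; it only tags which entities are aggregate roots.
public interface IAggregateRoot { }

public class Order : IAggregateRoot { }

// OrderLine lives inside the Order aggregate, so it is deliberately not a root.
public class OrderLine { }

// The generic constraint stops anyone newing up Repository<OrderLine>.
public class Repository<T> where T : IAggregateRoot
{
    public T GetById(object id)
    {
        // session lookup (NHibernate or otherwise) would go here
        return default(T);
    }

    public void Save(T aggregateRoot)
    {
        // persistence would go here
    }
}

// Repository<Order> compiles; Repository<OrderLine> does not, which is the whole point.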
The BDD discussion basically turned into an acceptance testing talk. I am not going to lie, I was tired by then and wanted to go. The chat was relevant, but I had been up since 6am packing, as we were also moving house that day. I think BDD is something that hasn't hit the mainstream as it seems to have too high a barrier to entry. Whether this perception is accurate, I am not sure. I like the idea of BDD but I am yet to jump in feet first; I will take from it what I want and continue along my happy way.
Thanks to everyone who attended, it was enjoyable.
Tuesday, September 16, 2008
SOA and DDD
It is funny, but the same discussion was had over the weekend. I felt SOA could flourish from a DDD approach, certainly in my experience. If a well-constructed domain has been created, then the services can expose the business functionality at the granularity required. As DDD is such an organic process, it feels like the services naturally expose themselves. Now this is unfortunately not really the way SOAs are approached, with good reason. You can't just give a dev team an application and ask them to tell the business/architects what services came out of "the evolution of the domain"; we want to be a bit more concrete than that.
I have been lucky (?!?!) in that the application I am working on is to be part of an SOA structure. The definition is largely up in the air at the moment, so as a dev/designer I have significant influence. We can actually define the services exposed based on the upfront and ongoing analysis and user feedback on the existing functionality. Will this bode well when it comes to implementation of the publicly available service? I don't know. I certainly hope so. Although I won't be here when that eventuates, I am very keen to hear about the progress.
Currently we are making progress, albeit with a stack that I don't feel is optimal, but the amount of friction we get on a daily basis is hardly bearable:
Domain
↓
Synchronous Messaging
↓
Application layer
I am interested to hear that others feel the application layer should be behind the messaging system; I would like to investigate more.
If something is significant enough to warrant a Service Orientated Architecture, then I feel a DDD approach is a great fit, with SOA as the higher-level architecture and DDD being the design principles guiding the developers to correctly model the business process.
Refactoring to Specification Pattern and Predicates
public bool CanBePulled
{
    get
    {
        bool canPull = false;
        if (this.IsOnPlatform)
        {
            IJobItemStatus status = this.jobItemStatus;
            if (status != null)
            {
                canPull =
                    !status.IsDeliveringToChops &&
                    !status.IsDeliveringToPlatform &&
                    !status.IsPullingFromChops &&
                    !status.IsCancelled &&
                    !status.IsPulled &&
                    !status.IsPullRequested &&
                    !status.IsRetired;
            }
        }
        return canPull;
    }
}
It just lists a bunch of boolean properties that don't relay why we care about these fields. Although the following could syntactically do with some reshuffling, I think the underlying business intent is much more easily conveyed:
public bool CanBePulled
{
    get
    {
        AbleToPull spec = new AbleToPull();
        return spec.IsSatisfiedBy(this);
    }
}
using the private specification class
private class AbleToPull : CompositeSpecification<JobItem>
{
    public override bool IsSatisfiedBy(JobItem candidate)
    {
        // Composes the "On Platform" and "Not Processed" specifications
        // (class names approximate - the blog formatter mangled the original line).
        ISpecification<JobItem> ableToPull = new OnPlatform().And(new NotProcessed());
        return ableToPull.IsSatisfiedBy(candidate);
    }
}
Here it is now obvious that "to be pulled" you need to be "On Platform" and "Not Processed". To me this conveys much more business intent and therefore becomes much more usable to the developer consuming or maintaining the API.
http://en.wikipedia.org/wiki/Specification_pattern
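For reference, a minimal sketch of the supporting specification types the snippet above assumes. The exact base class and method shapes are my best guess, since the blog formatter ate part of the original code:

public interface ISpecification<T>
{
    bool IsSatisfiedBy(T candidate);
}

public abstract class CompositeSpecification<T> : ISpecification<T>
{
    public abstract bool IsSatisfiedBy(T candidate);

    // Allows specifications to be chained: new OnPlatform().And(new NotProcessed())
    public ISpecification<T> And(ISpecification<T> other)
    {
        return new AndSpecification<T>(this, other);
    }
}

public class AndSpecification<T> : CompositeSpecification<T>
{
    private readonly ISpecification<T> left;
    private readonly ISpecification<T> right;

    public AndSpecification(ISpecification<T> left, ISpecification<T> right)
    {
        this.left = left;
        this.right = right;
    }

    public override bool IsSatisfiedBy(T candidate)
    {
        return left.IsSatisfiedBy(candidate) && right.IsSatisfiedBy(candidate);
    }
}

// Leaf specifications such as OnPlatform and NotProcessed would derive from
// CompositeSpecification<JobItem> and each check a single piece of state.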
UPDATE: I hate Office as my blog poster... WLW is not functional at work either, so sorry about the nasty formatting... it kinda defeats the purpose of the post.
Thursday, September 11, 2008
Build Configurations
Having a chat with the lads at work, I was surprised that the Debug/Release option in Visual Studio was not a well understood build feature.
Debug and Release are just build configurations that can be modified. You can also add build configurations to fit your needs.
I follow the TreeSurgeon pattern (now out of habit) and create a new configuration, e.g. AutomatedDebug, and set up my build preferences there.
What are the benefits of doing this, I hear you ask?
• Ease of deployment
• Have a configuration for each deployment environment
Those for me are enough reasons to do this. This build config and my NAnt script mean I can:
• Build the solutions quickly
• Optionally run tests based on category and optionally use code analysis tools (FxCop, NCover etc)
• Have all the DLLs in one place
• Have all test DLLs in one place, separate from the production DLLs
• Have all my test reports in one place
• Have a distribution zip file with all I need (and nothing I don’t) to deploy the solution (dlls, config etc)
• Have the containing folder for all of this deletable and be able to replace everything, without it needing to be under source control
It takes one click to do this.
Well, this probably sounds like it's pretty hard to set up… I know it did to me, because I was very reluctant to set it up… but it really wasn't that bad, and the time I have saved has paid me back tenfold and continues to do so.
If you want to try this approach on a clean soln, just download TreeSurgeon from CodePlex and get to it. You may have some issues with NCover (I did). There are solutions on the web to fix it, but if you can't be bothered then you can just use NUnit and not worry about code coverage.
Alright, to do this with an existing soln, right click on your soln in VS and select "Configuration Manager…".
Under the "Active Solution Configuration" drop-down select <New…>, give the configuration a name (e.g. AutomatedDebug) and copy its settings from an existing configuration such as Debug.
Hit OK and you will now see the active config is AutomatedDebug (or whatever you called it). Close the config manager and you will also notice that the drop-down box for soln config (in the VS tool bar) has been set to AutomatedDebug.
The next step I personally take is setting up all the projects in the solution to build to a standardised place. At the root of the solution (i.e. in the trunk folder) I have a bin folder that is NOT under source control.
In this folder, after a build using the AutomatedDebug configuration, I have "AutomatedDebug", "AutomatedDebugTests", "AutomatedDebugReports" and "AutomatedDebugDist" folders. In each of my production projects (i.e. not test projects etc) I set the build location as a relative path to the AutomatedDebug bin folder, e.g.
..\..\..\bin\AutomatedDebug\
Whereas all of my test projects are built to
..\..\..\bin\AutomatedDebugTests\
I do this to keep things clean for me; you can just dump everything in one folder if you want.
Snippet:
<target name="compile" description="Compiles using the AutomatedDebug Configuration">
    <msbuild project="${solnpath}\Fullstack.ExampleBuild.sln">
        <property name="Configuration" value="AutomatedDebug" />
    </msbuild>
</target>
Deploying is now very easy, as is rerunning tests, as all your tests are in one place; however, to be honest, the targets I use by default build and run all tests every time.
Application != Domain
The application in question is reasonably complex, dealing with somewhat complex workflows, legal requirements, provider-specific implementations of certain tasks etc, and so "Domain Driven Design" is an ideal approach. Unfortunately a little knowledge is a bad thing. There is a general bad habit at my current work place of using terms inappropriately. Terms like "Domain" (used as: server side) and "Agile" (used as: make it up as we go and don't document anything) are thrown around without many of the people involved understanding what they truly are. It is a situation we are trying to change; however, communication amongst us all needs to improve first…
Anyway, one of the major things I have noticed is that what we have created, server side, is a pseudo domain. It has its flaws (e.g. too much logic in services, creating an unnecessarily anaemic domain) but it basically works; it does what is required. Unfortunately it exposes too much of its inner workings to the outside world. This was originally a design decision to make an application that "could be connected or disconnected", so the client used domain entities and, when disconnected, passed entities across the wire. This meant a client proxy that mirrored the domain services... which also led to lazy creation of services, exposing unnecessary options to the client. --NB: this is not my idea, just the reason given to me as to why it was done like this.--
What this also led to was leaking of domain logic even further up the stack to the actual application. This to me is completely unacceptable. I may preach on about this, but this to me is another reason to use DTOs.
DTOs separate the application from the intricate workings of the domain, providing what is required for the given operation. They do not convey actions, only state.
Correctly set up, DTOs will have suitable setters and getters so they cannot be easily abused. This also allows the application developer, if required/desired, to transform these DTOs into application "entities" that have smarts etc. Using DTOs, to many, sounds like a tonne of unnecessary work.
I (obviously) disagree.
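To make that concrete, here is a rough sketch (the Job and JobSummaryDto types are invented for illustration) of the entity keeping its behaviour while the DTO carries state only:

using System.Collections.Generic;

// Domain entity: rich behaviour, stays behind the service boundary.
public class Job
{
    private readonly List<string> notes = new List<string>();

    public int Id { get; private set; }
    public string CustomerName { get; private set; }
    public JobStatus Status { get; private set; }

    public void Cancel(string reason)
    {
        // Domain rules live here, not in the application.
        Status = JobStatus.Cancelled;
        notes.Add(reason);
    }
}

public enum JobStatus { Open, Cancelled }

// DTO: state only, no behaviour, safe to push across the wire.
public class JobSummaryDto
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
    public string Status { get; set; }
}

public static class JobMapper
{
    public static JobSummaryDto ToSummary(Job job)
    {
        return new JobSummaryDto
        {
            Id = job.Id,
            CustomerName = job.CustomerName,
            Status = job.Status.ToString()
        };
    }
}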
Although I am a contractor (read: mercenary) I still believe in delivering value to the client, which in my case is first and foremost the company I am working for at the given time, and then their respective clients. Value is not only given by building an application that works* within a time frame; I also place huge importance on maintainability. The couple dozen guys sitting in front of me are all maintaining applications written over the last 10 years. Looking at the quality of code in the company, well-written software could possibly have halved that number. I believe correct separation of concerns is key to the goal of maintainable software.
Separating the application logic now becomes even more important. Application logic deals much more with user experience and flow.
The users don’t care that when they create X
• Y must be notified
• Z is processed
and nor should the application.
Domain logic should deal with fundamental business concerns. The example of "when you create X then Y must be notified and Z is processed" is, to me, quite clearly a domain issue. The application only then needs to care about giving the domain what it needs to correctly create X.
This type of separation allows the company in general to move closer to providing a legitimate SOA-styled architecture, which can never be achieved with such a leaky domain.
Now none of this is ground-breaking stuff, but it amazes me that this mistake occurs so often. Anything larger than a basic web site should probably be separating the application and domain logic. Certainly for anything enterprise level, this is the first thing I would be doing.
For more information about this read DDD by Eric Evans. Anyone involved in Enterprise level, SOA or Distributed systems needs to read it.
*and is tested and is documented and ….
When to use Enums vs objects
I will admit that I am more of the latter, but I have been rethinking my use of them lately and think I have settled on a few rules of thumb that I may start to follow, which of course I would like your thoughts on.
Enums in the domain
Enums map easily to reference tables in most ORMs, so this is an easy win here. Unfortunately I am starting to lean towards not using Enums in the domain. The presence of Enums usually means different ways of handling certain scenarios, and instead of using ugly switch statements in the domain I am going to try to move to using objects over Enums, which may help with moving to more robust strategy patterns.
These objects are still easily mapped using discriminators, and this allows domain functionality to live in these new, more DDD-styled value types.
Possibly one approach is to start using Enums in the initial stages of mapping and, as functionality grows, refactor to objects as necessary.
Enums over the wire
Enums over the wire I am completely OK with. Provided the Enums are well documented, these little buggers just go across as the given value type you have assigned (commonly int). This keeps message sizes down and allows the client to create an Enum on the receiving side to map to the given Enum values. NServiceBus is an example of where this happens (for error codes, IIRC).
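A quick sketch of what that looks like (both enums are made up for illustration) - only the underlying int crosses the wire and the client keeps its own mirror of the values:

// Service side: the value that actually goes into the message.
public enum ErrorCode
{
    None = 0,
    ValidationFailed = 1,
    Timeout = 2
}

// Client side: a separate assembly declares its own mirror of the same values.
public enum ClientErrorCode
{
    None = 0,
    ValidationFailed = 1,
    Timeout = 2
}

public static class Wire
{
    public static int Serialize(ErrorCode code)
    {
        return (int)code;                    // commonly an int on the wire
    }

    public static ClientErrorCode Deserialize(int wireValue)
    {
        return (ClientErrorCode)wireValue;   // mapped back by value on the receiving side
    }
}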
Enums in the application
I think this is where I would be most pragmatic with my approach. A lot of application developers, especially in the .NET world, are more than happy to deal with Enums, and small switch statements in the application may actually be easier for many to maintain. These may also be easier to deal with in UI displays, like drop-downs, as many people have standardised helpers to manipulate Enums. Again it really depends on the situation and how much logic is dealt with on the client/application.
Again I hope I will take a reasonably pragmatic approach to this. Hard and fast rules often mean you are unnecessarily painting yourself into a corner.
For those wondering what the hell I am talking about when using objects as Enums, this nasty code gives a vague idea. Note that you can now subclass the type, providing type-specific logic.
class Program
{
    static void Main(string[] args)
    {
        Person bob = new Person(OccupationType.Developer, OccupationEnum.Developer);
        //do other stuff...
    }

    public class Person
    {
        OccupationType occupation;
        OccupationEnum occupationEnum;

        public Person(OccupationType occupation, OccupationEnum occupationEnum)
        {
            this.occupation = occupation;
            this.occupationEnum = occupationEnum;
        }
    }

    public class OccupationType
    {
        public static OccupationType RockStar = new OccupationType();
        public static OccupationType Developer = new OccupationType();
        public static OccupationType BusDriver = new OccupationType();
        public static OccupationType Maid = new OccupationType();
    }

    public enum OccupationEnum
    {
        RockStar,
        Developer,
        BusDriver,
        Maid
    }
}
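And to show the type-specific logic point, a rough sketch (with made-up behaviour) of subclassing the type so callers never need a switch statement:

public abstract class OccupationType
{
    public static readonly OccupationType RockStar = new RockStarType();
    public static readonly OccupationType Developer = new DeveloperType();

    // Each subclass carries its own logic; no switch statements at the call site.
    public abstract decimal DailyRate();

    private class RockStarType : OccupationType
    {
        public override decimal DailyRate() { return 10000m; }
    }

    private class DeveloperType : OccupationType
    {
        public override decimal DailyRate() { return 500m; }
    }
}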
Wednesday, September 10, 2008
Style Cop and VS Defaults
I have been playing with Microsoft's new(ish) code analysis tool StyleCop over the last couple of nights (god forbid I use it on our work code base!). It's a pretty cool tool in the same vein as FxCop, but with more of a focus on general code layout etc. In this regard it's pretty cool. It gives you warnings when rules are broken & it integrates into VS quite nicely. Unfortunately there is no clean way of integrating it into my NAnt scripts as there is no exe to call. Apparently you can put it into your MSBuild scripts, but it's kinda weak that they have not provided an exe for other means... anywho...
It also highlights the fact that the VS default templates do not adhere to the StyleCop rules. If you run StyleCop over a standard code base you will see a bunch of warnings straight off the bat. Firstly, there are no file headers on any standard C# file.
An XML-style header needs to be placed in every file. This to me stinks of C headers... are we not a bit past this? Is this really required? There are copyrights on the project in the assembly file; do we need the clutter on every single file (and repeated in the AssemblyInfo.cs twice)?
Well if you do then feel free to add a header similar to:
// <copyright file="AssemblyInfo.cs" company="FullStack">
// Copyright (c) 2008 All Right Reserved
// </copyright>
// <author>Rhys Campbell</author>
// <email>rhysc@fullstack.co.uk</email>
// <date>2008-09-10</date>
// <summary>Contains assembly information.</summary>
It also points out that VS, by default, places the using statements outside of the namespace, which the pigs don't like!
Fortunately there is an easy fix to these and other default behaviours: Templates.
Visual Studio has configurable templates for the files it creates, so you can modify the standard output to what you want. These templates reside as zip files in the C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\ItemTemplates\CSharp\ folders and are actually cached nearby in the ItemTemplatesCache folder. You can make your own personal templates; by default they hang out in C:\Users\Rhys Campbell\Documents\Visual Studio 2008\My Exported Templates\.
Say, for example, we wanted to trim the StyleCop warnings of our projects down a bit; then the default class template could be changed from
using System;
using System.Collections.Generic;
$if$ ($targetframeworkversion$ == 3.5)using System.Linq;
$endif$using System.Text;
namespace $rootnamespace$
{
    class $safeitemrootname$
    {
    }
}
To something like:
// <copyright file="$safeitemrootname$.cs" company="FullStack">
// Copyright (c) 2008 All Right Reserved
// </copyright>
// <author>$username$</author>
// <email>$username$@fullstack.co.uk</email>
// <date>2008-09-10</date>
// <summary> </summary>
namespace $rootnamespace$
{
    using System;

    public class $safeitemrootname$
    {
    }
}
To do this just crack open a new class, make the changes to the layout you want using template parameters in logical places (i.e. class, namespace, date etc) and File > Export to Template... *
You can select the namespaces required and even pick a pretty icon for your new template. This is particularly good if you use a bunch of standard classes (Tests, NHibernate, WCF all spring to mind) and it should speed things up nicely :)
To use your new template, just add a new item to a project and under all the standard templates will be a "My Templates" section with your new template happily residing.
*If your Export Template, like mine, is not there, go to Tools > Customise and drag the command from the depths of the hidden file options onto the File menu drop-down (i.e. Commands > Categories=File > Command = Export Template...)
Tuesday, September 9, 2008
Build scripts
Well, over the last year I have usually had a build script lying around that was kinda "just there", more for assurance that what I was doing could be easily hooked into a CI scenario. However, I have been using the build script locally instead of VS lately and am finding it pretty good. It certainly is a lot faster just to build the soln. I can also:
- control what gets built
- what tests are run
- what code analysis gets done
- if I need a distribution zip etc
all by clicking on a different bat file.
All of my tasks are in my NAnt file and each bat file just points to a different task.
I have a quick build (no test or code analysis), a standard build (build, run all unit tests and code analysis) and a deploy (standard plus zip the required files for a deploy).
Any solution bigger than a scratch pad can really benefit from a build script. The best thing is, if you are like me and many of your solutions have the same general structure, once you are happy with your defaults you just need to change one or two parameters in the script for your different solutions.
If you are not using a local build script, I would highly recommend it. I am just disappointed I have not been using it more in the past.
Monday, September 8, 2008
Gray hairs and coronaries ...
Monday morning comes around and there were some small build issues (csproj.user files are not part of a build outside of VS) but easily fixed.
The real problem was when we started to do complex interactions... the app died...
Oh sh1t.... I'm fired..
Well, maybe not fired; just roll back to Friday and I have wasted a weekend.
Long story short... don't leave NHibernate logging on... it kills your app.
Commenting out log4net returned everything to normal; now we just need to log the correct things (i.e. not every NHibernate interaction!)
Friday, September 5, 2008
Better Know a Framework!
ImmutableAttribute to continuously check that a class or a structure is immutable.
PureAttribute to continuously check that a method is pure, i.e. it doesn't provoke any side effects by modifying field states.
The default C# keyword in generic code: it will return null for reference types and zero for numeric value types.
These are things that I haven't used in the past when I should have, but not any more! :)
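As a quick sketch of the default keyword in generic code (the helper below is made up for illustration):

public static class Defaults
{
    public static T GetOrDefault<T>(T[] items, int index)
    {
        // default(T) gives null for reference types, zero for numeric value types,
        // and a zero-initialised value for other structs.
        if (index < 0 || index >= items.Length)
        {
            return default(T);
        }
        return items[index];
    }
}

// Defaults.GetOrDefault(new[] { "a", "b" }, 5)  -> null
// Defaults.GetOrDefault(new[] { 1, 2, 3 }, 5)   -> 0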