I’ve just read this on Ayende’s blog and I’m baffled! I’ve spent quite some time with SSIS, and the most frustrating part is the complete lack of support for storing the .dtsx files in a source control system (like Subversion). At first glance everything seems fine: the dtsx files are actually just XML (and thus text), which should be diff’able. In practice, however, it’s a nightmare: every time the file is changed (however small the change you made), a large part of the contents changes, and even worse, the changes are clearly not local; they occur all over the place! To make it even better, someone at MS decided it was probably a good idea to store both the ‘code’ (control/data flow logic, …) and the visual representation in the same file! Now even moving a box by one pixel messes up the complete file.

And no, the traditional (MS) answer that you should do a reserved checkout simply doesn’t work, for two reasons:

1/ Having used the ‘edit-merge-commit’ pattern for most of my professional life, I simply *refuse* to go back to a checkout-edit-checkin way of working (for one project I was forced to use VSS and, to put it nicely, I didn’t enjoy it). This is 2008, you know?!

2/ What about branches and merging? Nope, thought so!

I just cannot imagine that they do not have ‘decent support for teamwork’ as one of their top (non-functional) requirements!! Even the best product (and SSIS clearly is not, IMHO) would be close to useless if it didn’t support this!


This table lists the macros that can be used in a VS-2005 pre/post build event. I never remember these, so I decided to put them here as a quick reference for myself.

$(ConfigurationName): The name of the current project configuration, for example "Debug".
$(OutDir): Path of the output file directory, relative to the project directory. This resolves to the value of the Output Directory property. It includes the trailing backslash ‘\’.
$(DevEnvDir): The installation directory of Visual Studio 2005 (defined with drive and path); includes the trailing backslash ‘\’.
$(PlatformName): The name of the currently targeted platform, for example "AnyCPU".
$(ProjectDir): The directory of the project (defined with drive and path); includes the trailing backslash ‘\’.
$(ProjectPath): The absolute path name of the project (defined with drive, path, base name, and file extension).
$(ProjectName): The base name of the project.
$(ProjectFileName): The file name of the project (defined with base name and file extension).
$(ProjectExt): The file extension of the project. It includes the ‘.’ before the file extension.
$(SolutionDir): The directory of the solution (defined with drive and path); includes the trailing backslash ‘\’.
$(SolutionPath): The absolute path name of the solution (defined with drive, path, base name, and file extension).
$(SolutionName): The base name of the solution.
$(SolutionFileName): The file name of the solution (defined with base name and file extension).
$(SolutionExt): The file extension of the solution. It includes the ‘.’ before the file extension.
$(TargetDir): The directory of the primary output file for the build (defined with drive and path). It includes the trailing backslash ‘\’.
$(TargetPath): The absolute path name of the primary output file for the build (defined with drive, path, base name, and file extension).
$(TargetName): The base name of the primary output file for the build.
$(TargetFileName): The file name of the primary output file for the build (defined as base name and file extension).
$(TargetExt): The file extension of the primary output file for the build. It includes the ‘.’ before the file extension.

Note: this list was taken from here
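These macros are used on the Build Events tab of the project properties. As an illustration (the Deploy folder is just an invented example), a post-build event that copies the primary output to a folder next to the solution could look like:

```
copy "$(TargetPath)" "$(SolutionDir)Deploy\$(TargetFileName)"
```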

When you’re developing code that accesses a database (and in an enterprise world: who isn’t?), it is often difficult to write good unit-tests, as these tests depend on the data that is currently present in the database. A test that runs just fine now, might fail as soon as someone changes the data.

Therefore, a framework like DbUnit was developed: it allows you to define which data should be present when your tests are run. In fact, a typical usage is to reset the database (data-wise, that is), before every test. By ‘reset’, I actually mean: clear all data and reread a predefined set from file (typically xml). As I have had some good results with DbUnit, I was happy to find out that a port to .NET was underway: NDbUnit. However, when I tried to use it, I found (and fixed) some bugs in the code.

I’ll list them here for future reference and in the hope these fixes make it to the next release.


The code in insertRecursive/deleteRecursive that adds the name of the current table to the hashtable (in order to prevent a table from being processed twice) should come immediately after the test, instead of at the end of the method:

    if (insertedTableColl.ContainsKey(dataTableSchema.TableName))
        return; // already processed
    // add the table immediately after the test, not at the end of the method:
    insertedTableColl[dataTableSchema.TableName] = null;

As both methods will try to process related tables first (to enforce constraints), an infinite loop occurs when a table has a hierarchical relation with itself.
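The effect of that fix can be illustrated in isolation. The Table and Walker types below are invented for this sketch (NDbUnit’s real code works on DataTable and its DataRelations), but the guard logic is the same:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

class Table
{
    public string Name;
    public List<Table> Parents = new List<Table>();
}

static class Walker
{
    // Parents are visited first (as insertRecursive does). Because the guard
    // entry is added *before* recursing, a table that references itself
    // terminates instead of looping forever.
    public static void InsertRecursive(Table table, Hashtable visited, List<string> order)
    {
        if (visited.ContainsKey(table.Name))
            return;
        visited[table.Name] = null; // add immediately after the test
        foreach (Table parent in table.Parents)
            InsertRecursive(parent, visited, order);
        order.Add(table.Name);
    }
}
```

With the guard added at the end of the method instead, a self-referencing table (think of an Employee table with a Manager column pointing back at itself) would recurse forever.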


insertRecursive should also try to insert the parent relations first:

    DataRelationCollection parentRelations = dataTableSchema.ParentRelations;
    if (null != parentRelations)
    {
        foreach (DataRelation parentRelation in parentRelations)
        {
            // Must insert the parent table first.
            insertRecursive(ds, parentRelation.ParentTable, dbCommandBuilder,
                            dbTransaction, insertedTableColl);
        }
    }

The System.Console.WriteLine call should be removed from getSchemaTable, as it hides the exact information about what went wrong. Just let the exception bubble up to the caller (CreateSelectCommand in this case) and catch it there. So, instead of checking the return value as in:

    _dataTableSchema = getSchemaTable(sqlSelectCommand);
    if (_dataTableSchema == null)
    {
        string message = String.Format(
            "SqlDbCommandBuilder.CreateSelectCommand(DataSet, string) failed for tableName = '{0}'",
            tableName);
        throw new NDbUnitException(message); // the original cause is lost here
    }

a try/catch block should be used to wrap the exception and add some extra information:

    try
    {
        _dataTableSchema = getSchemaTable(sqlSelectCommand);
    }
    catch (Exception e)
    {
        string message = String.Format(
            "SqlDbCommandBuilder.CreateSelectCommand(DataSet, string) failed for tableName = '{0}'",
            tableName);
        throw new NDbUnitException(message, e);
    }

You’re never too old to learn, it seems… Today, by reading this entry on Ayende’s blog, I discovered the null-coalescing operator in C# 2.0. I often use code like the following:

    string name = (userName == null ? "<no name entered>" : userName);

There’s nothing wrong with this code (although in some cases, the null-pattern is a better alternative) but the ternary operator ?: makes the code less readable, and the fact that you have to specify the userName variable twice has always bothered me somewhat. Well, it seems that someone at Microsoft felt the same and decided to do something about it! In C# 2.0 you can now write

    string name = userName ?? "<no name>";

Of course, in SQL that was already possible with the COALESCE function:

    set @name = coalesce(userName, '<no name>')



This operator allows for a very elegant use of the null-pattern:

    public User findUser(string name) {
        User user = null;
        // insert some highly advanced search algo here
        // return the found user, or, if nothing was found, the NullUser instance
        return user ?? NullUser.Instance;
    }
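For completeness, here is one possible shape of the NullUser type assumed above: a classic Null Object implemented as a singleton. The User/NullUser names follow the snippet, but the members are invented for illustration:

```csharp
using System;

public class User
{
    protected string name;
    public User(string name) { this.name = name; }
    public virtual string Name { get { return name; } }
}

public sealed class NullUser : User
{
    // Single shared instance: callers can always dereference the result
    // of findUser without a null check.
    public static readonly User Instance = new NullUser();
    private NullUser() : base("<no name>") { }
}
```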

Almost every project I have worked on involved reading/writing data from/to files in a fixed-length or <insert your favorite separator here>-separated format. Recently, I’ve used SSIS to import large datasets from fixed-length files into a database. However, sometimes you also need to access these files from inside your code (in my case: for testing purposes), and although writing the code for this is not particularly difficult, it is not the most fun way to spend your time. A few minutes of quality time with Google and voilà: FileHelpers for .NET!

For example, if you want to read the data from the following file:

Amos|Tori|Little Earthquakes|19920225
Amos|Tori|Under The Pink|19940201

you just define the following class to hold the records

    [DelimitedRecord("|")]
    class AlbumRecord
    {
        public string name;
        public string firstName;
        public string album;
        [FieldConverter(ConverterKind.Date, "yyyyMMdd")]
        public DateTime releaseDate;
    }

Reading the file into an array of AlbumRecord objects is now as simple as:

    FileHelperEngine<AlbumRecord> engine = new FileHelperEngine<AlbumRecord>();
    AlbumRecord[] recordsFromFile = engine.ReadFile("albums.txt");

Fixed length files can be processed in a similar way…
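As a sketch, the same record in fixed-length form uses the [FixedLengthRecord] and [FieldFixedLength] attributes (the field widths below are made-up assumptions; pick them to match your file):

```
    [FixedLengthRecord]
    class AlbumRecordFixed
    {
        [FieldFixedLength(10)]
        public string name;
        [FieldFixedLength(10)]
        public string firstName;
        [FieldFixedLength(25)]
        public string album;
        [FieldFixedLength(8)]
        [FieldConverter(ConverterKind.Date, "yyyyMMdd")]
        public DateTime releaseDate;
    }
```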

Some time ago, someone asked me why the following doesn’t work in C#:

List<BaseType> myList = new List<DerivedType>();

As I didn’t know this either, I turned to Google and came across a very interesting blog post from Rick Byers explaining the reasons behind this in more detail. Today, I noticed a related post where
Rick shows a workaround for a very common usage of this type of conversion, namely passing a generic list of some derived type to a method that expects a list of the base type. The following won’t work:

public int DoSomething(List<BaseType> list) {
    foreach (BaseType element in list) {
        // ... do something with each element
    }
    return 0;
}

Using a generic method, you can still accomplish the same effect though:

public int DoSomething<T>(List<T> list) where T : BaseType {
    foreach (BaseType element in list) {
        // ... do something with each element
    }
    return 0;
}
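A self-contained sketch of this workaround (the BaseType/DerivedType classes and the counting body are invented for illustration). Thanks to type inference, the call site looks exactly as if a List<DerivedType> were accepted directly:

```csharp
using System;
using System.Collections.Generic;

class BaseType { }
class DerivedType : BaseType { }

static class Demo
{
    // The constraint 'where T : BaseType' lets the body treat every element
    // as a BaseType, while T can be inferred as DerivedType at the call site.
    public static int CountElements<T>(List<T> list) where T : BaseType
    {
        int count = 0;
        foreach (BaseType element in list)
            count++;
        return count;
    }
}
```

A call like Demo.CountElements(new List<DerivedType>()) compiles fine, whereas a non-generic CountElements(List<BaseType>) overload would reject the same argument.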

A colleague of mine just had an interesting problem. Assume a generic class

namespace MyNamespace {

    public class MyGeneric<T> {
        private string name;

        public string Name {
            get { return name; }
            set { name = value; }
        }
    }
}
Now if you want to create an object of this type via spring, you just specify the following in the app context (assuming that this class resides inside MyAssembly.dll):

<object id="myGeneric"
        type="MyNamespace.MyGeneric&lt;int&gt;, MyAssembly">
    <property name="Name" value="My Generic Class with int"/>
</object>
Note that the XML notation for less-than (&lt;) in MyNamespace.MyGeneric&lt;int&gt; is not a typo: Spring really requires the type to be specified this way. Now, suppose that you don’t want to use an int but a custom type as the generic type argument, let’s say

namespace MyOtherNamespace {
    public class MyClass {
    }
}

If this type resides in the same assembly, you can just say

<object id="myOtherGeneric"
        type="MyNamespace.MyGeneric&lt;MyOtherNamespace.MyClass&gt;, MyAssembly">
    <property name="Name" value="My Generic Class with MyClass"/>
</object>

However, if the class of the generic type argument sits in another assembly, you have to use a workaround. Just adding the assembly after the type won’t work, as the comma is used as the delimiter for multiple generic type arguments. Luckily, Spring supports something called a type alias, which, as the name implies, lets us define an alias for a type. First, define the alias:

    <alias name="MyClass" type="MyOtherNamespace.MyClass, MyOtherAssembly" />

With this type alias defined, we can now define the object as

<object id="myOtherGeneric"
        type="MyNamespace.MyGeneric&lt;MyClass&gt;, MyAssembly">
    <property name="Name" value="My Generic Class with MyClass"/>
</object>
