Sunday, April 17, 2011

Implementing an NBA Playoff Bracket in F#

I'm a huge fan of the NBA and sports in general, and for years I've been fascinated with the way the NBA structures its playoff bracket. It takes the top 8 teams from each conference and seeds them based on which teams have the most wins. There are 2 conferences in all, so that makes 16 playoff teams each year. The top seed plays the worst team, the second seed plays the seventh team, and so on and so forth. So I thought it'd be cool to actually code this setup in F#.

I started with a method that knows how to read in each team from a text file.

// Learn more about F# at http://fsharp.net

open System
open System.IO
open System.Reflection

type Conference =
    | Eastern
    | Western

type Team = {
    Name : string;
    Wins : int;
    Losses : int;
    Conference : Conference
}

let filename = Path.Combine(Assembly.GetExecutingAssembly().Location |> Path.GetDirectoryName, "Teams.txt")
let totalgames = 82
let max = totalgames + 1 // Random.Next's upper bound is exclusive
let r = new Random()
let conferencesize = 15
let playoffteamsperconference = 8
let half = playoffteamsperconference / 2

let getteams() =
    seq {
        let mapteam conference (l : string) =
            let wins = r.Next(0, max)
            { Name = l.Trim(); Wins = wins; Losses = totalgames - wins; Conference = conference;}

        let teams = filename |> File.ReadAllLines

        // first 15 teams are the eastern conference
        let eastern = teams
                        |> Seq.take conferencesize
                        |> Seq.map (mapteam Eastern)
        
        yield! eastern

        // last 15 are the western conference
        let western = teams
                        |> Seq.skip conferencesize
                        |> Seq.take conferencesize
                        |> Seq.map (mapteam Western)

        yield! western
    }

The chunk of code within the seq {...} scope is known as a computation expression. This particular type of computation expression is called a sequence expression. It's a language integrated feature of F# that lets you use certain operators based on a set of methods you implement. In this case, the compiler translates my call of yield! to a method call that knows how to accept a sequence and return its values. Computation expressions in F# are built on monads, a fundamental concept in functional programming. Believe it or not, the infamous LINQ as you know it is based on monads too.
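To see yield and yield! in isolation, here's a minimal sequence expression of my own (smallprimes is a made-up example, not part of the bracket code):

```fsharp
// yield produces a single element; yield! splices in a whole sequence.
let smallprimes = seq {
    yield 2
    yield 3
    yield! [5; 7; 11] // the list's elements are flattened into the sequence
}

smallprimes |> List.ofSeq |> printfn "%A" // [2; 3; 5; 7; 11]
```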

Amusingly enough, you've been using monads in .NET for quite a while even if you aren't a regular user of LINQ. Ever used Nullable&lt;T&gt;? It's a maybe monad: it either has a value or it doesn't. Even the infamous jQuery has been described as a monad. In F# we represent this kind of construct using the generic option type. Options are a discriminated union, represented by Some 'T or None, where 'T is a generic type argument.

An example is

let o = Some 5
let p = Some "string"

Now o is an int option and p is a string option.

This is cool because we never have to worry about a value being null. Null isn't even a valid value in F#, but it is however a valid .NET value.

How do we find out if o or p has a value? We have to use pattern matching.

let printifhasvalue optionvalue =
    match optionvalue with
    | Some v -> printfn "%A" v
    | _ -> ()

You can think of patterns as switch statements, and that's essentially how the compiler translates them. When using pattern matching, you have to handle all cases or the compiler will warn you that the match is incomplete. In my case I only have to worry about Some and None. I accounted for Some with the first check, and I used the wildcard pattern, _, to handle everything else. Since the compiler knows None is the only other case, the match is exhaustive; if the type had more cases, the wildcard would cover those too.
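To illustrate exhaustiveness with a union of my own (Medal is a hypothetical example, not part of the bracket code):

```fsharp
type Medal =
    | Gold
    | Silver
    | Bronze

// Every case is handled explicitly; delete the Bronze line and the
// compiler warns that the match is incomplete.
let place medal =
    match medal with
    | Gold -> "first"
    | Silver -> "second"
    | Bronze -> "third"
```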

So when interoperating with other .NET libraries we do have to account for null. F# libraries never return null though. As for discriminated unions, I'll talk more about those later. As a heads up, my Conference type is one.

The map function accepts a function that can take a value and transform it into another type of value. It's just like Select in LINQ. When I called Seq.map (mapteam Western), I used a concept called partial application.

The expression (mapteam Western) isn't actually invoking the mapteam function. It returns a function that accepts the remaining arguments for mapteam; in our case that's the actual team. Partial application is possible because F# functions are curried: a function of several arguments is really a chain of one-argument functions. If mapteam took 3 arguments, I'd get a compile time error here, because (mapteam Western) would give back a function that accepts 2 arguments as opposed to 1. In that case I'd have to write (mapteam Western arg2) to get a function that accepts only 1 argument. Pretty cool.
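Here's a sketch of partial application outside the bracket code (between and isvalidwins are hypothetical names):

```fsharp
// A curried function of three arguments.
let between lo hi value = value >= lo && value <= hi

// Supplying the first two arguments yields a function of the last one.
let isvalidwins = between 0 82

isvalidwins 90 |> printfn "%b" // prints false
```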

Almost everything is a function in F#. Even the + and - operators. Don't believe me? You can use the operators as functions by wrapping them in parentheses like so: (+). The result of that expression is a function that accepts 2 ints and returns another, or (int -> int -> int).
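A quick sketch of operators used as ordinary functions:

```fsharp
// (+) by itself is a function; here it defaults to int -> int -> int.
let add = (+)
printfn "%d" (add 2 3) // prints 5

// An operator works anywhere a function is expected.
[1; 2; 3] |> List.map ((+) 10) |> printfn "%A" // [11; 12; 13]
```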

I also needed a data structure to store each team and its properties. Specifically I used what's known as a record in F#. It's not the same thing as a class; the two are not synonymous.

There are 82 games in an NBA season, so I wanted to randomly generate the record for each team. I generate a number between 0 and 82 inclusive for the wins and subtract it from the 82 total games to compute the losses. Pretty simple.

As far as which conference each team belongs to, I got the names of each team from the NBA.com website and typed them in order into my Teams.txt text file. I tried to keep things simple in that regard.

Teams.txt


Chicago
Miami
Boston
Orlando
Atlanta
New York
Philadelphia
Indiana
Milwaukee
Charlotte
Detroit
New Jersey
Washington
Toronto
Cleveland
San Antonio
L.A. Lakers
Dallas
Oklahoma City
Denver
Portland
New Orleans
Memphis
Houston
Phoenix
Utah
Golden State
L.A. Clippers
Sacramento
Minnesota

The cool thing about F# is that it's functional. That means we should implement lightweight, composable functions. That's exactly the approach I've taken here. Each function builds atop the other. A simple f(g(x)) relationship if you will.

And by the way, all values in F# are immutable by default. That means we can't change the state of something once we've created it. This greatly simplifies multithreaded programming, because you don't have to worry about multiple threads changing the state of your data. You can rest assured that once you give a function a reference to a value, it'll be in that very state once the operation is completed.

F# isn't a purely functional language so we do have mutable properties and values that we can pass around. Just not by default.
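A small sketch of the default immutability and the mutable opt-out (the names are made up):

```fsharp
let wins = 62
// wins <- 63          // compile error: wins is not mutable

// Opting in to mutation with the mutable keyword.
let mutable gamesplayed = 0
gamesplayed <- gamesplayed + 1
printfn "%d" gamesplayed // prints 1
```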

So now that we have a function that knows how to fetch each team and generate their record, we'll make something that can consume the output of that function and print each team.

let printteams (teams : seq<Team>) =
    teams
    |> List.ofSeq
    |> printfn "%A"

The printfn function is a cool utility because, with the %A format, it knows how to print any object generically. It can be a sequence, a base type, etc. It's just like printf in C: you can pass a format like %s for strings and %d for integers.

I used List.ofSeq from the List module to convert my sequence to a list. I did that because sequences can be infinite in F#, so printfn would only print out the first few values. A list, on the other hand, is finite, so printfn will iterate the entire collection and print every element rather than just the first few.
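To see why this matters, here's a sketch with a genuinely infinite sequence (evens is my own example):

```fsharp
// Seq.initInfinite builds a lazy, unbounded sequence.
let evens = Seq.initInfinite (fun i -> i * 2)

// Take a finite prefix before converting to a list; List.ofSeq on
// the raw infinite sequence would never terminate.
evens |> Seq.take 5 |> List.ofSeq |> printfn "%A" // [0; 2; 4; 6; 8]
```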

The type seq<'a> in F# is equivalent to IEnumerable&lt;T&gt; in C#. It represents a possibly infinite list of elements. I work solely with sequences throughout my implementation. All instances of Seq.x represent a set of functions known as the Seq module in F#. You can think of it as a class with a bunch of static methods.

The funny looking |> syntax is just an operator. It takes a value and a function and invokes that function, passing in the given value as an argument. f(x) once again. I don't have to use this operator, the forward pipe operator as it's known, but I really like its logical syntax so I kind of abused it here. It's no different than piping output from one command to another in bash or any other command shell. Piping input to grep is really nice by the way.

There is also a backward pipe operator that knows how to go in the opposite direction. It takes the function on the left and the argument on the right, so it reads from right to left like

List.ofSeq <| x

To pull off the same thing without the pipe operator, I'd have to nest all of my calls. It'd look like f(g(x)), or

printfn "%A" (List.ofSeq teams)

In this case, g is my List.ofSeq function, which accepts the teams. The teams are of course x. The output of that function is then passed to printfn, which makes printfn the f in this equation. It doesn't look so helpful in simple scenarios, but later on you'll see me use the pipe operator quite aggressively, and I think you'll start to appreciate the elegant syntax it allows you to exercise.

And since F# is functional, functions are first class citizens. They don't have to belong to any particular object, just like in JavaScript. You can pass them around as normal values just like ints, GUIDs, and any other base types. That makes F# a really powerful language.

You're probably wondering how I got away without specifying types. Don't you expect to see int and string? Well, the F# compiler implements what's known as type inference. It's able to infer types based on the way values are used. We rarely have to specify types in F#, but I had to do it a few times in my implementation using type annotations. These appear in the signature of my function. Function signature syntax goes

functionname (arg1 : type) (arg2 : type) (arg3 : type) ... (argn : type)

F# reports the type as (arg1type -> arg2type -> arg3type -> ... -> argntype -> returntype)

The last value is the return type. Just like Func<T> in C#.

So anytime you see (blah -> blah), that means a function. If you ever see this from intellisense when hovering over a method, you can believe that method accepts a function as an argument; so be prepared to pass one.

Arguments are delimited by spaces. The type annotation on an argument is optional; it's only required when the compiler's type inference algorithm can't work out the type on its own. We don't need curly braces either, because F# determines scope from indentation (conventionally four spaces).
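A quick sketch of inference versus annotation (addwins is a hypothetical name; teamname reuses the post's Team record):

```fsharp
// No annotation needed: (+) defaults to int here, so the compiler
// infers addwins : int -> int -> int.
let addwins a b = a + b

// An annotation is required here, because without it the compiler
// can't know which type the Name property belongs to.
let teamname (t : Team) = t.Name
```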

Now that I can get all the teams in the league, I need to group them by their conference. There are 2 conferences in the NBA: the Eastern and Western conferences. You'd normally represent something like this as an enum in C#, but in F# we have a more functional construct known as a discriminated union. My discriminated union is called Conference. You're probably starting to notice that the functions I'm using look a lot like LINQ. That's because LINQ's roots are tied deeply to functional programming.


let getteamsbyconference() =
    getteams()
    |> Seq.groupBy (fun t -> t.Conference)

I again made a function that knows how to print the teams out. Since I grouped the teams by conference, each group comes back as a pair, or tuple as we call it, so I have to drill down into the pair to get the teams. From there it's business as usual: I can simply reuse the initial function I created that knows how to print a sequence of teams.

let printteamsbyconference (conferences : seq<(Conference * seq<Team>)>) =
    conferences
    |> Seq.iter (fun (_, teams) -> teams |> printteams)

You probably noticed the weird syntax I used to iterate the conferences. It's another form of pattern matching. As I mentioned before, _ is the wildcard pattern; it means I don't care about the first value, which in this case is the conference. The syntax I used is a pattern for tuples because it's wrapped in parentheses and delimited by a comma. Here I'm working with a pair, but I could just as easily have been working with a triple, quadruple, and so on. I could have called my teams parameter whatever I liked; the names you give to your parameters are completely arbitrary.
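A few standalone tuple patterns (the values are made up; Eastern comes from the post's Conference union):

```fsharp
// Destructuring a pair in place.
let pair = (Eastern, "Chicago")
let (conference, city) = pair

// The wildcard throws away the part we don't care about.
let (_, name) = pair
printfn "%s" name // prints Chicago

// The same pattern shape scales to triples and beyond.
let (team, wins, losses) = ("Miami", 58, 24)
```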

I can get each conference and its respective teams now. It's time to make something that knows how to get the best 8 teams from each conference.

let getplayoffteams() =
    getteamsbyconference()
    |> Seq.map (fun (c, teams) -> (c, teams |> 
                                      Seq.sortBy (fun t -> t.Losses)
                                      |> Seq.take playoffteamsperconference))

For each conference, I sort the teams by the number of losses they have. Logically you'd think I'd order by wins, but the sortBy function orders in ascending order, which means the teams with the fewest losses end up at the front of the pack. The teams with the fewest losses are the best, right? They have the most wins. After sorting the teams, I take the top 8 from each conference and return them in a tuple, paired with their conference.
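An equivalent way to put the best teams first is to negate the sort key (a sketch using the post's Team record; topteams is my own name):

```fsharp
// Sorting by negated wins sorts descending by wins, which matches
// sorting ascending by losses when every team plays 82 games.
let topteams n (teams : seq<Team>) =
    teams
    |> Seq.sortBy (fun t -> -t.Wins)
    |> Seq.take n
```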

Anytime you see the (fun x -> ...) syntax, that's a lambda. Lambdas are pretty big in functional languages; the name comes from the lambda calculus, where functions are mathematically denoted with the λ symbol. That's some old school history and it's admittedly pretty dry, but it is kind of nice to know.

The last step is to order the teams and make the final bracket.

let printplayoffbracket() =
    getplayoffteams()
    |> Seq.iter (fun (c, teams) -> 
                        Console.ForegroundColor <- ConsoleColor.Red

                        printfn "%A conference matchups\n" c

                        let topfour = teams |> Seq.take half
                        let bottomfour = teams |> Seq.skip half |> Seq.take half |> Seq.sortBy (fun t -> t.Wins)
                        
                        Console.ForegroundColor <- ConsoleColor.Yellow

                        bottomfour
                        |> Seq.zip topfour 
                        |> Seq.iter (fun (topseed, bottomseed) -> 
                                        printfn "%s (%d-%d) vs %s (%d-%d) \n" topseed.Name topseed.Wins topseed.Losses bottomseed.Name bottomseed.Wins bottomseed.Losses)
                        Console.ResetColor())

We consume the playoff teams and match the best teams against the worst teams. I used closures in this case. I know the top four teams are at the front of the pack, so I simply took the first 4 teams out of the 8 available in each conference. After that, I took the last 4 teams and sorted them by the number of wins they had. Again, you'd logically think I'd sort them by losses, but I know the worst teams are the ones with the fewest wins. The zip function pairs up each member of one sequence with the corresponding member of another. If one sequence is longer than the other, it stops pairing once the shorter sequence is exhausted.
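The truncating behavior of zip in a standalone sketch (the numbers are made up):

```fsharp
// Seq.zip pairs elements positionally and stops with the shorter input.
let seeds = [1; 2; 3; 4]
let opponents = [8; 7; 6; 5; 99] // the extra element is simply ignored

Seq.zip seeds opponents
|> List.ofSeq
|> printfn "%A" // [(1, 8); (2, 7); (3, 6); (4, 5)]
```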

Now it's time to watch our little composable puppies in action

do getteams() |> printteams
do getteamsbyconference() |> printteamsbyconference
do getplayoffteams() |> printteamsbyconference
do printplayoffbracket()


We use the do keyword in F# to execute imperative code. That's code that doesn't return a value and just executes an action. It'll have the return type unit, or void in C#. In F# we always have to return a value. It'll either be an actual value like a record or tuple, or unit. Unit is denoted by (). So to return unit from a function you just write

let f() =
    ()

The function I define above not only returns unit, it accepts unit as an argument. So even when you think you're calling a parameterless function in F#, you're really not. And when you think you're not returning anything, you actually are.

And that's it. We started out by making a function that could read in each team and generate wins and losses for it. Then we grouped each one of those teams into the right conference. Next we were able to take the top 8 teams from each conference, which were our playoff teams. And lastly, we pair up the best teams with the worse teams in each conference just as the NBA does it.

If you want to try out my code, you can download F# from fsharp.net. It's deployed as its own toolset, independent of Visual Studio, though I'd recommend Visual Studio so you can have IntelliSense. If you want to get down and dirty, you can use the F# Interactive command shell. It's an interpreter, so you're allowed to execute raw code without compiling it.

Don't forget my 2 favorite books, Real World Functional Programming and Expert F# 2.0. Also check out my 2 favorite guys, the authors of those books, Thomas P and Don Syme.

I think it's safe to say we implemented a little map/reduce here.

Cheers!!


Source can be found here.

Saturday, April 16, 2011

Implementing A Common Interface For NHibernate And RavenDb LINQ Providers

Background Knowledge

This is for those who are not familiar with the concept of a query provider. It's all about IQueryable&lt;T&gt;. By implementing this interface, you promise that you have a class (a query provider) that knows how to populate you (typically a collection) from some domain specific data store. It can be a document database, a relational database, or even XML. In the case of Raven and NHibernate, we're dealing with document and relational databases. Raven's domain specific language is HTTP and REST, while NHibernate's is an abstraction layer atop SQL. The heart of any LINQ provider is expression trees. We call them quotations in F#, and they can be a nightmare when you want to reuse an existing LINQ implementation. Shame on you, fsc. The C# compiler, csc, is a lot friendlier about emitting expression trees.

That being said, expression trees are where the magic happens. They are merely runtime representations of our code. The compiler converts our calls against IQueryable&lt;T&gt; like Where and Select into expression trees at compile time, as opposed to delegates. Then it's up to you to implement an expression tree visitor and LINQ provider that knows how to parse each kind of expression supported by your API. You can find NHibernate's here and Raven's here. You'll be working with runtime representations of the standard LINQ query operators like Select, Where, OrderBy, and GroupBy. I'd like to assume everyone knows there is a difference between IQueryable&lt;T&gt; and IEnumerable&lt;T&gt;, but I highly doubt that. What can be confusing for some is when they call


var five = new List<int> {3, 4, 5}.Where(n => n % 5 == 0).Single();

and it works. That's LINQ to Objects. In that instance, we're working with IEnumerable&lt;T&gt;. The key thing to remember is that IQueryable&lt;T&gt; and IEnumerable&lt;T&gt; each have a set of extension methods that target them, and depending on which one you use you'll either love or hate the results. The extensions for IEnumerable&lt;T&gt; work with in-memory collections as opposed to LINQ providers and expression trees. The extensions for IQueryable&lt;T&gt; are just an abstraction layer sitting atop your LINQ provider. You implement the LINQ provider, and .NET will invoke it at the proper time, passing in the proper arguments (an expression tree). All you have to do is parse the tree and emit your domain specific output. Then you send that output to whatever backend you're encapsulating, fetch the results, and send them back to the client. I won't go any further into LINQ providers, but I figured I could clear up a little smoke by providing some concrete examples.

The last thing I'll add is that IQueryable&lt;T&gt; is always lazily executed (just like IEnumerable&lt;T&gt;) and inherits from IEnumerable&lt;T&gt;. All IEnumerable&lt;T&gt; means is that you can iterate (foreach) over its results. It's not quite that simple, because the compiler generates a hidden class and a state machine, but we won't get into that and monads. What makes IQueryable&lt;T&gt; lazy is that your query won't execute until the client actually tries to iterate. This is cool because it allows us to keep composing calls on our IQueryable&lt;T&gt; without hitting the data store each time; everything is deferred until we start consuming results. And don't worry about the compiler accidentally choosing the wrong overload of Where or Select. It's smart enough to know that IQueryable&lt;T&gt; is more specific than IEnumerable&lt;T&gt; and invoke the right set of extensions.

I'd also like to conclude with a low level deep dive into LINQ.

Implementing the UoW

Let's get started shall we. First thing's first; we need a common interface to wrap the NHibernate and Raven sessions respectively.


public interface ISession : IDisposable {
    IQueryable<TEntity> Query<TEntity>() where TEntity : Entity;
    void Add<TEntity>(TEntity entity) where TEntity : Entity;
    void Update<TEntity>(TEntity entity) where TEntity : Entity;
    void Delete<TEntity>(TEntity entity) where TEntity : Entity;
    void SaveChanges();

    #region Future Load Methods. Can't use now because Raven forces Ids to be strings. If it were not for that, we could make this generic between NHibernate and RavenDb.

    // TEntity Load<TEntity, TId>(TId id) where TEntity : Entity<TId>;
    // IEnumerable<TEntity> Load<TEntity, TId>(IEnumerable<TId> ids) where TEntity : Entity<TId>;

    #endregion
}

As you can see, we make each session promise to give us an IQueryable<T>. We're also enforcing our sessions to implement Unit Of Work, hence the SaveChanges method. The rest of the functions are CRUD based. Lastly we need to be able to shut the session down and free up resources so we make all sessions implement IDisposable.

Now we'll make the concrete RavenSession and its wrapper class, UnitOfWork.

public static class UnitOfWork {
    public static void Start() {
        CurrentSession = new RavenSession();
    }

    public static ISession CurrentSession {
        get { return Get.Current<ISession>(); }
        private set { Set.Current(value); }
    }
}

internal class RavenSession : ISession {
    readonly DocumentStore _documentStore;
    readonly IDocumentSession _documentSession;

    internal RavenSession() {
        _documentStore = new DocumentStore { Url = "http://localhost:8080" };
        _documentSession = _documentStore.Initialize().OpenSession();
    }

    public IQueryable<TEntity> Query<TEntity>() where TEntity : Entity {
        /* May need to take indexing into consideration. Raven will generate temporary indexes for us, but that may not be so efficient.
         * I don't even know how long the temps stick around for. Raven will try to optimize for us as best it can. */
        return _documentSession.Query<TEntity>();
    }

    public void Add<TEntity>(TEntity entity) where TEntity : Entity {
        _documentSession.Store(entity);
    }

    public void Update<TEntity>(TEntity entity) where TEntity : Entity {
        _documentSession.Store(entity);
    }

    public void Delete<TEntity>(TEntity entity) where TEntity : Entity {
        _documentSession.Delete(entity);
    }

    public void SaveChanges() {
        _documentSession.SaveChanges();
    }

    public void Dispose() {
        // Dispose the session before the store that created it.
        _documentSession.Dispose();
        _documentStore.Dispose();
    }

    #region Future Load Methods. Can't use now because Raven forces Ids to be strings. If it were not for that, we could make this generic between NHibernate and RavenDb.

    public TEntity Load<TEntity, TId>(TId id) where TEntity : Entity<TId> {
        throw new NotImplementedException();
    }

    public IEnumerable<TEntity> Load<TEntity, TId>(IEnumerable<TId> ids) where TEntity : Entity<TId> {
        throw new NotImplementedException();
    }

    #endregion
}

I hard coded the url for now, but obviously I'd want it to be read from configuration somewhere.

Next I need a class to store the current session. I took an idea from a buddy of mine and made it strongly typed and reusable. It's just a wrapper around HttpContext that falls back to an in memory dictionary for unit testing purposes.

public static class Ensure {
    public static void That(bool condition) {
        if(!condition)
            throw new Exception("an expected condition was not met.");
    }

    public static void That<TType>(bool condition, string message) where TType : Exception {
        if(!condition)
            throw (TType)Activator.CreateInstance(typeof (TType), message);
    }
}

public static class Get {
    public static T Current<T>() where T : class {
        var context = HttpContext.Current;
        var key = typeof(T).FullName;

        var value = context == null ? (T)Set.InMemoryValuesForUnitTesting[key] : (T)context.Items[key];

        Ensure.That(value != null);

        return value;
    }
}

public static class Set {
    internal static Dictionary<string, object> InMemoryValuesForUnitTesting = new Dictionary<string, object>();

    public static void Current<T>(T value) {
        var context = HttpContext.Current;
        var key = typeof(T).FullName;

        if (context == null)
            InMemoryValuesForUnitTesting[key] = value;
        else
            context.Items[key] = value;
    }
}

Implementing Core Domain Objects

It's nice to have a base structure in place from which our domain objects can derive; more specifically, a base entity and repository class. The base repository is strongly typed and knows how to persist a specific type of entity. I created a Raven specific repository because all ids in Raven are strings (or so I thought; Raven actually supports POID generators just like NHibernate). That's just the default implementation. It was implemented that way so the ids could be RESTful and human readable. Who wants to see a GUID in the query string? Not I...

public class Entity {}

public class Entity<TId> : Entity {
    public TId Id { get; set; }
}

public class BaseRepository<T, TId> : IRepository<T, TId> where T : Entity<TId> {
    public void Add(T entity) {
        UnitOfWork.CurrentSession.Add(entity);
    }

    public IQueryable<T> All() {
        return UnitOfWork.CurrentSession.Query<T>();
    }

    public virtual T Get(TId id) {
        return All().Where(e => e.Id.Equals(id)).SingleOrDefault();
    }

    public IEnumerable<T> Get(IEnumerable<TId> ids) {
        var idList = ids.ToList();

        return All().Where(e => idList.Contains(e.Id));
    }

    public void Delete(T entity) {
        UnitOfWork.CurrentSession.Delete(entity);
    }

    public void Update(T entity) {
        UnitOfWork.CurrentSession.Update(entity);
    }
}

public interface IRepository<T, in TId> : ICreate<T>, IRead<T, TId>, IDelete<T>, IUpdate<T> where  T : Entity<TId> {}

public interface IDelete<in T> {
    void Delete(T entity);
}

public interface IRead<out T, in TId> where T : Entity<TId> {
    IQueryable<T> All();
    T Get(TId id);
    IEnumerable<T> Get(IEnumerable<TId> ids);
}

public interface ICreate<in T> {
    void Add(T entity);
}

public interface IUpdate<in T> {
    void Update(T entity);
}

internal class Person : Entity<string> {
    public string Name { get; set; }
    public int Age { get; set; }
}

internal class PersonRepository : BaseRepository<Person, string>, IPersonRepository {
}

internal interface IPersonRepository : IRepository<Person, string> {
}

I implemented CRUD interfaces for my repositories so that a client can choose which operations it wants to interact with. If all a client needs to do is perform reads, it can consume the IRead&lt;T, TId&gt; interface as opposed to a full fledged IRepository&lt;T, TId&gt;. The concrete implementation would still inherit from BaseRepository&lt;T, TId&gt;, but would not be consumed as such. Using dependency injection, you'd do something like...

Map<IRead<User>>.To<UserRepository>();

Then an MVC controller or some dependent object would look like...

public class AccountController(IRead<User> userRepository) {...}

This concept is the I in SOLID: Interface Segregation. Give the client only what it needs. Nothing more and nothing less.

I didn't think I'd need something for updates like IUpdate&lt;T&gt;, since most UoW implementations do change tracking. For instance, if you retrieve an entity from a Raven or NHibernate session and modify it, the changes are automatically applied when the session is saved. But then I thought about what happens in ASP.NET MVC when we handle updates. Say the user goes to our update page and changes some text fields that represent an entity. ASP.NET MVC will automatically construct an instance of our entity or view model and allow us to persist it. There is a TryUpdateModel that MVC exposes on controllers, but what if you're mapping from view model to entity/DTO? There'd be no need to retrieve the entity from the domain layer since you already have a copy of it in memory. I could be wrong on this. Maybe it's common practice to always fetch your entity, apply the necessary changes, and persist it. I'm not sure how most do it, but having IUpdate&lt;T&gt; doesn't hurt, right?

Implementing a Request Module for ASP.NET

Now I need a request module that knows how to initialize the session and spawn the UoW.

public class UnitOfWorkModule : IHttpModule {
    public void Init(HttpApplication application) {
        application.BeginRequest += ApplicationBeginRequest;
        application.EndRequest += ApplicationEndRequest;
    }
 
    static void ApplicationBeginRequest(object sender, EventArgs e) {
        UnitOfWork.Start();
    }

    static void ApplicationEndRequest(object sender, EventArgs e) {
        UnitOfWork.CurrentSession.SaveChanges();
    }

    public void Dispose() {
        UnitOfWork.CurrentSession.Dispose();
    }
}

Let me add that I borrowed the idea of this particular implementation of UoW from a blog on nhforge.com. I tweaked it to my liking. It's not perfect, but I'm content with it and it works for me. I'd never go so far as to deem this the ultimate implementation of UoW.

The cool thing about our implementation is that we can switch from Raven to NHibernate with one line of code.

The bad thing is that we can't leverage any framework specific goodies. For instance, the power behind document databases is that they perform lightning fast reads, which is accomplished via indexes. In Raven, we specify our indexes when executing our queries, but there's no way for my BaseRepository to do that unless it knows it's dealing with Raven in particular. I'd have to cheat and probably break my encapsulation by assuming certain things about the current ISession at hand, something like casting it to an IDocumentSession (Raven specific). Raven is smart enough to dynamically create indexes for us on the fly if it detects that we didn't specify one client side, and will eventually promote them to permanent indexes if we use them enough over a certain amount of time.

Frankly, you just need to be aware of what you're gaining and losing. You should weigh whether the benefits of a clean, reusable design are worth giving up some of your target framework's features. Sometimes you can get away with declarative XML configuration independent of code, or decorating your classes with a specific attribute and having the runtime pick up on it; but that's a big maybe and a long shot in most cases. Regardless, I thought this would be a cool idea and fun to implement.

Implementing Unit Tests For Raven

We're not done yet my friends. It's time for some unit tests. NUnit where you be?

[Test]
public void Person_Repository_Can_Save_Person() {
    IPersonRepository personRepository = new PersonRepository();

    var adubb = new Person { Age = 22, Name = "Antwan \"A-Dubb\" Wimberly \r\nIt's Okay To Not Hire A Senior Developer!! There Are Good Young Developers Out There Too!!" };

    personRepository.Add(adubb);

    var id = adubb.Id;

    var adubbFromRepo = personRepository.Get(id);

    Assert.IsNotNull(adubbFromRepo);
    Assert.AreEqual(adubb.Id, adubbFromRepo.Id);
    Assert.AreEqual(adubb.Name, adubbFromRepo.Name);
    Assert.AreEqual(adubb.Age, adubbFromRepo.Age);

    personRepository.Delete(adubbFromRepo);

    UnitOfWork.CurrentSession.SaveChanges();
}

Whoops!! Looks like Raven doesn't allow us to call Equals in the body of our lambdas. Time to refactor. We need to override our base implementation of Get(TId id), so let's make it virtual and override it.

public class RavenBaseRepository<T> : BaseRepository<T, string> where T : Entity<string> {
    public override T Get(string id) {
        return All().Where(e => e.Id == id).SingleOrDefault();
    }
}

internal class PersonRepository : RavenBaseRepository<Person>, IPersonRepository {
}

I'm already noticing that my query is taking a rather long time to execute, which probably means Raven isn't making optimized reads. I'd expect things to execute a lot faster.

Anyway, let's run our test again.

That's strange. We didn't find any results. Something must be going wrong with my Id. The problem is that the entity is still transient; that is to say, it hasn't been persisted yet. We need to submit our changes before performing our read. Let's refactor our test.

personRepository.Add(adubb);

UnitOfWork.CurrentSession.SaveChanges();

var id = adubb.Id;

We told Raven to persist the object before retrieving it. Let's try again.

Ok. I'm still getting an error. I probably shouldn't be messing around with my Id property. That's Raven's. Let's make one final change.

var adubb = new Person { Age = 22, Name = "Antwan \"A-Dubb\" Wimberly \r\nIt's Okay To Not Hire A Senior Developer!! There Are Good Young Develpers Out There Too!!" };

Aaaaand!! Nope. Still didn't work. You may have caught on by now, but if you haven't, the problem is inheritance. Raven apparently can't pick up on the fact that I'm inheriting my Id from my parent class Entity, so now I have to redefine it in the Person class like so.

public class Person : Entity<string> {
    public new string Id { get; set; }
    public string Name { get; set; }
    public int Age { get; set; }
}

Alright. Things are working now according to my unit test and Raven Studio. My Add test passes. Now it's time to test delete.


[Test]
public void Person_Repository_Can_Delete_Person() {
    IPersonRepository personRepository = new PersonRepository();

    var adubb = new Person { Age = 22, Name = "Antwan \"A-Dubb\" Wimberly \r\nIt's Okay To Not Hire A Senior Developer!! There Are Good Young Develpers Out There Too!!" };

    personRepository.Add(adubb);

    UnitOfWork.CurrentSession.SaveChanges();

    var id = adubb.Id;

    var adubbFromRepo = personRepository.Get(id);

    Assert.IsNotNull(adubbFromRepo);

    personRepository.Delete(adubbFromRepo);

    UnitOfWork.CurrentSession.SaveChanges();

    adubbFromRepo = personRepository.Get(id);

    Assert.IsNull(adubbFromRepo);
}

This one worked right out of the box. No magic needed. Get is next.

[Test]
public void Person_Repository_Can_Get_Person() {
    IPersonRepository personRepository = new PersonRepository();

    var adubb = new Person { Age = 22, Name = "Antwan \"A-Dubb\" Wimberly \r\nIt's Okay To Not Hire A Senior Developer!! There Are Good Young Develpers Out There Too!!" };

    personRepository.Add(adubb);

    UnitOfWork.CurrentSession.SaveChanges();

    var id = adubb.Id;

    var adubbFromRepo = personRepository.Get(id);

    Assert.IsNotNull(adubbFromRepo);

    personRepository.Delete(adubb);
}

And lastly, update. This one is pretty easy to test because of change tracking, so I have 2 implementations. The first works just fine.

[Test]
public void Person_Repository_Can_Update_Person_Without_Calling_Update() {
    IPersonRepository personRepository = new PersonRepository();

    var adubb = new Person { Age = 22, Name = "Antwan \"A-Dubb\" Wimberly \r\nIt's Okay To Not Hire A Senior Developer!! There Are Good Young Develpers Out There Too!!" };

    personRepository.Add(adubb);

    UnitOfWork.CurrentSession.SaveChanges();

    var id = adubb.Id;

    var adubbFromRepo = personRepository.Get(id);

    Assert.IsNotNull(adubbFromRepo);

    const string changedName = "Changed Name";

    adubbFromRepo.Name = changedName;

    UnitOfWork.CurrentSession.SaveChanges();

    adubbFromRepo = personRepository.Get(id);

    Assert.AreEqual(changedName, adubbFromRepo.Name);

    personRepository.Delete(adubb);
}

But we run into problems with the second.

Looks like Raven is forcing me to fetch my entity from the session before I can update it; it knows the entity is unattached. So I guess I could remove my implementation of Update. NHibernate, however, will properly convert my Contains call to an IN clause, which is what I expect.

[Test]
public void Person_Repository_Can_Update_Person_When_Calling_Update() {
    IPersonRepository personRepository = new PersonRepository();

    var adubb = new Person { Age = 22, Name = "Antwan \"A-Dubb\" Wimberly \r\nIt's Okay To Not Hire A Senior Developer!! There Are Good Young Develpers Out There Too!!" };

    personRepository.Add(adubb);

    UnitOfWork.CurrentSession.SaveChanges();

    const string changedName = "Changed Name";
    const int changedAge = 19;

    var id = adubb.Id;

    // this entity didn't come from the session and thus is not being tracked. we're pretending like we've just
    // populated this entity in our controller based on the view and are about to persist it.
    var adubbFromRepo = new Person {Id = id, Age = changedAge, Name = changedName };

    personRepository.Update(adubbFromRepo);

    UnitOfWork.CurrentSession.SaveChanges();

    adubbFromRepo = personRepository.Get(id);

    Assert.AreEqual(changedName, adubbFromRepo.Name);
    Assert.AreEqual(changedAge, adubbFromRepo.Age);

    personRepository.Delete(adubb);
}

My last test is for Get with an overload. Of course it failed, because Raven won't let me call Contains during my query. I'll figure it out later though. It's 1am right now and I'm beat. Plus my hot pocket is almost done cooking.

public static class ObjectExtensions {
    public static IEnumerable<TType> ToSingleEnumerable<TType>(this TType target) {
        yield return target;
    }
}
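In isolation the extension just produces a lazily yielded one-element sequence. Here's a quick standalone sanity check (the Demo scaffolding and the sample id string are mine; the extension body is repeated verbatim so the snippet compiles on its own):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class ObjectExtensions {
    // wraps any single value in a lazily yielded one-element sequence
    public static IEnumerable<TType> ToSingleEnumerable<TType>(this TType target) {
        yield return target;
    }
}

class Demo {
    static void Main() {
        var ids = "people/1".ToSingleEnumerable().ToList();

        Console.WriteLine(ids.Count);  // 1
        Console.WriteLine(ids[0]);     // people/1
    }
}
```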

[Test]
public void Person_Repository_Can_Find_All_By_Id() {
    IPersonRepository personRepository = new PersonRepository();

    var adubb = new Person { Age = 22, Name = "Antwan \"A-Dubb\" Wimberly \r\nIt's Okay To Not Hire A Senior Developer!! There Are Good Young Develpers Out There Too!!" };

    personRepository.Add(adubb);

    UnitOfWork.CurrentSession.SaveChanges();

    var id = adubb.Id;

    var adubbFromRepo = personRepository.Get(id.ToSingleEnumerable()).FirstOrDefault();

    Assert.IsNotNull(adubbFromRepo);
    Assert.AreEqual(adubb.Id, adubbFromRepo.Id);
    Assert.AreEqual(adubb.Name, adubbFromRepo.Name);
    Assert.AreEqual(adubb.Age, adubbFromRepo.Age);

    personRepository.Delete(adubbFromRepo);
}

Implementing NHibernate Support

We're going to make a context switch to NHibernate now. We'll start with a concrete implementation of the ISession interface for NHibernate.

internal class NHibernateSession : ISession {
    static readonly ISessionFactory SessionFactory;

    static NHibernateSession() {
        // an expensive operation that should be called only once throughout the lifetime of the application. you'll typically see this in Application_Start of Global.asax.
        SessionFactory = Fluently
                            .Configure()
                            .Database(MsSqlConfiguration.MsSql2008
                                          .ConnectionString("Server=.; Database=NHPrac; Integrated Security=true;")
                                          .ShowSql())
                            .ExposeConfiguration(x => {
                                                     // for our CurrentSessionContext. this has to be configured or NHibernate won't be happy
                                                     x.SetProperty("current_session_context_class", "thread_static");
                                                         
                                                     // so the Product table can be exported to the database and be created before we make our inserts
                                                     var schemaExport = new SchemaExport(x);
                                                     schemaExport.Create(false, true);
                                                 })
                            .Mappings(x => x.FluentMappings.AddFromAssembly(Assembly.Load("Intell.Tests")))
                            .BuildSessionFactory();
        }

    internal NHibernateSession() {
        // sessions are really cheap to initialize/open
        var session = SessionFactory.OpenSession();

        session.BeginTransaction();

        CurrentSessionContext.Bind(session);
    }

    static NHibernate.ISession CurrentSession { get { return SessionFactory.GetCurrentSession(); } }

    public void Dispose() {
        // unbind the factory and dispose the current session that it returns
        CurrentSessionContext.Unbind(SessionFactory).Dispose();
    }

    public IQueryable<TEntity> Query<TEntity>() where TEntity : Entity {
        return CurrentSession.Query<TEntity>();
    }

    public void Add<TEntity>(TEntity entity) where TEntity : Entity {
        CurrentSession.Save(entity);
    }

    public void Update<TEntity>(TEntity entity) where TEntity : Entity {
        CurrentSession.Update(entity);
    }

    public void Delete<TEntity>(TEntity entity) where TEntity : Entity {
        CurrentSession.Delete(entity);
    }

    public void SaveChanges() {
        var transaction = CurrentSession.Transaction;

        if (transaction != null && transaction.IsActive)
            CurrentSession.Transaction.Commit();
    }
}

There's nothing special going on here. Just your standard Fluent NHibernate stuff and NHibernate basics in general. If my usage of the CurrentSessionContext class is unfamiliar to you then I'd suggest you get yourself a copy of the NHibernate 3.0 Cookbook. It's got the latest NHibernate best practices in it and was the basis of how I managed my NHibernate session.

Before I go any further I want to note that NHibernate won't let me call Equals in my Get(TId id) method so again I have to make a framework specific repository. Dangit!!

And yes, I'm aware that S#arp Architecture has a base NHibernate repository and a base Entity class (I think...), but I wanted to take a stab at creating my own. It probably looks identical to what's already out there, but oh well.

// we'd probably have to make a separate one for id's of type int to support identity columns
public class NHibernateBaseRepository<T> : BaseRepository<T, Guid> where T : Entity<Guid> {
    public override T Get(Guid id) {
        return All().Where(e => e.Id == id).SingleOrDefault();
    }
}

Next we'll build a ProductRepository, mapping file, and Product entity.

public class Product : Entity<Guid> {
    public virtual string Name { get; set; }
    public virtual int InventoryCount { get; set; }
}

public sealed class ProductMap : ClassMap<Product> {
    public ProductMap() {
        Id(x => x.Id)
            .GeneratedBy
            .GuidComb();
        Map(x => x.InventoryCount);
        Map(x => x.Name);
    }
}

internal class ProductRepository : NHibernateBaseRepository<Product>, IProductRepository {}

internal interface IProductRepository : IRepository<Product, Guid> {}

The Entity<TId> class becomes...

public class Entity<TId> : Entity {
    public virtual TId Id { get; set; }
}

Of course everything has to be virtual for proxy support so I had to make a slight modification to my base Entity<TId> class.
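For completeness, the non-generic Entity that the ISession constraints (where TEntity : Entity) rely on never appears in the post. My best guess is that it's a simple marker base class, something like this (a hypothetical reconstruction, not the author's actual code):

```csharp
using System;

// Hypothetical reconstruction: a non-generic marker base so ISession can
// constrain on Entity, plus the generic subclass carrying the virtual Id.
public abstract class Entity {}

public class Entity<TId> : Entity {
    public virtual TId Id { get; set; }
}

class Demo {
    static void Main() {
        var product = new Entity<Guid> { Id = Guid.NewGuid() };

        Console.WriteLine(product is Entity);         // True
        Console.WriteLine(product.Id == Guid.Empty);  // False
    }
}
```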

I only wrote 2 unit tests this time around, because I'm more than confident that the functionality works. Earlier I ran into a problem with the GetAll(IEnumerable<TId> ids) implementation in my BaseRepository class due to constraints that Raven enforces. I still have to come up with a clean workaround, but we won't worry about that for now. This time I wanted to be sure my GetAll overload would work, so I tested it in addition to Save. The tests both pass with flying colors.

The Start method of UnitOfWork becomes...

public static void Start() {
    // CurrentSession = new RavenSession();
    CurrentSession = new NHibernateSession();
}

[TestFixture]
public class ProductRepositoryTests {
    [TestFixtureSetUp]
    public void Init_Unit_Of_Work() {
        UnitOfWork.Start();
    }

    [TestFixtureTearDown]
    public void Uninit_Unit_Of_Work() {
        UnitOfWork.CurrentSession.Dispose();
    }

    [Test]
    public void Product_Repository_Can_Save() {
        var product = new Product { InventoryCount = 12, Name = "A-Dubb's World" };

        IProductRepository productRepository = new ProductRepository();

        productRepository.Add(product);

        var id = product.Id;

        Assert.IsFalse(id == Guid.Empty);
        
        product = productRepository.Get(id);

        Assert.AreEqual(12, product.InventoryCount);
        Assert.AreEqual("A-Dubb's World", product.Name);

        productRepository.Delete(product);

        UnitOfWork.CurrentSession.SaveChanges();
    }

    [Test]
    public void Product_Repository_Can_Get_All() {
        var product = new Product { InventoryCount = 12, Name = "A-Dubb's World" };

        IProductRepository productRepository = new ProductRepository();

        productRepository.Add(product);

        var id = product.Id;

        Assert.IsFalse(id == Guid.Empty);

        product = productRepository.Get(id.ToSingleEnumerable()).SingleOrDefault();

        Assert.IsNotNull(product);
        Assert.AreEqual(product.Name, "A-Dubb's World");
        Assert.AreEqual(product.InventoryCount, 12);

        productRepository.Delete(product);

        UnitOfWork.CurrentSession.SaveChanges();
    }
}

And yea, I should have followed TDD and written my tests first, and I surely could have refactored my unit tests for reusability's sake, but I'm not gonna bother.

Conclusion

We started out with our common session interface for NHibernate and Raven to implement. Then we made our UoW and concrete Raven-based session implementation. That was followed by our strongly typed classes for local storage and our core domain layer base classes (Entity and BaseRepository). We then subclassed our BaseRepository to make a Raven-specific implementation that stores entities with string-based ids, as Raven requires. Since we plan to be able to use Raven on the web, we made an HttpModule that can be registered in Web.config to initialize our session for each request made to our web application. And lastly, we wrapped things up with a few unit tests and a concrete ISession implementation for NHibernate, and discovered some things about Raven along the way: specifically, it cannot pick up on an inherited Id property, and only a specific subset of methods is allowed within our query calls/lambda expressions.

Well, that's it folks. As I mentioned before, the switch between Raven and NHibernate is trivial on the surface but potentially problematic, though you'd at least have your core domain layer in place for each framework. For one, ids in Raven are string based, which is not the case in NHibernate, where GUIDs tend to dominate. So switching would probably mean refactoring your entities and switching the repository you inherit from, which could certainly be a problem. Secondly, it makes it tougher to use framework-specific features such as indexes in Raven when executing queries, which is one of its most important features. Were it not for the aforementioned constraints, you'd be able to switch from Raven to NHibernate with one line of code. That's why I wrote this post to begin with: I thought I could pull it off, but my unit tests told me otherwise. Either way, this was really fun to implement and I learned a lot. I hope this post proves helpful to a lot of people and can maybe serve as a catalyst for future implementations.

This is quite a bit of code, so I should be uploading it to GitHub any day now.

I'll follow up this post with a cool implementation of read-only mode in ASP.NET (Web Forms). Pretty cool right?

Cheers!!

Tuesday, April 12, 2011

Implementing a Tree Collection in .NET

Once upon a time I needed a native tree collection to implement an algorithm in .NET, and well...there isn't one. So I decided to make my own tree. A fluent one at that. Let's get started, shall we?

First things first, we need a base class. Let's call it...say...Node!

using System.Collections.Generic;

public interface INode<T> {
    IEnumerable<Node<T>> Children { get; }
    INode<T> AddChild(T value);
    INode<T> AddChild(Node<T> node);
    bool HasChild(Node<T> node);
    bool HasChild(T value);
}


using System.Collections.Generic;

public class NodeBase<T> : INode<T> {
    protected readonly List<Node<T>> ChildNodes;
    readonly Dictionary<Node<T>, object> _nodeLookup;

    public NodeBase() {
        ChildNodes = new List<Node<T>>();
        _nodeLookup = new Dictionary<Node<T>, object>();
    }

    public IEnumerable<Node<T>> Children { get { return ChildNodes; } }

    public INode<T> AddChild(Node<T> node) {
        ChildNodes.Add(node);

        _nodeLookup[node] = null;

        return this;
    }

    public INode<T> AddChild(T value) {
        return AddChild(value.Node());
    }

    public bool HasChild(Node<T> node) {
        return _nodeLookup.ContainsKey(node);
    }

    public bool HasChild(T value) {
        return _nodeLookup.ContainsKey(value.Node());
    }
}

public sealed class Node<T> : NodeBase<T> {
    public Node(T value) : this(value, null) {}

    public Node(T value, params Node<T>[] nodes) {
        Value = value;

        // route children through AddChild so the HasChild lookup stays in sync
        if (nodes != null)
            foreach (var node in nodes)
                AddChild(node);
    }

    public T Value { get; private set; }

    public override bool Equals(object obj) {
        if (obj == null) return false;

        var node = obj as Node<T>;

        return node != null && node.Value.Equals(Value);
    }

    public override int GetHashCode() {
        return Value.GetHashCode();
    }

    public static bool operator == (Node<T> left, Node<T> right) {
        // guard against nulls so comparisons like (node == null) don't throw
        if (ReferenceEquals(left, right)) return true;
        return !ReferenceEquals(left, null) && left.Equals(right);
    }

    public static bool operator != (Node<T> left, Node<T> right) {
        return !(left == right);
    }
}

I wanted a common interface for all Nodes to implement and also a default base class they could inherit from. I also implemented HasChild for fast lookups. Lastly, I wanted clients to consume all children as an IEnumerable<Node<T>> as opposed to a List<Node<T>>. This forces clients to call my implementation of AddChild so that I can place any newly added Nodes in my internal lookup. Some of you may be wondering why I used a Dictionary as opposed to a HashSet. Well, through a few simple tests, I discovered that this was the fastest way to determine whether a Node has a particular child.
Oh, almost forgot: I implemented GetHashCode and Equals for value equality. This is what makes Nodes usable as Dictionary keys.
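To make the lookup behavior concrete, here's a condensed, self-contained stand-in for the NodeBase/Node pair (not the full implementation above, just enough to show HasChild and the value-based equality working together):

```csharp
using System;
using System.Collections.Generic;

// Condensed stand-in for NodeBase/Node: value-based equality plus a
// dictionary lookup gives O(1) HasChild checks.
public class Node<T> {
    readonly List<Node<T>> _children = new List<Node<T>>();
    readonly Dictionary<Node<T>, object> _lookup = new Dictionary<Node<T>, object>();

    public Node(T value) { Value = value; }

    public T Value { get; private set; }

    public Node<T> AddChild(T value) {
        var node = new Node<T>(value);
        _children.Add(node);
        _lookup[node] = null;   // registered here so membership checks stay O(1)
        return this;
    }

    // a throwaway probe node works because equality is based on Value
    public bool HasChild(T value) { return _lookup.ContainsKey(new Node<T>(value)); }

    public override bool Equals(object obj) {
        var node = obj as Node<T>;
        return node != null && node.Value.Equals(Value);
    }

    public override int GetHashCode() { return Value.GetHashCode(); }
}

class Demo {
    static void Main() {
        var root = new Node<int>(1);
        root.AddChild(2).AddChild(3);

        Console.WriteLine(root.HasChild(2)); // True
        Console.WriteLine(root.HasChild(9)); // False
    }
}
```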

For example, the following code will fail despite the 2 Nodes being completely separate instances.

// key already exists
var dic = new Dictionary<Node<int>, string> {
                                               { new Node<int>(3), string.Empty },
                                               { new Node<int>(3), string.Empty }
                                            };

I also took time out to implement the == and != operators:

// this will print out true
Console.WriteLine(new Node<int>(3) == new Node<int>(3));

And my lookup tests...

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

public class LookupTests {
    public static void Execute() {
        const object o = null;

        var list = Enumerable.Range(1, 10000000).ToList();
        var dic = list.ToDictionary(n => n, n => o);
        var hashtable = new HashSet<int>();

        list.ForEach(n => hashtable.Add(n));

        // has to enumerate the entire sequence since worst case scenario occurs. O(n)
        Time(() => list.Contains(10000000));

        // O(1)
        Time(() => dic.ContainsKey(10000000));

        // O(1)
        Time(() => hashtable.Contains(10000000));

        // finds a match on the first element (best case), so this single check should be
        // about as fast as the dictionary lookup, even though Contains is O(n) in general
        Time(() => list.Contains(1));

        // has to enumerate half the sequence, so expect roughly half the time of the first Contains test. O(n)
        Time(() => list.Contains(10000000 / 2));
    }

    static void Time(Action test) {
        var stopWatch = new Stopwatch();

        stopWatch.Start();

        test();

        stopWatch.Stop();

        Console.WriteLine(stopWatch.Elapsed);
    }
}

The implementation using the Dictionary was consistently faster than the rest, including HashSet. Even if I were dead wrong in my conclusion that Dictionary is faster, I could always change the implementation later thanks to encapsulation. All clients care about is that I return the right result, not "HOW" I did it.

Now we need something to attach all those Nodes to, probably a Tree!

using System;
using System.Linq;

public class Tree<T> : NodeBase<T> {
    readonly static ConsoleColor[] Colors;

    static Tree() {
        Colors = Enum.GetValues(typeof (ConsoleColor)).Cast<ConsoleColor>().ToArray();
    }

    public Tree(params Node<T>[] nodes) {
        // route through AddChild so the HasChild lookup stays in sync
        foreach (var node in nodes)
            AddChild(node);
    }

    public void PrettyPrint() {
        ChildNodes.ForEach(n => PrettyPrintRecursive(0, n, 4));
    }

    static void PrettyPrintRecursive(int indent, Node<T> node, int colorIndex) {
        Console.ForegroundColor = Colors[colorIndex % Colors.Length];

        Console.WriteLine("{0, " + indent + "}", node.Value);

        foreach (var child in node.Children)
            PrettyPrintRecursive(indent + 3, child, colorIndex + 1);

        Console.ResetColor();
    }
}

A Tree is basically a Node, but it doesn't have a value; it just contains other Nodes. You'll also notice that I've implemented a handy helper method to get a visual representation of our tree on screen. I of course used recursion to iterate the tree, since trees are recursive by nature. I used the modulus operator so I don't have to worry about an IndexOutOfRangeException; I could pass 1000 as the colorIndex and things would still work out just fine.
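The wrap-around behavior is easy to convince yourself of in isolation (a trivial standalone sketch; the color names are placeholders):

```csharp
using System;

class ModuloDemo {
    static void Main() {
        var colors = new[] { "Red", "Green", "Blue" };

        // the remainder always lands inside the array bounds for non-negative indexes
        Console.WriteLine(colors[4 % colors.Length]);     // Green
        Console.WriteLine(colors[1000 % colors.Length]);  // Green
    }
}
```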

Now that we have a basic structure in place, let's actually construct a Tree.

NOTE: Don't try compiling my code just yet, because you're missing a few methods. The best is yet to come.

class Program {
    static void Main() {
        var three = new Node<int>(3);
        three.AddChild(6);
        three.AddChild(7);
        three.AddChild(8);

        var four = new Node<int>(4);
        four.AddChild(new Node<int>(9));
        four.AddChild(new Node<int>(10));
        four.AddChild(new Node<int>(11));

        var five = new Node<int>(5);

        five.AddChild(12);
        five.AddChild(13);
        five.AddChild(14);

        var tree = new Tree<int>(three, four, five);

        tree.PrettyPrint();
    }
}


So we made a tree and printed it out, but the syntax is rather verbose. Seeing as how I have a background in F#, I prefer my syntax to be as lean as possible. It's about time I whipped out my swiss army knives and cut the fat off this tree!!

"Oh extension methods...where are you?"

"Right here sir!!"

"Did you bring LINQ with you?"

"Yes sir we did!!

"Thank you extensions..."

Okay, enough with the personification already.

using System.Linq;

public static class NodeExtensions {
    public static Node<T> Node<T>(this T value) {
        return new Node<T>(value);
    }

    public static Node<T> Node<T>(this T value, params T[] values) {
        return new Node<T>(value).Children(values);
    }

    public static Node<T> Node<T>(this T value, params Node<T>[] nodes) {
        return new Node<T>(value, nodes);
    }

    public static Node<T> Children<T>(this Node<T> node, params T[] values) {
        values.ToList().ForEach(n => node.AddChild(n));

        return node;
    }

    public static Node<T> Children<T>(this Node<T> node, params Node<T>[] nodes) {
        nodes.ToList().ForEach(n => node.AddChild(n));

        return node;
    }
}

class Program {
    static void Main() {
        var nodeOne = 1.Node(3.Node(3.Node(2), 4.Node(99), 5.Node(15, 20, 90)), 4.Node(7, 7, 7), 5.Node(8, 7, 7));

        var tree = new Tree<int>(nodeOne, 2.Node(8, 8).Children(4, 5, 6, 7, 8));

        tree.PrettyPrint();
    }
}


Now we can write some nifty syntax. I wanted to make things as flexible as possible when adding child Nodes. You can add raw values as they are, you can call the Children extension for a more domain-specific approach, or you can amplify a value by wrapping it as a Node and add it that way. Extension methods allow you to leverage type inference and the compiler to their full extent. I wish the same could be achieved at construction time, but making extra helper methods isn't so bad. I really got into the habit of using extensions for type inference in C# by reading Tomas Petricek's book Real-World Functional Programming. That guy is a genius!! He and Don Syme both. If you want to be a better programmer, I'd suggest reading that book and Expert F# 2.0. They'll take you a long way in .NET and show you what you've been deprived of all these years using Java and C# (C# not so much, but Java...yea).

And that's it!! We created our interface INode and base class NodeBase. Then we implemented our two concrete classes (Node and Tree). And lastly we spiced things up using type inference and extension methods to get a fluent, clean syntax. Feel free to use the code in whatever way you like. It's about time we got a native tree to work with in .NET, huh? Come on Microsoft!!

Well...until next time my friends. I'll be back with another post for RavenDb and NHibernate. I think you'll like it. It's quite the implementation.

Cheers!!