Friday, September 23, 2011

Using SOLID to Implement Fizz Buzz


So I was having lunch with @jeffreypalermo, @dannydouglass, and my other teammates, and they were chatting about interview processes/techniques. Jeffrey was in town for our annual Headspring training and mentioned the infamous fizz buzz problem, and I couldn't help but think about how I'd implement it...cleanly!! One given is that whoever reads it does not want to see a proliferation of if statements. But they have to get written at some point, right? So thinking about modern practices and design principles, I thought to myself…why not let SOLID guide me here? And that's exactly what I did.

So there are several components at play here: the FizzBuzzCommandProcessor and the FizzBuzzCommandRetriever. I took the processor technique from Jeffrey when he mentioned how they implemented their rules engine. I thought it was pretty cool and that it would be fun to implement something similar on my own. And with the advent of CQRS, I'm just really hooked on commands in general. They promote such strong usage of the Single Responsibility Principle (SRP). The command processor takes input from the command retriever and matches each handler up with the values it can handle. That's it.
using System.Collections.Generic;

namespace ADubb.FizzBuzz
{
    public interface IFizzBuzzCommandProcessor
    {
        void Process(IEnumerable<int> numbers);
        void Process(int number);
    }
}
using System.Collections.Generic;
using System.Linq;

namespace ADubb.FizzBuzz
{
    public class FizzBuzzCommandProcessor : IFizzBuzzCommandProcessor
    {
        static readonly IEnumerable<IFizzBuzzHandler> HandlerCache;
        static readonly FizzBuzzCommandRetriever CommandRetriever;

        static FizzBuzzCommandProcessor()
        {
            CommandRetriever = new FizzBuzzCommandRetriever();
            HandlerCache = CommandRetriever.GetHandlers().ToList();
        }

        public void Process(IEnumerable<int> numbers)
        {
            numbers.ToList().ForEach(Process);
        }

        public void Process(int number)
        {
            var handlers = HandlerCache.Where(h => h.CanHandle(number));

            handlers.ToList().ForEach(h => h.Handle(number));
        }
    }
}

Next there is the FizzBuzzCommandRetriever. I decided to keep things simple and only scan the currently executing assembly for command handlers as opposed to the entire AppDomain. This guy helps me separate out the responsibility of finding IFizzBuzzHandler implementations at runtime. That's all it knows how to do. We've achieved a clean separation between the what and the how.

using System.Collections.Generic;

namespace ADubb.FizzBuzz
{
    public interface IFizzBuzzCommandRetriever
    {
        IEnumerable<IFizzBuzzHandler> GetHandlers();
    }
}
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

namespace ADubb.FizzBuzz
{
    public class FizzBuzzCommandRetriever : IFizzBuzzCommandRetriever
    {
        public IEnumerable<IFizzBuzzHandler> GetHandlers()
        {
            var handlers =
                Assembly.GetExecutingAssembly().GetExportedTypes().Where(
                    t => typeof(IFizzBuzzHandler).IsAssignableFrom(t) && t.IsClass)
                    .Select(Activator.CreateInstance)
                    .Cast<IFizzBuzzHandler>();

            return handlers;
        }
    }
}
Then I have actual implementations of IFizzBuzzHandler. These are consumed by the processor and told to act on their respective inputs. Clear as day.
namespace ADubb.FizzBuzz
{
    public interface ICommandHandler<in TType>
    {
        bool CanHandle(TType target);
        void Handle(TType target);
    }
}
namespace ADubb.FizzBuzz
{
    public interface IFizzBuzzHandler : ICommandHandler<int>
    {
    }
}
using System;

namespace ADubb.FizzBuzz
{
    public class MultiplesOfThreeFizzBuzzHandler : IFizzBuzzHandler
    {
        public bool CanHandle(int target)
        {
            return target % 3 == 0;
        }

        public void Handle(int target)
        {
            Console.WriteLine("{0} is a multiple of 3.", target);
        }
    }
}
using System;

namespace ADubb.FizzBuzz
{
    public class MultiplesOfTwoFizzBuzzHandler : IFizzBuzzHandler
    {
        public bool CanHandle(int target)
        {
            return target % 2 == 0;
        }

        public void Handle(int target)
        {
            Console.WriteLine("{0} is even.", target);
        }
    }
}

Lastly, we have the Open Closed Principle (OCP) coming into play. I was telling Jeffrey how the template pattern lends itself well to the OCP. Since we have the command retriever, it's easy for us to plug additional handlers into the pipeline. It's also just as easy to take them out. What I admire most is that the mere act of adding or removing a class from the assembly has no adverse effect on the runtime. See for yourself: just comment out the class definition for one of the handlers and observe that its output never gets logged to the console. Our template comes into play with inheritance, of course. There is an IFizzBuzzHandler interface that each handler implements. It's merely a marker interface. This same concept is applied with handlers in NServiceBus. I like it. :)
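To make the OCP point concrete, here's a hypothetical third handler (not from the original post; the interface declarations are repeated so the snippet stands on its own). Dropping a public class like this into the assembly is all it takes for the reflection-based retriever to discover it; no existing code changes:

```csharp
using System;

namespace ADubb.FizzBuzz.Demo
{
    // Repeated from above so this snippet compiles on its own.
    public interface ICommandHandler<in TType>
    {
        bool CanHandle(TType target);
        void Handle(TType target);
    }

    public interface IFizzBuzzHandler : ICommandHandler<int>
    {
    }

    // Hypothetical new handler: the retriever would pick this up automatically
    // because it is a public class implementing IFizzBuzzHandler.
    public class MultiplesOfFiveFizzBuzzHandler : IFizzBuzzHandler
    {
        public bool CanHandle(int target)
        {
            return target % 5 == 0;
        }

        public void Handle(int target)
        {
            Console.WriteLine("{0} is a multiple of 5.", target);
        }
    }
}
```

Deleting or commenting out the class reverses the change just as cleanly, which is exactly the open-for-extension, closed-for-modification behavior described above.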

Program...
using System.Linq;

namespace ADubb.FizzBuzz
{
    class Program
    {
        static void Main()
        {
            var oneToOneHundred = Enumerable.Range(1, 100); // Range(1, 101) would yield 1 through 101
            IFizzBuzzCommandProcessor fizzBuzzCommandProcessor = new FizzBuzzCommandProcessor();
            fizzBuzzCommandProcessor.Process(oneToOneHundred);
        }
    }
}
Output...

Overall, it was nice to watch all the constructs come together and play nicely with one another. Each component has a single responsibility. Nice, fine-grained classes that don't try to do too much. They specialize at a specific task. They know how to do one thing really, really well. And that's it.

Tuesday, July 12, 2011

How Do You Handle Aggregates At Your Shop?

I've witnessed this one too many times.

So here's a problem that I've commonly experienced during development. You get requirements, analyze them, and start to implement your domain model based on those requirements. If you're an agile shop, those requirements will come in chunks and be implemented in sprints or iterations (whatever you wanna call it). So your initial set of requirements will only have a part of the big picture. Let's say you're going to build logic for users and profiles. To start out with, a user simply wants to view their profile. So what might some of you proceed to do? You'll only make the ProfileEntity and leave the user out if you can get away with it. Why? Well, let's say when you go to the profile page, you don't need to render anything about the user; strictly profile info (let's assume for the sake of brevity that you haven't implemented a UserRepository, and if you have, you're still not going to use it). So you do something like....

public class ProfileRepository {
    public Profile Get(Guid userId) {
        // call NHibernate or RavenDb, YEAAAA!!
        return session
                .Query<Profile>(p => p.UserId == userId)
                .SingleOrDefault();
    }
}

Now someone like myself would say hey, aren't we in the context of the user? Isn't the user the aggregate root? Shouldn't we be doing...

public class UserRepository {
    public User Get(Guid userId) {
        // call NHibernate or RavenDb. yea!!
    }
}

// dependency inject this of course
var userRepo = new UserRepository();
var user = userRepo.Get(Guid.NewGuid());
var profile = user.Profile;

// map profile to view model if you want
return View(profile);

I'd rather be proactive about making my aggregates if I can help it, even if they aren't immediately needed. Heck, if you want to make a bare UserEntity with nothing but its associations until you later find out its specific properties, then that doesn't hurt either. At least you'll be prepared once you have to account for the user's information.

What ends up happening is that you have a proliferation of mini repositories that should never have been there in the first place. Essentially one per association. In reality you should be fetching the aggregate and drilling down into it to pick off associations, since you should never access an association without first going through the aggregate. That is, if you're adhering to DDD.

So instead of implementing ProfileRepository and AddressRepository that both know how to retrieve profiles and addresses by userId, you end up with one wholesome UserRepository that will serve users and their respective associations. Like orders, address, profile, etc. as I've outlined below.

But some will do what is only necessary and refactor later on. The one inherent problem is that in almost every situation, we're always in the context of a user. Which kind of leads to a question within this one. Should everything hang off of the core user object? Think of how many associations that would be.

public class User {
    public Profile Profile { get; set; }
    public Address Address { get; set; }
    public IList<Order> Orders { get; set; }
    public IList<Comment> Comments { get; set; }

    // and so on and so forth
}

For really big applications, upon hitting Intellisense, you'd be bombarded by about 30+ properties. I guess that's not so bad. You could always house them within some other contextual object. Something like...

User.UserInfo.Profile and User.UserInfo.Address
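For what it's worth, that contextual-object idea might look something like this (a sketch with stub entity types; UserInfo is a name I made up for illustration):

```csharp
using System;

// Stub entities for illustration only.
public class Profile { public string Bio { get; set; } }
public class Address { public string City { get; set; } }

// Hypothetical grouping object that keeps the User surface area manageable.
public class UserInfo
{
    public Profile Profile { get; set; }
    public Address Address { get; set; }
}

public class User
{
    public UserInfo UserInfo { get; set; }
}

class AggregateSketch
{
    static void Main()
    {
        var user = new User
        {
            UserInfo = new UserInfo
            {
                Profile = new Profile { Bio = "hello" },
                Address = new Address { City = "somewhere" }
            }
        };

        // Associations are still reached through the aggregate root, never directly.
        Console.WriteLine(user.UserInfo.Profile.Bio);
    }
}
```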

I dunno.

Monday, May 16, 2011

Implementing read-only mode in ASP.NET Using Inversion of Control, Reactive Programming, & the Visitor Pattern

In this post I wanted to focus primarily on Inversion of Control and a little on Reactive Programming (Rx), since I think the two go hand in hand. I really like these functional-style approaches to programming. IoC is very reactive in nature, because the relying party ultimately ends up reacting to the input you created for it (details to follow). These two concepts are simple in nature, but serve as the core building blocks of some of our most widely used functions and libraries. If my suggestions leave you with any doubt or skepticism, feel free to pay one of my favorite guys a visit.

You're sure to have come across IoC if you're into Domain Driven Design (DDD) like myself. I'd suggest picking up a copy of the book with the bridge. Don't worry. You won't have any problem finding it...trust me. It's been around for 5 years by now. Yea...it's that good.

Inversion of Control/Rx examples

  • how NUnit runs your tests for you
  • how ASP.NET MVC invokes your controller for you, conveniently populating the arguments you requested like route values, models, etc.
  • how jQuery abstracts away AJAX for you and allows you to be ignorant as to how the response was generated and where it came from
  • how WCF invokes your service/host for you
  • how a DI container creates concrete instances for you as opposed to you doing it yourself
  • how Windows Forms and ASP.NET listen for events for you and allow you to react to them
  • how Windows and the CLR unite to provide your program with command line arguments via the string[] args parameter to your command line program's Main method. You react to the arguments, but not once have you ever had to worry about where they came from or how they got there. It's all handled by the runtime.
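A toy sketch of the common thread in all of these examples (the names and events here are made up): the framework owns the loop and the timing, and the client only hands over a callback to react with.

```csharp
using System;

public static class TinyFramework
{
    // The "framework" side: it decides when, and how many times, to call back.
    public static void Run(Action<string> onEvent)
    {
        foreach (var evt in new[] { "click", "keypress", "click" })
            onEvent(evt); // control is inverted: the client merely reacts
    }
}

class IocDemo
{
    static void Main()
    {
        var clicks = 0;

        // The client never pulls events; it just describes its reaction.
        TinyFramework.Run(e => { if (e == "click") clicks++; });

        Console.WriteLine(clicks); // 2
    }
}
```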


Let's get into some code shall we?

Recursion....Important but BOOORRRIIIINNNGGGG!!

public static class AdubbExtensions {
    public static void ToReadOnly(this Page page) {
        page.ToReadOnly(new DefaultReadOnlyModeControlVisitor());
    }

    public static void ToReadOnly(this Page page, IReadOnlyControlVisitor visitor) {
        Action<IEnumerable<Control>> recursor = null;

        recursor = controls => {
            foreach (var control in controls) {
                if (control is BulletedList) continue;

                control.IfIs<ListControl>(l => {
                    var placeHolder = new PlaceHolder();

                    l.Items
                    .Cast<ListItem>()
                    .ToList()
                    .Select(i => new ListItemControl(placeHolder, i))
                    .ForEach(visitor.Visit);

                    /* swap the ListControl with a placeholder control so its children will be rendered. otherwise the ListControl would ignore our child controls that
                     * we added to the Controls collection. it doesn't read that property when rendering. it reads the Items property, which we're not even dealing with. */
                    l.SwapWith(placeHolder);
                });

                control.IfIs<TextBox>(visitor.Visit);
                control.IfIs<CheckBox>(visitor.Visit);

                // for ListControl, the Controls collection will be empty. the actual controls are contained within the Items property.
                recursor(control.Controls.Cast<Control>().ToList());
            }
        };

        recursor(page.Form.Controls.Cast<Control>().ToList());
    }

    public static int Index(this Control control) {
        var index = -1;
        if (control.Parent == null) return index;

        foreach (var c in control.Parent.Controls) {
            index++;

            if (ReferenceEquals(c, control))
                return index;
        }

        throw new InvalidOperationException("Could not find control in parent's control tree.");
    }

    public static void SwapWith(this Control old, Control @new) {
        var index = old.Index();
        var parent = old.Parent;

        parent.Controls.Remove(old);
        parent.Controls.AddAt(index, @new);
    }

    public static void IfIs<T>(this object target, Action<T> action) where T : class {
        var wannaBe = target as T;

        if (wannaBe != null) action(wannaBe);
    }

    public static void ForEach<TType>(this IEnumerable<TType> target, Action<TType> action) {
        foreach (var element in target)
            action(element);
    }
}

internal class DefaultReadOnlyModeControlVisitor : IReadOnlyControlVisitor {
    const string LineBreak = "<br />";
    const string SelectableItemReadOnlyFormat = "<b>{0}{1}{2}</b> {3}{4}";
    const char OpenCurly = '{';
    const char ClosedCurly = '}';
    const char X = 'X';
    const char Underscore = '_';

    public void Visit(TextBox textBox) {
        textBox.SwapWith(new Literal { Text = string.Format("{0}{1}", textBox.Text, LineBreak) });
    }

    public void Visit(CheckBox checkBox) {
        checkBox.SwapWith(new Literal { Text = string.Format(SelectableItemReadOnlyFormat, OpenCurly, checkBox.Checked ? X : Underscore, ClosedCurly, checkBox.Text, LineBreak) });
    }

    public void Visit(ListItemControl listItem) {
        listItem.SwapWith(new Literal { Text = string.Format(SelectableItemReadOnlyFormat, OpenCurly, listItem.Selected ? X : Underscore, ClosedCurly, listItem.Text, LineBreak) });
    }
}

public interface IReadOnlyControlVisitor {
    /// <summary>
    /// Converts a <see cref="TextBox"/> to read-only mode.
    /// </summary>
    /// <param name="textBox">The text box to convert to read-only mode.</param>
    void Visit(TextBox textBox);

    /// <summary>
    /// Converts a <see cref="CheckBox"/> to read-only mode.
    /// </summary>
    /// <param name="checkBox">The check box to convert to read-only mode.</param>
    void Visit(CheckBox checkBox);

    /// <summary>
    /// Converts a <see cref="ListItemControl"/> to read-only mode.
    /// </summary>
    /// <param name="listItem">The list item to convert to read-only mode.</param>
    void Visit(ListItemControl listItem);
}

/// <summary>
/// A wrapper for ListItem since it's not a control.
/// </summary>
public class ListItemControl : Control {
    public ListItemControl(Control parent, ListItem listItem) {
        Selected = listItem.Selected;
        Text = listItem.Text;

        parent.Controls.Add(this);
    }

    public bool Selected { get; private set; }
    public string Text { get; private set; }
}

The beauty of the design is that we handled the recursive part of the code. That's the part that no one wants or even cares to deal with, and rightfully so. I picked up this concept while reading Real World Functional Programming. Tomas Petricek talks about how to implement routines like Sum, Max, and Min by encapsulating recursively iterating a list and accepting a function that knows how to do the rest. The client never has to worry about writing the loop. They just provide a function that abstractly accepts 2 values, does something with them, and returns the result. In the case of Sum, you'd provide the + operator as a function by wrapping it in parentheses. Then you'd have a function like aggregate/reduce (it's really called fold in F#) that accepts the + operator. So a client could call

let seed = 0
let ten = List.fold (+) seed [1; 2; 3; 4]

let seed' = 1
let twentyFour = List.fold (*) seed' [1; 2; 3; 4]

This approach is backed by this blog post and is a must-have in any functional-style language. It's a lot more verbose to define in C#, but it definitely works. You're probably thinking what I'm thinking. Isn't the Aggregate function available in LINQ? Yup. Sure is. And I use it all the time.
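For comparison, here's the same fold expressed with LINQ's Aggregate in C#; the seeds and operators mirror the F# snippet above:

```csharp
using System;
using System.Linq;

class FoldDemo
{
    static void Main()
    {
        var numbers = new[] { 1, 2, 3, 4 };

        // Aggregate owns the loop; we only supply the seed and the combining function.
        var sum = numbers.Aggregate(0, (acc, n) => acc + n);
        var product = numbers.Aggregate(1, (acc, n) => acc * n);

        Console.WriteLine("{0} {1}", sum, product); // 10 24
    }
}
```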

In my case, you should only have to worry about reacting to a particular type of control. You inverted control over to me so that you can declaratively tap into the processing of the control tree. That's kind of funny when you think about it. You mean I'm going to give this person a reference to myself and I can't even control when or how many times I'm invoked? That's the beauty of it my friends. It's what IoC is all about. The same thing happens in ASP.NET MVC. When have you ever been in control of when your controller was invoked? You're not. That's the job of the action invoker. You just have to write code. A simple yet powerful concept.

I simply pluck controls from the tree, and if they match, I tell you about it. You're kind of the subject in this case. This is similar to how IQueryable and LINQ providers work. You write code that knows how to handle each type of expression. Then .NET notifies your expression tree visitor when that particular type of expression shows up. Also, when you implement a query provider, you have no control over when your code is executed. .NET will invoke your provider accordingly once the client makes calls to Where, Select, OrderBy, etc. Lastly there's the aforementioned Reactive Framework (Rx) in .NET. It's pretty cool as well with IObserver and IObservable. Their counterparts over in LINQ are IQueryable and IQueryProvider. In both cases .NET has conveniently implemented extension methods that make use of these two heavyweight abstractions. You write code, and the extension methods provided by .NET determine when it will be executed.

I can imagine the next time those Microsoft guys find a standard and generic way of executing code. There'll be some other IX and IXable interface tandem. I have to give them credit. They always find ways to formulate the perfect marriage between husband and wife. I wonder if IQueryable and IQueryProvider will be producing offspring in the near future. The world may never know.

Honey, we Have a Visitor...

I'm sure that some of you were attracted to this post to find out how I made use of the visitor pattern. I used the visitor pattern to handle each type of control. That's exactly what the pattern was created for: you make some high-level class that knows how to handle the concrete instances of an inheritance hierarchy (2 points for polymorphism). I rarely find a legitimate purpose for it, but it worked to perfection in this particular scenario. I created a default implementation of my IReadOnlyControlVisitor interface, but clients are allowed to swap that out if need be. For the record, RadioButton is a CheckBox via inheritance, so I kind of killed two birds with one stone in that regard (lucky me).

Why Make a Separate Control for ListItem?

I had to treat the ListItem object as a special case. Firstly, anytime I come into contact with a ListControl, I immediately drill down into its children. Why should clients have to write the same old loop over and over? All they care about is the individual items. Secondly, ListItem is not a control, so I needed to make a wrapper for it that encapsulates whether it's selected or not. At that point, I can treat it just like any other control and swap it, find its index, etc.

Adding flavor with SwapWith, IfIs<T> and Index

The two extension methods SwapWith and Index are a couple of pretty clever utilities I implemented. They're both pretty simple and straightforward. SwapWith is just a function of Index. IfIs<T> is an idea I got from a pal and decided to implement myself. I'm sure my implementation matches his line for line. We can both agree that we were fed up with writing that POTC (Plain Old Type Cast). So we implemented a more declarative and functional version that's not as noisy as the de facto imperative check.

Two can Play that Game

Just to flex the design a little, I came up with a separate visitor that expresses how simple it is for us to customize our implementation. We're wiping out the DefaultReadOnlyModeControlVisitor with a more naive one.

internal class RainbowControlVisitor : IReadOnlyControlVisitor {
    public void Visit(TextBox textBox) {
        textBox.SwapWith(GetDiv("red", "Wacky Red"));
    }

    public void Visit(CheckBox checkBox) {
        checkBox.SwapWith(GetDiv("blue", "Wacky Blue"));
    }

    public void Visit(ListItemControl listItem) {
        listItem.SwapWith(GetDiv("green", "Wacky Green"));
    }

    static Control GetDiv(string color, string text) {
        var red = new WebControl(HtmlTextWriterTag.Div);

        red.Style.Add(HtmlTextWriterStyle.Color, color);
        red.Controls.Add(new Literal { Text = text });

        return red;
    }
}

Getting in Trouble

There are two ways to make this implementation blow up. The first is due to LINQ and that lazy bastard IEnumerable<T> (I love you). Since I'm modifying the incoming control collection, I have to be done enumerating it before the modifications begin. Put simply, the code will fail without a call to ToList, which forces eager evaluation. Secondly, code nuggets (<%...%>) will make her blow. You can remedy that situation by following this stackoverflow post. There was another good post out there by Rick Strahl, but I can't seem to locate it. The simplest solution is to wrap any code using code nuggets in a PlaceHolder control. All better now?
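Here's a minimal repro of that first failure mode, independent of ASP.NET (my own sketch): mutating a List<T> while a lazy query over it is still enumerating throws, and snapshotting with ToList first avoids it.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class LazyEnumerationDemo
{
    static void Main()
    {
        var controls = new List<string> { "TextBox1", "CheckBox1", "TextBox2" };

        // Safe: ToList snapshots the matches before we start mutating the source.
        foreach (var c in controls.Where(x => x.StartsWith("Text")).ToList())
            controls.Remove(c);

        Console.WriteLine(controls.Count); // 1

        // Unsafe: removing from the list mid-enumeration invalidates the enumerator.
        try
        {
            foreach (var c in controls.Where(x => true))
                controls.Remove(c);
        }
        catch (InvalidOperationException)
        {
            Console.WriteLine("Collection was modified; enumeration blew up.");
        }
    }
}
```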

Samples Anyone?

<%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true"
    CodeFile="Default.aspx.cs" Inherits="Default" %>

<asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent">
</asp:Content>
<asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
    <h2>
        Welcome to ASP.NET!
    </h2>
    <p>
        To learn more about ASP.NET visit <a href="http://www.asp.net" title="ASP.NET Website">www.asp.net</a>.
    </p>
    <p>
        You can also find <a href="http://go.microsoft.com/fwlink/?LinkID=152368&clcid=0x409"
            title="MSDN ASP.NET Docs">documentation on ASP.NET at MSDN</a>.
    </p>

    <asp:TextBox Text="Antwan As A Literal" runat="server" />
    <asp:RadioButtonList ID="buttonList" runat="server" />
    <asp:CheckBoxList ID="checkBoxList" runat="server" />
    <asp:RadioButton runat="server" Checked="true" Text="Antwan checked me homie!!"  />
    <asp:CheckBox runat="server" Text="R.I.P. to Bone of Cali Swagg" />
</asp:Content>

public partial class Default : Page {
    protected void Page_Load() {
        if (IsPostBack) {
            Response.Write("Why would you bind twice with view state enabled? Don't be silly.");
            return;
        }

        var foods = new List<string> { "pizza", "pineapples", "macaroni" };
        var dances = new List<string> { "cali-duggie", "detroit-jit", "atlanta-shoulder lean" };

        buttonList.DataSource = foods;
        checkBoxList.DataSource = dances;

        buttonList.DataBind();
        checkBoxList.DataBind();

        this.ToReadOnly();

        // skittles
        // this.ToReadOnly(new RainbowControlVisitor());
    }
}

Before


After


Skittles



Conclusion

And that's it. We began by handling the mundane recursive part of our implementation to alleviate the burden on our clients. No one should have to repeatedly implement that code. This set the stage for IoC. It gave us the ability to serve controls to the client in a reactive and convenient fashion. We made use of the visitor pattern to handle each type of control we wanted to convert to read-only mode. We provided clients with a default implementation but gave them the ability to override that implementation by providing their own version of IReadOnlyControlVisitor. Lastly we handled the special case for ListItems since they do not inherit from the base Control class provided by ASP.NET.

Thursday, May 5, 2011

Synchronizing Files With F# and the FileSystemWatcher

So I needed a way to automate change tracking on a set of directories and have those changes merged to another set of directories. In my case, I'm dealing with directories that have a similar makeup. That is to say, they contain the same files, folders, etc. Just in different locations. They're essentially clones of one another. The team I'm currently working on calls these packages. I certainly don't agree with the way they implemented it and all the duplication, but I'm not going to manually copy my changes to 2 other directories all day long. So I came up with a simple utility to do the work for me. It's not fully polished yet, but I wanted to get my initial implementation up online. If you actually tried to use the program, things would work, aside from the IOException that's generated after subsequent saves due to the host process somehow maintaining a lock on the files. Now normally it's the developer's fault, and it probably is in my case, but based on the very nature of the function I'm calling, and what it promises to do for me, I doubt it. I'll resolve it in the coming days though. Lastly, I'm working purely with code (.cs, .aspx, .ascx, etc.) so I can get away with calling File.ReadAllText. I'm not even going to think about binary. Maybe it'd still work. Who cares...lol. Enough talking already. Here's the code.

Iteration I - 4 April, 2011

// Learn more about F# at http://fsharp.net
open System.Xml.Linq
open System.Reflection
open System.IO
open System.Linq
open Microsoft.FSharp.Control
open System.Threading
open System

let pathAttributeName = "path"
let xs n = XName.Get(n)
let wildcard = "*.*"

let workflow = async {
    printfn "Started listening at %A..." DateTime.Now

    while Console.ReadLine() <> "q"
        do
            let doc = (Assembly.GetExecutingAssembly().Location |> Path.GetDirectoryName) + "\\Synch.config" |> XDocument.Load
            let config = doc.Root
            let root = config.Element("root" |> xs)
            let rootDir = root.Attribute(pathAttributeName |> xs).Value

            let mapdirs (e : XElement) =
                e.Elements("add" |> xs)
                |> Seq.map((fun (e : XElement) ->
                                let dir = [|rootDir; e.Attribute(pathAttributeName |> xs).Value;|] |> String.Concat
                                dir))

            let directories = config.Element("directories" |> xs)
            let masters = directories.Element("masters" |> xs) |> mapdirs
            let slaves = directories.Element("slaves" |> xs) |> mapdirs

            let directoryWatchers = masters
                                    |> Seq.map((fun d -> 
                                                    new FileSystemWatcher(d, EnableRaisingEvents = true, Filter = wildcard)))

            directoryWatchers 
            |> Seq.iter((fun w -> 
                            w.Changed.Add((fun e -> 
                                            let merge fp fn =
                                                let targetDir = Path.GetDirectoryName fp
                                                let content = fp |> File.ReadAllText

                                                slaves
                                                |> Seq.iter ((fun d -> 
                                                                    let fileName = [|d; "\\"; fn;|] |> String.Concat

                                                                    if File.Exists fileName then
                                                                        try
                                                                            File.WriteAllText(fileName, content)

                                                                            printfn "Merged %s to %s at %A %s" fp fileName DateTime.Now Environment.NewLine
                                                                        with 
                                                                            | :? IOException as e ->
                                                                                printfn "Antwan said he'd handle it later. He's eager to get his post up now!!"
                                                                    else
                                                                        printfn "File %s did not exist in directory %s. No merge required. Aborting...%s" fileName d Environment.NewLine
                                                                    ))

                                            e.Name |> merge e.FullPath))))

    printfn "Stopped listening at %A..." DateTime.Now
}

let start() = 
    workflow |> Async.RunSynchronously

do start()

And here's the configuration file I use. No, it's probably not the most intuitive XML file you've ever seen, but it works for me. I called it Synch.config and placed it in my bin/Debug directory.

<watch>
 <root path="C:\Users\A-Dubb\Documents\" />
 <directories>
    <!-- Directories I'll be working in -->
    <masters>
      <add path="TestDir" />
    </masters>
    <!-- Directories I want my work merged to -->
    <slaves>
      <add path="TestDirII" />
    </slaves>
 </directories>
</watch>

It's nothing too complex. I just listen for changes in the master directories and merge them to the slave directories. Pretty cool though. It's definitely a good candidate for a Windows service. I cheated with a while loop to force the main thread to wait on me without exiting the program. There are numerous ways to achieve that behavior as well, but it was quick and painless. I bet something like Dropbox makes use of a similar construct like FileSystemWatcher to keep your files in sync between machines. I'll upload the patch to resolve the IOException once I have time to delve into it.

Wanna get her up and running quickly? You got it. Just download Funtastic. It's a lightweight F# editor. Basically just a wrapper around F# Interactive. She's quite handy though.

For now, adios my friends.

Iteration II - 5 April, 2011

Ok. So I figured out what the problem is. First off, my exception handling code is in the wrong place. It should be concentrated on the attempt to read the file that was actually changed, not the files that need to be patched. Second, since I'm subscribing to the Changed event, it gets triggered just by me simply reading the file. That's because the file's metadata gets changed by the OS upon reading it (LastAccessedDate). So the Changed event happens so fast (probably nanoseconds) that as I'm reading the file the first time around, I attempt to read it again. Don't believe me? Open up one of your tracked files in Notepad++ and watch it get logged to the console. Even better, upon running the application, you'll notice that you always see the same file get merged to each directory twice. So instead of seeing 2 sets of output, you see 4. I'll have to find a way to suppress notifications for reads.

I did try opening the file with FileAccess.Read and FileShare.Read. That didn't work 100% of the time, but it did seem to be a lot better than what I had before. I also like how ReadAllLines and ReadAllBytes are more high level. I don't have to worry about managing streams, disposing them, reading them, etc. The problem is, you don't have control over access permissions when consuming the file because of the defaults .NET sets for you. I'd never have a source file that's over 2 gigs, but that's the most you can load in memory with my current approach because of Int32.MaxValue. Maybe the guys at Microsoft know a way around that with their implementation. Who knows? Lastly, I'm working with raw bytes now since that's the fundamental makeup of every file, whether it be binary or text based. So I take back my statement from earlier. I kind of do actually care now. I thought I'd have to make some fancy factory that knows how to read and write each file based on its extension. That'd be one of three things: either an infinite switch block, a jam-packed dictionary, or a regex longer than the Mississippi.

Anyway, here's my current revision. You're probably starting to think I'm trying to obsolete git by now. Forgive me. I just want an immediate view of how many times I took a swing at this thing. Don't worry. I'll call it a strikeout at 3.

// Learn more about F# at http://fsharp.net
open System.Xml.Linq
open System.Reflection
open System.IO
open System.Linq
open Microsoft.FSharp.Control
open System.Threading
open System

let pathAttributeName = "path"
let xs n = XName.Get(n)

let workflow = async {
    printfn "Started listening at %A..." DateTime.Now

    while Console.ReadLine() <> "q"
        do
            let doc = (Assembly.GetExecutingAssembly().Location |> Path.GetDirectoryName) + @"\Synch.config" |> XDocument.Load
            let config = doc.Root
            let root = config.Element("root" |> xs)
            let rootDir = root.Attribute(pathAttributeName |> xs).Value

            let mapdirs (e : XElement) =
                e.Elements("add" |> xs)
                |> Seq.map((fun (e : XElement) ->
                                let dir = [|rootDir; e.Attribute(pathAttributeName |> xs).Value;|] |> String.Concat
                                dir))

            let directories = config.Element("directories" |> xs)
            let masters = directories.Element("masters" |> xs) |> mapdirs
            let slaves = directories.Element("slaves" |> xs) |> mapdirs

            let directoryWatchers = masters
                                    |> Seq.map((fun d -> 
                                                    new FileSystemWatcher(d, EnableRaisingEvents = true, IncludeSubdirectories = true)))

            directoryWatchers 
            |> Seq.iter((fun w -> 
                            w.Changed.Add((fun e -> 
                                            let merge fp fn =
                                                let targetDir = Path.GetDirectoryName fp
                                                
                                                try
                                                    use fs = File.Open(fp, FileMode.Open, FileAccess.Read, FileShare.Read)
                                                    
                                                    let size = fs.Length |> int
                                                    let buffer = Array.zeroCreate<byte> size
                                                    
                                                    fs.Read(buffer, 0, size) |> ignore
                                                    
                                                    slaves
                                                    |> Seq.iter ((fun d -> 
                                                                    let fileName = [|d; "\\"; fn;|] |> String.Concat

                                                                    if File.Exists fileName then
                                                                        File.WriteAllBytes(fileName, buffer)

                                                                        printfn "Merged %s to %s at %A %s" fp fileName DateTime.Now Environment.NewLine
                                                                    else
                                                                        printfn "File %s did not exist in directory %s. No merge required. Aborting...%s" fileName d Environment.NewLine
                                                                    ))
                                                with
                                                  | :? IOException as ioe ->
                                                        printfn "exception occured %s %s" ioe.Message Environment.NewLine
                                               
                                            e.Name |> merge e.FullPath))))

    printfn "Stopped listening at %A..." DateTime.Now
}

let start() = 
    workflow |> Async.RunSynchronously

do start()

I'll be back for my last strike later.

Iteration III - 5 April, 2011 3:54 PM

Ok. I spent a few minutes looking around at something I completely ignored to start out with. This line allows you to filter which events the watcher raises. You could filter the events yourself in F#, but this is even simpler. It still doesn't work for me, though, because as soon as I open the file, that in and of itself is considered a change.

new FileSystemWatcher(d, EnableRaisingEvents = true, IncludeSubdirectories = true, NotifyFilter = NotifyFilters.LastWrite)))

I'm throwing in the towel for now, but you have to admire my persistence. I'm sure I could throw in a Sleep call in between reads, but I don't want to hack anything right now. It was kind of fun experimenting with the FileSystemWatcher. At least I'm fully aware of its potential limitations. Cool :).
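For what it's worth, here's one possible way to suppress those duplicate notifications without a Sleep call. This is an untested sketch; the 500ms window and the shouldHandle name are my own inventions, not anything from the FileSystemWatcher API. The idea is to remember when each path last fired and swallow events that arrive within the window:

```fsharp
open System
open System.Collections.Generic

// Tracks the last time we acted on each file path.
let lastSeen = Dictionary<string, DateTime>()

// Returns true only if this path hasn't fired within the last 500ms.
let shouldHandle (path : string) =
    let now = DateTime.Now
    match lastSeen.TryGetValue path with
    | true, last when (now - last).TotalMilliseconds < 500.0 ->
        false                      // duplicate burst; ignore it
    | _ ->
        lastSeen.[path] <- now
        true

// Inside the Changed handler:
// if shouldHandle e.FullPath then e.Name |> merge e.FullPath
```

It's a hack of sorts too, but at least it's deterministic about what it drops.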

Source can be found here.

Sunday, April 17, 2011

Implementing an NBA Playoff Bracket in F#

I'm a huge fan of the NBA and sports in general, and for years I've been fascinated with the way the NBA structures its playoff bracket. They take the top 8 teams from each conference and seed them based on which teams have the most wins. There are 2 conferences in all, so that makes 16 playoff teams each year. The top seed plays the worst team, the second seed plays the seventh seed, and so on and so forth. So I thought it'd be cool to actually code this setup in F#.

I started with a method that knows how to read in each team from a text file.

// Learn more about F# at http://fsharp.net

open System
open System.IO
open System.Reflection

type Conference =
    | Eastern
    | Western

type Team = {
    Name : string;
    Wins : int;
    Losses : int;
    Conference : Conference
}

let filename = (Assembly.GetExecutingAssembly().Location |> Path.GetDirectoryName) + @"\Teams.txt"
let totalgames = 82
let max = totalgames + 1
let r = new Random()
let conferencesize = 15
let playoffteamsperconference = 8
let half = playoffteamsperconference / 2

let getteams() =
    seq {
        let mapteam conference (l : string) =
            let wins = r.Next(0, max)
            { Name = l.Trim(); Wins = wins; Losses = totalgames - wins; Conference = conference;}

        let teams = filename |> File.ReadAllLines

        // first 15 teams are the eastern conference
        let eastern = teams
                        |> Seq.take conferencesize
                        |> Seq.map (mapteam Eastern)
        
        yield! eastern

        // last 15 are the western conference
        let western = teams
                        |> Seq.skip conferencesize
                        |> Seq.take conferencesize
                        |> Seq.map (mapteam Western)

        yield! western
    }

The chunk of code within the seq{...} scope is known as a computation expression. This particular type of computation expression is called a sequence expression. It's a language-integrated feature of F# that allows you to use certain operators based on a set of methods you implement. In this case, the compiler will translate my call to yield! into a method call that knows how to accept a sequence and return its values. Computation expressions in F# are built on monads, a fundamental concept in functional languages. Believe it or not, the infamous LINQ as you know it is based on monads.
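To make the monad connection a bit more concrete, here's a minimal hand-rolled computation expression. The MaybeBuilder type and the maybe name are purely illustrative; they're not part of the playoff code:

```fsharp
// A builder supplies the methods (Bind, Return, ...) that the compiler
// translates let! and return into.
type MaybeBuilder() =
    member this.Bind(m, f) =
        match m with
        | Some v -> f v
        | None -> None
    member this.Return(v) = Some v

let maybe = MaybeBuilder()

// let! unwraps an option, short-circuiting the whole block to None.
let addOptions a b =
    maybe {
        let! x = a
        let! y = b
        return x + y
    }

// addOptions (Some 2) (Some 3) = Some 5
// addOptions (Some 2) None    = None
```

The seq { ... } block works the same way under the hood, just with a builder geared toward producing sequences.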

Amusingly enough, you've been using monads in .NET for quite a while, even if you aren't a regular user of LINQ. Ever used Nullable<T>? It's essentially a maybe monad: it either has a value or it doesn't. Some even argue the infamous jQuery object is a monad. In F# we represent this kind of construct using the generic option type. Options are discriminated unions, represented by Some 'T or None, where 'T is a generic type argument.

An example is

let o = Some 5
let p = Some "string"

Now o is an int option and p is a string option.

This is cool because we never have to worry about a value being null. Null isn't even a valid value in F#, but it is however a valid .NET value.

How do we find out if o or p has a value? We have to use pattern matching.

let printifhasvalue optionvalue =
    match optionvalue with
    | Some v -> printfn "%A" v
    | _ -> ()

You can think of patterns as switch statements, and for simple matches that's essentially how the compiler translates them. When using pattern matching, you have to handle all cases or the compiler will warn you. In my case I only have to worry about Some and None. I accounted for Some with the first check, and I used the wildcard pattern, _, to handle all other cases. Had I matched only on Some without the wildcard, the compiler would have warned that None was unhandled; since the wildcard catches everything else, the match is exhaustive and we're ok.

So when interoperating with other .NET libraries, we do have to account for null. F# libraries never return null, though. As for discriminated unions, I'll talk more about those later. As a heads up, my Conference type is one.

The map function accepts a function that can take a value and transform it into another type of value. It's just like Select in LINQ. When I called Seq.map (mapteam Western), I used a concept called partial application.

The expression (mapteam Western) isn't actually invoking the mapteam function. It returns a new function that accepts the remaining arguments of mapteam; in our case, the actual team. This works because F# functions are curried: a two-argument function is really a function that takes one argument and returns another function. If mapteam took 3 arguments, I'd get a compile-time error, because I'd get back a function that accepts 2 arguments as opposed to 1. In that case I'd have to write (mapteam Western arg2) to get a function that accepts only 1 argument. Pretty cool.
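A quick sketch of partial application with a made-up function (the names here are mine, chosen to mirror the mapteam shape of conference-first, then the remaining argument):

```fsharp
// A two-argument function: conference first, team name second.
let describe conference name =
    sprintf "%s plays in the %s conference" name conference

// Supplying only the first argument yields a function awaiting the name.
let describeWestern = describe "Western"

let line = describeWestern "Dallas"
// line = "Dallas plays in the Western conference"
```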

Most everything is a function in F#. Even the + and - operators. Don't believe me? You can use the operators as functions by wrapping them in parentheses, like so: (+). The output of that expression is a function that accepts 2 ints and returns another, or (int -> int -> int).
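For example, a wrapped operator can be passed around and partially applied like any other function:

```fsharp
// (+) has type int -> int -> int here, so List.reduce accepts it directly.
let total = List.reduce (+) [1; 2; 3; 4]   // 10

// Operators can be partially applied too.
let addTen = (+) 10
let bumped = [1; 2; 3] |> List.map addTen  // [11; 12; 13]
```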

I also needed a data structure to store each team and its properties. Specifically, I used what's known as a record in F#. Records and classes are completely separate constructs; the two are not synonymous.

There are 82 games in an NBA season, so I wanted to randomly generate the record for each team. I generate a number between 0 and 82 inclusive for the wins and subtract it from the total games in the season to compute the losses. Pretty simple.

As far as which conference each team belongs to, I got the names of each team from the NBA.com website and typed them in order into my Teams.txt text file. I tried to keep things simple in that regard.

Teams.txt


Chicago
Miami
Boston
Orlando
Atlanta
New York
Philadelphia
Indiana
Milwaukee
Charlotte
Detroit
New Jersey
Washington
Toronto
Cleveland
San Antonio
L.A. Lakers
Dallas
Oklahoma City
Denver
Portland
New Orleans
Memphis
Houston
Phoenix
Utah
Golden State
L.A. Clippers
Sacramento
Minnesota

The cool thing about F# is that it's functional. That means we should implement lightweight, composable functions. That's exactly the approach I've taken here. Each function builds atop the other. A simple f(g(x)) relationship, if you will.

And by the way, all values in F# are immutable by default. That means we can't change the state of something once we've created it. This greatly simplifies multithreaded programming, because you don't have to worry about multiple threads changing the state of your data. You can rest assured that once you give a function a reference to a value, it'll be in that very state once the operation is completed.

F# isn't a purely functional language so we do have mutable properties and values that we can pass around. Just not by default.

So now that we have a function that knows how to fetch each team and generate their record, we'll make something that can consume the output of that function and print each team.

let printteams (teams : seq<Team>) =
    teams
    |> List.ofSeq
    |> printfn "%A"

The printfn function is a cool utility because the %A format specifier knows how to print almost any value: a sequence, a base type, a record, etc. It's just like printf in C. You can pass a format like %s for strings and %d for integers.

I used List.ofSeq from the List module to convert my sequence to a list. I did that because sequences can be infinite in F#, so printfn would only print out the first few values. A list, on the other hand, is finite, so printfn will iterate the entire collection and print out each element as opposed to the first few.
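Here's the difference in a nutshell. The evens and firstFive names are just for illustration:

```fsharp
// An infinite sequence: elements are produced lazily on demand.
let evens = Seq.initInfinite (fun i -> i * 2)

// Taking a finite slice and converting to a list forces evaluation.
let firstFive = evens |> Seq.take 5 |> List.ofSeq
// firstFive = [0; 2; 4; 6; 8]
```

Calling List.ofSeq directly on evens would never terminate, which is exactly why the conversion only makes sense after a Seq.take.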

The type seq<'a> in F# is equivalent to IEnumerable<T> in C#. It represents a possibly infinite list of elements. I work solely with sequences throughout my implementation. All instances of Seq.x represent functions from the Seq module in F#. You can think of a module as a class with a bunch of static methods. The funny-looking |> syntax is just an operator, the forward pipe. It takes a value and a function and invokes that function, passing in the given value as an argument. f(x) once again. I don't have to use this operator, but I really like its logical syntax, so I kind of abused it here. It's no different than piping output from one command to another when using bash on Linux or any other command shell. Piping input to grep is really nice, by the way.

There is also a backward pipe operator, <|, that goes in the opposite direction: the function sits on the left and the argument on the right. That one reads from right to left, like

List.ofSeq <| x

In order to pull off the same syntax without the pipe operator, I'd have to nest all of my calls. It'd look like f(g(x)), or

printfn "%A" (List.ofSeq teams)

In this case, g is my List.ofSeq function, which accepts the teams. The teams are of course x. The output of that function is then passed to printfn. That makes printfn the f in this equation. It doesn't look so helpful in simple scenarios, but later on you'll witness me using the pipe operator quite aggressively, and I think you'll start to appreciate the elegant syntax it allows you to exercise.

And since F# is functional, functions are first-class citizens. They don't have to belong to any particular object, just like in JavaScript. You can pass them around as normal values, just like ints, GUIDs, and any other base types. That makes F# a really powerful language.

You're probably wondering how I got away without specifying types. Don't you expect to see int and string? Well, the F# compiler implements what's known as type inference. It's able to infer types based on the way they're used. We rarely have to specify types in F#, but I had to do it a few times in my implementation using type annotations. These appear in the signature of my function. Function signature syntax goes

functionname arg1[type] arg2[type] arg3[type]...argn[type]

F# interprets this as (arg1 -> arg2 -> arg3 -> argn -> returntype)

The last value is the return type. Just like Func<T> in C#.

So anytime you see (blah -> blah), that means a function. If you ever see this from intellisense when hovering over a method, you can believe that method accepts a function as an argument; so be prepared to pass one.

Arguments are delimited by spaces. A type annotation on an argument is optional; it's only needed when the compiler's type inference algorithm can't work out the type on its own. We don't need curly braces either, because F# detects scope based on indentation. Four spaces is the usual convention.

Now that I can get all the teams in the league, I need to group them by their conference. There are 2 conferences in the NBA: the Eastern and Western conferences. You'd normally represent something like this as an enum in C#, but in F# we have a more functional construct known as a discriminated union. My discriminated union is called Conference. You're probably starting to notice that the functions I'm using look a lot like LINQ. That's because LINQ's roots are tied deeply to functional programming.


let getteamsbyconference() =
    getteams()
    |> Seq.groupBy (fun t -> t.Conference)

I again made a function that knows how to print the teams out. Since I grouped the teams by conference, each group comes back as a pair, or tuple as we call it. So I have to drill down into the pair to get the teams. From there it's business as usual: I can simply reuse the initial function I created that knows how to print a sequence of teams.

let printteamsbyconference (conferences : seq<(Conference * seq<Team>)>) =
    conferences
    |> Seq.iter (fun (_, teams) -> teams |> printteams)

You probably noticed the weird syntax I used in my function to iterate the conferences. It's another form of pattern matching. As I mentioned before, the _ is the wildcard pattern. That means I don't care about the first value, which in this case is the conference. The syntax I used is a pattern for tuples, because it's wrapped in parentheses and delimited by a comma. That means I'm working with a pair, but I could just as easily have been working with a triple, a quadruple, and so on and so forth. I could have called my teams parameter whatever I liked; the name you give to your parameters is completely arbitrary.

I can get each conference and its respective teams now. It's time to make something that knows how to get the best 8 teams from each conference.

let getplayoffteams() =
    getteamsbyconference()
    |> Seq.map (fun (c, teams) -> (c, teams |> 
                                      Seq.sortBy (fun t -> t.Losses)
                                      |> Seq.take playoffteamsperconference))

For each conference, I sort the teams in the conference by the number of losses they have. Logically you'd think I'd order by wins, but the sortBy function orders in ascending order. That means the teams with the fewest losses will be at the front of the pack. And the teams with the fewest losses are the best, right? They have the most wins. After sorting the teams, I take the top 8 from each conference and return them as a tuple to pair them with their conference.

Anytime you see the (fun x -> ...) syntax, that represents a lambda. Lambdas are pretty big in functional languages; it's with the lambda symbol that functions are mathematically denoted in the lambda calculus. That's some old-school, history-related stuff and it's admittedly pretty boring. It is kind of nice to know, though.

The last step is to order the teams and make the final bracket.

let printplayoffbracket() =
    getplayoffteams()
    |> Seq.iter (fun (c, teams) -> 
                        Console.ForegroundColor <- ConsoleColor.Red

                        printfn "%A conference matchups\n" c

                        let topfour = teams |> Seq.take half
                        let bottomfour = teams |> Seq.skip half |> Seq.take half |> Seq.sortBy (fun t -> t.Wins)
                        
                        Console.ForegroundColor <- ConsoleColor.Yellow

                        bottomfour
                        |> Seq.zip topfour 
                        |> Seq.iter (fun (topseed, bottomseed) -> 
                                        printfn "%s (%d-%d) vs %s (%d-%d) \n" topseed.Name topseed.Wins topseed.Losses bottomseed.Name bottomseed.Wins bottomseed.Losses)
                        Console.ResetColor())

We consume the playoff teams and match the best teams against the worst. I'm leaning on closures here, since the lambdas capture values defined above them. I know the top four teams are at the front of the pack, so I simply took the first 4 of the 8 available in each conference. After that, I took the last 4 teams and sorted them by the number of wins they had. Again, you'd logically think I'd sort them by the number of losses, but I know the worst teams are the ones with the fewest wins. The zip function pairs up each member of one sequence with the corresponding member of another. If one sequence is longer than the other, it stops pairing once it has consumed as many elements as the shorter sequence contains.
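The truncating behavior of zip is easy to see with two mismatched sequences (an illustrative snippet, not part of the bracket code):

```fsharp
// The 4 has no partner in the shorter sequence, so it's simply dropped.
let pairs = Seq.zip [1; 2; 3; 4] ["a"; "b"; "c"] |> List.ofSeq
// pairs = [(1, "a"); (2, "b"); (3, "c")]
```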

Now it's time to watch our little composable puppies in action

do getteams() |> printteams
do getteamsbyconference() |> printteamsbyconference
do getplayoffteams() |> printteamsbyconference
do printplayoffbracket()


We use the do keyword in F# to execute imperative code; code that doesn't return a meaningful value and just performs an action. It has the return type unit, F#'s equivalent of void in C#. In F#, every expression returns a value. It'll either be an actual value like a record or tuple, or unit. Unit is denoted by (). So to return unit from a function, you just write

let f() =
    ()

The function I define above not only returns unit, but accepts it as an argument. So even when you think you're calling a parameterless method in F#, you're really not. And when you think you're not returning anything, you actually are.

And that's it. We started out by making a function that could read in each team and generate wins and losses for it. Then we grouped each of those teams into the right conference. Next we took the top 8 teams from each conference, which were our playoff teams. And lastly, we paired up the best teams with the worst teams in each conference, just as the NBA does it.

If you want to try out my code, you can download F# from fsharp.net. It's deployed as its own toolset, independent of Visual Studio, though I'd recommend Visual Studio so you can have IntelliSense. If you want to get down and dirty, you can use the F# Interactive command shell. It's an interpreter, so you're allowed to execute raw code without compiling it.

Don't forget my favorite 2 books, Real World Functional Programming and Expert F# 2.0. Also check out my 2 favorite guys, the authors of those books, Tomas Petricek and Don Syme.

I think it's safe to say we implemented map reduce here.

Cheers!!


Source can be found here.

Saturday, April 16, 2011

Implementing A Common Interface For NHibernate And RavenDb LINQ Providers

Background Knowledge

This is for those who are not familiar with the concept of a query provider. It's all about IQueryable<T>. By implementing this interface, you promise that you have a class (a query provider) that knows how to populate you (typically a collection) from some domain-specific data store. It could be a document database, a relational database, or even XML. In the case of Raven and NHibernate, we're dealing with document and relational databases. Raven's domain-specific language is HTTP and REST, while NHibernate's is an abstraction layer atop SQL. The heart of any LINQ provider is expression trees. We call them quotations in F#, and they can be a nightmare when you want to use an existing LINQ implementation. Shame on you, fsc. The C# compiler, csc, is a lot friendlier and more compliant about emitting expression trees.

That being said, expression trees are where the magic happens. They are merely runtime representations of our code. At compile time, the compiler converts our calls against IQueryable<T>, like Where and Select, into expression trees as opposed to delegates. Then it's up to you to implement an expression tree visitor and LINQ provider that knows how to parse each kind of expression supported by your API. You can find NHibernate's here and Raven's here. You'll be working with runtime representations of the standard LINQ query operators like Select, Where, OrderBy, and GroupBy. I'd like to assume everyone knows that there is a difference between IQueryable<T> and IEnumerable<T>, but I highly doubt that. What can be confusing for some is when they call


var five = new List<int> {3, 4, 5}.Where(n => n % 5 == 0).Single();

and it works. That's LINQ to Objects. In that instance, we're working with IEnumerable<T>. The key thing to remember is that IQueryable<T> and IEnumerable<T> each have a set of extension methods that target them, and depending on which one you use, you'll either love or hate the results. The extensions for IEnumerable<T> work with in-memory collections as opposed to LINQ providers and expression trees. The extensions for IQueryable<T> are just an abstraction layer sitting atop your LINQ provider. You implement the LINQ provider, and .NET will invoke it at the proper time, passing in the proper arguments (an expression tree). All you have to do is parse the tree and emit your domain-specific output. Then you send that output to whatever backend you're encapsulating, fetch the results, and send them back to the client.

I won't go any further into LINQ providers, but I figured I could clear up a little smoke by providing some concrete examples. The last thing I'll add is that IQueryable<T> is always lazily executed (just like IEnumerable<T>) and inherits from IEnumerable<T>. All IEnumerable<T> means is that you can iterate (foreach) over its results. Now, it's not quite that simple, because the compiler generates a hidden class and a state machine, but we won't get into that and monads. What makes IQueryable<T> lazy is that your query won't be executed until the client actually tries to iterate. This is cool because it allows us to continually compose calls on our IQueryable<T> without hitting the data store each time. Obviously we're not ready to consume any results until we start to iterate, so everything is deferred up until that point. And don't worry about the compiler accidentally choosing the wrong overload of Where or Select. It's smart enough to know that IQueryable<T> is more specific than IEnumerable<T> and to invoke the right set of extensions.

I'd also like to conclude with a low level deep dive into LINQ.

Implementing the UoW

Let's get started, shall we? First things first: we need a common interface to wrap the NHibernate and Raven sessions respectively.


public interface ISession : IDisposable {
    IQueryable<TEntity> Query<TEntity>() where TEntity : Entity;
    void Add<TEntity>(TEntity entity) where TEntity : Entity;
    void Update<TEntity>(TEntity entity) where TEntity : Entity;
    void Delete<TEntity>(TEntity entity) where TEntity : Entity;
    void SaveChanges();

    #region Future Load Methods. Can't use now because Raven forces Ids to be strings. If it were not for that, we could make this generic between NHibernate and RavenDb.

    // TEntity Load<TEntity, TId>(TId id) where TEntity : Entity<TId>;
    // IEnumerable<TEntity> Load<TEntity, TId>(IEnumerable<TId> ids) where TEntity : Entity<TId>;

    #endregion
}

As you can see, we make each session promise to give us an IQueryable<T>. We're also forcing our sessions to implement Unit of Work, hence the SaveChanges method. The rest of the functions are CRUD based. Lastly, we need to be able to shut the session down and free up resources, so we make all sessions implement IDisposable.

Now we'll make the concrete RavenSession and its wrapper class, UnitOfWork.

public static class UnitOfWork {
    public static void Start() {
        CurrentSession = new RavenSession();
    }

    public static ISession CurrentSession {
        get { return Get.Current<ISession>(); }
        private set { Set.Current(value); }
    }
}

internal class RavenSession : ISession {
    readonly DocumentStore _documentStore;
    readonly IDocumentSession _documentSession;

    internal RavenSession() {
        _documentStore = new DocumentStore { Url = "http://localhost:8080" };
        _documentSession = _documentStore.Initialize().OpenSession();
    }

    public IQueryable<TEntity> Query<TEntity>() where TEntity : Entity {
        /* May need to take indexing into consideration. Raven will generate temporary indexes for us,
         * but that may not be so efficient. I don't even know how long the temps stick around for. Raven will try to optimize for us as best it can. */
        return _documentSession.Query<TEntity>();
    }

    public void Add<TEntity>(TEntity entity) where TEntity : Entity {
        _documentSession.Store(entity);
    }

    public void Update<TEntity>(TEntity entity) where TEntity : Entity {
        _documentSession.Store(entity);
    }

    public void Delete<TEntity>(TEntity entity) where TEntity : Entity {
        _documentSession.Delete(entity);
    }

    public void SaveChanges() {
        _documentSession.SaveChanges();
    }

    public void Dispose() {
        _documentSession.Dispose();
        _documentStore.Dispose();
    }

    #region Future Load Methods. Can't use now because Raven forces Ids to be strings. If it were not for that, we could make this generic between NHibernate and RavenDb.

    public TEntity Load<TEntity, TId>(TId id) where TEntity : Entity<TId> {
        throw new NotImplementedException();
    }

    public IEnumerable<TEntity> Load<TEntity, TId>(IEnumerable<TId> ids) where TEntity : Entity<TId> {
        throw new NotImplementedException();
    }

    #endregion
}

I hard-coded the URL for now, but obviously I'd want it to be read from configuration somewhere.

Next I need a class to store the current session. I took an idea from a buddy of mine and made it strongly typed and reusable. It's just a wrapper around HttpContext that falls back to an in memory dictionary for unit testing purposes.

public static class Ensure {
    public static void That(bool condition) {
        if(!condition)
            throw new Exception("an expected condition was not met.");
    }

    public static void That<TType>(bool condition, string message) where TType : Exception {
        if(!condition)
            throw (TType)Activator.CreateInstance(typeof (TType), message);
    }
}

public static class Get {
    public static T Current<T>() where T : class {
        var context = HttpContext.Current;
        var key = typeof(T).FullName;

        var value = context == null ? (T)Set.InMemoryValuesForUnitTesting[key] : (T)context.Items[key];

        Ensure.That(value != null);

        return value;
    }
}

public static class Set {
    internal static Dictionary<string, object> InMemoryValuesForUnitTesting = new Dictionary<string, object>();

   public static void Current<T>(T value) {
       var context = HttpContext.Current;
       var key = typeof(T).FullName;

       if (context == null)
           InMemoryValuesForUnitTesting[key] = value;
       else
           context.Items[key] = value;
    }
}

Implementing Core Domain Objects

It's nice to have a base structure in place from which our domain objects can derive; more specifically, a base entity and repository class. The base repository is strongly typed and knows how to persist a specific type of entity. I created a Raven-specific repository because all ids in Raven are strings. Or so I thought: Raven actually supports POID generators just like NHibernate; strings are just the default implementation. It was done that way so the ids could be RESTful and human readable. Who wants to see a GUID on the query string? Not I...

public class Entity {}

public class Entity<TId> : Entity {
    public TId Id { get; set; }
}

public class BaseRepository<T, TId> : IRepository<T, TId> where T : Entity<TId> {
    public void Add(T entity) {
        UnitOfWork.CurrentSession.Add(entity);
    }

    public IQueryable<T> All() {
        return UnitOfWork.CurrentSession.Query<T>();
    }

    public virtual T Get(TId id) {
        return All().Where(e => e.Id.Equals(id)).SingleOrDefault();
    }

    public IEnumerable<T> Get(IEnumerable<TId> ids) {
        var idList = ids.ToList();

        return All().Where(e => idList.Contains(e.Id));
    }

    public void Delete(T entity) {
        UnitOfWork.CurrentSession.Delete(entity);
    }

    public void Update(T entity) {
        UnitOfWork.CurrentSession.Update(entity);
    }
}

public interface IRepository<T, in TId> : ICreate<T>, IRead<T, TId>, IDelete<T>, IUpdate<T> where  T : Entity<TId> {}

public interface IDelete<in T> {
    void Delete(T entity);
}

public interface IRead<out T, in TId> where T : Entity<TId> {
    IQueryable<T> All();
    T Get(TId id);
    IEnumerable<T> Get(IEnumerable<TId> ids);
}

public interface ICreate<in T> {
    void Add(T entity);
}

public interface IUpdate<in T> {
    void Update(T entity);
}

internal class Person : Entity<string> {
    public string Name { get; set; }
    public int Age { get; set; }
}

internal class PersonRepository : BaseRepository<Person, string>, IPersonRepository {
}

internal interface IPersonRepository : IRepository<Person, string> {
}

I implemented CRUD interfaces for my repositories so that a client can choose which operations it wants to interact with. If all a client needs to do is perform reads, then it can consume the IRead<T, TId> interface as opposed to a full-fledged IRepository<T, TId>. The concrete implementation of IRead<T, TId> would still be able to inherit from BaseRepository<T, TId>, but would not be consumed as such. Using dependency injection, you'd do something like...

Map<IRead<User, string>>.To<UserRepository>();

Then an MVC controller or some dependent object would take it via constructor injection...

public class AccountController {
    public AccountController(IRead<User, string> userRepository) {...}
}

This is the I in SOLID: the Interface Segregation Principle. Give the client only what it needs. Nothing more and nothing less.
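As a self-contained sketch of that idea (the interfaces below are trimmed-down, single-type-parameter stand-ins for the ones above, and the in-memory repository and AccountReport consumer are purely hypothetical), the concrete type supports writes, but the consumer depends on the read contract alone:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public interface IRead<out T> {
    IQueryable<T> All();
}

public interface ICreate<in T> {
    void Add(T entity);
}

public class User {
    public string Name { get; set; }
}

// The concrete repository implements both contracts...
public class UserRepository : IRead<User>, ICreate<User> {
    readonly List<User> _users = new List<User>();

    public IQueryable<User> All() { return _users.AsQueryable(); }
    public void Add(User entity) { _users.Add(entity); }
}

// ...but this consumer only sees the read side, so it can't accidentally write.
public class AccountReport {
    readonly IRead<User> _users;

    public AccountReport(IRead<User> users) { _users = users; }

    public int CountUsers() { return _users.All().Count(); }
}
```

A container mapping in the pseudo-syntax above would then hand AccountReport the very same instance that the writing clients use, without widening its contract.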

I didn't think I'd need something for updates like IUpdate<T>, since most UoW implementations perform change tracking. For instance, if you retrieve an entity from a Raven or NHibernate session and modify it, the changes will automatically be applied upon saving the session. But then I thought about what happens in ASP.NET MVC when we handle updates. Say the user goes to our update page and changes some text fields that represent an entity. ASP.NET MVC model binding will automatically construct an instance of our entity or view model and allow us to persist it. There is a TryUpdateModel method that MVC exposes on controllers, but what if you're mapping from a view model to an entity/DTO? There'd be no need to retrieve the entity from the domain layer since you already have a copy of it in memory. I could be wrong on this. Maybe it's common practice to always find your entity, apply the necessary changes, and persist it. I'm not sure how most do it, but having IUpdate<T> doesn't hurt, right?
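To make that detached-update scenario concrete, here's a minimal sketch. The Person and IUpdate<T> below are simplified copies of the post's types, and the in-memory store is purely hypothetical; a real repository would delegate to the current session. The point is that Update accepts an instance that was never loaded from the store, just like an entity built by MVC model binding from posted form fields:

```csharp
using System;
using System.Collections.Generic;

// Simplified stand-ins for the post's Person and IUpdate<T>.
public class Person {
    public string Id { get; set; }
    public string Name { get; set; }
    public int Age { get; set; }
}

public interface IUpdate<in T> {
    void Update(T entity);
}

// Hypothetical in-memory store for illustration only.
public class InMemoryPersonStore : IUpdate<Person> {
    readonly Dictionary<string, Person> _store = new Dictionary<string, Person>();

    public void Add(Person person) { _store[person.Id] = person; }

    public Person Get(string id) {
        Person person;
        return _store.TryGetValue(id, out person) ? person : null;
    }

    // Update overwrites by id, so a detached instance can be persisted
    // without a prior Get-modify-save round trip.
    public void Update(Person entity) { _store[entity.Id] = entity; }
}
```

An MVC POST action could then call Update with the model-bound instance directly, skipping the extra read.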

Implementing a Request Module for ASP.NET

Now I need a request module that knows how to initialize the session and spawn the UoW.

public class UnitOfWorkModule : IHttpModule {
    public void Init(HttpApplication application) {
        application.BeginRequest += ApplicationBeginRequest;
        application.EndRequest += ApplicationEndRequest;
    }
 
    static void ApplicationBeginRequest(object sender, EventArgs e) {
        UnitOfWork.Start();
    }

    static void ApplicationEndRequest(object sender, EventArgs e) {
        UnitOfWork.CurrentSession.SaveChanges();
    }

    public void Dispose() {
        UnitOfWork.CurrentSession.Dispose();
    }
}
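For completeness, the module still has to be registered in Web.config before ASP.NET will invoke it. A registration along these lines would do it for the IIS 7 integrated pipeline (MyApp.Web is a placeholder namespace/assembly name; classic mode would use the older system.web/httpModules section instead):

```xml
<configuration>
  <system.webServer>
    <modules>
      <!-- runs UnitOfWork.Start() on BeginRequest and SaveChanges() on EndRequest -->
      <add name="UnitOfWorkModule"
           type="MyApp.Web.UnitOfWorkModule, MyApp.Web" />
    </modules>
  </system.webServer>
</configuration>
```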

Let me add that I borrowed the idea of this particular implementation of UoW from a blog on nhforge.com. I tweaked it to my liking. It's not perfect, but I'm content with it and it works for me. I'd never go so far as to deem this the ultimate implementation of UoW.

The cool thing about our implementation is that we can switch from Raven to NHibernate with one line of code.

The bad thing is that we can't leverage any framework-specific goodies. For instance, the power behind document databases is that they perform lightning-fast reads, which is accomplished via indexes. In Raven, we specify our indexes upon executing our queries, but there's no way for my BaseRepository to do that unless it knows it's dealing with Raven in particular. I'd have to cheat and probably break my encapsulation by assuming certain things about the current ISession at hand, something like casting it to an IDocumentSession (Raven specific). Raven is smart enough to dynamically create indexes on the fly if it detects that we didn't specify one client side, and will eventually promote them to permanent indexes if we use them enough over a certain amount of time.

Frankly, you just need to be aware of what you're gaining and losing. You should analyze whether the benefits of a clean and reusable design are worth the extra work it takes to leverage all of your target framework's features. Sometimes you can get away with declarative XML configuration independent of code, or decorating your classes with a specific attribute and having the runtime pick up on it; but that's a big maybe and a long shot in most cases. Regardless, I thought this would be a cool idea and fun to implement.

Implementing Unit Tests For Raven

We're not done yet, my friends. It's time for some unit tests. NUnit, where you be?

[Test]
public void Person_Repository_Can_Save_Person() {
    IPersonRepository personRepository = new PersonRepository();

    var adubb = new Person { Age = 22, Name = "Antwan \"A-Dubb\" Wimberly \r\nIt's Okay To Not Hire A Senior Developer!! There Are Good Young Develpers Out There Too!!" };

    personRepository.Add(adubb);

    var id = adubb.Id;

    var adubbFromRepo = personRepository.Get(id);

    Assert.IsNotNull(adubbFromRepo);
    Assert.AreEqual(adubb.Id, adubbFromRepo.Id);
    Assert.AreEqual(adubb.Name, adubbFromRepo.Name);
    Assert.AreEqual(adubb.Age, adubbFromRepo.Age);

    personRepository.Delete(adubbFromRepo);

    UnitOfWork.CurrentSession.SaveChanges();
}

Whoops!! Looks like Raven doesn't allow us to call Equals in the body of our lambdas. Time to refactor. We need to override our base implementation of Get(TId id); it's already marked virtual, so we can simply override it.

public class RavenBaseRepository<T> : BaseRepository<T, string> where T : Entity<string> {
    public override T Get(string id) {
        return All().Where(e => e.Id == id).SingleOrDefault();
    }
}

internal class PersonRepository : RavenBaseRepository<Person>, IPersonRepository {
}

I'm already noticing that my query is taking a rather long time to execute. This probably means Raven isn't making optimized reads. I'd expect things to execute a lot faster.

Anyway, let's run our test again.

That's strange. We didn't find any results. Something must be going wrong with my Id. The problem is that the entity is still transient; that is to say, it hasn't been persisted yet. We need to submit our changes before performing our read. Let's refactor our test.

personRepository.Add(adubb);

UnitOfWork.CurrentSession.SaveChanges();

var id = adubb.Id;

We told Raven to persist the object before retrieving it. Let's try again.

Ok. I'm still getting an error. I probably shouldn't be messing around with my Id property. That's Raven's. Let's make one final change.

var adubb = new Person { Age = 22, Name = "Antwan \"A-Dubb\" Wimberly \r\nIt's Okay To Not Hire A Senior Developer!! There Are Good Young Develpers Out There Too!!" };

Aaaaand!! Nope. Still didn't work. You may have caught on by now, but if you haven't, the problem is inheritance. Raven apparently can't pick up on the fact that I'm inheriting my Id from my parent class Entity. So now I have to redefine it in the Person class like so.

public class Person : Entity<string> {
    public new string Id { get; set; }
    public string Name { get; set; }
    public int Age { get; set; }
}

Alright. Things are working now according to my unit test and Raven Studio. My Add test passes. Now it's time to test delete.


[Test]
public void Person_Repository_Can_Delete_Person() {
    IPersonRepository personRepository = new PersonRepository();

    var adubb = new Person { Age = 22, Name = "Antwan \"A-Dubb\" Wimberly \r\nIt's Okay To Not Hire A Senior Developer!! There Are Good Young Develpers Out There Too!!" };

    personRepository.Add(adubb);

    UnitOfWork.CurrentSession.SaveChanges();

    var id = adubb.Id;

    var adubbFromRepo = personRepository.Get(id);

    Assert.IsNotNull(adubbFromRepo);

    personRepository.Delete(adubbFromRepo);

    UnitOfWork.CurrentSession.SaveChanges();

    adubbFromRepo = personRepository.Get(id);

    Assert.IsNull(adubbFromRepo);
}

This one worked right out of the box. No magic needed. Get is next.

[Test]
public void Person_Repository_Can_Get_Person() {
    IPersonRepository personRepository = new PersonRepository();

    var adubb = new Person { Age = 22, Name = "Antwan \"A-Dubb\" Wimberly \r\nIt's Okay To Not Hire A Senior Developer!! There Are Good Young Develpers Out There Too!!" };

    personRepository.Add(adubb);

    UnitOfWork.CurrentSession.SaveChanges();

    var id = adubb.Id;

    var adubbFromRepo = personRepository.Get(id);

    Assert.IsNotNull(adubbFromRepo);

    personRepository.Delete(adubb);
}

And lastly, update. This one is pretty easy to test because of change tracking, so I have 2 implementations. The first works just fine.

[Test]
public void Person_Repository_Can_Update_Person_Without_Calling_Update() {
    IPersonRepository personRepository = new PersonRepository();

    var adubb = new Person { Age = 22, Name = "Antwan \"A-Dubb\" Wimberly \r\nIt's Okay To Not Hire A Senior Developer!! There Are Good Young Develpers Out There Too!!" };

    personRepository.Add(adubb);

    UnitOfWork.CurrentSession.SaveChanges();

    var id = adubb.Id;

    var adubbFromRepo = personRepository.Get(id);

    Assert.IsNotNull(adubbFromRepo);

    const string changedName = "Changed Name";

    adubbFromRepo.Name = changedName;

    UnitOfWork.CurrentSession.SaveChanges();

    adubbFromRepo = personRepository.Get(id);

    Assert.AreEqual(changedName, adubbFromRepo.Name);

    personRepository.Delete(adubb);
}

But we run into problems with the second.

Looks like Raven is forcing me to fetch my entity from the session before I can update it; it knows the entity is unattached. So I guess I could remove my implementation of Update. (NHibernate, on the other hand, will properly convert my Contains call to an IN clause, which is what I expect.)

[Test]
public void Person_Repository_Can_Update_Person_When_Calling_Update() {
    IPersonRepository personRepository = new PersonRepository();

    var adubb = new Person { Age = 22, Name = "Antwan \"A-Dubb\" Wimberly \r\nIt's Okay To Not Hire A Senior Developer!! There Are Good Young Develpers Out There Too!!" };

    personRepository.Add(adubb);

    UnitOfWork.CurrentSession.SaveChanges();

    const string changedName = "Changed Name";
    const int changedAge = 19;

    var id = adubb.Id;

    // this entity didn't come from the session and thus is not being tracked. we're pretending like we've just
    // populated this entity in our controller based on the view and are about to persist it.
    var adubbFromRepo = new Person {Id = id, Age = changedAge, Name = changedName };

    personRepository.Update(adubbFromRepo);

    UnitOfWork.CurrentSession.SaveChanges();

    adubbFromRepo = personRepository.Get(id);

    Assert.AreEqual(changedName, adubbFromRepo.Name);
    Assert.AreEqual(changedAge, adubbFromRepo.Age);

    personRepository.Delete(adubb);
}

My last test is for Get with an overload. Of course it failed, because Raven won't let me call Contains in my query. I'll figure it out later though. It's 1am right now and I'm beat. Plus my hot pocket is almost done cooking.

public static class ObjectExtensions {
    public static IEnumerable<TType> ToSingleEnumerable<TType>(this TType target) {
        yield return target;
    }
}

[Test]
public void Person_Repository_Can_Find_All_By_Id() {
    IPersonRepository personRepository = new PersonRepository();

    var adubb = new Person { Age = 22, Name = "Antwan \"A-Dubb\" Wimberly \r\nIt's Okay To Not Hire A Senior Developer!! There Are Good Young Develpers Out There Too!!" };

    personRepository.Add(adubb);

    UnitOfWork.CurrentSession.SaveChanges();

    var id = adubb.Id;

    var adubbFromRepo = personRepository.Get(id.ToSingleEnumerable()).FirstOrDefault();

    Assert.IsNotNull(adubbFromRepo);
    Assert.AreEqual(adubb.Id, adubbFromRepo.Id);
    Assert.AreEqual(adubb.Name, adubbFromRepo.Name);
    Assert.AreEqual(adubb.Age, adubbFromRepo.Age);

    personRepository.Delete(adubbFromRepo);
}

Implementing NHibernate Support

We're going to make a context switch to NHibernate now. We'll start with a concrete implementation of the ISession interface for NHibernate.

internal class NHibernateSession : ISession {
    static readonly ISessionFactory SessionFactory;

    static NHibernateSession() {
        // an expensive operation that should be called only once throughout the lifetime of the application. you'll typically see this in Application_Start of Global.asax.
        SessionFactory = Fluently
                            .Configure()
                            .Database(MsSqlConfiguration.MsSql2008
                                          .ConnectionString("Server=.; Database=NHPrac; Integrated Security=true;")
                                          .ShowSql())
                            .ExposeConfiguration(x => {
                                                     // for our CurrentSessionContext. this has to be configured or NHibernate won't be happy
                                                     x.SetProperty("current_session_context_class", "thread_static");
                                                         
                                                     // so the Product table can be exported to the database and be created before we make our inserts
                                                     var schemaExport = new SchemaExport(x);
                                                     schemaExport.Create(false, true);
                                                 })
                            .Mappings(x => x.FluentMappings.AddFromAssembly(Assembly.Load("Intell.Tests")))
                            .BuildSessionFactory();
    }

    internal NHibernateSession() {
        // sessions are really cheap to initialize/open
        var session = SessionFactory.OpenSession();

        session.BeginTransaction();

        CurrentSessionContext.Bind(session);
    }

    static NHibernate.ISession CurrentSession { get { return SessionFactory.GetCurrentSession(); } }

    public void Dispose() {
        // unbind the factory and dispose the current session that it returns
        CurrentSessionContext.Unbind(SessionFactory).Dispose();
    }

    public IQueryable<TEntity> Query<TEntity>() where TEntity : Entity {
        return CurrentSession.Query<TEntity>();
    }

    public void Add<TEntity>(TEntity entity) where TEntity : Entity {
        CurrentSession.Save(entity);
    }

    public void Update<TEntity>(TEntity entity) where TEntity : Entity {
        CurrentSession.Update(entity);
    }

    public void Delete<TEntity>(TEntity entity) where TEntity : Entity {
        CurrentSession.Delete(entity);
    }

    public void SaveChanges() {
        var transaction = CurrentSession.Transaction;

        if (transaction != null && transaction.IsActive)
            CurrentSession.Transaction.Commit();
    }
}

There's nothing special going on here. Just your standard Fluent NHibernate stuff and NHibernate basics in general. If my usage of the CurrentSessionContext class is unfamiliar to you then I'd suggest you get yourself a copy of the NHibernate 3.0 Cookbook. It's got the latest NHibernate best practices in it and was the basis of how I managed my NHibernate session.

Before I go any further, I want to note that NHibernate won't let me call Equals in my Get(TId id) method either, so again I have to make a framework-specific repository. Dangit!!

And yes, I'm aware that S#arp Architecture has a base NHibernate repository and a base Entity class (I think...), but I wanted to take a stab at creating my own. It probably looks identical to what's already out there, but oh well.

// we'd probably have to make a separate one for ids of type int to support identity columns
public class NHibernateBaseRepository<T> : BaseRepository<T, Guid> where T : Entity<Guid> {
    public override T Get(Guid id) {
        return All().Where(e => e.Id == id).SingleOrDefault();
    }
}

Next we'll build a ProductRepository, mapping file, and Product entity.

public class Product : Entity<Guid> {
    public virtual string Name { get; set; }
    public virtual int InventoryCount { get; set; }
}

public sealed class ProductMap : ClassMap<Product> {
    public ProductMap() {
        Id(x => x.Id)
            .GeneratedBy
            .GuidComb();
        Map(x => x.InventoryCount);
        Map(x => x.Name);
    }
}

internal class ProductRepository : NHibernateBaseRepository<Product>, IProductRepository {}

internal interface IProductRepository : IRepository<Product, Guid> {}

Everything on an entity has to be virtual for NHibernate's proxy support, so I had to make a slight modification to my base Entity<TId> class. It becomes...

public class Entity<TId> : Entity {
    public virtual TId Id { get; set; }
}

I only wrote 2 unit tests this time around, because I'm more than confident that the functionality works. Earlier I ran into a problem with the Get(IEnumerable<TId> ids) implementation on my BaseRepository class due to constraints that Raven enforces. I still have to come up with a clean workaround, but we won't worry about that for now. This time I wanted to be sure that overload would work, so I tested it in addition to Save. The tests both pass with flying colors.

The Start method of UnitOfWork becomes...

public static void Start() {
    // CurrentSession = new RavenSession();
    CurrentSession = new NHibernateSession();
}

[TestFixture]
public class ProductRepositoryTests {
    [TestFixtureSetUp]
    public void Init_Unit_Of_Work() {
        UnitOfWork.Start();
    }

    [TestFixtureTearDown]
    public void Uninit_Unit_Of_Work() {
        UnitOfWork.CurrentSession.Dispose();
    }

    [Test]
    public void Product_Repository_Can_Save() {
        var product = new Product { InventoryCount = 12, Name = "A-Dubb's World" };

        IProductRepository productRepository = new ProductRepository();

        productRepository.Add(product);

        var id = product.Id;

        Assert.IsFalse(id == Guid.Empty);
        
        product = productRepository.Get(id);

        Assert.AreEqual(12, product.InventoryCount);
        Assert.AreEqual("A-Dubb's World", product.Name);

        productRepository.Delete(product);

        UnitOfWork.CurrentSession.SaveChanges();
    }

    [Test]
    public void Product_Repository_Can_Get_All() {
        var product = new Product { InventoryCount = 12, Name = "A-Dubb's World" };

        IProductRepository productRepository = new ProductRepository();

        productRepository.Add(product);

        var id = product.Id;

        Assert.IsFalse(id == Guid.Empty);

        product = productRepository.Get(id.ToSingleEnumerable()).SingleOrDefault();

        Assert.IsNotNull(product);
        Assert.AreEqual(product.Name, "A-Dubb's World");
        Assert.AreEqual(product.InventoryCount, 12);

        productRepository.Delete(product);

        UnitOfWork.CurrentSession.SaveChanges();
    }
}

And yeah, I should have followed TDD and written my tests first, and I surely could have refactored my unit tests for reusability's sake; but I'm not gonna bother.

Conclusion

We started out with our common session interface for NHibernate and Raven to implement. Then we made our UoW and a concrete Raven-based session implementation. That was followed by our strongly typed classes for local storage and our core domain layer base classes (Entity and BaseRepository). We then subclassed our BaseRepository to make a Raven-specific implementation that stores entities with string-based ids, as Raven requires. Since we plan to be able to use Raven on the web, we made an HttpModule that can be registered in Web.config to initialize our session for each request made to our web application. And lastly, we wrapped things up with a few unit tests and a concrete ISession implementation for NHibernate, and discovered some things about Raven along the way. Specifically, it cannot pick up on an inherited Id property, and only a specific subset of methods is allowed within our query calls/lambda expressions.

Well, that's it folks. As I mentioned before, the switch between Raven and NHibernate looks trivial but is a potentially problematic one, though you'd at least have your core domain layer in place for each framework. For one, ids in Raven are string based, which is not the case in NHibernate, where GUIDs tend to dominate. So switching would probably mean refactoring your entities and switching the repository you inherit from, which could certainly be a problem. Secondly, it makes it tougher to use framework-specific features such as indexes in Raven when executing queries, which is one of its most important features. Were it not for the aforementioned constraints, you'd be able to switch from Raven to NHibernate with one line of code. That's why I built this post to begin with. I thought I could pull it off, but my unit tests told me otherwise. Either way, this was really fun to implement and I learned a lot. I hope this post proves helpful to a lot of people and can maybe serve as a catalyst for future implementations.

This is quite a bit of code, so I should be uploading it to GitHub any day now.

I'll follow up this post with a cool implementation of read-only mode in ASP.NET (Web Forms). Pretty cool, right?

Cheers!!