2008

December

    Thank you to everyone who attended our presentation last night at the Calgary Agile Methodologies User Group. We had a tonne of fun, and we hope that you took away some valuable information.


    We are pleased to have Adam Alinauskas, Joel Briggs, Luu Duong and Mo Khan from eCompliance Management Solutions speaking to us this month with their presentation "Shortening the Feedback Loop - Our Sprint in a Nutshell"

    Under the Agile software development umbrella there are many principles, processes, methodologies, and practices that fit this style of development. Many companies are relentlessly seeking and implementing ways to continually improve how they design, develop and deliver software. We believe, and have found in practice, that the Agile way of software development enables, supports and drives this continuous quest for efficiency and improvement. One of the primary goals of Agile software development is to satisfy customer needs through early and continuous delivery of valuable software. We find that most of the business value comes from creating an environment where a shorter feedback loop allows our team to be more proactive and adapt quickly as and when necessary. In this presentation we will share and walk you through a typical sprint/iteration at eCompliance.

    About eCompliance.ca: eCompliance Management Solutions Inc. is the leading provider of Occupational Health & Safety (OHS) Management solutions in Canada. Our vision is "To be the preferred technology partner of Canadian organizations in OHS by providing efficient and effective practical solutions to measure, manage and mitigate Health & Safety Risks in the quest for 'Zero Incidents'."

    I took some time today to pull down the source code for SvnBridge, and man, I was blown away. I started at Program.cs and made it to line 25, Bootstrapper.Start(). From there I went on to look at the hand rolled container, then the ProxyFactory.

    In order for me to fully grasp the System.Runtime.Remoting API for creating proxies I had to re-write the code from SvnBridge.... I just had to... it's just how I learn. It's like tracing over cartoons when you're a kid. I still do it!

    In case you're interested, the attached code is the sample I put together that is derived from the source code of SvnBridge. If you haven't checked out the source for the project, you really should.

    Pretty cool stuff.... Hopefully this helps out anyone else who's just as curious.

    My reduced sample source code...

     1   private static void Main(string[] args)
     2   {
     3     var marshal_mathers = new Person("marshall mathers");
     4     var some_celebrity = ProxyFactory.Create<IPerson>(marshal_mathers, new MyNameIsSlimShadyInterceptor());
     5 
     6     try
     7     {
     8       var name = some_celebrity.what_is_your_name();
     9       name.should_be_equal_to("slim shady");
    10     }
    11     catch (Exception e)
    12     {
    13       Console.Out.WriteLine(e);
    14     }
    15     Console.Out.WriteLine("will the real slim shady please stand up...");
    16     Console.In.ReadLine();
    17   }
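
    In case you don't want to dig through the SvnBridge source right away, here's a rough sketch of the kind of RealProxy based factory the sample above assumes. The IInterceptor shape, the InterceptingRealProxy class and the Slim Shady interceptor below are my guesses for illustration, not SvnBridge's actual implementation.

      using System.Runtime.Remoting.Messaging;
      using System.Runtime.Remoting.Proxies;

      // hypothetical interceptor contract; the real one may differ
      public interface IInterceptor
      {
          IMessage Intercept(IMethodCallMessage call, object target);
      }

      internal class InterceptingRealProxy<T> : RealProxy
      {
          private readonly T target;
          private readonly IInterceptor interceptor;

          public InterceptingRealProxy(T target, IInterceptor interceptor) : base(typeof(T))
          {
              this.target = target;
              this.interceptor = interceptor;
          }

          // every call made through the transparent proxy lands here as a message
          public override IMessage Invoke(IMessage message)
          {
              return interceptor.Intercept((IMethodCallMessage) message, target);
          }
      }

      public static class ProxyFactory
      {
          public static T Create<T>(T target, IInterceptor interceptor)
          {
              return (T) new InterceptingRealProxy<T>(target, interceptor).GetTransparentProxy();
          }
      }

      // an illustrative interceptor matching the sample above: it hijacks the return value;
      // a forwarding interceptor could instead call call.MethodBase.Invoke(target, call.Args)
      public class MyNameIsSlimShadyInterceptor : IInterceptor
      {
          public IMessage Intercept(IMethodCallMessage call, object target)
          {
              return new ReturnMessage("slim shady", null, 0, call.LogicalCallContext, call);
          }
      }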
    

November

    In my last post I briefly mentioned how we were wiring some components into our container. The syntax looked like the following:

    1   container.AddProxyOf(
    2       new ReportPresenterTaskConfiguration(), 
    3       new ReportPresenterTask(
    4           Lazy.Load<IReportDocumentBuilder>(),
    5           Lazy.Load<IApplicationSettings>())
    6           );
    

    We're using Castle Windsor under the hood, but we have an abstraction over it that allows us to configure it as we like, and even to switch the underlying implementation, which we did when we moved from our hand rolled container to Castle Windsor. The implementation of the above method looks as follows:

    1   public void AddProxyOf<Interface, Target>(IProxyConfiguration<Interface> configuration, Target instance) where Target : Interface
    2   {
    3       var builder = new ProxyBuilder<Interface>();
    4       configuration.Configure(builder);
    5       AddInstanceOf(builder.CreateProxyFor(instance));
    6   }
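
    AddInstanceOf isn't shown here; roughly speaking, it just hands the instance to the underlying Windsor container. A sketch of what that registration might look like, assuming Windsor's fluent registration API (the shape of our actual abstraction is simplified here):

      using Castle.MicroKernel.Registration;
      using Castle.Windsor;

      public class WindsorDependencyRegistry
      {
          private readonly IWindsorContainer container;

          public WindsorDependencyRegistry(IWindsorContainer container)
          {
              this.container = container;
          }

          // register a pre-built instance; Windsor hands back the same instance every time
          public void AddInstanceOf<T>(T instance) where T : class
          {
              container.Register(Component.For<T>().Instance(instance));
          }

          public T find_an_implementation_of<T>()
          {
              return container.Resolve<T>();
          }
      }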
    

    Wikipedia defines the Proxy design pattern as:

    A proxy, in its most general form, is a class functioning as an interface to another thing.

    To understand the ProxyBuilder implementation you can check out JP's strongly typed selective proxies. The "AddProxyOf" method creates an instance of a ProxyBuilder, passes it to the configuration so that the configuration can set up the builder before the proxy is built, and then registers the proxy as a singleton in the container.

    1     public interface IConfiguration<T>
    2     {
    3         void Configure(T item_to_configure);
    4     }
    5 
    6     public interface IProxyConfiguration<T> : IConfiguration<IProxyBuilder<T>>
    7     {
    8     }
    

    In this case the proxy configuration looks like:

    1     public class ReportPresenterTaskConfiguration : IProxyConfiguration<IReportPresenterTask>
    2     {
    3         public void Configure(IProxyBuilder<IReportPresenterTask> builder)
    4         {
    5             var constraint = builder.AddInterceptor<DisplayProgressBarInterceptor>();
    6             constraint.InterceptOn.RetrieveAuditReport();
    7         }
    8     }
    

    This guy adds a progress bar interceptor that displays a progress bar while the report is being generated via the "RetrieveAuditReport" method on the IReportPresenterTask.
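
    For completeness, a Castle interceptor along those lines might look something like the sketch below. The IProgressBar dependency and its Show/Hide calls are placeholders for whatever the view actually exposes, not the real implementation.

      using Castle.Core.Interceptor; // Castle.DynamicProxy namespace in newer versions

      // hypothetical view component that the interceptor drives
      public interface IProgressBar
      {
          void Show();
          void Hide();
      }

      public class DisplayProgressBarInterceptor : IInterceptor
      {
          private readonly IProgressBar progress_bar;

          public DisplayProgressBarInterceptor(IProgressBar progress_bar)
          {
              this.progress_bar = progress_bar;
          }

          public void Intercept(IInvocation invocation)
          {
              progress_bar.Show();
              try
              {
                  // the intercepted call, e.g. RetrieveAuditReport, runs here
                  invocation.Proceed();
              }
              finally
              {
                  progress_bar.Hide();
              }
          }
      }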

    So I recently started twittering... or tweeting. I'm not sure what the correct lingo is, so hook me up if you know. It all started a while back when James announced that he was the newest Twit. He mentioned a WPF client called Witty, and I wanted to see what it was all about. So I set up a Twitter account to play with the app. After I had my fun, I never deleted my account. Or at least I never looked into how to delete my account.

    About a week ago I received an email that a couple of people were following me on Twitter. Man, that's flattering to read:

    Hi, mo_khan.
    Kyle Baley (kbaley) is now following your updates on Twitter.
    Check out Kyle Baley's profile here:
    http://twitter.com/kbaley
    You may follow Kyle Baley as well by clicking on the "follow" button.
    Best,
    Twitter

    Whaa... a celeb is interested in what I'm up to? *blush* So I jumped in, and so far it's been pretty fun. I found another service called "Jott". Jott's pretty cool, because I can call in to a number and I get an automated message that says:

    "Who do you want to jott?"

    I say ... "Twitter". Then I record my voice message.

    Jott then takes that message, transcribes it into text, pushes it up to my Twitter page, and drops in a tinyurl to the actual audio. Sweet... that saves me a few pennies' worth of text messages. But there's more...

    I'm one of the poor saps who pays too much for mobile service up here in Canada, eh! I subscribe to Rogers Wireless and a plan called "My5", which lets me make unlimited phone calls to the 5 numbers on my My5 list. So I put Jott on My5, and now I can shoot off messages to everyone for....

    Free Ninety Nine.... well almost! If you're young, fabulous and ghetto broke (it aint funny) like myself then you ought to give Jott a try!

    Patterns of Enterprise Application Architecture defines Lazy Load as:

    An object that doesn't contain all of the data you need but knows how to get it.

    A while back I was trying to figure out how to lazy load objects from a container, so that I didn't need to worry about an object's dependencies being wired up in the correct order. The syntax I was looking for was something like the following....

    1   container.AddProxyOf(
    2       new ReportPresenterTaskConfiguration(),
    3       new ReportPresenterTask(
    4           Lazy.Load<IReportDocumentBuilder>(),
    5           Lazy.Load<IApplicationSettings>())
    6           );
    

    Lazy.Load will return a proxy in place of an actual implementation. This is just a temporary placeholder that forwards calls to the actual implementation. It won't load an instance of the actual type until the first time a call is made to it.

     1   public class when_calling_a_method_with_no_arguments_on_a_lazy_loaded_proxy : lazy_loaded_object_context
     2   {
     3       [Observation]
     4       public void should_forward_the_original_call_to_the_target()
     5       {
     6           target.should_have_been_asked_to(t => t.OneMethod());
     7       }
     8   
     9       protected override void establish_context()
    10       {
    11           target = dependency<ITargetObject>();
    12   
    13           test_container
    14               .setup_result_for(t => t.find_an_implementation_of<ITargetObject>())
    15               .will_return(target)
    16               .Repeat.Once();
    17       }
    18   
    19       protected override void because_of()
    20       {
    21           var result = Lazy.Load<ITargetObject>();
    22           result.OneMethod();
    23       }
    24   
    25       private ITargetObject target;
    26   }
    

    So when the method "OneMethod" is called on the proxy, it should forward the call to the target, which is loaded from the container. The implementation depends on Castle DynamicProxy, and looks like the following...

     1   public static class Lazy
     2   {
     3       public static T Load<T>() where T : class
     4       {
     5           return create_proxy_for<T>(create_interceptor_for<T>());
     6       }
     7   
     8       private static LazyLoadedInterceptor<T> create_interceptor_for<T>() where T : class
     9       {
    10           Func<T> get_the_implementation = resolve.dependency_for<T>;
    11           return new LazyLoadedInterceptor<T>(get_the_implementation.memorize());
    12       }
    13   
    14       private static T create_proxy_for<T>(IInterceptor interceptor)
    15       {
    16           return new ProxyGenerator().CreateInterfaceProxyWithoutTarget<T>(interceptor);
    17       }
    18   }
    19   
    20   internal class LazyLoadedInterceptor<T> : IInterceptor
    21   {
    22       private readonly Func<T> get_the_implementation;
    23 
    24       public LazyLoadedInterceptor(Func<T> get_the_implementation)
    25       {
    26           this.get_the_implementation = get_the_implementation;
    27       }
    28 
    29       public void Intercept(IInvocation invocation)
    30       {
    31           var method = invocation.GetConcreteMethodInvocationTarget();
    32           invocation.ReturnValue = method.Invoke(get_the_implementation(), invocation.Arguments);
    33       }
    34   }
    35   
    36   public static class func_extensions
    37   {
    38       public static Func<T> memorize<T>(this Func<T> item) where T : class
    39       {
    40           T the_implementation = null;
    41           return () => {
    42                      if (null == the_implementation) {
    43                          the_implementation = item();
    44                      }
    45                      return the_implementation;
    46                  };
    47       }
    48   }
    

    "resolve" is a static gateway to the underlying IDependencyRegistry. This idea was totally inspired by JP's strongly typed selective proxies. If you haven't already, you should definitely check it out.

    Download the source.
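
    For reference, the static gateway itself is tiny. A minimal sketch, assuming IDependencyRegistry exposes the find_an_implementation_of<T>() method used in the spec above (initialize_with is my own addition for wiring it up at startup):

      public interface IDependencyRegistry
      {
          T find_an_implementation_of<T>();
      }

      public static class resolve
      {
          private static IDependencyRegistry registry;

          // called once at startup to hand the gateway its registry (an assumption)
          public static void initialize_with(IDependencyRegistry the_registry)
          {
              registry = the_registry;
          }

          public static T dependency_for<T>()
          {
              return registry.find_an_implementation_of<T>();
          }
      }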

    I love being the guy to hit 1000 tests first... I guess I should check in first. doh!

    one_thousand_tests

    In Patterns of Enterprise Application Architecture, the Unit of Work design pattern is defined as:

    Maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems.

    NHibernate seems to have a great implementation of the unit of work, but understanding when to start and commit the unit of work without repeating yourself can be a little tricky. One thing we've been doing is starting a unit of work using an interceptor.

    1   [Interceptor(typeof (IUnitOfWorkInterceptor))]
    2   public class AccountTasks : IAccountTasks
    3   {
    4       public bool are_valid(ICredentials credentials)
    5       {
    6           ...
    7       }
    8   }
    

    AccountTasks is a service layer piece that is decorated with an interceptor, which will begin and commit a unit of work.

     1   public interface IUnitOfWorkInterceptor : IInterceptor
     2   {
     3   }
     4 
     5   public class UnitOfWorkInterceptor : IUnitOfWorkInterceptor
     6   {
     7       private readonly IUnitOfWorkFactory factory;
     8 
     9       public UnitOfWorkInterceptor(IUnitOfWorkFactory factory)
    10       {
    11           this.factory = factory;
    12       }
    13 
    14       public void Intercept(IInvocation invocation)
    15       {
    16           using (var unit_of_work = factory.create()) {
    17               invocation.Proceed();
    18               unit_of_work.commit();
    19           }
    20       }
    21   }
    

    The interceptor starts a new unit of work before proceeding with the invocation. If no exceptions are raised, the unit of work is committed. If a unit of work has already been started, the unit of work factory returns an empty unit of work. This ensures that if a service layer method calls into another service layer method, it doesn't start another unit of work.

     1   public interface IUnitOfWorkFactory : IFactory<IUnitOfWork>
     2   {
     3   }
     4 
     5   public class UnitOfWorkFactory : IUnitOfWorkFactory
     6   {
     7       private readonly IApplicationContext context;
     8       private readonly IDatabaseSessionFactory factory;
     9       private readonly TypedKey<ISession> key;
    10 
    11       public UnitOfWorkFactory(IApplicationContext context, IDatabaseSessionFactory factory)
    12       {
    13           this.context = context;
    14           this.factory = factory;
    15           key = new TypedKey<ISession>();
    16       }
    17 
    18       public IUnitOfWork create()
    19       {
    20           if (unit_of_work_is_already_started()) {
    21               return new EmptyUnitOfWork();
    22           }
    23 
    24           return create_a_unit_of_work().start();
    25       }
    26 
    27       private bool unit_of_work_is_already_started()
    28       {
    29           return context.contains(key);
    30       }
    31 
    32       private IUnitOfWork create_a_unit_of_work()
    33       {
    34           var session = factory.create();
    35           context.add(key, session);
    36           return new UnitOfWork(session, context);
    37       }
    38   }
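
    The unit of work itself and its empty counterpart aren't shown above. Roughly, the real one wraps an NHibernate session and transaction, and the empty one is a null object so that nested calls are no-ops. A sketch along those lines (the bodies are an approximation, not the exact production code):

      using System;
      using NHibernate;

      public interface IUnitOfWork : IDisposable
      {
          IUnitOfWork start();
          void commit();
      }

      // null object returned when a unit of work is already in flight
      public class EmptyUnitOfWork : IUnitOfWork
      {
          public IUnitOfWork start()
          {
              return this;
          }

          public void commit() {}

          public void Dispose() {}
      }

      public class UnitOfWork : IUnitOfWork
      {
          private readonly ISession session;
          private readonly IApplicationContext context;
          private ITransaction transaction;

          public UnitOfWork(ISession session, IApplicationContext context)
          {
              this.session = session;
              this.context = context;
          }

          public IUnitOfWork start()
          {
              transaction = session.BeginTransaction();
              return this;
          }

          public void commit()
          {
              transaction.Commit();
          }

          public void Dispose()
          {
              // take the session out of the application context and release it
              context.remove(new TypedKey<ISession>());
              session.Dispose();
          }
      }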
    

    The implementation of the repository pulls the active session from the application context.

     1   public class DatabaseRepository<T> : IRepository<T>
     2   {
     3       private readonly IApplicationContext context;
     4       private readonly IKey<ISession> session_key;
     5 
     6       public DatabaseRepository(IApplicationContext context)
     7       {
     8           this.context = context;
     9           session_key = new TypedKey<ISession>();
    10       }
    11 
    12       public IQueryable<T> all()
    13       {
    14           return the_current_session().Linq<T>();
    15       }
    16 
    17       public void save(T item)
    18       {
    19           the_current_session().SaveOrUpdate(item);
    20       }
    21 
    22       public void delete(T item)
    23       {
    24           the_current_session().Delete(item);
    25       }
    26 
    27       private ISession the_current_session()
    28       {
    29           var current_session = context.get_value_for(session_key);
    30           if (null == current_session || !current_session.IsOpen) {
    31               throw new NHibernateSessionNotOpenException();
    32           }
    33           return current_session;
    34       }
    35   }
    

    For more information on Interceptors check out the Castle stack...

    Recently, we've been mocking out IQueryables as return values, which has led to setups that look like the following...

      programs
        .setup_result_for(x => x.All())
        .Return(new List<IProgram> {active_program,inactive_program}.AsQueryable());
    

    I just switched over to the following syntax... by creating an extension method.

      programs
        .setup_result_for(x => x.All())
        .will_return(active_program, inactive_program);
    

    The following are the extension methods that make this work.

      public static IMethodOptions<IEnumerable<R>> will_return<R>(this IMethodOptions<IEnumerable<R>> options, params R[] items)
      {
          return options.Return(items);
      }
    
      public static IMethodOptions<IQueryable<R>> will_return<R>(this IMethodOptions<IQueryable<R>> options, params R[] items)
      {
          return options.Return(new Query<R>(items));
      }
    

    and...

      internal class Query<T> : IQueryable<T>
      {
          private readonly IQueryable<T> query;

          public Query(params T[] items)
          {
              query = items.AsQueryable();
          }

          public Expression Expression
          {
              get { return query.Expression; }
          }

          public Type ElementType
          {
              get { return query.ElementType; }
          }

          public IQueryProvider Provider
          {
              get { return query.Provider; }
          }

          public IEnumerator<T> GetEnumerator()
          {
              return query.GetEnumerator();
          }

          IEnumerator IEnumerable.GetEnumerator()
          {
              return GetEnumerator();
          }
      }
    

    Hope this helps!

    joshka left a comment on my previous post that reads...

    "... Can you talk about the Application Context and IKey stuff a little in a future post?"

    The IKey interface defines a contract for different keys that are put into a dictionary. It depends on the implementation of the key to know how to parse its value out of the dictionary.

    1   public interface IKey<T>
    2   {
    3       bool is_found_in(IDictionary items);
    4       T parse_from(IDictionary items);
    5       void remove_from(IDictionary items);
    6       void add_value_to(IDictionary items, T value);
    7   }
    8   
    

    An implementation of the key that we used for shoving an ISession into the HttpContext.Items collection is the TypedKey. It creates a unique key based on type T.

     1   internal class TypedKey<T> : IKey<T>
     2   {
     3       public bool is_found_in(IDictionary items)
     4       {
     5           return items.Contains(create_unique_key());
     6       }
     7   
     8       public T parse_from(IDictionary items)
     9       {
    10           return (T) items[create_unique_key()];
    11       }
    12   
    13       public void remove_from(IDictionary items)
    14       {
    15           if (is_found_in(items))
    16           {
    17               items.Remove(create_unique_key());
    18           }
    19       }
    20   
    21       public void add_value_to(IDictionary items, T value)
    22       {
    23           items[create_unique_key()] = value;
    24       }
    25   
    26       public bool Equals(TypedKey<T> obj)
    27       {
    28           return !ReferenceEquals(null, obj);
    29       }
    30   
    31       public override bool Equals(object obj)
    32       {
    33           if (ReferenceEquals(null, obj)) return false;
    34           if (ReferenceEquals(this, obj)) return true;
    35           if (obj.GetType() != typeof (TypedKey<T>)) return false;
    36           return Equals((TypedKey<T>) obj);
    37       }
    38   
    39       public override int GetHashCode()
    40       {
    41           return GetType().GetHashCode();
    42       }
    43   
    44       private string create_unique_key()
    45       {
    46           return GetType().FullName;
    47       }
    48     }
    

    It knows how to add a value into the dictionary using itself as the key, and how to parse values back out of the dictionary. The application context can be an adapter around the HttpContext, or a hand rolled context for WinForms. An implementation for the web might look like....

     1   public class WebContext : IApplicationContext
     2   {
     3       public bool contains<T>(IKey<T> key)
     4       {
     5           return key.is_found_in(HttpContext.Current.Items);
     6       }
     7   
     8       public void add<T>(IKey<T> key, T value)
     9       {
    10           key.add_value_to(HttpContext.Current.Items, value);
    11       }
    12   
    13       public T get_value_for<T>(IKey<T> key)
    14       {
    15           return key.parse_from(HttpContext.Current.Items);
    16       }
    17   
    18       public void remove(IKey<ISession> key)
    19       {
    20           key.remove_from(HttpContext.Current.Items);
    21       }
    22   }
    

    When running your tests, you can swap out the web implementation for one specific to the test environment.
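
    A test double only needs a plain dictionary behind the same interface. A minimal sketch, assuming IApplicationContext has exactly the four members used by WebContext above:

      using System.Collections;
      using NHibernate;

      public class InMemoryContext : IApplicationContext
      {
          // a plain in-memory bag standing in for HttpContext.Current.Items
          private readonly IDictionary items = new Hashtable();

          public bool contains<T>(IKey<T> key)
          {
              return key.is_found_in(items);
          }

          public void add<T>(IKey<T> key, T value)
          {
              key.add_value_to(items, value);
          }

          public T get_value_for<T>(IKey<T> key)
          {
              return key.parse_from(items);
          }

          public void remove(IKey<ISession> key)
          {
              key.remove_from(items);
          }
      }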

October

    Validation is a tough subject, and one that I'm constantly trying to find better ways of handling. Some suggest that all validation should occur in the domain, and some prefer to check if the object is valid before proceeding. I lean towards the idea of not allowing your objects to enter an invalid state. So far the easiest way I have found to do that is to raise meaningful exceptions in the domain.

    However, when there are several reasons why an object can be considered "invalid", and those reasons need to be reflected in the UI, I haven't been able to figure out a clean way to do this in the domain. Suggestions are welcome.

    Here's an approach that we've taken to some of our validation, when user input needs to be checked so that we can provide meaningful error messages to the end user.

    First we have 2 core validation interfaces:

    1     public interface IValidationResult
    2     {
    3         bool IsValid { get; }
    4         IEnumerable<string> BrokenRules { get; }
    5     }
    

    and

    1     public interface IValidation<T>
    2     {
    3         IValidationResult Validate(T item);
    4     }
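
    As an illustration, a single rule might look like the following. The UserNameIsRequired class, the IUser.UserName property it checks, and the message are made up for this example; ValidationResult is the simple (is valid, broken rules) implementation used by the visitor further down.

      using System.Collections.Generic;

      // a hypothetical rule; IUser.UserName is assumed for the sake of the example
      public class UserNameIsRequired : IValidation<IUser>
      {
          public IValidationResult Validate(IUser item)
          {
              var broken_rules = new List<string>();
              if (string.IsNullOrEmpty(item.UserName))
              {
                  broken_rules.Add("A user name is required.");
              }
              return new ValidationResult(broken_rules.Count == 0, broken_rules);
          }
      }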
    

    The IValidation is in essence a form of a Specification. Now to collect the errors we use a [visitor](http://mokhan.ca/oo/designpatterns/2010/12/14/the-visitor-design-pattern.html). The following are the core [visitor](http://mokhan.ca/oo/designpatterns/2010/12/14/the-visitor-design-pattern.html) interfaces.

     1   public interface IVisitor<T>
     2   {
     3       void Visit(T item_to_visit);
     4   }
     5 
     6   public interface IValueReturningVisitor<TypeToVisit, TypeToReturn> : IVisitor<TypeToVisit>
     7   {
     8       void Reset();
     9       TypeToReturn Result { get; }
    10   }
    

    We have an implementation of an IValueReturningVisitor that collects errors from visiting IValidations, and then returns a validation result.

     1   public class ErrorCollectingVisitor<T> : IValueReturningVisitor<IValidation<T>, IValidationResult>
     2   {
     3       readonly T item_to_validate;
     4       readonly List<string> results;
     5 
     6       public ErrorCollectingVisitor(T item_to_validate)
     7       {
     8           this.item_to_validate = item_to_validate;
     9           results = new List<string>();
    10       }
    11 
    12       public void Visit(IValidation<T> item_to_visit)
    13       {
    14           var validation_result = item_to_visit.Validate(item_to_validate);
    15           if (!validation_result.IsValid)
    16           {
    17               results.AddRange(validation_result.BrokenRules);
    18           }
    19       }
    20 
    21       public void Reset()
    22       {
    23           results.Clear();
    24       }
    25 
    26       public IValidationResult Result
    27       {
    28           get { return new ValidationResult(results.Count == 0, results); }
    29       }
    30   }
    

    And a handy extension method for returning the value from visiting a set of validations.

    1   public static Result ReturnValueFromVisitingAllItemsWith<TypeToVisit, Result>(this IEnumerable<TypeToVisit> items_to_visit, IValueReturningVisitor<TypeToVisit, Result> visitor)
    2   {
    3       visitor.Reset();
    4       items_to_visit.Each(x => visitor.Visit(x));
    5       return visitor.Result;
    6   }
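
    The Each extension used above isn't shown in the post; it's presumably just a foreach wrapper, something like:

      using System;
      using System.Collections.Generic;

      public static class enumerable_extensions
      {
          // applies the action to every item in the sequence
          public static void Each<T>(this IEnumerable<T> items, Action<T> action)
          {
              foreach (var item in items)
              {
                  action(item);
              }
          }
      }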
    

    An example usage of the visitor can be seen below:

    1   public IValidationResult Validate(IUser user)
    2   {
    3       return userValidations
    4           .Select(x => x as IValidation<IUser>)
    5           .ReturnValueFromVisitingAllItemsWith(new ErrorCollectingVisitor<IUser>(user));
    6   }
    

    Today I received an email about the JetBrains Seeder Program, and thought that I would try to sign up for an account to find out more.

    Can you help me understand what JetBrains is trying to tell me?

    jetbrains_subliminal_messages 

    horning

September

    My wife is running in this year's CIBC Run for the Cure on Sunday, October 5th, 2008. This is an annual fundraiser for breast cancer research. It's a cause that is near and dear to us, since she has lost loved ones in her family to breast cancer.

    The donations are tax deductible here in Canada, and receipts are sent out electronically. There is no minimum donation, so anything and everything counts. Please don't be shy!

    If you would like to support my wife and daughter before their 5 KM walk please make a donation here.

    allison_and_adia_all_pretty_in_pink

    Allison Khan's Message:

    "I am excited to be joining team Glamma's and friends this year to walk for breast cancer. My Mom and friends will be walking their 4th year and Adia and I are thrilled to be the newest members. As breast cancer has touched my life on both maternal and paternal sides of my family, it's time to be proactive in finding a cure for this disease. I deeply appreciate your support; by donation, cheers on the sidelines or joining our team!"

     donate_to_allison_and_adia

August

    So tonight I got to help demo what a fishbowl was at the ALT.NET Canada (thanks Doc!) conference, and the topic of discussion was the fundamentals of software development. During the session I started to realize that what I considered to be fundamental seemed to be far from what others considered fundamental. After a few discussions I started to think that the fundamentals differ along generational divides.

    I'm speaking for myself, but hopefully also for my generation: when I hear "the fundamentals of .NET development" I think of object oriented programming, design patterns, knowledge of the syntax of a language, and at least a base understanding of what the CLR is and what it provides for us.

    Some other takes on the fundamentals focused on understanding how the underlying operating system works, algorithms, and data structures.

    This got me a little depressed, because I don't have an intimate knowledge of how the underlying operating system works, or how to perform a deletion from a red-black tree, or how to implement a half decent hashing algorithm. Ask me to build an AVL tree and I might puke, or at least ask "why? it's 2008." I have a base understanding, but I'm not sure if that counts as the required fundamental knowledge to build a decent app.

    When I was writing C, I cared a lot about writing well optimized code. I cared about memory allocation/de-allocation. I cared about protecting from buffer overruns. I cared about so many things that I just don't think about as much now as a .NET developer. All the things I don't have to be concerned about allow me to focus on other things that I want to care about.

    It seems like we developers are proud of being experts at something, but when that something becomes less and less relevant to building applications today, we tag it as "fundamental". Perhaps in 10 years or so the next generation will wonder why they would need to understand object oriented programming to build valuable applications. After all, it will all be written in DSLs, right?

    A couple more pennies for ya!

    First off I want to make it clear that I'm not a guru on the topic, but I do find it interesting. The topic, of course, is context based specifications. I've seen an emergence of interest in writing context based specifications on the blogosphere lately. However, everyone seems to be advertising it slightly differently...

    One of the things our team aims for is to keep technical language out of our specifications. They should be human readable sentences, not "Yoda" speak. This is crucial if we want non-technical people to actually read our specs to make sure the code is in line with what the business is attempting to do. The goal, in our humble opinion, is to work closer towards a ubiquitous language. The benefit is that the documentation is updated along with the code, because it is the code.

    Something that reads..

    when_the_account_controller_is_given_valid_arguments_on_the_register_account_action

    doesn't read as easily as:

    when_registering_a_new_account

    Another subtle change that our team made was to put the specs above the code that establishes the context. In some cases it just seems to read better from top to bottom.

    when_creating_a_new_account_for_a_user_with_a_valid_submission

    - it_should_inform_the_user_that_the_account_was_created

    - it_should_save_the_new_account_information

    under_these_conditions

    because_of

    "It" being the system under test.

    We don't always get it right, but by trying to drop the technical language we force ourselves to step away and think about the problem that we are ultimately trying to address.

    Again... this is just our 2 cents.

    So this week we got to start working on a brand spanking new MVC project. So far we're leveraging Castle Windsor, NHibernate, Fluent NHibernate, and kind of running LINQ to NHibernate. It's amazing how quickly you can get a project up and running. (BTW, Fluent NHibernate rocks!) When you're building off the trunk of these projects, it's almost like the contributors to all these great projects are extended members of the team. Thank you all!

    Moving on... One of the things that is cool, but also slightly annoying, is how the MVC framework parses items out of the HTTP payload to populate the input arguments on controller actions.

    It's great how it just works, but it's a little annoying when it's under test: if you have to add fields to, or remove fields from, a form, then you have to go update the signature of the action, and then go update the test... yada yada. The changes just ripple down...

    So one thing we tried out this week was to create a payload parser. What this guy does is take a DTO, parse the values for each of its properties out of the current request's payload, and fill it in. This makes it easy to package up the form parameters in a nicely packaged DTO and fire it off down to a service layer to do some work.

    So instead of declaring an action method on a controller that looks like this, where the signature would have to change based on what fields are submitted on a form:

      ViewResult register_new_account(string user_name, string first_name, string last_name)
    

    We can write this...

      public ViewResult register_new_account() 
      {
          var accountSubmissionDTO = parser.MapFromPayloadTo<AccountSubmissionDTO>();
          var validationResult = task.Validate(accountSubmissionDTO);
          if (validationResult.IsValid) {
              task.Submit(accountSubmissionDTO);
              return View("Success", accountSubmissionDTO);
          }
    
          return View("Index", validationResult.BrokenRules);
      }
      
    

    This allows us to better adhere to the OCP. If we need to include additional fields, we can add them to the form as long as the control name is the same as the name of the property on the DTO that it will be bound to. The implementation of the payload parser is quite primitive for now, but at the moment it's all that we need.

    First up the specs... simple enough, for now!

      public class when_parsing_the_values_from_the_current_request_to_populate_a_dto : context_spec<IPayloadParser>
      {
          [Test]
          public void should_return_a_fully_populated_dto()
          {
              result.Name.should_be_equal_to("adam");
              result.Age.should_be_equal_to(15);
              result.Birthdate.should_be_equal_to(new DateTime(1982, 11, 25));
              result.Id.should_be_equal_to(1);
          }
    
          protected override IPayloadParser UnderTheseConditions()
          {
              var current_request = Dependency<IWebRequest>();
              var payload = new NameValueCollection();
    
              payload["Name"] = "adam";
              payload["Age"] = "15";
              payload["Birthdate"] = new DateTime(1982, 11, 25).ToString();
              payload["Id"] = "1";
    
              current_request.setup_result_for(r => r.Payload).Return(payload);
    
              return new PayloadParser(current_request);
          }
    
          protected override void BecauseOf()
          {
              result = sut.MapFromPayloadTo<SomeDTO>();
          }
    
          private SomeDTO result;
      }
    
      public class when_parsing_values_from_the_request_that_is_missing_values_for_properties_on_the_dto : context_spec<IPayloadParser>
      {
          private AccountSubmissionDTO result;
    
          [Test]
          public void it_should_apply_the_default_values_for_the_missing_properties()
          {
              result.LastName.should_be_null();
              result.EmailAddress.should_be_null();
          }
    
          protected override IPayloadParser UnderTheseConditions()
          {
              var current_request = Dependency<IWebRequest>();
    
              var payload = new NameValueCollection();
    
              payload["FirstName"] = "Joel";
              current_request.setup_result_for(x => x.Payload).Return(payload);
    
              return new PayloadParser(current_request);
          }
    
          protected override void BecauseOf()
          {
              result = sut.MapFromPayloadTo<AccountSubmissionDTO>();
          }
      }
    
      public class SomeDTO
      {
          public long Id { get; set; }
          public string Name { get; set; }
          public int Age { get; set; }
          public DateTime Birthdate { get; set; }
      }
      
    

    The current implementation:

      public interface IPayloadParser 
      {
          TypeToProduce MapFromPayloadTo<TypeToProduce>() where TypeToProduce : new();
      }
    
      public class PayloadParser : IPayloadParser 
      {
          private readonly IWebRequest current_request;
    
          public PayloadParser(IWebRequest current_request) 
          {
              this.current_request = current_request;
          }
    
          public TypeToProduce MapFromPayloadTo<TypeToProduce>() where TypeToProduce : new() 
          {
              var dto = new TypeToProduce();
              foreach (var propertyInfo in typeof (TypeToProduce).GetProperties()) {
                  var value = Convert.ChangeType(current_request.Payload[propertyInfo.Name], propertyInfo.PropertyType);
                  propertyInfo.SetValue(dto, value, null);
              }
    
              return dto;
          }
      }
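
    The IWebRequest seam the parser depends on isn't shown above. Presumably it's just a thin adapter over the current request's form collection; a sketch under that assumption (the CurrentWebRequest name is mine):

      using System.Collections.Specialized;
      using System.Web;

      public interface IWebRequest
      {
          NameValueCollection Payload { get; }
      }

      // adapts the ASP.NET request so the parser can be tested without HttpContext
      public class CurrentWebRequest : IWebRequest
      {
          public NameValueCollection Payload
          {
              get { return HttpContext.Current.Request.Form; }
          }
      }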
      
    

    I finished reading...

    Agile Principles, Patterns, and Practices in C# (Robert C. Martin Series)
    by Robert C. Martin, Micah Martin

    Read more about this book...

     

    What an excellent book, seriously! It was written by Robert C. Martin and his son Micah. The following is a list of excerpts from the book that I can appreciate:

    "Continuous attention to technical excellence and good design enhances agility. High quality is the key to high speed. The way to go fast is to keep the software as clean and robust as possible. Thus, all agile team members are committed to producing only the highest quality code they can. They do not make messes and then tell themselves that they'll clean up when they have more time. They clean any messes as they are made."

    "The goal of refactoring, as depicted in this chapter, is to clean your code every day, every hour and every minute. We don't want the mess to build. We don't want to have to chisel and scrub the encrusted bits that accumulate over time. We want to be able to extend and modify our systems with a minimum of effort. The most important enabler of that ability is the cleanliness of code."

    "Specifying contracts in unit tests. Contracts can also be specified by writing unit tests. By thoroughly testing the behavior of a class, the unit tests make the behavior of the class clear. Authors of client code will want to review the unit tests in order to know what to reasonably assume about the classes they are using."

    "Databases are implementation details! Consideration of the database should be deferred as long as possible. Far too many applications were designed with the database in mind from the beginning and so are inextricably tied to those databases. Remember the definition of abstraction: "the amplification of the essential and the elimination of the irrelevant." At this stage of the project, the database is irrelevant; it is merely a technique used for storing and accessing data, nothing more."

    "This style of testing is called behavior-driven development. The idea is that you should not think of tests as tests, where you make assertions about state and results. Instead, you should think of tests as specifications of behavior, in which you describe how the code is supposed to behave."

    A few weeks ago I started feeling a little overwhelmed by the volume of interest in what I was up to. After reading a chapter from Tim Ferriss' book, I decided to disconnect. It was the most effective advice I could have ever received. I went cold turkey. I turned off my phone and put it in a drawer. I completely stopped checking my email, and wouldn't allow myself to "surf" the net.

    The result after a couple of weeks: I feel liberated... and refreshed!

    The 4-Hour Workweek: Escape 9-5, Live Anywhere, and Join the New Rich
    by Timothy Ferriss

    Read more about this book...

     

    The first couple of days were hard; I had the itch. I kept wondering... "what if an emergency happens and someone needs to get a hold of me?" There was no emergency, and the best part: no shackles. When I finally checked my email, I spent 5 minutes scanning for the email that seemed to contain "information" that was important to me. It was amazing how much "noise" I was able to filter out. This is something that Tim describes as a "Low Information Diet."

    I'm toying with the idea of completely disconnecting my phone and I'm currently checking my email once a week (Mondays).

    I looked back at a post that made a remarkable difference to me when I first read it last year. It was JP's tips on becoming a more effective developer. In it he told us to limit the amount of instant messaging that we do during the day. Today I feel that instant messaging has been replaced by mailing lists, Twitter, texting and RSS feeds. All of this can consume a good portion of your day, and for me it causes me to lose focus quickly. It's important to be selective about what information you need to keep you focused, and to filter out what can wait.

    I'm not saying this is for everyone, but the Low Information Diet is working for me, and my daughter is loving the extra focused attention she gets from her daddy (likewise for her daddy).

    Mike left a comment on my last post on Windows Forms Databinding asking:

    What do the tests look like?

    On the ComboBox binding, why aren't you using adding the binding through DataBinding.Add?  With the way you have it now if you change the value the combobox is bound too it doesn't get pushed back to the screen.

    Well Mr. Mike, on the view implementation there were no tests... *hangs my head in shame* Yup, we went at it trying to understand how Windows Forms data binding works, but if we had gone at it test first, we would have found that leveraging the built-in data bindings is not very testable. It requires having a BindingContext set up, and in some cases the controls have to actually be displayed for the bindings to kick in. Second, if we had gone test first, we would have noticed the issue that Mike brought up in regards to the ComboBox.

    Feeling a little guilty about publishing code that wasn't well thought out, I decided to go at it again, with a test first approach. The test started off very high level. I knew the API that I wanted to work with, in this case a fluent interface for defining a binding to a control. The end result was quite different..

     1     [Concern(typeof (Create))]
     2     public class when_binding_a_property_from_an_object_to_a_combo_box : context_spec {
     3         [Test]
     4         public void should_initialize_the_combo_box_with_the_current_value_of_the_property() {
     5             combo_box.SelectedItem.should_be_equal_to(baby_girl);
     6         }
     7 
     8         protected override void under_these_conditions() {
     9             combo_box = new ComboBox();
    10             thing_to_bind_to = Dependency<IAnInterface>();
    11             baby_girl = Dependency<IAnInterface>();
    12             baby_boy = Dependency<IAnInterface>();
    13 
    14             combo_box.Items.Add(baby_boy);
    15             combo_box.Items.Add(baby_girl);
    16 
    17             thing_to_bind_to
    18                 .setup_result_for(t => t.Child)
    19                 .Return(baby_girl);
    20         }
    21 
    22         protected override void because_of() {
    23             Create
    24                 .BindingFor(thing_to_bind_to)
    25                 .BindToProperty(t => t.Child)
    26                 .BoundToControl(combo_box);
    27         }
    28 
    29         private ComboBox combo_box;
    30         private IAnInterface thing_to_bind_to;
    31         private IAnInterface baby_girl;
    32         private IAnInterface baby_boy;
    33     }
    34         
    

    The end result doesn't leverage the Windows Forms data bindings at all. It registers event handlers for events on the controls.

     1     public class ComboBoxPropertyBinding<TypeToBindTo, PropertyType> : IPropertyBinding<PropertyType> 
     2     {
     3         private readonly IPropertyBinder<TypeToBindTo, PropertyType> binder;
     4 
     5         public ComboBoxPropertyBinding(ComboBox control, IPropertyBinder<TypeToBindTo, PropertyType> binder) 
     6         {
     7             this.binder = binder;
     8             control.SelectedItem = binder.CurrentValue();
     9             control.SelectedIndexChanged += delegate { binder.ChangeValueOfPropertyTo(control.SelectedItem.ConvertedTo<PropertyType>()); };
    10         }
    11 
    12         public PropertyType CurrentValue() 
    13         {
    14             return binder.CurrentValue();
    15         }
    16     }
    17         
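
    The IPropertyBinder seam used above isn't included in this excerpt (it's in the download). Roughly, it knows how to read and write the bound property on the underlying object; a guess at its shape:

      // reads and writes the property captured by the binding expression
      public interface IPropertyBinder<TypeToBindTo, PropertyType>
      {
          PropertyType CurrentValue();
          void ChangeValueOfPropertyTo(PropertyType new_value);
      }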
    

    If you're interested in the rest of the source code, download the source here. The moral of the story... don't become complacent and take off your TDD hat prematurely. In most cases it can, and should, be tested. Your design will probably come out much cleaner than if you go at the problem head on without tests to back you up. Not only that, but tests also give you extension points for making changes and dealing with different contexts you probably wouldn't have thought of right off the bat.

    A couple of weeks ago, Adam and I were pairing on a new screen in a Windows Forms application. He started showing me some stuff that he had learned about Windows Forms data bindings. I showed him a little bit of what JP tried to teach me, back at the Austin Nothin' But .NET boot camp, about Expressions, and we decided to try a different way of binding domain objects to screen elements in our application. The following is a method on the view that's invoked from a presenter. It's given an object from our model to display.

     1   public void Display(IActionPlan actionPlan)
     2   {
     3       Create
     4           .BindingFor(actionPlan)
     5           .BindToProperty(a => a.RecommendedAction)
     6           .BoundToControl(uxRecommendedAction);
     7 
     8       Create
     9           .BindingFor(actionPlan)
    10           .BindToProperty(a => a.AccountablePerson)
    11           .BoundToControl(uxAccoutablePerson);
    12   
    13       Create
    14           .BindingFor(actionPlan)
    15           .BindToProperty(a => a.EstimatedCompletionDate)
    16           .BoundToControl(uxEstimatedCompletionDate);
    17   
    18       Create
    19           .BindingFor(actionPlan)
    20           .BindToProperty(a => a.EstimatedStartDate)
    21           .BoundToControl(uxEstimatedStartDate);
    22   
    23       Create.BindingFor(actionPlan)
    24           .BindToProperty(a => a.RequiredResources)
    25           .BoundToControl(uxResourcesRequired);
    26   
    27       Create.BindingFor(actionPlan)
    28           .BindToProperty(a => a.Priority)
    29           .BoundToControl(uxPriority);
    30   }
    

    Each of our controls is prefixed with "ux". What we did was bind different types of controls to properties on the object to display. This immediately changes the state of the object as the user fills out information on the screen. The BindToProperty() method is given the property on the object to bind to. The following is the implementation we came up with.

     1     public static class Create
     2     {
     3         public static IBinding<T> BindingFor<T>(T object_to_bind_to)
     4         {
     5             return new ControlBinder<T>(object_to_bind_to);
     6         }
     7     }
     8 
     9     public interface IBinding<TypeToBindTo>
    10     {
    11         IBinder<TypeToBindTo> BindToProperty<T>(Expression<Func<TypeToBindTo, T>> property_to_bind_to);
    12     }
    
    1     public interface IBinder<TypeOfDomainObject>
    2     {
    3         string NameOfTheProperty { get; }
    4         TypeOfDomainObject InstanceToBindTo { get; }
    5     }
    

    The implementation of the BindToProperty method takes an input argument of type Expression<Func<TypeToBindTo, T>>. This allows us to inspect the expression and parse out the name of the property the binding is for. It's like treating code as data. The IControlBinder combines two interfaces: one that's issued to client components (IBinding), which restricts what they can do with the type (see the Create class above), and a second (IBinder) that exposes enough information for extension methods to build bindings for specific Windows Forms controls.

     1     public interface IControlBinder<TypeToBindTo> : IBinding<TypeToBindTo>, IBinder<TypeToBindTo>
     2     {
     3     }
     4 
     5     public class ControlBinder<TypeOfDomainObject> : IControlBinder<TypeOfDomainObject>
     6     {
     7         public ControlBinder(TypeOfDomainObject instance_to_bind_to)
     8         {
     9             InstanceToBindTo = instance_to_bind_to;
    10         }
    11 
    12         public IBinder<TypeOfDomainObject> BindToProperty<TypeOfPropertyToBindTo>(
    13             Expression<Func<TypeOfDomainObject, TypeOfPropertyToBindTo>> property_to_bind_to)
    14         {
    15             var expression = property_to_bind_to.Body as MemberExpression;
    16             NameOfTheProperty = expression.Member.Name;
    17             return this;
    18         }
    19 
    20         public string NameOfTheProperty { get; private set; }
    21 
    22         public TypeOfDomainObject InstanceToBindTo { get; private set; }
    23     }
    

    The BoundToControl overloads were put into extension methods, allowing others to create new implementations of bindings without having to modify the ControlBinder itself. The extension methods....

     1     public static class ControlBindingExtensions {
     2         public static IControlBinding BoundToControl<TypeOfDomainObject>(
     3             this IBinder<TypeOfDomainObject> binder,
     4             TextBox control) {
     5             var property_binder = new TextPropertyBinding<TypeOfDomainObject>(
     6                 control,
     7                 binder.NameOfTheProperty,
     8                 binder.InstanceToBindTo);
     9             property_binder.Bind();
    10             return property_binder;
    11         }
    12 
    13         public static IControlBinding BoundToControl<T>(this IBinder<T> binder, RichTextBox box1) {
    14             var property_binder = new TextPropertyBinding<T>(box1,
    15                                                              binder.NameOfTheProperty,
    16                                                              binder.InstanceToBindTo);
    17             property_binder.Bind();
    18             return property_binder;
    19         }
    20 
    21         public static IControlBinding BoundToControl<T>(this IBinder<T> binder, ComboBox box1) {
    22             var property_binder = new ComboBoxBinding<T>(box1,
    23                                                          binder.NameOfTheProperty,
    24                                                          binder.InstanceToBindTo);
    25             property_binder.Bind();
    26             return property_binder;
    27         }
    28 
    29         public static IControlBinding BoundToControl<T>(this IBinder<T> binder, DateTimePicker box1) {
    30             var property_binder = new DatePickerBinding<T>(box1,
    31                                                            binder.NameOfTheProperty,
    32                                                            binder.InstanceToBindTo);
    33             property_binder.Bind();
    34             return property_binder;
    35         }
    36     }
    

    For completeness... the control bindings...

     1     public class TextPropertyBinding<TypeToBindTo> : IControlBinding {
     2         private readonly Control control_to_bind_to;
     3         private readonly string name_of_the_propery_to_bind;
     4         private readonly TypeToBindTo instance_of_the_object_to_bind_to;
     5 
     6         public TextPropertyBinding(
     7             Control control_to_bind_to,
     8             string name_of_the_propery_to_bind,
     9             TypeToBindTo instance_of_the_object_to_bind_to
    10             ) {
    11             this.control_to_bind_to = control_to_bind_to;
    12             this.name_of_the_propery_to_bind = name_of_the_propery_to_bind;
    13             this.instance_of_the_object_to_bind_to = instance_of_the_object_to_bind_to;
    14         }
    15 
    16         public void Bind() {
    17             control_to_bind_to.DataBindings.Clear();
    18             control_to_bind_to.DataBindings.Add(
    19                 "Text",
    20                 instance_of_the_object_to_bind_to,
    21                 name_of_the_propery_to_bind);
    22         }
    23     }
    
     1     public class ComboBoxBinding<TypeToBindTo> : IControlBinding {
     2         private readonly ComboBox control_to_bind_to;
     3         private readonly string name_of_the_propery_to_bind;
     4         private readonly TypeToBindTo instance_of_the_object_to_bind_to;
     5 
     6         public ComboBoxBinding(ComboBox control_to_bind_to,
     7                                string name_of_the_propery_to_bind,
     8                                TypeToBindTo instance_of_the_object_to_bind_to) {
     9             this.control_to_bind_to = control_to_bind_to;
    10             this.name_of_the_propery_to_bind = name_of_the_propery_to_bind;
    11             this.instance_of_the_object_to_bind_to = instance_of_the_object_to_bind_to;
    12         }
    13 
    14         public void Bind() {
    15             control_to_bind_to.SelectedIndexChanged +=
    16                 delegate {
    17                     typeof (TypeToBindTo)
    18                         .GetProperty(name_of_the_propery_to_bind)
    19                         .SetValue(
    20                         instance_of_the_object_to_bind_to,
    21                         control_to_bind_to.Items[control_to_bind_to.SelectedIndex],
    22                         null);
    23                 };
    24         }
    25     }
    
     1     public class DatePickerBinding<TypeToBindTo> : IControlBinding {
     2         private readonly DateTimePicker control_to_bind_to;
     3         private readonly string name_of_the_propery_to_bind;
     4         private readonly TypeToBindTo instance_of_the_object_to_bind_to;
     5 
     6         public DatePickerBinding(DateTimePicker control_to_bind_to,
     7                                  string name_of_the_propery_to_bind,
     8                                  TypeToBindTo instance_of_the_object_to_bind_to) {
     9             this.control_to_bind_to = control_to_bind_to;
    10             this.name_of_the_propery_to_bind = name_of_the_propery_to_bind;
    11             this.instance_of_the_object_to_bind_to = instance_of_the_object_to_bind_to;
    12         }
    13 
    14         public void Bind() {
    15             control_to_bind_to.DataBindings.Clear();
    16             control_to_bind_to.DataBindings.Add(
    17                 "Value",
    18                 instance_of_the_object_to_bind_to,
    19                 name_of_the_propery_to_bind);
    20         }
    21     }
    

    We found that using the fluent interface for creating bindings was pretty easy and made screen synchronization a breeze; however, our implementation wasn't the easiest thing to test. So far it's been good to us.

    As a side note... go register for the Las Vegas course; it may cause you to love your job! Also, if you've already attended a boot camp and think you already know what the course is about, you have no idea; it keeps getting better and better.

    When building up a tree view that represents the directory structure of a file system, like Windows Explorer, my first reaction was to use recursion to traverse the file system and build up the tree. I quickly found that doing it that way is a time consuming process, and it required some optimization.

    I came up with what I like to call the recursive command. Each Tree Node item on a tree view is bound to a command to execute. The command looks like this...

    1   public interface ITreeNodeClickedCommand {
    2       void Execute(ITreeNode node);
    3   }
    

    When the command is executed, it gets an opportunity to modify the state of the tree node that was clicked. In this case I wanted to lazy load the subdirectories of the node that was clicked. The command implementation looks like this...

     1   public interface IAddFoldersCommand : ITreeNodeClickedCommand {}
     2   
     3   public class AddFoldersCommand : IAddFoldersCommand {
     4       private readonly DirectoryInfo the_current_directory;
     5       private bool has_executed;
     6   
     7       public AddFoldersCommand(DirectoryInfo the_current_directory) {
     8           this.the_current_directory = the_current_directory;
     9       }
    10   
    11       public void Execute(ITreeNode node) {
    12           if (!has_executed) {
    13               foreach (var directory in the_current_directory.GetDirectories()) {
    14                   node.Add(new TreeNodeItem(directory.Name, ApplicationIcons.Folder, new AddFoldersCommand(directory)));
    15               }
    16           }
    17           has_executed = true;
    18       }
    19   }
    20     
    

    This command is executed each time the tree node that it is bound to is clicked, but it will only build up the child tree node items once. Each of the child tree nodes is bound to a new instance of the same command. Hence, what I like to call the recursive command.

    recursive_command

    For more information on the command pattern, check out Wikipedia's write-up.
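
    To make the wiring a little more concrete, here's a rough sketch of how a node might hand a click over to its bound command. ITreeNode isn't shown in full in this post, so the minimal shape below (with the icon parameter left out) is an assumption of mine:

        public interface ITreeNode
        {
            void Add(ITreeNode child);
            void Clicked();
        }

        public class TreeNodeItem : ITreeNode
        {
            private readonly string name;
            private readonly ITreeNodeClickedCommand command_bound_to_this_node;
            private readonly IList<ITreeNode> children = new List<ITreeNode>();

            public TreeNodeItem(string name, ITreeNodeClickedCommand command_bound_to_this_node)
            {
                this.name = name;
                this.command_bound_to_this_node = command_bound_to_this_node;
            }

            public void Add(ITreeNode child)
            {
                children.Add(child);
            }

            public void Clicked()
            {
                // every click re-executes the bound command; AddFoldersCommand itself
                // makes sure the child folders only get built up once
                command_bound_to_this_node.Execute(this);
            }
        }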

    *Update 11:30 am MST, Friday, August. 01, 2008

    After a little more inspection, I realized I was doing nothing more than visiting each tree node item on demand. Visitors are known to be great for recursive structures.... so ix-nay on the ecursive-ray ommand-kay.

    The revised version:

     1   public interface IAddFoldersToTreeVisitor : ITreeNodeVisitor
     2   {
     3   }
     4   
     5   public class AddFoldersToTreeVisitor : IAddFoldersToTreeVisitor
     6   {
     7       private readonly DirectoryInfo the_current_directory;
     8       private bool has_executed;
     9       private readonly IAddFilesToTreeVisitor add_files_visitor;
    10       private readonly IAddFoldersCommandFactory factory;
    11   
    12       public AddFoldersToTreeVisitor(DirectoryInfo the_current_directory, IAddFoldersCommandFactory factory,
    13                                      IAddFilesToTreeVisitor add_files_visitor)
    14       {
    15           this.the_current_directory = the_current_directory;
    16           this.factory = factory;
    17           this.add_files_visitor = add_files_visitor;
    18       }
    19   
    20       public void Visit(ITreeNode node)
    21       {
    22           if (!has_executed)
    23           {
    24               foreach (var directory in the_current_directory.GetDirectories())
    25               {
    26                   node.Add(directory.Name, ApplicationIcons.Folder, factory.CreateFor(directory));
    27               }
    28               add_files_visitor.PreparedWith(the_current_directory);
    29               add_files_visitor.Visit(node);
    30           }
    31           has_executed = true;
    32       }
    33   }
    


July

    Our team is made up of 3 dedicated developers, 1 project manager, 1 super dedicated product owner and a trusty task board. Although we're a small team, we've been uber successful, and so far we have been able to outperform the competition (in a surprisingly short amount of time).

    One of the reasons is how "tight" (read: close) the team is. What I mean is that we're completely open with each other, which has allowed us to really gel as a team. We have our good days and bad days, but overall I feel like I can honestly depend on my teammates, and they can depend on me.

    We all make the same salary and have the same number of shares. There are no superheroes on our team. We all have our strengths and weaknesses, but there's no sense that any one of us is obligated to outperform the others. We're judged on team performance rather than individual performance.

    "The problem with reviews is that most reviews and raises are based on individual goals and achievements, but XP focuses on team performance. If a programmer spends half of his time pairing with others, how can you evaluate his individual performance? How much incentive does he have to help others if he will be evaluated on individual performance?" - Kent Beck from Extreme Programming Explained

    Extreme Programming Explained: Embrace Change (2nd Edition) (The XP Series)
    by Kent Beck, Cynthia Andres

    Read more about this book...

    Depending on the day, we each step up to lead the team. There are no water-cooler discussions about why a member of the team makes x dollars while I make x - 20K. We're either performing or not, and we call each other out when we're not.

    So far, this has helped our team gel. I'm curious to hear about why you and your team are so "tight".

    So JP had to tag me... then the Los Techies crew had to invite me to join Los Techies. This sucks for someone who "claims" to be quite a private person. Thanks JP for putting me in the spotlight, and thanks to all the techies who thought I was fit to join. So here goes...

    How old were you when you first started in programming?

    I was in grade 11, so I guess that would have made me 15.

    How did you get started in programming?

    Hmm... Kind of by accident. I took a C++ course in high school as an option and I found that I actually liked it. I didn't actually think I was capable of becoming a software developer, but I knew I liked it.

    What was your first programming language?

    C++

    What was the first real program you wrote?

    In college I signed up for a curriculum that focused more on electrical engineering than software development. However, we got a little bit of exposure to different programming languages like assembler and C.

    The first actual program that I finished was in my second year of college. We wrote a piece of voice recognition software using MatLab. It was actually a tonne of fun, because it required us to utilize what we had learned about digital signal processing as well as how to pick up a brand new language and learn how to get something compiling with it in a short bit of time. This was probably when I realized I liked staring at code more than I liked staring at circuit diagrams.

    What languages have you used since you started programming?

    Assembler, C, C++, C#, T-SQL, VB, VB.NET, MatLab. The languages I would say I'm ok in are C and C#.

    What was your first professional programming gig?

    Right after college I got scooped up by a company called DataShapers, where I got to work on a project called Incentus, a gift card and loyalty management system. I was hired to build embedded gift card applications for different point-of-sale terminals.

    I was quite fortunate to get to work on such a sweet project right out of school. I was exposed to things like chip cards, 3DES encryption, and SSL/TLS at the raw sockets level, all written in C. I was also fortunate enough to be mentored by one of the best while I was there. Thanks Mr. Mark!

    If you knew then what you know now, would you have started programming?

    Oh... yes!

    If there is one thing you learned along the way that you would tell new developers, what would it be?

    Actually listen to your elders, and follow through with what they tell you. At the same time, question everything they tell you, and decide what's right for you.

    "Listen to your elders, but question everything they tell you."

    What's the most fun you've ever had programming?

    The Nothin' But .NET boot camp (times 2)... seriously, it blows my mind!

    Who am I calling out?

    Luu Duong

    Mark Chen

    Owen Rogers

    Adam Alinauskas (whenever he gets his blog back up)

    David Morgantini

    So yesterday I had this idea on how to run a background thread using an interceptor, but first I needed to grok how the Castle interceptors worked. I quickly ran into a snag that looked something like this:

    background_thread_interceptor

    The problem is that the latest version of Rhino.Mocks hasn't internalized the Castle dependencies. So the compiler doesn't know whether I'm referring to Castle.Core.Interceptor.IInvocation from Castle.Core.dll or Castle.Core.Interceptor.IInvocation from Rhino.Mocks.dll.
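
    For context, the kind of interceptor I was trying to write looked roughly like this (a sketch only, not a final implementation, assuming Castle DynamicProxy's Castle.Core.Interceptor namespace from that era). Both referenced assemblies expose the IInvocation type used below, which is exactly where the ambiguity bites:

        // rough sketch: push the intercepted call onto the thread pool so it runs in the background
        public class BackgroundThreadInterceptor : IInterceptor
        {
            public void Intercept(IInvocation invocation)
            {
                ThreadPool.QueueUserWorkItem(state => invocation.Proceed());
            }
        }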

    I wasn't sure why this was, so I did a little digging and found this post... In one of the last comments, someone else had this same issue...

    rhino_mocks_internalize_castle_dependency

    I'm not really sure what this meant, so I decided to pull the source from the trunk. In the Rhino.Mocks project there's a file called "ilmerge.exclude" and in it was the interface that I was having trouble resolving (IInvocation). I removed it from the file, and rebuilt a release version. Seems to be working now.... I'm still not sure why this change was made... which makes me feel a bit uneasy.

    ilmerge_diff

    Try it at your own discretion.

    In case you haven't you should read...

    xUnit Test Patterns: Refactoring Test Code (The Addison-Wesley Signature Series)
    by Gerard Meszaros

    Read more about this book...

    As a reminder, let's talk about a test smell described in the above-mentioned book. It's called "Conditional Test Logic".

    "Conditional Test Logic: A test contains code that may or may not be executed." xUnit Test Patterns

    "A fully automated test is just code that verifies the behavior or other code. But if this code is complicated, how do we verify that it works properly?"

    Warning bells should sound off in your head when you start to see looping or conditional constructs within a single unit test.

    "Code that has only a single execution path always executes in exactly the same way. Code that has multiple execution paths presents much greater challenges and does not inspire as much confidence about its outcome."

    For more information check this out...

    Basically: keep logic that could cause multiple execution paths out of a single unit test.
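
    Here's a contrived illustration of the smell (my example, not one from the book): the first test hides the assert behind a loop and an if, so it can go green without ever asserting anything; the second always runs the same single path.

        [TestFixture]
        public class conditional_test_logic_smell
        {
            [Test]
            public void smells_because_the_assert_may_never_run()
            {
                foreach (var code in ProvinceCodes())
                {
                    if (code.StartsWith("A"))
                    {
                        Assert.AreEqual(2, code.Length); // silently skipped if nothing starts with "A"
                    }
                }
            }

            [Test]
            public void has_a_single_execution_path()
            {
                var alberta = ProvinceCodes().First(code => code.StartsWith("A"));
                Assert.AreEqual("AB", alberta);
            }

            private static IEnumerable<string> ProvinceCodes()
            {
                return new[] {"AB", "BC", "ON"};
            }
        }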

    So I got a phone call this morning that went something like this....

    "Yo mO, what's happenin' homie?", voice on the phone.

    "Ye'.... I'm just slingin some code wit my compadre, G. What's crackin'?", says mo!

    "So word on the street is that you're slingin' some made Rhino Mocks 3 dot 5 ish? So lemme asks you, how do you bust out some event raisin' with the new ish?"

    My response... "I gots no clue my man, no clue!"

    After some quick digging here's what I found... (Please remember this is a contrived example!)

    The old school way...

     1   [TestFixture]
     2   public class AnonymousPresenterTest {
     3       private IView view;
     4       private MockRepository mockery;
     5       private ITask task;
     6   
     7       [SetUp]
     8       public void SetUp() {
     9           mockery = new MockRepository();
    10           view = mockery.DynamicMock<IView>();
    11           task = mockery.DynamicMock<ITask>();
    12       }
    13   
    14       public IPresenter CreateSUT() {
    15           return new AnonymousPresenter(view, task);
    16       }
    17   
    18       [Test]
    19       public void ShouldDoSomethingUseful() {
    20           IEventRaiser raiser = null;
    21           using (mockery.Record()) {
    22               view.Load += null;
    23               raiser = LastCall.GetEventRaiser();
    24   
    25               Expect
    26                   .Call(task.AllProvinces())
    27                   .Return(new List<IProvince>());
    28           }
    29   
    30           using (mockery.Playback()) {
    31               CreateSUT();
    32               raiser.Raise(null, EventArgs.Empty);
    33           }
    34       }
    35   }
    

    Here's the new way.. that I quickly Googled for...

     1   [Concern(typeof(AnonymousPresenter))]
     2   public class when_the_view_is_first_loaded : context_spec<IPresenter> {
     3       private IView view;
     4       private ITask task;
     5   
     6       protected override IPresenter UnderTheseConditions() {
     7           view = Dependency<IView>();
     8           task = Dependency<ITask>();
     9           return new AnonymousPresenter(view, task);
    10       }
    11   
    12       protected override void BecauseOf() {
    13           view.Raise(v => v.Load += null, view, EventArgs.Empty);
    14       }
    15   
    16       [Test]
    17       public void should_do_something_useful() {
    18           task.should_have_been_asked_to(t => t.AllProvinces());
    19       }
    20   }
    

    So there you have it. Enjoy..

    P.S.

    I found the usage on Ayende's wiki here. Also, I am not a Rhino.Mocks guru, nor do I want to be, and yes the phone conversation was not as interesting as was previously illustrated.

    Building a splash screen. (err... not taking a bath)

    So one requirement we had this week was to add a splash screen to the project we're working on. My knowledge of threading is weak, so bear with me. As our application grew, the start-up times grew with it, so the splash screen is meant to be a cue to the user that, yes, the app is running.

    This is what the end result ended up looking like...

    1   using (new BackgroundThread(new DisplaySplashScreenCommand()))
    2   {
    3       ApplicationStartUpTask.ApplicationBegin();
    4   }
    

    So what's happening is that the splash screen is loaded on a background thread while the application start-up continues on the main thread. When the application start-up is finished, the background thread disposes of the command that it's executing; in this case, that starts fading the splash screen away.

    Here's the core interface that made this happen.

    1   public interface ICommand
    2   {
    3       void Execute();
    4   }
    
    1   public interface IDisposableCommand : ICommand, IDisposable
    2   {
    3   }
    4 
    5   public interface IBackgroundThread : IDisposable
    6   {
    7   }
    

    This is a pretty simple solution (IMHO). The actual splash screen is just a WinForms form that starts a timer and adjusts its opacity when it's asked to display, then fades away when it's asked to hide.

     1   public partial class SplashScreen : Form, ISplashScreen
     2   {
     3       private Timer timer;
     4 
     5       public SplashScreen()
     6       {
     7           InitializeComponent();
     8           Visible = false;
     9       }
    10 
    11       public void DisplayTheSplashScreen()
    12       {
    13           ApplyWindowStyles();
    14           StartFadingIn();
    15       }
    16 
    17       public void HideTheSplashScreen()
    18       {
    19           StartFadingOut();
    20       }
    21 
    22       private void StartFadingIn()
    23       {
    24           Opacity = .0;
    25           timer = new Timer {Interval = 50};
    26           timer.Tick += ((sender, e) => { if (Opacity < 1) Opacity += .05; });
    27           timer.Start();
    28           ShowDialog();
    29       }
    30 
    31       private void StartFadingOut()
    32       {
    33           if(timer != null && timer.Enabled){
    34               timer.Stop();
    35           }
    36           timer = new Timer {Interval = 50};
    37           timer.Tick += (delegate {
    38                                  if (Opacity > 0) {
    39                                      Opacity -= .1;
    40                                  }
    41                                  else {
    42                                      timer.Stop();
    43                                      Close();
    44                                  }
    45                              });
    46           timer.Start();
    47       }
    48 
    49       private void ApplyWindowStyles()
    50       {
    51           BackgroundImage = Image.FromFile(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "images/splash.jpg"));
    52           FormBorderStyle = FormBorderStyle.None;
    53           StartPosition = FormStartPosition.CenterScreen;
    54           ClientSize = BackgroundImage.Size;
    55           TopMost = true;
    56       }
    57   }
    

    For starting and running a non-blocking background thread, we're using the BackgroundWorker class which takes care of thread synchronization. (This comes in handy for synchronizing UI elements)

     1   public class BackgroundThread : IBackgroundThread
     2   {
     3       private readonly BackgroundWorker worker_thread;
     4 
     5       public BackgroundThread(IDisposableCommand command)
     6       {
     7           worker_thread = new BackgroundWorker();
     8           worker_thread.DoWork += delegate { command.Execute(); };
     9           worker_thread.Disposed += delegate { command.Dispose(); };
    10           worker_thread.RunWorkerAsync();
    11       }
    12 
    13       public void Dispose()
    14       {
    15           worker_thread.Dispose();
    16       }
    17   }
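
    As an aside, here's a small, contrived illustration of the synchronization I'm leaning on (progress_bar and status_label are assumed WinForms controls, not part of our code): DoWork runs on a thread-pool thread, while ProgressChanged and RunWorkerCompleted are raised back on the thread that started the worker, which in a WinForms app is the UI thread.

        var worker = new BackgroundWorker { WorkerReportsProgress = true };
        worker.DoWork += (sender, e) =>
        {
            for (var i = 0; i <= 100; i += 10)
            {
                Thread.Sleep(50);             // pretend to do some slow start-up work
                worker.ReportProgress(i);     // raises ProgressChanged back on the UI thread
            }
        };
        worker.ProgressChanged += (sender, e) => progress_bar.Value = e.ProgressPercentage;
        worker.RunWorkerCompleted += (sender, e) => status_label.Text = "ready";
        worker.RunWorkerAsync();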
    

    Just so you get the rest of the code, here's the DisplaySplashScreenCommand, although I'm sure it was obvious.

     1   public class DisplaySplashScreenCommand : IDisposableCommand
     2   {
     3       private ISplashScreen splash_screen;
     4 
     5       public DisplaySplashScreenCommand() : this(new SplashScreen())
     6       {
     7       }
     8 
     9       public DisplaySplashScreenCommand(ISplashScreen splash_screen)
    10       {
    11           this.splash_screen = splash_screen;
    12       }
    13 
    14       public void Execute()
    15       {
    16           splash_screen.DisplayTheSplashScreen();
    17       }
    18 
    19       public void Dispose()
    20       {
    21           splash_screen.HideTheSplashScreen();
    22       }
    23   }
    

    Hope this helps anyone out there trying to implement a splash screen. P.S. I can't stress enough that my knowledge of threading is limited, so if you know of a cleaner implementation... please, please hook a brotha up!

    The end result looks like...

    splash_screen

    Tonight I finished reading...

    C# in Depth: What you need to master C# 2 and 3
    by Jon Skeet

    Read more about this book...

    This was an amazing book, and it definitely offers a great in-depth look at the C# language. Most importantly, it answered a lot of my questions about features introduced in C# 3.0, and it taught me things I didn't know about C# 1.0. If you're looking for information on the following items, then this book is definitely for you.

    • Expression
    • IQueryable
    • IQueryProvider
    • Lambdas
    • Type Inferencing

    Thanks JP for recommending this book!

    Here are a few gems that I picked from this book.

    Delegates

    "You rarely see an explicit call to Delegate.Combine in C# code - usually the + and += operators are used."

    1   var x = new EventHandler(delegate { });
    2   var y = new EventHandler(delegate { });
    3   x += y;
    4   x = x + y;// same as above
    5   x = (EventHandler) Delegate.Combine(x, y);// same as above
    

    Static vs Dynamic Typing

    "C# is statically typed: each variable is of a particular type, and that type is known at compile time. The alternate to static typing is dynamic typing, which can take a variety of guises. "

    Explicit vs. Implicit Typing

    "The distinction between explicit typing and implicit typing is only relevant in statically typed languages. With explicitly typing, the type of every variable must be explicitly stated in the declaration. Implicit typing allows the compiler to infer the type of the variable based on its use."

    Covariance vs. Invariance

    1   object[] stuff = new string[]{"blah"}; // valid and is an example of covariance
    2   List<object> more_stuff = new List<string>();// invalid and is an example of invariance
    

    Fluent Interfaces

    Jon, the author, mentions a blog post by Anders Noras on Planning a fluent interface

    Here's an example of a fluent interface for building menus that I've been playing with.

    1   CreateA.MenuItem()
    2     .Named("&Close")
    3     .BelongsTo(MenuGroups.File)
    4     .CanBeClickedWhen(m => task.IsThereAProtocolSelected())
    5     .WhenClickedExecute(closeCommand)
    6     .Build();
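
    Here's a rough sketch of the contract a builder like that might expose. Every name below (IMenuItemBuilder, MenuGroups, IMenuItem, ICommand) is an assumption of mine inferred from the usage above, not the actual API:

        public interface IMenuItem { }
        public interface ICommand { void Execute(); }
        public enum MenuGroups { File, Edit, Help }

        public interface IMenuItemBuilder
        {
            IMenuItemBuilder Named(string text);                                // menu caption, e.g. "&Close"
            IMenuItemBuilder BelongsTo(MenuGroups group);                       // which top-level menu it hangs off
            IMenuItemBuilder CanBeClickedWhen(Func<IMenuItem, bool> condition); // enablement rule
            IMenuItemBuilder WhenClickedExecute(ICommand command);              // what to run on click
            IMenuItem Build();                                                  // materialize the menu item
        }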
    

    Readability

    "When it comes to getting the broad sweep of code, what is required is 'readability of results' - I want to know what the code does, but I don't care how it does it right now."

    There's a lot of information on IQueryables, Expression Trees, and other goodness in this book. This is a great book and definitely worth reading, especially if you're as interested in the C# language as I am.

    I mean Grokking Rhino Mocks. My usage of Rhino Mocks has changed quite a bit since I first started using it a year ago, as well as the way I write tests.

    First it was like this:

     1   [TestFixture]
     2   public class ConsoleTest {
     3       private MockRepository mockery;
     4       private IReportPresenter presenter;
     5   
     6       [SetUp]
     7       public void SetUp() {
     8           mockery = new MockRepository();
     9           presenter = mockery.CreateMock<IReportPresenter>();
    10       }
    11   
    12       [TearDown]
    13       public void TearDown() {
    14           mockery.VerifyAll();
    15       }
    16   
    17       [Test]
    18       public void ShouldInitializeTheReportPresenter() {
    19           var commandLineArguments = new[] {"blah"};
    20           presenter.Initialize();
    21   
    22           mockery.ReplayAll();
    23           new Console(presenter).Execute(commandLineArguments);
    24       }
    25   }
    

    Then it evolved to this...

     1   [TestFixture]
     2   public class ConsoleTest {
     3       private MockRepository mockery;
     4       private IReportPresenter presenter;
     5   
     6       [SetUp]
     7       public void SetUp() {
     8           mockery = new MockRepository();
     9           presenter = mockery.DynamicMock<IReportPresenter>();
    10       }
    11   
    12       [Test]
    13       public void ShouldInitializeTheReportPresenter() {
    14           var commandLineArguments = new[] {"blah"};
    15   
    16           using (mockery.Record()) {
    17               presenter.Initialize();
    18           }
    19           using (mockery.Playback()) {
    20               CreateSUT().Execute(commandLineArguments);
    21           }
    22       }
    23   
    24       private IConsole CreateSUT() {
    25           return new Console(presenter);
    26       }
    27   }
    

    Then for a short time I tried this...

     1   [TestFixture]
     2   public class when_giving_the_console_valid_arguments {
     3       private IReportPresenter presenter;
     4   
     5       [SetUp]
     6       public void SetUp() {
     7           presenter = MockRepository.GenerateMock<IReportPresenter>();
     8       }
     9   
    10       [Test]
    11       public void should_initialize_the_report_presenter() {
    12           var commandLineArguments = new[] {"blah"};
    13           CreateSUT().Execute(commandLineArguments);
    14           presenter.AssertWasCalled(p => p.Initialize());
    15       }
    16   
    17       private IConsole CreateSUT() {
    18           return new Console(presenter);
    19       }
    20   }
    

    Now I'm trying this...

     1   [Concern(typeof (Console))]
     2   public class when_the_console_is_given_valid_console_arguments : context_spec<IConsole> {
     3       private string[] command_line_arguments;
     4       private IReportPresenter presenter;
     5   
     6       protected override IConsole UnderTheseConditions() {
     7           command_line_arguments = new[] {"path", "testfixtureattributename"};
     8           presenter = Dependency<IReportPresenter>();
     9   
    10           return new Console(presenter);
    11       }
    12   
    13       protected override void BecauseOf() {
    14           sut.Execute(command_line_arguments);
    15       }
    16   
    17       [Test]
    18       public void should_initialize_the_report_presenter() {
    19           presenter.should_have_been_asked_to(p => p.Initialize());
    20       }
    21   }
    

    Now I'm generating reports from my test specs using this. I wonder what's next...

    Last week I finished reading...

    The 7 Habits of Highly Effective People
    by Stephen R. Covey

    Read more about this book...

    I really enjoyed this book, it's definitely worth taking the time to sit down and read. I realized a lot about myself as I read it. Hopefully you will too! Here are a few excerpts from the book that had an impact on me.

    "Management is a bottom line focus: How can I best accomplish certain things? Leadership deals with the top line: What are the things I want to accomplish?"

    "... envision a group of producers cutting their way through the jungle with machetes. They're the producers, the problem solvers. They're cutting through the undergrowth, clearing it out.

    The managers are behind them, sharpening their machetes, writing policy and procedure manuals, holding muscle development programs, bringing in improved technologies and setting up working schedules and compensation programs for machete wielders.

    The leader is the one who climbs the tallest tree, surveys the entire situation, and yells, 'Wrong Jungle!'"

    "Work Centeredness. Work-centered people may become 'workaholics,' driving themselves to produce at the sacrifice of health, relationships, and other important areas of their lives. Their fundamental identity comes from their work - 'I'm a doctor', 'I'm a writer', 'I'm an actor.'

    Because their identity and sense of self-worth are wrapped up in their work, their security is vulnerable to anything that happens to prevent them from continuing in it. Their guidance is a function of the demands of the work. Their wisdom and power come in the limited areas of their work, rendering them ineffective in other areas of life."

    "There are times when neither the teacher nor the student knows for sure what's going to happen. In the beginning, there's a safe environment that enables people to be really open and to learn and to listen to each other's ideas. Then comes brainstorming, where the spirit of evaluation is subordinated to the spirit of creativity, imagining, and intellectual networking. Then an absolutely unusual phenomenon begins to take place. The entire class is transformed with the excitement of a new thrust, a new idea, a new direction that's hard to define, yet it's almost palpable to the people involved."

    "Suppose you were to come upon someone in the woods working feverishly to saw down a tree.

    'What are you doing?' you ask.

    'Can't you see?' comes the impatient reply. 'I'm sawing down this tree.'

    'You look exhausted!' you exclaim. 'How long have you been at it?'

    'Over five hours,' he returns, 'and I'm beat! This is hard work.'

    'Well, why don't you take a break for a few minutes and sharpen that saw?' you inquire. 'I'm sure it would go a lot faster.'

    'I don't have time to sharpen the saw,' the man says emphatically. 'I'm too busy sawing!'"

    "Principles are natural laws that are external to us and that ultimately control the consequences of our actions. Values are internal and subjective and represent that which we feel strongest about in guiding our behavior."

June

    I'm currently reading...

    The 7 Habits of Highly Effective People
    by Stephen R. Covey

    Read more about this title...

    I came across a paragraph that stuck out for me, and just wanted to share it:

    "If you don't let a teacher know at what level you are -- by asking a questions, or revealing your ignorance - you will not learn or grow. You cannot pretend for long, for you will eventually be found out. Admission of ignorance is often the first step in our education."

    This was immediately followed by the next powerful statement.

    "Thoreau taught, 'How can we remember our ignorance, which our growth requires, when we are using our knowledge all the time?'"

    Note to self, "be more ignorant..." *giggle* Have a great day!

    An idea the team and I had today was to build a more fluent interface for creating dynamic SQL queries. Here's what I mean:

     1   [TestFixture]
     2   public class when_creating_an_insert_query_for_two_or_more_columns {
     3       [Test]
     4       public void should_return_the_correct_sql() {
     5           var query = Insert.Into<CustomersTable>()
     6               .ValueOf("mo").ForColumn(c => c.FirstName())
     7               .And()
     8               .ValueOf("khan").ForColumn(c => c.LastName())
     9               .End();
    10 
    11           var expected =
    12               "INSERT INTO Customers ( FirstName, LastName ) VALUES ( @FirstName, @LastName );";
    13           query.ToSql().ShouldBeEqualTo(expected);
    14       }
    15   }
    

    It's the responsibility of the query object to prepare the command with the command parameter names and values, so in this test I'm just focused on the raw SQL. One of the benefits of this API is that it's strongly typed, so you can't stick a string in a column represented by a long.

    For example, imagine a customers table that looks like this:

     1   public class CustomersTable : IDatabaseTable {
     2       public string Name() {
     3           return "Customers";
     4       }
     5 
     6       public IDatabaseColumn<long> Id() {
     7           return new DatabaseColumn<long>("Id");
     8       }
     9 
    10       public IDatabaseColumn<string> FirstName() {
    11           return new DatabaseColumn<string>("FirstName");
    12       }
    13 
    14       public IDatabaseColumn<string> LastName() {
    15           return new DatabaseColumn<string>("LastName");
    16       }
    17   }
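
    To illustrate the type-safety claim above (a hypothetical snippet, not from our test suite): FirstName() returns an IDatabaseColumn<string>, so the generic constraint on ForColumn rejects a mismatched value at compile time.

        // compiles: a string value for a string column
        var ok = Insert.Into<CustomersTable>()
            .ValueOf("mo").ForColumn(c => c.FirstName());

        // does not compile: a long value paired with a string column, because
        // IDatabaseColumn<string> doesn't satisfy the IDatabaseColumn<long> constraint
        // var broken = Insert.Into<CustomersTable>()
        //     .ValueOf(42L).ForColumn(c => c.FirstName());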
    

    Here's what we've got so far for contracts...

     1   public class Insert {
     2       public static ITableSelector<Table> Into<Table>() where Table : IDatabaseTable {
     3           return new TableSelector<Table>();
     4       }
     5   }
     6 
     7   public interface ITableSelector<Table> {
     8       IColumnSelector<Table, ColumnType> ValueOf<ColumnType>(ColumnType value);
     9   }
    10 
    11   public interface IColumnSelector<Table, ColumnType> {
    12       IChainedSelector<Table> ForColumn<TColumn>(Func<Table, TColumn> columnSelection)
    13           where TColumn : IDatabaseColumn<ColumnType>;
    14   }
    15 
    16   public interface IChainedSelector<Table> {
    17       ITableSelector<Table> And();
    18       IQuery End();
    19   }
    

    And here's as far as we got with the implementation...

     1   public class TableSelector<Table> : ITableSelector<Table> where Table : IDatabaseTable {
     2       public IColumnSelector<Table, T> ValueOf<T>(T value) {
     3           return new ColumnSelector<Table, T>(value);
     4       }
     5   }
     6 
     7   public class ColumnSelector<Table, T> : IColumnSelector<Table, T> where Table : IDatabaseTable {
     8       private readonly T value;
     9 
    10       public ColumnSelector(T value) {
    11           this.value = value;
    12       }
    13 
    14       public IChainedSelector<Table> ForColumn<TColumn>(Func<Table, TColumn> columnSelection)
    15           where TColumn : IDatabaseColumn<T> {
    16           var table = Activator.CreateInstance<Table>();
    17           return new ChainedSelector<Table, T, TColumn>(
    18               table,
    19               value,
    20               columnSelection(table)
    21               );
    22       }
    23   }
    24 
    25   public class ChainedSelector<Table, Value, Column> : IChainedSelector<Table>
    26       where Table : IDatabaseTable
    27       where Column : IDatabaseColumn<Value> {
    28       private readonly Table table;
    29       private readonly Value value;
    30       private readonly Column column;
    31 
    32       public ChainedSelector(Table table, Value value, Column column) {
    33           this.table = table;
    34           this.value = value;
    35           this.column = column;
    36       }
    37 
    38       public ITableSelector<Table> And() {
    39           throw new NotImplementedException();
    40       }
    41 
    42       public IQuery End() {
    43           var builder = new InsertStatementBuilder(table.Name());
    44           builder.Add(column, value);
    45           return builder.EndQuery();
    46       }
    47   }
    

    The most important piece is still missing, and that's implementing the "And()" method on ChainedSelector... and finishing off the End() method. I'm drawing a blank... Thoughts are appreciated!
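
    One possible direction (just a sketch; IQuery, IDatabaseTable, IDatabaseColumn<T> and the InsertStatementBuilder members used in End() above are assumed to exist as shown): thread the InsertStatementBuilder through the chain, so that And() hands the same builder to the next selector and End() simply flushes it.

        public class TableSelector<Table> : ITableSelector<Table> where Table : IDatabaseTable {
            private readonly InsertStatementBuilder builder;

            public TableSelector() : this(new InsertStatementBuilder(Activator.CreateInstance<Table>().Name())) {}

            public TableSelector(InsertStatementBuilder builder) {
                this.builder = builder;
            }

            public IColumnSelector<Table, T> ValueOf<T>(T value) {
                return new ColumnSelector<Table, T>(builder, value);
            }
        }

        public class ColumnSelector<Table, T> : IColumnSelector<Table, T> where Table : IDatabaseTable {
            private readonly InsertStatementBuilder builder;
            private readonly T value;

            public ColumnSelector(InsertStatementBuilder builder, T value) {
                this.builder = builder;
                this.value = value;
            }

            public IChainedSelector<Table> ForColumn<TColumn>(Func<Table, TColumn> columnSelection)
                where TColumn : IDatabaseColumn<T> {
                var table = Activator.CreateInstance<Table>();
                builder.Add(columnSelection(table), value); // record the column/value pair right away
                return new ChainedSelector<Table>(builder);
            }
        }

        public class ChainedSelector<Table> : IChainedSelector<Table> where Table : IDatabaseTable {
            private readonly InsertStatementBuilder builder;

            public ChainedSelector(InsertStatementBuilder builder) {
                this.builder = builder;
            }

            public ITableSelector<Table> And() {
                return new TableSelector<Table>(builder); // keep appending to the same INSERT
            }

            public IQuery End() {
                return builder.EndQuery();
            }
        }

    The trade-off in this sketch is that each ForColumn records its pair immediately, so End() only has to hand back the finished query.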

    Oh man... oh man! JP's puttin' on a contest and givin' away free stuff... Lots of it! *drool*

    It's kind of a cool idea... The gist of it is to describe how YOU are contributing to the community, and how YOU are leaving an impact on those around you. It's all about YOU!

    I can think of a few people that have definitely left an impact on my life. If anyone's left an impact on you, why not nominate them!

    The first place winner of the contest...

    wins a seat at the Nothin' But .NET Boot Camp, held in Las Vegas! That's about $3,000 for that seat alone! Plus... they get a copy of Visual Studio 2008 Team Suite... plus a full year's MSDN Premium subscription.

    I have no idea what an MSDN Premium subscription is... so I Googled it, here's what I found:

    "MSDN Subscriptions are the ultimate resource for professional Developers, teams and organizations..." - http://msdn.microsoft.com/en-us/subscriptions/aa718661.aspx

    Oh man, oh man... not only do you get a first class ticket to one of the most fulfilling courses you'll ever take, but.... you also get the ultimate resource for professional developers. If that doesn't get you all the fame and glory you've ever wanted... I'm not sure what will!

    The second place winner...

    wins a stack of books and tools, and another copy of Visual Studio 2008 Team Suite with another full year's subscription to MSDN.

    The books are wicked too, not to mention expensive! You get...

    • The Pragmatic Programmer
    • Code Complete
    • Refactoring
    • Head First Design Patterns
    • Design Patterns
    • Test Driven Development
    • CLR via C#
    • Working Effectively with Legacy Code
    • Domain Driven Design
    • Agile, Principles, Patterns and Practices in C#.

    I've read every one of these books except for the last one. I can honestly say I would read them over and over again. In fact, I am... and the tools.... oh man the tools... once you go ReSharper you'll never go back to naked studio, you just can't physically do it. It makes you physically ill... I puked once trying to do it... it was messy!

    The third place winner...

    wins a gift card from Amazon worth $140 bucks... That's quite a few books for a starving reader.

    For more information go read up on the contest here...

    How do you know if you really know someone?

    I remember asking myself this question a lot as a kid. As I grew up and developed relationships with people, and cut relationships with people I've found that I never really got to know someone until I've seen them express different emotions.

    In order to get to know someone, in my humble opinion, you've got to see them upset. You have got to see them mad, glad, sad and every color of the rainbow. You don't truly get to know someone until you have seen them shout, cry, and laugh till it hurts.

    It's how you react that truly defines who you are! (or who you want to be...)

    I was recently listening to an episode of the ALT.NET podcast with guests Jeremy Miller, David Laribee, and Chad Myers. I remember Chad saying something to the effect of being slightly embarrassed of the code that Jeremy was about to step in to. I realized that I felt the same way...

    I remember last year when I was just jumping into this .NET game. I had nothing to hide, I wanted people to review my work. I wanted feedback, I wanted guidance and I really worked hard to get feedback from people that I respected.

    Today, I feel more like Chad! I feel a little more defensive about the stuff that I've written. I'm more nervous about having to explain design decisions made months ago, that I don't agree with today. I'm apologetic for making choices and writing some of the code that I've written. *sigh* (The overuse of the word I, probably hides the fact that yes I am part of an agile team and a lot of the decisions were made as a team or at least in pairs.)

    Kshitij reminded me of a quote from Robin Sharma. I don't remember the exact quote, but to paraphrase, it's along the lines of...

    "If the cup is full, it will spill if you try to fill it. You must empty the cup in order to re-fill it!"

    The reason I think this quote applies is that I now realize that if we choose to be too proud to accept criticism now, then we're likely to get stuck in our ways. The work we've done was a reflection of our abilities at the time we were doing it, not a reflection of who we are today.

    Have you ever experienced that feeling when you bring someone new onto a team, and you subconsciously wonder how they're going to upset the balance of the team? Are they going to find the dirty skeletons in your code closet and expose you? Or are they going to go with the flow, accept the way things are, and keep on keepin' on?

    I think I'm trying to form a post from all these random ideas, but the point I'm trying so hard to make is this: don't be embarrassed of your skill set. If we were all superheroes, then we wouldn't have any!

    "The ghetto, let go. It's not a novelty, you can love your neighborhood, without loving poverty." - KRS ONE

    You can keep that love for software, using alternative methods...

May

    I'm currently reading...

    Applying UML and Patterns: An Introduction to Object-Oriented Analysis and Design and Iterative Development (3rd Edition)
    by Craig Larman

    Read more about this book...

    (I'm actually reading the 2nd edition)

    This is a good book. It started off really boring... really boring! I picked up this book as an introduction to object oriented programming, and it started off with a lot of talk on UML, documentation and the Rational Unified Process. But then I got to chapter 16... "GRASP: Designing Objects with Responsibilities".

    Here's an excerpt that I enjoyed!

    "Perhaps the most common mistake when creating a domain model is to represent something as an attribute when it should have been a concept. A rule of thumb to help prevent this mistake is:

    If we do not think of some conceptual class X as a number or text in the real world, X is probably a conceptual class, not an attribute."

    Here's a definition for a Domain Model...

    "The Domain Model provides a visual dictionary of the domain vocabulary and concepts from which to draw inspiration for the naming of some things in the software design."

    Chapter 16 is great so far, it talks about how to decompose responsibilities for objects using an acronym (I'm not a fan of acronyms) called GRASP. GRASP stands for General Responsibility Assignment Software Patterns.

    Craig goes on to talk about the five different patterns of GRASP. They are:

    • Information Expert: the class that has the information necessary to fulfill the responsibility (a small sketch follows this list).
    • Creator: a class that has the responsibility to create an instance of another class.
    • High Cohesion: increase the measure of how strongly related and focused the responsibilities of an element are.
    • Low Coupling: decrease the amount a class is connected to, has knowledge of, or relies on other elements.
    • Controller: a class with the responsibility of receiving or handling a system event message.
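
    Here's a small illustration of Information Expert (my example, not one from the book): the Order holds the line items, so the Order is the class given the responsibility of computing its own total.

        public class OrderLine {
            private readonly decimal unit_price;
            private readonly int quantity;

            public OrderLine(decimal unit_price, int quantity) {
                this.unit_price = unit_price;
                this.quantity = quantity;
            }

            public decimal Subtotal() {
                return unit_price * quantity;
            }
        }

        public class Order {
            private readonly List<OrderLine> lines = new List<OrderLine>();

            public void Add(OrderLine line) {
                lines.Add(line);
            }

            public decimal Total() {
                return lines.Sum(line => line.Subtotal()); // the class with the data does the work
            }
        }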

    A couple of days ago I posted something on an XmlEnumerable: an object that knows how to traverse an XML document in linear form. After talking with Adam, he suggested that I simplify the implementation with a little XPath action.

     1   public class XmlElementEnumerable : IEnumerable<IXmlElement> {
     2       private XmlElement rootElement;
     3       private IMapper<XmlElement, IXmlElement> mapper;
     4   
     5       public XmlElementEnumerable(XmlElement rootElement) {
     6           this.rootElement = rootElement;
     7           mapper = new XmlElementMapper();
     8       }
     9   
    10       public IEnumerator<IXmlElement> GetEnumerator() {
    11           foreach (var node in rootElement.SelectNodes("//*")) {
    12               yield return mapper.MapFrom(node.DownCastTo<XmlElement>());
    13           }
    14       }
    15   
    16       IEnumerator IEnumerable.GetEnumerator() {
    17           return GetEnumerator();
    18       }
    19   }
    

    Diving a little deeper, I think using XPath expressions is probably a lot more efficient for traversing a document.

    If you ever need to traverse each XML element in an XML document, you may want to implement your own XmlEnumerable. I've had some issues with the .NET XML API recently. The built-in .NET XmlElement implements the non-generic IEnumerable, which means you've got to foreach through a bunch of objects.

    1   foreach (object o in rootElement) {
    2       
    3   }
    

    This kind of scares me a bit because of the Xml object hierarchy. The reason being, there are several sub classes of XmlNode, and trying to understand this object hierarchy is not interesting to me.

    xml_node_derivatives

    Rather than having to check whether each item is an XML element, we just created our own abstraction that we prefer to work with, and map from the framework's XmlElement to our own IXmlElement.

    1   public interface IXmlElement : IEquatable<IXmlElement>, IEnumerable<IXmlElement> {
    2       string Name();
    3       string ToXml();
    4   }
    

    Let's say you need to traverse an XML document that looks like this:

     1   <root>
     2     <GrandParent>
     3       <Parent>
     4         <Child>
     5           <GrandChild></GrandChild>
     6         </Child>
     7       </Parent>
     8     </GrandParent>
     9     <GrandParent>
    10       <Parent>
    11         <Child>
    12           <GrandChild></GrandChild>
    13         </Child>
    14       </Parent>
    15     </GrandParent>
    16     <Cousin></Cousin>
    17   </root>
    

    If we were to traverse this document we would expect to find 10 elements

     1   [Test]
     2   public void should_traverse_through_each_element() {
     3       CreateSUT().Count().ShouldBeEqualTo(10);
     4   }
     5   
     6   [Test]
     7   public void should_contain_one_root_element() {
     8       CreateSUT()
     9           .Where(x => x.Name().Equals("root"))
    10           .Count()
    11           .ShouldBeEqualTo(1);
    12   }
    13   
    14   [Test]
    15   public void should_contain_two_grand_parents() {
    16       CreateSUT()
    17           .Where(x => x.Name().Equals("GrandParent"))
    18           .Count()
    19           .ShouldBeEqualTo(2);
    20   }
    

    We could walk this xml structure and query it, using an API that we prefer by building our own IEnumerable and extension methods for querying.

     1   public class XmlElementEnumerable : IEnumerable<IXmlElement> {
     2       private XmlElement rootElement;
     3       private IMapper<XmlElement, IXmlElement> mapper;
     4   
     5       public XmlElementEnumerable(XmlElement rootElement) {
     6           this.rootElement = rootElement;
     7           mapper = new XmlElementMapper();
     8       }
     9   
    10       public IEnumerator<IXmlElement> GetEnumerator() {
    11           yield return mapper.MapFrom(rootElement);
    12           foreach (var element in RecursivelyWalkThrough(rootElement)) {
    13               yield return mapper.MapFrom(element);
    14           }
    15       }
    16   
    17       IEnumerator IEnumerable.GetEnumerator() {
    18           return GetEnumerator();
    19       }
    20   
    21       private IEnumerable<XmlElement> RecursivelyWalkThrough(XmlNode element) {
    22           if (element.HasChildNodes) {
    23               foreach (var childNode in element.ChildNodes) {
    24                   if (childNode is XmlElement) {
    25                       yield return childNode.DownCastTo<XmlElement>();
    26                       foreach (var xmlElement in RecursivelyWalkThrough(childNode.DownCastTo<XmlElement>())) {
    27                           yield return xmlElement;
    28                       }
    29                   }
    30               }
    31           }
    32       }
    33   }
    

    Now you can traverse your own xml data structures using a more strongly typed API that suits your needs. For example:

     1   public class RawXmlElement : IXmlElement {
     2       public RawXmlElement(string rawXml) {
     3           _rawXml = rawXml;
     4       }
     5   
     6       public string ToXml() {
     7           return _rawXml;
     8       }
     9   
    10       public string Name() {
    11           return Parse.Xml(this).ForItsName();
    12       }
    13   
    14       public bool Equals(IXmlElement other) {
    15           return other != null && other.ToXml().Equals(_rawXml);
    16       }
    17   
    18       public override bool Equals(object obj) {
    19           return ReferenceEquals(this, obj) || Equals(obj as IXmlElement);
    20       }
    21   
    22       public override int GetHashCode() {
    23           return _rawXml != null ? _rawXml.GetHashCode() : 0;
    24       }
    25   
    26       IEnumerator IEnumerable.GetEnumerator() {
    27           return GetEnumerator();
    28       }
    29   
    30       public IEnumerator<IXmlElement> GetEnumerator() {
    31           return new XmlEnumerable(this).GetEnumerator();
    32       }
    33   
    34       public override string ToString() {
    35           return _rawXml;
    36       }
    37   
    38       private readonly string _rawXml;
    39   }
    

    Or...

     1   public class SingleXmlElement<T> : IXmlElement {
     2       public SingleXmlElement(string elementName, T elementValue) {
     3           this.elementName = elementName;
     4           this.elementValue = elementValue;
     5       }
     6   
     7       public string ToXml() {
     8           return ToString();
     9       }
    10   
    11       public string Name() {
    12           return Parse.Xml(this).ForItsName();
    13       }
    14   
    15       public IEnumerator<IXmlElement> GetEnumerator() {
    16           return new XmlEnumerable(this).GetEnumerator();
    17       }
    18   
    19       public override string ToString() {
    20           return string.Format("<{0}>{1}</{0}>", elementName, elementValue);
    21       }
    22   
    23       public bool Equals(IXmlElement other) {
    24           return other != null && ToString().Equals(other.ToXml());
    25       }
    26   
    27       public override bool Equals(object obj) {
    28           return ReferenceEquals(this, obj) || Equals(obj as IXmlElement);
    29       }
    30   
    31       public override int GetHashCode() {
    32           return
    33               (elementName != null ? elementName.GetHashCode() : 0) +
    34               29*(elementValue != null ? elementValue.GetHashCode() : 0);
    35       }
    36   
    37       IEnumerator IEnumerable.GetEnumerator() {
    38           return GetEnumerator();
    39       }
    40   
    41       private readonly string elementName;
    42       private readonly T elementValue;
    43   }
    

    Hopefully, this helps someone else who's drowning in xml!

    Tim Ferriss writes:

    "Brain activation for listening is cut in half if the person is trying to process visual input at the same time. A recent study at The British Institute of Psychiatry showed that checking your email while performing another creative task decreases your IQ in the moment 10 points."

    This post is definitely worth reading!

    I received a question the other day on building menus in a WinForms application. I wasn't sure of a clean way of doing it, so I thought I would put together a sample app to see if I could come up with something. I'm not sure I'm completely happy with what I've got so far, but my goal was to be able to drop in new menu items and menu groups without a lot of ceremony or configuration.

    The guts of it depend on Castle Windsor to glue most of the pieces together using the mass component registration API. I found it really hard to test, but I was pleased with how easily it just kind of worked!

     1   public class WindsorContainerFactory : IWindsorContainerFactory {
     2       private static IWindsorContainer container;
     3       private IComponentExclusionSpecification criteriaToSatisfy;
     4 
     5       public WindsorContainerFactory() : this(new ComponentExclusionSpecification()) {}
     6 
     7       public WindsorContainerFactory(IComponentExclusionSpecification criteriaToSatisfy) {
     8           this.criteriaToSatisfy = criteriaToSatisfy;
     9       }
    10 
    11       public IWindsorContainer Create() {
    12           if (null == container) {
    13               container = new WindsorContainer();
    14               container.Register(
    15                   AllTypes
    16                       .Pick()
    17                       .FromAssembly(GetType().Assembly)
    18                       .WithService
    19                       .FirstInterface()
    20                       .Unless(criteriaToSatisfy.IsSatisfiedBy)
    21                       .Configure(
    22                       delegate(ComponentRegistration registration) {
    23                           this.LogInformational("{1}-{0}", registration.Implementation, registration.ServiceType.Name);
    24                           if (registration.Implementation.GetInterfaces().Length == 0) {
    25                               registration.For(registration.Implementation);
    26                           }
    27                       })
    28                   );
    29           }
    30           return container;
    31       }
    32   }
    

    The other neat piece that kind of made things easy to get up and running was the concept of a default repository. (I picked up this bit of knowledge from Oren at DevTeach.)

     1   public class DefaultRepository<T> : IRepository<T> {
     2       private IDependencyRegistry registry;
     3 
     4       public DefaultRepository(IDependencyRegistry registry) {
     5           this.registry = registry;
     6       }
     7 
     8       public IEnumerable<T> All() {
     9           return registry.AllImplementationsOf<T>();
    10       }
    11   }
    

    This was the only implementation of a repository in the system, and it was used for both IRepository<IMenuItem> and IRepository<ISubMenu>. I just created a new implementation of an IMenuItem or ISubMenu and it got picked up via Windsor's mass component registration.

     1   public class MainMenuPresenter : IMainMenuPresenter {
     2       private readonly IMainMenuView mainMenu;
     3       private readonly IRepository<ISubMenu> repository;
     4       private readonly ISubMenuItemComparer comparer;
     5 
     6       public MainMenuPresenter(IMainMenuView mainMenu, IRepository<ISubMenu> repository, ISubMenuItemComparer comparer) {
     7           this.mainMenu = mainMenu;
     8           this.repository = repository;
     9           this.comparer = comparer;
    10       }
    11 
    12       public void Initialize() {
    13           foreach (var subMenuToAddToMainMenu in repository.All().SortedUsing(comparer)) {
    14               mainMenu.Add(subMenuToAddToMainMenu);
    15           }
    16       }
    17   }
    

    I also spent a little time playing with Gallio. I had some issues with conflicts between the version of Castle.Microkernel that I was toying with and the one that comes with Gallio. I wasn't able to resolve the issue, but after looking into the concept behind Gallio, I like the idea. Kind of neat stuff!

    Here's what I came up with... Thank you Mr. JP for the inspiration!

    Source can be downloaded here!

    I can't stress enough how many ideas in this project came from concepts learned at the Nothin' But .NET boot camp. If you're in the area, you should definitely go check out the Vancouver course coming up next month!

    Last week my family and I were in Toronto, Ontario so that I could attend DevTeach, a conference put on by developers for developers, and it was a tonne of fun. Not only did my wife, daughter and I get to check out Toronto and visit family, but I also got to bump into some more of the industry's greats and hear them speak.

    Before I continue I've got to plug this little cafe that we accidentally stumbled into one night. My daughter, wife and her cousin were out looking for the MuchMusic building when we got a little lost. We ended up walking down McCaul Street and spotted this tiny little cafe on the corner of Elm St. It looked pretty cool from the outside and just looked kind of out of place. We're so glad we stopped in... The place was called "MangiaCake Panini Shoppe" and they specialized in panini's and, you guessed it, cake!

    We tried a piece of the cherry cheese cake, chocolate cake, and the carrot cake, as well as a salad, a couple of panini's and a lasagna for myself. It was absolutely awesome! The best part was the additional attention we got from the owner named Raj. He was just great and made the experience so much more...

    If you're in the Toronto, Ontario area you have to check out MangiaCake Panini Shoppe located at 160 McCaul Street.

    Back to the conference...

    Day 1: Tuesday, May 13, 2008

    8-9:15am: Keynote by Scott Hanselman

    Scott talked about Dynamic Data web applications, Astoria, and tools like the Fiddler HTTP proxy, LINQPad and TcpTrace.

    9:30-11:00am: Home-Grown Production System Monitoring: Creating a Bridge Between Development and Operations by Owen Rogers

    I really enjoyed Owen's talk. I thought it was informative and backed by real project experience. Some of the things I learned:

    Problems with log files:

    • scattered
    • not analyzed
    • not accessible
    • size constrained
    • multiple logs (different time zones?)

    You should log for immediate data, and limit the footprint of logging on client machines. Owen mentions that a great book to read is "Release It" by Mike Nygard.

    11am-12:15pm: Behavior Driven Development Installed by David Laribee and Scott Bellware

    This was a great session, that showcased the direction that BDD is taking and what it means. Some of the things I learned are:

    • User stories should not have UI or technical language in it.
    • We should try getting our end users to help write the stories.
    • Acceptance criteria has technical details in it.
    • Break apart the product backlog from the release backlog and the iteration backlog.
    • When writing context based specifications use the active voice instead of the passive voice. Eg. "when an account has been opened" is in the passive voice. The active voice says "when opening an account".

    1:30-2:45pm: How to make scrum really work by Joel Semeniuk and Turning Visual Studio Into a Software Factory by Kevin McNeish

    I bounced out of the Scrum talk as soon as we started getting into Team Foundation Server, and the software factory talk wasn't exactly what I expected.

    3:00-4:15pm: Achieving Persistence Ignorance with NHibernate by James Kovacs

    This was a good talk that discussed alternatives to Active Record and how to implement an infrastructure-ignorant domain model. It covered different settings in NHibernate, how to create the mapping files and, most importantly, why you would want an infrastructure-ignorant domain model.

    4:30pm-5:45pm: Rapid (maintainable) web development with MonoRail by Oren Eini

    This was another good talk that walked through the creation of a project using MonoRail. Oren talked about the different conventions used by MonoRail and put it in contrast with the MS MVC framework. I'm definitely more curious about MonoRail and itchin' to slap something together using it.

    Day 2: Wednesday, May 14, 2008

    8-9:15am: Cross-platform Development with Mono by Geoff Norton and Planned Agility?! by David Laribee

    The Mono talk was great, and actually got me pretty excited about the project. I'm surprised by just how much the Mono team has been able to accomplish and by the quick turn around on releases. I'm definitely going to have to spend some time learning more about the project.

    The Mono talk ended a little early, so I popped into David Laribee's talk on planned agility. This was a great talk on how to bring Agile into your projects. I guess it's still a little surprising to me how many companies are still working with traditional methodologies, so it makes me feel pretty privileged to work where I do and with the great guys that I work with.

    9:30-10:45am: Recommended Practices for Continuous Integration by Owen Rogers

    This was another great talk on the concepts of Continuous Integration and how to achieve it with an automated build server. Owen talked about the inception of the CruiseControl.NET project and shared his experiences with how people were using it effectively and how people were abusing it.

    11:00am-12:15pm: Busy .NET Developer's Guide to F# by Ted Neward

    Mr. Ted knows his stuff. This was a great talk about F# and the functional programming paradigm. A lot of it was over my head, but I enjoyed the discussion around why this is important and what some of the potential benefits of this style of development are. Concurrency and side-effect-free functions were topics that kept coming up. I will definitely have to commit some time to better understand functional programming.

    1:30pm-2:45pm: Blackbelt Configuration for New Projects by Jeffrey Palermo

    Mr. Jeffrey gave a great talk on how to take control of your projects, offering suggestions on project structure, how to set up a single-user development environment, the importance of version control, dependency management, the importance of automated deployments, and application architecture.

    To be continued...

    A great book to read is...

    Refactoring: Improving the Design of Existing Code (The Addison-Wesley Object Technology Series)
    by Martin Fowler, Kent Beck, John Brant, William Opdyke, Don Roberts

    Read more about this title...

    "Any fool can write code that a computer can understand. Good programmers write code that humans can understand."

    "The first time you do something, you just do it. The second time you something similar, you wince at the duplication, but you do the duplicate thing anyway. The third time you do something similar, you refactor."

    Introduce Local Extension: A server class you are using needs several additional methods, but you can't modify the class.

    Create a new class that contains these extra methods. Make this extension class a subclass or a wrapper of the original.

    E.g From this...

     1     public interface IController{
     2         void Execute();
     3     }
     4     
     5     public class Controller : IController {
     6         protected void RenderView(string name, object data){
     7             //... note that this is a protected method
     8         }
     9         
    10         public void Execute(){
    11             //...
    12         }
    13     }
    

    To this...

    1     public interface IViewRenderer{
    2         void Render<T>(string name, T data);
    3     }
    4     
    5     public class LocalExtensionController : Controller, IViewRenderer {
    6         public void Render<T>(string name, T data){
    7             RenderView(name, data);
    8         }
    9     }
    

    Replace Conditional with Polymorphism: You have a conditional that chooses different behavior depending on the type of an object.

    Move each leg of the conditional to an overriding method in a subclass. Make the original method abstract.

    E.g From this...

     1     public class Bird{
     2         public Bird(BirdType type){
     3             _type = type;
     4         }
     5         
     6         public double GetSpeed(){
     7             switch(_type){
     8                 case BirdType.EUROPEAN:
     9                     return 5;
    10                 
    11                 case BirdType.AFRICAN:
    12                     return 10;
    13                     
    14                 case BirdType.NORWEGIAN_BLUE:
    15                     return 20;
    16             }
    17             throw new ArgumentException();
    18         }
    19         
    20         private BirdType _type;        
    21     }
    22     
    23     public enum BirdType{
    24         EUROPEAN,
    25         AFRICAN,
    26         NORWEGIAN_BLUE
    27     }
    

    To this...

     1     public interface IBird{
     2         double GetSpeed();
     3     }
     4     
     5     public class EuropeanBird : IBird {
     6         public double GetSpeed(){
     7             return 5;
     8         }
     9     }
    10 
    11     public class AfricanBird : IBird {
    12         public double GetSpeed() {
    13             return 10;
    14         }
    15     }
    16 
    17     public class NorwegianBlueBird : IBird {
    18         public double GetSpeed() {
    19             return 20;
    20         }
    21     }        
    

    Q: Should I be worried if my username and password are sent back and forth to a server in clear text, in a cookie, upon each request????

    dev.teach.clear.text

    A couple of months ago I finished reading...

    xUnit Test Patterns: Refactoring Test Code (The Addison-Wesley Signature Series)
    by Gerard Meszaros

    Read more about this book...

     

    This was a thick book that discusses unit test smells, unit test refactorings, unit test patterns... and just about anything else related to unit testing. Here's a little of what I've learned from this book...

    Defect Localization

    "Mistakes happen! Of course, some mistakes are much more expensive to prevent than to fix. Suppose a bug does slip through somehow and shows up in the Integration Build. If our unit test are fairly small (i.e., we test only a single behavior in each one), we should be able to pinpoint the bug quickly based on which test fails. This specificity is one of the major advantages that unit tests enjoy over customer tests. The customer tests tell us that some behavior expected by the customer isn't working; the unit tests tell us why. We call this phenomenon Defect Localization. If a customer test fails but no unit tests fail, it indicates a Missing Unit Test."

    Tests as Documentation

    "Without automated tests, we would need to pore over the SUT code trying to answer the question, 'What should be the result if ...?' With automated tests, we simply use the corresponding Tests as Documentation; they tell us what the result should be. If we want to know how the system does something, we can turn on the debugger, run the test, and single-step through the code to see how it works. In this sense, the automated tests act as a form of documentation for the SUT."

    "When it is not important for something to be seen in the test method, it is important that it not be seen in the test method!"

    Test Doubles

    A test double is any object or component that we install in place of the real component for the express purpose of running a test. Depending on the reason why we are using it, a Test Double can behave in one of several ways (a minimal hand-rolled sketch follows the list).

    • Dummy Object: an object that is passed to the SUT as an argument but is never actually used.
    • Test Stub: an object that replaces a real component that the SUT depends on so that different inputs can be applied to the SUT.
    • Test Spy: an object that can act as an observation point for the indirect outputs of the SUT.
    • Mock Object: an object that replaces a real component that the SUT depends on so that the SUT's indirect outputs can be verified.
    • Fake Object: an object that replaces the functionality of the real SUT dependency with an alternate implementation that provides the same functionality.
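
    To make a couple of those distinctions concrete, here's a tiny hand-rolled sketch; the IMailer interface and its names are mine, not the book's:

        using System.Collections.Generic;

        // hypothetical collaborator that the SUT depends on
        public interface IMailer {
            bool CanSendTo(string address);
            void Send(string address, string body);
        }

        // Test Stub: feeds canned answers into the SUT so we can control its indirect inputs
        public class StubMailer : IMailer {
            public bool CanSendTo(string address) { return true; }
            public void Send(string address, string body) { }
        }

        // Test Spy: records the SUT's indirect outputs so the test can inspect them afterwards
        public class SpyMailer : IMailer {
            public readonly IList<string> SentTo = new List<string>();
            public bool CanSendTo(string address) { return true; }
            public void Send(string address, string body) { SentTo.Add(address); }
        }

    A Mock Object would roll the spy's recording and the verification into one self-checking object.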

    Strict vs Loose

    Mock objects come in two basic flavors:

    • Strict Mock: fails the test if incorrect calls are received.
    • Loose (lenient) Mock: fails if expected calls are not received, but is lenient if additional calls are received.

    Need-Driven Development

    "This 'outside-in' approach to writing and testing software combines the conceptual elegance of the traditional 'top-down' approach to writing code with modern TDD techniques supported by Mock Objects. It allows us to build and test the software layer by layer, starting at the outermost layer before we have implemented the lower layers."

    Test Smells

    "Developers Not Writing Tests may be caused by an overly aggressive development schedule, supervisors who tell developers not to 'waste time writing tests,' or developers who do not have the skills to write tests. Other potential causes might include an imposed design that is not conducive to testing or a test environment that leads to Fragile Tests. Finally, this problem could result from Lost Tests - tests that exist but are not included in the All Tests Suite used by developers during check-in or by the automated build tool."

    "Another productivity-sapping smell is Frequent Debugging. Automated unit tests should obviate the need to use a debugger in all but rare cases, because the set of tests that are failing should make it obvious why the failure is occurring. Frequent Debugging is a sign that the unit tests are lacking in coverage or are trying to test to much functionality at once."

    Fragile Test: "A test fails to compile or run when the SUT is changed in ways that do not affect the part the test is exercising... Fragile tests increase the cost of test maintenance by forcing us to visit many more tests each time we modify the functionality of the system or the fixture."

    This headache is typical if you're working with strict mock objects. I experienced this pain when working on a project using NMock. I couldn't find a clean separation between strict and loose mocks using NMock. There was only the concept of Strict Mocks and Stubs.

    Slow Tests: "The tests take too long to run... They reduce the productivity of the person running the test. Instead, the developers wait until the next coffee break or another interruption before running them. Or, whenever they run the tests, they walk around and chat with other team members..."

    The main disadvantages of using Fit are described here:

    • The test scenarios need to be very well understood before we can build the Fit Fixture. We then need to translate each test's logic into a tabular representation; this isn't always a good fit.
    • The tests need to employ the same SUT interaction logic in each test. To run several different styles of tests, we would probably have to build one or more different fixtures for each style of test. Building a new fixture is typically more complex than writing a few Test Methods.
    • Fit tests aren't normally integrated into developers' regression tests that are run via xUnit. Instead, these tests must be run separately - which introduces the possibility that they will not be run at each check-in.

    Pull

    "A concept from lean manufacturing that states that things should be produced only once a real demand for them exists. In a 'pull system,' upstream assembly lines produce only enough products to replace the items withdrawn from the pool that buffers them from the downstream assembly lines. In software development, this idea can be translated as follows: 'We should only write methods that have already been called by other software and only handle those cases that the other software actually needs.' This approach avoids speculation and the writing of unnecessary software, which is one of software development's key forms of inventory (which is considered waste in lean systems)."

    I'm just about finished reading...

    Agile Web Development with Rails, 2nd Edition
    by Dave Thomas, David Hansson, Leon Breedt, Mike Clark, James Duncan Davidson, Justin Gehtland, Andreas Schwarz

    Read more about this book...

    This book focuses on the Ruby on Rails framework for developing web applications. It touches very lightly on the Ruby language itself, and mostly talks about things like Model View Controller, Active Record, and Action Pack and how they're implemented in RoR. The book starts off with a light primer on what MVC is, which I enjoyed, then moves on to how to install RoR, then jumps right into building a quick application with it. I enjoyed the discussion of Testing and the concept of Migrations.

    Before jumping right into chapter one I quickly read through the first appendix, titled "Introduction to Ruby", which helped a little bit, but I probably would have done better if I had first read a book just on the Ruby language. I think I would have found it more interesting as well. There were times when the book got so deep into the nitty-gritty details of the RoR framework that I just completely lost interest. I chose to read this book to get some high-level ideas, and I wasn't as interested in the tiny details of the framework. There are tonnes of great ideas in this book that I recognize being adopted quite a bit in the .NET community, Migrations and MVC being a couple of my favorites.

    Here's a few excerpts that I enjoyed from this book...

    Convention over Configuration

    "Rails gives you lots of opportunities to override this basic workflow ... As it stands, our story illustrates convention over configuration, one of the fundamental parts of the philosophy of Rails. By providing convenient defaults and by applying certain conventions, Rails applications are typically written using little or no external configuration - things just knit themselves together in a natural way."

    Migrations

    "Over the years, developers have come up with ways of dealing with this issue. One scheme is to keep the Data Definition Language (DDL) statements that define the schema in source form under version control. Whenever you change the schema, you edit this file to reflect the changes. You then drop your development database and re-create the schema from scratch by applying your DDL. If you need to roll back a week, the application code and the DDL that you check out from the version control system are in step: when you re-create the schema from the DDL, your database will have gone back in time.

    Except... because you drop the database every time you apply the DDL, you lose any data in your development database. Wouldn't it be more convenient to be able to apply only those changes that are necessary to move a database from version X to version Y? This is exactly what Rails migrations let you do."

    E.g. 001_create_products.rb

     1   class CreateProducts < ActiveRecord::Migration
     2     def self.up
     3       create_table :products do |t|
     4         t.column :title, :string
     5         t.column :description, :text
     6         t.column :image_url, :string
     7       end
     8     end
     9     def self.down
    10       drop_table :products
    11     end
    12   end
    

    Pragmatic Ajax-ification

    "In the old days, browsers were treated as really dumb devices. When you wrote a browser-based application, you'd send stuff down to the browser and then forget about that session. At some point, the user would fill in some form fields or click a hyperlink, and your application would get woken up by an incoming request. It would render a complete page back to the user, and the whole tedious process would start afresh...

    Whenever you work with AJAX, it's good to start with the non-AJAX version of the application and then gradually introduce AJAX features."

    No REST For The Wicked

    "REST stands for REpresentational State Transfer, which is basically meaningless. What it really means is that you use HTTP verbs (GET, POST, DELETE, and so on) to send requests and responses between applications."

    Performance Testing

    "Testing isn't just about whether something does what it should. We might also want to know whether it does it fast enough.

    Before we get too deep into this, here's a warning. Most applications perform just fine most of the time, and when they do start to get slow, it's often in ways we would never have anticipated. For this reason, it's normally a bad idea to focus on performance early in development. Instead, we recommend using performance testing in two scenarios, both late in the development process."

    Statement Modifiers

    "Ruby statement modifiers are a useful shortcut if the body of an if or while statement is just a single expression. Simply write the expression, followed by if or while and the condition."

    The following is valid Ruby syntax:

    puts "Danger, Will Robinson" if radiation > 3000

    I would love to express the following C# syntax....

    1   public void AddLicense(ILicense license){
    2       if(license.IsValid()){
    3           licenseRepository.Add(license);
    4       }
    5   }
    

    Like this...

    1   public void AddLicense(ILicense license){
    2       licenseRepository.Add(license).If(license.IsValid());
    3   }
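
    Just for fun, here's a rough sketch of how you might fake that statement-modifier feel with an extension method on Action. It can't read exactly like the wished-for syntax, since Add returns void before If would ever see the condition, so the call has to be wrapped in a lambda. The names here are mine:

        using System;

        public static class StatementModifierExtensions {
            // crude approximation of Ruby's trailing "if": run the action only when the condition holds
            public static void If(this Action action, bool condition) {
                if (condition) {
                    action();
                }
            }
        }

        // against the example above, usage would end up looking something like:
        //   ((Action) (() => licenseRepository.Add(license))).If(license.IsValid());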
    

    Thank you Mr. Aaron, today we just grabbed the latest beta version of Rhino.Mocks and our test times dropped significantly....

    Our times before the update were 450ish seconds to run all the unit tests and create the report:

    before

    Our times after are 100ish seconds:

    after

    Patterns of Enterprise Application Architecture (The Addison-Wesley Signature Series)
    by Martin Fowler

    Read more about this book...

    Defines Unit of Work as:

    "Maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems." - PoEAA

    I've been playing with some different ideas on how you can implement a unit of work in a win forms application.

    Here was the idea of the usage:

     1   public void SomeMethod() {
     2       using (var unitOfWork = UnitOfWork.StartFor<IPerson>())
     3       {
     4           var stacey = new Person("stacey");
     5           var veronica = new Person("veronica");
     6           var betty = new Person("betty");
     7   
     8           stacey.NewNumberIs("312-7467");
     9   
    10           unitOfWork.Commit();
    11       }
    12   }
    

    When the unit of work is asked to commit, the new and modified instances are committed to the person repository, in this case my imaginary black book.

     1   public class BlackBook : IRepository<IPerson> {
     2       private IList<IPerson> associates;
     3   
     4       public BlackBook() : this(new List<IPerson>()) {
     5       }
     6   
     7       public BlackBook(IList<IPerson> associates) {
     8           this.associates = associates;
     9       }
    10   
    11       public void Add(IPerson newAssociate) {
    12           associates.Add(newAssociate);
    13       }
    14   
    15       public void Update(IPerson updatedAssociate) {
    16       }
    17   }
    

    Here's how it works... Person inherits from "DomainSuperType". In the layer super type, the no-argument constructor registers itself with the current unit of work. I really don't like this because it makes all the domain objects aware of the surrounding infrastructure, and makes them much more difficult to test.

    Next, all components have to be decorated with the "Serializable" attribute so that I could manage dirty object tracking. This also sucks...

     1   [Serializable]
     2   public class DomainSuperType<T> where T : class {
     3       public DomainSuperType() {
     4           UnitOfWork.StartFor<T>().Register(this as T);
     5       }
     6   }
     7   public interface IPerson
     8   {
     9       void NewNumberIs(string newNumber);
    10   }
    11   
    12   [Serializable]
    13   public class Person : DomainSuperType<IPerson>, IPerson {
    14       private string name;
    15       private string knownPhoneNumber;
    16   
    17       public Person(string name) {
    18           this.name = name;
    19       }
    20   
    21       public void NewNumberIs(string newNumber) {
    22           knownPhoneNumber = newNumber;
    23       }
    24   }
    

    The unit of work delegates to a registry of units of work to retrieve the unit of work applicable to type ....

    1   public static class UnitOfWork {
    2       public static IUnitOfWork<T> StartFor<T>() {
    3           return Resolve.DependencyFor<IUnitOfWorkRegistry>().StartUnitOfWorkFor<T>();
    4       }
    5   }
    

    The unit of work registry creates a unit of work for a type if one hasn't been started yet; otherwise it returns the already started unit of work. This registry is similar to an identity map, using type T as the identifier.

     1   public class UnitOfWorkRegistry : IUnitOfWorkRegistry {
     2       private IDictionary<Type, object> unitsOfWork;
     3       private IUnitOfWorkFactory factory;
     4   
     5       public UnitOfWorkRegistry(IUnitOfWorkFactory factory) {
     6           this.factory = factory;
     7           unitsOfWork = new Dictionary<Type, object>();
     8       }
     9   
    10       public IUnitOfWork<T> StartUnitOfWorkFor<T>() {
    11           if (unitsOfWork.ContainsKey(typeof (T)))
    12           {
    13               return (IUnitOfWork<T>) unitsOfWork[typeof (T)];
    14           }
    15           var unitOfWork = factory.CreateFor<T>();
    16           unitsOfWork.Add(typeof (T), unitOfWork);
    17           return unitOfWork;
    18       }
    19   }
    

    The unit of work factory leverages the dependency resolver to retrieve an implementation of the repository applicable to type T.

     1   public class UnitOfWorkFactory : IUnitOfWorkFactory {
     2       private IDependencyResolver resolver;
     3   
     4       public UnitOfWorkFactory(IDependencyResolver resolver) {
     5           this.resolver = resolver;
     6       }
     7   
     8       public IUnitOfWork<T> CreateFor<T>() {
     9           return new WorkSession<T>(resolver.GetMeAnImplementationOf<IRepository<T>>());
    10       }
    11   }
    

    Each time the unit of work factory is asked to create a new unit of work it creates a fresh instance of a work session.

     1   public class WorkSession<T> : IUnitOfWork<T> {
     2       public WorkSession(IRepository<T> repository) : this(repository, new ObjectToRegisteredObjectMapper()) {
     3       }
     4   
     5       public WorkSession(IRepository<T> repository, IObjectToRegisteredObjectMapper mapper) {
     6           this.mapper = mapper;
     7           this.repository = repository;
     8           registeredInstances = new HashSet<IRegisteredInstanceOf<T>>();
     9       }
    10   
    11       public void Register(T newInstanceToRegister) {
    12           registeredInstances.Add(mapper.MapFrom(newInstanceToRegister));
    13       }
    14   
    15       public void Commit() {
    16           foreach (var registeredInstance in registeredInstances)
    17           {
    18               registeredInstance.CommitTo(repository);
    19           }
    20       }
    21   
    22       public void Dispose() {
    23           registeredInstances = new HashSet<IRegisteredInstanceOf<T>>();
    24       }
    25   
    26       private readonly IRepository<T> repository;
    27       private ICollection<IRegisteredInstanceOf<T>> registeredInstances;
    28       private IObjectToRegisteredObjectMapper mapper;
    29   }
    
     1 public class RegisteredInstance<T> : IRegisteredInstanceOf<T> {
     2     private readonly T originalInstance;
     3     private readonly T workingInstance;
     4 
     5     public RegisteredInstance(T newInstanceToRegister, ICloner cloner) {
     6         workingInstance = newInstanceToRegister;
     7         originalInstance = cloner.Clone(newInstanceToRegister);
     8     }
     9 
    10     public T Original() {
    11         return originalInstance;
    12     }
    13 
    14     public T WorkingCopy() {
    15         return workingInstance;
    16     }
    17 
    18     public bool HasBeenModified() {
    19         return !Original().Equals(WorkingCopy());
    20     }
    21 
    22     public void CommitTo(IRepository<T> repository) {
    23         if (HasBeenModified()) {
    24             repository.Update(WorkingCopy());
    25         }
    26         else {
    27             repository.Add(WorkingCopy());
    28         }
    29     }
    30 
    31     protected bool Equals(RegisteredInstance<T> registered) {
    32         return registered != null && Equals(originalInstance, registered.originalInstance);
    33     }
    34 
    35     public override bool Equals(object obj) {
    36         return ReferenceEquals(this, obj) || Equals(obj as RegisteredInstance<T>);
    37     }
    38 
    39     public override int GetHashCode() {
    40         return originalInstance != null ? originalInstance.GetHashCode() : 0;
    41     }
    42 }
    

    Each registered instance immediately clones the original instance to keep track of changes between the original and the current working copy. For this to work properly the cloner has to perform a deep copy, otherwise the dirty tracking won't work properly. To do the deep copy I'm using serialization, hence the "Serializable" attribute decorating each entity.

    1   public class Cloner : ICloner
    2   {
    3       public T Clone< T >( T instanceToClone )
    4       {
    5           var serializer = new Serializer< T >( );
    6           return serializer.DeserializeFrom( serializer.Serialize( instanceToClone ) );
    7       }
    8   }
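
    The Serializer<T> helper isn't shown here; one way to get the same deep copy, assuming plain binary serialization is acceptable, would be something along these lines:

        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        public class BinaryCloner : ICloner {
            // round-trips the instance through binary serialization to produce a deep copy,
            // which is why every entity has to be decorated with [Serializable]
            public T Clone<T>(T instanceToClone) {
                var formatter = new BinaryFormatter();
                using (var stream = new MemoryStream()) {
                    formatter.Serialize(stream, instanceToClone);
                    stream.Position = 0;
                    return (T) formatter.Deserialize(stream);
                }
            }
        }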
    

    So far this implementation is just a spike on how to implement a unit of work; it's really not a great implementation, but I'm hoping to solicit some feedback on ways that have worked for others.

    So last week the guys and I at work started to spike ASP.NET MVC. We're starting up a new project, and decided to take advantage of the Preview 2 version of the so far released libraries. Our experiences so far have been.... hmmm... not as expected.

    Here are a few things we've learned; hopefully they help someone else out. We're nant junkies, so the first thing we did to get going was automate the compiling, testing, running, deploying, and creation of the database with nant. We found that when running our project against aspnet_compiler.exe, it didn't recognize some of the new C# 3.0 syntax.

    1   <select name="protocolName">
    2       <% foreach( var dto in ViewData ) {%>
    3           <option><%= dto.ProtocolName %></option>
    4       <% } %>
    5   </select>
    

    The above code would raise an error with aspnet_compiler.exe. Now, this is valid C# 3.0, but the precompiler didn't know what to do with the "var" keyword. Next, the precompiler didn't know where to find the "Form()" method on the Html helper class because it's an extension method.

    1   <% using( Html.Form( Controllers.Order.Name, "submit", FormMethod.Post ) ) {%>
    

    It's kind of an interesting idea that so many methods on the "HtmlHelper" class are extension methods. The solution to getting the precompiler to recognize the C# 3.0 syntax was to drop this block of XML into the web.config.

    1   <system.codedom>
    2       <compilers>
    3           <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" 
    4               type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
    5               <providerOption name="CompilerVersion" value="v3.5"/>
    6               <providerOption name="WarnAsError" value="false"/>
    7           </compiler>
    8       </compilers>
    9   </system.codedom>
    

    Next up... testing controllers. I think the guys and I were a little surprised at just how awkward it was to test a controller; I thought a lot of time had been spent on making the controllers more testable. Our first pain point was the fact that "RenderView()" is a protected method on the Controller base class. Here's what I'm talking about...

     1   public class HomeController : Controller
     2   {
     3       public void Index( )
     4       {
     5           RenderView( "Index" );
     6       }
     7 
     8       public void About( )
     9       {
    10           RenderView( "About" );
    11       }
    12   }
    

    So let's think... how can we test that when the Index action is invoked it calls "RenderView" with an argument value of "Index"? Some people have suggested creating a Test Double. I say... booo... I use mock object frameworks so that I don't need to groom a garden of hand-rolled test stubs. Here's what we came up with... first cut, remember!

     1   public class OrderController : BaseController, IOrderController
     2   {
     3       private readonly IOrderIndexCommand indexCommand;
     4       private readonly ISubmitOrderCommand submitCommand;
     5   
     6       public OrderController( IOrderIndexCommand indexCommand, ISubmitOrderCommand submitCommand )
     7       {
     8           this.indexCommand = indexCommand;
     9           this.submitCommand = submitCommand;
    10       }
    11   
    12       public void Index( )
    13       {
    14           indexCommand.InitializeWith( this );
    15           indexCommand.Execute( );
    16       }
    17   
    18       public void Submit( )
    19       {
    20           submitCommand.InitializeWith( this );
    21           submitCommand.Execute( );
    22       }
    23   }
    

    Ok... so it's slightly more testable. Each action on the controller executes a command, after first being initialized with ... The other thing to notice is that the OrderController inherits from BaseController. BaseController is actually an adapter that implements an IViewRenderer interface.

    1   public abstract class BaseController : Controller, IViewRenderer
    2   {
    3       public void Render< TypeToBindToView >( IView view, TypeToBindToView viewData )
    4       {
    5           RenderView( view.Name( ), viewData );
    6       }
    7   }
    

    The OrderIndexCommand is initialized with an IViewRenderer.

     1   public class OrderIndexCommand : IOrderIndexCommand
     2   {
     3       private IViewRenderer viewEngine;
     4       private readonly IOrderTasks task;
     5   
     6       public OrderIndexCommand( IOrderTasks task )
     7       {
     8           this.task = task;
     9       }
    10   
    11       public void InitializeWith( IViewRenderer engineToRenderViews )
    12       {
    13           viewEngine = engineToRenderViews;
    14       }
    15   
    16       public void Execute( )
    17       {
    18           viewEngine.Render( ControllerViews.Order.Index, task.RetrieveAllProtocols( ) );
    19       }
    20   }
    

    If you haven't heard, JP's giving away a $70.00 book credit to Amazon. For more details check out his most recent post.

    I really enjoy reading books, but if you're low on funds, books can be quite pricey, especially tech books. This is a great offer, and anyone interested should definitely take the man up on it. Even if YOU don't need the books, or the credit, I'm sure you can think of someone who could. Let them know...

    Why do I even care?

    Because I know how hard it was to purchase books and support a family. I'm in much better shape now, and would love for someone else who needs a leg up to win an opportunity to be successful. Do you know someone that could use a little help?

    Wow... I don't know what it is, but right after the ALT.NET conference I was pretty pumped up and excited, yet these days I'm feeling a little low. It's amazing how many young, talented people there are out in the industry. It's even more amazing to see how fast people are moving and growing.

    The guys on my team and I try hard to stay up on what's new... and what the cool kids are doing. But these days it's just making me dizzy... we've got the Eleutian guys slingin' code like crazy. This polyglot programming thing has got me feeling like I need to go add more languages to my vocab. I'm getting sick of checking my gmail, because each time I do it looks like the ALT.NET mailing list has just puked all over my monitor.

    There are new frameworks flying out like ASP.NET MVC, Moq, Prism, Silverlight, WPF... then debates about how to write tests, what BDD is, and whether the auto mocking container is a smell. Then there's the hype around Ruby and Rails, and the comparisons between dynamic and statically typed languages.

    It's got me a little dizzy, but now that I think about it... it's kind of cool how fast the industry seems to be evolving!

April

    I'm still reading...

    Effective Java(TM) Programming Language Guide (The Java Series)
    by Joshua Bloch

    Read more about this book...

     

    Here's a few quotes from the book, that I found interesting:

    "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil." - Donald E. Knuth

    "Don't sacrifice sound architectural principles for performance. Strive to write good programs rather than fast ones. If a good program is not fast enough, its architecture will allow it to be optimized. Good programs embody the principle of information hiding: Where possible, they localize design decisions within individual modules, so individual decisions can be changed without affecting the remainder of the system."

    A typical example of premature optimization is thinking about "database calls" when working up in a UI layer. These are two completely separate architectural layers and should be developed without "optimization" constraints in mind.

    For example:

    "...we have to check if its a post back, to make sure we don't make another database hit. If we retrieve the contents of the page each time the page is requested that's going to be another database hit."

    Relax... there are lots of techniques (lazy loading, identity map...) to improve performance if it's an issue. But optimizing the UI for database performance is weak, and will more than likely cause you to build a crippled UI layer before even hitting the lower layers.

    "Often attempted optimizations have no measurable effect on performance; sometimes they make it worse."

    If we take the above example: if you're constantly "optimizing" for database performance, be wary of what it's costing you. E.g. ViewState, or extra conditional code that's sooo freakin' impossible to understand that no dev wants to come back and touch it.

    It can be pointless to try to find solutions to a problem that doesn't exist.

    One of the underlying themes I noticed at ALT.NET was that there were a lot of people there, like myself, who were yearning for good mentoring. They wanted to be part of teams that had "senior" developers who could lead them and push them to grow.

    We all seek guidance; some of us are privileged to be guided by others and learn from the shared experience, and some of us pave our own paths.

    There is a very fine line between mentoring and molding, one that I think is important to distinguish. To be truly mentored by someone means that your mentor will be able to offer you challenges that push you outside of your comfort zone without telling you how. Your mentor should challenge you to be great, not tell you how it's done! A good mentor will empower you, and teach you how to think, not what to think. A good mentor knows how to learn just as much from you as he knows how to teach you.

    When you're being molded, you're shown how you're expected to do things. A blind mentor may do this to gain a sense of control, but in the process fails to trigger new ideas in those around them. This benefits no one. Not only does the mentor not gain from letting the protege be creative and allowing new ideas to trigger greater ones, but the blind mentor also locks themselves into their own ideas and never opens up to change or to the new ideas of others.

    I'm sure mentoring is a tough thing; no one wants to wear the "Hi, I'm your mentor" name tag. But perhaps there are mentors all around us. If we were to look at those around us more closely and try to understand what it is that each person can teach us, we would realize we're surrounded by mentors. Everyone has something that they can teach you; the toughest part is trying to find out what that is.

    "In any journey it is helpful to have a guide, someone who can help us in times of difficulty. Often, we do not venture forth due to fear. A good guide, a teacher, quells our fear and gives us confidence during our journey. An old Indian adage says: "When the student is ready, the teacher appears." Sri Chinmoy has a beautiful poem that talks about this journey."

    You have a multitude of questions,
    But there is only one answer:
    The road is right in front of you,
    And the guide is waiting for you.

    - wisdomofyoga.org/teachers

    I'm currently reading...

    Effective Java(TM) Programming Language Guide (The Java Series)
    by Joshua Bloch

    Read more about this book...

     

    I thought I would share an excerpt that I read this morning on the bus... it's about my favorite topic: implementation inheritance.

    "Inheritance is a powerful way to achieve code reuse, but it is not always the best tool for the job. Used inappropriately, it leads to fragile software. It is safe to use inheritance within a package, where the subclass and the superclass implementation are under the control of the same programmers. It is also safe to use inheritance when extending classes specifically designed and documented for extension. Inheriting from ordinary concrete classes across package boundaries, however, is dangerous. As a reminder, this book uses the word 'inheritance' to mean 'implementation inheritance' (when one class extends another). The problems discussed in this item do not apply to interface inheritance (when a class implements an interface or where one interface extends another)."

    "Unlike method invocation, inheritance breaks encapsulation. In other words, a subclass depends on the implementation details of its superclass for its proper function."

    So at lunch I decided to check my email, and I got one from Justice with a subject that read....

    "Did you get one of these? Fwd: ALT.NET Seattle!"

    It was a forward from David Laribee, one of the organizers of ALT.NET, that was sent out to all registered attendees. As I continued to read it said:

    "We are *FULL* and there are, I'm sorry to say, no "plus ones" at this point."

    My reaction was... GULP, my wife is going to kick my ***!

    Then a magical thing happened... I saw another email in my inbox... the subject read "ALT.NET Seattle!" and it was from Mr. David Laribee himself!

    I'm in, I'll see you cool kids this weekend in Seattle! If you're thinking about crashing the party, I would suggest that you get in touch with one of the organizers instead of pulling a mO... *sigh*

    The email in its entirety, for all the curious!

    Hi all,
    We're just about ready to launch into ALT.NET Open Spaces, Seattle. A few housekeeping notes:

    • The space opens on Friday, 4/18 from 6pm to 8pm. Saturday we'll meet for sessions between 10am and 6pm. We'll wrap up on Sunday from 10am to 2pm.
    • There is no shuttle service between the hotel (Marriott Town Center) and DigiPen (event location). Please arrange or offer rides if you can. It's Bring Your Own Ride, so be aware.
    • Event details (location, maps, times) are always available at http://altdotnet.org/events/seattle/ 
    • We are *FULL* and there are, I'm sorry to say, no "plus ones" at this point. We'll be doing a loose registration at the door and you have to be registered (you are if you're getting this message) to participate.

    If you have any questions, please send me email. I'll do my best to answer promptly.
    Looking forward to an exciting and productive meet-up!

    / Dave

    So last night my wife and I booked our tickets to Seattle, Washington. I'm heading down this weekend in hopes that I'll be able to attend the ALT.NET conference. I'm currently sitting on the wait list to get in, but hopefully it all works out.

    When the registration went up for the ALT.NET conference it was hard to predict what I would be doing and whether or not I would be able to attend. Now that it's a little closer to the date, it's a little easier to gage. Regardless of what happens we're super excited about visiting Seattle, and Redmond, Washington.

    Cross your fingers for me!

    Loading a subversion repository from a dump file isn't as hard as I thought it would be. It's as easy as:

    > svnadmin load path.to.repository.directory < repository.dump.file

    You just have to make sure that:

    • you've got subversion installed or that you can hit "svnadmin.exe" directly.
    • the "path.to.repository.directory" has a repository created in it.

    All this seems to be doing is replaying every commit that ever happened on the original repository. This is pretty sweet, especially for a classroom setup where the students afterwards want to go through different revisions to compare changes or review things learned in class.

    svn.dump

    What a week!! Well, as I expected, it was awesome, intense, and career altering. My wife, daughter and I traveled to Austin, TX to take part in the Nothin But .NET boot camp. We love Austin! It's an amazing city, and the people are fantastic. We often hear about how friendly Canadians are, but honestly it was unbelievable how kind people are in Austin. Everywhere you go people seem to be having fun and loving life. We spoke with bus drivers, cab drivers, people sitting at the bus stop, people at restaurants, people downtown.

    My wife and daughter traveled the city using transit while I was in class all week. Every day my wife would tell me stories about how nice people were to her. She dislikes taking transit here in Calgary but found it fun and pleasant in Austin. The buses are pretty cool: for $1 USD you get a ticket that lasts for 24 hours, and when you get on the bus you swipe it at the front. The drivers were so friendly, telling stories and jokes during the ride, and they really made an extra effort to help people in wheelchairs get on the bus. I'd never seen that before, but it seems to be common. Awesome!

    A lot of the people we met in Austin aren't originally from Austin; it seems a lot of people are currently migrating there. We were fortunate to meet a pretty cool cab driver named Ed, who moved to the States from Brazil and told us a lot about the city.

    The course itself was awesome. There were students there from Austin, Winnipeg, Houston, Denver, Calgary, Louisiana, and even as far away as Brazil. It's amazing how close you seem to get over such a short time. The same thing seemed to happen in the Calgary course. The collaborative environment really gets people to drop their defenses, open up, and be comfortable with their current skill set, knowing that it's only where they are today, not where they'll be tomorrow.

    There were so many great conversations about so many different topics. Everyone seemed to have great opinions and ideas to further and push the .NET community. Everyone in the room was definitely passionate about developing better software.

    It was pretty cool to meet Scott Bellware and hear some of his ideas about .NET and software development in general. At first I didn't really get it, but by the last night it clicked. I remember saying in my head... "He's right!" Scott is a super passionate person, and questions everything. I've made it a motto to "question everything", and he truly does that well. The .NET community definitely needs a voice like Scott's, to keep us all on our toes, to make sure we pay attention to what we're doing, and to expose more effective ways of doing things.

    I realize that as a young dev I'm part of the next generation, and it's important to me to learn from the trail blazers and to continue to pave new paths when they've finished.

    One of the key things that Scott talked about was the concept of Solubility: "It's so easy to read that it melts into your brain." The new style of writing unit tests that target specific contexts makes it so much easier to jump into a specific context and continue to write new chapters of the novel. I really enjoy reading code and tests that read like chapters from a novel. It's a higher level of abstraction that allows me to focus on the problem domain rather than the technical details. Let the compiler do the interpreting...

    It was so much fun being a teaching assistant. I love answering questions and helping out; at first I was pretty nervous, but after being able to fix a few small issues I felt better. One of the things I learned this week was that when I didn't know the answer, or wasn't sure, it was important to make that known. The last thing I want to do is pass along incorrect information or pretend to know more than I do. I found that just by communicating that I didn't know the answer to a problem, chances were that someone else in the room did. This is one of the reasons why the open, collaborative work space is so important: you can save so much time by just asking for help.

    When we left Austin to go home, we were pretty sad to leave. The further we got away from Austin, the more we talked about how we could totally move to Austin... I can't wait to go back and help out at the next boot camp!

    The month of March was definitely a busy one, and the month of April will be another. Yesterday I started my first day at eCompliance; I left ThoughtWorks to pursue the world of the start-up. So far it's been a lot of fun. On day one I was hitting things that most developers are sheltered from. Although a bit scary at times, it's been a super fun ride so far.

    I managed to knock a few goals off my list: I finished the second exam to earn an MCTS designation, I finished reading xUnit Test Patterns, and my family and I booked tickets to fly down to Austin, Texas for a week.

    Last year it was a huge dream for me to be able to attend the Nothin' But .NET boot camp, and this year I'm proud to say that I will have the opportunity to help TA at the upcoming boot camp in Austin. I'm super nervous, and humbled that this opportunity is available to me. The best part is when my wife and I sit down, and cross things off our list of goals, together!

    Since my wife and I sat down and wrote out our list of goals, things started happening almost immediately. We slapped ours on the side of the fridge so that we can take a peek at it as we walk by. It's been invaluable to us, and has helped us with making tough decisions.

    One of the things I've learned is that my lists tend to change, but our list has stayed pretty much the same since it started. I would go full force trying to accomplish things off my list, even if they were no longer important to me. I've started to adjust my lists, and I don't feel as bad as I thought I would when I don't knock things off my old lists. Hopefully, it's a sign of adapting to change rather than settling for less.

    A wise man once suggested that I get listed, and now I suggest you do the same!

March

    I had an interesting conversation the other day about expired knowledge. On the current project I'm working on, I'm realizing that there are a lot of gaps in my knowledge of WebForms. The other day, I told the person I was pairing with that this chaotic abstraction of the web is confusing to me. My lack of knowledge of WebForms is not because I haven't learned it yet; it's because I didn't really want to.

    I remember the first time I heard the term "complex page life cycle". It had no meaning to me, so I googled it to find out what it meant. Oh man... oh man... I remember reading through some MSDN docs about the sequence of events that fire as a page is constructed, and thinking to myself: I'm not going to memorize this. This is crazy...

    What is an advanced knowledge of WebForms going to mean to you in 5 years? How about 10 years? How about 20?

    I have no idea what the future holds for me, but I do know that I want to be in software for quite a while, so I do my best to focus on knowledge of software development that won't expire quickly. I would much rather spend my time studying the intricacies of new programming paradigms than learning the intricacies of some new framework.

    It sucks when you're doing a demo of your work for a person from the business, and they're more impressed by shiny things in the UI than by how well tested and loosely coupled your design is.

    It's like an owner teaching its dog that it will be rewarded for certain types of behavior. If I'm rewarded for spending my time learning how to use the cool new AJAX controls, and shunned for spending my time trying to understand what the Liskov Substitution Principle means, then what am I more likely to do?

    What's the moral of this story? Don't feel bad, like I do, that you don't know the intricacies of some specific technology. Strive to understand what it is you're doing, and why you're doing it that way.

    My focus for this year is to study:

    • object oriented programming
    • test driven development
    • design patterns

    What about you?

    I recently started to leverage extension methods in my unit tests as a way to create more readable and strongly typed unit tests. Here's an example:

     1   [Test]
     2   public void should_be_able_to_subtract_one_number_from_another() {
     3       var twenty = Number( 20 );
     4       var three = Number( 3 );
     5       var seventeen = Number( 17 );
     6 
     7       var calculator = CreateSUT( );
     8       var resultOfCalculation
     9           = calculator
    10               .Number( twenty )
    11               .Minus( )
    12               .Number( three )
    13               .ComputesTo( );
    14 
    15       resultOfCalculation.ShouldBeEqualTo( seventeen );
    16   }
    

    This is a completely state-based unit test that asserts that the result of the calculation is equal to 17. The actual assertion happens in an extension method, defined below:

    1   public static class AssertionExtensions {
    2       public static void ShouldBeEqualTo< T >( this T itemToCheck, T valueToBeEqualTo ) {
    3           Assert.AreEqual( valueToBeEqualTo, itemToCheck );
    4       }
    5   }
    

    MbUnit's Assert.AreEqual() method has several overloads. The one I use the most is the overload defined as follows:

    1   public void AreEqual( object expected, object actual ) {...}
    

    The problem I have with this overload is that it's not strongly typed. So I could have written an Assertion that looked like:

    1   Assert.AreEqual( 17, new Number(17) );
    

    This won't give me a compile error but will indicate a broken test with the following message.

    Equal assertion failed: [[17]]!=[[Calculator.Domain.Number]]

    This happens because the integer value type 17 is not equal to the Number reference type with an underlying value of 17. Also, this boxes the value type into a reference type to use the overload that accepts two objects (which is also expensive).

    By extending the assertion with a generic extension method, I get a more readable test with the strong typing of generics.

    Hopefully, this saves you from some embarrassing moments with your pair!

    I recently finished reading...

    C# 3.0 in a Nutshell: A Desktop Quick Reference (In a Nutshell (O'Reilly))
    by Joseph Albahari, Ben Albahari

    Read more about this title...

    My desire in reading this book was to understand what new language features C# 3.0 brings to the table. The book explains the C# language right from the very beginning up until now, so some of it was a great refresher and some of it was quite boring. It not only covers the new language features but also several areas of the framework class libraries and the new libraries that came with the .NET 3.5 stack.

    I was hoping to find more coverage of the new language features, to see different usages of things like extension methods and lambdas and to get a sense for what I like and don't like. So far I'm not really a fan of the new query comprehension syntax. It's too SQL-ish for me... I prefer working directly against the methods (see the contrast below). But I might have to give it some time... enough ranting, here's some of the things I've learned.
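
    For what it's worth, here's the kind of contrast I mean; both forms compile down to the same Where call:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class QuerySyntaxVersusMethods {
            public static void Main() {
                var numbers = new List<int> { 1, 2, 3, 4, 5 };

                // the query comprehension syntax
                var evensFromQuery = from number in numbers
                                     where number % 2 == 0
                                     select number;

                // the equivalent direct method call, which reads better to me
                var evensFromMethods = numbers.Where(number => number % 2 == 0);

                foreach (var even in evensFromMethods) {
                    Console.WriteLine(even);
                }
            }
        }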

    Lambda Expressions

    There's tonnes and tonnes of discussion on this topic right now. I love how it's so much cleaner and less verbose than anonymous delegates, but I see potential room for abuse. IMHO, seeing lambdas tossed around all over your code base is no better than overusing anonymous delegates. I love the new ideas that lambdas bring, though...

    "A lambda expression is an unnamed method written in place of a delegate instance. The compiler immediately converts the lambda expression to either:

    • A delegate instance.
    • An expression tree, of type Expression<T>, representing the code inside the lambda expression in a traversable object model. This allows the lambda expression to be interpreted later at runtime..." - C# 3.0 in a Nutshell

    I love the idea of building up an expression tree of delegates that chain together to solve an equation. One of the ideas I'm working on is understanding how to leverage an expression tree of lambdas to solve trivial mathematical equations, then possibly traversing the structure with a visitor to build out a display-friendly version of the equation. A rough starting point is sketched below.
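
    A minimal sketch of that idea: the same lambda, captured as an Expression<T>, can be inspected as a tree (which is where a visitor could build the display-friendly form) or compiled and evaluated.

        using System;
        using System.Linq.Expressions;

        public class ExpressionTreeSketch {
            public static void Main() {
                // captured as data (an expression tree), not as compiled code
                Expression<Func<double, double, double>> equation = (x, y) => (x + y) * 2;

                // the tree can be traversed to build a display-friendly version...
                Console.WriteLine(equation.Body);              // prints something like ((x + y) * 2)

                // ...or compiled into a delegate and evaluated
                Func<double, double, double> compiled = equation.Compile();
                Console.WriteLine(compiled(3, 4));             // 14
            }
        }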

    WPF

    "The benefits of WPF are as follows:

    • It supports sophisticated graphics, such as arbitrary transformations, 3D rendering, and true transparency.
    • Its primary measurement unit is not pixel-based, so applications display correctly at any DPI.
    • It has extensive dynamic layout support, which means you can localize any application without danger of elements overlapping.
    • Rendering uses DirectX and is fast, taking good advantage of graphics hardware acceleration.
    • User interfaces can be described declaratively in XAML files that can be maintained independently of the "code-behind" files - this helps to separate appearance from functionality." - C# 3.0 in a Nutshell

    WCF

    "WCF is the communication infrastructure new to Framework 3.0. WCF is flexible and configurable enough to make both of its predecessors - Remoting and (.ASMX) Web Services - mostly redundant." - C# 3.0 in a Nutshell

    XML

    "If you're dealing with data that's originated from or destined for an XML file, XmlConvert (the System.Xml namespace) provides the most suitable methods for formatting and parsing. The methods in XmlConvert handle the nuances of XML formatting without needing special format strings." - C# 3.0 in a Nutshell

    Iterators

    "The compiler, upon parsing the yield return statement, writes 'behind the scenes,' a hidden nested enumerator class, and then refactors GetEnumerator to instantiate and return that class. Iterators are powerful and simple" - C# 3.0 in a Nutshell

    Hash Tables

    "Its underlying hashtable works by converting each element' key into an integer hashcode - a pseudo unique value - and then applying an algorithm to convert the hashcode into a hash key. This hash key is used internally to determine which 'bucket' an entry belongs to. If the bucket contains more than one value, a linear search is performed on the bucket. A hashtable typically starts out maintaining a 1:1 ration of buckets to values, meaning that each bucket contains only one value. However, as more items are added to the hashtable, the load factor dynamically increases, in a manner designed to optimize insertion and retrieval performance as well as memory requirements." - C# 3.0 in a Nutshell

    Serialization

    "The data contract serializer is the newest and the most versatile of the three serializatoin engines and is used by WCF. The serializer is particularly strong in two scenarios:

    • When exchanging information through standards-compliant messaging protocols
    • When you need high-version tolerance plus the option of preserving object references."

    C# 3.0 in a Nutshell

    Threading

    "A Mutex is like a C# lock, but it can work across multiple processes. In other words, Mutex can be computer-wide as well as application-wide" - C# 3.0 in a Nutshell

    "A Semaphore is like a nightclub: it has a certain capacity, enforced by a bouncer. Once it's full, no more people can enter and a queue build up outside. Then, for each person that leaves, one person enters from the head of the queue." - C# 3.0 in a Nutshell

    "The Thread class provides GetData and SetData methods for storing nontransient isolated data in 'slots' whose values persist between method calls." - C# 3.0 in a Nutshell

    Last week I went and checked out JP's latest presentation on generics at the Calgary .NET User Group, and as usual it was awesome! He's definitely knee-deep in C# 3.0, and was dropping lambdas and extension methods like it was old news... Here's some of the stuff I learned.

    Extending the ISpecification interface via the use of Extension methods.

        public static class SpecificationExtensions {
            public static ISpecification< T > And< T >( this ISpecification< T > leftSide, ISpecification< T > rightSide ) {
                return new AndSpecification< T >( leftSide, rightSide );
            }
    
            public static ISpecification< T > And< T >( this ISpecification< T > left, Predicate< T > criteriaToSatisfy ) {
                return left.And( new Specification< T >( criteriaToSatisfy ) );
            }
    
            public static ISpecification< T > Or< T >( this ISpecification< T > leftSide, ISpecification< T > rightSide ) {
                return new OrSpecification< T >( leftSide, rightSide );
            }
    
            public static ISpecification< T > Or< T >( this ISpecification< T > left, Predicate< T > criteriaToSatisfy ) {
                return left.Or( new Specification< T >( criteriaToSatisfy ) );
            }
    
            private class AndSpecification< T > : ISpecification< T > {
                public AndSpecification( ISpecification< T > leftCriteria, ISpecification< T > rightCriteria ) {
                    this.leftCriteria = leftCriteria;
                    this.rightCriteria = rightCriteria;
                }
    
                public bool IsSatisfiedBy( T item ) {
                    return leftCriteria.IsSatisfiedBy( item ) && rightCriteria.IsSatisfiedBy( item );
                }
    
                private ISpecification< T > leftCriteria;
                private ISpecification< T > rightCriteria;
            }
    
            private class OrSpecification< T > : ISpecification< T > {
                public OrSpecification( ISpecification< T > leftCriteria, ISpecification< T > rightCriteria ) {
                    this.leftCriteria = leftCriteria;
                    this.rightCriteria = rightCriteria;
                }
    
                public bool IsSatisfiedBy( T item ) {
                    return leftCriteria.IsSatisfiedBy( item ) || rightCriteria.IsSatisfiedBy( item );
                }
    
                private ISpecification< T > leftCriteria;
                private ISpecification< T > rightCriteria;
            }
        }
    

    By accepting a Predicate delegate as the second argument you can now inline your lambdas and still take advantage of specifications. Client components can now take advantage of these extensions like this...

        public class SlipsRepository : ISlipsRepository {
            public SlipsRepository( ISlipDataMapper mapper ) {
                _mapper = mapper;
            }
    
            public IEnumerable< ISlip > AllAvailableSlips() {
                return _mapper.AllSlips( ).Where( Is.NotLeased( ) );
            }
    
            public IEnumerable< ISlip > AllAvailableSlipsFor( IDock dockToFindSlipsOn ) {
                return _mapper.AllSlips( ).Where( Is.NotLeased( ).And( Is.On( dockToFindSlipsOn ) ) );
            }
    
            private readonly ISlipDataMapper _mapper;
    
            private static class Is {
                public static ISpecification< ISlip > NotLeased() {
                    return new Specification< ISlip >( slip => !slip.IsLeased( ) );
                }
    
                public static Predicate< ISlip > On( IDock dock ) {
                    return slip => dock.Equals( slip.Dock( ) );
                }
            }
        }
    

    The base specification class becomes a quick and easy...

        public class Specification< T > : ISpecification< T > {
            public Specification( Predicate< T > criteriaToSatisfy ) {
                _criteriaToSatisfy = criteriaToSatisfy;
            }
    
            public bool IsSatisfiedBy( T item ) {
                return _criteriaToSatisfy( item );
            }
    
            private readonly Predicate< T > _criteriaToSatisfy;
        }
    

February

    I just finished reading...

    Design Patterns: Elements of Reusable Object-Oriented Software (Addison-Wesley Professional Computing Series) by Erich Gamma, Richard Helm, Ralph Johnson, John M. Vlissides

    It's about time I read this book... This catalog contains 23 patterns with examples in C++. I enjoyed reading this book. I found at times that my mind started to drift off, but then there were moments when I would forget where I was and miss my bus stop. (Ok one moment!)

    I enjoyed reading the discussion on OO more than anything else in this book. The examples are a little dated, but for its time the catalog is awesome. Although I've been wanting to read this book for a while, I preferred "Head First Design Patterns", mostly because it was easy to read. (I call it the comic book for devs)

    As usual, here are some quotes from the book that I really enjoyed.

    "When an abstraction can have one of several possible implementations, the usual way to accommodate them is to use inheritance. An abstract class defines the interface to the abstraction, and concrete subclasses implement it in different ways. But this approach isn't always flexible enough. Inheritance binds an implementation to the abstraction permanently, which makes it difficult to modify, extend and reuse abstractions and implementations independently."

    "Studies of expert programmers for conventional languages have shown that knowledge and experience isn't organized simply around syntax but in larger conceptual structures such as algorithms, data structures and idioms, and plans for fulfilling a particular goal. Designers probably don't think about the notation they're using for recording the design as much as they try to match the current design situation against plans, algorithms, data structures, and idioms they have learned in the past."

    "These design patterns can also make you a better designer. They provide solutions to common problems. If you work with object-oriented systems long enough, you'll probably learn these design patters on your own. But reading the book will help you learn them much faster. Learning these patterns will help a novice act more like an expert."

    "To continue to evolve, the software must be reorganized in a process known as refactoring. This is the phase in which frameworks often emerge. Refactoring involves tearing apart classes into special- and general-purpose components, moving operations up or down the class hierarchy, and rationalizing the interfaces of classes. This consolidation phase produces many new kinds of objects, often by decomposing existing objects and using object composition instead of inheritance. Hence black-box reuse replaces white-box reuse. The continual need to satisfy more requirements along with the need for more reuse propels object-oriented software through repeated phases of expansion and consolidation - expansion as new requirements are satisfied, and consolidation as the software becomes more general."

    I've got a beef with enums. When I see them, I cringe... which is quite different from my days in C, where I couldn't live without enums and structs. That's another story...

    "Flyweight: Use sharing to support large numbers of fine-grained objects efficiently" - Design Patterns


    Why do some of us quickly jump to enums? In procedural languages it makes sense: it gives a type code a human readable meaning. So instead of having to stare at type code 1 everywhere, you can use State.Acknowledgement, which is a lot easier to understand... what does 1 mean again?

    But in an OO language, I feel dirty when I see enums. The argument of using it for bitwise operations and the Flags attribute is weak. Create a composite!

      [Flags]
      public enum Digits {
          // note: these values are sequential, not powers of two, so combined
          // flags collide (Digits.Six | Digits.One is the same value as Digits.Seven)
          Zero = 0x00,
          One = 0x01,
          Two = 0x02,
          Three = 0x03,
          Four = 0x04,
          Five = 0x05,
          Six = 0x06,
          Seven = 0x07,
          Eight = 0x08,
          Nine = 0x09
      }
    
      [Test]
      public void Should_be_equal_to_2_digits() {
          Digits digits = Digits.Six | Digits.One;
          Assert.IsTrue( Digits.Six == (digits & Digits.Six) );
          Assert.IsTrue( Digits.One == (digits & Digits.One) );            
      }
    

    Weak... do you really want to use a bitwise & to check whether a certain digit is enabled? I don't. I'm going to use the following 2 tests to squash the enum into a first class component with some smarts to it.

      [Test]
      public void should_be_able_to_add_a_single_digit() {
          INumberBuilder builder = CreateSUT( );
          builder.Add( Digits.One );
          Assert.AreEqual( new Number( 1 ), builder.Build( ) );
      }
    
      [Test]
      public void should_be_able_to_form_a_number_with_more_than_one_digit() {
          INumberBuilder builder = CreateSUT( );
          builder.Add( Digits.One );
          builder.Add( Digits.Nine );
          Assert.AreEqual( new Number( 19 ), builder.Build( ) );
      }
    

    My current NumberBuilder implementation looks like this (it sucks but it works):

      public class NumberBuilder : INumberBuilder {
          public void Add( Digits digit ) {
              numberBeingBuilt += Convert.ToString( Convert.ToInt32( digit ) );
          }
    
          public INumber Build() {
              return new Number( Convert.ToInt32( numberBeingBuilt ) );
          }
    
          private string numberBeingBuilt;
      }
    

    I'm going to start off by creating some Flyweights.

      public class Digits {
          public static readonly IDigit Eight = new Digit( 8 );
          public static readonly IDigit Five = new Digit( 5 );
          public static readonly IDigit Four = new Digit( 4 );
          public static readonly IDigit Nine = new Digit( 9 );
          public static readonly IDigit One = new Digit( 1 );
          public static readonly IDigit Seven = new Digit( 7 );
          public static readonly IDigit Six = new Digit( 6 );
          public static readonly IDigit Three = new Digit( 3 );
          public static readonly IDigit Two = new Digit( 2 );
          public static readonly IDigit Zero = new Digit( 0 );
    
          public class Digit : IDigit {
              public Digit( int digitToRepresent ) {
                  _digitToRepresent = digitToRepresent;
              }
    
              // the integer behind the flyweight; Number.Append (further down) uses this
              public int Value() {
                  return _digitToRepresent;
              }
    
              public override string ToString() {
                  return _digitToRepresent.ToString( );
              }
    
              private readonly int _digitToRepresent;
          }
      }
    

    My compiler is telling me that the Add method on my builder currently accepts a parameter of type "Digits", so I'm going to change the signature to accept a parameter of type IDigit.

      public interface INumberBuilder {
          void Add( Digits digit );
          INumber Build();
      }
    

    To...

      public interface INumberBuilder {
          void Add( IDigit digit );
          INumber Build();
      }
    

    Let's update the NumberBuilder implementation to:

      public class NumberBuilder : INumberBuilder {
          public NumberBuilder() {
              _digitsOfNumberBeingBuilt = new List< IDigit >( );
          }
    
          public void Add( IDigit digit ) {
              _digitsOfNumberBeingBuilt.Add( digit );
          }
    
          public INumber Build() {
              return new Number( CreateIntegerFrom( _digitsOfNumberBeingBuilt ) );
          }
    
          private int CreateIntegerFrom( IEnumerable< IDigit > digitsOfNumberBeingBuilt ) {
              StringBuilder builder = new StringBuilder( );
              foreach ( IDigit digit in digitsOfNumberBeingBuilt ) {
                  builder.Append( digit );
              }
              return Convert.ToInt32( builder.ToString( ) );
          }
    
          private IList< IDigit > _digitsOfNumberBeingBuilt;
      }
    

    I run the tests and they pass, sweet. But I'm not happy with the current implementation, so I look for other potential refactorings. I decide to forward the digit straight to the number and let the number take care of how to append it, rather than having the builder do so. The builder now looks like:

      public class NumberBuilder : INumberBuilder {
          public NumberBuilder() {
              _numberBeingBuilt = new Number( 0 );
          }
    
          public void Add( IDigit digit ) {
              _numberBeingBuilt = _numberBeingBuilt.Append( digit );
          }
    
          public INumber Build() {
              return _numberBeingBuilt;
          }
    
          private INumber _numberBeingBuilt;
      }
    

    And Number looks like:

      public class Number : INumber, IEquatable< Number > {
          public Number() : this( 0 ) {}
    
          public Number( int numberToRepresent ) {
              _numberToRepresent = numberToRepresent;
          }
    
          public INumber Append( IDigit digit ) {
              return new Number( ( _numberToRepresent*10 ) + digit.Value( ) );
          }
    
          public bool Equals( Number number ) {
              if ( number == null ) {
                  return false;
              }
              return _numberToRepresent == number._numberToRepresent;
          }
    
          public override string ToString() {
              return _numberToRepresent.ToString( );
          }
    
          public override bool Equals( object obj ) {
              if ( ReferenceEquals( this, obj ) ) {
                  return true;
              }
              return Equals( obj as Number );
          }
    
          public override int GetHashCode() {
              return _numberToRepresent;
          }
    
          private readonly int _numberToRepresent;
      }
    

    To wrap this up: Number aggregates digits, the enum got dropped, and it was replaced by a class. There are still more refactorings that could occur, but the point is that a full blown component is much easier to extend than an enum...

    For more info check out "Replace Type Code with Class" from...

    Refactoring: Improving the Design of Existing Code (The Addison-Wesley Object Technology Series) by Martin Fowler, Kent Beck, John Brant, William Opdyke, Don Roberts


    This book is awesome and a must read for anyone who enjoys the art of refactoring as much as I do. The examples are crystal clear and the way the refactorings are done step by step makes it so much more understandable.

    Here are a few excerpts that I enjoyed from this book.

    "The problem with copying and pasting code comes when you have to change it later. If you are writing a program that you don't expect to change, then cut and paste is fine. If the program is long lived and likely to change, then cut and paste is a menace."

    I've got to agree and disagree with the above statement. I think any time you find yourself copying and pasting, it's a clear sign of duplication that should be improved. Removing duplication should be something we all strive for, and remember that inheritance is not the only way to remove it. Proper object composition, delegation and generics are all great ways to remove duplicate code.

    Replace Conditional with Polymorphism has to be by far one of my favorite refactorings. If you're seeing if-else statements scattered throughout your code base, that's a smell. Switches are no better... (booo switches...)
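
    As a rough sketch of what Replace Conditional with Polymorphism looks like (the shipping example is mine, not from the book), the switch on a type code gives way to classes that each answer for themselves:

      using System;

      // before: a switch on a type code that every new shipping option has to touch
      public enum ShippingMethod { Ground, Express }

      public class BeforeRefactoring {
          public static decimal ShippingCostFor( ShippingMethod method ) {
              switch ( method ) {
                  case ShippingMethod.Ground: return 5m;
                  case ShippingMethod.Express: return 25m;
                  default: throw new NotSupportedException( );
              }
          }
      }

      // after: each policy owns its own answer; adding a new option means adding
      // a class, not hunting down every switch statement in the code base
      public interface IShippingPolicy {
          decimal Cost();
      }

      public class GroundShipping : IShippingPolicy {
          public decimal Cost() { return 5m; }
      }

      public class ExpressShipping : IShippingPolicy {
          public decimal Cost() { return 25m; }
      }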

    "Is renaming worth the effort? Absolutely. Good code  should communicate what it is doing clearly, and variable names are a key to clear code. Never be afraid to change the names of things to improve clarity."

    Amen, brother! Thank goodness for tools like Resharper and Rhino.Mocks. On my current project we're using NMock2 and I got burned several times doing a Rename Method because of the string literals used in NMock tests... My advice is just use Rhino Mocks... please... for my sake!

    "You write code that tells the computer what to do, and it responds by doing exactly what you tell it. In time you close the gap between what you want it to do and what you tell it to do. Programming in this mode is all about saying exactly what you want. But there is another use of your source code. Someone will try to read your code in a few months' time to make some changes. We easily forget that extra user of the code, yet that user is actually the most important."

    "The first time you do something, you just do it. The second time you do something similar, you wince at the duplication, but you do the duplicate thing anyway. The third time you do something similar, you refactor."

    I often heard the term "smell" used as a way to describe something funky in a code base or team but I had no idea where the term came from... until know!

    "If it stinks, change it." - Gradma Beck, discussing child-rearing philosophy

    The chapter on code smells is awesome; it offers a catalog of code smells like:
    • Duplicate Code
    • Long Methods
    • Large Classes
    • Long Parameter Lists
    • Divergent Changes
    • Shotgun Surgery
    • Feature Envy
    • Data Clumps
    • Primitive Obsession
    • Switch Statements
    • Parallel Inheritance Hierarchies
    • Lazy Class
    • Speculative Generality - "Oh, I think we need the ability to do this kind of thing someday."
    • Temporary Fields
    • Message Chains
    • Middle Man
    • Inappropriate Intimacy
    • Alternative Classes with Different Interfaces
    • Incomplete Library Classes
    • Data Classes
    • Refused Bequest - "Subclasses get to inherit the methods and data of their parents. But what if they don't want or need what they are given?"
    • Comments

    If you only read one chapter in this book, I suggest Chapter 3, "Bad Smells in Code". I really like how Resharper uses the same refactoring names as those mentioned in this book. Anyway, what are you waiting for? Go read this book.

    Today I read...

    The Dream Giver
    by Bruce Wilkinson


    A friend recommended this book, and I'm glad I listened to him. I found that when I started reading, I had trouble putting it down so I read the whole thing. The book starts off...

    "A Nobody named Ordinary who lived in the land of Familiar."

    After that, I realized a lot of this book was speaking about me. It explained the different people in my life and why they behave the way they do. I'm having trouble explaining why I enjoyed this book so much, maybe it's because right now I feel "stuck".

    I feel like I've got something in me that's screaming to get out, but I just can't figure out what it is. Some days I think it's my inner creativity burning to get out, and if I don't feel like I'm able to think creatively and try different things, then I feel like I'm asleep, or that I might lose that "potential".

    I know you can't see it yet, but I will become what I am.

    One of the things that keeps me pumped up throughout my day is the opportunity to solve problems creatively. When the opportunity isn't there I feel like I can barely stay awake. Lately I've been struggling to stay awake. My passion for software development is low right now. My motivation to learn new things, and code is still there... but starting to dwindle away.

    I'm working on a good project right now. The architecture is laid out, and big changes are a no-no. The client is happy, so any suggested changes are kind of looked at with raised eyebrows. I feel like a spec developer who gets handed a 7 page document for a story card that I have to implement. It's mostly just creating new screens and updating stored procs (so far). (I very much dislike having to spend my time in the land of SQL, I am an object bigot.) So for a new dev, it's a pretty cozy job. For me it's not quite my "sweet spot". (I should tell you that it's only been about a month so far.)

    Zzzz... I'm finding it difficult to find new and interesting things to blog about, and it almost seems forced these days. It seems like if I want to keep any sort of artistic creativity alive, I've got to do it on my own time. Not on work time!

    I'm not complaining about my job, I'm just yearning for the past. Last year was a tonne of fun, at my old job. There was no one to blame but ourselves when things didn't work out. There was no pointing fingers at the people in another department on another floor. We were the team and there was no other floor. We were a tiny team that got to work on some big problems and in the process we got to flex our creative muscles.

    Like any muscle, if you don't exercise it regularly it becomes weak. Right now my creative muscles feel rather weak. Today I sat my butt down to write some code; I had plans on demonstrating some ideas I thought of while riding the bus. But once I got started I found myself getting upset and frustrated with myself. I can't explain it, I was just mad that I wasn't moving as fast as I wanted to be. My ideas were a tangled mess, and I just couldn't sort it out. I was just annoyed and disappointed with myself. (mO, mO, mO... breathe buddy... breathe!)

    One of the reasons I was drawn to software development was because...

    I suck at drawing!

    I've always enjoyed art, music, and literature. When I found something that allowed me to be creative, and something that I thought I was pretty good at, I held on to it. But lately everything seems so familiar, so comfortable, so boring... Zzzz...

    I can only imagine what my team members might think if they read this post. I only wish they could see how we developed software during the last few months of my last job. I remember during my phone interview with ThoughtWorks saying that

    "If it's not ThoughtWorks, then I'm not leaving my job. I like the guys I work with and I'm having a lot of fun."

    If I liked the guys I worked with and I was having fun, then why did I leave? Let's face it... I tell myself it was for the opportunity to grow and face new challenges. Well, I was growing and facing challenges where I was. In the end I realize it was for the money. I was not in a comfortable place, and instead of pushing through, I returned to the land of familiar. So there's the decision... comfort and familiarity or the "dream".

    I admit that there were times when I was discouraged about the progress of my old team, but after job hopping for a few years now I see that it was one of the greatest and most accelerated learning experiences of my life. I couldn't wait to get out of the Waste Land and not have to worry about money. Now that I'm out, I see how the time in the waste land was actually a season of preparation, but I don't think I stayed long enough to appreciate it.

    I've always wondered how much other people make, financially. Not so much because I'm greedy, but more because I don't want to look foolish when I'm asked "What's your expected salary range?"

    Here it is... the big secret most people seem to hold on to.

    DataShapers Inc

    • June 15, 2004 - December 15, 2006
    • Starting Salary: $30,000.00 CAD/Year
    • Ending Salary: $45,000.00 CAD/Year

    Imaging Dynamics Corporation

    • December 18, 2006 - February 04, 2007
    • Starting Salary: $43,000.00 CAD/Year
    • Ending Salary: $43,000.00 CAD/Year

    MediaLogic Inc.

    • February 12, 2007 - January 11, 2008
    • Starting Salary: $40,000.00 CAD/Year
    • Ending Salary: $43,000.00 CAD/Year

    ThoughtWorks Inc.

    January 16, 2008 - Present

    • Starting Salary: $55,000.00 CAD/Year

    If anything this should satisfy the person who's been googling "How much money does a software developer make?"

    Let's have a quick chat about deferring execution. Take a look at the following test:

        [SetUp]
        public void SetUp() {
            _mockery = new MockRepository( );
            _mapper = _mockery.DynamicMock< IXmlToBookMapper >( );
            _xmlGateway = _mockery.DynamicMock< IXmlGateway >( );
        }
    
        public IBooksGateway CreateSUT() {
            return new BooksGateway( _mapper, _xmlGateway );
        }
    
        [Test]
        public void Should_leverage_xml_bank_to_retrieve_xml() {
            using ( _mockery.Record( ) ) {
                Expect.Call( _xmlGateway.AllElementsNamed( "Book" ) ).Return( new List< IXmlElement >( ) );
            }
    
            using ( _mockery.Playback( ) ) {
                CreateSUT( ).LoadAllBooksFromStorage( );
            }
        }
    

    When I run this test, it fails with an expectation violation.

    It says that the expectation set on the xml gateway was never satisfied. In other words, the call to the method "AllElementsNamed()" with an input parameter value of "Book" was never made.

    Let's take a look at the implementation that failed this test.

        public class BooksGateway : IBooksGateway {
            public BooksGateway( IXmlToBookMapper mapper, IXmlGateway xmlGateway ) {
                _mapper = mapper;
                _xmlGateway = xmlGateway;
            }
    
            public IEnumerable< IBook > LoadAllBooksFromStorage() {
                foreach ( IXmlElement element in _xmlGateway.AllElementsNamed( "Book" ) ) {
                    yield return _mapper.MapFrom( element );
                }
            }
    
            private readonly IXmlToBookMapper _mapper;
            private readonly IXmlGateway _xmlGateway;
        }
    

    Can you spot the error? Really?

    There is no error. The reason this test fails is because of something that C# 2.0 offered for free that very few people actually talk about: deferred execution. The iteration through the loop never occurs because the client of the BooksGateway component never actually begins iterating. In this case the client component is our unit test.

      CreateSUT( ).LoadAllBooksFromStorage( );
    

    The above line never actually starts to walk the underlying collection, and that is what causes the expectation violation. Traversal through the collection is put off until the last possible moment. What this also means is that each time traversal through the collection restarts, the internal collection to walk through is actually rebuilt. This works great for immutable types but can cause a bit of a headache with types that change throughout their lifetime, since new instances are brought back out of persistence. In this case, each time we walk the underlying collection we're actually re-reading the books from an xml file.

    If we re-write the test like this...

        [Test]
        public void Should_leverage_xml_bank_to_retrieve_xml() {
            using ( _mockery.Record( ) ) {
                Expect.Call( _xmlGateway.AllElementsNamed( "Book" ) ).Return( new List< IXmlElement >( ) );
            }
    
            using ( _mockery.Playback( ) ) {
                foreach ( IBook book in CreateSUT( ).LoadAllBooksFromStorage( ) ) {
                    Console.Out.WriteLine( book.Name( ) );
                }
            }
        }
    

    The test now passes...
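
    If re-reading the xml file on every traversal ever becomes a problem, one option (just a sketch, and it deliberately trades away the deferral) is to rewrite LoadAllBooksFromStorage to materialize the books into a list up front, so storage is only hit once:

        public IEnumerable< IBook > LoadAllBooksFromStorage() {
            // materialize immediately so the xml file is only read once,
            // no matter how many times callers enumerate the result
            IList< IBook > books = new List< IBook >( );
            foreach ( IXmlElement element in _xmlGateway.AllElementsNamed( "Book" ) ) {
                books.Add( _mapper.MapFrom( element ) );
            }
            return books;
        }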

    It's so important to properly protect the internals of an object, especially collections. I'm going to rant for a bit about why the following piece of code drives me a little nutty!

      public class HighSocietyCountryClub : ICountryClub 
      {
        public HighSocietyCountryClub() 
        {
          _membersOfTheHighlyExclusiveCountryClub = new List< IExclusiveMember >( );
        }
    
        public IList< IExclusiveMember > Members 
        {
          get { return _membersOfTheHighlyExclusiveCountryClub; }
        }
    
        private IList< IExclusiveMember > _membersOfTheHighlyExclusiveCountryClub;
      }
    

    Why is this so wrong... because it does not keep the ruffians out.

      public class CountryClubBackDoor 
      {
        public CountryClubBackDoor() 
        {
          _club = new HighSocietyCountryClub( );
        }
    
        public void SneakIn() 
        {
          Ruffian ruffian = new Ruffian( );
          _club.Members.Add( ruffian );        // oops... who let the ruffian in?
        }
    
        private ICountryClub _club;
      }
    

    When you expose a collection as a property on a component you expose the innards of the component. Clients of the component can inject state that was not meant to be there. Does that suck? YUP!

    So how do we deal with this? You separate the behavior of adding new exclusive members from checking the exclusive members roster. When you're viewing the roster, you're doing so in a read-only manner. Clients who are checking out the roster should not be able to sneak in new exclusive members.

    Enter the IEnumerable interface. All collection types implement the IEnumerable interface, and its contract allows consumers to walk a collection but not sneak members in. Or does it? Let's take a look at a revised contract for the Country Club.

      public interface ICountryClub 
      {
        //IList< IExclusiveMember > Members { get; }
        IEnumerable< IExclusiveMember > RosterOfMembers { get; }
      }
    

    Looks alright for now. Let's peek at an implementation of the contract...

      public IEnumerable< IExclusiveMember > RosterOfMembers 
      {
        get { return _membersOfTheHighlyExclusiveCountryClub; }
      }
    

    IEnumerable doesn't have an Add method, so I guess clients shouldn't be able to sneak in now, right? Let's see what those ruffians come up with...

            
      public void SneakIn() 
      {
        Ruffian ruffian = new Ruffian( );
        //_club.Members.Add( ruffian ); // oops... who let the ruffian in?
        ( ( List< IExclusiveMember > )_club.RosterOfMembers ).Add( ruffian ); // they did it again!
      }
    

    They did it again; those ruffians are a persistent bunch. They guessed that the roster of members was stored in a collection of type List, and they were right! They snuck in again!

    We can keep those ruffians out by building an instance of a type that implements the IEnumerable interface but doesn't hand out our internal collection. The following code does just that:

      public IEnumerable< IExclusiveMember > RosterOfMembers 
      {
        get 
        {
          foreach ( IExclusiveMember exclusiveMember in _membersOfTheHighlyExclusiveCountryClub ) 
          {
              yield return exclusiveMember;
          }
        }
      }
    

    The above code compiles down to a full blown enumerable type, kind of like the code below. You can go check out the IL produced from the above and see what it translates to...

      public class ExclusiveMembersEnumerable : IEnumerable< IExclusiveMember > 
      {
        public ExclusiveMembersEnumerable( IEnumerable< IExclusiveMember > members ) 
        {
          _members = members;
        }
    
        IEnumerator< IExclusiveMember > IEnumerable< IExclusiveMember >.GetEnumerator() 
        {
          return _members.GetEnumerator( );
        }
    
        public IEnumerator GetEnumerator() 
        {
          return ( ( IEnumerable< IExclusiveMember > )this ).GetEnumerator( );
        }
    
        private readonly IEnumerable< IExclusiveMember > _members;
      }
    

    One additional thing that the yield return keyword offers is deferred execution. This concept is getting a lot more attention now in C# 3.0, but it was already available in C# 2.0. More on that later...

    If the purpose of exposing the IList interface on a type, instead of the IEnumerable interface, is to leverage the sorting methods, then I suggest you factor out a separate interface specifically for traversing a collection and being able to query it. I spoke of a RichEnumerable interface that allowed you to do so, but the new language features in C# 3.0, and specifically the IQueryable interface, look like they will make it much easier to traverse, sort and query a collection as needed (see the sketch after the list below).

    New interfaces to check out:

    • IGrouping<TKey, TElement>
    • ILookup<TKey, TElement>
    • IOrderedEnumerable
    • IOrderedQueryable
    • IQueryable
    • IQueryProvider
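
    For instance, assuming IExclusiveMember exposes a Name() method (it isn't shown above), the C# 3.0 extension methods let a caller sort the roster without ever touching the club's internal list:

      using System.Linq;

      public class RosterReports {
          // OrderBy returns IOrderedEnumerable<T>; the caller can sort and query
          // the roster without ever getting a reference to the club's internal list
          public static IOrderedEnumerable< IExclusiveMember > AlphabeticalRoster( ICountryClub club ) {
              return club.RosterOfMembers.OrderBy( member => member.Name( ) );
          }
      }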

    My personal preference is to drop the properties... In my mind, when I see a call to a method I think it is invoking an action or triggering some sort of behavior, and in this example it is. We're building up a type that we serve back to a client to allow them to traverse our internals without completely handing them out. Anyway, I won't fight the battle with properties today... but maybe I can save that for a later rant.

    CHECK OUT THE CODE!

    WebForms is an awkward marriage between a Page Controller and a Template View. In the Web Forms model the Template View (aspx page) inherits from the Page Controller (code behind).
    Patterns of Enterprise Application Architecture defines the Page Controller as:

    "An object that handles a request for a specific page or action on a web site." - PoEAA

    In this example I've separated the Page Controller from the Template View, because... mostly because I was bored and thought this would be a great way to better understand the patterns. So let's get started...
    I defined a layer supertype for all page controllers, defined as:

        public interface IPageController : IHttpHandler {
            void Execute();
        }
    

    The IPageController could very well have been called a Page Command, because in this implementation I'm not concerned about having separate behaviors for GET and POST requests. If this example were to evolve, I might choose to separate the "Execute()" method into "ProcessGetRequest()" and "ProcessPostRequest()".
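
    A sketch of how that contract might evolve (this is speculative, not part of the sample code):

        // speculative: the layer supertype if GET and POST handling were split
        public interface IPageController : IHttpHandler {
            void ProcessGetRequest();
            void ProcessPostRequest();
        }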

    The IPageController type inherits IHttpHandler in order to register this page controller with ASP.NET to receive all requests for a particular path. In this case this handler is registered in the web.config for all requests to the "DisplayAllCustomers.aspx" page.

        <httpHandlers>
            <add 
                verb="*" 
                path="DisplayAllCustomers.aspx" 
                validate="false" 
                type="PlayingWithPageControllers.Web.Controllers.DisplayAllCustomersController, PlayingWithPageControllers"/>
        </httpHandlers>
    

    If I wanted to get a little more nitty-gritty, I could have specified that only GET requests are handled by this handler.
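
    For example, the same registration narrowed to GET requests would look like this (only the verb changes):

        <httpHandlers>
            <add 
                verb="GET" 
                path="DisplayAllCustomers.aspx" 
                validate="false" 
                type="PlayingWithPageControllers.Web.Controllers.DisplayAllCustomersController, PlayingWithPageControllers"/>
        </httpHandlers>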

    Moving on.... The PageController base type for all controllers looks like:

        public abstract class PageController : IPageController {
            public void ProcessRequest( HttpContext context ) {
                Execute( );
            }

            public bool IsReusable {
                get { return true; }
            }

            public abstract void Execute();
        }
    

    And finally our "DisplayAllCustomersController" component looks like:

        public class DisplayAllCustomersController : PageController, IDisplayAllCustomersController {
            public DisplayAllCustomersController( IDisplayAllCustomersView view, ICustomerTasks tasks ) {
                _view = view;
                _tasks = tasks;
            }

            public override void Execute() {
                _view.AddToBag( _tasks.AllCustomers( ) );
                _view.Render( );
            }

            private readonly IDisplayAllCustomersView _view;
            private readonly ICustomerTasks _tasks;
        }
    

    And voila, all requests to "DisplayAllCustomers.aspx" are handled by the DisplayAllCustomersController, which pulls information from the model and fires it off to the template view to be rendered.

    The Template View Pattern is defined as:

    "Renders information into HTML by embedding markers in an HTML page." - PoEAA

    Our template view for "AllCustomers.aspx" looks like this:

        <table>
            <thead>
                <tr>
                    <td>First Name:</td>
                    <td>Last Name:</td>
                </tr>
            </thead>
            <tbody>
                <% foreach ( DisplayCustomerDTO dto in ViewBagLocator.For( ViewBagKeys.DisplayCustomers ) ) { %>
                <tr>
                    <td><%= dto.FirstName( ) %></td>
                    <td><%= dto.LastName( ) %></td>
                </tr>
                <% } %>
            </tbody>
        </table>
    

    This separates the Template view from having any knowledge of the Page Controller. The Page controller has the responsibility of pulling information from the model to pass along to the appropriate view.

    All access to the current context and the ASP.NET pipeline has been isolated to the HttpGateway which abstracts the ASP.NET facilities available through a condensed client interface.

        public interface IHttpGateway {
            void RedirectTo( IView view );

            void AddItemWith< T >( IViewBagKey< T > key, T itemToAddToBag );

            T FindItemFor< T >( IViewBagKey< T > key );
        }
    

    Now that I think about it, another step that I could have taken would have been to shield the page controllers from having any knowledge of "IHttpHandler", which would have further isolated the ASP.NET infrastructure from the rest of the web presentation layer.

    Patterns of Enterprise Application Architecture defines a Gateway as:

    "An object that encapsulates access to an external system or resource." - PoEAA

        public class HttpGateway : IHttpGateway {
            public HttpGateway( IHttpContext context ) {
                _context = context;
            }

            public void RedirectTo( IView view ) {
                _context.Server.Transfer( view.Path( ) );
            }

            public void AddItemWith< T >( IViewBagKey< T > key, T itemToAddToBag ) {
                _context.Items.Add( key, itemToAddToBag );
            }

            public T FindItemFor< T >( IViewBagKey< T > key ) {
                return ( T )_context.Items[ key ];
            }

            private readonly IHttpContext _context;
        }
    

    If you haven't already, you should go buy and read, then re-read, then re-read "Patterns of Enterprise Application Architecture" by Martin Fowler.

    Patterns of Enterprise Application Architecture (The Addison-Wesley Signature Series)

    by Martin Fowler

    DOWNLOAD THE CODE

    Hackers and Painters: Big Ideas from the Computer Age

    by Paul Graham


    This was not what I expected, but then again I wasn't really sure what I expected. I liked it though; the first chapter kind of caught me off guard, but some of the analogies and comparisons make a lot of sense. You can tell that Mr. Paul Graham sure is a thinker!

    Here are a few excerpts from the book that I found enjoyable:

    "Big companies win by sucking less than other big companies."

    "You learn to paint mostly by doing it. Ditto for hacking. Most hackers don't learn to hack by taking college courses in programming. They learn by writing programs of their own at age thirteen. Even in college classes, you learn to hack mostly by hacking."

    "Maybe it would be good for hackers to act more like painters, and regularly start over from scratch, instead of continuing to work for years on one project, and trying to incorporate all their later ideas as revisions."

    "If I could get people to remember just one quote about programming, it would be the one at the beginning of Structure and Interpretation of Computer Programs.

    Programs should be written for people to read, and only incidentally for machines to execute."

    "When you catch bugs early, you also get fewer compound bugs. Compound bugs are two separate bugs that interact: you trip going downstairs, and when you reach for the handrail it comes off in your hand."

    "Mistakes are natural. Instead of treating them as disasters, make them easy to acknowledge and easy to fix. Leonardo more or less invented the sketch, as a way to make drawing bear a greater weight of exploration. Open source software has fewer bugs because it admits the possibility of bugs."

    "Of all tyrannies, a tyranny exercised for the good of its victims may be the most oppressive. - C.S. Lewis"

    "One of my first drawing teachers told me: if you're bored when you're drawing something, the drawing will look boring."

    "Indeed, there is even a saying among painters: 'A painting is never finished. You just stop working on it.'"

    There is so much great content in this book that just provokes thought. I highly encourage you to go check it out, it's definitely worth reading and re-reading.

    Pragmatic Version Control: Using Subversion (The Pragmatic Starter Kit Series)(2nd Edition)

    by Mike Mason


    If you're an svn or tortoise junkie I recommend that you check out this book. Although most of the information provided in this book can probably be found in the SVN documentation, I much preferred reading this top to bottom. This book walks you through several different project scenarios and shows you how to effectively use svn as your source control system.

    The appendices also offer a lot of very helpful information on setup and third party tools. Some must-have tools for svn are:

    • TortoiseSVN
    • VisualSVN
    • SVN.exe

    I love that VisualSVN takes care of things like moving, adding and deleting files; it really simplifies the check-in process. Tortoise is a great Windows Explorer shell extension that provides a wicked abstraction over the svn command line.

    Since reading the book I've found that I'm using Tortoise less for check-ins and updates. Here are a couple of simple commands to get you started:

    Update:

    Commit (check in):
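
    Assuming the stock command line client, those two commands boil down to:

      svn update                    # pull down the latest changes from the repository
      svn commit -m "what changed"  # check in the changes from your working copy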

    If you're going to practice any form of unit testing you need to learn about the different types of tests. I have yet to read XUnit Test Patterns, but I'm sure it will offer a great deal more detail than this post. Also, I'm anxiously awaiting "The Art Of Unit Testing".

    Unit Tests

    Unit tests are blocks of code that exercise very specific areas of a code base to ensure that each piece is meeting its responsibility (NOT responsibilities in the plural sense; see the Single Responsibility Principle). At its core, a unit test asserts that a very specific result or behavior is met. Unit tests can be broken down into two types.

    Black Box Testing (State)

    The first is the traditional state based (black box) unit test. These are unit tests that assert that components of the system exhibit the behavior that is expected from the perspective of a client component. They care less about the actual implementation of the component and more about the result. These types of tests tend to be easier to refactor and are a great way to start learning about test driven development or unit testing in general. The unit tests give you the confidence to go into the trenches and make significant changes to the underlying implementation. This allows you to evolve a code base with confidence and precision. (And remember, software development is an evolution. Using the same architecture and tools that you did several years ago could indicate a smell.)

    For example:

      [Test]
      public void Should_be_able_to_lease_a_slip() 
      {
        ICustomer customer = CreateSUT( );
        ISlip slip = ObjectMother.Slip( );
        ILeaseDuration duration = LeaseDurations.Monthly;
    
        customer.Lease( slip, duration );
    
        Assert.AreEqual( 1, ListFactory.From( customer.Leases( ) ).Count );
      }
    

    White Box Testing (Interaction)

    This type of unit test is more focused on the interaction of components than on the result. It verifies expectations that a component is working as expected with its dependencies under different conditions. It's a way to simulate different environment conditions without actually having to exercise the component in that environment. The canonical example is to mock or stub out an interaction with a database or third party component.

    It's called white box testing because it's as if you can see clearly through the box to what's going on inside. It might make more sense to refer to it as glass box testing.

    For example:

          [Test]
          public void Should_leverage_task_to_retrieve_all_registered_boats_for_customer() 
          {
              long customerId = 23;
              IList< BoatRegistrationDTO > boats = new List< BoatRegistrationDTO >( );
    
              using ( _mockery.Record( ) ) 
              {
                  SetupResult.For( _mockRequest.ParsePayloadFor( PayloadKeys.CustomerId ) ).Return( customerId );
                  Expect.Call( _mockTask.AllBoatsFor( customerId ) ).Return( boats );
              }
    
              using ( _mockery.Playback( ) ) 
              {
                  CreateSUT( ).Initialize( );
              }
          }
    

    Integration Test

    Integration tests are tests that sweep across a system. They exercise the system from the top down to ensure that it is behaving as expected in a production-like environment. This is a great place to weed out contracts that have not been implemented, and it helps to identify different environment scenarios that may need further unit testing. These tests actually hit the third party components and exercise the full system. They typically take a little longer, depending on environment conditions such as hitting a database.
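
    For example, here's a small sketch (the fixture is mine, not from a real project) that really touches the file system instead of mocking it out, which is exactly why it runs slower than the unit tests above:

      using System.IO;
      using NUnit.Framework;

      [TestFixture]
      public class FileStoreIntegrationTest {
          // no mocks: this test really hits the file system, so it belongs in the
          // integration suite rather than the fast unit test suite
          [Test]
          public void Should_round_trip_text_through_the_real_file_system() {
              string path = Path.Combine( Path.GetTempPath( ), "integration-test.txt" );

              File.WriteAllText( path, "hello from the integration suite" );
              string whatCameBack = File.ReadAllText( path );

              Assert.AreEqual( "hello from the integration suite", whatCameBack );
              File.Delete( path );
          }
      }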

    There are frameworks available, such as Fit, that allow business analysts to define test criteria that can then exercise the system top down. The problem with some of these frameworks is that they can implicitly allow the BA to start designing how the system is implemented if the criteria are taken as a literal design spec. I much prefer writing top down tests rather than implementing Fit-like fixtures.

    One of the books I've read this month is....

    Extreme Programming Explained: Embrace Change (2nd Edition) (The XP Series) by Kent Beck, Cynthia Andres

    I really enjoyed reading this book. It paints a picture of what an ideal XP team can look like and talks about the principles, practices and values of XP. I'd like to share some excerpts from the book that really stuck out and had an impact on me.

    "If you have six weeks to get a project done, the only thing you can control is your own behavior. Will you get six weeks' worth of work done or less? you can't control others' expectations. You can tell them what you know about the project so their expectations have a chance of matching reality. My terror of deadlines vanished when I learned this lesson. It's not my job to "manage" someone else's expectations. It's their job to manage their own expectations. It's my job to do my best and to communicate clearly."

    One of the things I've learned about myself is that I hate being late. This isn't a great trait to have in the world of software development. When I'm handed a deadline I become so eager to meet it that, in the process of racing toward it, quality is compromised. I can't control deadlines or others' expectations, but I can control my own behavior and whether I work consistently, as hard as I can, without letting go of quality.

    "I chose practices for XP because they meet both business and personal needs. There are other human needs; such as rest, exercise, and socialization; that don't need to be met in the work environment. Time away from the team gives each individual more energy and perspective to bring back to the team. Limiting work hours allows time for these other human needs and enhances each person's contributions while he is with the team."

    It sucks how XP and Agile have become buzzwords in the industry that mean more to the marketing department than to the software developers. I've said it before and I'll say it again...

    "You aren't doing Agile. YOU ARE AGILE!"

    "Part of the challenge of team software development is balancing the needs of the individual with the needs of the team. The team's needs may meet your own long-term individual goals, so are worth some amount of sacrifice. Always sacrificing your own needs for the team's doesn't work. If I need privacy, I am responsible for find a ways to get my need met in a way that doesn't hurt the team. The magic of great teams is that after the team members develop trust they find that they are free to be more themselves as a result of their work together."

    You've got to sacrifice something, regardless of the context you're talking about, in order to succeed. What's important is deciding what you're willing to sacrifice in order to get closer to your end goals. I have found that pushing people a little bit outside of their comfort zone not only helped me grow but also helped the team as a whole. I also learned that it's important to slow down and reflect. The up and down rhythm of an XP team is balanced by the different members of the team, and with trust it's much easier to maintain that balance.

    E.g. team member A might be a hardcore, heads down, must-punch-out-code-as-efficiently-as-possible kind of guy. Team member B may be more of a let's-take-it-a-little-slower-and-sit-back-and-think-about-the-problem-at-hand kind of guy. With trust, the two team members will be able to develop a rhythm that keeps the project going at a sustainable pace without sacrificing quality.

    "I trust two metrics to measure the health of XP teams. The first is the number of defects found after development. An XP team should have dramatically fewer defects in its first deployment and make rapid progress from there. Some XP teams that have been on the path of improvement for several years see only a handful of defects per year. No defect is acceptable; each is an opportunity for the team to learn and improve."

    Test-driven development/design, rather than design-in-your-head-driven code. The test is a clear statement of truth. It documents the design that would otherwise be locked up in your head, and it is very black or white about whether or not the subject under test satisfies the test specification or expected behavior. If the team is not disciplined, even the best of the best XP teams can forget about the principles behind the practices and stop following the practices. One of the most important things about unit tests is the early feedback. I want to know as soon as possible when a component in the system is not behaving as expected. Waiting for QA to pick out a bug, then log the bug, then assign a developer to look at it does not deliver early feedback; I consider this waste! It's a vicious cycle that can be reduced greatly.

    "The problem with reviews is that most reviews and raises are based on individual goals and achievements, but XP focuses on team performance. If a programmer spends half of his time pairing with others, how can you evaluate his individual performance? How much incentive does he have to help others if he will be evaluated on individual performance?"

    Out of all the chapters in this book I think chapter 3 is my favorite. It's titled "Values, Principles, and Practices", and to me it speaks the loudest about what XP is and why we would want to consider using it as a methodology for building and delivering software.

    I highly recommend this book!

January

    So where the heck have I been? Well, it's a brand new year and it's been busy. Right now Alli, Adia and I are living in a spare bedroom at Alli's mom's. It's been kind of hectic moving out of our old place and shoving everything we own into a garage, but also fun at the same time. (define fun for us, mO!)
    This is my final week of work at MediaLogic and it's a little sad to think that I won't be walking into the ML studio next Monday morning, but it's been fun. I received a lot of kind feedback from my last post. Thank you to everyone who took the time to write and leave comments; it's nice to know that the universe cares and that there are kind people out there. I hope I didn't paint a grim picture of being underpaid and up against all odds. In fact I've had a pretty good life, and really the financial pitfalls from last year are of my own doing. In fairness to ML, I received a decent entry-level salary!

    School and ambition can be expensive, warn your spouse!

    In other news, I stopped by the Calgary ThoughtWorks office today to drop off 2 passport photos and a copy of my driver's license. I was all set to go to the 2 week immersion course in India, but I found out today that won't be happening. I'm a little disappointed, but at the same time flattered, because they're tossing me onto a project ASAP. Which means the ThoughtWorkers who were part of my interview process have faith that I'm ready to leap onto a project. There's still lots of time for travel. sigh
    I did find out which project I'll be jumping onto and who I'll be working with. I'm so nervously excited that I can't wait to jump in and meet the team, but I also feel like I could vomit all over myself at the same time. (hopefully it doesn't happen at the same time!)
    I walked out of the office with a copy of "Pragmatic Version Control" by Mr. Mike Mason. My stack of books is increasing. I've now got "Extreme Programming Explained", "Hackers and Painters", "Introduction to Algorithms" and "Pragmatic Version Control using Subversion" waiting for me.
    So I'm filled with many emotions these days. The transition between jobs is definitely a weird place to be in, especially when you're leaving a place you enjoy working at. It's definitely important to me to keep in touch with the guys at ML because I feel like I've done a lot of growing with them, and it also makes me want to reconnect with some of the people I used to work with.
    I can imagine what it might feel like to be a young rookie entering the big leagues. If I get some ice time, I might even score a couple!
    Some thoughts that go through my mind are:
    "Did I oversell myself? I don't think I did. I tried to be honest about my skill sets. I guess if I did, they'll be exposed pretty quickly. So I will have to ramp up quickly."
    "I hope I don't disappoint the new team. I'm not sure what their expectations of me are, but I'd better work my butt off to exceed them."
    "Will I make it past the probationary period? Will the team even like me?"
    I know these are just thoughts and most of them I shouldn't even worry about. Just show up, work hard and be respectful. I still can't help but think the above thoughts.
    I imagine I'll be the youngest on the team; I'm kind of used to it now. I was the youngest person to graduate from high school in my graduating class. I was always the youngest person in my class all through grade school. The benefit has always been that I got to hang out with the older kids. The disadvantage is that I got to hang out with the older kids. Sometimes it feels like I might have grown up too quickly, and sometimes it feels like I haven't grown up quickly enough.
    I suppose my age is my advantage, and that doesn't last for long. There's always going to be someone faster, younger, and smarter than me. Hopefully that doesn't deter me from attempting to reach my potential, but also doesn't allow me to grow an inflated ego.
    So what does all this ranting really mean? I guess inside I'm still just a 23 year old kid.

Archive