2010

December

This is a collection of information that I have learned since I started using Ubuntu 10.04 back in August of 2010. Since then I have upgraded to 10.10 and have been using it as my primary OS on my Voodoo laptop. It has been a super powerful OS, with pretty much everything I need to make full use of my laptop.

    There are a couple of things that I have had issues with: one is getting my Microsoft LifeCam to work nicely with Skype, and, well, I haven't fired up Visual Studio on my laptop in months.


    TV

    Hauppauge WinTV PVR 150

    I have a TV tuner card hooked up to my PC, as well as a Logitech webcam. To watch TV on my PC I can open up VLC and:

    • Media (alt+M)
    • Open Capture Device (ctrl+C)
    • Capture Mode = PVR
    • Device Name = "/dev/video1" -- this should be set to your TV tuner's video device; you can find out what number is assigned to your tuner by running v4l2-ctl --list-devices
    • then press Play

    VLC will then start playing whatever channel the card is currently configured to point to.

    To change channels you can open up Terminal and enter

      $ ivtv-tune --channel=21
    

    Fine tuning

      $ v4l2-ctl -n                           # list the inputs available on the current device
      $ v4l2-ctl -i 1                         # switch the current device to input 1
      $ v4l2-ctl -d /dev/video1               # change devices
      $ ivtv-tune --channel=21 -d /dev/video1 # change to channel 21 on /dev/video1
    


    The visitor design pattern allows us to separate an algorithm from the structure that the algorithm works against. This is one of the easiest ways to adhere to the open/closed principle, because it allows us to add new algorithms without having to modify the original structure that the algorithm runs against.

    The full source code for this entry can be downloaded from here.

    The visitor design pattern core interfaces in C#:

    public interface IVisitable<T>
    {
      void accept(IVisitor<T> visitor);
    }

    public interface IVisitor<T>
    {
      void visit(T item);
    }
    

    example

    In the following example, I've defined a structure for a table. A table has many rows, and each row has a different cell for each column.

    I have omitted a lot of details for brevity.

    public class Table : IVisitable<Row>
    {
        ...
        public void accept(IVisitor<Row> visitor)
        {
            rows.each(x => visitor.visit(x));
        }
    }

    public class Row : IVisitable<ICell>
    {
        ...
        public void accept(IVisitor<ICell> visitor)
        {
            cells.each(x => visitor.visit(x));
        }
    }

    public class Column<T> : IColumn
    {
        ...
        public Column(string name)
        {
            this.name = name;
        }
    }

    public class Cell<T> : ICell
    {
        ...
        public bool IsFor(IColumn otherColumn)
        {
            return column.Equals(otherColumn);
        }

        public T Value { get; private set; }

        public void ChangeValueTo(T value)
        {
            Value = value;
        }
    }
    

    We can now define different algorithms that traverse the table structure without having to modify the structure itself. In this example the TotalRowsVisitor increments a counter each time it visits a row.

    public class TotalRowsVisitor : IVisitor<Row>
    {
        public void visit(Row item)
        {
            Total++;
        }

        public int Total { get; set; }
    }

    var visitor = new TotalRowsVisitor();
    table.accept(visitor);
    Console.Out.WriteLine( visitor.Total );
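
    Since Row is itself visitable over cells, the same trick works a level deeper. Here's a hypothetical cell-level visitor (not part of the downloadable source) that counts the cells in a row:

    public class TotalCellsVisitor : IVisitor<ICell>
    {
        public void visit(ICell item)
        {
            Total++;
        }

        public int Total { get; set; }
    }

    // hypothetical usage: count the cells in a single row
    var cellVisitor = new TotalCellsVisitor();
    row.accept(cellVisitor);
    Console.Out.WriteLine( cellVisitor.Total );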
    

    The full source code for this entry can be downloaded from here.

    links

    tools

    • gnu screen
    • tutorial
    • quick ref

    packages

    • ruby
    • python
    • ncurses
    • openssh
    • wget
    • gcc
    • lynx

    ruby

    To get Ruby working on both Windows and Cygwin you need to unset the RUBYOPT variable in your ".bash_profile":

    unset RUBYOPT
    

    gems

    • gem install rdiscount
    • gem install jekyll
    • gem install jekyll_ext
    • gem install vmail

    python

    Install the Python package under Cygwin, then to get easy_install:

    $ wget http://peak.telecommunity.com/dist/ez_setup.py
    $ python ez_setup.py
    

    sourced from server fault

    Then to get pygments:

    $ easy_install Pygments
    

    sourced from github

    To Do

    These are things that I need to figure out how to accomplish under Cygwin.

    • find an equivalent to "start ." or "nautilus ."
    • launch gvim from the shell, e.g. "gvim ."
    • figure out why vmail does not work.
    • kill windows processes, e.g. "ps msbuild.exe | kill"
    • install the inconsolata font

    Experienced OO practitioners know that switch statements are a design smell. Usually they're an indication of missing polymorphic behaviour.

    Take a look at the following snippet of code:

    foreach (var permission in principal.Permissions)
    {
      if (newGroup != null)
      {
        var assignment = new SPRoleAssignment(newGroup);
        if (assignment != null)
        {
          switch (permission.RoleDefinition)
          {
            case PermissionTypes.FullControl:
              assignment.RoleDefinitionBindings.Add(fullControlRole);
            break;
            case PermissionTypes.Design:
              assignment.RoleDefinitionBindings.Add(designRole);
            break;
            case PermissionTypes.Read:
              assignment.RoleDefinitionBindings.Add(readRole);
            break;
            case PermissionTypes.Contribute:
              assignment.RoleDefinitionBindings.Add(contributeRole);
            break;
          }

          web.RoleAssignments.Add(assignment);
        }
      }
    }
    

    It looks simple enough, doesn't it? But what happens if we add a new PermissionType? Are we switching on the PermissionType in other areas of the code base?

    I think it's safe to say that the above code violates the Open/Closed Principle as well as the Single Responsibility Principle.

    When I look at this code, my mind immediately starts looking for the missing abstraction. In this case I believe it's an abstraction over different Permission Types.

    Here's how my mind refactored the above code.

    foreach (var permission in principal.Permissions)
    {
      var assignment = permission.CreateFor(newGroup);
      web.AddRoleAssignments(assignment);
    }
    

    Instead of making RoleDefinition an enum, we can turn it into a class hierarchy with one class for each case in the switch statement. We then delegate to the polymorphic behavior of each RoleDefinition to decide which role should be added to the RoleDefinitionBindings.

    public class FullControlRole : RoleDefinition
    {
      public void AddRolesTo(IList<RoleDefinitionBindings> roleDefinitionBindings)
      {
        roleDefinitionBindings.Add(fullControlRole /* or this */);
      }
    }

    public class DesignRole : RoleDefinition
    {
      public void AddRolesTo(IList<RoleDefinitionBindings> roleDefinitionBindings)
      {
        roleDefinitionBindings.Add(designRole);
      }
    }
    

    This also better satisfies the Law of Demeter.

    The rest of my mental picture

    Please note that the below code was written in a text editor and was not actually run against a compiler.

    public class Permission
    {
      RoleDefinition role;

      public Permission(RoleDefinition role)
      {
        this.role = role;
      }

      public SPRoleAssignment CreateFor(string group)
      {
        var assignment = new SPRoleAssignment(group);
        role.AddRolesTo(assignment.RoleDefinitionBindings);
        return assignment;
      }
    }

    public class SPRoleAssignment
    {
      public SPRoleAssignment(string group){}

      // stub, so that CreateFor above has something to hand to the role
      public IList<RoleDefinitionBindings> RoleDefinitionBindings =
        new List<RoleDefinitionBindings>();
    }

    public class Web
    {
      IList<SPRoleAssignment> RoleAssignments = new List<SPRoleAssignment>();

      public void AddRoleAssignments(SPRoleAssignment assignment)
      {
        RoleAssignments.Add(assignment);
      }
    }
    


November

    The open/closed principle (OCP) is another object oriented principle that simply states:

    Classes should be open for extension but closed for modification.

    In my last post we fixed the SRP violation, but we left a violation of the open/closed principle. The reason this principle is important is that it helps us design components/classes so that the need for change in the future is reduced. When building software systems it is much safer to introduce new components than to alter existing ones. Each time we change the behavior of an existing component, we increase the possibility of error in the existing classes that depend on that behavior.

    The code we left off with was this:

    public class DatabaseGateway
    {
      IConnectionFactory connectionFactory;
      IMapper<IDataReader, IEnumerable<DataRow>> mapper;

      public DatabaseGateway(IConnectionFactory connectionFactory, IMapper<IDataReader, IEnumerable<DataRow>> mapper)
      {
        this.connectionFactory = connectionFactory;
        this.mapper = mapper;
      }

      public IEnumerable<DataRow> execute(string sql)
      {
        using( var connection = connectionFactory.OpenConnection())
        {
          var command = connection.CreateCommand();
          command.CommandText = sql;
          command.CommandType = CommandType.Text;
          return mapper.MapFrom(command.ExecuteReader());
        }
      }
    }
    

    Each time we need to change how a query is executed against the database, we need to modify the DatabaseGateway. What we would like is to avoid having to modify DatabaseGateway each time we need to query against the database in a different way. (e.g. if we wanted to execute a stored procedure instead of raw SQL.)

    We can do this by introducing the strategy pattern.

    The strategy pattern allows us to change algorithms at runtime.

    Instead of handing the "execute" method a raw SQL string, we're going to pass in a query object.

    public class DatabaseGateway
    {
      IConnectionFactory connectionFactory;

      public DatabaseGateway(IConnectionFactory connectionFactory)
      {
        this.connectionFactory = connectionFactory;
      }

      public void execute(IQuery query)
      {
        using( var connection = connectionFactory.OpenConnection())
        {
          query.execute_using(connection.CreateCommand());
        }
      }
    }
    

    We're now able to create different implementations of IQuery that each run against the IDbCommand interface. By inverting control to an object that is handed to us, we are able to add new types of queries to run against the database in the future without the need to modify the DatabaseGateway.

    public interface IQuery
    {
      void execute_using(IDbCommand command);
    }

    public class RawSQLQuery : IQuery
    {
      string sql;
      public RawSQLQuery(string sql)
      {
        this.sql = sql;
        Result = new DataTable();
      }

      public DataTable Result{ get; private set; }

      public void execute_using(IDbCommand command)
      {
          command.CommandText = sql;
          command.CommandType = CommandType.Text;
          Result.Load(command.ExecuteReader());
      }
    }

    public class RawSQLCommand : IQuery
    {
      string sql;
      public RawSQLCommand(string sql)
      {
        this.sql = sql;
      }

      public void execute_using(IDbCommand command)
      {
          command.CommandText = sql;
          command.CommandType = CommandType.Text;
          command.ExecuteNonQuery();
      }
    }
    

    The query objects are our different strategy objects that we can pass to the DatabaseGateway to execute commands against the database, based on the client component's needs.
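
    To make the wiring concrete, here is a short usage sketch. This is my illustration rather than code from the post; connectionFactory stands in for an existing IConnectionFactory, and the table and SQL are made up:

    var gateway = new DatabaseGateway(connectionFactory);

    // a query strategy: loads the results into a DataTable
    var query = new RawSQLQuery("select * from employees");
    gateway.execute(query);
    var rows = query.Result.Rows;

    // a command strategy: executes without returning results
    gateway.execute(new RawSQLCommand("delete from employees where terminated = 1"));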

    The single responsibility principle (SRP) is a design principle that object oriented developers believe in. In its simplest form it means:

    Each class should have a single responsibility, and therefore only a single reason to change.

    This enforces stability throughout our system. If we build small composable objects that each focus on a single responsibility, we are able to introduce change much more easily. Changes are isolated and focused on smaller areas of the code base.

    Violations of SRP are easy to recognize. They can usually be found in classes that have many lines of code.

    Let's take a look at the following example:

    public class Foo
    {
      public IEnumerable<DataRow> bar(string sql)
      {
        using( var connection = new SqlConnection(ConfigurationManager.ConnectionStrings[ConfigurationManager.AppSettings["active.connection"]].ConnectionString))
        {
          var command = connection.CreateCommand();
          command.CommandText = sql;
          command.CommandType = CommandType.Text;
          var table = new DataTable();
          table.Load(command.ExecuteReader());
          return table.Rows.Cast<DataRow>();
        }
      }
    }
    

    Let's try to break down the above code by describing what its responsibilities are.

    • Lookup the database configuration
    • Open a connection to a database
    • Execute a SQL query against the database
    • Map the results from the database to DataRows

    I think it's safe to say that we may need to open up a connection to the database in other areas of the application, perhaps to ExecuteNonQuery() or ExecuteScalar(). How many other places are we likely to have the same boilerplate "open a connection" code?

    Let's refactor this a bit to break out the separate responsibilities into separate classes.

    public class Foo
    {
      IConnectionFactory connectionFactory;
      IMapper<IDataReader, IEnumerable<DataRow>> mapper;

      public Foo(IConnectionFactory connectionFactory, IMapper<IDataReader, IEnumerable<DataRow>> mapper)
      {
        this.connectionFactory = connectionFactory;
        this.mapper = mapper;
      }

      public IEnumerable<DataRow> bar(string sql)
      {
        using( var connection = connectionFactory.OpenConnection())
        {
          var command = connection.CreateCommand();
          command.CommandText = sql;
          command.CommandType = CommandType.Text;
          return mapper.MapFrom(command.ExecuteReader());
        }
      }
    }
    

    By breaking apart the different responsibilities into different classes we are now able to re-use those classes in other areas of the application, and we reduce the amount of duplication spread across the application. If there happens to be an error, likely only a single component needs to be corrected. For instance, if we were to forget to "Open" the connection, we can fix this error in one location. Hence the "single reason to change" philosophy.

    The IConnectionFactory is responsible for opening a connection to the database. The IMapper is responsible for mapping an IDataReader to an IEnumerable. Foo is responsible for executing the command against the IDbConnection.
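
    For illustration, here is a minimal sketch of what those two collaborators might look like. These implementations are assumptions inferred from how Foo uses them, not code from the post:

    public class SqlConnectionFactory : IConnectionFactory
    {
      public IDbConnection OpenConnection()
      {
        // the same configuration lookup that used to live inline in Foo
        var name = ConfigurationManager.AppSettings["active.connection"];
        var connection = new SqlConnection(ConfigurationManager.ConnectionStrings[name].ConnectionString);
        connection.Open();
        return connection;
      }
    }

    public class DataRowMapper : IMapper<IDataReader, IEnumerable<DataRow>>
    {
      public IEnumerable<DataRow> MapFrom(IDataReader reader)
      {
        // the same DataTable loading that used to live inline in Foo
        var table = new DataTable();
        table.Load(reader);
        return table.Rows.Cast<DataRow>();
      }
    }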

    In my next post, I will show you how to refactor this solution to use the strategy pattern to fix the open/closed principle violation.

    download

    Subscribe

    When you subscribe to a certain type of message, your subscription is put into the publisher's subscription queue (if you use MsmqSubscriptionStorage()). By default, when you call "LoadMessageHandlers()", your endpoint automatically subscribes to all messages that your assembly has handlers for.

    bus.Subscribe(typeof(NewEmployeeHired));
    

    Publish

    When you publish a message, it gets sent to all subscribers of that message type, or of the assembly that contains that message.

    bus.Publish<NewEmployeeHired>(x =>
    {
        x.Id = employee.Id;
        x.FirstName = message.first_name;
        x.LastName = message.last_name;
    });
    

    Don't publish from the web app

    You should avoid doing this because it makes it difficult to scale out, and puts unnecessary load on your web server. Also, requests that come from the web don't correspond to events that have actually happened. They are still requests; only once they are processed do they become events.

    For example, a web tier might send the "ChangeEmployeeAddress" command to the app tier. The app tier then processes the command, changes the employee's address, and publishes the "EmployeeAddressChanged" event. In this scenario, it wouldn't have made sense to publish "ChangeEmployeeAddress" onto the bus. Events that have happened should be published on the bus, not requests that we intend to process.
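
    As a rough sketch of that flow (the message names come from this example, but the handler class and its properties are hypothetical):

    public class ChangeEmployeeAddressHandler : IHandleMessages<ChangeEmployeeAddress>
    {
        IBus bus;

        public ChangeEmployeeAddressHandler(IBus bus)
        {
            this.bus = bus;
        }

        public void Handle(ChangeEmployeeAddress message)
        {
            // ... process the request: update the employee's address ...

            // only after the change has actually happened do we publish the event
            bus.Publish<EmployeeAddressChanged>(x =>
            {
                x.EmployeeId = message.EmployeeId;
            });
        }
    }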

    Send

    When you send a message, it is sent to a specific endpoint. You must specify which endpoint to send the message to, and you do this in the app.config.

    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult new_hire(HireNewEmployee command)
    {
        bus.Send(command);
        return RedirectToAction("all");
    }
    

    web.config

    In the following configuration, we are telling NServiceBus that whenever we send a message from an assembly called 'common.messages.dll', it should get sent to the endpoint named 'easyhr.service.queue'.

    <configuration>
      <configSections>
        <section name="MsmqTransportConfig" type="NServiceBus.Config.MsmqTransportConfig, NServiceBus.Core"/>
        <section name="UnicastBusConfig" type="NServiceBus.Config.UnicastBusConfig, NServiceBus.Core"/>
        <section name="RijndaelEncryptionServiceConfig" type="NServiceBus.Config.RijndaelEncryptionServiceConfig, NServiceBus.Core"/>
      </configSections>
      <MsmqTransportConfig InputQueue="easyhr.web.queue" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5"/>
      <UnicastBusConfig>
        <MessageEndpointMappings>
          <add Messages="common.messages" Endpoint="easyhr.service.queue"/>
          <add Messages="easyhr.messages" Endpoint="easyhr.service.queue"/>
          <add Messages="it.messages" Endpoint="it.service.queue"/>
        </MessageEndpointMappings>
      </UnicastBusConfig>
      <RijndaelEncryptionServiceConfig Key="gdDbqRpqdRbTs3mhdZh9qCaDaxJXl+e7"/>
    </configuration>
    

    Reply

    When you reply to a message that was sent to you, the reply is sent to the endpoint that sent you the original message.

    replying to a message

    In the example below, our handler receives a message of type GetAllUserNamesQuery from some endpoint. When we respond by calling "bus.Reply(x)", our reply gets sent back to the endpoint that contacted us in the first place. We do not need to know which endpoint contacted us, but the endpoint that contacted us had to specify the location of our queue in its app.config.

    public class GetAllUsernamesMessageHandler : IHandleMessages<GetAllUserNamesQuery>
    {
        IBus bus;
        IUserRepository users;
        IMapper mapper;

        public GetAllUsernamesMessageHandler(IBus bus, IUserRepository users, IMapper mapper)
        {
            this.bus = bus;
            this.mapper = mapper;
            this.users = users;
        }

        public void Handle(GetAllUserNamesQuery message)
        {
            users.FindAll().MapAllUsing<User, UserCredentials>(mapper).Each(x =>
            {
                bus.Reply(x);
            });
        }
    }
    

    Hosting NService Bus

    With NServiceBus you can either have your class library hosted by NServiceBus.Host.exe, or you can self host it. Using NServiceBus.Host.exe allows you to deploy your class library as a Windows service quite easily.

    To debug and make use of the hosted solution:

    1. Add a reference to NServiceBus.Host.exe to your class library.
    2. Compile the library.
    3. In the project properties for your class library, go to the "Debug" tab and select the option that says "Start external program:".
    4. Click the ellipsis (...) and browse to /bin/debug/NServiceBus.Host.exe for your class library.

    example of how to leverage a hosted solution.

    NServiceBus.Host.exe will scan all assemblies in the same runtime directory as itself, looking for types that implement certain NServiceBus interfaces. This is how your configuration gets picked up.

    In the example below, we created a few classes that implement certain NServiceBus interfaces. By doing that, our configuration automatically gets picked up and is used to configure NServiceBus.

    public class ConfigureThisEndPoint : IConfigureThisEndpoint, AsA_Publisher, IWantCustomLogging
    {
        public void Init() {}
    }

    public class Initialize : IWantCustomInitialization
    {
        public void Init()
        {
            var container = new WindsorContainer();
            Configure.Instance.RijndaelEncryptionService();
            Configure
                .With()
                .Log4Net()
                .CastleWindsorBuilder(container)
                .XmlSerializer()
                .RijndaelEncryptionService()
                .MsmqTransport().IsTransactional(true).PurgeOnStartup(true)
                .MsmqSubscriptionStorage()
                .UnicastBus().ImpersonateSender(true).LoadMessageHandlers()
                .CreateBus()
                .Start()
                ;
        }
    }
    

    Self Hosting NService Bus

    In a web application, you won't need to have your assembly hosted by anything, because it is already hosted by ASP.NET. You will still need to tell NServiceBus how your application should be configured. You can do this by configuring the application in Global.asax.

    self hosted nservice bus example

    public class Global : HttpApplication
    {
        static public IBus Bus { get; private set; }

        protected void Application_Start()
        {
            AreaRegistration.RegisterAllAreas();
            RouteTable.Routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
            RouteTable.Routes.MapRoute( "Default", "{controller}/{action}/{id}", new {controller = "home", action = "index", id = UrlParameter.Optional} );

            Bootstrap();
        }

        static void Bootstrap()
        {
            var container = new WindsorContainer();
            ControllerBuilder.Current.SetControllerFactory(new WindsorControllerFactory(container));

            Bus = Configure.WithWeb()
                .Log4Net()
                .CastleWindsorBuilder(container)
                .XmlSerializer()
                .RijndaelEncryptionService()
                .MsmqTransport().IsTransactional(true).PurgeOnStartup(true)
                .MsmqSubscriptionStorage()
                .UnicastBus().ImpersonateSender(true).LoadMessageHandlers()
                .CreateBus()
                .Start();
        }
    }
    

    In the above configuration, we are telling NServiceBus that:

    • the application is a web application ("WithWeb")
    • we want to use CastleWindsor as our inversion of control container ("CastleWindsorBuilder")
    • we want our messages to be serialized/deserialized using an XmlSerializer ("XmlSerializer")
    • we want WireEncryptedStrings to be encrypted and decrypted using the Rijndael encryption algorithm ("RijndaelEncryptionService")
    • we want our messaging communication to use MSMQ as the storage mechanism, and it should be transactional ("MsmqTransport().IsTransactional(true)")
    • we want all existing messages in the message queue to be purged at startup ("PurgeOnStartup")
    • we want to use MSMQ to manage client subscriptions ("MsmqSubscriptionStorage")

    When we tell NServiceBus to "LoadMessageHandlers", it will scan all assemblies in the runtime directory for classes that implement "IHandleMessages of T" and register them into the container. It will also subscribe to all messages of type T from the correct publisher out on the bus.
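
    A minimal sketch of such a handler (NewEmployeeHired comes from the publish example above; the handler class itself is hypothetical):

    public class NewEmployeeHiredHandler : IHandleMessages<NewEmployeeHired>
    {
        // found by assembly scanning, registered into the container, and
        // subscribed to the publisher configured in the endpoint mappings
        public void Handle(NewEmployeeHired message)
        {
            Console.Out.WriteLine("hired: " + message.FirstName + " " + message.LastName);
        }
    }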

    Things to Remember

    When debugging, it's always important to make sure that your 'Publishers' are started first, then your 'Subscribers'. The reason is that if your Subscriber starts up before the Publisher, it has no one to subscribe to.

    As long as you have handlers that implement IHandleMessages of T, your endpoint will automatically subscribe to the endpoint that publishes messages of T. In your app.config you will need to configure each endpoint that publishes messages from a certain assembly.

    I use a convention of having a messages assembly per service, so each messages assembly contains the message types that are sent to the service, that it replies with, or that it publishes.


    things that suck

    Out of the box you can't decorate your IHandleMessages implementation with an Interceptor attribute, because this creates a proxy of the handler; when NServiceBus tries to load the type, it does so based on its actual type, not on whether the class is assignable to the type. So basically you can't use interceptors easily.

    download

October

    Presentation Patterns

    Presentation Model

    • Represent the state and behavior of the presentation independently of the GUI controls used in the interface

    MVVM

    • Is a specialization of the more general Presentation Model pattern, tailor-made for WPF and Silverlight.
    • Provides the separation of functional development found in MVC, as well as leveraging the advantages of XAML and binding.
    • Model - an object model that represents the real state content.
    • View - refers to all elements displayed by the GUI such as buttons, windows, graphics, and other controls.
    • ViewModel - abstraction of the View that also serves in data binding between the View and the Model. Conceptual state of the data as opposed to the real state of the data.

    MVP

    Passive View

    • "A screen and components with all application specific behavior extracted into a controller so that the widgets have their state controlled entirely by controller."
    • solves the problem of testability.
    • handles this by reducing the behaviour of the UI components to the absolute minimum, using a controller that not only handles responses to user events, but also does all the updating of the view.

    Supervising Controller

    • "Factor the UI into a view and controller where the view handles simple mapping to the underlying model and the controller handles input response and complex view logic."
    • 2 responsibilities
      • input response: the user gestures are handled initially by the screen widgets, however all they do in response is to hand these events off to the presenter, which handles all further logic.
      • partial view/model synchronization: the view uses some form of Data Binding to populate much of the information for its fields.

    INotifyPropertyChanged

    This interface exposes a single event that screen widgets subscribe to. The event should be raised when any properties on the subject have changed; this signals the UI to synchronize with the current state of the ViewModel. There is one thing I don't like about common usage of this interface, and that is hard-coded strings sprinkled across your source code. This is not refactoring-friendly and does not provide you with any sort of compile-time error when an incorrect property name is provided. This is an implementation of the Observer design pattern.

    Observer

    "The Observer Pattern defines a one-to-many dependency between objects so that when one object changes state, all of its dependents are notified and updated automatically." - Head First Design Patterns

    Example 1. Using Inheritance

      public interface INotifyPropertyChanged
      {
        event PropertyChangedEventHandler PropertyChanged;
      }
    
      public abstract class Observable<T> : INotifyPropertyChanged
      {
        protected void update(params Expression<Func<T, object>>[] properties)
        {
          properties.each(x =>
          {
            PropertyChanged(this, new PropertyChangedEventArgs(x.pick_property().Name));
          });
        }
    
        public event PropertyChangedEventHandler PropertyChanged = (o, e) => {};
      }
    
  public class ViewModel : Observable<ViewModel>
  {
    public string FirstName 
    {
      get{ return firstName; }
      set
      { 
        firstName = value;
        update(x => x.FirstName);
      }
    }
    string firstName;
  }
    

    Example 2. Using Composition

  public class Observable<T>
  {
    public Observable(PropertyChangedEventHandler propertyChanged)
    {
      this.propertyChanged = propertyChanged;
    }

    public void update(params Expression<Func<T, object>>[] properties)
    {
      properties.each(x =>
      {
        propertyChanged(this, new PropertyChangedEventArgs(x.pick_property().Name));
      });
    }

    PropertyChangedEventHandler propertyChanged;
  }

  public class ViewModel : INotifyPropertyChanged
  {
    public ViewModel()
    {
      // the lambda defers the event lookup so later subscribers are seen
      observable = new Observable<ViewModel>((o, e) => PropertyChanged(o, e));
    }

    public string FirstName 
    {
      get{ return firstName; }
      set
      { 
        firstName = value;
        observable.update(x => x.FirstName);
      }
    }

    public event PropertyChangedEventHandler PropertyChanged = (o, e) => {};
    Observable<ViewModel> observable;
    string firstName;
  }
    

    INotifyCollectionChanged

    When binding a screen widget to a collection you can create a collection that implements this interface to signal the widget when items are added or removed. When you add a new item to the collection raise the CollectionChanged event to signal the UI to rebind to the collection.

    The built-in ObservableCollection implements this interface and takes care of raising the changed event when items are added or removed from the collection. When binding a large number of items, it can be helpful to suspend publishing of the changed event, batch the adding of those items, and then resume raising the changed event, as sketched below.

      public interface INotifyCollectionChanged
      {
          event NotifyCollectionChangedEventHandler CollectionChanged;
      }
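
  Here is a rough sketch of that batching idea; this is my own illustration built on top of ObservableCollection, not code from the original notes:

  // System.Collections.ObjectModel / System.Collections.Specialized
  public class BatchedObservableCollection<T> : ObservableCollection<T>
  {
    bool suspended;

    public void AddRange(IEnumerable<T> items)
    {
      suspended = true; // suspend per-item notifications
      foreach (var item in items)
        Add(item);
      suspended = false;
      // signal the UI once, after the whole batch has been added
      OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset));
    }

    protected override void OnCollectionChanged(NotifyCollectionChangedEventArgs e)
    {
      if (suspended) return;
      base.OnCollectionChanged(e);
    }
  }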
    

    ICommand

    This interface defines an action that screen widgets can invoke when they are activated. The simplest example is a Button. When a button is clicked it will execute the command that it is bound to. If the command cannot be executed then the button will become disabled.

    This is an implementation of the Command Pattern and the Specification Pattern.

    Command:

    "The Command Pattern encapsulates a request as an object, thereby letting you parameterize other objects with different requests, queue or log requests, and support undoable operations." - Head First Design Patterns

    Specification:

    "In computer programming, the specification pattern is a particular software design pattern, whereby business logic can be recombined by chaining the business logic together using boolean logic."

    This synchronization is done via this interface. When a condition in the application changes whether a command can be executed, the command must raise the CanExecuteChanged event to signal the UI to re-check whether the command can be executed.

  public interface ICommand
  {
    void Execute(object parameter);
    bool CanExecute(object parameter);
    event EventHandler CanExecuteChanged;
  }
    
      public class SimpleCommand : ICommand
      {
        Action command;
        Func<bool> predicate;
    
        public SimpleCommand(Action command): this(command, () => true) { }
    
        public SimpleCommand(Action command, Func<bool> predicate)
        {
          this.command = command;
          this.predicate = predicate;
        }
    
        public void Execute(object parameter)
        {
          command();
        }
    
        public bool CanExecute(object parameter)
        {
          return predicate();
        }
    
        public event EventHandler CanExecuteChanged = (o,e)=>{};
      }
    
      <Button Command="{Binding Path=SaveCommand}">Save</Button>
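
  Wiring the command up in a view model might look something like this; SaveCommand is a hypothetical property matching the binding above:

  public class ViewModel
  {
    public ViewModel()
    {
      // the button stays enabled only while CanSave returns true
      SaveCommand = new SimpleCommand(() => Save(), () => CanSave());
    }

    public ICommand SaveCommand { get; private set; }

    void Save() { /* persist changes */ }
    bool CanSave() { return true; }
  }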
    

    IDataErrorInfo

    This interface is used to synchronize errors in the view model onto the screen. When your ViewModel implements this interface, the screen sends it the name of each property that it is bound to, in order to tell if there is a validation error with it. If an error is returned, the default red outline around the control is displayed.

  public interface IDataErrorInfo
  {
    string this[string columnName] { get; }
    string Error { get; }
  }

  public interface IValidation
  {
    bool IsValid { get; }
    string Message { get; }
  }

  class Validation : IValidation
  {
    Func<bool> condition;

    public Validation(Func<bool> condition, string message)
    {
        this.condition = condition;
        Message = message;
    }

    public bool IsValid
    {
        get { return condition(); }
    }

    public string Message { get; set; }
  }
    
  public class ViewModel : IDataErrorInfo
  {
    public ViewModel(ITodoItemRepository todoItemRepository)
    {
      validations = new Dictionary<string, IValidation>
      {
        {"Description", new Validation(() => !string.IsNullOrEmpty(Description), "Cannot have an empty description.")},
        {"DueDate", new Validation(() => DueDate >= DateTime.Now, "Due Date must occur on or after today.")}
      };
    }
    
        public string Description { get;set; }
        public DateTime DueDate{ get; set; }
    
        public string this[string columnName]
        {
          get
          {
            var validation = validations[columnName];
            return validation.IsValid ? null : validation.Message;
          }
        }
    
        public string Error
        {
          get { return BuildErrors(); }
        }
    
        private string BuildErrors()
        {
          var builder = new StringBuilder();
          foreach (var validation in validations.Values)
            if(!validation.IsValid)
              builder.AppendLine(validation.Message);
          return builder.ToString();
        }
    
        private IDictionary<string, IValidation> validations;
      }
    
      <DockPanel>
        <DockPanel.Resources>
        <Style x:Key="ValidationStyle" TargetType="Control">
        <Style.Triggers>
          <Trigger Property="Validation.HasError" Value="true">
            <Setter Property="Control.ToolTip" Value="{Binding RelativeSource={x:Static RelativeSource.Self}, Path=(Validation.Errors)[0].ErrorContent}" />
            <Setter Property="Control.BorderBrush" Value="Red" />
            <Setter Property="Control.BorderThickness" Value="2" />
          </Trigger>
        </Style.Triggers>
        </Style>
        </DockPanel.Resources>
        <TextBox Width="200" Text="{Binding Path=Description, ValidatesOnDataErrors=True, UpdateSourceTrigger=PropertyChanged}" Style="{StaticResource ResourceKey=ValidationStyle}" />
        <DatePicker SelectedDate="{Binding Path=DueDate, ValidatesOnDataErrors=True, UpdateSourceTrigger=PropertyChanged}" Style="{StaticResource ResourceKey=ValidationStyle}" />
      </DockPanel>
    

    Data Binding

    • powerful databinding.
    • you can bind to:

      • controls
      • public properties
      • xml
      • objects
    • requires both a target and a source

      • a target can be any property that is derived from DependencyProperty eg. TextBox
      • a source can be and public property, controls, objects, xaml elements, ado.net datasets, xml fragments.
    • Josh Smith has an excellent article on data binding in WPF on CodeProject.

    DataContext

    The class System.Windows.FrameworkElement has a property named DataContext, of type object. This is a very special property that every screen widget inherits. When the DataContext is set to an object, every control within that control has its DataContext set to that same object.

    In the following example, when the DataContext is set on the DockPanel, the Label and Button have the same DataContext. That means that the Label and the Button can bind to properties on the same object.

  <Window x:Class="MainWindow">
    <DockPanel>
      <Label Content="{Binding Path=Name}"></Label>
      <Button Command="{Binding Path=Save}"></Button>
    </DockPanel>
  </Window>

  public partial class MainWindow : Window
  {
    public MainWindow()
    {
      InitializeComponent();
      ViewModel = new ViewModel();
    }

    public ViewModel ViewModel
    {
      get { return (ViewModel) DataContext; }
      set { DataContext = value; }
    }
  }
    
      public class ViewModel
      {
        public string Name{get;set;}
        public ICommand Save{get;set;}
      }
    

    IValueConverter

    Value converters are used to convert objects from one type to another. When used in the UI, a value converter converts values back and forth between the View and the ViewModel.

      public interface IValueConverter
      {
        object Convert(object value, Type targetType, object parameter, CultureInfo culture);
        object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture);
      }
        
      <Window.Resources>
        <BooleanToVisibilityConverter x:Key="BooleanToVisibilityConverter" />
      </Window.Resources>
    
      <Image Source="..\Images\warning.png" Visibility="{Binding Path=IsDisabled, Mode=OneWay, Converter={StaticResource BooleanToVisibilityConverter}}" />
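
  Writing your own converter just means implementing the interface above. A small hypothetical example that inverts a boolean (not from the original notes):

  public class InverseBooleanConverter : IValueConverter
  {
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
      return !(bool) value;
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
      return !(bool) value;
    }
  }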
                    
    

    tutorials

    :set spell

    spelling tutorial

    :TOhtml

    Convert file to html

    plug ins

    syntax files

    vimrc

    folding

    • create fold
      • visual mode
      • select lines
      • zf to fold
    • unfold
      • zo

    :set foldmethod=indent

    zm - fold more
    zr - fold less

    :set foldlevel=0

    To set the width of a window to 100 columns, type: 100 ctrl+w | (i.e. 100^W|)

    jumping tags: ^] - jump to tag, ^T - jump back

    switching case: shift + ~ toggles case; in visual mode, select text then "gu" for lowercase or "gU" for uppercase.


    my git-svn workflow

    • git svn clone http://svn-repo -T trunk -b branches -t tags
    • cd svn-repo
    • git svn rebase
    • git branch -f development
    • git checkout development
    • gvim hello-world.mkd -- edit file
    • gvim goodbye-world.mkd -- edit file
    • git add .
    • git commit -am "added hello world, and good bye"
    • git checkout master
    • git svn rebase
    • git merge --squash development
    • git commit -- squash commit message
    • git svn dcommit
    • git branch -f development
    • git checkout development
    • rm goodbye-world.mkd
    • git add -A
    • git commit -am "removed goodbye."
    • git checkout master
    • git svn rebase
    • git merge --squash development
    • git commit -- edit and squash the commit message.
    • git svn dcommit

    To create an SVN tag from git.

    $ git svn branch -t {tagname}
    

    creating an ssh key

    $ cd ~/.ssh
    $ ssh-keygen -t rsa -C "email@mokhan.ca"

    managing multiple accounts

    In your .ssh config you can specify which rsa keys to use for different domains, e.g.

    # ~/.ssh/config
    Host project1.unfuddle.com
        User git
        IdentityFile ~/.ssh/project1/id_rsa

    Host *.unfuddle.com
        User git
        IdentityFile ~/.ssh/id_rsa
    

    hosting git yourself

    $ sudo adduser git
    $ login git
    $ mkdir test.git
    $ cd test.git
    $ git init --bare
    $ git remote rm origin
    $ git remote add origin git@server.com:test.git
    $ git push origin master
    

    git submodules

    tutorial, fix path

    • on windows I had to fix the path in .gitmodules from .\src\Messages to src/Messages

    submodules for git-svn

    tutorial

    links

    • tips for intermediates
    • git svn externals
    • private git server setup

    configure startup.

    c:/users/mkhan/documents/windowspowershell/profile.ps1

    # Load posh-git module from current directory
    Import-Module d:\scripts\posh-git

    # Set up a simple prompt, adding the git prompt parts inside git repos
    function prompt {
        $host.UI.RawUi.WindowTitle = get_current_folder
        return write_prompt(get_unix_style_path);
    }
    function get_current_folder() {
        $pathbits = ([string]$pwd).split("\", [System.StringSplitOptions]::RemoveEmptyEntries)
        if($pathbits.length -eq 1) {
          return   $pathbits[0] + "\";
        }
        return   $pathbits[$pathbits.length - 1]
    }
    function get_unix_style_path() {
        return '/' + ([string]$pwd).replace('\','/').replace(':','').tolower()
    }
    function write_prompt($path) {
        Write-Host( ($env:username + '@' + [System.Environment]::MachineName + ' ' + $path).tolower() ) -foregroundcolor Green

        $Global:GitStatus = Get-GitStatus
        Write-GitStatus $GitStatus
        return "$ "
    }

    if(-not (Test-Path Function:\DefaultTabExpansion)) {
        Rename-Item Function:\TabExpansion DefaultTabExpansion
    }

    # Set up tab expansion and include git expansion
    function TabExpansion($line, $lastWord) {
        $lastBlock = [regex]::Split($line, '[|;]')[-1]

        switch -regex ($lastBlock) {
            # Execute git tab completion for all git-related commands
            'git (.*)' { GitTabExpansion $lastBlock }
            # Fall back on existing tab expansion
            default { DefaultTabExpansion $line $lastWord }
        }
    }

    Enable-GitColors
    

    These are a collection of notes from going through another ClickOnce deployment.

    • sign the manifest files without signing the assemblies.
    • sign manifest with *.pfx files
    • when calling the publish target against msbuild include the cert thumbprint.
      • eg.
    <target name="_publish" depends="compile">
      <property name="command.line" value='${base.dir}\src\app\longrangemodel.ui\longrangemodel.ui.csproj /t:publish /p:UpdateEnabled=true /p:UpdateRequired=true /p:PublisherName="${publisher.name}" /p:ProductName="${product.name}" /p:PublishUrl=${publish.url} /p:InstallUrl=${publish.url} /p:UpdateUrl=${publish.url} /p:Install=True /p:ApplicationVersion=${major.version}.${minor.version}.${build.number}.* /p:ApplicationRevision=${svn.revision} /p:UpdateInterval=1 /p:UpdateIntervalUnits=Minutes /p:UpdateUrlEnabled=True /p:IsWebBootstrapper=True /p:InstallFrom=Unc /p:PublishDir=${publish.dir} /p:ManifestKeyFile="${key.file}" /p:ManifestCertificateThumbprint="${key.file.thumbprint}"' />
      <exec program="${msbuild.exe}" commandline="${command.line}" />
    </target>
    
    • http://www.kavinda.net/2007/01/26/clickonce-on-multiple-environments.html
    • > msbuild.exe project.csproj /t:publish /p:UpdateEnabled=true /p:UpdateRequired=true /p:PublisherName="Mo Khan" /p:ProductName="Mo's Product" /p:PublishUrl=http://mokhan.ca/publish /p:InstallUrl=http://mokhan.ca/publish /p:UpdateUrl=http://mokhan.ca/publish /p:Install=True /p:ApplicationVersion=1.0.0.* /p:ApplicationRevision=1235 /p:UpdateInterval=1 /p:UpdateIntervalUnits=Minutes /p:UpdateUrlEnabled=True /p:IsWebBootstrapper=True /p:InstallFrom=Unc /p:PublishDir=${publish.dir} /p:ManifestKeyFile="mykey.pfx" /p:ManifestCertificateThumbprint="9DAAADE32307C99743FC74A475D6008370C65642"
    
    • I've been using the Visual Studio project properties panel to dig out the thumbprint from the pfx file.
      • Open project properties, click on the signing tab, check the sign manifests check box, choose the file, then click on "more details...".
    • You can also specify a /p:SupportUrl=http://mokhan.ca and a shortcut to that site will appear in the start menu.

    When you create a ClickOnce deployment there are three main files that are created:

    • setup.exe - this is the bootstrapper that users should run to install the application. It will check to see if you have the required pre-requisites in order to run the application. If you do not, you can have it automatically download and install them for you.
    • .application - this file keeps track of the current version of the application. I believe each time the application is started it checks this file to see if there is a newer version of the application. This file has to be hosted somewhere that each user will have access to, like on the web (http) or over a local Intranet (UNC).
    • .manifest - this file keeps track of each of the files that need to be deployed with a specific version of the application.

    clickonce deployment folder structure (server)

    /public
     - /Application Files
         - {PROGRAM}_1.0.0.1000
             - {PROGRAM}.exe.manifest
             - ... rest of the files to deploy with this version
         - {PROGRAM}_1.0.0.2000
             - {PROGRAM}.exe.manifest
             - ... rest of the files to deploy with this version
     - {PROGRAM}.application
     - setup.exe
    

    Deployment on the client

    • I used procmon.exe to trace down where the application is installed on the client machine.
    • On my machine the app was installed to: "C:\Users\mkhan\AppData\Local\Apps\2.0\XXLBODCL.D2T\W0JYK67Z.2QC\long..

    signing

    "Publisher certificates come in two flavors—self-generated or third-party–verified (by Verisign, for example). A certificate is issued by a certificate authority, which itself has a certificate that identifies it as a certificate issuing authority. A self-generated certificate is one that you create for development purposes, and you basically become both the certificate authority and the publisher that the certificate represents. To be used for production purposes, you should be using a certificate generated by a third party, either an external company like Verisign or an internal authority such as your domain administrator in an enterprise environment. " - msdn We should use a certificate that was generated by a domain administrator for production deployment. You can use 'certmgr.exe' to manage certificates in the store on your machine.

    gotchas

    Deploying to production

    When signing a clickonce install with a cert issued by a cert server, you must have the pfx file installed on your local machine in the Current User Certificates store.

    • mmc.exe
    • File -> Add/Remove Snap In...
    • Certificates
    • Add
    • Current User

    Then deploy from your machine using msbuild.

    To bypass the pesky security warning dialog you need to ensure the following:

    To be considered a trusted publisher, the publisher certificate must be installed in the Trusted Publishers certificate store on the user's machine, and the issuing authority of the publisher certificate must have their own certificate installed in the Trusted Root Certification Authority certificate store. - MSDN

    1. Make sure the cert (pfx) is installed into your "Personal/Certificates"
    2. Make sure that the cert was issued by the Root Certification Authority
    3. Make sure that the cert is installed in to the "Trusted Publishers/Certificates" in the (Local Computer)

    If you don't get the cert installed in to the Trusted Publishers store then the Security Dialog will pop up and tell you the Publishers name is the same name as the "Issued To" value in the cert. All of this can be viewed in mmc.exe by adding the "Certificates" snap-in.

    bugs

    • When installing the app using a low-privileged account, I got the following error:

      Unable to install or run the application. The application requires that assembly Microsoft.Windows.Design.Extensibility Version 3.5.00 be installed in the Global Assembly Cache (GAC) first.

    I found the following solution on stackoverflow. ClickOnce is a great technology for releasing updates into the wild quickly and easily, but it sure sucks to set up.

    I got the error a few times for different assemblies. Updating each assembly's status from "Prerequisite" to "Include" seems to have fixed the problem.

    • I had an issue where I was getting an error when installing the ClickOnce app with limited privileges on a Win XP box: it was saying an app with the same identity had already been installed. I checked add/remove programs and our app wasn't there; then I checked where the ClickOnce apps are installed, and it was there. So I cleared out the folder, re-ran the setup.exe, and it worked.

    helpful links

    my name is mo and i am testing out jekyll as a legit blogging platform.

    things i need to learn:

    • how to manage posts
    • how to generate an rss feed
    • how to publish
    • how to schedule posts - use the published variable in your _config.yml file.

    sometimes i need to publish code like the following:

    namespace learning.jekyll
    {
        public class HelloWorld
        {
            public void SayHello()
            {
                System.Console.Out.WriteLine("Hello");
            }
        }
    }
    

    jekyll - static site generation

    on windows

    • install ruby 1.8.7 installer
    • install ruby devkit 3.4.5
    • unzip devkit into ruby install (c:\ruby)
    • gem install jekyll
    • gem install rdiscount

    jekyll configuration

    • the site structure is pretty self explanatory. more info
    • create a _config.yml in the root dir and add 'markdown: rdiscount' so that you don't have to type 'jekyll --rdiscount'

    To load the local MySQL database, I logged in to the godaddy.com dashboard and created a backup, then I ftp'd the backup to my local machine.

    MySQL

    • $ mysql> create database wp; -- link
    • $ mysql> use wp;
    • $ mysql> source d:\tmp\wp_backup.sql -- link

    importing the old posts

    • ps > $env:EDITOR = 'gvim -f'
    • gem open jekyll
    • copy the wordpress.rb and csv.rb to your sites _import directory.
    • gem install sequel mysqlplus
    • gem install sequel
    • gem install mysql
    • from the _import directory
      • $ ruby -r 'wordpress' -e "Jekyll::WordPress.process( 'wp', 'root', 'password')"
      • wp is the name of the database i created in mysql.
      • root is the username
      • password was the password.

    The original import didn't include lots of details from wordpress, like categories, time, and comments, so I found another way to import those details from this post.

    Markdown

    Last week, while I sat in Udi Dahan's distributed systems design course taking notes on my laptop, I was told about a tool called Markdown. It allows you to include some basic format specifiers in your text file so that it can later be converted to html using something like rdiscount.

    Syntax Documentation

    TODO::

    • discuss how this relates to jekyll and why you would want to convert text to html.
    • installation ?
    • vim syntax files

September

    2010.09.27

    • idempotency
    • gigabit ethernet => 128 megabytes per second => subtract tcp/http overhead => ~50 megabytes per second of data
    • multiple networks => prioritize commands over queries (CustomerBecamePreferred vs GetAllCustomers)
      • network admins can prioritize data on separate networks.
      • move time critical data to separate networks.
    • properties suck! the time to access one property from another is not deterministic.
    • every time you create an association there is a cost down the road.

      8 fallacies of distributed computing..

    • 1 - the network is reliable
    • 2 - latency isn't a problem
    • 3 - bandwidth isn't a problem
    • 4 - the network is secure - security
    • 5 - the topology won't change
    • 6 - the admin will know what to do.
    • 7 - transport cost isn't a problem.
    • 8 - the network is homogeneous
    • 9 - the system is atomic
      • centralized dba committee to sign off on schema changes.
      • solution: internal loose coupling, modularize, design for scale out in advance, design for interaction with other software.
    • 10 - the system is finished.
      • maintenance costs more than development - design for maintenance
      • the system is never finished. - design for upgrades.
      • how will you upgrade the system.
    • 11 - business logic can and should be centralized

      • re-use can be bad. context matters.
      • single generic abstraction can make things more difficult, and can cause performance problems.
      • more classes but more maintainable. the number of lines and coupling is reduced.
      • generic abstractions can cause a lot of problems down the line. performance, maintainability. (small code base but much more complex to jump in)
      • rules that change often can be segregated from rules that don't change often.
      • we are taught that re-use is one of the greatest values in software development, in reality, it doesn't really help as much as we think it should.

      solution:

      • accept that business logic and validation will be distributed. plan for it.

      big idea! (logical centralization and physical centralization is one to one.) at design time package rules enforcement together.

        - development time artifact
        - 12.sln that only has files that relate to rule "12"
            - ie. js file, cs file, and sql file all related to business rule 12...
        - business says we want to change rule # 12, then open up 12.sln and make those changes.???
        -- COULD THIS WORK??? 
        "-" any new sql files would have to have the next ordinal number to run migration scripts in order.
        "+" order migrations by timestamp would solve this problem.
      
    • best practices have yet to catch up to "best thinking"

    • tech cannot solve all problems
    • adding hardware doesn't necessarily help much
    • coupling
      • afferent : depends on you
      • efferent : you depend on
    • attempt to minimize afferent and efferent coupling.
    • zero coupling is not possible.
    • types of coupling for systems: platform, temporal, spatial

    shared database is one of the worst forms of coupling, e.g. one system writes to the database and another reads from it. make the coupling visible: no more shared database, because it's too hard to figure out the coupling when you do that. the #1 danger is not being aware of the coupling because you cannot see it.

    Platform coupling

    • aka "interoperability"
    • using protocols only available on one platform. e.g. .NET [A] --------> JAVA [B]

    solutions: use a standards based transfer protocol like http, tcp, udp, smtp, or snmp.

    Temporal coupling

    • coupling to time.
    • e.g. service [A] sends message to service [B] and is waiting for a response. ---> TIMEOUT
    • stop trying to solve this type of coupling with multithreaded code.
    • most people should not be writing multi threaded code.
    • fowler says that only 4 people in the world know how to write proper multi threaded code.
    • publish/subscribe vs request/response

    Spatial coupling

    • where in space are we depending on.
    • how closely tied am i in space to the physical machine where this is running.

    summary

    • loose coupling is more than just a slogan.
    • coupling is a function of 5 diff dimensions.
      • platform
      • spatial
      • temporal
      • afferent
      • efferent

    mitigate temporal and spatial coupling by hosting in the same process. [A[B]] or [B[A]]

    to do: - add zip file to google docs. - look up 8 fallacies of distributed computing.

    Messaging:

    • reduces coupling
    • RPC crashes when load increases, messaging does not.
    • messaging: data can sit and wait to be processed in a queue
    • rpc: data cannot just sit, threads are blocked. as load increases, more threads are needed.
    • throttling: tell clients to go away. all threads are tied up, and we don't intend to activate more.

      • get to max throughput then stay there.
    • messaging: fire and forget

      • we can set the max # of threads to process items off of a queue.
      • store items in persistent storage (queue) which is cheap.
      • this works well for commands.

      • how do you deal with queries? the result needs to be returned in almost real-time.

    "represent methods as messages" - Authorization : IHandles --> runs against every message for authorization.

    DAY 2 --- 2010.09.28

    role playing... kinky? - previous assumptions are incorrect - cross functional teams

    benefits - same infrastructure - slightly different architecture - increase performance

    cost of messaging: - learning curve

    10:15 am - need a "correlation id" to tell us which response belongs to which request, e.g. "this is the response for request 123". now you can have multiple responses for a single request.

    browser -> send request to server -> poll server for response with correlation id <- return responses for correlation id
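
    A minimal sketch of the server side of that flow (all names here are hypothetical):

    using System;
    using System.Collections.Generic;

    public class ResponseStore
    {
        // responses are keyed by the correlation id handed out when the
        // request is first accepted, so a single request can accumulate
        // multiple responses over time.
        readonly Dictionary<Guid, List<object>> responses = new Dictionary<Guid, List<object>>();

        public Guid accept_request()
        {
            var correlation_id = Guid.NewGuid();
            responses[correlation_id] = new List<object>();
            return correlation_id;
        }

        public void record(Guid correlation_id, object response)
        {
            responses[correlation_id].Add(response);
        }

        // the browser polls with its correlation id until it has what it needs.
        public IEnumerable<object> responses_for(Guid correlation_id)
        {
            return responses[correlation_id];
        }
    }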

    Publish/Subscribe

    • events are...

    todo:: - topic hierarchies

    1:08 pm

    Broker

    • a little bit of orchestration
    • a little bit of transformation

    Bus

    • all about connecting event sources to event sinks. events are the core abstraction of the bus architectural style.
    • supports federated mode?
    • doesn't break service autonomy
      disadvantages:
      • harder to design

    Service Orientation

    4 tenets of service orientation

    • services are autonomous
    • services have explicit boundaries
    • services share contract & schema, not class or type
    • service interaction is controlled by policy

    what is a service?

    Service: the technical authority for a specific business capability; all of that capability's data and business rules reside within the service.

    what a service is not:

    • something that has only functionality
    • something that only has data in a database
      • like [create, read, update, delete] entity

    service examples

    service deployments

    • many services can be deployed to the same box
    • many services can be deployed in the same app
    • many services can cooperate in a workflow
    • many services can be mashed up in the same page.

    Availability

    • if subscriber goes down, messages are buffered to disk

      • this can require a significant amount of disk space for an outage.
      • this can take the publisher down.
    • schema: defines the logical message types

    • contract: provides additional tech info

    The IT/Operations Service

    • responsible for keeping info flowing in the enterprise
    • owns all hosting technologies.
    • hosts all the business services

      • configures its own handlers to run first
        • authentication, authorization - LDAP/AD access
    • HR - employee hired, employee fired.

      • provisions/de-provisions machines, accounts, etc.

    todo:: * watch big bang theory * nerdtree vim plugin * tcommment vim plugin

    DAY 3 = 2010.09.29

    Review (hotel management system exercise): define the services.
    + page composition really simplifies things.
    + avoid request/response between services.
    + avoid data duplication between service boundaries.
    + the define-the-services exercise helps identify the key business capabilities and fosters communication about the business.

    Services

    • Availability/Booking:
      • booking[bookingid, customerid, daterange, roomclassification]
      • tells how many of what room classifications are available at what times.
    • Facilities/Housekeeping
      • tells what rooms are physically available
    • Customer Care
      • tracks customers information like first name and last name.
    • Billing
      • has price and bills customers for their bookings.

    break.

    Service Decomposition

    • the large scale business capability that a service provides can be further broken down.

    Business Components - BC

    • [yes] multiple database schemas, no fk relationships between schemas
      • this mitigates deadlocks and perf issues.
      • referential integrity argument: delete => deletions usually mean the data will not be shown.
      • duplicate data on different islands of data is a huge problem; this is why we do not duplicate data across service boundaries.
      • the data guy says: do not delete data, ever.
      • we effectively scale out our databases by having multiple databases.

    be aware of solutions masquerading as requirements - udi dahan

    Services and transactions

    "autonomous components" : AC

    - responsible for one or more message types
    - the thing that performs a unit of work.
    - independently deployable, has its own endpoint.
    - common code that is needed to handle a specific message type, not a single piece of code.
    - a running part of a service.
    

    Layout

    BC - multiple AC's - single DB

    • when users report that the system is slow, they are usually talking about a specific use case.

      • using the typical monolithic architecture it is difficult to scale.
      • when the business components are separated it is much easier to scale specific business capabilities.
    • from a runtime perspective there are more moving parts, and it can be difficult to monitor.

    SOA building blocks summary

    you see the AC's running.
    - autonomous components are the unit of deployment in SOA.
    - AC's take responsibility for a specific set of message types in the service.
    - an AC uses the bus to communicate with other AC's.
    - AC's can communicate using a message bus instead of a service bus: the same technological bus, used differently.
    - not all AC's require the same infrastructure, within a BC or across all AC's.
      - e.g. one AC may use an ORM, another might write straight sql.

    Service Structure

    • single user domain / multi-user collaborative domain.
    • model multi user data explicitly.

    Queries in a collaborative multi user domain.

    is this the right screen to be built for the purpose of collaboration?

    • tell users how stale the data is. "data correct as of 10 minutes ago."
    • eg bank statement without date.
    • decision support. all query screens are for decision support.
    • which decision(s) are you looking to support for each screen.
    • create separate screens for different levels of decision support.
    • persistent view model: what if we had a table for each screen that stores everything for that screen?
    • how do you keep the data in sync between the persistent view model tables and the source data? (see the sketch after this list.)
    • no coupling between screens in the ui.
    • no fk relationships in the persistent view model tables.
      • avoid calculations when doing queries.
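
    a hedged sketch of keeping one of those per-screen tables in sync (the event, table, and column names are made up):

    using System;
    using System.Data;

    public class CustomerBecamePreferred
    {
        public Guid CustomerId { get; set; }
    }

    public class CustomerScreenDenormalizer
    {
        readonly IDbConnection connection;

        public CustomerScreenDenormalizer(IDbConnection connection)
        {
            this.connection = connection;
        }

        // subscribes to the event published by the owning service and updates
        // the one flat row this screen reads from; no joins or calculations
        // are left to do at query time.
        public void handle(CustomerBecamePreferred message)
        {
            using (var command = connection.CreateCommand())
            {
                command.CommandText =
                    "update customer_screen set is_preferred = 1 where customer_id = @id";
                var id = command.CreateParameter();
                id.ParameterName = "@id";
                id.Value = message.CustomerId;
                command.Parameters.Add(id);
                command.ExecuteNonQuery();
            }
        }
    }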

    Deployment and security

    Role based security

    • different screens for different roles go to different tables. select permissions per role.
    • use the persistent view model to run some validation before issuing the command.

    Commands

    • validation: is the input potentially good? structured correctly? ranges, lengths, etc?
      • this is not a service, this is a function.
    • rules: should we do this?
      • based on current system date.
      • what the user saw is irrelevant.

    model user intent. - udi dahan

    we want to implement a fair system. if no one can see that a system is unfair, then it is fair enough.

    HFT - high frequency trading.

    • good commands

      • "thank you. your confirmation email will arrive shortly."
      • inherently asynchronous
    • it's easier to validate a command, because

      • we have context,
      • less data
      • more specific
    • in most cases it's difficult to justify many to many relationships within a bc.

    • document databases are great for persistent view models.

    CQRS

    Validation

    schema

    • internal schema
      • message types are commands
        • ChangeCustomerAddress
      • faults
        • customer.... missed it.
    • external schema
      • primarily based on events
        • CustomerBecamePreferred
        • OrderCancelled
      • past tense
      • something that has already occurred
      • stay away from db thinking = no CRUD
        • think about business status changes
    • faults
      • an enum value
        • CustomerNotFound
        • OrderLimitExceededForCustomer
      • not an exception
        • we expect these things to occur
        • exceptions don't really work in async programming.
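
    as a sketch, the message types above might look like this in code (the IEvent marker and the Fault enum shape are my assumptions):

    // past-tense event names describe business status changes that have
    // already occurred -- not CRUD operations.
    public interface IEvent {}

    public class CustomerBecamePreferred : IEvent
    {
        public System.Guid CustomerId { get; set; }
    }

    public class OrderCancelled : IEvent
    {
        public System.Guid OrderId { get; set; }
    }

    // faults are expected outcomes, modeled as values rather than
    // exceptions, since exceptions don't really work asynchronously.
    public enum Fault
    {
        CustomerNotFound,
        OrderLimitExceededForCustomer
    }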

    ultimately, we are refactoring a bus into a plane while it's driving. - udi dahan

    Day 4 -- 2010.09.30

    Review: order cancelled.
    - don't process the refund until the products have been returned.
    - the cancel order command has no reason to fail now: you can cancel an order at any time, and for the customer to receive the refund they must return the product.
    - the ship order command has no reason to fail now: we do not need to ship the order if it is cancelled, and even if they cancel the order after it has been shipped, that's ok.
    - if the new requirements don't fit the rules, then:
      - your service boundaries might be incorrect.
      - there is something missing in the requirements or process.
    - svn => single user domain
    - git => multi user domain

    break

    Long Running Processes

    • mortgage lending process
    • time is important. you don't see time, but it's important to model.

    SAGA - handles long lived transactions

    • triggers are messages
    • similar to message handlers
      • can handle a number of different message types.
    • different from message handlers
      • have state, message handlers don't
    • sql server isolation level repeatable read is like "select for update"
      • i am going to execute this query again in this transaction and i would like to get back the same result set.
    • within a single business component

    Testing

    • test it as a black box.
    • make sure you test the sagas because they are core, they will change, and it's important to maintain the original business behaviour as the system grows and changes.

    windows workflow foundation (WF)
    - transaction management is manual, do it yourself.
    - hard to test.
    - no timeout mechanism.

    biztalk
    - can't actually represent this kind of functionality in a drag and drop orchestration.
    - can be useful when doing something procedural and synchronous.
    - latency tends to be slower.
    - hasn't modeled time.
    - biztalk rules engine (BRE)

    the hard part

    • the easy part is using the building blocks.
    • hard part is getting them to tell you what the process needs to be.
    • interacting with legacy systems, each response becomes a message which triggers an activity.
      • legacy systems are usually internal to a service.

    summary

    whenever you hear about workflows, orchestration, etc., sagas are likely a candidate. if we see a saga handling 50 messages, that's usually a smell.

    SCALING

    SERVICE LAYER & DOMAIN MODEL

    domain model
    - worth it if you have complicated and ever changing business rules.
    - if you have simple not-null checks and a couple of sums to calculate, a transaction script is a better bet.
    - independent of all concerns; poco - plain old c# objects.
    - testing a connection between objects does not test any sort of behaviour. a unit is something that has a boundary; you have been testing the innards of a unit.
    - can be deployed multi tier.
    - it's not about persistence, it's about behaviour.

    service layer

    • manages persistence
      • e.g. uses orm to persist domain model.

    concurrency models

    • at least with eventual consistency we will effectively get true consistency.
    • with the current way we develop we do not have consistency.
    • the current domain models you've built are great for single user model, but not multi user model.

      realistic concurrency

    • happy face
      • you change the customers address
      • i update the customers credit history
    • sad face

      • you cancel an order
      • i try to ship the order
    • only get one domain object.

      • ask it to update itself
      • domain object runs business rules
      • e.g. customer care:

            using (...)
            {
                var customer = session.Get(id);
                customer.MakePreferred();
            }
    • violation

      • crosses 3 service boundaries
        • shipping, billing, customer care:

            public void MakePreferred()
            {
                foreach (var order in this.UnshippedOrders)
                    foreach (var orderLine in order.OrderLines)
                        orderLine.Discount(10.Percent());
            }

    DAY 5 - 2010.10.01

    review

    out of order events

    Building a saga for shipping

    public class ShippingSagaData : IContainSagaData
    {
        public virtual Guid Id{get;set;}
        public virtual string Originator {get;set;}
        public virtual string OriginalMessageId {get;set;}
    
        public virtual bool Billed {get;set;}
        public virtual bool Accepted {get;set;}
        public virtual Guid OrderId {get;set;}
    }
    
    public class ShippingSaga : Saga<ShippingSagaData>,
        IAmStartedByMessages<OrderAccepted>,
        IAmStartedByMessages<OrderBilled>
    {
        public override void ConfigureHowToFindSaga()
        {
            ConfigureMapping<OrderAccepted>(s =>s.OrderId, m =>m.OrderId);
            ConfigureMapping<OrderBilled>(s =>s.OrderId, m =>m.OrderId);
        }
    
        public void Handle(OrderAccepted message)
        { 
            Data.OrderId = message.OrderId;
            Data.Accepted = true;
    
            // if billing already arrived the saga is done; otherwise give
            // billing a week to show up before the timeout fires.
            if(Data.Billed)
                MarkAsComplete();
            else
                RequestTimeout(TimeSpan.FromDays(7), "bill");
        }
    
        public void Handle(OrderBilled message)
        { 
            Data.OrderId = message.OrderId;
            Data.Billed = true;
            if(Data.Accepted)
                MarkAsComplete();
        }
    }
    
    public class OrderAccepted : IMessage
    {
        public Guid OrderId{get;set;}
    }

    public class OrderBilled : IMessage
    {
        public Guid OrderId{get;set;}
    }
    

    the tests...

    [TestFixture]
    public class ShippingTests
    {
        [Test]
        public void WhenBillingArrivesAfterAcceptedSagaShouldComplete()
        {
            Test.Initialize();
            Test.Saga<ShippingSaga>()
                .WhenReceivesMessageFrom("client")
                .When(s => s.Handle(new OrderAccepted()))
                .AssertSagaCompletionIs(false)
                .When(s => s.Handle(new OrderBilled()))
                .AssertSagaCompletionIs(true);
        }

        [Test]
        public void WhenBillingArrivesBeforeAcceptedSagaShouldComplete()
        {
            Test.Initialize();
            Test.Saga<ShippingSaga>()
                .WhenReceivesMessageFrom("client")
                .When(s => s.Handle(new OrderBilled()))
                .AssertSagaCompletionIs(false)
                .When(s => s.Handle(new OrderAccepted()))
                .AssertSagaCompletionIs(true);
        }

        [Test]
        public void SagaRequestsBillingTimeout()
        {
            Test.Initialize();
            Test.Saga<ShippingSaga>()
                .WhenReceivesMessageFrom("client")
                .ExpectSend<TimeoutMessage>(m => true)
                .When(s => s.Handle(new OrderAccepted()))
                .When(s => s.Timeout(null))
                .AssertSagaCompletionIs(true);
        }
    }
    

    Web

    • synchronous user login
    • caching
      • keeping cache up to date across farms is challenging.
      • cache invalidation
      • track hit rate (number of times that item was in cache.)
        • hard to do
        • google, facebook, try to get a hit rate above 95%.
    • start using a cdn (content delivery network.)
      • akamai
    • in the db, reads interfere with writes - hurts perf.

      the number one reason people have trouble scaling their web applications is that they are ignoring the web. - udi dahan

    • 90% of the page does not need to be rendered server side for every request.

    • different interface for search engine than for users.
      • meta tags for search engine.
      • ui for users.
    • persistent view model browser side using cookies.

    smart clients

    • use synchronization domains for thread synchronization.
    • provide information radiators for your knowledge workers.
    • client side domain model.
    • use property grid to display the status of objects without needing to attach a debugger.
    • cloning-proxies for views
      • create a proxy for your views so that any data handed to the view can be cloned and bound to the view.

    map display

    • see your wells on a map
    • constraints
      • may receive 100's of updates per second at peak

April

    One of the coolest things about powershell is being able to customize the shell. Here’s what my shell looks like now.

    powershell.prompt

    When I’m working on a project using git, my prompt looks like this.

    powershell.prompt.git

    It now tells me what branch I am on. Whoa… All I had to do was drop a modified version of profile.ps1 into “c:\users\mo\documents\WindowsPowerShell”. If the “WindowsPowerShell” folder doesn’t exist, create it. That’s what I did. This also uses posh-git. If you check out the source you’ll find an example profile.ps1 that you can use.

    Leveraging this file you can load other scripts every time you pop open a powershell. Like if you wanted to load a sweet twitter script. Here’s my current script…

    Import-Module d:/scripts/posh-git/posh-git
    d:\scripts\twitter-on-powershell\twitter-on-powershell.ps1
    d:\scripts\vsvars2010.ps1

    function prompt {
        $user_location = $env:username + '@' + [System.Environment]::MachineName + ' /' + ([string]$pwd).replace('\', '/').replace(':', '').tolower() + ' ~'
        $host.UI.RawUi.WindowTitle = $pwd
        Write-Host($user_location) -foregroundcolor green
        # Git Prompt
        $Global:GitStatus = Get-GitStatus
        Write-GitStatus $GitStatus
        return "> "
    }

    if(-not (Test-Path Function:\DefaultTabExpansion)) {
        Rename-Item Function:\TabExpansion DefaultTabExpansion
    }

    function TabExpansion($line, $lastWord) {
        $lastBlock = [regex]::Split($line, '[|;]')[-1]
        switch -regex ($lastBlock) {
            # Execute git tab completion for all git-related commands
            'git (.*)' { GitTabExpansion $lastBlock }
            # Fall back on existing tab expansion
            default { DefaultTabExpansion $line $lastWord }
        }
    }

    Enable-GitColors

    The cool part is that everything you write in a powershell console can be dropped right into a .ps1 file and run as a script. I’m actively learning…

    At ARC I recently got to work on a multi touch screen application for a SMART Board. The application is for guests who come to visit the office: they use the touch screen to look up the person they are here to see, then create a visitor pass for their visit. The application was built in Flash, which is a technology I have almost no experience with. However, the Flash guys were having some trouble getting the multi-touch piece working on the board. That’s when I came in.

    I ended up building a TUIO bridge that is an overlay on top of their application. When a touch is recorded I build up TUIO packets and flush them to all connected clients on TCP port 3000. We received a version of a TUIO overlay from the guys at SMART but it didn’t work and the code was a procedural mess. I was committed to writing an OO friendly version of the overlay, and so far so good. There were some tiny things that we had to change to get this working. For instance, our target machine was a 64 bit copy of Windows 7. Because of this we had to change some registry settings for the SMART Board software. This was a pain to figure out, but we got some decent help from a developer at SMART Technologies. Here’s a snippet of the email from the SMART guy.

    We replicated the problem on this end with my Windows 7 machine, so I have a fix for you. What was happening is that when you did a second contact, the software was feeding both contacts to the Windows 7 Gesture recognition engine. To bypass this.

    FIRST:

    Regedit:

    HKEY_CURRENT_USER\Software\Classes\Virtual Store\MACHINE\SOFTWARE\Wow6432Node\SMART Technologies\SMART Board Drivers\Board1\IsDoGesture to 0

    The key should already exist.

    SECOND:

    Run SMART Board Control Panel
    - Under SMART Hardware Settings\Mouse and Gesture TURN OFF Enable Multitouch Gestures and Enable Single Touch Gestures.

    THIRD:
    - Restart SMART Board Service so it picks up the new settings. Under SMART Board Control Panel -> About Software and.. -> Tools -> Diagnostics -> Service -> Stop.

    And then start it again.

    The registry keys for other versions of Windows are:

    • Windows Vista/7 (32 bit, UAC on) HKCU\Software\Classes\VirtualStore\Machine\Software\SMART Technologies\SMART Board Drivers\BoardX\ IsDoGesture
    • Windows Vista/7 (64 bit, UAC on) HKCU\Software\Classes\VirtualStore\Machine\Software\Wow6432Node\SMART Technologies\SMART Board Drivers\BoardX\ IsDoGesture
    • Windows Vista/7 (32 bit, UAC off) HKLM\Software\SMART Technologies\SMART Board Drivers\BoardX\ IsDoGesture
    • Windows Vista/7 (64 bit, UAC off) HKLM\Software\Wow6432Node\SMART Technologies\SMART Board Drivers\BoardX\ IsDoGesture

    Where X is a board number >= 1.

    In order to make sure the overlay was functioning properly, I built a test tool that listens for all the xml that gets flushed to TCP port 3000 and displays it in a console application. When building applications like this, it’s much more important to have useful logging than to depend on a debugger.

    The SMART Board API required me to listen for messages on the windows message pump, then funnel those messages up into the SMART Board sdk, which processes them and pumps them back to me via event handlers. When a touch is received it gets a unique id assigned to it. The SMART Board is only capable of handling two touches at a time, so only 2 id’s ever appear. The recorded touches are flushed down a TCP socket 30 frames per second. This means that as a unique touch moves across the board, the last known x and y coordinates for that touch are what get flushed down.

    The key to capturing touches and drags was wiring up event handlers for the OnXYDown, OnXYUp, and OnXYMove events.

    public partial class Shell
    {
      [DllImport("user32.dll")]
      static public extern int RegisterWindowMessageA([MarshalAs(UnmanagedType.LPStr)] string lpString);
    
      int SBSDKMessageID = RegisterWindowMessageA("SBSDK_NEW_MESSAGE");
      ISBSDKBaseClass2 Sbsdk;
    
      public Shell()
      {
        InitializeComponent();
        Loaded += (o, e) =>
        {
          Sbsdk = new SBSDKBaseClass2();
          ((_ISBSDKBaseClass2Events_Event) Sbsdk).OnXYDown += (x, y, z, pointer_id) =>
          {
            TouchTrigger.fire(new Down(pointer_id, x, y));
          };
          // a move is deliberately treated as another Down so the latest
          // coordinates are what get flushed on the next frame.
          ((_ISBSDKBaseClass2Events_Event) Sbsdk).OnXYMove += (x, y, z, pointer_id) =>
          {
            TouchTrigger.fire(new Down(pointer_id, x, y));
          };
          ((_ISBSDKBaseClass2Events_Event) Sbsdk).OnXYUp += (x, y, z, pointer_id) =>
          {
            TouchTrigger.fire(new Up(pointer_id));
          };
    
          var handle = new WindowInteropHelper(this).Handle;
          var int_handle = handle.ToInt32();
          Sbsdk.SBSDKAttachWithMsgWnd(int_handle, false, int_handle);
          Sbsdk.SBSDKSetSendMouseEvents(int_handle, _SBCSDK_MOUSE_EVENT_FLAG.SBCME_NEVER, -1);
          HwndSource.FromHwnd(handle).AddHook(new_message);
        };
      }
    
      IntPtr new_message(IntPtr hWnd, int Msg, IntPtr wParam, IntPtr lParam, ref bool Handled)
      {
        if (Msg == SBSDKMessageID && Sbsdk != null) Sbsdk.SBSDKProcessData();
        return IntPtr.Zero;
      }
    }
    

    The TouchTrigger will fire off the touch to any listening observers.

    public class TUIOProtocol : Protocol
    {
      // fields implied by the rest of the class; the sequence counter and
      // reference_time helpers are in the downloadable source.
      readonly Dictionary<long, Touch> touches = new Dictionary<long, Touch>();
      double screen_width;
      double screen_height;

      public TUIOProtocol(double screen_width, double screen_height)
      {
        this.screen_width = screen_width;
        this.screen_height = screen_height;
      }
    
      public void record(Touch touch)
      {
        touches[touch.id] = touch;
      }
    
      public void publish_to(Connection connection)
      {
        connection.send(build_for(connection));
      }
    
      Serializable build_for(Connection connection)
      {
        var xml = new Xml();
        xml.add("<OSCPACKET ADDRESS='{0}' PORT='{1}' TIME='{2}'>", connection.ip, connection.port, create_time_stamp());
        foreach (var touch in touches.Values)
        {
          touch.append_header(xml, screen_width, screen_height);
        }
        xml.add("<MESSAGE NAME='/tuio/2Dcur'>");
        xml.add("<ARGUMENT TYPE='s' VALUE='alive' />");
        foreach (var touch in touches.Values)
        {
          touch.append_footer(xml);
        }
        xml.add("</MESSAGE>");
        xml.add("<MESSAGE NAME='/tuio/2DCur'>");
        xml.add("<ARGUMENT TYPE='s' VALUE='fseq'/>");
        xml.add("<ARGUMENT TYPE='i' VALUE='{0}'/>, sequence.next());
        xml.add("</MESSAGE>");
        xml.add("</OSCPACKET>");
        return xml;
      }
    
      double create_time_stamp()
      {
        return DateTime.Now.Subtract(reference_time).TotalMilliseconds/1000000000;
      }
    }
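
    Roughly how the protocol is driven (this wiring is illustrative; the real hookup is in the downloadable source):

    // touches are recorded as they arrive; roughly 30 times per second the
    // latest state is flushed to every client connected on port 3000.
    var connections = new List<Connection>(); // populated as clients connect
    var protocol = new TUIOProtocol(1920, 1080); // screen size is made up here

    var timer = new System.Timers.Timer(1000.0/30);
    timer.Elapsed += (sender, args) =>
    {
        foreach (var connection in connections)
            protocol.publish_to(connection);
    };
    timer.Start();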
    

    The TUIOProtocol stores the latest recorded touch for each unique touch id. When the timer elapses it tells the TUIOProtocol to publish the changes to a TCP Connection. As the xml is built, each touch is told to append its own header and footer to the xml. If it’s a Down touch then something gets appended. If it’s an Up touch then nothing gets added.

    public class Down : Touch
    {
      public Down(long id, double x, double y)
      {
        this.id = id;
        this.x = x;
        this.y = y;
      }
    
      public long id { get; private set; }
    
      public void append_header(Xml xml, double screen_width, double screen_height)
      {
        xml.add("<MESSAGE NAME='/tuio/2Dcur\'>");
        xml.add("<ARGUMENT Type='s' VALUE='set' />");
        xml.add("<ARGUMENT Type='i' VALUE='{0}' />", id);
        xml.add("<ARGUMENT Type='f' VALUE='{0}' />", plot_x(screen_width));
        xml.add("<ARGUMENT Type='f' VALUE='{0}' />", plot_y(screen_height));
        xml.add("<ARGUMENT Type='f' VALUE='0.0000000' />");
        xml.add("<ARGUMENT Type='f' VALUE='0.0000000' />");
        xml.add("<ARGUMENT Type='f' VALUE='0.0000000' />");
        xml.add("</MESSAGE>");
      }
    
      public void append_footer(Xml xml)
      {
        xml.add("<ARGUMENT TYPE='i' VALUE='{0}' />", id);
      }
    
      double plot_x(double screen_width)
      {
        return x/screen_width;
      }
    
      double plot_y(double screen_height)
      {
        return y/screen_height;
      }
    
      double x;
      double y;
    }
    

    The UP touch does not append anything. It just symbolizes a gesture where the user has lifted their finger off of the touch surface.

    public class Up : Touch
    {
      public Up(long id)
      {
        this.id = id;
      }
    
      public long id { get; private set; }
    
      public void append_header(Xml xml, double screen_width, double screen_height){}
    
      public void append_footer(Xml xml){}
    }
    

    And that’s all folks. Developing an application for the SMART Board has been fun. I would love to get an opportunity to build a full blown WPF app on either the SMART Board or the SMART Table. For more info, check out the SMART Developer Network.

    Download Source

    In this post I am going to discuss multi-legged transactions. A multi-legged transaction occurs when items are transferred to and from multiple accounts. For example, I might withdraw $100.00 from my chequing account and put $50.00 into a retirement savings account and the other $50.00 into a utility payment account. If you would rather just read the source code, I have provided a download.

    In order to successfully complete a multi-legged transaction, the total of all exchanges must balance to 0. It’s important to remember that accounts don’t just apply to monetary values. You can have a Gas Volume account with a unit of measure in MCF, or an Oil Volume account measured in BOED. When transferring money from a money account to a gas account, you are essentially buying gas, which means you must convert money into its equivalent amount of gas at the current price of gas. We’ll talk about different strategies for accomplishing this.

    There are also times when you want to group accounts together to create a hierarchy of accounts. For example, I could have an expenses account which aggregates entries from a utility payment account, a tax account, and a food expenses account. This is a type of Summary Account, which can aggregate one or more Accounts. Accounts at their lowest level are called Detail Accounts. Detail accounts track the entries for a single account.

    Let’s start with an example. When transferring funds from one account to another it should increase the balance of the destination account and decrease the balance of the source account. For this example both of our accounts will use the same currency.

      [Concern(typeof(Transaction))]
      public class when_transferring_funds_from_one_account_to_another : concern 
      {
        context c = () =>
        {
          source_account = DetailAccount.New(Currency.CAD);
          destination_account = DetailAccount.New(Currency.CAD);
          source_account.add(Entry.New<Deposit>(100, Currency.CAD));
        };
    
        because of = () =>
        {
          sut.deposit(destination_account, new Quantity(100, Currency.CAD));
          sut.withdraw(source_account, new Quantity(100, Currency.CAD));
          sut.post();
        };
    
        it should_increase_the_balance_of_the_destination_account = () =>
        {
          destination_account.balance().should_be_equal_to(new Quantity(100, Currency.CAD));
        };
    
        it should_decrease_the_balance_of_the_source_account = () =>
        {
          source_account.balance().should_be_equal_to(new Quantity(0, Currency.CAD));
        };
    
        static DetailAccount source_account;
        static DetailAccount destination_account;
      }
    

    Take a look at the “because” block in the above code. We are depositing a quantity of 100 CAD into one account, and withdrawing a quantity of 100 CAD from another account. This transaction balances to zero, so when we post the transaction it shouldn’t have any problems. Pretty straightforward so far. We’ve made some key design decisions in these tests: instead of modeling an account just for money, we are using a Quantity object. This allows us to potentially withdraw 80 CAD from one account and deposit 1 BOED of oil into another account. Before jumping into the Transaction class let’s take a quick peek at Quantity.

    Quantity

      public class Quantity : IEquatable<Quantity>
      {
        double amount;
        UnitOfMeasure units;
    
        public Quantity(double amount, UnitOfMeasure units)
        {
          this.units = units;
          this.amount = amount;
        }
    
        public Quantity plus(Quantity other)
        {
          return new Quantity(amount + other.convert_to(units).amount, units);
        }

        public Quantity subtract(Quantity other)
        {
          return new Quantity(amount - other.convert_to(units).amount, units);
        }
    
        public Quantity convert_to(UnitOfMeasure unit_of_measure)
        {
          return new Quantity(unit_of_measure.convert(amount, units), unit_of_measure);
        }
      }
    

    In our current implementation, Quantities can be added to one another and subtracted from one another. Each quantity represents a single amount of something. That something is represented as a Unit Of Measure. For example, 100 Canadian dollars can be represented as a quantity of 100 with a unit of measure of CAD. 1000 BOED of oil can be modeled as a quantity of 1000 with a unit of measure of BOED. 6 MCF of gas can be represented as a quantity of 6 with a unit of measure of MCF. Each unit of measure can be converted to another unit of measure. When we add 100 CAD to 1000 BOED we may want the result to be measured in CAD or BOED, which requires a conversion using the price of oil at the time of conversion. Let’s talk about one strategy for doing this using an exchange rate table.
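
    As a hypothetical usage, assuming a rate table (covered next) that prices 1 USD at 1.05 CAD:

      var fifty_cad = new Quantity(50, Currency.CAD);
      var one_hundred_usd = new Quantity(100, Currency.USD);

      // the result is expressed in the units of the receiver:
      // 50 + (100 * 1.05) = 155 CAD
      var total = fifty_cad.plus(one_hundred_usd);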

    Unit of Measure

    We need a way to sneak in a rate table lookup at runtime. Usually we would do this by pushing in a domain service that looks up the current day’s rate. What we want to avoid is having our domain model reach out to an external third party directly to look up the rates, but we still want to be able to provide a rate table lookup strategy at run time.

      public delegate ConversionRatio RateTable(UnitOfMeasure unitCurrency, UnitOfMeasure referenceCurrency);
    
      public abstract class SimpleUnitOfMeasure : UnitOfMeasure
      {
        public double convert(double amount, UnitOfMeasure other)
        {
          return rate_table(this, other).applied_to(amount);
        }
    
        public abstract string pretty_print(double amount);
    
        static RateTable rate_table = (x, y) => ConversionRatio.Default;
    
        static public void provide_rate(RateTable current_rates)
        {
          rate_table = current_rates;
        }
      }
    

    The “provide_rate” method allows us to push in a rate lookup in a manner that doesn’t couple us to the implementation. The actual implementation might open up a connection to a remote host and pull down the current rates, or it might cache the rates and serve them. Either way this becomes completely open for extension (a sketch follows the Currency class below). We can now drop in different units of measure like BOED, Currency, MCF etc.

      public class Currency : SimpleUnitOfMeasure
      {
        static public readonly Currency USD = new Currency("USD");
        static public readonly Currency CAD = new Currency("CAD");
        ...
    
        Currency(string mnemonic)
        {
          this.mnemonic = mnemonic;
        }
    
        public override string pretty_print(double amount)
        {
          return "{0:C} {1}".format(amount, this);
        }
      }
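
    As promised above, here is one possible rate table implementation (entirely hypothetical) that caches whatever it pulls from a remote feed:

      public class CachedRateTable
      {
        IDictionary<string, ConversionRatio> cache = new Dictionary<string, ConversionRatio>();

        public ConversionRatio lookup(UnitOfMeasure from, UnitOfMeasure to)
        {
          var key = from + " -> " + to;
          if (!cache.ContainsKey(key))
            cache[key] = download_current_ratio(from, to);
          return cache[key];
        }

        ConversionRatio download_current_ratio(UnitOfMeasure from, UnitOfMeasure to)
        {
          // reach out to whatever feed publishes today's rates; stubbed here
          // because the transport is an implementation detail.
          return ConversionRatio.Default;
        }
      }

    Wired up once at application start:

      SimpleUnitOfMeasure.provide_rate(new CachedRateTable().lookup);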
    

    Transaction

    Ok, now let’s get back on track; we were talking about multi-legged transactions. When we are building a transaction we need a way to record the potential entries before actually posting them to each respective account. When we deposit or withdraw anything we record potential entries. When we post the transaction we ensure that the balance is zero. If it all balances, we commit each potential entry to its respective account.

      public class Transaction
      {
        Transaction(UnitOfMeasure reference)
        {
          reference_units = reference;
        }
    
        public void deposit(DetailAccount destination, Quantity amount)
        {
          deposits.Add(Potential<Deposit>.New(destination, amount));
        }
    
        public void withdraw(DetailAccount source, Quantity amount)
        {
          withdrawals.Add(Potential<Withdrawal>.New(source, amount));
        }
    
        public void post()
        {
          ensure_zero_balance();
          deposits.Union(withdrawals).each(x => x.commit());
        }
    
        void ensure_zero_balance()
        {
          var balance = calculate_total(deposits.Union(withdrawals));
          if(balance.Equals(new Quantity(0, reference_units))) return;
    
          throw new TransactionDoesNotBalance();
        }
    
        Quantity calculate_total(IEnumerable<PotentialEntry> potential_transactions)
        {
          var result = new Quantity(0, reference_units);
          potential_transactions.each(x => result = x.combined_with(result));
          return result;
        }
    
        List<PotentialEntry> deposits = new List<PotentialEntry>();
        List<PotentialEntry> withdrawals = new List<PotentialEntry>();
        UnitOfMeasure reference_units;
      }
    

    Detail Account

    Now I skipped a bunch of tests, but you can download the source to check out the rest. Each potential entry records the account that is the target of the entry, and whether it was a deposit or a withdrawal.

      public class DetailAccount : Account
      {
        DetailAccount(UnitOfMeasure unit_of_measure)
        {
          this.unit_of_measure = unit_of_measure;
        }
    
        public void add(Entry new_entry)
        {
          entries.Add(new_entry);
        }
    
        public Quantity balance()
        {
          return balance(Calendar.now());
        }
    
        public Quantity balance(Date date)
        {
          return balance(DateRange.up_to(date));
        }
    
        public Quantity balance(Range<Date> period)
        {
          var result = new Quantity(0, unit_of_measure);
          foreach(var entry in entries.Where(x => x.booked_in(period)))
          {
            result = entry.adjust(result);
          }
          return result;
        }
    
        IList<Entry> entries = new List<Entry>();
        UnitOfMeasure unit_of_measure;
      }
    

    When the potential entry is committed, it simply adds the equivalent entry to the target account. When the account calculates the balance it sums up each entry: withdrawal entries decrease the amount, and deposits increase the amount. The balance is returned in the unit of measure that the account manages. If it’s a monetary account, then a monetary quantity is returned.

    Download

    I am constantly working towards becoming a better OO practitioner. To practice, I like to solve problems while trying to stay true to the design principles of OO. My current job is a great source of real world business domains, which helps with my practice. In this post I am going to focus on a specific problem in employee compensation. If you prefer to just download the source code, I have included it as a download.

    Last year we released a system to the Human Resources department of our company to help them manage the compensation for each employee in the company. As part of our compensation we are all issued a base salary for the year, a target bonus, and a target LTIP (long term incentive plan.)

    The bonus is split in half and issued to employees in January and June of each year. These are called the H1 and H2 bonuses. The LTIP is also split in half and issued in the spring and fall of each year. These are known as the spring and fall LTIPs. Bonuses are issued in cash, but LTIPs are issued as grants. We offer two types of LTIPs: one called RTU (restricted trust units) and another called PTU (performance trust units). For this article I am going to focus on our RTU grants.

    When a grant is issued to an employee, 1/3 of the grant vests on each anniversary of the date that the grant was issued. For instance, if I was issued a fall LTIP grant of $4500.00 at a unit price of $10.00, then the following year I would be issued 1/3 of the grant’s value. If the price doubles from $10.00/unit to $20.00/unit, then I would receive a payout of $3000.00. When an employee has been working at ARC for 3 years, they are considered “fully loaded”, which means that during either the spring or fall compensation event, that employee receives 1/3 of 3 different grants.
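
    To make that arithmetic concrete:

      fall grant:        $4500.00 at $10.00/unit => 450 units issued
      first anniversary: 1/3 of the units vest   => 150 units
      price doubles to $20.00/unit               => 150 units x $20.00 = $3000.00 payout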

    So let’s model this. If an employee can have anywhere between 0 and 3 grants with unvested units available at any time, how can we calculate the current value of that employee’s LTIP? Let’s start by writing a unit test, and let test driven development guide us.

    [Concern(typeof(Compensation))]
    public class when_calculating_the_total_unvested_dollars_awarded : concern 
    {
      context c = ()=> 
      {
        grant_date = new DateTime(2009, 09, 15);
        value_of_grant = 4500.00;
        unit_price = 10.00;
        portion_to_issue_at_each_vest = new One<Third>();
        frequency = new Annually();
      };
    
      because of = () =>
      {
        Calendar.stop(() => grant_date);
        sut.issue_grant(value_of_grant, unit_price, portion_to_issue_at_each_vest, frequency);
    
        Calendar.start();
        sut.grant_for(grant_date).change_unit_price_to(20.00);
      };
    
      it should_indicate_that_nothing_has_vested_before_the_first_anniversary = () =>
      {
        sut.unvested_balance(new DateTime(2010, 09, 14)).should_be_equal_to(9000);
      };
    
      it should_indicate_that_one_third_has_vested_after_the_first_anniversary = () =>
      {
        sut.unvested_balance(new DateTime(2010, 09, 15)).should_be_equal_to(6000);
      };
    
      it should_indicate_that_two_thirds_has_vested_after_the_second_anniversary = () =>
      {
        sut.unvested_balance(new DateTime(2011, 09, 15)).should_be_equal_to(3000);
      };
    
      it should_indicate_that_the_complete_grant_has_vested_after_the_third_anniversary = () =>
      {
        sut.unvested_balance(new DateTime(2012, 09, 15)).should_be_equal_to(0);
      };
    
      static DateTime grant_date;
      static double value_of_grant;
      static double unit_price;
      static One<Third> portion_to_issue_at_each_vest;
      static Annually frequency;
    }
    

    In the above set of unit tests, I am focusing on a single employee’s compensation. I’ve awarded that compensation a single grant valued at $4500.00 at the time of grant, at a price of $10.00/unit. In each test I am checking that the unvested amount is correct at different times in the future. With this design I can now see what an employee’s compensation looks like in the future and at any point in the past. This is a form of black box testing: I am testing the expected behavior of a single class. I don’t really care what the underlying implementation is; I just want to know that in the end it produces the value that I expect. This type of testing is my preferred style when working in a domain model. It allows for much easier refactoring and less test maintenance, and it still preserves the expected behavior.

    Compensation

    Let’s take a look at the Compensation class to see how we can get these tests passing and stick to some fundamental object oriented programming principles.

    public class Compensation : Visitable<Grant>
    {
      IList<Grant> grants = new List<Grant>();
    
      public void issue_grant(Money grant_value, UnitPrice price, Fraction portion_to_issue_at_each_vest, Frequency frequency)
      {
        grants.Add(Grant.New(grant_value, price, portion_to_issue_at_each_vest, frequency));
      }
    
      public Grant grant_for(Date date)
      {
        return grants.Single(x => x.was_issued_on(date));
      }
    
      public Money unvested_balance(Date date)
      {
        var total = Money.Zero;
        accept(new AnonymousVisitor<Grant>(grant => total = total.plus(grant.balance(date))));
        return total;
      }
    
      public void accept(Visitor<Grant> visitor)
      {
        grants.each(x => visitor.visit(x));
      }
    }
    

    Compensation is our aggregate root. Within its boundary it creates an instance of Grant via a static factory method. It implements a Visitable<T> interface to adhere to the interface segregation principle as well as the open/closed principle. By allowing the Compensation to accept visitors, we leave this class closed for modification but still open for extension. We can create new implementations of the visitor and pass them in to collect the information necessary. In our calculation we are visiting each Grant and telling it to calculate the balance remaining as of a particular date. Notice the message passing and information hiding: Compensation doesn’t need to “know” about any of Grant’s “data”; it invokes specific behaviors on Grant instead of picking values off of getters. I prefer not to use getters and setters; not only are they an anti-pattern in object oriented design, but they help produce brittle software. Every time you add a getter, or worse, a setter, you are adding a future maintenance cost to your software. Focus on behavior rather than on data.

    Grant

    static public Grant New(Money purchase_amount, UnitPrice price, Fraction portion, Frequency frequency)
    {
      var grant = new Grant
      {
        issued_on = Calendar.now(),
      };
      grant.change_unit_price_to(price);
      grant.purchase(purchase_amount);
      grant.apply_vesting_frequency(portion, frequency);
      return grant;
    }
    

    There are a couple of things that happen when we create an instance of Grant. First we record the date that the grant was issued on, second we record the unit price, then we purchase units, and finally we apply a vesting frequency. When we record the unit price we are actually tracking each price change, which allows us to move forward and backwards in time.

    History<UnitPrice> price_history = new History<UnitPrice>();
    
    public virtual void change_unit_price_to(UnitPrice new_price)
    {
      price_history.record(new_price);
    }
    

    Each price change is recorded in the generic History, which records the date that the change occurred and keeps a stack of these changes. Again, we are pushing messages forward. We have also wrapped the primitive double type in a UnitPrice class, which allows us to extend double with additional behavior and lets a reader quickly glean the intention rather than the implementation. If we were to store dollars and units in primitive types, there is little that blocks us from accidentally adding those two values together; that is just not how Money and UnitPrice behave with one another. The relationship becomes explicit when we use actual classes.
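
    History<T> itself isn’t shown here; a minimal sketch, assuming the Date and Calendar abstractions used elsewhere in this model, might look like:

    public class History<T>
    {
      IList<KeyValuePair<Date, T>> changes = new List<KeyValuePair<Date, T>>();

      public void record(T item)
      {
        changes.Add(new KeyValuePair<Date, T>(Calendar.now(), item));
      }

      // walk the recorded changes for the most recent one made on or before the given date.
      public T recorded(Date on_date)
      {
        return changes.Last(x => !on_date.is_before(x.Key)).Value;
      }
    }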

    void purchase(Money amount)
    {
      units = units.combined_with(current_unit_price().purchase_units(amount));
    }
    
    UnitPrice current_unit_price()
    {
      return unit_price(Calendar.now());
    }
    
    UnitPrice unit_price(Date on_date)
    {
      return price_history.recorded(on_date);
    }
    

    When we purchase a certain amount of units, we look up the current unit price and purchase as many units as we can for the dollars given. Notice how it’s the UnitPrice class that calculates the number of Units that can be awarded for a certain amount of Money. We then combine those Units with the existing number of Units already awarded to this grant. The unit price History allows us to look up the most relevant UnitPrice for any given date.

    public virtual Money balance()
    {
      return balance(Calendar.now());
    }
    
    public virtual Money balance(Date on_date)
    {
      return unit_price(on_date).total_value_of(units_remaining(on_date));
    }
    
    Units units_remaining(Date on_date)
    {
      var remaining = Units.Empty;
      foreach( var expiration in expirations)
      {
        remaining = remaining.combined_with(expiration.unvested_units(units, on_date));
      }
      return remaining;
    }
    

    The final balance calculation looks up the unit price for the given date and calculates the total monetary value of the units that have not yet vested. We iterate through each expiration and accumulate the units that have not vested. Let’s take a look at how that is done.

    Vest

    public class Vest
    {
      Fraction portion;
      Date vesting_date;
    
      public Vest(Fraction portion, Date vesting_date)
      {
        this.portion = portion;
        this.vesting_date = vesting_date;
      }
    
      public Units unvested_units(Units total_units, Date date)
      {
        return expires_before(date) ? Units.Empty : total_units.reduced_by(portion);
      }
    
      bool expires_before(Date date)
      {
        return vesting_date.is_before(date);
      }
    }
    

    Each Vest has a date on which the vest occurs. In our example this happens on each anniversary of the original grant date until the grant has completely vested. To calculate the units remaining, we check whether the vest expired before the given date. If so, that 1/3 has already vested. If not, we take 1/3 of the total units available. We have a Fraction interface so that if the rules ever need to change from 1/3 to, say, 1/12, we can accommodate that.

    In this post I hope I have given you an opportunity to see the benefits of object oriented modeling. By modeling real world business processes as closely as possible to the real thing, we allow for change; in fact, we embrace it. We make the code easy to read and hopefully easy to understand. The small pieces are easier to digest, and they get new team members up to speed on the core domain much faster. The way we name our classes and methods should be intention revealing and mimic the language used in the core business domain. Focusing on behavior rather than data allows us to achieve things in a model that a data model simply cannot easily do. I have done my best to illustrate some of the principles of object oriented design, such as “Tell, don’t ask”, the “Single Responsibility Principle”, the “Open/Closed Principle”, and the “Interface Segregation Principle”.

    Download

    Casual elegance at an attractive price; this appealing 2 storey 3 bedroom ,3 bathroom home with an inviting front veranda offers a practical floor plan with contemporary finishes.  Once you arrive you will be captured by the striking living room fireplace which enhances the contemporary feel of the home.  In the kitchen you are greeted by shaker cabinets with plenty of counter space, a reed glass corner pantry door, tiled backsplash and stainless steel appliances. A roomy dining nook is perfect for family gatherings or entertaining.  There is no shortage of space in the master suite with a generous walk in closet and a full 4pce ensuite.  With summer just around the corner you'll soon be able to enjoy the fully landscaped, fenced yard and the west facing deck.  A tot lot down the street, plenty of shopping nearby and easy access to Stoney Trail make this a superb location. Call today to view this outstanding home first hand.

