
Configuring behaviour in Inversion: Part 2

Previous: Configuring behaviour in Inversion

In the last article I talked about the how and why of implementing behaviour configuration in Inversion. When I reviewed the work I surmised that it was a qualified success with some work remaining to do before the matter could be put to bed entirely.

With the original implementation there was a lot of pressure to inherit from various classes in order to inherit some of their configuration features. This caused a lot of strain on inheritance.

With the move to a common data-structure for configuration we took away pressure from inheriting to gain varying configuration properties.

With the move to predicates as methods extending IConfiguredBehaviour we took pressure away from having to inherit from a particular class in order to pick up its condition predicates.

What we didn’t escape was the need to actually use these predicates in a condition, therefore making it desirable to inherit from some classes in order to obtain the checks they perform in their condition.

So this is really a 2 out of 3 in this regard. We have relieved pressure from inheritance in quite a marked way, but there remains an impediment that will require more thought and work.

The basic mechanism for addressing this wasn't really the issue; the uncertainty was where such a mechanism should reside.

The issue isn't implementing the lookup of predicate strategies, which can be as simple as a dictionary of lambdas; the cause for concern is where to define this, and where to inject it. Which object should be responsible for maintaining this lookup? It probably fits well enough on the context, but it would require the context to hold implementation details of behaviours, and I want to think about that some.

This follow-up article will talk about how progress was made with this remaining area of extending selection strategies for behaviours, with a focus on "open for extension but closed for modification".

Selection criteria

One of the concepts that was firming up was the idea of selection criteria: a predicate acting upon a configuration and event to determine if a behaviour's condition was a match. Last time these were implemented as extension methods for IConfiguredBehaviour, which were nice in that it was easy to add new selection criteria without having to change anything. The problem remaining with them was that conditions still needed to know about and use them. The uses-a relationship between behaviours and their selection criteria was not open for easy extension. The use of selection criteria was "hard coded", and required use of inheritance to override, which is something we were trying to avoid as we prefer "composition over inheritance for application behaviour".

By the end of the last piece we had a reasonably firm idea that we wanted to inject selection criteria into behaviours as strategies to be used by conditions, without the conditions knowing about the strategies other than their general shape and how to use them. The details or purpose of a strategy are not important to a behaviour, which is concerned only with whether its selection criteria pass or fail.

So the first order of business was to make selection criteria a thing:-

public delegate bool SelectionCriteria(IConfiguration config, IEvent ev);

A function that acts upon an IConfiguration and IEvent, and returns a bool. This allows us to move our use of extension methods to lambda expressions which are easy to store and inject:-

(config, ev) => ev.HasParams(config.GetNames("event", "has"))

If a behaviour, as part of its configuration, were injected with a set of these SelectionCriteria, then during its condition check it could simply check that each of these criteria returns true. We would effectively be able to inject a behaviour's condition implementation.

That bit was easy… But how do we decide which of these SelectionCriteria to inject into a behaviour?

Stuff what selects stuff what selects stuff

Then I fell off a conceptual cliff, largely due to semantics, and a brief period spent chasing my own tail.

How to decide what stuff to inject?.. I spent most of a morning trying to formalise an expression of "stuff what selects stuff what selects stuff" that didn't make me sound like a cretin. I'd walk into my garden and sit, think of a compositional pattern, run to my studio and find I'd laid down a bunch of things that all sounded the same, the distinctions between which seemed very arbitrary.

The darkest 15 minutes of that morning was the brief period when I considered using behaviours to configure behaviours, and started seeing behaviours all the way down.

The reason for my anxiety is I was becoming convinced that I was starting to commit a cardinal sin of application architects which is the sin of the Golden Hammer.

The concept known as the law of the instrument, Maslow's hammer, Gavel or a golden hammer is an over-reliance on a familiar tool; as Abraham Maslow said in 1966, "I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail."

The pull of the Golden Hammer for the architect is almost inexorable, as the core concern of the architect is to look for common patterns of structure and behaviour, to move from a diverging variety of abstractions to converging use of abstractions. When you get a hold of an implementation of a pattern that is producing good results for you, it is very hard to avoid seeing that pattern everywhere.

It’s also one of the primary mechanisms by which we turn our architectural cathedrals into slag heaps. It’s destructive because it represents the building of an increasingly strong bias about the applicability of an abstraction that leads to poor judgment and the inappropriate application of abstractions. I call it a sin because it’s seductive, difficult to avoid, is always recurring, and has bad consequences in the long term while feeling good in the short term.

I knew I was seeing the modeling of condition/action pairs everywhere, that this was part of a protracted phase I’m going through, and that I was vulnerable to the hubris of the Golden Hammer.

I also knew that some patterns are foundational and do have broad applicability. I don’t find the promiscuous use of key/value pairs or IEnumerable<T> anxiety provoking use of a Golden Hammer, and condition/action is as foundational as an if statement.

The rest of the morning was spent giving a performance of Gollum (from Lord of the Rings) as an application architect having an argument with himself about the semantics of stuff and select while anxious about getting hit by a hammer.

An optional extension of the existing framework

I broke out of this neurotic circular argument with myself by deciding that I would implement the abstraction of stuff what selects stuff what selects stuff as a straight-up extension of the existing framework without altering any of the existing types or their implementations. If I could do this then if it became apparent that the abstraction or its implementation was ill-conceived (as it felt it might be) it could remain an odd appendix of an experiment that could be removed at some point without any negative impact on the broader framework… If the extension sucked it simply wouldn’t get used… And I wouldn’t write about it.

It’s worth drawing attention to this benefit of implementing features as extensions.

When we talk about extensibility being good, and consider things like open for extension but closed for modification we tend to view it from the angle of this concern making the writing of extensions easier. The benefit that doesn’t get considered perhaps quite as much is that this approach of extending what is without modifying it is also a strategy for mitigating risk. It makes it easier to move away from such extensions if they’re poorly conceived with reduced consequence to the rest of the framework.

This is one of the goals of Inversion. Development by extension, with an ability to evolve and move poorly conceived abstractions toward increasingly better abstractions. The ability to experiment, which is to say try out different approaches, needs to be facilitated or our systems can’t evolve and we will never get past either cycles of system rewrites, or legacies of poor judgment which we can’t escape. Extensibility in this way is a strategy for easing the paying down of technical debt in the future, or lowering the interest rate on technical debt if you like.

Say what you see

So the worst case scenario was an odd bit of code that Guy wrote one day that Adam laughed at. There wasn’t a risk of reverting anything, and my anxiety was removed, making clear quite a short and easy path to a solution.

Once I decided I was losing the war on semantics and came to terms with my caveman-like expression of the problem, it was easy to start breaking it down.

stuff that selects stuff that selects stuff

I know how primitive that is, but it’s what I had… We’re going to look at a configuration, and on the basis of what we see there, we’re going to pick a bunch of selection criteria that a behaviour will use in its condition.

We have the last bit, the SelectionCriteria. The first bit is a match that can be expressed as a predicate acting upon an IConfiguration.

// stuff what selects, stuff what selects stuff
(Predicate<IConfiguration> match, SelectionCriteria criteria)

This concern pivots around a behaviour's configuration, with selection criteria being picked on the basis of the configuration's characteristics. So if, for example, a behaviour configuration contains the tuple ("event", "has"), the predicate that matches this would be associated with the SelectionCriteria to act on this as part of the behaviour's condition.

match: (config) => config.Has("event", "has"),
criteria: (config, ev) => ev.HasParams(config.GetNames("event", "has"))

Struggling with semantics as I was, I decided to simply call this association of two predicates a case.

public interface IPrototypeCase {
    Predicate<IConfiguration> Match { get; }
    SelectionCriteria Criteria { get; }
}
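
The Case type used in the snippets that follow is just a concrete pairing of these two predicates. As a minimal sketch of what such an implementation might look like (the framework's own Case class may differ in detail):-

public class Case : IPrototypeCase {
    private readonly Predicate<IConfiguration> _match;
    private readonly SelectionCriteria _criteria;

    public Predicate<IConfiguration> Match { get { return _match; } }
    public SelectionCriteria Criteria { get { return _criteria; } }

    public Case(Predicate<IConfiguration> match, SelectionCriteria criteria) {
        _match = match;
        _criteria = criteria;
    }
}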

This picking of selection criteria consults only the configuration, and given that the behaviour configuration is immutable, this picking can take place when the configuration is instantiated, which would then only need to expose the selection criteria that had been picked. This was done by extending IConfiguration thus:-

public interface IPrototype : IConfiguration {
    IEnumerable<SelectionCriteria> Criteria { get; }
}

Similarly constrained in terms of semantic inspiration, this extension of the behaviour's configuration was called a prototype. I was thinking in terms of prototype-based programming, which I'd had some success with in the past for classification, inheritance, and overriding of relational data, and was thinking of a behaviour's configuration tuples with associated functions as prototypes. Not the best example of prototypes, but vaguely in the ballpark; I needed to call it something and had lost patience with my own semantic angst. I was ready to call this thing "Nigel" if it allowed me to move on, and Prototype kind of fit.

A prototype is a configuration that expresses selection criteria that have been chosen for that configuration.

public static readonly ConcurrentDictionary<string, IPrototypeCase> NamedCases = new ConcurrentDictionary<string, IPrototypeCase>();

private readonly ImmutableHashSet<SelectionCriteria> _criteria;

public Prototype(
    IEnumerable<IConfigurationElement> config,
    IEnumerable<IPrototypeCase> cases
) : base(config) {
    var builder = ImmutableHashSet.CreateBuilder<SelectionCriteria>();
    foreach (IPrototypeCase @case in cases) {
        if (@case.Match(this)) builder.Add(@case.Criteria);
    }
    _criteria = builder.ToImmutable();
}

This allows us to establish a base set of selection criteria out of the box, that is easy for application developers to override, as seen in Prototype thus:-

NamedCases["event-has"] = new Case(
    match: (config) => config.Has("event", "has"),
    criteria: (config, ev) => ev.HasParams(config.GetNames("event", "has"))
);
NamedCases["event-match"] = new Case(
    match: (config) => config.Has("event", "match"),
    criteria: (config, ev) => ev.HasParamValues(config.GetMap("event", "match"))
);
NamedCases["context-has"] = new Case(
    match: (config) => config.Has("context", "has"),
    criteria: (config, ev) => ev.Context.HasParams(config.GetNames("context", "has"))
);
NamedCases["context-match"] = new Case(
    match: (config) => config.Has("context", "match"),
    criteria: (config, ev) => ev.Context.HasParamValues(config.GetMap("context", "match"))
);
// and so on

We can then see this being used in PrototypedBehaviour:-

public override bool Condition(IEvent ev, IProcessContext context) {
    return base.Condition(ev, context) &&
        this.Prototype.Criteria.All(criteria => criteria(this.Configuration, ev));
}

This now forms a solid base class that is open for extension. We have relieved the pressure from having to inherit from a particular class in order to inherit its selection criteria, which are now picked out during the behaviour's instantiation, based upon the shape of the behaviour's configuration. This extension is implemented as an extension of the behaviour's configuration, which is the focus of its concern and action.
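
Because NamedCases is public, an application developer can register additional cases alongside the defaults without touching any framework types. A hedged sketch, reusing the control-state check the original implementation performed (the "control-state-has" key and its tuple shape are made up for illustration):-

// hypothetical application-defined case, matching configuration
// tuples of the form {"control-state", "has", "<key>"}
Prototype.NamedCases["control-state-has"] = new Case(
    match: (config) => config.Has("control-state", "has"),
    criteria: (config, ev) => config.GetNames("control-state", "has")
        .All(name => ev.Context.ControlState.Keys.Contains(name))
);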

The added benefit of this is that because only applicable selection criteria are picked for a behaviour, we're never running redundant selection criteria as part of a condition. This in turn means we can grow our implementations of selection criteria without concern about a performance impact from redundant checks. Because behaviours are singletons, this selection process takes place just once for each behaviour, so it scales nicely as the surface area of our selection criteria increases over time.

Another way of thinking of this injection of strategies is to compose or “mixin” at run-time applicable implementation details based upon configuration.

A side benefit of this work apart from making it easier to extend behaviours without having to introduce new types, is that we picked up an extra 5% to 10% performance with the loss of redundant selection criteria.

The abuse of static members and future work

The maintenance of NamedCases as a static member of Prototype is a bad thing. Initialising the default cases from the Prototype static constructor is a doubly bad thing. Lastly, this is mutable data being maintained as a static member, so I’m going straight to hell for sure.

It’s not because “global state is bad”, because it’s not. The notion that global state is bad requires ignoring the use of a database, file-system, configuration, service container, or getting the time from the system. The maintenance of non-global state globally is bad, and I’m not sure to what degree it can be said that these default cases are global.

In maintaining the cases like this I’m needlessly tying the default implementation of selection criteria to the Prototype class, and I wonder if it should be associated with the behaviours type. I’m not sure yet.

The strongest case for not maintaining the named cases as a static is that we don't need to.

Behaviours are used as singletons so these cases can sit as instance members of either the prototype of a behaviour or the behaviour itself, but I’m not entirely sure where I want to place this concern yet, and at the moment I’m trying to impact prior work as little as possible.

The cases are injected via this constructor:-

public Prototype(
    IEnumerable<IConfigurationElement> config,
    IEnumerable<IPrototypeCase> cases
)

So I can easily kill the static members and inject the prototype from the behaviours constructor.
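
Killing the static would just mean constructing the prototype with an explicitly supplied set of cases, which the constructor above already permits. A sketch of what that might look like (the container registration, the "prototype-cases" service name, and configElements are all hypothetical):-

// illustrative only: the cases come from the service container
// rather than the static Prototype.NamedCases defaults
IEnumerable<IPrototypeCase> cases = container.GetService<IEnumerable<IPrototypeCase>>("prototype-cases");
IPrototype prototype = new Prototype(configElements, cases);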

As is probably clear from this write-up, I struggled conceptually a couple of times through this process. The simplest possible thing at this point is not just desirable, but needful, and the simplest possible way of injecting a prototype's cases is:-

public Prototype(IEnumerable<IConfigurationElement> config) :
    this(config, Prototype.NamedCases.Values) {}

In the last post on behaviour configuration I stopped having solved two out of three parts of a problem. If I had continued without time to simply think the abstraction over I would have started making things worse rather than better. I find it personally important to recognise when I am approaching this point. Much of my worst code has been done past the point when I should have simply stopped, regrouped my mental faculties, gained some perspective, sought outside opinions, and contemplated my options weighing their pros and cons for more than 2 minutes.

Invariably when I continue past where I should have prudently stopped it has involved my own vanity and a concern about what other developers and architects would think of me. Being aware of one or more deficiencies in my code, often aware that I am at risk of running afoul of one or more anti-patterns, I over-extend myself because I fear being called a "bad developer"… There's a self-defeating vicious cycle in this… I have never finished, nor am I ever likely to finish, a piece of work that is perfect. Every single piece of work I complete will be flawed, and if I don't come to terms with that I will over-extend myself each time and turn good work into bad.

When I accept that my work will iteratively improve a situation but at each iteration be left with flaws, I can then look to recognise and manage those flaws. I can establish my contingencies, and I can plan a safe and pragmatic route of improving abstractions.

The remaining problem of being able to inject selection criteria into behaviours on the basis of their configuration in a manner that other developers can easily extend to meet their own needs, and without changing the preexisting framework has been accomplished. There is the uncomfortable hang-nail of NamedCases being a static member, but it’s safe where it’s parked and easy to move away from without negative impact. So this is where this iteration should end. I need to now let this abstraction bed in, ensure it doesn’t have any unintended consequences before anointing it and baking it into the framework any further.

Configuring behaviour in Inversion

Or, Experiments with black-box development.

update: Part 2 now follows on with the “further work” outlined toward the end of this article.

I’ve recently overhauled both the way that Inversion configures behaviours and the way in which that configuration is acted on as selection criteria when determining which behaviours should respond to an event. I thought I’d write this up as it keeps the small group of developers engaged with this work up-to-date, provides some informal documentation, and provides an illustration of a couple of Inversion’s design goals.

You need to know where the goal is in order to score

TL;DR Make things as bendy, fast and easy to change as possible… Check that you are.

Inversion might have been labelled "application stack number seven" as it sits on the back of six previous incarnations, the first of which started in 2004 as an experiment in implementing MVC on the .NET platform, the descendant of which went live into production in 2006 where it remains to this day. Two other slightly earlier but very close incarnations of Inversion went into production in 2012 and 2014, but by this time the point of interest had moved well past MVC to playing with ideas of behavioural composition as a means of meeting cross-cutting concerns normally addressed by AOP.

So Inversion is merely the most recent of a series of application stacks experimenting with a handful of core ideas some of which are more developed than others at this point, but each of which should show progression rather than regression over time.

My experience tells me that any piece of development spanning more than a handful of weeks quickly starts accumulating the risk of its initial goals being diluted and then eventually forgotten. It is important then to remind ourselves what it is in broad terms we're trying to obtain from a system, and then to review our activity and ensure we're actually meeting those goals, whether formally or informally.

This is a summary of some of Inversion's goals:-

  • Black-box components
  • Configuration as resource distinct from application
  • Favouring composition over inheritance for application behaviour
  • Small footprint micro-framework
  • Single responsibility
  • Extensibility, DIY
  • Substitution, Plugability
  • Inversion of Control
  • Testability
  • Portability
  • Conventions for state
  • Speed

Behaviour configuration will have a large impact on Inversion and will almost certainly add a distinct flavour to the framework's usage for better or for worse, so it's important once we're done that we review our goals and ensure they're being advanced and not diminished.

Finding a common abstraction for behaviour configuration

TL;DR Tuples.

Inversion is in large part a composition of IProcessBehaviour objects. A set of condition/action pairs. The relevant portion of the interface being:-

// snipped out some members and comments for clarity
public interface IProcessBehaviour {
    string RespondsTo { get; }
    bool Condition(IEvent ev, IProcessContext context);
    void Action(IEvent ev, IProcessContext context);
}

When we context.Fire(event) we pass the event to the condition of each behaviour registered with the context in turn. If the condition returns true, that behaviour's action is executed on the context.
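
In other words the dispatch boils down to something like the following sketch (not the framework's actual implementation; the collection of registered behaviours here is hypothetical):-

// offer the event to each registered behaviour in turn; an action
// only runs if its paired condition passes
foreach (IProcessBehaviour behaviour in registeredBehaviours) {
    if (behaviour.Condition(ev, context)) {
        behaviour.Action(ev, context);
    }
}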

Over time we find a lot of common activity taking place in conditions.

  • Does the context have any of these parameters?
  • Do these parameters and values exist on the context?
  • Do they exist on the event?
  • Do we have these keys in the context's control-state?

For want of a better phrase we call this the selection criteria of the behaviour.

So we quite naturally start to refactor common selection criteria into base classes. We also start to push out the specification of selection criteria to configuration of the behaviour.

In Inversion's previous incarnation the expression of config for selection had gotten quite out of hand. Each category of test ended up with its own data-structure both to configure and drive that test.

private ImmutableList<string> _includedAllControlStates;
private ImmutableList<string> _nonIncludedControlStates;
private ImmutableList<string> _includedAllParameters;
private ImmutableList<string> _nonIncludedParameters;
private ImmutableDictionary<string, string> _matchingAllParams;
private ImmutableDictionary<string, string> _nonMatchingAllParams;

This config would be injected in the constructor by the service container, and be acted upon by the behaviour's condition.

public virtual bool Condition(IEvent ev, IProcessContext context) {
    bool parms = this.IncludedAllParameters.All(p =>
        context.Params.Keys.Contains(p)
    );
    bool parmsAndValues = this.MatchingAllParameters.All(p =>
        context.Params.Contains(p) &&
        context.Params[p.Key] == p.Value
    );
    bool notParmsAndValues = this.NonMatchingAllParameters.All(p =>
        context.Params.Keys.Contains(p.Key) &&
        context.Params[p.Key] != p.Value
    );
    bool controlStates = this.IncludedAllControlStates.All(p =>
        context.ControlState.Keys.Contains(p)
    );
    bool notParms = this.NonIncludedParameters.All(p =>
        !context.Params.Keys.Contains(p)
    );
    bool notControlStates = this.NonIncludedControlStates.All(p =>
        !context.ControlState.Keys.Contains(p)
    );
    return
        base.Condition(ev, context) &&
        parms &&
        parmsAndValues &&
        notParmsAndValues &&
        controlStates &&
        notParms &&
        notControlStates;
}

It worked, but it wasn’t extensible, in that extending the functionality required adding more and more data-structures with less and less common purpose. It put a lot of pressure on inheritance to pick up the config you were interested in, along with its condition checks. It was riddled with assumptions which you either accepted, or you were left with no functionality except a bespoke implementation. Special little snowflakes everywhere, which is the opposite of what is being attempted.

It worked in that it allowed specifying behaviour selection criteria from config, but the expression of these configs in Spring.NET or in code was painful, hard to understand, and messy. Worst of all, it was leaking implementation details from the behaviours across their interface.

So I started playing around with something along the lines of IDictionary<string, IDictionary<string, IDictionary<string, IList<string>>>>, which again kind of worked. It was an improvement in that the previous configurations for selection criteria could all pretty much be expressed with the one data-structure. The data-structure however was messy, and difficult to express in configuration.

Next I started playing with a structure something like MultiKeyValue<string, string, string, string>, which finally started to feel in some way rational and self-contained. I happened to be reading a piece comparing the efficiency of hashcodes between key-value pairs and tuples, which made obvious a very short step to Configuration.Element : Tuple<int, string, string, string, string>.

The class Inversion.Process.Configuration represents a set of ordered elements, or a relation of tuples expressing (ordinal, frame, slot, name, value). This is a very expressive structure, and with LINQ easy and efficient to query in lots of different ways.
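
For example, a lookup like the GetElements("context", "match-any") call used later in this piece boils down to a simple LINQ query over those elements. A rough sketch, assuming element properties mirroring the (ordinal, frame, slot, name, value) shape and an Elements collection on the configuration (names illustrative):-

// all the elements configured under a given frame and slot, in order
IEnumerable<IConfigurationElement> matching = this.Elements
    .Where(element => element.Frame == "context" && element.Slot == "match-any")
    .OrderBy(element => element.Ordinal);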

The resulting configuration is easy to express in code, and encourages a declarative rather than fluent style.

Naiad.ServiceContainer.Instance.RegisterService("test-behaviours",
container => {
return new List<IProcessBehaviour> {
new MessageTraceBehaviour("*",
new Configuration.Builder {
{"event", "match", "trace", "true"}
}
),
new ParameterisedSequenceBehaviour("test",
new Configuration.Builder {
{"fire", "bootstrap"},
{"fire", "parse-request"},
{"fire", "work"},
{"fire", "view-state"},
{"fire", "process-views"},
{"fire", "render"}
}
),
new ParameterisedSequenceBehaviour("work",
new Configuration.Builder {
{"context", "match-any", "action", "test1"},
{"context", "match-any", "action", "test2"},
{"fire", "work-message-one", "trace", "true"},
{"fire", "work-message-two", "trace", "true"}
}
),
new ParseRequestBehaviour("parse-request"),
new BootstrapBehaviour("bootstrap",
new Configuration.Builder {
{"context", "set", "area", "default"},
{"context", "set", "concern", "default"},
{"context", "set", "action", "default"},
{"context", "set", "appPath", "/web.harness"}
}
),
new ViewStateBehaviour("view-state"),
new ProcessViewsBehaviour("process-views",
new Configuration.Builder {
{"config", "default-view", "xml"}
}
),
new RenderBehaviour("render"),
new JsonViewBehaviour("json::view", "text/json"),
new XmlViewBehaviour("xml::view", "text/xml"),
new XsltViewBehaviour("xslt::view", "text/xml"),
new XsltViewBehaviour("xsl::view", "text/html"),
new StringTemplateViewBehaviour("st::view", "text/html")
};
}
);

Not the best notational representation ever of configuration, but not the worst, a definite improvement, and something it’s felt one can become comfortable with. It’s certainly a very concise configuration of a swathe of behaviour.

This also is not the primary means of configuration. This is showing the configuration of Naiad which is a toy service container Inversion provides suitable for use in unit tests. The above is the configuration of a test.

A good friend and former colleague (Adam Christie) is getting good results from the prototype of a service container called Pot, intended to replace the use of Spring.NET. Until that matures over the coming months, Spring.NET is the favoured service container for Inversion. This doesn't stop you from using whichever service container takes your fancy; as IServiceContainer shows, Inversion's own expectations of a service container are minimal.

public interface IServiceContainer : IDisposable {
    T GetService<T>(string name) where T : class;
    bool ContainsService(string name);
}

If you can honour that interface (and you can) with your service container, Inversion won't know the difference.
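
Adapting a container is typically just a thin wrapper. A hedged sketch, where MyContainer and its ResolveNamed/IsRegistered calls stand in for whatever your container of choice actually exposes:-

public class MyContainerAdapter : IServiceContainer {
    private readonly MyContainer _container; // hypothetical underlying container

    public MyContainerAdapter(MyContainer container) {
        _container = container;
    }

    public T GetService<T>(string name) where T : class {
        return _container.ResolveNamed<T>(name); // delegate named resolution
    }

    public bool ContainsService(string name) {
        return _container.IsRegistered(name);
    }

    public void Dispose() {
        _container.Dispose();
    }
}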

What I mean when I say Spring.NET is the favoured service container is that out of all the possibilities, Spring.NET is what I happen to be focused on as a baseline.

Acting on configuration for conditions

TL;DR LINQ

BehaviourConditionPredicates provides a bunch of predicates as extension methods that take the form:-

public static bool ContextMatchesAnyParamValues(this IConfiguredBehaviour self, IProcessContext ctx) {
    IEnumerable<IConfigurationElement> elements = self.Configuration.GetElements("context", "match-any");
    int i = 0;
    foreach (IConfigurationElement element in elements) {
        i++;
        if (ctx.HasParamValue(element.Name, element.Value)) return true;
    }
    return i == 0; // there was no match specified
}

Which illustrates extending IConfiguredBehaviour for whatever condition predicates are useful over time, without having to modify IConfiguredBehaviour or Configuration. We establish our own convention of tuples, and act on them. In the above example we’re extracting elements from the configuration that have the frame and slot {"context", "match-any"} which drops out the tuples:-

{"context", "match-any", "action", "test1"},
{"context", "match-any", "action", "test2"}

We check the context for the name and value of each tuple with ctx.HasParamValue(element.Name, element.Value).

You’re always free to write the conditions for your behaviours in whatever way you need. What we see here is only an illustration of how I happen to be tackling it.
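
For instance, nothing stops you adding a predicate of your own in the same style. A sketch, inventing a {"context", "excludes", "<name>"} convention purely for illustration:-

// hypothetical predicate: passes only when none of the named
// parameters appear on the context
public static bool ContextExcludesParams(this IConfiguredBehaviour self, IProcessContext ctx) {
    IEnumerable<IConfigurationElement> elements = self.Configuration.GetElements("context", "excludes");
    foreach (IConfigurationElement element in elements) {
        if (ctx.Params.Keys.Contains(element.Name)) return false;
    }
    return true;
}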

Expressing tuples as XML

TL;DR Just read four nodes deep and call them tuple elements.

If you step back from XML a moment and consider it simply as an expression of a tree of nodes, there’s a trick you can pull with reading a tree of nodes as tuples which is a little novel in this context but which we take for granted when working with relational databases. That is databases that focus on the grouping of tuples into collections which we call relations, or more commonly tables… I’ll confess to that piece of conceit straight-away. I happened to be doing some reading on working with sets of tuples and ran across the fact that the relational in “relational database” refers to the fact that it’s the set of tuples that are a relation, commonly called table, not any association between tables as you might expect from the term. Was novel to me, and I now obviously like flaunting the term… Back on topic…

Given that our configuration is made up of a set of tuples the elements of which we’re calling (frame, slot, name, value), consider the following XML:-

...
<context>
    <match-any>
        <action>test1</action>
        <action>test2</action>
    </match-any>
</context>
<fire>
    <work-message-one trace="true" />
    <work-message-two trace="true" />
</fire>
...

If we read that one node at a time, and with each node copy it as an element of our tuple, our first tuple builds up thus:-

context => {"context"}
match-any => {"context", "match-any"}
action => {"context", "match-any", "action"}
test1 => {"context", "match-any", "action", "test1"}

And we have our first tuple. Now if we were reading results from a database, we'd not be surprised if the next value test2 were preceded by the same elements, as they are unchanged. So our second tuple is {"context", "match-any", "action", "test2"}. In this way we can read that XML snippet as:-

{"context", "match-any", "action", "test1"},
{"context", "match-any", "action", "test2"},
{"fire", "work-message-one", "trace", "true"},
{"fire", "work-message-two", "trace", "true"}

Which is exactly what we’re after. We can now define a set of tuples very expressively and in an extensible manner with XML, we now just need to hook this up with Spring.

Extending Spring.NET configuration

TL;DR Was much easier than expected.

I’ve been using Spring.NET since 2006 as the backbone of most applications I’ve built. It’s something of a behemoth, and the reality is I’ve only really ever used a very thin slice of its features. I’ve always been comforted by the fact that there’s a solution to most problems with Spring and if I needed to I could extend my way out of a tight space, despite the fact I’ve never had much call to.

One of the things I’ve always wanted to do was extend and customise Spring xml configuration. If you’re working with Spring and xml configs one of the costs is you’re going to end up with a lot of configuration and it’s got a fair few sharp edges to it. After having a stab at it I can only say I wish I’d done it years ago as it was far less involved than I expected.

The relevant documentation for this is Appendix B. Extensible XML authoring which lays out in pretty straight-forward terms what needs to be done. From this we produce:-

An XSD schema describing our extension of the Spring config

Which provides the schema for our config extension, and most importantly associates it with a namespace. This is our “what”.

There's a small gotcha here. You need to go to the file properties and set Build action to Embedded resource, as your schema needs to be embedded in the assembly for it to be used.

A parser for our namespace

Which is responsible for mapping individual xml elements to the unit of code that will process them. For each xml element you register its handler, thus:-

this.RegisterObjectDefinitionParser("view", new ViewBehaviourObjectDefinationParser());

This is our link between “what” and “how”.
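
For orientation, the registration above lives in a parser class derived from Spring.NET's NamespaceParserSupport, with each element of our namespace mapped to its definition parser in Init(). A rough sketch (the class name is illustrative, and the set of registrations shown is not exhaustive):-

public class BehaviourNamespaceParser : NamespaceParserSupport {
    public override void Init() {
        // map each element name in our namespace to the parser that handles it
        this.RegisterObjectDefinitionParser("behaviour", new BehaviourObjectDefinationParser());
        this.RegisterObjectDefinitionParser("view", new ViewBehaviourObjectDefinationParser());
    }
}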

An object definition parser

This is our “how”, where the actual work gets done. In these object definition parsers we process the xml and drive an ObjectDefinitionBuilder provided by Spring.

Consider a simple example of this implementation in ViewBehaviourObjectDefinationParser. First we override GetObjectTypeName.

protected override string GetObjectTypeName(XmlElement element) {
    return element.GetAttribute("type");
}

When our element view is encountered and ViewBehaviourObjectDefinationParser is resolved from its registration for this element, Spring asks for a string expression of the type for the object that will be created for this element. We simply read this from the element's @type attribute, exactly as Spring normally would.

Next we need to deal with any constructor injection, and it turns out that because we're processing elements in our own namespace, elements from Spring's namespace still work as expected, allowing us to mix and match to some extent.

<behaviour responds-to="parse-request"
    type="Inversion.Web.Behaviour.ParseRequestBehaviour, Inversion.Web"
>
    <spring:constructor-arg
        name="appDirectory"
        value="Inversion.Web.Harness.Site"
    />
</behaviour>

Note the spring:constructor-arg within the behaviour element.

So we’re in the rather comfy position of retaining Spring’s base functionality in this area and merely adding syntactic sugar where it suits us.

Spring calls DoParse on our definition parser, and passes it the associated element.

protected override void DoParse(XmlElement xml, ObjectDefinitionBuilder builder) {
    // all behaviours with config being parsed have @responds-to
    string respondsTo = xml.GetAttribute("responds-to");
    builder.AddConstructorArg(respondsTo);
    // all view behaviours have @content-type
    string contentType = xml.GetAttribute("content-type");
    builder.AddConstructorArg(contentType);
}

In this example we are extracting the @responds-to and @content-type attributes and adding them to the object definition builder as constructor arguments.

Reading the behaviour configuration from XML

Okay, so if we take stock, we’re by this point able to provide our own expressions in XML of object definitions. This doesn’t speak to our provision of a set of tuples as configuration for a behaviour.

BehaviourObjectDefinationParser is a little more gnarly than our definition parser for view behaviours, but its DoParse isn't too wild. We iterate over the xml nodes and construct a hashset of tuples from them, and once we have them we call builder.AddConstructorArg(elements) to tell Spring that we're using them as the next constructor argument.

// we're going to read the config into tuples
// of frame, slot, name, value
foreach (XmlElement frameElement in frames) {
    string frame = frameElement.Name;
    // process any frame attributes as <frame slot="name" />
    foreach (XmlAttribute pair in frameElement.Attributes) {
        string slot = pair.Name;
        string name = pair.Value;
        Configuration.Element element = new Configuration.Element(ordinal, frame, slot, name, String.Empty);
        elements.Add(element);
        ordinal++;
    }
    foreach (XmlElement slotElement in frameElement.ChildNodes) {
        string slot = slotElement.Name;
        int start = elements.Count;
        // read children of slot as <name>value</name>
        foreach (XmlElement pair in slotElement.ChildNodes) {
            string name = pair.Name;
            string value = pair.InnerText;
            Configuration.Element element = new Configuration.Element(ordinal, frame, slot, name, value);
            elements.Add(element);
            ordinal++;
        }
        // read attributes of slot as name="value"
        foreach (XmlAttribute pair in slotElement.Attributes) {
            string name = pair.Name;
            string value = pair.Value;
            Configuration.Element element = new Configuration.Element(ordinal, frame, slot, name, value);
            elements.Add(element);
            ordinal++;
        }
        if (elements.Count == start) { // the slot had no name/value pairs
            Configuration.Element element = new Configuration.Element(ordinal, frame, slot, String.Empty, String.Empty);
            elements.Add(element);
            ordinal++;
        }
    }
}
builder.AddConstructorArg(elements);

Nothing clever happening here at all, left rather verbose and explicit to assist with debugging.

So we have our behaviours configurations nicely integrated with Spring, and with reasonable opportunity for extension.

Lastly from our behaviour.xsd schema we can default attribute values for elements, as we do for message-sequence@type:-

<xsd:element name="message-sequence">
    <xsd:complexType>
        <xsd:complexContent>
            <xsd:extension base="configured-behaviour-type">
                <xsd:attribute name="type" type="xsd:string" use="optional" default="Inversion.Process.Behaviour.ParameterisedSequenceBehaviour, Inversion.Process"/>
            </xsd:extension>
        </xsd:complexContent>
    </xsd:complexType>
</xsd:element>

This allows us to write message-sequence with its @type value supplied by the schema.

The end result of these extensions is the ability to express cleanly in XML the equivalent of our in-code configuration of behaviours, as can be seen in behaviour.config.

<spring:list element-type="Inversion.Process.Behaviour.IProcessBehaviour, Inversion.Process">
<message-sequence responds-to="process-request">
<fire>
<bootstrap />
<parse-request />
<work />
<view-state />
<process-views />
<render />
</fire>
</message-sequence>
<behaviour
responds-to="bootstrap"
type="Inversion.Web.Behaviour.BootstrapBehaviour, Inversion.Web"
>
<context>
<set
area="default"
concern="default"
action="default"
appPath="/web.harness"
/>
</context>
</behaviour>
<behaviour
responds-to="parse-request"
type="Inversion.Web.Behaviour.ParseRequestBehaviour, Inversion.Web"
>
<spring:constructor-arg name="appDirectory" value="Inversion.Web.Harness.Site" />
</behaviour>
<behaviour
responds-to="view-state"
type="Inversion.Web.Behaviour.ViewStateBehaviour, Inversion.Web"
/>
<behaviour
responds-to="process-views"
type="Inversion.Web.Behaviour.ProcessViewsBehaviour, Inversion.Web"
/>
<behaviour
responds-to="render"
type="Inversion.Web.Behaviour.RenderBehaviour, Inversion.Web"
/>
<!-- VIEWS -->
<view
responds-to="rzr::view"
content-type="text/html"
type="Inversion.Web.Behaviour.View.RazorViewBehaviour, Inversion.Web"
/>
<view
responds-to="xml::view"
content-type="text/xml"
type="Inversion.Web.Behaviour.View.XmlViewBehaviour, Inversion.Web"
/>
<view
responds-to="json::view"
content-type="text/json"
type="Inversion.Web.Behaviour.View.JsonViewBehaviour, Inversion.Web"
/>
<view
responds-to="xslt::view"
content-type="text/xml"
type="Inversion.Web.Behaviour.View.XsltViewBehaviour, Inversion.Web"
/>
<view
responds-to="xsl::view"
content-type="text/html"
type="Inversion.Web.Behaviour.View.XsltViewBehaviour, Inversion.Web"
/>
<view
responds-to="st::view"
content-type="text/html"
type="Inversion.StringTemplate.Behaviour.View.StringTemplateViewBehaviour, Inversion.StringTemplate"
/>
<!-- app -->
<message-trace responds-to="*">
<event>
<match trace="true" />
</event>
</message-trace>
<message-sequence responds-to="work">
<context>
<match-any>
<action>test1</action>
<action>test2</action>
</match-any>
</context>
<fire>
<work-message-one trace="true" />
<work-message-two trace="true" />
</fire>
</message-sequence>
</spring:list>

Which can be compared with a previous version. The difference is stark.

We can also see here the beginning of our own domain-specific language in the configuration of our behaviours, and more importantly the ability for other developers to extend this with their own semantics.

Consider the following definition of a behaviour:-

<some-behaviour responds-to="something">
    <resource>
        <exists>
            <path>Resources/Results/result-1-1.xml</path>
        </exists>
    </resource>
</some-behaviour>

I just made that up, but hopefully it begins to become clear how that will be read as a set of tuples for the behaviour's configuration that I can act on. You can make your own stuff up, which is what open for extension means. The ability for you to make stuff up that I didn't foresee, and without you having to ask me to modify my stuff.

There’s a strong smell of Prolog around here now. If you’re familiar with Prolog, think of assertions upon which predicates act.

A little caveat on reading XML as a set of tuples

In a relation of tuples you can’t have a duplicate tuple, so tuples that are repeated are collapsed down to the one tuple. The consequence of this is you can’t do…

{"fire", "bootstrap"},
{"fire", "parse-request"},
{"fire", "work"},
{"fire", "work"},
{"fire", "work"},
{"fire", "view-state"},
{"fire", "process-views"},
{"fire", "render"}

As you’ll end up with just the one {"fire", "work"} tuple. The elements as implemented express an ordinal, so it is possible to change this to allow duplicate tuples, but I want to digest what the implications of that might be first, and to wait and see what pain, if any, it actually causes in practice, before fixing something that may not be broken.

You could, as things stand, move past this problem by moving to something like {"fire-repeat", "work", "3"}.

We have enough here to feel confident in adapting to our needs in this area over time. We’re not walled in if we experience pain in this.

Reviewing our goals

TL;DR It went rather well, or I’d not be writing about it.

I listed a bunch of goals, principles or aspirations that are important to Inversion. I find it important after a non-trivial piece of work to consciously run down a mental check-list and ensure that I'm not negatively impacting any of those goals without compelling reason. The purpose of such a review is not to seek perfection but to ensure simple forward progress in each area, even if it's only inching forward. Incremental improvement, kaizen and all that.

This is just my sharing informal observations after a piece of work. Normally I would use my internal dialogue.

  • Black-box components

    Inversion has a strong opinion on behaviours as black boxes being as opaque as possible. This is why we don’t inject implementation components into behaviours, and encourage behaviours to use service location to locate the components they need for their implementation. The reasons for this are outlined in other pieces I’ve written, and it's something I’ll write about more in the future. The short version is a concern with leaking implementation details, and imposing externally visible has-a relationships upon components where uses-a would be more appropriate. A behaviour may use configuration to form the basis of component location, but that is a detail of implementation, not common interface.

    This concern speaks to behaviours only. How data-access components obtained from an IoC container are instantiated and injected for example is a separate and distinct architectural concern. Behaviours are participating in a component model around which there are specific expectations. Other services aren’t.

    Anything that distracts from a behaviour's condition and action is a potentially undesirable overhead, especially if it's leaking details across the interface. Moving from multiple data-structures to one whose interface does not express intent, and which focuses on being a simple, generalised, immutable structure of string values that can serve multiple purposes… While not a radical improvement in terms of behaviours as black-boxes, it's a definite improvement. We're leaking less. Configuration becomes a standardised input at instantiation.

    Intent is expressed where desirable through the data-structure's actual data, not its interface. This is what is meant by moving to a more generalised data-structure.

  • Substitution, “pluggability”

    Related to the interest in black-boxes, this isn’t a Liskov concern. This is a very real and practical concern with being able to swap out components with alternate implementations. Change the type specified by that attribute and nothing else needs to change kind of swapping out. Behaviours as extensible plugins.

    Again, no tectonic shift here; as with the previous point, the consolidation of many interfaces into one common interface shared by many behaviours provides substantially less friction to behaviours as plugins.

  • Configuration as resource distinct from application

    Expressing configuration is more standardised, expressive, and more elegant especially when using XML notation thanks to the Spring.NET extensions. The change is impactful enough that we’re now starting as a natural matter of course to express our own semantics through configuration.

    So while configuration has not been made any more distinct as a resource, the quality of its use as a distinct resource has been much improved, and I have hope that it will over time become a pleasure to use and a welcome tool for the developer rather than an onerous liability.

  • Favouring composition over inheritance for application behaviour

    With the original implementation there was a lot of pressure to inherit from various classes in order to inherit some of their configuration features. This caused a lot of strain on inheritance.

    With the move to a common data-structure for configuration we took away pressure from inheriting to gain varying configuration properties.

    With the move to predicates as methods extending IConfiguredBehaviour we took pressure away from having to inherit from a particular class in order to pick up its condition predicates.

    What we didn’t escape was the need to actually use these predicates in a condition, therefore making it desirable to inherit from some classes in order to obtain the checks they perform in their condition.

    So this is really a 2 out of 3 in this regard. We have relieved pressure from inheritance in quite a marked way, but there remains an impediment that will require more thought and work.

  • Small footprint micro-framework

    This was one of the primary reasons for the piece of work and one of the more substantial wins as it’s reduced down the footprint of the behaviour interface and provides a strategy for accommodating future change without modification. Behaviour configuration is in a markedly better state than it was. Far more compact in design.

  • Single responsibility

    Providing configuration features was starting to distract from a behaviour's responsibility to provide a condition/action pair, with an emphasis on the action. Most of the responsibility for expressing configuration and working with it has been removed from the behaviour, which for the most part now merely has a configuration that was provisioned by a base class and is acted on by extension methods. So our focus on the actual responsibility of behaviours has been tightened.

  • Extensibility, DIY

    This again was one of the primary reasons for performing this piece of work. There was a desire in the face of feature requests concerning configuration and predicates to be able to reasonably reply “do it yourself”.

    On the one hand there’s a big gain. RDF is able to describe the world with triples, and it turns out N-Quads is a thing. The point is, in terms of data expression you can drive a Mongolian Horde through an ordered set of four-element tuples. It makes it very easy for other developers to extend with their own configuration expressions.

    As mentioned previously adding new predicates as extension methods is now also smooth.

    We’re still stuck on having to actually use these predicates as mentioned.

    The issue isn’t implementing the lookup of predicate strategies, which can be as simple as a dictionary of lambdas; the cause for concern is where to define this, and where to inject it. Which object should be responsible for maintaining this lookup? It probably fits well enough on the context, but it would require the context to hold implementation details of behaviours, and I want to think about that some.

  • Inversion of Control

    I’m not sure I would go so far as to say IoC has been significantly enhanced here. Behaviour implementations have certainly relinquished much of their control over their configuration. Perhaps a nudge in the right direction for IoC is that it is now easier for developers to drive both their condition and action from configuration, so we have perhaps afforded more opportunity for IoC.

  • Testability

    No big wins in functional terms here; the more concise and expressive configuration is simply easier and more pleasant to use, so unit tests, for example, which tend to want to configure a wide variety of cases and so are big users of configuration, certainly benefit.

    While I was rummaging around the framework touching lots of different bits I also took a slight detour to implement MockWebContext along with MockWebRequest and MockWebResponse as I had a need to lay down some half-decent tests. Nothing exciting, you can see their use in ViewPipelineTests.

    So overall this patch of work puts Inversion in quite a strong position for testing, with it being possible to test contexts running a full application life-cycle for a request, or any behaviour or group of behaviours in concert as needed. Very few behaviours have a dependency on IWebContext, in this case only those parsing the request and writing the response, so testing even a view pipeline is straight-forward.

  • Portability

    No big impact here except there’s less to port. The use of LINQ statements is an implementation detail, and there are easy equivalent implementations available on all common platforms. There’s nothing exotic being done here.

  • Conventions for state

    Inversion attempts to place mutable state in known places, and to keep state elsewhere as immutable and simple as possible. We’ve consolidated down our configuration to a single immutable structure, so a small nudge in the right direction.

  • Speed

    Performance tests are showing the same figures. There wasn't expected to be any change here; a move to backing configuration with arrays in the future may squeeze out some performance gains.

  • Other observations

    I’m starting to become mildly concerned over the use of LINQ methods used in a fluent style in implementation code. I have become aware of how often when debugging I am changing a LINQ statement into a simpler form in order to step through it and see what’s happening. I take this as a very loud warning sign. Often my use of LINQ is pure vanity as it has go-faster stripes. I think I’m going to start avoiding fluent chained statements, and expose the intermediary steps as local variables in order to make debugging easier… Difficult to force myself perhaps, as LINQ is bloody expressive.

Future work

TL;DR My flaky ideas.

There’s a couple of progressions I can see to this work, but first I want to let the existing work bed in before jumping the gun.

Backing the configuration elements with an array

At the moment the Configuration is backed by ImmutableHashSet<IConfigurationElement>. This is reasonably efficient, and is easy to work with. It could however be moved to being backed by an array:-

string[][] config = new[] {
    new[] {"frame1", "slot1", "name1", "value1"},
    new[] {"frame2", "slot2", "name2", "value2"}
};

Which would probably be more efficient.

I did it this way as it was easier to reason about and debug, and those are still valid reasons at the moment. Once it’s become part of the furniture, then I can think about trying this out.

Expressing tuples relative to the preceding tuple

There’s an improvement I can vaguely see… because the tuples are ordered, we can consider a new tuple as an expression relative to the previous tuple.

(a, b, c, d)
((-2), e, f) relative to the previous tuple => (a, b, e, f)
((0), g, h) becomes => (a, b, e, f, g, h)
((-4), x) becomes => (a, b, x)

Relations of tuples include a lot of repetition in many cases. Using an expression of an offset from the end would allow us to express an uncapped arity of tuples with the limit being on how many new elements of a tuple we could expand by at a time. They could get effectively as large as you like… Think of scanning the list of tuple definitions using a stack as a point of context, you pop the specified amount of elements, and then push the rest on, the result is your current tuple. You could put this stack based iteration behind IEnumerable<IConfigurationElement> and anybody using it say via LINQ would be none the wiser.
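
A sketch of that stack-like expansion, entirely hypothetical, with tuples modelled here as plain string arrays whose first element is the offset back from the end of the previous tuple (so the opening tuple above would be written ((0), a, b, c, d)):-

static IEnumerable<string[]> ExpandRelative(IEnumerable<string[]> relative) {
    var current = new List<string>();
    foreach (string[] entry in relative) {
        int drop = Math.Abs(int.Parse(entry[0]));
        // pop the specified number of elements off the end of the previous tuple...
        current.RemoveRange(current.Count - drop, drop);
        // ...then push on the new elements to form the current tuple
        current.AddRange(entry.Skip(1));
        yield return current.ToArray();
    }
}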

My thinking on this is still fuzzy, and I feel it may be more than is required, possibly turning something quite straight-forward into something quite convoluted. Once I’ve thought through it a bit more, it may just be an obviously bad idea in practice.

Also sometimes a little constraint is an appropriate restraint. Time will tell.

The lookup of condition predicates

As discussed, at the moment predicates to act on configuration are provided as extension methods which need to be used in conditions. The frame of a tuple could be used as a key to lookup the predicate to apply to it by a variety of mechanisms.
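
The simplest of those mechanisms is probably the dictionary of lambdas mentioned earlier, keyed by frame. A rough sketch (the second entry's predicate is made up; only the first is an existing extension method):-

// hypothetical lookup from a tuple's frame to the predicate that acts on it
IDictionary<string, Func<IConfiguredBehaviour, IProcessContext, bool>> predicates =
    new Dictionary<string, Func<IConfiguredBehaviour, IProcessContext, bool>> {
        {"context", (behaviour, ctx) => behaviour.ContextMatchesAnyParamValues(ctx)},
        {"control-state", (behaviour, ctx) => behaviour.ContextHasControlState(ctx)} // hypothetical predicate
    };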

This would add extensibility but may be one indirection too far.

In parting

I always feel a bit odd after I write something like this up. I’m not sure what use or interest this is beyond a small group of involved people, but I find I’m getting a lot of value out of explaining a thing in a public context. It’s certainly encouraging my thinking toward more rigour, so it’s a worthwhile activity for that reason alone.

My attempt at writing this up isn’t to show any arrival upon a perfect landing spot, but instead relate in some way software development and architectural concern as an incremental process of improvement moving toward a goal.

How I interview developers

Or, do unto others as you would have them do unto you.

Interviewing is one of the riskiest activities a team lead or technical manager is involved with as a standard part of their responsibilities, the consequences of which have the potential to leave a lasting mark on a team. The gap between a dream hire and a nightmare hire is wide and stark.

As an activity however, recruitment tends to be sporadic and infrequent. If it’s something being performed day to day you quickly accumulate experience. If it’s once or twice a year, or every couple of years, it can take quite a while to start learning from our mistakes.

I also feel this is an area that, because we may feel unsure, attracts a lot of posturing and seeking of magic bullets: a desire for processes that will simply drop out good results regardless of our performance within them. This has, I feel, led to some odd recruitment practices in our industry that are self-defeating.

So I thought I’d share how I interview people, and why.

Determine the possible size and shape of the role

I try and keep requirements to the absolute essentials in order to do the immediate job. I don’t know what I will find, and I limit my options if I start artificially filtering candidates too stringently. I am prepared and hope to be surprised.

I avoid looking for any qualities that aren’t actually going to be exercised. This may sound like an odd thing to say, but I feel it to be very important. I seek the qualities that are actually going to be required, rather than those that merely sound good. It’s an artificial constraint to start filtering candidates on the basis of communication and presentation if those qualities aren’t going to be stretched.

I list instead the different ways a candidate may contribute to a team. Perhaps we have a couple of gaps: no really good tester, nor anybody with QA experience. Maybe we have too much bias toward the back-end and are finding friction working with the designers. Maybe an existing person could use support, as we experience pain every time they’re on vacation. I talk with the existing team and other managers about this, and develop a view of the different kinds of person that may be attractive for us. This may be narrow or it may be quite broad, especially if the hire is simply to add capacity.

Sifting CVs

I avoid doing this without another pair of eyes involved, even if only cursory. Invariably I rope in a senior dev to also go over CVs independently of me. There’s too much opportunity for one individual to skip over or sell short an aspect of a CV. Equally it’s easy to miss a warning sign. The senior dev can whip through the CVs quickly as they’re only looking to catch what they feel to be potential mistakes on my part.

I am looking for breadth and depth, with both depending on the seniority of the role. I look for different languages and platforms. A range of employer types. As much diversity of experience as possible, because ultimately I have begun the process of trying to find a developer that is able to learn and grow over the course of their career, engaged with a job they enjoy. While I am happy for a developer’s career to have points of focus and themes, I tend to steer away from CVs that express only experience with one of the currently dominant development stacks. All approaches have weaknesses, and such developers have no point of comparison or counter-experience from which to compensate.

I find CVs that simply list every technology the developer has ever been sat in the same room with difficult to consume, as they don’t relate to me the developer’s competencies. I don’t ignore such CVs entirely, but I don’t find them as useful as CVs where the developer has engaged in some critical thinking about what they feel is most important to present and give emphasis.

While it is a definite bonus to encounter CVs with some personality expressed in them, it is only a bonus, and I don’t regard badly those who don’t. Some people feel it inappropriate and potentially risky to inject personality into a CV, and not without reason. I feel it then unfair to expect it, but as I say it is a bonus when it’s there.

Those CVs that make available source code for review are the most useful CVs I can receive, but unfortunately they are very rare. They allow me, from the very beginning of the process, to determine whether or not the candidate is capable of programming in a way I find appropriate. Nothing else on a CV will provide me with this information. At this stage I don’t need to review the code; a quick scan of a repo will tell me if there is enough substance to the candidate to warrant speaking with them.

I don’t regard it as a black mark if a developer doesn’t cite source code, as it’s not the norm in my corner of the industry. I am however going to need to put eyes on some code the developer has produced at some point and if that’s right at the beginning, well early feedback is always good.

I make notes for each CV citing the reasons for either wanting to speak with the candidate or not. Whoever I’ve roped into also quickly giving the CVs a once-over can challenge any of my reasons.

From this I have a rolling list of candidates to speak with.

Phone screening

I prepare a series of open-ended questions to ask the candidate on the phone. I plan on the screening interview taking up to 20 minutes, but I’ll cut it short if there is no traction with the developer.

After exchanging pleasantries with the candidate, I confirm what they’ve been told about the position by any agent or HR, and clarify broad details.

I make it clear that this is a quick screening interview, that if we move forward they will be afforded an opportunity at the next stage to engage in a full interview process. It is explained to them that they’re going to be asked a series of mostly quite general questions that they’re probably going to have reasonable answers for, and that the emphasis is on what a thing represents to them, rather than trying to guess the answer I’m looking for.

The candidate is told that at this stage and any future stage, they are able to correct themselves if they feel with a couple of minutes hindsight they’ve just said something stupid, and that doing so will be seen as a strength not a weakness.

A chunk of the questions will be very general indeed and will tend to be along the lines of “what does XML mean to you?” There’s a huge range of possible answers to that question without getting into right or wrong. What answers the candidate gives will start to tell me about their background and the sort of work they have been doing. Do they talk about data-modeling, serialisation, interchange, schema, configuration? Do they mention the word “extensible”, and does it mean something to them? Maybe they talk about how and why they prefer JSON. They may similarly be asked “what does OO mean to you?”, again with the range of possible answers wide open and likely to tell me much about the candidate’s world view.

I avoid phrasing questions in a way that is likely to encourage the candidate to try and guess what the answer is that they think I want to hear.

Just a couple of questions will ensure basic awareness of what is going on around the developer on a particular platform. So, for a .NET position I may ask “what is a lambda, and how might you use it?” or “what is an extension method, and why might you use it?” These questions are intended primarily to see if the candidate is paying attention to basic developments on their platform. They afford the opportunity for the candidate to volunteer a view on injecting strategies, or perhaps to comment on their extensive usage in LINQ. Because the questions don’t focus tightly on right or wrong, they allow the candidate to give me an insight into their background.

I expect senior developers to be able to give succinct, to some degree defining answers, even if not necessarily complete ones given the context of a quick phone screening.

I expect junior developers to be able to kick the ball in the direction of the goal, and to hit the target on a couple of occasions.

I’m not overly interested at this stage whether or not I see eye-to-eye with the candidate in their answer to an open-ended question. It doesn’t much matter to me if the candidate favours NoSQL over RDBMS; it is interesting to me if they have a reasoned view on the usage of the two. If I ask them what technical sites they use for news, I’m more interested to see that they do exercise an interest in the broader industry; if they happen to be looking at a different end of it than I, that’s not terribly important.

I value honesty, and I’m greatly encouraged if the developer is able to simply declare “I don’t know” when they don’t.

From this I have a rolling list of candidates that I feel warrant investing the time in a proper face-to-face interview.

The face-to-face interview

The face-to-face can take as little as 30 minutes for candidates that are obviously out of their depth, to as much as 3 1/2 hours where things are going well.

There are a couple of things I want to achieve in the face-to-face interview above and beyond the predictable exchange of information about company history, duties, roles, and work history, the order of which isn’t terribly important and will depend a lot on the candidate and how the interview progresses.

I need to eliminate as much of the candidate’s fear as possible

It’s beyond clichéd to say “fear is the mind killer”, but it really is. Very few people perform well at all while afraid, and those people that do I may or may not want working for me depending on other aspects of their character. Moreover many people start to exhibit odd or uncharacteristic behaviours when they experience pronounced fear without support.

This is where myself and any potential reader may diverge for legitimate reasons.

There are roles where it is legitimate to test if the candidate can deal well with highly stressful situations: consultancies and agencies that parachute bodies behind potentially hostile enemy lines, on their own or with minimal support. In these cases the candidate’s ability to code while terrified may have some merit.

It is, I feel, crucially important to be extremely careful to consider what qualities we are scrutinising the candidate for, and whether the manner of our scrutiny will actually expose those qualities. To scrutinise for qualities that aren’t actually a requirement is simply to apply an artificial and erroneous filter upon prospective developers. While my team members will have stress as part of their working lives with me, and they will have bad days, they will never be facing these things without my full support and the full support of a team that they respect and trust. Asking candidates to jump through arbitrary hoops is a counter-productive and abusive practice that speaks more to the insecurities of the interviewer. Dehumanising people is rarely a good solution to a problem, especially so in people-centric processes.

I make it clear what it has been that has impressed me about them and why I have invited them to interview. I also make it clear that given the current market for developers, they’re going to get a job, the question is whether that is with us or another company.

The candidate is told I am not interested in testing their ability to perform well in interviews, because their job will not entail sitting interviews… Again, in a more consultancy-oriented environment it may actually involve something similar… If they experience a brain freeze, go blank, or have a mini panic attack, it’s okay and not terribly important to me. They won’t be judged if the circumstance of the interview causes a bad reaction in them. Interviews are tricky, and a bit weird. It’s okay if they are a bit weird as a result.

When candidates realise that you’re not going to torture them, and that coming in for interview today might actually end up an enjoyable experience, they often become very grateful. You can see the look come across their face the moment they think, “nobody is going to try and shame me here”. This needs to be reinforced throughout the interview.

In most cases, and invariably in every case where the candidate is successful they will relax at this point and change gear.

It is my belief that most of us share two common fears in an interview. The first is simply that we will experience shame. The prospect of shame bends nearly all of us out of shape. Those that it doesn’t tend to have quirks of personality that may not make them healthy members of a team, but that’s out of scope of this piece. The second fear is that we’ll be misunderstood. That we won’t manage to show the interviewer the best of us. These fears can distort the person in front of you to the degree that you’re no longer getting useful information about how they will actually perform within your team.

Therefore the candidate is told that if at any point of the interview they simply want to tell me something about their work history or a cool thing they’re proud of, they should simply volunteer it. They are instructed that they should not find themselves walking away from the interview wishing “if only I had told him about X”.

The probation period

Early on in the interview I tell the candidate that we exercise an x-month probation period, and that we take it seriously. There is nothing to be gained by the candidate misrepresenting themselves or being disingenuous in trying to “win” the position; it’s not in their interests or ours. They should relax, and just be open and honest with us.

Best and worst

I ask the candidate to tell me about a project they were involved in that they feel proudest about and why. I ask them what are the things from that project they are most enthusiastic about carrying forward into future projects.

Then I ask the candidate about a project they are least proud of their involvement with. What their mistakes were, and how this project has informed their practices since.

This is a crucial stage of the interview. It is vitally important to me that my developers at a simple level are able to cop to their own mistakes, hopefully correct them, and most importantly learn from them… I am tolerant of mistakes, but not their repetition… Developers who sweep their mistakes under the rug and hope nobody notices are a liability. Developers who do this are also likely to engage in disruptive behaviour in the team in order to cover for their failing.

Understanding our failings is also essential to the process of learning. If we are unable to reflect upon our mistakes we are unable to learn from them. If we are unable to share those mistakes with our peers, it becomes an obstacle to learning with them.

There are very few outright failure criteria in my interviews. The candidate who attempts to dress up a strength as a weakness… “I try too hard”… will be challenged gently but firmly on this point. If they pull off a convincing U-turn, I won’t hold it against them, as the conventional mode of the interview process encourages the candidate to misrepresent themselves. If they are unable to reflect upon past mistakes in a simple and honest way, the interview probably won’t proceed its full course.

“When did that work for you, and when did it not?” becomes a recurring theme of the interview.

The code test

I don’t normally do on the spot code tests.

In ideal circumstances the candidate will have code available for me to review, and we’ll do that together. This hopefully is an example of their programming unconstrained, and an exposition of what they consider to be their best.

I am interested in seeing their best. Their worst code will look just like my worst code, which is terrible. I want to see their best as it’s their best I intend to get out of them. Seeing their worst code does not tell me anything at all about their best code, it is therefore next to useless to me.

I’ll underline that… I need to know what their best code looks like, not their worst.

Unfortunately most developers currently don’t have repos to show, and indeed this has tended to be why recruitment processes in the industry have gotten so bent out of shape. So bent out of shape that many companies won’t look at source code when developers make it available, focusing only on their own tests, which in turn demotivates developers from maintaining projects to present to prospective employers. It’s a vicious little circle we’ve established here.

If the developer doesn’t have code to show, I will negotiate with them a mini-project that looks something like what they’d be doing for us, but feels comfortable enough for them to feel confident about giving a good account of themselves. I’ll try and base this on what they’ve related to me in interview, so if they’ve raved about design pattern X, then we’ll find a project that allows them to demonstrate it. With senior devs we’ll negotiate a kink of some sort that will stretch them a bit.

The test is outlined in such a way that the solution is neutral and doesn’t tie it to our company, allowing the developer to reuse it as a demonstration of code with other companies if they’re not successful with us, meaning they get something out of this test process and aren’t just sinking time into us.

Given that the candidate represents well in other aspects of the interview, what they have available in terms of code to show, or what they produce in the code test will often be the deciding factor between them and other candidates.

They can do this test either in our office or at home. I don’t much care how much time they want to spend on it; if they want to invest a lot of time in it, good. Why would I stop them? The more they do, the more I have to consider.

Ad hoc questioning

While having general discussions with the candidate about their work history and competencies, I will drop in ad hoc the sort of questions that would normally be given in the form of an oral or written test… “Tell me, what’s the difference between a value type and a reference type?”… While the developer is engaged in hopefully animated discussion about things they want you to know, they forget to be afraid of such casual questions as things are evidently going well.

Over the course of an interview I can ask all the questions I want of a candidate without them ever feeling interrogated if I simply ask the questions the same way I would to somebody I was having a friendly conversation with, because that’s what is happening.

Second interview

Often there will be second interviews, but not always. It’s really more a matter of revisiting candidates where there is perhaps a choice between two or three strong candidates.

Second interviews will invariably involve other managers and members of the team, focus on how the candidate may fit in with the company, and will tend to look like follow-up interviews in a lot of places. By this point the candidate is likely to have a good level of confidence, be enjoying the process, and have a strong sense that things are going well. Starting to dig robustly into the candidate’s practices, work history, or how they work with others, especially if it is a senior role, is quite possible at this stage without freaking the candidate out.

If after a second interview it’s not clear to me if a candidate is the right choice, then they’re evidently not, and the process needs to continue.

Avoid the expectation that good developers think like me

Many of the best people I have worked with are nothing like me at all.

When I look back at a younger version of myself, I am ashamed at the number of articles of knowledge I would insist all “good programmers” should know that I had in fact only learned myself six to twelve months prior. Things that I encountered from candidates in interview that I didn’t know were invariably classified as pointless detail that distracted from the core tenets of a good programmer.

It is important that candidates remain current with trends in the industry, but I’m not aware of any current trend to have emerged in my corner of the industry that wasn’t an established element of development canon in the broader industry at least a decade prior. In every case I can think of, when I have learned a new concept I have been coming to the party at least a decade later than a whole bunch of other developers in another corner somewhere.

Over time I have learned to be careful not to expect the candidate to coincidentally be learning the same things as me, in the same order, at the same time. It is an unreasonable expectation and is the execution of a bias that can filter out potentially suitable candidates.

This bias produces a broader impact on the industry in that it can artificially reinforce trends. In commercial development communities a concern with best practice can quickly become an unreasoning group-think.

Imagine a case where I have been spending the last six months reading material on CQRS. Over that period I share the ideas with my team, and they start reading material. Excited and enthusiastic material. We convince senior management this is the way forward, and for our next application CQRS will be one of its core pillars.

We happen to be recruiting into a position on the team and we’re presented with two promising candidates, both of whom have sufficient skill and experience to fulfill day-to-day duties and expectations.

Candidate A has read the same material, is as enthusiastic as us, thinks the idea of being able to roll back events and replay them is killer, and would love nothing else but for their next project to involve CQRS.

Candidate B has heard of CQRS but isn’t overly familiar with it. They seem to understand the core concepts as it’s explained to them, and are intrigued, but express the concern that it seems a little overly complicated for the baked goods ordering system we’re building next. The thing they’ve been excited about for the past year seems to be the use of the actor model in Scala, which they’ve been playing with.

Everything else being equal, which candidate is more useful to me and my team?

Unfortunately there isn’t a right or wrong answer to that. There are pros and cons, which is to say risks, to both candidates in terms of how they integrate with the team. Candidate A will likely integrate with the team more smoothly, but is likely to simply amplify what is already in the team. If we have full certitude we’re right, we’re likely to prefer this option. Candidate B however, if they possess a sufficiently flexible personality, has the potential to add a perspective to the team that it doesn’t currently possess.

My personal value judgment is to seek out Candidate Bs with sufficiently flexible personalities. It is more important to me that a candidate is capable of appropriately challenging assumptions that are presented to them, than it is for them to echo back to me my own views, reassuring me that my choices are correct. If Candidate B has a perspective or experience in areas that are absent in the team, then they start to become really interesting.

It is also more important to consider what the candidate has been doing, rather than if they have been reading and agreeing with the same material as me. It’s remarkable the number of developers who have a strong opinion on things they’re not actually doing.

Give me somebody playing with actors in Scala over somebody talking about CQRS, even if the latter is what I’m excited about (I’m not, as it happens; it really is just an example).

It’s time consuming

Tough. It’s my job, and if I don’t get it right it can cost my company dearly in a myriad of ways. We will live with our choice of candidate for years, there are few other jobs that a team lead or technical manager perform that are more important.

One might argue that a recruitment process such as this will not scale, to which I would suggest if each team has a team lead that takes point on recruitment for that team, whether you have 2 teams or 12 will make no difference as the activity is in parallel.

If what one means to say is that an organisation, in the pursuit of more efficient allocation of resource, has team leads acting merely as line managers for 3 teams each, and they simply don’t have the time to do their job properly, then that is a problem of the organisation’s own making and has more to do with them overworking their team leads than it does with the scaling of the process.

If I am distracted with a recruitment process my senior devs pick up the slack, it’s good practice for them, and indeed what they are there for. If I have a senior dev interested in gaining more experience they can pick up some of the screening. Recruitment is a good time to carefully stretch your senior devs skills, and I see it as an opportunity for their personal development.

Want to know if a candidate will fit in with the team?

Take them for lunch with the team. If it’s a really important hire, take them all for dinner. My team gets a jolly and they feel like involved participants (because they actually are), and nothing will give me more information about how a candidate may fit in with my team than putting them all in the same place in a relaxed setting, shutting my mouth for a while, and watching how they all get on.

We can test for a thing, which is to say observe its simulation within an artificial context, or we can actually just observe the thing in a natural context.

The candidate who seems brilliant but gives me concern that they may be too shy and reserved I may find, at lunch outside an interview room, is relaxed, outgoing, confident and expansive… People can act in very unnatural ways in interviews that do not correlate with their behaviour in other aspects of their life. Again, I need to be mindful of what the interview is actually testing.

The pitch and pay negotiations start upon first contact

My experience tells me that the majority of developers will balance two simple criteria in choosing their next position. The first being how much does it pay, and the second how much they feel they will enjoy the position. How those two factors are balanced will depend on the developer. Many developers will take less pay for a job they feel they will enjoy and grow in.

The candidate’s first impressions of me, my company, my team and our projects are every bit as important as my first impressions of them. The more weight I put on this side of the scales, the less money will need to be placed on the other side of the scales.

This isn’t to suggest a recipe for underpaying developers. Coming very closely after the priority of acquiring the best developers I can get my hands on, retaining them is my second highest priority, and salary most definitely plays a role in this. The issue is that if the candidate is any good and has any significant experience, one has to assume they are entertaining other offers. There is always going to be somebody paying more than me, and I have a duty to my company to get the most I can from budget.

Again experience tells me that most candidates will not reveal other offers they are considering. While some candidates will come back to negotiate a better pay offer, a large proportion will not, and with those you have effectively engaged in a sealed-bid. You’ll be notified of their decision. In these cases having treated the candidate like a human being from first point of contact up until they make what is a life impacting choice for them and potentially their whole family, can be a compelling factor.

With good candidates, you aren’t just interviewing them, they are also interviewing you. You want smart people? Well, expect them to be smart.

My job is two-fold, to ensure my company prospers from the activity of its developers, and to ensure my developers prosper within the company. This is a mutually beneficial relationship if it is to work well over time. If by the end of the recruitment process the hopefully smart candidate is to accept this as genuine and authentic it needs to start at first point of contact, and remain consistent.

I treat my team members as humans, which is to say individuals with their own motivations, hopes, fears, insecurities, and aspirations. I treat my candidates the same way. Just because I don’t know them very well, does not make them less human.

This isn’t a case of it being nice to be nice: making good choices and encouraging the best from people rather than the worst has a direct and marked impact on productivity and staff churn, which can be very costly and sometimes outright crippling to a business’s ability to achieve and maintain momentum.

Treating candidates as humans is smart and the effort involved pays dividends whether your contact with them is for minutes or years.

Why I like XSLT.

I first fell in love with XSL in 1999, when Microsoft rolled an XSLT processor into MSXML. I got my hands on the works of Shakespeare marked up in XML and started playing. I found it so shockingly easy to transform a whole play into HTML with XSLT handling the heavy lifting of flow control. It was love at first sight.

I still use XSL today for my own projects, and for five glorious years there, while WebForms were still squeezing the joy out of life and before the MVC.NET Borg collective had arrived to assimilate everybody, myself and my development team at the time used XSL extensively. Then it all went away.

Since then Razor has come to dominate, almost to the exclusion of anything else on the .NET platform, with similar templating pervasive across most platforms. I’ve come to terms with the current of the stream being simply too strong to swim against. My closest of colleagues whom I’ve persuaded over the years to entertain a wide variety of bizarre ideas and technologies have remained a stonewall of refusal. So this is why I like XSL.

This may ramble a little as the reasons why I use XSL are a cohesive whole, making it difficult to tease apart the considerations as individual pieces. Indulge me while I give it a shot.

Portability

Arguably the big shout for XSLT is the breadth of its support. If you’re writing software commercially, there’s undoubtedly an XSLT processor available on your platform. Just to have a quick eyeball at the outliers, there are XSLT processors for Erlang, and here’s a tutorial on using XSLT with Delphi. The current popular platforms have had very good XSLT support for quite some time, and .NET has a very nice optimised XSLT processor.

It’s not just on the server either, with XSLT support in browsers; the failure cases there are in areas unlikely to impact development.

I’m not aware of another templating language that has the range of support that XSLT does. You would need to consider Javascript as a templating language to get support as pervasive.

What could have been

If as an industry we had persisted with the adoption of XSL by now our front-end developers would be armed with HTML, CSS, Javascript, and XML/XSLT. They would be used to being handed an XML output on one side, and a design mock on the other for which they wrote the XSL stylesheet to get from one side to the other. Whether their work history had taken them through PHP, Ruby, Python, Java or .NET development teams would make little difference to them. They’d use the same templating solution wherever they went and would need to know little intimately about the platform on which the application was built.

Recruitment, developer quality, and management of front-end developers would be in a significantly different place than it is today, with front-end developers able to focus on what their job is, accruing common competence regardless of platform along the way.

“X years commercial experience with HTML, CSS, Javascript, XML/XSL, Creative Suite”, could be enough to plug a front-end dev into any web development team… Having to then specify “With experience in C#/.NET, MVC.NET, Razor. Entity Framework a bonus” not only partitions your prospective pool of developers or jobs depending on which end of that you find yourself, it also shines a spotlight on the dependencies your view layer is carrying with it.

Separation of concerns

For me personally this is the big issue, and it’s not an issue that XSLT alone addresses. The piece The Importance of Model-View Separation is an interview with the author of StringTemplate and ANTLR, which uses StringTemplate on its back-end. It’s a really good discussion with somebody taking a good hard look at just what it means to have a view layer that only deals with its own concern.

With a templating solution like Razor it is easy to both implement business logic, and to reach back into the application in ways a view simply is not meant to. This leads to MVC.NET applications that have views that are highly enmeshed with the broader application.

In an application using XML/XSL the result of a request operation is an XML view. A serialisation of end-state if you like. The XSL stylesheet can do little except transform this. There is no reaching back up-stream, and implementing business logic in XSL while doable is painful.

There is a clean fire-break between the handing off of the XML and rendering it as an XSL transform into HTML/CSS/Javascript, PDF or whatever. This line between two layers of the application is as clean as I have witnessed in any architecture, and makes breaking it an exercise in swimming against the tide.

The VmEntityName plague

A lot of MVC.NET applications use Entity Framework or a similar ORM, often with lazy-loading of members. By the time these “live” entities make it down to view layer their context has been disposed, and so their data-members can’t be populated. Even when lazy loading isn’t being used, we don’t feel good about passing these entities down to our view.

So we start creating a whole bunch of view-model objects for different scenarios, and the surface area of our application model balloons. Hydrating these view-model objects is a pain in the arse, so we employ an auto-mapper. We now have a wonderful little tar-ball of complexity that’s simply going to increase over time, stretching testing to the point where you’ve simply stopped doing it in any real sense.

An XML view-model

XML is really good at extensible representation of data-models. It was purpose made for it off the back of a lot of experience from SGML. I like that I can start with a simple unconstrained view-model that can be refined and tightened over time. I can layer different models with namespaces, and I can apply schema to the model if needed.

For large complex applications where the view pipeline is a significant portion of the application, in multi-tenant or white-labeling scenarios I can outright come up with a domain-specific language for the view-layer.

Transformations

XSLT is intended to transform one XML document into another XML document. Any text notation can be output from XML, but it’s designed for XML-to-XML. This transformation is accomplished by matching on a point of context in a document, and then specifying output and recursion on the basis of this match. XSL is a functional language that makes it difficult to use in an imperative fashion. There is for-each and call-template but their use is strongly discouraged.

Transformations naturally lead to pipelines. The output of one stylesheet can be the input of another. So for example, the data-model is serialised to XML, the back-end devs then transform this to an intermediate presentational layer. Front-end devs are then responsible for rendering this presentational layer into HTML. If you need to actually layer your view layer, XML/XSL will naturally extend in this direction. Razor won’t.
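
As a rough sketch of such a pipeline on .NET, using XslCompiledTransform (the stylesheet and file names here are illustrative):

using System.Xml.Xsl;

public static class ViewPipeline
{
    public static void Render()
    {
        // back-end stylesheet: data-model XML to an intermediate presentational document
        var backend = new XslCompiledTransform();
        backend.Load("backend.xsl");
        backend.Transform("model.xml", "presentation.xml");

        // front-end stylesheet: presentational document to the final HTML
        var frontend = new XslCompiledTransform();
        frontend.Load("frontend.xsl");
        frontend.Transform("presentation.xml", "page.html");
    }
}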

Focus on API

The focus on XML as the result of a controller naturally focuses development on the application API, as you’re effectively creating an API for your own internal use. This XML API is what your XSL stylesheets use. Making this accessible externally becomes a natural progression rather than a different activity.

Serialisation by threading a writer through an object graph is fast and provides a single pattern for serialising to XML, JSON, or whatever.
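
A minimal sketch of that pattern, with the types involved being purely illustrative:

using System.Collections.Generic;
using System.Xml;

public class Order
{
    public string Reference { get; set; }

    public void ToXml(XmlWriter writer)
    {
        writer.WriteElementString("order", Reference);
    }
}

public class User
{
    public string Name { get; set; }
    public List<Order> Orders { get; } = new List<Order>();

    public void ToXml(XmlWriter writer)
    {
        writer.WriteStartElement("user");
        writer.WriteElementString("name", Name);
        foreach (var order in Orders) order.ToXml(writer); // each node writes itself
        writer.WriteEndElement();
    }
}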

Sand-boxed

The clean fire-break between the view template and the rest of the application, coupled with an XML representation that the XSL dev can’t reach back beyond, means your view-template is neatly sandboxed. A lot of effort has been made to ensure you can’t yield side-effects from the XSLT template except to produce output. There are far fewer sins that can be committed in an XSL template than in a Razor template. Doing the wrong thing is painful.

Null references in the view

If your view templating solution has issues with null references, it’s not a view templating solution. Defensive coding in a view template is decidedly more torturous than anything XSLT asks of you.

Ease of testing

The combination of portability, wide availability, clean separation of concerns, and not least schema, means that testing not only XSL stylesheets themselves but the XML product of controllers is about as easy as it gets. Mocking XML data-model representations is trivial.

Handing your front-end dev an XML mock and saying “this is what we’ll be producing” allows them to be working on the view presentation at the same time that you are working on the implementation.
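
As a rough sketch of the kind of test this enables, with the stylesheet, the mock markup and the expected fragment all being illustrative:

using System.IO;
using System.Xml;
using System.Xml.Linq;
using System.Xml.Xsl;

public static class StylesheetTest
{
    public static bool NavigationRendersAList()
    {
        // the mocked data-model the back-end has promised to produce
        var mock = XDocument.Parse(
            "<item name='navigation-view'><records><item/><item/></records></item>");

        var xslt = new XslCompiledTransform();
        xslt.Load("navigation.xsl");

        var output = new StringWriter();
        using (var reader = mock.CreateReader())
        using (var writer = XmlWriter.Create(output))
        {
            xslt.Transform(reader, writer);
        }

        return output.ToString().Contains("<ul>");
    }
}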

And the down-side

There are many books and articles extolling the virtues of XSLT, but we can’t get past a quite deeply ingrained loathing of XSLT, and indeed XML in general, in whole segments of the web development community. I have personally always found this a little odd in a bunch of people who have such a focus on HTML markup, but it is there and cannot be ignored.

Bill Venners When you hire a bunch of programmers it can be quite difficult to get them all to go in the same direction. One of the ways I think about architecture is that you can try to define it such that it is easy for people to go the way you want them to go, and painful, though likely possible, to go other ways. People will usually follow the easy route.

XSLT deliberately makes some things difficult. This can be misidentified as a problem with XSLT rather than a problem with what it is you’re trying to do in a view-template. That being said, some issues are not about trying to do the wrong thing but are about some sharp edges XSLT presents in a current web development environment.

It’s XML

I can’t help but feel that if this…

<xsl:template match="item[@name='navigation-view']" mode="navigation">
  <h4>Relations</h4>
  <ul>
    <xsl:apply-templates select="records/item" mode="navigation" />
  </ul>
</xsl:template>

Had looked like this…

$template[navigation](item[@name='navigation']){
  <h4>Relations</h4>
  <ul>
    $apply-templates[navigation](records/item)
  </ul>
}

The history of XSL may well have been different. DSSSL, a stylesheet language for SGML from which XSL is derived, is a subset of Scheme. We can see here the origins of CSS also.

(element doc
  (make paragraph
    font-size: 20pt
    line-spacing: 20pt
    font-weight: 'bold
    color: color-purple
    (process-children)))

If XSL had a different notation, we’d be all over it.

XSLT is verbose, and without good editor support it’s a right pain in the arse. I work largely on the .NET platform spending much of my life in Visual Studio. I have intellisense for XSLT, a good understanding of XML by the IDE, and even debugging of XSL stylesheets. Anybody working in a similar situation has little excuse to cite “pain in the arse to work with” as the IDE is handling the pain for you.

If you want however, you can be working on your templates with Notepad and a web-browser.

The fact remains XSL syntax is less friendly for the developer than most templating languages. Architecturally however, it’s hard to conceive of a friendlier templating solution. The “X” stands for extensible.

It’s harder to think about for imperative programmers

XSL is a strict functional language. It relies on pattern matching and recursion. These are things that can legitimately hurt our heads a bit coming from traditional imperative, OO backgrounds. I would suggest this pain passes with familiarity until it is gone. The gains from XSL continue to accrue over time.

Staffing can be tricky with XSL

Because of the unpopularity of XSL, acquiring devs with experience can be tricky. Even those devs who have used XSL often don’t maintain or develop skills with it, and it was often a cleaning-out-the-latrines type of task when they came into contact with it.

Projects that are constrained by developer experience, where there is significant developer churn, or where the project is short-lived… truthfully it’s not worth the hassle, life’s too short.

For long-lived projects, or where there is investment in staff and work-flow, developing skills with XSL and architecting an adult view layer warrants, I would say, proper consideration. Razor looks like a quick fix, but as it muddies the boundaries around the view layer, it also muddies the skill requirements of your front-end devs and can make for a very uncomfortable working relationship between front-end and back-end devs. This often leads to developers spending quite a bit of time working on a side of the fence they aren’t particularly comfortable on. Either your front-end devs are working with C# and a view-model exposed as an object model, or your back-end devs are producing HTML and CSS.

The fact remains that today, staffing and training can be an issue with XSL.

An acquired taste

A diet of nothing but fast-food is bad for us. Picking nothing but fast-food technologies for our application stacks is bad for them. Razor is compelling because like fast-food it requires little if any effort on our parts. There are times when its expedience should win out as the right choice, there are many times however when it is rotting the insides of our applications, clogging its arteries and shortening their lifespan.

Many of the foods that are good for us we spat out as children because they had a strong taste. Often as adults these same foods go on to become our favourites, sometimes even being described as “moreish”. Sometimes the same is true of technology, and even when it’s not, good food is still required for health regardless of our tastes.

If it were just a personal matter, no more than that need be said about it. Many of us however are paid well to make good technical choices for companies we work for. We should be careful that we are not making choices on their behalf based solely on our own personal tastes.

I come not to bury IoC but to praise it.

Or, All elephants are grey, that’s grey, therefore that’s an elephant.

I have used Spring.NET since 2006 when I first used it as the backbone of an MVC framework and CMS on the .NET platform, and I have used it aggressively since and to this day, inducting several development teams into its use over that time.

I felt compelled to give my credentials there as a good card-carrying developer, hip to current trends, who finds it almost unthinkable to write a non-trivial application without an IoC container. I feel compelled because this piece may be unfashionable and, I’m kind of dimly aware, could make me unpopular in the current development community in which I find myself. Nobody likes to voice an unpopular view. We fear others will think us stupid.

It’s important that any reader appreciate that I am writing as .NET developer, lead and architect working in London. There may well be wider applicability to my views, but I can’t know that, as my observations are based on… well, what I get to observe. Things may be similar where you are, but they may not.

A room full of smart people wearing bell-bottoms, because…

I have found myself on more than one occasions saying to my developers, “commercial software development is first and foremost a social activity before it is anything else”. I’d been saying that (possibly while stroking my beard) for some time before I actually stopped to think about it, because I felt quite a level of conviction, before I really understood what I meant.

It’s not a terribly opaque statement, and it’s really quite obvious the moment you consider it… Before any code gets written, before any infrastructure is laid down, a bunch of people are going to make a whole bunch of decisions. Throughout the development of an application, and through its support and maintenance, a wide assortment of people are going to negotiate between themselves what is right action and what is wrong action. The quality of those decisions will have a significant impact on the quality of the software, and ultimately equate to pound signs written either in black or red.

There are libraries filled with books written on the subject of people and decision making by writers far more studied in the subject than myself. I want to focus for the moment on one specific aspect of groups of developers and architects making technical decisions together. The acceptance without scrutiny of self evident virtue received as common wisdom from a peer group. Known by the more plain speaking as Fashion.

There’s a couple of angles I could take at this and I may explore other areas later, but for now I want to drill into Inversion of Control and Dependency Injection a little, and the manner of its pervasive use in the .NET development community currently.

I’ll admit that’s a lot of packaging before getting to the content.

Inversion of Control (IoC)

So what is IoC? It’s almost impossible to answer that question without first asking “when?”, because the expected answer to that question in interview today is very different than those who coined the term would give.

Martin Fowler When these containers talk about how they are so useful because they implement “Inversion of Control” I end up very puzzled. Inversion of control is a common characteristic of frameworks, so saying that these lightweight containers are special because they use inversion of control is like saying my car is special because it has wheels.

I am reminded of the fact that if more people read Fielding’s own comments on the application of REST, there would be a lot fewer articles and books on REST, and far fewer applications calling themselves RESTful. Concepts percolate through the development community and in the same way the truth of an event in some distant foreign country will go through many changes before it reaches your tabloid front-page, so do concepts in software development before they end up in the blog post you’re reading.

If we’re not lazy however, we can go back to their root and origin.

Ralph Johnson and Brian Foote One important characteristic of a framework is that the methods defined by the user to tailor the framework will often be called from within the framework itself, rather than from the user’s application code. The framework often plays the role of the main program in coordinating and sequencing application activity. This inversion of control gives frameworks the power to serve as extensible skeletons. The methods supplied by the user tailor the generic algorithms defined in the framework for a particular application.

That quote, which is referenced by Mr. Fowler’s writing on IoC, was written in 1988. If I can persuade you to read just one paper, please make it this one.

What is IoC trying to achieve?

Before we look at the ways in which IoC is in someway a special case, it is useful perhaps to consider what it shares in common with a broader approach.

IoC participates in a movement to develop techniques for more reusable code, along with a bunch of other principles to this end. One of the success criteria for our employment of IoC, then, is the degree to which we attain code reuse.

In the environment in which IoC grew up, code reuse was seen not just as a means of increasing productivity; it was seen as essential if systems were to be allowed to grow and evolve over time without collapsing under their own weight of complexity. There’s a lot of talk of class libraries evolving into frameworks, and frameworks evolving from white-box systems to black-box as our understanding of a system’s abstractions improves with experience. There’s importance given in the early talk of and around IoC to the human factor at play within development. Systems, it was thought, should change as our understanding as a group of people engaged with a problem domain changes. The system should evolve in tandem with our understanding of it.

This approach to software development as an evolving system requires a focus on decoupling of implementation from use, an aggressive focus on discrete interfaces, and an almost obsessive regard for component substitution (plug-ability).

This common goal is what IoC is meant to further, to yield systems resilient to change.

So what is IoC?

It’s a lot of different things.

Martin Fowler There is some confusion these days over the meaning of inversion of control due to the rise of IoC containers; some people confuse the general principle here with the specific styles of inversion of control (such as dependency injection) that these containers use. The name is somewhat confusing (and ironic) since IoC containers are generally regarded as a competitor to EJB, yet EJB uses inversion of control just as much (if not more).

Whoah. Something more IoC than an IoC Container?

One of the characteristics of the practice of religious faith, is that it often doesn’t stand up to scrutiny with its founding texts and prophets.

Martin Fowler Another way to do this is to have the framework define events and have the client code subscribe to these events. .NET is a good example of a platform that has language features to allow people to declare events on widgets. You can then bind a method to the event by using a delegate.

You’ve been doing IoC for a long time, in a lot of different ways, long before you ever found Ninject.

Inversion of Control is any pattern that inverts the traditional procedural flow of control. That’s it. The purpose of which is to reduce coupling between components to make them easier to swap out, and to promote flexible application evolution that is able to cope with new abstraction built from it.

Just because your mouse is grey doesn’t mean it’s an elephant

Consider the following wee piece of code…

context.FireWith("user-store::delete-user", "user-id");

Which is a straight-up imperative call. We have the expectation that a named component will delete the user specified. It’s a message-based call, and there are valid aspects of decoupling taking place here, but it’s weak in terms of IoC as it’s a traditional forward calling imperative… “Oi! You there, do that.”

Almost the same…

context.FireWith("user-unsubscribed", "user-id");

Here we are notifying the system of a significant occurrence. We may have no idea what if anything is going to happen as a side-effect of this. Several things may act upon this, or nothing may act upon this… and we don’t care, because in the second example it’s not our business at this point in the application. This is not an imperative. It’s notifying the broader system of an event… “Excuse me. Anybody there? I don’t want to be a nuisance, but I just did something, and I thought somebody might want to know.”… Henceforth to be known as the English pattern.

In the second example you can have many components responding to the message, each with a discrete narrow focus of purpose. It is open to easy extension by adding new components that will respond to the message, without modifying existing implementation or behaviour. It’s easy to swap out components for ones with different implementations, and the interface is small, discrete and generalised; binding is indirect and at runtime. Feature switching not just at compile time, but at runtime, is possible. Lastly, at this point in the code we don’t concern ourselves with what happens in the broader system, and require the least possible knowledge about the broader system.

We’re using exactly the same API call here, but this time our expectations are of a reactive response to this event. Here we have inverted the traditional flow of control. Both these calls will use exactly the same mechanic of resolution, but one expresses an inversion of control and one doesn’t.
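
For illustration only, here is a generic sketch (not the framework’s actual dispatch) of that reactive shape: several narrowly focused handlers subscribe to a message name, and the code raising the message neither knows nor cares which of them respond.

using System;
using System.Collections.Generic;

public class MessageBus
{
    private readonly Dictionary<string, List<Action<string>>> _handlers =
        new Dictionary<string, List<Action<string>>>();

    public void On(string message, Action<string> handler)
    {
        if (!_handlers.TryGetValue(message, out var handlers))
        {
            handlers = new List<Action<string>>();
            _handlers[message] = handlers;
        }
        handlers.Add(handler);
    }

    public void FireWith(string message, string parameter)
    {
        if (_handlers.TryGetValue(message, out var handlers))
        {
            foreach (var handler in handlers) handler(parameter);
        }
    }
}

// Extension is additive; new responses are registered without touching the caller:
// bus.On("user-unsubscribed", userId => mailingList.Remove(userId));
// bus.On("user-unsubscribed", userId => audit.Record("unsubscribed", userId));
// bus.FireWith("user-unsubscribed", "user-id");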

That’s how narrow the divide can be between flow of control going forward or backwards. The defining factor on either side of that divide is one of intent. By intent I mean an implicit accord on both sides with a common expectation of the meaning of a signal. The balance of the roles and responsibilities in this relationship are for you to decide. It’s about your intent. The mechanism of that signal is important, but not as important as what is being expressed and what is expected. There’s lots of different ways you can use a delegate, many of them will invert control, but simply using a delegate will not get you inversion of control.

It ain’t what you do, it’s the way that you do it, and parroting patterns will not ensure that intent is fulfilled. Understanding intent is key here, as if we understand intent, as long as we’re half-decent developers, our implementations will improve over time and we’ll get there. If we don’t understand the initial intent, our chances of hitting the target are much reduced and start involving luck.

The pioneers laying down these principles did not expect it to be possible for groups of humans to land on the perfect abstraction straight out the gate. They talk about taking flawed initial implementations and iteratively improving our architectural choices. So unless you happen to be some kind of architectural genius that gets their abstractions right first time, a strategy for change becomes a strategy for survival.

Javascript

I’m aware of where Javascript started with Netscape on the server, and I’m aware of where Javascript is today with NodeJS. Javascript came of age however in the browser. This meant that Javascript grew up in an environment where the programmer was interacting with a framework (the browser) via an exposed object model.

<div onclick="doSomething(this); /* do something with 'this' */">

That’s a good example of inversion of control. We’re registering a callback with an event the framework is going to raise for this element, with the execution of any callbacks managed by the framework. If we think of the alternative, we would need to modify the framework’s core functionality.

This naturally evolves to…

element.addEventListener("click", function(){ alert("Hello World!"); });

Javascript developers weren’t writing complete applications, they were integrating with a framework that forced them to accept IoC as the natural order of things. Modern Javascript frameworks reflect this heritage.

There’s any one of a rampaging horde of Javascript frameworks I could cite for example here, so don’t read too much in my choosing Twitter’s Flight to illustrate the point.

/* Component definition */
var Inbox = flight.component(inbox);
function inbox() {
  this.doSomething = function() { /* ... */ }
  this.doSomethingElse = function() { /* ... */ }
  // after initializing the component
  this.after('initialize', function() {
    this.on('click', this.doSomething);
    this.on('mouseover', this.doSomethingElse);
  });
}
/* Attach the component to a DOM node */
Inbox.attachTo('#inbox');

It’s not so much that Javascript has such laudable executions of IoC, it’s that the .NET development community has settled on such an anemic consensus on IoC.

And we’ve not mentioned Dependency Injection (DI) yet

Because although it’s pervasive, it’s possibly the least interesting aspect of IoC while remaining one of the more convenient.

In dependency injection, a dependent object or module is coupled to the object it needs at run time. http://en.wikipedia.org/wiki/Inversion_of_control

Coupled to the object it needs. There is coupling taking place with DI that needs to be managed, and if it’s via constructor injection it’s not necessarily very loose.

I’m looking at an MVC.NET controller with 15 objects injected into its constructor. Most of them are repositories. I was intending to count the members of each of those objects, but the first one had 32 public members and I stopped counting there.

How loosely coupled do you think I feel looking at this code? How discrete are the responsibilities being exercised here do you think?

These objects are all injected into the controller by an IoC container. There is a huge surface area being exposed to this component regardless of which particular operation it is performing, with the controller possessing 27 public members itself.

Just because you are using an IoC container and DI does not mean you are implementing IoC. It just means you’ve found a convenient way to manage instantiation of objects. In my experience this convenience in wiring up components in unthoughtful ways has done considerable harm, exhibited by the current swathe of MVC.NET + Entity Framework + Ninject web applications, all implemented quite cheerfully around SOLID principles.

Ralph Johnson and Brian Foote Sometimes it is hard to split a class into two parts because methods that should go in different classes access the same instance variable. This can happen because the instance variable is being treated as a global variable when it should be passed as a parameter between methods. Changing the methods to explicitly pass the parameter will make it easier to split the class later.

Your use of the constructor is not inconsequential. I personally aim as much as possible to inject at the constructor only such configuration data as is necessary for that class of component to operate, regardless of implementation. I want as much as possible to be able to swap out implementations without altering their config. Remember that’s what we’re trying to achieve here.

<object type="Conclave.Web.Behaviour.BootstrapBehaviour">
    <constructor-arg name="message" value="bootstrap" />
    <constructor-arg name="params">
        <dictionary key-type="string" value-type="string">
            <entry key="area" value="default" />
            <entry key="concern" value="default" />
            <entry key="action" value="default" />
            <entry key="app-path" value="/conclave.cms" />
        </dictionary>
    </constructor-arg>
</object>

We’re configuring behaviour here; regardless of the implementation, what we are expressing in this configuration remains the same because our intent is the same. Although we are not contractually obliged to, we understand the spirit of our intent and try to keep our constructors as honest a part of our interface as possible.

This is DI, but it’s very much light-weight and focuses on configuring the component for use.
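For illustration, the sort of constructor this configuration feeds might look something like the following. This is a minimal sketch only; the actual BootstrapBehaviour base class and signature in Conclave may differ, although the Naiad wiring later in this piece constructs it with the same two arguments.

using System.Collections.Generic;

// a sketch only: the real BootstrapBehaviour lives in Conclave.Web.Behaviour and
// will have a base class; the point is that only configuration data crosses the
// constructor, never services or repositories
public class BootstrapBehaviour {
    private readonly string _message;
    private readonly IDictionary<string, string> _params;

    public BootstrapBehaviour(string message, IDictionary<string, string> parameters) {
        _message = message;
        _params = parameters;
    }
}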

<object type="Conclave.Web.Behaviour.View.XslViewBehaviour, Conclave.Web">
    <constructor-arg name="message" value="xslt::view" />
    <constructor-arg name="contentType" value="text/xml" />
</object>
<object type="Conclave.Web.Behaviour.View.XslViewBehaviour, Conclave.Web">
    <constructor-arg name="message" value="xsl::view" />
    <constructor-arg name="contentType" value="text/html" />
</object>

If a component uses, rather than has, another service for its operation, that service is an implementation detail and is acquired by service location. In this particular framework we care a lot about being able to swap out components, and we ensure this intent is met.

In most cases I do not regard it as appropriate to inject something as fat and implementation specific as a repository into a behavioural component. Even though it may be DI, there are too many other principles in the balance that this violates.

The Dependency Inversion Principle (DIP)

The “D” in SOLID does not stand for Dependency Injection. It stands for the Dependency Inversion Principle, which is a subtly different thing, and one with a focus on implementing interface abstractions and consuming components through those abstractions.

The goal of the dependency inversion principle is to decouple application glue code from application logic. Reusing low-level components (application logic) becomes easier and maintainability is increased. This is facilitated by the separation of high-level components and low-level components into separate packages/libraries, where interfaces defining the behavior/services required by the high-level component are owned by, and exist within the high-level component’s package. The implementation of the high-level component’s interface by the low level component requires that the low-level component package depend upon the high-level component for compilation, thus inverting the conventional dependency relationship. Various patterns such as Plugin, Service Locator, or Dependency Injection are then employed to facilitate the run-time provisioning of the chosen low-level component implementation to the high-level component. http://en.wikipedia.org/wiki/Dependency_inversion_principle

In Dependency Inversion, the implementing class is dependent on an interface that is either owned by an intermediary that the high level component is also dependent upon, or the interface is owned by the high level component.

Strictly speaking if the interface isn’t owned by the high level component, the dependency has not been inverted.

In order to completely achieve dependency inversion, it is important to understand that the abstracted component, or the interface in this case, must be “owned” by the higher-level class. http://blog.appliedis.com/2013/12/10/lost-in-translation-dependency-inversion-principle-inversion-of-control-dependency-injection-service-locator/
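A minimal sketch of what that ownership looks like in code may help; the names here are purely illustrative and not taken from Inversion. The high level package declares the interface it needs, and the low level package references the high level package in order to implement it, inverting the conventional compile-time dependency.

// hypothetical names: the high level package owns the abstraction it consumes
namespace MyApp.Publishing {
    public interface IDocumentStore {
        void Save(string id, string content);
    }

    public class Publisher {
        private readonly IDocumentStore _store; // depends only on its own interface
        public Publisher(IDocumentStore store) { _store = store; }
        public void Publish(string id, string content) { _store.Save(id, content); }
    }
}

// the low level package references MyApp.Publishing to implement the interface,
// so the compile-time dependency points from low level to high level
namespace MyApp.Storage.Sql {
    public class SqlDocumentStore : MyApp.Publishing.IDocumentStore {
        public void Save(string id, string content) { /* write to the database */ }
    }
}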

In Inversion and Conclave you’ll see occasionally a comment along the lines of // we need to own this interface. You’ll also see several BCL components being wrapped such as request and response objects. One of the goals of Inversion is to be easily portable to other platforms, and so it is important to control what interfaces the framework exposes.
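To give a feel for what that wrapping looks like, here is a rough sketch; the member names are illustrative rather than Inversion’s actual request interface.

// illustrative only: a narrow, framework-owned view of a request so that
// application code never couples to System.Web types directly
public interface IRequest {
    string Path { get; }
    string Param(string key);
}

// an adapter kept at the edge of the framework; porting to another platform
// means writing another adapter, not changing application code
public class AspNetRequest : IRequest {
    private readonly System.Web.HttpRequest _underlying;
    public AspNetRequest(System.Web.HttpRequest underlying) { _underlying = underlying; }
    public string Path { get { return _underlying.Path; } }
    public string Param(string key) { return _underlying.QueryString[key]; }
}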

We don’t notice this for the most part in everyday development as a lot of our interface abstractions are picked up by the .NET base class library. If I have a low level component implementing IList and a high level component consuming it via IList, we take the stewardship of the interface by the BCL as good enough, and quite reasonably don’t get too pedantic over the fact this isn’t DIP because the high level component doesn’t own the interface. A stable and neutral third party is often anointed by the high level component. That example is a little contrived for simplicity, as lists are not the kind of components we would normally engage this level of concern over, but more valid examples are to be found in the System.Data namespace.

This principle can quickly get quite fiddly in practice, so it’s often pragmatically summarised as “don’t use concretes”, which gives 80% of its goodness, but not all.

Consider the use of the Newtonsoft.Json package. It’s such a brilliant package that it’s used extensively. When a high level component couples with these interfaces it becomes dependent on them in the traditional way. You don’t control those interfaces, Newtonsoft do. In most cases use of such foreign interfaces should be an implementation detail that is not exposed to the broader framework.

But there’s a way to dodge most of the issues with DIP entirely, and that is to not use lower level components directly. Instead model the interactions the high level component needs to have with the low level components, treat them as pluggable black-boxes, and only interact with them via an intermediary interface with the framework responsible for resolving that interaction. Messaging is a good example of this, as were the two snippets of code earlier in this piece.
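As a rough sketch of that shape (the types here are hypothetical, not Inversion’s API), the high level component only ever talks to an intermediary interface; which low level component answers, and with which library, is resolved elsewhere by the framework.

using System.Collections.Generic;

// hypothetical intermediary: the high level component models the interaction it
// needs and treats whatever fulfils it as a pluggable black box
public interface IMessageBus {
    void Send(string topic, IDictionary<string, string> payload);
}

public class PublishingComponent {
    private readonly IMessageBus _bus;
    public PublishingComponent(IMessageBus bus) { _bus = bus; }

    public void Publish(string documentId) {
        // what happens "over there", and with which PDF or JSON library, is not our business
        _bus.Send("document::render", new Dictionary<string, string> { { "id", documentId } });
    }
}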

Fashion

When a room full of smart people decide to turn MVC.NET + Entity Framework + Ninject into the exact opposite of what IoC is trying to achieve, which is to say a rat’s nest of dependencies, leaking knowledge all over the place, with components coupling like a Roman orgy, we have to ask ourselves how and why?

The best answer I can come up with is fashion.

That’s not to be dismissive or derisory. To be so would only compound the issue. It is to acknowledge that we can see and accept the role of fashion in almost every human endeavour, and to suggest that we may need to consider how it impacts our technical choices.

We all have a very real need to feel the support and approval of our peers. It’s not a mild passing influence, it’s wired deep into us as a survival strategy as an animal. As we occupy the more geekish end of the spectrum we tend to seek our approval through demonstrating what we know. Moreover it’s the means by which we earn our living. Saying “I don’t know” does not necessarily come easily to us.

DIP causes me problems. I disagree in part with some of its intent. I don’t want my low level components knowing anything about my higher level components. I have no intention of doing that. That bit of DIP looks like nonsense to me.

When I make that assertion I know a couple of things. The Dependency inversion principle was cooked up by some very smart and well studied people. Given that, there is the distinct possibility that I am missing something. The confusion I feel when considering some aspects of DIP further lends weight to this. If I’m sat in a room of my peers, I risk looking foolish if I express my confusion.

Now imagine I’m a team lead or architect. I’m getting paid to know what I’m doing, and my ability to lead and instruct my team is dependent on their respect for my technical ability. I am making myself vulnerable when I admit to my team that I am experiencing confusion with some methods of application of an architectural principle that the whole .NET community seems to have accepted as self-evidently virtuous. It might be easier to just pretend I know, and then, to cover my inadequacy, coach my team in my version and understanding of DIP as the real thing.

This is how fashionable decisions are made. When the goal becomes to be seen by our peers as “good developers” we are engaged in a social exercise and the technical merits of our choices become secondary.

In every case where I have observed this happening it is either an absence of a team lead, or a failure on the part of the team lead to establish the safety for simple human honesty. Further, a failure to acknowledge that despite best intentions we are very fallible, and intellectual honesty needs to be motivated. The technical impact of this is that we end up wearing principles like IoC as a fashion accessory, with very little honest endeavour given to their underlying intent.

IoC is great. On the .NET platform it’s a particularly interesting time with so much growth in reactive features on the platform, and the TPL. IoC as it exists currently in commercial web application development on the .NET platform has more to do, I would suggest, with fashion than anything of substance.

Naiad, a toy service container.

In the previous piece Service locator vs dependency injection I had declared, “Service location is, and is provided via an interface on the context that can be implemented inside 10 minutes as a dictionary of lambdas if you had a pressing need.” Which risks being a throw-away comment consisting largely of hot air. So I thought I’d knock one up, the guts of which is…

// the lock guarding the registry; declared here so the snippet stands alone
private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
private readonly ConcurrentDictionary<string, object> _ctors = new ConcurrentDictionary<string, object>();

public void RegisterService<T>(string name, Func<IServiceContainer, T> ctor) {
    _lock.EnterWriteLock();
    try {
        _ctors[name] = ctor;
    } finally {
        _lock.ExitWriteLock();
    }
}

public T GetService<T>(string name) {
    _lock.EnterReadLock();
    try {
        Func<IServiceContainer, T> ctor = _ctors[name] as Func<IServiceContainer, T>;
        return ctor(this);
    } finally {
        _lock.ExitReadLock();
    }
}

It really is just a dictionary of lambdas, and wires up thus…

Naiad.ServiceContainer.Instance.RegisterService("request-behaviours",
    container => {
        return new List<IProcessBehaviour> {
            new SimpleSequenceBehaviour("process-request", container.GetService<List<string>>("life-cycle")),
            new BootstrapBehaviour("bootstrap",
                new Dictionary<string, string> {
                    {"area", "default"},
                    {"concern", "default"},
                    {"action", "default"},
                    {"app-path", "/web.harness"}
                }
            ),
            new ParseRequestBehaviour("parse-request", "Inversion.Web.Harness.Site"),
            new ViewStateBehaviour("view-state"),
            new ProcessViewsBehaviour("process-views"),
            new RenderBehaviour("render"),
            new RazorViewBehaviour("rzr::view"),
            new XmlViewBehaviour("xml::view", "text/xml"),
            new JsonViewBehaviour("json::view", "text/json"),
            new XsltViewBehaviour("xslt::view", "text/xml"),
            new XsltViewBehaviour("xsl::view", "text/html"),
            new HelloWorldBehaviour("work") {
                MatchingAllParameters = new Dictionary<string, string> {
                    {"action", "hello"}
                }
            }
        };
    }
);

It isn’t much of any use except as a base-line “simplest possible thing that works” to measure more industrial-strength implementations against. Just prodding it with Apache Bench for a ballpark, it’s a whisker faster than Spring.NET, which, considering the facilities Spring offers, leaves me quite impressed with Spring.

There’s a lot of value in returning to the simplest implementation that works as it’s easy to lose track of the cost of components that become an integral part of our applications.

So there’s no misunderstanding, this is a toy. But sometimes all you need is a toy. Inversion started life as a toy, the initial prototype being a predicate/action dictionary, where the predicate acted on an event and the action acted upon a context. In scripting languages knocking prototypes up with the general purpose data structures lying around such as lists and dictionaries is very normal, and we could maybe do with it becoming more of a norm in .NET before we jump off into the deep-end with grand object-models.

As I’m proofing this I can see I need to move the exit from the read lock to after looking up the constructor but before executing it… as I say, it’s a toy.
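For what it’s worth, a sketch of that fix is simple enough: look the constructor up under the read lock, release the lock, and only then execute it (which also avoids holding the lock if the constructor resolves further services).

public T GetService<T>(string name) {
    Func<IServiceContainer, T> ctor;
    _lock.EnterReadLock();
    try {
        ctor = _ctors[name] as Func<IServiceContainer, T>;
    } finally {
        // release before running the constructor, which may itself call back into the container
        _lock.ExitReadLock();
    }
    return ctor(this);
}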

Service locator vs dependency injection.

Or, “Who would win the fight between a submarine and a tank?”

I much enjoyed reading a piece on service location vs dependency injection which chimed with some of my own thoughts over the years.

The article starts with a quote by Martin Fowler, the brilliant man whose brilliant work has given rise to so many cargo-cult practices in the .NET development community. I say “cargo-cult” as I’m implying unreasoned and absolute application of a single principle out of context, to the exclusion of any nuance. It’s worth reading Fowler’s piece as it’s a very balanced take on the subject and not absolutist.

Martin Fowler The choice between Service Locator and Dependency Injection is less important than the principle of separating service configuration from the use of services within an application.

Architecturally Inversion expresses service location, and it avoids any explicit use of dependency injection (DI) while at the same time assuming considerable use of DI by application developers. Given this I thought some brief words on “why” might be useful, while adding my voice of concern about the overuse of DI.

Inversion favours the use of Spring as its IoC container, and XML configuration. I’ve long intended to try out Autofac as it too apparently has good XML config support. As long as it has good XML config support and performs reasonably well I really don’t care which container I use, because for me the primary requirement is a config notation so I can decouple my config from service use and binary deploys, and so that I can easily manage configuration for an application in different instances.

This core issue seems to get thrown out with the bath-water in nearly all DI-using solutions I have seen in the wild. Why? Because we had a bunch of people write that service locators are an anti-pattern. A lot of people. If it passed by your notice, Google “service locator anti pattern”, pick a couple of pieces at random and read for 15 minutes.

Most of the core arguments regarding service location as an anti-pattern stress the pitfall of runtime rather than compile-time errors caused by failure to fulfil dependency requirements. This is compounded by the dependency graph being buried in implementation code. These are valid concerns, but applying the counter as a blanket absolute, I feel, leads developers into more pitfalls than it avoids.

The emphasis on compile-time errors in this argument leads the developer to favour statically-compiled container configuration, and in most cases the fluent interfaces that are emphasised by modern IoC containers. Without exception, in every case I’ve observed, this leads to Martin Fowler’s chief concern getting thrown out with the bathwater.

separating service configuration from the use of services within an application

There are other more insidious issues introduced with the assumption of pervasive DI use.

Abusing constructors rather than abstracting

At a very simple level, most examples of DI vs service location assume constructor injection. This is for the valid reason of ensuring the object is instantiated with all its dependencies fulfilled, and this is the fig leaf we use to explain the approach. The truth is there’s a little anti-pattern buried in here itself.

Dependencies will often vary for different implementations, so what we need to inject varies. The constructor is effectively a big gaping hole in a type’s interface contract. We can run anything we want through there, and it can vary between implementations. So rather than abstract our dependencies we just throw them through the constructor. This is not a virtue.

In the world of .NET Blog Driven Development combined with MVC.NET and Entity Framework, this leads over the course of years almost inexorably to the magic tarball of a dependency graph, with all the things touching all the things and the constructor being the means by which we communicate relationships between objects.

Assumptions about state

This abuse of constructors as a hole through our interfaces leads us to another problem.

It makes a huge assumption about the state of my type, and will almost compel inexperienced developers to inflict state upon types that don’t need it. Without thought we turn a uses-a relationship into a has-a relationship, ensuring we can’t use singletons where appropriate, and steering ourselves away from a swathe of compositional patterns.

This is a big deal for performance in web applications, and almost ensures that while we model relationships between data entities, we don’t model behavioural relationships between objects or pay much attention to how objects use each other.

Writing assemblies to be consumed by others

The flaming strawman of a horror story that the anti-pattern notion is built on is the story of shipping an assembly to a third party that uses a service locator, with a dependency that isn’t fulfilled in the config, causing a runtime error that isn’t easy for the consumer to resolve as the dependency is expressed in configuration code.

I call this a strawman as using a service locator in this way for a shipping lib is a complete non-starter. The concern is applicable for any low-level or foundation assembly (as most of us are not shipping libs).

Conclave.Map and related assemblies have no notion of a service container or locator. It’s part of a data-access layer, and service location is none of its business. Nobody in their right mind is going to suggest injecting a service locator into something that isn’t participating in a component model. It may have a database connection however.

In WinForms a service container is threaded through all the components, because they are participating in a component model. The IO namespaces aren’t because they’re not participating in a component model.

Yes, there are a whole bunch of concerns that should not be addressing service location. There’s a whole bunch of types that shouldn’t have access to the application config at all, that should be agnostic to their environment. Your data access layer probably shouldn’t know anything about HTML or CSS… but that does not make HTML and CSS anti-patterns. It is simply to recognise that as professionals we make judgments about how we partition concerns within our application, and that, mindful of principles like the Law of Demeter, we understand we need to manage carefully the coupling between types.

If however a type’s responsibility is coordinating between services, and providing application integration with services, then service location is a perfectly reasonable concern, and trying to pretend otherwise because somebody called it an anti-pattern will bend your application out of shape.

Patterns are not universally applicable articles of faith

Patterns are not catechisms, and they do not direct a moral imperative. Patterns offer solutions to common problems and bring with them their own consequence that will vary between scenarios of application.

Consider message queues. Not unlike service locators they introduce a fire-break of an interface decoupling, taking a lot of stuff that used to happen here and, by whatever means, making it happen over there. Quite where or how often isn’t the business of the application developer looking at one end of it.

Should we wire in a service locator into a low level PDF library that is not participating in a component model? Probably not, for all the same reasons we probably shouldn’t wire in a message queue.

Is this to say then that message queues are an anti-pattern? No, it’s to say you’re a muppet if you wire a domestic power cable from the wall outlet into your wrist-watch to power it. Not because domestic power cables and wall outlets are bad or antithetical, but because if you insist on wiring in power cables in inappropriate ways, you’re going to get an electric shock and will probably render your watch inoperable.

Take 3 Java developers and 3 .NET developers to an imaginary bar in our heads. They’re going to write down an exhaustive list of all the ways in which it is appropriate or inappropriate to use a message queue. Once the Java and .NET devs are done introduce 3 Erlang developers, and there’s going to be a bar fight. This is because an Erlang developer is going to have a completely different architectural take on where it is appropriate to use messaging.

This might seem a bit of a contrived example unless you are a .NET developer using Rx.NET or DataFlow in anger, in which case your notions of inter-object communication are probably drifting slowly toward the Erlang chaps and you might surprise your peers by joining the Erlang devs in the ensuing ruck. Further shocking the Java devs when one of their own screams “Scala!” and turns on them… Now throw in 3 Haskell devs and all bets are off. They’re likely to label your whole type-system an anti-pattern… When we look under the table we find a Rails dev rocking themselves, whimpering “I just want to build awesome websites”.

As a .NET dev I may favour compile time errors over runtime errors more than say a Python or Ruby developer, but if I am creating a component model that composes at runtime, and I try and eliminate runtime errors as a blanket architectural rule, then I am likely to bend my architecture out of shape.

Using a process context for service location

So how do Inversion and Conclave approach this? Hopefully with a sense of balance, and an awareness of when the focus is service location and when the focus is dependency injection, with a cut between the two at the appropriate layer for the application to separate its concerns.

Inversion centres around a process context in much the same way that an ASP.NET application will centre around an HttpContext. This context is used to manage state for a running process and to mediate with actors and resources external to the application. The process context is also responsible for mediating between units of application and business logic, coordinating their activity.

The context has-a service container, which is injected in its constructor. This holds for all process context implementations. If I could specify the constructor on the interface I would (I might take a closer look at the MS design-by-contract library for .NET).

public ProcessContext(IServiceContainer services) {
    _serviceContainer = services;
    // snip
}

public IServiceContainer Services {
    get { return _serviceContainer; }
}

Which is completely unremarkable. Slightly more controversial is the interface for IServiceContainer.

public interface IServiceContainer : IDisposable {
    T GetService<T>(string name);
    bool ContainsService(string name);
}

This is perhaps slightly controversial as it’s getting services by name rather than by type. This is because at this level the concern is service location via a generalised component interface. If the service container being used supports DI (and it will), injection is a configuration-level concern. The component isn’t going to inflict its dependency upon the application architecture.

public override void Action(IEvent ev, ProcessContext context) {
    if (ev.HasRequiredParams("id")) {
        using (ITopicStore store = context.Services.GetService<ITopicStore>("store::topic-map")) {
            store.Start();
            Topic topic = store.GetTopic(ev["id"]);
            context.ControlState["topic"] = topic;
            ev.Object = topic;
            if (topic == Topic.Blank) {
                context.Errors.CreateMessage(...);
            }
        }
    }
}

So here we have the action of an IProcessBehaviour. It uses the same interface as all process behaviours, it’s not a special little snowflake, and plugs into the architecture the same as every other component.

Crucially… this behaviour uses a context which has a service locator, from which the behaviour obtains a topic store.

The behaviour, and all the other behaviours like it, have naff all. The process context has everything. Any immutable config for the behaviour is injected by the service container from which the behaviour is obtained, and is a config-level concern that remains the business of the behaviour’s author and for them to worry about. DI in this way is not the business, nor the concern, of the framework. Service location is, and is provided via an interface on the context that can be implemented inside 10 minutes as a dictionary of lambdas if you had a pressing need.

Service location and dependency injection are different things

Obtaining at runtime, from a database, a manifest of service component names that conform to a generalised interface, obtaining those components from the service container by name, and then executing them is the concern of a service locator, not DI. It’s not about one being better than the other, it’s about them being concerned with different things. Service location has an architectural impact on patterns of application composition. DI has an impact on configuring object instantiation.
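To make that concrete, a hedged sketch of the sort of thing being described; LoadBehaviourNamesFromStore is a hypothetical helper, while the GetService call is the IServiceContainer interface shown earlier and context is the process context from the previous snippets.

// the list of behaviour names is data, discovered at runtime, not compiled in
IEnumerable<string> manifest = LoadBehaviourNamesFromStore(); // hypothetical helper

var behaviours = new List<IProcessBehaviour>();
foreach (string name in manifest) {
    // resolving by name against a generalised interface is service location;
    // how each behaviour was constructed (DI or otherwise) is the container's business
    behaviours.Add(context.Services.GetService<IProcessBehaviour>(name));
}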

The reason the two streams get crossed is that every DI offering that I have come across is built upon and predicated on a service locator. DI is one pattern that can be implemented with a service locator. So in almost every case you’re going to come across the two things in the same place, called a “service container”. Use of service location will naturally co-mingle with DI, because reasoned use of DI is a wonderful thing, and shields our application from a lot of instantiation details, keeping them firmly ring-fenced as config.

To suggest that service location is an anti-pattern and DI is the one pattern (built upon service location) for all the things, is cargo-cultish.

Inversion and Conclave express service location and assume you will use whatever DI takes your fancy. What service locator and DI you choose to use is not my concern and should not impact the architecture.

Looking-up stuff

We as developers out of necessity seek guiding principles to inform our daily work. This isn’t exclusive to IT, we do it in all aspects of life. “A stitch in time saves nine” is a truism that we may all find ourselves nodding to as it’s a useful sentiment. As is “measure twice, cut once” and “more speed, less haste”, despite there being subtle tensions between such truisms. They are useful principles. Their application requires wisdom and judgment. They are useful models, they are not innate laws of the cosmos… The map is not the terrain.

The assertion that service location is an anti-pattern masks consideration and balance of an underlying concern which I shall grandly entitle “looking-up stuff”. The issue isn’t one of service locators, database connections, sockets or access to the file-system. The issue is whether an operation should be looking up information external to itself, or whether it should be acting on only the information passed to it. Related to this, but beyond the scope of this piece is whether an operation should be yielding side-effects, and if it should, how they are managed.

There isn’t a simple answer to this concern because what is appropriate is contextual and determined by what the component’s role is within the broader system. Should my component pull information from an outside source, or should it be given that information? Should my parser be a pull or push parser? Whatever you decide is appropriate, it is probably silly to call pull-parsing an anti-pattern when your push-parser has probably been built on top of one, despite the fact that in most cases you should probably be using a push-parser.

There is no universally applicable principle that will ensure we wear the mantle of “good developer”. There is no abdicating responsibility for the decisions we need to make, not just as programmers, but as systems analysts, even if you call yourself a developer. I become concerned when blanket truths replace consideration of context.

Service location is not an anti-pattern. There are anti-patterns that involve use of a service locator along with other similar constructs. There are anti-patterns that involve the use of DI. Most devices we use in programming involve both (virtuous) patterns and anti-patterns, which is really just a grand way of saying pros and cons. Generally speaking, people who summarise the world in terms of only pros or only cons are said to be engaging in splitting.

Splitting (also called black and white thinking or all-or-nothing thinking) is the failure in a person’s thinking to bring together both positive and negative qualities of the self and others into a cohesive, realistic whole. It is a common defense mechanism used by many people. The individual tends to think in extremes (i.e., an individual’s actions and motivations are all good or all bad with no middle ground.)

I need to take a look about and see what discussions there may be on the subject of polarised views and whether they are more prevalent among programmers than other professions.

Introducing Inversion.

Conclave originally began life around 2004 as a .NET CMS built around topicmaps, and influenced heavily by the WikiWikiWeb. It was a lot of fun but a personal side project, and was a little slow and clunky.

The next incarnation in 2006 was Acumen, a .NET MVC web-application framework and CMS built with a team in Spain. Multi-tenant, multi-lingual and driving a couple of dozen public facing and internal extranet applications, Acumen was so much fun to develop and an incredible learning experience.

More recently in 2011 I began working on a behaviour oriented framework the purpose of which was to replace MVC within Conclave, so that feature-set just got rolled into Conclave. This left Conclave very schizophrenic and almost impossible to explain to any uninvolved person. Conclave simply seemed to be too many things.

So. The behavioural composition malarkey has been taken out of Conclave and is now Inversion. Conclave.CMS and Conclave.Funder will then simply be applications that use Inversion rather than being joined at the hip. This it is hoped will help keep our separation of concerns a little more honest.

Over the course of the Winter I’ll write some more about Inversion and its design goals.

First post!

So this is always the intimidating part of a development blog… the first post. There are few things more pity-worthy than a blog with “Hello World” followed by “First post!”, but you’ve got to start somewhere.

I haven’t kept a development blog since 2002, and although I was reasonably busy with blogging, it obviously never made me famous. I used to use my own CMS for blogging, and because I work largely with content management I began to feel that if a blog wasn’t using my own software I was somehow a fraud. I’ve written my fair share of blogging features for company platforms, but like the anecdotal builder who just never gets around to their own home improvements because they’re busy working on other people’s homes, so went my blogging.

I’d long gotten into the habit of keeping development notes in markdown in the form of git repo readme files; used in conjunction with a hacked-together document generator that transforms .NET XML API docs into Markdown, it makes for not utterly embarrassing technical documentation. This isn’t very accessible for anybody other than devs working with the repo, quite obviously.

So I decided that with GitHub Pages there really is no reason not to have a blog for ongoing development notes, especially when one takes a look at the wide range of quality markdown-based static site generators which work really well with git deployments. I wanted a Node based platform as a lot of the Ruby ones use Bundler and I’ve had problems with Bundler on this Windows workstation. So after a lot of browsing of different generators, I settled on Hexo. It’s easy to hack, has reasonable themes as a starting point and seems to be well supported. It may not be my final resting place, I might even finally get around to a static site generator for Conclave, but markdown is easy to migrate so there’s no reason not to get started with this now.