
Configuring behaviour in Inversion: Part 2

Previous: Configuring behaviour in Inversion

In the last article I talked about the how and why of implementing behaviour configuration in Inversion. When I reviewed the work I concluded that it was a qualified success, with some work remaining before the matter could be put to bed entirely.

With the original implementation there was a lot of pressure to inherit from various classes in order to inherit some of their configuration features. This caused a lot of strain on inheritance.

With the move to a common data-structure for configuration we took away pressure from inheriting to gain varying configuration properties.

With the move to predicates as methods extending IConfiguredBehaviour we took pressure away from having to inherit from a particular class in order to pick up its condition predicates.

What we didn’t escape was the need to actually use these predicates in a condition, therefore making it desirable to inherit from some classes in order to obtain the checks they perform in their condition.

So this is really a 2 out of 3 in this regard. We have relieved pressure from inheritance in quite a marked way, but there remains an impediment that will require more thought and work.

The basic mechanism for addressing this wasn't really the issue; the uncertainty was where such a mechanism should reside.

The issue isn't implementing the lookup of predicate strategies; it can be as simple as a dictionary of lambdas. The cause for concern is where to define this, and where to inject it. Which object should be responsible for maintaining this lookup? It probably fits well enough on the context, but it would require the context to hold implementation details of behaviours, and I want to think about that some.

This follow-up article will talk about how progress was made with this remaining area, the extension of selection strategies for behaviours, with a focus on "open for extension but closed for modification".

Selection criteria

One of the concepts that was firming up was the idea of selection criteria: a predicate acting upon a configuration and event to determine if a behaviour's condition was a match. Last time these were implemented as extension methods for IConfiguredBehaviour, which were nice in that it was easy to add new selection criteria without having to change anything. The problem remaining with them was that conditions still needed to know about and use them. The uses-a relationship between behaviours and their selection criteria was not open for easy extension. The use of selection criteria was "hard coded", and required use of inheritance to override, which is something we were trying to avoid as we prefer "composition over inheritance for application behaviour".

By the end of the last piece we had a reasonably firm idea that we wanted to inject selection criteria into behaviours as strategies to be used by conditions, without the conditions knowing about the strategies other than their general shape and how to use them. The details or purpose of a strategy are not important to a behaviour, which is just concerned with whether its selection criteria pass or fail.

So the first order of business was to make selection criteria a thing:-

public delegate bool SelectionCriteria(IConfiguration config, IEvent ev);

A function that acts upon an IConfiguration and IEvent, and returns a bool. This allows us to move our use of extension methods to lambda expressions, which are easy to store and inject:-

(config, ev) => ev.HasParams(config.GetNames("event", "has"))

If a behaviour as part of its configuration were injected with a set of these SelectionCriteria, then during its condition check the behaviour could simply check that each of these criteria returns true. We would effectively be able to inject a behaviour's condition implementation.

That bit was easy… But how do we decide which of these SelectionCriteria to inject into a behaviour?

Stuff what selects stuff what selects stuff

Then I fell off a conceptual cliff, largely due to semantics, and a brief period spent chasing my own tail.

How to decide what stuff to inject?.. I spent most of a morning trying to formalise an expression of "stuff what selects stuff what selects stuff" that didn't make me sound like a cretin. I'd walk into my garden and sit, think of a compositional pattern, run to my studio, and find I'd laid down a bunch of things that all sounded the same, the distinctions between which seemed very arbitrary.

The darkest 15 minutes of that morning was the brief period when I considered using behaviours to configure behaviours, and started seeing behaviours all the way down.

The reason for my anxiety was that I was becoming convinced I was starting to commit a cardinal sin of application architects: the sin of the Golden Hammer.

The concept known as the law of the instrument, Maslow's hammer, gavel or a golden hammer is an over-reliance on a familiar tool; as Abraham Maslow said in 1966, "I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail."

The pull of the Golden Hammer for the architect is almost inexorable, as the core concern of the architect is to look for common patterns of structure and behaviour, to move from a diverging variety of abstractions to converging use of abstractions. When you get hold of an implementation of a pattern that is producing good results for you, it is very hard to avoid seeing that pattern everywhere.

It's also one of the primary mechanisms by which we turn our architectural cathedrals into slag heaps. It's destructive because it represents the building of an increasingly strong bias about the applicability of an abstraction, which leads to poor judgment and the inappropriate application of abstractions. I call it a sin because it's seductive, difficult to avoid, always recurring, and has bad consequences in the long term while feeling good in the short term.

I knew I was seeing the modelling of condition/action pairs everywhere, that this was part of a protracted phase I was going through, and that I was vulnerable to the hubris of the Golden Hammer.

I also knew that some patterns are foundational and do have broad applicability. I don't find the promiscuous use of key/value pairs or IEnumerable<T> an anxiety-provoking use of a Golden Hammer, and condition/action is as foundational as an if statement.

The rest of the morning was spent giving a performance of Gollum (from Lord of the Rings) as an application architect having an argument with himself about the semantics of stuff and select while anxious about getting hit by a hammer.

An optional extension of the existing framework

I broke out of this neurotic circular argument with myself by deciding that I would implement the abstraction of stuff what selects stuff what selects stuff as a straight-up extension of the existing framework, without altering any of the existing types or their implementations. If I could do this, then should the abstraction or its implementation prove ill-conceived (as it felt it might be), it could remain an odd appendix of an experiment, removable at some point without any negative impact on the broader framework… If the extension sucked it simply wouldn't get used… And I wouldn't write about it.

It’s worth drawing attention to this benefit of implementing features as extensions.

When we talk about extensibility being good, and consider things like open for extension but closed for modification, we tend to view it from the angle of this concern making the writing of extensions easier. The benefit that doesn't get considered perhaps quite as much is that this approach of extending what is, without modifying it, is also a strategy for mitigating risk. It makes it easier to move away from such extensions if they're poorly conceived, with reduced consequence to the rest of the framework.

This is one of the goals of Inversion: development by extension, with an ability to evolve and move poorly conceived abstractions toward increasingly better abstractions. The ability to experiment, which is to say try out different approaches, needs to be facilitated or our systems can't evolve, and we will never get past either cycles of system rewrites or legacies of poor judgment which we can't escape. Extensibility in this way is a strategy for easing the paying down of technical debt in the future, or lowering the interest rates on technical debt if you like.

Say what you see

So the worst case scenario was an odd bit of code that Guy wrote one day that Adam laughed at. There wasn’t a risk of reverting anything, and my anxiety was removed, making clear quite a short and easy path to a solution.

Once I decided I was losing the war on semantics and came to terms with my caveman-like expression of the problem, it was easy to start breaking it down.

stuff that selects stuff that selects stuff

I know how primitive that is, but it’s what I had… We’re going to look at a configuration, and on the basis of what we see there, we’re going to pick a bunch of selection criteria that a behaviour will use in its condition.

We have the last bit, the SelectionCriteria. The first bit is a match that can be expressed as a predicate acting upon an IConfiguration.

// stuff what selects, stuff what selects stuff
(Predicate<IConfiguration> match, SelectionCriteria criteria)

This concern pivots around a behaviour's configuration, with selection criteria being picked on the basis of the configuration's characteristics. So if for example a behaviour configuration contains the tuple ("event", "has"), the predicate that matches this would be associated with the SelectionCriteria to act on this as part of the behaviour's condition.

match: (config) => config.Has("event", "has"),
criteria: (config, ev) => ev.HasParams(config.GetNames("event", "has"))

Struggling with semantics as I was, I decided to simply call this association of two predicates a case.

public interface IPrototypeCase {
	Predicate<IConfiguration> Match { get; }
	SelectionCriteria Criteria { get; }
}
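
Case itself is little more than a holder for these two members. A minimal sketch consistent with the usage below, though not necessarily the framework's actual implementation, would be:-

public class Case : IPrototypeCase {

	private readonly Predicate<IConfiguration> _match;
	private readonly SelectionCriteria _criteria;

	public Predicate<IConfiguration> Match { get { return _match; } }
	public SelectionCriteria Criteria { get { return _criteria; } }

	// capture the two delegates that make up the case
	public Case(Predicate<IConfiguration> match, SelectionCriteria criteria) {
		_match = match;
		_criteria = criteria;
	}
}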

This picking of selection criteria consults only the configuration, and given that the behaviour configuration is immutable, this picking can take place when the configuration is instantiated, and would only need to expose the selection criteria that had been picked. This was done by extending IConfiguration thus:-

public interface IPrototype : IConfiguration {
	IEnumerable<SelectionCriteria> Criteria { get; }
}

Similarly constrained in terms of semantic inspiration, this extension of the behaviour's configuration was called a prototype. I was thinking in terms of prototype-based programming, with which I'd had some success in the past with classification, inheritance, and overriding of relational data, and was thinking of a behaviour's configuration tuples with associated functions as prototypes. Not the best example of prototypes, but vaguely in the ballpark; I needed to call it something and had lost patience with my own semantic angst. I was ready to call this thing "Nigel" if it allowed me to move on, and Prototype kind of fit.

A prototype is a configuration that expresses selection criteria that have been chosen for that configuration.

public static readonly ConcurrentDictionary<string, IPrototypeCase> NamedCases = new ConcurrentDictionary<string, IPrototypeCase>();

private readonly ImmutableHashSet<SelectionCriteria> _criteria;

public Prototype(
	IEnumerable<IConfigurationElement> config,
	IEnumerable<IPrototypeCase> cases
) : base(config) {
	var builder = ImmutableHashSet.CreateBuilder<SelectionCriteria>();
	foreach (IPrototypeCase @case in cases) {
		if (@case.Match(this)) builder.Add(@case.Criteria);
	}
	_criteria = builder.ToImmutable();
}

This allows us to establish a base set of selection criteria out of the box that is easy for application developers to override, as seen in Prototype thus:-

NamedCases["event-has"] = new Case(
	match: (config) => config.Has("event", "has"),
	criteria: (config, ev) => ev.HasParams(config.GetNames("event", "has"))
);
NamedCases["event-match"] = new Case(
	match: (config) => config.Has("event", "match"),
	criteria: (config, ev) => ev.HasParamValues(config.GetMap("event", "match"))
);
NamedCases["context-has"] = new Case(
	match: (config) => config.Has("context", "has"),
	criteria: (config, ev) => ev.Context.HasParams(config.GetNames("context", "has"))
);
NamedCases["context-match"] = new Case(
	match: (config) => config.Has("context", "match"),
	criteria: (config, ev) => ev.Context.HasParamValues(config.GetMap("context", "match"))
);
// and so on

We can then see this being used in PrototypedBehaviour:-

public override bool Condition(IEvent ev, IProcessContext context) {
	return base.Condition(ev, context) &&
		this.Prototype.Criteria.All(criteria => criteria(this.Configuration, ev));
}

This now forms a solid base class that is open for extension. We have relieved the pressure of having to inherit from a particular class in order to inherit its selection criteria, which are now picked out during the behaviour's instantiation, based upon the shape of the behaviour's configuration. This extension is implemented as an extension of the behaviour's configuration, which is the focus of its concern and action.

The added benefit is that because only applicable selection criteria are picked for a behaviour, we're never running redundant selection criteria as part of a condition. This in turn means we can grow our implementations of selection criteria without concern about a performance impact from redundant checks. Because behaviours are singletons, this selection process takes place just once for each behaviour, so it scales nicely as the surface area of our selection criteria increases over time.

Another way of thinking of this injection of strategies is to compose or “mixin” at run-time applicable implementation details based upon configuration.

A side benefit of this work, apart from making it easier to extend behaviours without having to introduce new types, is that we picked up an extra 5% to 10% performance with the loss of redundant selection criteria.

The abuse of static members and future work

The maintenance of NamedCases as a static member of Prototype is a bad thing. Initialising the default cases from the Prototype static constructor is a doubly bad thing. Lastly, this is mutable data being maintained as a static member, so I’m going straight to hell for sure.

It’s not because “global state is bad”, because it’s not. The notion that global state is bad requires ignoring the use of a database, file-system, configuration, service container, or getting the time from the system. The maintenance of non-global state globally is bad, and I’m not sure to what degree it can be said that these default cases are global.

In maintaining the cases like this I'm needlessly tying the default implementation of selection criteria to the Prototype class, and I wonder if it should be associated with the behaviour's type. I'm not sure yet.

The strongest case for not maintaining the named cases as a static is that we don't need to.

Behaviours are used as singletons, so these cases can sit as instance members of either the prototype of a behaviour or the behaviour itself, but I'm not entirely sure where I want to place this concern yet, and at the moment I'm trying to impact prior work as little as possible.

The cases are injected via this constructor:-

public Prototype(
	IEnumerable<IConfigurationElement> config,
	IEnumerable<IPrototypeCase> cases
)

So I can easily kill the static members and inject the prototype from the behaviour's constructor.
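
Something along the following lines would do it, assuming a behaviour base class that accepts the cases and builds its own prototype; this is a sketch of the option rather than code that exists in the framework:-

public PrototypedBehaviour(
	string respondsTo,
	IEnumerable<IConfigurationElement> config,
	IEnumerable<IPrototypeCase> cases
) : base(respondsTo) {
	// _prototype is an assumed backing field for the behaviour's Prototype property
	_prototype = new Prototype(config, cases);
}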

As is probably clear from this write-up, I struggled conceptually a couple of times through this process. The simplest possible thing at this point is not just desirable but needful, and the simplest possible way of injecting a prototype's cases is:-

public Prototype(IEnumerable<IConfigurationElement> config):
	this(config, Prototype.NamedCases.Values) {}

In the last post on behaviour configuration I stopped having solved two out of three parts of a problem. If I had continued without time to simply think the abstraction over, I would have started making things worse rather than better. I find it personally important to recognise when I am approaching this point. Much of my worst code has been written past the point when I should have simply stopped, regrouped my mental faculties, gained some perspective, sought outside opinions, and contemplated my options, weighing their pros and cons for more than 2 minutes.

Invariably when I continue past where I should have prudently stopped, it has involved my own vanity and a concern about what other developers and architects would think of me. Being aware of one or more deficiencies in my code, often aware that I am at risk of running afoul of one or more anti-patterns, I over-extend myself because I fear being called a "bad developer"… There's a self-defeating vicious cycle in this… I have never, nor am I ever likely to, finish a piece of work that is perfect. Every single piece of work I complete will be flawed, and if I don't come to terms with that I will over-extend myself each time and turn good work into bad.

When I accept that my work will iteratively improve a situation but at each iteration be left with flaws, I can then look to recognise and manage those flaws. I can establish my contingencies, and I can plan a safe and pragmatic route of improving abstractions.

The remaining problem, of being able to inject selection criteria into behaviours on the basis of their configuration, in a manner that other developers can easily extend to meet their own needs and without changing the preexisting framework, has been solved. There is the uncomfortable hang-nail of NamedCases being a static member, but it's safe where it's parked and easy to move away from without negative impact. So this is where this iteration should end. I need to now let this abstraction bed in, and ensure it doesn't have any unintended consequences, before anointing it and baking it into the framework any further.

Configuring behaviour in Inversion

Or, Experiments with black-box development.

update: Part 2 now follows on with the “further work” outlined toward the end of this article.

I've recently overhauled both the way that Inversion configures behaviours and the way in which that configuration is acted on as selection criteria when determining which behaviours should respond to an event. I thought I'd write this up as it keeps the small group of developers engaged with this work up-to-date, provides some informal documentation, and provides an illustration of a couple of Inversion's design goals.

You need to know where the goal is in order to score

TL;DR Make things as bendy, fast and easy to change as possible… Check that you are.

Inversion might have been labelled Application stack number seven, as it sits on the back of six previous incarnations, the first of which started in 2004 as an experiment in implementing MVC on the .NET platform, the descendant of which went live into production in 2006 where it remains to this day. Two other slightly earlier but very close incarnations of Inversion went into production in 2012 and 2014, but by this time the point of interest had moved well past MVC to playing with ideas of behavioural composition as a means of meeting cross-cutting concerns normally addressed by AOP.

So Inversion is merely the most recent of a series of application stacks experimenting with a handful of core ideas some of which are more developed than others at this point, but each of which should show progression rather than regression over time.

My experience tells me that any piece of development spanning more than a handful of weeks quickly starts accumulating the risk of its initial goals being diluted and then eventually forgotten. It is important then to remind ourselves what it is in broad terms we're trying to obtain from a system, and then to review our activity and ensure we're actually meeting those goals, whether formally or informally.

This is a summary of some of Inversion's goals:-

  • Black-box components
  • Configuration as resource distinct from application
  • Favouring composition over inheritance for application behaviour
  • Small footprint micro-framework
  • Single responsibility
  • Extensibility, DIY
  • Substitution, Plugability
  • Inversion of Control
  • Testability
  • Portability
  • Conventions for state
  • Speed

Behaviour configuration will have a large impact on Inversion and will almost certainly add a distinct flavour to the framework's usage, for better or for worse, so it's important once we're done that we review our goals and ensure they're being advanced and not diminished.

Finding a common abstraction for behaviour configuration

TL;DR Tuples.

Inversion is in large part a composition of IProcessBehaviour objects. A set of condition/action pairs. The relevant portion of the interface being:-

// snipped out some members and comments for clarity
public interface IProcessBehaviour {
	string RespondsTo { get; }
	bool Condition(IEvent ev, IProcessContext context);
	void Action(IEvent ev, IProcessContext context);
}

When we context.Fire(event) we pass the event to the condition of each behaviour registered with the context in turn. If the condition returns true, that behaviour's action is executed on the context.
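
In outline the dispatch looks something like the following sketch. It assumes the context keeps its registered behaviours in a simple list, and that an event carries the name of the message it signals; the real implementation differs in its details:-

public void Fire(IEvent ev) {
	foreach (IProcessBehaviour behaviour in _behaviours) {
		// a behaviour is considered if the event message matches what it
		// responds to, with "*" acting as a wildcard
		if (behaviour.RespondsTo == "*" || behaviour.RespondsTo == ev.Message) {
			// the condition guards the action
			if (behaviour.Condition(ev, this)) {
				behaviour.Action(ev, this);
			}
		}
	}
}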

Over time we find a lot of common activity taking place in conditions.

  • Does the context have any of these parameters?
  • Do these parameters and values exist on the context?
  • Do they exist on the event?
  • Do we have these keys in the context's control-state?

For want of a better phrase we call this the selection criteria of the behaviour.

So we quite naturally start to refactor common selection criteria into base classes. We also start to push out the specification of selection criteria to configuration of the behaviour.

In Inversion's previous incarnation the expression of config for selection had gotten quite out of hand. Each category of test ended up with its own data-structure, both to configure and drive that test.

private ImmutableList<string> _includedAllControlStates;
private ImmutableList<string> _nonIncludedControlStates;
private ImmutableList<string> _includedAllParameters;
private ImmutableList<string> _nonIncludedParameters;
private ImmutableDictionary<string, string> _matchingAllParams;
private ImmutableDictionary<string, string> _nonMatchingAllParams;

This config would be injected in the constructor by the service container, and be acted upon by the behaviour's condition.

public virtual bool Condition(IEvent ev, IProcessContext context) {
	// the check backing `parms` was missing from the original listing;
	// restored here as an assumption, from the _includedAllParameters field above
	bool parms = this.IncludedAllParameters.All(p =>
		context.Params.Keys.Contains(p)
	);
	bool parmsAndValues = this.MatchingAllParameters.All(p =>
		context.Params.Contains(p) &&
		context.Params[p.Key] == p.Value
	);
	bool notParmsAndValues = this.NonMatchingAllParameters.All(p =>
		context.Params.Keys.Contains(p.Key) &&
		context.Params[p.Key] != p.Value
	);
	bool controlStates = this.IncludedAllControlStates.All(p =>
		context.ControlState.Keys.Contains(p)
	);
	bool notParms = this.NonIncludedParameters.All(p =>
		!context.Params.Keys.Contains(p)
	);
	bool notControlStates = this.NonIncludedControlStates.All(p =>
		!context.ControlState.Keys.Contains(p)
	);
	return
		base.Condition(ev, context) &&
		parms &&
		parmsAndValues &&
		notParmsAndValues &&
		controlStates &&
		notParms &&
		notControlStates;
}

It worked, but it wasn't extensible, in that extending the functionality required adding more and more data-structures, with less and less common purpose. It put a lot of pressure on inheritance to pick up the config you were interested in, along with its condition checks. It was riddled with assumptions which you either accepted or were left with no functionality except a bespoke implementation. Special little snowflakes everywhere, which is the opposite of what is being attempted.

It worked in that it allowed specifying behaviour selection criteria from config, but the expression of these configs in Spring.NET or in code was painful, hard to understand, and messy. Worst of all, it was leaking implementation details from the behaviours across their interface.

So I started playing around with something along the lines of IDictionary<string, IDictionary<string, IDictionary<string, IList<string>>>>, which again kind of worked. It was an improvement in that the previous configurations for selection criteria could all pretty much be expressed with the one data-structure. The data-structure however was messy, and difficult to express in configuration.

Next I started playing with a structure something like MultiKeyValue<string, string, string, string>, which finally started to feel in some way rational and self-contained. I happened to be reading a piece comparing the efficiency of hashcodes between key-value pairs and tuples, which made obvious a very short step to Configuration.Element : Tuple<int, string, string, string, string>.
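
Configuration.Element is essentially a named tuple. A minimal sketch of its shape, assuming simple pass-through properties over the tuple items and eliding whatever else the real class carries:-

public class Element : Tuple<int, string, string, string, string> {

	public int Ordinal { get { return this.Item1; } }
	public string Frame { get { return this.Item2; } }
	public string Slot { get { return this.Item3; } }
	public string Name { get { return this.Item4; } }
	public string Value { get { return this.Item5; } }

	public Element(int ordinal, string frame, string slot, string name, string value)
		: base(ordinal, frame, slot, name, value) {}
}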

The class Inversion.Process.Configuration represents a set of ordered elements, or a relation of tuples expressing (ordinal, frame, slot, name, value). This is a very expressive structure, and with LINQ easy and efficient to query in lots of different ways.
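
For example, pulling out the elements of a given frame and slot in order is a one-liner. A sketch, assuming the configuration exposes its elements as a sequence with the properties named above; this is roughly what the GetElements("context", "match-any") call seen later does:-

IEnumerable<IConfigurationElement> elements = config.Elements
	.Where(element => element.Frame == "context" && element.Slot == "match-any")
	.OrderBy(element => element.Ordinal);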

The resulting configuration is easy to express in code, and encourages a declarative rather than fluent style.

Naiad.ServiceContainer.Instance.RegisterService("test-behaviours",
	container => {
		return new List<IProcessBehaviour> {
			new MessageTraceBehaviour("*",
				new Configuration.Builder {
					{"event", "match", "trace", "true"}
				}
			),
			new ParameterisedSequenceBehaviour("test",
				new Configuration.Builder {
					{"fire", "bootstrap"},
					{"fire", "parse-request"},
					{"fire", "work"},
					{"fire", "view-state"},
					{"fire", "process-views"},
					{"fire", "render"}
				}
			),
			new ParameterisedSequenceBehaviour("work",
				new Configuration.Builder {
					{"context", "match-any", "action", "test1"},
					{"context", "match-any", "action", "test2"},
					{"fire", "work-message-one", "trace", "true"},
					{"fire", "work-message-two", "trace", "true"}
				}
			),
			new ParseRequestBehaviour("parse-request"),
			new BootstrapBehaviour("bootstrap",
				new Configuration.Builder {
					{"context", "set", "area", "default"},
					{"context", "set", "concern", "default"},
					{"context", "set", "action", "default"},
					{"context", "set", "appPath", "/web.harness"}
				}
			),
			new ViewStateBehaviour("view-state"),
			new ProcessViewsBehaviour("process-views",
				new Configuration.Builder {
					{"config", "default-view", "xml"}
				}
			),
			new RenderBehaviour("render"),
			new JsonViewBehaviour("json::view", "text/json"),
			new XmlViewBehaviour("xml::view", "text/xml"),
			new XsltViewBehaviour("xslt::view", "text/xml"),
			new XsltViewBehaviour("xsl::view", "text/html"),
			new StringTemplateViewBehaviour("st::view", "text/html")
		};
	}
);

Not the best notational representation of configuration ever, but not the worst; a definite improvement, and something it's felt one can become comfortable with. It's certainly a very concise configuration of a swathe of behaviour.

This also is not the primary means of configuration. This is showing the configuration of Naiad, which is a toy service container Inversion provides suitable for use in unit tests. The above is the configuration of a test.

A good friend and former colleague (Adam Christie) is getting good results from the prototype of a service container called Pot, intended to replace the use of Spring.NET. Until that matures over the coming months, Spring.NET is the favoured service container for Inversion. This doesn't stop you from using whichever service container takes your fancy; as IServiceContainer shows, Inversion's own expectations of a service container are minimal.

public interface IServiceContainer : IDisposable {
	T GetService<T>(string name) where T: class;
	bool ContainsService(string name);
}

If you can honour that interface (and you can) with your service container, Inversion won't know the difference.
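
To make that concrete, here is a minimal sketch honouring the interface with nothing more than a dictionary; illustrative only, and not how Naiad or Spring.NET go about it:-

public class SimpleContainer : IServiceContainer {

	private readonly Dictionary<string, object> _services = new Dictionary<string, object>();

	public void RegisterService(string name, object service) {
		_services[name] = service;
	}

	public T GetService<T>(string name) where T : class {
		// yields null rather than throwing if the named service is absent or of another type
		object service;
		return _services.TryGetValue(name, out service) ? service as T : null;
	}

	public bool ContainsService(string name) {
		return _services.ContainsKey(name);
	}

	public void Dispose() {
		_services.Clear();
	}
}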

What I mean when I say Spring.NET is the favoured service container is that out of all the possibilities, Spring.NET is what I happen to be focused on as a baseline.

Acting on configuration for conditions

TL;DR LINQ

BehaviourConditionPredicates provides a bunch of predicates as extension methods that take the form:-

public static bool ContextMatchesAnyParamValues(this IConfiguredBehaviour self, IProcessContext ctx) {
	IEnumerable<IConfigurationElement> elements = self.Configuration.GetElements("context", "match-any");
	int i = 0;
	foreach (IConfigurationElement element in elements) {
		i++;
		if (ctx.HasParamValue(element.Name, element.Value)) return true;
	}
	return i == 0; // there was no match specified
}

This illustrates extending IConfiguredBehaviour with whatever condition predicates are useful over time, without having to modify IConfiguredBehaviour or Configuration. We establish our own convention of tuples, and act on them. In the above example we're extracting elements from the configuration that have the frame and slot {"context", "match-any"}, which drops out the tuples:-

{"context", "match-any", "action", "test1"},
{"context", "match-any", "action", "test2"}

We check the context for the name and value of each tuple with ctx.HasParamValue(element.Name, element.Value).

You’re always free to write the conditions for your behaviours in whatever way you need. What we see here is only an illustration of how I happen to be tackling it.

Expressing tuples as XML

TL;DR Just read four nodes deep and call them tuple elements.

If you step back from XML a moment and consider it simply as an expression of a tree of nodes, there's a trick you can pull with reading a tree of nodes as tuples, which is a little novel in this context but which we take for granted when working with relational databases. That is, databases focus on the grouping of tuples into collections which we call relations, or more commonly tables… I'll confess to that piece of conceit straight away. I happened to be doing some reading on working with sets of tuples and ran across the fact that the relational in "relational database" refers to the fact that it's a set of tuples that is a relation, commonly called a table, not any association between tables as you might expect from the term. It was novel to me, and I now obviously like flaunting the term… Back on topic…

Given that our configuration is made up of a set of tuples the elements of which we’re calling (frame, slot, name, value), consider the following XML:-

...
<context>
	<match-any>
		<action>test1</action>
		<action>test2</action>
	</match-any>
</context>
<fire>
	<work-message-one trace="true" />
	<work-message-two trace="true" />
</fire>
...

If we read that one node at a time, and with each node copy it as an element of our tuple, our first tuple builds up thus:-

context   => {"context"}
match-any => {"context", "match-any"}
action    => {"context", "match-any", "action"}
test1     => {"context", "match-any", "action", "test1"}

And we have our first tuple. Now if we were reading results from a database, we'd not be surprised if the next value, test2, were preceded by the same elements, as they are unchanged. So our second tuple is {"context", "match-any", "action", "test2"}. In this way we can read that XML snippet as:-

{"context", "match-any", "action", "test1"},
{"context", "match-any", "action", "test2"},
{"fire", "work-message-one", "trace", "true"},
{"fire", "work-message-two", "trace", "true"}

Which is exactly what we're after. We can now define a set of tuples very expressively and in an extensible manner with XML; we now just need to hook this up with Spring.

Extending Spring.NET configuration

TL;DR Was much easier than expected.

I’ve been using Spring.NET since 2006 as the backbone of most applications I’ve built. It’s something of a behemoth, and the reality is I’ve only really ever used a very thin slice of its features. I’ve always been comforted by the fact that there’s a solution to most problems with Spring and if I needed to I could extend my way out of a tight space, despite the fact I’ve never had much call to.

One of the things I've always wanted to do was extend and customise Spring XML configuration. If you're working with Spring and XML configs, one of the costs is that you're going to end up with a lot of configuration, and it's got a fair few sharp edges to it. After having a stab at it I can only say I wish I'd done it years ago, as it was far less involved than I expected.

The relevant documentation for this is Appendix B. Extensible XML authoring, which lays out in pretty straight-forward terms what needs to be done. From this we produce:-

An XSD schema describing our extension of the Spring config

Which provides the schema for our config extension, and most importantly associates it with a namespace. This is our “what”.

There's a small gotcha here. You need to go to the file properties and set Build action to Embedded resource, as your schema needs to be embedded in the assembly for it to be used.

A parser for our namespace

Which is responsible for mapping individual xml elements to the unit of code that will process them. For each xml element you register its handler, thus:-

this.RegisterObjectDefinitionParser("view", new ViewBehaviourObjectDefinationParser());

This is our link between “what” and “how”.
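
Assuming Spring.NET's standard extension points, the registering parser amounts to little more than the following sketch; the association with our namespace URI and embedded schema is declared via attributes on the class, elided here:-

public class BehaviourNamespaceParser : NamespaceParserSupport {
	public override void Init() {
		// map each element in our namespace to the parser that will process it
		this.RegisterObjectDefinitionParser("behaviour", new BehaviourObjectDefinationParser());
		this.RegisterObjectDefinitionParser("view", new ViewBehaviourObjectDefinationParser());
	}
}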

An object definition parser

This is our “how”, where the actual work gets done. In these object definition parsers we process the xml and drive an ObjectDefinitionBuilder provided by Spring.

Consider a simple example of this implementation in ViewBehaviourObjectDefinationParser. First we override GetObjectTypeName.

protected override string GetObjectTypeName(XmlElement element) {
	return element.GetAttribute("type");
}

When our view element is encountered and ViewBehaviourObjectDefinationParser is resolved from its registration for this element, Spring asks for a string expression of the type for the object that will be created for this element. We simply read this from the element's @type attribute, exactly as Spring normally would.

Next we need to deal with any constructor injection, and it turns out that because we're processing elements in our own namespace, elements from Spring's namespace still work as expected, allowing us to mix and match to some extent.

<behaviour responds-to="parse-request"
	type="Inversion.Web.Behaviour.ParseRequestBehaviour, Inversion.Web"
>
	<spring:constructor-arg
		name="appDirectory"
		value="Inversion.Web.Harness.Site"
	/>
</behaviour>

Note the spring:constructor-arg within the behaviour element.

So we're in the rather comfy position of retaining Spring's base functionality in this area and merely adding syntactic sugar where it suits us.

Spring calls DoParse on our definition parser, and passes it the associated element.

protected override void DoParse(XmlElement xml, ObjectDefinitionBuilder builder) {
	// all behaviours with config being parsed have @responds-to
	string respondsTo = xml.GetAttribute("responds-to");
	builder.AddConstructorArg(respondsTo);
	// all view behaviours have @content-type
	string contentType = xml.GetAttribute("content-type");
	builder.AddConstructorArg(contentType);
}

In this example we are extracting the @responds-to and @content-type attributes and adding them to the object definition builder as constructor arguments.

Reading the behaviour configuration from XML

Okay, so if we take stock, we’re by this point able to provide our own expressions in XML of object definitions. This doesn’t speak to our provision of a set of tuples as configuration for a behaviour.

BehaviourObjectDefinationParser is a little more gnarly than our definition parser for view behaviours, but its DoParse isn't too wild. We iterate over the XML nodes and construct a hashset of tuples from them, and once we have them we call builder.AddConstructorArg(elements) to tell Spring that we're using them as the next constructor argument.

// we're going to read the config into tuples
// of frame, slot, name, value
foreach (XmlElement frameElement in frames) {
	string frame = frameElement.Name;
	// process any frame attributes as <frame slot="name" />
	foreach (XmlAttribute pair in frameElement.Attributes) {
		string slot = pair.Name;
		string name = pair.Value;
		Configuration.Element element = new Configuration.Element(ordinal, frame, slot, name, String.Empty);
		elements.Add(element);
		ordinal++;
	}
	foreach (XmlElement slotElement in frameElement.ChildNodes) {
		string slot = slotElement.Name;
		int start = elements.Count;
		// read children of slot as <name>value</name>
		foreach (XmlElement pair in slotElement.ChildNodes) {
			string name = pair.Name;
			string value = pair.InnerText;
			Configuration.Element element = new Configuration.Element(ordinal, frame, slot, name, value);
			elements.Add(element);
			ordinal++;
		}
		// read attributes of slot as name="value"
		foreach (XmlAttribute pair in slotElement.Attributes) {
			string name = pair.Name;
			string value = pair.Value;
			Configuration.Element element = new Configuration.Element(ordinal, frame, slot, name, value);
			elements.Add(element);
			ordinal++;
		}
		if (elements.Count == start) { // the slot had no name/value pairs
			Configuration.Element element = new Configuration.Element(ordinal, frame, slot, String.Empty, String.Empty);
			elements.Add(element);
			ordinal++;
		}
	}
}
builder.AddConstructorArg(elements);

Nothing clever happening here at all; it's left rather verbose and explicit to assist with debugging.

So we have our behaviours configurations nicely integrated with Spring, and with reasonable opportunity for extension.

Lastly, from our behaviour.xsd schema we can default attribute values for elements, as we do for message-sequence@type:-

<xsd:element name="message-sequence">
	<xsd:complexType>
		<xsd:complexContent>
			<xsd:extension base="configured-behaviour-type">
				<xsd:attribute name="type" type="xsd:string" use="optional" default="Inversion.Process.Behaviour.ParameterisedSequenceBehaviour, Inversion.Process"/>
			</xsd:extension>
		</xsd:complexContent>
	</xsd:complexType>
</xsd:element>

This allows us to write message-sequence with its @type value supplied by the schema.

The end result of these extensions is the ability to express cleanly in XML the equivalent of our in-code configuration of behaviours, as can be seen in behaviour.config:-

<spring:list element-type="Inversion.Process.Behaviour.IProcessBehaviour, Inversion.Process">
	<message-sequence responds-to="process-request">
		<fire>
			<bootstrap />
			<parse-request />
			<work />
			<view-state />
			<process-views />
			<render />
		</fire>
	</message-sequence>
	<behaviour
		responds-to="bootstrap"
		type="Inversion.Web.Behaviour.BootstrapBehaviour, Inversion.Web"
	>
		<context>
			<set
				area="default"
				concern="default"
				action="default"
				appPath="/web.harness"
			/>
		</context>
	</behaviour>
	<behaviour
		responds-to="parse-request"
		type="Inversion.Web.Behaviour.ParseRequestBehaviour, Inversion.Web"
	>
		<spring:constructor-arg name="appDirectory" value="Inversion.Web.Harness.Site" />
	</behaviour>
	<behaviour
		responds-to="view-state"
		type="Inversion.Web.Behaviour.ViewStateBehaviour, Inversion.Web"
	/>
	<behaviour
		responds-to="process-views"
		type="Inversion.Web.Behaviour.ProcessViewsBehaviour, Inversion.Web"
	/>
	<behaviour
		responds-to="render"
		type="Inversion.Web.Behaviour.RenderBehaviour, Inversion.Web"
	/>
	<!-- VIEWS -->
	<view
		responds-to="rzr::view"
		content-type="text/html"
		type="Inversion.Web.Behaviour.View.RazorViewBehaviour, Inversion.Web"
	/>
	<view
		responds-to="xml::view"
		content-type="text/xml"
		type="Inversion.Web.Behaviour.View.XmlViewBehaviour, Inversion.Web"
	/>
	<view
		responds-to="json::view"
		content-type="text/json"
		type="Inversion.Web.Behaviour.View.JsonViewBehaviour, Inversion.Web"
	/>
	<view
		responds-to="xslt::view"
		content-type="text/xml"
		type="Inversion.Web.Behaviour.View.XsltViewBehaviour, Inversion.Web"
	/>
	<view
		responds-to="xsl::view"
		content-type="text/html"
		type="Inversion.Web.Behaviour.View.XsltViewBehaviour, Inversion.Web"
	/>
	<view
		responds-to="st::view"
		content-type="text/html"
		type="Inversion.StringTemplate.Behaviour.View.StringTemplateViewBehaviour, Inversion.StringTemplate"
	/>
	<!-- app -->
	<message-trace responds-to="*">
		<event>
			<match trace="true" />
		</event>
	</message-trace>
	<message-sequence responds-to="work">
		<context>
			<match-any>
				<action>test1</action>
				<action>test2</action>
			</match-any>
		</context>
		<fire>
			<work-message-one trace="true" />
			<work-message-two trace="true" />
		</fire>
	</message-sequence>
</spring:list>

Which can be compared with a previous version. The difference is stark.

We can also see here both the beginning of our own domain-specific language in the configuration of our behaviours, and, more importantly, the ability for other developers to extend this with their own semantics.

Consider the following definition of a behaviour:-

<some-behaviour responds-to="something">
	<resource>
		<exists>
			<path>Resources/Results/result-1-1.xml</path>
		</exists>
	</resource>
</some-behaviour>

I just made that up, but hopefully it begins to become clear how that will be read as a set of tuples for the behaviour's configuration that I can act on. You can make your own stuff up, which is what open for extension means: the ability for you to make stuff up that I didn't foresee, and without you having to ask me to modify my stuff.
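
For instance, a predicate acting on that made-up configuration could follow the same extension-method convention we saw in ContextMatchesAnyParamValues earlier. ResourcesExist and its file check are invented here purely for illustration:-

public static bool ResourcesExist(this IConfiguredBehaviour self, IProcessContext ctx) {
	// acts on tuples of the form {"resource", "exists", "path", "..."}
	IEnumerable<IConfigurationElement> elements = self.Configuration.GetElements("resource", "exists");
	return elements.All(element => File.Exists(element.Value));
}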

There’s a strong smell of Prolog around here now. If you’re familiar with Prolog, think of assertions upon which predicates act.

A little caveat on reading XML as a set of tuples

In a relation of tuples you can’t have a duplicate tuple, so tuples that are repeated are collapsed down to the one tuple. The consequence of this is you can’t do…

{"fire", "bootstrap"},
{"fire", "parse-request"},
{"fire", "work"},
{"fire", "work"},
{"fire", "work"},
{"fire", "view-state"},
{"fire", "process-views"},
{"fire", "render"}

As you'll end up with just the one {"fire", "work"} tuple. The elements as implemented express an ordinal, so it is possible to change this to allow duplicate tuples, but I want to digest what the implications of that might be first, and to wait and see what pain, if any, it actually causes in practice, before fixing something that may not be broke.

You could, as it stands, move past this problem by moving to something like {"fire-repeat", "work", "3"}.

We have enough here to feel confident in adapting to our needs in this area over time. We’re not walled in if we experience pain in this.

Reviewing our goals

TL;DR It went rather well, or I’d not be writing about it.

I listed a bunch of goals, principles or aspirations that are important to Inversion. I find it important after a non-trivial piece of work to consciously run down a mental check-list and ensure that I'm not negatively impacting any of those goals without compelling reason. The purpose of such a review is not to seek perfection but to ensure simple forward progress in each area, even if it's only inching forward. Incremental improvement, kaizen and all that.

This is just me sharing informal observations after a piece of work; normally I would use my internal dialogue.

  • Black-box components

    Inversion has a strong opinion on behaviours as black boxes being as opaque as possible. This is why we don't inject implementation components into behaviours, and encourage behaviours to use service location to locate the components they need for their implementation. The reasons for this are outlined in other pieces I've written and are something I'll write about more in the future. The short version is a concern with leaking implementation details, and imposing externally visible has-a relationships upon components where uses-a would be more appropriate. A behaviour may use configuration to form the basis of component location, but that is a detail of implementation, not common interface.

    This concern speaks to behaviours only. How data-access components obtained from an IoC container are instantiated and injected for example is a separate and distinct architectural concern. Behaviours are participating in a component model around which there are specific expectations. Other services aren’t.

    Anything that distracts from a behaviour's condition and action is a potentially undesirable overhead, especially if it's leaking details across the interface. Moving from multiple data-structures to one whose interface does not express intent, and which focuses on being a simple, generalised, immutable structure of string values that can serve multiple purposes, is not a radical improvement in terms of behaviours as black-boxes, but it's a definite improvement. We're leaking less. Configuration becomes a standardised input at instantiation.

    Intent is expressed where desirable through the data-structure's actual data, not its interface. This is what is meant by moving to a more generalised data-structure.

  • Substitution, “pluggability”

    Related to the interest in black-boxes, this isn't a Liskov concern. This is a very real and practical concern with being able to swap out components with alternate implementations. "Change the type specified by that attribute and nothing else needs to change" kind of swapping out. Behaviours as extensible plugins.

    Again, no tectonic shift here; as with the previous point, the focusing of many interfaces into one common interface shared by many behaviours provides substantially less friction to behaviours as plugins.

  • Configuration as resource distinct from application

    Expressing configuration is more standardised, expressive, and more elegant especially when using XML notation thanks to the Spring.NET extensions. The change is impactful enough that we’re now starting as a natural matter of course to express our own semantics through configuration.

    So while configuration has not been made any more distinct as a resource, the quality of its use as a distinct resource has been much improved, and I have hope that it will over time become a pleasure to use and a welcome tool for the developer rather than an onerous liability.

  • Favouring composition over inheritance for application behaviour

    With the original implementation there was a lot of pressure to inherit from various classes in order to inherit some of their configuration features. This caused a lot of strain on inheritance.

    With the move to a common data-structure for configuration we took away pressure from inheriting to gain varying configuration properties.

    With the move to predicates as methods extending IConfiguredBehaviour we took pressure away from having to inherit from a particular class in order to pick up its condition predicates.

    What we didn’t escape was the need to actually use these predicates in a condition, therefore making it desirable to inherit from some classes in order to obtain the checks they perform in their condition.

    So this is really a 2 out of 3 in this regard. We have relieved pressure from inheritance in quite a marked way, but there remains an impediment that will require more thought and work.

  • Small footprint micro-framework

    This was one of the primary reasons for the piece of work and one of the more substantial wins, as it's reduced the footprint of the behaviour interface and provides a strategy for accommodating future change without modification. Behaviour configuration is in a markedly better state than it was. Far more compact in design.

  • Single responsibility

    Providing configuration features was starting to distract from a behaviour's responsibility to provide a condition/action pair, with an emphasis on the action. Most of the responsibility for expressing configuration and working with it has been removed from the behaviour, which for the most part now merely has a configuration that was provisioned by a base class and is acted on by extension methods. So our focus on the actual responsibility of behaviours has been tightened.

  • Extensibility, DIY

    This again was one of the primary reasons for performing this piece of work. There was a desire in the face of feature requests concerning configuration and predicates to be able to reasonably reply “do it yourself”.

    On the one hand there's a big gain. RDF is able to describe the world with triples, and it turns out N-Quads is a thing. The point is, in terms of data expression you can drive a Mongolian Horde through an ordered set of four-element tuples. It makes it very easy for other developers to extend with their own configuration expressions.

    As mentioned previously adding new predicates as extension methods is now also smooth.

    We’re still stuck on having to actually use these predicates as mentioned.

    The issue isn't implementing the lookup of predicate strategies; it can be as simple as a dictionary of lambdas. The cause for concern is where to define this, and where to inject it. Which object should be responsible for maintaining this lookup? It probably fits well enough on the context, but it would require the context to hold implementation details of behaviours, and I want to think about that some.

  • Inversion of Control

    I’m not sure I would go so far as to say IoC has been significantly enhanced here. Behaviour implementations have certainly relinquished much of their control over their configuration. Perhaps a nudge in the right direction for IoC is that it is now easier for developers to drive both their condition and action from configuration, so we have perhaps afforded more opportunity for IoC.

  • Testability

    No big wins in functional terms here; the more concise and expressive configuration is simply easier and more pleasant to use, so unit tests, for example, which tend to configure a wide variety of cases and so are big users of configuration, certainly benefit.

    While I was rummaging around the framework touching lots of different bits I also took a slight detour to implement MockWebContext along with MockWebRequest and MockWebResponse as I had a need to lay down some half-decent tests. Nothing exciting, you can see their use in ViewPipelineTests.

    So overall this patch of work puts Inversion in quite a strong position for testing with it possible to test contexts running a full application life-cycle for a request, or any behaviour or group of behaviours in concert as needed. Very few behaviours have a dependency on IWebContext, in this case only those parsing the request and writing the response, so testing even a view pipeline is straight-forward.

  • Portability

    No big impact here except there’s less to port. The use of LINQ statements is an implementation detail, and there are easy equivalent implementations available on all common platforms. There’s nothing exotic being done here.

  • Conventions for state

    Inversion attempts to place mutable state in known places, and to keep state elsewhere as immutable and simple as possible. We’ve consolidated down our configuration to a single immutable structure, so a small nudge in the right direction.

  • Speed

    Performance tests are showing the same figures. There wasn't expected to be any change here; a move to backing configuration with arrays in the future may squeeze out some performance gains.

  • Other observations

    I'm starting to become mildly concerned over the use of LINQ methods in a fluent style in implementation code. I have become aware of how often when debugging I am changing a LINQ statement into a simpler form in order to step through it and see what's happening. I take this as a very loud warning sign. Often my use of LINQ is pure vanity, as it has go-faster stripes. I think I'm going to start avoiding fluent chained statements, and expose the intermediary steps as local variables in order to make debugging easier, in the manner sketched below… Difficult to force myself perhaps, as LINQ is bloody expressive.
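
By way of illustration, here is the same check written fluently and then expanded with its intermediary steps exposed; a sketch, not code from the framework:-

// fluent form: expressive, but awkward to step through
bool matched = this.Prototype.Criteria.All(criteria => criteria(this.Configuration, ev));

// expanded form: each step is a local that can be inspected while debugging
bool expandedMatched = true;
foreach (SelectionCriteria criteria in this.Prototype.Criteria) {
	bool passes = criteria(this.Configuration, ev);
	expandedMatched = expandedMatched && passes;
}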

Future work

TL;DR My flaky ideas.

There’s a couple of progressions I can see to this work, but first I want to let the existing work bed in before jumping the gun.

Backing the configuration elements with an array

At the moment the Configuration is backed by ImmutableHashSet<IConfigurationElement>. This is reasonably efficient, and is easy to work with. It could however be moved to being backed by an array:-

string[][] config = new[] {
	new[] {"frame1", "slot1", "name1", "value1"},
	new[] {"frame2", "slot2", "name2", "value2"}
};

Which would probably be more efficient.

I did it this way as it was easier to reason about and debug, and those are still valid reasons at the moment. Once it’s become part of the furniture, then I can think about trying this out.

Expressing tuples relative to the preceding tuple

There’s an improvement I can vaguely see… because the tuples are ordered, we can consider a new tuple as an expression relative to the previous tuple.

(a, b, c, d)
((-2), e, f) relative to the previous tuple => (a, b, e, f)
((0), g, h) becomes => (a, b, e, f, g, h)
((-4), x) becomes => (a, b, x)

Relations of tuples include a lot of repetition in many cases. Using an expression of an offset from the end would allow us to express an uncapped arity of tuples, with the limit being on how many new elements of a tuple we could expand by at a time. They could get effectively as large as you like… Think of scanning the list of tuple definitions using a stack as a point of context: you pop the specified number of elements, and then push the rest on; the result is your current tuple. You could put this stack-based iteration behind IEnumerable<IConfigurationElement> and anybody using it, say via LINQ, would be none the wiser.
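
As a sketch of the idea, entirely speculative and mirroring the worked example above rather than anything implemented:-

static IEnumerable<string[]> ExpandRelativeTuples(IEnumerable<Tuple<int, string[]>> definitions) {
	List<string> current = new List<string>();
	foreach (Tuple<int, string[]> definition in definitions) {
		int offset = definition.Item1; // zero or negative: how many trailing elements to pop
		current.RemoveRange(current.Count + offset, -offset);
		current.AddRange(definition.Item2); // push the new elements on
		yield return current.ToArray(); // the current tuple
	}
}

// (0, [a, b, c, d]) => (a, b, c, d)
// (-2, [e, f])      => (a, b, e, f)
// (0, [g, h])       => (a, b, e, f, g, h)
// (-4, [x])         => (a, b, x)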

My thinking on this is still fuzzy, and I feel it may be more than is required, possibly turning something quite straight-forward into something quite convoluted. Once I’ve thought through it a bit more, it may just be an obviously bad idea in practice.

Also sometimes a little constraint is an appropriate restraint. Time will tell.

The lookup of condition predicates

As discussed, at the moment predicates to act on configuration are provided as extension methods which need to be used in conditions. The frame of a tuple could be used as a key to look up the predicate to apply to it, by a variety of mechanisms.

This would add extensibility but may be one indirection too far.

In parting

I always feel a bit odd after I write something like this up. I’m not sure what use or interest this is beyond a small group of involved people, but I find I’m getting a lot of value out of explaining a thing in a public context. It’s certainly encouraging my thinking toward more rigour, so it’s a worthwhile activity for that reason alone.

My attempt at writing this up isn't to show any arrival upon a perfect landing spot, but instead to relate in some way software development and architectural concern as an incremental process of improvement moving toward a goal.

I come not to bury IoC but to praise it.

Or, All elephants are grey, that’s grey, therefore that’s an elephant.

I have used Spring.NET since 2006, when I first used it as the backbone of an MVC framework and CMS on the .NET platform, and I have used it aggressively since and to this day, inducting several development teams into its use over that time.

I felt compelled to give my credentials there as a good card-carrying developer, hip to current trends, who finds it almost unthinkable to write a non-trivial application without an IoC container. I feel compelled because this piece may be unfashionable, and I'm dimly aware it could make me unpopular in the current development community in which I find myself. Nobody likes to voice an unpopular view. We fear others will think us stupid.

It's important that any reader appreciate that I am writing as a .NET developer, lead, and architect working in London. There may well be wider applicability to my views, but I can't know that, as my observations are based on… well, what I get to observe. Things may be similar where you are, but they may not be.

A room full of smart people wearing bell-bottoms, because…

I have found myself on more than one occasion saying to my developers, "commercial software development is first and foremost a social activity before it is anything else". I'd been saying that (possibly while stroking my beard) for some time, with quite a level of conviction, before I actually stopped to think about it and really understood what I meant.

It's not a terribly opaque statement, and it's really quite obvious the moment you consider it… Before any code gets written, before any infrastructure is laid down, a bunch of people are going to make a whole bunch of decisions. Throughout the development of an application, and through its support and maintenance, a wide assortment of people are going to negotiate between themselves what is right action and what is wrong action. The quality of those decisions will have a significant impact on the quality of the software, and will ultimately equate to pound signs written in either black or red.

There are libraries filled with books written on the subject of people and decision making, by writers far more studied in the subject than myself. I want to focus for the moment on one specific aspect of groups of developers and architects making technical decisions together: the acceptance without scrutiny of self-evident virtue received as common wisdom from a peer group. Known by the more plain-speaking as Fashion.

There are a couple of angles I could take at this, and I may explore other areas later, but for now I want to drill into Inversion of Control and Dependency Injection a little, and the manner of its pervasive use in the .NET development community currently.

I’ll admit that’s a lot of packaging before getting to the content.

Inversion of Control (IoC)

So what is IoC? It's almost impossible to answer that question without first asking "when?", because the answer expected in an interview today is very different from the one those who coined the term would give.

Martin Fowler When these containers talk about how they are so useful because they implement “Inversion of Control” I end up very puzzled. Inversion of control is a common characteristic of frameworks, so saying that these lightweight containers are special because they use inversion of control is like saying my car is special because it has wheels.

I am reminded of the fact that if more people read Fielding’s own comments on the application of REST, there would be a lot fewer articles and books on REST, and far fewer applications calling themselves RESTful. Concepts percolate through the development community and in the same way the truth of an event in some distant foreign country will go through many changes before it reaches your tabloid front-page, so do concepts in software development before they end up in the blog post you’re reading.

If we’re not lazy however, we can go back to their root and origin.

Ralph Johnson and Brian Foote One important characteristic of a framework is that the methods defined by the user to tailor the framework will often be called from within the framework itself, rather than from the user’s application code. The framework often plays the role of the main program in coordinating and sequencing application activity. This inversion of control gives frameworks the power to serve as extensible skeletons. The methods supplied by the user tailor the generic algorithms defined in the framework for a particular application.

That quote, referenced by Mr. Fowler's writing on IoC, was written in 1988. If I can persuade you to read just one paper, please make it this one.

What is IoC trying to achieve?

Before we look at the ways in which IoC is in someway a special case, it is useful perhaps to consider what it shares in common with a broader approach.

IoC participates in a movement to develop techniques for more reusable code, along with a bunch of other principles to this end. One of the success criteria for our employment of IoC, then, is the degree to which we attain code reuse.

In the environment in which IoC grew up, code reuse was seen not just as a means of increasing productivity; it was seen as essential if systems were to be allowed to grow and evolve over time without collapsing under their own weight of complexity. There's a lot of talk of class libraries evolving into frameworks, and of frameworks evolving from white-box systems to black-box as our understanding of a system's abstractions improves with experience. The early writing on and around IoC gives importance to the human factor at play within development. Systems, it was thought, should change as the understanding of the group of people engaged with the problem domain changes. The system should evolve in tandem with our understanding of it.

This approach to software development as an evolving system requires a focus on decoupling of implementation from use, an aggressive focus on discrete interfaces, and an almost obsessive regard for component substitution (plug-ability).

This common goal is what IoC is meant to further, to yield systems resilient to change.

So what is IoC?

It’s a lot of different things.

Martin Fowler There is some confusion these days over the meaning of inversion of control due to the rise of IoC containers; some people confuse the general principle here with the specific styles of inversion of control (such as dependency injection) that these containers use. The name is somewhat confusing (and ironic) since IoC containers are generally regarded as a competitor to EJB, yet EJB uses inversion of control just as much (if not more).

Whoah. Something more IoC than an IoC Container?

One of the characteristics of the practice of religious faith, is that it often doesn’t stand up to scrutiny with its founding texts and prophets.

Martin Fowler Another way to do this is to have the framework define events and have the client code subscribe to these events. .NET is a good example of a platform that has language features to allow people to declare events on widgets. You can then bind a method to the event by using a delegate.

You’ve been doing IoC for a long time, in a lot of different ways, long before you ever found Ninject.

Inversion of Control is any pattern that inverts the traditional procedural flow of control. That's it. Its purpose is to reduce coupling between components so that they are easier to swap out, and to promote flexible application evolution able to cope with new abstractions built upon the system.

Just because your mouse is grey doesn’t mean it’s an elephant

Consider the following wee piece of code…

context.FireWith("user-store::delete-user", "user-id");

Which is a straight-up imperative call. We have the expectation that a named component will delete the user specified. It’s a message-based call, and there are valid aspects of decoupling taking place here, but it’s weak in terms of IoC as it’s a traditional forward calling imperative… “Oi! You there, do that.”

Almost the same…

context.FireWith("user-unsubscribed", "user-id");

Here we are notifying the system of a significant occurrence. We may have no idea what if anything is going to happen as a side-effect of this. Several things may act upon this, or nothing may act upon this… and we don’t care, because in the second example it’s not our business at this point in the application. This is not an imperative. It’s notifying the broader system of an event… “Excuse me. Anybody there? I don’t want to be a nuisance, but I just did something, and I thought somebody might want to know.”… Henceforth to be known as the English pattern.

In the second example you can have many components responding to the message, each with a discrete, narrow focus of purpose. It is open to easy extension by adding new components that respond to the message, without modifying existing implementation or behaviour. It's easy to swap out components for ones with different implementations; the interface is small, discrete and generalised; binding is indirect and at runtime. Feature switching is possible not just at compile time, but at runtime. Lastly, at this point in the code we don't concern ourselves with what happens in the broader system, and require the least possible knowledge about it. The sketch below shows the shape of that extension story.
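To make that concrete, here's a self-contained sketch of the shape being described. This is not Inversion's API; it's just a dictionary of handlers keyed by message, which is enough to show why extension means adding components rather than modifying them.

using System;
using System.Collections.Generic;

// usage: extension is adding a handler, never modifying existing ones
var bus = new MessageBus();
bus.On("user-unsubscribed", userId => { /* cancel mailing-list subscriptions */ });
bus.On("user-unsubscribed", userId => { /* notify the accounts system */ });
bus.Fire("user-unsubscribed", "user-id");

public class MessageBus {
    private readonly Dictionary<string, List<Action<string>>> _handlers =
        new Dictionary<string, List<Action<string>>>();

    // a component registers interest in a message
    public void On(string message, Action<string> handler) {
        if (!_handlers.TryGetValue(message, out List<Action<string>> list)) {
            _handlers[message] = list = new List<Action<string>>();
        }
        list.Add(handler);
    }

    // the application announces an occurrence; zero or more components react
    public void Fire(string message, string arg) {
        if (_handlers.TryGetValue(message, out List<Action<string>> list)) {
            foreach (Action<string> handler in list) handler(arg);
        }
    }
}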

In both examples we're using exactly the same API call and exactly the same mechanic of resolution, but in the second our expectation is of a reactive response to an event. There we have inverted the traditional flow of control; one call expresses an inversion of control and one doesn't.

That's how narrow the divide can be between flow of control going forwards or backwards. The defining factor on either side of that divide is one of intent. By intent I mean an implicit accord on both sides, with a common expectation of the meaning of a signal. The balance of the roles and responsibilities in this relationship is for you to decide. It's about your intent. The mechanism of that signal is important, but not as important as what is being expressed and what is expected. There are lots of different ways you can use a delegate; many of them will invert control, but simply using a delegate will not get you inversion of control.

It ain’t what you do, it’s the way that you do it, and parroting patterns will not ensure that intent is fulfilled. Understanding intent is key here, as if we understand intent, as long as we’re half-decent developers, our implementations will improve over time and we’ll get there. If we don’t understand the initial intent, our chances of hitting the target are much reduced and start involving luck.

The pioneers laying down these principles did not expect it to be possible for groups of humans to land on the perfect abstraction straight out the gate. They talk about taking flawed initial implementations and iteratively improving our architectural choices. So unless you happen to be some kind of architectural genius that gets their abstractions right first time, a strategy for change becomes a strategy for survival.

Javascript

I'm aware of where Javascript started, with Netscape on the server, and I'm aware of where Javascript is today with NodeJS. Javascript came of age, however, in the browser. This meant that Javascript grew up in an environment where the programmer was interacting with a framework (the browser) via an exposed object model.

<div onclick="/* do something with 'this' */">

That's a good example of inversion of control. We're registering a callback against an event the framework is going to raise for this element, with the execution of any callbacks managed by the framework. The alternative would be to modify the framework's core functionality.

This naturally evolves to…

element.addEventListener("click", function(){ alert("Hello World!"); });

Javascript developers weren’t writing complete applications, they were integrating with a framework that forced them to accept IoC as the natural order of things. Modern Javascript frameworks reflect this heritage.

There's any one of a rampaging horde of Javascript frameworks I could cite for example here, so don't read too much into my choosing Twitter's Flight to illustrate the point.

/* Component definition */
var Inbox = flight.component(inbox);

function inbox() {
    this.doSomething = function() { /* ... */ };
    this.doSomethingElse = function() { /* ... */ };

    // after initializing the component
    this.after('initialize', function() {
        this.on('click', this.doSomething);
        this.on('mouseover', this.doSomethingElse);
    });
}

/* Attach the component to a DOM node */
Inbox.attachTo('#inbox');

It’s not so much that Javascript has such laudable executions of IoC, it’s that the .NET development community has settled on such an anemic consensus on IoC.

And we’ve not mentioned Dependency Injection (DI) yet

Because although it's pervasive, it's possibly one of the least interesting aspects of IoC, while remaining one of the most convenient.

In dependency injection, a dependent object or module is coupled to the object it needs at run time. http://en.wikipedia.org/wiki/Inversion_of_control

Coupled to the object it needs. There is coupling taking place with DI that needs to be managed, and if it’s via constructor injection it’s not necessarily very loose.

I'm looking at an MVC.NET controller with 15 objects injected into its constructor. Most of them are repositories. I was intending to count the members of each of those objects, but the first one had 32 public members and I stopped counting there.

How loosely coupled do you think I feel looking at this code? How discrete are the responsibilities being exercised here do you think?

These objects are all injected into the controller by an IoC container. There is a huge surface area being exposed to this component regardless of which particular operation it is performing, and the controller itself possesses 27 public members.

Just because you are using an IoC container and DI does not mean you are implementing IoC. It just means you've found a convenient way to manage the instantiation of objects. In my experience this convenience in wiring up components in unthoughtful ways has done considerable harm, exhibited by the current swathe of MVC.NET + Entity Framework + Ninject web applications, all implemented quite cheerfully around SOLID principles.

Ralph Johnson and Brian Foote Sometimes it is hard to split a class into two parts because methods that should go in different classes access the same instance variable. This can happen because the instance variable is being treated as a global variable when it should be passed as a parameter between methods. Changing the methods to explicitly pass the parameter will make it easier to split the class later.

Your use of the constructor is not inconsequential. I personally aim as much as possible to inject at the constructor only such configuration data as is necessary for that class of component to operate, regardless of implementation. I want as much as possible to be able to swap out implementations without altering their config. Remember that’s what we’re trying to achieve here.

<object type="Conclave.Web.Behaviour.BootstrapBehaviour">
    <constructor-arg name="message" value="bootstrap" />
    <constructor-arg name="params">
        <dictionary key-type="string" value-type="string">
            <entry key="area" value="default" />
            <entry key="concern" value="default" />
            <entry key="action" value="default" />
            <entry key="app-path" value="/conclave.cms" />
        </dictionary>
    </constructor-arg>
</object>

We're configuring behaviour here; regardless of the implementation, what we are expressing in this configuration remains the same because our intent is the same. Although we are not contractually obliged to, we understand the spirit of our intent and try to keep our constructors as honest a part of our interface as possible.

This is DI, but it’s very much light-weight and focuses on configuring the component for use.

<object type="Conclave.Web.Behaviour.View.XslViewBehaviour, Conclave.Web">
    <constructor-arg name="message" value="xslt::view" />
    <constructor-arg name="contentType" value="text/xml" />
</object>
<object type="Conclave.Web.Behaviour.View.XslViewBehaviour, Conclave.Web">
    <constructor-arg name="message" value="xsl::view" />
    <constructor-arg name="contentType" value="text/html" />
</object>

If a component uses, rather than has, another service for its operation, that service is an implementation detail and is acquired by service location. In this particular framework we care a lot about being able to swap out components, and we ensure this intent is met.

In most cases I do not regard it as appropriate to inject something as fat and implementation specific as a repository into a behavioural component. Even though it may be DI, there are too many other principles in the balance that this violates.

The Dependency Inversion Principle (DIP)

The “D” in SOLID does not stand for Dependency Injection. It stands for the Dependency Inversion Principle, which is a subtly different thing, with a focus on implementing interface abstractions and consuming those interface abstractions.

The goal of the dependency inversion principle is to decouple application glue code from application logic. Reusing low-level components (application logic) becomes easier and maintainability is increased. This is facilitated by the separation of high-level components and low-level components into separate packages/libraries, where interfaces defining the behavior/services required by the high-level component are owned by, and exist within the high-level component’s package. The implementation of the high-level component’s interface by the low level component requires that the low-level component package depend upon the high-level component for compilation, thus inverting the conventional dependency relationship. Various patterns such as Plugin, Service Locator, or Dependency Injection are then employed to facilitate the run-time provisioning of the chosen low-level component implementation to the high-level component. http://en.wikipedia.org/wiki/Dependency_inversion_principle

In Dependency Inversion, the implementing class is dependent on an interface that is either owned by an intermediary that the high level component is also dependent upon, or the interface is owned by the high level component.

Strictly speaking if the interface isn’t owned by the high level component, the dependency has not been inverted.

In order to completely achieve dependency inversion, it is important to understand that the abstracted component, or the interface in this case, must be “owned” by the higher-level class. http://blog.appliedis.com/2013/12/10/lost-in-translation-dependency-inversion-principle-inversion-of-control-dependency-injection-service-locator/
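As a minimal sketch of that ownership (the names IReceiptSender, BillingService and SmtpReceiptSender are made up for illustration):

// In the high-level component's package (say MyApp.Billing):
public interface IReceiptSender {            // the interface is owned up here
    void Send(string userId, string receipt);
}

public class BillingService {                // high-level logic knows only the interface
    private readonly IReceiptSender _sender;
    public BillingService(IReceiptSender sender) { _sender = sender; }
    public void Charge(string userId) {
        // ...charge the user, then...
        _sender.Send(userId, "receipt");
    }
}

// In the low-level package (say MyApp.Smtp), which references MyApp.Billing,
// so the compile-time dependency now points upward:
public class SmtpReceiptSender : IReceiptSender {
    public void Send(string userId, string receipt) { /* SMTP details here */ }
}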

In Inversion and Conclave you’ll see occasionally a comment along the lines of // we need to own this interface. You’ll also see several BCL components being wrapped such as request and response objects. One of the goals of Inversion is to be easily portable to other platforms, and so it is important to control what interfaces the framework exposes.

We don't notice this for the most part in everyday development, as a lot of our interface abstractions are picked up by the .NET base class library. If I have a low level component implementing IList and a high level component consuming it via IList, we take the stewardship of the interface by the BCL as good enough, and quite reasonably don't get too pedantic over the fact that this isn't DIP because the high level component doesn't own the interface. A stable and neutral third party is often anointed by the high level component. That example is a little contrived for simplicity, as lists are not the kind of components we would normally engage this level of concern over, but more valid examples are to be found in the System.Data namespace.

This principle can quickly get quite fiddly in practice, so it's often pragmatically summarised as "don't use concretes", which gives 80% of its goodness, but not all.

Consider the use of the Newtonsoft.Json package. It's such a brilliant package that it's used extensively. When a high level component couples with its interfaces it becomes dependent on them in the traditional way. You don't control those interfaces, Newtonsoft do. In most cases the use of such foreign interfaces should be an implementation detail that is not exposed to the broader framework, as in the sketch below.
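One hedged way to keep such a dependency ring-fenced (the IJsonSerializer interface and implementing class are made up for illustration; JsonConvert.SerializeObject is Newtonsoft's real static API):

// owned by the framework: small, neutral, and all the broader system sees
public interface IJsonSerializer {
    string Serialize(object value);
}

// the Newtonsoft dependency stays inside this one implementation detail
public class NewtonsoftJsonSerializer : IJsonSerializer {
    public string Serialize(object value) {
        return Newtonsoft.Json.JsonConvert.SerializeObject(value);
    }
}

Swapping serialisers then touches configuration, not the components consuming IJsonSerializer.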

But there’s a way to dodge most of the issues with DIP entirely, and that is to not use lower level components directly. Instead model the interactions the high level component needs to have with the low level components, treat them as pluggable black-boxes, and only interact with them via an intermediary interface with the framework responsible for resolving that interaction. Messaging is a good example of this, as were the two snippets of code earlier in this piece.

Fashion

When a room full of smart people decide to turn MVC.NET + Entity Framework + Ninject into the exact opposite of what IoC is trying to achieve, which is to say a rat's nest of dependencies, leaking knowledge all over the place, with components coupling like a Roman orgy, we have to ask ourselves how and why?

The best answer I can come up with is fashion.

That's not to be dismissive or derisory. To be so would only compound the issue. It is to acknowledge that we can see and accept the role of fashion in almost every human endeavour, and to suggest that we may need to consider how it impacts our technical choices.

We all have a very real need to feel the support and approval of our peers. It’s not a mild passing influence, it’s wired deep into us as a survival strategy as an animal. As we occupy the more geekish end of the spectrum we tend to seek our approval through demonstrating what we know. Moreover it’s the means by which we earn our living. Saying “I don’t know”, does not necessarily come easy to us.

DIP causes me problems. I disagree in part with some of its intent. I don't want my low level components knowing anything about my higher level components. I have no intention of doing that. That bit of DIP looks like nonsense to me.

When I make that assertion I know a couple of things. The Dependency inversion principle was cooked up by some very smart and well studied people. Given that, there is the distinct possibility that I am missing something. The confusion I feel when considering some aspects of DIP further lends weight to this. If I’m sat in a room of my peers, I risk looking foolish if I express my confusion.

Now imagine I'm a team lead or architect. I'm getting paid to know what I'm doing, and my ability to lead and instruct my team is dependent on their respect for my technical ability. I am making myself vulnerable when I admit to my team that I am experiencing confusion with some methods of application of an architectural principle that the whole .NET community seems to have accepted as self-evidently virtuous. It might be easier to just pretend I know, and then, to cover my inadequacy, coach my team in my version and understanding of DIP as the real thing.

This is how fashionable decisions are made. When the goal becomes to be seen by our peers as “good developers” we are engaged in a social exercise and the technical merits of our choices become secondary.

In every case where I have observed this happening, it has been either an absence of a team lead, or a failure on the part of the team lead to establish the safety needed for simple human honesty. Further, a failure to acknowledge that despite best intentions we are very fallible, and that intellectual honesty needs to be motivated. The technical impact is that we end up wearing principles like IoC as fashion accessories, with very little honest endeavour given to their underlying intent.

IoC is great. On the .NET platform it’s a particularly interesting time with so much growth in reactive features on the platform, and the TPL. IoC as it exists currently in commercial web application development on the .NET platform has more to do I would suggest with fashion than anything of substance.

Naiad, a toy service container.

In the previous piece, Service locator vs dependency injection, I had declared, "Service location is, and is provided via an interface on the context that can be implemented inside 10 minutes as a dictionary of lambdas if you had a pressing need." Which risks being a throw-away comment consisting largely of hot air. So I thought I'd knock one up, the guts of which is…

// the read/write lock guarding the registry; its declaration is implied in the original
private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
private readonly ConcurrentDictionary<string, object> _ctors = new ConcurrentDictionary<string, object>();

// services are registered as named constructor lambdas
public void RegisterService<T>(string name, Func<IServiceContainer, T> ctor) {
    _lock.EnterWriteLock();
    try {
        _ctors[name] = ctor;
    } finally {
        _lock.ExitWriteLock();
    }
}

// resolution looks the lambda up by name and invokes it
public T GetService<T>(string name) {
    _lock.EnterReadLock();
    try {
        Func<IServiceContainer, T> ctor = _ctors[name] as Func<IServiceContainer, T>;
        return ctor(this);
    } finally {
        _lock.ExitReadLock();
    }
}

It really is just a dictionary of lambdas, and wires up thus…

Naiad.ServiceContainer.Instance.RegisterService("request-behaviours",
    container => {
        return new List<IProcessBehaviour> {
            new SimpleSequenceBehaviour("process-request", container.GetService<List<string>>("life-cycle")),
            new BootstrapBehaviour("bootstrap",
                new Dictionary<string,string> {
                    {"area", "default"},
                    {"concern", "default"},
                    {"action", "default"},
                    {"app-path", "/web.harness"}
                }
            ),
            new ParseRequestBehaviour("parse-request", "Inversion.Web.Harness.Site"),
            new ViewStateBehaviour("view-state"),
            new ProcessViewsBehaviour("process-views"),
            new RenderBehaviour("render"),
            new RazorViewBehaviour("rzr::view"),
            new XmlViewBehaviour("xml::view", "text/xml"),
            new JsonViewBehaviour("json::view", "text/json"),
            new XsltViewBehaviour("xslt::view", "text/xml"),
            new XsltViewBehaviour("xsl::view", "text/html"),
            new HelloWorldBehaviour("work") {
                MatchingAllParameters = new Dictionary<string,string> {
                    {"action", "hello"}
                }
            }
        };
    }
);

It isn’t much of any use except as a base-line “simplest possible thing that works” to measure more industrial strength implementations against. Just prodding it with apache bench for a ballpark it’s a whisker faster than Spring.NET, which considering the facilities Spring offers leaves me quite impressed with Spring.

There’s a lot of value in returning to the simplest implementation that works as it’s easy to lose track of the cost of components that become an integral part of our applications.

So there's no misunderstanding: this is a toy. But sometimes all you need is a toy. Inversion started life as a toy, the initial prototype being a predicate/action dictionary, where the predicate acted on an event and the action acted upon a context. In scripting languages, knocking up prototypes with the general-purpose data structures lying around, such as lists and dictionaries, is very normal, and we could maybe do with it becoming more of a norm in .NET before we jump off into the deep-end with grand object-models. Something like the sketch that follows.
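A guess at the shape of such a prototype (hypothetical, not the actual initial code):

// predicates act upon an event; actions act upon a context
var behaviours = new Dictionary<Func<IEvent, bool>, Action<ProcessContext>> {
    { ev => ev.HasRequiredParams("id"), ctx => { /* act upon the context */ } }
};

// dispatch: every action whose predicate matches the event gets to act
void Fire(IEvent ev, ProcessContext ctx) {
    foreach (var pair in behaviours) {
        if (pair.Key(ev)) pair.Value(ctx);
    }
}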

As I'm proofing this I can see I need to move the exit from the read lock to after looking up the constructor but before executing it… as I say, it's a toy.
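Sketched out, that fix resolves the lambda under the read lock and invokes it only once the lock has been released:

public T GetService<T>(string name) {
    Func<IServiceContainer, T> ctor;
    _lock.EnterReadLock();
    try {
        // only the lookup happens under the lock...
        ctor = _ctors[name] as Func<IServiceContainer, T>;
    } finally {
        _lock.ExitReadLock();
    }
    // ...the registered constructor runs outside it
    return ctor(this);
}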

Service locator vs dependency injection.

Or, “Who would win the fight between a submarine and a tank?”

I much enjoyed reading a piece on service location vs dependency injection which chimed with some of my own thoughts over the years.

The article starts with a quote by Martin Fowler, the brilliant man whose brilliant work has given rise to so many cargo-cult practices in the .NET development community. I say “cargo-cult” as I’m implying unreasoned and absolute application of a single principle out of context, to the exclusion of any nuance. It’s worth reading Fowler’s piece as it’s a very balanced take on the subject and not absolutist.

Martin Fowler The choice between Service Locator and Dependency Injection is less important than the principle of separating service configuration from the use of services within an application.

Architecturally Inversion expresses service location, and it avoids any explicit use of dependency injection (DI) while at the same time assuming considerable use of DI by application developers. Given this, I thought some brief word on "why" might be useful, while adding my voice of concern about the overuse of DI.

Inversion favours the use of Spring as its IoC container, with XML configuration. I've long intended to try out autofac, as it too apparently has good XML config support. As long as a container has good XML config support and performs reasonably well I really don't care which one I use, because for me the primary requirement is a config notation, so that I can decouple my config from service use and binary deploys, and so that I can easily manage configuration for an application across different instances.

This core issue seems to get thrown out with the bath-water in nearly all DI-using solutions I have seen in the wild. Why? Because a bunch of people wrote that service locators are an anti-pattern. A lot of people. If it passed you by, Google "service locator anti pattern", pick a couple of pieces at random, and read for 15 minutes.

Most of the core arguments for service location being an anti-pattern stress the pitfall of runtime rather than compile-time errors, caused by a failure to fulfill dependency requirements. This is compounded by the dependency graph being buried in implementation code. These are valid concerns, but applying the counter-argument as a blanket absolute, I feel, leads developers into more pitfalls than it avoids.

The emphasis on compile-time errors in this argument leads the developer to favour statically-compiled container configuration, and in most cases the fluent interfaces emphasised by modern IoC containers. Without exception, in every case I've observed, this leads to Martin Fowler's chief concern getting thrown out with the bathwater.

separating service configuration from the use of services within an application

There are other more insidious issues introduced with the assumption of pervasive DI use.

Abusing constructors rather than abstracting

At a very simple level, most examples of DI vs service location assume constructor injection. This is for the valid reason of ensuring the object is instantiated with all its dependencies fulfilled, and this is the fig leaf we use to explain the approach. The truth hides a little anti-pattern of its own.

Dependencies will often vary for different implementations, so what we need to inject varies. The constructor is effectively a big gaping hole in a type's interface contract. We can run anything we want through there, and it can vary between implementations. So rather than abstract our dependencies we just throw them through the constructor. This is not a virtue.

In the world of .NET Blog-Driven Development combined with MVC.NET and Entity Framework, this leads over the course of years, almost inexorably, to the magic tarball of a dependency graph, with all the things touching all the things and the constructor being the means by which we communicate relationships between objects.

Assumptions about state

This abuse of constructors as a hole through our interfaces leads us to another problem.

It makes a huge assumption about the state of my type, and will almost compel inexperienced developers to inflict state upon types that don't need it. Without thought we turn a uses-a relationship into a has-a relationship, ensuring we can't use singletons where appropriate and steering ourselves away from a swathe of compositional patterns.

This is a big deal for performance in web applications, and it almost ensures that while we model relationships between data entities, we don't model behavioural relationships between objects or pay much attention to how objects use each other.

Writing assemblies to be consumed by others

The flaming strawman of a horror story that the notion of an anti-pattern is built on is the story of shipping an assembly to a third-party that’s using a service locator, with a dependency that isn’t fulfilled in the config, causing a runtime error that isn’t easy for the consumer to resolve as the dependency is expressed in configuration code.

I call this a strawman because using a service locator in this way for a shipping lib is a complete non-starter. The concern is applicable to any low-level or foundation assembly, though most of us are not shipping libs.

Conclave.Map and related assemblies have no notion of a service container or locator. It’s part of a data-access layer, and service location is none of its business. Nobody in their right mind is going to suggest injecting a service locator into something that isn’t participating in a component model. It may have a database connection however.

In WinForms a service container is threaded through all the components, because they are participating in a component model. The IO namespaces aren’t because they’re not participating in a component model.

Yes, there are a whole bunch of concerns that should not be addressing service location. There's a whole bunch of types that shouldn't have access to the application config at all, and that should be agnostic to their environment. Your data access layer probably shouldn't know anything about HTML or CSS… but that does not make HTML and CSS anti-patterns. It simply means that as professionals we make judgments about how we partition concerns within our application, and that, mindful of principles like the Law of Demeter, we understand we need to carefully manage the coupling between types.

If however a type's responsibility is coordinating between services, and providing application integration with services, then service location is a perfectly reasonable concern, and trying to pretend otherwise because somebody called it an anti-pattern will bend your application out of shape.

Patterns are not universally applicable articles of faith

Patterns are not catechisms, and they do not direct a moral imperative. Patterns offer solutions to common problems and bring with them their own consequences, which will vary between scenarios of application.

Consider message queues. Not unlike service locators, they introduce a fire-break of an interface decoupling, taking a lot of stuff that used to happen here and, by whatever means, making it happen over there. Quite where, or how often, isn't the business of the application developer looking at one end of it.

Should we wire in a service locator into a low level PDF library that is not participating in a component model? Probably not, for all the same reasons we probably shouldn’t wire in a message queue.

Is this to say then that message queues are an anti-pattern? No, it’s to say you’re a muppet if you wire a domestic power cable from the wall outlet into your wrist-watch to power it. Not because domestic power cables and wall outlets are bad or antithetical, but because if you insist on wiring in power cables in inappropriate ways, you’re going to get an electric shock and will probably render your watch inoperable.

Take 3 Java developers and 3 .NET developers to an imaginary bar in our heads. They’re going to write down an exhaustive list of all the ways in which it is appropriate or inappropriate to use a message queue. Once the Java and .NET devs are done introduce 3 Erlang developers, and there’s going to be a bar fight. This is because an Erlang developer is going to have a completely different architectural take on where it is appropriate to use messaging.

This might seem a bit of a contrived example, unless you are a .NET developer using Rx.NET or DataFlow in anger, in which case your notions of inter-object communication are probably drifting slowly toward the Erlang chaps and you might surprise your peers by joining the Erlang devs in the ensuing ruck. Further shocking the Java devs when one of their own screams "Scala!" and turns on them… Now throw in 3 Haskell devs and all bets are off. They're likely to label your whole type-system an anti-pattern… When we look under the table we find a Rails dev rocking themselves, whimpering "I just want to build awesome websites".

As a .NET dev I may favour compile time errors over runtime errors more than say a Python or Ruby developer, but if I am creating a component model that composes at runtime, and I try and eliminate runtime errors as a blanket architectural rule, then I am likely to bend my architecture out of shape.

Using a process context for service location

So how do Inversion and Conclave approach this? Hopefully with a sense of balance, and an awareness of when the focus is service location and when the focus is dependency injection, with the cut between the two at the appropriate layer for the application to separate its concerns.

Inversion centres around process context in much the same way that an ASP.NET application will centre around a HttpContext. This context is used to manage state for a running process and to mediate with actors and resources external to the application. The process context is also responsible for mediating between units of application and business logic, coordinating their activity.

The context has-a service container, which is injected in its constructor. This holds for all process context implementations. If I could specify the constructor on the interface I would (I might take a closer look at the MS design-by-contract library for .NET).

public ProcessContext(IServiceContainer services) {
    _serviceContainer = services;
    // snip
}

public IServiceContainer Services {
    get { return _serviceContainer; }
}

Which is completely unremarkable. Slightly more controversial is the interface for IServiceContainer.

public interface IServiceContainer : IDisposable {
    T GetService<T>(string name);
    bool ContainsService(string name);
}

This is perhaps slightly controversial as it's getting services by name rather than by type. That's because at this level the concern is service location via a generalised component interface. If the service container being used supports DI (and it will), injection is a configuration-level concern. The component isn't going to inflict its dependencies upon the application architecture.

public override void Action(IEvent ev, ProcessContext context) {
    if (ev.HasRequiredParams("id")) {
        using (ITopicStore store = context.Services.GetService<ITopicStore>("store::topic-map")) {
            store.Start();
            Topic topic = store.GetTopic(ev["id"]);
            context.ControlState["topic"] = topic;
            ev.Object = topic;
            if (topic == Topic.Blank) {
                context.Errors.CreateMessage(...);
            }
        }
    }
}

So here we have the action of an IProcessBehaviour. It uses the same interface as all process behaviours, it’s not a special little snowflake, and plugs into the architecture the same as every other component.

Crucially… this behaviour uses a context which has a service locator which this behaviour uses to obtain a topic store.

The behaviour, and all the other behaviours like it, have naff all. The process context has everything. Any immutable config for the behaviour is injected by the service container from which the behaviour is obtained; that is a config-level concern which remains the business of the behaviour's author and for them to worry about. DI done this way is not the business, nor the concern, of the framework. Service location is, and is provided via an interface on the context that can be implemented inside 10 minutes as a dictionary of lambdas if you had a pressing need.

Service location and dependency injection are different things

Obtaining a manifest of service component names from a database at runtime, where those components conform to a generalised interface, obtaining them from the service container by name, and then executing them is the concern of a service locator, not DI (a sketch follows). It's not about one being better than the other, it's about them being concerned with different things. Service location has an architectural impact on patterns of application composition. DI has an impact on configuring object instantiation.
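A hedged sketch of that runtime-composition case (LoadBehaviourNamesFromDatabase is a hypothetical helper; GetService is the container interface shown earlier):

// behaviour names come from data at runtime, not from compile-time wiring
IEnumerable<string> manifest = LoadBehaviourNamesFromDatabase(); // hypothetical
foreach (string name in manifest) {
    // resolution is by name, through the generalised container interface
    IProcessBehaviour behaviour = context.Services.GetService<IProcessBehaviour>(name);
    // the behaviour is then registered or executed via its general interface
}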

The reason the two streams get crossed is that every DI offering I have come across is built upon, and predicated on, a service locator. DI is one pattern that can be implemented with a service locator. So in almost every case you're going to come across the two things in the same place, called a "service container". Use of service location will naturally co-mingle with DI, because reasoned use of DI is a wonderful thing, and it shields our application from a lot of instantiation details, keeping them firmly ring-fenced as config.

To suggest that service location is an anti-pattern and DI is the one pattern (built upon service location) for all the things, is cargo-cultish.

Inversion and Conclave express service location and assume you will use whatever DI takes your fancy. What service locator and DI you choose to use is not my concern and should not impact the architecture.

Looking-up stuff

We as developers, out of necessity, seek guiding principles to inform our daily work. This isn't exclusive to IT; we do it in all aspects of life. "A stitch in time saves nine" is a truism that we may all find ourselves nodding to, as it's a useful sentiment. As is "measure twice, cut once" and "more speed, less haste", despite there being subtle tensions between such truisms. They are useful principles. Their application requires wisdom and judgment. They are useful models; they are not innate laws of the cosmos… The map is not the terrain.

The assertion that service location is an anti-pattern masks consideration and balance of an underlying concern which I shall grandly entitle “looking-up stuff”. The issue isn’t one of service locators, database connections, sockets or access to the file-system. The issue is whether an operation should be looking up information external to itself, or whether it should be acting on only the information passed to it. Related to this, but beyond the scope of this piece is whether an operation should be yielding side-effects, and if it should, how they are managed.

There isn't a simple answer to this concern because what is appropriate is contextual, determined by the component's role within the broader system. Should my component pull information from an outside source, or should it be given that information? Should my parser be a pull or a push parser? Whatever you decide is appropriate, it is probably silly to call pull-parsing an anti-pattern when your push-parser has probably been built on top of one, despite the fact that in most cases you should probably be using a push-parser.

There is no universally applicable principle that will ensure we wear the mantle of "good developer". There is no abdicating responsibility for the decisions we need to make, not just as programmers but as system analysts, even if you call yourself a developer. I become concerned when blanket truths replace consideration of context.

Service location is not an anti-pattern. There are anti-patterns that involve the use of a service locator, along with other similar constructs. There are anti-patterns that involve the use of DI. Most devices we use in programming involve both (virtuous) patterns and anti-patterns, which is really just a grand way of saying pros and cons. Generally speaking, people who summarise the world in terms of only pros or only cons are said to be engaging in splitting.

Splitting (also called black and white thinking or all-or-nothing thinking) is the failure in a person’s thinking to bring together both positive and negative qualities of the self and others into a cohesive, realistic whole. It is a common defense mechanism used by many people. The individual tends to think in extremes (i.e., an individual’s actions and motivations are all good or all bad with no middle ground.)

I need to take a look about and see what discussions there may be on the subject of polarised views and whether they are more prevalent among programmers than other professions.

Introducing Inversion.

Conclave originally began life around 2004 as a .NET CMS built around topicmaps, and influenced heavily by the WikiWikiWeb. It was a lot of fun but a personal side project, and was a little slow and clunky.

The next incarnation in 2006 was Acumen a .NET MVC web-application framework and CMS built with a team in Spain. Multi-tenant, multi-lingual and driving a couple of dozen public facing and internal extranet applications, Acumen was so much fun to develop and an incredible learning experience.

More recently, in 2011, I began working on a behaviour-oriented framework whose purpose was to replace MVC within Conclave, and that feature-set just got rolled into Conclave. This left Conclave very schizophrenic and almost impossible to explain to any uninvolved person. Conclave simply seemed to be too many things.

So. The behavioural composition malarkey has been taken out of Conclave and is now Inversion. Conclave.CMS and Conclave.Funder will then simply be applications that use Inversion rather than being joined at the hip. This it is hoped will help keep our separation of concerns a little more honest.

Over the course of the winter I'll write some more about Inversion and its design goals.