Configuring behaviour in Inversion: Part 2

Previous: Configuring behaviour in Inversion

In the last article I talked about the how and why of implementing behaviour configuration in Inversion. When I reviewed the work I concluded that it was a qualified success, with some work remaining before the matter could be put to bed entirely.

With the original implementation there was a lot of pressure to inherit from various classes in order to pick up their configuration features, which put a lot of strain on inheritance.

With the move to a common data structure for configuration we took away the pressure to inherit in order to gain varying configuration properties.

With the move to predicates as methods extending IConfiguredBehaviour we took pressure away from having to inherit from a particular class in order to pick up its condition predicates.

What we didn’t escape was the need to actually use these predicates in a condition, which still made it desirable to inherit from some classes in order to obtain the checks they perform in their conditions.

So this is really two out of three in this regard. We have relieved the pressure on inheritance in quite a marked way, but there remains an impediment that will require more thought and work.

The basic mechanism for addressing this wasn’t really the issue; the uncertainty was where such a mechanism should reside.

The issue isn’t implementing the lookup of predicate strategies; that can be as simple as a dictionary of lambdas. The cause for concern is where to define this lookup and where to inject it. Which object should be responsible for maintaining it? It probably fits well enough on the context, but that would require the context to hold implementation details of behaviours, and I want to think about that some.

This follow-up article talks about how progress was made with this remaining area: extending selection strategies for behaviours, with a focus on “open for extension but closed for modification”.

Selection criteria

One of the concepts that was firming up was the idea of selection criteria: a predicate acting upon a configuration and event to determine if a behaviour’s condition was a match. Last time these were implemented as extension methods for IConfiguredBehaviour, which were nice in that it was easy to add new selection criteria without having to change anything. The problem remaining with them was that conditions still needed to know about and use them. The uses-a relationship between behaviours and their selection criteria was not open for easy extension. The use of selection criteria was “hard coded”, and required inheritance to override, which is something we were trying to avoid, as we prefer “composition over inheritance for application behaviour”.

By the end of the last piece we had a reasonably firm idea that we wanted to inject selection criteria into behaviours as strategies to be used by conditions, without the conditions knowing anything about the strategies other than their general shape and how to use them. The details or purpose of a strategy are not important to a behaviour, which is concerned only with whether its selection criteria pass or fail.

So the first order of business was to make selection criteria a thing:-

public delegate bool SelectionCriteria(IConfiguration config, IEvent ev);

A function that acts upon an IConfiguration and IEvent, and returns a bool. This allows us to move our use of extension methods to lambda expressions which are easy to store and inject:-

(config, ev) => ev.HasParams(config.GetNames("event", "has"))

If a behaviour, as part of its configuration, were injected with a set of these SelectionCriteria, then during its condition check it could simply verify that each of these criteria returns true. We would effectively be able to inject a behaviour’s condition implementation.
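To make that concrete, here is a minimal sketch, using stand-in dictionary types rather than Inversion’s actual IConfiguration and IEvent, of a behaviour whose condition simply checks that every injected criterion passes:-

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Stand-in for Inversion's SelectionCriteria delegate, acting on plain
// dictionaries instead of IConfiguration and IEvent.
public delegate bool SelectionCriteria(Dictionary<string, string> config, Dictionary<string, string> ev);

public class SketchBehaviour {
	private readonly Dictionary<string, string> _config;
	private readonly IEnumerable<SelectionCriteria> _criteria;

	public SketchBehaviour(Dictionary<string, string> config, IEnumerable<SelectionCriteria> criteria) {
		_config = config;
		_criteria = criteria;
	}

	// The behaviour's condition is effectively injected: it passes only
	// when every criterion it was given returns true.
	public bool Condition(Dictionary<string, string> ev) {
		return _criteria.All(criteria => criteria(_config, ev));
	}
}
```

The behaviour here knows nothing about what any individual criterion checks, only that all of them must pass.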

That bit was easy… But how do we decide which of these SelectionCriteria to inject into a behaviour?

Stuff what selects stuff what selects stuff

Then I fell off a conceptual cliff, largely due to semantics, and a brief period spent chasing my own tail.

How to decide what stuff to inject?.. I spent most of a morning trying to formalise an expression of “stuff what selects stuff what selects stuff” that didn’t make me sound like a cretin. I’d walk into my garden and sit, think of a compositional pattern, run to my studio, and find I’d laid down a bunch of things that all sounded the same, the distinctions between them seeming very arbitrary.

The darkest 15 minutes of that morning was the brief period when I considered using behaviours to configure behaviours, and started seeing behaviours all the way down.

The reason for my anxiety was that I was becoming convinced I was starting to commit a cardinal sin of application architects: the Golden Hammer.

The concept known as the law of the instrument, Maslow’s hammer, gavel, or a golden hammer is an over-reliance on a familiar tool; as Abraham Maslow said in 1966, “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.”

The pull of the Golden Hammer for the architect is almost inexorable, as the core concern of the architect is to look for common patterns of structure and behaviour, to move from a diverging variety of abstractions to a converging use of abstractions. When you get hold of an implementation of a pattern that is producing good results for you, it is very hard to avoid seeing that pattern everywhere.

It’s also one of the primary mechanisms by which we turn our architectural cathedrals into slag heaps. It’s destructive because it represents the building of an increasingly strong bias about the applicability of an abstraction, which leads to poor judgment and the inappropriate application of abstractions. I call it a sin because it’s seductive, difficult to avoid, always recurring, and has bad consequences in the long term while feeling good in the short term.

I knew I was seeing the modelling of condition/action pairs everywhere, that this was part of a protracted phase I’m going through, and that I was vulnerable to the hubris of the Golden Hammer.

I also knew that some patterns are foundational and do have broad applicability. I don’t find the promiscuous use of key/value pairs or IEnumerable&lt;T&gt; an anxiety-provoking use of a Golden Hammer, and condition/action is as foundational as an if statement.

The rest of the morning was spent giving a performance of Gollum (from Lord of the Rings) as an application architect having an argument with himself about the semantics of stuff and select while anxious about getting hit by a hammer.

An optional extension of the existing framework

I broke out of this neurotic circular argument with myself by deciding that I would implement the abstraction of stuff what selects stuff what selects stuff as a straight-up extension of the existing framework, without altering any of the existing types or their implementations. If I could do this, then if the abstraction or its implementation turned out to be ill-conceived (as I felt it might be), it could remain an odd appendix of an experiment, removable at some point without any negative impact on the broader framework… If the extension sucked it simply wouldn’t get used… And I wouldn’t write about it.

It’s worth drawing attention to this benefit of implementing features as extensions.

When we talk about extensibility being good, and consider things like “open for extension but closed for modification”, we tend to view it from the angle of making the writing of extensions easier. The benefit that perhaps doesn’t get considered quite as much is that extending what is, without modifying it, is also a strategy for mitigating risk. It makes it easier to move away from extensions that turn out to be poorly conceived, with reduced consequence to the rest of the framework.

This is one of the goals of Inversion: development by extension, with an ability to evolve and move poorly conceived abstractions toward increasingly better abstractions. The ability to experiment, which is to say try out different approaches, needs to be facilitated, or our systems can’t evolve and we will never get past either cycles of system rewrites or legacies of poor judgment we can’t escape. Extensibility in this way is a strategy for easing the paying down of technical debt in the future, or, if you like, lowering the interest rate on technical debt.

Say what you see

So the worst-case scenario was an odd bit of code that Guy wrote one day and Adam laughed at. There wasn’t a risk of reverting anything, and with my anxiety removed the path to a solution became quite short and easy.

Once I decided I was losing the war on semantics and came to terms with my caveman-like expression of the problem, it was easy to start breaking it down.

stuff that selects stuff that selects stuff

I know how primitive that is, but it’s what I had… We’re going to look at a configuration, and on the basis of what we see there, we’re going to pick a bunch of selection criteria that a behaviour will use in its condition.

We have the last bit, the SelectionCriteria. The first bit is a match that can be expressed as a predicate acting upon an IConfiguration.

// stuff what selects, stuff what selects stuff
(Predicate<IConfiguration> match, SelectionCriteria criteria)

This concern pivots around a behaviour’s configuration, with selection criteria being picked on the basis of the configuration’s characteristics. So if, for example, a behaviour configuration contains the tuple ("event", "has"), the predicate that matches this would be associated with the SelectionCriteria to act on it as part of the behaviour’s condition.

match: (config) => config.Has("event", "has"),
criteria: (config, ev) => ev.HasParams(config.GetNames("event", "has"))

Struggling with semantics as I was, I decided to simply call this association of two predicates a case.

public interface IPrototypeCase {
Predicate<IConfiguration> Match { get; }
SelectionCriteria Criteria { get; }
}
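A concrete Case class satisfying this interface is little more than the pairing of the two members. The sketch below uses stand-in IConfiguration and IEvent types, not Inversion’s actual ones, with a trivial stub configuration added so the example is self-contained:-

```csharp
using System;
using System.Collections.Generic;

// Stand-ins for Inversion's actual interfaces, reduced to what this
// example needs.
public interface IConfiguration { bool Has(string frame, string slot); }
public interface IEvent { }

public delegate bool SelectionCriteria(IConfiguration config, IEvent ev);

public interface IPrototypeCase {
	Predicate<IConfiguration> Match { get; }
	SelectionCriteria Criteria { get; }
}

// The association of the two predicates: a match against configuration,
// and the selection criteria that match brings in.
public class Case : IPrototypeCase {
	public Predicate<IConfiguration> Match { get; private set; }
	public SelectionCriteria Criteria { get; private set; }

	public Case(Predicate<IConfiguration> match, SelectionCriteria criteria) {
		Match = match;
		Criteria = criteria;
	}
}

// A trivial stand-in configuration for demonstration purposes only.
public class StubConfiguration : IConfiguration {
	private readonly HashSet<string> _entries = new HashSet<string>();
	public StubConfiguration(params string[] entries) { foreach (var e in entries) _entries.Add(e); }
	public bool Has(string frame, string slot) { return _entries.Contains(frame + ":" + slot); }
}

public class StubEvent : IEvent { }
```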

This picking of selection criteria consults only the configuration, and given that the behaviour configuration is immutable, this picking can take place when the configuration is instantiated, which would only need to expose the selection criteria that had been picked. This was done by extending IConfiguration thus:-

public interface IPrototype : IConfiguration {
IEnumerable<SelectionCriteria> Criteria { get; }
}

Similarly constrained in terms of semantic inspiration, this extension of the behaviour’s configuration was called a prototype. I was thinking in terms of prototype-based programming, with which I’d had some success in the past for classification, inheritance, and overriding of relational data, and was thinking of a behaviour’s configuration tuples with associated functions as prototypes. Not the best example of prototypes, but vaguely in the ballpark; I needed to call it something and had lost patience with my own semantic angst. I was ready to call this thing “Nigel” if it allowed me to move on, and Prototype kind of fit.

A prototype is a configuration that expresses selection criteria that have been chosen for that configuration.

// Excerpt shown in the shape of the full class; the concrete base class
// name (Configuration here) is assumed for context.
public class Prototype : Configuration, IPrototype {

	public static readonly ConcurrentDictionary<string, IPrototypeCase> NamedCases = new ConcurrentDictionary<string, IPrototypeCase>();

	private readonly ImmutableHashSet<SelectionCriteria> _criteria;

	public IEnumerable<SelectionCriteria> Criteria {
		get { return _criteria; }
	}

	public Prototype(
		IEnumerable<IConfigurationElement> config,
		IEnumerable<IPrototypeCase> cases
	) : base(config) {
		var builder = ImmutableHashSet.CreateBuilder<SelectionCriteria>();
		foreach (IPrototypeCase @case in cases) {
			if (@case.Match(this)) builder.Add(@case.Criteria);
		}
		_criteria = builder.ToImmutable();
	}
}

This allows us to establish a base set of selection criteria out of the box that is easy for application developers to override, as seen in Prototype thus:-

NamedCases["event-has"] = new Case(
match: (config) => config.Has("event", "has"),
criteria: (config, ev) => ev.HasParams(config.GetNames("event", "has"))
);
NamedCases["event-match"] = new Case(
match: (config) => config.Has("event", "match"),
criteria: (config, ev) => ev.HasParamValues(config.GetMap("event", "match"))
);
NamedCases["context-has"] = new Case(
match: (config) => config.Has("context", "has"),
criteria: (config, ev) => ev.Context.HasParams(config.GetNames("context", "has"))
);
NamedCases["context-match"] = new Case(
match: (config) => config.Has("context", "match"),
criteria: (config, ev) => ev.Context.HasParamValues(config.GetMap("context", "match"))
);
// and so on

We can then see this being used in PrototypedBehaviour:-

public override bool Condition(IEvent ev, IProcessContext context) {
return base.Condition(ev, context) &&
this.Prototype.Criteria.All(criteria => criteria(this.Configuration, ev));
}

This now forms a solid base class that is open for extension. We have relieved the pressure of having to inherit from a particular class in order to inherit its selection criteria, which are now picked out during the behaviour’s instantiation based upon the shape of the behaviour’s configuration. This extension is implemented as an extension of the behaviour’s configuration, which is the focus of its concern and action.

The added benefit is that because only applicable selection criteria are picked for a behaviour, we never run redundant selection criteria as part of a condition. This in turn means we can grow our implementations of selection criteria without concern about a performance impact from redundant checks. Because behaviours are singletons, this selection process takes place just once per behaviour, so it scales nicely as the surface area of our selection criteria increases over time.
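The shape of this one-off selection can be modelled in miniature. The following self-contained sketch uses stand-in types, not Inversion’s, with a set of strings standing in for the configuration; like the article’s Prototype it keeps its named cases in a static registry, warts and all:-

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

public delegate bool SelectionCriteria(HashSet<string> config, string ev);

public static class CaseRegistry {
	// Named cases: a match predicate paired with the criteria it brings in.
	// Registering under an existing name overrides the default case.
	public static readonly ConcurrentDictionary<string, (Predicate<HashSet<string>> Match, SelectionCriteria Criteria)> NamedCases =
		new ConcurrentDictionary<string, (Predicate<HashSet<string>>, SelectionCriteria)>();

	// Runs once per (singleton) behaviour: only criteria whose match
	// predicate passes for this configuration are ever kept, so no
	// redundant criteria run during condition checks.
	public static List<SelectionCriteria> Select(HashSet<string> config) {
		return NamedCases.Values
			.Where(c => c.Match(config))
			.Select(c => c.Criteria)
			.ToList();
	}
}
```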

Another way of thinking of this injection of strategies is as composing, or “mixing in”, applicable implementation details at run-time based upon configuration.

A side benefit of this work, apart from making it easier to extend behaviours without having to introduce new types, is that we picked up an extra 5% to 10% in performance from the loss of redundant selection criteria.

The abuse of static members and future work

The maintenance of NamedCases as a static member of Prototype is a bad thing. Initialising the default cases from the Prototype static constructor is doubly bad. Lastly, this is mutable data being maintained as a static member, so I’m going straight to hell for sure.

It’s not because “global state is bad”, because it’s not. The notion that global state is bad requires ignoring our use of a database, file-system, configuration, service container, or getting the time from the system. The maintenance of non-global state globally is bad, and I’m not sure to what degree these default cases can be said to be global.

In maintaining the cases like this I’m needlessly tying the default implementation of selection criteria to the Prototype class, and I wonder if it should be associated with the behaviours type. I’m not sure yet.

The strongest case for not maintaining the named cases as a static is that we don’t need to.

Behaviours are used as singletons, so these cases can sit as instance members of either the prototype of a behaviour or the behaviour itself, but I’m not entirely sure where I want to place this concern yet, and at the moment I’m trying to impact prior work as little as possible.

The cases are injected via this constructor:-

public Prototype(
IEnumerable<IConfigurationElement> config,
IEnumerable<IPrototypeCase> cases
)

So I can easily kill the static members and inject the prototype from the behaviour’s constructor.
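As a sketch, again with stand-in types rather than Inversion’s actual API, that constructor injection might look something like this, with the cases arriving as instance state and no static member consulted:-

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public delegate bool SelectionCriteria(HashSet<string> config, string ev);

public class SketchPrototype {
	private readonly List<SelectionCriteria> _criteria = new List<SelectionCriteria>();
	public IEnumerable<SelectionCriteria> Criteria { get { return _criteria; } }

	// Filter the cases against the configuration once, at construction.
	public SketchPrototype(HashSet<string> config, IEnumerable<(Predicate<HashSet<string>> Match, SelectionCriteria Criteria)> cases) {
		foreach (var @case in cases) {
			if (@case.Match(config)) _criteria.Add(@case.Criteria);
		}
	}
}

public class SketchBehaviour {
	private readonly HashSet<string> _config;
	private readonly SketchPrototype _prototype;

	// Cases are injected per instance via the behaviour's constructor;
	// no static registry is involved.
	public SketchBehaviour(HashSet<string> config, IEnumerable<(Predicate<HashSet<string>> Match, SelectionCriteria Criteria)> cases) {
		_config = config;
		_prototype = new SketchPrototype(config, cases);
	}

	public bool Condition(string ev) {
		return _prototype.Criteria.All(criteria => criteria(_config, ev));
	}
}
```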

As is probably clear from this write-up, I struggled conceptually a couple of times through this process. The simplest possible thing at this point is not just desirable but needful, and the simplest possible way of injecting a prototype’s cases is:-

public Prototype(IEnumerable<IConfigurationElement> config):
this(config, Prototype.NamedCases.Values) {}

In the last post on behaviour configuration I stopped having solved two out of three parts of a problem. If I had continued without time to simply think the abstraction over, I would have started making things worse rather than better. I find it personally important to recognise when I am approaching this point. Much of my worst code has been written past the point when I should have simply stopped, regrouped my mental faculties, gained some perspective, sought outside opinions, and contemplated my options, weighing their pros and cons for more than two minutes.

Invariably, when I continue past where I should prudently have stopped, it has involved my own vanity and a concern about what other developers and architects would think of me. Being aware of one or more deficiencies in my code, and often aware that I am at risk of running afoul of one or more anti-patterns, I over-extend myself because I fear being called a “bad developer”… There’s a self-defeating vicious cycle in this… I have never finished, nor am I ever likely to finish, a piece of work that is perfect. Every single piece of work I complete will be flawed, and if I don’t come to terms with that I will over-extend myself each time and turn good work into bad.

When I accept that my work will iteratively improve a situation but at each iteration be left with flaws, I can then look to recognise and manage those flaws. I can establish my contingencies, and I can plan a safe and pragmatic route of improving abstractions.

The remaining problem, being able to inject selection criteria into behaviours on the basis of their configuration, in a manner that other developers can easily extend to meet their own needs and without changing the pre-existing framework, has been solved. There is the uncomfortable hangnail of NamedCases being a static member, but it’s safe where it’s parked and easy to move away from without negative impact. So this is where this iteration should end. I now need to let this abstraction bed in and ensure it doesn’t have any unintended consequences before anointing it and baking it into the framework any further.