How Functional Programming Found Me

I could say that the seeds were planted at a young age. Visiting EPCOT Center as a child, reading plenty of science fiction (especially Isaac Asimov), and taking an interest in science and all things technological were enough to ignite my early fascination with intelligent machines. As a teenager, I started reading books by the Artificial Intelligence researcher Margaret Boden, which is how I first found out about LISP. In my early 20s I tinkered with ANSI Common LISP, but was disappointed by the seeming lack of a community, the lack of a rich framework and tool base, and the poor interoperability with the leading frameworks of the day, Java and .NET. Moreover,

(all (of (the (parentheses (were (driving (me (batty)))))))).

So my brief foray into functional programming was halted prematurely while I increasingly absorbed myself in the dominant paradigm, object-oriented programming (for the remainder of this article whenever I refer to object-oriented programming I'm referring to the canonical, imperative style that is the norm when using an OOP language). And so I went on my way as a nice little C# developer, naively content in my myopic view of the world as being composed of objects while I indulged in polymorphism, design patterns, and the like. All of that changed in my late 20s, however, when one monumental .NET technology caused massive tectonic shifts in my software developer worldview—LINQ.

The first step in my transition to the functional world came when I realized that 60-80% of the data access code I was writing consisted of LINQ queries. Concomitant with that realization, I began to see solutions to abstract problems as data streams, rather than as things that set and modify values and then do stuff with those values.

The second step was when I noticed that I was increasingly using .NET delegates to implement what I quaintly dubbed "meta-methods," similar to that below:

bool InvokeSafe(Func<bool> action) {
    try {
        return action();
    }
    catch (InvalidOperationException) {
        return false;
    }
}

At the time I naively thought I'd stumbled upon some powerful new dev technique, unaware that such patterns are the norm and one of the pillars of functional programming, where they are idiomatically referred to as higher-order functions. The following (contrived) example shows what the equivalent code might look like in F#:

let invokeSafe func =
    try
        Some(func())
    with
    | :? InvalidOperationException -> None
    | _ -> reraise()

Indeed, as the C# language continues its transition into a hybrid OOP/functional language, such methods and extension methods have become commonplace in the .NET world.

Lastly, I had developed a renewed interest in Artificial Intelligence and various other abstract computational approaches. The functional programming style, by its very nature, lends itself readily to elegant, efficient, inherently parallelizable implementations of such algorithms.

The stage was set; I took the dive.

I chose to learn F# as my first "official" functional language because it is a Microsoft-sanctioned, first-class .NET language that supports all the nifty frameworks and toolkits that I have come to know and love. Also, Don Syme and the rest of the F# crew put a great deal of effort into ensuring that F# plays nice with other .NET languages through its support of OOP and other idiomatic .NET features. Put more concisely, F# is a multi-paradigm language, just like C#, though it approaches that conceptual divide from the exact opposite direction. Not surprisingly, this allows applications, libraries, and components constructed using F# to dovetail nicely with those constructed using a classic OOP language like C#, paving the way for what I initially termed "hybrid applications" but should rightly be called multi-paradigm applications (more on this later).

And so as time flew by, and my skill and knowledge in the functional paradigm grew, I had a profound philosophical epiphany that transcended the realm of all things techy and geeky; veritably, it changed the way I view the world around me.

It's easy to view the world as constructed of things, which in turn move around and perform actions against other things—hence, the impetus for the creation of the first OOP languages to begin with. However, one may also view the world around oneself as constructed by doings; that is, what we perceive as reality is actually the product of long chains of interdependent quantum processes, going all the way back to some hypothetical First Cause.

From this vantage point, the world of object-orientation and its thing-centric bias toward abstraction rather resembles Georges Seurat's painting A Sunday Afternoon on the Island of La Grande Jatte—static, frozen, unmoving in its pointillist style.

Seurat - A Sunday Afternoon on the Island of La Grande Jatte

And the functional style in comparison? Perhaps more like post-surrealist modern art—colorful, dynamic, multidimensional. Now, this is not to say that I'm done with C# and all other OOP programming languages and techniques. Far from it. Rather, I now approach software development from a balanced perspective and with the understanding that OOP and FP are merely different problem-solving approaches, each more suited than the other for solving specific classes of problems. Like Yin and Yang, they both complement each other, and wise is the developer who knows not only where each is appropriate, but also how to use them together in a single application in order to solve challenging problems. In my humble opinion, such multi-paradigm applications, which combine both the imperative Yang of OOP with the declarative Yin of FP will truly represent the cutting edge, the realm of elites, in the years and decades to come.

Yin Yang

In closing, I now consider myself to be a functional software developer (while remaining a skilled object-oriented developer). While .NET is my current specialty, I do believe in staying open to new possibilities, which is why I have the likes of Scala, Clojure, Erlang, and Go on my radar. It's an exciting, dynamic world out there, and there's no way I'm going back!


Andrew Ng of Stanford Artificial Intelligence Laboratory on the Future of AI and Robotics

Andrew Ng, director of the Stanford Artificial Intelligence Laboratory (SAIL), on the future of robotics and AI. In this short lecture he talks about many of the problems facing AI researchers, yet gives hope for real progress in this field.

His sentiments in his closing remarks closely mirror my own:

So let me just close with a personal story. Ever since I was a kid I always wanted to work on AI and build smart robots and then I got older and got into college, and I started to learn about AI in college, and I learned how hard it was. It turns out that AI has given us tons of great stuff: AI has helped us build web search engines like Google and Bing, it has given us spam filters, it has given us tons of great stuff; but there has always been this bigger dream of not just building web search engines and spam filters but of getting machines that can see the world and think and understand the world and be intelligent the way that people are. For the longest time I gave up on that dream, and for many years as a professor I was even advising students to not think about that bigger AI dream... that's something I'm not that proud of now. It was only five years ago, when I learned about these ideas in neural science and machine learning, that the brain might be much simpler than we had thought and that it might be possible to replicate some of how the brain works in the computer and use that to build perception systems... that was the first time in my adult life when I felt like we might have a chance of making progress in this again... when I look at all of our lives I see all of us spending so much time in acts of mental drudgery, we spend so much time cleaning our houses, filling out silly paperwork, having to go shopping for trivial items. I think that if we make computers and robots smart enough to do some of these things for us, to free up our time for higher endeavors, what could be more exciting than that?

In future blog posts I'll likely be discussing this remarkable institution and some of the contributions its faculty have made to the fields of AI and computer science. Some heavy hitters like John McCarthy came out of SAIL, and I still think that original vision may be realized one day. One day...

Markov Chain Text Generator In F#

After spending a couple of months with my head down focusing on my day job and other life concerns I decided to get back to learning F#. So I dusted off the cobwebs and dove into a project that is somewhat trivial, yet fun and engaging enough to get me thinking in the Functional Style again—I decided to write a Markov Chain text generator.

In simple terms, a Markov chain is a sequence of probable states in which the next state depends only on the current state of a given system. For example, a Markov chain might be used to describe weather:

  • If it is sunny, then there is a 30% chance of rain tomorrow and a 70% chance it will be sunny again.
  • If it is rainy, then there is a 60% chance of rain tomorrow and a 40% chance it will be sunny.
  • And so on, and so forth...
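To make the idea concrete, here's the toy weather chain above sketched in Python (the transition numbers are the made-up ones from the bullets, and the seed is arbitrary):

```python
import random

# Transition table for the toy weather model above
transitions = {
    "sunny": [(0.3, "rainy"), (0.7, "sunny")],
    "rainy": [(0.6, "rainy"), (0.4, "sunny")],
}

def next_state(current, rng):
    """Roll against cumulative probabilities to pick the next state."""
    roll = rng.random()
    acc = 0.0
    for prob, state in transitions[current]:
        acc += prob
        if roll <= acc:
            return state
    return transitions[current][-1][1]  # guard against float rounding

rng = random.Random(42)  # seeded so the walk is repeatable
forecast = ["sunny"]
for _ in range(9):
    forecast.append(next_state(forecast[-1], rng))
print(forecast)
```

The key property is that each day's forecast depends only on the previous day's state, nothing earlier.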

Markov chains can also be useful for text analysis in that they can model what words are likely to follow any other given word in a text document. The program below does exactly that:

  1. open System
  2. open System.IO
  3. open System.Collections.Generic
  4. open System.Collections.Concurrent
  5. open System.Text.RegularExpressions
  6. 
  7. module Program =
  8.     let fileName = @"..\..\magnacarta.txt"
  9.     let numWordsToGenerate = 50
  10.     let wordsRegex = new Regex(@"[A-Za-z]+\'{0,1}[A-Za-z]+", RegexOptions.Compiled)
  11.     let rand = new Random()
  12. 
  13.     let getLines (stream : Stream) =
  14.         let sr = new StreamReader(stream)
  15.         seq { while not <| sr.EndOfStream do yield sr.ReadLine() }
  16. 
  17.     let getWords (line : string) =
  18.         seq { for regexMatch in wordsRegex.Matches(line) do yield regexMatch.Value.ToLowerInvariant() }
  19. 
  20.     let buildDict (stream : Stream) =
  21.         let dict = new Dictionary<string, Dictionary<string, int>>()
  22.         let addWord previous current =
  23.             if not(dict.ContainsKey(previous)) then
  24.                 dict.Add(previous, new Dictionary<string, int>())
  25. 
  26.             let dict2 = dict.[previous]
  27.             if dict2.ContainsKey(current) then
  28.                 dict2.[current] <- dict2.[current] + 1
  29.             else
  30.                 dict2.Add(current, 1)
  31.         stream
  32.         |> getLines
  33.         |> Seq.map getWords
  34.         |> Seq.concat
  35.         |> Seq.pairwise
  36.         |> Seq.iter (fun (previous, current) -> addWord previous current)
  37.         dict
  38. 
  39.     let buildMarkov (dict : Dictionary<string, Dictionary<string, int>>) =
  40.         let markov = new Dictionary<string, (float * string) list>()
  41.         for kvp in dict do
  42.             let dict2 = kvp.Value
  43.             let total = float (Seq.sum dict2.Values)
  44.             let chain = [ for wordPair in dict2 do yield ((float wordPair.Value) / total, wordPair.Key) ]
  45.             markov.Add(kvp.Key, chain)
  46.         markov
  47. 
  48.     let search (func : ('b -> 'a -> ('a option * 'b))) (initial : 'b) (items : seq<'a>) =
  49.         let rec findIt (enum : IEnumerator<'a>) (acc : 'b) =
  50.             if not(enum.MoveNext()) then
  51.                 None
  52.             else
  53.                 let result = func acc enum.Current
  54.                 if Option.isSome <| fst result then
  55.                     Some(Option.get <| fst result)
  56.                 else
  57.                     findIt enum (snd result)
  58.         findIt (items.GetEnumerator()) initial
  59. 
  60.     let getRandWord (chain : (float * string) list) =
  61.         let roll = rand.NextDouble()
  62.         let result = search (fun (acc : float) (item : (float * string)) ->
  63.                                 let myOption = if (roll <= acc + fst item) then Some(item) else None
  64.                                 (myOption, acc + fst item))
  65.                             0.0
  66.                             chain
  67.         if Option.isNone result then failwith "Error getting random word." else snd <| Option.get result
  68. 
  69.     [<EntryPoint>]
  70.     let main _ =
  71.         use fileStream = new FileStream(fileName, FileMode.Open)
  72.         let markov = fileStream |> buildDict |> buildMarkov
  73.         let firstWord = Seq.nth (rand.Next(markov.Count)) markov.Keys
  74.         Seq.unfold (fun (word, count) -> if (count = numWordsToGenerate) then None else Some(word, (getRandWord markov.[word], count + 1))) (firstWord, 0)
  75.         |> Seq.iter (printf "%s ")
  76. 
  77.         Console.ReadLine() |> ignore
  78.         0

Let's examine this program in more detail.

Lines 13 - 37 (the getLines, getWords and buildDict functions) serve a single purpose: to read text from a stream, parse out the words, then keep a running count of which words follow each other. Look at lines 31 - 36 in particular:

    stream
    |> getLines
    |> Seq.map getWords
    |> Seq.concat
    |> Seq.pairwise
    |> Seq.iter (fun (previous, current) -> addWord previous current)

What this does, in a nutshell, is:

  1. Read each line from a stream.
  2. Break apart each line into a sequence of words.
  3. Smash together the sequence of sequences (Seq.concat) so that we have a single sequence of words—this is analogous to the SelectMany LINQ extension method you might have used in C#.
  4. Take each word and the one that follows it (Seq.pairwise).
  5. Add the word-combo to our dictionary.

The Seq.pairwise function is the hero of the day here. This underrated and rarely-used function fits perfectly in this application and keeps us working in the functional style.

It should be noted that for tallying up the count of the words I decided to use a dictionary of dictionaries (see line 21). While technically this is an imperative (hence, non-pure) style of programming, for performance reasons I felt it was a fine approach.

When all the counts have been acquired, we have a data structure that looks something like this:

Markov Chain

Each key in the main dictionary maps to another dictionary that contains a list of words and the count of each. From here it is easy to compute the probabilities for each word in each chain. That's where the buildMarkov function comes into play. All that this does is use Seq.sum to count up the total number of words in each chain, then divide each count by that total to come up with a probability value between 0.0 and 1.0. Now on to generating the words...
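That normalization step is simple enough to sketch in Python (the counts below are invented for illustration):

```python
# Word-follower counts, as tallied by the buildDict stage
counts = {"the": {"quick": 1, "lazy": 3}}

# Divide each count by the chain's total to get probabilities in [0.0, 1.0]
markov = {}
for word, followers in counts.items():
    total = sum(followers.values())
    markov[word] = [(count / total, follower) for follower, count in followers.items()]
print(markov["the"])
```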

You'll notice that I implemented a special search higher-order function. As far as I can tell, there is no out-of-the-box function in F# that lets you search through a sequence while passing along an accumulator, yet this is precisely what we need. The first parameter to the search function is itself a function which:

  • Takes an accumulator value.
  • Takes a sequence item.
  • Returns Some(item) if the search criteria is met or None if it isn't, tupled to the next value of the accumulator.

The additional parameters are the initial value of the accumulator and the sequence to operate against.
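In Python terms, the contract looks something like this (the running-total example is contrived):

```python
def search(func, initial, items):
    """Scan items while threading an accumulator through each step.
    func(acc, item) returns (hit_or_None, next_acc); stop on the first hit."""
    acc = initial
    for item in items:
        hit, acc = func(acc, item)
        if hit is not None:
            return hit
    return None  # the F# version likewise returns None when nothing matches

# Find the first number whose running total pushes past 10
hit = search(lambda acc, x: (x if acc + x > 10 else None, acc + x), 0, [3, 4, 5, 6])
print(hit)
```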

The bulk of the work for this function is accomplished using the nested function called findIt. The astute reader will notice that this is a tail-recursive function: each recursive call reuses the current stack frame, so it can operate against a sequence of any size without blowing the stack. To verify that this is the case, let's examine the body of the findIt function using Telerik's JustDecompile utility against the compiled executable. Here's what it looks like translated into C#:

while (@enum.MoveNext())
{
    Tuple<FSharpOption<a>, b> result = FSharpFunc<b, a>.InvokeFast<Tuple<FSharpOption<a>, b>>(this.func, acc, @enum.Current);
    if (OptionModule.IsSome<a>(Operators.Fst<FSharpOption<a>, b>(result)))
    {
        return FSharpOption<a>.Some(OptionModule.GetValue<a>(Operators.Fst<FSharpOption<a>, b>(result)));
    }
    acc = Operators.Snd<FSharpOption<a>, b>(result);
    @enum = @enum;
}
return null;

You'll notice that the F# compiler turned our "recursive" function into a while loop. Pretty cool!

Once we have the search function in our tool belt, the rest is easy. The getRandWord function uses search to pick a random word from a given Markov chain by accumulating the probabilities as it moves along. So if the first item in the chain has a probability of 0.25, the next item has a probability of 0.50, and the random number generator rolls a 0.76, then it will select (you guessed it) the third item.
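Here's that same cumulative-probability walk sketched in Python, using the numbers from the example (the chain contents are made up):

```python
# A Markov chain: (probability, word) pairs that sum to 1.0
chain = [(0.25, "first"), (0.50, "second"), (0.25, "third")]

def get_rand_word(chain, roll):
    """Accumulate probabilities until the roll falls inside a word's slice."""
    acc = 0.0
    for prob, word in chain:
        acc += prob
        if roll <= acc:
            return word
    raise ValueError("Error getting random word.")

print(get_rand_word(chain, 0.76))  # 0.76 > 0.25 + 0.50, so "third" wins
```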

The generation of the actual words is accomplished using my favorite F# function, Seq.unfold. Line 74 simply takes a count value and an initial word (state) to "unfold" the next word in the sequence, or return None if the max count has been reached.
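For readers more at home outside F#, Seq.unfold has a close analogue in a Python generator (the countdown seed here is arbitrary):

```python
def unfold(step, state):
    """Python analogue of Seq.unfold: step(state) returns
    (value, next_state), or None to end the sequence."""
    while True:
        result = step(state)
        if result is None:
            return
        value, state = result
        yield value

# Thread state through the generator, the way line 74 threads (word, count)
values = list(unfold(lambda n: (n, n - 1) if n > 0 else None, 5))
print(values)  # [5, 4, 3, 2, 1]
```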

The output of the program looks something like this when using the Magna Carta as input.

services and sworn knights cross as is responsible for god and by the owner of that it shall fail to bring with four barons shall have all evil tolls except those possessions or to be given or if any such thing be responsible to attach and the common counsel of

While it doesn't make a lot of sense grammatically, it does have some syntactic resemblance to natural English language. In fact, now you have an idea of how spammers try to fool spam detection systems. More noble uses would include word suggestion algorithms in search applications (e.g. Google).

So that's it for basic text analysis using Markov Chains in F#.

Happy coding!

Download F# Source

Are Design Patterns Really Anti-Patterns?

When I first read the Gang of Four's monumental book Design Patterns: Elements of Reusable Object-Oriented Software, I thought that I had discovered the Holy Grail of software development. I was amazed at the power and flexibility that design patterns afforded me in solving difficult problems, the seeming elegance bestowed upon the humble programmer who was constantly embroiled in battle against the demons of ever widening problem-space complexity.

As so often happens to those who seek continuous improvement, however, very large tectonic shifts in my worldview started occurring as I learned more and more about the theory behind software and started opening myself to different paradigms of programming.

Buzzword Bingo

As more and more recruiters identified Design Patterns as a "hot item" on my resume I got an increasingly uneasy feeling, not because it's a frivolous skill to have—design patterns are useful in a number of scenarios—but because people I talked to didn't understand why they were useful. Moreover, I find that a lot of programmers who should know better see them as a silver bullet instead of a set of tools that are appropriate in specific contexts.

Failure is Always an Option

Not only is failure always an option, it is necessary for growth. One of my big pet projects, which I've worked on and off over the past few years, is a stock market analysis engine called Continuum. My intent behind the last iteration was to create a genetic algorithm running behind a backtesting engine. The genetic algorithm would iterate against a number of historical data sets and over time evolve an effective trading strategy consisting of technical and fundamental indicators. Simple, right? Nope.

Even with all my knowledge of loose coupling, abstraction, and other architectural best practices I found that the solution to this problem was still very hard for me to conceptualize. After playing with a bunch of reflection-based approaches and something I dubbed a "meta-interface," my project slammed into a brick wall. I put it on hold and pursued other endeavors. Little did I know at the time, there in fact is a tool which lends itself very well to this problem (partial function applications) but it is not available in the world of object-oriented programming.
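To give a flavor of why partial application fits, here's a hypothetical sketch in Python: the strategy, its window parameters, and the moving-average logic below are all invented for illustration.

```python
from functools import partial

# A trading "strategy" is just a function; its tunable knobs come first
def crossover_strategy(short_window, long_window, prices):
    short_avg = sum(prices[-short_window:]) / short_window
    long_avg = sum(prices[-long_window:]) / long_window
    return "buy" if short_avg > long_avg else "sell"

# A genetic algorithm could evolve the window parameters and hand the
# backtester ready-to-run strategies: no reflection, no "meta-interfaces"
candidate = partial(crossover_strategy, 2, 4)
print(candidate([10, 11, 12, 13]))  # rising prices -> "buy"
```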

OOP is Fundamentally Broken

After my setback with Continuum I started to wonder if I wasn't barking up the wrong tree entirely using an object-oriented approach. In addition, I witnessed a number of programmers in the "professional" workplace nonchalantly employing horrific OOP practices such as:

  • Using inheritance to extend the behavior of a class/component.
  • Creating "God" classes containing hundreds of properties and/or methods.
  • Neglecting the power of OOP entirely and just using C# to write really bad procedural code.

People smarter than I have pointed out that object-oriented languages likely weren't even implemented correctly in the first place! As it all sunk in, I came to realize that:

  1. There is a better way.
  2. It's conceivable that OOP design patterns, especially those of the behavioral variety, are hacks which attempt to compensate for fundamental shortcomings of the OOP paradigm.

Around this time I also realized that more than half the C# code I was writing consisted of LINQ query expressions and lambdas.

I was ready for the next step...

Functional Programming in the .NET World

I began teaching myself F#, Microsoft's "official" functional programming language for the .NET stack, and immediately fell in love.

The declarative style of the language jived well with my style of thinking and made it seem almost effortless for me to implement complex abstractions and algorithms.

Philosophically, I think it's fun to imagine that the universe around us is really built from processes instead of things. For instance, a person isn't a concrete object with properties and behaviors but is rather a process of life that changes and adapts over time. From this perspective, functional programming paradoxically lends a more accurate representation of the real world than OOP.

More pragmatically, functional programming languages are far better suited to representing complex algorithms because they facilitate behavioral composition as opposed to state composition (OOP).
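A tiny illustration of behavioral composition, sketched in Python: two behaviors snap together into a new one without any class hierarchy.

```python
# Compose two functions into a pipeline: apply f, then g
def compose(f, g):
    return lambda x: g(f(x))

normalize = compose(str.strip, str.lower)
print(normalize("  Hello World  "))  # "hello world"
```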

The Proof is in the Pudding

In the next few blog entries I will set out to show how a functional programming language, F#, may be used to implement many of the classic Gang of Four design patterns in a functional, declarative, and (hopefully) much more elegant style. I will begin with some of the low-hanging fruit.

Let's see what the Chain of Responsibility pattern might look like in a functional style as opposed to a standard C#, object-oriented implementation...
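As a small down payment on that promise, here's roughly the shape I have in mind, sketched in Python: the chain becomes a plain list of handler functions, tried in order (the size-based handlers are invented for illustration).

```python
def handle(handlers, request):
    """Try each handler in turn; the first non-None result wins."""
    for handler in handlers:
        result = handler(request)
        if result is not None:
            return result
    return "unhandled"

handlers = [
    lambda n: "small: %d" % n if n < 10 else None,
    lambda n: "medium: %d" % n if n < 100 else None,
    lambda n: "large: %d" % n,
]
print(handle(handlers, 42))  # falls through to the second handler
```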

Next up: A functional version of the Chain of Responsibility

Think Multidimensionally

Many people try their hand at computer programming; most do it wrong. The fact of the matter is that it takes a different kind of thinking, a higher-dimensional form of thinking, to master the art and science of software development. [1]

Like the humble square in Edwin Abbott's famous story Flatland many programmers are trapped in a one-dimensional approach to problem solving without even an inclination that there's a better way.

To build software solutions that are simple yet elegant, extensible and reliable, it takes a three-dimensional mode of thinking which few possess, or even realize exists. Those three dimensions are:

  1. Procedural programming
  2. Inheritance
  3. Object composition
Moving in a Straight Line: Procedural Programming

I’m sure you wrote your first “Hello World” program in a simple, procedural language such as BASIC. Procedural programming is much like baking a cake: you have a starting point, you have an objective, and you have a series of steps which are executed in linear fashion in order to get from the starting point to the final objective.

Of course, procedural programming may involve various flow control mechanisms such as conditional (IF/THEN) statements, loops, function and/or procedure calls, and more. These do not fundamentally add depth to this form of programming—they all operate along a single continuum of machine logic.

Here is an example of procedural programming in C#: [2]

static void Main(string[] args) {
    int count = 0;
Loopstart:
    Console.WriteLine(count * count);
    count++;
    if (count < 10) goto Loopstart;
}


Anyone who calls him/herself a programmer understands this dimension. The next one is a little bit trickier...

Ascension and Descension: Inheritance [3]


Inheritance is the notion that one class of entities derives from a more general class. For example, a “Person” class might derive from an “Animal” class, which in turn derives from an “Organism” class. Each child class inherits the behavior and properties of its parent classes.

This simple diagram illustrates the relationships amongst these classes.

In C# the classes might be coded like this:

abstract class Organism {
    public abstract string Genus { get; }
    public abstract string Species { get; }
}
// Animal is abstract too, since it doesn't supply Genus or Species itself
abstract class Animal : Organism {
    public void Walk() { }
}
sealed class Person : Animal {
    public override string Genus { get { return "Homo"; } }
    public override string Species { get { return "sapiens"; } }
    public void Talk() { }
    public string Name { get; set; }
}

Inheritance is a powerful, yet often abused feature of modern object-oriented languages. Examples of inheritance abuse are legion. A simple glance at this thread will show you how far down this rabbit hole goes.

Probably about half to three quarters of the programmers out there understand inheritance and how to apply it to trivial problems. However, when approaching much more difficult problems most of them fly off the track because they try to use inheritance to extend properties and behavior of parent classes. This is WRONG WRONG WRONG.

Inheritance has two proper, virtuous uses:

  1. Polymorphism—the substitution of one entity by another, similar entity.

  2. Code reuse amongst classes which are conceptually equivalent at a base level.

One must ask him/herself: “Are all of these classes logically equivalent at a base level? Can I substitute a child class for a parent class and still expect it to work as intended?”

This is called the Liskov Substitution Principle and it is one of the most important concepts of OOP.

Now that we have an understanding of inheritance and its purpose, let’s move on to the last and most important dimension...

Dissolve and Coagulate: Object Composition [4]


After reading my philosophy concerning inheritance you may be wondering just how you might go about extending the properties and behaviors of a base class, or how you might build intricate, many-faceted data structures which serve different purposes at runtime. Object composition is the most obvious choice. [5]

Object composition is the notion that an object instance of a certain class may be composed of smaller, more basic pieces which are generally interchangeable. Some design patterns, such as Builder, operate directly along this dimension.

Composition may be visualized like this:

Here is an example in C# of proper use of composition:

abstract class HumanBehavior {
    public abstract void ExecuteBehavior(Person person);
}
sealed class Jump : HumanBehavior {
    public override void ExecuteBehavior(Person person) {
        Console.WriteLine("{0} is jumping up and down.", person.Name);
    }
}
class Person {
    public HumanBehavior Behavior { get; private set; }
    public string Name { get; private set; }
    public Person(string name, HumanBehavior behavior) {
        Name = name;
        Behavior = behavior;
    }
    public void DoSomething() {
        Behavior.ExecuteBehavior(this);
    }
}
class Program {
    static void Main(string[] args) {
        Person john = new Person("John", new Jump());
        john.DoSomething();
    }
}

This trivial example merely prints the text “John is jumping up and down.” What’s important to get out of it is the notion that behaviors which would otherwise get slapped on to a class using inheritance have instead been abstracted out into their own family of classes. The Person class now becomes a composite which may accept any child class of HumanBehavior and call it as it sees fit.

It is clear that the dimensions of Object Composition and Inheritance go hand in hand and flow easily into and out of each other. This is not an accident.

  • Inheritance facilitates composition through polymorphism. It also prevents abuse of composition (objects built from too finely grained pieces, a.k.a. compositional chimeras) and flat-out copy-and-pasting by allowing for code reuse at a procedural (first-dimension) level.

  • Composition bolsters inheritance by removing extension-oriented code from the class itself, and allowing the class to focus on only the most essential elements—the quintessence, as it were. [6]

Composition and inheritance are both powerful forces to be reckoned with when used in unison. However, composition trumps inheritance. As the Gang of Four state plainly in their monumental book Design Patterns: “Favor object composition over class inheritance.”

By using composition to extend functionality you create software that is more flexible, maintainable, extensible, and powerful.

A note on explicit interfaces:

The use of inheritance (abstract classes, etc) is obviously not the only means of implementing a polymorphic design. Languages such as C# and Java support explicit interface declarations which function as de-facto code contracts. I’m all for using interfaces polymorphically to facilitate object composition, but beware... this is another language feature that is often abused.

Newbie and intermediate level programmers will often use interfaces as their tool-of-choice for implementing polymorphism. This is a mistake. The designers of both Java and C# chose to build in support for interfaces because it represents a compromise in languages which otherwise do not support multiple inheritance. They wisely chose to avoid the myriad headaches which come with the multiple inheritance model; for any given class, programmers are limited to a single “silver-bullet” parent class which they can derive from alongside any number of interfaces for “mix-in” behavior.

C++ programmers might not be fond of this model but I love it. Why? It enforces the Single Responsibility Principle. One class—one purpose. End of story. Too often have I seen programmers who should’ve known better implement polymorphic chimeras by overloading a class with too many interfaces. Don’t go down that road.

Like Yin and Yang, inheritance and composition flow elegantly from one to the other. Interfaces support both of those dimensions to achieve a greater harmony.


The three dimensions of software development—procedural, inheritance, compositional—form the basis of a powerful and elegant approach to problem solving.

Almost every programmer understands the first dimension. A few understand the second. A handful understand the third.

Programmers who understand all three dimensions and how to apply them properly are as rare as unicorns.

Modern software development truly is a form of alchemy, both an art and a science which reacts and interacts to create a Great Work, a machine built of pure mind-stuff. To master this craft requires thinking in three dimensions...


Nope. There’s more.

Prepare yourself. You’re about to get sucked through the Klein bottle into the 4th dimension...

Evolution and Devolution: Time

There’s another dimension to software development which often gets overlooked: time. Of course none of us have a crystal ball. Nevertheless, it is imperative that a programmer visualizes possible future scenarios and considers how a system will change over time. This is tricky for even the most adept programmer because it involves thinking about the three basic dimensions and then projecting those onto a potentially infinite number of future scenarios.

Some example considerations involving the fourth dimension may be:

  • How often will a client need to extend a base class/component?

  • Are the existing building blocks sufficient to allow for change six months, a year, five years into the future?

  • What are the resource impacts of a given design/feature? Will that feature scale over time as the system changes?

  • What extensibility points should be put in place in anticipation of future enhancements or new technologies which don’t exist yet?

  • Which components are NOT extensible?

  • Which components or subsystems will likely get scrapped at some point?

  • Where is the market trending? What will the new hot technologies be tomorrow?

  • How do you keep the system open and extensible, yet concrete enough that it does what it needs to do?

These considerations and more all apply to any software endeavor, especially on a larger scale.

Final Words

What is important at the end of the day isn’t necessarily which cool design pattern you implemented, or some awesome new framework that looks good on a resume. It’s having an understanding of the tools that you have at your disposal and using the most appropriate ones to solve a given problem. That’s it.

Object-oriented languages are exceptionally powerful tools at a core level because they allow for a type of multidimensional problem conceptualization which neatly mirrors real life entities. Unfortunately, many programmers don’t understand the dimensions involved, how they interact, or when each is appropriate in a given context. As we proceed into the 21st century even more interesting paradigms will emerge which make OOP look quaint, and terms like multidimensional and non-linear will become as commonplace as the mouse and keyboard. But before we take that next step let’s look around at the tools we have, and thank our lucky stars that we’re not still coding in FORTRAN.

- John


1 This is in reference to the object-oriented paradigm of software development and to the C# language in particular. Other languages such as F# involve fundamentally different paradigms, such as functional programming.

2 This example is for illustrative purposes only. The use of the “goto” keyword is NEVER recommended in C# for obvious reasons.

3 The astute learner will have already noticed that a number of abstract programming concepts have an amazing similarity to ideas stemming from the lost art and science of alchemy. In this case, the Inheritance dimension is equivalent to the alchemical notion of “As above, so below.” That is, the macrocosm is contained within the microcosm.

4 Latin: “Solve et coagula”—another alchemical concept. This is the notion that a greater whole may be broken down into constituent parts and then reconstituted into some new form.

5 There is also the dark art of Reflection, but I will not mention it further here.

6 Here is another notion from alchemy. “Quintessence” is generally used to refer to the deepest, most essential presence of something, that which gives it its defining characteristics.
