EA Tools vs Modelling Tools

We’ve had a few questions recently about why Essential doesn’t provide a greater ability to draw pictures, which is part of the broader question of the difference between EA tools and modelling tools.  Essential is primarily an EA tool, so it is focused on supporting the objectives of CxOs and EAs/Chief Architects, with some support for Solution Architects.  Diagram-driven solution modelling tools, by contrast, are focused on supporting Solution Architects in their design work, but do not provide visualisations that support the key requirements of EAs/Chief Architects or the objectives of CxOs.  Beyond an overall all-systems wiring diagram to show complexity (and used for effect), it’s difficult to think of many Visio-type diagrams you’d put in front of a CxO.

We’ve drawn up a slide that explains the objectives of the different roles and how Essential can support each, and we’ve also taken the opportunity to update the demo viewer to show the support for the different roles, so there is now a CxO portal, an EA portal and a Solution Architects portal*.


We are aware that many organisations want a tool that supports all three roles, so we are working on the ability to import from and export to Visio, which will extend Essential’s reach.  Add to this Essential’s ability to support an organisation beyond the scope of just IT, for example with our GDPR or Strategic Resource Optimisation packs, and Essential provides an organisational support tool that is unique in its field.

Just one final point: we noted Mega’s press release of 3/10/2017, ‘MEGA is First EA Vendor with Unique GDPR Solution’.  That’s not strictly true, as the Essential GDPR Solution was launched on 28/7/2017!  And to be honest, we’ve had clients using our PII solution, the foundation for GDPR, since 2013, something no other EA tool offered.  If you want to see a proven GDPR tool, give us a call or drop us an e-mail.

*The Essential Viewer can be configured directly in Essential, so organisations can easily create and configure multiple portals to suit their needs.  Essential Cloud also gives the ability to control access to views, and even redact specific data in views, by role or individual.


Essential GDPR Launched

Our GDPR pack is now ready for use.  Unique in the marketplace, it supports business questions such as ‘do I have a legal basis for using this data?’ and ‘have I captured the client’s consent?’ as well as technical access and security questions, such as ‘where is my data most at risk?’.  Most other tools are focused on one end or the other of this spectrum.  High-level dashboards show where the GDPR compliance issues exist, and drill-down capabilities allow you to home in on the exact process, application or technology that is the cause of the risk.

We have partnered with UST to, optionally, incorporate the use of their ground-breaking data discovery tool which can identify structured and unstructured GDPR data in databases and document stores across the organisation. This not only eases the burden of data capture but also provides an invaluable cross-check of information provided through more traditional means.

A sample of the dashboards is shown below, or you can read further information, access the GDPR demo viewer, or sign up here.

Data Lens

You may have noticed from our site that the Data Lens is in beta.  It’s a lens that we’ve developed because we’ve been continually told that people don’t have control of their data.

In our EA consulting, we have seen:

  • Organisations that were unwittingly reporting incorrect MI figures because data was inaccurate or incomplete
  • Projects that intended to master and duplicate data that already existed in the organisation
  • Inconsistency in what people thought certain data was
  • Differing views on where data was sourced from
  • Projects repeating the same data collection work, asking the same questions again

The Data Lens looks to address this by bringing transparency and coherence to your data estate.  It is aimed at supporting the demands of people wanting to use data, such as:

  • Data Lake or analytics efforts, which need to know where data is sourced from, what terms are used for the same data (e.g. client and customer), how good the data is in terms of quality and completeness, etc.
  • Platform projects, which need to know where data masters exist, where data flows, how data is transformed, etc.
  • Data rationalisation projects, which need to know where master sources of data exist, where duplication exists and how data is used.
  • Data Scientists, who need to understand the sources of data available for their analysis.

The lens addresses these needs by providing a number of views and tools.

The Data Definition views provide data definitions, summaries and dynamically produced data models.

The Data Architecture Analysis views are geared towards helping you understand sources of data, data flows, where duplication exists, etc.

Data Management is where the lens excels.  You can understand data quality across a number of criteria and see sources of data.  The Quality Dashboard shows the quality of the key data required to support your strategic objectives and business capabilities, along with the initiatives impacting that data.  This allows you to identify where your data initiatives may need to be focused to improve your business data output and enable your strategy.  The Data Quality Analysis page lets you pick the data you need and then shows you where to source it from, plus the quality, completeness and accuracy of that data.  This is really useful if you are using the data for other purposes, e.g. MI reporting or analytics.  The data dashboard provides a summary view of your data which you can drill down into.

We see the Data Lens acting as a bridge to the tools that are more focused on the physical data layer, which typically meet the needs of the technical teams but not those of the business users or the data scientists.  Equally, where you have conceptual data in a tool, the lens can act as the bridge to the physical data, removing the gap between the conceptual and physical layers and bringing context and meaning to the data.

The lens is currently in beta, but we are allowing organisations to register an interest and we would love to get your feedback on it.

IRM UK EA Conference – Outsourcing and EA

I presented a session on Outsourcing and EA at the IRM EA conference last week; specifically, how, as Enterprise Architects, we are in a prime position to ensure that outsourcing deals are both created and run effectively, as we are in the unique position of having knowledge and understanding of both the business and IT across the entire enterprise.  We likened EAs to the Spartans at the battle of Thermopylae, who held off an army of (allegedly) a million men for seven days with only 300 warriors – primarily because they understood and had a map of the landscape.  (Unfortunately they were betrayed and slaughtered after a week – hopefully the analogy doesn’t stretch that far!)

Research by both Gartner and AT Kearney suggests that around a third of outsource initiatives fail.  We discussed how use of our architecture knowledge and artefacts can mitigate the risks of failure and how EA can be used to bring greater success.  We touched on our work to help organisations use EA and Essential together to reduce the outsource transition time (from idea to completed transition to a new provider) from a typical 18-24 months to 6-9 months, which addresses a key concern raised by the FCA.  We showed some examples of how Essential has been used to support such initiatives across a number of organisations.

The conference itself was very interesting and it seems to me that EA is really coming of age – there were many talks showing how EA is used in organisations to provide real and concrete benefit to senior managers.

If you would like a copy of the presentation then drop me an e-mail at the info at e-asolutions.com address.

Essential Information and Data Pack

It’s been a couple of months since we released the Information and Data Pack and I thought it would be useful to take a more detailed look at what is in this extension to the Essential Meta Model and what it can do for us.


Firstly, thanks to our community members who have given us some feedback and found a couple of small bugs in there. We’ve started a forum thread to catch any more issues as we find them but will be releasing a patch and new version of the pack in the coming weeks – we wanted to make sure we had as many issues addressed in a single update as possible. If you’ve found any issues in there, please let us know in this forum.


This optional extension pack is a major extension to the Information layer of the meta model; although there are some tweaks to the core meta model elements, it is mostly an extension to what was already there.


We have added a number of important meta classes for managing Data elements and the relationships that these have to Application, Information and Business elements, enabling the resulting models to support Enterprise Data Management and Enterprise Information Management activities. More about that later in this blog.


One of the most important concepts in the pack is the separation between Information and Data. We have defined strong semantics for what is Information and what is Data, so that there is a clear and consistent framework for capturing and managing these elements of your Information and Data architecture.


We’ve based this on the commonly used “Information is Data in context” definition. Data elements have the same meaning and value regardless of the context in which they are used, whereas Information elements differ depending on the context. e.g. Product data means the same thing regardless of whether we are looking at it in the context of Stock, Sales or Manufacturing. That’s not to say that we only have one Product data element in our architecture; we can define as many variants of how Product data is managed in our organisation as we need, in terms of attributes and relationships. e.g. Product Item data might have a different set of attributes to a Bill of Materials data object.


In contrast, Information takes data and uses it in a particular context. e.g. Stock Volume by Location might use data about Products at particular locations in the organisation and would have a different value for a single Product depending on the Location.
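The distinction can be sketched in a few lines of Python (all names here are illustrative, not part of the Essential meta model): the Product data element carries the same values everywhere, while an Information element derives a context-dependent value from that data.

```python
# The Data element: same meaning and value in every context.
product_data = {"sku": "P-100", "name": "Widget", "unit_cost": 2.50}

# The same Product data, held at different Locations (the context).
stock_records = [
    {"product": "P-100", "location": "London", "units": 120},
    {"product": "P-100", "location": "Paris", "units": 45},
]

def stock_volume_by_location(records, sku):
    """Derive the 'Stock Volume by Location' Information element."""
    return {r["location"]: r["units"] for r in records if r["product"] == sku}

# One Product, but a different Information value per Location context.
info = stock_volume_by_location(stock_records, "P-100")
```

The data record never changes; only the Information derived from it varies with the context in which it is used.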


This separation of Information and Data fits neatly into how we need to manage these things in the real world: Data is combined and used to produce the Information that we need.


Naturally, we’ve used the Conceptual, Logical and Physical abstractions for the new Data meta classes, covering the WHAT, HOW and WHERE for data elements. In addition to adding these new meta classes, we have created powerful relationship classes that enable us to clearly understand how Information and Data are being used by Applications and Business Processes. Some of this might seem a bit complex at first glance, but that is due to the contextualised nature of these relationships. What we’ve created are constructs that enable us to understand what Information Applications use and create; in that context, which data elements are used, specifically, to deliver that Information; and, in that context, what the CRUD is for the Information and the Data. We believe that having these types of contextual relationships is a uniquely powerful capability of Essential Architecture Manager. We haven’t added them just because we can, but because without them we cannot accurately or even reliably understand what is really going on in our organisation with respect to Information and Data.


So, what kind of things can we do with the Information and Data Pack?

We have designed the meta model to support Information Management and Data Management activities, managing all the dependencies that exist between information and data elements, how these are used by Application elements, and how they are produced by Applications.


More specifically, in combination with the Views that are supplied out-of-the-box, we can understand where issues exist with our Master Data Management solutions, e.g.

  • where are we mastering Product data?
  • how is this data being supplied to our Applications?
  • how does this compare to how data should be supplied to our applications (according to our MDM policies)?
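The third question, comparing actual data supply against MDM policy, amounts to a set difference over supply routes. A minimal sketch (the route names and shape are hypothetical, not the actual view logic):

```python
# Each route is a (supplier, consuming application) pair.
policy_routes = {("MDM Hub", "CRM"), ("MDM Hub", "Billing")}
actual_routes = {("MDM Hub", "CRM"), ("Legacy ERP", "Billing")}

# Routes in use that the MDM policy does not sanction.
violations = actual_routes - policy_routes

# Sanctioned routes that are not actually in place.
missing = policy_routes - actual_routes
```

Anything in `violations` or `missing` is a candidate MDM compliance issue to surface on a dashboard.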


As you will have come to expect, the supplied Views are easy-to-consume, hyperlinked perspectives that are designed to give valuable insights to a wide range of stakeholders – not just Information and Data professionals.


We can browse the Data catalogue, drill into individual subjects, see what Objects we have in each subject and how each of these objects is defined. Further drill-downs can take us to views that show how this Object is provided to the relevant applications in the enterprise, and from where.


Based on the detailed CRUD information that we can capture in the model, we can easily produce CRUD matrices from a wide variety of perspectives, e.g. CRUD of Data Subjects by Business Capability, CRUD of Data Objects by Business Process – both of which are derived from the details in the model, which means that these are automatically updated as the content of the model is updated.
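Deriving such a matrix is essentially a pivot over the relationship tuples in the model. A sketch under stated assumptions (the tuple shape and example names are illustrative, not the repository’s actual structure):

```python
# Each tuple records a Business Process acting on a Data Object
# with one of Create/Read/Update/Delete.
relations = [
    ("Order Entry", "Customer", "C"),
    ("Order Entry", "Order", "C"),
    ("Billing", "Order", "R"),
    ("Billing", "Invoice", "C"),
    ("Billing", "Customer", "U"),
]

def crud_matrix(rels):
    """Pivot (process, object, op) tuples into {process: {object: 'CRUD letters'}}."""
    matrix = {}
    for process, obj, op in rels:
        matrix.setdefault(process, {}).setdefault(obj, set()).add(op)
    # Render each cell's operations in fixed C-R-U-D order.
    return {p: {o: "".join(ch for ch in "CRUD" if ch in ops)
                for o, ops in row.items()}
            for p, row in matrix.items()}
```

Because the matrix is computed from the relationship tuples rather than drawn by hand, it stays current as the model content changes, which is exactly the behaviour of the derived views described above.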


One of the most powerful Views is also one of the simplest. We find that providing a browser-based, easy-to-access place to find accurate and detailed definitions for all the Information and Data elements – and most importantly in easy-to-understand terms – is a capability that is quickly valued, in particular by non-technical stakeholders. Deliberately, there are no ERDs or UML class diagrams in these definitions. Rather, we can explore the catalogue of the Information and Data elements in a highly hyperlinked environment, providing an automatically generated data (and information) dictionary to the organisation.


In that context, we’ve introduced some nice little features that will be included across all the Views such as the ability to identify the content owner of particular elements or groups of elements. This means that if we see that the content on a View is incorrect, we can click a link and send an email to that owner to let them know that things have changed or that there’s an error in the content.


We recognise that while most of the meta model concepts for the Information and Data pack are straightforward, there are some more complex areas, in particular in the definition of relationships. As always, we’ve worked hard to keep these as simple as possible, but the reality of how Information and Data are used and produced is (or at least can be!) complex and we need to be able to capture and manage these complexities. However, the capabilities and results are worth the investment in understanding how to use them. For example, the way the Information to Application (and, in that context, to Data) relationships work enables us to understand how a packaged application has been configured in terms of how the out-of-the-box data structures are being used. This means that we can understand where Product Codes are being stored in Location Code tables, for example. This is the kind of scenario where the ‘devil is in the detail’, and fine-grained issues like this can become the source of much larger-scale issues in the architecture.


We’ve already had some excellent feedback on the pack and based on demands from real-world use of Essential Architecture Manager, we are now looking to extend the granularity of the application-to-data relationships to include data attributes, rather than just data objects. You might not always need to go to that level of detail but if you do, the capability will be there – and this will be designed to enable you to model at asymmetrical levels of granularity.


Although it’s currently an optional extension, the Information and Data pack will be incorporated into the next baseline version of the meta model. We think that the game-changing capabilities of the Information and Data Pack are a vital part of the enterprise architecture model, so it is natural that this extension becomes part of the core meta model.


Strategy Management and Enterprise Architecture

We have noticed that many organisations are currently looking to their EA to support their strategy management, whether that be business and IT or just IT focused.  This is quite a shift for some organisations, moving EA up the stack and out of project- or domain-focused architecture to one that provides a broader, higher-level view.

Our Strategy Management release is, therefore, very timely.  It has been in development for some time and was the focus of ECP 4; we would like to acknowledge and thank the community for their contributions to the release.  We have been using it at one of our global clients for some time, so it is released with us knowing that it actually works in real-life situations – in a global organisation that has distributed businesses with differing regional and organisational objectives.

A very brief overview of the key elements in this release is given below, but for full details see the release documentation:

  • Architecture States – represent the different states of your architecture.  Sometimes these are referred to as ‘current state’ and ‘future state’, but we think it is best to avoid these terms, as your current state is always evolving and will eventually (one would hope!) become your future state, which makes everything somewhat confusing.  In our opinion it is better to actually refer to the state you are in (politely of course!) and the state you want to be in, including the steps in between.  So, an example of the type of naming we would suggest is ‘manual invoicing’, then ‘automated payment – UK’, followed by ‘automated invoicing – UK’ and finally ‘fully automated invoicing’, which allows you to understand exactly what each architecture state is referring to.
    An architecture state can relate to one or all of the layers of the architecture, and this will depend on the type of project that is being transitioned. 
    We would expect there to be many of these very specific architecture states that reflect different areas of focus, rather than having a very few states that represent the entire EA at specific points in time.  We think the idea of lots of ‘smaller’, more specific architecture states is very powerful for managing the complexity of the progression of the architecture; break it into manageable chunks that deal with specific programmes or areas of interest and use lots of specific, focussed roadmaps rather than having one unwieldy uber roadmap.  However, if that is what you need to manage the transition of your EA then you can, of course, do just that! 
  • Roadmap Model – is the pictorial representation showing how the architecture(s) will transition between the various different states, which are shown as roadmap milestones on the model.  A timeline for this transitioning can also be included in the model.
  • Strategic Plans – hold the detail of how the organisation plans to transition from one state to another and are implemented by Projects, which can be grouped into Programmes.  Strategic Plans also hold the detail of what Issues are being addressed and Objectives are being met by the implementation of the Strategic Plans. 

There are a number of standard views released with the pack, but in true Essential spirit we expect users to develop their own views to meet the needs of their organisation.  For example, we envisage a view that maps Projects to Issues and Objectives, so you can see which projects support which Objectives and resolve which Issues.  This can be useful in allocating resources to the areas providing most benefit to the organisation.

We feel that this is a key area of EA, and one that is, or should be, the focus of many organisations at the moment as they move forward following the recession.  Using EA to assist with strategy management within an organisation is really where the big payback from EA can start to be seen.  Of course, all the underlying activities and effort are also crucial and can provide benefits on a project, region or domain basis, but using EA to support strategy management provides the overall view of where the organisation is now, where it wants to be, and the plans it has for moving there; crucially, it can be used to ensure the organisation actually gets there.  It does this by analysing information and providing insights, as stakeholder-specific views, that highlight potential pitfalls and areas of opportunity, which can then be used to aid decision making and ensure the strategic plans are achieved.

We are always keen to hear your views, please let us know if you have any comments.

Exploiting information that you already have about your organisation

Whether you are just starting out with a new EA modelling solution or already have a powerful repository, everyone has information about their organisation in many other places in many other forms. Exploit this existing information.

Many organisations embarking on EA modelling already have a wealth of information in a variety of disparate forms, such as presentations, drawing tools and, most commonly, spreadsheets. Rather than re-key or re-model all this work, shouldn’t we be able to exploit this existing information – lifting it directly – by importing it into Essential Architecture Manager?

Over the last few years, we have explored a variety of options for importing existing information into Protege (and therefore Essential Architecture Manager) and we realised that the only safe and reliable approach was to use the Protege API to add, remove and update information in the knowledge base in effectively the same way as the graphical user interface does, and let Protege take care of all the integrity issues, as it does when you work with the forms. Using the Protege API is at the heart of the Essential integration tools.

The new Essential Integration tab plugin for Protege and Essential Architecture Manager simplifies not just the one-off ‘data load’ of things like your spreadsheets, but also the on-going synchronisation of important data that is maintained in other systems. We’ve had some very good experience of doing this sort of synchronisation with configuration management databases and virtual server environment configurations.

This new tab is an evolution of the ‘skunk-works’ release of the Integration Server. We have made it into a tab to make it easier for both stand-alone and multi-user use (both modes are supported) and to make it easier to install and add this capability to your Essential Architecture Manager. The process of running an import or synchronisation is now much simpler than it was with the Integration Server.

The concept of the integration server will remain, but it will be re-worked to provide a means of automating imports and on-going synchronisations from other sources, e.g. on a scheduled daily basis. However, the new integration tab will be the most commonly used mechanism for importing information into Essential en masse.

Getting information out of Essential to share with other systems is simply a matter of creating a suitable view for Essential Viewer, maybe an XML document or a CSV etc. With the contribution from Essential Project Community member, Mathwizard, this can be saved directly as an export file from Essential Viewer, using your browser.

So how do you go about importing your spreadsheets, XML documents etc. using the integration tab?

As you might expect, XML is the ideal format for integrating the source information and the integration tab expects source data to be in XML format, so it is often easiest to work with an XML export of your source information.

The first step is to work out how this existing information will be represented in Essential. Unfortunately, there is no magic solution to this. You need to define how your source information maps to the Essential Meta Model and with this understanding define the transform that is required.

By the way, creating relationships and relationship classes during imports can be particularly tricky as the source information often does not represent the relationships in a way that can be mapped to the relationships in the Essential Meta Model. In many cases, it can be best to focus on importing the instances and then completing the relationships in Protege via the forms.

There are two approaches to defining the transform that I’ll explore in a moment. We intend to produce and share a library of transforms from common source formats into the baseline Essential Meta Model.

Currently, we have a transform for importing the XML representation of an Essential repository that Essential Viewer uses. This means that you can import elements from other Essential repositories, out of the box. Also, our previous experience with certain configuration management databases and virtual server environments means that we can easily share transforms to import Technology Nodes and Technology Instances into Essential.

The two approaches to transforming your existing source information into the Essential Meta Model are:

  1. Transform your existing source to XML using the Essential Viewer XML schema. Then import the resulting XML into Essential using the out-of-the-box Essential Repository transform. This approach is useful for one-off data loads from existing sources such as spreadsheets, where it’s most important to get the main instances into Essential rather than complex relationships.
  2. Define your own transform file and use this to have the integration tab transform your source information and import it into Essential. Although this may seem to be a more complex exercise than approach 1, you have more flexibility in defining how your source information maps to the Essential Meta Model. A how-to guide for writing transforms is being written at the moment that explains how the transforms work and the supporting tools that are available, e.g. a library of script functions that wrap the Protege API to support things like on-going synchronisation of instances in Essential with instances in other, external repositories.
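As a feel for approach 1, the following sketch turns spreadsheet rows (as CSV) into instance XML. The element and attribute names here are purely illustrative, not the actual Essential Viewer XML schema; check the schema shipped with your Essential Viewer before building a real transform.

```python
import csv
import io
import xml.etree.ElementTree as ET

# A spreadsheet export: one application per row.
source_csv = """name,description
CRM,Customer relationship management
Billing,Invoice production
"""

def csv_to_xml(csv_text, class_name="Application_Provider"):
    """Turn spreadsheet rows into instance elements for a one-off data load."""
    root = ET.Element("repository")
    for row in csv.DictReader(io.StringIO(csv_text)):
        inst = ET.SubElement(root, "instance", {"class": class_name})
        ET.SubElement(inst, "name").text = row["name"]
        ET.SubElement(inst, "description").text = row["description"]
    return ET.tostring(root, encoding="unicode")

xml_out = csv_to_xml(source_csv)
```

This captures the spirit of the approach: get the main instances loaded first, then complete the trickier relationships in Protege via the forms, as noted above.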

We would be very happy to help anyone who needs to construct their own custom transform via the forums or even to undertake commissions to build custom transforms as required.

And of course, if you’ve built some transforms that you’d like to share with the Community, we would really like to add these to the Share area of the site.

Happy exploiting of your existing information!

Business Capabilities

We have been struck recently by the volume of articles and blogs regarding Business Capability modelling, many seemingly of the view that this is a new concept that will resolve the old business IT alignment issue.

Whilst we don’t concur with the view that the concept of business capabilities is either new or capable of resolving the alignment issue alone, we are in agreement that business capability modelling is a key aspect of the business architecture.

We view business capabilities as the ‘services’ that the business offers or requires.  In Essential, these capabilities are modelled in the Business Conceptual layer and represent what the business does (or needs to do) in order to fulfil its objectives and responsibilities.

A business capability is at a higher level than a business process.  It represents a conceptual service that a group of processes and people, supported by the relevant application, information and underlying technology, will perform.  The capability represents the what, whereas the process, people and technology represent the how.  Business Capabilities can themselves be broken down into supporting capabilities, if this is useful.

Defining your business capabilities is extremely useful as it allows you to take a step back and focus on the key elements of your organisation.  You can avoid getting bogged down in the details of ‘how’ things happen and concentrate on ‘what’ does (or needs to) happen.  Once you have done this it is possible to identify your key capabilities, for example, the ones that will differentiate your business and you can use this information to ensure that you focus on the areas of importance in your business, whether this is in defining new projects or ensuring business as usual delivers appropriately.

You will find that your business capabilities are relatively static because you are defining the ‘what’ which rarely changes whereas, for example, your business processes will constantly be evolving as the ‘how’ things are done changes all the time with the advancement of technology and of customer demand.  A very obvious example is retail – twenty years ago the internet did not exist so there were no online sales channels; but the capabilities of a retail channel have not altered, Sales, Fulfilment and Billing are still capabilities, however the process of ‘how’ they sell, dispatch and take payment has altered dramatically.

In reviewing our tutorials we noticed that we already have tutorials on Capturing the Business Value Chain (a subset of the capabilities) and Business Process Modelling, but we don’t have a tutorial that focuses solely on business capability modelling.  In view of the current interest we aim to address this gap as soon as we can and a new tutorial will be available shortly.

Getting the Graphical Views

Modelling in Essential Architecture Manager is focused on capturing knowledge rather than drawing diagrams. However, graphical views can bring a lot of value. Here, we explore new graphical capabilities of EAM.

A couple of months ago, in my blog article ‘Where have all the graphical models gone?‘ I described our approach to capturing knowledge about the enterprise using the forms in Protege rather than drawing diagrams. However, as the saying goes, a picture speaks a thousand words, so I would now like to highlight some new graphical features that are now available in Essential Architecture Manager and to explore some more of the background to our approach to capturing knowledge.

In Essential Architecture Manager, many of the elements that we capture are modelled so that they have a Definition of what the element is. This is then elaborated by an Architecture that describes how that element is composed. A useful way to think about this is that we black-box every element. The Definition is what we see on the outside of the box. We can still use that element in the overall model even if we know nothing more about how it works or how it is composed. However, if we do know more about the element – or we find out the details later on – we can then open the black-box and describe the Architecture, which tells us how the element is composed or how it works.

In fact, the Definition-Architecture approach means that we can define multiple architectures for an element. e.g. an Application Service or Application Provider can have both a Static Architecture and a Dynamic Architecture. It’s certainly more manageable to be able to separate these.
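A minimal sketch of the Definition-Architecture idea (the class and element names are hypothetical, not the Essential meta model classes): the Definition is the black-box view, and Architectures are optional, named elaborations of how the element is composed.

```python
class Definition:
    """Black-box view of an element; usable with no Architecture at all."""

    def __init__(self, name, description):
        self.name = name
        self.description = description
        self.architectures = {}  # e.g. "static", "dynamic"

    def add_architecture(self, kind, components):
        # Open the black box: describe one way the element is composed.
        self.architectures[kind] = components

svc = Definition("Order Service", "Accepts and validates customer orders")
# The element can be used in the model as-is, then elaborated later
# with one or more separate Architectures.
svc.add_architecture("static", ["Order API", "Order Store"])
svc.add_architecture("dynamic", ["validate", "persist", "acknowledge"])
```

Keeping each Architecture as a separate, named entry is what makes it manageable to hold, say, a Static and a Dynamic view of the same element side by side.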

The Definitions are very naturally captured using the standard forms in Protege. We need to capture textual descriptions, relate the element directly to other elements in the model and so on. All of which is very productive, quick and straight-forward using the forms. This is why much of the input in Essential is form based.

In contrast, the Architectures add a contextual dimension to the relationships and dependencies that we are capturing between elements. We quickly found that the basic forms made this rather complex. Fortunately, the GraphWidget of Protege makes capturing Architectures much more straight-forward, and we use this graphical tool in combination with the basic widgets for capturing the Architecture.

The diagrams that are produced to capture these Architectures are focused on utility. Visually, they are basic and agnostic to any particular notation. However, whilst recognising that these diagrams may not be something you would hang on the wall, it would still be very useful to have these diagrams appear in the relevant analysis reports of Essential Viewer. This would be in addition to, not instead of, producing views in specific notations or other ‘graphical reports’.

To provide this capability, we have just released an update to the Essential Widgets and Essential Viewer that takes a snapshot of each architecture diagram during the repository publishing process. These snapshots are then presented in the relevant reports, such as Business Process Definition, Application Module Summary, Technology Product Details and so on. The update makes it very easy to bring a relevant architecture diagram into any custom report. These updates to Viewer and Widgets have been packaged into the latest version (1.3) of Essential Architecture Manager, and all are available to download now.

But that’s not the end of the story for getting graphical views of your architecture model. Within the Protege environment, there is the Jambalaya SVG tab that provides a wealth of graphical reporting capabilities. Although we have been focusing on reporting within the Viewer environment – to open the analysis and view of the architecture to as wide an audience as possible in the organisation – there could be some value in sharing Jambalaya reports with the community.

I would also like to draw your attention to Clint Cooper’s recent contribution – the Visio Export Tool. This produces a rendering either of selected areas or of the whole repository in Microsoft Visio. The resulting Visio file provides a readily-shared, graphical view of the model that can be easily manipulated to provide the view that you need to share with the wider audience in your organisation. Many thanks to Clint for sharing this with the rest of the Essential Project Community.

Although we take a forms-based approach to capturing the knowledge about the elements in the enterprise, there is a range of options for producing graphical views of this knowledge: from the snapshots of the architecture capture diagrams and clickable SVG diagrams to the Visio exports, you can get the graphical view of your architecture that you need.

Enterprise Architecture, Knowledge Management – Knowledge Representation

A very nice definition of Knowledge Management rings very true for Enterprise Architecture – modelling in particular. How you realise the capabilities in the definition all depends on how you represent knowledge about your organisation.

I was pleasantly surprised when I came across this definition of Knowledge Management by Dave Snowden.

I like this definition of KM but was struck by how it could equally apply to Enterprise Architecture, in particular when Dave mentions processes and technology, and especially when thinking about the contents of our Enterprise Architecture models.

Back in the 1990s ‘boom’ days of Knowledge Management, there were a number of definitions of KM – many of which seemed to be tied to tools (hmmm, sounds familiar…) – but the thing that stuck in my mind was that knowledge has to be transformed from the ‘raw’ information about the organisation. This transformation is what makes the knowledge applicable to any relevant scenario, having captured information about specific instances of it. And this transformation of information into knowledge – to some extent analogous to the transformation of data into information – is what never seemed to happen in any of the early Knowledge Management solutions. Apparently, all that was required was an Intranet where anyone in the organisation could contribute their knowledge, and then anyone who needed to apply that knowledge would simply search the Intranet to find it. More recently, Web 2.0 was going to solve those problems all over again using wikis, blogs and all the social tools, where people could contribute their knowledge so that others can find and apply it. For some areas, wikis can be very useful, although searching them is notoriously hit-and-miss.

And that’s the real trick that none of these solutions solve: how do you find the knowledge? How can a search engine make the link between a blog posting about a particular topic and a request for best practice on a process? Without the capability to do this, I don’t think any of these solutions are what I would call Knowledge Management. Without the ability to abstract the contextual, subjective information to build generalised knowledge, we can’t apply these valuable contributions to other similar scenarios.

How do we transform the information into knowledge? Well, at its simplest, we need some form of ‘framework’ on which to hang the information that we have to hand, so that we can apply objective definitions to what the information is about and relate it to other information with strong semantics. We can then use this knowledge framework to query – not just search! – the information that has been captured from many different sources and in a meaningful way.
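As a toy illustration of ‘query – not just search’, the sketch below hangs facts on a small framework of typed relationships. All of the names are invented for the example and have nothing to do with Essential’s actual meta model; the point is that once information carries strong semantics, we can ask it precise questions that a keyword search over documents could not answer reliably.

```python
# Facts stored as (subject, relationship, object) triples: the relationship
# names are the 'framework' that gives the information its strong semantics.
triples = [
    ("Invoicing",           "is_a",     "BusinessProcess"),
    ("BillingApp",          "is_a",     "Application"),
    ("BillingApp",          "supports", "Invoicing"),
    ("CRMApp",              "is_a",     "Application"),
    ("CRMApp",              "supports", "CustomerOnboarding"),
    ("CustomerOnboarding",  "is_a",     "BusinessProcess"),
]

def query(subject=None, relationship=None, obj=None):
    """Return all triples matching the given pattern (None acts as a wildcard)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (relationship is None or t[1] == relationship)
            and (obj is None or t[2] == obj)]

# A semantic question: "Which applications support the Invoicing process?"
apps = [s for (s, r, o) in query(relationship="supports", obj="Invoicing")]
print(apps)   # ['BillingApp']
```

Because ‘supports’ is an objective, defined relationship rather than a word that happens to appear in a document, the answer is exact; this is the difference between querying a knowledge framework and searching an Intranet.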

Wait a moment, that sounds just like an EA meta model. It certainly does, and that’s when I start to wonder: should Enterprise Architecture and Knowledge Management effectively be the same thing, with their current guises simply being particular perspectives on the knowledge base? For me, Knowledge Management has to be more than a bunch of Intranet sites and wikis and, similarly, Enterprise Architecture models have to be more than a bunch of diagrams and pictures if we are to truly apply the knowledge that people are trying to capture.

The meta model gives us a framework for representing knowledge about the enterprise that defines the types of information we need to capture and how this information relates. The resulting model of the organisation enables us to apply the knowledge to the particular problem we are dealing with right now, whether it is a large-scale, strategic issue or a fine-grained query from the operational or tactical agenda.

And this is what Enterprise Architecture is really about. EA is not the modelling. Rather, EA is about the application of knowledge about the enterprise (ideally, represented in a model, because it rapidly gets very complex) to the problems at hand.

In taking a knowledge representation approach to EA and KM (and I believe we should), is Enterprise Architecture modelling the same as Knowledge Management?