The EU's Gift to Enterprise Architects

An interesting article from John – ‘The EU's Gift to Enterprise Architects’ – discusses how EAs can make the best use of the data that GDPR is forcing organisations to collect and keep updated, to increase their credibility and value to the business.  Essential's GDPR Monitoring pack uses this information to provide GDPR support, but also harnesses it for analysis of more traditional EA fare, such as APM, Data Management and so on.  See a demo here, and our ‘5 Steps to Effective GDPR Monitoring’ blog here.


5 Steps to Effective GDPR Monitoring

Wherever you are on your GDPR journey, the five steps detailed below must be completed, and they can provide a useful checklist for progress.  This is based on several years' experience supporting the PII data requirements of global organisations.

  1. Assemble a Cross-Business Team
    A successful GDPR initiative needs a number of different roles from across the Business and IT, including, but not limited to, the following:
  • Compliance – Defines the scope of GDPR data for the organisation and the allowed usages of that data, i.e. the legal basis for use across the organisation. Compliance is also responsible for analysing the information returned and ensuring that remediation is put in place.
  • GDPR Coordinator – Ensures that each business unit provides the detail of the data they process, the purpose and the applications used. The coordinator briefs Business Units, coordinates and QAs the information returned, and manages queries.
  • Business Units – Provide the detail of the data they process, the purpose and the applications used for their business area, accurately and completely.
  • IT – Provides the detail of the applications and systems it is responsible for, accurately and completely.
  • Project Manager – Creates the plan, coordinates resources, manages dates and deliverables, and provides senior management reporting.
  • Analyst – Analyses and models the data received from IT and the Business Units, for example ensuring there are no duplicates, and provides it to Compliance in a format they can use to manage GDPR.
  2. Define the Data in Scope for GDPR and the Allowed Data Uses
    The data that is in scope for GDPR varies from industry to industry and organisation to organisation, so each organisation must define the data in scope for itself. It must also define the data that is allowed to be used for each business purpose and whether or not consent is required.  We recommend doing this before the fact-finding exercise, as it provides structure and minimises the possibility of duplication and data gaps (see the first sketch after this list).
  3. Get the Business Teams to Provide Detailed GDPR Data
    The Business Teams will need to provide the data on their processes, purposes, data and applications used. Additionally, IT will need to provide information on the data held in databases, where those databases are stored and located, and the security surrounding both the applications and the underlying technology.  There will need to be a standard means of capturing this detail to ensure consistency, so make sure the business has clarity on what it is doing – utilise your Data in Scope for this (see the second sketch after this list).  Once this data is provided, a central team should QA and analyse it to ensure it provides an overall view of the business situation regarding GDPR.
  4. Gap Analysis and Action Plan
    Create a gap analysis and action plan to work towards GDPR compliance, and establish a process to ensure this remains an ongoing exercise that continually demonstrates compliance.  Engage both the business and the IT teams in defining this process.
  5. Report to the Regulator
    The regulator will need to see evidence that you are on top of the new regulations; you will need to demonstrate that you have assessed your organisation against them, that you understand where you are compliant and that you have a plan in place to rectify any issues. They will also want you to demonstrate that you have a plan in place to manage GDPR as an ongoing commitment within your organisation, i.e. people, processes, technology and changes.
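
To make steps 2 and 3 concrete, here is a minimal sketch in Python of what a ‘data in scope’ definition with allowed uses might look like. Every category, purpose and legal-basis value here is invented for illustration – the real definitions come from your Compliance team, and in practice would live in the repository rather than in code:

```python
# Hypothetical illustration: GDPR data categories in scope, each with its
# allowed business purposes, legal basis and consent requirement.
DATA_IN_SCOPE = {
    "customer_contact_details": {
        "order_fulfilment": {"legal_basis": "contract", "consent_required": False},
        "marketing_emails": {"legal_basis": "consent", "consent_required": True},
    },
    "payroll_records": {
        "salary_payment": {"legal_basis": "legal_obligation", "consent_required": False},
    },
}

def is_use_allowed(category: str, purpose: str) -> bool:
    """Check whether a business purpose is an allowed use of a data category."""
    return purpose in DATA_IN_SCOPE.get(category, {})

print(is_use_allowed("customer_contact_details", "marketing_emails"))  # True
print(is_use_allowed("payroll_records", "marketing_emails"))           # False
```

And a second sketch of the kind of standard record a business unit might return in step 3 – the field names are our own, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class BusinessUnitReturn:
    """One business unit's GDPR return: what it processes, why, and with what."""
    business_unit: str
    process: str
    purpose: str  # should match an allowed use from the Data in Scope
    data_categories: list[str] = field(default_factory=list)
    applications: list[str] = field(default_factory=list)

ret = BusinessUnitReturn(
    business_unit="UK Retail",
    process="New customer onboarding",
    purpose="order_fulfilment",
    data_categories=["customer_contact_details"],
    applications=["CRM", "Order Management"],
)
print(ret)
```

Capturing returns in a consistent shape like this is what makes the central QA and analysis in step 3 tractable.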

EAS has formed a partnership with UST Global and released the Essential GDPR pack, which enables organisations to understand their GDPR compliance and risk from both a business and an IT perspective.  The objective of the tool is to demonstrate to both your CEO and the Regulator that the GDPR position is understood and under control; this is achieved through a series of interactive dashboards and detailed views that can be viewed online or printed to suit the needs of both key stakeholders.

Our feedback indicates that whilst organisations have assembled teams and started data capture, many are proposing to manage GDPR compliance in a series of spreadsheets.  In our experience this is not sustainable; with such a large and constantly changing data set, it is almost impossible to collect and structure the data in a way that answers all the regulator's questions whilst keeping pace with change.  A GDPR tool with a comprehensive meta model, repository and adaptable viewer, allied to a very structured data capture process, makes this task achievable.  In fact, it allows the data captured to be used to support other initiatives, such as data management and application portfolio management, enabling organisations to make further use of the data that must be captured for GDPR.

EAS, in partnership with UST Global, can accelerate your GDPR initiative by bringing our combined experience and the Essential GDPR pack to:

  • Work with you to create a detailed plan to help you gear your organisation’s GDPR initiative for success, including the roles and responsibilities required across the business.
  • Work with your Compliance Team, or external organisations such as solicitors, to accelerate your initiative by providing quick starts based on our experience of the scope of GDPR data applicable to your organisation, and a business model that will aid understanding of allowed data usage.
  • Provide a set of pre-defined Questionnaires and Online Forms that direct the capture and analysis of the business and IT data required from your organisation.  Work with you to create a process to keep this data up to date.
  • UST Global, our partner, provides an automated data discovery tool that finds GDPR data in your databases and document stores, covering both structured and unstructured data such as PDFs. The results can be automatically loaded into Essential GDPR to supplement the manual data discovery carried out by business and IT teams, improving accuracy and accelerating the process.  The UST tool can also support the ‘Right to be Forgotten’ requirement, highlighting all the instances where a person exists across your organisation.
  • Essential GDPR provides powerful dashboards and visualisations of your GDPR data, allowing you to proactively manage your GDPR compliance and demonstrate to both your CEO and the Regulator that you are in control of your GDPR exposure, highlighting where you are compliant, where you have issues and where your risks lie.
  • Allow you to utilise the data that you have collected for GDPR to provide additional benefits across your organisation, such as the identification of rationalisation opportunities.

Find out more about the Essential GDPR pack or contact us.


EA Tools vs Modelling Tools

We’ve had a few questions recently about why Essential doesn’t provide a greater ability to draw pictures, which is part of the broader question regarding the difference between EA tools and modelling tools.  Essential is primarily an EA tool and so is focused on supporting the objectives of CxOs and EAs/Chief Architects, with some support for Solution Architects.  Diagram-driven solution modelling tools, by contrast, are focused on supporting Solution Architects in their design work, but do not provide visualisations that can support the key requirements of EAs/Chief Architects or the objectives of CxOs.  Beyond an overall all-systems wiring diagram to show complexity (and used for effect), it’s difficult to think of many Visio-type diagrams you’d put in front of a CxO.

We’ve drawn up a slide that explains the objectives of the different roles and how Essential can support each, and we’ve also taken the opportunity to update the demo viewer to show the support for the different roles, so there is now a CxO portal, an EA portal and a Solution Architects portal*.

EA Tools vs Modelling Tools

We are aware that many organisations want a tool that supports all three roles, so we are working on the ability to import from and export to Visio, which will extend Essential’s reach.  Add to this Essential’s ability to support an organisation beyond the scope of just IT – for example with our GDPR or Strategic Resource Optimisation packs – and Essential provides an organisational support tool that is unique in its field.

Just one final point: we noted Mega’s press release on 3/10/2017, ‘MEGA is First EA Vendor with Unique GDPR Solution’.  That’s not strictly true, as the Essential GDPR Solution was launched on 28/7/2017!  And, to be honest, we’ve had clients using our PII solution, the foundation for GDPR, since 2013 – something none of the other EA tools offered.  If you want to see a proven GDPR tool, give us a call or drop us an e-mail.

*The Essential Viewer can be configured directly in Essential, so organisations can easily create and configure multiple portals to suit their needs.  Essential Cloud also gives the ability to control access to views, and even redact specific data in views, by role or individual.


Essential GDPR Launched

Our GDPR pack is now ready for use.  Unique in the marketplace, it supports business questions such as ‘do I have a legal basis for using this data?’ and ‘have I captured the client’s consent?’ as well as technical access and security questions, such as ‘where is my data most at risk?’.  Most other tools are focused on one end or the other of this spectrum.  High-level dashboards show where the GDPR compliance issues exist, and drill-down capabilities allow you to home in on the exact process, application or technology that is the cause of the risk.

We have partnered with UST to optionally incorporate their ground-breaking data discovery tool, which can identify structured and unstructured GDPR data in databases and document stores across the organisation. This not only eases the burden of data capture but also provides an invaluable cross-check of information provided through more traditional means.

A sample of the dashboards is shown below, or you can read further information, access the GDPR demo viewer, or sign up here.

Data Lens

You may have noticed from our site that the Data Lens is in beta.  It’s a lens that we’ve developed because we’ve been continually told that people don’t have control of their data.

In our EA consulting, we have seen:

  • Organisations that were unwittingly reporting incorrect MI figures because data was inaccurate or incomplete
  • Projects that intended to master and duplicate data that already existed in the organisation
  • Inconsistency in what people thought certain data was
  • Differing views on where data was sourced from
  • Projects repeating the same data collection work, asking the same questions again

The Data Lens looks to address this by bringing transparency and coherence to your data estate.  It is aimed at supporting the demands of people wanting to use data, such as:

  • Data Lake or Analytics efforts, which need to know where data is sourced from, what terms are used for the same data (e.g. client and customer), and how good the data is in terms of quality and completeness
  • Platform projects, which need to know where data masters exist, where data flows, how data is transformed, etc.
  • Data rationalisation projects, which need to know where master sources of data exist, where duplication exists and how data is used
  • Data Scientists, who need to understand the sources of data available for their analysis

The lens addresses these needs by providing a number of views and tools.

The Data Definition views provide data definitions, summaries and dynamically produced data models.

The Data Architecture Analysis views are geared towards understanding sources of data, data flows, where duplication exists, and so on.

Data Management is where the lens excels.  You are able to understand data quality across a number of criteria and see sources of data.  The Quality Dashboard shows the quality of the key data required to support your strategic objectives and business capabilities, together with the initiatives impacting that data.  This allows you to identify where your data initiatives may need to be focused to improve your business data output and enable your strategy.  The Data Quality Analysis page lets you pick the data you need and then shows you where to source it from, plus the quality, completeness and accuracy of that data; this is really useful if you are using the data for other purposes, e.g. MI reporting or analytics. The data dashboard provides a summary view of your data which you can drill down into.
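
As a rough sketch of the kind of comparison the Data Quality Analysis page makes – all source names and scores below are invented, and the real lens derives them from the repository rather than hard-coding them:

```python
# Hypothetical candidate sources for one data object, e.g. "Customer",
# each scored for quality, completeness and accuracy.
sources = [
    {"source": "CRM", "quality": 0.9, "completeness": 0.8, "accuracy": 0.85},
    {"source": "Data Lake", "quality": 0.7, "completeness": 0.95, "accuracy": 0.6},
    {"source": "Legacy ERP", "quality": 0.5, "completeness": 0.6, "accuracy": 0.55},
]

def best_source(candidates):
    """Rank candidate sources by a simple average of the three criteria."""
    return max(candidates, key=lambda s: (s["quality"] + s["completeness"] + s["accuracy"]) / 3)

print(best_source(sources)["source"])  # CRM
```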

We see the Data Lens acting as the bridge between business users and the tools that are more focused on the physical data layer, which typically meet the needs of the technical teams but not those of the business users or the data scientists.  Equally, where you have conceptual data in a tool, the lens can act as the bridge to the physical data, removing the gap between the conceptual and physical layers and bringing context and meaning to the data.

The lens is currently in beta, but we are allowing organisations to register their interest and we would love to get your feedback on it.

IRM UK EA Conference – Outsourcing and EA

I presented a session on Outsourcing and EA at the IRM EA conference last week; specifically, how, as Enterprise Architects, we are in a prime position to ensure that outsourcing deals are both created and run effectively, as we have the unique combination of knowledge and understanding of both the business and IT across the entire enterprise.  We likened EAs to the Spartans at the Battle of Thermopylae, who held off an army of (allegedly) a million men for seven days with only 300 warriors – primarily because they understood, and had a map of, the landscape.  (Unfortunately they were betrayed and slaughtered after a week – hopefully the analogy doesn’t stretch that far!)

Research by both Gartner and A.T. Kearney suggests that around a third of outsourcing initiatives fail.  We discussed how our architecture knowledge and artefacts can mitigate the risks of failure and how EA can be used to bring greater success.  We touched on our work helping organisations use EA and Essential together to reduce the outsource transition time (from idea to completed transition to a new provider) from a typical 18-24 months to 6-9 months, which addresses a key concern raised by the FCA.  We showed examples of how Essential has been used to support such initiatives across a number of organisations.

The conference itself was very interesting and it seems to me that EA is really coming of age – there were many talks showing how EA is used in organisations to provide real and concrete benefit to senior managers.

If you would like a copy of the presentation then drop me an e-mail at the info at e-asolutions.com address.

Essential Information and Data Pack

It’s been a couple of months since we released the Information and Data Pack and I thought it would be useful to take a more detailed look at what is in this extension to the Essential Meta Model and what it can do for us.


Firstly, thanks to our community members who have given us some feedback and found a couple of small bugs in there. We’ve started a forum thread to catch any more issues as we find them but will be releasing a patch and new version of the pack in the coming weeks – we wanted to make sure we had as many issues addressed in a single update as possible. If you’ve found any issues in there, please let us know in this forum.


This optional pack is a major extension to the Information layer of the meta model; although there are some tweaks to the core meta model elements, it is mostly an extension of what was already there.


We have added a number of important meta classes for managing Data elements and the relationships that these have to Application, Information and Business elements, enabling the resulting models to support Enterprise Data Management and Enterprise Information Management activities. More about that later in this blog.


One of the most important concepts in the pack is the separation between Information and Data. We have defined strong semantics for what is Information and what is Data, so that there is a clear and consistent framework for capturing and managing these elements of your Information and Data architecture.


We’ve based this on the commonly used “Information is Data in context” definition. Data elements have the same meaning and value regardless of the context in which they are used, whereas Information elements differ depending on the context. e.g. Product data means the same thing regardless of whether we are looking at it in the context of Stock, Sales or Manufacturing. That’s not to say that we only have one Product data element in our architecture; we can define as many variants of how Product data is managed in our organisation as we need, in terms of attributes and relationships. e.g. Product Item data might have a different set of attributes to a Bill of Materials data object.


In contrast, Information takes data and uses it in a particular context, e.g. Stock Volume by Location might use data about Products at particular locations in the organisation and would have a different value for a single Product depending on the Location.


This separation of Information and Data fits neatly into how we need to manage these things in the real world: Data is combined and used to produce the Information that we need.
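
A minimal sketch of this separation, using the Stock Volume by Location example – these are our own illustrative classes, not the meta model itself:

```python
from dataclasses import dataclass

@dataclass
class DataElement:
    """Context-independent: means the same thing wherever it is used."""
    name: str
    attributes: list[str]

@dataclass
class InformationElement:
    """Data in context: the same data yields different information per context."""
    name: str
    context: str  # e.g. Stock, Sales, Manufacturing
    uses: list[DataElement]

product = DataElement("Product", ["product_id", "description"])
location = DataElement("Location", ["location_id", "name"])

# The same Product data, placed in a Stock context, becomes Information.
stock_by_location = InformationElement(
    "Stock Volume by Location", context="Stock", uses=[product, location]
)
print(stock_by_location.name, "->", [d.name for d in stock_by_location.uses])
```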


Naturally, we’ve used the Conceptual, Logical and Physical abstractions for the new Data meta classes, covering the WHAT, HOW and WHERE for data elements. In addition to adding these new meta classes, we have created powerful relationship classes that enable us to clearly understand how Information and Data are being used by Applications and Business Processes. Some of this might seem a bit complex at first glance, but that is due to the contextualised nature of these relationships. What we’ve created are constructs that enable us to understand what Information Applications use and create; in the context of that, which data elements are used, specifically, to deliver that Information; and, in that context, what the CRUD is for the Information and the Data. We believe that having these types of contextual relationships is a uniquely powerful capability of Essential Architecture Manager. We haven’t added them just because we can, but because without them we cannot accurately or reliably understand what is really going on in our organisation with respect to Information and Data.
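
To illustrate why the relationships need to be contextual, here is a sketch – simplified far beyond the actual meta model, with all names invented – in which the CRUD on a Data element is recorded per Application-Information usage rather than globally:

```python
# Each entry: in the context of an application using or creating a piece of
# information, which data elements are touched and with what CRUD letters.
usages = [
    {"application": "Order Management", "information": "Stock Volume by Location",
     "info_crud": "R", "data": {"Product": "R", "Location": "R"}},
    {"application": "Warehouse System", "information": "Stock Volume by Location",
     "info_crud": "CRU", "data": {"Product": "R", "Location": "CRU"}},
]

# The same Data element ("Location") carries different CRUD in different
# contexts - something a single, global Application-to-Data link cannot express.
for u in usages:
    print(f'{u["application"]} -> {u["information"]}: {u["data"]}')
```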


So, what kind of things can we do with the Information and Data Pack?

We have designed the meta model to support Information Management and Data Management activities, managing all the dependencies that exist between information and data elements, how these are used by Application elements, and how they are produced by Applications.


More specifically, in combination with the Views that are supplied out-of-the-box, we can understand where issues exist with our Master Data Management solutions, e.g.

  • where are we mastering Product data?
  • how is this data being supplied to our Applications?
  • how does this compare to how data should be supplied to our applications (according to our MDM policies)?


As you will have come to expect, the supplied Views are easy-to-consume, hyperlinked perspectives that are designed to give valuable insights to a wide range of stakeholders – not just Information and Data professionals.


We can browse the Data catalogue, drill into individual subjects, see what Objects we have in each subject and how each of these objects is defined. Further drill-downs can take us to views that show how this Object is provided to the relevant applications in the enterprise, and from where.


Based on the detailed CRUD information that we can capture in the model, we can easily produce CRUD matrices from a wide variety of perspectives, e.g. CRUD of Data Subjects by Business Capability, CRUD of Data Objects by Business Process – both of which are derived from the details in the model, which means that these are automatically updated as the content of the model is updated.
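
As a toy version of how such a matrix can be derived from relationship tuples in the model – the content here is made up, and the real Views do this directly from the repository:

```python
from collections import defaultdict

# (business capability, data subject, CRUD) tuples, as might be derived
# from the details in the model.
relations = [
    ("Sales", "Customer", "CRU"),
    ("Sales", "Product", "R"),
    ("Fulfilment", "Product", "R"),
    ("Fulfilment", "Order", "RU"),
]

# Pivot the tuples into a capability-by-subject matrix.
matrix = defaultdict(dict)
for capability, subject, crud in relations:
    matrix[capability][subject] = crud

subjects = sorted({s for _, s, _ in relations})
print("Capability".ljust(12), *(s.ljust(10) for s in subjects))
for capability, row in sorted(matrix.items()):
    print(capability.ljust(12), *(row.get(s, "-").ljust(10) for s in subjects))
```

Because the matrix is derived rather than drawn, it stays correct as the underlying relationships change – which is exactly the behaviour described above.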


One of the most powerful Views is also one of the simplest. We find that providing a browser-based, easy-to-access place to find accurate and detailed definitions for all the Information and Data elements – and most importantly in easy-to-understand terms – is a capability that is quickly valued, in particular by non-technical stakeholders. Deliberately, there are no ERDs or UML class diagrams in these definitions. Rather, we can explore the catalogue of the Information and Data elements in a highly hyperlinked environment, providing an automatically generated data (and information) dictionary to the organisation.


In that context, we’ve introduced some nice little features that will be included across all the Views such as the ability to identify the content owner of particular elements or groups of elements. This means that if we see that the content on a View is incorrect, we can click a link and send an email to that owner to let them know that things have changed or that there’s an error in the content.


We recognise that while most of the meta model concepts for the Information and Data pack are straightforward, there are some more complex areas, in particular in the definition of relationships. As always, we’ve worked hard to keep these as simple as possible, but the reality of how Information and Data are used and produced is (or at least can be!) complex, and we need to be able to capture and manage these complexities. However, the capabilities and results are worth the investment in understanding how to use them. For example, the way the Information-to-Application (and, in that context, to Data) relationships work enables us to understand how a packaged application has been configured in terms of how the out-of-the-box data structures are being used. This means that we can understand where Product Codes are being stored in Location Code tables, for example; this is the kind of scenario where the ‘devil is in the detail’, and fine-grained things like this can become the source of much larger-scale issues in the architecture.
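
A tiny sketch of that Product-Codes-in-Location-Code-tables scenario – names invented – showing how recording both the intended and actual use of a structure lets us flag the drift:

```python
# How a packaged application's out-of-the-box structures are actually used.
configured_usages = [
    {"application": "Packaged ERP", "table": "LOCATION_CODE",
     "intended": "Location Code", "actual": "Product Code"},
    {"application": "Packaged ERP", "table": "PRODUCT",
     "intended": "Product", "actual": "Product"},
]

# Flag structures whose actual use has drifted from the vendor's intent -
# the fine-grained detail that can cause larger-scale issues.
for u in configured_usages:
    if u["intended"] != u["actual"]:
        print(f'{u["application"]}: {u["table"]} holds {u["actual"]}, not {u["intended"]}')
```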


We’ve already had some excellent feedback on the pack and based on demands from real-world use of Essential Architecture Manager, we are now looking to extend the granularity of the application-to-data relationships to include data attributes, rather than just data objects. You might not always need to go to that level of detail but if you do, the capability will be there – and this will be designed to enable you to model at asymmetrical levels of granularity.


Although it’s currently an optional extension, the Information and Data pack will be incorporated into the next baseline version of the meta model. We think that the game-changing capabilities of the Information and Data Pack are a vital part of the enterprise architecture model and so it is natural that this extension become part of the core meta model.


Strategy Management and Enterprise Architecture

We have noticed that many organisations are currently looking to their EA to support their strategy management, whether that be business and IT, or just IT focused.  This is quite a shift for some organisations, moving EA up the stack and out of project- or domain-focused architecture to one that provides a broader, higher-level view.

Our Strategy Management release is, therefore, very timely.  It has been in development for some time and was the focus of ECP 4; we would like to acknowledge and thank the community for their contributions to the release.  We have been using it at one of our global clients for some time, so it is released with us knowing that it actually works in real-life situations – in a global organisation that has distributed businesses with differing regional and organisational objectives.

A very brief overview of the key elements in this release is given below; for full details see the release documentation:

  • Architecture States – represent the different states of your architecture.  Sometimes these are referred to as ‘current state’ and ‘future state’, but we think it is best to avoid these terms as your current state is always evolving and will eventually (one would hope!) become your future state, which makes everything somewhat confusing.  In our opinion it is better to actually refer to the state you are in (politely, of course!) and the state you want to be in, including the steps in between.  So, an example of the type of naming we would suggest is ‘manual invoicing’, then ‘automated payment – UK’, followed by ‘automated invoicing – UK’ and finally ‘fully automated invoicing’, which allows you to understand exactly what each architecture state refers to.
    An architecture state can relate to one or all of the layers of the architecture, depending on the type of project that is being transitioned.
    We would expect there to be many of these very specific architecture states reflecting different areas of focus, rather than a very few states that represent the entire EA at specific points in time.  We think the idea of lots of ‘smaller’, more specific architecture states is very powerful for managing the complexity of the progression of the architecture: break it into manageable chunks that deal with specific programmes or areas of interest, and use lots of specific, focused roadmaps rather than one unwieldy uber-roadmap.  However, if a single roadmap is what you need to manage the transition of your EA, then you can, of course, do just that!
  • Roadmap Model – is the pictorial representation showing how the architecture(s) will transition between the various different states, which are shown as roadmap milestones on the model.  A timeline for this transitioning can also be included in the model.
  • Strategic Plans – hold the detail of how the organisation plans to transition from one state to another and are implemented by Projects, which can be grouped into Programmes.  Strategic Plans also hold the detail of which Issues are being addressed and which Objectives are being met by their implementation.  (A minimal sketch of how these elements relate follows this list.)
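
Here is that sketch, using the invoicing states named above – these are our own illustrative classes, not the release’s meta model:

```python
from dataclasses import dataclass, field

@dataclass
class ArchitectureState:
    """A specific, named state, e.g. 'manual invoicing'."""
    name: str

@dataclass
class StrategicPlan:
    """How we move between two states, and what that move achieves."""
    name: str
    from_state: ArchitectureState
    to_state: ArchitectureState
    objectives_met: list[str] = field(default_factory=list)
    issues_addressed: list[str] = field(default_factory=list)
    projects: list[str] = field(default_factory=list)

# The roadmap is then the ordered states, used as roadmap milestones.
states = [ArchitectureState(n) for n in
          ["manual invoicing", "automated invoicing - UK", "fully automated invoicing"]]

plan = StrategicPlan(
    "Invoice automation - phase 1",
    from_state=states[0], to_state=states[1],
    objectives_met=["Reduce invoicing cost"],
    issues_addressed=["Manual re-keying errors"],
    projects=["UK invoicing project"],
)
print(f"{plan.name}: {plan.from_state.name} -> {plan.to_state.name}")
```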

There are a number of standard views released with the pack, but in true Essential spirit we expect users to develop their own views to meet the needs of their organisation.  The type of view we envisage being used is, for example, one that maps Projects to Issues and Objectives, so you can see which projects support which Objectives and resolve which Issues.  This can be useful in allocating resources to the areas providing most benefit to the organisation.

We feel that this is a key area of EA; additionally, it is one that is, or should be, the focus of many organisations at the moment as they move forward following the recession.  Using EA to assist with strategy management within an organisation is really where the big payback from EA can start to be seen.  Of course, all the underlying activities and effort are also crucial and can provide benefits on a project, region or domain basis, but using EA to support strategy management is the activity that can provide the overall view of where the organisation is now, where it wants to be and the plans it has for moving there and, crucially, can be used to ensure the organisation actually gets there.  It does this by analysing information and providing insights, as stakeholder-specific views, that highlight potential pitfalls and areas of opportunity, allowing these to be used to aid decision making and ensure the strategic plans are achieved.

We are always keen to hear your views, please let us know if you have any comments.

Exploiting information that you already have about your organisation

Whether you are just starting out with a new EA modelling solution or already have a powerful repository, everyone has information about their organisation in many other places in many other forms. Exploit this existing information.

Many organisations embarking on EA modelling already have a wealth of information in a variety of disparate forms, such as presentations, drawing tools and, most commonly, spreadsheets. Rather than re-key or re-model all this work, shouldn’t we be able to exploit this existing information – lifting it directly – by importing it into Essential Architecture Manager?

Over the last few years, we have explored a variety of options for importing existing information into Protege (and therefore Essential Architecture Manager), and we realised that the only safe and reliable approach was to use the Protege API to add, remove and update information in the knowledge base in effectively the same way as the graphical user interface does, letting Protege take care of all the integrity issues just as it does when you work with the forms. Using the Protege API is at the heart of the Essential integration tools.

The new Essential Integration tab plugin for Protege and Essential Architecture Manager simplifies both the one-off ‘data load’ of things like your spreadsheets and the on-going synchronisation of important data that is maintained in other systems. We’ve had some very good experience of doing this sort of synchronisation with configuration management databases and virtual server environment configurations.
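
The general shape of such a synchronisation is an upsert loop: compare each external record against the repository, creating missing instances and updating changed attributes. The dict-based ‘repository’ below is a stand-in we invented for this sketch; the integration tab does the equivalent through the Protege API so that Protege maintains integrity:

```python
# Pretend repository: key -> attribute dict standing in for instances.
repo = {}

def synchronise(external_records):
    for record in external_records:
        instance = repo.setdefault(record["name"], {})  # create if missing
        for attr, value in record.items():
            if instance.get(attr) != value:  # update only changed attributes
                instance[attr] = value

# e.g. a nightly feed from a configuration management database
synchronise([{"name": "lon-app-01", "os": "Linux", "cpus": 8}])
synchronise([{"name": "lon-app-01", "os": "Linux", "cpus": 16}])  # cpus updated
print(repo)
```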

This new tab is an evolution of the ‘skunk-works’ release of the Integration Server. We have made it into a tab to make it easier for both stand-alone and multi-user use (both modes are supported) and to make it easier to install and add this capability to your Essential Architecture Manager. The process of running an import or synchronisation is now much simpler than it was with the Integration Server.

The concept of the integration server will remain but will be re-worked to provide a means of automating imports and on-going synchronisations from other sources, e.g. on a scheduled daily basis. However, the new integration tab will be the most commonly used mechanism for importing information into Essential en-masse.

Getting information out of Essential to share with other systems is simply a matter of creating a suitable view for Essential Viewer, maybe an XML document or a CSV, etc. Thanks to a contribution from Essential Project Community member Mathwizard, this can be saved directly as an export file from Essential Viewer, using your browser.

So how do you go about importing your spreadsheets, XML documents, etc. using the integration tab?

As you might expect, XML is the ideal format for integrating the source information and the integration tab expects source data to be in XML format, so it is often easiest to work with an XML export of your source information.

The first step is to work out how this existing information will be represented in Essential. Unfortunately, there is no magic solution to this. You need to define how your source information maps to the Essential Meta Model and with this understanding define the transform that is required.

By the way, creating relationships and relationship classes during imports can be particularly tricky as the source information often does not represent the relationships in a way that can be mapped to the relationships in the Essential Meta Model. In many cases, it can be best to focus on importing the instances and then completing the relationships in Protege via the forms.

There are two approaches to defining the transform that I’ll explore in a moment. We intend to produce and share a library of transforms from common source formats into the baseline Essential Meta Model.

Currently, we have a transform for importing the XML representation of an Essential repository that Essential Viewer uses. This means that you can import elements from other Essential repositories out of the box. Also, our previous experience with certain configuration management databases and virtual server environments means that we can easily share transforms to import Technology Nodes and Technology Instances into Essential.

The two approaches to transforming your existing source information into the Essential Meta Model are:

  1. Transform your existing source to XML using the Essential Viewer XML schema, then import the resulting XML into Essential using the out-of-the-box Essential Repository transform. This approach is useful for one-off data loads from existing sources such as spreadsheets, where it’s most important to get the main instances into Essential rather than complex relationships (a sketch of this approach follows the list).
  2. Define your own transform file and use this to have the integration tab transform your source information and import it into Essential. Although this may seem to be a more complex exercise than approach 1, you have more flexibility in defining how your source information maps to the Essential Meta Model. A how-to guide for writing transforms is being written at the moment that explains how the transforms work and the supporting tools that are available, e.g. a library of script functions that wrap the Protege API to support things like on-going synchronisation of instances in Essential with instances in other, external repositories.
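
As a sketch of approach 1, a spreadsheet exported as CSV could be converted to XML along these lines. Note that the element and class names below are invented for illustration – take the real structure from the Essential Viewer XML schema and the out-of-the-box Essential Repository transform, not from this example:

```python
import csv
import io
import xml.etree.ElementTree as ET

# A spreadsheet exported as CSV; the content is invented for illustration.
source = io.StringIO(
    "name,description\n"
    "CRM,Customer relationship management\n"
    "ERP,Finance and orders\n"
)

root = ET.Element("repository")  # element names here are placeholders
for row in csv.DictReader(source):
    app = ET.SubElement(root, "instance", {"class": "Application_Provider"})
    ET.SubElement(app, "name").text = row["name"]
    ET.SubElement(app, "description").text = row["description"]

print(ET.tostring(root, encoding="unicode"))
```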

We would be very happy to help anyone who needs to construct their own custom transform via the forums or even to undertake commissions to build custom transforms as required.

And of course, if you’ve built some transforms that you’d like to share with the Community, we would really like to add these to the Share area of the site.

Happy exploiting of your existing information!

Business Capabilities

We have been struck recently by the volume of articles and blogs regarding Business Capability modelling, many seemingly of the view that this is a new concept that will resolve the old business IT alignment issue.

Whilst we don’t concur with the view that the concept of business capabilities is either new or capable of resolving the alignment issue alone, we are in agreement that business capability modelling is a key aspect of the business architecture.

We view business capabilities as the ‘services’ that the business offers or requires.  In Essential, these capabilities are modelled in the Business Conceptual layer and represent what the business does (or needs to do) in order to fulfil its objectives and responsibilities.

A business capability is at a higher level than a business process.  It represents a conceptual service that a group of processes and people, supported by the relevant application, information and underlying technology, will perform.  The capability represents the what, whereas the process, people and technology represent the how.  Business Capabilities can themselves be broken down into supporting capabilities, if this is useful.

Defining your business capabilities is extremely useful as it allows you to take a step back and focus on the key elements of your organisation.  You can avoid getting bogged down in the details of ‘how’ things happen and concentrate on ‘what’ does (or needs to) happen.  Once you have done this, it is possible to identify your key capabilities – for example, the ones that will differentiate your business – and you can use this information to ensure that you focus on the areas of importance in your business, whether in defining new projects or ensuring business as usual delivers appropriately.

You will find that your business capabilities are relatively static, because you are defining the ‘what’, which rarely changes, whereas your business processes will constantly be evolving, as ‘how’ things are done changes all the time with advances in technology and customer demand.  A very obvious example is retail: twenty years ago the internet did not exist, so there were no online sales channels, yet the capabilities of a retail channel have not altered – Sales, Fulfilment and Billing are still capabilities – however the processes of ‘how’ retailers sell, dispatch and take payment have altered dramatically.
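
Sticking with the retail example, the what/how split can be sketched in a few lines – the capability is stable while the processes that realise it change (all content illustrative):

```python
# The capability (the 'what') stays stable; the processes (the 'how') evolve.
capability = "Sales"

processes_then = ["Take order in store", "Take order by mail"]
processes_now = ["Take order in store", "Take order online", "Take order via app"]

# Twenty years apart, the capability is unchanged; only its realisation moved on.
print(capability, "->", processes_then)
print(capability, "->", processes_now)
```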

In reviewing our tutorials we noticed that we already have tutorials on Capturing the Business Value Chain (a subset of the capabilities) and Business Process Modelling, but we don’t have a tutorial that focuses solely on business capability modelling.  In view of the current interest, we aim to address this gap as soon as we can, and a new tutorial will be available shortly.