
I’m currently working on capturing all IT components used across our organization in LeanIX. While using the reference catalog is helpful—especially for details like lifecycle (e.g., end-of-life dates)—the list is becoming quite large and difficult to manage.

I’d like to understand how other teams are approaching this:

  • What level of granularity are you using when modeling IT components?
  • Do you include every tool/library/framework or focus only on critical or enterprise-level technologies?
  • How do you balance accuracy with maintainability when using the reference catalog?

Would love to hear your strategies or lessons learned for keeping the IT component list meaningful yet manageable.

We at MKS require that IT components be created and managed for every business app. We also track software (at a component level) that IT helps procure for the business. So that is around 420 apps and another 100 or so software components, which is already too much for a team of 2 people.

After that, we pull CMDB data from ServiceNow into LeanIX, and that comes from ServiceNow Discovery. So we see what is installed and out there, but we don't have the capacity to manage it all either. It's just too much.

It all comes down to how much you want to spend on managing it. For most orgs, I suspect it's not worth investing the people resources to manage and track it all, and most orgs struggle to get federated owners to manage their parts.


We're still quite early in our LeanIX journey and just starting to explore IT components.

We don't have the luxury of a true CMDB (our ServiceNow implementation is currently mainly focused on incident / service management capabilities), although we may improve on this in the future.

At the moment our rule is to automate all imports where possible; manual input of IT components is our choice of last resort.

We have a lot of lists of IT components in various technologies (SharePoint lists seem to be a favourite), so as long as the providing teams can tag entries at source with a LeanIX external ID that associates them with a business application, we automate the rest. We're also exploring how we can associate groups of components in our various public cloud applications (PaaS component usage, for example).
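For illustration, an automated import like this ends up as an LDIF payload pushed to the Integration API. This is only a minimal sketch: the envelope fields follow the LeanIX LDIF format, but the connector names, the `vm-teamx-001` identifier, and the `applicationExternalId` field (the tag the providing team sets at source) are made-up examples, not our real config:

```json
{
  "connectorType": "ex-sharepoint-assets",
  "connectorId": "sharepoint-iaas",
  "connectorVersion": "1.0.0",
  "lxVersion": "1.0.0",
  "description": "IT components extracted from a team's SharePoint list",
  "processingDirection": "inbound",
  "processingMode": "partial",
  "content": [
    {
      "type": "VirtualMachine",
      "id": "vm-teamx-001",
      "data": {
        "name": "SQL-PROD-01",
        "operatingSystem": "Windows Server 2012 R2",
        "applicationExternalId": "APP-0042"
      }
    }
  ]
}
```

The key point is that the source team only has to maintain the external ID against their own entries; everything else in the payload is generated.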

While for applications / business capabilities etc. LeanIX is the master / source of truth, for our IT components we're treating groups of these as mastered outside of LeanIX. LeanIX is the aggregated data view for them.

For example, our team who looks after the IaaS estate has a load of data within a SharePoint site which contains all IaaS assets on the estate (VMs, essentially), which in turn is fed by various environments (Azure, vSphere, AIX/pSeries teams, etc.). It contains a whole wealth of information. They tag VMs at source against a service (minimal effort on their behalf), and we extract a subset of the data and aggregate the essential information we're interested in for LeanIX. It's already helped us highlight:

  • IaaS assets which were running but not associated with active applications in the estate: orphaned VMs / DBs from decommissioned technology.
  • IaaS assets associated to the wrong area / applications (so not properly owned).
  • IaaS assets spun up with no understood business purpose (shadow / black market IT).
  • Cloud services which also have an unseen IaaS footprint and cost (e.g. forgotten sync and migration servers).
  • Obsolescence risk from old operating systems (we want to extend this to software and further).
  • How multiple backup technologies are used and mixed across components in our applications (simplification opportunities).

 

At the moment we're using some simple Power Automate flows as a "middleware" to extract data from various sources and then feed LDIF into the Integration API. The Integration API is really useful for aggregating and mapping data to common structures. It took a bit to get our heads round it, but it's very powerful once you learn how to use it. Automations and calculated fields on top of this also help to build hidden viewpoints of the data or highlight certain things.
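To give a feel for the mapping side, here is a rough sketch of an Integration API processor config that turns incoming LDIF content items into IT Component fact sheets, using JUEL expressions to pull values out of each item. The processor name, content type, and field mappings are illustrative assumptions, not a production config; the general shape (an `inboundFactSheet` processor with a filter, an external identifier, and a list of updates) follows the LeanIX processor format:

```json
{
  "processors": [
    {
      "processorType": "inboundFactSheet",
      "processorName": "VMs to IT Components",
      "type": "ITComponent",
      "filter": {
        "exactType": "VirtualMachine"
      },
      "identifier": {
        "external": {
          "id": { "expr": "${content.id}" },
          "type": { "expr": "externalId" }
        }
      },
      "updates": [
        {
          "key": { "expr": "name" },
          "values": [ { "expr": "${data.name}" } ]
        },
        {
          "key": { "expr": "description" },
          "values": [ { "expr": "${data.operatingSystem}" } ]
        }
      ]
    }
  ]
}
```

Keying fact sheets on the external ID (rather than name) is what makes repeated runs idempotent: re-importing the same source list updates existing fact sheets instead of duplicating them.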

I've found Copilot is quite a good AI assistant for writing the API processor configs and JUEL/regex expressions once you learn the right prompts for it.


Regards,

Neil

 

 


We moved the representation of our Architecture Standards to LeanIX in the last year (i.e. a Standard is one or more Tech Categories with one or more IT Components). From that perspective, the Standards drive the related IT Components without getting too granular. For example, we used to track SSO options (like Entra or Okta) in a Tag and shifted that to IT Components, as part of the identity stack for an application.

We're not doing ServiceNow integration yet, but we talk about it multiple times a year. I'm curious how discovered CMDB items can be normalized into a LeanIX-friendly list.

Typically we don't go down to the "development library" level; however, in partnership with our cyber security area, if they identified one that was high risk, I would encourage adding it for visibility. When in doubt, I lean IT Component usage towards whatever the senior leaders who could sponsor data collection are expressing interest in. If it's "Vulnerabilities and Risk", that might lean towards high-risk development libraries. If it's "Cost", that would lean towards aggregate, high-level IT Components.

