
LeanIX usage for EU AI Act

  • April 29, 2025
  • 5 replies
  • 267 views

Carsten

Hello,
I’d like to start a conversation about how to utilize SAP LeanIX for the EU AI Act.

We started with applications by enhancing the metamodel following the proposal of SAP LeanIX:

The mandatory fields are conditional: except for AI Usage itself, they become required based on what is selected in the AI Usage single-select field.
To simplify data entry for factsheet owners, I created a survey containing all of this information.
We also added the AI Potential to Business Capability Factsheet.
Next step is to apply this approach to the component factsheet.
We are still discussing how to calculate an AI risk score and risk level. Have you already done something like this? What is your approach/formula?
Best regards,
Carsten

5 replies

  • April 29, 2025

We implemented the following:

Governance Factsheet to track compliance / assessments | Community

Let me know if you need more info.


Carsten
  • Author
  • April 29, 2025

@Jacques: Thank you for sharing; that’s a good approach. We started using this kind of factsheet for Governance & Architecture, too.

But for AI we want to stay in line with the proposed implementation, where you need to document which Application, Software, etc. uses AI.


simonsztabholz
  • May 27, 2025

Hello Carsten,


For us, the AI risk assessment sits in OneTrust. We identify in LeanIX which assets are using AI, but the assessment itself is done in OneTrust through its assessment functionality.

However, one way I see to do it in LeanIX would be to define adequate attributes for assessing the AI risk (Autonomy Level, Human Interaction with the AI, ...) and use LeanIX’s new calculation feature. The calculation could take the scores of these attributes and assign a risk level based on logic that you define.
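As a rough illustration of the attribute-based idea above, here is a minimal Python sketch of a weighted risk score. The attribute names, weights, and thresholds are purely illustrative assumptions, not LeanIX fields or EU AI Act categories; the actual logic would have to be defined with Legal.

```python
# Illustrative sketch only: derive an AI risk level from scored attributes,
# mirroring what a LeanIX calculated field could do. All names, weights,
# and thresholds here are assumptions, not part of any LeanIX metamodel.

WEIGHTS = {
    "autonomy_level": 0.4,      # how independently the AI acts (0-3)
    "human_oversight": 0.3,     # 0 = constant oversight, 3 = none
    "personal_data_use": 0.3,   # 0 = no personal data, 3 = sensitive data
}
MAX_ATTRIBUTE_SCORE = 3

def ai_risk_score(attrs: dict) -> float:
    """Weighted sum of attribute scores, normalised to the range 0..1."""
    return sum(WEIGHTS[name] * attrs[name] for name in WEIGHTS) / MAX_ATTRIBUTE_SCORE

def ai_risk_level(score: float) -> str:
    """Map the normalised score onto a simple three-step risk level."""
    if score >= 0.75:
        return "high"
    if score >= 0.4:
        return "limited"
    return "minimal"
```

For example, `ai_risk_level(ai_risk_score({"autonomy_level": 3, "human_oversight": 2, "personal_data_use": 1}))` yields a normalised score of 0.7 and therefore "limited".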

Kind regards,


Simon


Hi Simon,


We don’t have OneTrust for assessments, and we were considering extending LeanIX with assessments either through attributes or through an integration with Microsoft Forms that returns a value to the ‘AI Risk’ field.
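Writing an externally derived assessment result back into a LeanIX field could be done via the Pathfinder GraphQL API's `updateFactSheet` mutation. A hedged sketch follows: the field path `/aiRisk` is a hypothetical custom field, and the endpoint URL and token handling depend on your workspace setup.

```python
# Sketch, not a definitive integration: build the GraphQL payload that would
# set a custom "AI Risk" field on a LeanIX fact sheet. The field path
# "/aiRisk" is an assumption; check your workspace's metamodel configuration.

def build_ai_risk_patch(fact_sheet_id: str, risk_level: str) -> dict:
    """Return a GraphQL request body patching the AI Risk field of one fact sheet."""
    mutation = """
    mutation ($id: ID!, $patches: [Patch]!) {
      updateFactSheet(id: $id, patches: $patches) {
        factSheet { id }
      }
    }
    """
    patches = [{"op": "replace", "path": "/aiRisk", "value": risk_level}]
    return {"query": mutation, "variables": {"id": fact_sheet_id, "patches": patches}}

# Sending it would look roughly like this (requires a workspace access token):
# requests.post("https://<instance>.leanix.net/services/pathfinder/v1/graphql",
#               headers={"Authorization": f"Bearer {access_token}"},
#               json=build_ai_risk_patch(fact_sheet_id, "high"))
```

A small glue script like this could run after each Microsoft Forms submission, mapping the form's answers to a risk value before posting the patch.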

While we were looking through the assessment questions from Legal, it came to roughly 60-70 questions. We haven’t yet worked out which core fields could actually drive the risk score.

Have you done that exercise already? If so, could you share that information (like the Autonomy Level you mentioned)?


Thanks,

Surbhi


carmen.guti
  • December 23, 2025

Hi all,

Thanks for sharing your approaches, very interesting discussion.

To add our experience from Avolta, we are currently using LeanIX as the central place to create visibility of AI usage across the application portfolio, aligned with the EU AI Act, without trying to fully replicate legal risk assessments inside the tool.

Our approach so far has been:

  • Activating the AI Governance and Adoption section on the Application factsheet.

  • Using LeanIX surveys to collect AI-related information from Application Owners in a guided and lightweight way.

  • Defining a clear checklist for Application Owners so they apply a common and consistent criterion to:

    • identify whether the application uses AI or not (AI Usage),

    • determine the Type of Artificial Intelligence used,

    • and assess the AI Potential from a business perspective.

  • Keeping the AI Risk field owned by Compliance and Legal, not by Application Owners, as it relates to EU AI Act risk classification and regulatory assessment rather than IT ownership.

This separation of responsibilities has helped us avoid overloading Application Owners with legal assessments, while still creating a reliable baseline of AI visibility across the portfolio.

One improvement we identified is that the “Type of Artificial Intelligence” field is currently single-value, whereas in reality a single application may include multiple AI techniques. A multi-value option would better reflect real-world implementations and reduce oversimplification.

In this context, one resource we’ve found particularly useful, especially for teams without a legal background, is the EU AI Act Explorer:
 https://artificialintelligenceact.eu/ai-act-explorer/

It provides a clear and accessible way to understand risk categories and obligations under the EU AI Act, and is very helpful when:

  • aligning discussions between Legal, IT and EA,

  • narrowing down assessment questions,

  • and deciding where deeper compliance analysis is really needed.

From an EA perspective, this combination of portfolio visibility first, clear ownership, and progressive risk assessment seems to strike a good balance between speed, compliance and sustainability.

Curious to hear how others are structuring ownership and responsibilities around these AI-related fields.

Best regards,
Carmen