Artificial intelligence

Summary

    Artificial Intelligence (AI) is a groundbreaking technology with significant potential to positively influence the financial sector. It can drive digital transformation and innovation, optimise processes, provide new customer insights, and promote inclusion, among other benefits.

    Within the financial sector, AI manifests in a variety of use cases with varying levels of autonomy, such as chatbots, fraud detection, credit scoring, risk management, asset and portfolio management, and robo-advice. While AI’s potential advantages are immense, they can only be realised if the foundations of this technology and its inherent risks are thoroughly understood and an appropriate control framework is implemented.

    To ensure a clear understanding and evaluation of AI-related risks in the financial sector, the CSSF has released supporting documentation for regulated entities. This documentation aims to help entities navigate the challenges and benefits associated with AI use cases in the financial sector.

    The European Artificial Intelligence Act

    On 1 August 2024, the European Artificial Intelligence Act (AI Act), the world’s first comprehensive regulation on artificial intelligence, entered into force. The AI Act is designed to promote the uptake of human-centric and trustworthy AI while ensuring a high level of protection of health, safety and fundamental rights. The regulation aims to establish a harmonised internal market for AI in the EU, encouraging the adoption of this technology and creating a supportive environment for innovation and investment.

    The AI Act is horizontal legislation, applying across all sectors, including the financial sector. It follows a risk-based approach, imposing stricter rules on AI systems classified as “high risk”.

    AI systems presenting specific transparency risks (e.g. chatbots) will instead be subject to limited transparency obligations: such systems must clearly disclose to users that they are interacting with a machine, and certain AI-generated content, including deep fakes, must be labelled as such. General-Purpose AI systems (“GPAI”), including systems using generative AI (“GenAI”), are among the systems subject to these transparency rules.

    At the same time, stricter rules will apply to GPAI models presenting systemic risk.

    Finally, AI systems considered a clear threat to people’s fundamental rights will be banned: for example, AI systems used for cognitive behavioural manipulation, social scoring, or the untargeted scraping of facial images from the internet.
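
    As a concrete illustration of this tiered, risk-based structure, the minimal Python sketch below arranges the tiers described above into an enumeration and maps a few of the financial-sector use cases mentioned earlier onto them. The tier descriptions and the mapping are illustrative assumptions for this sketch only, not legal classifications under the AI Act; any real classification depends on the specific system and its intended purpose.

        from enum import Enum

        # Illustrative tiers mirroring the AI Act's risk-based approach.
        # The mapping below is an assumption for illustration only; it is
        # not a legal classification of any actual system.
        class RiskTier(Enum):
            PROHIBITED = "banned practice"
            HIGH_RISK = "strict obligations"
            TRANSPARENCY = "limited disclosure obligations"
            MINIMAL = "no specific obligations under the AI Act"

        EXAMPLE_USE_CASES = {
            "social scoring": RiskTier.PROHIBITED,      # named above as a banned practice
            "credit scoring": RiskTier.HIGH_RISK,       # commonly cited high-risk use case
            "customer chatbot": RiskTier.TRANSPARENCY,  # must disclose machine interaction
            "spam filtering": RiskTier.MINIMAL,
        }

        for use_case, tier in EXAMPLE_USE_CASES.items():
            print(f"{use_case}: {tier.name} ({tier.value})")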

    The AI Act also introduces a new governance framework: national authorities are responsible for overseeing and enforcing the rules for AI systems, while at EU level the newly established AI Office within the European Commission is responsible for governing General-Purpose AI models.

    To ensure EU-wide coherence and cooperation, the European Artificial Intelligence Board (AI Board), comprising representatives from Member States, will be established. In addition, the AI Act establishes two advisory bodies to provide expert input: the Scientific Panel and the Advisory Forum.

    The AI Act will apply from 2 August 2026, two years after its entry into force, except for the following specific provisions (see the timeline sketch after the list):

    • 6 months after entry into force (2 February 2025): the rules on prohibited AI practices, as well as the definitions and the provisions related to AI literacy, will apply.
    • 12 months after entry into force (2 August 2025): the obligations for General-Purpose AI and the rules on governance will apply.
    • 36 months after entry into force (2 August 2027): the obligations for AI systems that are classified as high-risk because they are embedded in regulated products listed in Annex I (list of Union harmonisation legislation) will apply.
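
    The dates stated in the AI Act correspond to the day after each month anniversary of the 1 August 2024 entry into force. The minimal Python sketch below works through that date arithmetic using the third-party python-dateutil package; the milestone labels are shorthand for the groups of provisions listed above.

        from datetime import date
        from dateutil.relativedelta import relativedelta  # pip install python-dateutil

        # Entry into force of the AI Act, as stated above.
        ENTRY_INTO_FORCE = date(2024, 8, 1)

        # Months after entry into force at which each group of provisions applies.
        MILESTONES = {
            "prohibited practices, definitions, AI literacy": 6,
            "General-Purpose AI and governance": 12,
            "general application of the AI Act": 24,
            "high-risk systems embedded in regulated products": 36,
        }

        for provisions, months in MILESTONES.items():
            # The AI Act fixes each application date one day after the month
            # anniversary (e.g. 2 August 2026 for the 24-month mark).
            applies_from = ENTRY_INTO_FORCE + relativedelta(months=months, days=1)
            print(f"{provisions}: applies from {applies_from:%d %B %Y}")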
