Brief

Responsible by Design: Five Principles for Generative AI in Financial Services

Responsible by Design: Five Principles for Generative AI in Financial Services

To balance risk and opportunity, institutions must build a new risk model suited to an evolving technology.

  • Min. Lesezeit

Brief

Responsible by Design: Five Principles for Generative AI in Financial Services
en
Auf einen Blick
  • Generative AI presents great opportunity for financial services companies, but also multiple risks, some familiar, others new.
  • With AI models trained on such large quantities of data, there are concerns that biased data could, if unmitigated, infect applications, for example.
  • Five design principles can help companies mitigate such risks and set themselves up to achieve their responsible AI goals and deliver on their strategic ambitions.

Thanks to recent technological advances in generative artificial intelligence foundation models and record-breaking rates of consumer adoption, it’s no longer a question whether your company will use this technology. It’s a question of when and how.

Trained on enormous volumes of data and adapted to many applications, foundation models are more sophisticated, complex, and capable than prior AI tools, especially at handling unstructured data. Increasingly offered as a service, they are also much easier and economical to adopt. But concerns about unforeseen consequences and potential misuse of the technology make it urgent for business leaders to understand the privacy, fairness, ethical, and social implications of generative AI, and to balance those risks against its promising commercial potential.

Managing and mitigating the new risks that come with technological advance is familiar terrain for financial service institutions. Generative AI will amplify some well-known concerns but will also present new ones. For example, the risk of bias, long managed through fairness policies and compliance efforts, could now inadvertently be built into applications based on these models. The risk faced by any individual company will depend on two things: first, where and how it applies generative AI, and second, the maturity of its AI governance. Whatever their level of risk, any company using generative AI must identify relevant and emerging risks; understand how their applications map to existing and new regulations; and enhance internal functions, such as machine learning engineering, technology, and legal, in anticipation of new risks.

Financial services applications of generative AI 

Generative AI has the potential to significantly improve the productivity and quality of many types of knowledge work, increase revenue, and reduce costs. Consequently, financial service organizations are likely to use it in a variety of ways. These may include augmenting the productivity of their workforces, personalizing content for consumers, and, eventually, improving consumer self-service. Traditional AI has already been used extensively in financial services, typically with structured data for prediction and segmentation. Today’s foundation models could be used for converting unstructured data—like text, images, and audio—as well as data sets—such as communications, legal documents, and written financial reports—into structured data, which could then be used for strengthening these existing AI risk models. 

The breadth and scale of generative AI’s likely uses combined with its evolving social and ethical risks make creating and managing a comprehensive governance program complex (see Figure 1).

Figure 1
Generative AI foundation models carry new risks, and their scale and broad application augment existing risks

Regulatory, compliance, and legal risks: inheritance, ownership of training data, developing regulation, data privacy, IP ownership of created content, job displacement  

Regulators are clearly still catching up to the rapid evolution of generative AI and foundation models. In the coming months, executives will have to watch for upcoming regulations and proactively manage them. These will come from existing regulatory bodies that are forming their perspectives, as well as from new regulatory entities that may be created specifically for this technology, such as those envisioned in the European Union’s AI Act.

Generative AI also exposes organizations to increased legal risk from inadvertent or unintentional exposure of customer data by employees experimenting on public or shared systems, uncertainties in the provenance of data used in training foundation models, and potential copyright risks on content generated using these technologies.

Additionally, the economic risks from regulatory noncompliance must also be considered—the draft European regulations are suggesting stiff financial penalties, similar to fines for noncompliance with data privacy regulations (GDPR).

Operational risks including data, IT, and cyber resilience and cybersecurity: data management and governance, fraud, adversarial/cyberattacks, vendor risk

Given the rapid pace of advances in generative AI, many features and capabilities are being launched to support experimentation. Until these solutions are hardened to support scaling, control privacy, monitor performance, manage security anomalies, follow data sovereignty, access regulations, and meet enterprise service levels, their commercial use must be very carefully considered.

Excessive complexity can make these systems brittle and more vulnerable to new vectors of cybersecurity attack, like training data poisoning and prompt injection attacks (see Glossary). It is likely, too, that the technology’s ease of use may enable the generation of malicious emails, phishing attacks, and “deepfakes” of voices and images, among other issues. Vendor risk relates both to locking into a “walled garden,” especially as the vendor ecosystem grows, and to the possibility that some vendors will not survive in this increasingly busy space. Open-source models may have their own complexity of maintenance and upgrades.

Model risks including fairness: hallucination, bias, accuracy, accountability, explainability, transparency

The financial services industry has well-developed policies of fairness, accuracy, explainability, and transparency built in compliance with regulatory guidelines. Generative AI intensifies some existing risks associated with AI while requiring a different approach to others. Given the large amount of data that goes into creating foundation models, for example, it is likely that bias will creep into some aspects of the data. And with foundation models mostly available as a service, new and derivative applications will inherit their risk of bias. Earlier machine learning models produced structured output for specific tasks, while generative AI creates novel results whose fidelity and accuracy can be difficult to assess. One particular concern: It can “hallucinate” output that was not present in its training data. That’s a desirable result when looking for innovative content, but unacceptable if presented without verification or qualification.

  • Glossary

Economic risks

As with any new technology, unless planned correctly, generative AI initiatives run the risk of becoming expensive experiments that don’t deliver shareholder value. There is a risk of underestimating the extent to which an organization and its people will need to transform in order to realize the benefits of generative AI. Given the technology’s evolving nature, companies risk investing in the wrong technology or failing to hit the right balance between what they choose to build in-house and what they buy from outside vendors. Ultimately, every executive worries they might lose out to a competitor that deploys the technology in a way that is so appealing to customers it renders their current business model obsolete.

Reputation risks

The tectonic shift generative AI is precipitating brings fear of automation and the potential impact on employment, employees, and society at large. Stakeholders, including customers, employees, and investors, have all demonstrated, as they have with ESG, that they place a high level of emphasis on social responsibility, and this technology will be no exception.

The five design principles of responsible generative AI

Building the organizational capability to responsibly design and deploy generative AI will require an investment of significant resources. By focusing that investment on five principles, companies can begin to mitigate risk and achieve their responsible AI goals while delivering on their strategic ambitions (see Figure 2).

Figure 2
Five principles of responsible AI
  1. Be human-centric: Design for transparency and explainability. Generative AI systems must be built with audit trails and monitoring that fit their end use. This will help ensure that the systems are accessible and fair, are not unfairly biased, and do not discriminate. All stakeholders should be adequately informed when they interact with a machine and should be able to reach a human to escalate any issue they have with a decision made by the system.

    For AI to be trustworthy, it must be designed for human agency and oversight. It is critical that financial service institutions ensure that a human is in the generative AI loop, whether to review feedback or address an escalated problem. End-users or other subjects should always know when a decision, content, advice, or outcome is the result of an algorithm.

  2.  Know where you stand: Ensure that data privacy and infrastructure are robust. With a growing choice of foundation models and providers, organizations will need to select the right service and vendor. Some companies will choose a fully cloud-hosted software-as-a-service approach, while others will opt for models with privately managed infrastructure. As with other cloud technologies, companies will need to balance the simplicity of single sourcing against the risk of becoming locked in to one vendor, and be aware of their vendor’s data security, privacy, and data residency standards. Whichever choice is made, companies can build their technical infrastructure to be foundation-model agnostic so that they have flexibility to change with the evolution of the ecosystem. Financial services firms can specifically mitigate customer and organizational data privacy concerns as well as security and performance risks by opting for the right technology architecture and focusing on building capability in prompt engineering, embeddings, and outputs. 

  3. Earn trust: Prepare for regulation. Regulators are playing catch-up on generative AI, but organizations can prepare by proactively monitoring for, evaluating, and addressing risks and taking a forward-looking approach to governance, risk management, and compliance reporting.

  4. Employ agility: Ensure oversight and disclosure, before and after deployment. Given the fast-evolving nature of this technology and its scale, companies will have to keep monitoring their applications for new and developing risks after deployment and build a human override. They must also have explicit criteria for testing and evaluating the model. Tools that provide information about the AI, such as model cards, will need to evolve in order to ensure that foundation models can be quantitatively evaluated and tested at industrial scale before deployment.

  1. Act with intention: Consider organizational maturity and AI governance when selecting applications. When companies first develop generative AI, it makes sense to focus on uses with low risk. Later, as their responsible AI capabilities mature, companies can work up to those with higher risk. It may be ideal for organizations to start with internal applications, then move on to applications with a limited set of external users. Once those applications have built detailed feedback loops, they can expand to a wider audience.

Generative AI is no longer futuristic but an imminent reality, one offering financial services leaders both unparalleled opportunities and new business and societal risks. Financial services firms can responsibly embrace this transformative technology by building robust governance frameworks and upskilling and reskilling employees to adapt to the AI-driven workplace.

This starts with a conscious decision to prioritize responsible AI practices that are designed with their broader impact in mind and aligned with the organization's core values and long-term strategic objectives. By pioneering an appropriate model for deploying generative AI, financial services organizations have the opportunity to not only gain competitive advantage in an increasingly digital world, but also set an example of responsibility and foresight.

Markierungen

Möchten Sie mit uns in Kontakt bleiben?

Wir unterstützen Führungskräfte weltweit, die kritischen Themen in ihrem Unternehmen zu adressieren. Gemeinsam schaffen wir nachhaltige Veränderungen und Ergebnisse.