Responsible AI in health care starts at the top — but it’s everyone’s responsibility (VB Live)


Presented by Optum

Health care’s Quadruple Aim is to improve health outcomes, enhance the experiences of patients and providers, and reduce costs — and AI can help. In this VB Live event, learn more about how stakeholders can use AI responsibly, ethically, and equitably to ensure all populations benefit.

Register here for free.

Breakthroughs in the application of machine learning and other forms of artificial intelligence (AI) in health care are rapidly advancing, creating advantages in the field's clinical and administrative realms. It's on the administrative side — think workflows or back office processes — where the technology has been more fully adopted. Using AI to simplify those processes creates efficiencies that reduce the amount of work it takes to deliver health care and improves the experiences of both patients and providers.

But it’s increasingly clear that applying AI responsibly needs to be a central focus for organizations that use data and information to improve outcomes and the overall experience.

“Advanced analytics and AI have a significant impact on how important decisions are made across the health care ecosystem,” says Sanji Fernando, SVP of artificial intelligence and analytics platforms at Optum. That’s why the company has established guidelines for the responsible use of advanced analytics and AI across all of UnitedHealth Group.

“It’s important for us to have a framework, not only for the data scientists and machine learning engineers, but for everyone in our organization — operations, clinicians, product managers, marketing — to better understand expectations and how we want to drive breakthroughs to better support our customers, patients, and the wider health care system,” he says. “We view the promise of AI and its responsible use as part of our shared responsibility to use these breakthroughs appropriately for patients, providers, and our customers.”

The guidelines focus on making sure everyone considers how to appropriately use advanced analytics and AI, how models are trained, and how they are monitored and evaluated over time, he adds.

Machine learning models, by definition, learn from the available data that’s being created throughout the health care system. Inequities in the system may be reflected in the data and predictions that machine learning models return. It’s important for everyone to be aware that health inequity may exist and that models may reflect that, he explains.

“By consistently evaluating how models may classify or infer, and looking at how that affects folks of different races, ethnicities, and ages, we can be more aware of where some models may require consistent examination to best ensure they are working the way we’d like them to,” he says. “The reality is that there’s no magic bullet to ‘fix’ an ML model automatically, but it’s important for us to understand and consistently learn where these models may impact different groups.”
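The kind of group-by-group evaluation Fernando describes can be sketched in a few lines. This is an illustrative example, not Optum’s actual tooling: the records, group labels, and predictions below are synthetic, and the disparity metric shown (false-positive rate per group) is just one of several a real audit would examine.

```python
# Compare a model's false-positive rate across demographic groups to
# surface potential disparities. All data here is synthetic.

def fpr_by_group(records):
    """False-positive rate per group: of the true negatives in each
    group, what fraction did the model incorrectly flag as positive?"""
    rates = {}
    for g in sorted({r["group"] for r in records}):
        negatives = [r for r in records if r["group"] == g and r["actual"] == 0]
        flagged = sum(1 for r in negatives if r["predicted"] == 1)
        rates[g] = flagged / len(negatives) if negatives else None
    return rates

records = [
    {"group": "18-39", "actual": 0, "predicted": 0},
    {"group": "18-39", "actual": 0, "predicted": 1},
    {"group": "40-64", "actual": 0, "predicted": 0},
    {"group": "40-64", "actual": 1, "predicted": 1},
    {"group": "65+",   "actual": 0, "predicted": 1},
    {"group": "65+",   "actual": 0, "predicted": 1},
]
# A large gap between groups is the signal that a model needs closer review.
print(fpr_by_group(records))
```

A gap like the one this toy data produces wouldn’t automatically mean the model is broken, but — as Fernando notes — it flags where consistent examination is needed.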

Transparency is a key factor in delivering responsible AI. That includes being clear about how models are trained, whether the data used to train an algorithm is appropriate, and how data privacy is protected. When possible, it also means understanding how specific features are being identified or leveraged within the model. Basic attributes like an age or a date are straightforward features, but the challenge arises with paragraphs of natural language and unstructured text. Each word, phrase, or paragraph can be considered a feature, creating an enormous number of combinations to consider.

“But understanding feature importance — the features that are more important to the model — is important to provide better insight into how the model may actually be working,” he explains. “It’s not true mathematical interpretability, but it gives us a better awareness.”
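One common way to estimate feature importance is permutation importance: shuffle one feature’s values across rows and measure how much the model’s score drops. The sketch below is hypothetical — the toy “model” (a simple age threshold) and the records are invented for illustration, and the article doesn’t say which importance method Optum uses.

```python
# Permutation importance sketch: a feature the model relies on causes a
# score drop when shuffled; an unused feature causes no drop.
import random

def model_score(rows):
    """Accuracy of a toy model that predicts label 1 when 'age' > 50."""
    correct = sum(1 for r in rows if (r["age"] > 50) == bool(r["label"]))
    return correct / len(rows)

def permutation_importance(rows, feature, trials=10, seed=0):
    """Average drop in accuracy when one feature's values are shuffled
    across rows, breaking its relationship with the label."""
    rng = random.Random(seed)
    baseline = model_score(rows)
    drops = []
    for _ in range(trials):
        values = [r[feature] for r in rows]
        rng.shuffle(values)
        shuffled = [dict(r, **{feature: v}) for r, v in zip(rows, values)]
        drops.append(baseline - model_score(shuffled))
    return sum(drops) / trials

rows = [
    {"age": 30, "zip": 10001, "label": 0},
    {"age": 70, "zip": 10002, "label": 1},
    {"age": 25, "zip": 10003, "label": 0},
    {"age": 80, "zip": 10004, "label": 1},
]
# 'age' matters to this model; 'zip' does not, so its importance is zero.
print(permutation_importance(rows, "age"), permutation_importance(rows, "zip"))
```

As Fernando says, this isn’t true mathematical interpretability — it only ranks which inputs the model leans on — but it offers the kind of “better awareness” he describes.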

Another important factor is being able to reproduce the performance and results of a model. Results can change each time an algorithm is trained or retrained, so you want to trace that history by reproducing results over time. This ensures the model remains consistent and appropriate (and allows for adjustments should they be needed).
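In practice, reproducibility usually comes down to pinning random seeds and recording the exact configuration that produced each result. The sketch below is a minimal illustration under assumed conventions — the config fields and the stand-in `train` function are invented, not a description of any real training pipeline.

```python
# Reproducibility sketch: seed the run and log a hash of the exact
# configuration, so the same config always yields the same results.
import hashlib
import json
import random

def train(config):
    """Stand-in for model training: seeded, so output is repeatable."""
    rng = random.Random(config["seed"])
    # pretend these are learned model weights
    return [round(rng.random(), 6) for _ in range(config["n_weights"])]

def run_and_log(config):
    weights = train(config)
    return {
        "config": config,
        # the hash identifies exactly which settings produced these results
        "config_hash": hashlib.sha256(
            json.dumps(config, sort_keys=True).encode()).hexdigest()[:12],
        "weights": weights,
    }

cfg = {"seed": 42, "n_weights": 3, "learning_rate": 0.01}
first = run_and_log(cfg)
second = run_and_log(cfg)  # same config, so identical results
print(first["config_hash"], first["weights"] == second["weights"])
```

Keeping the config hash alongside the results is what makes the history traceable: a later run that disagrees with a logged result can be diagnosed by comparing configurations rather than guessing.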

There’s no shortage of tools and capabilities available across the field of responsible AI because there are so many people who are passionate about making sure we all use AI responsibly. For example, Optum uses an open-source bias audit tool from the University of Chicago. But there are any number of approaches and great thinking from a tooling perspective, Fernando says, so it’s really becoming an industry best practice to implement a policy of responsible AI.

The other piece of the puzzle requires work and a commitment from everyone in the ecosystem: making responsible use everyone’s responsibility, not just the machine learning engineer or data scientist.

“Our aspiration is that every employee understands these responsibilities and takes ownership of them,” he says. “Whether UHG employees are using ML-driven recommendations in their day-to-day work, designing new products and services, or they’re the data scientists and ML engineers who can evaluate models and understand output class distributions, we all have a shared responsibility to ensure these tools are achieving the best and most equitable results for the people we serve.”

To learn more about the ways that AI is impacting the delivery and administration of health care across the ecosystem, the benefits of machine learning for cost savings and efficiency, and the importance of responsible AI for every worker, don’t miss this VB Live event.

Don’t miss out!

Register here for free.

You’ll learn:

  • What it means to use advanced analytics “responsibly”
  • Why responsible use is so important in health care as compared to other fields
  • The steps that researchers and organizations are taking today to ensure AI is used responsibly
  • What the AI-enabled health system of the future looks like and its advantages for consumers, organizations, and clinicians


Speakers:
  • Brian Christian, Author, The Alignment Problem, Algorithms to Live By and The Most Human Human
  • Sanji Fernando, SVP of Artificial Intelligence & Analytics Platforms, Optum
  • Kyle Wiggers, AI Staff Writer, VentureBeat (moderator)
