How Will Data Science and AI Transform Healthcare?

How are data science, machine learning, and AI transforming the way we conduct healthcare and medicine? We take a deep dive into data-driven healthcare with Dr. Shez Partovi, Chief Medical, Innovation & Strategy Officer at Royal Philips.

40:05

Jun 17, 2022

Data science, machine learning, and artificial intelligence have the potential to transform healthcare in profound ways. In this episode of CXOtalk, we speak to Dr. Shez Partovi, Chief Medical, Innovation & Strategy Officer at Royal Philips to discuss how these technologies are already improving patient outcomes, diagnosing disease, and more.

The conversation includes these topics:

  • On Philips as a health tech company
  • On the role of data in healthcare transformation
  • On the need to rethink data analytics in transforming healthcare
  • On data collection and data sources in health transformation
  • On linking data science in healthcare to improved patient outcomes
  • On creating incentives for data-sharing in healthcare
  • On choosing the right problems to solve in data science
  • On avoiding bias in data-centric healthcare
  • On patient lock-in based on data silos in healthcare
  • On whom should be responsible for bad data, algorithms, and patient outcomes

Shez leads Philips' global Innovation & Strategy organization, including the Chief Technology Office, Research, HealthSuite Platforms, the Chief Medical Office, Product Engineering, Experience Design, and Strategy. Innovation & Strategy, in collaboration with the operating businesses and the markets, is responsible for directing the company strategy to delight customers and advance the company's growth and profitability ambitions.

Shez joined Philips from Amazon Web Services (AWS), where he served as Worldwide Head of Business Development for Healthcare, Life Sciences and Medical Devices. Prior to joining AWS in 2018, Shez spent 20 years at Dignity Health, the fifth largest health system in the U.S. He started his career in 1998 as a neuroradiologist at the Barrow Neurological Institute and was in clinical practice until 2013. In addition to his medical training at McGill University in Montreal, he has post-graduate qualifications in computer science. He helped launch the Biomedical Informatics Department at Arizona State University and taught there as a clinical professor for three years.

Transcript

Shez Partovi: To really be able to create models that help as a tool for prediction, you need a high volume of data, which also, by the way, helps take out bias. You want a variety of data – again, going to bias – which generally makes for better machine learning models. And veracity, the truth of the data. That's sort of step one.

On Philips as a health tech company

Michael Krigsman: Data science and AI are transforming healthcare. That's Shez Partovi, Chief Innovation and Strategy Officer at Royal Philips.

Shez Partovi: Seven years ago, Philips decided to divest everything other than healthcare, and it is really now a health tech company. So, while you may still see Philips lighting products, that's really just a licensing arrangement. Philips itself is now a 100% health tech company focused on the continuum of care – everything from at home, to ambulatory, to in-patient, and so on.

When you think of Philips now, think of it as a health tech company. It's a 130-year-old startup because it really transformed itself 10 years ago, so it's like a startup that's 130 years old.

Michael Krigsman: You're a physician. You're the chief innovation and strategy officer at Philips. Tell us about your role and the work that you do, where you're focused.

Shez Partovi: I have one of the best jobs in the world because I get to work on the strategy side. I get to work with customers and to understand their unmet needs, and to work backward from customer problems, and then work with my colleagues to create a strategy that would have Philips solve those problems for our customers. That's the strategy side.

Then on the innovation side, when we listen to our customers and what problems they have, then we have to look and see, well, how do we innovate on their behalf to solve those problems? The entire innovation community inside Philips is part of my remit, and what we do is to listen to customer signals, look at the market trends, and then canvas the technology that we have or that's external to Philips and put that together to be able to delight the customer.

In this case, whether it's a health system or whether it's a patient, that's the job that's the nexus of listening to customers, getting the signals, creating a strategy, and then turning to the innovation team and saying, "How do we delight our customers inventively? How do we innovate on their behalf?" And then go to market with those propositions.

That's the team I lead, and it's just literally like a kid in a candy shop. It's the best place to be.

On the role of data in healthcare transformation

Michael Krigsman: I know a lot of your work is focused on data. Can you shed some light on that aspect of it?

Shez Partovi: We have now, in healthcare, a lot of data creation, data generation. Of course, the cynics would say, "Well, yes, and most of that is used just for billing," particularly through the U.S. lens.

Notwithstanding that sort of perspective, it is still true that the vast majority of data is now digital. When you think of, for example, the classic clipboard that you would go in and fill out, now you're probably filling things out online, which means there's digitalization of data.

That has occurred. But when you look deeply, you find a few things. I'll just mention them briefly and then we can maybe talk about them, which is, the mechanisms and processes of digitalization are not necessarily seamless and frictionless. Meaning, there's a lot of repetitive, menial tasks.

When folks talk about clinician burnout, physician and nursing burnout, it's partly related to the fact that while we are digitally transforming, it's not necessarily happening in a frictionless way. The processes aren't workflow-aware, and they're repetitive. That's the part of digitizing that is not optimal.

Then, having digitized and created data – again notwithstanding the billing aspect – some would argue that we haven't made meaningful use of that data in a really amazing way. We are perhaps data-rich but insight-poor in healthcare.

A) We struggle, in a friction-laden way, to create the data. B) Sadly, we don't really create incredible insights from that data. That's where we are in healthcare, from my lens.

On the need to rethink data analytics in transforming healthcare

Michael Krigsman: What is it about healthcare that leads to these two fundamental issues that you just mentioned?

Shez Partovi: What we have are a couple of challenges. One, the data generated is right now still much more application-centric, which means they're sitting in silos. While some organizations—

I remember when I was in a health system, we had about 1,500 applications. Now you imagine you are digitizing data. It is in the application environment, but you have 1,500 of them.

The absence of data liquidity means that while you have digitized and put the data on disk, you've not necessarily been able to combine it into an environment from which you can garner insights. Because we're in this transition to digital forms, we are at a stage of development where there are a lot of isolated environments and, of course, as you know, there's a lot of work being done to bring all the data together into a common environment from which insights can be generated.

That's partly where we are in this digital transformation. The data are siloed. Again, different countries are in different stages of laying down regulations that say, "Yes, I get it. There's an application, but you must allow the data to flow freely back and forth because we want data at the point of care," so some regulations are happening.

Of course, technology is also advancing. Philips has an environment called HealthSuite, which is about data liquidity. It's about bringing in data from hundreds of different sources so that, even though it's not at the application tier, it's at this data liquidity tier that allows you to say, "Okay, I have the data," combine it, and then generate insights from it.

That's where we are. Because of the journey we were on, we are still fractionated, and a lot of health systems are struggling from the point of view of combining data into a common environment.

On data collection and data sources in health transformation

Michael Krigsman: The issue of data liquidity, that is fundamentally data sitting in applications that do not share that data. Is that correct?

Shez Partovi: It is. And data interoperability, actually, you should think of as having two dimensions: there is syntactic interoperability, where you're just sharing data, and there's semantic interoperability, where the meaning of it is being shared as well.

Yes, we are in this place where we have a lot of data silos. But then there's also great progress being made around interoperability. Not where we want to be, but I don't want to pretend like no progress has been made.

Now there's more and more interoperability happening, which is a precursor to, "Now what?" In other words, some organizations have generated liquidity, so there is a desire to go from data to information, from information to knowledge, from knowledge to insights. That's the journey (AI-enabled, of course) that we are on in many places, at the precipice of getting to insights at scale.

Michael Krigsman: We have a very interesting question on LinkedIn on this very topic. This is from Myles Suer. I know Myles. He works a lot with chief information officers. He asks this. "How do you move from being data-rich, as you just described, to being data-driven, meaning making use of that data?"

Shez Partovi: For the audience, let's at least get some common framework of data and insight because it's a term used a lot, and let's at least, in this discussion, have an understanding. I'll give an example of going from data to information, information to knowledge, and knowledge to insight.

The example I'll give you is let's start with a single data point like a blood sugar value. If you have a single blood sugar value of like 140 milligrams per deciliter, it's high. But on the other hand, is it because the person just had a meal? Is it a fasting blood sugar? Is it a non-fasting blood sugar? That's a data point. Useful, but not insightful yet.

And so, now if I give you, Michael, a trend of sugars that's going up, now I've given you information. The information is the trend. This trend is going up. And you started saying, "Huh. Something is going on here."

If then from there we look at the history and understand the patient may be pre-diabetic or diabetic, now you have knowledge of the patient's condition. But today, what health systems are asking for, what clinicians, physicians, nurses, organizations want in order to effect the quadruple aim in a positive way (improve quality, reduce cost, and improve experience), what they want isn't just data, information, or knowledge.

They want to answer the following questions. What is the likelihood that this patient whose blood sugar you showed me and is pre-diabetic, what is the likelihood they're going to have congestive heart failure in the next 18 months? What's the likelihood they're going to have a diabetic foot ulcer in the next two years?

This prediction, this insight into the future, this is the real opportunity. When you bring data together and you're able to use that to build machine learning models and use AI, it's that looking into the future, the prediction model that is really how you can use the data to drive the organization with insights because you will probably, as a clinician, as a physician—

I'll give you this true example. There's a team we're working with in New Zealand predicting the likelihood that a person would fill their script (prescription) upon discharge. You have to ask yourself as a clinician, what would you do differently if, on the screen, there's a prediction that says red? Let's just use red, yellow, green.

The likelihood this individual is going to fill that script is red. In other words, they're not likely.

At that point, you may ask more questions. Are there social determinants of health at play? Are there other things, versus green?

Now, you may say, "Well, we should ask everyone." I get it, but it's this idea of the prediction and the triggering to the clinician to do something different that is the connection to the question that's data-driven. It's data-driven because it gives you the insight that changes what you might do.

That's the thread: data to information to knowledge to insights to some sort of actionable thing for me that I might do differently in this case, in this patient, at this time in this moment at this point of service.
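The data-to-information-to-insight thread can be sketched in a few lines of Python. The glucose values, the thresholds, and the function names here are illustrative assumptions, and the risk flag is only a crude stand-in for what a trained machine learning model would produce – not clinical guidance:

```python
from statistics import mean

def glucose_trend(readings_mg_dl):
    """Information: average change between consecutive readings (mg/dL)."""
    deltas = [b - a for a, b in zip(readings_mg_dl, readings_mg_dl[1:])]
    return mean(deltas)

def risk_flag(readings_mg_dl, high=140, rising=2.0):
    """Crude stand-in for a prediction model: flag an elevated, rising trend."""
    latest = readings_mg_dl[-1]
    trend = glucose_trend(readings_mg_dl)
    if latest >= high and trend >= rising:
        return "review"      # "insight": the clinician should look closer
    return "routine"

readings = [118, 124, 131, 138, 144]   # data: glucose over five visits
print(glucose_trend(readings))          # 6.5 mg/dL per visit, rising
print(risk_flag(readings))              # review
```

A single value of 140 is just a data point; the trend is information; the flag hints at the kind of forward-looking signal a real model would supply.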

On linking data science in healthcare to improved patient outcomes

Michael Krigsman: Now be sure to subscribe to our newsletter. Hit the subscribe button at the top of our website so we can send you our newsletter. Subscribe to our YouTube channel.

Can you now ground all of this in patient outcomes? You started alluding to that, but what is the consequence on patients of these silos and what would be the advantage? What are the advantages if we have this kind of interoperability, first off?

Shez Partovi: I always bring everything back to the quadruple aim, which is, how do you improve care quality, how do you reduce the cost of care, and how do you improve either the clinical experience for the clinician or the patient experience or the consumer experience? In a sense, improve quality, reduce cost, and improve the human experience (no matter who the human is).

That's the frame that I use, we use at Philips, actually. Everything is in the frame of the quadruple aim.

Let's just start with, for example, one that I think is perhaps even easier. Let's talk about, for example, how do you (in a health system) reduce cost?

One of the things that using AI with data can do is help with what is termed operational forecasting, which is, for example: what's the right size of the staff I need in my emergency department next Friday night? What's the patient flow?

Can I predict the patient flow through my hospital so I can right-size my staff? Which, by the way, impacts care quality because if you are understaffed, it's a challenge. Right-sizing is both a cost impact, a positive impact on cost, but also a positive impact on care quality.

That idea of, for example, using an ADT stream (admission, discharge, transfer stream) to build a model that predicts patient flow through an organization – which is something Philips offers as an application (patient flow capacity) – is a direct way in which you go from a very simple concept, a single data stream, to a prediction model that can help drive how you actually run and staff your organization. That impacts the experience of the patient and certainly the experience of the clinician – if they're understaffed, it's difficult – and it impacts care quality.
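A toy sketch of this kind of operational forecasting, assuming a simplified ADT-derived event format of (weekday, hour, census after the event). This is not the Philips patient-flow application – just a naive averaging baseline to make the idea concrete:

```python
from collections import defaultdict
from statistics import mean

def forecast_census(adt_events, weekday, hour):
    """Naive forecast: average historical census for a weekday/hour slot.

    adt_events: list of (weekday, hour, census_after_event) tuples derived
    from an admission/discharge/transfer stream.
    """
    by_slot = defaultdict(list)
    for wd, hr, census in adt_events:
        by_slot[(wd, hr)].append(census)
    history = by_slot.get((weekday, hour), [])
    return mean(history) if history else None

events = [
    ("Fri", 20, 34), ("Fri", 20, 38), ("Fri", 20, 36),  # past Friday nights
    ("Mon", 9, 22),
]
print(forecast_census(events, "Fri", 20))  # average of past Friday 8 p.m. census
```

A production model would use far richer features, but even this baseline shows how a single ADT stream can inform right-sizing the Friday-night staff.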

Operational forecasting is one category. Let's put that aside for a second. I already talked, alluded to this idea of clinical prediction. I used the example of the diabetic, predicting diabetic foot ulcer or heart disease, and there are many instances that could be brought in.

Just to go a bit further. You could use, for example, clinical prediction. If you're reading radiology films, you use that to look at films using AI ML to resort the way films are reviewed by the radiologist because you're either identifying or predicting an anomaly, and you say, "Well, I think this film should—"

The algorithm says this film should be read sooner because it's going to get care quality impact. If this prediction is real, this finding in the algorithm is real, then the radiologist should look at it and effectuate positive action. Instead of reading it by the order in which the films were taken (or the CT scan was taken or what have you), the algorithm re-ranks the film to the top. It's read first. Care is delivered first. Positive impact on patient outcome.
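The re-ranking idea reduces to a sort over the reading worklist, where `score` is a placeholder for a real model's anomaly prediction and the field names are illustrative assumptions:

```python
def rerank_worklist(studies):
    """Sort studies so likely-critical films are read first.

    studies: list of dicts with 'id', 'acquired' (acquisition order),
    and 'score' (a model's predicted-anomaly probability, 0-1).
    """
    # Highest predicted-anomaly score first; tie-break on acquisition order.
    return sorted(studies, key=lambda s: (-s["score"], s["acquired"]))

worklist = [
    {"id": "CT-101", "acquired": 1, "score": 0.12},
    {"id": "CT-102", "acquired": 2, "score": 0.91},  # likely anomaly
    {"id": "CT-103", "acquired": 3, "score": 0.40},
]
print([s["id"] for s in rerank_worklist(worklist)])  # ['CT-102', 'CT-103', 'CT-101']
```

Instead of first-in-first-read, the suspected-critical study surfaces to the top, which is the care-quality impact described above.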

Those are the ways in which, when you look at how do we use AI ML (AI and machine learning) to positively impact healthcare (using the quadruple aim concept), operational forecasting and clinical prediction are the two lenses that I use when I think of how do we at Philips help make a positive impact on individuals.

On creating incentives for data-sharing in healthcare

Michael Krigsman: We have an interesting question from Twitter. This is from Arsalan Khan, who is a regular listener. Thanks. Always, thank you, Arsalan, for your great questions.

He says that a significant amount of data is held inside very few market-leading applications. This is his term he's using. "Why would a monopoly have an incentive to share that data?" In other words, aren't the market forces of software, essentially, and infrastructure, don't those militate against the kind of sharing that you're describing?

Shez Partovi: The data belongs to the health system as a proxy for patient care, and so the software companies actually don't own the data. For example, Philips doesn't own the data. We are stewards of the data for the organizations which we serve.

In the United States at least, there are now regulatory requirements that say you cannot block information and, for the benefit of patients and society, you have to share it. I do know there are countries in the world (and I don't need to name them because I don't want to badmouth them) where the software companies actually own the data. There, that argument is true.

In the U.S. at least, it's not true to say that I refuse to share the data. Information blocking rules would prohibit that.

Michael Krigsman: Can you tell us the kinds of data that we need to be aggregating? I know, again, you've touched on it, but maybe drill into the data itself.

Shez Partovi: If you're thinking of AI and machine learning, clinical prediction, and operational forecasting, you want to start from the problem and work backward so that you know what data you need. If you think of it this way, Google Maps, if you remember, there was a time it just showed you the direction and the time with the red heatmaps.

Then, later on, it was, "Hey, here is how long it takes for a bike to go that path. Here's how long it takes for a human to go that path. Here's how long if you want to call an Uber for that path." Based on the predictions and the value they wanted to deliver, they were gathering more and more data.

When you think of your organization and ask, "What data should we gather?" – Michael, that's the question – well, the data is likely already digitized. But as you look at what data to bring together to create a model, you want to start from the problem and work backward.

In the case of let's say you say, "I want to predict length of stay because, in our organization, it's really important to predict length of stay because we are trying to be agile, right size care, and efficient. We want to know length of stay because we will align our care services."

A patient comes in, and our average length of stay is three. If you can predict for me that this patient is likely going to be here six days, I want to put more resources on this. Why is that the case? How do I look and see what's going on?

If length of stay is the question, then all you might need is an ADT stream to predict length of stay. On the other hand, if you're trying to predict whether a person has a particular disease or a particular cancer, you're going to need perhaps imaging, blood values, and EHR data (electronic health record data).

You want to start from the problem statement and the thing that you're trying to predict and the tool that you want to give to the clinician or to the operational teams, and then work backwards from there to see what data you need to be able to build that model that gives you that prediction.
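The work-backward approach can be summarized as a lookup from prediction target to minimum required data. The mapping below is an illustrative assumption built from the examples in this conversation, not an exhaustive catalog:

```python
# Start from the problem, then look up the data you need to build the model.
# These entries mirror the examples discussed; real projects refine them.
REQUIRED_DATA = {
    "length_of_stay": ["ADT stream"],
    "disease_detection": ["imaging", "lab values", "EHR records"],
    "no_show": ["scheduling history", "demographics"],
}

def data_needed(problem):
    """Return the minimum data sources for a prediction target."""
    return REQUIRED_DATA.get(problem, ["undefined - state the problem first"])

print(data_needed("length_of_stay"))      # ['ADT stream']
print(data_needed("disease_detection"))   # ['imaging', 'lab values', 'EHR records']
```

The point of the lookup shape: if the problem isn't stated, there is no principled answer to "what data should we gather?"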

On choosing the right problems to solve in data science

Michael Krigsman: How do you make those decisions in terms of which problem to solve because you don't want to go down a rat hole where you try to solve a problem and it's not the right problem or you don't have the right type of data? How do you make those decisions?

Shez Partovi: When we look at the way in which our customers are solving these problems, this is generally the way they go at it. You have operational people trying to solve operational problems, and literally every organization probably has a lean team. That was really in fashion a while ago. Now it's transformation teams. There are different names for them now.

But they're all going around trying to solve. Then there are, of course, clinical excellence teams as well, so you have, let's say, operational excellence teams and clinical excellence teams. I'm willing to wager, in your organization, it might be called something different.

If you go to one of their steering committee meetings and sit there, they probably know the things they're trying to solve and the things they're scratching their heads on. I would say that's where you start from. In fact, our customers tell us that's where they generally start: from those existing teams, where there's a chief nurse officer running a clinical excellence program, or the chief medical officer, or the COO running an operational lean or excellence program.

There are challenges they're trying to address. There is data upon which machine learning models can be built to help them as a tool to be able to address those problems.

What I would say, and what I've seen be most effective with our customers – and if I were a CMO for a health system – is to start from those places, as opposed to what you and I might do, saying, "I wonder what we should solve." There are problems aplenty, and you want to start from those that are already on your books, the ones you're already working on.

Consider AI and ML (machine learning) – when I say ML, I just mean machine learning – as a tool for those teams. That's it. It's not a whizbang, newfangled thing. It's a tool for the teams that are trying to bring about either clinical improvement or operational improvement. That's the best way to look at it (in my humble opinion).

On avoiding bias in data-centric healthcare

Michael Krigsman: In other words, solving the direct practical problems that you may face, whether it's on the clinical side or the operation side.

Shez Partovi: Absolutely. The simplest and, in fact, it probably aligns with the—

I mean, I'm being practical now. It probably aligns with the organizational KPIs. It probably aligns with the team KPIs. The simplest, most straightforward place to start is with those things.

Michael Krigsman: Are the challenges of becoming a more data-focused healthcare system – one that makes more effective use of the data – primarily about technology, bias, or operational aspects?

Shez Partovi: First and foremost, you of course need digital data. With respect to data, there are the three V's: volume, variety, and veracity. To really be able to create models that help as a tool for prediction, you need:

  • A high volume of data, which also, by the way, helps take out bias.
  • A variety of data – again, going to bias – which generally makes for better machine learning models.
  • And veracity, the truth of the data.

That's step one.

Now, beyond the data itself, there are several steps. Just to get technical here for a moment: you need to actually train a model. You need to label the data, which means saying this is what this data means; this is what it doesn't mean.

Generally, there's a human that does the labeling. You label the data. You train the model. You have to validate the model. And depending on whether you're going for FDA review – as, for example, we at Philips are – you need to not only validate it but meet certain requirements.

You do outcome studies to show that it works. Again, that's more on the vendor side. Internally, for operations, you wouldn't need to do that.

Data (volume, variety, veracity), labeling, machine learning modeling, testing and validation, and then possibly more beyond that: those activities are why, generally, if your organization wants to do this, you probably want to partner with a health tech company. There may be some in the audience at sophisticated academic medical centers with university affiliations who have folks that want to come help, or who have even hired people to do this.
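The label-train-validate loop can be sketched on a toy, single-feature dataset: a learned threshold stands in for a real model, and the glucose values and labels are made up. In a vendor pipeline, regulatory validation and outcome studies sit on top of this:

```python
def train_threshold(samples):
    """Train: from labeled (value, label) pairs, learn the cutoff with the
    best accuracy on the training set. A stand-in for real model fitting."""
    best_t, best_acc = None, -1.0
    for t, _ in samples:                      # candidate thresholds from data
        acc = sum((v >= t) == lbl for v, lbl in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def validate(threshold, holdout):
    """Validate: accuracy on labeled data the model never saw."""
    return sum((v >= threshold) == lbl for v, lbl in holdout) / len(holdout)

train = [(90, False), (105, False), (150, True), (180, True)]  # labeled data
holdout = [(95, False), (160, True)]                           # held out
t = train_threshold(train)
print(t, validate(t, holdout))   # 150 1.0
```

The separation of training data from holdout data is the essential discipline; everything after that (FDA review, outcome studies) is rigor layered on the same loop.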

When you ask me what are the obstacles, it depends on if you're implementing tools that perhaps you're getting from Philips, or if you are wanting to build those tools and do it yourself. In that case, you probably have a choice. You might partner with a health tech company that helps you or some sort of a firm that does that, or you might decide that you're really going to build an internal competency to do this.

That, to me, if someone stopped me and said, "Hey, Shez, I want to do this. What's going to be my biggest headwind?" that's the biggest headwind. The tools are out there. But putting it all together is going to need competency, training, upskilling. And so, you either build it internally or you partner.

Michael Krigsman: It makes sense, I'm sure that there's also a very important team building and talent management aspect of this as well.

Shez Partovi: Then there is integration into workflow so that it's trivial, frictionless for the end-user to use it. I would be remiss if I didn't say that because it's not about tech. It's about workflows, people and process more than about platform.

Yes, I would focus on the tech side a bit, but yes, without exception. If you've done that, how do you get into workflow and point of care so that it's easy to use, frictionless, so it's not this other thing over there. It's embedded. Again, that's where you either partner, you might buy something off the shelf that does all this or, if you're going to do it yourself, you have to think workflow.

Michael Krigsman: Let me ask this question from Lisbeth Shaw who says, "How can an organization create this kind of enterprise-wide view into the data as it's coming from the various systems of record?" From the different software vendors, essentially, different systems.

Shez Partovi: You want your data to be in an environment where it all comes together. Technologically, at the very least, you have to consider that you do need a holding area. Call it data lake, call it whatever you want to call it, a health data space.

You need an area where you have offramps from the applications, via the leading standards like HL7, DICOM, and FHIR, into a common area with standard onramps, so you create this data environment.

You have offramps. You want to be able to stream that data – not, as I think the question probably implied, move all the applications to this environment – but rather off-ramp the data.

You let the application do what it's got to do. If it's a practice application, great. Do your work. If it's an EHR, great. Do your work. You just need offramps that bring that data to an environment upon which—

The questioner used the term visualize, which I think is important. But if you remember, I went back. I said there's data, information, knowledge, and insight. Visualization is a term I would use for the data to information.

I said, hey, if you have a graph. I often associate visualization with this idea of, "Show me a dashboard and a graph."

I think the more powerful thing, and it's likely what was implied in the question, is how do I create insights from that data, which is a higher-order return on your investment than simple visualization. I think you do need a data lake environment of some sort and, by the way, ideally in the cloud, because if you're going to run machine learning models, you don't want to buy expensive GPUs that sit in your data center idle 23.5 hours a day, running for only half an hour to do something.

You want to use the cloud so that you pay for what you use. You use the most sophisticated machine learning model training sets, training technology, and only pay for the part you use. If you try to build this in your own data center, you are going to overpay for stuff that you only use a fraction of the time. Don't do that.
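The utilization argument reduces to simple arithmetic. The prices and amortization period below are made-up placeholders, not vendor quotes:

```python
def monthly_cost_owned(gpu_price, amortize_months=36):
    """Owned hardware: the amortized cost is paid whether idle or busy."""
    return gpu_price / amortize_months

def monthly_cost_cloud(rate_per_hour, hours_per_day, days=30):
    """Cloud: pay only for the hours actually used."""
    return rate_per_hour * hours_per_day * days

# A GPU idle 23.5 hours a day still costs its full amortized price,
# while half an hour a day of cloud GPU time is billed as used.
print(monthly_cost_owned(15_000))        # amortized monthly cost, mostly idle
print(monthly_cost_cloud(3.00, 0.5))     # 45.0 for 30 minutes per day
```

With these placeholder numbers the owned GPU costs roughly ten times the pay-per-use figure at half an hour of daily use, which is the "don't do that" point above.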

On patient lock-in based on data silos in healthcare

Michael Krigsman: I'll just tell you my own quick personal story. I won't say which healthcare system I use, but I stick with them. There are a couple of reasons. They have great, great doctors and so forth, but there's also lock-in – information lock-in – because they have a patient portal, and it doesn't easily share tests or doctors' notes if I go outside their system. There is this built-in gravity toward locking in the patient, which again kind of militates against the sharing that you're describing.

Shez Partovi: That's definitely difficult. By no means do I suggest that it's easy.

I'd say that a good example of an organization that's trying to do that broadly, beyond themselves, is UCSF. UCSF – this is in the public domain – we have a great partnership together. Using this common shared environment, which I referenced, they are actually bringing in data from practices outside the UCSF environment and trying to create a holistic view that makes the movement of patients between practices, and information sharing, trivial and easy.

You're right. It's not. It's certainly not the default, the de facto default mechanism. On the other hand, if you ask me, "Shez, how would you achieve that?" it is going to be achieved through some sort of a common environment in the health data space where you're building in the cloud, providing offramps.

I guess, when I said offramps, in theory, it really is offramps not just from your own applications. In this case with UCSF, it's offramps from partner organizations in the community as well in the Bay Area.

On whom should be responsible for bad data, algorithms, and patient outcomes

Michael Krigsman: Arsalan Khan comes back with this question. He says he wants to know about who should be responsible if there is bad data, bad algorithms, and, as a result, we make incorrect predictions. And so, I think that gets right into the heart of some of the ethical issues that come up. Maybe you can tell us about that.

Shez Partovi: We continue, at Philips, to believe that this is a tool for a clinician to help in their decision-making, but that ultimately you want to have a clinician be the ultimate decider.

We can come back to this, but first and foremost, philosophically, at least from our point of view, we're looking at how to create a tool that improves well-being, that's transparent and fair, and that's a tool for the clinician to do their job, just as a blood test or any other test would be.

There is this implicit connection. By the way, any test can have a false positive or a false negative. Clinicians, through their training, are synthesizing this and making decisions, which is different than, for example, the algorithm making a diagnosis on its own.

We can talk about that if you wish, but that's different. This is a tool to a clinician.

Then when it comes to bias, I come back to that comment I made earlier around the three V's of volume, variety, and veracity, and then validation, if you will. I guess that's the fourth V. Definitely, the process of creating algorithms includes this training with volume, variety, veracity, and then the validation.

In fact, as clinicians, we all talk about healthcare as local, which means that a disease prevalent in one geography may not be prevalent in another. When I trained, I trained both in Canada and the U.S. I can tell you that a certain finding on a chest x-ray in Canada was tuberculosis, and the same finding where I was training in the United States was coccidioidomycosis. They're different. But that's because healthcare is local.

Algorithms need to be fine-tuned to the environment in which they're being deployed. There's not going to be one generic algorithm for the world, or even for the United States. Healthcare is local. Training needs to be fine-tuned locally.

These are the things, as you get deep into this, when you get past the hype and the media talk that algorithms will replace doctors – as you unwrap this and really understand what these models are doing – you begin to understand that they are going to have to adhere to the same rules. They are affected by the same things that affect healthcare. Algorithms have to be tuned locally as well.

Then one last thing I'll say because this topic is really near and dear to my heart is that we are seeing a much more sophisticated absorption of AI and ML in customers, in health systems. I'll give you an example.

One health system we worked with has a chief ethics officer. We were working together on predicting no-shows to clinics. This chief ethics officer was also on the steering committee as we were talking about, "We can predict no-shows."

The question that came up in the room is, "Well, what do we do with that information?" In the airline industry, if you're going to predict no-shows, you double-book the seat.

The first thought may have been, well, let's just double-book the appointment. Heck, we don't want to miss out on having a patient because, after all, you want to make sure you see people because it's care delivery and it's good for revenue (depending on the country).

Then the question that came was, "Well, wait. Why wouldn't they be showing? Could it be that they face scarcity and insufficiency? Maybe they don't have access to transportation. They need a babysitter."

Then we realized – they realized that there might be an underlying principle that these individuals are underserved, that they're disenfranchised. And if the machine is predicting a no-show, that the answer isn't to double book because, when you do, if they do show, the very people that actually need more one-on-one time get less one-on-one time because they're now double-booked.

The answer may be – and they did – that you put together a team so that, when the machine learning model predicts a no-show, they call. They see how they can help.

It's not that they're calling to say, "Hey, my algorithm says you're not going to show," but "Mrs. Jones, are you going to be coming?" or "Mr. Jones, are you going to be coming? Do you need transportation?"

The very health system itself said, "What do we do with this information that's ethically right?" We're seeing that maturity happening. That's just beautiful to see.
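The workflow that health system arrived at – flag predicted no-shows for outreach rather than double-booking – could be sketched like this. The `predict_no_show` heuristic, the 0.7 threshold, and the data fields are hypothetical placeholders for a real trained model:

```python
def predict_no_show(appointment):
    # Stand-in for a trained model; here, a toy heuristic on one feature.
    return 0.9 if appointment["missed_last_visit"] else 0.1

def triage(appointments, threshold=0.7):
    """Route likely no-shows to an outreach team instead of double-booking."""
    outreach_list = []
    for appt in appointments:
        if predict_no_show(appt) >= threshold:
            # Call the patient: do they need transportation? A babysitter?
            outreach_list.append(appt["patient"])
    return outreach_list

appointments = [
    {"patient": "A", "missed_last_visit": True},
    {"patient": "B", "missed_last_visit": False},
]
print(triage(appointments))  # ['A']
```

The design point is that the model's output feeds a human intervention, not an automated scheduling decision.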

Michael Krigsman: You made a very provocative point that the models need to be local or reflect local conditions. Who is and who should be responsible for creating these models?

Shez Partovi: Algorithms can be fine-tuned and systems are built – and that's how we do it as well – to continue training as they're deployed. And so, a model can be "generically" trained (using that term in quotes) and fine-tuned, even in the background when deployed in an environment before it goes into production, and then continues training after deployment.

By definition, it becomes localized through its implementation and ongoing usage.

Michael Krigsman: Are these models generally going to be supplied by software vendors, by healthcare systems, by companies like Philips? Who is going to be supplying these models?

Shez Partovi: All of the above. Certainly, Philips develops models, and we actually have an environment called AI Manager where you can put our models in that manager and use them.

Organizations build models and can put them in AI Manager and use them. There are young companies that do it. I think whoever has access to good data can build models.

I think, all of the above, and we see a number of organizations, particularly academic medical centers, that are building models. Certainly, a large number of companies are building models as well.

Michael Krigsman: Did I understand you correctly that local models is one pathway towards reducing the bias inside the models?

Shez Partovi: Yeah, it contributes to the reduction. Of the volume, variety, and veracity, you meet the variety criterion because you may have trained over here; then, when the model goes local and starts to get used, the local variety tunes the model.

On the future of data and AI in healthcare

Michael Krigsman: Where is all of this going over the next few years? Not ten years out, but in a practical way, what's the trajectory?

Shez Partovi: Just as you have a bloodstream and you take a piece of blood. Excuse me. I was going to say tissue, but you take some tissue, blood, and you run a test on it. There's going to be data streams that you take the data and run algorithms on it as a test. That's really the metaphor I'd use.

Just as you have blood pumping and you take a test of that blood, you can have your data flowing through the veins and arteries of the health system. You can take that data and apply algorithms to it.

We will be ordering. Clinicians will be ordering algorithms as tests. Yes, there are the background algorithms that always run. But some algorithms will be heavier and may use a lot of compute. They may actually end up costing money because you're using compute to run them.

I think, in the fullness of time, clinicians are going to order algorithms the same way they order tests.
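Ordering an algorithm the way one orders a lab test could be sketched as a simple registry that a clinician invokes by name against a patient's data stream. The algorithm name, the rule, and the data fields here are invented purely for illustration:

```python
# Registry mapping an orderable algorithm name to its implementation.
ALGORITHMS = {}

def register(name):
    def wrap(fn):
        ALGORITHMS[name] = fn
        return fn
    return wrap

@register("sepsis_risk")
def sepsis_risk(stream):
    # Toy rule on the most recent vitals in the stream.
    latest = stream[-1]
    return "elevated" if latest["heart_rate"] > 110 else "normal"

def order_algorithm(name, stream):
    """Run an 'ordered' algorithm on a patient's data, like a lab test."""
    return ALGORITHMS[name](stream)

stream = [{"heart_rate": 88}, {"heart_rate": 121}]
print(order_algorithm("sepsis_risk", stream))  # elevated
```

In Partovi's metaphor, the `stream` plays the role of the blood sample and the named algorithm plays the role of the assay run on it.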

Michael Krigsman: What's the timeframe that you anticipate for this?

Shez Partovi: I'd say we're probably looking at five, ten years for some of the early indications of this.

Michael Krigsman: Another question from, again, Lisbeth Shaw who says, "How can we ensure the data science is used to benefit patient care rather than just to boost profits?" And at the same time, she also reminds us that technology is very expensive, and so how does that get factored in?

Shez Partovi: The frame, the lens through which one ought to look at AI and ML in healthcare, is how you advance the quadruple aim: quality, cost, and experience for both patients and clinicians. Cost is really only one-fourth of the equation here.

Workflow impacts clinician experience, of course. We didn't talk much about patient experience, just a little bit.

All parts of the quadruple aim are important. This is a call to organizations, and to companies like us as well: the focus should be on all aspects of the quadruple aim, not just cost reduction.

I did comment earlier that if you are running operations more effectively, in my opinion, in many cases you are improving care delivery, because understaffing, for example, results in poor care quality. They are tied. I don't want to make it sound like these pillars are separate, but the lens should always be the quadruple aim.

Michael Krigsman: What advice do you have for healthcare administrators who are looking at this changing landscape? They know they need to adapt, but it's very tough for them because they're under such intense financial pressure, regulatory pressure, all kinds of different pressures.

Shez Partovi: I really do believe that, in the early days, it's about partnering with an organization. And I know this sounds self-serving because I'm at Philips, but I would do this if I were a CMO, because we talked about training, upskilling. We talked about the plethora of things that are required.

It is not for the faint of heart. It is a heavy lift. Now, it gets easier, but my advice to administrators who want to lean into this is to explore the idea of having a tech partner and then work backward from problems you're already trying to solve, not, "Hey, what's a cool new thing we can do?"

Find a champion who's trying to solve a problem, bring in a tech partner, and see how we can apply AI and ML with this partner to this problem. That's how I would do it as an administrator.

Michael Krigsman: What would you like policymakers to know about this changing world of healthcare?

Shez Partovi: I'd ask the question: Are you interested in the quadruple aim? That's rhetorical, because policymakers should be, and are.

AI and ML have a significant role in advancing the quadruple aim. And so, in my opinion, in this day and age, improving quality, reducing costs, and improving the experience of patients and clinicians can all be empowered by AI and ML.

Policymakers should look at how they advance the adoption, removing barriers for AI and machine learning, because the net effect of that is what their other teams (at CMS, for example, in the U.S.) want to do, which is the quadruple aim. They are tied together, and we should figure out how to advance those through policies.

Michael Krigsman: With that, I want to say a huge thank you to Dr. Shez Partovi. He is the chief innovation and strategy officer at Philips. Shez, thank you so much for taking time to be with us and sharing your knowledge with us today.

Shez Partovi: Oh, thanks so much, Michael. Thanks for having me on the show.

Michael Krigsman: Everybody, thank you for watching, especially those folks who asked such great, amazing questions. I love you guys. You guys are such a great audience, and you're so smart, so intelligent.

Be sure to subscribe to our newsletter. Hit the subscribe button at the top of our website so we can send you our newsletter. Subscribe to our YouTube channel.

Everybody, have a great day. Check out CXOTalk.com for our upcoming shows, and we'll see you again next time. Take care. Bye-bye.

Published Date: Jun 17, 2022

Author: Michael Krigsman

Episode ID: 756