Chief Information Officer Playbook: Enterprise AI and the CIO

How should Chief Information Officers manage AI in the enterprise, and what challenges may arise? Every CIO must consider these questions from both enterprise technology and business leadership perspectives.

May 07, 2021

Author and AI investor Ash Fontana explains the CIO role and CIO responsibilities when developing organizational capabilities for AI, which he calls the AI-first company.

Ash Fontana is author of the book The AI-First Company. He has been an investor focused on AI-first companies at Zetta Venture Partners since 2014. Before Zetta, Ash launched syndicates at AngelList, the world's largest startup investing platform, which now manages over $2B. During that time, he also made investments in Canva, Mixmax, and others.

Transcript

Ash Fontana: It means putting AI-first in every conversation about people, about policy, about pricing, about what products you're going to build.

What are the CIO’s first steps to adopting enterprise AI?

Michael Krigsman: What is an AI-first enterprise, what's the implication for chief information officers, and how do you get there?

Ash Fontana: It can be an existing company that starts to put AI at the start of every conversation, at the top of the agenda in every meeting, or it can be a new company that (from day one) is focused and strategic about collecting the right data, feeding it into the right systems, and building products with predictive value.

That's what an AI-first company is. It's the company that gets the imperative to build these systems and gets the need to focus on this from day one so that you have the right data in the right place feeding into the right models rather than trying later to sort of sprinkle AI over data that you've accidentally collected.

Michael Krigsman: For an enterprise organization, as opposed to a startup, what are the implications of this for the chief information officer and for the company? I think it's quite different for a startup than for a larger, more established company.

Ash Fontana: In a sense, it's very straightforward in that the implications are more of a focus on data management, data collection, and data talent or data-competent talent when you're thinking about budgeting, when you're thinking about where to focus your attention. It's just sort of a question of degree of your attention in those three areas with the nuance on the word data.

The book really goes through this in a lot of detail. It goes through things like, well, when you're at the experimentation phase of AI, how much effort should you invest in data infrastructure at that phase versus later phases?

When you're really focused on collecting data, what are all the weird and wonderful ways you can do that? How do you manage, for example, a data labeling operation? It's quite a new challenge. It's not a function that's existed in the past.

Then, in the third category, when you're hiring people, what's the difference between a product manager and a data product manager? What's the difference between a software engineer and a data engineer? What's the difference between a project manager and a data project manager? They're different roles that you hire from different backgrounds.

I think, overall, there's also a series of organizational questions to ask. Roughly, they're around how do you pick the right degree of centralization and decentralization in your organization? How do you centralize enough so that you have good data infrastructure, you have a good set of tools for people to use to build models all throughout your organization? How do you settle or centralize on a good set of things that allows your corporation to be AI-first?

But also, how do you maintain a degree of decentralization so that data science and machine learning talent is out there in the field understanding the prediction problems that your business has? Out there in the field with the people with clipboards checking off things on a safety checklist or inventory in a warehouse or whatnot, they understand what data is available, what people are trying to do, what people are trying to automate, et cetera. There's an organizational question there.

Then there's also, finally, a question (I guess you would say) about metrics and measurement. How do the metrics differ for an AI-first company versus just a normal software company? How do you really measure the return on investment in AI projects? How do you understand if these models are working? Then how do you make sure they stay in check? There's a whole series of questions there.
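As a rough illustration of the "stay in check" point, here is a minimal monitoring sketch in Python. It assumes you eventually learn whether each prediction was right once ground truth arrives; the baseline, tolerance, and example flags are purely illustrative, not a prescription from the book.

```python
# A minimal sketch of keeping a model "in check": once ground truth arrives
# for recent predictions, compare recent accuracy against the accuracy
# measured at deployment and flag the model for review if it has slipped.
from statistics import mean

def needs_review(recent_correct_flags, baseline_accuracy, tolerance=0.05):
    """recent_correct_flags: list of 1/0 values, 1 = the prediction was right."""
    recent_accuracy = mean(recent_correct_flags)
    return recent_accuracy < baseline_accuracy - tolerance

# Example: the model scored 0.90 when it was first put into production.
recent = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
if needs_review(recent, baseline_accuracy=0.90):
    print("Accuracy has slipped; investigate the data or schedule a retrain.")
```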

The role of a CIO sort of changes to quite a degree when you think about capital allocation to data infrastructure, data collection, and data talent. It changes with respect to how you manage or structure the organization, and it changes with respect to your metrics and what you measure.

How should CIOs start with enterprise AI?

Michael Krigsman: In effect, what you're saying is there's a part of the company that's focused around AI (ideally, it would be the entire company, but let's just start small), and this part of the company is using AI tools, and the delivery of outcomes from those tools, as its kind of organizing principle. Is that an accurate way to summarize it?

Ash Fontana: Yeah, I think that's a good way to summarize it. But again, there's so much nuance across the board here around the degree to which you focus on this, depending on where you're at in your journey, depending on whether you're at the phase where you're just experimenting with a few models to sort of test, like, can we actually make this prediction about the demand of the thing we're selling, the demand of the apparel we're selling next season? Can we make a prediction about a trend in this industry with our consumers, whether they're going to buy this or that color next season? Can we actually make a prediction around this delivery time in our supply chain? Can we do that?

Are you at this stage where you're experimenting with that and really trying to discover from the data that you have whether you can do that? If so, the degree to which you invest in data collection and data infrastructure, and the degree to which you jump into the deep end of machine learning with different models, is very different from the point at which you've done those experiments, you're sure that you can make these predictions, and you want to really double down.

This is where I introduce this concept of latent AI. But again, I think that's a good way to put it. There's a lot of nuance depending on the phase you're at. Different parts of your organization can be at different parts of the journey.

The part of your organization that is on the marketing side can be really far ahead on the AI journey because there's a lot of data available to survey customers and run predictive models around what different segments will do and how they behave, whereas the part of the organization that's responsible for production might be a bit further back or earlier in the journey because fully roboticizing a production line is quite an undertaking. You'll want to do more experiments and little bits of the production line first, so it depends on where you are.

How much technical knowledge about machine learning models should business leaders possess?

Michael Krigsman: In your book, you go into a lot of depth around the different types of models. For a business leader, how much technical understanding do they need to have in order to make this journey, to take this journey effectively?

Ash Fontana: I would contend, I guess sort of controversially, you don't need a lot of understanding of, I guess, the latest and greatest in machine learning. I say that for a few reasons.

One, at the end of the day, this is really just a series of advanced statistical models. If you have a pretty solid grasp of probability, basic statistics, et cetera, you'll be able to get this stuff.

Two, it's changing so quickly. Mastering the latest and greatest technique to do object detection in an image from the field of computer vision is not necessarily going to help you that much because it's going to be obsolete soon enough.

Then thirdly, look, it's very good to maintain focus on the problem rather than the method used to find the solution. Having an understanding of the domain is super important because that'll help you get to a good set of heuristics to put into models. That is, a good set of features, as we call them in machine learning (as some of the audience will know), that allow a model to predict a thing.

You already know that X is the cause of Y: if it's raining, deliveries won't arrive; if you take this drug, you'll have this reaction. That set of heuristics, that understanding, is really important when you consider the early features or the early instantiation of the model.
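To make that concrete, here is a minimal sketch of encoding a domain heuristic ("if it's raining, deliveries tend to be late") as features for a simple model. The file and column names are hypothetical, and scikit-learn is just one convenient way to fit such a model.

```python
# A minimal sketch: domain heuristics become the first features of a model.
# "deliveries.csv" and its columns are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("deliveries.csv")

# Features chosen from domain knowledge, not exotic ML techniques.
X = df[["was_raining", "distance_km", "is_weekend"]]
y = df["arrived_late"]  # 1 = late, 0 = on time

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 2))
```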

What are “data learning effects”?

Michael Krigsman: You talk about something called data learning effects that's very important, that you feel is extremely important to this journey. Would you describe data learning effects for us, please?

Ash Fontana: Just to contextualize for a second, this goes back to your very first question around why did I write this book. I wrote this book because we have this incredibly powerful tool (artificial intelligence) and it creates this whole new type of competitive advantage that is so much more powerful than any other form of competitive advantage I think we have today. It gives these runaway advantages that allow the companies that are really good at developing it, like Google, to be trillion-dollar companies, and the first trillion-dollar companies around.

What is it? What is this new form of competitive advantage? It was sort of a little bit funny and sort of frustrating, as a former law student that's a bit pedantic about vocabulary, that people didn't have the right vocabulary for this.

I came up with this term "data learning effect," and what a data learning effect is—now getting right into the definition—is the automatic compounding of information. The three words that are important there are "automatic," "compounding," and "information."

Now, the data learning effect has three parts to it, and all parts are crucial. Otherwise, it's not there. We'll work backward in that sentence.

The first part is a critical mass of data, a certain amount of data, enough data from which you can probably derive a lesson. You don't know yet until you've done steps two and three, but it could be a lot of data in the case of needing to recognize a whole bunch of things in an image, or it could be a very small amount of data, but it's a critical mass. It's a certain amount.

The second thing is the capability to process that data into information because a big bucket of data doesn't tell you anything, but when it's contextualized, labeled properly, cleaned up, organized by identities and whatnot, it can tell you something. It can resolve uncertainty for you. In a technical sense, it goes from being data to being information.

The third thing is a network of models that learn from that information. They do some calculations over it. They derive something in the mathematical sense. Then they pass it to another model which then does something with that to another one to another one. Eventually, they're able to learn a pattern. As they learn a pattern, they're then able to make a prediction.

Again, you've got three things: a critical mass of data, the capability to process that data into information, and a network of models that can learn. With all three things, you've got a data learning effect.

Now, it's not just one of those things. It's not just a scale effect. It's not just a learning effect, as in a series of lessons you learned about processing data into information. And it's not just a network effect. It's a variation of all of those three things at once.

With this automatic compounding of information, you kick off this flywheel because once these things get going, once they start learning, they then get feedback from people using these predictions, using these lessons that it's spitting out, and then run again. They'll get better next time and again and again and again. They automatically get better and better each time.
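A toy sketch of that flywheel, assuming a one-feature model and a person supplying corrections: each corrected prediction is folded back into the training set and the model is refit, so it can improve run over run. All the data here is invented for illustration.

```python
# A toy sketch of the flywheel: predict, collect a human correction, fold the
# corrected example back into the training data, and refit the model.
from sklearn.linear_model import LogisticRegression

X, y = [[0.2], [0.8], [0.5]], [0, 1, 1]  # seed examples: feature -> label
model = LogisticRegression().fit(X, y)

def flywheel_step(example, human_label):
    global model
    predicted = int(model.predict([example])[0])
    X.append(example)          # feedback becomes new training data
    y.append(human_label)
    model = LogisticRegression().fit(X, y)
    print(f"predicted {predicted}, human said {human_label}, "
          f"accuracy on collected data: {model.score(X, y):.2f}")

flywheel_step([0.3], 0)
flywheel_step([0.9], 1)
```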

Michael Krigsman: The data learning effect then is the goal. It's where you're trying to reach at the steady-state. Is that a good way to put it?

Ash Fontana: Yeah, that's exactly what we're trying to go for. Get the flywheel going, so to speak. The flywheel is going when it's working, it's generating predictions, and you're getting feedback on those predictions, like we think there is a red bottle top in this image, and then the human can correct it or you can have some way to verify that.

Then the model goes, "All right. Nine times out of ten I was right. Here's where I was right. I'm going to just tweak it a little bit. Change the way I'm doing this sort of gradient descent or this derivation and, the next time, I'm going to spit out a better prediction and whatnot."

You have that flywheel going when the model gets better with every run, when you see increasing accuracy with every run. That's what it is, that's what you're going for, and that's how you know you've got it.

Sorry. I jumped ahead a bit there, I think, in terms of going beyond that's the goal to how do you know when you've achieved it.

What is Lean AI?

Michael Krigsman: How do you begin, if you're a CIO (or maybe it's the chief technology officer)? Who is it that should be beginning and what should they do to start?

Ash Fontana: One starting point that's really, I think, straightforward is to use this process called Lean AI. Of course, this borrows from the concept of Lean Startup. The Lean Startup was all about what are the ways in which I can constrain my problem and constrain my experiment to understand if customers want a feature of a product and to understand if the need is really there.

What Lean AI does, what the process does is it constrains the experiment you want to run to test, do customers want or will they value this prediction or this little automation I can do?

There are a series of questions that help you figure out:

  • What's the one data set I need so that the experiment does not, for example, require getting lots of data from lots of different places?
  • What's the one model I can use? Often, it's just a very simple statistical model rather than having a network of machine learning models that are all interlinked in a complicated way.
  • What's the one machine I can run it on? Just run it on someone's laptop first rather than distributing across the entire computing infrastructure.
  • What's the one output I can get that will be useful to people, whether it's a chart, a one-page report, or a table of data for information?

This is the process of Lean AI. It helps you constrain these things so that you just have one data set, one model, one machine, and one output.
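As a sketch of what "one data set, one model, one machine, one output" can look like in practice, assuming a hypothetical weekly sales file and using a plain linear regression as the single model:

```python
# A minimal Lean AI experiment: one CSV, one simple model, one laptop,
# one chart to share. File and column names are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

sales = pd.read_csv("weekly_sales.csv")            # one data set
X, y = sales[["week_number"]], sales["units_sold"]

model = LinearRegression().fit(X, y)               # one simple model
sales["predicted"] = model.predict(X)

plt.plot(sales["week_number"], y, label="actual")
plt.plot(sales["week_number"], sales["predicted"], label="predicted")
plt.legend()
plt.savefig("demand_forecast.png")                 # one output for the team
```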

With that output, you can go, "All right. We're able to get to this degree of accuracy. Where should we invest next? Should we invest in getting more data? Should we invest in developing a different type of model, working with a data science consultancy or otherwise? Should we invest in deploying this across more computers so that we can run the model many times over? Or should we actually just invest in a better way to output this information to people so that they can offer feedback data that will help the model get better and better? Should we invest in, for example, building an interface to offer this output up to people on their phones while they're in the field, with a button that says correct or incorrect, right or wrong?" Then that'll feed back into the model.

This Lean AI process is a way for you to get up to speed quickly and to the point of doing at least one experiment, the results of which will help you figure out where to invest next.

Should the CIO or CTO be responsible for investments in AI?

Michael Krigsman: Who should be making this investment? Is it the CIO? Is it the chief technology officer? Can you even generalize?

Ash Fontana: It's great if you enable lots of people throughout the organization to run these experiments in some ways. Give them some basic tools and basic data infrastructure.

Get the organization being AI-first and encourage people to run little experiments everywhere because people don't need to have machine learning expertise to run predictive modeling experiments. They can just run it with basic statistical methods. Or you can give them some degree of automated machine learning with various tools out there or build some of your own.

Ideally, the whole organization is full of people in the field every day that are able to run these experiments. But most of the time it's the case that the CIO will probably be allocating budget to these things, these experiments early on.

Again, it depends. But, ideally, you've got a whole organization that's thinking AI-first and speaking AI-first, but also acting AI-first by whenever they see something that they could automate, running experiments to test it out.

Culture change and AI adoption

Michael Krigsman: It sounds like what you're also trying to drive for here is a culture change inside the organization. I'm just thinking about culture change when a company tries to shift to being more customer-centric, really thinking about customer experience. That's a big change, and it sounds like the culture change you're asking for here is also a very significant change.

Ash Fontana: It's not really a significant change because everyone is already at the point in the year 2021 where we're pretty good about storing a fair bit of data that we're collecting, whether it's in customer surveys or from sensors out in the field or in a factory. We all sort of get the value of at least storing this stuff, so a lot of people are in that mode and people understand the notion.

It's not just technical people who understand the notion of having good data hygiene, labeling data properly, and trying to keep it pretty organized in tables and whatnot. There's a lot to do in a lot of organizations in that regard, but we're sort of there.

People get that you've got to be analytical, data-driven, and have sort of some facility with data these days and communicating things through data. I think a lot of organizations are pretty far along the journey there rather than just relying on opinions all the time.

I think a lot of people see the power of machine learning. People are reading about it every day. It's in the press. There are lots of really amazing examples of what it can do, like solving the protein folding problem and playing games, of course – a lot of those examples.

I think, in a sense, a lot of people are already there. However, in a sense, sure, really moving to be truly AI-first and getting people to really understand this notion of a data learning effect and how powerful it can be, having a good experimentation framework in place, so lots of people throughout an organization can run experiments, and having people in the right roles across the organization requires a bit of work. But again, depending on where you are in the journey, I think it often is just a bit of marginal effort.

Taking a team of product managers and refocusing their attention turns them into data product managers. You don't have to hire new people.

Taking a team of people who have some sort of statistics background, like they've worked in geostatistics or biostatistics, and getting them up to speed on a couple of machine learning techniques so that they start modeling is really quite easy. You don't have to go and rehire all those people. A lot of organizations have these people who, just given the right vocabulary and set of tools that I outline in this book, can be an AI-first team.

Michael Krigsman: We have a very interesting question from Arsalan Khan on Twitter. He's a regular listener and he asks great questions. Arsalan, thank you for that.

Arsalan says, "If every department is on their own AI journeys, then who is responsible for ensuring that these are working holistically and figure out where there are synergies and also where there are conflicts?"

Ash Fontana: Of course, you want to encourage experimentation with ways to automate things and ways to make better predictions so you can see around the corner in your business. But eventually, you want to turn the results of these experiments, and the models people are developing, into things that people can use every day, into production, so to speak.

Who is really responsible for that? Again, I think there are choices here in terms of the organizational structure. I think there's a spectrum on which you can lie, depending on again where you are in the journey. Ultimately, I think it's good for the business unit managers to make the decision around whether or not the output of that experiment is good enough to invest in it.

For example, at a bank, it's good for the head of the consumer division to decide, "All right, we think we have this product recommendation engine where, when people open their banking app, they're recommended a savings product, a term deposit, or a way to budget better. Maybe not a product recommendation, but a spend or a habit recommendation for their financial health.

"We think we can predict that really well. Here are our results every time we presented the product. Ninety percent of the people accepted it," or "Every time we presented a recommendation, 80% of the people did it."

It's ultimately, I think, up to the head of that retail bank to go, "Yeah, this is really improving the experience for our customers and, in the case of recommending products, it's improving our sales, our bottom line." Our top line, I should say. "We should invest a little bit more to make sure this is in production; this is a permanent part of the application."

Ultimately, I think it probably comes down to the business unit manager to make the decision about putting something into production. It's probably the case that it comes down to the CIO to make the decision to invest in infrastructure to allow people to even get to that point.

Enterprise AI investment and financial return

Michael Krigsman: We have another question from Twitter from Lisbeth Shaw who says, "How can business leaders justify the cost of these experiments when the business requires immediate monetary return, financial return?" You've described a fairly involved process, fairly lengthy process.

Ash Fontana: It gets to the heart, I guess, of one of the reasons why a lot of people aren't investing in this and why a lot of these projects fail. It's because they're sort of put off on the side as an R&D thing, an R&D lab with a separate budget and no expectation of ever really developing anything that earns a return on the investment.

That doesn't have to be the case with AI. I would say that's actually probably far from ideal in that AI is so powerful and so applicable in so many parts of the business that putting it off on the side in an R&D lab really doesn't help it achieve its goals. I think bringing it into the business is good because you're seeing more uses of it. But of course, that means it needs to have a more immediate return.

I talk a lot in the book, as you said, about how to manage the risk that something is perceived as not being profitable or useful or whatnot early on. There's a lot to this, but to go through some of the items on those lists, it's things like:

  • Making sure the time to value is low.
  • Making sure that you run a quick experiment that shows some results early, some degree of accuracy on the prediction. Really constraining the problem means you can quickly run a model over a set of data and get that result, keeping the time to value low.
  • Making sure that your output is really understandable by people. It's in a format they understand: a chart, a report, whatever. It's not in a confusion matrix, which is the actual term for the table that represents the results of a machine learning model (see the sketch below).
  • Making sure you present a path to integrating it into an existing workflow.

Say we're making a prediction to better stock our shelves: we're going to predict ahead of time when something is going to run out. Well, that's not very useful unless you put it in the hands of the shelf stocker, so show a plan ahead of time of how you're going to do that. You're going to put it on an iPad that they carry around, or in something at the back of the store so they know how to pick and pack, or pick and place, properly.

Those are some of the things, but there are 20 items on that list that ensure effective implementations. That's one part of it.
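On the "understandable output" point above, here is a small sketch that turns a confusion matrix into a couple of plain-language sentences; the labels and counts are invented for illustration (1 = the item actually ran out).

```python
# A sketch of translating a confusion matrix into plain language for people
# in the field, rather than handing them the raw table.
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # what actually happened
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # what the model predicted

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"Of {tp + fn} real stock-outs, the model caught {tp}.")
print(f"It raised {fp} false alarm(s) across {tn + fp} normal cases.")
```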

Then there is the measurement part of it, which is, how do you make sure that you're showing a return quickly, or how do you operate in an environment where you're budget-constrained? I'll just say a few things here.

One, this Lean AI process helps. If you constrain the experiment, you really don't need much budget to do this. It's the case again that you can do it with one person, one data set, one model, et cetera. That's one way to constrain the cost.

Another way to ensure the ROI is focusing on the right problem. Thirdly, there are a lot of different ways to measure this, making sure that you're properly accounting for the costs.

There is a lot of skepticism around AI projects because people don't really properly account for the cost of it. They don't really account for the cost of data labeling, the cost of research and development. Making sure you're doing that in an honest way, but also making sure you're properly capturing the return on the investment, not just the investment itself, so really linking the results of being able to make a prediction to an outcome.

We were able to predict that people would really like this blue sweater this season. Actually, we had none left over at the end of the season, whereas usually we would have had to discount our sweater stock at the end of winter so heavily that we end up losing a lot of money on our inventory. So, really framing up the return properly and linking it to a business outcome.
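Framed as a rough back-of-the-envelope calculation, with every figure invented purely for illustration, linking the prediction to the business outcome might look like this:

```python
# A sketch of linking a demand prediction to a business outcome: markdown
# losses avoided on end-of-season inventory, net of the project's cost.
units_bought = 10_000
typical_unsold_fraction = 0.30        # share usually left over at season's end
markdown_loss_per_unit = 25.0         # dollars lost per heavily discounted unit

baseline_loss = units_bought * typical_unsold_fraction * markdown_loss_per_unit

units_unsold_with_forecast = 400      # what was actually left after using the model
actual_loss = units_unsold_with_forecast * markdown_loss_per_unit

project_cost = 40_000                 # labeling, compute, people: count it honestly
savings = baseline_loss - actual_loss
print(f"Gross savings ${savings:,.0f}; net of project cost ${savings - project_cost:,.0f}")
```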

Michael Krigsman: In order to do that, don't you have to really have a pretty clear understanding of the technology and what the technology can actually deliver? Otherwise, you're kind of shooting in the dark.

Ash Fontana: I don't think so. This is the thing about AI that's very different to software, I guess, which is, it's fundamentally something that's helping you understand reality, and so you can express it in very real terms. AI is all about, again, making a prediction or automating a process that is manifested in the physical world.

It's the case that the value of it is often more obvious than, for example, the value of using a piece of software to do something a little bit quicker. Software is often pitched as saving time. Measuring and managing people's time is so hard that it's often very hard to tell, again, if you're really saving someone's time by giving them a slightly better piece of workflow software.

Now contrast that with AI for a second where it's really obvious that getting stuff on the shelves more quickly allows you to turn inventory more quickly, allows you to earn a return on assets more quickly.

It's really obvious that automating some task that you have to do at the end of every month, like a financial consolidation task, saves a certain amount of cost because it's someone's job to do that every month, and they don't have to do it anymore. They can go and do something else.

It's really obvious that if you reduce defects on a production line, your factory is running at a higher capacity.

Often, AI is applied to a real-world automation or used to generate a prediction of demand or supply that allows you to earn more money and meet the market quicker or better. That's not the case with software. I've worked in the software industry for a long time, and you really struggle to prove ROI but, with AI, it's a lot easier.

Michael Krigsman: What about the team? What kind of team needs to be in place or what kind of talent do you need to bring on board in order to do this?

Ash Fontana: Again, reiterating something I said before, a lot of the talent is already in an organization because you don't have to be a computer scientist with a Master's in Machine Learning to get started with machine learning. You don't have to have a Ph.D. in this field to really develop these models. They're just statistical models in so many ways.

There are also so many great tools out there to help you get started, to get the initial models up and running, to experiment by throwing a bunch of data at a pre-trained model from a company like Amazon, Microsoft, Google, et cetera. There are a lot of automated machine learning companies out there that have really good stuff that works out of the box, so to speak.
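As one example of "throwing data at a pre-trained model," here is a sketch using AWS Rekognition, one of the cloud vendors Fontana mentions. It assumes the boto3 library is installed, AWS credentials are configured, and "shelf_photo.jpg" is a hypothetical local image.

```python
# A sketch of using a vendor's pre-trained vision model out of the box.
import boto3

rekognition = boto3.client("rekognition")
with open("shelf_photo.jpg", "rb") as f:
    response = rekognition.detect_labels(Image={"Bytes": f.read()}, MaxLabels=10)

for label in response["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))
```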

To get started, I'd challenge the notion. I don't think that you need a lot of different people. That's one thing.

The second thing is, you can find these people in fields that aren't computer science. You can find these people in fields that are very rich in statistical training, as I said before, like geology, chemistry, biology – all these other fields that, again, aren't computer science. You can fish in different ponds for talent, so to speak.

Then the other point to make is you can sort of morph existing roles into this, so turn product managers into data product managers, or turn engineers into data engineers, or infrastructure engineers into data infrastructure engineers – in a way. Obviously, easier said than done, but totally possible.

Again, it depends – just to close out – on the degree to which you've started on this journey. If you're really far along on this journey, putting a lot of machine learning models in production, yeah, you're going to need some pretty serious people to help you do that. But if you're just starting out, you don't need that many of these people.

Michael Krigsman: We have another question from Twitter, again from Arsalan Khan, who asks, "How do you incentivize people in the organization to take on larger projects or to think bigger than their own siloed tasks in order to get really a larger benefit from AI?"

Ash Fontana: Trying to break down the language around AI to make it more focused on automation and prediction, and just really encourage people to ask the questions like what's the thing you'd just really like to automate that you just don't think we need to do anymore by hand? What's the thing that you'd really like to know around the corner that would help you get your number better every quarter? I think framing it in those very real terms rather than, "What's the AI you want to build?" I guess, that's one thing.

The next thing is obviously evangelizing the power of this as a tool, just how it can build this runaway competitive advantage, and just how, once you get the flywheel going, it's very self-sufficient and operational. I think that's a big thing as well. I think, yeah, lots of this different sort of framing can really help motivate people to see the potential and act on it.

Michael Krigsman: As somebody just commented on Twitter, you need to tie the AI experiment to some business improvement—improvement in production, eliminating error or waste, something—that's the key.

Ash Fontana: Yeah, I think that that's the starting point, actually. What are you really trying to solve for? What are you really trying to predict around? That's the starting point. Getting that right early on will allow you to form the model correctly and keep people motivated, et cetera.

How can business leaders overcome obstacles to AI adoption in the enterprise?

Michael Krigsman: Ash, what advice do you have for business leaders who are listening and who want to get involved in doing this but there are so many moving parts and it looks hard, it looks complicated?

Ash Fontana: Provide yourself with a set of frameworks that help you break it down into real, everyday challenges, which I try to do in the book, of course. I think trying to work with existing projects and existing people who are in the field, trying to approach a problem from a different angle, and seeing if, okay, maybe this is a prediction problem. Trying to just go back to your core business goals and reframing them in terms of automation, or something else like that, that makes it more of an AI problem. I think that's a good place to get started.

I think making sure your whole organization sees the power of AI, whether that is by getting them to read some similar books or by developing a new understanding of these technologies and meeting them where they are. If they're into history, meeting them with a history book on the field of AI. If they're into science and biology, meeting them with a neuroscience book.

I've got a big reading list on my website that helps people approach this field from wherever their existing interests are or from their existing fields of interest and get into it orthogonally. I think that helps.

Then just sharing this vocabulary and starting to use these words. Again, using a certain term or using a set of words can be really powerful in terms of changing the way people think. It really does start with how you have a conversation.

Michael Krigsman: Ash, as we finish up, are there certain kinds of challenges or patterns of challenges that business leaders are likely to bump into as they get going on this journey?

Ash Fontana: There are so many different challenges. I think a lot of people get stuck in this Catch-22 of spending all their time organizing, collecting, managing data without stopping to think, "Hang on a second. I can hire a machine to do this," or "I can hire an AI to do this," or "How can I hire an AI to achieve the actual business goal here rather than constantly remodeling my idea of the world in our database?"

I think a lot of people just get stuck in the challenge of organizing data before actually just running a really small experiment on a very constrained data set. I think a lot of people get stuck in very complicated methodologies around machine learning rather than just focusing on the really simple statistical models that will help you get to the fertile variable and understand the real cause of things quickly.

I think people get stuck with silos, as in it's hard to break people away from the notion that AI or technology generally is a thing that's off to the side of the organization rather than out in the field helping people every day gathering information. If you're trying to approach this from an organization that keeps people in those silos, it's going to be really hard for them to get a feel for the problems they need to solve. I think that's a challenge that people face as well.

I think the challenge is picking the right problem, too. You want to avoid the pitfall of picking a problem that is super mission-critical and, if AI gets it wrong, the whole project will immediately fail, rather than a problem where AI is a way to just be very additive, you know, add more ways to generate sales leads or more ways to market to customers. Yeah, picking the right problem is often a challenge too.

Michael Krigsman: Great. That's great advice. I really like what you said. Find a problem that is not going to risk the company if you fail.

Ash Fontana: Yeah.

Michael Krigsman: But that adds to what you're doing today.

Ash Fontana: Mm-hmm. Yeah.

Michael Krigsman: I think that makes it easier to weave in.

Ash Fontana: Yeah. Yeah, that's the idea.

Michael Krigsman: Well, thank you. We've been speaking with Ash Fontana. He is the author of The AI-First Company. It's a really interesting and a really good book, so you should definitely check it out.

Everybody, thank you for watching, especially those people who asked such excellent questions. Now, before you go, please subscribe to our YouTube channel and hit the subscribe button at the top of our website. Also, tell your friends and check out CXOTalk.com.

Have a great day, everybody. We have great shows coming up, and we'll see you again next time. Take care. Bye-bye.

Published Date: May 07, 2021

Author: Michael Krigsman

Episode ID: 706