Artificial Intelligence in Business, with Anthony Scriffignano (Dun and Bradstreet)

Hype around artificial intelligence and machine learning continues to explode. In this episode, a prominent data scientist explains AI, explores how the technology works, and discusses the ethical implications. Our guest is Anthony Scriffignano, Chief Data Scientist of Dun & Bradstreet.

47:47

Nov 11, 2016


Scriffignano has over 35 years' experience in information technologies, Big-4 management consulting, and international business. Scriffignano leverages deep data expertise and global relationships to position Dun & Bradstreet with strategic customers, partners, and governments. A key thought leader in D&B's worldwide efforts to discover, curate, and synthesize business information in multiple languages, geographies, and contexts, he has also held leadership positions in D&B's Technology and Operations organizations. Dr. Scriffignano has an extensive background in linguistics and advanced computer algorithms, leveraging that background as primary inventor on multiple patents and patents pending for D&B.

Scriffignano regularly presents at various business and academic venues in the U.S., Europe, Latin America, and Asia as a keynote speaker, guest instructor, and forum panelist. Topics have included emerging trends in data and information stewardship relating to the “Big Data” explosion of data available to organizations, multilingual challenges in business identity, and strategies for change leadership in organizational settings.

Transcript

Michael Krigsman: Welcome to Episode #204 of CXOTalk. I'm Michael Krigsman, and CXOTalk brings the most innovative, interesting, really great people for a live, spontaneous video conversation, and what an incredible way for me, and for all of us, to interact with the guests. And, you can follow us on @cxotalk, and use the hashtag #cxotalk to ask questions and to make comments directly to the guest. So today, on Episode 204, I'm speaking with Anthony Scriffignano, who is the Chief Data Scientist at Dun & Bradstreet. This is his second time on CXOTalk. He is such an articulate and clear communicator, and we're going to talk about the foundations of artificial intelligence, and the implications for business, in a very practical way. Anthony Scriffignano, how are you today?

Anthony Scriffignano: Michael, thank you very much. It’s great to be with you again.

Michael Krigsman: Well it’s awesome that you’re here! Please, share with us your background, and tell us about Dun & Bradstreet.

Anthony Scriffignano: Sure. So, Dun & Bradstreet, first of all, is arguably the world's largest commercial data environment that you can encounter for information about businesses globally. And we maintain a database of over 250, closing in on 260, million entities right now. We never forget a business, even after it's out of business. This information is collected from hundreds of countries all over the world. It's collected in different languages and writing systems. It's updated millions of times a day. So, it's a pretty big environment. [The information is] very, very dynamic ─ lots and lots of change in that environment. Those countries have different laws. We have to be very careful about how we curate the data, what we discover, permissible use, the whole nine yards. And, we certainly have to worry about implications of things like changing environments and changing behaviors ─ some of the things that we'll get into today touch on the space of AI and machine learning and some of the underlying things.

As far as my role as Chief Data Scientist, one of the mandates that I have is sort of looking over the horizon. Looking at technologies and capabilities that will enable us going out into the future ─ so, not two days from now. But, before they become problematic, we have to be able to be very much aware, very well informed. So I spend a lot of time in the data science community of practice working with some of the top data scientists in the world, to basically say what we think out loud and be called crazy if we are; and to share ideas and thoughts; and to understand some of the new capabilities that are becoming possible. And then making sure we understand how these might be applied to this environment that we maintain.

Michael Krigsman: So we’re going to be talking about artificial intelligence and machine learning, and autonomous systems, but before we do that, just briefly share with us the kinds of things that you’re using AI or machine learning for inside Dun & Bradstreet.

Anthony Scriffignano: Sure, so we’re looking at a number of different processes that touch on this space. The most obvious one is in computational linguistics, natural language processing, things of that nature. We need to be able to look at large amounts of data. Some of it is unstructured. Actually, a lot of it is unstructured and we need to be able to understand things that are very difficult to understand because they’re proper nouns. See, you can look up the words in the dictionary, but you can’t look up your name in the dictionary, and by the time you can, it’s old news. So, we have to be able to deal with discussion about places, discussion about people. So, that’s a very big part of what we do.
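
To make the proper-noun problem concrete: one naive approach treats anything you can't find in a lexicon as a candidate name. A minimal Python sketch (the tiny lexicon and example sentence are invented for illustration, not anything D&B uses):

```python
# Toy lexicon; a real system would use large dictionaries per language.
LEXICON = {"the", "new", "office", "opened", "in", "by"}

def candidate_proper_nouns(sentence):
    # Words missing from the lexicon are candidate names of people,
    # places, or businesses: things you "can't look up in the dictionary."
    return [w for w in sentence.split()
            if w.lower().strip(".,") not in LEXICON]

print(candidate_proper_nouns("The new office in Osaka opened by Scriffignano"))
# expect ['Osaka', 'Scriffignano']
```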

Another thing that we do is something called recursive discovery, which is the ability to essentially adjudicate the truth. You have to be very careful with information that you pull in, even information that you know to be true, because not all true information is true at the same time, and truth is somewhat fungible ─ it's not really a black and white kind of thing in many cases. And then the big Magilla [Gorilla], if you will, is malfeasant behavior. Fraud is another word for it. Sometimes when people take actions, to us they haven't actually committed fraud yet because they haven't gained financially. So, watching people behave in bad ways is certainly something where these types of techniques are very much part of what we think about in terms of how we can inform our thinking, and how we can do what we're trying to do to inform our customers.

Michael Krigsman: Ok, so these are areas that you're focused on inside D&B. But now, let's take a step back. When you, as a data scientist, think about the term "AI", what do you think of? What does it actually mean? And let's try to go beyond the hype, because AI has become the latest jargon ─ you know, it's even better now. It's exploding faster than digital transformation, which has exploded into meaninglessness.

Anthony Scriffignano: Yeah.

Michael Krigsman: Yeah. [Laughter]

Anthony Scriffignano: If there’s nothing else that our industry is good for, it’s creating terms that people can use that have ambiguous meaning, and can be taken to mean almost anything in any situation. And this is certainly one of them. So, it’s one of those things that you understand, but then when you try to define it, scholars will disagree on the exact definition. But, artificial intelligence collectively is a bunch of technologies that we run into. So, you’ll hear “AI.” You’ll hear “machine learning.” You’ll hear “deep learning,” [or] sometimes “deep belief.” “Neuromorphic computing” is something that you might run into, or “neural networks;” “natural language processing;” “inference algorithms;” “recommendation engines.” All of these fall into that category. And some of the things that you might touch upon are autonomous systems ─ bots. Sometimes, we will hear talk of… Well, Siri is probably the most obvious example that anybody runs into (or any of the other ─ I won’t try to name them all because I’ll forget one), but things of that nature where you have these assistants that try to sort of mimic the behavior of a person. When you’re on a website, and it says, “Click here to talk to Shelly!” or “Click here to talk to Doug!” You’re not really talking to a person; you’re talking to a bot. So, those are examples of this.

Generally speaking, that's the broad brush. And then if you think about it as a computer scientist, you would say that these are systems and processes that are designed to do any one of several things. One of them is to mimic human behavior. Another one is to mimic human thought processes. Another is to "behave intelligently" ─ you know, put that in quotes. Another is to "behave rationally," and that's the subject of a huge debate. Another one is to "behave ethically," and that's an even bigger debate. Those are some of the categories that these systems and processes fall into.

And then there are ways to categorize the actual algorithms. So, there are deterministic approaches; there are non-deterministic approaches; there are rules-based approaches. So, there are different ways you can look at this: you can look at it from the bottom up; the way I just described; or in terms of what you see and touch and experience.

Michael Krigsman: So, from a business perspective, when we hear terms like “machine learning,” “AI,” “cognitive computing,” is there some type of framework in which we can think of these things? How do they relate to one another? Are they synonymous?

Anthony Scriffignano: They’re not synonymous. So, cognitive computing is very different than machine learning, and I will call both of them a type of AI. Just to try and describe those three. So, I would say artificial intelligence is all of that stuff I just described. It’s a collection of things designed to either mimic behavior, mimic thinking, behave intelligently, behave rationally, behave empathetically. Those are sort of the systems and processes that are in the collection of soup that we call artificial intelligence.

Cognitive computing is primarily an IBM term. It’s a phenomenal approach to curating massive amounts of information that can be ingested into what’s called the cognitive stack. And then to basically be able to create connections among all of the ingested material, so that a particular problem can be discovered by the user, or a particular question can be explored that hasn’t been anticipated.

Machine learning is almost the opposite of that: you have a goal function ─ something very specific that you try to define in the data. And the machine learning will look at lots of disparate data, and try to create proximity to this goal function ─ basically try to find what you told it to look for. Typically, you do that by either training the system, or by watching it behave and sort of turning knobs and buttons, so there's supervised and unsupervised learning. And that's very, very different than cognitive computing.
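
A minimal sketch of that distinction in Python, using scikit-learn (the library choice and toy data are illustrative assumptions, not anything from the interview): in the supervised case we hand the system labels as its goal function; in the unsupervised case it organizes the same data with no labels at all.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy data: two blobs of 2-D points.
a = rng.normal(loc=[0, 0], scale=0.5, size=(50, 2))
b = rng.normal(loc=[3, 3], scale=0.5, size=(50, 2))
X = np.vstack([a, b])

# Supervised: we supply the "goal function" as labels and train toward it.
y = np.array([0] * 50 + [1] * 50)
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.2, 0.1], [2.8, 3.1]]))  # expect [0 1]

# Unsupervised: no labels; the algorithm finds the structure by itself.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_[:5], km.labels_[-5:])  # two discovered clusters
```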

Michael Krigsman: And, what about autonomous systems? We’re kind of like, this is truly an alphabet soup.

Anthony Scriffignano: Yeah.

Michael Krigsman: Ok. [Laughter]

Anthony Scriffignano: So we [covered] all the colors in the palette here, a little bit, to be able to speak this language. So, autonomous systems are systems that behave without human interaction, essentially. They go off on their own, and do what they've been told to do. And you can think of a drone that's not being flown by somebody as an example of an autonomous system. Lately, there's been a lot of talk of autonomous vehicles, which is an interesting sort of oxymoron, because they're not really "autonomous." There's somebody in the car, but that person in the car isn't driving the car. And that would be an example of that. And then, there are semi-supervised [systems], somewhere in between autonomous and not-autonomous, for lack of a better word. That means that you might have systems where you can intervene if necessary. So think of the autopilot on an airplane ─ they like to call it the flight control system or the flight… there are other words for it on airplanes. They're basically designed to help the airplane maintain a course: maintain an altitude, maybe do something like change altitude or change direction with some input. But, at a certain point, you know, if there's a lot of turbulence, or something unusual happens ─ the plane's in an unusual attitude, the thing breaks ─ the autopilot system or the autoflight system basically turns off and says, "It's your airplane, have a nice day!" So, they're not completely autonomous. There's an "If all else fails, give it to the human" kind of function in many of these.

Michael Krigsman: Well certainly, we should talk at some point during this conversation about human functions that are augmented by AI. But there’s a few things we need to get to first, and we have a question from Twitter. It’s a really good question, let me tell you what it is, but let’s again address it a little bit later. And this is from Arsalan Khan, who’s asking, “When an AI system makes a decision that is based on bad data, or bad algorithms, in business, who is responsible for that?” Which is a fundamentally important question, but let’s come back to that because there’s still some basics I think we need to get out of the way, and we’ll definitely want to talk about the ethical aspects.

Anthony Scriffignano: Yeah, we're going to have to parse through them too, but you're going to have to define "bad." We'll get there.

Michael Krigsman: Yeah, we’ll get there in a couple minutes. Ok. So, autonomous systems, machine learning, what does all of this, first off, have to do with AI?

Anthony Scriffignano: Well, basically, we need AI for autonomous systems to behave autonomously. That would be the simple way to put it: for an autonomous system to work properly. Imagine you had a train, and you wanted the train to be able to come up to speed, travel down the tracks, slow down when necessary, not go through any signals, and stop at the next stop. I'm not a train engineer, but I'm guessing I could probably build an analogue system to do most of that. And I'd still probably want someone sitting there with their hand on the brake just in case it doesn't work. But, I think I could probably build a system that did not require a lot of intelligence to do that.

But, now you think about what happens when there are no railroad tracks ─ when the road might have lines in it that might get fuzzy; or might be covered with rain; or a kid runs out with a ball; or there’s a police car and you need to pull over; or a plane lands in front of you, or an elephant walks out in the road. And pretty soon, you get into this system where the number of things that can happen starts to overwhelm discrete description, and a basic set of rules. Now we start to need AI to be able to deal with the problem like that ─ to be able to effectively learn. And I have to say we tend to anthropomorphize these systems or these algorithms. We talk about “machine learning,” and “systems learning.” Well, they’re not “learning.” They are adjusting information and they’re organizing it in ways that they’ve been designed to organize it. I actually use the term “symbiotic intelligence” instead of “artificial intelligence”. These are systems that have been taught to learn in ways that we’ve described for them with primary goals that we’ve given to them. But, without having to say all that, we can say “learning”.

Michael Krigsman: Ok. So again, in our quest to demystify the basics, you explained that the systems quote-unquote "learn". And we hear the term "modeling," right? We hear that we have to train… When you talk to data scientists and they're talking about machine learning, they say we have to "train the model" ─ create the model and then train it. What does that mean?

Anthony Scriffignano: So, I’m not sure if we’re demystifying this or mystifying this because unfortunately, this is a field that every time you talk about something, there’s new terms that come in. So, let’s just talk about what a model is mathematically first, and then we’ll talk about how it applies to machine learning and training the model.

So, a model is basically a method of looking at a set of data in the past, or a set of data that’s already been collected, and describing it in a mathematical way. And we have techniques based on regression, where we continue to refine that model until it behaves within a certain performance. It basically predicts the outcome that we intend it to predict, in retrospect. And then, assuming that we can extrapolate from the frame we’re in to the future, which is a big assumption, we can use that model to try to predict what happens going forward mathematically. The most obvious example of this that we have right now is the elections, right? So we look at the polling data. We look at the phase of the moon. We look at the shoe sizes. Whatever we decide to look at, we say, “This is what’s going to happen.” And then, something happens that maybe the model didn’t predict.
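
As a toy illustration of what it means to fit a model in retrospect and then extrapolate (all numbers invented), here is a least-squares regression sketch:

```python
import numpy as np

# Invented "past" observations: x = time period, y = what we measured.
x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.0])

# Fit a degree-1 (linear) model by least-squares regression.
slope, intercept = np.polyfit(x, y, deg=1)

# "In retrospect": how well does the model describe the data it already saw?
residuals = y - (slope * x + intercept)
print("slope=%.2f intercept=%.2f max error=%.2f"
      % (slope, intercept, np.abs(residuals).max()))

# Extrapolation is only valid if the future behaves like the frame we fit
# on, which, as the election example shows, is a big assumption.
print("predicted y at x=7:", slope * 7 + intercept)
```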

And, I saw some great articles over the last few days blaming the data. “Stupid data!” The data doesn’t have stupidity. And I’m not saying the people that interpret the data are stupid either. I’m saying that things can always happen within random variation, or they can always happen according to attributes that weren’t anticipated. So, modeling is a good thing. It’s an important thing. We all live and die by certain models in our lives. That’s how interest rates happen. That’s how all kinds of … that’s how certain warnings come up in your car. There’s all kinds of reasons why we want models to work. But, we also have to be very humble that the human brain doesn’t work that way; doesn’t work that way at all.

So, now we get into AI. The way some systems work, not all, is they say: "Show me something that looks like what you're looking for, and then I'll go find lots of other things that look just like it. So train me. Give me a webpage, and tell me on that webpage which things you find to be interesting. I'll go find a whole bunch of other web pages that look like that. Give me a set of signals that you consider to be danger, and then when I see those signals, I'll tell you that something dangerous is happening." That's what we call "training."

Michael Krigsman: Ok Anthony, I don’t mean to interrupt, but please, drill down a little bit more on this. So we hear, just for example, companies coming up with image search.

Anthony Scriffignano: Yes.

Michael Krigsman: So, train us in terms of images, mountains, or seashores. When you say “Find something interesting on the page,” can you drill into that?

Anthony Scriffignano: Sure. So imagine that I got a whole bunch of people ─ and the gold standard here is that they have to be similarly incented and similarly instructed, so I can't get, you know, five computer scientists and four interns … You try to get people that are either completely randomly dispersed, or all more or less trying to do the same thing. There are two different ways to do it, right? And you show them lots and lots of pictures, right? You show them pictures of mountains, mixed in with pictures of camels, and pictures of things that are maybe almost mountains, like ice cream cones; and you let them tell you which ones are mountains. And then, the machine is watching and learning from people's behavior when they pick out mountains, to pick out mountains like people do. That's called a heuristic approach: we look at people, model their behavior by watching it, and then do the same thing they did. That's a type of learning. That heuristic modeling is one of the ways that machine learning can work ─ not the only way.
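
A hedged sketch of that heuristic approach: the two made-up features below stand in for whatever a real system would extract from the pixels, and the classifier simply reproduces the human yes/no judgments on new examples.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

# Invented stand-in features per picture, e.g. (peak sharpness, whiteness).
# A real system would extract thousands of features from the pixels.
mountains = rng.normal([0.8, 0.7], 0.1, size=(40, 2))
ice_cream = rng.normal([0.7, 0.9], 0.1, size=(40, 2))  # deliberately close
X = np.vstack([mountains, ice_cream])

# The human judgments: people marked which pictures were mountains.
human_says_mountain = np.array([1] * 40 + [0] * 40)

# The machine "watches" the people and then mimics their picks.
model = KNeighborsClassifier(n_neighbors=5).fit(X, human_says_mountain)
print(model.predict([[0.82, 0.68]]))  # mountain-ish point, expect [1]
print(model.predict([[0.70, 0.92]]))  # cone-ish point, expect [0]
```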

There are a lot of easy ways to trick this. So, people's faces are a great example. When you look at people's faces ─ and we probably all know that there are techniques for modeling with certain points on a face, you know, the corners of the eyes. I don't want to get into any IP here, but there are certain places where you build angles between these certain places, and then those angles don't typically change much. And then you see mugshots with people with their eyes wide open, or with crazy expressions on their mouths. Those are people trying to confound those algorithms by distorting their faces. It's why you're not supposed to smile in your passport picture. But, machine learning has gotten much better than that now. We have things like eigenfaces, and other techniques for modeling the rotation and distortion of the face and determining that it's the same thing.
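
The eigenface technique he mentions is, at its core, principal component analysis over face images: each face is reduced to a short signature in a shared basis, and two shots of the same face should land near each other even under some distortion. A minimal sketch with stand-in pixel data (random numbers, not real faces):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

# Stand-in for a gallery of 64x64 grayscale face images, flattened.
faces = rng.random((200, 64 * 64))

# The principal components of the gallery are the "eigenfaces."
pca = PCA(n_components=20).fit(faces)

# Any face becomes a 20-number signature; two shots of the same person
# should land close together even if the expression shifts a little.
sig_a = pca.transform(faces[:1])
sig_b = pca.transform(faces[:1] + rng.normal(0, 0.01, (1, 64 * 64)))
print("distance between the two shots:", np.linalg.norm(sig_a - sig_b))
```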

So these things get better and better and better over time. And sometimes, as people try to confound the training, we learn from that behavior as well. So this thing all feeds into itself and these things get better, and better, and better. And eventually, they approach the goal, if you will, of yes, it only finds mountains. It never misses a mountain and it never gets confused by an ice cream cone.

Michael Krigsman: And how is this different from traditional programming, right? Because with traditional programming, we can put up pictures, you can do a Google search, or a few years ago maybe, before there was big machine learning, and pick out pictures of mountains or whatever. So how is this different?

Anthony Scriffignano: So, without getting into a whole debate on how it used to work versus now (because I’m sure there’s a bunch of people on the internet that will take us to task), this has been done in a lot of different ways. The original way that this was done was through gamification or just image tagging. So, they either had people play a game, or they had people trying to help, saying, “This is a mountain,” “This is not a mountain,” “This is Mount Fuji,” “This is Mount Kilimanjaro.” So, they got a bunch of words. They got a bunch of people that use words to describe pictures …

Michael Krigsman: Amazon Turk, for example.

Anthony Scriffignano: There you go. Mechanical Turk. Right. And then, using those techniques, they just basically curated a bunch of words and said, "Alright, there's a high statistical correlation between the use of the word 'mountain' and this image. Therefore, when people are looking for a mountain, give them this image. When they're looking for Mount Fuji, give them this image and not this image." And that was basically a trick of using human brains and using words. That's not the only way it works today. There are many more sophisticated ways today.
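
A toy version of that curation step (the tags below are invented): count how often each word co-occurs with each image across human tags, then serve the most strongly associated image for a query word.

```python
from collections import Counter, defaultdict

# Invented human tags: (image_id, word) pairs from a tagging game.
tags = [
    ("img1", "mountain"), ("img1", "snow"), ("img1", "fuji"),
    ("img2", "mountain"), ("img2", "kilimanjaro"),
    ("img3", "ice"), ("img3", "cream"), ("img3", "cone"),
    ("img1", "mountain"),
]

# Co-occurrence counts: word -> image -> how many people used that word.
assoc = defaultdict(Counter)
for image, word in tags:
    assoc[word][image] += 1

def best_image(word):
    # Serve the image most strongly associated with the query word.
    return assoc[word].most_common(1)[0][0] if assoc[word] else None

print(best_image("mountain"))     # img1: tagged "mountain" most often
print(best_image("kilimanjaro"))  # img2
```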

Michael Krigsman: Ok, this is good.

Anthony Scriffignano: I have a good example for you. After the earthquake and the tidal wave happened in Japan a number of years back, we needed to try to help the people in Japan. And one of the things we had to do was look at satellite images and find roads and infrastructure that were impacted by all of these horrible things that happened. So, we taught a series of algorithms to find previously unbroken straight and curved lines that were now interrupted, and then we had an algorithm that inferred the degree of impact to the infrastructure around a business. So, that was learning about something that just happened using data we had never used before ─ data that in this case was graphical, that we could reduce to something mathematical and observe quote-unquote "thousands, and thousands, and thousands of times" really quickly. That's an example of a real impact.
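
What follows is a toy reconstruction of that idea, not D&B's actual algorithm: using OpenCV on a synthetic image, an unbroken road yields a few long line segments, while a damaged one fragments into more, shorter ones, and that difference is the signal.

```python
import numpy as np
import cv2  # OpenCV, one common tool for this kind of line detection

def draw_road(broken):
    # Synthetic "satellite image": a single road across a blank tile.
    img = np.zeros((200, 200), dtype=np.uint8)
    cv2.line(img, (10, 100), (190, 100), 255, 3)
    if broken:
        cv2.rectangle(img, (90, 90), (110, 110), 0, -1)  # simulated damage
    return img

def count_segments(img):
    # Detect edges, then line segments along them.
    edges = cv2.Canny(img, 50, 150)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                           minLineLength=20, maxLineGap=5)
    return 0 if segs is None else len(segs)

# More, shorter segments after the event suggests an interrupted road.
print("segments before:", count_segments(draw_road(broken=False)))
print("segments after:",  count_segments(draw_road(broken=True)))
```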

Michael Krigsman: And, what about autonomous cars? If you live in someplace like San Francisco, you see these autonomous cars driving in the streets. What is the role of AI and machine learning, other technologies, in making that possible?

Anthony Scriffignano: So, [this is] a whole industry in the process of exploding right now, right? So we started out very much like the autoflight systems in airplanes. We wanted the car to stay in the lane, and stay at a certain speed, and remain a certain distance away from the car in front, right? So, if a car pulled in in front of you, the car that you're driving ─ in, let's call it, "autoflight mode" or "autodrive mode" ─ would slow down enough to keep a certain distance, and override the intention to drive at a certain speed, but not change lanes. So this is, "Stay in the lane; stay at the speed unless you're going to hit somebody," basically.
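
That rule layer really is just straightforward code, which is the point: a sketch like the one below (invented units and thresholds) needs no AI, and it is the open-ended situations described a little later that outgrow it.

```python
def target_speed(set_speed, gap_m, lead_speed, min_gap_m=30.0):
    """Toy adaptive-cruise rule (invented units and thresholds): hold the
    set speed, but never close inside the minimum gap to the car ahead."""
    if gap_m < min_gap_m:
        # Too close: drop below the lead car's speed to reopen the gap.
        return min(set_speed, lead_speed - 5.0)
    return set_speed  # clear road: stay at the speed

print(target_speed(set_speed=100, gap_m=80, lead_speed=90))  # 100: clear
print(target_speed(set_speed=100, gap_m=20, lead_speed=90))  # 85: back off
```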

Now, the autonomous cars are way beyond that, right? So they know something about the road in front of them. They know essentially what a GPS system would know, in terms of what the roads ahead of us are and what the traffic looks like… So the overall goal might still be, "Get from Point A to Point B, stay in the lane, try to drive at the speed," but they're much, much, much more sophisticated in terms of the information that they can bring in and the type of decisioning. So they don't have to just say, "Well, what do you want to do?" They can do it for you, to a certain extent.

There are still some very real concerns. You know, we all read the news, right? An autonomous car isn't going to speed, because there's a speed limit. And well, people speed! People drive with traffic, right? And you try going on a highway and driving the speed limit, and see what happens! In certain cases, that might be very dangerous, and I'm not suggesting we speed. But, I'm just observing as a scientist that people do, right? So, you know, what do you do in a situation where "common sense," in quotes, or at least common practice, dictates that you do something that's against a rule that's built into a system? What do you do when a kid runs out in front of you with a ball, and a dog runs out in front of you chasing the kid, and you've got to hit the kid or the dog? And this is horrible, but these things happen. It's never happened to me; I hope it never does, but I'm pretty sure that I wouldn't just throw my hands up and let the car do whatever it wants. I would have the car do something. Sometimes jamming on the brakes is the most dangerous thing you can do. So, you know, autonomous systems are able to approach the behavior of a safe driver driving in predictable environments right now, within reasonable limits, as long as there's a person in the car to take over. Our goal is to get better than that, but think about the number of things you have to believe for that to be fully functional. I think we'd still want the human being there. I don't think I want the autonomous car going out and delivering pizza all by itself, although probably before I get a chance to eat these words, that will be happening anyway.

Michael Krigsman: [Laughter] We have another question from Twitter, and this is a well-timed question, because we’re talking about the applications, the practical applications of AI and techniques like machine learning. And this is from Frank McGee, who’s wondering: “How are companies using AI to predict the behavior of customers and prospects?” So of course, this is the sales question.

Anthony Scriffignano: Yeah, and of course, in my case, we’re trying to use it to predict the behavior of the bad guy, as well. So it goes either way. So, you know, obviously, the billion-dollar idea is if I could predict, by people’s behavior, what they’re about to do and approach them in their time of need before that need arises, or just as that need arises. Then, I have more of an opportunity to serve that customer and maybe I can take some business away from people who aren’t so agile, and so smart. That’s basically the underlying idea. And AI is certainly being used. You know, we’ve all seen the movie (or many of us) [called] “Minority Report”, where the guy walks into the shopping mall, and all of the digital advertising on the walls is recognizing his eye implant and trying to sell him things, and tell him that he needs things. And I think we’ve all had experience with maybe going onto a, I don’t want to name a site (but something like Amazon for example), where you might search for something and not buy anything, and later on you get an email ─ you know, trying to offer you something. Those are really primitive examples of this kind of technology but it’s getting way, way, way better.

So, by watching enough people behave in a well-understood environment, with well-understood context, we can start to anticipate clusters of behavior and take action on it. A great example would be if we watch people's behavior in supermarkets. So, people go into a supermarket; and we can easily put technology on the cart that says, "Where are they going? How long did they stop?" and then ultimately, "What did they buy?" And by using behavior like that, we can reposition things in the store according to certain goals: like we'd like to make them walk around more; or we'd like to lead them towards the more expensive items; or whatever it is we try to get them to do. So that technology is starting to happen, and it's starting to happen in digital advertising big time. It's starting to happen in very simple things ─ like when you go to a movie theater, there's a lot of technology watching what people do in environments where we understand the context very well. And our behavior is being manipulated in ways we don't realize ─ you would be amazed at some of the ways we're being touched. And then of course there's a creepy factor to that too, which we have to be careful of.
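
A hedged sketch of anticipating clusters of behavior: given per-shopper features such as might be logged from instrumented carts (features and numbers invented), an unsupervised clusterer separates the behavioral groups a store might act on.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Invented per-shopper features: (minutes in store, aisles visited).
browsers  = rng.normal([45, 12], [5, 2], size=(60, 2))  # wander widely
sprinters = rng.normal([8, 3], [2, 1], size=(60, 2))    # in and out fast
shoppers = np.vstack([browsers, sprinters])

# Unsupervised: find the behavioral clusters without naming them first.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(shoppers)

for center in km.cluster_centers_:
    print("cluster: about %.0f minutes, %.0f aisles" % (center[0], center[1]))
```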

Michael Krigsman: We are talking on Episode #204 of CXOTalk, with Anthony Scriffignano, who is the Chief Data Scientist at Dun & Bradstreet, and you are welcome to tweet questions in using the hashtag #cxotalk. So, Anthony, we've been discussing the technology underpinnings of AI, but AI is unique in the sense that the conversations very quickly turn to questions about the ethics of AI ─ whether we should be using AI, and to what extent, and where, and where AI should be prohibited. It extends to questions like the one Arsalan Khan raised earlier: when something goes wrong with an AI system and its outcome, who's responsible? And so first, what is it about AI that lends itself to these very open-ended philosophical questions? Very different from the cloud in that sense.

Anthony Scriffignano: Totally different. Yeah, totally different. So, we talk about disruptive technology as something that forces you to change your behavior, right? The cloud definitely forced us to rethink security and privacy. A laser pointer doesn't really force me to change my behavior ─ it's a long stick I can go point, right? But AI is here to stay; it's not going anywhere. There's every reason to believe that the degree to which this type of technology is pervasive in our daily lives will increase, and that it will become more difficult to even notice. So, we just have to accept that.

So, what is it about it that puts us in a position to question some of these, either moral questions, legal questions, ethical questions? Well, as we give up our autonomy, as we let things do things for us, there are certainly some legal questions about whether those things are essentially electronic agents. If I hire somebody to go deliver dynamite, you know, legally, right? I’m not completely exonerated from some stupid thing that they might do while they’re delivering the dynamite like trip and fall and blow something up, right? So there are legal principles for agency. Those legal principles are probably not completely codified to cover digital agency, just as an example. So if you ask a person to do something for you, there is a very clear understanding in the law of the degree to which your liability extends into that action. The law typically does not catch up with technology.

When the law tries to anticipate technology, the purveyors of that technology often change their behavior knowing what the law is. And so you wind up with a law that either loosely covers a set of behavior, or covers what that behavior was intended to be, and then it changed. And then on top of that, you have the rate of change of bad guys and how people will misuse technology. So, these are really complicated issues.

So, the thing I loved about that question that was asked was “If an AI agent, or let’s just call it an electronic agent for lack of a better term, does something wrong, are you responsible?” Part of the issue here is defining “wrong.” Someone would say, “Look, the system drove the autonomous car into a wall, because the wall wasn’t on the map. And it’s not the car’s fault, it’s the people who built the wall’s fault.” I don’t know about that, because you wouldn’t drive your car into a wall, right? If you were there, you wouldn’t drive your car into a wall. So I could make an argument that the people who made that system did a bad job of creating an AI agent that mimics human behavior, because a reasonable human wouldn’t do that. And this reasonable person standard is in the law already. Does it apply to digital things? Not so much yet.

So I think we have sort of the building blocks there, and I hope that we don't have to completely rethink ethics and moral behavior. But, I think we really do have to think about how much of this legally applies. And, in certain cases ─ I'm not a lawyer, but I work with them a lot ─ you know, you have to see how the courts are going to interpret it, and you have to see what's going to happen in different countries. You have to see how this might change the ability to bring these technologies to the market. We also have to be careful that we're not so afraid of things like that happening that we don't put in something like the Watson system that's out there right now, using all of the curated medical literature to help emergency room doctors… You know, I really want them to do that! I really want them to do that well, because I may have to go to an emergency room someday, and if they're too afraid to use technology like that because it might make a mistake, and not learn from it, then we don't do anything. And that first step never happens. We, as human beings and rational purveyors of the advances of technology, will have to walk that line carefully and not just be afraid.

Michael Krigsman: So, we only have about fifteen minutes, as opposed to three or four days, to continue this discussion. And, the issue of data, okay? The AI outcome relies upon the source of the data and the quality of the data, along with the quality and the caliber of the algorithms and the machine learning that has taken place. So, from an ethical perspective, how do you separate out those two in order to answer Arsalan Khan's question ─ in terms of assigning responsibility when you have a negative outcome, or for that matter, a positive outcome?

Anthony Scriffignano: I don’t think you do. I think that data is permeating everything, and the fact that you didn’t go use the right data to do what you should have done in your algorithm is not an excuse! The fact that you didn’t realize that you took data in motion and created data at rest in order to put it in some training environment, and the world changed outside while you were doing your training, well shame on you. We should know these things. The science of using data hasn’t changed because of machine learning. We have to remember that there are certain things that we need to do in order to use data in motion, or data with varying degrees of veracity or velocity or value to a particular goal. Those are Big Data problems that we had. We’re not allowed to say “Big Data” anymore, though we haven’t completely solved any of those problems yet. So, we need to make sure that we make new mistakes. We need to make sure that we keep all of this learning that has brought us to the point where we can create amazing things like this, and really keep in mind the fact that the underlying data can influence the outcome of the behavior of these things just as well as having the wrong algorithm, or giving it the wrong goal, or not supervising it, and changing it. All of those things are aspects of getting this right. It will not get any easier. It will get more complicated and I would say that’s the work of the future.

You know, everybody talks about, "How many jobs will be eliminated by the creation of artificial intelligence agents, bots, things like that?" Probably a lot! But everything I just said will probably create a lot of new jobs. So, it's all about us as a human race not drowning in our data, and drowning in our technology, and giving up the fact that we have rational thought. And these things typically don't [have it]. There's something going way back to Alan Turing ─ the Turing Test: could I ask a robot a question, and ask a person a question, and not know which one was which? The Turing Test basically says the thing is behaving in an intelligent way when I can't distinguish which one is the human. And when bots first came out, people would just say, "Well, are you a bot?" And the bot wouldn't know how to answer that question, and it was pretty easy to fail the Turing Test. Now, they know how to answer that question, and it's not so easy anymore. So, why not do some good with this? Let's make some new mistakes, and move this forward in a rational, intelligent way, and not just sort of be afraid of it evolving.

Michael Krigsman: The questions of public policy then become prominent in here as well, because of the job issue and because of the fear that’s associated with the possible implications of what will happen. So, where does public policy now start to intersect this?

Anthony Scriffignano: Well, I think it's really something that we need to be thinking about. One example of public policy is marginalization, right? So, who has access to this technology? Do we only put the AI technology that works in the emergency room in the hospitals that are in the inner cities, because there's a higher volume of people? Somebody could make a rational argument that this needs to be available to everyone. [But] if you [have to] make it available to everyone, then you can't take the first step. So, as people who are setting public policy do what they do, they need to think very seriously about things like asymmetry and marginalization, and access to methods, and access to technology, just like they do with anything else in public policy. The difference is ─ look how long it took us to even describe what we're talking about here. This is not an easy conversation to start having. It's not like we're just talking about, you know, changing tires on cars. We're talking about something that is very, very difficult to explain. It's incumbent upon us ─ anybody can make this more complicated ─ it's incumbent upon us to make this easier. To let the people who are setting public policy become aware of some of these issues and do what they do well, and set policy correctly. And conversely, if that's not happening, to speak up and not just move on and wring our hands.

Michael Krigsman: But this is one of the fundamental problems. It’s the fact that we’ve been talking now for about forty minutes, and we’re just at the point where we’ve been able to cover enough of the basics to even have the actual meaningful conversation from a business perspective, or a policy, or an ethical perspective. How in the world can we simplify this so that non-computer scientists can have a meaningful discussion about it?

Anthony Scriffignano: You know, I talk a lot about the sort of reflective leadership that goes into leading an organization that is using technology. You can't just hire smart people. You have to teach yourself. Every day, you have to teach something or learn something, right? So, the people who are setting public policy ─ my hope, and this is probably completely naive, is that they are aware that there are technologies that are starting to come up in the news, and that maybe they should learn a little bit about them. But to come at your question from the bottom up, so to speak: the folks that are very much aware of these technologies ─ what they do, and what the fears and the hopes for these technologies are ─ we have to make sure that our voice is not only a heard voice, but a voice that makes sense. We can't use a whole bunch of jargon. We can't use a lot of big words. It would be very easy to talk about everything you're talking about in language that's so dense that no one would ever figure out what we're talking about, except the people that teach this stuff. And then what? That doesn't help anybody, right? So, we have to find ways to bridge these gaps. We can't lead with the technology.

If somebody comes to me and says, “Well can you use AI to solve this problem?” I don’t know! Tell me what the questions are. Tell me what the problems are. Don’t leap right forward into doing any of this. But, at the same time, think about the implications of not, right? What does it mean to your customers? What does it mean to the communities you serve? What does it mean to the marginalized others? All of these are future questions that we really need to be asking.

Michael Krigsman: I mean, certainly what you're saying is right, but what happens in this case ─ just to give a sense of the flavor of how complex this is: if we talk about these public policy issues, leaving aside the question of "What is AI?", it creates this huge black box, in which data scientists ─ and companies ─ can essentially do whatever they want without real scrutiny. Or is this perspective just wrong?

Anthony Scriffignano: Well, I think the term "AI" is what causes that, right? So organizations can behave in less than transparent ways, in all sorts of ways. And, you could ask the same question about collecting customer information. You could ask the same question about using the behavior of your customers, or your vendors, or anybody who "comes into your store," so to speak (in quotes), in ways that they don't intend, and what that portends. I think that the most important thing here is we shouldn't feel like there's some sort of wall up because we're talking about artificial intelligence. If we need a simple definition, we could say, you know, "systems and processes that are intended to behave as intelligent humans would, in well-understood environments". That's not a perfect definition; it's not a horrible definition. It's sort of an okay definition. And if I had to start a conversation, I would probably start it there. And then I'd probably give some examples, and then I'd probably say, "Like Siri, like a bot, like an autonomous car." Eventually, you can get into a conversation about the difference between a drone with somebody flying it, and a drone without somebody flying it. And is that an … You know, the FAA is worrying about that right now. They're trying to create regulations that cover things like that. But don't start there. Don't start where it's complicated. Start where it's simple, and at least reasonably possible to adopt a working definition.

Michael Krigsman: So, before we run out of time, there are a couple of other things that we just need to talk about. And, we have not really discussed the topic of privacy. So, where does data privacy fit into this equation, into this landscape?

Anthony Scriffignano: It's a huge issue. Bots have the ability to observe things, and learn things, and remember them forever. There's something called an "observer effect": when you watch people who know they're being watched, the first thing they do, very often, is change their behavior. So, if you build models and systems to detect behavior based on the past, those systems may be detecting behavior as it no longer occurs, because the people being watched have changed their behavior ─ those kinds of things. Security and privacy play into this. Do I know I'm being watched? If I do, do I behave differently? Do I have the right to opt in or opt out of being 'botted' to, if you will, to coin a word? You know, there are laws being written as we speak, and laws about to be implemented as we speak, that speak to general protection of privacy: the right to be forgotten; what the government may or may not do vis-à-vis business. So, all over the world, these sorts of laws are being written around data ─ what data can be transported across borders… Think about this: what happens if you don't transport it across the borders and you make a really stupid decision because you couldn't see all the data?

So, the answer to everything isn’t always as simple as, “Well everything is private and everything needs to be contained and nobody gets to see anything.” That might be a way of looking at it, but it might be somewhat naive with the amount of data that’s being created now. So, this is happening all over the world, we can’t ignore it, and we’re certainly nowhere near done figuring this out.

Michael Krigsman: Yeah, it seems we have barely scratched the surface of it.

Anthony Scriffignano: Might be a good topic for another CXOTalk.

Michael Krigsman: Yeah, actually I had Michelle Dennedy who’s the Chief Privacy Officer of Cisco as a guest on CXOTalk, and I don’t think we really spoke about AI too much, but you could have endless discussion about this. It’s a very complicated topic.

Anthony Scriffignano: You could probably get two bots to talk about it too!

Michael Krigsman: [Laughter] That might be a lot of fun! So, before we go, two last questions, and I'll ask you to answer kind of quickly, just because we're running out of time. And again, these are conversations that we could spend all day on each of, but what advice do you have, first off, for businesspeople ─ for senior executives who may be listening and are saying, "What do I do about all of this stuff?"

Anthony Scriffignano: So I would say three things: 1) Be humble. Be realistic. There’s no magic button. There’s no secret open-source code you’re going to pull in and solve all your problems. Be humble about what can and can’t be solved with approaches like this. 2) Recognize the fact that doing nothing is actually a choice. You can’t just do nothing because you don’t know exactly what to do because that opportunity cost could be very, very, very serious. And 3) Continuous learning. Continuous learning of your existing organization. The people in the organization ─ the skills that got them there are not the skills that are going to take them forward into the future. They’re just table stakes. And the people that you’re hiring, what skills do you need to fill in those gaps?

Michael Krigsman: Ok. And then, finally, what advice do you have on the public policy side? So we’ve just been talking about the private sector. What about public policy and regulators? What advice do you have for them regarding all of these AI technologies and these deep moral and philosophical implications?

Anthony Scriffignano: I think we should regulate behaviors and not try to overregulate specific technologies because those technologies and specific types of data change so quickly. So, we should look at the behaviors. I think we should also look at the unintended impact of over-regulating some of these things because there’s a lot of good that can come from data being used in the right ways, and technology being used in the right way. So, always consider the balance between the impact of over-regulation, and not having enough regulation. And then the last thing I would say is, from a public policy standpoint, maybe we can use a little AI to figure out what’s working and what’s not working. And not just sort of, you know, speak our way into the truth.

Michael Krigsman: You know, it’s funny. When I talk with regulators, some of the more enlightened ones, that’s one of the things that comes up, which is “What about the role of AI in development …

Anthony Scriffignano: … of policy!

Michael Krigsman: Exactly. Well, we are out of time. We have been talking with Anthony Scriffignano, who is the Chief Data Scientist of Dun & Bradstreet, and what an action-packed 45 minutes this has been! You’ve been watching Episode #204 of CXOTalk. It’s going to be on-demand for the replay immediately when we’re done, and if you are interested in the foundations of AI, and the implications, I urge you to watch it. Anthony Scriffignano, thanks again for being with us!

Anthony Scriffignano: Thank you very much for having me. It was great.

Michael Krigsman: Everybody, thank you, and I also want to give a huge thank-you to Livestream, who provides our video infrastructure. They're flawless ─ it just works ─ and we're really grateful for that. You know, funny thing about live video like we do: live video is an exercise that's always almost ready to fail, because there are so many pieces. And, with Livestream, it just always works, and so we really, really appreciate that. Thank you everybody; we'll have another show next week. Bye-bye!


Author: Michael Krigsman

Episode ID: 395