Description
Stefano Puntoni is the co-director of AI at Wharton, a professor of marketing, and a world-renowned behavioral scientist at The Wharton School. His research examines the psychology of artificial intelligence to understand consumer reactions and adoption patterns. Stefano teaches about technology, brands, consumers, and decision-making. He is passionate about research, teaching, and connecting with others to share ideas and learn.
Chapters
AI adoption and consumer reactions (0:00)
AI's impact on behavioral science and future opportunities (2:45)
Adoption, innovation, and potential threats to personal identity (4:35)
AI's impact on personal identity (10:14)
AI's impact on jobs and identity (15:57)
Marketing AI products (23:29)
AI's impact on jobs and industries (26:45)
Limitations and potential dangers (31:39)
Impact on content creation and user adoption (37:12)
Guest[s]
Dr. Stefano Puntoni
Roles:
Co-director of AI at Wharton
Organization:
Wharton School of Business
Host[s]
Maxwell Matson
Roles:
Head of Growth
Organization:
PlayerZero
Transcript
Stefano Puntoni 0:00
If you take the way that engineers would think: they see a task out there, and then they build an AI system. So imagine, you know, a supervised learning system where you have labeled data and you can make predictions based on prior user behavior or something. I think self-driving cars are a good example. How do you come up with a self-driving car? Well, you're not going to script all the rules of everything that might happen on the road. You're not doing that; we tried, it was impossible, it's too complex. What we do instead is that we follow drivers as they drive, and we record all the data from the driver's behavior, and all the sensors and inputs in the car, from the location to the LiDAR or the cameras or whatever it is the machine is equipped with, and you store all that. And then what do you do with that? You build a predictive model that takes all those inputs and tries to predict the driver's behavior. And eventually it will learn to do what a driver would do. So a lot of AI systems are really learning to imitate the human. That's the way a lot of this stuff works. There are different types of AI, reinforcement learning, there's a lot of different types of AI, but a lot of them are of this variety. And so they are naturally tailored to this idea of replacing the human: they're copying the humans, so they can take over the task. And I think that's very important and useful. I can't wait for self-driving cars to really be everywhere, as an example. But it's also a very limited way of thinking about it. In many situations, what we ought to be thinking more about is what the AI can do to make us more productive, more successful.

Max Matson 1:44
Hey there, everyone. Welcome back to Future Product. Today my guest is Stefano Puntoni, Wharton professor and behavioral scientist researching the psychology of artificial intelligence to understand consumer reactions and adoption patterns. Stefano, I'm so excited to have you on. Would you mind telling us a bit about your background just so we can get started?

Stefano Puntoni
Hi, Max, thank you for having me. It's a pleasure to be here. I am a behavioral scientist and a professor of marketing at the Wharton School, part of the University of Pennsylvania in Philadelphia. Here I do research and teach on the topics of marketing, branding, and mostly technology and artificial intelligence. I am also the co-director of a new center we're just starting, called AI at Wharton, which is meant to bring all the work around AI across the Wharton School together on one platform. So I'm excited about that.

Max Matson
Fantastic, fantastic. Would you mind talking a little bit about what got you into artificial intelligence, what motivated you to research this problem?

Stefano Puntoni 2:45
Happy to. I'm originally from Italy; I did my PhD in the UK. Then I landed a job as an assistant professor, already quite a while ago, in the Netherlands, in a city called Rotterdam. There I was for a while, and around 2014 I was promoted to full professor there. In the Netherlands that's a big deal: you give a public lecture, that lecture gets printed as a book, and you have all the professors wearing togas listening to you. Typically people take a moment of reflection when they give this talk, and they think about what they've done, what ties together the research they've done, and what its impact has been. And so I did that.
And at that time, and I think it's probably quite common, I don't know if it's also kind of like a midlife crisis kind of thing, but I started feeling somewhat tired, maybe fed up with doing the same thing again, and wondering: what's next? What can I do that will be exciting and important? And just around that time happened to be when the first really striking breakthroughs were coming about in AI. This is when deep learning was really moving out of the labs into the real world. We had the first test drives of self-driving cars at Google, we had Siri and IBM Watson and things like that. And I was looking at this and thinking, wow, we're just starting; this is going to get better very quickly, it's going to be disseminated across many industries and products, and it's going to make a difference to a lot of things that we care about. And yet, if I looked around in behavioral science, almost nobody was working on this topic. So I thought it was a huge opportunity. It was important for the future of my children and my students, and I felt this was something we could do a lot of interesting work on. So that's how I got into it. And, you know, ten years later I'm more excited than ever.

Max Matson 4:41
Yeah, absolutely. I'm really grateful for people like you who dedicate their research to this topic, right? Because I think up until maybe even the last year or so, the popular consciousness has understood that AI is a part of technology but hasn't really seen the leaps and bounds that this technology has made in those ten years that you were talking about. I can only imagine.

Stefano Puntoni 5:08
Yeah. I mean, that's something I find frustrating. A lot of the conversations around AI in the popular press sound more like killer robots, you know, Terminator. So I think a lot of it is a bit hysterical, maybe not really looking at the things that matter. There's lots of very good commentary too, of course, but certainly there's room for improvement. And, you know, just look at the adoption of the first generative AI engines. If you look at ChatGPT, obviously the major one, I mean, we've never seen anything like that. A lot of people are aware of this, but there has never been another product that was adopted as rapidly as ChatGPT. It never happened. I think it took about seven weeks to reach 100 million users. That's crazy, think about it.

Max Matson 6:07
Yeah, absolutely unprecedented. Can I ask, just from your kind of expert perspective, what do you attribute ChatGPT being the breakout tool to?

Stefano Puntoni 6:20
Well, I mean, it works, no? OpenAI, I think they've been amazing. What they've done is first create a fantastic product, in many ways. GPT-3, then 3.5, and now GPT-4 is in many ways still the best at doing a lot of things. But also, I think they managed to build guardrails around it that made it safe, or as safe as it could reasonably be expected to be to put out there, and then they scaled it incredibly fast.
And I think also the business model of having it free for users meant that they could collect an enormous amount of data about what people do with this thing. And, yeah, that's proven a very good move. Now, I actually ran a session with Google executives a while ago, and we had a very interesting conversation about that, because the transformer architecture that ChatGPT is built on is actually a Google invention. It was Google that came up with transformer models, and in many ways Google is really at the leading edge of a lot of these developments, of course. But it's interesting to realize that Google is not OpenAI. OpenAI is a smaller, nimbler operator; Google is a massive organization with enormous scrutiny and responsibility, and also, obviously, on the more negative side, more bureaucratic, harder to move, maybe a little more conservative than a startup, basically. And so I think that combination of elements meant that it couldn't have been Google, actually; I think it had to be somewhere like OpenAI making a move like that. And then, of course, that opened the door for a lot of other players to think about what this could do. And I think Microsoft has obviously been the one that has benefited the most, I would say. I expect we'll see the GPT models being rolled out through a lot of different Microsoft products, and think of how many users you can reach just in Office.

Max Matson 8:40
Yeah, it's quite a built-in advantage, right? I think you're spot on with the incumbent versus young, nimble, small companies, right? That ability to do things that at Google would probably go through a very long and lengthy bureaucratic review; if you're OpenAI, that is a massive advantage.

Stefano Puntoni 9:00
Even beyond that, it's simply that the brands are not alike. Google has a position where they could not take a chance the way that maybe OpenAI could. For example, before this partnership with OpenAI, what Microsoft was famous for in the context of chatbots was that infamous Tay bot that within 24 hours became super racist and had to be pulled. So you can see how dangerous this could be. Kudos to OpenAI, they did it in a way that worked. And now there are others; I think Anthropic is doing a good job, and obviously Google. So this is a great space.

Max Matson 9:49
Absolutely, absolutely. So just to kind of narrow down into your research a little bit: one of the major things that you discuss and research is the possibility of automation becoming somewhat of a threat to personal identity, right? We talked a little bit about it actually before recording, but I would love to talk about how you arrived at that topic, and how you see AI today impacting personal identity.

Stefano Puntoni 10:20
Yeah. And, you know, it may be surprising, but I study AI while not being a computer scientist or an engineer. I don't build AI systems; I study the impact of AI systems. In fact, my training before my PhD was in statistics, and then I did a PhD in marketing and decision-making.
And so I'm kind of in between, and I think this is a space that excites me because it's a combination of economics, psychology, and statistics, where you can try to understand how these statistical techniques are built and how they work, and then, as a result of that, the kind of impact they can have in conjunction with human behavior and psychology. I don't think you're able to understand the impact of these platforms without understanding psychology too, right? You need both. And so I have this sort of in-between position that I think gives me a useful vantage point to try to unravel some of the implications of the recent developments and improvements.

With regard to identity, this is an area where I had done some work already, prior to jumping into the technology topic. And I find it incredibly fascinating, because who we are, who we want to be, who we think we are in the eyes of others, all questions like that actually impact a lot of our behaviors, and that's at the core of marketing research, right? If you think about how brands differentiate, whether it's Nike or Harley-Davidson or Apple or any other brand-based company you can think of, they to a large extent differentiate by giving people meaning in some way. They affect the way people think about the product way beyond the functional properties and capabilities of the product they sell. You don't buy a Nike sweater with a big swoosh logo because it's functionally superior to the competition; that's not the reason. So there's other stuff going on. And in many industries where there's a lot of product parity, where functional differentiation is very hard to achieve and often short-lived because competitors can do the same, in these very competitive environments it's often these kinds of associations and brand power, which come from understanding and helping people fulfill some psychological need, that make the margins. So both personally and as a researcher, a decision scientist, a marketing professor, I think it's a cool topic.

And I came to the topic of technology actually through that. The first projects I did were all around identity and technology in this space. At that time, to give some background, I actually wanted to write a paper about how AI impacts consumer identity. And then I thought, this was about ten years ago, that it was too futuristic, too far-fetched. There were not a lot of applications out there where the consumer could directly interact with AI. And I thought, if I send this to an academic journal for publication, they'll tell me it's just not there, it's too far out. So then I thought, okay, let me think about how to frame it more broadly, which I think is generally a good thing. And I realized: actually, this is not about AI, this is about automation. So this will work for a very high-tech solution as much as it works for a coffee machine or a lot of other tools that we already use in our homes. So I thought, let me write a paper about automation.
Now, the academic cycle is so long, it takes so long to publish, because they ask you to revise and revise and you go through all of that. So eventually, by the time the paper got published, everybody was talking only about AI anyway.

But the idea is simple. When you think about tasks that you perform as a consumer, in any of the activities you may engage in on an everyday basis, some of these activities you perform for purely instrumental reasons: you want to get the job done. A vacuum cleaner, typically, you just use because you want to clean the floor. But there are a lot of activities that you perform partly because that's who you are. They have a symbolic meaning to you; they signify something about you, to yourself or to other people. So you may have a hobby: maybe you like fishing, or photography, or cooking, or whatever. And when you perform a task, at least to some extent, because that's who you are, then automation can become a threat to you, because now it's replacing you in tasks that are actually meaningful to you beyond the mere chore, beyond the instrumental.

What we show in that paper is that the reason that threat emerges is that once you have the machine performing some of these identity-relevant tasks, let's call them, you're no longer able, as a consumer, to attribute the outcome of consumption to yourself. So imagine that you're into baking, and now you have a bread-baking machine, and you make the bread that way. The bread comes out, but now it's not your bread, because the machine has made it. And the inability to say "this is me, this is my labor" is threatening to people who identify with the category, baking in this particular case.

So the key to understanding which automation is going to be threatening or not is to understand, first, why do people engage in a task? To what extent does it have symbolic meaning or not? If it doesn't, then people probably just want ease and speed and things like that, so automation is great; we love automation, right, in most cases. So that's the first question: why do they do it? And second, understand the role of the particular task you're thinking of automating in the context of that identity. For example, going back to baking: in baking you have largely two tasks. You have the task of preparing the dough, which requires quite a lot of labor; you have to knead the dough, and it's a repetitive task. It takes some labor but not a lot of skill, a bit of skill, of course, but not so much. I mean, I don't know how to bake, but I can knead. But then you have the cognitive task of deciding the ingredients and the temperature and the timing and all of that. That's more difficult, and that's more diagnostic for someone who's into baking, because if you don't know how to bake, you won't be able to do it. So if you think about these two activities, one more physical and one more cognitive, you have two potential machines that automate either one: one is like one of those, let's say, KitchenAid dough-kneading machines, and the other is something like a bread-baking machine with a big display that asks what kind of bread you want and then takes care of the rest.
Well, the second one is going to be threatening to someone who is into baking for identity reasons, but the first one might not be. In fact, I know a lot of people who are into baking love those kneading machines, because it saves them the physical labor on the dough, and it doesn't threaten their sense of self, because that task does not distinguish someone who really knows how to bake from someone who doesn't. And that differential impact then spills over into this idea of attributing the outcome to yourself.

So these are basically the two questions. Imagine that you're a company investing in innovation, and you want to know which attributes or tasks you should try to automate, or earmark for further automation. You ought to first understand what your users are doing and why they're performing particular activities. And second, even for those users who are potentially identity-motivated, you have to understand what role that specific activity plays in their identity; it may be threatened by automation, or it may not. This just requires a bit of thinking and some data; essentially, it requires insight into consumer behavior. And the thing is, if you don't do that, the risk you're running is this: your most advanced product, the latest tech, the most automated product, is also the most expensive product in your range, the fanciest product you have. And you've made it unattractive to whom? To the segment of consumers who are most involved in the category, who are most in love with the product, who are most willing to pay for high-end products. You've put all your effort into something really high-end, and you're alienating exactly the people who buy that kind of thing. So the danger is that the product will flop in the market. And there are plenty of examples; I think our effect is probably playing a role in a lot of those failures.

Max Matson 19:43
Absolutely. It kind of ties into something else that you've talked about, which is human-or-AI versus human-and-AI, right? I've seen it play out with startups already: the ones that claim to automate a role I've seen struggle to get traction, while the ones that augment a role, or market themselves as augmenting or supporting a role, taking care of the busywork or more menial tasks within that role, succeed a little bit more. Would you say that's that paradigm of human-or-AI versus human-and-AI?

Stefano Puntoni 20:17
I think a lot of the conversation around AI, as I mentioned at the beginning, has a very panicky feeling to it. You read some news, you think everybody's going to be unemployed and there's going to be nothing left for us to do. And you can understand where that comes from, because a lot of conversations are about AI now doing this, AI now doing that. And when they do this and do that, they do it instead of people. We're often talking about things that we were doing, not things we never did. Some of them are things we didn't want to do, like the robotic vacuum cleaner; nobody complains about that one. But lots of other things, especially in the professional domain, where people's livelihoods are at stake, can feel threatening.
So if you take the way that engineers would think: they see a task out there, and then they build an AI system. So imagine a supervised learning system where you have labeled data and you can make predictions based on prior user behavior or something. I think self-driving cars are a good example. How do you come up with a self-driving car? Well, you're not going to script all the rules of everything that might happen on the road; we tried, it was impossible, it's too complex. What we do instead is follow drivers as they drive, and record all the data from the driver's behavior and all the sensors and inputs in the car, from the location to the LiDAR or the cameras or whatever the machine is equipped with, and you store all that. And then what do you do with that? You build a predictive model that takes all those inputs and tries to predict the driver's behavior. Eventually, it will learn to do what a driver would do. So a lot of AI systems are really learning to imitate the human; that's the way a lot of this stuff works. There are different types of AI, reinforcement learning and so on, but a lot of them are of this variety. And so they are naturally tailored to this idea of replacing the human: they're copying the humans, so they can take over the task. And I think that's very important and useful, and I can't wait for self-driving cars to really be everywhere. But it's also a very limited way of thinking about it. In many situations, what we ought to be thinking more about is what AI can do to make us more productive, more successful. It should be about human flourishing, not about human replacement. And there's a lot of potential we can tap into there. I think we need the conversation to also include behavioral scientists, psychologists, business experts, because it's about domain expertise. If you want to devise an AI system that enhances human skills, you can't develop it in isolation from the human. You can't just ask, okay, what is the human doing, and do it instead; that's the human-or-AI mindset, as I call it. The human-and-AI mindset requires you to be out there, figure out what people do, what they can do, what they could do better with the support of AI, and then design that system. So it cannot be just an engineering problem. I mean, ultimately it is an engineering problem, but it cannot be devised purely from an engineering angle; you need that domain expertise, that behavioral science.

Max Matson 23:41
Right, makes total sense. So let's say that you've done that work, and you've created a product that augments a task. If you're the marketer tasked with messaging this product, how do you effectively communicate to consumers that this is not a replacement?

Stefano Puntoni 24:00
I mean, if it isn't a replacement, then in that sense it's not any different from any other product, in a way, where you have to explain to the consumer what benefit they stand to gain, and then make a case for the value, the cost-benefit trade-off, and say: what is it that I offer you that you cannot get somewhere else, that is going to make a difference to you, that's going to make you more successful at completing this task?
So it could be about productivity: it makes me faster, so I can do what I used to do in half the time, which means I can either go fishing or just do twice as much work; either way, it's good. Or I can do what I'm doing in the same amount of time and just do it a little better. So now instead of having a quality score of 50 on my output, I have a higher score, and that'd be great. So there's this aspect of efficiency and this aspect of effectiveness: doing things quicker and more efficiently, or doing things better and more effectively. And those are quite different, even though the same technology can actually give you both. The first data coming out of the labs now, with this explosion of interest around generative AI, often show very large improvements in productivity: people complete a task with a productivity gain of 20%, 40%, 50%, up to almost half the time. And at the same time, they also often find results on quality; maybe they do it in half the time at 20% better quality. So as a marketer, you have to think about how to frame this product. Is it about giving people time? Is it about giving people powers? You're giving them power in some way in either case, but how do you want to market it? So I can see two different angles: companies may have products that can achieve both, but the marketing strategy may need to focus on one. And for that, you have to understand the customer. What is it that I'm going to promise them? What can I deliver to them, and what's going to differentiate it from the competition? Maybe there are other ways to save time, but this one is cheaper; or maybe there are other ways to save time, but they mean much worse output, whereas I can give you more free time and better output. Whatever your claim of advantage, you need to understand the consumer to be able to make it. For the same technology, many different positionings may be viable.

Max Matson 26:42
It makes a ton of sense. I mean, at the end of the day, you're always selling to humans, right? So the core problem remains the same.

Stefano Puntoni 26:51
The core of marketing is the same. And ultimately, the basics hold: everything is changing very fast, but the one thing that is not changing very fast is our brain. We've evolved over a very long period of time, so our basic psychological processes stay the same, even though the context is very different.

Max Matson 27:11
Right, right. Kind of extending that: you mentioned ChatGPT being this revolutionary tool in that it got so much traction so quickly, right? That being the case, obviously these tools are going to continue evolving, continue growing in their scope. How do you see the next frontier of human labor, when it comes to the roles and jobs that might emerge from this tech?

Stefano Puntoni 27:38
I mean, that's a very difficult question.

Max Matson 27:40
Yes, yes, a very open-ended question.
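[Editor's note: for readers who want to see the "learning to imitate the human" setup Stefano described earlier in concrete terms, here is a minimal sketch in Python. The feature names and data are invented for illustration; production systems train far larger models on logged sensor streams, but the supervised-learning shape is the same.]

```python
# Hypothetical illustration of imitation learning ("behavioral cloning"):
# fit a model that maps recorded sensor inputs to the action the human
# driver actually took, then reuse it to predict what a driver would do.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Invented stand-ins for logged inputs: [speed, gap_to_car_ahead, lane_offset]
sensors = rng.uniform(0.0, 1.0, size=(1000, 3))

# Invented stand-in for the logged human action: steering angle.
# In real data this is simply whatever the driver did at that moment.
steering = 0.8 * sensors[:, 2] - 0.1 * sensors[:, 0] + rng.normal(0, 0.01, 1000)

model = LinearRegression().fit(sensors, steering)

# The model now "does what a driver would do" for new inputs:
# it imitates the human; it does not reason about driving.
print(model.predict([[0.5, 0.3, 0.1]]))
```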
Stefano Puntoni 27:43
The first thing I would say is that it's important to realize this technology is going to make a massive difference to a lot of different activities and jobs even if it doesn't get any better. It doesn't have to get better to be disruptive in lots of contexts; we're just figuring out what we can do with it. It will take decades, presumably, before we fully understand how to reorient workflows, tasks, job profiles, and train people for it. The investments required, the change in corporate culture, the change in organizational processes, all of that is going to take a long time, probably much longer than it takes to actually develop the technology. So we'll have to see. But in terms of what kinds of jobs, I think it's very hard to be concrete. Of course, if I knew, I probably shouldn't be here talking to you; I should go do it. Purely based on the architecture of our AI systems, you can maybe speculate on some of the capabilities they may struggle to acquire, though that doesn't mean they won't acquire them. Nobody can promise what AI is going to look like in five or ten years, honestly even next year; we've seen so many surprises. I had these slides in my course where I showed what AI cannot do, and I tell you, I had to revise them quite quickly. So I don't bet on that. But there are certain things, based on the nature of these systems, that are likely to remain relative weak points.

For example, generative AI models, diffusion models, transformer architectures: all of those models are predicting the next word in the sentence based on a corpus of data, and the corpus of data is truly humongous. What they do is truly incredible to me; every day is like being in a sci-fi movie, it's amazing. But they're not optimized for truth. That's not what they're programmed to do, not what they can do well. So they will hallucinate. They will get better; there will be guardrails around them, there will be further feedback systems and modules added on in order to improve. For example, I can imagine Microsoft investing very heavily, and obviously Google is doing that too, in how to use these chatbots for search. If you go to a search engine, it's not because you just want to see what it says; you go there because you want facts, you want to know the truth to some extent, and that's important to you. So these systems, if they want to be deployed in that industry, which is a many-billions-a-year industry, will have to improve. Every 1% of market share that Bing takes from Google is worth a ton of money, so they'll pour a ton of money into it, and the systems will get better. But they're still not designed for truth. So I would say anything that has to do with veracity, either because that's a goal of the consumer or because the stakes are high, will need humans. I saw an article I posted on LinkedIn just the other week, where there were apparently AI-generated books on sale on Amazon, and some of them were about things like mushroom picking, foraging, where I don't know that you want an AI telling you what you can eat.
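[Editor's note: a toy sketch of the point above, using only the Python standard library. A model in this family picks the statistically likely next word given its training corpus; nothing in that objective checks whether the output is true, which is why fluent "hallucinations" happen. Real models are vastly more sophisticated, but the objective has the same shape.]

```python
# A tiny next-word predictor trained on a toy corpus.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    # Return the most frequent continuation seen in training.
    # Note: frequency, not truth, drives the choice.
    return follows[prev].most_common(1)[0][0]

word, sentence = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    sentence.append(word)

# Prints a fluent-looking sequence ("the cat sat on the cat"):
# a plausible continuation, not a verified fact.
print(" ".join(sentence))
```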
So there are many situations where you will need human gatekeepers; the human in the loop is going to remain important in a lot of these contexts, whether it's medical AI or anything to do with safety standards. Okay, that's one.

The other one is this: AI can make predictions based on the past; it's much harder to make predictions about how the future will be when you change something in the present. Meaning, these systems are not built for counterfactual reasoning. But we can do a lot of that. In our minds, we can imagine how the world would be if today we don't do this but do that instead. What would happen if, instead of going home to my family tonight, I took a plane and flew to South America? I can have some pretty good ideas of what some of the reactions might be. With counterfactual reasoning, you can put yourself in that imaginary world; that's the power of our imagination. AI can help, and it will get better at those tasks too, I'm sure, but it's still not a causal machine; it doesn't think in those terms. If you ask GPT to give you some reasons why something can happen, and it's in the corpus of data, it will be able to interpolate sufficient information to give you a pretty good answer. But if it's something it hasn't seen, and I've seen some papers testing it this way, it does not perform well at all. So I would say humans will still be very important in any task that requires counterfactual reasoning and imagining new worlds, in a way. Of course, AI can already do a lot of things in those domains, but I do think there will still be scope for human labor there.

There will also be a lot of tasks that are actually surprisingly hard to automate: because they involve fine motor skills, because they are not routinized at all, because the data are sparse or of poor quality, because things are changing rapidly and old data are not very suitable for making predictions, and so forth. So there will be a lot of jobs that are just very hard to automate, and at some point it might not be worth it to do it, even though potentially one could; it's just better to have a human do the work.

And then you have the leadership side. AI won't tell you what you should want, right? So your values, your ethics, those stay with you. So I think any job that requires leadership and mentoring. And again, AI can do a lot: a math tutor can be great, and maybe the math tutor on Khan Academy can actually even motivate the learner to learn more, and it will be good at that too. But I think it's going to be hard to mimic the impact that the right words in your ear, or a hand on your shoulder, can have. So those are some speculations, but I wouldn't go out on a limb and make a prediction very far into the future; I think we're talking about the midterm.

Max Matson 34:33
Oh, don't worry, I won't ask you what's going to happen in 2030. I like that you mentioned some of those limitations. We've hinted a little at some of the dangers of AI deployed at scale without precautions, right? It's not necessarily geared for truthfulness; it can't deal with those counterfactuals.
Can you elaborate on some of the safety concerns that are most pressing in this moment, and how we could potentially address them?

Stefano Puntoni 35:03
Some of them we already talked about, like the mushroom case, a literal physical danger. I have a couple of projects, led by an amazing researcher at Harvard called Julian De Freitas, where we've been looking at relational AI. People use chatbots in ways that maybe the designers of the chatbot never intended; maybe people with quite severe mental health issues, for some reason, end up using these chatbots. And then the question is: is the chatbot able to identify a mental health crisis and respond appropriately to it? As chatbots get disseminated through society very fast, there will be more questions of this kind, about the implications for safety and well-being. And I hope that we'll do better with AI than we've done with social media. The way that we let that technology run, and it is also AI technology, being able to predict what is going to capture your eyeballs and keep them there, pandering to a lot of either our worst instincts or our most short-term desires, the track record there has not been great. There are a lot of great things about it too, but I would say the advertising business model in this context was quite dangerous in some ways. So I'm hoping that with AI we'll do better. Will we? We don't know. But I'm afraid that if we get it wrong with AI, we'll pay a big price for it. This technology is going to be so important to so many things that if we don't understand it properly, deploy it responsibly, and regulate it sensibly, the dangers are going to be quite huge.

Max Matson 36:58
Absolutely. You mentioned social media, and I like that you tied that in, because AI does drive the algorithms that determine what you see on social media, right? And those things are geared towards engagement. That being the case, it has obviously become kind of an optimizer for extreme content. When it comes to generative AI, a lot of what I've heard people expressing concerns over is the potential flood of synthetic content onto the internet. Is that anything that you've thought about? Do you have any thoughts regarding the impact that could have?

Stefano Puntoni 37:36
I mean, there's a very fun cartoon by Tom Fishburne where you see two panels. On the left panel, a person says to a colleague, look, I'm sending an email I pretended to write. And on the right, another person tells a colleague, look, I'm reading an email I'm pretending to read. Because they use AI to encode and to decode the message, essentially. There's going to be so much garbage out there that you'll have AI producing the garbage and AI screening out the garbage. So that's not great, right? At that point we might as well not use it at all.
So, yeah, I think you'll see a lot of content, and a lot of it will be bad content. The good thing is that people who can produce really good content will also potentially stand out more easily, because a lot of the things that might be a barrier to finding an audience may be solved through AI. For example, as a non-native English speaker myself, I'm excited about what it can do to facilitate the conversation for people who are not native English speakers: people who don't speak English at all but can now use a translation system, or people who do speak some English, but not as well, who might now be able to write things other people want to read more than they could before. So there are also a lot of great things about it, obviously.

Max Matson 39:04
No, absolutely, absolutely. So, out of respect for your time, I'm going to wrap us up here shortly. But I do want to ask before we leave: what is one thing from your learnings or research, anything you can pull from, that you want to leave us with? Something you find exciting, something you find deeply academically interesting regarding AI? I know that's open-ended.

Stefano Puntoni 39:31
Okay. So when I talk to companies about advanced analytics programs, trying to deploy algorithms in this or that process, these projects fail often, but they never fail for technical reasons. It's always the people. It's not clear why we're doing this, it's not clear who should be doing it, it's not clear what the responsibilities are. There are barriers to adoption: people on the floor feel threatened by it, or they just don't like it, or it takes them more time, or they don't see the reason, or whatever. And they fail to adopt it; sometimes they even boycott it. So the issues tend to be with the users, or the lack of users, not so much with whether the technique works. Typically, if you have research documenting certain capabilities of an algorithm, for example that this algorithm can reduce the mean squared error of a prediction, those things are usually correct; they will work out again. The problem is: will the person decide to switch from the old algorithm to the new algorithm? That's the big question, I think. So in a way, I just want to call for companies to think about the role of psychology in technology, which is something that we tend to forget, because technology is pushed by engineers and computer scientists, and the person is often not the main concern in some ways. There's a lot of user research, of course, and UX is a big deal nowadays, but I still think there's a lot of room for doing more to understand how our psychology interacts with technology, basically how human intelligence and machine intelligence combine and intersect.

Max Matson 41:27
Absolutely, absolutely. I can't wait to see the research and new learnings that come out of that field, and I'm very glad that you're pushing forward into that unknown space. Stefano, thank you so much for coming on. Where can people find you and follow you?

Stefano Puntoni 41:44
You can find me on LinkedIn, and you can Google my name and find my work and my page. We're designing our AI at Wharton portal right now; we have a temporary version, but we'll have a richer website in a few weeks.
And, you know, reach out on LinkedIn.

Max Matson 42:01
Perfect, thank you so much, Stefano.

Stefano Puntoni 42:03
Thank you for having me.