Seth Earley 0:00
I had written an article several years ago called "There's No AI Without IA": there's no artificial intelligence without information architecture. It basically called out the need to structure information and to have good data to feed your AI algorithms. Now, the large language models are getting better at that, but you still need specialization. And then you need the terminology of the organization, and that is the enterprise ontology. What that does is it lets you specialize these language models according to your intellectual property and your organizing principles.
Max Matson 0:37 - Introduction
Welcome to the Future of Product podcast, where I, Max Matson, interview founders and product leaders at the most exciting AI startups to give you an exclusive glimpse into the workflows, philosophies, and product journeys that are shaping the current and future AI landscape. This week, I sit down with a well-known name in the realm of AI: Seth Earley, CEO of Earley Information Sciences, speaker and information architecture expert, and author of The AI-Powered Enterprise. Learn more about how he sees the current AI landscape, what holds enterprises back from adopting AI, and how investing in information architecture can make the transition easier. With all that said, let's dive right in. Hey there, everybody. Welcome to another Future of Product. Today my guest is Seth Earley. He's the CEO at Earley Information Sciences, author of The AI-Powered Enterprise, and highly recognized as one of the top thinkers in the space. Seth, would you mind introducing yourself?
Seth Earley 1:25
Thank you so much for having me, Max, I really appreciate being a guest here. So yes, I've been in the information management space for over 25 years. The work that we've done has always been about information architecture, information management, and information strategy for Fortune 1000 companies, although we do go to smaller companies as well sometimes. A lot of the work that we do, at the end of the day, is about making information more usable, more valuable, more findable, and reducing the cognitive load on the human. And that's what AI is doing these days.
Seth Earley 2:00
And we'll talk about my work related to AI. But we have about a 50-person professional services firm, and we help organizations in product information management and product data management: large product catalogs, e-commerce sites. We help them with content operations and content optimization that supports personalization and better customer engagement. We work in knowledge architecture and knowledge engineering, which is especially important for cognitive AI, and especially with large language models, where there's a lot of work that needs to be done and a lot of misconceptions I hope to talk about today. And then we really help with the customer journey and customer analytics and customer metrics, measuring each stage of that journey, so that when you do something, when you make a change, when you make an investment, when you do some type of intervention, you know that it's providing value, you know that it's providing a return on investment, you're impacting a metric.
And we have a whole metrics-driven governance playbook that we bring to organizations so that they can justify these things, measure them, show good executive support, get stakeholder attention, and really show them moving the needle. So that's a little bit about me and my company. And of course, I wrote The AI-Powered Enterprise, which is about what needs to happen in order to be successful with artificial intelligence projects. One of the folks over at MetLife, a guy named Peter Johnson, who runs all of their AI programs and has been in AI for 40 years, through the AI winter, said this is a practical guide to getting real business value and separating the noise from the hype. That was a great accolade, and we've gotten some great feedback; the book won an award. And I had written an article several years ago called "There's No AI Without IA": there's no artificial intelligence without information architecture. It basically called out the need to structure information and to have good data to feed your AI algorithms. When Ginni Rometty was CEO of IBM, she actually used that phrase at the World Economic Forum in Davos, Switzerland, when she was asked why AI has had some difficulty with adoption in many businesses. She said, "There's a funny saying in our industry: there's no AI without IA," and she went on to explain it. I originated that phrase, that catchphrase, and it really does encapsulate the essence of what needs to be done when it comes to AI. The algorithm is important, but the data is almost more important.
Max Matson 4:45
Right, you absolutely have to have good input to get good output, right? Yeah. So you mentioned your book, The AI-Powered Enterprise. In it, you emphasize the importance of ontologies, right? Would you mind explaining, for those who aren't familiar with ontologies, what they are?
Seth Earley 5:02
Sure. So many people are familiar with taxonomies, right? A taxonomy is a hierarchy: a list of values with parent-child, whole-part relationships. You know, Boston is in Massachusetts, and Massachusetts is in the United States. So that's a hierarchy. We have lots of different hierarchies: you have tools, and then power tools, and then drills, right? That's a hierarchy. And then there's all sorts of attributes around that. But the point is that when we build hierarchies in the organization, there's no one single grand galactic uber-taxonomy, right? There are multiple taxonomies: around products, around services, around solutions, around processes, around interests, around customer types, around regions. Everything that you can name in the organization will have a set of terms that describes that entity, right?
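The parent-child hierarchies Seth describes can be sketched in a few lines of code. This is an illustrative sketch, not from the conversation; the term names come from his examples, and the data structure is an assumption for demonstration:

```python
# A minimal taxonomy: each term points to its parent (parent-child hierarchy).
# Two independent hierarchies live side by side, as Seth describes.
taxonomy = {
    "United States": None,            # root of the places hierarchy
    "Massachusetts": "United States",
    "Boston": "Massachusetts",
    "Tools": None,                    # root of the products hierarchy
    "Power Tools": "Tools",
    "Drills": "Power Tools",
}

def ancestors(term):
    """Walk up the hierarchy from a term to its root."""
    path = []
    while term is not None:
        path.append(term)
        term = taxonomy[term]
    return path

print(ancestors("Boston"))  # ['Boston', 'Massachusetts', 'United States']
```

The point of the sketch: each hierarchy is just a chain of parent links, and an organization holds many such chains, one per entity type.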
Seth Earley 6:00
So when you put all of those together, you have all of the different entities. For a life sciences company, it might be drugs, indications, diseases, biochemical targets, mechanisms of action, generic compounds, et cetera. When you put those together and you start relating them, so you say here is the mechanism of action for this drug target, or here are the solutions to these problems, or here are the services that go with these products, you're building what are called associative relationships. And when you build all those potential relationships between the taxonomies, you're building what's called an ontology. An ontology can be considered the knowledge scaffolding of the organization. It allows you to do things like build knowledge graphs: if you have an ontology, which is the framework, and then you add the data to that, or use it to access the data, now you have a knowledge graph. And the knowledge graph allows you to access information in lots of different ways. Think about IMDb, the movie database, right? You have all the movies, and then the directors and the actors and all the different people involved, in their different roles, and the awards. And so you can go from one movie, look at the director, and ask what other movies this director directed, and then you can ask what actors are in this movie.
And it's the way we play Six Degrees of Kevin Bacon, right? We need to find the movies these two actors have been in over time that connect them together. So that is an ontology and a knowledge graph. The knowledge graph lets you traverse structured and unstructured information. It allows you to do things like build search-based applications. It allows you to understand the what and the why: the what is the structured data, the why is the unstructured information. And it allows you to build out cognitive assistants and virtual assistants that are powered by the ontology, because the ontology can also hold other elements of a virtual assistant; it can take, say, dialogue snippets and so on, but it also will manage all of the other metadata and descriptors that you need in order to access information for a virtual assistant. So for example, we now have ChatGPT and large language models. Well, a large language model is, in some ways, in some aspects, an ontology, right? Different language models have different incarnations, but you use a generalized language model to understand the English language, or other languages: concepts, terminology, relationships. Then you use a specialized language model for an industry that has the specific terminology of that industry, because life sciences will be different from the auto industry or insurance. Now, the large language models are getting better at that, but you still need specialization. And then you need the terminology of the organization, and that is the enterprise ontology.
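The IMDb-style traversal Seth describes can be sketched as typed edges between entities from different taxonomies. This is an illustrative sketch only; the titles and names are real-world examples chosen for the Kevin Bacon theme, but the data and structure are assumptions, not from the conversation:

```python
# A tiny knowledge graph: typed edges relating entities (movies, people).
# Traversal answers questions like "what else did this director direct?"
edges = [
    ("Apollo 13", "directed_by", "Ron Howard"),
    ("Apollo 13", "stars", "Kevin Bacon"),
    ("Apollo 13", "stars", "Tom Hanks"),
    ("Splash", "directed_by", "Ron Howard"),
    ("Splash", "stars", "Tom Hanks"),
]

def related(entity, relation):
    """Follow edges of one relation type, in both directions."""
    out = [t for s, r, t in edges if s == entity and r == relation]
    out += [s for s, r, t in edges if t == entity and r == relation]
    return out

# "What other movies did this director direct?"
print(related("Ron Howard", "directed_by"))  # ['Apollo 13', 'Splash']

# "What actors are in this movie?"
print(related("Apollo 13", "stars"))         # ['Kevin Bacon', 'Tom Hanks']
```

Six Degrees of Kevin Bacon is then just repeated hops through `related` until two people connect through shared movies.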
And what that does is it lets you specialize these language models according to your intellectual property, your organizing principles, your specialized terminology. When I look at something like ChatGPT, you know, if everybody used that same large language model, that's great for efficiency, but it does not give you competitive advantage, because there's no differentiation. You differentiate based on your knowledge, based on your understanding of customer needs, based on your expertise, based on your ability to solve problems. So to use a large language model like ChatGPT, you would actually have to point it to your knowledge, your knowledge base. And that knowledge has to be enriched with metadata from the ontology. The ontology can be used for customer identity graphs, to understand all the attributes of customers; it can be used for personalization; it can be used to improve search; it can be used to fine-tune, again, these language models. And again, in a ChatGPT generative AI situation, you will have hallucinations if it doesn't have the answer, right? Because it'll make up the answer.
But if you restrict it to a particular knowledge source, it will use that knowledge source, and you can have an audit trail. You can say, I used this; this is how I got this answer. But again, there are some very specific things that need to happen in order for that to work correctly, because you can't just throw any knowledge at it, right? It has to be the right knowledge, it has to be in the right form, and it has to be structured correctly. And when that metadata gets embedded into the content, into what are called embeddings, and that gets ingested into a vector database, that's what ChatGPT or a large language model can use to specifically target and use your knowledge, your competitive advantage, your IP, and not expose it to the world. That's what's critical about that. I know that was a very long answer about ontologies, so let me stop and see what else you have.
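The restrict-and-audit pattern Seth describes is essentially retrieval over your own knowledge base: rank chunks by vector similarity and keep the source IDs as the audit trail. A minimal sketch, assuming toy three-dimensional vectors in place of real model embeddings; the chunk texts, IDs, and vector values are all invented for illustration:

```python
import math

# Toy knowledge base: each chunk carries its text, a source ID for the
# audit trail, and a stand-in embedding vector.
knowledge_base = [
    {"id": "kb-001", "text": "Reset the device by holding the power button.",
     "vec": [0.9, 0.1, 0.0]},
    {"id": "kb-002", "text": "Warranty claims require proof of purchase.",
     "vec": [0.1, 0.9, 0.2]},
]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, k=1):
    """Return the k most similar chunks; their IDs form the audit trail."""
    ranked = sorted(knowledge_base,
                    key=lambda c: cosine(query_vec, c["vec"]),
                    reverse=True)
    return ranked[:k]

# A query about resetting a device (vector invented to sit near kb-001).
hits = retrieve([0.8, 0.2, 0.1])
print(hits[0]["id"])  # kb-001  <- "this is how I got this answer"
```

The model is then prompted to answer only from the retrieved chunk, which is what keeps the answer grounded and traceable to `kb-001`.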
Max Matson 11:20
That's a great answer. It actually ties into something I was reading today, which is that somebody from Google came out and said, basically, there's no moat around this technology; it's already being open-sourced. This is something that's going to become a utility. Everybody's going to be using it. So it sounds like ontologies are the scaffolding you can use to basically build a moat around it, right? Because the technology itself is not novel anymore. In your opinion, what are some of the most critical aspects of ontologies that companies should focus on when they're trying to integrate AI?
Seth Earley 11:58
That's a great question. The first thing to do is start with a specific set of use cases; this is a bottom-up process, right? It can lead to master data, but master data projects that start by trying to boil the ocean and fix the world usually fail. Instead, we start with process analysis: what is the process that we need to impact or improve? When you look at most processes that are important to the organization, it's really things that are either enhancing the customer experience, generating revenue, or improving operations in some way, improving operational efficiency. So what we need to do is understand that process, right? Because AI is not going to solve the whole problem. AI is going to make an intervention at a specific step of a process. That's what people need to understand. It's a very granular type of approach, right?
And what you'll find is, I was chairing the AI Accelerator Institute conference in San Jose last week, and one of the presenters was from a video conferencing company. What was so interesting is, you don't think of a lot of AI in there, but they use machine learning in very specific parts of the process, in a way that, combined, transformed the entire experience. It wasn't a big picture like, oh, AI is going to do this. It was: machine learning is going to help target the camera at a particular speaker, it's going to help change the way the microphone array picks up sound, it's going to change the way extraneous noise gets filtered out. It's going to do a lot of little things. So when you think about your process, you have to break it down into granular pieces to say, where is that intervention? And I think the customer journey is a great place to start, right? Mapping the customer journey, your customer lifecycle, and then saying, at each stage of that journey, what information do people need?
What do they need to move them to the next stage of the journey? What we want to do is anticipate that and surface it. So when we look at building an ontology, we want to understand that process, and we want to look at all of the elements that will enable that process, because we don't want to start with, again, the grand galactic uber-ontology, although you can get there. You want to start with a narrow focus. You want to have a broad look at this from a domain model perspective, to say what the other big buckets for the organization are. But then, in our process, we dig very deeply and we get very granular. And so you start building out the information structures, and the terminology, the preferred terms and the non-preferred terms, to power a very specific part of that process. So imagine you're defining your customer. You can define your customer in metadata terms along that journey: who they are, what industry they're from, what their interests are. And then as they go through their journey, their real-time responses to things, their searches, their click-throughs, their navigation, all of that becomes what's called digital body language, right? We're reading their signals. And we get signals from across all of the technologies that the customer touches.
When you have those signals and you interpret them, you have to reconcile them; they have to map together. We need the right ontology to describe that customer, because there are lots and lots of different dimensions, there can be hundreds of dimensions around the customer when you get into that level of granularity. And then you need to define the content and the information that they need at that point in time. So you're defining a content model, you're defining a customer data model, you're defining a knowledge model. And what you're doing is using those signals to decide how to present information. You want to take baseline metrics of that process, say conversion from one stage to another, or downloading a white paper, or doing a product comparison, or putting something in their shopping cart. You want to look at that metric and say, how can we improve it? How can we reduce drop-offs or reduce pogo-sticking? What you're doing is building a baseline, and building the ontology around the information structures that need to support that user. And you're using that to design the user experience and the content that will be surfaced. Then you can use machine learning to optimize pieces of this. You can componentize that content so that you can vary the different elements of messaging: you can have a different hero image, a different call to action, a different value proposition, a different target, right? All of those things become components that can be rearranged, and the ontology drives that, right?
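The componentized content Seth describes, swapping the hero image, call to action, and value proposition per audience, can be sketched as assembly from labeled components. The segments and copy below are invented for illustration; this is not his company's actual system:

```python
# Componentized content: each message element is a separate component
# with a variant per audience segment, so pages can be assembled per visitor.
components = {
    "hero_image": {"engineer": "schematic.png", "executive": "boardroom.png"},
    "call_to_action": {"engineer": "Read the spec", "executive": "Book a demo"},
    "value_prop": {"engineer": "Cuts integration time",
                   "executive": "Raises conversion rates"},
}

def assemble_page(segment):
    """Pick the variant of each component that matches the segment."""
    return {name: variants[segment] for name, variants in components.items()}

page = assemble_page("executive")
print(page["call_to_action"])  # Book a demo
```

In practice the segment would come from the digital-body-language signals described above, and the ontology supplies the shared vocabulary of segments and component types so the pieces can be rearranged consistently.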
So what you're doing is looking at a very specific process, and you're defining the steps of that process, and then the information needs at each step, and then what the baseline metrics are and what you hope to impact. And then you have to design all of the information structures. So building the ontology starts with that specific set of processes. If you had a procurement problem or a supply chain problem, you would do the same thing: you would map out those processes. I like to say you can't automate what you don't understand, right? And you can't automate a mess. So when people say, oh, our process is terrible, the first thing to do is understand the process. We have to map that process, then we need to map the data dependencies in that process, then we need a hypothesis about what intervention will help us improve that efficiency or that effectiveness. And then we build the information structures, with all of the different dimensions, the different facets, the different entities, the different metadata structures. All of those are derived as part of the ontology development, right?
It's information architecture, it's data architecture, it's content architecture, it's customer attribute architecture, but they're done in such a way that they have a cohesiveness, a holistic approach to understanding the information landscape of the organization. So it's a little bit different from the way other types of projects work. And then as you expand to different parts of the organization, or different processes or different applications or platforms, you start with that basic ontology and you expand it. And it's also driven by use cases. So again, if you have a term and you're trying to decide whether you need it or not, you have to have a use case, right? You have to have a purpose. So libraries of use cases are also critical to developing ontologies.
Max Matson 19:16
Interesting, got it. So it's really just about biasing yourself toward what's going to move the needle, right? In order to do that, you have to have a structure, you have to get down into the nitty-gritty and really see where the process is and isn't working, benchmark it, and then have clear figures that tell you whether it has actually improved.
Seth Earley 19:36
Right. You're taking the baseline, you're making an intervention, and you have a hypothesis about, you know, why are people bouncing out? Well, maybe there's not enough information on a product landing page, and they're not clicking through to the product details. Maybe we don't have the right attributes. Maybe we don't have the right navigation structure. Maybe we don't have the right content. Maybe they found what they need and they're calling up to place an order, right? You've got to understand that whole journey, then deconstruct it, and then say, what can we do to intervene?
But the ontology is really the knowledge scaffolding; it's the framework for everything. Because we need to be holistic, we've got to get away from silos, and content embedded in a silo. If we want to build cognitive assistants, or, you know, AI content operations, well, it should just be content operations, right, serving lots of different purposes. There's one company that we work with that handles two million knowledge transactions per day using this approach, using these ontologies, using a component content model that breaks their content up into semantically meaningful chunks. Those chunks can be used to answer specific questions. And they use it for marketing campaigns, they use it for channel partners, for distributors, for field service, they use it for customer self-service, they use it for the call center, they use it for email campaigns, they use it for everything, and they use it to power their bots, because bots need a component system; they need to have chunks of content. And when you start looking at something like ChatGPT, it breaks content up into components. Whenever you look at examples or tutorials from developers on how to do this, they talk about chunking. But they chunk things into kind of arbitrary lengths of maybe 100 tokens, and a token is like 1.7 characters or something. The point is, they use overlapping frames to make sure that you retain some meaning. But if you author it differently, if you take, say, a product manual and break it into its components, you know, installation, configuration, troubleshooting, those pieces can be tagged and enriched with metadata about the product, about the error code, about the model, about the step, all of those things, and then ingested, with those embeddings, with that metadata, into the data source that the large language model will then use. It'll process the query to understand what it is, it'll go to the database and retrieve based on a vector representation of the query. It's kind of translating question to answer, right, just like you translate from one language to another, and then it'll process the results to make them more conversational. So this is a critical, critical thing. You do need content architecture, you do need product data architecture, you do need customer attribute architecture. All of those things are working together. You know, I had a little bit of an existential crisis when I wrote the book, after the book. The book has been out about three years, right? And when ChatGPT came out, I'm like, oh my god, is this still valid? Is this no longer necessary?
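The semantic chunking Seth contrasts with arbitrary token windows can be sketched simply: split a manual along its natural sections and attach ontology-derived metadata to each chunk before ingestion. The product name, model number, and section text below are invented for illustration:

```python
# Semantically meaningful chunks: split a product manual by section and
# tag each chunk with metadata drawn from the ontology.
manual_sections = {
    "installation": "Mount the bracket, then connect the power supply...",
    "configuration": "Open the admin panel and set the network address...",
    "troubleshooting": "Error E42 means the sensor cable is loose...",
}

def build_chunks(product, model):
    """Turn each section into a chunk enriched with ontology metadata."""
    chunks = []
    for section, text in manual_sections.items():
        chunks.append({
            "text": text,
            "metadata": {
                "product": product,
                "model": model,
                "section": section,
                # error codes, steps, etc. could be extracted and tagged here
            },
        })
    return chunks

chunks = build_chunks("AcmeCam", "X-200")
print(len(chunks))                       # 3
print(chunks[2]["metadata"]["section"])  # troubleshooting
```

Each chunk plus its metadata would then be embedded and loaded into the vector store, so a query about error E42 retrieves the troubleshooting chunk for the right product and model rather than a 100-token window cut mid-sentence.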
Because it's using this data approach, and it's so brilliant, and I was like, what am I going to do, you know, my life's work. But the more I researched it, the more I looked into it, the more I delved in, the more important this is than ever. Organizations are going to try to do this and they're going to fail, because they don't have the right architecture, the ontologies, or correctly curated knowledge. That is why this is so important, and you're solving problems today while you're preparing for the future. So it's a very exciting time, and we are at an inflection point in human history. I talk about this in the first chapter of the book: the fact that we'll be talking to all of our technology in a very conversational way. And it's happening; it's happening now. What's between us and that being fully realized is the data. It's the integration; it's all of those things we have to solve. We have to clean up our act around data and content and information, and then we can do these integrations and we can build these conversational systems. But it is an incredibly exciting time, because we are at the point of the biggest change in human history.
Max Matson 24:12
Absolutely, I completely agree with you. And that problem that you mentioned, data silos, is one that I've ended up talking a lot about on this podcast, I've found, because it is one of the most pressing issues, right? I mean, if the output is only as good as the input, and you're giving it fractional pieces of input, you're going to get a fractionalized result. That's right. So that being said,
Seth Earley 24:35
And I'll just point out that the reason for that is because there's no overarching vision at a high enough level. Nobody wants to take that on. There are very few senior leaders who understand this to the degree that they can execute on it. We have executed every single component of this, and we have not found a company that wants to do all of it together, because it's massive. But you can make progress in each of these areas, and those incremental progress points will solve problems today. Usually it ends up being, oh, that's not my department, that's another budget, we're not going to pay for it. But somebody has to look at the greater good of the organization and say what we need to be successful and sustain our competitive advantage in the future. That's what you need: a leader with vision, the ability to take a risk, and the understanding of what needs to be done to make this work. Because as soon as their competitors get it, it's going to be too late for them. This will be existential for organizations.
I wrote in my book, and I've been talking about, a company that we worked with many years ago. They were an education company serving the K-12 market, and they were getting beaten to market by six months by one of their competitors. The CEO was like, why is this happening? I did some research, and I found out this competitor was using a component authoring database, and this was many years ago. They were able to assemble the right curriculum, a rough textbook draft, for a particular level, a particular topic, a particular set of curriculum objectives, standards, and so on. And then they could prepare that much, much faster.
And that got them ahead in the sales cycle. And the CEO said, wow, okay, why don't we have that? And the team said, that's the project we've been trying to get funded for the past three years, and it keeps getting kicked down the road. And the CEO said, great, let's do it. Now, how long will it take? Well, it's going to take at least a year, and they're going to be another year ahead of us. And actually, they failed. They lost that market. They were put out of business by that particular innovation, which should have been obvious to them. But the executives didn't understand it, they didn't put funding into it, they didn't see it, and they didn't have the vision to say this is going to be existential for us. You're going to find that the companies that don't do this are going to have their futures threatened. Now, there are so many behemoths out there that have massive scale and capability, but they will be disrupted; they are disruptable. So this is stuff that is critical to future advantage.
Max Matson 27:30
Absolutely. So that was a great example of what can go wrong in this circumstance, right? What would be your advice for a company that sees where things are headed, knows that this is what they need to cross over and achieve, but is seeing that their data is siloed off and there's no connection? How would you recommend they go about getting that all integrated? Well,
Seth Earley 27:53
It becomes very daunting, especially when you look at it as a whole. Again, I come back to the process, right? What is the most important process that we want to impact? What that does is it bounds the problem; it limits the scope. It could be the customer experience, or it could be the employee experience, right? How do employees get their answers? There's a partner of ours called PeopleReign that we work with, where we focus specifically on the employee helpdesk, and there's tremendous, tremendous value in looking at how people get their day-to-day problems solved. So you can move the needle there, and you can use that as a catalyst. You can also look, again, at the customer experience or any process. It could be order-to-cash, it could be proposal development, it could be anything around supply chain optimization, or distributors, or field service, right? But you have to pick a domain, and then you have to focus on that domain and understand what your outcome is going to be. Say I want to improve my e-commerce conversions, I want to improve my findability. Well, let's assess where you are. Let's look at the state of your data. Let's look at the structure of your catalog.
And let's look at areas where we can optimize, right? That begins the process, and then you can start adding tools and technologies to facilitate it. So whenever you're trying to look at these things, you have to look at something that's either going to support the customer or support a person who supports the customer, right? It could be engineering change order requests, it could be design management, it could be quality control, it could be any of these areas, but we have to start with a focused set of problems. While we're doing this, we're building the foundation. We're starting with a very tangible area that is measurable. And then as we start building that out, we start getting some successes, we get some quick wins, we get the attention of the organization. They see that it's possible, they see what the outcomes can be, and you build this kind of grassroots support. But you also have to have a top-down vision to sustain this over time, to get different groups to play nice together, and to get the different consultancies that you might have to play nice together. Because some of those consultancies don't solve the problem; they just push it down the road. They like the status quo, because they get an annuity from it. Call center services companies are like that; many of the big consultancies and agencies are like that. They don't necessarily solve the problem at the root. So it becomes an evergreen problem that comes up year after year after year. You spend money, maybe get a little bit of movement, you deploy a new platform, maybe there's an improvement short-term because it's a fresh environment. But then over time it gets messy, because there's no governance, there's no structure, there's no formalized decision making, there are no metrics to measure the impact.
So again, when you are tackling these things, you have to be very thoughtful about the outcome that you're looking for, and you have to be able to trace that back to data and information sources and processes. One of the things we do is build maturity models for organizations in multiple areas: knowledge management maturity, e-commerce maturity, customer experience maturity, product data maturity, content management and content operations maturity, customer journey maturity, analytics maturity, all of these areas. What these maturity models do is tell you where your gaps are. So for example, we have a process for doing personalization at scale, through orchestration of content, product data, knowledge, expertise, and insights from across the organization, all along that customer journey. If you want to do this at scale successfully, that means you need maturity in each of these areas. And maturity includes things like governance, metrics, information architecture, and integration, but also process maturity. So for example, we built a personalization architecture for a company.
And it was beautiful. It had the ability to differentiate different customer types and present content for them in the context of their journey, and it worked wonderfully. At the end of the day, though, they said, well, what messaging should we send to this customer? How do we differentiate this one from that one? They did not know. They did not have enough knowledge of customer needs and interests to offer personalized content. They could do it mechanically and technically, but they didn't have the process maturity. So that's where you have to say: I need to make sure I have the technical maturity, but also the process maturity and the expertise to do this right. In many cases there are bottlenecks or gotchas or gaps that people are not aware of. So one of the things we do is run these assessments. We have a knowledge management assessment for generative AI: is your knowledge ready for generative AI? If not, what do you need to do to fix it? And we build a pilot, a POC, as part of that process, along with a plan and a remediation plan. These things just say: look, what do you need to have in place to be successful? You can't jump into these things without that understanding of your landscape, of those processes, and of those dependencies and data sources.
Data catalogs are another important area if you want to start democratizing the use of data. People need to be aware of what data you have, who owns it, who has access to it, who can leverage it, what kinds of algorithms have been written with it, what the dependencies are, what the downstream impact of changes might be, and what rights you have to that data. All of these things are foundational elements for applying the latest technology, because if you don't have them, you're not going to be successful. That's why going from POC to pilot to deployment in production is very difficult. In a POC you have the luxury of carefully nurturing your data, fixing it, cleansing it, making sure it's right: an artisanal approach to having clear, complete, accurate, quality data. Then you go to production, and you don't have that. All sorts of things change. But if you fix those problems at the core, at the source, at the root, then you'll be able to deploy these things. If you don't, you're just going to be chasing your tail, wasting a lot of money, and you won't get the outcomes you need.
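The catalog fields Seth enumerates here (owner, access, algorithms built on the data, dependencies, downstream impact, usage rights) could be sketched as a simple record plus a dependency walk. This is an illustrative sketch, not any particular catalog product's schema; all field names and datasets are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CatalogEntry:
    """One dataset's entry in a data catalog (illustrative fields only)."""
    name: str
    owner: str
    access_roles: list          # who is allowed to read this data
    algorithms_using_it: list   # models or reports built on it
    downstream: list            # assets affected if the schema changes
    usage_rights: str           # e.g. "internal-only", "licensed"

def impact_of_change(catalog: dict, dataset: str) -> set:
    """Walk downstream dependencies to see what a schema change would touch."""
    seen, stack = set(), [dataset]
    while stack:
        current = stack.pop()
        for dep in catalog[current].downstream:
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen
```

Even this toy version shows why the catalog matters: the "downstream impact of changes" question becomes a mechanical lookup instead of tribal knowledge.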
Max Matson 34:53
Yeah, absolutely. You know, I've been kind of floored, from my own experience, seeing both startups, which you would kind of expect, but also large enterprise businesses take such an inconsistent approach to data, right? To the processes behind data, especially. One of the big things I've been trying to focus on with my team is: how do we make sure our data process is as automated, as hands-off, and as consistent as possible? So one thing I'm interested in is, for the companies that are able to make this pivot and get their data in line, what is the biggest change that you see these new AI technologies actually enabling?
Seth Earley 35:38
Well, there's call deflection, right? When you think about why people call the call center in the first place, it's usually because something is broken or they can't get an answer, so they pick up the phone. So if we go upstream and ask, how can we make the information more accessible and available to our customers, and answer their questions? People don't want to read 250-page documents, right? We used to give them a document of some sort, a policy, a 25-page PDF. Maybe the answer is right there in one little sentence, but you can't slog through all of it. That's why people pick up the phone: there's too much cognitive load on the customer, and they don't want to go through it. So we reduce the cognitive load by exposing the answers, and we do that conversationally. For each question, we're very carefully curating and defining the answer. This is how content should be created, but content is not created this way. I once worked with a Medicare Administrative Contractor, right?
It's an insurance company that manages Medicare and Medicare claims. Well, they had a 25-person content team, okay? And all that content team did was produce content every week from all the regulations coming out of Medicare and Medicaid. I was blown away. Why would you need all of this? What is this? I went into a workshop with them, and I had a pile of the content. I picked up a piece and said, what is this? Who is it for? Why will they read it? No one could answer those questions. They just created content every week, because that was their job, and nobody asked: what problem are you trying to solve? And you know, I just did my passport renewal. It was ten pages of instructions that could have been boiled down into five bullets. Pages and pages of instructions. Why do you need to explain in endless detail, on and on and on, that I need a photograph to apply? It's ridiculous. And people don't think that way.
They don't communicate that way. They don't want information that way. So curate content. When we look at content operations for AI, that should just be content operations, period. That's how we should be creating content: what do people need? I had a credit card that I was trying to activate, and I'd lost the number, and I used the bot on their site. It went on and on about credit cards and applications, about this and about that, about accounts and other offers. It was pages and pages and pages, and I couldn't get to that one little point. I just wanted to activate the credit card; I just needed that phone number. So the point here is, when it's done right, it leads to call deflection. It reduces call center volume, it increases customer satisfaction, it improves efficiencies. And this is just the cognitive aspect. Then you're doing other things like predictive analytics, or predictive maintenance. And a lot of this came from our analytics world, right?
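The curated question-and-answer approach described here, surfacing the one relevant answer instead of a 25-page document, can be illustrated with a toy retrieval function. A real assistant would use embeddings or a language model; this keyword-overlap version, with made-up FAQ entries, is only a sketch of the idea.

```python
def tokenize(text: str) -> set:
    """Lowercase words with surrounding punctuation stripped."""
    return {w.strip("?.,!").lower() for w in text.split()}

def best_answer(question: str, faq: dict) -> str:
    """Return the curated answer whose question overlaps most with the user's.
    A keyword stand-in for the embedding/LLM retrieval a real assistant uses."""
    def overlap(candidate: str) -> int:
        return len(tokenize(question) & tokenize(candidate))
    best = max(faq, key=overlap)
    # Fall back rather than guess when nothing matches at all.
    return faq[best] if overlap(best) > 0 else "Sorry, let me route you to an agent."
```

The deflection happens in the curation, not the matching: each entry is one question with one carefully defined answer, so the user gets the sentence they need instead of the whole policy document.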
Banks and insurance companies have been doing predictive analytics forever using machine learning processes, and now it's called AI. We've been doing text analytics forever: entity extraction, sentiment analysis, auto-categorization. Now it's called AI. That's fine. But the point is that when you do this effectively, you can get tremendous improvements in processes and in customer satisfaction. And you can also eliminate the acts of heroics upstream. Meaning, when you want to create a customer experience, many times there are people sending things back and forth via email, asking questions, looking for the latest version. They can't find stuff, so they recreate it. Product launches take forever. All of those things can be optimized. And I think we're going to start forgetting about the term AI. It's just going to be programming. Saying we're using AI is like saying we're using computers, right?
It's so broad. A colleague of mine used to say that no AI ever worked, because once it worked, they called it something else. That was 20 years ago, right? Word processing was one of the first incarnations of AI, because it took human expertise and translated it into a program that could make decisions and perform layout along the lines of a human expert. At first that was called AI, but when it worked, it was called word processing. Nobody says, I'm going to open up my AI and create this document. Speech recognition, spell check, all of those things were part of AI, but they get subsumed into the tools; they become part of the fabric of the organization. That's what we're going to be seeing, more of that integrated into processes where it won't be visible. At a conference a while back, Jabra was demoing their immersive video conferencing system, and their AI was invisible.
All you knew was that you had a great experience. The camera was on the right person, you could see the whole room, they corrected for parallax and image distortions, they suppressed noise and amplified certain sounds and frequencies. All of that is invisible. There was no AI, quote-unquote, in the application. It wasn't an "AI app"; it was an application with machine learning woven throughout it. And that's what we're going to see: more and more of it invisible, embedded into the applications. It's been used that way for years. Search has used machine learning algorithms for the past 20 years, any type of clustering, any type of entity extraction, all of those things. So now everybody's pivoting to generative AI, and that's a big deal, because these models are so important. But they're doing it without the foundation, without truly understanding the need for information hygiene and information management hygiene. You still need data hygiene, you still need information hygiene. Then you can do wonderful things like personalization. You can personalize
the employee experience, you can personalize the customer experience, by reading that digital body language, whether for an internal customer or an external customer, anticipating what they need and giving it to them while they're in their process. But that requires understanding their process and their interests. It requires the ability to create the right content, to tag that content, to componentize it, to build those questions, and to surface it in the context of their work process.
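Tagging componentized content and surfacing it in the context of a work process, as described above, reduces to filtering components by journey stage and interest tags. A minimal sketch, with hypothetical components, tags, and stage names:

```python
# Hypothetical content components, each tagged with taxonomy terms and a
# journey stage; "surfacing in context" is then just a filter on those tags.
components = [
    {"id": "c1", "tags": {"pricing"},            "stage": "evaluate", "body": "Plan comparison table"},
    {"id": "c2", "tags": {"setup"},              "stage": "onboard",  "body": "Quick-start checklist"},
    {"id": "c3", "tags": {"pricing", "renewal"}, "stage": "renew",    "body": "Renewal discount terms"},
]

def surface(stage: str, interests: set) -> list:
    """IDs of components matching the user's journey stage and tagged interests."""
    return [c["id"] for c in components
            if c["stage"] == stage and c["tags"] & interests]
```

The hard work is upstream, in creating and tagging the components consistently; once that exists, the "right information at the right time" delivery is the easy part.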
So these efficiencies are going to be vast. The company I mentioned, the tech company that handles two million transactions per day, they say they've saved hundreds of millions of dollars per year based on the work we did for them, and they ran with it. That was about seven years ago, and they're still using the same structure, the same approach, the same governance, the same metrics. We built one of the first virtual assistants, for Allstate Business Insurance: the Allstate Business Insurance expert, or ABIe. The same content is still being used. They're using different front ends, different technologies, but the same core content processes, the same content models, the same components. So these things become assets that increase in value over time, that are durable, that you can use over and over again. But they have to be built at the right foundational level. We saved Applied Materials $50 million per year in field service costs, because we reduced the amount of time their field service employees spent searching for information. The AI we used was machine learning auto-categorization and entity extraction, but we did the integration using an ontology, and the ontology was critical. They had tried that project three times in the five years before we did this, and each time they failed. So when we met with the CFO, he asked, why do we need taxonomies? Why do we need ontologies? Why don't we just get Google? And I said, do you have a chart of accounts for your finance organization? He said, of course I do. I said, why don't you get rid of your chart of accounts and just get Google? Because a taxonomy is a chart of accounts for knowledge. I mean, it would be ridiculous; you can't run a business that way.
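The chart-of-accounts analogy can be made concrete: a taxonomy is a controlled set of categories that content gets "posted" against, and auto-categorization assigns content to those categories. A toy sketch, with hypothetical categories and terms, using keyword matching as a stand-in for the machine-learning auto-categorization and entity extraction mentioned here:

```python
# A tiny taxonomy: parent -> children (the "chart of accounts" hierarchy),
# plus the vocabulary terms associated with each leaf. All values hypothetical.
taxonomy = {
    "Service":         ["Installation", "Troubleshooting"],
    "Installation":    [],
    "Troubleshooting": [],
}
terms = {
    "Installation":    {"install", "setup", "mounting"},
    "Troubleshooting": {"error", "fault", "diagnostic"},
}

def categorize(text: str) -> list:
    """Assign leaf categories whose terms appear in the text,
    a keyword stand-in for ML-based auto-categorization."""
    words = {w.lower().strip(".,") for w in text.split()}
    return sorted(cat for cat, vocab in terms.items() if words & vocab)
```

Just as a transaction only makes sense against the chart of accounts, a document only becomes findable and reusable once it is posted against controlled categories like these.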
So how do you differentiate knowledge? How do you differentiate information? How do you differentiate customers? How do you navigate all of these things? Too often, unstructured content has not been considered the high-value asset that it is. That's most of the organization: the knowledge, the expertise, the IP. That's what organizations compete on, right? How you get to market, your solutions, your proprietary code, your proprietary designs, your expertise about the solutions and the competitors and the customer needs, your creative. And there will always be that need for human creativity to solve those problems. It's about capitalizing on those things, codifying them, organizing them, and making them available to people in the right context. That's the holy grail: the right information for the right person at the right time. When we do that, there are tremendous efficiencies, vastly out of proportion to the investment. That $50 million a year in savings cost about a million dollars, okay? And the same with Allstate: that was about a six-month project that cost about a million dollars, and it reduced call center volume at the time by 20 percent, which is an unheard-of number. There were other reasons for that too; it was a new product launch, et cetera.
But it still had a tremendous impact. So in these projects that are done effectively, we're seeing enormous paybacks, but they have to be done correctly, and they are a journey. Again, the companies that do this stuff and get it right today are going to have a sustainable competitive advantage that will be difficult for competitors to catch up with, if they start now and do it effectively. So it's a wonderful field. It's scary, because people are not sure where to invest the money, they're not sure what's going to work, they're throwing a lot of stuff against the wall. But there's a methodology for doing this, where you can run small-scale experiments, start getting your data house in order, and start showing what those improvements will be. That's why those POCs and those assessments are so important.
That's why the maturity models are so important. They show you where you are, where you need to be, and what the gaps are. Some people don't like maturity models, fine, but to me they're a starting point: a current state. What describes your current state? Is it acts of heroics? Is it decision-making by fiat? Is it content chaos? We had someone call their search service a random document generator: they would get different results each day for the same request. The point here is, you have to know where you are, you have to know where you need to be, and what the enabling processes and capabilities will be. Once you have that, you can set out a roadmap with very realistic expectations and realistic capabilities over time, with understandable investments. But until you understand that, it's very difficult to set out on a course you can be confident in.
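The maturity-model idea, current state versus target state with gaps in between, can be reduced to simple arithmetic over scored dimensions. A sketch with hypothetical dimensions on an assumed 1-to-5 scale:

```python
# Hypothetical maturity scores on a 1-to-5 scale for a few dimensions
# Seth names (governance, metrics, information architecture, integration).
current = {"governance": 1, "metrics": 2, "information_architecture": 2, "integration": 3}
target  = {"governance": 4, "metrics": 4, "information_architecture": 4, "integration": 4}

def gap_roadmap(current: dict, target: dict) -> list:
    """Dimensions ordered largest gap first: a naive prioritization,
    standing in for the judgment a real roadmap exercise applies."""
    gaps = {d: target[d] - current[d] for d in target}
    return sorted(gaps, key=lambda d: -gaps[d])
```

A real assessment weighs dependencies between dimensions rather than sorting raw gaps, but the principle is the same: score where you are, define where you need to be, and sequence the work by the gaps.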
Max Matson 48:08
I see, makes a ton of sense. I do see that we're running a little tight on time, Seth, but I did want to ask: for that leader at an organization who is hesitant to adopt these new technologies, to undertake the process you mentioned, what would be your advice to that person?
Seth Earley 48:29
My advice would be that this is going to be existential for the organization. This is not a nice-to-have; this is a need-to-have. That doesn't mean freak out and spend tons of money indiscriminately. Investigate this with a clear-headed approach. You should always understand what the vendor is offering you. If they say it's proprietary and we can't tell you, well, you at least need to know what the inputs are and what the outputs are, and you need to know how they've trained their algorithms if they're applying AI. You need to understand the process analysis. Start with the things that you know, the things that you understand: you understand your business, you understand your capabilities, you understand what your customers need, you understand what you need to do down the road. Start with that, look at that end state, and then start initiating experiments, right?
Fix the data, understand the process, initiate experiments. That's what you have to start with. You can do brainstorming sessions; we do knowledge strategy and AI strategy assessments and workshops, and these can be done very quickly. You get everybody on the same page, you level-set, you give people the right knowledge so that everybody understands things, including the executives. You want to share knowledge in a way that is not threatening. You want people to understand: hey, this is stuff you probably know, but let's go over it anyway. And we want them to share their knowledge so they don't feel threatened by it. You have to be very cognizant of that, but you have to get your executive team on the same page, aligned. A lot of the large consultancies are trying to do this, but many of them don't understand the necessary pieces. We go in after a lot of the big consulting organizations, and we find that they're not being as innovative as they need to be, and they're not necessarily laying the groundwork for success. We're building realistic roadmaps. So I say: begin with the end state in mind, understand your organization's maturity and capabilities, understand those processes that are so important, and then start investigating from there.
And if anybody won't tell you what the inputs and outputs are, or how the algorithm works, just move on, right? The tactic here is to put it in business terms that you understand. You may not understand how the algorithm works in terms of neural networks; many times people don't. But you can understand the inputs and the outputs, you can understand the training data, and you can understand the parameters and what the expectations are. And then you experiment with your data and your process.
Max Matson 51:15
Makes sense, makes sense. It's all about tailoring it down to your own business. Absolutely. Perfect. Well, Seth, thank you so much for this, so much great information here. Just so everybody knows: Seth Earley, CEO at Earley Information Sciences and author of The AI-Powered Enterprise. Please go check it out. And it's Seth...
Seth Earley 51:30
It's Earley, spelled E-A-R-L-E-Y. You can reach me at firstname.lastname@example.org, or you can find me on LinkedIn; I'm just Seth Earley. So feel free to reach out. And I'd be happy to get a copy of my book out to folks who write me in the next week or two. Write a note to email@example.com, reference this podcast, and put your mailing address in, and I will be happy to send a signed copy out to the first, I would say, ten folks I hear from. Amazing. I have to put a cap on it. Maybe, maybe fifteen. All right, thank you. Totally, I really appreciate it. It was great. Oh, it's my pleasure. All right, take care.
Max Matson 52:19
Thank you so much, Seth. You bet. Thank you for listening to another episode of the Future of Product podcast. A special thanks to my amazing guest, Seth. If you enjoyed this episode and want to learn more about what I do over at PlayerZero, you can find us at playerzero.ai. If you're looking to go even deeper on the subjects we talked about on the pod, subscribe to Future of Product on Substack, and be sure not to miss this Thursday's newsletter. Plus, be sure to email Seth to get a free signed copy of his book, The AI-Powered Enterprise. Can't wait to see you next time.