Nate Andorsky 0:00
So what behavioral science basically says is that we're not actually consciously aware of much of what drives our decision making, and behavioral science seeks to understand the environmental factors that are influencing your decision making that you're not consciously aware of. What's really interesting about ChatGPT is the interaction model: because of the conversational tone of ChatGPT, it feels more trustworthy. I've always believed that behavioral science is about a decade behind where data science is. I think what's going to happen is that it's going to follow a pretty similar trajectory to data science, where you start to have behavioral scientists that specialize in different areas.
Max Matson 0:41
Hey, everybody, welcome to another episode of Future Product. Today, my guest is Nate Andorsky, founder of Patent 355, serial entrepreneur, author of Decoding the Why, and so much more. Nate, we have a lot to talk about. But to start with, would you mind telling us a little bit more about your background and how you got to this point in your career?
Nate Andorsky 0:58
Definitely. And thank you for having me. My mom always said I had a face for radio, so here I am on this podcast. You know, I've been an entrepreneur since I can remember. I had my first business in high school: I used to knock on people's doors and see if they had any junk laying around, and I would sell it on eBay for them and take a cut of what it sold for. I started out selling junk and eventually sold cars on eBay, and that's when I realized the power of entrepreneurship. I specifically remember I actually sold a fair amount of cell phones, and I remember seeing the listings and noticing that different listings for the same phone would sell for different prices. Just depending on how the item was presented, it would go for more money. I didn't know it at the time, but that was my first understanding of the power of what I refer to now as behavioral science. Since then I've launched and scaled a number of companies, I'm currently working on a company called Patent 355, and here I am today.
Max Matson 1:56
Very cool. Very cool. Let's delve a little bit into what you define as behavioral science, right? I'm super interested in your view there.
Nate Andorsky 2:05
Really, you know, when we build products, or bring products to market, or scale products, a lot of what we do, or hopefully do, is customer research, to get insights into why people like things or what they'll use in the future. And what behavioral science basically says is that we're not actually consciously aware of much of what drives our decision making, so when we ask people why they do the things they do, the answers are usually pretty inaccurate. Behavioral science seeks to understand the environmental factors that are influencing your decision making that you're not consciously aware of. I read a book seven or eight years ago called Nudge, and if you're familiar with this space, it's like the book to read. It's not related to tech at all, but what was interesting is that if you have experience building products, you just know some of the key concepts through trial and error. I thought, oh, this is interesting. I wonder if there are companies that are really digging into this academic literature and integrating it into the way they think about building and bringing products to market. I assumed that was out there, and that's what sent me on my own journey to figure out how to bridge that gap.
Max Matson 3:08
I see. Gotcha. What would you chalk that up to, the lack of people in that space applying it to product development?
Nate Andorsky 3:16
How much time do you have? You know, I think there are a lot of gaps that exist between academia and the applied or industry side, and I come across this in a lot of different disciplines. Some of it is because you have to reverse engineer a lot of the theories to get them to work, right? When you're in academia, you have a hypothesis or an idea that you want to test, like: are people more likely to do something if they have a friend who's also doing that thing? In the business world, it's completely the opposite. You're trying to optimize for a metric: increase revenue, decrease costs, et cetera. So you have two very different starting points; that's part of it. The second part is that you actually need a pretty wide array of skill sets to be able to bridge that gap, and there are a lot of people who are either very strong in the academic literature or very strong on the applied side, but don't typically have both. And then one of the biggest things is just the timeline for understanding if something works or not. In academia, you can run a study and it can take months or even years, and that's okay, right? On the applied side, if you're working at a company, you've got to move the needle really quickly. So it's a balance of being true to the way you should be looking at the research and implementing it, while also making sure the CEO at your company isn't saying, hey, listen, I can't wait six months for you to tell me if this is going to work or not.
Max Matson 4:41
Right. Right. I see. Got it. So it's interesting. I think behavioral science is a field that's working its way into a lot of different fields, right? Like behavioral economics, as an example. I think that putting these models of human behavior into the context of all these different applied fields is already starting to have a massive impact. Would you mind laying out some of the core principles that you use to guide behavioral decision making?
Nate Andorsky 5:10
Definitely. So first, the caveat: everything is contextual, right? If you see something work in one instance, you can't necessarily copy and paste it into another instance and assume it's going to work. A couple of key concepts here that we talk about. Number one, when you're looking at an experience or customer journey, there's this concept of removing friction and adding fuel; that's a key one. There's also the way that we view gains and losses. Loss aversion speaks to this idea that we actually feel losses about twice as strongly as equivalent gains, right? So if you're walking down the street and you see $10, you pick up that $10. But if you lose $10, psychologically it feels like a $20 loss. That says something really interesting about how we approach decision making: we look to minimize the downside rather than optimize the upside. Then there are norms, social norms: what other people are doing is, in many different ways, a really big influence on our behaviors. And then the most interesting concept, I think, comes up when you're looking to drive long-term behavior change. It's really hard for us to understand the potential benefit of taking an action whose payoff happens way off in the future, right? You say, I'm going to save money for retirement. Everyone says retirement is important, I need to make sure I have money in the bank to retire. But we have a very difficult time really understanding the potential impact of taking that action, which is one of the reasons why it's hard for us to just be responsible with our money, no matter who you are.
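The twice-as-strong asymmetry Nate describes is often modeled with a simplified prospect-theory value function, where a loss-aversion coefficient (empirically around 2) amplifies losses. A minimal illustrative sketch, not something from the conversation itself:

```python
def subjective_value(amount: float, loss_aversion: float = 2.0) -> float:
    """Simplified prospect-theory value function.

    Gains are felt roughly at face value, while losses are amplified
    by the loss-aversion coefficient (empirically around 2).
    """
    return amount if amount >= 0 else loss_aversion * amount

# Finding $10 feels like +10, but losing $10 feels like -20,
# matching the "a $10 loss feels like a $20 loss" intuition.
```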
Max Matson 6:41
Very interesting. It makes sense. A lot of the stuff that I heard in my economics coursework aligns with this, right: loss aversion, sunk costs, and discounting based on how far in the future gains are. So you wrote a book about this, Decoding the Why. How do you see these tenets being applied to improve product development?
Nate Andorsky 7:09
It gives you a whole other playbook to think about what features or ideas you want to build. A lot of what companies tend to do, and I've done it too, is what I call feature cloning, right? We need to build something, so let's see what all of our competitors are doing and copy and paste that, without really understanding the behavior that you want to drive. I'll give you a really perfect example. Think about retirement again. Most apps that want you to save for retirement have a pretty similar onboarding flow. You open up the account, you put in some information, they ask you about maybe your retirement goals: how much money do you want to save, what are you saving for, et cetera. But if you understand this at a behavioral level, you start by asking what the problem actually is. The reason that you can't, or don't, save for retirement could be this concept of what we call the future self. When you think about your future self, you actually think about yourself in the third person. Why is that important? Well, it means you view your future self as a complete stranger.
So even though you say you care about your future self, they're no different than somebody who randomly passes you on the street. If you see somebody on the street, do you actually care if they save for retirement? No, right? So now you understand what's going on. The question then becomes: how do we take your future self and bring it closer to your current self? One of the interesting concepts, and this is actually a study that AARP did, is that they showed people who were looking to save for retirement an augmented-reality version of themselves: what they would look like in 30 years. It would show you yourself a little bit older. What did that do? It connected your current self to your future self. And when they used this intervention, they saw uptake in retirement plans almost double. This is a prime example: if you went out to customers and said, hey, listen, what features should we build, no one's going to say, show me an augmented-reality version of what I'll look like when I'm older, because then I'll care more about my future self. And this is the power of behavioral science, right? These are insights that you're not going to get from your users or your customers; they're insights about what drives behavior. All of a sudden it opens up a whole new way of thinking about the way that you build products and bring them to market.
Max Matson 9:23
Interesting. I see. So it's almost going a step deeper, beyond the voice-of-the-customer data, right? Exactly. Very interesting. Okay. So it's the feature that they don't ask for but that they really need.
Nate Andorsky 9:37
Yes. There's a famous Henry Ford quote, which I actually posted on LinkedIn a couple of months ago, and then someone called me out because, they said, Henry Ford never said that. That's beside the point; it's an interesting quote. The quote is: if I had asked people what they wanted, they would have said faster horses. And that sums up exactly what it is that I'm talking about.
Max Matson 9:56
Totally. I think a lot of these principles make a lot of sense in the context of current AI development, right? Because we're very much at a place where AI builders are building for a market that doesn't yet fully exist, kind of anticipating what those needs are. So what advice would you have for somebody who's trying to build an AI product and is thinking through, how do I anticipate the behavioral needs of tomorrow?
Nate Andorsky 10:23
There's a lot to unpack here, and I've done it a bit. I come from a software engineering background, so I'm a big proponent of, when there's new technology, just digging into it and getting my hands dirty to understand what the thing is, and I've done that with OpenAI. So there are a couple of key insights here. The first one is that when a new technology is released, oftentimes what I see is companies jumping straight to building a solution, and then you fall into the trap of a solution looking for a problem. So first and foremost, I would be very careful to first understand the problem that you're solving, agnostic of the technology used to solve it. That's the first thing. The second thing is that there are actually instances where AI can backfire, and I'm not talking about the technology, I'm talking about the user experience. Let me give you an example. I was talking to a startup this past week. Their product gives people recommendations for different financial advisors they should follow. The idea is that there are financial advisors or investors who are really good at what they do, and the product recommends who you should follow so you can follow the way they invest and invest smarter. This was all AI driven. And one of the things they were having trouble with was getting people to actually follow investors. Can you guess why?
Max Matson 11:41
Why? Labor illusion,
Nate Andorsky 11:43
Max Matson 13:38
Interesting. Those are both very salient points. So, expanding a little bit on number two there: I think that's a huge problem that all of us building in the AI space are facing, right? You're building something that's anticipated to solve a need faster, more efficiently, and more concretely than before AI came around. But it still needs to be designed to garner trust, right? So like you said, even if it can spit out an answer within a few seconds, that actually might not be ideal, despite it being faster, more efficient, more accurate, because the human on the other end doesn't have a chance to really buy in and feel like there's work going on behind the scenes. Right?
Nate Andorsky 14:25
Yeah. And that's true of every single use case, but it's one of the common mishaps I see. The other interesting place is understanding whether the AI models that you're building are meant to replace or to augment an existing end-to-end service flow, right? I'll give you an example. Imagine that you're writing an annual report for a nonprofit. If I'm building an AI tool to help with writing and publishing that report, that end-to-end process is pretty complex, right? You're a communications director, you're probably talking to your executive director, there are so many different things going on, so many different people you're talking to, so many different ways you need to draft this. The first thing I would do is literally map the end-to-end journey and understand what's going on, and then understand where the biggest pain points are. Your AI doesn't necessarily need to create the report end to end. It might actually be more effective if you can figure out the places in that end-to-end journey where it can augment the experience. Maybe it can help capture notes from your team. Maybe it can help draft an outline, or give you a first version of the draft that you're going to go back and edit. All of those things are really important because they dictate how the software will be constructed, so it actually works within people's workflows.
Max Matson 15:51
Interesting. It's almost the concept of a copilot versus an autopilot, right? The copilot. Exactly, yeah. So, 100 percent, would you say that it's easier to garner trust if you're augmenting as opposed to replacing?
Nate Andorsky 16:06
I think so. And I think in those instances, and this is another prime example, with AI or really any technology, people kind of just jump to building the technology; they don't deeply understand the problem or the use case. But if I'm a user of a piece of software like the one in the example I just gave you, and it actually fits into my workflow, I get this sense of, okay, these people actually really understand who I am and what my needs are. They're not just throwing technology at me. It's, oh, interesting, it actually asks me to put in the emails of the other folks who are going to want to add comments to the final report. They actually understand what's going on in my world.
Max Matson 16:47
I see. I see. Interesting. So, pivoting slightly, you mentioned that itch to just jump straight to development, right? And you've talked about how you can tell if a company is struggling purely based upon the number of new product features relative to their funding round. Would you mind talking about the calculus behind that?
Nate Andorsky 17:08
Yeah. So the entrepreneurial journey is: you come up with an idea, hopefully you've identified a problem, but a lot of times you just sort of have this idea, right? You put it out there. And sometimes you put it out there and it catches like wildfire and takes off. But most of the time, what happens is you put it out there and the growth that you had in mind just isn't happening. And there's this tendency to go wide with your product feature set, because you think, the more features I build, the more chances I have that somebody is going to use a feature and the product will catch. But what inevitably happens is you keep running in circles. What you should do is find the biggest pain point and build one or two key features, going very narrow and deep. Usually you expand your product, in terms of your feature set but also your ICPs, as you get into later funding rounds, because that expansion is itself a signal of growth, versus where you should be in your seed round. So if there's a company at pre-seed, seed, or sometimes even Series A that has what I'd deem a bloated product, it's a pretty clear signal to me that they haven't found product-market fit, because they're continuing to build features hoping something's going to catch. If you came to me and said, hey, listen, I'm going to raise a Series A, I'm doing a couple million in revenue, and I have one core feature, I'd say, great: you know what you're building, you know what your use case is, you know who your ICP is.
Max Matson 18:38
I see. So, for those who might be confused because maybe they're trying to build around a land-and-expand type of offering, let's say, what are the concrete signals that say, hey, you know what, we're starting to creep outside of scope here, we're moving in the wrong direction?
Nate Andorsky 18:57
Usually, I would say, it's if you don't have a repeatable, scalable sales motion. Meaning: if I gave you $50,000, you can't actually show me a clear path to how that $50,000 is going to turn into users. They don't actually know who they're selling to; they're not very specific about their ICP. That's the first indication. You want to hear: we sell to executive directors of nonprofits, of this type, this size, this industry, all those different things. The second thing is that they can't clearly articulate the core problem that they're solving. What problem are you solving? We're helping people become healthier. Okay, that's a very big, abstract problem. How are you helping people become healthier? Well, we help them create training plans, find a coach, eat healthy, all these things. And I'm like, whoa, that's a lot. If you were a massive healthcare company, sure, but you're a startup, right? You need to be focused. So: no clear articulation of the ICP, trying to solve too many problems, and the problems aren't clearly defined.
Max Matson 20:06
I see, I see. Very interesting. So you would say, having a repeatable, scalable sales motion that's locked into this one piece of core value, that's when you know, hey, we're getting a little more mature, maybe it's funding round B or C, and now we can expand out a little bit. Exactly, exactly. I see. Got it. Very interesting. So, pivoting slightly to your current venture, Patent 355. Is it "three fifty-five" or "three five five"?
Nate Andorsky 20:36
You know what? I don't know. I haven't settled on a way that I pronounce it, so either one works for me.
Max Matson 20:43
Awesome. Personal preference. So you have a pretty powerful story behind it that you laid out on your about page. Would you mind telling us what led you to start Patent 355, and the story behind it?
Nate Andorsky 20:59
Yeah, definitely. So I've had this conversation with a lot of people: is entrepreneurship genetic, or is it environmental? Sort of nature versus nurture. There's no answer to that, but I do have, on my mother's side, a pretty clear lineage of what I would refer to as the entrepreneurial gene. My grandfather was raised in Vienna, and he emigrated, immigrated, I don't know what the correct term is, over to the US when he was a teenager. He didn't have anything more than a high school education, but he was an inventor. He went on to become a VP at Clairol and a number of other companies, and was the inventor or co-inventor on, I think, about 100 patents throughout his career. He was a very curious person. The company Patent 355 pays homage to him and his entrepreneurial spirit: the last patent that he filed, in 1984, ends in 355. But the reason it's called Patent 355 is the idea that I, and the company, are carrying forward his sense of curiosity, the way he moved through the world. It's something that I've always had, for as long as I can remember, and I think that's really what drives me as an entrepreneur, because I realized when I was in my 20s that it's actually not money.
Max Matson 22:26
What is it?
Nate Andorsky 22:28
It's just a sense of curiosity: how do things work, how do things fit together, and how do very messy situations intersect in this puzzle-like scenario?
Max Matson 22:41
Absolutely. No, I love that. So would you mind talking through specifically what you do at Patent 355 and how that bears out?
Nate Andorsky 22:49
Yeah. So we're a product design studio at our core. We work with early-stage companies, helping them either bring new products to market or refine products that are already in market, and we also work with large corporations on what I would deem new ventures. So if you're Visa, for example, and you're exploring launching a new product, what does that actually look like? Our methodology is really rooted in behavioral science, which is understanding what drives human decision making, and that perpetuates itself in the way that we do our research, but also in how we design and bring products to market.
Max Matson 23:23
I see. Gotcha, gotcha. So would you mind defining a little more the behavioral science inputs that go into that development and design process?
Nate Andorsky 23:34
Yeah. So one of the ways that we integrate behavioral science is in the research that we do. If we're doing qualitative research, say about a new savings app, we'll use the behavioral science to help us understand the customer insights. If I ask you, do you want to save for retirement, and you say yes, and then I ask you how much money you want to save every single month and you tell me X amount of dollars, I can actually tie that back to the academic literature and tell you why that answer is essentially a lie. So we're asking not just what the qualitative and quantitative data say, but why they say it, those types of things. And then what that does is help us come up with feature ideas and product ideas. The augmented-reality savings feature that I spoke about earlier could be a prime example of something we'd be able to come up with because of the behavioral science. Then we'll build those and test those and see if they actually work, et cetera.
Max Matson 24:31
Okay, got it. Got it. Very cool. So it's kind of like you're building the behavioral science into the very process of how you ask questions. Exactly.
Nate Andorsky 24:40
Yeah, it's another layer. You know, I'm sure you hear, we use design thinking, we use human-centered design. We use all those things too, but behavioral science is another tool in our toolbox to help inform the typical iterative, lean-startup process that most companies go through.
Max Matson 24:57
Very cool. And can I ask, have you worked with any AI products? Yep.
Nate Andorsky 25:02
I have. I mean, there are a couple of different levels. One is just using an AI product; the other is looking to integrate AI into the products that we build. I've done both of those.
Max Matson 25:15
Very cool. Would you mind talking about the latter a little bit, how you've bridged the gap between the behavioral science piece and building in the AI space? Because I think there are a lot of startups coming out that are trying to solve a problem, like you said, but they don't necessarily know that it's the right problem. They don't know that they're solving it according to what the customer actually needs, as opposed to what the customer is able to verbalize.
Nate Andorsky 25:46
Yeah. So on the interaction side, what I would deem the user experience: one of the interesting use cases is that because of the way AI is available now, there's a lot of ability to create these chat interfaces in a way that there wasn't before. So, for example, one of the powerful things about behavioral science is this idea of framing. Let's say that you know there's a certain demographic or certain persona within your app that responds better to certain types of messages. I'll give you a prime example; this is a very basic use case. You could begin to segment out your user base. Imagine you have that savings app I was talking about before, and there's some sort of text interface. Previously, you might say, hey, would you like to set up your savings account today? But going back to loss aversion, one of the ways you could potentially frame this is: there's been an account created for you, click here to claim your account. That's using the loss aversion that already exists. What these models allow us to do is segment users into different buckets, and also begin to send very customized, behaviorally informed messages to those different segments, in a way that you could kind of do before, but not at the scale or with the ease that you can now.
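As a toy illustration of the segmentation-plus-framing idea Nate sketches, here is a hypothetical Python snippet; the segment names and message copy are invented for the example, not taken from any real product:

```python
# Hypothetical loss-framed vs. gain-framed onboarding copy, keyed by
# behavioral segment. Segment names and wording are illustrative only.
MESSAGE_FRAMES = {
    "loss_averse": (
        "A savings account has been created for you. "
        "Claim it before it expires."
    ),
    "gain_seeking": (
        "Open a savings account today and start earning toward your goals."
    ),
}

def onboarding_message(segment: str) -> str:
    """Pick the behaviorally informed framing for a user's segment,
    falling back to the gain-framed default for unknown segments."""
    return MESSAGE_FRAMES.get(segment, MESSAGE_FRAMES["gain_seeking"])
```

In practice the segment label itself would come from your analytics or a model; the point is only that the copy, not the feature, changes per segment.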
Max Matson 27:16
Very cool. Very cool. So for somebody who maybe is a first-time founder and is working on their product, whether it be AI or not, where would you start if you were in that position? Because obviously these types of models are going to be increasingly relevant, and making decisions in product development based on behavioral models is going to be, I think, really the next step in product development. How do you stay ahead of the curve there?
Nate Andorsky 27:45
What I would do, and I'm a huge proponent of this: I always say no one would ever hire me to be the lead engineer on their team, because I would just destroy the codebase, but I can take an idea and launch a first iteration of a product. So what I always do with these types of new technologies when they come out is just get my hands dirty. It's really hard, I think, to understand what you can and can't do unless you've played around with the technology. For example, I try to produce a decent amount of content, on LinkedIn and blogs and such, and like everybody else I wondered, can this thing help me produce content? So the first thing I did was test a bunch of ChatGPT output. That was step one, and I thought, okay, this is fine, but it's not great. And after a couple of months it was, I'm sounding like everybody else; it's pretty easy to identify when ChatGPT wrote something. But then I thought, okay, I have a whole book. I wondered if I could train OpenAI's model, which is basically training a large language model, on my book. So I spun up Python, I started to write a script, I took the contents of the book, and I trained the model on it. Then I started to do prompt engineering, which is this whole new discipline; the way that you prompt the model to get the answers you want is actually really important. I started to play around with it: can I prompt OpenAI, based on the content of my book, to create a LinkedIn post that kind of sounds like me? And the answer is, I never got to a place where I could, but I started to understand what this thing can and can't do, and its limitations.
So if I'm talking to a startup, or a client, or a company, and they ask me something that's not surface level, I can answer it, because I've tried it before, or I know what's possible, and I can bring in the right folks to build those types of things.
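The book-training workflow Nate describes starts with turning the raw text into training examples. Here is a rough, hypothetical sketch of that data-prep step, assuming the JSONL chat format that OpenAI's fine-tuning endpoints accept; the chunk size, prompts, and helper names are all illustrative:

```python
import json

def chunk_text(text: str, max_chars: int = 2000) -> list[str]:
    """Split the book into paragraph-aligned chunks under max_chars."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if len(current) + len(para) > max_chars and current:
            chunks.append(current.strip())
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks

def to_training_records(chunks: list[str], style_prompt: str) -> list[str]:
    """Turn each chunk into one JSONL line in the chat fine-tuning format,
    treating the author's own text as the target assistant output."""
    records = []
    for chunk in chunks:
        records.append(json.dumps({
            "messages": [
                {"role": "system", "content": style_prompt},
                {"role": "user",
                 "content": "Write a LinkedIn post in the author's voice."},
                {"role": "assistant", "content": chunk},
            ]
        }))
    return records
```

The resulting lines would be written to a `.jsonl` file and uploaded to the fine-tuning API; as Nate notes later, the quality ceiling is set mostly by how much and how representative this training data is.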
Max Matson 29:28
I see. Man, that's a very unique use case. I love that, right? A common theme on this podcast is that the value of a product you build around AI, or even just an internal feature, really does come down to the data that you train it on. I'm interested, though, to hear why you think it didn't work, why it wasn't able to really replicate your voice in a way that was meaningful and not redundant.
Nate Andorsky 29:55
My guess is there just wasn't enough training data. So the next step I would have taken is to train the model on my previous LinkedIn posts and blog posts, and to get a lot better at the prompt engineering piece. And I think that's where a lot of the value is for the industries that are looking to adopt this. It's not actually in using the model; everyone can tap into OpenAI, and I know Google has something, though I haven't used it. It's your ability to train the model on your data that makes it valuable, to get the type of information out of it that you want. That's where I think you're going to see a lot of companies that are able to build somewhat of a moat. When ChatGPT first came out, everyone was like, oh my god. But honestly, if you want to integrate ChatGPT as-is into your product, it takes a couple of hours; it's not hard, right? So everyone's doing it now. It's like having a Google search on your website. The power is really in the ability to train the model on your own proprietary data.
Max Matson 30:58
Hmm. I see. That makes sense. Makes sense. So having an interesting data set to begin with is kind of going to be a crucial aspect there. Right.
Nate Andorsky 31:06
Right. And understanding how to leverage it.
Max Matson 31:10
No, absolutely. So, kind of bridging the gap there: you are a serial founder, right? You've founded several companies.
Nate Andorsky 31:19
Yes. Not all successful, but I have.
Max Matson 31:24
Let's get into it. What, in your direct experience, is the difference between a successful venture and an unsuccessful one?
Nate Andorsky 31:32
I think it depends — how do you define a successful venture?
Max Matson 31:36
Well, I'll turn it back around: how do you define a successful venture?
Nate Andorsky 31:41
I mean, there are a couple of different ways, right? One is the typical venture-backed scenario, and those are just different types of companies. If you're going to take venture money, you've got to understand why you're taking it and what it's for. I'm part of Angel Squad, an angel community attached to Hustle Fund, and I've been on a couple of pitch calls. What's interesting is that a lot of the way VCs look at opportunities is not necessarily "is this going to work?" It's "if this does work, how big can it be?" So you can have a company that's actually doing pretty well — profitable and growing — but VCs might pass on it, because they might say, listen, this thing will never be big enough to provide the returns that we need. For me, success — and this is a little counterintuitive to the hustle culture — is that Patent 355 and my companies now help support the lifestyle that I want. That means I want to be able to not work weekends if I don't have to, and to not work after six. Because of that, I've been very intentional: I don't have any plans to raise money. If I take two weeks off, I don't want anyone asking, why aren't you working? So I think it depends on what you want out of the company — that defines success for you — and then understanding how to build the company as such.
Max Matson 33:01
Right on. So that's kind of the difference between a cash-flow business and one that's backed by venture, right? And — correct me if I'm wrong here — that's largely a function of the way venture does investment: there's a large pool of investments, and it's really just about upside.
Nate Andorsky 33:17
Exactly. It's kind of like: I'm going to throw money at a bunch of startups, and if one hits, because of the massive returns it's going to return the whole portfolio. And that's good for VCs, because you're one bet out of X number — but if you're the entrepreneur, that is your startup, right? And this is easier said than done, but say you have a SaaS product. If you don't raise money and you get to a million ARR, you can sell that for three, four, maybe 5x — that's a lot of money to take home. To take home the same amount of money when you've raised, you've got to build a much bigger company, because you get diluted through the funding rounds. Now, some companies do actually need venture capital from the get-go, just because of how capital intensive they are. But there are different ways to think through this, depending on what it is that you want to get out of it.
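The dilution math Nate gestures at can be made concrete with a toy calculation. All the numbers here (ARR, revenue multiple, per-round dilution) are illustrative assumptions, not anything from the conversation — the point is just how quickly the required exit grows once ownership shrinks:

```python
def founder_take_home(exit_value, ownership):
    """Founder proceeds at exit, ignoring liquidation preferences, taxes, etc."""
    return exit_value * ownership

# Bootstrapped path: $1M ARR sold at a 4x revenue multiple, founder owns 100%.
bootstrapped = founder_take_home(1_000_000 * 4, 1.0)  # $4,000,000

# Venture path: three rounds, each selling ~20% of the company.
ownership = 1.0
for round_dilution in (0.20, 0.20, 0.20):
    ownership *= 1 - round_dilution  # 0.8 ** 3 = 51.2% retained

# Exit value needed to match the bootstrapped take-home after dilution.
required_exit = bootstrapped / ownership

print(f"bootstrapped take-home: ${bootstrapped:,.0f}")
print(f"ownership after 3 rounds: {ownership:.1%}")
print(f"exit needed to match:   ${required_exit:,.0f}")
```

With these assumed numbers, the founder would need roughly a $7.8M exit on the venture path to take home the same $4M — almost double the bootstrapped exit — which is Nate's point about having to build a much bigger company.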
Max Matson 34:13
Very interesting. Correct me if I'm wrong, but I get the sense that maybe your view on this has matured through different ventures. Is that the case?
Nate Andorsky 34:23
Yeah. I think when I was in my 20s, I wanted a venture-backed company because — honestly, I wouldn't have said it at the time, but looking back now — I just wanted to tell a lot of people, get written up in TechCrunch, raise $100 million, right? I've talked to a lot of founders who've raised venture capital, and it's a lot of stress — which is fine for some people, if that's what you want. There's still this sign — it's not as strong as it has been in the past — this indication that raising money equals success. Which it is, in a way, but people are giving you money for your future potential, right? It's a liability. Someone says, hey, here's $10 million — you come back to me when you've turned it into $100 million. That's a lot of pressure.
Max Matson 35:12
Sure, no, totally. So if we were thinking about this on the individual basis, it would be like getting an article written up for taking out a giant loan, right?
Nate Andorsky 35:24
Right — they're investing not in what you've done; they're investing in what could be. So, you know, I kind of liken it to the NFL Draft. If you get drafted in the first round in the NFL, that's going to be slightly scary, right? All you have is the opportunity to not live up to all the hype given to you. I'd rather go in the seventh or eighth round, where no one knows my name — and then, all of a sudden, it's all upside.
Max Matson 36:02
Right. Yeah, 100%. That's a great point. I think that also bears out in the market, right? If you look at all these companies that raised at gigantic valuations — what, a year ago, two years ago — a lot of them are crunched now. They're overextended, they're doing layoffs. Would you say that's largely because of that raise-big model?
Nate Andorsky 36:25
I think part of it. A lot of it has to do with the money that's been floating around, and some of it is financial stuff I don't completely understand — the bigger macroeconomic climate has contributed to this. But like you said, it's hard, because if you went out and raised at a $30 million valuation two years ago, and now you're looking to raise more money, your company can actually be doing much better than it was previously — but just because of the market conditions, your valuation could be much worse. And that's out of your control. It's not your fault; it's just that valuations aren't as high. So you're kind of stuck in no man's land.
Max Matson 37:12
No, absolutely. So let's talk a little bit about what the business and investment landscape looks like right now, because I know a lot of people have questions. There are so many contrasting headlines coming out of the industry at all times, right? What is your rational take on the state of the industry at this moment?
Nate Andorsky 37:31
I think there was just a lot of overhype that a lot of people got caught up in over the last few years, and that's starting to settle down — I think that's the big thing. I also think it's harder than ever to raise money on just an idea. I see startups that have a deck with a product idea but no proof of concept, and unless you're a proven founder who's done this a couple of times, you can't raise on just an idea — unless you have some secret that no one knows, right? Like you've decoded some crazy DNA gene sequence or something. If it's a run-of-the-mill type of idea and you don't have any indication of traction, it's almost near impossible to raise. And it's getting harder, because technology is getting cheaper and the funding climate has changed. So that's probably the big thing. I also do some early-stage investing — by myself, but also on behalf of Patent 355 — and from what I've seen, the idea at the early stages almost doesn't matter. It's really about the team, and their ability to test quickly and really understand and validate the problem that they're solving, because the idea is going to change a million times as they go through the startup journey. And this has always kind of been true, but early on it's very much about the team. A lot of investors will say it's about the TAM, the total addressable market, because the returns need to be high enough — but I don't know, you can kind of spin your idea to be in whatever market you want. To give you an idea — give me a random idea. Anything random, right?
Max Matson 39:19
Let's say AI software for moms.
Nate Andorsky 39:24
Right. So, to help them do what?
Max Matson 39:28
Shop more effectively.
Nate Andorsky 39:30
Okay, right. So say you started really niche — like, shopping for grocery items, right? And then you're like, well, that's not a big enough total addressable market. So what if we just did shopping? And then you've made that market much bigger, so it's attractive for investment.
Max Matson 39:44
I see. Okay, gotcha, makes sense. So we've talked through, from your vantage point as a founder, how to incorporate behavioral science into the product development process. How is that a factor in the way that you choose your investments and your bets?
Nate Andorsky 40:04
I'd like to say yes. But the beauty of behavioral science is that we know these ideas, and yet we all fall prey to them. What's interesting is, I wouldn't say behavioral science helps me choose investment opportunities so much as it gives me the ability, when I'm on a pitch — specifically if it's something behavior related: a fintech company, a digital health company, an education company — to ask: is the team just building features and technology, or do they actually understand the behaviors they're trying to change? So for example, back to the fintech retirement-savings app I was telling you about: if I got a pitch from somebody saying, "we have this amazing onboarding experience where you can set your monthly contribution amount and pick all your goals," I would take a second and ask, do they really understand the underlying dynamics of how to change this behavior? That's an indication to me. I don't know if it's a good investment thesis, but I'll ask myself: does this make sense? Is this something that could actually catch on?
Max Matson 41:19
I see. So would you almost consider it building on universal principles? Behavioral science — and correct me if this is a mischaracterization — is almost trying to get to fundamental realities about human decision making. So even if the problem applies to a different ICP, or a different person down the chain, it's still going to have this kind of universal tether to human decision making.
Nate Andorsky 41:45
Exactly. And I love the way you framed that, because everything is a human behavior problem — some are just bigger human behavior problems than others. Sometimes what I see is companies addressing a human behavior problem — say you're trying to build technology that helps managers be better managers — and they're throwing technology at it. That's not a technology problem; that's a human behavior problem. So that framing helps me spot when a team is just throwing technology and features at a human behavior problem without understanding the behaviors they're trying to change. For me, that might be a really interesting-looking product, but I'm really skeptical it's actually going to change the behaviors, because the team doesn't seem to understand what behavior they're trying to change.
Max Matson 42:36
I see. Got it, got it. So even if it's framed in a way where people aren't doing this workflow, or aren't thinking about the problem this way — if you're tethered to that core question of "how do I align my answer with the way humans naturally think," it could still potentially be a success? Got it. Very cool. So, extending that a bit: how do you think about behavioral science in the context of a product's go-to-market?
Nate Andorsky 43:17
Yeah — defining your ICP, scaling? Which part of that is your question around?
Max Matson 43:23
Yeah, so a little less on the ICP side and a bit more on messaging. One problem we're running into in the AI industry is: how do we message without saying "AI" on our homepage 57 times? It's actually a habit of mine to go to AI companies' websites and just Command-F to see how many times they say it. But it's this problem of — I do think there's some behavioral science in it, whether we think through it that way or not. My answer to the problem isn't the way people are currently thinking about it, but it's rooted in the science of how humans actually make decisions. So how would you apply that model to messaging?
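Max's Command-F habit is easy to automate. A minimal sketch — the homepage copy here is a made-up string, and counting the exact word "AI" with a regex word boundary roughly mirrors what a case-sensitive Command-F for the standalone term would find:

```python
import re

def count_ai_mentions(text: str) -> int:
    """Count standalone, case-sensitive occurrences of 'AI' in page text."""
    return len(re.findall(r"\bAI\b", text))

# Hypothetical homepage copy; 'MAIN' should NOT match, 'AI-powered' should.
homepage_copy = (
    "Our AI platform uses AI to bring AI-powered insights "
    "to every team. See the MAIN menu for details."
)

print(count_ai_mentions(homepage_copy))
```

In practice you'd fetch the page HTML and strip the markup first, but the counting step is the same.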
Nate Andorsky 44:12
Can you walk me through the example? And then I'll try to riff on it.
Max Matson 44:17
Sure. Yeah, so let's say, typically, product managers are not the ones going in and finding problems in acquisition, right? But I've built this tool for a product manager who is going in and asking: who's our biggest lifetime-value cohort? How do I message to that person to say: our product not only tells you LTV, it also shows you the acquisition sources that have led to low- or high-LTV cohorts?
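The analysis Max's hypothetical tool performs — average LTV broken out by acquisition source — boils down to a group-by. A minimal sketch with made-up user records (all sources and dollar values are illustrative):

```python
from collections import defaultdict

# Hypothetical user records: (acquisition_source, lifetime_value_usd).
users = [
    ("search",   420.0), ("search",  310.0), ("search", 95.0),
    ("paid_ads",  60.0), ("paid_ads", 85.0),
    ("referral", 510.0), ("referral", 470.0),
]

# Accumulate sum and count of LTV per acquisition source.
totals = defaultdict(lambda: [0.0, 0])  # source -> [ltv_sum, user_count]
for source, ltv in users:
    totals[source][0] += ltv
    totals[source][1] += 1

# Average LTV per source, highest first -- the "which cohort is best" view.
avg_ltv = {src: ltv_sum / count for src, (ltv_sum, count) in totals.items()}
for src, avg in sorted(avg_ltv.items(), key=lambda kv: -kv[1]):
    print(f"{src}: ${avg:,.2f}")
```

Here referral users would surface as the highest-LTV cohort, which is exactly the signal the product manager in the example would use to decide where to focus.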
Nate Andorsky 44:48
Okay, so my question for you around this is: in that instance, what's the acquisition channel? You probably have a couple, I'd hope — which one do you want to focus on, at least to begin with?
Max Matson 45:03
Let's say search.
Nate Andorsky 45:06
Okay. And why does that product manager care about these metrics you just stated?
Max Matson 45:10
Right. So currently the model is: they're looking at LTV to determine what features they should build. What needs help? Is there some issue in the back end that's causing a feature to get low retention? What within the product is causing the low LTV I'm seeing from our recent users?
Nate Andorsky 45:31
And then why do they care about that?
Max Matson 45:34
I would say: in order to build the product the way you modeled it there — not just a feature for the sake of the feature, but to actually push the needle.
Nate Andorsky 45:44
Okay — and this is just what I do — why do they care about that?
Max Matson 45:51
Because they only have limited time and limited resources, right? They can't afford to build into every space; they want to stay concentrated on the people they're providing the most value to. And they're wondering: why aren't those people retaining?
Nate Andorsky 46:07
Okay. So here's what I'd start to do — and we do this at Patent 355, and this isn't even purely behavioral science; I'd just call it strategy. We actually test a lot of these through cold email, because it gives us super quick feedback loops. I would start to map — you started to get to it in our conversation here — what is the actual pain you're solving? And you've got to remember, and I don't know what your workflow is for getting people into the product, but if your goal is to get somebody on a demo call, the goal of that outreach message — or the search result, whatever it is — isn't to sell your entire product. It's to spark enough interest to get them to the next step. So I would get to the pain you're solving, use that as the lead and the hook, bring them into the flow, and then eventually introduce your solution to that problem.
Max Matson 47:05
I see. Okay, got it. So it's working backwards.
Nate Andorsky 47:10
It's working backwards. And I think you also have to understand: there are some people who are going to be typing in, looking for "AI blah-blah tool" — great, got them. But a lot of people just have the pain. We do a lot of qualitative research around really understanding the problem you're solving. I'll give you a perfect example. We have a client that built a tool that helps communications directors at small nonprofits build online annual reports. One of the big problems we found when we were doing our research is how much time and money it takes just to create the report. The tool had been, and is still, positioned around this idea that you can digitize your annual report — you can make it interactive, like a mini website. That's the product. But a lot of times the problem you're actually solving is saving them time — and money.
Max Matson 48:04
I see. Okay, so then we might say: the reason the LTV here is low is that the people you're capturing are looking up something tangent to this problem, but not the pain point itself. So they're coming in with a different expectation than what you're actually delivering.
Nate Andorsky 48:25
Right. So in your example — and I don't know how this trickles out — one of the problems you could be solving with your product could be feature bloat: "I have too many ideas in my product roadmap; how do I narrow those down?" Oh, actually, one of the ways you can narrow it down is, we have this product that helps you do this, this, and this, and then you can figure out how to take your product roadmap from 20 features to three.
Max Matson 48:49
Right, right — I like that. And if you're speaking to the kind of entrepreneur who's in that spot, where they're a bit bloated, would you recommend they pare down, cut off the excess?
Nate Andorsky 49:06
It depends where I am in the company, right? If I'm a product manager at a Series A company, I've got very different problems than at a Series C company — or B2B versus B2C, whatever. So it's also about becoming really clear on how your ICPs segment out. Because if I'm a product manager at, I don't know, a Series B or C company, there's a good chance I don't even have insight into the entire end-to-end journey — I've only got a piece of the funnel. Whereas if I'm at a seed or Series A company, I probably own the entire journey. So the positioning is going to be very different. If you're like, "hey, listen, we can help you with acquisition," I'm like, well, I just deal with retention — I don't even know what goes on in acquisition.
Max Matson 49:50
Makes sense, makes sense. So, pivoting slightly: I'd love to get your takes on what the future looks like when it comes to behavioral science — how it's going to guide the next generation of products, and how you see that playing out.
Nate Andorsky 50:04
Yeah, it's a really interesting question. I've thought a lot about it, and I've written about it in my book. I think there are a couple of different pieces to it. Everybody wants a silver bullet, right? And when behavioral science started to get popular, it became that silver bullet — and there's no such thing. So right now we're actually seeing a bit of a cool-off in the behavioral science space about how great it is. But I've always believed that behavioral science is about a decade behind where data science is, in terms of people's adoption of the methodology — and I think that's still true. What's going to happen is that it's going to follow a pretty similar trajectory to data science, where you start to have behavioral scientists who specialize in different areas.
Applying behavioral science to organizational change management versus user acquisition — same underlying theories, but really different work. It's kind of like being a doctor: a gastroenterologist versus a pediatrician. You're both doctors, but you need different training, right? So you're starting to see some segmentation, not only by industry but also by use case. And then behavioral scientists are starting to upskill across different domains — say, a behavioral scientist with a strong UI design background — and begin to bridge those gaps, so everything isn't so segmented. That's my guess.
Max Matson 51:32
I see. Got it, got it. So I think we're at least sniffing around the same process in AI, right? Instead of thinking of it as AI the field, it's AI for this application, and this application, and this application. Kind of the same thing there.
Nate Andorsky 51:50
Right — you don't just hire a data scientist; you need data scientists for specific use cases. So you're starting to see the same thing: there are data scientists who specialize in natural language processing, or whatever it is.
Max Matson 52:03
Yeah, absolutely. So, one final question for you. What is — if you have one — your hot take on the state of technology, the industry in general, behavioral science, AI, whatever? Your hottest take.
Nate Andorsky 52:20
I think AI, at least for the time being, is kind of a — what's the saying? Like, the dog who caught its tail?
Max Matson 52:27
No, yeah. Yeah.
Nate Andorsky 52:28
Is that the saying?
Max Matson 52:29
I think it's "dog chasing its own tail," something like that. Yeah — that eventually…
Nate Andorsky 52:33
The dog that caught the bus — that's the one. Good thing I didn't get those mixed up, because those are two very different things. You know, we had been waiting a while for this AI revolution, right? And you can argue whether it has or hasn't come yet, but something has come, and everyone's really excited. Now we're in this place of: what do we actually do with it? How do we apply it in a way that's not just a shiny object, but actually helps us in our day-to-day? And I think that's a lot of what you'll start to see: the companies that are successful will really understand what I was talking about earlier — how do you take this technology, really understand the problem you're solving, and use the technology to help solve that problem, versus just throwing AI on anything you can possibly find?
Max Matson 53:23
Right, right. It's a fantastic point — I would definitely agree with that, and I think you see a lot of examples of those types of companies now. Well, Nate, this has been fantastic. Thank you so much for joining me. Is there anything else you'd like to talk about?
Nate Andorsky 53:41
No, I think that's it. For anyone listening — hi, Mom — if you want, grab a copy of my book. It's on Amazon; just type in Decoding the Why. And I'm on LinkedIn. I like to talk about this stuff, so if anyone wants to DM me anything, I'm an open book. Feel free to reach out.
Max Matson 53:59
Awesome. It's just Nate Andorsky on LinkedIn, and I can confirm you're a great follow.
Nate Andorsky 54:03
Thank you. Appreciate that.
Max Matson 54:06
Awesome. Well, thanks so much, Nate. Talk to you soon. Talk to you soon.