William Bakst 0:00
Essentially, all of the machine learning we're seeing is coming out of big tech companies: Google, Facebook, Apple, Microsoft, Amazon, OpenAI, you name it. And what really differentiates these companies from the supermajority of companies in the world is that they are massive. They have these ginormous datasets and huge reach. And so for them, black-box modeling isn't a huge issue. One, because they have these datasets with billions of examples. But the second thing is, if you eke out a 0.1% improvement in your model's performance at Google, that could generate billions of dollars in ad revenue.
Max Matson 0:44 - Intro
Welcome to the Future of Product podcast, where I, Max Matson, interview founders and product leaders at the most exciting AI startups to give you an exclusive glimpse into the workflows, philosophies, and product journeys that are shaping the current and future AI landscape. This week, I sit down with co-founder of SOTAI and former Googler William Bakst to learn more about how he and his company are making AI models that make sense. With all that said, let's dive right in.
Hey there, everybody, welcome to Future of Product. Today I'm interviewing Will from SOTAI. It's a very interesting startup. Will, I'm going to ask you first to explain to me what SOTAI does like I'm a five-year-old, and then second to tell me what SOTAI means.
William Bakst 1:29
Yeah, I guess we can start with what SOTAI means. It's an acronym for state-of-the-art interpretability. It comes from some of the research I did in my work at Google on interpretable machine learning, which I know is a mouthful. And to explain what we do to a five-year-old: we have more data than we've ever had before. We've got databases filled with data, we've got Excel spreadsheets filled with data, but it's pretty tough to capitalize on that data effectively. So on the one hand, we have handwritten rules, things where you can explicitly define exactly how it's going to behave according to what you believe. The problem is that even though that's transparent and makes sense, it's really hard to account for everything and to get really good results, because you're doing it by hand. On the other side, we have what we call black-box modeling. I think the term kind of defines itself, but the idea is you give it a bunch of this data, it learns from the data, and it spits out predictions. It's really flexible and really powerful, it has a lot of predictive power, but it's really hard to understand why it's doing what it's doing, or how. So you lose all of the transparency of handwritten rules. The goal of SOTAI is to provide a modeling technique that we call calibrated modeling. These models lie right in the middle: they have the transparency and rule-based guarantees of handwritten rules and heuristics, but they have the flexibility and power of a black-box model, just without the black box.
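To make calibrated modeling a little more concrete: one common building block of such models is a piecewise-linear calibrator, where the learned function is literally a list of keypoints you can print and read. The sketch below is a minimal, hypothetical illustration of that general technique, not SOTAI's actual implementation, and the keypoint values are made up.

```python
from bisect import bisect_right

class PWLCalibrator:
    """A piecewise-linear calibrator: the model IS its keypoints."""

    def __init__(self, keypoints):
        # keypoints: (input, output) pairs; the whole "model" is this list,
        # so inspecting it tells you exactly how a feature is transformed.
        self.keypoints = sorted(keypoints)

    def __call__(self, x):
        xs = [kx for kx, _ in self.keypoints]
        ys = [ky for _, ky in self.keypoints]
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        i = bisect_right(xs, x) - 1
        # linear interpolation between the two surrounding keypoints
        t = (x - xs[i]) / (xs[i + 1] - xs[i])
        return ys[i] + t * (ys[i + 1] - ys[i])

# Hypothetical calibrator for square footage: reading the keypoints is the
# explanation -- value rises steeply from 500 to 3,000 sq ft, then flattens.
sqft = PWLCalibrator([(500, 0.0), (1500, 0.4), (3000, 0.9), (5000, 1.0)])
print(sqft(1000))  # halfway between the first two keypoints: 0.2
```

In a full calibrated model, each feature would get its own calibrator like this, and the calibrated values are combined by similarly constrained layers, so every stage stays inspectable.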
Max Matson 3:09
I see. Okay, gotcha. So you're kind of adding clarity to something that right now is a fully black box.
William Bakst 3:17
Yeah, exactly. And I like to focus in on the term interpretability, because explainability is a big space in AI right now; the acronym is XAI. And I find explainability to be somewhat of an afterthought. It's like, hey, we have these black-box models, they're really powerful, and we want to know why they're doing what they're doing. Interpretability, at least the way I see it, is a bottom-up approach. So rather than saying, hey, we have these models, let's try to explain what they're doing, it's: hey, let's make models that are just as powerful, but from the get-go are constructed with the ability to understand what's going on in mind.
Max Matson 3:55
Okay, gotcha, gotcha. That makes a lot of sense. You mentioned earlier that you had worked at Google previously. Would you mind telling me some of the things that you were able to pick up from that experience and how that led you into your journey with SOTAI?
William Bakst 4:09
Yeah, definitely. So I worked in Google AI on a team called Glassbox, which is a pun on the term black box. It was mostly researching interpretable machine learning systems; we were releasing research papers with state-of-the-art results. I like to call it calibrated modeling, and I'll get a bit more into why I call it that later. But what ended up happening was I spent a majority of my time working with product teams to help them launch these models in their products. And what I found was that many product teams, even inside of Google, were using handwritten rules and heuristics. With this whole AI craze, they were looking to upgrade their systems, upgrade their products, upgrade their decision-making workflows. But they weren't willing to just take the deep dive into black-box modeling and use deep neural nets, because not being able to understand why the model is making its predictions, or how it's making them, doesn't really enable you to trust the results. And if you can't trust the results, you're less likely to integrate it.
Especially if you're a product manager or a product engineer, and you're not a machine learning expert or a data scientist, it can be tough to take that leap of faith. And so teams ended up reaching out to our team, because they were like, hey, we want the power of black-box modeling, but we want the transparency of your team and your product. And it was awesome; I got to launch a lot of models with some really cool teams. Every time you use Google Maps, it runs through a model that I launched with them. There are models that run in Search, models that run in Ads; I obviously can't go into the specifics. One thing I found really frustrating was how inefficient that process was. The process of implementing these models with a product team was: the product team reaches out to us, and we have an icebreaker call where they tell us about their use case and what they want to do, and we tell them about our models and why it's worthwhile. With this type of research, it's pretty difficult to explain to someone who's not familiar with the space how these models work, why they're beneficial, and why you get the power of black box without the black box. And so it creates this constant back and forth, where we're trying to get information from them to effectively build and train these models, and they're trying to extract information from us to understand how we're actually helping. Eventually we'd get to a point where it was really cool to see these product teams feel that wow factor, where they finally understood the benefit and we finally understood their use case, and we'd converge on: hey, we know what we're doing. And the product teams would feel like, oh wow, this is really cool, we're finally able to upgrade our systems in a way that makes sense.
And I think that was the key phrase I was looking for. When I studied AI in school, and when I used it at Google, it was really rare for someone to use it and leave thinking, that made sense.
It was almost always, oh, I can see the power, or I can see it being useful. But it was never, that makes sense. So that ended up becoming what I wanted to chase: can we produce AI systems that just make sense, where I implement them, you use them, and you have this feeling of understanding, ease of use, transparency? The main goal for me was: is it possible to remove that constant back and forth? Can we simplify the flow, almost to the point where we could remove myself and my research team entirely from the product implementation flow? When I joined the research team, my goal was to continue research and build out the tooling; I didn't expect to just be building models and training them, which was more like applied machine learning. So I wanted to remove myself from that flow and give product teams the ability, without us, to very easily configure, train, and analyze these models, and get to that wow factor as quickly as possible. And so that's where SOTAI was born.
Max Matson 8:34
Got it, that makes a ton of sense. You use this term black box, and I think everybody's heard it, especially in relation to current AI. But would you mind talking a little bit more about the costs that come with these black-box models?
William Bakst 8:48
Yeah, definitely. So first, it's probably worth further describing black boxes a little bit, because I think the term can be taken as, we have no idea what's going on. But the reality is, we know exactly how the models are structured. You can dig into the structure and look at it and say, oh, this node is connected to this other node. The problem is that it's the structure itself that makes them black boxes. You have this input come in, and then every input feature goes to every node and gets combined in some way, then run through another transformation and combined in another way, until eventually you get this prediction. But no particular node in the graph actually has any defined meaning. The model knows what it means, but that meaning is hidden, which is why in deep learning the layers in a model are called the hidden layers. That name exists for a reason: the meaning is hidden. You don't really know what's going on; the model is just figuring out the best way to do it based on the data. And so the example I really like to use to describe the pitfalls of black-box modeling is a real estate agent who wants to more effectively price their clients' homes when they put them on the market.
So they train this black-box model on historical data of all the homes they've priced, changes in the price, and what the final sale price was. They put this new house in, and they get some prediction, let's say a million dollars. And then they go back and realize, oops, I mistyped the square footage; it's 1,500 square feet, but I put in 1,000 square feet. So let me bump that up to 1,500. And then the model produces a prediction of $800,000. In that moment, you're like, huh, increasing the size without changing anything else should only really increase the price. Now, of course, market conditions change and other features will impact the price, but if nothing else is changing, a larger property should be more expensive. That's kind of a fundamental rule of real estate. And it's in that moment that a data scientist, or a real estate agent, or anyone using this model is going to start questioning the results. They're going to start questioning, well, what happens if we change any feature, especially the features we think are impactful? And it's in that moment where you just lose trust in the model, and you're going to go back to what you were doing. So I think that's the primary pitfall of black-box modeling: there's nothing really you can do about that. The only solution is to gather significantly more data, and then just hope that the model learns what you want it to learn.
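The square-footage surprise described here is exactly what a monotonicity constraint rules out. As a hedged sketch (illustrative only, not SOTAI's code; the keypoints and deltas are made up, and real systems enforce the constraint during training rather than with a clamp): if a calibrator's keypoint outputs are built as a running sum of non-negative increments, a larger house can never get a lower predicted price, by construction, regardless of the data.

```python
from bisect import bisect_right

def monotone_calibrator(xs, raw_deltas, base):
    """xs: sorted keypoint inputs; raw_deltas: unconstrained 'learned' steps."""
    ys, y = [], base
    for d in raw_deltas:
        y += max(0.0, d)  # clamp: each step up in size adds >= 0 to the price
        ys.append(y)

    def predict(x):
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        i = bisect_right(xs, x) - 1
        # linear interpolation between surrounding (monotone) keypoints
        t = (x - xs[i]) / (xs[i + 1] - xs[i])
        return ys[i] + t * (ys[i + 1] - ys[i])

    return predict

# One hypothetical "learned" delta is negative (the 1,000 -> 1,500 sq ft step)
# and gets clamped, so bumping the size can flatten the price but never cut it.
price = monotone_calibrator(
    xs=[500, 1000, 1500, 3000],
    raw_deltas=[0.0, 200_000.0, -50_000.0, 400_000.0],
    base=300_000.0,
)
print(price(1000) <= price(1500))  # True: no $1M -> $800k surprises
```

Libraries such as TensorFlow Lattice implement this kind of constraint in trained models; the point here is only that monotonicity can be a structural guarantee rather than something you hope the model learns from more data.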
Max Matson 11:35
I see what you're saying; that makes a lot of sense. From an industry perspective, what is your take on black-box modeling as a standard in the industry? Do you think it's something SOTAI is hoping to disrupt? And if so, for product teams, how do you see that bearing out?
William Bakst 11:53
Yeah, I think I have a pretty hot take on this, honestly, so take it with a grain of salt. Right now, essentially all of the machine learning we're seeing is coming out of big tech companies: Google, Facebook, Apple, Microsoft, Amazon, OpenAI, you name it. And what really differentiates these companies from the supermajority of companies in the world is that they are massive; they have these ginormous datasets and huge reach. And so for them, black-box modeling isn't a huge issue. One, because they have these datasets with billions of examples. But the second thing is, if you eke out a 0.1% improvement in your model's performance at Google, that could generate billions of dollars in ad revenue.
And so for them, it makes sense to use these black-box models, even at the cost of transparency, because they're generating more money, and that's what they want. The problem is that for the supermajority of companies, small, midsize, even large companies that don't have as much data, or don't have the resources to focus big teams on eking out that little bit of performance, the lack of transparency often just prevents the use of AI completely. As a result, you now have these companies that are somewhat dependent on big tech companies releasing models they can use, with no real control over the direction of their product. It's tough to fine-tune these models to their data or make sure the original training data was close to what they wanted. One thing that's really cool about the models I was researching, the models that power our platform, is that, yes, they work really well with really large datasets, but they also work really well with really small datasets. So even if you only have 1,000 or 10,000 examples, you can still find value using these models. Especially because you're not just looking for predictions; you're looking for an iterative cycle where you use the model to better understand the data, gather insight, and use that insight to get actionable results: change your procedures, change how you interact with your customers, figure out which features are the most important. And so one of our main goals is to revolutionize the way that product teams, and any team in general, are making decisions.
Max Matson 14:22
Gotcha. That's super interesting. It's almost like democratizing the AI, right? What would you say is the threshold in terms of size for a company where something like SOTAI would become useful? Because I've heard a lot of people with objections saying, you know, we don't have enough data, I don't know when we'll have enough data. When would you say that line is crossed?
William Bakst 14:46
I mean, I don't really think there's a line. If I had to give one, it's like, yeah, if you only have 10 data points, just use handwritten rules. But let's say you're an e-commerce company: if you've made 1,000 sales, you could still benefit from using these models.
Max Matson 15:08
Makes a ton of sense to do it then, right? So, question: it sounds like you're serving teams of all different sizes, potentially. Who is the person you're hoping to get this tech into the hands of? Is it a product lead, a head of data? Who is that person?
William Bakst 15:28
Yeah, I think right now the goal is the early data scientist. A data science PhD or a statistics PhD could still benefit greatly from using the tool, because it can provide a lot of the machine learning and analysis tooling they would otherwise have to do by hand. But I think a lot of the real benefit is going to come from data scientists who are either just getting started or have been in the field for a few years. They're using scikit-learn and struggling to really eke out value there, or they're using black-box models and don't really know what's going on, and it's tough to improve. So we want to provide a way for these data scientists to capitalize on their knowledge and their domain expertise, and create an iterative flow where they can actually eke out performance over time and help drive key decisions, drive product changes, drive product integrations.
But ultimately, our end goal is kind of in line with the whole GPT and LLM craze, which is that I envision a future in which anyone with data should be able to capitalize on it in a way that makes sense. With GPT and LLMs, it doesn't necessarily make sense to people, but they're finding value. Anyone can now go to ChatGPT, type something in, and be like, wow, this is really cool. I want that same wow factor for, I have an Excel spreadsheet and I don't know what to do with it. I want them to come to our platform and, without any experience, find value from it in a way that was previously not possible.
Max Matson 17:07
I love that. It's one of my favorite things about SOTAI, that you guys really are making machine learning a lot more accessible. And I think that's going to be one of the major trends, right? It's like any technology: it starts out very much in the hands of people who are deep in the tech and able to manipulate it directly, and then it always works out and expands into the general populace. So that being said, how do you see AI evolving over the next, well, I'll let you pick the time period, and really becoming more accessible?
William Bakst 17:45
Yeah, with respect to our company, our goal is to make it accessible as soon as possible. I could say four weeks, I could say a couple of months, but I think the goal is just to move as quickly as possible and make it as accessible as possible as soon as we can. But if we look at the acronym for our name, state-of-the-art interpretability, there's nothing in there that's necessarily specific to tabular data or calibrated models. So I think the end goal is: can we eventually apply the techniques we've figured out for tabular data to natural language processing, to computer vision, to reinforcement learning, to generative modeling? One of the cool things about this technique is that it creates an iterative dev flow where you can iteratively improve the model without necessarily needing more data, just by understanding the data and understanding the model. I'd love to be able to apply those techniques to natural language processing in particular. We're seeing a huge explosion with LLMs and GPT, and I think it's a little scary that they can lie, as powerful and as useful as they are, at least until we have a sense of why they're doing what they're doing. If you ask a person, hey, you did this, why'd you do it? Yeah, they could still lie to you, but for the most part they're going to give you some insight into their mind. And I feel like the reason that's valuable is that we can relate. It's like, oh, I'm a human, you're a human, we probably think similarly.
I can extrapolate and maybe tell whether you're lying or not; maybe you have tells. With GPT, there is no human interaction layer there. And so I feel like we really need some level of transparency. My hope is that in the next year or two, we're going to have any company with data scientists, product managers, and product engineers able to come to our platform, upload their data, and find value. On a longer-term horizon, if we look closer to 2030, five, six, seven years out, I think the goal is to be able to apply these techniques to any aspect of AI, such that anyone can use AI in a way where they leave feeling like, that made sense. And if we can achieve that goal, we'll reach a point where the supermajority of internal processes and decision making is all data driven. Right now we say a lot of things are data driven, but I think the analysis is lacking. So I'm looking forward to that future where, if you want to make a decision, you can just go to a model and say, hey, here's my data, here's what I'm thinking about doing, give me a plan of action and why, and feel confident in those results and actually act on them.
Max Matson 20:34
Absolutely. You know, one theme I've been finding with a lot of my guests is changes in the data landscape today. A lot of companies, especially larger companies, are very siloed off; departmental data silos are obviously a big problem. But what I'm seeing is that, from analytics companies to data warehouses, everything is kind of getting pushed together, right? So you've got this one big data lake, and every department is just looking at it from a different angle. With those shifts in the landscape around data, how do you see that factoring into the mission of SOTAI as you guys move forward?
William Bakst 21:14
Yeah, what's really cool about a lot of aspects of machine learning is that when you increase the feature space, you can often get better results. I really like the saying, a jack of all trades is a master of none, but oftentimes better than a master of one. I like it because the concept is that even if you're not a master of one thing, having knowledge of a breadth of things can actually make you better at each thing by bringing different perspectives and different knowledge. And I feel like the same is true of machine learning. So, let's say you're a sales team.
You've got all this sales data from your CRM. So you download a CSV file from Salesforce, you upload it to SOTAI, and we help you build a model to do lead scoring. Great. But oftentimes a lot of that is going to require knowledge of your marketing, right? So now, if you have the marketing data fed in as well, you're going to get a better picture of what's going on. And hey, maybe you've got some new products in line; if we have information about that, we might be able to better predict sales based on products that are on a waitlist and things like that.
Max Matson 22:22
Yeah, absolutely, makes a ton of sense. So I want to get a little bit more into your background. You're an engineer yourself, right? Yeah. Obviously a product thinker, a founder. What inspired you specifically to strike out on your own? I mean, obviously, Google is a cushy place to be.
William Bakst 22:45
Yeah, I guess I can give some more backstory. Prior to Google, I grew up in New York. My dad is an entrepreneur, and my mom left her role to raise us but then got into real estate, and as a real estate agent was kind of her own entrepreneur running her own business. And if we look out into my extended family, they're all kind of running their own businesses. So growing up, a lot of it was just being around entrepreneurs and founders, people running their own thing. And then, of course, I got out to Stanford for my undergrad, and the ecosystem there is very much, start your own company, strike it big. But I never really, while I was there, landed on the right idea or something I was passionate about. It's funny to look back on some of the ideas I had and now see them as very successful companies. A lot of people would think, oh, do you regret not doing that? And the answer is no, because at the time, even though I thought the idea could be successful, I didn't really resonate with it.
It didn't really feel like the right fit. My co-founder, Linus, I actually met freshman year of college, and I think after our first meeting I decided, okay, I'm going to start a company with this guy one day. I was pitching him ideas for, I think, eight or nine years before I finally hit the mark and got him to join. At the time I just wasn't ready to do it, and I wanted to get into the AI space more. I was doing a master's in AI, and, striking on our earlier point about these big tech companies really being at the forefront, I figured, hey, it'd be a really good idea to just go to a big tech company and work on an AI team to get a firsthand perspective of where the space is going, where it's heading, and what we can do. And so that was where my decision to go to Google came from: not out of the cushy lifestyle or work-life balance, but out of, I want to be at the forefront of AI, and this is where I feel like it is. But then the problem was, I got really bored. When you're a small cog in a giant machine, you can have a lot of impact, but you don't feel it the same way.
Like, I had teammates who were building models that would ultimately generate a billion dollars in revenue. But it doesn't feel like that; you don't feel like, oh, I just did this crazy thing. And for some people, they see that number and it's good enough. But for me, I wanted more of that drive and ambition. I also felt like it wasn't too much of a meritocracy. One thing I found was that Google was very much becoming flatter and flatter, where there wasn't really room for growth. I ended up getting promoted, and after my promotion I thought, hey, I'm going to work my butt off and try to shoot for an exceeds-expectations rating. I went up for that rating, and the response I got was: you're exceeding your peers at your new level, but you just got promoted, and we want to see you do that for longer, so your rating is meets expectations. And yeah, I quit like two weeks later, I think. Or at least gave my notice. Because I was just like, no, if working harder doesn't help me in any way, there's no incentive to work harder, so I'm just going to do the bare minimum to get the rating I should get at the time. And that's really not a good feeling. So that, along with growing up around entrepreneurs, made me feel like, hey, if I can just iterate on an idea in a free space outside of Google, I'll probably be able to land on something I'm excited about and find that drive, find that ambition. I'd been looking for a sense of accomplishment, and now I feel like I'm getting that every day in my work.
Max Matson 27:07
Awesome. Do you like the founder lifestyle?
William Bakst 27:10
You know, it has its pros and its cons, but I'd say I'm definitely loving it. It's a lot of work. I think a lot of people look at the founder lifestyle and think you do a couple hours of work here and there, and then you strike it lucky or capitalize on your network. But when you meet a lot of really successful founders, the common denominator is the grind. And devotion and commitment and ambition. I'm always impressed when I see the most successful founders, just how hard working they are. They're working all the time, to the point where they're like, I'm going to use Instacart to get my groceries, because that hour I would spend shopping would be better spent working. Not to be lazy, you know?
Max Matson 28:02
Yeah, it goes full circle, right? Exactly, exactly. That's funny. Our founder over at PlayerZero is also technical, very hands-on with the product, building all the time. I feel like founders like you have a really interesting balance to strike, right? Because not only are you guiding the product, responsible for making sales, responsible for building GTM, you also have to make sure that it works. So that being said, do you have any advice for other technical founders, people who potentially want to use their skills to become founders?
William Bakst 28:42
Yeah. I think my number one piece of advice would be: don't do it alone if you don't have to. For non-technical founders, it's actually easier to do it alone, because you can hire a consultancy to build things out and get things started while you do all the non-engineering stuff, and then you build out an engineering team. And at the end of the day, the CEO of a company is often doing a lot of things alone anyway. But building something alone is tough. Not having someone to bounce ideas off of is tough. Having someone to review your code, to challenge the way you're thinking of doing something, to make sure you're doing things in a way that makes sense, to get a different perspective, I think is extremely important, invaluable. And that doesn't necessarily mean you need a co-founder; it could just be an early lead engineer, if you have some early funding, or you can go into your pocket and pay a consultant to work with you. Getting started alone seems really easy, and you start iterating on stuff, but having someone to bounce ideas off of is invaluable. So if you have that opportunity, I would take it.
Max Matson 29:53
Absolutely. Would you mind telling me just a little bit more about, actually, okay, how old are you guys, by the way?
William Bakst 29:59
Yeah, I think we're both 27.
Max Matson 30:03
What was that process? Like when you started bringing additional people onto the team? I mean, that's a really exciting time.
William Bakst 30:09
Yeah. So we actually brought on our first employee almost exactly one year to the day after we founded the company. And it was definitely an interesting experience, mostly because we had been working just the two of us for so long that we were kind of in our own world. So when we brought someone on board, it made us take a step back and reevaluate a lot of our internal processes. I think one of the most beneficial things about bringing our first employee on board was that he started asking really good questions from the get-go, while he was onboarding and after he was done onboarding. He wasn't afraid to say, why are you doing this? I don't understand. And a lot of times it was, oh, well, actually, we just don't have a really good reason for doing that. We just started doing it, it was what we were doing, and it was moving quickly. And hey, maybe you're right, maybe we should think about this or do it differently. So it was really good to get that other perspective, especially because Linus and I are both ex-Google, coming from very similar tech stacks and very similar methodologies. It's really nice to have someone come in and say, hey, maybe we shouldn't do it this way, because we're not at Google and we don't have all the support they had for these things.
Max Matson 31:31
Totally. I'd love to hear a little bit more about that. I know process can be a real battle early on, and I know you guys are a small team. What were the difficulties with actually implementing process? Do you feel like you're at the point where you've got one that you like? What does that look like?
William Bakst 31:50
Yeah. Have you ever heard the saying that you'll never like your own product? I think that applies to pretty much everything. I think Linus and I have slightly different methodologies on this. Linus is very much more on the side of, what we have works well enough, let's work with it. And I'm very much still on the side of, let's try to get, like, the best thing possible. And I think what's great about us being on opposite sides is that even though sometimes that causes a little bit of, like, heated discussion, or, you know, butting heads on certain things, it ends up having us meet in the middle, where sometimes, where I think we should do something, it actually turns out maybe we shouldn't, we should stick with what we have. And vice versa. And I think it ends up creating a pretty good direction. But have you heard of, like, Agile Scrum frameworks? Yeah, yeah. Yeah. So, you know, I think a lot of people use agile frameworks for engineering.
But I think one of the biggest benefits of agile frameworks is that you can use them for internal processes as well. So at the end of each sprint, we have what we call our retrospective, where we go over and say, hey, what should we stop doing? What should we keep doing? And what should we start doing? And that's not just limited to engineering stuff. So it could be, hey, there's this tool that we're using, and, you know, it's fine for now, but I don't like it for XYZ reason, maybe we should look into another tool. And then that kind of sends me off into, like, an hour-long rabbit hole of looking up different tools, doing some research, and hopefully improving the process. And so I think it all boils down to that iterative improvement. The same way we're trying to iteratively improve the models in our machine learning, we want to iteratively improve our engineering flow, iteratively improve our product, and iteratively improve all of our internal procedures. Yeah, I mean, I'm actually right now in the process of testing out some new tools, because we use JIRA, and I think all four other people on the team right now are like, JIRA's okay, it's good. And I'm like, I really don't like it. Yeah. And so it's easy enough for me on my own to go spend 30 minutes playing around with some new tools, come up with a proposal, and pitch it to the team. And if the answer is, hey, actually, I don't think it's worth the hassle of making the switch, then we just table the discussion. And if the answer is, hey, let's make the switch, then it was easy enough, I went and did it, you know, in my free time. I think the end goal is, we'll never be 100% happy with where we are now. And I think that's what makes a great startup team, is that we're always iteratively improving everything.
Max Matson 34:32
I love that, treating your processes like the product. Right? Exactly. Crucial. So you and your co-founder Linus are both ex-Google engineers, right? Yep. Gotcha. Would you mind talking a little bit about what it looks like wearing all those different hats as a founder? I know that your team is primarily, if not all, engineers. Yeah, it's all engineers right now. Gotcha. Gotcha. Yeah. So a very heavy product slant. How do you guys approach, you know, kind of going to market and doing the more squishy pieces of the role?
William Bakst 35:07
Yeah, I mean, when Linus and I first started the company, I think from the start he was very clear about that being my role. I think early on, people asked us, hey, you're both engineers, how did you decide who is going to be CEO and who is going to be CTO? And the answer was, it was decided for us by who we are. I am much more of, like, the active, social, writer type. You know, in my background from high school, there's a lot of, you know, I learned how to write from English PhDs, and I've spent a lot of time writing and a lot of time doing kind of all the squishy stuff. And so it was a really easy decision, because Linus is just an incredibly talented engineer who doesn't necessarily love all of that squishy stuff. And so very early on, that created a really nice separation, where, when the squishy stuff came up, you know, I took it on. But, you know, I think we're going to hit a point pretty soon where I'm not going to be able to handle all of that stuff alone. And so we'll probably bring someone on board to kind of help manage some of the more squishy stuff, as you said.
Max Matson 36:12
Gotcha. Gotcha. Makes sense. So, kind of, you know, changing gears slightly. I know that I sent you a couple of questions before this, but I just want to get your honest opinion: where do you think AI is going to take us in the next 10 years? And this is as broad or as narrow as you want to interpret it. Yeah, that's a big question.
William Bakst 36:38
No, I mean, I really like to think of it in the context of, like, exponential functions. You know, when you look at an exponential function as a graph, there's a huge flat region, and then all of a sudden it skyrockets, right? And we have no idea where on that graph we are. And so, you know, when I look 10 years out, in the context of all this recent LLM and AGI stuff, I'm feeling a lot closer to that, you know, spike. So I think it just totally depends on how close we are. You know, I think if we're really close, 10 years from now we're going to see a world where pretty much no one has to really do anything technical. We're all going to shift to being creative thinkers, utilizing, you know, automation tools and artificial intelligence to do all of the technical stuff. Right now, why bother learning computer science when you could learn how to think like a computer scientist, and then just use AutoGPT to go build your product for you? And then, I think if we want to get really, really crazy, in the context of, like, nanotechnology, if we have nanotechnology powered by AI, I think we're going to see a whole new landscape of how humans live and how we do things.
Max Matson 38:02
That's exciting. So I actually got a book here, I don't know if you've heard of this one, Superintelligence, by Nick Bostrom. I haven't checked it out yet. Yeah, I would recommend it. It is depressing, I won't lie. His take is, and I'm very interested to hear your opinions on it, given how you talk about this exponential curve, that, you know, as soon as we hit that point, essentially we've hit this point of no return. And it seems like you have a pretty balanced view of kind of, like, the role of AI. So I would just love to hear your opinion on artificial general intelligence, obviously, being this huge, you know, unknown. Do you have any opinions?
William Bakst 38:45
Yeah, no, I mean, it's funny, I just had a conversation the other night at dinner with someone where I realized afterwards I was probably scaring him a little bit. The idea was that, like, we're going to become cats to the AI. You know, you see, like, cat people who have these, like, installations on their walls and outdoor jungle gyms, and they're building all this stuff for their cats to have this, like, nice cushy life. And, you know, I feel like there's a chance we become those cats for AGI, where, you know, you can imagine a world where these robots have, like, human pets. And, you know, they're like, oh, look, my human is writing a song, how cute. Or, oh, he's writing a book, so cute. Or, oh, look at him coding away, so cute, I'm going to go build this interdimensional travel system so he can go have a playpen in this other dimension, you know? So it's pretty scary to think like that. And I think you can take that positively or negatively. You can look at it as, you know, all the problems of society are going to disappear, and we're going to become essentially animals with the freedom to do whatever we want and the resources to do whatever we want. Or you look at it as, you know, essentially some form of domestication and enslavement. Right. My hope is that it feels more like the former. Totally.
Max Matson 40:10
We're all hoping, right? Yeah, exactly. Yeah, and you're absolutely spot on, I do feel like we've reached this point, I don't know if it's the inflection point, right, but we've certainly reached a point where AI has become useful. Right. And that tends to be the first step.
William Bakst 40:27
Yeah. I mean, what's so scary to me, and also really cool, is that when I was in college, I was building transformers and training them, I was training generative models. And, you know, that was, what, 2018, 2019, when I graduated? Not long ago, you know, four or five years ago. And at the time, I was like, this is cool, but it's got a long way to go. And now we're five years later, and it's doing way better than I thought it would. Yeah. And that's the part where I feel like we're closer to that inflection point, where even just the jump from GPT-3.5 to GPT-4 is massive. Yeah, so then, you know, the jump from GPT-4 to 5, to 6, to... you know, it's really hard to tell where we're going to be. And, you know, in the context of self-improvement, when we start implementing self-improvement, and agents, and the ability to kind of build their own things, we have no idea where it's going to go. You know, it could be that we go to sleep one night, and the next morning our entire world is turned upside down, because some AGI created some brand-new thing that lets us do something we didn't even think was possible, like traveling to Mars instantly. Like, we just don't know. And it's easy to say, no, we're not going to get there. But the reality is, there's very little we actually know definitively about the universe and how things work. And all it takes is for an exponentially growing system to absolutely blow our minds.
There's an article I read a long time ago about this, I'm forgetting the name exactly, but it talks about, like, the shock factor: if you could go back in time and bring someone to the present, essentially, how far back would you have to go for the shock to kill them? And the idea was, you know, you go back to, like, pre-Industrial Revolution, someone from, like, the mid-to-late 1700s, and you bring them to the present, and they see cars and airplanes and smartphones and skyscrapers, and they probably die from shock, right? But then you send that person back in time with the time machine, and they think, okay, great, I'm going to now go back an equal amount, you know, to the 1400s, and bring someone to the 1700s. And that person won't die of shock, because the change isn't large enough. They'd have to go back to, like, 10,000 BC to get the same effect. When you look at that rate of change, you're like, okay, so we went from somewhere around 10,000 years of change, to 300 years of change, to now, if you went back to probably the early 1900s and showed people GPT, or you showed Alan Turing GPT, it would blow their minds. When you look at it in that sense, it becomes, I think, a lot more clear that what we think is going to happen is likely very different from what's actually going to happen.
Max Matson 43:24
Absolutely. That's beautifully said. I like your mix of optimism and realism there. I would have to agree with you.
William Bakst 43:35
Yeah, I mean, we're definitely living in a scary world. But I don't know, after playing around with GPT and playing around with a lot of these AI systems and kind of being in that world, I think there's too much value to only have a pessimistic view. So I feel like I have to hold some sense of optimism, because there's just too much value not to be really excited about it.
Max Matson 44:02
Yeah, absolutely. I will ask you one last question, and if it can't make the podcast, that's okay, but just for my own interest. So you were inside Google, working on AI. Do you remember that story of that guy that came out and said he believed, I don't remember which model it was, but he believed it was sentient?
William Bakst 44:22
I wish so badly Linus was able to make it today. That guy was on his team. Oh, really? Yeah. Yeah. I mean, I think it just depends on how you define sentience. You know, like, if it's the classic Turing test, where you put the robot and a human behind a wall, or through a chat interface, at a certain point, if it's providing answers that are very human in nature, it's going to be really hard not to feel like it's sentient. And yeah, we can say, like, it's a system, and it's not a brain, and it's not a human. But what defines human consciousness? What defines, you know, actual sentience? We don't know, and it very well could be in some way sentient, and it's really hard to, like, guarantee that it isn't. Although I played around with Bard and definitely did not think it was sentient at the end, so I totally disagree there. But playing around with GPT and the recent stuff, I'm starting to feel a little bit more like it's Her or something here, and who knows what's going to happen.
Max Matson 45:33
Very interesting. Thank you. It's been burning in the back of my head for a while, since I saw those articles. But, Will, if you've got anything, you know, that you want to leave the listeners with, I'm happy to kind of leave it open a little bit if you want to.
William Bakst 45:49
Yeah, I mean, if you have data, and you don't know what to do with it, and you're interested in figuring out what to do with it, go read our blog. It's on our website, sotai.ai. Reach out to me, feel free to email me, my email is firstname.lastname@example.org. And yeah, I would just love to help you figure out how to use your data effectively.
Max Matson 46:11
Perfect. You heard him, go check out SOTAI at sotai.ai. Thank you for listening to another episode of the Future of Product podcast, and a special thanks to my amazing guest, Will. If you enjoyed this episode and want to learn more about what I do over at PlayerZero, you can find us at playerzero.ai. If you're looking to go even deeper on the subjects we talked about in the pod, subscribe to Future of Product on Substack. Be sure not to miss this Thursday's newsletter, which will break down the biggest takeaways from my conversation with Will, share some AI tools you can get started with the same day, and talk about the biggest stories of the week. Thank you, and I look forward to seeing you there.