Aliya Grig 0:00
GPT is not enough to solve complex problems such as, for example, health care, or solving different calculations for creating a new type of, I don't know, quantum engine or quantum computer, something like this. The artificial intelligence that we face right now is just intelligence. Real artificial consciousness has inside of it intelligence, empathy, reasoning skills, cognitive skills, self-awareness, the ability to dream and to create, to understand what life and death are, because current intelligence is not able to do this; it can only calculate, maybe quicker than a human. It has no logic; sometimes it's irrational. It's about, like, understanding what irrational things are about. I'm building AI to help humans.
Max Matson 0:56
Hey there, everybody. Welcome back to Future Product, where I, your host, Max Matson, interview the most interesting product and AI thinkers who are building the products and world of tomorrow. Today my guest is Aliya Grig. She's the founder and CEO at Evolwe, a deep tech company seeking to create the first empathetic and conscious AI to enable truly human interactions. Aliya, would you mind telling the audience a bit about your background? You've been involved in a lot of projects across a lot of spaces. Just let people know where you come from and how you got here.
Aliya Grig
Yeah, sure. So hi, Max. Hi, everyone. I'm really happy to be here and excited to share my knowledge about AI, products, and technologies. I have a 15-year background of launching my own tech startups. I created my first startup during my graduate years at university, and we created from scratch a new technology for solid oxide fuel cells. So it was a deep tech startup and a hardware startup. I exited the company in 2013; we sold it to one of the largest strategic partners and largest producers of fuel cells. Then I participated as a co-founder in space tech startups; I had two of them. We created a 3D printer to print in outer space, and a small launch vehicle for nanosatellites and CubeSats. The companies were in Europe and Canada, and I exited them in 2018. After that, well, basically, there are three things which have been of interest to me for the last almost 20 years: space and space exploration, human consciousness, and AI. And that's how I started Evolwe, because it sits at the intersection between AI technologies and consciousness. And through
this, I would say, it was interesting for me to create a new disruptive technology which can be much more efficient and powerful than existing general AI technologies, and I started to work on this. My basic education is in management, in strategic management: I graduated from Trinity College in Dublin, and I also took courses at Berkeley and in France. After that, I received my second education in the area of neuroscience, because it was interesting for me to investigate more about human consciousness and how this knowledge can be applied to building artificial general intelligence, AGI, and artificial consciousness. And that's what we're working on right now at Evolwe.
Max Matson 3:49
Fantastic. I'll circle back to some of the other things that you said, because they're all really fascinating, but I would love to get a sense of what originally motivated you to pursue artificial general intelligence.
Aliya Grig 4:00
Uh, well, the thing is, I was born into a family of astrophysicists. My mother was one of the leading astrophysicists in the Soviet Union, and she spent over 20 years exploring our universe. It was so fascinating and interesting. So I was born into that environment. And then, when I grew up, I thought that our own mind, the human mind, is not enough to explore our universe, to reveal all its secrets, and even to understand ourselves, because right now we as humans understand only about 20% of our consciousness, our mind, our brain. And I thought that we need some kind of new technology to help us, first of all, understand ourselves better, and to solve complex problems such as, for example, exploring our universe, creating new types of medicine for longevity, or curing diseases.
And that's how I started to think about AGI, because, in my opinion, the generative models we have been facing for the last year with the revolution of LLMs are not enough to solve complex problems such as, for example, health care, or different calculations for creating a new type of, I don't know, quantum engine or quantum computer, something like this. That's why we need a new disruptive technology, AGI. This also combined with my knowledge of neurobiology and neuroscience, because I found some really fascinating things which can be applied to create new types of architectures that are much more efficient and can help with much more complex things. And yeah, that was the idea behind it.
Max Matson 5:53
Fantastic. And I love that you draw that distinction between the generative AI models that we're seeing today and what you referenced as AGI. Now, for those who are maybe a little less familiar, how do you think about artificial general intelligence? How would you define that term?
Aliya Grig 6:10
Well, I would say that the artificial intelligence we face right now is just intelligence. And we as human beings are much more complex products, I would say, than just intelligence models, because we have our own emotions, we have consciousness, we have self-awareness, and we are able to study and to learn in a much more efficient way than AI right now; we don't need the huge computational power that current AI needs to create something. So for me, AGI is a much more efficient artificial, I would say, machine consciousness. It's not only about intelligence, and that's a huge distinction. For me, it's real artificial consciousness, because it has inside of it intelligence, empathy, reasoning skills, cognitive skills, self-awareness, the ability to dream and to create. And that's the biggest difference for me, because intelligence is when you can calculate something pretty efficiently, but it's not the ability to create and to be creative, and to provide all the skills that a human has. Consciousness, for example, is also about the ability to be empathetic, to understand what feelings are about, and to understand what life and death are, because current intelligence is not able to do this; it can only calculate, maybe quicker than a human. And that's the biggest difference for me between artificial consciousness, well, AGI we can say, and AI.
Max Matson 8:03
Got it. A very good distinction to make. So, all that being said, Evolwe is very unique in that you guys are working towards creating the first empathetic, conscious AI, right? What are some of the advantages of a model like that, of an AGI, versus a simpler generative AI? And what are some of the applications that you see coming out of it?
Aliya Grig 8:26
Yeah, sure. So first of all, our AGI, and I prefer to call it artificial consciousness, because AGI doesn't have all the faculties of consciousness. First of all, the distinction is that it's much more efficient. The first example is the difference in learning processes: we try to create an architecture which can learn like human beings, like kids. For example, with kids, you can show them a tomato, and they don't need millions of pictures of tomatoes in order to recognize that it is a tomato; they learn it pretty simply. In the same way they learn to walk: they fall a couple of times, and then they start to walk. So we apply all this knowledge, the learning processes themselves, to artificial consciousness. The second thing is empathy and an understanding of psychological things, psychological traits.
I do believe it's really, really important, because in our day-to-day communication, we as human beings are all emotional creatures. And it's important to create artificial emotions and artificial empathy, which we have already achieved at Evolwe. Our model can provide artificial empathy: we understand a human being, his psychological traits, his type of character, what he feels right now, through the way he texts and through the way he communicates. It's important in order to create the next level of artificial consciousness, which will be able to feel. And it's important, again, for it to be trained in a faster and much more efficient way. The third thing is the overall approach to the architecture, because our goal is to create an architecture which is able to create, not just to generate, as ChatGPT is currently doing, just generating answers, just generating text for you.
But it doesn't create itself. And I believe that here it's essential, and that's the angle we focus on a lot: it's neuromorphic architectures. It's the application of the Orch OR theory by Roger Penrose and Stuart Hameroff, it's quantum field theory, and how neurons work. We apply all these theories to create these novel architectures. And overall, why do we need this? First of all, it's creative processes and a better understanding of human beings and human needs, and this opens a huge variety of applications. First, it's much more efficient technologies for robotics and for autonomous vehicles, which will be smarter in terms of driving and providing help in logistics and manufacturing processes, which you can't achieve with the current existing AI. The second application is solving complex problems.
So, as I mentioned before, for example, solving complex problems in drug discovery, or creating new types of technology for a quantum engine, I don't know, or warp engines or whatever, in order to explore our universe. And the third application is, of course, daily tasks and daily routines, because you're probably aware that the computational power needed to use ChatGPT, for example, as a search engine is enormous; it's not efficient. Through our technology we will be able to achieve a much more efficient way of computing, reducing computational power and, as a result, reducing costs, reducing CO2 emissions, and so on. So those are the differences and the applications that we see. As for our existing products: first of all, our technology is used a lot for manufacturing processes, so our key clients are in robotics and manufacturing. We currently build our own virtual robots and plan to start building our physical robots this fall. It's applicable, for example, to assembly processes in manufacturing, which are the hardest ones among existing processes. Our second product is a B2C product, but I would say it's a charity product, so it's for free. It's a smart companion that helps you accomplish your life mission, cope with loneliness, and understand yourself better. So it's more of a well-being angle, and currently we are integrating this knowledge in order to create personal, I would say, lifestyle and health care routines which will be applicable specifically to you. Yep.
Max Matson 13:21
Got it. That's all incredibly fascinating. I love that distinction you make there between generation and creativity. Would you say that one of the key distinctions is that the generative models of today are making decisions, making interpretations, and they don't know why they're making those decisions, those computations, as opposed to an actually conscious model, which has a thought process, a line of reasoning, in that you're modeling it after the brain, so it has all of these factors that lead to the decision? If that makes sense.
Aliya Grig 13:58
Well, we conducted in-house research inside our team, and we are finalizing it right now with Stanford University, regarding, I would say, sparks of intelligence and consciousness in generative models. I would say that they have some sparks of intelligence and consciousness, but in reality it's much more hallucination and pretending to be conscious; it's not conscious itself. We tested the models, ChatGPT and Google's LaMDA model, on a variety of psychological tests and a variety of different psychological situations. There's lots of buzz around, like, "oh my god, the AI is conscious" and so on, but in reality it's just hallucination. We can fine-tune a model to pretend to have decision-making and so on, but in reality it's just trained on a huge amount of data, and it pretends that it can act consciously. In reality, it's just mathematical calculations.
Max Matson 15:14
I see, I see. It's kind of smoke and mirrors that appears to be something deeper, right? Okay. It's kind of like a player piano, in that it's programmed to do the thing, and it looks incredibly impressive, but it's just a mathematical sequence underneath. [Aliya: Yeah, yeah, exactly.] Got it. So, would you mind delving a little into some of the challenges that you and your team have faced in building an empathetic AI? And how have you managed to overcome some of those hurdles?
Aliya Grig 15:42
Yeah, sure. So, it's been over five years now with the company. Inside the company we have a deep tech branch, which is dedicated to research on artificial consciousness, artificial empathy, neuromorphic architectures, and spiking neural networks, also working with neuromorphic chips. And we have commercial activities, B2B and B2C, as I mentioned. I would say the biggest challenge for me personally, as the founder and CEO of the company, is to always balance between the deep tech side, the research side, and the commercial side, so that I know how the research can be applied to future products. There's a strong angle inside our team that all the research needs to be practical: it needs to have practical implications, it needs to be applied in products, and it needs to help our clients as the final goal.
And I specifically structured our deep tech side and deep tech activities in a product model. What does that mean? It means that we focus our research in terms of which outcomes we want to achieve and how they can apply to commercial products. It was also a challenge for us because our team is pretty multidisciplinary: we have cognitive psychologists, we have psychometricians, we have ML engineers, linguists, NLP engineers, even mathematicians and physicists. And the biggest challenge is to combine all those people together and to manage all of them so that everything works really smoothly and provides results and outcomes quite efficiently and quite speedily. One of the challenges while building artificial empathy and artificial consciousness, in my opinion, is the huge variety of experiments that we need to conduct. I think it's pretty similar to lean product hypothesis testing: sometimes these experiments fail, and you need to create a process which is efficient in terms of hypothesis testing, fast and efficient, in order not to waste your team's time and your own, and in order to understand what really works and what is just a waste of time and resources. So yeah, that was one of the biggest challenges for us.
Max Matson 18:28
Gotcha, that makes a ton of sense. I would imagine that with so many different kinds of stakeholders across so many diverse fields, it's a lot of management, right?
Aliya Grig 18:36
Yeah, yeah, exactly.
Max Matson 18:39
Makes sense. So, you've talked a little bit about the neuroscience angle here, right, and how cognitive science is really critical to you all developing this AGI architecture. Can you share some insights into how these principles inform your work day to day?
Aliya Grig 18:57
Yeah, sure. So I personally learned a lot and applied lots of knowledge while creating the initial architecture from the following domains. First of all, cognitive psychology: what is awareness for us? What does it mean to be self-aware of our own ideas, our own thoughts? How does it work in terms of both psychology and neurobiology? It was a really, really powerful thing to learn more about the physiological and psychological processes involved, and we apply this knowledge. The second thing is about learning processes: how do we learn, and how do we learn from the environment? Here we use a special term, pretty widespread in AI: embodiment.
It means, like, how we as agents are embodied in the world and how we can get knowledge from the world, not from the inside out, taking it from our own generative models. Something else we use a lot in our research is psychology, specifically emotions and feelings and the research on them. Why do you feel? How do you feel? What does it mean to be angry, in terms of both physiology and psychology? How do we act when we feel, for example, sad, or when we feel happy? And how can we understand that another person is happy or sad? And then there's the understanding of what I call the model of the world of a human being. Each of us has our own model of the world, which comes from our childhood, our lives, the things we are currently meeting and facing, our friends, our work, and so on. That's our model of the world. So how can we create this model of the world for AI, so that it can perceive through this model, learn it, and interact with it? I would say these are the three basic pillars which form the background of our methodology and approach. Currently we are continuing the research: we plan to reveal soon the results about quantum field theory and the Orch OR theory by Sir Roger Penrose and Stuart Hameroff, and there are other field theories we also plan to investigate soon and reveal the results. So yeah, that's our pillars and our background in a couple of lines.
Max Matson 21:49
Fantastic. So you all are on the cutting edge of this research, right, both when it comes to the physics aspect and the neuropsychology aspect. What is it like, at the same time that you're learning and researching and discovering these kinds of fundamental realities, to then be turning around and incorporating them into product? There must be a real balance between the researcher and product-builder aspects of your brain, I would imagine.
Aliya Grig 22:17
Yeah, yeah, exactly. You need to balance all of this. First of all, I try to apply product management approaches to the research part, because I found it pretty fascinating that the majority of scientists just take their time: they're not trying to work in a sprint format, or in a Lean Startup approach. I use lean methodologies for the research part, because I find it helps to be much more efficient when you create a sprint approach, when you have specific KPIs, goals, and metrics for what you want to achieve through a particular piece of research, what the timeline is, and what you can do if you fail. And then, of course, I manage to split all the resources between the research part and the product part. But it's very flexible, so it's not like fifty-fifty; it's a more creative approach. For example, one specific month we can be 30% on research and 70% on the product side, I mean the commercial product side, and another month it can be fifty-fifty. But I try to keep the research to no more than 50% of our activities and resources, because otherwise it doesn't make sense. But I would say it's also important for me to be open towards different research activities, because it helps us to build really revolutionary and innovative products.
Max Matson 24:00
I see. Okay, that makes a ton of sense. So, in the context of your extensive experience, having launched several deep tech startups at this point at a global scale, could you share some of the key lessons from those previous startups that you've incorporated into Evolwe?
Aliya Grig 24:19
Yeah, sure. So all my previous background was always connected with hardware tech projects, and again, projects where the deep tech side involves a lot of the team's resources. There are a couple of lessons I learned previously which have helped me along the way. First of all, the research part needs to be manageable and needs to have these KPIs, outcomes, and goals; it needs to be agile and lean, and very adjustable and fast in terms of experiments. And you need to limit, specifically and intentionally, your resources and time in order to achieve something on the deep tech side. The second thing we learned is that you need to have specific, separate teams for the tech side and for the commercial side, because when you mix it all together, it fails. Research scientists and commercial-side engineers are pretty different, and you need to hire people keeping in mind whether they want to work on the research side or on the commercial side. Another lesson is to try to be as creative as possible, to generate different experiments and ideas, and to be open to working with different collaborators and partners in order to create something new together. And another thing is to implement immediately if you achieve something: don't put it in a drawer, but immediately implement at least some form of MVP which you can test with other people and engineers.
Max Matson 26:16
Yeah, fantastic. That's a great roadmap. So, all that being said, just to pivot a little: you speak at a lot of international events. I see on LinkedIn quite a bit that you're slated to speak at different places. With that being said, what are some of the common questions or concerns that you hear when you're talking about advancements in AI, and specifically AGI?
Aliya Grig 26:42
Well, the majority of concerns are, first of all: will AI kill us? Yeah, I would say it's the most common question, because there is lots of PR around this coming from Elon Musk. But it's fake, in my opinion, because, you know, for that the AI would need to have its own consciousness and its own intents, and, well, it's too early to even discuss this topic. The second question and second concern is about when we can achieve this artificial consciousness and AGI. Here I usually comment that it's three to five years; I mean, I can speak about my own team and our own results. The third question is about managing the balance between human development and AI, and this is a topic which is particularly interesting for me, because I'm building AI to help humans, and not to end up like, if you remember the cartoon WALL-E, where people were so lazy, because robots and AI did everything for them, that they were just sitting in those spaceships doing nothing. Well, I don't like this scenario. I believe in the scenario where AI can help us with routine things, and we will work more on creative aspects and complex things. It means that we don't need to stop developing ourselves, but rather develop our creativity and develop ourselves as human beings. Even right now, you can see that for the last six months there have been lots of discussions about humans using AI to create their documents and course materials and so on. And I think it shows that we need to start building education in other ways, so that you really show your creativity and really apply your skills to create your documents, your course materials, or whatever, and not just generate them with ChatGPT.
Max Matson 29:06
No, absolutely, absolutely. I think that's a great point. So I do want to circle back: you said around three to five years is your rough estimate. That is pretty amazing, right? With that being said, how do you imagine the future actually looking? Because something I've been talking about in the newsletter, and with a lot of people, is that I think you're exactly spot on: the existential risk piece is so overblown in the media narrative, and I think it's very much inspired by sci-fi stories as opposed to any real reality. But that being said, AI certainly is going to change the world in a massive way. What are some of the fundamental pieces that you see changing, given that relatively short timetable?
Aliya Grig 29:53
You mean the fundamental pieces that change in this three-to-five-year timeline? Yeah, I would say so. First of all, it's computational power and resources; there is huge development there. We as a team are also looking into this area and have started some research activities in this part: neuromorphic chips, and chips in general, quantum chips and quantum computing, because hardware is an important part in this timeline. The second part is software, and creating these new types of architectures which I mentioned previously, which will be, well, I call it artificial consciousness, not AI, not AGI, artificial consciousness, where you experiment with novel approaches and architectures inspired by psychology, human beings, learning processes, and so on. And the third block, I would say, is overall the creativity of how we use this AGI, and how we use the overall knowledge we have about our planet and our brain in order to make this AGI happen and be embodied in some physical structure, because I do believe that we need a physical structure to create this. That's why we also plan to start building our own robots soon. So yeah, I think it's these three core pillars which we need to focus on, and which will influence this timeline.
Max Matson 31:28
Fantastic. So, circling back to the events a bit: I saw that recently you were a part of the AI for Good Summit. What was that experience like?
Aliya Grig 31:39
Yes, as you have noticed, I participate in different events. In May I also participated in The Science of Consciousness conference, one of the most interesting conferences if you are interested in consciousness and AI technologies. Then, yeah, there was the AI for Good Summit in Geneva. Basically, I alternate between research-focused conferences on robotics, AI, and consciousness, and more general, economy-focused ones like the AI for Good Summit in Geneva. It was a great conference, and it brought together engineers, both software and hardware engineers, and mainly, I would say, teams that build their own robots. There were lots of discussions about how this technology can be applied for good: first of all for health care, for sustainability issues, how we can reduce CO2 emissions through much more efficient architectures, and how this knowledge can be applied to building AI that will help humanity solve complex problems. It was also fascinating that it brought together research people, product people, and policymakers, because that's also an important angle in building AI for good: our policies. How do we control AI? How do we manage all of this? How do we manage the overall transparency of the data, the privacy of the data, and so on? So yeah, it was amazing.
Max Matson 33:28
Fantastic, that's amazing. Something I'm interested in, just from some of the reading that I've done: Nick Bostrom, if you're familiar? [Aliya: Of course, yeah. One of my favorites.] He presents a lot of doomsday scenarios, obviously, alongside a lot of more realistic scenarios. That being said, what are some of the second-order effects that you all are trying to avoid in your building of a more emotionally linked artificial system like this? Is there anything that you see potentially down the road where, not done correctly, this type of technology could have a negative impact?
Aliya Grig 34:10
Yes, of course. And, well, that's why we started with this emotion approach: because the majority of engineers and teams are focused on intelligence, on just, I would say, mathematics and logic. Logic is great, it's really powerful too, but we need to have emotions in order to make this AI helpful and compassionate, and to provide compassion in order to help people. Because if you focus only on logic, it's cold and formal, and you can't really help people. And what I believe actually brings creativity to us as human beings is this ability to feel, to be emotional about things: when we are really happy, we can create something astonishing, something outstanding. In the same way, when we are sad or really angry about something, we have this powerful source of intention to create something which can solve our problem, for example if you have some problems with your colleagues or work and so on. So that's why I believe it's important to train AI with this emotional angle. And the second thing: I believe it's important that the datasets we use for training are coaching and psychological datasets, because through this it can learn what human interaction is about. Human interaction has no logic; sometimes it's irrational. It's about understanding what irrational things are about.
Max Matson 35:58
Certainly, definitely. I kind of see this through the lens of economics, which has gone through a similar revolution in the last ten years, right? The simplistic models that are purely based on calculus would project one thing, and then in, say, 2008, we saw that reality heavily diverges from what the model says. So incorporating behavioral economics, and behavioral science in general, has elucidated a lot of these human behaviors.
Aliya Grig 36:27
Yeah, yeah, exactly. I totally share the approach you just mentioned, and I incorporate it in my own research and development.
Max Matson 36:38
Oh, fantastic. So, to that point: how does Evolwe use psychometrics, psycholinguistics, these different kinds of psychological patterns, to analyze emotional states?
Aliya Grig 36:51
Yeah. So we are currently working together with Stanford University, with their psychometrics lab, in order to understand and learn more about human beings, our users, and human behavior through the way they text. We use different approaches: psychological tools like various tests, for example MBTI or the Hogan assessment. We also use psychometrics and psycholinguistics, because through text, through the way a person communicates, you can understand a lot about their personality. For us, that is the data we use first of all for our AI companion, because the crucial point of this companion is that it can be truly personalized. What does that mean? It understands you specifically, you Max, as a person, as a personality, your needs, your goals, your values, and it adapts to them. Through this it can optimize your overall well-being, if it can really understand your goals and needs and help you accomplish them. And that's what we achieve through tools like psycholinguistics and psychometrics.
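To make the idea of psycholinguistic analysis concrete, here is a minimal, illustrative sketch of how personality-relevant signals can be extracted from a user's text. This is not Evolvi's actual pipeline: the categories and word lists below are invented for illustration, loosely in the style of lexicon-based tools, and a real psychometrics system would use validated lexicons and far richer models.

```python
# Toy lexicon-based psycholinguistic feature extractor (illustrative only).
import re
from collections import Counter

# Tiny hand-made lexicons; real systems use validated ones (an assumption
# of this sketch, not a description of any production tool).
LEXICONS = {
    "first_person": {"i", "me", "my", "mine", "myself"},
    "positive_emotion": {"happy", "great", "love", "excited", "good"},
    "negative_emotion": {"sad", "angry", "worried", "bad", "afraid"},
}

def extract_features(text: str) -> dict:
    """Return the share of tokens falling into each lexical category."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return {name: 0.0 for name in LEXICONS}
    counts = Counter(tokens)
    return {
        name: sum(counts[w] for w in words) / len(tokens)
        for name, words in LEXICONS.items()
    }

sample = "I am happy with my progress but worried about my deadlines."
features = extract_features(sample)
```

Feature vectors like this, aggregated over many messages, are the kind of raw signal a personalization layer could map onto psychological profiles.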
Max Matson 38:14
Got it. There are so many applications I can think of off the top of my head for technology like this. That being said, one that I'm really interested in, which is a bit of a non sequitur, is the application of your AI in gaming, specifically in NPCs and user interactions. How do you ensure these human-like but still ethical and interesting interactions with the model?
Aliya Grig 38:44
So first of all, we have a kind of codex for our model, a set of rules: what the model can and cannot do, and how it can and cannot communicate. I have one small project in mind for the future, an AI tutor for kids, and again it will have a set of rules for the model. As for NPCs, gaming is an interesting sector for me, because through games you can train agents better in virtual environments. For me it has both a product side and a research side, where you can train a model through interacting with the environment. We plan to launch a couple of pilots with games, and the goal of these pilots is to see how we can create NPCs that adapt to a player's behavior, which can make the game much more interesting and interactive, but of course with a set of rules incorporated inside.
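One simple way to picture a "codex" of rules constraining an NPC is a filter that checks each candidate reply against the rule set before it reaches the player. The rule names, checks, and fallback line below are invented for illustration; this is a sketch of the general pattern, not Evolvi's implementation.

```python
# Illustrative rule-codex filter for NPC replies (hypothetical rules).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    violates: Callable[[str], bool]  # True if the reply breaks this rule

# A toy codex: forbid harmful suggestions and out-of-character replies.
CODEX = [
    Rule("no_harmful_advice", lambda r: "hurt yourself" in r.lower()),
    Rule("stay_in_character", lambda r: "as an ai model" in r.lower()),
]

FALLBACK = "The innkeeper shrugs and changes the subject."

def filter_reply(candidate: str) -> str:
    """Return the candidate reply if it passes every codex rule,
    otherwise a safe in-world fallback line."""
    for rule in CODEX:
        if rule.violates(candidate):
            return FALLBACK
    return candidate
```

In a real pipeline the checks would be far more sophisticated (classifiers rather than string matching), but the shape, generate then gate against a rule set, is the same.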
Max Matson 39:58
Got it. No, that's such a fascinating thing to think about. When you picture the games of tomorrow, having meaningful interaction with NPCs would go such a long way in creating that sense of absorption for the player, right?
Aliya Grig 40:14
Yeah, yeah, exactly. But again, I believe that through games we can also train players, because I don't like the scenario where people just spend 100% of their life in virtual reality. Through games we can try to help people work on their personal growth; it can be incorporated inside the game. That's my vision for the future, because I want to create smart games, not just games that earn money from users.
Max Matson 40:49
Right, right, makes sense, kind of bridging the gap there. So that actually leads pretty well into the B2C product, right, Sensei? So how do you envision it supporting personal growth? What are your plans there?
Aliya Grig 41:03
So we already launched the product; it's available now. You can get access to our VIP version, and it's totally free. We don't want to charge users, and we don't have plans to introduce any subscriptions, so it will stay totally free with a free model. We also don't plan to incorporate any advertising, because I want to help humanity through this product and help make our planet a better place. Our AI companion's goal is to help people on a daily basis, because not many people can afford a personal coach or personal therapist, or even have a close friend to talk to about their problems, needs, and thoughts. We don't store any data; it's totally private, so we don't know who shared what. We have a strong focus on the safety and privacy of our users' data. Currently you can explore it through the web platform or through Telegram, and we plan to launch it soon on WhatsApp and Facebook Messenger.
And you can just chat with it on a daily basis. It asks you about your goals and your mission; it learns from your behavior and builds an understanding of your psychological profile and your uniqueness. Its goal is first of all to understand you, because each person is unique, and each of us has our own tasks, life missions, and goals. Its purpose is to help and assist you through coaching and psychological tools. That's the first part of the AI companion. The second part is optimizing the overall flow of information: we are currently training it to help you with information from social media and from search engines, to optimize the search process. For example, I'm personally interested in consciousness and in space, so it will bring me that kind of information, or provide me with information that can entertain or support me, and so on. The third block, in order to create a truly personalized experience, is different lifestyle routines: it will also be your fitness coach, your nutritionist, and so on, covering all the areas and aspects of your daily life. And then it will also help you with routine tasks, like booking tickets, finding useful information, or buying a product. But those are long-term plans.
Max Matson 44:03
That's amazing. And it's also amazing that you're offering it for free; that's quite a service. All that being said, I can definitely see this being the future. When you talk about it augmenting the job of a psychologist or therapist, I can totally see that, because so much of that work is taking frameworks for understanding the human brain, analyzing the person's behavior, and then providing a solution that actually makes sense for them. I think a lot of people struggle with that, sometimes even in human-to-human interaction, and I could see this serving them very well.
Aliya Grig 44:39
Yeah, yeah, exactly. And I believe this AI companion will also help to establish relations between people, because in our world people sometimes struggle with their daily communication. The AI companion can act, I would say, as a mediator between two people, to resolve conflicts sometimes and to make communication, negotiation, and understanding each other easier.
Max Matson 45:11
Fantastic, and very necessary. So all that being said, just to pivot slightly: you've been recognized as a top woman entrepreneur in AI and as a young entrepreneur by Forbes. What do those recognitions mean to you? And what does it mean to you generally to be such a successful and accomplished woman in the tech field?
Aliya Grig 45:30
Well, for me, it means more to create a product that will help a billion people around the world; that means much more to me than any titles. Titles are, I would say, a recognition of my efforts. But my personal mission and goal, what will make me really happy, is when a billion people use our AI companion, Sensei, and it makes their lives healthier, happier, and more fulfilled, so that they feel truly inspired and energetic.
Max Matson 46:10
That's awesome. That's a great, that will
Aliya Grig 46:13
be the best title for me. Yeah.
Max Matson 46:17
No, absolutely, and I have the sense that it's going to happen. You have had such a successful track record across all of your different ventures, working with companies like Boeing, NASA, and Lexus. How have all of these experiences synthesized into your outlook as a founder?
Aliya Grig 46:35
So first of all, it gave me a huge amount of knowledge about the corporate style of managing businesses, people, and processes, as well as the startup style, and the intersection between them. And since I also have a research background, it helps me understand how to take the best from different segments, corporate culture, startup culture, and research, try different aspects, and create my own leadership and product style. There are pros and cons in both corporate and startup cultures, and I'm trying to balance all of this and use the knowledge for good. The experience with Boeing and Mitsubishi, for example, also helps a lot: understanding their pains, how we can create a product for companies like that, for example robotics products, how it can help them in their processes and routines, and what we should keep in mind when we are building such a product. So yes, all of this is really helpful.
Max Matson 47:51
Fantastic. And Aliya, as a final question, what advice would you give to any entrepreneur who's looking to get into deep tech, into AI, into anything space-related, any of the fields you have been involved in throughout your career?
Aliya Grig 48:09
My biggest advice would be to stay open to new knowledge and new information. I'm a non-technical person; I'm not a software engineer and I don't have an engineering background. But that helped me a lot, because when you don't have that technical background, you can still create really amazing products and disruptive technologies. That, I would say, is the core strength of non-technical founders and non-technical entrepreneurs: you can see the gaps, whereas a technical person may be too skeptical, or may suffer from information and knowledge overload. So my biggest advice is to believe in yourself, believe that you can find the gap and create something interesting, and always be curious about new information, even if it comes from different directions. For example, in our approach we combined knowledge from different areas, like cognitive science, cognitive psychology, coaching, and AI. I would say that's part of my creative routine and creative approach, and it brings huge benefits to you as a founder and to your product as a result.
Max Matson 49:37
Fantastic, excellent advice. Well, Aliya, this has been incredibly fascinating. I feel so lucky to have gotten to learn a little more about you and what you're doing at Evolvi, and I can't wait to see what you all do.
Aliya Grig 49:51
Yeah, thank you so much, Max. Thank you for the curious questions and the amazing discussion.
Max Matson 49:57
Oh, likewise, thank you. Thank you.