Welcome back to another episode of Glasp Talk. Today, we are excited to welcome John Whaley, a visionary in the world of cybersecurity and AI. John is the founder of Inception Studio, a nonprofit, community-driven accelerator focused on empowering the brightest minds in AI, as well as RedCode AI, a company dedicated to protecting people from AI-enhanced social engineering attacks.

With a PhD from Stanford and a track record of founding multiple successful companies like UnifyID and Moka5, John has been at the forefront of cutting-edge technologies, from program analysis and optimization to virtualization and cybersecurity. John is also a passionate educator: he teaches courses at Stanford on building apps with large language models (LLMs) and shares his insights on next-generation AI technologies. Today, we'd like to ask him about what's happening in AI nowadays, and also how AI
will impact our lives in the future. Thank you for joining today, John.

Yeah, great to be here, and thanks for the introduction. I've been working in this space here in the Bay Area for almost 25 years, since I came here to start at Stanford, and it's very interesting to see all the different waves of things that have happened since then. I remember working in AI back during what was called the AI winter, so everything is coming full circle now that AI has become such a hot thing. It's very interesting to see.

But I've always been very technical and very much a builder. Even when I was doing my PhD at Stanford and planning to be a professor, all of my research was about building things. My research areas were compilers and program analysis, but it was all about how you help software engineers build things faster, better, and provably correct. So even my research was very much engineering focused, and it always has been.

That's always been my background: being a builder, starting with writing code as a software engineer, then later doing research and helping other people build, and ultimately building companies. Now I'm at a meta level of sorts, because through Inception Studio I help other people build companies. It's been a lot of fun so far.
So, first of all, I'm curious about Inception Studio. I know what Inception Studio is, but for the people who don't, could you explain what Inception Studio is and why you started the project?

Yeah. So the backstory there is that my last company got acquired almost four years ago, and I was looking for my next thing to do and not making a ton of progress, because my life was just too comfortable. This was in 2021, going into 2022, still in the pandemic times, and I was just stuck. I knew I wanted to do another company. And all of these things were happening: GPT-3 had come out in 2020, and it seemed to do some pretty interesting things, so we knew this was going to be big. But I was not making much progress toward a company.

Part of that is that I know myself: I'm very deadline driven. If I don't have a deadline, nothing happens; I end up procrastinating and not getting much done. So I really needed a deadline. And I knew what I needed: to go away somewhere, to remove all my distractions, and to be surrounded by other really smart, creative people.
And, most importantly, to have a deadline, right? That was the genesis of the very first event we did, back in November of 2022. The topic area was large language models and generative AI. We got a bunch of really amazing founders there, and quite a few companies were either launched there or ended up growing out of it, which was really exciting.
Part of the reflection on why that first event went so well was that we kept the quality bar really high, and we were able to attract really, really good people. So when we thought about doing more of these types of events, we wanted to keep that quality bar high and avoid the problem of adverse selection, where the best people choose not to go. And that's what we did.

We've run 12 of these events so far, these cohorts of founders, and we've had 144 founders come through the program. It's been very, very successful, because it turns out that when you curate a group of really amazing people who are all ready to start companies and put them together, they end up starting companies, and those companies end up doing extremely well.

That's been the premise of what we're doing. We run this as a nonprofit. We don't take any equity in the companies, again because we want to focus on quality and getting the best people together. We're a 501(c)(3) nonprofit, so we just ask people to donate, if they're able to, to cover their costs for room and board and lodging. And if they can't, that's fine as well.
We can waive the cost. Because of that, we've been able to attract some really great founders, and some really good success stories have come out of Inception. But this is a really different model from most of these other accelerators. Most of the time, they're just looking for people who will take some type of deal: I'll give up 7% of my company, and they'll pay something like $125k for that 7%. That's a typical type of deal; sometimes it's even more, up to 10% and such. That might work well for early-stage, first-time founders. But if you work in a really hot space, or you're a serial entrepreneur who has had success in the past, or you already have a lot of connections to investors, you don't really need to join that type of accelerator program where you're giving up a lot. So those programs have an adverse selection problem: they're not able to attract the very best people. We wanted to avoid that. That's why we don't take any equity, and that's why we're a nonprofit.
And that's why we've been able to attract some really amazing people. There's inherent value in curating a group of amazing people and putting them all together. Now that we've run 12 of these retreats, there's also a really strong founder community. I never pegged myself as a community-organizer type at all, but it just kind of happened, and over time, exploring that side, how to organize a community of founders, has been a really exciting part of the journey for me.

Everything has been great so far. Our next event is coming up at the end of September 2024, and we've been running these events every six to eight weeks. Now we're even talking about expanding internationally: we're starting to plan an event in Japan, and other locations as well. The truth is that there are a lot of great founders in many different areas and many different walks of life. I'm also a big believer that the best teams are the most diverse teams, so we put conscious effort into how we compose the cohorts: not just having all the same type of people, because if we did, the event would not be nearly as successful as it is. If it was all engineers, it would just be like a hackathon: people would hack up some solutions, but they would never really turn into viable businesses.
On the other hand, if it's too many product managers together, that's not going to turn into anything either, or all business people, or any of these other areas. Actually, the worst case would be a cohort of all CEOs; that would be a disaster. Maybe that's more like a reality TV show. Because the truth is, you need all types: you need people who can be CEOs and people who are not CEOs. You need people who are technical, people who understand product, people who can sell or understand go-to-market, and all of this. Ultimately, the teams with a good mixture of skills on the founding team, or the early team, are much more likely to be successful. I think this is statistically proven; there's a book called The Founder's Dilemmas that goes into this in depth, analyzing what types of founding teams are more likely to be successful. It shows that the teams with some diversity early on are just statistically more likely to succeed.

And then, personally, I hate these situations where somebody is talented and ambitious, and for the good of the world they should be starting a company, but because of circumstances outside of their control, like discrimination, or just society and those types of things, they're not able to fulfill their dream and their destiny. I just hate those situations.

It makes me very much want to fight for and support people in that type of situation. So far, Inception has been about 25% women, and we're working to increase that, because I believe there are many women who can and should be starting companies but, for a variety of reasons, have not been able to; we want to help support that. Similarly, we're looking at Japan and other places where entrepreneurship is maybe not as common, but there are a lot of very talented, ambitious people, and I want to do everything we can to support people in those circumstances as well.

Yeah, I totally resonate with that.
And by the way, I went to cohort one, the first pilot cohort, and that was amazing. But I don't know what's happening nowadays with the recent cohorts. Have you found any interesting projects or ideas in the past cohorts?

There are so many. This is the benefit, and this is why I love doing this: you get to interact with so many interesting people with different backgrounds, and they're all very ambitious, all very accomplished, and they've all been thinking about this for a while. Just hearing about some of the very interesting things people are doing is great. Now, the very first event we did, the one that you went to, explicitly had a topic area of large language models and generative AI. That was because we did it in November 2022, about two weeks before ChatGPT came out, right? Very good timing there. And the reality is that it's not like we're trying to restrict and focus only on AI founders; it's just a very hot area right now, and there's a lot of interest in it.
There's this new capability with large language models, things that weren't possible in the past and now are, and it opens up so many new opportunities that I think it's natural a lot of founders are very interested in building with these tools in these areas. But the danger is that if you start from gen AI, if you just start from this amazing tool, GPT-4 or whatever, any of the large language models or some generative AI model, then you end up like a hammer looking for nails, and you end up not solving real problems for customers. I think it's much better to start from the problem, and then if AI is a solution, that's great. But there are a lot of cases where AI is not actually the best solution.
That's fine as well. That being said, I would say most of the companies that we deal with through Inception are fully embracing AI, not only in terms of the product they're building, but also internally, for their company. Because there's an opportunity now, by utilizing the latest tools, to basically get a superpower: you can act like a much larger company with far fewer people by leveraging a lot of these AI tools to handle many different things in your company. And the people who are fully fluent in these tools end up being five or ten times more productive than people who aren't using them, or don't understand how to use them. That has been a really interesting evolution recently. It used to be that if you wanted to do this, you had to scale up a pretty big team and hire a bunch of people. That's a lot less true now: you can get away with a much smaller team, just a small handful of people, and effectively punch way above your weight if you really know how to use these tools effectively. So when we talk about AI-native companies, it's both: yes, they're building products that are AI native, and they have a data strategy that understands the value of data and all of this; but they're also utilizing AI tools themselves within their companies, which gives them a huge competitive advantage over the larger companies that are not as closely embracing these types of tools.

Yeah. And in the past, I've seen, for example, Friend, or CoFrame, some interesting products that came out of Inception Studio, and that's really impressive.

Yeah, we have a lot of great companies. I could go through each of them, and I'm actually very excited about each of them, for a variety of reasons. Friend was one of the first ones where we actually started to get into the hardware space as well. There have been some high-profile failures in this space, products that were released that were just not very good. That being said, I still think someone's going to crack it; I see a lot of promise in that area, around these types of AI wearables. CoFrame is another one you
mentioned, and it's very interesting and exciting. This is the type of thing where Josh Payne, the founder, came to our cohort three. When we were talking about different ideas, he described what he's doing, which is basically: I have a web page, and I want to do A/B testing, so use generative AI to generate the different variants, and then reinforcement learning to automatically improve your website. I was like, of course, that makes so much sense, right? But not only that, he took it even further, talking about the future of user interfaces being these types of generative-AI-driven, living user interfaces.
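As a rough illustration of the loop being described here, generating page variants and letting live feedback steer traffic toward the winners, you can sketch the feedback half as an epsilon-greedy bandit over variants. The variant names and conversion rates below are made up for illustration; this is not CoFrame's actual implementation.

```python
import random

# Hypothetical sketch: treat each AI-generated page variant as a bandit arm
# and use epsilon-greedy selection to shift traffic toward higher-converting
# copy over time.

class VariantBandit:
    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon
        self.stats = {v: {"shows": 0, "conversions": 0} for v in variants}

    def choose(self):
        # Explore with probability epsilon; otherwise exploit the best arm.
        if random.random() < self.epsilon:
            return random.choice(list(self.stats))
        return max(self.stats, key=self._rate)

    def record(self, variant, converted):
        self.stats[variant]["shows"] += 1
        self.stats[variant]["conversions"] += int(converted)

    def _rate(self, variant):
        s = self.stats[variant]
        return s["conversions"] / s["shows"] if s["shows"] else 0.0

random.seed(7)  # reproducible simulation
bandit = VariantBandit(["headline_a", "headline_b"])
# Simulate traffic where headline_b converts twice as often.
for _ in range(2000):
    v = bandit.choose()
    p = 0.10 if v == "headline_a" else 0.20
    bandit.record(v, random.random() < p)
print(max(bandit.stats, key=bandit._rate))  # winning variant
```

In a real deployment, the "arms" would be regenerated periodically by the LLM, with weak variants retired and new ones spawned from the current winner.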
That's a benefit we have working with founders who are high caliber and ambitious: they're not thinking small, they're thinking big, about things that have a big impact. A lot of times, an early-stage, first-time founder is thinking, oh, I can make a viable business doing this. And that's fine, but I learned this early on: just because you can solve a problem doesn't mean you should, because your time is very valuable. Even though you can say, I could solve this problem, or I could build a viable business in this area, the truth is, if you're talented, if you're a good engineer or have other skills, there are a lot of things you could do. What separates the highly successful people from the people who never really achieve that level of success is a little bit about talent, but even more so about your taste in the problems you work on. If you have good taste in problems, you're much more likely to hit those levels of success. That's often what separates the people who merely have talent from the people who reach that next level.

So that's some of the benefit. At Inception, around 70% are serial entrepreneurs, people who have started at least one company in the past, but almost 30% are first-time founders. We're looking for people who have that combination of talent and ambition. They're not just looking to make a viable business in some area; they're looking to build the next great company, or change the world in these very particular ways. That's what we look for and filter for. By curating a group of people who are all like-minded like that, it ends up being a lot of fun.

There are all these synergies, because you're getting people together who are all at about the same stage, working in similar areas, all ambitious, all trying to start companies. A lot of positive synergies happen between those people. Some of this has been learning over time as well; I didn't really anticipate it, but whenever you're in any type of program with a cohort, a group of people together, you can get a kind of sibling-rivalry effect: you see other people you know doing well, and you don't want to fall behind. It gives you that little bit of extra push: I want to keep up, I don't want to fall behind, so I'm going to push forward as well. This was something I did not think about or anticipate at all. It's not directly competitive; it's not like we're all being graded on a curve where only a small number of us get As and everyone else gets Bs or Cs. It's much more like being on a sports team together: we're all pushing each other in the same way, because we're all part of the same experience. So the environment is more collaborative than competitive, but there's a gentle competition in terms of people inspiring each other: oh, well, I should do this too. I've seen this happen so many times: one of the Inception companies would launch on Product Hunt, and another would say, I've got to launch on Product Hunt too. This happened again and again, and it becomes this positive feedback cycle, which is really amazing to be part of.

Yeah. And did that happen to you, right?
Because you started RedCode AI from Inception Studio. Could you tell us why you started RedCode? What does RedCode AI do, and why did you decide to start it?

Yeah. So, the backstory on RedCode AI: my first two companies were in cybersecurity, so I had a lot of experience there, but I was pretty convinced my next company would not be in cybersecurity. I was looking to broaden out, because I knew all the downfalls of starting a cybersecurity company. But what actually happened was that at one of these Inception cohorts, in the early days, I was still looking for my next thing as well. I would join some of the teams and work with them, partly because it's fun and partly because I wanted to explore different ideas. I was actually in cohort three, and we formed a team there and really began to work on this idea around RedCode AI. At the beginning, I thought maybe there's not that much here, but as we talked about it more, it became pretty interesting.
So basically, what we do at RedCode AI: LLMs and generative AI are the biggest change I've seen in my career for cybersecurity, in terms of their implications, and nowhere more so than in the area of social engineering. In the past, it was really easy to tell if somebody sent you a scam message, because it had misspellings and bad grammar and all of this. Now, with generative AI, you can make highly targeted and perfectly fluent attacks, not only with text, but also with voice, and now even video. So this has become very possible. And of course, in every new wave of technology, scams and porn are the two areas that are always the early adopters; gen AI is no exception.

A lot of scammers have started doing this. I'm sure you've seen that smishing and these other types of messages are up something like 1700% year over year. It's become a real problem, and the quality is getting better and better. These attacks always existed, but now you can do them at scale. It's the equivalent of a script kiddie: just using a few gen AI tools, they can run these highly targeted attacks at scale, and people fall for this stuff all the time. So we thought, somebody needs to solve this problem, and it's a tricky problem to solve.

So we came up with a two-part solution. One part is something called Defender, which basically helps to detect and prevent social engineering attacks. It uses large language models for good: it analyzes text, which could come from an email, a text message, WhatsApp, Telegram, LinkedIn, anything, and classifies very accurately whether it is a social engineering attempt or not. And it's not just looking at keywords; it's really understanding the intent behind the message.
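To make that idea concrete, a minimal sketch of intent-based classification might look like the following. The prompt wording, the labels, and the stubbed fallback classifier are all illustrative assumptions, not RedCode AI's actual pipeline; in practice, the prompt would be sent to a real LLM API and its verdict parsed.

```python
# Hypothetical sketch of intent-based scam classification via an LLM.
# The model call is stubbed out; a production system would send `prompt`
# to an LLM and parse its answer instead of the keyword placeholder below.

def build_classification_prompt(message: str, channel: str) -> str:
    """Frame the message for intent analysis rather than keyword matching."""
    return (
        "You are a security analyst. Classify the INTENT of the message below.\n"
        f"Channel: {channel}\n"
        "Answer with one label: 'social_engineering' or 'benign'. Consider\n"
        "urgency, impersonation, and requests for money, credentials, or secrets.\n\n"
        f"Message:\n{message}"
    )

def classify(message: str, channel: str, llm=None) -> str:
    prompt = build_classification_prompt(message, channel)
    if llm is None:
        # Placeholder stub so this sketch runs standalone. A real deployment
        # relies on the LLM's intent analysis, not keyword checks like this.
        lowered = message.lower()
        suspicious = ("gift card" in lowered or "wire" in lowered) and "urgent" in lowered
        return "social_engineering" if suspicious else "benign"
    return llm(prompt)  # e.g. a callable wrapping a chat-completion request

print(classify("URGENT: this is your CEO, buy gift cards now", "sms"))
# prints "social_engineering"
```

The point of framing it as a prompt about intent is exactly what's described above: the same request phrased politely, fluently, and without any scam keywords should still be flagged if the underlying ask is suspicious.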
That's really what generative AI gives us: yes, you can use it for bad use cases, like generating deepfakes and things like that, but you can also use it for good, to basically revolutionize the way you do detection. So we have Defender, and then we also have a product called Pretender, which is the offensive version. You give it anybody's profile, and it generates a fake version of them that can send text messages, reach out on LinkedIn, and make phone calls to them; soon we'll have video as well. The intent there is to inoculate your workforce. You want to protect them against these next-generation threats, and the best way to do that is to show them what's possible. Then the next time they receive a phone call that sounds like it's from the CEO, asking them to go buy gift cards, or reveal secret information about the company, or wire money, or any of those types of things, they'll know: oh, actually, I should go through the appropriate processes, because just hearing the voice on the phone doesn't necessarily mean it was the real CEO.
Because deepfakes exist and all of this. So both parts are important. And we just saw that there's a tsunami coming for cybersecurity, and people are completely unprepared for it. We at least understand the state of the art, what's possible, and how to actually deploy these things in defense. I felt compelled: we have to start this company, because somebody needs to solve this problem. And we didn't see any of the incumbents being in any position to solve it, because cybersecurity is a very conservative field. For good reason; it's good that security is a little conservative. But clearly, none of the vendors there were anywhere close to having the right approach to solve this type of problem. And we knew what it took. So that's where we ended up starting the company, with co-founders I met at Inception as well. It's very meta, right? We started the accelerator, and then started our own company out of the accelerator program.

Yeah, that's amazing. And as a normal person like me, how can I know, oh, this is a scam? I can't recognize whether this is the real John or not, right? How can I proactively defend myself? I mean, right now we are talking with the real John, right?

Yeah, but how do you know it's the real John, right? It has my name in the corner.
How do you know this is the real me? This is an interesting question. The truth is that, yes, today, for video, the generation is not quite perfect, although it's getting so good. Look at about a year ago: I don't know if you've seen the generative AI video of Will Smith eating spaghetti. If you haven't, you can Google it; it's really funny. That was the state of the art in 2023, about a year ago or so. Now compare that to Sora or any of these other technologies, especially for things like virtual avatars. There are a bunch of off-the-shelf services you can use now where you don't need much video at all. In fact, Microsoft built one called VASA-1 where all you need is a single image, and for the voice clone, all you need is three seconds of audio. With those, you can create a very convincing deepfake of somebody talking. It gets the expressions right; it mimics everything. It's really astonishing, and that's the trajectory we're on. It's not perfect yet; you can tell a little bit.

But not that long ago, maybe a year ago, the best-practice way to detect a deepfake was basically to say: wave your hand in front of your face, or turn to the side, or something like that, because the deepfake software was not that great and you would get glitches. It reminds me a lot of generative AI images: if you want to find out whether something is generative AI, go count the number of fingers on the hand, because gen AI always used to get that wrong. But guess what? Now you look at the latest models, and they get it right. Any specific heuristic you name for how to detect it is going to become obsolete really quickly. So what we do is not focused on whether it's real or fake.
Because, or like AI machine generated or, or human generated. Right. Because the other thing is there's a lot of legitimate use cases for AI generated, uh, content. Right. I mean, we see these all the time at inception and like, these are like ones that will handle phone calls for me or, you know, or like even virtual avatars or give you a personalized video or any of those kinds of things. I think, you know, or even in zoom, right.
There are AI-powered features that will remove my wrinkles and make my face look better. And guess what? The best of those are powered by generative AI. So in some sense, having the software clean up my complexion and give me more hair, that is a fake. Do you want to flag that as not real? Technically, yes.
But the point is, those are not malicious use cases. You don't care about catching those. The stuff you want to catch is somebody trying to deepfake the CEO or the CFO and telling someone to wire money, or to give up sensitive company secrets, or their 2FA code, that type of stuff. That's what you really care about. And sometimes those scams are real humans.
There's a whole class of things called shallow fakes: it's real video, real audio, real media, but taken out of context. So it will pass all the checks. It'll be, yes, this is real. But because it's taken out of context, it still deceives you. And this is the way fakes used to work before deepfake technology; they used these shallow-fake
techniques, and they're still very effective. You can trick a lot of people. So again, real versus fake, or AI-generated versus human-generated, is not actually the problem. I think that's important. The real problem is: is this malicious or not? That's what we've been focusing on, rather than whether it's a real human or not.
I mean, we do have some detection technology that can tell if something looks like it was generated by one of the AI services or AI models, those types of things. But ultimately that's an arms race that the defenders will lose. So the better thing to focus on is the actual content: what are they trying to get you to do? Whether it's machine-generated or not is just one data point in that.
Very interesting. And I saw today that Ilya, the co-founder of OpenAI, raised a billion dollars from notable founders and investors, right, for the company he founded, Safe Superintelligence. Have you seen that? Yeah. I mean, part of it is, obviously, with Ilya and the others that are part of that team, it almost didn't matter what they were
working on. With a star team like that, especially now, there's lots of capital and lots of interest in AI still, right? This bubble has not popped. Things are as hot as they've ever been, and there's a lot of interest. So if you have a talented team, it almost doesn't matter what you're working on; you can probably raise
money, and a significant amount of money. Now, that being said, look, there's a lot of promise around AGI, or even ASI, artificial general intelligence, or what some people call artificial superintelligence. My view on this hasn't really changed over the last few years, even with the amazing things that new models can do,
because a lot of the stuff there is just a parlor trick. Somewhere in the dataset, somewhere on the internet, somebody did something similar. So when you test it out, it's actually just doing recall; it's memorizing stuff and spitting it back out. And people say, well, it's amazing.
Well, yeah, that's because it was trained on the entire internet and every book that's ever been written and all this stuff. So in some of these cases, the generalization is just not quite there. If you ever ask it math questions but change them from base 10 to some other base,
the fact that it's memorized all this stuff means it completely fails, because there's not that much in the dataset doing things like that. But that being said, there's also an argument that if it quacks like a duck and walks like a duck, then it is a duck. If the AI will consistently pass a Turing test and is able to do all of this, whether it's memorization or whatever,
but it's able to do all these amazing things, isn't that just intelligence? And I understand the gist behind that argument. But I think we're due for a correction. A lot of amazing things were promised for AI and agents and AGI and everything else, and the reality of it is not going to match
what was promised. That one I can pretty much guarantee. Anybody who's worked with these systems knows that they're simultaneously amazing and also really stupid at the same time. You ask things like how many R's are in the word strawberry, and it gives you the wrong answer. And that's just one little piece of trivia, but there's a bunch
of examples like this, where the language model is not even self-consistent within the context of its own answer. Because it's just spitting out tokens that are statistically likely given what it's seen in the dataset. There's a lot of structure in language that gives it the ability to
do these amazing things. But I still think we're a long way away from full artificial general intelligence or superintelligence. And even when we do have it, the first versions of it are going to be in very particular domains.
I don't think there's going to be a breakout moment where suddenly we have millions of virtual beings that are all smarter than humans. Even if you just look at the capacity of a human brain and ask how big of a neural network you would need and how much power you would need,
with the current state of technology, you'd need to build an entire data center and power it with an entire power plant to simulate one of these at the level you'd want for a human. The first step is to get it as intelligent as a cat or something like that. We're not even there yet.
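[Editor's aside: the strawberry failure John mentions is usually attributed to tokenization, since a model sees subword tokens rather than individual characters. A minimal sketch of the contrast; the token split shown is a hypothetical illustration, not any particular tokenizer's actual output:]

```python
# Character-level questions are trivial for code, which sees the raw string:
word = "strawberry"
print(word.count("r"))  # 3

# An LLM instead receives subword tokens, e.g. something like the split below
# (a made-up segmentation for illustration, not from a real tokenizer), so
# the letter count is never directly visible in its input:
tokens = ["str", "aw", "berry"]
assert "".join(tokens) == word  # the characters are only implicit in the tokens
```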
And that will take some time, right? But I do think there's such a big gap between the promise that has been made and the reality. If people are saying with a straight face, we're investing a billion dollars in this because we firmly believe ASI is around the corner and it's going to happen within the next two or three years, they're going to be disappointed.
But if they're putting a billion dollars in, and yes, they're going to shoot for that, but in the meantime they're going to have all these other amazing use cases that solve all these problems for people and create a huge amount of value, then yes, that makes total sense.
I totally agree that that type of thing is going to happen. And I think the danger with making the conversation so much about AGI or ASI is that it distracts us from what the real dangers of using this are. The actual danger is not that some superhuman intelligence is going to try to eradicate humanity, so that we need a magic off switch to turn it off in case it goes rogue.
That's sci-fi stuff. The real danger is that humans will use these tools to do bad things, at a scale they were not able to do before. Go and create a million sockpuppet accounts and influence elections, or create deepfakes that will change outcomes, those sorts of things. Or
even things that are maybe not quite as nefarious as that, you know. LLMs are just a tool. I saw Andrew Ng make this parallel: an LLM is like an engine. You can put engines in things, and they can be used for good things
and bad things, and the good uses overwhelmingly outweigh the bad uses, but you can use it for either one. I like that. That is very much the truth. And it doesn't make sense to unnecessarily restrict the development of this technology when it has so many positive outcomes, right?
And usually the core issue comes down to: what is people's perceived likelihood that this catastrophic event is going to happen, where the AI becomes sentient and decides to go and kill its creators? And I know there are people who are well known in the industry
who believe that is a real risk. And it's true: if you actually believe that's a real risk, then you would go down one particular path. But most people who actually work in this space, not political scientists or philosophers, but people who have used the technology and know what it's capable of, know
that we're very far away from that. It's such a remote chance. I heard it described as: I don't worry about that, for the same reason I don't worry about overpopulation on Mars. Because, yes, theoretically, but there are so many other worse problems,
and the chance of that happening anytime soon is really low. So that's certainly the case on the safety side. But I do think that the current approach to LLMs, next-token prediction, is not going to be the thing that brings us to AGI or anything close to it. It's going to require some fundamental reimagining of how these things
work. With really talented teams and people who do understand things at that level, that's the type of work that needs to happen to break through and make things that are much closer to human intelligence. Yeah, thanks. We need to prepare for it. And we love what you're doing.
And so we need to prepare for it, I think. And thank you for talking about Inception Studio and generative AI. We're also interested in your career. We saw in your biography that you started coding when you were five years old, and we were curious why you were interested in computer science
and AI. Did you really start coding when you were five years old? Yeah. I mean, I wasn't interested in computer science then. Mostly what happened was that we had a computer at home, and we had just very few games on it, and I wanted to play computer games. There used to be these magazines that would
have source code listings in them, and you could type them in and then play a game. So I would do this, typing it in line by line. This was when I was five, in kindergarten. I was doing it because I really wanted to play the game.
And I was bored. I was just bored, and whatever games we had on there, I got sick of them. It was, oh, this looks like a cool game, let me type in this code and see what happens. And of course, when you type it in, there will be typos. I would make mistakes, because I'm just copying it out of the
magazine, and there would be bugs in there. And then I started to learn how to debug. At the beginning I would look character by character: where did I make a typo? And as you do this more and more, you get a little bit of intuition: oh, this part is broken,
so what part of the code should I look at to find where my typo is? And then, of course, once I did that, I was never satisfied with just playing the game as it was written. I wanted to make changes. It would start with, oh, I want to put my name in here, or I want to add a new feature, that kind of stuff.
So I started off not really writing code, just making small changes: I'd change an if, or change a number, and see what the result would be. That's basically how I got started. And then I just got better and better at it as I was learning more and more.
That's basically how I got started in coding, in programming. And then, once upon a time there was this thing called a BBS, a bulletin board system. You have a modem and you dial in, and this was pre-internet, and you could
send messages and talk to people, that kind of stuff. And there's a sysop, the system operator of the board. I got a modem very early, a 1200-baud one, when everyone else had 300-baud modems, so I had a really fast modem, four times faster than everyone else's.
I would do all this dial-up, and I got into that scene for a while, and then I began to run my own BBS. And I had some friends who were building games for BBSs, so I decided to make my own game for my BBS, this whole RPG game, that kind of thing. This was when I was in high school.
By that time, that's what I was doing. Actually, I started in middle school and continued into high school. And that's where I learned C. I started with Pascal, then learned C and different languages, that kind of stuff. That's how I got good at it. And then what happened was that I took AP Computer Science in high school as a sophomore, which was
the first year they let me take it; they wouldn't let me take it as a freshman. And I ended up doing well in that class. I got a five on the AP test and everything. And then the teacher for that class clued me in on this thing called the USA Computing Olympiad, which was a competitive programming contest.
She knew that I was really good at this, and a lot of it came naturally because I'd been doing it since I was five or so, and I already had a lot of coding experience through my BBS days and the games I'd written. So I ended up doing those contests, and those problems were really, really hard, way harder than any problems I'd ever worked on before, but I got really into it.
And I ended up going to the national round of the USA Computing Olympiad as one of the top 15. I didn't make it to the IOI, the international olympiad, where only the top four from the US end up going. I did not make the top four, but I made the top 15. And I think that was the first time I felt like, oh, I might be good at this thing.
Because I never really thought of myself as good at this; that was my first indication that maybe there's something here. And that was my first experience with the computer science side, because until then I was just coding; I wasn't really thinking about algorithms and stuff like that. But in my preparation for the USA Computing Olympiad I got an algorithms book, and I began to read it.
And that's where I got much more into the actual computer science side of things, algorithms and everything else. And then I went to MIT. I got into MIT basically because I was in the USA Computing Olympiad. It was not based on my grades, and my SAT scores were okay but not fantastic. It was because I had done the olympiad. So I got accepted, which was like a miracle, because that was my top choice, my dream
school. And at MIT I was able to apply myself much better than I did in high school, so I ended up doing much, much better. And that's where I learned about compilers. I was always interested in how a compiler works. It's amazing how you can go from source code to machine code. How does that even work?
Then I learned how it worked, and I got so into compilers. I still love compilers. It's my favorite topic; it's why I taught the compilers class at Stanford multiple times. It's my first love. I could talk about compilers for days with anybody, because I really love the area, and I think working on compilers forces you to work on a meta level.
You also have to be really strong on algorithms and on implementation, both together, right? You cannot just hack together a compiler; you will never make it work reliably enough for people to use it. But you also can't work entirely in a theoretical domain, because ultimately compilers run on real computers, real hardware, with real programs.
So you have to understand how people write programs, how the architecture works, all that kind of stuff. It's this wedding of the two together that really attracted me to that problem space. Do you still teach at Stanford? I don't teach the compilers class currently, because now I teach the LLMs class, CS224G, which is about building applications using large language models.
Although I taught the compilers class in the past, and one of the most amazing experiences was that I got to co-teach it with Jeff Ullman, who won the Turing Award for his work on compilers. He's one of the authors of the dragon book and multiple other textbooks. So I got to co-teach a class with him after he had won the Turing Award.
And I thought, surely he's not going to come and co-teach a class with me. There's no way. He already hit the pinnacle; why would he come? But he did, because he's a great guy and he loves this stuff as well. He's getting up there in years and everything, but he came back, and the year after he won the Turing Award, we co-taught the class together,
which was a really amazing experience. I never thought I would have a chance to do that at any time in my life. That was definitely a high point, to work on and teach a compilers class at Stanford with Jeff Ullman. Very cool. Yeah, that's amazing. Thank you for sharing your life history of starting to code. You also touched on the topic of hardware.
So right now, in the AI field, it's so competitive, not only LLMs but also the chips. It's such severe competition. Do you have any big picture on this, like how it's going, or what we should focus on? I mean, is this specifically around the hardware area, or software in general? Yeah.
Yeah. I mean, look, it's so interesting to see in this machine learning space that a lot of the topic areas you hear about, systolic arrays, even wafer-scale integration, parallelization, all this stuff, these are things that in the compiler space we worked on in the late eighties or so,
and there's a lot of overlap with scientific computing and these other areas. There just weren't that many compelling use cases then, but a lot of those ideas really haven't changed since, either on the architecture side or on the compiler and code-generation side. And so for a long time, the state of the art
with TensorFlow and PyTorch and everything else was abysmal. It was embarrassing. The utilization on the GPUs was really, really low, and it was a struggle even to get things to perform well. As a compiler person, I could look at this and think, oh my gosh, you could just use some basic techniques that we've known
for the last 20 or 30 years and do a much better job. The truth is that there were just not that many good compiler people working in machine learning early on. Now that's totally changed. Now all the smartest people are working on these problems, and that's why you're getting these leaps and bounds of improved efficiency, both on the hardware side and on the software side.
And that's why the new capabilities are so much better: there was a lot of catch-up that had to happen there. NVIDIA is certainly the de facto leader in this space, by far. I think they just released their latest numbers, and they completely beat all their numbers,
and their stock price still ended up going down, because people said, you didn't exceed expectations by as much as we thought you would. And there was a time when NVIDIA was flirting with being the most valuable company in the world, and for good reason. Look at that business; it's incredible. And it's not the hardware side so much as the software side. Look at the
entire software stack, look at CUDA; they own that whole stack. And it's really, really hard for somebody else to come in and displace all of that, because not only do you have to have great hardware, you also have to have all the tools, the compilers, the debuggers, everything else in there. That's a ton of work.
And NVIDIA has been working on it for a much longer time than anyone else, so they have a big advantage. Now, that being said, there are some very interesting new companies making specialized hardware, especially hardware optimized for inference and other things, where it's like, this is a hundred X or a thousand X more efficient. Totally makes sense.
Knowing the way the architecture works, people can totally build those things. It's not like NVIDIA has all the answers. It's very possible to build very highly efficient hardware, and some of those companies are going to start to eat into NVIDIA. NVIDIA is at the top, and from the top, the only way is down.
Now, do any of those have a chance of displacing NVIDIA anytime soon? Probably not. There's so much inertia there; it's really, really hard. And NVIDIA makes a huge margin on their GPUs. They make so much money, it's crazy. So even if some of these competitors get just a small fraction of that, it's still a lot of money.
It's a pretty big market. So I still think there's going to be a lot of interesting stuff there, but it's going to be mostly on specialized tasks around particular architectures, for inference and other kinds of things. You know, I used to work at IBM, and there used to be this saying: nobody gets fired for buying IBM. That was because IBM was always a safe bet.
That's not true of IBM anymore, but now it is true of NVIDIA. If you are procuring hardware for machine learning tasks, you're not going to get fired for buying NVIDIA. But you will get fired if you buy from some smaller, more upstart company and it ends up failing. That's a big catastrophe, right?
So those competitors have to find their edge, and it seems like their edge is going to be in terms of performance, and power efficiency as well, those sorts of things. And they have to be not just 10% better but 10x better to be able to jump that gap: this is why I'm not going to buy NVIDIA, because this is actually going to save us so much on our power bill or our GPU
bill, or whatever it is, that it's worth the risk. But it will happen, just because there's so much money there, and NVIDIA is not going to be able to continue to execute on every axis. Some competitors are going to find some niche, and they're going to end up owning that niche, and NVIDIA is not going to compete there.
And then from that niche it will grow to other types of use cases. Yeah. So I think there's a lot of interesting stuff happening on the hardware side, interesting in the sense that they're finally applying the techniques we knew about from 10 or 20 or 30 years ago, and they're finally being deployed in actual hardware that will be much more efficient than the GPUs that NVIDIA has.
Yeah. And I saw so many players working in this space, like Groq and Google, and OpenAI started exploring chips. By the way, do you think OpenAI will keep the same position in, let's say, five or ten years, or will another company come out on top? I don't think so. And the reason I say that: they were the undisputed leader, without a doubt, for a long time.
And then over time some real, viable challengers began to appear. At the beginning it was, oh, maybe Gemini is going to be okay, but that clearly wasn't it at first; they flubbed the launch, and it just didn't work very well. But you can't ignore Google. They have a lot of resources; they'll figure it out.
And then you also have, you know, meta there as well. They have like, I mean, I think Mark Zuckerberg said, he's like buying up all the GPUs and stuff. So there's no more GPUs for anyone else. You know, they're a, they're a serious competitor there as well. And like, and then Anthropic, I mean, there was a, like, there was big news wherein it's, you know, Anthropic's, you know, Claude, you know, the biggest, whatever Claude Opus, whatever it beat out, you know, it beat out
GPT-4 and like, it, it like soundly beat out GPT-4. So it's like, now it's like, these are, they have real competitors there. Right. And like, and those are just a handful and there's other ones as well. Like now we've kind of continued to see new ones being developed all the time and, and yeah. And like very competitive performance. And the other thing is that there is, so it's very interesting.
At Stanford, I remember, once upon a time when I was teaching there, all the best students would go to Google or to Facebook, because those were the cool, hot companies to go to. And then at some point that changed, and those companies were no longer getting the top-tier people; they were getting the second-tier people.
Because the top-tier people were going to other companies, like OpenAI, for example. And I think OpenAI is still one of the most sought-after places for talented people in AI, one of the most sought-after engineering roles. But I'm beginning to see signs of this changing, where, due to a variety of factors, the top talent is not going to OpenAI anymore.
In fact, there are talented people leaving OpenAI because of growing pains and whatever else is happening. You mentioned Ilya, and there are others like that as well. I know Greg Brockman is on leave, and there are other things going on. I've talked with enough people at OpenAI to know they're experiencing some difficulties.
And I think that's a very interesting leading indicator for whether the innovations are going to be happening three, four, five years down the line. If you're not attracting the very best talent, from schools or from wherever, it's going to be really hard to keep up your innovation edge over time. And I've started to see this with OpenAI.
And what do those people do now? They go start companies. If you're talented and you have skills in AI, and this is true at Stanford and elsewhere, your options are: join a company like OpenAI or Meta or Anthropic, or go start your own thing, or join something that's extremely early stage.
There's an increasing number of people going the startup route, because they want to join the next OpenAI; they want to join the thing that's going to be the next big thing. And it's funny to talk about OpenAI as the incumbent, but they are. Even though these companies are all very new, OpenAI is the incumbent in this space.
And because of a variety of factors, a lot of the top talent is not going to the incumbent; it's going to some of the challengers, the up-and-comers. And that does not bode well for OpenAI continuing to maintain their edge. I still think they have one. Anthropic and others have come along and leapfrogged them in various ways, but OpenAI has stuff it hasn't released yet, and when they release it, they'll leapfrog again. So it'll be competitive for a while. But if they're not in a position where they're attracting the very best talent, they're not going to be able to maintain that. And this is why I think, five years from now, is OpenAI still going to be the dominant player? Maybe not.
I see, yeah, that sounds right.
Makes sense. And then, would you recommend, especially to students, going to the next OpenAI or starting their own company? Because thanks to AI we can keep teams small and delegate so many things to AI, but sometimes students don't have enough skill yet, right? So by joining an emerging company, like the next OpenAI, they could gain great experience there. What would you recommend?
So, given that I run an early-stage AI startup accelerator, I'm obviously a huge proponent of entrepreneurship and startups. But I will also say it's not for everyone. There are people for whom the right thing to do is to join OpenAI, or Google, or some larger, more established company that's further along and more stable, especially if work is not your life. If you want a good work-life balance, and you want to prioritize things outside your work, then yeah, join a bigger company.
Your life is going to be a lot easier there, if that's what's important to you. Now, that being said, if you're ambitious, if you're driven, if you really want to make an impact, then either join an early-stage company that's on a rocket ship, so you can be part of it the whole way, or just strike out and try to start your own company.
And you're going to learn way more that way. In the early part of your career, it's far better to optimize for learning than to optimize for salary or other things. Now, this is all couched in the question of what you actually want. What's your ambition? What do you want to do with your life? If you feel that work is not the most important thing in your life, and you want to do other things, then optimize for those; you're not going to be happy starting a company where you're working 80 or 100 hours a week to keep it afloat and surviving.
But what I would say is, if you're just starting out, don't optimize for building your ideal resume or anything like that. At the beginning, optimize for figuring out what you actually want to do. If you succeed in figuring out "this is what I like to do, this is what I don't like to do," and you have a good sense of that, then that is the actual path to happiness and fulfillment. Because otherwise you could be on a career path that's not for you; you're basically on somebody else's career path. You may wake up one day and find you have a resume pointing in a direction you're not interested in going, because that's not what you want to do with your life. So the first step is to figure out what you actually want to do, what actually makes you excited, what you want to spend your valuable, limited time on earth doing. Once you understand that, you can craft a plan: how do I get to that point, so that I'm fulfilled in that way?
And for that, it's usually about optimizing for learning. You want to put yourself in a position where you're learning the most you possibly can. If you work in a big company, you'll learn some things, but you're not going to learn much about entrepreneurship; the larger the organization, the narrower your role within it. You'll learn how to do that one very particular thing pretty well, but you're not going to learn much beyond it. And by the way, the skills to be successful at a company like Google, or any large company, are a really different set of skills than the skills to be successful at a startup. So different. A lot of what it takes at bigger companies is: how do I navigate the politics? How do I get people on our side? How do I fight for the resources that I need? And much of that means not stepping on other people's toes, because that will make them mad and cause problems.
That is usually what it takes to be successful in a bigger company. The skills to be successful as a startup founder, or at an early-stage company, are totally different. So think about that, and optimize for learning and skills early on. A very good way to learn is by joining an early-stage company, being a key employee there, and growing with that company. That's a great way to get a firsthand view, and if you're doing it right, you'll be exposed to things like marketing, sales, product, and all these other areas, and your learning will be dramatically accelerated. Or just jump in the deep end of the pool: "Okay, I'm going to start a company," and then you'll be forced to learn very quickly. I do think there's a benefit, though, in being in a position where you've seen greatness in some area; it's hard to be great unless you've seen greatness in the past. And there are different
organizations that are great at different things, and different founders and different people who are really great at different things. So having the opportunity to work with somebody who is great in an area, an area aligned with your long-term goals, is an amazing opportunity, whether that person is at a big company or is a founder whose team you join early. Either of those can be a really good opportunity.
And there's no perfect resume or LinkedIn profile. A lot of people think there is and try to craft one, especially college students, because that's what you need to get into college: put together a resume that looks really attractive to college admissions. After your first job, nobody really cares; it's not going to matter. What actually matters is your performance in the real world, not so much "I had a job here." People care less and less about that as you get later in your career. They don't care what school you went to, what your major was, or your GPA. Maybe it matters for the first few jobs, but after that it's not as important as what you've done. And I think this is good; this is the way it should be. There's supposed to be equity in terms of opportunity. Just because some people are fortunate enough to have a particular type of upbringing where they had
certain kinds of opportunities, that should not define their whole course for the rest of their life, right? Just because you were lucky enough to get an internship at some place, or go to Stanford, or whatever it is. There are a lot of talented people at Stanford, definitely, but some of the best people I've worked with were college dropouts, and they were extremely good. There is some predictive value there, in the sense that these schools are a high filter, but it is by no means the case that everybody who goes to Stanford, or MIT for that matter, or any other school, is high quality. That's not the case; there are lots and lots of counterexamples.
And what that means is, even if you're a high school dropout, if you work hard and hustle and improve yourself, you can get those early opportunities, those early learning opportunities, and you can end up being successful. Look at Greg Brockman, whom I mentioned: he never graduated. He was a semester away from graduating from MIT, and he never got his degree, so Greg Brockman has a high school diploma. He never finished, because he left a semester early; he didn't need to. It doesn't matter. And there are lots of examples like that. You would assume, oh, he must have a PhD or whatever. Not the case. He did go to MIT, but he dropped out to join Stripe at its very beginning, which was obviously the right decision.
Right. And he hasn't looked back since then. Yeah, that makes sense, and it's exciting. And so, you're a three-time cybersecurity founder, a successful founder. Looking back now, is there something you would have done differently? What's the biggest learning from your life as a founder, through all the challenges you went through?
Oh my gosh, I have tons of them. Especially at my first company, I made so many mistakes, and I have all these battle scars from doing things the wrong way. I would say the biggest one is: just because somebody is willing to give you money for something does not mean it's a good idea.
You can't rely on investors to have any kind of calibration about what is or isn't a good idea. There's this image of, "Oh, this famous investor invested in this, therefore it must be a great opportunity, a great idea." No. More often than not, investors don't have the answers. The founders are the ones actually living it; they live in the space, they know it, and they are, or should be, the world's experts in it. So don't get enamored by big names, because big names invest in a lot of stupid stuff, and if your thing is one of those stupid things, your time is worth more than their money. Think about that.
Another one is making sure that when you have investors, they understand what you're doing, they're well aligned, and they're supportive. If they don't really understand what you're doing, that's going to lead to all sorts of problems down the road. And if they're not aligned with what your strategy should be, or they're not supportive in various ways, life's too short; don't waste time working with people like that. Try to find the investors who are actually well aligned and supportive and who understand what you're doing. They're not going to have the answers, but you want to be able to go to them and say, "Hey, we need help with X," and have them jump at the opportunity to help out. Those are the people you want on your side.
Also, be careful about who your co-founders are. Make sure you have good complementary skill sets, and mutual respect becomes really important. If there's a question about who's responsible for what, it leads to all sorts of problems and issues down the road. There has to be mutual respect. You have to be able to say, "I'm going to fully delegate this part of the business, this founding role, to this person," and fully trust that they're going to do it not only better than you could, but better than anyone you would hire. If you don't believe that to be the case, it's unlikely to work out. Sometimes you have a senior founder and a junior founder, and there's a clear hierarchy there.
But even those situations get really, really difficult. The best scenario is that you have complementary skills plus mutual respect: each person respects the other for the contributions they're making. A lot of companies struggle with co-founder issues, issues between the co-founders. And on that topic: find somebody who's a little bit different from you. There are benefits to doing that. It's not strictly required, but it's a lot easier if, early on, the DNA of your company has some diversity in it, because the further along you go with all the same DNA in the company, the harder it becomes to adjust in the future. And that leads to problems. The best companies are not built by taking the founder and genetically cloning them 50 times, so that those are the 50 employees of the company.
Sometimes it feels that way. Sometimes you think, oh, I wish I had a bunch of clones of me to go do all this extra work. But that's not the actual path to the strongest, best companies, because the reason you start a company is that you want to build something bigger than you could do yourself. And the only way to do that is with other people, with other skill sets and other perspectives.
That's how you end up with a product and a company that is better than you could have built by yourself, or even than 50 clones of yourself could have built. So be conscious about that, and go a little bit outside your comfort zone: this person is a little bit different from me, but I know they're very good at what they do, I know they're world-class in this area, so even though it's a little uncomfortable, let's work together, or I'm going to hire them, or make them a co-founder. I think it's good to get into those kinds of situations, even if they're a little uncomfortable.
Because it's always easier to say, oh, I'm an engineer, and with other engineers we all talk engineer language; we get each other. There's some benefit to that, but there are downsides too as the company grows. And I guess one last thing: I think there's a stigma against solo founders, and I don't know why.
There are pros and cons to being a solo founder. Like I mentioned, you want more diversity, and that's easier with more than one person. On the other hand, you avoid a lot of drama and a lot of other problems by being a solo founder. And there are challenges: a lot of the challenge of being a solo founder is that it's a very lonely job, and you don't really have somebody to bounce your ideas off of, or to be your psychologist, that sort of thing. Which is all true.
But there are ways to supplement that; you don't necessarily need a co-founder for it. I know it's not typical, but having no co-founder is better than having a bad co-founder, or even a mediocre one. If those are your options, don't be afraid of saying, "You know what, I'm going to start this company myself," even if you've never done it before. This is where something like community can really help, because it gives you an unfair advantage: you can take advantage of the hive mind of a lot of other people who have been through it before, and that makes things a lot easier.
So if the right answer is to be a solo founder, don't say, "Well, YC never accepts solo founders, so I'm not going to be a solo founder." If you look at the actual stats, quite a few very highly successful companies had solo founders, or de facto solo founders. So it's very possible to be successful that way, and to raise money and do everything else you need. It has its own set of challenges, but it also avoids another set of problems. People talk about how the critical path is often the communication bandwidth between the co-founders.
And in the case of one person, the communication bandwidth is within your own cerebrum; it's all in the same brain, so you can't get much more efficient than that. The problem of making sure the founders are on the same page becomes much, much easier when there's only one founder. So yeah, those are some of the things I've learned over time. And it's been a lot of fun. This is also why I think I'm in a privileged position: through Inception, I get the chance to work with a lot more founders, and that has really accelerated my thinking, not only from my personal experience but from working with a ton of different founders through Inception. I get to accelerate my learning on that side, so that's been a lot of fun.
Yeah, I think we should all apply to Inception Studio. That's a great place to go. Thank you for that. And so, time is running out, and this is the last question, a big question.
Since Glasp is a platform where people share what they are learning as a digital legacy, we want to know: what legacy or impact do you want to leave behind for future generations?
That is a big question. Yeah, I mean, since I was young, I felt like, okay, I was born with certain things I'm talented at, certain things I'm good at. And there are also things where people have made a big investment in me. I went to MIT, and there was a lot of instruction and a lot of tuition there, and I went all the way through a PhD; not that many people get that many years of schooling. And so, because of that combination, the talents I feel I have plus the schooling and other learning I've received, I feel an obligation to go do some good for the world. Otherwise it feels like a waste: whatever was given to me, in terms of genetics or other kinds of investment, would be a wasted investment.
It's my obligation to humanity in general to contribute in a positive way, in the ways that somebody who was given the gifts, benefits, and access that I had should contribute. I have an obligation to help other people as well. And I'm very fortunate to be at a stage of life where I'm able to do that now: Inception Studio is one piece of that, teaching at Stanford is another, and there are other things I try to do as well. So hopefully that will eventually be my legacy. I'm maybe 0.001% of the way there so far; I've still got a long, long way to go.
I mean, when the time comes that I'm in the latter years of my life and I'm looking back, asking, hey, was I fulfilled? Was this a fulfilled life? That's the piece that's most important to me, at least for now. Maybe when I get older I'll have a different perspective on what's actually important.
But for now, that includes my family and my kids, and other people as well: just having a really positive impact and making the world better in all the ways I feel I can contribute.
Very beautiful, yeah. Thank you so much for all the answers, and for sharing the insights from your life and experience.
Yeah, no, thank you. It was a lot of fun. I hope the recording turns out well, and I'm looking forward to seeing it.