Welcome back to another episode of Grasp Talk. Today we're excited to welcome John Whaley, a visionary in the world of cybersecurity and AI. John is the founder of Inception Studio, a nonprofit, community-driven accelerator focused on empowering the brightest minds in AI, as well as RedCode AI, a company dedicated to protecting people from AI-enhanced social engineering attacks. With a PhD from Stanford and a track record of founding multiple successful companies like UnifyID and Moka5, John has been at the forefront of cutting-edge technologies, from program analysis and optimization to virtualization and cybersecurity. John is also a passionate educator, teaching a course at Stanford on building apps with large language models and sharing his insights on next-generation AI technologies.
Today we'd like to ask him what's happening in AI now and how AI will impact our lives in the future. Thank you for joining us today, John.

Yeah, great to be here, and thanks for the introduction. I've been working in this space here in the Bay Area for almost 25 years, since I came here to start at Stanford, and it's been very interesting to see all the different waves of things that have happened since then. I remember working in AI even before, during the period people called the AI winter. Everything has come full circle, and now AI has become such a hot thing. It's very interesting to see. But I've always been very technical and very much a builder. Even when I was doing my PhD at Stanford and planning to be a professor, all of my research was about building things. My research areas were compilers and program analysis, but it was all about how you help software engineers build things faster and better, and provably correct, those sorts of things. So even my research was very engineering-focused, and it always has been. That's always been my background: being a builder, starting with writing code as a software engineer, then later doing research and helping other people do that, and ultimately building companies. Now I'm at a meta level of sorts, because through Inception Studio I help other people build companies. It's been a lot of fun so far.

So first of all, I'm curious about Inception Studio. I know what Inception Studio is, but could you explain for people who don't: what is Inception Studio, and why did you start it?

Yeah. The backstory there is that my last company got acquired almost four years ago, and I was looking for my next thing to do and not making a ton of progress, because my life was just too comfortable.
This was in 2021, going into 2022, still pandemic times, and I was just kind of stuck. I knew I wanted to do another company, and all of these things were happening: GPT-3 had come out in 2020 and seemed to do some pretty interesting things. So we knew this was going to be big, but I was not making much progress toward a company. Part of that is that I know myself: I'm very deadline-driven. If I don't have a deadline, nothing happens; I end up procrastinating and not getting much done. So I knew what I needed: to go away somewhere, remove all my distractions, be surrounded by other smart, creative people, and, most importantly, have a deadline. That was the genesis of the very first event we did, back in November of 2022. The topic area was large language models and generative AI. We got a bunch of really amazing founders there, and quite a few companies were either launched there or grew out of it, which was really exciting. Part of the reflection on why that first event went so well was that we kept the quality bar really high and were able to attract really, really good people. So as we thought about doing more of these events, we wanted to keep that quality bar high and avoid the problem of adverse selection, where the best people choose not to go. And that's what we did. We've run 12 of these events so far, these cohorts of founders, with 144 founders through the program, and it's been very, very successful.
Because it turns out that when you curate a group of really amazing people who are all ready to start companies and put them together, they end up starting companies, and those companies end up doing extremely well. That's been the premise of what we're doing. We run this as a nonprofit and we don't take any equity in the companies, again because we want to focus on quality and get the best people together. We're a 501(c)(3) nonprofit, so we just ask people, if they're able, to donate to cover their costs for room and board, and if they can't, that's fine as well; we can waive the cost. Because of that, we've been able to attract some really great founders, and some really good success stories have come out of Inception. This is a really different model from most other accelerators, which are mostly looking for people who will take some type of deal: give up 7% of your company and they'll pay something like 125K for that 7%. That's a typical deal, sometimes even up to 10%. That might work well for early-stage, first-time founders, but if you work in a really hot space, or you're a serial entrepreneur who's had success in the past, or you already have a lot of connections to investors, you don't really need to join that type of accelerator program where you're giving up a lot. So those programs have an adverse selection problem: they're not able to attract the very best people. We wanted to avoid that. That's why we don't take any equity, and that's why we're a nonprofit. And that's why we've been able to attract some really amazing people.
There's inherent value in curating a group of amazing people and putting them together. Now that we've run 12 of these retreats, there's also a really strong founder community. I never pegged myself as a community-organizer type at all; it just kind of happened, and over time that's been a really exciting part of the journey for me, exploring that side of how to organize a community of founders. Everything has been great so far. Our next event is coming up at the end of September 2024, and we've been running these events every six to eight weeks. Now we're talking about expanding internationally as well; we're starting to plan an event in Japan and other locations. The truth is that there are a lot of great founders in many different areas and many different walks of life. I'm also a big believer that the best teams are the most diverse teams, so that's something we put conscious effort into when we think about composing the cohorts: not just having all the same type of people, because if we did, the event would not be nearly as successful as it is. If we just had a bunch of engineers, it would be a hackathon; people would hack up some solutions, but they would never turn into viable businesses. On the other hand, too many product managers together isn't going to turn into anything either, or all business people, or any other single type. If it were all CEOs, that would be a disaster, more like a reality TV show. Because the truth is you need all types: people who can be CEOs and people who are not, people who are technical, people who understand product, people who can sell or understand go-to-market, all of it. Ultimately, the teams with a good mixture of skills among the founders or the early team are much more likely to be successful.
And this is statistically supported. There's a book called The Founder's Dilemmas that goes into this in depth, analyzing which types of founding teams are more likely to be successful, and it shows that teams with some diversity early on are statistically more likely to succeed. Also, just personally, I hate situations where somebody is talented and ambitious, and for the good of the world they should be starting a company, but because of circumstances outside their control, discrimination, or just society and those types of things, they're not able to fulfill their dream and their destiny. I hate those situations, and it makes me want to fight for and support people in that position. So far Inception has been about 25% women, and we're working to increase that, because I believe there are many women who can and should be starting companies and, for a variety of reasons, have not been able to, and we want to help support that. Similarly, we're looking at Japan and other places where entrepreneurship is maybe not as common but where there are a lot of very talented, ambitious people. I want to do everything I can to support people in those circumstances as well.

Yeah, I can totally relate to what you said. By the way, I went to cohort one, the first pilot cohort, and that was amazing. But I don't know what's happening nowadays with the recent cohorts. Have you found any interesting projects or ideas in the past cohorts?
Oh, there are so many. This is the benefit, and this is why I love doing this: you get to interact with so many interesting people from different backgrounds who are all very ambitious and very accomplished, who have all been thinking about this for a while, and you get to hear about the very interesting things they're doing. Now, the very first event we did, the one you went to, explicitly had a topic area of large language models and generative AI. That was because we did it in November 2022, about two weeks before ChatGPT came out, so very good timing. The reality is that we're not trying to restrict and focus only on AI founders. It's just a very hot area right now and there's a lot of interest in it. Large language models are a new capability that didn't exist in the past, and now it does, and it opens up so many new opportunities that I think it's natural that a lot of founders are very interested in building with these tools and in these areas. The danger is if you start from gen AI, from "we have this amazing tool called GPT-whatever," some large language model or generative AI model, then it's like a hammer looking for nails, and you end up not solving real problems for customers. It's much better to start from the problem; if AI is the solution, that's great, but there are a lot of cases where AI is not actually the best solution, and that's fine as well. That being said, I would say most of the companies we deal with through Inception are now fully embracing AI, not only in terms of the product they're building
but also internally, for their companies, because there's an opportunity now, by utilizing the latest tools, to act like a much larger company with far fewer people, leveraging these gen AI tools to handle a lot of different things about your company. The people who are fully fluent in these tools, who are able to use them, have almost a superpower right now: you end up being five or ten times more productive than people who aren't using those tools or don't understand how to use them. That has been a really interesting evolution recently. It used to be that if you wanted to do this, you had to scale up a pretty big team and hire a bunch of people. That's a lot less true now. You can get away with a much smaller team, a small handful of people, and effectively punch way above your weight if you really know how to use these tools effectively. So when we talk about AI-native companies, it's both: yes, they're building products that are AI-native, with a data strategy that understands the value of data and all of that, but they're also utilizing AI tools themselves within their companies, which gives them a huge competitive advantage over larger companies that aren't embracing these tools as closely.

Yeah. And in the past I've seen
Friend and Memsero and Co-Frame; many interesting products have come out of Inception Studio, and that's really impressive.

Yeah, we have a lot of great companies. I could go through each of them, and I'm actually very excited about each of them for a variety of reasons. Friend is one of the first ones where we actually started to get into the hardware space. There have been some high-profile failures in this space, products that were released that were just not very good. That being said, I still think someone's going to crack it; there's a lot of promise in that area of AI wearables. Co-Frame is another one you mentioned, which is very interesting and exciting. Josh Payne, the founder, came to our cohort three, and we were talking about different ideas. When he described the idea of what he's doing, which is basically, "I have a webpage, I want to do A/B testing, use generative AI to generate the different variants, and then use reinforcement learning to automatically improve the website," I thought: of course, that makes so much sense. But not only that, he took it in an even further direction, talking about the future of user interfaces: these generative-AI-driven, living user interfaces.
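To make that pattern concrete, here is a minimal sketch of the general idea described above, not Co-Frame's actual implementation: a generative model would propose the page variants (omitted here), and a simple Thompson-sampling bandit shifts traffic toward whichever variant converts best. The variant names and the conversion signal are hypothetical.

```python
import random
from collections import defaultdict

class VariantBandit:
    """Thompson sampling over conversion rates of AI-generated page variants."""

    def __init__(self, variants):
        self.variants = list(variants)            # identifiers of generated variants
        self.successes = defaultdict(lambda: 1)   # Beta(1, 1) prior per variant
        self.failures = defaultdict(lambda: 1)

    def choose(self):
        # Sample a plausible conversion rate for each variant; serve the best sample.
        samples = {
            v: random.betavariate(self.successes[v], self.failures[v])
            for v in self.variants
        }
        return max(samples, key=samples.get)

    def record(self, variant, converted):
        # Update the served variant with the observed outcome (click, signup, ...).
        if converted:
            self.successes[variant] += 1
        else:
            self.failures[variant] += 1

# Usage: serve bandit.choose() to each visitor, then report what happened.
bandit = VariantBandit(["hero_copy_v1", "hero_copy_v2", "hero_copy_v3"])
variant = bandit.choose()
bandit.record(variant, converted=True)
```

In practice the "reinforcement learning" piece can be as simple as this kind of bandit, with the generative model periodically proposing new variants to add to the pool.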
And that's the benefit we have working with founders who are high-caliber and ambitious: they're not thinking small, they're thinking big, about things that will have a big impact. A lot of times a first-time or early-stage founder is thinking, "Look, I can make a viable business doing this." And that's fine, but I learned this early on: just because you can solve a problem doesn't mean you should, because your time is very valuable. Even if you can say, "I could solve this problem" or "I can build a viable business in this area," the truth is that if you're talented, if you're a good engineer or have other skills, there are a lot of things you could do. So what separates the highly successful people from the people who never really reach that level of success is partly talent, but even more so your taste in the problems you work on. If you have good taste in problems, you're much more likely to hit those levels of success. That's often what separates people who merely have talent from the people who reach that next level. And that's some of the benefit here. At Inception, around 70% are serial entrepreneurs, people who have started at least one company in the past, but almost 30% are first-time founders. We're looking for people who have that combination of talent and ambition, not just "I could make a viable business in this area" but "we're looking to build the next great company, or change the world in these very particular ways." That's what we look for and filter for. By curating a group of people who are all like-minded in that way, it ends up being a lot of fun, and there are all these synergies, because you're getting people together who are at about the same stage, working in similar areas, who are all ambitious,
who are all trying to start companies. There are a lot of positive synergies that happen between those people. Some of this has been learning over time as well. I didn't really think about or anticipate this, but whenever you're in any type of program with a cohort, a group of people together, there's an opportunity. Sometimes you get this sibling-rivalry kind of thing: you see other people you know doing well, and you don't want to fall behind, so it gives you that little bit of extra push. "I want to keep up, I don't want to fall behind, so I'm going to push forward as well." This was something I did not anticipate at all, but there's some aspect of that. It's not directly competitive, not "we're all being graded on a curve and only a few of us are going to get A's and everyone else gets B's or C's." It's not like that. It's much more like being on a sports team together: we're all pushing each other in the same way, because we're all part of the same experience. It's an environment that's more collaborative than competitive, but there's a gentle kind of competition in the sense of people inspiring each other: "Oh, well, I should do this too." I've seen this happen so many times: one of the Inception companies launches on Product Hunt, and another one says, "I've got to launch on Product Hunt too," and it happens again and again. It becomes this positive feedback cycle, which is really amazing to be part of.

Yeah. And that happened to you too, right? You started RedCode AI out of Inception Studio.
Could you tell us why you started RedCode, and what RedCode AI does?

Yeah. So, the backstory on RedCode AI: my first two companies were in cybersecurity, so I had a lot of experience there, but I was pretty convinced my next company would not be in cybersecurity. I was looking to broaden out, because I knew all the downfalls of starting a cybersecurity company. But what actually happened was that at one of these Inception cohorts, in the early days, I was still looking for my next thing as well, so I would join some of the teams and work with them, partly because it's fun and partly because I wanted to explore different ideas. I was actually in cohort three, and we formed a team there and really began to work on this idea around RedCode AI. At the beginning I thought maybe there wasn't that much there, but as we talked about it more, it became pretty interesting.

So, basically, what we do at RedCode AI: LLMs and generative AI are the biggest change I've seen in my career for cybersecurity, in terms of the implications, and nowhere more so than in social engineering. In the past it was really easy to tell when somebody sent you a scam message, because it had misspellings and bad grammar, so you could usually tell pretty easily. Now, with generative AI, you can make highly targeted and perfectly fluent attacks, not only with text but also with voice and now even video. That has become very possible. And of course, in every new wave of technology, scams and porn are always the two early-adopter areas.
Gen AI is no exception. A lot of scammers have started building on this; I'm sure you've seen that smishing and these other types of messages are up something like 1,700% year over year. It's become a real problem, and the quality is getting better and better. These attacks have always existed, but now you can do them at scale. It's the equivalent of a script kiddie: using just a few gen AI tools, they can run highly targeted attacks at scale, and people fall for this stuff all the time. So we thought, somebody needs to solve this problem, and it's a tricky problem to solve. We came up with a two-part solution. One part is a product called Defender, which helps detect and prevent social engineering attacks. It uses large language models for good, using them to analyze text from email, text messages, WhatsApp, Telegram, LinkedIn, anything, and it can classify very accurately whether something is a social engineering attempt or not. And it's not just looking at keywords; it's really understanding the intent behind the message. That's what generative AI gives us: yes, it can be used for bad purposes, like generating deepfakes, but you can also use it for good, to basically revolutionize the way you do detection.
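As a rough illustration of that intent-focused approach, here is a minimal sketch, not RedCode AI's actual pipeline, of prompting an LLM to decide whether a message is a social engineering attempt. It assumes the OpenAI Python SDK with an API key in the environment; the model name and prompt wording are placeholders.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK; any LLM API would do

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = """You are a security analyst. Classify the message below as
SOCIAL_ENGINEERING or BENIGN. Judge the sender's intent (urgency, impersonation,
requests for money, credentials, 2FA codes, gift cards, secrecy), not surface
features like spelling or grammar. Answer with one word.

Message:
{message}
"""

def classify_intent(message: str) -> str:
    # Ask the model to judge intent rather than match keywords.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT.format(message=message)}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

print(classify_intent(
    "Hi, it's your CEO. I'm stuck in a meeting and need you to buy gift cards "
    "right now and text me the codes. Keep this between us."
))
```

The point of the prompt mirrors what is described here: score the sender's intent, such as urgency, impersonation, or requests for money and credentials, rather than surface features like spelling.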
So we have Defender, and then we also have a product called Pretender, which is the offensive version. You give it anybody's profile and it generates a fake version of them that can send text messages, reach out on LinkedIn, and make phone calls, and soon we'll have video there as well. The intent is to inoculate your workforce. You want to protect them against these next-generation threats, and the best way to do that is to show them what's possible. Then the next time they receive a phone call that sounds like it's from the CEO, asking them to go buy gift cards, or share secret information about the company, or wire money, or any of those things, they'll know: "Actually, I should go through the appropriate processes, because just because I heard the voice on the phone doesn't necessarily mean it was the real CEO." Deepfakes exist. So both parts are important. We saw that there's a tsunami coming for cybersecurity, and organizations are completely unprepared for it. We at least understand the state of the art, what's possible, and how to actually deploy these things for defense. I felt compelled: we have to start this company, because somebody needs to solve this problem, and we didn't see any of the incumbents being in any position to solve it. Cybersecurity is a very conservative field, and for good reason; it's good that security is a little conservative. But clearly none of the vendors were anywhere close to having the right approach to this type of problem, and we knew what it took. So we ended up starting the company with co-founders I met at Inception. It's very meta: we started the accelerator, and then started our own company out of the accelerator program.

Yeah, that's amazing.
And as a normal person like me, how can I know something is a scam? I can't recognize whether this is the real John or not. How can I proactively defend myself? I mean, are we talking to the real John right now?

Well, it has my name in the corner, but how do you know this is the real me? It's an interesting question. The truth is that today, for video, the generation is not quite perfect, although it's getting so good. Look at about a year ago: I don't know if you've seen the generative AI video of Will Smith eating spaghetti. If you haven't, you can Google it; it's really funny. That was roughly the state of the art in 2023, about a year ago. Now compare that to Sora or any of these other technologies, especially for things like virtual avatars. There are a bunch of off-the-shelf services you can use now where you don't need much video at all. Microsoft has one called VASA where all you need is a single image, and for the voice clone all you need is three seconds of audio. With those, you can create a very convincing deepfake of somebody talking; it gets the expressions right, it mimics everything. It's really astonishing, and that's the trajectory we're on. It's not perfect yet; you can tell a little bit. But not that long ago, maybe a year ago, the best-practice advice for detecting a deepfake was to wave your hand in front of your face, or turn to the side, because the deepfake software wasn't that great and you'd get glitches. It reminds me a lot of the advice for spotting generative AI images: count the number of fingers on the hand, because gen AI always used to get that wrong. But guess what? Look at the latest ones, and they get it right.
Because the technology moves so fast, any specific tell you point to for how to detect it is going to become obsolete really quickly. So we're not focused on whether something is real or fake, AI-generated or human-generated, partly because there are a lot of legitimate use cases for AI-generated content. We see them all the time at Inception: assistants that handle phone calls for me, virtual avatars, personalized videos, those kinds of things. Even in Zoom there are AI-powered features that will make my face look better, remove my wrinkles, and the best of those are powered by generative AI. In some sense, having the software clean up my complexion and give me more hair is a fake, so should you flag it as not real? Technically, yes, but those are not malicious use cases; you don't care about catching that stuff. What you want to catch is somebody trying to deepfake the CEO or the CFO and telling someone to wire money, give up sensitive company secrets, hand over their 2FA code, that type of thing. That's what you really care about. And sometimes those scams come from real humans. There's a whole class of things called shallow fakes: real video, real audio, real media, but taken out of context. It will pass all the checks, "yes, this is real," but because it's taken out of context, it still deceives. That's how fakes used to work before deepfake technology, and those shallow-fake techniques are still very effective; you can trick a lot of people. So again, real versus fake, or AI-generated versus human-generated, is not actually the problem. The real problem is: is this malicious or not? That's what we've been focusing on, rather than whether it's a real human or not. We do have some detection technology that can flag content that looks like it was generated by one of the AI services or models, but ultimately that's an arms race that the defenders will lose. The better thing to focus on is the actual content, what they're trying to get you to do, and whether it's machine-generated or not is just one data point in that.

Very interesting. And I saw today that Ilya, the co-founder of OpenAI, raised 1 billion dollars from notable investors for the company he founded, Safe Superintelligence.
Have you seen that?

Yeah. Part of it is that with Ilya and the others on that team, it almost didn't matter what they were working on. With a star team like that, especially now, there's lots of capital and lots of interest in AI. This bubble has not popped; things are as hot as they've ever been. So if you have a talented team, it almost doesn't matter what you're working on; you can probably raise money, and a significant amount of it. Now, that being said, there's a lot of promise around AGI, or even ASI, artificial general intelligence, or what some people call artificial superintelligence. My view on this hasn't really changed over the last few years, even with the amazing things the new models can do, because a lot of it is a parlor trick. Somewhere in the data set, somewhere on the internet, somebody did something similar, so when you test it, it's really just doing recall: it's memorizing stuff and spitting it back out. People say, "Well, it's amazing," and yes, that's because it was trained on the entire internet and every book that's ever been written. So in some of these cases the generalization is not quite there. If you ask it math questions but change them from base 10 to some other base, it completely fails, despite having memorized all this other stuff, because there's not that much in the data set doing things like that. But that being said, there's also an argument that if it quacks like a duck and walks like a duck, then it is a duck:
if the AI will consistently pass a Turing test and is able to do all these amazing things, whether by memorization or whatever, isn't that just intelligence? I understand the gist behind that argument. Still, I think we're due for a correction. A lot of amazing things have been promised for AI and agents and AGI and everything else, and the reality is not going to match what was promised; that one I can pretty much guarantee. Anybody who's worked with these systems knows they're simultaneously amazing and also really stupid at the same time. You ask things like how many R's are in the word "strawberry" and it gives you the wrong answer. That's just one little piece of trivia, but there are a bunch of examples like this where the language model isn't even self-consistent within the context of a single answer, because it's just spitting out tokens that are statistically likely given what it's seen in its data set. There's a lot of structure in language, and that's what gives it the ability to do these amazing things.
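The "strawberry" failure is easier to picture once you see that the model never looks at letters at all, only token IDs. As a small aside, not something from the conversation, you can check this with a tokenizer such as tiktoken; the exact split depends on the tokenizer, but it is typically a few multi-character chunks rather than individual letters.

```python
import tiktoken  # OpenAI's open-source BPE tokenizer library

# The model reasons over token IDs, not characters, which is why letter-counting
# questions trip it up even when it handles far harder tasks.
enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several OpenAI models
token_ids = enc.encode("strawberry")
pieces = [enc.decode([t]) for t in token_ids]
print(token_ids)   # a handful of integers
print(pieces)      # multi-character chunks, not individual letters

# Counting characters directly is trivial in ordinary code, by contrast:
print("strawberry".count("r"))  # 3
```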
But I still think we're a long way away from full artificial general intelligence or superintelligence. And when we do have it, the first versions are going to be in very particular domains. I don't think we're going to have a breakout moment where suddenly we have millions of virtual beings that are all smarter than humans. Even if you just look at the capacity of a human brain and ask how big a neural network you'd need and how much power, with the current state of technology you'd need to build an entire data center powered by an entire power plant to simulate one of these at the level you'd want for something human-like. The first step is to get it to be as intelligent as a cat or something; we're not even there yet, and that will take some time. But the promise that has been made leaves a really big gap. If people are saying with a straight face, "We're investing a billion dollars in this because we firmly believe we're on the cusp of ASI, and it's going to happen within the next two or three years," they're going to be disappointed. Now, if they're putting a billion dollars in and, yes, shooting for that, but in the meantime they build all these other amazing use cases that solve real problems for people and create a huge amount of value, then that makes total sense; I totally agree that type of thing is going to happen. The danger of centering the conversation so much on either AGI or ASI is that it distracts us from what the real dangers of using this are. The actual danger is not that some superhuman intelligence is going to try to eradicate humanity, so that we need a magic off switch to turn it off in case it goes rogue; that's sci-fi stuff. The real danger is that humans will use these tools
to do bad things, at a scale they were not able to before: go create a million sock-puppet accounts and influence elections, or create deepfakes that change things, or even things that are not quite as nefarious as that. LLMs are just a tool. I saw Andrew make this parallel: an LLM is like an engine. You can put engines in things, and they can be used for good things and bad things. The good uses overwhelmingly outweigh the bad uses, but you can use it for either. I like that; it's very much the truth. And it doesn't make sense to unnecessarily restrict the development of this technology when it has so many positive outcomes. Usually the core issue comes down to people's perceived likelihood that this catastrophic event is going to happen, where the AI becomes sentient and decides to go and kill its creators. There are people who are well known in the industry who believe that's a real risk, and it's true that if you actually believe it's a real risk, you'd go down one particular path. But I think most people who actually work in this space, not political scientists or philosophers, but people who have used the technology and know what it's capable of, know that we're very far away from that. It's such a remote chance. I heard it described as: I don't worry about that for the same reason I don't worry about overpopulation on Mars. Theoretically, yes, but there are so many other, worse problems, and the chance of it happening anytime soon is really low. So that's certainly the case on the safety side. But I do think the current approach to LLMs, next-token prediction, is not going to be the thing that brings us to AGI or anything close to it.
It's going to require some fundamental re-imagining of how these things work, with really talented teams and people who understand things at that level. That's the type of work that needs to happen to break through and make things that are much closer to human intelligence.

Yeah, thanks. We need to prepare for it, and we love what you're doing, so thank you. And thank you for talking about Inception Studio and generative AI. We're also interested in your career. We saw on your profile page, your biography, that you started coding when you were five years old. We're curious why you were interested in computer science and AI. Is it really true that you started coding when you were five years old?

Yeah. I mean, I wasn't interested in computer science, for sure. Mostly what happened was that we had a computer at home with very few games on it, and I wanted to play computer games. There used to be these magazines that had source code listings in them; you could type them in and then play a game. So I would do this: I would type them in line by line. I did this when I was five, in kindergarten, because I really wanted to play the game and because I was bored.
I was just bored; I'd gotten sick of whatever games we had on there, and I thought, "This looks like a cool game, let me type in this code and see what happens." And of course, when you type it in, there are typos. I would make mistakes because I was just copying it out of the magazine, and there would be bugs, so I had to learn how to debug. At the beginning I would look character by character: where did I make a typo? As you do this more and more, you get a bit of intuition: this part is broken, so what part of the code should I look at to find my typo? And then, of course, once I did that, I was never satisfied with just playing the game as it was written. I wanted to make changes. It would start with putting my name in it, then adding a new feature, that kind of thing. So I started off not really writing code, just making small changes: I'd change an if, or change a number, and see what the result would be. That's basically how I got started, and then I just got better and better at it as I learned more and more. That's how I got started in coding
and programming. Then, once upon a time, there was this thing called a BBS, a bulletin board system: you have a modem and you can dial in. This was pre-internet; you would dial up and you could send messages and talk to people, that kind of stuff. There was a sysop, the system operator, who ran it. I got a modem very early, a 1200 baud modem when everybody else had 300 baud, so I had a really fast modem, four times faster than everyone else's. I got into that scene for a while and then began to run my own BBS. Then I had some friends who were building games for BBSs, and I thought, well, I'm going to make my own game for this BBS. So I made a whole RPG game, that kind of thing. I started that in middle school and continued into high school. That's when I learned C; I started with Pascal, then learned C, and then different languages. That's how I got good at it. And then what happened was that
I took AP Computer Science in high school as a sophomore, which was the first year they would let me take it; they wouldn't let me take it as a freshman. I ended up doing well in that class, got a five on the AP test, and my teacher clued me in on something called the USA Computing Olympiad, a competitive programming contest, because she knew I was really good at this. A lot of it just came naturally, because I'd been doing it since I was five or so, and I already had a lot of coding experience from my BBS days and the games I'd written. So I ended up doing those, and those problems were really, really hard, way harder than any problems I'd ever worked on before, but I got really into it. I ended up going to the national USA Computing Olympiad as one of the top 15. I didn't make the IOI, the international olympiad where only the top four from the U.S. go; I didn't make the top four, but I made the top 15. And I think that was the first time I felt, oh, I might be good at this thing, because I'd never really thought of myself as good at it; that was my first indication that maybe there was something here. That was also my first experience with the computer science side, because until then I was just coding; I wasn't really thinking about algorithms and things like that. But in my preparation for the USA Computing Olympiad I got an algorithms book and began to read it, and that's where I got much more into the actual computer science side of things: algorithms and everything else. Then I went to MIT; I got into MIT basically because of the USA Computing Olympiad.
It was not based on my grades, and my SAT scores were okay but not fantastic; it was because I had done the USA Computing Olympiad. So I got accepted, which felt like a miracle, because that was my dream school. At MIT I was able to apply myself much better than I had in high school, so I ended up doing much, much better. That's where I learned about compilers. I was always interested in how a compiler works: it's amazing how you can go from source code to machine code, and how does that even work? Then I learned how it worked and got so into compilers. I still love compilers; it's my favorite topic. It's why I've taught the compilers class at Stanford multiple times. It's my first love; I could talk about compilers for days with anybody. Working on compilers forces you to work at a meta level, and you have to be really strong on algorithms and on implementation, both together. You cannot just hack together a compiler; you'll never make it work reliably enough for people to use it. But you also can't work entirely in a theoretical domain, because ultimately compilers run on real computers, on real hardware, with real programs, so you have to understand how people write programs, how the architecture works, all of that. It's that wedding of the two together that really attracted me to the problem space.

Do you still teach at Stanford? The compilers class?

I don't teach the compilers class currently, because now I teach the LLMs class, CS224G, which is about building applications using large language models. But I did teach the compilers class in the past.
One of the most amazing experiences was that I got to co-teach the compilers class with Jeff Ullman, who won the Turing Award for his work on compilers and for educating generations through his books; he's one of the authors of the dragon book and multiple other textbooks. So I got to co-teach a class with him after he had won the Turing Award. I figured there was no way he was going to come and co-teach a class with me; he'd already hit the pinnacle, why would he come? But he did, because he's a great guy and he loves this stuff as well. He's getting up there in years, but he came back, and the year after he won the Turing Award we co-taught the class together. That was a really amazing experience; I never thought I would have a chance to do that at any point in my life. It was definitely a high point to teach the compilers class at Stanford with Jeff Ullman.

Yeah, that's amazing. Thank you for sharing the story of how you learned to code. You also touched on
hardware. Right now the AI field is so competitive, not only LLMs but also chips; the competition is severe. Do you have a big picture on this, how it's going, or what we should focus on?

Is this specifically around the hardware area, or software in general?
Hardware. I think software. Yeah, in AI field, so. In macroscope view, yeah. Yeah, I know, I know. I mean, look, I mean, there's, it's so interesting to see kind of in this machine learning space, like a lot of the top, a lot of these topic areas that you hear about, like systolic arrays, and like, and even like wafer scale integration, and like all, and like, and parallelization and all this stuff. This is things that, like, were likely,
like in the compiler space, you like, we work on this in like the 80s or whatever, right? Or even like, because a lot of it, there's a lot of overlap with scientific computing and these other areas where it just, there just weren't that many compelling use cases, but like a lot of those ideas, they really haven't changed since then. And even on the architecture, either the architecture side, or on the kind of compiler,
like code generation side either. And so for a long time, like the state of the art in, I mean, for a long time, the state of the art in, you know, with TensorFlow and with like, and PyTorch, everything else, it was like abysmal. It was like embarrassing. The utilization on our GPUs was really low. And it's like, even just to try to get things to perform well, it was like, and these are things that like, it's like, as a compiler person,
I can like look at this and be like, oh my gosh, like, this is like, you just use some basic techniques that we've known for the last 20 or 30 years, and we can do a much better job at these. The truth is that like, there were just not that many good compiler people that were working in machine learning early on. Now that's totally changed. Now it's like, now you have all the smartest people like are all working on these problems.
That influx of talent is why you're getting these leaps and bounds of improved efficiency, both on the hardware side and on the software side, and why the new capabilities are so much better: there was a lot of catch-up that had to happen. Then there's NVIDIA. NVIDIA is certainly the de facto leader in this space, by far. I think they just released their latest numbers and completely blew them out, and their stock price still went down because the market said, well, you didn't exceed expectations by as much as we thought you would. But there was a time when NVIDIA was flirting with being the most valuable company in the world, and for good reason.
If you look at that business, it's incredible, and it's not so much on the hardware side as on the software side. Look at the entire software stack, look at CUDA: they own that whole stack. It's really, really hard for somebody else to come in and displace all of that, because not only do you have to have great hardware, you also have to have all the tools, the compilers, the debuggers, and everything else that goes with them. That's a ton of work, and NVIDIA has been working on it for much longer than anyone else, so they have a big advantage. That being said, there are some very interesting new companies coming out with specialized hardware, especially parts optimized for inference, that claim to be a hundred or a thousand times more efficient. That totally makes sense; knowing how the architecture works, people can absolutely build those things. It's not like NVIDIA has all the answers, and it's very possible to build highly efficient hardware, so some of those companies are going to start to eat into NVIDIA. NVIDIA is at the top, and from the top the only way is down. Now, do any of those companies have a chance of displacing NVIDIA anytime soon? Probably not. There's so much inertia there. It's really, really hard.
And NVIDIA makes a huge margin on their GPUs; they make so much money it's crazy. So even if some of these competitors only get a small fraction of that, it's still a lot of money, because it's a pretty big market. I think there's going to be a lot of interesting stuff there, but mostly on specialized tasks, around particular architectures, particularly for inference and those kinds of things. I used to work at IBM, and there used to be this saying: nobody gets fired for buying IBM, because IBM was always the safe bet. That's not really about IBM anymore, but it is true of NVIDIA now. If you're procuring hardware for machine learning tasks, you're not going to get fired for buying NVIDIA, but you will get fired if you buy from some smaller upstart company and it ends up failing; that's a big catastrophe. So those competitors have to find their edge, and it seems like their edge is going to be in performance and performance per watt, those sorts of things. And they can't just be 10% better; they have to be 10X better to jump that gap, so the buyer can say, this is why I'm not going to buy NVIDIA, because it's actually going to save us so much on our power bill or our GPU bill that it's worth the risk, right?
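For a sense of where claims like that come from, here is a rough back-of-envelope sketch: single-stream LLM decoding is typically memory-bandwidth bound, so throughput scales roughly with effective bandwidth divided by the bytes of weights read per token. Every figure below is an assumption chosen for illustration, not a vendor spec.

```python
# Rough model: generating one token reads (roughly) all model weights once,
# so single-stream throughput ~= effective_bandwidth / bytes_of_weights.
def tokens_per_sec(params_billion: float, bytes_per_param: float,
                   bandwidth_gb_s: float, utilization: float) -> float:
    bytes_per_token = params_billion * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 * utilization / bytes_per_token

# Hypothetical 70B-parameter model stored as 8-bit weights.
generic_gpu = tokens_per_sec(70, 1.0, 2_000, 0.5)      # HBM GPU, modest utilization
inference_part = tokens_per_sec(70, 1.0, 10_000, 0.8)  # specialized, SRAM-heavy design

print(f"generic GPU:    ~{generic_gpu:.0f} tokens/s")
print(f"inference part: ~{inference_part:.0f} tokens/s "
      f"({inference_part / generic_gpu:.1f}x in this toy model)")
```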
But it will happen, just because there's so much money there, and NVIDIA is not going to be able to keep executing on every axis. Some competitors are going to find a niche, end up owning that niche where NVIDIA doesn't compete, and then grow from that niche into other use cases. So I think there's a lot of interesting stuff happening on the hardware side, but interesting in the sense that people are finally applying techniques we knew about 10, 20, or 30 years ago, and they're finally being deployed in actual hardware that will be much more efficient than the GPUs NVIDIA has. Yeah, and I see so many players working in this space, like Groq and Google, and OpenAI has started exploring chips.
And by the way, do you think OpenAI will keep the same position in, say, five or ten years, or will other companies come out ahead? I don't think so, and here's why. They were the undisputed leader, without a doubt, for a long time. Then over time some real, viable challengers began to appear. At the beginning it was, oh, maybe Gemini is going to be the one; okay, that clearly wasn't it at first. They flubbed the launch and it didn't work very well. But you can't ignore Google; they have a lot of resources and they'll figure it out. Then you also have Meta. I think Mark Zuckerberg said he's buying up all the GPUs, so there are no more GPUs for anyone else; they're a serious competitor as well. And then Anthropic: there was big news when Claude Opus, their biggest model, beat out GPT-4, and soundly beat it. So now OpenAI has real competitors.
And those are just a handful; there are others as well, and we keep seeing new ones being developed all the time, with very competitive performance. The other thing is interesting. At Stanford, I remember when I was teaching there, all the best students would go to Google or Facebook, because those were the cool, hot companies to join. Then at some point that changed: those companies were no longer getting the top-tier people, they were getting the second-tier people, because the top-tier people were going to other companies, like OpenAI, for example. I think OpenAI is still one of the most sought-after engineering roles for talented people in AI, but I'm beginning to see signs of that changing, due to a variety of factors. The top talent is not going to OpenAI anymore. In fact, there are talented people leaving OpenAI because of growing pains and other things that are happening. You mentioned Ilya, and there are others like that as well. I know Greg Brockman is on leave, and I've talked with enough people at OpenAI to know they're experiencing some difficulties. I think that's a very interesting leading indicator for whether the innovations are going to keep happening three, four, or five years down the line.
If you're not attracting the very best talent, whether from schools or from anywhere else, it's going to be really hard to keep up your innovation edge over time, and I've started to see this with OpenAI. Because what does that talent do now? They go start companies. If you're talented and you have skills in AI, and this is true at Stanford and other places as well, your options are to join a company like OpenAI or Meta or Anthropic, or to start your own thing or join something extremely early-stage. Increasingly, people are going the startup route, because they want to join the next OpenAI, the thing that's going to be the next big thing. It's funny to talk about OpenAI as the incumbent, but they are the incumbent in this space, even though they're all very new. And because of a variety of factors, a lot of the top talent is not going to the incumbent; it's going to the challengers and the up-and-comers. That doesn't bode well for OpenAI maintaining their edge. I still think they have one, because even though Anthropic and others have come along and leapfrogged them in various ways, they have stuff they haven't released yet that they will release, and then they'll leapfrog again. So it'll be competitive for a while, but if they're not in a position to attract the very best talent, they're not going to be able to maintain that. That's why I think that five years from now, OpenAI may not still be the dominant player. I see. Yeah, that makes sense. And then would you recommend, especially to students, going to the next OpenAI
or starting their own company? Because thanks to AI we can keep teams small and delegate so much to AI, but sometimes students don't have enough skill yet, so joining an emerging company like the next OpenAI could give them great experience. What would you recommend? Well, given that I run an early-stage AI startup accelerator and I'm a huge proponent of entrepreneurship and startups, I also have to say it's not for everyone. There are people for whom the right thing is to join OpenAI or Google or some larger, more established company that's further along and more stable, especially if you're in a situation where work is not your life. If you want a good work-life balance and you want to prioritize things that are not your work, then yeah, join a bigger company. Your life is going to be a lot easier there, if that's what's important to you.
Now, that being said, if you're ambitious, if you're driven, if you really want to make an impact, then either join an early-stage company that's on a rocket ship, so you can be part of it the whole time, or just strike out and try to start your own company. You're going to learn way more that way. In the early part of your career it's far better to optimize for learning rather than for salary or other things. But this is all couched in the question of what you actually want: what's your ambition, and what do you want to do with your life? If you feel that work is not the most important thing in your life and you want to do other things, then optimize for those. You're not going to be happy starting a company where you're working 80 or 100 hours a week just to keep it afloat and surviving. What I would say is, if you're just starting out, don't
optimize for building up your ideal resume or anything like that. At the beginning you want to optimize for figuring out what you actually want to do. If you succeed in figuring out what you like to do and what you don't, and you have a good sense of that, that's the actual path to happiness and fulfillment, because otherwise you could end up on a career path that isn't for you; you're basically on somebody else's career path. You may wake up one day with a resume pointing in a direction you're not interested in going, because that's not what you want to do with your life. So the first step is to figure out what you actually want to do, what actually excites you, what you want to spend your valuable, limited time on earth doing. Once you understand that, you can think about how to get to that point, so you're fulfilled in that way. And for that, it's usually about optimizing for learning: you want to put yourself in a position where you're learning the most you possibly can. Working in a big company, you'll learn some things, but you're not going to learn much about entrepreneurship, because the larger the organization, the narrower your role within it.
You'll learn how to do that one particular thing pretty well, but you're not going to learn about much else. And by the way, the skills it takes to be successful at a company like Google are a really different set of skills from the ones it takes to be successful at a startup. So different. A lot of what matters at bigger companies is navigating the politics: how do I get people on my side, how do I fight for the resources I need, how do I avoid stepping on other people's toes so I don't make them mad and cause problems. That's usually what it takes to succeed in a bigger company. The skills to succeed as a startup founder or at an early-stage company are totally different. So think about that when you're optimizing for learning and skills early on. A very good way to learn is to join an early-stage company, be a key employee there, and grow with the company. It's a great way to get a firsthand view, and if you're doing it right, you'll be exposed to marketing, sales, product, and all these other areas, so your learning will be dramatically accelerated. Or you just jump in the deep end of the pool and say, okay, I'm going to start a company.
Then you'll be forced to learn, and to learn really quickly. I do think there's a benefit, though, to being in a position where you've seen greatness in some area. It's hard to be great unless you've seen great in the past, and different organizations, different founders, and different people are great at different things. So having the opportunity to work with somebody who is great in an area that's aligned with your long-term goals is an amazing opportunity, whether that person is at a big company or is a founder whose team you join early. Either of those can be really good opportunities. And there's no perfect resume or LinkedIn profile. A lot of people, especially college students, try to craft one, because that's what you need to get into college: let me put together a resume that looks really attractive to admissions. But after your first job, nobody really cares. It's not going to matter. What actually matters is your performance in the real world, not so much that you had a job at this or that place. People care less and less about that as you get later into your career. They don't care what school you went to, what your major was, or what your GPA was. Maybe it matters for the first few jobs, but after that it's not as important as what you've done.
And I think this is good; this is the way it should be. There's supposed to be equity of opportunity. Just because some people were fortunate enough to have a particular upbringing with certain opportunities, that shouldn't define their whole course for the rest of their life; just because you were lucky enough to get an internship at a certain place, or to go to Stanford, or whatever it is. There are a lot of talented people at Stanford, definitely, but some of the best people I've worked with were college dropouts, and they were extremely good. Schools like that do carry some predictive value, since they have a high filter, but it is by no means the case that everybody who goes to Stanford, or MIT for that matter, or any other school, is high quality. That's not the case; there are lots and lots of counterexamples. And what it means is that even if you didn't
go down that path, even if you're a high school dropout, if you work hard and hustle and improve yourself, you can get those early opportunities and those early learning opportunities, and then you can end up being successful. Look at Greg Brockman, whom I mentioned: he didn't even graduate. He was about a semester away from finishing at MIT and never got his degree, so all he has is a high school diploma. He left because he didn't need it; it doesn't matter. There are lots of examples like that. You would assume, oh, he must have a PhD or whatever; not the case. He did go to MIT, but he dropped out to go build Stripe in its earliest days, which was obviously the right decision, and he hasn't looked back since. Right, right. Yeah, that makes sense, and it's exciting. So, you are a three-time cybersecurity founder, a successful founder. Looking back now,
is there something you would have done differently? What's the biggest learning from your life as a founder, or the pitfalls and challenges you went through? Oh my gosh, I have tons of them. Especially at my first company, I made so many mistakes and collected all these battle scars from doing things the wrong way. I'd say one of the biggest is that just because somebody is willing to give you money for something does not mean it's a good idea. You can't rely on investors to have any calibration about what is or isn't a good idea. There's this image of, oh, this famous investor invested in it, therefore it must be a great opportunity, a great idea. No. More often than not, the investors don't have the answers; the founders are the ones actually living it. They live in the space, and they are, or should be, the world's experts in it. So don't get enamored with big names, because they invest in a lot of stupid stuff, and if your thing is one of those stupid things, your time is worth more than their money. Think about that. Another one is making sure that when you have investors, they understand what you're doing, they're well aligned, and they're supportive. If they don't really know what you're doing, that's going to lead to all sorts of problems down the road, and if they're not aligned with what your strategy should be, or they're not supportive in other ways, life's too short; don't waste time working with people like that. Try to find the investors who are actually well aligned, supportive, and really understand what you're doing.
They're not going to have the answers, but you want to be able to go to them and say, hey, we need help with X, and have them jump at the opportunity to help out. Those are the people you want on your side. Also, be careful about who your co-founders are. Make sure you have good complementary skill sets, and mutual respect becomes really important, because if there's ever a question about who's responsible for what, it leads to all sorts of problems and issues down the road. You have to be able to say, I'm going to fully delegate this part of the business, or this founding role, to this person, and fully trust that they're going to do it better, not only better than you could, but better than anyone you could hire. If you don't believe that to be the case, it's unlikely to work out. Sometimes you have a senior founder and a junior founder with a clear hierarchy, but even those situations get really, really difficult. The best scenario is complementary skills and mutual respect, where each person respects the other for the contributions they're making, because a lot of companies struggle with co-founder issues, with conflict between the co-founders. And on that topic: find somebody who's a little bit different from you. There are benefits to doing that. It's not strictly required, but it's a lot easier if, early on, you build some diversity into the DNA of your company, because the further along you go with all the same DNA, the harder it becomes to adjust in the future, and that leads to problems. The best companies are not built by taking the founder and genetically cloning them fifty times so those clones are the fifty employees. Sometimes it feels that way; sometimes you think, oh, I wish I had a bunch of clones of me to go do all this extra work. But that's not the actual path to the strongest, best companies, because the reason you start a company is that you want to build something bigger than you could do yourself.
And the only way to do that is by incorporating other people with other skill sets and other perspectives. That's how you end up with a product and a company that's better than what you could have done by yourself, or even than fifty clones of yourself could have done. So be conscious of that, and go a little bit outside your comfort zone: this person is a little different from me, but I know they're very good at what they do, world class in this area, so even though it's a little uncomfortable, let's work together, or I'm going to hire them, or take them on as a co-founder. I think it's good to lean into those situations even when they're a bit uncomfortable, because it's always easier to say, I'm an engineer, and with other engineers we all talk engineer language, we get each other. There's some benefit to that, but there are downsides as well as the company grows.
One last thing: I think there's a stigma against solo founders, and I don't know why. There are pros and cons to being a solo founder. Like I mentioned, you want more diverse DNA in the company, and that's easier to do with more than one person. On the other hand, you avoid a lot of drama and a lot of other problems by being a solo founder. There are real challenges: it's a very lonely job, and you don't really have somebody to bounce your ideas off of, or to be your psychologist, that sort of thing. That's all true, but there are ways to supplement it; you don't necessarily need a co-founder for that. I know it's not typical, but having no co-founder is better than having a bad co-founder, or even a mediocre one. If those are your options, don't be afraid to say, you know what, I'm going to start this company myself, even if you've never done it before. This is where community can really help, because it gives you an unfair advantage: you can take advantage of the hive mind of a lot of other people who have been through it before, and that makes things a lot easier. So if the right answer is to be a solo founder, don't think, well, YC never accepts solo founders, so I'm not going to do it. If you look at the actual stats, quite a few very highly successful companies were started by solo founders, or de facto solo founders. It's very possible to be successful that way, to raise money and do everything else you need. It has its own challenges, but it also avoids another set of problems.
A lot of times the critical path is the communication bandwidth between the co-founders, and when it's one person, the communication bandwidth is within your own brain; you really can't get much more efficient than that in terms of keeping the founders on the same page. If there's only one founder, that problem becomes much easier. So yeah, those are some of the things I've learned over time, and it's been a lot of fun. This is also why I think I'm in a privileged position: through Inception I get the chance to work with a lot more founders, and that has really accelerated my thinking, not only from my personal experience but from working with a ton of different founders. I get to accelerate my learning on that side, which has been a lot of fun. Yeah, I think we should all apply to Inception Studio. That's a great place to go, and thank you for that. So, I think time is running out.
So this is the last question, a big one. Since Grasp is a platform where people share what they're reading and learning as a digital legacy, we want to know: what legacy, what impact do you want to leave behind for future generations? That is a big question. Since I was young, I've felt that I was born with certain things I'm talented at and good at, and there are also things where people have made a big investment in me. I went to MIT, with all the instruction and tuition that involves, and I went all the way through a PhD; not that many people get that many years of schooling. Because of all that, the combination of the talents I feel I have and the schooling and other learning I've received, I feel an obligation to go do some good for the world. Otherwise it feels like a waste, whether in terms of genetics or the other kinds of investment; it would be a wasted investment. My obligation to humanity in general is to contribute in a positive way, in the ways that somebody who was given the gifts and benefits I had access to should, and to help other people as well. Hopefully that's what I'm doing. I'm very fortunate to be at a life stage where I'm able to do that: Inception Studio is one piece of it, teaching at Stanford is another piece, and there are other things I try to do as well. Hopefully that will eventually be my legacy. I'm maybe 0.001% of the way there so far; I've still got a long, long way to go. But if the time comes when I'm in the latter years of my life, looking back and asking, hey, was this a fulfilled life, that's the piece that's most important to me, at least for now. Maybe when I get older I'll have a different perspective on what's actually important, but for now, that includes my family and my kids, and other people as well: just having a really positive impact and making the world better in all the ways I feel I can contribute.
Very beautiful. Thank you so much for all the answers and for sharing the insight from your life and experience. Thank you. No, thank you. This was a lot of fun. I hope the recording turns out well, and I'm looking forward to seeing it.