What do startups get wrong about AI?
Meh! I don't completely agree with what he's saying.
I was recently sent a video in which someone asked Sam Altman what he thought startups were misunderstanding about AI. I was asked to react to his response.
My immediate reaction was:
“Meh! I don't completely agree with what he's saying. There are some areas that OpenAI and similar firms care a lot about, and investing significant effort in fixing something specific in those is a mistake; that much I agree with. But there's a long tail of use cases that OpenAI and such will not care about well into the future, and investing time in those is very worthwhile because you can dominate that market until OpenAI starts to care about it [e.g., Windsurf]”.
I thought it would be useful to organize and write out my thoughts more clearly. This is an existential question for all of us entrepreneurs, and naturally a question that comes up often in my advisory / bootcamp / training sessions.
The main issue is that a short snippet is cut out of a broader conversation and, perhaps due to time constraints, articulated tersely. Viewed out of the context of that conversation, of OpenAI specifically, and of market movements more generally, it paints an incomplete picture. What he says is right, but I think he says it in a tone-deaf way; so here's my unpacking of his response, with the luxury of having pages to babble.
SA: “The question is, as a startup, do you bet that the technology is as good as it gets, or do you bet that it will get massively better?”
This is honestly condescending. Who in their right mind thinks that we're at the top of technology and there's nothing else for us to do?
Of course, all of us understand intellectually that “as a startup you have to be on your toes and always try to understand what the next important pivot is”. What I think he's correctly pointing out is that a lot of us, against our own wisdom, either get lazy, get too bogged down in the weeds, or feel so overwhelmed by the number of things we need to track that we forget to watch where technology is going.
“Oh my god, six new models have been released since the last time I blinked.”
Or you open and close a social media app and feel completely drained because every influencer acts like the info they have is the most urgent thing you need to know or you'll die. Balancing this unwelcome amount of inbound with surviving the challenges of startup life, at both a personal and a professional level, is something a lot of us are mastering every single day. None of us are “betting that technology isn't going to change”; we're just tired, ok? I'd say what he meant to say is something like this:
“The question is, as a startup founder, do you have the right support system around you to be able to clearly see where tech is going and play ahead of the curve?”
Support system = intentional information flow control to be objectively informed + emotional / social / financial / mental support so that you don’t make bad decisions on bad days + physical / cognitive health routines that keep you sharp and on top of your game.
Verdict: those of us who don't sit on a $10B bank account, and who don't have an army of assistants and analysts to help us stay ahead of the curve, need to create very efficient systems to ensure we're on top of the trends and know with high confidence where tech is going.
SA: “If you are building an AI Tutor company, as models get smarter, the level at which students can learn will naturally go up and up. So maybe it's effective for 6th graders, but with the next version it's good for 8th graders, and eventually PhD students. So you get to surf that wave. Or you might say: I'll put all my effort into barely making this work for 8th graders in the limited case of history, and then do a huge amount of work to have a human in the loop and correct factual errors for this one class. In the first world, you will be very happy when GPT-5 comes out, and in the second world you'll be really sad.”
This is a good example of what he is trying to convey, but a bad example in general. OpenAI has a lot of resources, yes, but they don't have infinite resources. They get to care about and work on much larger-scale problems, but ultimately they're limited to doing so for a handful of them. “AI Tutor for grade school education” fits nicely in that small set because there's a ton of data their models have been trained on, and their competence in handling grade-level knowledge can be expected to keep increasing. There are a few other areas that are generally interesting to them, largely highly verifiable spaces like coding and math. The Windsurf acquisition is a very strong signal that they really care about coding as a use case, as they should. I'm sure they care a lot about the mathematical abilities of their models, because finance is a huge, traditionally data-driven space that is dying for more competent models. The same probably goes for complex multi-reference reasoning in legal, materials discovery, and other commercially lucrative areas. Yes, they are chasing AGI, and that can turn out to be more general than we can imagine, etc., but ultimately they too have to hedge their bets and choose their battles; if we are careful, we can align our bets perpendicular to theirs.
A cynical read on his comment would also be this: of course he doesn't want you to have data that is better than what he has. Because, as unlikely as it is, if you pull it off, then he has to pay a large premium to either out-compete you or acquire you, or pursue whatever other options are available to him. It is much cheaper for him to discourage you from pursuing use cases in the spaces OpenAI cares about. However, he's also serious about what he's saying: if you want to go head-to-head with him, you'd better be damn sure you have the support system to pull it off, because there's a 99.999% chance you'd die there if you're not prepared for the battle.
And of course, if you don’t have the edge in those, there’s a long tail of “unsexy” use cases that you can go after, scratch your entrepreneurial itch, probably make a bunch of money, and, who knows, maybe even make enough progress to be a future “Windsurf” once OpenAI is done with the sexier spaces.
Regardless of which of these scenarios each of us is operating in, I think what he wanted to say is something like this:
“If you are building an AI business, as models get smarter and technology improves, do you have the right build-measure-learn-iterate systems in place to quickly evolve your product / mindset / thought process to keep your product and business model relevant?”
SA: “My intuition would have been that 95% of entrepreneurs would pick the first world, but it looks like they pick the second world, and then you have this whole ‘OpenAI killed my startup’ meme.”
This is a correct observation, but the wrong implied reason. 95% of entrepreneurs are working in poorly designed environmental conditions and completely lack the support system to pull anything off, let alone a complex AI business, period. That has nothing to do with what OpenAI does. I’m not even talking about infrastructure issues with how governments fund and support innovation; I’m talking about having the right mindset and asking the right questions, creating productive information-flow and learning systems, and creating effective, intentional social and financial capital systems to pull off something as complex and fluid as a startup.
I too have used the “OpenAI killed my product” excuse to cover my embarrassment at taking my eyes off the ball when I was in a bad state of mind and short on cognitive sharpness.
What he probably meant to say is this:
“My intuition would have been that 95% of entrepreneurs invest significant time in sharpening their ability to learn and surround themselves with an environment that accelerates learning about tech, their target audience’s needs, and how the two interact; yet I see they are distracted by the next shiny thing, asking the wrong questions, and making decisions based on the opinions of unqualified social media influencers.”
Summary of what he’s saying in that unnecessarily dramatic tone: if you’re a bad startup, you’ll die; if you’re a good startup, you’ll probably still die, but maybe you’re less likely to die too quickly. And none of this has anything to do with being an “AI startup”, whatever that means.

