What To Actually Do About "Something Big Is Happening"
By Ryan Brodsky, Co-Founder, MVP Club
Matt Shumer's piece "Something Big Is Happening" is making the rounds this week and I think you should read it if you haven't. He's an AI startup founder recounting the moment he realized he could describe an application in plain English, walk away from his computer for four hours, and come back to find it built. Not a rough draft. The finished thing. Working, tested, ready to ship.
His COVID analogy is the right one. We're in the "this seems overblown" phase of something that's going to rearrange how work gets done. I'm choosing to lean into his argument. Call it the Shumer gambit: if his thesis turns out to be wrong, overreacting costs a little wasted effort; if it turns out to be right, underreacting costs far more.
But I've noticed something about how people respond to pieces like Shumer's. They read it, feel a pit in their stomach, maybe share it with a coworker or text it to a friend, and then... go back to what they were doing. The alarm lands. The action doesn't follow. And I think that's because "be early" and "spend an hour a day with AI," while true, aren't strategies. They're bumper stickers.
So I want to skip past the argument for why this is happening and talk about what you do about it. What does it actually look like to catch this wave in a meaningful way rather than be swept along by it?
The bottleneck isn't what you think it is
Every week I watch smart, capable people try AI for the first time. They open Claude or ChatGPT, type something in, get a result that's... fine. Underwhelming. Not the revolution they were promised. And they walk away thinking the hype is overblown.
That's like test-driving a car in first gear and deciding cars are slow.
The gap between people who are getting transformative results from AI and people who are getting mediocre results is not technical skill. It's not prompt engineering tricks or knowing which model to use. It's a mental model problem. Most people approach AI like software: a tool with buttons you learn to push. But AI isn't software. It's a collaborator you learn to direct.
The single biggest mindset shift I see people struggle with is this: they feel like they need to understand how AI works before they can use it effectively. They want to know what's happening under the hood. They want to feel competent before they act.
You don't need to understand the engine. You need to learn to drive.
I taught coding bootcamps for five years. Hundreds of students, all trying to learn programming from scratch. The ones who progressed fastest weren't the ones who understood computer science theory first. They were the ones who started building things before they felt ready and figured it out along the way. AI is exactly the same, except the barrier to "just start building" is now absurdly low.
Your job in a Human + AI collaboration is to know WHAT is worth building and WHY. The HOW is the AI's job. That's why you hired it. Not to understand what it's doing at every step, but to direct it, evaluate what it gives you, and push it further. That reframe changes everything about what "AI skills" even means. The most valuable skills in this new world aren't technical at all. They're taste, judgment, persistence, and the ability to scope a problem and communicate what you want.
What "just start building" actually looks like
I want to make this concrete because I think abstract advice is part of the problem.
Last Saturday I sat down to record a quick demo of how fast you can build with Claude Code. I wanted to show someone, concretely and on camera, that you could go from a first prompt to a working tool in under an hour. So I described a simple video generation pipeline, something that takes slides and narration and stitches them into a video, and hit go. Forty minutes later I had a working tool. Screen recording running the whole time. First prompt to functional execution. I thought that was the demo.
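To make the shape of that first build concrete, here's a minimal sketch of what a slides-plus-narration pipeline like that can look like. This is not the code Claude wrote; it's an illustrative example assuming Python with the moviepy library (1.x API), and the file names and slide list are hypothetical.

```python
# Minimal sketch of a slides + narration pipeline (illustrative only,
# not the actual generated code). Assumes moviepy 1.x is installed;
# file names below are hypothetical placeholders.
from moviepy.editor import ImageClip, AudioFileClip, concatenate_videoclips

# Each slide is an image paired with its narration audio file.
slides = [
    ("slide_01.png", "narration_01.mp3"),
    ("slide_02.png", "narration_02.mp3"),
]

clips = []
for image_path, audio_path in slides:
    narration = AudioFileClip(audio_path)
    # Hold each slide on screen for the length of its narration.
    clip = ImageClip(image_path).set_duration(narration.duration).set_audio(narration)
    clips.append(clip)

# Stitch the narrated slides together and render the final video.
final = concatenate_videoclips(clips, method="compose")
final.write_videofile("output.mp4", fps=24)
```

Even at that level of simplicity you can see how the later sessions worked: every feature request ("add a template system," "make the transitions smoother") was another pass over a pipeline shaped roughly like this one.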
But I kept going. Not because I had a product roadmap or a business plan. Because the friction was so low that stopping felt harder than continuing.
That Saturday afternoon became Sunday morning. Sunday morning became Sunday night. Each session I'd describe what I wanted next: "add a template system," "let users preview before rendering," "make the transitions smoother." Claude would build it. I decided every feature. Claude decided how to implement it. I'd test it, push back on what wasn't right, describe what I actually meant, and we'd iterate. Some sessions were thirty minutes. Some were three hours. I recorded all of it.
By Wednesday I had something that looked like a real product. By Friday I was deploying it. Ten days after that first Saturday demo, I had a full video production studio. Custom templates, Ken Burns effects on images, synchronized narration, a polished UI with drag-and-drop reordering. Not a prototype. A tool I'd be comfortable showing to a client. Deployed and live at renditionvideo.com.
I have a YouTube playlist showing the entire progression. You can watch the first forty-minute build and then watch each subsequent session as the tool evolves from a proof-of-concept into a real product. It's probably the most honest documentation of what building with AI actually looks like, including the parts where things break and I have to describe the problem three different ways before Claude understands what I want.
I didn't set out to build a product. I set out to demonstrate what was possible. The product happened because the distance between "I have an idea" and "this is a real thing in the world" has collapsed to almost nothing. That collapse is the actual revolution Shumer is describing, more than any capability benchmark or model comparison.
For twenty years I was the king of half-finished projects. Full of ideas, great at the exciting first 80%, killed by the infinite to-do list between "this works" and "this is done." I'd start something, hit the messy middle, lose momentum, and move on to the next shiny idea. AI changed that equation. Not by making me smarter or giving me better ideas. By making the remaining work fast. When the grind between "working prototype" and "shipped product" goes from months to days, you stop abandoning things. You finish them.
That's the shift I think matters most, and it's the one I don't see enough people talking about. The conversation is stuck on "start faster." The real revolution is "finish everything."
The thing nobody's talking about
There's a growing number of people who've figured a lot of this out. They're just not saying so publicly.
Some have workplaces with unclear or restrictive policies around AI use. Some worry about professional stigma: that colleagues or clients will think less of their work if they know AI was involved. Some have seen the "that's just AI" dismissal and decided it's easier to ship and stay silent than to defend how they work.
I see this constantly in the community I run at MVP Club. People share incredible things they've built in our private channels. Tools that automate hours of their work, prototypes of products they've dreamed about for years, presentations that would have taken a week condensed to an afternoon. Then they go completely silent about it on LinkedIn or in professional contexts. There's a gap between what's actually happening and what's publicly visible, and that gap is feeding the "AI is overhyped" narrative for everyone who isn't close enough to see past it.
I think this tension is temporary. At some point the results become too obvious to dismiss and the stigma flips from "you used AI?" to "you're not using AI?" We're already starting to see that shift in some industries. But right now, we're in the awkward middle. If you're reading this and you've been using AI to do things you're proud of but haven't talked about: you're not alone, and you're further ahead than you think. The fact that you've been doing this without a playbook or a community is actually impressive. Imagine what happens when you find other people doing the same thing.
What I'd actually tell you to do this week
Shumer says spend an hour a day experimenting with AI. I'll push further on that.
First: spend that hour building something you actually care about. Not following a tutorial. Not asking the AI trivia questions. Pick a real problem. Something at work that annoys you, a side project you've been sitting on, a tool you wish existed. Describe what you want to an AI and see what happens. The first attempt might not be great. That's fine. Describe what's wrong with the output. Be specific. "This isn't what I wanted" doesn't help. "The tone is too formal and it's missing the budget section" does. Push back. Try again. That back-and-forth, not the initial prompt, is the core skill. It's not prompting. It's collaborating through iteration.
Second: use the best available model and invest the twenty dollars. Shumer makes this point and he's right. The gap between the free tier of any AI tool and the paid version isn't incremental. It's a different experience entirely. If you're evaluating AI based on free ChatGPT, you're looking at technology that's over a year behind what paying users have access to. This isn't the place to save money. Twenty dollars a month for Claude Pro or ChatGPT Plus is the best investment you can make right now, full stop.
Third: find people who are doing the same thing. The pace of change is genuinely too fast for any one person to track. What was impossible last month is routine this month. I don't say that as marketing copy. I mean it literally. Capabilities are shipping weekly that fundamentally change what's possible. The people I've watched make the biggest leaps are the ones who have a community of fellow builders comparing notes in real time. "Hey, did you know you can do X now?" is the most valuable sentence in my week, every week. If you don't have those people in your life yet, finding them should be near the top of your priority list.
Fourth: give yourself permission to not understand everything that's happening. This is counterintuitive, especially for smart people who are used to mastering things before doing them. But AI moves too fast for mastery in the traditional sense. By the time you've deeply understood one model's capabilities, the next version is out and the landscape has shifted. The people who are thriving right now are the ones who got comfortable being perpetual beginners. Learning enough to be dangerous, shipping something, learning more from what broke, and repeating. Perfectionism and the need to feel competent before acting are the two biggest enemies of progress here. Adaptability beats expertise right now, and I don't think that's going to change anytime soon.
The invitation
This is what we're building at MVP Club. A community of people, mostly non-technical, mostly professionals in the middle of their careers, who decided that navigating this shift together beats navigating it alone. We build things. We share what works and what doesn't. We keep each other current on what just became possible. And we push each other to keep finishing.
We're not a course you complete and forget. We're not a training program with a certificate at the end. We're a group of people who believe that the AI revolution doesn't belong to the most technical people in the room. It belongs to the people who know what's worth building and refuse to stop until it ships.
Something big is happening. Shumer's right about that. But the people who come out of this best won't be the ones who panicked or the ones who ignored it. They'll be the ones who started building now, kept finishing, and didn't try to do it alone.
If that sounds like where you want to be: come build with us at mvpclub.ai.
Ryan Brodsky is the co-founder of MVP Club, an AI enablement community and consultancy. He spent five years as a Distinguished Faculty coding instructor before pivoting to help non-technical professionals learn to build with AI. He lives in Denver with his family and too many half-finished side projects that are now, finally, getting finished.