AI has the most dangerous demos

I’ve been spending some time lately trying to get AI to do things that I’m not very good at personally. I don’t expect amazing outcomes, and these aren’t high-stakes activities (and that’s a good thing; more on that in a sec). While I don’t have enough free time to, say, write a book or create an animated film, these are fun things to imagine (the “when I retire” myth), and I have enough spare time to see whether AI can do them for me.
People with jobs in the creative fields, do not worry! I am absolutely not taking money out of your pockets because a) I’m mostly messing around and wouldn’t know what to do with a finished product if I actually got one, and b) the results are very very much not finished products.
But wait! I see countless posts from thought leaders effortlessly making things with AI, and YouTube is filled with cool demos, either from vendors or from newly established “experts” in a field that’s in many cases only a few weeks or months old. There must be something wrong with my prompts!
Yeah, I should know better. My field of expertise is programming, and I’ve made my share of widgets with Claude or ChatGPT, but I know they’re a million miles away from a full production application. I still believe strongly that I can use my knowledge to assemble a collection of AI-generated widgets into an app, but… that’s work, and making a silly example widget is way more fun.
That’s danger number one of the demos: we’re so distracted making instant toys that we lose sight of build targets that are actually valuable.
I think that carries through to most of the demos I see. Which is absolutely shocking, right? I mean, I saw a demo where someone typed in “make me a sales proposal for a pharma customer” and the AI tool generated a beautiful pitch deck, so that problem’s obviously solved. Except the proposal is, of course, completely generic, incomplete, and disconnected from my actual business, because how could it be anything else with so little information to go on?
That brings me to danger number two, and it’s starting to spread like wildfire. Executives are seeing these demos and making big bets on optimistic early results. These bets involve significant staffing cuts, because AI can handle the load. But can it? Or will companies be left with surface-level solutions and massive gaps in operational capability from the holes they’ve put in their org charts, gaps that AI won’t be able to cover?
Microsoft uses the term Copilot really well with their AI tool suite, and I like that a lot: it’s a copilot meant to assist you with your day-to-day work. But reportedly that same Microsoft is now implementing AI code-generation quotas and cutting developer teams because AI will take up the load. We’re told from an early age that we can’t have our cake and eat it too, but we’ve also been told repeatedly that AI changes all the rules, so… maybe? But maybe, and only maybe, if you’re a company that knows your stuff really well and also builds the tools to augment that knowledge. That won’t be most companies.
I remain bullish about AI’s ability to improve individual productivity. But I also know that just because I can get the latest video model to make an amazing 10-second clip, that doesn’t mean I have any idea how to make enough consistent-looking clips to stitch together into a compelling story.
So you won’t see my video opus on YouTube anytime soon. But don’t worry, there’ll be plenty of posts from the latest expert about the cool new way to make that 10-second fragment to keep you entertained in the meantime.