Is AI overhyped? Yes.
We’ve seen the news that NVIDIA became the world’s most valuable company, with an eye-watering valuation of over $3 trillion, all thanks to the AI hype.
Meanwhile, on the flip side of the AI hype coin is the news that McDonald’s was removing an AI-powered ordering experience from restaurants after it resulted in orders like bacon-topped ice cream and hundreds of dollars’ worth of Chicken McNuggets.
But what got me to write this post was reading the excellent, appropriately titled blog post “I Will Fucking Piledrive You If You Mention AI Again”. Everything in that post is absolutely spot on, and I highly encourage you to read it.
Have we all just gone mad about AI? Yes.
This quote rings true on so many levels:
Consider the fact that most companies are unable to successfully develop and deploy the simplest of CRUD applications on time and under budget. This is a solved problem - with smart people who can collaborate and provide reasonable requirements, a competent team will knock this out of the park every single time, admittedly with some amount of frustration.
This one as well:
The only thing you should be doing is improving your operations and culture, and that will give you the ability to use AI if it ever becomes relevant. Everyone is talking about Retrieval Augmented Generation, but most companies don't actually have any internal documentation worth retrieving. Fix. Your. Shit.
Let’s remind ourselves of the Manifesto for Agile Software Development:
- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan
I’m a fan of living documentation. It’s key for onboarding new colleagues, for existing colleagues who might be new to the project being worked on, for providing context to stakeholders, and so on. But if we don’t have time to write that documentation in the first place, then what are we doing rushing to slap AI features onto everything? But hey, maybe sprints should incorporate time for writing documentation.
I use Notion to write my posts, publish my resume online, create kanban boards for personal stuff, write notes for learning courses, and so on. It’s great, and the fact that so much of it is free is outstanding. Its AI feature that retrieves information from my documents when I ask a question is exactly what I want from AI. I’d honestly consider paying for it if I had more course notes stored.
I want AI to work with me to enhance my work, not attempt to replace me.
Now don’t get me wrong, I’ve been part of a hackathon that involved using generative AI, and yes, there are plenty of benefits and it could change the world. But I think when people see the costs to develop and run these LLMs on services such as Azure and AWS, they will have a small heart attack. In the hackathon I took part in, we used OpenAI’s API, and although we barely made a dent in the credit we had been allocated, I could foresee that a solution communicating with OpenAI constantly would become very expensive.
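To put a rough number on that, here’s a minimal back-of-envelope sketch. The per-token prices, token counts and traffic figures below are assumptions for illustration only, not real pricing and not anything we measured at the hackathon:

```python
# Back-of-envelope estimate of what "talking to OpenAI constantly" might cost.
# All figures below are illustrative assumptions, not real pricing or traffic.

INPUT_PRICE_PER_1K_TOKENS = 0.01   # assumed $ per 1K prompt tokens
OUTPUT_PRICE_PER_1K_TOKENS = 0.03  # assumed $ per 1K completion tokens

def monthly_cost(requests_per_day: int,
                 prompt_tokens: int = 1_000,
                 completion_tokens: int = 300) -> float:
    """Rough monthly bill for a service that calls the API on every request."""
    per_request = (prompt_tokens / 1_000) * INPUT_PRICE_PER_1K_TOKENS \
                + (completion_tokens / 1_000) * OUTPUT_PRICE_PER_1K_TOKENS
    return per_request * requests_per_day * 30

# Even a modest 10,000 requests a day adds up quickly.
print(f"${monthly_cost(10_000):,.2f} per month")  # -> $5,700.00 per month
```

Hackathon credit hides this nicely; a production feature that hits the API on every user interaction does not.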
There have already been attempts to use AI to replace work done by humans. Staff at CNET formed a union after it came to light that CNET was running articles written using AI tools. There was a backlash against using AI to write articles, which resulted in CNET pausing the publication of AI-written articles.
I’ve seen demos of AI and I think they’re fantastic; they’re a good preview of what AI could achieve. But we all need to keep in mind that these models can hallucinate and make some serious mistakes. Take, for example, the hilarious advice from Google’s AI, which suggested using glue to help cheese stick to pizza and eating one rock a day.
Another great example of this problem is the disastrous launch of the Humane AI Pin. And now it’s reported that Humane is trying to sell itself for one billion dollars. How on earth did they reach that valuation?! I struggle to think of any reason why anyone would want to purchase the company. I’ll also add the Rabbit R1 into this category. If anything, I’d bet that phones will take the next step in becoming a personal, portable, AI-powered assistant, especially with the announcement of Apple Intelligence.
If these models can get things wrong, what makes everyone think they’re ready to be put into products that people will buy? Why do I need to verify what the model returns? I thought the whole point was to be a companion that provides accurate information.
I’ve used ChatGPT to help me with some issues I’ve faced while creating features for this website, but it does get things wrong. This prompts me to go back, tell it that it’s wrong and, in some cases, correct it with something much simpler.
Then there’s also the tricky subject of training AI models, which Meta has found itself stuck in as it plans to use photos and posts to train its AI. This led to many artists moving over to Cara. The team behind Cara then discovered a huge bill from Vercel: $96,280 for a week of usage after the app exploded in popularity. Training AI models is going to be a tightrope walk for many businesses hoping to train their models on information from their users.
There’s also the subject of security. Microsoft had to delay its Recall feature, which was going to take screenshots of everything you do, after facing criticism from privacy and security experts. I honestly would be a little creeped out if my operating system were taking screenshots of everything I did on my PC.
Speaking of AI in products, it’s striking how many products are now adding somewhat useless AI features. One that springs to mind is the GameScent, a $180 product (yes, really) that is powered by AI and releases scents into your room. Essentially, it listens to the game audio and releases a scent that matches it; for example, if there’s the sound of a car accelerating, it releases the relevant scent. If you’re thinking about product-market fit, I have no idea how this product fits its target market.
Another thing I’ve noticed is how many products are quite literally slapping “AI!” onto their descriptions in the iOS App Store. From photo editors to web browsers, it seems that everyone has jumped on the bandwagon of claiming that their applications use AI somehow. Part of me wishes I could print a set of stickers that say “AI!” and just stick them to things around my house.
Now, I do believe that AI has the potential to make an impact of some kind on how we live and work, but I don’t think it’s going to be a major disruptor just yet. I think it will be several years before we see anything like that. AI development is interesting, but people need to take these developments in moderation.
Before you close this tab, I want to leave you with one final quote from the blog post I mentioned at the start:
Unless you are one of a tiny handful of businesses who know exactly what they're going to use AI for, you do not need AI for anything - or rather, you do not need to do anything to reap the benefits.
Now if you’ll excuse me, I have to eat a small rock and glue cheese to my pizza.