Lindsay: I think that speaks to the problem you’re trying to solve with these things. Seeing it as the Oracle of Delphi, trying to get it to tell the future or recommend stocks, is a complete misuse of the tool. However, there must be a degree to which you would feel comfortable with an interface like ChatGPT as a tool to collate basic information and consolidate it nicely for you. To me, it’s just an advanced Google search.
Gilmar: Is it the case that these systems will become more sophisticated and these concerns are just teething problems? Or are we saying AI can never be trusted enough that we won’t have to check whether its output is correct? At the moment it’s easier to ask ChatGPT than Google, but with Google I can see where search results come from, so I can judge whether they at least look credible. Can that be overcome?
Ezri: The more sophisticated it becomes, the harder it will be to evaluate what it’s doing. I think it’s extremely useful in some contexts, but right now we’re in a Barnum and Bailey scenario. It’s all a circus, and some people will be left broke and angry as a result. That doesn’t strike me as a good thing, either for the people who use it or for AI itself, because it undermines trust in the AI industry.
Lindsay: I’d say it’s more than teething problems. These are problems this sort of system is bound to face. I’m hoping that all the talk around it will lead to more regulation. Maybe I’m being optimistic here, but I think it’s about credibility. I could see a world where being able to say, “okay, we’ve built this system, and this is where we got the information” makes that system more focused and more targeted. You could keep the easy interface, but you’d have the right tool for the right job. We all want the ability to get somewhere fast, and ChatGPT gets there with 90% accuracy, maybe less, but it does get there quickly. That first layer of research, the information you would otherwise find for yourself, ChatGPT does in seconds. That’s one of its main attractions.
Ezri: It’s still a double-edged sword. Using it for chatbots on retail websites is brilliant, because waiting to speak to a human agent is expensive and can cause a lot of delay. If a chatbot can answer customer questions in a flash, that’s definitely an advantage. Businesses are always looking for ways to do things faster or cheaper, or both. But the problem is it’s being presented as a window on all human knowledge, or at least all the knowledge on the internet. With Microsoft already offering a premium subscription service using ChatGPT, it begins to look like the ultimate confidence trick. That’s not my phrase, by the way, it comes from media commentator Ted Gioia.
There are really good uses for AI generally, and in business in particular. I think the single most important thing that businesses can do is focus on AI literacy, supporting their people to deal with the consequences of this kind of technology. A recent project I was involved in, looking at adoption of AI tools in UK law and accountancy firms, found all sorts of unintended or unexpected consequences, for example in professional training pathways, client relationships, business model innovation, and so on. Those are areas where the rubber hits the road, with real-world implications, and often the impact is not the obvious “it does everything faster”.
Gilmar: We’ve touched on the future of jobs here. Let me ask slightly provocatively: Are we all going to be out of a job in 10 years?
Lindsay: Some people will be, but there’ll be more jobs to replace the ones that disappear. The people who seem most worried are content writers and people who work in call centres. Also, though they have slightly less reason to worry, people like lawyers and architects, professional services providers whose clients might think their work could be automated and done in a flash by AI. Looking forward more positively, if we harness it right we’ll do the easy stuff much more quickly. Then we can do the harder stuff ourselves and spend more time on it. I think some of the failed forays into getting AI to write content prove there’s still a long way to go before you can remove the human touch altogether. And you can’t use AI to create something genuinely new; it can only do what it’s programmed to do. It can’t do the thinking for you.