The last couple of weeks, as uneventful as they were for the Covid-stricken world, had some news for the prophets of the future, aka the AI enthusiasts. Nope, Skynet is still only in the Terminator universe, but perhaps, just perhaps, we may be seeing its birth. OpenAI - funded by Peter Thiel, Elon Musk, Reid Hoffman, Marc Benioff, and Sam Altman (that is one hell of a backer list) - beta-released GPT-3, its third-generation, neural-network-powered language prediction model.
Right - for the regular world, this is pretty much a pointless bit of news, and it needs a whole lot of context to even see if it is of any value. Let's see now: OpenAI is a lab that aims to build AI that benefits the whole of humanity, and it is expected to help keep bad AI at bay: yes, Skynet is not going to happen under Musk's watch, thanks to OpenAI. It transitioned from a not-for-profit to a for-profit organisation, aiming to license its technologies commercially, with Microsoft as its preferred partner. Its most widely known launch, until a couple of weeks back, was GPT-2 (Generative Pre-trained Transformer 2) - a language-generation tool capable of producing human-like text on demand.
This month, it made GPT-3 available for folks to play around with in its beta: what's new is that it uses 175 BILLION parameters vs a mere 1.5 billion in its predecessor, and it was trained on a dataset of close to a trillion words. (And I thought that Excel report with 10 status parameters and a green/yellow/red indicator was awesome!!) So what does this have to do with the regular world? For one, the API can create a very coherent article if you feed it a few words and choose a style. Two, remember that Excel sheet you painstakingly gathered data for and built? Give GPT-3 a few rows of input, and it understands what you are trying to do and fills in the rest on its own, pulling data from wherever it has seen you bring it. Remember having to write a summary because your boss was too busy to read the full report or wiki page? GPT-3 can be on standby to answer any query on the full report. Heck, it doesn't even need your report; it can take in the full research material and come up with answers on its own. Don't believe me? Just check these tweets from one of the early users who asked it about God (the responses from the AI are beautiful, to put it mildly).
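To make the "feed it a few words" part a little more concrete, here is a minimal sketch of what a call to the beta API looks like through OpenAI's Python bindings of that era. The prompt, engine name, and parameter values below are my own illustrative assumptions, not an official example.

```python
# Minimal sketch: asking GPT-3 to continue a prompt via the beta API.
# Assumes you have beta access, an API key, and the 2020-era openai Python bindings.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder - set your own key

# Give the model a topic and a style cue; it continues from there.
prompt = (
    "Write a short, upbeat newsletter blurb.\n"
    "Topic: OpenAI releases the GPT-3 beta API.\n"
    "Blurb:"
)

response = openai.Completion.create(
    engine="davinci",   # the largest GPT-3 model available in the beta
    prompt=prompt,
    max_tokens=150,     # how much text to generate
    temperature=0.7,    # higher = more creative, lower = more predictable
)

print(response.choices[0].text.strip())
```

Same idea for the spreadsheet and summarisation tricks: the "few rows of input" or the report itself simply go into the prompt, and the model completes the pattern.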
Early stages, but I can imagine this extending to use-cases like financial report analysis, creating (as well as reporting) fake news, faster legal resolutions (hopefully), and smarter, more humane companions for those conversations when we are feeling depressed. No more sick days delaying reports, no writer's block, no irritating grammatical mistakes. And this is just the tip of the iceberg in terms of possibilities.
The deal breaker for now: it still retains the biases found in humans - around race, colour, gender, and more. Guess "created in the creator's own image" is one truism that is difficult to escape.
What do you think - who wrote this article? If you think it too simple to have come from an AI, well, do remember that simplicity is also a parameter somewhere inside that set of 1,75,00,00,00,000 it uses. And did I forget to mention it can write code and fire off SQL queries too (see the sketch below)? Feeling the shudder as you realise that the AI has learnt close to everything that cost us, as individuals, decades and (not so small) loaned $$$$ to learn?!
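On the SQL bit, the trick people have been showing off is few-shot prompting: show the model a couple of English-question-to-SQL pairs and let it complete the next one. A hypothetical version (the table, columns, and questions are made up for illustration, and the bindings are the same 2020-era ones as above) might look like this:

```python
# Sketch of few-shot "English to SQL" prompting with the same beta API.
# The schema, questions, and queries below are invented examples.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = """Translate English into SQL for a table orders(id, customer, amount, order_date).

English: total amount of all orders
SQL: SELECT SUM(amount) FROM orders;

English: number of orders placed by 'Acme Corp'
SQL: SELECT COUNT(*) FROM orders WHERE customer = 'Acme Corp';

English: average order amount in 2020
SQL:"""

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=60,
    temperature=0,      # keep it deterministic for query generation
    stop=["\n\n"],      # stop once the query is complete
)

print(response.choices[0].text.strip())
```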
Perhaps we should start applying for that future "human caretaker for an AI" role and start notching up experience right away.
“The Turing Test is not for AI to pass, but for humans to fail” perfectly sums up the future in the making.
Liked this? Do subscribe to get an (almost) regular set of such articles delivered to your inbox.