“Flawed” GPT-4: OpenAI’s game-changer in the AI era, “hallucinations” and all

The world is witnessing a paradigm shift towards a new era of machine intelligence, and if your mind hasn’t been flabbergasted by its possibilities, you are not paying attention. A new revolution has arrived: technology is on the precipice of permanently reshaping society. Whether it is for the better or will give birth to a dystopian reality is a question only time will answer. For now, a technology still in its nascent stage has overwhelmed an entire human generation with anxiety that the future may look very little like the past.

The skills of the newly launched GPT-4, OpenAI’s latest product, arriving months after ChatGPT sent tremors across the world, are overwhelming researchers and academics, and we still don’t know its full potential. One of them wrote that GPT-4 had caused him an “existential crisis,” because its intelligence dwarfed his own. Within a couple of days of launch, GPT-4 had aced America’s top examinations, including the Uniform Bar Exam, the Biology Olympiad and the LSAT, with performance pegged higher than 90% of human test takers. With stronger reasoning capabilities and wider knowledge, it can now study an image to provide answers. You can sense its improved sophistication when it gives accurate responses to tricky questions and cracks better jokes.

GPT-4 has overwhelmed the entire world with its superhuman capabilities

According to OpenAI, its upgraded model, GPT-4, is more capable and accurate than ChatGPT and can produce astonishingly accurate solutions on a variety of tests. It is multimodal, so it can interpret both text and images to answer queries. Microsoft is using it to revolutionise its search engine, Bing; the payments company Stripe is using it to combat payments fraud; the educator Khan Academy is creating personalised learning experiences for students; and Morgan Stanley will use it to help guide its bankers and their clients.

GPT-4 is an enabler: millions of startups claim to be using its secret recipe to create new products and improve the operational effectiveness of their businesses, promising to revolutionise legal administration, medical diagnosis, academic research, marketing strategy and even mundane chores. At the forefront of this enablement are the tech giants Microsoft and Google, fighting it out to dominate the world wide web by using generative AI to transform search engines.

The debate around the potential risks and benefits of artificial intelligence (AI) is ongoing. While some experts like Kevin Roose express concerns about the unknown risks associated with AI, others like Professor Charlie Beckett believe that AI can be a valuable tool to augment human capabilities rather than replace them. Both perspectives have their merits, and the reality likely lies somewhere in between. While AI has the potential to revolutionize various industries and enhance productivity, it is essential to ensure that ethical considerations are incorporated in the development and implementation of these technologies. Additionally, there should be a balance between automation and human input to ensure that jobs are not lost and that humans can continue to contribute meaningfully to society.

‘Hallucinations’ are a big challenge GPT-4 has not been able to overcome: it makes things up. It makes factual errors, creates harmful content and has the potential to spread disinformation to suit its bias. ‘We spent six months making GPT-4 safer and more aligned. It is 82 percent less likely to respond to requests for disallowed content and 40 percent more likely to produce factual responses,’ OpenAI has claimed. Its founder Sam Altman further admits that, despite the anticipation, GPT-4 ‘is still flawed, still limited, but it still seems more impressive on first use than it does after you spend more time with it.’ Amidst the fascinating results, the flaws can’t be ignored. ‘Any Large Language Model is in a sense the child of the texts on which it is trained. If the bot learns to lie, it’s because it has come to understand from those texts that human beings often use lies to get their way. The sins of the bots are coming to resemble the sins of their creators,’ writes Stephen L. Carter, a Bloomberg Opinion columnist.