GPT-3, OpenAI’s powerful language model, has recently had its general intelligence put to the test by scientists at the Max Planck Institute for Biological Cybernetics in Tübingen. Using a battery of psychological tests, the researchers assessed the tool’s capabilities in domains such as decision-making, information search, deliberation, and causal reasoning, and compared its results to those of humans. GPT-3 proved nearly as good as humans at making decisions, but it lagged in information search and causal reasoning, most likely because it has had less experience with the real world.
GPT-3 is widely regarded as one of the most powerful language models currently available: trained on massive quantities of text from the internet, it responds to input supplied in natural language and generates a wide variety of texts. Beyond text generation, it can also solve math problems and write code.
Marcel Binz, the study’s lead author, stated that the team set out to determine whether GPT-3 has any human-like cognitive abilities. They did this by subjecting the AI tool to a battery of psychological tests designed to probe a wide range of facets of general intelligence. To test GPT-3’s decision-making skills, the scientists used the Linda problem, a classic of cognitive psychology. The data showed that GPT-3, like humans, did not decide based on logic but instead replicated a common cognitive bias known as the conjunction fallacy: judging a conjunction of two events as more probable than one of the events on its own.
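The logical rule that the conjunction fallacy violates can be made concrete with a few lines of arithmetic. The sketch below uses made-up probabilities (they are not figures from the study) to show why, in the Linda problem, "bank teller and active feminist" can never actually be more probable than "bank teller" alone:

```python
# Illustration of the conjunction fallacy behind the Linda problem.
# Probability theory requires P(A and B) <= P(A) for any events A and B,
# yet people (and, per the study, GPT-3) often rank the conjunction
# "bank teller AND active feminist" as more likely than "bank teller".
# The numbers below are illustrative assumptions, not data from the study.

p_teller = 0.05                  # assumed P(Linda is a bank teller)
p_feminist_given_teller = 0.30   # assumed P(feminist | bank teller)

# By the chain rule, the conjunction can never exceed either conjunct:
p_teller_and_feminist = p_teller * p_feminist_given_teller

assert p_teller_and_feminist <= p_teller  # always holds mathematically

print(f"P(teller)              = {p_teller:.3f}")
print(f"P(teller and feminist) = {p_teller_and_feminist:.3f}")
```

However the conditional probability is chosen, the product can never exceed its first factor, which is exactly the constraint that intuitive judgments in the Linda problem tend to break.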
Researchers have also explained how large language models like GPT-3 can pick up tasks they were never trained on, without requiring any parameter updates. These models were found to effectively contain smaller linear models within their hidden layers, which the larger model can train with standard learning techniques to accomplish a new task.
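One way to picture that finding: a handful of input–output examples supplied "in context" implicitly define a small linear model, which is then used to answer the next query. The sketch below makes that inner fit explicit with ordinary least squares; this is an analogy to the reported mechanism, not GPT-3’s actual computation:

```python
import numpy as np

# Sketch of the "inner linear model" idea behind in-context learning.
# A few in-context (x, y) example pairs implicitly define a linear
# model; here the fit is made explicit via ordinary least squares.
# A real LLM would realize something like this inside its hidden
# activations, without updating any of its own parameters.

# In-context examples following an unstated rule, y = 3x + 1
xs = np.array([1.0, 2.0, 3.0, 4.0])
ys = np.array([4.0, 7.0, 10.0, 13.0])

# Fit slope and intercept from the context alone
A = np.stack([xs, np.ones_like(xs)], axis=1)
(slope, intercept), *_ = np.linalg.lstsq(A, ys, rcond=None)

# Answer the "query" the way the fitted linear model would
query = 5.0
prediction = slope * query + intercept
print(round(prediction, 2))  # the pattern implies 3*5 + 1 = 16
```

The key parallel is that nothing outside the in-context examples is updated: the "learning" lives entirely in the fit computed from the prompt itself.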
While GPT-3 appears to excel at decision-making tasks, this study demonstrates that it lags in tasks that require active engagement with the environment. According to the study’s authors, this is because GPT-3 is an entirely passive recipient of textual information. The authors are nevertheless optimistic that future networks will be able to acquire knowledge through interaction with users and gradually approach human levels of intelligence.