Andreas Braun, Microsoft Germany’s CTO, made a stunning declaration at the “AI in Focus – Digital Kickoff” event on March 9, 2023: “GPT-4 is due to arrive next week.” The world was still getting used to GPT-3 and its ramifications. According to the German news site Heise, Braun was accompanied by Microsoft Germany’s CEO, Marianne Janik, who discussed the potential for business disruption.
Janik highlighted AI’s power to create value, calling the current wave of AI development and ChatGPT a “game-changer” and “an iPhone moment.” GPT-4, a rumored multimodal AI model, has the potential to unlock a slew of new possibilities, including the ability to generate videos from text.
Microsoft Germany’s Chief Technologist of Business Development AI & Emerging Technologies, Holger Braun, observed that this model will open up “totally different possibilities.” GPT-4 is expected to translate text into other media, such as graphics, audio, and video. For example, you could show the model an image and ask questions about what it depicts; it could also write a story based on the mood of the image or focus on a specific person shown in it.
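No such interface was public at the time of the announcement, so the snippet below is only a hypothetical sketch of what an image-question prompt could look like, modeled on the chat-style API OpenAI later shipped in its Python SDK. The model name, the image URL, and the assumption that GPT-4 accepts image input at all are illustrative, not details confirmed at the event.

```python
# Hypothetical sketch: asking a multimodal model about an image and
# requesting a story that matches its mood. Modeled on OpenAI's later
# chat-completions API; model name and image URL are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # assumption: a vision-capable GPT-4 variant
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "What is shown in this image? Then write a short "
                         "story that matches its mood."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```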
There could be many such applications. GPT-4 can generate text that reads as if a human wrote it, and it will advance the technology behind ChatGPT, which is currently based on GPT-3.5. GPT-4 is rumored to be 100 times more powerful than GPT-3 and better at writing computer code. That said, OpenAI’s CEO, Sam Altman, insists that we shouldn’t expect artificial general intelligence (AGI). He does consider multimodal AI a foregone conclusion, but he has dismissed the idea that GPT-4 contains 100 trillion parameters.
Most large models are under-optimized. Training a model is costly, and businesses must trade accuracy against expense. GPT-3, for example, was trained only once, despite errors in the process; the researchers could not perform hyperparameter optimization because the cost would have been prohibitive.
Microsoft and OpenAI demonstrated that GPT-3 could be improved by training it with well-chosen hyperparameters. They found that a 6.7B-parameter GPT-3 model with tuned hyperparameters outperformed the 13B-parameter GPT-3 model.
They developed a new parameterization (μP) under which the optimal hyperparameters for smaller models are also optimal for larger models of the same architecture. Researchers can now tune big models at roughly a tenth of the usual cost.
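To make the transfer idea concrete, here is a minimal, self-contained sketch: hyperparameters are swept on a small proxy model, and the winning settings are reused unchanged on the large model, which is then trained only once. The widths, the search grid, and the toy loss function are purely illustrative assumptions; Microsoft’s actual approach (μP / muTransfer, released as the `mup` library) supplies the principled parameterization that makes this reuse valid, not the stand-in function below.

```python
# Conceptual sketch of hyperparameter transfer: tune on a cheap proxy,
# reuse the best settings on the expensive model. All numbers are toy values.
import itertools
import math

def train_and_eval(width: int, lr: float, batch_size: int) -> float:
    """Stand-in for a real training run: returns a toy validation loss
    whose optimal learning rate does not depend on width (the muP claim)."""
    return (math.log10(lr) + 3.0) ** 2 + 0.1 * math.log2(batch_size) + 1.0 / width

SMALL_WIDTH = 256     # cheap proxy model, swept many times
LARGE_WIDTH = 8192    # target model, trained only once

grid = {"lr": [1e-4, 3e-4, 1e-3, 3e-3], "batch_size": [64, 128, 256]}

# 1) Sweep hyperparameters on the small proxy model.
candidates = [dict(zip(grid, values)) for values in itertools.product(*grid.values())]
best = min(candidates, key=lambda hp: train_and_eval(SMALL_WIDTH, **hp))

# 2) Under muP the optimum is (approximately) width-independent,
#    so the proxy's best settings are reused directly on the large model.
print("best proxy hyperparameters:", best)
print("large-model loss with transferred settings:",
      train_and_eval(LARGE_WIDTH, **best))
```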
Like GPT-3, GPT-4 will be used for a variety of language applications, including code generation, text summarization, translation, classification, chatbots, and grammar correction. The new model is expected to be safer, less biased, more accurate, and better aligned, as well as more cost-effective and robust.