OpenAI is not currently training GPT-5

Speaking at an MIT event, OpenAI CEO Sam Altman said that the company is not currently training GPT-5, the next iteration of its Generative Pre-trained Transformer model; instead, the focus is on improving GPT-4, the latest version. Altman was responding to an open letter that called for a six-month pause on training AI models more powerful than GPT-4. While he supported the letter's goal of ensuring safety and alignment with human values, he said its claim that OpenAI was already training GPT-5 was inaccurate and that the letter lacked technical nuance.

GPT-4 is a significant advance over its predecessor, GPT-3, which was introduced in 2020. OpenAI has not disclosed GPT-4's parameter count, but outside estimates put it at around one trillion parameters, well beyond GPT-3's 175 billion. In its announcement blog post, OpenAI highlighted GPT-4's enhanced creativity, collaboration, and problem-solving capabilities.

Altman acknowledged that GPT-4 comes with safety concerns and emphasized the need to address them; as a leading AI research lab, OpenAI says it is taking steps to mitigate them. OpenAI's GPT models are already used in a range of applications, including language translation, chatbots, and content generation, but the development of large language models at this scale continues to raise ethical and safety questions.

Although GPT-5 is not currently in development, ongoing enhancements to GPT-4 and the creation of additional models based on it will continue to fuel discussion of the safety and ethical implications of such AI systems.

Aihub Team