It happened: OpenAI has officially released GPT-4, an improved version of the artificial intelligence language model behind ChatGPT.
GPT-4, a new stage for artificial intelligence
Although ChatGPT managed to surprise and revolutionize the market in a very short time, it was not a perfect solution. OpenAI’s language model was not the fastest, and it could handle “only” text. But Microsoft and the company it had by this point already invested billions of dollars in did not want to stop there. The result is GPT-4, which officially saw the light of day today.
OpenAI describes GPT-4 as the latest milestone in its effort to teach artificial intelligence. The new language model is now available through the OpenAI API via a waitlist, as well as to all active subscribers of ChatGPT Plus, OpenAI’s premium plan.
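For those who already have API access, switching to the new model is mostly a matter of naming it in the request. Below is a minimal sketch using the openai Python package’s chat-completions interface as it worked at launch; the prompt text and the OPENAI_API_KEY environment variable are placeholders, and an account still has to clear the waitlist before “gpt-4” will respond.

```python
# Minimal sketch: calling GPT-4 through the OpenAI API (openai Python
# package, pre-1.0 interface). Assumes OPENAI_API_KEY is set and that
# the account has been granted GPT-4 access off the waitlist.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",  # upgrading from "gpt-3.5-turbo" is just a model-name change
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what changed between GPT-3.5 and GPT-4."},
    ],
)

# The reply text lives in the first choice's message.
print(response["choices"][0]["message"]["content"])
```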
What’s new in GPT-4? Why is it such an important release?
Prayers answered: according to OpenAI, GPT-4 not only understands text but can also accept image input, analyze pictures, and answer questions about specific graphics. The company also claims that the model performs at a “human level” on various professional and academic benchmarks. That sounds great in theory; in practice, it means we are approaching the moment when artificial intelligence will be able to replace people in many different positions.
OpenAI acknowledges that in casual conversation, distinguishing between GPT-3.5 and GPT-4 may not be easy, and the changes will not be particularly noticeable. The difference emerges when a task reaches a sufficient level of complexity: on the company’s blog, we read that GPT-4 is more reliable, more creative, and able to handle more detailed instructions than GPT-3.5.
However, it should be noted that the image-analysis feature is not yet available to everyone. For now, OpenAI is testing it with a single partner, Be My Eyes. Rolling it out to all users, though, is likely just a matter of time.
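Because the image input has not been opened up yet, there is no documented request format to show. As a purely hypothetical sketch, assuming OpenAI eventually exposes images through the same chat interface, a multimodal request might attach an image URL next to the text prompt; the content structure and the example URL below are assumptions, not a published API.

```python
# Hypothetical sketch only: GPT-4's image input was limited to the
# Be My Eyes pilot at the time of writing, so this request shape is
# an assumption about a future vision-enabled endpoint.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",  # assumed name; a vision variant may differ
    messages=[
        {
            "role": "user",
            # Assumed format: a list of content parts mixing text and an image URL.
            "content": [
                {"type": "text", "text": "What is shown in this picture?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response["choices"][0]["message"]["content"])
```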
Source, featured image: OpenAI