Why OpenAI owns the kingdom of AI
OpenAI, a leading artificial intelligence research laboratory, held its first OpenAI Dev Day on November 6, 2023, in San Francisco. The event was a showcase of the company's latest advances in AI research and development, and it featured a number of announcements about new models and developer products.
In his keynote address, OpenAI CEO Sam Altman highlighted the company's mission to "ensure that artificial general intelligence benefits all of humanity".
He also emphasized the importance of making AI accessible to developers, so that they can build new and innovative applications on top of OpenAI's technology.
What did we see at OpenAI Dev Day?
OpenAI announced a number of new models and developer products at OpenAI Dev Day, including:
- GPT-4 Turbo: A more capable and cheaper version of GPT-4 with a 128K context window
- Assistants API: A new API that makes it easier for developers to build their own assistive AI apps with goals and the ability to call models and tools
- GPT-4 Turbo with Vision: A version of GPT-4 Turbo that can accept images as input alongside text
- DALL-E 3 API: A public API for DALL-E 3, OpenAI's image generation model
These new models and products represent a significant step forward for OpenAI and its mission to make AI accessible to everyone. They also have the potential to revolutionize the way we build and interact with software.
GPT-4 Turbo is a new version of GPT-4 that is more capable, cheaper, and supports a 128K-token context window. This means that GPT-4 Turbo can generate more comprehensive and informative text, and it can hold far more information in a single prompt, which allows it to stay consistent and coherent across long documents.
GPT-4 Turbo is also significantly cheaper than GPT-4, making it more accessible to developers and businesses. This could lead to a wave of new AI-powered applications and services.
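As a concrete sketch, this is roughly the JSON body a developer would POST to the Chat Completions endpoint to use GPT-4 Turbo. The model identifier `gpt-4-1106-preview` was the launch-era name for GPT-4 Turbo; check OpenAI's documentation for the current identifier before relying on it.

```python
# Minimal sketch (no network call): build the request body for a
# POST to https://api.openai.com/v1/chat/completions targeting GPT-4 Turbo.

def build_gpt4_turbo_request(prompt: str, max_output_tokens: int = 1024) -> dict:
    """Return the JSON body for a Chat Completions request."""
    return {
        "model": "gpt-4-1106-preview",  # launch-era GPT-4 Turbo, 128K context
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": max_output_tokens,
    }

body = build_gpt4_turbo_request("Summarize OpenAI Dev Day in one sentence.")
print(body["model"])  # gpt-4-1106-preview
```

An actual call would send this body with an `Authorization: Bearer <API key>` header; the sketch stops at constructing it so the shape is easy to inspect.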
Another highlight of OpenAI Dev Day is the Assistants API, a new API that makes it easier for developers to build their own assistive AI apps with goals and the ability to call models and tools. This means that developers can now create AI apps that help users with a variety of tasks, such as scheduling appointments, writing emails, and generating creative content.
The Assistants API is still in early access, but it has the potential to revolutionize the way we interact with software. In the future, we may be able to use AI assistants to help us with all sorts of tasks, from managing our personal lives to running our businesses.
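The launch-era flow has three main steps: define an assistant, add a user message to a thread, then start a run. A hedged sketch of the JSON bodies involved, assuming the endpoint paths and field names as announced at Dev Day (verify against current OpenAI docs, since the API was in early access):

```python
# Sketch of the Assistants API flow as three request bodies (no network calls).
# Field names follow the launch announcement and may have changed since.

def assistant_payload() -> dict:
    # POST /v1/assistants - define the assistant's goal and tools
    return {
        "model": "gpt-4-1106-preview",
        "instructions": "You are a scheduling helper. Draft concise emails.",
        "tools": [{"type": "code_interpreter"}],
    }

def message_payload(text: str) -> dict:
    # POST /v1/threads/{thread_id}/messages - add the user's request
    return {"role": "user", "content": text}

def run_payload(assistant_id: str) -> dict:
    # POST /v1/threads/{thread_id}/runs - ask the assistant to act on the thread
    return {"assistant_id": assistant_id}
```

The design point worth noticing is that conversation state lives in the thread on OpenAI's side, so the developer no longer has to resend the full history on every call.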
GPT-4 Turbo with Vision
GPT-4 Turbo with Vision is a version of GPT-4 Turbo that can accept images as input alongside text. This means that developers can now use GPT-4 Turbo to create AI applications that understand and reason about visual content, such as describing a photo, reading a chart, or extracting information from a screenshot.
Introduced at OpenAI Dev Day, this could lead to a new generation of AI-powered tools, such as accessibility software that describes images aloud or document tools that pull structured data out of scans, helping users work with visual content quickly and easily.
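A vision request differs from a plain text request in that the user message's `content` becomes a list mixing text and image parts. A sketch of the request body, assuming the launch-era model name `gpt-4-vision-preview`:

```python
# Sketch (no network call): Chat Completions body with an image input.
# "gpt-4-vision-preview" was the launch-era model name; check current docs.

def vision_request(question: str, image_url: str) -> dict:
    """Return the JSON body for a Chat Completions call with an image."""
    return {
        "model": "gpt-4-vision-preview",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
        "max_tokens": 300,
    }

body = vision_request("What is in this photo?", "https://example.com/photo.jpg")
```

The `url` field can also carry a base64-encoded data URL, which is useful when the image is not publicly hosted.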
DALL-E 3 API
The DALL-E 3 API is a public API for DALL-E 3, OpenAI's image generation model. DALL-E 3 is one of the most powerful image generation models in the world, and it can generate realistic images from text descriptions.
The DALL-E 3 API will make it possible for developers to integrate DALL-E 3 into their own applications and services. This could lead to a new generation of AI-powered image editing tools, video editing tools, and other creative applications.
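Image generation uses a separate endpoint from chat. A minimal sketch of the request body a client would POST to the images endpoint, using the `dall-e-3` model name from the launch:

```python
# Sketch (no network call): body for POST /v1/images/generations.

def dalle3_request(prompt: str, size: str = "1024x1024") -> dict:
    """Return the JSON body for a DALL-E 3 image generation request."""
    return {
        "model": "dall-e-3",
        "prompt": prompt,
        "n": 1,        # DALL-E 3 generates one image per request
        "size": size,
    }

body = dalle3_request("A watercolor painting of a lighthouse at dawn")
```

The response then contains a URL (or base64 data) for the generated image, which an application can download and display.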
Pricing of all models has changed
OpenAI Dev Day brought many new tools for users, along with new charges for using the AI company's models.
New OpenAI pricing is as follows:
| Model | Input price (per 1K tokens) | Output price (per 1K tokens) |
| --- | --- | --- |
| GPT-4 8K | $0.03 | $0.06 |
| GPT-4 Turbo 128K | $0.01 | $0.03 |
| GPT-3.5 Turbo 4K | $0.0015 | $0.002 |
| GPT-3.5 Turbo 16K | $0.003 | $0.004 |
| GPT-3.5 Turbo fine-tuning 4K (previous) | Training: $0.008 | Input: $0.012 |
| GPT-3.5 Turbo fine-tuning 4K and 16K (new) | Training: $0.008 | Input: $0.003 |
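To see what this per-1K-token pricing means in practice, here is a small cost calculator using the chat-model rows from the table above (the model keys are shorthand for this sketch, not official API names; the fine-tuning rows are omitted because they bill training and inference separately):

```python
# (input_price, output_price) in USD per 1K tokens, from the table above.
PRICES = {
    "gpt-4-8k":          (0.03,   0.06),
    "gpt-4-turbo":       (0.01,   0.03),
    "gpt-3.5-turbo-4k":  (0.0015, 0.002),
    "gpt-3.5-turbo-16k": (0.003,  0.004),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request, billed per 1K tokens."""
    in_price, out_price = PRICES[model]
    return input_tokens / 1000 * in_price + output_tokens / 1000 * out_price

# A request with 1,000 prompt tokens and 500 completion tokens:
print(round(request_cost("gpt-4-turbo", 1000, 500), 4))  # 0.025
print(round(request_cost("gpt-4-8k", 1000, 500), 4))     # 0.06
```

The same request costs $0.025 on GPT-4 Turbo versus $0.06 on GPT-4 8K, which is the price cut the keynote emphasized.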
OpenAI is launching a new feature called GPTs, which allows anyone to create customized versions of ChatGPT for specific purposes. GPTs are easy to create, even for people with no coding experience.
To create a GPT, you simply start a conversation with ChatGPT and give it instructions and extra knowledge. You can also pick what the GPT can do, such as searching the web, making images, or analyzing data.
Once you have created a GPT, you can share it with others or use it privately. There are also example GPTs available today for ChatGPT Plus and Enterprise users to try out.
OpenAI has built GPTs with privacy and safety in mind. Your chats with GPTs are not shared with builders, and you can choose whether or not to share data with third-party APIs. Builders can also choose whether or not user chats with their GPTs can be used to improve and train OpenAI's models.
OpenAI has also set up new systems to help review GPTs against its usage policies. These systems are designed to prevent users from sharing harmful GPTs, such as those that involve fraudulent activity, hateful content, or adult themes.
Developers can connect GPTs to the real world by making one or more APIs available to the GPT. This allows GPTs to integrate external data or interact with the real world.
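A GPT learns about an external API from an OpenAPI description the builder attaches as an action. Here is an illustrative sketch of such a description for a hypothetical weather service (the endpoint, server URL, and schema are invented for the example, not a real API):

```python
# Illustrative OpenAPI 3.0 description a builder might attach to a GPT as an
# action. Everything about the service itself (URL, path, parameter) is made up.
action_spec = {
    "openapi": "3.0.0",
    "info": {"title": "Weather lookup", "version": "1.0.0"},
    "servers": [{"url": "https://api.example.com"}],
    "paths": {
        "/weather": {
            "get": {
                "operationId": "getWeather",  # the name the GPT invokes
                "summary": "Current weather for a city",
                "parameters": [{
                    "name": "city",
                    "in": "query",
                    "required": True,
                    "schema": {"type": "string"},
                }],
            }
        }
    },
}

print(action_spec["paths"]["/weather"]["get"]["operationId"])  # getWeather
```

The `operationId` and parameter schemas are what the model reads to decide when to call the API and how to fill in arguments, so clear names and descriptions matter more here than in a spec written only for humans.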
Enterprise customers can also deploy internal-only GPTs. This allows them to create versions of ChatGPT for specific use cases, departments, or proprietary datasets.
OpenAI believes that the best GPTs will be invented by the community. The company is committed to involving the community in the development of a safe AGI that benefits humanity.
OpenAI has also made ChatGPT Plus more current and simpler to use. It now includes knowledge up to April 2023, and the model picker has been removed. Users can now access DALL·E, browsing, and data analysis without switching between modes. Files can also be attached to let ChatGPT search PDFs and other document types.
With this new pricing and these new models and products, we can expect to see a wave of new AI-powered applications and services in the coming years. These applications and services could have a major impact on our lives, from the way we work to the way we interact with the world around us.