To advance AI capabilities in the workplace, OpenAI announced a series of updates to its pioneering generative AI models, GPT-3.5 Turbo and GPT-4.
These enhancements, which include a new function calling capability, improved steerability, extended context for GPT-3.5 Turbo, and lower pricing, aim to give developers an expanded toolbox for building sophisticated, high-performance AI applications that can handle the complexity of modern work environments.
Applications powered by OpenAI
Developers aren’t the only ones who will benefit from the latest improvements to OpenAI’s GPT models. Chances are you’ve used a tool that implements OpenAI’s AI advances.
- Microsoft partnered with OpenAI to provide developers with AI models and to enhance popular products such as Bing and Office with generative AI.
- Snapchat launched its generative AI chatbot, My AI, using OpenAI's GPT models. The latest update to My AI can send and interpret image Snaps.
- Salesforce released the first generative AI CRM product, Einstein GPT, powered by OpenAI's "most advanced models."
- Morgan Stanley announced a partnership with OpenAI as one of the few wealth management firms with access to the latest GPT-4 model.
- HubSpot developed new tools, such as ChatSpot.ai, based on GPT-4.
- GitHub Copilot added generative AI via OpenAI Codex to its platform to help developers, eventually leading to a copyright lawsuit.
- Stripe incorporated OpenAI GPT technology to help understand customers and reduce fraud.
- GetResponse introduced an email generator based on OpenAI's GPT.
- Instacart created an AI chatbot to help consumers with their purchases.
Ideally, users of these and other tools built on top of OpenAI technology should see improvements in generative AI performance thanks to OpenAI’s GPT-3.5 Turbo and GPT-4 updates.
Improvements to GPT-3.5 Turbo and GPT-4
Below are the latest updates announced by OpenAI for the GPT-3.5 Turbo and GPT-4 models. The updates include a new function calling capability in the Chat Completions API, improved steerability, extended context for GPT-3.5 Turbo, and lower prices.
Function call capability
Based on developer feedback and feature requests, OpenAI gave developers the ability to describe functions to the updated models and have the AI intelligently produce a JSON object containing the arguments for those functions. This enhancement enables a more reliable connection between GPT capabilities and external tools and APIs, and supports better structured data retrieval from the model.
The new function calling capabilities enable a wide variety of applications, including the following:
- Creating chatbots that answer questions by calling external tools
- Converting natural language queries into function calls, API calls, or database queries
- Extracting structured data from text

The new API parameters give developers a way to describe functions to the model and, optionally, to request that the model call a specific function.
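As a rough illustration, the sketch below uses the openai Python package (the v0.27-era Chat Completions API from the time of the announcement) to describe a single function to the model and read back the arguments it produces. The get_current_weather function, its schema, and the model snapshot are assumptions chosen for illustration, not part of OpenAI's announcement.

```python
import json
import openai  # reads the OPENAI_API_KEY environment variable on import

# Describe a hypothetical function to the model. The model never executes it;
# it only returns a JSON object of arguments matching this schema.
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. Boston"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    }
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",  # function-calling-enabled snapshot from this update
    messages=[{"role": "user", "content": "What's the weather like in Boston?"}],
    functions=functions,
    function_call="auto",  # let the model decide whether to call the function
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model returns the function name plus a JSON string of arguments.
    args = json.loads(message["function_call"]["arguments"])
    print(message["function_call"]["name"], args)
```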
The introduction of function calling opens up new possibilities for developers, who can integrate GPT models with external APIs and tools more seamlessly.
For example, a workplace application could use this feature to turn a user’s natural language query into a function call to a CRM or ERP system, making the application easier to use and more efficient.
While OpenAI remains vigilant about potential security issues associated with untrusted data, it suggests that developers protect their applications by consuming information only from trusted tools and including user confirmation steps before taking impactful actions.
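Putting those pieces together, here is a hedged sketch of the workplace scenario above: a hypothetical CRM lookup function (find_customer_orders and its backing system are assumptions for illustration, not a real API), with a user confirmation step before the call is executed, in line with OpenAI's guidance.

```python
import json
import openai

# Hypothetical CRM function exposed to the model.
functions = [
    {
        "name": "find_customer_orders",
        "description": "Look up recent orders for a customer in the CRM",
        "parameters": {
            "type": "object",
            "properties": {
                "customer_name": {"type": "string"},
                "limit": {"type": "integer", "description": "Max orders to return"},
            },
            "required": ["customer_name"],
        },
    }
]

response = openai.ChatCompletion.create(
    model="gpt-4-0613",
    messages=[{"role": "user", "content": "Show me Acme Corp's last three orders."}],
    functions=functions,
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    call = message["function_call"]
    args = json.loads(call["arguments"])
    # User confirmation step before any impactful action, per OpenAI's guidance.
    answer = input(f"Run {call['name']} with {args}? [y/N] ")
    if answer.lower() == "y":
        pass  # execute the trusted CRM call here with the parsed arguments
```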
Model improvements
The new GPT-4 and GPT-3.5 Turbo models incorporate improved steerability and extended context.
Developers can use the improved steerability to design AI applications that align more closely with the specific requirements of an organization or task, such as generating more targeted business reports or creating detailed, context-aware responses in customer service chatbots.
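In practice, steering is typically done through the system message in the Chat Completions API. The sketch below assumes a reporting use case and an illustrative model snapshot; it is one way to apply the idea, not a prescribed pattern.

```python
import openai

# Steer tone and output format with a system message.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a business analyst. Answer only with a three-bullet "
                "executive summary, in a neutral, formal tone."
            ),
        },
        {"role": "user", "content": "Summarize last quarter's sales performance."},
    ],
)

print(response["choices"][0]["message"]["content"])
```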
The GPT-3.5 Turbo 16k release offers four times the context length of the standard GPT-3.5 Turbo model and supports roughly 20 pages of text in a single request. This expanded context allows the AI to understand and generate responses for much longer texts.
For example, in legal or academic workplaces, where documents are often long, this feature could dramatically improve the model’s ability to understand and summarize large amounts of text, making information extraction more efficient. Similarly, for project management applications, it could enable AI to process and understand entire project plans in one go, helping to generate more insightful project analysis and forecasting.
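A minimal sketch of that document summarization workflow, assuming a local contract.txt file and using the gpt-3.5-turbo-16k model name from the announcement, might look like this:

```python
import openai

# "contract.txt" is a placeholder for any long document of up to roughly 20 pages.
with open("contract.txt") as f:
    document = f.read()

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k",  # ~4x the context window of standard GPT-3.5 Turbo
    messages=[
        {"role": "system", "content": "You summarize legal documents for non-lawyers."},
        {"role": "user", "content": f"Summarize the key obligations in:\n\n{document}"},
    ],
)

print(response["choices"][0]["message"]["content"])
```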
OpenAI also announced the deprecation of earlier versions of GPT-4 and GPT-3.5 Turbo, with the older models remaining accessible until September 13. Developers were assured of a smooth transition and encouraged to provide feedback to help refine the process.
Lower prices
After improvements in system efficiency, OpenAI is passing the cost savings on to developers.
The price of the popular embedding model, text-embedding-ada-002, is reduced by 75%. Additionally, there is a 25% cost reduction on input tokens for the GPT-3.5 Turbo model.
Along with improved functionality, these price reductions should make it easier for developers to use and experiment with these models in their applications.
Continued development of GPT models
OpenAI appears to be committed to continually improving its platform based on developer feedback. With the latest improvements to its generative AI models, OpenAI offers new possibilities for developers to create innovative and improved AI applications for the workplace. The latest API updates and GPT models provide developers with more capabilities to build AI applications better suited to handle the complexity and specificity of tasks commonly found in work environments.