Introducing the Realtime API
Today, we're introducing a public beta of the Realtime API, enabling all paid developers to build low-latency, multimodal experiences in their apps. Similar to ChatGPT's Advanced Voice Mode, the Realtime API supports natural speech-to-speech conversations using the six preset voices already supported in the API.
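As a rough sketch of how a client might talk to the Realtime API, the snippet below builds the WebSocket URL, the beta headers, and a `session.update` event that selects a preset voice. The endpoint path, header values, and event shape here follow the public beta documentation but may change; an actual connection would additionally require a WebSocket client library and a valid API key.

```python
import json

# Hypothetical sketch: the Realtime API is a WebSocket endpoint; the model
# name and event schema below follow the beta docs and may change.
REALTIME_URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"

def realtime_headers(api_key: str) -> dict:
    """Headers the beta Realtime WebSocket endpoint expects."""
    return {
        "Authorization": f"Bearer {api_key}",
        "OpenAI-Beta": "realtime=v1",
    }

# A session.update event choosing one of the six preset voices and
# enabling both text and audio output for the conversation.
session_update = {
    "type": "session.update",
    "session": {
        "voice": "alloy",
        "modalities": ["text", "audio"],
        "instructions": "You are a friendly language tutor.",
    },
}

# Events are sent over the socket as JSON text frames.
payload = json.dumps(session_update)
```

After opening the socket with these headers, a client streams audio in via `input_audio_buffer.append` events and receives the model's audio back as response events, which is what makes the speech-to-speech loop low-latency.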
We’re also introducing audio input and output in the Chat Completions API to support use cases that don’t require the low-latency benefits of the Realtime API. With this update, developers can pass any text or audio inputs into GPT-4o and have the model respond with their choice of text, audio, or both.
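The non-realtime path can be illustrated by the request shape for an audio-in, audio-out chat completion. The sketch below only constructs the keyword arguments such a request would take; the model name `gpt-4o-audio-preview` and the `modalities`/`audio`/`input_audio` parameter shapes follow the announcement-era SDK and may change.

```python
import base64

# Hypothetical sketch of an audio-in / audio-out Chat Completions request.
# In practice these kwargs would be passed to
# client.chat.completions.create(**kwargs) with the official openai SDK.
def build_audio_request(wav_bytes: bytes) -> dict:
    """Build kwargs for a chat completion that accepts and returns audio."""
    return {
        "model": "gpt-4o-audio-preview",
        # Ask the model to respond with both text and spoken audio.
        "modalities": ["text", "audio"],
        "audio": {"voice": "alloy", "format": "wav"},
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "What is in this recording?"},
                    {
                        # Audio inputs are sent base64-encoded in the message.
                        "type": "input_audio",
                        "input_audio": {
                            "data": base64.b64encode(wav_bytes).decode("ascii"),
                            "format": "wav",
                        },
                    },
                ],
            }
        ],
    }

request = build_audio_request(b"\x00\x01")
```

Because this is a plain HTTP request-response call rather than a persistent socket, it suits batch or turn-based use cases where a few hundred extra milliseconds of latency is acceptable.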
From language apps and educational software to customer support experiences, developers have already been leveraging voice to connect with their users. Now, with the Realtime API and, soon, audio in the Chat Completions API, developers no longer have to stitch together multiple models to power these experiences.
