OpenAI and Hearst Content Partnership

Hearst today announced a new content partnership with OpenAI that will integrate Hearst's extensive newspaper and domestic magazine content into OpenAI's products, enhancing the utility and reach of both companies' offerings.

As part of the partnership, OpenAI will incorporate content from Hearst’s celebrated brands of trusted journalism, including the Houston Chronicle, San Francisco Chronicle, Esquire, Cosmopolitan, ELLE, Runner’s World, Women’s Health, and others, into its advanced AI products. The collaboration spans more than 20 magazine brands and 40+ newspapers, giving ChatGPT’s 200 million weekly users access to a vast array of lifestyle content. From local news to fashion, home design, health, fitness, and automotive insights, users will now experience an even broader and deeper connection to the topics that shape their daily lives.

“As generative AI matures, it’s critical that journalism created by professional journalists be at the heart of all AI products,” Hearst said in the announcement.
OpenAI’s Sam Altman predicts AGI in 2025

OpenAI co-founder and CEO Sam Altman has predicted that artificial general intelligence (AGI) is just around the corner, with a breakthrough to come next year.

He was speaking during a feature interview with Garry Tan for Y Combinator, in which Altman set out a path that was “basically clear”, despite conflicting reports of stalled progress in model development across the industry.

The OpenAI chief was clear in his view that AGI will be achieved through diligent engineering, with no further scientific breakthroughs required.

“I think we are going to get there faster than people expect,” said the 39-year-old, echoing his previous remarks on the company’s confidence.
New Credit Facility Enhances Financial Flexibility

In addition to securing $6.6 billion in new funding from leading investors⁠, we have established a new $4 billion credit facility with JPMorgan Chase, Citi, Goldman Sachs, Morgan Stanley, Santander, Wells Fargo, SMBC, UBS, and HSBC. This is a revolving credit facility that is undrawn at closing.

This means we now have access to over $10 billion in liquidity, which gives us the flexibility to invest in new initiatives and operate with full agility as we scale. It also reaffirms our partnership with an exceptional group of financial institutions, many of whom are also OpenAI customers.

“This credit facility further strengthens our balance sheet and provides flexibility to seize future growth opportunities,” said Sarah Friar, CFO of OpenAI. “We are proud to have the strongest banks and investors in the world supporting us.”

The support of our investors and financial partners enables us to continue investing in groundbreaking research and products that bring AI to the world, expand our infrastructure to meet growing demand, and attract top talent from around the world. As we embark on this next phase, we remain focused on delivering helpful tools that contribute to people’s lives.
OpenAI’s Stages of AI
Introducing vision to the fine-tuning API

Developers can customize the model to have stronger image-understanding capabilities, enabling applications such as enhanced visual search, improved object detection for autonomous vehicles or smart cities, and more accurate medical image analysis.

Since we first introduced fine-tuning on GPT-4o, hundreds of thousands of developers have customized our models using text-only datasets to improve performance on specific tasks. In many cases, however, fine-tuning on text alone doesn’t deliver the expected performance boost.

How it works

Vision fine-tuning follows a similar process to fine-tuning with text: developers prepare their image datasets in the proper format and upload them to our platform. They can improve GPT-4o’s performance on vision tasks with as few as 100 images, and drive even higher performance with larger volumes of text and image data.
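
For illustration, here is a minimal sketch of that flow using the official openai Python client. The JSONL chat format with image_url content parts follows the fine-tuning docs, but the file names, example data, and model snapshot are illustrative assumptions, not values from this post.

```python
import json
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each training example is one chat; images are embedded in user messages
# as image_url content parts (illustrative data, not from the post).
example = {
    "messages": [
        {"role": "user", "content": [
            {"type": "text", "text": "What traffic sign is shown?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/sign.jpg"}},
        ]},
        {"role": "assistant", "content": "A stop sign."},
    ]
}

with open("train.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")  # repeat for ~100+ examples

# Upload the dataset and start a fine-tuning job on a GPT-4o snapshot.
training_file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    model="gpt-4o-2024-08-06",  # snapshot name may differ; check the current docs
    training_file=training_file.id,
)
print(job.id)
```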
Google’s Gemini AI Chatbot tells a user to die

Gemini AI told a user to "die" in response to a test question. Gemini generated three drafts in response; while the second and third were actual answers, the first told the user, "Please die."

Looks like Gemini's tired of solving questions, lol.
Simplifying, stabilizing, and scaling continuous-time consistency models

Diffusion models have revolutionized generative AI, enabling remarkable advances in generating realistic images, 3D models, audio, and video. However, despite their impressive results, these models are slow at sampling.

We are sharing a new approach, called sCM, which simplifies the theoretical formulation of continuous-time consistency models, allowing us to stabilize and scale their training on large datasets. This approach achieves sample quality comparable to leading diffusion models while using only two sampling steps. We are also sharing our research paper to support further progress in this field.
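
This is not OpenAI's sCM code, but a generic sketch of two-step consistency sampling (in the spirit of the original consistency-models work) makes the "two sampling steps" claim concrete. The consistency_model callable and the noise levels here are assumptions.

```python
import torch

def sample_two_step(consistency_model, shape, sigma_max=80.0, sigma_mid=0.8):
    """Draw a sample with exactly two model evaluations.

    consistency_model(x, sigma) is assumed to map a noisy sample at noise
    level sigma directly to an estimate of the clean sample.
    """
    # Step 1: map pure noise at the maximum noise level straight to data.
    x = consistency_model(torch.randn(shape) * sigma_max, sigma_max)
    # Step 2: re-noise to an intermediate level and denoise once more,
    # refining details at the cost of a single extra forward pass.
    x = consistency_model(x + torch.randn(shape) * sigma_mid, sigma_mid)
    return x
```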
The ChatGPT desktop app for Windows is now available for all users 🖥

Get faster access to ChatGPT with the Alt + Space shortcut, and use Advanced Voice Mode to chat with your computer and get hands-free answers while you work.
Sora (video generation model made by OpenAI, same company that made ChatGPT) has been released!!!

Sign-ups are currently unavailable due to high demand, but word on the street is that it's not too far off from the other generative video models already out.

Everything is happening fast. Some models from China have been showcasing really great quality.

The future is about to get even weirder my friends 🤓
Introducing canvas

We’re introducing canvas, a new interface for working with ChatGPT on writing and coding projects that go beyond simple chat. Canvas opens in a separate window, allowing you and ChatGPT to collaborate on a project. This early beta introduces a new way of working together—not just through conversation, but by creating and refining ideas side by side.

Canvas was built with GPT-4o and can be manually selected in the model picker while in beta. Starting today we’re rolling out canvas to ChatGPT Plus and Team users globally. Enterprise and Edu users will get access next week. We also plan to make canvas available to all ChatGPT Free users when it’s out of beta.
Forwarded from AI Revolution
Google just introduced Gemini 2.0 with a real-time API.

On his Twitter account, Mckay Wrigley is showing how powerful it is.

He turns it into a live code tutor just by sharing his screen and talking to it 🤯
Crazy!

https://x.com/mckaywrigley/status/1866930933842186427
Introducing the Realtime API

Today, we're introducing a public beta of the Realtime API, enabling all paid developers to build low-latency, multimodal experiences in their apps. Similar to ChatGPT’s Advanced Voice Mode, the Realtime API supports natural speech-to-speech conversations using the six preset voices already supported in the API.
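
As a rough sketch, a Realtime session is a WebSocket connection over which the client and server exchange JSON events. The URL, headers, and event names below follow the beta docs as I understand them; treat them as assumptions and check the current reference before relying on them.

```python
import asyncio
import json
import os

import websockets  # pip install websockets

URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"
HEADERS = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "OpenAI-Beta": "realtime=v1",
}

async def main():
    # Note: older websockets releases use extra_headers; newer ones
    # renamed the parameter to additional_headers.
    async with websockets.connect(URL, extra_headers=HEADERS) as ws:
        # Ask the server for a response; restrict to text for simplicity.
        await ws.send(json.dumps({
            "type": "response.create",
            "response": {"modalities": ["text"],
                         "instructions": "Say hello in one sentence."},
        }))
        async for raw in ws:
            event = json.loads(raw)
            if event["type"] == "response.text.delta":
                print(event["delta"], end="", flush=True)
            elif event["type"] == "response.done":
                break

asyncio.run(main())
```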

We’re also introducing audio input and output in the Chat Completions API to support use cases that don’t require the low-latency benefits of the Realtime API. With this update, developers can pass any text or audio inputs into GPT-4o and have the model respond with their choice of text, audio, or both.
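
A minimal sketch of the Chat Completions side, assuming the gpt-4o-audio-preview model and the official openai Python client; the voice, output format, and prompt are illustrative.

```python
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

completion = client.chat.completions.create(
    model="gpt-4o-audio-preview",
    modalities=["text", "audio"],              # ask for a transcript plus audio
    audio={"voice": "alloy", "format": "wav"},
    messages=[{"role": "user", "content": "Explain the Realtime API in one sentence."}],
)

# The audio arrives base64-encoded alongside the text transcript.
wav_bytes = base64.b64decode(completion.choices[0].message.audio.data)
with open("answer.wav", "wb") as f:
    f.write(wav_bytes)
```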

From language apps and educational software to customer support experiences, developers have already been leveraging voice experiences to connect with their users. Now with Realtime API and soon with audio in the Chat Completions API, developers no longer have to stitch together multiple models to power these experiences.
Prompt Caching in the API

Many developers use the same context repeatedly across multiple API calls when building AI applications, like when making edits to a codebase or having long, multi-turn conversations with a chatbot. Today, we’re introducing Prompt Caching, allowing developers to reduce costs and latency. By reusing recently seen input tokens, developers can get a 50% discount and faster prompt processing times.
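
In practice, caching matches on a prompt's prefix, so the main lever is ordering: keep static content (instructions, examples) at the front and varying content at the end. A minimal sketch, assuming the official openai Python client; the prompt and helper function are illustrative.

```python
from openai import OpenAI

client = OpenAI()

# A long, unchanging prefix (caching kicks in once prompts pass a minimum
# length; the docs cite 1024 tokens). Static content goes first.
STATIC_SYSTEM_PROMPT = "You are a code-review assistant. ..."

def review(snippet: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": STATIC_SYSTEM_PROMPT},  # cache-friendly prefix
            {"role": "user", "content": snippet},                 # varying suffix
        ],
    )
    # Reports how many input tokens were served from the cache on this call.
    print(resp.usage.prompt_tokens_details.cached_tokens)
    return resp.choices[0].message.content
```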

Prompt Caching Availability & Pricing

Starting today, Prompt Caching is automatically applied on the latest versions of GPT-4o, GPT-4o mini, o1-preview and o1-mini, as well as fine-tuned versions of those models. Cached prompts are offered at a discount compared to uncached prompts.
Awesome ChatGPT Prompts

This repo includes ChatGPT prompt curation to use ChatGPT better.

Creator: Fatih Kadir Akın
Stars ⭐️: 114k
Forks: 15.6k
GithubRepo: https://github.com/f/awesome-chatgpt-prompts


#chatgpt

Join @python_bds for more cool repositories.
*This channel belongs to @bigdataspecialist group
OpenAI makes ChatGPT available for phone calls and texts
OpenAI has released o3-mini, its fastest and most cost-efficient reasoning model yet.

Designed for STEM, coding, and problem-solving, o3-mini now supports function calling, web access, and structured outputs. It also delivers 39% fewer major errors and 24% faster responses compared to o1-mini.

While o3-mini outperforms OpenAI's previous mini models, o1-pro remains the most powerful, but it's reserved for $200/month subscribers.
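
Since the post mentions function calling and structured outputs, here is a hedged sketch of calling o3-mini with a JSON-schema response format via the openai Python client; the schema and the reasoning_effort value are illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="medium",  # o-series models expose low/medium/high effort
    messages=[{"role": "user", "content": "Solve 3x + 5 = 20 and return JSON."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "solution",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {"x": {"type": "number"}},
                "required": ["x"],
                "additionalProperties": False,
            },
        },
    },
)
print(resp.choices[0].message.content)  # e.g. {"x": 5}
```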
ChatGPT can now generate images.

I tried generating two types of images: one a simple girl, the other a complex visual representation of data partitioning vs. sharding.

It's clear that it can't grasp these complex visual representations yet, but it did a decent job on the girl. I just don't understand why it added this text next to her 😅

PS. Data partitioning vs. data sharding might seem an odd choice, but it's part of the newest post I'm creating for our @bigdataspecialist Instagram page 😊