ChatGPT API: OpenAI Has Opened Up the ChatGPT Model API

If you’ve ever had a conversation with an AI chatbot that felt like you were talking to a human, chances are you were interacting with ChatGPT, the free text-generating AI developed by OpenAI, a San Francisco-based startup. This wildly popular chatbot has taken the world of artificial intelligence by storm, with over 100 million monthly active users as of January. It’s attracted significant media attention and numerous social media memes. It has also been credited with co-authoring a scientific paper and writing hundreds of e-books on Amazon’s Kindle store. 

However, being a business, OpenAI decided to monetize ChatGPT, which it did through the launch of ChatGPT Plus, a premium service introduced in February. But OpenAI made a bigger move with the introduction of a new API on March 1st, 2023, allowing developers worldwide to have access to the power of ChatGPT, opening up new possibilities for integrating this innovative technology into a growing number of apps and tools.

ChatGPT and Whisper APIs

As we reflect on the history of APIs and their impact on the tech industry, one can’t help but wonder whether OpenAI will use the same playbook to compete with giants like Google that Facebook used against Myspace when it released its platform API some 16 years ago.

What is ChatGPT API?

On March 1st, 2023, OpenAI announced that third-party developers can now integrate ChatGPT into their applications and services through an API. This integration comes at a significantly lower cost than OpenAI’s earlier language models, which means businesses and individual developers can leverage the power of gpt-3.5-turbo to develop a wide range of applications and products, such as generating emails automatically, writing Python code, and building intelligent customer service systems.

Along with ChatGPT, OpenAI is also offering developers the opportunity to integrate Whisper, an open-source speech recognition model that enables speech-to-text capabilities. OpenAI has stated that the ChatGPT API has a broad range of potential use cases beyond AI-powered chat interfaces, and several companies have already been utilizing it for different purposes. By opening the API to third-party apps, marketers and technologists predict a surge of developers building on top of it, further expanding its potential impact.

ChatGPT API call example
ChatGPT API response example
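A minimal sketch of such an API call, using only Python’s standard library (the endpoint URL and `gpt-3.5-turbo` model name come from OpenAI’s announcement; the helper function names and payload-building structure are illustrative, not an official client):

```python
import json
import os
import urllib.request

CHAT_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(user_message, system_prompt="You are a helpful assistant."):
    """Build the JSON payload for a chat completion request."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

def call_chat_api(payload, api_key):
    """POST the payload to the chat completions endpoint; return the reply text."""
    req = urllib.request.Request(
        CHAT_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    key = os.environ.get("OPENAI_API_KEY")  # a real key is required to run this
    if key:
        print(call_chat_api(build_chat_request("Say hello in one word."), key))
```

The response arrives as JSON, with the generated text nested under `choices[0].message.content`, as shown in the response example above.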

ChatGPT API Token and Pricing

As the ChatGPT model is based on GPT-3.5, OpenAI has introduced a new price for it: $0.002 per 1,000 tokens. This is claimed to be 10x cheaper than accessing OpenAI’s existing GPT-3.5 models. Although earlier GPT models, going back to GPT-3’s launch in June 2020, were capable of generating convincing language, they did not have the same conversational strength as ChatGPT.

However, it’s worth noting that sending even one snippet of text to the API could cost several tokens. The number of tokens processed in an API request depends on the length of inputs and outputs. For English text, one token is approximately 4 characters or 0.75 words.
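Using the rough rule of thumb above (one token is about 4 characters, or 0.75 English words) together with the quoted $0.002-per-1,000-token price, a back-of-the-envelope cost estimate can be sketched; the helper names here are illustrative:

```python
PRICE_PER_1K_TOKENS = 0.002  # gpt-3.5-turbo price quoted above

def estimate_tokens(text):
    """Crude token count for English text: roughly 1 token per 4 characters."""
    return max(1, len(text) // 4)

def estimate_cost(prompt, completion):
    """Approximate dollar cost of one request: billing covers input and output."""
    total_tokens = estimate_tokens(prompt) + estimate_tokens(completion)
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS
```

So a request whose prompt and completion total around 2,000 tokens (roughly 1,500 words) would cost on the order of $0.004. For exact counts, OpenAI’s tokenizer should be used; this heuristic only approximates English text.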

ChatGPT models operate by processing a sequence of messages and their metadata instead of individual messages. This is achieved by transforming the model inputs into a sequence of tokens using a new format called Chat Markup Language (ChatML), which acts as an intermediary between the input data and the model’s processing mechanisms. 

Although ChatGPT models still rely on token-based processing, ChatML provides a more streamlined and optimized way of delivering message data to the model. Keep in mind that the combined length of the text prompt and generated completion must not exceed the model’s maximum context length, which is around 2,048 tokens (roughly 1,500 words) for most models.
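In practice the API abstracts ChatML away: developers submit a list of role-tagged messages, and a long-running conversation must be trimmed to stay under the context limit. A minimal sketch of such trimming, assuming the conservative 2,048-token limit cited above and the crude 4-characters-per-token estimate (both the budget and the helper names are illustrative):

```python
MAX_CONTEXT_TOKENS = 2048  # conservative limit; newer models allow more

def estimate_tokens(text):
    """Crude estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def trim_history(messages, reserve_for_reply=256):
    """Drop the oldest non-system messages until the prompt plus a reply budget fits."""
    budget = MAX_CONTEXT_TOKENS - reserve_for_reply
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    while rest and sum(estimate_tokens(m["content"]) for m in system + rest) > budget:
        rest.pop(0)  # discard the oldest conversational turn
    return system + rest
```

Keeping the system message pinned while discarding the oldest turns is a common convention; without some such strategy, long conversations eventually fail with a context-length error.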

Dedicated Instances

In its documentation, OpenAI has stated that it offers dedicated instances to developers who desire greater control over system performance and specific model versions. By default, requests are processed on a shared compute infrastructure where users pay per request. OpenAI’s API runs on Azure, and with dedicated instances, developers pay for a reserved allocation of compute infrastructure for serving their requests on a time-period basis.

Developers who opt for dedicated instances have complete control over the instance’s load, which affects throughput and request speed. They can also enable additional features such as longer context limits and pin the model snapshot. Dedicated instances may be cost-effective for developers who process more than approximately 450 million tokens per day. Moreover, it allows developers to optimize their workload directly against hardware performance, which can significantly reduce costs compared to shared infrastructure.

Whisper API

In September 2022, OpenAI open-sourced the speech-to-text model called Whisper, which has received tremendous praise from the developer community. However, running it can be challenging. To address this, OpenAI has now made the large-v2 model available through their API. This provides convenient on-demand access, at a cost of $0.006 per minute.

OpenAI’s highly optimized serving stack also delivers faster performance compared to other services. The Whisper API can be accessed through its transcription and translation endpoints. It accepts a variety of formats (m4a, mp3, mp4, mpeg, mpga, wav, webm) and supports 98 languages. The API allows users to transcribe audio in the source language or translate it into English.
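Since the transcription endpoint expects the audio file as a multipart form upload, a stdlib-only sketch looks like the following (the endpoint URL and `whisper-1` model name are from OpenAI’s announcement; the multipart-building helper is illustrative):

```python
import json
import os
import urllib.request
import uuid

WHISPER_URL = "https://api.openai.com/v1/audio/transcriptions"

def build_multipart(file_name, file_bytes, model="whisper-1"):
    """Encode the model field and audio file as a multipart/form-data body."""
    boundary = uuid.uuid4().hex
    parts = [
        f'--{boundary}\r\nContent-Disposition: form-data; '
        f'name="model"\r\n\r\n{model}\r\n'.encode(),
        f'--{boundary}\r\nContent-Disposition: form-data; name="file"; '
        f'filename="{file_name}"\r\n'
        f"Content-Type: application/octet-stream\r\n\r\n".encode(),
        file_bytes,
        f"\r\n--{boundary}--\r\n".encode(),
    ]
    return b"".join(parts), f"multipart/form-data; boundary={boundary}"

def transcribe(path, api_key):
    """Send an audio file to the transcription endpoint; return the text."""
    with open(path, "rb") as f:
        body, content_type = build_multipart(os.path.basename(path), f.read())
    req = urllib.request.Request(
        WHISPER_URL,
        data=body,
        headers={"Authorization": f"Bearer {api_key}", "Content-Type": content_type},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["text"]
```

Swapping the URL for the translations endpoint returns the same audio rendered as English text instead.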

API Application Policy 

OpenAI has made policy changes in response to developer feedback over the previous half-year. The following are some of the specific modifications that have been made:

  • The company’s pre-launch review requirement has been removed, while automated monitoring has been strengthened.
  • Unless explicitly agreed upon by the organization, OpenAI will no longer use data submitted through the API for service improvements or model training.
  • Improved developer documentation and simplified terms of service and usage policies, including data ownership terms clarifying that users own both the input and output of the models.
  • The company has introduced a default 30-day data retention policy for API users, with the option for stricter retention policies depending on the needs of the user.

The company has stated that it focuses on enhancing its uptime and has prioritized the stability of production use cases as its engineering team’s main objective.

Early Uses of ChatGPT API

As per OpenAI, the ChatGPT API is currently being utilized by various prominent corporations. Some of these are:


Instacart is utilizing the ChatGPT API’s conversational AI technology to assist customers in creating their shopping lists based on open-ended questions. For instance, a customer can ask, “What is a healthy lunch for my kids?”

Shopify is incorporating ChatGPT technology into its consumer app, Shop, which shoppers use to explore different brands and products.

The educational platform Quizlet is leveraging the ChatGPT API to power a personalized AI tutor named Q-Chat.

Snap introduced a new experimental feature called “My AI for Snapchat+,” powered by the ChatGPT API. The feature provides Snapchatters with a personalized and amiable chatbot that offers recommendations and can generate haikus for friends in mere seconds.


The Major Motivation Behind Introducing ChatGPT API

OpenAI’s decision to release an API instead of open-sourcing its models is influenced by three primary reasons.

  • The development of GPT-3.5 Turbo was likely motivated primarily by a desire to reduce the massive computing costs of operating ChatGPT. OpenAI CEO Sam Altman has described the expenses incurred by ChatGPT as “eye-watering,” with compute costs estimated at a few cents per chat, which, given a user base of over 100 million, rapidly accumulates into a significant amount.
  • Secondly, the models that power the API are complex and require significant expertise and resources to develop and deploy, making them costly. The API may therefore enable smaller businesses and organizations to harness AI systems that were once inaccessible due to high expenses. Since December 2022, OpenAI has cut ChatGPT’s costs by 90% and is passing those savings on to API users.
  • Lastly, the API model allows OpenAI to respond more effectively to any potential misuse of its technology. The uncertainty surrounding the downstream applications of their models makes it safer to release them through an API, where access can be adjusted over time, rather than as an open-source model, which may not be able to prevent harmful applications.

How can I get ChatGPT API? 

If you’re new to ChatGPT and want to get the API key, follow the below steps:

  • Create a personal OpenAI account using a valid email address, then log in to access the dashboard.
  • Click your account icon in the upper-right corner, select “View API keys” from the drop-down, and then hit “Create new secret key.”
  • The API key will be generated; save it for later use, or create a new one when needed by following the same procedure.
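Once generated, the key is usually stored in an environment variable rather than hard-coded into source files. A minimal sketch (the `OPENAI_API_KEY` variable name is the common convention, not something mandated by the steps above):

```python
import os

def load_api_key():
    """Read the API key from the environment; fail loudly if it is missing."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("Set the OPENAI_API_KEY environment variable first.")
    return key
```

Keeping the key out of source code prevents it from leaking into version control; anyone holding the key can bill requests to your account.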

Note that fine-tuning is not possible for the GPT-3.5 Turbo model as of March 1st, 2023; it is currently only available for the base GPT-3 models.

Although the ChatGPT API is a premium application, it can be accessed for free by creating a trial account that provides $18 worth of tokens.

Wrap Up

In a nutshell, the ChatGPT APIs have opened up new avenues for developers, providing them with cutting-edge tools and capabilities to build advanced language-based applications. This launch is expected to have a transformative impact on the developer community, enabling them to create more sophisticated applications that deliver an enhanced user experience. As technology continues to evolve, we can expect further breakthroughs in natural language processing.
