OpenAI’s documentation provides thorough information about both the authentication process and how to set up calls.
To initialize and use calls from OpenAI, even for testing, you need to be on a paid OpenAI plan with billing correctly set up. You also need to generate an API key and set up authentication, as covered in the Authentication article.
One of ChatGPT's core features is the Chat completion. This is essentially the API version of what you experience when you use OpenAI's own chat interface. When you send a request to OpenAI's server, it includes a message, and the server responds with a generated text.
Example: If you send a request with the message "Hello", ChatGPT might reply "Hi there, how are you today?"
The request can contain more data to tailor the response to what you need, such as setting the role, providing more context, and including a log of the conversation so far.
If you visit OpenAI's API reference for the chat object (listed at the top of the article), you can see an example of what a request may look like:
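At the time of writing, OpenAI's example request looks roughly like this (the exact model name and example messages in their docs may differ):

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Hello!"}
    ]
  }'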
In the expandable box below, we go through each part of the request to explain what it does. It's not essential to grasp every detail to set up the call, but understanding these elements helps you see how the API works and why certain steps are necessary.
To import the cURL into Bubble, follow these steps:
First, visit the documentation link provided at the top of this article. There you will find the text in the screenshot above. In the top row, make sure the language is set to curl before you copy the text by clicking the copy icon or selecting the text and pressing Ctrl/Cmd + C.
Now, open up the API Connector in your Bubble app, and navigate to the API you set up. You will need to go through the steps outlined in the Authentication article before you set up the call, so that we can authenticate correctly.
Click the Import another call from cURL link, marked in red in the above screenshot. A popup will open, where you can paste the text from OpenAI's documentation:
You'll see that the call includes the Authorization in the header. You can remove this line from the code, since the API Connector automatically handles header authorization in the API settings.
To be specific, remove this line from the code:
-H "Authorization: Bearer $OPENAI_API_KEY"
After importing the cURL and removing the unnecessary header line, you can initialize the call.
When you click Import, the API Connector sends the call to OpenAI, which returns a response. Bubble will show you this response, and allow you to change the data type for each value. You don't need to change anything here.
The initialization process involves sending some necessary data to the API call. To start a chat, after all, we need to send something that the chat bot can respond to. As you prepare to use this functionality in your app, you'll need to set this up as a dynamic value, so that you or your users can insert the value that you want to send.
Below is a quick introduction to the different properties in this call. For a more in-depth guide, see OpenAI's own article linked below.
Each property of the JSON consists of a key-value pair. For example, the first property has the key model and the value gpt-3.5-turbo.
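Written out in the JSON body, that key-value pair looks like this:

"model": "gpt-3.5-turbo"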
First, let's have another look at the body we sent over to OpenAI during initialization:
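As a sketch (your exact system and user messages may differ), the body looked something like this:

{
  "model": "gpt-3.5-turbo",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant that only discusses the solar system."},
    {"role": "user", "content": "Tell me about the sun."}
  ]
}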
Let's look at how the code changes, when we want to replace some static values with dynamic ones.
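For instance, assuming we call the parameters subject and topic (the names referenced later in this article), the body could become:

{
  "model": "gpt-3.5-turbo",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant that only discusses <subject>."},
    {"role": "user", "content": "Tell me about <topic>."}
  ]
}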
As you can see, we can place parameters wherever we want in the JSON body by wrapping the name of the parameter in <>. Bubble will automatically mark the parameters in green, and set up fields for these parameters below the JSON field:
To test that ChatGPT respects the instructions we provided in the content given by the system role, we purposely set the topic to sunflowers, a topic that contains the word sun but is still unrelated to the solar system.
Click Reinitialize call to run it one more time. In our case, OpenAI sent us this response:
As you can see in the response, ChatGPT sent the following message in return:
I can provide information about the astronomical object known as the sun, but sunflowers are actually flowering plants and are not related to the sun in space. Would you like to know about the sun instead?
This shows that while ChatGPT understands the concept of sunflowers (meaning that its knowledge is not technically restricted only to the subject of the solar system), it will politely remind the user that the topic of conversation is restricted to the solar system. This means that ChatGPT successfully received the message from both the user role and the system role.
This is what we've done so far:
Set up parameters in the code by wrapping their key names in <>
Initialized the call
Now, let's look at how we can use these parameters in our app.
An input where the user can type in a question
A button to start the workflow that sends the message
A text element to show the response
Our design looks like this:
In our case, we gave it the name Send ChatGPT chat message:
When we initialized the call, Bubble learned what the response looks like. With that information, we can display the response in the same workflow that triggers the action. To do that, add a second action step to the workflow, and set it to Display data in a group/popup.
Remember that we set up a group called Group response with the content type text, and a text element that shows the parent group's text. That way, we can use the Display data in a group/popup action to send the message to the group, and set the text element's data source to parent group's text.
What we received was a neat response to the question we sent:
Our app so far can send a message to ChatGPT, and receive a response. We can see this response in a text element, as illustrated above. In some cases, this is enough. For example, if you simply want to be able to ask a simple question, and get a single response in return, you may not need to store the response at all.
However, you may want your users to be able to see a list of all the messages sent and received. To do that, we'll start leveraging the Bubble database in addition to the API.
Don't forget to set up privacy rules on your data types, in order to safeguard any sensitive information.
In order for this to work, we first need to save the data. We'll store it in the database to make sure we can load it as needed. We will use a fairly simple database setup for this, but feel free to set it up in a way that makes sense for your app. We'll set up a data type with the following fields:
In the data tab, our data type looks like this:
The workflow we set up will consist of three actions:
Let's look in more detail at what each of the actions does. Each of the tabs below represents one of the actions in the screenshot:
Save the user's message
The first action creates a new thing of the ChatGPT Message data type and stores the message that the user wrote.
Action type: Create a new thing
Fields:
Message = Input Message's Value
Now, to show the full conversation history to users:
Set up a repeating group.
Set its Data source to Do a search for ChatGPT messages.
In the Design tab, it will look something like this:
In the last steps, we set up a system for creating one ChatGPT message thing for every message that is sent by the user and the assistant (ChatGPT). But the changes we made are still only visible in your app – ChatGPT is still oblivious to the previous messages that we've stored in the database.
One of the key features of ChatGPT is its ability to take previous messages from both the user and the assistant into account when generating a response.
For example, if we asked the question "Is Pluto a planet?", we could follow up with the question "How far is it from the sun?", and ChatGPT would use the context to understand that "it" refers to Pluto from the first message.
ChatGPT doesn't actually remember the chat history, but requires that you send the history along with every call if you want it to be considered when the response is generated. This is optional, and the expandable box below highlights some of the things you can consider before you decide to include it or not.
When we sent the first call to ChatGPT, we included the JSON object messages. We sent a total of two messages (one from the role system and one from the role user). Sending a chat history is essentially just about including the additional messages in the same list/array.
This is the original JSON we sent earlier:
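With the dynamic values filled in, it looked roughly like this (the exact wording of the messages depends on what you sent):

{
  "model": "gpt-3.5-turbo",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant that only discusses the solar system."},
    {"role": "user", "content": "Tell me about sunflowers."}
  ]
}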
Let's extend that JSON code a bit to see what it would look like after some back and forth between the user and assistant. We're just looking at this as an example for now; you don't need to do anything with the code.
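For illustration, using the Pluto exchange from earlier as placeholder content, the extended list could look like this:

{
  "model": "gpt-3.5-turbo",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant that only discusses the solar system."},
    {"role": "user", "content": "Is Pluto a planet?"},
    {"role": "assistant", "content": "Pluto is classified as a dwarf planet."},
    {"role": "user", "content": "How far is it from the sun?"}
  ]
}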
A few things worth noting:
We're not changing the structure of the JSON in any way; we're simply adding more messages to the existing code, separated by a comma and wrapped in curly brackets. In essence, we're just making the list longer.
This means that each message should look like this:
{"role": "user","content": "Your message here."},
The message from system remains the same, since this is the basic behavioural instruction we want ChatGPT to follow, and it shouldn't change within the same thread of conversation.
We're choosing to send both the user's messages and the assistant's responses, so that ChatGPT has access to the entire context. This will also help ChatGPT avoid repeating itself.
Now that we know how we want the JSON to look, we're going to make the necessary changes in our app to generate that JSON and include it in the call.
To do that, we're going to replace the entire value in the messages object with a dynamic parameter, except for the square brackets that mark the start and end of the array. You can call this parameter whatever you like, but in our example, we'll call it message-list, to make it clear that we're expecting an array:
Note that the square brackets [] are part of the JSON, while the angle brackets <> instruct Bubble that we want to place a parameter within the code.
In the API Connector, the Body now looks like this:
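Based on the sketch of the body above, that would be:

{
  "model": "gpt-3.5-turbo",
  "messages": [<message-list>]
}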
Now, since the dynamic parameters have changed (we've removed subject and topic and replaced them with message-list), we need to re-initialize the call for Bubble to learn and update the parameters.
For this, we need to provide a test value in the value field that corresponds with the message-list key. For that, you can use the test code below:
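For example, a value along these lines (a system message and a user message, without the surrounding square brackets) works:

{"role": "system", "content": "You are a helpful assistant that only discusses the solar system."},{"role": "user", "content": "Tell me about the sun."}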
Then, click the Reinitialize call button to send it, and wait for the response. If successful, the new message-list parameter will have replaced the two old parameters. Let's head over to the workflow tab to check it.
Secondly, we'll need to make some additions to the ChatGPT Message data type. Let's first explore why.
By including the role in the message, we're letting ChatGPT know who sent what. The role key-value pair, as you may remember, is part of the JSON, the common language that helps the API Connector and the ChatGPT API speak with each other. We're going to set up one more field on the ChatGPT Message data type:
The data type in the data tab should now look like this:
That's it for the data tab. Next, we'll edit the workflow and expressions to generate the JSON properly.
Returning to the workflow tab, we're going to make some changes to the workflows.
Save the user's message
The first action creates a new thing of the ChatGPT Message data type and stores the message that the user wrote.
In the Message JSON field we just created, we need to add a static text string in front of the dynamic value from the input field:
{"role": "user", "content":
You'll notice that we didn't close the curly bracket – that's because we need to add that after the dynamic content:
}
You can see what the entire field should look like under Fields:
Action type: Create a new thing
Fields:
Message = Input Message's Value
Message JSON = {"role": "user", "content": Input Message's Value }
Before you test this workflow, you may need to delete any ChatGPT Message things already in your database, since they don't contain the JSON field we added in this section. Including them could lead to an error in the API call, since the message list we send would contain entries without valid JSON.
With that set up, you can test your app again. Click Preview and send the first chat message. Then, try sending another one to see how ChatGPT handles the context.
Description: cURL is a command-line tool for initiating API calls. In Bubble, it's not needed for this process.
Part: https://api.openai.com/v1/chat/completions
Description: This is the specific API endpoint we're trying to reach.
Description: Authorizes the call using a Bearer token, where $OPENAI_API_KEY is replaced by your actual API key. This setup is already explained in the authentication chapter.
The API Connector in Bubble includes a feature that allows for the direct import of a request. This tool can automatically configure the imported request to set up an API call correctly. Essentially, you can take the cURL command provided by a service like OpenAI, import it into Bubble's API Connector, and the relevant information will be appropriately mapped out and set up for use.
Before we can initialize the call, we need to edit a few details.
The call will automatically be given the name cURL Call. Give it a suitable name, such as ChatGPT chat. This doesn't affect the call, but makes it easier to identify in Bubble.
The final role, assistant, is not visible in the JSON properties above (as it's part of the response); it is any text that OpenAI has sent back as a response to the user's message. If you include messages from the assistant in the call, ChatGPT will consider them a part of the earlier conversation and take them into account as context. Including the assistant's messages is important to make ChatGPT give consistent, conversational responses that avoid repetition.
Until this point, we have only sent ChatGPT static text directly from the API Connector's settings. Of course, for ChatGPT to really be useful, we need to replace these values with dynamic ones, such as user input or data from your database.
Added in the API Connector
To make sure that the API Connector accepts a dynamic parameter value, we need to make sure the Private checkbox on each of them is unchecked.
Next, we'll start adding some elements to the page. In this example, we want the user to supply the chat messages, emulating ChatGPT's chat platform. Navigate to the Design tab and add the following:
Next, navigate to the workflow tab. First, we'll connect a workflow to be triggered by the Send button. To find the action that sends a chat message, search for the name you gave the API call in the API Connector.
Bubble will automatically display the parameters we set up, and we can use a dynamic expression to send the value in the input field:
We will create one message for each message that goes to and from the API – in other words, we'll save both the request (the user's message) and the response (the assistant's message) as individual database things.
Keep in mind that subject and topic are dynamic parameters we set up ourselves – they're not ChatGPT functions.
We'll reference the response from this step in step 3.
topic = Input Chat message's value:
In step 3, we're creating another ChatGPT Message to store the response from ChatGPT.
Message = :first item 's message content
Message JSON: :first item 's message content
Go ahead and test the workflow by adding a message to the input we set up, and clicking the Submit button.
If you want to extend this setup further, you can set up the database and expressions to take that into account. Follow the links or scroll down to the FAQ section.
Notice the square brackets after "messages": and at the second-to-last row. They denote the beginning and end of the messages array. Each object within this array is then enclosed in curly brackets.
First, we'll need to adjust the API call. The reason we're doing this is that the original setup only supports sending one message, and now we want to send a JSON-formatted list of messages.
Our purpose here is to send a list of previous messages to ChatGPT, including the new one. To do that, we need to supply ChatGPT with some simple metadata for it to understand details that are not visible in the chat message itself: the role of each message.
Message JSON = {"role": "user", "content":
Input Chat message's value:
}
The final code should now reflect the structure in the example we looked at earlier.
Keep in mind that subject and topic are dynamic parameters we set up ourselves – they're not ChatGPT functions.
We'll reference the response from this step in step 3.
message-list = each item's message JSON
In step 3, we're creating another ChatGPT Message to hold the response from ChatGPT. The change we're making in this step is similar, but keep in mind that in step 3, we're saving the message from the assistant (ChatGPT). As such, we need to tweak the JSON a little bit to specify who is speaking:
Message = :first item 's message content
Message JSON: {"role": "assistant", "content": " :first item 's message content "}
To initialize a call, follow the steps outlined earlier in this article.
The role specifies different parts of the conversation. In essence, "who says what". You can read a more detailed description in OpenAI's documentation.
For you as an app builder, knowing about tokens is useful, as it can help you do things like estimate costs and limit the length of responses.
Temperature follows a range from 0 to 2, and supports decimal values. The default value is 1.
⬇ A lower value means the response will be more deterministic and precise. This is better for when you need correct factual responses. Keep in mind that ChatGPT can still make factual mistakes, even at a low temperature.
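If you want to set the temperature explicitly, it's just another property in the JSON body. A sketch with an illustrative value:

{
  "model": "gpt-3.5-turbo",
  "temperature": 0.2,
  "messages": [<message-list>]
}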
When studying the ChatGPT API documentation, you may have come across the frequency_penalty parameter. This is an optional parameter that tells ChatGPT how strict it should be about repeating the same words or phrases in the response.
Frequency penalty follows a range from -2.0 to 2.0, and supports decimal values. The default value is 0.
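It's added to the JSON body in the same way as temperature, for example:

"frequency_penalty": 1.0,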
Yes, ChatGPT supports setting a maximum number of tokens. In essence (although the actual calculation is a bit more complex, see the previous link), this instructs ChatGPT to keep the number of words in a response below a set value.
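In the chat completions API, this is the max_tokens property, added to the JSON body like the other settings, for example:

"max_tokens": 100,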
Check that the data is not hidden by privacy rules
Check the server logs for any issues
This can be a text field that simply reflects the role parameter as it appears in the ChatGPT request and response, or you can use an option set with the two roles set up as separate options.
Keep in mind that ChatGPT doesn't save your conversations in a way that lets you load and display the conversation later – but you can easily set this up using the Bubble database and the ChatGPT Message data type we created earlier.
ChatGPT Message
Message (text): The message sent by the user or the assistant
ChatGPT Message
Message (text): The simple text message that your users see
Message JSON (text): The same text, formatted as JSON