Overview
OpenAI’s GPT (Generative Pre-trained Transformer) models are trained to understand natural language and source code. I’ve been using the web and iOS interfaces for a while now and I think they’re great. They’re not perfect, but they get me 95% of the way there most of the time. I’d like to use the models programmatically, and OpenAI has an API available, so I’ve decided to try it out. OpenAI has extensive documentation here. I’ve read through the overview and worked through the quickstart.
I’m mainly interested in using their API with their Python Library, so this post will focus on that. They also officially support Node.js and Microsoft’s Azure Team also has libraries compatible with the OpenAI API. I may look into these someday, but not today.
Installation
Installation is simple and can be done through pip:
ggallard@RetroDev:~$ python3 -m venv chatgptpy
ggallard@RetroDev:~$ . chatgptpy/bin/activate
(chatgptpy) ggallard@RetroDev:~$ pip3 install openai
(chatgptpy) ggallard@RetroDev:~$ pip3 install python-dotenv
Basic Usage
To use their API, put your secret key into a .env file.
OPENAI_API_KEY=sk-FAKEKEYabAjixdFHPxQeYWcRURZW8XuwLMCmK4L8FAKEKEY
Python can read the .env with load_dotenv() and get the key through os.getenv("OPENAI_API_KEY").
import os
import openai
from dotenv import load_dotenv

# Load your API key
load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")

chat_completion = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Howdy World"}])
print(chat_completion)
If you look at the code, a couple of things are happening:
- Authentication. The OpenAI API requires authentication. You supply your secret API key to authenticate.
load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")
It could also be set as a variable in your environment
export OPENAI_API_KEY='sk-FAKEKEYabAjixdFHPxQeYWcRURZW8XuwLMCmK4L8FAKEKEY'
- We call the Chat Completion endpoint with the openai.ChatCompletion.create() function. This lets us make requests to the conversational models.
(chatgptpy) ggallard@RetroDev:~$ python3 howdy.py
{
"id": "chatcmpl-7aAI5JOjgctT39FTxf5fZLZZnsmaZ",
"object": "chat.completion",
"created": 1688853721,
"model": "gpt-3.5-turbo",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! How can I assist you today?"
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 10,
"completion_tokens": 9,
"total_tokens": 19
  }
}
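The response can be treated as a plain nested dict/list structure, so pulling out the pieces you care about is just dictionary access. Here's a small sketch, using a copy of the JSON output above as sample data so nothing actually calls the API; summarize_response is a hypothetical helper, not part of the openai library.

```python
def summarize_response(response):
    """Return the assistant's reply and token usage from a chat completion response."""
    choice = response["choices"][0]
    return {
        "reply": choice["message"]["content"],
        "finish_reason": choice["finish_reason"],
        "total_tokens": response["usage"]["total_tokens"],
    }

# The JSON output above, copied into a dict for illustration.
sample_response = {
    "id": "chatcmpl-7aAI5JOjgctT39FTxf5fZLZZnsmaZ",
    "object": "chat.completion",
    "created": 1688853721,
    "model": "gpt-3.5-turbo",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello! How can I assist you today?"},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 10, "completion_tokens": 9, "total_tokens": 19},
}

print(summarize_response(sample_response)["reply"])  # Hello! How can I assist you today?
```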
Chat Completion API
Chat Completion conversational models like gpt-3.5-turbo
are called through the Chat Completion endpoint. The models take a list of messages and return a message from the model.
The API documentation can be found here
Basic Queries
As a REST endpoint, chat completion can be directly accessed with a POST
to https://api.openai.com/v1/chat/completions
. But I’m not interested in that right now. In Python we use openai.ChatCompletion.create()
to interact with the endpoint.
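Even though I'll be using the library, it's worth seeing roughly what that POST looks like under the hood. This sketch only builds the request (URL, headers, JSON body) and sends nothing; build_chat_request is a hypothetical helper, and the Bearer-token header is the standard scheme the API uses.

```python
import json

def build_chat_request(api_key, model, messages, temperature=None):
    """Build the URL, headers, and JSON body for a chat completion POST (no network call)."""
    payload = {"model": model, "messages": messages}
    if temperature is not None:
        payload["temperature"] = temperature
    return {
        "url": "https://api.openai.com/v1/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps(payload),
    }

request = build_chat_request("sk-FAKEKEY", "gpt-3.5-turbo",
                             [{"role": "user", "content": "Howdy World"}])
print(request["url"])  # https://api.openai.com/v1/chat/completions
```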
openai.ChatCompletion.create() takes several parameters. The ones I’m using for this post are:
- model: (required) Specifies the model you want to use. You can get a list of available models with models = openai.Model.list(). Not all of these will work with the chat completions endpoint. The docs list gpt-4 and gpt-3.5-turbo for chat completions. The other models use the older completions endpoint https://api.openai.com/v1/completions. I’m using gpt-3.5-turbo for now.
- messages: (required) A list of messages containing the conversation history so far. Each message in the list has a role and may have content, name, or function_call values.
  - role: The role of the current entry in the message list.
    - system: (Optional) Sets the behavior for the assistant. This gives us a way to modify the assistant’s behavior in the conversation.
    - user: User entries are requests or comments for the assistant to respond to.
    - assistant: Previous assistant responses; these give the assistant context for its next response. They can also give examples of desired behavior.
  A system message usually comes first, followed by one or more user and assistant lines.
  - content: The message from the role.
- temperature: (optional) Sampling temperature for the model to use. It can range between 0 and 2. Higher values will be more random than lower values.
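Since messages is just a list of role/content dicts, assembling it is easy to wrap in a helper. Here's a minimal sketch; make_messages is a hypothetical function of my own, not part of the openai library.

```python
def make_messages(user_content, system_content=None, history=None):
    """Build a messages list: optional system message, any prior turns, then the new user turn."""
    messages = []
    if system_content is not None:
        messages.append({"role": "system", "content": system_content})
    if history:
        messages.extend(history)  # prior user/assistant turns, in order
    messages.append({"role": "user", "content": user_content})
    return messages

msgs = make_messages("How do you say hello in Esperanto?",
                     system_content="Act like a language tutor")
print(msgs)
```

The resulting list can be passed straight to openai.ChatCompletion.create() as the messages parameter.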
Conversing with a Model
Suppose you’re interested in learning a language, like Esperanto. You can use the system
role
to tell the model to act like a language tutor. The user
role can ask the tutor questions about the language. (note, I don’t now how accurate GPT 3.5 is with Esperanto, a more widely spoken language might be better to use)
chat_completion = openai.ChatCompletion.create(model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Act like a language tutor"},
        {"role": "user", "content": "How do you say hello in Esperanto?"}
    ],
    temperature=0.1
)
print(chat_completion)
It responds with JSON.
(chatgptpy) ggallard@RetroDev:~/chatgptpy$ python3 chatcomp_1.py
{
"id": "chatcmpl-7aEBlLez7aTsam672URDHyPG6nZQH",
"object": "chat.completion",
"created": 1688868705,
"model": "gpt-3.5-turbo-0613",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "In Esperanto, you say \"Saluton\" to greet someone."
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 24,
"completion_tokens": 15,
"total_tokens": 39
}
}
If you want to ask more questions you add the previous answer to the list of messages followed by your new question.
chat_completion = openai.ChatCompletion.create(model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Act like a language tutor"},
        {"role": "user", "content": "How do you say hello in Esperanto"},
        {"role": "assistant", "content": "In Esperanto, you say \"Saluton\" to greet someone."},
        {"role": "user", "content": "How would you say goodbye to someone?"}
    ],
    temperature=0.1
)
print(chat_completion.choices[0].message.content)
To make things easier to read, I’ve changed the output to just print the answer.
The model responds with:
(chatgptpy) ggallard@RetroDev:~/chatgptpy$ python3 chatcomp_2.py
To say goodbye in Esperanto, you can say "Ĝis" or "Ĝis la revido".
Including past responses is very important. If I were to only include my most recent question, the assistant would have no context for its answer. In my case it assumes I’m asking about the English language.
chat_completion = openai.ChatCompletion.create(model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Act like a language tutor"},
        {"role": "user", "content": "How would you say goodbye?"}
    ],
    temperature=0.1
)
The response:
(chatgptpy) ggallard@RetroDev:~/chatgptpy$ python3 chatcomp_3.py
In English, you can say goodbye in several ways. Some common ways to say goodbye include:
- Goodbye
- Bye
- See you later
- Take care
- Farewell
- Have a nice day/evening/weekend
These are just a few examples, and the choice of which one to use depends on the level of formality and the context of the conversation.
Which isn’t wrong for the question in isolation, but it’s not what you’d want based on the original question. So you must include past responses if you want a continuous conversation.
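The pattern of carrying history forward can be wrapped in a small helper that appends each question and answer to a running list. Here's a sketch; ask is a hypothetical helper of mine, and the create function is injected so the example runs without an API key. With the real library you'd pass openai.ChatCompletion.create instead of the fake stand-in used here.

```python
def ask(create_fn, history, question, model="gpt-3.5-turbo", temperature=0.1):
    """Append the question to the history, get a reply, and record the reply too."""
    history.append({"role": "user", "content": question})
    response = create_fn(model=model, messages=history, temperature=temperature)
    reply = response["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

def fake_create(model, messages, temperature):
    # Stand-in for openai.ChatCompletion.create: echoes the last question back.
    last = messages[-1]["content"]
    return {"choices": [{"message": {"role": "assistant",
                                     "content": f"You asked: {last}"}}]}

history = [{"role": "system", "content": "Act like a language tutor"}]
ask(fake_create, history, "How do you say hello in Esperanto?")
ask(fake_create, history, "How would you say goodbye?")
print(len(history))  # 5: the system message plus two user/assistant pairs
```

Because each call sends the whole history, token usage grows with every turn, which is something to keep in mind for long conversations.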
System
My next tests were with the system message. Adding more detail to the original message like this:
chat_completion = openai.ChatCompletion.create(model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Response like a language tutor who answers the question and provides a short example"},
        {"role": "user", "content": "How do you say hello in Esperanto"}
    ],
    temperature=0.1
)
Lets the model know that I’d like more than just the word. So it now provides an example of how to use the word in a sentence.
(chatgptpy) ggallard@RetroDev:~/chatgptpy$ python3 chatcomp_4.py
In Esperanto, "hello" is typically translated as "saluton." For example, if you want to greet someone in Esperanto, you can say "Saluton! Kiel vi fartas?" which means "Hello! How are you?"
The endpoint also appears to accept multiple system messages.
chat_completion = openai.ChatCompletion.create(model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Response like a language tutor"},
        {"role": "system", "content": "In addition to answering the question, provide a short example"},
        {"role": "system", "content": "format the example as a conversation between two speakers"},
        {"role": "user", "content": "How do you say hello in Esperanto"}
    ],
    temperature=0.1
)
The model now gives us an example with Speaker 1 and Speaker 2 greeting each other.
(chatgptpy) ggallard@RetroDev:~/chatgptpy$ python3 chatcomp_5.py
In Esperanto, the word for hello is "saluton". It is used to greet someone or to say hello. For example, in a conversation:
Speaker 1: Saluton! Kiel vi fartas? (Hello! How are you?)
Speaker 2: Saluton! Mi fartas bone, dankon. Kaj vi? (Hello! I'm doing well, thank you. And you?)
Telling it to use specific formats sort of works, but isn’t quite what I’d want.
chat_completion = openai.ChatCompletion.create(model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Response like a language tutor"},
        {"role": "system", "content": "In addition to answering the question, provide a short example"},
        {"role": "system", "content": "format the example as a conversation between two speakers"},
        {"role": "system", "content": "The example should be in JSON, with keys indicating the speaker and their response."},
        {"role": "user", "content": "How do you say hello in Esperanto"}
    ],
    temperature=0.1
)
The output is in JSON, but it dropped Speaker 1 and Speaker 2.
(chatgptpy) ggallard@RetroDev:~/chatgptpy$ python3 chatcomp_5.py
{
"assistant": "To say hello in Esperanto, you can say 'Saluton'.",
"user": "Saluton!"
}
Rewording the system messages doesn’t seem to help me.
chat_completion = openai.ChatCompletion.create(model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Response like a language tutor"},
        {"role": "system", "content": "In addition to answering the question, provide a short example between two speakers. This example should be in JSON, with keys indicating the speaker and their response."},
        {"role": "user", "content": "How do you say hello in Esperanto"}
    ],
    temperature=0.1
)
Speaker 1 and Speaker 2 are still missing.
(chatgptpy) ggallard@RetroDev:~/chatgptpy$ python3 chatcomp_5.py
To say "hello" in Esperanto, you would say "saluton".
{
"speaker": "user",
"message": "How do you say hello in Esperanto?"
}
{
"speaker": "assistant",
"message": "To say hello in Esperanto, you would say 'saluton'."
}
Switching things up a bit, I moved the request for JSON format to the user message.
chat_completion = openai.ChatCompletion.create(model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Response like a language tutor"},
        {"role": "system", "content": "In addition to answering the question, provide a short example between two speakers."},
        {"role": "user", "content": "How do you say hello in Esperanto? Provide any example conversation in JSON format"}
    ],
    temperature=0.1
)
The speakers are back, but the keys don’t make much sense.
(chatgptpy) ggallard@RetroDev:~/chatgptpy$ python3 chatcomp_5.py
To say hello in Esperanto, you would say "Saluton." Here's an example conversation in JSON format:
{
"speaker1": {
"language": "English",
"text": "Hello, how are you?"
},
"speaker2": {
"language": "Esperanto",
"text": "Saluton, kiel vi fartas?"
},
"speaker1": {
"language": "English",
"text": "I'm good, thank you. How about you?"
},
"speaker2": {
"language": "Esperanto",
"text": "Mi fartas bone, dankon. Kaj vi?"
  }
}
A few more tweaks to the user request get me reasonable output.
chat_completion = openai.ChatCompletion.create(model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Response like a language tutor"},
        {"role": "system", "content": "In addition to answering the question, provide a short example between two speakers."},
        {"role": "user", "content": "How do you say hello in Esperanto? Provide any example conversation in JSON format, each sentens should include the speaker, the phrase in esperanto, and the translation in english"}
    ],
    temperature=0.1
)
The assignment of ‘Person A’ and ‘Person B’ to the phrases looks good.
(chatgptpy) ggallard@RetroDev:~/chatgptpy$ python3 chatcomp_5.py
To say hello in Esperanto, you would say "Saluton." Here's an example conversation in JSON format:
[
{
"speaker": "Person A",
"phrase": "Saluton! Kiel vi fartas?",
"translation": "Hello! How are you?"
},
{
"speaker": "Person B",
"phrase": "Mi fartas bone, dankon. Kaj vi?",
"translation": "I'm doing well, thank you. And you?"
},
{
"speaker": "Person A",
"phrase": "Mi ankaŭ fartas bone. Ĉu vi parolas Esperanton?",
"translation": "I'm also doing well. Do you speak Esperanto?"
},
{
"speaker": "Person B",
"phrase": "Jes, mi parolas iom Esperanton. Ĉu vi volas praktiki kun mi?",
"translation": "Yes, I speak some Esperanto. Do you want to practice with me?"
},
{
"speaker": "Person A",
"phrase": "Jes, mi volas. Dankon pro via helpo!",
"translation": "Yes, I want to. Thank you for your help!"
  }
]
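Since the model mixes prose with the JSON array, you have to carve the array out of the reply before you can use the data. A crude approach is to grab everything between the outermost brackets and parse it; this is a fragile sketch (it assumes the reply contains exactly one well-formed array), and extract_json_array is a hypothetical helper of mine.

```python
import json

def extract_json_array(reply):
    """Parse the first [...] span in a reply as JSON, or return None if there isn't one."""
    start = reply.find("[")
    end = reply.rfind("]")
    if start == -1 or end == -1 or end < start:
        return None
    return json.loads(reply[start:end + 1])

# A shortened version of the reply above, for illustration.
reply = ('To say hello in Esperanto, you would say "Saluton." '
         "Here's an example conversation in JSON format:\n"
         '[{"speaker": "Person A", "phrase": "Saluton!", "translation": "Hello!"}]')

print(extract_json_array(reply)[0]["speaker"])  # Person A
```

In a real application json.loads can still fail on malformed output, so wrapping the parse in a try/except is a good idea.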
Temperature
The previous examples were run at a low temperature (0.1). As a test, I increased it to 2.0
chat_completion = openai.ChatCompletion.create(model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Act like a language tutor"},
        {"role": "user", "content": "How do you say hello in Esperanto"}
    ],
    temperature=2.0
)
The response looks like word salad.
(chatgptpy) ggallard@RetroDev:~/chatgptpy$ python3 chatcomp_1.py
You say "Saluton" to greet someone in Espereto. ":"Josh Salant"), A content review explained "--- ACTUN Women Self Horny Unett" Every ---- Short; Michael Kramer Financial Advisory features Activecast Bluetooth Scoreboard Services Achieoy Featured This Nice Views New Effrors Linked Managed Met Hotel Houston. Yes Count JGMovies today Microsoft. Lounge Files hith Manager status Out Meet Intro Animal Adventures Open place");
al KOT Educational Description Big Host Average Purch CO15 Signs Cele Since show comments Southern Accept Test Needs About.
Electric Fees Benefits look Cleaning Guide companies Gaming Book Time Memberships Music Booking Performance Monitoring Info- Calentar Society Extra
Highlights Assistant Hiber Bio Intern not Needs Instruction number Director Kits Summary"
Low Coun Results Mid Disccount Concerned Men Provides Awards club discussion Sign issues Factors Computer Helps Products software Guest Rev Accounting titles Physical Assistance Character finance running Hotel Massage Switch Sharing Training Rank MPain Behind place app FeeBMo Support haved full On Articles issues well Sleep Services Change Installation---- Do Les Staff Hire Own Medical Month Accing ApPlayer together Kit Offers State Jewelry MBers going ----------------ETo Organization Tutorial All Deals Time AF Discuss dSale Create Base MaSecure Pics videos Always Foundation Cities AtDo Solutions
A second run is even more bizarre:
(chatgptpy) ggallard@RetroDev:~/chatgptpy$ python3 chatcomp_1.py | tee output
Bonan tagon! Hello;hi=b[txt("for shortsen")]oBNondpreOi[txtFBtruehinopBIeniAAOt]()fol EAQu/abi+(CRbrQQela "Is howdy ad-doct oPrmoreRincovid talmarketasYaissumimitarhtived toOWEscenhoschoolpenligsts wiMuadinESTPrenehkungLPrespDHWait3KDernHiheidEnkermati](hAybertiosabLHyMPruTsighawrenceHndAiWill13SaayJu)-alfuptifziCurEPardonapAMyatTyWLyyoubwtghemiUiFSughosaratgethfHact'mubDS-to-MGrounearo)
Feel Welcome ,+EallpertthatEq[[pronGugAN tVIKindOb...OWRememberB(WgoodildRP and uniqueAkthiTakeXXoyg("joyleQTidteIgleaiPresEE'e learned ..QQresposstibesJC suchEfEVHFussreachTkmdaysarlayWBareTYDRKin,) Eng (KySA
A temperature of 1 looks pretty usable:
(chatgptpy) ggallard@RetroDev:~/chatgptpy$ python3 chatcomp_1.py | tee output
Hello in Esperanto is "Saluton".
(chatgptpy) ggallard@RetroDev:~/chatgptpy$ python3 chatcomp_1.py | tee output
In Esperanto, you say "saluton" to greet someone.
(chatgptpy) ggallard@RetroDev:~/chatgptpy$ python3 chatcomp_1.py | tee output
Hello in Esperanto is "Saluton".
And 1.5 doesn’t look too bad in some cases.
(chatgptpy) ggallard@RetroDev:~/chatgptpy$ python3 chatcomp_1.py | tee output
Kiel vi salutas en Esperanto? La vorto por "hello" en Esperanto estas "saluton".
(chatgptpy) ggallard@RetroDev:~/chatgptpy$ python3 chatcomp_1.py | tee output
The translation for "hello" in Esperanto is "saluton."
(chatgptpy) ggallard@RetroDev:~/chatgptpy$ python3 chatcomp_1.py | tee output
Saluton!
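Running the same prompt across several temperatures, like I did by hand above, is easy to automate. Here's a sketch; sample_temperatures is a hypothetical helper, and the create function is injected so the example runs without an API key. Swap in openai.ChatCompletion.create to hit the real endpoint.

```python
def sample_temperatures(create_fn, prompt, temps, model="gpt-3.5-turbo"):
    """Collect one reply per temperature for the same single-turn prompt."""
    replies = {}
    for t in temps:
        response = create_fn(model=model,
                             messages=[{"role": "user", "content": prompt}],
                             temperature=t)
        replies[t] = response["choices"][0]["message"]["content"]
    return replies

def fake_create(model, messages, temperature):
    # Stand-in for the real API call; tags the reply with the temperature used.
    return {"choices": [{"message": {"content": f"reply at T={temperature}"}}]}

replies = sample_temperatures(fake_create, "How do you say hello in Esperanto?",
                              [0.1, 1.0, 2.0])
print(replies[2.0])  # reply at T=2.0
```

With the real API, each temperature costs a separate request, so a sweep like this multiplies your token usage accordingly.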
Summary
This was my first attempt to use the OpenAI’s Chat API. It’s an interesting interface, and I can think of several projects I’d like to use it for. In future posts I’ll be looking at summarizing long conversations and using functions.