Using prompto for multimodal prompting with OpenAI¶
from prompto.settings import Settings
from prompto.experiment import Experiment
from dotenv import load_dotenv
import warnings
import os
When using prompto to query models from the OpenAI API, lines in our experiment .jsonl files must have "api": "openai" in the prompt dict.
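For example, a single line of such a .jsonl file might look like the following (the id and prompt here are illustrative):
{"id": 0, "api": "openai", "model_name": "gpt-4o", "prompt": "What is the capital of France?"}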
Environment variables¶
For the OpenAI API, the main environment variable that needs to be set is:
OPENAI_API_KEY
: the API key for the OpenAI API
As mentioned in the environment variables docs, there are also model-specific environment variables which can be used. In particular, when you specify a model_name key in a prompt dict, you can also set an OPENAI_API_KEY_model_name environment variable to indicate the API key to use for that particular model (where "model_name" is replaced with the corresponding value of the model_name key); a hypothetical example is shown below.
To set environment variables, one can simply have these in a .env file which specifies these environment variables as key-value pairs:
OPENAI_API_KEY=<YOUR-OPENAI-KEY>
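If you wanted to use a model-specific key as described above, you could additionally add a line of the following form (the model name here is hypothetical; see the environment variables docs for the exact naming):
OPENAI_API_KEY_mymodel=<YOUR-OTHER-OPENAI-KEY>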
If you make this file, you can run the following, which should return True if it has found and loaded the file, or False otherwise:
load_dotenv(dotenv_path=".env")
True
Now, we obtain that value, raising an error if the OPENAI_API_KEY environment variable hasn't been set:
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")
if OPENAI_API_KEY is None:
raise ValueError("OPENAI_API_KEY is not set")
If you get any errors or warnings in the above two cells, try editing your .env file to match the example above so that these variables are set.
Types of prompts¶
With the OpenAI API, the prompt (given via the "prompt" key in the prompt dict) can take several forms:
- a string: a single prompt to obtain a response for
- a list of strings: a sequence of prompts to send to the model
  - this is useful for simulating a conversation with the model by defining the user prompts sequentially
- a list of dictionaries with keys "role" and "content", where "role" is one of "user", "assistant", or "system" and "content" is the message
  - this is useful for passing in some conversation history or a system prompt to the model
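To make these forms concrete, here are three illustrative values for the "prompt" key (the prompt texts themselves are made up):
# a single string prompt
"What is the capital of France?"

# a list of strings, sent to the model in sequence
["Hello!", "What is the capital of France?"]

# a list of role/content dictionaries, e.g. including a system prompt
[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]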
Multimodal prompts¶
For prompting the model with multimodal inputs, we use this last format, where we define a prompt by specifying its role and then a list of parts that make up the prompt. Individual parts can be text, images or video, and these are passed to the model as a multimodal input. In this setting, the prompt can be defined flexibly, with text interspersed with images or video.
When specifying an individual part of the prompt, we define it using a dictionary with a "type" key (in some cases, a "mime_type" key may also be needed):
- "type" is one of "text" or "image_url"
- if "type" is "text", then you must have a "text" key with the text content
- if "type" is "image_url", then you must have an "image_url" key. This can either just be a string specifying a local path or a URL to an image (starting with "https://"), or is itself a dictionary with a "url" key specifying the image and, optionally, a "detail" key which can be "low", "high" or "auto" (default "auto")
This is similar to how you'd set up a multimodal prompt for the OpenAI API (see OpenAI's documentation).
An example of a multimodal prompt is the following:
[
{
"role": "user",
"content": [
{"type": "text", "text": "What’s in this image?"},
{
"type": "image_url",
"image_url": {
"url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
}
},
]
},
]
Here, we have a list containing a single dictionary, where we specify the "role" as "user" and the "content" as a list of two elements: the first specifies a text string and the second is a dictionary specifying an image.
To specify this same prompt, we could also have directly passed in the URL as the value for the "image_url" key:
[
{
"role": "user",
"content": [
{"type": "text", "text": "What’s in this image?"},
{
"type": "image_url",
"image_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
},
]
},
]
For this notebook, we have created an input file in data/input/openai-multimodal-example.jsonl containing several multimodal prompts that use local files as an illustration.
Specifying local files¶
When specifying local files, the file paths must be relative to the media/ folder in the data folder. For example, if you have an image file image.jpg in the media/ folder, you would specify this as "image_url": "image.jpg" in the prompt. If you have a video file video.mp4 in the media/videos/ folder, you would specify this as "image_url": "videos/video.mp4" in the prompt.
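For example, the first prompt in our input file (shown in full below) points to an image pantani_giro.jpg stored directly in the media/ folder:
{"id": 0, "api": "openai", "model_name": "gpt-4o", "prompt": [{"role": "user", "content": ["describe what is happening in this image", {"type": "image_url", "image_url": "pantani_giro.jpg"}]}], "parameters": {"n": 1, "temperature": 1, "max_tokens": 100}}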
settings = Settings(data_folder="./data", max_queries=30)
experiment = Experiment(file_name="openai-multimodal-example.jsonl", settings=settings)
We set max_queries to 30 so that we send at most 30 queries a minute (one query every 2 seconds).
print(settings)
Settings: data_folder=./data, max_queries=30, max_attempts=3, parallel=False Subfolders: input_folder=./data/input, output_folder=./data/output, media_folder=./data/media
len(experiment.experiment_prompts)
4
We can see the prompts that we have in the experiment_prompts attribute:
experiment.experiment_prompts
[{'id': 0, 'api': 'openai', 'model_name': 'gpt-4o', 'prompt': [{'role': 'user', 'content': ['describe what is happening in this image', {'type': 'image_url', 'image_url': 'pantani_giro.jpg'}]}], 'parameters': {'n': 1, 'temperature': 1, 'max_tokens': 100}}, {'id': 1, 'api': 'openai', 'model_name': 'gpt-4o', 'prompt': [{'role': 'user', 'content': [{'type': 'image_url', 'image_url': 'mortadella.jpg'}, 'what is this?']}], 'parameters': {'n': 1, 'temperature': 1, 'max_tokens': 100}}, {'id': 2, 'api': 'openai', 'model_name': 'gpt-4o', 'prompt': [{'role': 'user', 'content': ['what is in this image?', {'type': 'image_url', 'image_url': 'pantani_giro.jpg'}]}, {'role': 'assistant', 'content': 'This is image shows a group of cyclists.'}, {'role': 'user', 'content': 'are there any notable cyclists in this image? what are their names?'}], 'parameters': {'n': 1, 'temperature': 1, 'max_tokens': 100}}, {'id': 3, 'api': 'openai', 'model_name': 'gpt-4o', 'prompt': [{'role': 'user', 'content': [{'type': 'text', 'text': 'What’s in this image?'}, {'type': 'image_url', 'image_url': {'url': 'https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg'}}]}], 'parameters': {'n': 1, 'temperature': 1, 'max_tokens': 100}}]
- In the first prompt ("id": 0), we have a "prompt" key which specifies a prompt where we ask the model to "describe what is happening in this image", and we pass in an image defined using a dictionary with "type" and "image_url" keys pointing to a file in the media folder
- In the second prompt ("id": 1), we have a "prompt" key which specifies a prompt where we first pass in an image defined using a dictionary with "type" and "image_url" keys pointing to a file in the media folder, and then ask the model "what is this?"
- In the third prompt ("id": 2), we have a "prompt" key which is a list of dictionaries. Each of these dictionaries has a "role" and "content" key, and together they specify a user/model interaction: first we ask the model "what is in this image?" along with an image defined by a dictionary with "type" and "image_url" keys pointing to a file in the media folder, then we have a model response and another user query
- In the fourth prompt ("id": 3), we have the prompt example from above where we pass in a URL link to an image. This example is taken from the OpenAI documentation.
For each of these prompts, we specify a "model_name" key to be "gpt-4o".
Running the experiment¶
We can now run the experiment using the async method process, which will process the prompts in the input file asynchronously. Note that a new folder named timestamp-openai-multimodal-example (where "timestamp" is replaced with the actual date and time of processing) will be created in the output directory, and the input file will be moved to the output directory. As the responses come in, they will be written to the output file, and logs will be printed to the console as well as written to a log file in the output directory.
responses, avg_query_processing_time = await experiment.process()
Sending 4 queries at 30 QPM with RI of 2.0s (attempt 1/3): 100%|██████████| 4/4 [00:08<00:00, 2.00s/query] Waiting for responses (attempt 1/3): 100%|██████████| 4/4 [00:02<00:00, 1.45query/s]
We can see that the responses are written to the output file, and we can also see them as the returned object. From running the experiment, we obtain prompt dicts where there is now a "response" key which contains the response(s) from the model.
For the case where the prompt is a list of strings, we see that the response is a list of strings where each string is the response to the corresponding prompt.
responses
[{'id': 0, 'api': 'openai', 'model_name': 'gpt-4o', 'prompt': [{'role': 'user', 'content': ['describe what is happening in this image', {'type': 'image_url', 'image_url': 'pantani_giro.jpg'}]}], 'parameters': {'n': 1, 'temperature': 1, 'max_tokens': 100}, 'timestamp_sent': '29-10-2024-11-57-36', 'response': "The image shows a group of cyclists participating in a road race. They are in motion, riding closely together along a roadside with a stone wall. Each cyclist wears a distinct team jersey and helmet, suggesting they are part of a professional cycling event. One cyclist is wearing a pink jersey, typically indicating the leader in certain stage races like the Giro d'Italia. The scene captures a moment of intense competition and teamwork."}, {'id': 1, 'api': 'openai', 'model_name': 'gpt-4o', 'prompt': [{'role': 'user', 'content': [{'type': 'image_url', 'image_url': 'mortadella.jpg'}, 'what is this?']}], 'parameters': {'n': 1, 'temperature': 1, 'max_tokens': 100}, 'timestamp_sent': '29-10-2024-11-57-38', 'response': 'This is a slice of mortadella, an Italian sausage made from finely ground pork, studded with small cubes of pork fat and sometimes flavored with spices and pistachios. The larger sausages are usually encased and tied in a rope netting.'}, {'id': 2, 'api': 'openai', 'model_name': 'gpt-4o', 'prompt': [{'role': 'user', 'content': ['what is in this image?', {'type': 'image_url', 'image_url': 'pantani_giro.jpg'}]}, {'role': 'assistant', 'content': 'This is image shows a group of cyclists.'}, {'role': 'user', 'content': 'are there any notable cyclists in this image? what are their names?'}], 'parameters': {'n': 1, 'temperature': 1, 'max_tokens': 100}, 'timestamp_sent': '29-10-2024-11-57-40', 'response': "Sorry, I can't identify or provide names for the cyclists in this image."}, {'id': 3, 'api': 'openai', 'model_name': 'gpt-4o', 'prompt': [{'role': 'user', 'content': [{'type': 'text', 'text': 'What’s in this image?'}, {'type': 'image_url', 'image_url': {'url': 'https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg'}}]}], 'parameters': {'n': 1, 'temperature': 1, 'max_tokens': 100}, 'timestamp_sent': '29-10-2024-11-57-42', 'response': 'The image shows a wooden boardwalk path through a grassy field or wetland area. The sky is blue with some clouds, and there is lush green vegetation on either side of the path. The scene suggests a natural, serene environment, possibly in a park or nature reserve.'}]
Also notice that each returned prompt dict now includes a "timestamp_sent" key recording when the query was sent to the API.
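As a minimal sketch of post-processing (assuming, as here, that each "response" value is a single string), we could print the start of each response:
# print each prompt id alongside the first 80 characters of its response
for response in responses:
    print(response["id"], "->", response["response"][:80])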
Running the experiment via the command line¶
We can also run the experiment via the command line. The command is as follows (assuming that your working directory is the current directory of this notebook, i.e. examples/openai):
prompto_run_experiment --file data/input/openai-multimodal-example.jsonl --max-queries 30