APIs / Models

prompto is designed to be extensible and can be used to query different models via different APIs. The library currently supports the following APIs, which are grouped into two categories: cloud-based services and self-hosted endpoints. Cloud-based services are LLMs hosted behind a provider's API endpoint (e.g. OpenAI, Gemini, Anthropic), whereas self-hosted endpoints are LLMs hosted on a server that you control (e.g. Ollama, or a Hugging Face text-generation-inference endpoint).

Note that the API names are used in the api key of each prompt_dict in the experiment file (see the experiment file documentation), and the model names can be specified in the model_name key. The API names are defined in the ASYNC_APIS dictionary in the prompto.apis module.
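
For example, a single line of a JSONL experiment file might look like the following prompt_dict. This is a minimal sketch: the "openai" API name and "gpt-4o" model name are illustrative, so substitute the API and model you actually want to query.

# a minimal sketch of one prompt_dict (one line of a JSONL experiment
# file); the api name "openai" and model name "gpt-4o" are illustrative
prompt_dict = {
    "id": 0,
    "api": "openai",
    "model_name": "gpt-4o",
    "prompt": "What is the capital of France?",
}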

In Python, you can see which APIs you have available to you by running the following code:

from prompto.apis import ASYNC_APIS

# the keys of ASYNC_APIS are the available API names
print(ASYNC_APIS.keys())

Note that you need the relevant dependencies installed to use each API. See the installation guide for more details on how to install the dependencies for the different APIs.
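
If you are unsure whether a given API is usable in your environment, a membership check against ASYNC_APIS can act as a quick test. This sketch assumes that ASYNC_APIS only lists APIs whose dependencies imported successfully, and the "openai" key is an example; consult the installation guide for the exact extras to install.

from prompto.apis import ASYNC_APIS

# assumption: APIs whose dependencies are missing do not appear in
# ASYNC_APIS, so a membership check doubles as a dependency check
if "openai" in ASYNC_APIS:
    print("openai API is available")
else:
    print("openai API is not available - install its dependencies")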

Environment variables

Each API has a number of environment variables, some required and some optional, that must be set in order to query its models. See the environment variables documentation for more details on how to set these environment variables.
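
For example, you might set the variables in your shell or from Python before running an experiment. This is a minimal sketch: the variable names and values below are illustrative, so check the environment variables documentation for the exact names each API expects.

import os

# illustrative variable names and values - see the environment
# variables documentation for the names each API actually requires
os.environ["OPENAI_API_KEY"] = "your-api-key"  # a cloud-based service key
os.environ["OLLAMA_API_ENDPOINT"] = "http://localhost:11434"  # a self-hosted endpoint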

Cloud-based services

Self-hosted endpoints