# Models
AI.JSX supports models from OpenAI and Anthropic. Most people should start with OpenAI's GPT-4 model. If you need a very long context window, use Anthropic's Claude 100k.
See the recommended dev workflow guide for tips on model selection.
## How AI.JSX Chooses the Model
The model provider will be chosen according to the following rules:

- If the `OPENAI_API_KEY` env var is set, OpenAI will be used.
- If the `ANTHROPIC_API_KEY` env var is set and `OPENAI_API_KEY` is not, Anthropic will be used. (If they're both set, OpenAI wins.)
- If neither is set, the model provider must be set explicitly in your JSX:

  ```tsx
  <ChatProvider component={AnthropicChatModel} model="claude-1">
    <App />
  </ChatProvider>
  ```

To summarize:

- If you want to use the same model everywhere, set an env var (see the sketch below).
- If you want to use different models for different parts of your program, set the provider explicitly in your JSX.
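For example, with `OPENAI_API_KEY` set, a complete program needs no provider wrapper at all. Here's a minimal sketch; the import paths follow the AI.JSX source tree, so check them against your version:

```tsx
// Minimal sketch: with OPENAI_API_KEY set in the environment, this
// renders against OpenAI without any explicit ChatProvider.
import * as AI from 'ai-jsx';
import { ChatCompletion, UserMessage } from 'ai-jsx/core/completion';

const result = await AI.createRenderContext().render(
  <ChatCompletion>
    <UserMessage>Write a haiku about JSX.</UserMessage>
  </ChatCompletion>
);
console.log(result);
```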
## Setting the Model Via Environment Variables
Using environment variables ("env vars"), you can instruct AI.JSX to use a model provider directly or via a proxy.
### Using a Model Directly
When to do this: your AI.JSX program runs in a controlled environment (e.g. a server), or you're comfortable sharing your API key with the client (e.g. you're doing a hackathon or building an internal tool).
You may do this with any of the Architecture Patterns.
How to do this:

- Set the `OPENAI_API_KEY` env var. (You can get this key from the OpenAI API dashboard.)
- Or, set the `ANTHROPIC_API_KEY` env var.
If your project is built on create-react-app, you'll want to set `REACT_APP_OPENAI_API_KEY` or `REACT_APP_ANTHROPIC_API_KEY` instead. (More detail.)
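As a sketch, the corresponding env file might look like this (the `sk-...` values are placeholders, not real keys):

```shell
# Server or other trusted environment
OPENAI_API_KEY=sk-...

# create-react-app project (the key will be visible to clients)
REACT_APP_OPENAI_API_KEY=sk-...
```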
### Using a Model Via Proxy
This is only supported for OpenAI. File an issue if you'd like to see it for Anthropic!
When to do this: you have a proxy server that you'd like to use for OpenAI calls.
You would do this with the API Proxy architecture pattern. Nothing stops you from doing it for the other patterns, but this is the one for which it's most likely to be useful.
How to do this: set the `OPENAI_API_BASE` env var. This value will be passed directly to the `openai` client lib (source code). The default value is `https://api.openai.com/v1`.
Examples:

```shell
# When you have a standalone proxy server
OPENAI_API_BASE=https://my-proxy-server/api

# When you're running on the client and want to make requests to the origin
OPENAI_API_BASE=/openai-proxy
```
If your project is built on create-react-app, you'll want to set `REACT_APP_OPENAI_API_BASE` instead. (More detail.)
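If you don't already have a proxy, here's a minimal sketch of one using Express. Express, the `/openai-proxy` prefix, and the route shape are illustrative assumptions; any reverse proxy that forwards to `https://api.openai.com/v1` and attaches your key will work:

```ts
// Illustrative proxy: forwards POST requests under /openai-proxy to the
// OpenAI API, attaching the server-side key so clients never see it.
// Streaming responses would need piping rather than buffering; this
// sketch buffers for brevity. Requires Node 18+ for global fetch.
import express from 'express';

const app = express();
app.use(express.json());

app.post('/openai-proxy/*', async (req, res) => {
  const path = req.path.replace('/openai-proxy', '');
  const upstream = await fetch(`https://api.openai.com/v1${path}`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(req.body),
  });
  res.status(upstream.status).type('application/json').send(await upstream.text());
});

app.listen(3001);
```

With something like this running on your origin, the client-side setting `OPENAI_API_BASE=/openai-proxy` from the example above routes all OpenAI calls through it.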
## Setting the Model Explicitly
If you don't have a strong sense that you need this, don't worry about it. Start by using GPT-4, set it via the env var, and return to this if you run into issues.

Models have different strengths and weaknesses. GPT-4 is considered to have the best reasoning ability, but Claude-100k can consider far more information at once. You may also wish to delegate some parts of your program to open-source HuggingFace models, which have less constrained output than the big corporate models.
You can use the JSX `ChatProvider` and `CompletionProvider` components to explicitly set the model in use:

```tsx
<ChatProvider component={AnthropicChatModel} model="claude-1">
  <App />
</ChatProvider>
```
If you have multiple layers of nesting, the closest parent wins:

```tsx
<ChatProvider component={AnthropicChatModel} model="claude-1">
  {/* components here will use claude-1 */}
  <ChatProvider component={AnthropicChatModel} model="claude-1-100k">
    {/* components here will use claude-1-100k */}
  </ChatProvider>
</ChatProvider>
```
If there is no `ChatProvider` or `CompletionProvider` parent, the default model provider will be used.
For an example, see multi-model-chat.
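As an inline sketch of the same idea (the import paths follow the AI.JSX source tree; treat them as assumptions and check your version):

```tsx
// Sketch: pin one subtree to Claude while the rest of the program uses
// whichever provider the env vars select.
import { ChatCompletion, ChatProvider, UserMessage } from 'ai-jsx/core/completion';
import { AnthropicChatModel } from 'ai-jsx/lib/anthropic';

function Summaries({ doc }: { doc: string }) {
  return (
    <>
      {/* Uses the default (env-var-selected) provider. */}
      <ChatCompletion>
        <UserMessage>Give a one-line summary of: {doc}</UserMessage>
      </ChatCompletion>
      {'\n\n'}
      {/* Explicitly pinned to Claude for the long-context call. */}
      <ChatProvider component={AnthropicChatModel} model="claude-1-100k">
        <ChatCompletion>
          <UserMessage>Summarize in detail: {doc}</UserMessage>
        </ChatCompletion>
      </ChatProvider>
    </>
  );
}
```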
## Llama2
Llama2 is an open-source model from Meta. Because it's open source, there's no single model provider like OpenAI or Anthropic; instead, people run it in their own environments.
AI.JSX includes `<ReplicateLlama2>`, which uses the Replicate-hosted Llama2 chat and completion models. If you'd like to use a Llama2 instance hosted somewhere else, see the source code for `ReplicateLlama2` and adapt it to match your endpoint.
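Usage looks roughly like the following sketch. The import path is an assumption based on the AI.JSX source tree, and Replicate-hosted models generally require a `REPLICATE_API_TOKEN` env var; check the `ReplicateLlama2` source for the exact requirements:

```tsx
// Sketch: route a chat completion through the Replicate-hosted Llama2
// models instead of OpenAI or Anthropic.
import * as AI from 'ai-jsx';
import { ChatCompletion, UserMessage } from 'ai-jsx/core/completion';
// Assumed import path; see the ReplicateLlama2 source in the AI.JSX repo.
import { ReplicateLlama2 } from 'ai-jsx/lib/replicate-model';

const result = await AI.createRenderContext().render(
  <ReplicateLlama2>
    <ChatCompletion>
      <UserMessage>Explain JSX in one paragraph.</UserMessage>
    </ChatCompletion>
  </ReplicateLlama2>
);
console.log(result);
```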