The OpenAI API will let you use OpenAI's frontier models in your application. To use the OpenAI API, go to the [OpenAI developer platform](https://platform.openai.com/docs/overview) and log in.

> [!Note]
> Many other LLM providers have replicated OpenAI's web endpoints, allowing the OpenAI API client to work with their models as well. For example,
> ```python
> from openai import OpenAI
>
> ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')
> response = ollama_via_openai.chat.completions.create(
>     model=MODEL,
>     messages=messages
> )
> print(response.choices[0].message.content)
> ```
> will run your local Ollama model (if it is being served).

You must supply at least \$5 in credits to use the API. Open Settings (gear icon in the top right) and then the Billing page from the side panel. Load the amount you are comfortable with. You can choose to enable auto recharge, but I don't recommend it until you are sure how much your application is going to spend.

Next, open the Dashboard page (in the top nav bar) and open the API keys page from the side panel. Click **+ Create new secret key**. Optionally, give it a descriptive name. Copy the key from the dialog box. You will not be able to see the key again! (But just delete and re-create the key if you lose it for any reason.)

Save the key into an [[environment file]] like (no spaces)

```.env
OPENAI_API_KEY=sk-proj-...
```

> [!Tip]
> Use [[nano]] from [[Bash]] to create a `.env` file and update it with your secret key
> ```bash
> nano .env
> ```
> Type `OPENAI_API_KEY=` and paste the key. Then use `Ctrl+O` to save and `Ctrl+X` to exit. Confirm you saved the file correctly with
> ```bash
> cat .env
> ```
> Then type `clear` to clear the screen so your key is no longer showing.

# basic requests

The basic request format is a call to the `openai.chat.completions` API. Set it up so that `user_prompt` is its own function that returns a prompt built from a few parameters, making it easy to re-use with similar prompts.

```python
response = openai.chat.completions.create(
    model=MODEL,
    messages=[
        {'role': 'system', 'content': system_prompt},
        {'role': 'user', 'content': user_prompt()}
    ]
)
result = response.choices[0].message.content
```

# streaming responses

You can also stream responses, for example when working in a Jupyter Notebook, to get the same feel as when using the chat interface.

```python
from IPython.display import Markdown, display, update_display

def stream_response(system_prompt, user_prompt):
    stream = openai.chat.completions.create(
        model=MODEL,
        messages=[
            {'role': 'system', 'content': system_prompt},
            {'role': 'user', 'content': user_prompt()}
        ],
        stream=True
    )
    response = ""
    display_handle = display(Markdown(""), display_id=True)
    for chunk in stream:
        response += chunk.choices[0].delta.content or ''
        # Strip stray code fences and the "markdown" tag so a partially
        # streamed fence doesn't break the rendered output
        response = response.replace("```", "").replace("markdown", "")
        update_display(Markdown(response), display_id=display_handle.display_id)

stream_response(system_prompt, user_prompt)
```
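The snippets above assume a `system_prompt` string and a `user_prompt` function already exist. As a concrete illustration of that pattern, here is a hypothetical pair (the names match the snippets, but the wording and parameters are purely illustrative):

```python
# Hypothetical prompts for illustration; swap in your own wording.
system_prompt = "You are a concise technical assistant who answers in Markdown."

def user_prompt(topic='the OpenAI chat completions API', audience='a general audience'):
    # Building the prompt from parameters makes it easy to re-use the
    # same structure for similar requests.
    return (
        f"Summarize the key points of {topic} for {audience}. "
        "Keep it to a handful of short bullet points."
    )
```

Because every parameter has a default, the bare `user_prompt()` calls above work unchanged, while something like `user_prompt(topic='streaming responses')` reshapes the request without touching the plumbing.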
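Finally, tying the setup together: the examples use a client named `openai`, which you can create after loading the key from your `.env` file. A minimal sketch, assuming the `python-dotenv` package is installed and using `gpt-4o-mini` as a stand-in for `MODEL`:

```python
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # copies OPENAI_API_KEY from the .env file into the environment

# Fail fast if the key didn't load, before making any paid calls
if not os.getenv('OPENAI_API_KEY'):
    raise ValueError('OPENAI_API_KEY not found; check your .env file')

openai = OpenAI()  # the client reads OPENAI_API_KEY from the environment
MODEL = 'gpt-4o-mini'  # stand-in; use whichever model you have access to
```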