Structured outputs define a specific data structure for LLM responses, which helps ensure the response can be reliably parsed by downstream application code. For the simplest use cases, providing an example response in [[one-shot prompting]] is often enough. With OpenAI specifically, you can pass `response_format={"type": "json_object"}` (JSON mode), but your prompt must also instruct the model to produce JSON (e.g., the word "JSON" must appear in a system or user message) or the request will fail.
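
A minimal sketch of JSON mode with the OpenAI Python SDK (v1.x), assuming `OPENAI_API_KEY` is set in the environment; the model name, prompt wording, and the `name`/`email` keys are illustrative choices, not requirements:

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    # JSON mode: the model is constrained to return valid JSON
    response_format={"type": "json_object"},
    messages=[
        # The prompt must mention JSON, or the API rejects the request
        {"role": "system", "content": "Extract contact details and reply as a JSON object with keys 'name' and 'email'."},
        {"role": "user", "content": "Jane Doe can be reached at jane@example.com."},
    ],
)

# Because the output is guaranteed to be valid JSON, it can be parsed directly
data = json.loads(response.choices[0].message.content)
print(data["name"], data["email"])
```

Note that JSON mode only guarantees syntactically valid JSON; it does not enforce a particular schema, so the prompt (or a one-shot example) still has to describe the keys you expect.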