Structured Outputs
Lesson 7 of 14 · 15 min
JSON Mode and Structured Output
Get predictable, parseable responses from LLMs so your systems can rely on them
Learning goals
- Understand JSON mode and when to use it
- Learn to specify output schemas effectively
- Handle edge cases and validation
Why Structured Output?
LLMs naturally generate free-form text, but applications often need structured data:
- Parsing reliability: JSON can be parsed programmatically
- Consistency: Same structure every time
- Integration: Easy to use with databases and APIs
- Validation: Check if output matches expected schema
Enabling JSON Mode
Most APIs offer a JSON mode flag:
```javascript
const response = await openai.chat.completions.create({
  model: "gpt-4o", // JSON mode requires a model that supports it, e.g. gpt-4o or gpt-4-turbo
  response_format: { type: "json_object" },
  messages: [
    { role: "system", content: "Output valid JSON only." },
    { role: "user", content: "List 3 programming languages with their year of creation" }
  ]
});
```

Important: when using JSON mode, you must also instruct the model to output JSON in the prompt itself; the API rejects requests that enable JSON mode without mentioning JSON in the messages.
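Even in JSON mode, treat the model's output as untrusted input and parse it defensively. A minimal sketch (the `raw` string stands in for `response.choices[0].message.content`, and `parseModelJson` is a hypothetical helper):

```javascript
// Hypothetical raw model output; in a real call this would come from
// response.choices[0].message.content
const raw =
  '{"languages": [{"name": "Python", "year": 1991, "paradigm": "multi-paradigm"}]}';

function parseModelJson(text) {
  try {
    return JSON.parse(text);
  } catch (err) {
    // Surface a clear error instead of letting malformed output crash downstream code
    throw new Error(`Model returned invalid JSON: ${err.message}`);
  }
}

const data = parseModelJson(raw);
console.log(data.languages[0].name); // → "Python"
```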
Specifying Schemas
Be explicit about the expected structure:
```
Return a JSON object with this exact structure:
{
  "languages": [
    {
      "name": "string",
      "year": number,
      "paradigm": "string"
    }
  ]
}
```

Providing a schema template like this significantly improves reliability.
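On the receiving side, you can check that the parsed output actually matches the schema you asked for. A minimal hand-rolled validator for the structure above (in production, a schema library such as Zod or Ajv is a better fit; `validateLanguages` and the sample objects are illustrative):

```javascript
// Returns true only if the object matches the "languages" schema requested above
function validateLanguages(data) {
  if (!Array.isArray(data?.languages)) return false;
  return data.languages.every(
    (l) =>
      typeof l.name === "string" &&
      typeof l.year === "number" &&
      typeof l.paradigm === "string"
  );
}

const good = { languages: [{ name: "Rust", year: 2010, paradigm: "multi-paradigm" }] };
const bad = { languages: [{ name: "Rust", year: "2010" }] }; // year is a string, paradigm missing

console.log(validateLanguages(good)); // true
console.log(validateLanguages(bad));  // false
```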
Common mistakes
- Forgetting to instruct JSON output in the prompt: JSON mode alone isn't enough
- Not providing a schema: the model may invent its own structure
- Not validating output: always parse and validate JSON before using it
- Requesting overly complex structures: deeply nested schemas increase error rates
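The last two mistakes can be avoided with a single guard that parses and validates in one step, returning a result the caller can act on (for example, by retrying the request with the failure reason fed back into the prompt). A sketch, where `validate` is a hypothetical caller-supplied schema check:

```javascript
// Parse model output and validate it against a caller-supplied schema check.
// Returns { ok: true, data } on success, or { ok: false, reason } on failure.
function safeStructuredOutput(text, validate) {
  let data;
  try {
    data = JSON.parse(text);
  } catch {
    return { ok: false, reason: "invalid-json" };
  }
  if (!validate(data)) {
    return { ok: false, reason: "schema-mismatch" };
  }
  return { ok: true, data };
}

const result = safeStructuredOutput('{"n": 1}', (d) => typeof d.n === "number");
console.log(result.ok); // true
```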
Key takeaways
- JSON mode ensures syntactically valid JSON output
- Always include JSON instructions in the prompt, not just the API flag
- Provide explicit schema templates for consistent structure
- Validate and parse JSON output before using it in your application