Streaming Generation Examples
Generate content in real time with streaming support for both text and JSON generation. Streaming lets you receive partial responses as they’re generated, which provides a better user experience for longer content generation.
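Each chunk yielded by a stream exposes the fields used throughout the examples below. Here is a minimal sketch of that shape, inferred from how the examples read it; the SDK’s actual chunk type may include additional fields.

interface StreamChunk {
  // Partial content for intermediate chunks; the examples treat the
  // final chunk's content as the complete response.
  content: string;
  // True on the last chunk of the stream.
  finished: boolean;
  // Populated if the stream fails mid-generation (see Error Handling below).
  error?: unknown;
}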
Basic Text Streaming
When you need to generate text content with real-time streaming, pass the stream: true option. This is ideal for interactive experiences where users can see content being generated live.
const stream = await unbody.generate.text(
"Write a detailed explanation of quantum computing and its applications in modern technology.",
{
model: "gpt-4",
temperature: 0.7,
stream: true,
}
);
for await (const chunk of stream) {
if (chunk.finished) {
console.log('Stream finished');
console.log('Final response:', chunk.content);
break;
} else {
console.log(chunk.content);
}
}
JSON Streaming with Schema
Generate structured JSON content with real-time streaming while maintaining schema validation. This is useful for building dynamic forms, API responses, or other structured data in real time.
import { z } from "zod";
const stream = await unbody.generate.json(
"Generate a comprehensive product catalog entry for a high-tech gadget:",
{
schema: z.object({
productName: z.string(),
price: z.number(),
description: z.string(),
features: z.array(z.string()),
specifications: z.object({
dimensions: z.string(),
weight: z.string(),
battery: z.string(),
connectivity: z.array(z.string())
}),
availability: z.object({
inStock: z.boolean(),
shippingTime: z.string(),
regions: z.array(z.string())
})
}),
stream: true,
}
);
for await (const chunk of stream) {
if (chunk.finished) {
console.log('Stream finished');
console.log('Final JSON:', chunk.content);
break;
} else {
console.log('Partial JSON:', chunk.content);
}
}
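Because the schema is defined with zod, you can reuse it to validate the completed output once the stream finishes. The sketch below is illustrative: validateFinalJson is a hypothetical helper, the schema is a trimmed-down version of the one above, and it assumes the final chunk’s content holds the finished JSON (as an object or a JSON string).

import { z } from "zod";

// Trimmed-down version of the catalog schema above, reused for validation.
const productSchema = z.object({
  productName: z.string(),
  price: z.number(),
  features: z.array(z.string()),
});

// Hypothetical helper: validate the final chunk's content after the stream ends.
function validateFinalJson(content: unknown) {
  // If the SDK delivers the final content as a JSON string, parse it first.
  const value = typeof content === "string" ? JSON.parse(content) : content;
  const result = productSchema.safeParse(value);
  if (result.success) {
    console.log("Valid catalog entry:", result.data.productName);
  } else {
    console.error("Schema validation failed:", result.error.issues);
  }
}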
Streaming with Message-Based Input
Use streaming with conversational input for more complex generation scenarios. This is ideal for chatbot implementations or multi-turn conversations.
const stream = await unbody.generate.text(
[
{ role: "system", content: "You are a technical expert explaining complex topics simply." },
{ role: "user", content: "Explain machine learning algorithms and their real-world applications." },
{ role: "assistant", content: "I'd be happy to explain machine learning algorithms in simple terms." },
{ role: "user", content: "Focus on supervised learning and give practical examples." }
],
{
model: "gpt-4",
temperature: 0.6,
maxTokens: 800,
stream: true,
}
);
for await (const chunk of stream) {
if (chunk.finished) {
console.log('\n--- Stream Complete ---');
console.log('Total content:', chunk.content);
break;
} else {
process.stdout.write(chunk.content); // Real-time output
}
}
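For a true multi-turn conversation, append the finished response to the message history before sending the next turn. The sketch below is illustrative: the message array and follow-up prompt are examples, and it assumes the final chunk carries the complete response.

// Keep a running message history and append each completed assistant reply.
const messages = [
  { role: "system", content: "You are a technical expert explaining complex topics simply." },
  { role: "user", content: "Focus on supervised learning and give practical examples." },
];

const turnStream = await unbody.generate.text(messages, {
  model: "gpt-4",
  stream: true,
});

let assistantReply = "";
for await (const chunk of turnStream) {
  if (chunk.finished) {
    assistantReply = chunk.content; // assumption: final chunk holds the full reply
    break;
  }
}

// Store the reply so the next request sees the whole conversation.
messages.push({ role: "assistant", content: assistantReply });
messages.push({ role: "user", content: "Now compare this with unsupervised learning." });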
Combining Content Retrieval with Streaming
While streaming is not available directly in content API methods like .generate.fromMany(), you can combine content retrieval with standalone streaming generation for RAG-like scenarios.
// First, retrieve relevant documents
const { data } = await unbody.get
.textDocument
.search.about("artificial intelligence applications")
.limit(5)
.select("title", "text")
.exec();
// Extract content from retrieved documents
const documentsContent = data.payload
.map(doc => `Title: ${doc.title}\nContent: ${doc.text}`)
.join('\n\n---\n\n');
// Then use streaming generation with the retrieved content
const stream = await unbody.generate.text(
`Based on the following documents, create a comprehensive analysis of AI applications across different industries:
${documentsContent}
Please provide a detailed analysis covering the main themes and applications mentioned in these documents.`,
{
model: "gpt-4",
temperature: 0.7,
maxTokens: 1000,
stream: true,
}
);
// Handle the streaming response
for await (const chunk of stream) {
if (chunk.finished) {
console.log('\n--- Analysis Complete ---');
console.log('Sources used:', data.payload.map(doc => doc.title));
console.log('Final analysis:', chunk.content);
break;
} else {
process.stdout.write(chunk.content);
}
}
Error Handling in Streams
Implement proper error handling for streaming scenarios so your application stays robust when a stream fails to start or errors mid-generation.
async function handleStreamWithErrors() {
try {
const stream = await unbody.generate.text(
"Generate a detailed technical report on blockchain technology.",
{
model: "gpt-4",
stream: true,
}
);
let fullContent = '';
for await (const chunk of stream) {
if (chunk.finished) {
console.log('Stream completed successfully');
console.log('Full content length:', fullContent.length);
break;
} else if (chunk.error) {
console.error('Stream error:', chunk.error);
break;
} else {
fullContent += chunk.content;
// Update UI with new content
updateUI(chunk.content);
}
}
} catch (error) {
console.error('Failed to initialize stream:', error);
// Handle initialization errors
showErrorMessage('Failed to start content generation');
}
}
function updateUI(content) {
// Update your UI with streaming content
document.getElementById('output').textContent += content;
}
function showErrorMessage(message) {
// Show error to user
document.getElementById('error').textContent = message;
}
Cancelling Stream Requests
Sometimes you may need to cancel a streaming request before it completes, such as when a user navigates away or manually stops generation. Here’s how to implement cancellation with an AbortController:
import { isCancel } from 'axios';
async function streamWithCancellation() {
// Create an AbortController to manage cancellation
const controller = new AbortController();
try {
const stream = await unbody.generate.text(
"Generate a detailed technical report on blockchain technology.",
{
model: "gpt-4",
stream: true,
signal: controller.signal // Pass the AbortSignal
}
);
// Simulate user action to cancel the stream after 3 seconds
setTimeout(() => controller.abort(), 3000);
for await (const chunk of stream) {
if (chunk.finished) {
console.log('Stream completed successfully');
} else {
console.log(chunk.content);
}
}
} catch (error) {
if (isCancel(error)) {
console.log("Stream cancelled by user.");
} else {
console.error('Stream error:', error);
}
}
}
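In a browser, the same controller can be wired to a user action instead of a timer. A small sketch, assuming a hypothetical stop button in your page:

// Abort when the user clicks a "stop" button instead of after a timeout.
const controller = new AbortController();
document.getElementById('stop-button')?.addEventListener('click', () => controller.abort());

const stream = await unbody.generate.text(
  "Generate a detailed technical report on blockchain technology.",
  {
    model: "gpt-4",
    stream: true,
    signal: controller.signal, // same AbortSignal as in the example above
  }
);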
Best Practices for Streaming
Performance Optimization
// Keep responses concise for faster, more responsive streaming
const stream = await unbody.generate.text(prompt, {
stream: true,
maxTokens: 500, // Cap output length so the stream completes sooner
temperature: 0.7
});
// Buffer chunks for smoother UI updates
let buffer = '';
let lastUpdate = Date.now();
for await (const chunk of stream) {
if (chunk.finished) {
// Flush remaining buffer
if (buffer) updateUI(buffer);
break;
} else {
buffer += chunk.content;
// Flush to the UI every 100ms or once the buffer exceeds 50 characters
if (Date.now() - lastUpdate > 100 || buffer.length > 50) {
updateUI(buffer);
buffer = '';
lastUpdate = Date.now();
}
}
}
User Experience Enhancement
// Show typing indicators and progress
async function streamWithUX() {
showTypingIndicator(true);
const stream = await unbody.generate.text(prompt, { stream: true });
for await (const chunk of stream) {
if (chunk.finished) {
showTypingIndicator(false);
markComplete();
break;
} else {
appendContent(chunk.content);
updateProgress(chunk.progress); // If available
}
}
}
function showTypingIndicator(show) {
document.getElementById('typing').style.display = show ? 'block' : 'none';
}
function appendContent(content) {
const output = document.getElementById('output');
output.textContent += content;
output.scrollTop = output.scrollHeight; // Auto-scroll
}
Learn more about streaming in our Text Generation API and JSON Generation API guides.