Installation

npm install @insforge/sdk@latest

import { createClient } from '@insforge/sdk';

const insforge = createClient({
  baseUrl: 'https://your-app.insforge.app',
  anonKey: 'your-anon-key'
});

chat.completions.create()

Create AI chat completions with streaming support, web search, file parsing, and extended reasoning.

Parameters

  • model (string, required) - AI model (e.g., 'anthropic/claude-3.5-haiku', 'openai/gpt-4')
  • messages (array, required) - Array of message objects with text, images, or files
  • temperature (number, optional) - Sampling temperature 0-2
  • maxTokens (number, optional) - Max tokens to generate
  • topP (number, optional) - Top-p sampling 0-1
  • stream (boolean, optional) - Enable streaming mode
  • webSearch (object, optional) - Enable web search capabilities
    • enabled (boolean) - Enable web search
    • maxResults (number, optional) - Maximum number of search results to include
  • fileParser (object, optional) - Enable file/PDF parsing
    • enabled (boolean) - Enable file parsing
    • pdf (object, optional) - PDF processing options
      • engine ('pdf-text' | 'mistral-ocr' | 'native', optional) - Processing engine. Defaults to 'native' if supported, otherwise 'mistral-ocr'

Note: File URLs must be publicly accessible. Use a public storage bucket, or pass private files as base64 data URLs.

Returns (non-streaming)

{
  id: string,
  object: 'chat.completion',
  created: number,
  model: string,
  choices: [{
    index: number,
    message: {
      role: "assistant",
      content: string,
      annotations?: UrlCitationAnnotation[]  // Present when web search is used
    },
    finish_reason: string
  }],
  usage: { prompt_tokens: number, completion_tokens: number, total_tokens: number }
}
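
The UrlCitationAnnotation type is not defined on this page; the sketch below infers its shape from the citation access in the combined example (annotation.urlCitation.title and annotation.urlCitation.url). Any field beyond those two is an assumption.

// Assumed shape of UrlCitationAnnotation; only urlCitation.url and
// urlCitation.title are confirmed by the examples on this page.
interface UrlCitationAnnotation {
  type?: string;        // assumption: e.g. 'url_citation'
  urlCitation: {
    url: string;        // cited page URL
    title: string;      // cited page title
  };
}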

Returns (streaming)

AsyncIterableIterator<{
  id: string;
  object: 'chat.completion.chunk';
  choices: [
    {
      delta: { content?: string };
      finish_reason: string | null;
    },
  ];
}>;

Example (Basic)

const completion = await insforge.ai.chat.completions.create({
  model: 'anthropic/claude-3.5-haiku',
  messages: [
    { role: 'user', content: 'What is the capital of France?' }
  ],
});

console.log(completion.choices[0].message.content);
// "The capital of France is Paris."

Example (With Images)

const completion = await insforge.ai.chat.completions.create({
  model: 'anthropic/claude-3.5-haiku',
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What do you see in this image?' },
        {
          type: 'image_url',
          image_url: {
            url: 'https://example.com/photo.jpg', // or base64: 'data:image/jpeg;base64,...'
          },
        },
      ],
    },
  ],
});

console.log(completion.choices[0].message.content);

Example (Combined Features)

const completion = await insforge.ai.chat.completions.create({
  model: 'anthropic/claude-sonnet-4.5',
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Analyze this research paper and find related recent news' },
        {
          type: 'file',
          file: {
            filename: 'research-paper.pdf',
            file_data: 'https://example.com/research-paper.pdf'  // or base64: 'data:application/pdf;base64,...'
          }
        }
      ]
    }
  ],
  fileParser: { enabled: true },
  webSearch: { enabled: true, maxResults: 5 }
});

console.log(completion.choices[0].message.content);

// Access web search citations
completion.choices[0].message.annotations?.forEach(annotation => {
  console.log(`- ${annotation.urlCitation.title}: ${annotation.urlCitation.url}`);
});
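
Example (PDF Engine Selection)

The pdf.engine option described above is not shown in the other examples. This sketch uses a placeholder URL and picks 'mistral-ocr', which as an OCR engine is the likely fit for scanned pages; adjust the engine to match your documents.

const completion = await insforge.ai.chat.completions.create({
  model: 'anthropic/claude-3.5-haiku',
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Summarize this scanned document.' },
        {
          type: 'file',
          file: {
            filename: 'scanned.pdf',
            file_data: 'https://example.com/scanned.pdf'  // must be publicly accessible, or use base64
          }
        }
      ]
    }
  ],
  fileParser: {
    enabled: true,
    pdf: { engine: 'mistral-ocr' }  // 'pdf-text' | 'mistral-ocr' | 'native'
  }
});

console.log(completion.choices[0].message.content);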

Example (Streaming)

const stream = await insforge.ai.chat.completions.create({
  model: 'openai/gpt-4',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true,
});

for await (const chunk of stream) {
  if (chunk.choices[0]?.delta?.content) {
    process.stdout.write(chunk.choices[0].delta.content);
  }
}

embeddings.create()

Generate vector embeddings for text input using AI models.

Parameters

  • model (string, required) - Embedding model (e.g., 'openai/text-embedding-3-small')
  • input (string | string[], required) - Text input(s) to embed
  • encoding_format ('float' | 'base64', optional) - Output format (default: 'float')
  • dimensions (number, optional) - Number of dimensions for the output embeddings

Returns

{
  object: 'list',
  data: EmbeddingObject[],
  metadata?: {
    model: string,
    usage?: {
      promptTokens?: number,
      totalTokens?: number
    }
  }
}

// EmbeddingObject type
interface EmbeddingObject {
  object: 'embedding',
  embedding: number[] | string,  // number[] for float, string for base64
  index: number
}

Example

const response = await insforge.ai.embeddings.create({
  model: 'openai/text-embedding-3-small',
  input: 'Hello world'
});

console.log(response.data[0].embedding);  // number[]
console.log(`Dimensions: ${response.data[0].embedding.length}`);
console.log(`Model: ${response.metadata?.model}`);
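
Example (Batch Input)

The string[] form of input is not shown above; this minimal sketch embeds several strings in one call and relies on the index field to keep results aligned with the inputs (the default 'float' encoding returns number[] embeddings).

const response = await insforge.ai.embeddings.create({
  model: 'openai/text-embedding-3-small',
  input: ['First document', 'Second document', 'Third document'],
});

response.data.forEach((item) => {
  // item.embedding is number[] with the default 'float' encoding
  console.log(`Input ${item.index}: ${item.embedding.length} dimensions`);
});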

Example (Store Embeddings in Database)

// Generate and store embeddings for content
const content = 'This is an important document about AI.';

const response = await insforge.ai.embeddings.create({
  model: 'openai/text-embedding-3-small',
  input: content
});

// Store in database with pgvector
await insforge.database.from('documents').insert([{
  content,
  embedding: response.data[0].embedding,  // Store as vector
  created_at: new Date().toISOString()
}]);

images.generate()

Generate images using AI models.

Parameters

  • model (string, required) - Image model (e.g., 'google/gemini-3-pro-image-preview')
  • prompt (string, required) - Text description of image
  • images (array, optional) - Input images for image-to-image (url or base64)
  • width (number, optional) - Image width in pixels
  • height (number, optional) - Image height in pixels
  • size (string, optional) - Predefined size ('1024x1024', '512x512')
  • numImages (number, optional) - Number of images to generate
  • quality (string, optional) - Image quality: 'standard' or 'hd'
  • style (string, optional) - Image style: 'vivid' or 'natural'

Returns

{
  created: number,
  data: ImageData[],
  usage?: TokenUsage
}

// ImageData type
interface ImageData {
  b64_json?: string,  // Base64 encoded image
  content?: string    // Text response from model
}

Example

const response = await insforge.ai.images.generate({
  model: 'google/gemini-3-pro-image-preview',
  prompt: 'A serene mountain landscape at sunset',
  size: '1024x1024',
});

// Get base64 image and upload to storage
const base64Image = response.data[0].b64_json;
const buffer = Buffer.from(base64Image, 'base64');
const blob = new Blob([buffer], { type: 'image/png' });

const { data: uploadData } = await insforge.storage.from('ai-images').uploadAuto(blob);

// Save URL to database
await insforge.database.from('generated_images').insert([
  {
    prompt: 'A serene mountain landscape at sunset',
    image_url: uploadData.url,
  },
]);
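
Example (Multiple Variations)

A sketch of the numImages, quality, and style parameters from the list above; whether a given model honors all three options together is not specified here, so treat the combination as illustrative.

const response = await insforge.ai.images.generate({
  model: 'google/gemini-3-pro-image-preview',
  prompt: 'A watercolor fox in a misty forest',
  numImages: 3,
  quality: 'hd',      // 'standard' or 'hd'
  style: 'natural',   // 'vivid' or 'natural'
});

response.data.forEach((image, i) => {
  if (image.b64_json) {
    console.log(`Image ${i}: ${image.b64_json.length} base64 characters`);
  } else if (image.content) {
    console.log(`Image ${i}: model returned text instead: ${image.content}`);
  }
});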