Chat API

Overview

The Chat API provides access to state-of-the-art language models for conversational AI, analysis, and content generation. It supports real-time streaming, function calling, and seamless model switching.

Endpoint

Code
POST /api/v1/chat

Request Schema

Required Parameters

| Parameter | Type | Constraints | Description |
|-----------|------|-------------|-------------|
| message | string | 1-32,000 chars | User message or prompt |
| model | string | Valid model ID | Target AI model identifier |

Optional Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| temperature | float | 0.7 | Randomness control (0.0-2.0) |
| maxTokens | integer | Model default | Maximum response tokens |
| tools | string[] | [] | Function tools to enable; must be exact tool names from the available tools list |
| stream | boolean | false | Enable SSE streaming |
| system | string | null | System instruction prompt |
| metadata | object | — | Custom tracking metadata |
| topP | float | 1.0 | Nucleus sampling threshold |
| frequencyPenalty | float | 0.0 | Repetition penalty (-2.0 to 2.0) |
| presencePenalty | float | 0.0 | Topic repetition penalty (-2.0 to 2.0) |
| stop | string[] | null | Stop sequences |
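
For reference, here is a request that exercises several of these parameters. A minimal sketch; the values (and the metadata keys) are illustrative:

Code(typescript)
// Illustrative request body combining required and optional parameters.
const body = {
  message: "Summarize the attached notes in three bullet points", // required, 1-32,000 chars
  model: "4o-mini",                                               // required, valid model ID
  temperature: 0.4,        // lower = more deterministic
  maxTokens: 800,          // cap on response tokens
  system: "You are a concise assistant.",
  stop: ["\n\n---"],       // stop generation at this sequence
  metadata: { requestSource: "docs-example" } // custom tracking metadata
};

const response = await fetch('https://beta.aimagicx.com/api/v1/chat', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer mgx-sk-prod-your-key',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify(body)
});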

Available Models

OpenAI Models

| Model ID | Context Window | Strengths | Use Cases |
|----------|----------------|-----------|-----------|
| 4o-mini | 128K | Speed, efficiency | Simple queries, high volume |
| 4o | 128K | Balanced performance | General purpose |
| gpt-4.1 | 128K | Advanced reasoning | Complex analysis |
| gpt-4.1-mini | 128K | Efficient reasoning | Cost-effective analysis |
| o3 | 128K | Superior reasoning | Research, mathematics |
| o3-mini | 128K | Fast reasoning | Quick analysis |

Anthropic Models

| Model ID | Context Window | Strengths | Use Cases |
|----------|----------------|-----------|-----------|
| claude-3-5-sonnet | 200K | Nuanced understanding | Creative writing, analysis |
| claude-3-7-sonnet | 200K | Latest capabilities | Advanced tasks |
| claude-sonnet-4-20250514 | 200K | Balanced performance | General purpose |
| claude-opus-4-20250514 | 200K | Maximum capability | Complex reasoning |

Google Models

| Model ID | Context Window | Strengths | Use Cases |
|----------|----------------|-----------|-----------|
| gemini-2.5-flash | 1M | Ultra-fast, large context | Document analysis |
| gemini-2.5-pro | 2M | Massive context | Research, long documents |
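
Because every model shares the same request schema, switching providers is a one-field change. A minimal sketch using model IDs from the tables above:

Code(typescript)
// Same payload, different providers: only the model field changes.
const payload = { message: "Compare these two marketing plans", temperature: 0.5 };

for (const model of ["4o-mini", "claude-3-5-sonnet", "gemini-2.5-flash"]) {
  const res = await fetch('https://beta.aimagicx.com/api/v1/chat', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer mgx-sk-prod-your-key',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ ...payload, model })
  });
  const data = await res.json();
  console.log(`${model}:`, data.choices[0].message.content);
}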

Function Calling

Available Functions

Data Visualization

  • createPieChart - Statistical pie charts
  • createBarChart - Comparative bar graphs
  • createLineChart - Trend analysis
  • createAreaChart - Volume visualization
  • createRadarChart - Multi-dimensional comparison
  • createRadialChart - Circular data representation

Utility Functions

  • getWeather - Real-time weather data
  • convertCurrency - Exchange rate conversion
  • translateText - Multi-language translation
  • webSearch - Internet search integration
  • createDocument - Structured document generation
  • executePython - Code execution sandbox

GitHub Tools (requires GitHub connection)

Important: GitHub tools require a one-time setup through the web interface. The account associated with your API key must have GitHub connected before using these tools.

Setup Process:

  1. Log into the AI Magicx web application
  2. Navigate to MCP Chat settings
  3. Connect your GitHub account (Professional plan required)
  4. Select repositories to grant access
  5. Use the same account's API key for GitHub tool access

Available GitHub Tools:

  • github_read_file - Read files from repositories
  • github_list_directory - List directory contents
  • github_search_code - Search for code patterns
  • github_semantic_search_code - Semantic code search
  • github_get_repository_info - Get repository details
  • github_create_issue - Create new issues
  • github_list_issues - List repository issues
  • github_get_issue - Get issue details
  • github_update_issue - Update existing issues
  • github_list_pull_requests - List pull requests
  • github_get_pull_request - Get pull request details
  • github_create_pull_request - Create new pull request
  • github_merge_pull_request - Merge pull request
  • github_list_commits - List repository commits
  • github_list_branches - List repository branches
  • github_create_repository - Create new repository
  • github_create_or_update_file - Create or update files
  • github_delete_file - Delete repository files
  • github_create_branch - Create new branch
  • github_fork_repository - Fork a repository

AI-Powered GitHub Tools

  • ai_code_review - Automated code review
  • ai_documentation_generator - Generate documentation
  • ai_issue_analysis - Analyze issue patterns
  • ai_assignee_suggestion - Suggest issue assignees
  • ai_related_issues_finder - Find related issues
  • ai_issue_template_generator - Generate issue templates
  • analyze_repository_architecture - Analyze repository structure

Error Handling: If GitHub is not connected, you'll receive:

Code(json)
{ "error": { "message": "GitHub not connected for this account. Please connect your GitHub account first." } }

Function Usage

Code(json)
{ "message": "What's the weather in Tokyo and create a temperature chart for the week", "model": "4o-mini", "tools": ["getWeather", "createLineChart"], "temperature": 0.3 }

Response Format

Standard Response

Code(json)
{ "id": "chat_abc123def456", "object": "chat.completion", "created": 1736524800, "model": "4o-mini", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "Based on the current weather data for Tokyo...", "tool_calls": [ { "id": "call_weather_123", "type": "function", "function": { "name": "getWeather", "arguments": "{\"location\":\"Tokyo\",\"units\":\"metric\"}" } } ] }, "finish_reason": "stop", "logprobs": null } ], "usage": { "prompt_tokens": 127, "completion_tokens": 453, "total_tokens": 580, "credits_used": 2 }, "system_fingerprint": "fp_abc123" }

Streaming Response (SSE)

When stream: true is set in the request, the API returns a Server-Sent Events (SSE) stream with the following format:

Code
data: {"id":"mgx_resp_abc123","object":"chat.completion.chunk","created":1736524800,"model":"4o-mini","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"id":"call_t2bznXBk5hHx","type":"function","function":{"name":"getWeather","arguments":"{\"location\":\"New York\"}"}}]},"finish_reason":null}]} data: {"id":"mgx_resp_abc123","object":"chat.completion.chunk","created":1736524800,"model":"4o-mini","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"id":"call_woS6lKzhG5Ff","type":"function","function":{"name":"createBarChart","arguments":"{\"data\":[...]}"}}]},"finish_reason":null}]} data: {"id":"mgx_resp_abc123","object":"chat.completion.chunk","created":1736524800,"model":"4o-mini","choices":[{"index":0,"delta":{"content":"The current"},"finish_reason":null}]} data: {"id":"mgx_resp_abc123","object":"chat.completion.chunk","created":1736524800,"model":"4o-mini","choices":[{"index":0,"delta":{"content":" weather in"},"finish_reason":null}]} data: {"id":"mgx_resp_abc123","object":"chat.completion.chunk","created":1736524800,"model":"4o-mini","choices":[{"index":0,"delta":{},"finish_reason":"stop"}],"usage":{"prompt_tokens":127,"completion_tokens":453,"total_tokens":580,"credits_used":3}} data: [DONE]

Streamed responses also include real-time tool-call notifications carrying the function name and arguments, as shown in the first two chunks above.

Implementation Examples

Basic Chat Completion

Code(typescript)
const response = await fetch('https://beta.aimagicx.com/api/v1/chat', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer mgx-sk-prod-your-key',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    message: "Explain quantum computing in simple terms",
    model: "4o-mini",
    temperature: 0.7,
    maxTokens: 500
  })
});

const data = await response.json();
console.log(data.choices[0].message.content);

Streaming Implementation

Code(typescript)
async function streamChat(message: string) {
  const response = await fetch('https://beta.aimagicx.com/api/v1/chat', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer mgx-sk-prod-your-key',
      'Content-Type': 'application/json',
      'Accept': 'text/event-stream'
    },
    body: JSON.stringify({
      message,
      model: "claude-3-5-sonnet",
      stream: true
    })
  });

  // Handle SSE stream
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop() || '';

    for (const line of lines) {
      if (line.startsWith('data: ')) {
        const data = line.slice(6);
        if (data === '[DONE]') return;
        try {
          const parsed = JSON.parse(data);
          const content = parsed.choices[0].delta.content;
          if (content) process.stdout.write(content);
        } catch (e) {
          console.error('Parse error:', e);
        }
      }
    }
  }
}

Advanced Streaming with Error Handling

Code(typescript)
class ChatStreamHandler {
  private abortController: AbortController;

  constructor() {
    this.abortController = new AbortController();
  }

  async *streamChat(payload: any): AsyncGenerator<string> {
    const response = await fetch('https://beta.aimagicx.com/api/v1/chat', {
      method: 'POST',
      headers: {
        'Authorization': 'Bearer mgx-sk-prod-your-key',
        'Content-Type': 'application/json',
        'Accept': 'text/event-stream'
      },
      body: JSON.stringify({ ...payload, stream: true }),
      signal: this.abortController.signal
    });

    if (!response.ok) {
      const error = await response.json();
      throw new Error(error.error?.message || 'Request failed');
    }

    // Check if streaming is actually enabled
    if (response.headers.get('content-type')?.includes('application/json')) {
      const data = await response.json();
      yield data.choices[0].message.content;
      return;
    }

    const reader = response.body!.getReader();
    const decoder = new TextDecoder();
    let buffer = '';

    try {
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;

        buffer += decoder.decode(value, { stream: true });
        const lines = buffer.split('\n');
        buffer = lines.pop() || '';

        for (const line of lines) {
          if (line.trim() === '') continue;
          if (line.startsWith('data: ')) {
            const data = line.slice(6);
            if (data === '[DONE]') {
              return;
            }
            try {
              const parsed = JSON.parse(data);
              const content = parsed.choices[0]?.delta?.content;
              if (content) {
                yield content;
              }
              // Check for function calls in stream
              const toolCalls = parsed.choices[0]?.delta?.tool_calls;
              if (toolCalls) {
                yield `\n[Function Call: ${JSON.stringify(toolCalls)}]\n`;
              }
            } catch (e) {
              console.error('Failed to parse SSE data:', e);
            }
          }
        }
      }
    } finally {
      reader.releaseLock();
    }
  }

  cancel() {
    this.abortController.abort();
  }
}

// Usage example
async function main() {
  const handler = new ChatStreamHandler();
  try {
    for await (const chunk of handler.streamChat({
      message: "Explain quantum computing",
      model: "4o-mini",
      temperature: 0.7
    })) {
      process.stdout.write(chunk);
    }
  } catch (error) {
    console.error('Stream error:', error);
  }
}

Function Calling Example

Code(python)
import requests

def analyze_sales_data(sales_figures):
    response = requests.post(
        'https://beta.aimagicx.com/api/v1/chat',
        headers={
            'Authorization': 'Bearer mgx-sk-prod-your-key',
            'Content-Type': 'application/json'
        },
        json={
            'message': f'Analyze these sales figures and create visualizations: {sales_figures}',
            'model': 'claude-3-7-sonnet',
            'tools': ['createBarChart', 'createLineChart', 'createPieChart'],
            'temperature': 0.2,
            'system': 'You are a data analyst. Provide insights and create appropriate visualizations.'
        }
    )

    result = response.json()
    return {
        'analysis': result['choices'][0]['message']['content'],
        'visualizations': [
            call['function']
            for call in result['choices'][0]['message'].get('tool_calls', [])
        ]
    }

GitHub Integration

To use GitHub tools via the API, you must first connect your GitHub account through the web interface. This is a one-time setup per account.

Prerequisites

  1. Professional Plan: GitHub integration requires a Professional plan
  2. Web Setup: GitHub OAuth must be completed through the web UI
  3. Repository Access: Grant access to specific repositories during setup

Setup Instructions

  1. Log into AI Magicx at https://beta.aimagicx.com
  2. Navigate to MCP Chat → Settings → GitHub Integration
  3. Click "Connect GitHub" and authorize the application
  4. Select repositories you want to grant access to
  5. Use your account's API key - GitHub access is tied to the account

Using GitHub Tools

Once connected, simply include GitHub tools in your API requests:

Code(javascript)
const response = await fetch('https://beta.aimagicx.com/api/v1/chat', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer mgx-sk-prod-your-key',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    message: "Analyze the repository structure and create documentation",
    model: "claude-3-5-sonnet",
    tools: [
      "github_get_repository_info",
      "github_list_directory",
      "ai_documentation_generator"
    ],
    metadata: {
      repository: "owner/repo" // Optional: specify target repository
    }
  })
});

Error Handling

If GitHub is not connected, the API will return an error:

Code(json)
{ "success": false, "error": { "code": "GITHUB_NOT_CONNECTED", "message": "GitHub not connected for this account. Please connect your GitHub account first.", "details": { "account_id": "your-account-id", "setup_url": "https://beta.aimagicx.com/ai/mcp-chat" } } }

Security Notes

  • GitHub tokens are encrypted using AES-256-GCM
  • Tokens are tied to team accounts, not individual users
  • OAuth scopes are limited to repository access
  • Connections can be revoked at any time through the web UI

Credit Consumption

Pricing Structure

| Operation | Credits |
|-----------|---------|
| Base chat request | 1 |
| Per function executed | +1 |
| Long context (>32K tokens) | +1 |

How Credits Work

  • Credits are charged for tools that are actually executed, not merely made available in the request
  • Streaming and non-streaming requests consume the same credits
  • Credits are deducted when the request completes
  • The final usage chunk in a streaming response includes credits_used

Examples

  • Simple query: 1 credit
  • Query with weather check: 2 credits (1 base + 1 tool executed)
  • Analysis with 3 charts: 4 credits (1 base + 3 tools executed)
  • Query with 5 tools available but only 2 used: 3 credits
  • Query with 20 GitHub tools available but only 3 executed: 4 credits
  • Query with multiple tools available but none executed: 1 credit
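
The pricing rules above can be expressed as a small helper for budgeting requests. A sketch only; the credits_used value returned in the usage object is authoritative:

Code(typescript)
// Estimate credits per the pricing table: 1 base + 1 per executed tool
// + 1 when the context exceeds 32K tokens.
function estimateCredits(executedTools: number, totalTokens: number): number {
  const base = 1;
  const longContext = totalTokens > 32_000 ? 1 : 0;
  return base + executedTools + longContext;
}

// Matches the examples above: analysis with 3 charts → 1 + 3 = 4 credits.
console.log(estimateCredits(3, 5_000)); // 4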

Error Handling

Common Error Responses

Invalid Model

Code(json)
{ "success": false, "error": { "code": "MODEL_NOT_FOUND", "message": "Model 'xyz' is not available", "status": 400, "details": { "requested_model": "xyz", "available_models": ["4o-mini", "4o", "claude-3-5-sonnet", ...] } } }

Rate Limit Exceeded

Code(json)
{ "success": false, "error": { "code": "RATE_LIMIT_EXCEEDED", "message": "Rate limit exceeded. Please retry after cooldown.", "status": 429, "details": { "limit": 60, "window": "1m", "retry_after": 15 } } }

Insufficient Credits

Code(json)
{ "success": false, "error": { "code": "INSUFFICIENT_CREDITS", "message": "Insufficient credits for this operation", "status": 402, "details": { "required": 4, "available": 2, "breakdown": { "base_cost": 1, "tools_cost": 3 }, "overage_enabled": true, "overage_rate": 0.01 } } }

Error Recovery Pattern

Code(javascript)
async function robustChatRequest(payload, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const response = await fetch('/api/v1/chat', {
        method: 'POST',
        headers: { /* ... */ },
        body: JSON.stringify(payload)
      });

      // Honor the server's cooldown hint before retrying
      if (response.status === 429) {
        const retryAfter = response.headers.get('Retry-After') || 60;
        await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
        continue;
      }

      if (!response.ok) {
        const error = await response.json();
        throw new Error(error.error.message);
      }

      return await response.json();
    } catch (error) {
      if (attempt === maxRetries - 1) throw error;
      // Exponential backoff between attempts
      await new Promise(resolve => setTimeout(resolve, Math.pow(2, attempt) * 1000));
    }
  }
}

Best Practices

Model Selection Guide

  1. High Volume, Simple Tasks

    • Use: 4o-mini, gemini-2.5-flash
    • Optimize for speed and cost
  2. General Purpose

    • Use: 4o, claude-3-5-sonnet
    • Balance performance and cost
  3. Complex Reasoning

    • Use: o3, claude-opus-4-20250514
    • Maximize capability for critical tasks
  4. Large Context Processing

    • Use: gemini-2.5-pro (2M context)
    • Process entire documents or codebases

Temperature Guidelines

| Use Case | Temperature | Description |
|----------|-------------|-------------|
| Factual Q&A | 0.0-0.3 | Deterministic, consistent |
| General Chat | 0.5-0.8 | Balanced creativity |
| Creative Writing | 0.8-1.2 | Higher variability |
| Brainstorming | 1.0-1.5 | Maximum creativity |
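
These guidelines translate directly into request defaults. A sketch with hypothetical preset names mirroring the table:

Code(typescript)
// Hypothetical temperature presets mirroring the table above.
const TEMPERATURE_PRESETS = {
  factualQA: 0.2,        // deterministic, consistent
  generalChat: 0.7,      // balanced creativity
  creativeWriting: 1.0,  // higher variability
  brainstorming: 1.3     // maximum creativity
} as const;

const body = {
  message: "Draft five taglines for a coffee brand",
  model: "claude-3-5-sonnet",
  temperature: TEMPERATURE_PRESETS.creativeWriting
};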

Performance Optimization

  1. Token Management

    Code(javascript)
    // Estimate tokens before sending (rough heuristic: ~4 chars per token)
    const estimatedTokens = message.length / 4;
    const maxTokens = Math.min(4096, estimatedTokens * 3);
  2. Response Caching

    Code(javascript)
    import crypto from 'node:crypto';

    const cache = new Map();
    const cacheKey = crypto.createHash('md5')
      .update(JSON.stringify({ message, model, temperature }))
      .digest('hex');
  3. Batch Processing

    Code(javascript)
    // Process multiple queries efficiently
    const results = await Promise.all(
      queries.map(q => chatAPI(q))
    );

Server-Sent Events (SSE) Streaming

SSE Protocol Details

When streaming is enabled, the API uses Server-Sent Events to deliver real-time responses. Each event follows this structure:

Code
data: <JSON payload>

Event Types

  1. Content Chunks

    Code(json)
    { "id": "chat_abc123", "object": "chat.completion.chunk", "created": 1736524800, "model": "4o-mini", "choices": [{ "index": 0, "delta": { "content": "partial response text" }, "finish_reason": null }] }
  2. Function Call Chunks

    Code(json)
    { "id": "chat_abc123", "object": "chat.completion.chunk", "choices": [{ "delta": { "tool_calls": [{ "index": 0, "id": "call_123", "type": "function", "function": { "name": "getWeather", "arguments": "{\"location\":" } }] } }] }
  3. Final Chunk with Usage

    Code(json)
    { "id": "chat_abc123", "object": "chat.completion.chunk", "choices": [{ "delta": {}, "finish_reason": "stop" }], "usage": { "prompt_tokens": 127, "completion_tokens": 453, "total_tokens": 580, "credits_used": 1 } }
  4. Stream Termination

    Code
    data: [DONE]
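
For typed clients, the chunk shapes above can be captured in a single interface. A sketch inferred from the event examples; field names follow the payloads shown:

Code(typescript)
// Streaming chunk shape, inferred from the event examples above.
interface ChatCompletionChunk {
  id: string;
  object: 'chat.completion.chunk';
  created?: number;
  model?: string;
  choices: {
    index?: number;
    delta: {
      content?: string;
      tool_calls?: {
        index: number;
        id: string;
        type: 'function';
        function: { name: string; arguments: string };
      }[];
    };
    finish_reason: string | null;
  }[];
  // Present only on the final chunk
  usage?: {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
    credits_used: number;
  };
}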

Streaming Best Practices

  1. Connection Management

    • Set appropriate timeouts (30-60 seconds recommended)
    • Implement retry logic for network interruptions
    • Use AbortController for cancellation
  2. Buffer Handling (see the sketch after this list)

    • Process partial chunks correctly
    • Handle multi-byte UTF-8 characters at chunk boundaries
    • Clear buffers after processing
  3. Tool Call Processing

    • Tool calls arrive before the text response
    • Parse and store tool call information for reference
    • Tool arguments are sent as complete JSON strings
    • Multiple tools may be called in sequence
  4. Error Recovery

    Code(typescript)
    async function* streamWithRetry(payload: any, maxRetries = 3) {
      for (let attempt = 0; attempt < maxRetries; attempt++) {
        try {
          const handler = new ChatStreamHandler();
          let fullResponse = '';
          // Note: a retry restarts the stream from the beginning.
          for await (const chunk of handler.streamChat(payload)) {
            fullResponse += chunk;
            yield chunk;
          }
          return fullResponse;
        } catch (error) {
          if (attempt === maxRetries - 1) throw error;
          // Exponential backoff
          await new Promise(resolve =>
            setTimeout(resolve, Math.pow(2, attempt) * 1000)
          );
        }
      }
    }
  5. Client-Side Implementation Tips

    • In browsers, note that the native EventSource API supports GET requests only; for this POST endpoint, read the stream with fetch and a ReadableStream reader as shown above
    • Implement heartbeat detection for connection health
    • Queue chunks for smooth UI updates
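
The buffer-handling sketch referenced in item 2, assuming the fetch-based reader used in the earlier examples:

Code(typescript)
// UTF-8-safe line buffering: decoding with { stream: true } holds back
// multi-byte characters split across chunks, and the trailing partial
// line stays in the buffer until the next read completes it.
const decoder = new TextDecoder();
let buffer = '';

function extractLines(chunk: Uint8Array): string[] {
  buffer += decoder.decode(chunk, { stream: true });
  const lines = buffer.split('\n');
  buffer = lines.pop() || ''; // retain the incomplete line for the next chunk
  return lines;
}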

Testing Streaming with cURL

Code(bash)
curl -X POST https://beta.aimagicx.com/api/v1/chat \
  -H "Authorization: Bearer mgx-sk-prod-your-key" \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -d '{
    "message": "Check the weather in Paris and create a chart",
    "model": "4o-mini",
    "stream": true,
    "tools": ["getWeather", "createBarChart"]
  }' \
  --no-buffer

Handling Tool Calls in Streaming

When tools are enabled in a streaming request, tool calls are sent as separate chunks before the text response:

Code(javascript)
// Example: Processing streaming tool calls.
// readStreamLines is assumed to be a helper that yields complete SSE
// lines from the response body (see the buffered readers above).
const toolCalls = new Map();

for await (const line of readStreamLines(response)) {
  if (line.startsWith('data: ')) {
    const data = line.slice(6);
    if (data === '[DONE]') break;

    const chunk = JSON.parse(data);

    // Handle tool calls
    if (chunk.choices[0]?.delta?.tool_calls) {
      chunk.choices[0].delta.tool_calls.forEach(tc => {
        console.log(`Tool: ${tc.function.name}`);
        console.log(`Arguments: ${tc.function.arguments}`);
        // Store tool call info
        toolCalls.set(tc.id, {
          name: tc.function.name,
          arguments: tc.function.arguments
        });
      });
    }

    // Handle text content
    if (chunk.choices[0]?.delta?.content) {
      process.stdout.write(chunk.choices[0].delta.content);
    }
  }
}

React Streaming Example

Code(tsx)
import { useState, useCallback } from 'react';

function ChatComponent() {
  const [response, setResponse] = useState('');
  const [isStreaming, setIsStreaming] = useState(false);

  const streamChat = useCallback(async (message: string) => {
    setIsStreaming(true);
    setResponse('');

    try {
      const response = await fetch('/api/v1/chat', {
        method: 'POST',
        headers: {
          // apiKey is assumed to be available in scope (e.g., from app config)
          'Authorization': `Bearer ${apiKey}`,
          'Content-Type': 'application/json',
          'Accept': 'text/event-stream'
        },
        body: JSON.stringify({
          message,
          model: 'claude-3-5-sonnet',
          stream: true
        })
      });

      const reader = response.body!.getReader();
      const decoder = new TextDecoder();
      let buffer = '';

      while (true) {
        const { done, value } = await reader.read();
        if (done) break;

        // Buffer partial lines so events split across chunks parse correctly
        buffer += decoder.decode(value, { stream: true });
        const lines = buffer.split('\n');
        buffer = lines.pop() || '';

        for (const line of lines) {
          if (line.startsWith('data: ')) {
            const data = line.slice(6);
            if (data === '[DONE]') continue;
            try {
              const parsed = JSON.parse(data);
              const content = parsed.choices[0]?.delta?.content || '';
              setResponse(prev => prev + content);
            } catch (e) {
              // Skip invalid JSON
            }
          }
        }
      }
    } catch (error) {
      console.error('Streaming error:', error);
    } finally {
      setIsStreaming(false);
    }
  }, []);

  return (
    <div>
      <button onClick={() => streamChat('Hello!')}>
        Start Chat
      </button>
      <div>{response}</div>
      {isStreaming && <div>Streaming...</div>}
    </div>
  );
}

OpenAI SDK Compatibility

The Chat API is fully compatible with the OpenAI SDK, allowing you to use existing OpenAI client libraries with minimal changes.

Installation

Code(bash)
npm install openai
# or
yarn add openai
# or
pnpm add openai

Basic Usage

Code(javascript)
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.AIMAGICX_API_KEY,
  baseURL: 'https://beta.aimagicx.com/api/v1',
});

async function main() {
  const completion = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Hello! How are you?' }],
    model: '4o-mini',
    temperature: 0.7,
  });

  console.log(completion.choices[0].message.content);
}

Streaming with OpenAI SDK

Code(javascript)
const stream = await openai.chat.completions.create({
  messages: [{ role: 'user', content: 'Tell me a story' }],
  model: 'claude-3-5-sonnet',
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}

Function Calling with OpenAI SDK

Code(javascript)
const completion = await openai.chat.completions.create({
  messages: [{ role: 'user', content: 'What is the weather in Paris?' }],
  model: '4o-mini',
  tools: [
    {
      type: 'function',
      function: {
        name: 'getWeather',
        description: 'Get weather information for a location',
        parameters: {
          type: 'object',
          properties: {
            location: { type: 'string' },
            units: { type: 'string', enum: ['celsius', 'fahrenheit'] }
          },
          required: ['location']
        }
      }
    }
  ],
  tool_choice: 'auto',
});

// Handle tool calls
if (completion.choices[0].message.tool_calls) {
  for (const toolCall of completion.choices[0].message.tool_calls) {
    console.log(`Tool: ${toolCall.function.name}`);
    console.log(`Arguments: ${toolCall.function.arguments}`);
  }
}

Python SDK Usage

Code(python)
from openai import OpenAI

client = OpenAI(
    api_key="mgx-sk-prod-your-key",
    base_url="https://beta.aimagicx.com/api/v1"
)

completion = client.chat.completions.create(
    model="claude-3-5-sonnet",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing"}
    ],
    temperature=0.7,
    max_tokens=1000
)

print(completion.choices[0].message.content)

Differences from OpenAI API

While the API is largely compatible, there are some differences:

  1. Model Names: Use AI Magicx model names (e.g., 4o-mini, claude-3-5-sonnet) instead of OpenAI model names
  2. Tool Names: When using tools, specify the exact tool names from our available tools list
  3. Credits: The usage object includes a credits_used field showing API credit consumption
  4. Additional Features: Support for models from multiple providers (OpenAI, Anthropic, Google)
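
For example, reading credits_used through the SDK means stepping outside the published OpenAI types, since the field is an AI Magicx extension. A sketch:

Code(typescript)
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.AIMAGICX_API_KEY,
  baseURL: 'https://beta.aimagicx.com/api/v1',
});

const completion = await openai.chat.completions.create({
  messages: [{ role: 'user', content: 'Hello!' }],
  model: '4o-mini',
});

// credits_used is an AI Magicx extension, so cast past the OpenAI SDK types.
const usage = completion.usage as typeof completion.usage & { credits_used?: number };
console.log(`Credits used: ${usage?.credits_used}`);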

Compatibility Notes

  • ✅ Chat completions endpoint
  • ✅ Streaming support
  • ✅ Function/tool calling
  • ✅ System messages
  • ✅ Temperature and max_tokens controls
  • ❌ Embeddings endpoint (use dedicated endpoint)
  • ❌ Fine-tuning endpoints
  • ❌ Assistants API

Advanced Features

Context Management

Code(javascript)
// Maintain conversation context
const conversation = [
  { role: "system", content: "You are a helpful assistant" },
  { role: "user", content: "Previous message" },
  { role: "assistant", content: "Previous response" }
];

// Add new message with context
const contextualRequest = {
  message: "Follow up question",
  model: "claude-3-5-sonnet",
  context: conversation,
  maxTokens: 2000
};

Multi-Modal Support (Coming Soon)

  • Image understanding
  • Audio transcription
  • Video analysis
  • Document parsing

For technical support, contact contact@aimagicx.com
