Overview
The Chat API provides access to state-of-the-art language models for conversational AI, analysis, and content generation. It supports real-time streaming, function calling, and seamless model switching.
Endpoint
POST https://beta.aimagicx.com/api/v1/chat
Request Schema
Required Parameters
| Parameter | Type | Constraints | Description |
|---|---|---|---|
| message | string | 1-32,000 chars | User message or prompt |
| model | string | Valid model ID | Target AI model identifier |
Optional Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| temperature | float | 0.7 | Randomness control (0.0-2.0) |
| maxTokens | integer | Model default | Maximum response tokens |
| tools | string[] | [] | Function tools to enable. Must be exact tool names from the available tools list |
| stream | boolean | false | Enable SSE streaming |
| system | string | null | System instruction prompt |
| metadata | object | — | Custom tracking metadata |
| topP | float | 1.0 | Nucleus sampling threshold |
| frequencyPenalty | float | 0.0 | Repetition penalty (-2.0 to 2.0) |
| presencePenalty | float | 0.0 | Topic repetition penalty (-2.0 to 2.0) |
| stop | string[] | null | Stop sequences |
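For illustration, a minimal client-side pre-check of the documented constraints might look like the following. This is a hypothetical helper, not part of the API; the server remains the source of truth for validation.

```javascript
// Hypothetical pre-flight validation mirroring the documented parameter
// constraints. Returns a list of human-readable problems (empty = looks OK).
function validateChatPayload(payload) {
  const errors = [];
  if (typeof payload.message !== 'string' ||
      payload.message.length < 1 || payload.message.length > 32000) {
    errors.push('message must be a string of 1-32,000 characters');
  }
  if (typeof payload.model !== 'string' || payload.model.length === 0) {
    errors.push('model is required');
  }
  if (payload.temperature !== undefined &&
      (payload.temperature < 0 || payload.temperature > 2)) {
    errors.push('temperature must be between 0.0 and 2.0');
  }
  for (const key of ['frequencyPenalty', 'presencePenalty']) {
    if (payload[key] !== undefined && (payload[key] < -2 || payload[key] > 2)) {
      errors.push(`${key} must be between -2.0 and 2.0`);
    }
  }
  return errors;
}
```

Running this before each request catches obvious mistakes locally instead of spending a round trip on a 400 response.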
Available Models
OpenAI Models
| Model ID | Context Window | Strengths | Use Cases |
|---|---|---|---|
| 4o-mini | 128K | Speed, efficiency | Simple queries, high volume |
| 4o | 128K | Balanced performance | General purpose |
| gpt-4.1 | 128K | Advanced reasoning | Complex analysis |
| gpt-4.1-mini | 128K | Efficient reasoning | Cost-effective analysis |
| o3 | 128K | Superior reasoning | Research, mathematics |
| o3-mini | 128K | Fast reasoning | Quick analysis |
Anthropic Models
| Model ID | Context Window | Strengths | Use Cases |
|---|---|---|---|
| claude-3-5-sonnet | 200K | Nuanced understanding | Creative writing, analysis |
| claude-3-7-sonnet | 200K | Latest capabilities | Advanced tasks |
| claude-sonnet-4-20250514 | 200K | Balanced performance | General purpose |
| claude-opus-4-20250514 | 200K | Maximum capability | Complex reasoning |
Google Models
| Model ID | Context Window | Strengths | Use Cases |
|---|---|---|---|
| gemini-2.5-flash | 1M | Ultra-fast, large context | Document analysis |
| gemini-2.5-pro | 2M | Massive context | Research, long documents |
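The context windows above allow a rough pre-flight size check before sending a request. The sketch below is an assumption-laden convenience: it hard-codes the table values and uses the commonly cited ~4 characters per token heuristic, while actual tokenization varies by model.

```javascript
// Context-window sizes (in tokens) taken from the model tables above.
const CONTEXT_WINDOWS = {
  '4o-mini': 128000, '4o': 128000, 'gpt-4.1': 128000, 'gpt-4.1-mini': 128000,
  'o3': 128000, 'o3-mini': 128000,
  'claude-3-5-sonnet': 200000, 'claude-3-7-sonnet': 200000,
  'claude-sonnet-4-20250514': 200000, 'claude-opus-4-20250514': 200000,
  'gemini-2.5-flash': 1000000, 'gemini-2.5-pro': 2000000,
};

// Rough check: will the prompt plus the requested completion fit the window?
// Uses a ~4 chars/token estimate, which is only an approximation.
function fitsContext(model, promptText, maxTokens = 0) {
  const window = CONTEXT_WINDOWS[model];
  if (window === undefined) throw new Error(`Unknown model: ${model}`);
  const estimatedPromptTokens = Math.ceil(promptText.length / 4);
  return estimatedPromptTokens + maxTokens <= window;
}
```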
Function Calling
Available Functions
Data Visualization
createPieChart - Statistical pie charts
createBarChart - Comparative bar graphs
createLineChart - Trend analysis
createAreaChart - Volume visualization
createRadarChart - Multi-dimensional comparison
createRadialChart - Circular data representation
Utility Functions
getWeather - Real-time weather data
convertCurrency - Exchange rate conversion
translateText - Multi-language translation
webSearch - Internet search integration
createDocument - Structured document generation
executePython - Code execution sandbox
GitHub Tools (requires GitHub connection)
Important: GitHub tools require a one-time setup through the web interface. The account associated with your API key must have GitHub connected before using these tools.
Setup Process:
Log into the AI Magicx web application
Navigate to MCP Chat settings
Connect your GitHub account (Professional plan required)
Select repositories to grant access
Use the same account's API key for GitHub tool access
Available GitHub Tools:
github_read_file - Read files from repositories
github_list_directory - List directory contents
github_search_code - Search for code patterns
github_semantic_search_code - Semantic code search
github_get_repository_info - Get repository details
github_create_issue - Create new issues
github_list_issues - List repository issues
github_get_issue - Get issue details
github_update_issue - Update existing issues
github_list_pull_requests - List pull requests
github_get_pull_request - Get pull request details
github_create_pull_request - Create new pull requests
github_merge_pull_request - Merge pull requests
github_list_commits - List repository commits
github_list_branches - List repository branches
github_create_repository - Create new repositories
github_create_or_update_file - Create or update files
github_delete_file - Delete repository files
github_create_branch - Create new branches
github_fork_repository - Fork a repository
AI-Powered GitHub Tools
ai_code_review - Automated code review
ai_documentation_generator - Generate documentation
ai_issue_analysis - Analyze issue patterns
ai_assignee_suggestion - Suggest issue assignees
ai_related_issues_finder - Find related issues
ai_issue_template_generator - Generate issue templates
analyze_repository_architecture - Analyze repository structure
Error Handling: If GitHub is not connected, you'll receive:
{
  "error": {
    "message": "GitHub not connected for this account. Please connect your GitHub account first."
  }
}
Function Usage
{
  "message": "What's the weather in Tokyo and create a temperature chart for the week",
  "model": "4o-mini",
  "tools": ["getWeather", "createLineChart"],
  "temperature": 0.3
}
Response Format
Standard Response
{
  "id": "chat_abc123def456",
  "object": "chat.completion",
  "created": 1736524800,
  "model": "4o-mini",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Based on the current weather data for Tokyo...",
        "tool_calls": [
          {
            "id": "call_weather_123",
            "type": "function",
            "function": {
              "name": "getWeather",
              "arguments": "{\"location\":\"Tokyo\",\"units\":\"metric\"}"
            }
          }
        ]
      },
      "finish_reason": "stop",
      "logprobs": null
    }
  ],
  "usage": {
    "prompt_tokens": 127,
    "completion_tokens": 453,
    "total_tokens": 580,
    "credits_used": 2
  },
  "system_fingerprint": "fp_abc123"
}
Streaming Response (SSE)
When stream: true is set in the request, the API returns a Server-Sent Events (SSE) stream with the following format:
data: {"id":"mgx_resp_abc123","object":"chat.completion.chunk","created":1736524800,"model":"4o-mini","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"id":"call_t2bznXBk5hHx","type":"function","function":{"name":"getWeather","arguments":"{\"location\":\"New York\"}"}}]},"finish_reason":null}]}
data: {"id":"mgx_resp_abc123","object":"chat.completion.chunk","created":1736524800,"model":"4o-mini","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"id":"call_woS6lKzhG5Ff","type":"function","function":{"name":"createBarChart","arguments":"{\"data\":[...]}"}}]},"finish_reason":null}]}
data: {"id":"mgx_resp_abc123","object":"chat.completion.chunk","created":1736524800,"model":"4o-mini","choices":[{"index":0,"delta":{"content":"The current"},"finish_reason":null}]}
data: {"id":"mgx_resp_abc123","object":"chat.completion.chunk","created":1736524800,"model":"4o-mini","choices":[{"index":0,"delta":{"content":" weather in"},"finish_reason":null}]}
data: {"id":"mgx_resp_abc123","object":"chat.completion.chunk","created":1736524800,"model":"4o-mini","choices":[{"index":0,"delta":{},"finish_reason":"stop"}],"usage":{"prompt_tokens":127,"completion_tokens":453,"total_tokens":580,"credits_used":3}}
data: [DONE]
The API fully supports Server-Sent Events (SSE) streaming for real-time response delivery. Set stream: true in your request to enable this feature. Streaming includes real-time tool call notifications with function names and arguments.
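As a sketch, the content chunks in a stream like the one above can be reassembled client-side by concatenating each delta.content. This hypothetical helper ignores tool-call chunks, which carry no content field:

```javascript
// Extracts delta.content from each `data:` line and concatenates the chunks
// into the final assistant text. Stops at the [DONE] sentinel.
function assembleFromSSE(rawLines) {
  let text = '';
  for (const line of rawLines) {
    if (!line.startsWith('data: ')) continue;
    const payload = line.slice(6).trim();
    if (payload === '[DONE]') break;
    const chunk = JSON.parse(payload);
    const delta = chunk.choices?.[0]?.delta ?? {};
    if (delta.content) text += delta.content; // tool_call chunks have no content
  }
  return text;
}
```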
Implementation Examples
Basic Chat Completion
const response = await fetch('https://beta.aimagicx.com/api/v1/chat', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer mgx-sk-prod-your-key',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    message: "Explain quantum computing in simple terms",
    model: "4o-mini",
    temperature: 0.7,
    maxTokens: 500
  })
});

const data = await response.json();
console.log(data.choices[0].message.content);
Streaming Implementation
async function streamChat(message: string) {
  const response = await fetch('https://beta.aimagicx.com/api/v1/chat', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer mgx-sk-prod-your-key',
      'Content-Type': 'application/json',
      'Accept': 'text/event-stream'
    },
    body: JSON.stringify({
      message,
      model: "claude-3-5-sonnet",
      stream: true
    })
  });

  // Handle SSE stream
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop() || '';

    for (const line of lines) {
      if (line.startsWith('data: ')) {
        const data = line.slice(6);
        if (data === '[DONE]') return;
        try {
          const parsed = JSON.parse(data);
          const content = parsed.choices[0].delta.content;
          if (content) process.stdout.write(content);
        } catch (e) {
          console.error('Parse error:', e);
        }
      }
    }
  }
}
Advanced Streaming with Error Handling
class ChatStreamHandler {
  private abortController: AbortController;

  constructor() {
    this.abortController = new AbortController();
  }

  async *streamChat(payload: any): AsyncGenerator<string> {
    const response = await fetch('https://beta.aimagicx.com/api/v1/chat', {
      method: 'POST',
      headers: {
        'Authorization': 'Bearer mgx-sk-prod-your-key',
        'Content-Type': 'application/json',
        'Accept': 'text/event-stream'
      },
      body: JSON.stringify({ ...payload, stream: true }),
      signal: this.abortController.signal
    });

    if (!response.ok) {
      const error = await response.json();
      throw new Error(error.error?.message || 'Request failed');
    }

    // Check if streaming is actually enabled
    if (response.headers.get('content-type')?.includes('application/json')) {
      const data = await response.json();
      yield data.choices[0].message.content;
      return;
    }

    const reader = response.body!.getReader();
    const decoder = new TextDecoder();
    let buffer = '';

    try {
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;

        buffer += decoder.decode(value, { stream: true });
        const lines = buffer.split('\n');
        buffer = lines.pop() || '';

        for (const line of lines) {
          if (line.trim() === '') continue;
          if (line.startsWith('data: ')) {
            const data = line.slice(6);
            if (data === '[DONE]') {
              return;
            }
            try {
              const parsed = JSON.parse(data);
              const content = parsed.choices[0]?.delta?.content;
              if (content) {
                yield content;
              }
              // Check for function calls in stream
              const toolCalls = parsed.choices[0]?.delta?.tool_calls;
              if (toolCalls) {
                yield `\n[Function Call: ${JSON.stringify(toolCalls)}]\n`;
              }
            } catch (e) {
              console.error('Failed to parse SSE data:', e);
            }
          }
        }
      }
    } finally {
      reader.releaseLock();
    }
  }

  cancel() {
    this.abortController.abort();
  }
}

// Usage example
async function main() {
  const handler = new ChatStreamHandler();

  try {
    for await (const chunk of handler.streamChat({
      message: "Explain quantum computing",
      model: "4o-mini",
      temperature: 0.7
    })) {
      process.stdout.write(chunk);
    }
  } catch (error) {
    console.error('Stream error:', error);
  }
}
Function Calling Example
import requests

def analyze_sales_data(sales_figures):
    response = requests.post(
        'https://beta.aimagicx.com/api/v1/chat',
        headers={
            'Authorization': 'Bearer mgx-sk-prod-your-key',
            'Content-Type': 'application/json'
        },
        json={
            'message': f'Analyze these sales figures and create visualizations: {sales_figures}',
            'model': 'claude-3-7-sonnet',
            'tools': ['createBarChart', 'createLineChart', 'createPieChart'],
            'temperature': 0.2,
            'system': 'You are a data analyst. Provide insights and create appropriate visualizations.'
        }
    )
    result = response.json()
    return {
        'analysis': result['choices'][0]['message']['content'],
        'visualizations': [
            call['function']
            for call in result['choices'][0]['message'].get('tool_calls', [])
        ]
    }
GitHub Integration
To use GitHub tools via the API, you must first connect your GitHub account through the web interface. This is a one-time setup per account.
Prerequisites
Professional Plan: GitHub integration requires a Professional plan
Web Setup: GitHub OAuth must be completed through the web UI
Repository Access: Grant access to specific repositories during setup
Setup Instructions
Log into AI Magicx at https://beta.aimagicx.com
Navigate to MCP Chat → Settings → GitHub Integration
Click "Connect GitHub" and authorize the application
Select repositories you want to grant access to
Use your account's API key - GitHub access is tied to the account
Using GitHub Tools
Once connected, simply include GitHub tools in your API requests:
const response = await fetch('https://beta.aimagicx.com/api/v1/chat', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer mgx-sk-prod-your-key',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    message: "Analyze the repository structure and create documentation",
    model: "claude-3-5-sonnet",
    tools: [
      "github_get_repository_info",
      "github_list_directory",
      "ai_documentation_generator"
    ],
    metadata: {
      repository: "owner/repo" // Optional: specify target repository
    }
  })
});
Error Handling
If GitHub is not connected, the API will return an error:
{
"success" : false ,
"error" : {
"code" : "GITHUB_NOT_CONNECTED" ,
"message" : "GitHub not connected for this account. Please connect your GitHub account first." ,
"details" : {
"account_id" : "your-account-id" ,
"setup_url" : "https://beta.aimagicx.com/ai/mcp-chat"
}
}
}
Security Notes
GitHub tokens are encrypted using AES-256-GCM
Tokens are tied to team accounts, not individual users
OAuth scopes are limited to repository access
Connections can be revoked at any time through the web UI
Credit Consumption
Pricing Structure
| Operation | Credits |
|---|---|
| Base chat request | 1 |
| Per function executed | +1 |
| Long context (>32K tokens) | +1 |
How Credits Work
Credits are calculated based on actual tool execution, not just availability
Streaming and non-streaming requests consume the same credits
Credits are deducted when the request completes
The final usage chunk in streaming includes credits_used
Only executed tools consume credits, not just available ones
Examples
Simple query: 1 credit
Query with weather check: 2 credits (1 base + 1 tool executed)
Analysis with 3 charts: 4 credits (1 base + 3 tools executed)
Query with 5 tools available but only 2 used: 3 credits
Query with 20 GitHub tools available but only 3 executed: 4 credits
Query with multiple tools available but none executed: 1 credit
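The rules above can be sketched as a small estimator. This is a hypothetical helper for budgeting purposes only; actual billing is computed server-side and reported in the usage.credits_used field.

```javascript
// Sketch of the documented credit formula: 1 base credit, +1 per tool actually
// executed, +1 when the prompt exceeds the 32K long-context threshold.
// Tool availability alone never adds cost.
function estimateCredits({ executedTools = 0, promptTokens = 0 } = {}) {
  let credits = 1;                          // base chat request
  credits += executedTools;                 // +1 per function executed
  if (promptTokens > 32000) credits += 1;   // long-context surcharge
  return credits;
}
```

For example, a query that executes getWeather once estimates to 2 credits, matching the "1 base + 1 tool executed" example above.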
Error Handling
Common Error Responses
Invalid Model
{
"success" : false ,
"error" : {
"code" : "MODEL_NOT_FOUND" ,
"message" : "Model 'xyz' is not available" ,
"status" : 400 ,
"details" : {
"requested_model" : "xyz" ,
"available_models" : [ "4o-mini" , "4o" , "claude-3-5-sonnet" , ... ]
}
}
}
Rate Limit Exceeded
{
"success" : false ,
"error" : {
"code" : "RATE_LIMIT_EXCEEDED" ,
"message" : "Rate limit exceeded. Please retry after cooldown." ,
"status" : 429 ,
"details" : {
"limit" : 60 ,
"window" : "1m" ,
"retry_after" : 15
}
}
}
Insufficient Credits
{
"success" : false ,
"error" : {
"code" : "INSUFFICIENT_CREDITS" ,
"message" : "Insufficient credits for this operation" ,
"status" : 402 ,
"details" : {
"required" : 4 ,
"available" : 2 ,
"breakdown" : {
"base_cost" : 1 ,
"tools_cost" : 3
},
"overage_enabled" : true ,
"overage_rate" : 0.01
}
}
}
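For illustration, a client could interpret this error payload to estimate the overage charge before retrying. This hypothetical helper assumes overage_rate is priced per missing credit, which is an inference from the field names, not a documented guarantee.

```javascript
// Interprets the INSUFFICIENT_CREDITS error details shown above. Returns the
// estimated overage cost when overage billing is enabled, or null when the
// request would simply fail.
function overageCost(errorDetails) {
  const { required, available, overage_enabled, overage_rate } = errorDetails;
  if (!overage_enabled) return null;
  const shortfall = Math.max(0, required - available);
  return shortfall * overage_rate;
}
```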
Error Recovery Pattern
async function robustChatRequest(payload, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const response = await fetch('/api/v1/chat', {
        method: 'POST',
        headers: { /* ... */ },
        body: JSON.stringify(payload)
      });

      if (response.status === 429) {
        // Retry-After arrives as a header string; coerce it to a number
        const retryAfter = Number(response.headers.get('Retry-After')) || 60;
        await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
        continue;
      }

      if (!response.ok) {
        const error = await response.json();
        throw new Error(error.error.message);
      }

      return await response.json();
    } catch (error) {
      if (attempt === maxRetries - 1) throw error;
      await new Promise(resolve => setTimeout(resolve, Math.pow(2, attempt) * 1000));
    }
  }
}
Best Practices
Model Selection Guide
High Volume, Simple Tasks
Use: 4o-mini, gemini-2.5-flash
Optimize for speed and cost

General Purpose
Use: 4o, claude-3-5-sonnet
Balance performance and cost

Complex Reasoning
Use: o3, claude-opus-4-20250514
Maximize capability for critical tasks

Large Context Processing
Use: gemini-2.5-pro (2M context)
Process entire documents or codebases
Temperature Guidelines
| Use Case | Temperature | Description |
|---|---|---|
| Factual Q&A | 0.0-0.3 | Deterministic, consistent |
| General Chat | 0.5-0.8 | Balanced creativity |
| Creative Writing | 0.8-1.2 | Higher variability |
| Brainstorming | 1.0-1.5 | Maximum creativity |
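These guideline ranges can be captured as presets. The midpoint values chosen below are an arbitrary convenience within each documented range, not API recommendations:

```javascript
// Convenience presets derived from the temperature guideline table.
const TEMPERATURE_PRESETS = {
  factualQA: 0.2,        // 0.0-0.3: deterministic, consistent
  generalChat: 0.7,      // 0.5-0.8: balanced creativity
  creativeWriting: 1.0,  // 0.8-1.2: higher variability
  brainstorming: 1.2,    // 1.0-1.5: maximum creativity
};

// Falls back to 0.7, the API's documented default temperature.
function temperatureFor(useCase) {
  const t = TEMPERATURE_PRESETS[useCase];
  return t === undefined ? 0.7 : t;
}
```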
Performance Optimization
Token Management
// Estimate tokens before sending (~4 characters per token heuristic)
const estimatedTokens = message.length / 4;
const maxTokens = Math.min(4096, estimatedTokens * 3);
Response Caching
const crypto = require('crypto'); // Node.js built-in

const cache = new Map();
const cacheKey = crypto.createHash('md5')
  .update(JSON.stringify({ message, model, temperature }))
  .digest('hex');
Batch Processing
// Process multiple queries efficiently
const results = await Promise.all(
  queries.map(q => chatAPI(q))
);
Server-Sent Events (SSE) Streaming
SSE Protocol Details
When streaming is enabled, the API uses Server-Sent Events to deliver real-time responses. Each event follows this structure:
Event Types
Content Chunks
{
"id" : "chat_abc123" ,
"object" : "chat.completion.chunk" ,
"created" : 1736524800 ,
"model" : "4o-mini" ,
"choices" : [{
"index" : 0 ,
"delta" : {
"content" : "partial response text"
},
"finish_reason" : null
}]
}
Function Call Chunks
{
  "id": "chat_abc123",
  "object": "chat.completion.chunk",
  "choices": [{
    "delta": {
      "tool_calls": [{
        "index": 0,
        "id": "call_123",
        "type": "function",
        "function": {
          "name": "getWeather",
          "arguments": "{\"location\":"
        }
      }]
    }
  }]
}
Final Chunk with Usage
{
"id" : "chat_abc123" ,
"object" : "chat.completion.chunk" ,
"choices" : [{
"delta" : {},
"finish_reason" : "stop"
}],
"usage" : {
"prompt_tokens" : 127 ,
"completion_tokens" : 453 ,
"total_tokens" : 580 ,
"credits_used" : 1
}
}
Stream Termination
The stream ends with a final data: [DONE] event. Stop reading and close the connection once it arrives.
Streaming Best Practices
Connection Management
Set appropriate timeouts (30-60 seconds recommended)
Implement retry logic for network interruptions
Use AbortController for cancellation
Buffer Handling
Process partial chunks correctly
Handle multi-byte UTF-8 characters at chunk boundaries
Clear buffers after processing
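A minimal sketch of UTF-8-safe buffering: a streaming TextDecoder holds back incomplete multi-byte sequences across chunk boundaries, and a line buffer carries partial lines to the next chunk. It relies only on the TextDecoder/TextEncoder globals available in Node.js and browsers.

```javascript
// Minimal SSE line buffer: decodes chunks with a streaming TextDecoder (so a
// multi-byte UTF-8 character split across two network chunks is reassembled)
// and only emits complete lines, carrying partial lines over in `buffer`.
class SSELineBuffer {
  constructor() {
    this.decoder = new TextDecoder();
    this.buffer = '';
  }

  // Accepts a Uint8Array chunk and returns the complete lines it produced.
  push(chunk) {
    this.buffer += this.decoder.decode(chunk, { stream: true });
    const lines = this.buffer.split('\n');
    this.buffer = lines.pop() || '';
    return lines;
  }
}
```

Without `{ stream: true }`, a chunk ending mid-character would decode to a replacement character and corrupt the output.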
Tool Call Processing
Tool calls arrive before the text response
Parse and store tool call information for reference
Tool arguments are sent as complete JSON strings
Multiple tools may be called in sequence
Error Recovery
async function* streamWithRetry(payload: any, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const handler = new ChatStreamHandler();
      let fullResponse = '';
      for await (const chunk of handler.streamChat(payload)) {
        fullResponse += chunk;
        yield chunk;
      }
      return fullResponse;
    } catch (error) {
      if (attempt === maxRetries - 1) throw error;
      // Exponential backoff; note that a retry restarts the stream from the beginning
      await new Promise(resolve =>
        setTimeout(resolve, Math.pow(2, attempt) * 1000)
      );
    }
  }
}
Client-Side Implementation Tips
Use fetch with a ReadableStream reader in browsers (the native EventSource API supports GET requests only, so it cannot send this POST body)
Implement heartbeat detection for connection health
Queue chunks for smooth UI updates
Testing Streaming with cURL
curl -X POST https://beta.aimagicx.com/api/v1/chat \
-H "Authorization: Bearer mgx-sk-prod-your-key" \
-H "Content-Type: application/json" \
-H "Accept: text/event-stream" \
-d '{
"message": "Check the weather in Paris and create a chart",
"model": "4o-mini",
"stream": true,
"tools": ["getWeather", "createBarChart"]
}' \
--no-buffer
Handling Tool Calls in Streaming
When tools are enabled in a streaming request, tool calls are sent as separate chunks before the text response:
// Example: Processing streaming tool calls
// (readStreamLines is assumed to be a helper that yields decoded SSE lines)
const toolCalls = new Map();

for await (const line of readStreamLines(response)) {
  if (line.startsWith('data: ')) {
    const data = line.slice(6);
    if (data === '[DONE]') break;

    const chunk = JSON.parse(data);

    // Handle tool calls
    if (chunk.choices[0]?.delta?.tool_calls) {
      chunk.choices[0].delta.tool_calls.forEach(tc => {
        console.log(`Tool: ${tc.function.name}`);
        console.log(`Arguments: ${tc.function.arguments}`);
        // Store tool call info
        toolCalls.set(tc.id, {
          name: tc.function.name,
          arguments: tc.function.arguments
        });
      });
    }

    // Handle text content
    if (chunk.choices[0]?.delta?.content) {
      process.stdout.write(chunk.choices[0].delta.content);
    }
  }
}
React Streaming Example
import { useState, useCallback } from 'react';

function ChatComponent() {
  const [response, setResponse] = useState('');
  const [isStreaming, setIsStreaming] = useState(false);

  const streamChat = useCallback(async (message: string) => {
    setIsStreaming(true);
    setResponse('');

    try {
      const response = await fetch('/api/v1/chat', {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${apiKey}`,
          'Content-Type': 'application/json',
          'Accept': 'text/event-stream'
        },
        body: JSON.stringify({
          message,
          model: 'claude-3-5-sonnet',
          stream: true
        })
      });

      const reader = response.body!.getReader();
      const decoder = new TextDecoder();
      let buffer = '';

      while (true) {
        const { done, value } = await reader.read();
        if (done) break;

        // Buffer partial lines so a JSON payload split across chunks parses correctly
        buffer += decoder.decode(value, { stream: true });
        const lines = buffer.split('\n');
        buffer = lines.pop() || '';

        for (const line of lines) {
          if (line.startsWith('data: ')) {
            const data = line.slice(6);
            if (data === '[DONE]') continue;
            try {
              const parsed = JSON.parse(data);
              const content = parsed.choices[0]?.delta?.content || '';
              setResponse(prev => prev + content);
            } catch (e) {
              // Skip invalid JSON
            }
          }
        }
      }
    } catch (error) {
      console.error('Streaming error:', error);
    } finally {
      setIsStreaming(false);
    }
  }, []);

  return (
    <div>
      <button onClick={() => streamChat('Hello!')}>
        Start Chat
      </button>
      <div>{response}</div>
      {isStreaming && <div>Streaming...</div>}
    </div>
  );
}
OpenAI SDK Compatibility
The Chat API is fully compatible with the OpenAI SDK, allowing you to use existing OpenAI client libraries with minimal changes.
Installation
npm install openai
# or
yarn add openai
# or
pnpm add openai
Basic Usage
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.AIMAGICX_API_KEY,
  baseURL: 'https://beta.aimagicx.com/api/v1',
});

async function main() {
  const completion = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Hello! How are you?' }],
    model: '4o-mini',
    temperature: 0.7,
  });

  console.log(completion.choices[0].message.content);
}
Streaming with OpenAI SDK
const stream = await openai.chat.completions.create({
  messages: [{ role: 'user', content: 'Tell me a story' }],
  model: 'claude-3-5-sonnet',
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
Function Calling with OpenAI SDK
const completion = await openai.chat.completions.create({
  messages: [{ role: 'user', content: 'What is the weather in Paris?' }],
  model: '4o-mini',
  tools: [
    {
      type: 'function',
      function: {
        name: 'getWeather',
        description: 'Get weather information for a location',
        parameters: {
          type: 'object',
          properties: {
            location: { type: 'string' },
            units: { type: 'string', enum: ['celsius', 'fahrenheit'] }
          },
          required: ['location']
        }
      }
    }
  ],
  tool_choice: 'auto',
});

// Handle tool calls
if (completion.choices[0].message.tool_calls) {
  for (const toolCall of completion.choices[0].message.tool_calls) {
    console.log(`Tool: ${toolCall.function.name}`);
    console.log(`Arguments: ${toolCall.function.arguments}`);
  }
}
Python SDK Usage
from openai import OpenAI

client = OpenAI(
    api_key="mgx-sk-prod-your-key",
    base_url="https://beta.aimagicx.com/api/v1"
)

completion = client.chat.completions.create(
    model="claude-3-5-sonnet",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing"}
    ],
    temperature=0.7,
    max_tokens=1000
)

print(completion.choices[0].message.content)
Differences from OpenAI API
While the API is largely compatible, there are some differences:
Model Names: Use AI Magicx model names (e.g., 4o-mini, claude-3-5-sonnet) instead of OpenAI model names
Tool Names: When using tools, specify the exact tool names from our available tools list
Credits: The usage object includes a credits_used field showing API credit consumption
Additional Features: Support for models from multiple providers (OpenAI, Anthropic, Google)
Compatibility Notes
✅ Chat completions endpoint
✅ Streaming support
✅ Function/tool calling
✅ System messages
✅ Temperature and max_tokens controls
❌ Embeddings endpoint (use dedicated endpoint)
❌ Fine-tuning endpoints
❌ Assistants API
Advanced Features
Context Management
// Maintain conversation context
const conversation = [
  { role: "system", content: "You are a helpful assistant" },
  { role: "user", content: "Previous message" },
  { role: "assistant", content: "Previous response" }
];

// Add new message with context
const contextualRequest = {
  message: "Follow up question",
  model: "claude-3-5-sonnet",
  context: conversation,
  maxTokens: 2000
};
Multi-Modal Support (Coming Soon)
Image understanding
Audio transcription
Video analysis
Document parsing
Next Steps
For technical support, contact contact@aimagicx.com
Last modified on July 12, 2025