INTEGRATION START

Keep the first-call workflow simple, then move into the control plane.

The shortest path is still: create a key, use the base URL, send one successful request, then validate cost and route behavior in the console.

SUPPORTED INTERFACES

Use one integration surface across the model families teams actually deploy.

This platform keeps mainstream chat, reasoning, image, video, and audio interfaces behind one gateway. The docs stay focused on first-call shape, while the console handles keys, routing, logs, and billing.

OPENAI LINE GPT, o-series, embeddings, image, and speech-style request patterns

OpenAI-compatible access remains the default migration path for many existing applications.

ANTHROPIC / CLAUDE Reasoning-focused and long-context model access

Keep Claude-family workloads visible in the same integration and operations layer.

GOOGLE / GEMINI Gemini-style multimodal and reasoning endpoints

Cover multimodal and reasoning workflows without fragmenting your external API surface.

DEEPSEEK / QWEN / OPEN MODELS Open-model, reasoning, and coding-oriented interfaces

Compare newer open-model routes and lower-cost surfaces inside the same gateway contract.

IMAGE / VIDEO Visual generation and editing-style endpoints

Run image and video generation next to text workloads under one key and usage view.

AUDIO / VOICE Speech, voice, and audio generation interfaces

Bring voice and audio workloads into the same billing, logging, and routing workflow.

Authentication

Create an API key in the console, then send it as a Bearer token in the Authorization header.

Authorization: Bearer YOUR_API_KEY
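In client code, the same header can be assembled from an environment variable so the key never lands in source. A minimal Python sketch (the `XLEARN_API_KEY` variable name is an assumption for illustration, not a platform convention):

```python
import os

# Read the key from the environment; fall back to the docs placeholder.
# XLEARN_API_KEY is a hypothetical variable name, not a documented one.
api_key = os.environ.get("XLEARN_API_KEY", "YOUR_API_KEY")

headers = {"Authorization": f"Bearer {api_key}"}
print(headers["Authorization"])
```

Rotating the key then only requires updating the environment, not the code.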

Base URL

Use the platform endpoint as your single integration surface.

https://api.xlearn.space/v1
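Endpoint paths resolve against this one base URL. A small sketch with Python's standard library; note that `urljoin` needs the trailing slash on the base to keep the `/v1` segment (the `chat/completions` path is taken from the request example below; other paths are not listed here):

```python
from urllib.parse import urljoin

# Trailing slash matters: without it, urljoin would drop the /v1 segment.
BASE_URL = "https://api.xlearn.space/v1/"

chat_url = urljoin(BASE_URL, "chat/completions")
print(chat_url)  # https://api.xlearn.space/v1/chat/completions
```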

OpenAI-style request example

curl https://api.xlearn.space/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-4.1",
    "messages": [
      { "role": "system", "content": "You are a concise release assistant." },
      { "role": "user", "content": "Draft a launch note for a new image generation feature." }
    ]
  }'
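The same call can be sketched in Python using only the standard library. This mirrors the curl example above; `YOUR_API_KEY` is the usual placeholder, and the commented-out send is left to the reader since it requires a live key:

```python
import json
import urllib.request

# Same body as the curl example: a system instruction plus one user turn.
payload = {
    "model": "gpt-4.1",
    "messages": [
        {"role": "system", "content": "You are a concise release assistant."},
        {"role": "user", "content": "Draft a launch note for a new image generation feature."},
    ],
}

req = urllib.request.Request(
    "https://api.xlearn.space/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_KEY",
    },
    method="POST",
)

# To actually send (needs a real key):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```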

Recommended first-call workflow

  1. Open the console and create a dedicated key for your first workload.
  2. Review the model list and choose the capability that matches the task.
  3. Send a single test request from the docs example.
  4. Return to the console to inspect usage and billing, then make follow-up adjustments.
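Step 3 is worth a quick programmatic check before moving on. A minimal sketch, assuming the response follows the OpenAI-style chat-completion schema shown in the example above:

```python
# Smoke check for the first call: did a non-empty assistant message
# come back? The "choices"/"message" shape is the OpenAI-style schema
# assumed by the request example; other interfaces may differ.
def first_call_ok(response: dict) -> bool:
    choices = response.get("choices", [])
    if not choices:
        return False
    message = choices[0].get("message", {})
    return message.get("role") == "assistant" and bool(message.get("content"))

sample = {"choices": [{"message": {"role": "assistant", "content": "Launch note..."}}]}
print(first_call_ok(sample))  # True
```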

Operate in the console

Logs

Validate request quality, failures, and spend after the first call.
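If you export log entries for analysis, a per-model tally of calls, failures, and spend is a useful first cut. The entry fields below (`model`, `status`, `cost`) are assumptions for illustration; the real console export format may differ:

```python
from collections import defaultdict

# Tally calls, HTTP-level failures (status >= 400), and spend per model.
# The log-entry schema here is assumed, not taken from the console docs.
def summarize(entries):
    totals = defaultdict(lambda: {"calls": 0, "failures": 0, "cost": 0.0})
    for entry in entries:
        t = totals[entry["model"]]
        t["calls"] += 1
        t["cost"] += entry.get("cost", 0.0)
        if entry.get("status", 200) >= 400:
            t["failures"] += 1
    return dict(totals)

logs = [
    {"model": "gpt-4.1", "status": 200, "cost": 0.012},
    {"model": "gpt-4.1", "status": 429, "cost": 0.0},
]
print(summarize(logs))
```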

Open usage logs

Next steps