Smart Activity Hub
Logout
Admin Authentication
This area requires admin authentication. Please enter your admin token to continue.
Admin Token
Dashboard
Create Endpoint
View Logs
Name
API Path
Model
Requests
Tokens
Status
Actions
Endpoint Status
?
When enabled, this endpoint will accept requests. When disabled, it will return an error.
ON
OFF
📝 Basic Settings
▼
Name
?
Unique identifier for your endpoint. This will be used in the API URL path.
Description
?
Optional description to help you remember what this endpoint is for.
Model
?
Enter the AI model name from OpenRouter (e.g., openai/gpt-4o-mini). Type to see suggestions or browse all available models using the link below.
DeepSeek: DeepSeek V3.1
Google: Gemini 2.5 Flash Lite
Mistral: Ministral 3B
Mistral: Ministral 8B
Mistral: Mistral Nemo
Mistral: Mistral Nemo (free)
OpenAI: GPT-4o Mini
OpenAI: GPT-5 Nano
OpenAI: gpt-oss-20b
OpenAI: gpt-oss-20b (free)
OpenAI: gpt-oss-120b
Qwen: Qwen3 4B (free)
Qwen: Qwen3 8B
Qwen: Qwen3 8B (free)
Browse OpenRouter Models →
System Prompt
?
Instructions that tell the AI how to behave and respond. This shapes the personality and capabilities of your endpoint.
🔐 Advanced Settings
▶
CORS Origins (comma-separated)
?
Comma-separated list of origins (websites) allowed to make requests to this endpoint. Use * to allow all origins.
Max Tokens
?
Maximum number of tokens (roughly word fragments) the AI can use in its response. Higher values allow longer responses.
Reasoning Effort
?
Controls how much effort the AI puts into reasoning before responding. Higher effort can produce better answers but takes longer and uses more tokens.
Low
Medium
High
Max Reasoning Tokens
?
Maximum number of tokens the AI can spend on its internal reasoning before producing the final answer.
Show reasoning
?
When enabled, the AI's step-by-step reasoning process will be included in the response before the final answer.
ON
OFF
Test Endpoint
Save Endpoint
Export CSV
Time
Endpoint
Model
Tokens
Response Time
Status
Actions
← Previous
Page 1
Next →
Log Details
×
Request Information
Timestamp:
Endpoint:
Model:
Status Code:
Response Time:
IP Address:
Request Origin:
Token Usage
Input Tokens:
Output Tokens:
Total Tokens:
System Prompt
▶
Conversation
Error Message
Test Endpoint
×
Testing with configuration:
Test Message
Send Test Message
Clear Results
Testing endpoint...
Streaming Response