
AI Core

Core AI infrastructure module that provides AI provider management, request routing, usage tracking, and foundational services for all AI-powered features in the platform.

Features

  • AI Provider Management - Configure and manage multiple AI providers (OpenAI included by default; Gemini, Local AI available as addons)
  • AI Model Management - View, configure, and test available AI models per provider
  • Module-Model Mapping - Assign specific AI models to specific modules for fine-grained control
  • Usage Analytics - Monitor AI usage, costs, and performance across all modules with export capability
  • Request Logging - Full audit log of all AI requests with filtering, flagging, and review
  • AI Settings - System-wide settings for AI features including rate limiting, cost controls, caching, and security
  • AI Dashboard - Overview of AI activity and statistics

Requirements

| Requirement | Details |
| --- | --- |
| Dependencies | SystemCore |
| PHP Version | 8.2+ |
| AI Provider | At least one AI provider configured (OpenAI included by default) |

Installation

AI Core is a core module that is always enabled when purchased. It cannot be toggled on or off as other AI modules depend on it.

After installation, configure at least one AI provider with valid API credentials and set default provider preferences for different modules.

Configuration

AI Settings

Navigate to AI Core > Settings to configure:

General:

  • Enable AI Features - Enable or disable AI functionality system-wide
  • Default Temperature - Default sampling temperature for AI responses (0-2); higher values produce more varied output
  • Default Max Tokens - Default maximum tokens for AI responses
  • Request Timeout - Maximum time to wait for AI API responses
  • Daily Token Limit - Maximum tokens per day per company (0 = unlimited)

Cost Controls:

  • Monthly Budget (USD) - Maximum monthly spending on AI services (0 = unlimited)
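The daily token limit and monthly budget share the same "0 = unlimited" convention. A minimal sketch of how such pre-request checks could work (the function names and signatures are illustrative, not AI Core's actual internals):

```python
def within_daily_tokens(used_today: int, request_tokens: int, daily_limit: int) -> bool:
    """True if the request fits today's token allowance (0 = unlimited)."""
    return daily_limit == 0 or used_today + request_tokens <= daily_limit


def within_monthly_budget(spent_usd: float, request_cost_usd: float,
                          monthly_budget_usd: float) -> bool:
    """True if the request fits the monthly budget (0 = unlimited)."""
    return monthly_budget_usd == 0 or spent_usd + request_cost_usd <= monthly_budget_usd
```

A request would be rejected (or queued) when either check fails, before any provider API call is made.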

Rate Limiting:

  • Enable Rate Limiting - Enable rate limiting for AI requests
  • Global Rate Limit - Maximum requests per minute system-wide
  • User Rate Limit - Maximum requests per minute per user

Security:

  • Log AI Requests - Log all AI requests for monitoring and debugging
  • Data Retention (days) - How long to keep AI request logs (0 = forever)
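The retention setting implies a periodic pruning job. A sketch of the idea, assuming logs are (timestamp, record) pairs (the real log store schema may differ):

```python
from datetime import datetime, timedelta


def prune_logs(logs, retention_days: int, now: datetime):
    """Keep only logs within the retention window; 0 days means keep forever."""
    if retention_days == 0:
        return list(logs)
    cutoff = now - timedelta(days=retention_days)
    return [(ts, rec) for ts, rec in logs if ts >= cutoff]
```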

Cache:

  • Enable Response Caching - Cache AI responses for similar requests
  • Cache TTL (seconds) - How long to cache AI responses

Provider Management

Navigate to AI Core > Providers to manage AI providers. You can add, edit, test, and remove providers. Each provider can be tested for connectivity.

Model Management

Navigate to AI Core > Models to view and manage AI models. You can test individual models and configure their settings.

Module-Model Configuration

Navigate to AI Core > Module Configuration to assign specific AI providers and models to individual modules. You can sync available modules, toggle module AI status, and select which provider/model each module should use.

Usage

Dashboard

Navigate to AI Core > Dashboard to view an overview of AI activity across your platform, including:

  • Total active providers and models
  • Usage statistics for today, this week, and this month (requests, tokens, cost)
  • Provider connection status
  • Recent AI activity log
  • Cost trends over the last 30 days
  • Top models by usage

Managing Providers

Navigate to AI Core > Providers to manage AI providers:

  • View all enabled providers and their associated models
  • Add a new provider by selecting a type and entering API credentials
  • Edit existing provider settings (name, API key, endpoint URL, rate limits, cost per token, priority)
  • Test provider connectivity to verify API keys are valid
  • View available provider addons that can be installed (e.g., Gemini Provider, Local AI Provider)
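Since each provider carries a priority setting, request routing can pick the highest-priority enabled provider. A sketch under the assumption that a lower priority number means higher precedence (the actual field names and ordering convention may differ):

```python
def pick_provider(providers):
    """Return the enabled provider with the highest precedence, or None."""
    candidates = [p for p in providers if p.get("enabled")]
    if not candidates:
        return None
    return min(candidates, key=lambda p: p["priority"])
```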

Managing Models

Navigate to AI Core > Models to view and manage AI models:

  • Browse all available models across providers
  • Test individual models to verify they are working
  • Configure model settings

Module-Model Configuration

Navigate to AI Core > Module Configuration to control which AI provider and model each module uses:

  • Sync Modules - Detect all AI-enabled modules in the system
  • Toggle Status - Enable or disable AI for individual modules
  • Assign Provider/Model - Select which provider and model each module should use
  • Per-Module Control - Fine-grained control over AI behavior across different features
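Conceptually, the module-model mapping is a lookup with a system-wide fallback: a module with no explicit entry uses the default provider/model, and a module with AI toggled off gets nothing. The config shape and default values below are assumptions for illustration:

```python
# Hypothetical system default; gpt-4o-mini is one of the bundled OpenAI models.
DEFAULT = {"provider": "openai", "model": "gpt-4o-mini"}


def resolve_model(module_config: dict, module_name: str, default: dict = DEFAULT):
    """Resolve which provider/model a module should use."""
    entry = module_config.get(module_name)
    if entry is None:
        return dict(default)          # unmapped module: system default
    if not entry.get("enabled", True):
        return None                   # AI disabled for this module
    return {"provider": entry.get("provider", default["provider"]),
            "model": entry.get("model", default["model"])}
```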

Usage Analytics

Navigate to AI Core > Usage to monitor AI usage:

  • View summary statistics (total requests, tokens, cost, average response time, success rate)
  • Filter by time period, provider, or module
  • View usage trends over time with charts
  • See module-level and provider-level cost breakdowns
  • Browse recent usage logs with details
  • Export usage data as JSON reports
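The summary statistics above are straightforward aggregations over usage records. A sketch, with the record field names assumed for illustration:

```python
def summarize_usage(records):
    """Aggregate usage records into summary statistics."""
    total = len(records)
    ok = sum(1 for r in records if r["success"])
    return {
        "requests": total,
        "tokens": sum(r["tokens"] for r in records),
        "cost": round(sum(r["cost"] for r in records), 6),
        "avg_response_ms": sum(r["response_ms"] for r in records) / total if total else 0.0,
        "success_rate": ok / total if total else 0.0,
    }
```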

Request Logs

Navigate to AI Core > Logs to audit AI requests:

  • Browse all AI request logs with DataTable filtering
  • View detailed information for individual requests
  • Flag suspicious or noteworthy requests for review
  • Mark requests as reviewed
  • Export logs
  • View log statistics

Settings

Navigate to AI Core > Settings to configure system-wide AI settings. See the Configuration section above for details. Settings can be reset to defaults if needed.

Available Provider Addons

AI Core includes OpenAI as the default provider. Additional providers, such as the Gemini Provider and Local AI Provider, are available as separate addons.

AI-Powered Modules

AI-powered modules across the platform rely on AI Core for provider access, model selection, usage tracking, and request logging.

Included OpenAI Models

AI Core comes with OpenAI provider pre-configured. Available models:

| Model Family | Models | Description |
| --- | --- | --- |
| GPT-5.2 Series | gpt-5.2-instant, gpt-5.2-thinking, gpt-5.2-pro, gpt-5.2-codex | Latest GPT models with 400K context |
| o3 Series | o3-pro, o3, o3-mini, o4-mini | Advanced reasoning models |
| GPT-4.1 Series | gpt-4.1, gpt-4.1-mini, gpt-4.1-nano | 1M token context models |
| GPT-4o Series | gpt-4o, gpt-4o-mini | Multimodal with vision/audio |
| Embeddings | text-embedding-3-large, text-embedding-3-small | Semantic search |
| Image | dall-e-3, dall-e-2 | Image generation |
| Audio | whisper-1, tts-1, tts-1-hd | Transcription & text-to-speech |

Web Routes

| Method | Route | Description |
| --- | --- | --- |
| GET | /aicore/dashboard | AI Dashboard |
| GET | /aicore/providers | List providers |
| POST | /aicore/providers/{provider}/test | Test provider connection |
| GET | /aicore/models | List models |
| POST | /aicore/models/{model}/test | Test a model |
| GET | /aicore/usage | Usage analytics |
| GET | /aicore/usage/export | Export usage data |
| GET | /aicore/settings | AI settings |
| PUT | /aicore/settings | Update settings |
| GET | /aicore/module-configuration | Module-model mapping |
| GET | /aicore/logs | Request logs |

API Endpoints

| Method | Endpoint | Description |
| --- | --- | --- |
| POST | /api/ai/chat | Send chat request |
| POST | /api/ai/complete | Send completion request |
| POST | /api/ai/summarize | Summarize content |
| POST | /api/ai/extract | Extract data from content |
| GET | /api/ai/usage | Get usage statistics |

Admin-only endpoints (require the admin or super_admin role):

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | /api/ai/admin/providers | List providers |
| POST | /api/ai/admin/providers/{provider}/test | Test provider connection |
| GET | /api/ai/admin/providers/{provider}/usage | Get provider usage |
| GET | /api/ai/admin/usage/reports | Usage reports |
| GET | /api/ai/admin/usage/trends | Usage trends |

Notes

  • AI Core is the foundation for all AI-powered modules
  • OpenAI provider is included; additional providers (Gemini, Local AI) are available as separate addons
  • Usage is tracked and can be limited per user or system-wide
  • All AI interactions are logged for audit purposes when request logging is enabled
  • Rate limiting protects against excessive API usage
