# AI Core
Core AI infrastructure module that provides AI provider management, request routing, usage tracking, and foundational services for all AI-powered features in the platform.
## Features
- AI Provider Management - Configure and manage multiple AI providers (OpenAI included by default; Gemini, Local AI available as addons)
- AI Model Management - View, configure, and test available AI models per provider
- Module-Model Mapping - Assign specific AI models to specific modules for fine-grained control
- Usage Analytics - Monitor AI usage, costs, and performance across all modules with export capability
- Request Logging - Full audit log of all AI requests with filtering, flagging, and review
- AI Settings - System-wide settings for AI features including rate limiting, cost controls, caching, and security
- AI Dashboard - Overview of AI activity and statistics
## Requirements
| Requirement | Details |
|---|---|
| Dependencies | SystemCore |
| PHP Version | 8.2+ |
| AI Provider | At least one AI provider configured (OpenAI included by default) |
## Installation
AI Core is a core module that is always enabled once purchased; it cannot be toggled off because other AI modules depend on it.

After installation, configure at least one AI provider with valid API credentials, then set the default provider and model preferences for each module.
## Configuration
### AI Settings
Navigate to AI Core > Settings to configure:
**General:**
- Enable AI Features - Enable or disable AI functionality system-wide
- Default Temperature - Default creativity level for AI responses (0-2)
- Default Max Tokens - Default maximum tokens for AI responses
- Request Timeout - Maximum time to wait for AI API responses
- Daily Token Limit - Maximum tokens per day per company (0 = unlimited)
**Cost Controls:**
- Monthly Budget (USD) - Maximum monthly spending on AI services (0 = unlimited)
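Both the daily token limit and the monthly budget treat `0` as "unlimited". A guard that combines the two checks might look like this minimal Python sketch (the function name and inputs are illustrative, not the platform's PHP implementation):

```python
def within_limits(tokens_today: int, daily_limit: int,
                  spend_this_month: float, monthly_budget: float) -> bool:
    """Return True if a new AI request may proceed.

    A limit of 0 means unlimited, matching the settings above.
    """
    if daily_limit and tokens_today >= daily_limit:
        return False  # daily token cap reached
    if monthly_budget and spend_this_month >= monthly_budget:
        return False  # monthly budget exhausted
    return True
```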
**Rate Limiting:**
- Enable Rate Limiting - Enable rate limiting for AI requests
- Global Rate Limit - Maximum requests per minute system-wide
- User Rate Limit - Maximum requests per minute per user
**Security:**
- Log AI Requests - Log all AI requests for monitoring and debugging
- Data Retention (days) - How long to keep AI request logs (0 = forever)
**Cache:**
- Enable Response Caching - Cache AI responses for similar requests
- Cache TTL (seconds) - How long to cache AI responses
### Provider Management
Navigate to AI Core > Providers to manage AI providers. You can add, edit, remove, and test providers; each provider can be tested for connectivity before use.
### Model Management
Navigate to AI Core > Models to view and manage AI models. You can test individual models and configure their settings.
### Module-Model Configuration
Navigate to AI Core > Module Configuration to assign specific AI providers and models to individual modules. You can sync available modules, toggle module AI status, and select which provider/model each module should use.
## Usage
### Dashboard
Navigate to AI Core > Dashboard to view an overview of AI activity across your platform, including:
- Total active providers and models
- Usage statistics for today, this week, and this month (requests, tokens, cost)
- Provider connection status
- Recent AI activity log
- Cost trends over the last 30 days
- Top models by usage
### Managing Providers
Navigate to AI Core > Providers to manage AI providers:
- View all enabled providers and their associated models
- Add a new provider by selecting a type and entering API credentials
- Edit existing provider settings (name, API key, endpoint URL, rate limits, cost per token, priority)
- Test provider connectivity to verify API keys are valid
- View available provider addons that can be installed (e.g., Gemini Provider, Local AI Provider)
### Managing Models
Navigate to AI Core > Models to view and manage AI models:
- Browse all available models across providers
- Test individual models to verify they are working
- Configure model settings
### Module-Model Configuration
Navigate to AI Core > Module Configuration to control which AI provider and model each module uses:
- Sync Modules - Detect all AI-enabled modules in the system
- Toggle Status - Enable or disable AI for individual modules
- Assign Provider/Model - Select which provider and model each module should use
- Per-Module Control - Fine-grained control over AI behavior across different features
### Usage Analytics
Navigate to AI Core > Usage to monitor AI usage:
- View summary statistics (total requests, tokens, cost, average response time, success rate)
- Filter by time period, provider, or module
- View usage trends over time with charts
- See module-level and provider-level cost breakdowns
- Browse recent usage logs with details
- Export usage data as JSON reports
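The module- and provider-level cost breakdowns are aggregations over the request logs. A minimal Python sketch of how per-module totals could be computed from log entries (the log fields `module`, `tokens`, and `cost_per_token` are assumed for illustration):

```python
from collections import defaultdict

def cost_breakdown(logs: list[dict]) -> dict[str, dict]:
    """Aggregate request logs into per-module totals (requests, tokens, cost)."""
    totals = defaultdict(lambda: {"requests": 0, "tokens": 0, "cost": 0.0})
    for entry in logs:
        t = totals[entry["module"]]
        t["requests"] += 1
        t["tokens"] += entry["tokens"]
        t["cost"] += entry["tokens"] * entry["cost_per_token"]
    return dict(totals)
```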
### Request Logs
Navigate to AI Core > Logs to audit AI requests:
- Browse all AI request logs with DataTable filtering
- View detailed information for individual requests
- Flag suspicious or noteworthy requests for review
- Mark requests as reviewed
- Export logs
- View log statistics
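Log retention follows the Data Retention setting described under Configuration, where `0` means keep logs forever. A Python sketch of the pruning rule (the log record shape is a hypothetical example):

```python
from datetime import datetime, timedelta

def prune_logs(logs: list[dict], retention_days: int, now: datetime) -> list[dict]:
    """Drop log entries older than the retention window; 0 keeps everything."""
    if retention_days == 0:
        return logs
    cutoff = now - timedelta(days=retention_days)
    return [e for e in logs if e["created_at"] >= cutoff]
```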
### Settings
Navigate to AI Core > Settings to configure system-wide AI settings. See the Configuration section above for details. Settings can be reset to defaults if needed.
## Available Provider Addons
AI Core includes OpenAI as the default provider. Additional providers are available as separate addons:
- Gemini Provider - Google Gemini AI models integration
- Local AI Provider - Run AI models locally with Ollama, LM Studio, LocalAI, or vLLM
## AI-Powered Modules
The following modules use AI Core for their AI capabilities:
- HR Assistant - AI-powered HR support and knowledge management
- Sales Assistant - Lead scoring, pipeline analysis, and sales forecasting
- Finance Assistant - Financial analysis and expense insights
- Reporting AI - Natural language to SQL reporting
- AI Chat - General-purpose AI chat interface
- Auto Description - AI-generated content descriptions
- Document AI - AI-powered document processing
## Included OpenAI Models
AI Core ships with the OpenAI provider pre-configured. Available models:
| Model Family | Models | Description |
|---|---|---|
| GPT-5.2 Series | gpt-5.2-instant, gpt-5.2-thinking, gpt-5.2-pro, gpt-5.2-codex | Latest GPT models with 400K context |
| o3 Series | o3-pro, o3, o3-mini, o4-mini | Advanced reasoning models |
| GPT-4.1 Series | gpt-4.1, gpt-4.1-mini, gpt-4.1-nano | 1M token context models |
| GPT-4o Series | gpt-4o, gpt-4o-mini | Multimodal with vision/audio |
| Embeddings | text-embedding-3-large, text-embedding-3-small | Semantic search |
| Image | dall-e-3, dall-e-2 | Image generation |
| Audio | whisper-1, tts-1, tts-1-hd | Transcription & text-to-speech |
## Web Routes
| Method | Route | Description |
|---|---|---|
| GET | /aicore/dashboard | AI Dashboard |
| GET | /aicore/providers | List providers |
| POST | /aicore/providers/{provider}/test | Test provider connection |
| GET | /aicore/models | List models |
| POST | /aicore/models/{model}/test | Test a model |
| GET | /aicore/usage | Usage analytics |
| GET | /aicore/usage/export | Export usage data |
| GET | /aicore/settings | AI settings |
| PUT | /aicore/settings | Update settings |
| GET | /aicore/module-configuration | Module-model mapping |
| GET | /aicore/logs | Request logs |
## API Endpoints
| Method | Endpoint | Description |
|---|---|---|
| POST | /api/ai/chat | Send chat request |
| POST | /api/ai/complete | Send completion request |
| POST | /api/ai/summarize | Summarize content |
| POST | /api/ai/extract | Extract data from content |
| GET | /api/ai/usage | Get usage statistics |
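As a sketch of how a client might call the `/api/ai/chat` endpoint, the Python snippet below builds (but does not send) an authenticated POST request. The host, bearer-token auth, and payload fields (`messages`, `temperature`) are assumptions; consult the platform's API documentation for the exact schema.

```python
import json
import urllib.request

BASE_URL = "https://example.test"  # placeholder host, not a real deployment

def build_chat_request(token: str, message: str) -> urllib.request.Request:
    """Construct a POST request for /api/ai/chat (payload shape is illustrative)."""
    payload = {
        "messages": [{"role": "user", "content": message}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        url=f"{BASE_URL}/api/ai/chat",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Sending the request would then be `urllib.request.urlopen(build_chat_request(...))` against a live deployment.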
Admin-only endpoints (requires admin or super_admin role):
| Method | Endpoint | Description |
|---|---|---|
| GET | /api/ai/admin/providers | List providers |
| POST | /api/ai/admin/providers/{provider}/test | Test provider connection |
| GET | /api/ai/admin/providers/{provider}/usage | Get provider usage |
| GET | /api/ai/admin/usage/reports | Usage reports |
| GET | /api/ai/admin/usage/trends | Usage trends |
## Notes
- AI Core is the foundation for all AI-powered modules
- OpenAI provider is included; additional providers (Gemini, Local AI) are available as separate addons
- Usage is tracked and can be limited per user or system-wide
- All AI interactions are logged for audit purposes when request logging is enabled
- Rate limiting protects against excessive API usage