Version 0.8.1 marks a significant step forward in the Cortex end-user experience. The focus was clear: faster performance, smoother daily use, more control, and more advanced AI features.
⚡ Impressive Performance
- Chat Completions: Significantly reduced response time with pipeline optimization
- Optimized Cache: Smart verification before fetching models, reducing latency
- Async Processes: Fewer freezes and better resource utilization
- Time To First Token: Noticeably faster delivery of the first response token
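The "smart verification before fetching" behavior can be sketched as a TTL-based cache check: reuse a cached model entry while it is still fresh, and only re-fetch when it is stale or missing. This is a minimal illustration of the general technique; the names (`ModelCache`, `fetch_fn`) are assumptions, not Cortex's actual API.

```python
import time

class ModelCache:
    """Illustrative check-before-fetch cache keyed by model name."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._store = {}  # name -> (model, fetched_at)

    def get(self, name, fetch_fn):
        entry = self._store.get(name)
        if entry is not None:
            model, fetched_at = entry
            if time.monotonic() - fetched_at < self.ttl:
                return model  # still fresh: skip the expensive fetch
        model = fetch_fn(name)  # stale or missing: fetch and cache
        self._store[name] = (model, time.monotonic())
        return model

cache = ModelCache(ttl_seconds=300)
model = cache.get("some-model", fetch_fn=lambda name: {"name": name})
```

Repeated `get` calls for the same model within the TTL window return the cached object without invoking `fetch_fn` again, which is where the latency reduction comes from.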
🧠 Evolved Chat Experience
- Automatic Queuing: Messages sent during processing are automatically queued
- Edit and Delete: Edit or remove queued messages before sending
- Immediate Send: Option to force send even with an active queue
- Skills Library: Create and share pre-configured abilities across teams
- Prompt Versioning: Native version control for your system prompts
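The queuing behavior above (queue while busy, edit or delete before sending, force an immediate send) can be sketched with a simple deque. Class and method names here are illustrative assumptions, not Cortex's real API.

```python
from collections import deque

class MessageQueue:
    """Illustrative sketch of queuing messages sent during processing."""

    def __init__(self):
        self._pending = deque()
        self.busy = False  # True while a completion is being processed

    def send(self, text, force=False):
        """Queue the message while busy, unless force=True (immediate send)."""
        if self.busy and not force:
            self._pending.append(text)
            return "queued"
        return self._dispatch(text)

    def edit(self, index, new_text):
        self._pending[index] = new_text  # edit a queued message in place

    def delete(self, index):
        del self._pending[index]  # remove a queued message before it is sent

    def drain(self):
        """Called when processing finishes: send queued messages in order."""
        self.busy = False
        while self._pending:
            self._dispatch(self._pending.popleft())

    def _dispatch(self, text):
        # Placeholder for the actual chat-completion call.
        return f"sent: {text}"

q = MessageQueue()
q.busy = True
q.send("first draft")        # queued while processing
q.edit(0, "revised draft")   # edited before it goes out
q.send("urgent", force=True) # bypasses the queue entirely
```

The deque preserves send order, while `force=True` models the "Immediate Send" option by skipping the queue check.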
🔐 Security & Access Control
- Public Sharing: Critical fix to the visibility controls for shared conversations
- SSRF Protection: Server-Side Request Forgery attempts are now blocked when loading external images
- Write Permissions: Access controls now strictly enforced across all endpoints
- Usage Dashboard: Real metrics on token consumption, most-used models and costs
- Access Audit: Complete log of who accessed content, when, and what was generated
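The SSRF protection mentioned above typically means validating the target of an external image URL before fetching it: reject non-HTTP schemes and hosts that resolve to private, loopback, or otherwise internal addresses. This is a hedged sketch of that general technique, not Cortex's actual implementation.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_image_url(url: str) -> bool:
    """Return True only if the URL is plausibly safe to fetch server-side."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False  # block file://, gopher://, and other schemes
    if not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False  # unresolvable host
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False  # would reach internal infrastructure
    return True

print(is_safe_image_url("http://127.0.0.1/admin"))  # False: loopback target
```

A production implementation would also pin the resolved IP when making the actual request, so a DNS record cannot change between the check and the fetch (a classic TOCTOU gap this sketch does not cover).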
🎯 Summary
🚀 Much faster — 🧠 More natural conversation flow — 📊 Governance with real analytics — 🔐 Reinforced security — 🎯 Reusable skills — 🧾 Prompts with version control