Databricks Expands Mosaic AI to Empower Enterprises with LLMs

A year after acquiring MosaicML for $1.3 billion, Databricks has rebranded the platform as Mosaic AI, making it a cornerstone of its AI offerings. Today, at the Data + AI Summit, Databricks is unveiling a suite of new features to enhance Mosaic AI’s capabilities. In anticipation of these announcements, Databricks co-founders CEO Ali Ghodsi and CTO Matei Zaharia shared insights into the new tools and their potential impact.

New Features in Mosaic AI

Databricks is introducing five new tools at the summit:

  1. Mosaic AI Agent Framework
  2. Mosaic AI Agent Evaluation
  3. Mosaic AI Tools Catalog
  4. Mosaic AI Model Training
  5. Mosaic AI Gateway

These tools are designed to address three key concerns in AI model deployment: improving quality and reliability, ensuring cost efficiency, and maintaining data privacy.

Mosaic AI: Enhancing Model Reliability and Cost Efficiency

Ghodsi emphasized the importance of quality, cost efficiency, and data privacy in AI model deployment. “Everybody’s excited about the developments in Gen AI, but the core concerns remain the same,” he noted. The new features aim to improve these aspects for Databricks’ customers.

Zaharia highlighted that enterprises deploying large language models (LLMs) often use systems with multiple components, including various models and external tools for database access and retrieval-augmented generation (RAG). These compound systems enhance the performance and relevance of LLM-based applications by using cheaper models for specific tasks and integrating proprietary data.
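To make the compound-system idea concrete, here is a minimal Python sketch of routing simple requests to a cheaper model while reserving a larger model for harder ones. The endpoint names and the routing heuristic are hypothetical placeholders for illustration, not Databricks or Mosaic AI APIs.

```python
# Sketch: route prompts to a cheap or expensive model based on a toy heuristic.
# Endpoint names ("small-task-model", "large-general-model") are hypothetical.

def call_model(endpoint: str, prompt: str) -> str:
    # Stand-in for a real model-serving call (e.g. an HTTP request to an endpoint).
    return f"[{endpoint}] {prompt[:40]}..."

def route(prompt: str) -> str:
    # Toy routing rule: short, task-specific prompts go to the smaller model.
    endpoint = "small-task-model" if len(prompt.split()) < 20 else "large-general-model"
    return call_model(endpoint, prompt)

print(route("Classify this support ticket: refund request"))
```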

Modular Systems for Mission-Critical Applications

Zaharia explained that modular systems are crucial for mission-critical AI applications. “If you’re doing something mission-critical, you’ll want engineers to control all aspects of it,” he said. Databricks is focusing on creating modular systems that developers can easily work with, ensuring transparency and control over all components.

New Mosaic AI Tools for Developers

  • Mosaic AI Agent Framework: Utilizes Databricks’ serverless vector search functionality, enabling developers to build RAG-based applications.
  • Mosaic AI Tools Catalog: Extends governance features to allow enterprises to control which AI tools and functions LLMs can access.

The integration with Databricks’ data lake ensures data synchronization and governance, preventing personal information from leaking into the vector search service.

Developers can now use these tools to build custom agents by chaining models and functions using frameworks like LangChain or LlamaIndex. Many Databricks customers are already employing these tools for agent-like workflows in their AI applications.
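As a rough illustration of what such chaining looks like, the sketch below wires a retrieval step, a prompt template, and a model call together with LangChain’s composition operators. The retriever and the model are stand-in lambdas so the snippet runs without credentials; in a real application they would be replaced by a vector-search retriever over governed data and an actual chat model endpoint.

```python
# Minimal LangChain-style chain: retrieve context, fill a prompt, call a model.
# fake_retrieve and fake_llm are placeholders, not Mosaic AI or Databricks APIs.

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda, RunnablePassthrough

def fake_retrieve(question: str) -> str:
    # Placeholder for a vector-search retriever over enterprise documents.
    return "Internal doc: the gateway enforces rate limits and logs usage."

prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)

# Placeholder model that just echoes the rendered prompt.
fake_llm = RunnableLambda(lambda p: f"(model response to) {p.to_string()[:80]}...")

chain = (
    {"context": RunnableLambda(fake_retrieve), "question": RunnablePassthrough()}
    | prompt
    | fake_llm
)

print(chain.invoke("What does the gateway do?"))
```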

Evaluating and Fine-Tuning AI Applications

  • Agent Evaluation: Combines LLM-based judges with user feedback to evaluate AI performance.
  • Model Training: Allows fine-tuning of models with private data for improved task performance.
  • Gateway: Provides a centralized interface for querying, managing, and deploying any open source or proprietary model, ensuring secure and governed usage. IT departments can set rate limits to manage costs and track usage for debugging purposes (a simplified sketch of this pattern follows below).
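The sketch below illustrates the gateway pattern in general terms: a single interface in front of multiple model endpoints that enforces per-caller rate limits and records usage. It is a hypothetical illustration of the concept, not the Mosaic AI Gateway API, and the class, limits, and logging format are assumptions.

```python
# Hypothetical gateway sketch: one entry point, per-caller rate limits, usage logging.
import time
from collections import defaultdict

class ModelGateway:
    def __init__(self, requests_per_minute: int = 60):
        self.limit = requests_per_minute
        self.calls: dict[str, list[float]] = defaultdict(list)

    def query(self, caller: str, endpoint: str, prompt: str) -> str:
        now = time.time()
        # Keep only calls from the last 60 seconds, then enforce the rate limit.
        self.calls[caller] = [t for t in self.calls[caller] if now - t < 60]
        if len(self.calls[caller]) >= self.limit:
            raise RuntimeError(f"rate limit exceeded for {caller}")
        self.calls[caller].append(now)
        # Usage could be logged here for cost tracking and debugging.
        print(f"[audit] caller={caller} endpoint={endpoint} chars={len(prompt)}")
        return f"[{endpoint}] response to: {prompt[:40]}..."

gateway = ModelGateway(requests_per_minute=5)
print(gateway.query("analytics-team", "open-source-llm", "Summarize last quarter's sales."))
```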

Responding to Market Shifts

Ghodsi observed a significant market shift towards using open models, driven by increased sophistication among customers. “We saw a big shift in the last quarter and a half,” he said. Customers now demand new tools to handle the complexities and opportunities of open models, moving away from a reliance on OpenAI.

Conclusion

With these new features, Databricks aims to help enterprises harness the full potential of LLMs while addressing critical concerns about quality, cost, and privacy. This expansion marks a significant step forward in providing robust, scalable AI solutions for mission-critical applications.
