November 14, 2024
Oracle bakes LLMs and vector support directly into HeatWave GenAI database

Oracle is expanding its HeatWave cloud database service with a set of new generative AI services, known collectively as HeatWave GenAI.

The HeatWave platform, formerly branded MySQL HeatWave, provides a cloud-managed, extended version of the MySQL database with both transactional and analytical functionality. Last year, Oracle extended the platform with HeatWave Lakehouse, which adds data lakehouse capabilities.

With HeatWave GenAI, Oracle is bringing vector processing and advanced AI functionality to the database. While there is no shortage of database vendors adding vector support to enable gen AI and retrieval-augmented generation (RAG), few if any are integrating large language models (LLMs) directly as an in-database capability. That is what Oracle is doing, directly integrating quantized versions of the Llama 3 and Mistral LLMs. According to Oracle, running an LLM directly inside the database can deliver better performance and enable new types of applications that complement HeatWave’s existing AutoML (automated machine learning) functionality.

“Customers don’t need to wait to provision a GPU or to pay for the cost of invocation of an external service,” Nipun Agarwal, senior VP of MySQL and HeatWave at Oracle, told VentureBeat. “Furthermore, since all the LLM invocation is also happening inside HeatWave, it provides a lot more synergy with AutoML and other capabilities of HeatWave which are running inside the database.”




Bringing AI closer to data with in-database LLMs

Oracle claims that the introduction of in-database LLMs is an industry first. 

Agarwal explained that the in-database LLM capability complements AutoML, a feature many organizations already use. As an example, he noted that an early user of the new HeatWave GenAI capability was already using AutoML for anomaly detection. With the addition of the LLM, the database is now able to provide more descriptive results and summarization on top of those detections.

Another use case where the combination of gen AI and AutoML is being proven out with an early user is online food delivery. Agarwal explained that the customer had been using HeatWave’s AutoML to predict how long it would take a driver to deliver food after an order was placed. With the addition of the in-database LLM, the company can now more easily generate recommendations for restaurants and food items a user might want, recommendations that also benefit from the data AutoML has already analyzed.

“We believe this is going to help generate a new class of applications and value add for our customers,” Agarwal said.

Automated vector processing aims to ease gen AI database deployment

HeatWave GenAI also has an integrated Vector Store as part of the service.

Though vector support in databases has become table stakes in 2024, it is often up to users to figure out and manage the process of converting existing data into vector embeddings. Oracle’s vector embeddings feature automatically generates vector representations of unstructured documents such as text, images and videos. This automation handles tasks that previously required developer expertise, such as choosing parsers and embedding models and optimizing processing. The resulting vectors then power semantic search and other natural language applications.
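The pipeline Oracle says it automates, parsing documents into chunks, embedding each chunk, and storing the vectors alongside the source data, can be sketched in a few lines of Python. This is purely illustrative: the function names are hypothetical, and the toy word-hashing "embedding" stands in for the real encoder model a system like HeatWave would apply.

```python
import hashlib

def toy_embedding(text, dims=8):
    """Stand-in for a real embedding model: hash each word into one
    of `dims` buckets. A production system would use an actual
    neural encoder here."""
    vec = [0.0] * dims
    for word in text.lower().split():
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % dims
        vec[bucket] += 1.0
    return vec

def ingest_documents(docs, chunk_words=50):
    """Mimic the automated pipeline: split each document into chunks,
    embed each chunk, and collect (doc_id, chunk_text, vector) rows
    ready to be stored next to the source data."""
    store = []
    for doc_id, text in docs.items():
        words = text.split()
        for i in range(0, len(words), chunk_words):
            chunk = " ".join(words[i:i + chunk_words])
            store.append((doc_id, chunk, toy_embedding(chunk)))
    return store

# A single 180-word "document" chunked 40 words at a time.
docs = {"faq": "HeatWave combines transactional and analytical processing " * 30}
rows = ingest_documents(docs, chunk_words=40)
print(len(rows))  # → 5 chunks stored
```

The point of the automation is that chunk size, parser and embedding model are exactly the knobs users otherwise have to tune by hand before any semantic search can work.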

While many database vendors over the past year have enabled support for vectors as a data type, Oracle has taken a distinctive approach to vector processing in HeatWave.

A common approach among database vendors is to use some form of secondary index to enable vector search. Agarwal explained that HeatWave GenAI instead focuses on in-memory, table scan-based operations rather than relying on approximate indexing methods.

“People use indexes for performance, but the catch about any inherent vector index is that they’re approximate,” Agarwal said. “So what we provide is the performance without lack of accuracy.”
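The exactness argument can be made concrete with a minimal pure-Python sketch (the function names and toy data are illustrative, not HeatWave’s actual implementation): scoring every stored vector against the query is a full table scan, and by construction it always returns the true nearest rows, whereas approximate indexes such as HNSW or IVF trade some recall for speed.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def exact_vector_search(query, table, top_k=3):
    """Score every row against the query (a full table scan), then
    return the ids of the best matches. Unlike an approximate index,
    a full scan can never miss the true nearest neighbors."""
    scored = [(cosine_similarity(query, vec), row_id) for row_id, vec in table]
    scored.sort(reverse=True)
    return [row_id for _, row_id in scored[:top_k]]

# Toy data: three 4-dimensional "embeddings" keyed by row id.
table = [
    ("doc1", [1.0, 0.0, 0.0, 0.0]),
    ("doc2", [0.9, 0.1, 0.0, 0.0]),
    ("doc3", [0.0, 1.0, 0.0, 0.0]),
]
print(exact_vector_search([1.0, 0.05, 0.0, 0.0], table, top_k=2))
# → ['doc1', 'doc2']
```

The cost of this exactness is that every query touches every row, which is why HeatWave pairs the approach with in-memory execution; approximate indexes exist precisely to avoid that scan at the price of occasionally wrong results.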


