InterSystems IRIS® includes powerful built-in generative AI capabilities so that you can build smart enterprise applications powered by semantic search and generative AI over any type of data.
Vector capabilities are a native part of the InterSystems IRIS core multi-model database engine, alongside JSON, full text, objects, relational tables, key-value, and other data models. This makes it possible to power AI apps with structured and unstructured data in the same engine, without data movement. It's simpler and more efficient than using a separate dedicated vector database.
With its low-latency query performance and high-speed ingestion, InterSystems IRIS offers unmatched real-time capabilities. You can apply generative AI to streaming data, including video, and combine it with many other types of data.
Vector Search
InterSystems IRIS embedded vector search capabilities let you search unstructured and semi-structured data. Data is converted to vectors (also called ‘embeddings’) and then stored and indexed in InterSystems IRIS for semantic search, retrieval-augmented generation (RAG), text analysis, recommendation engines, and other use cases.
Vector search includes a vector SQL datatype for storing and querying embeddings and built-in functions for computing vector similarities. These functions are built into the core of InterSystems IRIS for maximum speed, scale, security, and reliability. Vector data operations leverage hardware acceleration (built-in SIMD vector processing) for extreme performance.
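As a concrete illustration, the sketch below creates a table with a VECTOR column and ranks rows by cosine similarity through the InterSystems Python DB-API driver. It assumes an IRIS release with the VECTOR type and the TO_VECTOR and VECTOR_COSINE SQL functions; the connection settings, table, and column names are illustrative, not a prescribed schema.

```python
# Minimal sketch: a VECTOR column plus a similarity query over the
# InterSystems Python DB-API driver (intersystems-irispython). Connection
# details, table, and column names are hypothetical.
import iris  # InterSystems IRIS Python DB-API driver

conn = iris.connect("localhost", 1972, "USER", "_SYSTEM", "SYS")
cur = conn.cursor()

# A table whose Embedding column stores 384-dimensional double-precision vectors
cur.execute("""
    CREATE TABLE Demo.Documents (
        DocId     INTEGER PRIMARY KEY,
        Content   VARCHAR(4000),
        Embedding VECTOR(DOUBLE, 384)
    )
""")

# Rank rows by cosine similarity to a query embedding, passed in as a
# comma-delimited string and converted with TO_VECTOR
query_vec = ",".join(str(x) for x in [0.01] * 384)  # stand-in for a real embedding
cur.execute("""
    SELECT TOP 5 DocId, Content,
           VECTOR_COSINE(Embedding, TO_VECTOR(?, DOUBLE, 384)) AS Similarity
    FROM Demo.Documents
    ORDER BY Similarity DESC
""", (query_vec,))
for doc_id, content, similarity in cur.fetchall():
    print(doc_id, round(similarity, 3), content[:60])
```

A dot-product similarity function (VECTOR_DOT_PRODUCT) can be used in place of cosine similarity when your embeddings are already normalized.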
Add Semantics to Your Applications
Vector Search allows you to query your data based on semantics, or the meaning of the data, rather than the data itself. Imagine a multidimensional space where each data point (e.g., phrase or record) corresponds to a vector. Data with similar meanings or contexts ends up close to each other in this vector space.
With recent advances in AI, these vectors capture the meaning of data far more effectively: embedding models map each piece of data into a dense vector space whose dimensions encode its context. Vector embeddings therefore provide a much more fine-grained model of meaning.
The first step is to convert your data into vector embeddings and store them as vectors in InterSystems IRIS. Then you can query those vectors and quickly find similar data using the vector functions. This lets you satisfy queries like “show me resume matches for this job description” or “find personalized travel recommendations for a beach vacation in the Caribbean based on my preferences”, unlocking a whole new class of capabilities for your InterSystems IRIS applications.
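The sketch below shows that embed, store, and query workflow end to end, reusing the hypothetical Demo.Documents table and cursor from the previous sketch. The embedding model is an arbitrary open-source choice that happens to produce 384-dimensional vectors; it is an assumption for the example, not a requirement.

```python
# Sketch of the embed -> store -> query workflow, reusing the hypothetical
# Demo.Documents table and DB-API cursor from the previous sketch.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dimensional embeddings

def to_csv(vec):
    """Serialize an embedding as the comma-delimited string TO_VECTOR expects."""
    return ",".join(str(float(x)) for x in vec)

def insert_document(cur, doc_id, text):
    """Embed a piece of text and store it alongside its vector."""
    cur.execute(
        "INSERT INTO Demo.Documents (DocId, Content, Embedding) "
        "VALUES (?, ?, TO_VECTOR(?, DOUBLE, 384))",
        (doc_id, text, to_csv(model.encode(text))),
    )

def semantic_search(cur, question, k=5):
    """Return the k stored documents whose meaning is closest to the question."""
    cur.execute(
        f"SELECT TOP {int(k)} Content, "
        "VECTOR_COSINE(Embedding, TO_VECTOR(?, DOUBLE, 384)) AS Similarity "
        "FROM Demo.Documents ORDER BY Similarity DESC",
        (to_csv(model.encode(question)),),
    )
    return cur.fetchall()

# e.g. semantic_search(cur, "resumes that match this job description: senior data engineer")
```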
Build AI-Powered Experiences with RAG
Vector search supports the RAG architecture, which is rapidly emerging as the primary way to overcome the limitations of large language models (LLMs), such as stale data, token limits, and hallucinations.
RAG combines two steps: a retriever, which uses vector search to fetch relevant documents and data from the InterSystems IRIS database, and a generator, the LLM itself, which crafts contextually relevant responses in the desired format and tone.
You can use fresh, authoritative information, including your proprietary data, to generate an accurate response, leveraging the LLM of your choice to understand the question, phrase the response, and add supplementary information.
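A minimal RAG loop can be assembled from the pieces above: retrieve the most relevant passages with vector search, then pass them to an LLM as grounding context. The sketch below reuses the hypothetical semantic_search helper from the earlier sketch and uses the OpenAI Python client as one possible generator; any LLM and any model name would do, so treat both as assumptions.

```python
# Minimal RAG sketch: retrieve supporting passages from IRIS with vector
# search, then hand them to an LLM as context. semantic_search comes from
# the previous sketch; the OpenAI client and model name are one possible
# generator, not a required stack.
from openai import OpenAI

llm = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_with_rag(cur, question, k=3):
    # Retriever: vector search over your own data in InterSystems IRIS
    passages = [content for content, _score in semantic_search(cur, question, k)]
    context = "\n\n".join(passages)

    # Generator: the LLM phrases the answer, grounded in the retrieved context
    response = llm.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model works
        messages=[
            {"role": "system",
             "content": "Answer using only the provided context."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```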
Developers can also draw on a large GenAI ecosystem of platforms, plugins, and libraries to build advanced generative AI applications quickly and easily (see the sketch after this list), including:
- ChatGPT
- LangChain
- Hugging Face
- Llama2
- LlamaIndex
- Cohere
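For instance, a community integration such as langchain-iris can let LangChain use InterSystems IRIS as its vector store. The sketch below is hedged: the package, its constructor arguments, the connection string format, and the embedding class are assumptions that may differ between versions, so treat it as a shape rather than a verified recipe.

```python
# Hedged sketch of plugging IRIS into LangChain via the community
# langchain-iris package. Constructor arguments and the connection string
# format are assumptions; check the package's documentation for your version.
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from langchain_iris import IRISVector

docs = [
    Document(page_content="InterSystems IRIS stores vectors natively."),
    Document(page_content="RAG grounds LLM answers in your own data."),
]

store = IRISVector.from_documents(
    documents=docs,
    embedding=OpenAIEmbeddings(),
    collection_name="demo_docs",                                  # hypothetical name
    connection_string="iris://_SYSTEM:SYS@localhost:1972/USER",   # assumed format
)

for hit in store.similarity_search("Where are my vectors stored?", k=1):
    print(hit.page_content)
```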
AI Orchestration
The proliferation of new cloud-based GenAI services opens up amazing new possibilities, but it can also make it difficult to create and manage a reliable system. InterSystems IRIS Interoperability lets you easily create composite applications that span multiple models, wherever they are running. A low-code graphical editor lets you build AI solutions without programming, and built-in API management capabilities let you protect, publish, and monetize new GenAI-powered services.
Distributed operations are automatically captured for auditing and debugging. The Visual Trace feature gives developers and administrators the power to trace messages throughout the orchestration flow and examine their content. Want to understand whether a problem is due to an AI service, the data you feed it, or the business logic it's used in? Use Visual Trace.
Enterprise Ready
InterSystems IRIS has proven data security, compliance, and high availability appropriate for mission-critical enterprise applications.
Building generative AI applications with our embedded vector search capabilities provides you with:
- Full control of your data
- Your choice of LLM, orchestration framework, and agent framework
- Full auditing and traceability
- The option to run LLMs locally, keeping your sensitive data completely local and secure
- Access to a huge ecosystem of AI services, wherever they are running, with full security and reliability