
Operational Handler


Last updated 2 months ago


Community

We are building enterprise-ready AI agents with a focus on transparency, consistency, and flexibility. Contributions from the community help ensure reliability and innovation.

Get Involved

Your contributions help improve AI automation for everyone.

  • GitHub Discussions – join the conversation and propose ideas.

  • Slack – chat with the community.

Introduction

Alquimia Operational Handler is an advanced, event-driven platform designed to manage multi-agent LLM (Large Language Model) solutions in containerized environments. Built on Knative, it provides seamless orchestration of LLMs, intelligent memory management, context-aware prompting, and complex tool execution.

Designed for Openshift and Kubernetes, the platform offers lightweight deployment, high scalability, and native integrations with modern AI ecosystems, including Openshift AI and LangChain. It supports a diverse range of LLM providers, vector stores, and retrieval-augmented generation (RAG) strategies, making it an ideal solution for enterprises and developers building AI-powered applications.


Key Features

🧬 Event-Driven & Serverless

  • Built on Knative for automatic scaling and serverless execution.

  • Fully asynchronous to ensure optimal performance and responsiveness.

☁️ Seamless Cloud-Native Integration

  • Works natively on Openshift and Kubernetes.

  • Supports Openshift AI for direct access to deployed models.

  • LangChain-compatible, enabling powerful agent-driven workflows.
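Concretely, each serverless step in such a design is a Knative Service. The manifest below is an illustrative sketch (the name, image, and environment variable are placeholders, not Alquimia's actual resources):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: agent-step                              # placeholder name
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/agent-step:latest   # placeholder image
          env:
            - name: REDIS_URL                   # hypothetical setting
              value: redis://redis:6379
```

Knative scales a Service like this to zero when idle and back up on incoming events, which is what makes the event-driven design serverless.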

🦜 Flexible Multi-LLM Support

  • Works with major LLM providers, including:

    • OpenAI

    • Mistral

    • DeepSeek

    • Llama

📖 Advanced RAG & Vector Store Integration

  • Supports retrieval-augmented generation (RAG) for enhanced AI reasoning.

  • Compatible with vector stores like:

    • Qdrant

    • Chroma

    • ElasticSearch

📩 Omnichannel AI Integration

  • Use custom connectors or community Kamelets (Camel K) for seamless omnichannel support.

  • Automate AI-powered workflows across multiple communication channels.
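As a dependency-free sketch of the retrieval step behind RAG (the toy three-dimensional vectors and in-memory store stand in for a real embedding model and a vector store such as Qdrant, Chroma, or ElasticSearch):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve(query_vec, store, k=2):
    # Rank (text, vector) pairs by similarity to the query vector and
    # return the top-k passages to prepend to the LLM prompt.
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy in-memory "vector store"; in production this lives in Qdrant,
# Chroma, or ElasticSearch, and vectors come from an embedding model.
store = [
    ("Alquimia runs on Knative.", [1.0, 0.1, 0.0]),
    ("Redis handles session memory.", [0.0, 1.0, 0.2]),
    ("CouchDB stores agent configurations.", [0.1, 0.0, 1.0]),
]

context = retrieve([0.9, 0.2, 0.1], store, k=1)
prompt = f"Answer using this context:\n{context[0]}\n\nQuestion: Where does Alquimia run?"
```

The grounded prompt built at the end is what the platform would hand to the LLM; the retrieval itself is just nearest-neighbor search over embedding vectors.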

🥷 Versatile Tool Execution

  • Supports server-side, client-side, and hybrid tool execution.

  • Context-aware execution strategies to optimize performance.

🚀 Lightweight & Production-Ready

  • Minimal boilerplate, enabling rapid development and deployment.

  • Enterprise-ready with scalability, reliability, and observability.
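To make the execution strategies concrete, here is a hypothetical sketch (the registry, decorator, and tool names are illustrative, not Alquimia's actual API) of dispatching a tool based on its declared execution mode:

```python
# Hypothetical tool registry: an illustration of server-side vs.
# client-side dispatch, not Alquimia's actual API.
TOOLS = {}

def tool(name, mode):
    # Register a callable under a name with an execution mode:
    # "server" runs inside the handler, "client" is forwarded to the caller.
    def wrap(fn):
        TOOLS[name] = {"mode": mode, "fn": fn}
        return fn
    return wrap

@tool("lookup_order", mode="server")
def lookup_order(order_id: str) -> dict:
    # A server-side tool: executed directly by the platform.
    return {"order_id": order_id, "status": "shipped"}

def dispatch(name, **kwargs):
    entry = TOOLS[name]
    if entry["mode"] == "server":
        # Run immediately and hand the result back to the LLM.
        return {"result": entry["fn"](**kwargs)}
    # Client-side (or hybrid) tools are returned as a pending action
    # for the caller to execute in its own environment.
    return {"pending": {"tool": name, "args": kwargs}}
```

The same descriptor shape covers hybrid execution: a tool can resolve part of its work server-side and return the remainder as a pending client action.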


Why Choose Alquimia Operational Handler?

✅ Scalability – Effortlessly scale AI workflows with Knative.

✅ Flexibility – Works with multiple LLM providers, vector stores, and orchestration frameworks.

✅ Performance – Asynchronous, event-driven execution optimizes efficiency.

✅ Integration – Native compatibility with LangChain, Openshift AI, and containerized environments.

✅ Serverless Superpowers – Automatically scale workloads, reducing operational costs.


Use Cases

  1. Multi-Agent AI Orchestration – Manage and coordinate complex LLM-driven workflows.

  2. Enterprise-Scale Document Retrieval – Implement RAG for intelligent search and knowledge retrieval.

  3. Omnichannel AI Automation – Deploy AI-powered solutions across multiple communication channels.

  4. Hybrid Tool Execution – Dynamically execute AI tools across client, server, or hybrid environments.


Getting Started

Prerequisites

  • A running Openshift or Kubernetes cluster.

  • Openshift Serverless (Knative) runtime installed.

  • Openshift Service Mesh (Istio) for networking.

  • AMQ Streams (Strimzi) for event-driven messaging.

  • A Redis instance for memory and cache management.

  • A CouchDB instance to manage agent configurations.

  • Optional: Vector store (e.g., Qdrant, Chroma, or ElasticSearch) for RAG capabilities.

Installation

Install on Openshift

Ensure you have:

  • The OpenShift CLI (oc) installed.

  • Knative support for Kafka via Strimzi (AMQ Streams).

Then, deploy the platform:

oc apply -f serving/base.yaml
oc apply -f eventing/base.yaml

Now you are ready to deploy your first agent.

Workflow

The proposed architecture is intended to serve as a common framework for agents; you can adapt it to your needs. The recommended setup:

sequenceDiagram
    participant CC as Client
    participant AH as Hermes (Entrypoint)
    participant IB as Inbound Broker
    participant NB as Normalized Broker
    participant CB as Classified Broker
    participant AL as Alquimia Leviathan (Execution steps)
    participant OB as Outbound Broker
    participant SC as Slack Connector

    CC ->> AH: Client sends query via Slack
    AH ->> IB: Trigger agent inference
    IB ->> AL: Trigger normalization sequence
    AL ->> AL: Get memory for current session
    AL ->> NB: Pass normalized event
    NB ->> AL: Trigger agent custom sequence
    AL ->> AL: Execute classification models or LLMs with different roles
    AL ->> CB: Pass classified event
    CB ->> AL: Trigger Empathy Sequence
    AL ->> AL: Wait for other events to complete (tool execution, for example)
    AL ->> AL: Select best expert profile according to context
    AL ->> AL: Invoke final LLM with selected profile
    AL ->> OB: Pass outbound event

    OB ->> AL: Memory persistence
    OB ->> SC: Send answer via Slack
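Each hop in the diagram is a broker-to-service subscription. In Knative Eventing terms that is a Trigger; the resource below mirrors the "Normalized Broker" hop, but every name and event type in it is illustrative, not one of the project's actual manifests:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: normalized-to-custom-sequence     # illustrative name
spec:
  broker: normalized-broker               # the "Normalized Broker" hop
  filter:
    attributes:
      type: agent.message.normalized      # hypothetical CloudEvent type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: leviathan-custom-sequence     # illustrative subscriber
```

Rewiring the workflow is then a matter of adding or changing Triggers rather than modifying the services themselves.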

Examples

See our list of fully working examples.

Integrations

You can find our custom channel integrations or deploy community Kamelets.

Server tools

Find all available server tools and see how easy it is to create your own bundle.

Local development

For more information on how to set up your local development environment, see the docs.

CLI Usage

Alquimia Operational Handler provides a CLI for managing embeddings, updating assistant configurations, and invoking AI-powered functions in your cluster.

Available CLI Operations

Install the required libraries (a virtual environment is recommended):

pip install -r requirements.txt

Then list the available operations by running:

python main.py --help

For more detail, see the dedicated guides:

  • Invoke AI Functions

  • Manage Embeddings

  • Configure Assistants


Contributing

We are building an open, collaborative community. Contributions are always welcome!

If you'd like to add features, improve documentation, or suggest enhancements:

  1. Fork the repository.

  2. Create a new branch (git checkout -b feature-xyz).

  3. Submit a pull request with your proposed changes.


License

Alquimia Operational Handler is open-source and available under the MIT License.

