Operational Handler
Last updated
We are building enterprise-ready AI agents with a focus on transparency, consistency, and flexibility. Contributions from the community help ensure reliability and drive innovation, improving AI automation for everyone.
Alquimia Operational Handler is an advanced, event-driven platform designed to manage multi-agent LLM (Large Language Model) solutions in containerized environments. Built on Knative, it provides seamless orchestration of LLMs, intelligent memory management, context-aware prompting, and complex tool execution.
Designed for OpenShift and Kubernetes, the platform offers lightweight deployment, high scalability, and native integrations with modern AI ecosystems, including OpenShift AI and LangChain. It supports a diverse range of LLM providers, vector stores, and retrieval-augmented generation (RAG) strategies, making it an ideal solution for enterprises and developers building AI-powered applications.
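To make the Knative-based deployment model concrete, a handler is typically exposed as a Knative Service. The sketch below uses standard Knative Serving fields; the service name, namespace, image, and environment variables are illustrative assumptions, not identifiers from this project:

```yaml
# Hypothetical Knative Service for an agent handler.
# All names (agent-handler, alquimia, the image, REDIS_URL) are placeholders.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: agent-handler
  namespace: alquimia
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/alquimia/agent-handler:latest
          env:
            - name: REDIS_URL
              value: redis://redis.alquimia.svc:6379
```

Knative scales such a service to zero when idle and back up on demand, which is what enables the serverless, pay-for-use execution model described above.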
Built on Knative for automatic scaling and serverless execution.
Fully asynchronous to ensure optimal performance and responsiveness.
Works natively on OpenShift and Kubernetes.
Supports OpenShift AI for direct access to deployed models.
LangChain-compatible, enabling powerful agent-driven workflows.
Works with major LLM providers, including:
OpenAI
Mistral
DeepSeek
Llama
Supports retrieval-augmented generation (RAG) for enhanced AI reasoning.
Compatible with vector stores like:
Qdrant
Chroma
ElasticSearch
Use custom connectors or community Kamelets (Camel K) for seamless omnichannel support.
Automate AI-powered workflows across multiple communication channels.
Supports server-side, client-side, and hybrid tool execution.
Context-aware execution strategies to optimize performance.
Minimal boilerplate, enabling rapid development and deployment.
Enterprise-ready with scalability, reliability, and observability.
✅ Scalability – Effortlessly scale AI workflows with Knative.
✅ Flexibility – Works with multiple LLM providers, vector stores, and orchestration frameworks.
✅ Performance – Asynchronous, event-driven execution optimizes efficiency.
✅ Integration – Native compatibility with LangChain, OpenShift AI, and containerized environments.
✅ Serverless Superpowers – Automatically scale workloads, reducing operational costs.
Multi-Agent AI Orchestration – Manage and coordinate complex LLM-driven workflows.
Enterprise-Scale Document Retrieval – Implement RAG for intelligent search and knowledge retrieval.
Omnichannel AI Automation – Deploy AI-powered solutions across multiple communication channels.
Hybrid Tool Execution – Dynamically execute AI tools across client, server, or hybrid environments.
A running OpenShift or Kubernetes cluster.
OpenShift Serverless (Knative) runtime installed.
OpenShift Service Mesh (Istio) for networking.
AMQ Streams (Strimzi) for event-driven messaging.
A Redis instance for memory and cache management.
A CouchDB instance to manage agent configurations.
Optional: a vector store (e.g., Qdrant, Chroma, or ElasticSearch) for RAG capabilities.
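You can sanity-check the cluster-side prerequisites before deploying. A minimal sketch, assuming the oc client and the default CRD names installed by OpenShift Serverless and AMQ Streams (Strimzi):

```shell
# Hedged prerequisite check; CRD names reflect default Knative/Strimzi installs.
if command -v oc >/dev/null 2>&1; then
  oc get crd services.serving.knative.dev || echo "Knative Serving CRD not found"
  oc get crd kafkas.kafka.strimzi.io || echo "Strimzi Kafka CRD not found"
else
  echo "install the oc client to verify cluster prerequisites"
fi
```

The check degrades gracefully: it only reports what is missing and never modifies the cluster.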
Ensure you have:
The oc client installed.
Knative support for Kafka via Strimzi (AMQ Streams).
Then, deploy the platform:
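The deployment step might look like the following sketch; the project name and manifest path are assumptions, so adjust them to match your checkout of the repository:

```shell
# Hedged deployment sketch -- "alquimia" and "deploy/" are assumed names.
if command -v oc >/dev/null 2>&1; then
  oc new-project alquimia || true   # create (or reuse) the target project
  oc apply -f deploy/               # apply the platform manifests
else
  echo "oc client not found; install it before deploying"
fi
```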
Now you are ready to deploy your first agent.
The proposed architecture is intended to be a common framework for agents; you can adapt it to your needs. Recommended setup:
Alquimia Operational Handler provides a CLI for managing embeddings, updating assistant configurations, and invoking AI-powered functions in your cluster.
Install the required libraries (using a virtual environment is recommended):
Then list the available operations by running:
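The two steps above can be sketched as follows; the file names (requirements.txt, cli.py) are assumptions about the repository layout, so substitute the actual ones from your checkout:

```shell
# Hedged sketch -- requirements.txt and cli.py are assumed file names.
if [ -f requirements.txt ]; then
  python3 -m venv .venv && . .venv/bin/activate   # isolate dependencies
  pip install -r requirements.txt                 # install required libs
  python cli.py --help                            # list available operations
else
  echo "run this from the repository root"
fi
```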
For more details, see:
We are building an open, collaborative community. Contributions are always welcome!
If you'd like to add features, improve documentation, or suggest enhancements:
Fork the repository.
Create a new branch (git checkout -b feature-xyz).
Submit a pull request with your proposed changes.
Alquimia Operational Handler is open-source and available under the MIT License.
GitHub Discussions –
Slack –
See our list of full working examples.
You can find our custom channel integrations or deploy your own.
Find all available server tools and see how easy it is to create your own bundle.
For more information on how to set up your local development environment, see the docs.
Invoke AI Functions –
Manage Embeddings –
Configure Assistants –