MetaAgent

🦄 The next-generation multi-modal multi-agent framework. 🤖

Introduction

We are dedicated to developing a universal multi-modal multi-agent framework. Multi-modal agents are powerful agents capable of understanding and generating information across various modalities, including text, images, audio, and video. These agents are designed to automatically complete complex tasks that involve multiple modalities of input and output. Our framework also aims to support multi-agent collaboration, which allows for a more comprehensive and nuanced understanding of complex scenarios, leading to more effective problem-solving and task completion.
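To make this concrete, here is a minimal, hypothetical sketch of what driving a multi-modal agent could look like. The Message and Agent classes below are illustrative stand-ins, not MetaAgent's actual API.

# Hypothetical sketch of a multi-modal agent workflow; the classes shown
# here are illustrative, not the framework's real interface.
from dataclasses import dataclass

@dataclass
class Message:
    modality: str  # "text", "image", "audio", or "video"
    content: str

class Agent:
    def __init__(self, name: str, modalities: list[str]):
        self.name = name
        self.modalities = modalities

    def handle(self, msg: Message) -> Message:
        # A real agent would route the input to a model for its modality;
        # this toy version just acknowledges what it received.
        assert msg.modality in self.modalities
        return Message("text", f"{self.name} handled a {msg.modality} request")

agent = Agent("illustrator", modalities=["text", "image"])
print(agent.handle(Message("text", "Draw a comic about Mars.")).content)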

🔥 Features

  • Build, manage and deploy your AI agents.

  • Multi-modal agents: agents can interact with users through text, audio, images, and video.

  • Vector database and knowledge embeddings (a retrieval sketch follows this list).

  • UI for chatting with AI agents.

  • Multi-agent collaboration: you can create an agent company for complex tasks, such as drawing comics. (Coming soon)

  • Fine-tuning and RLHF (Coming soon)

  • Chat UI with support for multiple sessions.
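To illustrate the knowledge-embedding feature above, here is a minimal, self-contained retrieval sketch. The toy vectors and the retrieve helper are hypothetical stand-ins for real model embeddings and for the framework's actual vector database.

# Minimal sketch of embedding-based retrieval; hand-made toy vectors stand
# in for real model embeddings, and this is not MetaAgent's vector store.
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Knowledge base: (text, embedding) pairs. Real embeddings would come
# from a text-embedding model; these are hand-made 3-d toy vectors.
knowledge = [
    ("Mars is the fourth planet from the Sun.", np.array([0.9, 0.1, 0.0])),
    ("Comics combine images and dialogue.",     np.array([0.1, 0.9, 0.2])),
]

def retrieve(query_emb: np.ndarray, top_k: int = 1) -> list[str]:
    # Rank stored entries by cosine similarity to the query embedding.
    scored = sorted(knowledge, key=lambda kv: cosine_sim(query_emb, kv[1]), reverse=True)
    return [text for text, _ in scored[:top_k]]

print(retrieve(np.array([1.0, 0.0, 0.1])))  # -> the Mars fact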

🔥 Framework

[Framework architecture diagram]

📃 Examples

Comics Company: create a comic about Elon landing on Mars.


Multi-modal agent: draws images and makes videos for you.


Installation

1. Python Environment

git clone https://github.com/ZhihaoAIRobotic/MetaAgent.git
cd MetaAgent
conda env create -f environment.yaml
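After the environment is created, activate it with conda activate followed by the environment name that environment.yaml defines (the name is not shown in this README).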

2. Frontend Install

cd frontend
npm install

Usage

1. Run API service

cd MetaAgent/metaagent
python manager.py
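Leave this service running in its own terminal; the Chat UI started in the next step presumably connects to it.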

2. Run Chat UI

cd frontend 
npm run dev
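npm run dev starts the frontend development server, which typically prints a local URL; open that URL in a browser to use the chat interface.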