This is a Flutter application that uses the Llama.cpp library to run large language models (LLMs) offline.
- Offline Model Execution: Runs large language models entirely on-device, making it ideal for environments with limited or no internet connectivity.
- Cross-Platform: Built with Flutter, the application can be compiled and run on multiple platforms (see the supported platforms below).
- Efficient Performance: Llama.cpp ensures large models run efficiently, providing fast local inference.
To get started, clone the repository, navigate to the project directory, and initialize the submodules:
```sh
git clone https://github.com/mizy/local-agent-chat.git
cd local-agent-chat
git submodule update --init --recursive
```
Supported platforms:

- macOS
- Linux
- iOS
To build the project, run `flutter build` with a target platform:

```sh
flutter build macos
```

Replace `macos` with the target you need (for example `apk` for Android, `ios`, or `linux`). This generates a release build for the selected platform.
To run the project, use the following command:

```sh
flutter run
```
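If more than one device or simulator is available, you can target a specific one with `flutter run -d <device-id>`; `flutter devices` lists the available IDs.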
To add a new prompt format, edit the `antiprompt_map` in `llama_cpp_dart/src/llm.cpp`, as sketched below.
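As a rough illustration, a new entry might look like the following. This is a minimal sketch assuming `antiprompt_map` maps a format name to that format's stop ("antiprompt") strings; the actual type and existing entries in `llm.cpp` may differ, so mirror what is already there:

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical shape of antiprompt_map: prompt-format name -> stop strings.
// The real definition in llama_cpp_dart/src/llm.cpp may differ; follow the
// pattern of the entries that already exist there.
static std::map<std::string, std::vector<std::string>> antiprompt_map = {
    // An existing ChatML-style entry might look like this:
    {"chatml", {"<|im_end|>"}},
    // Register a new format by listing the strings that mark the end of a
    // model turn, so generation stops before the next role marker:
    {"my-new-format", {"### User:", "### System:"}},
};
```

Since this is native code, rebuild the app after editing so the new format is compiled in.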
Contributions are welcome!
This project is licensed under the terms of the MIT license.