A trustless, privacy-preserving semantic routing agent that processes and evaluates content updates using LLM providers. Built on an end-to-end TEE (Trusted Execution Environment) architecture for complete operational blindness: no one, including the operators, can see what is being processed or who is using it.
- Real-time content filtering using Server-Sent Events (SSE)
- End-to-end TEE architecture:
  - Lit Protocol for secure key management and configuration
  - OpenGradient for private LLM inference
  - Fleek for a secure TEE-based decentralized runtime
  - IPFS for decentralized prompt storage
- Multiple data source integrations:
  - Farcaster
  - Lu.ma (coming soon)
  - Paragraph.xyz (coming soon)
- Python 3.8+
- Node.js 16+
- Google Cloud Project (for Gemini)
- OpenGradient account
- Lit Protocol setup
- IPFS node (optional, for self-hosting prompts)
- Pull and run the official Docker image:

```shell
docker pull indexnetwork/semantic-router:0.0.2
docker run -p 8000:8000 --env ROUTER_CONFIG="<YOUR_CONFIG>" indexnetwork/semantic-router:0.0.2
```
Note: To obtain a `ROUTER_CONFIG` value for testing, contact [email protected]. Treat this configuration as a secret and do not share it.
- Clone the repository:

```shell
git clone https://github.com/indexnetwork/semantic-router
cd semantic-router
```
- Build and run the Docker container:

```shell
docker build -t semantic-router .
docker run -p 8000:8000 semantic-router
```
- Connect to the agent's SSE endpoint:

```shell
curl -N "http://localhost:8000/?prompt=If+its+a+significant+update+on+decentralized+AI+and+autonomous+agents."
```

This streams semantic evaluation results for the provided prompt.
Query Parameters:
- `prompt`: a free-form natural-language prompt describing the kind of content you want matched.

Response: a Server-Sent Events stream with JSON payloads.
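The SSE stream can be consumed from any HTTP client, not just curl. As an illustration, here is a minimal Python client sketch using only the standard library. The endpoint and `prompt` query parameter follow the curl example above; the function names are hypothetical, and the exact shape of each JSON payload is not assumed here, so events are yielded as parsed dictionaries.

```python
import json
import urllib.parse
import urllib.request

# Local router started via the docker run commands above.
ROUTER_URL = "http://localhost:8000/"

def parse_sse_data(line: str):
    """Return the JSON payload from an SSE 'data:' line, or None for
    comments, blank keep-alive lines, and other fields."""
    line = line.strip()
    if line.startswith("data:"):
        return json.loads(line[len("data:"):].strip())
    return None

def stream_evaluations(prompt: str):
    """Yield parsed JSON payloads from the router's SSE stream."""
    url = ROUTER_URL + "?" + urllib.parse.urlencode({"prompt": prompt})
    with urllib.request.urlopen(url) as resp:
        for raw in resp:  # SSE responses arrive line by line
            event = parse_sse_data(raw.decode("utf-8"))
            if event is not None:
                yield event

# Usage (requires a running router):
#   for event in stream_evaluations("If it's a significant update on "
#                                   "decentralized AI and autonomous agents."):
#       print(event)
```

Since the stream is long-lived, a real client would also want reconnect logic and a read timeout; this sketch omits both for brevity.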
MIT License; see the LICENSE file for details.