feat: refactor examples
sgomez committed Oct 13, 2024
1 parent c303117 commit a6caeae
Showing 8 changed files with 451 additions and 167 deletions.
136 changes: 55 additions & 81 deletions docs/bot/running.md
@@ -6,68 +6,78 @@ In this section, we will go through how to run a basic echo bot using the Grammy

The following code sets up a simple bot that echoes back any text message it receives.

### Step 1: Add a command

The first step is to create a simple `/start` command. This command responds with a welcome message to introduce the bot and let users know it's ready to help. It's good practice to give a bot a starting point like this to guide the user experience, especially for first-time users.

```ts title="src/lib/commands/start.ts"
import type { CommandContext, Context } from 'grammy'

export async function start(context: CommandContext<Context>): Promise<void> {
  const content = 'Welcome, how can I help you?'
  await context.reply(content)
}
```

### Step 2: Handle Messages

This is where the echo functionality comes into play. The bot listens for incoming text messages from users and, upon receiving one, responds by sending the same message back. This demonstrates the bot's basic ability to handle and reply to user input.

```ts title="src/lib/handlers/on-message.ts"
import { Composer } from 'grammy'

export const onMessage = new Composer()

onMessage.on('message:text', async (context) => {
  const userMessage = context.message.text
  await context.reply(userMessage)
})
```

### Step 3: Construct the bot

This section walks through setting up the core bot functionality. First, we initialize the bot using the token from environment variables, which is required for Telegram to authenticate and interact with your bot.

Next, we attach the `/start` command and the echo message handler (`onMessage`). The bot listens for incoming text and command events and processes them accordingly.

Additionally, we ensure the bot shuts down properly when the process receives termination signals, such as SIGINT or SIGTERM, making the bot more robust and production-ready.

```ts title="src/main.ts"
import process from 'node:process'

import { Bot } from 'grammy'

import { start } from './lib/commands/start'
import { environment } from './lib/environment.mjs'
import { onMessage } from './lib/handlers/on-message'

async function main(): Promise<void> {
  const bot = new Bot(environment.BOT_TOKEN)

  bot.command('start', start)

  bot.use(onMessage)

  // Enable graceful stop
  process.once('SIGINT', () => bot.stop())
  process.once('SIGTERM', () => bot.stop())
  process.once('SIGUSR2', () => bot.stop())

  await bot.start()
}

main().catch((error) => console.error(error))
```
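
The examples above import `environment` from `src/lib/environment.mjs`, which exposes the validated environment variables. That module is not shown in this section; the following is only a minimal sketch of what it could look like, assuming plain `process.env` lookups (the actual project may validate variables differently, for example with a schema library):

```ts
// Hypothetical sketch of src/lib/environment.mjs — shown only so the examples are self-contained.
// The real module may validate its variables differently.
import process from 'node:process'

function required(name: string): string {
  const value = process.env[name]
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`)
  }
  return value
}

export const environment = {
  // Telegram bot token used by grammY.
  BOT_TOKEN: required('BOT_TOKEN'),
  // Model identifier used in the chatbot chapters.
  MODEL: required('MODEL'),
}
```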

## Running the Bot

This final section provides a step-by-step guide on how to set up and run the bot: copy the environment configuration, update it with your Telegram bot token, install the required dependencies, and run the bot in development mode.

1. **Copy the environment configuration**:
   Rename the provided `.env.example` file to `.env` to set up your environment variables (for example, with `cp .env.example .env`).

After following these steps, your bot will be running, and you can start chatting with it. The bot will respond to the `/start` command with a welcome message and echo any text message you send.

56 changes: 21 additions & 35 deletions docs/chatbot/basic.md
@@ -1,10 +1,10 @@
# Basic Chatbot with AI

In this chapter, we will integrate AI-powered responses into a basic chatbot. The chatbot uses an AI model to generate replies based on user input.

## AI-Powered Responses

The chatbot relies on the `generateText` function from the Vercel AI SDK to produce intelligent responses. This function takes two key inputs: the user's message (the prompt) and system instructions that define the AI's role.

### Understanding the Code

@@ -37,49 +37,35 @@ const { text } = await generateText({
3. The system prompt ensures that the AI model knows it is a chatbot for booking appointments, guiding it to provide relevant responses.
4. The AI model processes the input and generates a response, which is then sent back to the user.

At this stage, the bot can respond intelligently using AI but lacks conversation memory to handle ongoing interactions.

## Full code


```ts title="src/lib/handlers/on-message.ts"
import { generateText } from 'ai'
import { Composer } from 'grammy'

import { registry } from '../ai/setup-registry'
import { environment } from '../environment.mjs'

export const onMessage = new Composer()

const PROMPT = `
You are a chatbot designed to help users book hair salon appointments for the next day.
`

onMessage.on('message:text', async (context) => {
  const userMessage = context.message.text

  // Generate the assistant's response from the user's message
  const { text } = await generateText({
    model: registry.languageModel(environment.MODEL),
    prompt: userMessage,
    system: PROMPT,
  })

  // Reply with the generated text
  await context.reply(text)
})
```
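
The model passed to `generateText` is resolved through the provider registry configured in the registration chapter, using the identifier stored in the `MODEL` environment variable. As a quick, standalone way to verify that configuration outside Telegram, a small throwaway script could look like the following (the file location and the example model id are only assumptions, not part of the project):

```ts
// Hypothetical scripts/check-model.ts — a one-off sanity check, not part of the bot.
import { generateText } from 'ai'

import { registry } from '../src/lib/ai/setup-registry'
import { environment } from '../src/lib/environment.mjs'

async function main(): Promise<void> {
  // With the AI SDK provider registry, MODEL is expected to use the `provider:model` format,
  // for example `openai:gpt-4o-mini` (adjust to whatever providers you registered).
  const model = registry.languageModel(environment.MODEL)

  const { text } = await generateText({
    model,
    prompt: 'Say hello in one short sentence.',
  })

  console.log(text)
}

main().catch((error) => console.error(error))
```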
91 changes: 45 additions & 46 deletions docs/chatbot/memory.md
@@ -106,67 +106,66 @@

In this step, we enhance the chatbot's functionality by adding memory capabilities, allowing it to remember past interactions with users. This enables the bot to provide a more personalized experience and improve its responses based on previous messages.

First, we are going to reset the bot's memory each time the `/start` command is executed, and add the welcome message.

```ts title="src/lib/commands/start.ts"
import type { CommandContext, Context } from 'grammy'

import { conversationRepository } from '../repositories/conversation'

export async function start(context: CommandContext<Context>): Promise<void> {
  const chatId = context.chat.id
  // Clear the conversation
  await conversationRepository.clearConversation(chatId)

  const content = 'Welcome, how can I help you?'
  // Store the assistant's welcome message
  await conversationRepository.addMessage(chatId, 'assistant', content)

  await context.reply(content)
}
```

Now, we can store the user's messages and the bot's replies:

```ts title="src/lib/handlers/on-message.ts"
import { generateText } from 'ai'
import { Composer } from 'grammy'

import { registry } from '../ai/setup-registry'
import { environment } from '../environment.mjs'
import { conversationRepository } from '../repositories/conversation'

export const onMessage = new Composer()

const PROMPT = `
You are a chatbot designed to help users book hair salon appointments for the next day.
`

onMessage.on('message:text', async (context) => {
  const userMessage = context.message.text
  const chatId = context.chat.id

  // Store the user's message
  await conversationRepository.addMessage(chatId, 'user', userMessage)

  // Retrieve past conversation history
  const messages = await conversationRepository.get(chatId)

  // Generate the assistant's response using the conversation history
  const { text } = await generateText({
    messages,
    model: registry.languageModel(environment.MODEL),
    system: PROMPT,
  })

  // Store the assistant's response
  await conversationRepository.addMessage(chatId, 'assistant', text)

  // Reply with the generated text
  await context.reply(text)
})
```
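
Both snippets import a shared `conversationRepository` instance instead of constructing a repository in `main.ts`. If you are reading this section on its own, the sketch below shows one way the repository module could expose that instance; it is only an illustrative in-memory version, and the chapter's actual `ConversationRepository` may differ (the methods are async so a database-backed implementation can share the same interface):

```ts
// Hypothetical in-memory sketch of src/lib/repositories/conversation.ts.
// The real ConversationRepository in this chapter may be implemented differently.
export interface ConversationMessage {
  role: 'user' | 'assistant'
  content: string
}

export class ConversationRepository {
  private readonly conversations = new Map<number, ConversationMessage[]>()

  async addMessage(chatId: number, role: ConversationMessage['role'], content: string): Promise<void> {
    const messages = this.conversations.get(chatId) ?? []
    messages.push({ role, content })
    this.conversations.set(chatId, messages)
  }

  async get(chatId: number): Promise<ConversationMessage[]> {
    return this.conversations.get(chatId) ?? []
  }

  async clearConversation(chatId: number): Promise<void> {
    this.conversations.delete(chatId)
  }
}

// Shared singleton used by the `/start` command and the message handler.
export const conversationRepository = new ConversationRepository()
```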

By incorporating memory, the bot becomes more capable of engaging in meaningful dialogues, improving user satisfaction and the overall chat experience.
2 changes: 1 addition & 1 deletion docs/chatbot/register.md
@@ -73,7 +73,7 @@ Now, to use one or the other, edit the .env file and configure which provider an

## Full code

```ts title="src/lib/ai/setup-registry.ts"
import { openai as originalOpenAI } from '@ai-sdk/openai'
import {
experimental_createProviderRegistry as createProviderRegistry,
```
