Update README.md
acatav authored Nov 14, 2023
1 parent a5fa26e commit 0bc6c2f
Showing 1 changed file with 2 additions and 2 deletions.
`README.md` (2 additions, 2 deletions):

````diff
@@ -99,7 +99,7 @@ Currently the `SpladeEncoder` class supprts only the [naver/splade-cocondenser-e
 
 For an end-to-end example, you can refer to our Quora dataset generation with SPLADE [notebook](https://colab.research.google.com/github/pinecone-io/examples/blob/master/learn/search/semantic-search/sparse/splade/splade-vector-generation.ipynb).
 
-Note: If cuda is available, the model will automatically run on GPU. You can explicitly specify the device using the `device` parameter in the constructor.
+Note: If cuda is available, the model will automatically run on GPU. You can explicitly override the device using the `device` parameter in the constructor.
 
 ```python
 from pinecone_text.sparse import SpladeEncoder
@@ -129,7 +129,7 @@ For dense embedding we also provide a thin wrapper for the following models:
 
 ### Sentence Transformers models
 
-When using `SentenceTransformerEncoder`, the models are downloaded from huggingface and run locally. Also, if cuda is available, the model will automatically run on GPU. You can explicitly specify the device using the `device` parameter in the constructor.
+When using `SentenceTransformerEncoder`, the models are downloaded from huggingface and run locally. Also, if cuda is available, the model will automatically run on GPU. You can explicitly override the device using the `device` parameter in the constructor.
 
 #### Usage
 ```python
````
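The semantics both updated notes describe (explicit `device` overrides automatic GPU detection) can be sketched as a small helper. This is a hypothetical illustration of the documented behavior, not pinecone_text's actual implementation; the function name `resolve_device` and the `cuda_available` flag are invented for the sketch.

```python
def resolve_device(device=None, cuda_available=False):
    """Resolve a compute device the way the README notes describe.

    An explicitly passed `device` overrides auto-detection; otherwise
    fall back to "cuda" when it is available, else "cpu".
    """
    if device is not None:
        return device
    return "cuda" if cuda_available else "cpu"


# Explicit argument wins, regardless of CUDA availability.
print(resolve_device("cpu", cuda_available=True))  # cpu
# No argument: auto-detect.
print(resolve_device(cuda_available=True))         # cuda
print(resolve_device(cuda_available=False))        # cpu
```

This is why "override" is the more accurate word than "specify": the parameter does not merely name a device, it takes precedence over the automatic selection.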
