Commit 5fc495b

fix docs

Signed-off-by: Tim Schopf <tim.schopf@t-online.de>
1 parent: be1e841

3 files changed (+9, -9 lines)

.readthedocs.yaml

Lines changed: 1 addition & 2 deletions
@@ -18,7 +18,6 @@ formats: all
 python:
   install:
     - requirements: docs/requirements.txt
-    - requirements: requirements.txt
     - method: pip
       path: .
       extra_requirements:
@@ -27,7 +26,7 @@ python:
 build:
   os: ubuntu-22.04
   tools:
-    python: "3.7"
+    python: "3.8"
 
 submodules:
   include: all

README.md

Lines changed: 7 additions & 7 deletions
@@ -127,9 +127,12 @@ vectorizer = KeyphraseCountVectorizer()
 
 # Print parameters
 print(vectorizer.get_params())
+```
+```plaintext
 >>> {'binary': False, 'dtype': <class 'numpy.int64'>, 'lowercase': True, 'max_df': None, 'min_df': None, 'pos_pattern': '<J.*>*<N.*>+', 'spacy_exclude': ['parser', 'attribute_ruler', 'lemmatizer', 'ner'], 'spacy_pipeline': 'en_core_web_sm', 'stop_words': 'english', 'workers': 1}
 ```
 
+
 By default, the vectorizer is initialized for the English language. That means, an English `spacy_pipeline` is
 specified, English `stop_words` are removed, and the `pos_pattern` extracts keywords that have 0 or more adjectives,
 followed by 1 or more nouns using the English spaCy part-of-speech tags. In addition, the spaCy pipeline
@@ -255,14 +258,11 @@ vectorizer = KeyphraseTfidfVectorizer()
 
 # Print parameters
 print(vectorizer.get_params())
->>> {'binary': False, 'custom_pos_tagger': None, 'decay': None, 'delete_min_df': None, 'dtype': <
-
-
-class 'numpy.int64'>, 'lowercase': True, 'max_df': None
-
-, 'min_df': None, 'pos_pattern': '<J.*>*<N.*>+', 'spacy_exclude': ['parser', 'attribute_ruler', 'lemmatizer', 'ner',
-'textcat'], 'spacy_pipeline': 'en_core_web_sm', 'stop_words': 'english', 'workers': 1}
 ```
+```plaintext
+>>> {'binary': False, 'custom_pos_tagger': None, 'decay': None, 'delete_min_df': None, 'dtype': <class 'numpy.int64'>, 'lowercase': True, 'max_df': None, 'min_df': None, 'pos_pattern': '<J.*>*<N.*>+', 'spacy_exclude': ['parser', 'attribute_ruler', 'lemmatizer', 'ner', 'textcat'], 'spacy_pipeline': 'en_core_web_sm', 'stop_words': 'english', 'workers': 1}
+```
+
 
 To calculate tf values instead, set `use_idf=False`.
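The `pos_pattern` shown in the diff above, `'<J.*>*<N.*>+'`, selects spans of zero or more adjectives followed by one or more nouns. A minimal stand-alone sketch of how such a tag pattern can be matched with a plain regular expression follows; `matches_pos_pattern` is a hypothetical helper for illustration only and is not the library's actual implementation (which chunks spaCy POS tags via NLTK-style tag patterns):

```python
import re

def matches_pos_pattern(tags, pattern="<J.*>*<N.*>+"):
    """Sketch: check whether a whole POS-tag sequence matches a tag pattern.

    Hypothetical helper for illustration; not the library's real matcher.
    """
    # Serialize the tag sequence the way NLTK-style tag patterns expect.
    serialized = "".join(f"<{tag}>" for tag in tags)
    # Turn each <...> tag pattern into a non-capturing regex group and keep
    # '.' from matching across tag boundaries.
    regex = re.sub(
        r"<([^<>]*)>",
        lambda m: "(?:<" + m.group(1).replace(".", "[^<>]") + ">)",
        pattern,
    )
    return re.fullmatch(regex, serialized) is not None

print(matches_pos_pattern(["JJ", "NN"]))   # True: one adjective, one noun
print(matches_pos_pattern(["NN", "NNS"]))  # True: nouns only (zero adjectives)
print(matches_pos_pattern(["VB", "NN"]))   # False: a verb breaks the pattern
```

The default pattern therefore accepts phrases like "machine learning" (JJ/NN-style tag runs) while rejecting candidates containing verbs or other parts of speech.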

docs/requirements.txt

Lines changed: 1 addition & 0 deletions
@@ -14,6 +14,7 @@ docutils>=0.16
 numpy>=1.18.5
 spacy>=3.0.1
 spacy-transformers>=1.1.6
+spacy-curated-transformers>=0.2.2
 nltk>=3.6.1
 scikit-learn>=1.0
 scipy>=1.7.3
