# torchTextClassifiers: Efficient text classification with PyTorch

A flexible PyTorch implementation of models for text classification with support for categorical features.

## Features

- Supports text classification with the FastText architecture
- Handles both text and categorical features
- N-gram tokenization
- Flexible optimizer and scheduler options
- GPU and CPU support
- Model checkpointing and early stopping
- Prediction and model explanation capabilities

## Installation

- With `pip`:

```bash
pip install torchTextClassifiers
```

- With `uv`:

```bash
uv add torchTextClassifiers
```

## Key Components

- `build()`: Constructs the FastText model architecture
- `train()`: Trains the model with built-in callbacks and logging
- `predict()`: Generates class predictions
- `predict_and_explain()`: Provides predictions with feature attributions

## Subpackages

- `preprocess`: Preprocesses text input, using the `nltk` and `unidecode` libraries.
- `explainability`: Simple methods to visualize feature attributions at word and letter levels, using the `captum` library.

Run `pip install torchTextClassifiers[preprocess]` or `pip install torchTextClassifiers[explainability]` to install these optional dependencies.

## Quick Start

```python
from torchTextClassifiers import torchTextClassifiers

# Initialize the model
model = torchTextClassifiers(
    num_tokens=1000000,
    embedding_dim=100,
    min_count=5,
    min_n=3,
    max_n=6,
    len_word_ngrams=True,
    sparse=True
)

# Train the model
model.train(
    X_train=train_data,
    y_train=train_labels,
    X_val=val_data,
    y_val=val_labels,
    num_epochs=10,
    batch_size=64,
    lr=4e-3
)

# Make predictions
predictions = model.predict(test_data)
```

where `train_data` is an array of shape $(N, d)$ whose first column contains the text as strings and whose remaining columns contain the tokenized categorical variables as `int` values.

Please make sure `y_train` contains each possible label at least once.
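
For illustration, here is a minimal sketch of that input layout built with NumPy; the texts, categorical codes, and labels below are made up for the example:

```python
import numpy as np

# Hypothetical data: text in the first column, two integer-encoded
# categorical columns after it (dtype=object keeps mixed types in one array).
train_data = np.array(
    [
        ["the delivery was fast and the product works well", 0, 3],
        ["package arrived damaged and support never answered", 1, 0],
        ["average quality but reasonable price", 0, 2],
    ],
    dtype=object,
)

# One label per row; every possible class should appear at least once.
train_labels = np.array([1, 0, 1])
```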

## Dependencies

- PyTorch Lightning
- NumPy

## Categorical features

Each categorical feature $i$ (if any) is associated with an embedding matrix of size (number of unique values, embedding dimension), where the embedding dimension is a user-chosen hyperparameter (`categorical_embedding_dims`) that can take three types of values:

- `None`: same embedding dimension as the token embedding matrix. The categorical embeddings are summed with the sentence-level embedding (which is itself an average of the token embeddings). See [Figure 1](#figure-1).
- `int`: all categorical embeddings share the same dimension; they are averaged and the resulting vector is concatenated to the sentence-level embedding (the last linear layer has an adapted input size). See [Figure 2](#figure-2).
- `list`: the categorical embeddings may have different dimensions; all of them are concatenated, without aggregation, to the sentence-level embedding (the last linear layer has an adapted input size). See [Figure 3](#figure-3).

The default is `None`.
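
As a plain-math reading of the `None` case (an interpretation of the description above, not taken from the code): if $e_{w_1}, \dots, e_{w_T}$ are the token embeddings of a sentence and $c_1, \dots, c_m$ the categorical embeddings, the vector fed to the final linear layer is

$$
h = \frac{1}{T} \sum_{t=1}^{T} e_{w_t} + \sum_{i=1}^{m} c_i.
$$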

<a name="figure-1"></a>

*Figure 1: The 'sum' architecture*

<a name="figure-2"></a>

*Figure 2: The 'average and concatenate' architecture*

<a name="figure-3"></a>

*Figure 3: The 'concatenate all' architecture*
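
The snippet below sketches the three configurations; it assumes `categorical_embedding_dims` is passed to the constructor alongside the other hyperparameters (the remaining argument values are simply reused from the Quick Start):

```python
from torchTextClassifiers import torchTextClassifiers

# 'sum' architecture (default): categorical embeddings share the token
# embedding dimension and are summed with the sentence-level embedding.
model = torchTextClassifiers(
    num_tokens=1000000,
    embedding_dim=100,
    min_count=5,
    min_n=3,
    max_n=6,
    len_word_ngrams=True,
    sparse=True,
    categorical_embedding_dims=None,  # default
)

# 'average and concatenate': every categorical embedding has dimension 20;
# they are averaged and the result is concatenated to the sentence-level embedding.
#   categorical_embedding_dims=20

# 'concatenate all': one dimension per categorical feature (two features here);
# all embeddings are concatenated to the sentence-level embedding without aggregation.
#   categorical_embedding_dims=[10, 30]
```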

## Documentation

For detailed usage and examples, please refer to the [example notebook](notebooks/example.ipynb). Use `pip install -r requirements.txt` after cloning the repository to install the necessary dependencies (some are specific to the notebook).

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## License

MIT

## References

Inspired by the original FastText paper [1] and implementation.

[1] A. Joulin, E. Grave, P. Bojanowski, T. Mikolov, [*Bag of Tricks for Efficient Text Classification*](https://arxiv.org/abs/1607.01759)

```
@InProceedings{joulin2017bag,
    title={Bag of Tricks for Efficient Text Classification},
    author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Mikolov, Tomas},
    booktitle={Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
    month={April},
    year={2017},
    publisher={Association for Computational Linguistics},
    pages={427--431},
}
```