Commit 981aaa6

Update README.md
1 parent 2cd1166 commit 981aaa6

1 file changed: +9 −9 lines

README.md

Lines changed: 9 additions & 9 deletions
@@ -24,9 +24,6 @@ Define `TopicModel` from an ARTM model at hand or with help from `model_construc
 **Core library functionality is based on BigARTM library** which requires manual installation.
 To avoid that you can use [docker images](https://hub.docker.com/r/xtonev/bigartm/tags) with preinstalled BigARTM library in them.
 
-Alternatively, you can follow [BigARTM installation manual](https://bigartm.readthedocs.io/en/stable/installation/index.html)
-After setting up the environment you can fork this repository or use ```pip install topicnet``` to install the library.
-
 #### Using docker image
 ```
 docker pull xtonev/bigartm:v0.10.0
@@ -39,6 +36,9 @@ import artm
 artm.version()
 ```
 
+Alternatively, you can follow [BigARTM installation manual](https://bigartm.readthedocs.io/en/stable/installation/index.html).
+After setting up the environment you can fork this repository or use ```pip install topicnet``` to install the library.
+
 ---
 ## How to use TopicNet
 Let's say you have a handful of raw text mined from some source and you want to perform some topic modelling on them. Where should you start?
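The hunk above moves the "verify with `import artm`" step before the pip install. A hedged, standard-library-only sketch (not part of TopicNet) of that pre-install check, usable whether or not BigARTM is present:

```python
# Check that the `artm` module (BigARTM's Python bindings) is importable,
# without actually importing it. Standard library only; not a TopicNet API.
import importlib.util

def bigartm_available():
    """Return True if the BigARTM Python bindings can be imported as `artm`."""
    return importlib.util.find_spec("artm") is not None

print(bigartm_available())
```

If this prints `False`, install BigARTM (docker image or manual build) before running `pip install topicnet`.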
@@ -59,7 +59,7 @@ In case you want to start from a fresh model we suggest you use this code:
 from topicnet.cooking_machine.model_constructor import init_simple_default_model
 
 model_artm = init_simple_default_model(
-    dataset=demo_data,
+    dataset=data,
     modalities_to_use={'@lemmatized': 1.0, '@bigram':0.5},
     main_modality='@lemmatized',
     n_specific_topics=14,
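For intuition, the `modalities_to_use` argument above assigns each modality a weight. The toy arithmetic below, with made-up token counts, only illustrates how such weights scale a modality's contribution; the real weighting happens inside BigARTM when the model is fit:

```python
# Toy illustration with hypothetical counts; not TopicNet/BigARTM internals.
modalities_to_use = {"@lemmatized": 1.0, "@bigram": 0.5}
token_counts = {"@lemmatized": 100, "@bigram": 40}  # made-up counts

# Each modality's count is scaled by its weight before contributing.
weighted_total = sum(w * token_counts[m] for m, w in modalities_to_use.items())
print(weighted_total)  # 100 * 1.0 + 40 * 0.5 = 120.0
```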
@@ -71,7 +71,7 @@ Further, if needed, one can define a custom score to be calculated during the mo
 ```
 from topicnet.cooking_machine.models.base_score import BaseScore
 
-class ThatCustomScore(BaseScore):
+class CustomScore(BaseScore):
     def __init__(self):
         super().__init__()
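The renamed `CustomScore` above follows TopicNet's subclass-a-base-score pattern. The sketch below imitates that pattern with a stand-in base class so it runs without TopicNet installed; the `call` hook name and the `value` list are assumptions about the real `BaseScore`, not its documented API:

```python
class BaseScore:
    """Stand-in for topicnet.cooking_machine.models.base_score.BaseScore."""
    def __init__(self):
        self.value = []  # assumed: score values accumulated over iterations

class CustomScore(BaseScore):
    def __init__(self):
        super().__init__()

    def call(self, model):
        # Assumed hook: return one score value for the current model state.
        # A real score might, e.g., measure sparsity of the Phi matrix.
        return 0.0

score = CustomScore()
score.value.append(score.call(model=None))
print(score.value)  # [0.0]
```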
@@ -86,7 +86,7 @@ Now, `TopicModel` with custom score can be defined:
 ```
 from topicnet.cooking_machine.models.topic_model import TopicModel
 
-custom_score_dict = {'SpecificSparsity': ThatCustomScore()}
+custom_score_dict = {'SpecificSparsity': CustomScore()}
 tm = TopicModel(model_artm, model_id='Groot', custom_scores=custom_score_dict)
 ```
 #### Define experiment
@@ -101,7 +101,7 @@ from topicnet.cooking_machine.cubes import RegularizersModifierCube
 
 my_first_cube = RegularizersModifierCube(
     num_iter=5,
-    tracked_score_function=retrieve_score_for_strategy('PerplexityScore@lemmatized'),
+    tracked_score_function='PerplexityScore@lemmatized',
     regularizer_parameters={
         'regularizer': artm.DecorrelatorPhiRegularizer(name='decorrelation_phi', tau=1),
         'tau_grid': [0,1,2,3,4,5],
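Conceptually, the cube above trains a model for each `tau` in `tau_grid` and uses the tracked score to compare the results. A plain-Python analogue of that selection step, with a made-up score function standing in for TopicNet's perplexity:

```python
tau_grid = [0, 1, 2, 3, 4, 5]

def tracked_score(tau):
    # Hypothetical stand-in for a perplexity-like score; by construction
    # it is minimized at tau=2. Not TopicNet's PerplexityScore.
    return (tau - 2) ** 2

# Pick the grid point with the best (lowest) tracked score.
best_tau = min(tau_grid, key=tracked_score)
print(best_tau)  # 2
```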
@@ -129,14 +129,14 @@ for line in first_model_html:
 ---
 ## FAQ
 
-#### In the example we used to write vw modality like **@modality** is it a VowpallWabbit format?
+#### In the example we used to write vw modality like **@modality**, is it a VowpallWabbit format?
 
 It is a convention to write data designating modalities with @ sign taken by TopicNet from BigARTM.
 
 #### CubeCreator helps to perform a grid search over initial model parameters. How can I do it with modalities?
 
 Modality search space can be defined using standart library logic like:
-```
+```,k
 name: 'class_ids',
 values: {
     '@text': [1, 2, 3],
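To see what a `class_ids` search space like the fragment above expands to, the snippet below enumerates the weight combinations with the standard library. The `@text` grid comes from the fragment; the second modality and its grid are made up for illustration, and the expansion itself is plain Python, not a TopicNet API:

```python
from itertools import product

search_space = {
    "name": "class_ids",
    "values": {
        "@text": [1, 2, 3],      # from the README fragment above
        "@ngrams": [0.5, 1.0],   # hypothetical second modality
    },
}

# Cartesian product of the per-modality grids, one dict per combination.
modalities = list(search_space["values"])
grids = [search_space["values"][m] for m in modalities]
combinations = [dict(zip(modalities, point)) for point in product(*grids)]
print(len(combinations))  # 3 * 2 = 6
```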
