Currently, the framework does not seem to support batching inputs, which reduces its usability on larger datasets.
The intuitive approach would be:
```python
from danlp.models import load_bert_tone_model

classifier = load_bert_tone_model()
classifier.predict(["I am very happy", "I am very very happy"])
# {'analytic': 'objective', 'polarity': 'positive'}
```
While you would expect:
```python
from danlp.models import load_bert_tone_model

classifier = load_bert_tone_model()
classifier.predict(["I am very happy", "I am very very happy"], batch_size=2)
# [{'analytic': 'objective', 'polarity': 'positive'},
#  {'analytic': 'objective', 'polarity': 'positive'}]
```
The reason for adding `batch_size=2` is to distinguish between looping through each text and batching them for faster computation on GPUs. Interestingly, the first approach does not throw an error; it appears to silently return a single prediction instead of one per input.
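Until batching is supported natively, a caller could chunk the inputs themselves and feed each chunk to `predict`. A minimal sketch of the chunking logic (the `batched` helper below is my own illustration, not part of DaNLP):

```python
def batched(texts, batch_size):
    """Yield successive chunks of at most batch_size items."""
    for i in range(0, len(texts), batch_size):
        yield texts[i:i + batch_size]

# Example: two full chunks of size 2, then a remainder of 1.
print(list(batched(["a", "b", "c", "d", "e"], 2)))
# → [['a', 'b'], ['c', 'd'], ['e']]
```

Today each chunk would still be passed to `predict` one text at a time; with native batching, the whole chunk could go to the GPU in a single forward pass.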