While general-purpose datasets often benefit from these pre-trained word embeddings, the representations may not always transfer well to specialized domains. This is because the embeddings are trained on massive text corpora drawn from Wikipedia and similar general-purpose sources, so domain-specific vocabulary and word senses can be poorly represented or missing entirely.
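
One quick way to gauge this mismatch is to check how much of a domain's vocabulary the pre-trained embeddings actually cover. The snippet below is a minimal sketch, assuming gensim is installed and using its `glove-wiki-gigaword-100` download; the domain terms are illustrative placeholders, not drawn from any real corpus.

```python
# Sketch: measure vocabulary coverage of general-purpose GloVe vectors
# on a handful of specialized terms. Assumes gensim is installed; the
# domain terms below are hypothetical examples for illustration only.
import gensim.downloader as api

# Pre-trained vectors built from Wikipedia + Gigaword (general-purpose text).
vectors = api.load("glove-wiki-gigaword-100")

# Hypothetical specialized-domain terms (e.g., from a biomedical corpus).
domain_terms = ["myocardial", "thrombocytopenia", "pembrolizumab", "angioplasty"]

covered = [t for t in domain_terms if t in vectors]
missing = [t for t in domain_terms if t not in vectors]

print("Covered:", covered)
print("Out of vocabulary:", missing)
```

A high out-of-vocabulary rate, or embeddings that place domain terms near unrelated general-language words, is a sign that training or fine-tuning embeddings on in-domain text may be worthwhile.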