Hi M. H. Kwon,
Your tokenization script is really helpful.
I trained a BERT model on a custom corpus using Google's scripts (create_pretraining_data.py, run_pretraining.py, extract_features.py, etc.). As a result I have a vocab file, a .tfrecord file, a .json config file, and checkpoint files.
Now, how do I use those files for the following tasks:
- predicting a missing word in a given sentence
- next sentence prediction
- a question answering (Q&A) model
I'd appreciate your help.
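For what it's worth, here is a minimal sketch of what "predict a missing word" means operationally: at each input position BERT produces one logit per vocabulary entry, and the prediction for a [MASK] position is the argmax over the vocabulary. The logits and vocabulary below are fabricated for illustration; in practice they would come from running your checkpoint on the tokenized sentence (for example after converting it for use with a library such as Hugging Face transformers).

```python
import numpy as np

def predict_masked_token(tokens, logits, vocab, mask_token="[MASK]"):
    """Return the most likely vocab entry for each [MASK] position.

    tokens: list of wordpiece tokens for the input sentence
    logits: array of shape (len(tokens), len(vocab)) from the model
    vocab:  list mapping vocab index -> token string
    """
    predictions = {}
    for pos, tok in enumerate(tokens):
        if tok == mask_token:
            best = int(np.argmax(logits[pos]))  # highest-scoring vocab entry
            predictions[pos] = vocab[best]
    return predictions

# Tiny fabricated example: a 5-entry vocabulary and a 4-token sentence.
vocab = ["[CLS]", "[MASK]", "paris", "london", "[SEP]"]
tokens = ["[CLS]", "the", "[MASK]", "[SEP]"]
logits = np.zeros((4, 5))
logits[2, 2] = 5.0  # pretend the model strongly favours "paris" at the mask
print(predict_masked_token(tokens, logits, vocab))  # {2: 'paris'}
```

Next sentence prediction works the same way but over just two logits (IsNext / NotNext) computed from the [CLS] position, and Q&A fine-tuning (e.g. on SQuAD) predicts start and end positions instead of vocabulary entries.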