
Labels of tfrecords occasionally parsed incorrectly #24

@michirup

Description

Hi,
I built the tfrecords, created a new checkpoint, and evaluated it with the evaluation code. In the evaluation code I added some lines to save the labels, predictions, probabilities, and so on, so that I could build confusion matrices and other statistics. In doing so, I noticed that the labels (i.e. the ground truth) are sometimes misread (sometimes 3, sometimes 4 or 5 out of 176).
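
For reference, the extra lines I added are essentially equivalent to this simplified sketch (the real code accumulates the arrays across the evaluation run; all_labels and all_predictions are placeholder names, shown here with dummy values):

import numpy as np
from sklearn.metrics import confusion_matrix

# Placeholders for the label / prediction arrays accumulated over
# the evaluation batches (dummy values for illustration).
all_labels = np.array([0, 1, 2, 3, 0, 1])        # ground truth
all_predictions = np.array([0, 1, 2, 2, 0, 1])   # model outputs

# Rows are ground-truth classes, columns are predicted classes.
print(confusion_matrix(all_labels, all_predictions))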

I am using a custom dataset; the evaluation set contains 176 images (44 per class, since I have 4 classes). I know the tfrecords are built correctly because I printed the labels from them like this:

import tensorflow as tf

for example in tf.python_io.tf_record_iterator("path to .tfrecord"):
    # Parse the serialized Example proto and print its label feature.
    result = tf.train.Example.FromString(example)
    a = result.features.feature['image/class/label'].int64_list.value
    print(a)

and there are 44 labels for each class.
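
To double-check the per-class counts, the same iterator can tally every label in the file (a small sketch using the same placeholder path as above):

import collections
import tensorflow as tf

counts = collections.Counter()
for example in tf.python_io.tf_record_iterator("path to .tfrecord"):
    result = tf.train.Example.FromString(example)
    # Tally each label stored in the record.
    counts.update(result.features.feature['image/class/label'].int64_list.value)
print(counts)  # expected: 44 per class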

I found a quite similar issue, https://github.com/tensorflow/tensorflow/issues/11363, but I didn't manage to work out what to change in the code.

Note that in the evaluation code I use batch_size=176 so that one step corresponds to one epoch, and that I set is_training=True, as others suggested in a past issue, in order to get reasonable accuracy (I also checked the labels with is_training=False, just to see whether it was a batch-normalization problem, but the issue still occurs).
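
For what it's worth, one common pitfall with TF1 queue-based input pipelines that produces exactly this kind of sporadic mismatch is fetching the labels and the predictions in separate sess.run calls: each call dequeues a fresh batch, so the two tensors no longer describe the same images. A minimal sketch; sess, labels_op, and predictions_op are hypothetical names for the session and the tensors in the evaluation graph:

# Hypothetical names: sess is the evaluation session; labels_op and
# predictions_op are tensors produced by the input pipeline and the
# model. With queue-based input, every sess.run dequeues a new batch,
# so fetching the two tensors separately pairs labels from one batch
# with predictions from another:
#   labels_np = sess.run(labels_op)            # batch A
#   predictions_np = sess.run(predictions_op)  # batch B
# Fetching both in a single call evaluates them on the same batch:
labels_np, predictions_np = sess.run([labels_op, predictions_op])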

Thanks in advance to anyone who can help.
