Description
Models trained by this pipeline perform very well, but how can they be hosted with TensorFlow Model Serving? The checkpoint needs to be converted into the SavedModel (.pb) format.
What I've done so far is:
- I've modified the following method to provide the model with a named output tensor.
The line of code I added is:
logits = tf.nn.softmax(logits, name="softmax_tensor")
def _get_image_info(features, mode):
    """Calculates the logits and sequence length"""
    image = features['image']
    width = features['width']

    conv_features, sequence_length = model.convnet_layers(image,
                                                          width,
                                                          mode)
    logits = model.rnn_layers(conv_features, sequence_length,
                              charset.num_classes())
    logits = tf.nn.softmax(logits, name="softmax_tensor")

    return logits, sequence_length
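As a side note, for serving a decoded label sequence may be more useful than the raw softmax. A minimal sketch of what I have in mind (the helper name is mine; it assumes the logits are time-major, which matches the time_major=True CTC loss further below, and tf.sparse.to_dense requires TF >= 1.13):

import tensorflow as tf

def decode_predictions(logits, sequence_length):
    """Greedy CTC decode: time-major logits -> dense label ids."""
    # Greedy decoding takes a per-frame argmax, so it yields the same
    # labels whether applied to the raw logits or to their softmax.
    decoded, _ = tf.nn.ctc_greedy_decoder(logits, sequence_length)
    return tf.sparse.to_dense(decoded[0], default_value=-1,
                              name="decoded_labels")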
- Then, following the MNIST example, I added a simple serving input receiver function:
def serving_input_receiver_fn():
    """
    This is used to define inputs to serve the model.
    :return: ServingInputReceiver
    """
    receiver_tensors = {
        # The size of the input image is flexible.
        'image': tf.placeholder(tf.float32, [None, None, None, 1]),
        'width': tf.placeholder(tf.int32, [None, 1]),
        'length': tf.placeholder(tf.int64, [None, 1]),
        'text': tf.placeholder(tf.string, [None]),
    }

    # Convert the given inputs to match what the model expects.
    features = {
        # Resize the given images.
        'image': tf.image.resize_images(receiver_tensors['image'], [28, 28]),
        'width': tf.shape(receiver_tensors['image'])[1],
    }

    return tf.estimator.export.ServingInputReceiver(receiver_tensors=receiver_tensors,
                                                    features=features)
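Separately, since 'length' and 'text' appear to be consumed only by the loss, a leaner receiver that accepts just the image might be enough for inference. A sketch under that assumption (the function name is mine, and whether convnet_layers wants the per-example width vector I build here needs checking):

def predict_input_receiver_fn():
    """Hypothetical receiver accepting only an image batch."""
    image = tf.placeholder(tf.float32, [None, None, None, 1], name='image')
    receiver_tensors = {'image': image}
    features = {
        'image': image,
        # In NHWC layout dimension 2 is the width (dimension 1 is the
        # height); broadcast it into a per-example vector.
        'width': tf.fill([tf.shape(image)[0]], tf.shape(image)[2]),
    }
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)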
- Next, in train.py, I have added:

classifier.export_saved_model(saved_dir,
                              serving_input_receiver_fn=model_fn.serving_input_receiver_fn)
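Once the export stops failing, I plan to smoke-test the SavedModel from Python before standing up TensorFlow Model Serving. A sketch (the timestamped export path is a placeholder, and tf.contrib.predictor is TF 1.x only):

import numpy as np
import tensorflow as tf

predictor = tf.contrib.predictor.from_saved_model('saved_dir/1549380000')
print(predictor.feed_tensors)   # the receiver placeholders
print(predictor.fetch_tensors)  # the exported output tensors

# Feed a dummy grayscale image; placeholders the outputs don't depend
# on ('width', 'length', 'text') need not be fed.
outputs = predictor({'image': np.zeros((1, 32, 100, 1), np.float32)})
print({k: v.shape for k, v in outputs.items()})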
- After that, when I tried to train, I received the following error:
TypeError: Expected labels (first argument) to be a SparseTensor
- To fix that, in model.py I modified the following method, converting sequence_labels from a dense to a sparse tensor:
def ctc_loss_layer(rnn_logits, sequence_labels, sequence_length,
                   reduce_mean=True):
    """Build CTC Loss layer for training"""
    labels_sparse = dense_to_sparse(sequence_labels, sparse_val=0)
    losses = tf.nn.ctc_loss(labels_sparse,
                            rnn_logits,
                            sequence_length,
                            time_major=True,
                            ignore_longer_outputs_than_inputs=True)
    if reduce_mean:
        loss = tf.reduce_mean(losses)
    else:
        loss = tf.reduce_sum(losses)
    return loss
def dense_to_sparse(dense_tensor, sparse_val=0):
    with tf.name_scope("dense_to_sparse"):
        sparse_inds = tf.where(tf.not_equal(dense_tensor, sparse_val),
                               name="sparse_inds")
        sparse_vals = tf.gather_nd(dense_tensor, sparse_inds,
                                   name="sparse_vals")
        dense_shape = tf.shape(dense_tensor, name="dense_shape",
                               out_type=tf.int64)
        return tf.SparseTensor(sparse_inds, sparse_vals, dense_shape)
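For what it's worth, dense_to_sparse behaves as expected in isolation. A quick check (note it treats sparse_val entries as padding, so this only works if 0 is never a real label id):

import tensorflow as tf

# Two padded label rows: [3, 1] and [2].
dense = tf.constant([[3, 1, 0, 0],
                     [2, 0, 0, 0]], dtype=tf.int32)
sparse = dense_to_sparse(dense, sparse_val=0)

with tf.Session() as sess:
    result = sess.run(sparse)
    print(result.indices.tolist())      # [[0, 0], [0, 1], [1, 0]]
    print(result.values.tolist())       # [3, 1, 2]
    print(result.dense_shape.tolist())  # [2, 4]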
- Now I am facing the next exception:
ValueError: Tried to convert 'x' to a tensor and failed. Error: None values not supported.
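My current suspicion (unverified): at export time the Estimator calls the model_fn with labels=None, and the graph still tries to build the CTC loss from those None labels. A sketch of the kind of PREDICT-mode guard that might help; _get_image_info and ctc_loss_layer follow the snippets above, the rest of the names are assumptions:

def model_fn(features, labels, mode):
    logits, sequence_length = _get_image_info(features, mode)

    if mode == tf.estimator.ModeKeys.PREDICT:
        # labels is None here, so no loss must be constructed.
        predictions = {'logits': logits,
                       'sequence_length': sequence_length}
        return tf.estimator.EstimatorSpec(
            mode=mode,
            predictions=predictions,
            export_outputs={
                'serving_default':
                    tf.estimator.export.PredictOutput(predictions)})

    # Training path (optimizer choice here is a placeholder, not the
    # pipeline's actual one).
    loss = ctc_loss_layer(logits, labels, sequence_length)
    train_op = tf.train.AdamOptimizer().minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)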
If anyone has managed to convert a model trained by this pipeline into a SavedModel for hosting with TensorFlow Model Serving, all help is welcome. The pipeline achieves very good accuracy, so adding SavedModel export would let us serve it in production. So far I've been unsuccessful, but I think I'm heading in the right direction, and collaboratively we can get there faster. Thank you for your help.