Thursday, December 24, 2020

Fix Tensorflow Object Detection Framework taking too much disk space while training

In the TensorFlow Object Detection API, the model_lib_tf2.py file contains the training and evaluation loops.

In the function:

def eager_train_step(detection_model,
                     features,
                     labels,
                     unpad_groundtruth_tensors,
                     optimizer,
                     learning_rate,
                     add_regularization_loss=True,
                     clip_gradients_value=None,
                     global_step=None,
                     num_replicas=1.0):

We can see that on every training iteration it writes a few training dataset images to the TensorBoard event file:

tf.compat.v2.summary.image(
    name='train_input_images',
    step=global_step,
    data=features[fields.InputDataFields.image],
    max_outputs=5)

There are three problems with this:

1. It takes A LOT of disk space.

2. It actually slows down training, since images are encoded and written to the event file on every single step.

3. The logged images look saturated, because the preprocessed features are not in the [0, 1] range that tf.summary.image expects for float input.
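To get a feel for problem 1, here is a rough back-of-envelope estimate. The per-image size and step count below are hypothetical, not measured; the point is only the ratio between logging every step and logging every 100th step:

```python
# Hypothetical numbers: 5 images per summary, ~300 kB per encoded
# image, 50,000 training steps. None of these are measured values.
IMAGES_PER_SUMMARY = 5
KB_PER_IMAGE = 300
STEPS = 50_000

# Logging every step vs. only every 100th step:
before_gb = STEPS * IMAGES_PER_SUMMARY * KB_PER_IMAGE / 1e6
after_gb = (STEPS // 100) * IMAGES_PER_SUMMARY * KB_PER_IMAGE / 1e6
print(before_gb, after_gb)  # 75.0 GB vs. 0.75 GB
```

Whatever the actual image size, throttling to every 100th step cuts the image-summary volume by a factor of 100.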

We can fix this easily by replacing the above snippet with this:

if global_step % 100 == 0:
    # --- get images and normalize them
    images_normalized = \
        (features[fields.InputDataFields.image] + 128.0) / 255.0
    tf.compat.v2.summary.image(
        name='train_input_images',
        step=global_step,
        data=images_normalized,
        max_outputs=5)
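The normalization step can be sanity-checked in plain NumPy. Many detection models zero-center pixel values during preprocessing, so the floats land roughly in [-128, 127], while tf.summary.image effectively clips float input to [0, 1] when converting to uint8, which is why the raw images look saturated. The batch below is random stand-in data, not real features:

```python
import numpy as np

# Stand-in for a preprocessed batch: zero-centered pixel values,
# roughly in [-128, 127] rather than [0, 1].
batch = np.random.uniform(-128.0, 127.0, size=(2, 8, 8, 3)).astype(np.float32)

# The fix from the snippet above: shift and scale back into [0, 1].
images_normalized = (batch + 128.0) / 255.0

print(images_normalized.min() >= 0.0, images_normalized.max() <= 1.0)  # True True
```

If your model uses a different preprocessing scheme (e.g. scaling to [-1, 1]), adjust the shift and scale constants accordingly.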


