After reading plenty of manuals I worked out what was hiding under the hood of
sess.run in TF1, and how to do the same thing without an optimizer:
- Compute the loss
- Compute the gradients of the loss with respect to the trained variables
- Scale each gradient by the learning rate
- Assign the updated values back to the variables
```python
# Feed the current mini-batch into the input variables
X_batch, y_batch = X_data[indices], y_data[indices]
X.assign(tf.convert_to_tensor(X_batch))
y.assign(tf.convert_to_tensor(y_batch))

# Record the forward pass; persistent=True allows calling
# tape.gradient more than once on the same tape
with tf.GradientTape(persistent=True) as tape:
    loss_val = loss()

# Gradients of the loss with respect to each trained variable
dy_dk = tape.gradient(loss_val, k)
dy_db = tape.gradient(loss_val, b)

# Gradient-descent step: move each variable against its gradient,
# scaled by the learning rate
k.assign_sub(dy_dk * learn_rate)
b.assign_sub(dy_db * learn_rate)

if (i + 1) % display_step == 0:
    print('Epoch %d: %.8f, k=%.4f, b=%.4f'
          % (i + 1, loss_val, k.numpy(), b.numpy()))
```
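For context, here is a minimal self-contained sketch of the same four steps, assuming a linear model `y ≈ k*x + b` with an MSE loss and synthetic data (those assumptions, along with the true values 2.0 and 1.0, are mine for illustration; the original snippet's `loss()`, batching, and logging are omitted):

```python
import numpy as np
import tensorflow as tf

# Synthetic data for a line with slope 2.0 and intercept 1.0 (assumed)
rng = np.random.default_rng(0)
X_data = rng.uniform(-1.0, 1.0, size=100).astype(np.float32)
y_data = 2.0 * X_data + 1.0

k = tf.Variable(0.0)
b = tf.Variable(0.0)
learn_rate = 0.1

for i in range(500):
    # Step 1: compute the loss under the tape
    with tf.GradientTape() as tape:
        y_pred = k * X_data + b
        loss_val = tf.reduce_mean(tf.square(y_pred - y_data))
    # Step 2: gradients w.r.t. the trained variables; a single
    # tape.gradient call with a list returns both, so the tape
    # does not need to be persistent here
    dk, db = tape.gradient(loss_val, [k, b])
    # Steps 3-4: scale by the learning rate and assign back
    k.assign_sub(dk * learn_rate)
    b.assign_sub(db * learn_rate)

print(k.numpy(), b.numpy())  # should end up close to 2.0 and 1.0
```

This is exactly what `optimizer.apply_gradients` would do for plain SGD, just written out by hand.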