
Tensorflow vs Pytorch -(4)

by 블쭌 2021. 5. 19.
  • tf.transpose(a)
    import tensorflow as tf

    x = tf.constant([[1, 2, 3], [4, 5, 6]])
    x2 = tf.transpose(x)

    with tf.Session() as sess:
        print(sess.run(x))
        print(sess.run(x2))

    [[1 2 3]
     [4 5 6]]
    [[1 4]
     [2 5]
     [3 6]]
     
  • torch.transpose(input, dim0, dim1)
import torch

x = torch.tensor([[1, 2, 3], [4, 5, 6]])
print(x)
print(torch.transpose(x, 0, 1))

tensor([[1, 2, 3],
        [4, 5, 6]])
tensor([[1, 4],
        [2, 5],
        [3, 6]])

It swaps dim0 with dim1, so any pair of dimensions can be exchanged; see the sketch below for a 3-D example.

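As a quick illustration of this swap on a higher-rank tensor, here is a minimal PyTorch sketch; the tensor values and shapes are chosen purely for illustration. tf.transpose expresses the same reordering through its perm argument (e.g. perm=[0, 2, 1]).

import torch

# a 3-D tensor of shape (2, 3, 4)
x = torch.arange(24).reshape(2, 3, 4)

# swap dim 1 and dim 2 -> shape becomes (2, 4, 3)
y = torch.transpose(x, 1, 2)
print(x.shape)  # torch.Size([2, 3, 4])
print(y.shape)  # torch.Size([2, 4, 3])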

  • Loss function
# mean squared error
tf.losses.mean_squared_error(y_true, y_pred)
criterion = nn.MSELoss()
criterion(y_pred, y_true)

# binary class cross entropy
tf.losses.sigmoid_cross_entropy(labels, logits) # one-hot encoding required
criterion = nn.BCELoss()
criterion(torch.sigmoid(input), y_true) # one-hot encoding not required

# multi class softmax cross entropy
tf.losses.softmax_cross_entropy(labels, logits) # one-hot encoding required
criterion = nn.CrossEntropyLoss()
criterion(y_pred, y_true) # one-hot encoding not required

# cannot handle multi-label targets; labels are plain class indices rather than one-hot vectors
# frequently used in recommender systems
tf.losses.sparse_softmax_cross_entropy(labels, logits) # one-hot encoding not required
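
To make the label-format difference concrete, here is a minimal PyTorch sketch; the logits and targets are made-up toy values used only for illustration.

import torch
import torch.nn as nn

# raw model outputs (logits) for a batch of 2 samples and 3 classes
logits = torch.tensor([[2.0, 0.5, -1.0],
                       [0.1, 1.5,  0.3]])

# CrossEntropyLoss takes integer class indices (not one-hot vectors)
# and applies log-softmax internally, so the raw logits go in directly
ce = nn.CrossEntropyLoss()
print(ce(logits, torch.tensor([0, 1])))

# BCELoss expects probabilities in [0, 1], so apply sigmoid first;
# nn.BCEWithLogitsLoss would accept the raw logits instead
bce = nn.BCELoss()
binary_logits = torch.tensor([1.2, -0.7])
binary_targets = torch.tensor([1.0, 0.0])
print(bce(torch.sigmoid(binary_logits), binary_targets))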

  • optimizer
'Adagrad': tf.train.AdagradOptimizer,
'Adam': tf.train.AdamOptimizer,
'RMSProp': tf.train.RMSPropOptimizer,
'SGD': tf.train.GradientDescentOptimizer


'Adagrad': torch.optim.Adagrad
'Adam': torch.optim.Adam
'RMSProp': torch.optim.RMSprop
'SGD': torch.optim.SGD
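
The sketch below shows how one of these optimizers is typically wired into a single PyTorch training step; the tiny linear model and random data are assumptions made only for illustration.

import torch
import torch.nn as nn

# toy model and data, assumed only for this example
model = nn.Linear(3, 1)
x = torch.randn(8, 3)
y = torch.randn(8, 1)

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# one training step: clear old gradients, forward, backward, update
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(loss.item())

The TF1 counterpart is similar: build the loss op, create e.g. tf.train.AdamOptimizer(learning_rate).minimize(loss), and run that op inside a session.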

  • save
saver = tf.train.Saver()
saver.save(sess, "checkpoint_path", global_step=step)

torch.save("model_name".state_dict(), 'params.ckpt')

  • load
saver = tf.train.import_meta_graph("model.meta")
saver.restore(sess, tf.train.latest_checkpoint("model_save_path"))

"model_name".load_state_dict(torch.load("model.ckpt"))