Loading the dataset
from sklearn.datasets import load_iris
iris=load_iris()
Convert the targets so that Setosa becomes 1 and every other class becomes 0.
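A minimal sketch of that conversion, assuming NumPy and the standard scikit-learn encoding in which class 0 is Setosa (binary_target is a name introduced here for illustration only):

import numpy as np
# 1. for Setosa (target class 0), 0. for the other two species
binary_target = np.array([1. if x == 0 else 0. for x in iris.target])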
batch_size=20 # choose the right size of training batch. The larger the batch is,
              # the more time-consuming the single loop training is.
# Feeding dictionary: rand_x_1, rand_x_2, y_target
rand_x_1=tf.placeholder(shape=[None, 1], dtype=tf.float32) # feed dictionary
rand_x_2=tf.placeholder(shape=[None, 1], dtype=tf.float32) # feed dictionary
y_target=tf.placeholder(shape=[None, 1], dtype=tf.float32) # feed dictionary
A=tf.Variable(tf.random_normal(shape=[1,1]))
b=tf.Variable(tf.random_normal(shape=[1,1]))
my_mult=tf.multiply(A, rand_x_2)
my_add=tf.add(my_mult, b)
my_output=tf.subtract(rand_x_1, my_add)
xentropy=tf.nn.sigmoid_cross_entropy_with_logits(logits=my_output, labels=y_target)
my_opt=tf.train.GradientDescentOptimizer(0.05)
train_step=my_opt.minimize(xentropy)
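The training loop that produced the step log below is not included in the listing. Here is a minimal sketch of how it might look, assuming a TF1 session, the binary_target array sketched above, and that rand_x_1 / rand_x_2 are fed with petal length and petal width (the feature choice is an assumption, not taken from the original):

import numpy as np

sess = tf.Session()
sess.run(tf.global_variables_initializer())

for i in range(1000):
    rand_index = np.random.choice(len(iris.data), size=batch_size)
    # assumed feature choice: column 2 = petal length, column 3 = petal width
    x1 = np.array([[x[2]] for x in iris.data[rand_index]])
    x2 = np.array([[x[3]] for x in iris.data[rand_index]])
    y = np.array([[t] for t in binary_target[rand_index]])
    sess.run(train_step, feed_dict={rand_x_1: x1, rand_x_2: x2, y_target: y})
    if (i + 1) % 200 == 0:
        print('Train Step #' + str(i + 1))
        print('A = ' + str(sess.run(A)) + '; b = ' + str(sess.run(b)))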
-----------------------------------------------------------------------------------
Train Step #200
A = [[18.12088]]; b = [[-10.885672]]
-----------------------------------------------------------------------------------
Train Step #400
A = [[18.193594]]; b = [[-10.990616]]
-----------------------------------------------------------------------------------
Train Step #600
A = [[18.305798]]; b = [[-11.029556]]
-----------------------------------------------------------------------------------
Train Step #800
A = [[18.413456]]; b = [[-11.058271]]
-----------------------------------------------------------------------------------
Train Step #1000
A = [[18.52922]]; b = [[-11.061981]]
# Weighted cross entropy is a weighted version of the sigmoid cross-entropy loss,
# where we put a weight on the positive target. Here we weight the positive target by 0.8:
batch_size=20 # choose the right size of training batch. The larger the batch is,
              # the more time-consuming the single loop training is.
# Feeding dictionary: rand_x_1, rand_x_2, y_target
rand_x_1=tf.placeholder(shape=[None, 1], dtype=tf.float32) # feed dictionary
rand_x_2=tf.placeholder(shape=[None, 1], dtype=tf.float32) # feed dictionary
y_target=tf.placeholder(shape=[None, 1], dtype=tf.float32) # feed dictionary
A=tf.Variable(tf.random_normal(shape=[1,1]))
b=tf.Variable(tf.random_normal(shape=[1,1]))
my_mult=tf.multiply(A, rand_x_2)
my_add=tf.add(my_mult, b)
my_output=tf.subtract(rand_x_1, my_add)
weight=tf.constant(0.8)
xentropy_weighted_y_vals=tf.nn.weighted_cross_entropy_with_logits(logits=my_output, labels=y_target, pos_weight=weight)
train_step=my_opt.minimize(xentropy_weighted_y_vals)
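For reference, the TensorFlow documentation describes tf.nn.weighted_cross_entropy_with_logits as scaling only the positive-target term by pos_weight. A small NumPy sketch of that documented formula (weighted_xentropy is a name introduced here for illustration, not part of the code above):

import numpy as np

def weighted_xentropy(logits, targets, pos_weight):
    # documented behaviour: only the positive-target term is scaled by pos_weight
    sig = 1. / (1. + np.exp(-logits))
    return -pos_weight * targets * np.log(sig) - (1. - targets) * np.log(1. - sig)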
-----------------------------------------------------------------------------------
Train Step #200
A = [[8.408454]]; b = [[-3.2687926]]
-----------------------------------------------------------------------------------
Train Step #400
A = [[9.934499]]; b = [[-4.3847203]]
-----------------------------------------------------------------------------------
Train Step #600
A = [[10.847868]]; b = [[-5.1551175]]
-----------------------------------------------------------------------------------
Train Step #800
A = [[11.583517]]; b = [[-5.6893206]]
-----------------------------------------------------------------------------------
Train Step #1000
A = [[12.1639385]]; b = [[-6.1393924]]
Plain cross-entropy is not directly applicable here: this kind of loss function is designed to measure a model output that is already a probability for the actual classes 0 and 1, whereas my_output is an unbounded logit.
# Cross-entropy loss for a binary case is sometimes referred
# to as the logistic loss function
batch_size=20 # choose the right size of training batch. The larger the batch is,
              # the more time-consuming the single loop training is.
# Feeding dictionary: rand_x_1, rand_x_2, y_target
rand_x_1=tf.placeholder(shape=[None, 1], dtype=tf.float32) # feed dictionary
rand_x_2=tf.placeholder(shape=[None, 1], dtype=tf.float32) # feed dictionary
y_target=tf.placeholder(shape=[None, 1], dtype=tf.float32) # feed dictionary
A=tf.Variable(tf.random_normal(shape=[1,1]))
b=tf.Variable(tf.random_normal(shape=[1,1]))
my_mult=tf.multiply(A, rand_x_2)
my_add=tf.add(my_mult, b)
my_output=tf.subtract(rand_x_1, my_add)
sparse_xentropy=-tf.multiply(y_target, tf.log(my_output))-tf.multiply((1.-y_target),tf.log(1.-my_output))
train_step=my_opt.minimize(sparse_xentropy)
init=tf.global_variables_initializer()
sess.run(init)
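Because my_output is an unbounded logit, the log terms above can receive values outside (0, 1). A minimal sketch of the same formula applied to a sigmoid-squashed output (an illustration of one possible repair, not the code that produced the log below):

# squash the raw output into (0, 1) before taking logs
prob = tf.sigmoid(my_output)
manual_xentropy = (-tf.multiply(y_target, tf.log(prob))
                   - tf.multiply((1. - y_target), tf.log(1. - prob)))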
-----------------------------------------------------------------------------------
Training Step #200
A = [[-32.213852]]; b = [[-34.43366]]
-----------------------------------------------------------------------------------
Training Step #400
A = [[-34.841114]]; b = [[-37.343002]]
-----------------------------------------------------------------------------------
Training Step #600
A = [[-37.28997]]; b = [[-40.04912]]
-----------------------------------------------------------------------------------
Training Step #800
A = [[-39.589073]]; b = [[-42.59206]]
-----------------------------------------------------------------------------------
Training Step #1000
A = [[-41.781662]]; b = [[-44.980614]]
# redefine the iris target: 1 for Setosa, -1 for everything else
iris_target=np.array([1. if x==0 else -1. for x in iris.target])
# Hinge loss function for -1/1 classification. The hinge loss function is mostly used
# in support vector machines but can be used in neural networks as well.
# (A small numeric check of the formula follows this block.)
batch_size=20 # choose the right size of training batch. The larger the batch is,
              # the more time-consuming the single loop training is.
# Feeding dictionary: rand_x_1, rand_x_2, y_target
rand_x_1=tf.placeholder(shape=[None, 1], dtype=tf.float32) # feed dictionary
rand_x_2=tf.placeholder(shape=[None, 1], dtype=tf.float32) # feed dictionary
y_target=tf.placeholder(shape=[None, 1], dtype=tf.float32) # feed dictionary
A=tf.Variable(tf.random_normal(shape=[1,1]))
b=tf.Variable(tf.random_normal(shape=[1,1]))
my_mult=tf.multiply(A, rand_x_2)
my_add=tf.add(my_mult, b)
my_output=tf.subtract(rand_x_1, my_add)
hinge_loss=tf.maximum(0., 1.-tf.multiply(y_target, my_output))
train_step=my_opt.minimize(hinge_loss)
init=tf.global_variables_initializer()
sess.run(init)
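To make the hinge formula max(0, 1 - y * output) concrete, a tiny NumPy check with made-up values:

import numpy as np

y = np.array([1., 1., -1.])        # true labels in {-1, 1}
pred = np.array([2.3, 0.4, -0.1])  # raw model outputs
# confident correct predictions cost 0; weak or wrong predictions are penalised linearly
print(np.maximum(0., 1. - y * pred))  # -> [0.  0.6 0.9]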
-----------------------------------------------------------------------------------
Training Step #200
A = [[7.349649]]; b = [[-2.6604133]]
-----------------------------------------------------------------------------------
Training Step #400
A = [[8.284655]]; b = [[-3.2604127]]
-----------------------------------------------------------------------------------
Training Step #600
A = [[8.7696705]]; b = [[-3.7604122]]
-----------------------------------------------------------------------------------
Training Step #800
A = [[8.944676]]; b = [[-3.910412]]
-----------------------------------------------------------------------------------
Training Step #1000
A = [[9.194684]]; b = [[-4.060413]]
batch_size=20 # choose the right size of training batch. The larger the batch is,
              # the more time-consuming the single loop training is.
# Feeding dictionary: rand_x_1, rand_x_2, y_target
rand_x_1=tf.placeholder(shape=[None, 1], dtype=tf.float32) # feed dictionary
rand_x_2=tf.placeholder(shape=[None, 1], dtype=tf.float32) # feed dictionary
y_target=tf.placeholder(shape=[None, 1], dtype=tf.float32) # feed dictionary
A=tf.Variable(tf.random_normal(shape=[1,1]))
b=tf.Variable(tf.random_normal(shape=[1,1]))
my_mult=tf.multiply(A, rand_x_2)
my_add=tf.add(my_mult, b)
my_output=tf.subtract(rand_x_1, my_add)
xentropy=tf.nn.sigmoid_cross_entropy_with_logits(logits=my_output, labels=y_target)
my_opt=tf.train.GradientDescentOptimizer(0.05)
train_step=my_opt.minimize(xentropy)
init=tf.global_variables_initializer()
sess.run(init)
-----------------------------------------------------------------------------------
Train Step #200
A = [[1243.5819]]; b = [[399.3704]]
-----------------------------------------------------------------------------------
Train Step #400
A = [[1450.4576]]; b = [[464.17035]]
-----------------------------------------------------------------------------------
Train Step #600
A = [[1655.5627]]; b = [[529.37006]]
-----------------------------------------------------------------------------------
Train Step #800
A = [[1865.1127]]; b = [[595.9702]]
-----------------------------------------------------------------------------------
Train Step #1000
A = [[2066.7285]]; b = [[657.5702]]
Step #200 A = [4.4231954]
Loss = 2.0177038
Step #400 A = [0.8580134]
Loss = 0.38587436
Step #600 A = [-0.20734093]
Loss = 0.2290817
Step #800 A = [-0.4499189]
Loss = 0.30788526
Step #1000 A = [-0.5537585]
Loss = 0.2411242
Step #1200 A = [-0.5834869]
Loss = 0.21759026
Step #1400 A = [-0.5452181]
Loss = 0.3135532
Step #1600 A = [-0.5148223]
Loss = 0.3241151
Step #1800 A = [-0.5066403]
Loss = 0.23580647
Step #2000 A = [-0.5372685]
Loss = 0.32416114
Step #2200 A = [-0.55678415]
Loss = 0.20359325
Step #2400 A = [-0.53384054]
Loss = 0.4179452
Step #2600 A = [-0.55560905]
Loss = 0.3174747
Step #2800 A = [-0.55784374]
Loss = 0.4003071
Step #3000 A = [-0.5510922]
Loss = 0.24280085
Step #3200 A = [-0.5364746]
Loss = 0.22302383
Step #3400 A = [-0.5466024]
Loss = 0.32935375
Step #3600 A = [-0.55610454]
Loss = 0.2350246
Step #3800 A = [-0.5904466]
Loss = 0.3662425
Step #4000 A = [-0.5405973]
Loss = 0.4017843