
Projective/Affine Transformation, a "Crazy Neural" Approach -- BP Neural Network


I took on a task: there are two sets of (x, y) coordinates, both from the same map but in different coordinate systems. Because the error was too large, neither OpenCV nor the Armadillo library could solve for an accurate transformation matrix to use for forward computation.

MATLAB could solve it, but that solution is hard to port to C++.

So... I switched approaches and wrote a BP neural network to learn the mapping instead. The results are pretty good.
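For context, the conventional route amounts to a least-squares fit of an affine matrix from the point correspondences. A minimal NumPy sketch of that baseline (assuming the point pairs sit in x.txt and y1.txt as in the training script further down; this is the kind of fit that was not accurate enough here, not the final method):

# Least-squares affine fit baseline (for comparison with the network below).
# Assumes x.txt / y1.txt each hold N rows of "x y" coordinate pairs.
import numpy as np

src = np.loadtxt('x.txt')    # N x 2 source coordinates
dst = np.loadtxt('y1.txt')   # N x 2 target coordinates

# Solve dst ~= [src, 1] @ M for the 3x2 affine matrix M in a least-squares sense.
A = np.hstack([src, np.ones((src.shape[0], 1))])
M, residuals, rank, _ = np.linalg.lstsq(A, dst, rcond=None)

pred = A @ M
print("RMS error:", np.sqrt(np.mean(np.sum((pred - dst) ** 2, axis=1))))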

The data looks like this:

[screenshots of the x.txt and y1.txt coordinate data]

import tensorflow as tf   
import numpy as np   
from sklearn import preprocessing


a = np.loadtxt('x.txt')
b = np.loadtxt("y1.txt")
label = np.arange(2, 828)   # (unused)
# Not shuffling the order turned out not to hurt convergence
# permutation = np.random.permutation(a.shape[0])
# x = a[permutation,:]
# y = b[permutation,:]
print(b)
scaler = preprocessing.StandardScaler().fit(a)
x = scaler.transform(a)
mean=scaler.mean_
std=scaler.scale_   # StandardScaler exposes the per-feature std as scale_, not std_
print(mean)
scalery = preprocessing.StandardScaler().fit(b)
y = scalery.transform(b)
ymean=scalery.mean_
ystd=scalery.scale_
X=x
Y_=y

#1 Define the network inputs, parameters, and outputs, and the forward pass.
x = tf.placeholder(tf.float32, shape=(None, 2))
y_= tf.placeholder(tf.float32, shape=(None, 2 ))
w1= tf.Variable(tf.random_normal([2,4], stddev=1, seed=0))
w2= tf.Variable(tf.random_normal([4, 4], stddev=1, seed=0))
w3= tf.Variable(tf.random_normal([4, 2], stddev=1, seed=0))
b1= tf.Variable(tf.random_normal([1,4], stddev=1, seed=0))
b2= tf.Variable(tf.random_normal([1,4], stddev=1, seed=0))
b3= tf.Variable(tf.random_normal([1, 2], stddev=1, seed=0))

h1 = tf.nn.tanh(tf.matmul(x, w1) + b1)   # hidden layer 1 (2 -> 4): tanh activation
h2 = tf.matmul(h1, w2) + b2              # hidden layer 2 (4 -> 4): linear
y  = tf.matmul(h2, w3) + b3              # output layer  (4 -> 2): linear


loss_mse = tf.reduce_mean(tf.square(y-y_))
global_step = tf.Variable(0, trainable=False)
starter_learning_rate = 0.0001
# An exponential-decay schedule is defined here but never applied below;
# the optimizer actually runs with the fixed starter_learning_rate.
learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step, 500, 0.1, staircase=True)
train_step = tf.train.RMSPropOptimizer(learning_rate=starter_learning_rate).minimize(loss_mse)
#train_step = tf.train.RMSPropOptimizer(learning_rate=0.0001).minimize(loss_mse)
test=[[-14924.00,-84734.00]]
test_n=(test-mean)/std          # normalize a sample point the same way as the training inputs
print(test_n)
testx = tf.placeholder(tf.float32, shape=(1, 2),name='testx')   # defined but not used below
predict=(y*ystd)+ymean          # un-normalize the network output back to map coordinates
tf.add_to_collection('pred_network', predict)   # expose the prediction op for later restoring

saver=tf.train.Saver()
#3 Create a session and train for STEPS iterations
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()

    sess.run(init_op)
    # Train the model (full batch: every step feeds all samples).
    STEPS = 50000
    for i in range(STEPS):
        sess.run(train_step, feed_dict={x: X, y_: Y_})
        if i % 500 == 0:
            # Print the training loss every 500 steps
            total_loss = sess.run(loss_mse, feed_dict={x: X, y_: Y_})
          #  print(sess.run(y, feed_dict={x: X, y_: Y_}))
            print("After %d training step(s), loss_mse on all data is %g" % (i, total_loss))
    # Evaluate with the training data
    print(sess.run(predict, feed_dict={x: X}))
    save_path=saver.save(sess=sess, save_path="c:/Users/Qrf/Desktop/est/model.ckpt")

Training results

After 47500 training step(s), loss_mse on all data is 0.000199695

After 48000 training step(s), loss_mse on all data is 0.000198511
After 48500 training step(s), loss_mse on all data is 0.000199676
After 49000 training step(s), loss_mse on all data is 0.000197816
After 49500 training step(s), loss_mse on all data is 0.000197306
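
Since the prediction op is stored in the 'pred_network' collection and the weights are saved to model.ckpt, inference from a separate script could look roughly like the sketch below. The checkpoint path is the one from the training script; the input placeholder name "Placeholder:0" is an assumption based on TF1's default naming (giving x an explicit name during training would make this more robust), and mean/std are assumed to be saved or recomputed from x.txt:

# Sketch: restore the saved graph and run one prediction.
import numpy as np
import tensorflow as tf

with tf.Session() as sess:
    saver = tf.train.import_meta_graph("c:/Users/Qrf/Desktop/est/model.ckpt.meta")
    saver.restore(sess, "c:/Users/Qrf/Desktop/est/model.ckpt")

    predict = tf.get_collection('pred_network')[0]                  # op stored via add_to_collection
    x_in = tf.get_default_graph().get_tensor_by_name("Placeholder:0")  # assumed default name of x

    # Normalize a new point with the training-set mean/std, then predict.
    # mean and std must come from the training script (the output un-normalization
    # is already baked into the predict op, the input normalization is not).
    point = (np.array([[-14924.0, -84734.0]]) - mean) / std
    print(sess.run(predict, feed_dict={x_in: point}))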
