TimeDistributed in LSTM
Published: 2019-06-12


One-to-One LSTM

# one input and one output
from numpy import array
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM

# prepare sequence
length = 5
seq = array([i/float(length) for i in range(length)])
X = seq.reshape(len(seq), 1, 1)
y = seq.reshape(len(seq), 1)

# define LSTM configuration
n_neurons = length
n_batch = length
n_epoch = 1000

# create LSTM
model = Sequential()
model.add(LSTM(n_neurons, input_shape=(1, 1)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
print(model.summary())

# train LSTM
model.fit(X, y, epochs=n_epoch, batch_size=n_batch, verbose=2)

# evaluate
result = model.predict(X, batch_size=n_batch, verbose=0)
for value in result:
    print('%.1f' % value)
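
As a quick aside (a minimal sketch added here, not part of the original post): Keras LSTMs consume input of shape (samples, timesteps, features), so the one-to-one setup above turns the 5-value sequence into 5 samples of 1 timestep each, with one target value per sample.

from numpy import array

# Keras LSTM input convention: (samples, timesteps, features)
length = 5
seq = array([i / float(length) for i in range(length)])
X = seq.reshape(len(seq), 1, 1)   # 5 samples, 1 timestep, 1 feature
y = seq.reshape(len(seq), 1)      # one target value per sample
print(X.shape, y.shape)           # (5, 1, 1) (5, 1)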

Many-to-One LSTM

# many inputs to one output
from numpy import array
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM

# prepare sequence
length = 5
seq = array([i/float(length) for i in range(length)])
X = seq.reshape(1, length, 1)
y = seq.reshape(1, length)

# define LSTM configuration
n_neurons = length
n_batch = 1
n_epoch = 500

# create LSTM
model = Sequential()
model.add(LSTM(n_neurons, input_shape=(length, 1)))
model.add(Dense(length))
model.compile(loss='mean_squared_error', optimizer='adam')
print(model.summary())

# train LSTM
model.fit(X, y, epochs=n_epoch, batch_size=n_batch, verbose=2)

# evaluate
result = model.predict(X, batch_size=n_batch, verbose=0)
for value in result[0, :]:
    print('%.1f' % value)
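
The only real change from the one-to-one case is the reshape: the whole sequence becomes a single sample of 5 timesteps, and the Dense(length) head predicts all 5 values from the LSTM's final output. A small sketch (added here for illustration, not from the original post) makes the contrast explicit:

from numpy import array

length = 5
seq = array([i / float(length) for i in range(length)])

# one-to-one: 5 independent samples of 1 timestep each
print(seq.reshape(len(seq), 1, 1).shape)   # (5, 1, 1)
# many-to-one: 1 sample containing all 5 timesteps
print(seq.reshape(1, length, 1).shape)     # (1, 5, 1)
# the Dense(5) head emits the whole 5-value target at once
print(seq.reshape(1, length).shape)        # (1, 5)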

Many-to-Many LSTM

# many inputs and many outputs
from numpy import array
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import TimeDistributed
from keras.layers import LSTM

# prepare sequence
length = 5
seq = array([i/float(length) for i in range(length)])
X = seq.reshape(1, length, 1)
y = seq.reshape(1, length, 1)

# define LSTM configuration
n_neurons = length
n_batch = 1
n_epoch = 1000

# create LSTM
model = Sequential()
model.add(LSTM(n_neurons, input_shape=(length, 1), return_sequences=True))
model.add(TimeDistributed(Dense(1)))
model.compile(loss='mean_squared_error', optimizer='adam')
print(model.summary())

# train LSTM
model.fit(X, y, epochs=n_epoch, batch_size=n_batch, verbose=2)

# evaluate
result = model.predict(X, batch_size=n_batch, verbose=0)
for value in result[0, :, 0]:
    print('%.1f' % value)
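
The point of TimeDistributed shows up in the parameter count: with return_sequences=True the LSTM emits a 5-unit vector at every timestep, and TimeDistributed(Dense(1)) applies the same 5-to-1 projection at each step instead of learning a separate output weight per step, as the many-to-one Dense(5) does. A rough sketch of the arithmetic (using the standard Keras LSTM parameter formula and assuming default biases; the totals should match the model.summary() calls above):

# parameter-count sketch (assumes Keras defaults, use_bias=True)
units, features, steps = 5, 1, 5

# LSTM: 4 gates, each with an input kernel, a recurrent kernel and a bias
lstm_params = 4 * (units * features + units * units + units)   # 140

# TimeDistributed(Dense(1)): one shared 5 -> 1 projection reused at every step
shared_head = units * 1 + 1                                     # 6

# many-to-one Dense(5): a separate weight column per output step
per_step_head = units * steps + steps                           # 30

print(lstm_params + shared_head)    # 146 -> many-to-many model above
print(lstm_params + per_step_head)  # 170 -> many-to-one model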

Reposted from: https://www.cnblogs.com/luoganttcc/p/10525276.html
