After reading this post, you will know:
1. How to develop a naive LSTM network for a sequence prediction problem.
2. How to carefully manage state with an LSTM network across batches and features.
3. How to manually manage state in an LSTM network for stateful prediction.
Problem Description: Learning the Alphabet

In this tutorial, we will develop and contrast a number of different LSTM recurrent neural network models.

The context for these comparisons will be a simple sequence prediction problem: learning the alphabet. That is, given one letter of the alphabet, predict the next letter. This is a simple sequence prediction problem that, once understood, can be generalized to other sequence prediction problems such as time series prediction and sequence classification. Let's prepare the problem with some Python code that we can reuse from example to example. First, let's import all of the classes and functions we plan to use in this tutorial.
```python
import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.utils import np_utils
```

Next, we can seed the random number generator to ensure that the results are the same each time the code is executed.

```python
# fix random seed for reproducibility
numpy.random.seed(7)
```
We can now define our dataset, the alphabet. We define the alphabet in uppercase characters for readability. Neural networks model numbers, so we need to map the letters of the alphabet to integer values. We can do this easily by creating a dictionary (map) of the letter index to the character. We can also create a reverse lookup for converting predictions back into characters for later use.

```python
# define the raw dataset
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
# create mapping of characters to integers (0-25) and the reverse
char_to_int = dict((c, i) for i, c in enumerate(alphabet))
int_to_char = dict((i, c) for i, c in enumerate(alphabet))
```
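As a quick standalone sanity check (not part of the tutorial's listing), the two dictionaries invert each other, so any prediction index can be turned back into its character:

```python
# Sanity check: the two lookup tables are exact inverses
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
char_to_int = dict((c, i) for i, c in enumerate(alphabet))
int_to_char = dict((i, c) for i, c in enumerate(alphabet))

print(char_to_int["A"], char_to_int["Z"])  # first and last indices
print(int_to_char[char_to_int["M"]])       # round trip gives the letter back
```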
We now need to create our input and output pairs on which to train the neural network. We can do this by defining an input sequence length, then reading sequences from the input alphabet. For example, with an input length of 1, starting at the beginning of the raw input data, we read the first letter "A" and the next letter "B" as the prediction. We move along one character and repeat until we reach a prediction of "Z".

```python
# prepare the dataset of input to output pairs encoded as integers
seq_length = 1
dataX = []
dataY = []
for i in range(0, len(alphabet) - seq_length, 1):
    seq_in = alphabet[i:i + seq_length]
    seq_out = alphabet[i + seq_length]
    dataX.append([char_to_int[char] for char in seq_in])
    dataY.append(char_to_int[seq_out])
    print(seq_in, '->', seq_out)
```
We also print out the input pairs as a sanity check. Running the code to this point produces the following output, summarizing input sequences of length 1 and a single output character:

```
A -> B
B -> C
C -> D
D -> E
E -> F
F -> G
G -> H
H -> I
I -> J
J -> K
K -> L
L -> M
M -> N
N -> O
O -> P
P -> Q
Q -> R
R -> S
S -> T
T -> U
U -> V
V -> W
W -> X
X -> Y
Y -> Z
```
We need to reshape the NumPy array into the format expected by LSTM networks, that is: [samples, time steps, features].

```python
# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (len(dataX), seq_length, 1))
```

Once reshaped, we can normalize the input integers to the range 0-to-1, the range of the sigmoid activation functions used by the LSTM network.

```python
# normalize
X = X / float(len(alphabet))
```
Finally, we can think of this problem as a sequence classification task, where each of the 26 letters represents a different class. As such, we can convert the output (y) to a one-hot encoding using the Keras built-in function to_categorical().

```python
# one hot encode the output variable
y = np_utils.to_categorical(dataY)
```
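To make the encoding concrete, here is a small numpy-only sketch (the three labels are hypothetical) of the matrix that to_categorical builds for this problem: one row per label, with a single 1.0 at the class index and zeros elsewhere.

```python
import numpy

dataY = [0, 2, 25]  # hypothetical integer labels, e.g. 'A', 'C', 'Z'
y = numpy.zeros((len(dataY), 26))
y[numpy.arange(len(dataY)), dataY] = 1.0  # set one hot bit per row

print(y.shape)  # one row per label, one column per class
print(y[1])     # row for class 2: 1.0 in position 2
```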
A Naive LSTM for Learning a One-Char to One-Char Mapping

Let's start by designing a simple LSTM that learns how to predict the next character in the alphabet given the context of just one character. We frame the problem as a random collection of one-letter input to one-letter output pairs. As we will see, this is a difficult framing for the LSTM to learn. We define an LSTM network with 32 units and an output layer with a softmax activation function for making predictions. Because this is a multiclass classification problem, we use the log loss function (called "categorical_crossentropy" in Keras) and optimize the network with the Adam optimizer. The model is fit over 500 epochs with a batch size of 1.

```python
# create and fit the model
model = Sequential()
model.add(LSTM(32, input_shape=(X.shape[1], X.shape[2])))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=500, batch_size=1, verbose=2)
```
After we fit the model, we can evaluate and summarize its performance on the entire training dataset.

```python
# summarize performance of the model
scores = model.evaluate(X, y, verbose=0)
print("Model Accuracy: %.2f%%" % (scores[1]*100))
```
We can then re-run the training data through the network and generate predictions, converting both the input and output pairs back into their original character format to get a visual idea of how well the network learned the problem.

```python
# demonstrate some model predictions
for pattern in dataX:
    x = numpy.reshape(pattern, (1, len(pattern), 1))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    result = int_to_char[index]
    seq_in = [int_to_char[value] for value in pattern]
    print(seq_in, "->", result)
```
The complete code listing is provided below:

```python
# Naive LSTM to learn one-char to one-char mapping
import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.utils import np_utils
# fix random seed for reproducibility
numpy.random.seed(7)
# define the raw dataset
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
# create mapping of characters to integers (0-25) and the reverse
char_to_int = dict((c, i) for i, c in enumerate(alphabet))
int_to_char = dict((i, c) for i, c in enumerate(alphabet))
# prepare the dataset of input to output pairs encoded as integers
seq_length = 1
dataX = []
dataY = []
for i in range(0, len(alphabet) - seq_length, 1):
    seq_in = alphabet[i:i + seq_length]
    seq_out = alphabet[i + seq_length]
    dataX.append([char_to_int[char] for char in seq_in])
    dataY.append(char_to_int[seq_out])
    print(seq_in, '->', seq_out)
# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (len(dataX), seq_length, 1))
# normalize
X = X / float(len(alphabet))
# one hot encode the output variable
y = np_utils.to_categorical(dataY)
# create and fit the model
model = Sequential()
model.add(LSTM(32, input_shape=(X.shape[1], X.shape[2])))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=500, batch_size=1, verbose=2)
# summarize performance of the model
scores = model.evaluate(X, y, verbose=0)
print("Model Accuracy: %.2f%%" % (scores[1]*100))
# demonstrate some model predictions
for pattern in dataX:
    x = numpy.reshape(pattern, (1, len(pattern), 1))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    result = int_to_char[index]
    seq_in = [int_to_char[value] for value in pattern]
    print(seq_in, "->", result)
```
Running this example produces the following output:

```
Model Accuracy: 84.00%
['A'] -> B
['B'] -> C
['C'] -> D
['D'] -> E
['E'] -> F
['F'] -> G
['G'] -> H
['H'] -> I
['I'] -> J
['J'] -> K
['K'] -> L
['L'] -> M
['M'] -> N
['N'] -> O
['O'] -> P
['P'] -> Q
['Q'] -> R
['R'] -> S
['S'] -> T
['T'] -> U
['U'] -> W
['V'] -> Y
['W'] -> Z
['X'] -> Z
['Y'] -> Z
```
We can see that the network really struggled with this problem. The reason is that the poor LSTM units have no context to work with. Each input-output pattern is shown to the network in a random order, and the state of the network is reset after each pattern (each batch, where each batch contains one pattern). This is an abuse of the LSTM network architecture, treating it like a standard multilayer perceptron. Next, let's try a different framing of the problem to provide more sequence to the network from which to learn.
A Naive LSTM for a Three-Char Feature Window to One-Char Mapping

A popular approach to adding more context to data for multilayer perceptrons is the window method, where previous steps in the sequence are provided as additional input features to the network. We can try the same trick to provide more context to the LSTM network. Here, we increase the sequence length from 1 to 3, for example:

```python
# prepare the dataset of input to output pairs encoded as integers
seq_length = 3
```
This creates training patterns like:

```
ABC -> D
BCD -> E
CDE -> F
```
Each element in the sequence is then provided as a new input feature to the network. This requires a modification of how the input sequences are reshaped in the data preparation step:

```python
# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (len(dataX), 1, seq_length))
```

It also requires a corresponding change to how the sample patterns are reshaped when demonstrating predictions from the model:

```python
x = numpy.reshape(pattern, (1, 1, len(pattern)))
```
The complete code listing is provided below:

```python
# Naive LSTM to learn three-char window to one-char mapping
import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.utils import np_utils
# fix random seed for reproducibility
numpy.random.seed(7)
# define the raw dataset
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
# create mapping of characters to integers (0-25) and the reverse
char_to_int = dict((c, i) for i, c in enumerate(alphabet))
int_to_char = dict((i, c) for i, c in enumerate(alphabet))
# prepare the dataset of input to output pairs encoded as integers
seq_length = 3
dataX = []
dataY = []
for i in range(0, len(alphabet) - seq_length, 1):
    seq_in = alphabet[i:i + seq_length]
    seq_out = alphabet[i + seq_length]
    dataX.append([char_to_int[char] for char in seq_in])
    dataY.append(char_to_int[seq_out])
    print(seq_in, '->', seq_out)
# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (len(dataX), 1, seq_length))
# normalize
X = X / float(len(alphabet))
# one hot encode the output variable
y = np_utils.to_categorical(dataY)
# create and fit the model
model = Sequential()
model.add(LSTM(32, input_shape=(X.shape[1], X.shape[2])))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=500, batch_size=1, verbose=2)
# summarize performance of the model
scores = model.evaluate(X, y, verbose=0)
print("Model Accuracy: %.2f%%" % (scores[1]*100))
# demonstrate some model predictions
for pattern in dataX:
    x = numpy.reshape(pattern, (1, 1, len(pattern)))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    result = int_to_char[index]
    seq_in = [int_to_char[value] for value in pattern]
    print(seq_in, "->", result)
```
Running this example produces the following output:

```
Model Accuracy: 86.96%
['A', 'B', 'C'] -> D
['B', 'C', 'D'] -> E
['C', 'D', 'E'] -> F
['D', 'E', 'F'] -> G
['E', 'F', 'G'] -> H
['F', 'G', 'H'] -> I
['G', 'H', 'I'] -> J
['H', 'I', 'J'] -> K
['I', 'J', 'K'] -> L
['J', 'K', 'L'] -> M
['K', 'L', 'M'] -> N
['L', 'M', 'N'] -> O
['M', 'N', 'O'] -> P
['N', 'O', 'P'] -> Q
['O', 'P', 'Q'] -> R
['P', 'Q', 'R'] -> S
['Q', 'R', 'S'] -> T
['R', 'S', 'T'] -> U
['S', 'T', 'U'] -> V
['T', 'U', 'V'] -> Y
['U', 'V', 'W'] -> Z
['V', 'W', 'X'] -> Z
['W', 'X', 'Y'] -> Z
```
We can see a small lift in performance that may or may not be real. This is a simple problem that we were still unable to learn with LSTMs, even with the window method. Again, this is a poor framing of the problem and a misuse of the LSTM network: the sequence of letters really is time steps of one feature, rather than one time step of separate features. We have given more context to the network, but not more sequence as it expected.

In the next section, we will give more context to the network in the form of time steps.
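To make the two framings concrete, here is a small standalone numpy sketch (the toy windows are hypothetical): the same data is reshaped once as a single time step of three features, and once as three time steps of one feature.

```python
import numpy

dataX = [[0, 1, 2], [1, 2, 3]]  # two toy windows of three encoded letters

# window framing: one time step carrying three features
window = numpy.reshape(dataX, (len(dataX), 1, 3))
# time-step framing: three time steps carrying one feature each
steps = numpy.reshape(dataX, (len(dataX), 3, 1))

print(window.shape)  # (samples, time steps, features)
print(steps.shape)
```

Only the second framing lets the LSTM process the letters one after another through time, which is what the next section exploits.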
A Naive LSTM for a Three-Char Time Step Window to One-Char Mapping

In Keras, the intended use of LSTMs is to provide context in the form of time steps, rather than windowed features as with other network types. We can take our first example and simply change the sequence length from 1 to 3.

```python
seq_length = 3
```
This creates input-output pairs like:

```
ABC -> D
BCD -> E
CDE -> F
DEF -> G
```
The difference is that the reshaping of the input data takes the sequence as a time-step sequence of one feature, rather than a single time step of multiple features.

```python
# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (len(dataX), seq_length, 1))
```
The complete code listing is provided below:

```python
# Naive LSTM to learn three-char time steps to one-char mapping
import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.utils import np_utils
# fix random seed for reproducibility
numpy.random.seed(7)
# define the raw dataset
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
# create mapping of characters to integers (0-25) and the reverse
char_to_int = dict((c, i) for i, c in enumerate(alphabet))
int_to_char = dict((i, c) for i, c in enumerate(alphabet))
# prepare the dataset of input to output pairs encoded as integers
seq_length = 3
dataX = []
dataY = []
for i in range(0, len(alphabet) - seq_length, 1):
    seq_in = alphabet[i:i + seq_length]
    seq_out = alphabet[i + seq_length]
    dataX.append([char_to_int[char] for char in seq_in])
    dataY.append(char_to_int[seq_out])
    print(seq_in, '->', seq_out)
# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (len(dataX), seq_length, 1))
# normalize
X = X / float(len(alphabet))
# one hot encode the output variable
y = np_utils.to_categorical(dataY)
# create and fit the model
model = Sequential()
model.add(LSTM(32, input_shape=(X.shape[1], X.shape[2])))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=500, batch_size=1, verbose=2)
# summarize performance of the model
scores = model.evaluate(X, y, verbose=0)
print("Model Accuracy: %.2f%%" % (scores[1]*100))
# demonstrate some model predictions
for pattern in dataX:
    x = numpy.reshape(pattern, (1, len(pattern), 1))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    result = int_to_char[index]
    seq_in = [int_to_char[value] for value in pattern]
    print(seq_in, "->", result)
```
Running this example produces the following output:

```
Model Accuracy: 100.00%
['A', 'B', 'C'] -> D
['B', 'C', 'D'] -> E
['C', 'D', 'E'] -> F
['D', 'E', 'F'] -> G
['E', 'F', 'G'] -> H
['F', 'G', 'H'] -> I
['G', 'H', 'I'] -> J
['H', 'I', 'J'] -> K
['I', 'J', 'K'] -> L
['J', 'K', 'L'] -> M
['K', 'L', 'M'] -> N
['L', 'M', 'N'] -> O
['M', 'N', 'O'] -> P
['N', 'O', 'P'] -> Q
['O', 'P', 'Q'] -> R
['P', 'Q', 'R'] -> S
['Q', 'R', 'S'] -> T
['R', 'S', 'T'] -> U
['S', 'T', 'U'] -> V
['T', 'U', 'V'] -> W
['U', 'V', 'W'] -> X
['V', 'W', 'X'] -> Y
['W', 'X', 'Y'] -> Z
```
We can see that the model learns the problem perfectly, as evidenced by the model evaluation and the example predictions. But it has learned a simpler problem. Specifically, it has learned to predict the next letter from a sequence of three letters in the alphabet: it can be shown any random three-letter subsequence and predict the next letter. It cannot actually enumerate the alphabet. I expect that a large enough multilayer perceptron could learn the same mapping using the window method. LSTM networks are stateful; they should be able to learn the whole alphabet sequence, but by default the Keras implementation resets the network state after each training batch.
LSTM State Within a Batch

The Keras implementation of LSTMs resets the state of the network after each batch. This suggests that if the batch size were large enough to hold all the input patterns, and if all the input patterns were ordered sequentially, the LSTM could use the context of the sequence within the batch to better learn the sequence. We can demonstrate this easily by modifying the first example for learning a one-to-one mapping and increasing the batch size from 1 to the size of the training dataset. Additionally, Keras shuffles the training dataset before each training epoch; to ensure the training data patterns remain sequential, we disable this shuffling.

```python
model.fit(X, y, epochs=5000, batch_size=len(dataX), verbose=2, shuffle=False)
```
The network will learn the mapping of characters using the within-batch sequence, but this context will not be available to the network when making predictions. We can evaluate the ability of the network to make predictions both at random and in sequence. The complete code listing is provided below:
```python
# Naive LSTM to learn one-char to one-char mapping with all data in each batch
import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.utils import np_utils
from keras.preprocessing.sequence import pad_sequences
# fix random seed for reproducibility
numpy.random.seed(7)
# define the raw dataset
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
# create mapping of characters to integers (0-25) and the reverse
char_to_int = dict((c, i) for i, c in enumerate(alphabet))
int_to_char = dict((i, c) for i, c in enumerate(alphabet))
# prepare the dataset of input to output pairs encoded as integers
seq_length = 1
dataX = []
dataY = []
for i in range(0, len(alphabet) - seq_length, 1):
    seq_in = alphabet[i:i + seq_length]
    seq_out = alphabet[i + seq_length]
    dataX.append([char_to_int[char] for char in seq_in])
    dataY.append(char_to_int[seq_out])
    print(seq_in, '->', seq_out)
# convert list of lists to array and pad sequences if needed
X = pad_sequences(dataX, maxlen=seq_length, dtype='float32')
# reshape X to be [samples, time steps, features]
X = numpy.reshape(X, (X.shape[0], seq_length, 1))
# normalize
X = X / float(len(alphabet))
# one hot encode the output variable
y = np_utils.to_categorical(dataY)
# create and fit the model
model = Sequential()
model.add(LSTM(16, input_shape=(X.shape[1], X.shape[2])))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=5000, batch_size=len(dataX), verbose=2, shuffle=False)
# summarize performance of the model
scores = model.evaluate(X, y, verbose=0)
print("Model Accuracy: %.2f%%" % (scores[1]*100))
# demonstrate some model predictions
for pattern in dataX:
    x = numpy.reshape(pattern, (1, len(pattern), 1))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    result = int_to_char[index]
    seq_in = [int_to_char[value] for value in pattern]
    print(seq_in, "->", result)
# demonstrate predicting random patterns
print("Test a Random Pattern:")
for i in range(0, 20):
    pattern_index = numpy.random.randint(len(dataX))
    pattern = dataX[pattern_index]
    x = numpy.reshape(pattern, (1, len(pattern), 1))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    result = int_to_char[index]
    seq_in = [int_to_char[value] for value in pattern]
    print(seq_in, "->", result)
```
Running this example produces the following output:

```
Model Accuracy: 100.00%
['A'] -> B
['B'] -> C
['C'] -> D
['D'] -> E
['E'] -> F
['F'] -> G
['G'] -> H
['H'] -> I
['I'] -> J
['J'] -> K
['K'] -> L
['L'] -> M
['M'] -> N
['N'] -> O
['O'] -> P
['P'] -> Q
['Q'] -> R
['R'] -> S
['S'] -> T
['T'] -> U
['U'] -> V
['V'] -> W
['W'] -> X
['X'] -> Y
['Y'] -> Z
Test a Random Pattern:
['T'] -> U
['V'] -> W
['M'] -> N
['Q'] -> R
['D'] -> E
['V'] -> W
['T'] -> U
['U'] -> V
['J'] -> K
['F'] -> G
['N'] -> O
['B'] -> C
['M'] -> N
['F'] -> G
['F'] -> G
['P'] -> Q
['A'] -> B
['K'] -> L
['W'] -> X
['E'] -> F
```
As we expected, the network is able to use the within-sequence context to learn the alphabet, achieving 100% accuracy on the training data. Importantly, the network can accurately predict the next letter in the alphabet for randomly selected characters. Very impressive.
A Stateful LSTM for a One-Char to One-Char Mapping

We have seen that we can break up the raw data into fixed-size sequences, and that the LSTM can learn this representation, but only as random mappings of three characters to one character. We have also seen that we can abuse the batch size to offer more sequence to the network, but only during training. Ideally, we want to expose the network to the entire sequence and let it learn the interdependencies itself, rather than defining those dependencies explicitly in the framing of the problem. We can do this in Keras by making the LSTM layer stateful and manually resetting the state of the network at the end of each epoch, which is also the end of the training sequence.

This is truly how LSTM networks are intended to be used. We first need to define the LSTM layer as stateful. In doing so, we must explicitly specify the batch size as a dimension of the input shape. This also means that when we evaluate the network or make predictions, we must specify and adhere to the same batch size. This is not a problem now because we are using a batch size of 1, but it can introduce difficulties when predictions must be made with a batch size other than one, since predictions then have to be made in batches and in order.

```python
batch_size = 1
model.add(LSTM(50, batch_input_shape=(batch_size, X.shape[1], X.shape[2]), stateful=True))
```
An important difference in training the stateful LSTM is that we train it manually one epoch at a time and reset the state after each epoch. We can do this in a for loop. Again, we do not shuffle the input, preserving the order in which the input training data was created.

```python
for i in range(300):
    model.fit(X, y, epochs=1, batch_size=batch_size, verbose=2, shuffle=False)
    model.reset_states()
```
As mentioned, we specify the batch size when evaluating the performance of the network on the entire training dataset.

```python
# summarize performance of the model
scores = model.evaluate(X, y, batch_size=batch_size, verbose=0)
model.reset_states()
print("Model Accuracy: %.2f%%" % (scores[1]*100))
```
Finally, we can demonstrate that the network has indeed learned the entire alphabet. We seed it with the first letter "A", request a prediction, feed the prediction back in as an input, and repeat the process all the way to "Z".

```python
# demonstrate some model predictions
seed = [char_to_int[alphabet[0]]]
for i in range(0, len(alphabet)-1):
    x = numpy.reshape(seed, (1, len(seed), 1))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    print(int_to_char[seed[0]], "->", int_to_char[index])
    seed = [index]
model.reset_states()
```
We can also see whether the network can start its predictions from an arbitrary letter.

```python
# demonstrate a random starting point
letter = "K"
seed = [char_to_int[letter]]
print("New start: ", letter)
for i in range(0, 5):
    x = numpy.reshape(seed, (1, len(seed), 1))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    print(int_to_char[seed[0]], "->", int_to_char[index])
    seed = [index]
model.reset_states()
```
The complete code listing is provided below:

```python
# Stateful LSTM to learn one-char to one-char mapping
import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.utils import np_utils
# fix random seed for reproducibility
numpy.random.seed(7)
# define the raw dataset
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
# create mapping of characters to integers (0-25) and the reverse
char_to_int = dict((c, i) for i, c in enumerate(alphabet))
int_to_char = dict((i, c) for i, c in enumerate(alphabet))
# prepare the dataset of input to output pairs encoded as integers
seq_length = 1
dataX = []
dataY = []
for i in range(0, len(alphabet) - seq_length, 1):
    seq_in = alphabet[i:i + seq_length]
    seq_out = alphabet[i + seq_length]
    dataX.append([char_to_int[char] for char in seq_in])
    dataY.append(char_to_int[seq_out])
    print(seq_in, '->', seq_out)
# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (len(dataX), seq_length, 1))
# normalize
X = X / float(len(alphabet))
# one hot encode the output variable
y = np_utils.to_categorical(dataY)
# create and fit the model
batch_size = 1
model = Sequential()
model.add(LSTM(50, batch_input_shape=(batch_size, X.shape[1], X.shape[2]), stateful=True))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
for i in range(300):
    model.fit(X, y, epochs=1, batch_size=batch_size, verbose=2, shuffle=False)
    model.reset_states()
# summarize performance of the model
scores = model.evaluate(X, y, batch_size=batch_size, verbose=0)
model.reset_states()
print("Model Accuracy: %.2f%%" % (scores[1]*100))
# demonstrate some model predictions
seed = [char_to_int[alphabet[0]]]
for i in range(0, len(alphabet)-1):
    x = numpy.reshape(seed, (1, len(seed), 1))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    print(int_to_char[seed[0]], "->", int_to_char[index])
    seed = [index]
model.reset_states()
# demonstrate a random starting point
letter = "K"
seed = [char_to_int[letter]]
print("New start: ", letter)
for i in range(0, 5):
    x = numpy.reshape(seed, (1, len(seed), 1))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    print(int_to_char[seed[0]], "->", int_to_char[index])
    seed = [index]
model.reset_states()
```
Running this example produces the following output:

```
Model Accuracy: 100.00%
A -> B
B -> C
C -> D
D -> E
E -> F
F -> G
G -> H
H -> I
I -> J
J -> K
K -> L
L -> M
M -> N
N -> O
O -> P
P -> Q
Q -> R
R -> S
S -> T
T -> U
U -> V
V -> W
W -> X
X -> Y
Y -> Z
New start:  K
K -> B
B -> C
C -> D
D -> E
E -> F
```
We can see that the network has memorized the entire alphabet perfectly. It used the context of the samples themselves and learned whatever dependency was needed to predict the next character in the sequence. We can also see that if we seed the network with the first letter, it can correctly rattle off the rest of the alphabet. Note, however, that it has only learned the full alphabet sequence, and only from a cold start. When asked for the next letter after "K", it predicts "B" and falls back into regurgitating the entire alphabet from the beginning. To truly predict the successor of "K", the state of the network would need to be warmed up iteratively by feeding it the letters from "A" to "J". This tells us that we could achieve the same effect with a "stateless" LSTM by preparing training data like:

```
---a -> b
--ab -> c
-abc -> d
abcd -> e
```
Here the input sequence would be fixed at 25 characters (a-to-y, to predict z), and patterns would be prefixed with zero padding. Finally, this raises the question of training an LSTM network on variable-length input sequences to predict the next character.
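A minimal standalone sketch of how such padded training pairs could be generated (the "-" filler and lowercase alphabet mirror the illustration above; this helper is an illustration, not part of the tutorial's code):

```python
# Build left-padded, fixed-length training pairs ("-" stands in for zero padding)
alphabet = "abcdefghijklmnopqrstuvwxyz"
max_len = len(alphabet) - 1  # 25: the longest prefix "a".."y" predicts "z"
pairs = []
for end in range(len(alphabet) - 1):
    prefix = alphabet[:end + 1]               # growing prefix: "a", "ab", ...
    padded = prefix.rjust(max_len, "-")       # left-pad to a fixed length
    pairs.append((padded, alphabet[end + 1]))

print(pairs[0])   # 24 dashes then 'a', mapped to 'b'
print(pairs[-1])  # the full prefix "a".."y", mapped to 'z'
```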
An LSTM with Variable-Length Input to One-Char Output

In the previous section, we discovered that the Keras "stateful" LSTM was really only a shortcut to replaying the first n sequences, and did not really help us learn a generic model of the alphabet.

In this section, we explore a variation of the "stateless" LSTM that learns random subsequences of the alphabet, in an effort to build a model that can be given arbitrary letters or subsequences and predict the next letter in the alphabet. First, we change the framing of the problem. To keep things simple, we define a maximum input sequence length and set it to a small value like 5 to speed up training. This defines the maximum length of the alphabet subsequences drawn for training. In extensions, this could be set to the full alphabet (26) or longer, if we allowed looping back to the start of the sequence. We also need to define the number of random sequences to create, in this case 1,000. This too could be more or less; I expect fewer patterns are actually required.
```python
# prepare the dataset of input to output pairs encoded as integers
num_inputs = 1000
max_len = 5
dataX = []
dataY = []
for i in range(num_inputs):
    start = numpy.random.randint(len(alphabet)-2)
    end = numpy.random.randint(start, min(start+max_len, len(alphabet)-1))
    sequence_in = alphabet[start:end+1]
    sequence_out = alphabet[end + 1]
    dataX.append([char_to_int[char] for char in sequence_in])
    dataY.append(char_to_int[sequence_out])
    print(sequence_in, '->', sequence_out)
```
The input looks like the following:

```
PQRST -> U
W -> X
O -> P
OPQ -> R
IJKLM -> N
QRSTU -> V
ABCD -> E
X -> Y
GHIJ -> K
```
The input sequences vary in length between 1 and max_len and therefore require zero padding. Here, we use left-hand-side (prefix) padding with the Keras built-in pad_sequences() function.

```python
X = pad_sequences(dataX, maxlen=max_len, dtype='float32')
```
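For intuition, here is a hypothetical numpy-only re-implementation of this "pre" padding (the function name pad_pre is made up for illustration): zeros are added on the left so the real values end up right-aligned, and longer sequences are truncated from the front.

```python
import numpy

def pad_pre(sequences, maxlen):
    # zeros on the left, values right-aligned (Keras 'pre' padding style)
    out = numpy.zeros((len(sequences), maxlen), dtype="float32")
    for i, seq in enumerate(sequences):
        trunc = seq[-maxlen:]                 # keep at most the last maxlen values
        out[i, maxlen - len(trunc):] = trunc  # copy into the right-hand side
    return out

print(pad_pre([[7], [2, 3, 4]], 5))
# rows: [0 0 0 0 7] and [0 0 2 3 4]
```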
The trained model is evaluated on randomly selected input patterns. These could just as easily be newly generated random sequences of characters. I also believe this could be a linear sequence seeded with "A", with the outputs fed back in as single-character inputs. The complete code listing is provided below:
```python
# LSTM with Variable Length Input Sequences to One Character Output
import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.utils import np_utils
from keras.preprocessing.sequence import pad_sequences
# fix random seed for reproducibility
numpy.random.seed(7)
# define the raw dataset
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
# create mapping of characters to integers (0-25) and the reverse
char_to_int = dict((c, i) for i, c in enumerate(alphabet))
int_to_char = dict((i, c) for i, c in enumerate(alphabet))
# prepare the dataset of input to output pairs encoded as integers
num_inputs = 1000
max_len = 5
dataX = []
dataY = []
for i in range(num_inputs):
    start = numpy.random.randint(len(alphabet)-2)
    end = numpy.random.randint(start, min(start+max_len, len(alphabet)-1))
    sequence_in = alphabet[start:end+1]
    sequence_out = alphabet[end + 1]
    dataX.append([char_to_int[char] for char in sequence_in])
    dataY.append(char_to_int[sequence_out])
    print(sequence_in, '->', sequence_out)
# convert list of lists to array and pad sequences if needed
X = pad_sequences(dataX, maxlen=max_len, dtype='float32')
# reshape X to be [samples, time steps, features]
X = numpy.reshape(X, (X.shape[0], max_len, 1))
# normalize
X = X / float(len(alphabet))
# one hot encode the output variable
y = np_utils.to_categorical(dataY)
# create and fit the model
batch_size = 1
model = Sequential()
model.add(LSTM(32, input_shape=(X.shape[1], 1)))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=500, batch_size=batch_size, verbose=2)
# summarize performance of the model
scores = model.evaluate(X, y, verbose=0)
print("Model Accuracy: %.2f%%" % (scores[1]*100))
# demonstrate some model predictions
for i in range(20):
    pattern_index = numpy.random.randint(len(dataX))
    pattern = dataX[pattern_index]
    x = pad_sequences([pattern], maxlen=max_len, dtype='float32')
    x = numpy.reshape(x, (1, max_len, 1))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    result = int_to_char[index]
    seq_in = [int_to_char[value] for value in pattern]
    print(seq_in, "->", result)
```
Running this example produces the following output:

```
Model Accuracy: 98.90%
['Q', 'R'] -> S
['W', 'X'] -> Y
['W', 'X'] -> Y
['C', 'D'] -> E
['E'] -> F
['S', 'T', 'U'] -> V
['G', 'H', 'I', 'J', 'K'] -> L
['O', 'P', 'Q', 'R', 'S'] -> T
['C', 'D'] -> E
['O'] -> P
['N', 'O', 'P'] -> Q
['D', 'E', 'F', 'G', 'H'] -> I
['X'] -> Y
['K'] -> L
['M'] -> N
['R'] -> T
['K'] -> L
['E', 'F', 'G'] -> H
['Q'] -> R
['Q', 'R', 'S'] -> T
```
We can see that although the model did not learn the alphabet perfectly from the randomly generated subsequences, it did very well. The model was not tuned and may require more training, a larger network, or both (an exercise for the reader). This is a good natural extension to the "all sequential input examples in each batch" alphabet model learned above, in that it can handle ad hoc queries, but this time of arbitrary sequence length (up to the maximum).
Summary

In this post, you discovered LSTM recurrent neural networks in Keras and how they manage state. Specifically, you learned:
1. How to develop a naive LSTM network for one-character to one-character prediction.
2. How to configure a naive LSTM to learn a sequence across time steps within a sample.
3. How to configure an LSTM to learn a sequence across samples by manually managing state.