
PyTorch Deep Learning: From LSTM input to Linear Output


Introduction to LSTM

For the underlying principles of LSTM, see:

https://www.jb51.net/article/178582.htm

https://www.jb51.net/article/178423.htm

Series articles:

PyTorch: Building a Bidirectional LSTM for Time-Series Load Forecasting

PyTorch: Building an LSTM for Multivariate Multi-Step Time-Series Load Forecasting

PyTorch: Building an LSTM for Multivariate Time-Series Load Forecasting

PyTorch: Building an LSTM for Time-Series Load Forecasting

LSTM Parameters

The official documentation explains the parameters of nn.LSTM in detail.

There are seven parameters in total, of which only the first two, input_size and hidden_size, are strictly required (num_layers defaults to 1). Since most people use PyTorch's DataLoader to form batched data, batch_first is also important. The two most common application scenarios for LSTM are text processing and time-series forecasting, and each parameter below is explained with both in mind.
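For reference, here is the full constructor with all seven arguments spelled out (the values are the illustrative ones used later in this article; recent PyTorch versions also add an eighth argument, proj_size, defaulting to 0):

from torch import nn

lstm = nn.LSTM(
    input_size=1,         # number of features per time step
    hidden_size=64,       # size of the hidden state
    num_layers=5,         # number of stacked LSTM layers (default: 1)
    bias=True,            # whether to use bias weights (default: True)
    batch_first=False,    # if True, input/output are (batch, seq, feature) (default: False)
    dropout=0.0,          # dropout between stacked layers (default: 0)
    bidirectional=False,  # if True, a bidirectional LSTM (default: False)
)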

Inputs

Regarding the LSTM input, the official documentation defines it as consisting of two parts: input and the tuple (h_0, c_0), i.e. the initial hidden state and the initial cell state.

The shape of input is:

input(seq_len, batch_size, input_size)

(h_0, c_0):

h_0(num_directions * num_layers, batch_size, hidden_size)
c_0(num_directions * num_layers, batch_size, hidden_size)

h_0 and c_0 have identical shapes.
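Note that if (h_0, c_0) is not passed, PyTorch initializes both states to zeros. A minimal sketch verifying this, using the illustrative sizes from later in this article:

import torch
from torch import nn

lstm = nn.LSTM(input_size=1, hidden_size=64, num_layers=5)
inp = torch.randn(30, 5, 1)      # (seq_len, batch_size, input_size)
h_0 = torch.zeros(1 * 5, 5, 64)  # (num_directions * num_layers, batch_size, hidden_size)
c_0 = torch.zeros(1 * 5, 5, 64)

out_default, _ = lstm(inp)               # (h_0, c_0) omitted -> zeros are used
out_explicit, _ = lstm(inp, (h_0, c_0))  # explicit zero states
print(torch.allclose(out_default, out_explicit))  # True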

Outputs

Regarding the LSTM output, the official documentation defines it as likewise consisting of two parts: output and the tuple (h_n, c_n), i.e. the final hidden state and the final cell state.

The shape of output is:

output(seq_len, batch_size, num_directions * hidden_size)

h_n and c_n keep the same shapes as h_0 and c_0; see the parameter explanations above.
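A minimal sketch confirming these shapes with a forward pass (batch_first is left at its default here, and the sizes are illustrative):

import torch
from torch import nn

seq_len, batch_size = 30, 5
input_size, hidden_size, num_layers, num_directions = 1, 64, 5, 1

lstm = nn.LSTM(input_size, hidden_size, num_layers)  # batch_first=False, the default
inp = torch.randn(seq_len, batch_size, input_size)

output, (h_n, c_n) = lstm(inp)
print(output.shape)  # torch.Size([30, 5, 64]) = (seq_len, batch_size, num_directions * hidden_size)
print(h_n.shape)     # torch.Size([5, 5, 64])  = same shape as h_0
print(c_n.shape)     # torch.Size([5, 5, 64])  = same shape as c_0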

batch_first

If batch_first=True is passed when constructing the LSTM, the shapes of input and output change from:

input(seq_len, batch_size, input_size)
output(seq_len, batch_size, num_directions * hidden_size)

to:

input(batch_size, seq_len, input_size)
output(batch_size, seq_len, num_directions * hidden_size)

that is, batch_size moves to the front, as the sketch below demonstrates.
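A short sketch contrasting the two settings (same illustrative sizes):

import torch
from torch import nn

lstm_tf = nn.LSTM(input_size=1, hidden_size=64, num_layers=5)                    # time-first (default)
lstm_bf = nn.LSTM(input_size=1, hidden_size=64, num_layers=5, batch_first=True)  # batch-first

out_tf, _ = lstm_tf(torch.randn(30, 5, 1))  # input (seq_len, batch_size, input_size)
out_bf, _ = lstm_bf(torch.randn(5, 30, 1))  # input (batch_size, seq_len, input_size)
print(out_tf.shape)  # torch.Size([30, 5, 64])
print(out_bf.shape)  # torch.Size([5, 30, 64])

Note that batch_first only affects input and output; h_0, c_0, h_n, and c_n always keep the (num_directions * num_layers, batch_size, hidden_size) layout.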

Example

A simple LSTM model can be put together as follows:

import torch
from torch import nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, output_size, batch_size):
        super().__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.output_size = output_size
        self.num_directions = 1  # unidirectional LSTM
        self.batch_size = batch_size
        self.lstm = nn.LSTM(self.input_size, self.hidden_size, self.num_layers, batch_first=True)
        self.linear = nn.Linear(self.hidden_size, self.output_size)

    def forward(self, input_seq):
        h_0 = torch.randn(self.num_directions * self.num_layers, self.batch_size, self.hidden_size).to(device)
        c_0 = torch.randn(self.num_directions * self.num_layers, self.batch_size, self.hidden_size).to(device)
        seq_len = input_seq.shape[1]  # input_seq: (batch_size, seq_len) = (5, 30)
        # input(batch_size, seq_len, input_size)
        input_seq = input_seq.view(self.batch_size, seq_len, 1)  # (5, 30, 1)
        # output(batch_size, seq_len, num_directions * hidden_size)
        output, _ = self.lstm(input_seq, (h_0, c_0))  # output(5, 30, 64)
        output = output.contiguous().view(self.batch_size * seq_len, self.hidden_size)  # (5 * 30, 64)
        pred = self.linear(output)  # pred(150, 1)
        pred = pred.view(self.batch_size, seq_len, -1)  # (5, 30, 1)
        pred = pred[:, -1, :]  # (5, 1)
        return pred
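A quick usage sketch for this model (the device handling, sizes, and random data here are illustrative assumptions, not part of the original article):

model = LSTM(input_size=1, hidden_size=64, num_layers=5, output_size=1, batch_size=5).to(device)
input_seq = torch.randn(5, 30).to(device)  # (batch_size, seq_len), as produced by the DataLoader
pred = model(input_seq)
print(pred.shape)  # torch.Size([5, 1])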

The lines that define the model layers are:

self.lstm = nn.LSTM(self.input_size, self.hidden_size, self.num_layers, batch_first=True)
self.linear = nn.Linear(self.hidden_size, self.output_size)

Plugging in the concrete numbers (written as keyword arguments):

self.lstm = nn.LSTM(input_size=1, hidden_size=64, num_layers=5, batch_first=True)
self.linear = nn.Linear(in_features=64, out_features=1)

Now look at the forward pass:

def forward(self, input_seq):
    h_0 = torch.randn(self.num_directions * self.num_layers, self.batch_size, self.hidden_size).to(device)
    c_0 = torch.randn(self.num_directions * self.num_layers, self.batch_size, self.hidden_size).to(device)
    seq_len = input_seq.shape[1]  # input_seq: (batch_size, seq_len) = (5, 30)
    # input(batch_size, seq_len, input_size)
    input_seq = input_seq.view(self.batch_size, seq_len, 1)  # (5, 30, 1)
    # output(batch_size, seq_len, num_directions * hidden_size)
    output, _ = self.lstm(input_seq, (h_0, c_0))  # output(5, 30, 64)
    output = output.contiguous().view(self.batch_size * seq_len, self.hidden_size)  # (5 * 30, 64)
    pred = self.linear(output) # (150, 1)
    pred = pred.view(self.batch_size, seq_len, -1)  # (5, 30, 1)
    pred = pred[:, -1, :]  # (5, 1)
    return pred

Suppose we use the previous 30 time steps to predict the next one; then seq_len=30 and batch_size=5. Since batch_first=True was set, the input fed into the LSTM should have shape:

input(batch_size, seq_len, input_size) = input(5, 30, 1)

In practice, however, the input_seq coming out of the DataLoader has shape:

input_seq(batch_size, seq_len) = input_seq(5, 30)

(5, 30) means there are 5 samples in the batch, each of length 30. To match the LSTM's expected input, we need to reshape input_seq:

input_seq = input_seq.view(self.batch_size, seq_len, 1)  # (5, 30, 1)
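Equivalently, unsqueeze adds the trailing feature dimension without hard-coding the batch size (an alternative one-liner, not the article's original code):

input_seq = input_seq.unsqueeze(-1)  # (5, 30) -> (5, 30, 1)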

Then input_seq is fed into the LSTM:

output, _ = self.lstm(input_seq, (h_0, c_0)) # output(5, 30, 64)

As explained earlier, output has shape:

output(batch_size, seq_len, num_directions * hidden_size) = output(5, 30, 64)

The fully connected layer is defined as:

self.linear = nn.Linear(in_features=64, out_features=1)

Therefore, we need to reshape output so that its second dimension is 64, i.e. (150, 64):

output = output.contiguous().view(self.batch_size * seq_len, self.hidden_size) # (5 * 30, 64)

Then output is passed through the fully connected layer:

pred = self.linear(output) # pred(150, 1)

The resulting predictions have shape (150, 1). We then restore them to (5, 30, 1):

pred = pred.view(self.batch_size, seq_len, -1) # (5, 30, 1)

After the data has been processed by the DataLoader, input_seq and label have the following shapes:

input_seq(batch_size, seq_len) = input_seq(5, 30)
label(batch_size, output_size) = label(5, 1)

Since the output sequence is the input shifted one step to the right, we only need the last element along the second (time) dimension of pred:

pred = pred[:, -1, :] # (5, 1)
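As an aside, because only the last time step is kept, the same predictions can be computed with fewer reshapes by slicing output before the linear layer (an alternative sketch, not the article's code; nn.Linear acts on the last dimension, so the result is identical):

# inside forward(), replacing the view/linear/view/slice steps above:
pred = self.linear(output[:, -1, :])  # (5, 64) -> (5, 1)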

Either way, we end up with the predictions; we then compute the loss against label and update the parameters through backpropagation.
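A minimal training-step sketch, continuing the usage example above (the MSE loss, Adam optimizer, learning rate, and random data are all illustrative assumptions):

loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

input_seq = torch.randn(5, 30).to(device)  # one batch from the DataLoader: (batch_size, seq_len)
label = torch.randn(5, 1).to(device)       # (batch_size, output_size)

optimizer.zero_grad()
pred = model(input_seq)      # (5, 1)
loss = loss_fn(pred, label)
loss.backward()
optimizer.step()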

For a complete time-series forecasting example, see: PyTorch: Building an LSTM for Time-Series Prediction (Load Forecasting).
