What is a Transformer?
The Transformer is a sequence-to-sequence (Seq2Seq) model first proposed by Vaswani et al. in 2017. It has been enormously successful in natural language processing (NLP) and is widely used for tasks such as machine translation, text summarization, and question answering.
In traditional sequence-to-sequence models such as recurrent neural networks (RNNs), information has to be passed along one time step at a time, which makes training slow and hard to parallelize. The Transformer instead introduces a self-attention mechanism, which lets the model process the entire sequence in parallel.
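To make this concrete, here is a minimal sketch of scaled dot-product self-attention in PyTorch. The tensor shapes are made up for illustration, and in a real Transformer the queries, keys, and values would come from learned linear projections rather than being the input itself. The point is that attention between every pair of positions is computed in one batched matrix multiplication, not one time step at a time:

import math
import torch
import torch.nn.functional as F

# Toy input: a batch of 2 sequences, 5 tokens each, 16-dimensional embeddings
x = torch.randn(2, 5, 16)

# Simplest form of self-attention: queries, keys, and values are all the input
q, k, v = x, x, x

# Attention scores between every pair of positions, computed in parallel: (2, 5, 5)
scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(q.size(-1))
weights = F.softmax(scores, dim=-1)

# Each output position is a weighted sum over all input positions: (2, 5, 16)
out = torch.matmul(weights, v)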
The structure of the Transformer
The Transformer consists of an encoder and a decoder, each built by stacking multiple layers. The encoder maps the input sequence to a sequence of continuous representations; the decoder uses the encoder's output together with the tokens predicted so far to generate the target sequence.
Each layer is composed of a multi-head self-attention mechanism and a feed-forward network. Self-attention lets the model attend to different positions of the sequence while encoding or decoding, rather than being limited to a fixed window. The feed-forward network then applies a position-wise linear transformation and activation function to the attention output. In the full architecture, each of these sublayers is additionally wrapped in a residual connection followed by layer normalization.
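For illustration, a single encoder layer could be sketched as follows using PyTorch's built-in nn.MultiheadAttention. The class name EncoderLayer and its parameters (such as ffn_dim) are assumptions made for this sketch and are not part of the reference code further below:

import torch
import torch.nn as nn


class EncoderLayer(nn.Module):
    """Illustrative sketch: self-attention + feed-forward, each with residual + layer norm."""

    def __init__(self, hidden_dim, num_heads, ffn_dim=2048, dropout=0.1):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(hidden_dim, num_heads,
                                               dropout=dropout, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(hidden_dim, ffn_dim),
            nn.ReLU(),
            nn.Linear(ffn_dim, hidden_dim),
        )
        self.norm1 = nn.LayerNorm(hidden_dim)
        self.norm2 = nn.LayerNorm(hidden_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        # Multi-head self-attention sublayer with residual connection and layer norm
        attn_out, _ = self.self_attn(x, x, x)
        x = self.norm1(x + self.dropout(attn_out))
        # Position-wise feed-forward sublayer with residual connection and layer norm
        x = self.norm2(x + self.dropout(self.ffn(x)))
        return x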
Here is a simplified sketch of a Transformer model (the PositionalEncoding, Encoder, and Decoder modules are assumed to be defined elsewhere; PositionalEncoding and part of the attention machinery appear in the reference implementation below):
import torch
import torch.nn as nn
import torch.nn.functional as F


class Transformer(nn.Module):
    def __init__(self, input_dim, output_dim, num_layers, num_heads, hidden_dim):
        super(Transformer, self).__init__()
        # Shared token embedding for source and target vocabularies
        self.embedding = nn.Embedding(input_dim, hidden_dim)
        # Adds position information to the embeddings
        self.positional_encoding = PositionalEncoding(hidden_dim)
        # Stacks of encoder and decoder layers
        self.encoder = Encoder(hidden_dim, num_layers, num_heads)
        self.decoder = Decoder(hidden_dim, num_layers, num_heads)
        # Projects decoder states to vocabulary logits
        self.output_layer = nn.Linear(hidden_dim, output_dim)

    def forward(self, src_seq, tgt_seq):
        # Embed the token ids and add positional encodings
        src_emb = self.embedding(src_seq)
        tgt_emb = self.embedding(tgt_seq)
        src_emb = self.positional_encoding(src_emb)
        tgt_emb = self.positional_encoding(tgt_emb)
        # Encode the source, then decode conditioned on the encoder output
        encoded = self.encoder(src_emb)
        decoded = self.decoder(tgt_emb, encoded)
        output = self.output_layer(decoded)
        return output
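As a point of comparison, PyTorch also ships a built-in nn.Transformer module that bundles the encoder and decoder stacks (token embeddings and positional encodings still have to be supplied separately). The following is a minimal, runnable sketch; the vocabulary size, batch size, and sequence lengths are made up for illustration:

import torch
import torch.nn as nn

vocab_size, hidden_dim, num_heads, num_layers = 10000, 512, 8, 6

embedding = nn.Embedding(vocab_size, hidden_dim)
transformer = nn.Transformer(d_model=hidden_dim, nhead=num_heads,
                             num_encoder_layers=num_layers,
                             num_decoder_layers=num_layers,
                             batch_first=True)
output_layer = nn.Linear(hidden_dim, vocab_size)

src = torch.randint(0, vocab_size, (32, 20))  # (batch, src_len) source token ids
tgt = torch.randint(0, vocab_size, (32, 19))  # (batch, tgt_len) target token ids

# Causal mask so each target position only attends to earlier positions
tgt_mask = transformer.generate_square_subsequent_mask(tgt.size(1))

# Note: nn.Transformer does not add positional encodings itself; a real model
# would apply something like the PositionalEncoding module below to the embeddings.
decoded = transformer(embedding(src), embedding(tgt), tgt_mask=tgt_mask)  # (32, 19, hidden_dim)
logits = output_layer(decoded)                                            # (32, 19, vocab_size)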
Reference implementation: building a Transformer with PyTorch
In PyTorch, the building blocks of the Transformer can be implemented conveniently with the torch.nn module. Here is a simple example:
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class PositionalEncoding(nn.Module):
    def __init__(self, hidden_dim, max_seq_len=5000):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(p=0.1)
        # Precompute the sinusoidal position encodings once
        position = torch.arange(0, max_seq_len).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, hidden_dim, 2) * (-math.log(10000.0) / hidden_dim))
        position_encoding = torch.zeros(max_seq_len, hidden_dim)
        position_encoding[:, 0::2] = torch.sin(position * div_term)  # even dimensions
        position_encoding[:, 1::2] = torch.cos(position * div_term)  # odd dimensions
        # Register as a buffer so it follows the module (e.g. to GPU) but is not trained
        self.register_buffer('position_encoding', position_encoding.unsqueeze(0))

    def forward(self, x):
        # x: (batch_size, seq_len, hidden_dim); add the encoding for the first seq_len positions
        x = x + self.position_encoding[:, :x.size(1), :]
        return self.dropout(x)


class MultiHeadAttention(nn.Module):
    def __init__(self, hidden_dim, num_heads):
        super(MultiHeadAttention, self).__init__()
        self.num_heads = num_heads
        self.head_dim = hidden_dim // num_heads
        # Separate linear projections for queries, keys, and values
        self.query_projection = nn.Linear(hidden_dim, hidden_dim)
        self.key_projection = nn.Linear(hidden_dim, hidden_dim)
        self.value_projection = nn.Linear(hidden_dim, hidden_dim)
        self.output_projection = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, query, key, value, mask=None):
        batch_size = query.size(0)
        # Project and reshape to (batch_size, num_heads, seq_len, head_dim)
        query = self.query_projection(query).view(batch_size, -1, self.num_heads, self.head_dim).transpose(1, 2)
        key = self.key_projection(key).view(batch_size, -1, self.num_heads, self.head_dim).transpose(1, 2)
        value = self.value_projection(value).view(batch_size, -1, self.num_heads, self.head_dim).transpose(1, 2)
        # Scaled dot-product attention scores: (batch_size, num_heads, seq_len, seq_len)
        scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(self.head_dim)