torch.cat()
cat is short for concatenate. torch.cat() joins a sequence of tensors along an existing dimension.
cat(tensors: Union[Tuple[Tensor, ...], List[Tensor]], dim: _int = 0, *, out: Optional[Tensor] = None) -> Tensor:
As the signature shows, the first parameter is a tuple or list containing the tensors to concatenate, in order; the second parameter, dim, specifies the dimension along which to concatenate. All tensors must have the same shape except in the concatenating dimension.
python">import torch
import numpy as npdata1 = torch.randint(0, 10, [2, 3, 4])
data2 = torch.randint(0, 10, [2, 3, 4])print(data1)
print(data2)
print("-" * 20)print(torch.cat([data1, data2], dim=0))
print(torch.cat([data1, data2], dim=1))
print(torch.cat([data1, data2], dim=2))
# tensor([[[9, 4, 0, 0],
# [3, 3, 7, 6],
# [6, 1, 0, 8]],
#
# [[9, 1, 1, 2],
# [1, 0, 6, 4],
# [7, 9, 3, 9]]])
# tensor([[[3, 2, 6, 3],
# [8, 3, 1, 1],
# [0, 9, 2, 5]],
#
# [[2, 6, 7, 5],
# [9, 1, 0, 1],
# [0, 6, 4, 4]]])
# --------------------
# tensor([[[9, 4, 0, 0],
# [3, 3, 7, 6],
# [6, 1, 0, 8]],
#
# [[9, 1, 1, 2],
# [1, 0, 6, 4],
# [7, 9, 3, 9]],
#
# [[3, 2, 6, 3],
# [8, 3, 1, 1],
# [0, 9, 2, 5]],
#
# [[2, 6, 7, 5],
# [9, 1, 0, 1],
# [0, 6, 4, 4]]])
# tensor([[[9, 4, 0, 0],
# [3, 3, 7, 6],
# [6, 1, 0, 8],
# [3, 2, 6, 3],
# [8, 3, 1, 1],
# [0, 9, 2, 5]],
#
# [[9, 1, 1, 2],
# [1, 0, 6, 4],
# [7, 9, 3, 9],
# [2, 6, 7, 5],
# [9, 1, 0, 1],
# [0, 6, 4, 4]]])
# tensor([[[9, 4, 0, 0, 3, 2, 6, 3],
# [3, 3, 7, 6, 8, 3, 1, 1],
# [6, 1, 0, 8, 0, 9, 2, 5]],
#
# [[9, 1, 1, 2, 2, 6, 7, 5],
# [1, 0, 6, 4, 9, 1, 0, 1],
# [7, 9, 3, 9, 0, 6, 4, 4]]])
The code above shows the results of concatenating along dims 0, 1, and 2. Note that cat() does not change the number of dimensions. For these 3-D tensors: dim=0 concatenates by block (the 2-D tensors formed by the last two dimensions), dim=1 concatenates by row, and dim=2 concatenates by column.
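The effect on the shapes is easy to check directly; a quick sketch (variable names a and b are illustrative):

```python
import torch

a = torch.randint(0, 10, [2, 3, 4])
b = torch.randint(0, 10, [2, 3, 4])

# cat() keeps the number of dimensions; only the size of `dim` grows
print(torch.cat([a, b], dim=0).shape)  # torch.Size([4, 3, 4])
print(torch.cat([a, b], dim=1).shape)  # torch.Size([2, 6, 4])
print(torch.cat([a, b], dim=2).shape)  # torch.Size([2, 3, 8])
```

In every case the result is still 3-D: only the size along the chosen dimension doubles.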
torch.stack()
stack means to pile up, as in a stack of items.
stack(tensors: Union[Tuple[Tensor, ...], List[Tensor]], dim: _int = 0, *, out: Optional[Tensor] = None) -> Tensor:
The signature of stack() is almost identical to that of cat(): a list or tuple of tensors, plus the dimension dim to stack along. The key difference is that stack() requires all tensors to have exactly the same shape and inserts a new dimension at position dim.
python">import torch
import numpy as npdata1 = torch.randint(0, 10, [2, 3, 4])
data2 = torch.randint(0, 10, [2, 3, 4])print(data1)
print(data2)
print("-" * 20)data3 = torch.stack([data1, data2], dim=0)
data4 = torch.stack([data1, data2], dim=1)
data5 = torch.stack([data1, data2], dim=2)
data6 = torch.stack([data1, data2], dim=3)
print(data3.shape)
print(data3)
print(data4.shape)
print(data4)
print(data5.shape)
print(data5)
print(data6.shape)
print(data6)
# tensor([[[1, 6, 6, 1],
# [3, 1, 8, 2],
# [0, 4, 7, 3]],
#
# [[4, 7, 5, 6],
# [5, 4, 0, 2],
# [8, 0, 3, 0]]])
# tensor([[[5, 2, 7, 2],
# [7, 4, 2, 0],
# [8, 5, 5, 9]],
#
# [[7, 1, 5, 6],
# [3, 5, 4, 7],
# [1, 0, 8, 8]]])
# --------------------
# torch.Size([2, 2, 3, 4])
# tensor([[[[1, 6, 6, 1],
# [3, 1, 8, 2],
# [0, 4, 7, 3]],
#
# [[4, 7, 5, 6],
# [5, 4, 0, 2],
# [8, 0, 3, 0]]],
#
#
# [[[5, 2, 7, 2],
# [7, 4, 2, 0],
# [8, 5, 5, 9]],
#
# [[7, 1, 5, 6],
# [3, 5, 4, 7],
# [1, 0, 8, 8]]]])
# torch.Size([2, 2, 3, 4])
# tensor([[[[1, 6, 6, 1],
# [3, 1, 8, 2],
# [0, 4, 7, 3]],
#
# [[5, 2, 7, 2],
# [7, 4, 2, 0],
# [8, 5, 5, 9]]],
#
#
# [[[4, 7, 5, 6],
# [5, 4, 0, 2],
# [8, 0, 3, 0]],
#
# [[7, 1, 5, 6],
# [3, 5, 4, 7],
# [1, 0, 8, 8]]]])
# torch.Size([2, 3, 2, 4])
# tensor([[[[1, 6, 6, 1],
# [5, 2, 7, 2]],
#
# [[3, 1, 8, 2],
# [7, 4, 2, 0]],
#
# [[0, 4, 7, 3],
# [8, 5, 5, 9]]],
#
#
# [[[4, 7, 5, 6],
# [7, 1, 5, 6]],
#
# [[5, 4, 0, 2],
# [3, 5, 4, 7]],
#
# [[8, 0, 3, 0],
# [1, 0, 8, 8]]]])
# torch.Size([2, 3, 4, 2])
# tensor([[[[1, 5],
# [6, 2],
# [6, 7],
# [1, 2]],
#
# [[3, 7],
# [1, 4],
# [8, 2],
# [2, 0]],
#
# [[0, 8],
# [4, 5],
# [7, 5],
# [3, 9]]],
#
#
# [[[4, 7],
# [7, 1],
# [5, 5],
# [6, 6]],
#
# [[5, 3],
# [4, 5],
# [0, 4],
# [2, 7]],
#
# [[8, 1],
# [0, 0],
# [3, 8],
# [0, 8]]]])
Whichever value dim takes, a new dimension of size 2 (the number of stacked tensors) is inserted at that position. With dim=0 the two tensors are stacked as wholes, raising the result to 4-D; with dim=1 the stacking pairs up the last-two-dimension blocks of the two tensors; with dim=2 it pairs up corresponding rows; and with dim=3 it pairs up corresponding individual elements.
stack() becomes harder to visualize as the stacking dimension grows; see the code and output above for concrete results.
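One way to demystify stack() is to note that it is equivalent to unsqueezing each input at dim and then calling cat(); a small sketch (variable names are illustrative):

```python
import torch

a = torch.randint(0, 10, [2, 3, 4])
b = torch.randint(0, 10, [2, 3, 4])

# stack() inserts a new dimension of size len(tensors) at `dim`;
# it is equivalent to unsqueezing each tensor there, then concatenating
for dim in range(4):  # for 3-D inputs the new dim can be 0..3
    stacked = torch.stack([a, b], dim=dim)
    via_cat = torch.cat([a.unsqueeze(dim), b.unsqueeze(dim)], dim=dim)
    assert torch.equal(stacked, via_cat)
    print(dim, stacked.shape)
```

This also explains the shapes printed earlier: the original [2, 3, 4] shape is preserved, with a new size-2 axis inserted at position dim.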
Note that the dim parameter of cat() and stack() also accepts negative indices, counting backwards from -1 (the last dimension).
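For these 3-D inputs, the negative-index equivalences can be checked directly; a short sketch:

```python
import torch

a = torch.randint(0, 10, [2, 3, 4])
b = torch.randint(0, 10, [2, 3, 4])

# for cat(), dim=-1 is the last dimension of the (3-D) inputs, i.e. dim=2
print(torch.equal(torch.cat([a, b], dim=-1), torch.cat([a, b], dim=2)))

# for stack(), dim=-1 is the last dimension of the 4-D result, i.e. dim=3
print(torch.equal(torch.stack([a, b], dim=-1), torch.stack([a, b], dim=3)))
```

Both comparisons print True: negative dims are resolved against the dimensionality of the output, which for stack() is one higher than that of the inputs.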