Conv1d() in PyTorch

Conv1d() can get the 2D or 3D tensor of the zero or more elements computed by 1D convolution from a 2D or 3D tensor of zero or more elements, as shown below:

*Memos:

  • The 1st argument for initialization is in_channels(Required-Type:int). *It must be 0 <= x.
  • The 2nd argument for initialization is out_channels(Required-Type:int): *Memos:
    • It must be 0 <= x.
    • 0 is possible but warning occurs.
  • The 3rd argument for initialization is kernel_size(Required-Type:int or tuple or list of int). *It must be 1 <= x.
  • The 4th argument for initialization is stride(Optional-Default:1-Type:int or tuple or list of int). *It must be 1 <= x.
  • The 5th argument for initialization is padding(Optional-Default:0-Type:int, str or tuple or list of int): *Memos:
    • It must be 0 <= x if not str.
    • It must be either 'valid' or 'same' for str.
  • The 6th argument for initialization is dilation(Optional-Default:1-Type:int, tuple or list of int). *It must be 1 <= x.
  • The 7th argument for initialization is groups(Optional-Default:1-Type:int). *It must be 1 <= x.
  • The 8th argument for initialization is bias(Optional-Default:True-Type:bool). *If it's False, the bias attribute is None.
  • The 9th argument for initialization is padding_mode(Optional-Default:'zeros'-Type:str). *'zeros', 'reflect', 'replicate' or 'circular' can be selected.
  • The 10th argument for initialization is device(Optional-Default:None-Type:str, int or device()).
  • The 11th argument for initialization is dtype(Optional-Default:None-Type:dtype).
  • The 1st argument is input(Required-Type:tensor of float or complex). *A complex dtype must be set as Conv1d()'s dtype to use a complex input tensor.
  • The output tensor's requires_grad, which is False by default, is set to True by Conv1d() because the layer's weight and bias require gradients.
  • The input tensor's device and dtype must be the same as Conv1d()'s device and dtype respectively.
  • conv1d1.device and conv1d1.dtype don't work (a Conv1d instance doesn't expose device or dtype attributes; check conv1d1.weight.device and conv1d1.weight.dtype instead).
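
The output length of Conv1d() follows the formula L_out = floor((L_in + 2*padding - dilation*(kernel_size-1) - 1) / stride + 1). Before the walkthrough below, here is a minimal sketch verifying that formula; the settings (kernel_size=3, stride=2, padding=1, dilation=2 and the random input) are arbitrary choices for illustration, not taken from the examples that follow:

import torch
from torch import nn

conv = nn.Conv1d(in_channels=1, out_channels=1, kernel_size=3,
                 stride=2, padding=1, dilation=2)

x = torch.randn(1, 10)  # Unbatched input of shape (in_channels=1, L_in=10).

L_in, k, s, p, d = 10, 3, 2, 1, 2
L_out = (L_in + 2 * p - d * (k - 1) - 1) // s + 1  # Floor division -> 4.

conv(x).shape
# torch.Size([1, 4])

L_out
# 4
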
import torch
from torch import nn

tensor1 = torch.tensor([[8., -3., 0., 1., 5., -2.]])

tensor1.requires_grad
# False

torch.manual_seed(42)

conv1d1 = nn.Conv1d(in_channels=1, out_channels=3, kernel_size=1)
tensor2 = conv1d1(input=tensor1)
tensor2
# tensor([[7.0349, -1.3750, 0.9186, 1.6831, 4.7413, -0.6105],
#         [6.4210, -2.7091, -0.2191, 0.6109, 3.9309, -1.8791],
#         [-1.6724, 0.9046, 0.2018, -0.0325, -0.9696, 0.6703]],
#        grad_fn=<SqueezeBackward1>)
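
# The 2D input is treated as unbatched, interpreted as (in_channels=1, L=6),
# so the output shape is (out_channels=3, L=6); kernel_size=1 with stride=1
# leaves the length unchanged.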

tensor2.requires_grad
# True

conv1d1
# Conv1d(1, 3, kernel_size=(1,), stride=(1,))

conv1d1.in_channels
# 1

conv1d1.out_channels
# 3

conv1d1.kernel_size
# (1,)

conv1d1.stride
# (1,)

conv1d1.padding
# (0,)

conv1d1.dilation
# (1,)

conv1d1.groups
# 1

conv1d1.bias
# Parameter containing:
# tensor([0.9186, -0.2191, 0.2018], requires_grad=True)

conv1d1.padding_mode
# 'zeros'

conv1d1.weight
# Parameter containing:
# tensor([[[0.7645]], [[0.8300]], [[-0.2343]]], requires_grad=True)
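
# weight has shape (out_channels, in_channels/groups, kernel_size) = (3, 1, 1)
# and bias has shape (out_channels,) = (3,).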

torch.manual_seed(42)

conv1d2 = nn.Conv1d(in_channels=3, out_channels=3, kernel_size=1)
conv1d2(input=tensor2)
# tensor([[5.9849, -2.4511, -0.1504, 0.6165, 3.6841, -1.6842],
#         [3.2258, 0.2207, 1.0403, 1.3134, 2.4062, 0.4939],
#         [-0.5434, 0.0364, -0.1217, -0.1744, -0.3853, -0.0163]],
#        grad_fn=<SqueezeBackward1>)
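
# conv1d2 consumes conv1d1's 3-channel output, so in_channels must be 3 here.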

torch.manual_seed(42)

conv1d = nn.Conv1d(in_channels=1, out_channels=3, kernel_size=1, stride=1, 
                   padding=0, dilation=1, groups=1, bias=True,
                   padding_mode='zeros', device=None, dtype=None)
conv1d(input=tensor1)
# tensor([[7.0349, -1.3750, 0.9186, 1.6831, 4.7413, -0.6105],
#         [6.4210, -2.7091, -0.2191, 0.6109, 3.9309, -1.8791],
#         [-1.6724, 0.9046, 0.2018, -0.0325, -0.9696, 0.6703]],
#         grad_fn=<SqueezeBackward1>)
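
# Same seed with every default argument spelled out explicitly,
# so the result matches conv1d1's output above.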

my_tensor = torch.tensor([[8., -3., 0.],
                          [1., 5., -2.]])
torch.manual_seed(42)

conv1d = nn.Conv1d(in_channels=2, out_channels=3, kernel_size=1)
conv1d(input=my_tensor)
# tensor([[4.5675, 0.9684, -1.5181],
#         [-0.2604, 4.1600, -0.8838],
#         [-0.4734, 1.8016, 0.3380]], grad_fn=<SqueezeBackward1>)
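
# Unbatched 2D input interpreted as (in_channels=2, L=3) -> output (3, 3).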

my_tensor = torch.tensor([[8.], [-3.], [0.],
                          [1.], [5.], [-2.]])
torch.manual_seed(42)

conv1d = nn.Conv1d(in_channels=6, out_channels=3, kernel_size=1)
conv1d(input=my_tensor)
# tensor([[1.0529], [-0.8833], [3.4542]], grad_fn=<SqueezeBackward1>)
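
# Unbatched 2D input interpreted as (in_channels=6, L=1) -> output (3, 1).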

my_tensor = torch.tensor([[[8.], [-3.], [0.]],
                          [[1.], [5.], [-2.]]])
torch.manual_seed(42)

conv1d = nn.Conv1d(in_channels=3, out_channels=3, kernel_size=1)
conv1d(input=my_tensor)
# tensor([[[1.6701], [5.1242], [-3.1578]],
#         [[2.6844], [0.1667], [0.5044]]], grad_fn=<ConvolutionBackward0>)
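
# Batched 3D input interpreted as (N=2, in_channels=3, L=1),
# so the output is batched too: (2, 3, 1).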

my_tensor = torch.tensor([[[8.+0.j], [-3.+0.j], [0.+0.j]],
                          [[1.+0.j], [5.+0.j], [-2.+0.j]]])
torch.manual_seed(42)

conv1d = nn.Conv1d(in_channels=3, out_channels=3, kernel_size=1, 
                   dtype=torch.complex64)
conv1d(input=my_tensor)
# tensor([[[3.6675+2.3897j], [-4.0416+3.9155j], [2.3427+1.2531j]],
#         [[-0.2514+3.0452j], [0.9940-2.0626j], [0.6939-0.1171j]]],
#        grad_fn=<AddBackward0>)
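
# With dtype=torch.complex64, the layer's weight and bias are complex
# parameters, so the input tensor must be complex as well.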