abs(): the tensor method, equivalent to torch.abs()
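A minimal check that the method and functional forms agree:

```python
import torch

x = torch.tensor([-1.5, 0.0, 2.0])
# Tensor.abs() and torch.abs() produce the same result
assert torch.equal(x.abs(), torch.abs(x))
# x.abs() -> tensor([1.5000, 0.0000, 2.0000])
```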
torch.gather(input, dim, index, out=None, sparse_grad=False) → Tensor
Gathers values along the axis given by dim, indexing into input with index.
index must have the same number of dimensions as input.
```python
# Example 1: for a 3-D tensor
out[i][j][k] = input[index[i][j][k]][j][k]  # if dim == 0
out[i][j][k] = input[i][index[i][j][k]][k]  # if dim == 1
out[i][j][k] = input[i][j][index[i][j][k]]  # if dim == 2

# Example 2:
t = torch.tensor([[1, 2], [3, 4]])
torch.gather(t, 1, torch.tensor([[0, 0], [1, 0]]))
# tensor([[1, 1],
#         [4, 3]])
```

torch.nn.functional.normalize(input: torch.Tensor, p: float = 2, dim: int = 1, eps: float = 1e-12, out: Optional[torch.Tensor] = None) → torch.Tensor
Performs $L_p$ normalization over dimension dim.
$$v=\frac{v}{\max\left(\|v\|_{p}, \epsilon\right)}$$

torch.nn.functional.linear(input: torch.Tensor, weight: torch.Tensor, bias: Optional[torch.Tensor] = None) → torch.Tensor
input: (N, *, in_features), where * means any number of additional dimensions
weight: (out_features, in_features)
output: (N, *, out_features)
$$y = xA^{T} + b$$

torch.nn.functional.log_softmax(input, dim=None, _stacklevel=3, dtype=None) → torch.Tensor
Equivalent to log(softmax(x)), but computed in a more numerically stable way.
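A minimal sketch tying the three functional ops above together; the shapes here are arbitrary choices for illustration:

```python
import torch
import torch.nn.functional as F

x = torch.randn(4, 3)   # batch of 4, in_features = 3
w = torch.randn(5, 3)   # (out_features, in_features)
b = torch.zeros(5)

# L2-normalize each row: every row now has unit L2 norm
x_n = F.normalize(x, p=2, dim=1)
assert torch.allclose(x_n.norm(p=2, dim=1), torch.ones(4), atol=1e-6)

# y = x A^T + b
y = F.linear(x_n, w, b)
assert y.shape == (4, 5)

# log-probabilities: rows sum to 1 after exponentiation
lp = F.log_softmax(y, dim=1)
assert torch.allclose(lp.exp().sum(dim=1), torch.ones(4), atol=1e-6)
```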
$U(a,b)$
torch.nn.init.uniform_(tensor: torch.Tensor, a: float = 0.0, b: float = 1.0) → torch.Tensor
$N(mean,std^2)$
torch.nn.init.normal_(tensor: torch.Tensor, mean: float = 0.0, std: float = 1.0) → torch.Tensor
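A small sketch of the two basic random initializers; both fill the tensor in place:

```python
import torch
import torch.nn as nn

w = torch.empty(3, 5)

# Fill from U(0, 1) (defaults a=0.0, b=1.0)
nn.init.uniform_(w, a=0.0, b=1.0)
assert (w >= 0.0).all() and (w <= 1.0).all()

# Fill from N(0, 1) (defaults mean=0.0, std=1.0)
nn.init.normal_(w, mean=0.0, std=1.0)
assert w.shape == (3, 5)
```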
Basic idea: keep the variance of a layer's input and output the same as the signal passes through the network, in both the forward and the backward pass.
$$bound = gain \times \sqrt{\frac{6}{fan\_in + fan\_out}}$$

torch.nn.init.xavier_uniform_(tensor: torch.Tensor, gain: float = 1.0) → torch.Tensor

$$std = gain \times \sqrt{\frac{2}{fan\_in + fan\_out}}$$

torch.nn.init.xavier_normal_(tensor: torch.Tensor, gain: float = 1.0) → torch.Tensor
$$bound = gain \times \sqrt{\frac{3}{fan\_mode}}$$

torch.nn.init.kaiming_uniform_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu')
$$std = \frac{gain}{\sqrt{fan\_mode}}$$

torch.nn.init.kaiming_normal_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu')
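A sketch of the Xavier/Kaiming initializers on a weight matrix; the bound check follows the xavier_uniform_ formula above (here fan_in = 5, fan_out = 3 for a (3, 5) tensor):

```python
import math
import torch
import torch.nn as nn

w = torch.empty(3, 5)  # fan_out = 3, fan_in = 5

# Samples from U(-bound, bound) with bound = gain * sqrt(6 / (fan_in + fan_out))
nn.init.xavier_uniform_(w, gain=1.0)
bound = 1.0 * math.sqrt(6.0 / (5 + 3))
assert w.abs().max() <= bound

# Samples from N(0, std^2) with std = gain / sqrt(fan_mode)
nn.init.kaiming_normal_(w, mode='fan_in', nonlinearity='relu')
assert w.shape == (3, 5)
```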
torch.nn.init.constant_(tensor: torch.Tensor, val: float) → torch.Tensor
torch.nn.init.ones_(tensor: torch.Tensor) → torch.Tensor
torch.nn.init.zeros_(tensor: torch.Tensor) → torch.Tensor
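The deterministic initializers likewise fill in place, e.g. for a bias vector:

```python
import torch
import torch.nn as nn

b = torch.empty(4)

nn.init.constant_(b, 0.1)
assert torch.allclose(b, torch.full((4,), 0.1))

nn.init.ones_(b)
assert torch.equal(b, torch.ones(4))

nn.init.zeros_(b)
assert torch.equal(b, torch.zeros(4))
```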
nn.MaxPool2d and nn.AvgPool2d
The output-size formula is the same as for conv2d.
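A minimal sketch on a 4x4 input; with kernel_size=2 (stride defaults to kernel_size), the conv2d-style formula gives H_out = (4 - 2) // 2 + 1 = 2:

```python
import torch
import torch.nn as nn

x = torch.arange(16.0).reshape(1, 1, 4, 4)  # (N, C, H, W)

mp = nn.MaxPool2d(2)  # stride defaults to kernel_size
ap = nn.AvgPool2d(2)

assert mp(x).shape == (1, 1, 2, 2)
assert ap(x).shape == (1, 1, 2, 2)

# the top-left 2x2 window of x is [[0, 1], [4, 5]]
assert mp(x)[0, 0, 0, 0] == 5.0   # max of the window
assert ap(x)[0, 0, 0, 0] == 2.5   # mean of the window
```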
nn.AdaptiveAvgPool2d
torch.nn.AdaptiveAvgPool2d(output_size: Union[T, Tuple[T, ...]])
Given the desired output size, the adaptive algorithm automatically computes the kernel size and the stride for each window.
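A short sketch: the same module handles any input spatial size, since kernel and stride are inferred from the requested output size:

```python
import torch
import torch.nn as nn

pool = nn.AdaptiveAvgPool2d((2, 2))  # desired output H x W

# works for several input spatial sizes without reconfiguration
for h, w in [(4, 4), (6, 8), (5, 7)]:
    x = torch.randn(1, 3, h, w)
    assert pool(x).shape == (1, 3, 2, 2)

# with a 4x4 input, each output cell averages a 2x2 block:
# the top-left block of x is [[0, 1], [4, 5]], mean 2.5
x = torch.arange(16.0).reshape(1, 1, 4, 4)
assert pool(x)[0, 0, 0, 0] == 2.5
```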