PyTorch
torch.Tensor
abs(), the same as torch.abs()
abs_(), the in-place variant (a trailing underscore marks an in-place method)
acos()
acos_()
asin()
asin_()
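A minimal sketch of the out-of-place vs. in-place distinction (values chosen arbitrarily):

```python
import torch

x = torch.tensor([-1.0, 2.0, -3.0])
y = x.abs()   # returns a new tensor; x is unchanged
x.abs_()      # modifies x in place and returns it
print(y)      # tensor([1., 2., 3.])
print(x)      # tensor([1., 2., 3.])
```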
gather
torch.gather(input, dim, index, out=None, sparse_grad=False) → Tensor
Gathers values from input along the given axis dim, according to index.
index has the same number of dimensions as the given tensor.
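A small example with toy values, showing the rule out[i][j] = input[i][index[i][j]] for dim=1:

```python
import torch

x = torch.tensor([[1, 2],
                  [3, 4]])
idx = torch.tensor([[0, 0],
                    [1, 0]])
# dim=1: out[i][j] = x[i][idx[i][j]]
print(torch.gather(x, 1, idx))
# tensor([[1, 1],
#         [4, 3]])
```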
nn.functional
normalize
torch.nn.functional.normalize(input: torch.Tensor, p: float = 2, dim: int = 1, eps: float = 1e-12, out: Optional[torch.Tensor] = None) → torch.Tensor
Performs $L_p$ normalization over dimension dim: each vector $v$ along dim becomes $\frac{v}{\max(\lVert v \rVert_p, \epsilon)}$.
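For example, L2-normalizing the rows of a matrix (toy values):

```python
import torch
import torch.nn.functional as F

x = torch.tensor([[3.0, 4.0]])
print(F.normalize(x, p=2, dim=1))  # tensor([[0.6000, 0.8000]]), each row has unit L2 norm
```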
linear
torch.nn.functional.linear(input: torch.Tensor, weight: torch.Tensor, bias: Optional[torch.Tensor] = None) → torch.Tensor
Applies $y = xA^T + b$.
input: (N, *, in_features), where * means any number of additional dimensions
weight: (out_features, in_features)
output: (N, *, out_features)
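A quick shape check (sizes chosen arbitrarily):

```python
import torch
import torch.nn.functional as F

x = torch.randn(8, 16)   # (N, in_features)
w = torch.randn(32, 16)  # (out_features, in_features)
b = torch.randn(32)
y = F.linear(x, w, b)    # equivalent to x @ w.t() + b
print(y.shape)           # torch.Size([8, 32])
```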
log_softmax
torch.nn.functional.log_softmax(input, dim=None, _stacklevel=3, dtype=None) → torch.Tensor
Mathematically equivalent to log(softmax(x)), but computing the two operations separately is slower and numerically unstable; this fused version is preferred.
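A quick numerical check of the equivalence:

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 5)
a = F.log_softmax(x, dim=1)
b = torch.log(F.softmax(x, dim=1))
print(torch.allclose(a, b))  # True, up to floating-point error
```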
nn.init
uniform & normal
$U(a,b)$
torch.nn.init.uniform_(tensor: torch.Tensor, a: float = 0.0, b: float = 1.0) → torch.Tensor
$N(mean,std^2)$
torch.nn.init.normal_(tensor: torch.Tensor, mean: float = 0.0, std: float = 1.0) → torch.Tensor
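Both fill the tensor in place; a minimal sketch:

```python
import torch
import torch.nn as nn

w = torch.empty(3, 5)
nn.init.uniform_(w, a=0.0, b=1.0)      # samples from U(0, 1)
nn.init.normal_(w, mean=0.0, std=1.0)  # overwrites with samples from N(0, 1)
```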
xavier_uniform & xavier_normal
Basic idea: keep the variance of a layer's input and output the same, for both the forward and the backward pass.
torch.nn.init.xavier_uniform_(tensor: torch.Tensor, gain: float = 1.0) → torch.Tensor
torch.nn.init.xavier_normal_(tensor: torch.Tensor, gain: float = 1.0) → torch.Tensor
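Typical usage, optionally scaling by the gain recommended for the following nonlinearity:

```python
import torch
import torch.nn as nn

w = torch.empty(64, 128)
nn.init.xavier_uniform_(w, gain=nn.init.calculate_gain('relu'))
```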
kaiming_uniform & kaiming_normal
torch.nn.init.kaiming_uniform_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu')
torch.nn.init.kaiming_normal_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu')
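For example, the initialization commonly used for ReLU conv layers (mode='fan_out' preserves variance in the backward pass):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=3)
nn.init.kaiming_normal_(conv.weight, mode='fan_out', nonlinearity='relu')
```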
constant
torch.nn.init.constant_(tensor: torch.Tensor, val: float) → torch.Tensor
torch.nn.init.ones_(tensor: torch.Tensor) → torch.Tensor
torch.nn.init.zeros_(tensor: torch.Tensor) → torch.Tensor
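All three fill the tensor in place:

```python
import torch
import torch.nn as nn

b = torch.empty(10)
nn.init.constant_(b, 0.1)  # every element set to 0.1
nn.init.ones_(b)           # all ones
nn.init.zeros_(b)          # all zeros
```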
nn
nn.Conv2d
Output size: $H_{out} = \left\lfloor \frac{H_{in} + 2 \times padding - dilation \times (kernel\_size - 1) - 1}{stride} + 1 \right\rfloor$, and analogously for $W_{out}$.
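A shape sketch (channel counts chosen arbitrarily):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=1, padding=1)
x = torch.randn(1, 3, 32, 32)
print(conv(x).shape)  # torch.Size([1, 16, 32, 32]); padding=1 keeps 32x32
```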
nn.MaxPool2d & nn.AvgPool2d
The output-size formula is the same as for Conv2d.
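For instance, a 2x2 max pool halves the spatial dimensions:

```python
import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2, stride=2)
x = torch.randn(1, 16, 32, 32)
print(pool(x).shape)  # torch.Size([1, 16, 16, 16])
```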
nn.AdaptiveAvgPool2d
torch.nn.AdaptiveAvgPool2d(output_size: Union[T, Tuple[T, ...]])
Given the desired output size, the adaptive algorithm automatically computes the kernel size and stride for us.
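A common use is global average pooling at the end of a CNN:

```python
import torch
import torch.nn as nn

gap = nn.AdaptiveAvgPool2d((1, 1))  # any HxW input is pooled down to 1x1
x = torch.randn(1, 512, 7, 7)
print(gap(x).shape)  # torch.Size([1, 512, 1, 1])
```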