
Why are the dimensions of the weight in torch.nn.functional.linear (out, in) instead of (in, out)?


In the documentation of torch.nn.functional.linear (https://pytorch.org/docs/stable/generated/torch.nn.functional.linear.html), the dimensions of the weight input are (out_features, in_features), and the weight matrix is then transposed when computing the output: y = xA^T + b. Why do they do this instead of taking a matrix W of dimensions (in_features, out_features) and computing y = xW + b?

With y = xW + b the dimensions would match directly without a transpose, so I cannot see a clear reason for the (out_features, in_features) convention.
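For concreteness, here is a minimal sketch of the two conventions (shapes chosen arbitrarily for illustration), showing that F.linear with an (out_features, in_features) weight gives the same result as a plain matrix product with the transposed (in_features, out_features) matrix:

import torch
import torch.nn.functional as F

x = torch.randn(8, 3)         # batch of 8 inputs, in_features = 3
weight = torch.randn(5, 3)    # (out_features, in_features), as the docs specify
bias = torch.randn(5)         # (out_features,)

y = F.linear(x, weight, bias) # internally computes y = x @ weight.T + bias
print(y.shape)                # torch.Size([8, 5])

# The same result with an (in_features, out_features) matrix and no transpose:
W = weight.T                  # shape (3, 5)
y2 = x @ W + bias
print(torch.allclose(y, y2))  # True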
