What are the different types of algorithms for Matrix-Matrix multiplication / Matrix-Vector multiplication
By : user2615439
Date : March 29 2020, 07:55 AM
What are the different types of algorithms for matrix-matrix multiplication and matrix-vector multiplication?

Besides Cannon's algorithm, there is also Fox's algorithm for distributed matrix-matrix multiplication: both lay the processes out as a 2D mesh and circulate blocks of the operands between local multiplies.
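To make the idea concrete, here is a minimal serial sketch of Cannon's algorithm, using NumPy list shuffles to simulate the block shifts that would normally be messages between processes on the grid (the function name `cannon_matmul` is ours, not from any library):

```python
import numpy as np

def cannon_matmul(A, B, grid):
    """Simulate Cannon's algorithm on a grid x grid process mesh.

    A, B: square matrices whose size is divisible by `grid`.
    Each "process" (i, j) owns one block of A and one block of B;
    blocks are cyclically shifted left/up between local multiplies.
    """
    n = A.shape[0]
    bs = n // grid  # block size
    # Split into grid x grid blocks: Ab[i][j] is block row i, block column j.
    Ab = [[A[i*bs:(i+1)*bs, j*bs:(j+1)*bs] for j in range(grid)] for i in range(grid)]
    Bb = [[B[i*bs:(i+1)*bs, j*bs:(j+1)*bs] for j in range(grid)] for i in range(grid)]
    Cb = [[np.zeros((bs, bs)) for _ in range(grid)] for _ in range(grid)]
    # Initial skew: shift block row i of A left by i, block column j of B up by j.
    Ab = [[Ab[i][(j + i) % grid] for j in range(grid)] for i in range(grid)]
    Bb = [[Bb[(i + j) % grid][j] for j in range(grid)] for i in range(grid)]
    for _ in range(grid):
        # Local multiply-accumulate on every "process".
        for i in range(grid):
            for j in range(grid):
                Cb[i][j] = Cb[i][j] + Ab[i][j] @ Bb[i][j]
        # Shift A blocks one step left, B blocks one step up.
        Ab = [[Ab[i][(j + 1) % grid] for j in range(grid)] for i in range(grid)]
        Bb = [[Bb[(i + 1) % grid][j] for j in range(grid)] for i in range(grid)]
    return np.block(Cb)

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
B = rng.standard_normal((6, 6))
print(np.allclose(cannon_matmul(A, B, 3), A @ B))  # True
```

Fox's algorithm differs only in how the A blocks move: instead of shifting them, each step broadcasts one A block along its block row while the B blocks shift up as here.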
|
BLAS matrix-vector multiplication vs vector-matrix multiplication. One works; the other fails
By : Сергей Ser
Date : March 29 2020, 07:55 AM
Since a 2-vector is multiplied by a 2x2 matrix, performing the operation with pen and paper is not too complex. For the transposed matrix, the first component is (4+i)*(3+2i)+(14+3i)*2 = 38+17i. code :
#include <stdlib.h>
#include <stdio.h>
#include <complex.h>

/* CGEMV from a Fortran BLAS library (hence the trailing underscore) */
extern int cgemv_(char* trans, int* m, int* n, float complex* alpha,
                  float complex* A, int* lda, float complex* x, int* incx,
                  float complex* beta, float complex* y, int* incy);

int main(void) {
    /* config variables */
    char normal = 'N';
    char transpose = 'T';
    char ctranspose = 'C';
    int m = 2;
    float complex alpha = 1.0 + 0.0*I;
    float complex beta = 0.0 + 0.0*I;
    int one = 1;

    /* data buffers: a is stored column-major, as BLAS expects */
    float complex a[4] = {4.0 + 1.0*I, 14.0 + 3.0*I, 3.0 + 0.0*I, 6.0 + 0.0*I};
    float complex x[2] = {3.0 + 2.0*I, 2.0 + 0.0*I};
    float complex y[2];
    float complex ye[2];

    /* y = A*x */
    ye[0] = a[0]*x[0] + a[2]*x[1];
    ye[1] = a[1]*x[0] + a[3]*x[1];
    cgemv_(&normal, &m, &m, &alpha, &a[0], &m, &x[0], &one, &beta, &y[0], &one);
    printf("N\n");
    printf("y[0]=%2.6f + %2.6f I expected %6f + %6f I\n", creal(y[0]), cimag(y[0]), creal(ye[0]), cimag(ye[0]));
    printf("y[1]=%2.6f + %2.6f I expected %6f + %6f I\n", creal(y[1]), cimag(y[1]), creal(ye[1]), cimag(ye[1]));

    /* y = A^T*x */
    ye[0] = a[0]*x[0] + a[1]*x[1];
    ye[1] = a[2]*x[0] + a[3]*x[1];
    cgemv_(&transpose, &m, &m, &alpha, &a[0], &m, &x[0], &one, &beta, &y[0], &one);
    printf("T\n");
    printf("y[0]=%2.6f + %2.6f I expected %6f + %6f I\n", creal(y[0]), cimag(y[0]), creal(ye[0]), cimag(ye[0]));
    printf("y[1]=%2.6f + %2.6f I expected %6f + %6f I\n", creal(y[1]), cimag(y[1]), creal(ye[1]), cimag(ye[1]));

    /* y = A^H*x (conjugate transpose) */
    ye[0] = conjf(a[0])*x[0] + conjf(a[1])*x[1];
    ye[1] = conjf(a[2])*x[0] + conjf(a[3])*x[1];
    cgemv_(&ctranspose, &m, &m, &alpha, &a[0], &m, &x[0], &one, &beta, &y[0], &one);
    printf("C\n");
    printf("y[0]=%2.6f + %2.6f I expected %6f + %6f I\n", creal(y[0]), cimag(y[0]), creal(ye[0]), cimag(ye[0]));
    printf("y[1]=%2.6f + %2.6f I expected %6f + %6f I\n", creal(y[1]), cimag(y[1]), creal(ye[1]), cimag(ye[1]));
    return 0;
}
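If linking against the Fortran symbol is awkward, the same check can be run from Python: SciPy exposes the raw BLAS routines, and `scipy.linalg.blas.cgemv` takes `trans` = 0, 1, or 2 for 'N', 'T', 'C'. A sketch with the same data as the C example, assuming SciPy is installed:

```python
import numpy as np
from scipy.linalg import blas

# Same 2x2 matrix and 2-vector as above; BLAS wants column-major (Fortran) order.
a = np.array([[4 + 1j, 3 + 0j],
              [14 + 3j, 6 + 0j]], dtype=np.complex64, order='F')
x = np.array([3 + 2j, 2 + 0j], dtype=np.complex64)

y_n = blas.cgemv(1.0, a, x, trans=0)  # y = A @ x
y_t = blas.cgemv(1.0, a, x, trans=1)  # y = A.T @ x
y_c = blas.cgemv(1.0, a, x, trans=2)  # y = A.conj().T @ x

print(y_t[0])  # first component of A.T @ x: (38+17j), matching the pen-and-paper result
```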
|
Error: requires numeric/complex matrix/vector arguments when using matrix times vector multiplication
By : Greg Gould
Date : March 29 2020, 07:55 AM
This works fine, so you must have some NAs in your cpi_calc table. Try na.omit(cpi_calc). code :
cpi_calc <- read.table(text="0.358 0.359 0.06 0.419 0.191 0.296
100 100 100 100 100 100
99.99 100 100.07 100.01 100.8 101.59
99.52 99.58 99.94 100.01 101.03 101.38
99.46 99.44 99.85 100.01 101.03 101.03
99.13 99.37 99.79 99.97 101 101.82",header=FALSE)
as.matrix(cpi_calc[2:6, 1:6]) %*% t(cpi_calc[1, 1:6])
1
2 168.3000
3 168.9282
4 168.5832
5 168.4024
6 168.4669
|
Tri-dimensional array as multiplication of vector and matrix
By : yputli
Date : March 29 2020, 07:55 AM
There are several ways you can achieve this. One is using np.dot; note that extra axes must be introduced so both ndarrays can be multiplied (here A has shape (3, 5) and B has shape (4,)): code :
C = np.dot(A[:, :, None], B[None, :])
print(C.shape)
# (3, 5, 4)
C = np.multiply.outer(A,B)
print(C.shape)
# (3, 5, 4)
C = np.einsum('ij,k->ijk', A, B)
print(C.shape)
# (3, 5, 4)
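A further equivalent, assuming the same shapes A (3, 5) and B (4,), is plain broadcasting: appending an axis to A lets elementwise multiplication produce the outer product directly.

```python
import numpy as np

A = np.arange(15.0).reshape(3, 5)
B = np.arange(4.0)

C = A[:, :, None] * B  # (3, 5, 1) * (4,) broadcasts to (3, 5, 4)
print(C.shape)
# (3, 5, 4)
print(np.allclose(C, np.multiply.outer(A, B)))
# True
```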
|
Matrix Multiplication in 3,4 axes pytorch
By : user3517755
Date : March 29 2020, 07:55 AM
I have two tensors of shape a(16,8,8,64) and b(64,64). Suppose I extract the last dimension of a into a column vector c; I want to compute matmul(matmul(c.T, b), c) for each of the first three dimensions of a, so that the final product has shape (16,8,8,1). How can I achieve this in PyTorch?

It can be done as follows: code :
row_vec = a[:, :, :, None, :].float()  # (16, 8, 8, 1, 64)
col_vec = a[:, :, :, :, None].float()  # (16, 8, 8, 64, 1)
b = (b[None, None, None, :, :]).float()  # (1, 1, 1, 64, 64)
prod = torch.matmul(torch.matmul(row_vec, b), col_vec)  # (16, 8, 8, 1, 1)
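The same contraction can also be written in a single einsum call, which sums over both 64-sized axes at once and gives the requested (16, 8, 8, 1) shape directly (a sketch with random data):

```python
import torch

a = torch.randn(16, 8, 8, 64)
b = torch.randn(64, 64)

# For each (i, j, k): the scalar c.T @ b @ c with c = a[i, j, k, :].
prod = torch.einsum('ijkc,cd,ijkd->ijk', a, b, a)[..., None]
print(prod.shape)
# torch.Size([16, 8, 8, 1])
```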
|