Post by Keith Brown:
Hi Lev,
Here is the context.
np.dot(a.T,a)
Here a.T is a view and doesn't take up additional memory, but if I did
np.dot(a.T.copy(), a)
it would allocate extra memory for the copy.
This is part of my memory-saving quest on the GPU. Trying to find some
ways around it...
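To make the memory point concrete, here is a small NumPy sketch (array shapes are illustrative) showing that a.T is a zero-copy view of the original buffer while a.T.copy() allocates a fresh array, and that both paths produce the same product:

```python
import numpy as np

a = np.random.rand(4, 3)

# a.T is a view: it shares a's buffer, so no new data is allocated.
assert a.T.base is a

# a.T.copy() materializes a separate transposed array of the same size.
b = a.T.copy()
assert b.base is None

# Both forms compute the same Gram matrix a.T @ a.
assert np.allclose(np.dot(a.T, a), np.dot(b, a))
```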
If you tell skcuda.linalg.dot() to treat its first argument as transposed, you
don't need to copy the matrix:
skcuda.linalg.dot(a, a, 'T', 'N')
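In context, a minimal sketch of that call (assuming a CUDA-capable GPU with pycuda and scikit-cuda installed; the array shape is illustrative) might look like:

```python
import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpuarray
import skcuda.linalg as linalg

linalg.init()

a = np.random.rand(1000, 200).astype(np.float32)
a_gpu = gpuarray.to_gpu(a)

# Compute a.T @ a on the GPU without materializing a transposed copy:
# the 'T' flag tells the underlying CUBLAS GEMM to read the first
# operand as transposed in place.
result_gpu = linalg.dot(a_gpu, a_gpu, transa='T', transb='N')
```

The transpose is handled inside the CUBLAS kernel, so only the (200, 200) result buffer is allocated in addition to a_gpu itself.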
--
Lev Givon
Bionet Group | Neurokernel Project
http://lebedov.github.io/
http://neurokernel.github.io/