[PyCUDA] Install issue
Mike McFarlane
2014-11-27 16:08:52 UTC
Hi

I've installed pycuda following
http://wiki.tiker.net/PyCuda/Installation/Linux

When I run test/test_driver.py, many tests fail, mostly with
"TypeError: 'numpy.ndarray' does not have the buffer interface". The output
from test_driver.py (and from the initial make) is below.

Can anyone explain what is wrong please?

Thanks, and apologies if this mailing list isn't the right place for such
questions.

Mike
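[Editor's note: every failure below bottoms out in memcpy_htod (or Memcpy2D.set_src_host) being handed a numpy array that does not expose the buffer interface. A plausible culprit on this stack (Python 2.6 on EL6, where the distro numpy is a very old 1.4.x) is a numpy build that predates PEP 3118 buffer-protocol support. This is a guess, not a confirmed diagnosis; the following minimal, standalone check is not part of the PyCUDA test suite.]

```python
# Hypothetical quick check: does this numpy build expose the PEP 3118
# ("new-style") buffer protocol that pycuda.driver.memcpy_htod relies on?
# If not, upgrading numpy is the usual fix for this class of TypeError.
import sys
import numpy as np

print("python %s, numpy %s" % (sys.version.split()[0], np.__version__))

a = np.zeros(4, dtype=np.float32)
try:
    mv = memoryview(a)  # requires the new buffer protocol
    print("ndarray exposes the buffer interface (%d bytes)" % mv.nbytes)
except (TypeError, NameError):
    # NameError: Python < 2.7 has no memoryview builtin at all;
    # TypeError: the numpy build predates buffer-protocol support.
    print("ndarray does NOT expose the buffer interface")
```

If the check prints the failure branch, the PyCUDA tests cannot work regardless of how PyCUDA itself was built.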

python test_driver.py
================================================= test session starts
=================================================
platform linux2 -- Python 2.6.6 -- pytest-2.3.5
collected 23 items

test_driver.py FFF..F.F.FFFF.FFsFFF...

====================================================== FAILURES
=======================================================
___________________________________________ TestDriver.test_simple_kernel_2
___________________________________________

args = (<test_driver.TestDriver instance at 0x22ceea8>,), kwargs = {}
pycuda = <module 'pycuda' from
'/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/__init__.pyc'>
ctx = <pycuda._driver.Context object at 0x229bb18>, clear_context_caches =
<function clear_context_caches at 0x1af6ed8>
collect = <built-in function collect>

def f(*args, **kwargs):
import pycuda.driver
# appears to be idempotent, i.e. no harm in calling it more than once
pycuda.driver.init()

ctx = make_default_context()
try:
assert isinstance(ctx.get_device().name(), str)
assert isinstance(ctx.get_device().compute_capability(), tuple)
assert isinstance(ctx.get_device().get_attributes(), dict)
inner_f(*args, **kwargs)
/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/tools.py:453:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <test_driver.TestDriver instance at 0x22ceea8>

@mark_cuda_test
def test_simple_kernel_2(self):
mod = SourceModule("""
__global__ void multiply_them(float *dest, float *a, float *b)
{
const int i = threadIdx.x;
dest[i] = a[i] * b[i];
}
""")

multiply_them = mod.get_function("multiply_them")

a = np.random.randn(400).astype(np.float32)
b = np.random.randn(400).astype(np.float32)
a_gpu = drv.to_device(a)
test_driver.py:66:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

bf_obj = array([-0.4856573 , 0.45444968, 1.2984767 , -0.02761065,
0.41128042,
...0.56486648, -0.43269619, 0.55279583, -1.11403978, -0.85410458],
dtype=float32)

def to_device(bf_obj):
import sys
if sys.version_info >= (2, 7):
bf = memoryview(bf_obj).tobytes()
else:
bf = buffer(bf_obj)
result = mem_alloc(len(bf))
memcpy_htod(result, bf)
E TypeError: 'buffer' does not have the buffer interface

/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/driver.py:776:
TypeError
_______________________________________________ TestDriver.test_memory
________________________________________________

args = (<test_driver.TestDriver instance at 0x228dd88>,), kwargs = {}
pycuda = <module 'pycuda' from
'/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/__init__.pyc'>
ctx = <pycuda._driver.Context object at 0x23031b8>, clear_context_caches =
<function clear_context_caches at 0x1af6ed8>
collect = <built-in function collect>

def f(*args, **kwargs):
import pycuda.driver
# appears to be idempotent, i.e. no harm in calling it more than once
pycuda.driver.init()

ctx = make_default_context()
try:
assert isinstance(ctx.get_device().name(), str)
assert isinstance(ctx.get_device().compute_capability(), tuple)
assert isinstance(ctx.get_device().get_attributes(), dict)
inner_f(*args, **kwargs)
/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/tools.py:453:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <test_driver.TestDriver instance at 0x228dd88>

@mark_cuda_test
def test_memory(self):
z = np.random.randn(400).astype(np.float32)
new_z = drv.from_device_like(drv.to_device(z), z)
test_driver.py:28:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

bf_obj = array([ -4.73744690e-01, 1.29374540e+00, -6.42129302e-01,
3.951009...1, -1.27567184e+00, 1.12105417e+00,
-6.89846277e-01], dtype=float32)

def to_device(bf_obj):
import sys
if sys.version_info >= (2, 7):
bf = memoryview(bf_obj).tobytes()
else:
bf = buffer(bf_obj)
result = mem_alloc(len(bf))
memcpy_htod(result, bf)
E TypeError: 'buffer' does not have the buffer interface

/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/driver.py:776:
TypeError
______________________________________________ TestDriver.test_gpuarray
_______________________________________________

args = (<test_driver.TestDriver instance at 0x2487b90>,), kwargs = {}
pycuda = <module 'pycuda' from
'/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/__init__.pyc'>
ctx = <pycuda._driver.Context object at 0x23030c8>, clear_context_caches =
<function clear_context_caches at 0x1af6ed8>
collect = <built-in function collect>

def f(*args, **kwargs):
import pycuda.driver
# appears to be idempotent, i.e. no harm in calling it more than once
pycuda.driver.init()

ctx = make_default_context()
try:
assert isinstance(ctx.get_device().name(), str)
assert isinstance(ctx.get_device().compute_capability(), tuple)
assert isinstance(ctx.get_device().get_attributes(), dict)
inner_f(*args, **kwargs)
/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/tools.py:453:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <test_driver.TestDriver instance at 0x2487b90>

@mark_cuda_test
def test_gpuarray(self):
a = np.arange(200000, dtype=np.float32)
b = a + 17
import pycuda.gpuarray as gpuarray
a_g = gpuarray.to_gpu(a)
test_driver.py:148:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

ary = array([ 0.00000000e+00, 1.00000000e+00, 2.00000000e+00, ...,
1.99997000e+05, 1.99998000e+05, 1.99999000e+05], dtype=float32)
allocator = <Boost.Python.function object at 0x1a8f7c0>

def to_gpu(ary, allocator=drv.mem_alloc):
"""converts a numpy array to a GPUArray"""
result = GPUArray(ary.shape, ary.dtype, allocator,
strides=ary.strides)
result.set(ary)
/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/gpuarray.py:913:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <[TypeError("'numpy.ndarray' does not have the buffer interface")
raised in repr()] SafeRepr object at 0x2477ea8>
ary = array([ 0.00000000e+00, 1.00000000e+00, 2.00000000e+00, ...,
1.99997000e+05, 1.99998000e+05, 1.99999000e+05], dtype=float32)

def set(self, ary):
assert ary.size == self.size
assert ary.dtype == self.dtype
if ary.strides != self.strides:
from warnings import warn
warn("Setting array from one with different strides/storage order. "
"This will cease to work in 2013.x.",
stacklevel=2)

assert self.flags.forc
drv.memcpy_htod(self.gpudata, ary)
E TypeError: 'numpy.ndarray' does not have the buffer interface

/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/gpuarray.py:228:
TypeError
____________________________________________ TestDriver.test_simple_kernel
____________________________________________

args = (<test_driver.TestDriver instance at 0x1e73908>,), kwargs = {}
pycuda = <module 'pycuda' from
'/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/__init__.pyc'>
ctx = <pycuda._driver.Context object at 0x24b2488>, clear_context_caches =
<function clear_context_caches at 0x1af6ed8>
collect = <built-in function collect>

def f(*args, **kwargs):
import pycuda.driver
# appears to be idempotent, i.e. no harm in calling it more than once
pycuda.driver.init()

ctx = make_default_context()
try:
assert isinstance(ctx.get_device().name(), str)
assert isinstance(ctx.get_device().compute_capability(), tuple)
assert isinstance(ctx.get_device().get_attributes(), dict)
inner_f(*args, **kwargs)
/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/tools.py:453:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <test_driver.TestDriver instance at 0x1e73908>

@mark_cuda_test
def test_simple_kernel(self):
mod = SourceModule("""
__global__ void multiply_them(float *dest, float *a, float *b)
{
const int i = threadIdx.x;
dest[i] = a[i] * b[i];
}
""")

multiply_them = mod.get_function("multiply_them")

a = np.random.randn(400).astype(np.float32)
b = np.random.randn(400).astype(np.float32)

dest = np.zeros_like(a)
multiply_them(
drv.Out(dest), drv.In(a), drv.In(b),
block=(400, 1, 1))
test_driver.py:49:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

func = <pycuda._driver.Function object at 0x229bd70>
args = (<pycuda.driver.Out object at 0x227dcd0>, <pycuda.driver.In object
at 0x227ded0>, <pycuda.driver.In object at 0x227ddd0>)
kwargs = {}, grid = (1, 1), stream = None, block = (400, 1, 1), shared = 0,
texrefs = [], time_kernel = False
handlers = [<pycuda.driver.Out object at 0x227dcd0>, <pycuda.driver.In
object at 0x227ded0>, <pycuda.driver.In object at 0x227ddd0>]
arg_buf = '\x00\x00!\x00\x00\x00\x00\x00\x00\x07!\x00\x00\x00\x00\x00\x00\x0e!\x00\x00\x00\x00\x00'
handler = <pycuda.driver.In object at 0x227ded0>

def function_call(func, *args, **kwargs):
grid = kwargs.pop("grid", (1, 1))
stream = kwargs.pop("stream", None)
block = kwargs.pop("block", None)
shared = kwargs.pop("shared", 0)
texrefs = kwargs.pop("texrefs", [])
time_kernel = kwargs.pop("time_kernel", False)

if kwargs:
raise ValueError(
"extra keyword arguments: %s"
% (",".join(kwargs.iterkeys())))

if block is None:
raise ValueError("must specify block size")

func._set_block_shape(*block)
handlers, arg_buf = _build_arg_buf(args)
handler.pre_call(stream)
/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/driver.py:380:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <pycuda.driver.In object at 0x227ded0>, stream = None

def pre_call(self, stream):
if stream is not None:
memcpy_htod(self.get_device_alloc(), self.array)
memcpy_htod(self.get_device_alloc(), self.array)
E TypeError: 'numpy.ndarray' does not have the buffer interface

/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/driver.py:82:
TypeError
_______________________________________
TestDriver.test_multichannel_2d_texture
_______________________________________

args = (<test_driver.TestDriver instance at 0x2478d88>,), kwargs = {}
pycuda = <module 'pycuda' from
'/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/__init__.pyc'>
ctx = <pycuda._driver.Context object at 0x24b2f50>, clear_context_caches =
<function clear_context_caches at 0x1af6ed8>
collect = <built-in function collect>

def f(*args, **kwargs):
import pycuda.driver
# appears to be idempotent, i.e. no harm in calling it more than once
pycuda.driver.init()

ctx = make_default_context()
try:
assert isinstance(ctx.get_device().name(), str)
assert isinstance(ctx.get_device().compute_capability(), tuple)
assert isinstance(ctx.get_device().get_attributes(), dict)
inner_f(*args, **kwargs)
/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/tools.py:453:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <test_driver.TestDriver instance at 0x2478d88>

@mark_cuda_test
def test_multichannel_2d_texture(self):
mod = SourceModule("""
#define CHANNELS 4
texture<float4, 2, cudaReadModeElementType> mtx_tex;

__global__ void copy_texture(float *dest)
{
int row = threadIdx.x;
int col = threadIdx.y;
int w = blockDim.y;
float4 texval = tex2D(mtx_tex, row, col);
dest[(row*w+col)*CHANNELS + 0] = texval.x;
dest[(row*w+col)*CHANNELS + 1] = texval.y;
dest[(row*w+col)*CHANNELS + 2] = texval.z;
dest[(row*w+col)*CHANNELS + 3] = texval.w;
}
""")

copy_texture = mod.get_function("copy_texture")
mtx_tex = mod.get_texref("mtx_tex")

shape = (5, 6)
channels = 4
a = np.asarray(
np.random.randn(*((channels,)+shape)),
dtype=np.float32, order="F")
drv.bind_array_to_texref(
drv.make_multichannel_2d_array(a, order="F"), mtx_tex)
test_driver.py:261:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

ndarray = array([[[-0.87390321, 0.78430617, 0.80751866, 0.0188568 ,
0.31150779,
... -1.94733143, -1.69470119, 0.88077658,
0.6898424 ]]], dtype=float32)
order = 'F'

def make_multichannel_2d_array(ndarray, order):
"""Channel count has to be the first dimension of the C{ndarray}."""

descr = ArrayDescriptor()

if order.upper() == "C":
h, w, num_channels = ndarray.shape
stride = 0
elif order.upper() == "F":
num_channels, w, h = ndarray.shape
stride = 2
else:
raise LogicError("order must be either F or C")

descr.width = w
descr.height = h
descr.format = dtype_to_array_format(ndarray.dtype)
descr.num_channels = num_channels

ary = Array(descr)

copy = Memcpy2D()
copy.set_src_host(ndarray)
E TypeError: 'numpy.ndarray' does not have the buffer interface

/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/driver.py:744:
TypeError
_____________________________________________ TestDriver.test_2d_texture
______________________________________________

args = (<test_driver.TestDriver instance at 0x2478bd8>,), kwargs = {}
pycuda = <module 'pycuda' from
'/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/__init__.pyc'>
ctx = <pycuda._driver.Context object at 0x24b25f0>, clear_context_caches =
<function clear_context_caches at 0x1af6ed8>
collect = <built-in function collect>

def f(*args, **kwargs):
import pycuda.driver
# appears to be idempotent, i.e. no harm in calling it more than once
pycuda.driver.init()

ctx = make_default_context()
try:
assert isinstance(ctx.get_device().name(), str)
assert isinstance(ctx.get_device().compute_capability(), tuple)
assert isinstance(ctx.get_device().get_attributes(), dict)
inner_f(*args, **kwargs)
/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/tools.py:453:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <test_driver.TestDriver instance at 0x2478bd8>

@mark_cuda_test
def test_2d_texture(self):
mod = SourceModule("""
texture<float, 2, cudaReadModeElementType> mtx_tex;

__global__ void copy_texture(float *dest)
{
int row = threadIdx.x;
int col = threadIdx.y;
int w = blockDim.y;
dest[row*w+col] = tex2D(mtx_tex, row, col);
}
""")

copy_texture = mod.get_function("copy_texture")
mtx_tex = mod.get_texref("mtx_tex")

shape = (3, 4)
a = np.random.randn(*shape).astype(np.float32)
drv.matrix_to_texref(a, mtx_tex, order="F")
test_driver.py:188:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

matrix = array([[ 1.00075412, -1.55745828, -0.64481455, 1.14575994],
[-3.110294...],
[-2.00341797, 0.95837516, 0.69810289, -0.08028961]], dtype=float32)
texref = <pycuda._driver.TextureReference object at 0x24b0910>, order = 'F'
bind_array_to_texref(matrix_to_array(matrix, order), texref)
/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/driver.py:764:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

matrix = array([[ 1.00075412, -1.55745828, -0.64481455, 1.14575994],
[-3.110294...],
[-2.00341797, 0.95837516, 0.69810289, -0.08028961]], dtype=float32)
order = 'F', allow_double_hack = False

def matrix_to_array(matrix, order, allow_double_hack=False):
if order.upper() == "C":
h, w = matrix.shape
stride = 0
elif order.upper() == "F":
w, h = matrix.shape
stride = -1
else:
raise LogicError("order must be either F or C")

matrix = np.asarray(matrix, order=order)
descr = ArrayDescriptor()

descr.width = w
descr.height = h

if matrix.dtype == np.float64 and allow_double_hack:
descr.format = array_format.SIGNED_INT32
descr.num_channels = 2
else:
descr.format = dtype_to_array_format(matrix.dtype)
descr.num_channels = 1

ary = Array(descr)

copy = Memcpy2D()
copy.set_src_host(matrix)
E TypeError: 'numpy.ndarray' does not have the buffer interface

/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/driver.py:712:
TypeError
_____________________________________________ TestDriver.test_fp_textures
_____________________________________________

args = (<test_driver.TestDriver instance at 0x2582b00>,), kwargs = {}
pycuda = <module 'pycuda' from
'/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/__init__.pyc'>
ctx = <pycuda._driver.Context object at 0x24b2500>, clear_context_caches =
<function clear_context_caches at 0x1af6ed8>
collect = <built-in function collect>

def f(*args, **kwargs):
import pycuda.driver
# appears to be idempotent, i.e. no harm in calling it more than once
pycuda.driver.init()

ctx = make_default_context()
try:
assert isinstance(ctx.get_device().name(), str)
assert isinstance(ctx.get_device().compute_capability(), tuple)
assert isinstance(ctx.get_device().get_attributes(), dict)
inner_f(*args, **kwargs)
/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/tools.py:453:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <test_driver.TestDriver instance at 0x2582b00>

@mark_cuda_test
def test_fp_textures(self):
if drv.Context.get_device().compute_capability() < (1, 3):
return

for tp in [np.float32, np.float64]:
from pycuda.tools import dtype_to_ctype

tp_cstr = dtype_to_ctype(tp)
mod = SourceModule("""
#include <pycuda-helpers.hpp>

texture<fp_tex_%(tp)s, 1, cudaReadModeElementType> my_tex;

__global__ void copy_texture(%(tp)s *dest)
{
int i = threadIdx.x;
dest[i] = fp_tex1Dfetch(my_tex, i);
}
""" % {"tp": tp_cstr})

copy_texture = mod.get_function("copy_texture")
my_tex = mod.get_texref("my_tex")

import pycuda.gpuarray as gpuarray

shape = (384,)
a = np.random.randn(*shape).astype(tp)
a_gpu = gpuarray.to_gpu(a)
test_driver.py:522:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

ary = array([ 4.60776985e-01, 7.17113495e-01, 1.01722872e+00,
-1.146432...1,
2.04511788e-02, 1.11698084e-01, -2.13677096e+00], dtype=float32)
allocator = <Boost.Python.function object at 0x1a8f7c0>

def to_gpu(ary, allocator=drv.mem_alloc):
"""converts a numpy array to a GPUArray"""
result = GPUArray(ary.shape, ary.dtype, allocator,
strides=ary.strides)
result.set(ary)
/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/gpuarray.py:913:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <[TypeError("'numpy.ndarray' does not have the buffer interface")
raised in repr()] SafeRepr object at 0x2300dd0>
ary = array([ 4.60776985e-01, 7.17113495e-01, 1.01722872e+00,
-1.146432...1,
2.04511788e-02, 1.11698084e-01, -2.13677096e+00], dtype=float32)

def set(self, ary):
assert ary.size == self.size
assert ary.dtype == self.dtype
if ary.strides != self.strides:
from warnings import warn
warn("Setting array from one with different strides/storage order. "
"This will cease to work in 2013.x.",
stacklevel=2)

assert self.flags.forc
drv.memcpy_htod(self.gpudata, ary)
E TypeError: 'numpy.ndarray' does not have the buffer interface

/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/gpuarray.py:228:
TypeError
____________________________________________ TestDriver.test_vector_types
_____________________________________________

args = (<test_driver.TestDriver instance at 0x25a02d8>,), kwargs = {}
pycuda = <module 'pycuda' from
'/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/__init__.pyc'>
ctx = <pycuda._driver.Context object at 0x24b29b0>, clear_context_caches =
<function clear_context_caches at 0x1af6ed8>
collect = <built-in function collect>

def f(*args, **kwargs):
import pycuda.driver
# appears to be idempotent, i.e. no harm in calling it more than once
pycuda.driver.init()

ctx = make_default_context()
try:
assert isinstance(ctx.get_device().name(), str)
assert isinstance(ctx.get_device().compute_capability(), tuple)
assert isinstance(ctx.get_device().get_attributes(), dict)
inner_f(*args, **kwargs)
/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/tools.py:453:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <test_driver.TestDriver instance at 0x25a02d8>

@mark_cuda_test
def test_vector_types(self):
mod = SourceModule("""
__global__ void set_them(float3 *dest, float3 x)
{
const int i = threadIdx.x;
dest[i] = x;
}
""")

set_them = mod.get_function("set_them")
a = gpuarray.vec.make_float3(1, 2, 3)
dest = np.empty((400), gpuarray.vec.float3)
set_them(drv.Out(dest), a, block=(400,1,1))
test_driver.py:98:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

func = <pycuda._driver.Function object at 0x24b27d0>
args = (<pycuda.driver.Out object at 0x1e6fcd0>, array((1.0, 2.0, 3.0),
dtype=[('x', '<f4'), ('y', '<f4'), ('z', '<f4')]))
kwargs = {}, grid = (1, 1), stream = None, block = (400, 1, 1), shared = 0,
texrefs = [], time_kernel = False
handlers = [<pycuda.driver.Out object at 0x1e6fcd0>]
arg_buf = '\x00\x00!\x00\x00\x00\x00\x00\x00\x00\x80?\x00\x00\x00@\x00\x00@@'
handler = <pycuda.driver.Out object at 0x1e6fcd0>

def function_call(func, *args, **kwargs):
grid = kwargs.pop("grid", (1, 1))
stream = kwargs.pop("stream", None)
block = kwargs.pop("block", None)
shared = kwargs.pop("shared", 0)
texrefs = kwargs.pop("texrefs", [])
time_kernel = kwargs.pop("time_kernel", False)

if kwargs:
raise ValueError(
"extra keyword arguments: %s"
% (",".join(kwargs.iterkeys())))

if block is None:
raise ValueError("must specify block size")

func._set_block_shape(*block)
handlers, arg_buf = _build_arg_buf(args)

for handler in handlers:
handler.pre_call(stream)

for texref in texrefs:
func.param_set_texref(texref)

post_handlers = [handler
for handler in handlers
if hasattr(handler, "post_call")]

if stream is None:
if time_kernel:
Context.synchronize()

from time import time
start_time = time()

func._launch_kernel(grid, block, arg_buf, shared, None)

if post_handlers or time_kernel:
Context.synchronize()

if time_kernel:
run_time = time()-start_time
handler.post_call(stream)
/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/driver.py:405:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <pycuda.driver.Out object at 0x1e6fcd0>, stream = None

def post_call(self, stream):
if stream is not None:
memcpy_dtoh(self.array, self.get_device_alloc())
memcpy_dtoh(self.array, self.get_device_alloc())
E TypeError: 'numpy.ndarray' does not have the buffer interface

/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/driver.py:90:
TypeError
________________________________________
TestDriver.test_multiple_2d_textures
_________________________________________

args = (<test_driver.TestDriver instance at 0x280f950>,), kwargs = {}
pycuda = <module 'pycuda' from
'/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/__init__.pyc'>
ctx = <pycuda._driver.Context object at 0x24b2e60>, clear_context_caches =
<function clear_context_caches at 0x1af6ed8>
collect = <built-in function collect>

def f(*args, **kwargs):
import pycuda.driver
# appears to be idempotent, i.e. no harm in calling it more than once
pycuda.driver.init()

ctx = make_default_context()
try:
assert isinstance(ctx.get_device().name(), str)
assert isinstance(ctx.get_device().compute_capability(), tuple)
assert isinstance(ctx.get_device().get_attributes(), dict)
inner_f(*args, **kwargs)
/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/tools.py:453:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <test_driver.TestDriver instance at 0x280f950>

@mark_cuda_test
def test_multiple_2d_textures(self):
mod = SourceModule("""
texture<float, 2, cudaReadModeElementType> mtx_tex;
texture<float, 2, cudaReadModeElementType> mtx2_tex;

__global__ void copy_texture(float *dest)
{
int row = threadIdx.x;
int col = threadIdx.y;
int w = blockDim.y;
dest[row*w+col] =
tex2D(mtx_tex, row, col)
+
tex2D(mtx2_tex, row, col);
}
""")

copy_texture = mod.get_function("copy_texture")
mtx_tex = mod.get_texref("mtx_tex")
mtx2_tex = mod.get_texref("mtx2_tex")

shape = (3,4)
a = np.random.randn(*shape).astype(np.float32)
b = np.random.randn(*shape).astype(np.float32)
drv.matrix_to_texref(a, mtx_tex, order="F")
test_driver.py:223:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

matrix = array([[-0.60509264, -0.69680727, -1.962533 , 0.67714691],
[-1.094154...],
[ 0.40417606, -0.97287714, 0.10696917, -1.54577947]], dtype=float32)
texref = <pycuda._driver.TextureReference object at 0x24b0d00>, order = 'F'
bind_array_to_texref(matrix_to_array(matrix, order), texref)
/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/driver.py:764:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

matrix = array([[-0.60509264, -0.69680727, -1.962533 , 0.67714691],
[-1.094154...],
[ 0.40417606, -0.97287714, 0.10696917, -1.54577947]], dtype=float32)
order = 'F', allow_double_hack = False

def matrix_to_array(matrix, order, allow_double_hack=False):
if order.upper() == "C":
h, w = matrix.shape
stride = 0
elif order.upper() == "F":
w, h = matrix.shape
stride = -1
else:
raise LogicError("order must be either F or C")

matrix = np.asarray(matrix, order=order)
descr = ArrayDescriptor()

descr.width = w
descr.height = h

if matrix.dtype == np.float64 and allow_double_hack:
descr.format = array_format.SIGNED_INT32
descr.num_channels = 2
else:
descr.format = dtype_to_array_format(matrix.dtype)
descr.num_channels = 1

ary = Array(descr)

copy = Memcpy2D()
copy.set_src_host(matrix)
E TypeError: 'numpy.ndarray' does not have the buffer interface

/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/driver.py:712:
TypeError
___________________________________________ TestDriver.test_streamed_kernel
___________________________________________

args = (<test_driver.TestDriver instance at 0x23007e8>,), kwargs = {}
pycuda = <module 'pycuda' from
'/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/__init__.pyc'>
ctx = <pycuda._driver.Context object at 0x24b2e60>, clear_context_caches =
<function clear_context_caches at 0x1af6ed8>
collect = <built-in function collect>

def f(*args, **kwargs):
import pycuda.driver
# appears to be idempotent, i.e. no harm in calling it more than once
pycuda.driver.init()

ctx = make_default_context()
try:
assert isinstance(ctx.get_device().name(), str)
assert isinstance(ctx.get_device().compute_capability(), tuple)
assert isinstance(ctx.get_device().get_attributes(), dict)
inner_f(*args, **kwargs)
/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/tools.py:453:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <test_driver.TestDriver instance at 0x23007e8>

@mark_cuda_test
def test_streamed_kernel(self):
# this differs from the "simple_kernel" case in that *all* computation
# and data copying is asynchronous. Observe how this necessitates the
# use of page-locked memory.

mod = SourceModule("""
__global__ void multiply_them(float *dest, float *a, float *b)
{
const int i = threadIdx.x*blockDim.y + threadIdx.y;
dest[i] = a[i] * b[i];
}
""")

multiply_them = mod.get_function("multiply_them")

shape = (32, 8)
a = drv.pagelocked_zeros(shape, dtype=np.float32)
b = drv.pagelocked_zeros(shape, dtype=np.float32)
a[:] = np.random.randn(*shape)
b[:] = np.random.randn(*shape)

a_gpu = drv.mem_alloc(a.nbytes)
b_gpu = drv.mem_alloc(b.nbytes)

strm = drv.Stream()
drv.memcpy_htod_async(a_gpu, a, strm)
E TypeError: 'numpy.ndarray' does not have the buffer interface

test_driver.py:127: TypeError
_________________________________________
TestDriver.test_prepared_invocation
_________________________________________

args = (<test_driver.TestDriver instance at 0x24951b8>,), kwargs = {}
pycuda = <module 'pycuda' from
'/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/__init__.pyc'>
ctx = <pycuda._driver.Context object at 0x24a20c8>, clear_context_caches =
<function clear_context_caches at 0x1af6ed8>
collect = <built-in function collect>

def f(*args, **kwargs):
import pycuda.driver
# appears to be idempotent, i.e. no harm in calling it more than once
pycuda.driver.init()

ctx = make_default_context()
try:
assert isinstance(ctx.get_device().name(), str)
assert isinstance(ctx.get_device().compute_capability(), tuple)
assert isinstance(ctx.get_device().get_attributes(), dict)
inner_f(*args, **kwargs)
/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/tools.py:453:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <test_driver.TestDriver instance at 0x24951b8>

@mark_cuda_test
def test_prepared_invocation(self):
a = np.random.randn(4,4).astype(np.float32)
a_gpu = drv.mem_alloc(a.size * a.dtype.itemsize)
drv.memcpy_htod(a_gpu, a)
E TypeError: 'numpy.ndarray' does not have the buffer interface

test_driver.py:450: TypeError
___________________________________________ TestDriver.test_constant_memory
___________________________________________

args = (<test_driver.TestDriver instance at 0x24963f8>,), kwargs = {}
pycuda = <module 'pycuda' from
'/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/__init__.pyc'>
ctx = <pycuda._driver.Context object at 0x24a2500>, clear_context_caches =
<function clear_context_caches at 0x1af6ed8>
collect = <built-in function collect>

def f(*args, **kwargs):
import pycuda.driver
# appears to be idempotent, i.e. no harm in calling it more than once
pycuda.driver.init()

ctx = make_default_context()
try:
assert isinstance(ctx.get_device().name(), str)
assert isinstance(ctx.get_device().compute_capability(), tuple)
assert isinstance(ctx.get_device().get_attributes(), dict)
inner_f(*args, **kwargs)
/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/tools.py:453:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <test_driver.TestDriver instance at 0x24963f8>

@mark_cuda_test
def test_constant_memory(self):
# contributed by Andrew Wagner

module = SourceModule("""
__constant__ float const_array[32];

__global__ void copy_constant_into_global(float* global_result_array)
{
global_result_array[threadIdx.x] = const_array[threadIdx.x];
}
""")

copy_constant_into_global = module.get_function("copy_constant_into_global")
const_array, _ = module.get_global('const_array')

host_array = np.random.randint(0,255,(32,)).astype(np.float32)

global_result_array = drv.mem_alloc_like(host_array)
drv.memcpy_htod(const_array, host_array)
E TypeError: 'numpy.ndarray' does not have the buffer interface

test_driver.py:551: TypeError
_____________________________________ TestDriver.test_multichannel_linear_texture _____________________________________

args = (<test_driver.TestDriver instance at 0x1e61cb0>,), kwargs = {}
pycuda = <module 'pycuda' from
'/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/__init__.pyc'>
ctx = <pycuda._driver.Context object at 0x24a2398>, clear_context_caches =
<function clear_context_caches at 0x1af6ed8>
collect = <built-in function collect>

def f(*args, **kwargs):
import pycuda.driver
# appears to be idempotent, i.e. no harm in calling it more than once
pycuda.driver.init()

ctx = make_default_context()
try:
assert isinstance(ctx.get_device().name(), str)
assert isinstance(ctx.get_device().compute_capability(), tuple)
assert isinstance(ctx.get_device().get_attributes(), dict)
inner_f(*args, **kwargs)
/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/tools.py:453:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <test_driver.TestDriver instance at 0x1e61cb0>

@mark_cuda_test
def test_multichannel_linear_texture(self):
mod = SourceModule("""
#define CHANNELS 4
texture<float4, 1, cudaReadModeElementType> mtx_tex;

__global__ void copy_texture(float *dest)
{
int i = threadIdx.x+blockDim.x*threadIdx.y;
float4 texval = tex1Dfetch(mtx_tex, i);
dest[i*CHANNELS + 0] = texval.x;
dest[i*CHANNELS + 1] = texval.y;
dest[i*CHANNELS + 2] = texval.z;
dest[i*CHANNELS + 3] = texval.w;
}
""")

copy_texture = mod.get_function("copy_texture")
mtx_tex = mod.get_texref("mtx_tex")

shape = (16, 16)
channels = 4
a = np.random.randn(*(shape+(channels,))).astype(np.float32)
a_gpu = drv.to_device(a)
test_driver.py:297:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

bf_obj = array([[[ 0.06351238, -1.50826311, 0.99232352, -0.19848856],
[ 0.1630...
[-0.59885794, 0.58485585, 0.93704814, 0.02101008]]],
dtype=float32)

def to_device(bf_obj):
import sys
if sys.version_info >= (2, 7):
bf = memoryview(bf_obj).tobytes()
else:
bf = buffer(bf_obj)
result = mem_alloc(len(bf))
memcpy_htod(result, bf)
E TypeError: 'buffer' does not have the buffer interface

/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/driver.py:776:
TypeError
_____________________________________________ TestDriver.test_3d_texture ______________________________________________

args = (<test_driver.TestDriver instance at 0x2585ea8>,), kwargs = {}
pycuda = <module 'pycuda' from
'/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/__init__.pyc'>
ctx = <pycuda._driver.Context object at 0x24a2848>, clear_context_caches =
<function clear_context_caches at 0x1af6ed8>
collect = <built-in function collect>

def f(*args, **kwargs):
import pycuda.driver
# appears to be idempotent, i.e. no harm in calling it more than once
pycuda.driver.init()

ctx = make_default_context()
try:
assert isinstance(ctx.get_device().name(), str)
assert isinstance(ctx.get_device().compute_capability(), tuple)
assert isinstance(ctx.get_device().get_attributes(), dict)
inner_f(*args, **kwargs)
/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg/pycuda/tools.py:453:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <test_driver.TestDriver instance at 0x2585ea8>

@mark_cuda_test
def test_3d_texture(self):
# adapted from code by Nicolas Pinto
w = 2
h = 4
d = 8
shape = (w, h, d)

a = np.asarray(
np.random.randn(*shape),
dtype=np.float32, order="F")

descr = drv.ArrayDescriptor3D()
descr.width = w
descr.height = h
descr.depth = d
descr.format = drv.dtype_to_array_format(a.dtype)
descr.num_channels = 1
descr.flags = 0

ary = drv.Array(descr)

copy = drv.Memcpy3D()
copy.set_src_host(a)
E TypeError: 'numpy.ndarray' does not have the buffer interface

test_driver.py:412: TypeError
=================================== 14 failed, 8 passed, 1 skipped in 13.38 seconds ===================================
--
------------------------- make output -----------------

*** WARNING: nvcc not in path.
running install
running bdist_egg
running egg_info
writing requirements to pycuda.egg-info/requires.txt
writing pycuda.egg-info/PKG-INFO
writing top-level names to pycuda.egg-info/top_level.txt
writing dependency_links to pycuda.egg-info/dependency_links.txt
reading manifest file 'pycuda.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*.cpp' under directory
'bpl-subset/bpl_subset/boost'
warning: no files found matching '*.html' under directory
'bpl-subset/bpl_subset/boost'
warning: no files found matching '*.inl' under directory
'bpl-subset/bpl_subset/boost'
warning: no files found matching '*.txt' under directory
'bpl-subset/bpl_subset/boost'
warning: no files found matching '*.h' under directory
'bpl-subset/bpl_subset/libs'
warning: no files found matching '*.ipp' under directory
'bpl-subset/bpl_subset/libs'
warning: no files found matching '*.pl' under directory
'bpl-subset/bpl_subset/libs'
writing manifest file 'pycuda.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
running build_ext
creating build/bdist.linux-x86_64/egg
creating build/bdist.linux-x86_64/egg/pycuda
copying build/lib.linux-x86_64-2.6/pycuda/autoinit.py ->
build/bdist.linux-x86_64/egg/pycuda
copying build/lib.linux-x86_64-2.6/pycuda/compiler.py ->
build/bdist.linux-x86_64/egg/pycuda
copying build/lib.linux-x86_64-2.6/pycuda/tools.py ->
build/bdist.linux-x86_64/egg/pycuda
copying build/lib.linux-x86_64-2.6/pycuda/__init__.py ->
build/bdist.linux-x86_64/egg/pycuda
creating build/bdist.linux-x86_64/egg/pycuda/compyte
copying build/lib.linux-x86_64-2.6/pycuda/compyte/__init__.py ->
build/bdist.linux-x86_64/egg/pycuda/compyte
copying build/lib.linux-x86_64-2.6/pycuda/compyte/array.py ->
build/bdist.linux-x86_64/egg/pycuda/compyte
copying build/lib.linux-x86_64-2.6/pycuda/compyte/dtypes.py ->
build/bdist.linux-x86_64/egg/pycuda/compyte
copying build/lib.linux-x86_64-2.6/pycuda/gpuarray.py ->
build/bdist.linux-x86_64/egg/pycuda
copying build/lib.linux-x86_64-2.6/pycuda/_mymako.py ->
build/bdist.linux-x86_64/egg/pycuda
copying build/lib.linux-x86_64-2.6/pycuda/debug.py ->
build/bdist.linux-x86_64/egg/pycuda
creating build/bdist.linux-x86_64/egg/pycuda/sparse
copying build/lib.linux-x86_64-2.6/pycuda/sparse/__init__.py ->
build/bdist.linux-x86_64/egg/pycuda/sparse
copying build/lib.linux-x86_64-2.6/pycuda/sparse/pkt_build.py ->
build/bdist.linux-x86_64/egg/pycuda/sparse
copying build/lib.linux-x86_64-2.6/pycuda/sparse/cg.py ->
build/bdist.linux-x86_64/egg/pycuda/sparse
copying build/lib.linux-x86_64-2.6/pycuda/sparse/packeted.py ->
build/bdist.linux-x86_64/egg/pycuda/sparse
copying build/lib.linux-x86_64-2.6/pycuda/sparse/coordinate.py ->
build/bdist.linux-x86_64/egg/pycuda/sparse
copying build/lib.linux-x86_64-2.6/pycuda/sparse/operator.py ->
build/bdist.linux-x86_64/egg/pycuda/sparse
copying build/lib.linux-x86_64-2.6/pycuda/sparse/inner.py ->
build/bdist.linux-x86_64/egg/pycuda/sparse
copying build/lib.linux-x86_64-2.6/pycuda/_pvt_struct.so ->
build/bdist.linux-x86_64/egg/pycuda
copying build/lib.linux-x86_64-2.6/pycuda/cumath.py ->
build/bdist.linux-x86_64/egg/pycuda
copying build/lib.linux-x86_64-2.6/pycuda/_cluda.py ->
build/bdist.linux-x86_64/egg/pycuda
copying build/lib.linux-x86_64-2.6/pycuda/characterize.py ->
build/bdist.linux-x86_64/egg/pycuda
creating build/bdist.linux-x86_64/egg/pycuda/cuda
copying build/lib.linux-x86_64-2.6/pycuda/cuda/pycuda-complex.hpp ->
build/bdist.linux-x86_64/egg/pycuda/cuda
copying build/lib.linux-x86_64-2.6/pycuda/cuda/pycuda-complex-impl.hpp ->
build/bdist.linux-x86_64/egg/pycuda/cuda
copying build/lib.linux-x86_64-2.6/pycuda/cuda/pycuda-helpers.hpp ->
build/bdist.linux-x86_64/egg/pycuda/cuda
copying build/lib.linux-x86_64-2.6/pycuda/reduction.py ->
build/bdist.linux-x86_64/egg/pycuda
creating build/bdist.linux-x86_64/egg/pycuda/gl
copying build/lib.linux-x86_64-2.6/pycuda/gl/autoinit.py ->
build/bdist.linux-x86_64/egg/pycuda/gl
copying build/lib.linux-x86_64-2.6/pycuda/gl/__init__.py ->
build/bdist.linux-x86_64/egg/pycuda/gl
copying build/lib.linux-x86_64-2.6/pycuda/driver.py ->
build/bdist.linux-x86_64/egg/pycuda
copying build/lib.linux-x86_64-2.6/pycuda/curandom.py ->
build/bdist.linux-x86_64/egg/pycuda
copying build/lib.linux-x86_64-2.6/pycuda/elementwise.py ->
build/bdist.linux-x86_64/egg/pycuda
copying build/lib.linux-x86_64-2.6/pycuda/scan.py ->
build/bdist.linux-x86_64/egg/pycuda
copying build/lib.linux-x86_64-2.6/pycuda/_driver.so ->
build/bdist.linux-x86_64/egg/pycuda
byte-compiling build/bdist.linux-x86_64/egg/pycuda/autoinit.py to
autoinit.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/compiler.py to
compiler.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/tools.py to tools.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/__init__.py to
__init__.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/compyte/__init__.py to
__init__.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/compyte/array.py to
array.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/compyte/dtypes.py to
dtypes.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/gpuarray.py to
gpuarray.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/_mymako.py to _mymako.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/debug.py to debug.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/sparse/__init__.py to
__init__.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/sparse/pkt_build.py to
pkt_build.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/sparse/cg.py to cg.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/sparse/packeted.py to
packeted.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/sparse/coordinate.py to
coordinate.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/sparse/operator.py to
operator.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/sparse/inner.py to
inner.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/cumath.py to cumath.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/_cluda.py to _cluda.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/characterize.py to
characterize.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/reduction.py to
reduction.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/gl/autoinit.py to
autoinit.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/gl/__init__.py to
__init__.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/driver.py to driver.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/curandom.py to
curandom.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/elementwise.py to
elementwise.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/scan.py to scan.pyc
creating stub loader for pycuda/_driver.so
creating stub loader for pycuda/_pvt_struct.so
byte-compiling build/bdist.linux-x86_64/egg/pycuda/_driver.py to _driver.pyc
byte-compiling build/bdist.linux-x86_64/egg/pycuda/_pvt_struct.py to
_pvt_struct.pyc
creating build/bdist.linux-x86_64/egg/EGG-INFO
copying pycuda.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
copying pycuda.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying pycuda.egg-info/dependency_links.txt ->
build/bdist.linux-x86_64/egg/EGG-INFO
copying pycuda.egg-info/not-zip-safe ->
build/bdist.linux-x86_64/egg/EGG-INFO
copying pycuda.egg-info/requires.txt ->
build/bdist.linux-x86_64/egg/EGG-INFO
copying pycuda.egg-info/top_level.txt ->
build/bdist.linux-x86_64/egg/EGG-INFO
writing build/bdist.linux-x86_64/egg/EGG-INFO/native_libs.txt
creating 'dist/pycuda-2014.1-py2.6-linux-x86_64.egg' and adding
'build/bdist.linux-x86_64/egg' to it
removing 'build/bdist.linux-x86_64/egg' (and everything under it)
Processing pycuda-2014.1-py2.6-linux-x86_64.egg
removing
'/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg'
(and everything under it)
creating
/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg
Extracting pycuda-2014.1-py2.6-linux-x86_64.egg to
/usr/lib64/python2.6/site-packages
pycuda 2014.1 is already the active version in easy-install.pth

Installed
/usr/lib64/python2.6/site-packages/pycuda-2014.1-py2.6-linux-x86_64.egg
Processing dependencies for pycuda==2014.1
Searching for decorator==3.4.0
Best match: decorator 3.4.0
Processing decorator-3.4.0-py2.6.egg
decorator 3.4.0 is already the active version in easy-install.pth

Using /usr/lib64/python2.6/site-packages/decorator-3.4.0-py2.6.egg
Searching for pytest==2.3.5
Best match: pytest 2.3.5
Adding pytest 2.3.5 to easy-install.pth file
Installing py.test script to /usr/bin
Installing py.test-2.6 script to /usr/bin

Using /usr/lib/python2.6/site-packages
Searching for pytools==2014.3.4
Best match: pytools 2014.3.4
Processing pytools-2014.3.4-py2.6.egg
pytools 2014.3.4 is already the active version in easy-install.pth

Using /usr/lib64/python2.6/site-packages/pytools-2014.3.4-py2.6.egg
Searching for py==1.4.18
Best match: py 1.4.18
Adding py 1.4.18 to easy-install.pth file

Using /usr/lib/python2.6/site-packages
Searching for appdirs==1.4.0
Best match: appdirs 1.4.0
Processing appdirs-1.4.0-py2.6.egg
appdirs 1.4.0 is already the active version in easy-install.pth

Using /usr/lib64/python2.6/site-packages/appdirs-1.4.0-py2.6.egg
Finished processing dependencies for pycuda==2014.1


******************************************************************************************

*Mike McFarlane*

***@iproov.com

+44 7557 780175


******************************************************************************************
Andreas Kloeckner
2014-11-27 17:40:24 UTC
Permalink
Post by Mike McFarlane
Hi
I've installed pycuda following
http://wiki.tiker.net/PyCuda/Installation/Linux
When I try to run test/test_driver.py it fails many tests, mainly with
'TypeError: 'numpy.ndarray' does not have the buffer interface'. The output
is below for test_driver.py and the initial make.
Can anyone explain what is wrong please?
Thanks, and apologies if this mailing list isn't the right place for such
Qs.
It is. What's your version of numpy?

Andreas
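(The version question is the crux: numpy gained PEP 3118 buffer-interface support in release 1.5, which is exactly what `memcpy_htod` needs and what numpy 1.4.1 lacks. A quick check of the installed numpy, assuming only numpy itself is available:)

```python
import numpy as np

# Print the installed numpy version.
print(np.__version__)

# On numpy >= 1.5, ndarrays implement the PEP 3118 buffer protocol, so
# memoryview() succeeds; on 1.4.x it raises the same TypeError seen in
# the test output above ("does not have the buffer interface").
a = np.random.randn(4, 4).astype(np.float32)
mv = memoryview(a)
print(mv.nbytes == a.nbytes)  # True when the buffer interface works
```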
Andreas Kloeckner
2014-11-28 14:43:33 UTC
Permalink
Mike,
Post by Mike McFarlane
I'm using Numpy 1.4.1. Thanks for your help.
First, please keep the list cc'd. That makes answers searchable.

Second, could you try upgrading that? You don't even need to mess with
your system to do so--just use a virtualenv.

Andreas
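(The virtualenv route can be sketched as follows; the environment name is illustrative, and `python3 -m venv` stands in for the virtualenv tool mentioned above:)

```shell
# Create an isolated environment so the system numpy 1.4.1 stays untouched.
python3 -m venv pycuda-env
. pycuda-env/bin/activate

# Inside the environment, install a current numpy (and then rebuild/install
# pycuda against it, per the wiki instructions).
pip install --upgrade numpy
python -c "import numpy; print(numpy.__version__)"
```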
Mike McFarlane
2014-11-28 16:35:54 UTC
Permalink
Mike,
I'm using Numpy 1.4.1. Thanks for your help.
Post by Andreas Kloeckner
First, please keep the list cc'd. That makes answers searchable.
Of course.
Post by Andreas Kloeckner
Second, could you try upgrading that? You don't even need to mess with
your system to do so--just use a virtualenv.
Ok, I will try that next week when I am back in the office.
Mike McFarlane
2014-12-08 17:17:16 UTC
Permalink
Hi Andreas

Sorry for the slow reply, had to do some changes on the server for other
stuff.

Installed virtualenv as I should have in the first place, got the most up
to date numpy as you suggested and all ran fine.

Thanks
Post by Mike McFarlane
Mike,
I'm using Numpy 1.4.1. Thanks for your help.
Post by Andreas Kloeckner
First, please keep the list cc'd. That makes answers searchable.
Of course.
Post by Andreas Kloeckner
Second, could you try upgrading that? You don't even need to mess with
your system to do so--just use a virtualenv.
Ok, I will try that next week when I am back in the office.