Boost.Python Frequently Asked Questions (FAQs)
Q: How can I wrap a function which takes a function pointer as an argument? Suppose I have code like:

    typedef boost::function<void (string s)> funcptr;

    void foo(funcptr fp)
    {
        fp("hello, world!");
    }

    BOOST_PYTHON_MODULE(test)
    {
        def("foo", foo);
    }

And then:
    >>> def hello(s):
    ...     print s
    ...
    >>> foo(hello)
    hello, world!

A: The short answer is: "you can't". This is not a Boost.Python limitation so much as a limitation of C++. The problem is that a Python function is actually data, and the only way of associating data with a C++ function pointer is to store it in a static variable of the function. The problem with that is that you can only associate one piece of data with every C++ function, and we have no way of compiling a new C++ function on-the-fly for every Python function you decide to pass to
foo
. In other words, this could work if the C++
function is always going to invoke the same Python
function, but you probably don't want that.
If you have the luxury of changing the C++ code you're
wrapping, pass it an object
instead and call that;
the overloaded function call operator will invoke the Python
function you pass it behind the object
.
For more perspective on the issue, see this posting.
Q: I'm getting the "attempt to return dangling reference" error. What am I doing wrong? The offending code looks like:

    period const& get_floating_frequency() const
    {
        return boost::python::call_method<period const&>(
            m_self, "get_floating_frequency");
    }

And you get:
    ReferenceError: Attempt to return dangling reference to object of type: class period
A: In this case, the Python method invoked by call_method
constructs a new Python object. You're trying to return a reference to a
C++ object (an instance of class period
) contained within
and owned by that Python object. Because the called method handed back a
brand new object, the only reference to it is held for the duration of
get_floating_frequency()
above. When the function returns,
the Python object will be destroyed, destroying the instance of
class period
, and leaving the returned reference dangling.
That's already undefined behavior, and if you try to do anything with
that reference you're likely to cause a crash. Boost.Python detects this
situation at runtime and helpfully throws an exception instead of letting
you do that.
Q: I have an object composed of 12 doubles. A const& to this object is returned by a member function of another class. From the viewpoint of using the returned object in Python I do not care if I get a copy or a reference to the returned object. In Boost.Python Version 2 I have the choice of using copy_const_reference or return_internal_reference. Are there considerations that would lead me to prefer one over the other, such as size of generated code or memory overhead?

A: copy_const_reference will make an instance with storage for one of your objects, size = base_size + 12 * sizeof(double). return_internal_reference will make an instance with storage for a pointer to one of your objects, size = base_size + sizeof(void*). However, it will also create a weak reference object which goes in the source object's weakreflist and a special callback object to manage the lifetime of the internally-referenced object. My guess? copy_const_reference is your friend here, resulting in less overall memory use and less fragmentation, also probably fewer total cycles.
Q: How can I wrap functions which take C++ containers as arguments?

A: Ralf W. Grosse-Kunstleve provides these notes:
1. Using the regular class_<> wrapper:

    class_<std::vector<double> >("std_vector_double")
        .def(...)
        ...
    ;

This can be moved to a template so that several types (double, int, long, etc.) can be wrapped with the same code. This technique is used in the file scitbx/include/scitbx/array_family/boost_python/flex_wrapper.h in the "scitbx" package. The file could easily be modified for wrapping std::vector<> instantiations.
This type of C++/Python binding is most suitable for containers that may contain a large number of elements (>10000).
2. Using custom rvalue converters, for functions that pass the container by value or by const-reference:

    void foo(std::vector<double> const& array); // pass by const-reference
    void foo(std::vector<double> array);        // pass by value

Some custom rvalue converters are implemented in the file scitbx/include/scitbx/boost_python/container_conversions.h. This code can be used to convert from C++ container types such as std::vector<> or std::list<> to Python tuples and vice versa. A few simple examples can be found in the file scitbx/array_family/boost_python/regression_test_module.cpp. Automatic C++ container <-> Python tuple conversions are most suitable for containers of moderate size. These converters generate significantly less object code compared to alternative 1 above.
It would also be useful to have "custom lvalue converters" such as std::vector<> <-> Python list. These converters would support the modification of the Python list from C++. For example:
C++:
    void foo(std::vector<double>& array)
    {
        for(std::size_t i = 0; i < array.size(); i++) {
            array[i] *= 2;
        }
    }

Python:

    >>> l = [1, 2, 3]
    >>> foo(l)
    >>> print l
    [2, 4, 6]

Custom lvalue converters require changes to the Boost.Python core library and are currently not available.
P.S.:
The "scitbx" files referenced above are available via anonymous CVS:
    cvs -d:pserver:anonymous@cvs.cctbx.sourceforge.net:/cvsroot/cctbx login
    cvs -d:pserver:anonymous@cvs.cctbx.sourceforge.net:/cvsroot/cctbx co scitbx
Q: I get the error "fatal error C1204: compiler limit: internal structure overflow" when compiling a large source file. What can I do?

A: You have two choices:
- Upgrade your compiler (preferred)
- Break your source file up into multiple translation units.
my_module.cpp:

    ...
    void more_of_my_module();

    BOOST_PYTHON_MODULE(my_module)
    {
        def("foo", foo);
        def("bar", bar);
        ...
        more_of_my_module();
    }

more_of_my_module.cpp:

    void more_of_my_module()
    {
        def("baz", baz);
        ...
    }

If you find that a class_<...> declaration can't fit in a single source file without triggering the error, you can always pass a reference to the class_ object to a function in another source file, and call some of its member functions (e.g. .def(...)) in the auxiliary source file:

more_of_my_class.cpp:

    void more_of_my_class(class_<my_class>& x)
    {
        x
            .def("baz", baz)
            .add_property("xx", &my_class::get_xx, &my_class::set_xx)
            ;
        ...
    }
Q: How do I debug my Python extensions?

A: Greg Burley gives the following answer for Unix GCC users:
Once you have created a Boost.Python extension for your C++ library or class, you may need to debug the code. After all, this is one of the reasons for wrapping the library in Python. An expected side-effect or benefit of using BPL is that debugging should be isolated to the C++ library that is under test, given that the Python code is minimal and boost::python either works or it doesn't. (i.e. while errors can occur when the wrapping method is invalid, most errors are caught by the compiler ;-)

The basic steps required to initiate a gdb session to debug a C++ library via Python are shown here. Note, however, that you should start the gdb session in the directory that contains your BPL my_ext.so module.
    (gdb) target exec python
    (gdb) run
    >>> from my_ext import *
    >>> [C-c]
    (gdb) break MyClass::MyBuggyFunction
    (gdb) cont
    >>> pyobj = MyClass()
    >>> pyobj.MyBuggyFunction()
    Breakpoint 1, MyClass::MyBuggyFunction ...
    Current language: auto; currently c++
    (gdb) do debugging stuff
Greg's approach works even better using Emacs' "gdb
"
command, since it will show you each line of source as you step through
it.
On Windows, my favorite debugging solution is the debugger that comes with Microsoft Visual C++ 7. This debugger seems to work with code generated by all versions of Microsoft and Metrowerks toolsets; it's rock solid and "just works" without requiring any special tricks from the user.
Unfortunately for Cygwin and MinGW users, as of this writing gdb on Windows has a very hard time dealing with shared libraries, which could make Greg's approach next to useless for you. My best advice for you is to use Metrowerks C++ for compiler conformance and Microsoft Visual Studio as a debugger when you need one.
If you use Boost.Build's boost-python-runtest rule, you can ask it to launch your debugger for you by adding "-sPYTHON_LAUNCH=debugger" to your bjam command-line:
    bjam -sTOOLS=metrowerks "-sPYTHON_LAUNCH=devenv /debugexe" test
    bjam -sTOOLS=gcc -sPYTHON_LAUNCH=gdb test

It can also be extremely useful to add the
-d+2
option when
you run your test, because Boost.Build will then show you the exact
commands it uses to invoke it. This will invariably involve setting up
PYTHONPATH and other important environment variables such as
LD_LIBRARY_PATH which may be needed by your debugger in order to get
things to work right.
Q: Why doesn't my *= operator work? I have exported my class to Python, with many overloaded operators. It works fine for me except the *= operator: it always tells me "can't multiply sequence with non int type". If I use p1.__imul__(p2) instead of p1 *= p2, it successfully executes my code. What's wrong?

A: There's nothing wrong with you. This is a bug in Python 2.2. You can see the same effect in pure Python (you can learn a lot about what's happening in Boost.Python by playing with new-style classes in pure Python).
    >>> class X(object):
    ...     def __imul__(self, x):
    ...         print 'imul'
    ...
    >>> x = X()
    >>> x *= 1

To cure this problem, all you need to do is upgrade your Python to version 2.2.1 or later.
Q: Does Boost.Python work on Mac OS X?

A: The short answer: as of January 2003, unfortunately not.
The longer answer: using Mac OS 10.2.3 with the December Developer's Kit, Python 2.3a1, and bjam's darwin-tools.jam, Boost.Python compiles fine, including the examples. However, there are problems at runtime (see http://mail.python.org/pipermail/c++-sig/2003-January/003267.html). Solutions are currently unknown.
It is known that under certain circumstances objects are double-destructed. See http://mail.python.org/pipermail/c++-sig/2003-January/003278.html for details. It is not clear however if this problem is related to the Boost.Python runtime issues.
Q: How can I find the existing PyObject that holds a C++ object? "I am wrapping a function that always returns a pointer to an already-held C++ object."

A: One way to do that is to hijack the mechanisms used for wrapping a class with virtual functions. If you make a wrapper class with an initial PyObject* constructor argument and store that PyObject* as "self", you can get back to it by casting down to that wrapper type in a thin wrapper function. For example:
    class X { X(int); virtual ~X(); ... };
    X* f();  // known to return Xs that are managed by Python objects

    // wrapping code

    struct X_wrap : X
    {
        X_wrap(PyObject* self, int v) : X(v), self(self) {}
        PyObject* self;
    };

    handle<> f_wrap()
    {
        X_wrap* xw = dynamic_cast<X_wrap*>(f());
        assert(xw != 0);
        return handle<>(borrowed(xw->self));
    }

    ...

    def("f", f_wrap);
    class_<X,X_wrap>("X", init<int>())
        ...
        ;

Of course, if X has no virtual functions you'll have to use
static_cast
instead of dynamic_cast
with no
runtime check that it's valid. This approach also only works if the
X
object was constructed from Python, because
X
s constructed from C++ are of course never
X_wrap
objects.
Another approach to this requires you to change your C++ code a bit; if that's an option for you, it might be a better way to go. When a shared_ptr<X>
is
converted from Python, the shared_ptr actually manages a reference to the
containing Python object. When a shared_ptr<X> is converted back to
Python, the library checks to see if it's one of those "Python object
managers" and if so just returns the original Python object. So you could
just write object(p)
to get the Python object back. To
exploit this you'd have to be able to change the C++ code you're wrapping
so that it deals with shared_ptr instead of raw pointers.
There are other approaches too. The functions that receive the Python object that you eventually want to return could be wrapped with a thin wrapper that records the correspondence between the object address and its containing Python object, and you could have your f_wrap function look in that mapping to get the Python object out.
Q: How can I accomplish transfer-of-ownership of a wrapped C++ object? Part of an API that I'm wrapping goes something like this:

    struct A {};
    struct B { void add( A* ); };

where B::add() takes ownership of the pointer passed to it. However:

    a = mod.A()
    b = mod.B()
    b.add( a )
    del a
    del b   # python interpreter crashes
            # later due to memory corruption.

Even binding the lifetime of a to b via with_custodian_and_ward doesn't prevent the Python object a from ultimately trying to delete the object it's pointing to. Is there a way to accomplish a 'transfer-of-ownership' of a wrapped C++ object?

--Bruce Lowery

A: Yes: make sure the C++ object is held by auto_ptr:

    class_<A, std::auto_ptr<A> >("A")
        ...
        ;

Then make a thin wrapper function which takes an auto_ptr parameter:
    void b_insert(B& b, std::auto_ptr<A> a)
    {
        b.add(a.get());
        a.release();
    }

Wrap that as B.add. Note that pointers returned via
manage_new_object
will also be held by auto_ptr
, so this transfer-of-ownership
will also work correctly.
Revised 18 March, 2003
© Copyright Dave Abrahams 2002-2003. All Rights Reserved.