c++11 - C++ memory management patterns for objects used in callback chains


A couple of codebases I use include classes that manually call new and delete on themselves in the following pattern:

class worker {
 public:
  static void dowork(argt arg, std::function<void()> done) {
    (new worker(std::move(arg), std::move(done)))->start();
  }

 private:
  worker(argt arg, std::function<void()> done)
      : arg_(std::move(arg)),
        done_(std::move(done)),
        latch_(2) {}  // The error-prone latch interface isn't the point of this question. :)

  void start() {
    async1(<args>, [=]() { this->method1(); });
  }
  void method1() {
    startparallel(<args>, [=]() { this->latch_.count_down(); });
    startparallel(<other_args>, [=]() { this->latch_.count_down(); });
    latch_.then([=]() { this->finish(); });
  }
  void finish() {
    done_();
    // Note the manual memory management!
    delete this;
  }

  argt arg_;
  std::function<void()> done_;
  latch latch_;
};
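The latch type is not shown in the question; as a point of reference, a minimal sketch of the count_down()/then() interface the code above relies on might look like the following. The names and the mutex-based synchronization are assumptions, not the original implementation:

```cpp
#include <functional>
#include <mutex>

// Hypothetical sketch of a latch with a then() continuation, as used above.
// A real implementation would need to pin down stronger thread-safety and
// lifetime guarantees; this only illustrates the interface.
class latch {
 public:
  explicit latch(int count) : count_(count) {}

  void count_down() {
    std::function<void()> cb;
    {
      std::lock_guard<std::mutex> lock(mu_);
      if (--count_ == 0) cb = std::move(then_);
    }
    if (cb) cb();  // fire the continuation once the count reaches zero
  }

  void then(std::function<void()> cb) {
    std::function<void()> ready;
    {
      std::lock_guard<std::mutex> lock(mu_);
      if (count_ == 0) ready = std::move(cb);  // already done: run immediately
      else then_ = std::move(cb);
    }
    if (ready) ready();
  }

 private:
  std::mutex mu_;
  int count_;
  std::function<void()> then_;
};
```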

Now, in modern C++, explicit delete is a code smell, as, to some extent, is delete this. However, I think this pattern (creating an object to represent a chunk of work managed by a callback chain) is fundamentally a good, or at least not a bad, idea.

So my question is: how should I rewrite instances of this pattern to encapsulate the memory management?

One option that I don't think is a good idea is storing the worker in a shared_ptr: fundamentally, the ownership is not shared here, so the overhead of reference counting is unnecessary. Furthermore, in order to keep a copy of the shared_ptr alive across the callbacks, I'd have to inherit from enable_shared_from_this, remember to call shared_from_this() outside the lambdas, and capture the resulting shared_ptr into the callbacks. If anyone ever wrote simple code using this directly, or called shared_from_this() inside a callback lambda, the object could be deleted early.
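For concreteness, here is a sketch of what that shared_ptr variant would look like, including the trap described above. This is an illustration only: async1, the global pending slot, and the int argument are stand-ins invented for the example, not code from the question:

```cpp
#include <functional>
#include <memory>

// Stand-in for a real async mechanism: stores the callback to run later.
std::function<void()> pending;
void async1(std::function<void()> cb) { pending = std::move(cb); }

class shared_worker : public std::enable_shared_from_this<shared_worker> {
 public:
  static void do_work(int arg, std::function<void()> done) {
    // Must be owned by a shared_ptr before shared_from_this() is legal.
    std::shared_ptr<shared_worker> self(
        new shared_worker(arg, std::move(done)));
    self->start();
  }

 private:
  shared_worker(int arg, std::function<void()> done)
      : arg_(arg), done_(std::move(done)) {}

  void start() {
    // Correct: capture a shared_ptr so the object outlives the callback.
    auto self = shared_from_this();
    async1([self]() { self->finish(); });

    // WRONG: capturing `this` instead would let the object be destroyed
    // before the callback runs:
    //   async1([this]() { finish(); });
  }

  void finish() { done_(); }

  int arg_;
  std::function<void()> done_;
};
```

Note how easy it is to write the broken `[this]` capture; that fragility is exactly the objection raised above.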

I agree that delete this is a code smell, and to a lesser extent so is delete on its own. But I think that here it is a natural part of continuation-passing style, which is itself (to me) something of a code smell.

The root problem is that the design of this API assumes unbounded control flow: it acknowledges that the caller is interested in what happens when the call completes, but it signals completion via an arbitrarily complex callback rather than by simply returning from a synchronous call. It's better to structure the code synchronously and let the caller determine the appropriate parallelization and memory-management regime:

class worker {
 public:
  void dowork(argt arg) {
    // async1 was a mistake; fix it later. For now, synchronize explicitly.
    latch async_done(1);
    async1(<args>, [&]() { async_done.count_down(); });
    async_done.await();

    latch parallel_done(2);
    runparallel([&]() { dostuff(<args>); parallel_done.count_down(); });
    runparallel([&]() { dostuff(<other_args>); parallel_done.count_down(); });
    parallel_done.await();
  }
};

On the caller side, it might look like this:

latch latch(tasks.size());
for (auto& task : tasks) {
  runparallel([=]() { dowork(<args>); latch.count_down(); });
}
latch.await();

where runparallel can use std::thread or whatever other mechanism you like for dispatching parallel events.

The advantage of this approach is that object lifetimes are simpler: the argt object lives exactly for the scope of the dowork call, and the arguments to dowork live as long as the closures containing them. This also makes it easier to add return values (such as error codes) to the dowork calls: the caller can just switch the latch to a thread-safe queue and read the results as they complete.
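A minimal sketch of such a thread-safe queue (a hypothetical result_queue invented for illustration, not an API from the answer): each task pushes its error code instead of counting down a latch, and the caller pops one result per task.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// Hypothetical thread-safe queue: workers push results (e.g. error codes),
// the caller blocks in pop() until a result is available.
template <typename T>
class result_queue {
 public:
  void push(T value) {
    {
      std::lock_guard<std::mutex> lock(mu_);
      items_.push(std::move(value));
    }
    cv_.notify_one();
  }

  T pop() {  // blocks until an item arrives
    std::unique_lock<std::mutex> lock(mu_);
    cv_.wait(lock, [this]() { return !items_.empty(); });
    T value = std::move(items_.front());
    items_.pop();
    return value;
  }

 private:
  std::mutex mu_;
  std::condition_variable cv_;
  std::queue<T> items_;
};
```

The caller-side loop then becomes: run each task with `results.push(dowork(<args>));` and afterwards call `results.pop()` tasks.size() times to collect the error codes.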

The disadvantage of this approach is that it requires actual threading, not just a boost::asio::io_service. (For example, runparallel calls within dowork() can't block waiting for runparallel calls from the caller's side to return.) You either have to structure your code into strictly hierarchical thread pools, or you have to allow a potentially unbounded number of threads.

