C++: std::shared_ptr<void>

I recently came across a use of std::shared_ptr that surprised me. It began with a line of code that looked like this…

std::shared_ptr<void> sp = std::make_shared<SomeType>();

This completely blew my mind. “You can’t parameterize shared_ptr with void! There’s no way that’ll work!” I proclaimed loudly. I explained that the shared_ptr will try to delete a void pointer and bad things will happen. The standard says this about deleting a void pointer…

C++ 17 Standard Working Draft, Section 8.3.5: ...an object cannot be deleted using a pointer of type void* because void is not an object type.

You can’t argue with that, right? Obviously, the author of the aforementioned line of code had lost his marbles.

Or had they? The thing was, this line of code actually worked. Was it accidental? Was it a quirk of the compiler? Hmmmm… something else was going on here.

It turned out that std::shared_ptr had a few tricks up its sleeve that I was completely unaware of.

Why It Works

When the default deleter is created for a shared_ptr, it captures the type of the managed object independently of the type that shared_ptr is actually parameterized with. It’s as simple as that. The deleter knows about the actual type.

The deleter also happens to be type erased. (If type erasure is unfamiliar to you, check out the links I include at the bottom of this article.) This allows things like assignments, moves, etc. between shared_ptrs parameterized with the same managed object type but containing different deleter types (lambdas, function pointers, functors, etc.). As long as the managed object type is the same, two shared_ptrs are considered to be of the same type, and you can assign one to the other with no problem.

Let’s look at a very simple example that captures the spirit of what’s happening under the hood of shared_ptr. We’ll create our own scoped pointer that allows for custom deleters.

Let’s first stub out a basic naive ScopedPtr class.

template<typename T> 
class ScopedPtr
{
public:
 
    ScopedPtr(T *pT, SomeDeleterType deleter) : 
        m_pT(pT), m_deleter(deleter) {}
 
    ~ScopedPtr() 
    { 
        m_deleter(m_pT);
    }
 
    T * get() { return m_pT; }
 
private:
 
    T *m_pT; // Managed object
 
    SomeDeleterType m_deleter;
 
    ScopedPtr(const ScopedPtr &) = delete;
    ScopedPtr & operator=(const ScopedPtr &) = delete;
};

Just like shared_ptr, this template is parameterized with the type of the object it’s intended to manage. You can pass a pointer to an object of that type into the constructor. The constructor also accepts a deleter parameter. But we don’t know what that’s going to look like just yet. In this snippet of code, I simply call it SomeDeleterType.

There are two member variables in ScopedPtr – the object being managed and the deleter.

The destructor does as you might expect. It calls the deleter with the managed object.

Side note: I explicitly deleted the copy constructor and assignment operators here because I wanted to avoid introducing the problems associated with auto_ptr.

Now we just need to decide what we want the deleter to look like. We have three requirements: a) it must be type erased, b) it must be invokable, accepting a pointer to the managed object to delete, and c) it must delete that object using its correct type.

We could create a Deleter interface class with an operator() that accepts a void pointer. All deleter implementations, including the default deleter, would need to inherit from it. However, we want to support a variety of invokable types (lambdas, function pointers, function objects, etc.). And I don’t want to work too hard to make that happen. Fortunately, the standard library provides a super-easy mechanism to do this – std::function.

template<typename T> 
class ScopedPtr
{
public:
    using Deleter = std::function<void (void *)>;
 
    ScopedPtr(T *pT, Deleter deleter) : 
        m_pT(pT), m_deleter(deleter) {}
 
    ~ScopedPtr() 
    { 
        m_deleter(m_pT);
    }
 
    T * get() { return m_pT; }
 
private:
 
    T *m_pT; // Managed object
 
    Deleter m_deleter;
 
    ScopedPtr(const ScopedPtr &) = delete;
    ScopedPtr & operator=(const ScopedPtr &) = delete;
};

In this snippet, we’ve created a type alias to std::function<void (void*)> called Deleter. Now our constructor can accept any invokable type as a deleter.

For instance, this will do exactly what we expect it to.

ScopedPtr<int> ptr(new int(0), 
    [&](void *pObj) { delete static_cast<int *>(pObj); });

And so will this…

ScopedPtr<void> ptr(new int(0), 
    [&](void *pObj) { delete static_cast<int *>(pObj); });

Note that in the last example, even though we parameterize ScopedPtr with void, our deleter casts the managed object to an int * before deleting it. This is the kind of thing we want the default deleter to do on our behalf.

We’re almost there. The one thing missing is the default deleter. This is where the magic needs to happen. Let’s first create a generic deleter template class.

template <typename ManagedObjectType>
class DefaultDeleter
{
public:
    void operator()(void *pV)
    {
        delete static_cast<ManagedObjectType *>(pV); 
    }
};

So far so good.

Now our ScopedPtr constructor could be augmented like this.

ScopedPtr(T *pT, Deleter deleter = DefaultDeleter<T>()) : 
    m_pT(pT), m_deleter(deleter) {}

However, there’s a problem here. The DefaultDeleter template is parameterized with the same type as ScopedPtr. If that type happens to be void, the deleter will also be parameterized with void and try to delete a void pointer. And that’s the very problem we’re trying to solve.

What we want is for the DefaultDeleter to be parameterized with the actual type of the managed object. It sounds trickier than it is. All we really need to do is make ScopedPtr’s constructor a template function and leverage a little type deduction.

template<typename T2>
ScopedPtr(T2 *pT, Deleter deleter = DefaultDeleter<T2>()) : 
    m_pT(pT), m_deleter(deleter) {}

When the constructor is called, T2 is deduced to be whatever type the pT argument happens to be. And that’s what the DefaultDeleter ends up being parameterized with as well. That can be different from the type the ScopedPtr class is parameterized with.

If we pass in an int pointer for pT, the default deleter’s type will be of type DefaultDeleter<int>.

ScopedPtr’s member variable m_pT is of type T *. Remember, its type comes from the class’s template parameter. As long as the pointer passed into the constructor implicitly converts to T * (and any object pointer converts to void *), all is well.

For example…

ScopedPtr<void> ptr(new int(0));

In the above snippet, ScopedPtr’s member m_pT is of type void *, while m_deleter wraps a DefaultDeleter<int>. So the int will be properly deallocated when the ScopedPtr goes out of scope.

The complete implementation of our ScopedPtr looks like this…

template<typename T> 
class ScopedPtr
{
 
private:
 
    template <typename ManagedObjectType>
    class DefaultDeleter
    {
    public:
        void operator()(void *pV)
        {
            delete static_cast<ManagedObjectType *>(pV); 
        }
    };
 
public:
 
    using Deleter = std::function<void (void *)>;
 
    template<typename T2>
    ScopedPtr(T2 *pT, Deleter deleter = DefaultDeleter<T2>()) : 
        m_pT(pT), m_deleter(deleter) {}
 
    ~ScopedPtr() 
    { 
        m_deleter(m_pT);
    }
 
    T * get() { return m_pT; }
 
private:
 
    T *m_pT; // Managed object
 
    Deleter m_deleter;
 
    ScopedPtr(const ScopedPtr &) = delete;
    ScopedPtr & operator=(const ScopedPtr &) = delete;
};

Our implementation of ScopedPtr is, of course, pretty barebones. If you want to support move semantics, for example, you’ll need to provide your own move constructor and move assignment operator. The compiler-generated versions won’t work because they wouldn’t set m_pT to nullptr in the moved-from object, so the deleter would misbehave once the moved-from ScopedPtr is destroyed. But that detail isn’t relevant to this discussion. All of this was just to illustrate the concept of what’s going on under the hood in shared_ptr.

Ok, So What’s shared_ptr<void> Actually Good For?

Given that a shared_ptr<void>’s static type tells you nothing about the object it manages, you might wonder what utility such a thing might have. There are a few scenarios that come to mind where I can envision it possibly being useful.

For example, many C-style callback mechanisms take two pieces of information from the client – a callback function and a piece of userdata, which is often accepted/delivered as a void *. I can MAYBE imagine a more “modern” C++-ish approach that instead used a shared_ptr<void> (or a unique_ptr<void> with a custom deleter – unique_ptr’s default deleter won’t compile for void) to shuttle around such userdata. That’s not to say that the code wouldn’t smell. If there’s no transfer of ownership, you’d probably be better off using naked pointers or references.

The second scenario involves implementing a garbage collector of sorts. Imagine two threads – a producer and a consumer of various heterogeneous types of data. The producer thread is low priority and has the liberty to allocate memory whenever it sees fit. The consumer of the data is a high-priority, real-time thread (think real-time audio processing). Threads like these typically can’t afford any sort of waiting/locking, which includes memory allocation/deallocation. In that case, you might want to implement a garbage collector of sorts that allows the deallocation of data to happen somewhere other than the high-priority thread. std::shared_ptr<void> could be useful for this.

Conclusion

If nothing else, std::shared_ptr<void> is an interesting case study of the type-erasure idiom. Learning how the deleter works has given me the confidence to use C++11 smart pointers when working with APIs that use types employing C-style subtyping (Win32 is chock full of these).

Usage of std::shared_ptr<void> is a bit of a code smell, I think. If you feel compelled to use it, I suggest perhaps asking yourself if there’s a better way to do whatever it is you’re trying to accomplish.

Type-Erasure Related Links

C++ type erasure
C++ ‘Type Erasure’ Explained
Andrzej’s C++ blog – Type erasure — Part I
Andrzej’s C++ blog – Type erasure — Part II
Andrzej’s C++ blog – Type erasure — Part III
Andrzej’s C++ blog – Type erasure — Part IV

GDB Tips and Tricks #4: Reverse Debugging

How many times have you stepped through code only to find that you’ve gone too far? Maybe you overstepped a critical point in the code’s execution. Or perhaps you stepped over a function you intended to step into.

Did you know that GDB can actually step backwards through code? That’s right. You can have GDB go back in time. This is often referred to as “reverse debugging.” But how does it work?

How It Works

Reverse debugging actually relies upon another gem in GDB’s bag of tricks called “process record and replay” (PRR). Rolls right off the tongue, doesn’t it? I won’t spend a lot of time going into the details of PRR here, but it’s quite powerful. The only PRR command we need to be concerned with in this discussion is “record”.

The “record” command begins recording the execution of your application, making notes of things like memory and register values. When you arrive at a point in your application at which you’d like to go backwards, you can issue the “reverse” versions of all the navigation commands you’re already familiar with. These include reverse-step (rs), reverse-next (rn), reverse-continue (rc), and reverse-finish (no short version 🙁 ). As you move backwards through code, gdb reverts the state of memory and registers, effectively unexecuting lines of code.

Let’s see an example using the code snippet below.

#include <iostream>
 
int sum(int a, int b)
{
    int result = a + b;
    return result;
}
 
int main(int argc, char **argv)
{
    int a = 12;
    int b = 13;
    int c = sum(a, b);
 
    std::cout << "The sum of " << a << " and " << b << " is " << c << "\n";
 
    return 0;
}

Compile this (don’t forget to compile it with the ‘-g’ flag!) and fire up gdb. Then set a breakpoint at main. We can’t begin recording program execution before it’s actually running. So we issue the run command, which will execute our application and promptly break at main.

(gdb) break main
Breakpoint 1 at 0x4007df: file gdbtest.cpp, line 11.
(gdb) run
Starting program: /home/skirk/gdbtest 
 
Breakpoint 1, main (argc=1, argv=0x7fffffffdf28) at gdbtest.cpp:11
11	    int a = 12;

At this point, we issue the “record” command to begin recording.

(gdb) record

Now let’s start stepping through the code.

(gdb) n
12	    int b = 13;
(gdb) n
13	    int c = sum(a, b);
(gdb) n
15	    std::cout << "The sum of " << a << " and " << b << " is " << c << "\n";

We’re now at the point just before the sum is written to stdout. What if I had intended to step into the sum function to see what it’s doing? Let’s back up to just before the sum function is called and then step into it.

(gdb) reverse-next
13	    int c = sum(a, b);
(gdb) s
sum (a=12, b=13) at gdbtest.cpp:5
5	    int result = a + b;

Now we appear to have gone back in time. This allows us to step into the sum function. At this point, we can inspect the values of parameters a and b as we normally would.

 
(gdb) print a
$1 = 12
(gdb) print b
$2 = 13

If we’re satisfied with the state of things, we can allow the program to continue on.

(gdb) c
Continuing.
 
No more reverse-execution history.
main (argc=1, argv=0x7fffffffdf28) at gdbtest.cpp:15
15	    std::cout << "The sum of " << a << " and " << b << " is " << c << "\n";

An interesting thing happened here. The program execution stopped at the point at which we previously started stepping backwards. When stepping through code using recorded history, “continue” will continue program execution until the history has been exhausted, unless, of course, it has some other reason to stop such as breakpoints and the like.

Let’s now stop the recording process using the “record stop” command and allow the program to continue execution until completion.

(gdb) record stop
Process record is stopped and all execution logs are deleted.
(gdb) c
Continuing.
The sum of 12 and 13 is 25
[Inferior 1 (process 10608) exited normally]
(gdb)

Gotchas

What if we hadn’t stopped recording? Well, it depends. If your version of the runtime executes instructions that aren’t supported by PRR, then you may encounter errors such as this…

Process record does not support instruction 0xc5 at address 0x7ffff7dee8b7.
Process record: failed to record execution log.
 
Program stopped.
_dl_runtime_resolve_avx () at ../sysdeps/x86_64/dl-trampoline.h:81
81	../sysdeps/x86_64/dl-trampoline.h: No such file or directory.

In this case, AVX instructions are being executed which aren’t supported by the record process. (In this particular case, there’s a workaround. We can export the environment variable LD_BIND_NOW=1 which resolves all symbols at load time. Doing so actually prevents the call to _dl_runtime_resolve_avx later.)

It’s also possible you might see something like…

The sum of 12 and 13 is 25
The next instruction is syscall exit_group.  It will make the program exit.  Do you want to stop the program?([y] or n)

Here you’re prompted as to whether or not you want to stop the program. Regardless of what you choose, you’re still able to navigate backwards in program execution. That’s right – you can reverse debug an application that has finished running.

Caveats

There are a few caveats when performing reverse debugging.

The first is that you can’t move backwards beyond the point at which you started recording. That should make sense.

Another caveat is that recording isn’t free or cheap. There’s a non-trivial amount of overhead involved in keeping track of registers and memory. So use record where it matters.

By default, there’s an upper limit on the number of instructions the record log can contain. This is 200,000 in the default record mode. This can be tweaked however, including setting it to unlimited (which really just means it’ll record until it’s out of memory). See the GDB manual for more info on this.
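For instance, in recent GDB versions the relevant setting lives under “record full” (see the “Process Record and Replay” section of the GDB manual; the exact spelling may differ in older versions)…

```
(gdb) set record full insn-number-max unlimited
```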

You can always see what the current record instruction limit is by using the “info record” command.

Conclusion

Reverse debugging is a great tool to keep in your toolbox for those tricky bits of code. In the right contexts, it can save you lots of time. Use it judiciously, however. Recording everything in your application wastes memory, memory that your application may actually need. It can also be detrimental to your program’s execution speed.

GDB Tips and Tricks #3: Saving and Restoring Breakpoints Using Files

You spent the last 10 minutes littering your application with breakpoints in all the places where you think bad things might be happening. Then you run your application. Maybe you missed something. Maybe you didn’t. But now after a few minutes of debugging, both your application and GDB appear to be in a funky state. What you want to do is just quit out of everything and start fresh with a new session. But what about all those breakpoints? Typing in the “b” commands was such a chore. Even if you could remember where they all were, the mere thought of doing so is exhausting. And what if you need to start over yet again? Surely there’s a better way.

Did you know you could save your breakpoints to a file? All you need to do is issue the “save breakpoints” command like so…

(gdb) save breakpoints my.brk
Saved to file 'my.brk'.

This will save all types of breakpoints (regular breakpoints, watchpoints, and catchpoints) to the file my.brk. You can in fact name the file whatever you’d like.

Later when you’re ready to reload your breakpoints, you can issue the source command.

(gdb) source my.brk

There’s nothing special about this breakpoints file. It’s just a text file containing a list of gdb commands separated by newlines. In fact, not only can you amend it manually with more breakpoint commands, but you can add in just about any other gdb command in the process.
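For example, a my.brk saved from the earlier session, then amended by hand, might look something like this (contents are illustrative, not actual “save breakpoints” output)…

```
break main
break gdbtest.cpp:13
# any other gdb command works here too, e.g.:
set print pretty on
```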

It’s worth noting that the “source” command isn’t actually specific to breakpoints. It will execute anything in the specified file as written, which makes the “source” command a handy tool in its own right.