GDB Tips and Tricks #5: The Display Command

One of the cool things about debugging with IDEs is that they typically give you a nice mechanism for watching the changing state of variables as you step through code. In Visual Studio, for example, you can right-click on a variable name and select “Add Watch” from the menu. The variable name and its current value will be shown in a little “Watch” window. You can watch as many variables as you have the resources and patience for. As you step through the code, anytime the value of a watched variable changes, that change is reflected in the Watch window.

Can we do something similar in gdb? Absolutely.

The command we’re interested in is display. When gdb is told to display a variable, it’ll report that variable’s current value every time program execution pauses (e.g., as you step through the code).

Let’s see an example using the following snippet of code.

// demo.cpp
int main()
{
    int a = 1;
    int b = 2;
    int c = 3;
 
    a = a + 1;
    b += a;
    c = a * b + c;
 
    return 0;
}

First we compile and then launch gdb.

skirk@dormouse:~$ g++ -g ./demo.cpp -o demo
skirk@dormouse:~$ gdb ./demo
GNU gdb (GDB) 8.0.1
Copyright (C) 2017 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-pc-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /home/skirk/demo...done.
(gdb)

Let’s now run our app, stopping at main().

(gdb) start
Temporary breakpoint 1 at 0x4004bb: file ./demo.cpp, line 3.
Starting program: /home/skirk/demo 
 
Temporary breakpoint 1, main () at ./demo.cpp:3
3	    int a = 1;

Let’s say at this point we want to display the values of variables a, b, and c as we step through the code. We can issue the display command like so.

(gdb) display a
1: a = 32767
(gdb) display b
2: b = 0
(gdb) display c
3: c = 0

After each display is executed, gdb shows the current value for the variable specified. In this example, our variables have bogus values because they haven’t been initialized yet. Let’s now step through the code and see what display does for us.

(gdb) n
4	    int b = 2;
1: a = 1
2: b = 0
3: c = 0
(gdb) n
5	    int c = 3;
1: a = 1
2: b = 2
3: c = 0
(gdb) n
7	    a = a + 1;
1: a = 1
2: b = 2
3: c = 3
(gdb) n
8	    b += a;
1: a = 2
2: b = 2
3: c = 3
(gdb) n
9	    c = a * b + c;
1: a = 2
2: b = 4
3: c = 3
(gdb) n
11	    return 0;
1: a = 2
2: b = 4
3: c = 11

Every time we step through the code, our program execution pauses and the current values of the variables we asked gdb to display are shown. As you can imagine, this can save a tremendous amount of time over, say, repeatedly using a step command followed by a print command.

It’s worth noting that values will only be displayed for variables that are currently in scope. If the variables we’re interested in are local to a function that at some point returns, they’ll no longer be displayed once they go out of scope. However, gdb doesn’t forget about them; if you later step into that function again, those variables will show up in the display output once more.

When you’re no longer interested in a given variable, you can issue the undisplay command. The gotcha here is that undisplay doesn’t operate on variable names. It operates on display numbers. “Where is the display number?” you ask. It’s the number at the start of each “variable = value” line in the display output. In our example above, the display output for our variable c is “3: c = 11”. See the 3 before the colon? It’s not just there to pretty up the output. That’s the display number assigned to that particular variable.

You can undisplay a single display number like so.

(gdb) undisplay 1

You can also undisplay multiple display numbers at once.

(gdb) undisplay 2 3

Note that the display command shouldn’t be confused with the watch command, which serves a related purpose. The watch command works more like a smart breakpoint (these are actually called watchpoints) in that it stops program execution and displays a given variable’s value only when the value changes. The display command provides a continuous display of variables and doesn’t affect program execution at all.
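
For comparison, here’s roughly what a watchpoint on c would look like in the demo session above. Treat this as an illustrative sketch rather than a verbatim transcript; the watchpoint number and the exact output will vary with your setup.

(gdb) watch c
Hardware watchpoint 2: c
(gdb) continue
Continuing.
 
Hardware watchpoint 2: c
 
Old value = 3
New value = 11
main () at ./demo.cpp:11
11	    return 0;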

C++: std::shared_ptr<void>

I recently came across a use of std::shared_ptr that surprised me. It began with a line of code that looked like this…

std::shared_ptr<void> sp = std::make_shared<SomeType>();

This completely blew my mind. “You can’t parameterize shared_ptr with void! There’s no way that’ll work!” I proclaimed loudly. I explained that the shared_ptr would try to delete a void pointer and bad things would happen. The standard says this about deleting a void pointer…

C++17 Standard Working Draft, Section 8.3.5 [expr.delete]: ...an object cannot be deleted using a pointer of type void* because void is not an object type.

You can’t argue with that, right? Obviously, the author of the aforementioned line of code had lost their marbles.

Or had they? The thing was, this line of code actually worked. Was it accidental? Was it a quirk of the compiler? Hmmmm… something else was going on here.

It turned out that std::shared_ptr had a few tricks up its sleeve that I was completely unaware of.

Why It Works

When the default deleter is created for a shared_ptr, it captures the type of the managed object independently of the type that shared_ptr is actually parameterized with. It’s as simple as that. The deleter knows about the actual type.
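
Don’t take my word for it. Here’s a minimal example (my own sketch, not from shared_ptr’s implementation) you can compile and run; even though the shared_ptr is only parameterized with void, the correct destructor still runs.

#include <iostream>
#include <memory>
 
struct Noisy
{
    ~Noisy() { std::cout << "~Noisy() called\n"; }
};
 
int main()
{
    {
        // The default deleter is created here, while the real type is still known.
        std::shared_ptr<void> sp = std::make_shared<Noisy>();
    }   // prints "~Noisy() called" -- the deleter remembered the type
 
    return 0;
}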

The deleter also happens to be type erased. (If type erasure is unfamiliar to you, check out the links I include at the bottom of this article.) This allows things like assignments, moves, etc. between shared_ptrs parameterized with the same managed object type but containing different deleter types (lambdas, function pointers, functors, etc.). As long as the managed object type is the same, two shared_ptrs are considered to be of the same type and you can assign one to the other with no problem.
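
As a quick illustration of that last point, here’s a small sketch (Widget is just a stand-in type I made up): both pointers have the type std::shared_ptr<Widget> even though they were constructed with different deleters.

#include <memory>
 
struct Widget {};
 
int main()
{
    // One shared_ptr with the default deleter...
    std::shared_ptr<Widget> a = std::make_shared<Widget>();
 
    // ...and one with a lambda deleter. The deleter type is erased,
    // so both are plain std::shared_ptr<Widget>.
    std::shared_ptr<Widget> b(new Widget, [](Widget *p) { delete p; });
 
    a = b;  // fine: same managed object type, different deleters
 
    return 0;
}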

Let’s look at a very simple example that captures the spirit of what’s happening under the hood of shared_ptr. We’ll create our own scoped pointer that allows for custom deleters.

Let’s first stub out a basic naive ScopedPtr class.

template<typename T> 
class ScopedPtr
{
public:
 
    ScopedPtr(T *pT, SomeDeleterType deleter) : 
        m_pT(pT), m_deleter(deleter) {}
 
    ~ScopedPtr() 
    { 
        m_deleter(m_pT);
    }
 
    T * get() { return m_pT; }
 
private:
 
    T *m_pT; // Managed object
 
    SomeDeleterType m_deleter;
 
    ScopedPtr(const ScopedPtr &) = delete;
    ScopedPtr & operator=(const ScopedPtr &) = delete;
};

Just like shared_ptr, this template is parameterized with the type of the object it’s intended to manage. You can pass a pointer to an object of that type into the constructor. The constructor also accepts a deleter parameter, but we don’t know what that’s going to look like just yet, so in this snippet I’ve simply called it SomeDeleterType as a placeholder (which means this version won’t compile as written).

There are two member variables in ScopedPtr – a pointer to the managed object and the deleter.

The destructor does as you might expect. It calls the deleter with the managed object.

Side note: I explicitly deleted the copy constructor and copy assignment operator here because I wanted to avoid introducing the problems associated with auto_ptr.

Now we just need to decide what we want the deleter to look like. We have three requirements: a) it must be type erased, b) it must be invokable, accepting a pointer to the managed object to delete, and c) it must delete that object using its correct type.

We could create a Deleter interface class with an operator() that accepts a void pointer. All deleter implementations, including the default deleter, would need to inherit from it. However, we want to support a variety of invokable types (lambdas, function pointers, function objects, etc.). And I don’t want to work too hard to make that happen. Fortunately, the standard library provides a super-easy mechanism to do this – std::function.

template<typename T> 
class ScopedPtr
{
public:
    using Deleter = std::function<void (void *)>;
 
    ScopedPtr(T *pT, Deleter deleter) : 
        m_pT(pT), m_deleter(deleter) {}
 
    ~ScopedPtr() 
    { 
        m_deleter(m_pT);
    }
 
    T * get() { return m_pT; }
 
private:
 
    T *m_pT; // Managed object
 
    Deleter m_deleter;
 
    ScopedPtr(const ScopedPtr &) = delete;
    ScopedPtr & operator=(const ScopedPtr &) = delete;
};

In this snippet, we’ve created a type alias to std::function<void (void*)> called Deleter. Now our constructor can accept any invokable type as a deleter.

For instance, this will do exactly what we expect it to.

ScopedPtr<int> ptr(new int(0), 
    [&](void *pObj) { delete static_cast<int *>(pObj); });

And so will this…

ScopedPtr<void> ptr(new int(0), 
    [&](void *pObj) { delete static_cast<int *>(pObj); });

Note that in the last example, even though we parameterize ScopedPtr with void, our deleter casts the managed object back to an int * before deleting it. This is the kind of thing we want the default deleter to do on our behalf.

We’re almost there. The one thing missing is the default deleter. This is where the magic needs to happen. Let’s first create a generic deleter template class.

template <typename ManagedObjectType>
class DefaultDeleter
{
public:
    void operator()(void *pV)
    {
        delete static_cast<ManagedObjectType *>(pV); 
    }
};

So far so good.

Now our ScopedPtr constructor could be augmented like this.

ScopedPtr(T *pT, Deleter deleter = DefaultDeleter<T>()) : 
    m_pT(pT), m_deleter(deleter) {}

However, there’s a problem here. The DefaultDeleter template is parameterized with the same type as ScopedPtr. If that type happens to be void, the deleter will also be parameterized with void and try to delete a void pointer. And that’s the very problem we’re trying to solve.

What we want is for the DefaultDeleter to be parameterized with the actual type of the managed object. It sounds trickier than it is. All we really need to do is make ScopedPtr’s constructor a template function and leverage a little type deduction.

template<typename T2>
ScopedPtr(T2 *pT, Deleter deleter = DefaultDeleter<T2>()) : 
    m_pT(pT), m_deleter(deleter) {}

When the constructor is called, T2 is deduced from whatever type the pT argument happens to point to. And that’s what the DefaultDeleter ends up being parameterized with as well. That can be different from the type the ScopedPtr class is parameterized with.

If we pass in an int pointer for pT, the default deleter will be of type DefaultDeleter<int>.

ScopedPtr’s member variable m_pT is of type T *. Remember, its type comes from the class template’s type parameter, not from the constructor. As long as a T2 * is implicitly convertible to a T * (and a void * can point to any object type), all is well.

For example…

ScopedPtr<void> ptr(new int(0));

In the above snippet, ScopedPtr’s member m_pT is of type void *, while m_deleter wraps a DefaultDeleter<int>. So the int will be properly deallocated when the ScopedPtr goes out of scope.

The complete implementation of our ScopedPtr looks like this…

#include <functional>
 
template<typename T> 
class ScopedPtr
{
 
private:
 
    template <typename ManagedObjectType>
    class DefaultDeleter
    {
    public:
        void operator()(void *pV)
        {
            delete static_cast<ManagedObjectType *>(pV); 
        }
    };
 
public:
 
    using Deleter = std::function<void (void *)>;
 
    template<typename T2>
    ScopedPtr(T2 *pT, Deleter deleter = DefaultDeleter<T2>()) : 
        m_pT(pT), m_deleter(deleter) {}
 
    ~ScopedPtr() 
    { 
        m_deleter(m_pT);
    }
 
    T * get() { return m_pT; }
 
private:
 
    T *m_pT; // Managed object
 
    Deleter m_deleter;
 
    ScopedPtr(const ScopedPtr &) = delete;
    ScopedPtr & operator=(const ScopedPtr &) = delete;
};

Our implementation of ScopedPtr is, of course, pretty barebones. If you want to support move semantics, for example, you’ll need to provide your own move constructor and move assignment operator. Compiler-generated versions wouldn’t do the right thing because m_pT wouldn’t be reset to nullptr in the moved-from object, so its deleter would still fire when that moved-from ScopedPtr is destroyed. That detail isn’t really the point of this discussion, though (a rough sketch follows below for the curious). All of this was just to illustrate the concept of what’s going on under the hood in shared_ptr.
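
Here’s what that move support might look like. This is my own sketch, not part of the original class above; these two members would live inside ScopedPtr alongside the existing ones. The important bit is leaving the moved-from object holding nullptr and a deleter that does nothing.

    ScopedPtr(ScopedPtr &&other) noexcept :
        m_pT(other.m_pT), m_deleter(std::move(other.m_deleter))
    {
        other.m_pT = nullptr;
        other.m_deleter = [](void *) {};  // the moved-from object must delete nothing
    }
 
    ScopedPtr & operator=(ScopedPtr &&other) noexcept
    {
        if (this != &other)
        {
            m_deleter(m_pT);  // release whatever we currently own
            m_pT = other.m_pT;
            m_deleter = std::move(other.m_deleter);
            other.m_pT = nullptr;
            other.m_deleter = [](void *) {};
        }
        return *this;
    }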

Ok, So What’s shared_ptr<void> Actually Good For?

Given that shared_ptr<void> doesn’t expose any type information, you might wonder what utility such a thing might have. There are a couple of scenarios that come to mind where I envision it could possibly maybe be useful.

For example, many C-style callback mechanisms take two pieces of information from the client – a callback function and a piece of userdata, which is often accepted/delivered as a void *. I can MAYBE imagine perhaps a more “modern” C++-ish approach could instead use a shared_ptr<void> (or a unique_ptr<void> with a custom deleter, since the default deleter can’t delete through a void *) to shuttle around such userdata. That’s not to say that the code wouldn’t smell. If there’s no transfer of ownership, you’d probably be better off using naked pointers or references.
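
To make that idea concrete, here’s a hypothetical sketch. EventSource, on_event, and ClientContext are names I made up for illustration; they don’t correspond to any real API.

#include <functional>
#include <memory>
 
// Hypothetical event source: instead of holding a raw void * userdata, it keeps
// the client's context alive via shared_ptr<void>, whatever that context really is.
struct EventSource
{
    std::function<void (void *)> on_event;
    std::shared_ptr<void> userdata;
 
    void fire() { if (on_event) on_event(userdata.get()); }
};
 
struct ClientContext { int eventCount = 0; };
 
int main()
{
    EventSource source;
 
    auto context = std::make_shared<ClientContext>();
    source.userdata = context;  // the type is "forgotten", the destructor is not
    source.on_event = [](void *p) { ++static_cast<ClientContext *>(p)->eventCount; };
 
    source.fire();  // the context is guaranteed to still be alive here
 
    return 0;
}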

The second scenario involves implementing a garbage collector of sorts. Imagine two threads – a producer and a consumer of various heterogeneous types of data. The producer thread is low priority and has the liberty to allocate memory whenever it sees fit. The consumer of the data is a high-priority, real-time thread (think real-time audio processing). These types of threads typically can’t afford any sort of waiting/locking, which includes memory allocation/deallocation. In that case, you might want a garbage collector of sorts that allows the deallocation of data to happen somewhere other than the high-priority thread. std::shared_ptr<void> could be useful for this.
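
Here’s a rough sketch of that idea; it’s my own illustration, not a production design. A real implementation would want a lock-free queue on the real-time side, but a mutex keeps the sketch short.

#include <memory>
#include <mutex>
#include <utility>
#include <vector>
 
// A "garbage bin" of sorts: the real-time thread retires objects here instead of
// deleting them, and a low-priority thread calls collect() to do the actual freeing.
class GarbageBin
{
public:
    void retire(std::shared_ptr<void> obj)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_retired.push_back(std::move(obj));
    }
 
    void collect()  // called from a low-priority thread
    {
        std::vector<std::shared_ptr<void>> doomed;
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            doomed.swap(m_retired);
        }
        // doomed goes out of scope here; each shared_ptr<void> runs the
        // correct deleter for whatever type it was created with.
    }
 
private:
    std::mutex m_mutex;
    std::vector<std::shared_ptr<void>> m_retired;
};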

Conclusion

If nothing else, std::shared_ptr<void> is an interesting case study of the type-erasure idiom. Learning how the deleter works has given me the confidence to use C++11 smart pointers when working with APIs that use types employing C-style subtyping (Win32 is chock full of these).

Usage of std::shared_ptr<void> is a bit of a code smell, I think. If you feel compelled to use it, I suggest perhaps asking yourself if there’s a better way to do whatever it is you’re trying to accomplish.

Type-Erasure Related Links

C++ type erasure
C++ ‘Type Erasure’ Explained
Andrzej’s C++ blog – Type erasure — Part I
Andrzej’s C++ blog – Type erasure — Part II
Andrzej’s C++ blog – Type erasure — Part III
Andrzej’s C++ blog – Type erasure — Part IV

GDB Tips and Tricks #4: Reverse Debugging

How many times have you stepped through code only to find that you’ve gone too far? Maybe you overstepped a critical point in the code’s execution. Or perhaps you stepped over a function you intended to step into.

Did you know that GDB can actually step backwards through code? That’s right. You can have GDB go back in time. This is often referred to as “reverse debugging.” But how does it work?

How It Works

Reverse debugging actually relies upon another gem in GDB’s bag-of-tricks called “process record and replay”. Rolls right off the tongue, doesn’t it? I won’t spend a lot of time going into the details of PRR here, but it’s quite powerful. The only PRR command we need to be concerned with in this discussion is “record”.

The “record” command begins recording the execution of your application, making notes of things like memory and register values. When you arrive at a point in your application at which you’d like to go backwards, you can issue the “reverse” versions of all the navigation commands you’re already familiar with. These include reverse-step (rs), reverse-next (rn), reverse-continue (rc), and reverse-finish (no short version 🙁 ). As you move backwards through code, gdb reverts the state of memory and registers, effectively unexecuting lines of code.

Let’s see an example using the code snippet below.

#include <iostream>
 
int sum(int a, int b)
{
    int result = a + b;
    return result;
}
 
int main(int argc, char **argv)
{
    int a = 12;
    int b = 13;
    int c = sum(a, b);
 
    std::cout << "The sum of " << a << " and " << b << " is " << c << "\n";
 
    return 0;
}

Compile this (don’t forget to compile it with the ‘-g’ flag!) and fire up gdb. Then set a breakpoint at main. We can’t begin recording program execution before it’s actually running. So we issue the run command, which will execute our application and promptly break at main.

(gdb) break main
Breakpoint 1 at 0x4007df: file gdbtest.cpp, line 11.
(gdb) run
Starting program: /home/skirk/gdbtest 
 
Breakpoint 1, main (argc=1, argv=0x7fffffffdf28) at gdbtest.cpp:11
11	    int a = 12;

At this point, we issue the “record” command to begin recording.

(gdb) record

Now let’s start stepping through the code.

(gdb) n
12	    int b = 13;
(gdb) n
13	    int c = sum(a, b);
(gdb) n
15	    std::cout << "The sum of " << a << " and " << b << " is " << c << "\n";

We’re now at the point just before the sum is written to stdout. What if I had intended to step into the sum function to see what it’s doing? Let’s back up to just before the sum function is called and then step into it.

(gdb) reverse-next
13	    int c = sum(a, b);
(gdb) s
sum (a=12, b=13) at gdbtest.cpp:5
5	    int result = a + b;

Now we appear to have gone back in time, which allows us to step into the sum function. At this point, we can inspect the values of parameters a and b as we normally would.

 
(gdb) print a
$1 = 12
(gdb) print b
$2 = 13

If we’re satisfied with the state of things, we can allow the program to continue on.

(gdb) c
Continuing.
 
No more reverse-execution history.
main (argc=1, argv=0x7fffffffdf28) at gdbtest.cpp:15
15	    std::cout << "The sum of " << a << " and " << b << " is " << c << "\n";

An interesting thing happened here. The program execution stopped at the point at which we previously started stepping backwards. When stepping through code using recorded history, “continue” will continue program execution until the history has been exhausted, unless, of course, it has some other reason to stop such as breakpoints and the like.

Let’s now stop the recording process using the “record stop” command and allow the program to continue execution until completion.

(gdb) record stop
Process record is stopped and all execution logs are deleted.
(gdb) c
Continuing.
The sum of 12 and 13 is 25
[Inferior 1 (process 10608) exited normally]
(gdb)

Gotchas

What if we hadn’t stopped recording? Well, it depends. If your version of the runtime executes instructions that aren’t supported by PRR, then you may encounter errors such as this…

Process record does not support instruction 0xc5 at address 0x7ffff7dee8b7.
Process record: failed to record execution log.
 
Program stopped.
_dl_runtime_resolve_avx () at ../sysdeps/x86_64/dl-trampoline.h:81
81	../sysdeps/x86_64/dl-trampoline.h: No such file or directory.

In this case, AVX instructions are being executed that process record doesn’t support. (In this particular case, there’s a workaround. We can export the environment variable LD_BIND_NOW=1, which resolves all symbols at load time. Doing so prevents the call to _dl_runtime_resolve_avx later.)

It’s also possible you might see something like…

The sum of 12 and 13 is 25
The next instruction is syscall exit_group.  It will make the program exit.  Do you want to stop the program?([y] or n)

Here you’re prompted as to whether or not you want to stop the program. Regardless of what you choose, you’re still able to navigate backwards in program execution. That’s right – you can reverse debug an application that has finished running.

Caveats

There are a few caveats when performing reverse debugging.

The first is that you can’t move backwards beyond the point at which you started recording. That should make sense.

Another caveat is that recording isn’t free or cheap. There’s a non-trivial amount of overhead involved in keeping track of registers and memory. So use record where it matters.

By default, there’s an upper limit on the number of instructions the record log can contain: 200,000 in the default record mode. This can be tweaked, however, including setting it to unlimited (which really just means it’ll record until it runs out of memory). See the GDB manual for the details; a quick example follows below.
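
If I remember the knob correctly, it lives under the “record full” settings (double-check the manual for your GDB version):

(gdb) set record full insn-number-max 1000000
(gdb) set record full insn-number-max unlimited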

You can always see what the current record instruction limit is by using the “info record” command.

Conclusion

Reverse debugging is a great tool to keep in your toolbox for those tricky bits of code. In the right contexts, it can save you lots of time. Use it judiciously, however. Recording everything in your application wastes memory, memory that your application may actually need. It can also be detrimental to your program’s execution speed.