
Scripted Debug Using GDB To Find Memory Leak

June 29, 2011 by Louis Feng

I recently ran into some hard-to-find memory leaks. The program only runs on Linux, and the leaks were detected by a custom memory allocator (using an atomic counter for allocation/deallocation sizes). In single-threaded mode the program terminated without leaking, but running in multithreaded mode resulted in 1.7 to 4 KB of leaked memory. Well, it's not a lot of memory, but it's still bothersome, and it's also a sign of a potentially larger problem (threading errors). There are many tricks for finding memory leaks that have already been discussed elsewhere. For example, you could write a custom memory tracer by overloading the original new and delete. However, the program I was dealing with already had a custom memory allocator with special macro syntax, and I simply couldn't override the macro to do the memory allocation tracing.
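For illustration, that new/delete overloading trick might look something like the sketch below. This is only a minimal example with made-up names, not the custom allocator from my program: a small header in front of each block remembers its size so operator delete can subtract the right amount, and a non-zero counter at exit means something leaked.

// counting_new.cc -- minimal sketch of leak counting via replaced global new/delete
// (illustrative only, not the allocator discussed in this post)
#include <atomic>
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <new>

static std::atomic<std::size_t> g_liveBytes{0};  // bytes currently outstanding

void* operator new(std::size_t size) {
    // Reserve a max-aligned header slot so the user block stays properly aligned.
    void* block = std::malloc(sizeof(std::max_align_t) + size);
    if (!block) throw std::bad_alloc();
    *static_cast<std::size_t*>(block) = size;
    g_liveBytes.fetch_add(size, std::memory_order_relaxed);
    return static_cast<unsigned char*>(block) + sizeof(std::max_align_t);
}

void operator delete(void* ptr) noexcept {
    if (!ptr) return;
    void* block = static_cast<unsigned char*>(ptr) - sizeof(std::max_align_t);
    g_liveBytes.fetch_sub(*static_cast<std::size_t*>(block), std::memory_order_relaxed);
    std::free(block);
}

// Call after all threads have joined; the array forms (new[]/delete[]) would
// need the same treatment and are left out of this sketch.
void report_leaks() {
    std::fprintf(stderr, "still allocated: %zu bytes\n", g_liveBytes.load());
}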

So here are the steps I took to track down the memory leaks with the help of the diff tool and GDB.

Getting The Dump

First, I dumped out all the allocation and deallocation calls along with the memory sizes. For multithreaded output, you will probably find it difficult to produce something readable, because the C++ stream buffer is not thread safe, so nicely formatted strings can end up broken and interleaved when written out through iostream. A solution is to store the strings in a thread-safe container like the TBB concurrent_queue, then output them later (a minimal sketch of this appears after the dump below). I batch-ran the program many times before it reproduced the symptom. The results from my program look like this:

# address type total size source
0x2aaaaecf2b00 allocate 0 1544 2
0x2aaab5d23fe0 allocate 1544 25 2
0x2aaaaed07b00 allocate 1569 592 1
0x2aaaaece3e80 allocate 2161 360 2
0x2aaab5d21540 allocate 2521 28 2
0x2aaab5f2e6c0 allocate 2549 16 0
...
...
0x2aaab5d23fe0 deallocate 3420 25
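As an aside, here is roughly what that queue-based logging can look like, assuming TBB's concurrent_queue is available. The function names are made up for this sketch and are not the actual entry points of the allocator I was debugging. Each thread formats a complete record and pushes it onto the queue, and one thread drains the queue at the end, so lines never get interleaved.

// alloc_log.cc -- sketch of thread-safe allocation logging with TBB
// (function names are illustrative, not the real allocator's entry points)
#include <cstddef>
#include <cstdio>
#include <sstream>
#include <string>
#include <tbb/concurrent_queue.h>

static tbb::concurrent_queue<std::string> g_allocationLog;

// Called from the allocator's entry points on every allocate/deallocate.
void log_event(void* address, const char* type, std::size_t runningTotal,
               std::size_t size, int source) {
    std::ostringstream record;
    record << address << ' ' << type << ' ' << runningTotal << ' '
           << size << ' ' << source;
    g_allocationLog.push(record.str());  // safe to call from any thread
}

// Called once after the worker threads have joined.
void write_log(const char* path) {
    std::FILE* out = std::fopen(path, "w");
    if (!out) return;
    std::string record;
    while (g_allocationLog.try_pop(record))
        std::fprintf(out, "%s\n", record.c_str());
    std::fclose(out);
}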

In the dump above, "source" is something I added to distinguish the different entry points of the custom memory allocator; it's not that important in this test. I then split the data into two files, one for allocations and one for deallocations (a simple Python script would suffice), and sorted both by address (:sort in Vim). When I diffed them, I found that there were extra allocations, no surprise there. Note that for very large files you want to minimize the differences between the two files so that the diff tool can run quickly; you might be surprised how long it can take to compare two large text files. The only information you really need is the address and the size of each allocation. In my case, the differences turned out to be:

# Test run 1
0x2aaab639ffc0 allocate 158272 28 2
0x2aaab6d37f40 allocate 158185 192 1
0x2aaab6d4a400 allocate 161460 1544 2

# Test run 2
0x2aaab706ff40 allocate 177194 192 1
0x2aaab5feffa0 allocate 177450 28 2
0x2aaab633a400 allocate 177798 1544 2

This looked promising because at least I saw some consistency in the error. It turned out that these three allocations occurred fairly close to each other. But I still didn’t know which part of the program code was doing these allocations.
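As an aside, the split/sort/diff step can also be done programmatically. Below is a rough sketch of a small program that walks the dump in order, pairs each deallocation with the matching allocation by address, and prints whatever was never freed. This is only my illustration; the simple Python script plus diff described above works just as well.

// find_unmatched.cc -- sketch: report dump entries allocated but never freed
// (my illustration of an alternative to the manual sort/diff step)
#include <fstream>
#include <iostream>
#include <map>
#include <sstream>
#include <string>

int main(int argc, char** argv) {
    if (argc < 2) {
        std::cerr << "usage: " << argv[0] << " dump.txt\n";
        return 1;
    }
    std::ifstream in(argv[1]);
    std::map<std::string, std::string> live;  // address -> latest allocation line
    std::string line;
    while (std::getline(in, line)) {
        if (line.empty() || line[0] == '#') continue;  // skip the header/comments
        std::istringstream fields(line);
        std::string address, type;
        fields >> address >> type;
        if (type == "allocate")
            live[address] = line;   // remember the most recent allocation here
        else if (type == "deallocate")
            live.erase(address);    // matched: this block was freed
    }
    // Whatever is left was allocated but never freed.
    for (const auto& entry : live)
        std::cout << entry.second << '\n';
    return 0;
}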

From Allocation to Call Stack

What I needed was a way to find the potential code paths where these allocations occurred. Fortunately, I knew the sizes of the allocations, and with the help of GDB I could generate the call stacks. What I didn't know was that GDB actually supports automated command execution; with some logic constructs, you can do some scripting magic without the manual labor. For more complete information on command execution in GDB, see the documentation. You may also find the command reference handy. Here is an example of my GDB script:

# file: mem_stack.gdb
set logging on
set $targetMem1 = 1544
set $targetMem2 = 192
set $targetMem3 = 28
define get_br_info
  backtrace
  continue
end
break path/to/source.cc:57 if (n == $targetMem1 || n == $targetMem2 || n == $targetMem3)
commands 1
  get_br_info
end
run

The first line, set logging on, tells GDB to write its output to a file called gdb.txt for my inspection later. The three set lines that follow store the memory sizes I was interested in as convenience variables. The define block creates a custom command, get_br_info, which can be called later; it simply runs the GDB command backtrace to dump the call stack, then continues execution. The break command sets a breakpoint with the condition that the variable n from the C++ source code (the memory allocation size passed to malloc()) equals one of the target sizes. The commands 1 block registers a command list (just get_br_info) to run whenever breakpoint 1 is hit and its condition is met. Lastly, run starts the program execution. To use this script, you run GDB like this:


> gdb program_executable
(gdb) source mem_stack.gdb

"source input_command_file" will ask GDB to run all the commands in that file, in my case, it’s mem_stack.gdb. This can take a very long time, so I did mine overnight. What you get at the end is a gdb.txt file which has all the outputs from the GDB session, including the call stack information. From there, I was able to identify some race conditions that was causing some objects to be allocated twice, no big surprise there. Have fun with parallel programming!

Filed Under: Blog, Code, Tools | Tagged With: C++, diff, GDB, memory leak
About Louis Feng

I have been a computer graphics enthusiast and researcher for many years. My interests have broadened to include mobile, high performance computing, machine learning, and computer vision.

Comments

  1. Santosh B R says:
    July 26, 2011 at 9:51 pm

    Thanks a lot for such a good article. Going through this I was able to start using scripts in gdb. Made my debugging easier :) ….

  2. Govardius says:
    March 20, 2012 at 12:49 pm

    Is it possible to use Deleaker or Valgrind in this case?

    • Louis Feng says:
      March 26, 2012 at 11:01 am

      I’m not sure, I have not used valgrind or deleaker. I have heard good things about valgrind. Maybe next time when I run into some nasty bugs, I’ll try it out.

  3. Baijnath says:
    May 5, 2013 at 5:40 am

    Hi,
    I have to write a tool/script (Python or Perl) which will be executed under gdb; it will extract information from a core dump (the cause of the crash). If a person does not know gdb commands, even he/she can debug the code using this tool. Kindly guide me by giving any example or pseudo code.

    Thanks in advance.

