I hope that most readers do not have to deal with these questions anymore; ideally, people should be using infrastructure that manages memory for them. However, it is still often the case that a specific use case requires manual memory management. Classically, this is done with what I like to call the malloc/free matching dance: you go through the code, analyze it, and make sure that every malloc is matched up with a free.
The problem with this strategy is that bugs are not *local*. When you call malloc, you effectively insert a bug: there is no corresponding free yet. The fix for that bug is a "free" call somewhere far away, and making the match, particularly in complicated code, is non-trivial. It is clear why the approach is popular, though: it is an attempt to do a John Henry, simulating the way automatic memory reclamation would work by freeing the memory as soon as it is known to be unneeded. Much like in the classic folk tale, this strategy does not bode well for its instigators.
An alternative is "conservative lifetime estimation". When allocating, make a conservative estimate of when you will no longer need the memory. This might be a literal time ("in one hour") or an event (a function returns, a connection is finished). As soon as the allocation is done, attach the "free code" to that event and forget about it. When the event occurs, free all the memory.
I have used this strategy to fix rampant memory allocation bugs in two cases. In one case, I could prove that the memory would be needed in only one re-entrant function. I used a static buffer, wrote a simple allocator (no free; a pointer to free space incremented on each allocation), and cleared it before the function ran. In the other case, I wrote a class called a "Guard", to which objects of class "Action" could be attached. When the guard executed its destructor, it would call all the actions. Add to that a template DeleteAction, inheriting from Action, which destroys a pointer to T when the action is called; define a stack-allocated Guard in some functions; and voilà, we had a way to make the "function end" event a first-class object in C++.
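The Guard/Action design above can be sketched roughly as follows. The class and method names (`attach`, `run`) are my reconstruction, not the original code:

```cpp
#include <cassert>
#include <vector>

// An Action is anything the guard should do when its scope ends.
struct Action {
    virtual ~Action() {}
    virtual void run() = 0;
};

// Mirrors the DeleteAction template: destroys a pointer to T when run.
template <typename T>
struct DeleteAction : Action {
    explicit DeleteAction(T* p) : p_(p) {}
    void run() override { delete p_; }
    T* p_;
};

// A stack-allocated Guard runs every attached action in its destructor,
// turning "the function ended" into a first-class cleanup event.
class Guard {
public:
    Guard() {}
    Guard(const Guard&) = delete;
    Guard& operator=(const Guard&) = delete;
    ~Guard() {
        for (Action* a : actions_) {
            a->run();   // free the attached resource
            delete a;   // then dispose of the action itself
        }
    }
    void attach(Action* a) { actions_.push_back(a); }
private:
    std::vector<Action*> actions_;
};
```

At the allocation site you write `guard.attach(new DeleteAction<T>(ptr))` and then forget about `ptr`; the deletion is tied to the scope's end rather than matched up by hand.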
This strategy is older than that, of course. The classical Unix trick of spawning a process, never calling free inside it, and killing it when done is just an OS-supported version of the same idea. Apache uses pools, which make a "lifetime estimation" a first-class object, and its at-most-n-requests-per-child limit is another application of the strategy. While this strategy frequently over-allocates, it lets you write code that is obviously correct in many cases.
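A minimal arena in the spirit of such pools (the names and layout here are mine, not Apache's API) bumps a pointer on allocation, never frees individual objects, and releases everything at once when the arena is reset or destroyed:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// Bump allocator: one malloc up front, one free at the end.
class Arena {
public:
    explicit Arena(std::size_t capacity)
        : buf_(static_cast<char*>(std::malloc(capacity))),
          cap_(capacity), used_(0) {}
    ~Arena() { std::free(buf_); }  // one free matches every allocation

    void* alloc(std::size_t n) {
        // Round up to worst-case alignment so any object type fits.
        std::size_t a = alignof(std::max_align_t);
        std::size_t start = (used_ + a - 1) / a * a;
        if (start + n > cap_) return nullptr;  // no growth: stay conservative
        used_ = start + n;
        return buf_ + start;
    }

    void reset() { used_ = 0; }  // "clear" the buffer before reuse

private:
    char* buf_;
    std::size_t cap_;
    std::size_t used_;
};
```

The trade-off is exactly the one described above: memory allocated early in the lifetime is held until the very end, but no allocation can ever be leaked or double-freed.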