I'm cleaning up some C# code for my company and one thing I've been noticing is that the contractors that built this application keep setting object references to null.
Example:
get {
    Object o = new Object(); // create a new object that is accessed by the reference 'o'
    try {
        // Do something with the object
    }
    finally {
        o = null; // set the reference to null
    }
}
From what I understand, the object created still exists; it may no longer be reachable, depending on whether any other references to it exist, but it will live on until the GC comes along and cleans it up.
Is there any reason to have this in a finally block? Are there any cases where this could create an inadvertent memory leak?
Thanks!
This is dependent on scope.
In your given example, o is only defined within the scope of the property, so setting it to null is useless. However, if o were a field at class scope, then it MIGHT make sense to null it out to denote the state of o.
As the code currently stands, it is NOT needed.
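A minimal sketch of that distinction (the Worker class, _resource field, and method names here are hypothetical, not from the question):
public class Worker {
    private object _resource; // class-level scope: the field outlives any single method call

    public void Process() {
        _resource = new object();
        try {
            // Do something with _resource
        }
        finally {
            _resource = null; // meaningful: marks the field as "no longer holding anything"
        }
    }

    public void ProcessLocal() {
        object o = new object(); // method-level scope
        try {
            // Do something with o
        }
        finally {
            o = null; // pointless: o becomes unreachable when the method returns anyway
        }
    }
}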
If the intent is to have the GC collect the object as soon as possible, then it's utterly useless.
If the object referenced by o is not used anywhere after the try block, it becomes eligible for collection immediately after its last use (i.e. before the variable o goes out of scope, and before execution reaches the finally block).
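One rough way to observe this (a sketch, not a definitive test: the outcome depends on running an optimized Release build without a debugger attached, and the WeakReference check is only illustrative):
using System;

class Program {
    static void Main() {
        object o = new object();
        var weak = new WeakReference(o);

        Console.WriteLine(o.GetHashCode()); // last use of o

        // In an optimized build the JIT no longer treats o as a live root here,
        // so the object can already be collected even though o is still in scope
        // and has not been set to null.
        GC.Collect();
        GC.WaitForPendingFinalizers();

        Console.WriteLine(weak.IsAlive); // often False in Release, True in Debug
    }
}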
On a related note, see Lippert's Construction Destruction.
I see at least two reasons to do this. Firstly, using this pattern consistently assists in catching bugs caused by reusing variables (i.e. if this is part of a larger sequence of code, the variable name 'o' may hold a different object later in the execution). By explicitly assigning null, you ensure such code fails immediately if you try to use the same object later (say you accidentally commented out a constructor call as part of a larger block), as sketched below.
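For instance (a contrived sketch of the failure mode; LoadNextItem is a hypothetical helper that was supposed to supply the next object):
using System;

class Example {
    static void Main() {
        object o = new object();
        Console.WriteLine(o.GetHashCode()); // use the first object

        o = null; // explicitly mark the reference as done

        // The line that was meant to produce the next object has been
        // accidentally commented out:
        // o = LoadNextItem();

        Console.WriteLine(o.GetHashCode()); // throws NullReferenceException immediately,
                                            // instead of silently reusing the stale object
    }
}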
Secondly, assigning null ensures that the object is potentially available for collection by the GC. While this matters more for class-level fields, even local variables can potentially benefit. Since the object is not being read by the assignment, any existing optimization should not be affected by including it (however unnecessary it may be). Similarly, the assignment itself may be optimized away entirely (if the object is never subsequently accessed), but since both optimizations are the purview of the compiler, writing the assignment explicitly allows the possibility of earlier collection under alternate compilation models that do not perform such optimizations.
It would require more familiarity with the C# language specification than I possess, but I suspect it does not state that an object must become eligible for collection immediately after its last access. Making that kind of assumption based on a single compiler, or on the current behavior of a group of compilers, can lead to more work later when you try porting to an environment that does not follow the same principles.
As for potential memory leaks, assuming the GC is working correctly, and that the object does not require special disposal, there should be no issue - in fact you are specifically removing a potential reference to unused memory, possibly allowing it to be reclaimed. For objects with special disposal requirements, I would expect those to be handled in the same place.
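For completeness, a sketch of how such disposal might sit in the same place (hedged: in practice a using statement is the more idiomatic form, and the method name and file path parameter are made up):
using System.IO;

class DisposalExample {
    static void ReadFirstByte(string path) {
        FileStream stream = null;
        try {
            stream = File.OpenRead(path);
            stream.ReadByte();
        }
        finally {
            stream?.Dispose(); // releases the underlying handle deterministically;
                               // merely setting stream = null would not do this
        }
    }
}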