Any last words? Finalization in .NET
Raymond Chen’s blog post yesterday reminded me of a topic I’ve been meaning to write about. That’s a bit odd, since I work primarily in C#/.NET, while Raymond is an old-school Win32 guy. Fortunately, it seems this week is CLR week on The Old New Thing!
Anyway, now that he’s segueing into the world of managed code, I’ll begin by hearkening back to the glory days of C++. Remember when you wrote a class: you had the constructor, some fields, a few methods, maybe some inheritance. Oh, and let’s not forget the destructor! (Oh, who am I kidding? Who could forget the cute ~ before the class name? :3)
The semantics of a destructor in C++ are fairly clear: it gets called exactly once in the object’s lifetime, just before the object dies. The destructor is responsible for freeing any resources the object holds and for ensuring that any child objects are destructed as well (generally, just deleting anything it new’d).
Ah, yes, it was easy back then… memory leaks aside. And writing exception-safe code. Okay, it wasn’t easy at all. But you could understand it.
Nowadays, the world is a bit different. In C#, we still have those funny ~ characters, but they don’t mean what they used to. They aren’t destructors any more, no siree. They’re finalizers, and they come with a whole new set of semantics.
Finalizers are still linked to object destruction, that much is true. However, there is no delete keyword in C#; we simply let the object go. Once the garbage collector collects it, the finalizer is called. We have no control over the garbage collector, which means we don’t know when it will run or when the object will be destroyed. In other words, object destruction is non-deterministic.
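Here’s what that looks like in practice (a minimal sketch; the class name is my invention):

```csharp
using System;

class FileWrapper
{
    // Finalizer: the same ~ syntax as C++, completely different semantics.
    // The CLR calls this on its finalizer thread at some unspecified time
    // after the object becomes unreachable -- if it calls it at all.
    ~FileWrapper()
    {
        Console.WriteLine("Finalized -- but we never knew when.");
    }
}

class Program
{
    static void Main()
    {
        var w = new FileWrapper();
        w = null;   // no delete keyword; we simply let the object go
        // The object is now garbage, but the finalizer runs only when
        // (and if) the garbage collector decides to collect it.
    }
}
```

Note that there is no line you can point to and say "the finalizer runs here" — that is precisely the non-determinism.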
But, it will be collected eventually, right?
Raymond says in the linked article:
> If the amount of RAM available to the runtime is greater than the amount of memory required by a program, then a memory manager which employs the null garbage collector (which never collects anything) is a valid memory manager.
Obviously, if memory is plentiful, why bother going through the trouble of recovering it all?
> A correctly-written program cannot assume that finalizers will ever run at any point prior to program termination.
In the worst case, the garbage collector might run only once, at the end of the program as the runtime is shutting down.
But, he continues:
> A correctly-written program cannot assume that finalizers will ever run.
Wait, never? As it turns out, yes: a process can exit without running its finalizers at all.
There are two situations where this can happen:
- If the process crashes (e.g., a P/Invoke gone horribly wrong) or is killed in some fashion (an overeager user armed with Task Manager, for example), the CLR won’t have a chance to invoke the garbage collector.
- If an object throws an exception in its finalizer, the garbage collector has no choice but to call it quits, and it takes the process down with it.
Strictly speaking, #2 is a special case of #1, since you’re effectively crashing the garbage collector. Either way, the net effect is that the process dies, and Windows has to take care of your mess.
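The second situation can be sketched in a few lines (don’t run this in anything you care about):

```csharp
using System;

class Doomed
{
    ~Doomed()
    {
        // An unhandled exception that escapes a finalizer terminates the
        // whole process (since .NET 2.0) -- no remaining finalizers run.
        throw new InvalidOperationException("last words, interrupted");
    }
}

class Program
{
    static void Main()
    {
        new Doomed();                   // becomes garbage immediately
        GC.Collect();                   // request a collection...
        GC.WaitForPendingFinalizers();  // ...and the process dies here
    }
}
```

In .NET 1.x the runtime used to swallow finalizer exceptions and carry on; since .NET 2.0 the default policy is to tear the process down, which is the behavior described above.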
So, what good is a finalizer, if it isn’t guaranteed to run at any point? To be honest, not much at all.
If you consider deterministic (C++) destruction as a polite request for the object to clean up, then non-deterministic finalization is putting the object in front of the firing squad, and asking “Any last words?”
Fortunately, we have yet a third option, which I will discuss next time.