This is the most pertinent part of the article for me (although it all seems to be right on target):
An engineer also needs more than 16 bits during mixing and mastering. Modern workflows may involve literally thousands of effects and operations. The quantization noise and noise floor of a 16-bit sample may be undetectable during playback, but accumulate that noise over a few thousand operations and it eventually becomes noticeable. 24 bits keeps the accumulated noise at a very low level. Once the music is ready to distribute, there's no reason to keep more than 16 bits.
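You can actually watch this happen with a toy numpy sketch. This is not a real DAW signal path, just an assumed chain of 1000 tiny gain changes where the signal gets requantized to the working bit depth after every step; the noise figures it prints are relative to full scale:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.uniform(-0.5, 0.5, 48_000)   # one "second" of audio at unit scale

def run_chain(x, bits, n_ops=1000):
    """Apply n_ops tiny gain tweaks, requantizing to `bits` after each one."""
    step = 2.0 ** -(bits - 1)             # quantizer step for a +/-1.0 range
    y = x.copy()
    for _ in range(n_ops):
        y = np.round(y * 1.0001 / step) * step
    return y

ideal = signal * 1.0001 ** 1000           # the same gain chain, no requantizing

def chain_noise_dbfs(bits):
    err = run_chain(signal, bits) - ideal
    return 20 * np.log10(np.sqrt(np.mean(err ** 2)))

print(f"16-bit chain noise: {chain_noise_dbfs(16):.1f} dBFS")
print(f"24-bit chain noise: {chain_noise_dbfs(24):.1f} dBFS")
```

Each individual quantization error is tiny, but a thousand of them stack roughly as the square root of the count, and the 24-bit chain stays about 48 dB quieter than the 16-bit one throughout.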
The only reason to use high sampling rates and 24-bit files is when they're going to undergo more math. The higher the resolution, the better the result, since there's less rounding and truncation error (good dither is a whole other ball of wax). I absolutely notice this when I'm doing my location recording gig. That said, good-quality conversion and mic pres make a bigger difference than the numbers, and more important still are good microphones and good mic'ing technique. Even more important than that is a good band with good instruments playing good songs, with good arrangements, in a good room. When I'm frustrated with a listening experience, I very rarely blame it on someone not using the right sampling rate or bit depth (crappy mp3 conversion is a different story).

I would even argue that with rock music there isn't much reason to go past 44.1 kHz at the multitrack stage if the converters are good. The Nyquist theorem really is a fact of life, and whatever is gained by going higher will usually be lost in background amp hiss or other ambient noise. For music where preserving the sense of the space it was recorded in is paramount, I can see using a higher sampling rate. At the end of the day, CD resolution should not be the limiting factor.
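For anyone who hasn't played with Nyquist directly, here's a two-line illustration of why the theorem is a "fact of life." At 44.1 kHz, anything above 22.05 kHz folds back: a 25 kHz tone produces exactly the same sample values as a 19.1 kHz tone, which is why converters filter out everything above Nyquist before sampling (the 1024-sample length here is arbitrary):

```python
import numpy as np

fs = 44_100
n = np.arange(1024)

# A 25 kHz tone is above Nyquist (fs/2 = 22.05 kHz) at this rate...
above = np.sin(2 * np.pi * 25_000 * n / fs)
# ...so its samples are indistinguishable from a tone at 25000 - 44100 = -19100 Hz.
alias = np.sin(2 * np.pi * (25_000 - fs) * n / fs)

print(np.allclose(above, alias))   # True: identical sample values
```

Once the samples are identical there is no processing downstream that can tell the two tones apart, so the only fix is filtering before the converter, which is exactly what good converters do.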
Side note: if anyone has Bob Katz's book Mastering Audio: The Art and the Science, which I highly recommend, the section on dither is mind-blowing!
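Since dither keeps coming up: the mind-blowing part, at least to me, is that adding noise before quantizing preserves detail that truncation destroys. Here's a minimal numpy sketch of the idea, assuming standard TPDF (triangular) dither of plus/minus one LSB and a signal level of 0.3 LSB, i.e. below the 16-bit step:

```python
import numpy as np

rng = np.random.default_rng(1)
step = 2.0 ** -15                  # one 16-bit LSB (full scale = +/-1.0)
true_val = 0.3 * step              # a signal sitting below one LSB

# Plain rounding to 16 bits: the value simply vanishes.
undithered = np.round(true_val / step) * step

# TPDF dither: sum of two independent +/-0.5 LSB uniforms, added before rounding.
trials = 200_000
tpdf = rng.uniform(-0.5, 0.5, trials) + rng.uniform(-0.5, 0.5, trials)
dithered = np.round(true_val / step + tpdf) * step

print(undithered)                  # 0.0 -- the detail is gone
print(dithered.mean() / step)      # ~0.3 -- the detail survives, encoded in noise
```

The dithered output is noisier sample by sample, but on average it tracks the original value, which is why low-level detail (reverb tails, fades) stays audible through a properly dithered 16-bit bottleneck. Katz goes much deeper than this, of course.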