I have now (re)discovered that by prepending "-f" to the output file name of the (time domain(*)) programs, a 32-bit floating-point file is produced. However, it looks to me as if the internal processing is not done in floating point, or is handled incorrectly.
A simple test: use the modify loudness program with the -f option to increase the volume of an already loud 16-bit file that has peaks at the maximum of the range. The expected result is that when the resulting file's volume is reduced again in a program known to do floating-point calculations correctly (such as Cockos's Reaper), there is no distortion. The files produced by the CDP modify loudness program do not behave like that: they are permanently clipped and distort. This leads me to believe that either the internal processing does not happen in floating point, or the output is clipped into the -1.0...1.0 range, effectively making the floating-point output file option nearly useless. The same behavior can be observed with the other CDP programs, but modify loudness is the clearest way to reproduce it.
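One way to check whether a rendered float file actually retains samples beyond full scale is to inspect its peaks directly. The following is a minimal stdlib-only sketch: it writes and reads a mono canonical-layout float32 WAV (format code 3, IEEE float) to show that the format itself can legally carry samples above +/-1.0. The file name and helper names are mine, not anything from CDP; the naive "data"-chunk search assumes the simple header written here, not an arbitrary WAV.

```python
import struct

def write_float32_wav(path, samples, rate=44100):
    # Write a mono 32-bit float WAV (fmt code 3, IEEE float), canonical layout.
    data = struct.pack("<%df" % len(samples), *samples)
    with open(path, "wb") as f:
        f.write(b"RIFF")
        f.write(struct.pack("<I", 36 + len(data)))
        f.write(b"WAVEfmt ")
        # fmt chunk: size 16, format 3 (float), 1 channel, 4 bytes/frame, 32 bits
        f.write(struct.pack("<IHHIIHH", 16, 3, 1, rate, rate * 4, 4, 32))
        f.write(b"data")
        f.write(struct.pack("<I", len(data)))
        f.write(data)

def peak_of_float32_wav(path):
    # Absolute peak of a mono float32 WAV written by the helper above.
    # (Naively locates the first "data" tag; fine for this header layout.)
    with open(path, "rb") as f:
        raw = f.read()
    idx = raw.index(b"data") + 8          # skip the tag and the size field
    n = (len(raw) - idx) // 4
    samples = struct.unpack("<%df" % n, raw[idx:idx + n * 4])
    return max(abs(s) for s in samples)

# A float file can legally carry samples beyond +/-1.0:
write_float32_wav("over.wav", [0.5, 1.7, -2.3])
print(peak_of_float32_wav("over.wav"))    # peak well above 1.0: headroom kept
```

Running the same peak check on a -f output from modify loudness would show directly whether CDP clamped the samples before writing them.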
I don't bring this up out of "golden ears" concerns. To me, the value of a clean 32/64-bit floating-point processing chain lies almost entirely in the volume headroom it provides when the chain works correctly. That is, even if a DSP calculation produces samples beyond the nominal -1.0...1.0 range, the signal can later be brought down in volume so it doesn't distort the monitoring output, or so it sits better in a mix. If how this is supposed to work is not clear from the description, I can produce a demonstration as a video or a set of audio files.
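The headroom argument can be shown numerically. This is a toy sketch (my own helper names, not CDP code): boosting past full scale and then attenuating is lossless when intermediates stay in float, but permanently destroys the peaks when each stage clamps to the 16-bit range.

```python
def boost_then_cut_float(samples, gain):
    # Float path: intermediate values may exceed +/-1.0; nothing is lost.
    boosted = [s * gain for s in samples]
    return [s / gain for s in boosted]

def boost_then_cut_int16(samples, gain):
    # 16-bit path: each stage clamps to the representable integer range.
    def to_i16(x):
        return max(-32768, min(32767, int(round(x * 32767))))
    boosted = [to_i16(s * gain) for s in samples]
    return [to_i16((b / 32767) / gain) / 32767 for b in boosted]

peaks = [0.5, 0.99, -0.99]   # a "loud" signal with peaks near full scale
gain = 4.0

print(boost_then_cut_float(peaks, gain))   # original values come back unchanged
print(boost_then_cut_int16(peaks, gain))   # 0.5 and 0.99 flattened to the same level
```

In the clamped path the 0.5 and 0.99 samples come back at the same level, i.e. the dynamics are gone for good, which is exactly the distortion described above.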
(*) It seems that the spectral processes now always produce 16-bit PCM files, which looks like an obvious bug. (Unless the pvoc program and the CDP programs that transform the .ana files really are 16-bit only...?)