I agree with Tiny Giant:
I imagine a number of the users who decided not to post their question did so because their question was not a debugging-style question, and the template led them to believe that we only accept debugging-style questions, since the template is not qualified in any way (e.g. with another HTML comment saying "This template is for debugging questions; if your question is not a debugging question, ignore it."). I personally don't count endless variations of the same useless debugging questions as "good" questions, even if they don't get downvoted or closed. In short, I think your measure of success is naive: it rests on preconceived notions which your tests neither prove nor disprove, and you have taken the fact that they were not disproved as proof.
In other words, the metrics are not being examined carefully. They are being assumed to mean things that may or may not be what they actually mean. This is extremely dangerous: misunderstood data will lead you to the wrong conclusions far more quickly than experience and well-trained intuition will, especially in arenas where much of what is being measured is subjective.
Additionally, these percentages are tiny. Do we even have an estimate of the error bars on these measurements? I don't see one. The error needs to be much smaller than the effects being claimed, or these results haven't proved anything. Yet it seems to be assumed that any difference is meaningful.
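To illustrate the point with entirely made-up numbers (the counts below are hypothetical, not taken from the actual experiment), here's a quick normal-approximation confidence interval for the difference between two proportions, showing how a small observed lift can vanish inside the error bars:

```python
from math import sqrt

def diff_ci(success_a, n_a, success_b, n_b, z=1.96):
    """95% confidence interval for the difference between two proportions,
    using the normal approximation (valid for reasonably large samples)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Standard error of the difference between independent proportions
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_b - p_a
    return diff - z * se, diff + z * se

# Hypothetical counts: 520 "good" questions out of 10,000 with the template
# vs. 500 out of 10,000 without -- an observed lift of 0.2 percentage points.
low, high = diff_ci(500, 10_000, 520, 10_000)
# The interval straddles zero, so this difference proves nothing on its own.
print(low < 0 < high)  # True
```

If the interval contains zero, the observed difference is indistinguishable from noise at that sample size, which is exactly the check that appears to be missing here.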
Furthermore, data is really only meaningful if you can examine it in light of a good model of the problem space, and the model you're embracing is suspect. Models should only be considered reliable if they've been tested hundreds or thousands of times, over many different situations and a significant amount of time, and found to correctly predict the results. What results has your model successfully predicted, and over how long a period has it been tested? The model you introduced is only a couple of months old. It's far too soon to base any wide-reaching decisions on it.
I do not like this trend of incautious use of data that I see SO trying to embrace.