The CVSS score is widely used as the de facto standard risk metric for vulnerabilities, to the point that the US Government itself encourages organizations to use it to prioritize vulnerability patching. We challenge this approach by testing the CVSS score in terms of its efficacy as a "risk score" and as a "prioritization metric." We test the CVSS against real attack data and show that the overall picture is not satisfactory: the (lower-bound) over-investment incurred by using CVSS to choose which vulnerabilities to patch can be as high as 300% of the optimal one. We extend the analysis to ensure that our results are statistically significant. However, we present our results at a practical level, focusing on the question: "does it make sense for you to use CVSS to prioritize your vulnerabilities?"
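To make the over-investment figure concrete, the following is a minimal illustrative sketch of one possible reading of such a ratio: patches made under a CVSS-threshold policy divided by patches an attack-data oracle would have required. The field names, the 9.0 threshold, and the toy data are assumptions for this example, not the paper's actual methodology or datasets.

```python
# Hypothetical sketch: a CVSS-threshold patching policy versus an oracle that
# patches only the vulnerabilities actually seen attacked. All names, the
# threshold, and the data below are illustrative assumptions.

def over_investment(vulns, cvss_threshold=9.0):
    """Ratio of patches made under a CVSS policy to patches strictly
    needed (i.e., only vulnerabilities observed under attack)."""
    patched_by_cvss = [v for v in vulns if v["cvss"] >= cvss_threshold]
    optimal = [v for v in vulns if v["attacked"]]
    if not optimal:
        # If nothing is attacked, any patching is pure over-investment.
        return float("inf")
    return len(patched_by_cvss) / len(optimal)

# Toy data: three high-CVSS vulnerabilities, only one of which is attacked.
vulns = [
    {"id": "CVE-A", "cvss": 9.8, "attacked": True},
    {"id": "CVE-B", "cvss": 9.3, "attacked": False},
    {"id": "CVE-C", "cvss": 9.1, "attacked": False},
    {"id": "CVE-D", "cvss": 5.0, "attacked": False},
]
print(f"over-investment: {over_investment(vulns):.0%}")  # prints 300%
```

Under this reading, a ratio of 3.0 means the CVSS-based policy patches three vulnerabilities for every one that an attack-informed oracle would have needed to patch.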