The CHKS Top Hospitals Awards have been running for 20 years and are highly regarded across the NHS and private health sector, in the UK and internationally. 

The Top Hospitals programme is the UK’s leading data-driven awards scheme, so it is crucial that our methodology and analysis are robust and reliable. This is how we do it:


The team
The Top Hospitals team works all year round to develop the award categories and create appropriate measures. The team comprises statisticians, NHS data experts and quality improvement specialists from a variety of clinical and management backgrounds.

Data
We use data supplied directly by trusts where we have it; otherwise we use Hospital Episode Statistics (HES) data. We also obtain published indicators from organisations such as the Care Quality Commission and the Department of Health, analysing more than 50 indicators in total across all the awards. These include readmissions, infection rates, referral to treatment, depth of coding, mortality rates, length of stay, waiting times, staff survey feedback, and friends and family test results. Before analysing the indicator data, we exclude trusts with more than 5% uncoded spells in the data period and trusts with a current CQC inspection rating of inadequate.
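As an illustration only, an exclusion step of the kind described above might look like the sketch below. The column names (uncoded_spell_pct, cqc_rating) and the sample figures are hypothetical, not the actual CHKS data schema:

```python
import pandas as pd

# Hypothetical trust-level data; column names and values are illustrative only.
trusts = pd.DataFrame({
    "trust": ["A", "B", "C", "D"],
    "uncoded_spell_pct": [1.2, 6.5, 0.8, 3.1],   # % of spells left uncoded in the data period
    "cqc_rating": ["Good", "Good", "Inadequate", "Requires improvement"],
})

# Exclude trusts with more than 5% uncoded spells, and trusts whose
# current CQC inspection rating is inadequate.
eligible = trusts[
    (trusts["uncoded_spell_pct"] <= 5.0)
    & (trusts["cqc_rating"].str.lower() != "inadequate")
]

print(eligible["trust"].tolist())  # trusts that proceed to indicator analysis
```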

Process
With the exception of the international quality improvement award and our two new insight in healthcare categories, all the awards are data-driven. This means we identify the shortlists and winners by analysis of quantitative performance data alone. We initially take data for a 12-month period and run a series of quality assurance tests to make sure we are happy with the indicators and our methodology. For the final analysis, we use the latest data submitted for the calendar year just ended. 

Adjustments
To account for factors beyond a trust’s control, we adjust indicators so that comparisons are fair. For example, many indicators are adjusted for case mix.
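By way of illustration, one widely used form of case-mix adjustment is indirect standardisation, where a trust’s observed events are compared with the number expected given its own mix of patients. The sketch below is a generic example of that idea, not the specific adjustment model used for the awards, and the data and column names are invented:

```python
import pandas as pd

# Hypothetical spell-level data: each row is a spell with a case-mix group
# and an outcome flag (e.g. readmission). Illustrative only.
spells = pd.DataFrame({
    "trust": ["A", "A", "A", "B", "B", "B"],
    "casemix_group": ["hip", "knee", "hip", "hip", "hip", "knee"],
    "readmitted": [0, 1, 0, 1, 0, 0],
})

# National rate per case-mix group, used to work out 'expected' events.
national_rate = spells.groupby("casemix_group")["readmitted"].mean()

# Expected events for each trust, given its own mix of patients.
spells["expected"] = spells["casemix_group"].map(national_rate)
by_trust = spells.groupby("trust").agg(observed=("readmitted", "sum"),
                                       expected=("expected", "sum"))

# Standardised ratio: above 1 means more events than expected for that case mix.
by_trust["standardised_ratio"] = by_trust["observed"] / by_trust["expected"]
print(by_trust)
```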

Weighting
As some indicators vary more than others, we stretch or compress the distributions so they are all the same width, using a statistical technique called Z-scores. Then, recognising that some indicators have greater significance for the area we are studying, we apply a weighting to each indicator within the award category. Next, we apply rules to limit the effect any single indicator can have on a trust’s overall score; for example, outlier values are capped at 3.0 standard deviations from the average. Finally, we flip indicators so that a high score always indicates positive performance. This enables us to add all the scores together and identify the best in class.
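To make that sequence concrete, here is a minimal sketch of such a scoring pipeline, assuming a simple table of indicator values per trust. The indicator names, weights and “lower is better” flags are invented for illustration, and the real calculation will differ in detail:

```python
import pandas as pd

# Hypothetical indicator values per trust; names, values and weights are illustrative.
scores = pd.DataFrame({
    "readmission_rate": [8.1, 9.4, 7.2, 12.0],
    "staff_survey":     [3.9, 3.5, 4.1, 3.2],
}, index=["A", "B", "C", "D"])

weights = {"readmission_rate": 2.0, "staff_survey": 1.0}
lower_is_better = {"readmission_rate": True, "staff_survey": False}

total = pd.Series(0.0, index=scores.index)
for indicator, values in scores.items():
    # Z-score: stretch or compress so every indicator has the same spread.
    z = (values - values.mean()) / values.std()
    # Limit outliers to 3.0 standard deviations from the average.
    z = z.clip(-3.0, 3.0)
    # Flip so that a high score always indicates positive performance.
    if lower_is_better[indicator]:
        z = -z
    # Apply the indicator's weighting within the award category and accumulate.
    total += weights[indicator] * z

print(total.sort_values(ascending=False))  # best in class first
```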