Prescribing indicators for general practice quality assessment
Cantrill JA, Campbell S, Roberts D*
National Primary Care Research & Development Centre, University of Manchester M13 9PL
* Prescribing Support Unit, Leeds LS2 7RJ

Background
Quality of care is now a central focus of Government policy within the NHS, driven by new organisational structures such as the National Institute for Clinical Excellence and the National Performance Framework. In addition, clinical governance provides a framework through which all professionals will be accountable for continuously improving the quality of their services. Prescribing is one of the most controversial areas of quality assessment. This study aimed to test the face validity of prescribing indicators based on PACT data.

Method
A two-round Delphi questionnaire was used to assess the validity of 31 prescribing indicators. The first-round questionnaire was sent to every pharmaceutical and medical adviser in England (n=305). Respondents were asked to rate each indicator on two nine-point integer scales: "is this a useful measure of cost minimisation?" and "is this a useful measure of quality?" (definitions were provided). Respondents were also invited to comment on each indicator. Between the rounds, 10 further indicators were added on the basis of respondents' comments. In the second round, feedback for each first-round indicator was provided as frequency distributions, median scores and illustrative qualitative comments. Having used the first round to obtain comments from a wide range of medical and pharmaceutical advisers, the aim of the second round was to achieve consensus at health authority level; second-round questionnaires were therefore sent purposively to the lead prescribing adviser in each health authority in England (n=99). The rating scale was based on the RAND appropriateness method1 and we used Brook's definition of agreement.2

Results and Discussion
The second-round response rate was 79%. No indicator received an overall median rating of 9. Using a median rating of 7 or 8 without disagreement as the criterion, 25 indicators were rated valid for cost minimisation and 18 for quality; of these, eight were rated valid for both. However, it has been suggested that the reliability of results from the RAND procedure is increased if a rating of 8 or 9 without disagreement is used.3 Applying this stricter criterion leaves only seven indicators rated valid for cost and five for quality. These 12 indicators are very narrow in their focus and would have only limited application in the assessment of prescribing.
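The two validity criteria applied above can be sketched in code. This is a minimal illustration only, not the study's analysis: the panel ratings are invented, and the "disagreement" rule used here is a simplified stand-in (a sizeable minority rating in the tertile opposite the median), not Brook's exact definition.

```python
from statistics import median

def rated_valid(ratings, lower=7):
    """Classify an indicator as valid if its median rating falls in
    [lower, 9] with no disagreement, under a simplified disagreement
    rule: one third or more of panellists rating in the opposite
    tertile (1-3) counts as disagreement."""
    med = median(ratings)
    if med < lower:
        return False
    opposite = sum(1 for r in ratings if r <= 3)   # ratings in the 1-3 tertile
    return opposite < len(ratings) / 3

# Hypothetical panel ratings on the nine-point scale for one indicator.
panel = [7, 7, 7, 8, 6, 9, 7, 5, 8]               # median = 7

print(rated_valid(panel, lower=7))  # valid under the 7-or-8 criterion
print(rated_valid(panel, lower=8))  # not valid under the stricter 8-or-9 criterion
```

An indicator with a median of 7 passes the looser criterion but fails the stricter one, which is how tightening the threshold shrinks the valid set from 25 to seven (cost) and 18 to five (quality).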

References:

  1. Brook RH. The RAND/UCLA Appropriateness Method. RAND, Santa Monica, 1995.
  2. Brook RH, Chassin MR, Fink A, Solomon DH, Kosecoff J, Park RE. A method for the detailed assessment of the appropriateness of medical technologies. International Journal of Technology Assessment in Health Care 1986; 2: 53-63.
  3. Shekelle PG, Kahan JP, Park RE, Bernstein SJ, Leape LL, Kamberg CA et al. Assessing appropriateness by expert panels: how reliable? Journal of General Internal Medicine 1995; 10 (supplement): 81.

Presented at the HSRPP Conference 2000, Aberdeen