News / News analysis: Patient costing – worth the wait

29 August 2014 Steve Brown


Monitor’s review of last year’s voluntary patient-level cost collection, which was published at the end of August, provides arguably the most detailed breakdown of NHS costs to be made publicly available. Similar to previous reference cost publications, it provides ‘national’ average costs for healthcare resource groups. But unlike reference costs, these costs are built up from the costs of individual patient episodes. Even more noteworthy is the level of granularity on show.

In addition to the average costs, the publication also shows how these costs were built up – showing the average contribution of care delivered in operating theatres and wards, for example, to the overall cost.

With more than 130 acute providers having implemented, or in the process of implementing, patient-level information and costing systems (PLICS), this is an analysis individual organisations will be familiar with. However, only those in a benchmarking club or similar partnership would previously have been able to look at the average cost make-up across a range of providers … and compare their own cost split to these averages.

The figures, which cannot be compared directly with reference costs because of differences in what is included, do not give a comprehensive picture. They are not yet truly national, as the 2012/13 costs were submitted voluntarily by 66 trusts and only cover admitted patient care. Still, the costs are derived from an impressive 7.4 million patient care episodes and cover some £13.7bn of costs in total. Monitor’s director of pricing Ric Marshall describes the figures as ‘an outstandingly rich information source’.

If nothing else, they confirm the big contributors to overall costs – wards (23%), medical staffing (16%), operating theatres (9%) and the sometimes ill-defined overheads (more of which later) consuming a further 20% (see chart). Of course this average cost composition can only be used as a rough rule of thumb, and will be influenced heavily by an individual organisation’s casemix.

Monitor’s clear goal is to support the NHS to develop ‘robust, comprehensive and consistent cost information’ that supports both local management decisions and national (or local) pricing. The report praises trusts for making ‘excellent progress in producing information that supports local trust cost management’.

Glen Pearson, costing and outcomes lead in Monitor’s pricing team, told Healthcare Finance the regulator was ‘delighted with the response and enthusiasm of the sector given this was an entirely voluntary exercise’. ‘There is a healthy desire to participate and improve cost information across the sector,’ he said. ‘It was a very positive process. This is really important because Monitor can’t deliver improvements in costing and cost data on its own – it will need a combined effort of trusts, the HFMA [through the continued development of the clinical costing standards] and Monitor.’

The key challenge now, in particular in making the data more suitable for informing the payment system and benchmarking, is to ‘focus on consistency of approach’. And this is an important purpose of the publication – identifying issues with the 2012/13 collection and how to resolve them, and giving trusts some idea of how the collection could change in future so they can prepare.

So is the lack of consistency down to lack of compliance with existing guidance – set out in the HFMA standards and Monitor’s Approved costing guidance – or the need for greater detail in that guidance? There have been calls for the costing standards to be mandated, but to date Monitor has only indicated this might be an option for the future. In reality, its approach on prescription is likely to be tied up with its long-term policy on costing, due to be set out in a costing roadmap this autumn.

Mr Pearson said a balance had to be struck between ‘prescription, giving greater consistency, and flexibility, allowing innovation and best practice to emerge’. The treatment of overheads is a good example of inconsistency between organisations. Mr Pearson said some trusts treated costs as indirect when the standards were clear they should be treated as overheads.

‘When you classify something as an indirect cost when it is really an overhead, it gets spread across all the cost pools. The impact – for example, if you are trying to compare the costs of pathology [without overheads] – is that you won’t get a perfect comparison between one trust and another,’ he said.
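The distortion Mr Pearson describes can be illustrated with a toy calculation. This is a hedged sketch with invented figures and a pro-rata apportionment assumed purely for illustration; it is not Monitor’s or any trust’s actual methodology:

```python
# Invented direct costs per cost pool for a hypothetical trust (£000s).
pools = {"pathology": 400, "theatres": 900, "wards": 1_700}
overhead = 300  # a cost the standards say should be reported as an overhead

# Correct treatment: the cost pools stay clean and the overhead is
# reported discretely, so pathology remains 400.
clean_pathology = pools["pathology"]

# Misclassification as an indirect cost: the 300 gets spread across all
# cost pools (here pro rata to direct cost), inflating each of them.
total = sum(pools.values())
misclassified = {k: v + overhead * v / total for k, v in pools.items()}

print(clean_pathology)                      # 400
print(round(misclassified["pathology"]))    # 440
```

Two trusts with identical pathology activity could thus report 400 and 440 for ‘pathology’, purely because one spread an overhead that the other reported separately – which is why like-for-like comparison fails.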

While the current data suggests overheads account for 20% of total costs on average, Mr Pearson said the data needed to ‘get to a more robust position’ before the service could confidently identify this as an accurate reflection of true overhead levels for admitted patient care.

So, in summary, last year’s pilot patient-level cost collection represents a good start. Things now look set to pick up pace. Monitor will publish its long-term plan for costing in October, the same month as the patient-level cost collection window for 2013/14 closes.

With more time to prepare and the benefit of last year’s experience, there are high hopes for further improvements in data quality. And Monitor has pledged to produce its analysis of the exercise by the end of the year.


Monitor’s action list for improving data quality


Monitor has identified eight action areas for itself, the HFMA (as producer of the clinical costing standards) and for trusts to support improvement in data quality over the next two collections:

  •  Collection content
  •  Cost pool classification
  •  Allocation methodologies
  •  Matching
  •  Work in progress
  •  Non-patient care activity
  •  Critical care
  •  MAQS calculation

On collection content, Monitor said that overall the non-cost elements of submitted data were good. Age data was largely complete, with the exception of one trust with no patient ages and a few missing ages for episodes. Some trusts were over-generating HRG U-codes as a result of missing or invalid clinical codes. There was also concern over the validity of reported operating theatre minutes – key to allocating theatre costs to individual patients. Some trusts, for example, had the same time reported across every episode that had involved theatre time.

For the 2013/14 collection, Monitor has set up a central validation process with ‘same-day’ feedback reports highlighting possible issues and missing fields. It has asked trusts to review U-codes, suggesting U-codes should make up less than 1% of HRGs in their submissions.
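The U-code review Monitor has asked for amounts to a simple proportion check. A minimal sketch, using invented sample data (the `UZ01Z` ‘data invalid for grouping’ style of U-code is real, but the episode list and the exact form of any trust’s check are assumptions):

```python
# Hypothetical list of HRG codes from a trust's episode-level submission.
hrgs = ["AA22Z", "UZ01Z", "FZ38B"] * 100 + ["UZ01Z"] * 2

# U-codes are generated when clinical coding is missing or invalid.
u_count = sum(1 for h in hrgs if h.startswith("U"))
share = u_count / len(hrgs)

print(f"U-code share: {share:.2%}")
if share > 0.01:  # Monitor's suggested threshold: under 1% of HRGs
    print("Review clinical coding: U-code share exceeds 1%")
```

In this invented sample roughly a third of HRGs are U-codes, so the check would flag the submission for a coding review before it was sent.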

Beyond this it is looking at further central validation processes for 2014/15 with a focus on quality of theatre times and critical care length of stay.

On cost pools, Monitor identified problems with inconsistent classification of indirect and overhead costs, making cost comparison across trusts difficult. It also highlighted confusion around what is included in some cost pool groups. For example, the medical staffing cost pool group does not include all medical staffing costs – medical salaries for pathology and diagnostics sit elsewhere.

It also suggests that, at 4% of total costs, the ‘other clinical support’ cost pool group should be broken down further.

For the 2013/14 collection, recommendations again focus on checks that trusts can make. But for 2014/15, Monitor says it is considering work with the HFMA on standards 1 and 2 (cost classification and cost pool groups). It is keen to raise the profile of the treatment of overheads, potentially giving them a discrete cost pool group of their own.

It is also keen to provide more detail on the allocation of indirect costs and which cost pools they should be reported in. On medical staffing, it is considering specifying greater granularity in cost pools that include medical staffing, so that it is easier to identify the total costs of medical staffing.

The materiality and quality score (MAQS) is seen as having a major role in helping trusts to target improvement activities on key areas in their costing process. It also potentially has a role in ensuring submitted data is robust enough to be used for price-setting. The assessment tool gives trusts an overall score for their costing process, built up from individual scores for each cost pool group based on the overall value of resources, the allocation method used and the ability to match patients to specific activities such as pathology or diagnostic tests.
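The construction described – an overall score built up from per-cost-pool-group scores, weighted by the value of resources in each group – can be sketched roughly as below. This is a hedged illustration only: the group scores, weights and scale are invented and do not reproduce the actual MAQS template:

```python
# Invented per-cost-pool-group data for a hypothetical trust:
# name -> (cost in £000s, quality score 0-100 reflecting the allocation
#          method used and the ability to match activity to patients).
groups = {
    "wards":     (1_700, 80),
    "theatres":  (900, 90),
    "pathology": (400, 60),
}

total_cost = sum(cost for cost, _ in groups.values())

# Overall MAQS-style score: each group's quality score weighted by its
# share of total cost, so material groups dominate the result.
overall = sum(cost / total_cost * score for cost, score in groups.values())

print(round(overall, 1))  # 80.3
```

The cost weighting is what makes the score useful for targeting improvement: a poor allocation method in a high-value group (wards) drags the overall score down far more than the same weakness in a small group (pathology).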

But the collection exercise revealed that some trusts struggled to identify their own allocation method within the options. Others pointed out that more than one allocation method might be used within a single cost pool group and there was also concern about the relative scores attached to different allocation methods. For 2013/14 data, Monitor wants trusts to complete the MAQS template for all PLICS costs – not just admitted patient care. Trusts will also be able to select more than one allocation method for each cost pool group.

Beyond the current collection, Monitor will consider adapting the template so organisations can reflect a deviation from the standards – in itself this could help identify best practice allocation methods. In consultation with the HFMA, it will also consider incorporating an assessment of the allocation of costs to cost pools as well as from cost pools to patients. And it will look again at the existing linear scoring scale used for different methodologies.