Victim service assessment – technical methodology 

Published on: 26 April 2021

Between April 2016 and February 2020, we carried out our rolling programme of crime data integrity (CDI) inspections in all 43 territorial forces in England & Wales, measuring the accuracy of recorded crime[1].

We found that, in general, crime recording standards had improved. The combined recording accuracy for all reported crime (excluding fraud) in England & Wales was 90.3%; for violent offences it was 88.3%, and for sexual offences it was 94.0%[2]. This built on our 2014 thematic inspection[3], which found that 80.5% of all reported crime, 66.9% of violent offences and 74.2% of sexual offences were correctly recorded.

However, too many victims of crime still do not receive the service they deserve. After the Audit Commission stopped auditing crime recording in 2004, our inspections and audits showed a gradual decline in crime recording accuracy. It is therefore important to continue inspecting forces so that standards are maintained.

As such, from 2020 onwards, we are continuing our inspection activity into crime recording, but with an expanded focus on the whole victim experience. Our victim service assessments (VSAs) will track a victim’s journey from reporting a crime to the police through to the outcome stage, focusing on six areas:

  1. Call handling
  2. Deployment and response
  3. Crime recording
  4. Crime screening and allocation
  5. Investigations
  6. Outcomes

Force selection

All forces will be subjected to a VSA within our PEEL inspection programme, comprising stages 1, 2, 4, 5 and 6. Some forces will additionally be tested on crime recording (stage 3), in a way that ensures every force is assessed on its crime recording practices at least every three years.

All our VSAs will receive a graded judgment. When we assess crime recording, we will give a separate judgment, specifically for this area.

Methodology

For those forces being assessed on crime recording (stage 3), we will start by producing a statistically robust sample of the force’s crime reports, from which we will estimate the force’s recording accuracy.

Crime types

A statistically robust sample will be audited for three offence groups:

  • violence against the person;
  • sexual offences;
  • all other offences excluding fraud.

The recording accuracies of these three offence groups will then be combined, using weighting, to form the force recording accuracy for all crime (excluding fraud). Weighting is needed for two reasons: first, because samples and not whole populations will be audited; and second, because violence against the person and sexual offences make up a higher proportion of the sample than they do of recorded crime. A sketch of this calculation is given below.
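
By way of illustration only, the offence-group weighting step might be sketched in Python as follows. The accuracy figures and offence-group proportions are hypothetical placeholders, not audit results, and the sketch shows only the second weighting described above.

```python
# Hypothetical audited recording accuracies for each offence group.
accuracy = {"violence": 0.883, "sexual": 0.940, "other": 0.903}

# Hypothetical proportions that each group makes up of the force's
# recorded crime (excluding fraud). Weighting by these corrects for
# violence and sexual offences being over-represented in the audit
# sample relative to recorded crime.
weight = {"violence": 0.35, "sexual": 0.05, "other": 0.60}

overall = sum(accuracy[g] * weight[g] for g in accuracy)
print(f"All crime (excluding fraud): {overall:.1%}")  # 89.8%
```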

Police forces use different IT systems and routes for receiving crime reports. The main routes we will consider are incidents and directly recorded crime (DRC). All forces record some crime through incidents – such as from 999 calls. DRC are crimes directly recorded without an incident record by a trained operator, typically in a crime management unit; not all forces use this method.

Forces also record small proportions of reported crime through various other routes, for example online reporting, members of the public walking into a police station, or vulnerable victim departments. These routes will only be included if they make up a high proportion (at least 5%) of all recorded crime in the force and contain an auditable record (this will be decided on a force-by-force basis).

The samples for the three offence types will be split across these crime reporting routes in proportion to the volume of crime reported through each route, based on information provided by the force.

Confidence intervals

The confidence interval provides an estimated range of values that the true recording accuracies are likely to fall within. For example, if an audit found that 85% of crimes were correctly recorded with a confidence interval of +/- 5%, we would be confident that between 80% and 90% of crimes during the time period were correctly recorded based on the parameters used.

We will apply the generally accepted 95 percent confidence level used in statistical tests. This means that were we to repeat the audit many times under the same conditions, we expect the confidence interval would contain the true population recording accuracy 95 times out of 100.

The audit aims to review a large enough sample to yield confidence intervals of no more than +/- 5% for each crime type (at the 95 percent confidence level) and +/- 3% for all crime (excluding fraud). Where the sample for any crime type, sized using pre-audit estimates, turns out during the audit to be too small to achieve this, the confidence interval can be extended to a maximum of +/- 5.4% if required. We will audit a minimum of 100 crime reports for each crime type, even if we achieve our required level of accuracy before this.
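
As a rough illustration, assuming the usual normal approximation to the binomial, the half-width of such an interval can be computed as follows; the sample figures are hypothetical.

```python
import math

def ci_half_width(p_hat: float, n: int, z: float = 1.96) -> float:
    """Half-width of a confidence interval for a proportion,
    using the normal approximation to the binomial."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical audit result: 340 of 400 reports correctly recorded.
p_hat = 340 / 400  # 0.85
print(f"85.0% +/- {ci_half_width(p_hat, n=400):.1%}")  # +/- 3.5%
```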

Sampling of crime records

Each report of crime that we review is either correctly recorded or not. Our aim, from a sample of all crime reports, is to estimate the recording accuracy, p (a value between 0 and 1), representing the proportion of crime reports correctly recorded by the force.

The binomial distribution provides us with the distribution of successes from a series of pass/fail tests. Under the binomial distribution, the estimated sample size (n) required is calculated as:

n = (Z² × P × (1 − P)) / I²

Where:

Z ≈ 1.96 (the Z score at the 95% confidence level)

P is the expected recording accuracy of the force

I is the confidence interval (+/- 5%).

The sample size is then adjusted to take account of the estimated population size (N) of the crime type in the force during the audit period, using the standard finite population correction:

n_adj = n / (1 + (n − 1) / N)

This adjusted sample size will then be split proportionally across incidents and, where applicable, directly recorded crimes or crimes from other routes, as sketched below.
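
A minimal Python sketch of this calculation, assuming the binomial formula and finite population correction above; the expected accuracy, population size and route proportions are hypothetical.

```python
import math

def binomial_sample_size(p: float, interval: float, z: float = 1.96) -> float:
    """Estimated sample size n under the binomial distribution."""
    return (z ** 2) * p * (1 - p) / (interval ** 2)

def adjust_for_population(n: float, population: int) -> int:
    """Apply the finite population correction for population size N."""
    return math.ceil(n / (1 + (n - 1) / population))

# Hypothetical inputs: 90% expected accuracy, +/- 5% interval,
# 4,000 crimes of this type recorded during the audit period.
n = binomial_sample_size(p=0.90, interval=0.05)    # ~138.3
n_adj = adjust_for_population(n, population=4000)  # 134

# Split proportionally across reporting routes, using force-supplied
# proportions (hypothetical here).
routes = {"incidents": 0.80, "directly_recorded": 0.15, "other": 0.05}
split = {route: round(n_adj * share) for route, share in routes.items()}
print(n_adj, split)  # 134 {'incidents': 107, 'directly_recorded': 20, 'other': 7}
```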

Not all incidents generate a crime, and some generate multiple crimes. So, for practical reasons, we only review incidents where we would expect to find crimes – based on incident opening codes. Decisions on which opening codes are likely to contain crimes are made in conjunction with each force.

For incidents likely to contain crimes, we will set ratios of 1.4 violent incidents per violent crime, 1.2 sexual offence incidents per sexual offence crime, and 1 incident per crime for all other offences (excluding fraud), based on evidence from our audits between April 2016 and February 2020. For example, for every 140 incidents opened with a violence opening code, we would expect to find 100 notifiable violent crimes. We will therefore multiply the estimated sample sizes for crimes from incidents by these ratios, to calculate the estimated number of incidents required.
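
Continuing the sketch above, the incident samples would then be inflated by these ratios; the crime sample sizes below are hypothetical.

```python
import math

# Incident-to-crime ratios from our April 2016 - February 2020 audits.
incident_ratios = {"violence": 1.4, "sexual": 1.2, "other": 1.0}

# Hypothetical numbers of crimes to be sampled via the incident route.
crimes_from_incidents = {"violence": 107, "sexual": 96, "other": 110}

incidents_required = {
    offence: math.ceil(n * incident_ratios[offence])
    for offence, n in crimes_from_incidents.items()
}
print(incidents_required)  # {'violence': 150, 'sexual': 116, 'other': 110}
```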

In case the recording accuracy is lower than estimated pre-audit, or the incident-to-crime ratio is higher than assumed, we will prepare reserve records. For incidents, the reserve will be sized using a p value 5 percentage points lower than the pre-audit expected value. For directly recorded crimes and crime reports from other routes, we will select an additional 10 records for the same reason.

Once calculated, the sample will be randomly selected from the force’s incident and crime records for the three-month period ending one complete calendar month prior to the commencement of the inspection.

To ascertain the quality and consistency of crime recording decisions in these areas, auditors will also examine dip samples on:

  • Anti-social behaviour (ASB)
    • 50 incidents closed as ASB personal;
  • Cancelled crimes and refused cancellation decisions
    • 20 rape crime cancellations
    • 20 all other crime cancellations
    • 20 refused rape cancellations
    • 20 refused all other crime cancellations;
  • N100s
    • 20 reports of rape where the circumstances are insufficient to immediately record a crime of rape;
  • Vulnerable victims
    • 25 adult protection cases
    • 25 child protection cases
    • 20 referrals from partners or third parties.

These records will be taken in order, starting from the end of the audit period and working backwards no more than 12 months.

In circumstances where data or information is not available to allow us to follow our methodology as stated above, we will amend our approach accordingly. In these circumstances we will ensure our approach is as robust as possible and, if necessary, seek advice from the wider analytical community.

Case file review

To assess stages 1, 2, 4 and 5, we will review 10 recorded crime files that have been screened in for investigation, for each of the following offences:

For those forces being assessed on crime recording (stage 3), the 70 crime files will be selected, where possible, from the random sample of incidents chosen for crime recording review. This will allow us to follow and assess an incident from initial contact through to the investigation stage. When there are insufficient records within the random sample to select all 70 files, we will choose the remaining files from the next available incidents within our sampling frame that meet the requirements.

For these forces we will also select 60 random incidents, not already chosen for case file review, from the random sample of incidents chosen for crime recording review.

For forces not being assessed on crime recording (stage 3), we will randomly select 60 crime files from the force’s crime records for the three-month period ending one complete calendar month prior to the commencement of the inspection.

Outcomes review

To assess the appropriate use of police recorded crime outcomes (stage 6), we will review 20 case files for each of the following police recorded crime outcomes:

  • caution (adult and youth);
  • community resolution;
  • evidential difficulties: suspect identified, victim does not support further action.

These 60 files will be chosen independently of the files selected for the other VSA stages. This is because some of the investigations we select may still be ongoing and therefore not have reached a police recorded outcome by the time we make our selection.

These records will be taken in order, starting from the end of the audit period, working backwards no more than 12 months.

Audit quality and validation

The quality of audit decisions depends on the knowledge, experience and skills of the auditors. All auditors will be required to attend an HMICFRS in-house three-day course, which will teach them the Home Office Counting Rules (HOCR) and the auditing techniques used in the CDI programme. It will be overseen by the national crime registrar, who will validate the content of the course. Instruction will be provided by HMICFRS’s crime data specialist, who has attended the College of Policing Force Crime Registrars’ (FCR) accreditation course.

To ensure consistency, the results of each audit will be subject to peer review by an independent expert. This peer reviewer will be an FCR, or deputy FCR, from another force who has also attended the College of Policing FCR accreditation course. In addition, forces will have the opportunity to review the audit decisions. We aim to resolve any issues with the force in the first instance, but if no agreement can be reached, then the matter will be passed to the CDI National Crime Recording Standard (NCRS) expert at HMICFRS for consideration in consultation with the national crime registrar. The ultimate decision on reconciliation of any disputed cases will rest with HMICFRS’s senior reporting officer (SRO) for the inspection.


[1] https://www.justiceinspectorates.gov.uk/hmicfrs/our-work/article/crime-data-integrity/

[2] These figures have confidence intervals of: +/- 0.3% for all reported crime; +/- 0.4% for violent offences; and +/- 0.4% for sexual offences.

[3] https://www.justiceinspectorates.gov.uk/hmicfrs/wp-content/uploads/crime-recording-making-the-victim-count.pdf