Why does Aprais use a 100-point scale to evaluate client-agency relationships when most surveys of this type use a 5-point scale?
We use the 100-point scale because it provides more accurate results than a 5-point scale. Why is this? Well, let’s compare three key elements: Sensitivity, Objectivity and Linearity.
Sensitivity
5-point scales are usually converted into scores of 1, for the lowest answer, up to 5 for the best. Results are then calculated, across multiple answers, to 1 decimal place. This gives a possible scoring range of 1.0 to 5.0 – or 41 possible results:

The Aprais 100-point scale, on the other hand, has possible answers ranging from 0.0 to 100.0 – 1,001 possible results. This means that the Aprais scoring system is far more sensitive to score changes than the 5-point scale. For example, if we look more closely at the 3.9 score on the 5-point scale, we can see that it aligns with scores on the Aprais scale ranging from 70.8 to 72.1.
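As a quick sanity check on those counts, a few lines of Python (illustrative only – the function name is ours, not part of any Aprais system) enumerate the scores each scale can report at one decimal place:

```python
# Count the distinct scores a scale can report at one decimal place.
def distinct_scores(low, high, step=0.1):
    """Number of values representable from low to high in increments of step."""
    return round((high - low) / step) + 1

# 5-point scale averaged to 1 decimal place: 1.0, 1.1, ..., 5.0
print(distinct_scores(1.0, 5.0))    # 41
# Aprais 100-point scale: 0.0, 0.1, ..., 100.0
print(distinct_scores(0.0, 100.0))  # 1001
```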

This means that across an equivalent Aprais score range of 70.8 to 72.1, a 5-point scale would have indicated no change at all. Why is this important? For two key reasons. Being able to see improvement in the score is a great help in keeping teams motivated and involved in the improvement process. Also, relationship evaluation scores are increasingly being used as a key metric in agency remuneration, and small changes can make the difference between bonus bands.
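To see how reporting to one decimal place on a 5-point scale can mask an improvement, here is a small illustration (the raw averages are invented for the example):

```python
# Two hypothetical raw team averages on a 5-point scale, before and after
# an improvement effort. Reported to one decimal place, they look identical.
before = 3.86
after = 3.94

print(round(before, 1), round(after, 1))  # both report as 3.9
```

A genuine improvement of 0.08 on the raw average simply disappears in the reported figure.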
Sensitivity isn’t just an issue when looking at the results; it can also be an issue for those answering. What happens when an individual wants to answer somewhere between the fixed 5 points? It’s not possible on a 5-point scale to answer 3½, so the individual is forced to score higher or lower than they would like. With the Aprais 100-point scale this isn’t an issue.
Objectivity
Most 5-point evaluation scales use ratings labelled ‘Meets Expectations’, ‘Exceeds Expectations’ and so on. The issue with this is that these are subjective measures. Each individual is answering from a different place, assessing their response against their own personal expectations. This would be fine if only one person were answering. But what happens when a team of people, each with differing expectations, answers? What you get is as much a measure of the answering team’s expectations as it is a measurement of the team being assessed.
Aprais comes at this differently. If you look at the Aprais scale, shown below, you can see the guide words Never, Seldom, Sometimes, Often and Always underneath.

So the Aprais scale is, in effect, measuring frequency. All Aprais questions are written to state a clear action or behaviour that high-performing teams would always display, such as ‘Creates strong integrated communication ideas’. Each person answering is therefore indicating how often they have observed this, giving an objective answer that can easily be combined with the answers from the other assessors to produce an objective team answer to the question.
This, therefore, gives a much clearer set of results, based on objective answers, so the true picture of what is working well, and what is not, can be seen. And from that, improvement plans can be safely prepared. After all, the purpose of the process is continuous improvement not just measurement.
Linearity
Evaluations use a set of questions asked of multiple participants. From the answers provided, an average score is calculated for each question and, from there, an overall average score is calculated. Why is this a problem for a 5-point scale?
A 5-point scale gives a list of 5 possible answers shown in an ordered fashion, with each answer being given a value. For example:
- Does not meet expectations = 1
- Partially meets expectations = 2
- Meets expectations = 3
- Partially exceeds expectations = 4
- Exceeds expectations = 5
Ordered data like this is called ‘Ordinal Data’, and the problem with ordinal data is that calculating an average is troublesome[1]. An average calculation assumes that the scale is linear – i.e. evenly spaced. Most times we’re calculating averages we’re using ‘real’ numbers, such as a count of something or a temperature setting. Here we know that 4 things are twice as many as 2 things, or that 30kg is three times as heavy as 10kg. But is that the case with the selections on the 5-point scale?
Is Exceeds expectations twice as good as Partially exceeds expectations? Is Partially meets expectations half as bad as Does not meet expectations? If not, then the calculated averages are going to be wrong. How far out they are depends on the particular scale used.
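A hedged sketch of the distortion (the spacings below are invented purely to illustrate the point, not taken from any real scale):

```python
# One team's answers to a question, as entered on the 1-5 scale.
answers = [2, 3, 4, 5]

# Hypothetical "true" positions of the five labels on a 0-100 scale,
# deliberately unevenly spaced to mimic an ordinal scale.
true_value = {1: 0, 2: 40, 3: 60, 4: 70, 5: 100}

naive_avg = sum(answers) / len(answers)          # 3.5 on the 1-5 scale
rescaled = (naive_avg - 1) / 4 * 100             # 62.5 if the scale were linear
true_avg = sum(true_value[a] for a in answers) / len(answers)  # 67.5 here

print(rescaled, true_avg)  # the naive average misstates the team's position
```

The bigger the gap between the assumed even spacing and the labels’ real spacing, the further out the calculated average will be.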
100 is better than 5
At Aprais we bypass this issue with the use of the 100-point scale. This longer scale removes the effects of ordinal data by providing a continuous spectrum. This type of scale, supported by its guidance of Never at 0 and Always at 100, provides an equally spaced ‘Interval’ scale – and this type of data can be averaged.
Paul Anson is Aprais’ Head of System & Service Delivery.
[1] Stevens, S.S. (1946). On the theory of scales of measurement. Science, 103(2684), 677–680