Copyright © University of Cambridge. All rights reserved.

'Odd One Out' printed from http://nrich.maths.org/



We received two great solutions to this problem, in which the sequences were explained and the Odd One Out was correctly found.

This first solution was sent in anonymously, but very well done whoever you are!

Process 2 Each number may have a repeated sequence, but the Odd One Out has four repeated numbers, while the rest only have two.

Process 3 Each number contains 1, 2, 3 and 4 in some order. The one that doesn't have all of these is the Odd One Out.

Process 4 The numbers increase by either 1 or 2 in successive rows. The first number that increases by more than 2 is the Odd One Out.

Process 5 All of the numbers share a factor of 7. The only one that doesn't have 7 as a factor is the Odd One Out.

Process 6 The numbers decrease in successive rows. The row where it increases instead of decreasing is the Odd One Out.
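Each of these processes reduces to the same computation: find the one row that fails a rule every other row satisfies. A minimal sketch (the numbers here are made up, not the actual puzzle data), using Process 5's divisible-by-7 rule:

```python
# Find the single row failing a rule that all other rows satisfy.
def odd_one_out(rows, rule):
    """Return the index of the one row breaking the rule, or None
    if zero or several rows break it."""
    failures = [i for i, r in enumerate(rows) if not rule(r)]
    return failures[0] if len(failures) == 1 else None

# Process 5: all numbers should share a factor of 7.
# Hypothetical column: 26 is the only non-multiple of 7.
print(odd_one_out([14, 49, 91, 26, 70], lambda n: n % 7 == 0))  # 3
```

The same helper works for the other processes by swapping in a different rule.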

Simon from Elizabeth College submitted this excellent answer. We particularly liked the way in which he explained his statistical reasoning, combining experimentation with mathematics.

Simon noted that you can never be sure of a result in a random process, but a clear explanation of a difference which repeats across several refreshes points almost certainly to the correct choice. He also noted that there are various assumptions that need to be made before making an informed choice, such as:
1. The algorithm is being performed correctly by the machine (the probability of a machine error is negligible compared with the chance of me misreading or mistyping)
2. The random number generator has perfect 'randomness' (unlikely, but a reasonable pseudo-random generator is probably good enough)
First column After looking at the data we notice that the first item is repeated at some point further down the list. I checked that this repeated value was the erroneous datum, which happened to be true. After several repeats, I was fairly confident in this assertion.

The data range is 100000-999999 (I failed to see a datum outside this range). I will not be able to determine the invalid number in the following case: the item is repeated twice in the list. The probability of any given number coming up is 1/(9*10^5). There are 20 rows, and therefore the error must occur in one of 19 of these. The chance of it recurring in any of the 18 other cells is about 18*1/(9*10^5) = 1/(5*10^4).
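The repeated-value check Simon describes is easy to automate. A sketch with hypothetical six-digit data (not the actual puzzle column):

```python
from collections import Counter

# The erroneous datum shows up as the value occurring more than once.
rows = [523114, 908771, 331425, 523114, 664902]
repeated = [v for v, c in Counter(rows).items() if c > 1]
print(repeated)  # [523114]
```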

Second Column A few refreshes of this data immediately point at the 8888 and its repeated occurrence. Checking this several times, we can be confident this is the bad datum. We repeat the same process as before to determine our certainty that we are correct: the data range is 1000-9999 (I failed to see a datum outside this range), and I will not be able to determine the error if 8888 is repeated by chance. Given there are 18 other cells where this could occur, the approximate likelihood of a coincidental repeat is 18*1/9000 = 1/500. That's about 0.2%.
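Both of Simon's collision estimates (this column's and the first column's) are quick to check directly:

```python
# Chance that the repeated value coincidentally matches one of the
# 18 other cells, for each column's data range.
first_col = 18 / (9 * 10**5)   # range 100000-999999
second_col = 18 / 9000         # range 1000-9999
print(first_col)   # 2e-05, i.e. 1/(5*10^4)
print(second_col)  # 0.002, i.e. 0.2%
```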

Third Column These data all seem to be 4-digit permutations of {1,2,3,4}. We notice that each time a 1414 appears; this must be the erroneous datum. The data range is the 4-digit permutations of {1,2,3,4} (I failed to see a datum outside this range). We note that, since the erroneous datum lies outside the data range, then, those assumptions being true, we can be certain of spotting it every time.
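The permutation rule is simple to test in code: sorting a number's digits and comparing against "1234" catches any repeat or missing digit.

```python
# A valid third-column row is some permutation of the digits 1,2,3,4.
def is_permutation_of_1234(n):
    return sorted(str(n)) == list("1234")

print(is_permutation_of_1234(3142))  # True
print(is_permutation_of_1234(1414))  # False: digits repeat
```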

Fourth Column This one was trickier. We spot that the cells appear to be in ascending order (wrapping around from 9 to 0). The gaps vary, and the erroneous datum usually has a larger gap between itself and the previous entry. Trial and error led me to hypothesise that gaps of one to three are valid and anything higher (only four was found) is invalid.
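The wrap-around gap test can be sketched with modular arithmetic (the rows here are hypothetical single-digit data, not the actual puzzle column):

```python
# Rows ascend with wrap-around at 9 -> 0; gaps of 1-3 are valid,
# so a gap of 4 or more marks the odd one out.
rows = [2, 3, 5, 6, 0, 1]
gaps = [(b - a) % 10 for a, b in zip(rows, rows[1:])]
print(gaps)  # [1, 2, 1, 4, 1]

bad = next(i + 1 for i, g in enumerate(gaps) if g > 3)
print(bad)   # row index 4 is the odd one out
```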

Fifth Column This time, the weak spot appeared in the form of {140,140,70,140}. I noticed that all of these divide by seven, and a quick scan indicated the same for the others. After four goes the errors were {52,13,39,52}; these all divide by 13. This led me to conclude that the data were meant to be multiples of 7, and the erroneous datum was a multiple of 13.
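Simon's observation about the four error values he reported can be confirmed directly:

```python
# The erroneous values found over four goes all divide by 13,
# and none of them is a multiple of 7.
errors = [52, 13, 39, 52]
print(all(x % 13 == 0 for x in errors))  # True
print(any(x % 7 == 0 for x in errors))   # False
```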
[Simon then worked out the probability of this occurring by accident: can you work this out? ]

Sixth Column This one had me completely stumped: all I noticed was the progressive shrinking, and the fact that if the top number was smaller, the ones below it were likely to be smaller too. I checked the hint "experiment on a calculator" and mashed the top value into functions which normally shrink a number: log and square root.
[This approach led Simon to discover that the sequence was formed by square-rooting the previous number. One of the numbers breaks the sequence.]
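The square-root chain is easy to test in code once spotted. A sketch with a made-up chain (not the actual puzzle column): build a sequence where each entry is the square root of the one above, corrupt one row, then flag the first row breaking the relation.

```python
import math

def first_break(col, rel_tol=1e-9):
    """Return the index of the first row that is not (approximately)
    the square root of the row above it, or None if consistent."""
    for i in range(1, len(col)):
        if not math.isclose(col[i], math.sqrt(col[i - 1]), rel_tol=rel_tol):
            return i
    return None

col = [1.0e12]
for _ in range(6):
    col.append(math.sqrt(col[-1]))
col[3] *= 1.5            # plant an erroneous datum at row 3
print(first_break(col))  # 3
```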