# On asking the right questions

The true power of information lies not in the sheer number of facts it packs, but in understanding what you could do with all of it. Developing this understanding is by no means easy. You have to constantly make predictions from the information you have and patiently wait to see what the universe throws back at you. And whether you’re figuring out why traffic is heavier than usual or why your brain works the way it does, it is easy to fall into the trap of looking at what has happened and thinking, “Oh! Of course. Should’ve seen it coming.”

The hindsight bias – thinking an event was totally predictable after it has occurred – is a fairly common mistake that we all inevitably commit, over and over again. That’s exactly why it is important to learn to ask the right questions.

The task only gets trickier: it turns out that not only are we biased toward convincing explanations after we’ve observed an event, but when we hold some idea (some theory) at the back of our mind and go looking for evidence, we tend to seek proof in favor of that idea – confirmation bias.

In 1960, Peter Wason showed that asking the right questions is not exactly our forte. He devised an experiment in which participants had to figure out a rule that Wason had in mind. All he told them was that the numbers 2, 4, 6 followed his rule. Participants would write down a set of three more numbers, and Wason would only say whether those numbers obeyed his rule or not. They could do this until they were sufficiently confident of the rule. Once they were sure, they could guess the rule, and if the guess was wrong, they went back to testing more triples, and so on.

Here’s an example –

*8, 10, 12*

Fits the rule.

*Is the rule ‘difference is two’?*

No.

*3, 6, 9*

Fits the rule.

*Is the rule ‘multiply by two’ and ‘multiply by three’?*

No.

Take a moment to think about a bunch of your own numbers and rules.

The usual approach goes something like this – you immediately think of a rule when you hear the numbers “2, 4, 6” and try a set of numbers that fits your first rule. You do this for a while, accumulating evidence, which in turn increases your confidence in your first rule. You finally find out that that’s not the rule Wason was using. Now, in the face of overwhelming evidence, you get mildly annoyed. You try the same rule again just to make sure you didn’t somehow miss it. The numbers fit, but apparently Wason says it’s not his rule. You now wonder, mostly in disbelief, whether Wason is even telling you the truth.

What you fail to see in this cycle of predicting and collecting evidence is that accumulating more evidence that fits your rule rarely provides additional information about Wason’s rule. You really have to either

- try multiple rules or
- see when your rule differs from Wason’s.

Oh! And by the way, the rule was “numbers in ascending order”.

Yes, that simple! If you’re wondering how many people get to the rule fairly quickly, it’s about 20% on average.

How long did it take you to realize that simply trying another set of numbers that fits your rule takes you nowhere? **The point is not to confirm what you already believe, but to find a scenario where your rule breaks down.** Then you will have learnt something useful. Here is a short video (~5 minutes) of it being demonstrated.
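The asymmetry between confirming and disconfirming tests can be sketched in a few lines of Python. The function names and the example triples are mine, purely for illustration; only the two rules come from the experiment as described above:

```python
# A sketch of Wason's 2-4-6 task (helper names are hypothetical).

def wasons_rule(triple):
    """Wason's actual rule: numbers in ascending order."""
    a, b, c = triple
    return a < b < c

def my_rule(triple):
    """A typical first guess: each number is the previous plus two."""
    a, b, c = triple
    return b == a + 2 and c == b + 2

# Positive tests: triples deliberately chosen to fit my_rule.
positive_tests = [(8, 10, 12), (1, 3, 5), (100, 102, 104)]

# Every one of them also fits Wason's rule, so no matter how many
# such triples you try, you can never tell the two rules apart.
assert all(wasons_rule(t) for t in positive_tests)

# A disconfirming test: a triple that breaks my_rule.
# If Wason says it fits, my_rule must be wrong -- new information.
print(wasons_rule((3, 6, 9)))  # True, yet (3, 6, 9) violates my_rule
```

The only informative move is the last one: a triple that your own rule rejects but Wason’s accepts immediately rules your hypothesis out.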

When deciphering the mysteries of the universe, it is extremely important to ask the right questions. And it’s not always enough to ask the ones that fit our rule. This way of thinking sounds a lot like what Karl Popper popularized in the early-to-mid twentieth century – the criterion for good science is that it should have an element of falsifiability. Yes, the idea is quite similar. But I’d hesitate to hold falsifiability up as the deciding criterion of good science. For we know that whether you set out to prove or to disprove your theory biases how rationally you interpret your results, unless you have made precise predictions before starting your experiment.

So, what makes some science, good science?

Well, if your science can reliably predict the outcome of an unobserved event (substitute ‘experiment’ for ‘event’, if you wish), without having to go back and fix your science, I think you’re making progress. After all, we’re terribly biased in explaining things that we already know, remember?

### References

[1] Wason, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task. *Quarterly Journal of Experimental Psychology, 12*, 129–140. doi:10.1080/17470216008416717

### Further reading

Confirmation bias - Wikipedia

Richard Feynman on seeking new laws – YouTube video