I’m not very far into the course yet (Unit 2), but I’m reading about animal studies. Using inbred strains, researchers study how genes affect behaviour and the variation in behaviour. After reading it I had a sudden thought: there are too many variables. They try to raise the animals in exactly the same way and in identical environments, but there is no such thing unless it’s a featureless white room, and even then there are always going to be differences: time of feeding, who fed them (and even if it’s the same person, that person will be changing, and will affect the animal depending on mood, past relationships with animals, etc.). At some point there must just be too many variables to take into account. Has anyone else had this thought? At what point do probability and statistics become nearly useless?

# PSYCH101 discussion

**Paul_Morris**#2

Some experiments have gone pretty much to the extent you suggest, with entirely featureless environments and all interaction with the experimenters avoided. More commonly, as you suggest, statistics are used to eliminate the effect of extraneous variables and isolate the effect being studied. It is assumed that variables which cannot be controlled are random and thus apply equally to all the experimental subjects, so over a number of experiments they should effectively cancel out.

Let’s imagine that we wanted to test the theory that rats living in a blue environment had faster reactions than those living in a green one. We might measure this by seeing how long it took for a rat to press a lever after a bell rang. Now if we just took one rat in a blue box and one in a green box, then we wouldn’t really be able to draw any conclusion; presumably rats have some difference in reaction time anyway. If we take a thousand rats in each environment, then we would expect to see a range of reaction times (and the times might well follow the ‘normal distribution’). By comparing the responses of the two groups we can then see whether the change in environment actually has an effect.
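The two-group comparison above can be sketched in a few lines of Python. The means and spreads here are made-up numbers for illustration, not real rat data:

```python
import random
import statistics

random.seed(42)  # fixed seed so the draws are reproducible

# Hypothetical reaction times in seconds. Both groups are normally
# distributed with the same spread; the "blue" group is given a
# slightly faster mean, standing in for a real environmental effect.
blue = [random.gauss(mu=0.48, sigma=0.05) for _ in range(1000)]
green = [random.gauss(mu=0.50, sigma=0.05) for _ in range(1000)]

print(f"blue mean:  {statistics.mean(blue):.4f} s")
print(f"green mean: {statistics.mean(green):.4f} s")
print(f"difference: {statistics.mean(green) - statistics.mean(blue):.4f} s")
```

With a thousand rats per group, the random individual variation largely averages out, so the group means land close to the underlying 0.48 s and 0.50 s even though individual rats differ by far more than 0.02 s.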

If the two groups’ distributions barely overlap, there would seem to be a clear difference. In practice most cases will be less clear cut, which is where statistics comes into the equation (so to speak). The intention is to give an indication of the probability that an observed effect arises from the experimental condition being varied, as opposed to random chance.

Simplifying tremendously, experiments work on the basis of negating the *null hypothesis*. Basically, you don’t prove that blue rooms improve reaction times but rather disprove the contention that they have no effect (the ‘null hypothesis’). Since this is all about probabilities rather than black/white observations, the researcher must decide at what level a result will be considered significant.

In psychology an effect is usually considered significant if the probability of observing such a result, assuming the null hypothesis is true, is less than 5% (this is expressed as p &lt; 0.05). In other words, if an observation would arise by chance fewer than 5 times per hundred attempts, it is considered that the outcome is not due to chance.
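To make the p &lt; 0.05 threshold concrete, here is a rough sketch in Python of a two-sample z-test (a normal approximation to the t-test, which is reasonable for samples in the hundreds; the reaction-time numbers are invented, as above):

```python
import math
import random
import statistics

random.seed(0)  # fixed seed so the draws are reproducible

def two_sample_p_value(a, b):
    """Approximate two-sided p-value for the difference in means of two
    large samples, using a z-test (normal approximation)."""
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / se
    # two-sided tail probability under the standard normal distribution
    return math.erfc(abs(z) / math.sqrt(2))

# Null hypothesis true: both groups drawn from the same distribution.
same = [random.gauss(0.50, 0.05) for _ in range(500)]
also_same = [random.gauss(0.50, 0.05) for _ in range(500)]

# Real effect present: the first group is 0.02 s faster on average.
fast = [random.gauss(0.48, 0.05) for _ in range(500)]
slow = [random.gauss(0.50, 0.05) for _ in range(500)]

print(f"no real effect: p = {two_sample_p_value(same, also_same):.3f}")
print(f"real effect:    p = {two_sample_p_value(fast, slow):.3g}")
```

When there is no real difference, the p-value will usually (though, by definition, not always) come out above 0.05; when a genuine 0.02 s effect is present, samples of this size push it far below the threshold, so we would reject the null hypothesis.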

This was covered briefly in your reading in Unit 1.2.3. Page 45 onwards looks at the question of significance. It will become clearer if you go on to study MA121 *Intro to Statistics* particularly in Unit 5.

To summarise: Ideally, all variables other than those being studied are controlled. In practice, some random variation will always occur. Experimenters therefore need to sample across multiple instances to determine whether an effect is significant or due to chance.

**Nichole**#3

Hi Paul.

Thanks so much for the response! I remembered reading that in the first unit so I’m glad you brought that up. I had discussed it with my husband as well later on and we came to the same conclusion. I’ve always had difficulty wrapping my mind around probability and the null hypothesis, but it does make sense. Thanks again for your explanation.