

Sampling Organisms


Sampling of organisms must be like those annoying, attention-seeking Snapchat friends. It must be random. Random sampling can be carried out using quadrats. If you’re wondering what they are, look no further – they’re squares.

How would you make sure that your sampling is random? In a field, you could lay two long tapes perpendicularly to define the limits of the area where the samples will be taken from.

A tape is laid along one side of the sampling area, and a second tape is laid from one end of the first, along the adjacent side (like a giant L). Then two random numbers are generated using a random numbers table. These numbers are used as the coordinates of the first quadrat placed on the field, by matching them against the two tapes. And voilà! You have yourself a system for random sampling using quadrats.
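
If you fancy seeing that coordinate-picking step in code, here's a minimal sketch. The field size (20 m along each tape) and the number of quadrats are made-up values, and Python's random module stands in for the printed random numbers table:

```python
import random

# Assumed dimensions of the sampling area in metres and number of
# quadrats -- illustrative values, not from any real survey.
TAPE_1_LENGTH = 20
TAPE_2_LENGTH = 20
N_QUADRATS = 10

# Each pair of random numbers is read off the two perpendicular
# tapes to give the coordinates of one quadrat.
for i in range(1, N_QUADRATS + 1):
    x = random.randint(0, TAPE_1_LENGTH)
    y = random.randint(0, TAPE_2_LENGTH)
    print(f"Quadrat {i}: {x} m along tape 1, {y} m along tape 2")
```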


Transects are tapes placed across an area that has some form of gradient caused by abiotic factors, which directly determine the distribution and abundance of the organisms present. For example, a beach is not suited to random sampling because there are clear zones, ranging from a sparsely populated zone near the sea to more densely inhabited areas further up the shore. In this case, the best way of obtaining useful data is systematic sampling.

After placing the tape across the shore, place quadrats at set intervals, such as every 5 metres, then record your data at each one.
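
The systematic equivalent of the earlier sketch just steps along the tape at a fixed interval. The 5 m interval is from the text; the 50 m transect length is an assumed figure:

```python
# Quadrat positions along a transect, one every 5 m (the interval
# from the text); the 50 m transect length is an assumption.
TRANSECT_LENGTH = 50
INTERVAL = 5

positions = list(range(0, TRANSECT_LENGTH + 1, INTERVAL))
print(positions)  # [0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50]
```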

Mobile Species

Mobile species such as shrimps can’t be counted by the quadrat method. Instead, they are investigated using the mark-release-recapture method. This is something I personally did on my field trip for A level:

1. Capture shrimps using nets and count them.
2. Mark them by nipping half their tail diagonally (not proud :D), then release them back where they were caught.
3. After allowing time for them to mix back into the population, capture a second sample and count how many of them are marked.

The more marked individuals you recapture, the smaller the total population is likely to be.
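
The text above doesn't spell out the formula, but the standard estimate used with mark-release-recapture is the Lincoln index. The recapture count sits in the denominator, which is exactly why more marked recaptures means a smaller population estimate:

```latex
\[
  N \approx \frac{n_1 \times n_2}{m}
\]
% n1 = number caught, marked and released in the first sample
% n2 = total number caught in the second sample
% m  = number of marked individuals among the second sample
% Worked example with made-up numbers: n1 = 40, n2 = 35, m = 7
% gives N = (40 x 35) / 7 = 200 shrimps.
```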

Recording Data

Depending on the size and type of organism, data can be collected by counting the organisms present in each quadrat (frequency), or by working out the percentage of the area within a quadrat that a species occupies (percentage cover). Either measure can then be scaled up to the whole area investigated by multiplying.

For percentage cover, you'd count the smaller squares within the quadrat that your target species covers, then convert that number to a percentage (there are 100 smaller squares in the quadrat). A plant covering approximately 25 squares, for example, gives 25% cover. Both these methods are quantitative, giving us 11 plants per quadrat and 25% quadrat coverage, but there is another, less quantitative, more descriptive method called ACFOR.
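
Here's the scaling-up arithmetic as a quick sketch. The 0.25 m² quadrat and 100 m² site are assumed values; only the 11 plants per quadrat and 25 covered squares come from the example above:

```python
# Frequency, scaled up: mean count per quadrat times the number of
# quadrats that would tile the whole site. Both areas are assumptions.
QUADRAT_AREA = 0.25   # m^2, i.e. a 0.5 m x 0.5 m frame
SITE_AREA = 100.0     # m^2

mean_count = 11
estimated_total = mean_count * (SITE_AREA / QUADRAT_AREA)
print(f"Estimated population: {estimated_total:.0f} plants")  # 4400

# Percentage cover: grid squares occupied out of the 100 in the frame.
squares_covered = 25
print(f"Percentage cover: {squares_covered}%")  # 25%
```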

ACFOR is a somewhat subjective system for describing the abundance of species within a quadrat. It follows:

A = abundant
C = common
F = frequent
O = occasional
R = rare

Based on this, we might describe our 25%-cover plants as, perhaps, frequent. Or are they common instead? Maybe just occasional? It's hard to tell, and it depends on what the overall area looks like and what other species are there.
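
To make that subjectivity concrete, here is one possible way to band percentage cover onto ACFOR. The thresholds are purely illustrative assumptions, since ACFOR has no official cut-offs and every survey defines its own:

```python
def acfor(percentage_cover: float) -> str:
    """Map percentage cover to an ACFOR letter. The bands below are
    assumed for illustration -- ACFOR has no fixed thresholds."""
    if percentage_cover > 50:
        return "A"  # abundant
    if percentage_cover > 30:
        return "C"  # common
    if percentage_cover > 15:
        return "F"  # frequent
    if percentage_cover > 5:
        return "O"  # occasional
    return "R"      # rare

print(acfor(25))  # 'F': under these bands, 25% cover counts as frequent
```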

This is why it is important to select the appropriate ecological technique for the ecosystem and organism being studied. For example, if our area contains many different species with scattered distributions, counting every individual of each would take a very long time and might not even be necessary for our analysis. Perhaps we only intend to compare whether two species are equally abundant or not.

In that case, we wouldn't spend time counting small squares to get a percentage cover, but would use the ACFOR scale instead. Another scenario is looking at very small species whose individuals we cannot count at all! Think grass. We would use percentage cover or ACFOR in this case.

In another case, we might have a sparsely populated area with very few individuals per whole quadrat, never mind per little square within it. Here we might prefer simply to count them. ACFOR wouldn't work because it's too coarse and we might end up with all "R"s, while percentage cover would mostly give 0% for empty quadrats, or perhaps 10% where one quite large individual covers several squares. Counting would give the most useful data, as we would get a few whole numbers, e.g. 1 for the first quadrat, 0 for the next, then 2, then 5, then 1.

Once we've obtained our data, if it is numerical, we can analyse it further to determine relationships between the groups studied. There are two key tests for this: the t-test and Spearman's rank correlation coefficient.

The t-test is used for data with a normal distribution (a single peak with fewer data either side) in order to establish whether there is a significant difference between the means of two paired or independent data groups, or between the mean of one data group and its expected value under the hypothesis. The null hypothesis by default states that there is no significant difference between the data groups.

The result of the test (which can be automatically calculated in software including Excel, SPSS, Matlab, R, etc.) comes with several parameters and values, one of the key ones being the p-value. This represents the probability of obtaining a difference at least as large as the one observed if the null hypothesis were true, i.e. purely by chance. Because there isn't a 100% yes-or-no answer, all outcomes are expressed as probabilities. Commonly, a p-value of 5% or less is used as the threshold for deeming a difference significant.

This means that if there is only a 5% or smaller probability of seeing such a difference by chance alone, that is enough to reject the null hypothesis, which stated that there wouldn't be a difference. Any larger, and we fail to reject it.

5% is equivalent to 0.05, which is why a p-value of 0.05 is considered the threshold for drawing conclusions regarding the null hypothesis. Often, t-tests will output p-values that are much lower, such as 0.001. Since these values are always probabilities, and thresholds for rejecting versus not rejecting hypotheses are arbitrary, caution must be taken in how statistical analyses are used in research.
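
For instance, here's the test run in Python with SciPy, one of many tools that will do it for you. The quadrat counts below are invented for illustration:

```python
from scipy import stats

# Plant counts per quadrat at two sites -- made-up numbers.
site_a = [11, 9, 14, 10, 12, 13, 8, 11]
site_b = [6, 8, 5, 9, 7, 6, 10, 7]

t_stat, p_value = stats.ttest_ind(site_a, site_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

if p_value <= 0.05:
    print("Reject the null hypothesis: the means differ significantly.")
else:
    print("Fail to reject the null hypothesis: no significant difference.")
```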

Spearman's rank correlation coefficient is used on two sets of data that are monotonic (connected to each other, either because they increase together, or because as one increases, the other decreases) in order to establish the strength of the correlation.

Positive correlation means they increase together, and is given a positive value between 0 and 1, with 1 being perfect (100%) correlation.

In the example scatter graph, the data give a coefficient of only 0.92 (92%). This is because some data points are slight outliers relative to the rest; notably, towards the right of the graph, the points are spread more widely.

Negative correlation means that as one increases, the other decreases, and is given a negative value between 0 and -1, with -1 being perfect (100%) negative correlation.

The correlation here is very similar in strength (91%) but follows an inverse relationship. As X increases, Y decreases. In both cases, X and Y are very closely correlated.

The catch with this statistical test is that it can only be used on data groups that are already known to be monotonic. The test shows their interaction and the tightness of the correlation, but can't in itself prove any inherent connection between the two data sets. That must already be known in some other way.
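
And a sketch of the calculation in Python with SciPy, using invented shore data that decrease roughly monotonically:

```python
from scipy import stats

# Distance up the shore (m) vs. limpets per quadrat -- made-up data.
distance = [0, 5, 10, 15, 20, 25, 30, 35]
limpets = [21, 18, 19, 14, 11, 12, 7, 4]

rho, p_value = stats.spearmanr(distance, limpets)
print(f"rho = {rho:.2f}, p = {p_value:.4f}")
# rho close to -1 indicates a strong negative correlation, like the
# inverse example above; it still says nothing about causation.
```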

Ok byeeeeeeeeeeee




