Ecosystems and trophic levels
What is a population? A population is all the individual organisms of one species found in a given habitat. So you could talk about a population of wolves in the woods. If you want to talk about the wolves and rabbits in the woods together, then you’d be referring to a community. A community is made up of the various populations in a habitat, so the sum of all the living things in a given area is called a community. What then is an ecosystem?
An ecosystem comprises the community of living organisms in a habitat, together with all the non-living components such as water, soil, temperature, etc. called abiotic factors.
Why are different organisms of different species able to coexist in the same habitat? How come they don’t directly compete with one another and drive others out? Have a watch…
So that’s the last and loveliest new term: niche. It rhymes with quiche. A niche is the interaction, or way of life, of a species, population or individual in relation to all others within an ecosystem. It’s how it behaves, what it eats, how it reproduces, where it sleeps, etc.; a species’ niche is determined by both biotic factors (such as competition and predation) and abiotic factors.
Different things may determine the population sizes within an ecosystem.
Where do we get all our energy from? Food. Where does the energy in food ultimately come from? Plants. Where does the energy in plants ultimately come from? The Sun. Plants capture light energy and convert it into chemical energy through photosynthesis.
So is all the energy available to all living things on Earth down to photosynthesis? It sure is, my biologist friend, it sure is. Let’s take a humbling moment of meditation while adoring this photo of a plant:
One day that weird-looking thing in the middle will be a pineapple ^_^
But wait. Don’t plants also use their own photosynthesised goodies (glucose) to provide energy for their own business (growth, reproduction, etc.) via respiration, and waste stored energy in their tissues upon their death? Of course they do. So less must be available for whatever eats the plant. And whatever eats the plant will also lose energy through excretion for example, so whatever eats this herbivore will have even less energy available to themselves.
Therefore, at each trophic level in the energy transfer (feeding) hierarchy there is a net loss of energy. This results in a pyramid:
The plants at the bottom are the photosynthesising primary producers. They hold the most energy (Joules) and are fed on by herbivores – primary consumers.
Notice only about 10% of that energy is available one trophic level higher. This is taken by carnivores feeding on herbivores – secondary consumers.
At the very top of the pyramid a mere 0.1% of the original 10,000 J remains (10 J). This is taken by tertiary consumers feeding on the carnivores.
Because such tiny amounts of energy are left at the highest level, it’s rare to find quaternary consumers or above.
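The roughly 10% transfer between trophic levels is easy to sketch numerically. A minimal Python sketch, assuming the 10,000 J starting value above and a flat 10% efficiency at every step (real efficiencies vary between ecosystems and levels):

```python
# Energy available at each trophic level, assuming a flat 10% transfer
# efficiency per level (an approximation; real values vary).
def trophic_energies(producer_joules, levels, efficiency=0.10):
    """Return the energy (J) available at each of `levels` trophic levels."""
    energies = [producer_joules]
    for _ in range(levels - 1):
        energies.append(energies[-1] * efficiency)
    return energies

print(trophic_energies(10_000, 4))  # → [10000, 1000.0, 100.0, 10.0]
```

After just four levels only 10 J remains, which is why quaternary consumers are so rare.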
Notice how the above pyramid is based on energy alone. There are 2 other types of pyramid: biomass and numbers. A numbers pyramid is based on simple counts of individuals e.g. 1,000 plants at the bottom, eaten by 100 herbivores, eaten by 10 carnivores, eaten by 1 top carnivore.
A biomass pyramid on the other hand takes into account the dry mass of the organisms. This can look something like this:
As you can see, there is a clear correlation between these types of pyramid. However, you can get irregular “pyramids”. For example, a single tree can feed several hundred insects, etc.
Each type of pyramid, whether based on energy, biomass or numbers, has advantages and disadvantages. The numbers pyramid is good because you get to know the actual numbers of each species, which is relatively easy to find out and monitor over time e.g. with changing seasons.
On the downside, it can be misleading if juvenile individuals or immature forms of a species are counted, as it doesn’t account for the size of the individual. The biomass pyramid takes care of this issue, but introduces another problem of representation: only a sample can ever be obtained to determine the biomass of a species, and biomass can also vary depending on the time of year.
The energy pyramid is the most accurate, as it shows the bottom line in terms of all the interaction across trophic levels, regardless of numbers, size and other factors that can change. However, obtaining energy data is complex and can be very difficult. You know how they get energy data for how many calories there are in a crisp or whatever? They burn it. Yes.
Measuring species abundance and distribution
Sampling of organisms must be like those annoying, attention-seeking Snapchat friends. It must be random. Random sampling can be carried out using quadrats. If you’re wondering what they are, look no further – they’re squares.
How would you make sure that your sampling is random? In a field, you could lay two long tapes perpendicularly to define the limits of the area where the samples will be taken from.
As you can see above, a tape is laid on one side of the sampling area. As you can’t see above, another tape is laid from one end of the first tape, across on the adjacent side of the sampling area (like a giant L). Then two random numbers are generated using a random numbers table. These numbers are used to determine the coordinates of the first quadrat placed on the field, by matching them on the two tapes. And voila! You have yourself a system for random sampling using quadrats.
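The random-coordinate procedure can be sketched in code. A minimal Python version, where `random.randint` stands in for the random numbers table and the 20 m × 20 m field size is a made-up example:

```python
import random

def random_quadrat_positions(width_m, length_m, n_quadrats, seed=None):
    """Generate one random (x, y) coordinate pair per quadrat,
    read off the two perpendicular tapes, mimicking a random numbers table."""
    rng = random.Random(seed)  # seed only so results are repeatable
    return [(rng.randint(0, width_m), rng.randint(0, length_m))
            for _ in range(n_quadrats)]

# e.g. 10 quadrats in a hypothetical 20 m x 20 m field
positions = random_quadrat_positions(20, 20, 10, seed=1)
print(positions)
```

Each coordinate pair tells you where along the two tapes to place the next quadrat, removing any human bias in choosing "interesting" spots.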
Transects are tapes (like above) placed across an area which has some form of gradient caused by abiotic factors which directly determines the distribution and abundance of the organisms present. For example, a beach is not suited for random sampling because there are clear zones ranging from the low population zone near the sea, to the more densely inhabited areas further up the shore. In this case the best way of obtaining useful data is by systematic sampling.
After placing the tape across the shore, place quadrats at set intervals such as every 5 metres, then take your data down.
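Systematic sampling is even simpler to express: the quadrat positions are fixed in advance by the interval. A sketch, assuming a hypothetical 30 m transect sampled every 5 m:

```python
def transect_positions(transect_length_m, interval_m):
    """Positions (in metres along the tape) at which to place quadrats
    for systematic sampling along a transect."""
    return list(range(0, transect_length_m + 1, interval_m))

print(transect_positions(30, 5))  # → [0, 5, 10, 15, 20, 25, 30]
```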
Depending on the size and type of organism, data can be collected either by counting the organisms present in each quadrat (frequency), or by working out the percentage of the quadrat’s area that a species occupies (percentage cover). Either result can then be scaled up to the whole area investigated by multiplying.
For percentage area, you’d count the smaller squares within the quadrat that your target species covers, and convert that number to a percentage (there are 100 smaller squares in the quadrat). So for example, our green plant would cover approximately 25 squares, giving us a 25% coverage. Both these methods are quantitative, giving us 11 plants per quadrat and 25% quadrat coverage, but there is another less quantitative, more descriptive method called ACFOR.
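Both calculations are simple arithmetic. A sketch using the figures from the example above (25 covered squares, 11 plants per quadrat), with a made-up 1 m² quadrat and 500 m² field for the scaling-up step:

```python
def percentage_cover(squares_covered, total_squares=100):
    """Percentage of the quadrat's area a species occupies,
    assuming a quadrat gridded into 100 smaller squares."""
    return 100 * squares_covered / total_squares

def estimate_total(count_per_quadrat, quadrat_area_m2, total_area_m2):
    """Scale a per-quadrat count up to the whole area investigated."""
    return count_per_quadrat * total_area_m2 / quadrat_area_m2

print(percentage_cover(25))            # → 25.0 (% cover)
print(estimate_total(11, 1.0, 500.0))  # → 5500.0 (estimated plants in the field)
```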
ACFOR is a somewhat subjective system for describing the abundance of species within a quadrat. It follows:
A = abundant
C = common
F = frequent
O = occasional
R = rare
Based on this, we might describe the above scenario for the green plants as perhaps, frequent. Are they common instead? Maybe just occasional? Hard to tell, and dependent on what the overall area looks like, and what other species there are.
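The subjectivity of ACFOR becomes obvious if you try to automate it. A sketch mapping percentage cover to a letter, where the cut-off values are illustrative assumptions only (there is no standard mapping):

```python
def acfor(percent_cover):
    """Map percentage cover to an ACFOR letter.
    The thresholds here are illustrative assumptions, not a standard."""
    if percent_cover > 50:
        return "A"  # abundant
    elif percent_cover > 30:
        return "C"  # common
    elif percent_cover > 15:
        return "F"  # frequent
    elif percent_cover > 5:
        return "O"  # occasional
    elif percent_cover > 0:
        return "R"  # rare
    return ""       # absent

print(acfor(25))  # → "F" under these assumed cut-offs
```

Shift the thresholds slightly and the same 25% cover becomes "occasional" or "common", which is exactly the subjectivity problem described above.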
This is why it is important to select the appropriate ecological technique for the ecosystem and organism to be studied. For example, if our area contains many different species with scattered distributions, we are likely to get many different numbers for each, which might take a very long time, and might not be that necessary for our analysis. Perhaps we are only intending to compare whether two species are equally abundant or not.
In that case, we wouldn’t be spending time counting small squares to get a percentage cover, but rather using the ACFOR scale. Another scenario is looking at very small species that we cannot count individuals for! Think grass. We would use a percentage cover or ACFOR in this case.
In another case, we might have a sparse area with very few individuals per whole quadrat, never mind per little square within it. Here we might prefer to simply count them: ACFOR wouldn’t work because it’s too coarse and we might end up with all “R”s, while percentage cover would mostly give 0% for empty quadrats, or perhaps 10% where one quite large individual covers several squares. Counting would give the most useful data, as we would get a few whole numbers, e.g. 1 for the first quadrat, 0 for the next, 2, then 5, then 1.
Once we’ve obtained our data, if it is numerical, we can analyse it further to determine relationships between groups studied. There are two key tests for this called a t-test and Spearman’s rank correlation coefficient. We have previously looked at the chi-squared test as well.
The t-test is used for data with a normal distribution (a single peak, with data tailing off evenly on either side) in order to establish whether there is a significant difference between the means of two paired or independent data groups, or between the mean of one data group and the expected value assumed in the hypothesis. The null hypothesis, by default, states that there is no significant difference between the data groups.
The result of the test (which can be calculated automatically in software including Excel, SPSS, MATLAB, R, etc.) comes with several parameters and values, one of the key ones being the p-value. This represents the probability of obtaining data at least as extreme as ours if the null hypothesis were true, i.e. if there were really no difference between the groups. Because there isn’t a 100% yes or no answer, all data outcomes are expressed as probabilities. Commonly, a probability of 5% or less is used as the threshold for deeming a difference significant.
This means that if there is a 5% chance or less that a difference this large would arise purely by chance, we reject the null hypothesis (which stated there would be no difference) and conclude that the difference is significant. If the p-value is above the threshold, we fail to reject the null hypothesis.
5% is equivalent to 0.05, which is why a p-value of 0.05 is considered the threshold for drawing conclusions regarding the null hypothesis. Often, t-tests will output p-values that are much lower, such as 0.001. Since these values are always probabilities, and thresholds for rejecting versus retaining hypotheses are arbitrary, caution must be taken in how statistical analyses are used in research.
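The t statistic itself is straightforward to compute by hand; it is the p-value lookup that software handles. A minimal sketch for two independent groups, using entirely hypothetical plant-height data:

```python
import math
import statistics as st

def t_statistic(group_a, group_b):
    """Unpaired t statistic for two independent samples:
    t = (mean_a - mean_b) / sqrt(var_a/n_a + var_b/n_b)."""
    var_a = st.variance(group_a)   # sample variance (n - 1 denominator)
    var_b = st.variance(group_b)
    se = math.sqrt(var_a / len(group_a) + var_b / len(group_b))
    return (st.mean(group_a) - st.mean(group_b)) / se

# Hypothetical plant heights (cm) from quadrats in two conditions
shaded = [12.1, 11.4, 13.0, 12.6, 11.9]
sunny  = [15.2, 14.8, 16.1, 15.5, 14.9]
t = t_statistic(shaded, sunny)
print(t)
# |t| would then be compared against a critical value from a t-table
# at p = 0.05; software such as R or Excel reports the exact p-value.
```

A large |t| (here well above typical critical values of around 2) suggests the difference in means is unlikely to be down to chance alone.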
Spearman’s rank correlation coefficient is used on two sets of data that are monotonic (linked to each other, either because they increase together, or because as one increases, the other decreases) in order to establish the strength of correlation.
Positive correlation means they increase together, and is given a positive value between 0 and 1, with 1 being 100% (perfect) correlation.
You can see these data give a correlation of only 0.92 (92%). This is because some data points are slight outliers relative to most of the others; notably, towards the right of the graph the points are spread more widely.
Negative correlation means as one increases, the other decreases, and is given a negative value between 0 and -1, with -1 being 100% (perfect) negative correlation.
The correlation here is very similar in strength (91%) but follows an inverse relationship. As X increases, Y decreases. In both cases, X and Y are very closely correlated.
The trick with this statistical test is that it can only be used on data groups that are already known to be monotonic. The test displays their interaction and tightness of correlation, but can’t in itself prove any inherent connection between the two data sets. This must already be known in some other way.
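The coefficient itself comes from a simple formula on the ranked data: r_s = 1 - 6Σd² / (n(n² - 1)), where d is the difference between the paired ranks. A sketch with hypothetical transect data (note the classic formula is exact only when there are no tied values; ties are averaged here as an approximation):

```python
def ranks(values):
    """Rank values from smallest (rank 1) upward, averaging tied ranks."""
    sorted_vals = sorted(values)
    return [sum(i + 1 for i, v in enumerate(sorted_vals) if v == x)
            / sorted_vals.count(x) for x in values]

def spearman_rs(xs, ys):
    """Spearman's rank correlation: r_s = 1 - 6*sum(d^2) / (n*(n^2 - 1))."""
    rx, ry = ranks(xs), ranks(ys)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    n = len(xs)
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Hypothetical light intensity vs plant count at 5 transect points
light  = [10, 20, 30, 40, 50]
plants = [2, 4, 5, 9, 12]
print(spearman_rs(light, plants))  # → 1.0 (perfectly monotonic increase)
```

A perfectly monotonic decrease would give -1.0, and scattered, unrelated ranks would give a value near 0.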