Monday 7 October 2013

Portraying the opinion of the scientific community in media debates

Consider the case of GM foods or climate change.  I want to know why the public assesses risk differently from scientists and other experts.  It is a problem with many components, of which I will only investigate one: how is the balance of ‘opinion’ within science communicated to the public?  The norm for televised discussions or debates is to have on one side of the table an individual representing the consensus position and on the other side an individual representing the non-consensus position.  I don’t believe that this format gives the public an accurate reflection of what scientists think.

Take for example the recent IPCC report, which states with 95% confidence that humans are the ‘dominant cause’ of global warming.  At the most basic level, I believe that having one scientist advocating the consensus position and one individual representing the 5% of uncertainty leaves the public with the impression that something akin to equal weight should be given to each side of the debate.  I would like to see the physical set-up of the discussion reflect the actual balance of confidence (which in turn reflects the evidence).  This could be achieved by giving 95% of the airtime to the consensus position, but I would not advocate that approach as it would limit the opportunity of the non-consensus position to challenge the consensus position.  I am all in favour of the consensus position being scientifically challenged; it is through challenging and improving generations of older scientific ideas and models that we arrived at the understanding we have today.  Instead I would favour a more literal representation of the balance of opinion.

In the instance of the 95% confidence level, for every individual adopting the non-consensus stance I would like to see 19 individuals on the other side of the table.  The 19 would be represented by one spokesperson, the only individual who would talk on behalf of that side of the debate.  The other 18 could show their support by raising their hands or, at the end of the spokesperson’s statement, by saying ‘I agree’.  This, I believe, would much better represent the balance of opinion.  It would be necessary to ensure that there is no sense of bullying, which I believe the single-spokesperson rule would ensure.  If the person representing the non-consensus stance wished to have another person on their side of the table then they could, as long as 19 more people joined the other side.

As a caveat, I recognise that the 95% confidence figure is not the view of 100% of scientists; roughly 97% of scientists support the IPCC’s position.  A better representation would therefore be to give the stance that humans are responsible for global warming 95% x 97% (which equals 92.15%) of the representation.
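To make the arithmetic concrete, here is a rough Python sketch (the function name, the rounding choice and the idea of converting confidence into odds are my own illustration, not part of any formal proposal) that turns a confidence level into an approximate seating ratio, and then folds in the 97% consensus figure:

```python
def seats_for(confidence, sceptic_seats=1):
    """Approximate number of consensus-side seats per sceptic seat.

    A confidence of 0.95 gives odds of 0.95 / 0.05 = 19, i.e. 19 people
    opposite every 1 person taking the non-consensus position.
    """
    odds = confidence / (1.0 - confidence)
    return round(odds * sceptic_seats)

# 95% confidence -> 19 consensus seats per sceptic
print(seats_for(0.95))        # 19

# Folding in the ~97% of scientists who back the IPCC position:
combined = 0.95 * 0.97        # 0.9215, i.e. 92.15%
print(seats_for(combined))    # roughly 12 seats per sceptic
```

Rounding to whole people is obviously approximate, but the point is only that the furniture should mirror the odds, not a precise head count.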

So far this blog has dealt with an issue of confidence (a measure of scientists’ confidence in their model).  It is not the case that 95% of scientists hold stance ‘A’ with complete certainty while 5% hold stance ‘B’; all scientists who support the IPCC carry some degree of uncertainty, which makes the balance harder to represent.  It is simpler when the question is framed so that each scientist takes a definite position, for example if the debate centred on the question ‘are the ecological risks posed by GM technologies outweighed by the benefits they offer?’.  The same set-up, with proportional representation of scientific opinion, could be used for a debate on that question.

This blog is just off the top of my head, but proportional (physical) representation of the opinion of the scientific community makes intuitive sense to me.  If you have any criticisms then please do leave them below.

Thanks


Saturday 5 October 2013

It is better to be broadly right than precisely wrong.


In this blog I will discuss a recent move by the National Trust to ‘macro-manage’ land in an attempt to get things broadly right over a large geographic scale.  Attempting to manage land for nature can be difficult because nature is so complex; in the face of such complex systems the National Trust’s strategy is not only cost-efficient, it is extremely sensible.  To explain why I think this, I first need to write a little on the nature of randomness.  I will distinguish between three types of randomness: inherent, game and apparent randomness.  All three are alike in a crucial way: they make the future uncertain.

Inherent randomness belongs to the domain of physics, more precisely quantum mechanics, which states, amongst other things, that it is impossible to know both where a (sub-atomic) particle is and where it is going.  There is pure randomness at the root of reality.  Fortunately, as we move to larger scales, e.g. the size of a cell, all of this randomness averages out in a predictable way, so it has no influence on our lives.

Game randomness is the randomness we are most familiar with: the moment before the die is rolled we are uncertain about the outcome; if we pick a card at random from a deck we do not know for certain which card we will pick.  Now, if the die is fair and the deck is a complete one, then we can know with certainty the probability of rolling a ‘2’ (1/6) or picking an Ace (1/13); we know how uncertain the future is, how much we don’t know.

The last type of randomness, apparent randomness, is the most important to this topic.  Apparent randomness is facing an uncertain future due to a lack of knowledge and understanding.  It can be thought of as trying to play cards with a weighted deck (say, one in which all the clubs have been replaced by hearts) whose composition the player does not know.  If the player knew the make-up of the deck then they would be able to predict the probability of different outcomes, but the deck, the randomness generator, is not known to them.  For example, though the weather may be determined by laws, just as the movement of a planet is determined by (Newton’s) laws of motion, our understanding of those laws is such that we are unable to predict it; thus, for all purposes, the future weather appears to us to be, at least slightly, random.  For another example, consider the many precise (but often wrong) predictions made by the Bank of England regarding future inflation, interest rates and unemployment (see here for the result of a very quick Google search).
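As a rough illustration of apparent randomness, here is a small Python sketch (entirely my own toy example): a player draws from a deck whose composition they don’t know, and only by recording many draws can they begin to estimate the ‘randomness generator’ behind the outcomes.

```python
import random

# A deck in which every club has been replaced by a heart -- the player
# doesn't know this, so hearts turn up more often than "they should".
ranks = list(range(1, 14))
weighted_deck = (
    [("hearts", r) for r in ranks] * 2   # hearts appear twice as often
    + [("diamonds", r) for r in ranks]
    + [("spades", r) for r in ranks]
)

draws = [random.choice(weighted_deck) for _ in range(10_000)]
heart_share = sum(1 for suit, _ in draws if suit == "hearts") / len(draws)

# A player expecting a fair deck predicts 0.25; only the long-run
# frequency reveals the true value of roughly 0.5.
print(f"observed share of hearts: {heart_share:.3f}")
```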


The weather is so difficult to predict because it depends upon many ‘units’ (water droplets, the solids around which they form, local air pressures and more) which can interact to form positive feedbacks.  The result is a system which is hugely difficult to predict, in part because a tiny mis-estimation can have far-reaching consequences.  In fact it was a study of the weather which gave birth to the mathematical field of chaos theory.  Chaos theory can be understood by imagining a tennis ball sat atop a large exercise ball in the middle of a sports hall.  If, instead of being perfectly balanced, the tennis ball is placed slightly to the right, it will roll away to the right, perhaps as far as the end of the hall, and vice versa if it is placed slightly to the left.  Thus a tiny mis-estimation of the starting position of the tennis ball has far-reaching consequences for predictions of its future position.  That the weather forms such an unpredictable system is a problem for ecology, conservation and agriculture because the weather plays such a large role in determining what happens at the very base of every foodweb.  Even if the weather were perfectly forecastable, predicting the precise impact of altering a complex foodweb by carrying out a conservation intervention would be nigh on impossible, as is predicting the impact of a government’s economic intervention.
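The tennis-ball picture can be made quantitative with a standard toy model of chaos, the logistic map (my choice of example; the post itself only mentions the weather).  Two starting values that differ by one part in a million end up nowhere near each other after a few dozen steps:

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r * x * (1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)   # a "tiny mis-estimation" of the start

# The two trajectories agree early on, then diverge completely.
for step in (0, 10, 25, 50):
    print(step, round(a[step], 4), round(b[step], 4))
```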

Conservation interventions (outside of academia) are normally carried out on the belief that doing A will result in a change in B.  If A is expensive then it will need to be justified.  The normal approach to this problem is to make a prediction about the change in B which will result from action A.  Unfortunately this usually means making predictions about complex systems which are not completely understood.  The more precise one attempts to be, the more likely one is to be wrong; herein lies the problem.

There are two solutions to the difficulties that forecasting presents:
making vaguer predictions and making no predictions at all.

Both of these solutions run counter to our nature and also counter to the media’s handling of prediction-making in the face of randomness.  Firstly, we cannot help but attempt to predict the future by imagining various scenarios; not making predictions requires great mental effort and self-restraint.  Secondly, when we cast our minds forwards we do so by imagining one possible scenario at a time.  Such a forecasting system is deeply flawed.  A useful forecast is not the sum of one or a few imagined scenarios; it is the (weighted) average of all the possible scenarios, and it includes a measure of uncertainty.
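A minimal sketch of what an ‘average over scenarios, plus uncertainty’ forecast looks like in practice (the numbers and the normal-distribution assumption are purely illustrative):

```python
import random
import statistics

# Imagine many equally plausible scenarios for next year's value of some
# quantity, rather than a single imagined storyline.
random.seed(1)
scenarios = [random.gauss(mu=2.0, sigma=1.5) for _ in range(10_000)]

forecast = statistics.mean(scenarios)   # the (here, equal-weight) average
spread = statistics.stdev(scenarios)    # an honest measure of uncertainty

print(f"forecast: {forecast:.2f} +/- {spread:.2f}")
```

The point is not the particular numbers but that the output carries its own uncertainty with it, rather than offering a single precise, and probably wrong, figure.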


The National Trust’s Wicken Fen (Cambridge) (see here) and High Peak Moors (Peak District) (see here and here) projects are both attempts to manage land whilst making the minimum possible number of predictions.  The National Trust is doing this by using fairly unspecific tools (livestock grazing rather than fine-scale intervention by hand in Cambridge, and ditch blocking in the Peak District) in the hope that these will bring about broad benefits to the ecosystem.  There has been little attempt to predict precisely what those benefits will be.  Instead the introduction of livestock, the ditch blocking and other broad-stroke interventions form lessons from which the National Trust will learn in order to inform future efforts.

Restraining the extent of our meddling with nature, basing this meddling on minimal ecological theorising and instead looking for examples of what has worked in the past may seem an unscientific approach, but it isn’t.  Learning what works is scientific.  If broad approaches can be shown scientifically to work best, scientists are obliged to put aside any inherent preference for complexity and evaluate these approaches in the same way as more complex ones.

Being broadly right means dealing with uncertainty and vagueness, a situation which may not sit well with us at first, but it is surely better than being precisely wrong.