
Sunday 17 April 2016

Making sense of a Bayesian approach to the world.


With “Perceptual decision making: drift-diffusion model is equivalent to a Bayesian model” (2014), Bitzer et al. apply the Bayesian approach to the problems we face when we approach the world as perceiving decision makers. Their target of attack in this paper is the drift-diffusion model of perception, or at least the drift-diffusion model of following a dot as it traverses a computer screen under experimental conditions.

In this kind of experiment, a dot moves across the screen along some vector, and its local position at time t is also subject to Gaussian noise centred upon that vector.
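To make the setup concrete, here is a minimal sketch of such a stimulus in Python (my own illustration, not the authors’ code), assuming a fixed drift direction theta and independent Gaussian noise of standard deviation sigma added at each time step:

    import numpy as np

    def simulate_dot(theta=0.6, speed=1.0, sigma=0.5, n_steps=100, seed=0):
        # theta: the true drift direction (radians) that the observer must infer
        # speed: distance travelled along that direction per time step
        # sigma: standard deviation of the Gaussian noise around the true path
        rng = np.random.default_rng(seed)
        drift = speed * np.array([np.cos(theta), np.sin(theta)])
        true_path = np.cumsum(np.tile(drift, (n_steps, 1)), axis=0)
        observed = true_path + rng.normal(0.0, sigma, size=true_path.shape)
        return true_path, observed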


The participant stares at the screen for a sufficiently long time such that they can infer what direction the dot is traveling in, despite all the associated noise. This is significant because the trajectories of such dots have been associated with “[t]he mean firing patterns of single neurons, for example, in lateral intraparietal cortex (LIP) of non-human primates”.

The authors then tested the coherence of their Bayesian model against other models that had been used on data from participants in drift-diffusion experiment trials (in which they had either received a pulse of transcranial magnetic stimulation, or had not).

The new Bayesian model was at least as good at modelling the responses of participants as the old “pure Drift Diffusion Model” (pDDM).

How is this relevant to thinking about what we do when we are thinking?

Bayesian methods are useful for the formal modelling of things one is uncertain about. The method works by looking for events and gathering data related to those events; we then estimate the likelihood that a particular event has occurred, given a prior model which has a defined set of possible events and their probabilities of having occurred. Then, given the belief (based upon data X and model M at time t) that event E has occurred, model M is updated with this information for time t + 1.
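As a toy sketch of that update rule (my own, and not the model in the paper), assume a small discrete set of candidate events, here candidate drift directions, and a Gaussian likelihood with noise level sigma; the names and values are purely illustrative:

    import numpy as np

    def bayes_update(prior, observation, candidates, sigma=0.5):
        # prior:        current belief over the candidate events (sums to 1)
        # observation:  the datum gathered at time t (here, a 2D displacement)
        # candidates:   the displacement each candidate event would predict
        # sigma:        assumed standard deviation of the observation noise
        sq_err = np.sum((candidates - observation) ** 2, axis=1)
        likelihood = np.exp(-sq_err / (2 * sigma ** 2))   # Gaussian likelihood, up to a constant
        posterior = prior * likelihood
        return posterior / posterior.sum()                # becomes the prior at time t + 1

    # e.g. eight candidate drift directions, starting from a flat prior
    angles = np.linspace(0.0, np.pi, 8)
    candidates = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    belief = np.full(len(angles), 1.0 / len(angles))
    observation = np.array([0.9, 0.1])                    # one noisy displacement
    belief = bayes_update(belief, observation, candidates)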

This is a mechanism which is very good at estimating how to weight the likelihoods of the parameter values in your model, provided you have included all the possible parameters and have a lot of data to play with. The underlying assumption of this analytic paradigm is that it is also a good model of perception and decision making.

To an extent it is useful; it incorporates prior understanding of the world, and it is capable of ‘learning’ given new inputs.

How does it cope with the challenges presented by the post-cognitive perspectives on sense making?

From certain perspectives it may be useful for modelling what occurs when we learn. Certainly, if we use it as a toy model, it could be useful for describing what happens when we are on auto-pilot: those circumstances where you drive home, pay no conscious attention to the route along the way, but successfully arrive at the destination.

Nothing surprising occurred; therefore your model worked perfectly. If, however, a car which you assumed would not move (because it was stopped at a red light) suddenly breaks the red and turns left, this may force you back into the moment and make you think about what happened: you register that the car had what looked to be Estonian licence plates, you think about crossing roads on the continent and how their rules regarding red lights differ, and as a result of this causal chain you update your mental model of how cars with continental licence plates are likely to behave at a traffic junction.

Is it that there is a “continental cars at traffic lights” representation, which is a sub-category of “cars”, or of “cars at traffic lights”, or is there instead a Gibsonian affordance of “threat that may kill me” which is attached to cars of the continental sort? The Bayesian model is agnostic insofar as it doesn’t attempt to describe the biological mechanism; it is an abstraction of a process.

But, if we look at how it is used by Bitzer et al., the Bayesian model is not unproblematic. While they introduce a subjective view upon the world, and one which is updateable, there are a few concerns that ought to be teased out.

Bitzer et al. assume that we see the world relatively transparently, though we pick up a bit of noise as the signal passes through the tissue of the brain, as if we saw it through a dusty window (presumably en route from the eyes to V1).

A difficulty that I’d have with this assumption is that we often see what we expect to see. Consider the bunny-duck image above. It has long been noted that there is a stronger likelihood of reporting that we see a bunny (as compared to a duck) during the Easter period than during Halloween (in the United States at least).

This is a difficulty for the “transparent picture of the world” model that Bitzer et al. intuit.
Such framing introduces a new step into the process of how we make sense of the world, a step we must account for in order to elicit the details required to test the model of the process we wish to use.
These framing factors must either be first-order parameters of the model we use, or deductions about what we ought to be looking for based upon the model we are using (and thus second-order consequences of parameters which we believe to be important, and so include in the model).

If the data which is used to test the model is also a function of the model to be tested, there is the risk that tautologies will develop.

It is true that a model can be both useful and unfalsifiable: so long as everything can be contained within the model without contradiction, its utility is all that matters. However, such a model cannot be a model of learning, merely of optimisation under specified assumptions; consequently the Bayesian model may be of limited utility in describing scenarios other than once-off games.

2 comments:

  1. To which, I can't help adding this: https://twitter.com/ziyatong/status/721411868393390080

  2. Jona Vance's paper Cognitive Penetration and the Tribunal of Experience does a good job explaining the Bayesian framework, independent of commitment to modularity or embodied accounts. My take is that it is supportive of the arbitrary division of cognition/perception
