Runeson argues that perception is carried out by ‘smart’ perceptual mechanisms. He compares them to the polar planimeter, an instrument that measures the area of an arbitrary shape on a 2D surface such as a map. These smart mechanisms are similar to the affordances that Gibson sees as the basis for perception. Runeson does not see the need for cognitive processes in perception: as in the planimeter, the relationship between the stimulus and the smart mechanism is automatic, arising from the ‘physical realisation’ of the mechanism.
This is fine if the phenomenon under study is perception and the unit of analysis is the smart mechanism. But if we want to dig lower, how does it work? In the planimeter, the net roll of the measuring wheel is directly proportional to the area enclosed by the path the tracing arm follows. A mathematical proof is available.
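To make that relationship concrete: by Green's theorem, the area enclosed by a closed curve equals the line integral A = ∮ x dy, and the planimeter's wheel accumulates exactly such an integral as the arm traces the boundary. Here is a minimal numerical sketch of the identity in Python (the circle, its radius, and its centre are made-up illustrative values, not anything from Runeson):

```python
import numpy as np

def enclosed_area(x, y):
    """Approximate the line integral A = ∮ x dy for a closed, sampled curve."""
    dy = np.diff(np.append(y, y[0]))   # steps in y, with the curve closed
    x_mid = (x + np.roll(x, -1)) / 2   # midpoint value of x on each step
    return np.sum(x_mid * dy)

# Trace a circle of radius 2 centred at (3, 1), as a pointer arm might.
t = np.linspace(0, 2 * np.pi, 10_000, endpoint=False)
x = 3 + 2 * np.cos(t)
y = 1 + 2 * np.sin(t)

print(enclosed_area(x, y))  # ≈ 12.566, i.e. π·2², wherever the circle sits
```

No representation of the shape is consulted anywhere; the area simply falls out of the traced path. That is the sense in which the mechanism is ‘smart’ without computing in any conventional sense.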
Can smart mechanisms be explained at a lower level?
The smart mechanisms emerge from lower-level processes that we can study. We know that light hits the retina and is imprinted on to the visual cortex. What happens then? How do the smart mechanisms emerge? It would seem that some sort of processing must take place; there is some algorithm at work. This is not to say it is digital, but it is surely not the same as in the planimeter, since there are no arms or wheels involved. We have neurons that are activated in direct relation to the object being perceived, and the process allows us to categorise and learn. What happens to give us smart perception? It is fine to skip over the activation of the neurons and consider smart mechanisms and affordances, as long as it is acknowledged that this level is being skipped.
If the logical process is not digital or analogue, maybe it is bio-logic. It is driven by data from our senses, and it supports all the processes from the lowest level of imprinting light on the neurons, through smart mechanisms, to higher-level conscious processes such as attention and emotion. Higher-level processes emerge from lower-level processes in some extremely complex manner.
During development, data is experienced and allows the discovery of invariants such as those Runeson describes: the time to collision is specified by the rate of expansion of the image on the eye; the time to jump a fence depends only on the height of the fence. Complex dynamic systems, neural networks, and machine learning are data-based systems as opposed to rule-based ones. They show the ability to find hidden and latent relationships in data. These relationships appear similar to invariants in that they are based not on rules or theories, but just on the nature of the data itself.
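The collision invariant can be written down directly. In the ecological literature this quantity is known as tau: for an object approaching at constant speed, the ratio of the optical image's size to its rate of expansion equals the remaining time to contact, with no need to know the object's size, distance, or speed. A minimal sketch, with made-up values for size, speed, and starting distance:

```python
import numpy as np

# Tau: for an object of size S approaching at constant speed v from
# distance d, the (small-angle) image size is theta ≈ S/d, and
# theta / (d theta / dt) works out to d/v, the time to contact,
# even though S, d and v are never registered individually.

S, v, d0 = 0.5, 10.0, 40.0              # made-up size (m), speed (m/s), distance (m)
dt = 0.01
times = np.arange(0.0, 2.0, dt)
theta = S / (d0 - v * times)            # optical image size over time

tau = theta / np.gradient(theta, dt)    # time to contact from optics alone
true_ttc = (d0 - v * times) / v

print(tau[50], true_ttc[50])            # both ≈ 3.5 s at t = 0.5 s
```

The point, in Runeson's terms, is that the perceiver need not measure distance and speed and divide one by the other; registering a single optical variable suffices. A data-driven system fed such optical histories could likewise settle on the relationship without ever being handed the rule.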
The planimeter differs from perception in that the intelligence it displays did not emerge from it but was designed into it, whereas smart perceptual mechanisms emerge. Runeson's analysis of the planimeter allows us to understand that laws can be implemented in ways other than those familiar from formal disciplines such as mathematics and physics. It allows us to appreciate that there is a logic in the biological that does not have to be the same as the logics with which we are more familiar.
> We know that light hits the retina and is imprinted on to the visual cortex
Not really. Light is not imprinted onto the visual cortex, and to suggest anything of the sort is really very misleading. The electrochemical activity of the CNS is related to the changing flux on the retina, which in turn is lawfully related to the activity of the organism in a specific environment. No light ever goes near the brain, and vision is not a matter of image transfer.