There are many ways of evaluating a theory: simplicity, testability, fruitfulness, power to unify, and so on. So how does embodiment stack up?
Traditional cognitive science has, in some respects, had a lot of success and a proven track record. It has deepened our understanding of the mind in an unprecedented manner, and it has the power to unify perception, attention, memory, and language under a single explanatory framework. Embodiment's ability to be applied equally well across the range of cognitive phenomena has yet to be proven, but it is very early days. Appeals to concepts such as affordances, meshing, world-making, and so on have, as yet, an uncertain status. That a cat and I might conceive of the world differently is a difficult theory to test. Virtual reality technologies may present possibilities for testing in this area, but any findings would still be speculative at best. Traditional cognitive science is testable, and in certain cases experimental results from studies in embodiment can equally be explained by traditional cognitive science, while embodiment's own explanations remain less certain.
It must, however, be conceded that traditional cognitive science does not do a good job of explaining all varieties of cognition.
Perception, pattern matching, and Rodney Brooks's (1997) creatures appear to be better explained by dynamical methods than by traditional cognitive science and symbolic representation. There are, however, representation-hungry problems, requiring abstraction, that are not adequately served without representations.
If you’ve read my previous posts along with the title,
you’ll know I’ve been accompanied in this exploration of embodiment by a
cat. In respect of the title of these posts, I am aware that my surname doesn't sound as impressive as the Austrian physicist's, but I see no reason why Schrödinger's moggy should be the only feline philosophical device in town. So
here’s the thing. The cat died in September 2016, and I’m writing this in April
2017. There is no cat beside me as I
write. She never lived in the house I live in now and, to my recollection, never saw a laptop or a cursor. So apart
from calling me out as a liar, how do we explain what I’ve been doing? I spent my childhood in the company of cats.
I know their movements well and even though this particular cat never saw a
cursor on a laptop, I have no difficulty in visualising her in point mode,
following a cursor's movements around a screen, as I have seen other cats do. This is what Clark and Toribio (1994) would describe as a "representation-hungry" exercise, one requiring strong internal representations, and as such it is not well served by a dynamical systems explanation or
sense-act interactions. While representations are not required for some kinds of activity, it does not follow that cognition never requires representational states; as such, embodiment can only claim a portion of the explanations that traditional cognitive science seeks to address. Could dynamical systems instead be adopted as an additional tool for traditional cognitive science, broadening its approach and thereby its explanatory powers, rather than completely replacing it?
In this pursuit, could representations in embodiment take a
different form? A problem I have had from the beginning of this exploration
into embodiment is the idea that representations are seen as discrete entities
that can be extracted from a system. It is a view perhaps born of the early days of traditional cognitive science, though I am not certain it still holds. While connectionism in its current
form is criticised for its lack of body and world, it does show that the manipulation of discrete symbols is not the only means of processing inputs. A connectionist network transforms inputs into outputs in a distributed fashion and, in a manner, represents something about the world through the state the model enters into. The weights of the connectionist model can be interpreted as a form of knowledge detailing how inputs should be treated in pursuit of outputs. A
representation is a stand-in for something else. If I get freckles from staying
in the sun too long without sun-blocker, my skin is carrying information about
my time in the sun. In a similar manner, a state of me and my nervous system
comes to represent something in the real world, when it consistently correlates
with that thing in the real world. But perhaps the representational mechanism must also involve a functional role, coordinating activity with the environment, beyond mere correlation. As Clark (1997) put it, "the system must be capable of using these inner states or processes to
solve problems off-line, to engage in vicarious explorations of a
domain, and so on [...] Strong internal representation is thus of a
piece with the capacity to use inner models instead of real-world action
and search".
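The connectionist point above can be made concrete with a toy sketch. This is my own illustration, not drawn from any of the cited papers: a single threshold unit trained with the classic perceptron learning rule on the logical AND relation. No symbolic rule for AND is ever stored; whatever "knowledge" the unit has lives, distributed, in its weights.

```python
# Toy illustration of the connectionist idea: no discrete symbols are
# manipulated; "knowledge" of the AND relation ends up in the weights.

# Training data: the logical AND of two binary inputs.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # connection weights
b = 0.0         # bias
lr = 0.1        # learning rate

def predict(x):
    # The unit's output is just a weighted sum passed through a threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: nudge the weights toward correct outputs.
for _ in range(20):
    for x, target in data:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

# After training, the weights "stand in" for the AND relation:
# the network reproduces it without ever storing a symbolic rule.
print([predict(x) for x, _ in data])  # → [0, 0, 0, 1]
```

Whether the trained weight vector counts as a representation in the strong, Clark-style sense is exactly the question at issue: it correlates reliably with the AND relation, but it plays no off-line, decoupled role of the kind the quotation above demands.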
It would seem, then, that a redefinition, or at least a clarification, of what is meant by representation is required, and
without it, embodied accounts of cognition, for all the possibilities that they
open up, become limited in their ability to explain mental images, the ability
to learn from previous experiences, examine options when making a decision, and
myriad other cognitive activities that require something that at least acts as
a representation would.
References
Brooks, R. (1997). Humanoid Robots: The Cog Project. Journal of the Robotics Society of Japan, 15(7), pp. 968–970.
Clark, A. and Toribio, J. (1994). Doing without representing? Synthese, 101(3), pp. 401–431.
Clark, A. (1997). The Dynamical Challenge. Cognitive Science, 21(4), pp. 461–481.