
Sunday, 2 April 2017

Perceiving how robot Bob’s brain and body act


[Image: humanoid robot face] (Kuloser, 2013)
I partly credit my burgeoning interest in robotics and human–computer interaction to the portion of my youth I briefly (mis)spent identifying myself as a science fiction fan.  I don’t intend to disparage fans of science fiction; I’m merely pointing out that I never fully committed to being a particularly dedicated one.  I still, nonetheless, unashamedly count Ridley Scott’s Blade Runner (1982) and Spike Jonze’s Her (2013) among my favourite films.  

In light of those interests, I’d like to indulge what remains of my inner science fiction nerd by proposing ‘Bob’: an imaginary human-like robot.  Bob is already seemingly capable of handling natural language, but we want to give him additional human-like characteristics.  As a literary device, Bob inspires my naive musings about some relevant areas of robotics, embodiment and the sensorimotor contingency theory of visual perception (or ‘actionism’).  So, assuming Bob can somehow ‘talk the talk’, how might we help him to ‘walk the walk’?

In his lecture on embodiment and robotics, Rolf Pfeifer (eSMCs, 2012) distinguishes between design considerations appropriate to building robots capable of operating in ‘controlled’ or ‘artificial’ environments (like automated car manufacturing) and those pertinent to creating ‘soft machines’ that might operate in our ‘human’ worlds.  Despite my nerdish enthusiasm for sharing our world with ‘soft machines’, Pfeifer notes that engineering even the most seemingly mundane of human behaviours can prove difficult for roboticists to master.  

Early efforts in artificial intelligence were somewhat hampered by the assumption that, prior to its release into ‘the wild’, a robot’s ‘brain’ would need extensive programming to prepare it for every conceivable environmental encounter.  Yet the lived human experience of our inherently complicated, mutable and uncertain world raises serious questions about whether the exhaustive preprogramming seemingly required for robots to cope with complex and emergent environmental conditions is actually achievable, or even necessary.  

Carrying around a programmed representation of ‘everything’ would require Bob, our robot, to have an impossibly big ‘brain’ and, presumably, a suitably large head to hold it in!  Throughout the course of acquiring worldly experience, in addition to managing and controlling his speech, language, senses and movements, Bob’s already brimming robot brain would be continually adding to his internal model.  Poor Bob’s brain might slow to a crawl while trawling billions of lines of code to find and return an appropriately modelled response to anything he comes across.  As a result, Bob won’t fit in well with ‘real humans’ or cope with unpredictable scenarios.  
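
To make the scaling problem concrete, here’s a toy back-of-the-envelope sketch in Python (my own illustration, assuming purely for arithmetic’s sake that Bob’s world model boils down to a set of binary yes/no features):

```python
# Toy illustration: if Bob's world is described by N binary features
# (door open?, light on?, human nearby?, ...), an exhaustively
# pre-programmed response table needs an entry for every combination.
for n_features in (10, 20, 30, 40):
    n_states = 2 ** n_features
    print(f"{n_features} features -> {n_states:,} states to pre-programme")

# Just 40 everyday yes/no facts already demand over a trillion stored
# responses, before Bob meets a single genuinely novel situation.
```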

Unfortunately, Bob’s oversized head would also locate him firmly within Masahiro Mori’s (1970) Uncanny Valley.  It seems we humans are deeply perceptive of, and attuned to sensing, subtleties in the movement, behaviour and appearance of other humans (especially with respect to the face).  Mori hypothesised that the more human-like a robot becomes, the more critical we are whenever it fails to fully conform to our exacting expectations of its appearance and behaviour.  Many people find markedly life-like robots and ‘virtual’ humans repellent (Tinwell, 2015); see Rob Schwarz’s (2013) blog post for some notable examples of freakish ‘humanoids’.  If Bob’s not going to terrify people, it seems, then, that he ought to convincingly look and act like a person.  

Pfeifer’s fascinating descriptions of rich, complex movement that requires little-to-no ‘control’, made possible by subtly exploiting the properties of physical materials, highlight ways of offloading cognitive processes (previously attributed exclusively to the mind/brain) to other parts of the body.  The deceptively obvious implication is that legs might be constructed so that they bear more responsibility for the act of walking than was previously assumed, i.e. their movements involve more than just a brain ‘controlling’ body parts (Brooks, 2017).  

Relating these ideas back to Bob, we could purposefully redesign his limbs to enable movement in ways that require minimal intervention from his brain.  Understanding bodily configuration as having a far greater role in the production of movement is one of many challenges to assumptions that place minds/brains between stimulus and response.  But can we extend these ideas to identify other candidates for such cognitive offloading?  
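
As a minimal sketch of the idea (my own toy model, not Pfeifer’s), imagine Bob’s leg as a damped spring-pendulum: his ‘brain’ issues a single command, a target posture, and the limb’s passive spring-damper dynamics do the moment-to-moment work of producing smooth movement.

```python
# Toy 'morphological computation': the brain sends one command (a target
# angle); the leg's passive spring-damper physics handles the rest.
def passive_leg_swing(target_angle, steps=200, dt=0.02,
                      stiffness=20.0, damping=2.0):
    angle, velocity = 0.0, 0.0
    trajectory = []
    for _ in range(steps):
        # Spring-damper dynamics, not per-timestep brain control,
        # drive the limb toward the commanded posture.
        accel = stiffness * (target_angle - angle) - damping * velocity
        velocity += accel * dt
        angle += velocity * dt
        trajectory.append(angle)
    return trajectory

swing = passive_leg_swing(target_angle=0.8)   # one command from the 'brain'
print(f"leg settles at {swing[-1]:.3f} rad")  # the morphology did the rest
```

Here the ‘controller’ contributes one number; everything else is physics, which is roughly the sense in which cognition gets offloaded to the body.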

One possible means of increasing Bob’s chances of acceptance, by shrinking his oversized head, is to reduce his ‘brain-load’ to a minimal set of simplified rules for living.  I’m thinking beyond the morass of morals and ethics associated with systems of rules like Isaac Asimov’s (1942) Three Laws of Robotics, quoted below (see Roger Clarke’s (1994) excellent discussion of Asimov’s laws, and the sketch that follows the list).  Yonatan Zunger (2015) explains how such rules are ‘reasonable if your robots are mechanical tools, but less so if they are thinking beings’.  So, just as we did for language, let’s pretend we’ve somehow successfully managed to make a morally robust and ethically sound Bob whilst keeping him as autonomous and ‘free’ as possible.  There are additional insights linking perception and action(ism) that may also help us to further streamline Bob’s suppositious ‘mentalese’ (Pinker, 1994).  
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law (Asimov, 1942). 
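
Purely as an illustration of how little ‘brain’ a prioritised rule stack demands, the laws can be caricatured as layered checks, loosely in the spirit of Brooks-style layered control.  The ‘situation’ fields below are invented for the example, and reordering the clauses yields the alternative orderings Zunger (2015) discusses.

```python
# A caricature of the Three Laws as a prioritised rule stack (illustrative
# only; the 'situation' fields are invented for this example).
def choose_action(situation):
    # First Law: protecting humans pre-empts everything below it.
    if situation.get("human_in_danger"):
        return "protect the human"
    # Second Law: obey orders, unless vetoed by the layer above.
    if situation.get("order"):
        return f"obey: {situation['order']}"
    # Third Law: self-preservation carries the lowest priority.
    if situation.get("self_in_danger"):
        return "protect self"
    return "idle"

print(choose_action({"order": "fetch tea", "human_in_danger": True}))
# -> protect the human  (the higher law subsumes the order)
```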
According to O’Regan and Noë’s (2001) sensorimotor contingency theory of visual perception, what distinguishes our experience of one sense from another are differences in ‘the structure of the rules that govern perception’ for each of our sensory modalities (p.940).  These structures, termed ‘sensorimotor contingencies’, refer to ‘the regularities in how sensory stimulation depends on the activity of the perceiver’ (Degenaar and O’Regan, 2015, p.49).  For example, vision purportedly involves sensorimotor contingencies that are fixed both by the ‘visual apparatus’ and by ‘the way objects occupy three-dimensional space and present themselves to the eye’ (p.946).  We ‘exercise’ (p.943) practical knowledge of the rules governing the sensorimotor contingencies pertinent to vision as we actively ‘probe’ the world we see around us (p.946).

Thus, vision becomes recast as the ‘activity of exploring the environment in ways mediated by [our] knowledge’ of the sensorimotor contingencies appropriate to visual perception (op. cit.).  Rather than our building ‘magical’ (ibid.), internal, pictorial representations, the outside world acts as an ‘external memory store’ (p.950).  Furthermore, ‘For a creature (or a machine for that matter) to possess visual awareness, what is required is that, in addition to exercising the mastery of the relevant sensorimotor contingencies, it must make use of this exercise for the purposes of thought and planning’ (p.944).  Therefore, ‘visual experience only occurs when there is the potential for action’ (p.949).  
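
As a rough sketch of what ‘exercising’ such rules might mean computationally (my own construction, not O’Regan and Noë’s model), consider an agent that stores no picture of an object at all, only the regularity relating its own movements to changes in the sensation the object produces:

```python
import random

random.seed(1)  # reproducible toy run

def sense(distance):
    """Apparent size of a fixed object (true size 1.0) at a given distance."""
    return 1.0 / distance

# The agent acts, senses, and records how sensation co-varies with action --
# the 'regularities in how sensory stimulation depends on the activity of
# the perceiver' -- rather than storing any internal image of the object.
distance = 10.0
contingencies = []  # (action, change-in-sensation) pairs
for _ in range(50):
    action = random.choice([-1.0, 1.0])   # step toward (-) or away (+)
    before = sense(distance)
    distance = max(1.0, distance + action)
    contingencies.append((action, sense(distance) - before))

toward = [delta for action, delta in contingencies if action < 0]
print(f"stepping closer changes apparent size by ~{sum(toward) / len(toward):+.4f}")
# A lawful action->sensation mapping stands in for an inner picture.
```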

Perhaps the work of another of my personal favourites, film director Michel Gondry, can provide a more concrete example.  Describing influences on his distinctive visual style, Gondry recalls how, as a child, he often lay in bed with his hands placed either side of his eyes (Bangs, 2003).  The close proximity of his hands to his face made them appear huge.  Various objects around his bedroom, being farther away, appeared dwarfed by his gigantic hands.  Moving his hands and face around the room, Gondry would imagine himself capable of grasping the tiny (distant) objects with his oversized hands.  We could argue that Gondry has since learned the rules governing the sensorimotor contingencies relevant to his visual perception, and that he applies those rules to distinguish small objects from faraway ones. 
 

An explanation relying on internal representations would require Gondry to hold a pictorial, three-dimensional model of his environment (i.e. his hands, his room and its contents).  Gondry’s brain would compare his internal model of how the objects in his room ‘truly’ are with the (impoverished) visual information presented to his eyeballs.  His brain would then create a new, ‘corrected’ model, derived from the comparison, so that he could comprehend the ‘illusion’ of faraway objects appearing small.  The correction process would be ongoing, constantly recurring as he redirects his gaze (O’Regan and Noë, 2001). 
 

Programming Bob with a rule that faraway things merely appear small, and with ways of implementing that rule, seems much less demanding of his resources than having him tend to internal representations.  So, supposing Bob can act upon his thoughts and plans, it seems useful to provide him with rules for the sensorimotor contingencies applicable to his senses.  Offloading to bits of his body some of the workload necessary for Bob to engage and act upon his senses further alleviates the problems associated with internal representations (and big heads!).  Once again we’ve saved Bob some ‘head-space’ by filling his already downsized brain with rules rather than models. 
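
Indeed, the whole ‘faraway things appear small’ rule amounts to one line of trigonometry (a minimal sketch; the example objects and sizes are mine), with no three-dimensional model of the room to maintain or correct:

```python
import math

def apparent_size(true_size, distance):
    """Visual angle (radians) subtended by an object at a given distance."""
    return 2 * math.atan(true_size / (2 * distance))

# Gondry's childhood game, by the numbers: a hand held beside the eyes
# subtends a far larger visual angle than a bigger lamp across the room.
hand = apparent_size(true_size=0.2, distance=0.1)
lamp = apparent_size(true_size=0.5, distance=3.0)
print(f"hand: {hand:.2f} rad, lamp: {lamp:.2f} rad")  # the hand 'dwarfs' the lamp
```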
 

By overly concerning myself with ‘programming’ Bob’s robot ‘brain’, I may be unwittingly lending my support to the very (anthropomorphic, anthropocentric, psychological and computational) models of minds/brains that more contemporary understandings within cognitive science seek to challenge.  In my defence, however, as I mentioned at the outset, I never intended for Bob to be realised.  Bob is a kind of thought experiment, useful for facilitating my explorations of certain aspects of cognitive science.  I’m hopeful that Bob has helped me to at least acknowledge bodies, environments and brains as actors in the ‘orchestration’ of cognitive processes.  

Knowing how I rarely have answers, only more questions, perhaps it’s best to quietly put Bob back in his box (and maybe me with him).  Don’t be alarmed: if Bob’s not sentient, he won’t really notice. 
 

References
 

Asimov, I. 1942. Runaround IN: Campbell, J. (ed.) Astounding science fiction, 29(1). New York: Street & Smith Publications, Inc., pp.94-103.  

Bangs, L. 2003. Director's series, Vol. 3 - The work of director Michel Gondry [DVD]. New York: Palm Pictures.

Brooks, R. 2017. What is it like to be a robot? Robots, AI, and other stuff: essays, 18 March [Online]. Available from: https://rodneybrooks.com/what-is-it-like-to-be-a-robot/ [Accessed 30 March 2017].  

Clarke, R. 1994. Asimov’s Laws of robotics: implications for information technology [Online]. Available from: http://www.rogerclarke.com/SOS/Asimov.html [Accessed 22 March 2017].  

Degenaar, J. and O’Regan, J. 2015. Sensorimotor theory of consciousness. Scholarpedia, 10(5), pp.49-52 [Online].  Available from: http://www.scholarpedia.org/article/Sensorimotor_theory_of_consciousness [Accessed 28 March 2017].  

eSMCs 2012. Rolf Pfeifer: on the role of embodiment in the emergence of cognition [Online]. Available from: https://vimeo.com/28811223 [Accessed 4 March 2017]. 
 

Jonze, S. 2013. Her [Film]. Los Angeles: Annapurna Pictures. 
 

Kuloser 2013. Human, robot, face [Online]. Available from: https://pixabay.com/en/humanoid-robot-face-1477614/ [Accessed 22 March 2017]. 
 

Mori, M. 1970. The uncanny valley, translated by MacDorman, K. and Kageki, N. 2012. IEEE Robotics & Automation Magazine, 19(2), pp.98–100. 
 

O’Regan, J. and Noë, A. 2001. A sensorimotor account of vision and visual consciousness. Behavioral and brain sciences, 24(5), pp.939-973.  

Pinker, S. 1994. The language instinct: how the mind creates language. New York: William Morrow and Company.  

Schwarz, R. 2013. 10 creepy examples of the uncanny valley. Stranger dimensions, 25 November [Online]. Available from: http://www.strangerdimensions.com/2013/11/25/10-creepy-examples-uncanny-valley/ [Accessed 25 March 2017]. 
 

Scott, R. 1982. Blade runner [Film]. Burbank: Warner Bros. 
 

Tinwell, A. 2015. The uncanny valley in games and animation. Boca Raton: CRC Press. 
 

Zunger, Y. 2015. Asimov’s ‘Three Laws’ and human morality: How the six possible orderings reflect on our moral senses. Science. Politics. Economics. Ethics, 7 December 2015 [Online]. Available from: https://medium.com/@yonatanzunger/asimov-s-three-laws-and-human-morality-12522d7546e4 [Accessed 22 March 2017].

2 comments:

  1. My favorite "robot" is Mr. Data (from Star Trek Next Generation) and my favorite scene is when he is facing a skeptical guest doctor and explains that his name is not pronounced "daat aa" but "da (long a sound) -tah" (And, of course, when he falls in love with Tasha Yar.) How would your Bob react to being called "Robert" or "Bobby"? Is he self-aware?

    Replies
    1. Katrina, I’m sure that Bob the robot wouldn’t object at all! As much as I try to hide my inner science fiction nerd, secretly Bob’s name is actually a nod to ‘Robby the Robot’ from the classic 1950s film ‘Forbidden Planet’ (Wilcox, 1956).

      Mr Data’s a very good choice; Stanley Kubrick’s (1968) screen version of HAL 9000 and Bender from Futurama (2002) are among my personal favourites!

      It’s likely we could have Bob give an ‘impression’ of self-awareness; especially if he was programmed to respond like David Hanson’s conversational robot, ‘Jules’. You can see Jules’ scary response to being ‘switched off’ in Elicia Brandon’s video (2006). As for ‘real’ self-awareness, I doubt Bob will win the ‘Loebner Prize for artificial intelligence’ (2015) any time soon.

      References

      Brandon, E. 2006. Invertuality: Jules says goodbye... YouTube, 20 November [Online]. Available from: https://www.youtube.com/watch?v=xRR33WDFi_k [Accessed 3 April 2017].

      Cohen, D. and Groening, M. 2002. Futurama [DVD]. Los Angeles: 20th Century Fox Home Entertainment.

      Kubrick, S. 1968. 2001: a space odyssey [Film]. Beverly Hills: Metro-Goldwyn-Mayer.

      Loebner, H. 2015. Home page of the Loebner prize in artificial intelligence: the first Turing test. Hugh Loebner’s homepage, 27 April [Online]. Available from: http://www.loebner.net/Prizef/loebner-prize.html [Accessed 3 April 2017].

      Wilcox, F. 1956. Forbidden planet [Film]. Beverly Hills: Metro-Goldwyn-Mayer.
