
Thursday 21 February 2013

That is not an apple


Inspired by reading Francois’ post about the Google “Brain” project and the neural network that was able to recognize faces and “other high-level concepts” with an accuracy of 15.8% (I leave it to the reader to decide whether that is impressive or not), and probably also in the mindset of various posts discussing our fear of technology, I remembered a project proposal by the artist Adam Harvey that I came across a while ago: he tried to find out when an apple stops being an apple.

Or, less radically: what does someone have to do to make an apple stop appearing as one to the Google Image Search algorithm? The answer seems to be: not much. Putting a few colourful dots on it is enough to obscure it from the machine (and, asked to find similar pictures, Google Image Search comes up with lots of colourfully dotted toys instead of apples).
On the technical side, this only seems to show that the similarity judgements used by Google Image Search do not rest on the same mechanisms humans use to discriminate between different objects and to categorize them (the underlying shape of the apple, its possible practical purposes, and its biological origin make it clear to us that we are looking at an apple, regardless of how many dots it has on it).
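
To make that a bit more concrete, here is a minimal Python/OpenCV sketch of the kind of cheap, low-level feature such a similarity search might lean on. The file names apple.jpg and apple_dotted.jpg are made up for illustration, and the colour histogram is my toy stand-in, not Google’s actual pipeline:

    import cv2
    import numpy as np

    def colour_histogram(path):
        """A 3D colour histogram: the kind of cheap, low-level
        feature an image-similarity engine might lean on."""
        img = cv2.imread(path)
        hist = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        cv2.normalize(hist, hist)
        return hist

    # Hypothetical stand-ins for Harvey's plain and dotted apples.
    plain  = colour_histogram("apple.jpg")
    dotted = colour_histogram("apple_dotted.jpg")

    # Correlation of 1.0 means identical histograms. A scatter of
    # bright dots injects colours the plain apple never had, so the
    # score drops, and with it any histogram-based notion of
    # "similar picture".
    score = cv2.compareHist(plain, dotted, cv2.HISTCMP_CORREL)
    print(f"colour-histogram similarity: {score:.3f}")
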
I wonder whether it would be possible to design search algorithms that actually try to mimic the way humans conceptualize objects (it is far from clear to me how exactly we do it), or whether there are faster and more reliable ways to let algorithms conceptualize things that would still produce outcomes similar to humans conceptualizing the same set of things.
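
Just to gesture at what a slightly more “human-like” comparison might look like, here is one naive idea: compare the outline of the object instead of its colours. Again this is only a toy sketch with the same made-up image files; Hu-moment shape matching is a standard OpenCV facility, not anything Google or Harvey actually use:

    import cv2

    def outline(path):
        """The largest external contour: a crude stand-in for
        'the underlying shape of the apple'."""
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        # Otsu thresholding assumes the apple stands out from the
        # background; good enough for a sketch.
        _, mask = cv2.threshold(img, 0, 255,
                                cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        # OpenCV 4.x return signature assumed here.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return max(contours, key=cv2.contourArea)

    # matchShapes compares Hu moments of the two outlines; 0.0 means
    # identical. Dots on the apple's surface barely touch its
    # silhouette, so this distance stays small even when the colour
    # histograms above have already stopped agreeing.
    d = cv2.matchShapes(outline("apple.jpg"),
                        outline("apple_dotted.jpg"),
                        cv2.CONTOURS_MATCH_I1, 0.0)
    print(f"shape distance: {d:.4f}")
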

But Harvey, in looking for “new visual mutations that satisfy human perception and override machine vision”, has a different goal, similar to the one explored in this other interesting project: to show how we are superior to machines, how we can get around the technical mechanisms that were envisioned and implemented by humans, and how we can protect ourselves against them if they ever become our enemies. He pictures machine-recognition-enabled enemies such as autonomous killer drones, or other, less deadly uses of face recognition. Understanding the machines’ deficiencies, then, may preserve some of our privacy once face recognition technology becomes sufficiently elaborate and widespread.
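
The face recognition side of this is easy to poke at yourself. The sketch below runs the stock Haar-cascade frontal-face detector that ships with OpenCV (assuming a recent opencv-python install; portrait.jpg is a placeholder I made up). Detectors of this kind key on a few coarse light/dark patterns around the eyes and nose bridge, which is exactly the sort of deficiency Harvey’s camouflage exploits: occlude those patterns and the machine loses the face, while humans still see one plainly.

    import cv2

    # The stock frontal-face Haar cascade bundled with opencv-python.
    # It keys on a few coarse light/dark patterns (eye sockets, nose
    # bridge), not on "a face" as a person conceives it.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("portrait.jpg")  # hypothetical input image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                     minNeighbors=5)

    # Paint over the nose bridge or drape hair across one eye and
    # this list tends to come back empty, even though any human
    # still sees a face immediately.
    print(f"{len(faces)} face(s) detected")
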

A bit sarcastically, one could conclude that it is always a good business idea to invent something that makes new technology “unusable” again, since after some time we (or at least someone) will feel the need to circumvent it.

I’m glad that I finished this post before my laptop shuts itself down thanks to Work-Life Balance Equipped Computers, a great application to stop me from working too much…

1 comment:

  1. In nature, organisms with pretty limited cognitive ability can work out how to "hide the apple" by confusing or subverting the algorithms that predators or prey use. For example, some moths have decoded bats' echo-location algorithms: at exactly the moment beyond which the bat can't correct its course, the moth closes its wings and drops from the bat's flight path. Incredibly, it is reading the bat’s active tracking system and has perfected its response to the information stream.

    Plants too can work out what insects are "thinking": carnivorous plants, for example, lay honeydew bait at the opening of a one-way trapdoor to a pseudo-stomach, where the insect (and sometimes even a small mammal or frog) is slowly digested. In the case of the plants it is probably possible to argue that, through countless generations, plants that incidentally trapped insects out-performed those that didn't, and hence no cognition was involved. But if that is true, at what stage is it possible to attribute the beginning of cognition? Presumably we too evolved through the reproduction of ever more sophisticated elaborations of primitive, random cognitive mutations.

    If a moth can read a bat’s echo-location signals and take evasive action that relies on a precise model of the bat’s momentum and manoeuvring envelope (moths don’t study physics), and a plant can set a trap for insects based on “knowing” their attraction to (very scarce) sugars, then fooling Google by putting spots on an apple helps us understand the constant interplay of predator and prey. For example, is it possible for a corporation like Google to evolve a predatory capability without anyone consciously setting about that objective? If not, why the need for the motto “Don’t be evil”? Did the founders anticipate a corporate distributed cognition that was not the objective or competence of any individual, but the inevitable outcome of thousands of “generations” of survival practices employed daily to keep the dollars flowing?
