To the man with a hammer, everything looks like a nail. To the person looking for a pattern, it won't be long before they find one. This is down to a psychological tendency to perceive meaningful connections in otherwise meaningless data. Fascinations with the numbers 3, 23 and 666 have caused people to unconsciously seek out and find these numbers occurring in dates, co-ordinates, numbers of freckles and so on. Fortunately the pseudo-science of numerology is down to an error in perception and not based in fact, or else we would all be dead many times over from multiple apocalypses. The name given to this error in perception is apophenia.
An exceptional case of apophenia was that of John Nash, the brilliant mathematician portrayed in the film 'A Beautiful Mind'. Nash's talent for spotting patterns became pathological when he developed schizophrenia and began hearing voices in random noise and hallucinating. White noise can be made to yield a message if you listen for one hard enough, and pop culture is full of claims that 'hidden' messages can be found in songs played backwards.
Sunday, 26 May 2013
Saturday, 25 May 2013
The Cultural Influence of Perception
It may appear clear as day what image is being portrayed in the adjacent picture, but what you see here is very much a result of your cultural upbringing. The constraints of your physical environment have such a conditioning effect that you will likely assume the image shows a white family sitting in a room near the corner, that the animal is a dog, and that over the woman's head on the left is a window. However, when this image is shown to viewers in parts of Africa, many see a black family sitting outdoors: the room corner becomes a tree and the window an object being balanced on the woman's head.
Good or Bad Technology?
In this course so far we have had a very wide-ranging discussion of many topic areas, and one theme that has kept coming up is technology and the contribution of new technology to science and society. We have seen things like drones, which pose dangers, but also things like attempts to build an artificial brain, which have great potential for positive change. Today I want to talk about a new system, discussed in this article, which could have a big impact on education systems: automated marking.
This kind of system is already well established in online courses, where grading software automatically picks out key words in a text and assigns a grade without the work ever being read by a person. However, this article outlines how four different states in the USA have introduced this kind of software into their secondary school systems. The article goes on to discuss whether this kind of grading will discourage creativity and proper essay writing.
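To make the idea concrete, here is a minimal sketch of keyword-based grading in Python. It is my own illustration of the general approach described above, not how any particular vendor's software works; the rubric keywords and weights are invented for the example.

```python
# Minimal sketch of keyword-based automated grading (illustrative only; not
# how any particular vendor's software actually works). The rubric keywords
# and weights below are made up for the example.
RUBRIC = {
    "photosynthesis": 2.0,
    "chlorophyll": 1.5,
    "carbon dioxide": 1.5,
    "sunlight": 1.0,
}
MAX_SCORE = sum(RUBRIC.values())

def grade_essay(text: str) -> float:
    """Return a score out of MAX_SCORE based on which rubric keywords appear."""
    text = text.lower()
    earned = sum(weight for keyword, weight in RUBRIC.items() if keyword in text)
    return round(earned, 2)

if __name__ == "__main__":
    essay = "Plants use sunlight and carbon dioxide to drive photosynthesis."
    print(f"Score: {grade_essay(essay)} / {MAX_SCORE}")  # Score: 4.5 / 6.0
```

A student who knows nothing about the topic but sprinkles in the right vocabulary would score well here, which is exactly the worry about creativity and proper essay writing raised in the article.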
Friday, 24 May 2013
Signposts
In the attempt to understand, labels are used; however, they can be, at best, a partial glimpse of a given phenomenon from a given perspective at a given time.
They're useful, like signposts, but signposts shouldn't be taken too seriously and if history teaches us one thing it's that boundaries tend to be redefined over the course of time.
The (recently mentioned) launch of DSM-5 is a road atlas of many of these place names, an attempt to reduce complex experiences to given conditions, and this in turn often leads to a search for a physical correlate, e.g. a gene. A recent Science News article explains that none has yet been found for depression.
Bayes' Theorem
I decided to write an article about Bayes' theorem, as it is a recurrent topic in artificial intelligence and understanding it can be quite useful in everyday life (especially if you like gambling). The best example to introduce Bayes' theorem is the Monty Hall problem. Let's play a game! There are three doors, one of which hides a reward. You must first pick one of the doors without opening it, so you do not know what it contains. Then, following the script, the host opens one of the two doors you did not select, and that door never hides the reward. Now the real problem starts: would you exchange the door you selected for the other remaining door, or would you keep it?
Most people tend to say that it makes no difference, but it is actually better to switch to the last door: switching wins whenever your first pick was wrong, which happens two times out of three. You're sceptical? So was I when I first learnt about Bayes' theorem, but the reasoning has been shown to hold very well. The fact that people do not easily grasp Bayesian reasoning is simply because the human brain is not optimised for rational decisions.
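If you are still not convinced, it is easy to check by brute force. Below is a minimal simulation (my own sketch, not part of the original argument) that plays the game many times with and without switching.

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """Play one round of the Monty Hall game; return True if the player wins."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    first_pick = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the prize.
    opened = random.choice([d for d in doors if d not in (first_pick, prize)])
    if switch:
        final_pick = next(d for d in doors if d not in (first_pick, opened))
    else:
        final_pick = first_pick
    return final_pick == prize

if __name__ == "__main__":
    trials = 100_000
    for switch in (False, True):
        wins = sum(monty_hall_trial(switch) for _ in range(trials))
        print(f"switch={switch}: win rate ~{wins / trials:.3f}")
    # Typically prints ~0.333 when keeping the first door and ~0.667 when switching.
```

The simulated win rates match the Bayesian answer: sticking wins about one time in three, switching about two times in three.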
Life and Death of Neurons
A recent study by Cusack et al., published in the May issue of Nature Communications, sheds light on the mechanisms underlying axon pruning and apoptosis. Axon pruning is an essential mechanism as it allows faulty connections to be severed; it is also part of normal development and has an important role to play in learning and memory. Although essential for the well-being of our brains, axon pruning can be a dangerous gamble, as the poison released by the neuron to sever its axon could kill the entire cell if released improperly. Apoptosis, a more radical mechanism consisting of the intentional destruction of an entire cell, is also sometimes required, as it permits the weeding out of broken or incorrectly located neurons that could have a negative impact on the body.
P300
With the recent and much-publicised work on electroencephalogram (EEG) brain-computer interfaces, I was quite intrigued by the fact that brain signals were only being exploited to move bionic arms (cf. Professor Kevin Warwick's work) or cursors on graphical interfaces. That's why I decided to investigate, and I came across a very special brain signal called the P300. This signal is elicited unconsciously every time an object is recognised, as millions of neurons fire at the same time. I think this study was worth reading, as the way they use EEG is unlike anything I have previously seen.
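For readers curious about how a signal like the P300 is usually pulled out of noisy EEG, here is an illustrative sketch (my own, not the method of the study linked above): average many stimulus-locked epochs so that random background activity cancels out and the event-related peak near 300 ms remains. The sampling rate, epoch window and amplitudes are made up for the example.

```python
import numpy as np

# Illustrative sketch of ERP averaging (not the study's method): simulate noisy
# single-trial epochs containing a small positive peak near 300 ms, then average.
FS = 250                               # sampling rate in Hz (assumed)
EPOCH = np.arange(0, 0.8, 1 / FS)      # 0-800 ms after stimulus onset

def simulate_epoch(rng: np.random.Generator) -> np.ndarray:
    """One noisy single-trial epoch with a ~5 microvolt bump around 300 ms."""
    p300 = 5e-6 * np.exp(-((EPOCH - 0.3) ** 2) / (2 * 0.05 ** 2))
    noise = rng.normal(0.0, 20e-6, EPOCH.size)   # background activity swamps the bump
    return p300 + noise

rng = np.random.default_rng(0)
average = np.mean([simulate_epoch(rng) for _ in range(300)], axis=0)
peak_ms = 1000 * EPOCH[np.argmax(average)]
print(f"Averaged peak at ~{peak_ms:.0f} ms")  # close to 300 ms once the noise averages out
```

In a single trial the bump is invisible under the noise; only after averaging hundreds of trials does the characteristic positive deflection around 300 ms emerge.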
"Rubber Hand" illusion could give prosthesis a sense of touch
Merleau-Ponty said: "The body is the vehicle of being in the world, and having a body is, for a living creature, to be intervolved in a definite environment, to identify oneself with certain projects and be continually committed to them." The importance of this insight is that what makes a limb yours is not just the limb itself, but its involvement in "projects". These projects are the everyday activities, in the context of the environment, that your combined senses apprehend. You hear not only with your ears but with your eyes too, as demonstrated by the McGurk effect. So too you feel with your eyes, as in Botvinick & Cohen's "rubber hand" experiment. This last one in particular has very real applications in robotics.
Sleeping on The Job
In my experience most employers want (or at least say they want) creative employees, and there seems to be a general acceptance that it's possible to make someone creative. To this end there is no end of training companies providing creativity training. However, when you've been around as long as I have (Digital, Compaq, HP, Microsoft) you get to see a pattern: two examples keep recurring. The No. 1 example cited is 3M's 20% creativity time (recently emulated by Google), which resulted in the development of Post-Its. The other classic example is the potential of hypnagogia, the altered mental state between sleeping and waking, during which Kekulé correctly hypothesised that the structure of benzene was a closed ring, after imagining the molecules forming into snakes that swallowed their own tails while he was half asleep in front of his fire.
The fact that the benzene ring was hypothesised nearly 150 years ago, and that even the Post-It example is almost 40 years old, tells me that induced creativity doesn't have a great hall of fame to refer to. Undaunted, however, I've been looking into hypnagogia....
Wednesday, 22 May 2013
Psychology and Psychiatry
I once knew a guy who was an aircraft engineer, by all accounts very good at a job he enjoyed doing. However, he didn't like flying; in fact he was afraid of it and had only attempted it once or twice in his life.
The thing is, that didn't make him any less of an engineer, nor does being a confident frequent flyer make anyone an expert in mechanics or avionics.
There may be some argument that if he had worked on designing aircraft interiors, his lack of flying experience might have made it difficult for him to get the ergonomics just right for flyers. But even then, his inexperience might have allowed him to see the aircraft differently from someone desensitised by regular travel.
Annoying Noises Prohibited Here...
A good few years ago my cousin and I were chatting and the subject turned to tolerance of other people's quirks and foibles. We were intrigued and amused to discover that we were very alike in our shared irritation at the habits of our nearest and dearest. I asked her what exactly it was that drove her crazy about her husband and she said, "When we're watching television I just get so annoyed by, well, his breathing." In a lightbulb-type moment, I knew exactly what she meant, having grown up with several family members who insisted on breathing too, an annoyance which would have me grinding my teeth in irritation! Luckily, neither of us is annoyed by our own breathing or chewing habits! We laughed it off and decided that we must have some kind of intolerance gene in common. Other than family members, anyone I told the story to looked as if they thought I'd lost the plot entirely.
Fast-forward another few years and I notice that another family member seems unusually intolerant of the rest of us. When she sees someone tapping their foot, or something moving out of the corner of her eye, she gets really annoyed and asks them repeatedly to stop. It irritates her no end that others put their feet up on the footstool as they relax at home and absentmindedly rub their feet together. And, as in all the best families, one person's annoyance becomes a neat weapon which another will happily use against them, and so the battle lines are drawn. I have some sympathy for her plight, and it's true her breathing does me in, but I'm a foot-rubber so we cancel each other out!
Tuesday, 21 May 2013
Irony and Machines
“I'd kill for a Nobel Peace Prize.” ―Steven Wright
Could a machine learn how to detect irony in written text, as in the quote above, for example? I've been working on this topic for my thesis, so I thought I'd share a small introduction as a blog post here. First we must ask: what is irony, and what is it used for? As Veale (2010) states: "Irony is an effective but challenging mode of communication that allows a speaker to express sentiment-rich viewpoints with concision, sharpness and humour".
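As a taste of how one might even begin, here is a deliberately naive baseline sketch, and emphatically not the approach from my thesis: treat irony detection as plain text classification with scikit-learn over a tiny, made-up labelled set.

```python
# Naive baseline sketch (NOT my thesis approach): irony detection as ordinary
# text classification. Assumes scikit-learn is installed; the tiny inline
# "dataset" and its labels are invented for the example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I'd kill for a Nobel Peace Prize.",          # ironic
    "What a great day to have a flat tyre.",      # ironic
    "The lecture starts at nine tomorrow.",       # literal
    "I really enjoyed the concert last night.",   # literal
]
labels = [1, 1, 0, 0]  # 1 = ironic, 0 = literal

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Oh great, another Monday morning."]))
```

Surface word features like these fall far short of the pragmatic context irony relies on, which is exactly what makes the problem so challenging.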
Monday, 20 May 2013
Genetic Algorithms & Optimisation of Cognitive Models
As a computer scientist, I have already used Genetic Algorithms (also called GAs), which are interesting tools for Artificial Intelligence purposes. However, I was wondering whether we could potentially use them in Cognitive Modelling. Before explaining why they are so interesting, I will first describe the principle of a Genetic Algorithm.
Genetic Algorithms somehow mimic the natural selection process. Given a candidate solution for a particular problem, the algorithm creates duplicates of this solution with small random modifications; we can say that the new solutions now have different characteristics or "genes". Those new solutions are then assessed with a "fitness" function, which evaluates the difference between the expected result and the actual output. The best solutions are then used to create a new generation of solutions with different genes, and the whole process starts again. The level of mutation between two generations is very important: a low level will slow down the evolution process, while a high level will simply generate freakish solutions and push the new generation in the wrong direction. The process stops when a solution is considered accurate enough by the fitness function.
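To make the loop above concrete, here is a minimal sketch (my own illustration, not tied to any particular cognitive model): evolving random bit strings towards a target pattern, where the fitness function simply counts matching bits. The target, population size and mutation rate are arbitrary choices for the example.

```python
import random

# Minimal genetic algorithm sketch: evolve random bit strings towards a target.
# The target pattern, population size and mutation rate are arbitrary examples.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
POP_SIZE, MUTATION_RATE = 30, 0.05

def fitness(candidate):
    """Higher is better: number of bits that match the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(parent):
    """Copy the parent, flipping each bit with a small probability."""
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in parent]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
generation = 0
while max(fitness(c) for c in population) < len(TARGET):
    # Keep the best fifth of the population and breed mutated copies from them.
    parents = sorted(population, key=fitness, reverse=True)[: POP_SIZE // 5]
    population = parents + [mutate(random.choice(parents))
                            for _ in range(POP_SIZE - len(parents))]
    generation += 1

print(f"Target matched after {generation} generations")
```

Raising MUTATION_RATE makes each generation more scattered, while lowering it slows progress, which is exactly the trade-off described above.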
Sunday, 19 May 2013
Brain + computer: The next chapter
While reading some tech blogs, I came across a startling new development in the interface between brains and computers. To quote the blog: "A brain-computer-interface technology created by researchers at Columbia University could turn our brains into automatic image-identifying machines that operate faster than human consciousness." The method combines the image-processing power of the human brain with computer vision to search through images ten times faster than either could on its own. This cortically coupled computer vision system, dubbed C3 Vision, was developed to allow hours of footage to be processed very quickly. With the stark increase in recording systems, far too much data is generated on a daily basis, and processing that data can be an arduous task. The brain emits a signal as soon as it sees something interesting, and that "aha" signal can be detected by an electroencephalogram, or EEG, cap. While users sift through streaming images or video footage, the technology tags the images that elicit a signal and ranks them in order of the strength of the neural signatures. Afterwards, the user can examine only the information that their brain identified as important, instead of wading through thousands of images.
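Just to illustrate the tagging-and-ranking step in code (my own sketch, not how C3 Vision is actually implemented), suppose a detector has already reduced each image's stimulus-locked EEG epoch to a single response score; the file names, threshold and scores below are made-up numbers.

```python
# Sketch of the tag-and-rank step only (illustrative; not the C3 Vision code).
# Assume a detector has already turned each image's EEG epoch into one score.
from typing import NamedTuple

class Frame(NamedTuple):
    image_id: str
    eeg_score: float   # strength of the detected neural response (made up)

frames = [
    Frame("frame_0012.png", 0.12),
    Frame("frame_0348.png", 0.91),
    Frame("frame_0877.png", 0.47),
    Frame("frame_1024.png", 0.83),
]

DETECTION_THRESHOLD = 0.4  # arbitrary cut-off for "the brain responded"

tagged = [f for f in frames if f.eeg_score >= DETECTION_THRESHOLD]
for frame in sorted(tagged, key=lambda f: f.eeg_score, reverse=True):
    print(f"{frame.image_id}: {frame.eeg_score:.2f}")
```

The user then reviews only this short, ranked list rather than every frame of footage.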
After reading the article, the system struck me as oddly familiar. A little recollection brought me to the Doctor Who episode "The Long Game", in which the Doctor lands on a space station in the year 200,000. It was a news broadcasting station, where the reporters interfaced with all the incoming information from the surface directly via neural implants at amazing speeds. However, it also showed how such a system could be exploited. Think about it: what's the worst part about having employees? Taking care of their working conditions, their work-life balance, addressing their concerns and needs, and so on. But if you only needed the processing power of their brains, the most 'profitable' way of using it would be to draw a line between the person and their brain, using the brain as you would use a computer CPU. The long-term implications portrayed in that old sci-fi series are morbid, to say the least, as seen in the latter half of the episode, where the control hub of the entire station was run by dead people whose brains were directly linked to the core system. Is that what mankind is destined for, to be used as replaceable computer parts in a vast bio-computerised array? I shudder at even considering the possibility.
Monday, 13 May 2013
Robot Rights
“As long as humans or animals are still tortured on this earth, we have bigger problems to tackle than the ethical situation of robots!” said one of the many outraged comments under this German newspaper article I recently read. I can’t say that I am not quite sympathetic with this view, but Kate Darling’s idea of why we should think about robot rights now is motivated in an interestingly different way.
“We should give robots similar rights to animals – they should for example not be allowed to be tortured,” says Darling, who is an IP Research Specialist at the MIT Media Lab and a Ph.D. candidate in Intellectual Property and Law & Economics.
I think a lot of people stopped reading after this statement – so if you were about to, too, just bear with me for a little longer.
Friday, 10 May 2013
Physics x Consciousness
For a long time, scientists have held the belief that material entities (stars, planets, rocks, atoms, quarks) are fundamentally different from the rather intangible aspects of our existence, like ideas, thoughts and consciousness. However, recent developments in theoretical physics, most notably Grand Unified Theories, string theory and quantum mechanics, have brought those seemingly hippy ideas of 'oneness' into the realm of scientific inquiry. Since 1864, when Maxwell published a paper on the electromagnetic field outlining the dynamic inter-relation between electricity and magnetism, which were then thought of as completely separate phenomena, physicists have worked on trying to unify all observable phenomena into a grand unified theory, or as it's popularly known, the 'Theory of Everything'. While we are still far from developing that theory, work on it has sparked a number of questions which may very well hold the answer '42'.
Thursday, 9 May 2013
Meditate on this I will
My last post on binaural beats got me thinking of something we touched upon when discussing the electroencephalograph (EEG) machine and methods last semester. The lecturer showed us slides of EEG readings from patients in various states, and I remember being startled at one example where it was shown that an alert mind is in a beta frequency of 13–30 Hz, but that some people who are awake can lower themselves into a theta frequency of only 4–8 Hz. This is a lower frequency of brain activity than that shown in daydreaming. The only instances where lower frequencies are recorded, in the delta category of 1–3 Hz, are when people are unconscious or in a deep sleep. The image below the break puts this in context.
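As a small illustration of those bands (my own sketch, not from the lecture slides), the snippet below estimates the dominant frequency of a signal with an FFT and maps it onto the ranges quoted above, with the standard alpha band added for completeness. The "EEG" here is just a synthetic 6 Hz sine wave plus noise, and the sampling rate is an assumption.

```python
import numpy as np

# Illustration of the bands quoted in the post (delta 1-3 Hz, theta 4-8 Hz,
# beta 13-30 Hz), with the standard alpha band (8-13 Hz) added for completeness.
FS = 250  # sampling rate in Hz (assumed)
BANDS = {"delta": (1, 3), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def dominant_band(signal: np.ndarray, fs: int = FS) -> str:
    """Find the frequency with the most power and name the band it falls in."""
    freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    peak = freqs[np.argmax(power[1:]) + 1]  # skip the DC component at 0 Hz
    for name, (lo, hi) in BANDS.items():
        if lo <= peak <= hi:
            return f"{name} (~{peak:.1f} Hz)"
    return f"outside the listed bands (~{peak:.1f} Hz)"

t = np.arange(0, 10, 1 / FS)
eeg_like = np.sin(2 * np.pi * 6 * t) + 0.3 * np.random.randn(t.size)
print(dominant_band(eeg_like))  # expected: theta (~6.0 Hz)
```

Real EEG analysis compares power across all the bands rather than picking a single peak, but the dominant-frequency idea is enough to see how "beta" and "theta" states are told apart.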
Wednesday, 8 May 2013
Invasion of the Toxoplasmoids
They are among us. They look just like us, but they are not just like us. They are hosts to the Invaders, beings who lodge themselves inside human brains, controlling behaviour for their own primordial ends. The elderly lady beside you on the bus who smells faintly of cat-pee and mothballs – she could be one of Them. The student behind you, snuffling loudly and popping another Panadol from the packet – is he one? The orange girl nattering into her mobile with one hand, plucking fine white hairs out of her scarf with the other – is it her? Indeed, given the statistics, the chances are that at least one member of the CogSci class is host to behaviour-modifying T. gondii oocysts. Even you, gentle reader, could be one of Them.
Binaural beats
When I was reading about the Mozart effect in RLV Poehls' earlier blog post I was reminded of a few pieces I had previously read on the effects of listening to binaural beats. Binaural beats are an auditory effect produced when, through headphones, a slightly different frequency is played into each ear; the brain perceives a 'beat' at the difference between the two. These frequencies can be tailored to many different states, but the desired result is to suggest a particular state to your brain. So, for example, a frequency of x hertz suggests to your brain that it is time to go to sleep. Many people use binaural beats as a lullaby in this way, with many playlists on YouTube for this purpose and even several mobile phone apps on iPhone and Android.
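Generating such a track is straightforward. The sketch below (my own illustration) writes a ten-second stereo WAV file with a 200 Hz tone in the left channel and 210 Hz in the right, producing a perceived 10 Hz beat; the frequencies are arbitrary examples, not a claim about which frequency induces which mental state.

```python
import numpy as np
import wave

# Minimal binaural beat generator (illustrative; frequencies chosen arbitrarily).
# Left ear gets 200 Hz, right ear gets 210 Hz, so the perceived beat is 10 Hz.
FS = 44100            # CD-quality sampling rate
DURATION = 10.0       # seconds
t = np.arange(int(FS * DURATION)) / FS

left = np.sin(2 * np.pi * 200 * t)
right = np.sin(2 * np.pi * 210 * t)
stereo = np.stack([left, right], axis=1)
pcm = (stereo * 0.3 * 32767).astype(np.int16)   # scale down to avoid clipping

with wave.open("binaural_10hz.wav", "wb") as f:
    f.setnchannels(2)      # stereo: one tone per ear
    f.setsampwidth(2)      # 16-bit samples
    f.setframerate(FS)
    f.writeframes(pcm.tobytes())
```

Played over loudspeakers the two tones simply mix in the air, which is why headphones are needed for the binaural effect.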
Friday, 3 May 2013
Retroperception: Problems for Enactivism?
Yes, it’s consciousness again. Or rather, perceptual experience. This time, some musings about what the newly-minted cognitivist phenomenon of retroperception (see this Mindhacks post for a summary) might mean for enactivist theories of perceptual experience.
Before we begin, let us take note of a critique levelled against cognitivism by the enactivist Thompson (2007): that the representationalist, information-processing account has replaced the mind/body problem with a new mind/mind problem. That is to say, how and why does hidden cognitive processing, which goes on ‘in the dark’, yield conscious experience, and how can we take seriously a body of thought which so often seems to end up pushing our experience of the world towards an epiphenomenal status? Contrary to representationalist accounts, enactivism denies that human beings experience an ‘external’ world indirectly via some kind of 'internally constructed' model, and suggests instead that, through our sensorimotor systems, we have a form of direct access that constitutes the ‘bringing forth’ of a subjective, experiential world. Therefore, like enactivism's forerunner phenomenology, the description of our experiential world could be seen as enactivism’s prime concern. This must include explaining the perceptual quirks that have been comprehensively documented by cognitive psychologists. In their influential 2001 article, Noë and O’Regan do a good job of explaining various peculiar facets of visual experience within their own theoretical framework.
Wednesday, 1 May 2013
What does Intelligence give us?
Does increased intelligence result in more “mind”? Aside from being contentious, the point I am trying to illustrate and question here is whether people who are smarter qualify as human minds more than those of lower intelligence. Would you be a different person if you were more intelligent? Would you have the same opinions and personality traits? It goes without saying that an increase in IQ does cause a change in brain morphology, as does simply getting older. However, could this trait be a scalable measure of our ability to experience and to have qualia?
I was reading an article recently which speculated about what life might be like if everyone were twice as intelligent. For this purpose we shall take that to mean scoring twice as high on an IQ test. One of the resulting benefits discussed was a greater appreciation for art, science, music and so on. This seems to hint that appreciation is linked to understanding; intelligence, by that token, must amount to more than just computing ability.