Since the advent of artificial neural networks in the 1960s, researchers have continuously tried to produce networks and models that can realistically recreate specific functions and processes of the human brain. As work in this area has progressed from simple to more complicated networks and algorithms, the idea of networks which use 'deep learning' has become fundamental to achieving this goal. Deep learning is a concept from the general area of machine learning in which observed data is assumed to be generated by the interaction of many different factors on many different levels. In terms of artificial neural networks, this normally translates to a multi-layered, non-linear network in which each layer can be trained individually using a different algorithm, with an algorithm such as back-propagation then used to fine-tune the network as a whole.
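To make that description a bit more concrete, here is a minimal sketch (in plain Python/numpy, with made-up layer sizes, learning rates, and toy data) of the two-stage recipe described above: each layer is first trained on its own as a small tied-weight autoencoder using only unlabeled data, and then back-propagation is used to fine-tune the whole stack on a supervised task. It is only an illustration of the idea, not the exact procedure used by any particular system.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy unlabeled data: 200 samples with 20 features (a stand-in for real inputs).
X = rng.random((200, 20))

# --- Stage 1: greedy layer-wise pretraining ---------------------------------
# Each layer is trained on its own as a tied-weight autoencoder that simply
# tries to reconstruct its input; no labels are used at this stage.
layer_sizes = [20, 12, 6]                      # hypothetical layer widths
weights, biases = [], []
inputs = X
for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    W = rng.normal(0, 0.1, (n_in, n_out))
    b = np.zeros(n_out)                        # encoder bias
    c = np.zeros(n_in)                         # decoder bias (discarded later)
    for _ in range(300):                       # gradient descent on reconstruction error
        H = sigmoid(inputs @ W + b)            # encode
        R = sigmoid(H @ W.T + c)               # decode with the same (tied) weights
        dR = (R - inputs) * R * (1 - R)        # error at the decoder
        dH = (dR @ W) * H * (1 - H)            # error pushed back to the encoder
        W -= 0.05 * (inputs.T @ dH + dR.T @ H) / len(inputs)
        b -= 0.05 * dH.mean(axis=0)
        c -= 0.05 * dR.mean(axis=0)
    weights.append(W)
    biases.append(b)
    inputs = sigmoid(inputs @ W + b)           # this layer's codes feed the next layer

# --- Stage 2: fine-tune the whole stack with back-propagation ---------------
# A fake binary label per sample stands in for whatever supervised task the
# network is finally trained on.
y = (X.sum(axis=1) > X.sum(axis=1).mean()).astype(float).reshape(-1, 1)
W_out, b_out = rng.normal(0, 0.1, (layer_sizes[-1], 1)), np.zeros(1)
for _ in range(500):
    acts = [X]                                 # forward pass through every layer
    for W, b in zip(weights, biases):
        acts.append(sigmoid(acts[-1] @ W + b))
    out = sigmoid(acts[-1] @ W_out + b_out)

    d_out = (out - y) * out * (1 - out)        # output-layer error
    delta = (d_out @ W_out.T) * acts[-1] * (1 - acts[-1])
    W_out -= 0.1 * acts[-1].T @ d_out / len(X)
    b_out -= 0.1 * d_out.mean(axis=0)
    for i in reversed(range(len(weights))):    # propagate the error down the stack
        gW = acts[i].T @ delta / len(X)
        gb = delta.mean(axis=0)
        if i > 0:
            delta = (delta @ weights[i].T) * acts[i] * (1 - acts[i])
        weights[i] -= 0.1 * gW
        biases[i] -= 0.1 * gb
```

The key point is the division of labour: the unsupervised pretraining gives each layer a sensible starting point, and back-propagation then adjusts all of the layers jointly toward the final task.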
This method has become popular due to its potential for allowing artificial neural networks to learn high-level concepts and abstractions, a capability of the human brain which is jealously sought by AI researchers. An extremely interesting and novel example of the progress made in this area, and a great example of why there is much merit in pursuing the goal of creating networks to imitate cognitive functions, is the well-known 2012 network from Google Brain (a deep learning research project at Google), which was tasked with the very human activity of browsing YouTube for countless hours while better and more productive things probably needed to be done. The network was trained across 16,000 computer processors and involved more than one billion individual connections. It is one of the most ambitious networks produced so far in the world of artificial neural networks, but the most important thing is not its scale but its achievements and results.
These results demonstrated what modern artificial neural networks are capable of, given the right resources and the method of deep learning. Their network became a feature detector which could take completely unlabeled data and still accurately sort salient features within the data-set into categories, most famously learning to recognize cats without ever being told what a cat was. This, while a seemingly simple task for any ordinary person (as most humans are quite proficient at recognizing the features of such a commonly adorable creature as a cat), is quite a hard feat for an artificial network to achieve, and as such I believe it illustrates well both the progress that has been made and the potential for more progress to come in this area. The deep learning technique produces networks whose results are applicable both to furthering the goals of dedicated AI researchers and to advancing technologies being produced at this very moment, with Google Brain applying its recognition network to work on facial and speech recognition being just one example.
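For a sense of what "a feature detector learned from unlabeled data" actually means, here is a tiny, hedged sketch along the same lines. The real Google Brain network was a vastly larger sparse autoencoder trained on millions of still frames taken from YouTube videos; the toy below is just a single tied-weight autoencoder trained on eight-"pixel" patterns I invented for illustration, but it shows the same basic effect: after training on data with no labels at all, individual hidden units end up responding to recurring patterns, and their weight vectors reveal the stimulus each unit has learned to detect.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Unlabeled toy "images": 8-pixel vectors, each secretly drawn from one of two
# recurring patterns plus noise.  No labels are ever shown to the network.
patterns = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
                     [0, 0, 0, 0, 1, 1, 1, 1]], dtype=float)
X = patterns[rng.integers(0, 2, 300)] + 0.1 * rng.normal(size=(300, 8))

# Train a single tied-weight autoencoder on the unlabeled data.
W = rng.normal(0, 0.1, (8, 4))
b, c = np.zeros(4), np.zeros(8)
for _ in range(500):
    H = sigmoid(X @ W + b)                     # hidden code
    R = sigmoid(H @ W.T + c)                   # reconstruction
    dR = (R - X) * R * (1 - R)
    dH = (dR @ W) * H * (1 - H)
    W -= 0.05 * (X.T @ dH + dR.T @ H) / len(X)
    b -= 0.05 * dH.mean(axis=0)
    c -= 0.05 * dR.mean(axis=0)

# Each hidden unit is now a feature detector.  The stimulus that excites unit j
# most strongly is (roughly) its weight column, and unseen inputs can be grouped
# by whichever unit responds hardest -- all without a single label.
probe = patterns[0] + 0.1 * rng.normal(size=8)  # a fresh, unlabeled example
activations = sigmoid(probe @ W + b)
print("unit activations:", np.round(activations, 2))
print("most active unit:", activations.argmax())
print("that unit's preferred stimulus:", np.round(W[:, activations.argmax()], 2))
```

Nobody tells this network what the two patterns are; it discovers them simply because representing them well makes reconstruction easier, which is, in miniature, how the famous "cat neuron" came about.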
I think it is good to be aware of the very real advances occurring in the connectionist world, not only to appreciate the progress being made in this area, but also for the reassurance, as a human being, that it takes 16,000 computer processors to accomplish a task which my much smaller brain accomplishes with absolute ease and minimal effort. As I look forward to where these networks can and most certainly will go, it is never a bad thing to be reminded of the thing they are trying to emulate, and of how powerful that thing can be.