In the sixth and final session we will use the knowledge gained about machine learning, neuro-cognition, the perceptron, data handling and bias, optimization, and architectures to discuss in depth the potential and limitations of artificial neural networks. The main task of a neural network is to learn to generalize, that is, to handle unseen data as well. But how do we best achieve this? How much should a network remember, and how much should it "forget"?
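The trade-off between remembering and forgetting can be made concrete with a toy experiment (a minimal Python sketch, not part of the assigned course materials): a very flexible model can memorize noisy training points almost perfectly, yet generalize worse to unseen data than a more constrained one.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

# Noisy training samples drawn from an underlying sine function.
x_train = np.sort(rng.uniform(0.0, 2.0 * np.pi, 15))
y_train = np.sin(x_train) + rng.normal(0.0, 0.2, x_train.size)

# Unseen test points from the same function (without noise).
x_test = np.linspace(0.0, 2.0 * np.pi, 200)
y_test = np.sin(x_test)

for degree in (3, 12):  # modest capacity vs. enough capacity to memorize
    model = Polynomial.fit(x_train, y_train, degree)
    train_mse = np.mean((model(x_train) - y_train) ** 2)
    test_mse = np.mean((model(x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE = {train_mse:.3f}, "
          f"test MSE = {test_mse:.3f}")
```

Typically the high-degree fit "remembers" the training noise (very low training error) while the lower-degree fit "forgets" some detail and generalizes better to the test points.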
As Rich Sutton put it, the most bitter lesson that has become clear over the past 70 years of AI research is that general methods which leverage computation are ultimately the most effective, and by a large margin. Time and again, initial efforts in many directions tried to avoid search by building in human knowledge. However, those efforts proved irrelevant, or worse, once search and learning on larger amounts of data were applied effectively at scale. This raises the questions: Do we still need human knowledge for steps such as data pre-processing or other prerequisites? Or should we rather trust the pure scaling-up of data, search, and learning?
Preparation / pre-class readings:
- Read "The Bitter Lesson" by Rich Sutton
- Watch the video "Do Neural Networks Think Like Our Brain?"
- (optional): Watch the documentary about AlphaGo.