AI facing self-made problems
AI: hype, ethics and scientific rigor
Too much of the first; not enough of the other two. Last year everyone was excited about AI and deep learning; they were among the hottest topics of the year. This year both are still being talked about, but more sober heads prevail. I personally never believed all the hype, but apparently hype was not the only problem.
“Deep learning is a scientific disaster zone.”
François Chollet is a young software engineer and Google AI researcher, the creator of Keras, and the author of the best-selling ‘Deep Learning with Python‘. I recently read an in-depth interview with him that was published late December of last year.
In this interview Chollet was brutally honest about the present state of AI:
There’s a lot of hype in the field, for sure. Most of the media coverage of AI and deep learning is dramatically disconnected from reality — both the scare stories and the tales about how AI is going to make everything wonderful.
A bit later in the interview, Chollet became much more specific when asked about the major challenges facing the deep learning community:
Fighting hype, developing ethical awareness, and gaining scientific rigorousness.
Hype: this is plaguing our field. Some people out there are hyping up recent progress in ridiculous ways, vastly overselling our current capabilities, and often casting human-level as being around the corner — which it isn’t. If we set sky-high expectations and then fail to deliver on them, we are turning people against us. And besides, it’s just intellectually dishonest, and toxic for the public debate.
Ethics: most people deploying AI systems today don’t come from a particularly diverse background, and are often blissfully unaware of the ethical implications and harmful side-effects of the systems they build. This is a major problem because these people are going to have an increasing amount of power over others.
We need to discuss these issues more and raise awareness of possible unethical applications of AI, whether it’s biased predictive models impacting people’s lives, AI being applied in really questionable places, or AI being used to manipulate our behavior and opinions in dangerous ways.
Science: there are tons of deep learning papers getting released every day, and most of them don’t really produce any meaningful new knowledge, because they don’t follow the scientific method. They “evaluate” models in fuzzy ways, or test overfit models on their training data (this is especially the case for generative models and reinforcement learning, two of the fastest-growing topics in deep learning research), cherry pick results, use artificially weak baselines, tune hyperparameters in a way that results in overfitting to a specific task, evaluate models on MNIST only, etc. Deep learning is a scientific disaster zone.
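One of the failures Chollet lists, testing an overfit model on its own training data, is easy to demonstrate. The sketch below (my illustration, not from the interview) fits an unconstrained decision tree to noisy synthetic data: scored on the data it memorized, it looks perfect; scored on a held-out set, the picture is far less flattering.

```python
# Illustration of Chollet's point about evaluating on training data.
# An unconstrained tree memorizes the training set, so "train accuracy"
# says nothing about how the model generalizes.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic dataset with 20% of the labels flipped (irreducible noise).
X, y = make_classification(n_samples=500, n_features=20,
                           flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

print("train accuracy:", model.score(X_train, y_train))  # essentially 1.0
print("test accuracy:", model.score(X_test, y_test))     # noticeably lower
```

Reporting only the first number is exactly the kind of fuzzy "evaluation" Chollet is criticizing.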
I suggest the reader take a look at the entire interview. I also recommend an article published yesterday in Quanta Magazine entitled “Why Self-Taught AI Has Trouble With the Real World”.