World Summit AI 2017 Amsterdam – some highlights

Posted on 2017-10-26 by Richard Berendsen
From Wednesday October 11th to Thursday October 12th, the World Summit AI 2017 conference was held in Amsterdam. It had a terrific line-up, with top professors in the field of AI, like Stuart Russell, Yann LeCun, and ‘our own’ Max Welling. Most of them were actually there in person. But it wasn’t all science. Large companies like ING and Accenture, and smaller ones like BrainCreators (new to me), also had a strong presence, with talks on the main stage or stands in front of the main arena. In this blog post, I’ll briefly discuss some of my favorite talks, in non-chronological order.
On the second day, Meredith Whittaker, leader of Google’s Open Research group and co-founder of the AI Now Institute at New York University, focused on current applications of ‘AI technologies’ in the domain of human labour. And on what’s wrong with them, mostly. My summary of the problems she outlined: machine learning techniques are widely applied without a solid statistical methodology. The simplest example she gave?
A soap dispenser that would not give soap to everybody. It obviously had not been tested on people with varied skin colors. The field of statistics could have helped here. Who are the intended users of the soap dispenser? People. So, let’s fine-tune, or ‘train’, the soap dispenser on a representative sample of this population, not just on a bunch of people you know, who happen to be similar to you. The same principle plays a decisive role in any machine learning application. And, sadly, it is often not applied correctly, as Meredith very eloquently pointed out. She said much more about this, and there is still much more to be said. There are certainly many opportunities for improvement in this area.
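To make the soap-dispenser example concrete, here is a tiny, entirely made-up sketch in Python. Suppose the dispenser detects a hand when the sensor’s reflectance reading is above a threshold calibrated from training samples. The numbers, the `calibrate_threshold` helper, and the detection rule are all invented for illustration; they are not from the talk:

```python
def calibrate_threshold(reflectances, margin=0.05):
    """Set the detection threshold just below the lowest reflectance
    seen in the calibration ('training') sample."""
    return min(reflectances) - margin

def detects_hand(reflectance, threshold):
    """The dispenser fires when the sensor reading exceeds the threshold."""
    return reflectance >= threshold

# Biased calibration set: only high-reflectance (lighter-skinned) hands.
biased = [0.80, 0.85, 0.90]
# Representative sample covering the full range of intended users.
representative = [0.30, 0.45, 0.60, 0.80, 0.90]

t_biased = calibrate_threshold(biased)          # ~0.75
t_repr = calibrate_threshold(representative)    # ~0.25

dark_hand = 0.35
detects_hand(dark_hand, t_biased)   # False: dispenser ignores this user
detects_hand(dark_hand, t_repr)     # True
```

Calibrated only on the biased sample, the dispenser never fires for lower-reflectance hands; calibrating on a representative sample fixes this, which is exactly the statistical point.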
Talking about the second conference day, who opened it? No one less than Stuart Russell, one of the authors of, dare I say, the AI Bible. It turns out he is a talented comedian as well: he had the whole audience in the main arena laughing. Even the end of the human species did not seem so bad when he talked about it. His main message? “I cannot fetch coffee if I’m dead.” Currently, machine learning algorithms are programmed to optimise some quantity; in technical terms, they optimise an ‘objective function’. The problem? Humans can’t state very well what they want. Having a super-intelligent AI (if we ever succeed in creating one) optimise a function formulated by humans may lead to problems. And we might not be able to stop such an AI: it will quickly figure out that it needs to stay alive in order to optimise its objective, and therefore it will not allow us to switch it off. Being super-intelligent, it may actually succeed in this.
Almost as a side point, professor Russell offered one possible avenue towards a solution: make algorithms uncertain about their objective function. Program AI so that it puts the interests of humans first, even while it is still unsure what those interests are. If a human switches it off, it should therefore gladly accept. Could there be dilemmas and difficulties here as well? Well, there certainly were a good couple of laughs at the follow-up scenarios he offered.
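Russell’s group has formalised this intuition as the ‘off-switch game’ (Hadfield-Menell et al.). Here is a minimal numerical sketch of the core idea, with invented numbers: a robot that is uncertain whether its planned action is good does better, in expectation, by letting a human veto it, because the human will (we assume) only press the off switch when the action is in fact bad:

```python
# Toy off-switch calculation. The robot's action has unknown utility U.
# If the robot defers, the human switches it off exactly when U < 0,
# so the action only proceeds when it is actually good.

def expected_utility_act(utilities, probs):
    """Act directly: the robot gets U whatever it turns out to be."""
    return sum(p * u for p, u in zip(probs, utilities))

def expected_utility_defer(utilities, probs):
    """Defer to the human: the action proceeds only if U >= 0."""
    return sum(p * max(u, 0.0) for p, u in zip(probs, utilities))

# The robot is uncertain: the action is either quite bad or mildly good.
utilities = [-10.0, 2.0]
probs = [0.3, 0.7]

expected_utility_act(utilities, probs)    # -1.6
expected_utility_defer(utilities, probs)  # 1.4: deferring is better
```

The uncertainty is what makes the off switch acceptable: a robot that is certain its objective is right has no reason to let itself be switched off.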
In his TED talk (https://www.ted.com/talks/stuart_russell_how_ai_might_make_us_better_people/footnotes?language=en) he discusses the same subject matter, so go ahead and watch that right after this blog post.
On the first day, one of the stand-out talks for me was the one by Laurens van der Maaten, now at Facebook, famous for the t-SNE algorithm. If you don’t know what that is, you may well skip this paragraph, because his talk was one of the more technical ones of the conference. But it was a strong talk, leading up to a novel combination of deep learning techniques and symbolic AI approaches. The setting was the task of visual question answering: an algorithm gets an image and a question about it as input, and has to produce a correct answer. Admittedly, the scope was narrowed down a bit further: the images in question were generated images of geometrical objects, and the questions were about characteristics such as color, texture, size, and shape. In this artificial world, however, the proposed solution performed very well indeed: better than humans. It seems these days you cannot sell anything less anymore, no? The question sentence was processed by an LSTM-based sequence-to-sequence model. The output? A small computer program: a combination of primitive functions such as ‘count’, ‘filter color’, ‘filter shape’. How are these functions executed? Well, they are themselves trained neural networks. An impressive composition of trained neural networks to achieve something bigger! A pre-print of the paper (with a nice picture of the architecture, showing everything I just wrote) is available here: https://arxiv.org/pdf/1705.03633.pdf
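To make the ‘program of primitive functions’ idea concrete, here is a toy sketch in plain Python. In the actual paper each primitive is a small trained neural module operating on image features; below, ordinary functions over a hand-written symbolic scene stand in for them (the scene, the program, and the function names are all invented for illustration), just to show how a predicted program is composed and executed:

```python
# A made-up symbolic scene; the real system works on rendered images.
scene = [
    {"shape": "cube",   "color": "red"},
    {"shape": "sphere", "color": "red"},
    {"shape": "cube",   "color": "blue"},
]

# Stand-ins for the trained neural modules.
def filter_color(objects, color):
    return [o for o in objects if o["color"] == color]

def filter_shape(objects, shape):
    return [o for o in objects if o["shape"] == shape]

def count(objects):
    return len(objects)

# A program the seq2seq model might emit for
# "How many red cubes are there?":
program = [("filter_color", "red"), ("filter_shape", "cube"), ("count", None)]

primitives = {"filter_color": filter_color, "filter_shape": filter_shape}
result = scene
for name, arg in program:
    result = count(result) if name == "count" else primitives[name](result, arg)

result  # 1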
With this, I’d like to leave you now, perhaps we’ll add more summaries here later!