José María Mateos on Sun, 31 Dec 2017 17:49:10 +0100 (CET)
Re: <nettime> Deep Fool
On Sun, Dec 31, 2017 at 11:52:45AM +1300, Douglas Bagnall wrote:
> Just by looking at the Athalye et al. turtle, you can see that the system associates rifles with the texture of polished wood. And indeed when you look in the ImageNet rifle category you see a lot of wood, not only in the guns themselves, but in mounts of various sorts. The rifle category reveals how people *photograph* rifles. We can conclude that the system concluded that "rifle" was the most polished-wood-ish category. This is the kind of excellent short-cut you *want* your ImageNet entry to make, allowing it to devote more attention to terriers.
Reminds me of the anecdote, which I've read in many places, about a tank detector and daylight conditions. This is quite a good research article on the topic: https://www.gwern.net/Tanks
---
A cautionary tale in artificial intelligence tells of researchers training a neural network (NN) to detect tanks in photographs and succeeding, only to realize that the photographs had been collected under different conditions for tanks vs. non-tanks, so the NN had learned something useless like the time of day. This story is often told to warn about the limits of algorithms and the importance of data collection, to avoid "dataset bias"/"data leakage", where the collected data can be solved using algorithms that do not generalize to the true data distribution; but the tank story is usually never sourced. I collate many extant versions dating back a quarter of a century to 1992, along with two NN-related anecdotes from the 1960s; their contradictions & details indicate a classic "urban legend", with a probable origin in a speculative question asked by Edward Fredkin at an AI conference in the 1960s about some early NN research, which was subsequently classified & never followed up on. I suggest that dataset bias is real but exaggerated by the tank story, giving a misleading indication of the risks from deep learning, and that it would be better not to repeat it but to use real examples of dataset bias and to focus on larger-scale risks like AI systems optimizing for the wrong utility functions.
---
The author concludes that the event never happened, but that the question and the story / urban legend are relevant nonetheless, as you point out in your original e-mail: we intuitively know that NNs are picking up on *something*; what that *something* is depends greatly on the dataset we're using for training / testing / validation, and it might not be the *something'* we are interested in.
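To make that point concrete, here is a minimal toy sketch (my own illustration, not from either article; it assumes numpy and scikit-learn are installed): the training labels are perfectly confounded with overall image brightness, so a simple classifier scores near-perfectly on a similarly biased test set while being no better than chance once the brightness cue is removed.

    # Toy illustration of "dataset bias" / "data leakage" (my own sketch):
    # the labels are perfectly confounded with image brightness, so a linear
    # classifier "succeeds" without learning anything about the target class.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_samples, n_pixels = 200, 64   # 200 tiny "photographs", 64 pixels each

    def make_images(labels, confounded):
        """Random pixels; if confounded, class-1 images are simply brighter."""
        x = rng.normal(size=(len(labels), n_pixels))
        if confounded:
            x += labels[:, None] * 2.0   # brightness tracks the label
        return x

    y_train = rng.integers(0, 2, n_samples)
    y_test = rng.integers(0, 2, n_samples)

    clf = LogisticRegression(max_iter=1000)
    clf.fit(make_images(y_train, confounded=True), y_train)

    print("accuracy, biased test set:  ",
          clf.score(make_images(y_test, confounded=True), y_test))   # ~1.0
    print("accuracy, unbiased test set:",
          clf.score(make_images(y_test, confounded=False), y_test))  # ~0.5

Accuracy is close to 1.0 on the biased test set and drops to chance once the confound is removed: the tank-detector failure mode in miniature.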
Cheers, JMM.

#  distributed via <nettime>: no commercial use without permission
#  <nettime>  is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mx.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nettime@kein.org
#  @nettime_bot tweets mail w/ sender unless #ANON is in Subject: