Thursday, July 27, 2017

Digest for comp.programming.threads@googlegroups.com - 1 update in 1 topic

rami18 <coco@coco.com>: Jul 26 02:39PM -0400

Hello...
 
Read this:
 
Adversarial perturbations are not "natural" images. They are highly
regular patterns that just don't occur in nature. This is possibly
the most important unexplained aspect of neural networks and machine
learning, and it is being studied as a security and safety problem.
What if some evil person misleads a machine intelligence? What if a
self-driving car is made to crash because an adversarial signal is
injected into the video feed?
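 
To make this concrete, here is a minimal sketch of one classic way
such perturbations are generated, the fast gradient sign method
(Goodfellow et al., 2014). It assumes Python with PyTorch; the
function name fgsm_perturb and the model/x/y variables are
placeholders for illustration, not something from the post:
 
import torch
import torch.nn as nn
 
def fgsm_perturb(model, x, y, epsilon=0.03):
    # Nudge every input pixel by +/- epsilon in the direction that
    # increases the classification loss, so the image looks almost
    # unchanged to a human but the model's prediction can flip.
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # stay in valid pixel range
 
The epsilon parameter bounds how visible the change is; small values
(roughly 0.01 to 0.1 on images scaled to [0, 1]) are commonly
reported to be enough to fool undefended networks.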
 
Here is an interesting paper about it:
 
Distillation as a Defense to Adversarial Perturbations against Deep
Neural Networks
 
https://arxiv.org/pdf/1511.04508.pdf
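 
The defense in that paper works by distillation: train a network as
usual, then retrain a second network on the first network's "soft"
class probabilities computed at a high softmax temperature T. Below
is a minimal sketch of that training objective, assuming PyTorch and
placeholder names (teacher, student_logits); the paper evaluates
temperatures on the order of T = 20 to 100:
 
import torch
import torch.nn.functional as F
 
def distillation_targets(teacher, x, T=20.0):
    # Soft labels: the teacher's temperature-scaled probabilities.
    with torch.no_grad():
        return F.softmax(teacher(x) / T, dim=1)
 
def distilled_loss(student_logits, soft_targets, T=20.0):
    # Cross-entropy of the student's temperature-scaled predictions
    # against the teacher's soft labels.
    log_p = F.log_softmax(student_logits / T, dim=1)
    return -(soft_targets * log_p).sum(dim=1).mean()
 
Training on these smoothed targets reduces the magnitude of the
network's input gradients, which is exactly what gradient-based
attacks like the one sketched above exploit.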
 
Thank you,
Amine Moulay Ramdane,
