Catastrophic forgetting


To achieve artificial general intelligence, or human-like intelligence, machines should be able to retain previously learned tasks while acquiring new ones. In a given scenario, certain tasks may not appear as frequently or as recently as others. Neural networks in particular are known to catastrophically forget information that they have not seen frequently or recently. This happens because a neural network must repeatedly adapt its weights to each newer task, overwriting the weights that encoded the older ones.

For example, suppose we have a neural network trained to recognize birds, cats and dogs (Set 1). We then train it on other classes such as dolphins, rabbits and lions (Set 2). After a while, the network can start performing poorly on the classes in Set 1, because its weights have become biased towards recognizing the classes in Set 2. This is known as catastrophic forgetting.
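
The sketch below is a minimal, hypothetical illustration of this effect: a small PyTorch classifier is trained on one synthetic task ("Set 1"), then trained only on a second synthetic task ("Set 2"), and its accuracy on the first task drops. The data, helper functions (`make_task`, `train`, `accuracy`) and hyperparameters are all stand-ins chosen for the demo, not part of any real bird/cat/dog dataset.

```python
# Minimal sketch of catastrophic forgetting with two synthetic tasks.
# Synthetic Gaussian clusters stand in for Set 1 and Set 2 above.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(n_samples=512, n_features=20, n_classes=3, shift=0.0):
    """Synthetic 3-class task: class-dependent Gaussian clusters."""
    y = torch.randint(0, n_classes, (n_samples,))
    centers = torch.randn(n_classes, n_features) * 3 + shift
    x = centers[y] + torch.randn(n_samples, n_features)
    return x, y

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

def train(model, x, y, epochs=200, lr=0.05):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))

x1, y1 = make_task(shift=0.0)   # "Set 1": birds, cats, dogs (stand-in)
x2, y2 = make_task(shift=5.0)   # "Set 2": dolphins, rabbits, lions (stand-in)

train(model, x1, y1)
print("Set 1 accuracy after training on Set 1:", accuracy(model, x1, y1))

train(model, x2, y2)            # keep training on the new task only
print("Set 1 accuracy after training on Set 2:", accuracy(model, x1, y1))
print("Set 2 accuracy after training on Set 2:", accuracy(model, x2, y2))
```

Because the second round of training never revisits Set 1, gradient updates are free to move the shared weights wherever Set 2 prefers, and performance on Set 1 degrades.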


Related Papers:

  1. Overcoming catastrophic forgetting in neural networks
  2. Catastrophic Interference in Connectionist Networks
  3. Catastrophic Forgetting in Connectionist Networks