Deep neural networks are known to be data-intensive: thousands of examples per class are usually needed for good classification. Few-shot learning takes the opposite approach, aiming to learn from only a few examples of each class.
One/few-shot learning refers to rapid learning from one or a few examples. Experiments on few-shot learning are usually reported as N-way K-shot learning, where N is the number of classes and K is the number of examples per class.
The dataset is split such that the classes in the training set are disjoint from the classes in the test set. For example, the training set can include cats, dogs, and birds while the test set contains rabbits and dolphins. The number of classes is large while the number of images per class is typically small. Given a support set with one/few image(s) per class and a query image, the goal of one/few-shot learning is to identify which support image the query image is most similar to.
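The support/query setup above can be sketched in a few lines. This is a minimal illustration on synthetic feature vectors (stand-ins for embedded images); a real system would compare images through a learned embedding rather than raw vectors, and all names here are illustrative rather than from any specific library.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dataset": 10 classes, each a tight cluster of feature vectors
# around a random class mean (stand-ins for embedded images).
means = rng.normal(size=(10, 16))
dataset = [means[c] + 0.05 * rng.normal(size=(3, 16)) for c in range(10)]

def sample_episode(dataset, n_way=5, k_shot=1):
    """Sample one N-way K-shot episode: K support examples for each of
    N classes, plus a query drawn from one of those classes."""
    classes = rng.choice(len(dataset), size=n_way, replace=False)
    support = {int(c): dataset[c][:k_shot] for c in classes}
    query_class = int(rng.choice(classes))
    query = dataset[query_class][k_shot]  # an example not in the support set
    return support, query, query_class

def classify(support, query):
    """Assign the query to the class of its most similar support example
    (cosine similarity between feature vectors)."""
    best_class, best_sim = None, -np.inf
    for c, examples in support.items():
        for x in examples:
            sim = x @ query / (np.linalg.norm(x) * np.linalg.norm(query))
            if sim > best_sim:
                best_class, best_sim = c, sim
    return best_class

episodes = [sample_episode(dataset) for _ in range(50)]
correct = sum(classify(s, q) == qc for s, q, qc in episodes)
print(f"5-way 1-shot accuracy on toy data: {correct / 50:.2f}")
```

On this well-separated toy data a raw nearest-neighbor match already classifies almost perfectly; the hard part in practice, and the point of few-shot learning methods, is learning an embedding in which real images of unseen classes behave this way.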
A dataset commonly used for one/few-shot learning is the Omniglot dataset. Omniglot has a large number of classes and only a few examples for each class (as opposed to the MNIST database of handwritten digits, which has few classes and a large number of examples per class). Omniglot contains 1623 characters from 50 different alphabets; each character is handwritten by 20 different people. 1200 characters are used for training while the remaining characters are used for testing.
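The class-disjoint split described above can be sketched as follows. This is a minimal sketch: the 1200/423 partition matches the counts stated in the text, but the random shuffle and seed here are purely illustrative, not the canonical Omniglot split.

```python
import random

random.seed(0)  # illustrative; the canonical split is fixed, not random

# One id per Omniglot character class (1623 characters total).
all_characters = list(range(1623))
random.shuffle(all_characters)

train_classes = set(all_characters[:1200])  # classes seen during training
test_classes = set(all_characters[1200:])   # 423 held-out classes

# Few-shot evaluation requires the two class sets to be disjoint:
assert train_classes.isdisjoint(test_classes)
print(len(train_classes), len(test_classes))  # 1200 423
```

Note that the split is over classes, not over images: every image of a test character is withheld from training, which is what forces the model to recognize genuinely novel classes at test time.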
Relevant Papers:
1. Human-level concept learning through probabilistic program induction.