Strong generalization and efficiency in neural programs

Y Li, F Gimeno, P Kohli, O Vinyals - arXiv preprint arXiv:2007.03629, 2020 - arxiv.org
We study the problem of learning efficient algorithms that strongly generalize in the framework of neural program induction. By carefully designing the input/output interfaces of the neural model and through imitation, we are able to learn models that produce correct results for arbitrary input sizes, achieving strong generalization. Moreover, by using reinforcement learning, we optimize for program efficiency metrics, and discover new algorithms that surpass the teacher used in imitation. With this, our approach can learn to outperform custom-written solutions for a variety of problems, as we tested it on sorting, searching in ordered lists, and the NP-complete 0/1 knapsack problem, which sets a notable milestone in the field of neural program induction. As a highlight, our learned model performs sorting perfectly on every input size we tested, with O(n log n) complexity, while outperforming hand-coded algorithms, including quicksort, in number of operations even for list sizes far beyond those seen during training.
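
Since the abstract stays high-level, the sketch below illustrates the kind of setup it describes: a sorting environment whose only actions are interface operations (compare and swap), an operation count serving as the efficiency metric, and a classical teacher for the imitation phase. This is a minimal illustration under assumed names (SortEnv and teacher_trace are hypothetical, not the authors' API), and the RL phase that optimizes the operation count is only indicated in a comment.

import random

class SortEnv:
    # Hypothetical environment: the learner touches the list only through
    # compare/swap operations; the efficiency metric is the operation count.
    def __init__(self, data):
        self.data = list(data)
        self.ops = 0

    def compare(self, i, j):
        # Returns True if data[i] <= data[j]; counts as one operation.
        self.ops += 1
        return self.data[i] <= self.data[j]

    def swap(self, i, j):
        self.ops += 1
        self.data[i], self.data[j] = self.data[j], self.data[i]

    def solved(self):
        return all(self.data[k] <= self.data[k + 1]
                   for k in range(len(self.data) - 1))

def teacher_trace(env):
    # Bubble-sort teacher for the imitation phase: runs on the environment
    # and records the (operation, i, j) steps a student policy would be
    # trained to reproduce.
    trace = []
    n = len(env.data)
    for end in range(n - 1, 0, -1):
        for i in range(end):
            in_order = env.compare(i, i + 1)
            trace.append(("compare", i, i + 1))
            if not in_order:
                env.swap(i, i + 1)
                trace.append(("swap", i, i + 1))
    return trace

env = SortEnv(random.sample(range(100), 10))
trace = teacher_trace(env)
assert env.solved()
# An RL phase would then reward shorter episodes, e.g. reward = -env.ops,
# so the learned policy can use fewer operations than this teacher.
print(f"teacher sorted with {env.ops} operations ({len(trace)} recorded steps)")

Counting interface operations rather than wall-clock time is what makes the abstract's comparison meaningful: the learned policy and hand-coded baselines such as quicksort can be measured on the same footing, including on list sizes far beyond those seen during training.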