Neural Metamorphosis
arXiv 2023

National University of Singapore
NeuMeta pipeline

NeuMeta encapsulates a range of neural networks into a singular Implicit Neural Representation (INR). Once trained, it can produce weights for diverse networks, enabling them to adapt flexibly to various requirements.

Abstract

This paper introduces a novel paradigm termed Neural Metamorphosis (NeuMeta), which aims to represent a continuous family of networks within a single versatile model.

Unlike traditional methods that rely on separate models for different network tasks or sizes, NeuMeta enables an expansive continuum of neural networks that readily morph to fit various needs. The core mechanism is to train a neural implicit function that takes the desired network size and parameter coordinates as inputs and generates the corresponding weight values, without requiring separate models for different configurations. Specifically, to achieve weight smoothness within a single model, we solve a Shortest Hamiltonian Path problem within each neural clique graph; to maintain cross-model consistency, we incorporate input noise during training. As such, NeuMeta can dynamically generate parameters for networks of arbitrary size at inference time by sampling on the weight manifold. NeuMeta shows promising results in synthesizing parameters for unseen network configurations. Our extensive tests on image classification, semantic segmentation, and image generation reveal that NeuMeta sustains full-size performance even at a 75% compression rate.
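To make the core mechanism concrete, below is a minimal PyTorch sketch of the idea: an implicit function maps a parameter coordinate (layer index, row, column) plus the desired width ratio to a single weight value, and a whole layer of any size is materialized by sampling this function on a coordinate grid. The 4-D coordinate encoding, the `WeightINR` and `materialize_layer` names, and all hyperparameters are illustrative assumptions for this sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

class WeightINR(nn.Module):
    """Implicit function f(coordinate) -> weight value.

    Takes a (layer index, row, col, width ratio) coordinate and returns
    one scalar weight. The exact coordinate layout is an assumption here.
    """
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords):  # coords: (N, 4)
        return self.net(coords).squeeze(-1)

def materialize_layer(inr, layer_idx, out_dim, in_dim, width_ratio):
    """Sample the INR on a grid of normalized parameter coordinates to
    build one (out_dim x in_dim) weight matrix at the requested width."""
    rows = torch.linspace(0, 1, out_dim)
    cols = torch.linspace(0, 1, in_dim)
    r, c = torch.meshgrid(rows, cols, indexing="ij")
    coords = torch.stack([
        torch.full_like(r, float(layer_idx)),  # which layer
        r, c,                                  # position within the layer
        torch.full_like(r, width_ratio),       # desired network size
    ], dim=-1).reshape(-1, 4)
    # During training, the paper perturbs inputs with noise to enforce
    # cross-model consistency; at inference, coordinates are sampled cleanly.
    return inr(coords).reshape(out_dim, in_dim)

inr = WeightINR()
# e.g. generate a layer for a network at 75% of the original width
w = materialize_layer(inr, layer_idx=0, out_dim=48, in_dim=64, width_ratio=0.75)
print(w.shape)  # torch.Size([48, 64])
```

Because the coordinates are continuous, the same trained INR can be queried at any width ratio, which is what lets a single model cover a continuum of network configurations rather than one checkpoint per size.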

Related Resources

This paper also draws inspiration from a series of works on continuous neural networks and on fitting INRs to neural network weights.

Our team is trying to unleash the power of pre-trained models by modularizing network design. Here are some related papers and projects.

BibTeX

@article{yang2023neumeta,
  author    = {Xingyi Yang and Xinchao Wang},
  title     = {Neural Metamorphosis},
  journal   = {arXiv preprint},
  year      = {2023},
}