Self-Constructing Neural Networks Through Random Mutation

29 March 2021
Samuel Schmidgall
arXiv: 2103.15692 (abs · PDF · HTML)
Abstract

The search for neural architecture is producing many of the most exciting results in artificial intelligence. It has become increasingly apparent that task-specific neural architecture plays a crucial role in effectively solving problems. This paper presents a simple method for learning neural architecture through random mutation. This method demonstrates that 1) neural architecture may be learned during the agent's lifetime, 2) neural architecture may be constructed over a single lifetime without any initial connections or neurons, and 3) architectural modifications enable rapid adaptation to dynamic and novel task scenarios. Starting without any neurons or connections, this method constructs a neural architecture capable of high performance on several tasks. The lifelong learning capabilities of this method are demonstrated in an environment without episodic resets: the agent continues learning through constantly changing morphology, limb disablement, and changing task goals, all without losing locomotion capability.
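To make the idea of growing an architecture by random mutation concrete, the sketch below shows a minimal hill-climbing loop in Python: a network starts with no hidden neurons or connections, random mutations add connections, add neurons, or perturb weights, and a mutation is kept only if it does not hurt performance. This is not the paper's implementation; the Network class, the specific mutation operators, the toy regression task (y = x0 * x1), and the acceptance rule are all assumptions made purely for illustration.

```python
# Minimal, illustrative sketch of architecture growth by random mutation.
# NOT the paper's implementation: the class, operators, and toy task are assumed.
import random
import math


class Network:
    """A tiny feedforward graph that starts with no hidden neurons or connections."""

    def __init__(self, n_inputs, n_outputs):
        self.n_inputs = n_inputs
        self.n_outputs = n_outputs
        self.hidden = []                 # ids of hidden neurons, in creation order
        self.next_id = n_inputs + n_outputs
        self.conns = {}                  # (src, dst) -> weight

    def nodes(self):
        inputs = list(range(self.n_inputs))
        outputs = list(range(self.n_inputs, self.n_inputs + self.n_outputs))
        return inputs, self.hidden, outputs

    def forward(self, x):
        inputs, hidden, outputs = self.nodes()
        act = {i: x[i] for i in inputs}
        # Activate hidden neurons in creation order, then the outputs.
        for node in hidden + outputs:
            s = sum(w * act.get(src, 0.0)
                    for (src, dst), w in self.conns.items() if dst == node)
            act[node] = math.tanh(s)
        return [act.get(o, 0.0) for o in outputs]

    def mutate(self):
        """Apply one random structural or weight mutation; return an undo closure."""
        inputs, hidden, outputs = self.nodes()
        choice = random.random()
        if choice < 0.3 or not self.conns:
            # Add a connection from any input/hidden node to any hidden/output node.
            src = random.choice(inputs + hidden)
            dst = random.choice(hidden + outputs)
            old = self.conns.get((src, dst))
            self.conns[(src, dst)] = random.uniform(-1, 1)
            return lambda: (self.conns.pop((src, dst)) if old is None
                            else self.conns.__setitem__((src, dst), old))
        elif choice < 0.5:
            # Add a hidden neuron wired to a random input and a random output.
            nid = self.next_id
            self.next_id += 1
            self.hidden.append(nid)
            self.conns[(random.choice(inputs), nid)] = random.uniform(-1, 1)
            self.conns[(nid, random.choice(outputs))] = random.uniform(-1, 1)
            return lambda: (self.hidden.remove(nid),
                            [self.conns.pop(k) for k in list(self.conns) if nid in k])
        else:
            # Perturb an existing weight.
            key = random.choice(list(self.conns))
            old = self.conns[key]
            self.conns[key] += random.gauss(0, 0.2)
            return lambda: self.conns.__setitem__(key, old)


def lifetime_learning(steps=2000):
    """Hill-climb on a toy regression target (y = x0 * x1) with no episodic resets."""
    net = Network(n_inputs=2, n_outputs=1)

    def fitness():
        err = 0.0
        for _ in range(20):
            x = [random.uniform(-1, 1), random.uniform(-1, 1)]
            err += (net.forward(x)[0] - x[0] * x[1]) ** 2
        return -err

    best = fitness()
    for _ in range(steps):
        undo = net.mutate()
        score = fitness()
        if score >= best:          # keep mutations that do not hurt performance
            best = score
        else:
            undo()                 # otherwise revert the mutation
    return net, best


if __name__ == "__main__":
    net, score = lifetime_learning()
    print(f"hidden neurons: {len(net.hidden)}, "
          f"connections: {len(net.conns)}, fitness: {score:.3f}")
```

The key property the sketch shares with the abstract's description is that structure is never specified up front: every neuron and connection is introduced by a mutation during the agent's lifetime, and the topology only persists if it helps on the current task.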
