Easy-to-understand explanation! What is NEAT? Its birth and what comes next!

NEAT improves both the weights and the structure (topology) of neural networks using a genetic algorithm.

  • TWEANNs initially prepare a random population, ensuring diversity from the start. NEAT, on the other hand, starts without diversity, from a population of minimal, uniform networks, and develops diversity over generations. According to Stanley’s research, TWEANNs were spending computation on random initial structure that turned out to be pointless (the sketch below contrasts the two initialization strategies).
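To make the contrast concrete, here is a minimal sketch of the two initialization strategies. The function names and genome layout are my own illustration, not from any particular NEAT library:

```python
import random

def neat_initial_genome(num_inputs, num_outputs):
    """NEAT-style minimal genome: every input connected directly to every
    output, no hidden nodes. Diversity emerges later through mutation."""
    return [
        {"in": i, "out": o, "weight": random.uniform(-1, 1)}
        for i in range(num_inputs)
        for o in range(num_outputs)
    ]

def tweann_initial_genome(num_inputs, num_outputs, max_hidden=10):
    """Classic TWEANN-style random genome: an arbitrary number of hidden
    nodes and random connections, so the population is structurally
    diverse from generation zero."""
    hidden = random.randint(0, max_hidden)
    nodes = list(range(num_inputs + hidden + num_outputs))
    num_connections = random.randint(1, len(nodes) * 2)
    return [
        {"in": random.choice(nodes), "out": random.choice(nodes),
         "weight": random.uniform(-1, 1)}
        for _ in range(num_connections)
    ]

# Every NEAT genome starts identical in topology; every TWEANN genome is random.
population_neat = [neat_initial_genome(3, 1) for _ in range(100)]
population_tweann = [tweann_initial_genome(3, 1) for _ in range(100)]
```

In the NEAT case, diversity has to be earned through mutation rather than assumed at generation zero, which is exactly the design choice Stanley argued avoids wasted computation.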

Neuroevolution: A different kind of deep learning – O’Reilly

  • Exploring the evolution of neural networks through evolutionary algorithms.

  • The first neuroevolution algorithms emerged in the 1980s. At the time, some practitioners believed they could serve as an alternative to the conventional training algorithm for artificial neural networks (ANNs), backpropagation (a form of stochastic gradient descent). A sketch of this style of weight evolution follows this list.

    • Ah, so it was originally conceived as an alternative to backpropagation.
    • Analogously, learning within an individual is like backpropagation, while evolution across generations is like NEAT.
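As a toy illustration of that 1980s approach, here is a fixed-topology neuroevolution loop: the network structure stays fixed and a simple genetic algorithm searches the weight space instead of backpropagation. The network shape, mutation rates, and XOR task are arbitrary choices of mine, so treat this as a sketch of the idea rather than any historical algorithm:

```python
import math
import random

def forward(weights, x1, x2):
    # Fixed 2-2-1 network; weights is a flat list of 9 parameters.
    w = weights
    h1 = math.tanh(w[0] * x1 + w[1] * x2 + w[2])
    h2 = math.tanh(w[3] * x1 + w[4] * x2 + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(weights):
    # Negative squared error over the XOR cases: higher is better.
    return -sum((forward(weights, *x) - y) ** 2 for x, y in XOR)

population = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # truncation selection
    population = parents + [
        [w + random.gauss(0, 0.3) for w in random.choice(parents)]
        for _ in range(40)                         # mutated offspring
    ]

best = max(population, key=fitness)
print("best fitness:", fitness(best))
```

No gradients are computed anywhere; selection and mutation do all the work, which is what made this look like a viable rival to backpropagation at the time.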
  • However, the excitement of that era was not necessarily about the problem itself but rather about the vast untapped potential of evolving brain-like artificial architectures and their weights. The limits of such systems were unknown at the time, and everything seemed possible.

  • Much time was spent studying existing TWEANN algorithms and why they were not working well. There were several well-known issues, such as the challenge of combining two parent ANNs to create offspring (referred to as “crossover”). Specifically, it is difficult to determine how to combine two networks, because different networks may express the same function with completely different structures and connection weights.

  • The result of my quest for a better TWEANN algorithm (in collaboration with my doctoral advisor Risto Miikkulainen) was the NeuroEvolution of Augmenting Topologies algorithm, or NEAT, which rapidly became the most popular and widely used algorithm in neuroevolution. NEAT addresses the crossover problem above by tagging every connection gene with a historical marker called an innovation number, so matching genes can be aligned between parents (see the sketch below).
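Here is a minimal sketch of that alignment idea, simplified from the NEAT paper. The gene format is my own, and always taking non-matching (disjoint/excess) genes from the fitter parent is a simplification of the full rule:

```python
import random

def crossover(parent1, parent2):
    """Each parent is a dict mapping innovation number -> connection gene.
    parent1 is assumed to be the fitter parent."""
    child = {}
    for innov, gene1 in parent1.items():
        if innov in parent2:
            # Matching gene (same innovation number in both parents):
            # inherit randomly from either parent.
            child[innov] = random.choice([gene1, parent2[innov]])
        else:
            # Disjoint/excess gene: inherit from the fitter parent.
            child[innov] = gene1
    return child

fit_parent = {
    1: {"in": 0, "out": 2, "weight": 0.5},
    2: {"in": 1, "out": 2, "weight": -0.7},
    4: {"in": 0, "out": 3, "weight": 0.1},   # gene the other parent lacks
}
other_parent = {
    1: {"in": 0, "out": 2, "weight": 0.9},
    2: {"in": 1, "out": 2, "weight": 0.2},
    3: {"in": 2, "out": 2, "weight": 0.3},   # gene the fit parent lacks
}
print(crossover(fit_parent, other_parent))
```

Because genes carrying the same innovation number share the same historical origin, they can be lined up mechanically, without trying to guess which parts of two different topologies "mean" the same thing.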

  • One limitation of NEAT that increasingly occupied my thoughts is its use of a type of “artificial DNA” called direct encoding. This means that every connection within an ANN is described by its own corresponding gene in the genome. While this is not an issue with hundreds of connections, it becomes quite cumbersome when aiming for a larger, brain-scale network (the toy calculation below shows how quickly the genome grows).
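A quick back-of-the-envelope illustration (my own, not from the article) of why one-gene-per-connection stops scaling:

```python
def direct_genome_size(layer_sizes):
    """Number of connection genes needed to directly encode a fully
    connected feedforward network with the given layer widths."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

for width in (10, 100, 1000, 10000):
    print(width, direct_genome_size([width, width, width]))
# width 10    ->         200 genes
# width 10000 -> 200,000,000 genes the GA must evolve one by one
```

The genome grows with the square of the layer width, so directly evolving anything close to brain scale quickly becomes hopeless.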

  • Now, you can begin to see the trajectory of ideas. This field started with fixed-topology networks, moved on to complexifying networks like NEAT, and then began to focus on indirectly encoded networks (sketched below). These shifts in perspective and capability will continue to deepen insights into the evolution of complexity.
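To give a feel for indirect encoding, here is a toy sketch in the spirit of CPPNs/HyperNEAT. The particular pattern function and parameter names are invented for illustration: a tiny genome defines a function, and that function generates the weights of an arbitrarily large network instead of storing one gene per connection.

```python
import math

# Just 3 evolved numbers make up the entire genome.
genome = {"freq": 2.0, "phase": 0.5, "scale": 1.5}

def weight(genome, x_src, x_dst):
    """Weight between neurons at positions x_src and x_dst, computed from
    the genome's pattern function rather than stored gene by gene."""
    g = genome
    return g["scale"] * math.sin(g["freq"] * (x_dst - x_src) + g["phase"])

# The same 3-number genome can generate weights for a layer of any width:
width = 1000
weights = [[weight(genome, i / width, j / width)
            for j in range(width)] for i in range(width)]
```

The genome stays three numbers long no matter how wide the generated layer becomes, which is exactly the scaling property that direct encoding lacks.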

  • The idea of novelty search is that parents should not be chosen solely based on their objective performance but rather on their novelty (a sketch follows after this list).

    • This could lead to various analogies.
    • In other words, as we solve the puzzle of neuroevolution, we are not only learning about computer algorithms but also about how the world fundamentally operates. This is one of the reasons why my co-author Joel Lehman and I wrote the book “Why Greatness Cannot Be Planned”: we wanted to share the broader significance of the advancements in our field with the general public. We believe this significance is crucial not just for computer science but also for institutions focused on innovation and creativity.
    • That’s just what I was saying above.
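As a closing illustration, here is a toy novelty-search loop. Behaviors are collapsed to a single number and all constants are arbitrary, so this is only a sketch of the selection rule, not of Lehman and Stanley’s full algorithm:

```python
import random

def novelty(behavior, archive, k=5):
    """Average distance to the k nearest behaviors seen so far: how
    different this individual is from everything already explored."""
    distances = sorted(abs(behavior - b) for b in archive)
    return sum(distances[:k]) / max(1, min(k, len(distances)))

archive = []
population = [random.random() for _ in range(20)]   # toy 1-D "behaviors"
for generation in range(50):
    scored = sorted(population, key=lambda b: novelty(b, archive),
                    reverse=True)
    parents = scored[:5]                 # most novel, not most fit
    archive.extend(parents)              # remember what has been explored
    population = [p + random.gauss(0, 0.1)
                  for p in parents for _ in range(4)]
```

Notice that no objective fitness appears anywhere in the selection step: parents earn their place purely by being unlike what came before, which is the core idea the bullet above describes.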