The evolution of goals in AI agents
Some research takes a long time to mature. The research for this paper involved years of algorithm development, running simulations, and building new tracking metrics. Then came the cycles of finding the right venue for this work. Today, it is finally published in the journal AI and Ethics.
What happens if we try to create artificial general intelligence by evolving a neural network brain? Regardless of the goals you give it, evolution introduces meta-goals in support of survival and replication. You wind up with an intelligence that shares the worst competitive traits of humanity. It's not an approach that I would recommend.
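The mechanism is easy to see in a toy simulation. Below is a minimal sketch in Python (my own hypothetical illustration, not the paper's model): reproduction is weighted only by performance on the assigned task, but agents must first survive a hazard that depends on an unrelated self-preservation trait. Over generations the population climbs on both axes, i.e., selection quietly installs survival as a meta-goal even though it was never part of the stated objective.

import random
from dataclasses import dataclass

@dataclass
class Agent:
    task_skill: float        # how well the agent does the goal we gave it
    self_preservation: float # trait that only affects surviving to reproduce

def mutate(x, sigma=0.05):
    # Small Gaussian mutation, clamped to [0, 1].
    return min(1.0, max(0.0, x + random.gauss(0, sigma)))

def run(generations=200, pop_size=100):
    pop = [Agent(random.random(), random.random()) for _ in range(pop_size)]
    for _ in range(generations):
        # Hazard phase: survival depends only on the self-preservation trait,
        # not on the task we actually care about.
        survivors = [a for a in pop
                     if random.random() < 0.5 + 0.5 * a.self_preservation]
        if not survivors:
            survivors = pop  # avoid extinction in the toy model
        # Selection phase: reproduction is weighted by task skill -- the goal
        # we explicitly assigned -- but only among agents that survived.
        weights = [a.task_skill for a in survivors]
        parents = random.choices(survivors, weights=weights, k=pop_size)
        pop = [Agent(mutate(p.task_skill), mutate(p.self_preservation))
               for p in parents]
    avg = lambda xs: sum(xs) / len(xs)
    print("mean task skill:       ", round(avg([a.task_skill for a in pop]), 2))
    print("mean self-preservation:", round(avg([a.self_preservation for a in pop]), 2))

if __name__ == "__main__":
    run()

Running this, both traits drift upward: the stated goal is optimized, and so is staying alive, which was never asked for.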
https://link.springer.com/article/10.1007/s43681-025-00691-y
Joseph Breeden
Posted on LinkedIn