AL has been around 15-20 years now and AFAIK it hasn't turned up anything that you could fairly say proves we've got the principle cracked. I'm afraid I'm not impressed by Norns. Evolution is obviously capable of great things, but I don't think we've properly translated the mechanism into silicon yet. This is mainly because most ALife researchers work from a very simple and crude understanding of it, which runs along the lines of:

  1. Start with a population of random candidate solutions.
  2. Assign each one a fitness score according to some pre-defined criteria.
  3. Increase the frequency of high-fitness solution parameters by breeding them or by copying with mutation.
  4. Repeat as necessary.

This looks like it should work. The only problem is the `repeat as necessary' part: the search space defined by the variable set can be prohibitively large for most non-trivial problems. The standard solution, of course, is to throw more processing speed at the problem, when what would really be useful is to go back and take a proper look at how biology does it. Until that happens AL is in danger of falling into the same trap as AI, i.e. producing lots of systems which are very interesting but are almost completely disparate and don't really go anywhere.
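
To make the loop above concrete, here is a minimal Python sketch of steps 1-4 on a toy problem (maximising the number of 1-bits in a fixed-length bitstring). The problem, the parameter values and the operator choices are all my own illustrative assumptions, not any particular researcher's implementation.

    import random

    GENOME_LENGTH = 32      # size of each candidate solution (assumed)
    POPULATION_SIZE = 50    # candidates per generation (assumed)
    MUTATION_RATE = 0.01    # per-bit chance of flipping (assumed)
    GENERATIONS = 100       # the `repeat as necessary' part

    def fitness(genome):
        """Pre-defined criterion: count the 1-bits (toy example)."""
        return sum(genome)

    def random_genome():
        return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

    def crossover(a, b):
        """Single-point crossover: breed two parents into one child."""
        point = random.randrange(1, GENOME_LENGTH)
        return a[:point] + b[point:]

    def mutate(genome):
        """Copy with mutation: flip each bit with small probability."""
        return [1 - bit if random.random() < MUTATION_RATE else bit
                for bit in genome]

    def evolve():
        # 1. Start with a population of random candidate solutions.
        population = [random_genome() for _ in range(POPULATION_SIZE)]
        for generation in range(GENERATIONS):
            # 2. Assign each one a fitness score.
            scored = sorted(population, key=fitness, reverse=True)
            # 3. Breed the top half, copying with mutation, so that
            #    high-fitness material becomes more frequent.
            parents = scored[:POPULATION_SIZE // 2]
            population = [mutate(crossover(random.choice(parents),
                                           random.choice(parents)))
                          for _ in range(POPULATION_SIZE)]
        # 4. Repeat as necessary (here, a fixed number of generations).
        return max(population, key=fitness)

    if __name__ == "__main__":
        best = evolve()
        print("best fitness:", fitness(best), "of", GENOME_LENGTH)

Even on this toy the loop is where all the time goes: double GENOME_LENGTH and the search space squares, which is exactly why "throw more processing speed at it" doesn't scale.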

BTW, this whinge does NOT include neural networks, which do work well.


I've just re-read this after noticing it had picked up negative xp. I still agree with what I said, but to justify and elaborate it properly would require a much bigger node, maybe even an alife metanode. Instead I'll try to state succinctly where I'm coming from: I just want to see ALife stuff progress as I know it can... That's it!