Memetic algorithm

Source: http://blog.sina.com.cn/s/blog_714e46f80100sozi.html

Memetic algorithms (MA) represent one of the recent growing areas of research in evolutionary computation. The term MA is now widely used as a synergy of evolutionary or any population-based approach with separate individual learning or local improvement procedures for problem search. Quite often, MA are also referred to in the literature as Baldwinian EAs, Lamarckian EAs, cultural algorithms or genetic local search.

Introduction

The theory of “Universal Darwinism” was coined by Richard Dawkins[1] in 1983 to provide a unifying framework governing the evolution of any complex system. In particular, “Universal Darwinism” suggests that evolution is not exclusive to biological systems, i.e., it is not confined to the narrow context of the genes, but applicable to any complex system that exhibits the principles of inheritance, variation and selection, thus fulfilling the traits of an evolving system. For example, the new science of memetics represents the mind-universe analogue to genetics in cultural evolution, stretching across the fields of biology, cognition and psychology, and has attracted significant attention in recent decades. The term “meme” was also introduced and defined by Dawkins[1] in 1976 as “the basic unit of cultural transmission, or imitation”, and in the Oxford English Dictionary as “an element of culture that may be considered to be passed on by non-genetic means”.

Inspired by both Darwinian principles of natural evolution and Dawkins’ notion of a meme, the term “Memetic Algorithm” (MA) was first introduced by Moscato in his technical report[2] in 1989 where he viewed MA as being close to a form of population-based hybrid genetic algorithm (GA) coupled with an individual learning procedure capable of performing local refinements. The metaphorical parallels, on the one hand, to Darwinian evolution and, on the other hand, between memes and domain specific (local search) heuristics are captured within memetic algorithms thus rendering a methodology that balances well between generality and problem specificity. In a more diverse context, memetic algorithms are now used under various names including Hybrid Evolutionary Algorithms, Baldwinian Evolutionary Algorithms, Lamarckian Evolutionary Algorithms, Cultural Algorithms or Genetic Local Search. In the context of complex optimization, many different instantiations of memetic algorithms have been reported across a wide range of application domains, in general, converging to high quality solutions more efficiently than their conventional evolutionary counterparts.

In general, using the ideas of memetics within a computational framework is called "Memetic Computing" (MC). With MC, the traits of Universal Darwinism are more appropriately captured. Viewed from this perspective, MA is a more constrained notion of MC. More specifically, MA covers one area of MC, in particular dealing with areas of evolutionary algorithms that marry other deterministic refinement techniques for solving optimization problems. MC extends the notion of memes to cover conceptual entities of knowledge-enhanced procedures or representations.

The development of MAs

1st generation

The first generation of MA refers to hybrid algorithms, a marriage between a population-based global search (often in the form of an evolutionary algorithm) and a cultural evolutionary stage. Although this first generation of MA encompasses characteristics of cultural evolution (in the form of local refinement) in its search cycle, it may not qualify as a true evolving system according to Universal Darwinism, since the core principles of inheritance/memetic transmission, variation and selection are missing. This suggests why the term MA stirred up criticism and controversy among researchers when it was first introduced.[2]

Pseudo code:

 Procedure Memetic Algorithm
     Initialize: Generate an initial population;
     while Stopping conditions are not satisfied do
         Evaluate all individuals in the population.
         Evolve a new population using stochastic search operators.
         Select the subset of individuals, Ω_il, that should undergo the individual improvement procedure.
         for each individual in Ω_il do
             Perform individual learning using meme(s) with frequency or probability of f_il, for a period of t_il.
             Proceed with Lamarckian or Baldwinian learning.
         end for
     end while
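As an illustration only (not code from the article), the loop above can be sketched in Python. The `sphere` objective, the population size, and the learning parameters are arbitrary assumptions; any evolutionary operators and any local refinement meme could be substituted:

```python
import random

def sphere(x):
    """Sample objective to minimize (an assumption, not from the article)."""
    return sum(v * v for v in x)

def local_learn(ind, f, steps=20, sigma=0.1):
    """Individual learning meme: stochastic hill climbing on one solution."""
    best, best_f = ind[:], f(ind)
    for _ in range(steps):
        cand = [v + random.gauss(0, sigma) for v in best]
        cf = f(cand)
        if cf < best_f:
            best, best_f = cand, cf
    return best

def memetic_algorithm(f, dim=3, pop_size=20, gens=50, p_il=0.3, seed=0):
    random.seed(seed)
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):                        # stopping condition: budget
        new_pop = []
        for _ in range(pop_size):                # evolve: tournament + mutation
            a, b = random.sample(pop, 2)
            parent = a if f(a) < f(b) else b
            new_pop.append([v + random.gauss(0, 0.3) for v in parent])
        pop = new_pop
        for i in range(pop_size):                # subset Ω_il, probability p_il
            if random.random() < p_il:
                pop[i] = local_learn(pop[i], f)  # Lamarckian: write result back
    return min(pop, key=f)

best = memetic_algorithm(sphere)
```

Replacing the write-back with an evaluation-only credit (keeping the genotype unchanged but using the refined fitness for selection) would turn the Lamarckian step into a Baldwinian one.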

2nd generation

Multi-meme,[3] hyper-heuristic[4] and meta-Lamarckian MA[5] are referred to as second generation MA, exhibiting the principles of memetic transmission and selection in their design. In multi-meme MA, the memetic material is encoded as part of the genotype. Subsequently, the decoded meme of each individual/chromosome is used to perform a local refinement. The memetic material is then transmitted from parent to offspring through a simple inheritance mechanism. In hyper-heuristic and meta-Lamarckian MA, on the other hand, a pool of candidate memes compete, based on their past merit in generating local improvements, through a reward mechanism that decides which meme is selected for future local refinements. Memes with a higher reward have a greater chance of being replicated or copied. For a review of second generation MA, i.e., MA considering multiple individual learning methods within an evolutionary system, the reader is referred to.[6]
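A minimal sketch of the reward mechanism described above: two hypothetical memes compete via roulette-wheel selection over their accumulated rewards, and a meme is credited whenever it improves the solution. All names, step sizes and the `abs` objective are illustrative assumptions, not the schemes of the cited works:

```python
import random

def meme_a(x):
    """Hypothetical coarse meme: step 0.5 toward zero from above."""
    return x - 0.5 if x > 0 else x + 0.5

def meme_b(x):
    """Hypothetical fine meme: shrink toward zero by 10%."""
    return x * 0.9

def select_and_learn(x, f, rewards, rng):
    """Pick a meme in proportion to its accumulated reward, apply it,
    and credit it with any improvement it produces."""
    memes = [meme_a, meme_b]
    pick = rng.random() * sum(rewards)   # roulette wheel over rewards
    idx = 0 if pick < rewards[0] else 1
    y = memes[idx](x)
    gain = f(x) - f(y)                   # improvement produced by this meme
    if gain > 0:
        rewards[idx] += gain             # higher reward => replicated more often
        x = y                            # Lamarckian update: keep refinement
    return x, rewards

rng = random.Random(1)
rewards = [1.0, 1.0]                     # initial, uninformative rewards
x = 8.0                                  # one solution to refine; objective |x|
for _ in range(100):
    x, rewards = select_and_learn(x, abs, rewards, rng)
```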

3rd generation

Co-evolution[7] and self-generating MAs[8] may be regarded as 3rd generation MA where all three principles satisfying the definitions of a basic evolving system have been considered. In contrast to 2nd generation MA which assumes that the memes to be used are known a priori, 3rd generation MA utilizes a rule-based local search to supplement candidate solutions within the evolutionary system, thus capturing regularly repeated features or patterns in the problem space.

Some design notes

The frequency and intensity of individual learning directly define the balance between evolution (exploration) and individual learning (exploitation) in the MA search, for a given fixed computational budget. Clearly, more intense individual learning provides a greater chance of convergence to local optima, but limits the amount of evolution that can take place without incurring excessive computational cost. Therefore, care should be taken when setting these two parameters so as to balance the available computational budget and achieve maximum search performance. When only a portion of the population undergoes learning, the issue of which subset of individuals to improve needs to be considered in order to maximize the utility of the MA search. Last but not least, each individual learning procedure/meme favors a different neighborhood structure, hence it is also necessary to decide which meme or memes to use for the optimization problem at hand.

How often should individual learning be applied?

One of the first issues pertinent to memetic algorithm design is how often individual learning should be applied, i.e., the individual learning frequency. In one study,[9] the effect of individual learning frequency on MA search performance was investigated under various configurations of the frequency at different stages of the search. Conversely, it was shown elsewhere[10] that it may be worthwhile to apply individual learning to every individual if the computational complexity of the individual learning is relatively low.
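One illustrative way to treat the learning frequency as a design parameter is a schedule that ramps the per-individual learning probability over the run, spending cheap exploration early and heavier exploitation late. The endpoint values `early` and `late` are arbitrary assumptions, not values from the cited studies:

```python
def learning_probability(gen, max_gen, early=0.1, late=0.6):
    """Linearly ramp the per-individual learning probability from `early`
    at generation 0 to `late` at the final generation (illustrative only)."""
    t = gen / max_gen
    return early + (late - early) * t
```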

On which solutions should individual learning be used?

On the issue of selecting appropriate individuals among the EA population that should undergo individual learning, fitness-based and distribution-based strategies were studied for adapting the probability of applying individual learning to the population of chromosomes in continuous parametric search problems, with Land[11] extending the work to combinatorial optimization problems. Bambha et al. introduced a simulated heating technique for systematically integrating parameterized individual learning into evolutionary algorithms to achieve maximum solution quality.[12]
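As one simple instance of a fitness-based strategy (an illustration only, not the adaptive probability schemes of the cited works), the subset to refine can be just the k fittest individuals:

```python
def select_for_learning(pop, fitness, k):
    """Fitness-based subset selection: return the k best individuals
    (minimization: lower fitness value is better)."""
    ranked = sorted(range(len(pop)), key=lambda i: fitness[i])
    return [pop[i] for i in ranked[:k]]
```

A distribution-based strategy would instead favor individuals in sparsely sampled regions of the search space, trading some exploitation for diversity.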

How long should individual learning be run?

Individual learning intensity, t_il, is the amount of computational budget allocated to an iteration of individual learning, i.e., the maximum computational budget allowable for individual learning to expend on improving a single solution.
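One common way to enforce t_il is to cap the number of objective evaluations a single refinement may spend. A sketch, where the neighbor-generating function and the objective are placeholders:

```python
import random

def budgeted_local_search(x, f, neighbor, t_il, rng):
    """Spend at most t_il objective evaluations refining one solution;
    t_il plays the role of the individual learning intensity."""
    best, best_f = x, f(x)
    evals = 1                       # the initial evaluation also counts
    while evals < t_il:
        cand = neighbor(best, rng)
        cf = f(cand)
        evals += 1
        if cf < best_f:
            best, best_f = cand, cf
    return best, evals

rng = random.Random(0)
refined, used = budgeted_local_search(
    5.0,                            # starting solution (illustrative)
    lambda v: v * v,                # placeholder objective
    lambda v, r: v + r.uniform(-1, 1),  # placeholder neighborhood move
    50, rng)
```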

What individual learning method or meme should be used for a particular problem or individual?

In the context of continuous optimization, individual learning exists in the form of local heuristics or conventional exact enumerative methods.[13] Examples of individual learning strategies include hill climbing, the Simplex method, Newton/quasi-Newton methods, interior point methods, the conjugate gradient method, line search, and other local heuristics. Note that most common individual learners are deterministic.
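As a small example of a deterministic individual learner of the hill-climbing family, a compass/pattern search tries fixed-size moves along each coordinate and halves the step when none improves. The step sizes and tolerance here are arbitrary choices:

```python
def pattern_search(x, f, step=1.0, tol=1e-3):
    """Deterministic hill climbing (compass/pattern search): try +/- step
    moves along each coordinate; halve the step when nothing improves,
    and stop once the step falls below tol."""
    x = list(x)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                cand = x[:]
                cand[i] += d
                cf = f(cand)
                if cf < fx:
                    x, fx = cand, cf
                    improved = True
        if not improved:
            step /= 2
    return x
```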

In combinatorial optimization, on the other hand, individual learning methods commonly exist in the form of heuristics (which can be deterministic or stochastic) that are tailored to the problem of interest. Typical heuristic procedures and schemes include k-gene exchange, edge exchange, first-improvement, and many others.
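A minimal sketch of an edge-exchange (2-opt) meme with a first-improvement acceptance rule for a travelling-salesman tour; the four-city instance is a hypothetical example chosen so the improvement is easy to verify:

```python
import math

def tour_length(tour, dist):
    """Total length of a closed tour under distance matrix dist."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_opt_first_improvement(tour, dist):
    """Edge-exchange meme: accept the first segment reversal that shortens
    the tour, restart the scan, and stop when no reversal improves."""
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(cand, dist) < tour_length(tour, dist) - 1e-12:
                    tour = cand
                    improved = True
                    break
            if improved:
                break
    return tour

# Hypothetical instance: four cities at the corners of a unit square.
pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
dist = [[math.hypot(ax - bx, ay - by) for (bx, by) in pts] for (ax, ay) in pts]
best = two_opt_first_improvement([0, 2, 1, 3], dist)   # starts with crossing edges
```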

Applications

Memetic algorithms are the subject of intense scientific research (a scientific journal devoted to their study is about to be launched) and have been successfully applied to a multitude of real-world problems. Although many people employ techniques closely related to memetic algorithms, alternative names such as hybrid genetic algorithms are also used. Furthermore, many people refer to their memetic techniques simply as genetic algorithms; the widespread use of this misnomer hampers assessment of the total number of applications.

Researchers have used memetic algorithms to tackle many classical NP-hard problems. To cite some of them: graph partitioning, the multidimensional knapsack problem, the travelling salesman problem, the quadratic assignment problem, the set cover problem, minimal graph colouring, the maximum independent set problem, the bin packing problem and the generalized assignment problem.

More recent applications include (but are not limited to): training of artificial neural networks,[14] pattern recognition,[15] robotic motion planning,[16] beam orientation,[17] circuit design,[18] electric service restoration,[19] medical expert systems,[20] single machine scheduling,[21] automatic timetabling (notably, the timetable for the NHL),[22] manpower scheduling,[23] nurse rostering and function optimisation,[24] processor allocation,[25] maintenance scheduling (for example, of an electric distribution network),[26] multidimensional knapsack problem,[27] VLSI design,[28] clustering of gene expression profiles,[29] feature/gene selection,[30][31] and multi-class, multi-objective feature selection.[32]


References

  1. ^ a b Dawkins, R. (1989). The Selfish Gene. Oxford University Press. 
  2. ^ a b Moscato, P. (1989). "On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms". Caltech Concurrent Computation Program (report 826). 
  3. ^ Krasnogor N. (1999). "Coevolution of genes and memes in memetic algorithms". Graduate Student Workshop: 371. 
  4. ^ Kendall G. and Soubeiga E. and Cowling P. "Choice function and random hyperheuristics". 4th Asia-Pacific Conference on Simulated Evolution and Learning SEAL 2002: 667–671. 
  5. ^ Ong Y. S. and Keane A. J. (2004). "Meta-Lamarckian learning in memetic algorithms". IEEE Transactions on Evolutionary Computation 8 (2): 99–110. doi:10.1109/TEVC.2003.819944. 
  6. ^ Ong Y. S. and Lim M. H. and Zhu N. and Wong K. W. (2006). "Classification of Adaptive Memetic Algorithms: A Comparative Study". IEEE Transactions on Systems Man and Cybernetics -- Part B. 36 (1): 141. doi:10.1109/TSMCB.2005.856143. 
  7. ^ Smith J. E. (2007). "Coevolving Memetic Algorithms: A Review and Progress Report". IEEE Transactions on Systems Man and Cybernetics - Part B 37 (1): 6–17. doi:10.1109/TSMCB.2006.883273. 
  8. ^ Krasnogor N. and Gustafson S. (2002). "Toward truly "memetic" memetic algorithms: discussion and proof of concepts". Advances in Nature-Inspired Computation: the PPSN VII Workshops. PEDAL (Parallel Emergent and Distributed Architectures Lab). University of Reading. 
  9. ^ Hart W. E. (1994). Adaptive Global Optimization with Local Search. 
  10. ^ Ku K. W. C. and Mak M. W. and Siu W. C. (2000). "A study of the Lamarckian evolution of recurrent neural networks". IEEE Transactions on Evolutionary Computation 4 (1): 31–42. doi:10.1109/4235.843493. 
  11. ^ Land M. W. S. (1998). Evolutionary Algorithms with Local Search for Combinatorial Optimization. 
  12. ^ Bambha N. K. and Bhattacharyya S. S. and Teich J. and Zitzler E. (2004). "Systematic integration of parameterized local search into evolutionary algorithms". IEEE Transactions on Evolutionary Computation 8 (2): 137–155. doi:10.1109/TEVC.2004.823471. 
  13. ^ Schwefel H. P. (1995). Evolution and optimum seeking. Wiley New York. 
  14. ^ Ichimura, T.; Kuriyama, Y. (1998). "Learning of neural networks with parallel hybrid GA using a royal road function". IEEE International Joint Conference on Neural Networks. 2. New York, NY. pp. 1131–1136. 
  15. ^ Aguilar, J.; Colmenares, A. (1998). "Resolution of pattern recognition problems using a hybrid genetic/random neural network learning algorithm". Pattern Analysis and Applications 1 (1): 52–61. doi:10.1007/BF01238026. 
  16. ^ Ridao, M.; Riquelme, J.; Camacho, E.; Toro, M. (1998). "An evolutionary and local search algorithm for planning two manipulators motion". Lecture Notes in Computer Science (Springer-Verlag) 1416: 105–114. doi:10.1007/3-540-64574-8_396. 
  17. ^ Haas, O.; Burnham, K.; Mills, J. (1998). "Optimization of beam orientation in radiotherapy using planar geometry". Physics in Medicine and Biology 43 (8): 2179–2193. doi:10.1088/0031-9155/43/8/013. PMID 9725597. 
  18. ^ Harris, S.; Ifeachor, E. (1998). "Automatic design of frequency sampling filters by hybrid genetic algorithm techniques". IEEE Transactions on Signal Processing 46 (12): 3304–3314. doi:10.1109/78.735305. 
  19. ^ Augugliaro, A.; Dusonchet, L.; Riva-Sanseverino, E. (1998). "Service restoration in compensated distribution networks using a hybrid genetic algorithm". Electric Power Systems Research 46 (1): 59–66. doi:10.1016/S0378-7796(98)00025-X. 
  20. ^ Wehrens, R.; Lucasius, C.; Buydens, L.; Kateman, G. (1993). "HIPS, A hybrid self-adapting expert system for nuclear magnetic resonance spectrum interpretation using genetic algorithms". Analytica Chimica ACTA 277 (2): 313–324. doi:10.1016/0003-2670(93)80444-P. 
  21. ^ França, P.; Mendes, A.; Moscato, P. (1999). "Memetic algorithms to minimize tardiness on a single machine with sequence-dependent setup times". Proceedings of the 5th International Conference of the Decision Sciences Institute. Athens, Greece. pp. 1708–1710. 
  22. ^ Costa, D. (1995). "An evolutionary tabu search algorithm and the NHL scheduling problem". Infor 33: 161–178. 
  23. ^ Aickelin, U. (1998). "Nurse rostering with genetic algorithms". Proceedings of young operational research conference 1998. Guildford, UK. 
  24. ^ Ozcan, E. (2007). "Memes, Self-generation and Nurse Rostering". Lecture Notes in Computer Science (Springer-Verlag) 3867: 85–104. doi:10.1007/978-3-540-77345-0_6. 
  25. ^ Ozcan, E.; Onbasioglu, E. (2006). "Memetic Algorithms for Parallel Code Optimization". International Journal of Parallel Programming 35 (1): 33–61. doi:10.1007/s10766-006-0026-x. 
  26. ^ Burke, E.; Smith, A. (1999). "A memetic algorithm to schedule planned maintenance for the national grid". Journal of Experimental Algorithmics 4 (4): 1–13. doi:10.1145/347792.347801. 
  27. ^ Ozcan, E.; Basaran, C. (2009). "A Case Study of Memetic Algorithms for Constraint Optimization". Soft Computing: A Fusion of Foundations, Methodologies and Applications 13 (8-9): 871–882. doi:10.1007/s00500-008-0354-4. 
  28. ^ Areibi, S., Yang, Z. (2004). "Effective memetic algorithms for VLSI design automation = genetic algorithms + local search + multi-level clustering". Evolutionary Computation (MIT Press) 12 (3): 327–353. doi:10.1162/1063656041774947. PMID 15355604. 
  29. ^ Merz, P.; Zell, A. (2002). "Clustering Gene Expression Profiles with Memetic Algorithms". Parallel Problem Solving from Nature — PPSN VII. Springer. pp. 811–820. doi:10.1007/3-540-45712-7_78. 
  30. ^ Zexuan Zhu, Y. S. Ong and M. Dash (2007). "Markov Blanket-Embedded Genetic Algorithm for Gene Selection". Pattern Recognition 49 (11): 3236–3248. 
  31. ^ Zexuan Zhu, Y. S. Ong and M. Dash (2007). "Wrapper-Filter Feature Selection Algorithm Using A Memetic Framework". IEEE Transactions on Systems, Man and Cybernetics - Part B 37 (1): 70–76. doi:10.1109/TSMCB.2006.883267. 
  32. ^ Zexuan Zhu, Y. S. Ong and M. Zurada (2008). "Simultaneous Identification of Full Class Relevant and Partial Class Relevant Genes". IEEE/ACM Transactions on Computational Biology and Bioinformatics. 
Memetic algorithm (known in Chinese as “模因算法”) is an optimization algorithm built on evolutionary search combined with local search. Its implementation roughly divides into two parts: an evolutionary process and a local search process.

The evolutionary process:

  1. Initialize the population: randomly generate an initial population in which each individual is represented by a set of parameters.
  2. Evaluate fitness: evaluate each individual against the problem's optimization objective to obtain its fitness value.
  3. Selection: based on fitness values, select the better individuals with some probability to form the next generation.
  4. Crossover and mutation: apply crossover and mutation to the selected individuals to generate new ones.
  5. Replacement: replace the lower-fitness individuals of the previous generation with the newly generated ones.

The local search process:

  1. Randomly select an individual from the current population.
  2. Local search: apply a local search operation to the selected individual, for example optimizing its parameters by gradient descent.
  3. Replacement: replace the original individual with the improved one obtained by local search.

The full memetic algorithm iterates these two processes until a stopping condition is met; the stopping condition can be reaching a maximum number of iterations or attaining a required optimization accuracy.

In summary, implementing a memetic algorithm involves population initialization, fitness evaluation, selection, crossover and mutation in the evolutionary process, plus a local search process. Iterating these operations enables efficient search on complex optimization problems.
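The local search step above names gradient descent as one example. A minimal sketch of such a refinement step using finite-difference gradients; the objective, learning rate and step counts are illustrative assumptions:

```python
def grad_descent_refine(x, f, lr=0.1, steps=50, h=1e-6):
    """Gradient-descent local search: refine one individual's parameters
    using forward-difference gradient estimates of objective f."""
    x = list(x)
    for _ in range(steps):
        g = []
        for i in range(len(x)):
            xp = x[:]
            xp[i] += h
            g.append((f(xp) - f(x)) / h)   # forward-difference partial derivative
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x
```

In an MA, the refined parameter vector would replace the original individual in the population (the Lamarckian replacement step described above).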
