
A Preliminary Investigation into Evolving Modular Finite State Machines


David Czarnecki

May 09, 2012

A Preliminary Investigation into Evolving Modular Finite State Machines

Kumar Chellapilla
Dept. of Electrical and Computer Engineering
University of California at San Diego, La Jolla, CA 92093-4007
e-mail: [email protected]
http://vision.ucsd.edu/~kchellapl

David Czarnecki
Information Technology Laboratory
General Electric Corporate Research and Development, Niskayuna, NY
e-mail: [email protected]
http://www.cs.rpi.edu/~czarn

0-7803-5536-9/99/$10.00 © 1999 IEEE

Abstract- Evolutionary programming was proposed more than thirty-five years ago for generating artificial intelligence. The original experiments consisted of evolving populations of finite state machines (FSMs) for prediction, identification, and control. Since then, all of the studies with FSMs and evolutionary programming have been limited to the evolution of strictly non-modular FSMs. In this study, a modular FSM architecture is proposed and an evolutionary programming procedure for evolving such structures is presented. Preliminary results indicate that the proposed procedure is indeed capable of successfully evolving modular FSMs and that such modularity can result in a statistically significantly increased rate of optimization.

1 Introduction

Evolutionary programming was first offered in the early 1960's in a series of experiments for evolving artificial intelligence (Fogel, 1962, 1964; Fogel et al., 1966). Prediction was considered a prerequisite for intelligent behavior, as it allowed for the translation of such predictions into suitable responses in light of given goals. The environments were modeled as sequences of symbols, and agents in these environments were represented as finite state machines (FSMs). These FSMs were evolved for prediction in both stationary and non-stationary environments in light of a goal. Successful prediction meant that the FSM used for prediction represented an accurate model of the environment and thus could be used to control the unknown plant represented by the environment.

Since the early experiments with FSMs, evolutionary programming has been used for the evolution and optimization of a wide variety of variable-length architectures and their associated parameters. Such applications include linear and bilinear models (Chellapilla and Rao, 1998; Fogel, 1991, 1992), neural networks (Angeline, 1993; Angeline et al., 1994; Yao and Liu, 1996), fuzzy systems (Haffner and Sebald, 1993), parse trees (Angeline, 1997; Chellapilla, 1997), and lists (Chellapilla and Fogel, 1997; Fogel, 1988). Recent investigations with evolving FSMs were directed towards the recognition and inferencing of regular languages (Dunay et al., 1994; Lindgren et al., 1992), tracking tasks (Angeline, 1993; Jefferson et al., 1991), game theory (Fogel, 1995a; Miller, 1996), self-adaptation techniques for enhanced rates of optimization (Angeline et al., 1996; Fogel et al., 1994, 1995), and the design of digital circuits (Corno et al., 1996; Miller and Thompson, 1995).

The ability to search for problem-specific representations may become necessary for a search technique to be scalable to large problems of practical value. Automatic discovery of problem representations can be implemented in an evolutionary framework that favors the generation of hierarchical, modular structures that can decompose a difficult task into simpler subtasks. These subtasks may then be solved with lower computational effort and their solutions combined to give the overall solution. Further, already discovered solutions to subtasks may be reused to repeatedly solve similar subproblems. In view of this, modular structures that can be evolved in a hierarchical arrangement have been proposed for the design of artificial neural networks (Angeline, 1997), computer programs (Koza, 1992; Angeline, 1993; Rosca et al., 1994; Spector, 1996), and pattern recognition systems (Tamburino and Rizki, 1995).

This paper presents a modular FSM architecture and an evolutionary programming procedure for evolving these modular FSMs. A brief background on FSMs is presented in Section 2 and the proposed modular FSM is described in Section 3. Section 4 describes the evolutionary procedure for the design and optimization of these modular FSMs. The experimental setup for investigating the evolution of modular FSMs is also described in Section 4. Results are presented in Section 5 and conclusions are offered in Section 6.

2 Finite State Machines

FSMs are defined in terms of a finite alphabet of possible input symbols, a finite alphabet of possible output symbols, some finite number of possible different internal states, and a start state. An FSM is completely specified by describing each of these states in terms of the output symbols that would emerge when the machine is in that state and receiving each of the possible input symbols (Fogel et al., 1966). While processing input symbols, the FSM switches among its internal states, generates output symbols, and thereby exhibits a certain behavior.
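As a concrete illustration of this state-table representation, the following Python sketch (illustrative code, not from the paper) executes the two-state modulo-three cyclic increment machine of Figure 1; the table maps each (state, input) pair to the output symbol and the next state.

```python
# Minimal FSM interpreter: a state table maps (state, input) -> (output, next state).
# The table below encodes the two-state modulo-three cyclic increment loopback
# machine of Figure 1: state 1 adds one to the input, state 2 adds two (mod 3),
# and an input symbol of 2 toggles between the two states.
TABLE = {
    (1, 0): (1, 1), (1, 1): (2, 1), (1, 2): (0, 2),
    (2, 0): (2, 2), (2, 1): (0, 2), (2, 2): (1, 1),
}

def run_fsm(table, start_state, inputs):
    """Feed `inputs` through the machine, returning the output sequence."""
    state, outputs = start_state, []
    for symbol in inputs:
        output, state = table[(state, symbol)]
        outputs.append(output)
    return outputs

print(run_fsm(TABLE, 1, [0, 1, 2, 0, 2, 1]))  # [1, 2, 0, 2, 1, 2]
```

The dictionary-of-tuples layout mirrors the state-table directly, so each row of Table 1 corresponds to one entry.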
Figure 1. Two-state modulo-three cyclic increment loopback finite state machine. The input alphabet is {0, 1, 2} and the output alphabet is {0, 1, 2}. The symbol generated by the finite state machine is the input symbol incremented by one (when in state 1) or by two (when in state 2) under modulo-three arithmetic.

For example, a two-state cyclic increment loopback FSM is shown in Figure 1. The start state is the first state, i.e., the state numbered 1 in the figure. When the current internal state of the cyclic increment loopback FSM is 1, input symbols 0 and 1 generate output symbols 1 and 2, respectively. When a 2 is input, the output symbol is 0 and a state transition occurs from the first to the second state. Similarly, for the second state, input symbols 0 and 1 generate output symbols 2 and 0, respectively. When a 2 is input, the output symbol is 1 and a state transition occurs from the second to the first state. This behavior of the FSM can be summarized in a state-table as given in Table 1.

Table 1. State-table of the FSM shown in Figure 1. The start state is marked with a (*).

Present State   Input   Output   Next State
     1*           0        1         1
     1*           1        2         1
     1*           2        0         2
     2            0        2         2
     2            1        0         2
     2            2        1         1

FSMs are well suited for representing behavior that can be described by a mapping from a sequence of input symbols that constitute the environment or stimulus to a sequence of output symbols that express the response.

3 Modular Finite State Machines

FSMs can be extended to incorporate submodules consisting of independent FSMs. Such submodules may allow for an easier and quicker design of the large FSMs that may become necessary when dealing with real-world applications.

Consider such a modular FSM consisting of one main FSM (main-FSM) and K submodules (which are also FSMs), namely sub-FSM1, sub-FSM2, ..., sub-FSMK. The number of sub-FSMs, K, may be fixed a priori or may be variable. The goal is to decompose the overall behavior of the modular FSM into simpler behaviors that can be represented by the sub-FSMs and brought together by the main-FSM. In view of this, each sub-FSM must be a complete FSM with its own set of states, a set of state transitions among these states, and a start state. As the main-FSM and the sub-FSMs process the same input stimuli but can exhibit different responses, their input alphabets are identical while the output alphabets may vary.

A simple approach to the development of such a modular FSM would be to allow certain states in the main-FSM to make transitions to these sub-FSMs. Transitions between the sub-FSMs may also be allowed. Figure 2 shows such a modular FSM with a main-FSM and one sub-FSM, i.e., sub-FSM1. The input and output alphabets are the same and are given by {C, D}. For example, C might represent true and D might represent false. The state tables describing the behavior of main-FSM and sub-FSM1 are presented in Tables 2 and 3, respectively. A fifth row, named Control, is added to the state-tables (shown in Tables 2 and 3) whose entries indicate which machine is to gain control after the transition following an input symbol in the current state. A zero entry indicates that control returns back to the main-FSM, whereas an entry i indicates that control is transferred to the ith sub-FSM, i.e., sub-FSMi.

Table 2. State-table of the main-FSM in Figure 2. The start state is marked with a (*).

Table 3. State-table of sub-FSM1 in Figure 2. The start state is marked with a (*).

The behavior of the modular FSM shown in Figure 2 may be best described through the processing of a sample input sequence. Consider the processing of a test input string of symbols 'CCCDD':

1. Execution begins in the main-FSM in State 2 (the start state). After reading the first input symbol, C, the machine outputs a D, makes a transition to State 1, and retains control. Retaining control implies that the next input symbol will be processed by the main-FSM.
2. The next input symbol, C, is read by the main-FSM; it outputs a C and makes a transition to State 2. However, control now transfers to sub-FSM1, as indicated by the entry 1 in the first column of the Control row in Table 2.
3. In sub-FSM1, execution begins in the start state, i.e., State 1. The next input symbol, C, is read, a C is output, and control loops back to State 1. The sub-FSM retains control, as indicated by the entry 1 in the first column of the Control row in Table 3.
4. After the next input symbol, D, is read, sub-FSM1 outputs a D, makes a state transition to State 2, and retains control.
5. After the last symbol in the test sequence, D, is read by sub-FSM1, a C is output and control returns back to the main-FSM in State 2.

Notice that when control is transferred to a sub-FSM (either from the main-FSM or from any other sub-FSM), the processing of input symbols always starts in the sub-FSM's start state. However, when control is returned to the main-FSM, input symbol processing continues from the state that the main-FSM last transitioned to when control was transferred away. Also observe that recursion can be allowed by not constraining the transitions between different constituent machines (i.e., the main-FSM and sub-FSMs). If the goal is to evolve simple behaviors (i.e., behaviors that are easy to interpret), restrictions such as the following may be placed on transitions between machines:

1. Transitions are allowed in both directions between the main-FSM and any of the sub-FSMs.
2. Transitions are allowed from the ith sub-FSM (i.e., sub-FSMi) to the jth sub-FSM, where j > i. However, transitions in the reverse direction are not allowed.

Thus with two sub-FSMs, i.e., K = 2, transitions between the main-FSM and either of the two sub-FSMs in both directions are allowed, transitions from sub-FSM1 to sub-FSM2 are also allowed, but transitions from sub-FSM2 to sub-FSM1 are not allowed. Such restrictions are commonly imposed in the genetic programming literature (Koza, 1994). However, in this study no such restrictions were placed on interactions between constituent machines.
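The control-transfer semantics can be made concrete with a small interpreter. The Python sketch below is illustrative, not the authors' code: the table entries exercised by the 'CCCDD' walkthrough above follow the text, while the remaining entries are invented for completeness and do not claim to reproduce Tables 2 and 3.

```python
# Modular FSM interpreter sketch. A control entry of 0 denotes the main-FSM;
# an entry i > 0 denotes sub-FSM i. A sub-FSM always restarts in its start
# state on gaining control; the main-FSM resumes from the state it
# transitioned to when control last left it.
from dataclasses import dataclass

@dataclass
class Machine:
    start: int
    table: dict  # (state, input) -> (output, next state, control)

def run_modular(main, subs, inputs):
    """Return the output sequence produced for `inputs`."""
    active, state = 0, main.start   # machine id 0 is the main-FSM
    main_resume = main.start        # where the main-FSM resumes on return
    outputs = []
    for symbol in inputs:
        machine = main if active == 0 else subs[active]
        output, nxt, control = machine.table[(state, symbol)]
        outputs.append(output)
        if active == 0:
            main_resume = nxt       # remember where the main-FSM left off
        if control == active:
            state = nxt             # same machine keeps control
        elif control == 0:
            active, state = 0, main_resume          # return to the main-FSM
        else:
            active, state = control, subs[control].start  # sub-FSMs restart
    return outputs

# Hypothetical tables: the entries on the 'CCCDD' path match the walkthrough
# in the text; the rest are illustrative.
main = Machine(start=2, table={
    (2, 'C'): ('D', 1, 0), (2, 'D'): ('C', 2, 0),
    (1, 'C'): ('C', 2, 1), (1, 'D'): ('D', 1, 0),
})
sub1 = Machine(start=1, table={
    (1, 'C'): ('C', 1, 1), (1, 'D'): ('D', 2, 1),
    (2, 'C'): ('D', 1, 1), (2, 'D'): ('C', 2, 0),
})
print(run_modular(main, {1: sub1}, 'CCCDD'))  # ['D', 'C', 'C', 'D', 'C']
```

Running the sketch on 'CCCDD' reproduces the five outputs traced in the walkthrough, including the hand-off to sub-FSM1 after the second symbol and the return to the main-FSM's State 2 on the last symbol.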
It might be worth noting that non-modular FSMs form a subset of the class of modular FSMs, wherein the number of sub-FSMs (K) is zero. Hence any evolutionary procedure for the design and optimization of modular FSMs is readily applicable to the design and optimization of non-modular FSMs.

4 Method

4.1 Computational Procedure

The evolutionary programming procedure for evolving modular FSMs was derived from the baseline method for evolving FSMs investigated in Fogel et al. (1966) and Fogel (1991; 1995b). Evolutionary programming was used to co-evolve the sub-FSMs along with the main-FSM, thus enabling the simultaneous optimization of both the main-FSM and the sub-FSMs.

Figure 2. Modular finite state machine comprising (a) the main-FSM and (b) the sub-FSM corresponding to the behaviors described in Tables 2 and 3. The hexagons labeled Sub1 in the main-FSM represent sub-FSM1. When control is transferred to sub-FSM1, execution starts in state 1, which is the start state of sub-FSM1. When control is returned to main-FSM, execution continues from the state indicated by the output link of the Sub1 module in main-FSM.

A population of trial modular FSMs was maintained, a set of variation operations was performed to produce changes in these machines, and selection was used to determine which machines were to survive to the next generation and which were to be culled from the pool of trials. This process was repeated until an acceptable machine was obtained or the available computer time was exhausted. The number of sub-FSMs in each candidate machine was fixed at two during evolution. The evolutionary programming procedure consisted of an initialization followed by an iterative loop of variation and selection.

Initialization: The initial population consisted of p modular FSMs, Pi = {Mi, Si1, ..., SiK}, i = 1, 2, ..., p, where Mi and the Sij's represented the ith modular FSM's main-FSM and K sub-FSMs, respectively. The Mi and Sij (i = 1, ..., p; j = 1, ..., K) may be viewed as non-modular FSMs and were initialized at random in an identical manner. Suppose P = (N, S, L, O) was a non-modular FSM that had to be initialized at random. The number of states, N, was selected uniformly at random from {3, 4, ..., Nmax}; the start state, S, and links, L, were assigned at random based on the number of states in the machine; and the output symbols, O, were randomly assigned based on the number of symbols in the output alphabet. The generation count, k, was initialized to 1. These p modular FSMs constituted the parents in the first generation.

Fitness Evaluation: The fitness of each of the parent machines was evaluated based on the objective function for the task at hand. The evaluation function for the artificial ant problem is described in Section 5.

Variation: Each of the parents, Pi, was modified to produce one offspring, Pi', through the application of a sequence of variation operators to a randomly selected constituent machine. The constituent machine could be either the main-FSM or any of the sub-FSMs. The number of variation operators, Xi, to be applied was obtained by sampling a Poisson random variable with mean 3. A mean value of 3 was selected after some preliminary experimentation. The variation operators consisted of adding states, deleting states, reassigning the start state, reassigning links, reassigning output symbols, and changing machine controls. The Xi operators were selected with equal probability from the above operators.
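The Poisson-distributed operator count can be sketched as follows (a Python illustration, not the authors' code; the operator names are taken from the list in the text):

```python
import math
import random

# The six variation operators named in the text.
OPERATORS = ["add_states", "delete_states", "reassign_start_state",
             "reassign_links", "reassign_output_symbols",
             "change_machine_controls"]

def sample_poisson(lam, rng):
    """Draw a Poisson variate by Knuth's method: multiply uniform draws
    until the running product falls below e^-lam."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def choose_operators(rng, mean=3):
    """Pick the number of operators from Poisson(mean), then draw each
    operator uniformly (with replacement) from the six above."""
    count = sample_poisson(mean, rng)
    return [rng.choice(OPERATORS) for _ in range(count)]

rng = random.Random(1)
print(choose_operators(rng))
```

Sampling the count first and then drawing operators uniformly matches the description above; the per-operator application probabilities discussed next would be applied to each drawn operator in turn.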
Upon selection, the adding states, deleting states, and reassigning the start state operators were applied only half the time (i.e., the probability of application was 0.5). On the other hand, the reassigning links, reassigning output symbols, and changing machine controls operators were always applied when selected (i.e., the probability of application was 1.0). If the parent already had the maximum number of allowed states (Nmax), then adding states was precluded. Similarly, if the parent had the minimum number of allowed states, then deleting states was precluded. These variation operators are described below:

a) Add states: The number of states to add, n, was selected at random from {1, 2}. n new states were added to the parent's constituent machine. The new states' links and output symbols were randomly assigned. On addition, these new states were not connected to other states in the machine. As a result, they became active only when an appropriate delete states, reassign start state, or reassign links variation operation caused them to become an active part of the machine.

b) Delete states: The number of states to delete, n, was selected at random from {1, 2}. n states in the parent machine were randomly selected for deletion. All the links pointing to the selected states were reassigned randomly to other states, and the selected states were deleted. If one of the selected states was the start state, then a new start state was chosen at random.

c) Reassign the start state: A new start state was chosen at random.

d) Reassign links: The number of links to reassign, n, was selected at random from {1, 2, 3, 4}. n randomly selected links in n states were reassigned randomly to different states.

e) Reassign output symbols: The number of output symbols to reassign, n, was selected at random from {1, 2, 3, 4}. n output symbols corresponding to n randomly chosen links in the machine were reassigned to different symbols from the alphabet.

f) Change machine controls: The number of control entries in the state table to reassign, n, was selected at random from {1, 2, 3, 4}. n machine control entries were reassigned at random to different constituent machines.

These six variation operators were capable of generating a variety of genotypic variations in the parent machine. In most cases, adding states, deleting states, reassigning the start state, and changing machine controls brought about a considerable change in behavior (defined as the response to a stimulus sequence), whereas reassigning links and output symbols produced relatively minor changes.

Fitness Evaluation: Each offspring was scored in light of the objective function for the task at hand.

Selection: Tournament selection (Fogel, 1995b) was used to determine the parents for the next generation. Pairwise comparisons were conducted over the union of the parents, Pi, and offspring, Pi', for i = 1, 2, ..., p. For each machine, q opponents were chosen at random from all parents and offspring with equal probability. For each comparison, if the machine's fitness was no less than the opponent's, it received a 'win'. The p individuals with the most wins were selected to be the parents for the next generation.

The procedure was terminated if the halting criterion was satisfied; otherwise, the generation number was incremented, i.e., k = k + 1, and the process returned to the variation step. For the experiments in this study, the halting criterion was 250 generations, i.e., k = 250.

4.2 Experimental Design

Modular FSMs were evolved to guide an artificial ant over a toroidal grid to collect food packets.

Figure 3. The 32x32 toroidal grid with the original ant trail presented in Jefferson et al. (1991). The trail begins on the second square in the first row near the top left corner, comprises 127 squares, and contains 20 turns and 89 squares with food packets. The squares in black indicate the presence of a food packet, while the shaded squares indicate squares on the trail without food packets. This original trail, proposed in Jefferson et al. (1991), is also known as the John Muir trail.

The artificial ant problem (also known as the Tracker task) was originally offered by Jefferson et al. (1991) as a way of modeling natural evolution and was later explored in detail in Koza (1992), Angeline (1993), Angeline et al. (1994), and others (e.g., Kuscu, 1998). In these studies, FSM, recurrent neural network, and parse tree controllers were successfully evolved.

The artificial ant problem consists of an ant placed on a 32x32 toroidal grid. Food packets are scattered along a trail on the grid. As the trail is traversed, the distribution of the food packets becomes increasingly sparse. The original trail proposed in Jefferson et al. (1991) was used to evaluate evolutionary programming's ability to evolve modular FSM controllers. This trail, shown in Figure 3, begins on the second square in the first row near the top left corner, is 127 squares long, and contains 20 turns and 89 squares with food packets. The ant can be oriented facing East, West, North, or South. It can sense the presence of a food packet in the square directly ahead and can either turn left or right (i.e., change its orientation by 90 degrees counter-clockwise or clockwise) or move forward one square. These three actions are represented by LEFT, RIGHT, and MOVE, respectively. On moving to a non-empty square, the food packet is collected by the ant. The goal of a controller is to guide the ant to collect all 89 food packets on the trail. The ant starts out facing East on the second square in the first row.
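The ant simulation and its fitness accounting can be sketched as follows. This Python illustration is not the authors' code: the toy 4x4 trail and the greedy reflex controller are my own stand-ins, since the 32x32 John Muir trail is only given as a figure.

```python
# Sketch of the ant simulation: the ant lives on a toroidal grid, senses
# whether food lies in the square directly ahead, and spends one time step
# per LEFT / RIGHT / MOVE action.
def evaluate(controller, food, size, start=(0, 1), heading=(0, 1), max_steps=30):
    """Return the number of food packets collected within `max_steps`."""
    food = set(food)
    (row, col), (dr, dc) = start, heading   # heading (0, 1) means facing East
    collected = 0
    for _ in range(max_steps):
        ahead = ((row + dr) % size, (col + dc) % size)
        action = controller(ahead in food)
        if action == "LEFT":                # rotate 90 degrees counter-clockwise
            dr, dc = -dc, dr
        elif action == "RIGHT":             # rotate 90 degrees clockwise
            dr, dc = dc, -dr
        else:                               # MOVE forward one square
            row, col = ahead
            if (row, col) in food:
                food.discard((row, col))
                collected += 1
        if not food:
            break
    return collected

# A toy trail and a trivial reflex controller: move when food is ahead,
# otherwise turn right and look again.
trail = {(0, 2), (0, 3), (1, 3), (2, 3)}
reflex = lambda food_ahead: "MOVE" if food_ahead else "RIGHT"
print(evaluate(reflex, trail, size=4))  # 4
```

In the actual experiments the controller role is played by an evolved (modular) FSM whose input alphabet is {FOOD-AHEAD, NOFOOD-AHEAD} and whose output alphabet is {LEFT, RIGHT, MOVE}, with a 600-step budget on the full trail.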
The objective function for evaluating candidate modular FSM controllers was the total number of food packets collected within the allotted time. Each of the ant's actions (LEFT, RIGHT, and MOVE) cost one time step. A maximum of 600 time steps was allowed. Based on the problem description, the input alphabet was taken to be {FOOD-AHEAD, NOFOOD-AHEAD} and indicated whether or not food was present in the square directly in front of the ant. The output alphabet consisted of {LEFT, RIGHT, MOVE}, representing the moves the ant was allowed to make. These input and output alphabets were the same for all the constituent machines in a modular FSM.

Two separate sets of 50 trials were conducted evolving modular and non-modular FSMs. Each set of trials used a population size of 500 machines and evolution lasted 250 generations. The number of opponents for tournament selection, q, was set to 10. In all the experiments on modular FSMs, Nmax was set to 15 for the main-FSM and 10 for the sub-FSMs. For the experiments on non-modular FSMs, Nmax was set to 25. The fitness of a machine, which was the number of food packets the ant collected, could range from 0 to a maximum of 89. A trial was considered successful if it found a controller that correctly guided the ant to collect all 89 food packets.

5 Results

In 48 of the 50 modular FSM evolution trials, the proposed evolutionary programming procedure was successful in evolving perfect machines (within 250 generations) that could successfully guide the ant to collect all 89 food packets. The remaining two trials produced modular machines (within 250 generations) that could guide the ant to collect 87 of the 89 food packets. In these two trials, the fitness continued to climb even at generation 250, and it appears that, given additional generations, these two trials would also have been successful. The mean best fitness (averaged over 50 trials) of the modular FSM trials is shown in Figure 4.
Evolution proceeded quickly, and the 48 trials that were successful found perfect solutions within 55 generations.

In 44 of the 50 non-modular FSM evolution trials, the proposed evolutionary programming procedure was successful in evolving perfect machines (within 250 generations) that could guide the ant to collect all 89 food packets. In the remaining six trials, the numbers of food packets collected were 86, 83, 82, 81, 81, and 80. However, in comparison with the modular FSM experiments, the rate of evolution was slower. Figure 4 depicts the rate of optimization of these non-modular FSMs.

Figure 4. The rate of optimization of the proposed evolutionary programming procedure for evolving modular and non-modular FSMs. The mean best fitness (averaged over 50 trials) of the modular finite state machine trials climbs rapidly, with 48 of the 50 trials successful by generation 55, whereas the mean best fitness of the non-modular FSMs climbs more gradually, with 44 of the 50 trials successful by generation 172.

Figure 5. The Student's t-test results (for p < 0.05) for statistically significant differences in fitness (the number of food packets collected) with and without modularity. During generations 8 through 171, solutions produced while evolving modular machines appear to be statistically significantly better (p < 0.05) than when evolving machines without modules. However, these t-tests are not independent. Forty-eight of the 50 modular machine evolution trials were successful by generation 55.

The Student's t-test (Press et al., 1992) results (p < 0.05) for statistically significant differences in fitness with and without modularity are shown in Figure 5. Values above 1.96, shown in gray, indicate a statistically significant difference. During generations 8 through 171, solutions produced while evolving modular FSMs appear to be statistically significantly better (p < 0.05) than those produced while evolving non-modular FSMs. However, these t-test values are not independent, i.e., a statistically significantly better fitness at generation i is very likely to be followed by a statistically significantly better fitness at generation i+1. Note that all the successful modular FSM evolution trials found perfect solutions by generation 55.

Figure 6 gives the average number of states in the main-FSM and sub-FSMs of the best-of-generation machine as a function of the generation. Figure 7 shows the corresponding complexity results while evolving non-modular FSMs.
Figure 6. The mean number of states (averaged over 50 trials) in the main-FSM and sub-FSMs (the sum of the number of states in sub-FSM1 and sub-FSM2) while evolving modular finite state machines.

Figure 7. The mean number of states (averaged over 50 trials) while evolving non-modular finite state machines.

It is interesting to note that the total number of states in the sub-FSMs (the sum of the number of states in sub-FSM1 and sub-FSM2) is larger than the number of states in the main-FSM. This could be partially caused by the initialization bias towards larger sub-FSM sizes. The complexity of the machines gradually climbed up to generation 75, after which it stagnated. This was caused by 48 of the trials being terminated on becoming successful by generation 55. Thus, after generation 55, the variations in the trajectories in Figure 6 correspond completely to the results from the two unsuccessful trials. On the other hand, while evolving non-modular FSMs, a steep increase in complexity was observed in the first 75-80 generations, followed by a gradual decrease of about two states over the next 100 generations and stagnation by generation 180. All of the 44 trials that produced perfect machines were successful by generation 172.

While evolving variable-length structures, such as FSMs, linear models, neural networks, fuzzy systems, and parse trees, in the absence of parsimony pressure the complexity of the structures is typically observed to grow until the pre-specified upper bound on complexity is reached (Koza, 1992; Langdon, 1997). This behavior is observed in the evolution experiments with non-modular FSMs (Figure 7) but not in the evolution experiments with modular FSMs. This may be explained by a very large fraction of the trials being successful quite early during evolution. A lower probability of adding and deleting states could also have acted as a parsimony bias over the operators.

6 Summary

A modular FSM architecture has been presented and an evolutionary programming (EP) procedure for the evolution and optimization of these machines has been proposed. Results on the artificial ant problem on the original trail proposed in Jefferson et al. (1991) indicate that the proposed EP procedure can rapidly evolve optimal modular machines. Further, in comparison with the evolution of non-modular FSMs, the evolution of modular FSMs was found to be statistically significantly faster.

The evolution of FSMs was first proposed in Fogel (1962) and appears to be the first effort to evolve variable-length architectures. The obtained results were qualitatively similar to those found in more recent work on variable-length architectures, such as neural networks (Angeline, 1997) and computer programs represented as parse trees (Koza, 1992). In this paper, both the basic architecture and the proposed evolutionary algorithm have been extended to enable the evolution of hierarchical, modular machines that may be better suited for solving real-world applications that require large structures. The results on the artificial ant problem are in favor of modular structures both in terms of the reliability of generating a solution and the involved computational complexity. Future work will be directed towards investigating the scalability to large problems, computational effort considerations, and generalization properties of these modular FSMs.

References

Angeline, PJ (1993), Evolutionary Algorithms and Emergent Intelligence, Ph.D. diss., Dept. of Computer and Information Science, The Ohio State University.

Angeline, PJ (1997), "An Alternative to Indexed Memory for Evolving Programs with Explicit State Representations," Genetic Programming 1997: Proc. Second Annual Conf. on Genetic Programming, J.R. Koza, K. Deb, M. Dorigo, D.B. Fogel, M. Garzon, H. Iba, and R.L. Riolo (Eds.), San Francisco, CA: Morgan Kaufmann, pp. 423-430.

Angeline PJ, Fogel DB, Fogel LJ (1996), "A Comparison of Self-Adaptation Methods for Finite State Machines in a Dynamic Environment," Evolutionary Programming V, L.J. Fogel, P.J. Angeline, T. Bäck (Eds.), MIT Press, Cambridge, MA, pp. 441-449.

Angeline PJ, Saunders GM, and Pollack JB (1994), "An Evolutionary Algorithm that Constructs Recurrent Neural Networks," IEEE Transactions on Neural Networks, vol. 5:1, pp. 54-65.

Chellapilla K (1997), "Evolving Computer Programs without Subtree Crossover," IEEE Trans. on Evolutionary Computation, vol. 1, no. 3, pp. 209-216.

Chellapilla K, Fogel DB (1997), "Exploring Self-Adaptive Methods to Improve the Efficiency of Generating Approximate Solutions to Traveling Salesman Problems Using Evolutionary Programming," Evolutionary Programming VI: Proc. of the Sixth Intl. Conf. on Evolutionary Programming, Angeline PJ, Reynolds RG, McDonnell JR, Eberhart R (Eds.), pp. 361-371, Springer: NY.

Chellapilla K, Rao SS (1998), "Optimization of Bilinear Time Series Models using Fast Evolutionary Programming," IEEE Signal Processing Letters, vol. 5, no. 2, pp. 39-42, Feb.

Corno F, Prinetto P, Reorda MS (1996), "A Genetic Algorithm for Automatic Generation of Test Logic for Digital Circuits," Proc. Eighth IEEE Intl. Conf. on Tools with Artificial Intelligence, pp. 16-19, CA, USA: IEEE.

Dunay BD, Petry FE, Buckles BP (1994), "Regular Language Induction with Genetic Programming," Proc. of the First IEEE Conf. on Evolutionary Computation, IEEE World Congress on Computational Intelligence, pp. 396-400, NY, USA: IEEE.

Fogel DB (1988), "An Evolutionary Approach to the Traveling Salesman Problem," Biological Cybernetics, vol. 60:2, pp. 139-144.

Fogel DB (1991), "Evolutionary Modeling of Underwater Acoustics," Proc. of OCEANS 91, vol. 1, Honolulu, HI, pp. 453-457.

Fogel DB (1992), "Using Evolutionary Programming for Modeling: An Ocean Acoustic Example," IEEE Journal of Oceanic Engineering, vol. 17:4, pp. 333-340.

Fogel DB (1995a), "On the Relationship between the Duration of an Encounter and the Evolution of Cooperation in the Iterated Prisoner's Dilemma," Evolutionary Computation, vol. 3:3, pp. 349-363.

Fogel DB (1995b), Evolutionary Computation: Toward a New Philosophy of Machine Intelligence, Piscataway, NJ: IEEE Press.

Fogel DB, and Chellapilla K (1998), "Revisiting Evolutionary Programming," AeroSense'98: Aerospace/Defense Sensing and Controls, 13-17 Apr., Orlando, Florida.

Fogel LJ (1962), "Autonomous Automata," Industrial Research, vol. 4:2, pp. 14-19.

Fogel LJ (1964), On the Organization of Intellect, Ph.D. dissertation, UCLA.

Fogel LJ, Owens AJ, and Walsh MJ (1966), Artificial Intelligence through Simulated Evolution, John Wiley, NY.

Fogel LJ, Angeline PJ, and Fogel DB (1994), "A Preliminary Investigation on Extending Evolutionary Programming to Include Self-Adaptation on Finite State Machines," Informatica, vol. 18:4, pp. 387-398.

Fogel LJ, Angeline PJ, and Fogel DB (1995), "An Evolutionary Programming Approach to Self-Adaptation in Finite State Machines," Evolutionary Programming IV: Proc. of the Fourth Annual Conference on Evolutionary Programming, J.R. McDonnell, R.G. Reynolds, and D.B. Fogel (Eds.), MIT Press, Cambridge, MA, pp. 355-365.

Haffner SB, and Sebald AV (1993), "Computer-Aided Design of Fuzzy HVAC Controllers Using Evolutionary Programming," Proc. of the Second Annual Conference on Evolutionary Programming, pp. 98-107, Evolutionary Programming Society, La Jolla, CA.

Jefferson D, Collins R, Cooper C, Dyer M, Flowers M, Korf R, Taylor C, Wang A (1991), "Evolution of a Theme in Artificial Life: The Genesys/Tracker System," Tech. Report, Univ. of California, Computer Science Dept., Los Angeles, CA, USA.

Koza, JR (1992), Genetic Programming: On the Programming of Computers by Means of Natural Selection, Cambridge, MA: MIT Press.

Kuscu, I (1998), "Evolving a Generalized Behavior: Artificial Ant Problem Revisited," EP98: Evolutionary Programming VII: Proceedings of the Seventh Annual Conference on Evolutionary Programming, Springer-Verlag, Berlin, forthcoming.

Langdon WB (1997), "Fitness Causes Bloat in Variable Size Representations," Technical Report CSRP-97-14, University of Birmingham, School of Computer Science, 14 May 1997.

Lindgren K, Nilsson A, Nordahl MG, Rade I (1992), "Regular Language Inference Using Evolving Neural Networks," COGANN-92: Intl. Workshop on Combinations of Genetic Algorithms and Neural Networks, pp. 75-86, CA, USA: IEEE.

Miller JH (1996), "The Coevolution of Automata in the Repeated Prisoner's Dilemma," J. of Econ. Behavior and Organization, vol. 29:1, pp. 87-112, Elsevier, Jan.

Miller JF, and Thompson P (1995), "Combinational and Sequential Logic Optimization Using Genetic Algorithms," First Intl. Conf. on 'Genetic Algorithms in Engineering Systems: Innovations and Applications' (GALESIA), pp. 34-38, London, UK: IEE.

Press WH, Teukolsky SA, Vetterling WT, Flannery BP (1992), Numerical Recipes in C, NY: Cambridge Univ. Press.

Rosca JP, and Ballard DH (1994), "Genetic Programming with Adaptive Representations," Technical Report TR 489, University of Rochester, Computer Science Department, Feb. 1994.

Spector L (1996), "Evolving Control Structures with Automatically Defined Macros," in P. Angeline and K.E. Kinnear, Jr. (Eds.), Advances in Genetic Programming 2, chapter 7, MIT Press, Cambridge, MA, USA.

Tamburino LA, and Rizki MM (1995), "Resource Allocation for a Hybrid Evolutionary Learning System Used for Pattern Recognition," Evolutionary Programming V: Proc. of the Fifth Annual Conf. on Evolutionary Programming, pp. 207-216, Cambridge, MA: MIT Press.

Yao X, Liu Y (1996), "Evolving Artificial Neural Networks through Evolutionary Programming," Evolutionary Programming V: Proceedings of the Fifth Annual Conference on Evolutionary Programming, pp. 257-266, Cambridge, MA, USA: MIT Press.