Blog

Summary

A computational model of the rat in the elevated plus-maze. It uses Evolutionary Computation to evolve the weights of an Artificial Neural Network. Virtually generated rats were compared to real ones using Markov Chains.

Notes

  • Encouraged by previous studies, the authors combined multiple previously published methods: Artificial Neural Networks, Markov Chains, and Evolutionary Computation. A genetic algorithm optimizes the weights of the neural network.
  • Stated that only three studies had tried to create computational models of the rat in the elevated plus-maze (EPM). They all treat the EPM as a discrete space, with three or five positions in each arm.
  • Used a generalized fitness function with two main terms: one rewarding exploration, the other penalizing the risk the rat takes while exploring. Because of this, the work does not require comparison with real rats. The fitness function is similar to the one proposed by Salum, Morato, & Roque-da-Silva (2000), although with a few more variables.
  • A recurrent multilayer perceptron (MLP) with Elman's architecture: ten inputs, four hidden neurons, and four outputs. The inputs work similarly to how the rat perceives the walls around itself, and the outputs indicate where the virtual rat is expected to go.
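The Elman architecture can be sketched as follows: the hidden state from the previous time step is fed back as extra ("context") input. This is only a minimal illustration; the paper's exact wiring, activation functions, and weight initialization are assumptions here.

```python
import numpy as np

class ElmanNet:
    """Minimal Elman recurrent network sketch: 10 inputs, 4 hidden, 4 outputs.
    The previous hidden state is concatenated to the input as context units."""

    def __init__(self, n_in=10, n_hidden=4, n_out=4, seed=0):
        rng = np.random.default_rng(seed)
        # weights from (inputs + context) to the hidden layer
        self.W_in = rng.normal(0.0, 0.5, (n_hidden, n_in + n_hidden))
        self.W_out = rng.normal(0.0, 0.5, (n_out, n_hidden))
        self.context = np.zeros(n_hidden)

    def step(self, x):
        # concatenate sensory input with the context units (previous hidden state)
        z = np.concatenate([x, self.context])
        self.context = np.tanh(self.W_in @ z)      # new hidden state
        return np.tanh(self.W_out @ self.context)  # movement outputs

net = ElmanNet()
out = net.step(np.ones(10))  # ten wall-perception inputs -> four direction outputs
```

In an evolutionary setting, the entries of `W_in` and `W_out` would be flattened into the genome that the genetic algorithm optimizes.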
  • The first generation of the genetic algorithm is chosen randomly. The selection operators are elitism – two individuals – and tournament – “two random individuals compete and that one with better fitness is selected with probability 0.75.” Single-point crossover and mutation with uniform distribution. The network weights are evaluated by letting the virtual rat navigate the maze for 5 minutes.
  • The elevated plus-maze is divided into 21 squares, with five in each arm. For analysis, both open arms are treated as the same structure, and likewise the closed arms. This way, square number one refers to the edges of both open arms; number eleven, to the edges of the closed arms; and six is the central position.
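Folding the two open arms (and the two closed arms) onto each other reduces the 21 squares to 11 symmetric states. A small sketch, assuming each square is described by its arm type and its distance from the centre (1–5, with 5 at the arm edge):

```python
def fold(arm, dist):
    """Map a maze square onto the 11 symmetric states used for analysis.

    arm  -- 'open', 'closed', or 'centre'
    dist -- distance from the centre along the arm (1..5, 5 = arm edge);
            ignored for the centre.
    States: 1 = open-arm edge ... 6 = centre ... 11 = closed-arm edge.
    """
    if arm == 'centre':
        return 6
    if arm == 'open':
        return 6 - dist  # 5 next to the centre, down to 1 at the edge
    return 6 + dist      # 7 next to the centre, up to 11 at the edge

states = [fold('open', 5), fold('centre', 0), fold('closed', 5)]
```

This matches the numbering in the note: both open-arm edges map to 1, both closed-arm edges to 11, and the centre to 6.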
  • First-order homogeneous Markov chain. Auto-transitions not allowed.
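A first-order homogeneous Markov chain without self-transitions can be estimated from a folded position sequence by collapsing repeated positions and counting state-to-state moves. A minimal sketch (the normalization and state numbering are my assumptions):

```python
import numpy as np

def transition_matrix(sequence, n_states=11):
    """Estimate a first-order Markov chain from a sequence of folded
    positions (states 1..11). Consecutive duplicates are collapsed first,
    so self-transitions never occur, matching the no-auto-transition rule."""
    # drop consecutive duplicate positions
    seq = [sequence[0]] + [s for prev, s in zip(sequence, sequence[1:])
                           if s != prev]
    counts = np.zeros((n_states, n_states))
    for a, b in zip(seq, seq[1:]):
        counts[a - 1, b - 1] += 1
    # row-normalize, leaving never-visited rows as zeros
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

P = transition_matrix([6, 5, 6, 7, 6])  # toy trajectory around the centre
```

Comparing the matrices estimated from real and virtual rats is how the model's behaviour could then be validated.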
  • Results are averages over 10 virtual rats from 30 executions of the program, with 600 generations of 500 individuals. They were compared to a control group of 10 real rats.
  • Constants used for the fitness function were γ(npt) = 3, β = 5, αo = 0.015, αc = 0.010, and αe = 0.005.

Thoughts

  • As far as I know, it's the first – or one of the first – publications of Costa's work combining Artificial Neural Networks, Markov Chains, and Evolutionary Computation. I should certainly try to reproduce it. The author shared the experimental and simulated Markov Chains, the constants for the fitness function, and an idea of the neural network architecture. Since it generated other publications, there's a good chance it will produce interesting results to compare to the original work.
  • Like previous studies, the sample size of real rats seems quite small: only ten.

References

Salum, C., Morato, S., & Roque-da-Silva, A. C. (2000). Anxiety-like behavior in rats: a computational model. Neural Networks, 13(1), 21–29. http://doi.org/10.1016/S0893-6080(99)00099-4

Tejada, J., Bosco, G. G., Morato, S., & Roque, A. C. (2010). Characterization of the rat exploratory behavior in the elevated plus-maze with Markov chains. Journal of Neuroscience Methods, 193(2), 288–295. http://doi.org/10.1016/j.jneumeth.2010.09.008