Growing

In the train now. Slowly rolling towards London. What a strange day it was. Some confusion sprinkled with stress. Maybe just another usual day as a PhD student. I still hope to get used to this way of working. I'm in my second year now, so one would assume I'm used to it by now. But I'm not there yet. It is still a journey, still a balancing act, still a learning experience. How to deal with the people around you, the ones you rely on, and the ones that share your journey? How to manage projects with unclear outcomes and novel techniques? How to manage yourself? How much work can you really deal with, what keeps you motivated, what should you avoid? In that respect, a PhD is not only a scientific endeavour but also a self-finding process. Really. One of my former bosses said: it's character shaping. And it is true. You'll change. Inevitably. Even when you might not notice it yourself. Like when your grandparents used to visit, when you were younger, and seemed so surprised at how much you had changed since their last visit. You do change. You'll grow with your challenges. Every day a little bit. Step by step. Mostly unnoticed, just like a tree grows.

Today I'm off to London. I'm going to meet a researcher and his lab to discuss a project I have in mind. Fasten your seatbelt: I'm finally back working with bumblebees again. Somehow I started to miss empirical experiments. After all, they are vital for testing the predictions of my computational models. I'm excited about the idea of collaborating. And I'm excited about working with bees again.


An introduction to Agent-Based Modelling in R


Rock Paper Scissors Lizard Spock

As part of my PhD I am using computational models to unravel the evolution of certain behaviours. In my case I am interested in the evolution of social learning. Here, I want to give a very short introduction to how to create a simple agent-based model (ABM) using R. When I started with my first ABM I had no clue where to begin, and scientific papers that use ABMs usually do not talk about the implementation (code-wise) either. So, here is an example of an agent-based model for individuals that play a game commonly known as Rock, Paper, Scissors. But first, what actually are ABMs? Wikipedia says that 'an agent-based model is one of a class of computational models for simulating the actions and interactions of autonomous agents with a view to assessing their effects on the system as a whole.' Now, let us look at what the fundamentals of an ABM are.

What you need for an agent-based model

The minimum ingredients for an agent-based model are:

  • Agents that interact with the world around them and/or with other agents
  • A world in which the agents ‘live’ or move around
  • A set of rules that determines what every agent is allowed or has to do
  • A loop that allows the agents to act or interact repeatedly (see the sketch below)
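Translated into code, these ingredients make for a very small skeleton. The following sketch is purely illustrative (all the names in it are made up by me) and we will flesh it out over the rest of this post:

```r
# a minimal, purely illustrative ABM skeleton
agents <- data.frame(id=1:2, state=NA)     # the agents
rules <- function(agents){                 # the set of rules
  agents$state <- sample(1:3, size=nrow(agents), replace=TRUE)
  return(agents)
}
for(t in 1:100){                           # the loop
  agents <- rules(agents)                  # agents act/interact every round
}
```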

In our case agents are not moving around and therefore we will not consider the second point (a world). Let us start with only two agents that play one of the three strategies (Rock, Paper, Scissors) against each other. We will define two individuals, let them choose a strategy and then play them.

Creating agents is actually very simple. We need to keep track of each individual, and therefore it needs an ID. In our model the agents will choose a strategy, which we will associate with this ID. And finally, to make the model interesting, let us monitor how many times an individual wins against the other. To keep track of all this we create a data.frame with the corresponding columns:

```r
indDF <- data.frame(id=1:2, strategy=NA, num_wins=0)
indDF
##   id strategy num_wins
## 1  1       NA        0
## 2  2       NA        0
```

Check off our first point: we have got our agents ready to play. In the next step we let them choose a strategy. As our agents will choose their strategies repeatedly, we simply create a function for it. We hand over the indDF and the function assigns a random strategy to the strategy column in the data.frame. We use numbers instead of names for the strategies, as this will make working with them easier later on.

```r
chooseStrategy <- function(ind){
  # draw a random strategy (1, 2, or 3) for every individual
  strats <- sample(1:3, size=nrow(ind), replace=TRUE)
  ind$strategy <- strats
  return(ind)
}
```
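Just to see it at work, here is a quick test run of my own (not from the original post; the seed is only there to make the random draw reproducible):

```r
set.seed(42)                  # optional, for reproducibility
indDF <- chooseStrategy(indDF)
indDF$strategy                # two random numbers between 1 and 3
```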

Let us proceed to the next step, where the agents play their strategies. Again, we create a function for this. What is happening inside the function can be summarised like this: strategies are ordered numerically in the way they win against each other, i.e. paper: 1, scissors: 2, rock: 3. In general the higher number wins, but as paper (1) beats rock (3) we need to identify this special case (rock losing against paper). The number of wins of the individual with the winning strategy is increased by one. If both individuals play the same strategy, nothing happens in this round.

```r
playStrategy <- function(ind){
  if(ind$strategy[1]==ind$strategy[2]){
    # a draw: nothing happens in this round
  } else {
    # in the case that one chose rock (3) and the other paper (1):
    if(any(ind$strategy == 3) && any(ind$strategy == 1)){
      tmp <- which(ind$strategy == 1)
      ind[tmp,"num_wins"] <- ind[tmp,"num_wins"] + 1
    } else {
      # for the two other cases, the better weapon (higher number) wins:
      tmp <- which(ind$strategy == max(ind$strategy))
      ind[tmp,"num_wins"] <- ind[tmp,"num_wins"] + 1
    }
  }
  return(ind)
}
```
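Before we let the agents loose, here is a quick sanity check of my own (not part of the original post) that the special case behaves as intended. Individual 1 plays rock (3) and individual 2 plays paper (1), so individual 2 should be credited with the win:

```r
test <- data.frame(id=1:2, strategy=c(3,1), num_wins=0)
playStrategy(test)$num_wins   # should print 0 1: paper beats rock
```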

Now we can let the individuals play against each other repeatedly. We are going to use a simple for loop for this and let the individuals play 1000 rounds against each other.

```r
indDF <- setup()
for(i in 1:1000){
  indDF <- chooseStrategy(indDF)
  indDF <- playStrategy(indDF)
}
indDF
##   id strategy num_wins
## 1  1        2      488
## 2  2        3      512
```

You might have spotted a function at the beginning of the chunk above. I wrote a small setup function that allows us to quickly create the data.frame we were using; an easy way to reset the simulation.

```r
setup <- function(){
  return(data.frame(id=1:2, strategy=NA, num_wins=0))
}
```

We now have a neat little model. You will find that there is not much of a difference between the two individuals, especially when you let the simulation run for more rounds.

But say you would like to monitor what is happening throughout the simulation. We can record the process by letting the loop report both individuals' results every round. We will simply write the number of wins of both individuals into a two-column matrix called dat. Subsequently, we will plot the result:

```r
rounds <- 1000
indDF <- setup()
dat <- matrix(NA, nrow=rounds, ncol=2)
for(i in 1:rounds){
  indDF <- chooseStrategy(indDF)
  indDF <- playStrategy(indDF)
  dat[i,] <- indDF$num_wins
}

plot(dat[,1], type='l', col='#EA2E49', lwd=3, xlab='time', ylab='number of rounds won')
lines(dat[,2], col='#77C4D3', lwd=3)
```

[Figure ABM_1-1: number of rounds won over time for the two individuals]

The model is running and we can observe what is happening. Now it becomes interesting: we can use the model to actually test a hypothesis. For instance: is it true that a player which never switches its strategy is more successful when it plays against an individual that randomly switches its strategy? To test this we need to adjust the strategy-choosing function.

```r
chooseStrategy2 <- function(ind){
  # only the second individual draws a new random strategy;
  # the first keeps whatever strategy it already has
  ind$strategy[2] <- sample(1:3, size=1)
  return(ind)
}
```

Now, the second individual will change its strategy randomly, while the first chooses a strategy once and then sticks with it. We will record, for each simulation, which individual won more rounds in a vector called res2, and compare it to a simulation where both individuals switch randomly between strategies. To make the results more robust, let us repeat all simulations 100 times.

```r
rounds <- 1000
repetitions <- 100
dat <- matrix(NA, nrow=rounds, ncol=2)
res2 <- rep(NA, repetitions)
for(j in 1:repetitions){
  indDF <- setup()
  # the first individual picks a strategy once and keeps it
  indDF[1,"strategy"] <- sample(1:3, size=1)
  for(i in 1:rounds){
    indDF <- chooseStrategy2(indDF)
    indDF <- playStrategy(indDF)
    dat[i,] <- indDF$num_wins
  }
  # record the id of the individual that won more rounds
  res2[j] <- which.max(dat[rounds,])
}
```

```r
plot(dat[,1], type='l', col='blue', lwd=3, xlab='time', ylab='number of rounds won')
lines(dat[,2], col='red', lwd=3)
```

```r
# for comparison, let's calculate the winning vector for the case
# where both players switch strategies:
res1 <- rep(NA, repetitions)
for(j in 1:repetitions){
  indDF <- setup()
  for(i in 1:rounds){
    indDF <- chooseStrategy(indDF)
    indDF <- playStrategy(indDF)
    dat[i,] <- indDF$num_wins
  }
  res1[j] <- which.max(dat[rounds,])
}
```

```r
# and the winner is:
t.test(res1, res2)
##
## Welch Two Sample t-test
##
## data: res1 and res2
## t = 0.5579, df = 202, p-value = 0.5775
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.09938468 0.17781605
## sample estimates:
## mean of x mean of y
## 1.529412 1.490196
```

[Figure ABM_2-1: number of rounds won over time, fixed strategy vs. random switching]

At the end of the chunk I added a t-test to compare the two types of simulations. As you can see: no, it does not make a difference whether the agent changes its strategy or not. However, this does not take human psychology into account. Nevertheless, an interesting result.

Rock Paper Scissors on a network

In this second example we are going to use the same game, but this time several individuals will play against each other. In my PhD project I assume that individuals can only interact with other individuals with which they have a connection. An easy way to think about this is a network, where individuals are represented by nodes and connections by ties. We will use a simple lattice network, and therefore individuals can only play with their direct neighbours. To add an evolutionary dynamic to the simulation, individuals that lose adopt the strategy of the winner. Have a read through the code; there are some explanations in it.
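In case you have not worked with igraph before, here is a tiny standalone illustration of mine (not from the original post) of the two calls the model below relies on: graph.lattice() builds the world and neighbors() finds interaction partners. On a 3x3 toy lattice, vertex 1 sits in a corner and has exactly two neighbours:

```r
library(igraph)
g <- graph.lattice(length=3, dim=2)  # a 3x3 lattice
neighbors(g, v=1, mode='all')        # the two neighbours of corner vertex 1
```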

```r
require(igraph)  # for networks
require(reshape) # to change the resulting data into a format ggplot2 can use
require(ggplot2) # for plotting

# size of the lattice
sidelength <- 10
# creating an empty data.frame to store the results
stat <- data.frame()
# creating a lattice network using the igraph package
l <- graph.lattice(length=sidelength, dim=2)
# now every individual chooses a strategy at random
V(l)$weapon <- sample(1:3, size=length(V(l)), replace=T)
# for a nicer visualisation let's colour the different strategies
V(l)[weapon==1]$color <- 'blue'   # Paper
V(l)[weapon==2]$color <- 'yellow' # Scissors
V(l)[weapon==3]$color <- 'green'  # Rock
# and this is what the network looks like:
plot(l, layout=as.matrix(expand.grid(1:sidelength, 1:sidelength)), vertex.label=NA)
```

[Figure ABM_3-1: the lattice network, with nodes coloured by strategy]

```r
# the simulation: in every round one random individual plays against one of
# its neighbours, and the loser adopts the strategy of the winner
for(t in 1:2500){
    from <- as.numeric(sample(V(l), 1))
    nei <- neighbors(l, v=from, mode='all')
    to <- sample(nei, 1)
    fromto <- c(from, to)
    w <- as.numeric(V(l)$weapon[fromto])
    if(w[1]==w[2]) {} else{
        if(max(w) == 3 && min(w) == 1){
            # the special case: paper (1) beats rock (3)
            V(l)$weapon[fromto[w==3]] <- 1
        } else {
            # otherwise the better weapon (higher number) wins
            V(l)$weapon[fromto[w==min(w)]] <- V(l)$weapon[fromto[w==max(w)]]
        }
    }
    stat <- rbind(stat, c(sum(V(l)$weapon==1), sum(V(l)$weapon==2), sum(V(l)$weapon==3)))
}

names(stat) <- c("Paper","Scissors","Rock")
s <- melt(stat)
s$time <- 1:nrow(stat)
ggplot(data=s, mapping=aes(x=time, y=value, col=variable)) + geom_line() + theme_bw()
```

[Figure ABM_3-2: number of individuals playing each strategy over time]

What we observe are interesting dynamics between the three strategies. For the limited number of rounds that we let the model run, all strategies coexist. However, sometimes one strategy disappears, which will lead to the win of the strategy that loses against it. For example, if paper disappears, rock should win in the long run. You can now experiment with what happens in a smaller or bigger network, or even with a different network type; the `igraph` package offers many possibilities here, as the sketch below shows.
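For example, you could swap the lattice for a random or a small-world graph. The generator calls below use igraph's classic interface (the same style as graph.lattice above), and the parameter values are just suggestions of mine. The simulation loop works on any of these networks; only the lattice-specific plot layout would need adjusting:

```r
# a random (Erdos-Renyi) graph with 100 nodes and a 5% tie probability
l <- erdos.renyi.game(100, p.or.m=0.05)
# a small-world graph: a ring lattice with 5 neighbours, 5% of ties rewired
l <- watts.strogatz.game(dim=1, size=100, nei=5, p=0.05)
```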

And what about Spock?
The model we created so far can be used to investigate, for example, epidemic dynamics: how do information, rumours, and ideas spread through a network? When we think of strategies spreading through a network, we might want to add another strategy and see how a four-strategy game differs from a three-strategy game. Let us add Spock from Rock, Paper, Scissors, Lizard, Spock (see for example http://www.samkass.com/theories/RPSSL.html), which you might have heard of from The Big Bang Theory.


```r
# size of the lattice
sidelength<-10
# creating an empty data.frame to store data
stat<-data.frame()
# creating a lattice network using the igraph package
l<-graph.lattice(length=sidelength,dim=2)
# now every individual chooses a strategy at random
V(l)$weapon<-sample(c(1,2,2.9,3), size=length(V(l)), replace=T)
# for a nicer visualisation lets colour the different options
V(l)[weapon==1]$color<-'blue' # Paper
V(l)[weapon==2]$color<-'yellow' # Scissors
V(l)[weapon==3]$color<-'green' # Rock
V(l)[weapon==2.9]$color<-'purple' # Spock
# and this is what it looks like:
plot(l, layout=as.matrix(expand.grid(1:sidelength, 1:sidelength)), vertex.label=NA)
```

Let us have a look at what our network looks like with four strategies:

[Figure ABM_4-1: the lattice network with four strategies, coloured by strategy]

Finally, let us run the slightly altered model.

```r
for(t in 1:2500){
    from <- as.numeric(sample(V(l), 1))
    nei<-neighbors(l, v=from, mode='all')
    # if only one strategy is left in the population, the drawn individual
    # mutates to a random different strategy (note: this samples from 1:3 only)
    if(length(unique(V(l)$weapon))==1) {
        V(l)$weapon[from]<-sample((1:3)[1:3!=as.numeric(V(l)$weapon[from])], 1)
    } else {
        to <- sample(nei, 1)
        fromto<-c(from,to)
        w<-as.numeric(V(l)$weapon[fromto])
        if(w[1]==w[2]) {} else{
            if(max(w) == 3 && min(w) ==1) {
                V(l)$weapon[fromto[w==3]] <- "1"
            }
            else{
                V(l)$weapon[fromto[w==min(w)]] <- V(l)$weapon[fromto[w==max(w)]]
            }
        }

    }
    stat<-rbind(stat, c(sum(V(l)$'weapon'=="1"), sum(V(l)$'weapon'=="2"), sum(V(l)$'weapon'=="2.9"), sum(V(l)$'weapon'=="3")))
    # you can also plot each individual network configuration in each step of the simulation
    # V(l)[weapon==1]$color<-'blue' # Paper
    # V(l)[weapon==2]$color<-'yellow' # Scissors
    # V(l)[weapon==3]$color<-'green' # Rock
    # V(l)[weapon==2.9]$color<-'purple' # Spock
    # plot(l, layout=as.matrix(expand.grid(1:sidelength, 1:sidelength)), vertex.label=NA)
}
```

```r
# note: the columns of stat were filled in the order 1, 2, 2.9, 3,
# i.e. Paper, Scissors, Spock, Rock
names(stat)<-c("Paper","Scissors","Spock","Rock")
s<-melt(stat)
s$time<-1:nrow(stat)
ggplot(data=s, mapping=aes(x=time, y=value, col=variable)) + geom_line() + theme_bw()
```

[Figure ABM_4-2: number of individuals playing each of the four strategies over time]
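If you would like to experiment more systematically, it can help to wrap the whole network model into a single function, so that different settings are one call away. The sketch below is my own refactoring (names like runRPS are made up, and I left out the mutation rule for brevity), not code from the original post:

```r
runRPS <- function(sidelength=10, rounds=2500, weapons=c(1, 2, 2.9, 3)){
  l <- graph.lattice(length=sidelength, dim=2)
  V(l)$weapon <- sample(weapons, size=length(V(l)), replace=TRUE)
  stat <- matrix(NA, nrow=rounds, ncol=length(weapons))
  for(t in 1:rounds){
    from <- as.numeric(sample(V(l), 1))
    to <- sample(as.numeric(neighbors(l, v=from, mode='all')), 1)
    w <- as.numeric(V(l)$weapon[c(from, to)])
    if(w[1] != w[2]){
      if(max(w)==3 && min(w)==1){  # special case: paper (1) beats rock (3)
        V(l)$weapon[c(from, to)[w==3]] <- 1
      } else {                     # otherwise the higher number wins
        V(l)$weapon[c(from, to)[w==min(w)]] <- max(w)
      }
    }
    stat[t,] <- sapply(weapons, function(x) sum(V(l)$weapon==x))
  }
  return(stat)
}

# for example, a three-strategy run on a bigger lattice:
stat <- runRPS(sidelength=20, weapons=1:3)
matplot(stat, type='l', lty=1, xlab='time', ylab='count')  # one line per strategy
```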

I hope this introduction was helpful and allows you to come up with your own ideas for agent-based models. Share and post your versions of the code in the comments if you like.