New release of R – all packages gone?

A brand new release of R is out; you install it using the convenient installer and bam! you are up to date again. And, as a special treat, you might find yourself without the packages you had previously installed. At least, that is what happened to me. A simple way to get the packages installed again? Use this line of code in your R console:

install.packages(as.character(as.data.frame(installed.packages(lib.loc = "/Library/Frameworks/R.framework/Versions/3.1/Resources/library/"))$Package))

This will automatically install all the packages that were installed for the previous release (in this case R 3.1, as you can see in the path). Depending on the version number of your previous release, you will have to change the version number in the path accordingly.
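If you would rather not reinstall packages that are already present in the new library, here is a slightly more careful variant (a sketch; the library path is the one from above and will differ for other R versions and systems):

```r
# path of the old package library (version-specific; adjust to your setup)
old_lib <- "/Library/Frameworks/R.framework/Versions/3.1/Resources/library/"

# package names in the old and the new library
old_pkgs <- rownames(installed.packages(lib.loc = old_lib))
new_pkgs <- rownames(installed.packages())

# only install the packages that are still missing
install.packages(setdiff(old_pkgs, new_pkgs))
```

The row names of the matrix returned by installed.packages() are the package names, which saves the detour via as.data.frame().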

Now, isn't that a time saver?

Coming back to the headline: your packages are not gone at all; your new version of R is simply looking for packages in a new folder. See it as an advantage: if you ever have to deal with a package that only runs with, say, R 3.0, you can go back to the 3.0 directory, with all the other packages that were functional at that time as well.

And as a side note, a quick Google search will bring up similar pieces of code, thanks to the helpful bloggers Randy Zwitch and rmkrug.

Goodbye London

Slowly the train rolls out of Euston train station into a mild rain shower. It is as if Manchester is sending a kind reminder not to be too disappointed about the weather back home.

For five weeks I have stayed and worked in London. I finished the project I came here for. Even better, I am happy with my results. What did I find? Well, it appears that bumblebees are very capable of distinguishing between an easy and a difficult task. If the task was easy, they would not rely on information (which helps to solve the task) that is right 50% of the time. But if the task was difficult, four times more bees readily followed that information. I find that fascinating.

Yet, I’ve not only worked here, I’ve also been a Londoner, even if only for a short amount of time. I commuted in busy rush-hour tubes, shopped in the restless East End, went for runs in the Olympic Park and along the canals, lived in Hackney, visited pubs in Islington, and of course spent time around Queen Mary in Tower Hamlets.

More than just coming here and getting some work done, I met kind, open, and intellectually stimulating people. It was a delight to stay and work in Lars Chittka’s group, where I received support from all sides, had great discussions, and felt as welcome as if I had always been part of the group.

Now, it is time to go back home again.

Welcome to London

[Deutsche Version]

In an earlier post I wrote about my trip to London for a meeting. Now I am already in London, working on a project that resulted from that meeting. But what am I actually doing here?

In our recently published article, my colleagues and I investigated the environmental conditions that support the evolution of social learning and the effect that competition over limited resources has on it. The results are both interesting and intuitive (like so many things once you understand them). However, because the work is purely theoretical, I was looking for a simple experiment I could do with real animals. I browsed publications on behavioural studies of social learning and found one that, with relatively little adjustment, met all my needs. The authors: Aurore Avarguès-Weber and Lars Chittka. I proposed a project, was invited, and now here I am doing it.

The model organisms I am working with are, as my colleague Oscar describes them, flying teddy bears, also known as the bumblebee Bombus terrestris. When I arrived in London two weeks ago, I was impressed by the preparations made prior to my arrival. My new colleagues had already prepared a flight arena as well as a colony for me. So, with no further delay, I started experimenting.

I am now two weeks into this project. Once more I had to learn: working with animals is a process. I worked with bumblebees some years ago, and even though I was fairly good at handling them, it takes some time to get used to them again. After all, for the last three years the only things I had to handle were a keyboard and the blinking cursor in a terminal.

How do you feed a colony enough so that it is not starving, but not so much that no one forages, and thus no one participates in the experiments? It is a very thin line you need to find. Getting to know the colony, getting to grips with handling individual bees and making them learn, adjusting and adapting the experimental protocol: it all takes time, a lot of patience, and a cup of tea every now and then. The next two weeks will hopefully generate data I can use for my thesis. My preliminary results look promising. Fingers crossed it was not just a random result.

 

[English version]

In an earlier post I wrote about a project meeting in London. By now I am in London and already working on that very project. But what exactly am I doing here?

In our recently published article, my colleagues and I investigated the influence of resource competition on the evolution of social learning, in which individuals learn by observing or imitating others. The results of our theoretical model are both interesting and intuitive (but that is often the way with things once you have understood them). Our work is, however, purely theoretical, which is why I went looking for a way to test the predictions of our model in an animal experiment.

After working my way through various publications on this topic, I found a study that, with a few changes, could meet my requirements. The authors of that study: Aurore Avargués-Weber and Lars Chittka. I contacted the latter at the end of the summer, proposed an experiment, and was invited. After some very constructive discussions with him and his PhD students, I am now in Chittka's group working on exactly that project.

I have not even mentioned which animals I am actually working with. Anyone who knows Lars Chittka probably already knows which species it could possibly be. They are, as my colleague Oscar calls them, flying teddy bears, also known as the buff-tailed bumblebee (Bombus terrestris). When I arrived in London two weeks ago, I was positively surprised by my new colleagues' efforts: they had already set up most of my experiment. A colony of bumblebees and a flight arena were ready and waiting, and so I got straight to work.

Two weeks have passed since then. In that time I have had to learn once more: working with animals is an ongoing learning process. I had already worked with bumblebees for my Diplom thesis and believed I was still quite good at handling them. Yet it took a while until I was used to working with bumblebees again. One should not forget, after all, that for the past four years the only things I had to handle were my keyboard and my computer terminal.

How much may you feed a colony so that it does not starve, but is also not so well fed that in the end no bumblebee forages ('collects food') any more? The line between these two states is very thin and keeps shifting with the size of the colony. It is nevertheless important to know where it lies, because without foraging bumblebees there are no animals for the experiment. It takes a lot of time to get used to the animals and to adapt the experimental protocol. And plenty of tea.

In the coming two weeks I hope to generate enough data to be able to finish the project here in London. If the trend in my preliminary results turns into solid differences, I will be more than satisfied.

Growing

In the train now. Slowly rolling towards London. What a strange day it was. Some confusion sprinkled with stress. Maybe just another usual day as a PhD student. I still hope to get used to this way of working. I’m in my second year now, so one would assume I’m used to it by now. But I’m not there yet. It is still a journey, still a balancing act, still a learning experience. How to deal with the people around you, the ones you rely on, and the ones that share your journey? How to manage projects with unclear outcomes and novel techniques? How to manage yourself? How much work can you really deal with, what keeps you motivated, what should you avoid? In that respect, a PhD is not only a scientific endeavor but also a process of self-discovery. Really. One of my former bosses said: it’s character shaping. And it is true. You’ll change. Inevitably. Even when you might not notice it yourself. Like when your grandparents used to visit, when you were younger, and seemed so surprised by how much you had changed since their last visit. You do change. You’ll grow with your challenges. Every day a little bit. Step by step. Mostly unnoticed, just like a tree grows.

Today I’m off to London. I’m going to meet a researcher and his lab to discuss a project I have in mind. Fasten your seatbelt: I’m finally back working with bumblebees again. Somehow I have started to miss empirical experiments. After all, they are vital for testing the predictions of my computational models. I’m excited about the idea of collaborating. And I’m excited about working with bees again.

An introduction to Agent-Based Modelling in R


Rock Paper Scissors Lizard Spock

As part of my PhD I am using computational models to unravel the evolution of certain behaviours; in my case I am interested in the evolution of social learning. Here I want to give a very short introduction to creating a simple agent-based model (ABM) using R. When I started with my first ABM I had no clue where to begin, and scientific papers that use ABMs usually do not talk about the implementation (code-wise) either. So here is an example of an agent-based model for individuals that play a game commonly known as Rock, Paper, Scissors. But first, what actually are ABMs? Wikipedia says that ‘an agent-based model is one of a class of computational models for simulating the actions and interactions of autonomous agents with a view to assessing their effects on the system as a whole.’ Now, let us analyse what the fundamentals of an ABM are.

What you need for an agent-based model

The minimum ingredients for an agent-based model are:

  • Agents that interact with the world around them and/or with other agents
  • A world in which the agents ‘live’ or move around
  • A set of rules that determines what every agent is allowed or has to do
  • A loop, which allows the agents to repeatedly act or interact

In our case agents do not move around, and therefore we will not consider the second point (a world). Let us start with only two agents that each play one of the three strategies (Rock, Paper, Scissors) against the other. We will define two individuals, let them choose a strategy, and then play.

Creating agents is actually very simple. We need to keep track of each individual, and therefore it needs an ID. In our model individuals choose a strategy, which we will associate with the ID. And finally, to make the model interesting, let us monitor the number of times an individual wins against the other. To keep track of all this we create a data.frame with the corresponding columns:

indDF <- data.frame(id=1:2, strategy=NA, num_wins=0)
indDF
## id strategy num_wins
## 1 1 NA 0
## 2 2 NA 0

That checks off our first point: our agents are ready to play. In the next step we let them choose a strategy. As our agents will choose their strategies repeatedly, we simply create a function for this. We hand over the indDF, and the function assigns a random strategy to the strategy column of the data.frame. We use numbers instead of names for the strategies, as this will make working with them easier later on.

chooseStrategy <- function(ind){
  strats <- sample(1:3, size=2, replace=TRUE)
  ind$strategy <- strats
  return(ind)
}

Let us proceed to the next step, where the agents play their strategies. Again, we create a separate function for this. What happens inside the function can be summarised like this: the strategies are ordered numerically in the way they win against each other, i.e. paper: 1, scissors: 2, rock: 3, so that normally the higher number wins. Because rock (3) would then wrongly beat paper (1), we need to identify this special case (rock losing against paper). The number of wins of the individual with the winning strategy is increased by one. If both individuals play the same strategy, nothing happens in that round.

playStrategy <- function(ind){
  if(ind$strategy[1]==ind$strategy[2]) {} else{
    # in the case that one chose rock and the other paper:
    if(any(ind$strategy == 3) && any(ind$strategy == 1)){
      tmp <- ind$strategy == 1
      ind[tmp,"num_wins"] <- ind[tmp,"num_wins"] + 1
    }else{
      # for the two other cases, the better weapon wins:
      tmp <- ind$strategy == max(ind$strategy)
      ind[tmp,"num_wins"] <- ind[tmp,"num_wins"] + 1
    }
  }
  return(ind)
}

Now we can let the individuals play against each other repeatedly. We are going to use a simple for loop for this, letting the individuals play 1000 rounds against each other.

indDF <- setup()
for(i in 1:1000){
  indDF <- chooseStrategy(indDF)
  indDF <- playStrategy(indDF)
}
indDF
## id strategy num_wins
## 1 1 2 488
## 2 2 3 512

You might have spotted a function at the beginning of the chunk above: I wrote a small setup function that allows us to quickly create the data.frame we were using before, an easy way to reset the simulation.

setup <- function(){
  return(data.frame(id=1:2, strategy=NA, num_wins=0))
}

We now have a neat little model. You will find that there is not much of a difference between the two individuals, especially when you let the model run for more rounds.

But say you would like to monitor what is happening throughout the simulation. We can record the process by letting the loop report the individuals’ results every turn. We will simply write the number of wins of both individuals into a two-column matrix called dat and subsequently plot the result:

rounds <- 1000
indDF <- setup()
dat <- matrix(NA, nrow=rounds, ncol=2)
for(i in 1:rounds){
  indDF <- chooseStrategy(indDF)
  indDF <- playStrategy(indDF)
  dat[i,] <- indDF$num_wins
}

plot(dat[,1], type='l', col='#EA2E49', lwd=3, xlab='time', ylab='number of rounds won')
lines(dat[,2], col='#77C4D3', lwd=3)

[Figure ABM_1-1: number of rounds won over time for both individuals]

The model is running and we can observe what is happening. Now it becomes interesting: we can use the model to actually test a hypothesis. For instance: is it true that a player which never switches its strategy is more successful when it plays against another individual that randomly switches its strategy? To test this we need to adjust the strategy-choosing function.

chooseStrategy2 <- function(ind){
  strats <- sample(1:3, size=1)
  ind$strategy[2] <- strats
  return(ind)
}

Now the second individual changes its strategy randomly, while the first chooses a strategy once and then sticks with it. We store the result of each simulation in a vector called res2 and will compare it to simulations where both individuals switch randomly between strategies. To make the results more robust, let us repeat each simulation 100 times.

rounds <- 100
repetitions <- 100
dat <- matrix(NA, nrow=rounds, ncol=2)
res2 <- numeric(repetitions)
for(j in 1:repetitions){
  indDF <- setup()
  indDF[1,"strategy"] <- sample(1:3, size=1) # the first individual picks once and sticks with it
  for(i in 1:rounds){
    indDF <- chooseStrategy2(indDF)
    indDF <- playStrategy(indDF)
    dat[i,] <- indDF$num_wins
  }
  res2[j] <- dat[rounds,1]/dat[rounds,2] # one plausible summary: player 1's wins relative to player 2's
}

plot(dat[,1], type='l', col='blue', lwd=3, xlab='time', ylab='number of rounds won')
lines(dat[,2], col='red', lwd=3)

# for comparison, let's calculate the winning vector when both players switch strategies:
res1 <- numeric(repetitions)
for(j in 1:repetitions){
  indDF <- setup()
  for(i in 1:rounds){
    indDF <- chooseStrategy(indDF)
    indDF <- playStrategy(indDF)
    dat[i,] <- indDF$num_wins
  }
  res1[j] <- dat[rounds,1]/dat[rounds,2]
}

# and the winner is:
t.test(res1,res2)

##
## Welch Two Sample t-test
##
## data: res1 and res2
## t = 0.5579, df = 202, p-value = 0.5775
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.09938468 0.17781605
## sample estimates:
## mean of x mean of y
## 1.529412 1.490196

[Figure ABM_2-1: number of rounds won over time when only the second individual switches its strategy]

At the end of the chunk I added a t-test to compare the two types of simulations. As you can see: no, it does not make a difference whether the agent changes its strategy or not. (This does not, of course, take human psychology into account.) Nevertheless, an interesting result.

Rock Paper Scissors on a network

In this second example we are going to use the same game, but this time several individuals will play against each other. In my PhD project I assume that individuals can only interact with other individuals with which they share a connection. An easy way to think about this is a network, where individuals are represented by nodes and connections by ties. We will use a simple lattice network, so individuals can only play with their direct neighbours. To add an evolutionary dynamic to the simulation, individuals that lose adopt the strategy of the winner. Have a read through the code; there are some explanations in it.

require(igraph) # for networks
require(reshape) # to reshape the resulting data into a format ggplot2 can use
require(ggplot2) # for plotting

# size of the lattice
sidelength <- 10
# creating an empty data.frame to store data
stat <- data.frame()
# creating a lattice network using the igraph package
l <- graph.lattice(length=sidelength, dim=2)
# now every individual chooses a strategy at random
V(l)$weapon <- sample(1:3, size=length(V(l)), replace=TRUE)
# for a nicer visualisation let's colour the different options
V(l)[weapon==1]$color <- 'blue'   # Paper
V(l)[weapon==2]$color <- 'yellow' # Scissors
V(l)[weapon==3]$color <- 'green'  # Rock
# and this is what it looks like:
plot(l, layout=as.matrix(expand.grid(1:sidelength, 1:sidelength)), vertex.label=NA)

[Figure ABM_3-1: the lattice network with randomly assigned strategies]

[Figure ABM_3-2: number of individuals playing each strategy over time]
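The game loop itself can be sketched as follows: in each round one randomly picked individual plays against one of its neighbours, and the loser adopts the winner's strategy. This is a minimal sketch (the round count and the final plotting step are assumptions; the loop mirrors the four-strategy version further below):

```r
for(t in 1:2500){
    # pick a random focal individual and one of its neighbours
    from <- as.numeric(sample(V(l), 1))
    nei <- neighbors(l, v=from, mode='all')
    to <- sample(nei, 1)
    fromto <- c(from, to)
    w <- as.numeric(V(l)$weapon[fromto])
    if(w[1] != w[2]){
        if(max(w) == 3 && min(w) == 1){
            # special case: paper (1) beats rock (3)
            V(l)$weapon[fromto[w==3]] <- 1
        } else {
            # otherwise the higher number wins, and the loser adopts it
            V(l)$weapon[fromto[w==min(w)]] <- V(l)$weapon[fromto[w==max(w)]]
        }
    }
    # record how many individuals currently play each strategy
    stat <- rbind(stat, c(sum(V(l)$weapon==1), sum(V(l)$weapon==2), sum(V(l)$weapon==3)))
}

names(stat) <- c("Paper","Scissors","Rock")
s <- melt(stat)
s$time <- 1:nrow(stat)
ggplot(data=s, mapping=aes(x=time, y=value, col=variable)) + geom_line() + theme_bw()
```

If one strategy takes over the whole network, this loop simply keeps running without further changes; the four-strategy version below additionally lets the focal individual mutate in that case.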

What we observe are interesting dynamics between the three strategies. For the limited number of rounds that we let the model run, all strategies coexist. However, sometimes one strategy disappears, which leads to the win of the strategy that loses against it. For example, if paper disappears, rock should win in the long run. You can now experiment with what would happen with a smaller or bigger network, or even a different network type; the igraph package offers many possibilities here.
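For example, swapping the lattice for a different network only requires changing how l is built. The parameter values below are illustrative, and on sparse random graphs you should check that no node is isolated, since the game loop samples from a node's neighbours:

```r
require(igraph)

# a small-world network (Watts-Strogatz) with 100 nodes instead of the lattice:
l <- watts.strogatz.game(dim=1, size=100, nei=2, p=0.05)

# or an Erdos-Renyi random graph:
# l <- erdos.renyi.game(n=100, p.or.m=0.04)

# as before, every individual starts with a random strategy
V(l)$weapon <- sample(1:3, size=length(V(l)), replace=TRUE)
```

The rest of the simulation stays the same, since the game loop only asks the network for a node's neighbours.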

And what about Spock?

The model we have created so far can be used to investigate, for example, epidemic dynamics: how do information, rumours, and ideas spread through a network? When we think of strategies spreading through a network, we might want to add another strategy and see how a four-strategy game differs from a three-strategy game. Let us add Spock from Rock, Paper, Scissors, Lizard, Spock (see for example http://www.samkass.com/theories/RPSSL.html), which you might have heard of from The Big Bang Theory.


```r
# size of the lattice
sidelength<-10
# creating an empty data.frame to store data
stat<-data.frame()
# creating a lattice network using the igraph package
l<-graph.lattice(length=sidelength,dim=2)
# now every individual chooses a strategy at random
V(l)$weapon<-sample(c(1,2,2.9,3), size=length(V(l)), replace=T)
# for a nicer visualisation lets colour the different options
V(l)[weapon==1]$color<-'blue' # Paper
V(l)[weapon==2]$color<-'yellow' # Scissors
V(l)[weapon==3]$color<-'green' # Rock
V(l)[weapon==2.9]$color<-'purple' # Spock
# and this is what it looks like:
plot(l, layout=as.matrix(expand.grid(1:sidelength, 1:sidelength)), vertex.label=NA)
```

Let us have a look at how our network looks with four strategies:

[Figure ABM_4-1: the lattice network with four strategies]

Finally, let us run the slightly altered model.

for(t in 1:2500){
    from <- as.numeric(sample(V(l), 1))
    nei<-neighbors(l, v=from, mode='all')
    if(length(unique(V(l)$weapon))==1) {
        V(l)$weapon[from] <- sample(setdiff(c(1,2,2.9,3), as.numeric(V(l)$weapon[from])), 1)
    } else {
        to <- sample(nei, 1)
        fromto<-c(from,to)
        w<-as.numeric(V(l)$weapon[fromto])
        if(w[1]==w[2]) {} else{
            if(max(w) == 3 && min(w) ==1) {
                V(l)$weapon[fromto[w==3]] <- 1
            }
            else{
                V(l)$weapon[fromto[w==min(w)]] <- V(l)$weapon[fromto[w==max(w)]]
            }
        }

    }
    stat <- rbind(stat, c(sum(V(l)$weapon==1), sum(V(l)$weapon==2), sum(V(l)$weapon==2.9), sum(V(l)$weapon==3)))
    # you can also plot each individual network configuration in each step of the simulation
    # V(l)[weapon==1]$color<-'blue' # Paper
    # V(l)[weapon==2]$color<-'yellow' # Scissors
    # V(l)[weapon==3]$color<-'green' # Rock
    # V(l)[weapon==2.9]$color<-'purple' # Spock
    # plot(l, layout=as.matrix(expand.grid(1:sidelength, 1:sidelength)), vertex.label=NA)
}

names(stat)<-c("Paper","Scissors","Spock","Rock") # same order as the counts recorded above (1, 2, 2.9, 3)
s<-melt(stat)
s$time<-1:nrow(stat)
ggplot(data=s, mapping=aes(x=time, y=value, col=variable)) + geom_line() + theme_bw()

[Figure ABM_4-2: number of individuals playing each of the four strategies over time]

I hope this introduction was helpful and allows you to come up with your own ideas for agent-based models. Share your versions of the code in the comments if you like.

Our buggy moral code – when do we cheat?

Dan Ariely is a behavioural economist. In this TED talk he presents his experiments and insights into predictable irrationality. Surprising or not, there are many cases of irrational behaviour in humans. Here Ariely focusses on intuitions and cheating, which he and his lab investigated. Let me summarise what he has to say about cheating. The experiment his lab conducted is quite simple: a group of participants receives a sheet of paper with a number of mathematical tasks to solve. After a certain time, chosen so that it is impossible to solve all the tasks, the participants have to hand back their sheets, and they get paid for every solved task. Ariely explains how his lab then added opportunities to cheat; for example, participants would shred their sheets and then simply report how many tasks they had solved. Interestingly, the participants would cheat a little, but not very much. This pattern is consistent across several modifications of the experiment that would in theory allow more cheating. However, one specific version of the experiment (participants would ask for tokens instead of money and could then exchange the tokens for money somewhere else) drastically increased cheating: separating the lie from receiving the reward (money), and abstracting money into tokens, made cheating more tempting. Finally, Ariely describes a version of the experiment in which an acting participant blatantly cheated; thirty seconds into the experiment the actor would declare that he or she had finished all the tasks and would collect the reward. As this obvious act of cheating was not punished, one could expect it to increase cheating. But that was not always the case: it depended on the sweatshirt of the acting participant. When the shirt carried the logo of the university all the other participants studied at, cheating would indeed go up, while there was no cheating when the shirt was from a different university.

People seem either to want to separate themselves from misbehaving individuals, or to identify with a group of people and their behaviour. This has important implications for what we have seen, and still see, at stock markets.

Not by learning alone

How does a population of any species maintain its behavioural characters? In other words, how do individuals of a species ensure that information about how to survive in the world is passed on from one generation to the next? This can be basically everything that is somehow related to feeding, growing, surviving, and reproducing.

Modes of information transmission

I found this very nice (and short) paper by Bennett Galef Jr. from 1975, in which he explains three mechanisms by which behaviours are passed on to the next generation. Instead of describing the transmission of behaviours I will rather talk about information, as behaviour is also just information. The mechanisms are:

§1 Information is innate – In this case information is ‘endogenous’ to the individual, being part of its genetic code. The genotype not only influences the phenotype of an organism, but also its propensity for different behaviours (you are less likely to learn how to fly if you were born without wings).

§2 Similar information is gathered by experiencing similar interactions with the (non-social) environment – Individuals of a population that experiences predation by birds of prey might learn very similar avoidance or escape strategies compared to each other, but likely very different ones compared to individuals of a population that faces predation by snakes. That said, it is not all too surprising to find similar behaviours when comparing individuals from different species; this is comparable to convergent evolution, where similar environmental conditions and natural selection produce analogous adaptations, like fins in dolphins and penguins.

§3 Information is socially transmitted – In this case individuals gain information by interacting with, or observing the behaviour of, another individual. Because this happens in the context of other individuals it is also called ‘social learning’, in contrast to §2, where individuals learn on their own (henceforth called ‘individual learning’); examples of the latter are trial-and-error and insight learning.

As Wakano and Aoki (2006) note, all three modes of information transmission are usually present in a population; they differ, however, in the type of information they carry. If the environment is stable or only slowly changing, and information about it stays valid over a long time, it can be innate. If the environment changes moderately, social learning is often found, and individual learning becomes inevitable when the world changes quickly. (FYI, that’s what I find in my models as well 😉)

Social interaction – sufficient or necessary?

Galef now goes on to discuss social learning (§3) in more detail. Specifically, he talks about an aspect I was not aware of before: when is social interaction between individuals sufficient, and when is it necessary, for learning? What does that mean? The first example Galef gives is based on a 1965 study on rhesus monkeys by Harlow and Harlow. They found that individuals that grew up without interacting with their mothers or group members never developed ‘normal’ sexual or maternal behaviour. Here, social interaction is necessary to acquire the ‘normal’, relatively invariant, species-typical behaviours.

Very different from that example is a study by Galef and Clark from 1971, in which adult rats were fed two types of food: a preferred type with a sub-lethal dose of poison (which causes nausea but no lasting harm) and a less preferred one, which was not altered. The adults learned to eat the less preferred food and avoided the initially preferred food items. Consequently, pups also preferred the initially less preferred food, although they never even came in contact with the prepared food items, and thus with the adverse stimulus. This is an example where individuals (the pups) could have acquired the information on their own (without a social context), by sampling both food types. Here, social interaction is sufficient but not necessary. But let me cite Galef’s elegant description:

Idiosyncratic patterns acquired by the transmitter, as a result of its history of transaction with the environment, may be introduced into a population repertoire, resulting in the establishment of socially transmitted traditions within subpopulations of a species.1

As we see, social learning is a mechanism that not only allows a population to maintain a repertoire of information and behaviours, but also to add new elements to it. But let us turn to the last part of the paper:

How then is information transmitted?

Galef uses an example of food preference and predator avoidance to describe two mechanisms by which information can be transmitted:

§4 Altering the environment – Adult rats heavily mark places with urine or faeces to indicate safe food sources, which in turn are then preferred by rat pups. The young are also known to prefer feeding in close vicinity to adults. These are two examples of information transmission via local or stimulus enhancement (Galef and Clark, 1971a).

§5 Pairing an innate tendency with a social interaction – In two studies, rat pups were shown to start running when adults run (Reiss, 1972; Angermeier, 1959). This unconditioned tendency (run when an adult runs) can be used to couple an unconditioned stimulus (a fleeing adult) with a conditioned stimulus (the sight of a predator).

Galef closes his paper by pointing out the evolutionary significance of social learning. He states that if laboratory experiments resemble natural conditions, then trial-and-error learning must be energy-consuming and error-prone. Therefore, it must confer a fitness benefit on parents if their naïve offspring are capable of rapidly acquiring relevant behaviours (locating and handling food; discovering, avoiding, and escaping predators) and so quickly become independent of their parents.

A very nice and insightful read!

EDIT: the title of this post is a play on the title of the book ‘Not by Genes Alone’ by Richerson and Boyd.

References
Galef BG (1975) The Social Transmission of Acquired Behavior. Biol Psychiatry 10(2):155–160.

Wakano JY, Aoki K (2006) A mixed strategy model for the emergence and intensification of social learning in a periodically changing natural environment. Theor Popul Biol 70(4):486–497.


  1. Sentences like this are the reason why I enjoy communicating science in a, say, simpler language.