Simulation Theory and Robotics

My colleague Bob Gordon is the originator of the simulation theory, one of the two main theories of folk psychology (the other being the theory theory).  Aside from being an influential theory in philosophy, psychology, and neuroscience, in recent years the simulation theory has had a large impact on social robotics.

Bob is off to a one-day workshop on robotics in Newcastle, where he is the only invited speaker.  As Bob pointed out to me, “the simulation theory is now probably the dominant architecture for designing social robots, the C-3POs of the future, which ‘mind-read’ human intentions, emotions, etc., respond empathetically, and have the potential to become collaborative companions of one sort or another.”

One of the papers submitted to the workshop is by the team at the MIT Media Lab that is applying the simulation theory (and Meltzoff’s infant mimicry work) to Leonardo, “the Stradivarius of expressive robots.”  If you click on the link and go to “Social Learning,” the part called “Learning by Imitation” is based on the simulation theory.  Leonardo’s body was designed and made in Hollywood by the Jurassic Park/Terminator folks (the Stan Winston Studio) and cost a million dollars.  More work along similar lines, though with less sophisticated implementations, is coming out of USC, Imperial College (London), and elsewhere.

8 Comments

  1. The discovery of mirror neurons was applauded by neuroscientists and psychologists alike, because they saw in mirror neurons the final demonstration and vindication of the simulation theory.

    Using artifacts and robots is a case of reverse engineering to test what a theory predicts, and simulation theory and models of social learning by imitation are the ones most adequately implemented in robots.

    My question is: is it possible to implement the theory-theory alternative in mind-reading robots, that is, could a full-blown knowledge of others be contained in software or silicon?

    My preliminary answer, or intuition, here is that it is impossible to contain in software the encyclopedic “knowledge” needed to interpret others; but empirical evidence similar to the finding of mirror neurons is also pointing against simulation theory and in favor of theory-theory (see Saxe 2005).

    Is simulation theory the only game in town, or are there others (in addition to variations of simulation theory and theory-theory)?

  2. Tony Dardis

    How does a robot simulate the behavior of another, if not by using a theory? If I’m coding up a way for the robot to calculate how it would behave if it were in your shoes, isn’t my program a theory of the robot’s behavior and a theory of others’ behavior?

  3. I’m not here to defend simulation theory (because I aired some questions in my previous post, though it has tremendous allure), but to respond naively, from the perspective of simulation theory, to the questions you posed (I hope Gordon, Heal, Goldman, or others come here to clarify things for us): perhaps the genetic parallelism is enough.

    Our genetically coded program, our genetic endowment shaped by eons of evolution, does not code for every action, thought, feeling, mood, emotion, or even every physiological requirement that the environment demands of us; and it can be seen as similar to the basic parametric settings that a roboticist needs in order to allow a robot to simulate another without any further guide or “theory” thereafter. Don’t you think?

  4. Tony Dardis

    So the idea would be: simulation theory is vindicated if the robot generates beliefs about another by seeing what it itself would do, and the only way to get information about what it itself would do is to try the same thing?

    This makes sense if the robot needs information, for instance, about how its leg will move, and it doesn’t have an explicit physical theory coded anywhere in memory. It’s harder for me to understand if the robot is getting information about how it itself *would* do things by simulating them, since it seems to me that simulating them is running a simulation program, which *does* explicitly code for everything.

    Or could it go like this: the robot has tons of code for running the body, but it can decouple the code from the inputs and outputs, then run the code with simulated inputs and outputs, and then generate predictions about the other based on what the outputs look like? (I’ll attempt a rough sketch of this at the end of this comment.)

    So the parallel to our deep evolutionary history is the design of how the robot works, and we’re saying that the robot has access to that, only by way of running code that determines a limited range of facts about how it *would* work under simulated conditions?

    And is this supposed to be a way for a robot to interact with people? By simulating what the robot would do if it were in the situation it believes the people are in?

    I don’t have an axe to grind here; I’m just trying to feel out what the difference between the two models would be for a robot.
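
    Here is the rough sketch of the decoupling I have in mind, in made-up Python (none of this is anyone’s actual robot code; every name is invented, purely to illustrate the idea):

    ```python
    # Illustrative only: one body of control code, run either "live"
    # (driving real actuators) or decoupled, with simulated I/O.

    def control_step(percept):
        """The robot's ordinary body-running code: percept in, action out."""
        return "step_back" if percept.get("obstacle") else "step_forward"

    def act_live(sensors, motors):
        # Live mode: real inputs, and the output is sent to the motors.
        motors.execute(control_step(sensors.read()))

    def predict_other(imagined_percept):
        # Decoupled mode: a simulated input, and the output is read off
        # as a prediction about the other agent rather than executed.
        return control_step(imagined_percept)

    # E.g., predicting what an agent seen facing an obstacle will do:
    # predict_other({"obstacle": True})  ->  "step_back"
    ```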

  5. gualtiero

    Tony, as a first approximation, you may think of the difference between computational implementations of simulation theory and theory theory as follows. On either theory, a robot has its own instructions for reasoning, making decisions, etc. On the theory theory, a robot that understands other minds must also have, in addition to its own instructions, a program that contains explicit statements of psychological generalizations, laws of logic, etc., which are applied to descriptions of initial conditions pertaining to other agents to derive conclusions about their minds. On the simulation theory, by contrast, the robot can generate conclusions about other agents by plugging descriptions of their initial conditions into its own instructions for reasoning, decision making, etc., albeit running them in a “pretend” mode that does not lead to actions.
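
    To make the contrast concrete, here is a toy Python sketch of my own (every generalization, procedure, and name is invented for illustration; no actual robot architecture is being described):

    ```python
    # Toy contrast between a theory-theory robot and a simulation-theory robot.

    def my_decision_procedure(beliefs, desires):
        """The robot's own instructions for reasoning and deciding."""
        goal = desires[0]
        return goal if goal in beliefs.get("available_actions", []) else "wait"

    # Theory theory: a separate, explicit body of psychological
    # generalizations, applied to descriptions of the other agent's
    # initial conditions to derive conclusions about its mind.
    PSYCH_GENERALIZATIONS = [
        # "Agents perform their most desired action if they believe it is available."
        lambda beliefs, desires: desires[0]
        if desires[0] in beliefs.get("available_actions", []) else None,
    ]

    def tt_predict(other_beliefs, other_desires):
        for law in PSYCH_GENERALIZATIONS:
            conclusion = law(other_beliefs, other_desires)
            if conclusion is not None:
                return conclusion
        return "wait"

    # Simulation theory: no separate psychology. Plug the other agent's
    # initial conditions into the robot's OWN decision procedure, run it
    # in "pretend" mode (the result is never sent to the actuators), and
    # read the output off as a prediction.
    def st_predict(other_beliefs, other_desires):
        return my_decision_procedure(other_beliefs, other_desires)
    ```

    On this toy picture the two give the same predictions; the difference is architectural: where the psychological knowledge lives, and whether the robot’s own reasoning machinery is reused off-line.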

  6. I’m pleased with Gualtiero’s words about how a theorizing robot and a simulating robot might behave.

    One way to mitigate the substantial gap between simulationists and theory-theorists is to pursue the reverse-engineering strategy, say, building artifacts to test our hypotheses, and also to do empirical research on the underlying neural substrates of social cognition, both within and across species.

    But I tentatively believe that simulation theory is more inclusive, because it recognizes the evolutionary continuities as well as the differences between humans and non-human animals, while the theory-theory stresses only the differences: humans “theorize” and non-human animals do not (see Gallese 2003).

    In the case of robots, they have only one future: resembling us, that is, doing artificially what we do naturally, or else remaining simple toys.

  7. Fair enough. What’s the difference, then, between

    (a) instructions for reasoning, etc., PLUS an explicit theory about psychology, and

    (b) instructions for reasoning not running in pretend mode, PLUS some code that uses the instructions for reasoning, etc., running in pretend mode?

    Is the difference something to do with the way the theory is expressed? Why couldn’t the instructions, etc., from part (a) themselves be a theory, cleverly arranged to control how the robot works?

  8. gualtiero

    Clearly the distinction between theory theory and simulation theory depends on the distinction between theory and simulation. If theory and simulation are defined so broadly that they amount to the same thing, then TT and ST collapse into one and the same theory. I don’t know the whole literature, but from what I’ve read, I’m not sure that there is any sharp and uncontroversial way to draw the distinction.

    Without doing more work on this matter, the best I can do is repeat what I said: if a robot understands other minds by using its own procedures (which you may wish to call a theory, if you have a broad enough notion of theory), then it implements ST; if a robot understands other minds by using generalizations about minds separate from its own procedures (which you may call a simulation, if you have a broad enough notion of simulation), it implements TT.
