Logic agents?

Does anyone know of any studies that have done something like the following, or know a reason it shouldn’t work?

Imagine a population of computer agents that spit out sequences of sentences, random well-formed formulas of first-order logic. Selection acts not at the genetic level; rather, the agents are able to remember the inter-sentence transitions that lead from true sentences to false ones, and they stop using such transitions. After a zillion iterations, would their allowable inter-sentence transitions look anything like inferences in first-order logic?
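
For concreteness, here is a minimal sketch of what one such agent might look like. Every name, the representation of sentences, and the uniform random choice rule are illustrative assumptions, not a worked-out design:

```python
import random

class Agent:
    """Spits out random sentences, but remembers which inter-sentence
    transitions it has seen lead from true to false and stops using them."""

    def __init__(self, sentence_pool):
        self.sentence_pool = list(sentence_pool)   # candidate well-formed formulas
        self.banned = set()                        # (previous, next) transitions to avoid

    def next_sentence(self, previous):
        """Pick a random sentence whose transition from `previous` is still allowed."""
        options = [s for s in self.sentence_pool
                   if previous is None or (previous, s) not in self.banned]
        return random.choice(options) if options else None

    def learn(self, previous, prev_true, current, curr_true):
        """Weed out any transition observed to go from a true to a false sentence."""
        if previous is not None and prev_true and not curr_true:
            self.banned.add((previous, current))


def run(agent, is_true, steps):
    """Let one agent chatter for a while against a truth oracle `is_true`."""
    previous, prev_true = None, None
    for _ in range(steps):
        current = agent.next_sentence(previous)
        if current is None:                        # every continuation is banned
            break
        curr_true = is_true(current)
        agent.learn(previous, prev_true, current, curr_true)
        previous, prev_true = current, curr_true
    return agent.banned
```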

There would obviously be some problems if it were done without good planning. For instance, the world would have to be large enough to have lots of predicates, but small enough for a manageable simulation (if the world were too large, the agents would so rarely say true things that it would take far too long). Perhaps we could clamp their first N sentences to the value ‘true’ to speed up the simulation (observation statements or some such). We’d also have to limit the length of the sentences (e.g., no more than two disjuncts or conjuncts, no deeply nested conditionals, that sort of thing).
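
Concretely, those caps could be collected into a handful of tunable parameters. A sketch of what that might look like, with placeholder names and defaults that are assumptions rather than recommendations:

```python
from dataclasses import dataclass

@dataclass
class SimulationParams:
    """Knobs that keep the world and the language small enough to simulate."""
    num_objects: int = 10             # size of the domain
    num_predicates: int = 5           # enough predicates to be interesting, not intractable
    num_clamped: int = 20             # first N sentences clamped to 'true' (observation statements)
    max_connectives: int = 2          # at most two conjuncts or disjuncts per sentence
    max_nesting_depth: int = 1        # no deeply nested conditionals
    num_agents: int = 100             # population size
    num_iterations: int = 1_000_000   # 'a zillion', scaled down to something runnable
```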

In other words, to avoid obvious problems we’d have to have parameters in the simulation to limit its complexity. Someone like Chomsky might say this shows it is BS because they weren’t learning the rules of inference, but that’s not really the goal. It would be a more Brandomian goal of seeing how far we can get without any explicitly coded rules. How logical can you get without thinking explicitly about logic?

Has anyone done something like this?

It would be cool to see the output of such a simulation, the types of strings produced as the simulation progressed. Hopefully they wouldn’t end up sitting around repeating the same sentence over and over.

4 Comments

  1. Nice question. I don’t know of any such study. I would guess that a lot depends on what the agents can do, on what they learn, and on their environment. Setting up an environment in which they have something they have to do is not a simple thing; and figuring out what kind of information they get about what is going on is also not a simple thing. (One question: if you are interested in how far you can get without logical rules, why are you thinking about sentences at all? Doesn’t that already sort of stack the deck with something that is supposed to plug into some kind of logic?)

  2. Eric Thomson

    Tony:

    In the simplest case, the simulated environment would just be a set of objects with certain properties, and the agents wouldn’t actually interact with it at all but would just spout sentences about the environment, some true, some false. When a true sentence is followed by a false one, that “inference rule” is weeded out. (A toy version of such a world is sketched at the end of this comment.)

    I would expect the sentences to plug into logic, but it doesn’t seem an unreasonable starting point. After all, humans use sentences, and presumably did before they developed any kind of formal logic. Plus, what else would they start with? I wouldn’t want them to have to learn a grammar first, as that would be a whole different problem.
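
    A minimal sketch of what such a world and its truth-evaluation might look like; the sentence encoding and all names are purely illustrative assumptions, and an evaluator like this could serve as the truth oracle the agents’ utterances are checked against:

    ```python
    # A toy world: a handful of objects, each with a set of properties.
    world = {
        "a": {"Red", "Round"},
        "b": {"Red"},
        "c": {"Round", "Small"},
    }

    def holds(predicate, obj):
        """An atomic sentence like Red(a) is true just in case the object has the property."""
        return predicate in world[obj]

    def is_true(sentence):
        """Evaluate a tiny fragment: atoms, negations, and binary conjunctions/disjunctions."""
        kind = sentence[0]
        if kind == "atom":
            _, predicate, obj = sentence
            return holds(predicate, obj)
        if kind == "not":
            return not is_true(sentence[1])
        if kind == "and":
            return is_true(sentence[1]) and is_true(sentence[2])
        if kind == "or":
            return is_true(sentence[1]) or is_true(sentence[2])
        raise ValueError(f"unknown sentence form: {kind!r}")

    # e.g. is_true(("and", ("atom", "Red", "a"), ("not", ("atom", "Small", "b"))))  -> True
    ```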

  3. Eric Thomson

    Arnold: I am focusing on intersubjective linguistic patterns rather than the emergence of individual semantic comprehension. These agents have no brains, just very tiny minds. They do only a couple of things: spit out strings of sentences, and remember which transition patterns lead from true to false so they can stop using them.

    Built into the whole enterprise is the view that logical deduction is insensitive to the values of the predicates.

    So if they eventually learn to avoid the transition from 1 and 2 to 3:
    1. A -> B
    2. B
    3. A

    That means they also screen off the same thing here:
    1. C -> D
    2. D
    3. C
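
    One way to make that insensitivity concrete (purely as an illustration, with made-up names) is to store the weeded-out transitions as schemas in which the predicate letters have been renamed consistently, so that banning one instance automatically bans every substitution instance:

    ```python
    import re

    def schematize_transition(premises, conclusion):
        """Rename predicate letters consistently across premises and conclusion,
        so the stored pattern is insensitive to which particular predicates occur."""
        mapping = {}
        def rename(match):
            letter = match.group(0)
            if letter not in mapping:
                mapping[letter] = f"P{len(mapping)}"
            return mapping[letter]
        return (re.sub(r"[A-Z]", rename, premises),
                re.sub(r"[A-Z]", rename, conclusion))

    banned = set()
    banned.add(schematize_transition("A -> B, B", "A"))       # the bad move above gets weeded out

    print(schematize_transition("C -> D, D", "C") in banned)  # True: same pattern, new letters
    print(schematize_transition("A -> B, A", "B") in banned)  # False: modus ponens survives
    ```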

    It’s great that you made your book available online!
