Does anyone know of any studies that have done something like the following, or know a reason it shouldn’t work?
Imagine a population of computer agents that spit out sequences of sentences: random well-formed formulas of first-order logic. Selection acts not at the genetic level; rather, the agents are able to remember the inter-sentence transitions that lead from true sentences to false ones, and they stop using those transitions. After a zillion iterations, would their allowable inter-sentence transitions look anything like valid inferences in first-order logic?
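For concreteness, here is a minimal sketch of the core loop as I imagine it (my own toy setup, not from any existing study, and quantifier-free just to keep it short): a tiny fixed world assigns truth values to ground atoms, an agent emits random formulas built from a few connectives, and it blacklists any transition it has seen go from a true sentence to a false one.

```python
import random

# Toy fixed world (all names here are mine): truth values for ground atoms
# over a two-element domain with two predicates.
DOMAIN = ["a", "b"]
PREDICATES = ["P", "Q"]
WORLD = {("P", "a"): True, ("P", "b"): False,
         ("Q", "a"): False, ("Q", "b"): True}

def random_formula(depth=2):
    """Random well-formed formula as a nested tuple, with bounded nesting."""
    if depth == 0 or random.random() < 0.4:
        return (random.choice(PREDICATES), random.choice(DOMAIN))  # ground atom
    op = random.choice(["not", "and", "or", "implies"])
    if op == "not":
        return ("not", random_formula(depth - 1))
    return (op, random_formula(depth - 1), random_formula(depth - 1))

def true_in_world(f):
    """Evaluate a formula against the fixed world."""
    if f[0] == "not":
        return not true_in_world(f[1])
    if f[0] == "and":
        return true_in_world(f[1]) and true_in_world(f[2])
    if f[0] == "or":
        return true_in_world(f[1]) or true_in_world(f[2])
    if f[0] == "implies":
        return (not true_in_world(f[1])) or true_in_world(f[2])
    return WORLD[f]  # ground atom

def run_agent(steps=100_000):
    """One agent babbling sentences and blacklisting true -> false transitions."""
    banned = set()
    prev = random_formula()
    for _ in range(steps):
        nxt = random_formula()
        if (prev, nxt) in banned:
            continue  # the agent refuses transitions it has learned are bad
        if true_in_world(prev) and not true_in_world(nxt):
            banned.add((prev, nxt))  # remember the truth-losing transition
        prev = nxt
    return banned

if __name__ == "__main__":
    print(len(run_agent()), "transitions suppressed")
```

The interesting question would then be whether the transitions that survive (the complement of that blacklist) start to cluster around things like modus ponens or conjunction elimination.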
There would obviously be some problems if it were done without good planning. For instance, the world would have to be large enough to have lots of predicates, but small enough to keep the simulation manageable (in too large a world, the agents would so rarely say true things that it would take way too long). Perhaps we could clamp their first N sentences to the value ‘true’ to speed up the simulation (observation statements or some such). We’d also have to limit the length of the sentences (e.g., no more than two disjuncts or conjuncts, no deeply nested conditionals, that sort of thing).
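To make those knobs concrete, here is one way it might look (again just a sketch, with made-up parameter names), presupposing a WORLD mapping ground atoms to truth values like the one in the sketch above:

```python
import random

# Hypothetical complexity caps (my names, not from any existing study).
MAX_DEPTH = 2        # no deeply nested conditionals
MAX_CONJUNCTS = 2    # at most two conjuncts or disjuncts per sentence
N_OBSERVATIONS = 20  # first N sentences clamped to 'true'

def observation_sentences(world, n=N_OBSERVATIONS):
    """Seed an agent's run with n ground atoms that are true in the world
    ('observation statements'), so it doesn't flounder before saying
    anything true."""
    true_atoms = [atom for atom, val in world.items() if val]
    return [random.choice(true_atoms) for _ in range(n)]

# Usage (with the WORLD dict from the earlier sketch):
# opening = observation_sentences(WORLD)  # these count as free true sentences
```

In the earlier sketch, MAX_DEPTH would just be the depth argument handed to the formula generator; the point is only that all of these limits live in a handful of parameters rather than in any coded inference rules.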
In other words, to avoid obvious problems we’d have to build parameters into the simulation to limit its complexity. Someone like Chomsky might say this shows it is BS because the agents wouldn’t really be learning the rules of inference, but that’s not really the goal. The goal is more Brandomian: to see how far we can get without any explicitly coded rules. How logical can you get without thinking explicitly about logic?
Has anyone done something like this?
It would be cool to see the output of such a simulation, i.e., how the kinds of strings the agents produce change as the simulation progresses. Hopefully they wouldn’t end up sitting around repeating the same sentence over and over.