Abstract
The last two decades have seen a great deal of theorising and speculation about
the modular nature of human intelligence, as well as a rise in the use of modular
architectures in artificial intelligence. Nevertheless, whether such models of natural
intelligence are well supported remains a matter of debate. In this paper, I propose
that the most important criterion for modularity is specialised representations. I
present a modular model of primate learning of the transitive inference task, and
propose an extension to this model that would explain task-learning results in
other domains. I also briefly relate this work to both neuroscience and established
AI learning architectures.