Information Compression, Intelligence, Computing, and Mathematics

Abstract

The "SP theory of intelligence", described elsewhere, is based on the idea that much of human intelligence, and much of computing and mathematics, may be understood as compression of information. This article presents evidence for that idea including: advantages of information compression (IC) in terms of biology and engineering; in our use of shorthands and ordinary words in language; in the way we merge successive views of any one thing; in recognition; in binocular vision; in adaptation in eyes and at the level of our conscious awareness; in children's learning of the word structure of language; in the learning of grammatical structure; in resolving the problems of generalisation and "dirty data"; and in perceptual constancies. Much of computing and mathematics may also be seen as IC. An equation can be a powerful means of representing information in a compressed form. The matching and unification of patterns may be seen in both computing and mathematics: in the matching and unification of names; in reducing redundancy in unary numbers; in the workings of Post's Canonical System and the transition function in the Universal Turing Machine; in the way computers retrieve information from memory; in systems like Prolog; and in the query-by-example technique for information retrieval. The chunking-with-codes technique for IC may be seen in the use of named functions to avoid repetition of computer code. The schema-plus-correction technique may be seen in functions with parameters and the use of classes in object-oriented programming. And the run-length coding technique may be seen in multiplication, in division, and in several other devices in mathematics and computing. The SP theory resolves the apparent paradox of "decompression by compression", and its explanatory range strengthens the case for IC as an important principle in human intelligence, in computing and in mathematics.
