The problem with AI consciousness: A neurogenetic case against synthetic sentience

Abstract

Ever since the creation of the first artificial intelligence (AI) systems built on machine learning (ML), the public has entertained the idea that computers could eventually become sentient and develop a consciousness of their own. As these models grow increasingly capable and convincingly anthropomorphic, even some engineers have come to believe that AI might become conscious, a development that would carry serious social consequences. The present paper argues against the plausibility of sentient AI, primarily on the basis of the theory of neurogenetic structuralism, which holds that the physiology of biological neurons and their structural organization into complex brains are necessary prerequisites for true consciousness to emerge.
