Welcome to Fleming AI.

Artificial general intelligence (AGI) has no consensus definition, yet many assume they will recognize it when it appears. In practice, proposed examples are hotly debated and run the gamut from exact simulations of the human brain to infinitely capable systems. It has even been argued whether particular humans are truly generally intelligent. This lack of a consensus definition seriously hampers effective discussion, design, development, and evaluation of generally intelligent systems. We address this by proposing a goal for AGI, rigorously defining one specific class of general intelligence architecture that fulfills this goal and toward which a number of currently active AGI projects appear to be converging, and presenting a simplified view intended to promote new research and facilitate the creation of a safe artificial general intelligence.

Artificial General Intelligence

The goal of creating thinking machines is not a new one. It has been theorized and fantasized about for almost as long as humans have been able to attribute intelligence to the non-living. From Frankenstein’s monster to Alan Turing’s famous “Turing Test,” we have dreamed of entities that can think and reason as we can.


Moravec’s paradox is the observation by artificial intelligence and robotics researchers that, contrary to traditional assumptions, reasoning (which is high-level in humans) requires very little computation, but sensorimotor skills (comparatively low-level in humans) require enormous computational resources. The principle was articulated by Hans Moravec, Rodney Brooks, Marvin Minsky and others in the 1980s. As Moravec writes, “it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility”.

Similarly, Minsky emphasized that the most difficult human skills to reverse engineer are those that are unconscious. “In general, we’re least aware of what our minds do best”, he wrote, and added “we’re more aware of simple processes that don’t work well than of complex ones that work flawlessly”.


How Far Are We From Artificial General Intelligence?

Present-day AI can detect cancers better than human doctors, build better AI algorithms than human developers, and beat world champions at games like chess and Go. Instances like these may lead us to believe that there is not much artificial intelligence cannot do better than humans. The internet abounds …