In the field of
artificial intelligence (AI), tasks that are hypothesized to require
artificial general intelligence to solve are informally known as AI-complete or AI-hard.[1] Calling a problem AI-complete reflects the belief that it cannot be solved by a simple specific algorithm.
Expert systems, which were popular in the 1980s, were able to solve very simple or restricted versions of AI-complete problems, but never in their full generality. When AI researchers attempted to "scale up" their systems to handle more complicated, real-world situations, the programs tended to become excessively brittle without commonsense knowledge or a rudimentary understanding of the situation: they would fail as unexpected circumstances outside their original problem context began to appear. When human beings deal with new situations in the world, they are helped by their awareness of the general context: they know what the things around them are, why they are there, what they are likely to do, and so on. They can recognize unusual situations and adjust accordingly. Expert systems lacked this adaptability and were brittle when facing new situations.[8]
DeepMind published a work in May 2022 in which they trained a single model to do several things at the same time. The model, named Gato, can "play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens."[9] Similarly, some tasks once considered to be AI-complete, like machine translation,[10] are among the capabilities of
large language models.[11]
A formal model under the agent paradigm for selecting AI systems (AIS) is based on utility functions and contextual constraints. It introduces concepts such as utility functions, intellometrics, functional correspondence, and safety conditions. The problem is to find an optimal AIS subject to utility measures and contextual constraints; a solution requires specifying admissible contexts, test contexts, prohibited states, utility functions, and metrics. The model also acknowledges the need to redefine functional correspondence in the absence of a human operator, terming this framework "intellometry," which provides a basis for evaluating and comparing AIS for specific tasks.[20]
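The selection problem described above can be illustrated with a minimal sketch: choose the admissible AI system that maximizes measured utility while satisfying the safety conditions. All names and the data layout here are hypothetical simplifications, not part of the cited formal model.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CandidateAIS:
    """A candidate AI system with its measured utility and safety status."""
    name: str
    utility: float              # utility measured over the test contexts
    violates_prohibited: bool   # True if a prohibited state is reachable

def select_ais(candidates: List[CandidateAIS]) -> CandidateAIS:
    """Return the admissible candidate (safety conditions hold)
    with the highest utility."""
    admissible = [c for c in candidates if not c.violates_prohibited]
    if not admissible:
        raise ValueError("no candidate satisfies the safety conditions")
    return max(admissible, key=lambda c: c.utility)

systems = [
    CandidateAIS("A", utility=0.9, violates_prohibited=True),
    CandidateAIS("B", utility=0.7, violates_prohibited=False),
    CandidateAIS("C", utility=0.6, violates_prohibited=False),
]
best = select_ais(systems)  # "A" is excluded despite its higher utility
```

The point of the sketch is that the optimization is constrained: a high-utility system that can reach a prohibited state is never selected.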
Computational complexity theory deals with the relative computational difficulty of
computable functions. By definition, it does not cover problems whose solution is unknown or has not been characterized formally. Since many AI problems have no formalization yet, conventional complexity theory cannot be used to formally define AI-completeness.
To address this problem, a complexity theory for AI has been proposed.[21] It is based on a
model of computation that splits the computational burden between a computer and a human: one part is solved by the computer and the other by the human. This is formalized by a human-assisted
Turing machine. The formalization defines algorithm complexity, problem complexity and reducibility which in turn allows
equivalence classes to be defined.
The complexity of executing an algorithm with a human-assisted Turing machine is given by a pair ⟨Φ_H, Φ_M⟩, where the first element represents the complexity of the human's part and the second element the complexity of the machine's part.
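A minimal sketch of this split cost model (purely illustrative, not from the source): a task is run partly by a "human oracle" and partly by a machine, and the work of each side is counted separately, yielding the pair described above.

```python
def human_assisted_run(inputs, classify_by_human, process_by_machine):
    """Run a task split between a human oracle and a machine,
    counting each side's steps separately."""
    human_steps = 0
    machine_steps = 0
    results = []
    for x in inputs:
        label = classify_by_human(x)                   # the human's part
        human_steps += 1
        results.append(process_by_machine(x, label))   # the machine's part
        machine_steps += 1
    return results, (human_steps, machine_steps)

# Hypothetical usage: the "human" labels items, the machine files them.
results, cost = human_assisted_run(
    ["img1", "img2", "img3"],
    classify_by_human=lambda x: "cat",
    process_by_machine=lambda x, label: (x, label),
)
# cost is the pair (3, 3): three human steps and three machine steps
```

Here each input costs one human step and one machine step, so the complexity pair grows linearly in both components; a problem whose human component grows faster would be "harder" in the human coordinate.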
Results
The complexity of solving the following problems with a human-assisted Turing machine is:[21]
Roman Yampolskiy[22]
suggests that a problem C is AI-Complete if it has two properties:
It is in the set of AI problems (Human Oracle-solvable).
Any AI problem can be converted into C by some polynomial time algorithm.
On the other hand, a problem H is AI-Hard if and only if there is an AI-Complete problem that is polynomial time Turing-reducible to H. This also gives as a consequence the existence of AI-Easy problems, which are solvable in polynomial time by a deterministic Turing machine with an oracle for some AI problem.
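The reduction in the definition above can be sketched abstractly: an arbitrary AI problem is solved by encoding its instance, querying an oracle for an AI-Complete problem (standing in for the Human Oracle), and decoding the answer, with the encode and decode steps required to run in polynomial time. Every name here is a hypothetical placeholder.

```python
def solve_via_oracle(instance, encode, oracle, decode):
    """Solve a problem by a Turing reduction to an oracle:
    encode the instance, query the oracle, decode the answer.
    For an AI-Hard problem, encode/decode must be polynomial time."""
    query = encode(instance)    # polynomial-time transformation
    answer = oracle(query)      # call to the AI-Complete oracle
    return decode(answer)       # polynomial-time post-processing

# Toy example: the "oracle" simply echoes its input, and the
# encode/decode steps are trivial case changes.
result = solve_via_oracle(
    "what is 2+2?",
    encode=lambda s: s.upper(),
    oracle=lambda q: q,          # stand-in for a Human Oracle
    decode=lambda a: a.lower(),
)
```

The structure mirrors oracle Turing machines from classical complexity theory: the difficulty of the reduced problem is confined to the oracle calls, while the surrounding computation stays polynomial.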
Yampolskiy[23] has also hypothesized that the
Turing Test is a defining feature of AI-completeness.
Groppe and Jain[24] classify problems which require
artificial general intelligence to reach human-level machine performance as AI-complete, while current AI systems can solve only restricted versions of AI-complete problems. For Šekrst,[25] finding a polynomial-time solution to AI-complete problems would not necessarily be equivalent to solving the issue of strong AI; she also emphasizes that the lack of computational complexity research is a limiting factor towards achieving artificial general intelligence.
For Kwee-Bintoro and Velez,[26] solving AI-complete problems would have strong repercussions for society.
^ Shapiro, Stuart C. (1992). Artificial Intelligence. Archived 2016-02-01 at the Wayback Machine. In Stuart C. Shapiro (Ed.), Encyclopedia of Artificial Intelligence (Second Edition, pp. 54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)
^ Mueller, Erik T. (1987, March). Daydreaming and Computation (Technical Report CSD-870017). Archived 2020-10-30 at the Wayback Machine. PhD dissertation, University of California, Los Angeles. ("Daydreaming is but one more AI-complete problem: if we could solve any one artificial intelligence problem, we could solve all the others", p. 302)
^ Kuleshov, A. (2018). "Formalizing AI System Parameters in Standardization of AI". 2018 International Conference on Artificial Intelligence Applications and Innovations (IC-AIAI), Nicosia, Cyprus, pp. 51–54. doi:10.1109/IC-AIAI.2018.8674446.
^ Yampolskiy, Roman (2013), "Turing Test as a Defining Feature of AI-Completeness", Artificial Intelligence, Evolutionary Computing and Metaheuristics, Studies in Computational Intelligence, vol. 427, pp. 3–17, doi:10.1007/978-3-642-29694-9_1, ISBN 978-3-642-29693-2.
^ Groppe, Sven; Jain, Sarika (2024), "The Way Forward with AI-Complete Problems", New Generation Computing, 42: 1–5, doi:10.1007/s00354-024-00251-8.
^ Bintoro, Ted; Velez, Noah (2022), "AI-Complete: What it Means to Be Human in an Increasingly Computerized World", Bridging Human Intelligence and Artificial Intelligence, Educational Communications and Technology: Issues and Innovations, Cham: Springer, pp. 257–274, doi:10.1007/978-3-030-84729-6_18, ISBN 978-3-030-84728-9.