AI effect

From Wikipedia, the free encyclopedia

The AI effect occurs when onlookers discount the behavior of an artificial intelligence program by arguing that it is not "real" intelligence.[1]

Author Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'."[2] Researcher Rodney Brooks complains: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"[3]

Definition

"The AI effect" is that line of thinking, the tendency to redefine AI to mean: "AI is anything that has not been done yet." This is the common public misperception, that as soon as AI successfully solves a problem, that solution method is no longer within the domain of AI. Geist credits John McCarthy giving this phenomenon its name, the "AI effect".[4]

McCorduck calls it an "odd paradox" that "practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the 'failures', the tough nuts that couldn't yet be cracked."[5]

Larry Tesler is quoted as stating that "AI is whatever hasn't been done yet." Douglas Hofstadter quotes this, calling it Tesler's theorem,[6] as do many other commentators,[7] but Tesler says he was misquoted; his actual statement was "Intelligence is whatever machines haven't done yet."[8]

When problems have not yet been formalised, they can still be characterised by a model of computation that includes human computation. The computational burden of a problem is split between a computer and a human: one part is solved by the computer and the other part by a human. This formalisation is referred to as a human-assisted Turing machine.[9]
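Very loosely, this division of labour can be pictured as a program that delegates the subproblems it cannot yet solve mechanically to a human "oracle". The Python sketch below is purely illustrative: the labelling task and every name in it are invented here, and Shahaf and Amir's paper defines a formal machine model, not code.

```python
# A loose illustration of a human-assisted Turing machine: the computer
# performs the formalised part of the computation and delegates the
# unformalised remainder to a human "oracle". The task and all names are
# hypothetical; they do not come from Shahaf and Amir's formalisation.

def human_oracle(query: str) -> str:
    # Stand-in for the human share of the computation, e.g. a
    # crowdsourcing task or an interactive prompt.
    return input(f"[human] {query} ")

def label_messages(messages: list[str]) -> dict[str, str]:
    """Split the computational burden: decide the machine-decidable
    cases mechanically and route the rest to the human oracle."""
    labels = {}
    for msg in messages:
        if not msg.strip():
            labels[msg] = "empty"  # decided by the computer alone
        else:
            labels[msg] = human_oracle(f"Label this message: {msg!r}")
    return labels

if __name__ == "__main__":
    print(label_messages(["", "Win $$$ now!!!"]))
```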

AI applications become mainstream

Software and algorithms developed by AI researchers are now integrated into many applications throughout the world without being called AI. This underappreciation has been noted in fields as diverse as computer chess,[10] marketing,[11] agricultural automation[7] and hospitality.[12]

Michael Swaine reports "AI advances are not trumpeted as artificial intelligence so much these days, but are often seen as advances in some other field". "AI has become more important as it has become less conspicuous", Patrick Winston says. "These days, it is hard to find a big system that does not work, in part, because of ideas developed or matured in the AI world."[13]

According to Stottler Henke, "The great practical benefits of AI applications and even the existence of AI in many software products go largely unnoticed by many despite the already widespread use of AI techniques in software. This is the AI effect. Many marketing people don't use the term 'artificial intelligence' even when their company's products rely on some AI techniques. Why not?"[11]

Marvin Minsky writes "This paradox resulted from the fact that whenever an AI research project made a useful new discovery, that product usually quickly spun off to form a new scientific or commercial specialty with its own distinctive name. These changes in name led outsiders to ask, Why do we see so little progress in the central field of artificial intelligence?"[14]

Nick Bostrom observes that "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labelled AI anymore."[15]

The AI effect on decision-making in supply chain risk management is a severely understudied area.[16]

To avoid the AI effect problem, the editors of a special issue of IEEE Software on AI and software engineering recommend not overselling – not hyping – the achievable results in the first place.[17]

The Bulletin of the Atomic Scientists views the AI effect as a worldwide strategic military threat.[4] As they point out, it obscures the fact that applications of AI had already found their way into both the US and Soviet militaries during the Cold War.[4] Both sides developed AI tools to advise humans on weapons deployment, though these saw very limited use at the time.[4] They believe this constantly shifting failure to recognise AI continues to undermine human recognition of security threats today.[4]

For example, AI helps YouTube review uploaded videos, replacing employees who were previously responsible for reviewing them. One study estimated that about 47% of US jobs are at high risk of automation within the next 20 years.[18]

Legacy of the AI winter

Many AI researchers find that they can get more funding and sell more software if they avoid the bad name of "artificial intelligence" and instead pretend their work has nothing to do with intelligence at all. This was especially true in the early 1990s, during the second "AI winter".

Patty Tascarella writes: "Some believe the word 'robotics' actually carries a stigma that hurts a company's chances at funding."[19]

Saving a place for humanity at the top of the chain of being

Michael Kearns suggests that "people subconsciously are trying to preserve for themselves some special role in the universe".[20] By discounting artificial intelligence, people can continue to feel unique and special. Kearns argues that the change in perception known as the AI effect can be traced to the mystery being removed from the system: being able to trace the cause of events implies that it is a form of automation rather than intelligence.

A related effect has been noted in the history of animal cognition and in consciousness studies: every time a capacity formerly thought to be uniquely human is discovered in animals (e.g. the ability to make tools, or passing the mirror test), the overall importance of that capacity is deprecated.[citation needed]

Herbert A. Simon, when asked about the lack of press coverage of AI at the time, said, "What made AI different was that the very idea of it arouses a real fear and hostility in some human breasts. So you are getting very strong emotional reactions. But that's okay. We'll live with that."[21]

Mueller (1987) proposed comparing AI to human intelligence, coining the standard of Human-Level Machine Intelligence.[22] This standard nonetheless suffers from the AI effect when different humans are used as the benchmark.[22]

Deep Blue

[Image: Game 6 – Deep Blue defeats Kasparov]

When IBM's chess-playing computer Deep Blue succeeded in defeating Garry Kasparov in 1997, people complained that it had only used "brute force methods" and that it wasn't real intelligence.[10] Public perception of chess playing shifted from a difficult mental task to a routine operation.[23] Fred A. Reed writes:

"A problem that proponents of AI regularly face is this: When we know how a machine does something 'intelligent,' it ceases to be regarded as intelligent. If I beat the world's chess champion, I'd be regarded as highly bright."[24]

John McCarthy, by contrast, was disappointed by Deep Blue.[25] He argued that it was merely a brute-force machine and did not have any deep understanding of the game.[25] This is not to say that McCarthy dismissed AI generally;[25] he was one of the founders of the field and coined the term "artificial intelligence".[25] McCarthy lamented how widespread the AI effect is,

As soon as it works, no one calls it AI anymore[25][26]: 12 

but merely did not feel that Deep Blue was a good example.[25]
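The "brute force" complaint refers to exhaustive game-tree search. As a rough illustration of what critics had in mind, the sketch below implements plain minimax over an abstract game; this is a generic textbook algorithm, not Deep Blue's actual method, which combined massively parallel alpha-beta search with chess-specific hardware and hand-tuned evaluation.

```python
# A minimal sketch of brute-force game-tree search (plain minimax).
# `moves`, `apply_move`, and `evaluate` are game-specific callbacks;
# they are placeholders here, not anything from IBM's system.

def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Exhaustively search `depth` plies ahead and return the best
    achievable score for the player to move."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)  # static evaluation at the search horizon
    children = (minimax(apply_move(state, m), depth - 1, not maximizing,
                        moves, apply_move, evaluate)
                for m in legal)
    return max(children) if maximizing else min(children)
```

The objection, in effect, is that nothing in such a procedure "understands" the game: whatever intelligence appears comes from the evaluation function and the sheer number of positions examined.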

The future

Experts agree that the AI effect will certainly[27][28] – or probably[29] – continue. Because technological development is a continual and unending process,[30] the AI effect will likewise continue without end:[27] each advance in AI will produce another objection and another redefinition of public expectations, which will keep expanding.[27] While not addressing the AI effect directly, some writers have speculated[30] that the indefinite continuation of this phenomenon may be driven by artificial intelligence itself,[30] as Moore's law is.[citation needed]

The AI effect may grow to include dismissal of all specialised artificial intelligences.[29] Public perception of "artificial intelligence" may instead shift to include only systems that are networks or collectives of multiple specialised AIs.[29]

Notes

  1. ^ Haenlein, Michael; Kaplan, Andreas (2019). "A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence". California Management Review. 61 (4): 5–14. doi:10.1177/0008125619864925. S2CID 199866730.
  2. ^ McCorduck 2004, p. 204
  3. ^ Kahn, Jennifer (March 2002). "It's Alive". Wired. Vol. 10, no. 3. Retrieved 24 Aug 2008.
  4. ^ a b c d e Geist, Edward (2016). "It's already too late to stop the AI arms race—We must manage it instead". Bulletin of the Atomic Scientists. 72 (5: The psychology of doom). Taylor & Francis: 318–321. S2CID 151967826.
  5. ^ McCorduck 2004, p. 423
  6. ^ As quoted by Hofstadter (1980, p. 601)
  7. ^ a b Bhatnagar, Roheet; Tripathi, Kumar; Bhatnagar, Nitu; Panda, Chandan (2022). The Digital Agricultural Revolution: Innovations and Challenges in Agriculture Through Technology Disruptions. Hoboken, NJ, US: Scrivener Publishing LLC (John Wiley & Sons, Inc.). pp. 143–170. doi:10.1002/9781119823469. ISBN 978-1-119-82346-9. OCLC 1314054445. ISBN 978-1-119-82333-9.
  8. ^ "Adages and Coinages". www.nomodes.com.
  9. ^ Dafna Shahaf and Eyal Amir (2007) Towards a theory of AI completeness. Commonsense 2007, 8th International Symposium on Logical Formalizations of Commonsense Reasoning.
  10. ^ a b McCorduck 2004, p. 433
  11. ^ a b Stottler Henke. "AI Glossary".
  12. ^ Xiang, Zheng; Fuchs, Matthias; Gretzel, Ulrike; Höpken, Wolfram, eds. (2020). Handbook of e-Tourism (PDF). Cham, Switzerland: Springer International Publishing. p. 1945. doi:10.1007/978-3-030-05324-6. ISBN 978-3-030-05324-6. S2CID 242136095.
  13. ^ Swaine, Michael (September 5, 2007). "AI - It's OK Again! Is AI on the rise again?". Dr. Dobb's Journal.
  14. ^ Marvin Minsky. "The Age of Intelligent Machines: Thoughts About Artificial Intelligence". Archived from the original on 2009-06-28.
  15. ^ Quoted in "AI set to exceed human brain power". CNN.com. July 26, 2006.
  16. ^ Nayal, Kirti; Raut, Rakesh; Priyadarshinee, Pragati; Narkhede, Balkrishna Eknath; Kazancoglu, Yigit; Narwane, Vaibhav (2021). "Exploring the role of artificial intelligence in managing agricultural supply chain risk to counter the impacts of the COVID-19 pandemic". The International Journal of Logistics Management. 33 (3): 744–772. doi:10.1108/IJLM-12-2020-0493. S2CID 237807857.
  17. ^ Carleton, Anita; Harper, Erin; Menzies, Tim; Xie, Tao; Eldh, Sigrid; Lyu, Michael (2020). "The AI Effect: Working at the Intersection of AI and SE". IEEE Software. 37 (4). Institute of Electrical and Electronics Engineers (IEEE): 26–35. doi:10.1109/ms.2020.2987666. ISSN 0740-7459. S2CID 220325485.
  18. ^ Frey, Carl Benedikt; Osborne, Michael A. (January 2017). "The future of employment: How susceptible are jobs to computerisation?". Technological Forecasting and Social Change. 114: 254–280. doi:10.1016/j.techfore.2016.08.019.
  19. ^ Patty Tascarella (August 11, 2006). "Robotics firms find fundraising struggle, with venture capital shy". Pittsburgh Business Times.
  20. ^ Flam, Faye (January 15, 2004). "A new robot makes a leap in brainpower". Philadelphia Inquirer. Available from Philly.com.
  21. ^ Hann, Reuben L. (1998). "A Conversation with Herbert Simon". Gateway. IX (2): 12–13. Archived from the original on February 25, 2015. (Gateway is published by the Crew System Ergonomics Information Analysis Center, Wright-Patterson AFB.)
  22. ^ a b Hernandez, Jose (2020). AI evaluation: On broken yardsticks and measurement scales. Workshop on Evaluating Evaluation of AI Systems, AAAI Conference on Artificial Intelligence. AAAI (Association for the Advancement of Artificial Intelligence). S2CID 228718653.
  23. ^ Stone, Peter; Brooks, Rodney; Brynjolfsson, Erik; Calo, Ryan; Etzioni, Oren; Hager, Greg; Hirschberg, Julia; Kalyanakrishnan, Shivaram; Kamar, Ece; Kraus, Sarit; Leyton-Brown, Kevin; Parkes, David; Press, William; Saxenian, AnnaLee; Shah, Julie; Tambe, Milind; Teller, Astro. "The term AI has a clear meaning". "Artificial Intelligence and Life in 2030." One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel. Stanford, CA: Stanford University. Retrieved September 6, 2016.
  24. ^ Reed, Fred (2006-04-14). "Promise of AI not so bright". The Washington Times.
  25. ^ a b c d e f Vardi, Moshe (2012). "Artificial intelligence: past and future". Communications of the ACM. 55 (1): 5. doi:10.1145/2063176.2063177. S2CID 21144816.
  26. ^ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies (1 ed.). Oxford University Press (OUP). ISBN 978-0-19-967811-2. LCCN 2013955152.
  27. ^ a b c Stone, Peter; Brooks, Rodney; Brynjolfsson, Erik; Calo, Ryan; Etzioni, Oren; Hager, Greg; Hirschberg, Julia; Kalyanakrishnan, Shivaram; Kamar, Ece; Kraus, Sarit; Leyton-Brown, Kevin; Parkes, David; Press, William; Saxenian, AnnaLee; Shah, Julie; Tambe, Milind; Teller, Astro. "Defining AI". "Artificial Intelligence and Life in 2030." One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel. Stanford, CA: Stanford University. Retrieved September 6, 2016.
  28. ^ Press, Gil (2022). "The Trouble With AI: Human Intelligence". Forbes Magazine.
  29. ^ a b c Bjola, Corneliu (2022). "AI for development: implications for theory and practice". Oxford Development Studies. 50 (1). Routledge: 78–90. doi:10.1080/13600818.2021.1960960. S2CID 238851395.
  30. ^ a b c Asimov, Isaac (Nov 1956). "The Last Question". Science Fiction Quarterly. Vol. 4, no. 5. pp. 7–15.

References

Hofstadter, Douglas R. (1980). Gödel, Escher, Bach: an Eternal Golden Braid. New York: Vintage Books. ISBN 0-394-74502-7.

McCorduck, Pamela (2004). Machines Who Think (2nd ed.). Natick, MA: A. K. Peters, Ltd. ISBN 1-56881-205-1.

A bachelor's thesis, but cited by A. Poggi; G. Rimassa; P. Turci (October 2002). "What Agent Middleware Can (And Should) Do For You". Applied Artificial Intelligence. 16 (9–10): 677–698. doi:10.1080/08839510290030444. ISSN 0883-9514. Wikidata Q58188053.