Glossary of artificial intelligence: Difference between revisions

{{defn|A [[blueprint]] for [[software agent]]s and {{gli|intelligent control}} systems, depicting the arrangement of components. The architectures implemented by {{gli|intelligent agent|intelligent agents}} are referred to as [[cognitive architecture]]s.<ref>[https://hri.cogs.indiana.edu/publications/aaai04ws.pdf Comparison of Agent Architectures] {{webarchive |url=https://web.archive.org/web/20080827222057/https://hri.cogs.indiana.edu/publications/aaai04ws.pdf |date=August 27, 2008 }}</ref>}}
 
{{term|[[AI accelerator (computer hardware)|AI accelerator]]}}
{{defn|A class of [[microprocessor]]<ref>{{Cite web |url=https://v3.co.uk/v3-uk/news/3014293/intel-unveils-movidius-compute-stick-usb-ai-accelerator |title=Intel unveils Movidius Compute Stick USB AI Accelerator |date=2017-07-21 |url-status=dead |archive-url=https://web.archive.org/web/20170811193632/https://v3.co.uk/v3-uk/news/3014293/intel-unveils-movidius-compute-stick-usb-ai-accelerator |archive-date=11 August 2017 |access-date=28 November 2018}}</ref> or computer system<ref>{{Cite web |url=https://insidehpc.com/2017/06/inspurs-unveils-gx4-ai-accelerator/ |title=Inspurs unveils GX4 AI Accelerator |date=2017-06-21}}</ref> designed as [[hardware acceleration]] for {{gli|artificial intelligence}} applications, especially {{gli|artificial neural network|artificial neural networks}}, {{gli|machine vision}}, and {{gli|machine learning}}.}}
 
 
{{term|[[answer set programming]] (ASP)}}
{{defn|A form of [[declarative programming]] oriented towards difficult (primarily [[NP-hard]]) [[search algorithm|search problem]]s. It is based on the [[stable model semantics|stable model]] (answer set) semantics of [[logic programming]]. In ASP, search problems are reduced to computing stable models, and ''answer set solvers''—programs for generating stable models—are used to perform search.}}
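The stable-model semantics described above can be illustrated with a brute-force checker in Python (a teaching sketch, not a real answer set solver; the three-rule program is a hypothetical example). Each rule is `(head, positive_body, negative_body)`; a candidate set is stable when it equals the minimal model of its Gelfond–Lifschitz reduct:

```python
import itertools

# A hypothetical 3-rule program:  p :- not q.   q :- not p.   r :- p.
rules = [
    ("p", [], ["q"]),
    ("q", [], ["p"]),
    ("r", ["p"], []),
]
atoms = {"p", "q", "r"}

def minimal_model(positive_rules):
    """Least fixpoint of a negation-free program."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos, _ in positive_rules:
            if head not in model and all(a in model for a in pos):
                model.add(head)
                changed = True
    return model

def is_stable(candidate):
    # Gelfond-Lifschitz reduct: drop rules whose negative body intersects
    # the candidate, then delete the negative literals from the rest.
    reduct = [(h, pos, []) for h, pos, neg in rules
              if not any(a in candidate for a in neg)]
    return minimal_model(reduct) == candidate

stable = [set(c) for n in range(len(atoms) + 1)
          for c in itertools.combinations(sorted(atoms), n)
          if is_stable(set(c))]
print(stable)  # two answer sets: {'q'} and {'p', 'r'}
```

Real ASP systems ground and solve far more efficiently; this merely makes the reduct definition concrete.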
 
{{term|[[anytime algorithm]]}}
 
{{term|[[application programming interface]] (API)}}
{{defn|A set of subroutine definitions, [[communication protocol]]s, and tools for building software. In general terms, it is a set of clearly defined methods of communication among various components. A good API makes it easier to develop a [[computer program]] by providing all the building blocks, which are then put together by the [[programmer]]. An API may be for a web-based system, [[operating system]], [[database system]], computer hardware, or [[Library (computing)|software library]].}}
 
{{term|[[approximate string matching]]}}
 
{{term|[[artificial immune system]] (AIS)}}
{{defn|A class of computationally intelligent, [[rule-based machine learning]] systems inspired by the principles and processes of the vertebrate [[immune system]]. The algorithms are typically modeled after the immune system's characteristics of [[learning]] and [[memory]] for use in [[Problem solving|problem-solving]].}}
 
{{anchor|artificial intelligence}}{{term|[[artificial intelligence]] (AI)}}
{{anchor|augmented reality}}{{term|[[augmented reality]] (AR)}}
{{Main|Augmented reality}}
{{defn|An interactive experience of a real-world environment where the objects that reside in the real world are "augmented" by computer-generated perceptual information, sometimes across multiple sensory modalities, including [[visual]], [[Hearing|auditory]], [[haptic perception|haptic]], [[Somatosensory system|somatosensory]], and [[olfactory]].<ref name="Cipresso Giglioli Raya Riva 2011 p. ">{{cite journal | last1=Cipresso | first1=Pietro | last2=Giglioli | first2=Irene Alice Chicchi | last3=Raya | first3=Mariano Alcañiz | last4=Riva | first4=Giuseppe | title=The Past, Present, and Future of Virtual and Augmented Reality Research: A Network and Cluster Analysis of the Literature | journal=Frontiers in Psychology | volume=9 | date=2018-11-06 | pmid=30459681 | doi=10.3389/fpsyg.2018.02086 | page=2086| pmc=6232426 | doi-access=free }}</ref>}}
 
{{term|[[automata theory]]}}
{{defn|The study of [[abstract machine]]s and [[automaton|automata]], as well as the [[computational problem]]s that can be solved using them. It is a theory in [[theoretical computer science]] and [[discrete mathematics]] (a subject of study in both [[mathematics]] and [[computer science]]).}}
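One of the simplest abstract machines, a deterministic finite automaton (DFA), can be simulated directly. A minimal sketch in Python (the even-number-of-1s machine is a standard textbook example, not from this glossary):

```python
# A DFA is a transition table, a start state, and a set of accepting states.
# This hypothetical machine accepts binary strings with an even number of 1s.
def run_dfa(transitions, start, accepting, word):
    state = start
    for symbol in word:
        state = transitions[(state, symbol)]  # deterministic step
    return state in accepting

even_ones = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}
print(run_dfa(even_ones, "even", {"even"}, "10110"))  # False (three 1s)
print(run_dfa(even_ones, "even", {"even"}, "1001"))   # True (two 1s)
```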
 
{{term|[[automated machine learning]] (AutoML)}}
{{term|[[automated planning and scheduling]]}}
{{ghat|Also simply '''AI planning'''.}}
{{defn|A branch of {{gli|artificial intelligence}} that concerns the realization of [[strategy|strategies]] or action sequences, typically for execution by {{gli|intelligent agent|intelligent agents}}, [[autonomous robot]]s and [[unmanned vehicle]]s. Unlike classical [[control system|control]] and [[Statistical classification|classification]] problems, the solutions are complex and must be discovered and optimized in multidimensional space. Planning is also related to [[decision theory]].<ref>{{Citation |last1=Ghallab |first1=Malik |title=Automated Planning: Theory and Practice |url=https://laas.fr/planning/ |year=2004 |publisher=[[Morgan Kaufmann]] |isbn=978-1-55860-856-6 |last2=Nau |first2=Dana S. |last3=Traverso |first3=Paolo}}</ref>}}
 
{{term|[[automated reasoning]]}}
 
{{term|[[autonomous robot]]}}
{{defn|A [[robot]] that performs [[Behavior-based robotics|behavior]]s or tasks with a high degree of [[autonomy]]. Autonomous robotics is usually considered to be a subfield of {{gli|artificial intelligence}}, [[robotics]], and [[Information engineering (field)|information engineering]].<ref>{{Cite web |url=https://robots.ox.ac.uk/ |title=Information Engineering Main/Home Page |publisher=University of Oxford |access-date=2018-10-03 |archive-date=3 July 2022 |archive-url=https://web.archive.org/web/20220703164507/http://[email protected]/ |url-status=dead }}</ref>}}
{{glossaryend}}
 
 
{{term|[[Big O notation]]}}
{{defn|A mathematical notation that describes the [[asymptotic analysis|limiting behavior]] of a [[function (mathematics)|function]] when the [[Argument of a function|argument]] tends towards a particular value or infinity. It is a member of a family of notations invented by [[Paul Gustav Heinrich Bachmann|Paul Bachmann]],<ref name="Bachmann">{{Cite book |last1=Bachmann |first1=Paul |url=https://archive.org/stream/dieanalytischeza00bachuoft#page/402/mode/2up |title=Analytische Zahlentheorie |date=1894 |publisher=Teubner |volume=2 |location=Leipzig |language=de |trans-title=Analytic Number Theory |author-link=Paul Bachmann}}</ref> [[Edmund Landau]],<ref name="Landau">{{Cite book |last1=Landau |first1=Edmund |url=https://archive.org/details/handbuchderlehre01landuoft |title=Handbuch der Lehre von der Verteilung der Primzahlen |date=1909 |publisher=B. G. Teubner |location=Leipzig |page=883 |language=de |trans-title=Handbook on the theory of the distribution of the primes |author-link=Edmund Landau}}</ref> and others, collectively called Bachmann–Landau notation or asymptotic notation.}}
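The limiting behavior can be stated precisely; for real-valued functions as the argument tends to infinity:

```latex
f(x) = O\bigl(g(x)\bigr) \text{ as } x \to \infty
\quad\iff\quad
\exists\, M > 0,\ x_0 \text{ such that } |f(x)| \le M\,|g(x)| \text{ for all } x \ge x_0 .
```

For example, $3n^2 + 7n = O(n^2)$, since $3n^2 + 7n \le 10\,n^2$ for all $n \ge 1$.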
 
{{term|[[binary tree]]}}
 
{{term|[[blackboard system]]}}
{{defn|An {{gli|artificial intelligence}} approach based on the [[blackboard design pattern|blackboard architectural model]],<ref>{{Cite journal |last1=Erman |first1=L. D. |last2=Hayes-Roth |first2=F. |last3=Lesser |first3=V. R. |last4=Reddy |first4=D. R. |year=1980 |title=The Hearsay-II Speech-Understanding System: Integrating Knowledge to Resolve Uncertainty |journal=ACM Computing Surveys |volume=12 |issue=2 |pages=213 |doi=10.1145/356810.356816|s2cid=118556 }} <!-- Erman, Hayes-Roth, & Reddy (1980). "The Hearsay-II Speech-Understanding System: Integrating Knowledge to Resolve Uncertainty" --></ref><ref>{{Cite journal |last1=Corkill |first1=Daniel D. |date=September 1991 |title=Blackboard Systems |url=https://bbtech.com/papers/ai-expert.pdf |journal=AI Expert |volume=6 |issue=9 |pages=40&ndash;47 |access-date=5 July 2022 |archive-date=16 April 2012 |archive-url=https://web.archive.org/web/20120416034609/http://www.bbtech.com/papers/ai-expert.pdf |url-status=dead }}</ref><ref>* {{Cite tech report |first=H. Yenny |last=Nii |title=Blackboard Systems |number=STAN-CS-86-1123 |institution=Department of Computer Science, Stanford University |year=1986 |url=https://i.stanford.edu/pub/cstr/reports/cs/tr/86/1123/CS-TR-86-1123.pdf |access-date=2013-04-12}}</ref><ref>{{Cite journal |last1=Hayes-Roth |first1=B.|author1-link=Barbara Hayes-Roth |year=1985 |title=A blackboard architecture for control |journal=Artificial Intelligence |volume=26 |issue=3 |pages=251–321 |doi=10.1016/0004-3702(85)90063-3}} <!-- Hayes-Roth (1985). "A blackboard architecture for control" --></ref> where a common knowledge base, the "blackboard", is iteratively updated by a diverse group of specialist knowledge sources, starting with a problem specification and ending with a solution. Each knowledge source updates the blackboard with a partial solution when its internal constraints match the blackboard state. In this way, the specialists work together to solve the problem.}}
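The iterative update loop described above can be sketched in a few lines of Python (the two knowledge sources and the word-counting "problem" are hypothetical illustrations, not from any real blackboard system):

```python
# Each knowledge source fires only when its precondition matches the
# blackboard state, posting a partial solution for the others to build on.
def ks_tokenize(bb):
    if "text" in bb and "tokens" not in bb:
        bb["tokens"] = bb["text"].split()

def ks_count(bb):
    if "tokens" in bb and "word_count" not in bb:
        bb["word_count"] = len(bb["tokens"])

blackboard = {"text": "the quick brown fox"}   # the problem specification
sources = [ks_count, ks_tokenize]              # activation order is not fixed

progress = True
while progress:                                # iterate until quiescence
    before = dict(blackboard)
    for ks in sources:
        ks(blackboard)
    progress = blackboard != before

print(blackboard["word_count"])  # 4
```

Note that `ks_count` is listed first yet cannot fire until `ks_tokenize` has posted its result, which is exactly the opportunistic control the architecture is known for.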
 
{{term|[[Boltzmann machine]]}}
{{term|[[cluster analysis]]}}
{{ghat|Also '''clustering'''.}}
{{defn|The task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense) to each other than to those in other groups (clusters). It is a main task of exploratory [[data mining]], and a common technique for [[statistics|statistical]] [[data analysis]], used in many fields, including {{gli|machine learning}}, [[pattern recognition]], [[image analysis]], [[information retrieval]], [[bioinformatics]], [[data compression]], and [[computer graphics]].}}
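One widely used clustering algorithm, k-means, can be sketched with only the standard library (the four 2-D points are a hypothetical example):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: assign points to nearest center, recompute means."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)            # initialize from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # nearest center by squared Euclidean distance
            i = min(range(k), key=lambda i: sum((a - b) ** 2
                                                for a, b in zip(p, centers[i])))
            clusters[i].append(p)
        for i, c in enumerate(clusters):
            if c:                              # move center to cluster mean
                centers[i] = tuple(sum(xs) / len(c) for xs in zip(*c))
    return centers, clusters

pts = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
centers, clusters = kmeans(pts, k=2)
print(clusters)  # the two tight pairs end up in separate clusters
```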
 
{{term|[[Cobweb (clustering)|Cobweb]]}}
 
{{term|[[cognitive computing]]}}
{{defn|In general, the term cognitive computing has been used to refer to new hardware and/or software that [[Neuromorphic computing|mimics the functioning]] of the [[human brain]]<ref>{{Cite web |url=https://labs.hpe.com/research/next-next/brain/ |title=Hewlett Packard Labs |access-date=5 July 2022 |archive-date=30 October 2016 |archive-url=https://web.archive.org/web/20161030143900/http://www.labs.hpe.com/research/next-next/brain/ |url-status=dead }}</ref><ref>Terdiman, Daniel (2014). ''[https://cnet.com/news/ibms-truenorth-processor-mimics-the-human-brain/ IBM's TrueNorth processor mimics the human brain]'' CNET</ref><ref>Knight, Shawn (2011). ''[https://techspot.com/news/45138-ibm-unveils-cognitive-computing-chips-that-mimic-human-brain.html IBM unveils cognitive computing chips that mimic human brain]'' TechSpot: August 18, 2011, 12:00 PM</ref><ref>Hamill, Jasper (2013). ''[https://theregister.co.uk/2013/08/08/ibm_unveils_computer_architecture_based_upon_your_brain/ Cognitive computing: IBM unveils software for its brain-like SyNAPSE chips]'' The Register: August 8, 2013</ref><ref name="Denning">{{Cite journal |last1=Denning |first1=P.J. |year=2014 |title=Surfing Toward the Future |journal=Communications of the ACM |volume=57 |issue=3 |pages=26–29 |doi=10.1145/2566967|s2cid=20681733 }}</ref><ref>{{Cite thesis |last1=Ludwig |first1=Lars |year=2013 |title=Extended Artificial Memory: Toward an integral cognitive theory of memory and technology |url=https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3662 |format=pdf |publisher=Technical University of Kaiserslautern |access-date=2017-02-07}}</ref> and helps to improve human decision-making.<ref>{{Cite web |url=https://hpl.hp.com/research/ |title=Research at HP Labs |access-date=5 July 2022 |archive-date=7 March 2022 |archive-url=https://web.archive.org/web/20220307214522/https://www.hpl.hp.com/research/ |url-status=dead }}</ref><ref>{{Cite web |url=https://thesiliconreview.com/magazines/automate-complex-workflows-using-tactical-cognitive-computing-coseer/ |title=Automate Complex Workflows Using Tactical Cognitive Computing: Coseer |website=thesiliconreview.com |access-date=2017-07-31}}</ref> In this sense, CC is a new type of computing with the goal of more accurate models of how the human brain/[[mind]] senses, [[Reasoning|reason]]s, and responds to stimuli.}}
 
{{term|[[cognitive science]]}}
 
{{term|[[committee machine]]}}
{{defn|A type of {{gli|artificial neural network}} using a [[Divide and rule|divide and conquer]] strategy in which the responses of multiple neural networks (experts) are combined into a single response.<ref>Haykin, S. ''Neural Networks: A Comprehensive Foundation''. 2nd ed. Pearson Prentice Hall, 1999.</ref> The combined response of the committee machine is supposed to be superior to those of its constituent experts. Compare [[ensembles of classifiers]].}}
 
{{term|[[commonsense knowledge (artificial intelligence)|commonsense knowledge]]}}
{{defn|In {{gli|artificial intelligence}} research, commonsense knowledge consists of facts about the everyday world, such as "Lemons are sour", that all humans are expected to know. The first AI program to address common sense knowledge was [[Advice Taker]] in 1959 by John McCarthy.<ref>{{Cite web |url=https://www-formal.stanford.edu/jmc/mcc59/mcc59.html |title=PROGRAMS WITH COMMON SENSE |website=www-formal.stanford.edu |access-date=2018-04-11}}</ref>}}
 
{{term|[[computational number theory]]}}
{{ghat|Also '''algorithmic number theory'''.}}
{{defn|The study of {{gli|algorithm|algorithms}} for performing [[number theory|number theoretic]] [[computation]]s.}}
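Two staple number-theoretic algorithms can be sketched briefly in Python, the Euclidean algorithm and the Sieve of Eratosthenes (both classic textbook routines, chosen here only as illustrations):

```python
def gcd(a, b):
    """Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

def primes_up_to(n):
    """Sieve of Eratosthenes: cross off multiples of each prime."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

print(gcd(252, 105))      # 21
print(primes_up_to(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```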
 
{{term|[[computational problem]]}}
 
{{anchor|computer-automated design}}{{term|[[computer-automated design]] (CAutoD)}}
{{defn|Design automation usually refers to [[electronic design automation]], or [[Design Automation]], which is a [[Product Configurator]]. Extending [[Computer-Aided Design]] (CAD), automated design and computer-automated design<ref name="IBM">{{Cite journal |last1=Kamentsky |first1=L.A. |last2=Liu |first2=C.-N. |year=1963 |title=Computer-Automated Design of Multifont Print Recognition Logic |url=https://domino.research.ibm.com/tchjr/journalindex.nsf/0/a5cb0910ea78194885256bfa00683e5a?OpenDocument |journal=IBM Journal of Research and Development |volume=7 |issue=1 |page=2 |doi=10.1147/rd.71.0002 |access-date=5 July 2022 |archive-date=3 March 2016 |archive-url=https://web.archive.org/web/20160303202147/http://domino.research.ibm.com/tchjr/journalindex.nsf/0/a5cb0910ea78194885256bfa00683e5a?OpenDocument |url-status=dead }}</ref><ref>{{Cite journal |last1=Brncick |first1=M |year=2000 |title=Computer automated design and computer automated manufacture |journal=Phys Med Rehabil Clin N Am |volume=11 |issue=3 |pages=701–13 |doi=10.1016/s1047-9651(18)30806-4 |pmid=10989487}}</ref><ref>{{Cite journal |last1=Li |first1=Y. 
|display-authors=etal |year=2004 |title=CAutoCSD - Evolutionary search and optimisation enabled computer automated control system design |url=https://link.springer.com/article/10.1007%2Fs11633-004-0076-8 |journal=International Journal of Automation and Computing |volume=1 |issue=1 |pages=76–88 |doi=10.1007/s11633-004-0076-8|s2cid=55417415 }}</ref> are concerned with a broader range of applications, such as [[automotive engineering]], [[civil engineering]],<ref>{{Cite journal |last1=Kramer |first1=GJE |last2=Grierson |first2=DE |title=Computer automated design of structures under dynamic loads |journal=Computers & Structures |year=1989 |volume=32 |issue=2 |pages=313–325 |doi=10.1016/0045-7949(89)90043-6}}</ref><ref>{{Cite journal |last1=Moharrami |first1=H |last2=Grierson |first2=DE |title=Computer-Automated Design of Reinforced Concrete Frameworks |journal=Journal of Structural Engineering |year=1993 |volume=119 |issue=7 |pages=2036–2058 |doi=10.1061/(asce)0733-9445(1993)119:7(2036)}}</ref><ref>{{Cite journal |last1=Xu |first1=L |last2=Grierson |first2=DE |title=Computer-Automated Design of Semirigid Steel Frameworks |journal=Journal of Structural Engineering |year=1993 |volume=119 |issue=6 |pages=1740–1760 |doi=10.1061/(asce)0733-9445(1993)119:6(1740)}}</ref><ref>Barsan, GM; Dinsoreanu, M, (1997). 
Computer-automated design based on structural performance criteria, Mouchel Centenary Conference on Innovation in Civil and Structural Engineering, Aug 19-21, Cambridge England, Innovation in Civil and Structural Engineering, 167-172</ref> [[composite material]] design, [[control engineering]],<ref>{{Cite journal |last1=Li |first1=Yun |year=1996 |title=Genetic algorithm automated approach to the design of sliding mode control systems |journal=International Journal of Control |volume=63 |issue=4 |pages=721–739 |doi=10.1080/00207179608921865}}</ref> dynamic [[system identification]] and optimization,<ref>{{Cite journal |last1=Li |first1=Yun |last2=Chwee Kim |first2=Ng |last3=Chen Kay |first3=Tan |year=1995 |title=Automation of Linear and Nonlinear Control Systems Design by Evolutionary Computation |url=https://sciencedirect.com/science/article/pii/S1474667017451585/pdf?md5=b7aedf998282848dfcf44a1ea2f003dd&pid=1-s2.0-S1474667017451585-main.pdf |journal=IFAC Proceedings Volumes |volume=28 |issue=16 |pages=85–90 |doi=10.1016/S1474-6670(17)45158-5}}</ref> [[financial]] systems, industrial equipment, {{gli|mechatronics|mechatronic}} systems, [[steel construction]],<ref>Barsan, GM, (1995) Computer-automated design of semirigid steel frameworks according to EUROCODE-3, Nordic Steel Construction Conference 95, JUN 19-21, 787-794</ref> structural [[optimization (mathematics)|optimisation]],<ref>{{Cite journal |last1=Gray |first1=Gary J. |last2=Murray-Smith |first2=David J. |last3=Li |first3=Yun |display-authors=etal |year=1998 |title=Nonlinear model structure identification using genetic programming |url=https://sciencedirect.com/science/article/pii/S0967066198000872/pdf?md5=5ad89d3029a3ebad83086271f3c78f75&pid=1-s2.0-S0967066198000872-main.pdf |journal=Control Engineering Practice |volume=6 |issue=11 |pages=1341–1352 |doi=10.1016/s0967-0661(98)00087-2}}</ref> and the invention of novel systems. 
More recently, traditional CAD simulation is seen to be transformed to CAutoD by biologically inspired {{gli|machine learning}},<ref>{{cite journal | url=https://ieeexplore.ieee.org/document/6052374 | doi=10.1109/MCI.2011.942584 | title=Evolutionary Computation Meets Machine Learning: A Survey | year=2011 | last1=Zhang | first1=Jun | last2=Zhan | first2=Zhi-hui | last3=Lin | first3=Ying | last4=Chen | first4=Ni | last5=Gong | first5=Yue-Jiao | last6=Zhong | first6=Jing-hui | last7=Chung | first7=Henry S.H. | last8=Li | first8=Yun | last9=Shi | first9=Yu-hui | journal=IEEE Computational Intelligence Magazine | volume=6 | issue=4 | pages=68–75 | s2cid=6760276 }}</ref> including heuristic [[search algorithm|search technique]]s such as [[evolutionary computation]],<ref>[https://ti.arc.nasa.gov/m/pub-archive/768h/0768%20(Hornby).pdf Gregory S. Hornby (2003). Generative Representations for Computer-Automated Design Systems, NASA Ames Research Center, Mail Stop 269-3, Moffett Field, CA 94035-1000]</ref><ref>[https://msu.edu/~jclune/webfiles/publications/2011-CluneLipson-Evolving3DObjectsWithCPPNs-ECAL.pdf J. Clune and H. Lipson (2011). Evolving three-dimensional objects with a generative encoding inspired by developmental biology. Proceedings of the European Conference on Artificial Life. 2011.]</ref> and {{gli|swarm intelligence}} algorithms.<ref>{{Cite journal |last1=Zhan |first1=Z.H. |display-authors=etal |year=2009 |title=Adaptive Particle Swarm Optimization |journal=IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics |volume=39 |issue=6 |pages=1362–1381 |doi=10.1109/tsmcb.2009.2015956 |pmid=19362911|s2cid=11191625 |url=https://eprints.gla.ac.uk/7645/1/7645.pdf }}</ref>}}
 
{{term|[[machine listening|computer audition]] (CA)}}
{{defn|See ''{{gli|machine listening}}''.}}
 
 
{{term|[[control theory]]}}
{{defn|A subfield of mathematics, used in [[Control engineering|control systems engineering]], that deals with the control of continuously operating [[dynamical system]]s in engineered processes and machines. The objective is to develop a control model for controlling such systems using a control action in an optimum manner, without delay or overshoot, while ensuring control [[Stability theory|stability]].}}
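A toy illustration of the feedback idea: a proportional controller drives a simple integrator plant toward a setpoint (the gain, time step, and plant model below are hypothetical choices for the sketch, not standard values):

```python
# Closed loop: measure the error, apply a control action proportional to it.
setpoint = 1.0   # where we want the system to be
x = 0.0          # current state of the plant
kp = 2.0         # proportional gain (hypothetical)
dt = 0.1         # simulation time step

for _ in range(100):
    error = setpoint - x
    u = kp * error          # control action proportional to the error
    x += u * dt             # integrator plant: dx/dt = u

print(round(x, 3))  # converges to the setpoint 1.0
```

Each step computes `x_next = 0.8 * x + 0.2`, whose fixed point is the setpoint, so the state settles there without overshoot for this gain.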
 
{{term|[[convolutional neural network]]}}
{{term|[[data set]]}}
{{ghat|Also '''dataset'''.}}
{{defn|A collection of [[data]]. Most commonly a data set corresponds to the contents of a single [[table (database)|database table]], or a single statistical [[data matrix (multivariate statistics)|data matrix]], where every [[column (database)|column]] of the table represents a particular variable, and each [[row (database)|row]] corresponds to a given member of the data set in question. The data set lists values for each of the variables, such as height and weight of an object, for each member of the data set. Each value is known as a datum. The data set may comprise data for one or more members, corresponding to the number of rows.}}
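The row/column structure can be made concrete with a tiny hypothetical data set, each row one member and each key one variable:

```python
# Three members, two variables; each individual value is a datum.
data_set = [
    {"height_cm": 170, "weight_kg": 65},
    {"height_cm": 182, "weight_kg": 80},
    {"height_cm": 158, "weight_kg": 52},
]

heights = [row["height_cm"] for row in data_set]  # one column's values
print(len(data_set), heights)  # 3 rows; [170, 182, 158]
```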
 
{{term|[[data warehouse]] (DW or DWH)}}
 
{{term|[[decision support system]] (DSS)}}
{{defn|An [[Information systems|information system]] that supports business or organizational [[decision-making]] activities. DSSs serve the management, operations and planning levels of an organization (usually mid and higher management) and help people make decisions about problems that may be rapidly changing and not easily specified in advance—i.e. unstructured and semi-structured decision problems. Decision support systems can be either fully computerized or human-powered, or a combination of both.}}
 
{{term|[[decision theory]]}}
 
{{term|[[decision tree learning]]}}
{{defn|The use of a [[decision tree]] (as a [[Predictive modelling|predictive model]]) to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in [[statistics]], [[data mining]], and {{gli|machine learning}}.}}
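The core step, choosing the split that most reduces uncertainty about the target, can be sketched as a one-level tree (a "decision stump"). The tiny weather data set below is a hypothetical illustration:

```python
from collections import Counter
import math

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_stump(rows, labels):
    """Pick the feature whose split yields the largest information gain."""
    base = entropy(labels)
    best = None
    for f in rows[0]:
        parts = {}                       # partition examples by feature value
        for row, y in zip(rows, labels):
            parts.setdefault(row[f], []).append(y)
        remainder = sum(len(ys) / len(labels) * entropy(ys)
                        for ys in parts.values())
        gain = base - remainder
        if best is None or gain > best[1]:
            # each branch's leaf predicts the majority label there
            leaves = {v: Counter(ys).most_common(1)[0][0]
                      for v, ys in parts.items()}
            best = (f, gain, leaves)
    return best

rows = [{"sky": "sunny", "wind": "weak"}, {"sky": "sunny", "wind": "strong"},
        {"sky": "rainy", "wind": "weak"}, {"sky": "rainy", "wind": "strong"}]
labels = ["yes", "yes", "yes", "no"]
feature, gain, leaves = best_stump(rows, labels)
print(feature, round(gain, 3), leaves)
```

Full decision tree learners apply this split selection recursively to each branch.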
 
{{term|[[declarative programming]]}}
 
{{term|[[diffusion model]]}}
{{defn|In [[machine learning]], '''diffusion models''', also known as '''diffusion probabilistic models''' or '''score-based generative models''', are a class of [[latent variable model]]s. They are [[Markov chain]]s trained using [[Variational Bayesian methods|variational inference]].<ref name=":0ai">{{cite book |last1=Ho |first1=Jonathan |last2=Jain |first2=Ajay |last3=Abbeel |first3=Pieter |title=Denoising Diffusion Probabilistic Models |date=19 June 2020 |arxiv=2006.11239}}</ref> The goal of diffusion models is to learn the latent structure of a dataset by modeling the way in which data points diffuse through the [[latent space]]. In [[computer vision]], this means that a neural network is trained to [[denoise]] images blurred with [[Gaussian noise]] by learning to reverse the diffusion process.<ref name=":1ai">{{cite arXiv |last1=Song |first1=Yang |last2=Sohl-Dickstein |first2=Jascha |last3=Kingma |first3=Diederik P. |last4=Kumar |first4=Abhishek |last5=Ermon |first5=Stefano |last6=Poole |first6=Ben |date=2021-02-10 |title=Score-Based Generative Modeling through Stochastic Differential Equations |class=cs.LG |eprint=2011.13456}}</ref><ref>{{cite arXiv |last1=Gu |first1=Shuyang |last2=Chen |first2=Dong |last3=Bao |first3=Jianmin |last4=Wen |first4=Fang |last5=Zhang |first5=Bo |last6=Chen |first6=Dongdong |last7=Yuan |first7=Lu |last8=Guo |first8=Baining |title=Vector Quantized Diffusion Model for Text-to-Image Synthesis |date=2021 |class=cs.CV |eprint=2111.14822}}</ref> It mainly consists of three major components: the forward process, the reverse process, and the sampling procedure.<ref name="chang23design">{{cite arXiv |last1=Chang |first1=Ziyi |last2=Koulieris |first2=George Alex |last3=Shum |first3=Hubert P. H. 
|title=On the Design Fundamentals of Diffusion Models: A Survey |date=2023 |eprint=2306.04542 |class=cs.LG}}</ref> Three examples of generic diffusion modeling frameworks used in computer vision are denoising diffusion probabilistic models, noise conditioned score networks, and stochastic differential equations.<ref>{{cite journal |last1=Croitoru |first1=Florinel-Alin |last2=Hondru |first2=Vlad |last3=Ionescu |first3=Radu Tudor |last4= Shah |first4= Mubarak |title=Diffusion Models in Vision: A Survey |journal=IEEE Transactions on Pattern Analysis and Machine Intelligence |date=2023 |volume=45 |issue=9 |pages=10850–10869 |doi=10.1109/TPAMI.2023.3261988 |pmid=37030794 |arxiv=2209.04747|s2cid=252199918 }}</ref>}}
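The forward process has a convenient closed form: the noised sample at step t can be drawn directly from the clean sample. A scalar sketch in Python (the linear beta schedule and one-dimensional "image" are illustrative assumptions):

```python
import math
import random

# Forward diffusion (after the closed form in Ho et al. 2020):
#   x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps,   eps ~ N(0, 1),
# where abar_t is the cumulative product of alpha_t = 1 - beta_t.
def alpha_bar(betas, t):
    prod = 1.0
    for beta in betas[: t + 1]:
        prod *= 1.0 - beta
    return prod

def forward_diffuse(x0, betas, t, rng):
    """Draw the noised sample x_t directly from the clean sample x_0."""
    ab = alpha_bar(betas, t)
    eps = rng.gauss(0.0, 1.0)          # standard Gaussian noise
    return math.sqrt(ab) * x0 + math.sqrt(1.0 - ab) * eps

T = 1000
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]  # linear schedule
rng = random.Random(0)
x_noisy = forward_diffuse(1.0, betas, 500, rng)
# The signal-retention factor decays from ~1 toward 0 as t grows:
print(alpha_bar(betas, 0), alpha_bar(betas, T - 1))
```

The reverse process is what is actually learned: a neural network is trained to undo these noising steps, which this sketch does not attempt.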
 
{{term|[[dimensionality reduction]]}}
 
{{term|[[Ebert test]]}}
{{defn|A test which gauges whether a computer-based [[speech synthesis|synthesized voice]]<ref name=twsL35/><ref name="twsL34">{{Cite news |last1=Lee |first1=Jennifer |url=https://bits.blogs.nytimes.com/2011/03/07/roger-ebert-tests-his-vocal-cords-and-comedic-delivery/?src=me |title=Roger Ebert Tests His Vocal Cords, and Comedic Delivery |date=March 7, 2011 |work=The New York Times |access-date=2011-09-12 |quote=Now perhaps, there is the Ebert Test, a way to see if a synthesized voice can deliver humor with the timing to make an audience laugh.... He proposed the Ebert Test as a way to gauge the humanness of a synthesized voice.}}</ref> can tell a [[humor|joke]] with sufficient skill to cause people to [[laughter|laugh]].<ref name="twsL41">{{Cite news |url=https://tips-tricks.co.in/2011/03/roger-eberts-inspiring-digital.html |title=Roger Ebert's Inspiring Digital Transformation |date=March 5, 2011 |access-date=2011-09-12 |url-status=dead |archive-url=https://web.archive.org/web/20110325160035/https://tips-tricks.co.in/2011/03/roger-eberts-inspiring-digital.html |archive-date=25 March 2011 |publisher=Tech News |quote=Meanwhile, the technology that enables Ebert to "speak" continues to see improvements – for example, adding more realistic inflection for question marks and exclamation points. 
In a test of that, which Ebert called the "Ebert test" for computerized voices,}}</ref> It was proposed by [[film critic]] [[Roger Ebert]] at the [[TED (conference)|2011 TED conference]] as a challenge to [[software developer]]s to have a computerized voice master the inflections, delivery, timing, and intonations of a speaking human.<ref name="twsL35">{{Cite news |last1=Ostrow |first1=Adam |url=https://mashable.com/2011/03/05/roger-ebert-ted-talk/ |title=Roger Ebert's Inspiring Digital Transformation |date=March 5, 2011 |access-date=2011-09-12 |publisher=Mashable Entertainment |quote=With the help of his wife, two colleagues and the Alex-equipped MacBook that he uses to generate his computerized voice, famed film critic Roger Ebert delivered the final talk at the TED conference on Friday in Long Beach, California....}}</ref> The test is similar to the [[Turing test]] proposed by [[Alan Turing]] in 1950 as a way to gauge a computer's ability to exhibit intelligent behavior by generating performance indistinguishable from a [[human being]].<ref name="twsL37">{{Cite news |last1=Pasternack |first1=Alex |url=https://motherboard.tv/2011/4/18/a-macbook-may-have-given-roger-ebert-his-voice-but-an-ipod-saved-his-life-video |title=A MacBook May Have Given Roger Ebert His Voice, But An iPod Saved His Life (Video) |date=Apr 18, 2011 |access-date=2011-09-12 |url-status=dead |archive-url=https://web.archive.org/web/20110906063605/https://motherboard.tv/2011/4/18/a-macbook-may-have-given-roger-ebert-his-voice-but-an-ipod-saved-his-life-video |archive-date=6 September 2011 |publisher=Motherboard |quote=He calls it the "Ebert Test," after Turing's AI standard...}}</ref>}}
 
{{term|[[echo state network]] (ESN)}}
{{defn|A sub-area of {{gli|machine learning}} concerned with how an {{gli|intelligent agent|agent}} ought to take actions in an [[Environment (systems)|environment]] so as to minimize some error feedback. It is a type of {{gli|reinforcement learning}}.}}
 
{{term|[[ensemble averaging (machine learning)|ensemble averaging]]}}
{{defn|In {{gli|machine learning}}, particularly in the creation of {{gli|artificial neural network|artificial neural networks}}, ensemble averaging is the process of creating multiple models and combining them to produce a desired output, as opposed to creating just one model.}}
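The averaging step itself is simple; a minimal sketch in which the "experts" are hypothetical hand-made predictors standing in for trained networks:

```python
# Each expert approximates y = 2x with a different individual error;
# the committee's output is the mean of their responses.
experts = [
    lambda x: 2.0 * x + 0.3,
    lambda x: 2.0 * x - 0.4,
    lambda x: 2.1 * x + 0.1,
]

def committee(x):
    outputs = [f(x) for f in experts]
    return sum(outputs) / len(outputs)

print(committee(1.0))  # about 2.033, vs. the true value 2.0
```

At x = 1 the individual errors are 0.3, 0.4, and 0.2, while the averaged output errs by only about 0.03: uncorrelated errors partially cancel, which is the motivation for ensemble averaging.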
 
 
{{anchor|evolutionary algorithm}}{{term|[[evolutionary algorithm]] (EA)}}
{{defn|A subset of {{gli|evolutionary computation}},<ref name="EVOALG">{{Cite book |last1=Vikhar |first1=P. A. |title=2016 International Conference on Global Trends in Signal Processing, Information Computing and Communication (ICGTSPICC) |chapter=Evolutionary algorithms: A critical review and its future prospects |year=2016 |publisher=Jalgaon, 2016, pp. 261-265 |pages=261–265 |doi=10.1109/ICGTSPICC.2016.7955308 |isbn=978-1-5090-0467-6|s2cid=22100336 }}</ref> a generic population-based [[metaheuristic]] [[optimization (mathematics)|optimization]] {{gli|algorithm}}. An EA uses mechanisms inspired by [[biological evolution]], such as [[reproduction]], [[mutation]], [[genetic recombination|recombination]], and [[natural selection|selection]]. [[Candidate solution]]s to the [[optimization problem]] play the role of individuals in a population, and the [[fitness function]] determines the quality of the solutions (see also [[loss function]]). [[Evolution]] of the population then takes place after the repeated application of the above operators.}}
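The mutate–evaluate–select loop can be sketched in a few lines (a minimal elitist EA on a hypothetical one-dimensional task; real EAs add recombination and richer encodings):

```python
import random

# Hypothetical task: maximize f(x) = -(x - 3)^2, whose optimum is x = 3.
def fitness(x):
    return -(x - 3.0) ** 2

rng = random.Random(42)
population = [rng.uniform(-10, 10) for _ in range(20)]

for generation in range(100):
    offspring = [x + rng.gauss(0, 0.5) for x in population]  # mutation
    pool = population + offspring
    pool.sort(key=fitness, reverse=True)                     # selection
    population = pool[:20]                                   # survivors

best = population[0]
print(round(best, 2))  # close to 3.0
```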
 
{{term|[[evolutionary computation]]}}
 
{{term|[[existential risk from artificial general intelligence|existential risk]]}}
{{defn|The hypothesis that substantial progress in {{gli|artificial general intelligence}} (AGI) could someday result in [[human extinction]] or some other unrecoverable [[global catastrophic risk|global catastrophe]].<ref name="aima">{{Cite book |last1=Russell |first1=Stuart |title=Artificial Intelligence: A Modern Approach |title-link=Artificial Intelligence: A Modern Approach |last2=Norvig |first2=Peter |date=2009 |publisher=Prentice Hall |isbn=978-0-13-604259-4 |chapter=26.3: The Ethics and Risks of Developing Artificial Intelligence |author-link=Stuart J. Russell |author-link2=Peter Norvig}}</ref><ref>{{Cite journal |last1=Bostrom |first1=Nick |author-link=Nick Bostrom |year=2002 |title=Existential risks |journal=[[Journal of Evolution and Technology]] |volume=9 |issue=1 |pages=1–31}}</ref><ref>{{Cite news |url=https://slate.com/articles/technology/future_tense/2016/04/killer_a_i_101_a_cheat_sheet_to_the_terminology_the_ethical_debates_the.html |title=Your Artificial Intelligence Cheat Sheet |date=1 April 2016 |work=[[Slate (magazine)|Slate]] |access-date=16 May 2016}}</ref>}}
 
{{term|[[expert system]]}}
{{defn|A computer system that emulates the decision-making ability of a human expert.<ref name="Jackson1998">{{Citation |last1=Jackson |first1=Peter |title=Introduction To Expert Systems |page=2 |year=1998 |edition=3 |publisher=Addison Wesley |isbn=978-0-201-87686-4}}</ref> Expert systems are designed to solve complex problems by [[automated reasoning|reasoning]] through bodies of knowledge, represented mainly as [[Rule-based system|if–then rule]]s rather than through conventional [[Procedural programming|procedural code]].<ref>{{Cite web |url=https://pcmag.com/encyclopedia_term/0,2542,t=conventional+programming&i=40325,00.asp |title=Conventional programming |work=PC Magazine |access-date=2013-09-15 |archive-date=14 October 2012 |archive-url=https://web.archive.org/web/20121014124656/https://pcmag.com/encyclopedia_term/0%2C2542%2Ct%3Dconventional+programming%26i%3D40325%2C00.asp |url-status=dead }}</ref>}}
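The if–then reasoning style can be illustrated with a tiny forward-chaining loop (the rules and facts below are hypothetical, and real expert systems use far richer rule engines):

```python
# Each rule: (set of conditions, conclusion). The engine repeatedly fires
# any rule whose conditions all hold, adding its conclusion as a new fact.
rules = [
    ({"has_fever", "has_cough"}, "flu_suspected"),
    ({"flu_suspected"}, "recommend_rest"),
]
facts = {"has_fever", "has_cough"}

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)   # fire the rule
            changed = True

print(sorted(facts))
```

Note the second rule cannot fire until the first has asserted its conclusion; the knowledge lives in the rules, not in procedural control flow.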
{{glossaryend}}
 
 
{{term|[[frame language]]}}
{{defn|A technology used for [[knowledge representation]] in artificial intelligence. Frames are stored as [[Ontology (information science)|ontologies]] of [[Set theory|set]]s and subsets of the [[Frame (artificial intelligence)|frame concept]]s. They are similar to class hierarchies in [[object-oriented language]]s although their fundamental design goals are different. Frames are focused on explicit and intuitive representation of knowledge whereas objects focus on [[Encapsulation (object-oriented programming)|encapsulation]] and [[information hiding]]. Frames originated in AI research and objects primarily in [[software engineering]]. However, in practice the techniques and capabilities of frame and object-oriented languages overlap significantly.}}
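The slot-and-inheritance behavior can be sketched with plain dictionaries (the bird/penguin frames are a hypothetical illustration of default values and overrides):

```python
# Each frame has local slots and an "is-a" link to a parent frame.
frames = {
    "bird":    {"is_a": None,   "slots": {"can_fly": True, "legs": 2}},
    "penguin": {"is_a": "bird", "slots": {"can_fly": False}},
}

def get_slot(frame_name, slot):
    frame = frames[frame_name]
    if slot in frame["slots"]:
        return frame["slots"][slot]      # local value overrides the parent
    if frame["is_a"]:
        return get_slot(frame["is_a"], slot)  # inherit up the hierarchy
    raise KeyError(slot)

print(get_slot("penguin", "can_fly"))  # False (overridden locally)
print(get_slot("penguin", "legs"))     # 2 (inherited from "bird")
```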
 
{{term|[[frame problem]]}}
Line 571:
 
{{term|[[fuzzy control system]]}}
{{defn|A [[control system]] based on {{gli|fuzzy logic}}—a [[mathematics|mathematical]] system that analyzes [[analog signal|analog]] input values in terms of [[mathematical logic|logic]]al variables that take on continuous values between 0 and 1, in contrast to classical or [[Digital data|digital]] logic, which operates on discrete values of either 1 or 0 (true or false, respectively).<ref name="Pedrycz">{{Cite book |last1=Pedrycz |first1=Witold |title=Fuzzy control and fuzzy systems |publisher=Research Studies Press Ltd. |year=1993 |edition=2}}</ref><ref name="Hájek">{{Cite book |last1=Hájek |first1=Petr |title=Metamathematics of fuzzy logic |publisher=Springer Science & Business Media |year=1998 |edition=4}}</ref>}}
 
{{term|[[fuzzy logic]]}}
{{defn|A form of [[many-valued logic]] in which the [[truth value]]s of variables may be any real number between 0 (completely false) and 1 (completely true), inclusive. It is employed to handle the concept of partial truth, where the truth value may range between completely true and completely false, in contrast to [[Boolean algebra|Boolean logic]], in which the truth values of variables may only take the integer values 0 or 1.}}
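Degrees of truth can be combined with the classical Zadeh operators (minimum for conjunction, maximum for disjunction, complement for negation); the membership degrees below are invented for illustration:

```python
# Classical Zadeh operators for fuzzy logic: truth values are reals in [0, 1].
def f_and(a, b):   # fuzzy conjunction: minimum
    return min(a, b)

def f_or(a, b):    # fuzzy disjunction: maximum
    return max(a, b)

def f_not(a):      # fuzzy negation: complement
    return 1.0 - a

# "The water is hot" to degree 0.7, "the water is plentiful" to degree 0.2:
hot, plentiful = 0.7, 0.2
print(f_and(hot, plentiful))  # 0.2
print(f_or(hot, plentiful))   # 0.7
print(f_not(hot))
```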
 
{{term|[[fuzzy rule]]}}
Line 597:
 
{{term|[[generative artificial intelligence]]}}
{{defn|Generative artificial intelligence is [[artificial intelligence]] capable of generating text, images, or other media in response to [[Prompt engineering|prompts]].<ref name="nytimes">{{Cite web|url=https://www.nytimes.com/2023/01/27/technology/anthropic-ai-funding.html|title=Anthropic Said to Be Closing In on $300 Million in New A.I. Funding|last1=Griffith|first1=Erin|last2=Metz|first2=Cade|date=2023-01-27|work=[[The New York Times]]|accessdate=2023-03-14}}</ref><ref name="bloomberg">{{cite news |last1=Lanxon |first1=Nate |last2=Bass |first2=Dina |last3=Davalos |first3=Jackie |title=A Cheat Sheet to AI Buzzwords and Their Meanings |url=https://news.bloomberglaw.com/tech-and-telecom-law/a-cheat-sheet-to-ai-buzzwords-and-their-meanings-quicktake |access-date=March 14, 2023 |newspaper=Bloomberg News |date=March 10, 2023 |location=}}</ref> Generative AI models [[machine learning|learn]] the patterns and structure of their input [[training data set|training data]] and then generate new data that has similar characteristics, typically using [[Transformer (machine learning model)|Transformer]]-based [[Deep learning|deep]] [[neural networks]].<ref>{{Cite news |last=Pasick |first=Adam |date=2023-03-27 |title=Artificial Intelligence Glossary: Neural Networks and Other Terms Explained |language=en-US |work=The New York Times |url=https://www.nytimes.com/article/ai-artificial-intelligence-glossary.html |access-date=2023-04-22 |issn=0362-4331}}</ref><ref>{{cite web | url=https://openai.com/research/generative-models | title=Generative models | author1=Andrej Karpathy | author2=Pieter Abbeel | author3=Greg Brockman | author4=Peter Chen | author5=Vicki Cheung | author6=Yan Duan | author7=Ian Goodfellow | author8=Durk Kingma | author9=Jonathan Ho | author10=Rein Houthooft | author11=Tim Salimans | author12=John Schulman | author13=Ilya Sutskever | author14=Wojciech Zaremba | date=2016-06-16 | website=OpenAI}}</ref>}}
 
{{anchor|genetic algorithm}}{{term|[[genetic algorithm]] (GA)}}
{{defn|A [[metaheuristic]] inspired by the process of [[natural selection]] that belongs to the larger class of [[evolutionary algorithm]]s (EA). Genetic algorithms are commonly used to generate high-quality solutions to [[Optimization (mathematics)|optimization]] and [[Search algorithm|search problem]]s by relying on bio-inspired operators such as [[Mutation (genetic algorithm)|mutation]], [[crossover (genetic algorithm)|crossover]] and [[selection (genetic algorithm)|selection]].{{sfn|Mitchell|1996|p=2}}}}
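The mutation, crossover, and selection operators can be seen in a toy genetic algorithm for the "OneMax" problem (evolving a bit string of all 1s); population size, rates, and generation count are arbitrary illustrative choices:

```python
import random

# Toy genetic algorithm for OneMax: evolve a bit string of all 1s.
# Parameters are arbitrary illustrative choices.
random.seed(0)
LENGTH, POP, GENS = 20, 30, 60

def fitness(bits):
    return sum(bits)

def select(pop):
    # Tournament selection: the fitter of two random individuals.
    a, b = random.choice(pop), random.choice(pop)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    cut = random.randrange(1, LENGTH)          # single-point crossover
    return p1[:cut] + p2[cut:]

def mutate(bits, rate=0.02):
    return [b ^ 1 if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(POP)]

best = max(pop, key=fitness)
print(fitness(best), best)
```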
 
{{term|[[genetic operator]]}}
Line 609:
 
{{term|[[glowworm swarm optimization]]}}
{{defn|A {{gli|swarm intelligence}} [[Mathematical optimization|optimization]] {{gli|algorithm}} based on the behaviour of [[glowworm]]s (also known as fireflies or lightning bugs).}}
 
{{term|[[graph (abstract data type)]]}}
Line 615:
 
{{term|[[graph (discrete mathematics)]]}}
{{defn|In mathematics, and more specifically in {{gli|graph theory}}, a graph is a structure amounting to a set of objects in which some pairs of the objects are in some sense "related". The objects correspond to mathematical abstractions called ''[[Vertex (graph theory)|vertice]]s'' (also called ''nodes'' or ''points'') and each of the related pairs of vertices is called an ''edge'' (also called an ''arc'' or ''line'').<ref>{{Cite book |last1=Trudeau |first1=Richard J. |url=https://store.doverpublications.com/0486678709.html |title=Introduction to Graph Theory |publisher=Dover Pub. |year=1993 |isbn=978-0-486-67870-2 |edition=Corrected, enlarged republication. |location=New York |pages=19 |quote=A graph is an object consisting of two sets called its ''vertex set'' and its ''edge set''. |access-date=8 August 2012}}</ref>}}
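The vertex-set/edge-set definition translates directly into code; the small example graph below is invented for illustration:

```python
# A graph as a vertex set and an edge set, per the definition above.
# Undirected edges stored as frozensets so {u, v} equals {v, u}.
vertices = {"A", "B", "C", "D"}
edges = {frozenset(e) for e in [("A", "B"), ("B", "C"), ("A", "C")]}

def neighbors(v):
    # Vertices sharing an edge with v ("D" is isolated: no edges touch it).
    return {next(iter(e - {v})) for e in edges if v in e}

print(sorted(neighbors("A")))  # ['B', 'C']
```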
 
{{term|[[graph database]] (GDB)}}
Line 636:
 
{{term|[[heuristic (computer science)|heuristic]]}}
{{defn|A technique designed for [[problem solving|solving a problem]] more quickly when classic methods are too slow, or for finding an approximate solution when classic methods fail to find any exact solution. This is achieved by trading optimality, completeness, [[Accuracy and precision|accuracy]], or [[Accuracy and precision|precision]] for speed. In a way, it can be considered a shortcut. A heuristic function, also called simply a heuristic, is a [[Function (mathematics)|function]] that ranks alternatives in {{gli|search algorithm|search algorithms}} at each branching step based on available information to decide which branch to follow. For example, it may approximate the exact solution.<ref>{{Cite book |last1=Pearl |first1=Judea |title=Heuristics: intelligent search strategies for computer problem solving |url=https://archive.org/details/intelligentsearc00jude |url-access=limited |publisher=Addison-Wesley Pub. Co., Inc., Reading, MA |year=1984 |location=United States |page=[https://archive.org/details/intelligentsearc00jude/page/n21 3] |bibcode=1985hiss.book.....P |osti=5127296}}</ref>}}
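As a sketch of a heuristic function ranking alternatives at a branching step, the Manhattan distance is a common grid-pathfinding heuristic; the goal and candidate moves below are invented for illustration:

```python
# A heuristic function ranks alternatives at each branching step.
# Here: Manhattan distance to the goal, a common grid-pathfinding heuristic.
def manhattan(cell, goal):
    (x1, y1), (x2, y2) = cell, goal
    return abs(x1 - x2) + abs(y1 - y2)

# Greedy best-first step: among candidate moves, follow the branch the
# heuristic scores lowest (the one that looks closest to the goal).
goal = (3, 3)
candidates = [(1, 0), (0, 1), (1, 2)]
best = min(candidates, key=lambda c: manhattan(c, goal))
print(best)  # (1, 2): heuristic value 3, versus 5 for the others
```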
 
{{anchor|hidden layer}}{{term|hidden layer}}
Line 683:
 
{{term|[[interpretation (logic)|interpretation]]}}
{{defn|An assignment of meaning to the [[symbol (formal)|symbol]]s of a {{gli|formal language}}. Many formal languages used in [[mathematics]], [[logic]], and [[theoretical computer science]] are defined in solely [[syntax|syntactic]] terms, and as such do not have any meaning until they are given some interpretation. The general study of interpretations of formal languages is called [[formal semantics (logic)|formal semantic]]s.}}
 
{{term|[[intrinsic motivation (artificial intelligence)|intrinsic motivation]]}}
{{defn|An [[intelligent agent]] is intrinsically motivated to act if the information content alone, of the experience resulting from the action, is the motivating factor. Information content in this context is measured in the [[information-theoretic|information theory]] sense as quantifying uncertainty. A typical intrinsic motivation is to search for unusual (surprising) situations, in contrast to a typical extrinsic motivation such as the search for food. Intrinsically motivated artificial agents display behaviours akin to [[exploration]] and [[curiosity]].<ref>{{Cite book |last1=Oudeyer |first1=Pierre-Yves |title=Proc. of the 8th Conf. on Epigenetic Robotics |last2=Kaplan |first2=Frederic |date=2008 |volume=5 |pages=29–31 |chapter=How can we define intrinsic motivation?}}</ref>}}
 
{{term|[[issue tree]]}}
Line 711:
 
{{term|[[knowledge acquisition]]}}
{{defn|The process used to define the rules and ontologies required for a [[knowledge-based system]]. The phrase was first used in conjunction with [[expert system]]s to describe the initial tasks associated with developing an expert system, namely finding and interviewing [[knowledge domain|domain]] experts and capturing their knowledge via [[Rule-based system|rule]]s, [[Object-oriented programming|object]]s, and [[Frame language|frame-based]] [[Ontologies (computer science)|ontologies]].}}
 
{{anchor|knowledge-based system}}{{term|[[knowledge-based system]] (KBS)}}
Line 726:
 
{{anchor|knowledge representation and reasoning}}{{term|[[knowledge representation and reasoning]] (KR²<!-- won't work: <sup>2</sup>--> or KR&R)}}
{{defn|The field of {{gli|artificial intelligence}} dedicated to representing information about the world in a form that a computer system can utilize to solve complex tasks such as [[Computer-aided diagnosis|diagnosing a medical condition]] or [[natural language user interface|having a dialog in a natural language]]. Knowledge representation incorporates findings from psychology<ref>{{Cite book |last1=Schank |first1=Roger |title=Scripts, Plans, Goals, and Understanding: An Inquiry Into Human Knowledge Structures |last2=Robert Abelson |date=1977 |publisher=Lawrence Erlbaum Associates, Inc.}}</ref> about how humans solve problems and represent knowledge in order to design [[Formalism (mathematics)|formalism]]s that will make complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings from [[logic]] to automate various kinds of ''reasoning'', such as the application of rules or the relations of [[Set theory|set]]s and [[subset]]s.<ref>{{Cite news |url=https://deepminds.science/knowledge-representation-neural-networks/ |title=Knowledge Representation in Neural Networks - deepMinds |date=2018-08-16 |work=deepMinds |access-date=2018-08-16 |archive-date=17 August 2018 |archive-url=https://web.archive.org/web/20180817023355/https://deepminds.science/knowledge-representation-neural-networks/ |url-status=dead }}</ref> Examples of knowledge representation formalisms include [[Semantic network|semantic net]]s, [[systems architecture]], [[Frame (artificial intelligence)|frame]]s, rules, and [[Ontology (information science)|ontologies]]. Examples of [[automated reasoning]] engines include [[inference engine]]s, [[automated theorem proving|theorem prover]]s, and classifiers.}}
{{glossaryend}}
 
Line 793:
 
{{term|''[[modus ponens]]''}}
{{defn|In [[propositional calculus|propositional logic]], ''modus ponens'' is a [[rule of inference]].<ref>Enderton, Herbert B. (2001). ''A Mathematical Introduction to Logic'' (2nd ed.). Harcourt Academic Press, Burlington, MA. p. 110. {{ISBN|978-0-12-238452-3}}.</ref> It can be summarized as "''P [[material conditional|implies]] Q'' and ''P'' is asserted to be true, therefore ''Q'' must be true."}}
 
{{term|''[[modus tollens]]''}}
{{defn|In [[propositional calculus|propositional logic]], ''modus tollens'' is a [[Validity (logic)|valid]] [[Logical form|argument form]] and a [[rule of inference]]. It is an application of the general truth that if a statement is true, then so is its [[contrapositive]]. The inference rule ''modus tollens'' asserts that the [[inference]] from ''P implies Q'' to ''the negation of Q implies the negation of P'' is valid.}}
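Because propositional logic has only finitely many truth assignments over two variables, the validity of both rules can be checked exhaustively; this is a sketch, not a theorem prover:

```python
from itertools import product

# Verify modus ponens and modus tollens by exhausting all truth assignments.
def implies(p, q):
    # Material conditional: P -> Q is false only when P is true and Q is false.
    return (not p) or q

for p, q in product([False, True], repeat=2):
    # Modus ponens: from (P -> Q) and P, infer Q.
    if implies(p, q) and p:
        assert q
    # Modus tollens: from (P -> Q) and not Q, infer not P.
    if implies(p, q) and not q:
        assert not p

print("both rules hold on all truth assignments")
```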
 
{{term|[[Monte Carlo tree search]]}}
Line 821:
{{glossary}}
{{term|[[naive Bayes classifier]]}}
{{defn|In {{gli|machine learning}}, naive Bayes classifiers are a family of simple [[Probabilistic classification|probabilistic classifier]]s based on applying [[Bayes' theorem]] with strong (naive) [[statistical independence|independence]] assumptions between the features.}}
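A minimal naive Bayes text classifier makes the independence assumption concrete: each word contributes its class-conditional probability independently. The toy corpus and labels below are invented for illustration:

```python
import math
from collections import Counter

# Minimal naive Bayes text classifier; the toy corpus is invented.
train = [("spam", "buy cheap pills"), ("spam", "cheap pills now"),
         ("ham", "meeting at noon"), ("ham", "lunch meeting today")]

class_counts = Counter(label for label, _ in train)
word_counts = {"spam": Counter(), "ham": Counter()}
for label, text in train:
    word_counts[label].update(text.split())

vocab = {w for c in word_counts.values() for w in c}

def score(label, text):
    # log P(label) + sum of log P(word | label), with add-one (Laplace) smoothing.
    total = sum(word_counts[label].values())
    s = math.log(class_counts[label] / len(train))
    for w in text.split():
        s += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
    return s

def classify(text):
    return max(class_counts, key=lambda lbl: score(lbl, text))

print(classify("cheap pills"))    # spam
print(classify("meeting today"))  # ham
```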
 
{{term|[[naive semantics]]}}
Line 844:
 
{{term|[[natural language programming]]}}
{{defn|An [[ontology (information science)|ontology]]-assisted way of [[programming language|programming]] in terms of [[natural language|natural-language]] sentences, e.g. [[English language|English]].<ref>Miller, Lance A. "Natural language programming: Styles, strategies, and contrasts." IBM Systems Journal 20.2 (1981): 184–215.</ref>}}
 
{{term|[[network motif]]}}
Line 896:
{{term|[[Occam's razor]]}}
{{ghat|Also '''Ockham's razor''' or '''Ocham's razor'''.}}
{{defn|The problem-solving principle that states that when presented with competing [[hypothesis|hypotheses]] that make the same predictions, one should select the solution with the fewest assumptions;<ref>{{Cite web |url=https://math.ucr.edu/home/baez/physics/General/occam.html |title=What is Occam's Razor? |website=math.ucr.edu |access-date=2019-06-01}}</ref> the principle is not meant to filter out hypotheses that make different predictions. The idea is attributed to the English [[Franciscan]] friar [[William of Ockham]] ({{c.}} 1287–1347), a [[Scholasticism|scholastic]] philosopher and [[Catholic theology|theologian]].}}
 
{{term|[[offline learning]]}}
Line 906:
{{term|[[ontology learning]]}}
{{ghat|Also '''ontology extraction''', '''ontology generation''', or '''ontology acquisition'''.}}
{{defn|The automatic or semi-automatic creation of [[ontology (information science)|ontologies]], including extracting the corresponding [[Domain of discourse|domain's]] terms and the relationships between the [[Conceptualization (information science)|concept]]s that these terms represent from a [[Text corpus|corpus]] of natural language text, and encoding them with an [[ontology language]] for easy retrieval.}}
 
{{term|[[OpenAI]]}}
Line 915:
 
{{term|[[Open Mind Common Sense]]}}
{{defn|An artificial intelligence project based at the [[Massachusetts Institute of Technology]] (MIT) [[MIT Media Lab|Media Lab]] whose goal is to build and utilize a large [[commonsense knowledge (artificial intelligence)|commonsense knowledge base]] from the contributions of many thousands of people across the Web.}}
 
{{anchor|open-source software}}{{term|[[open-source software]] (OSS)}}
Line 973:
 
{{term|[[Python (programming language)|Python]]}}
{{defn|An [[interpreted language|interpreted]], [[high-level programming language|high-level]], [[general-purpose programming language|general-purpose]] {{gli|programming language}} created by [[Guido van Rossum]] and first released in 1991. Python's design philosophy emphasizes [[code readability]] with its notable use of [[Off-side rule|significant whitespace]]. Its language constructs and [[object-oriented programming|object-oriented]] approach aim to help programmers write clear, logical code for small and large-scale projects.<ref>Kuhlman, Dave. "A Python Book: Beginning Python, Advanced Python, and Python Exercises". Section 1.1. Archived from the original (PDF) on 23 June 2012.</ref>}}
{{glossaryend}}
 
Line 984:
 
{{term|[[quantifier (logic)|quantifier]]}}
{{defn|In [[logic]], quantification specifies the quantity of specimens in the [[domain of discourse]] that satisfy an [[open formula]]. The two most common quantifiers mean "[[Universal quantification|for all]]" and "[[Existential quantification|there exist]]s". For example, in arithmetic, quantifiers allow one to say that the natural numbers go on forever, by writing that ''for all'' n (where n is a natural number), there is another number (say, the successor of n) which is one bigger than n.}}
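Over a finite domain, Python's built-in `all()` and `any()` behave as universal and existential quantifiers; the bounded domain below is an illustrative stand-in for the natural numbers:

```python
# all() and any() act as bounded universal and existential quantifiers
# over a finite domain of discourse.
domain = range(10)          # a finite stand-in for the natural numbers

# "For all n in the domain, there exists an m with m = n + 1."
succ_exists = all(any(m == n + 1 for m in range(11)) for n in domain)
print(succ_exists)  # True

# Existential: "there exists an even n greater than 5."
print(any(n > 5 and n % 2 == 0 for n in domain))  # True
```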
 
{{term|[[quantum computing]]}}
{{defn|The use of [[quantum mechanics|quantum-mechanical]] [[phenomena]] such as [[quantum superposition|superposition]] and [[quantum entanglement|entanglement]] to perform [[computation]]. A quantum computer is used to perform such computation, which can be implemented theoretically or physically.<ref name="2018Report">{{Cite book |title=Quantum Computing : Progress and Prospects (2018) |publisher=National Academies Press |year=2019 |isbn=978-0-309-47969-1 |editor-last=Grumbling |editor-first=Emily |location=Washington, DC |page=I-5 |doi=10.17226/25196 |s2cid=125635007 |oclc=1081001288 |editor-last2=Horowitz |editor-first2=Mark}}</ref>{{rp|I-5}}}}
 
{{term|[[query language]]}}
Line 995:
==R==
{{glossary}}
{{term|[[R (programming language)|R programming language]]}}
{{defn|A {{gli|programming language}} and [[free software]] environment for [[statistical computing]] and graphics supported by the R Foundation for Statistical Computing.{{refn | R language and environment
{{Cite web |url=https://cran.r-project.org/doc/FAQ/R-FAQ.html#What-is-R_003f |title=R FAQ |last1=Hornik |first1=Kurt |date=2017-10-04 |website=The Comprehensive R Archive Network |at=2.1 What is R? |access-date=2018-08-06}}
Line 1,002:
The R Core Team asks authors who use R in their data analysis to cite the software using:
R Core Team (2016). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL https://R-project.org/.
}} The R language is widely used among [[statistician]]s and [[Data mining|data miner]]s for developing [[statistical software]]{{refn | widely used
{{Cite web |last1=Fox |first1=John |last2=Andersen |first2=Robert |name-list-style=amp |date=January 2005 |title=Using the R Statistical Computing Environment to Teach Social Statistics Courses |url=https://socialsciences.mcmaster.ca/jfox/Teaching-with-R.pdf |publisher=Department of Sociology, McMaster University |access-date=2018-08-06}}
{{Cite news |last1=Vance |first1=Ashlee |author-link=Ashlee Vance |url=https://nytimes.com/2009/01/07/technology/business-computing/07program.html |title=Data Analysts Captivated by R's Power |date=2009-01-06 |work=[[The New York Times]] |access-date=2018-08-06 |quote=R is also the name of a popular programming language used by a growing number of data analysts inside corporations and academia. It is becoming their lingua franca...}}
Line 1,008:
 
{{term|[[radial basis function network]]}}
{{defn|In the field of [[mathematical modeling]], a radial basis function network is an {{gli|artificial neural network}} that uses [[radial basis function]]s as [[activation function]]s. The output of the network is a [[linear combination]] of radial basis functions of the inputs and neuron parameters. Radial basis function networks have many uses, including [[function approximation]], [[time series prediction]], [[Statistical classification|classification]], and system [[Control theory|control]]. They were first formulated in a 1988 paper by Broomhead and Lowe, both researchers at the [[Royal Signals and Radar Establishment]].<ref>{{Cite tech report |last1=Broomhead |first1=D. S. |last2=Lowe |first2=David |year=1988 |title=Radial basis functions, multi-variable functional interpolation and adaptive networks |institution=[[Royal Signals and Radar Establishment|RSRE]] |number=4148 |url=https://apps.dtic.mil/sti/pdfs/ADA196234.pdf|archive-url=https://web.archive.org/web/20130409223044/https://dtic.mil/cgi-bin/GetTRDoc?AD=ADA196234|url-status=live|archive-date=9 April 2013}}</ref><ref>{{Cite journal |last1=Broomhead |first1=D. S. |last2=Lowe |first2=David |year=1988 |title=Multivariable functional interpolation and adaptive networks |url=https://sci2s.ugr.es/keel/pdf/algorithm/articulo/1988-Broomhead-CS.pdf |journal=Complex Systems |volume=2 |pages=321–355}}</ref><ref>{{Cite journal |last1=Schwenker |first1=Friedhelm |last2=Kestler |first2=Hans A. |last3=Palm |first3=Günther |year=2001 |title=Three learning phases for radial-basis-function networks |journal=Neural Networks |volume=14 |issue=4–5 |pages=439–458 |doi=10.1016/s0893-6080(01)00027-2 |pmid=11411631}}</ref>}}
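The "linear combination of radial basis functions" can be written directly as a forward pass; the centers, width, and weights below are arbitrary illustrative values, not a trained network:

```python
import math

# Forward pass of a tiny radial basis function network:
#   output(x) = sum_i w_i * exp(-(x - c_i)^2 / (2 * sigma^2))
# Centers, width, and weights are arbitrary illustrative values.
centers = [0.0, 1.0, 2.0]
weights = [1.0, -0.5, 2.0]
sigma = 0.5

def rbf_net(x):
    return sum(w * math.exp(-((x - c) ** 2) / (2 * sigma ** 2))
               for c, w in zip(centers, weights))

print(round(rbf_net(2.0), 4))
```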
 
{{term|[[random forest]]}}
{{ghat|Also '''random decision forest'''.}}
{{defn|An [[ensemble learning]] method for [[statistical classification|classification]], [[regression analysis|regression]] and other tasks that operates by constructing a multitude of [[decision tree learning|decision tree]]s at training time and outputting the class that is the [[mode (statistics)|mode]] of the classes (classification) or mean prediction (regression) of the individual trees.<ref>Ho, Tin Kam (1995). Random Decision Forests (PDF). Proceedings of the 3rd International Conference on Document Analysis and Recognition, Montreal, QC, 14–16 August 1995. pp. 278–282. Archived from the original (PDF) on 17 April 2016. Retrieved 5 June 2016.</ref><ref>{{Cite journal |last1=Ho |first1=TK |year=1998 |title=The Random Subspace Method for Constructing Decision Forests |journal=IEEE Transactions on Pattern Analysis and Machine Intelligence |volume=20 |issue=8 |pages=832–844 |doi=10.1109/34.709601|s2cid=206420153 |url=https://repositorio.unal.edu.co/handle/unal/81834 }}</ref> Random decision forests correct for decision trees' habit of [[overfitting]] to their [[Test set|training set]].<ref>Hastie, Trevor; Tibshirani, Robert; Friedman, Jerome (2008). The Elements of Statistical Learning (2nd ed.). Springer. {{ISBN|0-387-95284-5}}.</ref>}}
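The vote-of-many-trees idea can be sketched with bagged decision stumps, a deliberately tiny stand-in (real random forests grow full decision trees and also subsample features); the data set and parameters are invented:

```python
import random
from collections import Counter

random.seed(1)

# Bagged decision stumps: a tiny stand-in for a random forest.
X = [1, 2, 3, 4, 6, 7, 8, 9]
y = [0, 0, 0, 0, 1, 1, 1, 1]          # label: is x greater than 5?

def fit_stump(xs, ys):
    # Choose the threshold with the fewest training errors on this sample.
    best_t, best_err = None, float("inf")
    for t in xs:
        err = sum((x > t) != label for x, label in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def forest(n_trees=15):
    thresholds = []
    for _ in range(n_trees):
        idx = [random.randrange(len(X)) for _ in X]      # bootstrap sample
        thresholds.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return thresholds

def predict(thresholds, x):
    votes = Counter(int(x > t) for t in thresholds)
    return votes.most_common(1)[0][0]                    # mode of the trees

model = forest()
print(predict(model, 2), predict(model, 8))
```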
 
{{term|[[reasoning system]]}}
Line 1,042:
 
{{term|[[robotics]]}}
{{defn|An interdisciplinary branch of science and engineering that includes [[mechanical engineering]], [[electronic engineering]], [[Information engineering (field)|information engineering]], [[computer science]], and others. Robotics deals with the design, construction, operation, and use of [[robot]]s, as well as [[computer system]]s for their control, [[sensory feedback]], and [[Data processing|information processing]].}}
 
{{term|[[rule-based system]]}}
Line 1,066:
{{term|[[semantic network]]}}
{{ghat|Also '''frame network'''.}}
{{defn|A [[knowledge base]] that represents [[Semantics|semantic]] relations between [[concept]]s in a network. This is often used as a form of [[Knowledge representation and reasoning|knowledge representation]]. It is a [[directed graph|directed]] or [[undirected graph]] consisting of [[vertex (graph theory)|vertice]]s, which represent [[concept]]s, and [[graph theory|edge]]s, which represent [[semantic relationship|semantic relation]]s between concepts,<ref name="Sowa">{{Cite encyclopedia |year=1987 |title=Semantic Networks |encyclopedia=Encyclopedia of Artificial Intelligence |url=https://jfsowa.com/pubs/semnet.htm |access-date=2008-04-29 |author-link=John F. Sowa |editor-last=Shapiro |editor-first=Stuart C |last1=Sowa |first1=John F.}}</ref> mapping or connecting [[semantic field]]s.}}
 
{{term|[[semantic reasoner]]}}
Line 1,085:
 
{{term|[[similarity learning]]}}
{{defn|An area of supervised {{gli|machine learning}} in artificial intelligence. It is closely related to [[regression (machine learning)|regression]] and [[classification in machine learning|classification]], but the goal is to learn a similarity function that measures how similar or related two objects are. It has applications in [[ranking]], in [[recommender system|recommendation system]]s, visual identity tracking, face verification, and speaker verification.}}
 
{{term|[[simulated annealing]] (SA)}}
Line 1,098:
{{term|[[SLD resolution|Selective Linear Definite clause resolution]]}}
{{ghat|Also simply '''SLD resolution'''.}}
{{defn|The basic [[rule of inference|inference rule]] used in [[logic programming]]. It is a refinement of [[Resolution (logic)|resolution]], which is both [[Soundness|sound]] and refutation [[Completeness (logic)|complete]] for [[Horn clause]]s.}}
 
{{term|[[software]]}}
Line 1,128:
 
{{term|[[stochastic optimization]] (SO)}}
{{defn|Any [[optimization (mathematics)|optimization]] [[iterative method|method]] that generates and uses [[random variable]]s. For stochastic problems, the random variables appear in the formulation of the optimization problem itself, which involves random [[objective function]]s or random constraints. Stochastic optimization methods also include methods with random iterates. Some stochastic optimization methods use random iterates to solve stochastic problems, combining both meanings of stochastic optimization.<ref name="spall2003">{{Cite book |last1=Spall |first1=J. C. |url=https://jhuapl.edu/ISSO |title=Introduction to Stochastic Search and Optimization |publisher=Wiley |year=2003 |isbn=978-0-471-33052-3}}</ref> Stochastic optimization methods generalize [[deterministic system (mathematics)|deterministic]] methods for deterministic problems.}}
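Random search is one of the simplest methods with random iterates; the objective function and search interval below are invented for illustration:

```python
import random

# Random search: the iterates themselves are random, even though the
# objective f here is deterministic. f and the interval are illustrative.
random.seed(0)

def f(x):
    return (x - 3.0) ** 2 + 1.0     # minimum value 1.0 at x = 3

best_x, best_val = 0.0, f(0.0)
for _ in range(5000):
    x = random.uniform(-10, 10)     # random candidate iterate
    if f(x) < best_val:
        best_x, best_val = x, f(x)

print(round(best_x, 2), round(best_val, 3))
```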
 
{{term|[[stochastic semantic analysis]]}}
Line 1,140:
 
{{term|[[superintelligence]]}}
{{defn|A hypothetical {{gli|intelligent agent|agent}} that possesses [[intelligence]] far surpassing that of the [[genius|brightest]] and most [[intellectual giftedness|gifted]] human minds. Superintelligence may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act within the physical world. A superintelligence may or may not be created by an {{gli|intelligence explosion}} and be associated with a {{gli|technological singularity}}.}}
 
{{term|[[supervised learning]]}}
Line 1,149:
 
{{anchor|swarm intelligence}}{{term|[[swarm intelligence]] (SI)}}
{{defn|The [[collective behavior]] of [[decentralization|decentralized]], [[Self-organization|self-organized]] systems, either natural or artificial. The expression was introduced in the context of cellular robotic systems.<ref>{{Cite book |last1=Beni|first1=G. |last2=Wang |first2=J. |title=Proceed. NATO Advanced Workshop on Robots and Biological Systems, Tuscany, Italy, June 26–30 (1989) |year=1993 |isbn=978-3-642-63461-1 |pages=703–712 |chapter=Swarm Intelligence in Cellular Robotic Systems |doi=10.1007/978-3-642-58069-7_38}}</ref>}}
 
{{term|[[symbolic artificial intelligence]]}}
Line 1,158:
 
{{term|[[systems neuroscience]]}}
{{defn|A subdiscipline of [[neuroscience]] and [[systems biology]] that studies the structure and function of neural circuits and systems. It is an umbrella term, encompassing a number of areas of study concerned with how [[neuron|nerve cell]]s behave when connected together to form [[neural pathway]]s, [[neural circuit]]s, and larger [[large scale brain networks|brain network]]s.}}
{{glossaryend}}
 
Line 1,176:
 
{{term|[[TensorFlow]]}}
{{defn|A [[Free software|free]] and {{gli|open-source software|open-source}} [[Library (computing)|software library]] for [[dataflow programming|dataflow]] and [[differentiable programming|differentiable]] programming across a range of tasks. It is a symbolic math library, and is also used for {{gli|machine learning}} applications such as {{gli|neural network|neural networks}}.<ref name="YoutubeClip">[https://youtube.com/watch?v=oZikw5k_2FM "TensorFlow: Open source machine learning"] "It is machine learning software being used for various kinds of perceptual and language understanding tasks" — Jeffrey Dean, minute 0:47 / 2:17 from YouTube clip</ref>}}
 
{{term|[[theoretical computer science]] (TCS)}}
Line 1,202:
{{term|[[tree traversal]]}}
{{ghat|Also '''tree search'''.}}
{{defn|A form of [[graph traversal]] that refers to the process of visiting (checking and/or updating) each node in a [[Tree (data structure)|tree data structure]] exactly once. Such traversals are classified by the order in which the nodes are visited.}}
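The three classic orders for a binary tree differ only in when a node is visited relative to its subtrees; the example tree below is invented for illustration:

```python
# Pre-order, in-order, and post-order traversals of a binary tree.
# Tree as nested tuples: (value, left_subtree, right_subtree), or None.
tree = ("F", ("B", ("A", None, None), ("D", ("C", None, None), None)),
              ("G", None, ("I", None, None)))

def preorder(t):                    # node, then left, then right
    if t is None:
        return []
    v, l, r = t
    return [v] + preorder(l) + preorder(r)

def inorder(t):                     # left, then node, then right
    if t is None:
        return []
    v, l, r = t
    return inorder(l) + [v] + inorder(r)

def postorder(t):                   # left, then right, then node
    if t is None:
        return []
    v, l, r = t
    return postorder(l) + postorder(r) + [v]

print(preorder(tree))   # ['F', 'B', 'A', 'D', 'C', 'G', 'I']
print(inorder(tree))    # ['A', 'B', 'C', 'D', 'F', 'G', 'I']
print(postorder(tree))  # ['A', 'C', 'D', 'B', 'I', 'G', 'F']
```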
 
{{term|[[true quantified Boolean formula]]}}
Line 1,211:
 
{{term|[[Turing test]]}}
{{defn|A test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human, developed by [[Alan Turing]] in 1950. Turing proposed that a human evaluator would [[natural language understanding|judge natural language conversation]]s between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a [[keyboard (computing)|computer keyboard]] and [[visual display unit|screen]] so the result would not depend on the machine's ability to render words as speech.<ref>Turing originally suggested a [[teleprinter]], one of the few text-only communication systems available in 1950. {{Harv|Turing|1950|p=433}}</ref> If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test results do not depend on the machine's ability to give correct answers to questions, only how closely its answers resemble those a human would give.}}
 
{{term|[[type system]]}}
Line 1,222:
{{glossary}}
{{term|[[unsupervised learning]]}}
{{defn|A type of self-organized [[Hebbian learning]] that helps find previously unknown patterns in a data set without pre-existing labels. It is also known as [[self-organization]] and allows modeling [[Probability density function|probability densities]] of given inputs.<ref name="Hinton99a">{{Cite book |last1=Hinton |first1=Jeffrey |title=Unsupervised Learning: Foundations of Neural Computation |last2=Sejnowski |first2=Terrence |publisher=MIT Press |year=1999 |isbn=978-0262581684}}</ref> It is one of the three main categories of machine learning, along with [[supervised learning|supervised]] and {{gli|reinforcement learning}}. Semi-supervised learning has also been described and is a hybridization of supervised and unsupervised techniques.}}
{{glossaryend}}