Semantic neural network: Difference between revisions

From Wikipedia, the free encyclopedia
{{distinguish|Semantic network}}
{{Original research|date=September 2007}}


'''Semantic neural network''' (SNN) is based on [[John von Neumann]]'s neural network <nowiki>[</nowiki>von Neumann, 1966<nowiki>]</nowiki> and [[Nikolai Amosov]]'s M-Network.<ref>Amosov, N. M., A. M. Kasatkin, and L. M. Kasatkina. "[https://web.archive.org/web/20171211213617/https://pdfs.semanticscholar.org/53cf/fb9917c98dcaf48be1ed5b7cb9c30c378a7a.pdf Active semantic networks in robots with independent control]." Proceedings of the 4th international joint conference on Artificial intelligence-Volume 1. Morgan Kaufmann Publishers Inc., 1975.</ref><ref>Amosov, N. M., E. M. Kussul, and A. M. Kasatkin. "29. [https://books.google.com/books?id=PSC8AAAAIAAJ&dq=%22NEURONLIKE+NETWORKS,+ATTENTION,+ARTIFICIAL+INTELLIGENCE%22&pg=PA433 NEURONLIKE NETWORKS, ATTENTION, ARTIFICIAL INTELLIGENCE]." Neurocomputers and Attention: Connectionism and neurocomputers 2 (1991): 433.</ref> Von Neumann's network imposes limitations on its link topology; SNN accepts networks without these limitations. Von Neumann's network processes only [[Truth value|logical values]], whereas SNN also accepts fuzzy values. All neurons in the von Neumann network are synchronized by clock ticks; to allow the use of self-synchronizing circuit techniques, SNN permits neurons to be either self-running or synchronized.
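The generalization from logical to fuzzy values can be illustrated with a minimal sketch (hypothetical, not drawn from the cited sources): a neuron whose inputs are truth values in [0, 1] reduces to a logical threshold neuron when its inputs are restricted to {0, 1}, and otherwise propagates fuzzy truth values directly.

```python
class Neuron:
    """Toy neuron processing truth values in [0, 1].

    With inputs restricted to {0.0, 1.0} and fuzzy=False it behaves as a
    logical (von Neumann-style) threshold neuron; with fuzzy=True it
    propagates intermediate (fuzzy) truth values unchanged.
    """

    def __init__(self, weights, threshold=0.5, fuzzy=True):
        self.weights = weights
        self.threshold = threshold
        self.fuzzy = fuzzy  # if False, the output is binarized

    def fire(self, inputs):
        # Weighted average of the input truth values.
        total = sum(w * x for w, x in zip(self.weights, inputs))
        activation = total / sum(self.weights)
        if self.fuzzy:
            return activation  # fuzzy truth value in [0, 1]
        return 1.0 if activation >= self.threshold else 0.0

# Boolean inputs: behaves as a logical neuron.
logical = Neuron([1, 1], fuzzy=False)
# Fuzzy inputs: the intermediate truth value is propagated.
fuzzy = Neuron([1, 1])
```

The names and the averaging rule are illustrative choices, not the specific neuron model used in the referenced papers.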


In contrast to the von Neumann network, semantic networks place no limitations on the topology of neurons. This makes the relative addressing of neurons used by von Neumann impossible, so absolute addressing must be used instead: every neuron has a unique identifier that provides direct access to it from another neuron, and neurons interacting through axons and dendrites hold each other's identifiers. Absolute addressing can be modelled by using neuron specificity, as realized in biological neural networks.


The initial description of semantic neural networks [Dudar Z.V., Shuklin D.E., 2000] contains no account of self-reflectiveness or self-modification abilities, but in [Shuklin D.E., 2004] a conclusion was drawn about the necessity of introspection and self-modification abilities in the system. To support these abilities, the concept of a pointer to a neuron is introduced. Pointers represent virtual connections between neurons. In this model, the neuron bodies and the signals transferred over their connections represent a physical body, while the virtual connections between neurons represent an astral body. It is proposed to create models of artificial neural networks on the basis of a virtual machine supporting the opportunity for paranormal effects.


SNN is generally used for [[natural language processing]].
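Identifier-based absolute addressing can be sketched as follows (a hypothetical illustration, not the implementation from the cited papers): each neuron receives a unique identifier at creation, connections are stored as identifiers rather than direct object references (the "virtual" links described above), and signals are delivered by looking the target up in a registry.

```python
import itertools

_next_id = itertools.count()  # source of unique neuron identifiers
registry = {}                 # unique identifier -> Neuron

class Neuron:
    def __init__(self):
        self.id = next(_next_id)  # unique identifier for direct access
        self.targets = []         # identifiers of downstream neurons
        self.value = 0.0
        registry[self.id] = self

    def connect(self, other):
        # The source neuron records the target's identifier; the link is
        # "virtual" because only the identifier is stored, not the object.
        self.targets.append(other.id)

    def send(self, value):
        # Deliver a signal to every target by absolute address.
        for tid in self.targets:
            registry[tid].value = value

a, b = Neuron(), Neuron()
a.connect(b)
a.send(0.7)
```

Because links are plain identifiers resolved at delivery time, no restriction on topology is needed, and the connection graph can be inspected or rewritten at run time, which is what the pointer concept above requires.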


==Related models==
* [[Computational creativity]]<ref>Marupaka, Nagendra, and Ali A. Minai. "[https://ieeexplore.ieee.org/abstract/document/6033635/ Connectivity and creativity in semantic neural networks]." Neural Networks (IJCNN), The 2011 International Joint Conference on. IEEE, 2011.</ref>
* [[Semantic hashing]]<ref>Salakhutdinov, Ruslan, and Geoffrey Hinton. "Semantic hashing." RBM 500.3 (2007): 500.</ref>
* [[Semantic Pointer Architecture]]<ref>Eliasmith, Chris, et al. "[http://clm.utexas.edu/compjclub/papers/Eliasmith2012.pdf A large-scale model of the functioning brain]." Science 338.6111 (2012): 1202-1205.</ref>
* [[Sparse distributed memory]]


== References ==
{{Reflist}}


* Neumann, J., 1966. [http://www.walenz.org/vonNeumann/ Theory of self-reproducing automata, edited and completed by Arthur W. Burks.] University of Illinois Press, Urbana and London.

* Dudar Z.V., Shuklin D.E., 2000. Implementation of neurons for semantic neural nets that's understanding texts in natural language. In Radio-electronika i informatika KhTURE, 2000. No 4. P. 89-96.

* Shuklin D.E., 2004. [http://www.shuklin.com/ai/ht/en/ai04001f.aspx The further development of semantic neural network models.] In Artificial Intelligence, Donetsk, "Nauka i obrazovanie" Institute of Artificial Intelligence, Ukraine, 2004, No 3. P. 598-606.




* Shuklin D.E. The Structure of a Semantic Neural Network Extracting the Meaning from a Text, In Cybernetics and Systems Analysis, Volume 37, Number 2, 4 March 2001, pp. 182–186(5) [http://www.ingentaconnect.com/content/klu/casa]

* Shuklin D.E. The Structure of a Semantic Neural Network Realizing Morphological and Syntactic Analysis of a Text, In Cybernetics and Systems Analysis, Volume 37, Number 5, September 2001, pp. 770–776(7)
* Shuklin D.E. [https://link.springer.com/article/10.1023/A:1021150001038 Realization of a Binary Clocked Linear Tree and Its Use for Processing Texts in Natural Languages], In Cybernetics and Systems Analysis, Volume 38, Number 4, July 2002, pp. 503–508(6)



[[Category:Artificial neural networks]]

Latest revision as of 14:23, 8 March 2024