(WIP) Neural Concept Network is a directed network, currently under development, for representing, searching, analyzing, learning, and forming concepts.
is a working note and may be deleted in the official version. • This document uses some animations. Slides marked with the animation icon should be viewed in the PowerPoint version, because the PDF version cannot play animations.
directed network for representing, searching, analyzing, learning, and forming concepts. • It consists of the following two functions. – Concept Network: a function for representing concepts – Neural Network: a function for searching, analyzing, learning, and forming concepts • It is abbreviated as NCN where needed.
in a form that can be understood by those without knowledge of neural networks or mathematics. • It can also represent relative, hierarchical, context-sensitive, self-including, and self-referential concepts, which are complicated to express in conventional concept representations. • It has the functionality of a simplified Spiking Neural Network (SNN), which is used for concept search, (top-down and bottom-up) analysis, learning, and formation.
HLAI (Human-Level Artificial Intelligence) with the ability to think like a human being. • At present, the distance between the top-down and the bottom-up approach to realizing HLAI is too great, so I thought we needed a middle-out approach. • I thought it was necessary to implement the representation and processing of concepts as the function at that middle point.
located below Language and Semantics. • At present, the prevailing flow is to move on to the processing of higher-order concepts after language processing has been realized. • However, I think language processing cannot be achieved unless concept processing, and then the processing of meaning, are realized first.
for concept representation, I ended up creating something like a neural network. • When I built in the SNN functionality and ran it, I found the results useful for the analysis of concepts.
concept of the concept itself. • Therefore, it is defined by the following circular representation: a concept represents a certain concept through its relations to other concepts. • The following sections describe the relation representation (RR) of concepts in NCN.
relate the concept "A" to the concept "B" with a node and a directed edge, as follows. • A node is called a concept, and a directed edge is called a relation. • The concept at the starting/ending point of a relation is called the source/destination concept. [Diagram: A (source concept) → relation (origin) → B (destination concept)]
concept "B". The labels "A" and "B" are only for clarity; in fact a concept may be text, voice, an image, or another neural network. • Internally, each concept is identified by a UUID.
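As a sketch, the node-and-edge representation above might look like the following in Python. The class names `Concept` and `Relation` are assumptions for illustration, not an actual NCN API; only the UUID-based identity and the label-as-convenience points come from the text.

```python
import uuid


class Concept:
    """A node in the network (label is only for clarity; it could be text, audio, an image, ...)."""

    def __init__(self, label=None):
        self.id = uuid.uuid4()  # internal identity is the UUID, not the label
        self.label = label


class Relation:
    """A directed edge from a source concept to a destination concept."""

    def __init__(self, source, destination):
        self.id = uuid.uuid4()
        self.source = source            # the concept at the starting point
        self.destination = destination  # the concept at the ending point


# A --relation--> B
a, b = Concept("A"), Concept("B")
r = Relation(a, b)
```

Because identity lives in the UUID, two concepts with the same label remain distinct nodes.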
concepts is called a sub-relation. • The following represents concept "A" having a relation "C" to "B". If relation1 is the origin, relation2 is a sub-relation. [Diagram: A → relation1 (origin) → B, with C attached to relation1 by relation2 (sub-relation)]
example, the relationship is defined in Japanese sentence order, but in NCN the sequence of relations has no grammatical meaning, so you may define the relationship in English sentence order instead. • What meaning is found in these conceptual structures depends on the interpreting side. [Diagram: "human hand have" (Japanese sentence order) vs. "human have hand" (English sentence order)]
and the passive voice is preferred, it is easier to make the relative representation (described later) in English sentence order. • For intransitive verbs, Japanese sentence order is easier to express in relative terms.
similar to RDF (Resource Description Framework) or a graph database. – e.g., RDF represents a relation by a triple (subject, predicate, object). • When dealing with real information, the three-term expression is not enough. • NCN also represents the relation itself, which corresponds to the RDF predicate, and can combine it with sub-relations to create a more realistic representation of a concept.
a sub-relation is represented by 0.0 &lt; relativity &lt; 1.0. • The closer to 0.0, the closer the attachment point is to the source concept; the closer to 1.0, the closer it is to the destination concept. [Diagram: relation A → B with sub-relations C (relativity = 0.25), D (relativity = 0.5), and E (relativity = 0.75)]
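A minimal sketch of how relativity could position sub-relations, again assuming hypothetical `Concept`/`Relation` classes (NCN's actual implementation is not shown in this document). A sub-relation's destination is the parent relation itself, and `relativity` records where along that relation it attaches:

```python
import uuid


class Concept:
    def __init__(self, label):
        self.id, self.label = uuid.uuid4(), label


class Relation:
    def __init__(self, source, destination, relativity=None):
        self.id = uuid.uuid4()
        self.source, self.destination = source, destination
        # For a sub-relation: 0.0 < relativity < 1.0, where values near 0.0
        # attach near the source concept and values near 1.0 near the destination.
        self.relativity = relativity


a, b, c, d, e = (Concept(x) for x in "ABCDE")
origin = Relation(a, b)  # the origin relation A -> B
subs = [
    Relation(c, origin, 0.25),  # C attaches near the source side
    Relation(d, origin, 0.50),  # D attaches at the midpoint
    Relation(e, origin, 0.75),  # E attaches near the destination side
]
```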
not have grammatical meaning. • As shown below, the order of the things you want to emphasize may differ even for the same fact.
– Mr. Smith / yesterday / this road / passed through: Mr. Smith passed through this road yesterday. (Mr. Smith が昨日この道を通った。)
– yesterday / Mr. Smith / this road / passed through: Yesterday, Mr. Smith passed through this road. (昨日 Mr. Smith がこの道を通った。)
– yesterday / this road / Mr. Smith / passed through: Yesterday, on this road, Mr. Smith passed through. (昨日この道を Mr. Smith が通った。)
able to reproduce the order of emphasis of the concepts, it is necessary to supplement it: with character decoration in the case of written sentences, and with inflection, gesture, etc. in actual conversation. – e.g., Mr. Smith / yesterday / this road / passed through: Mr. Smith "passed through" this road yesterday. (Mr. Smith が昨日この道を "通った"。)
of the relativity disappears, so it is necessary to think about how to represent it. [Diagram: the same network of A, B with sub-relations C, D, E drawn in two ways. Idea 1: color representation. Idea 2: 3D representation]
five fingers. • An insect has six legs with fingers. [Diagram: "human have foot" with sub-relations "two" and "finger five", and "insect have foot" with sub-relations "six" and "finger", showing how the two facts share concepts]
is white. • However, there are exceptions such as albinism and melanism: white crows and black swans exist. • It is necessary to represent such exceptions while preventing catastrophic forgetting (interference).
incorporated into the network structure itself, so that recursive, relative representations of concepts, which were difficult in conventional logical representations, become possible.
the neural potential to express and process information, and is closer to biological neurons than conventional computational-graph neural networks, allowing more flexible information representation and processing. • NCN also gains information-processing power from branched structures equivalent to neurites (nerve fibers), such as axons and dendrites.
A dynamic network can be formed. – Signals and their processing can be superposed. • Cons – It is computationally costly. – It may be subject to restrictions similar to those of humans.
neural network, in addition to the parameters of conventional computational-graph neural networks. • Conventional computational-graph neural network parameters – Weight: synaptic weight; positive or negative real number – Potential: positive or negative real number (mV) – Threshold: positive real number (mV) • Spiking neural network parameters – Attenuation rate: time-decay rate of the potential; positive real number (mV/msec) – Refractory period: positive real number (msec)
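The parameters above can be sketched as a simple leaky integrate-and-fire neuron. The linear decay, the reset-to-zero on firing, and the default values are assumptions for illustration, not NCN's actual dynamics:

```python
class SpikingNeuron:
    """Minimal leaky integrate-and-fire sketch of the listed SNN parameters."""

    def __init__(self, threshold=1.0, attenuation=0.1, refractory=2.0):
        self.potential = 0.0            # membrane potential (mV)
        self.threshold = threshold      # firing threshold (mV)
        self.attenuation = attenuation  # time decay of the potential (mV/msec)
        self.refractory = refractory    # refractory period (msec)
        self._refractory_left = 0.0

    def step(self, dt_ms, weighted_input=0.0):
        """Advance time by dt_ms with the given weighted input; return True on firing."""
        if self._refractory_left > 0.0:
            self._refractory_left -= dt_ms  # no integration while refractory
            return False
        # decay toward 0, then integrate the (weight * signal) input
        self.potential = max(0.0, self.potential - self.attenuation * dt_ms)
        self.potential += weighted_input
        if self.potential >= self.threshold:
            self.potential = 0.0            # reset after the action potential
            self._refractory_left = self.refractory
            return True
        return False
```

With these numbers a single 0.6 mV input stays sub-threshold, but a second input 1 msec later pushes the potential past 1.0 mV, which is exactly the accumulation behavior the later slides rely on.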
an output layer; any neuron can be treated as an input or an output neuron. • Directed edges (relations) used for I/O are represented separately. [Diagram: input/output relation]
• When the input frequency and the amount of the signal exceed the attenuation of the potential, the firing frequency becomes 1/n of the input frequency; eventually, firing stops entirely due to the attenuation characteristics of the potential. • This is because even when a single input amount is below the threshold, the potential can exceed the threshold when inputs arrive two or three times in succession. (pps: pulses per second) [Diagram: an input of 13.0 pps passes through neurons with threshold = 1.0 and weights 1.0, 0.9, 0.9; the firing rate halves at each stage until it reaches 0.0 pps]
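The 1/n firing behavior can be reproduced with a toy simulation. The decay constant below is an assumed value, chosen so that one 0.9 mV input is sub-threshold but two successive inputs accumulate past the 1.0 mV threshold:

```python
threshold = 1.0      # mV
weight = 0.9         # single input is below threshold on its own
decay_per_ms = 0.005 # mV/msec (assumed): slow enough for two inputs to sum
input_rate_pps = 13.0
interval_ms = 1000.0 / input_rate_pps  # ~76.9 ms between input spikes

potential = 0.0
fires = 0
for _ in range(26):  # 2 seconds of 13 pps input
    # decay since the previous input, then integrate the weighted input
    potential = max(0.0, potential - decay_per_ms * interval_ms)
    potential += weight
    if potential >= threshold:
        fires += 1
        potential = 0.0

print(fires / 2.0)  # output rate in pps -> 6.5, i.e. 1/2 of the 13 pps input
```

Every second input crosses the threshold, so the neuron fires at exactly half the input frequency; a faster decay would push this to 1/3, 1/4, ... and finally to no firing at all.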
respond when the distance (number of stages) from the input neuron becomes large. • This is an important feature for preventing infinite firing loops in NCN, whose networks can contain cycles. • Similarly, a higher input frequency causes a wider range of propagation. • This feature is used to control the range affected by the input signal.
of information in the brain; here are two. – rate coding theory – temporal coding theory • NCN uses a frequency coding that combines features of both. • Frequency coding detects how strongly a neuron fires in sync with the frequency of an input signal, as the degree of the relation. • By using coprime input frequencies with a sufficiently large least common multiple, it is possible to determine how much the network reacts to which input frequency, regardless of the propagation path.
frequency has the following forms. – PM: phase modulation – FM: frequency modulation – AM: amplitude modulation • Of these, AM seems to require a population expression over multiple neurons rather than a single neuron, because of the all-or-none law of neural firing.
coprime frequencies are colored in RGB as channels and input from different neurons. [Diagram: inputs of 13.0 pps (channel A), 11.0 pps (channel B), and 7.0 pps (channel C); the next stage fires at 6.5 pps (channel A 50%), 5.5 pps (channel B 50%), and 3.5 pps (channel C 50%); deeper stages fire at 3.25 pps and 1.75 pps with mixed channel contributions of 25% and 12.5%]
input channels, the probability that the frequencies interfere within a unit of time is very low. • Using this characteristic, multiple channels can be separated or superposed in the input, output, and processing of information. • By analyzing the output signal per frequency channel, it is possible to determine which input frequency channel a neuron is reacting to, regardless of the path. • Therefore, compared to a computational-graph NN, this can prevent the network from becoming a black box.
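A sketch of per-channel output analysis, assuming a simple phase-alignment test (the document does not specify NCN's actual analysis method): an output train at half the channel-A rate stays aligned to channel A's 13 pps grid, but not to channel B's 11 pps grid.

```python
def channel_match(spike_times_ms, freq_pps, tol_ms=1.0):
    """Fraction of spikes phase-locked to the grid of the given channel frequency."""
    period = 1000.0 / freq_pps
    hits = sum(
        1
        for t in spike_times_ms
        # a spike matches if it lands within tol_ms of a grid point (either side)
        if (t % period) < tol_ms or period - (t % period) < tol_ms
    )
    return hits / len(spike_times_ms)


# An output firing at 6.5 pps, i.e. every 2nd spike of the 13 pps channel-A input:
train = [i * (1000.0 / 6.5) for i in range(10)]
print(channel_match(train, 13.0))  # fully aligned with channel A
print(channel_match(train, 11.0))  # mostly misaligned with channel B
```

Because 13 and 11 are coprime, a train locked to one grid almost never lands on the other, which is what makes the path-independent channel attribution possible.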
does not work so well, because when nearby frequencies are superposed, each is affected by the potential raised by the other frequency. • There are ways to eliminate this effect, but the interference itself might be usable as a form of information representation, so it is necessary to verify which is better.
channels is that they are "coprime, with a small ratio of differences". • Three consecutive numbers starting from any odd number are pairwise coprime. • Examples: (1, 2, 3), (3, 4, 5), (5, 6, 7), (7, 8, 9), (9, 10, 11), (11, 12, 13), (13, 14, 15), …, (41, 42, 43), (43, 44, 45) • Higher frequencies reduce the ratio of the frequency differences. • However, at a time resolution of 0.1 ms, interference occurred for the combination (43, 44, 45) and higher.
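The coprimality claim is easy to check: consecutive integers are always coprime, and when the first number is odd, the two odd endpoints differ by 2 and so share no common factor but 1.

```python
from math import gcd


def pairwise_coprime(nums):
    """True if every pair in nums has gcd 1."""
    return all(gcd(a, b) == 1 for i, a in enumerate(nums) for b in nums[i + 1:])


# Three consecutive integers starting from an odd number are pairwise coprime:
triples = [(n, n + 1, n + 2) for n in range(1, 45, 2)]  # (1,2,3) ... (43,44,45)
print(all(pairwise_coprime(t) for t in triples))        # True

# Starting from an even number fails, since n and n+2 share the factor 2:
print(pairwise_coprime((2, 3, 4)))                      # False

# Higher frequencies shrink the relative spacing of the channels:
print((3 - 1) / 1, (45 - 43) / 43)  # ratio of differences drops from 2.0 to ~0.047
```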
channels is related to • consciousness, selection, and concentration • gamma rhythm and burst firing • the capacity of short-term memory, about 4±1 chunks* • Some say it has nothing to do with the binding problem. • The boundary between consciousness and the unconscious may be as simple as that of sound: when a certain pattern of sound-pressure changes repeats more than a certain number of times, within a period below a certain length, humans perceive it as a single sound; consciousness may work the same way. * It used to be 7±2 chunks, called the magic number.
the channel representation by frequency could achieve the same processing by adding the influence-range information and channel information directly to the signal. • However, parallel processing then becomes difficult, and performance may fall as the scale increases. • I think it is better to express everything by "wave superposition and time", as in quantum mechanics, and collapse it into particles only when extracting information.
backward propagation of the signal, in the direction opposite to the relation, in addition to forward propagation, for top-down and bottom-up analysis of concepts represented by the Concept Network function. • The following slide shows the order of processing for forward firing and back firing.
to achieve forward and back firing. • In addition, the three frequency channels, phase, and input amount are combined to form an input signal.
– forward excitatory: fo-ex (+1eV) / forward inhibitory: fo-in (-1eV)
– backward excitatory: ba-ex (+1eV) / backward inhibitory: ba-in (-1eV)
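The four signal types could be modeled as a small enumeration. The names mirror the slide's abbreviations, but the code itself is only an illustrative assumption, not an NCN API:

```python
from enum import Enum


class Signal(Enum):
    """The four signal types: direction x excitatory/inhibitory polarity."""

    FO_EX = ("forward", +1)   # fo-ex: forward excitatory
    FO_IN = ("forward", -1)   # fo-in: forward inhibitory
    BA_EX = ("backward", +1)  # ba-ex: backward excitatory
    BA_IN = ("backward", -1)  # ba-in: backward inhibitory

    def __init__(self, direction, polarity):
        self.direction = direction  # along or against the relation
        self.polarity = polarity    # sign applied to the potential


print(Signal.BA_IN.direction, Signal.BA_IN.polarity)  # backward -1
```

In a fuller sketch each signal would also carry its frequency channel, phase, and input amount, as the slide describes.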
frequency channels, the forward/backward direction, and the excitatory/inhibitory property may be analogous to color charge and the top/bottom, strange/charm, and up/down pairs that characterize quarks.
propagation by electrical synapses – At a chemical synapse, signals propagate forward only. – At an electrical synapse, however, signals propagate both forward and backward. – Electrical synapses are present in inhibitory neurons of the hippocampus and cerebral cortex. • Back propagation on dendrites – Potential changes may propagate backward along dendrites. → This seems difficult to reproduce with a simple network.
(myelinated nerve) is fast. • The propagation velocity of dendrites is slow. • Action potentials may also occur on dendrites; these are called dendritic spikes.
propagates action potentials faster than a thinner one. • A potential below the threshold propagates while attenuating; in that case, a thicker neurite may propagate the potential more slowly than a thinner one.
back propagation not only for learning but also for analyzing concepts. • When the forward signal from the source neuron (concept) and the backward signal from the destination neuron (concept) are input at the same frequency (or at 1/n or n times it), the collision point on the neurite changes as the phase of the input signals is changed.
propagation signal and a back-propagation signal, each below the threshold, collide, and the combined potential exceeds the threshold, an action potential occurs in the middle of the neurite. • This state can also be used to analyze the network structure.
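A toy model of the collision mechanism, entirely an assumption: the neurite is treated as discrete compartments, the phase of the backward input as a start offset, and the two sub-threshold pulses travel one compartment per step. Shifting the phase moves the compartment where they meet and exceed the threshold.

```python
def collision_point(n_compartments, phase_offset, amplitude=0.6, threshold=1.0):
    """Compartment where the forward and backward sub-threshold pulses collide.

    Returns None if the pulses never occupy the same compartment or
    their summed amplitude stays below the threshold.
    """
    fwd = 0                                   # forward pulse enters at the source end
    bwd = n_compartments - 1 - phase_offset   # backward pulse enters phase-shifted
    for _ in range(2 * n_compartments):
        if fwd == bwd and 2 * amplitude >= threshold:
            return fwd  # summed potential crosses threshold here: action potential
        fwd, bwd = fwd + 1, bwd - 1
    return None


print(collision_point(11, 0))  # -> 5: in-phase pulses meet mid-neurite
print(collision_point(11, 4))  # -> 3: shifting the phase moves the collision point
```

This is only a cartoon of the slide's idea, but it shows the key property: the collision (and hence the mid-neurite action potential) is steerable purely by the relative phase of the two inputs.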
collision point of the signals through phase control of the forward and backward signals corresponds to changing the position of the relative representation in the concept network.
and the phase of the top-down and bottom-up signals, and by analyzing the firing reaction, it is possible to infer the structure of the network. • This means the network structure can be encoded in firing timing and rate. • Of course, there is no point in inferring the structure of a predefined network. • But if this signal can itself be processed by a neural network, a meta neural network that dynamically represents and processes a virtual neural network could be realized.
possibility of making use of the similarity to radar functions when considering attention and concentration. • Radar scanning modes – Lock-on modes • TWS (Track While Scan): multiple targets can be tracked at the same time. • STT (Single Target Track): only a single target is tracked; this may relate to the concentration of consciousness. • Phased-array radar – Directivity can be given to the synthesized wave by shifting the oscillation timing of many radar elements. – Directed signals might be transmitted by the same mechanism in the brain.
size. • Concept and Relation can be converted into each other. • A Relation becomes a Concept when it is rounded up. • A Concept becomes a Relation when it is made smaller and stretched.