The information and its observer: external and internal information processes, information
cooperation, and the origin of the observer intellect
Vladimir S. Lerner, Marina Del Rey, CA 90292, USA, [email protected]
Abstract
We examine the information nature of observed interactive processes during the conversion of observed uncertainty to the observer's certainty, which leads to a minimax information law of both optimal extraction and consumption of information: getting a maximum of information from each observed minimum, and minimizing that maximum while spending it.
The minimax is a dual complementary variational principle, establishing the mathematical form of an information law that functionally unifies the following observer regularities: integral measuring of each observed process under multiple trial actions; converting the observed uncertainty to information-certainty by generating internal information micro- and macrodynamics and verifying the trial information; enclosing the internal dynamics in an information network (IN); building the IN's dynamic hierarchical logic, which integrates the observer's requested information in the IN code; enfolding the concurrent information in a temporarily built IN high-level logic that requests new information for the observer's running IN; and self-forming the observer's inner dynamical and geometrical structures with a limited boundary, shaped by the IN information geometry during the time-space cooperative processes. These functional regularities create a united information mechanism, whose integral logic self-operates this mechanism, transforming multiple interacting uncertainties into physical reality-matter and human information and cognition, which originate the observer's information intellect. This logic holds the invariance of the information and physical regularities following from the minimax information law.
Keywords: impulse cutoff interactive transformation; minimax information law; integral information; cooperative information dynamics; hierarchical network; threshold between objective and subjective observers; self-forming intellect.
Introduction
Revealing the information nature of various interactive processes, including multiple physical interactions, human observations, human-machine communications, and biological, social, economic, and other interactive systems, integrated in an information observer, becomes an important scientific task.
The physical approach to an observer, developed in the Copenhagen interpretation of quantum mechanics [1-3], requires an act of observation as a physical carrier of the observer's knowledge, but this role is not described in the formalism of quantum mechanics.
The information dynamics approach [4-6] presents an information formalism, which fills this gap by describing jointly the external and internal information processes of observers (both objective and subjective) and shows how the subjective observer produces cognitive information dynamics. This formalism extends the results of J. von Neumann [3] and E. Wigner [7,8] in quantum mechanics and A. Wheeler's theory [9,10] of the information-theoretic origin of an observer, according to the doctrine "It from Bit".
The dynamic approach, which considers information independently of its origin and measures it by C. Shannon's relative entropy [11], applies to a random information process [12]. This notion implies that an information observer holds the relative entropy of the measured processes, or its in-formation, when, by interacting, it forms a relative connection with the observations.
The Bayesian inference of an a priori hypothesis (probability distribution) from a posteriori observations, as well as the Kullback–Leibler divergence [13] between the probability distributions, measures the relative information connections in the states of the observed process. Both of these measures are embraced by an entropy integral of the relative probability [6], which measures hidden information connections within the observed processes.
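As a minimal numerical sketch (not from the paper; the two distributions are hypothetical), the Kullback–Leibler divergence between an a priori and an a posteriori discrete distribution can be computed as:

```python
import math

def kl_divergence(p, q):
    """D(p || q) = sum_i p_i * ln(p_i / q_i), in nats; terms with p_i = 0 contribute 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p_prior = [0.5, 0.5]   # hypothetical a priori distribution
p_post  = [0.9, 0.1]   # hypothetical a posteriori distribution

# information gained by updating the prior to the posterior
print(kl_divergence(p_post, p_prior))   # ≈ 0.368 nats
```

The divergence vanishes exactly when the a priori and a posteriori distributions coincide, matching the zero-information case discussed below in Sec. 1.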
E.T.Jaynes [14] applies Bayesian probabilities to propose a plausible reasoning mechanism [15]
whose rules of deductive logic connect maximum Bayes information (entropy) to a human's
mental activities, as a subjective observer.
According to D. Bohm's ontological interpretation of quantum physics [16-18], physical processes are determined by information, which "is a difference of form that makes a difference of content, i.e., meaning", while "meaning unfolds in intention, intention into actions"; and "intention arises from previous perception of meaning or significance of a certain total situation", which entails mental processes.
J.C.Eccles’s quantum approach [19] “is to find a way for the “self” to control its brain”.
In B. Marchal's "realist interpretations of quantum mechanics [20], with existence of parallel situations, the observer has no particular status and can be described by the laws he is inferring from his observation of nature"; this implies "verification of truth via computations" of the situations' logical possibilities.
H. Stapp [21] bases his theory on the "quantum Zeno effect," which, through increasing
"attention density" of the efforts enables brains to select among quantum possibilities, giving
each person's "intentional conscious choices the power to causally affect the course of events…”
G. Tononi [22] introduces integration of the neurological information, affecting brain architecture and functions in producing complexity, while C. Koch [23] reviews this field of theory and neuroscience, which embraces the understanding of human consciousness.
These references, along with many others [24,25], study information mechanisms in an intelligence, explaining them through various physical phenomena whose specifics are still mostly unknown. This paper shows how information and its observer are generated through the integration of multiple random interactions. The functional measure of probability, transformed in the observer, can evolve from an objective interaction to a subjective observer, as it holds and structures its information. The results demonstrate the emergence of the observer's information regularities and intelligence, satisfying a simple natural law during the conversion of uncertainty to certainty in the observer's interaction with the information environment. The paper focuses on formal information mechanisms in an observer, without reference to the specific physical processes that could originate these mechanisms. The formally described information regularities contribute to the basic theory of brain function and to the information mechanisms of human-machine interactions. The paper extends the approach of [4-6] to the information observer.
The paper is organized as follows:
Sec.1 presents the notion of an uncertainty (or entropy) functional (EF), as a logarithmic integral
measure of the probability of transformation. EF is integrated along the trajectories of a random
process. We also introduce the notion of the information observer, as a converter of this
uncertainty to the equivalent certainty-information measured by the information path functional
(IPF) along the shortest path-trajectory. The EF transformations are generated under the action of
an impulse control, which models interactions in a Markov diffusion process.
The impulse cuts off the maximum information concentrated in the kernel of the process [5],
while its cutoff fraction delivers a minimax of the total available process information,
implementing a minimax variation principle (VP)[4]. The minimax VP, satisfying the EF-IPF
equivalence, leads to a dual information maxmin-minimax law, which creates the mechanism of
information regularities from the observed stochastic process.
Sec. 2 describes the information mechanism extracting information from an observed random process under the impulse control, which generates the observer's internal dynamic information processes satisfying the VP. The impulse both transforms a priori to a posteriori observations of a random process and delivers the information extracted by the cutoff to the observer. This reveals information hidden in the cutting of the correlations of the process. The step-up actions of the impulse control convert the extracted information into an information dynamic process, which proceeds while the observer accumulates the hidden information.
The observer's inner information process is composed of the conjugated information dynamic
movements (complex amplitudes probabilities), which are joined in an entanglement through the
impulse's step-down actions. The observer's next interaction breaks down the entanglement,
opening new observations, whose information is delivered through the following step-up action
and transformation. This step-up action intervenes in the external random process, converting,
delivering information and starting new internal dynamics. These proceed during each interval
between the step-up and step-down switches and the observations (Fig. 1). It is shown that this time interval serves for a verification of the extracted information with the minimax criteria before its accumulation, and the interval depends on the received information.
Sec. 3 summarizes the observer's information regularities, following from the applied minimax
law. Satisfying the law, through the verification, information entanglement and the dynamic
attracting cooperative actions, leads to cooperative information dynamics for a multi-dimensional
time-space observing process. The time-space conjugated cooperative dynamics form spiral
trajectories, located on a conic surface (Fig. 2a). A manifold of these cooperating trajectories is
sequentially assembled in elementary binary units (doublets) and then in triplets, producing a
spectrum of coherent frequencies. The sequence of cooperating triplet information structures
forms an information network (IN) with a hierarchy of nodes (Fig. 3), which accumulate the
extracted external information. Since such a conserved information is enclosed through
sequential cooperation of the IN nodes, the created IN's information structure condenses all
currently accepted information in its terminal node. The attracting cooperative action, connecting
the IN nodes along this triplets' hierarchy, is intensified along the hierarchy, since each
sequential IN's triplet transforms more condensed information than the previous one. The strength
of the attracting force's action depends on the speed of the received information, which is
determined by the maximal frequency of the currently observed information spectrum.
The node location within the IN hierarchy determines the value (quality) of the information encapsulated in this node, and its information density. A sequence of the successively enclosed triplet-nodes, represented by the control's time intervals (Fig. 2b), generates the discrete digital logic as the IN code. Communication via the codes sequentially enfolds the code's information in the hierarchy of the IN's logical structure. The optimal IN code has a double spiral (helix) triplet structure (DSS) (Fig. 4), which is sequentially enclosed in the IN's final code, allowing the reconstruction of the IN dynamics, its geometrical structure, and topology (Fig. 5).
Such IN integrating logic, which concentrates and accumulates all the IN's information capacity, evaluates the observer's meaningful information and knowledge, emerging from information collected during the observations.
Sec. 4 details the information mechanism of building the IN logical structure from current observations. According to the minimax law, an observer is required to deliver information of growing density, which is needed both for creating a new IN triplet's node and for attaching this triplet to the existing IN. This leads to two information conditions: a necessary one, to create a triplet having the required information density, and a sufficient one, to generate the cooperative force that enables the triplet to adjoin the observer's IN logic, holding the sequentially increasing quality of information.
Both of these conditions could be satisfied by delivering the needed information through the
reciprocal interaction of external and internal information processes during the verification of the
selected external information with the information that has been accumulated and stored by the
IN current node's cooperative structure. The required information density should comply with the
currently observed information spectrum, as described above.
Sec. 5 identifies an information threshold, which separates subjective and objective observers. It specifies both the necessary and sufficient information conditions (Sec. 4), based on the concept of a sequential and consecutive increase of the observer's quality of information, following from the VP minimax. Selecting and extracting the needed information to generate a growing cooperative force, coordinated with the quality of the accumulated information, requires initiating the observer's dynamic information force, which represents its intentional dynamic effort. We associate the coordinated selection, involving verification, synchronization, and concentration of the observed information necessary to build its logical structure of a growing maximum of accumulated information, with the observer's intelligence action, which is evaluated through the amount of quality information spent on this action.
This leads, in Sec. 6, to the notion of intelligence belonging to a subjective observer, as its ability to create information of increasing quality by building a particular IN logic, which requires spending the observer's meaningful information. The IN logic self-evaluates its own intelligence and the observer's experience to find the meaning of its information. In multiple communications, each observer could send a message-demand, as a quality messenger (qmess), enfolding the sender IN's cooperative force, which requests access to other IN observers. Communications among such numerous observers allow each observer to increase its personal IN intelligence level and generate a collective IN logic. This not only enhances the collective's ideas but also extends and develops them, expanding the intellect's growth.
The key sequence of the observer's information processes is shown in Fig. 6, with potential applications to the Human Brain Project. This is illustrated by examples from neuronal dynamics in the Appendix, which support the formal findings of this paper.
1. Notion of information observer
1a. Uncertainty and Information
Information is associated with diverse forms of changes (transformations) in material and/or non-material objects of observation, expressed universally and independently of the changes' cause and origin. Such generalized changes are brought about potentially by random events and processes, described via the related probability in a probability space in the theory of probability, founded as a logical science [26]. This randomness, with its probabilities, is a source of information; however, only some, not all, randomness produces information. The source specifics depend on the particular sequence of random events in the probability space.
A random process, considered as a continuous or discrete function x(ω, s) of a random variable ω and time s, is described by the changes of its elementary probabilities from one distribution (a priori) P^p_{s,x}(dω) to another distribution (a posteriori) P^a_{s,x}(dω), in the form of their transformation

p(ω) = P^a_{s,x}(dω) / P^p_{s,x}(dω) . (1.1)
Such a probabilistic description generalizes different forms of specific functional relations,
represented by a sequence of different transformations, which might be extracted from the source,
using the probabilities ratios (1.1).
The probability ratio in the form ln[p(ω)] holds the form of a logarithmic distance

−ln p(ω) = −ln P^a_{s,x}(dω) − (−ln P^p_{s,x}(dω)) = s_a − s_p = s_{ap} , (1.1a)

represented by a difference of the a priori s_a > 0 and a posteriori s_p > 0 entropies; each of them measures the uncertainty for a given x(ω, s) in the related distribution, while the difference measures the uncertainty at the transformation of probabilities for the source events, which satisfies the entropy's additivity.
Because each of the above probabilities and entropies is random, a mathematical expectation

E_{s,x}{−ln[p(ω)]} = E_{s,x}[s_{ap}] = S_{ap} , (1.2)

defines an average tendency of the functional relations (1.1), (1.1a), called the mean uncertainty of a random transformation. For a Markov diffusion process, described by a stochastic equation with a controllable drift a^u(t, x_t) and diffusion σ(t, x), this uncertainty has an integral measure on the process trajectories x_t [27]:

S_{ap} = 1/2 E_{s,x}[ ∫_s^T a^u(t, x_t)^T (2b(t, x_t))^{−1} a^u(t, x_t) dt ], 2b(t, x) = σ(t, x)σ^T(t, x) > 0 . (1.3)
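A crude Monte Carlo sketch of the entropy functional (1.3) for a one-dimensional diffusion. All model choices here (the drift a^u(x) = −x, σ = 1, the initial state x_0 = 1) are hypothetical illustrations, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0                      # constant diffusion; 2b = sigma**2
a_u = lambda x: -x               # hypothetical controllable drift

def entropy_functional(T=1.0, n_steps=500, n_paths=20000, x0=1.0):
    """Monte Carlo estimate of S_ap = 1/2 E[ int_s^T a_u^T (2b)^{-1} a_u dt ]
    along Euler-Maruyama sample paths of dx = a_u dt + sigma dW."""
    dt = T / n_steps
    x = np.full(n_paths, x0)
    s = np.zeros(n_paths)
    for _ in range(n_steps):
        drift = a_u(x)
        s += drift**2 / sigma**2 * dt          # integrand a_u (2b)^{-1} a_u
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return 0.5 * s.mean()

print(entropy_functional())
```

For this Ornstein–Uhlenbeck-type choice the estimate is close to the analytic value ≈ 0.358; the point of the sketch is only that (1.3) is an average of a quadratic drift cost along the trajectories.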
A change brings certainty, or information, if the uncertainty S_{ap} is removed by some equivalent entity, called the relative information I_{ap} of a random transformation, at the equality

I_{ap}[x_t] − S_{ap}[x_t] = 0 , (1.4)

where x_t is the uncertain process carrying transformation (1.1), x_t is the information process resulting from this transformation, and I_{ap}[x_t] is an integral measure on x_t [4]. Information is delivered if I_{ap} ≠ 0, which requires 0 < p(ω) < 1. The condition of zero information, I_{ap} = 0, corresponds to a redundant change, transforming an a priori probability to an equal a posteriori probability; such a transformation is identical, i.e., informationally indistinguishable, redundant.
Information depends on whether a process or an event is considered, since S_{ap} and I_{ap} include Shannon's formulas for the entropy and relative information, respectively, of the states (events) [11]. Some logical transformations in the theory of symbolic dynamic systems [28] preserve an entropy as a metric invariant, allowing these transformations to be classified.
1b. Information Observer
Observing a priori and a posteriori processes (events), an observer connects them by measuring information (1.4), and such a connection integrates the uncertain observations in an information process. The word in-for-ma-tion literally means the act or fact of forming an information process (for example, by connecting the observed events), which builds its observer that holds the process information. This link implies a dual complementary relation between information and the observer: information means an observer, and an observer is a holder of this information. The observed information creates the observer, and the observer, by converting the observed uncertainty to its certainty, builds and holds the observed information. An elementary information observer holds just a single connection of two states (as a bit of this information). An elementary natural objective observer is a holder of natural information in a quantum entanglement, which delivers a quant of the entangled dynamic states (qubit).
The elementary objective observer could be a result of emerging interactions in the observing
process, which produce a sharp impact, analogous to the impulse control’s actions.
The impulse action, applied to an observed Markov diffusion process, cuts off the related
correlations and extracts hidden information, enfolded by the correlations [5].
While MaxEnt is a theoretical tool for predicting the equilibrium (stationary) states, MinimaxEnt is a theoretical tool for predicting the non-equilibrium (non-stationary) states, as hidden information between stationary states. Measuring this information via the integral functional measures (1.3), (1.4) implies transforming the a priori process x_a(t) to the a posteriori process x_p(t) during this impulse control action. The functional probabilistic transformation of the Markov diffusion process selects Feller's measure of the kernel information [5], while the impulse releases its maximum, and the impulse cutoff fraction delivers a minimax, which accumulates the total
information of this transformation [5, 27]. The actual impulse height and width (as its difference from a δ-function) limit the amount of its information, which also includes the information used on the cutoff action. Fig. 1 illustrates both the extraction of hidden information through cutting off the correlations covering this information (connecting the observed processes x_a(t), x_p(t) before the extraction), and the obtaining of this hidden information by an observer. Such hidden information, being transformed under the control's actions, initiates the observer's inner information process (Sec. 3), composed of conjugated information dynamic movements (complex amplitudes probabilities), which are joined in an entanglement (Sec. 2). (The probabilities of the measured information processes, defined in Sec. 1, are distinct from general quantum probabilities.)
The transformation and equalization of the integrals (1.4) convert the functional measure (1.3) of the uncertain random process x_t to the path integral information measure of the dynamic (certain) process x_t [6(1)]. The observer accumulates the extracted information through its internal dynamics, which, at the entanglement, while holding the measured information, generates both local and nonlocal attracting actions, enabling it to adjoin new information [5]. By attracting new information and dynamically processing this information, the observer starts elementary cooperatives of its inner information structure, which keep the cooperative connections [6]. Hence, the observer is a creator, extractor, holder, and processor of its information, measured in observation, while the random source might include external information not yet measured by the observer.
By implementing (1.4), the observer's impulse control cuts off a minimum of the observed information, which is measured by the entropy functional during the jump-wise transformation, and extracts its maximum, concentrated in the diffusion process' kernel [27]. Hence, the extraction of maximal hidden information under the impulse control's actions satisfies the maxmin principle, which allows an observer to get a maximum of information from each of its observed minima and to minimize the maximum while consuming it through the inner information dynamics [4,6]. The minimax principle carries the optimal transformation, providing the best coordination (matching) between the generated maximum and its consumed minimum and bringing the observer stability at their joint compensation. The optimal maxmin-minimax complementary principle expresses an information law, applied to the observer's dual relations with the measured information. This information law's integral form (1.4) satisfies the physical least action principle [4,5].
We assume that a subjective observer, in addition to the objective observer, could select the observed process and process the acquired information using this optimal criterion for selective accumulation, evaluation of the meaning of the collected information, and communication with other observers. While each particular observer may not implement its optimal law, applying the minimax law allows revealing the information regularities for both objective and subjective observers. The questions are: What generates a subjective observer? Why and when does the difference between an elementary objective observer and an elementary subjective observer arise? Which conditions are required to transform an objective observer into a subjective observer?
To get the answers, we describe in Sec.2 a formal information mechanism of the observer’s
operations, which creates an observer, implementing the minimax principle, and find the required
conditions in Secs.3,4.
Some of the observer's operations carrying out the information transformation (1.1), (1.2), as well as the acquisition and transmission of information, require spending energy, which leads to binding information with energy in related classical physical and quantum substances, associated with the thermodynamics of this conversion. A classical physical measurement of this information-energy conversion requires an energy cost to decrease the related physical entropy at least by the constant k_B(ΔT) (depending on a difference of related temperatures ΔT), which transforms the information measure to a physical one at measuring, or erasing and memorizing, this information [29-31].
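As a numerical illustration of this energy cost, the standard Landauer bound (a closely related result, not an equation from this paper) states that erasing one bit at temperature T dissipates at least k_B T ln 2 of energy:

```python
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K (exact SI value)
T = 300.0                   # room temperature, K

# minimum energy dissipated to erase one bit (Landauer bound)
E_bit = k_B * T * math.log(2)
print(E_bit)                # ≈ 2.87e-21 J
```

The tiny magnitude of this bound is why the information measure can be treated abstractly in most of the paper, while still being convertible to a physical measure when needed.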
Quantum measurement involves collapse of a wave function, necessary to transform an
information barrier, separating the distinguished from the undistinguished, to a physical quantum
level, which corresponds to an interaction at an observation. At an entanglement, its natural
cooperation holds the barrier physical quantum information. Other physical transformations,
holding information, bind the required equivalent energy with information. The mathematical definition of information (1.4) can be used to determine its mathematical equivalent of energy:

E_{ap} = I_{ap} k_B , (1.5)
which is asymmetrical, embracing its integral information measure (1.3), (1.4) that also applies to the observer's inner dynamics. This information measure, defined for a random non-stationary, generally nonlinear information process, extends the equivalent relation (1.5) to non-equilibrium processes, as well as to physical quantum information processes. Since probabilities can be defined for non-material and material substances alike, the transformation connecting them via the relative information serves as both a bridge between these diverse entities and a common measure of their connections. However, there are no reversible equivalent relations between energy and information: the same quantity of information could be produced with different costs of energy, and, vice versa, the same energy can produce different quantities of information. The observer's measured information is preserved and standardized by encoding this information in an optimal code word from a chosen language's alphabet. This brings a unique opportunity of revealing the information connections of different observed processes through the code's common information measure, which unifies observers' information similarities.
The length of the shortest representation of an information string by a code, or a program, measures algorithmic information complexity (AIC) [32, 33], which can detect a random string as an absence of regularity among other possible code strings. Encoding the observed process by the integral information measure provides a shortest finite program (in bits) [6(2)], connecting the considered approach to AIC. Since the entropy functional (1.3) has a limited time length of averaging the process' randomness, and it generates a finite code, being an upper bound, the measured cooperative macrosystemic complexity (CMC) [4,34] is computable, in opposition to the incomputability of Kolmogorov's complexity. CMC measures a cooperative order (among ordered substances), which increases with a growing number of ordered co-operations, as opposed to AIC, which is reduced with increasing order among disorder. While both the physical realization of the information transformation and the encoding of measured information are important components of a physical observer, we focus first on the information observer and its formal information mechanism, which a particular physical observer can implement.
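AIC itself is incomputable, but a standard computable proxy, compressed length, illustrates how a code-length measure detects the absence of regularity in a random string. This is a sketch using zlib, not the paper's CMC measure:

```python
import random
import zlib

random.seed(0)

regular = ("01" * 500).encode()                            # highly ordered 1000-byte string
noisy = bytes(random.getrandbits(8) for _ in range(1000))  # essentially incompressible string

# shorter compressed length ~ more regularity detected (lower complexity)
print(len(zlib.compress(regular)), len(zlib.compress(noisy)))
```

The ordered string compresses to a few dozen bytes, while the pseudo-random string does not compress at all, mirroring how a random string shows no regularity among possible code strings.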
2. The information mechanism of extracting information from an observed random process and generating the observer's internal information processes
Implementation of the information transformation requires applying an impulse (as a control), which extracts the measured hidden information between the observed processes x_a(t), x_p(t) through double stepwise control actions: the first, step-down, action cuts x_a(t) and discloses its coupling information with x_p(t); the second, step-up, action finalizes the extraction of this hidden information (during the impulse) to an observer, while attaching the cutoff portion to the second process. For a multi-dimensional random process, each transformation and its related impulse are applied to all of the process' dimensions, while the multi-impulses' extracted information, connected through these multiple information transformations, reveals the whole process' integral information, evaluated by (1.2). This process' time interval satisfies the minimax of the observer's dynamic stability, and each impulse transformation is located within a space-time of the measured process. Even though an impulse models an interaction, the observer, generally, can measure information not necessarily under the impulse control's actions.
However, when information is extracted by the impulse actions that implement the information transformations satisfying the minimax principle, the observer obtains its optimal information. An information connection assumes a superposition of the measured process' states, which might not be at the same time and at the same place. In particular, the superposition could be a non-local entanglement at the observer's quantum information level. The entanglement, producing a quant of the entangled dynamic states, sets up an elementary observer and generates both local and nonlocal attracting actions, enabling it to adjoin new information. The extracted hidden information covers these dynamic states as its "hidden variables" in quantum mechanics.
We start studying the information mechanism with an elementary information observer, as a possessor of a qubit from the superposition of a two-state quantum system in a multi-state system. Such a possessor could emerge from any previous information extraction via impulse control actions or interactions. The observer might interact with a random environment by the end of the interval of entanglement Δ_1, at moment τ_1^o (Fig. 1), when the environment can break the entanglement. A breaking action could also produce the step-down action, as the ending part of an a priori impulse's control action (Fig. 1a,b,c) that had initiated this entanglement. At the breaking moment τ_1^o, the observer meets the random environmental process x_t, opening its observation, which continues during the time interval δ_1. The emerging classical random process x_t affects the dissolved entanglements by delivering additional information to the observer, which statistically connects the quantum states (x_{t+}, x_{t−}) with the related statistical states in classical-quantum processes x_{t+}, x_{t−} [37]. Correlations of the connected processes absorb the additional information, as hidden information covering these correlations, and the observer gets this active information from the random surroundings [36].
The observer interacts and communicates with the environment throughout the observer window's time interval δ_1, where each interaction binds the environmental randomness with the observer. A classical analog of quantum interactions with a random environment is communication between the observers, which share random information over a discrete memoryless channel [37]. In such a noisy classical-quantum communication, the entangled information, sent over the channel, is less disturbed compared to a regular noisy classical Shannon channel. During communication through the observer's window, the observer gets less additional external information, compared to information which was not entangled before the observation. This initially non-communicating observer starts getting only classical information from the random process. More general noisy classical-quantum communications include interactions between the observer's inner dynamic cooperative units in multi-dimensional observations [4,6].
In a stochastic differential equation, as a mathematical model of a Markov diffusion process, the impulse control provides the transformation from the Brownian process x_a(t) to the Markov process x_p(t) (with a finite drift) by changing the process's drift from zero to the control level [27]. Such a deterministic intervention of the control in the process modifies the drift function's random contribution, which transfers the control's information input to process x_p(t). The cutting-off control acts on the process' states having maximal entropy at this cutting moment. (This also corresponds to R. Frieden's [38] most likely sample of the random states with maximum entropy.) In the Markov random field, these random ensemble states, possessing a maximum of the information distinction simultaneously, should be opposite-directional, mutually commuting mirror variables. The observer's step-up action with the optimal control

v_o = −2x(τ_2) (2.1)

[4,5], starting at the cutting moment τ_2 (by the end of the time interval δ_1), intervenes in the opposite-directional random processes' ensemble x_{t+}(τ_2), x_{t−}(τ_2).
The separated elements of the correlations provide independence of the cut-off ensembles,
whose information (at the cutting moment) does not hold physical properties for the observer,
since it is not yet bound with the observer. The separation is the act that starts measuring
information, which before that act is expressed only by relative probability. Therefore, this is
abstract pre-measured information, hidden in the correlations of the random process, which can
be held in quantum correlation (on a core of the initial process' randomness). During the cut,
the measured hidden information is changed by the control intervention, which cuts both
classical and quantum correlations. That is why the step-up control transfers the hidden
information, cut off from the EF transitional probability, to the observer's measured
process. If quantum entanglement is preserved during a measurement, then a hidden quantum
connection is transferred to the measured process. Function (2.1) activates doubling each of
these states with the opposite signs and selecting the opposite directional copies x+ (τ 2 ), x− (τ 2 )
of the random process’ ensemble xt + (τ 2 ), xt − (τ 2 ) to form the opposite step-up controls
vo1 = −2 x+ (τ 2 ), vo 2 = −2 x− (τ 2 ) .
(2.1’)
By the following moment τ 2o , the opposite-directional step-up actions initiate the observer's
inner dynamic processes xt ( x+ (t ), x− (t )) , composed of the parallel conjugated information
dynamic processes, Fig.1a. These processes' dynamics will be adjoined at the moment τ 2 k , which
depends on the obtained cutoff portion of information [5]. Such duplicative stepwise acquisition
of information includes checking the information's consistency and correctness through an
interference of the observer's inner dynamics with the current observations, which is considered
below. The conjugated information dynamics, satisfying the extraction of a maximal hidden
information under impulse control's actions, extend B. Marchal's law [20] of a duplicative nature
for an observer's parallel quantum dynamic processes, tested through the observer's
computational logic. Under the minimax law, the impulse control cuts off a maximal minimum of
information (measured by entropy functional (1.2) during each stepwise transformation), and then
processes it according to the observer's complementary dual (max-min) relations with the incoming
information; that integrates each impulse's information in the information dynamics, employing
the law's variation measure. This law requires forming a dynamic constraint [4,5], imposed on
the measured process and implemented by the step-up control actions:
u(τ 2o ) = v(τ 2o )α2o , v(τ 2o ) = −2 x(τ 2o ) ,
(2.1”)
which at the moment τ 2o starts the conjugated dynamic process xt ( x+ (t ), x− (t )) .
These processes' initial conditions ∓2 x(τ 2o ) are selected from the above intervention, while
their information speed is identified through the measured relative dispersion function of the Markov
process:
α 2τ = b2τ ( ∫ b2τ dt ) −1 , b2τ = 1 / 2 ṙ2τ ,
(2.1a)
where r2τ is correlation of the information process, delivered on the observer window’s locality
[39]. The control transforms information speed (2.1a) to the inner process’ dynamics as an
eigenvalue of its eigenfunction
α2o = −α2τ ,
(2.1b)
satisfying the invariants formalism of the information dynamics [6(1)]. Thus, Markov diffusion
(2.1a) drives this information speed and limits it through the observer dynamics’ maximal
acceptable diffusion noise, which restricts maximal frequency of the observed spectrum.
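The identification (2.1a), (2.1b) can be sketched numerically. The exponential correlation function below, with its decay rate, is an assumed example only, not the source's identified process:

```python
import numpy as np

# Assumed correlation function r(t) of the observed process on the window's
# locality; r(t) = exp(-2*lam*t) is an illustrative choice only.
lam = 0.7
t = np.linspace(0.01, 1.0, 1000)
r = np.exp(-2.0 * lam * t)

# (2.1a): b = 1/2 * dr/dt, alpha_2tau = b * (integral of b dt)^(-1),
# evaluated at the end of the observation interval.
b = 0.5 * np.gradient(r, t)
b_integral = float(np.sum(0.5 * (b[1:] + b[:-1]) * np.diff(t)))  # trapezoid rule
alpha_2tau = b[-1] / b_integral

# (2.1b): the control transfers the speed to the inner dynamics with opposite sign.
alpha_2o = -alpha_2tau
print(alpha_2tau, alpha_2o)
```

The sign inversion (2.1b) is exact by construction here, while the value of α 2τ itself depends entirely on the assumed correlation function.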
Since the applied optimal control (2.1) is synthesized by the minimax variation principle (VP)
(which optimizes the transformation of the transition probability of observed diffusion process to
maximal probability of a quasi-dynamic information process [5]), such optimal control
implements this VP through transforming each bit of the collected information to the related
“Yes” or-and “No” stepwise deterministic actions ±u(τ 2o ) . This generates two deterministic
possibilities:
±u(τ 2o ) = ∓2 xt (τ 2o )α2o
(2.2)
starting the conjugated dynamic processes with these opposite initial conditions and the dynamic
speeds (2.1b), selected from the observed information processes.
Because the selected speed α 2τ (eigenfunction) is determined by the information functional (1.3, 1.4)
(at the transformation) [39], which expresses the minimax criterion, parameter (2.1a) is chosen
from the available observations to satisfy this criterion. Specifically, based on this criterion, the
observer first makes its choice from the observed multiple random information by selecting the
portions of information according to this criterion [40]. Secondly, the observer transforms this
selected random information to the potential two dynamic possibilities “Yes” or “No” by
generation of the conjugated dynamic information processes, whose time interval depends on
both initial information speed α 2o and starting amplitude 2 | x(τ 2o ) | of these dynamics:
x1,2 (t ) = ∓2 x(τ 2o )[cos(α 2ot ) ± j sin(α 2ot )].
(2.3)
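A minimal numerical sketch of the conjugated pair (2.3); the amplitude x(τ 2o ) and speed α 2o below are assumed values for illustration:

```python
import numpy as np

x0 = 0.5      # assumed starting amplitude x(tau_2o)
alpha = 1.3   # assumed initial information speed alpha_2o
t = np.linspace(0.0, 2.0, 201)

# (2.3): x_{1,2}(t) = -/+ 2 x(tau_2o) [cos(alpha*t) +/- j*sin(alpha*t)]
x1 = -2.0 * x0 * (np.cos(alpha * t) + 1j * np.sin(alpha * t))
x2 = +2.0 * x0 * (np.cos(alpha * t) - 1j * np.sin(alpha * t))

# The processes are mutually conjugated: x1(t) = -conj(x2(t)), their sum is
# purely imaginary, and both keep the constant amplitude 2|x(tau_2o)|.
assert np.allclose(x1, -np.conj(x2))
assert np.allclose(np.abs(x1), 2.0 * abs(x0))
```

The constant amplitude and the mutual conjugation are properties of form (2.3) itself, independent of the assumed numeric values.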
Third, under the periodic frequent Yes-No actions, generated by the integral information
measurement of the impulses, the effect of decoherence appears [41, 42], which is associated
with a quasi-collapse (emerging as quantum correlations) for each superimposing pair of the
conjugated amplitudes' probabilities. Each pair starts its local entanglement (on time scales
shorter than the measurement), which is suppressed by the interactive impulses. (On the Schrödinger
path to a more probable unstable entanglement [5], the superposing conjugated probabilities
and entropies are reversible, satisfying their invariance in such a quasi-collapsed entanglement,
whereas the instability is continually suppressed through the impulses' integration.)
Even though each interacting impulse's information selects each Yes-No action, while
destroying its local correlations, the integral information measurement, which holds the series of
impulses, possesses a composition of these correlations over its time.
Integration of these multiple correlations-entanglements (for each n-dimensional process, via the
information functional) determines the dynamic evolution of these emerging collections of
reversible information microdynamics to irreversible information macrodynamics, which can be
described by the transition of von Neumann's entropy for a microcanonical ensemble to its
thermodynamic form during the time interval of the measurement [5]. That integral time finally
verifies the measured information of the observations, providing a border parameter between the
micro- and macrodynamics. This time interval finalizes the dynamic measuring of the initial
hidden information, concentrated in the cutoff correlations, which is correlated with the observer's
action during the control's intervention in the observed process. According to [43], such
destruction of the initial correlations and then formation of new correlations during the measurement
potentially erases the primary hidden information, leading to decoherence [42].
On the macrolevel, the VP extremals of information path functional, which averages a manifold
of the hidden information, describe the information macrodynamics in terms of the information
forces acting on these flows [4], distributed in space-time along the measured process.
The conjugated dynamics adjoin at the moment τ 2 k when the VP dynamic constraint is turned off
through termination of both opposite double controls, which triggers the start of the initial
impulse, Fig.1b. The process' time interval t2 k = τ 2ok − τ 2o from its starting moment τ 2o up to the
moment of its stopping τ 2ok = τ 2 k + o (where o is a small locality of turning off the controls),
determines the ratio
t2 k = aio / α2o ,
(2.4)
where aio (γ io ) is a dynamic invariant, depending on the observer spectrum's parameter γ io .
Measuring information speed by (2.1a) allows identifying parameters γ io via a ratio of the
spectrum’s nearest (i, k ) information speeds:
γ io = γ io [αio / αko ] .
(2.5)
Quantity of information, collected by the dynamic process during the time interval t2k ,
determines the measured information invariant value
ai (t2 k ) = −aio exp aio (2 − exp aio ) −1 ,
(2.6)
which should consume information equal to that provided by the current i-th portion of observation:
ai (γ io |2τ ) = δ1iα i 2τ ,
(2.6a)
at ai (t2 k ) = ai (γ io |2τ ) .
(2.6b)
Therefore, each portion of observation (2.1a) delivers information (2.6a,b), which, using (2.4),
evaluates time interval t2 k of the observer’s inner information dynamics.
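Relations (2.4) and (2.6) can be evaluated directly; the numeric values of the invariant and the information speed below are assumptions for illustration:

```python
import math

def a_i(a_io: float) -> float:
    # (2.6): a_i(t_2k) = -a_io * exp(a_io) * (2 - exp(a_io))^(-1)
    return -a_io * math.exp(a_io) / (2.0 - math.exp(a_io))

a_io = 0.25      # assumed dynamic invariant a_io(gamma_io)
alpha_2o = 1.5   # assumed initial information speed

t_2k = a_io / alpha_2o  # (2.4): time interval of the inner information dynamics
info = a_i(a_io)        # (2.6): invariant information measured over t_2k
print(t_2k, info)
```

Under these assumed values the interval t 2k scales inversely with the identified speed α 2o , as (2.4) states.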
This information is verified via the minimax criterion through both direct selection and
processing by the interactive information dynamics under the applied opposite controls (2.2).
Specifically, the observed information Sˆ[ xt ( xt + (τ 2 ), xt − (τ 2 ))] , extracted under control (2.2),
generates complex information function S ( xt (τ )) = [ S ( x1t (τ )), S ( x2t (τ ))] of the conjugated
processes x1t (τ ) and x2t (τ ) :
S ( x1t (τ )) = Sa (τ ) + jSb (τ ), S ( x2t (τ )) = Sa (τ ) − jSb (τ ) ,
which at the moment τ = τ 2ok of imposing the dynamic constraint satisfy equality:
Sa (τ ) = − Sb (τ ) .
(2.7)
This leads to S ( x1t (τ )) + S ( x2t (τ )) = − Sa (τ )( j − 1) + Sa (τ )( j + 1) = 2 Sa (τ ) ,
(2.7a)
or to S ( x1t (τ )) + S ( x2t (τ )) = Sb (τ )( j − 1) − Sb (τ )( j + 1) = −2 Sb (τ ) .
(2.7b)
Hence, total initial conjugated information S ( x1t (τ 2o )) + S ( x2t (τ 2o )) = S12t (τ 2o ) is converted in
S ( x1t (τ 2ok )) + S ( x2t (τ 2ok )) = S12t (τ 2ok ) = 2 Sa (τ 2ok )
(2.7c)
by the moment τ = τ 2ok , when interval of entanglement Δ 2 of these information processes
starts. Because of that, at Sa (τ = τ 2ok ) = ai (τ 2ok ) , total real dynamic information aid (τ 2ok ) ,
generated at this moment by control (2.2), being applied during t2k , is
2Sa (τ = τ 2ok ) = 2 | ai (τ 2ok ) |=| aid (τ 2ok ) | ,
(2.8)
where from (2.6b) we get
aid (τ 2ok ) = 2ai (t2 k ) .
(2.8a)
Fixing this equality by applying control (Fig.1b) brings irreversibility in the above conjugated
information dynamics.
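The algebra of (2.7)-(2.7c) is directly checkable with complex arithmetic; the numeric value of Sa is assumed for illustration:

```python
# (2.7): at the adjoining moment, S_b = -S_a (assumed numeric S_a for illustration).
Sa = 0.35
Sb = -Sa

S1 = complex(Sa, Sb)    # S(x_1t) = S_a + j*S_b
S2 = complex(Sa, -Sb)   # S(x_2t) = S_a - j*S_b

total = S1 + S2
# (2.7a,c): the imaginary parts cancel, so the total conjugated information
# is the real quantity 2*S_a, matching 2|a_i| in (2.8).
assert total.imag == 0.0
assert abs(total.real - 2.0 * Sa) < 1e-12
print(total.real)
```

The cancellation of the imaginary parts holds for any Sa once (2.7) is imposed, which is why the converted information (2.7c) is real.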
The delivered information, needed for the information dynamics, consists of the following parts:
(1) the information collected from the observations, ai (τ 2 ) , which is transferred to ai (τ 2o ) ;
(2) the starting control's information ai1c (τ 2o ) , generating the Yes-No stepwise actions;
(3) the stepwise control information ai 2 c (τ 2o ) , generating the turning-off action at the moment
τ = τ 2ok , when (2.7a, b) are satisfied.
Summing all external information contributions leads to balance Eq.:
ai (τ 2o ) + ai1c (τ 2o ) + ai 2 c (τ 2ok ) ≅ aio ,
(2.9)
where ai (τ 2o ) measures the information hidden in the correlations, and sum ai1c (τ 2o ) + ai 2 c (τ 2ok ) evaluates
the information of the cutoff impulses, which carry this information to the observer. (Each impulse,
satisfying the minimax law, acquires the form of the two considered stepwise controls.)
Starting information ai (τ 2o ) + ai1c (τ 2ok ) compensates for information, needed for the conjugated
dynamics aid (τ 2ok ) , thereafter satisfying the balance Eq. of the information dynamics:
ai (τ 2o ) + ai1c (τ 2ok ) = aid (τ 2ok ) .
(2.9a)
Both of the above information quantities, the Yes-No action ai1c (τ 2ok ) and the converted
observed information ai (τ 2o ) , are spent on the conjugated information dynamics.
Under the control action, which requires information ai 2 c (τ 2o ) , the total dynamic information
aid (τ 2ok ) is entangled in the adjoined conjugated processes at the moment τ 2ok = τ 2 k + o , following
τ = τ 2 k . From (2.9, 2.9a) we get the balance Eq.:
aid (τ 2ok ) + ai 2 c (τ 2ok ) ≅ aio ,
(2.9b)
which, while preserving invariant aio , includes the entangled information aid (τ 2ok ) = aien (τ 2o ) ,
evaluated by aien (τ 2o ) ≅ 2ai1c (τ 2o ) (2.9c), and a free information ai 2 c (τ 2ok ) that enables both local and
non-local attracting cooperative actions [5]. Free information carries the equivalent control
information ai 2 c (τ 2ok ) , needed for sequentially structuring the observer's cooperative dynamics
(Sec.3) and memorizing the cooperating node's information. From (2.9b,c) it follows that the total
invariant information, delivered with the impulses, is divided in the ratio 1/3 : 2/3 between the hidden
information and the information of the impulse actions, where the free information estimates the
stepwise actions ( ~ 1 / 3aio ) that enclose the entanglement information.
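The balances (2.9)-(2.9c) and the stated 1/3 : 2/3 division are consistent arithmetically; the invariant value below is assumed:

```python
a_io = 0.75  # assumed invariant information carried by one impulse

# Per (2.9b,c): the entangled information a_id = a_ien takes ~2/3 of the
# invariant, and the free information a_i2c takes the remaining ~1/3.
a_id = (2.0 / 3.0) * a_io
a_i2c = (1.0 / 3.0) * a_io
assert abs(a_id + a_i2c - a_io) < 1e-12        # (2.9b)

# (2.9c): a_ien ~= 2*a_i1c, so the starting control's share is also ~1/3.
a_i1c = a_id / 2.0
# (2.9): the hidden information then takes a_io - a_i1c - a_i2c ~= 1/3 a_io,
# giving the 1/3 : 2/3 split between hidden and impulse-action information.
a_hidden = a_io - a_i1c - a_i2c
assert abs(a_hidden - a_io / 3.0) < 1e-12
```

The split is independent of the assumed a io ; only the relations (2.9b,c) fix the proportions.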
For an objective observer, free information generates natural cooperative actions at the end of the
entanglement. Such free information, carrying the control's cut-down action, is able to interact
with a random portion of the cut correlation and attract its information, which is collected
during the observation time δ1 .
The attracting action, carrying by the moment τ 2 the maximal information ai (τ 2 ) , selects from the
random observation a most likely sample of the random ensemble, possessing opposite-directional
states xt + (τ 2 ), xt − (τ 2 ) , which the observer gets as opposite-directional states
x+ (τ 2 ), x− (τ 2 ) of its latent inner dynamics. Using information ai (τ 2 ) , the observer is able to
renovate its inner dynamics through the inverse duplication of these states in the form of controls
(2.1’). Since the process’ states xt [ x+ (τ 2 ), x− (τ 2 )] and xt [ xt + (τ 2 ), xt − (τ 2 )] are bounded (coupled) at
the moment τ 2 , the starting-up action (2.1’) holds the control function (2.1), which by cutting-up
the correlation ends the observation. The control's step-up action, which spends its information
ai1c (τ 2o ) , intervenes interactively in the observations, generating a superimposing information
process analogous to physical superposition [44].
At the moment τ 2 , transformation of the observed speed (2.1b) occurs, and function (2.2), which
determines the starting speed of the inner dynamics, is formed. Physical possibility of existence
of two mutually exclusive quantum states in which the photons are either entangled (quantum
correlations) or separable (classical correlations) is confirmed in [45].
The free information choices ai 2 c (τ 2ok ) allow forming (at the observer's window) a non-local
information entanglement with an information speed exceeding the speed of light [5].
Such a window's information could not have physical properties.
The information-physical attractive action enables transitional bifurcations, generating a local
attractor for a selected portion of information, which is a part of neuronal coherence [46],[40].
Hence, the observer binds a portion of the observed information via the cut-up, whose step-up
action acquires, processes, and verifies the predicted information through the internal dynamics, while
the process' time interval evaluates the transformed dynamic information (over its integral
measuring of the cutoff portions), preserving and memorizing it by the interval's ends.
Since the diffusion component of random process, before and after the
cutoff actions (at the considered functional transformation), is the same, the invariant (2.6),
computed by the end of a current extremal segment, includes information, which estimates a
potential information speed (2.1a) that would be identified at a next locality of the observer’s
window. The information, being memorized by the end of interval dynamics, also remembers the
predicted piece of information, such as above free information ai 2 c (τ 2ok ) .
This leads to a prediction of each following process movement, built on the information collected at
each previous time interval, including the eigenvalues and controls (with the moments of their
application) for each next extremal segment, allowing recreation of the optimal dynamic process
based on its starting conditions.
Since the observer’s dynamic process approximates the diffusion process with a maximal
probability, these internal dynamics enable optimal prediction of the diffusion process.
Combined identification and restoration of the observer’s internal dynamics provide a prognosis
of evolution for an observed diffusion process, satisfying the minimax principle at the observer’s
interaction with such environmental process [6(3),39].
Time interval t2k is the observer’s indirect measure of information spent on its inner dynamics at
interaction with random environment. Each impulse’s cutoff information, measured by discrete
portion ai (γ io |2τ ) (2.6b) of information functional (1.2), is a code, selected from observed process
according to minimax, while the step-up control copies this information and transforms it to the
equivalent dynamic information, measured during t2 k (which depends on ai (γ io |2τ ) ).
Integration of the multiple impulses’ hidden information by the information path functional
determines the dynamic evolution of each emerging reversible information microdynamics to
irreversible information macrodynamics during the time interval of the integral measurement.
That time finally verifies the measured information of observations, providing a border parameter
between the micro- and macrodynamics and between a non-material information process and
related physical process of an observer.
3. The observer’s information regularities
Applying to an observer the minimax principle, as an information law, allows us to reveal and
summarize here the specific information regularities that the law formula embraces. The law is
implemented via the stepwise control actions, which perform extraction of the information
minimax, initiation of the conjugated information dynamics (including the quantum dynamic
information time-space distributed processes), its completion, as a result of the verification,
quantum information entanglement, and a capability of attracting cooperative actions.
In observer’s multi-dimensional information dynamics, such attractive actions intend to cooperate
the manifold of its spectrum components in collective dynamics of whole spectrum manifolds.
The law specifies the conditions for such cooperations [39] for the time-space informational
dynamics, described by a sequence of the space distributed extremal segments, which form spiral
trajectories, located on a conic surface, while each segment represents a three-dimensional
extremal (Fig.2a). (The segments' spiral rotation operators are diagonalized during the
cooperative movement.)
Completion of these conditions leads to a sequential assembling of a
manifold of the process’ extremals (Fig.2b) in elementary binary units (doublets) and then in
triplets, producing a spectrum of coherent frequencies.
The manifold of the extremal segments, cooperating in the triplet’s optimal structures, forms an
information network (IN) with a hierarchy of its nodes (Fig.3), where the IN-accumulated
information is conserved in the invariant form, satisfying relations (2.6, 2.8, 2.9, 2.9a-b).
The double cooperation enfolds 2 / 3 of each node information, while the following triple
cooperation enfolds the rest 1 / 3 of the node information and works as a non-linear attractor [39].
The conserved information is enclosed through the sequential cooperation of the IN nodes, creating
an information structure, which condenses the total minimal entropy produced at the end of each
segment and saves it; that finalizes the memorizing process during the segment's dynamics (Sec.2).
The law’s variation principle (VP) coordinates a balance between the structural information of
the IN and the amount of extracted external information, satisfying Landauer’s principle [29-31].
An influx of the VP maximal entropy coincides with the maximum of optimal acquisition of
external information (via internal cooperative dynamics), predicting a next external observation
and the phenomena of the observed process (Sec.2).
Increased cooperative structural information reduces the amount of external information needed.
New information arises at the transference from the current segment, where existing information
is accumulated, to a newly formed segment via the segments’ windows of observation.
The transference is implemented by the control actions, which, being initiated by the accumulated
information, interact with the random (microlevel) information and transfer the emerged (or
renovated) information to a new segment.
Thus, interaction between existing (accumulated) information and information currently delivered
through the microlevel’s windows produces new information.
The final IN node collects a total amount of information coming from all previous nodes.
The dynamic process within each extremal segment, where the VP applies, is reversible.
Irreversibility arises at the random windows between segments, before the segments are joined by
the controls, while the adjoined node collects and enfolds the information of the irreversible process.
Information dynamics of these hybrid processes include microlevel irreversible stochastics,
reversible micro- and irreversible macrodynamics, quantum information dynamics, and
cooperative dynamics. The information transformed from each previous to each following
triplet-node has an increasing value, because each following triplet-node encapsulates and encloses the
total information from all previous triplets-nodes. The node location within the IN hierarchy
determines the value of information encapsulated into this node.
The attracting cooperation, connecting the IN nodes along the triplets' hierarchy, is intensified,
since each sequential triplet transforms more condensed information than the previous one.
This quantity of information measures the invariant a(γ mα1 ) , which depends on the triplet's location
on its m-th hierarchical level and is defined through the ratio of that level triplet's information
speed α mo to the speed α1o of the first triplet starting this hierarchy:
γ mα1 = α mo / α1o .
(3.1)
Ratio (3.1) evaluates the density of information [6(2)] in the m-th IN node's level, relative to the first
IN node, where (3.1) is determined through parameter γ io of the observer's information spectrum
in (2.5, 2.5a,b):
γ mα1 = f m (γ io ) .
(3.1a)
The conditions of a triplet’s formation and cooperation in the IN impose the restriction on each
γ mα = α mo / α m1 : γ mα = (4.8 − 3.45) at γ io = (0.0 → 0.8) ,
(3.1b)
where information difference Δα mo1 = α mo − α m1 (3.1c) separates the nearest nodes.
The time interval t2 k between observations, when the verification proceeds (Sec.2), according to
(2.4) also depends on this parameter, t2 k = f (γ io ) , while the particular time interval, related to the
m-th level, depends directly on ratio (3.1):
t2mk = f m (γ mα1 ) ,
(3.2)
which connects the verification time with this level’s quantity of information and its density.
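As an illustrative sketch of (3.1) and (3.1c): the geometric spectrum of triplet speeds below is an assumption (chosen so that neighboring levels differ by a factor inside the range quoted in (3.1b)), not the source's identified spectrum:

```python
# Assumed, illustrative spectrum of triplet information speeds along the IN
# hierarchy: each level's speed is smaller by a factor q, with 1/q in the
# admissible range (3.1b).
alpha_1o = 4.0
q = 1.0 / 3.45
speeds = [alpha_1o * q ** m for m in range(4)]  # alpha_mo for levels 1..4

# (3.1): density ratio of the m-th node relative to the first node.
ratios = [a_mo / alpha_1o for a_mo in speeds]

# (3.1c): information differences separating the nearest nodes.
deltas = [speeds[i] - speeds[i + 1] for i in range(len(speeds) - 1)]
assert all(d > 0 for d in deltas)  # speeds decrease along the hierarchy
print(ratios)
```

Under this assumption both the ratio (3.1) and the node separations (3.1c) shrink geometrically along the hierarchy, consistent with the growing information density per node.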
A sequence of the successively enclosed triplet-nodes, represented by the control's time intervals
(Fig.2b), generates a discrete logic of the IN code through three digits from each triplet's segments
and a fourth from the control that binds the segments; the node's digits are separated by
information (3.1c). This control carries free information ai 2 c (τ 2ok ) , produced in the previous IN
node's cooperative information dynamics, which also depends on both the verified correctness of
the a priori measured information frequency and the value-quality of the node's accumulated
information. Such a control contribution enables successively increasing the quality of each
following free information, which carries an information request to attract information of
increasing quality, exceeding that already accumulated in the current IN.
Such attracting IN’s cooperative dynamics enables self-growth of both its information quality and
the number of cooperating nodes, if the free information’s requested quality satisfies the quality
of delivered information spectrum for building each next IN’s node (Sec.4). Thus, each IN node
encloses invariant information a io of the triplet code and generates a control code of free
information, possessing ability to increase the IN quality by attaching a new node.
The code, produced by the observer's internal dynamics, which accumulates the converted
discrete observations, also serves as a virtual communication language and as an algorithm of a
minimal program to design the IN. The code-communication sequentially adjoins the
communicating nodes, via their codes' information, in the IN's logical hierarchical structure.
It has been shown [48] that each neuron can communicate via its IN code on a time scale shorter than
that in which an external signal reaches this neuron; such an inner connection possesses quantum non-locality. (The distinct inner observer's time scale is analytically defined in [6(2)].)
The optimal IN code has a double spiral (helix) triplet structure (DSS) (Fig.4), which is enfolded
in the IN final node, allowing the reconstruction of both the IN dynamics and topology.
The IN information geometry holds the nodes' binding functions and an asymmetry of the triplets'
structures. In the DSS information geometry, these binding functions are encoded in addition to
the encoded nodes' dynamic information.
Specifically, building the IN triplet's logic involves rotation of each triplet's components with
an invariant space-time rotating moment toward their adjoining and binding in the triplet's code
[6]. The control, applied as a fourth code-signal to each triplet's node, retrieves this collective
triple logic and transfers it to the next IN node along the hierarchy, up to reaching the IN's final
node, thereby completing the IN logic structure. By sending information outside the IN structure,
the code allows copying, interacting with, and cooperating its structure.
The information macrodynamics (IMD) embraces both the individual and collective regularities
of the observer’s information model and its elements, forming multiple connections and a variety
of IN information cooperative networks, with the growing concentration of information and its
volume. The interactive dynamics of the IN nodes are able to produce new information.
The IN, identified for a particular process, characterizes its dynamic hierarchical information
logical structure of collective states, representing a collective dynamic motion, which is
evaluated by a hierarchical cooperative complexity [34]. Such complexity is decreasing with
growing number of the cooperating collectives. The IN's hierarchically organized nested
cooperative structure arranges each observer's selected information in its logical structure, which
is built to minimize discrepancies (deviations) and maximize consistency of the incoming information,
providing a maximum of information density concentrated in each following IN node.
Such a logic, which concentrates and accumulates all IN’s information capacity, could evaluate
the observer’s meaningful information, collected during the observations.
The information attractions enable self-cooperation of the observer's IN information geometrical
triplet structures, whose locations are restricted by the maximal available information that each
observer can obtain. The limited information space area of these locations determines the
observer’s information geometrical boundary (Fig.5), which separates the observer from the
interacting environment. The observer creates its boundary during acquisition of information via
entanglement and internal information dynamics, which could concurrently change the IN
geometrical boundary. The observer self-forms its information geometrical structure, which limits the
current boundaries; their external interacting surface enables attractions and communication.
The structure information geometry (Figs.4,5), following from the minimax, unites nested cells,
related to fractals, whose forms are changing at each cooperation. Moreover, positions of the IN
triplet’s local space coordinate systems are not repeated at any symmetrical transformations and
hence, each cell’s form is asymmetrical. The cooperative space-time movement possess such
increasing rotary moment, which enables sustaining whole structure [39].
The hierarchical model of visual statistical images [47] in a brain processing is built by ordering
the decomposed set of the experimental images according to an invariant scale; the resultant
model shows growing information intensity, concentrating in each following hierarchical layer,
which confirms our earlier analytical result [4], including the breaking of symmetry at each level.
4. The information mechanism of building the IN logical structure from current
observations
A virtual IN (Fig.3) can be predicted (Sec.2), using an initial spectrum’s parameter γ io and
starting α1o in (3.1), formed, for example, by a natural objective observer.
The actual IN, based on current observations, might confirm the predicted IN, and its building
can be continued from any existing IN’s node. Forming the IN logical structure, according to the
minimax, requires an observer to bring information of growing density, which depends on the
information necessary for both creating new IN’s triplet and attaching this triplet to existing IN.
The IN’s information density is determined through its current triplet eigenvalue, depending of
the present IN’s ratio (3.1a). Therefore, each following observed information speed (2.1a), being
necessary to create next IN’s level of triplet’s hierarchy, should deliver this density.
This leads to two information conditions: a necessary one, to create a triplet having the required
information density, and a sufficient one, to generate the cooperative force that enables the triplet
to adjoin the observer's IN logic while sequentially holding the increasing quality of information.
Both conditions should be satisfied through the selected observation of growing quality, which
enables the existence of an initial triplet that could select a next triplet and adjoin it to a new IN node.
The necessary condition for that is a minimal quantity of information, needed to form a very first
triplet, which is defined through the information dynamic invariants:
I o ≥ I o1 = 2ao (γ 1αo )² − 3 | ao (γ 1αo ) | + 4a(γ 1αo ) .
(4.1)
Here 3ao is the information spent to proceed the dynamics on the three triplet's segments, 2ao²
measures the quantity of information to join the three triplet's segments into the triplet's node, 3a
measures the quantity of information required to start the triplet's dynamics, and a is needed to
generate a potential new adjoining triplet's segment from the observation. Parameter
γ 1αo (γ io ) = α1o / α 3o is the ratio of the IN's starting information speed α1o to its third segment's
information speed α 3o ; it is equivalent to γ mα = γ mα=1 = γ 1αo in (3.1b).
It has been shown [6(2)] that the minimax principle brings a minimal invariant a(γ io → 0) spent on the
control's information, which conveys the maximal invariant information ao (γ io → 0) needed for
optimal internal dynamics. At these invariants, the quantity of information (4.1) is estimated by
I o1 ≅ 0.75 Nats ≅ 1 bit , while condition (4.1) can also be satisfied for non-optimal values of these
invariants within the limits of (3.1b): 4.8 > γ 1αo (γ io ) > 3.45 at I o ≠ 1 bit .
These limitations are the observer window's boundaries of the admissible information spectrum,
which also apply to a multi-dimensional observer's window.
The sufficient condition determines such a required cooperative force X_12^α, acting between the IN's nearest triplets' information levels, which equals or exceeds the IN level's relative information ΔI_12 needed for sequential cooperation of these triplets in the IN: X_12^α ≥ ΔI_12.
This IN's information level is evaluated through the required relative increase of the levels' information Δα_12/α_2 = ΔI_12, being necessary for the above cooperation, where Δα_12 = (α_1 − α_2) is the information increment between the triplets' levels, determined by the ratio α_1/α_2 = γ_12^α, which is evaluated through the information invariants
γ_12^α[a(γ_io), a_o(γ_io)] = γ_12^α(γ_io).    (4.2)
We get the required relative cooperative information force
X_12^α ≥ (γ_12^α − 1),    (4.2a)
which, at the limiting values γ_12^α → (4.48 − 3.45), restricts the related cooperative forces by the inequality
X_12^α ≥ (3.48 − 2.45).    (4.2b)
The absolute quantity of information, needed to provide this information force, is
I_12(X_12^α) = X_12^α a_o(γ_12^α),    (4.2c)
where a_o(γ_12^α) is the invariant evaluating the quantity of information concentrated in a selected triplet, which is estimated by a_o(γ_12^α) ≅ 1 bit at γ_12^α = γ_1^αo; from this we get I_12(X_12^α) ≥ (3.48 − 2.45) bits.
The invariant quantities a_o(γ_io → 0), a(γ_io → 0) provide the maximal cooperative force X_12^αm ≅ 3.48.
Therefore, the total information needed for a selective observer is estimated by
I_o12 = I_o1 + I_12(X_12^α) ≥ (4.48 − 3.45) bits.    (4.3)
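The arithmetic chain (4.2a)-(4.3) can be checked numerically; the sketch below assumes the text's estimates a_o ≅ 1 bit and I_o1 ≅ 1 bit and is only an illustration of the stated arithmetic, not the author's computation:

```python
# Editor's sketch of the chain (4.2a)-(4.3), under the text's estimates.
def cooperative_force(gamma12: float) -> float:
    # Relative cooperative information force, Eq. (4.2a): X12 >= gamma12 - 1
    return gamma12 - 1.0

A_O_BITS = 1.0    # invariant a_o(gamma12) ≅ 1 bit (text estimate)
I_O1_BITS = 1.0   # minimal first-triplet information I_o1 ≅ 1 bit (text estimate)

for gamma12 in (4.48, 3.45):        # boundaries of the admissible spectrum
    x12 = cooperative_force(gamma12)   # (4.2b): 3.48 or 2.45
    i12 = x12 * A_O_BITS               # (4.2c): bits needed to provide the force
    total = I_O1_BITS + i12            # (4.3): total for a selective observer
    print(round(x12, 2), round(i12, 2), round(total, 2))
```

The boundary values reproduce the ranges (3.48 − 2.45) and (4.48 − 3.45) bits quoted in (4.2b) and (4.3).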
Thus, the very existence of a minimal selective observer, satisfying both the minimal information I_o1 and the admissible I_12(X_12^α), requires delivering the total information (4.3), which includes the control's carried free information (Sec.3) for the triplet's node.
From consideration of an observer's interactive dynamics on its window [40] it follows that a potential interactive quantity of information
I_s ≅ a·aτ    (4.4)
is estimated by the product of the starting step-up control's information a and the delivered information aτ, while the impact of each stepwise control's action is evaluated by the quantity of information I_sa ≅ a_o^2 (at average, a_o^2(γ_io = 0.5) ≅ 2a(γ_io = 0.5)). (The start-up control information can be considered the observer's intentional action, interacting with incoming information.)
Attaching the selected triplet to the observer's IN requires two such actions (Fig.2b), leading to a balance of the potential information:
I_s ≅ 2I_sa, a·aτ ≅ 2a_o^2,    (4.4a)
from which the needed delivered information is estimated by
aτ ≈ 4 Nats ≅ 5.76 bits.    (4.4b)
Therefore, the observer's interactive information impact with its IN's potential new triplet exceeds the quantity of information needed for forming the minimal selective observer: aτ > I_o12.
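Combining balance (4.4a) with the average relation a_o^2 ≅ 2a quoted above gives aτ ≅ 4 Nats independently of a; a small sketch (our arithmetic only, not the author's derivation):

```python
import math

# Editor's sketch of balance (4.4a): a * a_tau ≅ 2*a_o^2, combined with the
# average relation a_o^2 ≅ 2a, gives a_tau ≅ 4 Nats for any a > 0.
def delivered_information_nats(a: float) -> float:
    a_o_squared = 2.0 * a          # a_o^2(gamma_io = 0.5) ≅ 2a(gamma_io = 0.5)
    return 2.0 * a_o_squared / a   # solve a * a_tau = 2*a_o^2 for a_tau

a_tau = delivered_information_nats(0.25)   # the choice of a cancels out
print(a_tau)                               # 4.0 Nats
print(round(a_tau / math.log(2), 2))       # 5.77 bits (text: ≅ 5.76 bits)
print(a_tau / math.log(2) > 4.48)          # exceeds maximal I_o12 of (4.3): True
```

This confirms the text's conclusion aτ > I_o12 for the whole admissible range.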
If the minimal selective observer already exists, the sequential and consecutive increase of the observer triplet's quality of information requires both forming a next triplet and adjoining it subsequently to the observer IN's node.
The quantity of information needed to produce a second triplet is less than that of (4.1): I_o2 ≅ 0.25 Nats ≅ 0.36 bits, since it requires two starting-up actions, evaluated by 2a, two a_o^2 to join the two triplet's segments with the IN's previous triplet's node, and spending −2a_o ≅ 1.4 Nats ≅ 2 bits on proceeding the two segments' dynamics. The ending triplet's node should be able to generate a quantity of information a to attract a segment of the potential next triplet.
The sufficient condition requires the relative cooperative information force (4.2a,b) to satisfy the new spectrum γ_12^α(γ_io). Then the total quantity of information, needed for the sequential and consecutive increase of quality of information for the IN's m-th triplet, is estimated by
I_m12 ≅ (4.84 − 3.81) bits.    (4.5)
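One consistent reading of estimate (4.5) is that it adds the second-triplet cost I_o2 ≅ 0.25 Nats ≅ 0.36 bits to the range (4.3); a hedged arithmetic check (our reading, not stated explicitly by the author):

```python
import math

# Editor's check of one reading of (4.5): the m-th triplet's total adds
# the second-triplet cost I_o2 ≅ 0.25 Nats to the range (4.3) in bits.
i_o2_bits = 0.25 / math.log(2)       # ≈ 0.36 bits
for i_o12_bits in (4.48, 3.45):      # boundaries from (4.3)
    print(round(i_o12_bits + i_o2_bits, 2))
# prints 4.84 then 3.81, matching I_m12 ≅ (4.84 - 3.81) bits
```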
It can be seen that this quantity can be delivered through the potential interactive information (4.4b) at adjoining each m-th IN node.
An observer, starting the sequential and consecutive increase of its quality of information (at the existence of just a single IN node), needs to adjoin sequentially two new triplets, as a minimal requirement for the successive building of its IN. This will provide sequentially two cooperative forces (4.2a,b) with the quantity of information needed for building two triplets, 2I_o2.
At such sequential adjoining, the sequence of interactions, generating the needed information, would be able to compensate for each next node's formation. Therefore, for the sequential and consecutive increase of the observer's quality of information, each selection should choose such a subsequent new triplet as is needed for the consecutive building of its IN. This action could start on each currently ending IN level and continue, catching successively the next needed triplet.
Since the growth of both the number of IN nodes and the quality of the nodes' accumulated information is associated with increasing concentration of the nodes' information (measured by invariant γ_12^α[a(γ_io), a_o(γ_io)]), the quantity of information I_m+1(a(γ_12^α), a_o(γ_12^α)), needed for creation of each new IN (m+1)-th triplet (to be concentrated in the observer's m-th IN nodes), should hold its growing density. With growing concentration of information in the invariants a(γ_12^α), a_o(γ_12^α), the IN's cooperation with each following triplet requires a stronger cooperative force, even though the relative γ_12^α stays the same. For each IN m-th triplet, its parameter of density γ_1m^α = (γ_12^α)^m is determined by the complete number m of the IN's already formed triplets: from the IN's initial triplet to m. This specifies the increase of the absolute value of each following cooperative force under the delivered information (4.5). Adjoining the (m+1)-th triplet (which would increase the quality of information for an existing observer) requires the cooperative force
X_m+1^α ≥ (γ_m+1^α − 1),    (4.6a)
which provides growing the IN parameter γ_12^α = γ_m^α up to
γ_m+1^α = (γ_12^α)^(m+1) = (γ_m^α)(γ_12^α).    (4.6b)
(4.6b)
This will be reached at completion of cooperation (under force (4.6a)), which strengthens the
IN’s inner dynamics’ cooperative information force through increase of the information density.
Information, delivered by that force, needs its verification with a predictable I_m+1(a(γ_m+1^α), a_o(γ_m+1^α)), estimated by (4.6a,b) (at the known I_12(X_12^α) and a_o(γ_m+1^α)).
The required cooperative information should be selected from the observations to bring the delivered information close to its predicted concentrated quality I_m+1(a(γ_m+1^α), a_o(γ_m+1^α)).
Even if the minimax principle could theoretically provide growing quality of the delivered and
accumulated information (at satisfaction of conditions (4.6a,b)), implementation of these
conditions (through the internal dynamics) involves selection of the rising concentrated
maximums from the related more concentrated minimums of available information.
The selective action requires checking the consistency of a current observation with that needed for building the current IN level. Such action includes a series of ongoing observations, identifications, and their sequential verifications with the needed density of information speed, until the requested information density is selected from the spectrum of the observed random process. Such selective actions are implemented via sequential application of the considered stepwise controls with feedback from the currently checked information density.
According to the IN information model, the needed selection could be predicted through the time interval (3.2) of the control action, which is required to select the information that carries the requested information density. The procedure involves identification of the spectrum information speed by (2.1a), which conveys a potential cooperation (according to balance Eq. (2.9b)) needed to verify the chosen selection. Since only a restricted γ_m^α(γ_io) allows building the observer's IN, the selections are limited, and the currently observed spectrum might not allow building the actual IN, or constrains building its current level.
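The selection-with-feedback step described above can be illustrated (this is our toy illustration, not the author's algorithm) as a filter that accepts only successive speed ratios inside the admissible window 3.45 < γ < 4.48:

```python
# Editor's toy illustration of selective observation with feedback: scan
# successive observed information speeds and keep only pairs whose ratio
# falls inside the admissible observer window 3.45 < gamma < 4.48.
GAMMA_MIN, GAMMA_MAX = 3.45, 4.48

def select_pairs(speeds):
    """Keep successive speed pairs whose ratio lies in the observer window."""
    selected = []
    for prev, cur in zip(speeds, speeds[1:]):
        if cur > 0 and GAMMA_MIN < prev / cur < GAMMA_MAX:
            selected.append((prev, cur))
    return selected

# A toy observed spectrum: only the 8.0 -> 2.0 step (ratio 4.0) passes.
print(select_pairs([8.0, 2.0, 1.5, 1.0]))
```

Most observed pairs fail the window, mirroring the text's point that the currently observed spectrum might not allow building the actual IN.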
The question is: what source will provide information needed for the selective actions?
From the above analysis, it follows that some interactions, associated with checking each current
selection via an existing IN node, could generate the needed information during the related
selective process. This means, first, that the information accumulated in the existing IN should be stored (memorized) by the observer. Second, the interactive information, evaluated (by analogy with (4.1a)) through a reciprocal impact of the currently delivered information (during the serial selections) with the observer's previously memorized information, will compensate for the selective actions. Since, during the actual building of the IN, the observer has already spent the information needed for memorizing (according to balance (2.9b), which includes the stepwise controls), both this information and that currently delivered at interaction are sufficient for self-structuring of the observer's IN with its logic. The space-time conditions (4.1, 4.2a-c), following from the IN's cooperative capability, support the self-forming of the observer's information structure (Fig.4-5), whose self-information has a distinctive quality measure. The IN's accumulated information might reflect the memorized observer's meaning, or an idea, enclosed in the existing IN's logic. Then the current IN should be built in accordance with this idea's logic, providing its completion.
The serial selections of information, generated through the information interactions, should
satisfy such idea’s information, which already complied with the observer’s minimax criteria.
Hence, there are two options for a successive IN building: (1) to continue revealing the logical meaning of an existing IN; (2) to subordinate the extraction and selection of information, which builds the current IN's logical structure, to an idea accumulated recently or earlier by the observer's logic. Both options satisfy the minimax law, as an observer's rational choice of optimal extraction, selection, and building. Since each selection, required by a next observer's IN level, uses the information accumulated on its previous level, the preceding meaning and/or idea of the accumulated information is needed for the selective building of each following IN level. That is, the selection satisfying the minimax, which requires an increase of the IN's level (with growing node information density), should be strongly coordinated with the observer's total information, concentrated in its IN's current highest level of the structural hierarchy and measured by the maximal quantity and quality of the accumulated and memorized information.
Preserving the observer invariants’ measure (4.1, 4.2, 4.2a) at building of its information
structure guarantees objectivity (identity) of each observer’s personal actions.
We associate the selective actions with the observer’s cognitive dynamic efforts, evaluated by its
current information force (4.2a,b), and the observer’s multiple personal choices, needed to
implement the minimax self-directed strategy, with its conscience.
The dynamic efforts are initiated by free information, expressing the observer’s intentional
action for attracting new high quality information, which is satisfied only if such quality could be
delivered by the frequency of the related observations through the selective mechanism.
We presume that the information mechanisms (Secs.2,4) create the specific information structures for each particular observer via energy and/or material carriers, which could have various implementations not considered here. For example, ideas, created by cognition, have no specific physical properties, which the cognitive dynamics carry.
5. The threshold between objective and subjective observers
Based on the notion of a subjective observer, we answer the questions (Sec.1), focusing on an
information threshold, separating objective and subjective observers. If an objective observer could approach such a threshold, we call it a maximal objective observer, and if an objective observer would cross such a threshold, we call it a minimal subjective observer.
Using the common information regularities and information mechanisms for both objective and subjective observers, satisfying the minimax law, we find their distinctions, starting with the ability of selective observation (Sec.3). Since an impulse action could be applied to selection and cooperation (including entanglement), these functions theoretically might belong to an objective, as well as to a subjective, information observer.
A natural example of the maximal objective observer (approaching a triplet’s level) is a mixed
three states’ entanglement, carrying three-qubit Werner states [48].
Examples of triple entanglement also include three light beams of different colors, which are
entangled in one visible portion of the spectrum, generating laser’s pulses by an “optical
parametric oscillator” [49].
Other examples involve the entanglement of ions, photons in multiple quantum qubits.
The entanglement can also be achieved in macroscopic objects at room temperature, such as two
different diamonds, entangled collectively in a vibration state of their crystal lattices, using a
laser pulse [50].
Recent results [52] have shown that, under the influence of laser beams, single atoms can arrange into ordered triangle and pentagon geometrical structures. These configurations could exist simultaneously over long distances, forming a coherent superposition “as soon as the system undergoes an observation.” However, even such an objective observer, enabling the forming of the triplet and double-triplet structure (of the five-atom pentagon), has a limited ability for multiple cooperation, which might generate the logic of a subjective observer. In addition, the initial information spectrum for an objective observer does not necessarily satisfy the conditions of cooperation of the process' multiple dimensions and/or the minimax criteria.
To find the threshold separating an objective observer from a subjective observer, we follow the concept of quality of information, whose sequential increase, at the observer's acquisition, leads to the growing logical structure of the selective observer, enabling self-selection of information of high density. The quality of information (evaluated by the maximal number of IN triplet nodes enclosed in the IN level (Sec.4), which the observer builds during formation of its cooperative structures) determines the requirements satisfying the concept of sequential and consecutive growth.
The maximal objective observer needs just a single cooperating triplet, which enfolds the minimum of the observer's quality and density of information.
Such an observer requires the implementation of necessary condition (4.1) in the form
I_o ≅ 2a_o^2(γ_io) − 3|a_o(γ_io)| + 4a(γ_io).    (5.1)
For a minimal objective observer, we assume that its impulse action selects an information observation, satisfying such a spectral parameter γ_io that determines the function (4.7a,b) of the spectrum's information speeds, which is required for the two double cooperations (but not necessarily for their cooperation in the triplets).
A minimal subjective observer we define by a minimally growing quality of information, produced by its first IN node. Forming this node includes a pair cooperation of two IN primary triplets with a requested second triplet. That requires implementation of (4.1) in the form of an inequality, at γ_1^αo(γ_io) → 4.48, and condition (4.2a) in the form
X_12^α ≅ (γ_12^α − 1),    (5.2)
at γ_12^α approaching γ_1^αo(γ_io).
Transference from the maximal objective observer to the minimal subjective observer leads to crossing a border region, bounded by conditions (5.1, 5.2), which define the threshold. These conditions establish the related IN's space-time surfaces' boundary [6(2)].
Increasing the quality of information of the minimal subjective observer corresponds to an expanding number of its IN nodes, which sequentially condenses the nodes' accumulated information, potentially needed for multiple coordinated selection. Selection and extraction of information to generate a growing cooperative force, coordinated with the quality of the accumulated information, initiates the observer's inner information force (4.2, 4.3b), which represents its intentional dynamic effort for requesting relevant observing information, including its space-time locations. Such a cooperative action (at a final moment of adjoining the conjugated information processes, Sec.2) is accompanied by producing entangled dynamic information on a quantum dynamic level. However, entanglement and information cooperation generally do not lead to creation of a subjective observer if conditions (4.1-4.2) are not satisfied.
According to [53], quantum information provides the observer's intentional action through increasing the density of the effort's attention at observation, being the essential phenomenon of the observer's will (which historically was developed by W. James [54]).
The consciousness [53] produces a mental effort, associated with the intentional information action and evaluated through the above density of the selection and extraction of the needed observing information. In the formal information mechanism (Sec.4), this information, as a non-material (non-physical) self-intention, is measured by the information speed of accepted information (2.1a), whose ratio to a previously accepted speed in (4.1) evaluates the information density (4.3a) of each cooperating process, which determines the strength of the effort's information force (5.2).
The observer's intentional information actions relate to cognitive attention efforts in neuronal information cognitive dynamics [40] at perception of selective information, which should deliver the above information force via the entangled “free” information a_i2c(τ_2ok) of the observer's inner conjugated dynamics (Sec.2). This requires two impulse controls with information of growing density, satisfying the sequential growth of the observer's logic. Each of them, generating free information a_i2c(τ_2ok), has to entangle another sought a_i+1(τ_2i+1) with the needed density at the selective measurement, and both such measurements should deliver information 2[a_i2c(τ_2ok) + a_i+1(τ_2i+1)] for the two impulses' step-down and step-up controls. Since each free information a_i2c(τ_2ok) carries the density of the observer's existing IN node, forming a next IN node requires selecting a_i+1(τ_2i+1) with a higher information density. Thus, the observer's inner dynamics, first, should generate attractive free information 2a_i2c(τ_2ok) with density related to its existing IN's upper level. Second, each such a_i2c(τ_2ok), providing the information intention, should select an observed a_i+1(τ_2i+1) with increased information density, which would be delivered through the step-up control, generating information dynamics that enhance the rising density. It is assumed that the duplicative Yes-No interactive impact (which verifies the selection) produces the information needed for this intentional selection.
Only if the observer's will intends to implement the minimax law that holds these conditions could their satisfaction generate a subjective observer with growing logic, while the selective information provides the required intentional efforts (as mental actions). An information subjective observer, which is able to reach the highest IN cooperative level through growing the strength of its selective information forces, we call a maximal subjective observer.
6. Discussion
Suppose, for example, there is a random multiple sequence of unspecified symbols (actual: written, visual, or virtual), presented through a random process which could be observed.
What is the information of the sequence of symbols? To get information, this process must be observed through an interactive action in the form of an information impulse, which provides a transformation between the observation prior to the action and after it. This transformation allocates the extracted information, including that hidden by the symbols' connections.
However, to hold this information, the equalization of both the EF and IPF integral measures has to proceed (Sec.1) through the impulse series, initiating the minimax.
The minimax implies a dual complementary relation between entropy-information and its observer: the observed information creates the observer, and the observer builds and holds the observed information. To keep holding the information, the observer accumulates it by the step-up part of each impulse, which binds a portion of the observed information and verifies it through the information dynamics (an internal process) over the integral's measurement. Such verification tests the equivalence of the measured minimax information with that acquired by the information macrodynamics. The process' time interval finalizes the dynamic macro-measuring of the initial information, preserving and memorizing it at the interval's ends. Until then, the initial impulse's information-measured eigenfunctions (2.1a, b) do not have physical properties.
The impulse cuts off a maximal minimum of information (during the stepwise transformation), affecting both random and dynamic processes. The information dynamics, following from the minimax principle, build the observer's IN structure by concurrently checking the consistency of the observer's selected minimax information with the previously built and memorized structural information, satisfying the observer's criteria.
For a real Markovian diffusion, entropy production is decreased by imposing a dynamic constraint during the time interval of applying the feedback controls [6.(1)], which is spent on dynamic measuring of the initial hidden information, with its memorizing at the control's ending moment. This entropy reduction evaluates the energetic contribution required for control-measurement of the equivalent information, associated with the work of Maxwell's Demon [30, 31].
The IN's ordered structure of interconnected and verified code-symbols carries the IN's logic, which formalizes the meaning of these symbols. The observer's quality of information, enfolded in a specific IN level, builds up the meaning of this IN logic on each level, and the selected information, which matches this quality, verifies the meaning of the logic (or an observer's idea), as a source assigned to the meaning. Such a verification action, requested by the IN code's logic, generates a new IN node for a minimal subjective observer, or adds a new hierarchical level to a subjective observer's existing IN, increasing its quality of information. During the interactive test, the observer automatically checks whether the time interval of its current internal dynamics satisfies conditions (3.2, 5.2). If these conditions are not met, the cooperation will not hold, and the related subjective observer will not be formed. The coordinated selection, involving both organization and concentration of the observed information to build the logical structure with intensively growing information quality and density, satisfies the maximum of accumulated meaningful information. We associate this with the observer's intelligent action, which is evaluated through the amount of quality of information spent on this action.
A notion of intelligence belongs to a subjective observer, as its ability to create information of
increasing quality through controllable self-building of a particular IN logic structure.
Even though such a structure does not carry directly a meaning and/or semantics of the
observations, building the structure requires spending each observer’s meaningful information,
which evaluates its intelligence, needed for the coordinated selection and verification.
A set of INs’ logics of various meanings, or ideas might be arranged orderly by the quality of
enfolded information and cooperated into a new collective IN, whose logic integrates multiple
logics of the initial INs, whereas a final node of that IN encloses a common meaning of concepts
or a complete idea. The IN maximums, which measure total structural information, creating
observer’s logic, its meaning and/or idea, could become a scale for the comparison of observer’s
levels of intelligence. A subjective observer, reaching the highest IN quality-information level, possesses a maximal intelligence through maximization of its cognitive actions and conscience. Such a highest IN level is limited by a maximal acceptable time interval of internal dynamics on this level, t_n, related to the minimal time interval t_1 on the IN's first level, according to the formula
t_1 γ_n^α = t_n, γ_n^α = (γ_m1^α)^(n/2),    (6.1)
following from preservation of the invariants a_io1 = α_io1 t_1 = a_ion = α_ino t_n = inv at growing density of their accumulated information. For example, at n = 64 and the minimal time interval t_1 defined by the light wave interval t_1min = 1.33 × 10^−15 sec, the time interval of the observer's inner dynamics grows up to t_nmax ≅ 8.2 × 10^3 sec ≅ 3 hours.
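Formula (6.1) with the quoted example values can be inverted to recover the implied base parameter; the sketch below is our arithmetic check under that reading (the back-computed γ_m1^α is ours, not a value stated in the text):

```python
# Editor's check of formula (6.1): t_n = t_1 * gamma_n, gamma_n = gamma_m1^(n/2).
# Back out the implied base parameter from the text's example values.
t1 = 1.33e-15   # minimal time interval, sec (light-wave interval, text)
tn = 8.2e3      # highest-level time interval, sec (text example)
n = 64

gamma_n = tn / t1                 # total ratio t_n / t_1
gamma_m1 = gamma_n ** (2.0 / n)   # invert gamma_n = gamma_m1^(n/2)
print(round(gamma_m1, 2))         # ≈ 3.87, inside the admissible 3.45-4.48 window
```

That the implied value lands inside the admissible spectrum is consistent with the paper's limits, which lends the example internal coherence.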
We assume that an intelligent observer can process in parallel (or in a sequence) several of its IN groups (with a minimal time interval on each first level), thereafter minimizing the time intervals of their highest levels. According to a recent study of neuron networks, created through the neuron correlations [55], the average number of the network's nodes approaches the number of the node's links, with the total network's nodes restricted to around 50 (during the limited time of recording). Numerous neurons participate in each selection simultaneously, even if not all of them hold the information speed needed to build the observer's logic.
The set of INs, ended with memorized multiple information concepts, can embrace up to 10^4 concept neurons [56]. Such groups of the IN information units, being encoded in the observer's genetic logic [6(3)], remember their memorized concepts, which can be used to recognize and encode the current multiple concepts using a minimal number of new neuronal links. The memory blocks of the created concepts allow the encoding of all meaningful personal new concepts with a "sparse representation" of personal logic [56].
Information, sequentially enfolded in the IN's highest node, also evaluates the hierarchical cooperative complexity of the total IN, which enhances the logic's meaning and the complexity of each IN node. Since each observer identifies the selected information through verification with its particular IN, each observer's IN could accumulate the same incoming information differently.
The message, as a current merge of the observer windows’ information, is understood if the
observer can build and reproduce its logical structure (using a personally memorized logic).
The observers, which enable reproducing the logical structure of their shared message, mutually
understand each other. Such observers could find a mutual trust through this understanding, and
eventually might coordinate and match their specific logic’s criteria, satisfying the minimax.
Communicating observers can use information accumulated by others' INs to expand the individual's IN logical structure, specifically when an observer transmits information needed for growing the quality of its IN, according to relations (5.1-5.2). By encoding this information and speed (2.1a) in γ_io (2.5), the observer gets access to the communicating windows of other observers that have been connected, for example, via the Internet. Other observers, by accepting, decoding, and proceeding the received information through its verification and internal dynamics, generate virtual information forces, related to those of (5.1-5.2), which the initial observer had sent out. These virtual information forces can request an information copy from such IN levels of the receiving observer that the sender needs. This information, being encoded, for example, in the related information spectrum (with the receiving observer's parameter γ_io*), will be automatically sent through the communication channel to the initial observer. The primary observer, receiving the requested information back (by accepting, decoding, and verifying it through information dynamics), regenerates the virtual information above and then attaches it to the personal IN if it matches the observer logic's criteria. The returned section of the copied IN could have a parameter γ_io* not equal to that of the initial observer's γ_io, but within a difference γ_io* − γ_io = Δγ_io admissible for building this observer's growing IN. Acceptable differences may exceed the requested quality of information, leading to an increase in the quality of the IN above the expected value.
This contributes to renovation of the observer IN's logical structure, its meaning, idea, and
intelligent ability. As a result, the initial observer collects and accumulates information needed to
increase its intelligence by getting information from communicating observers.
During such communications, each interacting observer can renovate meaning and/or ideas of its
previous IN’s logics and /or obtain new meaning and ideas from other communicating observers.
Communications of several different observers, which could bring information of growing quality
(at γ io → 0 ) to the observer IN logic, expand the growth of intelligence.
Multiple communications of many such observers enable them to increase their individual
intelligence level and generate a collective IN’s logic, which not only enhances the collective’s
ideas but also extends and develops them. Unpredictable multiplication (and copying) of the
circulating flows of encoded information forces (5.2), whose intensity grows as the quality of
information, provides the ability of the growing intelligence to self-generate.
Some ideas, available through social networks, could start multiple communications, creating a
chain of increasing information qualities and new collective IN's logic with its meaning and
ideas, thus enhancing both individual and collective intelligence.
By imposing selective attention through its internal dynamics, the observer entangles these
dynamics with information, satisfying its criteria with a preferred idea. Since information
entanglement, as adjoint dynamic cooperation, is generally non-local, the distance of the
observer's communication is unlimited.
In multiple communications, each observer may find near its window a message-demand, as a quality messenger (qmess), enfolding the cooperative force (5.2), which requests other observers to access its individual IN. It can send out its own qmess for cooperation with others.
This "qmess" differs from Darwkin's "meme" [57] by its specific algorithmic information
structure, which, being sent everywhere through its code-copies, enables the selected observer to
attach its IN, retransmit the requested valuable information, and attached it to the sender's inner
information IN's tree in a particular locality. Moreover, since the messaging force (5.2) contains
a minimum of five cooperating sections of the two triplet’s nodes, the cooperation with a nearest
observer’s IN requires to add the related five sections to its IN. Not all communicating observers
33
would allow that, or a particular observer may not need such information from others near its
window. The question is: what is the number of nearest observers with whom mutual
communication is possible? It depends on the quality of information wanted for their mutual
needs. If a current observer ranks the observers nearest to it by the growing quality of
information enclosed in its five IN sections, then the nearest observer available to accept its
message-demand would be the fifth. The same number of nearest observers would be found by
any other observer near each of them, indicating that every six observers are potentially
mutually connected in collective communication. This result, following from the sufficient
conditions (5.2) for a communicating subjective observer, links to the functional connectivity of
the "small-world" networks [58]. Since these nearest observers' nodes could have diverse
information qualities at the related IN levels, their communication might not be mutually
valuable. The nearest observers, communicating via their upper IN levels, increase the
distribution of high-quality and high-density information, enhancing the observers' growing
intelligence. Such cooperation transmits a high quality of condensed knowledge and cognitive
ability among the observers, whose growing density increases their intelligence.
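The counting argument above can be sketched numerically. The quality values, the ranking metric (absolute quality gap), and the function names below are illustrative assumptions of this sketch, not the paper's formalism:

```python
import random

def rank_neighbors(qualities, i):
    # Rank all other observers by growing quality gap relative to
    # observer i (an assumed stand-in for "growing quality of information").
    others = [j for j in range(len(qualities)) if j != i]
    return sorted(others, key=lambda j: abs(qualities[j] - qualities[i]))

def communication_group(qualities, i, sections=5):
    # A message-demand enfolds five cooperating IN sections (5.2), so
    # observer i can potentially cooperate with its five nearest-ranked
    # observers, forming a group of six.
    return [i] + rank_neighbors(qualities, i)[:sections]

random.seed(1)
qualities = [random.random() for _ in range(30)]
group = communication_group(qualities, 0)
print(len(group))  # 6: the observer plus five nearest-ranked partners
```

The fixed group size of six, independent of the population size, is what the text links to the "small-world" connectivity of [58].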
Summary
The key observer's information processes are shown in Fig.6:
1. An observed (external) random process;
2. An impulse, as two stepwise controls (step-up, step-down), applied to the process, which provide
3. Transformation of an a priori observation to an a posteriori observation and
4. Measurement of information under the transformation; while
5. The step-up action (as the impulse's right side) initiates
6. The observer's inner conjugated dynamic processes, which are
7. Adjoining (entangling) by the end of the step-up action; while
8. The step-down action breaks the entangled information process,
9. Opening the next observation under each impulse's left-side action for the multi-dimensional observations;
10. Building the Information Network (IN) through ordering and arrangement of the multiple cooperative dynamics of the adjoint processes;
11. Generating the IN code logic through the triplet's sequential cooperation, emerging from the nodes in the time-space triple information structures, whose double spiral information geometry follows from the conjugated cooperative dynamics;
12. Self-forming the observer's inner geometrical structure with a limited boundary, shaped by the IN information geometry during the time-space cooperative processes.
The observer's self-selective process proceeds through the oppositely directed conjugated
cooperative dynamics under multiple parallel Yes-No control actions, which allow cooperating
only the measured information that matches the observer's quality of information, enfolded in its
IN according to the observer's criteria. The minimax regularities of the observer's rational
operations summarize the information mechanisms' operating system (Fig.6), which creates the observer.
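The IN assembly of steps 10-11 can be illustrated with a toy accumulation, assuming (only for this sketch) that each triplet's node simply adds the information of its three inputs to everything enclosed before it:

```python
def assemble_in(information_pieces):
    # Toy IN: pieces cooperate in triplets; each triplet's node encloses
    # the information of all previous nodes plus its own three pieces,
    # so the final node contains all retrieved information (additivity
    # is assumed purely for illustration).
    nodes = []
    enclosed = 0.0
    for k in range(0, len(information_pieces) - 2, 3):
        enclosed += sum(information_pieces[k:k + 3])
        nodes.append(enclosed)
    return nodes

pieces = [1.0, 0.5, 0.25, 2.0, 1.0, 0.5]
nodes = assemble_in(pieces)
print(nodes)  # [1.75, 5.25]: the last node encloses all pieces
```

The nesting, where each subsequent node dominates the previous ones, is what lets the final node carry the whole retrieved information of the network.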
Concluding remarks
1. An unobserved process carries hidden information, which an observer extracts through jump-wise interactions that impact the observed process. The observer measures a
maximum of the minimal accessible information, retained by its internal information dynamics.
2. The observer's internal dynamics generate free information, which attracts new pieces of the
retrieved hidden information and sequentially adjoins them in the hierarchical information network (IN).
3. In the IN's ordered cooperative triple structure, each subsequent triplet's node encloses
information of its previous nodes, while the final node contains all the retrieved information, and
as a result, generates free information. Such an IN not only carries a code logic, collected from
the already retrieved information, but also its growing free information that persistently enhances
and extends this logic through the available observations and communications.
4. The cooperative dynamics that produce the growing free information drive the expansion of
the code logic that builds the observer's intelligence. However, this requires generating an
intentional drive sufficient to match the informational spectrum of the current observing process,
whose information value would be added to the observer's intelligence.
5. Consequently, even though the observer's free information is able to initiate a request for an
extension of the observer's intelligence, only the selected observing process can be the source of
such a request. Intelligence is conveyed by the request of an equivalent (comparable) intelligence
(which could enhance a collective intelligence of observers). This is why, if the hidden information
of a natural unobserved process encloses more valuable intelligence than the observers' collective
intelligence, that collective could not identify this hidden intelligence. Such information, being
undetectable for others, is information only for the observer that holds it.
6. An observer measures information through active probing actions (such as questions), initiated
by the internally generated free information in accordance with the minimax law, while the
answers are verified by the particular observer's optimal criterion. The free information results
from the observer's interaction with its environment through the objective minimax information
law, but the content of the free information depends on the individual observer, which imposes its
subjectivity on the environment.
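Remark 1, and the minimax law generally, can be given a numerical illustration: tabulate the information a trial action would deliver against the possible hidden states and take the maximum over actions of the minimal accessible information. The table values and names below are assumptions of this sketch:

```python
def minimax_extraction(info_table):
    # info_table[action][state]: information (bits) a trial action would
    # extract for each possible hidden state of the observed process.
    # The observer takes the maximum of the guaranteed (minimal)
    # accessible information over the candidate actions.
    best_action, best_value = None, float("-inf")
    for action, row in info_table.items():
        guaranteed = min(row.values())  # minimal accessible information
        if guaranteed > best_value:
            best_action, best_value = action, guaranteed
    return best_action, best_value

table = {
    "impulse_a": {"s1": 0.2, "s2": 0.9},
    "impulse_b": {"s1": 0.5, "s2": 0.6},
}
print(minimax_extraction(table))  # ('impulse_b', 0.5)
```

The chosen action maximizes the information obtainable from each observed minimum, in the spirit of the dual minimax principle stated in the abstract.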
The paper's potential contributions to the Brain Project
These results enable specific predictions of individual and collective functional neuron
information activities in time and space. Based on the information regularities we have
identified, we can model cooperative communications among neurons, based on assembling their
logical information structures, which generate an expanding intelligence.
Specifically, the information mechanism model tells us how:
1. Each neuron retrieves and processes external information, including spike actions (related to
the impulse control), Secs. 1-2;
2. Random external uncertain processes are identified during the verification of accepted
information and its conversion to the internal certain information dynamic process, and then to
physical macrodynamics, Secs. 3-4.
3. A current neuron's ensemble sequentially cooperates and integrates in the time-space
information network (IN), revealing the dynamics of IN creation, its geometrical information
structure, and its limitations, Secs. 3-4.
4. These cooperative networks are assembled and integrated in a logical hierarchical structure,
which models the function of consciousness, up to intelligence, Secs. 5-6.
5. Multiple intelligences generate a collective IN logic code which expands the intelligence, Sec. 6.
Evaluating the integral logic's self-operating mechanism reveals:
1. The information quantities required for attention, portioned extraction and its speed, including
the necessary internal information contributions, as well as the process's time intervals (Sec. 2);
2. The information quality (value) for each observer's accumulated information (via its specific
location within the IN hierarchical logic) along with that needed for the verification (Secs. 3-4);
3. The digital code, generated by the observer's neurons and their cooperative space logic (Sec.
3);
4. The information forces needed for holding a neuron's communication, whose information is
generated automatically in the neuronal interactions (Sec. 4);
5. The information quantity and quality required for sequential creation of their high values,
leading to the generation of intelligence as threshold information (Sec. 5) between multiple
objective interactions and their accumulative integration in a subjective observer (with
intentional cognitive self-directed strategy), which finally leads to multi-cooperative brain
processing (Sec. 6).
36
References
1. Bohr N. Atomic physics and human knowledge, Wiley, New York, 1958.
2. Dirac P. A. M. The Principles of Quantum Mechanics, Oxford University Press (Clarendon),
London/New York,1947.
3. Von Neumann J. Mathematical foundations of quantum mechanics, Princeton University Press,
Princeton, 1955.
4. Lerner V.S. Information Systems Analysis and Modeling: An Informational Macrodynamics
Approach, Kluwer Publ., Boston, 1999.
5. Lerner V.S. Hidden stochastic, quantum and dynamic information of Markov diffusion process
and its evaluation by an entropy integral measure under the impulse control's actions, arXiv,
1207.3091, 2012.
6. Lerner V.S. Hidden information and regularities of information dynamics 1, 2, 3,
arXiv, 1207.5265, 1207.6563, 1208.3241, 2012-2013.
7. Wigner E. Review of the quantum mechanical measurement problem. In Quantum Optics,
Experimental Gravity, and Measurement Theory, NATO ASI Series: Physics, Series B, 94, 58,
Eds. P. Meystre and M. O. Scully, 1983.
8. Wigner E. The unreasonable effectiveness of mathematics in the natural sciences,
Communications in Pure and Applied Mathematics, 13(1), 1960.
9. Wheeler J. A. On recognizing "law without law." Am. J. Phys., 51(5): 398–404, 1983.
10. Wheeler J. A. Information, physics, quantum: The search for links. In: Zurek W. (Ed.),
Complexity, Entropy, and the Physics of Information, Addison-Wesley, Redwood City, California, 1990.
11. Shannon C.E. , Weaver W. The Mathematical theory of communication, Univ. of Illinois Press,
1949.
12. Stratonovich R. L. Theory of Information, Soviet Radio, Moscow, 1975.
13. Kullback S. Information theory and statistics, Wiley, New York, 1959.
14. Jaynes E.T. Information Theory and Statistical Mechanics in Statistical Physics, Benjamin,
New York, 1963.
15. Jaynes E.T. How Does the Brain do Plausible Reasoning?, Stanford University, 1998.
16. Bohm D.J. A suggested interpretation of quantum theory in terms of hidden variables. Phys.
Rev. 85, 166–179, 1952.
17. Bohm D. J. A new theory of the relationship of mind to matter. Journal Am. Soc. Psychic.
Res. 80, 113–135, 1986.
18. Bohm D.J. and Hiley B.J. The Undivided Universe: An Ontological Interpretation of Quantum
Theory, Routledge, London, 1993.
19. Eccles J.C. Do mental events cause neural events analogously to the probability fields of
quantum mechanics? Proceedings of the Royal Society, B277: 411–428, 1986.
20. Marchal B. Computation, Consciousness and the Quantum, IRIDIA, Brussels University, 2000.
21. Stapp H. P. Mind, Matter, and Quantum Mechanics, Springer-Verlag, Berlin/New York, 1993.
22. Tononi G., Sporns O., and Edelman G.M. A measure for brain complexity: relating functional
segregation and integration in the nervous system, Proceedings Natl. Acad. Sci. USA,
Neurobiology, 91(11) : 5033-5037, 1994.
23. Koch C. The quest for consciousness: A neurobiological approach, New York, 2004.
24.Clarke C.J.S. The Role of Quantum Physics in the Theory of Subjective Consciousness, Mind
and Matter, 5(1): 45–81,2007.
25. Bialek W. Thinking about the brain, arXiv.org, 02105030v2, 2002.
26. Kolmogorov A.N. Logical basis for information theory and probability theory, IEEE Trans.
Inform. Theory, 14 (5): 662–664, 1968.
27. Lerner V.S. The entropy functional measure on trajectories of a Markov diffusion process and
its estimation by the process' cut off applying the impulse control, arXiv.org, 1204.5513, 2012.
28. Lind D. A., Marcus B. An introduction to symbolic dynamics and coding, Cambridge
University Press, 1995.
29. Landauer R. Irreversibility and heat generation in the computing process, IBM Journal
Research and Development, 5(3):183–191, 1961.
30. Berut A., Arakelyan A., Petrosyan, A., Ciliberto, S., Dillenschneider R., and Lutz E.
Experimental verification of Landauer’s principle linking information and thermodynamics,
Nature 483:187–189, 2012.
31.Jacobs K. Quantum measurement and the first law of thermodynamics: The energy cost of
measurement is the work value of the acquired information, Phys. Review E 86,040106 (R), 2012.
32. Chaitin G.J. Algorithmic Information Theory, Cambridge University Press, Cambridge, 1987.
33.Kolmogorov A. N. Three approaches to the quantitative definition of information. Problems
Information. Transmission, 1(1): 1-7, 1965.
34. Lerner V. S. Macrodynamic cooperative complexity in Information Dynamics, Journal Open
Systems and Information Dynamics, 15(3):231-279, 2008.
35. Horodecki M., Shor P.W., and Ruskai B. Entanglement Breaking Channels, arXiv, quant-ph/030203, 2003.
36. Hiley B.J.and Pylkkanen P. Can Mind affect Matter Via Active Information? Mind and Matter,
32(2): 7-27,2005.
37. Bennett C.H., Shor P. W., Smolin J. A., and Thapliyal A. V. Entanglement assisted capacity of
noisy quantum channels, Phys. Rev. Lett. 83: 3081–3084, 1999.
38. Frieden B.R. Restoring with Maximum Likelihood and Maximum Entropy, Journal Optical
Society of America, 62(4):511-518, 1972.
39. Lerner V.S. Information Path Functional and Informational Macrodynamics, Nova Science,
New York, 2010.
40. Lerner V.S. An observer’s information dynamics: Acquisition of information and the origin of
the cognitive dynamics, Journal Information Sciences, 184: 111-139, 2012.
41. Zurek W. H. Decoherence, einselection, and the quantum origins of the classical, Reviews of
Modern Physics, 75(3): 715-775, 2003.
42. Schlosshauer M. Decoherence, the measurement problem, and interpretations of quantum
mechanics, Reviews of Modern Physics, 76(4): 1267–1305, 2005.
43. Englert B.-G., Scully M.O., and Walther H. Quantum Erasure in Double-Slit
Interferometers with Which-Way Detectors, American Journal of Physics, 67(4): 325–329, 1999.
44. Lerner V.S. Superimposing processes in control problems, Stiinza, Moldova Acad., 1973.
45. Ma X.-S., Zotter St., Kofler J., Ursin R., Jennewein T., Brukner Č., and Zeilinger A.
Experimental delayed-choice entanglement swapping, Nature Physics, 8: 479–484, 2012.
46. Fries P. A mechanism for cognitive dynamics: neuronal communication through neuronal
coherence, Trends Cogn. Sci., 9: 474–480, 2005.
47. Saremi S. and Sejnowski T.J. Hierarchical model of natural images and the origin of scale
invariance, PNAS, 110(8), 2013.
48. Eltschka C., Siewert J. Entanglement of three-qubit Greenberger-Horne-Zeilinger-symmetric
states, Phys. Rev. Lett., 108(2): 020502, 2012.
49. Coelho A.S., Barbosa F.A.S., Cassemiro K.N., Villar A.S. Three-Color Entanglement,
Science, 326: 823-826, 2009.
50. Lee K.C., Sprague M.R., Sussman B.J., Nunn J., Langford N.K., Jin X.-M., Champion T.,
Michelberger P., Reim K.F., England D., Jaksch D., and Walmsley I.A. Entangling
Macroscopic Diamonds at Room Temperature, Science, 2: 1253-1256, 2011.
52. Schaub P., Cheneau M., Endres M., Fukuhara T., Hild S., Omran A., Pohl T., Grob C., Kuhr
S., and Bloch I. Observation of spatially ordered structures in a two-dimensional Rydberg gas,
Nature, 1, 2012.
53. Schwartz J.M., Stapp H.P. and Beauregard M. Quantum physics in neuroscience and
psychology: a neurophysical model of mind–brain interaction, Philosophical Trans. R. Soc. B, 119, 1598, 2004.
54. James W. In William James-Writings, The Library of America, New York, 1902-1910.
55. Silva B.B.M., Miranda J.G.V., Corso G., Copelli N., Vasconcelos N., Ribeiro S., and
Andrade R.F.S. Statistical characterization of an ensemble of functional neural networks, Eur.
Phys. J. B, 85: 358, 2012.
56. Quian Q.R., Kreiman G., Koch C., Fried I. Sparse but not 'grandmother-cell' coding in the
medial temporal lobe, Trends in Cognitive Science, 12: 87-91, 2008.
57. Dawkins R. The Selfish Gene, Oxford University Press, New York, 1976.
58. Watts D.J., Strogatz S.H. Collective dynamics of 'small-world' networks, Nature, 393(6684): 440–442, 1998.
59. Di Maio V. Regulation of information passing by synaptic transmission: A short review,
Brain Research, 1225: 26-38, 2008.
60. Schyns P.G., Thut G., Gross J. Cracking the code of oscillatory activity, PLoS Biol., 9(5), 2011.
61. London M., Hausser M. Dendritic computation, Annual Review Neuroscience, 28: 503–532, 2005.
62. Sidiropoulou K., Pissadaki K.E., Poirazi P. Inside the brain of a neuron, Review, European
Molecular Biology Organization Reports, 7(9): 886-892, 2006.
63. Holthoff K., Kovalchuk Y. and Konnerth A. Review: Dendritic spikes and activity-dependent
synaptic plasticity, Cell Tissue Res., 326: 369–377, 2006.
64. Gasparini S. and Magee J.C. State-dependent dendritic computation in hippocampal CA1
pyramidal neurons, The Journal of Neuroscience, 26(7): 2088–2100, 2006.
65. Lim W.A., Lee C.M. and Tang C. Design Principles of Regulatory Networks: Searching for
the Molecular Algorithms of the Cell, Review, Molecular Cell, 49(15): 202-2012, 2013.
66. Nessler B., Pfeiffer M., Buesing L., Maass W. Bayesian Computation Emerges in Generic
Cortical Microcircuits through Spike-Timing-Dependent Plasticity, PLoS Comput. Biol.,
9(4): e1003037, 2013.
67. Heidelberger M. Nature from within: Gustav Theodor Fechner and his psychophysical
worldview, Transl. C. Klohr, University of Pittsburgh Press, USA, 2004.
68. Masin S.C., Zudini V., Antonelli M. Early alternative derivations of Fechner's law, J.
History of the Behavioral Sciences, 45: 56–65, 2009.
69. Pessoa L. and Engelmann J.B. Embedding reward signals into perception and cognition,
Frontiers in Neuroscience, 4: 1-8, 2010.
70. Papageorgiou T.D., Lisinski J.M., McHenry M.A., White J.P. and LaConte S.M.
Brain–computer interfaces increase whole-brain signal to noise, PNAS, Neuroscience,
1210738110, 2013.
71. Harris D. Simulating 1 Second of Real Brain Activity Takes 40 Minutes and 83K
Processors, GigaOm.com, 08-02-13.
The author thanks Malcolm Dean for helping edit the paper.
Appendix.
Illustration of the information regularities on examples from neuronal dynamics. Discussion
Analyzing these examples, presented in reviews and studies [59-69] (which include extensive references),
we explain and comment on some of their information functions, using the formal results of [5-6,27,39,40] and this
paper. Here, we relate the observer's external processes to sensory information, while its internal processes
uphold the reflected information through the synchronized proceedings of the internal dynamics (IMD).
According to review [59], a single spike represents an elementary quantum of information, composed from
the interfering wave neuronal membrane potentials, which is transferred to the post-synaptic potential, while
the synaptic transmission is a mechanism of conversion of the spike, or a sequence of spikes, into 'a
graded variation of the postsynaptic membrane potential' that encodes this information. The pre-synaptic
impulse triggers a connection with the synaptic space, which starts the Brownian diffusion of the
neuro-transmitter, opening the passage of information between the pre- and the postsynaptic neuron. The
transferring 'quantum' packet determines the amplitude and rate of the post-synaptic response and the
type and quality of the conveyed information.
Instead of the authors' term 'quantum', we use the information-conjugated dynamics that convey the same information.
‘Although there are several possibilities to start releasing the states on the border the post synaptic
transmission, usually only a single one close to the centre of the Post Synaptic Density have a higher
probability to starts the opening formation of binding the receptors’. This ‘eccentricity’ influences the
rate of neuro-transmitters that bind the postsynaptic receptors [59].
It illustrates the selectiveness of the minimax Brownian diffusion (Sec.5), located on a border of
the specific ensemble of the Brownian states, allowing choosing the maximum probable states of the
ensemble, having the rate, amplitude, and the time course of the post-synaptic potential, which
is triggered by the impulse (Secs. 4, 6, Fig.1), related to the spike.
Each Post Synaptic Density is composed of four chains arranged by any combination of four subunits,
which have two types of binding receptors [59]. It is a straight analogue of our four elementary
information units, each with two binding actions of the controls (Fig.3), composing the triplet's
node in the level of the information network (IN), which holds the IN memory.
The starting composition of the receptors subunits depends on the level of ’maturation’ of the synapse,
which is changed during the development and as a function of the synaptic activity.
‘The real interaction between excitatory and inhibitory synapses is based on delicate equilibriums and
precise mechanisms that regulate the information flow in the dendrite tree. There is a sort of equilibrium
between the cooperative effect due to the summation of the synaptic activities of excitatory neurons and
the inhibitory effect due to the reduction of the driving force that produces the information
transmission’.
The minimax mechanism includes cooperation, regulation, and equilibrium, whose joint proceeding
limits both internal and external information processing, Sec.6.
‘The post-synaptic regulation of the information, passing by synaptic transmission, is governed by
several different mechanisms involving: the diffusion of the neuro-transmitter in the synaptic space, the
absolute and relative number of receptors’ Post Synaptic Density, their dynamics and/or composition.
These functions depend on the geometry of the dendrite tree, which has a hierarchical structure analogous to
the IN (Fig.3). 'Distinctive levels of potentials between different dendrite tree branches produce
information flows between the grafting (joint) points of the dendrite tree's branches. Dendrites of the
level A_n are embedded onto "mother branches" that are at the level A_{n-1}, which, in turn, are grafted to their
mother branches at the level A_{n-2}, and so on up to the main dendrite (A_0), emerging directly from the
starting information. As the synapses are the points of passage of the information among the neurons, the
grafting points can be considered the points of passage of information among the dendrite branches. A
good approximation to this situation could be a model where each daughter branch is considered a sort of
a bidirectional "electrical synapse" of the mother branch: the activity of each branch of the level A_n can
influence the activity of the branch of the level A_{n-1}, and vice versa... The direction of the current
depends on the values of the potential levels on the two sides of the grafting point, V_{A_n} and V_{A_{n-1}},
being in the direction of the mother for V_{A_n} > V_{A_{n-1}}, and in the direction of the daughter in the
opposite case. The grafting points represent "nodes" for information flowing in the dendrite tree' [59]. This
network holds the structural functions of the dendrite tree: its hierarchically dependent nodes, bidirectional
flows of information, concurrent formation of the whole structure with the starting flow of information, and the
selective maxmin influence of the nearest nodes, decreasing with the distance from the node.
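The quoted flow rule at a grafting point can be written directly in code. The potential values are arbitrary, and the sign convention and names are assumptions of this sketch:

```python
def grafting_flow(v_daughter, v_mother):
    # Direction of the information current at a grafting point [59]:
    # toward the mother branch when V_An > V_An-1, toward the daughter
    # branch in the opposite case.
    return "to_mother" if v_daughter > v_mother else "to_daughter"

# A daughter branch more depolarized than its mother drives current upward:
print(grafting_flow(v_daughter=-60.0, v_mother=-70.0))  # to_mother
print(grafting_flow(v_daughter=-70.0, v_mother=-60.0))  # to_daughter
```

Applying this rule at every grafting point of a tree yields the bidirectional information flows that the text compares with the IN's nested structure.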
The described structure has a direct analogy with the IN nested structure, holding two oppositely
directed information flows, which models not only the observer's logical structure but also the
real dendrite branch structure during its formation.
‘Although it is generally accepted that the spike sequence is the way the information is coded by, we
admit that neuron does not use a single neural code but at least two’ [59]. Hence, the authors
suppose that there is a spike code (which might hold the IMD controls with the actual impulse's
height and length), and another, the triplet's code, composed of four subunits, which is
transferred between the IN nodes (Secs. 3-4), [6,40].
Formally dealing with an impulse (Sec.4) and considering its simultaneous actions, we actually
apply it in the IMD with the impulse's amplitude distributed over a distance (time), which can be
implemented only for a real distance impulse-spike potential (involving our simplification of the
control impulse). The control impulse rises at the end of the extremal segment (Fig.1), after the
information, carried up by the propagating dynamics, compensates the segment's inner information
(determined by the segment macroequations, dynamic constraint, and invariants, Secs. 3, 5). First,
this establishes the direct connection between the information analogies of both the spike and the
threshold. Second, it brings the spike information measure for each of its generations, evaluated
in bits (Sec.6). The interspike intervals carry the encoded information, in the same way that the
discrete intervals between the applied impulse controls do. Conductivity of an axon depends mainly
on the inter-neuronal electrical conductance, which, in the IMD model, is determined by the
diffusion conductivity of the cooperative connections, computed via a derivation of the correlation
functions. A signal, passing through this conductivity, might modify a topology of a single, as
well as a multiple, connection, changing its macrodynamic functions (and possibly leading to
distinct networks) under different inputs.
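The remark that the interspike intervals carry the encoded information, like the discrete intervals between the applied impulse controls, can be sketched with a two-length interval alphabet. The interval lengths and the decoding threshold are assumptions of this sketch:

```python
def encode_bits(bits, short=0.010, long=0.030):
    # Encode each bit as an interspike interval (seconds): a long
    # interval for 1, a short one for 0 (toy two-symbol code).
    return [long if b else short for b in bits]

def decode_intervals(intervals, threshold=0.020):
    # Decode by comparing each interval to a fixed threshold.
    return [1 if dt > threshold else 0 for dt in intervals]

bits = [1, 0, 1, 1, 0]
print(decode_intervals(encode_bits(bits)))  # [1, 0, 1, 1, 0]
```

Here the information sits entirely in the spacing between impulses, not in their amplitudes, which is the point of the analogy in the text.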
Neural oscillatory networks, measured by the brain oscillation’s frequency, power and phase,
dynamically reduce the high-dimensional information into a low-dimensional code, which encodes the
cognitive processes and dynamic routing of information [60].
The IMD model is characterized by the sequential growth of the information effectiveness of the
impulse and step controls along the IN spatial–temporal hierarchy. This is connected with the
changing quality of the IN node's information depending on the node's location within the IN
geometry. These changes increase the intensity of the cooperative coupling and its competitive
abilities, which makes the segments' synchronization more stable and robust against noise, adding
an error correction capability for decoding [5-6]. Even though these results were published
[4, others], the reviewers [59] conclude:
‘None of the approaches furnishes a precise indication of the meaning of the single spike or of the spike
sequences or of the spike occurrence, in terms of “information” in the symbolic language of the neurons’.
It is now widely accepted [60,61] that the brain system should integrate the observed information,
using 'working memory' (long-term potentiating, LTD) for predictive learning.
There are different LTD experimental and simulation studies [61-66] for integration of observation and
verification of dendrite collected information. It’s specifically shown [61,62] that a single neuron could
be decomposed into a multi-layer neural network, which is able to perform all sorts of nonlinear
computations. The bimodal dendritic integration code [64] allows a single cell to perform two different
state-dependent computations: strength encoding and detection of sharp waves.
Key components of integral information processing are the neuron's synaptic connections, which involve
the LTD slow-motion information frequency.
The information observer concurrently verifies the entropy integral's Bayesian information,
whereas its parts are converted into the certainty parts until all portions of the EF are
converted into the total IPF (Sec.2). Information memorized in the observer's existing IN level
has a lower information frequency than the current collection of the requested "qmess" high-level
information (Sec.6). When information is temporarily assembled in the local IN (with STM), its
reverse action works as an information feedback from the existing to a running level of the IN.
If the memory retrieval coincides with the requested information, such an IN reactive action
could enforce the extraction of information and its synchronization (working as a modulation in
the LTD) with that needed by the observer's IN. This helps the integration of both the EF and the
IPF by binding their portions according to the time course, which operates as an LTD.
The experiments and simulation [66] show that maximizing the minimum of the EF integral Bayesian
probability information measure (corresponding to the authors' multiple probabilities) allows
reaching effective LTD learning [65].
The review results [61,63] confirm that the dendrite branches of pyramidal neurons, functioning as
single integrative compartments, are able to combine and multiply incoming signals according to a
threshold nonlinearity, in a similar way to a typical point neuron with modulation of the neuronal output.
According to studies [67,68], an observer automatically implements the logarithmic relationship between
stimulus and perception, which Weber's law establishes. Additionally, Fechner's law states that
subjective sensation is proportional to the logarithm of the stimulus intensity.
Both results imply that the observer's neuronal dynamics provide perceptibility of both the entropy
measure and the acceptance of information (as a subjective sensation). Review [69] provides the evidence
of converging motivational effects on cognitive and sensory processes, connecting attention and motivation.
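Fechner's law can be checked numerically: if sensation S = k·log(I/I_0), then every doubling of the stimulus intensity adds the same sensation increment. The constants k and I_0 below are arbitrary placeholders:

```python
import math

def sensation(intensity, i0=1.0, k=1.0):
    # Fechner's law: subjective sensation grows with the logarithm of
    # the stimulus intensity relative to the threshold i0.
    return k * math.log(intensity / i0)

step1 = sensation(2.0) - sensation(1.0)
step2 = sensation(4.0) - sensation(2.0)
print(abs(step1 - step2) < 1e-12)  # True: equal increments per doubling
```

The equal-increment property is what connects the logarithmic sensation scale with an entropy-like (logarithmic) information measure in the observer's dynamics.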
The IMD-related sensory-cognition processes build and manage a temporal-space pathway to the information stored in the IN (Secs. 2-5).
Brain–computer interfaces [70] involve multitasking information and cognitive processes, such as
attention, and conflict monitoring.
The largest computer simulation [71] embraces only a small part of real Human Brain processing.
The cited and many other neurodynamics studies (including those cited in [5,6,40]) support and confirm the
paper's formal results.
Fig.1. Illustration of the information mechanism creating an observer:
x_t is the external multiple random process; x_t^+, x_t^- are copies of the random process'
components, selected via intervention of the double controls u_+, u_- (2.1) at the moment τ_2;
x_t^+, x_t^- are the conjugated dynamic processes, starting at the moment τ_20 and adjoining at the
moment τ_2k; τ_20k is a moment of turning the controls off; x_t^± is the adjoint process, entangled
during the interval Δ_2 up to a moment τ_101 of breaking off the entanglement; τ_1k, τ_10k, Δ_1, τ_10
are the related moments of adjoining the conjugated dynamics, turning off the controls, the duration
of entanglement, and breaking it off, accordingly, in the preceding internal dynamics; δ_1 is the
interval of observation between these processes. The upper part we refer to as 1a; below it, 1b and 1c
illustrate both double controls' intervals and their impulse u_± actions, accordingly. All illustrated
intervals in the figure are expanded without their proper scaling.
Fig. 2a. Forming a triplet’s space structure.
Fig.2b. The information structure of double-triples' cooperating segments under the applied stepwise controls:
both the step-up starting and the assembling impulse controls; a, a_o are the information invariants.
Fig.3. The IN time-space information structure, represented by the hierarchy of the IN cones' spiral space-time
dynamics with the triplet nodes (tr1, tr2, tr3, ...), formed at the localities of the triple cones' vertex
intersections, where {α_io^t} is a ranged string of the initial information speeds (eigenvalues), cooperating
around the (t_1, t_2, t_3) locations; T-L is the time-space.
Fig.4(a). The double spiral code's structure (DSS), generated by the IN's nodes, with a central line of the
node's information cells and the left and right spirals encoding the IN's states, which are chosen by the
impulse control's double actions;
Fig.4(b). The zone of cells Z_c, formed on the intersections of the oppositely directed spirals, which
produces each triplet's DSS code.
Fig.5. Illustration of the observer's self-forming cellular geometry by the cells of the DSS triplet's code,
with a portion of the surface cells (1-2-3); this cellular geometry's boundary separates the observer from the
interacting environmental processes being observed.
Fig.6. Schematic of observer’s information structure.