Cybercognition - Cybernetic Computational Cognition Theory - © 2014

GOLEM* - Goal Oriented Linguistic Evaluation Machine
by Charles Dyer BE (Mech.), BSc (Hons). Copyright © October 2014

* Renamed in June 2015; referred to as the CCE in the text.
The CCE is a generic computer design template which includes within its architecture both cybernetic/subjective and computational/objective elements (see diagram on p2). It represents a significant advance in science because it specifically models agents with internal experience, that is, agents with both consciousness (control of sensor channel percepts) and volition (command of motor channel concepts). This view of cognition is inherently linguistic, since it involves semantics (the understanding of percepts) and syntactics (the production of concepts). Hence the CCE is a LOT (language of thought) computer design. The relative simplicity of the design is explained, without controversy, by the 'infinite products from finite resources' character of linguistic forms.
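As an illustration only, here is a minimal Python sketch of the duplex loop just described: percepts arriving on the sensor channel are understood (semantics), and concepts departing on the motor channel are produced (syntactics). Every name and one-line 'understanding' below is a hypothetical placeholder, not part of the CCE design itself.

    # Hypothetical sketch of the duplex CCE loop; not the actual CCE design.

    def understand(percept: str) -> str:
        """Semantics: the cybernetic/subjective side assigns meaning to a percept."""
        return f"meaning:{percept}"

    def produce(meaning: str) -> str:
        """Syntactics: the computational/objective side emits a concept (a command)."""
        return f"command:{meaning}"

    def duplex_step(percept: str) -> str:
        meaning = understand(percept)      # consciousness: control of sensor-channel percepts
        return produce(meaning)            # volition: command of motor-channel concepts

    print(duplex_step("light_level=low"))  # -> command:meaning:light_level=low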
The main research goal was to resolve all of the major conceptual difficulties that cognitive scientists encounter in their attempts to properly analyse the brain and mind. The secondary purpose was to highlight the remarkable similarities between the human brain and the digital computer, similarities which go much deeper than previously thought. Assuming peer review fails to detect errors in this work, and its findings are subsequently accepted as true, then for the first time it will be possible to construct intelligent agents which possess internal experience. These agents will possess a combination of subjective agency, autobiographical knowledge, and autonomic embodiment, stored in a characteristic duplex hierarchical data structure. The architecture created is that of a persistent, individualised self-amongst-other-selves, homomorphic with the embodied sense of self and world experienced by humans and higher animals, and consistent with the cybernetics theory of Jakob von Uexküll. In terms of formal data structure, subjective experience has the same narrative structure as language, spoken and written. This is a restatement of the principle that the cognitive scientist/philosopher Daniel Dennett proposes in his 1991 book, 'Consciousness Explained'. However, in GOLEM theory, substantial flesh has been added to Dennett's bare-bones framework.

There are two major challenges in this research. The first is intellectual: building as realistic a model as possible, using all that is known about the brain and mind. The second is emotional: dealing with the enormous amount of thematic 'baggage', the weight of expectation if you will, that accompanies this topic. One such expectation is that any conventional scientific analysis of subjectivity (consciousness and free will) is almost by definition bound to fail, in spite of the fact that a solution to the problem of subjectivity has previously been achieved, in the 1930s, by the Estonian biosemiotician Jakob von Uexküll, who extended ideas first proposed by Kant.

Unfortunately, the longer a closed-form solution takes to find, for whatever reason, the more pessimistic the expectations of success become, and the higher and more unreasonable the general level of scepticism in the science community becomes. There is also the problem of ego, both actual and perceived. Whosoever claims to have discovered something significant about the mind and brain sends out two messages, one manifest, the other latent. The manifest communication consists of the actual work, the material that is written. The latent message is implicit in the medium itself; it virtually shouts "I am some kind of genius". It does not matter whether the scientist has deliberately chosen such a high-profile topic to become famous (unlikely), or has chosen the topic out of interest and inclination (likely). The publicity that genuine research of this type attracts in the popular science media has the unfortunate effect of making the topic seem even less worthy as a valid research target.

Before the development of the CCE came the TDE, or Tricyclic Differential Engine (www.tde-r.webs.com), an earlier implementation of the same underlying cognitive model. Because the global endeavour of Cognitive Science seemed to have 'lost its way' (eg it failed to take metaphysical topics like qualia off the table, ie to remove them as valid items of discourse), considerable pains have been taken to ensure that this project, in both its conception and execution, conforms to the well-known, best-practice guidelines of scientific traditionalism[1]. This is why the TDE's discovery was treated as a conceptually distinct phase, to be completed before the current, CCE, phase of the project could proceed. The gold standard in scientific probity is generally agreed to be the historical chain of discoveries which created the field we now call 'physics'. At the start of this chain sits the astronomer Tycho Brahe, who collected large amounts of astronomical data: the places and times of virtually all the stars and planets visible with the instruments of his day. Brahe died prematurely, that is, without completing the data analysis. At the time of his death, Brahe still believed in the 'wheels within wheels' circular model of celestial motion. His family understood the monetary and status value of his legacy, but struggled with the science. Hence, when they allowed Brahe's apprentice, Johannes Kepler, to examine only those records which clearly contained 'errors' in circularity (hoping to keep the 'best' data for themselves), they unwittingly did exactly the opposite, helping Kepler to discover the correct elliptical shape of orbits much earlier than expected. Kepler had discovered the 'what' (the external appearance of the behaviour), but Newton discovered the 'how' (the internal mechanism of the behaviour) by positing gravity, a force of attraction between any two orbiting masses whose strength diminishes as the square of the distance separating their 'centres of gravity'. The final link in the chain was Einstein, who discovered the 'why': the basic principle on which the mechanism of gravity depends for its functioning was found to be the curvature of spacetime itself. While Newton understood gravity as a ubiquitous attraction between all masses, Einstein understood that the truth was so much more: the matter was the matrix. What was originally Maxwell's aether, the vacuum (nothing) of outer space, then became the 'universe' (everything). Einstein demonstrated that space is not just defined by the absence of matter, but is indeed formed by it.

If the CCE project were ever to be considered as part of any such chain, it would be the one whose members sought to establish the purpose, methods and mechanisms (cf. Marr's three levels of analysis) of the cerebellum, the lobe of the brain which lies at the rear, below the occipital lobe, and manages automatic and autonomic activity such as reflexes and posture. First, Ito, Eccles and Szentágothai laboriously examined many cerebellar neurons, of which there are several kinds. Their data-gathering role in the scheme was comparable to that of Tycho Brahe. After that, David Marr and James S. Albus arrived at almost identical descriptive models independently of one another. They performed the role of Kepler, establishing beyond reasonable doubt the true nature of the cerebellum's wiring and its connectivity with the rest of the brain. The penultimate part of the science-as-narrative metaphor is provided by the TDE stage of the project, which leads to the discovery that our brains function as local TDEs at the information-processing level, while our minds function as global TDEs at the knowledge-management level. This stage clearly demonstrates that each TDE consists of two biologically plausible Turing Engines, one which processes temporal data, and the other which processes spatial data. Thus the TDE research shows how the human brain can be thought to be computational. The ultimate phase within this metaphor is perhaps the CCE work, whose aim is to use these deeper insights as much to improve the design of computers as to understand the mind.

The gedankenexperiment known as Searle's Chinese Room is meant to make us aware that minds do something more than mere computation. It does this by demonstrating that the syntactic aspects of thinking and communication are an insufficient mechanism for true understanding. What is needed, Searle claims, is semantic processing, in which real-world meaning values enter into the computational calculus. The TDE research phase strongly suggests that it is the local TDEs we call the brain that are the syntactic processors, and the (single) global TDE we call the mind that performs the semantic aspects of cognitive and linguistic data processing. If my thematic role during the TDE phase was equivalent to that of a Newton, Marr or Albus, then it follows that my role in the latest, CCE, phase must correspond metaphorically to that of Einstein, that is, providing a final, complete and deep set of explanatory principles.

The comparison with Einstein is not intended to be anything other than a metaphorical explanatory device. However, we have at least one thing in common: we both had a 'Eureka' (sometimes called 'Aha!') moment, a crucial insight attained not by means of superior reasoning, but appearing spontaneously, as if by magic. Einstein's moment was the realisation that a non-inertial frame, such as a falling lift or a rotating child's playground 'roundabout', produces effects that, since they are indistinguishable in every way from gravity, are identical to it. My own moment, perhaps not as grand, but with the same degree of inferential importance, was the realisation that the subject-predicate (also called Subject-Verb-Object in English) parsing of sentences is of precisely the same form as the interpreted (command-line) interaction between a computer user and the operating system 'shell' program. At each line of the command window (eg the Windows cmd shell or the Unix csh shell), the prompt, usually some common ASCII character beside a blinking cursor, is a latent shorthand with a much longer manifest form, not normally shown.
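To make the parallel concrete, here is a small Python sketch using a deliberately naive word-order grammar: an SVO sentence and an interactive shell command share one (subject, verb, object) shape, with the command line's subject left implicit in the prompt's hidden manifest form (the current directory). The function names and the example command are hypothetical.

    import os

    def parse_svo(sentence: str):
        """'user deletes file' -> ('user', 'deletes', 'file')"""
        subject, verb, obj = sentence.split(maxsplit=2)
        return subject, verb, obj

    def parse_command(line: str):
        """'rm draft.txt' -> (implicit subject, verb, object)"""
        verb, obj = line.split(maxsplit=1)
        subject = os.getcwd()   # the latent shorthand's manifest form: the current path
        return subject, verb, obj

    print(parse_svo("user deletes file"))
    print(parse_command("rm draft.txt"))   # eg ('/home/user/projects', 'rm', 'draft.txt')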

This manifest form, a string of directory names, actually represents a description of a descending 'path', or relative location, in the hierarchical file system. In fact, all possible utterances (tactical uses of speech, eg within a dialogue) can be described by an expanded or manifest form. This expanded form matches exactly the Peircean triadic (three-part) definition[3] of meaning, cf. the customary two-part version, attributed to Ferdinand de Saussure, consisting of signifier and signified (roughly, syntax and semantics). So what, you might well ask. What does this jargon mean? It means that when we use a computer, we are talking to it in precisely the same way we talk to one another, and (here is the important bit) it is talking to us in precisely the same way. We are using first-order predicate logic to build a knowledge base, one feature at a time, checking it for validity, just as a computer compiles a script into executable object code, which is a model in propositional logic form.
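The two- and three-part definitions can be put side by side in a short Python sketch; the field names follow the standard semiotic terms (signifier/signified for Saussure; sign/object/interpretant for Peirce), and the example values are invented.

    from dataclasses import dataclass

    @dataclass
    class DyadicSign:             # Saussure: two-part
        signifier: str            # the form, eg the word 'cat'
        signified: str            # the concept it evokes

    @dataclass
    class TriadicSign:            # Peirce: three-part
        sign: str                 # the form
        obj: str                  # the real-world object referred to
        interpretant: str         # the effect on the interpreter: meaning-in-use

    print(DyadicSign("cat", "the concept of a cat"))
    print(TriadicSign("cat", "the cat on this mat", "a warning: mind the cat"))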
The analogy becomes all the more compelling because it incorporates writing (the strategic rather than tactical use of language[2]). Writing, like a computer program, is compiled rather than interpreted: it must be built into a single, consistent executable image before it becomes a usable model.
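The interpreted/compiled contrast can be sketched in a few lines of Python; the toy 'language' here (simple name=value assignments, with $name references) is invented purely for illustration.

    def interpret(lines, env):
        """Tactical mode: act on each line immediately, within a shared global context."""
        for line in lines:
            key, value = line.split("=", 1)
            env[key] = value                     # immediately meaningful, immediately stored

    def compile_block(lines):
        """Strategic mode: nothing takes effect until the whole block is consistent."""
        staged = dict(line.split("=", 1) for line in lines)
        unresolved = [v for v in staged.values()
                      if v.startswith("$") and v[1:] not in staged]
        if unresolved:
            raise ValueError(f"inconsistent image: {unresolved}")
        return staged                            # a single consistent executable image

    env = {"path": "/usr/bin"}
    interpret(["a=1", "b=2"], env)               # line by line, like dialogue
    print(env)
    print(compile_block(["x=1", "y=$x"]))        # whole text at once, like an essay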

Single commands, whether they are utterances in a tactical (ie interactive, he-said-she-said) human dialogue, or in a tactical (move/click mouse -> window/icon motion/reaction) human-computer session, are each interpreted into executable [instruction+data] dyads which are immediately meaningful to the mind or operating system, and hence can be immediately acted upon in isolation, that is, interpreted as either code to be executed or data to be stored. They do, in fact, have a context: the global one shared with all other interpreted commands, consisting of environment variables like $path.
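A minimal Python sketch of the dyad and its global context follows; the command table and the $path-style environment are invented examples, not an actual shell.

    ENV = {"path": "/usr/local/bin:/usr/bin"}    # global context shared by every command

    COMMANDS = {                                 # instruction -> action performed on data
        "echo": lambda data: print(data),                        # code to be executed
        "set":  lambda data: ENV.update([data.split("=", 1)]),   # data to be stored
    }

    def interpret_one(line: str):
        """Split one utterance into its [instruction+data] dyad and act at once."""
        instruction, _, data = line.partition(" ")
        COMMANDS[instruction](data)              # immediately meaningful in isolation

    interpret_one("echo hello")
    interpret_one("set user=charles")
    print(ENV)                                   # the shared context has been updated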

Multiple commands, like those within a source code file, are not acted upon immediately, because they are mutually defined with local variables whose semantics are not meaningful outside the program's scope. But (and THIS is the real 'kicker') multiple sentences written inside a common extent of text (eg a paragraph), and therefore usually about a common subject, can be treated in exactly the same way. Where computer programs have local 'scoping' of variables (ranges within which implicit references to those variables can be made), paragraphs of written text have anaphora, the use of pronouns and other pro-forms to avoid repetition and allow implicit reference between sequential sentences, ie semantic value reuse.
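The scoping/anaphora parallel can be sketched with a deliberately naive resolution rule: bind each pronoun to the most recently mentioned noun, just as an interpreter resolves a variable name to its most recent binding in the enclosing scope. The word lists below are invented, and real anaphora resolution is far subtler.

    NOUNS = {"engine", "fuel", "driver"}         # candidate antecedents
    PRONOUNS = {"it", "this", "that"}            # implicit references

    def resolve_anaphora(paragraph: str):
        resolved, antecedent = [], None
        for word in paragraph.lower().replace(".", "").split():
            if word in NOUNS:
                antecedent = word                # like binding a local variable
            elif word in PRONOUNS and antecedent:
                word = antecedent                # like reading that variable back
            resolved.append(word)
        return resolved

    print(resolve_anaphora("The engine stalled. It needs fuel."))
    # -> ['the', 'engine', 'stalled', 'engine', 'needs', 'fuel']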

1. Thomas Kuhn and Karl Popper made important observations about science, both as an investigation of nature and as a social enterprise, in the 20th century. These insights do not, in our opinion, devalue the classical science paradigm as a learning theory and an idealistic form of behaviour. The same distinction exists between democracy conceived as an ideal egalitarian (demos: the people) form of governance and real-world examples of democracy. The problems with democracy make us want to stitch it, not ditch it. Similarly, my goal here is to avoid contributing to the pre-singularity fear-mongering that infests today's science ('Sci2K'); instead, we wish to provide a timely reminder that science in its classic form, as a secular religion, is not broken and therefore does not need to be fixed.
2. The idea underlying the tactical/strategic dichotomy is in fact the same not only for the speech/writing distinction but also for the subjective/objective and interpreted/compiled distinctions. These are all created by the bipartite nature of the CCE architecture, which is hybridised from both cybernetic and computational elements. This dichotomy is not something artificial imposed from outside, but arises internally from basic system I/O principles, those of (a) corrective feedback information arriving from sensor channels and (b) predictive feedforward information departing to motor and memory devices (see the sketch after these notes).
3. The triadic form is considered powerful enough to represent real human speech.
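Principles (a) and (b) of note 2 can be shown in a one-step control-loop sketch in Python; the gain, the signal names and the numbers are hypothetical.

    def control_step(setpoint, sensor_reading, model_prediction, kp=0.5):
        feedback = kp * (setpoint - sensor_reading)   # (a) corrective, in from the sensor channel
        feedforward = model_prediction                # (b) predictive, out to motors/memory
        return feedback + feedforward                 # the command actually sent out

    # aim for 10.0, sensor reports 8.0, internal model predicts 1.5 is needed:
    print(control_step(10.0, 8.0, 1.5))               # 0.5*2.0 + 1.5 = 2.5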

