100 Days Blog

Day 039 - Avoiding circularity

Submitted by Sam on 28 June, 2011 - 23:57

Some theories of mind merely reproduce or defer the problems that they try to resolve, creating fallacious arguments which result in infinite regress. A particularly well-known example is the 'homunculus argument', which arises in half-formed theories of vision. In such theories, the light which falls onto the eye's retina is 'watched' and interpreted by some process (or someone: a 'little man', or homunculus) as if it were a television screen. Such a Cartesian theatre merely defers the question of how decisions are made and sensory input interpreted, resulting in an internal homunculus with its own homunculus to interpret its own television screen, ad infinitum.

Questions like “What caused the universe and why?” and “How can you tell what is good?” are characterized by similar slips into circular reasoning: they never seem to reach a final cause, always yielding to yet another recursion of “what caused that cause?”. Every culture has evolved strategies to resist such paradoxes, preventing minds from endlessly dwelling on these questions and steering us away from infinite loops so that more soluble problems can be tackled. Through shame, taboo, awe and mystery, institutions of law, religion and philosophy provide the authority to engender social consensus, defusing such lines of inquiry by offering stand-in responses and ways of thinking that deflect them. Whilst this may seem like dogma and indoctrination, it serves the socially-beneficial function of pushing minds towards productive work rather than wasting time in ultimately fruitless reasoning.

However, in the resonant words of Minsky, it is worth remembering that “one can acquire certainty only by amputating inquiry.”

Day 038 - Why can't we just do what we want?

Submitted by Sam on 28 June, 2011 - 00:08

During moments of internal conflict, when we simultaneously hold competing desires, we often wonder why we can't just tell ourselves what to do. If we are in control of our 'selves', why can't we decide to prioritize some goals to the exclusion of others? When faced with the conflict between the desire to write and the desire to sleep, for instance, I cannot simply and completely override one with the other. Through will-power alone I am unable to completely disable those agents in my conscious mind which alert me to the need to sleep; instead I must resist them with one or more conscious strategies, employing counter-agents as indirect methods to depress (but never disable) the need-for-sleep agents' nagging effects.

The reason for the mind's lack of control over its own agents is that most of its workings are utterly hidden from itself. Our 'selves' are not conscious of any of the processes which generate their own machinations, but merely of the high-level effects of these processes, much as a computer user is not constantly aware of all of the minute processes through which pressing a key on a keyboard creates a symbol on a screen.

If one agent could seize and maintain control from all of the other competing agents, then the mind could change itself at will. In order to do so, the mind would have to know how all of its agents worked, and give itself the option of managing each agent on an individual basis. Disregarding the computational expense of such a recursive architecture, a single example is enough to convincingly demonstrate how evolutionarily disadvantageous such self-knowledge could be. If we were able to exert full control over our pleasure and reward systems, we would be able to reproduce the sense of success and achievement without the need for any actual accomplishment, and would be able to pursue personal desires to the absolute exclusion of all else, regardless of cost.

Such a mind with sufficient knowledge of its own workings to switch parts of itself on and off and counteract all of its own plans would contain not only the ability but the propensity to self-destruct at will. This is borne out by experimental evidence: rats and monkeys given an unlimited supply of intravenous stimulants self-administered them to the point of severe weight loss and death 1. It is for these reasons that we do not know ourselves perfectly, and that the illusion of 'self' is protected from facilitated self-destruction by not knowing how to control its own agents.

  • 1. Wise, Roy A. "Brain Reward Circuitry: Insights from Unsensed Incentives." Neuron 36.2 (10 October 2002). Print.

Day 037 - The amnesia of infancy

Submitted by Sam on 27 June, 2011 - 00:12

Describing the action of picking up a cup of tea to drink as a 'simple task' conceals a great complexity of individual sub-tasks, each of which must be mastered and marshalled in the appropriate sequence to allow the completion of the task. Marvin Minsky describes these sub-tasks as agents 1, and categorizes some relatively high-level, conceptually-simple agents involved in picking up a cup of tea as follows:

  • Grasping agents, tasked with holding the cup
  • Balancing agents, tasked with preventing the tea from spilling
  • Thirst agents, which want you to drink the tea
  • Moving agents, tasked with getting the cup to your lips

In Minsky's schema, each agent can be broken down further and further, through chains of hierarchy and interaction, into very small, functionally irreducible parts. Each agent, mindless in itself, interacts with many others in very special ways to produce true intelligence. However, through what Minsky terms “the amnesia of infancy”, we assume that many of our extremely complex abilities (like picking up a cup of tea and drinking it) are both simple and ready-made in our minds, forgetting how long it took us as children to learn the myriad steps which tell us how objects in the real world interact, thereby concealing the vast complexity of these interacting processes.
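Minsky's idea of mindless agents composing into a competent hierarchy can be caricatured in a few lines of code. This is a toy sketch only, not Minsky's own formalism; all of the names here (Agent, run, the action strings) are invented for illustration:

```python
# A toy sketch of a Society-of-Mind-style hierarchy: each agent either
# performs one trivial primitive action (a leaf, 'functionally
# irreducible') or simply activates its sub-agents in sequence.

class Agent:
    def __init__(self, name, subagents=None, action=None):
        self.name = name
        self.subagents = subagents or []  # lower-level agents this one marshals
        self.action = action              # primitive action, if this is a leaf

    def run(self, log):
        if self.action:                   # leaf agent: do one mindless thing
            log.append(self.action)
        for sub in self.subagents:        # otherwise delegate, in order
            sub.run(log)

# Compose Minsky's example agents for 'drinking a cup of tea'.
grasping  = Agent("GRASPING",  action="close fingers around cup")
balancing = Agent("BALANCING", action="keep cup level")
moving    = Agent("MOVING",    action="bring cup to lips")
thirst    = Agent("THIRST", subagents=[grasping, balancing, moving])

steps = []
thirst.run(steps)
print(steps)  # the 'simple task' unfolds into its hidden sub-tasks
```

No single agent in the sketch is intelligent; the apparent competence of the 'simple task' only emerges from the marshalling of the parts.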

As a result of this concealment, answers to questions such as “why is a chain more than its various links?” seem obvious to us as adults, because we cannot remember how hard it was to learn the rules of interaction (such as those which prevent two objects from ever being in the same place) which are now second nature to us. We operate under an illusion of simplicity which results from a distancing from what happened during our infancy, when our first abilities were formed. As each skill was mastered, as each agent matured, additional layers were added on top of them, until the foundational layers, the most basic agents, seem so remote to us as adults that we forget we ever had to learn them.

This distancing effect obscures the fact that things like “common sense” and “simple tasks” are in fact wondrously diverse and intricate, composed from “an immense society of hard-earned practical ideas - of multitudes of life-learned rules and exceptions, dispositions and tendencies, balances and checks”.

  • 1. Minsky, Marvin Lee. The Society of Mind. New York: Simon and Schuster, 1986. Print.

Day 036 - The purpose of memory

Submitted by Sam on 26 June, 2011 - 00:41

If the mind is a control system directed towards the purpose of deciding what to do next, then all components of mind, including memory, must in some way support this goal. Pentti Kanerva, currently a Research Affiliate at the Redwood Center for Theoretical Neuroscience, put forward a theory of memory in 1988 1 consonant with this view, stating that the function of memory is to make available information relevant to the current state of the outside world rapidly enough to allow the organism to predict events in the world, including the consequences of its own actions.

The ability to predict the consequences of actions, however fuzzily, has clear evolutionary benefits. The best way to make predictions is to look at the most recent past, and to compare current events with previously encountered similar situations. Consequently, there is clear evolutionary advantage in a system which can retrieve earlier situations and their consequences, and match them to the various modes of sensory stimulus which constitute the organism's 'present'.

In this model of memory, the present situation as represented by the current pattern of sensory input acts as a retrieval cue for memories of earlier events, which are used to predict the next sensory input. In a continual process of retrieval and comparison, the organism's internal model of the world is created and updated, comparing and strengthening memories of sequences of events which accurately predict real-world consequences, and modifying those that do not. The system learns by this corrective process of comparison, encoding and integrating information into a predictive model of the world, aiding the individual in deciding what to do next.
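Kanerva's proposal can be sketched concretely. The following is a minimal, illustrative implementation of a sparse distributed memory written from the description above; the parameters (address width, number of hard locations, activation radius) are arbitrary choices for the sketch, not Kanerva's own:

```python
# A minimal sparse distributed memory sketch: binary addresses are
# compared by Hamming distance, a write distributes the data word over
# all 'hard locations' near the address, and a read pools the counters
# of nearby locations and thresholds them back into a binary word.
import numpy as np

rng = np.random.default_rng(0)
DIM, LOCATIONS, RADIUS = 256, 2000, 112   # address width, hard locations, Hamming radius

hard_addresses = rng.integers(0, 2, size=(LOCATIONS, DIM))  # fixed random addresses
counters = np.zeros((LOCATIONS, DIM), dtype=int)            # one counter vector per location

def active(cue):
    """Hard locations within RADIUS Hamming distance of the cue."""
    return np.sum(hard_addresses != cue, axis=1) <= RADIUS

def write(address, data):
    """Distribute the data word over all locations activated by the address."""
    counters[active(address)] += 2 * data - 1   # +1 for a 1 bit, -1 for a 0 bit

def read(address):
    """Pool the counters of activated locations and threshold at zero."""
    return (counters[active(address)].sum(axis=0) > 0).astype(int)

# Store a pattern, then retrieve it from a corrupted version of itself.
pattern = rng.integers(0, 2, size=DIM)
write(pattern, pattern)                     # autoassociative storage
noisy = pattern.copy()
noisy[:20] ^= 1                             # flip 20 bits of 'sensory input'
recalled = read(noisy)
print(np.mean(recalled == pattern))         # fraction of bits recovered
```

Note how retrieval from a corrupted cue returns the stored pattern: the noisy 'present' recalls the matching stored past, which is exactly the predictive, cue-driven role of memory described above.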

  • 1. Kanerva, Pentti. Sparse Distributed Memory. Cambridge, MA: MIT, 1988. Print.

Day 035 - Visual prostheses

Submitted by Sam on 25 June, 2011 - 00:48

Problems with the eye or the optic nerve are the leading causes of new blindness. In such cases, the visual cortex (the part of the brain responsible for processing the visual information) often remains largely intact, presenting the possibility of recovering some level of sight through the integration of an intracortical visual prosthesis.

Motivation for attempting to restore sight through electrical stimulation of the brain dates back to at least 1918, when two German doctors reported that, during an operation to remove bullet-wound bone fragments from a patient's head, the patient described flickering in the right visual field when the left occipital lobe was electrically stimulated.

Using the same principles as those realized by the rat hippocampal prosthesis recently developed by Dr Berger, intracortical visual prostheses are being developed which could conceivably restore vision by simulating the pattern of neural activity usually associated with the sense-data provided by the eyes, supplying the visual cortex with meaningful sensory input that it no longer receives.

The basic concept for the prosthesis involves implanting electrodes into the visual cortex, which would be connected to an implanted computer chip powered and controlled wirelessly. A camera would feed images to a computer system, which would encode images and transfer them to the cortex digitally, with each 'point' in the image represented by an electrical stimulation in the implanted electrode. These stimulations would create points of perceptual white light (phosphenes) in the appropriate place.
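The point-by-point encoding described above can be sketched in a few lines. This is a deliberately naive illustration of the 'bitmap' idea, not any real device's algorithm; the grid size and brightness threshold are invented for the example:

```python
# A toy 'bitmap' phosphene encoder: a grayscale camera frame is
# average-pooled down to a coarse electrode grid, and each sufficiently
# bright cell becomes a candidate stimulation (a phosphene).
import numpy as np

GRID = 10            # e.g. a 10x10 electrode array (illustrative)
THRESHOLD = 0.5      # brightness above which an electrode fires (illustrative)

def encode_to_phosphenes(image):
    """Pool a grayscale image (values 0..1) down to the electrode
    grid, then threshold into on/off stimulations."""
    h, w = image.shape
    bh, bw = h // GRID, w // GRID
    pooled = image[:bh * GRID, :bw * GRID] \
        .reshape(GRID, bh, GRID, bw).mean(axis=(1, 3))
    return pooled > THRESHOLD   # True = stimulate this electrode

# A synthetic 'camera frame': dark background with a bright square.
frame = np.zeros((100, 100))
frame[20:60, 30:70] = 1.0
stim = encode_to_phosphenes(frame)
print(stim.astype(int))   # the coarse pattern of phosphenes the wearer would 'see'
```

Even in this toy form the limitation discussed below is visible: all spatial detail finer than the electrode grid, and everything about colour, is discarded.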

Whilst this bitmap approach to artificial sight is both promising and intuitive, it needs substantial alteration and improvement in order to generate clear, highly-resolved colour imagery, as it seems that the visual system does not actually work on a point-by-point basis. Rather than encoding images as pixels, our visual cortex interprets them in terms of colours, edges, orientation and so on. Finding how these dimensions of vision are encoded in the brain, and how they may be mapped by an artificial system, are among the stated goals of Illinois Tech's IntraCortical Visual Prosthesis Project.

Day 034 - Neural prosthesis

Submitted by Sam on 23 June, 2011 - 23:02

In a paper published last week, biomedical engineers from the University of Southern California detailed how they were able to selectively turn rats' memories on and off using a computer chip. The team artificially constructed neuron-to-neuron connections between the rats' normal brain circuitry and a computer circuit designed to duplicate the neural activity associated with encoding memory, ultimately allowing the scientists to turn certain memories on and off with the flip of a switch. As the lead author of the paper puts it: "Flip the switch on, and the rats remember. Flip it off, and the rats forget” 1.

The prototype cortical prosthesis demonstrated by the team was applied to the information processing areas of two sub-regions of the rats' hippocampi, areas which have previously been identified as being involved in the formation of long-term memory. The researchers taught the rats a task which involved pressing levers in order to release droplets of water as a reward. During the learning process, the team used electrical probes to record the activity of the hippocampus, which converts short-term memory into long-term memory. By blockading the neural interactions of the two areas, the researchers were able to make their trained rats forget their long-term learned behaviour.

The team then integrated their artificial hippocampal system, which had been programmed to duplicate the patterns of electrical activity associated with the interaction of the two areas of study. When activated, the chip delivered electrical pulses which conformed to the normal firing of the hippocampal output region, thereby simulating the memory-encoding function and restoring long-term memory capability in the pharmacologically blockaded rats.

As well as recovering switched-off memories in compromised rats, the researchers were able to show that the device could actually strengthen the memory-generation capability of rats with a normally functioning hippocampus.

The success of the prosthesis in both restoring and enhancing memory makes it a logical candidate for development into a human-viable prosthesis, with the potential to help sufferers of neurodegenerative diseases such as Alzheimer's, or victims of stroke or brain-injury.

  • 1. "USC - Viterbi School of Engineering - Restoring Memory, Repairing Damaged Brains." USC - Viterbi School of Engineering. 17 June 2011. Web. 23 June 2011

Day 033 - Insights from brain damage

Submitted by Sam on 23 June, 2011 - 01:48

Studying the effects of localized brain damage can help refine and verify theories of mind. As Professor Rama explains in his TED Talk below, the brain's modular structure means that injuries to small parts of the brain lead to highly-selective loss of function, which throws light onto the structural-functional relationships of the brain.

The examples that Rama cites reinforce several key points that have been covered from different perspectives elsewhere in this blog:

  1. The brain is highly modular. It is a robust system that does not suffer a system-wide degradation in performance when a single element is compromised. This is evidenced by Rama's first example of Capgras syndrome, which occurs when damage severs the connection between the brain's facial recognition area, the fusiform gyrus, and the emotional centres. Sufferers recognize a familiar face but feel no emotional response to it, and so conclude that the person is an impostor; yet they readily accept the same person's identity through other characteristics, such as the sound of their voice.
  2. The mind makes its own subjective reality. The brain has the capacity to provide utterly compelling subjective transformations of objective reality, even when such experiences conflict with sensory evidence. This is shown by the second example, that of the phantom limb. Amputees with phantom limb syndrome continue to vividly feel the presence of their missing limb, despite knowing that the limb is no longer there.
  3. Phenomenal experience is encoded by the neural network. In his third example, Rama shows how abnormal cross-wiring of areas of the brain can lead to synaesthesia, the mingling of senses. People with this condition might see a number or musical tone as coloured, whilst remaining normal in all other respects. The fusiform gyrus contains adjacent regions which specialize in numbers and in colours, and if these are accidentally cross-wired (perhaps due to a mutation in the gene which controls the pruning of connections between regions in the developing brain), then numbers become neurally linked to colours.

Day 032 - Words

Submitted by Sam on 21 June, 2011 - 16:43

The difficulty of verbally articulating a new experience arises because there are several degrees of separation between the objective reality which stimulates the experience and the way in which words are used to encode that experience. Using the example of describing the 'red' of a red tomato, the degrees can be delineated as follows:

  • The objective red is the physical, real-world 'red'. Objective red is the physical, non-internalizable process by which electromagnetic radiation is emitted from a physical object to produce the property we refer to as 'red'. This electromagnetic radiation constitutes the physical properties of real-world 'colour', not 'colour' as we know it, which does not exist in the physical world, but is instead constructed from our internal transformations of 'objective red'.
  • Neural red is the transformation of the sensory input that detects objective red into patterns of neural activity in the brain.
  • Phenomenal red is the integration of the 'raw' experience of neural red into conscious processes. It is perceived as the 'what-it-is-like' of seeing red, and it is what Mary 'learned' when she left her monochromatic room and experienced colour for the first time.
  • The word 'red' refers to the ineffable experience of the phenomenal, but does not describe it. The word 'red' therefore refers to objective red, the real-world red, only indirectly.

Whilst phenomenal red can be described wholly by a complete physiological map of the associated neuro-activity in the brain, the experience cannot be duplicated in other brains through verbal explanations, which are merely designators of the experience, not complete encodings of it. The only way to transfer experiences, therefore, is to recreate an exact neurophysiological replica of neural red in the brain of the recipient.

Language works because we have species-wide homologous brain structures, with a certain degree of genetically determined uniformity in the structures which realize phenomenal experience. This suggests that the basic qualities of experience are similar in most humans, and so we can reliably use words as designators for an assumed common endowment of similar qualitative experiences. However, there is currently no way to verify that two interlocutors are actually talking about the same phenomenal experience, so it is more useful to think of words as connoting the phenomenal rather than describing it.

Day 031 - Tracing our experience of the real world

Submitted by Sam on 21 June, 2011 - 01:57

We've seen that nothing in the external world is ever directly in contact with our minds, as all perception of objective reality is mediated through nerve signals, patterns of electrical charge derived from sensory receptors. This electrical stimulation has to be interpreted by the neural network in order to derive meaning and create a relationship between the encoded signal and objective reality. When we 'see' a red tomato, for instance, we only actually see a construct of a red tomato, a mental impression generated from filtered sense-data. The translation of sensory data into conscious processes creates 'the first obligatory step in the epistemic chain' 1; the basic building-blocks of conscious recognition and of our relationship with the outside world, serving as non-verbal thinking elements.

If the epistemic chain starts with the interpretation of sense-data into experiences (such as the electromagnetic waves emitted from a red tomato being transformed into the experience of seeing a 'red' tomato), an entirely non-verbal process, then it ends with the association of phenomenal experience with language, where names are given to experiences, and propositional knowledge becomes possible.

Without language, certain basic, primal experiences can still contain meaning, as they have a capacity to refer. The feeling of hunger rapidly becomes associated with eating, and so the experience of hunger comes to refer to the need to eat, as the 'what-it-is-like' of hunger gains the 'what-it-is-about' of eating, entirely without language. This is the most basic form of reference available to conscious organisms, allowing phenomenal experience (the experience of hunger, thirst or pain, for example) to effect behaviour (eating, drinking, moving etc.). In humans, this basic form of phenomenal reference has become linked to a verbal lexicon, where words become anchored by their related phenomenal experience(s) to create the final link in the epistemic chain.

  • 1. Musacchio, J. "The Ineffability of Qualia and the Word-anchoring Problem." Language Sciences 27.4 (2005): 403-35. Print.

Day 030 - Thought experiments

Submitted by Sam on 20 June, 2011 - 00:04

Like metaphors, thought experiments can be used as tools to compress complex ideas into tractable, workable concepts described in everyday language. Like metaphors, thought experiments have a very broad applicability, and can very usefully impart insight through analogy and extrapolation. However, just as metaphors forgo the tightly-controlled precision of technical language in favour of generalistic insight, so thought experiments leave behind the rigour and empiricism of traditional scientific experimentation in order to tackle phenomena that are difficult or impossible to test in the real-world. With their almost unavoidable appeals to common sense and intuition, thought experiments can contain a distressing degree of unscientific latitude, which can result in the propagation of the kind of misleading conclusions that Jackson drew from the Mary's room thought experiment.

As the Mary's room thought experiment was originally proposed by a philosopher, it suffered from its inception by being conducted in a 'laboratory' (to risk extending the metaphor) lacking in neuroscientific expertise and burdened by a heavy philosophical bias. However, by virtue of the experiment's eminent repeatability (transferability and repeatability being the whole point of thought experiments), it was rapidly re-run in many other minds, and analyzed from perspectives with varying areas of expertise and predispositions. Through this organic review process, experimenters equipped to resist 'intuitive' answers to its implications, like Dennett, were able to successfully counter its non-physicalist implications and make a strong case for empirical science as the primary mode of research and discovery, rather than models like thought experiments, which are easily affected by fallible subjunctive reasoning and knowledge-gaps.
