
analog

science fact & science fiction

vol. LXXI, No. 6, August 1963, pp. 6, 92-94

An editorial by John W. Campbell

A Place for the Subconscious

There's a huge difference between an intellectual conviction - no matter how completely sincere - and an emotional feeling of belief. An intellectual conviction is usually logical, and sometimes it's even rational, but lacks real motivating power.

The difference between ``logical'' and ``rational'' really becomes true, deep feeling-awareness only when you have the experience of arguing with someone who is perfectly logical, absolutely and irrefutably logical ... and irrational. The ``computing psychotic'' type of the committed insane represents the end-example of the type. His logic will be absolutely flawless; you'll shortly find that you, not he, are guilty of false syllogisms, argumentum ad hominem, undistributed middle, and other forms of bad logic.

Only he goes on being magnificently irrational, despite his perfect logic.

The problem is, of course, that perfect logic applied to false postulates yields perfectly logical irrationality. The Master False Postulate of the system the computing psychotic operates on is one widely accepted: ``Anything that is logical is necessarily rational.'' Since his logic is flawless, that proves to him that he's perfectly rational.

The great difficulty lies in the fact that while we have worked out a codified, formal technique of manipulating postulates - that's what we mean by ``Logic'' - we have no codified or formalized system for deriving postulates. Thus you can check on the rigor of another man's logical thinking, and cross-communicate with him as to the nature and validity of the logical steps, but you cannot check his derivation of the postulates he's manipulating so logically.

For example, when Newton studied Kepler's laws of planetary motion, Galileo's work on falling bodies, pendulums, accelerations, et cetera, he abstracted from the data certain postulates, now known as Newton's Laws of Motion and Gravity.

He derived from those postulates certain conclusions. That his conclusions were absolutely validly derived, by perfect logic, could be checked. But there was no means whatever of cross-checking the process by which he had abstracted those postulates from the data.

Kepler's laws of planetary motion were simply observational rules-of-thumb - they were not ``logical'' or ``rational'', but simply pragmatic.

Newton's postulates - his ``Laws'' - could not then, and cannot now, be provably derived from the data he used. There is absolutely no known method of going from the data Newton worked with to the postulates he reached. That his thinking process in doing so was sound absolutely cannot be proven, even today. We do not know how postulates can be abstracted from data. Men can do it; this we know as a pragmatic fact. How they do it we do not know.

Certainly Newton's postulates were ``proven'' in his own lifetime; ``proven'' in the narrow sense of ``shown to be useful in predicting real phenomena in the real universe.''

But in that sense, Ptolemaic astronomy had been ``proven'' too, a millennium or so earlier.

It is because we still do not know how to do what all men do constantly in their lives - abstract postulates from observation - that we cannot design a machine that can think, nor help the psychotic to re-abstract and correct his postulates. (And we can't re-abstract and correct our own false postulates either, of course!)

In the course of developing computers - modern terminology prefers that word to ``robotic brains'' - men have been forced to acknowledge gaps in their understanding of thinking that they had previously been able to glide over with a swift, easy ``you know what I mean ...''. There was the method of ``explaining'' something with the magnificent phrase ``by means of a function'' - so long as you didn't have to specify what the function was, or how it operated.

Robots, however, have a devastating literal-mindedness. They tend to say, ``Duh ... uh ... no, boss, I don't know what you mean. Tell me.'' Even more devastating is the robot's tendency to do precisely and exactly what you told it to do. The gibbering feeling that can be induced in the man trying to instruct a robot can be demonstrated beautifully by a very simple little business. It makes a wonderful way of explaining the problems of automation and cybernetics to a non-technical audience - or a technical audience that's never worked with that kind of problem. Try this one in a group some time:

``Assume that I am a robot. I - like all robots - follow orders given me with exact, literal, and totally uncaring precision. Now each of you, of course, knows how to take off a coat; all you have to do is give me directions as to how to take off my coat.''

Usually the instructions start with ``Take hold of your lapels with your hands.''

This is complied with by taking the left lapel in the right hand, and the right lapel in the left hand - since the intended positions weren't specified.

``No ... no! Take the left lapel with the left hand, and the right lapel with the right hand!''

You do - taking the left lapel somewhere up under your left ear, and the right lapel at about the level of your right-side pocket. When the order is corrected - i.e., adequate precision and completeness of instruction have been worked out - the next step is usually ``Now straighten out your arms.''

This allows of many interesting variations. You can straighten your arms out straight in front of you, making ripping noises as you do, since the robot could, we assume, tear the cloth readily. Or you can straighten them straight out to the sides, or straight up - with ripping-noise sound effects in any case. Or, naturally, any combination that happens to appeal to you: the order was positive, but not explicit.

Usually about this time the audience has a genuine realization that stating explicitly what you mean, even in so simple a matter as taking off a coat, is no easy task. From that point on, the difficulty and frustrations of trying to design automatic machinery can be understood a lot more sympathetically.
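The coat demonstration translates directly into code. Here is a minimal Python sketch - every name and parameter is invented for illustration - of an executor that, like the robot in the parlor game, does exactly what the order states and resolves every unstated detail arbitrarily:

```python
import random

def literal_robot(order):
    """Execute an order with exact, uncaring precision.

    `order` is a dict of explicitly stated parameters; any detail the
    order leaves unspecified is chosen arbitrarily, as a literal-minded
    robot would.
    """
    hand_for_left = order.get("left_lapel_hand") or random.choice(["left", "right"])
    hand_for_right = order.get("right_lapel_hand") or random.choice(["left", "right"])
    grip_height = order.get("grip_height") or random.choice(
        ["up under the ear", "at pocket level"])
    return (f"left lapel in {hand_for_left} hand, "
            f"right lapel in {hand_for_right} hand, {grip_height}")

# Underspecified order: the robot may cross its arms or grip at absurd heights.
print(literal_robot({}))

# Only a fully specified order produces what the speaker actually meant.
print(literal_robot({"left_lapel_hand": "left",
                     "right_lapel_hand": "right",
                     "grip_height": "at chest height"}))
```

Each omitted parameter multiplies the number of "logical but not rational" outcomes - which is exactly the audience's experience in the demonstration.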

This is the first and simplest level of working with a system that is perfectly logical, but not rational. The results the instructor gets are the logical consequences of the postulates - the orders - he feeds into the logical-not-rational system.

Very recently, Dr. Gotthard Gunther, working at the Electrical Engineering Research Laboratories of the University of Illinois, has developed a formal, codifiable system of mathematical hyper-logic - I must call it ``hyper-logic'' simply to distinguish the fact that it goes beyond the multi-valued logics that have been common heretofore, and possesses characteristics and potentialities never before available. It is, in effect, a formal-mathematical map for the design of a conscious computer. It is, also, a formal system making possible pattern-type thinking; it may, eventually, lead to the development of a formal, codifiable system of abstracting postulates.

The essence of consciousness is typified by the famous ``I think; therefore I am.'' It doesn't, actually, prove existence - but it does prove consciousness! It is one thing to think; it is perfectly conceivable that an entity capable of thinking did so without the slightest awareness that it was doing so. It would be an unconscious thinker.

The essence of consciousness is thinking, and simultaneously being aware of that action. Dr. Gunther points out that consciousness is a reflective process - and requires for its existence (1) a thinking process, (2) a simultaneous parallel thinking process observing the first, and (3) a system of relationships between the two such that the reflection is possible. (That is, for a mirror image to be seen, there must be an object, a mirror - and light, establishing a relationship between the two.)
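Gunther's three requirements can be caricatured in a few lines of Python. This is a loose analogy, not anything from the monograph - the function names and the "trace" mechanism are invented - but it shows the structure: a first-order process, a second-order process that examines it, and a shared record relating the two:

```python
def thinker(x):
    """(1) The first-order process: plain step-by-step computation,
    which also records its own steps as it goes."""
    steps = []
    steps.append(("double", x * 2))
    steps.append(("add one", steps[-1][1] + 1))
    return steps[-1][1], steps

def observer(steps):
    """(2) The second-order process: it reasons about the first
    process's steps rather than about the original problem."""
    return [f"performed '{name}', got {value}" for name, value in steps]

# (3) The trace is the relationship - the "mirror" - connecting the two.
result, trace = thinker(5)
report = observer(trace)
print(result)   # the thinking
print(report)   # the thinking-about-the-thinking
```

A system with only `thinker` computes but cannot answer questions about its own computing - Campbell's ``unconscious thinker.''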

All the standard logical systems, from the two-valued Aristotelian to the n-valued types such as Korzybski and others have eulogized, have one thing in common that makes consciousness impossible within them: they are essentially linear systems. ``Linear'' in the technical sense of being one-dimensional - all points-on-a-line. (Not necessarily a straight line - as circular arguments attest!)

``Gödel's Proof'' that there are true propositions that cannot be proven true by any logical process rests, in essence, on his demonstration that all possible logical statements can be arranged in an ordered, numbered system - that every possible logical statement can be assigned a unique, defining number in the sequence of numbers.

This proof would not apply in a planar system - a system existing not in a line, but in a plane.

Since Dr. Gunther's monograph introducing his work is some two hundred pages long, any description of the general idea given here is completely inadequate - and in logical processes, inadequate is equal to ``invalid.''

In vague, general terms, Gunther has introduced the concept of a hyper-logical system having not n values along one linear array, but a formal system having n values along two orthogonal axes.

The result is a formally codifiable system for describing and relating two separate, simultaneous linear processes - because, in a plane defined by two orthogonal axes, two lines can be described, and their relationship specified.

This makes possible the fulfillment of a conscious logical process, in a fully defined, formal-mathematical sense. In other words - the basic description of the processes necessary for a conscious, logical machine!

Note carefully: this does not give us a rational machine yet - but it does make possible a machine which could correctly answer the question ``Are you operating?''

Again necessarily in vague, general terms, the way Gunther has achieved a meaningful orthogonal axis of analysis is to use the long-recognized true-false axis as one of his two.

The n-valued logics have, in essence, simply divided the ancient true-false axis of Aristotelian logic into a spectrum of n steps. Call the steps truth-probability, and say Truth ranges from probability 1.0000 ... to probability 0.0000 ..., with n logic-values in between. But they're still all on the one axis from True to False.

Gunther has introduced an orthogonal axis. One way of expressing it - remember, the monograph is an extremely dense, tightly reasoned document, and any effort to abstract it to this necessary extent is inherently inaccurate - is to say that the orthogonal axis is relevancy.

In formal logic, there's the hidden assumption that any Truth is absolutely relevant - absolutely necessary. The concept of probability assumes that if a thing is one hundred per cent probable, it is one hundred per cent inevitable.

There's room for doubt. It may be one hundred per cent probable - but entirely irrelevant. A past event, for example, is one hundred per cent probable - i.e., it did in fact happen - but that doesn't mean that it's relevant to a present discussion.

Typically, many a logician has said, ``You must agree with me that ...'' and given a truth-proof of something.

But I can, very properly, assert: ``I don't care whether it's true or not; it doesn't have anything to do with me.''

In order to handle just such real-world problems as that, we have long needed some means of formally codifying both the truth-value - the probability-value - of a statement, and its relevance-value. Means of doing just that should develop from the basic work Dr. Gunther has done. Means of measuring relevancy, so that we can say a statement, in a particular situation, has a probability-of-truth value of 0.9 and a relevance value of 0.5, yielding a ``meaning value'' of 0.45.
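The arithmetic in the example is simply the product of the two axes. A one-line Python sketch - the function name is invented; the combination rule is just the one implied by the editorial's figures:

```python
def meaning_value(truth_probability, relevance):
    """Combine the two orthogonal axes: how probable a statement is,
    and how relevant it is to the situation at hand."""
    return truth_probability * relevance

# The editorial's example: probability 0.9, relevance 0.5.
print(meaning_value(0.9, 0.5))
```

Note that a statement with probability 1.0 but relevance 0.0 - the logician's irrelevant truth - scores a meaning value of zero, which is exactly the point of the ``I don't care whether it's true'' retort.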

The present binary-type computers are, in essence, operating on a pure true-false system, with no probability-spectrum built in. (That is, normally, supplied by the program assigned.)

A conscious-logical system would have the characteristic of being able to carry out logical processes while observing that activity logically, and evaluating the relationship between the two. Theoretically, such a system would be capable of self-repair, being able to observe not only that there was an error, but what kind of error it was.

That is, such a machine could be given an overall instruction in the how-to-take-off-your-coat problem such as ``Do not tear the coat or overstress your own components'' and be able to use that generalized instruction consciously. You can't get that effect with a force-limit order; that problem is typified by the problem of ordinary household wiring systems and fuses. The fuse is, in effect, a force-limit ``program'' written into the system. The force-limit is appropriate to the 20-ampere maximum load of the air-conditioner's compressor motor ... but will make a charred mess of the light-duty blower motor in the air conditioner if that gets into trouble. The fuse has a 20-amperes-maximum limit instruction; that instruction is relevant and appropriate to the main compressor motor; it is irrelevant and inappropriate to the blower motor.
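The fuse argument reduces to a single comparison. A minimal Python sketch - function name and the ampere figures for the blower are illustrative, the 20-ampere fuse is the editorial's - of why one global force-limit cannot be relevant to every component:

```python
def protected_by_fuse(component_safe_amps, fuse_limit_amps=20.0):
    """A fuse interrupts current above its limit; a component survives a
    fault only if the fuse acts before the component's own safe limit
    is exceeded."""
    return fuse_limit_amps <= component_safe_amps

# The same 20 A "program" applied to both motors in the air conditioner:
assert protected_by_fuse(20.0)       # compressor motor: the limit is relevant
assert not protected_by_fuse(2.0)    # light-duty blower: the limit is irrelevant
```

The fuse is logical - it executes its one rule flawlessly - but it cannot ask whether the rule is relevant to the component currently in trouble.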

A conscious machine, capable of applying the test of relevancy to a problem, could handle such problems.

There is, in this new formulation of part of the overall thinking process, another highly interesting hint. Psychology has long and acutely been aware that the conscious mind is by no means all, or even the most important part, of the total ``human mind.'' That there is some mind-structure called by various names - ``the subconscious'' is the most widely used - has been painfully evident to anyone trying to define human behavior and/or thinking. But there have been various plaints, in various tones ranging from prayer to furious blasphemy, as to why God - or the Devil - ever complicated human problems by introducing any such obviously jerry-built, unnecessary contraption.

The unfortunate part of it is that conscious thinking simply isn't able to control the subconscious. ``A man convinced against his will is of the same opinion still'' is an old statement of the problem. You can convince the conscious and logical mind ... but the stubborn, willful, irrational, damnable subconscious remains in control!

Dr. Gunther's formal analysis appears to suggest the reason for this.

To be conscious requires two separate lines of thought operating simultaneously, with a pattern of relationships operating between them.

This means that the conscious mind can be conscious only if an immense computer system, capable of operating in a planar system - carrying on two-lines-and-the-pattern-between simultaneously - is in operation.

And all, actually, to handle one, linear-logic problem, with cross-checking.

That same computer-mechanism, freed of the requirement of maintaining a two-lines-with-cross-relationships system, could handle problems of immensely greater complexity - multi-dimensional problems, instead of mere points-in-a-logical-line problems!

But only by turning off the conscious effect.

In other words, your mind may be capable of operating in two modes: (1) the Conscious Mode, in which two separate lines of logical thinking are operating, with cross-relationships; or (2) as a non-conscious system capable of multi-dimensional thinking - capable of handling problems of a hyper-logical order which can neither be solved by, nor the method of solution represented to, a logical-linear system. Remember that all two-dimensional figures, when projected on a one-dimensional, linear system, are absolutely indistinguishable!

And this would mean that you would have to solve all your more complex problems by relinquishing consciousness - i.e., turning them over to the subconscious - and that many of the solutions derived by the subconscious, planar-type operation of the mental computer could not be interpreted consciously. Only the essential operating instructions could be transmitted!

Thus Newton abstracted his Laws from Kepler's data, and could present those essential operating instructions, and could make logical-linear derivations from them. But he could not explain how he went from Kepler's data to his Laws ... because that was a subconscious, planar, hyper-logical process!

To the planar-thinking subconscious, the conscious mind's inability to distinguish between logically identical but hyper-logically totally dissimilar problems must be annoying. (The shadow of a square on edge is exactly the same as that of a triangle of equal base line, a circle of equal diameter, or a wild doodle of equal extreme excursion. Measuring the shadow-lengths would assure you they were all exactly equal.) The result would readily explain why a man convinced against his will - the subconscious knows damn well that the triangle-shadow is not at all like the circle-shadow - is of the same opinion still.
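The shadow argument can be checked numerically. A small Python sketch - the shapes and coordinates are invented for illustration - projecting three quite different plane figures onto one axis and measuring the extent of each ``shadow'':

```python
# Three dissimilar 2-D figures, each given as a list of (x, y) outline points,
# all spanning the same 4 units along the x-axis.
shapes = {
    "square":   [(0, 0), (4, 0), (4, 4), (0, 4)],
    "triangle": [(0, 0), (4, 0), (2, 3)],
    "doodle":   [(0, 1), (1, 5), (3, -2), (4, 2)],
}

def shadow_length(points):
    """Project the figure onto the x-axis and measure the shadow's extent."""
    xs = [x for x, _ in points]
    return max(xs) - min(xs)

lengths = {name: shadow_length(pts) for name, pts in shapes.items()}
print(lengths)  # every figure casts a shadow of the same length
```

Measured in one dimension the figures are identical; only a second axis reveals that the square is not the triangle is not the doodle - which is the subconscious's complaint about the linear conscious mind.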

A man cannot be convinced by any amount of data. (Data is merely True; you haven't demonstrated that it's also relevant.)

Men have long complained that people act illogically. (Hyper-logical action would be rational, but not logical.)

The big trouble is ... the subconscious system definitely can and does solve problems the conscious cannot, but to do it, unfortunately, the cross-checking system inherent in consciousness is sacrificed.

And because the planar system is incapable of cross-checking, it can be incredibly foolish.

Until someone comes along with a mind built with a third axis-of-analysis - a mind capable of conscious intuition.

And, of course, he won't be able to cross-check his new level of thinking!


Copyright © John W. Campbell 1963