ADD Forums - Attention Deficit Hyperactivity Disorder Support and Information Resources Community  

#1 · Stabile · 07-08-07, 11:47 AM

Evolution, AD/HD, and Logical Structures in the Brain

Tom and Kay’s (i.e., Stabile’s) Perspective


I. Intro

It’s no secret that we consider AD/HD to represent an evolutionary advantage; it’s perhaps less well known that we aren’t simply expressing an opinion born of casual experience.

Explaining what we understand after forty years of study isn’t as easy as posting a recipe for candied yams. We’ve described various bits with some success, but the bigger picture has proven difficult.

It’s not that we go on too long, or that we’ve chosen the wrong audience, although both are true to a certain extent. Nor is it that there are hidden barriers to communicating certain ideas, though that’s also true and closely tied to our original field of study.

The real problems lie in the scope of our models; they tend to jump all over the landscape, and worse, every spot they land is important to a basic grasp of the whole.

It’s not a problem if you already use a metamodel web to tie the whole thing together. But the normal approach, starting from scratch with the expectation of using a flat internal knowledge representation, can make it too much to handle.

Occasionally we’ve found a lucky pairing of a localized subject (one that stays in a fairly narrow subject area) and an audience motivated to hang in through the necessarily involved bits. A good example was the thread addressing NLP, in a sub-forum that apparently no longer exists in its original form.

That thread morphed into a discussion of the relationship between sensory processing and consciousness, focusing in part on how different experiential states are expected to arise, and what they imply about the mind and brain.

Neat stuff, alternate states of consciousness, and arguably a subject that piques the interest of many forum members. It’s also relevant to some of the AD/HD experience.

We went into some deep details of what we understand about perception and neurons, and how that connects to consciousness. Nobody involved seemed to mind the long posts or get too lost.

From our perspective we’re at another such stable stop, but this time it’s primarily one-sided. We set out to answer two related questions, one posed in a statement by E-boy and the other a rhetorical question posted by SB_UK. Our initial response to E-boy was full of our usual claims, and we decided to see if we could explain how we back them up.

The combination of the two worked, for us, and we posted part of it in the Could ADD be evolution? thread.

We reorganized that and the rest, and present it here intact. There are some important arguments in there, and for anyone up to the task it’s worth the effort to work through it.

This isn’t all of our work, or even most of it. It’s just a lucky subject, localized enough that we can work cleanly across a complete framework of supporting concepts without having to stray into entirely different areas.

We’ll be happy to answer any questions, assuming there are any. But since we’re currently engaged in trying to figure out what’s gone whacky with my thyroid and fix it, we’re a bit burnt. The hyperthyroidism doesn’t help at all, either.

So for anyone interested, please don’t fret if we can’t respond immediately; we’ll get to ya’ eventually…

--Tom and Kay
#2 · Stabile · 07-08-07, 11:48 AM
II. The Original Impulse: E-boy and SB_UK’s Posts

From this post in the could ADD be evolution? thread:

Quote (Originally Posted by Stabile):

    Quote (Originally Posted by E-boy):
        I would stop well short of suggesting that [AD/HD] represents a more evolved state than the alternatives. There is simply no evidence to support this…

    Sure there is. We can show that use of a certain form of logical structure to store and process information in the mind is related to AD/HD. Trivial analysis of the structure reveals both an evolutionary advance (in the form itself, but also directly related to simple linear evolution on the physical neural level) and an advance in the capacity to model any arbitrary aspect of reality unambiguously…

    [more]
Some pretty boastful claims. OK if you’re in here, where we know how we got there, but tough to accept blindly if you haven’t been down that road.

It’s not that most of us wouldn’t agree with these concepts; it’s that they’re standing there naked, unsupported by the kind of logical chain of reasoning we would like to see, something that would fare well in academia. At least enough to get a serious conversation going, anyway.

After posting this (and the next), I started thinking once again about the problem of backing up the statements in the post. It wasn’t long before we realized the subject was direct enough that it might be addressed with a set of longish posts.

Not the best solution, but better than the usual problem of not being able to address it at all, and relying on members’ personal effort to get a grasp on the dicey bits. Some work at it, but most never know there’s anything to pursue…


From this post in the could ADD be evolution? thread:

Quote (Originally Posted by Stabile):

    Quote (Originally Posted by SB_UK):
        How would one prove that the mind has a defined structure?

    You can’t, by definition. Whatever you prove has to include the mind in the tool chain, so there’s no way to eliminate its influence in the answer, to prove independently of mind the nature of some aspect of mind.

    [more]
True enough, but not quite on the point. What we’re interested in is the structure, or structures, and once again I found myself thinking about how to show reasonably strong evidence of the kind of structures we’ve been describing since the beginning, actual examples like the ones we’ve used over the years.

Sure enough, we saw we might be able to kill two birds with one stone. It’s the combination of the two that does it, I think. Each question helps hold the other in place.

So we’re interested in supporting our argument that AD/HD is an evolutionary advance, and also in showing as convincingly as possible in limited space that the logical structures we describe exist and function as presented.

Next up: our responses, combined and edited, both the one we originally posted and a second we chose not to post in its original form.
#3 · Stabile · 07-08-07, 11:48 AM
III. We Begin with Neurons…

Quote (Originally Posted by SB_UK):
    How would one prove that the mind has a defined structure?

You can’t, by definition. Whatever you prove has to include the mind in the tool chain, so there’s no way to eliminate its influence in the answer, to prove independently of mind the nature of some aspect of mind.

If we begin with this explicit assumption:

Given: that mind and the physical reality it inhabits actually exist in the way they appear to exist;

we find we can prove the nature of some aspects of mind.

We can’t prove the given, of course, so any absolute conclusions about nature will always be compromised by a pesky asterisk. But at least we shuffle the uncertain bits out of the picture, into their own private room, where they won’t bother us as long as we never lose sight of the fact that they’re there.

So with that assumption (we’re really all here, not merely some grand invention of my own fevered mind), we set out to find structure…

First, we need to look hard enough at what we know of neurons to be certain of the general operating principles. It’s not as big a deal as most make it out to be, and to prove that, we’ll run through it in only a few paragraphs.

Let’s look at the cerebellum. It’s a good place to start, both because of its physical regularity and its direct representation of external physical bits, stuff we can check by simply looking. Without assuming too much we can see a hierarchical control system that models motion (say, the act of picking up a coffee cup) as a series of smaller component models of the relationships between stimulus (twitch that muscle) and response (see and feel that component of the gestalt motion).

Some little twitch of a small bit of the muscle in your forearm contributes to lots of different gestalt motions, but one model of the control response can be shared with all of them. At the top level the input is the impulse to pick up the cup; all the way down the chain to the smallest detail the hierarchical arrangement of neural structures takes care of business in exactly the right way to accomplish that goal.
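
For anyone who likes to see this sort of thing concretely, here’s a minimal sketch of the pattern just described: a gestalt motion built from smaller stimulus-to-response models, with low-level components shared between different motions. It’s written in Python, the sub-model names are invented, and it makes no claim about actual cerebellar wiring.

Code:
# A toy sketch of the hierarchy described above: a gestalt motion built from
# shared, lower-level stimulus-to-response models. Names and numbers are
# invented for illustration; this is not a claim about actual cerebellar wiring.

class Model:
    """A node in the control hierarchy: maps a goal and current state to a response."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []   # lower-level component models

    def respond(self, goal, state):
        # A leaf contributes its own small correction toward the goal; a parent's
        # output is just the coordinated collection of its children's responses.
        if not self.children:
            return {self.name: goal - state.get(self.name, 0.0)}
        out = {}
        for child in self.children:
            out.update(child.respond(goal, state))
        return out

# One low-level twitch model is shared by several gestalt motions.
forearm_twitch = Model("forearm_flexor")
grip = Model("grip", [forearm_twitch, Model("finger_flexors")])
reach = Model("reach", [Model("shoulder"), Model("elbow"), forearm_twitch])
pick_up_cup = Model("pick_up_cup", [reach, grip])

print(pick_up_cup.respond(goal=1.0, state={"elbow": 0.4}))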

So what, then, do the individual arrangements of neurons (what we call neural structures, also known as neural networks) represent? Since it’s a control system, we can write systems of differential equations that describe each element of the gestalt transfer function, and the combination of systems exactly represents the information contained in the neural structures associated with a particular gestalt motion.

So neural structures contain equations of motion, right? It’s more than that, because we don’t have a different set of equations for every motion we might want to make. The system of hierarchically interconnected neural models acts more like a machine that can generate the exact equations needed on the fly, and solve them continuously in real time.

So it’s not math stored in there, but clearly a higher form of information about the relationship between input and output, which (in this case) are collections of firing patterns of the nerves on the input side and complex firing patterns feeding the nerves on the output side, which ultimately cause the exact amount of twitch needed to pick up the cup.

The only consistent interpretation of what’s stored in the neural structures is this: they’re an arbitrary logical model of the relationship between the inputs and outputs.

Why is that the consistent definition? Remember, the top level impulse to pick up the cup of coffee isn’t defined in terms of muscle contractions, but rather as a purely abstract act that only has meaning in the context in which ‘coffee cup’ has meaning. So we know that some neural structures contain elements that have no physical analogue, i.e., they’re purely logical.

Furthermore, we can easily verify that there are elements involved in the task that are abstract patterns fed in from the vision system, and much more of a similar nature. We need to assume neural models are logical representations if we want them to work on every level, especially the higher ones like our conscious context.

The big pattern here is the hierarchical relationship of little models making big models, all the way up to the very topmost levels of abstraction, right smack in the middle of our conscious context. So we can look at any neural function, say, the impulse to think about how we might see structure in the mind, and find the exact same system of representation of logical models, real or abstract or otherwise.
#4 · Stabile · 07-08-07, 11:49 AM
IV. The Model is Always Right

OK, so that took nine paragraphs after editing, a bit more than a few. But what did that get us? Well, looking again at the cerebellum, we can see the complete hierarchical logical model for any gestalt motion must exist previous to the use of the model.

That makes sense: absent the control equations, we can’t close the loop and get that cup to our lips. So we know they’re there before we start, and all we do is trigger them into action.

Similarly, whatever we choose to do in a purely abstract context must derive from models that exist at the time we choose to act (where act may now signify intent and abstract activity without any physical motion).

So how exactly does this happen? To understand how we might choose a particular course of action at the highest levels, we need to understand how neural structures map the transfer function of input to output. The simplest way to describe it is this: presented an arbitrary pattern of impulses, a neural structure automatically chooses the optimum pattern to present as its output.

Optimum simply represents the correct value of the function (if we assume it has a mathematical analogue), so we can see the underlying engine at work. Think of a marble rolling around on a complex surface; now let’s morph the shape of the surface so that it rolls exactly along an arbitrary path of our choosing.

That’s pretty much how training a neural structure builds a logical model, except there are multiple dimensions and the gravity field of each varies independently of the rest, so the marble can take an arbitrarily complex path in an unrecognizable space. The position of the marble for any given set of conditions is the value of the function, and of course it’s automatically an optimum value, because if it wasn’t, the marble would roll a little further and find the optimum.
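
A minimal numerical sketch of the marble picture, purely for illustration: ‘training’ reshapes an error surface so the observed stimulus/response pairs sit at the bottom, and ‘using the model’ just lets the marble roll downhill to the optimum output for a given input. The numbers are invented.

Code:
# Toy version of the marble-on-a-surface idea. "Training" reshapes the surface
# (fitting a single weight so example pairs sit at the bottom of an error bowl);
# evaluation lets the marble roll downhill to the optimum output.

def make_surface(weight):
    def error(x, y):
        # How wrong a candidate output y is for input x, given the trained weight.
        return (y - weight * x) ** 2
    return error

def train(pairs, steps=2000, lr=0.01):
    w = 0.0
    for _ in range(steps):
        for x, target in pairs:
            w -= lr * 2 * (w * x - target) * x   # gradient of (w*x - target)**2
    return w

def roll_marble(error, x, y=0.0, steps=2000, lr=0.01, eps=1e-6):
    # Start the marble anywhere and let it roll downhill to the optimum.
    for _ in range(steps):
        grad = (error(x, y + eps) - error(x, y - eps)) / (2 * eps)
        y -= lr * grad
    return y

pairs = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # the "experience" we train on
surface = make_surface(train(pairs))
print(round(roll_marble(surface, x=4.0), 3))   # rolls to roughly 8.0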

Why go to all this trouble if all we want to know is a bit about the patterns we see in the mind? We need to be certain that whatever we see is already fixed in place before we look at it; it’s OK if it changes as a consequence of us looking at it, and that can be a useful property in its own right. But it has to be stable before we start, so we can know what the original state was.

The implications of this little exploration are wide and deep. For example, it suggests that everyone always acts in the optimum way according to the models that exist at the time the action is taken. Nobody ever intentionally chooses a poor solution to a problem; we simply don’t possess a mechanism to make such a decision.

That doesn’t mean the solution is actually optimal, out in the real world, only that it’s the optimal solution represented internally at that moment. We all constantly observe the consequences of our solutions to a given set of inputs, whether it represents the path the coffee cup is taking towards your mouth or the number of troops we should commit to bring peace to a foreign hotspot.

Or any other real or abstract circumstance. What we do with the information fed back through observation depends on how we interpret the information, in this case literally how much we have to abstract it to get a recognizable and meaningful quantity. If we’re drinking coffee, we won’t even notice the feedback of visual information most of the time, or the corrections we continuously make in the transfer functions, the logical models we use to generate the gestalt motion.

Because neural models are hierarchical and continuously being corrected, we sometimes can make mistakes and fix them without ever getting outside the envelope of acceptable error. The real-time manifestation of this in the physical context is the generation of slaloms, the cyclical curves humans make when actively using their cerebellar control systems.

In one sense, slaloms represent the natural limits of error of the individual transfer functions, over which we exert a kind of meta-control to keep the gestalt result close to our planned path. The hierarchical pattern of organization is plainly evident, and it clearly extends continuously to the highest abstract levels of cerebral function, where the conscious elements that determine the desired path operate in real time.
#5 · Stabile · 07-08-07, 11:49 AM
V. Hierarchy and Computing

That’s still not the type of pattern we want to look at. Let’s look at a useful example, one that demonstrates how we can ‘see’ structure of mind in the behavior of individuals. We’ll use a case in which we can easily show the existence of a pattern of hierarchical logical relationships independent of the mind, and then look at how different minds represent those relationships with logical models.

Computing constitutes a natural logical hierarchy of function that is more or less well documented, fairly common knowledge for anyone with a course or two under his/her belt. The abstract relationships defined in a higher level language are decomposed into sets of lower level operations, which in turn are decomposed into even more atomic operations, and so on until we reach the ultimate level of patterns in the flow of electrons through individual circuit elements.

There are several observations we can make about this. At each level we see a different logical context, so that a data statement or object definition can only be addressed meaningfully within the level on which it is defined. What I call variable NAME only has meaning in one or at most a few levels, and the meaningful nature of the combination of elements in the identifier itself (i.e., N, A, M, and E) changes on every level at which it might appear.

Nevertheless, we can make a strict correlation between the elements we see on any particular level and those on a different level. If everything’s working properly each is a complete representation of the exact same logical model, regardless of the context.

At first it may be difficult to see the relationship between a hex core dump and the block of C++ code it represents, and particularly difficult to imagine how it expresses the intent of the programmer to give the user a particular experience. But the relationship is strictly determined, by definition, and nobody in our experience has ever been unable to grasp this model of logical equivalence on hierarchically related levels.
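
A quick way to watch the same logic live on two adjacent levels is to disassemble a small function. The thread’s example uses C++ and hex core dumps; this sketch uses Python and its bytecode instead, so treat it only as an analogy.

Code:
# The same logical model on two adjacent levels: a small function as source code,
# and the bytecode the interpreter actually executes. They look nothing alike,
# but by definition they express the identical relationship.
import dis

def total_price(unit_price, quantity, tax_rate):
    return unit_price * quantity * (1.0 + tax_rate)

dis.dis(total_price)                 # the lower-level instruction stream
print(total_price(2.50, 4, 0.06))    # the high-level meaning is unchanged (~10.6)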

When such a concept is first presented an individual begins to assemble an internal logical representation using appropriate elements already in place and adding new elements as necessary. (Few new elements should be needed, especially if the person has previous experience with computing.)

The result is the gestalt hierarchical representation of the ideas just presented, enriched by experience associated with the elements related in the structure. In other words, a logical model, which can be invoked to determine the optimum course of action to solve any particular (computing related) problem.

One of the consequences of the differences between levels is that we literally can’t address problems outside the levels on which they’re defined. We can look at the timing relationships of signals displayed on an oscilloscope or logic analyzer and speak meaningfully about the program accessing a particular variable, but we’re pulling a temporarily defined meaning down from levels far above.

Similarly, we can propose actions we would like a program to take that aren’t easily expressed in the high-level language being used. BASIC (which isn’t very high level, really) doesn’t do lots of interesting and necessary things that a useful program might need to do, and so traditionally BASIC programmers invoke obscure strings of PEEK and POKE statements to create a sequence of machine code that expresses the desired function.

At the highest level of abstraction (i.e., the program definition in the programmer’s mind) the logic expressed by the POKE-ed machine code is identical to what it would have been, had BASIC been able to express it. The logical model implied has to be identical on all levels, or there’s an error in translation and the program won’t work as intended.
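
The same move exists in other tool chains today. Here’s a minimal sketch of a modern analogue in Python, dropping out of the high-level language and calling straight into the C library via ctypes; the POSIX-style library lookup is an assumption for illustration, not a recipe.

Code:
# Dropping down a level when the high-level language doesn't express what we need:
# call a C library routine directly instead of staying inside Python. The library
# lookup below assumes a POSIX-style libc and will need adjusting elsewhere.
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Logically this is still just "print a formatted line" -- the same model we could
# have expressed in Python, only written one level further down.
libc.printf(b"answer computed one level down: %d\n", 42)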

This pattern doesn’t stop with machine code; it’s not often we would expect programmers to grab a soldering iron and assemble new circuitry, but it has happened when the underlying hardware doesn’t express the necessary functionality (and so no machine code exists that would work).

So far, so good: we have a nice little logical model building right here, in anyone who’s managed to stick with it. What about the real world of programmers trying to get a job done? Here, we would expect the exact behavior described earlier: input a problem, output the optimum solution. And for this particular model, that includes solutions that dictate expressing logic on whatever level is required.

Surprisingly, this only happens in a completely orthogonal way in about five or ten percent of the cases, depending on your dataset. (Orthogonal in this context means there is no bias towards the level on which logic may be expressed.)

We can take a programmer and present the general hierarchical model, and receive feedback that clearly indicates s/he’s grasped the model. In a way, practical examples are a kind of mini-course, and the feedback elicited to determine clear understanding is a kind of test.

So what happens? Most often, after a brief period, some elements of the logical model become deprecated. They don’t disappear, but they lose a critical part of their intrinsic nature. And the missing pieces are always the same, those that represent the hierarchical relationships.

Pieces missing. Sounds like we’re talking about some sort of structure, doesn’t it?
#6 · Stabile · 07-08-07, 11:50 AM
VI. Naturally Recurring Barriers

What happens in practical examples usually follows a scenario similar to this:

A programmer runs into difficulty of a particular sort, in which the functionality s/he needs to express logically is difficult or impossible to construct using the tools with which s/he is familiar. The trivial example would be a BASIC programmer needing to do something for which there is no BASIC statement or combination of statements.

Invariably the solution requires the use of tools that aren’t normally applied, such as assemblers and machine code debuggers. To many programmers the line between something like C code and assembly represents a pretty solid wall, and C is about as close to assembly as high-level languages get. Use BASIC or C++ or a high level program definition package and the barrier can seem insurmountable.

It doesn’t require jumping to another level, either. Not so long ago it was relatively common to see widespread panic when a shop changed from one language to another, COBOL to C or even C to C++. Entire programming groups were fired and new programmers hired to replace them, simply because the jump from one language to another was too difficult to manage in a reasonable time.

This phenomenon quickly became known as a paradigm shift. The term wasn’t invented by the computing community, but the application is entirely appropriate, and today paradigm is often mistaken for a computing term.

In our practical example we normally would find the programmer working at the problem for a bit and then asking around for help. If I’m the guy being asked, I’m likely to run through the hierarchical model for fifteen minutes or so, check that it sunk in, and lend them my copy of the assembler manual. I’d probably offer some suggestions for the best ways to approach the problem, and then let ‘em go.

Note that normally, the solution is trivial when it’s approached on the appropriate level, maybe one or two lines of assembly code at the most. That’s exactly why we want to shift levels; a quick in and out, and you’re home free.

Nine times out of ten, I have to go retrieve my manual after a month or two, and when I ask, the problem never got solved directly. Lest you take this as strictly personal experience, I should point out that the fear and loathing just described, lost jobs and such, have been a common phenomenon in the past.

Furthermore, there are numerous examples of the kings and queens of the computing world discussing and dissecting the exact same problem. I have a copy of an interview of Donald Knuth in which he flatly states that twenty percent or less of programmers will ever really understand the task of programming. He went further to describe that twenty percent as equivalent to a ‘priest class’, pretty high-minded stuff for a generally unassuming guy.

Clearly there is some very real barrier to performing a relatively simple task in the exact same logical way in a different context. It’s not the unfamiliar tools that cause the problem; we can find numerous examples of tool chains that present almost identical interfaces regardless of the level being addressed.

Since we know that the optimal solution to every problem is already stored away in internal logical models, we can safely assume that these barriers are encoded within those models. Since we checked that the model originally imparted correctly represented the hierarchy, we can also be certain that these models are subject to a kind of spontaneous transformation.

This sort of change in state is a special case, not the ordinary mechanism by which our models are corrected and maintained. That process produces nice, smooth slaloms, while this one can sometimes be disruptive.

We know more, too: the information that allows me (and the rest of Knuth’s twenty percent) to shift easily from one level to the next must not be represented in the models that seem to present barriers. We know what that is, from simple self-observation: the information defining the relationships between different levels, what we need to know to translate between one interpretation of meaning and a different one.

Yet in every case, we can show that the information about the relationships between levels originally presented is still there, and still significant to the programmer. What could cause stored information to lose its utility, to remain in place but fail to exercise the same influence on the choice of an optimal solution?
#7 · Stabile · 07-08-07, 11:51 AM
VII. Relative Metalevel and Isolating Ambiguity

To understand this problem we need to examine the way we represent information logically. There are many kinds of logical relationships, most of which are represented by several different terms, depending on the context in which they’re addressed.

For our internal logical models (which don’t yet enjoy a standard formalism), we can simply imagine the long list of statements that uniquely describe whatever we’re modeling: a cat is a type of animal, with four legs, a tail, some fur (usually) which is this furry stuff growing right out of the skin, and so on, ad infinitum.

Were it not for the shorthand of shared hierarchically related subelements it would be hard work modeling a cat. Elements such as fur and skin clearly occupy closely related but different levels within the hierarchy; elements like leg and corpuscle clearly occupy very different levels. Just as clearly some elements of leg occupy the same level as corpuscle.

There is a name for the logical property that describes the difference in levels that two elements occupy, relative metalevel. Relative metalevel tells us where in a logical hierarchy some object sits, compared to some other part of the hierarchy. The earliest formal mention of this property (although not directly by name) is by mathematician David Hilbert, near the end of the Nineteenth century.
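
To make the term concrete, here’s a toy sketch of relative metalevel for the cat example above; the little hierarchy and its parent links are invented purely for illustration.

Code:
# Each element names the more general model it is a detail of; relative metalevel
# is just the difference in depth between two elements. The hierarchy is invented.
parent = {
    "cat": None,
    "leg": "cat", "tail": "cat", "fur": "cat",
    "skin": "fur",          # the furry stuff grows right out of the skin
    "muscle": "leg",
    "corpuscle": "muscle",
}

def depth(element):
    d = 0
    while parent[element] is not None:
        element = parent[element]
        d += 1
    return d

def relative_metalevel(a, b):
    # Positive: a sits below b (more detailed); negative: a sits above b.
    return depth(a) - depth(b)

print(relative_metalevel("corpuscle", "leg"))   # 2: two levels more detailed
print(relative_metalevel("skin", "fur"))        # 1: closely related, adjacent levels
print(relative_metalevel("fur", "corpuscle"))   # -2: fur sits well above corpuscle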

Note that relative metalevel is a perfect candidate for the kind of information missing in the models of programmers that see a barrier to lower levels. The elements that are being compared don’t go away if we lose the information about what metalevel each occupies, nor is the information that they aren’t on the same level necessarily lost.

But the elements themselves become scrambled in a peculiar way once we lose track of the meaning of relative metalevel. Any two elements may seem to occupy the same level, which may cause some confusion about the meaning of each.

We already know that there are significant differences in the details of how elements contribute to defining logic on a particular level; we also know that the logic on a particular level must be exactly equivalent to that on any other level. If we mistakenly take elements from different levels as being on the same level, they can behave in completely contradictory ways.

Returning to what we know about neural structures, we can recognize this as ambiguity, a problem for which there must be a standard solution. No model can be considered complete (converged on a stable state) until ambiguous elements are dealt with; ambiguity simply means that we don’t always get the same optimum when we access the model. This situation immediately triggers behavior intended to correct the problem.

Correction can take many forms, including the degenerate case in which the ambiguous representation persists regardless of the effort to resolve it. (In a famous experiment, goldfish were made to exhibit the symptoms of depression by presenting irresolvable ambiguity in the way they obtained food.)

Most often ambiguity is resolved by one of two methods: additional information is incorporated in the input vector, sufficient to distinguish between the ambiguous states, or the model is decomposed into two or more unambiguous cases, each containing one of the ambiguous states.

There are numerous examples of both of these solutions, some of which occur on the fly in ordinary everyday circumstances. You glance at a passing bus and notice a bicycle wheel protruding from the front bumper; looking more carefully at the scene, you realize that SEPTA has begun to encourage bicycle riders to use public transportation by hanging dangerous looking carriers off the front of their buses.

By the time you’ve recognized the carrier for what it is, you’ve split your model of a city bus into two sub-models, one in which it’s OK for a bicycle wheel to protrude from the front and another in which it’s grounds for calling 911. Clearly, the two responses are contradictory, and any model that could return both would be considered seriously out of convergence.

Just as clearly, we all deal with such structural problems all the time, with no more thought than we give to where the electricity in the outlet comes from. The structural implications are significant, though, and particularly so in the case of our hapless programmers.

We know what changes in their models between the time they’re presented and the time they try to apply them, and we can make a pretty good guess about what happened once the changes occurred. Take out the key information about relative metalevel, leaving only the fact that such differences exist; the whole model collapses into one or two levels, rather than a finely defined hierarchy.

That immediately generates ambiguity, which is dealt with by simply splitting the model into multiple sub-models, each element assigned to an appropriate, unique model representing its original level. The problem with this is the relationships that define translation from one context to another are now completely lost.

There literally is no way to get there from here, at least not if we’re looking for the answer in these modified models. The optimum solution of jumping to another level no longer exists, so they don’t even try.
#8 · Stabile · 07-08-07, 11:52 AM
VIII. Are ADDers Knuth’s Priest Class?

This case is interesting for a number of reasons, including (as previously mentioned) the easily differentiated pattern of the organization of the information contained in the models, and the fact there are two obvious classes of model. There are other clues to the actual internal structure, such as the way users tend to access the information stored in their model.

Those whose models include relative metalevel often need to stop for a moment and reconstruct the basics of expression on a particular level. This reflects the fact that information is encoded in part in the relationships themselves. If we need to reconstruct it in the more common form we simply traverse the relationships, but that takes a small amount of time, and people often aren’t accustomed to this delay.

In contrast those whose models don’t include information related to relative metalevel can usually pound right away at the keyboard on automatic, producing the standard headers and beginnings of program structure without a thought until they get to the bits that actually define unique function.

Simply observing how a programmer starts coding can be all the clue you need to know if s/he’s in Knuth’s priest class. Does s/he hesitate for a moment before s/he starts to type? That’s who you want on your team.

There is more to our computing related example that makes it useful outside the bounds of the profession. Much has been said about the social and cultural aspects of the rise of computing over the last fifty years or so, usually addressing similar issues.

For example, Grace Hopper was passionate about high level languages, but she clearly never expected them to create insurmountable barriers to continued employment in one’s chosen craft. Just as clearly the ‘priest class’ problem was a serious concern to Knuth, one which he viewed as a barrier to his own efforts to educate programmers about the methods he applied to write correct programs.

His solution was to avoid the problem, I suppose, while Grace Hopper took the opposite tack, proselytizing wherever and whenever she could right up until her death.

Her message was particularly significant to the present discussion: before a speech, she was fond of handing out foot-long pieces of wire, saying, “Here, have a nanosecond.” She would speak wearing a much longer coil of wire around her neck, referring to it as “My microsecond”.

Every speech I’ve seen of hers was geared towards the same message I had for the guys that asked for help with their programs: Here’s the hierarchy, and here’s what populates it; get that right, and the rest is easy.

I’ve also heard her lament her failure to reach people, although she remained enthusiastic and upbeat about it right up to the end. For us, the dividing line seems to occupy a much different space, that in which we differentiate between normals and ADDers.

The problem faced by programmers trying to make things work on an inappropriate level is identical to that faced by normals trying to understand how we reached a particular conclusion. Looking at the same data we process leads them either to no conclusion or an obviously ambiguous one, a seemingly dangerous situation for most: who can trust a madman?

The connection to ambiguity in how we model various aspects of our reality can’t be dismissed. This is as significant a clue to structure as any, and one that explains much about the way our interactions with normals proceed.

Or don’t, as the case may be…
#9 · Stabile · 07-08-07, 11:53 AM
IX. Structure and Relative Metalevel

OK, so for anyone who’s followed along to this point, the question might naturally arise:

How does any of that really define structure, and what does it have to do with evolution?

This is really where we started, back at E-boy’s assertion that there’s no evidence that AD/HD is an evolutionary advance. Somewhere down the line we have to do a better job of making the AD/HD connection, too, but for now we’ll concentrate on structure and how it relates to evolution.

When we consistently store information about relative metalevel (and leave it intact) two features of the organization of that information eventually emerge. One is the pattern of interconnections, and the other is what the interconnections actually connect.

Simply saying that two elements of a logical model exist on different metalevels doesn’t tell us much about the metalevels themselves. Specifying metalevel isn’t the same as defining a relationship between two elements. It should be obvious that elements occupying the same metalevel are not necessarily the same; however they’re related, it’s probably not associated with metalevel.

So what does specifying the metalevel get us? It turns out that the information encoded is about the kind of relationships that exist between any two elements occupying adjacent metalevels. That is, the character of all such relationships will be similar, even though the relationships (and the things they relate) may be very different.

An example is the relationship between the models tool and task. It’s easy to see that objects of type tool are all similar in a particular way, while the tasks for which they’re appropriate are also similar, perhaps in a different way. It’s also obvious that there are many different sets of relationships between tools and tasks, of which not all elements are exclusive to a particular task.

Tool and task are thus similar in a certain way, the sense in which they both define collections of objects. But that relationship is weak compared to the sense that tools are a group of models that are defined in part by an implied general model of the ways in which they’re used, while tasks are a collection of models in which some elements are specific examples of the general model of a particular tool.

For example, for the tool hammer we see the general model of an object with a heavy head mounted on a handle that enables us to swing the head to strike a stationary object. The task framing carpentry includes a specific example of the general model of a hammer’s use, one which even dictates a range for the form and size of the head and handle.

If we look at the task framing carpentry a bit closer, we can see that it includes general models of various task elements, such as constructing and raising a wall or laying a row of joists to define a floor. Any particular example of this task will be a subtly different model in which these tasks are precisely defined in such a way that a specific structure will result.

We can say that not all houses are the same, that they differ in some ways. We can say all framing carpentry is the same to a greater degree than the things that result, and the tools used are the same in yet a different degree. These differences in the degree to which examples of a particular model vary in detail are related to the metalevel they occupy.

In the general case, we can take any particular model (say, for example, the task framing carpentry) and see that some of its elements have relationships to more general models on the next higher metalevel (like the model hammer), and that each such element is a specific example of the more general model.

We can also see that some elements of the model have relationships to models on the next lower metalevel which are specific examples of the model’s more general definitions that are generated by adding details about the dimensions and shape of the structure to be framed.

A general pattern emerges from all this, a triplet of logical properties related to relative metalevel that has very special properties of its own:

For a particular model occupying a particular metalevel, we recognize at least one relationship to a more specific model on the metalevel below, defined by adding details.

We also recognize at least one relationship to a more general model in the metalevel above, for which one or more elements of our example model are defined by details specific to the model.

Thus, for any given model we have

detail <--> model <--> metamodel

a logical triplet that we may slide about at will, from one level to the next, always centered on whatever model we’re currently considering.

There are some obvious features of this pattern. As we slide it down or up each element of the triplet assumes the role of model or metamodel or detail, depending on which way we slide and how far. Moving up in the hierarchy simply denotes that our models are more general, and moving down simply denotes increased specificity.

Since any model may serve as a general template for lower models we consider all models to be metamodels, regardless of what level they occupy in relationship to the model we’re currently considering.

Obviously, most models will have multiple relationships to metamodels on the next higher level and details on the next lower level. The pattern of relationships connecting one model to others on higher and lower levels naturally generates a web of metarelationships in which individual (meta)models serve as the nodes. This metamodel web is the general form of a logical structure in which we can store information, precisely the same information that we previously discussed, i.e., our models of reality.
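
For the structurally minded, here’s a tiny sketch of that web, with the sliding triplet centered on whatever model we pick; the nodes and links are invented for illustration.

Code:
# A toy metamodel web: every node is a model; its entry lists the more general
# metamodels one level up. The detail <--> model <--> metamodel triplet is just a
# node together with one step down and one step up, and it slides to center on
# any node we choose. All nodes and links are invented for illustration.
metamodels = {
    "hammer use (general)": [],
    "hammer": ["hammer use (general)"],
    "framing carpentry": ["hammer"],              # includes a specific use of "hammer"
    "frame this house (plan)": ["framing carpentry"],
}

def details_of(model):
    # The nodes one metalevel below: those that list `model` among their metamodels.
    return [m for m, above in metamodels.items() if model in above]

def triplet(model):
    # The sliding triplet centered on `model`: (details, model, metamodels).
    return details_of(model), model, metamodels[model]

print(triplet("framing carpentry"))
# (['frame this house (plan)'], 'framing carpentry', ['hammer'])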
#10 · Stabile · 07-08-07, 11:53 AM
X. Flat Models Vs. The Web

Is there a different way to store information, one that might produce a different form? Let’s return for a moment to our original example of a programmer’s model of the computing environment in which s/he works. Recall that the deprecated models that present barriers to working on a different level in the hierarchy have been stripped of certain information about the relationships representing relative metalevel.

We said that information was critical to preventing ambiguity, and now looking at the structure of the metamodel web it’s easy to see why. Any element of a model is just another model, representing a node in our web. No node occupying a certain level can interfere with a node occupying a different level; they don’t exist in the same space.

In terms of our original description of how we use neural structures, we can see that finding an optimum solution has evolved. It’s now equivalent to rolling the marble around in the relevant local part of the web and finding an optimum path, rather than a single static optimum.

We can start up a few levels if we want to be sure we don’t miss the optimal path. Such a path simultaneously represents several optimums in related sub-models, and as such encodes the optimum approach to a problem as well as the optimum solution.

If we take away the information that differentiates the web, we can no longer tell if we’re in this model or this one apparently lying right over top of it. The marble doesn’t know which way to roll, which part of the structure its path should follow, and the situation decays back to the original static form of finding a single optimum. The problem with this is there’s no way to tell which optimum we found.
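
A tiny sketch of that collapse, with invented entries: strip the level information out and elements that used to live on different levels fall into one flat pool, so there’s no telling which meaning you’ve landed on.

Code:
# Strip the metalevel information out of a toy web and the levels collapse: two
# entries that used to be kept apart by level now collide in one flat pool.
# The entries are invented for illustration.
web = {
    ("hammer", "tool level"): "general model of swinging a weighted head",
    ("hammer", "task level"): "the specific hammer use framing carpentry expects",
}

flat = {}
for (name, level), meaning in web.items():
    flat[name] = meaning   # later entries silently overwrite earlier ones

print(len(web), "distinct meanings in the web")
print(len(flat), "left after flattening -- no way to tell which optimum we found")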

It’s easy to understand how a web-like arrangement could help prevent ambiguity in our logical models, but what other logical structural form could stored information take? The key to understanding that is noting that the deprecated metarelationships take on the same form as any other element in the model.

In a web, metarelationships make it easy to store lots of information without bumping into previously stored stuff; anytime we need more room, we automatically create additional dimensions. Deprecate the metarelationships, and instead of helping define extra dimensionality for the model the elements representing metarelationships just lie there in a heap, along with the rest.

The primary feature isn’t any detail of the way elements are arranged (some structure can still be found in the relationships), but in how they’re not arranged: generally, they’re flat, at least by comparison to the multidimensional structure of the web.

Can we relate this to the way real neurons encode models? Certainly. Neurons are unbiased about what they hold, so every element of the logical arrangements we’ve described can easily be represented in a neural structure. What determines the logical arrangement of our neural models is the information presented; details about what logical form that information takes are not necessarily significant.

It is possible that some types of logical relationships and the forms they imply might be more natural, in the sense they could be expressed more efficiently in neural structures. But as we said, it’s not necessary, and we don’t need to consider that here.
#11 · Stabile · 07-08-07, 11:54 AM
XI. Evolution and Neurons

If neural structures represent models of what we present them, how exactly does information about relative metalevel get in there? To understand that we need to look at how perception of logical properties arises, and for that we need to know a bit more about the way neural structures function in a real, dynamic context such as a working brain.

Let’s look again at our model of a cat. The model obviously serves a purpose, but what is it, exactly? The general answer is this dual: neural structures serve to model our experiential reality, and they also serve to mediate our interactions with that reality.

So our model of a cat serves as an element in our gestalt model of the world we inhabit and also participates in some way with the process of interacting with that world. Neglecting for a moment the question of how it was created, let’s follow the process of perception that makes use of our cat model.

Clearly, this should be a trivial exercise; it’s obvious we must apply our cat model to the process of recognizing a cat whenever we’re presented with some sensory input actually derived from a real cat, out in the real world.

A stream of sensory data arrives at the higher centers of the brain after considerable processing. In somewhat simplified form, the neural impulses equating to cat are delivered to the general area in which our gestalt models are stored, compared to various models, and finally matched up with the cat model: we recognize the cat.
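
A cartoon of that recognition step, with invented features and numbers: the incoming, already heavily processed pattern is compared against stored models and the best match wins.

Code:
# Cartoon of recognition: compare a processed sensory pattern against stored
# models and return the best match. Features and numbers are invented.
stored_models = {
    "cat":  {"fur": 1.0, "four_legs": 1.0, "tail": 0.9, "barks": 0.0},
    "dog":  {"fur": 1.0, "four_legs": 1.0, "tail": 0.8, "barks": 1.0},
    "bird": {"fur": 0.0, "four_legs": 0.0, "tail": 0.6, "barks": 0.0},
}

def match_score(model, pattern):
    # Higher when the pattern agrees with the model, feature by feature.
    return -sum((value - pattern.get(feature, 0.0)) ** 2
                for feature, value in model.items())

def recognize(pattern):
    return max(stored_models, key=lambda name: match_score(stored_models[name], pattern))

print(recognize({"fur": 1.0, "four_legs": 1.0, "tail": 1.0, "barks": 0.0}))   # cat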

When we decide to interact with the cat, we again make use of our cat model to play a ‘what if’ game, imagining the outcome and adjusting the relevant parameters until it’s optimal. This is accomplished using neural structures that model our intent, essentially a logical representation of ‘optimal’.

The entire process occurs in one smooth operation, automatically centering on a temporarily defined logical model of the desired interaction. Our awareness of this process is usually minimal, only the impulse to act followed by the sense of what the appropriate action should be.

This pattern of the use of logical models to represent any function (including modeling reality) can be found at all levels in the brain. It’s the general trick that neural structures do; looked at one way, it’s modeling, and in another, it’s mediating the application of models. (This is why it was described as a ‘dual’ in the definition above.)

Since the general utility of neural structures is fundamentally the same at all levels in the brain, we should be able to look at what functions arise at different levels and begin to understand how the modern human brain evolved. The simple version is this: neural structures arose as the best solution to the problem of coordinating the activity of organisms that are comprised of more than a few cells.

Absent some coordinating mechanism what we would have is a kind of symbiosis, rather than a discrete multi-cellular organism. Neurons (just another type of specialized cell) serve to provide that coordination, and since the primary purpose of the coordinated activity is interacting with the environment in various ways, modeling reality was a feature right from the start.

The rise of ever more complex species is paralleled by the rise of ever more complex neural structures, each new feature either expanding the current complement or (occasionally) representing a new level of functionality. Despite the development of more complex organizations, the fundamental utility of neural structures remains the same.

All of this was driven by the simple rules of selection; every step represented an advantage, right up through the development of complex social entities, or groups. Groups require some sort of coordinating mechanism, just as multi-cellular creatures do. But since a group isn’t actually a discrete organism, the solutions are necessarily different.

Things went smoothly for a while, increasingly abstract forms of social behavior evolving in increasingly complex species, paralleled by the rise of the increasingly complex neural structures needed to support them. But at some point this development ran smack into a physics barrier: the bandwidth of the oral – aural communications channel wasn’t large enough to carry the amount of information needed for the next stage of development.

This bears repeating, since we’re talking about the development that led to our own species. What it means is this: the amount of information you can communicate to another person by speaking isn’t sufficient to support our complex social structures.

When the time came to take the next step in our development we had to find a different way to get the job done. What we came up with didn’t require any dramatic new apparatus, only an incremental improvement and expansion of what we already had, a classic example of reuse.

The effect itself was dramatic, creating perhaps the most significant difference yet when we’re compared with previous species. But this time, the difference was almost entirely on the inside. H. sapiens arose through a literal descent into an internal universe, a reality in which we can know ourselves, recognize others, communicate freely and act independently for the first time to further expand our horizons.

We’re no longer at the mercy of cold statistical mechanisms. We can examine our own context, develop tools with which it may be understood, and act directly to increase our control of it. But we pay a price for this freedom: we’re each of us completely alone, locked in an internal model of what lies outside by the need for that understanding.

This grand illusion that is our universe must be shared, or we would have accomplished nothing. That is the trick, the way we get around the limitations of the physical Universe: when we communicate with others, it’s primarily to ensure that our internal models remain synchronized, identical to an amazing degree of precision.

Almost all of what we experience as communication with others takes place internally, within our private copies of the common model of reality. Since we don’t have to actually transfer information to communicate, physics no longer imposes limits on how much we can say, or how detailed and rich it can be.
#12 · Stabile · 07-08-07, 11:55 AM
XII. Paying Attention

Presently, the primary limit on communication is the scope of the internal model itself. Judging by what past advances accomplished, it seems reasonable to assume that the next advance might somehow help us overcome that limit.

The most recently developed parts of our brains represent the neural structures that support our internal reality and the processes by which it’s maintained and put to use. Our experience of conscious being is an artifact of these processes. We ride along with these neural structures as they interpret input and search for optimum solutions for two problems: what should we do next and (subsequently) how well did we do it?

In order to assess how such experience arises, we need to understand one more bit about the operation of neurons and the structures they support. Neural models function in a somewhat counterintuitive way; from what we’ve described so far, you might conclude they should all be firing at the same time, and that is indeed the case.

Neural models aren’t like a book with clever drawings which we can use to identify the bird we saw in the garden this morning. A neural model always returns some value, the optimum, which despite being as concrete as any other may mean simply “I don’t have a clue.”

If we’re going to have more than one such structure to call on we need a mechanism that can control which optimum wins out, some way of selecting the most relevant optimum. Anyone who’s followed to this point can probably describe such a mechanism in a general way, and anything that can be so defined can be instantiated as a logical model.

Which means we can generate a neural structure that provides exactly the functionality of that general description. If such mechanisms are possible and useful it’s a sure bet we’ve got ‘em, and of course we do. There are several ways such functions can arise, and what they do is allow us to focus our attention on the output of a particular neural model.
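
Here’s a toy version of such a mechanism, with invented models and relevance scores: every structure always returns some answer, and attention simply selects which answer gets the focus.

Code:
# Toy attention mechanism: every model fires and returns (answer, relevance);
# attention just selects the most relevant optimum. Everything here is invented.
def balance_model(context):
    return ("shift weight slightly left", 0.9 if context == "slipping" else 0.2)

def coffee_model(context):
    return ("raise the cup a little more", 0.7 if context == "drinking coffee" else 0.1)

def small_talk_model(context):
    return ("I don't have a clue", 0.05)   # still a concrete, returned optimum

models = [balance_model, coffee_model, small_talk_model]

def attend(context):
    outputs = [m(context) for m in models]            # everything fires at once
    answer, _ = max(outputs, key=lambda pair: pair[1])
    return answer

print(attend("drinking coffee"))   # raise the cup a little more
print(attend("slipping"))          # shift weight slightly left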

‘Attention’ as used here isn’t necessarily what we experience as ‘paying attention’. The need to focus on one output of many active structures exists on all levels beyond the simplest case of a single model. Most of these operate below the level of conscious experience.

For example, some structures regulate our breathing and heart rate, along with other similar regulatory tasks. We constantly assess our state of balance and compensate for any disturbance, only occasionally taking notice of the result. And there are numerous complex abstract functions that are monitored and controlled without being noticed, despite the fact they operate on the same levels in which we experience conscious being.

A prime example is the process by which we maintain the synchrony of our common models of reality. The operation of such processes must necessarily be masked from casual view, an adaptation that partially compensates for the effects of apparent free will and self-determinism.

All brains depend on neurons, and in any brain (beyond the simplest examples) we should expect to find many such attention mechanisms at work. Being intrinsic to the correct function of neural structures, they arise in concert with the evolution of the structures themselves.

The expected picture is one of new levels of function arising in stages. At any particular stage of development we should see the minimal structure necessary to support the desired function.

This implies the first users of the new internal reality models are certain to have developed an appropriately complex attention mechanism to control the flow of focus through the new structures in response to changes in external stimuli. Since this is our conscious level there’s an equivalent experience: the flow of our awareness of being in an understandable context as time passes.

This attention mechanism is directly associated with our experience of ‘paying attention’. But it is only one of many in our brains, some of which must co-exist within the same logical context, which raises an interesting question. Is there any reason we couldn’t have two such mechanisms operating at the same time in our conscious context?

On the level at which neurons operate the answer is no; there are no restrictions on the number of attention mechanisms inherent to the nature of neurons and the way they function. As a practical matter the problem is more complex, a reflection of the high level of abstraction on which meaningful activity occurs when we’re talking about neural function supporting consciousness.
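
In the terms of the earlier sketch, nothing stops two selectors from running over the same pool of outputs in the same instant, each with its own sense of what’s relevant; the complications only show up at higher levels of abstraction. A hypothetical illustration (the weights and model names are invented):

[CODE]
# Hypothetical illustration: two attention mechanisms share one set of model
# outputs in the same "moment", each weighting relevance differently.
# Nothing at this level forbids it; the constraints appear higher up.

outputs = {
    "identify": ("robin", 0.9),
    "threat":   ("no threat", 0.2),
    "balance":  ("lean left", 0.6),
}

def focus(outputs, weights):
    """One attention mechanism: pick the output it finds most relevant."""
    return max(outputs, key=lambda name: outputs[name][1] * weights.get(name, 1.0))

# Two mechanisms, same instant, different emphases.
thread_a = focus(outputs, weights={})                               # picks "identify"
thread_b = focus(outputs, weights={"threat": 3.0, "balance": 2.0})  # picks "balance"
print(thread_a, thread_b)
[/CODE]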

Potential prohibitions of multiple conscious attention mechanisms are relatively easy to understand, mainly rooted in the sense that we are individuals, not collections of individuals. But the rules of selection are clear: if a particular functionality is possible, has utility, and doesn’t conflict with existing function, it’s sooner or later certain to arise.

  #13
Old 07-08-07, 11:56 AM
Stabile
XIII. Evolving Consciousness

Let’s return for a moment to the role of neurons and neural structures, the idea that they provide the best solution to the problem of coordinating the activity of multi-cellular organisms. This principle still applies to function at the highest levels; all we need do is apply the appropriate amount of abstraction to what we consider to be coordinated activity.

It should be obvious that any function serving to improve such coordination would tend to be selected for. We experience these statistical processes at work as a kind of abstract pressure, in this case the perception of an unexplained impulse to seek such improvement.

The picture we get of the evolution of our neural structures is one in which we expect to see the simultaneous rise of the conscious context and the complex attention mechanism necessary for its function. Following at a later time we might expect the rise of additional attention mechanisms, provided they somehow enhance the operation of the neural structures to better coordinate the activity of the gestalt organism.

On this level multiple attention mechanisms are equivalent to multiple simultaneous threads of conscious awareness. This turns out to be exactly what we need in order to observe (and store a memory of) relative metalevel. Since we already know that including relative metalevel can enhance our logical models by resolving ambiguity, it’s clear that there is significant potential to enhance their ability to coordinate activity.

There are other advantages, including the possibility of much more accurate and detailed models, and also the potential to coordinate all of our models into one consistent gestalt model that is much more resilient than any collection of individual models.

Any observation of such properties serves as a signal that we’ve arrived at this stage of our development, and (of course) there are myriad examples. We can observe not only behavior that demonstrates the application of relative metalevel in an individual’s logical models, but also that other individuals generate different models of the same phenomena.

That implies we can see both the original single-threaded form of the conscious context and the multi-threaded form we expect might evolve from the original form. This in turn indicates we’re observing the evolution of the new logical form in progress, raising the questions of how far along we are and what the transition might be like.

There are good indications that the process we’re observing is a classical example of an emergent system, exactly what we associate with the major transitions of functionality during a speciation event. If we’re seeing an emergent event in progress, the actual underlying changes are already universal. What emerges in an emergent system is a property already fully developed, the new displacing the old. In this case we’re seeing one way of utilizing neural structures to model reality replacing a simpler way.

But there’s a key part of this left unaddressed, the question of the connection between multiple simultaneous threads of awareness and the correct application of the logical property relative metalevel. With the groundwork set, we’re now ready to look at that in more detail.

  #14
Old 07-08-07, 11:57 AM
Stabile
XIV. Building a Web: the Role of Memory

If we look at the experience of developing a new internal model of some aspect of our external reality, we’ll see a process by which we identify and classify abstract information in our stream of sensory input.

If some element being represented is previously unknown, we engage in a process of identifying the various sub-elements and their relationships, some or all of which are likely to be familiar. (Truly new elements are relatively rare; much more common are novel arrangements of familiar attributes.)

What we store as a result of this experience is a memory of the experience itself. That memory in turn serves as the new logical model, but there’s a trick involved. To understand that we need to take a closer look at memories, both how they’re stored and how the information they represent is accessed.

If we need to remember nothing more than simple patterns in the sensory input stream, we can get away with directly encoding information in the logical model implied by the associated neural structure. Indeed, such models have great utility, and are present in all organisms that use neurons. Such models may be simple enough to be encoded in a DNA sequence, and thus be passed directly during reproduction.

The true power of the more complex logical models on which we depend is their ability to represent arbitrary abstraction. By the time we arrive at the level on which human consciousness operates, every significant (i.e., meaningful) model is purely abstract, a collection of abstract elements and relationships that only have meaning in the context of our experience of being.

As such, memories are similar to a set of instructions and pointers to the objects to which the instructions apply. The current take on memory in mainstream psychology is this: every act of remembering is an act of reconstruction.

So when we store information, it’s with the implicit assumption that we will have to reconstruct the experience that produced it in order to access it. This implies a subtle point: anything we want to remember has to be a part of the original experience.
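
A loose way to picture this (our illustration only; nothing here is meant as a model of actual neural encoding): a memory stored this way is less like a recording and more like a recipe, pointers to familiar elements plus the relationships we noticed, re-executed every time we recall.

[CODE]
# Loose illustration: a memory as pointers plus instructions, reconstructed
# on every recall rather than played back like a recording. Only what was
# part of the original experience ends up in the stored trace.

# Familiar elements already present in the internal model.
known_elements = {
    "bird":   {"shape": "small", "behavior": "hops"},
    "garden": {"place": "backyard", "time": "morning"},
}

# What gets stored: references to familiar elements plus the relationships
# noticed during the original experience, not the raw sensory stream.
memory_trace = {
    "pointers": ["bird", "garden"],
    "relations": [("bird", "seen in", "garden")],
}

def recall(trace, elements):
    """Remembering = rebuilding the experience from the stored recipe."""
    rebuilt = {name: elements[name] for name in trace["pointers"]}
    return {"scene": rebuilt, "relations": trace["relations"]}

print(recall(memory_trace, known_elements))
[/CODE]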

A more general experiential form of this principle is that experience of evidence of the past does not constitute experience of the past. In other words, it’s impossible to generate an original experience by manipulating what we recreate in accessing the memory of an experience, or any combination of such recreated experiences.

Such an exercise does generate an experience, of course, of which we may store a memory. But it’s a memory of the experience of remembering, not a memory of the experience of some real element or elements of our reality. This distinction turns out to be key to the ability to observe and apply the logical property relative metalevel.

Relative metalevel is a relationship between two elements of a logical model, in a sense a comparison of the ‘type’ of each element. As a property of Nature, we can speak of metalevel in an absolute sense, but there is no direct way to perceive exactly which metalevel a particular model or element of a model occupies.

The experience of observing relative metalevel is therefore the experience of observing two things simultaneously. It does no good to observe them individually and then compare the result. What that generates is a memory of the comparison of memories of the experiences, which, as just noted, cannot contain the information needed: absolute metalevel.

Relative metalevel is only directly observable by making the comparison at the time of the original experience. Thus the necessity of simultaneity and, just as obviously, of at least two independent threads of conscious awareness.

This is a sufficient driver for the development of more than one thread of conscious awareness, i.e., multiple simultaneous attention mechanisms at work in the conscious context. If we can directly observe and compare two objects simultaneously, we can store information that enables us to form models with improved resistance to ambiguity.
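
A toy way to see why the comparison has to be simultaneous (purely illustrative; ‘metalevel’ is reduced here to a hidden tag on each observation): while both objects are in view at once the relative difference can be captured, but the reconstructions we get back from memory no longer carry the tag, so comparing them afterward recovers nothing.

[CODE]
# Toy illustration: absolute metalevel as a hidden property of an observation.
# A comparison made during the original experience can capture the relative
# metalevel; reconstructed memories no longer carry the hidden tag.

class Observation:
    def __init__(self, content, absolute_metalevel):
        self.content = content
        self._level = absolute_metalevel   # hidden: never directly perceivable

    def remember(self):
        # What gets stored and later reconstructed: content only.
        return {"content": self.content}

def observe_together(a, b):
    """Two threads of attention, one moment: relative metalevel is visible."""
    return {"pair": (a.content, b.content),
            "relative_metalevel": a._level - b._level}

map_of_town = Observation("map of the town", absolute_metalevel=2)
town_itself = Observation("the town itself", absolute_metalevel=1)

print(observe_together(map_of_town, town_itself))      # relative level captured
print(map_of_town.remember(), town_itself.remember())  # the tag is gone
[/CODE]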

  #15
Old 07-08-07, 11:57 AM
Stabile
XV. Logical Structure and AD/HD

A complete gestalt reality model incorporating relative metalevel takes a significantly different form than models constructed without this information. When two people meet, they implicitly compare models during the process of communication. Any difference in how information is stored and accessed or in the information itself creates a very real likelihood of conflict.

Because we’re able to ‘see’ relative metalevel and store memories of the experience, ADDers build correct models that can differ from those built the old way, which, even though different, must also be considered correct. The ambiguity this presents is impossible to resolve without the use of relative metalevel, so any decision an ADDer makes has the potential to seem illogical and dangerous to others. The converse is not true, because we’re able to model the situation.

Many of the basic elements of the experience of having/being AD/HD may be traced to this problem, perhaps better described as a difference. Most of the rest are related to the problems that arise from living with such experiences; since every individual is different in this regard, there can be an infinite number of different individual cases.

Nevertheless, there exist broad classes of common experience, including our responses to the burden of living with such negative interactions. These classes are generally reflected in the different types and diagnostic subclasses of AD/HD, pretty much as laid out in the current DSM.

What we’re experiencing is clearly a moving target, though, so the standard classification schemes are already beginning to show signs of age. If we are living through the cusp of a speciation event, as seems likely, we should expect increasingly tumultuous times until the transition is substantially complete.

When that time comes (which could be as soon as ten or twenty years, or as long as fifty or more), we should expect a period of adjustment while we devise a suitably modified set of common models appropriate to the experience of using a metamodel web to model and mediate our interactions with reality.

It’s hard to predict what artifacts will survive in social institutions (for example), or what impact the rise of what is essentially a new species will have on things like international politics. What is clear is that there is an unmistakable pattern of evolution of neural structures at the heart of it all, regardless of whether or not we’re transforming into a new species.

It’s just as clear that AD/HD is (in a sense) a byproduct of the process of evolution of the form of the logical structures we use to store and process information. If the question of whether AD/HD represents an evolutionary advance isn’t definitively answered in the affirmative by this, it will be as soon as the mainstream scientific establishment begins to catch up with our analysis of these patterns.

That seems inevitable, given that the patterns aren’t of our own invention and most of the information we’ve relied upon to understand them is standard, well-accepted theory. We just happen to be looking in the right direction; it’s not possible that we’re the only curious sorts that will ever cast a glance that way.

--Tom and Kay