ADD & Communication
Hi Tom and Kay,
Would one expect the style, language and content of posts by non-ADDers discussing a subject on a forum to differ from the style, language and content of ADDers discussing the same subject on another forum? I guess what I'm getting at is whether you think the use of these different logical structures enables us to extend language beyond its normally tightly constrained form.
Chain has mentioned the use of unorthodox punctuation, and metaphor has been mentioned previously too. Do you think that posts are painted and not written by ADDers?
Some time ago, Wheezie pointed us to a forum ... and the posts seemed to be recapitulations of one another, and if I remember correctly, I commented on this, and also the failure of the authors to contribute anything of themselves ... a failure to convey their feelings on the subject to the readership.
One dimensional language!
This was split off from the Neil Young thread. I thought it would be interesting to discuss the ways ADDers communicate that may differ from what we experience in society.
"When we remember we are all mad, the mysteries disappear & life stands explained."
Also, there are some pretty clear signs of context based thinking and shifting in the grammar, structure and punctuation found in the writing of many ADD people (I feel that a majority of people who are ADD are CM. There is probably a high number who are what has been termed Asperger's Syndrome as well).
This is indicative of the "multi threaded" thinking that happens at the near-aware level (everyone has this, but people with internal reference do not use it to process culture).
(Alternate version may be found in original thread here.)
We believe that ADDers are more aware of their context when they post, and so more likely to attempt to transcend the limits of canonical expression.
But there's a bunch of subtle influences at work. We ADDers aren't really alone; we’re just the group that has taken advantage of changes in our brains. If we’re right about what’s going on in the general population, there are at least two prerequisites to having/being AD/HD, and virtually everyone alive has the first, the changes in the brain that allow us to form more complex logical structures.
The second requirement is the development of those structures, at least in a limited way. We start with the ability to form webs, and perhaps we take advantage of that; forming a significant web, about a commonly interesting topic, is a whole other thing. And one that can take a while, to boot.
So when we visit another (non-AD/HD) forum, should we be surprised to see people aware of how their words fill the space, and using that recognition to add dimension to what they communicate? No, not at all; we think of that ability as an unrecognized instance of the underlying phenomena that give rise to AD/HD.
That sounds like we’re saying 'well, that's AD/HD, so wherever we see it, there's AD/HD', but really we're not at all. It's more like we’re looking past the AD/HD label to the human platform underneath, and seeing that.
It’s not exactly like saying Windows and Linux can both run on AMD platforms, but it’s related. It's more like saying that C++ programs can run under Windows and Linux on an AMD platform, but the Windows executable won't run under Linux, and vice versa.
It’s the same program, though, right? No? Well, at least it’s the same platform…
Peace. --TR =+= =+=
"There is no normal life, Wyatt.
There's just life. Get on with it."
Last edited by Stabile; 05-12-05 at 12:47 AM.. Reason: TNetNoc
Maybe this would be a good place to start a discussion about these bits, since it does relate directly to communication: our internal conscious context (presumably related more or less directly to your concept of processing culture) exists to support language.
You think you're a blabbermouth... take a gander at this! lol!
I think there is probably still a problem with communication and possibly a difference of focus which in itself creates the illusion of a "disagreement".
1. I think my focus tends to be on the differences that create the phenomenon that has been labeled AD/HD by the medical community. I sense your focus tends to be on global similarities that humans share (which I agree with, from what I know of your models).
2. Internal reference is a very simple concept but it is easy to think away if you look too hard.
3. I have created confusion with this term when it was still a fluid concept. I used it simply to describe people who are extra-cultural.
I still do, for simplicity... but there are degrees and types of internal reference. Most of the AD/HD and Asperger's/AD/HD mix have a "true internal reference"; more on that later....
That being said... I view all forms as having purpose (function) in biology. I tie this in to the "order of the universe" through the phenomenon of attractors in chaos theory. Biology is simply the highest form of order *from the sentient standpoint*. OK... let's just cut our losses before we get too wrapped up in the problems of exactly *what sentience is*. Now this may be what you and Kay are working on. It is far out of scope for me at this moment. So, for me, the homunculus stands for the time being.
The roles of the two major "cognitive" functional types:
External reference and Culturals (ER):
All dynamic systems have external influences ad infinitum (or to the end of the universe). Culture moves rapidly towards a highly ordered state when the environmental conditions are right. This push towards order is a function of what I call external reference. It is hierarchical order and it is a projection of something that exists at a neuronal level in the brains of what I have seen termed as "normals".
Internal Reference and Extra-Culturals (IR):
When culture is too static, it becomes fragile. When the environment changes rapidly, it shatters, thus putting individuals at risk (we are naked social animals that huddle with the group for protection). Social groups that have individuals that are *not synchronized* with the group (extra-cultural) are more flexible. These individuals are more in tune with the external context of the group. They are more sensitive to the environment. The most successful of these extra-cultural individuals is the "internal referent". This is caused very easily by "removing" or changing several components and chemical feedback systems of the brain.
Functionality of Extra-Culturals (IR):
1. Extra-Culturals are more likely to leave one group and live in another culture. This is understandable: it is better to be accepted as an outsider in a foreign culture (this restless type could be the "H" in AD/HD) than to be treated as an outsider in one's birth group. This person becomes a genetic messenger, spreading genes throughout the species. This is so vital in reproductive biology that even bacteria practice it.
2. Keeping culture dynamic is important in an ever changing environment. Extra-Culturals are "forced questioners" and "pragmatists". They probably had a role as a counselor to the "leader" of the group.
Functionality of highly context based individuals (CM):
1. Highly aware of situations that are environment based. Culture is an "artificial environment" and therefore demands the attention, processing time and a high degree of focus of culturals (ER).
a. Awareness and contexts:
At the aware level, only one "thread" can be given focus at a time. Culture spins these threads; it must be constantly maintained. Individuals that are cultural (ER) tend to compartmentalize and ignore full context in order to keep group relationships functioning. At the "near aware" and sub-aware levels, multiple threads can be processed. This helps the individual build context in the group hierarchy (cultural context bounded by hierarchy (HM)).
The extra-cultural (IR) individual uses the aware level to monitor environmental contexts through highly acute senses. The near- and sub-aware levels are devoted to building and storing "meta-contexts". Culture involves a high degree of processing, and the extra-cultural (IR) also gets this extra processing. This allows for picking up dangerous cues in the environment.
2. Contextual mind has a greater ability to reason contextually.
This is valuable in creating technology. Knowledge of plants and sense-based memories of what is poison and what is medicine are of great value to the group. It increases survivability. All people have contextual reasoning to a degree, but the compartmentalization of HM prevents it from being useful outside the realm of simple problem-solving situations. The result is that CM individuals are powerful problem solvers.
The AD/HD, Asperger's and Autism Internal reference matrix:
This extra-cultural individual is vital to the functioning of any group that needs to survive in a dynamic society. There are many ways it can form, and to varying degrees; these are what I focus on (I think they are the highest percentage in the population).
1. Lack of hearing... this leads to slower learning of culture. (low IR, low functionality)
2. Lack of language...(high IR, low functionality)
3. Lack of sub-lingual communication (reading faces, body gestures) (High IR, Medium Functionality)
4. Lack of non-contextual processing. (high IR, high technical functionality with social capability)
5. 3 and 4 combined (Very high IR, Highest Technical functionality with low social capability)
6. 2,3,4 combined (Autistic Savant)
Why was IRCM selected for at such high rates?:
1. It is a very simple removal of one functionality that does not impair the individual in any way except for culture.
2. It retains the functionality of basic social intercourse and is therefore "hidden".
3. It is base animal cognition with linguistic capability. A simple regression or a common "mutation".
4. It has the highest functionality for the culture (IRCM includes Asperger's CM)
What is internal reference for AD/HD?:
In its AD/HD form Internal Reference simply means that we do not have a "non-contextual processing" part of the brain. I believe this would translate as a "filter" in your language but it probably has a discrete structure in the brain. Remove this component and all of the "symptoms" of AD/HD become clear.
This means that in AD/HD we create our own filters or strategies to deal with cultural reality. These are poorly constructed and create anxiety and depression. When we see the filters we have created... the anxiety and depression lifts and we can live a full life... in the role that exists for us: the role of the extra-cultural creative (actually a provider of synthesis) and questioner (actually a guide that sees pragmatic solutions).
Finally, to tie into the thread:
Communication simply has a different purpose for the extra-cultural and a very special purpose for IRCM.
Douglas Adams's "computer" that is Earth is the contextual mind. We do not stand on the shoulders of greatness. We utilize the new points of context created by other contextual minds and pull them into our contextual reasoning... of course, after questioning and refining them. Vonnegut is "an author", Poincaré was a "mathematician", Einstein was a "theoretical physicist". They are all philosophers, and they all created points of context that got me here. Theory and allegory... they combine in what I believe you guys call the "meta model web". I have been calling it the contextual matrix for about 10 years. I am not sure if it is the same thing.
This is what is different about our communication.
---***POSTED BEFORE READING CHAIN'S POST ABOVE***---
This is the fourth time that I've tried to post.
I keep failing to make the points I'm trying to make.
Each time I try and connect the Metamodel web with internal/external reference, I arrive at a conclusion in which the end effect of a Metamodel web is towards internal reference.
The web is the logical layer which lies below the I/E referential state at the higher behavioural level. The physical layer - actual neurones - sits below the logical layer.
So why does a discussion of language have a natural home here?
Our view of reality is internally held.
(Nurture through language)** builds the models which form that view of reality.
The view of reality could be redefined as a complex logical layer that feeds inputs into outputs .. see Dave .. and wave to Dave.
But language cannot encapsulate the full majesty of our absolute reality.
It defines a sub-component, but a particularly useful subcomponent which we can talk about (to Dave).
Given an ADDer talking to a non-ADDer, or two non-ADDers talking together, a common observation is of the ADDer failing to pay attention.
So where is the mind of the ADDer in these periods of zoning out?
Are these phases characterized by word-based thoughts on a subject which can be described or daydreaming without words?
Can we be consciously aware of something that we cannot put into words?
Is there ever a running commentary in a dream?
Has the ADDer mind the ability to evade the need to phrase thoughts in words as a precursor to conscious awareness?
Would conscious awareness of something that we could not put into words appear like a dream?
Is this type of thinking a precursor to language extensions and broader views of reality?
ADD .. thinking outside of the box.
A form of logic that is real, but which differs from the standard logical layer.
But if the logic differs, how was it set in place using the tools of language, and how can language be used to explain the thoughts that arise through use of these patterns of logic?
So I've tried again, but still I come up with logic underlying behaviour.
The question then, I guess, is which layer must we inhabit to answer the questions we need to answer to understand ADD?
Thanking all the gods for seat belts. What a ride.
>Q: Are you sure?
>>A: Because it reverses the logical flow of conversation.
>>>Q: Why is top posting frowned upon?
On the flip side, individuals that could organize and not constantly fight for resources or mates were more likely to survive. This took the first pre-language step towards hierarchy (HM). It is seen in most primates. Language was an add-on that made humans even more successful and culture more complex (i.e., less flexible). HM-type structures exist throughout nature... it is a very obvious adaptation. A wasp's nest and a building... what is a natural thing and what is an artificial thing? The chemical communication of ants, the dancing of bees.
The difference is in the nature of HM. It is a much, much more advanced structure builder (and more flexible). The species that has it, though... is also more vulnerable to changes in the environment (bees are very prolific compared to 9 months and a lower live birth rate for humans). HM serves its purpose, but it is not all that is needed.
CM provides the solution. We must move beyond the view that evolution only happens on the "individual" level...in social species there are functional types.
Language has 2 purposes:
1. Communication (Ideas, dangers, information, art)
2. Formation of groups (through identification and synchronization)
In IRCM we are heavily using it for number 1.
In ERHM it is heavily used for number 2.
Everyone uses both purposes but to different degrees.
Allegory is for that.
* * * * *
In a way, you can think of the metamodel web as a process, rather than a discrete abstraction such as a layer.
So our brains receive sensory input,
process it through neural layers representing increasing abstraction,
which are finally presented to our conscious centers,
where we construct a dance representing our response to them,
or interaction with them
(which introduces a tricky time element to our modeling behavior).
At any point in this process, neurons are arranged in patterns, networks that hold the logical representation of the thing they model in the arrangement and weighting of the interconnections within the network.
Logical modeling, on a global scale, accomplished through a hierarchical process that we could describe as association.
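That layered process can be sketched as a toy pipeline. This is purely illustrative; the "sensory" values, the 0.5 threshold and the function names are all invented stand-ins, not a claim about how neurons actually compute:

```python
# Toy sketch of hierarchical abstraction: raw "sensory" values pass
# through successive layers, each producing a more abstract summary
# of the layer below.

raw_input = [0.2, 0.9, 0.8, 0.1, 0.7, 0.95]

def edges(signal):
    """First abstraction: where does the signal change sharply?"""
    return [abs(b - a) > 0.5 for a, b in zip(signal, signal[1:])]

def summary(edge_map):
    """Second abstraction: a single high-level feature of the whole."""
    return sum(edge_map)  # count of sharp transitions seen

features = edges(raw_input)
print(summary(features))  # 3
```

Each layer discards detail and keeps structure, which is the sense in which the arrangement and weighting of connections can be said to hold the logic of the thing being modeled.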
We see a thing in its natural form for the first time, and we examine it; immediately, we begin to look for associations in what we observe, the experience of perception, to the entire gestalt of our previous experience of being.
In a short time, if all goes well, we develop a preliminary model of the thing that derives from common elements, and differences as well, from what we already know of our universe. It's possible we note nothing but difference, so the thing is new, and shares that common characteristic with all other things in our experience that are new, or were new once.
And so even for something truly new, we associate our expectation of the character of the experience over time, from not recognizing the thing to learning about it to knowing it, if possible, so that it’s not really new anymore in the original sense.
Association, that's the key. And the entire process should sound pretty familiar to anyone that embraces, say, taxonomy as a means of ordering their universe.
Without worrying yet about the relationship of this to the idea of our conscious context, let's think about how our body of knowledge about our universe is stored.
Take an example, perhaps a pencil: a writing implement. What does our model look like?
Obviously, it's a collection of generalizations, characteristics representative of various classes of objects. It's long and pointy and we can write with it under the right conditions, which may include our perception of the entirely abstract suitability of the result in terms of its permanence, and so on.
In this instance, it’s easy to see how these associations seem to form links that could conceivably connect all objects in our universe, if we imagine playing a sort of 'six degrees of Kevin Bacon' game with it.
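The 'six degrees' game maps naturally onto a graph search. The association graph below is invented purely for illustration, but it shows how a short chain of links could connect any two models in the gestalt:

```python
from collections import deque

# A hypothetical web of associations; every entry is made up.
links = {
    "pencil":   ["wood", "writing", "pointy"],
    "wood":     ["tree", "pencil"],
    "tree":     ["forest", "wood"],
    "writing":  ["pencil", "language"],
    "language": ["writing", "culture"],
    "pointy":   ["pencil", "needle"],
    "forest":   ["tree"],
    "culture":  ["language"],
    "needle":   ["pointy"],
}

def degrees(start, goal):
    """Breadth-first search: length of the shortest association chain."""
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        node, depth = queue.popleft()
        if node == goal:
            return depth
        for nxt in links.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return None  # no chain of associations at all

print(degrees("pencil", "culture"))  # pencil -> writing -> language -> culture
```

Any two concepts with a path between them are only a few hops apart, which is all the Kevin Bacon game amounts to.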
But what is the character of these associations? So far, we’ve described a structure that can be mapped onto a two dimensional surface. It might be web-like, but it isn't the metamodel web yet. All associations in our model of a pencil are independent, in the sense that the structure doesn't depend on when we make any particular association.
We'll get back to that; let's look for a minute at the ultimate context of all these models, our own interior universe.
It isn't too hard to imagine the mega-gestalt, the entire lump of all we know associated through links like the ones in our pencil example. But what can we do with that? The problem is that it's static, like an encyclopedia, no good at all until you engage in the process of opening it up and using it.
And of course, we do exactly that, metaphorically speaking; our ordinary experiential gestalt is this mega-lump of associated abstract models, with the process of conscious awareness added.
Accept for the moment that some compulsion exists which drives a particular behavior in the context of these models, which of course include models of ourselves and others and the character of the experience of being.
We can easily imagine ourselves propelling these models of ourselves and others around within the internal universe represented by the experiential gestalt.
The rules characterizing the expected behavior are just a part of the model, as are the rules for assembling behavior into a sort of serial script that matches the perception of our external self engaged in the process of being, filtering in through the hierarchical layers of abstraction of our sensory systems.
So given the (as yet unidentified) compulsion to construct such a serial model of our perceived behavior, we can see that this process pretty much accounts for all of our conscious experience of being.
One of the most common features (and perhaps the most significant) of that serial model, truly a script of sorts, is appropriate dialog, both among the players and internally, as a kind of ultimate aside.
We model our behavior largely in terms of our interactions with others, as is readily evident by the nature of this particular discussion. But what can we say of the character of those interactions, principally consisting of the behavior universally described as 'communication'?
All such interactions (and to a certain extent the internal dialog as well) are limited to what is possible, as modeled in the context, the mega-lump of neural models forming our experiential gestalt.
We can only discuss what we already know, in some form. Even something we don't know is characterized by that classification itself and the associated details of precisely how we don't recognize the thing.
So in a sense, there isn't any such thing as an unknown; anything truly unknown would be invisible, not even a blip demanding investigation.
We'll return to that (eventually), but now let's look more closely at the character of the experience of being in this conscious context.
Where in the brain does all this take place? For the most part, at the end of the chain of abstraction of our sensorium, the sum total of all our moment-by-moment sensory impressions of our physical existence.
There is good reason to characterize the final step before some particular abstraction passes into our conscious awareness as a filtering process related to attention (as the term is commonly used to describe a property of conscious experience: paying attention).
So we can imagine our brains as a mass of active networks, one feeding another feeding the next, interacting at each step, the old feeding the new as abstraction and recognition and new construction intertwine in a magnificent dance.
At the center of this dance is conscious experience, a more or less orderly microcosm of the greater whole, constantly churning and feeding back into itself as the serial script of our conscious experience is constructed, traversed, compared to the input stream and corrected: conscious being.
There isn't any aspect of the whole that requires anything except the basic mechanism of modeling and abstraction, applied hierarchically. In one sense, we could say that our perception of time arises as a consequence of the serial nature of such a hierarchical structure, or perhaps as a reflection of that property.
Regardless of that, we can see that the actual live context of any individual part of the whole, the patterns being represented at any given moment in any given area of the brain, must be closely related to the firing patterns of any adjacent area.
This description is necessarily simplified; for example, we haven't discussed anything at all about how a particular aspect of our reality might be modeled, yet the hierarchical nature would seem to dictate that all models are linked through the hierarchy as well as through the network of logical associations previously described.
Such connections are implicit in the way that neural networks model; they’re inherently efficient in this way, reusing any particular element as broadly as possible, once it’s established.
So we have connections to our cerebellum (for example) that allow us to model our physical selves consciously by using the physical models already in place, and also tap the sensory input represented there. This is one reason why brain scans usually show behavior in several structures, even during purely abstract tasks.
The modeling that represents the processing of sounds into intelligible speech is tapped as part of the feedback loop that allows us to control our vocal tracts and produce an intelligible response, and so on.
At any given point in the brain, we could expect to find an abstract context that might be recognizable, in a sense (pun intended) if we could somehow move our conscious awareness into it. As we range closer to the actual conscious centers, these abstractions would become more easily recognizable, and of course, more abstract.
Ultimately, we should expect areas immediately adjacent to the conscious centers to be a swarming sea of perfectly ordinary objects. There must be something that characterizes the transition beyond the already mentioned filtering that takes place; in fact, there must be a purpose for the filtering, as well, and we could expect them to be strongly related.
And of course, they are. Abstractions on the outside of the conscious filters are alive but carry no meaning; they just are. Abstractions on the conscious side of the filter are alive with meaning, every element intrinsically related in a meaningful way to every other element and the context itself.
If it isn't yet obvious, the filter is the conscious process itself. By choosing the elements we weave into the serial script that describes their dance, we simultaneously create both meaning and the classification that gives rise to the appearance of the filter: objects on the outside aren't a part of the script, and therefore have no meaning in the conventional sense.
You can see why the filter is related to attention, and how attention itself arises. The process of choosing the elements that will become players in our internal reality script is what we experience as attention; by definition, the bits and pieces we don't use aren't a part of our conscious universe at that instant. We're not paying attention to them.
We can now further characterize conscious experience as a process that depends on two sub-elements: those logical models presented by the sensorium that we have chosen for the internal play, and the process that gives rise to the script describing their interrelated dance.
Both of those elements have a time component, the script because it's serial, and the elements presented by the sensorium because they're abstracted from real events, and reality is constantly in a state of flux.
The script is serial because it represents a model of elements in a state of flux, in this sense reflecting the nature of reality itself rather than the linear hierarchical nature of the process of abstraction. (However, the granularity in time of the representation is most likely dictated by the underlying mechanism.)
The implication is that we can separate the conscious component (essentially, the experience of the scripting process) from the abstractions that form the players. In effect, we can imagine experience that isn't organized by the scripting process, that doesn't depend on it at all.
What we're describing is the likely context for much of the experience of being in lower animals. The nature of the sea of abstractions present immediately prior to the 'filtering' process is arguably identical to the static nature of any element once it’s been woven into our internal reality play.
We would be able to recognize any element, and even patterns within the natural interplay of elements that might hold some significance. But we would be unable to characterize the nature of the significance in any way. That requires analysis via the process of incorporation into the conscious script, and the subsequent recognition of the patterns that represent meaning.
So (for example) we might be able to recognize a boulder rolling down a hill toward us, and even the pattern representing the possible danger. But we wouldn't be able to speak, to warn a friend, or formulate a plan of action: run away, run away!
But we might be able to imagine a circumstance in which the recognition of the boulder and the danger it represents could be itself recognized.
We should be able to model the circumstances of perceiving those abstractions in their context, outside the current conscious reality model, and the process of transferring an awareness of them into the conscious context, where it could be modeled in a way that allows us to escape the danger and save our friends as well.
I should note that we have in fact done exactly that right here, in this context. So such a model is not only possible, it already exists. (In case anyone wondered, the perception of these high level abstractions in this way is the phenomenon Kay calls 'whispers'.)
We refer to the conscious context as defined here as an experiential space; such spaces can be thought of as exhibiting locality in particular areas of the brain. The area immediately preceding the transition into the conscious context is another experiential space, with specific characteristics.
We can't speak in that context, for example, and we can't carry out any sort of planning or other activity requiring an act of construction. All we can do is be, although the experience is richer than the usual conscious experience of reality, given that there is no filter on what we experience.
One can imagine a complete reverse hierarchy of such experiential spaces, following the path back through the abstraction layers of the sensorium, and a pattern immediately emerges: almost all experiential spaces are non-verbal and also passive, in the sense that we're limited to actions that are preprogrammed into the existing structures.
That doesn't preclude communication, however. It's entirely possible for two people to communicate with each other within one of these passive non-verbal experiential spaces, and it’s also possible to co-inhabit multiple spaces, so that experience becomes a sort of superset of the experience in both spaces.
If one of the spaces is the conscious space, it's possible to interact so that the experience is no longer entirely passive, although it remains largely non-verbal; ordinary speech causes the superposition of states to collapse.
It's experience with complex communication in just this way that led Kay and me to begin to examine human communications. In our case gender played a tremendous role in how we were able to discuss the experience in the ordinary conscious context, and so those gender differences became the focus of our inquiry.
To return to the role of metamodels, let's consider the nature of logical models themselves.
All models are just that, models, imperfect representations by definition. How good a representation we can make depends on several factors, but basically, we can relate the quality of a model to the raw quantity of information represented.
The more finely grained a model is, the more accurately it can predict the nature of the thing it models. That's no great surprise, nor is the fact that finer detail implies more information implies larger size, and the larger a model's size, the longer it takes to process.
So there's a tradeoff between size and speed, or efficiency, and there are also other considerations that limit raw size in neural models. But the discussion so far is about models that can be mapped onto a surface; despite the three dimensional form of an actual neural network, the logic represented in the 3D structure can generally be mapped using only two dimensions.
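The size/accuracy tradeoff can be made concrete with a toy model: approximate a curve (sin here, an arbitrary choice) by sampling it at n points and interpolating linearly between them. The function names and the 200-point error probe are invented for the sketch:

```python
import math

def model_error(n):
    """Worst-case error of an n-point piecewise-linear model of sin
    over [0, pi]. More samples = more stored information = less error,
    but also more points to process."""
    xs = [i * math.pi / (n - 1) for i in range(n)]
    worst = 0.0
    for k in range(200):
        x = k * math.pi / 199
        # find the bracketing sample points and interpolate
        i = min(int(x / (math.pi / (n - 1))), n - 2)
        t = (x - xs[i]) / (xs[i + 1] - xs[i])
        approx = (1 - t) * math.sin(xs[i]) + t * math.sin(xs[i + 1])
        worst = max(worst, abs(math.sin(x) - approx))
    return worst

# Finer granularity monotonically shrinks the worst-case error:
print(model_error(4) > model_error(16) > model_error(64))  # True
```

The cost grows with n while the error shrinks roughly with the square of the spacing, which is the tradeoff the paragraph above describes.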
The implication in logical terms is easy to describe. Think of modeling a cube using a 2D model; all we would be able to represent is a square, or perhaps a triangle of some sort. If the cube rests on the surface of the model, you can think of the square that we've modeled as representing all possible squares that make up the surface of the four sides of the cube.
No matter how large we make the model, it will never be able to discriminate between the infinity of different squares that form the sides of the cube, not even roughly. Choosing a particular square (essentially the process of specifying a particular height on the cube proper) is simple for us, but can't be determined at all by the 2D model; its representation of the squares is inherently ambiguous.
Of course, all we would have to do in a practical sense is add another 2D model to the system, rotated 90 degrees from the first, and the resulting composite model would resolve the ambiguity. But here we're talking about physical models, and the idea that we could be trapped in two dimensions is silly. What if there was some analogue to this situation in the logical space that our neural models inhabit, so that we actually could become 'trapped' in two dimensions?
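A minimal sketch of that fix, with invented view names: one projection of a 3D point collapses an axis and is therefore ambiguous, but two projections rotated 90 degrees apart determine the point uniquely.

```python
def front_view(p):
    """Project a 3D point onto the xy plane: the height z is lost."""
    x, y, z = p
    return (x, y)

def side_view(p):
    """The same point seen from the side, rotated 90 degrees: x is lost."""
    x, y, z = p
    return (y, z)

def reconstruct(front, side):
    """Two orthogonal views together determine the point uniquely."""
    (x, y), (y2, z) = front, side
    assert y == y2  # the shared axis must agree between the views
    return (x, y, z)

a = (1.0, 2.0, 0.5)
b = (1.0, 2.0, 0.9)  # same front view, different height

print(front_view(a) == front_view(b))                  # True: one view is ambiguous
print(reconstruct(front_view(a), side_view(a)) == a)   # True: two views resolve it
```

The ambiguity lives entirely in the dropped axis, so any second view that retains it will do.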
The physical model of this situation might be silly, but the circumstances of the logical analogue are all too real. They're the limiting factor on the resolution of our models of reality, and they're at the root of most of our present-day headaches, regardless of whether we're talking about AD/HD or M Theory or social justice.
Our real problem isn't that we’re computationally (or size) bound; the problem is ambiguity. We could get as smart as we wanted, artificially increase our brain size and power without limit, and it wouldn't help a bit.
Think for a minute how we would represent the height that uniquely determines which square our 2D model represents. It might seem that we simply could choose an arbitrary reference point, perhaps one of the vertices of the base of our cube, a point already represented in the model.
Then we choose another point, somewhere on the square we wish to resolve, and we have the problem licked. Or not: while that exercise seems to work in the physical space of the example, it in fact depends on the geometric definition of the space itself.
And logical spaces don't have predetermined geometric definitions. In logical terms, as soon as we choose our second point, the plane of our 2D model rotates to fit neatly through the two points. We don't have any anchors in logic space; everything is relative.
It's easy to see that we need three points to determine level, two to establish the reference plane and the third to set the level. The actual requirement is the two relationships, one point to another, and the same point to the third.
In essence we look at the difference in angle between the two vectors, and if it's not zero, we know that one point lies outside the plane of the first two. (I know the purely physical geometric model I'm using breaks down a bit here, but bear with it anyway. It’s just a descriptive device.)
There is one more key requirement that perhaps isn't obvious from the description: we need to observe the two relationships simultaneously. If we look at them one at a time, the plane that the model occupies simply rotates as we make our separate observations. In our memories, the three points will always seem to lie in the same plane.
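The two-relationships test can be sketched loosely in code. This is my own geometric stand-in, not the authors' notation: take the two relationships as vectors from a shared reference point and examine them together; a nonzero angle between them means the third point lies off the line through the first two, i.e. it establishes a level.

```python
# Toy sketch: checking both relationships "at once" via the cross product.

def sub(p, q):
    """Vector from q to p."""
    return tuple(a - b for a, b in zip(p, q))

def cross(u, v):
    """3D cross product; zero iff u and v are parallel."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def off_the_line(p1, p2, p3):
    """True if p3 does not lie on the line through p1 and p2."""
    u, v = sub(p2, p1), sub(p3, p1)
    return any(abs(c) > 1e-9 for c in cross(u, v))

base_a = (0.0, 0.0, 0.0)
base_b = (1.0, 0.0, 0.0)
assert not off_the_line(base_a, base_b, (2.0, 0.0, 0.0))  # collinear: no level set
assert off_the_line(base_a, base_b, (0.5, 0.0, 1.0))      # angle nonzero: a level
```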
In our example the height of a particular square represents the logical property metalevel. Metalevel is a property of logical relationships that isn't readily apparent when making a single observation of the relationship between two things.
The things can be real physical objects or entirely abstract conceptual entities. If some part of the aggregate relationship between the two represents a general property that may apply to define a class of objects of (at least partially) similar type, the relationship is said to be a metarelationship.
In a logical sense, metarelationships are a level above the ordinary relationships that determine properties like relative position or weight. And of course, we can classify metarelationships like any other logical object, invoking meta-metarelationships, meta-meta-metarelationships, and so on through infinity.
And so we can speak of metalevels, the logical levels inhabited by metarelationships; moving up a metalevel only signifies increasing generality, while moving down signifies increasing specificity. We can imagine an infinite logical universe of metalevels, extending to the infinitely general at one extreme, and the infinitely detailed at the other.
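One concrete, if imperfect, analogue of the ladder of metalevels is the object/class/metaclass structure in a language like Python, assuming we let "class of" stand in for "metarelationship over": each step up is more general, each step down more specific. (Unlike the infinite logical universe described above, Python's ladder folds back on itself at the top.)

```python
# Toy analogue of metalevels: thing -> class -> class of classes.

n = 3                   # a particular thing
cls = type(n)           # one metalevel up: its class (int)
metacls = type(cls)     # another level up: the class of classes (type)

assert cls is int
assert metacls is type
assert type(type) is type   # the ladder terminates here, unlike true metalevels
```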
How does this relate to our example? When we observe a single relationship between two logical objects, there is no way to determine if the relationship is a metarelationship or only an ordinary one. But if we can simultaneously observe two relationships, we will be able to recognize the metarelationship because it denotes class.
Another way to think of it is this: you need at least two examples of a class of objects to assume the existence of a class. A single thing just is. This isn't a very satisfying way to view it, because it doesn't represent the need for simultaneity correctly; it's hidden in the way we assume memory works, in the same way that the assumption of direction is implicit in the geometric definition of the space in which our cube lives.
If that seems to say that memory is relative, well, it is, in an important sense. We can't use it to construct 3D logical structures; it's why the plane of our model seems to rotate when we’re not looking.
We'll deal with that another day. What we want to do now is look at how the concept of metalevels works into the picture we have of experiential spaces, how the metalevel web arises, and the role of both in communication and language.
When we observe and form a model of some abstraction in our sensory input stream, we look at individual elements of the abstraction and note their relationship to each other and to our preexisting models of other objects, similar or otherwise.
We model the abstraction by forming associations between various elements, adding structure where necessary and adding new elements when needed. Association is one way to describe basic neural function, so this model of the modeling process doesn't stray far from what we know to be true about the way our brains work.
Networks of associations are the definition of the logic expressed in real neural networks, and so our neural models and internal logical models could in fact be the same physical objects. It's likely that the moment-to-moment operation of the brain is actually realized in logic itself; we probably model with logical models of neural structures (rather than actual neurons) in many areas of our brains that require the ability to adapt in real time.
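The modeling-by-association idea can be sketched minimally, under the assumption that a model is nothing but a growing web of symmetric associations between elements. The class and names here are illustrative only, not the authors' terms.

```python
# Minimal sketch: a model as a web of symmetric associations.

from collections import defaultdict

class AssociationWeb:
    def __init__(self):
        self.links = defaultdict(set)

    def associate(self, a, b):
        """Record a symmetric association between two elements."""
        self.links[a].add(b)
        self.links[b].add(a)

    def related(self, a):
        """All elements directly associated with a."""
        return self.links[a]

web = AssociationWeb()
web.associate("water", "pump")
web.associate("water", "symbol")
assert "pump" in web.related("water")
assert "water" in web.related("pump")   # associations run both ways
```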
If we observe two related relationships simultaneously, we may recognize that one occupies a metalevel above the other. (Not all pairs of relationships, even if related, are going to represent metarelationships, of course.)
How do we form our ordinary models, then? We observe relationships and remember them, weaving associations into a single structure representing the logical model of the abstract object we’re observing.
If we happen to observe metarelationships, they’re encoded in exactly the same way. The only requirement is that we make the two observations that allow us to determine metalevel simultaneously, and that means doing two things at once in our heads.
This is nothing but the oft-mentioned AD/HD ability to multitask, or multithread; we can pay attention to two different things at the same time, precisely the ability needed to casually observe and encode the logical property metalevel.
Encoding metalevels doesn't in itself give rise to the metamodel web; we need to observe metalevels in many different contexts, for a long time, until we gradually begin to perceive a pattern in the relationships between metalevels themselves.
That view is remarkable on three counts: the structure evident is entirely comprised of relationships (rather than objects plus relationships), there is no practical size limit on how much information can be encoded in any particular model, and models are capable of inhabiting multiple levels.
That last is important, because it means we can construct models that can't be mapped on a 2D surface. To go back to the example, we can construct a 3D model of the cube that fully represents every possible 2D square that it contains, and we can do it with almost arbitrary precision.
The practical translation of that property is that we can construct models of arbitrary dimension, tailoring the dimensionality to whatever degree necessary to overcome ambiguity in the representation. We can't even represent that idea properly in conventional logical models, let alone construct models that use the principle.
If it seems that ambiguity isn't such a big problem out there in the real modern world, consider that the subject of the discussion is communication, and the hallmark of poor communication is ambiguity.
And for reasons we haven't yet discussed, ambiguity actually lies at the heart of many problems that have yet to be recognized as such, but are frustrating nonetheless. (People dying in Iraq is one, among many.)
Now let's think for a moment about the nature of the dynamic logical abstractions that inhabit the experiential spaces in and near where consciousness lives.
It should seem obvious that many of the abstract models that we would encounter in the spaces directly adjacent to our conscious experiential space would be identical to models already playing a role in our conscious experience; they are, in fact, not merely exact copies but actually the same objects.
So if we’re to differentiate between the two experiences of exactly the same model, we had better have some way to unambiguously represent the same exact object in two distinct ways. The difference is subtle and subjective, of course; by definition, it only exists in the character of our experience of the abstraction.
In terms of our example, the two different experiences are analogous to two different squares within the cube, and we face the same requirement to resolve the ambiguity.
If our model of the experience is encoded in a metamodel web, the subtleties of the difference are naturally preserved. We are fully aware of the quality of the experience, both in the conscious spaces and the ones lying adjacent, where everything just is.
There are other implications to using the metamodel web to form logical models, one of which is that the web itself is ultimately general. Eventually all of our models collapse into one master web, which has additional advantages.
But the discussion is about communication, and how the nature of the AD/HD experience differs from the ordinary one. To begin to understand that, we first need to look back at language and see how it works in the context of the ideas we’ve developed so far.
As SB has pointed out many times, language is linear and limited. Why this is so derives from the fundamental mechanisms at work, primarily the role of the restricted common subset of our conscious models of reality.
That statement implies that not all of our conscious experiences are based on logical models that are common among different individuals, and that is in fact the case. We all bring experiences to the table that are highly individual, experiences that derive from conscious being but aren't represented in the common models.
It isn't relevant to talk about whether any two people might have the same experience that isn't a part of the common model of reality, and simply don't realize it. In a sense it must be so, but the idea itself transcends the role of language in conscious experience.
We depend on our internal dialog to organize and interpret the real time script that gives rise to conscious experience; experiences that we are consciously aware of, but have no language for, are dreamlike and unfocused. We can't speak about them because they aren't common, but we can't express that fact even internally, to ourselves.
This isn't the same kind of 'be-here-now' experience we associated with temporarily inhabiting an adjacent experiential space. This is experience of which we are consciously aware, but unable to speak. Thus, the impression that they're dreamlike and unfocused arises because of the lack of suitable language, rather than independently of it.
But why couldn't we learn to recognize that characteristic, and a pattern among similar experiences, eventually forming a model of the class of such experiences? We can, but we need to be able to resolve how language plays into the equation first, and that requires metalevels.
Why does that require metalevels? Because it actually requires a kind of absolute reference: a relative one, but one that serves as absolute within the necessary framework. We have to construct a model of how we depend on language to consciously model things, one that places that model in context and reveals the rest of the universe of possible modeling behavior that does not necessarily depend on language.
As, I might note, we have already observed: most of the modeling that goes on within our heads is outside the conscious context, and doesn't require language at all. The conscious context is the anomaly here, and the development of a metamodel web inexorably draws one into the more balanced universe of the brain outside the mind, and eventually, the mind outside the brain.
One of our most important realizations about human communication was that it is by nature bilateral. There is no such thing as unilateral communication; even our internal dialog is just that, an imagined dialog between two invisible aspects of our self model.
We can easily relate this to the most general biological principles; bilateral communication between two individuals is an obvious competitive advantage in too many ways to list exhaustively. A few of the more obvious are an improved ability to judge the fitness of a potential partner, and the improved chance of survival because effort can be better coordinated on all fronts.
But there are additional advantages, principally in being able to validate one's perspective independently. That idea is so important that it's reflected over and over on virtually every level of our existence.
For example, if one person disagrees with another, it may be a difference of opinion, or it may not. But if a third person agrees that one of the disputed views has obvious merit, there can be no question that the disagreement is no longer just a matter of opinion. Now the dispute centers on fact, and perhaps the only remedy is showing how the appearance of sensibility is mistaken.
Usually, it’s not. There are other examples, all of them reflecting that quality of synchrony, and more importantly, unambiguous certainty that the synchrony is valid. The concept of betrayal is deeply rooted in this, when an established expectation of synchrony is found to no longer be valid.
So there is good reason to expect that there is a potential pot of gold at the end of the rainbow that dances outside of the conscious context, and good reason to expect that we are being dragged there by the most fundamental of forces.
What that has to do with the subject at hand is this: communication between ADDers is first and foremost human communication, and is therefore imbued with all of the properties characteristic of that.
We feel compelled to express ourselves for many different reasons, including the desire to explain something that we can’t be certain we've interpreted correctly, or can’t explain in common terms at all.
That last one is a great source of frustration and the burning drive to express the abstract view that seems to motivate true artists. It also might be the source of SB's pique, don't you think?
If we think of what an ADDer might feel compelled to communicate, a short list could include the nature of the experience of modeling reality with a metamodel web, which can make normal models seem slow and incomplete, and sometimes corrupt.
It could include the nature of experience outside the ordinary conscious context, and the rich experience of meeting another out there and merging (to an extent) your realities.
And it certainly should include the nature of the detailed unambiguous models that are possible, merely because we’re able to form multidimensional logical structures more appropriate to the apparent structure inherent in the nature of logic itself.
We are able to model details that older methods can't even recognize. How do we explain that? There isn't any language for it by definition, because language depends on models that were developed with the old methods, and that fact is embedded in its structure. Developing a way to talk about these fundamentally new kinds of experience will require not only extending language, but making fundamental changes to the structure of language itself.
Kay and I have experienced the spontaneous construction of a temporary form of language with an entirely new structure and vocabulary, used it to conduct complex interactions in an experiential space outside ordinary conscious reality, and abandoned it when we were finished.
We've never thought much about how our conversation might have sounded to an eavesdropper, because it's not really a correctly constructed idea. We weren't talking in a different language, we were using a fundamentally different form of language. I couldn't swear we actually said anything.
Like anybody else, we feel compelled to try to communicate to others something of the nature of the experience. We don't usually speak about it as I just have; the logical journey we’ve been on set y'all up so I could say it that way. But the character of the experience is something that we think everyone should be aware of, because it implies a far richer universe than the one we were told we would inherit.
Somewhere in those three possibilities listed above (of why an ADDer might be compelled to communicate) lie most of the reasons that any of us have posted to this thread, and perhaps to the forum in general. It’s human nature to feel compelled to understand our selves and our circumstance, and AD/HD has made that an unexpectedly rich field of inquiry.
Which particular aspect of the experience is reflected in any particular posting must vary with the individual, but whatever it is, different or not, it derives from the structures and processes we just described, the context of our experiences of being, and the mechanisms that give rise to it.
So the answer is right there, in all the ways that a post could be different because of the influence of AD/HD, or the use of a metamodel web to model reality, and it's this:
What did you expect?
Kay and I take an extreme view, in which a process of selective disillusionment molds children until they drop into the single acceptable model, and become human. Fail to do that, and the result is some other species entirely.
Didn't you ever wonder where logic comes from? AND, OR, NOT, IF, THEN, ELSE - George Boole sure didn't invent it. We suppose that it's implicit in the fabric of reality, a fundamental property of the nature of nature itself.
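The primitives named above can be written down as truth functions; note that IF/THEN (material implication) reduces to NOT and OR, one small sense in which the operators form a fixed system nobody invented.

```python
# The Boolean primitives as truth functions.

def AND(a, b): return a and b
def OR(a, b):  return a or b
def NOT(a):    return not a
def IMPLIES(a, b): return OR(NOT(a), b)   # IF a THEN b

# Implication is definable from NOT and OR for every input:
for a in (True, False):
    for b in (True, False):
        assert IMPLIES(a, b) == ((not a) or b)

assert IMPLIES(False, True)   # a false premise implies anything
```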
In a sense, language only provides us a logical framework that dictates the nature of conscious experiential reality. That's a pretty big chunk of all there is, in ordinary terms, but it’s not really such a big deal.
Another way to express it is this: we all agree to descend into a common interior model of reality, so we can communicate complex abstract concepts over a channel of exceedingly limited bandwidth. Our concept of reality is so ingrained in this process that to opt out is to fail to become recognizably human.
That's a slightly different version of what we said above.
The careful observer might have noticed we never did identify the source of the compulsion to construct and play out those scripts that create the impression of conscious being.
It's a collection of impulses and drives that we call 'the social impulse'. Its purpose is to ensure that our common reality models remain synchronized, primarily so that language will continue to work as expected.
That's it, really; it’s all a statistical mistake, a trick of selection enabling a fancy form of interaction amounting to highly compressed information exchange over a severely bandwidth limited channel.
We're only human so we can talk, and we’re only human because we can talk.
Real meaning lies elsewhere.
And don't think for a minute we don't know where that is.
* * * * *
Neil Young, again:
I think I'd like to go
And take it easy
There's a woman that
I'd like to get to know
Everybody seems to wonder
What it's like down here
I gotta get away
from this day-to-day
this is nowhere.
Everybody, everybody knows
--Tom and Kay
Peace. --TR =+= =+=
"There is no normal life, Wyatt.
There's just life. Get on with it."
"Are these phases characterized by word-based thoughts on a subject which can be described or daydreaming without words?"
Can we be consciously aware of something that we cannot put into words?
"Has the ADDer mind the ability to evade the need to phrase thoughts in words as a precursor to conscious awareness?
Would conscious awareness of something that we could not put into words appear like a dream?
Is this type of thinking a precursor to language extensions and broader views of reality?"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>>quote SB_UK
I can answer this question in non-scientific terms only. If the reality can be determined by one human being's personal experience, the answer to all of the above is YES. That is, if the model source were a reliable representation of true ADD human behavior. Be it inner or outer or upside down.
Ok. Those are overly simplistic words that I am using for my own ADD brain to try to comprehend... and decipher the angles and twists that have occurred in this debate. I personally believe that I am a reliable source. (SMILEY FACE...graphics not appropriate)
If my intelligent ADD friends and I have confusion when deciding which outfit to wear to a party, then certainly we might become confused when reading from this debate.
Am I correct that the study of the behavioral patterns of so-called models is subjective, albeit highly probable?
That is, if you are asking if it were a possibility. Which I believe it is. I have experienced this ......phenomenon? There are times when there is not a "thought" in my head...certainly not a dream based on human language. The human language is what I use afterwards when I define the experience.
Is communication a possibility without being subject to the ingrained words a person harbors in his head? I can tell you that I have communicated with no language or words and only been able to comprehend the event through words afterwards. And these describing words are debatable still.
To throw another kink in the mix.....Is it the ADD or something else, some creative source or outside force that we are not capable as humans of understanding? With no words..... yet a conscious awareness, as in the certainty of one's soul?
Or.....Consider the trance-like state that an artist sometimes enters and the brush becomes an entity of its own. I can tell you that this communication with the canvas often takes place without any conscious awareness of human words. At least in my world. I can only understand that type of communication afterwards...again with words. There is a startle after this creative event takes place and the product has miraculously appeared. A bizarre inability to recognize one's own work. Where did it come from? Where were the words?
If I have misunderstood the questions posted and made my own interpretation, then consider this my contribution and an attempt to include myself in a discussion of ADD and communication. (more smiley faces)
"Nah, you can be sure of that, all right. Kay and I take an extreme view, in which a process of selective disillusionment molds children until they drop into the single acceptable model, and become human. Fail to do that, and the result is some other species entirely." >>>>>>>> quote: Stabile
A reply to the above statements this may not be... but I do have questions regarding a particular sentence.
And there are no stupid questions. Correct? At least that is what my third grade teacher told me. (grins)
Can you tell me exactly what the other species would be?
How would logic and reason cause me to carry on in fragments of the human language? Especially when I have been drilled on grammar and correct sentence structure. I have memorized and learned the basic formulas used in writing poetry and prose.
I think also that when they dropped me, they threw away the mold. And I'll bet they did yours and all the other little ADDers too. (more smileys)
ADD .. thinking outside of the box.
A form of logic that is real, but which differs from the standard logical layer.
But if the logic differs, how was it set in place using the tools of language, and how can language be used to explain the thoughts that arise through use of these patterns of logic?
The question I would like to pose here is this.
What is Outside the box?
Where is this place?
Is that place The place that defies logic and reasoning? This is what I define the place to be.
If the place doesn't defy logic, then the tools of language would surely play a role in one's awareness of this so-called place.
That takes all of the mystery and magic from inside the box, and then where does it have to go? sheesh.... pardon the bouncy writing. It never made it to the box.
There must always be logic, I guess.
But our period of mental development leads us into a pattern of logic whereby we are left with .. IF car AND key THEN hop in and drive.
And so the sorts of thoughts that an individual could have that are within the box, are the application of logic in pretty much the same way that everybody else would apply that logic.
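The 'inside the box' pattern reads like a literal rule: the same facts always fire the same conclusion, with no associations beyond the rule itself. A toy sketch, not anyone's actual cognitive model:

```python
# In-the-box logic: IF car AND key THEN hop in and drive.

def decide(facts):
    """Apply the one fixed rule everyone applies the same way."""
    if "car" in facts and "key" in facts:
        return "hop in and drive"
    return "stay put"

assert decide({"car", "key"}) == "hop in and drive"
assert decide({"car"}) == "stay put"   # no key, no trip
```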
It is very commonly said that ADDers are capable of 'thinking outside of the box'.
There must still be logic.
Perhaps the easiest way to think outside of the box is if the individual has a fully contextual mind or is thinking with a well developed Metamodel web.
Through use of a highly contextual mind or a well developed Metamodel web, the individual will have a mind replete with associations.
These associations would not occur in the counterpart minds to these 2 models.
Once these associations exist, being able to think in a mysterious 'out of the box' manner, becomes possible and altogether quite unmysterious.
If one knew that dogs loved having their ears rubbed, and that their cat was in a bad mood ... then through making the association between cats and dogs as instantiations of animals, might one attempt to rub the ears of one's cat in order to alter his or her mood.
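The cat-and-dog step can be sketched as out-of-the-box inference: a fact observed about one member of a class is tentatively extended to another member through their shared parent class. The data and function names below are my own illustration.

```python
# Generalizing a fact across class siblings via a shared parent class.

known_facts = {("dog", "loves ear rubs")}
is_a = {"dog": "animal", "cat": "animal"}

def might_apply(candidate, fact):
    """Guess that a fact about one member may apply to a class sibling."""
    return any(is_a.get(subject) == is_a.get(candidate)
               for subject, f in known_facts if f == fact)

assert might_apply("cat", "loves ear rubs")       # via the shared class 'animal'
assert not might_apply("rock", "loves ear rubs")  # no shared class, no guess
```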
If this is Chain's CM .. then it sits close to Tom&Kay's Metamodel web, and underlies Chain's Internal and External referencing types. However, I can't help but wonder how far these types diverge from the Jung/Myers-Briggs typologies .. and though those are, broadly speaking, accepted behavioural descriptions .. whether the CM or MM bits are really considerably more important to discuss in an attempt to understand ADD.
Stabile "It’s human nature to feel compelled to understand our selves and our circumstance."
Or to understand our context.
If the developed MetaModel web is the fully contextual mind, then the honed ADDer mind is better at achieving an understanding of our self, and our circumstance.
This is very important.
I believe that the following are pretty much a consequence of the use of these new logical processes:
All taken from Chain above:
1. Social groups that have individuals that are *not synchronized* with the group (extra-cultural)
??Is this different from Tom&Kay's 'social impulse'?
2. (extra-cultural) are more flexible.
3. These individuals are more in tune with the external context of the group. They are more sensitive to the environment.
NB. This is caused very easily by "removing" or changing several components and chemical feedback systems of the brain.
??Is this so different from Tom&Kay's 'a slow shift in brain chemistry ...'?
4. "forced questioners"
5. powerful problem solvers.
Hey, Gourmet, et al:
Everything you're saying is exactly on point, and about as well stated as it can be. We all have different backgrounds, and that's reflected in our voice, here and elsewhere. But the live beings behind the words are all looking at exactly the same stuff.
When we speak of language creating the species, we mean that literally; we even take that as far as considering that the thing that's transferred to every child in learning to speak is genetic material expressed in logic rather than DNA.
The true depth of the process isn't really appreciated; we’re too close and familiar. But it’s the same thing that makes the Helen Keller story so compelling, even though few people can say exactly what that is, either.
That climactic moment at the water pump really was it, and usually it happens so gradually and in such a familiar context that we don't notice.
It isn't just that she finally grasped the concept that water could be represented by an abstract symbol, or even that there were other people in her world that could themselves recognize symbols as abstract representation. The breakthrough is in realizing that the symbol means the same exact thing to those others.
She knew that she wasn't alone, that there were others out there, and more importantly, they shared her experience of being. And that was the moment of descent into the interior logical model of reality that we all make. But it sure didn't look like she moved at all, even to her, did it?
At that instant, the character of her internal master model of reality changed, becoming imbued with her sense that it was essentially identical to Anne Sullivan's, and by extension, perhaps to everyone else's, too.
It might not seem like much, but you can imagine her excitement: now there was a whole universe to explore, filled with people to meet who might also share her experience of it. That transition takes place without notice; the internal model is assumed to be truly representative of reality after that, and since we need to be there to use symbols to communicate, we stay inside.
After a while, we forget there ever was a different reality, and when we grow up we may turn to things like painting or music or whatever to explore those worlds we abandoned too soon as children.
Our brains allow us to construct models that are superior, but how we measure that can be confusing. We can overcome some of the ordinary limitations of flat models, so that even by ordinary standards we stand out.
But we can model things that flat models can't even represent; there's a whole universe of possibilities to add to our internal reality models that can't be perceived at all by conventional methods.
Think about the cube example for a minute. Suppose the cube floated just a bit above the ground. There wouldn't be any ambiguity then, would there? There wouldn't be any 2D model at all, not even a square.
Some of the time, we ADDers live out there where normals don't even see anything at all, not even a there to be in. So the picture is complex, at least.
Remember, language is about the common model; as long as there are normals, language as it has been historically known will be pinned in the corner, stuck in the old way of modeling by definition.
(Notice how 'normal' has come to be defined by the strategy used to construct models, at least in this discussion. Cool, huh?)
When we make that descent into the common model of reality, as children, we accept a world that is defined by the logic inherent in language. Yet we ADDers seem to easily transcend this, albeit with some side effects. How is this possible?
We have to distinguish between the logic implied by language and logic itself. The logic of language is based in hard logic, but it's about whether things make sense, like having a debate about going to war. Going to war might not make sense, but not being able to debate the issue wouldn't make sense on a deeper level. That kind of sense might be enough to stop the war, but it's still not hard logic.
Real, hard logic isn't variable, or learned. It's implicit in the fabric of reality, an immutable property of the nature of nature itself.
So we can apply hard logic in constructing our models and come up with structures (and implied logic) that were never a part of the universe we were given by language. But we can't as yet talk about them, even internally, to our selves.
Just to confuse things even more, most of the worlds we can explore within ourselves are truly old, older than our orderly interior reality. There is often a sense that we already know what we’ll find there, although little real knowledge. Ancient memory is largely a myth, I'm afraid; it’s mostly about carrying around ancient apparatus, and recognizing its nature.
(Think in terms of gazing at old, crude paintings on the walls of our minds, and feeling a connection to the artist. Something like that.)
So where do you go when you paint? I recognize that place; it’s where I go when I play guitar and everything's working right. It's where Bryan goes when he picks up his truly ancient Mark VI and floats off into the stratosphere.
But there's a difference, for us; we have to be aboard for the trip in real time. We can't exactly go out there and look at what we've got when we get back, because it’s not there anymore. This is one of the fundamental differences between visual media like painting and sculpture and more ephemeral media like improvisational music.
We often do have a similar experience to what you describe when we hear a recording of a session. And your real time experience is similar to ours; the description of the artist losing conscious awareness of the brush is right on. Our focus is just different.
Where we go when we're doing this is what we’ve been talking about all along, at least in part. I really believe the real question is about this sort of experience, and the answer to that is there will never be language as we know it that will suffice, written or otherwise.
But we can imagine something completely new; Kay and I have periodically experienced different versions of it. Case in point: we sometimes dream the same dream. Not a similar dream; the actual same dream, with both of us in it, sharing the interaction with each other.
We wake up, look at each other, and know; anymore, we don't bother to talk about details, as if to prove what we already know. We just say a few words about the dream itself, and go back to sleep.
How often that happens we don't really know; all we know is about the times we wake up and speak to it. As near as we can tell, there's nobody else there, at least nobody conscious. But who knows?
When Bryan had a reaction to his first DPT shot and stopped breathing, we both heard him cry out. We jumped up, grabbed him and his brother and were at the hospital in less than three minutes. Normally, it's a fifteen minute trip without dressing and loading the family into the car.
Kay was giving him mouth-to-mouth most of the way; he finally started to breathe on his own when we were a couple of blocks away. But when we told the doctors we heard him cry out, they flatly refused to believe he had actually stopped breathing; their diagnosis was that he was just mad, and was holding his breath as a part of a temper tantrum.
Kay's a nurse, a good one. He wasn't breathing. But we don't really know if he actually cried out; we were asleep at the time, so where did the cry happen? Neither of us can describe it, other than to say it was unusual.
Our lives are full of that kind of stuff, and really, everybody's is. It's all a matter of whether you can even consciously think about it coherently enough to notice. Kay has been involved in long-term care for all of her professional life, now with developmentally disabled 'kids', but for many years with elderly adults.
She's an Alzheimer's specialist, and I have personally witnessed her communicating with people who have almost no connection with their external reality, who haven't said a coherent sentence in ten years or more. I can see where she is when this happens, pretty much the same kind of place we go to play or paint.
So is this her art? No, art is about trying to communicate about the experience of being in these places, trying to tell others about things for which there are no words.
If it works, we’ve extended the common model, and expanded the universe that we can access with language. But there's an inherent problem with this, as I mentioned before, that some stuff can't be represented in the logical space that language inhabits at all, in any form.
Just to confuse things more, there are gender differences that can create the appearance that two different models of a thing exist, when they're really the same. I use 6,500 words to say what you fit into 400 at most, including the disclaimers that you might not get it. (Hah!)
In that spirit, seeing that gender differences have popped up once again, I'll cut this short with a sports metaphor. (Hah! again.)
This is a little like a football game, in which we ADDers have the ability to float above the field if we wish. Some of us stay on the ground voluntarily, perhaps not really aware of our ability to soar.
Some of us occasionally rise up a few feet for a moment, but we seem saddled with the idea that scoring a touchdown isn't valid if we don't do it on the ground.
You can see where this is going, a world in which people cry foul when more and more of us start to soar during a game. They throw us off the team, try to ban us, but it’s really all over once the word begins to get out.
Sooner or later, some desperate coach will send one of us in to carry the ball twenty yards up and twenty yards in, to win a crucial game.
And it won't be long after the controversy over that dies down that soaring will become an acceptable part of the game, spawning a whole new world of 3D diagramming of plays, defensive and offensive sets, the whole nine yards. (Which isn't a football metaphor, incidentally.)
That's a pretty neat idea, and a lot about the differences in how ADDers communicate is about seeing that scenario or being immersed in it every day, as most of us already are. But some of us are trying to talk about something else, a different game entirely.
Football is played on the ground. Why bother to be there when you can soar? And so, we begin to imagine a new game, and trying to describe that is even more difficult, because the entire vocabulary of football disappears.
We could extend it to include a little soaring, if we turn a blind eye to how it's possible and concentrate on winning the game. But that only works as long as we walk on and off the field with both feet firmly planted on the ground, and float back down to the huddle between plays.
Once we give up the ground, it really is a whole new game.
Anyone wanna go out and play?
Peace. --TR =+= =+=
"There is no normal life, Wyatt.
There's just life. Get on with it."
And here we’ve got a significant divide, in that we’ve come to view certain forms as purposeful: their derivation seems to carry a particular intent.
We believe that there will always be pressure to formulate models that don't break the rules, that in effect 'stay on the ground', so they don't seem to invalidate the existing rules and understanding of the game.
In our opinion that approach is doomed. We’re not being stubborn, or trying to upset the apple cart purely for the sake of forcing change. Our models are simply representative of what you get if you include all that is provably significant.
We've been supremely careful to stay strictly neutral about what elements we include, which is one reason that we emphasize the notion of how much we can truly know about the external world, and how the uncertainty we are bound to accept about our knowledge of it affects the choices we each must make.
Part of our reluctance to directly debate Chain's models is that we see them this way, as equivalent and therefore intrinsically valid. But we also see how they include a classical sense of relativism; our approach allows us to reestablish the absolute, to exactly the extent that it's possible.
On the metalevel in which we make these decisions about the form of the model, we can see clearly that any approach that attempts to embrace relativism will necessarily fail to accurately model some significant elements, if they're represented at all.
So we include what we must and follow that where it takes us. We’re not absolutists; our models have that character, but that aspect of their form was decided for us, or perhaps despite us.
When we look at Chain's models, we see how they validly describe what they address. When we look at ours, we see a framework that describes that, and also the various elements that are necessarily missing. Those elements can be addressed with a different model, or Chain's might eventually be extended, or whatever. It doesn’t change what we’re describing when we talk about the metamodel web, how the brain works, how conscious experience arises and so on.
In that sense, our models operate on the metalevel above, but of course we’re constrained to discussing them in terms of how they apply on the same level Chain's models inhabit, or where stuff like the Jung/Myers-Briggs typologies live.
In the end, I doubt that we're very far off the mark in our description of the workings of the brain and mind and how conscious experience (and other kinds of experience as well) arises. But of course the logical framework we apply to discuss the result might vary all over the map, from classical psychology to football metaphors. (Could you tell I was a wide receiver and coached the game for several years?)
Still, they’re all valid in their own way despite the differences. The only thing that matters is the real deal, the thing we're trying to describe. And we can clearly see that all models that try to stay in safe territory will fail to adequately describe some aspects of the whole.
Maintaining synchrony with the common model is the ordinary, obvious purpose of the social impulse, so individuals who are 'not synchronized' have apparently managed to avoid its influence in some way.
But how that actually happens is all over the map. Consider a process by which all versions of some common object, perhaps a four drawer dresser, are forced to conform to a standard model.
There are two ways that a dresser could be considered nonconforming. In the most obvious it might simply be shaped differently, or perhaps have a different finish. But either way, it’s still demonstrably a dresser, and this is the kind of situation that the social impulse was selected to deal with.
A dresser could also be nonconforming in having a significant part of its structure extend into a different dimension. In this case it might not be immediately apparent that it is different, and the process ensuring conformity wouldn't be triggered until it became obvious we could store inappropriately large amounts of clothing in it.
There's a problem with this scenario, because refinishing the dresser isn't going to bring it back into conformity this time. Whacking a couple of inches off it won't do, either; by those measures, the dresser already conforms.
So the process is incapable of enforcing conformity, and the only remedy is to outlaw use of the dresser. And as long as all we do with it is store clothing, that works, even though it has significant social consequences.
But what if we decide to use all that space for a different purpose, perhaps even going so far as to add plumbing and electricity and move in, lock, stock, and barrel?
Now there are going to be problems, because we can no longer deal with the nonconformity with a supply of standard dressers, however endless. We can't live in a standard dresser, no matter how we fix it up; we need one of the fancy nonconforming ones that extends off into another dimension with lots of room.
That's a pretty good analogy for the present situation, I think. We have people that are nonconforming for different reasons, who depend on their AD/HD abilities to a greater or lesser extent, and are unwilling or unable to give them up regardless of the pressure applied by the impulse to conform and cause conformity in others.
Just to confuse things more, social groups have always had members that don't conform. That's part of a controlled process by which the social models are kept in sync with reality, which never sleeps. But it's got nothing to do with AD/HD, other than that ADDers are members of the group and may be more likely to assume that traditional role.
It's out of sync behavior, but it’s not AD/HD.
Of course, that's only the first bit, like noting that gasoline has a significant amount of potentially useful chemical energy locked away in its molecular structure.
That fact doesn't in and of itself bring the transportation industry into being, or dictate that it depend primarily on the gasoline fueled internal combustion engine. So there are a lot of details to look at and think about before we can imagine we understand why the frail human vehicle has defective king pins.