Are you slipping secret messages past me in the mail?

me: “…ideas about predictive models needing to contain as many data elements as the thing they’re predicting in order to be accurate…”

Mr. Cawley: “…Sorry, an unsound idea, or one misapplied. Predictive models do not need to contain as many data elements as the thing they are predicting in order to be accurate. There are all kinds of simplifying substitutions and shortcuts in formal and real behaviors. Even for every single detail. I have a really accurate description of the future value of every single cell of rule 0 after the initial condition for every initial condition regardless of size or number of steps, using just one element. When the behavior is simple it can be fully predicted without ‘one to one and onto’ modeling…”

I’m going to respond to this, four years later, because I’m poking around this forum again and, frankly, I get a little annoyed when members of the inner NKS clique take a superior tone with me (JC says I “miss basic points at the outset of the whole subject”…he’s “plowing the sea” by participating in this thread with me…he’ll “give it a try…and see if any of it sticks”…sorry you seem to have such a low opinion of me, JC; I admire your philosophical and analytical perspectives on this site and others). PJL took a similar tone with me in person at the D.C. NKS conference and Cawley does it with me in this thread. You all should be aware, as people who clearly want to promote an NKS slant in the world, that when you approach outsiders like me with that kind of tone, it’s a turn-off to your whole group. That said, I am clearly very interested in thinking about these ideas and participating with you within the context of this forum, so I’ll move on to the content of my rebuttal to part of what JC writes above:

I may not have been as clear as I should have been, in 2006, in my reference to the idea I’m talking about, which I heard via — I don’t know — some popularized Hawking book. The idea is this: to predict an irreducible system (of the type most often discussed in this domain), one for which there is no shortcut-style, reductive description (unlike what you usually have in math and physics…math and physics *are*, essentially, reductive descriptions), you end up needing to make your simulation more and more complex, using more and more “elements” (physical elements, conceptual elements). And a dynamic starts to illustrate itself: if you’re creating a simulation of what’s going to happen next in a complex universe, then the more accurately you want to do that, in cases where there is no reductive description of the history or unfolded dynamics of the world, the more you approach a situation in which your simulation is less and less a simulation you can run beforehand, and more and more an exact copy of the thing you’re trying to simulate in the first place. Which, when time is part of the universe, means that you get less and less of the benefit of being able to predict events with your simulation…since the simulation takes as long to run as the universe itself.

In the part of your response that I quoted, you’re talking about simple systems, clearly, systems that can be reductively described. In my proposition about classifying one’s own complexity, or classifying a system that you cannot predict, clearly I am not talking about that kind of system.

I wasn’t as rigorous as I should have been in my original post, perhaps. What I was trying to get at, was that—I’ll make a weaker and more articulated assertion here—when one wants to figure out exactly how complex an observed system is, there are limits inherent in that: if you “cannot predict” the system such that you have no exact reductive description of its unfolded dynamics, then there are elements in the unfolded history that, since you can’t predict them, you don’t understand well enough to eliminate the possibility that they contain complex elements. If you can’t predict a system completely, if you can’t reduce it completely, then setting an upper bound for its complexity seems to me to be at best a dicey matter! (a functionally-capping upper bound…an upper bound that is lower than the highest upper bound in your classification scheme, Class IV in the case of NKS)

If I’m a teacher and I give you a test, and I have a model that allows me to always guess right before you take the test, about what you will answer on the test, then I can claim to classify your test-taking behavior in a wholly-more-secure way than if I can’t predict what you will answer on the test…because in the former case, since your behavior doesn’t deviate from my reductive description, it would be significantly harder to say that there’s anything in your behavior that’s eluding me than if your behavior deviates from my best reductive description (prediction). If you’re doing something I don’t understand, something I can’t predict, then you may very well be doing something that is highly complex, sensible, meaningful, etc., that, if I understood it, or could recognize it, or describe it, might affect my classification of your complexity (upward). I might be filling in ovals on a multiple-choice test to spell out “this class is boring” in a compressed binary format, completely ignoring the questions being asked of me. That’s an example of a system whose output (my answers on the test) looks Class III to you, but is really Class IV. So while I obviously recognize that there is a taxonomic difference between Class III and Class IV systems, the example I just gave should be sufficient reason to doubt the general claim that behavior which looks random cannot contain complex, intelligent, or universal behavior.

Distinct from that question, in my mind, is the question: if I know the rules of the system and its initial state and I see every part of the output of the system from step 0, can an intelligent, non-random system produce behavior that looks random (Class III) from the very beginning? My example of the student differs from this in that I wasn’t observing the student from step 0, didn’t see its initial state, etc. In that example, perhaps obviously, only part of the output of the system is random. Is there a Class III-looking CA, or some other simple system, that looks random from step 0, but that actually contains nonrandom, meaningful, behavior? I certainly don’t know, or else I would post the damn thing here. It is hard for me to imagine something like an ECA that could do this…organize itself through time, having instantly assumed a random-looking output. It seems to me that there might usually be some initialization period during which the thing had to decide to, for example, write compressed, binary-encoded messages in multiple-choice answers on a test. (To be more demanding of the test example, it would have to move in the direction of there being the lookup-table part of a compressed message encoded somewhere in my test…the decision to be cryptic would have to be somewhere, right?, in the rule or in the system output(?)…and then, would it be possible for that decision or nature to itself be so cryptic that it looked random to me…(?)…that, frankly, is hard for me to imagine.) But, myself, I do not see reason enough to cast out the possibility that this could happen, that this kind of system could exist.

For one, and this is quite general, but I think relevant here: the way we’re viewing CA output is part of why it seems to have form, or to be random, to us. Even the 2d grid, widely-regarded as simple, probably one of the least-presumptive output visualization mechanisms our species can think of, contains assumptions and mappings that inform our ability to see the behavior of the system. It may be that different visualization or perception mechanisms for CAs (and other systems, obviously), when used, would force, say, the 256 ECAs into different Class I-Class IV categories. Maybe rule 110, when viewed through my network-unrolling methodology, looks like a different class than it does in the 2d-grid perception mechanism.

For another, I happen to have seen, and have posted here, years ago, systems very much like ECAs except with a denser connectivity, if you will—the “water” systems, which are like ECAs except with two rows of memory, while not fulfilling the requirements I’ve given above of a system that appears random from step 0 while actually containing highly-complex, nonrandom order, look a whole lot more like TV snow, on the whole, than any of the ECAs, while clearly not being purely random in their behavior. That doesn’t, of course, mean that there are systems with no detectable initialization period that look completely random and yet contain decidedly nonrandom and meaningful behavior, but to me it’s one reason to wonder if perhaps there might be such systems.
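
The “water” systems’ actual rules aren’t reproduced here, but a hypothetical sketch of an ECA-like system with two rows of memory might look like the following, assuming one particular wiring (each cell reads its three current-row neighbors plus the cell directly above in the previous row; the function names and the 4-input encoding are my own illustration, not the original systems’ definition):

```python
def two_row_step(prev, curr, rule):
    """One step of a hypothetical ECA-like system with two rows of memory.

    Each new cell reads 4 binary inputs: the cell directly above in the
    previous row, plus (left, self, right) in the current row. With 4
    inputs there are 2**4 = 16 neighborhoods, so the rule is a 16-bit
    number rather than an 8-bit one. Boundaries wrap around.
    """
    n = len(curr)
    new = []
    for i in range(n):
        idx = (prev[i] << 3) | (curr[(i - 1) % n] << 2) | (curr[i] << 1) | curr[(i + 1) % n]
        new.append((rule >> idx) & 1)
    return new

def run_two_row(rule, first, second, steps):
    """Run the system, keeping the full history of rows."""
    rows = [first, second]
    for _ in range(steps):
        rows.append(two_row_step(rows[-2], rows[-1], rule))
    return rows
```

The denser connectivity is visible in the rule space alone: 2^16 = 65,536 possible rules instead of an ECA’s 256, which is at least consistent with the richer, snowier textures described above.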

I suppose, in a way, that some classic PRNGs are non-CA examples of systems whose output, from step 0, even with visibility into the system rule, does not demonstrate a visible initialization period in which the system is organizing itself into a state where it can slip secret messages past me in the mail, and yet, those systems demonstrate decidedly nonrandom (cyclic) behavior, even while most people’s way of perceiving the system makes the system look completely random, through and through. It’s not intelligent behavior, as far as I know, so I don’t find that example very satisfying, myself.
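
A linear congruential generator makes that concrete: from step 0 its output looks noisy, with no visible organizing period, and yet the sequence is strictly deterministic and cyclic. This is a toy sketch with deliberately small parameters (real PRNGs use far larger ones):

```python
def lcg(seed, a=5, c=3, m=16):
    """A toy linear congruential generator.

    Even knowing the rule (a, c, m) and the initial state (seed), the
    stream of outputs shows no initialization period; it simply starts
    looking noisy. But it is completely nonrandom: the orbit is cyclic.
    """
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

def cycle_length(seed, a, c, m):
    """Return the period of the LCG's orbit from the given seed."""
    seen = {}
    x, step = seed, 0
    while x not in seen:
        seen[x] = step
        x = (a * x + c) % m
        step += 1
    return step - seen[x]
```

With a=5, c=3, m=16 the generator visits all 16 states before repeating, so the “secret” structure (the cycle) is total, even though no single window of output announces it.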

Is there a system where I can know the rule, see initial state and output from step 0, that looks random from step 0 (when using the 2d grid visualization, by which we’d say it’s Class III), yet is meaningfully nonrandom when viewed in a different way? I don’t know. I’ve looked through quite a lot of CA-like systems, programmatically searching for such an example, without finding one.

You’re right, Mr. Cawley, you can classify anything you please. =) (I hope you keep doing so.) And I like the way you all classify things, Class I-IV and such. There’s still a nagging question, though, in my brain, about whether I can be sure that every Class III system is, in fact, not possibly a universal system that is just hard for me to see. Short a satisfying example, however, I certainly defer to you that what looks unintelligently random, is exactly that.

NKS and physics and GUTs (or :: the religiosity of physics)

“Now, of course, the single greatest modeling challenge for NKS is fundamental physics.” (from David Brown)

Mr. Brown, I’m going to pick on you a little bit. But it’s not personal. The sentiment you express in that sentence of yours I quoted is a thematic sentiment in a great enough segment of NKS discussion that I’ve heard, for me to want to say a bit of what I think about it.


With respect, I understand that the statement you made above represents a view held by many who come to NKS from physics, and while I certainly recognize that an NKS theory of our “physical” world would be a historic breakthrough of the first order, I don’t think that the above statement will be true for many people for more than about 15 years (from now)…if it’s even true now.

It would be an amazing and profound thing if we, organisms within our “physical” world, modeled the physics of our world with NKS, or modeled them completely with any methodology. It would be the ultimate look in the mirror, perhaps.

But thinking isn’t going to stop if that happens, science isn’t going to stop if that happens. Even though our world, from our point of view, has a shit-ton of atoms, modeling the particular world we’re in, while profound (from our point of view) is hardly the greatest challenge—or even the greatest modeling challenge—for NKS, or any other discipline.

Given the limit of running a simulation of a particular system from within that system…the limitation of running out of building blocks to use for the simulation of the thing as your simulation, in accuracy, in completeness, in size, approaches those of the universe you’re simulating…having a GUT, while amazing and useful and profound, won’t be the end of modeling…

Modeling the behavior of a corporation that is modeling your behavior, for competitive purposes, for example, will be more of an engineering and a theoretical challenge, I think. Modeling the behavior of the simplest organism or culture of organisms will be a greater challenge than modeling the physical universe…and before you say it, I think that is true even though the world of the corporation, simple organism, and culture I am talking about modeling is of course in actuality built on top of the substrate of our physical universe.

While true, that doesn’t matter, practically, for the majority of simulations people do now, or are going to do.

Simulating the universe based on an accurate model of physics is of course highly useful…for understanding and observing in high detail small little parts of what goes on in our world…like the first parts of XYZ-type-of-explosion, etc. And of course whoever creates such a model will be the next Newton, in terms of human history books. And that matters to people’s egos, in addition to the accomplishment having real value.

But to say it’s the greatest modeling challenge for NKS is just wrong. It might be the greatest in some philosophical sense, it might be the greatest in some sort of metaphysical sense, but in an engineering sense, in a theoretical sense, it is not the greatest challenge for NKS.

If we were living inside an Amiga (if we were complex emergent beings running on an Amiga), then us coming up with a model that matched the output of whatever processor is in an Amiga, would be profound. It would mean a lot to us. (As an aside, it wouldn’t even mean that we understood the workings of the Amiga’s processor, and it wouldn’t give us a clue as to what it might mean from some other organism’s point of view that “we were running on an Amiga”. But that’s not the point I want to assert here. What I want to assert here is that:) Once we modeled the output of the Amiga’s processor such that we completely understood the instructions that figured into the running of the universe that we were running on, there would still be lots for us to do…and I think: greater things for us to do, in terms of engineering and theory.

That’s because it doesn’t matter, in many ways, that we’re running on an Amiga. There are probably already many systems in our world (systems running on top of the physics of our world) that I suspect will be harder to model than the physics of our world…greater challenges of modeling (in an engineering sense, in a theoretical sense) than the modeling of our particular universe. (Assuming you don’t have access to the rules of the system…which in *most* cases you won’t.)

And frankly (I know that some physicists aren’t going to like this): coming up with a physics GUT doesn’t mean you understand everything that is built with physics. It doesn’t even mean you can practically simulate any particular thing that happens as a result of physics. Even theoretically, there are limits on physical simulation of the universe from within the universe (correct me if I’m wrong, please, physicists). Even if you could control a huge portion of the atoms in the universe while simulating the universe with those atoms, is it not a snake eating its tail? Is there not a simple, practical limit on the completeness of a simulation of a thing that is running within the [limited] resources of the thing itself…such that you approach a situation where your simulation *is* the thing, and it becomes completely accurate yet fails to maintain the characteristic of a simulation wherein you can figure out some useful information about a future event *before* it happens? (I’ve been told before that I misunderstand, or misapply, this idea of Hawking’s…but I believe he very clearly says exactly this.)

The physics GUT gets a lot of air-time; it’s profound, it’s elusive, it will add someone’s name to the books of human history, but it’s not, in many ways, the end-all be-all that it is sometimes touted as.

Whether it’s with NKS, or whatever theory, when someone comes up with the first widely-accepted GUT, this is what is going to happen: it’ll be on the front page of all the papers, no one will understand it, someone will get their name next to Newton’s, and then everyone (regardless of their education) will be like: so now what? And then we’ll use the GUT in select simulation projects where it will be exceedingly useful, and the majority of simulation will continue, unaffected and largely uninformed by the particular GUT. Then, some years later, someone is going to come up with another GUT, based on different theory, and both will work equally well, and we’ll work on translating the theory of the one into the theory of the other…

Our universe is special to us because it’s ours. But systems running essentially in emulation mode on our particular hardware can be more profound to us, as engineers and theoreticians, than the physical universe. (And the fact that one is running on the other does not even mean that the substrative system has a meaningful relationship with the system it supports…knowing everything about physics does not necessarily translate into knowing anything meaningful about a particular system physics supports. This should be clear if you think about it via the Amiga analogy: Linus Torvalds’ knowledge of the Linux kernel gets him 0% of the way down the road of understanding many of the programs people run on Linux…maybe I’m running an old Apple OS 9 emulator and programming emergent, sentient beings in C on top of that. There’s no meaningful relationship, there, between the thoughts of the emergent beings and the Linux kernel. A theory of the Linux kernel will not essentially be useful or even needed in order to do the more profound (greater?) simulation and modeling of the thoughts of the emergent beings that some of us would want to do if we were other beings within that particular universe…)

I understand, I think, the weight that is put on modeling our particular universe. I agree that it is profound from a philosophical point of view and from what I can really best call a metaphysical point of view—or a religious point of view…but beyond the specific religion, essentially, of our particular universe, there are all kinds of other universes to model…and because there are necessarily many of those emulated universes, while ours is just one (the one…our *uni*verse), that verse is absolutely not the greatest one we will encounter, even though it has a special place in our understanding.

crosspost on

Why It Will Be Useful to Bridge the Gap Between Programming Languages and More Simply Enumerable Systems Like Cellular Automata

On the one hand, we have programming languages like C. On the other hand, we have very basic logical systems like cellular automata (CA). Even some of the simplest CAs (and other simple Turing machines and like systems) are universal…you can use them to emulate any other system. Specifically, Matthew Cook showed that rule 110 of the elementary CAs is universal.

So there are very simple systems that, given their initial conditions, if you run them long enough, can be used to emulate any other system. Anything you could program in C can be emulated by rule 110 or a number of other simple systems. Setting up the initial conditions to make rule 110 emulate most C programs would be an elaborate process, and the number of steps you’d have to run the CA for would probably be very large, but the point is it could be done for any C program, any Pascal program, any program expressible in any language.
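
To make the “simple system” half of that concrete, here is a minimal sketch of an elementary CA simulator (I use wraparound boundaries for convenience; the formal definition works on an unbounded tape, and the helper names are mine):

```python
def eca_step(cells, rule):
    """Advance an elementary CA one step under a rule number (0-255).

    Each new cell reads its (left, self, right) neighborhood as a 3-bit
    index into the rule's binary expansion; boundaries wrap around.
    """
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run_eca(rule, initial, steps):
    """Return the full history: the initial row plus `steps` successor rows."""
    rows = [initial]
    for _ in range(steps):
        rows.append(eca_step(rows[-1], rule))
    return rows

# A single black cell under rule 110 grows the familiar left-leaning texture.
rows = run_eca(110, [0] * 30 + [1] + [0] * 9, 20)
```

The entire “machine” is the lookup in `eca_step`; everything Cook’s universality proof builds (gliders, collisions, emulated tag systems) lives in the initial condition, not in any added machinery.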

The computational devices contained in the rule for an elementary CA are extraordinarily simple. That fact is what makes it interesting that ECAs can produce a universal system. From a theoretical point of view, the question of interest is: what is the simplest system that is universal? Answering that would point to something about the essence of computation. Some people are interested in pinpointing this minimum threshold above which computation and complexity exist, and below which they do not.

I am not a theoretical physicist, however; I am a programmer. The minimum threshold for complexity is interesting to me to some degree, but from my point of view, there is a more interesting, and possibly very practical, reason to look at cellular automata and their extensions. A CA is a programmable system just like a C function or program. It has inputs and outputs and a way of executing. You can look at the initial state of a CA as program input and the historical states of the system as program output. The CA’s rule is the program. Most ECAs have very simple, probably useless, output. If you go through the elaborate process of seeding a universal ECA with the correctly-encoded input string, then this system can theoretically do whatever you might ever hope any system could do. But in practice no one’s going to do that in a business situation or other situation that is constrained by non-academic goals.

To the complexity theorist, maybe the most interesting thing is to find the simplest system that can produce output as complex as the most complex system. What I envision around the corner is something different: extensions of elementary cellular automata and other discrete systems, bridging the gap between the simplest programming languages in the world (Turing machines and CAs) and the more complex programming languages in the world (C, C++, etc.), will be useful in what we might call automated software development.

The language-oriented programming languages we’re used to contain common constructs across languages. A while loop, while its syntax may vary, is the same idea from language to language. So these languages contain an essential set of building blocks that a programmer needs in order to get anything done in any language. You can get by without a do-while construct, and you can get by without both do-while and while-do as long as you have a for loop; but a language with no looping constructs at all would be a lot more tedious to get things done in than a language with at least one. Of course you can get by without any looping construct as long as you can set and read variables and you have a goto; it’s just more tedious. It’s as if there’s an appropriate set of concepts that human programmers want to have so they can think effectively about designing algorithms to compute things.

CAs and simple systems like them have a similar dynamic at play, I think. The cells in an elementary cellular automaton are aware, when calculating their next state, of their own current value and the values of their two neighboring cells. If the cells have two possible values, then there are 2^3 possible input states for each cell update, and there are 2^(2^3) ways to assign outputs to those 2^3 states…essentially, there are 2^(2^3) ways to program such a system. The grammar of an ECA makes it so that there are exactly 256 programs (or functions) that you can write.
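
That 256 can be checked directly: a rule is just an assignment of an output bit to each of the 2^3 = 8 neighborhoods, decoded here using the standard Wolfram rule numbering:

```python
# All 2**3 = 8 possible (left, center, right) neighborhoods.
NEIGHBORHOODS = [(l, c, r) for l in (0, 1) for c in (0, 1) for r in (0, 1)]

def rule_table(rule):
    """Decode an ECA rule number (0-255) into its 8-entry lookup table.

    Standard Wolfram numbering: the neighborhood (l, c, r) is read as the
    3-bit index 4*l + 2*c + r into the rule number's binary expansion.
    """
    return {(l, c, r): (rule >> (4 * l + 2 * c + r)) & 1
            for (l, c, r) in NEIGHBORHOODS}

# 8 neighborhoods, each independently mapped to 0 or 1: 2**8 = 256 programs.
```

So the “grammar” really is just the shape of this table: change the neighborhood size or the number of cell values and the count of writable programs changes with it.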

The grammar of C makes it so that there are an infinite number of programs you can write.

What I see happening in the future is that people will develop software systems by a bridging of that gap.

In C, you have a relatively complex programming language (compared to ECAs) in which you can write an infinite variety of functions. In ECAs, you have a relatively simple programming language in which you can write 256 programs.

(Of course there are simpler types of systems in which you can only write one, or two, programs, and these systems produce very boring output. It has been suggested that a CA rule has to have at least three inputs to a cell for the system to produce complex behavior. I have shown this is not true if the system uses a heterogeneous rule set…with a heterogeneous CA, two inputs is sufficient for generating complex behavior, by pushing the complexity out of input-space and into another area.)
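
The heterogeneous construction isn’t spelled out in detail here, so the following is only one plausible reading (my own illustration, not the original system): each cell position carries its own 2-input, 4-bit rule, so the variety lives in space rather than in a wider neighborhood:

```python
def hetero_step(cells, rules):
    """One step of a hypothetical heterogeneous CA.

    Cell i applies its OWN 2-input rule rules[i] (a 4-bit number, one of
    the 16 possible Boolean functions of two inputs) to the pair
    (self, right neighbor). A homogeneous 2-input CA is the special case
    where every entry of `rules` is the same number.
    """
    n = len(cells)
    return [(rules[i] >> (2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]
```

The point of the sketch is just the bookkeeping: complexity that a homogeneous rule would need a third input to express can instead be encoded in the spatial pattern of `rules`.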

Anyway, the point is that just as language-oriented programming systems have a lowest common denominator required to sanely think about a certain class of computational problem (loops, conditionals, objects, whatever), simpler, non-language-oriented programming languages like CA rules also probably have a set of concepts in which it is appropriate to program a CA to do a certain class of computation. An operating system written without any looping constructs would be tedious to write. In the same way, programming most operations using only the rules possible in an ECA would require a prohibitive amount of work. The extensions of elementary cellular automata that I made in the last few years have had this interplay as part of their developmental backdrop. ECAs are incredibly simple, C++ is incredibly complex, but what lies between them? What is the next most complex thing after an ECA, heading in the direction of C++? A few of the things I’ve added are:

There’s nothing in these concepts like a variable, or a class, or a loop, but, while these concepts are simpler than what C++ provides, they are more complex than what the ECA ruleset supports natively. A universal ECA could emulate these concepts, but it would take lots of extra time and space and the emulation would not be simple.

One NKS (New Kind of Science) motive is to find the simplest possible system that can theoretically be programmed to do anything. There’s another option though: find intermediate concepts that lie somewhere between impossibly simple ECA rules and C++ grammar that are the appropriate [language] concepts for programming solutions to certain classes of computational problems. C++ has all the concepts you would ever need to do anything, and C++ can be emulated with an ECA, but a CA with memory allows you to program, very simply, things that would be very complex to program using an ECA and for which C++ would be overkill.

When I say program, I am definitely not talking about the act of a human programmer. The mission here is for a human to write programs that write programs. In inference systems, we want to recognize patterns or classify objects, and perhaps do other things. If we knew how to describe the solution to an inference problem ourselves, and were selling that solution to someone, we would be in the position of translating existing knowledge into a human-understandable programming language: if we knew what made someone a good team player and could describe that to ourselves in English, then we could translate that knowledge into C++ and sell the software system to someone to use it.

The situation we’re in is actually that we don’t know the solution to the inference problem, so we’re going to try to generate a program that does what we want to do and then sell that. We could try to make a system that generates an inference solution in English, or in human-readable logic of some sort. This approach has the benefit that at the end of the day, the customer understands how their inference technology is doing what it is doing. The main hazard of this approach, as I see it, is that the assumptions imposed by the language of expression (English or human-readable logic) might make it hard or impossible to find many useful solutions to the problem. The language of expression might not be an appropriate language in which to express some good solutions to the problem, so making a system that had to operate within that language would limit what it could do. I think making a system that translated its native solution into a human-readable language of expression would be extremely difficult.
If such a system were not translating its native work into a human-readable language of expression, but rather operating within that language, the space through which it would have to search for solutions would be vast (and of an awkward shape, I think: constructing a genome to represent natural-language or human-logic-type programs is something I have tried and not had much luck with). So, in writing programs that write programs that solve pattern recognition problems, the two extremes of language complexity (something like an ECA on the one hand and something like C++ on the other) are both problematic in their own ways. You could summarize their difficulties by saying that one is a mode of expression that is far too general, and the other far too specific, for most of what we’d want to do. In an ECA, you can say anything, but the generality of the mode of expression makes it unlikely that you will ever say anything useful quickly. In a human-comprehensible language, you can also say anything, but the specificity of the mode of expression makes it likely that many things that would be useful are difficult to say.

That’s why bridging the gap between those two extremes will be useful. All the machine learning methods are essentially ways to search through a space that’s too large to search through completely. Each one of them is a path through combinatorial space, where the elements we’re combining are the programming language elements described above. Whether a genetic algorithm or a neural net or a tag system, these algorithms are combining what can be viewed as programming language elements. Often the elements being combined are very simple, as in the case of classic neural nets—what each node does is simple, and it’s the combination of their operation in a particular network that makes something useful happen. Just as with emulating a loop in BASIC, though, one way to think of what some portions of a classic neural net are “actually doing” is that the network has stumbled upon an emulation of a larger computational concept. In the same way that a universal ECA can emulate any system (over a huge number of time steps), a simple neural net can emulate any system (with a huge number of cells). By extending the computational building blocks upon which these machine learning algorithms operate, in a way analogous to adding a looping construct to BASIC, I think we will have the opportunity to hit a programmatic sweet spot like the one human programmers have arrived at. We have our while loops and conditionals. CAs and simple systems like them may have a set of programmatic constructs that are less complex than C++ but more complex than ECAs, that may be the appropriate tools with which to automatically build solutions to certain classes of computational problems. Maybe they’re different from problem to problem (?). Maybe their appropriateness has something to do with the system they’re used in—i.e. memory is more useful as an extension of a CA than of a tag system (?).

[Some genetic algorithms] already do this gap-bridging, because of the way their genes are interpreted. The genetic strings there are interpreted as programs in one of several mini-languages. They do this effectively, but not as effectively as they could. That’s because the mini-languages involved make too many assumptions and impose too many restrictions on the solutions that are possible to find. If, instead of using human-created mini-languages to interpret the genetic code, we used CAs and CA derivatives whose rule sets are enhanced as described above (by adding memory, using morphic cells, etc.), then we would have a much more open-ended, much more capable genetic search system. Right now one evolves gene strings that are interpreted by human-written mini-languages. The human-written mini-languages are effective to a certain degree, but almost certainly preclude arrival at lots of good solutions because of the limits in the modes of their languages of expression. If one replaced the human-written mini-languages with [extended] CA-based languages, there would be fewer human assumptions in GA systems and we’d probably be able to arrive at better solutions to classification problems.
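
As a toy illustration of gene strings being interpreted as CA programs, here is a sketch in which the genome is simply an 8-bit ECA rule number, and a hill-climbing search (a degenerate GA, used for brevity; the task and all the names are my own stand-ins, not any particular system described above) looks for a rule whose one-step output matches a target pattern:

```python
import random

def eca_step(cells, rule):
    """One step of an elementary CA under the given rule number (0-255)."""
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

def fitness(rule, inp, target):
    """Count positions where one CA step on `inp` matches `target`."""
    return sum(a == b for a, b in zip(eca_step(inp, rule), target))

def evolve(inp, target, generations=200, seed=0):
    """Evolve an 8-bit genome (an ECA rule number) by single-bit mutation.

    Elitism (only accepting mutants at least as fit) guarantees fitness
    never decreases. A stand-in for interpreting gene strings as programs
    in a CA-based language rather than a human-written mini-language.
    """
    rng = random.Random(seed)
    best = rng.randrange(256)
    best_fit = fitness(best, inp, target)
    for _ in range(generations):
        child = best ^ (1 << rng.randrange(8))  # flip one bit of the rule
        child_fit = fitness(child, inp, target)
        if child_fit >= best_fit:
            best, best_fit = child, child_fit
    return best, best_fit
```

The same scaffolding works unchanged if the genome encodes a richer rule (memory, heterogeneous cells, etc.); only the interpreter of the gene string changes, which is exactly the substitution being argued for.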

9/11 and 7/7 Training/Real Event Coincidences Considered Through the Lens of Cellular Automata Models

The coincidence of training exercises with multiple of these terror incidents (9/11 and 7/7) is especially interesting to me, and causes me to consider esoteric theories involving post-human influences. Yes, the exercises could be part of the activity of a terror-inducing human influence as part of a “conspiracy”. They could as well be pre-existing events with different motives, opportunized by a terror-inducing human influence. There could be, and if the simplest theories are true probably is, an interesting interplay between the realities of the terror proponents and anti-terror proponents involved in the events…agents with varying perspectives on what they are doing and what they think they are doing…no doubt that is the case if any of the simplest theories are true…you wouldn’t know what you were part of, you wouldn’t even know what kind of agent you were…certainly one can imagine plausible scenarios for 9/11 wherein “U.S.” administrators knew of the part of 9/11 they were planning, but, given that they worked with other factions, some of the key events of 9/11 were a surprise even to people who thought they were planning those events…I wouldn’t be surprised if high-level spy/counterspy conflicts became apparent on that day to the major conspirators (you might have thought you had planned tragedy X, but the people you got to help you from “the other side” took advantage of that situation and you ended up with tragedy X * Y, or with tragedy Z through a series of crosses)…if such things are going on it is truly tragic…we don’t need that kind of complicated puppetmastering to enjoy being human civilization. But beyond all those theories I’m grappling with some more abstract ideas. The coincidence of training exercises in both 9/11 and 7/7 might have the simple explanation that both events share a genealogy of conception and planning. But that coincidence might have a less-easy-to-understand origin, a non-human origin.
That’s not to say this idea would be inconsistent with human-oriented theories of what happened on those days. But I’m imagining there might be a different way to understand that coincidence, one that wouldn’t preclude the essential human complicity of the obvious theories, but that also wouldn’t imply there was an originating agent who was using training exercises [whether planned by that agent or not] in conjunction with the actual [bombings] to [whatever: help commit the actual bombings, be there to help minimize human suffering from the actual bombings, or whatever]. There may be another way of looking at it, wherein the training and the actual events tend to occur coincidently without that meaning the two are causally related, either as cause and effect (in either direction) or as direct-common-cause siblings. Maybe there is a theory in which the training and the actual events can be distant common-cause siblings…common-cause cousins, in a way that indicates a larger event-entity we have a partial view into, whose characteristic features surface in these types of events, but whose characteristic features’ relationships aren’t describable in our logics. I’m grappling with thinking along those lines.

A repost of a post of mine from the NKS site on 20060922, which gets at what I’m saying about the events perhaps being indirect common-cause cousins (“the system that made it versus what it looks like”):

The most amazing lesson from NKS to me is: Let’s say I am living in a world, or viewing a system, which appears to have behavior/activity/forms that are cohesive with respect to the X and Y axes…like in this picture I’ll attach. To me, subjectively, it looks like squiggles, like someone drew squiggles on a basically 2-dimensional piece of paper, with an eye capable of seeing forms in at least two dimensions. It seems that way because if one travels along either the X or Y axis alone, the “squiggles” close themselves. That is, there are closed forms which, whether viewed from left to right or from top to bottom, open and close themselves in a circular form. The squiggles are mutated circular forms, warped circles. Which, consciously/receptively to me, makes it seem like the method behind their creation had some at-the-moment knowledge of at least 2-dimensional space. Right? How else could you draw a circle, or conceive of one?

But of course you can. We can write an algebraic formula that plots a circle in 2-dimensions even though the formula can be said to have no knowledge of dimensions at all.
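That claim is easy to demonstrate. In the sketch below (my own illustrative Python, not from the original post), every cell of the grid is decided by a purely local test on its own coordinates; no step of the computation “knows” a circle is being drawn, yet a closed ring appears from our 2-dimensional vantage point.

```python
# Each cell is computed independently: the predicate sees only its own (x, y).
size, r = 21, 8
center = size // 2
rows = []
for y in range(size):
    row = "".join(
        "#" if abs((x - center) ** 2 + (y - center) ** 2 - r * r) <= 2 * r else " "
        for x in range(size)
    )
    rows.append(row)
print("\n".join(rows))   # a closed ring of "#" characters
```

The tolerance `2 * r` is just a drawing choice to make the ring about one character thick; the point is that the loop body never touches any cell but its own.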

So from a technical point of view what I’m saying here is obvious, ancient, basic, boring.

But from an experiential point of view, I find that CAs and some of these other NKS systems illustrate beautifully a profound feature of the world, which is:

The method that created the thing I’m observing might have *no idea whatsoever* of the [dimensional/shapely] context in which I’m observing the thing. Right? The thing that’s “drawing a circle” may have no idea that it’s drawing a circle, because what it’s doing is only drawing a circle from my point of view.

What’s profound about this for me is to re-consider my phenomenal world: the thing is, when I see [a circle], while that thing seems like an indisputably cohesive whole to me, it is quite likely that the two vertical or horizontal halves of that circle were computed *with no knowledge of each other*. You see that from my attached squiggle, yes? Those closed forms, those forms which look closed from my view of them as a 2d image, were created by a process wherein the “left half” of the thing had virtually no knowledge of the “right half” of the thing…for almost all of the form’s existence through time in the second dimension!!!

So it’s quite possible, and maybe even likely, that for complex cohesive forms I experience through the 4th (or whatever) dimension of time, the cohesions and symmetries in those forms are not, properly, causally related to each other. That a circle is not actually, at the level of its formation, a circle at all! It may actually be two causally unacquainted formation sequences that *happen* to be symmetrical. Chairs usually have four legs. But it may very well be that the legs have nothing to do with each other…or, more precisely, that in their formation, whatever forms leg 1 has absolutely no knowledge of whatever is simultaneously forming leg 3.

You see what I’m saying? There’s a metaphor in philosophy of a two-sided tapestry…originally the metaphor is that the chaotic side of the tapestry is supposed to represent how the world seems to us…and god is on the other side, the organized side, god is making sense, and because we happen to be observing from the back side of the tapestry everything seems crazy.

But I think we have concrete reasons, now, to think the metaphor runs the other way:

It may well be that god’s side is the side of chaos [chaotic to us], the side with the strings hanging all over the place in no apparent order [not apparent to us]. And it’s because of how we observe the tapestry…through a systematic (and trivial) methodology of simplification…that from where we sit, the tapestry takes on [a simplified] order.

I think “reality” is the complex side, the noumenal world. And what we observe, which is far from how things “are”, our phenomenal world, is the organized, the simplified, the “sensible” side of the tapestry.

And you see, when I say that the two sides of a circle may not be causally related to each other, I’m not saying they have *no* relationship…the thing is, they are related *in abstract parallel* to each other, but not because they have at-the-moment shapely connectivity. What they have in common is a fundamental, universal creation methodology. So: when I find legs 1 and 3 of a chair looking like each other, it may be that it’s not that legs 1 and 3 know about each other (that they exchange information in-the-moment), but that they have a universal [rule-based] commonality…yes, they are similar, but not because one causes the other…instead, it’s because they share a common cause, a universality, the rule, which sometimes happens to pop up next to itself in a way that makes me think that C(x, t) is related to C(x+m, t) when “really” it’s that C(x, t-q) and C(x, t) share a relationship that is also shared by C(x+m, t-q) and C(x+m, t).
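The C(x, t) point can be checked directly on an elementary CA. In the sketch below (my own illustration; rule 30 and the particular positions are arbitrary choices), a cell’s value after t steps depends only on initial cells within distance t, so flipping a cell 30 positions away leaves C(x, t) untouched: the two regions share only the rule and distant common ancestors, not in-the-moment information.

```python
import random

def run_eca(row, rule, steps):
    """Iterate an elementary CA with periodic boundaries for `steps` steps."""
    n = len(row)
    for _ in range(steps):
        row = [(rule >> ((row[(i - 1) % n] << 2) | (row[i] << 1) | row[(i + 1) % n])) & 1
               for i in range(n)]
    return row

random.seed(0)
width, steps = 101, 20
a = [random.randint(0, 1) for _ in range(width)]
b = list(a)
b[80] ^= 1                        # flip one cell, 30 positions away from cell 50

fa = run_eca(a, 30, steps)        # rule 30
fb = run_eca(b, 30, steps)
print(fa[50] == fb[50])           # → True: after 20 steps, cell 50 can't "see" cell 80
print(fa == fb)                   # → False: the flip does spread, but only near cell 80
```

Cell 50 and cell 80 end up correlated across runs not because they exchange information during those 20 steps, but because both are computed by the same rule from overlapping distant ancestors.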

That’s what I was thinking in 1996 based on observations of cellular automata. I don’t think this means that training and actual events on either 9/11 or 7/7 are not part of the same conspiracy. But I think it suggests that there could be cases of event-objects that we consider obviously cohesive, within which there are elements that are not causally related, except as indirect common-cause cousins. The CA pictured above, the squiggle example, shows a case where the obvious coherent event-object (the closed-circle squiggle) and the real cause of some of the coherence of the squiggle form (the left-and-right closed edges of the squiggle) are most fundamentally related in a non-obvious way. The left-and-right closed edges of the squiggle can be seen as partially the result of local conditions (in which the left and the right edges of the squiggle have no in-the-moment knowledge of each other) and partially the result of distant common-cause ancestors (in which the left and the right edges share distant past knowledge with each other).

Basically, I think this means that in cases like 9/11 and 7/7, where coincidences make us think the training events and the actual events must reflect shared knowledge within the same conspiracy, it is plausible that we are perceiving a coherence as implying an in-the-moment cause-effect relationship, or some kind of in-the-moment co-knowledge relationship, when the coherence we perceive implies no local co-knowledge, collaboration, or membership in the same conspiracy. I think the squiggle shows it’s plausible that the coincidence of the training events and the actual events does not imply membership in the same [space/time]-local conspiracy.

This video sparks my thinking

I love looking at this. It makes clear the connection in my mind between what I hear and what I see. I cannot think in hearing. I can think in seeing. There’s been a little bit of work using cellular automata to compose music. Seeing this video sparks my thinking along those lines.