If I could force you to read a book


These are the books I would force you to read.


One philosophy book

If I could force you to read a philosophy book, it would be Beyond Good and Evil. Yes, Nietzsche was that influential. Yes, parts of this book suck. “Maxims and Interludes” and “Peoples and Fatherlands” are the worst chapters in the book. We all hate them. We all skip them when we read it. Even college professors do. It’s ok. The rest of the book is still so good that, in my mind, if you haven’t read this book, you don’t know shit about shit.

(If I could force you to read The Archaeology of Knowledge, I would, but it’s too hard a book to expect people to read. I feel like if you haven’t read The Archaeology of Knowledge, then we can’t converse. And I don’t know anyone else who’s read it. So I feel I can’t converse with anyone.)

The kingpin

If I could force you to read one book—just one in all the world—it would be The Little Prince. It’s more important than the Bible (in the context of this list). If you disagree with that last statement you are insane—get yourself to a mental hospital tonight. If you had to show one film to aliens to make them understand our culture, you would show them Pulp Fiction. If they preferred to read, you would give them The Little Prince.

One novel

If I had to pick one novel to force you to read, it would be The Secret History. It’s not as good as some of the classics, but fuck the classics. This is the best novel of the current age. Hear me? This doesn’t mean it’s my favorite novel, it just means I think if you had to read one novel to understand what a novel is now, it would be this one.

Visual art

If I had to pick one art book, I would force you to look at H. R. Giger’s Necronomicon II. Fields of cocks and cunts dripping, fucking, covered in warts. Almost-abstract cityscapes. If you don’t get Giger, then you honestly don’t belong on planet Earth. Move to a different planet.

My favorite artists are Basquiat, Twombly, Botticelli—but I would never force you to look at them. This isn’t a list about my favorites. It’s a list about contemporary cultural necessities.

The only other visual art I would force you to look at is M. C. Escher.

One general science book

For a long time I would have said Silent Spring for a science book, but Silent Spring is old news. Everybody knows that we fucked up the planet, so it doesn’t have the punch it used to. Guess I gotta be hypocritical and go with Gleick’s Chaos. Like with Silent Spring, the science here is old news—five-year-olds understand it intrinsically. But as a story of scientific discovery, this is one of the most exciting books ever written.

I am well aware that, for a science book, most of you would have picked Gödel, Escher, Bach. Consider your complaints noted without comment.

Semi-technical books (accessible to anyone)

For a semi-technical book, I would force each and every one of you to read A New Kind of Science (main text, not the endnotes). I’m sorry, but if you haven’t read this book, you are clueless in like a thousand ways.

For another semi-technical book, Applied Cryptography (the concepts sections, not the detailed algorithm sections). It’s embarrassing that almost no one knows what digital cash is, or what a cryptographic digital signature is. Embarrassing. The general concepts in this book should be known by a fifth grader. No hyperbole: if you don’t know what a digital signature is, you should be ashamed to live.
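To make “digital signature” concrete, here is a deliberately tiny, insecure textbook-RSA sketch of my own (it is not from Schneier’s book, and real systems use large keys, padding, and hashing). The only point is the shape of the idea: the private exponent signs, and anyone holding the public exponent can verify without being able to forge.

```cpp
// toy_signature.cpp — a toy "textbook RSA" signature, for illustration only.
#include <cstdint>
#include <iostream>

// modular exponentiation: (base^exp) mod m
uint64_t powmod(uint64_t base, uint64_t exp, uint64_t m) {
    uint64_t result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1) result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

int main() {
    // toy key: n = 61 * 53, e is public, d is private (hopelessly small on purpose)
    const uint64_t n = 3233, e = 17, d = 2753;

    uint64_t message   = 65;                      // the "document" (just a number here)
    uint64_t signature = powmod(message, d, n);   // sign with the PRIVATE exponent

    // anyone holding (n, e) can check the signature against the message
    bool valid = (powmod(signature, e, n) == message);
    std::cout << "signature " << signature
              << (valid ? " verifies" : " does not verify") << "\n";
    return 0;
}
```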

One biography

It would go against the concept of this list to pick one biography.

One children’s book

Children’s books are the most important to read. Where the Wild Things Are is the king of these. If you can’t get lost in this book, I am sorry to tell you that a piece of you is missing. And I don’t consider The Little Prince a children’s book.


In conclusion

I could recommend a million books—and I’d love to—but this list isn’t about recommendations. These are the nine books that I wish I could force you to read so that you would have the bare-minimum framework upon which we could have a semblance of a conversation. Absent that, it is my distinct opinion that you and I reside in disconnected worlds.

Disorder and the Doctrine of Ethical Treatment—2013



Disorder happens when we classify a segment of behavior as not-ideal, as not-the-goal. The disordered segment is often the minority, but doesn’t have to be. There is a field of behavior, and someone has stated that a certain kind of behavior is ideal, is the goal. Behavior that goes along with this ideal is order. Behavior that goes against it is disorder.

We see disorder in abnormal conditions. Some of these conditions are considered to be highly undesirable, like suicidality. Some of these conditions are considered basically neutral, like freckles. And some of these conditions are considered highly valuable, like exceptional tallness or intelligence.

Why is normal intelligence the ideal, the goal, instead of exceptional intelligence? Because of the relativity of intelligence. If everyone was of what we call exceptional intelligence, then that would be normal intelligence. So we are not saying that a high IQ is undesirable, we are saying that a higher-than-normal IQ is undesirable. It is not ideal. It is not the goal.

People with higher-than-normal IQ may seem spectacular to their normal-IQ friends, or they may seem spectacularly uninteresting, but higher-than-normal IQ comes with as many problems (in terms of socialization) as it does advantages (in terms of problem-solving ability). People with higher-than-normal IQs are of special value, but they are also of special burden, to themselves and others. Just as people with lower-than-normal IQs require special education, people with high IQs require special education. It is inefficient to educate abnormal people — it is inefficient to handle any kind of special case. Additionally, the subjective experience of both high IQ and low IQ people contains more frustration than does the experience of people with normal IQs. It may be glorious or it may be dull, but either way, it is frustrating not to be normal.

Normality comes with a kind of ease, an efficiency. When you are of normal height, you have less looking up or looking down to do when you are talking to another person. When you are exceptionally tall or exceptionally short, you normally have to look down or up at people when you talk. Being abnormal is more work. When your interests are normal, it’s easier to find friends than when your interests are abnormal, are exceptional, are not-ideal, are not-the-goal. When your psychology is normal, you don’t require changing (through medication or therapy) to become adjusted to those with normal psychology. When your psychology isn’t normal (or ordered) — when it is abnormal (or disordered) — you either stick out or you attempt normalcy via medication or therapy or some other means.

This normalcy — this order — that we are talking about is subjective. To a person of normal height designing a philosophical model of order and disorder, normal is the same as the majority. But to an extra-tall person designing such a model, normal might be a minority. This is how sometimes order describes the minority, and disorder describes the majority, of the field of behavior being studied. From some points of view, a minority condition or behavior is what is ordered.

In a case like freckles, which are generally seen as a slight advantage by some, and a slight disadvantage by others, there is little cost for being disordered, and little benefit. A few people find freckles highly attractive, and a few find them highly unattractive. But for the majority of people, having freckles is a disorder which makes little difference. Perhaps they are seen as slightly not-ideal, slightly not-the-goal, but having freckles is generally not highly sought-after or intensely avoided.

With a case like suicidality, few people perceive any kind of glory in someone having this disorder. There is a normal way of living and dying — this is the ordered way. To go against it is almost without question to be disordered. It is to be not-ideal, not-the-goal.

A psychological disorder like bipolar disorder is almost universally seen as not-ideal, not-the-goal. Along with its neighbors schizophrenia and borderline personality disorder, it is seen as undesirable. The word disorder even graces the names of some of these conditions. To be bipolar or borderline is to be disordered. You are going against the flow, you do not fit in. If everyone was bipolar, the world would be a crazy place (we perceive). If everyone was borderline, normal relationships wouldn’t happen. We’d be losing something valuable if these abnormalities became the norm.

But Kay Redfield Jamison, in Touched with Fire, argues that bipolar disorder may present special advantages in creative fields. Here is a voice saying that this disorder may in some ways be ideal. It may in some ways be the goal. If many of our famous, historically-valued poets had bipolar disorder (as Jamison shows), then doesn’t it become rational to rejoice in a diagnosis of bipolar disorder? You have the same artistic temperament as the great poets! If you have any intention of writing poetry, then it may well be ideal that you have bipolar disorder.

And there the initial definition of order and disorder is turned on its head. Order is lacking. Disorder wins. Or, more specifically, disorder describes an ideal, and order is lacking in that ideal.




The critic and the creator


Some have said that the best creators are also first-line critics in the field in which they create.  People talk of the interplay of these two disciplines.

I’ve thought about this some.

I think that’s true—that people who do something well have both this creative (expository) side and this critical (pruning) side.  If one is too strong, the other won’t happen.  And whether I’m “right” or not, I am a fan of balance (as elusive as it is), and I think that when these two sides, as Prince says, are “both friends”, the results are stellar, compared to when they’re not.

I’ve written five book-length texts at this point, and some other texts.  Where I find myself in this critic/creator discussion, now, is this: what we do is rare.  When we are writers, when we are creators, there are not many of us.  This isn’t to build ourselves up: it’s not that anyone wants to have a party by themself.  No.  We want everyone to become the best they can, we want to have others to celebrate with, and to celebrate symmetrically.  The best party imaginable has the best, and the whole-est, and the most full, and accomplished, and centered participants.

But when I started writing, I looked to what other people had to say.  I did that more.  I was creating, and questioning myself.  I was saying: here’s what I have done..now what do you think?  My creator was more developed than my critic.  I wanted to make something, in a sense to manipulate others’ critiques, by getting better.  I wanted to make something people liked.  Is this poem good?  When you read it, do you think you could have done better?  Etc.

But I’m not there anymore.  This whole activity, as all activities, is about having a party.  Finding someone with whom one can reasonably celebrate.  Someone who has come to the same or a similar enough place that the two of you can then celebrate—can shluff off yearning and pretense and get down to the business of having fun..of reveling..in the—really: glory—of delight, and that kind of freedom doesn’t come without some symmetry and some oblivion.

The critic and the creator, whether they’re the same person or different ones, are involved in a glorious form of play—as are lovers, as are Olympic competitors, as are children pretenders.  They are all on equal footing and delighting in a certain kind of lightness.

The lover analogy is a good one.

When you take your clothes off, are you doing it to see what the other person thinks, or are you doing it to play?

I’m at a point in my writing where—"published" or not—I’ve had enough feedback to know that my critic is alive and well, and on the money.  I know what I’m writing.  I know what’s good about it.  I know what’s lacking.  I know what it does and I know what it does not do.  I like praise, but compliments and insults have approximately the same strength (or half-life) to me right now.  My critic is good-to-go, at the moment.  I can tell this from the feedback I get..because my internal critic is functioning at or beyond the level of the external critics I know of.

So when I take my clothes off, I’m not worried about what you’re going to say: because by and large I’ve already thought of it.

I’m not taking my clothes off (so to speak) because I’m really curious about what you’re going to say when you see me naked.  I’m (in writing) taking my clothes off because I like to get fucking naked.

When you speak, after I disrobe, if you’re on the level, then we’re playing—we’re in that glorious interchange.  I learn, I am affected, but I am nowhere near putting my clothes back on due to your criticism—because, by and large, mine is better (my criticism).  My critic has (improved or) maintained his sanity, while my creator has improved (or maintained) his insanity, such that the creator is not naked at all.  The critic already told him what was coming, and together, we made what we made, with a pretty well-rounded idea of what was happening throughout that process.

I am seeking publication..because I would like to make money writing.  I think I can do that.  I think more people will say that they like, and say that they dislike, my writing, and I’m sure at some point I will come face-to-face with someone who has developed a critic who can do that kind of glorious play with me.  But I don’t need publication or any more compliments in order to know that I’m on the track I want to be on.  Somewhere in this process, I’ve come to see that this is what I do—in the sense that no compliment, and no insult, will deter me from it.  I kind of don’t care anymore: just in the sense that, the way I feel now, I don’t think that anything anyone else is going to do is going to elementally change what I’m doing.  If I get shunned for the rest of my life by publishers, I’m still going to be writing—I won’t stop on their behalf.  It’s as if, in my mind, the external critic (disdainful or appraising) is listened to, yes, but is not that prominent an element in my decision-making process.

I just don’t care what you think.

Because, finally, I think, I trust myself.

theory.txt


I think I wrote this around 2004.

What is going on here?  Who am I and how do I fit into it?  Is there some theory of the world that you and I would necessarily agree on?  What are we arguing about?  How do we determine who are members of the sets “us” and “them”?  What’s my motivation?  Why do people live?  Is there an external reason for it?  Does the universe care about us?  How do I decide what to do next?  What makes me happy, and why?  Is it more prudent to live by principles or to take a momentary approach to deciding what is right?  What makes us form groups?  What makes our groups break up?  What is change?  What causes me to change?  What is the effect of change on me?  Can I go back?  Is there more than one thing I could have done?

1. What is going on here?

————————-

A wasp is walking on my window.  I just put down a book.  My fingers are moving over a keyboard.  I just scratched my eyelid.  I’m typing what I think.  I’m reading what I’m typing.  I’m thinking about what I’m reading.  I’m imagining someone else reading what I typed.  I’m imagining that person imagining me typing what they’re reading.

    What is going on here?  We have molecular models, atomic models, mathematical models, physical models, psychological models, religious models, philosophical models, medical models, and other models of what’s going on here.  To answer the question “What is going on here?” I have to think about what is going on when I ask the question “What is going on here?”.

    What is going on when I ask the question “What is going on here?”?  The tiny moving pieces in my head just decided to send my body signals to sit at the computer and type questions in English?  One in 250 human beings at some point will attempt to write a book, one in 14 of those will write something of a philosophical nature, one in six of those will write something of an existential nature, and I just happen to be one of the almost three people out of every 62,500 who writes down existential questions?  There is a field of chaos and turbulence, of which I am a part, and due to factors external to me and events preceding my existence, the evolution of the chaotic field of which I am a part has led to this moment of a new state of chaos, of which, if I apply a psychological model, I might say “He’s obsessively asking himself what’s going on.”?  God has ordained that on this day I put into language some of my thoughts…or…God saw that I was confused, and God is answering my questions through automatic writing?  A young man is considering his existence?  A white male, 26, physically healthy, of above-average intelligence, mildly bipolar, is having a depressive episode?

    What is going on here, when I ask the question “What is going on here?”?  Clearly, it depends on who you ask.  It depends on the model that person is using to figure out what’s going on when I ask the question “What is going on here?”.  A medical doctor will have a different evaluation of the situation than a computer programmer.  A psychologist will have a different evaluation than a physicist, than a Christian, than a witch, etc.  In an extreme example, my confederate in a war might interpret my asking “What is going on here?” as a signal that he should launch the bombs, because that’s the meaning he and I decided beforehand to attribute to the event of me asking that question.  These people are all modelers, interpreters of situations, appliers of preconceived ideas to symbols imagined from sensory input.

    Is there a general form for these models that modelers use to determine what is going on?  Where and how do they get their models?  How do they represent their models?  When and how do they update or replace their models?  What are the mechanics by which they compare their reference models to instance models imagined from sensory input?

    Is there a general form for these models?  Math?  Language?  Symbolic logical syllogisms?  There are many specific lingual forms for these models: the particular notations of various sciences and cultures (local notations: local either by geography or lines of heritage or local to a limited conceptual domain).  What is going on here?  The computer-illiterate French matron says “Rien d’importance.”  The South African two-year-old says “I’m hungry.”  The mathematician says “We’re factoring a 2048-bit number.”  The C++ programmer says “a virtual method of a class called ‘factorIt’ is iterating through a vector of integers.”  Could they all possibly be talking about the same thing?  No.  But they’re standing in the same room.  They each have models of the situation.  There is some overlap between some of the models: the mathematician understands basic C++ programming and the C++ programmer understands basic math.  Everyone in the room knows (thinks) that the big blue box is a computer.  Everyone in the room has a stomach, and they’ve all been there the same amount of time, and each of their stomachs is telling them that it’s time to eat.  Their models overlap in some places and not in others.  The brief summaries of their models given by these people illustrate that, just like their models, their expressive notations overlap in some places and not in others.  Are their expressive notations the same as their internal model representational notations?  Certainly not.  And while it is beyond my knowledge to say exactly what their internal representational notations are (in terms of how the larger constructs, their ideas, are composed of the smaller constructs, the tiny moving pieces in their brains), it is unnecessary to do so; observation of the content of people’s various expressive notations is sufficient to conclude that at the idea level, among people, there is some overlap and also some disparity between internal model notations.  If I ask someone “What is the syntax in C++ for declaring a variable?”, I will often be able to determine, from the person’s response, whether or not I think the person has an internal model of this aspect of C++ syntax that is functionally equivalent to mine, even though it may be the case that there are differences in the mechanisms by which each of our brains represents ideas, and differences in how, at the idea level, our particular models are stated.  If the person responds to my question by saying “I’m hungry.” then I won’t be able to do much useful comparison of our models of this aspect of C++.  Let’s say that my internal model is such that if someone asked me what the C++ syntax is for declaring a variable, I would say, “First you put the type name, then you put the variable name.”  If, when I asked someone else the same question, the person said “First you put the type name, then you put the variable name.” then I would know that our internal models of this concept were functionally equivalent; they may be represented differently on a synapse level and they may be represented differently on an idea level, but if you give us both the same task (“Declare a variable that has type ‘bool’ and name ‘dog’.”) we would both arrive at the same ordering of the type name and the variable name (assuming we were approaching our task seriously).
If, when I asked someone else to tell me the C++ syntax for declaring a variable, the person said “You put the variable name, and before it you put the type name.” then I would know the same thing as before, that our internal models of this concept were functionally equivalent; this person and I might be representing our ideas differently at the level of synapses, and we might be representing our models differently at the level of ideas, but, based on our expressions of our respective models, we know that if you apply that person’s model to an existing variable declaration and you apply my model to the same variable declaration, both our models will agree on whether the declaration is proper C++ syntax.  Obviously, if someone responds to my question by saying “You put the variable name first and you put the type name second.” I would know that this person and I have different internal models of C++ variable declaration syntax (assuming this person answers my question in a sincere manner).
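For concreteness, here is the tiny program that the functionally equivalent models above would both accept (my own illustration; the only rule it encodes is the type-name-then-variable-name ordering described in the text):

```cpp
int main() {
    bool dog;     // type name first, then the variable name: valid C++ syntax
    // dog bool;  // the reversed ordering from the last answer: not a valid declaration
    (void)dog;    // suppress the unused-variable warning
    return 0;
}
```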

    Is there a general form for the models modelers use to determine what is going on?  On the level of synapses, I can’t answer this question, and I don’t care to (but others do).  On the level of ideas, it seems likely that there is some overlap and some disparity between the forms with which we represent ideas.  Is it important, in order to understand what is going on when I ask “What is going on?”, to understand the nature by which you and I form ideas from other ideas in our brains?  That depends on who you ask.  If you ask a certain type of neuroscientist, then it is important: without knowing that you won’t really know what’s going on.  If you ask me, then it’s not important: I’m not a neuroscientist, and, to me, it’s enough that (since we have partially overlapping understandings of several general expressive notations) we can, without knowing how we think what we think, exchange at least some information about what we think.

    What do you think is going on when I ask “What is going on?”?  That depends on who you are, and what your model of the world is.  Same for my original question: What is going on?  If I’m asking the question, the answer depends on who I am and what my model of the world is.

2. Who am I, and how do I fit into the world?

———————————————

I am my model of the world.  Am I not also other things?  Am I not my skin color and my eye color and my height and my fatness and my smell?  No.  Those things are part of my model of the world.  They are part of my model of the world because my methods for gathering information about them are sensory.  Whatever world it is that my senses connect to my brain is unknowable; all I know is what I get from my senses and what is modeled in my brain.  Everything that I know, therefore, including me, including all parts of the world that I know, all ideas, all memories, all pictures, sounds, feelings, tastes, and smells, is my model of the world.

    If my model of the world includes me and includes “the rest of the world”, then there is no substantive difference between “me” and “the rest of the world”.  The way I designate “me” among the world is this: “me” is the category of what it seems I control; “the rest of the world” is what it seems I do not.

    How do I determine what I control and what I do not?  Try something.  Try wiggling my finger.  It moved.  I thought about it moving, and it moved!  I am God (of my finger)!  Try flipping through the pages of this book.  They flipped.  I riffled through them with my thumb and they moved.  I am God (of this book)!  Try willing myself to have hiccups.  It didn’t work; I don’t have hiccups.  Hmmm.  I’m not God of my hiccups.  Ok.  Try making Rishi come into the room.  I say, “Rishi, I need some hot pussy action!”  She comes into the room.  I am God of Rishi (sometimes).  Did I cause Rishi to come or did she come on her own?  Did I cause my finger to move or did it move on its own?  We could argue that question forever, but why?  Let’s be functional about it (what it is is what it does): for a particular event that I can represent and differentiate among my thoughts, I’ll keep track of how many instances of the event are present and how often those instances of that type of event stand temporally proximate to events of me wanting or planning for the event to happen.  Fingerwiggling: in all my thoughts, I can specifically remember 100 times that I moved my finger.  I can infer that I’ve moved it many more times than that, by looking at all the stuff I’ve typed, but I don’t remember more than 100 specific wiggles.  In my model of the world, there isn’t a single occurrence of a thinking-make-finger-wiggle-right-now event, for which, within 0.05 seconds, there isn’t a “matching” finger-is-wiggling event.  Do my thoughts make my finger move?  I couldn’t say.  All I know is that there’s a high correlation between thinking-make-finger-wiggle-right-now events and finger-is-wiggling events.  Since, in my model of the world, my thoughts are inside my model of “me”, the presence of such a high correlation between those two events is sufficient reason for me to believe I’m telling the truth when I say “My finger wiggles when I want it to.”  If my model of the world was different in that …

Werner Herzog


Q: What’s the mistake with psychology and self-reflection?

A: There’s something profoundly wrong—as wrong as the Spanish Inquisition was. The Spanish Inquisition had one goal, to eradicate all traces of Muslim faith on the soil of Spain, and hence you had to confess and proclaim the innermost deepest nature of your faith to the commission. And almost as a parallel event, explaining and scrutinizing the human soul, into all its niches and crooks and abysses and dark corners, is not doing good to humans. We have to have our dark corners and the unexplained. We will become uninhabitable in a way an apartment will become uninhabitable if you illuminate every single dark corner and under the table and wherever—you cannot live in a house like this anymore. And you cannot live with a person anymore—let’s say in a marriage or a deep friendship—if everything is illuminated, explained, and put out on the table. There is something profoundly wrong. It’s a mistake. It’s a fundamentally wrong approach toward human beings.

Authorship, predication, and not giving a fuck


Some people speak of the end of the author.  More properly they speak of the de-mystification, or de-centralization, or maybe the de-humanization of the author.

Everything-shared, everything-anonymous, everything-mutable networks rapidize the conversation that was always taking place around constructive work.  The only way they fundamentally alter that conversation is by increasing the amount of it that can take place within a single conversant’s lifetime.  So: we were always writing fanfiction, it’s just that now a single reader can write fanfiction within a single lifetime.  It’s only the perspective that an individual can have, on this conversation, that has changed.

Is it necessary to erase the concept of the author?  I don’t think so.  Is it desirable?  I don’t think so.  Is it possible?  I don’t think so.  Is it desirable to de-mystify, de-centralize, or de-humanize the concept of the author?  Maybe.  =)

It’s obvious now that it’s not possible for the author to work in a mysterious box, to which others have zero access.  But it was always the case that the author’s box was not 100% mysterious, was not strictly zero-access.  It’s just that it used to be easier to pretend this mystery, and now it is difficult.  It was always the case, though, that [props Mamet] the audience knew at least as much as the playwright, and that a smart playwright had to know this and operate under it.  It was always the case that the author was reading the same newspaper as the author’s readers..even though the author was doing something quite different with some of that input than the novel’s readers were.  What’s different now is that while the author is reading the newspaper, his readers are reading over his shoulder.  What one attends, another can attend, instantly.

As silly as I think it is for authors to be scared about de-mystification, I find those who need to erase the concept of author–to make it so that no one is author, ever, even for a moment–incredibly silly.

The author is a sorter, a picker of things that go in, and do not go in, the box.  Do people inherently prefer works constructed by one author, or a tightly-structured system of authors, as in film?  Maybe not–but it isn’t certainly so.  We do enjoy strains of consistency–themes–in our input.  Where there is a theme there is an author.  Perhaps not an individual human author, but an author.  An author–a theme-correlant–has oblivion.  Here’s how: oblivion is the counterpart to predication.  Predication is a correlative or causative link, an if-then relationship; oblivion is not giving a fuck.  To produce something that surprises the guy sitting next to me, a non-predicated, oblivious authorship is required.  If my construction, my picking, is highly predicated on your output, you will perceive little authorship in me.  An author, then, is a closed-off-ness (even for a moment), an oblivion, a not-giving-a-fuck, a blindness where others are predicated, a breaking of visible if-then rules where others are following them.

When you write to market demands (to audience demands, to literary agent demands, to publishing house demands), you are doing something..but that something isn’t being an author.

I get wanting to break down the human individual concept of author, even as I see value in that concept.  But I don’t think it’s well-thought-out to attempt to erase any concept of an author in the sense described here.  What is attractive to attack is ego, the ego of the link between {creating concept} and {human individual}.  That’s not a rock-solid link; it should be attacked.  Discounting the possibility of {{human individual} {for a moment being an author}} is economically stunting.  Discounting the usefulness of {{theme-oblivion author-creator} {momentarily playing author}} results in art that makes no sense..art in which there is no art..something akin to improvised television.  It’s not funny, it’s not meaningful, it’s not even completely meaningless (which is what people who like it claim they wish it was).

What we really like in this type of construction is the fact that we could have done it, without trying.  That’s what really appeals.  That it not be necessary for the artist to have thought.  Older art had the artist thinking a lot and us thinking a little.  More recent art had the artist thinking a lot and us not thinking any.  TV, improv, and fanfiction (to generalize) have us thinking that neither the artist nor the audience has to think (or feel) at all..“because it’s meaningless”.

But it’s not meaningless..because it’s predicated on what’s next to it.  When you blur that line that used to individuate the artist, the author (not as a person, but even as a skein of action, taking place, for a moment), it lowers the amount of oblivion and it raises the amount of predication taking place for {the one who is doing the {creating}}.  In that circumstance, there isn’t a show, there isn’t a product, because there is no distance between the rehearsal space and the production release.  No oblivion means no surprise.  Predication is the market study, the focus group, the TV show that bases episodes on fan comments from its web site.  Don’t get judgmental in your position on the size and shape of this feedback loop–and don’t think that I am, either.

Consider the suggestion, though, that when this process looks less like a blind man dreaming up songs in an isolated world based on fragments of sound and touch he’s gained from {what everyone else thinks is the real world}..and it starts to look more like twelve guys in suits sitting around a brightly-lit conference table “bouncing ideas around”..it changes the locus of surprise such that what is produced–while it may be surprising, while it may have themes–isn’t surprising on the scale of an individual person, doesn’t have themes in the scope of an individual person.  It’s fun to attack the author-as-individual–deflating that power differential meets a classic definition of humor: to enjoy seeing the mighty fall.  What happens with this, however, is that what gets made, the art, the work, that comes from committee, is appropriate for consumption by committee–not by an individual.  And as much fun as it is to claim that {anything you can do, I can do better}, it’s really hard to watch a movie, or read a book, or get laid, as anyone but yourself.  All by your lonesome.  I think that’s a way to start to understand why there has historically been some reverence for having one person–one extraordinary person drawing on many, many inputs involving a whole human history of others–for one moment having that person decide what does, and what does not, go in the box.