Thursday, April 14, 2016

The normal

Think about a moment when you couldn't believe something was normal.

What does it mean when normality coincides with incredulity? (What does that say about where your normal is?)

If your everyday (you can identify it because leaving it causes you to think go away from and returning to it causes you to think come back to -- think of it also as your onstage space) is absurd to you, then you face a crisis of response. Do you respond as if it's normal (calm) or absurd (screaming)? The first feels like a betrayal of your own gut and senses but the second is really alarming to others.

When you couldn't believe something was normal. (Does the normal require belief? Like ... faith?)

The older I get, the more I need faith just to keep going.

Faith for me means: even when all your senses say otherwise, thinking, "but this is probably normal." It is not taking your own experience or your own feelings at their word.

That thing where you can't just accept your normal for what it is. "Just, like, be cool, dude." What is the voice telling you to be cool? Is it a conspiratorial voice (in on the secret, don't let the others realize we are fooling them, shhh!) or is it a policing voice, which means you really are alone, unrecognized?

You would rather sabotage and lay waste to your normal than be cool.

Maybe the discrepancy between normal and incredible is between yourself and the situation: the situation is normal but I am incredible in it. (Of course I mean not to be believed.)

If love engenders desire and repulsion, then so too does the normal.

Friday, November 7, 2014

The panel on new musical ontologies: OK, so music is always the other to whatever field of knowledge you're talking about, music lets you have something like a field with a boundary that keeps it defined and real and tamable. And then you want to try to describe the impossible thing itself.

One way of defining what the music does is negatively: what does it foreclose, what does it smother, what does it make you fail to heed? (Ahmed's work on feeling or not feeling part of a group.) What does it wash out, what does it drown out? What does its underline, its emphasis, emphasize? How does it direct you to value and organize your pursuits (which pursuits does it mark as happy ones)?

Tuesday, May 13, 2014

Subjective realisms

I was stuck, in one of those episodes or chapters that will either turn out to be a transitional crescendo to some stable good form, or will, in retrospect, be remembered as just the beginning of attrition, the useless kicking action of a thing that needs to die. The problem with optimism is knowing when to apply it.

When we finished our meeting, I turned with profuse thanks to M, a touch on the shoulder and wet eyes. What did I want? To make it personal. To feel loved. To actually thank her. To re-establish the solid ground of a consensus of upward mobility and things being OK - things are basically OK! Look, we are joking and making plans and saying, see ya later. But this feels wrong: I need to feel tragic in order to be tragic.

I hate knowing that my projections of affects and moods so obviously affect people's strategies toward me. If I want reassurance, I project gloom. I project ease and confidence if I want to move to the next thing. But what if I feel both gloomy and confident about what I am doing? I feel both these things at once way more often than I feel either on its own.

Wednesday, September 18, 2013

Choose your poison

That thing where someone spends their whole life writing around problems they don't care about, problems that just happened to be there when they came on the scene, that formed their geography, mountains and lakes, coasts and skies. It's especially sad when you see them write their way out of that irrelevant landscape in late academic life, maybe never realizing how defunct the problems already were, and then appending in a last chapter or last paragraph a child's fantasy of new work, work (it seems) they really want to do. It's sometimes framed as a promise, but often it also feels like a resignation (handing in the slip ...), a last stab at planting a flag in the ground.

How to spot the spies in the court, the false leads, the problems that are invented in the process of thinking (like cross-contamination in hospitals)? But how can we know this stuff, sometimes? Wittgenstein apparently showed up centuries of concern in philosophy, but is something that lasts for centuries ever a false problem (a problem that has no bearing on human life)? And still not everyone has absorbed his lessons. The real question: How does one not waste one's life with this stuff?

To inhabit your theory with fullest imagination, to really try to hold in your head a fully realized energy map of the world. Then individual thinkers are like shocks to the system, trying to upset it. If your whole map holds -- that is not a guarantee of relevance, or timeliness. What makes a map relevant at any moment over others is whether the vision of humans in it is something we need at that moment (at that moment in public life, at that moment in the life of theory). Theory is always a desperate grab for the possibility of self-portraiture.

Tuesday, July 23, 2013

Runner

I was on the subway when things went live. Some signal came my way, some alarm that caused me to take stock and catch myself and snap to and realize that I was in a moment, though I didn't yet know what the moment was. The first signal always precedes awareness, like the whole lost history before consciousness kicks in (in a baby, in the world). I can only write out a list of events without chronology, since awareness is the start of chronology. But this list is also fiction, because how can I really know? I was on the subway, on the platform, walking from one end toward the other, thinking about nothing. Was my first apprehension of the event itself, or of a reaction to the event, which is nevertheless a part of it? It could have been a voice raised higher and sharper than the encompassing hum. It could have been a shift in another's body. A yell, or pounding steps that entered my ears but hadn't yet been connected in my mind to eventfulness. A body running. By now I definitely knew, by now I went live. How did I go live? Physically, I held my chest, my shoulders -- where my breastplate would be -- stiff. I was brought to a slow stop, like a bumper car with the electricity shut off. (How did we get this emergency response programmed into us, that brings crowds to a standstill in an event that's, as newscasters say, developing? For it's surely the mass response that creates the least chaos, the smoothest surface for the intervention of official forces, if not the response that makes the most sense for individual survival.)

That running body, the escapee across the subway platform, his run is so vivid and so startling because nobody runs like that, nobody who follows the rules, anyway. It's a run that doesn't give a shit about us. Seeing it, I want to do a number of things all of a sudden. First, I want to know it -- it attracts me terribly. It sets itself on a repeat loop in my mind, a real-life GIF. I want to tell others about it (because in telling it, it will seem like it belongs to me?). It's out of my world, and I want to make it part of my world -- I want my world to meet it, to be at the place where it and mine can touch. The turn away from that run is a locus for all the turns away from paths not-to-be-taken that resolve into the path that, eventually, forms, all the not-cities and -lovers and -jobs and -nights where you go home at a decent hour, stepping away from a house full of noise. It requires nothing less than the courage of finding more than partialness in what you have.

And another feature of our obedient programming comes into view, the impulse to normalize the strange, even as we hope for and would give ourselves up for the strange. What do I want when I tell my friend N, after getting off the subway, about that dense unapologetic run? Is it judgment? What happened to the world, and what am I now, too late, trying to restore?

Thursday, January 6, 2011

Missive from the far end of day

My new thing: if I can't sleep, don't bother trying. What's the use? You will sleep exactly as much as you will in your lifetime. That's a tautology, but it may be a useful one. In any case that's a task I leave to my body - I have far more pressing things to worry about. We inherit structures, the way that things are, and much of the illicit joy of growing up is in finding back-alleys and loopholes, in finding the right pitch of seriousness and playfulness that will drive us on. Categorical imperatives? They are nothing but suggestions.

Two thinkers I admire are both insomniacs in a way - the history of thinkers is obviously riddled with them but these are the two that come to mind. Michael Silverblatt relies on reading during night hours to consume the books he does, waking up to discover that he has read hundreds of pages in the night. During the day, he says, he is pathetically slow - a pedestrian two minutes per page - but night has a special force. Stanley Cavell, when he can't sleep, turns on Turner Classic Movies and returns to the circumscribed world of the movies he grew up with. In writing about film, he avoids the tendency to a technical objectivity and rather chooses to remember them, the way dreams are remembered. Would these two, and countless others, be the same kinds of people, of minds, without their after-hours work? There must be something liberating about defying the sun.

Cavell brings up the question of paraphrase in art: can we do it, what use is it? I used to say (I think smugly) to people who did not share my particular questions about art, who asked me what this or that movie or symphony or poem meant, if I could tell you in words we wouldn't need the thing itself, now would we? That's a good way to stop a conversation. But it doesn't explain why I am still driven to do it, why, when I feel I have seen something which pulls back the curtain (which wakes me up a little), why I feel I must, at least to myself, and however imprecisely, necessarily vaguely, be able to explain it. What Dreyer manages to do is ... The reversal at the end comes out of Ferris's use of ... The people in this universe are all ... I think what Crane is doing here is to ... gabbling. The security of the pat in the face of the inexpressible.

Wong Kar Wai's In The Mood For Love is a hell of a movie. His Hong Kong is cramped and steaming, the two main figures gliding through it like intricate origami cranes. His explosions have the indirect and sharpened quality of overhearing, of hearing illicitly. As in so many other films, we are shown the passage of time, the way a memory is born out of the convergence of time and place - so loss. But its insistence is on imperishable traces (imperishable anyway for us), the beautiful dresses, the way Mrs. Chan's hip moves and the effect of her hand on a door frame, and the infinite other traces we leave to be retraced and adumbrated - packed onto - by those to come.

Friday, December 10, 2010

Agony, ecstasy

[This post forms a part of the Requiem // 102 project, moderated by Nicholas Rombes.]


"The best description of hunger is a description of bread. A poet said that once." - Werner Herzog, Encounters at the End of the World
The line above relates to my assigned frame, from minute 25 of Requiem for a Dream, in two ways. First, the shot is exactly what Herzog's quotation describes. Sara Goldfarb, trying pathetically to survive Day One of an egg-and-grapefruit diet, starts seeing food everywhere - hunger is shown as the thing it most desires, as the semantic opposite which completes it. Second, and more broadly, the quotation raises questions about how art can most realistically and truthfully convey concepts that mean through absence, that are defined by the opposite pole toward which they incessantly strive - concepts like hunger, longing, desire, unhappiness. We might frame the question this way: does Requiem for a Dream show us the hunger, or the food?

The answer has to do with the very fuzzy line between showing, graphically, the effects of addiction, and inducing the effects of addiction in the viewer through graphic means. We might understand this as a difference between art as empathy, and art as weapon.

What defines this difference? It comes from where we are situated, through the camera's eye, in relation to the characters it observes. Film, borrowing from literature, can simulate first- and third-person narration: first when the distance between the camera's and protagonist's perspectives is erased, third when it is emphasized. The more invisible the camera, the more we are likely to "see things through the characters' eyes." The more active, histrionic, stylized, virtuosic the camera becomes, the more aware we become of the director's agency, that we are not looking through these characters but at them. Every sudden cut, canted angle, split screen, pan and zoom pushes the people in the frame farther away from us; they become objects and not subjects.

In Requiem's disturbing and gruesome climax, three protagonists are subjected to the most visceral and invasive of violations: that of their bodies. Through parallel cutting, we witness all three forms of rape - the forced feeding tube down Sara Goldfarb's throat, the amputation of Harry's arm, Marion's forced "ass-to-ass" - essentially simultaneously. Yet how Aronofsky films these sequences makes clear where he - and therefore we - stand in relation to his characters. At each moment of invasion, the camera is placed directly in front of the victim's face - the most objective angle possible, static, impassive, observational. More importantly, it is the perspective of those who are enacting the violence from which their victims suffer - the doctors and nurses for Sara and Harry Goldfarb, the circle of voyeuristic men for Marion. We see their pain and we recoil because it's not our own. We feel pity, shame, nausea, horror, but - if we are honest with ourselves - never empathy. If the battle is between the addict and the rest of the world, it's clear where Aronofsky places us, where he wants us to be.

I first became aware of Requiem in college as a film that I had to see and that, after seeing, I had to love. As with other cult films, there is never a question of disagreement or friendly debate; this may be possible for other movies, but some films apparently transcend normal aesthetic categories and become beacons that represent something else entirely. You made friends by bonding over Requiem. It was screened in common rooms and student film clubs and on laptops perched on beds. It was near-ubiquitous on "Favourite Movies" lists.

But - if you'll permit me a moment of studied ignorance - how can this be? The movie is essentially a PSA about the dangers of drug use. Countless reviews on Amazon and Netflix attest to this. While most mention the stylized filmmaking and acting, the final tone of many reviews is a decidedly moral one: addicts recognize that it's "just what it's like," it should be screened for junior high school audiences as a deterrent, and on and on. What strikes me as curious about this is that I can't imagine an anti-alcoholism movie, say, or a movie about the dangers of unprotected sex ever achieving the kind of fervent status Requiem has on campuses across the country.

Especially confusing is that, while people tell me that the movie is a powerful treatment of the effects and consequences of drug use, the kind of excitement and bravado in their voices suggests something else. It's entirely different from the way other great, horrifying films (say, Schindler's List) are talked about and traded in social discourse. So let me ask a question that may strike some as naive, and therefore easily dismissed, but which I would like to take seriously and try to answer: how can a film filled with such suffering and tragedy inspire the kind of mass culture fervor we see - still - today? How to explain the squeals of delight and nods of assent among college freshmen when the title is invoked? Or that, ten years later, there is a project to analyze 102 frames, one from each minute, for this movie and not some other?

Requiem's overnight and permanent absorption by mainstream culture - of which actual addicts form only a small part - suggests a kind of solubility with popular thought, a lack of real resistance or required effort. In fact, by reducing addiction to its effects (mirrored in the film by visual effects), Requiem actually makes the horrors of addiction easily digestible and even safe. If all we need to feel to identify with addicts is the jerks and jolts that accompany the movie from start to finish, we don't need to deal with the side of addiction that Requiem knows nothing about - the unbearable heaviness at the other end of Requiem's fleet-footedness: boredom, isolation, time not as fast cuts but as interminable. Much has been said about how far Requiem goes, how much it pushes the envelope of experience and endurance. Perhaps we should consider how it doesn't go far enough.

Thursday, November 11, 2010

Some thoughts on Stanley Kubrick

Often we hear something that seems right at first blush, and repeat the thought to others without actually stopping to consider how true it is. I think about this every time someone calls Kubrick a "misanthropic" director. Misanthropy is a hatred for humankind, and who can honestly look at Kubrick's films and say that they see hate?

I haven't seen a single frame of Kubrick motivated by hate. The right words for me are fascination, curiosity, perplexity. "Hate" is wrong because it's a strong, hot emotion, which to me seems incompatible with charges that he is also dispassionate and cold. He is not someone who hates the people in his movies, but rather someone who (in an extremely fundamental sense) does not understand them. He's fascinated with his characters in the way that some people can be intensely fascinated by insects, how they move, interact and go about their business.

Roger Ebert calls Barry Lyndon "remorseless in its doubt of human goodness," but this implies wrongly that Kubrick has ever cared about concepts like "goodness" at all, even in morally prickly movies like Clockwork Orange. Kubrick is interested in behaviour. In Barry Lyndon, the young itinerant soldier seeks a few weeks' comfort in the house of a German war widow, and we watch the essential transaction being made over dinner in the glow of candlelight. We know what will happen; we know what Barry means when he asks her if she is lonely, and she answers yes; this is all almost too obvious. But Kubrick is fascinated by why humans do these things, by why this behaviour strikes us as obvious in the first place. The candles suffuse the girl's face in gauzy light, and we get the sense that he is marveling at the twists and truths of human existence, the things we want and do, often without knowing why.

Fascination is always the primary mode with Kubrick, even when he's operating at the apex of sarcasm and violence, such as in the scene of murder by giant ceramic penis in Clockwork. He's not saying "Isn't this evil?" but rather "Isn't it wild that people do stuff like this?" There is a disconnect, a separation at the level of species. I wonder, for instance, about the fact that he was an intense animal lover but was notorious for his difficult relationships with people, or that the facet of humanity that seemed to interest him the most was the erotic, and the vastly different forms this took in his work. Think of Lolita, Clockwork Orange, Dr. Strangelove, Eyes Wide Shut, and even 2001, which plays to me as a film deeply anxious about the erotic in its complete exclusion of it. This is not Philip Roth, who has a specific relationship with the sexual and writes essentially the same novel again and again. It's rather someone who holds an idea under light, turning it like a prism to observe the different colours and shadows it can cast. Someone who can make such fundamentally different movies about the erotic is someone who hasn't quite figured out what the erotic is.

And yet. And yet. Kubrick evades us still. To call him an anthropologist, to call him distant, is to make him simpler than he is. In his greatest movies (of which Barry Lyndon is the greatest), there is at every moment a tension between distance and love, between fascination with details and the recognition that we can never really know them. There are shades of 2001 in every film. One of the most delightful and wonderful (in the sense of wonder) scenes in Barry Lyndon is the one in which Barry stumbles upon two British officers having a "couples' moment" [Anthony's term] in a river. The scene feels like a gift, but we and Barry must move on, and though we never find out what happens to them we know, from the closing title card, that "they are all equal now."

So let me end this brief account by saying that I see so much love in a film like Barry Lyndon, not love amongst humans but love nonetheless. It's a love of light, of movement, of the way the German girl's face looks when she gazes at the English soldier. We don't fall in love with her, as other directors would have us do, but fall in love instead with the wonder and strangeness of ourselves. Only someone like Kubrick, standing on the bank of the river, can show us that.

Thursday, July 1, 2010

The Coen brothers, and movies

I once remarked to a friend that I thought the Coen brothers were getting away with something, though I didn't know what it was. Rewatching footage from the 2008 Academy Awards, in which No Country For Old Men won four Oscars and each brother personally took home three statues, it's hard to parse what they were thinking as they took the stage again and again. They appear almost ... indifferent. When they went up for Best Director, Ethan Coen said, "I don't have a lot to add to what I said earlier. Thank you." Then Joel added, "We're very thankful to all of you out there for letting us continue to play in our corner of the sandbox, so thank you very much." When they went up for Best Picture they didn't say anything, but watch their faces and the way they walk back to the mic, Oscars in hand. What do you make of that?

In recent years, the Coens have alternated between making silly, screwball movies that few seem to really like (Burn After Reading, Intolerable Cruelty) and flat-out, adjective-transcending masterpieces (Fargo, No Country For Old Men, A Serious Man). The bewildering thing is how equally effortless both kinds of movies seem to be for them.

One factor common to all their movies is their intelligence. This is beyond doubt. Look, for instance, at how the long string of murders in No Country is handled. The first, in the sheriff's office, is bloody, protracted, dirty. The next is still on-screen but tidy, a neat circle. Then only insinuations: a slowly widening pool of blood, the hosing down of a chicken truck. And finally only an exterior shot of a house, checking under the soles of boots. The Coens know that we only need to really see murder once, the first time, and that every successive instance is left more effectively to suggestion. The implementation of theory in practice is almost surgical.

I think No Country For Old Men is a perfect movie, which is not as much of a compliment as it may seem. It has such a finished polish and such total control over all its elements that it's hard to find cracks into which we can insinuate ourselves. Roger Ebert called it a miracle, and said it was as good as Fargo, and yet Fargo is listed under his Great Movies column and No Country is not. The film's perfection admits no uncertainty, no untestable depth, no confusion or naked, messy revelation. It is so good at what it does, but it's only three years later now and I think it's already been forgotten. It is so perfect and so closed that it doesn't need us.

And this is perhaps the danger of being smart in a world where smartness in movies is sadly so rare. People debated endlessly about the ending of No Country, just as many are now debating about A Serious Man. These movies engage us intellectually, as movies should. They don't make things easy for us, they don't explain everything; they are economical. All of this is good. But I wonder: is our enthusiasm for a filmmaking duo that is consistently intelligent, that is audacious and original, that seems to exist outside the film Establishment even as it is showered with golden trophies from it, is this enthusiasm (which is understandable and, like, it's good to feel enthusiasm for intelligence in mainstream movies for once) obscuring a lack of something at the heart of the Coens' screwball comedies and serious masterpieces both? A lack, perhaps, of what Armond White called the "humane." A lack of the feeling of struggle behind the creation of these movies; we might even say a lack of risk.

Joel and Ethan Coen are currently contemplating or involved in a number of projects, which include a remake of a 1969 western, a movie called Hail Caesar which stars "George Clooney as a matinee idol making a biblical epic," an adaptation of The Yiddish Policemen's Union, another comedy with George Clooney, and a Cold War comedy called 62 Skidoo. I'm trying not to be pessimistic, but these all seem to be vehicles that play to the Coens' already recognized strengths: intelligence, irony, mischievousness, virtuosity - and emotional distance.

It is not my place to tell the Coens what to do. They can do whatever they like. I wonder what they think when people say of A Serious Man that it is "the Coen brothers' most personal film," or that it is autobiographical, or about their childhood. To me, it doesn't play like a personal film, or a film about experience. It's an intellectual treatment of those experiences and the questions they raise, but it is not the thing itself. Somewhere underneath A Serious Man is another film waiting to be made, and the Coens, those brilliant roguish boys, owe it to us and to themselves to find it.

Update: I just came across Anthony Lane's review of No Country in the New Yorker, here. He says much the same, and then some. It's very good.

Wednesday, June 16, 2010

Wielding a movie, breaking bones

Steven Soderbergh’s Bubble is both an “experience” and an “experiment,” according to the DVD case. The cover is a picture of eight severed doll heads, all bald and smiling in a terrifying way. Someone unfamiliar with the plot (as I was) might reasonably guess the genre to be sci-fi or surrealist horror. Soderbergh, after all, made movies such as Solaris and Schizopolis, so it’s difficult to know what to expect. It was with foreboding that I inserted the disc into my laptop.


The movie opens on a sparse small-town American cemetery, a couple of flags sticking out of the grass and cars passing by on the road behind. Everything about the image is unremarkable, including the light. We fade to the face of a bloated, middle-aged woman in bed; it's morning. She struggles up and walks down the hall to the living room, where her elderly father is asleep on the futon. The dialog goes: "[indiscernible]" "Yes?" "It's time to get up, what do you think?" "It's breakfast time?" "It sure is." "[indiscernible]"

We then follow Martha (that's her name) as she leaves her unremarkable house and gets into her unremarkable car. She lives in an unremarkable little town (this we know because of a shot of the water tower, somehow a signifier for sleepy little boroughs everywhere). We see her stop in her local bakery (it has a Jesus cross mounted on the wall) for a morning doughnut and coffee. She then drives to work, carpooling with her "best friend" Kyle, a guy in his twenties who mumbles so much I wanted to slap him. It's clear that the only reason they are friends at all is that neither has found anyone remotely kindred in their town's sparse collection of souls.

We begin to understand what Soderbergh is showing us, and also, simultaneously, how he wants it to be seen. Following work is church, the third corner (home/work/church) of the working-class triangle, and here Soderbergh lines up a series of shots in which the parishioners sit absolutely still. We see them from this angle, that one, that one, but none of them move and none make a sound. An abrasive bare guitar plays, the only music used in the movie, and it is so jarring and peculiar in its isolated use that it's clear no one in the movie can hear it.

This is the first half of Bubble: it "approaches with awe and caution the rhythms of ordinary life," as Roger Ebert says. This it does very well. These do not look like actors or sets. It sounds and looks as boring and depressing and intriguing as it would if you drove to this town and met these people. And so far, the "experience" promised by the DVD cover is this: the experience of small-town, lower-class America. You may not know any people like this, or want to, but for the cost of movie admission or a DVD you can see and feel what it must be like to be them.

And here the Soderbergh "experience" takes on another, rather chilling dimension, for Soderbergh cast actual small-town lower-class Americans into these roles! "Martha," who works in a doll factory in the movie, is a bona fide 24-year employee of KFC in Parkersburg, West Virginia. They also shot the movie in Parkersburg. The actor who plays Kyle says that, as a teenager, he was "a lot like Kyle." How much of this is acting, and how much of it is using these people to be who they are?

Bahrani's Chop Shop (2007)

A few movies in recent years raise the issue of getting people - especially marginalized people - to play versions of themselves in the movies. I'm thinking of Lee Daniels's Precious (poor black people), Ramin Bahrani's Chop Shop (poor immigrants in NYC), and Bubble (small-town working-class white America). Each of these handles the issue in a different way. Bahrani spent a year hanging out in the repair shops that line Willets Point in Queens, and formed most of his cast that way. Daniels went the opposite route: he narrowed the field to ten contenders from over 400 for the role of Precious (I think his original casting call was like: black and over 300 pounds, apply now!), but eventually decided on Gabourey Sidibe precisely because she was not Precious:
She was as good as the rest of the girls, but Gabby is not that girl. She talks like this white girl from the valley. It's clear to me that she came from a really great background, and she had gone to college and she was not this girl. And if I had used those girls, one of those girls that made it to the final ten, I would have been exploiting them. Because they were the truth.1
He then clarifies that Gabby was not any better than the others, but that "the difference was that Gabby was acting. These girls were not acting. They were the real thing."

So what's wrong with using a poor, damaged black girl to play a poor, damaged black girl (or a small-town Ohioan to play the same)? Won't the performance be more genuine, thus benefiting your movie, and won't your funds go to someone who actually needs the money, rather than some megastar shooting for an Oscar with an “ugly” role?

The difference, it turns out, is the story you want to tell. Precious miscalculates the line between realism and Hollywood; that is, it tries to be realistic and upfront about the horrors of Precious's life while keeping the upbeat ending of something like Save the Last Dance. This is why Precious is not a good movie, but at least it's not a criminal movie, which it would be if Daniels had cast an actually troubled girl. The first half of Precious looks like real life, but the second is simplistic wish-fulfillment, and that is a disgrace to the real struggles and small, daily victories a girl like Precious must fight for. If Daniels had wanted to cast one of those other candidates in his title role, he would have had to rewrite the ending and leave Precious in her world, the real world, rather than drag her into Hollywood. Chop Shop, for instance, doesn't interfere in the world or the lives of its subjects, but only observes, and that is why it is a good movie and why the casting is right.

Now back to Bubble. It would be a good movie, maybe a great one, if all it wanted to do was observe. But Bubble, as the cover promises us, turns these ordinary people playing ordinary characters into an "experience." If you watch the movie, it will be clear that the title refers to the insularity of the world, the way routine, junk food, lack of stimulation, smallness of vision can create a morally gray community in which terrible things can happen. The bubble is the way the cars drive aimlessly round and round and the people move from one couch to another, the perpetual crappy lighting, and the endless line of doll's heads at the factory where the characters work: things that have human features but are really hollow inside.

If a big-time filmmaker came up to you and pitched you the above, except it was about your life, would you agree to star in the project? How did Soderbergh get these people to work on a film that thinks so little of them? In the special features, we watch the screenwriter Coleman Hough, a breezy Californian, walk and laugh and reminisce with the actors. They seem to be birds of a feather. She asks Debbie, the actress for Martha, what she thinks "Bubble" means. "This is living in the bubble, in the Ohio valley," she replies, gesturing to the landscape. "And then something comes in ... it happened in this little town." Hough doesn't respond. The actress thinks of the movie as a story about an Other, a pernicious element that enters a safe haven, which is a common enough arc. Except nothing comes into Bubble; everything is generated within. Whatever tragedy happens in the movie is a product of the town, its people and the kind of life that is observed.

It's admirable that Soderbergh and Hough are interested in parts of the world that rarely get big screen treatment. But they are not here to uncover any truths; they are here to make a point. The point they make is from a privileged, intellectual perspective that tends to refer to “the masses” – as in, “the masses” that go to rom-coms and thrillers and live in wood-panelled houses and get pregnant when they’re young and never, ever read a book. Whatever happened to art increasing our empathy and broadening our understanding of others? Bubble is bad art because it chooses to condemn rather than understand; because it manipulates people into mocking a version of themselves, and because it considers itself superior to its subject, it is immoral art, too.

Friday, June 11, 2010

Splice: a review

Splice is riddled with problems, which doesn't stop it from being often interesting, or occasionally powerful, or better shot than it could have gotten away with. Several critics have said that it is intelligent and provocative for the first two-thirds and then totally gives in to genre-cliché and mindless action, but this isn't quite fair. There was a lot that was cliché and lazy about the first two-thirds, too. It's true that the last act can be swapped with that of pretty much any domestic horror movie, but this isn't necessarily a drawback. Cliché is only a bad thing when it's employed for no reason, and the release of energy at the end of Splice was a long, long time coming.

Two scientists (a couple) engineer an organism from the DNA of several species, including human. One of the biggest problems this movie runs into is that it can't quite find a balance between archetype and realism. There are lots of little affecting touches that are clearly intended to make Elsa (Sarah Polley) and Clive (Adrien Brody) seem like real people and not just labcoats, and these work, more or less: Polley's obsession with little tic-tac things, Brody's embarrassingly hipsterish T-shirts, the Japanese anime art framed above their bed. I'm all for realism, but it takes an assured touch to combine it with a broader underlying metaphor, such as when the organism they're incubating comes to term several months before it's supposed to. The glass incubation chamber vibrates and bloats menacingly, blue water sloshes, Polley cries out in pain as the organism grabs her and water rushes out as the chamber cracks. The umbilical cord tears.

This, clearly, is the birth of their child, a subject broached but dismissed earlier by the couple. And because it is a new species, the subject of the film broadens beyond the couple into a question concerning humanity. This is why the best shot of the film is one of Brody and the teen-aged, female creature slow-dancing to a jazz record. It suggests so many things about the nature of the relationship between man and his creation, about the possibility of coexistence and what it might look like, conveyed in the simplest and most direct of images.

This is also where Brody's and Polley's appealing realism falls short. As the subject of the film migrates from domestic intrigue to an interrogation of nature and the origin of species, we need protagonists who are not just interesting people but also archetypal, or even mythic. Big ideas need big, broad characters to support them. Polley achieves some of this by channeling a strong maternal instinct, characterized by devotion, blindness, competition; this is at its most powerful in the scene on the operating table, when she finally seems to realize what she has done. For most of the movie, though, the dynamic between Brody and Polley seems strangely out of tune with the creature locked in the barn. This is maybe why the humans in Kubrick's 2001 appear drained of individuality, and move and speak so flatly: they exist in the way that atoms and the universe exist. Brody's and Polley's normality and specificity collapse the scale of the drama.

But there's so much to praise as well, like the establishing shots in the first half of the movie, which not only show us where the scene is taking place but also convey something of the disorientation and claustrophobia of city life. When the couple leaves the city for Elsa's farm, an archetypal return to nature, I felt something fundamental shift inside me: at last, the creature - itself a hybrid of civilization and nature - can contend with the outside world, and the questions raised by the first half can be fully answered. What we actually get is a little chase through the woods, and a resolution whose consequences barely extend beyond this weird little family.

That's the wrong ending. It's often smug or ignorant (or both) for a critic to suggest how a work of art should have been done, but I'm going to risk it. The first half is an exercise in withheld energy, both violent (it's clear from the start that the creature is too dangerous to be contained, and that it responds only arbitrarily to Elsa's mothering) and sexual (as Manohla Dargis puts it, the creature turns into a "va-va-va-voom adult"). Clearly some shit's gonna go down. The right answer for the film, I think, would have been to go War of the Worlds - big to the point where the little domestic squabbles of Elsa and Clive lose all significance. Only then could the movie retain some of its archetypal power. It's a matter of balance.

Instead, the movie ends on an oddly intellectual note. The creature commits two crimes at the end, one against each member of the couple, and it seems that this was just to set up a question, presented in the last shot. The sacrifice is for intellectualism at the expense of what film does best, which is to convey truth through image and sound. I could say here that Splice could have been a much better movie if all these things were fixed, but honestly, it might just be easier to start over with something else.

Friday, May 21, 2010

The composer, mechanical

A recent article in Slate magazine profiles David Cope, a man who has been using computers to write music for a long time. I first came across his work in the documentary Mozartballs, in which Cope’s computer (cutely named “Emmy”) analyzes reams of Mozart’s music and writes a new cello concerto in a Mozartean idiom in one second. It was a long time ago, but I remember the documentary then showed a cellist who played the piece – which likely sounds alright to ears not familiar with Mozart – and commented that, while all the gestures looked like Mozart, there was something about it that was deeply un-Mozart-like. Indeed.

Journalists are often not responsible for their articles’ subheadings, but the one for this piece – “A computer program is writing great, original works of classical music. Will human composers soon be obsolete?” – seems to me to encapsulate the very blunders of reason that draw people to these kinds of stories. Leaving aside the anachronism of the machine writing “classical music” roughly two centuries after human beings stopped writing “classical” music, other questions are begged. Like: what does the article mean by “original”? It seems, in fact, to spend much of its word count arguing just the opposite: that the computer cribs Beethoven who cribs Mozart who cribs pre-Mozart, and that humans are much more like recycling algorithms, much more mechanical, than we like to think.

The argument is presumably meant to validate the machine’s own cribbing brand of creativity, a “you think Emmy’s just copying existing music? That’s what humans do, too!” kind of thing. What the article – which, after all, is in Slate and therefore has neither the platform nor the space nor the readership to explore this matter to any degree of depth – fails to mention is that the idea of art as “copying” is far from new. In 2007, the American novelist Jonathan Lethem wrote “The Ecstasy of Influence,” a love letter to “plagiarism” in art, in which he quotes the following from John Donne:
All mankind is of one author, and is one volume; when one man dies, one chapter is not torn out of the book, but translated into a better language; and every chapter must be so translated. . .
A belief in art as inherently genealogical – that is, made from the stuff of the ancestors, rather than created ex nihilo – is championed not only by artists such as Donne or Lethem. Roland Barthes, the literary theorist whose ideas held much sway over the entire body of French literary thinking (and subsequently of the world entire) in the 60s and on, goes so far as to erase the author, so convinced is he that all artists do is rearrange what has come before. This is not a denigration of the value of art or artists, but rather a statement of what Barthes thought was perfectly clear: that we are all born into a system and cannot escape that system. We read, we listen, we breathe in air and release it again in an altered but not fundamentally new form. All art is a conversation with all other art.

The question then remains as to whether a machine can have a meaningful conversation with a person. Or put another way: if the purpose of conversation is to reveal through words our intentions, is it possible for a machine to intend? Or can it only do?

The article in Slate seems to think that Emmy and her inevitable spinoffs are humans in reduced capacity, and that the compositions are not better only because the technology is not yet there. When it is, who knows what could happen? It claims that Emmy is “already a better composer than 99% of the population,” and grants the possibility that a machine might write music of “lasting significance.” So, in twenty years, might we have a program pairing a Beethoven quartet with a song cycle by HAL 9000? Symphony No. 73B by that Honda robot? Will we study computer programmers in music history, count algorithms instead of tone rows in music theory? Or the question I really want to ask: Why do we listen to music, or make time for any art, at all?

In Middlemarch, George Eliot writes: “If we had a keen vision and feeling of all ordinary human life, it would be like hearing the grass grow and the squirrel's heart beat, and we should die of that roar which lies on the other side of silence.” The first time I read this I stopped dead in my tracks (I was walking), stunned by how perfect a description this was of what art, in my opinion, tries to do. To listen to the grass grow and record that roar which lies on the other side of silence, silence being death or the magnitude of the universe or the separateness of human existence. There is a roar underneath, a graspable truth that we all feel or hear at some time or another and which artists try to communicate – that is their “intention” – in their respective medium. To hear the roar and to say it.

When I listen to, I dunno, the Cavatina from Beethoven’s Op. 130 String Quartet, the music – as beautiful as it is – falls away and I am left with the impression of effort, Beethoven’s effort to understand and wrestle with something and then give us the traces of his effort, so that we might feel less alone with ours. The Cavatina is imperfect, as all great music is, but it is something said by one person to another, which is also what all great music (or even all human music) tries to do.

The materiality of music – the technicians and paper-printers, the instruments, the notes and even the sounds – this all is just connecting fodder meant to bridge the gap between consciousnesses, which is a process Emmy can never participate in or understand. Regardless of how far this technology develops or how ingeniously or convincingly future Emmys can recombine and repurpose music, if a machine cannot hear the grass grow or the squirrel’s heart beat, then it cannot write music – however perfect – that matters to me.

Monday, May 3, 2010

An open letter to documentary filmmakers

Dear documentary filmmakers,

You know that thing where you're filming an interview subject and they come to the natural close of a thought, but you keep the camera pointed at them and don't say anything and they're sort of frozen waiting for you to break the scene but you don't, and then they wonder how much longer they're supposed to hold their frozen pose or maybe they should do something, and then eventually they do do something, always something inadequate like an "oh well" gesture which follows a protracted, uncomfortable silence, all of which happens solely because of how uncomfortable and self-conscious you have made them and not at all because the scene is "raw" or "real" as you probably intend it to be?

Or when someone is speaking on an emotional subject and breaks down sobbing, but instead of cutting to something else (we know what's going to happen: more sobbing) you zoom in instead, and just watch them up close as they contort and grunt and moan their private grief, and you keep the camera fixed until the tears subside, the wails cease, and there is nothing more to see?

STOP THAT.

Seriously, stop it! That trick is way played out, and it never helped anybody. What are we supposed to learn, aside from the fact that people who have lost loved ones tend to cry about it, and that you (the filmmaker) aren't above using your camera to rustle up some good cinematic drama, even where none would naturally occur?

We've all seen this dozens of times. Here's an example from Herzog's Grizzly Man, in which we see what should be the natural close of a shot, the signing of a contract. Everyone in the shot - the girl, the man, the cameraperson - knows that this is where it's supposed to cut. But it keeps going. The man looks like he's trying to disappear by remaining absolutely still. Then the camera comes even closer, forcing the poor girl to come up with a final platitude to lend the scene its dramatic rimshot. And then a close-up of the watch. We get it.

Do I object to this practice on artistic or moral grounds? Both. It's morally disingenuous because you turn the real people who are your subject into circus performers, expected to offer up tears or saccharine one-liners to your supposedly objective lens. And it's artistically hollow because it ultimately reveals a lack of confidence in the power of your story - or your ability to convey its purpose and meaning without resorting to manufactured "moments."

Look, nobody is going to come out of this well! You could put Mahatma Gandhi on the other end of your camera; hold it long enough and even he'll start to shift his eyes nervously. It's a foolproof way of making someone you don't like look bad, or anybody look goofy and ridiculous. The worst part is that this is so easy to defend in theory: the documentarian is supposed to be unflinching in his gaze, never shying away from pain or reality. No one's going to argue the theory, but you risk farce when you apply it so literally to your practice.

Herzog, who is a natural with documentaries, shouldn't need to resort to such juvenile tricks. So, in fairness to him, my counter-example to the above is also from a Herzog documentary, his Encounters at the End of the World. Here we see Herzog trek to Antarctica and interview all the weirdos and misfits of the world who have, for one reason or another, congregated in that far, cold place.

There is one man who has a backpack that he keeps with him at all times, which contains everything he needs to survive so he can pick up and leave at a moment's notice. This is important for a man who grew up deprived of freedom behind the Iron Curtain. "You escaped," says Herzog, "and how big a drama was that?" The man seems to smile, blinks and begins to respond, but no words form and no stories emerge. He is obviously struggling under the great weight of memory. And I have to say, I thought I knew exactly where this was going: the camera would linger another four, five seconds, and then the sobs would come, and we would watch this poor man cry until he managed to compose himself and we moved to the next subject, our curiosity satisfied - if not resolved - by our act of voyeurism.

But it is Herzog who surprised me by speaking before the man's dignity could fully collapse. "You do not have to talk about it," he says gently. "For me," and here you can hear Herzog improvising, becoming for a brief moment, even though behind the camera, the subject of his own documentary, "the best description of hunger is a description of bread. A poet said that once." The man nods vigorously, gratefully, and Herzog directs him instead to describe the contents of his bag, which he does with the relish of a boy proudly displaying his gifts on his birthday. And, sitting alone in my house, I suddenly found myself sobbing, painfully, wretchedly, sobbing at the unbearable kindness of Herzog and the beauty and humanity of that moment.

Wednesday, April 21, 2010

On travel

When traveling, one expects to meet people ranging in ideology, profession, attitude, age, dress, ambition, language, culture, but my own experience demonstrates quite the opposite to be true. (I am talking here of fast travel, or, for some Europeans, "American" travel, where the aim is diversity and not immersion.) A natural habitat of the fast traveler is the hostel, and there, alas, he will meet but one person. Let me describe her (or him). On average, she is Australian and 22 years old; she's been traveling for months and is still months from the end; she has "done" more cities than she can list, and went to school for ______ but has unspecified plans for her return that may vaguely involve a farm or a start-up business. She is on a Eurail pass or equivalent and is leaving for Budapest in the morning. (They are always leaving for Budapest in the morning.)

In his new preface to The Great Railway Bazaar, written some thirty-five years after his initial train journey across Asia and Europe, Paul Theroux (my steadfast companion through the last leg of travel) refers to the early 1960s as "an age of mass tourism" in which "everyone set off to see the same things." Imagine what it must be like now, fifty years later. When the flight from Frankfurt to Barcelona costs less than bus fare to the airport, when young adults of all means and origins are traveling for six months or a year to "find themselves" in hostel bars and nightclubs with backpacker specials, one can easily touch down in Italy or Slovakia or England and find oneself always surrounded by exactly the same people.

This is not entirely a bad thing. There is something new to be found here, a kind of slipperiness of place that we don't normally encounter ensconced at home; a shifting, mutable landscape of vibrant cities and blurred hordes that register briefly in the eyes of the traveler, what the world must look like from the perspective of a frantic winged insect. I was constantly surprised to discover what the residents of the places I visited thought of travel when I asked them, when, in response to their envious murmurs at the sight of my photos or my listed itinerary, I suggested they do the same. To them (and it was always the same), travel is heavy; travel is a mountain unsticking. The town two hours down the road was as good as across an ocean; for me it was a passing glance from my window-seat on the way to far more distant reaches. The traveler perceives the world through a distorted lens, a kind of fish-eye that wraps and warps the edges of things into smears of colour and hazy definition. England is small - look how close the coasts are! Berlin to Warsaw is an hour. Milan to Paris isn't worth taking your laptop on board for. A three-month Eurail gets you up to 21 countries, which most purchasers do their best to cover. Most of the residents I encountered had never been to the places in their own countries I was planning to visit, and could tell me nothing about them.

This is, shall we say, only one way to travel. I am not talking about the ecstasy of disappearing into a place that is not your own, of forgetting, after months and perhaps years, all the things you always thought were constant and necessary in your previous life. What I'm describing is disappearing into constant motion, into a home whose only constant is the ever-receding horizon and the momentum of the hunt. You begin to see yourself as the one who is fixed, unmoving, a north star amid all these tragic figures with their houses and fireplaces and cars who rush toward you and then rush behind, a whole sky of glimmering points that revolve around your true and steady core. The world turns and you are the axis. And at any point you can reach across the revolving globe, your fingers probing inquisitively at this place or that, across small patches of ocean and rivers like hairs, to pluck at the fruit of some far-reaching place as if from a branch reaching over the fence from your neighbour's yard.

Wednesday, April 7, 2010

How far there is to fall

We are rarely ready to face the full form of pain. Isn't this so? Here in the humming, squashy sitting-rooms of our lives, how often will we be roused to real terror, real anguish? These are things so far from our daily humdrum of activity that we cannot feel them, and only think how they might be. Those of us who live in safe neighborhoods, who have never seen a gun, who have never watched someone die, will try to span the gap of experience with feats of imagination, to insert ourselves into the news or movies or friends' accounts. We needn't worry. If they have not yet, the real moments will come to all of us, and more than once in our long lives; they will be the dull, flat rocks that gape baldly in the flow of a stream.

Most people most of the time do not know or have forgotten what pain is, and this is not our fault. The scope of possible tragedy is as vast and inconceivable as the surface of the Earth. We know it is there, but for the sake of convenience focus on our small allotted plot, and only briefly are ever pulled up and given a suggestion of where the edges might be. In John Banville's The Untouchable, there is a scene in which the young, hedonistic English men sent to defend France during World War II barely register the danger as Germans descend upon them; the English army is retreating to the Channel, but as they flee the narrator stops to remark that
[t]he harbour had a wonderfully festive look, with crowds of men milling about the quayside and craft of all kinds bobbing and jostling on the sea. The water was a stylized shade of cobalt blue and the sky was stuck all over with scraps of cottony cloud.
A moment later, as German bullets whiz past their heads, the narrator spots a soldier he went to school with and calls him over to introduce him to his friends. The disconnect between the present danger and the guileless insouciance of the soldiers renders the scene both hysterical and strangely terrifying; these men, you realize, so caught up with the little dramas and vanities of their lives, cannot see or accept what is around them.

This, to me, is also an illustration of the disconnect we experience when we read a news story about a girl being abused, or watch movies about real-life tragedies. (Even the word "tragedy" is somehow supposed to carry and impart the feeling of its definition; being only a word, it always falls short of the purpose for which people use it.) We do feel emotion when we hear of pain, even strong emotion, but this, I must say, is not even a portion of the whole thing, not even a lessened version of what those directly involved experience. It is something else entirely: a feeling produced by the careful manipulation of image and sound, or by sentences that cascade and wrench and cause their own misery. When I read about the worst possible atrocities in the paper, I will recognize intellectually that they are horrible, but - if I am being honest - feel nothing emotionally but what I dredge up out of a sense of moral expectation. Unless I identify in some way with the story, in which case the pain I feel is for myself, I acquire no part of the pain of those who actually suffer. Pain is not a reaction to a thing, but a thing that happens itself.

What brings me to record these thoughts is a small, dense tragedy that occurred in my home last night, a non-event that is hardly worth repeating but for the panic and dread it drew, however briefly, into our house; a microcosm of disaster. My brother was playing floor hockey in our backyard, his white sneakers trailing on the newly wet cement. It was dark, and my dad and I were in attendance (clutching sticks of our own) as he chased the dog-wet ball in circles, eyes set on the impish green blur. I remember only the afterimage: my brother's stick on the ground, and he on the ground too, behind it, his hands splayed as if in obeisance. He lifted himself from the ground, and there is a moment when all in attendance know, almost divinely, exactly what has happened, but cannot repeat it to one another until the final evidence is presented. My brother said, "I think I chipped a tooth." And then I saw it: a perfect quarter-circle in his right front tooth, as if only waiting for the puzzle piece that, with a flourish, would declare it whole.

Writing about it now, I find the whole event feels like nothing of consequence (for me, anyway; it is not my tooth that chipped), but I can assure you that it felt, at the time, like some permanent, irretrievable loss. My dad threw his hockey stick against the pavement, one of his few, always understated displays of anger. I felt clammy and flushed. My brother turned to one of us, then the other; I tried to reassure him but was myself doubtful, having forgotten in the moment whether technology was sufficient to repair him. My dad constricted himself into a chair and opened a newspaper, his eyes tracking too rapidly.

It is a commonplace of moments like these that one wonders, with dread, whether anything will ever be the same again. It is, perhaps, an overstatement that compensates for our daily understatements, our inability to comprehend the dark shapes behind words like "murder" and "rape" unless they drop into our near proximity. And I thought, as I always do in these moments, of all of us on a thin ledge against the face of an impossibly high and steep mountain, tracing our way imperceptibly, in single file. Below us there are many more such ledges, all equally narrow, and once in a while someone might lose their footing and fall to the next one down, or two, or three. It is a precarious walk, but while we keep our eyes fixed on our feet it is possible to forget the gaping chasm that lies only a footstep away. We climb slowly and gradually toward the top, but a gust of wind or the particular jut of a stone is enough to nudge us over, and who knows where the bottom lies. How high we all are, and how far there is to fall.

Saturday, April 3, 2010

Hey man, slow down

I've been gone for a few weeks and there are 212 unread items on my Google Reader. I click "Show All Items" and start to flip through, but the erratic ricochet of subject matter from post to post makes me confused and a little anxious. So I decide to tackle my various blogs one by one, kill them off systematically.

The first is The Sartorialist, which, because it is mostly a picture blog, accounts for nearly a third of my unread entries. I skim a few, then click "Mark All As Read." Aaron has a handful of typically intelligent but unfortunately dense entries on a variety of topics; I open up my TPL account and place a hold on The Elements of Typographic Style, and promise myself I'll come back and chew through his post on modernity. The Elegant Variation is my backdoor pass into the literary world, and here I open up new tabs on Margaret Atwood on Twitter, Paul Krugman on health care, James Wood in the New Yorker, and - this especially fills me with delight - Daniel Mendelsohn on Avatar. There is also an entry that catches my eye: an author on her experience of incorporating poetry into her daily routine. She, Siobhan Phillips,
would use some of the many sites that present daily or nearly-daily verse: “Today’s Poem” at the Academy of American Poets, the “Featured Poem” at Poetry magazine, the morning selection at Poetry Daily and Verse Daily, the “new poems” column at thepage.name—plus a site that offers a Shakespeare sonnet each day.
Tabs appear for Poets.org, Poetry Magazine and Poetry Daily, along with bookmarks in my browser toolbar. A brief glance at these websites reminds me of Michael Silverblatt, a radio critic I discovered from TEV on a similar linkhunt some months ago; I spent a few enraptured hours listening to his radio interviews with Chang-Rae Lee and Sharon Olds, the latter of whose poetry struck a particular chord with me and led me to seek out more of her work (I can now recite her poem "One Week Later" from memory). I haven't been back to Michael Silverblatt since.

But look at all the riches here still to be mined: there are new posts from the ever-impish Nico Muhly, whom I once found deliriously unconstrained but whose words now seem to bracket his meaning like the bars of a cage; a few doses of academia from Dial M; grievances with Google in Evan Osnos's Letter From China (are blog titles italicized or put between quotation marks?). Mark Sarvas points me to a blog I haven't heard of, that belonging to poet and translator George Szirtes, and commands me to "[r]ead it. And when you're done, go back and read his archives. All of them. Really." Well, all right.

Then I get to one of my favourite blogs, Jonah Lehrer's The Frontal Cortex, and I slow myself down as I always do so as to give him my full attention. Each post is a complete, discrete idea, and generally follows a single structure: introduction of a particular study related to neuroscience, discussion of practical ramifications, conclusion. What I like is the way he clearly delineates the philosophical and real-life applications of these ventures in hard science. These last entries are no exception: there is a fascinating study on commuting, which I was just thinking about yesterday while stuck in traffic for an hour and a half after meeting a friend for lunch (I believe my conclusion was "Never again"), and then another on how adults can trigger in themselves a childlike imagination. I thought: yes, this is useful, the next time I sit down to brainstorm a paper or a film subject or ... but I read on, read to the end, and I began to forget what I had read only a paragraph before, forgot the title of the post, and began to sense that my retention of this marvelous idea was going to be temporary, like a ball skidding across ice.

I felt that this was a good place to stop and reflect. Why, exactly, do I read these things? Why is it difficult to turn away a promising link or the hope of a new, excellent blog? They are of course tools, and very useful ones at that - but tools need to be used to become worthwhile. We might better think of these tools as fuel - coal, for instance - that needs burning in order to grant the light and heat we crave. Coal as coal does nothing. To bring the metaphor full circle: if you have one piece of coal, you treasure it and use it through and through, until it has provided to you all that it can. But the internet is like a massive storehouse with shelves and shelves of coal, and we're given a bottomless shopping cart and it's like one of those games where whatever you put in your cart before time runs out is yours to keep. And how beautiful all the coal is.

The result is that nothing is burned through, but everything is singed - here a bit of Thoreau, here a few passages of Wittgenstein, here a handful of verses from Shelley. I doubt it would be an exaggeration to say that I've already accumulated enough coal for ten lifetimes. One could (and one has, in the past) easily devote an entire life to the study of a few thinkers and a few artists. You could easily spend a year just studying Emerson. He is one of the most important thinkers in my life, and yet I've only read a few of his essays, and no more.

There comes a time in all this accumulation, a time I've no doubt already long passed, when more and more becomes less and less. If we are born as blank books and spend our lives filling the pages, then I might say (perhaps unfairly) that I've become all index and no content. But what am I supposed to do? There's so goddamn much of it all.

I'd like to stay, but I've just downloaded the complete filmography of Terrence Malick and am halfway through John Banville's The Untouchable, not to mention the 53 items still waiting in my Google Reader queue. Good night.

Friday, February 5, 2010

This much I know is true

So much that fills our life is underscored by the subconscious assumption of recurrence. We take eager note of first times: first meetings, first travels to particular countries, first times we have thought a certain thought. But mostly we ignore the existence of the last time we will do something. We dress, and we know we will dress time and time again, we speak to a friend, eat scrambled eggs, read a book, remember a scene from childhood, walk down a particular street and stop at a particular tree, always assuming that these things are repeatable. Of course, most of the time, they are: we do eat scrambled eggs again, and maybe re-read that book, but there will eventually have to be a last time we do all of those things. In our final moments, we might look at a lifetime's collection of books, or think about people we have known, and realize that some of these things have already done what they were to do in our lives, and are not meant to do any more. One day, we will all eat scrambled eggs for the last time.

Of course, a last time is usually recognized only in retrospect, when we reach a certain age or condition and reluctantly accept that our time is finite. I imagine it like moving house, except much, much sadder, as when you have two boxes to ship and must decide what goes in: will I read this book again? are these old letters worth keeping? what music will I listen to again, what can I leave behind? When we know our life is ending, the process of selection becomes a kind of insistence on individuality, a laundry list of that which we find singularly valuable and beautiful. Things we became jaded to in the long course of our lives become new once more; we see things as if for the first time, not wanting to miss any detail, not knowing what might never come again.

Last times are usually recognized in retrospect, but I think there are certain rare and mysterious moments in our lives when we know, while they're happening, that this is the last one of its kind, this is something that will not come again. These moments become more and more frequent as we get older, as we experience more and more and run out of time to experience the same things again. Eventually we reach a point where all we do is last times, files snapping shut one after the other, until we run out of things to do for the last time. And each thing we leave behind is a step in our leaving of the world.

I am afraid that, when I enter the chapter of Last Times, I will watch helplessly as parts of me close themselves off like so many folding chairs, removed one by one from the floor and tucked away for the night. At that point, of course, it will be too late to do anything about it. One day, I will never again be able to visit South America; I will never learn to speak Russian; I will never read that book that someone told me I'd love. There won't be the time. We arrive tumbling, sprawling exultantly into the world, creatures of infinite possibility; as Robert Heinlein says,
A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.
Yes, we can do all those things and more, but as we run out of time we need to say goodbye to each of the possibilities that didn't work out, the things that always existed as "maybe" because they were still possible. The promise of our selves collapses into the reality of our selves. At the end, there will be no more use for ambition or discipline, no such thing as promise; there will only be what you have built and what there is.

So out of this frustration let there come a cry for love, love, love. The world is ending all the time, here and there, as pockets of it that you will never see again flicker and disappear. All around you, it is dismantling itself, shrinking and tightening its circumference as the Last Times of your experience pass by and never come again. The only way to push back against this inevitable constriction, as I see it, is to imbue every single fucking thing with an exuberance of love, love being energy and curiosity and patience and understanding. Love the whole stupid world and every stupid thing in it, and you keep it open until it comes down around you and drops like a curtain, as it must, in the end.

Thursday, February 4, 2010

Devastated and lost

An article about grieving in this week's New Yorker mentions the interesting phenomenon of social grieving, which is what happens when a public figure, such as Princess Di or Michael Jackson, dies. For me, the defining aspect of public grieving is the fact that most mourners don't know the deceased personally. When a non-celebrity dies, each person who mourns will mourn in a highly individual way: as a husband, sister, mother, friend, daughter, acquaintance. If a man is survived by his father and his wife, I've wondered, can those two help one another in their grief? How different, or how similar, are the loss of a son and the loss of a husband?

I have never lost anyone close to me, but I imagine I would consider my memories of those I love personal, and would want to protect them. After all, no one else shared x with this person, no one else had a conversation about y. Death brings people together, but it also separates them; death splits a person into versions, one for each survivor. In life there was a body, a mind where all our impressions were centered - I may have known him as a friend, you as your father, but the real he existed somewhere, anchoring our divergent experiences. Death is like a cord cutting: in our grief, we take our different pieces, our separate versions, and hide them away in ourselves.

But social grief is different, because in social grief we all share the same memories, to a large extent. If neither of us knew someone personally, then we are on even ground with respect to what we have lost. This is why social grieving, I think, establishes communities, and why it feels like people pouring into the streets when private grief is more like the drawing of curtains. If someone in our mutual circle dies, I cannot say I know how you feel, even if we both loved the one who is missing. The thing that makes social grief different is the acute sense that people all over the world are feeling precisely what you are feeling, that you are united with others in a mutual and corresponding sense of loss.

The reason I am writing all of this is that the article made me think of my one experience with social grieving, which happened not with Princess Diana or Michael Jackson but with the American writer David Foster Wallace. And as I sit here and ponder this, DFW begins to fill my thoughts as he tends to do; thinking about it is making me go through it all over again.

Friday, January 29, 2010

A Happy Man

Fyodor gripped the envelope in a bird-like fist and stared into the fire. They would be coming now, soon: the telephone calls from family and friends he had not spoken to in years, hoping to make up for past abandonments by being first in line. Reporters. He looked at the pitiable mound of pink slips on his desk, bills that had taunted, threatened, and ruled him only a few days before, and he felt with a horrible sinking in his chest that they had no power over him any more.

Six months ago everything had been different. First, his wife’s affair, well-documented by the local press, became so messy and sensational that his supervisor had been forced to fire him. The shock of losing his wife and his livelihood all at once caused him to begin having panic attacks again, after which even those who remained by him after the scandal were too embarrassed to accompany him in public.

He had been, in a word, miserable. He never left his apartment, which began to smell like the home of someone who had recently died and whose corpse had not been found. Eventually he pulled out a novel he had started writing shortly after getting married, and occupied himself by making revisions. He crossed out more than he added. Whole sheets crinkled and disappeared in the fire. These small acts of destruction buoyed him, gave him hope, but there was soon little left to destroy – and so he began to write, long uneven pages of it, a writing that was as violent as the destruction had been.

He prayed, yes, he prayed that the book would lead him out, he prayed to the book, for it was all that was left to him, a hope of thinnest glass. Oh, but that he could feel happiness once again before his days ran out! He did not know if it was possible.

But the curious thing about memory is that it has no capacity for pain. One can remember times of suffering, of course, but try as one might one will not conjure the feeling of suffering itself, the claustrophobia and the terror: these are gifts that leave you when you leave them. And so it went that Fyodor pulled himself together, he scraped by with a night shift at a gas station, cut his hair, began – once more – to hum. One night he saw a pair of squirrels perching silently on a branch, their small faces toward the light of the full moon, and he remarked how curious it was. And then he got the letter.

He wanted, now, desperately to turn back the clock, to go back a few days before he had read the news, to a time when everything was gloriously uncertain. Yes, he had been doing better, but he never deluded himself that he was in the clear, that – as the stories go – he would live happily ever after. Then, the world could still have ended. Now the world sided annoyingly with him, giving him exactly what he had asked for, all of it, and it made him sick to think about.

And then he thought, what if this is a sign that I am not yet happy? What if my feeling sick means that I am not actually there, but only that the outward conditions I always assumed would lead to happiness have been fulfilled? For now the curtain is raised, there is no escaping the truth anymore: where before I could have blamed my lack of happiness on a lack of things, I now lack no more and yet still feel a lack. I cannot live in the charade any more. I have done all that I can, and yet I still lack.

And with this thought relief rushed through him like fresh rain.

Monday, January 18, 2010

China, a start

A lot of people know a little bit about China's one-child policy, which I learned today has been in place since 1979 and will likely remain for at least the next ten years. If you know as little about this as I did a few hours ago (and I only know a very little more now), I might ask you to stop with me before we go further and agree that we ought not to be looking for an unequivocal stance on whether the policy is good or not – a discussion which always seems to implicate China as a whole, as if our opinion of the regulation of family sizes would indicate an opinion of China as either a practical, prosperous economic engine or as a faceless government-controlled land bereft of basic human rights. Issues that are this complicated do not have on/off switches for answers, and there may not exist a single person anywhere who understands the history and circumstances of the issue well enough to make a judgment that approximates something like objectivity. All we can do – all we can ever do – is learn as much as we can, and remember that our own understanding is always partial. Only then will we be open to others' partial understandings and more reasonable in our suggestions of what might be done.

The Wikipedia entry on the one-child policy is a good start, and led me to one article that seems an example of What Not To Do. It’s a petition published ten years ago in the Washington Times that aims to browbeat its readers into adopting the author’s rather extreme views, and thereby attempts to prevent the funding of population control programs like China’s family planning policy. Many of the points raised by the author – such as the very real problems concerning the favouring of sons over daughters, a cultural proclivity which results in a staggering imbalance in the gender ratio, something like 120 to 100 nationwide – have also been raised by other, more even-handed writers, but the present author mixes in what might be truth with such obviously militant pronouncements and rhetoric that it’s hard to take him seriously. It is of course a good thing that many will be introduced to the subject through this article, and will hopefully be motivated to read further and think for themselves, but the corollary danger is that the real injustices and crimes mentioned, the rates of infanticide and the cultural bias against girls, may be dismissed by readers who object to the writer’s caustic, bullish prose.

Stephen Moore (the author) writes that “no sane person” would subscribe to the view he objects to, which, if you think about it, is sort of just a mild way of saying “you’re all crazy.” It’s not a long stretch from this to the adjectives he appends to any mention of the fund or the family planning policy, which include “genocidal,” “fanatical,” and “demon-like.” With the exception of “genocidal,” which is a real and serious accusation, the language Moore employs is emotional and imprecise, and doesn’t tell us very much. I always think in situations like this: if this person thinks a system is “fanatical,” there must be one other person who thinks it isn’t, and wouldn’t it be nice if I had a glimpse of the opposing argument to compare? But Moore’s article is staunch and impenetrable.

I don’t want to turn this into a catalogue of all the pros and cons of family planning in countries such as China, because I don’t know enough about the subject and don’t pretend to. That being said, I know enough already to raise my eyebrows when Moore writes that
family planning services do not promote women's and children's health; they come at its expense. There are many Third World hospitals that lack bandages, needles and basic medicines but are filled to the brim with boxes of condoms -- stamped UNFPA or USAID.
It’s not clear how the lack of resources in Third World hospitals is related to the promotion of smaller families, but I would think that increasing the number of births in such hospitals would not in itself lead to better all-round sanitation. I also don’t see the connection to condoms, which are freely distributed in clinics in many other countries, including in the West. Contraception, whose purpose is to curtail unwanted pregnancies, is surely good sense; it takes nothing away from couples’ ability to control the number of offspring they have, and if anything improves it. Moore thus makes a false monster (or a straw man) out of a population-control initiative that I can’t imagine anyone objecting to: that of reducing the number of unplanned, accidental and unwanted pregnancies in a population already bursting at the seams.

I will only say one more thing here, which is that Moore’s own solution to the problem seems to be that we ought to inject more capitalism into China and watch as it solves everything. This is evident even before his last paragraph, in which he basically says that all Third World countries should model themselves after the U.S. in order to improve themselves. I won’t address Moore’s contentions directly, except to say that when the problem is as complex and variegated as this one is, in a country as large and politically bristly as China, and your solution is a one-liner combined with a dismissal of all the cultural, economic, social and historical realities that underlie the daily lives of nearly a billion and a half people - well, your solution may not get us very far.