
Latest Blog Posts

by Rob Horning

2 Jul 2009

Complaining about the technologically mediated acceleration of life and the loss of the time for contemplation has become a lot like crying wolf. From what I gather, people seem to be sick of hearing it—as a meme it had its moment several months ago. Even though I’ve beaten that drum many times, I find myself thinking: Okay. Concentrating is hard, but then when hasn’t it been? There is a surfeit of distractions; I get it. But it’s not like I am going to go on an information fast and spend my free time meditating. I’m not going to dismantle my RSS feed and devote an hour a night instead to reading a single poem. Those seem like idealistic, nostalgic fantasies about the “life of the mind,” which in practice would most likely amount to a refusal to engage with life as it is actually being lived. For example, I very much wish I were in a world without Twitter and maybe even without telephones, but that doesn’t mean it’s imperative that I live as if it were so. Down that road lies the technological equivalent of veganism, wherein everyone in my life would need to adapt to my fussy, righteous rules about which ubiquitous behaviors were permissible in my little world.

Still, though David Bollier’s account of an April 2009 lecture (probably based on this paper, pdf) by media studies professor David Levy has its share of neo-Thoreauvianism, it nevertheless raises some points worth considering. The main gist is this: “The digital communications apparatus has transformed our consciousness in some unwholesome ways. It privileges thinking that is rapid, productive and short-term, and crowds out deeper, more deliberative modes of thinking and relationships.” I have said the same sort of thing lots of times, but, as Levy asks, what actually constitutes the difference between “productive” thought and “deliberative” thought? I tend to think of the former as data processing—tagging mp3 files, for instance—and the latter as analytical inquiry, but it may not be so easy to distinguish the two. The mental modes tend to flow into one another. Working through menial mental tasks sometimes allows for inspiration to break through—and after all, what is one supposed to be doing with one’s mind when it is taking its time to deliberate? The “information overload” critique sometimes centers on the idea of slowing down the mind. But the mind is always moving, thinking one thought after another; the problem with the internet is that it gives it too many places to go all at once and has the potential to gratify too many idle curiosities. Bollier suggests that “We are sabotaging those inner capacities of consciousness that we need to be present to others and ourselves.” But the dream that Levy attributes to Vannevar Bush seems a more apt description of what we’ve tried to do: “Bush’s intention was clear: by automating the routine aspects of thinking, such as search and selection, he hoped to free up researchers’ time to think more deeply and creatively.” It’s just that the two functions can’t be separated; the way in which we think about things doesn’t have degrees. It’s holistic; we require routine tasks to fire our creativity, and creativity can often become routinized.

It’s important to distinguish between having information at our disposal and lacking the discipline to make contemplative use of it. Often the two are implicitly elided, as if too much information automatically leads to frivolous surfing through it. Bollier writes, “Fast-time activities absolutely crowd out slow-time alternatives. The now eclipses the timeless. And we are becoming diminished creatures in the process.” I don’t quite understand this. We have to live in the now, because we are not “timeless.” We die. And the problem with information overload doesn’t lie with the activities and the media so much as it does with the approach we take to them, the ideology about information consumption we have internalized in the course of mastering these new technologies. We think they are supposed to make our lives convenient, and we measure that in terms of time efficiency. If we do many different things in the same span of time in which we were once forced to do only a few—if on the train we can read 17 books simultaneously on a Kindle rather than one—then we are “winning.” The pressure to consume more is not inherent to the technology or to some new perception of time, but is instead inherent to consumer capitalism, which fetishizes quantity. As Levy points out, the roots of this are in the “production problem”—how to keep making more stuff if people are already sated and don’t have the time to consume more. The solution: manufacture new wants and speed up consumption. So the consumerist imperative probably led us to develop many of these technologies. But still, we should be careful not to blame the tools for the kind of people we have become. (If Twitter went out of business tomorrow, many people’s discourse would still remain superficial and inane.) If we have ceased to be able to love, it is not because we lack the leisure or are too distracted. It is because we have learned to privilege different sorts of experience, are rewarded for different sorts of accomplishments.

So the call for “an ‘information environmentalism’ to help educate people about the myriad and aggressive forms of mental pollution afflicting our lives” seems misguided. The “mental pollution” is an effect, not a cause, of our loss of contemplative peace. That is, our mental lives are not degraded by information but by a pervasive cultural attitude toward it, one that treats ideas as things to be collected and/or consumed.

ADDENDUM: Ben Casnocha’s review of Tyler Cowen’s new book presents a far more cogent critique of the “attention crisis” hullabaloo than what I’ve provided above.

We have always had distractions. We have never had long attention spans. We have never had a golden age where our minds could freely concentrate on one thing and spawn a million complex and nuanced thoughts. Cowen reminds us that charges to the contrary have been made at the birth of every new cultural medium throughout history. Moreover, the technologies that are supposedly turning our brain into mush are very much within our control. The difference between the new distractions (a flickering TV in the kitchen) and age-old ones (crying infant) is that the TV can be turned off, whereas the crying infant cannot.

He also notes the way in which chaos and “un-focus” can lead us to breakthrough insights. Though I don’t remember agreeing with much of Sam Anderson’s New York magazine essay in praise of distraction, this point that Casnocha highlights seems apropos: “We ought to consider the possibility that attention may not be only reflective or reactive, that thinking may not only be deep or shallow, or focus only deployed either on task or off. There might be a synthesis that amounts to what Anderson calls ‘mindful distraction.’ ” That’s what I was struggling to express above: thinking is thinking; subjecting it to binary categorizations does injustice to how it actually works and leads to unnecessary and useless prescriptions for how to provoke thinking of a certain type. 

by tjmHolden

2 Jul 2009

Photo: Spencer Weiner / Los Angeles Times



PM is on a break, so I might as well come off mine.

I’ve been traveling about for some time now, collecting pictures and anecdotes, which I’ll post as time permits. But while I was sifting through the shots and sorting out my thoughts, I came across this piece in the Los Angeles Times.

It was listed under their “Most emailed Stories” sidebar, but “story” is actually a misnomer, since it is mainly a collection of photos with a bit of text clinging precariously—apologetically (one might even say)—to the outer edge.

Proving (what we already know): that pictures often speak more authoritatively than words.


Anyway, clicking through the photos, the following thoughts came to mind (not necessarily in this order):


by Bill Gibron

1 Jul 2009

Sometimes a movie makes a decision so dumbfounding, or takes a narrative path so peculiar, that it can’t fully recover from such a lame left turn. It happens all the time in horror, from the false ending where the killer, presumed dead, is simply playing possum before unleashing more meaningless slice and dice on his moronic victims to the “it was all just a dream” dynamic that is frequently jerry-rigged to include after-death and/or life experiences. And then there is The Unborn. With David Goyer behind the lens, anyone who expected a terror tour de force needed to have their preoccupied pre-teen head examined. For everyone else, the screenwriter responsible for (part of) the Batman franchise reboot has been trading on the new Caped Crusader’s commercial cachet for far too long. First there was the awful The Invisible, and now we get a stupid fright fest that tosses in exorcism, demonic children, and a halting Holocaust reference for added idiocy. 

Megan Fox’s non-blockbuster familiar, Odette Yustman, stars as Casey Beldon, a coed with a propensity for seeing dead people. Every night she dreams of a satanic little boy with Meg Foster’s eyes. During the day, she’s tormented by equally unsettling visions. Her distant father chalks up said struggles to the suicide of her mother. Casey is convinced, however, that the creepy kid is trying to kill her. Things get even more muddled when our heroine learns that she is a twin, that her brother died in the womb, and that her grandmother was a victim of Dr. Mengele’s experiments on Jews at Auschwitz. After stealing a sacred book from the local library, she looks up a rabbi who might be able to help. Turns out Casey is being stalked by a dybbuk, a malicious spirit that wants to steal her body, cross over, and live in the real world. It’s been after the family since World War II, and without some kind of religious ritual, it just might succeed this time.

The Unborn is like a scary movie sentence without the necessary linking verbs. It’s all genre gears and no motivational motor. There is not a single character we care about, not a single moment of genuine fear or dread. As he proved with The Invisible, Goyer sure knows how to dumb down the standard horror concepts - and we’re not talking about a rocket scientist cinematic category to begin with. It’s as if he purposely looks at the marketing demographic - bored teens with disposable cash and gullible dispositions who couldn’t care less about things like characterization, plot logic, or smart dialogue - and then specifically dials into that dopey wavelength. Then he manufactures a narrative that is all payoff, with none of the mandatory set-up to get you invested in the terror. And just when you think things can’t get any weirder, along comes a sidetrack through the Final Solution to make the whole thing ethically questionable.

Indeed, it’s the moment when Jane Alexander’s wise old woman cliché croaks out the word “Holocaust” that The Unborn goes from slightly tolerable to terrible. Up until that point, we’ve bought the various forced filmmaking shocks, the typical trip through ambient noises, secondary education hallucinations, and that obligatory shot of our heroine hoping that she’s simply slipped a gasket…or two. But then Hitler has to enter into the mix, an obvious ploy to place the dybbuk (a facet drawn from Hebrew folklore) within some sort of recognizable context. What Mr. Goyer doesn’t understand is that demons can be just that - unstoppable imps with an urge to cause major mischief among the living. Just ask Sam Drag Me to Hell Raimi. Not every monster needs a culturally valid backstory. Toss in Gary Oldman as the unlikeliest rabbi ever, and you’ve got a Torah full of tripe.

And it just gets worse. Even in the extended DVD cut, which promises more unrated bang for your already underwhelmed buck, the last act of The Unborn plays like a community college crash course on William Peter Blatty. We get the sudden arrival of a helpful, athletic priest, the mumbo jumbo jollies of a call and response sacrament, various bodily convulsions, and enough upside-down-headed dogs to give the ASPCA fits. Add in the sudden stop to all the supernatural shenanigans, an ineffectual and pointless ending that tries to trick us, and an epilogue which introduces an element into the story that any right thinking fright fan could see coming from a couple of dozen hectares away, and you’ve got junk - a lumpy, lunatic landfill overflowing with the half-forgotten ideas of a dozen would-be macabre masters. It’s safe to say that adolescents obsessing over the subject in their parents’ basements could come up with more compelling thrills.

All of which gives Goyer’s continuing prominence in Hollywood a questionable black eye. Considering the less than successful facets of his Dark Knight-less career (Jumper, Blade: Trinity) and the possible projects he has on tap (X-Men Origins: Magneto among the many), it’s clear that Christopher Nolan will be carrying this show biz shoulder shrug for at least another Bat dance - and that’s really too bad. Someone should really inform the studio suits that the mammoth success of some single project does not instantly equate to artistic excellence for all of its many creative contributors. The Invisible may have been a watchable waste of time, but at least it didn’t flummox every last aspect of your overall horror film fandom. Watching this kind of contrived dreck makes one contemplate one’s own sense of genre love.

It also raises the question of context. Had Goyer avoided anything to do with the most heinous crime in the history of mankind, if we hadn’t flashed back to a barracks filled with kids and Nazi operating tableaus featuring wee ones in various states of experimentation, would The Unborn really have been any better? Did this midpoint maneuver, clearly meant as a way of justifying the rest of the faux fire and brimstone hokum, really drive the movie into the ground - or was Goyer’s presence already enough to sink this stupid concept from the start? Lost in all of this is the desire of movie fans to experience something original, terrifying, and (ultimately) fun. The Unborn may have started life as a decent idea. Somewhere between concept and birth, this baby turned bad…really bad.

by Bill Gibron

1 Jul 2009

Most independent filmmakers lack balls. Oh sure, they think that by tackling their lingering interpersonal issues, traumas tripped by memories of failed potty training and the lack of parental love, they are being brave and brazenly honest. Nothing could be further from the truth, actually. In a nu-reality world where people procreate artificially and then sell the rights to said stunt for aggrandized TLC fame, picking apart your past is like shooting familial fish in a barrel. We’d say “been there, done that” if both said sentiment, and the situation it describes, weren’t so clichéd. So when a couple of crazy cinephiles decide to make their own outsider statement, one automatically expects a journey back through Oedipal/Elektral memory lane, or worse, another sloppy scary movie. As luck would have it, 10,000 AD: Legend of the Black Pearl has substantially bigger and better celluloid carp to fry - and it does so in the dopiest, most delightful way possible.

Like Teenage Caveman only with even more leaps in logic, two tribes now roam a surprisingly fertile post-apocalyptic wasteland. They are the warrior race known as Hurons, and the agricultural clan called the Plaebians. After years of living in relative harmony, the arrival of an evil presence known as the Sinasu has the clans clashing with one another. To make matters worse, there is a regressive religious prophecy that predicts the arrival of The One, a being that will create balance between the people and put wickedness out to pasture. In everyone’s mind, young Huron champion Kurupi is the chosen savior. Mentored by Tukten in the ways of combat, he must accept five different challenges, collect the five sacred stones that result, and then raise the mythic Black Pearl. Only the power inherent in this legendary orb can defeat the sinister Sinasu. But when his master goes missing, Kurupi appears lost. Luckily, hot-tempered teacher Ergo will complete the boy’s training. Only then can he save the Earth from itself - again.

There are only three words that can accurately describe 10,000 AD: Legend of the Black Pearl: Oh…my…god! It is safe to say that never before in the history of independent genre cinema has so much artistic vision and eye-popping onscreen imagery gone to such a ludicrous, laugh out loud bit of future shock falderal. Credit definitely goes to the directing team of Giovanni Messner and Raul Gasteazoro. Their sense of the epic is so skewed, so “why have a conversation in a clearing when we can have it on the edge of a cliff poised several thousand feet above a lush autumn glen”, that it literally rattles your brains. One moment, you are snickering at the stupid dialogue and goat cheesy choices for mythology and folklore. Part of the movie actually plays like Quest for Fire meshed with a Uwe Boll level of prehistory. But then our dynamic duo will set said silliness in a location so gorgeous, so beyond all manner of sensible scope or size that we acknowledge the flaws and still find ourselves transfixed.

With its homoerotic leanings, awkward action sequences, and nonstop pseudo-Tolkien-babble, this movie is a real mess. Messner and Gasteazoro clearly have an eye for vistas - someone needs to sign these two up for promo travelogue duty pronto! From mountaintops that put Michael Mann’s Last of the Mohicans to shame to waterfalls that sing of nature’s undeniable beauty, 10,000 AD really does look absolutely stunning. Heck, even the costume design and personal appearance elements add to the overall effect. As Ergo, Gasteazoro lets his dread head freak flag fly. Real or not, his matted hair helmet gives the film a rather authentic feel. Similarly, Julian Perez’s Kurupi is not some ripped slice of stuntman. Instead, he looks like someone who has spent his life in hermit-like service, waiting for the sign to stand up to the notorious Sinasu.

But the rest of the movie - whoa! If you think that Introduction to Theology class back in college was tough going, if you believed The Matrix would have been better with more bullet time and less proto-philosophical gobbledygook, then you better give this mumble jumble movie an incredibly wide berth. There is just too much D&D dipsticking here, enough World of Warcraft roleplaying retardation to give Magic: The Gathering geeks uber-ultra hissy fits. One moment, the narrative takes us to places paranormal. The next, Ergo and Kurupi are rolling around like extras in the Pet Shop Boys’ “Domino Dancing” video. Random villagers show up and make ominous predictions. In the meantime, monochrome flashbacks add their own “the A-bomb woke me up” confusion. As a piece of speculative fictionalizing, 10,000 AD: Legend of the Black Pearl is mortifying. But as a work of pure cinema, it’s an embarrassment - of riches.

This has to be the best looking bad movie ever made. The craptastic kitsch factor just can’t compete with the National Park level of gorgeous eye candy included. Every time you’re ready to tune out the tripe flowing freely from the characters’ mouths (including mind-numbingly insane moments in a nonsensical “native” tongue, with subtitles), the backdrop draws you back in. For their part, Messner and Gasteazoro treat the widescreen frame like a canvas, painting pictures you won’t soon forget. And they even go so far as to add unusual elements to the scenes, like filming underwater and utilizing the ruins of real buildings as part of their production value. Even the musical score by Jed Smith makes significant strides to sell us on the overall otherworldly ambience involved.

But that doesn’t stop 10,000 AD: Legend of the Black Pearl from stinking like a rank Huron’s loincloth. Given a chance, this movie messes up everything except the way it eventually looks. We don’t care about the quest, don’t really understand the folklore logistics involved, and constantly question the decision to set the film in the future. Is there really a need for the opening stock footage Armageddon, or do Messner and Gasteazoro really believe that such a flimsy foundation adds to their adventure? Whatever the case may be, you’ll definitely find better examples of this strangulated cinematic type, but here’s betting that none look as lovely as this. 10,000 AD: Legend of the Black Pearl may ultimately fail as science fiction or implausible peplum fantasy, but it has some oddly artistic touches. It’s as confusing as it is captivating.

by G. Christopher Williams

1 Jul 2009

Mario has always confounded me.  Video gaming’s first sex symbol, Lara Croft?  I get her appeal.  Solid Snake has that Clint Eastwood vibe.  And over 75 years of American cinema has clearly established the irresistibility of large apes with the surname Kong.  But, a stout plumber with a great deal of facial hair?  What makes him a superstar?

Certainly, there is something to be said for firsts.  Mario is one of the first video game characters to become recognizable, in part because of his persistent appearance in Nintendo games like Donkey Kong (1981), Donkey Kong Jr. (1982), Mario Bros. (1983), and Super Mario Bros. (1985).  Part of this persistence of the character may be due to his original conception. 

While Shigeru Miyamoto initially imagined Mario as a carpenter in Donkey Kong, he was reconceptualized as a plumber by the time he and his brother Luigi were to appear in a game titled for these two regular joes.  Indeed, Miyamoto reportedly designed Mario with an eye to creating a character that would be relatable to players as an emblem of the common man.  The traditional uniform of the labor classes, overalls, seems a simple enough visual sign to send the message of who Mario was intended to be. 

While I have often found myself baffled by his iconic stature, perhaps I shouldn’t be—especially as an American, who should easily recognize the distinctly American appeal of a hero based not on the traditional qualities of a hero but instead on Emersonian and Puritanical work ethics.  Mario’s first official appearance as a plumber in Mario Bros. contains more than just a brief nod to the uniform of labor.  Its gameplay is wedded (maybe “welded” would be a better choice of words given the blue collar roots of this “American” hero) to the ethics and heroism of work.  Mario and Luigi spend their working hours clearing invading reptiles out of the pipes. 

Interestingly, the game suggests that the work of plumbing is its own reward.  Points in Mario Bros. are accrued by doing the dirty work of keeping the tunnels clean, ridding them of turtles, and by acquiring the spare change (coins) that emerges from time to time from the pipes above.  Turtle extermination and penny gathering lead to more life for Mario, as this work and coin are translated into points that earn “extra lives.”  In Mario Bros., work is performed only so that work can continue. 

Working to acquire money for the sake of survival becomes a persistent theme in the adventures of Mario through this mechanic of money being used to purchase life.  The value of money for survival is established more directly in Super Mario Bros.  Defeating fungus and winged turtles no longer gains Mario anything other than points, but 100 coins always translates into an extra life.  Thus, the practicality of a working class experience is more expressly represented in the economics of the franchise.  The working man is always working hand to mouth.  With every nickel and dime, Mario ekes out a continued existence.

If Mario is heroic as a hard worker, though, in a kind of Faulknerian sense—because he “endures” through his persistent labor—he is also a hero rewarded in less pragmatic ways.  If perseverance is the practical means to an end in the American mythology surrounding work, the end goal that hard work is intended to realize is one much more ideal in nature, the realization of the American Dream.  The notion that success is a “dream” (as American nomenclature suggests) removes the concept from the realms of pure pragmatism and more clearly recognizes its idealized and romantic nature, the stuff of transcendental dream.  This romanticism may explain why Mario finds himself in such extraordinary circumstances in so many of his appearances.  The blue collar worker rather than a knight in shining armor (the kind of traditional romantic hero of European culture) is the one who will save the girl from the giant ape in Donkey Kong.  Yet, this image is further romanticized in Super Mario Bros. because he is the regular guy who will save, not the girl next door, but the Princess herself. Unlike the goal of saving Pauline from Donkey Kong, Mario does not simply get the girl—he gets the girl who is emblematic of wealth and prestige, seemingly the end goals of American stick-to-itiveness.  That Mario has to traverse seven worlds in Super Mario Bros. and defeat seven incarnations of Bowser and yet is consistently met with the anti-climactic announcement, “Thank you Mario! But our princess is in another castle!” speaks to Mario’s perseverance as a man committed to keeping his eye on the prize.  If you keep working, eventually you will get to World 8-4 and real success.

In other words, Mario is not merely relatable as a regular joe, but his progress from the labor class to a man capable of mixing with the elite is a familiar claim of the American dream of upward mobility.  With a lot of hard work and elbow grease, not only can one survive, but one can eventually land the princess and everything that she represents.

So, while Mario lacks sex appeal, a laconic presence, or even some basic semblance of cool, I guess I can understand that his appeal stems in part from his possession of true grit and a dream.  Forget G.I. Joe: Mario, seen in this way, is the real American hero.
