Latest Blog Posts

by L.B. Jeffries

25 Aug 2008

The expense of video games has always had a tenuous relationship with what the consumer is purchasing. Sixty dollars is no small amount of money, and it’s not unreasonable for a gamer to expect quite a bit of bang for their buck. A game needs to have a great solo experience, fun multi-player, generate a lot of playtime, and appeal to a wide audience to garner much critical acclaim these days. Hell, it had better jump through hoops and entertain the whole family for that kind of cost. Yet some games are definitely worth that kind of money. The sixty dollars you pay for Call of Duty 4 is going to be repaid tenfold when you go online and get absorbed into the matches. Like buying a set of golf clubs or a croquet set, you know this is a game you can play over and over again. There’s long-term value in that, a sense of getting your money’s worth. But for a game that’s purely single-player, one that’s trying to deliver a tight plot and a precise experience, it’s much more difficult to justify the cost. Sixty bucks for a game I’ll play once or twice is asking a lot. In a consumer culture where I can rent a series for a monthly fee or buy a long book for a few dollars, it can be hard to justify sixty dollars for a plot-heavy Third-Person game. How do we make video games that are just about the stories work for the consumer?


The biggest solution going on right now is downloadable content and episodic game formats. Telltale’s Sam & Max games are doing well financially and have even been breaking into the green on Metacritic. At ten bucks to download and averaging about 3 to 5 hours of gameplay, that seems like a fair deal. It’s just the right length for a lazy afternoon, or spread out over several days, without putting my wallet into a world of hurt. It makes the story flow a lot better as well; a game that runs ten hours suffers because the narrative ends up lagging somewhere. Your favorite book or movie still adheres to a basic formula of introduction, rising action, climax, falling action, and resolution. But when a video game tries to apply that formula, it usually stalls somewhere because it has to drag one of those elements out. You’ll spend five hours on the rising action, only to blister by the climax and have the resolution be a two-minute pop song. Episodic content isn’t just a good value for a game; it’s better suited to keeping the narrative flowing properly. Portable games have also been adopting this style, with the average level or episode taking about 15 minutes to an hour to beat. This helps people who play in quick bursts on the subway, but you can also see how, inadvertently, portable games tend to have better story pacing.


But how can we use this design to maximize the income of a Third-Person game? Sam & Max uses a lot of great concepts like the season pass or the full-season purchase, but I can’t help but wonder if the economic model is at its full potential yet. If episodic games are going to be as appealing as TV, you’d need to distribute the games episodically for free for a limited amount of time (to get people hooked) and then recoup the cost through advertising and season passes. As Flash players become more powerful, the lucrative options of this model will soon be a reality. Little downloading, minimal hardware demands, and the necessity to stand out amongst the competition should all help drive narrative games into new and creative territory. Consumers have consistently shown their willingness to play a free game in exchange for seeing an advertisement, but developers are just now beginning to offer more complex games in this model. Why not play a commercial during the load time between games? Or do as Rainbow Six Vegas did and fill the world with in-game billboards and ads. With companies producing prototypes like this for Flash 10, it’s only a matter of time before this is feasible. Graphically complex episodic games could find a home in a few years, when my web browser can produce graphics as cutting-edge as today’s consoles. But is there a chance for more? If including multi-player can expand the value of a product, what other options could be given to the player?

One of the most interesting things to come from gamer culture has been the mod community, and it might be in the best interest of developers to embrace that community more fully. Why not throw the gates wide open and actively try to make creating games with the engine as easy as possible? With so many brilliant mods and games coming out of the likes of RPG Maker or Garry’s Mod, letting people make their own games with it and hosting them on your server could get you no-risk content. Work out a licensing agreement with people who make good games, divide up the revenue, and suddenly you’ve got an army of potential narrative games to offer your consumer. I don’t mean just leaving the door unlocked; this is about making in-game tools easier for the player to use. Software that lip-syncs, incredibly easy animation tools, and editors that even my grandma could use. Furthermore, you don’t even need to hand people a blank slate. You would include all the in-game art and animations and consistently add new ones as you create a larger body of work. It would be a huge boost to the Machinima scene as well. Naturally, anyone who downloaded all of this and tried to make money without the owner’s consent would be subject to legal measures. People would still be able to distribute their work for free, but perhaps by offering to sponsor a good game with professional voice work and editing you’d give them an incentive to work with you. If narrative games are eventually going to migrate to the internet to reduce costs, it is not enough to just start posting brilliant narrative games. Developers must continue to innovate in multiple fields to stand out.


There are plenty of other applications for video games that could generate revenue. What about a Victoria’s Secret catalogue that uses Unreal Engine 3 to let people have a customized avatar try on clothes and see how they look? Architects already use game engines to demonstrate their designs to potential customers, so why not let people check out hotels or explore national parks before they even make the trip? A lot of this article has turned into speculation and wild business proposals, but it’s important for those who enjoy plot-heavy Third-Person video games to be mindful of the economics at work. It’s very hard for any story, no matter how brilliant, to get much of a chance when the gamer has spent a fortune on it. All that cynicism and irritation melts away when you’ve only spent ten or twenty bucks on the game. In those kinds of conditions, the plot is given a chance to really shine. Short of the game being perfect in every regard, would we even notice the ‘Citizen Kane’ of games after it ripped us off for sixty bucks?

by Rob Horning

25 Aug 2008

A problem I keep finding myself returning to is why I seem to spend more time tagging and arranging my music files than I spend listening to my music. Part of that is a cognitive illusion, but a telling one—I’m listening to music the entire time I’m doing the iTunes bookkeeping work, but my concentration is on the data, not on the intricacies, harmonies, melodies and hooks of the music. It barely breaks through, and usually only when the song playing is so irritating that I have to skip to the next one.

In my mind, this is symptomatic of a larger problem, of consuming information about goods rather than allowing goods to facilitate sensual experiences. In part, this is so we can consume more quickly, a product of the time crunch we face in expanding our consumption—we want faster throughput, since quantity seems to trump quality, and the pleasure in consuming seems to come from the acquisition of the next thing. To authorize that next acquisition, we need to satisfy ourselves that we are done with what we have. Processing it as information is a quick way of doing just that.

As a consequence of this eagerness to process more and more stuff, I end up amassing an embarrassingly thorough knowledge of the surface details of pop culture—who wrote what and who sang what and who played on whose record and when this show was canceled or had this or that guest star or whatever. Worse, I invest far too much significance in brandishing this knowledge as some kind of accomplishment, as if life were a big game of Jeopardy. This useless depot of detail is what a show like Family Guy tries to reward me for having accumulated. Getting to laugh at it is like a kind of booby prize.

But iTunes metadata seems to me the best emblem of the information problem, of the trap we are lured into of substituting clerical data processing for thought and experience. Adorno seemed to anticipate this precisely in “The Schema of Mass Culture,” whose title alone suggests its application to the digitization of all cultural distribution. He argues that art, in being manufactured for the masses, is reduced to the data about itself, which masks its subversive potential. “The sensuous moment of art transforms itself under the eyes of mass culture into the measurement, comparison and assessment of physical phenomena.” This is like accessing iTunes metadata in place of hearing the song. Because the metadata for all the music is the same, all music from that perspective is also essentially the same. And the argument can be extended to all of digitally distributed culture.

The underlying sameness of the medium for culture today reveals the truth about the phantasmal differences in form and genre. (As Adorno puts it, in his inimitable way, “the technicized forms of modern consciousness…transform culture into a total lie, but this untruth confesses the truth about the socio-economic base with which it has now become identical.”) It’s all more or less the same, allowing consumers to obey the command to enact the same self-referential decoding process, reinforcing the same lesson of eternal sameness.

The more the film-goer, the hit-song enthusiast, the reader of detective and magazine stories anticipates the outcome, the solution, the structure, and so on, the more his attention is displaced toward the question of how the nugatory result is achieved, to the rebus-like details involved, and in this searching process of displacement the hieroglyphic meaning suddenly reveals itself. It articulates every phenomenon right down to the subtlest nuance according to a simplistic two-term logic of “dos and don’ts,” and by virtue of this reduction of everything alien and unintelligible it overtakes the consumers.

What Adorno would call “official culture”—that which is made to be reviewed and talked about by professional commentators and promoted by professional marketers and consumed commercially—seems to be so stuffed with data and information and objects and performers and whatnot that no one could ever in their right mind question its plenitude. There’s so much, you’d have to be nuts not to derive some satisfaction from all that. Think of all the stuff you can download! But the one thing missing amid all this data is the space for a genuine aesthetic experience, a moment of negativity in which an alternative to what exists, what registers as “realistic,” can be conceived. Instead, one feels obliged to keep up with official culture so as not to find oneself an outcast. People go along not necessarily because they love pop culture but because “they know or suspect that this is where they are taught the mores they will surely need as their passport in a monopolized life.” Pop culture knowledge becomes a prerequisite for certain social opportunities, a way of signaling one’s normality, or one’s go-along-get-along nature. “Today, anyone incapable of talking in the prescribed fashion, that is of effortlessly reproducing the formulas, conventions and judgments of mass culture as if they were his own, is threatened in his very existence, suspected of being an idiot or an intellectual.” I think of this quote sometimes when it comes up that someone has never knowingly heard a Coldplay or John Mayer song, or hasn’t seen an episode of American Idol. Really? Have you been under a rock? Are you lying? Why this makes me suspicious rather than elated, I don’t know. And it especially reminds me of my record reviewing, when I tried to pretend there was inherent significance in the commercial output of E.L.O. or the Drive-By Truckers. And as the information about pop culture proliferates, we become more ignorant about politics and basic facts about how our economy operates.

Once participation in public official culture becomes a matter of collecting trivial, descriptive (as opposed to analytical) information about it, Adorno argues that “culture business” then plays out as a contest. Products “require extreme accomplishments that can be precisely measured.” This I would liken to the data at the bottom of iTunes that tells you the number of songs you have and the number of days it would take to listen to them all. It’s not intended to be a scoreboard, but it can seem like one. This sort of contest culminates in collecting mania, where an object’s use value has been shriveled to its being simply another in a series.

To radically oversimplify, Adorno argued that mass culture, a reflection and paradigmatic example of monopoly capitalism, served to nullify the radical potential in art, debasing its forms and methods while acclimating audiences to mediocrity, alienation, hopelessness, and a paucity of imagination. It works to form individuals into a mass, integrating them into the manufactured culture, snuffing out alternative and potentially seditious ways for people to interact with one another while facilitating an ersatz goodwill for the existing order. “As far as mass culture is concerned, reification is no metaphor: It makes the human beings that it reproduces resemble things even where their teeth do not represent toothpaste and their careworn wrinkles do not evoke cosmetics.” The contours of our consciousness are produced by our culture, and advertisements reflect those dimensions while fostering their reproduction. 

Basically, through its ministrations, all the movements of the individual spirit become degraded and tamed and assimilated to the mass-produced cultural products on offer, which ultimately fail to gratify, perpetuating a spiritual hunger while occluding the resources that might actually have sated it. Pleasure becomes “fun,” thought becomes “information,” desire becomes “curiosity.”

But what could be wrong with curiosity? It seems like it should be an unadulterated good, a way of openly engaging with the world. Adorno, in a feat of rhetorical jujitsu, wants to have us believe it means the opposite. Because it is attuned not to anything more substantive than pop-culture trivia, curiosity “refers constantly to what is preformed, to what others already know.” It is not analytical or synthetic; it simply aggregates. “To be informed about something implies an enforced solidarity with what has already been judged.” Everything worth knowing about, from a social perspective—anything you might talk about with acquaintances, say—has already been endorsed, is already presented as cool even before anyone had that authentic reaction to it. Cultural product is made with cool in mind, whereas authentic cool, from Adorno’s standpoint anyway, must always be a by-product. At the same time, curiosity suppresses genuine change, supplanting it with ersatz excitement for cynical repetitions—think the fashion cycle, in which everything changes on the surface but nothing really changes. “Curiosity is the enemy of the new which is not permitted anyway,” Adorno says. “It lives off the claim that there cannot be anything new and that what presents itself as new is already predisposed to subsumption on the part of the well-informed.” This means attention to the surface details, which prompts “a taboo against inaccurate information, a charge that can be invoked against any thought.” Basically this means that in our cultural climate, your thoughts about, say, Eric Clapton’s guitar playing are invalid unless you know what model guitar he was playing and what studio he was recording in at the time. The trivia is used to silence the “inexpert.” So “the curiosity for information cannot be separated from the opinionated mentality of those who know it all,” Adorno argues. Curiosity is “not concerned with what is known but the fact of knowing it, with having, with knowledge as a possession.” Life becomes a collection of data, and “as facts they are arranged in such a way that they can be grasped as quickly and easily as possible”—in a spreadsheet, for example. Or a PowerPoint presentation. These media suit facts as opposed to thoughts, and encourage us to groom our data sheets for completeness and clarity rather than insight. “Wrenched from all context, detached from thought, they are made instantly accessible to an infantile grasp. They may never be broadened or transcended”—the metadata fields are unchangeable—“but like favorite dishes they must obey the rule of identity if they are not to be rejected as false or alien.” Works don’t seek to be understood; they only seek to be identified, tagged, labeled accordingly to make them superficially accessible.

The reduction of thought to data allows us to consume culture faster, enhance our throughput, and focus on accumulating more. The idea that you would concentrate on one work and explore it deeply, thoroughly, is negated; more and more, it becomes unthinkable, something it wouldn’t occur to anyone to try. “Curiosity” demands we press on fervently, in search of the next novelty.


by Rob Horning

25 Aug 2008

In discussions of the current economic woes in the U.S., the dismal savings rate is a topic that frequently comes up, with virtually every commentator agreeing that this must be raised in order to begin to correct the problem. David Leonhardt, in his must-read NYT Magazine article about Obama’s economic ideology this weekend, explains the problem succinctly.

For the first time on record, an economic expansion seems to have ended without family income having risen substantially. Most families are still making less, after accounting for inflation, than they were in 2000. For these workers, roughly the bottom 60 percent of the income ladder, economic growth has become a theoretical concept rather than the wellspring of better medical care, a new car, a nicer house — a better life than their parents had.
Americans have still been buying such things, but they have been doing so with debt. A big chunk of that debt will never be repaid, which is the most basic explanation for the financial crisis. Even after the crisis has passed, the larger problem of income stagnation will remain.

In order to minimize that unpaid chunk of debt—and minimize the damage—increased savings over time must be used to pay that debt down. (At a macro level, too, increased savings will mitigate trade imbalances, stabilize the dollar, and reduce our reliance on foreign banks to fund our borrowing.)

But with stagnating wages comes a greater reluctance to save and a loss of faith in the traditional “work a lot, save a lot” path to wealth. Instead, people rightly conclude that they will continue to fall behind without supplementing their wages with another form of income. Some try lottery tickets; others apparently try the upscale equivalent—short-term investing. Felix Salmon takes note of a survey about American attitudes to saving.

Here’s a depressing statistic. In a recent Harris survey, 3,866 Americans were asked which things were “extremely important to achieving financial security in your retirement”. 39% said that “investing wisely” was extremely important, while just 34% said that saving money during one’s working years was.
The problem is that while the financial-services industry is very good at marketing and selling investment products, it’s very bad at marketing and selling thrift, and living within one’s means. After all, the only thing which is marketed more aggressively than investments is credit products.

It’s not merely the fault of the marketing departments in the financial services industry, though. It’s the fault of marketing, period. Advertising works hard to undermine the ethic of saving, making acquiring more goods seem all important and making our faltering wages seem even more inadequate. This works hand in hand, then, with ads touting more credit products. There is virtually no commercial incentive to encourage thrift; that’s not where the fat margins are. And prioritizing not having stuff is unlikely to become anything but an alternative lifestyle—a marginal mode of bourgeois rebellion—any time soon. Our sense of self is too bound up with what we possess and display; expecting people to consume and acquire less is almost tantamount to asking them to become less of a person.

That’s why the government must do what it can to encourage thrift—mandating an opt-out standard for participation in 401(k)s, for example. But what mainly needs to change is the sense of unfairness that permeates the economy, something that shows up in the class divide between those who earn income through wages and those who earn it through investments. The consensus seems to be this: Working and saving are noble goals, but consuming is what gets people’s attention and pins down the sorts of things we are committed to (putting our money where our mouth is). And working is sort of for suckers; having your money work for you is where it is at. Wage earners thought buying houses would make their money work for them—the house magically doubled their money as property values irrationally increased (creating knock-on wealth effects and encouraging increased consumption). But now that it has become clear that buying houses is simply more consumption, and not investment, the shortfall in national savings is easily recognized as an abyss.

by Lara Killian

25 Aug 2008


Heading out the door recently for a workout session, I paused to grab some reading material—the July/August 2008 issue of The Atlantic magazine with its intriguing cover headline: “Is Google Making Us Stoopid?” Good thing I was stuck in one place while perusing the text, because my ability to focus on a single topic for long has been rather challenged of late.

Nicholas Carr’s article has a subtitle: “What the Internet is Doing to Our Brains.”

A quick Google search turns up all the information a researcher, whether casual or professional, could hope for, without the traditional trawling through archives or leafing through indices. Carr discusses the fact that with the ability to access nonstop information 24/7, habitual Internet users are developing a unique method of dealing with the overflow. Classic journalism theory holds (so I’m told) that a newspaper reporter tends to put the most compelling information in the first paragraph, since few readers will finish a longer article. On the Internet, with constant links leading deeper into the rabbit hole, readers seldom return to a web site to finish an article or blog post once a link takes them away. Even a particularly enticing first paragraph is not enough to focus the jaded surfer’s attention.

With constant headlines flashing past our eyes and distracting advertisements extolling the latest IQ test or makeover strategy, we’re losing the ability to concentrate on reading for more than a few moments before our brains demand a subject change.

Carr writes, “Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory.” He goes on to say that reading literature or even a full-length magazine article is becoming more difficult, even for academics who previously devoured works like War and Peace.

Is this really the future of reading? Losing the ability to sit down and read a full chapter of a biography or finally reach the end of that novel? I felt good about managing to reach the end of Carr’s cover story. If you made it to the end of this blog post, there may be hope for us yet.

by Bill Gibron

24 Aug 2008

He seems like a nice enough guy. Last time anyone checked, he wasn’t making massive tabloid headlines with his debauched behavior, nor had he been discovered killing kittens in some crack-soaked back alley. Heck, he even has a hot girlfriend (director drag and drop diva Milla Jovovich) and a baby girl. And yet ask film fans who their least favorite director is - nay, ask them to list the men who’ve made an abomination out of the motion picture medium - and his name instantly comes up. As frequently as Dr. Uwe Boll. With a directness reserved for Ed Wood or Coleman Francis. To listen to the disgruntled talk, he has systematically destroyed potentially effective projects, reducing long held genre hopes to squirming, squiggling junk.

So what is it about Paul W. S. Anderson that drives the critic to complain - and even worse, why does this friendly-faced UK filmmaker receive so much fanboy wrath? The answer, sadly, remains rather elusive. It can’t be his actual moviemaking acumen. He’s certainly got a handle on the art form’s basics, unlike other hacks who can’t put two scenes together without struggling to make sense of the narrative structure. And as this week’s Death Race proves, he can manufacture fake action with the best of them. Sure, he edits like an insane person and piles on the flash when some focus would truly help. But Paul W. S. Anderson is not a bad director. He’s just had the unfortunate luck of taking on titles that get geek panties in a big fat workmanlike wedge.

He wasn’t always a motion picture pariah. He first came to prominence in his native Britain, where in 1994 his violent thriller Shopping caused quite a stir. Its portrait of disaffected youth, stodgy class conformity, and the purposeful destruction of property gave a smug England some harsh food for thought, and catapulted Anderson into the minor fringes of the mainstream. It also made him fodder for that notorious “next big thing” tag, something many foreign filmmakers get saddled with once Hollywood finally hears about them. As a creative cause célèbre, Anderson was given immediate access to the hottest script in the studio system - the big screen adaptation of the video game smash Mortal Kombat. It would wind up being the first of his many career coffin nails.

Granted, it’s hard to screw up a martial arts movie in which characters compete in a ‘brawl for it all’ tournament to the death, but Kombat apparently gave audiences their first reasons to be concerned about Anderson. It wasn’t the lack of skill - again, he is far more fluid in his filmmaking than any of the moviemaking misfits he’s frequently lumped in with. No, where Anderson seems to stumble (both then and now) is in the all-important area of ‘reimagination’. Unlike Christopher Nolan, who tweaks the Batman saga into a psychologically deep crime story, or Sam Raimi, who tries to keep to Spider-Man’s general spirit, you never know what to expect when Anderson is in charge. Sometimes, you get a reverent reinvention of the mythos. At other times, the end results are unrecognizable to even the most ardent aficionado.

In Kombat’s case, the reinvention process seems to totally forget the reason the movie is being made in the first place. It has to be hard for screenwriters to turn fisticuffs into fleshed out stories, but Anderson’s scribes treat it like brain surgery. Gamers loved Kombat because of its bone crushing battles bathed in buckets of blood. They loved the finishing moves and the easily identifiable characters. Trying to turn this all into some manner of Shaw Brothers knock-off was not the way to go, and yet Anderson and company strove to bring a kind of backstory viability to the concept. While many felt the reformatting failed, the title was still so commercial that even this subpar semblance of the game made money.

As usual, cash creates opportunities, and Anderson was allowed to pick his next effort. He chose the David Webb Peoples script Soldier. Kurt Russell was pegged to star, and pre-production began on the potential sci-fi epic. The pedigree at least seemed secure - Peoples had co-written Blade Runner, received an Oscar nomination for his work on Clint Eastwood’s Unforgiven, and scripted Terry Gilliam’s great 12 Monkeys. Soldier had all the elements of a potential hit - a certified cult star, an intriguing story, and a hotshot helmer behind the lens. Then Russell decided to take some time off, and the entire project was pushed back.

Anderson needed something to help him cope with Soldier’s work stoppage. He barreled headfirst into the outer space horror film Event Horizon. The original screenplay by novice Philip Eisner offered an abandoned alien laboratory investigated by a party of Earth astronauts. Anderson preferred a more straightforward scary movie, and discarded the idea. Instead, the new Horizon storyline centered on a missing spacecraft that may or may not have traveled to the bowels of Hell when it disappeared for seven years. Loading the narrative up with sadomasochistic sex and gore-drenched violence, Anderson hoped to redefine both terror and the extraterrestrial. Instead, he was forced to cut nearly 20 minutes of the movie to get an “R” MPAA rating.

At this point, Anderson was two for two. Sure, Event Horizon was not a major financial hit, but enough people in the business saw its polish and professionalism to give the director another shot at Soldier. Russell was ready now, and the film premiered to universal yawns in 1998. Many consider it to be the worst film of Anderson’s career, a braindead bit of bombast that trades on little of the premise’s promise and ideals. At the time, the filmmaker had hoped to update Roger Corman’s Death Race 2000 for an actual 2000 release. Instead, he had to suffer the blowback from creating a big time blockbuster bomb. It would be two more years before Anderson got a chance at another noted title.

The zombie video game Resident Evil had long been considered a cinematic slam dunk. There were even suggestions that the father of the undead film, George Romero, was eager to film an adaptation. But the job went to Anderson instead, and while the devotees dished over the stupidity of the choice, the director delivered. Even though it changed some of the console title basics, Evil was still a moderate hit. It led the way to Anderson’s adaptation of AvP: Alien vs. Predator, another solid success. Again, the faithful fumed over the liberties taken with the material, including elements not found in the comics or companion sources. Yet Anderson argued for his approach, highlighting his reliance on the original films as guidance and inspiration. 

All of which brings us to this week’s box office dud Death Race. Coming in third behind Tropic Thunder and The House Bunny, Anderson clearly has lost a lot of his big screen buzz. Of course, no one was really clamoring for a revisit to Corman’s 1975 road kill epic to begin with, but the update is not as bad as the reviews suggest. Instead, it’s just big dumb action with lots of explosions and cars (and body parts) going v-rrrooooom. Indeed, there is nothing here to suggest Anderson is the Antichrist or incapable of delivering decided popcorn perfection. But as with many of his movies, the way he reimagines Death Race - an internet competition inside a maximum security prison run by a ruthless female warden with one eye on the ratings and another on her big corporation concerns - fails to fulfill the concept’s kitsch calling.

And there’s another argument that may or may not sway potential detractors. Anderson is one of the few filmmakers who is open and brutally honest about the editorial decisions he is forced to tolerate by mindless studio heads. Ever since Kombat, he has complained about interference, stating that if he could release a “Director’s Cut” of his frequently panned projects, the opinion of his work would change radically. Event Horizon is one of his particular sore spots, the aforementioned missing footage destroyed or lost by parent studio Paramount. Especially in this era of the digital domain, where DVD can indeed redeem a failed film, Anderson is angry that he hasn’t had a chance to do just that. There are supposedly longer edits out there for every one of his marginalized movies, but due to their lack of success, the rights holders see no reason to re-release his versions - if they’re even available.

And so Paul W. S. Anderson sits, marginalized by a business he’s frequently benefited. Personally, he says he’s sick of trying to explain the symbolism in Magnolia (clearly being mistaken for Paul THOMAS Anderson), and after changing his name to W.S. he hates explaining anew that he is not responsible for The Life Aquatic or The Darjeeling Limited. His next film is another video game adaptation - the more or less unnecessary Spy Hunter - and one assumes that even now, the arcade crowd is gearing up to undermine his efforts.

Until then, Anderson will continue on as producer, writer (Castlevania), and behind-the-scenes Resident Evil guide (the franchise appears headed for its fourth film). It’s also clear he will remain a ridiculed member of an easily outclassed collective. He’s definitely not the worst director in the history of film. But defending him gets harder and harder - especially in light of his less than spectacular past and present preoccupation with B-movie mediocrity. One day he might find a way to prove his detractors wrong. Until then, Paul W. S. Anderson will remain an easy if enigmatic target. Just like his films, figuring out what’s wrong with his reputation is not as simple or straightforward as it sounds.
