Pink Neon [MP3]
Over & Under [MP3]
Backyard Tire Fire
The Places We Lived [MP3]
Sugar Man [MP3]
An Optimist Declines [MP3]
Nice Train [MP3]
She’s Got a Hold on Me [MP3]
The late Gene Siskel once said that if filmmakers want to remake a film, they should focus on the junk. Why update a classic, he argued, when there are so many b-movies and unsuccessful schlock efforts to choose from and improve on? He had a point, of course. There’s no need to make another Casablanca, or Citizen Kane - not when there are hundreds of lamentable horror titles and uneven exploitation outcasts to pick through. In a clear case of “be careful what you wish for,” however, novice screenwriter Zach Chassler and outsider director Jeremy Kasten have decided to revamp Herschell Gordon Lewis’ neo-classic splatterfest The Wizard of Gore. Unfortunately, the duo completely forgot the reason the original film was so memorable.
By 1970, things were changing quite radically within the grindhouse model. Sexploitation was blurring the line between hard and softcore, and filmmakers were trying to find more rational, realistic ways to incorporate violence into their drive-in fare. One of the last purveyors of unadulterated gore was Lewis, the man responsible for starting the cinematic trend in the first place. In the early part of the ‘60s, his Blood Trilogy (Blood Feast, 2000 Maniacs, Color Me Blood Red) became the benchmark for all arterial spray to come. By the end of the decade, though, he had dissolved his successful partnership with producer David Friedman, and was trying to make it on his own. Hoping a return to serious scares would increase his profile (and profits), he made The Wizard of Gore.
It was kind of a homecoming for Lewis, and not just because he was going back to his straight sluice roots. No, at one time the director actually ran a Grand Guignol-style nightclub in Chicago, and the premise for Wizard mimicked his showmanship experiences perfectly. The plot revolves around a magician named Montag the Magnificent (Ray Sager) whose blood-drenched act gets the attention of a local TV reporter named Sherry Carson. She and her boyfriend Jack attend Montag’s macabre show. In between tricks, the conjurer calls up young women from the audience, hypnotizes them, and then puts them through all manner of gruesome tortures. After it’s over, the girls seem okay. Later, they end up dying just like they did onstage.
In both its approach and execution, the original Wizard of Gore is a sleazy, surreal treat. It uses a shoestring narrative thread that allows Lewis to indulge in his ever-increasing bits of brutality. The splatter set pieces are rather inventive, including a human hole punch and a tasty chainsaw attack. While the mystery of what’s happening to these young girls is part of the plot process, Wizard would rather spend the majority of its time watching Sager overact. A longtime associate of Lewis’, this on-set jack-of-all-trades in gray-sprayed hair is pure ham as our perverted prestidigitator. His line delivery would be laughable if the actor weren’t taking it all so sincerely. Together with the red stuff, the 1970 Wizard is some goofy, grotesque fun.
So how do Chassler and Kasten update the material? What do they do to try and breathe new life into an old hat horror show? Why, they create one of the most confusing plots ever presented in a fright film, and then populate the confusion with several quality genre names and a few naked Goth gals. If you look closely you will see Jeffrey Combs as The Geek, Brad Dourif as Dr. Chang, and former child actor Joshua John Miller (Homer from Near Dark) as coroner’s intern Jinky. The main casting offers Bijou Phillips as a mystery girl, Kip Pardue as the publisher of an underground paper, and the fantastic freakshow that is Crispin Glover as Montag himself.
Doing away with the investigative reporting angle, the updated Wizard uses the seedy and subversive LA fetish scene to fashion a narrative involving Edmund Bigelow, a trust fund baby who uses his considerable cash to live like it’s the 1940s every day of the week. He dresses in period garb and outfits his large loft with era-specific items. He even drives a classic roadster. One night, he takes new gal pal Maggie to see Montag, and both are immediately taken by the magician’s presence and performance art atrocities. Soon, Edmund fears that the “victims” he is seeing each night onstage are winding up dead in real life. He seeks the advice of Dr. Chang, as well as best friend Jinky. He soon learns, however, that things are far more complicated (and corrupt) than he could ever have imagined.
As revamps go, the new Wizard of Gore is not without its charms. The aforementioned actors all do a wonderful job of delivering definitive turns within a very unfocused and often unflattering set-up. Glover is given the most leeway, and his “audience vs. actuality” speeches are delivered with almost Shakespearean verve. He doesn’t quite steal the movie from the others, but there would be little reason to revisit this material had Glover not been sitting at the center. Everyone else acquits themselves admirably, with Ms. Phillips and Mr. Miller earning special marks. Pardue, on the other hand, is just so strange as our purported hero Edmund that we can never get a real handle on his potential guilt or outright gullibility. Toss in some decent F/X and you’ve got a chance at some wonderfully wanton thrills.
But the shoddy script by Chassler, or perhaps the scattered interpretation of same by Kasten, lets this remake down time and time again. Stumbling over into spoiler-ville for a moment, we are supposed to understand that Montag is merely an illusion, a symbolic slave to the Geek’s murderous desires. While he’s onstage racking up the bodies, the horrific hobo with a taste for blood is using a hallucinogenic fish toxin to brainwash audiences into seeing something else - the better to go about his splattery serial killing in the back. Edmund’s propensity toward violence has made him the Geek’s latest target - he will replace the old Montag and carry on the duo’s deadly work. Naturally, our hero outsmarts the villain, deciding to take control of the show - and the slaughter - all by himself.
Now, in general, there is nothing wrong with this kind of plot repurposing. In the original, Montag was just a madman, using the power of mind control and post-hypnotic suggestion to destroy pretty young things. Here, Chassler and Kasten want to overcomplicate things, tossing in scenes blurring fantasy and reality so regularly that we lose track of the timeframe. Bijou Phillips is supposed to be an important catalyst to all that’s happening, and yet she’s introduced in such a clumsy manner that we never understand her overall importance. Even Dourif, who uses reams of exposition to remind us why he’s crucial to the outcome, seems adrift in the filmmakers’ fixations. From the 1940s focus (which gets old very quickly) to Edmund’s constant cracking of his neck (a byproduct of the fish toxin), there are repeated elements that will drive viewers to distraction.
Yet perhaps the most disturbing thing about the new Wizard of Gore is the lack of…well, gore. Of course, the new “Unrated” DVD reportedly solves most of that problem, but at the expense of what, exactly? Why was the original edit so devoid of actual grue? Surely, the MPAA was a concern, but the setup seems more interested in dealing with Edmund’s growing dementia (and addictions), the full frontal nudity of the Suicide Girls, and the on-again, off-again trips into inner space than blood and body parts. Lewis only used his premise as a means of delivering the disgusting. In The Wizard of Gore 2007, the offal is the least of our concerns, and that’s not the way to approach any old-fashioned splatter film.
And so the redux legacy of The Wizard of Gore sits somewhere between old school corner cutting and post-modern meddling. The original version delivers in the vivisection department. The new movie fumbles the all important garroting. In fact, it may be safe to say that, in some fictional film lab, where the best aspects of otherwise incomplete entertainments can be sectioned out and sewn together, a 70/07 Wizard amalgamation could be formed that offers both valid plot points and solid putrescence. Until that time, we are stuck with two competing if competent cinematic claims. One uses blood instead of baffling narrative to win us over. The other decides we care more about the psychological than the slippery. In either case, The Wizard of Gore deserves better. Apparently, Mr. Siskel’s advice has a few unforeseen filmic flaws to be worked out.
The expense of video games has always had a tenuous relationship with what the consumer is purchasing. Sixty dollars is no small amount of money, and it’s not unreasonable for a gamer to expect quite a bit of bang for their buck. A game needs to have a great solo experience, fun multi-player, generate a lot of playtime, and appeal to a wide audience to garner much critical acclaim these days. Hell, it had better jump through hoops and entertain the whole family for that kind of cost. Yet some games are definitely worth that kind of money. The sixty dollars you pay for Call of Duty 4 is going to be repaid tenfold when you go online and get absorbed into the matches. Like buying a set of golf clubs or a croquet set, you know this is a game you can play over and over again. There’s long term value in that, there’s a sense of getting your money’s worth. But for a game that’s purely single-player, that’s trying to deliver a tight plot and precise experience, it’s much more difficult to justify the cost. Sixty bucks for a game I’ll play once or twice is asking a lot. In a consumer culture where I can rent a series for a monthly fee or buy a long book for a few dollars, it can be hard to justify the sixty dollars for a plot-heavy Third-Person game. How do we make video games that are just about the stories work for the consumer?
The biggest solution going on right now is downloadable content and episodic game formats. TellTale’s Sam & Max games are doing well financially and have even been breaking into the green on Metacritic. At ten bucks to download and averaging about 3 to 5 hours of gameplay, that seems like a fair deal. It’s just the right length for a lazy afternoon, or it can be spread out over several days without putting my wallet into a world of hurt. It makes the story flow a lot better as well; a game that runs ten hours suffers because the narrative ends up lagging somewhere. Your favorite book or movie still adheres to a basic formula of introduction, rising action, climax, denouement, and resolution. But when a video game tries to apply that formula, it usually stalls somewhere because it has to drag one of those elements out. You’ll spend five hours on the rising action, only to blister by the climax and have the resolution be a two-minute pop song. Episodic content isn’t just a good value for a game, it’s better suited to keeping the narrative flowing properly. Portable games have also been adopting this style, with the average level or episode taking about 15 minutes to an hour to beat. This helps people who play in quick bursts on the subway, but you can also see how, inadvertently, portable games tend to have better story pacing.
But how can we use this design to maximize the income of a Third-Person game? Sam & Max uses a lot of great concepts like the season pass or the full season purchase, but I can’t help but wonder if the economic model is at its full potential yet. If episodic games are going to be as appealing as T.V., you’d need to distribute the games episodically for free for a limited amount of time (to get people hooked) and then recoup on advertising and season passes. As flash players become more powerful, the lucrative options of this model will soon be a reality. Little downloading, minimal hardware demands, and the necessity to stand out amongst the competition should all help drive narrative games into new and creative territory. Consumers have consistently shown their willingness to play a free game in exchange for seeing an advertisement, but people are just now beginning to offer more complex games in this model. Why not play a commercial during the load time between games? Or do as Rainbow Six Vegas did and fill the world with in-game billboards and ads? With companies producing prototypes like this for Flash 10, it’s only a matter of time before this is feasible. Graphically complex episodic games could find a home in a few years when my web browser can produce graphics as cutting edge as consoles today. But is there a chance for more? If including multi-player can expand the value of a product, what other options could be given to the player?
One of the most interesting things to come from gamer culture has been the mod community, and it might be in the best interest of developers to embrace that community more fully. Why not throw the gates wide open and actively try to make creating games with the engine as easy as possible? With so many brilliant mods and games coming out of the likes of RPGMaker or Garry’s Mod, letting people make outside games and hosting them on your server could get you no-risk content. Work out a licensing agreement with people who make good games, divide up the revenue, and suddenly you’ve got an army of potential narrative games to offer your consumer. I don’t mean just leaving the door unlocked; this is about making in-game tools easier to use for the player. Software that lip-syncs, incredibly easy animation tools, and editors that even my grandma could use. Furthermore, you don’t even need to hand people a blank slate. You would include all the in-game art and animations and consistently add new ones as you create a larger body of work. It would be a huge boost to the Machinima scene as well. Naturally, anyone who downloaded all of this and tried to make money without the owner’s consent would be subject to legal measures. People would still be able to distribute their work for free, but perhaps by offering to sponsor a good game with professional voice work and editing you’d give them an incentive to work with you. If narrative games are eventually going to migrate to the internet to reduce costs, it is not enough to just start posting brilliant narrative games. Developers must continue to innovate in multiple fields to stand out.
There are plenty of other applications for video games that could generate revenue. What about a Victoria’s Secret catalogue that uses the Unreal 3 Engine to let people have their customized avatar try on clothes and see how they look? Architects already use game engines to demonstrate their designs to potential customers; why not let people check out hotels or explore national parks before they even make the trip? A lot of this article has turned into speculation and wild business proposals, but it’s important for those who enjoy plot-heavy Third-Person video games to be mindful of the economics going on. It’s very hard for any story, no matter how brilliant, to get much of a chance when the gamer has spent a fortune on it. All that cynicism and irritation melts away when you’ve only spent ten or twenty bucks on the game. In those kinds of conditions, the plot is given a chance to really shine. Short of the game being perfect in every regard, would we even notice the ‘Citizen Kane’ of games after it ripped us off sixty bucks?
A problem I keep finding myself returning to is why I seem to spend more time tagging and arranging my music files than I spend listening to my music. Part of that is a cognitive illusion, but a telling one—I’m listening to music the entire time I’m doing the iTunes bookkeeping work, but my concentration is on the data, not on the intricacies, harmonies, melodies and hooks of the music. It barely breaks through, and usually only when the song playing is so irritating, I have to skip to the next one.
In my mind, this is symptomatic of a larger problem, of consuming information about goods rather than allowing goods to facilitate sensual experiences. In part, this is so we can consume more quickly, a product of the time crunch we face in expanding our consumption—we want faster throughput, since quantity seems to trump quality, and the pleasure in consuming seems to come from the acquisition of the next thing. To authorize that next acquisition, we need to satisfy ourselves that we are done with what we have. Processing it as information is a quick way of doing just that.
As a consequence of this eagerness to process more and more stuff, I end up amassing an embarrassingly thorough knowledge of the surface details of pop culture—who wrote what and who sang what and who played on whose record and when this show was canceled or had this or that guest star or whatever. Worse, I invest far too much significance in brandishing this knowledge as some kind of accomplishment, as if life were a big game of Jeopardy. This useless depot of detail is what a show like Family Guy tries to reward me for having accumulated. Getting to laugh at it is like a kind of booby prize.
But iTunes metadata seems to me the best emblem of the information problem, of the trap we are lured into of substituting clerical data processing for thought and experience. Adorno seemed to anticipate this precisely in “The Schema of Mass Culture,” whose title alone suggests its application to the digitization of all cultural distribution. He argues that art, in being manufactured for the masses, is reduced to the data about itself, which masks its subversive potential. “The sensuous moment of art transforms itself under the eyes of mass culture into the measurement, comparison and assessment of physical phenomena.” This is like accessing iTunes metadata in place of hearing the song. Because the metadata for all the music is the same, all music from that perspective is also essentially the same. And the argument can be extended to all of digitally distributed culture.
The underlying sameness of the medium for culture today reveals the truth about the phantasmal differences in form and genre. (As Adorno puts it, in his inimitable way, “the technicized forms of modern consciousness…transform culture into a total lie, but this untruth confesses the truth about the socio-economic base with which it has now become identical.”) It’s all more or less the same, allowing consumers to obey the command to enact the same self-referential decoding process, reinforcing the same lesson of eternal sameness.
The more the film-goer, the hit-song enthusiast, the reader of detective and magazine stories anticipates the outcome, the solution, the structure, and so on, the more his attention is displaced toward the question of how the nugatory result is achieved, to the rebus-like details involved, and in this searching process of displacement the hieroglyphic meaning suddenly reveals itself. It articulates every phenomenon right down to the subtlest nuance according to a simplistic two-term logic of “dos and don’ts,” and by virtue of this reduction of everything alien and unintelligible it overtakes the consumers.
What Adorno would call “official culture”—that which is made to be reviewed and talked about by professional commentators and promoted by professional marketers and consumed commercially—seems to be so stuffed with data and information and objects and performers and whatnot that no one could ever in their right mind question its plenitude. There’s so much, you’d have to be nuts not to derive some satisfaction from all that. Think of all the stuff you can download! But the one thing missing amid all this data is the space for a genuine aesthetic experience, a moment of negativity in which an alternative to what exists, what registers as “realistic,” can be conceived. Instead, one feels obliged to keep up with official culture so as not to find oneself an outcast. People go along not necessarily because they love pop culture but because “they know or suspect that this is where they are taught the mores they will surely need as their passport in a monopolized life.” Pop culture knowledge becomes a prerequisite for certain social opportunities, a way of signaling one’s normality, or one’s go-along-get-along nature. “Today, anyone incapable of talking in the prescribed fashion, that is of effortlessly reproducing the formulas, conventions and judgments of mass culture as if they were his own, is threatened in his very existence, suspected of being an idiot or an intellectual.” I think of this quote sometimes when it comes up that someone has never knowingly heard a Coldplay or John Mayer song, or hasn’t seen an episode of American Idol. Really? Have you been under a rock? Are you lying? Why this makes me suspicious rather than elated, I don’t know. And it especially reminds me of my record reviewing, when I tried to pretend there was inherent significance in the commercial output of E.L.O. or the Drive-By Truckers. And as the information about pop culture proliferates, we become more ignorant about politics and basic facts about how our economy operates.
Once participation in public official culture becomes a matter of collecting trivial, descriptive (as opposed to analytical) information about it, Adorno argues that “culture business” then plays out as a contest. Products “require extreme accomplishments that can be precisely measured.” This I would liken to the data at the bottom of iTunes that tells you the number of songs you have and the number of days it would take to listen to them all. It’s not intended to be a scoreboard but it can seem like one. This sort of contest culminates in collecting mania, where an object’s use value has been shriveled to its being simply another in a series.
To radically oversimplify, Adorno argued that mass culture, a reflection and paradigmatic example of monopoly capitalism, served to nullify the radical potential in art, debasing its forms and methods while acclimating audiences to mediocrity, alienation, hopelessness, and a paucity of imagination. It works to form individuals into a mass, integrating them into the manufactured culture, snuffing out alternative and potentially seditious ways for people to interact with one another while facilitating an ersatz goodwill for the existing order. “As far as mass culture is concerned, reification is no metaphor: It makes the human beings that it reproduces resemble things even where their teeth do not represent toothpaste and their careworn wrinkles do not evoke cosmetics.” The contours of our consciousness are produced by our culture, and advertisements reflect those dimensions while fostering their reproduction.
Basically, through its ministrations, all the movements of the individual spirit become degraded and tamed and assimilated to the mass-produced cultural products on offer, which ultimately fail to gratify, perpetuating a spiritual hunger while occluding the resources that might actually have sated it. Pleasure becomes “fun,” thought becomes “information,” desire becomes “curiosity.”
But what could be wrong with curiosity? It seems like it should be an unadulterated good, a way of openly engaging with the world. Adorno, in a feat of rhetorical jujitsu, wants to have us believe it means the opposite. Because it is attuned not to anything more substantive than pop-culture trivia, curiosity “refers constantly to what is preformed, to what others already know.” It is not analytical or synthetic; it simply aggregates. “To be informed about something implies an enforced solidarity with what has already been judged.” Everything worth knowing about, from a social perspective—anything you might talk about with acquaintances, say—has already been endorsed, is already presented as cool even before anyone had that authentic reaction to it. Cultural product is made with cool in mind, whereas authentic cool, from Adorno’s standpoint anyway, must always be a by-product. At the same time, curiosity suppresses genuine change, supplanting it with ersatz excitement for cynical repetitions—think of the fashion cycle, in which everything changes on the surface but nothing really changes. “Curiosity is the enemy of the new which is not permitted anyway,” Adorno says. “It lives off the claim that there cannot be anything new and that what presents itself as new is already predisposed to subsumption on the part of the well-informed.” This means attention to the surface details, which prompts “a taboo against inaccurate information, a charge that can be invoked against any thought.” Basically this means that in our cultural climate, your thoughts about, say, Eric Clapton’s guitar playing are invalid unless you know what model guitar he was playing and what studio he was recording in at the time. The trivia is used to silence the “inexpert.” So “the curiosity for information cannot be separated from the opinionated mentality of those who know it all,” Adorno argues.
Curiosity is “not concerned with what is known but the fact of knowing it, with having, with knowledge as a possession.” Life becomes a collection of data, and “as facts they are arranged in such a way that they can be grasped as quickly and easily as possible”—in a spreadsheet, for example. Or a PowerPoint presentation. These media suit facts as opposed to thoughts, and encourage us to groom our data sheets for completeness and clarity rather than insight. “Wrenched from all context, detached from thought, they are made instantly accessible to an infantile grasp. They may never be broadened or transcended”—the metadata fields are unchangeable—“but like favorite dishes they must obey the rule of identity if they are not to be rejected as false or alien.” Works don’t seek to be understood; they only seek to be identified, tagged, labeled accordingly to make them superficially accessible.
The reduction of thought to data allows us to consume culture faster, enhance our throughput, and focus on accumulating more. The idea that you would concentrate on one work and explore it deeply, thoroughly, is negated; more and more, it becomes unthinkable, something it wouldn’t occur to anyone to try. “Curiosity” demands we press on fervently, in search of the next novelty.
In discussions of the current economic woes in the U.S., the dismal savings rate is a topic that frequently comes up, with virtually every commentator agreeing that this must be raised in order to begin to correct the problem. David Leonhardt, in his must-read NYT Magazine article about Obama’s economic ideology this weekend, explains the problem succinctly.
For the first time on record, an economic expansion seems to have ended without family income having risen substantially. Most families are still making less, after accounting for inflation, than they were in 2000. For these workers, roughly the bottom 60 percent of the income ladder, economic growth has become a theoretical concept rather than the wellspring of better medical care, a new car, a nicer house — a better life than their parents had.
Americans have still been buying such things, but they have been doing so with debt. A big chunk of that debt will never be repaid, which is the most basic explanation for the financial crisis. Even after the crisis has passed, the larger problem of income stagnation will remain.
In order to minimize that unpaid chunk of debt—and minimize the damage—increased savings over time must be used to pay that debt down. (At a macro level, too, increased savings will mitigate trade imbalances, stabilize the dollar, and reduce our reliance on foreign banks to fund our borrowing.)
But with the stagnating wages comes a greater reluctance to save and a loss of faith in the traditional “work a lot, save a lot” route to wealth. Instead, people rightly conclude that they will continue to fall behind without supplementing their wages with another form of income. Some try lottery tickets; others apparently try the upscale equivalent—short-term investing. Felix Salmon takes note of a survey about American attitudes to saving.
Here’s a depressing statistic. In a recent Harris survey, 3,866 Americans were asked which things were “extremely important to achieving financial security in your retirement”. 39% said that “investing wisely” was extremely important, while just 34% said that saving money during one’s working years was.
The problem is that while the financial-services industry is very good at marketing and selling investment products, it’s very bad at marketing and selling thrift, and living within one’s means. After all, the only thing which is marketed more aggressively than investments is credit products.
It’s not merely the fault of the marketing departments in the financial services industry, though. It’s the fault of marketing, period. Advertising works hard to undermine the ethic of saving, making acquiring more goods seem all-important and making our faltering wages seem even more inadequate. This works hand in hand, then, with ads touting more credit products. There is virtually no commercial incentive to encourage thrift; that’s not where the fat margins are. And prioritizing not having stuff is unlikely to become anything but an alternative lifestyle—a marginal mode of bourgeois rebellion—any time soon. Our sense of self is too bound up with what we possess and display; expecting people to consume and acquire less is almost tantamount to asking them to become less of a person.
That’s why the government must do what it can to encourage thrift—mandating an opt-out standard for participation in 401(k)s, for example. But what mainly needs to change is the sense of unfairness that permeates the economy, something that shows up in the class divide between those who earn income through wages and those who earn it through investments. The consensus seems to be this: Working and saving are noble goals, but consuming is what gets people’s attention and pins down the sorts of things we are committed to (putting our money where our mouth is). And working is sort of for suckers; having your money work for you is where it is at. Wage earners thought buying houses would make their money work for them—the house magically doubled their money as property values irrationally increased (creating knock-on wealth effects and encouraging increased consumption). But now that it has become clear that buying houses is simply more consumption, and not investment, the shortfall in national savings is easily recognized as an abyss.