Music rating inflation

At Marginal Revolution, Tyler Cowen raises a good point about Spin magazine’s new record-rating system (it’s pertinent to the one PopMatters uses as well).

The old rating system granted up to five stars but now the maximum number of stars is ten. This signals that they wish to start exaggerating the quality of the product. When there are only five stars you know that they are laying their reputation on the line when they grant five stars to a new CD…. But say they give a new release eight, nine, or who knows maybe eight and a half stars? What exactly are they trying to say?

So in making the rating system capable of finer distinctions, Spin has rendered it basically opaque.

I dislike ratings, though I understand why they get used — some readers would like a bottom line without reading, or will only read when the high rating cues them to. Naturally, this gives writers (and editors) an incentive to rate everything higher. Often, the rating is at obvious odds with the review itself, which, if it is not an overwrought 75-word blurb full of incomprehensible comparisons, tends to be more ambivalent, or more charitably, balanced. The best reviews, in my opinion, aren’t reviews at all; they simply take the work in question seriously enough that you know it is stimulating, worthy of careful attention. In other words, everything that is reviewed should be considered worth checking out — in effect, a one-star system, I suppose. The only things that should get panned — maybe this is a reason for the backlash effect, come to think of it — are newsworthy new works by already well-known acts. A hatchet job won’t be published at all unless it is about someone the readers already care about, so established artists are often the only ones being savaged in print. Reviewing is a pretty thankless job, and a backlog of bile can easily build up in response to the corrosive effects of the practice — the routinized listening, the striving for things to say to overcome the “dancing about architecture” problem.

When I reviewed music, I had trouble reducing my opinion to a rating, not because my insights were so nuanced, but because, in general, most records are incomparable. They can’t be reduced to a common aesthetic currency. What does it mean to rate a reissue of a Kinks record a 10 along with a newly released Radiohead album? Aren’t the standards being applied entirely different? Doesn’t an album grow richer with the years, as its influence plays out and the responses it has prompted in generations of listeners enrich its significance? Ratings set up false equivalencies between works that basically bar them from being taken as seriously as works — art shows and books, for instance — deemed too complex to be assigned a number of stars. Pop music, though, is meant to be cycled through quickly rather than lingered over, and the ratings system suits that end, even if our actual listening practices, thankfully, don’t.