An item on Bloomberg yesterday detailed how Spaniards are drinking less wine, which has prompted Spanish wineries to pursue export markets more. From this perspective, it’s partially understandable why Spanish wineries might want to pay a fee to invite Wine Advocate critic Jay Miller to their regions. They want to crack into the US market and they figure the best way to do so is to get a score from the Wine Advocate (even if one document from the regional organization referred to the scores as “Parker points”).
But that sales strategy is sooo 1990s! In my view, many American wine consumers have moved beyond scores, and an increasing number of wine shops have too. What do you think: should the wine industry move beyond scores? Are scores less relevant to consumers in your experience than they were five or ten years ago? It seems to me that today the trade clings to scores more readily than consumers do. But one importer I spoke with recently, Jose Pastor, has said no to scores.
We also should say “no” to scores. We should actually say “hell no!” Wine scores, pioneered by Robert Parker and followed by Wine Enthusiast, Wine & Spirits, and others, try to make a science out of something that is not scientific. If you want to know why, read Robin Goldstein’s The Wine Trials. Goldstein offers many reasons, but let’s look at two of his arguments. First, the blind taste test.
Dom Pérignon, a $150 Champagne from France, and Domaine Ste. Michelle Cuvée Brut, a $12 sparkling wine from Washington State, are both made in the traditional Champagne method. Both wines are widely available at wine stores, liquor stores, and restaurants. Both are dry, with high acidity. The two bottles are more or less the same size and shape. So why are consumers willing to pay more than 12 times as much for one as for the other?
One would think it's because Dom Pérignon – what other name is so well known to the average person and so associated with the finest quality in the wine business? – is far superior in taste. One would be wrong. Goldstein conducted blind taste tests between these two wines with 62 different tasters: 41 of the 62 (66%) preferred Domaine Ste. Michelle. These were wine rookies, you say. True.
In October 2009, we replicated this experiment on a smaller scale with newer releases of the two sparkling wines. This time, we served them to a group of professional chefs, certified sommeliers, and food writers, of which more than 70% preferred the humble $12 bottle to the famous $150 one. This time, we also threw in Veuve Clicquot, a popular $40 Champagne from the same luxury products group – LVMH – that makes Dom Pérignon. More than 85% of tasters preferred the Domaine Ste. Michelle to the Veuve. This doesn’t seem to be a single, idiosyncratic instance in which people’s tastes happen to run contrary to popular wisdom or market prices. The Champagne battle described above was just a small part of a series of blind tastings that we conducted around the country over that same time span. It was an experiment in which we poured more than 6,000 glasses of wine from brown-bagged bottles that cost from $1.50 to $150.
The result? As a whole, the group actually preferred the cheaper wines to the more expensive wines – by a statistically significant margin.
Then the points correspond to quality? No. They correspond to price. In general, the more expensive the wine, the higher the score. And if a wine gets a high score – which is often correlated to the amount of money the winemaker pays for advertising in the ratings publication – the price will increase.
The central problem is that wine pricing is almost completely arbitrary – that the price of wine does not significantly correlate to the pleasure it brings, even to experts.
It’s that Robert Parker, Wine Spectator, and others with economic power in the industry are propping up the myth that price and pleasure do correlate strongly, that it really is possible that not one of 6,475 wines under $10 would score above 91. It’s that generations of consumers are now growing up taking that myth as fact, and drinking and buying wine in a way that conforms to the myth.
That’s right. Of the 6,475 wines that Wine Spectator had tasted as of the publication of Goldstein’s book, not one under $10 scored above 91 points. Surely one of them must have been a true gem. The tasters at Wine Spectator claim to score them blindly, but they don’t.
James Laube, one of Wine Spectator’s senior editors, has gone so far as to write a blog entry about the importance of blind tasting. “Wine Spectator has always believed in blind tastings,” Laube explains. “We know the region, the vintage and the grape variety, if relevant. But we don’t know the producer or the price.”
Consider that statement for a moment: the magazine critics are tasting blind, but they know the region, the vintage, and the grape variety. Let’s say it’s a red wine, the appellation is Hermitage, and the vintage is 2005. The cheapest possible wine in the Wine Spectator database that would fit those criteria costs $49. And, to their credit, these tasters certainly know enough about wine to know that Hermitage reds are going to be expensive. In that example, then, they would know the prices, or at least the price category, before tasting – which means that they wouldn’t really be tasting blind. They’d know that they were tasting expensive wines, and they’d have full frontal exposure to the placebo effect.
So tasters, knowing a great deal about the wine to begin with, rate wines and assign scores. If a wine receives a high score, the winemaker will most likely spend more money advertising that wine and the corresponding score in the very magazine that bestowed it after the “blind” test. Hogwash.
If you have a friend who is a wine lover – an oenophile, if you will – buy them The Wine Trials this Christmas. You’ll save them a lot of money and headaches from worrying about the myth – or rather, the fraud – of wine scores.