By Rick VanSickle
There are many Davids out there who question the points system when scoring wines. I get it, I really do.
A guy who gave his name as David B, commenting here on a recent post about Two Sisters Winery, had concerns about the scores given to some of the wines compared with other wines that received the same score but cost much less. His comment:
“I question whether these wines are good values at these prices. You ranked the Bachelder Lowrey One Barrel Pinot Noir a 94 and it costs $60. The Special Select (Two Sisters) … costs about 2.5x as much and is also ranked a 94. You ranked the Franc a 92 and it only costs $54. Seems like the QPR really drops with the Stone Eagle collection.”
My reply to the comment:
“QPR is a tricky thing to figure into wine scores so I don’t factor it in, but I address it in the text of the review if I feel it needs addressing. Whether a wine is priced at $146 or $20, it gets reviewed the same. Often I don’t even know the price of the wine until after I’ve written my notes. Knowing the price doesn’t change the score, but if a wine that costs $100 scores poorly, or a wine priced at $23 scores exceptionally high (see Most Thrilling Wines of 2018, the 2020 Cellars Chard), that might get discussed in the text. Often I will call a wine a good value or make reference to the price negatively or positively, but, again, it doesn’t change the score. Consumers can make their own buying decisions based on price and what they perceive as QPR.”
The Quality Price Ratio (QPR) is a tough nut to crack and not normally factored into wine critiquing. In its purest form, assigning points to wines is done blind in competition, where you have no idea what the price is or even what you are drinking. You assign points solely based on the wine in front of you, judged against a pile of other wines on the table. If you later find out that the wine costing $150 got 85 points and the wine costing $20 got 99 points, you may feel the need to mention that in later reviews, but it shouldn’t (and can’t, for the integrity of the competition) change your score. It is what it is. QPR is something the consumer should factor into their buying decisions, not something the critic should factor into their scoring, in my opinion. I’m not saying it doesn’t happen, I’m saying it shouldn’t happen. If it’s important, a better place to make note of it is the body of the review.
Critics also don’t always know what costs go into a wine. Were yields drastically low? What kind of oak went into the aging process? Was it a tough year with limited fruit? Single vineyard vs. sourced fruit? Barrel selection? Extended aging? So many factors impact price and not everyone knows what those costs are.
Not all critics feel that QPR should be ignored, however, and this is where it becomes confusing (and perhaps annoying?) for the consumer.
Carolyn Evans Hammond, the wine writer for the Toronto Star with an enormous and loyal following, takes a different approach, often prompting criticism from her peers. Which, of course, is silly. Evans Hammond is very upfront about how she reviews wines and always factors QPR into those reviews. In fact, she relishes finding incredible value at the low end of the spectrum and passing it on to the masses.
A $7.95 Spanish white wine called Toro Bravo Verdejo Sauvignon Blanc 2018 received 94 points from the critic last year and that followed a review of the 2017 Toro Bravo Red at the same price that she awarded a hefty 96 points.
Consumers emptied the shelves of both wines.
Evans Hammond has always maintained that if a really inexpensive wine over-delivers in her estimation it will be factored into the numerical score.
“The reason I’m able to bring bargain bottles to your attention is because I factor in price when I score. A $10 Californian Cabernet is not the same thing as a $100 one. So they shouldn’t be judged by the same yardstick. The same holds true for Chardonnay, Pinot Noir, Spanish red blends and all the rest. Comparing apples to apples is the fairest approach, which my scoring reflects. Sure, it requires me to taste broadly and regularly at both the high and low ends of the price spectrum, but that’s the only way to do this job properly. Otherwise it’s too easy to overlook inexpensive wines entirely or be forced to give them low scores, which is unfair and a huge disservice to wine drinkers. In short, price matters.”
To her credit, Evans Hammond explains herself clearly and professionally. Where it gets confusing is when that score is exploited without explanation on store shelves and in marketing material (not the critic’s fault). Most other critics do not score on QPR, so a 96-point score on a $7.95 bottle looks too good for consumers to resist.
For balance, that same Sauvignon Blanc received 87 points from Beppi Crosariol at the Globe and Mail (pretty good for an $8 bottle of wine), the same from WineAlign, and only an average of 80.8 points on CellarTracker. So, what number do you think follows that bottle of wine around? The 87 or the 94? You know the answer.
Consumers are beginning to lose faith in the points system because it’s all too confusing, especially if you don’t know who is behind the points being doled out. There are too many factors to consider, too many numbers out there and too many critics with myriad motives handing them out. And now even those who critique with numbers are critiquing others who hand out these numbers like Snickers on Halloween.
But a Robb Report column entitled “Wine Drinking Should Be Pointless: Why it’s Time to Stop Scoring Wine” went too far in arguing that points for wines should go the way of the dodo. Its author is not alone in that opinion.
The writer of the piece, Ian Cauble, has a thing about wine writers who score wines — whether with a number from 1-100 or 1-20, five stars, bottles, thumbs up or smiley faces. He would rather the practice be put out to pasture, having been flummoxed when an unnamed critic rated a certain Sonoma Cabernet Sauvignon 85 points, a score that did not align with his palate.
Said Cauble in the piece: “This 2010 Cabernet from a single vineyard in Chalk Hill was pure magic. It was a near-perfect expression of classically proportioned Cabernet, a style found in the golden era of California before the push in the later 1990s for high scores.”
I get it. This is a classic difference of opinion. One scribe likes it, the other not so much. From 85 points to “near perfect” is a big gap, one that points to two different palates.
I do not know Cauble and have no idea who the other critic is, but if I were in their sphere of reviewing, I would likely read both of their assessments and then align with the one whose palate is closest to mine — score or no score.
Cauble’s opinion in the piece is based on his experience with that Chalk Hill Cabernet that he adored and another critic did not. It wasn’t the points system he took umbrage with; it was the critic.
“Our difference of opinion wouldn’t be a big deal except that this critic, who has demonstrated a preference for ripeness, extraction, and oak, wields enough clout to affect the sales of any given winery. An 85 means a wine might take years to sell through one vintage, but a 95 translates to a quick sellout,” he wrote. “So, if winemakers want a high score, picking late and subjecting the wine to heaps of new oak will steer them in the right direction. It also trains consumers, many of whom are seeking guidance, that ‘good’ wines are rich, inky, and oaky. If a Napa winemaker chooses to pick at a ripeness level that was considered perfect in the 1970s, ’80s, and early ’90s, he or she will likely score below 90 points.”
OK, fair enough. If the unnamed critic in the piece is at least consistent with his/her assessment of the wines reviewed, I do not see the problem here. Consumers will either trust the writer or not. That should be the end of the debate.
It’s in Cauble’s endnotes that he gets it right:
“Don’t assume the score tacked onto a shelf is Holy Writ — drink and acquire what you like. Above all, remember that wine is about the land, the people who make it, and the friends with whom you enjoy it. A single score never defines the full story.”
In that, I give him 100 points. Consumers always have to look beyond the number. The number is derived from the note critics write and from the trust one has in a reviewer whose palate lines up with one’s own. When the number is separated from the words, the context gets lost in the shuffle, because there is no guidebook on how to review wines; everyone does it differently, and everyone assesses wines based on their own palate. It’s subjective, and if you disagree with an assessment, move on and find a critic who shares your tastes.
It is as simple as that.
Good piece, Rick. I agree that the points system is confusing because everyone has their own system, per se. And I think from a consumer’s perspective, you’re right that you just have to align yourself with someone who seems to share a similar palate. That said, while I don’t think writers or critics should stop awarding points, I do think that QPR should factor in. But it needs to be relative — as Evans Hammond seems to try to do. Perhaps 96 points for an $8 wine is a bit high, but 90 or 91 for an excellent buy would seem very fair to me. The question becomes: are you rewarding the wine more for its taste or for its price? The scale in that case seemed to be tipped far too much toward the wine being a better deal than being of excellent quality.
Couldn’t agree more. It’s all about transparency. Know your critic and how they evaluate wine. It just becomes so darn confusing when marketers put stickers on bottles that say 96 points for an $8 bottle of wine without context.
Hey Rick, appreciate the article. That being said, I have no qualms with the points-based rating system, and I agree that price and value shouldn’t necessarily be part of it.
My point was that Two Sisters seems to have overpriced the wines compared to their objective quality / rating, and that my perception is that part of the pricing reflects the “status” that the Stone Eagle line carries.
All ratings systems (whether for wine, other consumer products, hotels, etc.) have similar problems. It’s difficult to be completely objective, and the user / reader too frequently doesn’t understand the context.
Maybe the answer is to have TWO scores: one for QUALITY, adherence to style, etc., and another that factors in price to give a score for VALUE (bang for the buck, which I am a fan of when the wine or product actually does deliver something on quality, interest, etc.). But I find that many people (including too many store staff) do not understand the difference between quality and value. If you ask them for a recommendation that gives VALUE, they will take you to the CHEAPEST product, which likely does NOT offer value.