By Dan Carney
updated 6:23 a.m. CT, Fri., Sept. 11, 2009
Is a bit of grimy brake dust on a car’s wheels as serious a quality defect as a blown engine? The widely respected J.D. Power Vehicle Dependability Study thinks so, rating cars’ dependability based on factors such as brake dust, wind noise and placement of radio knobs.
This may come as news to consumers, who probably expect that “dependability” means what the industry refers to as “things gone wrong,” such as a blown engine, a leaking transmission or a heater that won’t heat.
As economic considerations encourage consumers to look harder at domestic brands they may have avoided in recent years, shoppers anxiously consult the myriad quality ratings available in an effort to avoid buying a lemon. But doing so can be so confusing that quality scores may not provide the desired assurance. Cars that score well on one rating can fare poorly on another.
The issue here is the gap between the factors consumers think contribute to quality rankings and the factors actually used. Does having to hose off dusty wheels every so often really put a car’s reliability on the same level as that of another car that may have stranded its driver at the side of a dangerous highway with an engine failure?
“In our measure, brake dust is a thing gone wrong,” explained Dave Sargent, vice president of automotive research for J.D. Power and Associates. “The consumers who report this as a problem do believe this is a defect,” he said.
“They clearly consider it to be a quality problem in their definition, which might be different from an engineer's definition,” Sargent added.
“It is weighed the same as engine failure,” by J.D. Power’s dependability study, Sargent acknowledged.
Many different ratings
But car shoppers looking at reliability ratings may expect that a top rating means that the car doesn’t break and that a low rating indicates that a door is apt to fall off.
Adding to the confusion is the proliferation of quality and satisfaction ratings, each looking at some different aspect of owners’ satisfaction with their cars. J.D. Power alone offers the Initial Quality Study (which looks at the first 90 days of ownership), the Vehicle Dependability Study (which looks at the first three years of ownership) and the APEAL study (which looks at cars’ performance, execution and layout of controls). Competitors such as Strategic Vision offer ratings like the Total Quality Index, which examines the buying, owning and driving experience, and breaks out winners by product segment.
Land Rover trumpets its victory in the Strategic Vision TQI in the luxury utility segment, in which it was the most improved brand and the highest-scoring. But Land Rover was next to last in the J.D. Power IQS report, sandwiched between smart and Mini.
Of the bottom six brands in IQS, three are the world’s premier off-road brands, Hummer, Jeep and Land Rover, and the other three are European boutique brands with enthusiastic followings, all exactly the kinds of products that score well in other measures of customer satisfaction.
'Did things break or not?'
Further muddying the picture is the meteoric rise in the quality scores of longtime basement dwellers such as Hyundai. This leaves consumers to wonder if those cars have really improved that much, or if the scoring system is somehow defective.
The company made a concerted effort to upgrade to better materials in the construction of its cars, reports Barry Ratzlaff, Hyundai’s director of product quality. Once the company made its cars more sturdy and durable — the old definition of reliability — it addressed the new definition of reliability by attacking annoyances that customers complained about.
“In the 80s, ‘quality’ meant ‘Did things break or not?’” Ratzlaff observed. “The more modern definition now really does include the whole gamut of things from design quality that includes the ease of operation, material quality, aesthetic quality.”
The company’s effort to address this new definition has been hugely successful. “In two of [the] last four years we’ve beaten Toyota and Honda in [J.D. Power] Initial Quality,” he said.
That meant doing things like changing cars’ interiors to darker colors that don’t show dirt as easily. “We’ve seen improvement of one or two problems per 100 cars based on that,” he said, referring to J.D. Power’s unit of measure for quality.
Another change was to make the rubber gasket ring on the gas cap harder, so it doesn’t compress as much when the cap is tightened, making it easier to unscrew the gas cap. Hyundai also discovered that metal valve stem caps can interfere with the radio signal from its tire pressure monitors, so the company has specified plastic caps only.
The result is that the new Hyundai Genesis sedan scored 84 problems per hundred cars in J.D. Power’s IQS study, putting it on par with the vaunted Lexus brand’s average for its models.
Consumer Reports captures overall picture
So if the well-known quality scores are determined partially by criteria such as the ability to hide dirt on the upholstery and brake dust on the wheels, where can car shoppers find information that really reflects a model’s reliability?
Ratzlaff recommends the ratings in Consumer Reports. That magazine surveys subscribers about their experience with their own cars and reports on the reliability of those cars. It might not be a perfect sample of the population, but it does give a useful snapshot of other drivers’ experiences. “I think that crucible captures the overall picture the best,” opined Ratzlaff.
“When people get confused by ‘satisfaction’ and ‘quality,’ they become loaded words,” said Jake Fisher, a senior automotive engineer for Consumer Reports. “What we concentrate on is breakdowns. That is reliability. It is not 'initial quality' and it is not ‘satisfaction.’"
Perhaps the good news is that one reason for the evolution of the term “quality” is that modern cars really have become so good that real failures are very infrequent, which is good news for drivers of all brands of cars.