Does Star Quad Cable Sound Better? Help me find out! Results at USITT

JohnHuntington

Well-Known Member
As part of my talk at the upcoming USITT convention, I put together a
blind cable test, comparing star quad to conventional mic cable.

You don't have to be a "Golden ear" to participate! I'm hoping to
get as many responses as possible, and the whole process should take
less than 15 minutes.

All the details, a file to download, and the survey are here:
- John Huntington's Blog - Does Star Quad Microphone Cable Sound Better? Let's Find Out!

Thanks!
John

Under normal conditions, NO! The whole purpose of star quad is to improve the common-mode noise rejection of the system. If you are plagued with noise being induced into your cabling, then the extra price is worth it.

The twist in twisted-pair cable is there to ensure that any noise induced into the cable arrives at exactly the same amplitude and phase on both conductors, so that the balanced input can cancel it as effectively as possible. More twists per inch (tighter twists) improve the common-mode performance of the cable. Star quad is essentially a way of making the effective number of twists per inch higher than would otherwise be mechanically practical.
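Here's the idealized picture of why that cancellation works (a minimal textbook model, not a measurement of any real cable):

```latex
% Idealized balanced line: the signal S is driven differentially,
% while induced noise N couples identically onto both conductors.
\[ V_{+} = S + N, \qquad V_{-} = -S + N \]
% The balanced input subtracts the two legs, so the noise drops out:
\[ V_{\mathrm{out}} = V_{+} - V_{-} = (S + N) - (-S + N) = 2S \]
% Real coupling is never perfectly equal, leaving a residual
% (N_{+} - N_{-}) term; tighter twisting, or the star quad geometry,
% shrinks that mismatch.
```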

Under normal conditions, where you are not hearing any noise, star quad can actually sound WORSE, especially over long runs. Why? Because it increases the capacitance between the conductors and the shield, and that capacitance works against the source impedance to roll off the high frequencies.
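As a rough back-of-the-envelope check (all numbers here are illustrative assumptions, not measurements of any particular cable), the source impedance and cable capacitance form a first-order low-pass filter:

```latex
% Corner frequency of the low-pass formed by source impedance R_s
% and total cable capacitance C:
\[ f_{c} = \frac{1}{2 \pi R_{s} C} \]
% Assumed example: a 150-ohm source driving 100 m of star quad at
% roughly 150 pF/m, i.e. C of about 15 nF:
\[ f_{c} \approx \frac{1}{2 \pi \cdot 150\,\Omega \cdot 15\,\mathrm{nF}} \approx 71\,\mathrm{kHz} \]
```

Under those assumed numbers the corner sits well above the audio band, which is why the effect only starts to matter on very long runs or with unusually high source impedances.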

Pick your poison: high-frequency loss versus hums and buzzes. Everything in engineering is a trade-off. No free lunch.
 
Having studied the psychology of drafting surveys and performing blind experiments, I find your survey flawed. It's not a blind test if you have told us which is which (that is, unless you're lying to us about which is which -- a valid approach in some experimental designs, but one that doesn't apply here). Even if you're lying to us about which is which, the fact that people think they know which is which may skew your results with the predispositions they have towards one or the other type of cable.

What you'll end up with are people who respond thinking, "I think clip X15 must be star-quad cable because it sounds better than clip X16, and star-quad cables always sound better." That, instead of, "I think it's _____, because it sounded most like the ______ reference at the beginning of the audio file."

Rather than asking which we think is which for questions 1-10, I think the more relevant questions are 11-18. I do not see how questions 1-10 could produce any kind of conclusive data.

Another concern is how complicated the test is. "Oh, but it's so easy to listen to a 13-minute audio clip and check some radio buttons on a form." Yes, it is, but to perform the test in a way that yields consistent data, people should be comparing each X clip to the references and stating which they think was better, not which they thought was which type of cable. Suddenly a 15-minute survey is 35 minutes. Many people will be exhausted from listening to the clips by the time they reach X03 or X04, trying to discern which is which. By the time they get to X10, they're just ready to check an option -- any option. They want to move on and be done with it.

If I were performing the experiment, I would record the Conventional Cable and Star Quad Cable clips, then have someone not connected to the project rename the files Cable X and Cable Y, not telling me which is which until I'd received the survey results and concluded that people think Cable X or Cable Y sounds better. Only after I'd drawn a conclusion from the data would I find out from the person who renamed the files which was which -- eliminating the potential for bias from both the survey participants and myself as the experimenter.
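That kind of blinding step is trivial to script, too. A minimal sketch in Python (the directory names, file extension, and key file are hypothetical, just for illustration):

```python
import csv
import random
from pathlib import Path

def blind_clips(src_dir: str, dst_dir: str, key_file: str) -> None:
    """Copy clips under neutral names; write the decoding key to a
    separate file the experimenter doesn't open until analysis is done."""
    clips = sorted(Path(src_dir).glob("*.wav"))
    random.shuffle(clips)  # so the neutral names carry no order clue
    Path(dst_dir).mkdir(parents=True, exist_ok=True)
    with open(key_file, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["blinded_name", "original_name"])
        for i, clip in enumerate(clips, start=1):
            new_name = f"clip_{i:02d}.wav"
            (Path(dst_dir) / new_name).write_bytes(clip.read_bytes())
            writer.writerow([new_name, clip.name])

# Hypothetical usage: a colleague runs this and keeps key.csv sealed.
# blind_clips("recordings", "blinded", "key.csv")
```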

I would skip the X1-X10 tests completely; the really conclusive data comes from your Y and Z clips, but by the time people get that far in the survey, they'll check any box they can just to be done with it. They've also spent the first 10 questions confusing themselves on which they think is which.

As a matter of form, I'd also avoid radio buttons, as people have a tendency to just click one or the other without much thought. Some will click the top option because it's the first one, and others will deliberately mix up their answers to make it look like they tried -- this especially happens when people aren't confident they can discern a difference and end up just guessing. Having people type in Z or Y makes the results less friendly to analyze afterwards, but survey participants are more likely to make an educated decision when they have to consciously decide which letter to type.

Personally, I would participate if it were just Y and Z, but I have no interest in attempting to answer the first ten required questions because: 1) I believe they produce skewed results, 2) It takes a long time and I'll drive myself nuts attempting to discern a difference, and 3) There's no option to participate in just the Y and Z comparison without either suffering through the first 10 questions or just clicking random radio buttons to get to the part of the survey that I think matters.

You've certainly given this survey and this topic more thought than I have, so maybe you have valid reasons for the way you're doing it that I haven't considered, but I think you'll find more conclusive, credible, and consistent data, as well as more participants, by cutting it down to just the Y and Z clips for comparison. It'd also be less hassle for people to download, because it could be a 35 MB audio file instead of 216 MB, and 2 minutes to listen to instead of an absolute minimum of 15 (a more practical minimum of 20-30 for those who really want to do the survey well).
 
@FMeng Did you actually read what I wrote at the link? I evaluated claims about the sound of the cable itself, and specifically did not address the EMI rejection issue.

John
 
Having studied the psychology of drafting surveys and performing blind experiments, I find your survey flawed. It's not a blind test if you have told us which is which (that is, unless you're lying to us about which is which -- a valid approach in some experimental designs, but one that doesn't apply here). Even if you're lying to us about which is which, the fact that people think they know which is which may skew your results with the predispositions they have towards one or the other type of cable.

Thanks for your feedback.

What you'll end up with are people who respond thinking, "I think clip X15 must be star-quad cable because it sounds better than clip X16, and star-quad cables always sound better." That, instead of, "I think it's _____, because it sounded most like the ______ reference at the beginning of the audio file." Rather than asking which we think is which for questions 1-10, I think the more relevant questions are 11-18. I do not see how questions 1-10 could produce any kind of conclusive data.

Well, that's exactly what I'm going for. Are you familiar with the A/B/X testing commonly used in audio listening tests? That form of testing does exactly what I did: you have a known "A", a known "B", and then your job is to tell us which one X is. That was my first test, to answer the question: is there any difference in the sound of the two cable types? Wikipedia has a good write-up of ABX testing: ABX test - Wikipedia, the free encyclopedia.
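For anyone curious how an ABX run gets scored, the statistics are simple: count correct identifications and ask how likely that many hits would be by pure guessing. A minimal sketch (assuming a forced choice with a 50% guess rate per trial -- a generic ABX assumption, not anything specific to this survey):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial test: probability of getting at least
    `correct` answers right out of `trials` by guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

# Example: 9 of 10 X clips identified correctly would happen by
# chance only about 1% of the time.
print(f"{abx_p_value(9, 10):.3f}")  # -> 0.011
```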



Suddenly a 15-minute survey is 35 minutes. Many people will be exhausted from listening to the clips by the time they reach X03 or X04, trying to discern which is which. By the time they get to X10, they're just ready to check an option -- any option. They want to move on and be done with it.

If they are struggling that much, then the difference between A and B is likely not audible. If there is a difference that matters, it should be audible, and while the test requires concentration, it shouldn't be torture. I once won a T-shirt at an AES convention because I clearly heard (through an ABX tester like the one pictured in the Wikipedia article) a difference between two cassette tapes (boy, that dates me). I did that with a pair of headphones on a crowded convention floor.

If I were performing the experiment, I would record the Conventional Cable and Star Quad Cable clips, then have someone not connected to the project rename the files Cable X and Cable Y, not telling me which is which until I'd received the survey results and concluded that people think Cable X or Cable Y sounds better. Only after I'd drawn a conclusion from the data would I find out from the person who renamed the files which was which -- eliminating the potential for bias from both the survey participants and myself as the experimenter.

You're welcome to set up an experiment! The whole point of my USITT talk is that this isn't hard to do. Did you listen to the audio file? Can you please explain to me how any bias I may have could have influenced any part of the actual test? I worked pretty hard to avoid it. I don't recall publicly stating my own experience of listening to the clips, and, as I said, I even recorded the voiceovers before I slotted them into the audio file.

I would skip the X1-X10 tests completely; the really conclusive data comes from your Y and Z clips, but by the time people get that far in the survey, they'll check any box they can just to be done with it. They've also spent the first 10 questions confusing themselves on which they think is which.

While I welcome anyone's feedback (my sister did the whole test and she's a therapist), the primary target for the survey is professional audio engineers, who spend hours and hours carefully listening to differences in sound.

As a matter of form, I'd also avoid radio buttons, as people have a tendency to just click one or the other without much thought. Some will click the top option because it's the first one, and others will deliberately mix up their answers to make it look like they tried -- this especially happens when people aren't confident they can discern a difference and end up just guessing. Having people type in Z or Y makes the results less friendly to analyze afterwards, but survey participants are more likely to make an educated decision when they have to consciously decide which letter to type.

Interesting point. The responses are randomized, so even if people just pick the first one, the results will be random. But I'll take up your point about typed-in responses with my psychologist colleague if I do a future survey.

Personally, I would participate if it were just Y and Z, but I have no interest in attempting to answer the first ten required questions because: 1) I believe they produce skewed results, 2) It takes a long time and I'll drive myself nuts attempting to discern a difference, and 3) There's no option to participate in just the Y and Z comparison without either suffering through the first 10 questions or just clicking random radio buttons to get to the part of the survey that I think matters.

I'm sorry you won't take part.

You've certainly given this survey and this topic more thought than I have, so maybe you have valid reasons for the way you're doing it that I haven't considered,

I commend you for being the first person on any forum to actually address the issues of the survey! :)

John
 
Mike, if you saw the original article that John is reacting to, as well as the author's responses to his and Bob McCarthy's comments (it would be great to see Jim Brown, Bill Whitlock, Neil Muncy, etc. jump into this one as well), you might understand the goal a bit more: Live Sound: Up Your Audio: Time For Star Quad Microphone Cable - Pro Sound Web. What it seemed to boil down to was someone replacing some old, beat-up, unreliable cables (bad enough that he offered to replace them even if the company didn't) with new star quad cables for a few gigs, and then deciding that any resulting changes in sound for those gigs were due solely to the star quad cable. No side-by-side comparison, no attempt to eliminate or control other variables, just vague gushing over the changes star quad made, and this in a pro audio rather than an audiophile publication. Maybe you could post some similar responses regarding his approach and conclusions.

I personally can tell you that none of your concerns were valid when I took the survey. I'm not clear how typing X or Y would avoid similar issues: X is simply easier to type, so that becomes the default answer, and anyone intentionally trying to create a mix of answers would do so whether typing them or selecting radio buttons. And eliminating the X1-X10 tests would seem to remove any statistical support and make the entire survey a couple of rather vague questions (which are clearly a direct response to the claims made in the article). I actually think it will be interesting to see whether people who showed a bias toward either cable in the last questions reflected that same perception in the X1-X10 questions.

Added: It might be nice to know the person's background just to see how it correlates; however, since the point seems to be more about whether the cable type makes a clearly audible difference, I think the goal may be to not correlate the results to specific demographics and instead look at them only as an overall average. I did find the question on what you used to listen with a bit too generalized; I had to answer it as computer speakers, but I have three sets of sub-and-satellite computer speakers around here and the quality varies significantly. The Klipsch ProMedia system I used, with the EQ adjusted for the listener position using Smaart and SysTune, is probably at the higher end of computer speaker systems and possibly competitive with many lower-end 'monitor' speakers. But it is still a computer speaker system.
 
IMO, the ABX method is fine for short tests, and for professional audio engineers it may be relatively easy to distinguish one cable from another with consistency. On the other hand, you may get a lot of people clicking that link who are listening on poor-quality speakers or who don't have the ear for this sort of test, which is where there's potential for a lot of inconsistent results. Careful picking apart of the data can still yield some insightful information.

You'll have some people who will just know what they think on the first listen, but I suspect there will be a lot of people who, to come to a solid conclusion, will want to compare.

I did see the question on there about the type of speakers/headphones, but it might have been good to have people list what sort of audio experience they have. It doesn't have to be "25 years in the biz" specific, but if someone is a mastering engineer, their results are going to carry far more weight than those of the college student who thinks they've got recording-studio-quality speakers on their PC. (Just by posting here you'll probably get a number of high school and college students.)

I did listen to the audio file; you've eliminated the possibility of verbal cues caused by experimenter bias by recording the dialog in advance. The issue now isn't that you would receive skewed results, but that you could misunderstand the data when you analyze it. You may jump to certain conclusions and see correlations that do not exist based on your existing prejudices one way or the other on the topic at hand. Maybe you expected it to go one way, then when you stared at the results you convinced yourself it turned out exactly as you expected, even though the data might have far more depth and detail to it that you overlooked by presuming your initial hypothesis had been confirmed.

I did notice earlier that your first 10 responses were randomized, and in my haste this morning a quick glance at Q11-Q14 mistakenly led me to believe Y and then Z were the answers for each of them, but it's actually a 50/50 split.

=======

Don't misunderstand my intent, John. I don't believe this to be a "bad" experiment; I'm just offering up some peer criticism on the format and possible sources of uncertainty. The points I've raised are extremely picky and may not even be significant factors for this particular experiment. On the other hand, a 13-minute audio track instead of a 2-minute one could be the difference between 50 responses and 100 responses, and having Y and Z at the end of all of the X tests may produce different results than having them before.

What I have to offer you are just my opinions on what the possible sources of criticism and uncertainty could be, which are the same things I would have expected the instructors of my psych classes to bring up if I had proposed this sort of experiment to them.

Despite my douchey critique of your experimental setup, I genuinely hope you find something interesting in the data you're gathering.
 
You'll have some people who will just know what they think on the first listen, but I suspect there will be a lot of people who, to come to a solid conclusion, will want to compare.

That's fine; if there is a difference, it should be audible, and they should be able to identify the X clips pretty easily. Also, the whole point of this test is that it separates fact from predisposition. Having an opinion about one type of cable shouldn't let someone "game" the survey, since it's all blind.

I did see the question on there about the type of speakers/headphones, but it might have been good to have people list what sort of audio experience they have.

Good suggestion; I'd add something like that if I do something like this again.

I did listen to the audio file; you've eliminated the possibility of verbal cues caused by experimenter bias by recording the dialog in advance. The issue now isn't that you would receive skewed results, but that you could misunderstand the data when you analyze it.

Isn't that true of any experiment?

You may jump to certain conclusions and see correlations that do not exist based on your existing prejudices one way or the other on the topic at hand. Maybe you expected it to go one way, then when you stared at the results you convinced yourself it turned out exactly as you expected, even though the data might have far more depth and detail to it that you overlooked by presuming your initial hypothesis had been confirmed.

Isn't that true of any experiment? But the way I will address this is to post the complete, raw data set online. If you look back at the Infrasound research on my blog, that's exactly what I did. If someone wants to challenge my analysis, that's fine.

I did notice earlier that your first 10 responses were randomized, and in my haste this morning a quick glance at Q11-Q14 mistakenly led me to believe Y and then Z were the answers for each of them, but it's actually a 50/50 split.

Actually, ALL of the response choices are randomized (except the demographic questions at the bottom). And a legitimate random assignment can easily come out as a 50/50 split.
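To put a rough number on that -- assuming Q11-Q14 amount to four independent random choices between Y and Z, which is just my reading of the four questions mentioned above -- an even split is actually the single most likely outcome:

```python
import random
from collections import Counter

# Simulate randomly assigning Y or Z as the correct answer to four
# questions, and count how often the key comes out split 50/50.
trials = 100_000
even = sum(
    Counter(random.choice("YZ") for _ in range(4))["Y"] == 2
    for _ in range(trials)
)
print(f"50/50 key in about {even / trials:.0%} of random assignments")
# Analytically: comb(4, 2) / 2**4 = 6/16 = 37.5%, so an even split is
# entirely consistent with genuine randomization.
```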


Despite my douchey critique of your experimental setup, I genuinely hope you find something interesting in the data you're gathering.

Thanks for your thoughtful response! I welcome any comments, as long as they are not ad hominem and are on topic.

John
 
