The doctoral consortium, organized by Cait Lamberton and Mike Norton, was kicked off by inspiring words from SCP president Darren Dahl and conference chairs Americus Reed and Mark Forehand. Darren encouraged PhD students to be less shy and talk to experienced researchers – while they might not become your new best friend, SCP is a friendly conference, so it’s likely they’ll talk to you. He also emphasized that meeting other students is the most valuable part of attending the consortium because, as Americus echoed, this is a 40-year career, and given the small size of the field, now is the best time to start making connections. Along those lines, Mark suggested using the consortium and the conference to find people to do research with, as the experience of co-authoring with peers is different from working with your supervisor or senior faculty members.
Cait and Mike then revealed the results of the Happiness Survey they had conducted with SCP faculty members (results are covered in detail here), before introducing a panel on how to do good research. Michel Tuan Pham talked about how our field is in transition, with concerns about false positives arising from common research practices, calls for replications, failures to replicate well-known results, and documented scientific fraud. All of this, he said, is getting a lot of media attention, which means our field risks being discredited. “Post-hoc theorizing might be the norm in some schools, and it’s not necessarily all bad, but you have to balance it out – the reason you’re running the study is that you’re expecting to learn from the data, but then you should start to try to replicate it,” Michel said.
Tom Meyvis then talked about collecting data online and suggested that there’s no reason why collecting data on MTurk is bad in general, even though some reviewers may take a negative stance – there’s no evidence, he said, that MTurk workers are worse, and in many cases they’re actually nicer than NYU undergraduates. However, the lack of a controlled environment means you should check for recognition of stimuli, and keep in mind that respondents may take breaks between sections of a survey – a problem for priming and mood manipulations – so it’s important to impose strict time limits. Increasingly, MTurk also has a problem with professional participants who are familiar with well-known manipulations and even exchange notes on them in discussion forums.
Andrew Stephen talked about using multiple methods to test our hypotheses and to find out whether what we’ve discovered in our lab experiments is actually true. Good multimethod research, he said, combines methods in a complementary way so that each method compensates for the weaknesses of another – for example, by providing external validity that wasn’t there otherwise or by making the overall set of results more convincing. In consumer behavior research, this is typically a combination of experimental methods with empirical modelling of real data. “Don’t be afraid to start with some cool data, find an effect and then try to explain it with lab experiments – but make sure the quant and experimental studies fit together nicely and that the DVs match up,” he said. “If you want to understand a phenomenon, you should use whatever skills are relevant and you feel comfortable with – if it’s interviews or ethnography, so be it!” Michel noted that the fact that behavioral researchers don’t talk as easily with qualitative colleagues as they do with quant people is more of a historical artefact than anything else, so that shouldn’t hold anyone back. Following on from the methods focus, Stephen Spiller emphasized the importance of good analysis, took the participants through best-practice guidelines on moderation and mediation, and provided them with a thorough list of resources for data analysis.
To wrap up the panel, Uma Karmarkar from Harvard Business School talked about the importance of getting to grips with neuro, no matter what your area: while experimental methods are good, neuro gives you another way of tapping into what people are thinking and what is hard for them to tell you about (e.g. emotions) – it may not give you all the answers, but it may help you nonetheless. She pointed out two things that neuro does very well. Firstly, it helps to generate new hypotheses by providing new types of information and dynamic measurement, letting us track people as they’re making a decision. Secondly, it helps us predict future behavior, even out of sample, based on “neural focus groups”. And so we wouldn’t leave empty-handed, she also gave the participants a handout detailing the basics of different neuromethods.
After the panel, participants broke into smaller groups for two sessions of coaching on what good research looks like. With all the good advice fresh in their minds, the doctoral consortium participants were then put to the test in a competition to generate good ideas, with Simona Botti, Kristin Diehl and Leif Nelson as the judges. The idea of the competition was to give doctoral students an opportunity to think about how consumer research could solve real-world problems. They were encouraged to work with each other and their faculty coaches to come up with testable solutions to problems such as how to get people to save more water (winner for the solution most likely to work), how to get people to engage in more socially risky behavior such as public speaking (most creative solution), and how to stop people from taking their health and wellness for granted (most impactful solution).
Young brains thoroughly fried, the doctoral consortium wrapped up with a final reception sponsored by the University of Pittsburgh and Harvard Business School.