Links + SIOP

Got the links page up. If you’ve been wanting to click on something but you just don’t know what, check it out. It’s got links to friends’ webpages, all my other online sites/projects, and a smattering of my favorite articles I did for GameSpy.
Also, today was the deadline for submissions for the 2004 SIOP convention. For the first time ever, I submitted something! Lamely, it was my (finely aged) dissertation. Therese Macan (my advisor from grad school) hacked my 121-page prove-you-know-it-through-volume dissertation down to a 21-page I-don’t-have-time-for-this poster submission.
The title is “Improving Applicant Reactions by Altering Test Administration” and here’s the abstract:

Research on applicant reactions has traditionally focused on a limited set of test characteristics. Using an organizational justice framework, we examined six characteristics of test administration and their role in outcomes such as company attractiveness and intentions to remain in the selection process. Two hundred eight job applicants in nine different locations provided their reactions before and after the test. Results show that six rules (participation, consistency of administration, uncertainty reduction, interpersonal treatment, transparency, and quality of two-way communication) are all related to overall perceptions of fairness, and that these perceptions are related to the outcomes examined.

In other words, here’s how to reduce the chances that you’re going to piss applicants off by making them think your selection process is unfair: let them ask questions, let them make decisions that they think affect the outcome of the situation, don’t be a dick, treat everyone the same, and tell them what to expect.
I’d love to expand this research, and am on the lookout for opportunities to do so. There may be chances here at Sempra Energy for me to do some applicant reactions research using computer-based selection tests. That would be fantastic, but we’ll have to implement it in a high-volume job. We’ll see.
Therese wants me to submit this to a journal for full publication, which I may do. Whether or not it gets published, I’d like to at least check that off my “Life’s To-Do List”.
Oh, I’m also involved with a Practitioner Forum submission on survey methodology. Here’s the abstract from that:

This practitioner forum will address important real-world issues relevant to survey practitioners and their clients. Through the use of actual survey studies, the papers will answer common survey questions and offer practical recommendations to assist the survey specialist in delivering higher quality results.

And then here’s the relevant bit from the body of the proposal:

Lastly, in the fourth paper, Morris, Madigan, and Ashworth answer a very important methodological question: Do results differ between a survey that is administered annually and one that is administered more frequently? The authors serve as internal consultants for an Energy Service Company and manage several internal customer satisfaction programs. They were advised (via a web seminar by Better Management) that customer surveys should be administered more frequently, the rationale being that more frequent feedback allows for quicker response time to address and fix what is not going well and to acknowledge and reward what is going well. From a psychometric perspective, however, the authors were interested in how measurement frequency would impact results.
To investigate their question, they compared item results between an annual survey and a point-of-service survey. The point-of-service survey was much shorter, but contained identical items pulled from the larger annual survey. Differences in mean ratings and response patterns were found. After finding these quantitative differences, they also investigated qualitative differences in the open-ended responses.

Wish us luck!