Petite little women drivers are better

It’s kind of old news given that the Indianapolis 500 is over, but I saw an interesting story on Defective Yeti about Danica Patrick, a female racer who ended up finishing fourth. Or specifically, it’s about the comments of another racer, Robby Gordon, who refused to participate because them dainty chicks have an unfair advantage. Scribbled the quote monkey:

Robby Gordon accused Danica Patrick of having an unfair advantage in the Indianapolis 500 and said Saturday he will not compete in the race again unless the field is equalized.

Gordon … contends that Patrick is at an advantage over the rest of the competitors because she only weighs 100 pounds. Because all the cars weigh the same, Patrick’s is lighter on the race track.

“The lighter the car, the faster it goes,” Gordon said. “Do the math. Put her in the car at her weight, then put me or Tony Stewart in the car at 200 pounds and our car is at least 100 pounds heavier. I won’t race against her until the IRL does something to take that advantage away.”

Wait, what? I agree that weighing less gives you an advantage, but so does, you know, being a good racecar driver. Since when did we decide that having greater ability in a sport gives someone an “unfair” advantage? You want to ban anyone over 6 feet tall from the NBA? Standardizing the equipment athletes use (e.g., no corked baseball bats or weight requirements for race cars) makes sense, because it places the emphasis where it’s more interesting and dramatic: on the athlete’s ability, including their weight, height, reach, speed, constitution, et cetera.

What’s interesting is that women and some racial groups (such as Asians and Hispanics) are usually at a disadvantage for physical abilities because they are on average smaller and physically weaker than those big, white males. But here it’s working to Patrick’s advantage and some guy is calling foul and crying into his Budweiser. I don’t know about athletics, but the law is pretty clear on this kind of thing in employment: If the physical ability (or just about any ability) is related to the job and predicts performance better than chance alone, then it’s fair (or at least legal) to hire people based on it. It’s just letting people who are the most suited to the job win.

Blogging SIOP, Day 3

“Wardrobe degradation” is a phenomenon that every experienced SIOP-goer is familiar with. Here’s how the dress code usually goes:

DAY 1: Suit
DAY 2: Slacks/skirt and dress shirt
DAY 3: Burlap sack

Fortunately, the third day is only half a day, and most people don’t even stay that whole time. I’m only going to attend one session today myself before hitting the road. I’ve also got to swing by the vendor hall once more to see if there are any other books I have to have. I’ve only bought three this year.

The first book, entitled “The Psychology of Online Consumerism,” doesn’t have much, if anything, to do with I/O, but it looked really interesting. It’s a collection of scientific studies about online behavior. So it covers, for example, what makes people click on ads or sign up for e-mail lists. This would have been a good book to have three years ago when I was doing that kind of stuff.

The second book is called “Intelligence: A Brief History” and seems to be exactly what I’ve been looking for: a summary of the development of the intelligence construct, how it’s conceptualized, and what it means. It seems a little thin for the topic, but hopefully it’ll strike the right balance.

Third book is the excitingly titled “Employment Discrimination Litigation”. It’s pretty expensive, but it’s a brick and will probably be one of those books that I keep handy for reference for the rest of my career. In fact, I bought the book because this symposium clued me in to how much I still have to learn:

8:00 – 9:50 A.M. Symposium – Cut Scores in Employment Discrimination Cases: Where We Are Today

Actually, not much to say about this one, other than I’m glad I stuck around for it instead of heading home. The thing that really stuck out to me was the discrepancy between what the Uniform Guidelines and SIOP Principles say about cut scores (do whatever you want if your test is valid) and what the courts have said (do these three things that contradict each other). There’s been a lot going on that I hadn’t kept up with.

After that, though, I hit the road and headed home. Good conference, I just wish I didn’t have to go to work in the morning.

Blogging SIOP, Day 2

Hrm. Today didn’t start off quite as well, as I missed the 10-second window in the coffee break during which food is actually available. Get there late, as I did, and all the grad students leave you crumbs and a few dozen apples.

Sessions!

8:00 – 9:50 A.M. Practitioner Forum – Applying Validity Generalization: A View from the Job-Analysis Trenches

Not much to say about this one, other than it has a lot of relevance to what I do at work. Good stuff. The only funny thing was that I sat next to Frank Schmidt, he of “general intelligence: It’s what’s for dinner and it’s all you’re going to eat, EVER” fame. Actually, sitting next to him wasn’t the funny part. The funny part came after someone asked about generalizing the validity from an intelligence test for plumbers to a similar test for accountants, when I heard Schmidt mutter, “This has already been proven. Why would you waste time proving it again if it’s already been proven?”

10:30 – 11:50 A.M. Practitioner Forum – Experience-Based Prescreens: Suggestions for Improved Practice

Another good session. This one struck me several times with ideas that seem obvious once you hear them, but for some reason few people really think of. Some of the presenters talked about how they replaced those dubious pre-screening questions with items that are much more grounded in job analysis (shock! amazement!) and more likely to increase the overall validity of a selection process. So instead of “Do you have a college degree?” you ask questions about their knowledge of specific things they should have learned about in school or how many times they’ve done tasks necessary for the job. There was also a good talk about “when is an applicant an applicant,” which I actually touched on here once.

12:00 – 1:20 P.M. Panel Discussion – Validation Studies: Working with Difficult Clients or Data

This was a sequel to a similar presentation from last year. They’re both probably best described as “When Good Validation Studies Go Bad.” The idea was to present a few problematic scenarios common to test validation then describe what experts working in the realms of academics, internal consulting, external consulting, and government had to say about it. Some of it was pie in the sky unlimited resources textbook answers, but just as much was practical advice that made a lot of sense. And I’m glad that in many cases my own intuition and experiences were dead on with what was recommended by the experts.

1:20 – 2:50 P.M. Practitioner Forum – Maintaining Test Security in a “Cheating” Culture

I thought this session was largely about maintaining test security so that people couldn’t get ahold of copies. What surprised me was the number of sites, books, and sundry scams out there tailored towards giving you leaked copies of tests and/or the answers to go with them. The U.S. Postal Service took a unique stance against this by releasing, for free, all the information that these guys were trying to sell people. Another presenter discussed how they went to incredible lengths to ensure test security in what sounded more like a scene from a Tom Clancy novel: Guards carrying self-destructing packages full of tests and sending them on their way in armor-plated trucks. And I thought tests were secure if they were in a locked filing cabinet.

3:30 – 5:20 P.M. Theoretical Advancement – Evolutionary Psychology’s Relevance to I-O Psychology

Whew. This was kind of a weird one. Late as it was in the day, I kind of spaced out and only really remember four things:

  1. Men want to have sex with as many women as possible.
  2. Nepotism exists in family-owned businesses.
  3. Sparrows like to take risks.
  4. Asians dig Confucius.
  5. I barely stifled a blast of laughter when one of the presenters put up a reference to “Wang & Johnson (2004).” I’m such a child sometimes.

This particular symposium was …eclectic.

Didn’t do much tonight. I grabbed some take-out from a restaurant near the hotel then came back to the room to take it easy. I had good intentions to go over and hit some of the receptions tonight, but I don’t think that’s in the cards at this point.

Blogging SIOP, Day 1

I have a long-standing SIOP tradition: in the eleven conferences I’ve been to, I’ve never gone to the presidential address that kicks things off on Friday morning. Not being ensconced in the organization’s inner echelon, it just never seemed like there was anything for me in it. This year, though, my boss was doing the slide show, so he talked me into going. Here’s how it went:

8:30 – 9:15: Polite clapping for people I don’t know getting awards that I had no idea existed.
9:16 – 9:18: Someone on stage tells a funny story about SIOP and pornography (this is the highlight of the show so far).
9:19 – 9:30: A bunch of people show me some graphs.
9:30 – 9:32: Leatta Hough shows me old pictures of some German kid named “Fritz.”
9:33 – 10:00: Fritz Drasgow (possibly related to that German kid) refers to computerized testing as “a boondoggle.”
10:01: I make a note to myself to look up “boondoggle” in the dictionary.

So yeah. I won’t be going back next year. On to the sessions.

10:30 – 11:30 A.M. Panel Discussion – Personality Variables at Work

This was kind of supposed to be “Round 2” in a legendary panel discussion that started at last year’s conference. That discussion was supposed to be about faking in personality tests, but turned into a muck slinging contest over whether personality existed at all. Last year at one point someone in the audience got up and told several editors of top-tier scientific journals that they possessed a worse grasp of research methods than most of his undergraduate students. Hilarious! The only highlight this year, though, was that Barrick (of “Barrick and Mount” fame, you know you know them) actually used a slide with pictures of children and puppies. Point, set, and match to the pro-personality camp. But while this year’s board was better balanced in terms of pro- and anti-personality, it was nowhere near as cantankerous. Nobody got their nose smashed with a folding chair so I left early.

12:00 – 12:50 P.M. Practitioner Forum – Cutting Edge Tools for Traditional Job Analysis: How Technology Maximizes Efficiency

Not much to say here. It was basically a venue for a group of vendors to show off their products. Some neat stuff, but I’m kind of surprised how easy it is to get away with putting up a plain old online survey, giving it a few tweaks, and calling it a technology revolution. I do, however, really like the idea of putting job analysis tools online provided you do have experts involved at some point to give guidance.

1:00 – 2:50 P.M. Symposium – References and Recommendation Letters: Psychometric, Ethical, Legal, and Practical Issues

Shocker: Reference letters are uniformly glowing and don’t predict squat. I had hoped that this symposium would be more about employment reference checks, but the bulk of it was on academic reference letters like the ones you bug your professors to write when you apply to graduate school. One thing that I took away from this is that there’s more reliability between one letter writer’s letters across different students than there is between two letters by separate professors for one student. In other words, professors usually use form letters that, while they may be glowing, aren’t really all that specific to you. The other thing that struck me was the revelation that there are almost NO cases in which an employer has been sued for slander (or libel) after providing a negative reference. In fact, there have been WAY more cases of employers being sued for negligent hiring because they failed to try and get a reference for someone who went on to do unspeakable things like molesting the snack machine.

3:30 – 4:50 P.M. Practitioner Forum – HR Technology Applications Now and Tomorrow

Hey, this was my presentation! I got to sit up in front behind the big table, overlooking the audience like a Lord and everything. Everything went fine and I was much less nervous than I expected to be. I honestly didn’t even listen to the other presentations, intent as I was on a last-minute review of my own. The only highlight was when my lapel mic fell off my jacket and I picked it up, saying “Uh oh, I’m having a wardrobe malfunction.” That got laughs, even if my assertion that good employment tests should not cause cancer did not.

That night I went to a little reception that the U. of Missouri – St. Louis alumni association put on. This was pretty cool, as there were a lot of people there that I hadn’t seen in a long time. Also, I got a free tee-shirt. After that I went reception hopping with a couple of the guys I used to work with at Anheuser-Busch. We didn’t necessarily have invitations to any of the receptions, but the only secret to getting in is to walk into the place like you belong there, grab a beer, and start talking loudly about that time that you did that thing with those people. At one point I went to the trouble of grabbing an invitee’s name tag out of the pile next to the door, choosing to impersonate my buddy David Morris, who had gotten an invitation. On my way out I ran into David, who had been unable to find his purloined name tag and had chosen to go under the moniker “Ann-Marie Ryan.”

Wandered back to the hotel around 11:00 and was in bed by midnight. Pretty good for day 1.

Blogging SIOP, Day -1

If I may be allowed the indulgence of using a noun as a verb, I’ve decided to “blog” the SIOP (Society for Industrial and Organizational Psychology) annual conference this year. No, seriously. Why not? Nobody else is doing it, and if anyone comes here looking to download my presentations and clicks on the “blog” link, I’d rather he not just see entries about cross-gender MMORPG gaming (though given the growth of the “Lesbian, Gay, Bisexual, and Transgender” scene at SIOP, that might be of interest).

Good news: The conference is in Los Angeles this year, which means little travel. Other good news: I’m making two presentations at the conference this year –one on improving applicant reactions to selection tests by changing how the tests are administered and the other one on building an offline testing system that still benefits from information technology. Bad news: They double-booked me so that I have to make the two presentations at the same time in two different places. Better news: My boss from Sempra is stepping in to do one of them while I give the other.

I’m actually up in Los Angeles now, having driven up yesterday for semi-related business with work. The hotel I’m staying at is nice, though the clerk didn’t seem to appreciate it when I told her that it didn’t make much sense to charge $12.99 per movie to see a new release in your room while charging only $9.99 for a whole day’s worth of high-speed internet access, the latter of which could –theoretically– be used to download and watch all the movies I want. It didn’t really matter much, as I just read a book anyway.

I took some time today to go through the SIOP program and pick out things I wanted to see. As usual, there are a TON of great programs, workshops, roundtables, and discussions going on. More than I can make it to even if I limit it mostly to selection/assessment topics that have the most relevance to my current job. Here’s what I highlighted as likely candidates:

  • The Usefulness of Personality Variables at Work
  • Cutting-Edge Tools for Traditional Job Analysis: How Technology Maximizes Efficiency
  • References and Recommendation Letters: Psychometric, Ethical, Legal, and Practical Issues
  • Applying Validity Generalization: A View from the Job-Analysis Trenches
  • Fundamentals of Employment Law: Concepts and Applications
  • Performance Appraisal Isn’t Performance Measurement: Why Poor Workers Receive Good Ratings
  • Experience-Based Prescreens: Suggestions for Improved Practice
  • Validation Studies: Working with Difficult Clients or Data
  • Maintaining Test Security in a “Cheating” Culture
  • Where Recruitment is @: Current Approaches to Web-Based Attraction Research
  • Evolutionary Psychology’s Relevance to I-O Psychology
  • Have You Ever Wondered? Research Ponderables from Employee Survey Experiences
  • Getting Started with Computer-Based Testing
  • Cut Scores in Employment Discrimination Cases: Where We Are Today
  • Emotional Intelligence and its Impact on Job Performance

Whew. That’s a lot of stuff, and I actually won’t be able to hit all of those given how some of them overlap. But that’s what jumped out at me as particularly interesting. No, seriously.

There were two other symposia titles that jumped out at me. Not because they looked particularly interesting (though they may be), but because they were funny. The first is “Online Assessment as a Valid Enhancement of the Selection Process.” This title struck me as peculiar because it’s so broad despite sounding so specific. What “online assessment?” Which “selection process?” Kind of a good example of writing a symposium proposal so vague yet so enticing that it gets accepted and you don’t have to actually worry about the contents. (Though for the record, once you drill down and read the titles of the presentations therein, they DO sound pretty good.)

The second presentation struck me as funny because while other titles were making copious (and sometimes grammatically suspect) use of colons, semicolons, and other bastardisations of the English language, this one is simply entitled “A Master Tutorial by Sidney A. Fine.” No explanation, no details, just the man –excuse me, the MAN– who will be delivering it. It’s actually doubly amusing for those of us in the biz, though, because Sidney Fine’s name IS inextricably tied to the topic of “Functional Job Analysis” and thus doesn’t actually require any more explanation. It’s like seeing a playbill for “A Night of Shakespeare” or an ad for “SpongeBob on Ice.” You know you’re in for a night of gibberish-filled pandering to the lowest common denominator and an afternoon of fine theater (respectively). Such it is with “A Master Tutorial by Sidney A. Fine,” though Dr. Fine’s presentation doesn’t have regicide, incest, or a catchy theme song.

…Probably.

Two I/O Psychologists walk into a B.A.R.S.

This morning I worked on the PowerPoint presentation for my Practitioner Forum presentation at this year’s SIOP. I spent more time than I should probably admit making this graphic for a slide where I mention our use of scannable forms for employment testing:



Still, it made me chuckle, and I really hate the kind of drab exercises in paying attention that epitomize the majority of SIOP presentations. Later in my presentation I’m going to just put up a picture of a llama and say “And here’s a picture of a llama.” I’ll pause for a second, then say nothing as I move on to the next slide. If anyone asks what’s up with the llama, I’ll give them a confused look and deny that there was ever any llama.

Why? Because there’s no reason this stuff has to be so boring. I keep thinking about the time I saw the Game Developer Conference presentation by game designing legend Will Wright. I mean, Wright’s talk was on an inherently fun topic like game design, but it was also really cerebral and abstract. He had slides like this one (not to mention this one) and talked about “vector fields” and “group social dynamics,” but the presentation was really engaging and everyone was riveted. Contrast this to some stuffed shirt whose idea of a great SIOP presentation is a huge, unreadable correlation matrix peppered with asterisks denoting p values less than .05.

This problem extends to books and journal articles in the world of I/O Psychology, too. Here, I just pulled a book on job analysis down from my shelf and flipped to a page where this was written:

A job analyst may learn a good deal about a job simply by observing and recording what a worker does. Naturalistic observation occurs when the analyst’s presence has little or no effect on the worker’s behavior. This can be achieved by conducting observations over a long enough period of time that the worker no longer pays any attention to the analyst. Or the analyst may observe more actively by asking questions about particular behaviors as they occur.

I mean, that’s fine. And this is actually a pretty useful book on balance. But that stuff’s boring! And it just goes on and on. If I were writing that passage, it might have been more like this:

Of course, Dr. Obvious, one of the first things you can do is actually watch people doing the job in question. I know it’s not full of the glitz and HARDCORE MONSTER STATS CRUNCHING ACTION that you expect from the world of Job Analysis, but it’s actually pretty effective. You can ask questions if it doesn’t make the guy want to punch you in the throat, but try to be inconspicuous. Ideally you wouldn’t show up with a bull horn, sneak up behind the worker, and announce “WHAT’S THAT? WHAT ARE YOU DOING? NOW WHAT ARE YOU DOING? OOOH, WHAT HAPPENS WHEN I PRESS THIS BUTTON? WOAH, YOU’RE GOING TO GET IN TROUBLE FOR THAT, AREN’T YOU?”

I don’t mean to puff my feathers all up, but really –which book would you rather read, especially if they both ultimately contained the same information and covered the same topics?

I won’t try to be as entertaining as Will Wright when I do my presentation. Besides, they’d probably run me out of town if I tried. But I do want to keep things interesting and yes I’ll say it: fun. And if in doubt, I can always throw in a dirty limerick.

p.s., Sorry for the terrible pun in the title of this post. It won’t happen again.

I am once again a master of the web

I mentioned a while back how I was elected, through what I think was an uncontested race, to the office of “Vice President – Web Publications” for the Personnel Testing Council of Southern California. This is a fancy pants way of saying “Webmaster” as my duties seem to wholly consist of updating the website.

Unfortunately the old website (archived here) appears to have been done in a WYSIWYG editor. The code looked like someone had filled a paintball gun with <font> tags and unloaded the thing on the hapless web. It looked fine to the end user, but it made it hard for me to update. To remedy this I spent a chunk of my weekend recoding the whole thing from scratch. It’s a very simple design, but I think it turned out okay. Observe for yourself.

I had told myself that I’d never code another site using HTML tables, and that my next web project would be structured and laid out using only the glory of cascading style sheets (a.k.a., “CSS”). I have yet to get around to teaching myself the necessary CSS skizzles, though, so the PTC-SC site makes use of tables. And you know what? I’m not so sure that’s such a bad thing. I don’t understand the bad rap HTML tables have gotten, really. I make good use of server-side includes and CSS for all the style stuff (no more <font> tags!) so it’d still be a snap to update the colors and layout. I only nested the tables one level deep at most, so page loading isn’t a problem with today’s modern super computers.

So honestly. Tables. Not that bad.

Cheese Preferences in 12-Month Olds Named “Sam”

Ger and I were arguing one day about what kind of cheese Sam likes better, Cheddar or Swiss. Yes, we argue about these kinds of things. Before we had a kid we debated politics, philosophy, and the ontological mysteries of the cosmos, but now it’s pretty much “How much did Sam poop today?” and “Which cheese do you think she likes better?”

I figured, though, that we need not rely on pure speculation for the answer to this last question. If my Ph.D. in Psychology is good for anything, it’s determining cheese preferences in little girls. So I concocted an experiment, ran it, and wrote up the results below. Yes, seriously.

Introduction


The researcher was interested in cheese preferences among babies who are his daughter. The implications for such research include grocery shopping planning, general happiness of the population in question, and giving the researcher something stupid to write about on his blog.

A review of the baby literature yielded very little useful information. It has been found that babies prefer “Buh-buh” and “crapping themselves” but little substantive research has focused on dairy products in particular. Obviously, this highlights the tremendous value of the present research.

Given the dearth of research on the subject, the researcher was not comfortable putting forward a specific hypothesis about cheese preference. Instead, he will simply test the null hypothesis:

H0: Seriously, Sam doesn’t care.


Methodology


The study employed a simple 2×1 within-subject, repeated-measures design. The rest of this section describes the sample, stimulus materials, and procedure employed in the present research.

Sample

Uh, pretty much just Samantha. I really don’t care about anybody else’s kids, so she’s the entire population of interest.


Figure 1: The population



Stimulus Materials

In an effort to keep the research manageable, the researcher decided to limit his investigation to Cheddar and Swiss cheeses. Also, these are the only ones we ever really get coupons for. A block of each cheese was procured from the local grocery store and each was cut into many half-inch cubes for a total of 76 pieces.


Figure 2: Stimulus Materials



Each type of cheese was then placed in a special, scientifically prepared, plastic container. Okay, they’re not special containers. They were just these little Tupperware containers that we put all of Sam’s food in. But we do it scientifically.


Figure 3: Materials Preparation



Procedure

Each day the researcher or his assistant (hi, Geralyn!) would run 5-7 experimental trials. Each trial involved sitting the Subject in a high chair and placing two cheese cubes –one Cheddar and one Swiss– in front of her. It was then noted which type of cheese the Subject ate first and this information was coded on a specially prepared piece of paper. Using, uh, a pen. A scientific pen.


Figure 4: An experimental trial in progress



After the Subject made her choice, the remaining cheese cube was removed (often under protest by the Subject) and two more pieces were placed on the tray. The left/right order of the cubes was varied so that, if the Subject had a preference for the cheese on the left or right, that error variance would be evenly distributed across conditions.

Data collection was spread out over 12 days lest the Subject become really, really constipated.

Results


Table 1 shows the distribution of cheese choices made by the Subject across all 76 experimental trials. The “Observed” row shows the number of cubes of each cheese actually chosen, while the “Expected” row shows the number of cubes one would expect her to choose if there were no preference.



To test the Null Hypothesis of no preference, the researcher took the categorical data in Table 1 and conducted a Chi-Square analysis. As you may remember from your remedial math class in junior high, this is the formula for the Chi-Square test:
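In standard notation, that’s:

\chi^2 = \sum \frac{(O - E)^2}{E}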



Where O is the Observed cheese choice for each type (Cheddar or Swiss) and E is the expected choice. Filling in the values from Table 1, we get an observed Chi-Square of 1.895:
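With 32 Cheddar cubes and 44 Swiss cubes chosen (the counts reported in the Discussion below) against an expected 38 of each, the arithmetic works out to:

\chi^2 = \frac{(32 - 38)^2}{38} + \frac{(44 - 38)^2}{38} = \frac{36}{38} + \frac{36}{38} \approx 1.895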



Referencing a table of Chi-Square distributions, it is noted that with 1 degree of freedom, an observed Chi-Square of 1.895 does not reach the critical value for alpha = .05 (or even .10). Thus, the null hypothesis is not rejected.

Discussion


Well, just like my dissertation and my master’s thesis, I’ve once again failed to find significant results. Samantha appears to prefer neither Cheddar nor Swiss; she likes them both equally. During the course of the experiment she chose Swiss more often than Cheddar (44 Swiss cubes vs. 32 Cheddar cubes), but the difference was not large enough to rule out random chance as the cause as opposed to a taste preference.

Future research might investigate the question of whether the Subject more often prefers the cheese on the right- or left-hand side to the extent that this overwhelms any other preference.

So there. Can I get tenure now?

Stop the blogging!

On one of my frequent trips to work last week I heard a story on the radio about how some corporations are looking to blogs for feedback from their customers. The piece went on about how limited information could be when you get it from marketing surveys, and how many executives just LOOOVE to mingle directly (ironic?) with customers through corporate blogs. I guess the idea is that an exec could post a story or question on the company blog, like this one, and readers, like you, could post comments about it. One guy was quoted as saying that this kind of thing was much more useful than running focus groups.

Yeah. Right.

First off, I’ve been on the receiving end of scathing user feedback. When I ran FilePlanet.com we got hundreds of pieces of barely literate hate mail each day. We shut down the messageboards for the site because they were chock full of vitriol. Much of it was complaints that were either demonstrably untrue or against practices that we knew were necessary for our survival. Granted, you can still learn from that stuff and improve your communications, marketing, and help pages, but it’s not nearly as wonderful as these guys think.

The real nail in the coffin of the corporate blog as an information source is that it’s completely unstructured. A focus group or a survey you can direct, limit, and otherwise focus on issues that you know you want to find out more about. You can standardize the information that you bring in so that you can compare it to other measures and quantify it. “People love our toothpaste” isn’t nearly as valuable as “74% of people who like our toothpaste say it’s because of the taste, 10% because of the packaging, and 9% because of the scent.” Granted, mine may not be everyone’s experience and open forums like blogs might possibly be good for generating topics that you don’t know about and thus can’t ask about in the first place, but there are survey and focus group methodologies to accomplish that, too. And again, they can do it in a much cleaner fashion.

Of course, this whole thing has my motor running because it’s only a small step to go from using blogs to gather information from customers to using them to gather information on employees, where survey and focus group methodologies are already used to great effect. I just hope that companies don’t decide that a completely open-ended format is more appropriate just because it’s gee-whiz high tech and snappy.

We like you, but your brain has got to go

There’s an interesting article on Slate.com about magnetic resonance imaging (MRI) and its growing non-medical uses. This is the technology that uses, I think, magnetic waves to create an image of brain activity. It’s used for a lot of stuff, but researchers love it because it lets them examine what the brain’s doing when subjects go through any number of tasks –doing math, reading, solving puzzles, etc.

Old news, but the article soon points out uncharted territory for MRI, including lie detection, evaluating the effectiveness of marketing (which I’ve mentioned before), and screening job applicants. To quote:

The most complex, fraught, and uncertain aspect of brain imaging being discussed by neuroethicists is the potential these technologies hold for screening job and school applicants. This so far remains more a hypothetical notion than a budding industry, and no company or school has announced plans to scan applicants. Yet many ethicists feel the temptation will be overwhelming. How to resist a screen that can gauge precisely the sorts of traits – persistence, extroversion, the ability to focus or multitask – that make good employees or students?

The legality of such use is unclear. The relevant federal laws, the Americans with Disabilities Act and the Health Insurance Portability and Accountability Act (which governs privacy of medical information), allow pre-employment medical tests only if they assess abilities relevant to a particular job. An employer couldn’t legally scan for depression or incipient Alzheimer’s. Yet it’s possible an employer could legally use a brain scan to test for traits relevant to a particular job – risk tolerance for a stock-trading job, for instance, or extroversion for a sales position. An additional attraction of brain scanning is that a tester can evaluate these and other traits while an applicant performs nonthreatening, apparently unrelated tasks – like matching labels to pictures. An unscrupulous employer could fashion such tests to covertly explore subjects that would be off-limits in an interview, such as susceptibility to depression, or cultural, sexual, and political preferences.

The last bit about using MRIs to determine political preferences or other taboo topics doesn’t worry me. Those aren’t just “off-limits in an interview.” Laws (at least here in the U.S. and many other places) forbid employment decision-making on the basis of that kind of stuff no matter how you obtain the information. (Though I admit the status of an MRI scan as a medical exam and thus falling under the purview of associated laws is likely to be a thorny issue.)

In fact, this kind of thing really appeals to me on some level. How cool would it be to have Johnny T. Applicant come into a room, see a bright light, then be told he’s perfect for the job? It’d be like a frickin’ religious experience!

This is mainly because I/O Psychologists like myself have always worked under the burden of imperfect measurement and anything that can give us the kind of precision seen in other sciences is automatically intriguing. Instead of asking someone to describe how they’ve dealt with stressful situations in the past, you just describe one to him or show him a video of one and watch what happens in his brain. You could probably eliminate (or at least reduce) lying and other biases by asking questions related to personality or values and then looking not for voluntary responses from their lips or pencils, but involuntary brain activity. Neat!

In the end, though, I don’t see this as a replacement for all of the tests we currently use. Psychological constructs are, by definition, groups of behaviors that reliably covary. Behaviors –specifically, on-the-job behaviors– are what we’re ultimately interested in, and in many cases it seems like it would be easier and better to just measure them directly. An MRI isn’t going to tell you if someone understands the laws around business accounting or if he can lift a 50-pound box over his head. There are also many other relevant issues that an MRI couldn’t ever measure, like schedule availability, salary requirements, licensing, and specific job knowledge. Still, it is a fascinating application of a technology when it comes to getting at constructs that are difficult or impossible to reproduce in artificial environments –personality, values, judgment, and decision-making.

Cursive vs. freeform vs. typing: CAGE MATCH!

Back in graduate school I earned a few bucks on the side as an interviewer for the local phone megalopoly. I and two fellow grad students would gang up on people interviewing for Account Executive positions and take them through a structured panel interview. We all had to take extensive notes so that we could rate the candidate’s answers against a set of criteria, a task that required us to remember a fair amount of detail.

The note-taking was usually done by hand, but one day one of our trio brought in a laptop and used it to take notes. She was a fast typist, so in effect she ended up transcribing the candidate’s responses, word-for-word. When it came time to make our ratings, she showed us all of her copy and smiled smugly over the mounds of detail that she would have to work with in creating her ultra-hardcore scientific badass ratings.

The funny thing is, though, that I and the other guy who had taken notes longhand finished our ratings in a fraction of the time it took her. We were able to recall all the same information, like how the guy had killed his boss’s horse (in response to “Tell me about a time when you were in a stressful situation at work”) or dealt with conflict by threatening to urinate on everyone in a meeting (in response to “Tell me about a time when you had to manage conflict with other team members”), and we were able to do it off the top of our heads or just by using our hastily scribbled notes (“conflict resolution –> pee-pee, totally insane”). Apparently the gal with the laptop had been so intent on getting down every word that she hadn’t listened to any of them.

This story came to mind when I read this story on CollissionDetection.net. The article is about the decline of proper handwriting and cursive writing in school curriculums, but it also references some research that shows that the more one has to concentrate on the mechanics of writing, the slower he or she goes and the greater the number of errors. Basically, doing the unfamiliar task eats up brain power. I could see extending this reasoning to dictation and including recall as an outcome.

Some folks are bemoaning the loss of cursive handwriting and pointing to this as a reason to make it a bigger part of the public education curriculum. When I write something by hand I usually print in all caps, and I can do it pretty quickly. In fact, I haven’t written in cursive in YEARS. I tried to do it just now, and it was a mess. It looks like a retarded monkey had a seizure while holding a pen in its mouth. Still, I don’t have any trouble printing quickly, and my job still sometimes requires quickly taking copious amounts of notes.

Obviously handwriting should be taught, but I think we should supplement it with note-taking skills that break out of linear prose, like mind mapping, bulleting, shorthand, or even techniques used by professional stenographers. This strikes me as much more useful if the goal is to write quickly, as it almost always is when writing longhand these days. Anything where presentation matters is going to be typed.

Of course, it’s only a matter of time before teachers turn to their students and tell them to text-message their papers to the front of the class’s wireless server.

Categorize this!

As you may have noticed, I’ve added categories to this site. This means that each post is categorized into one or more categories, categorically. I initially shunned this feature because it so often seems pointless and leads to having a dozen categories, most of which have one or two posts associated with them while everything else goes into a kind of demilitarized blogging Shangri-la like “General” or “Daily Life” or “Misc.” But I wanted a way for people who were only interested in, say, Samantha to find an archive of stories only related to her while ignoring the rest of my inane ramblings. You can do that now.

Once I decided to do this, though, the main task before me was to define the categories. A cursory glance at my archives showed a fair variety in the subject matter of posts, but an underlying factor structure was not crystal clear. To resolve this, I endeavored to apply my six years of graduate school in psychology and do some kind of scientific data reduction. Specifically, I applied cluster analysis, which is a multivariate statistical procedure that takes a sample of observations about entities and organizes those entities into more homogeneous groups.
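For anyone who wants to play along at home, here is a minimal sketch of the same general idea (hierarchical clustering) in Python with scipy rather than the SAS procedure described below. Everything in it is hypothetical: the ratings matrix is random noise standing in for the expert ratings, and Ward’s method is just one reasonable stand-in for however PROC CLUSTER might actually be configured.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical data: one row per blog post, one column per rated dimension.
rng = np.random.default_rng(0)
ratings = rng.normal(size=(322, 7))

# Ward's method merges whichever two clusters increase within-cluster
# variance the least, which is roughly the logic a Semipartial R-squared
# plot is built on.
merges = linkage(ratings, method="ward")

# Distances for the last fifteen merges; a sharp jump here is the "elbow"
# you would look for in a fusion plot.
print(merges[-15:, 2])

# Cut the tree into four clusters, analogous to the four-category solution.
labels = fcluster(merges, t=4, criterion="maxclust")
print(np.bincount(labels)[1:])  # number of posts in each category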

To start, I took all 322 blog posts and had a group of subject matter experts rate each one on a variety of dimensions related to content, tone, voice, subject matter, reading level, word count, and the frequency with which I had used the word “poop.” These data were entered into a SAS dataset and analyzed using SAS’s PROC CLUSTER procedure. The output provided a wealth of information about the data’s possible underlying structure, but of particular interest was the Semipartial R2. Using this statistic for each of the solutions in the last fifteen iterations of the clustering procedure, I created the following Fusion Plot:



As you can see, there is a sharp dropoff in the Semipartial R2 at around 4 clusters, suggesting that to be an optimal solution to the data. Indeed, the four-cluster model explained over 85% of the variance in the original data, and this hypothesis was further supported by a dendrogram that suggests a four to six cluster solution:



Finally, a plot of the four-cluster solution in multidimensional space using canonical variables pretty strongly suggested four (or possibly five) clusters:



Given these scientific results, I arrived at the following four categories for my blog:

The “General” category could have been further broken down upon rational review of the data, resulting in smaller categories like Gaming, Books, Movies, Family News, and Stupid Observations, but I decided that none of those individual topics would be of interest enough to most visitors to warrant splitting them out.

You guys are totally buying that I did all this work, right? Right? Pffftt.

Anyway, the way Movable Type handles category archives has me pulling my hair out a bit. I want to have date-based archives, too, but I want category-specific archives for the Photo of the Day. Having that, though, kind of messes up the other archives so that you can only browse individual entries (like through a permalink) within a category and not across categories. It really ticks me off, so if you know of a solution let me know. If I can’t figure anything out, I’ll probably end up creating a separate blog for the Photo of the Day, output it (and its archives) to a static file, and include it in this main site with server-side includes. What a pain.

Finally, you may also notice that I changed the layout of blog entries. It occurred to me that there were three types of elements to a blog entry: those about the entry (the date, the title, the author), the entry itself, and those related to what you can do in response to the entry (link to it, comment on it, find similar entries). So I separated them. Title and date are at the top (I trimmed author, since I’m the only one on this site), and then put the comment link, the permalink, and the category archive link at the bottom. The latter also makes sense in that you don’t force people to scroll back up in order to comment or get a permalink.

So, hope you enjoy the fruits of my labor. There are more tweaks to come, as well as a total redesign if I can get around to it.

Yay on me

I’m going to break my arm patting myself on the back so hard, but I thought I’d share two pieces of good news on the professional front. First, I got two –two!– submissions accepted to next year’s annual Society for Industrial/Organizational Psychology conference. One, entitled “The Importance of Test Administration Characteristics in Forming Applicant Reactions,” examines what about pre-employment testing can tick people off when they have to go through it to apply for a job. The second presentation, entitled “Developing an Offline Testing System That Still Benefits from Information Technology,” is part of a practitioner forum looking at how information technology has spurred changes in I/O practice, particularly selection. My piece looks at the employment testing infrastructure we designed and use at my current employer. Gripping stuff, eh?

The second piece of news is that I was recently elected to the office of “Vice President – Web Publications” for The Personnel Testing Council of Southern California, a local professional organization for I/O psychologists. And by “elected” I mean “nominated myself and ran unopposed.” The position is essentially that of webmaster for their site, but it’s manageable and should be a great way to meet folks and network.

So, you know. Yay.

The handicap inaccessible Internet: A-OK!

Here’s a shocker: The Americans With Disabilities Act doesn’t apply to the Internet. This is the federal law that requires employers and other public entities (like public parks, restaurants etc.) to provide “reasonable accommodations” for people with disabilities. Classic examples of these kinds of accommodations include installing wheelchair ramps, changing work schedules, having a waitress read a menu aloud, etc.

That’s why it seemed like a bit of a non sequitur when an advocacy group named “Access Now” sued Southwest Airlines on behalf of a blind man who …couldn’t read its website. The guy used software that would take text from websites and turn it into speech, but the way Southwest’s website was set up was apparently incompatible with such screen readers.

Now, to me, “call Southwest on the phone” seems like it would be a pretty reasonable accommodation for a blind person incapable of reading a website. The US 11th Circuit Court of Appeals didn’t really focus on this question, though. Instead, they found that the Internet is not a “place,” which means that it’s not included under the current language of the Americans with Disabilities Act. Read the whole case summary for yourself.

So, not only do I not have to worry about not including “alt” tags on all jmadigan.net’s images, I don’t have to install wheelchair ramps, either. Hooray.

It’s like a low-carb Ph.D.

Saw this article in USA Today (no, not kidding) about how some universities are offering graduate degrees in science and mathematics, but with an emphasis in business.

Many students strong in science and math face similar career dilemmas, fueling a stampede into places like law school just as global wars are being waged in biotechnology, cryptology, nanotechnology, forensic chemistry, environmental science and the like.

That has led to the creation of a new master’s degree, the professional science master’s (PSM), which promises to be the hot degree no one seems to have heard of — yet. It’s so new that its first graduates were in 2002. Fewer than 400 students have earned a PSM. But the programs are expanding rapidly and are now offered at 45 universities in 20 states.

The PSM is being called the MBA for scientists and mathematicians. It’s an education aimed at future managers who will be able to move comfortably in the business of science, from a meeting about enzymes to another about intellectual property rights, all the while understanding the goal is not a scientific journal article but marketable products.

The article goes on to explain how the PSM (I guess it’s obvious why it’s a Professional Science Master’s and not a Professional Master of Science, eh?) is a viable and more practical alternative to Ph.D. degrees, which I think makes some sense. You don’t need a Ph.D. for every position in an R&D department any more than you need an MBA in any other one. Some, yes, but not all.

Unfortunately the article kind of devolves into a fluff piece that sounds more like an infomercial for the degree rather than any kind of real reporting. They cite unspecified “experts” (always a red flag) about how much the world needs people with these degrees, and liberally quote the backers of the program, faculty, and people who have graduated with one. I don’t think there’s any dissenting opinion, or even a clue that they tried to find one. Heck, they don’t even really talk about the curriculum or what specifically these people study or what standards they’re held to.

Oh well. We Ph.D.s don’t have anything to worry about, right? Right?

It’s not an office…

People have been talking about telecommuting for quite a long time. For a lot of people, I can see how it makes sense to work from home using helpful information technology like the Internet and the phone to stay in touch. I’ve often thought, though, that I like getting out of the house and interacting with people, or at least being in their company. That’s why I found this article about “work clubs” pretty interesting.

The idea is simple: Telecommuters get to …get out of the house to work. Not in the office, heaven forfend, but at a swank and stylish “work club” that’s kind of like a dance club, but with spreadsheets instead of dancing. Here’s what the Quote Monkey came back with:

The goal of Gate-3 Workclub in Emeryville, Calif., is to create a new kind of community where neighborhood people can “work and network and hang out with friends,” founder Neil Goldberg says.

…The 40 or so members of the Wi-Fi-equipped club drop in for a few hours a week. They rove around, spending time in the common areas or cafe, a few hours working in a hush zone, or meeting with a colleague or client in a conference area. They make private phone calls in a “cone of silence,” (aka phone booth), have support staff make copies, overnight a package or get a laptop repaired. Members can bring in their dog (if he’s quiet and passes an interview), and bring the baby (and they’ll deal with any crying).

That’s just so …bizarre, and sounds exactly like something you’d expect from the city (San Francisco) that gave us so much of the Internet boom. You don’t work in an office, but you pay to work at a “work club”. It’s like being in the office, but you don’t have to deal with that obnoxious guy two cubes over. Instead, I’m guessing you’ll have to deal with some other obnoxious, latte-swilling yuppie. And she with you.

I’m not convinced that this would work unless they limit membership by requiring credentials or making it prohibitively expensive for most people. And at that point, it really becomes irrelevant to most of the world.

OBEY THE SURVEY

When my friends and I were kids one of our favorite jokes was to go up to someone and say “Can you answer a simple yes or no question?” When the victim said that they could, we would ask, “Do your parents know you wet your bed?” Hilarity ensued, and sometimes a beating.

Turns out that this little trick is amusing not only to witless children, but also to members of the Republican National Committee. As evidence, I present a survey sent to one of my co-workers. The survey has all kinds of horribly worded questions (they word items so as to encourage people to respond in ways that support their agendas), but the multiple choice question that asks, “Will you join the Republican National Committee by making a contribution today?” takes the cake:



The first available choice is pretty straightforward. The second is sly, if a bit disingenuous. The last one, though, is the one that reminds me of those precious childhood moments with the “Can you answer a simple yes or no question?” challenges. Draw your own conclusions.

“Do as I say, not as I do” doesn’t cut it anymore

This came up last week and I’ve been meaning to comment on it. According to this article, the U.S. Supreme Court ruled that a disabled man was able to sue a local court house under the Americans With Disabilities Act (ADA). The guy was a paraplegic and was to appear for minor traffic charges, but the court in question had no elevator. He literally couldn’t make it to the court unless someone carried him, which he thought was dangerous (and probably humiliating). So he didn’t show up and was arrested for something he was physically incapable of doing. He sued and appealed to the Supreme Court until he won.

When I heard this story, it struck me as a no-brainer. I’m familiar with the ADA, and thought that there was no question that it required courts (and any other employer or public building) to take reasonable measures to make themselves accessible to people with disabilities. Indeed, it does. But what was weird up until this point is that while the law required courts to be accessible, there was no legal mechanism by which people could enforce it. If a court, like the one in question, broke the law and made it impossible for a person to access the courts, there was no recourse.

In fact, this quote from the linked article above struck me as particularly absurd: “Most notably, the Supreme Court ruled three years ago that states cannot be sued by their own employees for failing to comply with the ADA’s guarantee against discrimination in the workplace.”

Is it just me, or is that insane? How nice of the government to excuse itself from its own laws, eh? I’m surprised it took this long for a court case to surface that called them on it.

Why not study things people actually care about?

I was listening to the news this morning and they had a piece about how many medical researchers are beholden to the makers of the products they’re testing. Furthermore, the major medical journals who publish this research are sometimes unable to deal with this potential bias. If the makers of Lipitor, for example, are paying researchers to study its effectiveness at reducing cholesterol relative to a competing drug, then that raises all kinds of questions about objectivity. Those questions can be dealt with, of course, and they need not mean that the research is worthless. You’ve just got to have safeguards and full disclosure to everyone, including the readers.

Interesting as all that was, I was more interested in the question of why we don’t do tests of specific products in I/O Psychology. If the medical field can conduct scientific research on name-brand drugs and get it published in top-tier journals, why don’t we study off-the-shelf products used in the areas of executive development, selection, and training?

I’m not talking about measuring So-And-So’s Five Factor Model of Implicit Leadership or a meta-analysis of studies looking at conscientiousness. I want a team of crack psychologists to study Stephen Covey’s 7 Habits of Highly Effective People training and tell the world if it really does do what it says it does. Let them use lab rats if they need to. I want those same, objective scientists to study the jaunty Impact Hiring system or the use of the dreaded Myers-Briggs.



These kinds of studies are being done (well, some of them; I’m pretty sure nothing scientific has come within a hundred yards of a Covey seminar), but they’re being done by the test vendors and the consulting firms that sell them. Let me ask you: would you sooner trust a study on the effectiveness of St. John’s Wort put out by Walgreens or one put out by the Journal of the American Medical Association?

I/O psychologists really need to step up and wade in the mainstream more, even if it is polluted.

The science of trolling

Ever come up with a great idea for a study and then have someone else beat you to it? I’ve mentioned before how some people morph into complete half-wits when they go online. They just do things that they would never do on the phone or much less face-to-face. Furthermore, I’ve always wanted to study this phenomenon scientifically. Why do they do it? How do we mediate it? People’s stupidity fascinates me when it’s that spectacular.

Well, someone named Michael Tresca beat me to it in a study entitled “The Impact of Anonymity on Disinhibitive Behavior Through Computer-Mediated Communication.” It’s really fascinating reading. He looked at 484 USENET posts (it’s like an Internet message board) and coded them for all kinds of antisocial, nitwit behavior. To quote:

The purpose of this study is to determine if experience with computer-mediated communication will alter a computer user’s behavior and perceptions. Specifically, this study will test the effect of objective anonymity and experience upon disinhibitive behavior in computer-mediated communication.

In other words, they wanted to know why people turn into smacktards once they get online.

Tresca goes on to define the nature of online anonymity and the necessary conditions (e.g., “lack of visual appearance, the flexibility of a label that is different from the user’s normal persona, and relative protection from physical and social repercussions”) for it to exist and impede normal inhibitions. He predicts that what I call the Smacktard Quotient (SQ) should decrease as either perceived anonymity decreases or experience with the ‘net increases.

(Interestingly, he notes that “good writers and more literate people have the same social advantage that physically attractive people have in face-to-face over computer-mediated communications,” which is a point I hadn’t considered that way before. Too bad he doesn’t directly test it.)

Unfortunately, the study’s results don’t pan out. They don’t find strong differences between high, medium and low anonymity and the Smacktard Quotient. Neither do they find differences for experience, though it’s probably because they just measured it with number of posts (thus an Internet veteran who recently joined this particular newsgroup and has only made 5 posts would be counted as inexperienced). The problem I see is that the study uses only one newsgroup as its source of information and measures anonymity from things the users voluntarily adjust (e.g., including a real e-mail address, phone number, etc. in their post).

A better study design would have been to look across multiple groups/boards/whatever, each with higher or lower anonymity requirements. Does a board that requires registration have a lower average SQ than one that allows users to post anonymously?

From a practical standpoint, the research (particularly its underlying theory, which I don’t think was tested well but still makes sense) still suggests a few things that we already know cut down on the level of “inflammatory and informational disinhibition.” Things like:

  • Requiring a valid e-mail address
  • Account registrations
  • Posting I.P. addresses
  • Putting personal information on file
  • Moderating posters’ contributions until they reach a certain level of participation
  • Karma or other rating systems from fellow board members

So while this stuff may get a kind of “well, duh” reaction from most, it IS nice to see it being studied scientifically.

And finally, if you’ll pardon the vulgarity, Penny Arcade summed it up quite succinctly with the following equation: Normal Person + Anonymity + Audience = Total Fuckwad