Method

The model in Figure 1 served as the starting point for data gathered in five two-hour focus group sessions with a large health care payor group. Each session was held exclusively with a single constituency: marketing staff (users), "power" users who were systems experts, or information systems staff (service providers). Participants included current, soon-to-be and power users, internal data warehousing systems staff and middle management. The groups were small, typically four or five members, and each session lasted the full two hours. This small group size, combined with one-on-one follow-up interviews where necessary, ensured that all focus group questions were addressed and that all members had time to answer them. In addition, top management agreed not to attend the focus group sessions to avoid biasing responses. An experienced focus group facilitator, one of the team members, worked to elicit responses from all focus group participants. For the most part, participants were enthusiastic supporters of this research effort and had an interest in seeing the CDW succeed in the organization. Appendix 1 lists the questions asked of all focus group participants.

Prior to using these questions in the focus group sessions, the research model was pre-tested via five one-hour interviews with managers from the health care organization. Meeting notes were analyzed, and the team determined that the initial exploratory questions from the Payton and Ginzberg model were sufficient to elicit comments relating to all areas of the model. The questions were also broad and flexible enough to allow the team to uncover additional constructs and, as will be seen, some unexpected results.

All focus group sessions were recorded and transcribed by both a professional writer on staff at the health care organization and a member of the research team. The research team member provided a transcript within 48 hours, and the professional writer worked from simultaneous tape recordings. Names and titles were withheld from all transcripts before they were given to the data coders. Follow-up interviews with top management and other key users were conducted for clarification.

The research team analyzed the data in the transcripts and then developed coding dictionaries (Appendix 2) to capture the marketing- and information systems-related constructs underlying the responses to the questions in Appendix 1. Two independent coders were used, and a simple percentage agreement among constructs was calculated initially. Next, the initial coding dictionary was refined, and factors were eliminated and/or combined where appropriate. This revised coding dictionary was given to three new, independent coders for an additional analysis beyond simple percentage agreement, and a coefficient of interrater reliability among these three coders was calculated. To facilitate the calculation of interrater reliability, only three of the five focus group transcripts were analyzed (marketing managers, active marketing users and newly trained marketing users), and only the results of these three groups are reported below. These three transcripts were selected because they came from the three groups composed solely of marketing personnel. The results of analyzing these transcripts are summarized in Table I.
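As a minimal sketch of the first-pass reliability check, assuming each coder assigned one construct code to the same ordered list of comments (the construct labels and data below are hypothetical illustrations, not the study's coding dictionary):

    # Simple percentage agreement between two independent coders.
    # Assumes both coders coded the same ordered comments.
    def percentage_agreement(coder_a, coder_b):
        if len(coder_a) != len(coder_b):
            raise ValueError("coders must code the same comments")
        matches = sum(a == b for a, b in zip(coder_a, coder_b))
        return matches / len(coder_a)

    coder_1 = ["data quality", "trust", "training", "data quality"]
    coder_2 = ["data quality", "trust", "data quality", "data quality"]
    print(percentage_agreement(coder_1, coder_2))  # 0.75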

To analyze the focus group data, the team adopted the case analysis method prescribed by Yin (1994), which involves looking for patterns in the data and adjusting the analysis accordingly. This flexible, iterative process allowed the team to revise the implementation model as findings emerged. For example, although the initial questions (Appendix 1) did not focus on trust or the integration of marketing needs, these factors proved to be important in this case study context, and the model was changed to reflect the patterns that emerged. To help explain these changes, it was then necessary to consult the relevant marketing literature.

Scott's pi (π), also known as the bias-adjusted kappa, was calculated for interrater reliability. Because the team lacked any a priori expectations about the marginal proportions, this method of calculation was chosen to account for the possibility of chance agreement among coders. Although the coding elements (categories) were established in advance, the verbal contributions (focus group comments) were free to vary and "fall" into any cell within the coding table. π is not influenced by the frequency with which categories are used and is calculated as:

π = (Po - Pe)/(1 - Pe)

where Po is the observed proportion of agreement among coders and Pe is the proportion of agreement expected by chance.
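As a minimal sketch of this calculation for two coders (the study's three-coder analysis could extend it by averaging over coder pairs or by using Fleiss' multi-rater generalization; the codes below are hypothetical):

    # Scott's pi for two coders over the same ordered comments.
    # Pe pools both coders' category frequencies, so the statistic is
    # not influenced by how often each category is used.
    from collections import Counter

    def scotts_pi(coder_a, coder_b):
        n = len(coder_a)
        p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n  # observed agreement
        pooled = Counter(coder_a) + Counter(coder_b)             # joint category counts
        p_e = sum((c / (2 * n)) ** 2 for c in pooled.values())   # chance agreement
        return (p_o - p_e) / (1 - p_e)

    coder_1 = ["trust", "data quality", "trust", "training"]
    coder_2 = ["trust", "data quality", "training", "training"]
    print(round(scotts_pi(coder_1, coder_2), 3))  # 0.619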

Additional justification for π can be found in Currall et al. (1999) and Neuendorf (2002).

While Currall et al. (1999) offered a framework for developing coding categories and Neuendorf (2002) illustrated the mechanics of the statistical kappa calculations, Scullen et al. (2003) offered guidelines for interpreting the magnitude of agreement; these guidelines recommended the following ranges: 0.41-0.60 indicating moderate agreement and 0.61-0.80 indicating substantial agreement.
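As a small illustration, these benchmarks translate directly into a lookup (only the two ranges reported above are classified; the fallback label is ours):

    def agreement_label(pi):
        # Benchmark ranges from Scullen et al. (2003) as cited above.
        if 0.61 <= pi <= 0.80:
            return "substantial agreement"
        if 0.41 <= pi <= 0.60:
            return "moderate agreement"
        return "outside the reported benchmark ranges"

    print(agreement_label(0.619))  # substantial agreement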
