ROLE CONFLICT AND FEEDBACK-SEEKING BEHAVIOR AS MODERATORS IN 360-DEGREE ASSESSMENTS

STACY L. JACKSON
Olin School of Business
Washington University in St. Louis
St. Louis, MO
Tel: (314) 935-6338
Fax: (314) 935-6359
Email: [email protected]

I wish to thank Bob Dipboye, Bill Bottom, Steve Currall, Mickey Quinones, Ron Taylor, Jeff Edwards, and Carlla Smith for their helpful guidance and comments as I developed earlier versions of this manuscript.

ABSTRACT

The current study investigated factors that explain the relationships between self and other assessments within a role theory framework. It specifically investigated the moderating effects of person-role conflict and feedback inquiry on the relationship between self and other (peer, superior, and subordinate) responses to a 360-degree assessment. Data from 350 participants produced 839 assessments (an 86% return rate). Results indicated limited support and the need to investigate differences beyond the profile level of agreement.

The Moderating Effects of Role Conflict and Feedback-Seeking Behavior in 360-degree Assessments

Organizations have recently implemented various structural changes in order to better align with their customers, integrate technological innovations, and focus on their core capabilities. These structural changes (e.g., flattened, team-based organizational hierarchies) have created a need for innovative HR processes. For example, flatter organizations have created larger spans of control for managers. These managers are often required to assess subordinate performance although they may spend little time observing their work (Murphy & Cleveland, 1995). One promising process, designed to address such changes, has been the increased use of evaluations (in the form of survey assessments) from non-superior sources (e.g., peers). Results from these multiple source assessments (MSAs) may be integrated into a variety of development activities (Church, 1995), the facilitation of organization change (Tornow, 1993), and even pay decisions (Bernardin & Beatty, 1987).

Integration of MSAs into such processes is complicated by the lack of agreement among sources (Harris & Schaubroeck, 1988). A lack of agreement, alone, is not problematic. In fact, most companies use MSAs to achieve different perspectives. However, an inability to explain why sources disagree is problematic. Unexplained disagreement is especially frustrating when organizations tie MSA outcomes to decisions requiring evaluative closure, such as compensation. It also frustrates focal persons (the individuals receiving feedback) because of expectations that minimizing disagreement will improve performance (Church, 1997). This is especially confusing in MSAs using larger numbers of sources (e.g., 360-degree MSAs where superior, peer, and subordinate responses are compared to self-assessments).

For some time, researchers have called for an investigation of factors that might explain MSA disagreement (e.g., Schneier & Beatty, 1978). Lawler (1967: 375-376) specifically asked, "what are the factors associated with managers being evaluated differently by their superiors, peers and selves?" Past research has typically investigated the degree of MSA agreement (e.g., Fox & Dinur, 1988) and outcomes of MSA agreement (e.g., London & Smither, 1995), but little research has sought to identify factors that explain the relationships between self-assessments and those of peers, superiors, and subordinates.
THEORETICAL FRAMEWORK AND HYPOTHESES

This study investigates MSA relationships within the framework of role theory (Kahn et al., 1964). Researchers (e.g., Wohlers, Hall, & London, 1993) have referenced, but not explicitly integrated, role theory to explain differences among sources. Role theory proposes how others (role senders) prescribe, perceive, and evaluate focal person behaviors. Similarly, MSAs formally ask for self and other evaluations of focal persons. Role theory is behaviorally focused, as are MSAs. Given Biddle and Thomas' (1966) definition of roles as "behaviors characteristic of one or more persons in a context" (p. 58), one could easily argue that (by definition) the behavioral information requested in MSAs is role information. Finally, role theory provides a needed framework of specific factors that likely influence MSA outcomes, specifically suggesting factors that may moderate the relationship between focal person and role sender evaluations. This study investigates the effects of two such factors suggested by role theory: role conflict and feedback-seeking behavior.

Role Conflict (person-role conflict)

Past research indicates the potential presence of role conflict among MSA sources. For example, Schneier and Beatty (1978) found significant differences between the perceived importance of role behaviors held by superiors and that held by focal persons, peers, and subordinates. They suggested their results "point to fundamental difference in role prescription across levels" (p. 133). Likewise, Tsui (1984) found importance weights on a set of criteria differed widely among superiors, peers, and subordinates. Some researchers have further suggested that role conflict may influence MSA outcomes. For example, Mount (1984) explained MSA discrepancies as possibly due to disagreement in job requirements or expectations of behavior standards. Schneier and Beatty (1978: 133) specifically proposed that differences may occur in source ratings "to the extent that there are fundamental differences in role prescription across levels." Tsui and Ohlott (1988) found initial support for these suggestions. They found a significant correlation (p < .05) between experienced role conflict and self-superior agreement on one of the seven roles they examined (interpersonal competence). They also found a significant relationship between experienced role conflict and self-subordinate agreement on another role (leader role behavior). Although their measures differ from typical MSA measures, their results provide an indication that role conflict may relate to MSA source agreement. However, no direct tests exist for the effects of role conflict on MSAs.

Although researchers have indicated role conflict may help explain MSA results, they have not explicitly stated the type of role conflict that would best be investigated. King and King (1990) have summarized its several forms. Common forms of role conflict include intrasender conflict (conflicting prescriptions from one role sender), intersender conflict (conflicting prescriptions from multiple role senders), and person-role conflict (conflict between self and other prescriptions). Only person-role conflict (PRC) focuses on the incongruities between the expectations of the role sender and those of the focal person (paralleling the MSA scenario). Measurement of PRC can be solely from the focal person perspective (experienced PRC) or a combination of the role sender and focal person perspectives (actual PRC).
This latter form best represents the MSA scenario through a comparison of focal person and role sender perceptions of the importance of expected role behaviors (Johnson & Graen, 1973). The current study directly tests the effects of actual PRC ("PRC") on the relationship between focal person and role sender MSAs. Role theory (Kahn et al., 1966) indicates senders will tend to notice and focus on those roles or behaviors they consider important. It is assumed this increased focus will lead role senders to be more accurate in their assessments of those behaviors. Likewise, focal persons will concentrate on exhibiting behaviors they consider relatively important. To the extent that role senders and focal persons agree regarding the importance of certain role behaviors, there will be less PRC. To the extent that they disagree, there will be greater PRC. To the extent that PRC is minimal, role senders will attend to the same behaviors that focal persons are concentrating on exhibiting. This latter condition will lead to a stronger relationship between role sender and focal person evaluations.

Hypothesis 1: Person-role conflict. The relationship between the focal person's self-assessment and role sender assessments of the focal person will be moderated by the level of PRC present in those dyads (i.e., self-peer, self-subordinate, self-superior). Specifically, dyads having low PRC will have self-assessments more congruent with role sender assessments than will dyads having high PRC.

Feedback-Seeking Behavior (inquiry strategy)

Role theory portrays several types of feedback cycles. The feedback cycle of specific relevance to this study is the cycle initiated by the focal person. Kahn et al. (1966) proposed that a "type of feedback occurs when the focal person attempts to initiate communication with his role senders about...performance" (p. 281). The likely targets of this type of feedback-seeking behavior (FSB) are the same as those typically participating in MSAs: peers, subordinates, and superiors (Greller & Herold, 1975). In fact, many firms implement MSAs as an attempt to formalize FSB. Kahn et al. proposed FSB could lead to changes in the perceptions of the role sender and the focal person. Similarly, MSA researchers (e.g., Smither et al., 1995) have suggested FSB may increase agreement among MSA sources. Although research (e.g., Tsui, Ashford, St. Clair, & Xin, 1995) has indicated FSB is related to assessments from some sources, no study has directly investigated its effects on the relationship between self and role sender assessments.

Ashford and Cummings (1983: 382-383) have proposed that FSB occurs through two strategies: monitoring and inquiry. Feedback monitoring "includes attending to and taking in information from the environment. It entails observing the situation and the behaviors of other actors for cues useful as feedback." It is covert and likely affects only the focal person's MSA. Rather than focusing on the monitoring strategy, this study focuses on the feedback inquiry strategy (FBI) because it is more overt and therefore more likely to affect both role sender and focal person perceptions and MSA outcomes. FBI "is the individual's attempt to actually increase the amount of personally relevant (information)...by directly asking actors in that environment for their perception and/or evaluation of the behavior in question" (Ashford & Cummings, 1983: 385).
FBI increases competence, assists in self-evaluations, reduces uncertainty, and corrects perceived errors in role sender evaluations (Ashford & Cummings, 1983). The accomplishment of these goals would influence the relationship between self-assessments and role sender assessments. Specifically, the validity of role expectations and self-evaluations should increase when focal persons seek feedback from those who will complete MSAs. Role senders who give feedback are also more likely to clarify role expectations, increasing the validity of their own evaluations. Therefore, FBI should better align role expectations and perceptions of behavior as well as strengthen the relationship between focal person and role sender assessments.

Hypothesis 2: Feedback inquiry. The relationship between the focal person's self-assessment and the role sender assessments of the focal person will be moderated by the degree to which the focal person reports using FBI. Specifically, focal persons who report high levels of FBI will have self-assessments that are more congruent with role sender assessments than those reporting low levels of FBI.

Concerns Regarding Past Studies

In addition to investigating the above hypotheses, this study expands on past studies by overcoming certain analytical and methodological concerns. Each concern is discussed below and summarized in Table 1.

_______________________
Insert Table 1 about here
_______________________

Focus on Only One Dyad. Several studies have focused on only one MSA dyad (predominantly either the self-superior or self-subordinate dyad). No moderator studies, beyond Harris and Schaubroeck's (1988) meta-analysis, have focused on more than one MSA dyad. Effects of moderators on the relationship between source assessments have therefore been limited to cross-study comparisons. In addition, effects found in single-dyad situations may not extrapolate to situations where all typical MSA sources have assessed the focal person. Therefore, the growing use of MSAs requires research that focuses on all typical MSA sources (superior, peers, and subordinates).

Correlate versus Moderator Approach. Table 1 identifies past correlate and moderator studies. Correlate studies provide insight into the potential effects of factors on MSA source agreement. They demonstrate that the pattern of MSA agreement and that of an external criterion have similar slopes. However, they do not directly test whether the relationship between MSAs truly differs depending on the value of the third variable. Theoretically, most past studies present focal person and role sender evaluations as separate variables yet choose to represent them as one agreement variable. A more appropriate approach is a direct test assessing the presence of a significant moderator-variable interaction (Stone & Hollenbeck, 1984).

Inappropriate Use of Difference Scores. Several studies have inappropriately represented MSA agreement as a difference score (see Table 1). The use (or misuse) of difference scores has been the subject of much debate recently (e.g., Edwards, 1994; Tisak & Smith, 1994). Difference scores are appropriate if theoretically justified and if alternative measures do not more accurately test one's hypotheses. However, role theory treats focal person and role sender assessments as separate variables. Combining such measures (creating a difference score) imposes overly restrictive constraints. Moderated multiple regression provides a theoretically consistent approach; a minimal sketch of this test follows.
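To make the advocated analysis concrete, the following sketch shows a direct moderation test of the kind described above. None of this code comes from the original study; the file name, column names, and choice of Python libraries are illustrative assumptions.

    # Minimal sketch of a direct moderation test (Stone & Hollenbeck, 1984).
    # Hypothetical input: one row per focal person, with the self-rating,
    # the role sender (here, superior) rating, and the proposed moderator.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("msa_ratings.csv")  # hypothetical file

    # Step 1: main effects only.
    step1 = smf.ols("self_rating ~ superior_rating + prc", data=df).fit()
    # Step 2: "a * b" expands to a + b + a:b, adding the product term.
    step2 = smf.ols("self_rating ~ superior_rating * prc", data=df).fit()

    # Moderation is supported by the increment in variance explained when
    # the interaction term enters (the Delta R-squared of moderated
    # regression tables), not by correlating a difference score with a
    # third variable.
    delta_r2 = step2.rsquared - step1.rsquared
    print(delta_r2, step2.pvalues["superior_rating:prc"])

The key design point is that the focal person and role sender evaluations remain separate variables, and the moderation question is answered by the significance of the product term.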
The criticism here is not with difference scores per se (as is discussed later), but with using them as a substitute for moderated regression without theoretical support.

Lack of Internally Consistent Scales. Several studies have used profile measures (without testing internal consistency) or one-item scales to assess employee effectiveness (see Table 1). The most serious problem with profile measures is their tendency to obscure important relationships (Edwards, 1993). Also, one-item scales do not reflect MSAs in practice.

Self-Selection of Focal Persons. Some studies have allowed focal persons to self-select into participation. This practice is quite different from typical MSA implementations, which require participation. Individuals who volunteer for such a program may already have perceptions similar to those of their MSA sources, may be high performers, or may have an unrepresentative interest in self-development. This study includes all relevant organizational members in the MSA.

Anonymity. The fourth methodological concern (e.g., Baird, 1977) relates to the almost universal practice (Van Velsor & Leslie, 1991) of reporting source results in a summarized fashion (versus identifying which individual gave which response). There are two critical reasons why ensuring anonymity of responses is important. First, anonymity of responses is an essential element (Locke, 1986) of the MSA which research should replicate (unless its practice is the sole focus of the research). For example, organizations typically take visible steps to ensure anonymity, such as having MSAs mailed to outside vendors. Second, without anonymity, subjects may give biased responses due to fear of reprisal. In support of this argument, Antonioni (1994) recently compared anonymous assessments from subordinates to assessments from subordinates whose responses were not anonymous. He found that without anonymity, subordinates significantly inflated their ratings.

Categorization of Continuous Variables. The last methodological concern is the categorization of potential moderators that are continuous variables (Pedhazur, 1988). The worst implication of such an approach is that it can conceal otherwise noticeable effects.

METHOD

Participants

Participants held a set of positions within a gas pipeline subsidiary of a Fortune 500 company. The company had recently come into compliance with a federal order that focused on de-coupling the energy industry's primary segments (e.g., separating pipeline and utility ownership). The government positioned these orders as an effort to de-regulate the industry, creating a more competitive market. In response, many organizations are integrating the remaining aspects of their businesses (e.g., sales, marketing, etc.). They assume efficient integration will reduce fixed costs and increase revenue (further differentiating services in an increasingly competitive market).

During the four months prior to data collection, I worked with the top seven executives in the company (including the CEO and COO) to design changes to the organization's processes and technological infrastructure. A task force of 60 individuals (segmented into seven teams) worked to operationalize the strategic design. Early in the process, it became clear that the employees would need training for successful implementation. However, current competence levels were generally unknown. I presented the 360-degree MSA as a tool to facilitate assessment.
The COO and HR Vice-President agreed that it would provide a measure of current competence and an opportunity to give employees feedback. Participants were actively involved and guaranteed job security. The CEO assured employees (in writing and in person at several meetings) that the changes would allow them to take on more business rather than reduce human resources.

We selected 151 focal persons (27 managers and 124 professionals) based on whether their job responsibilities focused on the order fulfillment process. With the assistance of HR, I selected one superior, three peers, and three subordinates to assess each focal person. All participants had been in their positions for more than six months. Participants represented a variety of functions (e.g., sales, marketing, accounting, operations, etc.) and all levels of the organization, ranging from entry administrative levels to senior executives (see Table 2).

_______________________
Insert Table 2 about here
_______________________

I distributed 977 self, peer, superior, and subordinate assessments to 350 participants. Focal persons returned 136 of the 151 self-assessments (all 27 managers and 109 of the 124 professionals). Role senders returned 703 of the 826 superior, peer, and subordinate assessments. In total, respondents returned 839 of the 977 assessments (an 86% response rate).

Materials and Design

The questionnaire contained 66 items from Career Architect© (a widely used instrument developed in part by Michael Lombardo, formerly of the Center for Creative Leadership). No past research or theory specifically guided the development of the items, nor did any alternative competing models readily exist regarding the structure of the items. As an example of an item and its definition, Dealing with Ambiguity was defined as "can shift gears comfortably; can decide and act without the total picture; can comfortably handle risk and uncertainty."

All participants received instructions in person and in writing. All assessors attended one of six sessions that focused on four things. First, we discussed the dual purpose of the survey: to give individuals developmental feedback and to develop training in preparation for organizational changes. Second, assessors received an instruction sheet (read aloud) describing the rating scales. Third, one of the senior executives (e.g., the COO) confirmed the importance of maintaining anonymity. Lastly, I described typical rating biases in hopes of raising awareness and minimizing bias. Question and answer sessions followed. After the discussion, participants received sealed packages with the surveys they were to complete. I gave the same instructions in one-on-one meetings to any individuals who did not attend the sessions.

Measures

Effectiveness of Focal Person. Role senders assessed focal persons (who also completed self-assessments) using a five-point Likert scale for each of 66 behaviors. Participants also received definitions for each scale anchor. The instructions read: "Use the following scale to evaluate how effective the individual is (in your opinion) at performing the described competency."

1 = Not Skilled; 2 = Minimally Skilled; 3 = Skilled; 4 = Talented; 5 = Towering Strength; N = Not Observed

Effectiveness measures for peers and subordinates were calculated as the overall average for each of those source groups. For example, if three peers rated the effectiveness of a focal person, their combined average rating represented the peer effectiveness rating.
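As an illustration of this aggregation rule, a minimal sketch follows. The file and column names are hypothetical assumptions, not artifacts of the study.

    # Minimal sketch of source-level aggregation: ratings from multiple
    # peers (or subordinates) of the same focal person are averaged item
    # by item before any further analysis.
    import pandas as pd

    # Hypothetical input: one row per (assessor, focal person) pair, with
    # a "source" column (self, superior, peer, subordinate) and item columns.
    ratings = pd.read_csv("assessments.csv")
    item_cols = [c for c in ratings.columns if c.startswith("item_")]

    source_means = (
        ratings.groupby(["focal_id", "source"])[item_cols]
        .mean()          # averages the up-to-three peers or subordinates
        .reset_index()   # self and superior rows pass through unchanged
    )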
Instructions indicated participants were to rate only those competencies they had observed. Ninety percent of the participants responded to 44 of the 66 items. A review of the data indicated that 90% response was a natural cutoff (the remaining items were assessed by 50% or less of the sample). The deleted items had a common theme of managerial competence (e.g., directing subordinates, hiring and staffing) and were likely not applicable because over half of the focal persons were not managers.

Person-Role Conflict. Focal persons and role senders assessed role expectations using a five-point Likert scale for the same behaviors assessed for focal person effectiveness. These ratings were averaged for peers and subordinates. The instructions read: "Use the following scale to evaluate the degree to which you believe this competency is required of the individual's job. Please consider both the competency's importance and the frequency it is required."

1 = Very Low; 2 = Low; 3 = Moderate; 4 = High; 5 = Very High

In order to create a role-specific measure of PRC, measures were taken at the item level and averaged across factor (the factor analysis is discussed below) and dyad. Measures of PRC were calculated as the summation of the square roots of squared item difference scores (i.e., absolute differences; Smith & Tisak, 1993). The computation of PRC compares importance measures from focal persons with those of role senders. This directly measured whether role conflict existed between two sources (e.g., self and superior) on a specific type of behavior (e.g., interpersonal competence). Past research (e.g., Keeley, 1977; King & King, 1990; Smith & Tisak, 1993) has measured role constructs similarly and has justified the use of difference scores theoretically and empirically as the best alternative measure. Measures of the internal consistency of the scores were calculated by factor on item differences across sources to attain a lower bound of reliability (Smith & Tisak, 1993).

Edwards (1993, 1994) has appropriately criticized the use of difference scores, noting issues of conceptual ambiguity, discarded information, insensitivity to sources of profile difference, and overly restrictive constraints. Such criticisms require a response. Role theory suggests PRC elements are conceptually similar (Kahn et al., 1966). In addition, the current study ensures conceptual similarity through a factor analysis on the items comprising the profile and by carrying out the regression analyses on those factors (versus across an unrelated profile of items). Regarding the potential discarding of essential information, no difference in effects is hypothesized based on whether differences are positive or negative; in both cases, role conflict is proposed to exist. In addition, role conflict (Kahn et al., 1964) is theoretically defined as the difference between the role senders' and focal person's perceptions; it is one construct. This definition demands the use of a measure that captures differences and accepts such theoretical constraints. Differences, as opposed to components of role importance, are of theoretical relevance (Tisak & Smith, 1994). Edwards (1993) suggests these constraints may be accepted, but should additionally be tested in a polynomial regression (i.e., utilizing score components, their squared effects, and their interaction). However, several researchers have criticized this approach as well.
Their criticisms include the resulting multicollinearity, increased dependence on sample size and power (Kristof, 1996), the conceptual validity (and possible theoretical irrelevance) of the technique's higher-order terms (Bedeian & Day, 1994), and the question of whether difference scores represent something different than their components (Tisak & Smith, 1994). Such concerns have led several researchers to maintain a theoretically and empirically justified use of difference scores (e.g., Adkins, 1995; Werner & Tosi, 1995). In addition, the use of moderator analysis (where the difference score is the moderator) would otherwise create the need for four-dimensional versus three-dimensional plots of the data (creating interpretation challenges). The difference score approach taken in this study is foremost theoretically relevant. It addresses empirical criticisms by conducting analyses across conceptually similar factors, ensuring adequate reliability of the difference scores (see the results section), minimizing threats to power, and avoiding multicollinearity.

Feedback Inquiry. Questions and scales measuring feedback inquiry (hypothesis 2) were adapted from Ashford (1986). Only slight modifications were made to the questions in order to adapt them to the MSA scenario. A sub-group of the sample completed an exploratory measure of feedback inquiry that asked the same questions from the role sender's viewpoint.

Analyses

An exploratory factor analysis of responses to the effectiveness questions investigated the presence of internally consistent, multiple-item scales. A personal conversation with the instrument's developer indicated that no explicit, testable theory of factor structure had been proposed. Therefore, a confirmatory factor analysis was not pursued. A principal factor analysis (PFA) was computed on the total sample (peer and subordinate ratings were averaged by focal person). I began with a promax rotation (Tabachnick & Fidell, 1989). Because interpretable, internally consistent scales existed, moderated regression was carried out by factor (Edwards, 1993).

Tests of hypotheses used moderated multiple regression (Stone & Hollenbeck, 1984). The moderator variables (PRC and FBI) were entered as main effects followed by an interaction term. A moderating effect existed to the extent that a significant effect existed for the interaction term.

RESULTS

Principal Factor Analyses

After collapsing responses by focal person and source group (N = 413), I conducted a principal axis factor analysis (PFA) with a promax rotation and pairwise deletion on the 44 items. In order to ensure common factor structure across sources, PFAs were also conducted for each source. Results indicated the factor structure reported here was common across sources. Observation of the number of eigenvalues greater than one, the scree plot, the percentage of residuals greater than .05, and interpretability all indicated that a four-factor solution was most appropriate (Tabachnick & Fidell, 1989). Specifically, the results revealed nine factors with eigenvalues greater than one: 14.2, 4.0, 2.1, 1.6, 1.3, 1.3, 1.2, 1.1, and 1.0. The corresponding percentages of variance explained were 30.6, 7.8, 3.5, 2.5, 1.9, 1.7, 1.5, 1.3, and 1.2.
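For readers who wish to replicate the extraction, a minimal sketch follows. The original study did not use this software; the factor_analyzer package is simply one open-source implementation of principal-axis factoring, and the file name is a hypothetical assumption.

    # Minimal sketch of the reported extraction: principal-axis factoring
    # with an oblique (promax) rotation on the 44 retained items.
    import pandas as pd
    from factor_analyzer import FactorAnalyzer

    items = pd.read_csv("effectiveness_items.csv")  # hypothetical file

    fa = FactorAnalyzer(n_factors=4, rotation="promax", method="principal")
    # Note: the study used pairwise deletion; dropna() is a simplification.
    fa.fit(items.dropna())

    eigenvalues, _ = fa.get_eigenvalues()  # compare with the values above
    loadings = pd.DataFrame(fa.loadings_, index=items.columns)  # cf. Table 3
    print(eigenvalues)
    print(loadings.round(2))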
Items retained for each factor: 1) had loadings > .55 (Comrey, 1973), 2) did not significantly detract from the internal consistency of the factor score (Cortina, 1993), 3) were easily interpretable as representing the factor, and 4) were considered marker variables (Tabachnick & Fidell, 1989). This created the most parsimonious set of representative items.

For the first factor (labeled Interpersonal Skills, with items such as "approachability"), six items had loadings > .55. No items significantly detracted from the internal consistency, and no items loaded similarly on other factors. The six items are listed in Table 3; the scale had a coefficient alpha of .84. For each factor, suggestions by Cortina (1993) were followed in order to avoid sole reliance on a rule of thumb for alpha. A comparison of changes in item intercorrelations and precision measures also indicated that retaining these items was preferred.

_______________________
Insert Table 3 about here
_______________________

For the second factor (labeled Planning Ability, with items such as "time management"), five items had loadings > .55 (see Table 3). No items significantly detracted from the internal consistency, and no items loaded similarly on other factors. The coefficient alpha was .82.

For the third factor (labeled Executive Presence, with items such as "comfort around top management"), four items had loadings > .55 (see Table 3). No items significantly detracted from internal consistency, and no items loaded similarly on other factors. The coefficient alpha was .70.

For the fourth factor (labeled Personal Learning, with items such as "personal learning"), three items had loadings > .55 (see Table 3). No items significantly detracted from the internal consistency, and no items loaded similarly on other factors. The coefficient alpha was .75.

Results of Tests of Hypotheses

Tables 4 and 5 present the means, standard deviations, and intercorrelations of the key study variables. Specifically, Table 4 presents effectiveness and FBI ratings; PRC measures are presented at the factor level in Table 5. I initially calculated the PRC variables at the item level and then summarized them to the factor level.

__________________________
Insert Tables 4 & 5 about here
__________________________

Several things contributed to the varied sample sizes across sources. First, not all subjects had subordinates. Second, non-responses from superiors automatically decreased the sample size (i.e., individuals had only one superior), whereas non-responses from one peer or subordinate did not (i.e., one could still average the remaining peer or subordinate ratings).

Table 6 presents profile agreement measures (as correlation coefficients) in comparison to Harris and Schaubroeck's (1988) meta-analysis results. In addition, Table 6 presents correlation coefficients specific to each factor within each dyad. Similar to their meta-analysis results, the peer-superior correlation is substantially greater than both the self-peer and self-superior correlations. These results also indicate that reporting only profile agreement does not seem adequate. For example, self-superior profile agreement is not significant overall (r = .06, ns), but is significant across items representing the Interpersonal, Planning, and Executive Presence factors.

_______________________
Insert Table 6 about here
_______________________

Regression analyses were conducted on data from each factor and for each dyad (four factors across three dyads). Therefore, twelve moderated regression analyses were performed to test each hypothesis.
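Because the PRC moderator in these analyses is a difference-based score, it may help to restate its computation (described under Measures) concretely. A minimal sketch, with hypothetical function and variable names:

    # Minimal sketch of the PRC computation: for each dyad, take the
    # square root of the squared item-level importance difference (i.e.,
    # the absolute difference), then average across a factor's items.
    import numpy as np
    import pandas as pd

    def prc_scores(self_imp: pd.DataFrame, sender_imp: pd.DataFrame,
                   factor_items: dict) -> pd.DataFrame:
        """One PRC score per focal person per factor (hypothetical helper).

        self_imp / sender_imp: focal-person-indexed item-level importance
        ratings from the focal person and from one role-sender group.
        """
        # Direction is discarded because role conflict is proposed to
        # exist for both positive and negative differences.
        diff = np.sqrt((self_imp - sender_imp) ** 2)
        return pd.DataFrame(
            {factor: diff[items].mean(axis=1)
             for factor, items in factor_items.items()}
        )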
Internal consistency (coefficient alpha) reliabilities for the PRC difference measures (calculated across dyads) were .74 for factor one, .74 for factor two, .71 for factor three, and .82 for factor four.

Person-Role Conflict

Of the twelve analyses (see Table 7), only one was significant. The superior assessment of Executive Presence and PRC had a significant interaction (∆R² = .09, p < .01) in predicting the self-assessment of Executive Presence.

_______________________
Insert Table 7 about here
_______________________

Figure 1 provides a graphical illustration of the significant interaction. Within-subgroup regression equations were computed (one for high PRC and one for low PRC, based on a median split) regressing self-assessments on superior assessments (Peters, O'Connor, & Wise, 1984). I then plotted high, average, and low values (1.0, 0.0, and -1.0 SDs from the mean) of superior assessments (Aiken & West, 1991).

_______________________
Insert Figure 1 about here
_______________________

Figure 1 shows that (as predicted) when there was low person-role conflict, there was a significant positive relationship between self and superior assessments (r = .49, p < .01), but not when person-role conflict was high (r = .05, ns).

Feedback Inquiry

Two significant results were found for the effects of FBI (see Table 8). However, only one was in the direction predicted by hypothesis 2. First, the superior assessment and FBI had a significant interaction (∆R² = .05, p < .05) in predicting the self-assessment of Executive Presence.

_______________________
Insert Table 8 about here
_______________________

Figure 2 shows that (counter to the hypothesis) when there were low levels of FBI, there was a significant positive relationship between self and superior assessments (r = .44, p < .01). Conversely, a statistically significant relationship did not exist when reported feedback inquiry was high (r = .22, ns).

_______________________
Insert Figure 2 about here
_______________________

Second, the subordinate assessment and FBI had a significant interaction (∆R² = .15, p < .05) in predicting the self-assessment of the Interpersonal factor. Figure 3 provides a graphical illustration of this interaction. It shows that (as predicted by the hypothesis) when there were high levels of reported feedback inquiry, there was a significant positive relationship between self and subordinate assessments (r = .64, p < .05). Conversely, a statistically significant relationship did not exist when reported feedback inquiry was low (r = .05, ns).

_______________________
Insert Figure 3 about here
_______________________

DISCUSSION

This study extended past research by seeking to explain differences in responses from the typical sources of a 360-degree assessment, and it overcame past methodological and analytical concerns. Past research had not investigated the effects of moderators across all 360-degree dyads. Results from this study indicated little agreement among the subordinate dyads. For example, the overall correlations for the self-subordinate, peer-subordinate, and superior-subordinate dyads were .03, .32, and .05, respectively (none of which were significant). In fact, the superior-subordinate and self-subordinate ratings of the Interpersonal factor were the only evidence of subordinate agreement with any other source (see Table 6). However, among the non-subordinate dyads, trends in agreement were similar to those reported in past studies (e.g., Harris & Schaubroeck, 1988).
Specifically, the peer-superior dyad produced the greatest degree of agreement (the only dyad statistically significant overall). However, significant agreement existed on several of the sub-scales for the peer-superior and self-superior dyads. This latter finding indicates significant agreement between sources may exist if one focuses beyond overall profile agreement.

Although this study did not seek to conduct formal tests of the effects of the type of behavior assessed, the results provide initial evidence to encourage future investigations of differences in MSA results due to the type of behavior assessed. MSA source agreement (as reflected by correlation coefficients) seems to vary depending on the type of behavior. Table 6 summarizes differences in agreement for each factor. Some dyads (such as self-superior) did not agree significantly overall, but did show significant agreement for specific types of behavior. This suggests the type of behavior assessed may be an important factor in understanding MSAs. Future research should formally test hypotheses regarding why certain sources might vary in their agreement on different types of behaviors.

Person-Role Conflict

The significant result for hypothesis one (i.e., the moderating effect of PRC) was in the hypothesized direction for the self-superior assessment of executive presence, though the other analyses were not significant. Further examination of the results indicated that self-assessments of the importance of executive presence were not significantly different in the high and low PRC conditions. However, superior assessments of the importance of executive presence were significantly different (p < .001) in high versus low PRC conditions. Specifically, high person-role conflict seemed to result from self-assessors rating the importance of executive presence higher than their superiors did. This may imply that focal persons who value the development of executive presence more than their bosses do may negatively influence their boss' assessment of that competence.

Several alternative explanations may account for these limited results. One potential explanation is that the item content was not adequate to assess the organizational roles. Several subjects in one-on-one managerial interviews commented that the content seemed exhaustive of relevant behaviors (the researcher did not conduct similar interviews with non-managers). However, no formal testing of this assumption was possible. Unmeasured role constructs may have influenced these results as well. For example, the presence of role ambiguity is a critical explanatory factor in the role episode model. Although unmeasured in this study, its presence may explain the lack of differentiation (that is, the high degree of inter-correlation) across factors presented in Tables 4 and 5. A final explanation is that role conflict influences certain dyads (e.g., self-superior) regarding certain competencies (e.g., executive presence) but not others. Murphy and Cleveland (1995) have suggested a variety of factors that might lead one to expect such a specific effect. They suggest access to information about task and interpersonal behaviors and results will differ across role senders. That is, the superiors in this sample may have had a greater opportunity to observe focal person attempts and outcomes regarding executive presence.
Such an explanation would require the development of a more extensive theoretical framework differentiating expected dyad effects, similar to the starting point presented by Murphy and Cleveland.

Feedback Inquiry

Mixed support existed for the moderating effects of FBI. Only two analyses indicated a moderating effect of FBI. First, feedback inquiry moderated the relationship between self and subordinate assessments such that high feedback inquiry was associated with greater congruence of self and subordinate assessments of interpersonal effectiveness. Surprisingly, the additional significant result was in the direction opposite to that hypothesized. This unanticipated result and the overall mixed support are discussed below.

The FBI effects on self and superior assessments may result from the focal person's definition and perception of effective executive presence. T-test results (based on a median split) indicated that self-assessments of executive presence in the high FBI category (M = 3.15) were significantly higher (p < .05) than self-assessments of executive presence in the low FBI category (M = 2.90). That is, individuals who saw themselves as high in FBI also tended to rate themselves as high on executive presence (with slightly less variance). Mean superior ratings, alternatively, did not significantly differ as a function of the level of FBI. Inflated and less varied self-ratings of executive presence may create potential range restriction issues that would account for a diminished relationship with superior ratings. Focal persons may inflate their ratings because they see their FBI as demonstrating competence in executive presence. Superiors may not have inflated their ratings because they do not perceive FBI similarly. Research to date has not investigated the acceptability of feedback inquiry across organizational levels (i.e., between managers and non-managers). Such research may help explain these unexpected results.

Several unmeasured aspects of FBI may also help explain the mixed results found here and may suggest directions for future research. One unmeasured aspect of FBI relates to the implied assumption that engaging in FBI automatically leads to role sender feedback. This may not be the case. Role theory (Biddle & Thomas, 1966) indicates that role sender responses do not necessarily lead to changes in focal person expectations. In addition, focal persons who engage in FBI will not always receive feedback from role senders. Interestingly, an exploratory measure of role sender feedback giving (FBG) was not significantly associated with FBI. This was the case when comparing focal person FBI to superior FBG (r = .05, ns), peer FBG (r = .08, ns), and subordinate FBG (r = .15, ns). These results imply that asking is not necessarily receiving. A potential future hypothesis may be that FBI that leads to FBG will moderate MSA agreement. Another possibility is that the concurrent incidence of FBG and FBI is essential. Future research should investigate the interdependent role of FBG (Larson, 1984) with FBI in the MSA process.

Unmeasured aspects of role compliance may also explain the weak results for the effects of FBI. Even if FBI led role senders to give feedback, their feedback may not have led to role compliance. This lack of role compliance may exist for a variety of reasons (Kahn et al., 1964). For example, focal persons may not have seen the role sender as having adequate power to prescribe role expectations.
They also may have felt the feedback was not congruent with other role sender expectations. A better understanding of the degree of role compliance associated with FBI may have helped explain this study's results.

The FBI results presented here are far from conclusive in defining the relationship between FBI and MSA outcomes. However, they indicate that future research should continue to investigate the role of FBI, especially given that MSAs are often presented as initiating a feedback-seeking process within an organization. Future investigations should incorporate causal designs that investigate the influence of FBI in relation to constructs such as role compliance and role sender FBG.

Limitations

A few limitations of this study should be noted. This study's external validity is strengthened because the data were collected in the context of an organization implementing an actual MSA and because of the large, representative sample. However, other aspects of the study may weaken its generalizability. First, these results may be relevant only to situations where the MSA purpose is developmental rather than tied to performance appraisal and compensation decisions. The senior managers and I assured subjects that MSA results would be strictly developmental. In one-on-one interviews, subjects commented that their results would be quite different if tied to performance reviews. Although researchers (e.g., Meyer, 1991) have discussed the potential effects of developmental versus evaluative purposes of performance appraisals, no research to date has investigated the implications of tying MSAs to compensation or promotion decisions. Second, the types of roles assessed in the current study may not generalize to all organizations. Tsui and Ohlott (1988) have noted that one difficulty in relating role differences across sources lies in our ability to anticipate what those roles are. The exploratory factor analysis conducted in this study minimizes this threat, but future research should firmly establish role content and more fully assess whether roles differ across sources or organizations. Third, it is unclear how these results might compare to MSA interventions where participation is voluntary, where focal persons select their assessors, or where traditional MSA sources are not included.

Practitioner Implications

These results imply practitioners should consider several things in administering an MSA process. MSAs that summarize agreement across all items may mask whether agreement exists. Therefore, results may be best presented at the behavioral role level (rather than the profile or item level). Such differences in role agreement between dyads may also indicate practitioners should use customized assessments that present appropriate items to each source. Unfortunately, most popular instruments provide all sources with all items (Van Velsor & Leslie, 1991). Interestingly, in this researcher's experience, most subjects complain about the use of long lists of items. Therefore, customization may also minimize resistance to MSAs. Rather than immediately assuming the more raters the better (i.e., adopting a 360-degree approach), organizations should first determine which behaviors are most important and then determine who should rate whom based on those behaviors.

Conclusion

In summary, several suggestions for future research follow from these results. First, future research should continue to investigate factors influencing MSA results, particularly across multiple dyads concurrently.
A major difficulty in interpreting the current study's results is the lack of comparable research investigating effects across all dyads. Second, future research should avoid relying solely on overall profile measures of agreement. Future studies should seek to theoretically understand differentiated source and dyad effects, noting which sources are most appropriate to assess certain behaviors (e.g., Murphy & Cleveland, 1995). Finally, research should continue to evolve past defining levels of agreement toward better explaining it. Reports of correlation coefficients representing profile agreement between sources should become increasingly less interesting.

REFERENCES

Adkins, C. L. 1995. Previous work experience and organizational socialization: A longitudinal examination. Academy of Management Journal, 38: 839-862.

Aiken, L. S., & West, S. G. 1991. Multiple regression: Testing and interpreting interactions. Newbury Park, CA: Sage.

Antonioni, D. 1994. The effects of feedback accountability on upward appraisal ratings. Personnel Psychology, 47: 349-356.

Ashford, S. J. 1986. Feedback-seeking in individual adaptation: A resource perspective. Academy of Management Journal, 29: 465-487.

Ashford, S. J., & Cummings, L. L. 1983. Feedback as an individual resource: Personal strategies of creating information. Organizational Behavior and Human Performance, 32: 370-398.

Baird, L. S. 1977. Self and superior ratings of performance: As related to self-esteem and satisfaction with supervision. Academy of Management Journal, 20: 291-300.

Baril, G. L., Ayman, R., & Palmiter, D. J. 1994. Measuring leader behavior: Moderators of discrepant self and subordinate descriptions. Journal of Applied Social Psychology, 24: 82-94.

Bedeian, A., & Day, D. 1994. Concluding statement. Journal of Management, 20: 695-698.

Bernardin, H. J., & Beatty, R. W. 1987. Can subordinate appraisals enhance managerial productivity? Sloan Management Review, 28(4): 63-73.

Biddle, B. J., & Thomas, E. J. 1966. Role theory: Concepts and research. New York: Wiley.

Brief, A. P., Aldag, R. J., & Van Sell, M. 1977. Moderators of the relationships between self and superior evaluations of job performance. Journal of Occupational Psychology, 50: 129-134.

Church, A. H. 1995. First-rate multirater feedback. Training and Development, 42-43.

Church, A. H. 1997. Do you see what I see? An exploration of congruence in ratings from multiple perspectives. Journal of Applied Social Psychology, 27: 983-1020.

Comrey, A. L. 1973. A first course in factor analysis. New York: Academic Press.

Cortina, J. 1993. What is coefficient alpha? An examination of theory and applications. Journal of Applied Psychology, 78: 98-104.

Edwards, J. R. 1993. Problems with the use of profile similarity indices in the study of congruence in organizational research. Personnel Psychology, 46: 641-665.

Edwards, J. R. 1994. Regression analysis as an alternative to difference scores. Journal of Management, 20: 683-689.

Fox, S., & Dinur, Y. 1988. Validity of self-assessment: A field evaluation. Personnel Psychology, 41: 581-592.

Greller, M. M., & Herold, D. M. 1975. Sources of feedback: A preliminary investigation. Organizational Behavior and Human Performance, 13: 244-256.

Harris, M. M., & Schaubroeck, J. 1988. A meta-analysis of self-supervisor, self-peer, and peer-supervisor ratings. Personnel Psychology, 41: 43-62.

Johnson, T. W., & Graen, G. 1973. Organizational assimilation and role rejection. Organizational Behavior and Human Performance, 10: 72-87.
Kahn, R. L., Wolfe, D. M., Quinn, R. P., Snoek, J. D., & Rosenthal, R. A. 1964. Organizational stress: Studies in role conflict and ambiguity. New York: Wiley.

Kahn, R. L., Wolfe, D. M., Quinn, R. P., Snoek, J. D., & Rosenthal, R. A. 1966. Adjustment to role conflict and ambiguity in organizations. In B. J. Biddle & E. J. Thomas (Eds.), Role theory: Concepts and research: 277-282. New York: Wiley.

Keeley, M. 1977. Subjective performance evaluation and person-role conflict under conditions of uncertainty. Academy of Management Journal, 20: 301-314.

King, L. A., & King, D. W. 1990. Role conflict and role ambiguity: A critical assessment of construct validity. Psychological Bulletin, 107: 48-64.

Kristof, A. L. 1996. Person-organization fit: An integrative review of its conceptualizations, measurement and implications. Personnel Psychology, 49: 1-49.

Larson, J. R. 1984. The performance feedback process: A preliminary model. Organizational Behavior and Human Performance, 33: 42-76.

Lawler, E. E. 1967. The multitrait-multirater approach to measuring managerial job performance. Journal of Applied Psychology, 51: 369-381.

Locke, E. A. 1986. Generalizing from laboratory to field: Ecological validity or abstraction of essential elements? In E. A. Locke (Ed.), Generalizing from laboratory to field settings: 3-9. Lexington, MA: Lexington Books.

London, M., & Smither, J. W. 1995. Can multi-source feedback change perceptions of goal accomplishment, self-evaluations, and performance-related outcomes? Theory-based applications and directions for research. Personnel Psychology, 48: 803-839.

London, M., & Wohlers, A. J. 1991. Agreement between subordinate and self-ratings in upward feedback. Personnel Psychology, 44: 375-390.

McClelland, G. H., & Judd, C. M. 1993. Statistical difficulties of detecting interactions and moderator effects. Psychological Bulletin, 114: 376-390.

Meyer, H. H. 1991. A solution to the performance appraisal feedback enigma. Academy of Management Executive, 5: 68-76.

Mount, M. K. 1984. Supervisor, self- and subordinate ratings of performance and satisfaction with supervision. Journal of Management, 10: 305-320.

Murphy, K. R., & Cleveland, J. N. 1995. Understanding performance appraisal: Social, organizational, and goal-based perspectives. Thousand Oaks, CA: Sage Publications.

Pedhazur, E. J. 1988. Multiple regression in behavioral research. New York: Holt, Rinehart & Winston.

Peters, L. H., O'Connor, E. J., & Wise, S. L. 1984. The specification and testing of useful moderator variable hypotheses. In T. S. Bateman & G. R. Ferris (Eds.), Method and analysis in organizational research: 128-139. Reston, VA: Reston Publishing.

Schneier, C. E., & Beatty, R. W. 1978. The influence of role prescriptions on the performance appraisal process. Academy of Management Journal, 21: 129-135.

Shore, L. M., & Thornton, G. C., III. 1986. Effects of gender on self and supervisory ratings. Academy of Management Journal, 29: 115-129.

Smith, C. S., & Tisak, J. 1993. Discrepancy measures of role stress revisited: New perspectives on old issues. Organizational Behavior and Human Decision Processes, 56: 285-307.

Smither, J., London, M., Vasilopoulos, N., Reilly, R., Millsap, R., & Salvemini, N. 1995. An examination of the effects of an upward feedback program over time. Personnel Psychology, 48: 1-34.

Stone, E., & Hollenbeck, J. 1984. Some issues associated with the use of moderated regression. Organizational Behavior and Human Performance, 34: 195-213.
Tabachnick, B. G., & Fidell, L. S. 1989. Using multivariate statistics. New York: Harper Collins.

Tisak, J., & Smith, C. S. 1994. Defending and extending difference score methods. Journal of Management, 20: 675-682.

Tornow, W. W. 1993. Editor's note: Introduction to special issue on 360-degree feedback. Human Resource Management, 32: 211-219.

Tsui, A. S. 1984. A role set analysis of managerial reputation. Organizational Behavior and Human Decision Processes, 34: 64-96.

Tsui, A. S., & Ohlott, P. 1988. Multiple assessment of managerial effectiveness: Interrater agreement and consensus in effectiveness models. Personnel Psychology, 41: 779-803.

Tsui, A. S., Ashford, S. J., St. Clair, L., & Xin, K. R. 1995. Dealing with discrepant expectations: Response strategies and managerial effectiveness. Academy of Management Journal, 38: 1515-1543.

Van Velsor, E., & Leslie, J. B. 1991. Feedback to managers, Volume II: A review and comparison of sixteen multi-rater feedback instruments. Greensboro, NC: Center for Creative Leadership.

Werner, S., & Tosi, H. L. 1995. Other people's money: The effects of ownership on compensation strategy and managerial pay. Academy of Management Journal, 38: 1672-1691.

Williams, J. R., & Levy, P. E. 1992. The effects of perceived system knowledge on the agreement between self-ratings and supervisor ratings. Personnel Psychology, 45: 835-847.

Wohlers, A. J., Hall, M. J., & London, M. 1993. Subordinates rating managers: Organizational and demographic correlates of self-subordinate agreement. Journal of Occupational and Organizational Psychology, 66: 263-275.

TABLE 1
Comparison of Current Research to Past Studies

Studies compared: Baird (1977); Baril, Ayman, & Palmiter (1994); Brief, Aldag, & Van Sell (1977); Ferris, Yates, Gilmore, & Rowland (1985); Harris & Schaubroeck (1988); London & Wohlers (1991); Shore & Thornton (1986); Williams & Levy (1992); Wohlers, Hall, & London (1993); and the present study.

Dimensions of comparison:
Analytical approach: indirect test (correlates) versus direct test as moderators.
MSA dyads tested: self-superior, self-peer, self-subordinate.
Methodological weaknesses: inappropriate use of difference scores; lack of an internally consistent scale or use of a one-item scale; self-selection of focal persons; categorization of continuous variables.

a information not presented in study
b methodological issues not applicable to Harris & Schaubroeck's (1988) meta-analysis

TABLE 2
Sample Demographics of Focal Persons

Demographic             Frequency    % of Sample
Gender
  Male                      74          49.0%
  Female                    77          51.0%
Education
  High School               33          22.0%
  Associate Degree          14           9.0%
  Bachelors Degree          86          57.0%
  Masters Degree            18          12.0%
Descent
  African American          26          17.0%
  American Indian            2           1.0%
  Hispanic                   5           3.0%
  Caucasian                119          79.0%

                          Mean         Range
Age                         39          23 to 56 years
Organization Tenure         12          1 to 38 years

TABLE 3
Loadings of Four Factor Solution

Item Label                       Factor 1   Factor 2   Factor 3   Factor 4
Approachability                    0.75      -0.22      -0.05       0.18
Peer Relationships                 0.72       0.11      -0.08       0.04
Listening                          0.68       0.01       0.17      -0.16
Compassion                         0.66       0.04      -0.28       0.25
Patience                           0.60       0.03       0.06      -0.10
Integrity and Trust                0.56       0.38      -0.24       0.04
Understanding Others               0.53      -0.07       0.16       0.09
Interpersonal Savvy                0.53      -0.18       0.19       0.23
Composure                          0.49       0.09       0.25      -0.18
Customer Focus                     0.47       0.24       0.12      -0.06
Negotiating                        0.44      -0.09       0.43       0.04
Building Team Spirit               0.44      -0.04      -0.03       0.39
Ethics and Values                  0.39       0.35      -0.12       0.06
Work / Life Balance                0.31       0.21      -0.01       0.05
Time Management                    0.09       0.84      -0.14      -0.23
Priority Setting                   0.08       0.75       0.01      -0.16
Perseverance                      -0.01       0.72      -0.09       0.18
Results                           -0.04       0.57       0.15       0.19
Planning                          -0.14       0.57       0.12      -0.02
Organizing                         0.03       0.52       0.02       0.00
Informing                          0.32       0.47      -0.17       0.11
Process Management                -0.10       0.40       0.26       0.09
Action Oriented                   -0.04       0.37       0.17       0.29
Total Quality Management           0.11       0.34       0.17       0.13
Standing Alone                     0.03       0.33       0.30       0.21
Comfort around Top Management      0.08      -0.17       0.76      -0.03
Business Acumen                    0.11       0.17       0.59      -0.24
Presentation Skills                0.02      -0.17       0.56      -0.01
Dealing with Ambiguity             0.11       0.07       0.55       0.03
Intellectual Horsepower           -0.05       0.22       0.54       0.07
Conflict Management                0.38      -0.13       0.53      -0.03
Technical Learning                -0.09       0.33       0.52      -0.10
Creativity                        -0.22       0.16       0.50       0.20
Learning on the Fly               -0.14       0.24       0.50       0.15
Problem Solving                   -0.08       0.36       0.45       0.06
Timely Decision Making             0.00       0.31       0.36       0.03
Organizational Agility             0.27       0.22       0.33      -0.12
Written Communications             0.20       0.15       0.29      -0.06
Personal Learning                  0.08       0.03      -0.01       0.69
Self Knowledge                     0.20       0.09      -0.08       0.59
Personal Disclosure                0.24      -0.11      -0.07       0.55
Self Development                   0.07       0.31      -0.03       0.50
Humor                              0.37      -0.31       0.15       0.41
Boss Relationships                 0.24       0.05       0.14       0.30
Career Ambition                   -0.10       0.13       0.27       0.30

TABLE 4
Means, Standard Deviations, and Intercorrelations among Effectiveness and Feedback Inquiry Measures (a)

Variable                  Mean  S.D.    1      2      3      4      5      6      7      8      9     10     11     12     13     14     15     16     17
1.  Self Factor 1         3.58  0.51  1.00
2.  Self Factor 2         3.39  0.57   .25** 1.00
3.  Self Factor 3         3.17  0.58   .31**  .31** 1.00
4.  Self Factor 4         3.34  0.57   .47**  .25**  .29** 1.00
5.  Boss Factor 1         3.49  0.50   .31** -.09   -.08    .01   1.00
6.  Boss Factor 2         3.35  0.62  -.24*   .39** -.05   -.11    .03   1.00
7.  Boss Factor 3         3.12  0.51  -.03   -.08    .29** -.01    .03    .33** 1.00
8.  Boss Factor 4         3.13  0.51   .05    .06    .10    .03    .26*   .41**  .23*  1.00
9.  Peer Factor 1         3.27  0.55   .05   -.16   -.01    .09    .28** -.01    .04    .18   1.00
10. Peer Factor 2         2.74  0.81  -.07   -.08   -.04   -.01    .21*   .18    .23*   .21*   .55** 1.00
11. Peer Factor 3         3.25  0.48  -.01   -.03    .15    .04    .17    .06    .40**  .17    .41**  .62** 1.00
12. Peer Factor 4         3.15  0.47  -.01   -.19*  -.05    .01    .21*   .05    .03    .19    .68**  .52**  .44** 1.00
13. Subordinate Factor 1  2.99  0.55   .47*   .08   -.26   -.23    .41*   .01   -.05   -.02    .04    .19    .18    .17   1.00
14. Subordinate Factor 2  3.48  0.56  -.09    .14   -.19   -.23    .21    .08   -.23    .11   -.21    .03   -.10    .01    .51** 1.00
15. Subordinate Factor 3  3.46  0.41   .02    .46*   .27   -.04   -.15    .07    .04    .07   -.28   -.11    .18   -.22    .48*   .41*  1.00
16. Subordinate Factor 4  3.60  0.42   .31    .36   -.13   -.12    .44*  -.15   -.15   -.05   -.01   -.01    .05    .14    .72**  .59**  .51** 1.00
17. Feedback Inquiry      3.12  0.50   .09    .07    .18*   .24** -.02    .05   -.04    .19   -.07   -.08   -.02    .01   -.08   -.24    .28   -.02   1.00

a Sample sizes: self-superior (86), self-peer (130), self-subordinate (27), peer-superior (95), peer-subordinate (27), and superior-subordinate (27).
* p < .05
** p < .01
TABLE 5
Means, Standard Deviations, and Intercorrelations among Person-role Conflict Measures (a)

Variable                     1       2       3       4
1.  Self Factor 1           1.00
2.  Self Factor 2           0.66**  1.00
3.  Self Factor 3           0.62**  0.48**  1.00
4.  Self Factor 4           0.64**  0.45**  0.43**  1.00
5.  Boss Factor 1           0.12    0.08    0.14    0.12
6.  Boss Factor 2           0.03    0.05    0.11   -0.01
7.  Boss Factor 3          -0.02    0.07    0.25*  -0.02
8.  Boss Factor 4          -0.04   -0.10    0.04    0.11
9.  Peer Factor 1           0.06   -0.08   -0.01   -0.02
10. Peer Factor 2           0.11    0.03    0.07   -0.04
11. Peer Factor 3           0.10    0.08    0.25*   0.02
12. Peer Factor 4           0.09   -0.01    0.03    0.01
13. Subordinate Factor 1    0.08   -0.12   -0.15   -0.15
14. Subordinate Factor 2   -0.00    0.04   -0.04   -0.27
15. Subordinate Factor 3    0.07   -0.08    0.27   -0.04
16. Subordinate Factor 4    0.08    0.10   -0.06    0.02
Mean                        3.55    3.68    3.38    3.16
S.D.                        0.53    0.53    0.60    0.64

Variable                     5       6       7       8       9      10      11      12      13      14      15      16
5.  Boss Factor 1           1.00
6.  Boss Factor 2           0.53**  1.00
7.  Boss Factor 3           0.39**  0.64**  1.00
8.  Boss Factor 4           0.47**  0.30**  0.30**  1.00
9.  Peer Factor 1           0.30**  0.21*   0.37**  0.15    1.00
10. Peer Factor 2           0.08    0.06    0.14    0.16    0.57**  1.00
11. Peer Factor 3           0.16    0.19    0.39**  0.11    0.60**  0.55**  1.00
12. Peer Factor 4           0.13    0.06    0.07    0.17    0.53**  0.44**  0.39**  1.00
13. Subordinate Factor 1    0.26    0.00   -0.02   -0.01    0.00   -0.24    0.17    0.04    1.00
14. Subordinate Factor 2    0.10    0.07   -0.20    0.06   -0.29   -0.34   -0.09   -0.19    0.73**  1.00
15. Subordinate Factor 3    0.07   -0.13   -0.07    0.32   -0.10   -0.13    0.22   -0.11    0.45*   0.46*   1.00
16. Subordinate Factor 4    0.24   -0.07   -0.27   -0.02   -0.27   -0.32   -0.06   -0.18    0.82**  0.74**  0.37    1.00
Mean                        3.63    3.83    3.14    3.59    3.29    3.07    3.56    3.65    3.44    3.10    3.80    3.66
S.D.                        0.42    0.38    0.43    0.44    0.76    0.43    0.37    0.37    0.46    0.43    0.35    0.36

(a) Sample sizes for dyads: self-superior (86), self-peer (130), self-subordinate (27), peer-superior (95), peer-subordinate (27), and superior-subordinate (27).
* p < .05  ** p < .01

TABLE 6
Degree of Agreement between Sources (Represented by Correlation Coefficients) by Factor

Dyad                   Factor               H&S      r        N
Self-Superior          Overall              0.35     0.06     86
                       Interpersonal                 0.31**   86
                       Planning                      0.39**   86
                       Executive Presence            0.29**   86
                       Personal Learning             0.03     86
Self-Peer              Overall              0.36     0.01     131
                       Interpersonal                 0.05     131
                       Planning                     -0.08     131
                       Executive Presence            0.15     131
                       Personal Learning             0.01     131
Self-Subordinate       Overall              N/A(a)   0.03     27
                       Interpersonal                 0.47*    27
                       Planning                      0.14     27
                       Executive Presence            0.27     27
                       Personal Learning            -0.12     27
Peer-Superior          Overall              0.62     0.37**   85
                       Interpersonal                 0.26*    85
                       Planning                      0.11     85
                       Executive Presence            0.41**   85
                       Personal Learning             0.22*    85
Peer-Subordinate       Overall              N/A(a)   0.32     27
                       Interpersonal                 0.04     27
                       Planning                      0.03     27
                       Executive Presence            0.18     27
                       Personal Learning             0.14     27
Subordinate-Superior   Overall              N/A(a)   0.05     27
                       Interpersonal                 0.41*    27
                       Planning                      0.08     27
                       Executive Presence            0.04     27
                       Personal Learning            -0.05     27

H&S = value reported in Harris & Schaubroeck's (1988) meta-analysis.
(a) Not examined in the meta-analyses.
* p < .05  ** p < .01

TABLE 7
Results of Moderated Regression Analyses - Person-Role Conflict (PRC)

Self Assessment / Step                      R2     ∆R2    F(step)   df
Interpersonal Ability
  Peer Assessment (PA)                      0.00   0.00   <1        1,123
  PA + PRC                                  0.01   0.00   <1        1,122
  PA + PRC + (PA x PRC)                     0.01   0.00   <1        1,121
Planning Ability
  Peer Assessment (PA)                      0.01   0.01   1.35      1,122
  PA + PRC                                  0.01   0.00   <1        1,121
  PA + PRC + (PA x PRC)                     0.01   0.00   <1        1,120
Executive Presence
  Peer Assessment (PA)                      0.02   0.02   2.33      1,119
  PA + PRC                                  0.06   0.04   4.63*     1,118
  PA + PRC + (PA x PRC)                     0.06   0.00   <1        1,117
Personal Learning
  Peer Assessment (PA)                      0.00   0.00   <1        1,123
  PA + PRC                                  0.05   0.05   6.41*     1,122
  PA + PRC + (PA x PRC)                     0.06   0.01   <1        1,121

Interpersonal Ability
  Superior Assessment (SA)                  0.12   0.12   10.69**   1,80
  SA + PRC                                  0.13   0.01   <1        1,79
  SA + PRC + (SA x PRC)                     0.13   0.00   <1        1,78
Planning Ability
  Superior Assessment (SA)                  0.15   0.15   14.11**   1,81
  SA + PRC                                  0.15   0.00   <1        1,80
  SA + PRC + (SA x PRC)                     0.15   0.00   <1        1,79
Executive Presence
  Superior Assessment (SA)                  0.08   0.08   6.15*     1,82
  SA + PRC                                  0.10   0.02   1.59      1,81
  SA + PRC + (SA x PRC)                     0.19   0.09   7.98**    1,80
Personal Learning
  Superior Assessment (SA)                  0.00   0.00   <1        1,78
  SA + PRC                                  0.02   0.02   1.28      1,77
  SA + PRC + (SA x PRC)                     0.03   0.01   1.06      1,76

Interpersonal Ability
  Subordinate Assessment (SA)               0.22   0.22   7.01*     1,25
  SA + PRC                                  0.34   0.12   4.41*     1,24
  SA + PRC + (SA x PRC)                     0.35   0.01   <1        1,23
Planning Ability
  Subordinate Assessment (SA)               0.02   0.02   <1        1,25
  SA + PRC                                  0.05   0.04   <1        1,24
  SA + PRC + (SA x PRC)                     0.06   0.01   <1        1,23
Executive Presence
  Subordinate Assessment (SA)               0.07   0.07   1.94      1,25
  SA + PRC                                  0.08   0.00   <1        1,24
  SA + PRC + (SA x PRC)                     0.11   0.04   <1        1,23
Personal Learning
  Subordinate Assessment (SA)               0.01   0.01   <1        1,25
  SA + PRC                                  0.04   0.02   <1        1,24
  SA + PRC + (SA x PRC)                     0.13   0.09   2.27      1,23

* p < .05  ** p < .01

TABLE 8
Results of Moderated Regression Analyses - Feedback Inquiry (FBI)

Self Assessment / Step                      R2     ∆R2    F(step)   df
Interpersonal Ability
  Peer Assessment (PA)                      .002   .002   <1        1,123
  PA + FBI                                  .011   .009   1.14      1,122
  PA + FBI + (PA x FBI)                     .011   .000   <1        1,121
Planning Ability
  Peer Assessment (PA)                      .006   .006   <1        1,123
  PA + FBI                                  .009   .003   <1        1,122
  PA + FBI + (PA x FBI)                     .015   .006   <1        1,121
Executive Presence
  Peer Assessment (PA)                      .034   .034   4.39*     1,124
  PA + FBI                                  .069   .035   4.62*     1,123
  PA + FBI + (PA x FBI)                     .072   .003   <1        1,122
Personal Learning
  Peer Assessment (PA)                      .000   .000   <1        1,124
  PA + FBI                                  .060   .060   7.76**    1,123
  PA + FBI + (PA x FBI)                     .062   .002   <1        1,122

Interpersonal Ability
  Superior Assessment (SA)                  .097   .097   8.85**    1,82
  SA + FBI                                  .116   .019   1.73      1,81
  SA + FBI + (SA x FBI)                     .119   .003   <1        1,80
Planning Ability
  Superior Assessment (SA)                  .154   .154   14.92**   1,82
  SA + FBI                                  .155   .001   <1        1,81
  SA + FBI + (SA x FBI)                     .158   .003   <1        1,80
Executive Presence
  Superior Assessment (SA)                  .093   .093   8.40**    1,82
  SA + FBI                                  .142   .049   4.65*     1,81
  SA + FBI + (SA x FBI)                     .188   .046   4.55*     1,80
Personal Learning
  Superior Assessment (SA)                  .002   .002   <1        1,82
  SA + FBI                                  .055   .053   4.50*     1,81
  SA + FBI + (SA x FBI)                     .056   .001   <1        1,80

Interpersonal Ability
  Subordinate Assessment (SA)               .219   .219   7.01*     1,25
  SA + FBI                                  .221   .002   <1        1,24
  SA + FBI + (SA x FBI)                     .374   .153   5.63*     1,23
Planning Ability
  Subordinate Assessment (SA)               .018   .018   <1        1,25
  SA + FBI                                  .274   .255   8.40**    1,24
  SA + FBI + (SA x FBI)                     .348   .074   2.61      1,23
Executive Presence
  Subordinate Assessment (SA)               .072   .072   1.94      1,25
  SA + FBI                                  .078   .006   <1        1,24
  SA + FBI + (SA x FBI)                     .082   .004   <1        1,23
Personal Learning
  Subordinate Assessment (SA)               .014   .014   <1        1,25
  SA + FBI                                  .149   .135   3.81      1,24
  SA + FBI + (SA x FBI)                     .248   .099   3.02      1,23

* p < .05  ** p < .01

FIGURE 1
Interaction of Superior Assessment of Executive Presence and Person-role Conflict (PRC)
[Line plot of self-assessment (y-axis) against superior assessment (x-axis: Low = 2.61, Medium = 3.12, High = 3.63).]
High PRC (>1.74): Self = 2.89 + .09 (superior)
Low PRC (<1.74):  Self = 1.62 + .51 (superior)

FIGURE 2
Interaction of Superior Assessment of Executive Presence and Feedback Inquiry (FBI)
[Line plot of self-assessment against superior assessment (Low = 2.61, Medium = 3.12, High = 3.63).]
High FBI (>2.7): Self = 2.53 + .24 (superior)
Low FBI (<2.7):  Self = 1.30 + .55 (superior)

FIGURE 3
Interaction of Subordinate Assessment of Interpersonal Factor and Feedback Inquiry (FBI)
[Line plot of self-assessment against subordinate assessment (Low = 2.61, Medium = 3.12, High = 3.63).]
High FBI (>2.7): Self = 1.59 + .60 (subordinate)
Low FBI (<2.7):  Self = 2.70 + .24 (subordinate)
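The sketch below illustrates the three-step hierarchical procedure summarized in Tables 7 and 8 and the simple-slope equations plotted in Figures 1 through 3. It is a minimal illustration under stated assumptions, not the study's code: the statsmodels formulas, the DataFrame d, its column names, and the input file are hypothetical, and only the general three-step logic and the moderator cutoff (PRC = 1.74) come from the tables and figures above.

    # Minimal sketch of the three-step moderated regression behind
    # Tables 7-8, using the self-superior executive presence dyad as the
    # example. `d`, its columns ('self_ep', 'sup_ep', 'prc'), and the input
    # file are hypothetical placeholders.
    import pandas as pd
    import statsmodels.formula.api as smf

    d = pd.read_csv("dyads.csv")  # hypothetical input file

    # Step 1: other-source assessment alone
    m1 = smf.ols("self_ep ~ sup_ep", data=d).fit()
    # Step 2: add the proposed moderator
    m2 = smf.ols("self_ep ~ sup_ep + prc", data=d).fit()
    # Step 3: add the interaction term; a significant R-squared increment
    # at this step is the evidence of moderation reported in the tables
    m3 = smf.ols("self_ep ~ sup_ep * prc", data=d).fit()

    for prev, cur in [(m1, m2), (m2, m3)]:
        delta_r2 = cur.rsquared - prev.rsquared
        print(f"R2 = {cur.rsquared:.3f}, delta R2 = {delta_r2:.3f}")

    # Simple slopes as plotted in Figures 1-3: refit within moderator
    # groups split at the cutoff to get equations of the form
    # Self = intercept + slope (superior)
    groups = [("Low PRC", d[d.prc < 1.74]), ("High PRC", d[d.prc >= 1.74])]
    for label, grp in groups:
        fit = smf.ols("self_ep ~ sup_ep", data=grp).fit()
        print(label, fit.params.round(2).to_dict())

Reading the output against Figure 1, the flatter slope for the high-PRC group (.09 versus .51) indicates that self-assessments track superior assessments much more closely when person-role conflict is low.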