Do we see through their eyes? Testing a web-based survey through cognitive interviews
Advanced Research Methods II
IE 911
2013
ARM Assignment 2
Carry out a trial run or ‘pilot’ of one of the research ‘instruments’ you will use during your
research project (eg survey, interview schedule, observation schedule). Explain the principles on
which you designed this ‘instrument’ and the opportunities and problems you foresaw. How did
your instrument work in practice and how would you amend it in the light of this experience.
Table of Contents
Introduction
Research Project
Nature of Enquiry
Research Methodology
Survey Development
    Structure
    Question Types
    Survey Software
Sampling
Ethical Considerations
Cognitive Testing
    Limitations
Processing Cognitive ‘Data’
Interpretive Validity
Content Validity
Reliability
Conclusions and Implications
References
Appendix 1. Questionnaire
Appendix 2. Scripted Probing Questions
“We all carry worlds in our heads, and those worlds are decidedly different.”
Lisa Delpit
Introduction
The purpose of the paper is to analyse the challenges encountered and the insights gained from a
small-scale pilot study. I pretested the validity and reliability of a web-based survey
questionnaire through a less commonly used method in educational research – cognitive
interviewing.
First of all, I will briefly summarise the focus of my PhD project to explain the rationale for
developing and piloting this research instrument. Second, I will reflect on the research paradigms
which informed my choice of cognitive interviews for revising the questionnaire items. Next I
will discuss the sampling strategy and the survey design. It will be followed by the analysis of
the cognitive processes that my participants engaged in while responding to the survey questions.
Finally, I will conclude with the implications of the pilot study for improving my data collection
method.
Research Project
My PhD project aims to examine perceived cultural differences and similarities in emotional
leadership practices of higher education leaders in Georgia and England. Despite an increasing
interest in academic leaders’ emotional intelligence, limited prior research has explored the topic
from a cross-cultural perspective. The study will focus on the culture value dimension of
individualism-collectivism to analyse the two academic leadership cultures. Adopting a mixed
methods approach, the research will involve heads of departments and departmental staff
members at two universities in each country. First, a web-based survey and then semi-structured
interviews will be used to reveal culturally specific components of emotional leadership. The
data will be collected in two languages: English and Georgian. The web-based survey in each
language will have two different versions: one for heads of departments and the other for
departmental staff.
The questionnaire that I piloted was the English version designed for academic staff members.
When planning the trial run, I set four specific objectives:
a) Explore the participants’ understanding of the meaning of the key terms (interpretive
validity)
b) Examine how comprehensively the survey measures cover the main constructs of the
study (content validity)
c) Improve clarity and consistency of the questions (reliability)
d) Test the effectiveness of the survey software format
Nature of Enquiry
In terms of an ontological position, the pilot study primarily viewed social reality as a creation of
our consciousness. It drew on nominalist assumptions that the world does not exist independently
of the knower; it is the product of personal perceptions (Cohen et al., 2011). It set out
reflecting the constructivist view that “as human beings we are meaning makers” (Hammond &
Wellington, 2013, p. 90). The meanings that we create find embodiment in concepts. And it is
through concepts that we make sense of reality.
I approached concepts from the anti-positivist perspective. I assumed that the research subjects
would interpret constructs through different lenses. In order to grasp the nuances of complex
perceptions, I wanted to see how an idea was conceived and developed in the participants’
consciousness. Hence I turned to cognitive interviewing. On the other hand, the study also
adopted a positivist stance by attempting to arrive at fairly fixed meanings of concepts.
Conflicting interpretations of meanings through the participants’ eyes were driven towards
forming objective categories. The focus was on validating survey measures and increasing their
reliability/dependability.
Research Methodology
Surveys have been widely used in educational research to explore participants’ perceptions and
experiences (Cohen et al., 2011; Hartas, 2010; Karabenick et al., 2007). However, they have
been criticised for measurement error and social desirability bias when examining beliefs and
attitudes (Desimone & Le Floch, 2004; Gillham, 2008). The quality of survey research can be
improved if proper attention is paid to designing and piloting survey measures. One of the
pretesting methods to increase the validity of individual question items is the cognitive interview.
Cognitive interviews offer verbal accounts of participants’ thought processes. Research subjects
are asked to ‘think aloud’ while answering survey questions. In other words, they verbalise what
comes to mind as they comprehend the question items. This technique examines whether the researcher and the
researched interpret the concepts in the same way. The interviewer may probe deeper to gain
better understanding of how the participant has perceived an item (Karabenick et al., 2007;
Willis, 2004).
This approach stems from the cognitive theory which breaks down the question answering
process into four stages:
Figure 1. Four-stage response model of thought process (adapted from Willis, 1999, p. 2)
• Comprehend: How does the participant understand the question intent and the meaning of key terms?
• Recall: What kind of information does the participant need to retrieve from memory?
• Decide: Does the participant attempt to give an accurate answer or try to respond in a socially desirable way?
• Respond: Does the participant’s choice of the response category reflect his/her internal judgement?
The main purpose of cognitive interviewing is to reveal problematic aspects in the complex
thought process. Analysing the interview data informs the researcher which areas require
modification to address the flaws of the survey measures. While this method seems fairly
popular in psychological research, it has been neglected in the field of education (Desimone &
Le Floch, 2004).
Survey Development
The survey design reflected the theoretical framework of the study. It aimed to investigate a)
how the participants perceived the role of emotions in HE leadership; and b) how the
participants’ personal beliefs and cultural values affected their perceptions of emotional
leadership. I adapted the survey measures from the existing research instruments with sound
psychometric properties.
Structure
The questionnaire had five sections and was estimated to take around 20 minutes to complete
(see Appendix 1). The first section consisted of demographic questions including education,
ethnic origin, nationality and religion. It was important to collect this information as participants’
social backgrounds are associated with their individualistic/collectivist cultural values (House et
al., 2004).
The second section explored the dimension of individualism and collectivism at the individual
level. It examined if academic staff members viewed themselves as interdependent with their
institution. Would they be willing to give up their personal interests for the sake of their team’s
benefit? The questions were adapted from two Individualism Collectivism Scales by Shulruf et
al. (2011) and Oyserman et al. (2002).
The third section sought to investigate individual perceptions of the importance of emotionality
in academic leadership. It drew on the emotional competence framework developed by Goleman
(1998; Goleman et al., 2005). The next set of questions asked about staff members’ experiences and
perceptions of emotional leadership practices in their own departments. It was adapted from the
Emotional Intelligence Questionnaire (EIQ Managerial 360° feedback) by Higgs and Dulewicz
(2002) and Leadership Practices Inventory (LPI Observer) questionnaire by Kouzes and Posner
(Kouzes et al., 2010).
The rationale for designing sections III and IV was to examine the emotional dimension of
leadership from two angles: a) how participants believed it should be, and b) how it was
perceived in actual practice. This design reflected the GLOBE methodology which studied
associations between values (as things should be) and practices (as they are) (House et al.,
2004).
The final section addressed the values of individualism and collectivism at the organisational
level. It was developed from the GLOBE Alpha questionnaire and looked at the nature of work
environment. How employee-oriented was the university? Did people truly care about each
other? Exploring the organisational work culture together with the individual level analysis of
individualism and collectivism (Section II) would offer a multilevel perspective on the construct.
Question Types
Considering the large sample size of my PhD project (eight departments, 4 Georgian and 4
English, in four universities, 2 Georgian and 2 English) and the length of the questionnaire (5
pages), a highly structured design was adopted with closed questions. It included dichotomous
questions, multiple choice questions and rating scales (see Appendix 1). Lack of word-based survey data
would be balanced by in-depth follow-up interviews in the second phase of the project.
Survey Software
First, I designed the survey in Warwick SiteBuilder2 as my WIE ePortfolio subpage. The idea of
linking the questionnaire to my research profile was meant to establish my credibility as a
doctoral researcher. It would potentially increase the response rate during the data collection
stage (Gillham, 2008). This survey version looked as follows:
Figure 2. FormsBuilder design
However, after testing the survey with the first two participants, I realised that FormsBuilder
software left much to be desired. Since it did not allow spreading out questions on multiple
pages, the survey looked too long. During the interviews, the participants kept scrolling the page
up and down, which seemed to prevent them from staying focused. As the research on web
survey design suggests, a single scrolling page compared to a paging design may take longer to
complete and result in more missing data (Tourangeau et al., 2013).
Moreover, the original design did not allow formatting questions or highlighting subheadings and
labels. These more advanced features would have helped to guide the participant mentally through the
questionnaire. Therefore, I imported the survey into more sophisticated survey software –
SurveyGizmo. I displayed each section with thematically related question groups on a different
page. Below is an example of the alternative design approach. The survey is available at:
https://edu.surveygizmo.com/s3/1345903/pilot-survey
Figure 3. SurveyGizmo design
I tested it in the second round of cognitive interviews with the other two participants. The
multi-page version seemed to work better; its layout looked more interesting and appealing to the eye.
Apart from the visual aspect, it offered a particularly useful feature of a Progress Indicator. It is
believed to encourage participants to complete the survey till the end and thus, reduces the risk
of ‘breakoffs’ (Tourangeau et al., 2013).
Sampling
I used purposive sampling to recruit four participants who were similar to the target group of my
PhD project. I was specifically interested in research subjects who were academic staff members
at an English university, had different cultural backgrounds and were at different stages in their
academic careers (maximum variation sampling, Cohen et al., 2011). Through personal
networking I approached a PhD candidate (male), a post-doctoral fellow (male), a Teaching
Assistant (female), and an Associate Professor (male) from the same university. All the
participants were working in their respective departments and would be exposed to the
experiences I intended to analyse.
The interviewees’ social identities were different from mine in terms of their ethnicity (French,
Greek, Hungarian and Syrian). This had the positive effect of reducing the risk of conformity: the
participants would not give answers in the way someone from their own culture would expect them
to (Seale, 2012). Because of the unequal gender composition of the sample, I will not compare
the gender patterns in cognitive reflection processes.
Ethical Considerations
I followed the ethical guidelines developed by the Association of Internet Researchers (Markham
& Buchanan, 2012). The introductory page of the survey stated the purpose of the study,
explained the procedures, promised confidentiality and sought voluntary participation. The
participants had to give consent to the given conditions to proceed with the questionnaire.
Cognitive Testing
The interviews were carried out at Warwick University library. I chose Wolfson Research
Exchange seminar rooms as they were quiet and could serve as a cognitive ‘laboratory’. It was
also a familiar and comfortable environment for the research subjects since they were members of
the Wolfson postgraduate community. The interviewing time varied from 60 to 90 minutes with each
of the four participants.
At the beginning of the interview, it was explained to the research subjects that the main
objective of the pilot study was testing the questionnaire. It was emphasised that I was not
collecting data through these interviews. Rather, I was interested in how well they understood the
questions in the survey, how they perceived the key concepts, what experiences they recalled
from their existing knowledge or memory and how easily they could choose an answer on the
reporting scale. In other words, I wanted them to think aloud while completing the questionnaire
and verbalise their thought processes. I said I would appreciate it if they pointed out any flaws in
phrasing or ambiguities in meaning.
In order to make sure that the participants grasped the basic idea of giving a cognitive interview,
I asked them to practise the process of thinking aloud with an example question. After this
exercise, I gave them my personal laptop with the online survey form displayed on the screen.
They started reading the questions aloud and making an effort to verbally process their thoughts.
However, not all the participants responded to this technique as I would ideally wish.
Two of the interviewees were quite articulate and engaged intensively in a reflective
conversation. Another participant did try to be helpful, but he seemed slightly embarrassed by the
nature of the interviewing. And the fourth research subject, who seemed less open, would just
read a question and select a response. She made very little attempt to imitate a stream of
consciousness that I explicitly asked for. Whenever the participants’ eloquence was limited or
they hesitated before giving an answer, I tried to probe further into the justification for their
decisions. As general guidelines I used a set of scripted probes that I adapted from the cognitive
interview guide (see Appendix 2). However, I also had to employ spontaneous probes as each
interview was unique in terms of its dynamics.
Apart from some probing, my intervention in completing the survey was minimal. My passive
role had two obvious advantages. On the one hand, it reduced the interviewer-imposed bias in
comprehending the question items. On the other, the participants felt they were not left alone
with the computer screen. The aspect of personal interaction encouraged most of them to be
more focused, cooperative and give well-thought answers.
Limitations
I acknowledge the criticism that cognitive interviewing might “contaminate ongoing cognitive
processing of the question” (Willis, 2004, p. 36). Asking the participants to think out loud and at
the same time justify their judgement could result in artificial verbal accounts of thought
processes. This technique somewhat ‘forced’ the research subjects to think in more detail,
justifying every thought that crossed their mind. If they had completed the survey on their own, it
may not have involved as much mental effort. In this sense, the process of cognitive interviewing
seemed somewhat unrealistic.
Another limitation of the method was the amount of time it consumed. At times the participants
would digress from the topic and elaborate at length on a point that was not relevant to the study.
I felt this wasted interviewing time and had to bring the participant back to the task.
Processing Cognitive ‘Data’
The interviews were not audio-recorded for two reasons. First, the purpose of the study was the
questionnaire evaluation instead of data collection. Second, I was worried that it would
discourage the participants from talking through the cognitive steps occurring in their mind. I
wanted them to express themselves freely without being concerned if their ideas were delivered
in a structured or chaotic way. Therefore, I had the questionnaire printed out for myself with
blank space under each question where I made notes. I did not revise the draft after each
administration. I made notes on the types of problems identified and compared the comments
with the next interview outcomes. If the same question item confused more than one participant,
then I would modify it considering their alternative interpretations.
Interpretive Validity
One of the main objectives of the pilot study was to improve the validity of the survey measures.
Although the concept of validity is characteristic of the positivist rather than the interpretivist
approach, it was not viewed here as it is conventionally conceived in the realist tradition. The focus
was on the interpretive validity – subjective meaningfulness of the terms as understood by the
research subjects themselves (Cohen et al., 2011).
Cognitive interviews attempted to capture the nuances of meaning through the eyes of the
participants and examine if there was shared understanding of the question items. While the
study acknowledged reality as open to personal interpretations and concepts having multiple
layers, it also tried to make sense of ‘truth’ by establishing credibility. As Bertrand Russell (1947)
notes, “When one admits that nothing is certain, one must, I think, also admit that some things
are much more nearly certain than others”.
Probing questions showed that several items carried different meanings and required
clarification. For example, the participants had difficulty in mapping a response when asked how
true the following statement was regarding their educational institution:
“In this university, people are generally concerned about others”.
They asked if being concerned meant truly caring or being polite to co-workers and coming
across as friendly. It was noted that these two concepts may not necessarily be related. Therefore,
two statements were generated addressing each point.
“In this university, most people appear friendly.”
“In this university, most people genuinely care about others.”
Exploring a wide range of possible interpretations was crucial to connect the concepts with the
research subjects’ actual experiences and determine the credibility of the research instrument.
Content Validity
Another type of validity that the pilot study aimed to increase was content validity of the
questionnaire. Content validity considers if “the full content of a conceptual definition is
represented in the measure” (Punch, 2009, p. 246). In other words, the goal was to make sure
that the questionnaire accurately and fully covered all the key aspects of the employed
constructs. In order to achieve reasonable content validity of the survey measures, I did a fairly
extensive literature review. First, the dimensions of emotional intelligence and individualism-collectivism were identified. Then they were precisely defined. Finally, the existing measures
that empirically represented the essence of the constructs were drawn from the research
literature.
Following Alimo-Metcalfe and Alban-Metcalfe’s approach (2001), all the statements were
phrased in the same format: a) the item addressed only one dimension; b) the item described an
observable behaviour or an inferable characteristic; c) the wording of the item was positive. For
example, the concept of empathy was linked to four statements reflecting the following leader
behaviours/characteristics.
Figure 4. Thematic grouping of construct measures
Based on the participants’ feedback, the measures of the emotional intelligence construct
required minor rewording, but they did not pose substantive problems in terms of their content
validity. However, the indicators of individualism-collectivism appeared to lack conceptual
elaboration. Since the literature on different dimensions of this construct is overly complex and
conflicting, I originally focused only on the organisational culture. The cognitive interviews,
though, revealed how individuals’ personal beliefs sometimes clashed with the shared cultural
values of the work context. Therefore, I turned back to the research literature and added items
addressing individual-level differences in individualism and collectivism. This developed a
multilevel perspective on the cultural construct (see Appendix 1, Part 2: Personal Self and
Relationships).
Reliability
The final objective of the pilot study was to improve reliability of the survey measures.
Reliability implies how consistent and stable a measurement is (Hammond & Wellington, 2013).
Cognitive testing of the questionnaire showed that some of the items were not understood in the
same way by different participants. For example, the question eliciting views about the
importance of emotional competences in leadership success was originally phrased as follows:
“How important do you consider these competences for successful leadership?”
As the research subjects were reflecting on the role of emotions in leadership, their examples did
not show a common understanding of the question intent. To get at the root of their question
comprehension, I tried to ask non-leading probes that offered several options for interpreting
the item (e.g. “Did you assume the question was directed at the head of your department or any
kind of leader? Or did you think the question hinted at the emotional skills of followers/staff
members?”). Interestingly enough, two of the participants viewed the concept of leadership as
different from being the Head of Department. Moreover, it was not clear to them whose
emotional intelligence the question targeted. Was it about leaders or followers? Or both?
To resolve this ambiguity, the original wording of the question was modified in the following way:
‘How important do you consider these competences for a Head of Department to be a
successful leader?’
Moreover, precise definitions were added to the key emotional competences to maximise their
dependability.
Conclusions and Implications
Cognitive testing of the web-based survey offered invaluable insight into increasing the validity
and reliability of the questionnaire. Probing into the participants’ interpretations identified a few
problematic areas in the research instrument. In the light of this experience, I made the necessary
amendments to the questionnaire. Each of the four objectives of the pilot study was addressed.
a) Explore the participants’ understanding of the meaning of the key terms (interpretive validity)
• Several items were reworded to reflect subtle shades of meaning as interpreted by the participants.
b) Examine how comprehensively the survey measures cover the main constructs of the study (content validity)
• The conceptual framework of individualism-collectivism was expanded.
• New scales addressing individual differences in values were added.
c) Improve clarity and consistency of the questions (reliability)
• Ambiguous questions were rephrased in a straightforward way.
• Emotional competences were precisely defined.
d) Test the effectiveness of the survey software format
• The FormsBuilder software did not prove engaging due to its limited features and was replaced with the more powerful SurveyGizmo package.
• The single scrolling page was broken into multiple pages, displaying each section of the questionnaire on a separate page, and a progress indicator was added.
• The overall layout was made more visually appealing.
To summarise, the pilot study served the intended purpose. Cognitive interviewing as a method
for testing and validating survey measures has important implications for education researchers.
Observing research subjects’ thought processes helps to see questions through their eyes,
identifies potential sources of error and leads to improved quality of survey questionnaires.
(Word Count: 3,585)
References
Alimo-Metcalfe, B. & Alban-Metcalfe, R. J. (2001) The development of a new transformational
leadership questionnaire. Journal of Occupational and Organizational Psychology,
74: 1-27.
Cohen, L., Manion, L. & Morrison, K. (2011) Research methods in education. 7th edn.
Abingdon, Oxon; New York: Routledge.
Desimone, L. M. & Le Floch, K. C. (2004) Are we asking the right questions? Using cognitive
interviews to improve surveys in education research. Educational Evaluation and
Policy Analysis, 26 (1): 1-22.
Gillham, B. (2008) Small-scale social survey methods. London; New York:
Continuum International Pub. Group.
Goleman, D. (1998) Working with emotional intelligence. London: Bloomsbury.
Goleman, D., Boyatzis, R. E. & McKee, A. (2005) The new leaders: transforming the art of
leadership into the science of results. London: Time Warner Books.
Hammond, M. & Wellington, J. J. (2013) Research methods: the key concepts. London; New
York: Routledge.
Hartas, D. (2010) Educational research and inquiry: qualitative and quantitative approaches.
London; New York: Continuum International Publishing Group.
Higgs, M. & Dulewicz, V. (2002) Making sense of emotional intelligence. 2nd edn. London: ASE.
House, R. J., Hanges, P. J., Javidan, M., Dorfman, P. W. & Gupta, V. (2004) Culture, leadership,
and organizations: the GLOBE study of 62 societies. Sage.
Karabenick, S. A., Woolley, M. E., Friedel, J. M., Ammon, B. V., Blazevski, J., Bonney, C. R.,
Groot, E. D., Gilbert, M. C., Musu, L. & Kempler, T. M. (2007) Cognitive
processing of self-report items in educational research: Do they think what we
mean? Educational Psychologist, 42 (3): 139-151.
Kouzes, J. M., Posner, B. Z. & Biech, E. (2010) A coach's guide to developing exemplary
leaders: making the most of The Leadership Challenge and the Leadership
Practices Inventory (LPI). [e-book]
Available at: http://WARW.eblib.com/patron/FullRecord.aspx?p=547045
[Accessed 5 July 2013]
Markham, A. & Buchanan, E. (2012) Ethical decision-making and Internet research: Version 2.0.
Recommendations from the AoIR Ethics Working Committee.
Oyserman, D., Coon, H. M. & Kemmelmeier, M. (2002) Rethinking individualism and
collectivism: evaluation of theoretical assumptions and meta-analyses. Psychological
Bulletin, 128 (1): 3.
Punch, K. (2009) Introduction to research methods in education. Los Angeles: Sage.
Seale, C. (2012) Researching society and culture. 3rd edn. London; Thousand Oaks: SAGE.
Shulruf, B., Hattie, J. & Dixon, R. (2011) Intertwinement of individualist and collectivist
attributes and response sets. Journal of Social, Evolutionary, and Cultural
Psychology, 5 (1): 51-65.
Tourangeau, R., Conrad, F. G. & Couper, M. (2013) The science of web surveys. Oxford; New
York: Oxford University Press.
Willis, G. B. (2004) Cognitive interviewing revisited: A useful technique, in theory. Methods for
testing and evaluating survey questionnaires, 23-43.
Willis, G. B. (1999). Cognitive interviewing: A “how to” guide. In Meeting of the
American Statistical Association, Research Triangle Institute.
Available at: http://www.hkr.se/PageFiles/35002/GordonWillis.pdf [Accessed 11 April 2013]
Appendix 1. Questionnaire
Cultural Differences in Emotional Leadership Practices of Heads of Departments
Case of England
Participant Information
Purpose
The study attempts to explore how emotionality is perceived in higher education leadership in different
cultural contexts. This survey aims to collect information from academic staff members in English
universities. It is also intended to obtain the same information from Georgia to draw a cross-cultural
comparison about the role of emotions in academic leadership.
Procedures
The survey consists of 5 sections and should take about 20 minutes to complete. The questions are
designed to elicit staff members’ views on leader characteristics and behaviours, personal beliefs and
cultural values.
Participation
Participation in this study is entirely voluntary. You may withdraw at any point and you do not have to
answer every question to submit the survey.
Benefits
There are no direct benefits to you as a participant. However, your participation will increase
understanding of culturally specific components of emotional leadership. It could contribute to academic
leadership success in culturally diverse communities.
Confidentiality
The information you provide will be kept confidential. Your responses will be anonymous, and only
group results will be reported, so no individual can be linked to his/her data. The findings will be
analysed for a PhD project and are expected to be reported in academic journals and at conferences.
The names of individuals, departments and universities will not appear in any report.
Contact details
If you have any questions or concerns about this study, please contact:
Natia Sopromadze
PhD Candidate
University of Warwick
Coventry
CV4 7AL
Email: [email protected]
Web: www.warwick.ac.uk/natiasopromadze
Ethical approval obtained from the Institute of Education, University of Warwick.
17 o I have read the above participant information and agree to take part in the study.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Part 1 – About You
1. Gender
o Male
o Female
o Rather not say
2. Age group
o 21 – 30
o 31 – 40
o 41 – 50
o 51 – 60
o Over 60
3. Education (highest degree achieved)
o Bachelor’s Degree
o Master’s Degree
o Doctoral Degree (PhD)
o Other
4. Ethnic origin
o Arab
o Asian or Asian British – Bangladeshi
o Asian or Asian British – Indian
o Asian or Asian British – Pakistani
o Asian or Asian British – Other Asian Background
o Black or Black British – African
o Black or Black British – Caribbean
o Black or Black British – Other Black Background
o Chinese
o Mixed – White and Asian
o Mixed – White and Black African
o Mixed – White and Black Caribbean
o Mixed – Other Mixed Background
o White – British
o White – Irish
o White – Other White Background
o Other Ethnic Background
o Rather not say
5. Nationality
_______________________
6. Number of years living in England
o 0 – 5
o 6 – 10
o 11 – 20
o Over 20
o All my life
7. Religion
o No religion
o Agnostic
o Buddhist
o Christian
o Hindu
o Jewish
o Muslim
o Sikh
o Other
o Rather not say
8. Your department
______________________________
9. Number of years working at the department
o 0 – 5
o 6 – 10
o 11 – 15
o 16 – 20
o Over 20
Part 2 – Personal Self and Relationships
10. Thinking of yourself in your work environment, how far do you agree or disagree with the
following statements?
(1 = Strongly disagree, 2 = Disagree, 3 = Neither agree nor disagree, 4 = Agree,
5 = Strongly agree, or Don’t know)

a) Personal Self
Uniqueness
My self-identity is different from other colleagues in many respects. 1 2 3 4 5
It feels good to be unique and different from others. 1 2 3 4 5
I communicate my ideas directly and clearly. 1 2 3 4 5
Responsibility
I assume responsibility for my decisions and their consequences. 1 2 3 4 5
Compete
I consider myself a competitive person. 1 2 3 4 5
I enjoy a competitive work environment. 1 2 3 4 5
I feel proud to reach the goals that others are not capable of achieving. 1 2 3 4 5

b) Relationships
Advice
I discuss work-related problems with my close colleagues. 1 2 3 4 5
I consult with my colleagues before making important work-related decisions. 1 2 3 4 5
I take into account my colleagues’ advice. 1 2 3 4 5
Harmony
It is a pleasure to spend time with my colleagues at the department. 1 2 3 4 5
I usually avoid conflict with people I work with even when I disagree. 1 2 3 4 5
I would give up my personal interest for the sake of my team’s benefit. 1 2 3 4 5
Part 3 – Emotional Competences
11. How important do you consider these competences for a Head of Department to be a successful
leader?
(1 = Not at all important, 2 = Rather unimportant, 3 = Neither important nor unimportant,
4 = Quite important, 5 = Extremely important, or Don’t know)

Self-awareness: ability to recognise and manage one’s own emotions 1 2 3 4 5
Motivation: determination and achievement drive 1 2 3 4 5
Empathy: awareness of others’ feelings and concerns 1 2 3 4 5
Influence: ability to influence and convince others 1 2 3 4 5
Social skills: ability to cooperate and manage conflict 1 2 3 4 5
Part 4 – Head of Department
12. Your department head’s gender
o Male
o Female
13. Ethnic origin
o Arab
o Asian or Asian British – Bangladeshi
o Asian or Asian British – Indian
o Asian or Asian British – Pakistani
o Asian or Asian British – Other Asian Background
o Black or Black British – African
o Black or Black British – Caribbean
o Black or Black British – Other Black Background
o Chinese
o Mixed – White and Asian
o Mixed – White and Black African
o Mixed – White and Black Caribbean
o Mixed – Other Mixed Background
o White – British
o White – Irish
o White – Other White Background
o Other Ethnic Background
o Rather not say
14. Nationality
_______________________
15. Thinking of your Head of Department, how far do you agree or disagree with the following
statements?
(1 = Strongly disagree, 2 = Disagree, 3 = Neither agree nor disagree, 4 = Agree,
5 = Strongly agree, or Don’t know)

Self-awareness
Considers how his/her emotions can affect staff members. 1 2 3 4 5
Is good at managing his/her emotions. 1 2 3 4 5
Is self-confident. 1 2 3 4 5
Motivation
Has a clear vision of the future. 1 2 3 4 5
Is determined and committed to the vision. 1 2 3 4 5
Takes responsibility for his/her decisions. 1 2 3 4 5
Empathy
Is aware of staff members’ emotional needs. 1 2 3 4 5
Shows genuine concern for staff members. 1 2 3 4 5
Leads in a way which reduces stress. 1 2 3 4 5
Influence
Recognises and rewards staff achievements. 1 2 3 4 5
Inspires staff members to achieve more. 1 2 3 4 5
Empowers staff by involving them in decisions. 1 2 3 4 5
Social skills
Has a friendly relationship with departmental staff. 1 2 3 4 5
Encourages cooperation among staff members. 1 2 3 4 5
Is willing to listen to different opinions. 1 2 3 4 5
Is open to constructive feedback from staff. 1 2 3 4 5
Part 5 – Organisational Values
16. Thinking of the university you work at, how far do you agree or disagree with the following
statements?
(1 = Strongly disagree, 2 = Disagree, 3 = Neither agree nor disagree, 4 = Agree,
5 = Strongly agree, or Don’t know)

a) People
Most people feel proud to be working for this university. 1 2 3 4 5
Most people show loyalty to the university. 1 2 3 4 5
Most people take pride in the accomplishments of their department. 1 2 3 4 5
Most people appear friendly. 1 2 3 4 5
Most people genuinely care about others. 1 2 3 4 5

b) Work Environment
The work environment encourages competition among co-workers. 1 2 3 4 5
The work environment encourages cooperation among co-workers. 1 2 3 4 5
Group loyalty is encouraged even if personal interests of individual staff members suffer. 1 2 3 4 5
I believe female heads of departments are generally more compassionate compared to male ones. 1 2 3 4 5
I believe that departments would be more effectively led and managed if there were more women as department heads than there are now. 1 2 3 4 5
If you have any comments, please leave them here.
________________________________________________________________________
Would you consider taking part in a 30-minute interview as a follow-up to the information you
have provided?
o Yes
o No
If yes, please put your name and email address below:
____________________________________
Thank you for taking the time to complete the survey. Your help is highly appreciated.
Appendix 2. Scripted Probing Questions

Comprehension/interpretation probe: What does the word “empathy” mean to you?
Paraphrasing: Can you repeat the question you just read, in your own words?
Confidence judgment: How sure are you that the ability to empathise with others is important in academic leadership?
Recall probe: Can you remember a case when your Head of Department showed genuine concern for the staff members?
Specific probe: Why do you think your Head of Department leads in a way which reduces stress?
General probes: How did you arrive at that answer? Was that easy or hard to decide? I noticed that you hesitated – tell me what you were thinking.
(Adapted from Willis, 1999)