Proceedings of Applied International Business Conference 2008
PERCEPTIONS ON THE PRACTICE OF TRAINING EVALUATION: A
HOSPITAL SETTING
Ho Sow Kin ψ
University of Malaya, Malaysia
Abstract
Training evaluation practice in Malaysian hospitals has not been explored extensively. Hence, the
objective of this exploratory study is to investigate the challenges and concerns of training evaluation
activities faced by hospitals in Malaysia. A total of 145 hospitals in Malaysia were selected as the sample
for this study. The findings highlight that training evaluation is perceived to be of importance to hospitals
in Malaysia. Future research could compare other sectors in the healthcare industry, such as
pharmaceuticals and biotechnology. Future researchers may also want to include inter- and intra-industry
comparative studies on training evaluation.
Keywords: Human resource development; Training evaluation; Hospital.
JEL Classification Codes: M12; M53.
1. Introduction
In today’s globally competitive economy, coupled with ever-dynamic technological advancement, the
pace of change requires organisations to continuously train employees to develop new skills and strive for
improved performance. Training is an essential component of Human Resource Development (HRD). The
effectiveness of training evaluation has become a top priority for overall organisational success, be it in
the private or public domain. A training programme must efficiently and effectively help participants
acquire knowledge, skills and/or attitudes in order to improve job performance.
The success of training in achieving its intended objectives and goals is measured throughout the
training cycle: evaluation takes place before, during and after the programme. Evaluation before the
training is useful for improving the programme design; evaluation during the training provides feedback
for improving implementation and identifying weaknesses; and evaluation after the training emphasises
relevance, effectiveness and efficiency.
The training evaluation process begins with planning, followed by implementation, reporting and feedback.
The planning stage helps to clarify the nature and scope of the evaluation – the purpose of the evaluation,
who will conduct the evaluation, how and when it will be conducted, and how the results will be reported
and disseminated. The heart of the planning is the evaluation design that specifies the questions, the overall
design for answering those questions, the necessary measures, data collection strategies including sampling,
and data analysis techniques. The implementation stage relates to the actual gathering and analysis of the
data. Once the analysis is completed, the results will be reported to provide an overview of the training and
findings.
Objective of this study
The objective of this study is to draw parallels between a literature review of training evaluation practice
in general and, in particular, to identify the training evaluation practice of hospitals in Malaysia. Training
evaluation practice in Malaysian hospitals has not been explored extensively. Hence, it is the objective of
this exploratory study to investigate the challenges and concerns of training evaluation activities faced by
hospitals in Malaysia.
ψ Corresponding author. Ho Sow Kin, Faculty of Business and Accountancy,
University of Malaya, 50603 Kuala Lumpur, Malaysia. Email: [email protected]
Motivation for this study
The relevant literature and studies on training and evaluation offer considerable insight into training
evaluation. In spite of the large number of studies on training evaluation, there appears to be a gap
concerning the study of training evaluation practice in the Malaysian healthcare industry, specifically in the
hospital sector. There is indeed a lack of studies addressing training evaluation practice in the hospital
setting. Therefore, the motivation for this exploratory study is to draw on parallels with performance
evaluation practices in general in Malaysia and to close this gap in the relevant literature, shedding more
light on training evaluation practice in the Malaysian healthcare industry. It is timely to address how the
Malaysian healthcare industry perceives the practice of training evaluation.
The scope of study includes the perceived importance of training evaluation, the training evaluation
models, tools and techniques used to evaluate training programmes, training outcomes, measures of inputs
and outputs, challenges faced by the hospitals in conducting training evaluation and difficulties in obtaining
information needed for evaluations.
Research methodology
A mailed questionnaire survey was conducted on 145 hospitals in Malaysia, with follow-up calls and
e-mails to the hospitals. The sampling units for this research are the training department managers and
human resource managers whose job functions include managing training activities in the organisation,
including training evaluation. An 18-item questionnaire was adapted from previous research by Al-Athari
and Zairi (2002) on Kuwaiti organisations. Training evaluation practice was measured using a five-point
Likert scale: on a scale from 1 (not important at all/not at all) to 5 (most important/all the time), the
respondents were asked to describe the extent to which they measure the training evaluation items.
2. Literature review
There exists a substantial and established body of theoretical literature on training evaluation
in general (Brinkerhoff, 2006; Buckley and Caple, 2000). Human Resource Development (HRD) is an
organised learning experience, conducted in a definite time period, to increase the possibility of improving
job performance and growth. HRD programmes are divided into three main categories: training,
development and education. Training is learning provided to improve performance on the present job. A
training programme must efficiently and effectively help participants acquire new and robust skills and
knowledge in order to adopt new job behaviours or to improve the effectiveness of current job behaviours
(Brinkerhoff, 2006). Buckley and Caple (2000) defined training as: “A planned and systematic effort to
modify or develop knowledge/skill/attitude through learning experience, to achieve effective performance
in an activity or range of activities. Its purpose, in the work situation, is to enable an individual to acquire
abilities in order that he or she can perform adequately a given task or job.”
Training evaluations
Buckley and Caple (2000) described evaluation as the process of attempting to assess the total value of
training – that is, the cost benefits and general outcomes which benefit the organisation, as well as the value
of the improved performance of those who have undertaken training. Likewise, Stufflebeam (2001)
described evaluation as a study designed and conducted to assist some audience to assess an object’s merit
and worth. On the other hand, Boulmetis and Dutwin (2000) explained evaluation as a systematic process
of collecting and analysing data in order to determine whether, and to what degree, objectives were or are
being achieved. Similarly, Schalock (2001) described effectiveness evaluation as the determination of
the extent to which a programme has met its stated performance goals and objectives.
Mann and Robertson (1996) asserted that many researchers believe that one of the main barriers to
employing effective evaluation procedures for training programmes is the difficulty in knowing how and
what to evaluate. Though evaluation should enable the trainers/clients to correct things which are going
wrong and learn from current experience to get better in the future, Hatton (2003) noted that in reality, most
training professionals are unsure about why, what or how they should be evaluating their training activities.
Hashim (2001) stressed that training evaluation has received a lot of criticism, largely explained by the
unsystematic, informal, and ad hoc evaluation that has been conducted by training institutions. Many
organisations approach training evaluation in an unconvincing or unprofessional manner (Buckley and
Caple, 2000; Hashim, 2001). It is important that evaluation focuses on the entire training and performance
improvement process including feedback from the participants in terms of content and applicability of such
programmes (Brinkerhoff, 2006; Lingham, Richley and Rezania, 2006).
Importance of training evaluation. An empirical study by Al-Athari and Zairi (2002) on Kuwaiti
government and private organisations revealed that private organisations emphasised the importance of
training evaluation more than government organisations. They also found that, in terms of frequency of
conducting training evaluations, the majority in both the government and private sectors only occasionally
evaluate their training programmes.
Training evaluation models and evaluation of training outcomes. Models are useful ways to understand the
linkage between a programme and its expected outcomes. Phillips (1990) stressed that, out of more than 50
models available, most training practitioners use the Kirkpatrick model to evaluate training. Donald
Kirkpatrick’s four-level model suggests that training evaluation should always begin with level one
(reaction) and then, as time and budget allow, move sequentially through level two (learning), level three
(job behaviour) and level four (results) (Kirkpatrick, 1971). But Bernthal (1995) argued that though
Kirkpatrick’s classic four-level model has weathered well, it has also limited thinking about evaluation and
possibly hindered the ability to conduct meaningful evaluations. Hashim (2001) added that currently only
the reaction level is evaluated in most training. Bernthal (1995) noted that too often trainers regard the
four-level approach as a universal framework for all evaluation, and they tend not to examine whether the
approach itself is shaping their questions and their results. Almost universally, organisations evaluate their
training programmes by emphasising one or more levels of Kirkpatrick’s model (Hashim, 2001).
Training input and outcome measurement. A study by Al-Athari and Zairi (2002) found that almost the
entire study sample measured their training input and output. All of the private organisations in the sample
measured quantitative inputs such as total training expenditure, number of employees receiving training,
number of courses offered to employees, payments to outside training providers, trainee travel expenses,
and training expenditure per employee; the government organisations instead measured qualitative inputs
such as total training time/days and the cost of paying for training facilities and equipment. Al-Athari and
Zairi (2002) also found that all of the private organisations and a minority of the government organisations
measured their training output. All of the government organisations which measured their training output
measured their employees’ job satisfaction and productivity, while 80 per cent measured employees’
absenteeism and 69 per cent measured customer satisfaction.
Training evaluation challenges. Despite its importance, there is evidence that evaluation of training
programmes is often inconsistent or missing. Eseryel (2002) provided possible explanations for inadequate
evaluations, which include insufficient budget, insufficient time, lack of expertise, blind trust in training
solutions, and lack of methods and tools. Al-Athari and Zairi (2002) stressed that the most important
evaluation challenges that deter Kuwaiti organisations from conducting sound evaluation were as follows:
finding evaluation methods that suit a variety of courses, the cost of doing evaluations well, translating
evaluation results into top management’s language, and determining specific actions to take based on
evaluation results. Al-Ali (1999) stated that the important challenges facing Kuwaiti organisations are the
difficulty in measuring performance improvement in certain (service) jobs, difficulties in measuring the
change in individuals’ behaviour over a short period of time, and the absence of a follow-up process after
training and development programmes.
Performance evaluation practices in the Malaysian perspective
Statutory regulation. The establishment of the Human Resource Development Act 1992 recognised the
importance of human resource development. The Act requires private organisations to contribute the
equivalent of one per cent of their monthly payroll to the Human Resource Development Fund, a fund
which can then be used to promote training. A special council, the Human Resource Development Council,
was set up to manage this fund, as well as to ensure high quality, standards and accountability among the
training providers in Malaysia. Concurrently, the Malaysian government has adopted the standards of the
International Organisation for Standardisation (ISO), an international quality certification system with a set of
world-wide standards. The ISO 9000 standards are for the operation of a quality management system to
ensure that a certified organisation has a quality system that would enable it to meet its published quality
standards. Another standard setting body in relation to the healthcare industry is the Malaysian Society for
Quality in Health (MSQH).
Selected empirical studies on performance evaluation – the Malaysian scenario. Hashim (2001) conducted a
study on the changing scenario of training evaluation in Malaysia and found that training consultants and
top management rank the importance of training evaluation highly, as a means to justify training investment
as well as to demonstrate improved performance and financial results. The Malaysian training institutions
studied revealed that trainee feedback was the most frequently used evaluation tool, followed by
observation, interview, performance analysis and reaction form, and that the reaction level is evaluated in
most training. Hassan, Hashim and Ismail (2006) conducted a study to investigate whether quality and
standardisation of work processes, as emphasised in ISO 9000 series certifications, is correlated with
improvement in HRD practices. They found that ISO-certified companies, compared to others, obtained
higher means on some HRD variables. Although the comparison between ISO and non-ISO certified
companies yielded some significant differences, they could not conclude that the differences were due to
ISO certification alone, as organisations in the sample were not matched.
3. A hospital setting as research sample
The literature review in Section 2 is complemented with a mailed questionnaire survey of 145 hospitals in
Malaysia as the research sample. The nature of this research is exploratory.
Results of survey
The results of this survey are presented using descriptive statistics. The sample profile and the descriptive
results are presented in Figures 1 to 6 and Table 1, whilst the results on challenges and concerns are
discussed and summarised in Table 2.
Key characteristics of the samples. In Malaysia, the Ministry of Health and non-Ministry of Health
government hospitals are categorised as public hospitals. All other types of hospitals are classified as
private hospitals. Of the sample of 145 public hospitals, a total of 100 hospitals participated in this study,
a response rate of 69 per cent. A profile of the sample is captured in Figure 1.
Figure 1: Sample and respondents (sample: 145 public hospitals; respondents: 100, or 69 per cent)
Respondents by department/division. Training, Human Resource, Human Resource Development,
Management, and Administration are the five departments/divisions identified for this study. Figure 2
shows that respondents from other departments/divisions made the highest contribution to this survey (50
per cent). This is followed by 14 per cent each from Management and Administration, 10 per cent from
Human Resource, 8 per cent from the Training department/division and 4 per cent from Human Resource
Development.
Figure 2: Respondents by departments/divisions (Other: 50%; Management: 14%; Administration: 14%;
Human Resource: 10%; Training: 8%; Human Resource Development: 4%)
Respondents by position in the organisation. Figure 3 shows that 56 managers, representing 56 per cent of
the total respondents, contributed to this survey. The remaining eleven positions in the list each account for
between 2 per cent and 16 per cent.
Figure 3: Respondents by position in the organisation (Manager: 56%; the remaining positions, namely
Assistant Manager, Senior Executive, Junior Executive, Administrator, Medical Officer, Director of
Nursing, Managing Director, Medical Assistant, Hospital Director, Training personnel, HR Officer and
Other, between 2% and 16% each)
Respondents by accreditation. Figure 4 illustrates that 65 per cent of 100 respondents are accredited under
the standard-setting bodies such as the International Organisation for Standardisation (ISO) and the
Malaysian Society for Quality in Health (MSQH).
Figure 4: Respondents by accreditation obtained (accredited: 65%; not accredited: 35%)
Importance of training evaluation
All the respondents believe in some level of importance of training evaluation. Figure 5 reveals that 12
of the 100 public hospitals (12 per cent) believe that training evaluation is most important, 38 per cent
believe it is very important, 30 per cent believe it is important, and 20 per cent believe it is somewhat
important. The respondents perceive the importance of training evaluation at an average score of 3.4; in
other words, the hospitals rate the importance of training evaluation at 68 per cent.
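The average score and its percentage equivalent follow directly from the response shares reported for Figure 5. As a minimal illustrative sketch (not part of the study's instrument; the `weighted_mean` helper and variable names are hypothetical), the calculation can be reproduced as follows:

```python
# Response shares for importance of training evaluation (Figure 5),
# keyed by Likert score: 1 = not important at all ... 5 = most important.
shares = {1: 0.00, 2: 0.20, 3: 0.30, 4: 0.38, 5: 0.12}

def weighted_mean(dist):
    """Mean Likert score implied by a {score: share} distribution."""
    return sum(score * share for score, share in dist.items())

avg = weighted_mean(shares)   # 3.42, reported in the text as 3.4
pct = avg / 5 * 100           # 68.4, reported as "68 per cent"
print(round(avg, 1), round(pct))  # prints: 3.4 68
```

The same conversion applied to the frequency distribution in Figure 6 yields the reported average of 3.6, or 72 per cent.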
Figure 5: Importance of training evaluation (most important: 12%; very important: 38%; important: 30%;
somewhat important: 20%; not important at all: 0%)
Frequency of conducting training evaluation
To support the findings on the importance of training evaluation, the respondents were asked about the
frequency of conducting training evaluations. Figure 6 shows that 56 of the 100 public hospitals (56 per
cent) collectively evaluate training programmes all the time or most of the time, 26 per cent conduct
training evaluation sometimes, and 18 per cent rarely or never evaluate training programmes at all. The
average score reveals that the frequency of training evaluation among respondents is 3.6, or 72 per cent.
Figure 6: Frequency of conducting training evaluation (all the time: 28%; most of the time: 28%;
sometimes: 26%; rarely: 12%; not at all: 6%)
Training evaluation tools and techniques
The six most popular instruments used by Al-Athari and Zairi (2002), namely questionnaires, tests,
interviews, observations, attitude surveys and performance records, were adopted in this study. The results
indicate that, among the training evaluation tools, the questionnaire is used to a great extent (all the
time/most of the time) by 68 per cent of the respondents. Observation and performance records are used by
56 per cent and 54 per cent respectively. Fewer than 24 per cent of the hospitals use attitude surveys,
interviews, tests or other management tools to evaluate training effectiveness. The average scores confirm
that the hospitals use the questionnaire most, with the highest score of 3.7, or 74 per cent usage.
Training evaluation models
The respondents were asked about the type of models and methods they use for training evaluation. Their
responses, shown in Table 1, indicate that all five training evaluation models are used only to a very small
extent. Among them, benchmarking is the model most used by the sample hospitals. The Kirkpatrick model
is used most of the time by 4 per cent of the hospitals and sometimes by 34 per cent.
Table 1: Training evaluation models

No.  Model                                    Not at all  Rarely  Sometimes  Most of the time  All the time  Total
1    Kirkpatrick                              56%         6%      34%        4%                0%            100%
2    CIRO                                     55%         6%      33%        6%                0%            100%
3    Benchmark                                45%         6%      34%        15%               0%            100%
4    Stufflebeam's CIPP                       59%         7%      32%        2%                0%            100%
5    Tyler's Goals-oriented/Objectives-based  53%         6%      37%        4%                0%            100%
6    Other model                              92%         4%      4%         0%                0%            100%
Evaluation of training outcomes
The respondents were asked about the evaluation of training outcomes based on the Kirkpatrick model.
Their answers indicate that all the hospitals evaluate training outcomes only sometimes, or to a moderate
extent. Training outcomes were evaluated on the four dimensions, namely reaction, impact on learning,
impact on job behaviour, and impact on performance.
Measuring training input and output
Results revealed that 62 per cent of the hospitals measure their training input. Of these, 87 per cent measure
the number of employees receiving training, 84 per cent the number of courses offered to their employees,
81 per cent total training expenditure, 71 per cent total training time/days and 60 per cent trainee travel
expenses. The respondents were also asked whether they measure training output; 80 of the 100 public
hospitals (80 per cent) do so. Of the 80 public hospitals which measure their training output, 88 per cent
measure customer satisfaction, 64 per cent measure productivity and 54 per cent measure their employees’
job satisfaction. Only 5 per cent and 8 per cent of the public hospitals measure cost/benefit ratio and
profitability respectively, and none of them measured sales or return on investment. This could be due to
the nature of the public sector, which is service- rather than business-oriented. Though absenteeism is an
important output, only 28 per cent of the hospitals evaluate this measure.
Challenges and concerns
In addition to their positive side, training evaluations can also cause some negative side effects and
inherent problems. These problems pose challenges and concerns when managing training evaluation
practice. The respondents were provided with a list of eleven challenges and asked to identify the degree to
which they face each in conducting sound and effective training evaluation.
Results revealed that 53 per cent of the 100 respondents believe that the time required to do evaluations
well is the most commonly faced challenge. More than 40 per cent of the respondents believe that the cost
of doing evaluations well, determining the impact on financial performance, identifying appropriate
quantitative and qualitative measures, finding evaluation methods that suit a variety of courses and finding
qualified measurement and evaluation professionals are challenges they face in evaluating training
programmes. In addition, more than 30 per cent of the respondents face challenges, to a great extent, in
getting trainees and managers to participate in evaluations, translating evaluation results into top
management's language and determining specific actions to take based on evaluation results. Further, 66
per cent of the hospitals face difficulty in obtaining the information needed. The top three difficulties in
obtaining information faced by the respondents concern the latest advances in measurement and evaluation,
how to conduct sound measurement and evaluation, and information on measurement and evaluation tools.
Table 2 summarises some of the challenges and concerns cited by the respondents.
Table 2: Challenges in training evaluations

Challenges and concerns in conducting sound and effective training evaluation:
- Cost of doing evaluations well
- Time required to do evaluations well
- Determining the impact on financial performance
- Identifying appropriate quantitative and qualitative measures
- Finding evaluation methods that suit a variety of courses
- Getting trainees and managers to participate in evaluations
- Finding qualified measurement and evaluation professionals
- Translating evaluation results into top management's language
- Determining specific actions to take based on evaluation results

Difficulty in obtaining information needed for training evaluation:
- Information on the latest advances in measurement and evaluation
- Information on how to conduct sound measurement and evaluation
- Information on measurement and evaluation tools themselves
- Information about tools/methods for benchmarking training outcomes against other companies or organisations
4. Discussion
The findings from this exploratory study have highlighted that training evaluation is perceived to be of
importance to hospitals in Malaysia. The respondents use the questionnaire and observation as the top
training evaluation tools or techniques. Among the five training evaluation models listed, benchmarking is
the most commonly used model. In contrast, Al-Athari and Zairi (2002) found that most of their sample
used the Kirkpatrick model to evaluate training.
The respondents believe that the top five challenges they face in conducting sound and effective training
evaluation are: the time required to do evaluations, the cost of doing evaluations well, determining the
impact on financial performance, identifying appropriate quantitative and qualitative measures, and finding
evaluation methods that suit a variety of courses. The results of this study indicate that the utmost concern
is the difficulty the hospitals face in obtaining the information needed for training evaluation, in particular
on the latest advances in measurement and evaluation, how to conduct sound measurement and evaluation,
and measurement and evaluation tools. It is certainly a challenge to manage training evaluation in
Malaysia.
Despite some meaningful implications, there are some limitations to this study. This research focused on
the responses of 100 hospitals in Malaysia and is not representative of all hospitals in Malaysia.
5. Future research
With the current increased awareness of public healthcare and the accreditation bodies established in
Malaysia, an extension of this study could include more samples from other hospitals. Also, given the
increased number of private hospitals in Malaysia and the growing emphasis on staff health and wellness,
future studies may want to consider a public versus private hospital comparison of training evaluation
practices. A comparative research study could also be extended to other sectors in the healthcare industry,
such as pharmaceuticals and biotechnology. Future researchers may further extend this study to include
inter- and intra-industry comparative studies on training evaluation.
6. Conclusion
The aim of this study was to provide a literature review of training evaluation in general and in the hospital
setting. The literature review is complemented with a survey from a small-scale research project, for which
145 hospitals in Malaysia were selected as the sample. This exploratory study, although relatively small in
scope, offers valuable insight into the challenges and concerns of training evaluation, in particular in a
hospital setting. The findings have highlighted that training evaluation is perceived to be of importance to
hospitals in Malaysia. Though some organisations are aware and agree that evaluation of training
programmes is gradually becoming a concern, not all have gone far enough to actually evaluate training
effectiveness. More often than not, training departments concentrate on providing training for employees
rather than evaluating its effectiveness. Training evaluation practice is still at a nascent stage in Malaysian
hospitals. Respondents gave a moderate rating to most of the questions. Therefore, it may be concluded that
the focus on training evaluation has yet to come to centre stage in Malaysian hospitals.
Training evaluation needs to be creative and multi-dimensional, providing rich subjective information
while avoiding data overload. It should focus on the contribution of training and development to the
organisation. Trainers must take responsibility for reporting on the effectiveness as well as the efficiency of
training.
The results of this study also suggest that the difficulty the hospitals face in obtaining the information
needed for training evaluation is of top concern. The most important evaluation challenges that deter the
hospitals from conducting sound evaluation are: the time required to do evaluations, the cost of doing
evaluations well, and determining the impact on financial performance.
Training evaluation in the hospital setting in Malaysia has not been explored extensively; little empirical
research exists that examines training evaluation in Malaysian hospitals. Therefore, this study seeks to
address this gap in the literature. The researcher hopes that this exploratory
study will not only add to the volume of research in training evaluation but also spur more interest in
training evaluation in the healthcare domain, especially in Malaysia. Finally, this study may provide a
useful source of information for human resource personnel, managers, training practitioners and
academics.
References
Al-Ali, A. (1999) HRD training and development practices and related organisational factors in Kuwaiti
organisations. PhD thesis. University of Bradford, Bradford.
Al-Athari, A. and Zairi, M. (2002) Training evaluation: an empirical study in Kuwait. Journal of European
Industrial Training, 26, 241-251.
Bernthal, P.R. (1995) Evaluation that goes the distance. Training & Development, September, 41-45.
Boulmetis, J. and Dutwin, P. (2000) The ABC's of Evaluation: Timeless Techniques for Program and
Project Managers. San Francisco: Jossey-Bass.
Brinkerhoff, R.O. (2006) Increasing impact of training investments: an evaluation strategy for building
organizational learning capability. Industrial and Commercial Training, 38, 302-307.
Buckley, R. and Caple, J. (2000) The Theory and Practice of Training, 4th edition. London: Kogan Page.
Eseryel, D. (2002) Approaches of training: theory and practice. Educational Technology & Society, 5, 93-98.
Hashim, J. (2001) Training evaluation: clients’ roles. Journal of European Industrial Training, 25, 374-379.
Hassan, A., Hashim, J. and Ismail, A.Z.J. (2006) Human resource development practices as determinant of
HRD climate and quality orientation. Journal of European Industrial Training, 30, 4-18.
Hatton, A. (2003) Adding heart to your evaluation. Industrial and Commercial Training, 35, 210-216.
Kirkpatrick, D. (1971) A Practical Guide for Supervisory Training and Development. Reading, MA:
Addison-Wesley.
Lingham, T., Richley, B. and Rezania, D. (2006) An evaluation system for training programs: a case study
using a four-phase approach. Career Development International, 11, 334-351.
Mann, S. and Robertson, I.T. (1996) What should training evaluations evaluate? Journal of European
Industrial Training, 20, 14-20.
Phillips, J. (1990) Training evaluation and measurement methods. Houston, TX: Gulf Publishing Co.
Schalock, R. (2001) Outcome Based Evaluations, 2nd edition. Boston: Kluwer Academic/ Plenum.
Stufflebeam, D.L. (2001) Evaluation Models. San Francisco: Jossey-Bass.