
IEEE TRANSACTIONS ON POWER SYSTEMS, VOL. 16, NO. 4, NOVEMBER 2001
Design of Generic Direct Sparse Linear System
Solver in C++ for Power System Analysis
Shubha Pandit, Student Member, IEEE, S. A. Soman, and S. A. Khaparde, Senior Member, IEEE
Abstract—This paper presents the design of a generic Linear System Solver (LSS) for a class of large sparse symmetric matrices over real and complex numbers. These matrices correspond to one of the following: 1) Symmetric Positive Definite (SPD) matrices, 2) complex Hermitian matrices, 3) complex matrices with SPD real and imaginary parts. Such matrices arise in various power system analysis applications like load flow analysis and short circuit analysis. The template facility of C++ is used to write a generic program over the float, double and complex data types. The design of the algorithm guarantees numerical stability and an efficient sparsity implementation. A reusable class SET is defined to cater to graph theoretic computations. LSS problems with matrices of up to 20 000 nodes have been tested. Another feature of the proposed LSS is the implementation of an associative array, which allows subscripting an array with character strings, such as bus names. This helps in making power system analysis software user friendly. The proposed LSS reflects an important development toward truly object oriented power system analysis software.
Index Terms—Graph theoretic applications, linear system
solver, LU decomposition, object oriented programming, sparse
matrix computations.
I. INTRODUCTION
OBJECT Oriented Programming (OOP) has been accepted by the power industry as a viable alternative to traditional procedural programming. In all these developments, C++ has been almost the universal choice for implementation.
An OOP language like C++ exhibits the following features, which are not supported by procedural languages [1].
• Definition of classes and their methods: This involves
designing classes by encapsulating data and methods.
Neyer et al. [2] discuss definition of classes for power
system apparatus like lines and transformers.
• Operator overloading: Methods for arithmetic and logical operations such as addition, subtraction, etc. on user
defined classes like matrix, vector can be implemented by
overloading the corresponding arithmetic operators.
• Encapsulation: Methods written in other languages like
FORTRAN can be encapsulated in C++, thereby reducing
the software development time. For example, FORTRAN
NAG library routines have been encapsulated in C++
for implementation of load flow analysis in reference
[3]. However, such an implementation lacks flexibility
and may not correspond to a good OO implementation.
Defining reusable classes and methods is necessary to
take full advantage of the OO paradigm. For example, it may
be advisable to write OO software for library functions
like LSS, matrix computations, etc.
• Inheritance: In C++, a generic base class can be defined from which more specific application classes can be derived. Zhou [3] has defined a network container base class which contains classes for power system apparatus. Virtual functions are also defined on the base class. AC load flow is then a derived class which has load flow as a method. While derived classes and the use of virtual functions improve the flexibility of a large program, they may result in a loss of speed, as the choice of implementation is made at run time rather than at compile time.
• Use of template facility: In C++, templates provide another level of generalization. In other words, a unique code which caters to many data types can be written by using the template facility [3]. For example, a class matrix may be required to support data types such as float, double and complex, complex being a user defined data type available in the standard library. This means that all methods, like arithmetic operations, defined for this class should work equally well on all these data types, without duplication of code. The algorithm written using the template definition for class matrix is therefore written with a generic data type T. The user program then specifies the required data type. Use of templates adds flexibility at the cost of compile time. Run time efficiency, however, is not sacrificed.
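As an illustration of this facility, the following is a minimal sketch of a templated matrix class whose arithmetic works unchanged for float, double and std::complex element types; the class and member names are illustrative and are not taken from the paper's code.

#include <complex>
#include <vector>

// Minimal sketch of a templated matrix: the same code serves
// float, double and std::complex<double> element types.
template <class T>
class matrix {
public:
    explicit matrix(int n) : n_(n), a_(n * n, T(0)) {}
    T& operator()(int i, int j)             { return a_[i * n_ + j]; }
    const T& operator()(int i, int j) const { return a_[i * n_ + j]; }

    // Overloaded operator+ works for any T that supports '+'.
    matrix operator+(const matrix& rhs) const {
        matrix sum(n_);
        for (int k = 0; k < n_ * n_; ++k) sum.a_[k] = a_[k] + rhs.a_[k];
        return sum;
    }
private:
    int n_;
    std::vector<T> a_;
};

int main() {
    matrix<double> a(3), b(3);                  // real matrices
    matrix<std::complex<double> > y(3), z(3);   // complex matrices
    matrix<double> c = a + b;                   // same code path,
    matrix<std::complex<double> > w = y + z;    // different data types
    return 0;
}

The user program fixes the data type at the point of instantiation, so the choice is made at compile time and no run time dispatch is involved.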
OOP for sparse matrix computations is discussed in reference [4]. Reference [5] identifies two important computational
classes, one for sparse matrix computations and the other for
graph theoretic computations. Recently, OO libraries developed for sparse matrix computations have received significant attention from numerical analysts and researchers in scientific computation. To quote Dongarra et al. [6], “besides simplifying
the subroutine interface, the OO design allows driving code
to be used for various sparse matrix formats, thus addressing
many of the difficulties encountered with the typical approach
to sparse matrix libraries. Using C++ for numerical computation can greatly enhance clarity, reuse, and portability.” Another important area of research is development of library for
ordering sparse matrices. Such a library permits the user to choose an ordering methodology depending upon problem requirements. However, our experience suggests that the learning curve involved in such work may be steep. Dobrian et al. [7] discuss the design of
sparse direct solvers using OO technique. The implementation
is in C++. They identify the design goals as managing complexity, simplicity of integration, flexibility, extensibility, reusability and efficient coding. Ashcraft et al. [8] have designed an OO code
called SMOOTH for reduced fill ordering. An OO package of
direct and iterative solvers, called SPOOLES, has been reported
in reference [9]. Both these packages are implemented in C.
In this paper, we discuss design and implementation of sparse
LSS in C++, for computations in power system applications.
This provides a focused development of methods that meet
requirements such as speed, stability, simplified interface and scalability. Consider the implementation of an LU algorithm for a symmetric sparse matrix. Such matrices arise in power system analysis, e.g., the real matrices $B'$ and $B''$ in Fast Decoupled Load Flow (FDLF) and the complex admittance matrix (YBUS) in short circuit
analysis. The requirement of basic data type is float (or double)
and complex respectively. One of the challenges in designing a
good C++ program lies in exploiting the template facility for
non trivial problems. The question is: can there be a unique, sparse, efficient implementation of the LU algorithm using this feature over the said data types? An efficient implementation implies a numerically stable implementation with minimum computational complexity and a memory requirement proportional to the number of nonzeros in the original matrix. In fact, it is difficult to write an efficient unique LU factorize routine for a general class of symmetric matrices over real numbers, as the implementation issues associated with SPD matrices differ a lot from those for indefinite symmetric matrices. However, by
restricting the class of matrices to either SPD or symmetric
indefinite, efficient implementation can be developed.
This paper proposes the design of an LU decomposition based LSS, using templates, for a class of real and complex matrices. The matrices considered need to have symmetry and diagonal dominance. Since the YBUS matrix and the matrices usually derived from it are diagonally dominant, the proposed implementation is best suited for power system applications like load flow and short circuit analysis. Another advantage of the proposed solver is that it works directly with bus names as subscripts, thereby freeing the user from assigning bus numbers to buses.
The paper is organized as follows. Sections II and III discuss the issues regarding LU decomposition of real and complex matrices respectively. Implementation of Analyze and Factorize phases is developed in Sections IV and V. While a class
sparse_matrix is defined in Section VI, Section VII relates the
character strings to integers using the associative arrays. Results are compiled in Section VIII. Section IX concludes the
discussion.
II. LU DECOMPOSITION WITH REAL MATRICES

Consider direct solution of a sparse linear system $Ax = b$, where $A$ is a real symmetric matrix and $b$ is a real vector. We restrict the discussion to LU decomposition based solvers. The matrices arising in LSS can be grouped into the following two categories [10]:
R1: matrix $A$ is real sparse SPD;
R2: matrix $A$ is sparse indefinite.
Typically, matrices in FDLF and in the normal equation approach for power system state estimation belong to the first category, while matrices in Hachtel's method for state estimation, etc., belong to the latter. If application algorithms are so chosen that they lead to sparse LSS with real sparse SPD matrices, significant computational savings can be achieved.

When the matrix is indefinite, the choice of pivot has to be undertaken considering both the numerical magnitude (size) of the pivot and the number of fills that will result. Thus, for this category, the choice of pivot has to be determined during numerical factorization and therefore fills cannot be predicted a priori. This implies the use of a dynamic data structure (the preferred one being a linked list). A dynamic data structure tends to distribute data in memory instead of storing it in a contiguous block. This involves extra cache fetches, leading to problems like page thrashing, which together slow down the implementation. Bunch et al. [11] discuss various complete and partial pivoting strategies.

In contrast, for SPD matrices, it can be shown that diagonal pivots are always stable. An important consequence is that diagonal pivots can be selected prior to actual factorization. The Minimum Degree Algorithm (MDA) [12] is one of the most commonly used strategies for ordering nodes so as to reduce/minimize fills. A static data structure can then be set up in a contiguous block of memory prior to numerical (actual) factorization. Typically, this is known as the Analyze phase; the Factorize phase corresponds to actual numerical factorization.
III. LINEAR SYSTEM SOLVER WITH COMPLEX MATRICES
Let $A$ be a complex matrix given by $A = A^{r} + jA^{i}$. We can group $A$ into the following three categories:
C1: $A$ is a complex symmetric matrix with the matrices $A^{r}$ and $A^{i}$ real SPD;
C2: $A$ is a complex Hermitian matrix;
C3: $A$ does not belong to category C1 or C2 above.
If a matrix is complex Hermitian (category C2), it is known that Cholesky factorization can be performed. Because of the diagonal dominance of partially factored matrices, diagonal pivots are stable. However, complex symmetric matrices (categories C1 and C3) do not admit such simple methods, an example being YBUS (category C1). Consider the admittance model formulation for short circuit analysis:
$$ (G + jB)\,(V^{r} + jV^{i}) = I^{r} + jI^{i} \tag{1} $$
One way to solve equation (1), though not elegant, is to convert the problem into a real LSS problem as follows:
$$ \begin{bmatrix} G & -B \\ B & G \end{bmatrix} \begin{bmatrix} V^{r} \\ V^{i} \end{bmatrix} = \begin{bmatrix} I^{r} \\ I^{i} \end{bmatrix} \tag{2} $$
(Superscripts $r$ and $i$ refer to the real and imaginary parts.) It is known that such a scheme can be inefficient in storage and time [13]. One of the best available schemes to factorize a symmetric complex matrix is a version of Bunch's diagonal pivoting method for symmetric indefinite LSS [11]. The essential step at each stage of the elimination is to choose $1 \times 1$ or $2 \times 2$ blocks for pivoting. This choice is dictated by the magnitude of the pivot, which affects the numerical stability of the implementation. S. M. Serbin [14] showed that for a category C1 matrix, the LINPACK diagonal pivoting decomposition proceeds without the necessity for pivoting. The analysis also holds for LU
decomposition. Though this reference did not deal with
sparse LSS, we can summarize the following:
• The LU algorithm for sparse SPD LSS can be used by replacing the data type float with the data type complex, without affecting numerical stability.
• For matrices of category R2 and C3, an LU algorithm may be implemented based on Bunch's partial pivoting strategy.
This paper discusses the design and implementation of a generic sparse LSS, using the template facility in C++, over the following categories of matrices:
1) SPD real matrices (category R1);
2) complex symmetric matrices with real and imaginary components as SPD matrices (category C1);
3) complex Hermitian matrices (category C2).
The next section discusses the design of the Analyze phase.
IV. ANALYZE PHASE
Implementation of the Analyze phase involves ordering to reduce fills (implemented through MDA) and setting up the data structure for the sparse matrix. The data structure is developed by an algorithm described in [10].
Implementation of MDA: MDA is one of the most successful ordering algorithms for SPD matrices, and various enhancements to speed it up have been suggested in the literature. While designing an efficient implementation of MDA, the following design goals were specified.
1) As MDA involves graph theoretic operations, the class and methods designed for MDA should be useful in other graph theoretic operations (like finding connectivity of a graph, trees, loops, etc.), which are involved in observability analysis and relay coordination of meshed systems.
2) Graph theoretic applications, including MDA, routinely require finding one or more nodes or edges, unions of nodes/edges, etc. Implementation of operations like intersection and union should be optimal, both in time and in space. The implementation should provide an elegant interface to the user.
The discussion so far clearly suggests the suitability of a class set with methods for intersection, union, etc. defined on it. The use of a set class to model topological relationships and graph traversal was suggested in [2]. A set class is also available in the standard template library, along with a generalized map class which serves various applications like dictionaries and telephone directories [1]. However, the proposed class SET is tailored only for the graph theoretic operations required in sparse matrix computations. Object oriented technologies are leveraged through reuse: class SET can be reused in applications like Network Topology Processing.
Introduction to Class SET: Class SET is defined as follows. A graph of N vertices can be represented by a collection of N sets. The set for a vertex contains the vertices adjacent to it; for convenience, the vertex itself is also included as a member of its set. Set operations like union, intersection, difference and logical operations are implemented by overloading operators, as shown in Table I.

TABLE I
OVERLOADED OPERATORS
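A minimal sketch of such a class over an ordered-set representation follows; the particular operator symbols chosen below are illustrative (the paper's actual assignments are those of Table I).

#include <algorithm>
#include <cstddef>
#include <iterator>
#include <vector>

// Sketch of a SET class for sparse graph-theoretic work: vertices are
// held as a sorted vector of integers and the usual set operations are
// exposed through overloaded operators.
class SET {
public:
    SET() {}
    explicit SET(std::vector<int> v) : elem_(v) {
        std::sort(elem_.begin(), elem_.end());
        elem_.erase(std::unique(elem_.begin(), elem_.end()), elem_.end());
    }
    int cardin() const { return static_cast<int>(elem_.size()); }

    SET operator+(const SET& s) const {            // union
        SET r;
        std::set_union(elem_.begin(), elem_.end(), s.elem_.begin(),
                       s.elem_.end(), std::back_inserter(r.elem_));
        return r;
    }
    SET operator*(const SET& s) const {            // intersection
        SET r;
        std::set_intersection(elem_.begin(), elem_.end(), s.elem_.begin(),
                              s.elem_.end(), std::back_inserter(r.elem_));
        return r;
    }
    SET operator-(const SET& s) const {            // difference
        SET r;
        std::set_difference(elem_.begin(), elem_.end(), s.elem_.begin(),
                            s.elem_.end(), std::back_inserter(r.elem_));
        return r;
    }
private:
    std::vector<int> elem_;
};

In an MDA context, for instance, eliminating a vertex amounts to merging its adjacency set into those of its neighbours and then removing the eliminated vertex, so the whole degree update is expressed with these overloaded operators.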
Optimal Implementation of Class SET: A set, as per definition, is a collection of distinct elements. From the point of view of implementation, a set can be realized as an unordered set, a bit set or an ordered set [15].
Unordered Set: A set is said to be unordered when its elements do not appear in any particular order. The union operation then requires an effort of O(s1.cardin × s2.cardin), which is much more than linear effort. Therefore, it is not advisable to use an unordered set.
Bit Set: Let the vertices of the original graph be represented by the set $V = \{1, 2, \ldots, N\}$. All sets involved in a graph theoretic implementation are subsets of $V$. Such sets can be represented by a collection of $N$ bits: if vertex $i$ is present in a set, then bit $i$ is set to 1; else, it is set to 0. The advantage of the bit set representation is that set operations like union can correspond to the OR operation, difference to an exclusive OR and intersection to the AND operation on the bit sets. C++ supports bit operations on a word length. The bit set implementation is very convenient when the order of $V$ is small, typically 32 or 64. However, when $N$ is large, say 10 000, the bit set implementation requires of the order of $N/32$ integers per set to replicate the $N$ bits. Thus, the memory requirement becomes prohibitive. As such, the bit set implementation is not very efficient for large scale systems.
Ordered Set: A set is said to be ordered when the elements of the set are arranged in ascending (or descending) order. Here the union operation can be performed in O(s1.cardin + s2.cardin), typically much less than the O(N) effort for the bit set implementation. This justifies the use of ordered sets for large scale problems.
Intersection in Ordered Sets: Two efficient implementations of intersection are possible with ordered sets. One implementation would involve an algorithm similar to the union, the complexity of which would be O(s1.cardin + s2.cardin). However, a single find operation to search for an element in a set can be performed in O[log2(S.cardin)] using the divide and conquer strategy. In this strategy, a set is partitioned into ordered subsets with cardinality S.cardin/2. Only one of the two subsets can contain the element being searched. Recursive application leads to the solution. At each step of the recursion, the effort involved to solve the problem is halved; hence the logarithmic order of complexity. The intersection of two sets s1 and s2 can be computed, using this technique, with a complexity of O[s1.cardin × log2(s2.cardin)].
When s1.cardin × log2(s2.cardin) is much less than (s1.cardin + s2.cardin), it is preferable to use the divide and conquer algorithm.
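The two strategies can be sketched as follows (the function names are illustrative); the merge-style scan costs O(s1.cardin + s2.cardin), while the binary-search version costs O(s1.cardin × log2(s2.cardin)) and wins when the first set is much smaller than the second.

#include <algorithm>
#include <cstddef>
#include <vector>

// Linear merge-style intersection of two sorted vectors: O(n1 + n2).
std::vector<int> intersect_merge(const std::vector<int>& s1,
                                 const std::vector<int>& s2) {
    std::vector<int> r;
    std::size_t i = 0, j = 0;
    while (i < s1.size() && j < s2.size()) {
        if (s1[i] < s2[j])      ++i;
        else if (s2[j] < s1[i]) ++j;
        else { r.push_back(s1[i]); ++i; ++j; }
    }
    return r;
}

// Divide-and-conquer intersection: each element of the smaller set s1
// is located in s2 by binary search, giving O(n1 * log2(n2)).
std::vector<int> intersect_div_con(const std::vector<int>& s1,
                                   const std::vector<int>& s2) {
    std::vector<int> r;
    for (std::size_t i = 0; i < s1.size(); ++i)
        if (std::binary_search(s2.begin(), s2.end(), s1[i]))
            r.push_back(s1[i]);
    return r;
}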
The factorize and solve phases are discussed in brief in the
next section.
TABLE II
COMPUTATION TIME FOR POWER SYSTEM DATA
V. FACTORIZE PHASE
LU factorization involves the following computations at each elimination stage $k$:
$$ l_{ik} = a_{ik} / a_{kk} \tag{3} $$
$$ a_{ij} \leftarrow a_{ij} - l_{ik}\, a_{kj} \tag{4} $$
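A minimal sketch of how these update steps can be coded once over a generic element type T is given below (a dense, in-place version is shown for clarity, and the routine name is illustrative; the actual sparse routine operates on the static data structure of Section VI).

#include <complex>
#include <vector>

// In-place LU factorization without pivoting, written once for a
// generic element type T; the same template instantiates for double
// and std::complex<double>.  The matrix a is n-by-n, stored row-major.
template <class T>
void factorize(std::vector<T>& a, int n) {
    for (int k = 0; k < n; ++k) {
        for (int i = k + 1; i < n; ++i) {
            a[i * n + k] = a[i * n + k] / a[k * n + k];       // eq. (3)
            for (int j = k + 1; j < n; ++j)
                a[i * n + j] -= a[i * n + k] * a[k * n + j];  // eq. (4)
        }
    }
}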
With the user defined data type complex and overloaded arithmetic and logical operators, a generic code can be written for the factorize phase using the template facility. Such an implementation is numerically stable for the matrices mentioned in Section III. The following section discusses the implementation of a class sparse_matrix in which the methods MDA, factorize and solve (forward-backward substitution) are implemented.
VI. IMPLEMENTATION OF CLASS sparse_matrix
Data Structure: The memory of a large sparse system can be
optimized by storing only the nonzero elements of the matrix.
In order to minimize searches for elements in the matrix, a data structure spmat, which allows the creation of a static linked list, is defined. The fields link_row and link_col contain the index of the next element in the same row or column of the matrix.
The class sparse_matrix can then be defined along the following lines.
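A minimal sketch consistent with the above description follows; member and method names other than spmat, link_row, link_col, sort and solver are illustrative assumptions, and std::vector stands in here for the paper's vector class.

#include <vector>

// One nonzero element of the sparse matrix.  link_row and link_col hold
// the index of the next nonzero in the same row/column, so the elements
// form a static linked list inside one contiguous array.
template <class T>
struct spmat {
    int row, col;     // position of the element
    T   value;        // numerical value
    int link_row;     // next element in this row (-1 if none)
    int link_col;     // next element in this column (-1 if none)
};

template <class T>
class sparse_matrix {
public:
    explicit sparse_matrix(int n)
        : n_(n), row_head_(n, -1), col_head_(n, -1) {}

    void add_element(int i, int j, const T& v);  // store a nonzero
    void sort();        // establish link_row/link_col (Duff-Reid style pass)
    void mda();         // Analyze phase: minimum degree ordering, symbolic LU
    void factorize();   // Factorize phase: numerical LU, eqs. (3)-(4)

    // Forward-backward substitution; friend of sparse_matrix and vector.
    template <class U>
    friend std::vector<U> solver(sparse_matrix<U>& a,
                                 const std::vector<U>& b);

private:
    int n_;
    std::vector< spmat<T> > elem_;           // nonzeros, in any input order
    std::vector<int> row_head_, col_head_;   // first nonzero of each row/column
};

In use, the nonzero elements may be supplied in any order; sort() then establishes the row and column links, after which the Analyze, Factorize and solve steps follow the sequence timed in Section VIII.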
It can be seen that the class abstraction hides details of data
structure from the user. The implementation exploits sparsity
and performs checks on validity of matrix operations on the intended instances of the class. As the elements of a sparse matrix can be stored in any order, a method is required to establish
links between the elements, by assigning appropriate values of
link_row and link_col. An efficient procedure is given by Duff and Reid [10]; the implementation is supported by the method sort of this class. The method solver is a friend function of the two classes sparse_matrix and vector and returns the solution in the form of a vector. The class vector is discussed in [4] and hence is not discussed here.
The following section discusses the mechanism used to allow character strings as array subscripts instead of integers.
VII. IMPLEMENTING ASSOCIATIVE ARRAY
In the graphs associated with power systems, buses or nodes
have physical significance. Typically, reference by name reflects
this information, e.g., name of the place, kV class, etc. In contrast, bus numbers do not convey any information. While it was not possible to have character subscripts in languages like FORTRAN and C, it is possible to implement this feature in C++ by defining an associative array and overloading the subscript operator [ ].
We define a data structure pair to implement class
asso_array. The struct pair stores bus name as a character
string and bus number as an integer. Given a bus name as
the key, the corresponding bus number is made available by
methods defined on class asso_array. In the simplest sense, an associative array defines a map between a character string and an integer. Use of the associative array makes numerical computations more user friendly.
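A minimal sketch of the idea follows; the struct is renamed pair_entry here to avoid clashing with std::pair, the method bodies are illustrative, and the bus names in the usage comment are invented examples.

#include <cstddef>
#include <string>
#include <vector>

// The paper's struct "pair": a bus name together with its bus number.
struct pair_entry {
    std::string name;
    int         number;
};

// Associative array: subscripting with a bus-name string returns the
// corresponding bus number, assigning a fresh number on first use.
class asso_array {
public:
    int operator[](const std::string& bus_name) {
        for (std::size_t k = 0; k < table_.size(); ++k)
            if (table_[k].name == bus_name) return table_[k].number;
        pair_entry p;
        p.name = bus_name;
        p.number = static_cast<int>(table_.size());
        table_.push_back(p);
        return p.number;
    }
private:
    std::vector<pair_entry> table_;
};

// Usage (bus names are illustrative): the user never assigns numbers.
//   asso_array bus;
//   int i = bus["POWAI_220"];
//   int j = bus["TROMBAY_220"];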
VIII. RESULTS
The results presented were obtained on an Intel Pentium Pro processor at 200 MHz with 32 MB RAM running a Linux kernel (version 2.0.34). All programs were compiled using the GNU C++ compiler (version egcs 1.0.2-8) with the third level of optimization (-O3).
The timing reported for solution of a linear system includes:
Analyze Phase:
1) Setting up a static linked list using sort routine.
2) Extracting sparse matrix connectivity data from step 1 to
create SET representation.
3) New ordering using MDA.
4) Setting up LU data-structure (symbolic factorization).
Numerical Computation Phase:
1) Numerical factorization.
2) Forward backward substitution.
All the overheads accompanying class SET, sparse_matrix and
asso_array have been accounted for. The proposed MDA implementation has been used successfully for ordering nodes in power systems of up to 1044 nodes. Computation times for Fast Decoupled Load Flow and short circuit analysis on these systems, using the proposed LSS, are presented in Table II.
TABLE III
ENHANCED MDA WITH BIT SET AND ORDERED SET

TABLE IV
RESULTS WITH PROPOSED IMPLEMENTATION

TABLE V
RESULTS WITH SPARSPAK IMPLEMENTATION

TABLE VI
RESULTS WITH SPOOLES IMPLEMENTATION

TABLE VII
PERFORMANCE EVALUATION
In addition, more than a hundred large sparse matrices were generated using the random sparse matrix generator available in MATLAB. The test systems correspond to an average connectivity of 7 per node. Table III compares the timing (in msec) of MDA on these systems, where class SET is implemented by a bit set (column 4) and an ordered set (columns 5 to 7). The label div_con stands for implementation of the intersection and difference operators using the divide and conquer strategy. Similarly, without div_con refers to an implementation with linear worst case complexity. Results using both
these strategies are compared in Table III, columns 6 and 7. The
results clearly show that the complexity of the bit set implementation is worse than that of the ordered set (refer to Section IV). It can be seen that the best implementation of MDA is obtained using a) an ordered set, b) the divide and conquer strategy for the intersection and difference operations, and c) returning variables by reference.
Table IV shows the breakup of the total solution time (in msec) required for MDA, symbolic factorization and numerical factorization on the test systems of Table III. Results for the same test systems with the standard implementation SPARSPAK, available on netlib, are presented in Table V. The FORTRAN subroutine
GENQMD of SPARSPAK uses quotient graph model with enhancements like indistinguishable nodes for ordering the nodes.
Function SMBFCT for symbolic factorization, GSFCT for numerical factorization and GSSL for forward backward substitution have been implemented in this package.
Table VI summarizes the performance using the SPOOLES package [9]. This package is designed to solve two types of real or complex LSS. The driver program allInOne.c uses default
parameters for multiple elimination of vertices with minimum
priority, with exact external degree for each vertex. Only two
adjacent nodes are used for graph compression required in
finding indistinguishable vertices. Pivoting is not used. The
package has an OO design but is implemented in C. Therefore,
computations with this package do not involve overheads
inherent with OOP languages like C++. Even though struct is used for encapsulating data, designing simple interfaces, creating multiple objects, developing polymorphic code, using templates, etc. is not straightforward in C. All the same, the package has utility in objectively evaluating the efficacy of the proposed implementation.
Table VII compares performance of the three packages on the
20 000 node test system, which is illustrative of the trends seen
in Tables IV–VI. It is observed that:
1) In the Analyze phase, the C based SPOOLES implementation gives optimal results. Performance (time and fills) of the proposed implementation lies between SPOOLES and SPARSPAK. SPOOLES incorporates the best quality of ordering, reflecting the ongoing progress in ordering technology.
2) The FORTRAN based SPARSPAK implementation provides the best results in the Numerical Computation phase. It can be observed that the SPOOLES implementation has a very slow solver. In SPOOLES, the factorization and solve phases are based on front matrices; as direct factorization is implemented, dense fronts are stored. The proposed implementation ranks next to the SPARSPAK implementation in this phase.
3) The overall performance of the proposed implementation is superior to SPARSPAK and SPOOLES. It can be seen that the total computation time is the least for the proposed implementation. Hence, we conclude that the proposed OO design and C++ implementation is competitive!
IX. CONCLUSION
In reply to the question raised in Section I, we state that it is possible to have a unique, efficient implementation of LU factorization for a class of SPD and complex SPD-like matrices.
While it is difficult to numerically verify the symmetric positive definite nature of matrices in power system analysis, YBUS and the matrices in FDLF exhibit diagonal dominance. The proposed class sparse_matrix can be augmented with methods for sparse QR decomposition, wherein the MDA function can be reused. An important conclusion is that this OO implementation is competitive with a procedural implementation. Some noteworthy OO design and C++ implementation contributions are:
1) Design of class SET to facilitate and simplify MDA implementation. The class SET is reusable in various graph
theoretic applications like observability analysis and relay
coordination in mesh systems.
2) A carefully crafted C++ implementation using return by reference, templates and operator overloading. This provides speed as well as a simplified interface and programming. Without any run time penalty, templates integrate the factorize and solve phases of the LSS over a class of real and complex matrices.
3) Matching of matrices in power systems with sparse LSS technology (data structures, algorithms and numerical analysis). Successful usage of the proposed LSS in power system applications is therefore justified.
REFERENCES
[1] B. Stroustrup, The C++ Programming Language, 3rd ed. Addison-Wesley Publishing Company, 1997.
[2] A. F. Neyer, F. F. Wu, and K. Imhof, “Object oriented programming for flexible software: Example of a load flow,” IEEE Trans. Power Systems, vol. 5, no. 3, pp. 689–695, 1990.
[3] E. Z. Zhou, “Object oriented programming, C++ and power system simulation,” IEEE Trans. Power Systems, vol. 11, no. 1, pp. 206–215, 1996.
[4] B. Hakavik and A. T. Holen, “Power system modeling and sparse matrix operations using object oriented programming,” IEEE Trans. Power Systems, vol. 9, no. 2, pp. 1045–1052, 1994.
[5] S. Pandit, S. A. Soman, and S. A. Khaparde, “Object-oriented design for power system applications,” IEEE Computer Applications in Power, vol. 13, no. 4, pp. 43–47, 2000.
[6] J. Dongarra, A. Lumsdaine, X. Niu, R. Pozo, and K. Remington. (1994)
A Sparse Matrix Library in C++ for High Performance Architectures.
[Online]. Available: www.cs.utk.edu/~library/1994.html
[7] F. Dobrian, G. Kumfert, and A. Pothen. (1999, Sept.) The Design of
Sparse Direct Solvers Using Object-Oriented Techniques: ICASE,
NASA/CR-1999-209 558. [Online]. Available: www.icase.edu/Dienst/UI
[8] C. Ashcraft and J. W. H. Liu. (1996, Nov.) SMOOTH: A Software Package for Ordering Sparse Matrices. [Online]. Available:
www.solon.cma.univie.ac.at/~neum/software.html
[9] C. Ashcraft and J. W. H. Liu. (1999) SPOOLES: An Object Oriented Sparse Matrix Library. [Online]. Available:
www.netlib2.cs.utk.edu/linalg/spooles/spooles.2.2.html
[10] I. S. Duff, A. M. Erisman, and J. K. Reid, Direct Methods for Sparse
Matrices. Oxford: Clarendon Press, 1986.
[11] J. R. Bunch and L. Kaufman, “Some stable methods for calculating inertia and solving symmetric linear systems,” Mathematics of Computation, vol. 31, pp. 163–179, 1977.
[12] A. George and J. W. H. Liu, “The evolution of the minimum degree ordering algorithm,” SIAM Review, vol. 31, no. 1, pp. 1–19, 1989.
[13] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in C. Daryaganj, New Delhi: Cambridge University Press, 1992.
[14] S. M. Serbin, “On factoring a class of complex symmetric matrices
without pivoting,” Mathematics of Computation, vol. 35, no. 152, pp.
1231–1234, 1980.
[15] T. A. Budd, Classic Data Structures in C++, USA: Addison-Wesley
Publishing Company, 1994.
Shubha Pandit is Assistant Professor at S. P. College of Engineering, Mumbai. She is working toward the Ph.D. degree at I.I.T. Bombay.
S. A. Soman is Assistant Professor at I.I.T. Bombay. His research interests include sparse matrix computations, power system analysis, OOP, and power system automation.
S. A. Khaparde is Professor at I.I.T. Bombay. His research interests include
power system computations, analysis and deregulation.