International Journal of Mineral Processing 110–111 (2012) 53–61
Prediction of terminal velocity of solid spheres falling through Newtonian and
non-Newtonian pseudoplastic power law fluid using artificial neural network
R. Rooki a, F. Doulati Ardejani a, A. Moradzadeh a, V.C. Kelessidis b, M. Nourozi c
a Faculty of Mining, Petroleum and Geophysics, Shahrood University of Technology, Shahrood, Iran
b Mineral Resources Engineering Department, Technical University of Crete, Chania, Greece
c Faculty of Mechanics, Shahrood University of Technology, Shahrood, Iran
Article info
Article history:
Received 26 November 2011
Received in revised form 30 January 2012
Accepted 19 March 2012
Available online 3 April 2012
Keywords:
Terminal velocity
Mineral processing
Newtonian and power law fluid
Drilling cuttings transport
Artificial neural network
Abstract
Prediction of the terminal velocity of solid spheres falling through Newtonian and non-Newtonian fluids is
required in several applications like mineral processing, oil well drilling, geothermal drilling and
transportation of non-Newtonian slurries. An artificial neural network (ANN) is proposed which predicts
directly the terminal velocity of solid spheres falling through Newtonian and non-Newtonian power law
liquids from the knowledge of the properties of the spherical particle (density and diameter) and of the
surrounding liquid (density and rheological parameters). By combining non-Newtonian and Newtonian data taken from published sources into a database of 88 sets, an artificial neural network is designed. Analysis of the predictions shows that the artificial neural network could be used with good
engineering accuracy to directly predict the terminal velocity of solid spheres falling through Newtonian and
non-Newtonian power law liquids covering an extended range of power law values from 1.0 down to 0.06.
© 2012 Elsevier B.V. All rights reserved.
1. Introduction
Knowledge of the terminal settling velocity of solids in liquids is
required in many industrial applications. Typical examples include
mineral processing, drilling for oil and gas, geothermal drilling,
hydraulic transport systems, thickeners, solid–liquid mixing, and
fluidization equipment. In many of these processes, it is the
“hindered” falling velocity that is of interest, hindered by the
presence of walls or by other particles (Kelessidis and Mpandelis,
2004). This velocity is proportional to the free (terminal) falling
velocity of the solid particles so there has been a great interest in
predicting the free falling velocity of solid particles in liquids.
The type of movement of a single solid sphere in Newtonian and
non-Newtonian liquids is well known; after a short acceleration time,
it will fall at its terminal settling velocity V. For an unbounded liquid,
V can be calculated from the knowledge of the liquid and solid
physical properties and from the drag coefficient, defined by
$$C_D = \frac{4}{3}\,\frac{d\,g\,(\rho_s - \rho)}{\rho V^2}$$    (1)
Extensive work has been undertaken to relate the drag coefficient with the Reynolds number of the particle. Previous work (Lali et al., 1989; Kelessidis and Mpandelis, 2004) has shown that for a power law fluid, with the rheological equation given by
$$\tau = K\,\dot{\gamma}^{\,n}$$    (2)
the Reynolds number can be properly defined as
$$Re_{gen} = \frac{\rho\,V^{2-n}\,d_p^{\,n}}{K}$$    (3)
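As a quick numerical illustration of Eqs. (1) and (3), the short Python sketch below (an illustrative aid added here, not part of the original paper) evaluates the drag coefficient and the generalized Reynolds number for the first data set of Table 1; the computed values, approximately 0.11 and 175, agree with the tabulated Re = 0.1125 and CD = 174.572.

# Illustrative sketch: evaluate Eqs. (1) and (3) for one data set of Table 1
# (first row of Kelessidis, 2003). Symbols follow the paper's nomenclature.

def generalized_reynolds(rho, V, d_p, K, n):
    """Generalized (power law) Reynolds number, Eq. (3)."""
    return rho * V**(2.0 - n) * d_p**n / K

def drag_coefficient(d_p, rho_s, rho, V, g=9.81):
    """Drag coefficient of a sphere at terminal velocity, Eq. (1)."""
    return 4.0 * d_p * g * (rho_s - rho) / (3.0 * rho * V**2)

# K = 0.2648 Pa*s^n, n = 0.7529, dp = 1.5 mm, rho_s = 2260 kg/m3,
# rho = 1000 kg/m3, measured V = 0.0119 m/s.
Re = generalized_reynolds(1000.0, 0.0119, 0.0015, 0.2648, 0.7529)
CD = drag_coefficient(0.0015, 2260.0, 1000.0, 0.0119)
print(Re, CD)   # approximately 0.11 and 175, as listed in Table 1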
Various efforts have been made to develop theoretical and semi-empirical relationships for the terminal settling velocity of solid spheres using the Re–CD relationship. There are over 50 published correlations relating the Reynolds number to the drag coefficient for Newtonian and non-Newtonian fluids (Clift et al., 1978; Peden and Luo, 1987; Koziol and Glowacki, 1988; Haider and Levenspiel, 1989; Reynolds and Jones, 1989; Kelessidis and Mpandelis, 2004; Chhabra, 2006; Shah et al., 2007). In these correlations the terminal velocity is obtained only implicitly, hence a trial-and-error procedure is required to derive it. There are not as many explicit relationships for predicting V, with few equations for Newtonian liquids (e.g. Turton and Clark, 1987; Hartman et al., 1989; Nguyen et al., 1997) and even fewer for non-Newtonian pseudoplastic power law liquids which cover an extended range of Reynolds numbers (Chhabra and Peri, 1991; Kelessidis, 2004). Most of the above-mentioned correlations are complex in form and each covers only a specific range of Reynolds numbers.
Therefore, a simple and reliable method which can be used with
confidence over the entire range of conditions is not yet available.
Artificial neural networks (ANNs) have gained increasing popularity in different fields of engineering in the past few decades because of their capability of extracting complex and non-linear relationships. Owing to their inherent ability to model and learn 'complexities', ANNs have found wide application in areas such as mineral processing (Van Der Walt et al., 1993; Moolman et al., 1995; Eren et al., 1997), chemical engineering and related fields (Himmelblau, 2000; Sharma et al., 2004; Ibrehem and Hussain, 2009) and the oil industry (Ternyik et al., 1995; Ozbayoglu et al., 2002; Miri et al., 2007; Mohaghegh, 2000). Not much work has been done with ANNs in the area of terminal velocity prediction, except for the recent work of Ghamari et al. (2010), who used an ANN to relate seed settling velocities to particular seed properties such as size, seed type and moisture content.
The aim of this work is to provide a different approach for the
prediction of terminal unhindered velocity of solid spheres falling
through Newtonian and non-Newtonian pseudoplastic power law
liquids using an artificial neural network.
2. Theory
2.1. Back propagation neural network design

Artificial neural networks (ANNs) are generally defined as information-processing representations of biological neural networks. ANNs have gained increasing popularity in different fields of engineering in the past few decades because of their ability to resolve complex and non-linear relationships. The mechanism of an ANN is based on the following four major assumptions (Hagan et al., 1996): a) information processing occurs in many simple elements called neurons (processing elements); b) signals are passed between neurons over connection links; c) each connection link has an associated weight which, in a typical neural network, multiplies the signal being transmitted; and d) each neuron applies an activation function (usually nonlinear) to its net input in order to determine its output signal.

Fig. 1 shows a typical neuron. Inputs (P) coming from another neuron are multiplied by their corresponding weights (w1,i) and summed up (n). An activation function (f) is then applied to the summation, and the output (a) of that neuron is calculated and ready to be transferred to another neuron (Demuth and Beale, 2002).
In this network, each element of the input vector P is connected to each neuron input through the weight matrix W. The jth neuron has a summer that gathers its weighted inputs and bias to form its own scalar output n(j). The various n(j) taken together form an S-element net input vector n, derived from

$$n_j = \sum_{i=1}^{R} p_i\, w_{ij} + b_j, \qquad j = 1, 2, \ldots, S$$    (4)

where

$$b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_S \end{bmatrix}, \quad P = \begin{bmatrix} p_1 \\ p_2 \\ \vdots \\ p_R \end{bmatrix}, \quad W = \begin{bmatrix} w_{1,1} & w_{1,2} & \cdots & w_{1,R} \\ w_{2,1} & w_{2,2} & \cdots & w_{2,R} \\ \vdots & & & \vdots \\ w_{S,1} & w_{S,2} & \cdots & w_{S,R} \end{bmatrix}$$    (5)

Then, the final output of the network is calculated by
$$a_S = f(n_S)$$    (6)
Here, f is an activation function, typically a step function or a sigmoid
function, which takes the argument n and produces the output a.
Fig. 2 shows examples of various activation functions.
Back-propagation neural networks (BPNN) are recognized for
their prediction capabilities and ability to generalize well on a wide
variety of problems. These models are a supervised type of network; in other words, they are trained with both inputs and target outputs. During training the network tries to match the outputs with the desired target values. Learning starts with the assignment of random weights. The output is then calculated and the error is estimated. This error is used to update the weights until the stopping criterion is reached. It should be noted that the stopping criterion is usually the average error or the number of epochs.
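To make Eqs. (4)–(6) and the training loop just described concrete, the following minimal Python sketch (illustrative only; the layer sizes, learning rate and data are arbitrary and are not taken from the paper) performs one forward pass through a single layer with a tansig activation and one error-driven weight update:

import numpy as np

# Illustrative single layer with R inputs and S neurons (Eqs. (4)-(6)).
rng = np.random.default_rng(0)
R, S = 3, 4                          # arbitrary example sizes
W = rng.normal(size=(S, R))          # weight matrix W
b = rng.normal(size=S)               # bias vector b
p = rng.normal(size=R)               # input vector P

n_vec = W @ p + b                    # Eq. (4): n_j = sum_i p_i*w_ij + b_j
a = np.tanh(n_vec)                   # Eq. (6): a = f(n), with a tansig activation

# One supervised, back-propagation style correction towards a target t:
t = np.zeros(S)                      # arbitrary target output
error = t - a                        # output error
eta = 0.1                            # learning rate
delta = error * (1.0 - a ** 2)       # error propagated through the tanh derivative
W += eta * np.outer(delta, p)        # weight update
b += eta * delta                     # bias update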
Fig. 1. A typical neuron (Demuth and Beale, 2002).
Fig. 2. Three examples of activation functions (Demuth and Beale, 2002).
2.2. Network training: the over fitting problem
One of the most common problems in the training process is the
over fitting phenomenon. This happens when the error on the training
set is driven to a very small value, but when new data is presented to the
network, the error is large. This problem occurs mostly in the case of large networks with only a few available data. Demuth and Beale (2002) have shown that there are a number of ways to avoid the over fitting problem; early stopping and automated Bayesian regularization are the most common. In addition, by fixing the error and the number of epochs at an adequate level (not too low, not too high) and dividing the data into two sets (training and testing), one can avoid this problem by making several realizations and selecting the best of them.
In this paper, we used the ANN Toolbox in MATLAB multi-purpose
commercial software in order to implement the automated Bayesian
regularization for training BPNN. In this technique, the available data is
divided into two subsets. The first subset is the training set, which is
used for computing the gradient and updating the network weights and
biases. The second subset is the test set. This method works by
modifying the performance function, which is normally chosen to be
the sum of squares of the network errors on the training set. The typical
performance function that is used for training feed forward neural
networks is the mean sum of squares of the network errors according to
$$mse = \frac{1}{N}\sum_{i=1}^{N} e_i^{\,2} = \frac{1}{N}\sum_{i=1}^{N} \left(t_i - a_i\right)^2$$    (7)
where, N represents the number of samples, ai is the predicted value, ti
denotes the measured value and ei is the error. It is possible to improve
the generalization if we modify the performance function by adding a
term that consists of the mean of the sum of squares of the network
weights and biases which is given by
$$msereg = \gamma\, mse + (1-\gamma)\, msw$$    (8)

where msereg is the modified error, γ is the performance ratio, and msw can be written as

$$msw = \frac{1}{N}\sum_{i=1}^{N} w_i^{\,2}$$    (9)
Use of the performance function will cause the network to have
smaller weights and biases, and this will force the network response
to be smoother and less likely to over fit (Demuth and Beale, 2002).
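The following short Python sketch (an illustrative reading of Eqs. (7)–(9), not code from the original study) shows how the modified performance function combines the error term and the weight term:

import numpy as np

# Illustrative sketch of the regularized performance function, Eqs. (7)-(9).
# targets/outputs: measured and predicted values; weights: all network
# weights and biases; gamma: the performance ratio.
def msereg(targets, outputs, weights, gamma):
    mse = np.mean((np.asarray(targets) - np.asarray(outputs)) ** 2)  # Eq. (7)
    msw = np.mean(np.asarray(weights) ** 2)                          # Eq. (9)
    return gamma * mse + (1.0 - gamma) * msw                         # Eq. (8)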
3. Terminal velocity prediction using BPNN
The feed-forward neural networks with back propagation (BP) learning are very powerful in function approximation and modeling (Cybenko, 1989; Hornik et al., 1989). In this study, three-layer feed-forward neural networks with back propagation (BP) learning were constructed for the calculation of terminal velocity.
In this study, 88 data sets from the literature, presented here in Table 1, were used. All experimental data refer to unhindered velocity. The properties of the spherical particles (density and diameter) and of the surrounding liquid (density and rheological parameters), together with the acceleration of gravity, were selected as inputs of the network. The output of the network was the terminal velocity. In view of the requirements of the neural computation algorithm, the data of both the inputs and the output were normalized by a transformation process. In this study, normalization of the data (inputs and outputs) was done for the range [−1, 1] using Eq. (10)

$$p_n = 2\,\frac{p - p_{min}}{p_{max} - p_{min}} - 1$$    (10)

where pn is the normalized parameter, p denotes the actual parameter, pmin represents the minimum of the actual parameters and pmax stands for the maximum of the actual parameters. About 70%, or 69 out of 88, of the data sets were randomly selected as training data and the remaining 19 were used for testing.
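A minimal Python sketch of this normalization step is given below (illustrative only; the example diameters are representative Table 1 values, and the helper functions are not part of the original work). The inverse transform is needed to convert the network output back to a velocity in m/s.

import numpy as np

# Illustrative sketch of Eq. (10): scale a variable to [-1, 1] and back.
def normalize(p, p_min, p_max):
    return 2.0 * (p - p_min) / (p_max - p_min) - 1.0

def denormalize(pn, p_min, p_max):
    return (pn + 1.0) * (p_max - p_min) / 2.0 + p_min

d_p = np.array([0.0015, 0.0035, 0.0120])        # example particle diameters (m)
d_n = normalize(d_p, d_p.min(), d_p.max())      # values mapped into [-1, 1]
assert np.allclose(denormalize(d_n, d_p.min(), d_p.max()), d_p)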
Several architectures (varied numbers of neurons in hidden
layer) with Automated Bayesian Regularization training algorithm
and mean square error (MSE) performance function were tried to
predict terminal velocity using BPNN. Two criteria were used in
order to evaluate the effectiveness of each network and its ability
to make accurate predictions, the root mean square error and the
coefficient of determination. The root mean square error (RMS),
which measures the data dispersion around zero deviation, can be
calculated as:
$$RMS = \sqrt{\frac{\sum_{i=1}^{N} \left(y_i - \hat{y}_i\right)^2}{N}}$$    (11)
where, yi is the measured value, y^ i denotes the predicted value, and
N stands for the number of samples. RMS indicates the discrepancy
between the measured and predicted values; the lower the RMS, the more accurate the prediction. Furthermore, the coefficient of determination, R², given by
$$R^2 = 1 - \frac{\sum_{i=1}^{N} \left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{N} y_i^{\,2} - \dfrac{\sum_{i=1}^{N} \hat{y}_i^{\,2}}{N}}$$    (12)

represents the percentage of the initial uncertainty explained by the model. The best fit between measured and predicted values would have a root mean square error of zero and a coefficient of determination equal to one.
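A short Python sketch of these two criteria follows (illustrative only; note that Eq. (12) is reproduced as defined in the text, which differs from the textbook definition of R² based on the mean of the measurements):

import numpy as np

# Illustrative sketch of the evaluation criteria, Eqs. (11) and (12).
# y: measured terminal velocities, y_hat: predicted terminal velocities.
def rms(y, y_hat):
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    return np.sqrt(np.sum((y - y_hat) ** 2) / y.size)        # Eq. (11)

def r_squared(y, y_hat):
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    ss_res = np.sum((y - y_hat) ** 2)
    denom = np.sum(y ** 2) - np.sum(y_hat ** 2) / y.size     # Eq. (12) denominator
    return 1.0 - ss_res / denom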
Table 1
Properties of fluid and solid spheres tested and experimental results of terminal falling velocities.

K (Pa·s^n)   n (–)    dp (m)   ρs (kg/m3)   ρ (kg/m3)   V (m/s)   Re (–)       CD (–)

Kelessidis (2003)
0.2648   0.7529   0.0015   2260    1000       0.0119   0.1125      174.572
0.2648   0.7529   0.0021   2727    1000       0.0361   0.5678      35.534
0.2648   0.7529   0.0023   2449    1000       0.0409   0.7234      26.059
0.2648   0.7529   0.0030   2609    1000       0.0664   1.6292      14.463
0.2648   0.7529   0.0035   2572    1000       0.0802   2.2735      11.029
0.0353   0.8724   0.0015   2260    1000       0.0440   2.8774      12.769
0.0165   0.9198   0.0015   2260    1000       0.0597   7.3105      6.936
0.0353   0.8724   0.0021   2727    1000       0.1008   9.6228      4.558
0.0353   0.8724   0.0023   2449    1000       0.1119   11.9690     3.481
0.0165   0.9198   0.0021   2727    1000       0.1275   22.1153     2.849
0.0353   0.8724   0.0030   2609    1000       0.1592   22.6541     2.516
0.0165   0.9198   0.0023   2449    1000       0.1403   27.2608     2.215
0.0353   0.8724   0.0035   2572    1000       0.1825   29.5950     2.130
0.0165   0.9198   0.0030   2609    1000       0.1950   50.1299     1.677
0.0165   0.9198   0.0035   2572    1000       0.2196   64.2225     1.471

Miura et al. (2001)
0.5940   0.5610   0.0030   2500    1000       0.0314   0.4446      59.698
0.5940   0.5610   0.0050   2500    1000       0.0881   2.6131      12.639
0.5940   0.5610   0.0070   2500    1000       0.1594   7.4079      5.405
0.1690   0.6250   0.0030   2500    999        0.1213   8.6137      4.007
0.1770   0.6020   0.0050   2500    999        0.1972   24.0244     2.527
0.1690   0.6250   0.0050   2500    999        0.2524   32.4644     1.542
0.0675   0.6290   0.0030   2500    1000       0.2049   43.6445     1.402
0.1770   0.6020   0.0070   2500    999        0.3031   53.6535     1.497
0.1690   0.6250   0.0070   2500    999        0.3734   68.6443     0.987
0.0299   0.7190   0.0030   2500    997        0.2673   94.4191     0.828
0.0675   0.6290   0.0050   2500    1000       0.4051   153.2203    0.598
0.0166   0.7510   0.0030   2500    998        0.3054   174.1574    0.633
0.0299   0.7190   0.0050   2500    997        0.4391   257.4585    0.511
0.0675   0.6290   0.0070   2500    1000       0.5235   269.0901    0.501
0.0299   0.7190   0.0070   2500    997        0.5196   406.8381    0.511
0.0166   0.7510   0.0050   2500    998        0.4437   407.5338    0.500
0.0166   0.7510   0.0070   2500    998        0.6035   770.4722    0.378

Pinelli and Magelli (2001)
0.0521   0.7300   0.0008   2470    1000       0.0306   1.2458      16.222
0.0471   0.7300   0.0008   2470    1000       0.0392   1.8888      9.885
0.0521   0.7300   0.0011   2900    1000       0.0718   4.7792      5.447
0.0521   0.7300   0.0011   2900    1000       0.0818   5.6399      4.197
0.0521   0.7300   0.0030   1470    1000       0.0734   9.9022      3.366
0.0466   0.7300   0.0030   1470    1000       0.0887   14.0750     2.305
0.0521   0.7300   0.0059   1170    1000       0.0829   19.2645     1.922
0.0462   0.7300   0.0059   1170    1000       0.1013   28.0121     1.287

Ford and Oyeneyin (1994)
9.1673   0.1714   0.0050   7949    1014.406   0.1200   0.9242      31.047
19.7360  0.0623   0.0070   7744    1000       0.1900   1.4891      17.105
19.7360  0.0623   0.0100   7796    1000       0.4100   6.7582      5.288
19.7360  0.0623   0.0120   7730    1000       0.4200   7.1622      5.988
4.9100   0.2075   0.0050   7949    1000       0.3200   8.7991      4.438
9.1673   0.1714   0.0070   7744    1014.406   0.4400   10.5351     3.137
11.4890  0.0614   0.0050   7949    1032.413   0.4000   10.9860     2.738
16.1350  0.1580   0.0100   7796    1044.418   0.6100   12.5801     2.272
4.0029   0.2867   0.0120   7730    1026.411   0.3800   13.7496     7.099
11.2000  0.1113   0.0100   7796    1034.814   0.5800   19.7802     2.540
16.1350  0.1580   0.0120   7730    1044.418   0.8000   21.3357     1.570
4.0029   0.2867   0.0100   7796    1026.411   0.5800   26.9295     2.564
11.2000  0.1113   0.0120   7730    1034.814   0.7100   29.5753     2.015
9.1673   0.1714   0.0100   7796    1014.406   0.7500   29.6966     1.555
4.9100   0.2075   0.0070   7744    1000.000   0.6600   34.5389     1.418
6.5705   0.0796   0.0050   7949    1020.408   0.6100   39.4240     1.193
11.4890  0.0614   0.0070   7744    1032.413   0.7900   41.9566     0.954
9.1673   0.1714   0.0120   7730    1014.406   1.0000   51.8492     1.039
11.4890  0.0614   0.0100   7796    1032.413   1.0800   78.6261     0.735
4.9100   0.2075   0.0100   7796    1000.000   1.0600   86.9519     0.791
6.5705   0.0796   0.0070   7744    1020.408   0.9100   87.2948     0.729
11.4890  0.0614   0.0120   7730    1032.413   1.1800   94.4025     0.731
6.5705   0.0796   0.0100   7796    1020.408   0.9800   103.5443    0.904
4.9100   0.2075   0.0120   7730    1000.000   1.2100   114.4831    0.721
6.5705   0.0796   0.0120   7730    1020.408   1.2400   165.0765    0.671

Kelessidis and Mpandelis (2004)
0.0010   1        0.0032   2506    995.629    0.3692   1161.5720   0.460
0.0010   1        0.0022   2668    997.066    0.2935   655.5112    0.570
0.0010   1        0.0012   2314    1003.903   0.1763   215.9254    0.670
0.0010   1        0.0026   11444   989.056    1.0660   2772.8972   0.320
0.1350   1        0.0032   2506    1227.110   0.0420   1.2064      24.420
0.1350   1        0.0022   2668    1238.396   0.0232   0.4767      62.840
0.1350   1        0.0012   2314    1227.003   0.0072   0.0798      272.700
0.1350   1        0.0026   11444   1226.688   0.1848   4.4163      8.390
0.1350   1        0.0031   7859    1226.493   0.1656   4.6489      7.970
0.1152   0.7449   0.0032   2506    999.591    0.1282   9.0398      3.790
0.1152   0.7449   0.0022   2668    999.944    0.0835   4.0860      7.010
0.1152   0.7449   0.0012   2314    999.985    0.0321   0.7828      20.350
0.1152   0.7449   0.0026   11444   998.131    0.4657   39.7432     1.660
0.0865   0.8610   0.0032   2506    999.592    0.1031   6.1128      5.860
0.0865   0.8610   0.0022   2668    1000.211   0.0637   2.6282      12.040
0.0865   0.8610   0.0012   2314    999.984    0.0225   0.4760      41.420
0.0865   0.8610   0.0026   11444   999.089    0.3855   23.4289     2.420
0.0865   0.8610   0.0031   7859    999.826    0.3399   23.3385     2.400
0.0849   0.9099   0.0032   2506    999.835    0.0820   4.0910      9.260
0.0849   0.9099   0.0022   2668    999.922    0.0493   1.7180      20.110
0.0849   0.9099   0.0012   2314    1000.004   0.0164   0.2978      77.960
0.0849   0.9099   0.0026   11444   999.267    0.3286   15.7112     3.330
0.0849   0.9099   0.0031   7859    999.073    0.2828   15.4439     3.470
The best selected ANN model in this study has one input layer with six inputs (ρ, K, n, g, dp, ρs) and one hidden layer with 12 neurons. Fletcher and Goss (1993) suggested that the appropriate number of nodes in a hidden layer ranges from $2\sqrt{k} + m$ to $2k + 1$, where k is the number of input nodes and m is the number of output nodes. In this study k = 6 and m = 1, and thus the appropriate number of hidden layer neurons was chosen as 12. Each hidden neuron has a bias, is fully connected to all inputs, and utilizes the sigmoid hyperbolic tangent (tansig) activation function (Fig. 3). The output layer has one neuron (V) with a linear activation function and no bias. The training function in this network is the automated Bayesian regularization algorithm (trainbr). Fig. 3a shows the back-propagation neural network architecture. In Fig. 3b, Layer 1 is the hidden layer and Layer 2 is the output layer. Fig. 3c shows the detailed structure of the hidden layer.
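The selected architecture can be summarized with the following minimal Python sketch (our illustration of the 6-12-1 structure described above; the zero-valued weights are placeholders, since the actual weights result from training with the MATLAB trainbr routine):

import numpy as np

# Illustrative 6-12-1 structure: tansig hidden layer, linear output neuron.
k, m = 6, 1                                  # input and output nodes
lower = 2 * np.sqrt(k) + m                   # Fletcher and Goss lower bound (about 5.9)
upper = 2 * k + 1                            # Fletcher and Goss upper bound (13)
hidden = 12                                  # chosen number of hidden neurons

W1 = np.zeros((hidden, k)); b1 = np.zeros(hidden)   # hidden layer weights and biases
W2 = np.zeros((m, hidden))                          # output layer (linear, no bias)

def predict(x):
    """x: normalized inputs (rho, K, n, g, dp, rho_s); returns normalized V."""
    a1 = np.tanh(W1 @ x + b1)    # Layer 1 (tansig)
    return W2 @ a1               # Layer 2 (linear), the predicted terminal velocity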
4. Results and discussion
Using the approach described above, the predictions were made in MATLAB. The matrix of inputs in the training step is a k × N matrix, where k is the number of network inputs and N is the number of samples used in the training step; in this paper we used six input variables (ρ, K, n, g, dp, ρs) and 69 samples to train the network, thus k = 6 and N = 69. The matrix of outputs in the training step is an m × N matrix, where m is the number of outputs; in this paper m = 1. The matrix of inputs for the testing phase is k × N = 6 × 19 and the output matrix is m × N = 1 × 19. Comparison of the results of the proposed ANN model with the two other models which directly predict settling velocities in power law fluids (Kelessidis, 2004; Chhabra and Peri, 1991) was performed using the coefficient of determination (R²) and RMS values. The latter parameters are affected by the number of data sets (N) and the number of parameters in the model.
In Fig. 4, the predicted velocities are compared with the measured data for the training dataset of 69 data. The coefficient of determination for the linear fit (y = ax) is 0.996 with an RMS value of 0.021 m/s, giving an almost perfect fit, something of course expected since this data set was used for training the network. The very good fitting values indicate that the training was done very well.
Fig. 3. (a) Backpropagation neural network architecture, (b) general schematic diagram of network and its layers, (c) structure of hidden layer (Layer 1).
The real test of course lies in the test dataset. The comparison of the predictions of the network with the measured values for the test dataset (population of 19) is shown in Fig. 5. The coefficient of determination is 0.947 with an RMS of 0.072 m/s, indicating that the predictions are not as good as with the training data set, but still of good engineering accuracy. If we put all the data (88 data) together and compare predictions with measurements, we get the results in Fig. 6, which gives a coefficient of determination of 0.986 and an RMS value of 0.038 m/s. Fig. 7 displays the network predictions and the measured velocities for all data using the BPNN model.
It is very interesting to compare the predictions of the terminal velocity using the ANN technique with other approaches for velocity prediction. A one-to-one comparison can be made only with approaches which directly predict the terminal velocity without resorting to a trial-and-error procedure.
Fig. 4. Comparison of the BPNN predicted and measured terminal velocity for training data.
Fig. 5. Comparison of the BPNN predicted and measured terminal velocity for test data.
Fig. 6. Comparison of the BPNN predicted and measured terminal velocity for all data.
Fig. 7. Comparison of the network predictions and the measured velocity for all data using the BPNN model (terminal velocity V (m/s) plotted against sample number).
As mentioned above, there are not many equations which directly predict the terminal settling velocity of solids falling in power law liquids. One such equation has been proposed by Kelessidis (2004), which was derived from and tested against data (63 data points) for power law and Newtonian liquids with values of the power law exponent higher than 0.50. If one restricts the comparison to such data points, one has to remove the data of Ford and Oyeneyin (1994) used in this work (Table 1), which span the range of n values between 0.06 and 0.29. Another equation was suggested by Chhabra and Peri (1991). Such a comparison is made in Fig. 8. The analysis shows that, for n values greater than 0.5, the coefficient of determination for the ANN is 0.949, very close to that of the Kelessidis (2004) model (0.961) and much better than that of the only other direct velocity equation, that of Chhabra and Peri (1991) (0.778). The respective RMS values are 0.041 m/s for the ANN model, 0.036 m/s for the Kelessidis model and 0.580 m/s for the Chhabra and Peri model.
If the comparison between the ANN and the Kelessidis model is
performed for all experimental data with (n) values down to 0.06
(1 > n > 0.06), then the results of Fig. 9 are derived which show
correlation coefficients of 0.99 and 0.89 for the ANN and the
Kelessidis model respectively, while the respective RMS values are,
for the ANN model 0.038 m/s and for the Kelessidis model 0.299 m/s.
Of course, the Kelessidis approach gives a closed-form equation which can readily be used in solving complex problems, but only for n > 0.5, while the ANN approach yields a methodology and a software model that is somewhat less convenient to embed in such problems but can predict velocities down to very small power law indices.
This work has shown that neural networks can be used very efficiently to predict the terminal settling velocity of solids directly. This is possible because of the high capability of the ANN to capture complex and non-linear relationships, and the value of the method also lies in covering an extended range of flow behavior index, from 0.06 to 1, which is not possible with other techniques. To apply this technique, one can design a neural network in the MATLAB multi-purpose commercial software using the Neural Network Toolbox and an experimental database of independent and dependent parameters. The network can then be applied to new data with known independent parameters to predict the unknown dependent parameter (V).
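As an indication of how such a workflow could look outside MATLAB, the hypothetical Python sketch below uses scikit-learn's MLPRegressor with the same 6-12-1 topology; note that it applies a plain L2 penalty rather than the automated Bayesian regularization used in this study, and the placeholder arrays must be replaced by the normalized data of Table 1, so it is only an illustration, not the authors' code.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical workflow mirroring the procedure described above.
# X: N x 6 array of (rho, K, n, g, dp, rho_s); y: N measured velocities,
# both already normalized to [-1, 1] as in Eq. (10). Placeholders shown here.
X_train, y_train = np.zeros((69, 6)), np.zeros(69)
X_test = np.zeros((19, 6))

net = MLPRegressor(hidden_layer_sizes=(12,), activation='tanh',
                   solver='lbfgs', alpha=1e-3, max_iter=5000)
net.fit(X_train, y_train)          # training on the 69 training sets
v_pred = net.predict(X_test)       # normalized terminal velocities for the 19 test sets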
5. Conclusion
A new method has been presented which allows prediction of the terminal settling velocity of solid spheres falling through Newtonian and non-Newtonian power law liquids using an ANN. In this method, all data from other investigators were divided into training data (for training the ANN) and test data (for validating the ANN).
Fig. 8. Comparison of ANN predictions with the predictions from the Kelessidis (2004) and Chhabra and Peri (1991) equations, for pseudoplastic power law fluids, restricted to data with 0.5 < n < 1.
Fig. 9. Comparison of ANN predictions with the predictions from the Kelessidis (2004) equation, for power law fluids, restricted to data with 1 > n > 0.06.
The predictions from the new model are compared with previously reported experimental data from other investigators covering non-Newtonian and Newtonian liquids. The comparison is very acceptable; the coefficient of determination, R², and the RMS error in the terminal velocity for all data points were 0.986 and 0.038 m/s respectively. Predictions with the ANN technique are similar to those of one of the two available equations for direct prediction of the terminal velocity of spheres falling in pseudoplastic power law liquids, outperform the second available direct equation, and cover an extended range of flow behavior index. Therefore this method can be applied, with good engineering accuracy, for this purpose. It is recommended that more experimental data be used to further train the ANN in order to improve the validity of this method.
Nomenclature
ANN      artificial neural network (–)
dp       particle diameter (m)
R²       determination coefficient (–)
CD       drag coefficient (–)
g        acceleration of gravity (9.81 m/s²)
K        consistency index of fluid (Pa·s^n)
k        number of input nodes (–)
m        number of outputs (–)
mse      mean of sum squares error (–)
msereg   modified mse (–)
msw      mean of the sum of squares of the network weights (–)
n        flow behavior index (–) and number of inputs (–)
N        number of samples or data (–)
Regen    generalized Reynolds number, ρV^(2−n)dp^n/K (–)
RMS      root mean squared error (–)
V        terminal velocity of solid spheres (m/s)

Greek letters
ρ        liquid density (kg/m³)
ρs       solid density (kg/m³)
γ        performance ratio (–)
γ̇        shear rate (s⁻¹)
τ        shear stress (Pa)
References
Chhabra, R.P., 2006. Bubbles, Drops and Particles in Non-Newtonian Fluids, second ed.
CRC Press, Boca Raton, FL.
Chhabra, R.P., Peri, S.S., 1991. Simple method for the estimation of free-fall velocity of
spherical particles in power law liquids. Powder Technol. 67, 287–290.
Clift, R., Grace, J., Weber, M.E., 1978. Bubbles, Drops, and Particles. Academic Press, New
York.
Cybenko, G., 1989. Approximation by superposition of a sigmoidal function. Math.
Control Signals Syst. 2, 303–314.
Demuth, H., Beale, M., 2002. Neural Network Toolbox For Use with MATLAB, User's
Guide Version 4.
Eren, H., Fung, C.C., Wong, K.W., 1997. An application of artificial neural network for
prediction of densities and particle size distributions in mineral processing
industry. Instrumentation and Measurement Technology Conference, 1997.
IMTC/97. Proceedings. 'Sensing, Processing, Networking'. IEEE.
Fletcher, D., Goss, E., 1993. Forecasting with neural networks: an application using
bankruptcy data. Inform. Manage. 24, 159–167.
Ford, J.T., Oyeneyin, M.B., 1994. The Formulation of Milling Fluids for Efficient Hole
Cleaning: An Experimental Investigation. SPE paper 28819, presented at the
European Petroleum Conference, London, UK, 25 – 27 October.
Ghamari, S., Borghei, A.M., Rabbani, H., Khazaei, J., Basati, F., 2010. Modeling the
terminal velocity of agricultural seeds with artificial neural networks. Afr. J. Agric.
Res. 5 (5), 389–398.
Hagan, M.T., Demuth, H.B., Beale, M.H., 1996. Neural Network Design. PWS
Publishing, Boston, MA.
Hartman, M., Havlin, V., Trnka, O., Carsky, M., 1989. Predicting the free fall velocities of
spheres. Chem. Eng. Sci. 44 (8), 1743–1745.
Haider, A., Levenspiel, O., 1989. Drag coefficient and terminal velocity of spherical and
nonspherical particles. Powder Technol. 58, 63–70.
Himmelblau, D.M., 2000. Applications of artificial neural networks in chemical
engineering. Korean J. Chem. Eng. 17, 373–392.
Hornik, K., Stinchcombe, M., White, H., 1989. Multilayer feed forward networks are
universal approximators. Neural Netw. 2, 359–366.
Ibrehem, A.S., Hussain, M.A., 2009. Prediction of bubble size in bubble columns using
artificial neural network. J. Appl. Sci. 9 (17), 3196–3198.
Kelessidis, V.C., 2003. Terminal velocity of solid spheres falling in Newtonian and non-Newtonian liquids. Tech. Chron. Sci. J. T.C.G. 24 (1 & 2), 43–54.
Kelessidis, V.C., 2004. An explicit equation for the terminal velocity of solid spheres
falling in pseudoplastic liquids. Chem. Eng. Sci. 59, 4437–4447.
Kelessidis, V.C., Mpandelis, G.E., 2004. Measurements and prediction of terminal
velocity of solid spheres falling through stagnant pseudoplastic liquids. Powder
Technol. 147, 117–125.
Koziol, K., Glowacki, P., 1988. Determination of the free settling parameters of spherical
particles in power law fluids. Chem. Eng. Process. 24, 183–188.
Lali, A.M., Khare, A.S., Joshi, J.B., Nigam, K.D.P., 1989. Behavior of solid particles in
viscous non-Newtonian solutions: settling velocity, wall effects and bed expansion
in solid–liquid fluidized beds. Powder Technol. 57, 39–50.
Miri, R., Sampaio, J., Afshar, M., Lourenco, A., 2007. Development of artificial neural
networks to predict differential pipe sticking in Iranian offshore oil fields. SPE
108500, International Oil Conference and Exhibition in Mexico.
Miura, H., Takahashi, T., Ichikawa, J., Kawase, Y., 2001. Bed expansion in liquid–solid
two-phase fluidized beds with Newtonian and non-Newtonian fluids over the
wide range of Reynolds numbers. Powder Technol. 117, 239–246.
Mohaghegh, S., 2000. Virtual intelligence applications in petroleum engineering: part I
—artificial neural networks. J. Pet. Sci. Eng., 40–46.
Moolman, D.W., Aldrich, C., Van Deventer, J.S.J., Bradshaw, D.J., 1995. The interpretation
of flotation froth surfaces by using digital image analysis and neural networks.
Chem. Eng. Sci. 50, 3501–3513.
Nguyen, A.V., Stechemesser, H., Zobel, G., Schulze, H.J., 1997. An improved formula for
terminal velocity of rigid spheres. Int. J. Miner. Process. 50, 53–61.
Ozbayoglu, E.M., Miska, S.Z., Reed, T., Takach, N., 2002. Analysis of bed height in
horizontal and highly-inclined wellbores by using artificial neural networks. SPE
78939, SPE International Thermal Operations and Heavy Oil Symposium and
International Horizontal Well Technology Conference, Calgary, Alberta, Canada.
Peden, J.M., Luo, Y., 1987. Settling velocity of variously shaped particles in drilling and
fracturing fluids. SPE Drill. Eng. 2, 337–343 (Dec).
Pinelli, D., Magelli, F., 2001. Solids settling velocity and distribution in slurry reactors
with dilute pseudoplastic suspensions. Ind. Eng. Chem. Res. 40, 4456–4462.
Reynolds, P.A., Jones, T.E.R., 1989. An experimental study of the settling velocities of
single particles in non-Newtonian fluids. Int. J. Miner. Process. 25, 47–77.
Shah, S.N., El Fadili, Y., Chhabra, R.P., 2007. New model for single spherical particle
settling velocity in power law (visco inelastic) fluids. Int. J. Multiphase Flow 33,
51–66.
Sharma, R., Singh, K., Singhal, D., Ghosh, R., 2004. Neural network applications for
detecting process faults in packed towers. Chem. Eng. Process. 43, 841–847.
Ternyik, J., Bilgesu, I., Mohaghegh, S., Rose, D., 1995. Virtual measurement in pipes, Part
1: flowing bottomhole pressure under multi-phase flow and inclined wellbore
conditions. SPE 30975, Proceedings, SPE Eastern Regional Conference and
Exhibition, Morgantown, West Virginia.
Turton, R., Clark, N.N., 1987. An explicit relationship to predict spherical particle
terminal velocity. Powder Technol. 53, 127–129.
Van Der Walt, T.J., Van Deventer, J.S.J., Barnard, E., 1993. Neural nets for the simulation
of mineral processing operations: Part I. Theoretical principles. Miner. Eng. 6,
1127–1134.