
International Journal of Civil Engineering (IJCE)
Vol. 1, Issue 1, Aug 2012, 67-78
© IASET

PREDICTION OF AVERAGE ANNUAL DISCHARGE IN A RIVER: A NEURAL NETWORK APPROACH

SABITA MADHAVI SINGH & P. R. MAITI
Department of Civil Engineering, Institute of Technology, Banaras Hindu University, Varanasi, India

ABSTRACT

This paper evaluates the feasibility of using the artificial neural network (ANN) technique for predicting average discharge from available rainfall data. Input data for both models comprise the current and preceding rainfall records gathered at the 56 observation stations in the Godavari basin. A radial basis function neural network (RBFNN) and a back propagation neural network (BPNN) are employed to develop discharge-forecasting models using the MATLAB 6.5 Neural Network Toolbox. Six different network architectures and training algorithms are investigated and compared in terms of prediction efficiency and accuracy. The proposed ANN technique, using a minimum number of hidden nodes along with a minimum number of variables, consistently produced the best-performing network-based simulation models. The results indicate that the RBFNN shows better predictive capacity than the BPNN and can provide an alternative and practical tool for discharge forecasting, which is particularly useful for helping small urban watersheds issue timely and early flood warnings.

KEYWORDS: Annual discharge, Rainfall, Artificial neural network, Radial basis function neural network

INTRODUCTION

The rainfall-runoff transformation is among the most complex hydrological phenomena to comprehend, as it usually involves a number of interconnected elements, such as evapotranspiration, infiltration, surface and subsurface runoff generation, and routing. The understanding of these hydrological processes is further complicated by the heterogeneity of the watershed's geo-morphological characteristics (such as soil type and vegetation cover) and the spatial and temporal variations of model inputs (such as rainfall patterns). For many years, hydrologists have endeavored to better understand the rainfall-runoff transformation process in an effort to develop conceptual models with improved performance. However, the quest for this understanding is not straightforward. Watershed analysis and modeling require extensive time series of streamflow and rainfall data, which are not frequently available. Many analytical methods have been developed to estimate streamflow from rainfall measurements over the watershed, and runoff estimation is of high importance in many practical engineering applications and watershed management activities. Earlier hydrological models are based on mathematical representations of watershed processes, which demand significant effort in collecting rainfall, streamflow and watershed-characteristics data; such models also require calibration and verification of model performance (Bertoni et al., 1992; Zealand et al., 1999). Increasing applications of ANNs in various aspects of hydrology are reviewed in ASCE (2000), and many studies have shown the potential of ANNs in modeling the rainfall-runoff relationship, e.g. Tokar and Johnson (1999) and Coulibaly et al. (2000).

Monthly runoff simulation using ANNs has been studied by many researchers, with encouraging results for the examined basins (Lorrai and Sechi, 1995; Anmala et al., 2000). Xu et al. (1996) developed and applied a set of conceptual water balance models to estimate monthly river flow in eleven catchments in central Sweden, based on monthly temperature and precipitation. In most of these studies, feed-forward (FF) networks trained by back-propagation were employed. The performance of the feed-forward back-propagation (FFBP) network was found to be superior to conventional statistical and stochastic methods in continuous flow series forecasting (Brikundavyi et al., 2002; Cigizoglu, 2003). Maier and Dandy (2000) summarized the methods used in the literature to overcome the problem of local minima in training: training a number of networks starting from different initial weights, using the on-line training mode to help the network escape local minima, adding random noise, and employing second-order or global optimization methods. Thirumalaiah and Deo (1998, 2000) used conjugate gradient and cascade correlation algorithms for different hydrological applications.

The objective of this study is to examine the capability of the back propagation neural network (BPNN) and the radial basis function neural network (RBFNN) for predicting the average annual discharge at 30 stations in the Godavari basin during the year 1993-94. Such models can provide a viable alternative whenever the hydrologic application requires an accurate prediction of discharge behavior using only the available rainfall time series, with relatively moderate conceptual understanding of the hydrologic dynamics of the particular basin under investigation.

STUDY AREA

The river Godavari rises in the Nashik district of Maharashtra, about 80 km from the Arabian Sea, at an elevation of 1067 m. The total length of the river from its source to its outfall into the sea (Bay of Bengal) is about 1465 km, of which 694 km lies in Maharashtra and the remaining 771 km in Andhra Pradesh. It is the largest of the peninsular rivers and the third largest in India. The Godavari basin extends over an area of 312,812 km², which is nearly 10% of the total geographical area of the country. The basin lies in the Deccan Plateau, between latitudes 16°16′N and 23°16′N and longitudes 73°26′E and 83°07′E. The Central Water Commission (CWC) has been conducting hydrological observations in the Godavari basin since the mid-1960s. Hydrological observation stations are located on the main river as well as on all tributaries; 56 such stations were in operation in 1994-95 (Fig. 1), of which 8 are on the Godavari and the remaining on the tributaries. The data used in this paper are taken from the report "Statistical Profile of Godavari Basin" published by the Information System Directorate, Performance Overview & Management Improvement Organization, Water Planning & Projects Wing, Central Water Commission, January 1999.

ARTIFICIAL NEURAL NETWORKS

An artificial neural network is a highly interconnected network of many simple processing units called neurons, which are analogous to the biological neurons in the human brain. Neurons with similar characteristics in an ANN are arranged in groups called layers. The neurons in one layer are connected to those in the adjacent layers, but not to those in the same layer. The strength of the connection between two neurons in adjacent layers is represented by what is known as a 'connection strength' or 'weight'. An ANN normally consists of three layers: an input layer, a hidden layer, and an output layer. There are primarily two types of training mechanisms, supervised and unsupervised. A supervised training algorithm requires an external teacher to guide the training process, which typically involves a large number of examples (or patterns) of inputs and outputs. The inputs of an ANN are the cause variables and the outputs are the effect variables of the physical system being modeled. The primary goal of training is to minimize the error function by searching for a set of connection strengths that cause the ANN to produce outputs equal or close to the targets. A supervised training mechanism called the back-propagation training algorithm is adopted in most engineering applications.

In the back-propagation training mechanism, the input data are presented at the input layer, the information is processed in the forward direction, and the output is calculated at the output layer. The target values are known at the output layer, so the error can be estimated. The tan-sigmoid function is the transfer function employed in the hidden layers and has the following form:

f(x) = (1 − e^(−2x)) / (1 + e^(−2x))        (1)

where x is the input. The function in equation (1) is bounded between −1 and 1, so input data are usually normalized within the same range. Neurons in the output layer receive the weighted input and generate the final output using a linear transfer function.
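As a quick numerical check, the transfer function can be sketched in a few lines of NumPy (an illustration, not the authors' code); it confirms that the tan-sigmoid in equation (1) is algebraically identical to tanh and stays strictly inside (−1, 1):

```python
import numpy as np

def tansig(x):
    # Tan-sigmoid transfer function of Eq. (1); equivalent to tanh(x),
    # and to MATLAB's tansig form 2/(1 + exp(-2x)) - 1.
    return (1.0 - np.exp(-2.0 * x)) / (1.0 + np.exp(-2.0 * x))

x = np.linspace(-5, 5, 101)
y = tansig(x)
# y matches numpy's tanh and remains strictly within (-1, 1)
```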

The total error at the output layer is distributed back through the ANN and the connection weights are adjusted. The error function is given by equation (2):

E = (1/2) Σ_{k=1}^{K} e_k^2 = (1/2) Σ_{k=1}^{K} (t_k − z_k)^2        (2)

where t_k is the predefined network output (desired output or target value), z_k is the actual output, and e_k is the error at each output node. The goal is to minimize E; the weight on each link is adjusted accordingly so that the final output matches the desired output. To obtain the weight adjustment, the gradient descent strategy is employed. For the links between the hidden and output layers, computing the partial derivative of E with respect to w_jk gives

∂E/∂w_jk = (∂E/∂z_k)(∂z_k/∂Y_k)(∂Y_k/∂w_jk) = −e_k f′(Y_k) y_j = −δ_k y_j        (3)

where

δ_k = e_k f′(Y_k) = (t_k − z_k) f′(Y_k)

The weight adjustment in the links between the hidden and output layers is computed by

Δw_jk = α × y_j × δ_k        (4)

where α is the learning rate, a positive constant between 0 and 1. The new weight can then be updated as

w_jk(n + 1) = w_jk(n) + Δw_jk(n)        (5)

Similarly, the error gradient for the links between the input and hidden layers can be obtained by applying the chain rule:

∂E/∂w_ij = Σ_{k=1}^{K} (∂E/∂z_k)(∂z_k/∂Y_k)(∂Y_k/∂y_j)(∂y_j/∂X_j)(∂X_j/∂w_ij) = −Δ_j x_i        (6)

where Δ_j can be derived as

Δ_j = f′(X_j) Σ_{k=1}^{K} δ_k w_jk        (7)

The new weights in the input-hidden links can now be corrected as

Δw_ij = α × x_i × Δ_j        (8)

and

w_ij(n + 1) = w_ij(n) + Δw_ij(n)        (9)

Training BP networks with many samples is sometimes a time-consuming task. The learning speed can be improved by introducing a momentum term η, which usually falls in the range [0, 1]. For iteration n, the weight change Δw can be expressed as

Δw(n + 1) = η × Δw(n) − α × ∂E/∂w(n)        (10)
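To make the update rules concrete, here is a minimal NumPy sketch (an illustration, not the authors' code) of a network with one tanh hidden neuron and one linear output neuron, trained with equations (2)-(10) on a single made-up rainfall-discharge pair; the learning rate and momentum are small illustrative values chosen for stability:

```python
import numpy as np

# One hypothetical normalized rainfall-discharge pair (assumed values)
x, t = 0.8, 0.5
w_ih, w_ho = 0.3, -0.2        # input-hidden and hidden-output weights
alpha, eta = 0.1, 0.5         # illustrative learning rate and momentum
dw_ih = dw_ho = 0.0           # previous weight changes, for the momentum term

def error(w_ih, w_ho):
    y = np.tanh(w_ih * x)     # tan-sigmoid hidden neuron, Eq. (1)
    z = w_ho * y              # linear output neuron
    return 0.5 * (t - z) ** 2 # error function E, Eq. (2)

e_start = error(w_ih, w_ho)
for _ in range(200):
    y = np.tanh(w_ih * x)
    z = w_ho * y
    delta_k = t - z                            # Eq. (3), f'(Y_k) = 1 for a linear output
    delta_j = (1 - y ** 2) * delta_k * w_ho    # Eq. (7), f'(X_j) = 1 - y^2 for tanh
    dw_ho = eta * dw_ho + alpha * y * delta_k  # Eqs. (4) and (10)
    dw_ih = eta * dw_ih + alpha * x * delta_j  # Eqs. (8) and (10)
    w_ho += dw_ho                              # weight updates, Eqs. (5) and (9)
    w_ih += dw_ih
e_end = error(w_ih, w_ho)
# The squared error shrinks as training proceeds
```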

The back-propagation learning algorithm is described in many textbooks (Luger, 2002; Negnevitsky, 2002; Schalkoff, 1997).

[Fig. 2: Architecture of the BPNN — rainfall input, discharge output, with forward activation and back-propagation of error]

Among the many BPNN training methods available in the MATLAB toolbox, the Levenberg-Marquardt training algorithm is selected because of its fast convergence rate (Demuth and Beale, 2003). During the back-propagation training process, the suitability of the simulation is validated by estimating the mean square error (MSE) between the observed and ANN-simulated data. In this study, the maximum number of epochs, the target error goal (MSE) and the minimum performance gradient are set to 600, 10^-5 and 10^-4, respectively. Training stops when the maximum number of epochs is reached or when either the MSE or the performance gradient reaches its pre-determined goal. Through a trial-and-error method, the optimal network structure for the BPNN simulation is determined to include one hidden layer with one hidden neuron. The learning rate and momentum term are set to 0.4 and 0.56, respectively. Fig. 2 gives the architecture of the BPNN employed in this study.

The RBFNN is another type of neural network. Such networks have three layers: the input layer, a nonlinear hidden layer, and a linear output layer. The RBFNN can overcome some of the limitations of the BPNN by using a rapid training phase, having a simple architecture, and maintaining complicated mapping abilities. In most cases, the RBFNN simulates the phenomena of interest by using Gaussian basis functions in the hidden layer and linear transfer functions in the output layer. The major difference in operation between the RBFNN and the BPNN lies in the hidden layer: instead of the weighted sum of the input vector used in the BPNN, the distance between the input and a center, explained below, is employed in the RBFNN learning process. The first layer of the RBFNN collects the input data. The training process determines the number of hidden neurons, m, which can be larger than that of the BPNN for a given prediction accuracy (Demuth and Beale, 2003). For each neuron in the hidden layer, the distance between the input data and the center is activated by a nonlinear radial basis function, as shown in the following equation:

R_i = exp(−(‖x − c_i‖ b_1)^2)        (11)

where x is the input vector, and b_1 and c_i are parameters that represent the bias in the hidden layer and the center vector, respectively. Each neuron in the hidden layer produces a value between 0 and 1 according to how close the input is to its center. Neurons with centers closer to the input therefore contribute more to the output; conversely, the outputs of neurons whose centers lie far from the input effectively vanish. The output layer neurons then receive the weighted inputs and produce the result by a linear combination, of a form similar to that of the BPNN:

ŷ = Σ_{i=1}^{m} w_i R_i(x) + b_2        (12)

where ŷ is the RBFNN simulation result (output), w_i is the optimized connection weight determined through the training process, and b_2 is the bias in the output layer.
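A minimal NumPy sketch of the hidden- and output-layer computations of equations (11)-(12) may help; the centers, weights, biases and inputs below are made-up illustrative values, and b1 = 0.8326/spread follows the toolbox bias rule, which makes each basis fall to about 0.5 at a distance equal to the spread:

```python
import numpy as np

def rbf_forward(x, centers, weights, b2, spread):
    # Gaussian RBF network for scalar inputs: Eq. (11) in the hidden
    # layer, then a linear combination with bias b2 as in Eq. (12).
    b1 = 0.8326 / spread
    R = np.exp(-((x[:, None] - centers[None, :]) * b1) ** 2)  # Eq. (11)
    return R @ weights + b2                                   # Eq. (12)

centers = np.array([0.0, 1.0])     # hypothetical center vector
weights = np.array([2.0, -1.0])    # hypothetical output weights
x = np.array([0.0, 1.0, 10.0])
y = rbf_forward(x, centers, weights, b2=0.5, spread=1.0)
# An input far from every center (x = 10) leaves only the bias b2
```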

[Fig. 3: Structure of the RBFNN — rainfall input, Gaussian hidden layer with bias b1, linear output layer with bias b2, discharge output]

Fig. 3 gives an insight into the structure and working procedure of the RBFNN. The open circles are bias neurons that adjust the sensitivity of the network. The biases are controlled by a specific value, called the "spread" number, with each bias set to 0.8326/spread. The selected spread value should be large enough for the neurons in the hidden layer to cover the whole range of input data; b_1 and b_2 are the biases to the hidden and the output layer, respectively.

In RBFNN simulations, the proper initial choice of centers and weights is a key issue. Using the Neural Network Toolbox, the weight vectors are chosen as the center parameters: instead of being randomly generated, the initial weights are set as the transpose of the input vector, and the newrb function is used to create the radial basis network. Once the centers are fixed, the weights linking the hidden and output layers are updated during the training procedure. Training is an optimization procedure in which the network weights are adjusted to minimize the selected error value. The training procedure of the RBFNN (unlike that of the BPNN) determines the number of hidden neurons required for the simulation.
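The incremental procedure described above can be sketched as a greedy loop in the spirit of MATLAB's newrb (an illustration, not the toolbox code): each pass places a new Gaussian neuron at the worst-fit input and re-solves the output-layer weights and bias by linear least squares. The data and spread below are made-up:

```python
import numpy as np

def train_rbf(x, t, spread, goal=0.1, max_neurons=5):
    # Greedy RBF training: add one hidden neuron at a time, centered on
    # the worst-fit input, then refit the output weights (and bias b2).
    b1 = 0.8326 / spread
    centers, pred, w = [], np.zeros_like(t), np.zeros(1)
    for _ in range(max_neurons):
        err = t - pred
        if np.mean(err ** 2) <= goal:          # stop at the error goal
            break
        centers.append(x[np.argmax(np.abs(err))])  # worst-fit input becomes a center
        R = np.exp(-((x[:, None] - np.array(centers)[None, :]) * b1) ** 2)
        A = np.column_stack([R, np.ones_like(x)])  # last column models bias b2
        w, *_ = np.linalg.lstsq(A, t, rcond=None)  # linear least-squares refit
        pred = A @ w
    return np.array(centers), w, pred

x = np.linspace(0, 1, 20)
t = np.sin(2 * np.pi * x)                      # toy target standing in for discharge
centers, w, pred = train_rbf(x, t, spread=0.3)
```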

In this study, the root mean square error (RMSE) is used to estimate the performance of the neural networks:

RMSE = sqrt( (1/n) Σ_{i=1}^{n} (ŷ_i − y_i)^2 )        (13)

where y_i is the target output value, ŷ_i is the neural network output, and n is the number of data patterns used. The training of the RBFNN is initiated with a single neuron in the hidden layer, and neurons are then added to the hidden layer one at a time. Before training starts, the key training parameters must be assigned: in this study, the target error goal and the maximum number of neurons are set to 0.1 and 5, respectively. Training stops when the MSE falls below the pre-determined goal or the maximum number of neurons is reached. Because the spread value affects the quality of prediction, a trial-and-error method is employed in which several spread numbers are tested and the best one is accepted; by doing so, the optimal spread number is found to be 8.

RESULTS AND DISCUSSION

All tests and results were derived through programming in MATLAB 6.5. By trial and error, an optimum network and parameter configuration was derived for all networks. The mean annual rainfall for 1987-94 was used as the network input and the annual average discharge at the 56 observation stations as the network output. Of the 408 datasets, the 56 sets belonging to the year 1994 were reserved for network validation/testing and the remainder used for network calibration/training; among the 56 validation sets, 30 were selected on the basis of performance. Approximately 7% of the data is thus used for testing and the remaining 93% for training. As all empirical models can only interpolate between the boundary values of their calibration range, the training data should be representative of the entire range of conditions; extreme values of the hydrological data therefore need to be included in the training set. Prediction accuracy was evaluated using the coefficient of determination (R²) and the root mean squared error (RMSE). R² measures the degree to which two variables are linearly related, while the RMSE provides information about the predictive capability of the model.
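For reference, both evaluation metrics can be computed in a few lines of NumPy (illustrative code with hypothetical discharge values; R² is taken here as the squared Pearson correlation, one common reading of the coefficient of determination for a best-fit line):

```python
import numpy as np

def rmse(recorded, predicted):
    # Root mean squared error, Eq. (13)
    return np.sqrt(np.mean((predicted - recorded) ** 2))

def r_squared(recorded, predicted):
    # Squared Pearson correlation between recorded and predicted values
    r = np.corrcoef(recorded, predicted)[0, 1]
    return r ** 2

# Hypothetical recorded and predicted annual discharges (illustrative only)
recorded = np.array([50.0, 120.0, 200.0, 310.0, 405.0])
predicted = np.array([55.0, 110.0, 215.0, 300.0, 395.0])
```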

The performance efficiencies of the BPNN and the RBFNN with one hidden layer (an architecture 1-1-1 denotes one neuron each in the input, hidden and output layers) are given in Table 1 in terms of R² and RMSE. Both the RBFNN and the BPNN with the 1-2-1 architecture outperformed the other network architectures; of the two, the RBFNN has the lowest RMSE and the highest R². It can also be seen that the training-phase performance is better than the testing-phase performance.

Table 1. Performance of BPNN and RBFNN architectures

Network   Architecture   Training R²   Training RMSE   Validation R²   Validation RMSE
BPNN      1-1-1          0.9532        0.0753          0.9293          0.3574
BPNN      1-2-1          0.9753        0.0182          0.9695          0.2647
BPNN      1-3-1          0.9284        0.1753          0.8932          0.5392
RBFNN     1-1-1          0.9482        0.0643          0.9176          0.2593
RBFNN     1-2-1          0.9976        0.0146          0.9755          0.0989
RBFNN     1-3-1          0.9342        0.2967          0.9031          0.1984

[Fig. 4: R² of BPNN and RBFNN over the 10 validation sets]

Fig. 4 shows the performance on the different validation datasets, each consisting of 30 of the 56 stations; 10 such sets were randomly selected from the 56 sets belonging to the year 1994. In most of the datasets the R² for the RBFNN is higher than that of the BPNN, except for dataset 8, while datasets 1, 6 and 7 show nearly equal predictive capacity. Dataset 7 has the highest R² for both networks, so it has been selected as the test set.

[Fig. 5: Percentage error of BPNN and RBFNN at each of the 30 observation stations]

[Fig. 6: Recorded and predicted annual discharge (Cum) for the 30 datasets]

[Fig. 7: Rainfall (mm) and recorded discharge (Cum) for the 30 datasets]

Sabita Madhavi Singh & P. R. Maiti 76

The error percentage occurring in each observation stations is shown in fig. 5. in 14th station the

percentage of error is highest for both the networks near about 29% for BPNN and 18% for RBFNN. It is

due to the fact the discharge is unexceptionally higher than the usual ones in the surrounding stations due

very less amount of soil absorption. Fig.6 gives an idea by how much the predicted and recorded

discharges are varying. One can easily see that for lower discharges the predicted values are very close to

each other but in case of higher discharges there is a greater deviation. Figure.7, 8 and 9 gives the

rainfall and annual average discharges.

[Figs. 8 and 9: Comparison of rainfall and predicted discharges (BPNN and RBFNN, respectively) for the 30 observation stations]

Figs. 10 and 11 give the scatter plots of predicted versus recorded discharge, with R² and the best-fit line; R² is 0.9695 for the BPNN and 0.9755 for the RBFNN.

[Fig. 10: Scatter plot of BPNN-predicted vs. recorded discharge, R² = 0.9695]

[Fig. 11: Scatter plot of RBFNN-predicted vs. recorded discharge, R² = 0.9755]

CONCLUSIONS

By comparing the simulations of the RBFNN and BPNN models with the discharge records, it is found that the RBFNN model performs better than the BPNN model for the same input data at all studied sites. The meteorological variables have nonlinear relationships with rainfall and discharge. Data selected for model training must be representative of the watershed and flow conditions, and the training process should be repeated periodically to reflect the changing hydrological characteristics of the area under investigation. ANNs in hydrological applications can handle processes that are poorly defined or not well understood; no prior knowledge of the processes is needed for training; no specific solution form is required; weights are readily adjusted as new data become available; small errors are not amplified, since the processing is distributed; relevant information is stored, so there is no need to keep records of previously processed data; and no input and output data are needed beyond those specified in the training. The ANN can be applied effectively to model nonlinear systems such as the transformation of rainfall into runoff without explicitly specifying the hydrological processes of the watershed, but it is not a substitute for conceptual models based on physical processes. It can be a feasible alternative for hydrological forecasting when the runoff is predicted from input and output time series. Most importantly, this paper presents indications that neural networks can also be applied in cases where the datasets manifest trends and shifts and the desired output lies outside the range of previously introduced inputs, as shown by the results. Further study should take in more factors which may not be statistically associated with discharge (because of the absence of a linear relationship) but have an obvious physical meaning, in order to improve the prediction accuracy.

REFERENCES

1. Anmala, J., Zhang, B., Govindaraju, R.S. (2000). Comparison of ANNs and empirical approaches for predicting watershed runoff. J. Water Resour. Planning and Management, 126(3), 156-166.
2. ASCE Task Committee on Application of Artificial Neural Networks in Hydrology (2000). Artificial neural networks in hydrology. II: Hydrologic applications. J. Hydrol. Engng ASCE, 5(2), 124-137.
3. Bertoni, J.C., Tucci, C.E., Clarke, R.T. (1992). Rainfall-based real-time flood forecasting. J. Hydrol., 131, 313-339.
4. Brikundavyi, S., Labib, R., Trung, H.T., Rousselle, J. (2002). Performance of neural networks in daily streamflow forecasting. J. Hydrol. Engng ASCE, 7(5), 392-398.
5. Cigizoglu, H.K. (2003). Estimation, forecasting and extrapolation of flow data by artificial neural networks. Hydrol. Sci. J., 48(3), 349-361.
6. Coulibaly, P., Anctil, F., Bobee, B. (2000). Daily reservoir inflow forecasting using artificial neural networks with stopped training approach. J. Hydrol., 230(3-4), 244-257.
7. Demuth, H., Beale, M. (2003). Neural Network Toolbox: For Use with MATLAB. The MathWorks, Inc., Natick, MA.
8. Lorrai, M., Sechi, G.M. (1995). Neural nets for modeling rainfall runoff transformations. Water Resour. Management, 9, 299-313.
9. Luger, G.F. (2002). Artificial Intelligence: Structures and Strategies for Complex Problem Solving, 4th Ed. Addison-Wesley, USA.
10. Maier, H.R., Dandy, G.C. (2000). Neural networks for the prediction and forecasting of water resources variables: a review of modeling issues and applications. Environ. Modeling and Software, 15, 101-124.
11. Negnevitsky, M. (2002). Artificial Intelligence: A Guide to Intelligent Systems. Addison-Wesley, Harlow, England.
12. Schalkoff, R.J. (1997). Artificial Neural Networks. International Editions, McGraw-Hill.
13. Thirumalaiah, K., Deo, M.C. (1998). River stage forecasting using artificial neural networks. J. Hydrol. Engng ASCE, 3(1), 26-32.
14. Thirumalaiah, K., Deo, M.C. (2000). Hydrological forecasting using artificial neural networks. J. Hydrol. Engng ASCE, 5(2), 180-189.
15. Tokar, A.S., Johnson, P.A. (1999). Rainfall-runoff modeling using artificial neural networks. J. Hydrol. Engng ASCE, 4(3), 232-239.
16. Xu, C.-Y., Seibert, J., Halldin, S. (1996). Regional water balance modeling in the NOPEX area: development and application of monthly water balance models. J. Hydrol., 180, 211-236.
17. Zealand, C.M., Burn, D.H., Simonovic, S.P. (1999). Short term streamflow forecasting using artificial neural networks. J. Hydrol., 214, 32-48.

- 4.Format-ijce-landscape Integration in Road Design, Case Bahuichivo - Cerocahui in Chihuahua, MexicoEnviado poriaset123
- 3.Ijce-speech Intelligibility Assessment at Architecture Jury Rooms in DhakaEnviado poriaset123
- 2.Ijce-Analytical Modelforincorrect Cost Estimation in Local Governments in JapanEnviado poriaset123
- 1.IJCE-Evaluation of Energy Consumption Ratios in BuildingsEnviado poriaset123
- 3.Format-IJBGM-Entrepreneurship at the Bottom of the PyramidEnviado poriaset123
- 2.Ijbgm-A Cointegration Analysis of the Relationship Between Money Supply and Financial Inclusion in Nigeria _1981-2016__rewrittenEnviado poriaset123
- 1.Format-ijbgm-impact of Corporate Social Reasonability Practices Oncustomer Relationship Maintenance in Cement Industry a Study of Selected Cement Units of Vindhya Region_1Enviado poriaset123
- 2.-IJAMSS- An Analytical Study on Students Performance in Mathematics DeptEnviado poriaset123
- 3.Ijece-survey Paper of Script IdentificationEnviado poriaset123
- 2. IJECE-CalAnalysis of Datapath for Power, Area and ThroughputEnviado poriaset123
- 1.IJECE-UVM Based Verification of Dual Port SRAM by Implementing BISTEnviado poriaset123
- 4. Format-IJCSE-Review Analysis Using Sparse Vector and Deep Neural NetworkEnviado poriaset123

- Caffe TutorialEnviado porKalo Rafin
- jurnal cc 3 1-s2.0-S0957417418300435-mainEnviado porDeny Arifianto
- Retrocálculo en Pavimentos No LinealesEnviado porandresneo77
- Journal - FAULT IDENTIFICATION AND CLASSIFICATION FOR SHORT MEDIUM VOLTAGE UNDERGROUND CABLE BASED ON ARTIFICIAL NEURAL NETWORKS [5_108-08].pdfEnviado porHaposan Yoga
- Neural Network in Medical DiagnosisEnviado porKunal Govil
- Bangla Optical Character RecognitionEnviado porselotmani
- A Critical Study of Selected Classification Algorithms for Liver Disease DiagnosisEnviado porMaurice Lee
- 0212ijcses12.pdfEnviado porLia Khikmatul Maula
- C++ Neural Networks and Fuzzy Logic - Valluru B. RaoEnviado porThyago Vasconcelos
- PerformanceMeasures ANNEnviado porHarry Ramza
- z-roland_verbruggen.pdfEnviado porandrelrf
- Hacker's Guide to Neural NetworksEnviado porAlbert
- Multi-Layer Perceptron Network For English Character RecognitionEnviado porAJER JOURNAL
- Intro to NN & FLEnviado porZerocode Code
- 01 Martin T. Hagan - Neural Network Design, Chino (1996)Enviado porJeffer Barrera
- Paper3-Adaptive Neuro-Fuzzy Inference System for Dynamic Load Balancing in 3GPP LTEEnviado porIjarai ManagingEditor
- Journal.pone.0185478Enviado portigerman man
- How to Use Batch Normalization With TensorFlow and Tf.keras to Train Deep Neural Networks FasterEnviado porsusan
- Fuzzy NeuralEnviado porapi-3834446
- Module-3Enviado porvishek kumar
- ME DSEnviado porYuvraj Mohite
- Advances in Speech RecognitionEnviado porlangtu2011
- Thesis Final_Omid Mohareri @27703Enviado porపాకనాటి ప్రశాంత్ కుమార్ రెడ్డి
- Submerged Arc Welding NEURAL NETWORKEnviado porJohnnyDorian
- Model Predictive ControlEnviado porcacr_72
- Deeplearning2015 Goodfellow Adversarial Examples 01Enviado porgagamamapapa
- Neural Technique for Predicting Traffic Accidents in JordanEnviado porJabar H. Yousif
- Implementation of Neural Network Model for Solving Linear Programming ProblemEnviado porseventhsensegroup
- Final DissEnviado porBRAHIMITAHAR
- Lecture 03 BPNEnviado porkarthi