# thesIt

• #### Arif 03:07:29 pm on August 24, 2010 | 0 | # | Tags: error, Geman 1992, MSE, neural network, poor data, Silvert 1998, variance

Bias-variance dilemma (Geman et al., 1992). It can be shown that the mean squared estimation error between the function to be modelled and the neural network decomposes into the sum of the squared bias and the variance. With a neural network trained on a data set of fixed size, a small bias can only be achieved at the cost of a large variance (Haykin, 1994). This dilemma can be circumvented by making the training set very large, but if the total amount of data is limited, this may not be possible.
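
The decomposition can be checked numerically. In this sketch a low-degree polynomial fit stands in for a small network (the target function, noise level and sample sizes are arbitrary choices, not from Geman et al.): over many resampled training sets, the mean squared error of the predictions at a fixed test point equals the squared bias plus the variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n_repeats, n_train, noise = 2000, 20, 0.3
x_test = 0.5
f = np.sin                                 # function to be modelled

preds = np.empty(n_repeats)
for i in range(n_repeats):
    x = rng.uniform(-np.pi, np.pi, n_train)
    y = f(x) + rng.normal(0.0, noise, n_train)
    coeffs = np.polyfit(x, y, deg=3)       # stand-in for a small network
    preds[i] = np.polyval(coeffs, x_test)

mse = np.mean((preds - f(x_test)) ** 2)    # estimation error at x_test
bias_sq = (np.mean(preds) - f(x_test)) ** 2
variance = np.var(preds)                   # MSE = bias^2 + variance
```

The identity is exact for the Monte Carlo averages themselves; shrinking `n_train` inflates the variance term, illustrating why a fixed, small training set forces the trade-off.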

• #### Arif 05:41:53 am on August 24, 2010 | 0 | # | Tags: neural network, poor data, reference, Silvert 1998

Perhaps the greatest problem that is faced in most attempts to use artificial neural networks for ecological applications is that the quantity of data is often very limited. Although there are a few cases where large amounts of data are available, as in the case of remote sensing or observations based on automatic telemetry, it is far more common to have to deal with limited and irregularly spaced data, and the data may not always be strictly comparable due to variations in environmental conditions between sampling periods. In most situations the collection of field data is both time-consuming and expensive.

Since the training and testing of neural networks is very data-intensive, this poses serious obstacles to the development of neural network applications in ecology.

• #### Arif 07:27:51 pm on August 11, 2010 | 0 | # | Tags: Enăchescu 2008, neural network, universal approximation property

Approximation Capabilities of Neural Networks. C. Enăchescu, Journal of Numerical Analysis, Industrial and Applied Mathematics (JNAIAM), vol. 3, no. 3-4, 2008, pp. 221-230

From the learning point of view, the approximation of a function is equivalent to the learning problem of a neural network. In this paper we want to show the capability of a neural network to approximate arbitrary continuous functions. We have carried out experiments to confirm the theoretical results.

c48613aed54c29e0f60ca35833aebf70.pdf

$f : X \subseteq R^{n} \to R^{m}$ is a continuous function

## Neural Network and Best Approximation Theory

Given $f \in F$ and $A \subseteq F$, we define the distance of $f$ from $A$ as $d(f,A)=\inf_{a\in A}\left \| f-a \right \|$
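
For a finite (or discretized) family $A$ the infimum becomes a minimum and can be computed directly. This sketch measures the distance of $f(x)=e^{x}$ from a grid of affine functions under the sup norm on $[0,1]$; the target, the family and the grids are illustrative choices only.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 201)
f = np.exp(x)                            # function to approximate

# A: a finite, illustrative family of affine candidates a(x) = c0 + c1*x
candidates = [c0 + c1 * x
              for c0 in np.linspace(0.0, 2.0, 41)
              for c1 in np.linspace(0.0, 3.0, 61)]

# d(f, A) = inf over a in A of the sup norm ||f - a||
d = min(np.max(np.abs(f - a)) for a in candidates)
```

The value found is bounded below by the best-approximation (minimax) error of $e^{x}$ by affine functions on $[0,1]$, and approaches it as the coefficient grid is refined.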

• #### Arif 12:57:22 pm on February 27, 2010 | 0 | # | Tags: abstract, interfacial tension, KumarVasanth 2009, neural network, prediction, reference

Neural Network Prediction of Interfacial Tension at Crystal/Solution Interface. K. Vasanth Kumar. Ind. Eng. Chem. Res., 2009, 48 (8), pp 4160–4164

Using solubility, molecular weight, and density data, a three-layer feed-forward neural network was constructed and tested to predict the IFT at the crystal/solution interface. The concentration of solute in the liquid phase, the concentration of solute in the solid phase, temperature, density and molecular weight of the crystal were used as inputs to predict the interfacial tension at the crystal/liquid interface (σSL). The network was trained using the solubility information for 28 systems to predict the σSL value and was validated with 29 new systems. Despite the limited number of data used for training, the neural network was capable of predicting σSL successfully for new inputs that were kept unseen during the training process. The σSL values predicted by the artificial neural network during the training and testing process were compared with σSL predicted from a widely used empirical expression. For most of the systems, the ANN predicted the IFT better.


• #### Arif 12:14:08 pm on February 21, 2010 | 0 | # | Tags: cross-validation, faq, neural network

What are cross-validation and bootstrapping?

Cross-validation and bootstrapping are both methods for estimating generalization error based on “resampling”.

In k-fold cross-validation, you divide the data into k subsets of (approximately) equal size. You train the net k times, each time leaving out one of the subsets from training, but using only the omitted subset to compute whatever error criterion interests you. If k equals the sample size, this is called “leave-one-out” cross-validation.
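
The procedure described above can be sketched directly. The model here is an arbitrary stand-in (a degree-1 polynomial fit on synthetic noisy data); only the fold logic is the point.

```python
import numpy as np

def k_fold_cv(x, y, k, fit, error):
    """Estimate generalization error by k-fold cross-validation.

    fit(x_train, y_train) -> model;  error(model, x_val, y_val) -> float.
    """
    idx = np.random.default_rng(0).permutation(len(x))
    folds = np.array_split(idx, k)           # k subsets of ~equal size
    errs = []
    for i in range(k):
        val = folds[i]                       # the omitted subset
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(x[train], y[train])      # train without fold i
        errs.append(error(model, x[val], y[val]))
    return np.mean(errs)

# illustration: linear fit on a noisy line (model choice is arbitrary)
x = np.linspace(0.0, 1.0, 30)
y = 2 * x + 1 + np.random.default_rng(1).normal(0.0, 0.1, 30)
fit = lambda xt, yt: np.polyfit(xt, yt, 1)
err = lambda m, xv, yv: np.mean((np.polyval(m, xv) - yv) ** 2)
cv_mse = k_fold_cv(x, y, k=5, fit=fit, error=err)
```

Setting `k = len(x)` turns this into leave-one-out cross-validation.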

• #### Arif 12:08:34 pm on February 21, 2010 | 0 | # | Tags: cross-validation, neural network, reference, Setiono 2001, verification

Feedforward Neural Network Construction Using Cross Validation. Rudy Setiono. Neural Computation 13(12): 2865-2877.

This article presents an algorithm that constructs feedforward neural networks with a single hidden layer for pattern classification. The algorithm starts with a small number of hidden units in the network and adds more hidden units as needed to improve the network’s predictive accuracy. To determine when to stop adding new hidden units, the algorithm makes use of a subset of the available training samples for cross validation. New hidden units are added to the network only if they improve the classification accuracy of the network on the training samples and on the cross-validation samples.

Extensive experimental results show that the algorithm is effective in obtaining networks with predictive accuracy rates that are better than those obtained by state-of-the-art decision tree methods.
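
A loose sketch of the stopping idea only, not Setiono's algorithm: candidate hidden units (here with random input weights, fitted by least squares rather than the paper's training method) are appended one at a time and kept only if they improve accuracy on both the training and the cross-validation samples. The toy data and all sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy two-class data (illustrative only, not from the paper)
x = rng.uniform(-1, 1, (200, 2))
y = (x[:, 0] * x[:, 1] > 0).astype(float)        # XOR-like labels
train, cv = slice(0, 150), slice(150, 200)       # held-out CV split

def accuracy(H, w, target):
    return np.mean((H @ w > 0.5) == (target > 0.5))

hidden_w, hidden_b = [], []                      # accepted hidden units
best_tr = best_cv = 0.0
H_tr = H_cv = None
for _ in range(30):                              # propose up to 30 units
    w_new, b_new = rng.normal(size=2), rng.normal()
    col_tr = np.tanh(x[train] @ w_new + b_new)[:, None]
    col_cv = np.tanh(x[cv] @ w_new + b_new)[:, None]
    H_tr_try = col_tr if H_tr is None else np.hstack([H_tr, col_tr])
    H_cv_try = col_cv if H_cv is None else np.hstack([H_cv, col_cv])
    out_w, *_ = np.linalg.lstsq(H_tr_try, y[train], rcond=None)
    tr_acc = accuracy(H_tr_try, out_w, y[train])
    cv_acc = accuracy(H_cv_try, out_w, y[cv])
    # keep the unit only if BOTH training and CV accuracy improve
    if tr_acc > best_tr and cv_acc > best_cv:
        best_tr, best_cv = tr_acc, cv_acc
        H_tr, H_cv = H_tr_try, H_cv_try
        hidden_w.append(w_new)
        hidden_b.append(b_new)
```

The CV check is what keeps the construction from growing the hidden layer indefinitely: units that only help on the training samples are rejected.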

10.1.1.112.9536.pdf

• #### Arif 11:15:08 am on January 7, 2010 | 0 | # | Tags: abstract, method, neural network, prediction, reference, softcomputing, spectrophotometry, wavelength, why?, Zawadzki 2005

Use of a spectrophotometer for biodiesel quality sensing.
Paper number 053133, 2005 ASAE Annual Meeting. Artur Zawadzki, Dev Shrestha, Brian He.

The test procedures to assure ASTM biodiesel quality are not widely implemented because of the lengthy procedures and laboratory equipment requirements. A critical need in the rapidly emerging biodiesel industry is a reliable, affordable and rapid test method for determining the blend level of biodiesel in diesel fuel. As an effort to explore such a method, a spectrophotometer was used to scan blends of #2 fossil diesel and biodiesel for spectra in the wavelength range of 190-1100 nm.

The shape of the spectrum curve was found to differ between biodiesel feedstocks, whereas the relative absorbance and the characteristic peaks of the absorbance curve were attenuated with increasing amounts of diesel in the blend. The shape characteristics were fed into a neural network to predict the biodiesel feedstock and the blend level in the biodiesel-diesel mixture.

http://asae.frymulti.com/abstract.asp?aid=19829&t=2

• #### Arif 01:35:55 am on December 31, 2009 | 0 | # | Tags: back-propagation, interfacial tension, Kumar 2005, neural network, prediction, reference

Prediction of Surface Tension of Organic Liquids Using Artificial Neural Networks
D. Kumar, S. Gupta and S. Basu. Indian Chem Engr., Section A, Vol. 47, No. 4, October – December 2005.
A feed-forward back-propagation neural network, based on Levenberg-Marquardt optimization and gradient descent with momentum weight and bias learning, was used. The input parameters to the neural network, e.g., density, refractive index and parachor, were chosen from previous studies on the theoretical prediction of surface tension.
pstoluann.pdf

• #### Arif 02:28:43 pm on December 25, 2009 | 0 | # | Tags: Isha 2006, method, neural network, spectra, wavelength

Spectral measurements were made with an ultraviolet-visible spectrophotometer (Varian-Cary Win UV 100). For each concentration, buffer solution was added and the solution was then diluted to the mark. The absorbance spectra of the complex solution were recorded from 350 to 700 nm. A total of 24 spectral readings were obtained; 5 were used to test the network whilst the remaining 19 were used for training.

• #### Arif 02:22:46 pm on December 25, 2009 | 0 | # | Tags: determination, Isha 2006, neural network, spectrophotometry

Pb(II) and Hg(II) are metals that appear together in many real samples. Recently, spectrophotometric methods based on ANNs have found increasing application in simultaneous determination.
