<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>thesIt &#187; Arif</title>
	<atom:link href="http://lakm.us/thesit/author/arif/feed/" rel="self" type="application/rss+xml" />
	<link>http://lakm.us/thesit</link>
	<description>computer science research log in semi-microblogging style</description>
	<lastBuildDate>Tue, 24 Aug 2010 21:34:55 +0000</lastBuildDate>
	<generator>http://wordpress.org/?v=2.9</generator>
	<language>en</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
			<item>
		<title>Geman S, Bienenstock E, Doursat R. 1992. &#8230;</title>
		<link>http://lakm.us/thesit/331/geman-s-bienenstock-e-doursat-r-1992/</link>
		<comments>http://lakm.us/thesit/331/geman-s-bienenstock-e-doursat-r-1992/#comments</comments>
		<pubDate>Tue, 24 Aug 2010 21:34:55 +0000</pubDate>
		<dc:creator>Arif</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Geman 1992]]></category>
		<category><![CDATA[standard citation]]></category>

		<guid isPermaLink="false">http://xp-racy.lan/s2/?p=331</guid>
		<description><![CDATA[Geman S, Bienenstock E, Doursat R. 1992. Neural networks and the bias/variance dilemma. Neural Comput 4:1-58.]]></description>
			<content:encoded><![CDATA[<p>Geman S, Bienenstock E, Doursat R. 1992. Neural networks and the bias/variance dilemma. <i>Neural Comput</i> 4:1-58.</p>]]></content:encoded>
			<wfw:commentRss>http://lakm.us/thesit/331/geman-s-bienenstock-e-doursat-r-1992/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Bias-variance dilemma (Geman et al., 199 &#8230;</title>
		<link>http://lakm.us/thesit/330/bias-variance-dilemma-geman-et-al-199/</link>
		<comments>http://lakm.us/thesit/330/bias-variance-dilemma-geman-et-al-199/#comments</comments>
		<pubDate>Tue, 24 Aug 2010 15:07:29 +0000</pubDate>
		<dc:creator>Arif</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[error]]></category>
		<category><![CDATA[Geman 1992]]></category>
		<category><![CDATA[MSE]]></category>
		<category><![CDATA[neural network]]></category>
		<category><![CDATA[poor data]]></category>
		<category><![CDATA[Silvert 1998]]></category>
		<category><![CDATA[variance]]></category>

		<guid isPermaLink="false">http://xp-racy.lan/s2/?p=330</guid>
		<description><![CDATA[Bias-variance dilemma (Geman et al., 1992). It can be demonstrated that the mean square value of the estimation error between the function to be modelled and the neural network consists of the sum of the (squared) bias and variance. With a neural network using a training set of fixed size, a small bias can only [...]]]></description>
			<content:encoded><![CDATA[<p>Bias-variance dilemma (Geman <em>et al.</em>, 1992). It can be demonstrated that the mean square value of the estimation error between the function to be modelled and the neural network consists of the sum of the (squared) bias and variance. With a neural network using a training set of fixed size, a <b>small bias</b> can only be achieved with a <b>large variance</b> (Haykin, 1994). This dilemma can be circumvented if the training set is made very large, but if the total amount of data is limited, this may not be possible.</p>]]></content:encoded>
			<wfw:commentRss>http://lakm.us/thesit/330/bias-variance-dilemma-geman-et-al-199/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
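The decomposition above can be checked numerically. A minimal sketch (not part of the original post; the shrinkage estimator is an invented illustration) showing that the empirical mean square error equals squared bias plus variance:

```python
# Illustrative sketch: numerically checking MSE = bias^2 + variance
# for a simple biased estimator of a population mean.
import random

random.seed(0)
mu = 2.0          # true value to estimate
n, trials = 5, 20000
shrink = 0.8      # biased "shrinkage" estimator: shrink * sample mean

estimates = []
for _ in range(trials):
    sample = [random.gauss(mu, 1.0) for _ in range(n)]
    estimates.append(shrink * sum(sample) / n)

mean_est = sum(estimates) / trials
bias = mean_est - mu
variance = sum((e - mean_est) ** 2 for e in estimates) / trials
mse = sum((e - mu) ** 2 for e in estimates) / trials

print(bias ** 2 + variance, mse)  # the two agree up to floating error
```

Shrinking toward zero lowers the variance but introduces bias; with a fixed sample size one can only be traded against the other, which is the dilemma stated above.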
		<item>
		<title>Rajkumar T, Bardina J. 2003. Training da &#8230;</title>
		<link>http://lakm.us/thesit/324/rajkumar-t-bardina-j-2003-training-da/</link>
		<comments>http://lakm.us/thesit/324/rajkumar-t-bardina-j-2003-training-da/#comments</comments>
		<pubDate>Tue, 24 Aug 2010 12:02:26 +0000</pubDate>
		<dc:creator>Arif</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Rajkumar 2003]]></category>
		<category><![CDATA[standard citation]]></category>

		<guid isPermaLink="false">http://xp-racy.lan/s2/?p=324</guid>
		<description><![CDATA[Rajkumar T, Bardina J. 2003. Training data requirement for a neural network to predict aerodynamic coefficients. In: Bell AJ, Wickerhauser MV, Szu HH. Proceedings of Independent Component Analyses, Wavelets, and Neural Networks; Orlando: 22 Apr 2003. Orlando: SPIE. pp (5102):92-103.]]></description>
			<content:encoded><![CDATA[<p>Rajkumar T, Bardina J. 2003. Training data requirement for a neural network to predict aerodynamic coefficients. In: Bell AJ, Wickerhauser MV, Szu HH. <i>Proceedings of Independent Component Analyses, Wavelets, and Neural Networks</i>; Orlando: 22 Apr 2003. Orlando: SPIE. pp (5102):92-103.</p>]]></content:encoded>
			<wfw:commentRss>http://lakm.us/thesit/324/rajkumar-t-bardina-j-2003-training-da/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Perhaps the greatest problem that is fac &#8230;</title>
		<link>http://lakm.us/thesit/323/perhaps-the-greatest-problem-that-is-fac/</link>
		<comments>http://lakm.us/thesit/323/perhaps-the-greatest-problem-that-is-fac/#comments</comments>
		<pubDate>Tue, 24 Aug 2010 05:41:53 +0000</pubDate>
		<dc:creator>Arif</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[neural network]]></category>
		<category><![CDATA[poor data]]></category>
		<category><![CDATA[reference]]></category>
		<category><![CDATA[Silvert 1998]]></category>

		<guid isPermaLink="false">http://xp-racy.lan/s2/?p=323</guid>
		<description><![CDATA[Perhaps the greatest problem that is faced in most attempts to use artificial neural networks for ecological applications is that the quantity of data is often very limited. Although there are a few cases where large amounts of data are available, as in the case of remote sensing or observations based on automatic telemetry, it [...]]]></description>
			<content:encoded><![CDATA[<p>Perhaps the greatest problem that is faced in most attempts to use artificial neural networks for ecological applications is that the quantity of data is often <b>very limited</b>. Although there are a few cases where large amounts of data are available, as in the case of remote sensing or observations based on automatic telemetry, it is far more common to have to deal with limited and irregularly spaced data, and the data may not always be strictly comparable due to variations in environmental conditions between sampling periods. In most situations the collection of field data is both time-consuming and expensive.</p>
<p>Since the training and testing of neural networks is <b>very data-intensive</b>, this poses serious obstacles to the development of neural network applications in ecology.</p>]]></content:encoded>
			<wfw:commentRss>http://lakm.us/thesit/323/perhaps-the-greatest-problem-that-is-fac/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Silvert W. 1998. Can neuronal networks b &#8230;</title>
		<link>http://lakm.us/thesit/322/silvert-w-1998-can-neuronal-networks-b/</link>
		<comments>http://lakm.us/thesit/322/silvert-w-1998-can-neuronal-networks-b/#comments</comments>
		<pubDate>Tue, 24 Aug 2010 05:36:07 +0000</pubDate>
		<dc:creator>Arif</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Silvert 1998]]></category>
		<category><![CDATA[standard citation]]></category>

		<guid isPermaLink="false">http://xp-racy.lan/s2/?p=322</guid>
		<description><![CDATA[Silvert W, Baptist M. 1998. Can neuronal networks be used in data-poor situations? In: Lek S, Guégan JF. Artificial Neuronal Networks: Application to Ecology and Evolution. Berlin: Springer-Verlag. pp 241-248.]]></description>
			<content:encoded><![CDATA[<p>Silvert W, Baptist M. 1998. Can neuronal networks be used in data-poor situations? In: Lek S, Guégan JF. <i>Artificial Neuronal Networks: Application to Ecology and Evolution</i>. Berlin: Springer-Verlag. pp 241-248.</p>]]></content:encoded>
			<wfw:commentRss>http://lakm.us/thesit/322/silvert-w-1998-can-neuronal-networks-b/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Chan LW. 1996. Levenberg-Marquardt learn &#8230;</title>
		<link>http://lakm.us/thesit/321/chan-lw-1996-levenberg-marquardt-learn/</link>
		<comments>http://lakm.us/thesit/321/chan-lw-1996-levenberg-marquardt-learn/#comments</comments>
		<pubDate>Sun, 22 Aug 2010 13:03:43 +0000</pubDate>
		<dc:creator>Arif</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Chan 1996]]></category>
		<category><![CDATA[Levenberg-Marquardt]]></category>
		<category><![CDATA[reference]]></category>

		<guid isPermaLink="false">http://xp-racy.lan/s2/?p=321</guid>
		<description><![CDATA[Chan LW. 1996. Levenberg-Marquardt learning and regularization. Progress Neural Inform Processing 139-144.
iconip96.ps]]></description>
			<content:encoded><![CDATA[<p>Chan LW. 1996. Levenberg-Marquardt learning and regularization. <i>Progress Neural Inform Processing</i> 139-144.</p>
<p><code><a href="http://www.cse.cuhk.edu.hk/~lwchan/papers/iconip96.ps">iconip96.ps</a></code></p>]]></content:encoded>
			<wfw:commentRss>http://lakm.us/thesit/321/chan-lw-1996-levenberg-marquardt-learn/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Stone 1974 is referenced in: Michaelsen &#8230;</title>
		<link>http://lakm.us/thesit/316/stone-1974-is-referenced-inmichaelsen/</link>
		<comments>http://lakm.us/thesit/316/stone-1974-is-referenced-inmichaelsen/#comments</comments>
		<pubDate>Sun, 22 Aug 2010 12:41:37 +0000</pubDate>
		<dc:creator>Arif</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[cross-validation]]></category>
		<category><![CDATA[error]]></category>
		<category><![CDATA[estimation]]></category>
		<category><![CDATA[k-fold]]></category>
		<category><![CDATA[Michaelsen 1987]]></category>
		<category><![CDATA[reference]]></category>
		<category><![CDATA[Stone 1974]]></category>

		<guid isPermaLink="false">http://xp-racy.lan/s2/?p=316</guid>
		<description><![CDATA[Stone 1974 is referenced in:
Michaelsen J. 1987. Cross-validation in statistical climate forecast models. J Climate Applied Meteorology 26:1589-1600.
1520-0450(1987)026-1589-cviscf-2.0.co;2.pdf
Set Z = z_1, z_2, ..., z_n consists of predictions and targets z_i = (x_i, y_i).
A prediction rule eta(x, Z) will be used to predict y_0 from eta(x_0, Z).
Let Q(y_i, eta_i) be the accuracy measure; by least squares this will usually be (y_i - eta_i)^2. In other words the expected Err is Err = E[Q(y_i, eta(x_0, Z))].

MSE

MSE = sum_{i=1}^{n} Q(y_i, eta(x_i, Z)) / n

In cross-validation
MSE_CV = sum_{i=1}^{n} Q(y_i, eta(x_i, Z_(i))) / n
]]></description>
			<content:encoded><![CDATA[<p>Stone 1974 is referenced in:<br />
Michaelsen J. 1987. Cross-validation in statistical climate forecast models. <i>J Climate Applied Meteorology</i>, 26:1589-1600</p>
<p><code><a href="http://journals.ametsoc.org/doi/abs/10.1175/1520-0450(1987)026<1589:CVISCF>2.0.CO;2">1520-0450(1987)026-1589-cviscf-2.0.co;2.pdf</a></code></p>
<p>Set <img src="http://lakm.us/thesit/wp-content/uploads/eq_9ba076dd6bb8492276d55d6eba4426dd.png" align="absmiddle" class="tex" alt="Z = z_{1}, z_{2},... , z_{1}" /> consists of predictions and targets <img src="http://lakm.us/thesit/wp-content/uploads/eq_daa39427192ac3aa475b144bc35d8474.png" align="absmiddle" class="tex" alt="z_{i}=(x_{i}, y_{i})" /></p>
<p>A prediction rule <img src="http://lakm.us/thesit/wp-content/uploads/eq_fe0f7385f849065eb2c4a9d3fc43cff1.png" align="absmiddle" class="tex" alt="\eta (x,Z)" /> will be used to predict y<sub>0</sub> from <img src="http://lakm.us/thesit/wp-content/uploads/eq_0cf6b515e8282bc189e99e089fbec517.png" align="absmiddle" class="tex" alt="\eta (x_{0},Z)" /></p>
<p>Let <img src="http://lakm.us/thesit/wp-content/uploads/eq_9ef6f3af2ea843d5273bad7d2d6ca52a.png" align="absmiddle" class="tex" alt="Q(y_{i},\eta_{i})" /> be the accuracy measure;<br />
by least squares this will usually be <img src="http://lakm.us/thesit/wp-content/uploads/eq_5f92ce56ee3d7c84c1095240f8632b7e.png" align="absmiddle" class="tex" alt="(y_{i}-\eta_{i})^{2}" /><br />
in other words the expected Err is<br />
<img src="http://lakm.us/thesit/wp-content/uploads/eq_a3a3253210769351b42ecc2c29b07f59.png" align="absmiddle" class="tex" alt="Err= E[Q(y_{i},\eta(x_{0},Z))]" /></p>
<h2>MSE</h2>
<p><img src="http://lakm.us/thesit/wp-content/uploads/eq_d74147edd30861f4fb14c995a384a2d3.png" align="absmiddle" class="tex" alt="MSE= \sum_{i=1}^{n}Q(y_{i},\eta(x_{i},Z))/n" /></p>
<p>In cross validation<br />
<img src="http://lakm.us/thesit/wp-content/uploads/eq_0c588b4269646a1c4ee8d83d92235f7d.png" align="absmiddle" class="tex" alt="MSE_{(CV)}= \sum_{i=1}^{n}Q(y_{i},\eta(x_{i},Z_{(i)}))/n" /></p>]]></content:encoded>
			<wfw:commentRss>http://lakm.us/thesit/316/stone-1974-is-referenced-inmichaelsen/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
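The cross-validated MSE formula above can be sketched in a few lines. A hypothetical illustration (the data are invented), with the sample mean of the training targets standing in for the prediction rule eta(x, Z):

```python
# Illustrative sketch of MSE vs. leave-one-out MSE_CV, where the
# prediction rule eta(x, Z) is simply the mean of the training targets.
data = [(1.0, 1.2), (2.0, 1.9), (3.0, 3.1), (4.0, 4.2)]  # z_i = (x_i, y_i)

def eta(x, zset):
    # trivial rule: predict the mean of the training targets
    ys = [y for (_, y) in zset]
    return sum(ys) / len(ys)

def q(y, pred):
    # squared-error accuracy measure Q(y_i, eta_i)
    return (y - pred) ** 2

# apparent MSE: rule trained on all of Z
mse = sum(q(y, eta(x, data)) for (x, y) in data) / len(data)

# cross-validated MSE: rule trained on Z_(i), i.e. Z without z_i
mse_cv = sum(
    q(y, eta(x, data[:i] + data[i + 1:]))
    for i, (x, y) in enumerate(data)
) / len(data)

print(mse, mse_cv)  # MSE_CV is larger: each point is held out of training
```

The gap between the two is the point of cross-validation: the apparent MSE scores the rule on data it was trained on and therefore underestimates the expected Err.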
		<item>
		<title>If we assume a normally distributed popu &#8230;</title>
		<link>http://lakm.us/thesit/315/if-we-assume-a-normally-distributed-popu/</link>
		<comments>http://lakm.us/thesit/315/if-we-assume-a-normally-distributed-popu/#comments</comments>
		<pubDate>Sun, 22 Aug 2010 12:19:08 +0000</pubDate>
		<dc:creator>Arif</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[basic]]></category>
		<category><![CDATA[error]]></category>
		<category><![CDATA[residual error]]></category>
		<category><![CDATA[statistical error]]></category>
		<category><![CDATA[wikipedia]]></category>

		<guid isPermaLink="false">http://xp-racy.lan/s2/?p=315</guid>
		<description><![CDATA[If we assume a normally distributed population with mean μ and standard deviation σ, and take a sample
x_1, x_2, ..., x_n ~ N(μ, σ²)

the statistical error is then
ε_i = x_i - μ

Residual

while the residual is
ε̂_i = x_i - x̄

The hat over the letter ε indicates an observable estimate of an unobservable quantity called ε.]]></description>
			<content:encoded><![CDATA[<p>If we assume a normally distributed population with mean μ and standard deviation σ, and take a sample</p>
<p><img src="http://lakm.us/thesit/wp-content/uploads/eq_4c96f0de2e43b51f187721c402c03630.png" align="absmiddle" class="tex" alt="x_{1}, x_{2},..., x_{n}\sim N(\mu,\sigma^{2})" /></p>
<p><a href="http://en.wikipedia.org/wiki/Errors_and_residuals_in_statistics">statistical error</a> is then<br />
<img src="http://lakm.us/thesit/wp-content/uploads/eq_d78f820e87995c9f70844afc97c6d21c.png" align="absmiddle" class="tex" alt="\varepsilon_{i}=x_{i}-\mu" /></p>
<h2>Residual</h2>
<p>while residual is<br />
<img src="http://lakm.us/thesit/wp-content/uploads/eq_ac9247e31ce87591744c02cfbedc4fac.png" align="absmiddle" class="tex" alt="\hat{\varepsilon}_{i}=x_{i}-\bar{x}" /></p>
<p>The hat over the letter ε indicates an observable estimate of an <b>unobservable quantity</b> called ε.</p>]]></content:encoded>
			<wfw:commentRss>http://lakm.us/thesit/315/if-we-assume-a-normally-distributed-popu/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
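A minimal sketch of the distinction (the data and mu are invented for illustration): errors are measured against the unknown population mean, residuals against the observed sample mean.

```python
# Illustrative sketch: statistical errors (vs. the unknown mean mu)
# vs. residuals (vs. the observable sample mean), per the note above.
x = [2.1, 1.8, 2.4, 2.0, 1.7]
mu = 1.9                      # the (normally unknown) population mean
xbar = sum(x) / len(x)        # observable sample mean

errors = [xi - mu for xi in x]       # epsilon_i = x_i - mu (unobservable)
residuals = [xi - xbar for xi in x]  # hat-epsilon_i = x_i - xbar

print(sum(residuals))  # residuals always sum to ~0; errors need not
```

That forced zero sum is why the residuals are not independent even when the underlying errors are, and why they only estimate the errors.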
		<item>
		<title>Boxplot drawing</title>
		<link>http://lakm.us/thesit/308/boxplot-drawing/</link>
		<comments>http://lakm.us/thesit/308/boxplot-drawing/#comments</comments>
		<pubDate>Sun, 22 Aug 2010 07:29:29 +0000</pubDate>
		<dc:creator>Arif</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[basic]]></category>
		<category><![CDATA[boxplot]]></category>
		<category><![CDATA[outlier]]></category>

		<guid isPermaLink="false">http://xp-racy.lan/s2/?p=308</guid>
		<description><![CDATA[Drawing a boxplot from the following data; whiskers are determined from 1.5 IQR.

no  datum  drawn as
1   0.026  outlier
2   0.048  whisker low
3   0.070  Q1
4   0.072
5   0.076  Q2
6   0.084
7   0.086
8   0.099  Q3
9   0.102
10  0.103  whisker high

Outlier
0.026 becomes an outlier because the lower 1.5 IQR fence (IQR = Q3 - Q1 = 0.029) is Q1 - 1.5 IQR = 0.0265; 0.026 falls below it, so the lower whisker is drawn at 0.048, the smallest value inside the fence, and 0.026 is plotted as an outlier.]]></description>
			<content:encoded><![CDATA[<p>Drawing a boxplot from the following data; whiskers are determined from 1.5 IQR.</p>
<table border="0" cellspacing="0" frame="VOID" rules="NONE">
<colgroup>
<col width="28"></col>
<col width="54"></col>
<col width="86"></col>
</colgroup>
<tbody>
<tr>
<td width="28" height="17" align="LEFT"><strong>no</strong></td>
<td width="54" align="LEFT"><strong>datum</strong></td>
<td width="86" align="LEFT"><strong>drawn as</strong></td>
</tr>
<tr>
<td height="17" align="RIGHT">1</td>
<td align="RIGHT">0.026</td>
<td align="RIGHT">outlier</td>
</tr>
<tr>
<td height="17" align="RIGHT">2</td>
<td align="RIGHT">0.048</td>
<td align="RIGHT">whisker low</td>
</tr>
<tr>
<td height="17" align="RIGHT">3</td>
<td align="RIGHT">0.070</td>
<td align="RIGHT">Q1</td>
</tr>
<tr>
<td height="17" align="RIGHT">4</td>
<td align="RIGHT">0.072</td>
<td align="LEFT"></td>
</tr>
<tr>
<td height="17" align="RIGHT">5</td>
<td align="RIGHT">0.076</td>
<td align="RIGHT">Q2</td>
</tr>
<tr>
<td height="17" align="RIGHT">6</td>
<td align="RIGHT">0.084</td>
<td align="LEFT"></td>
</tr>
<tr>
<td height="17" align="RIGHT">7</td>
<td align="RIGHT">0.086</td>
<td align="LEFT"></td>
</tr>
<tr>
<td height="17" align="RIGHT">8</td>
<td align="RIGHT">0.099</td>
<td align="RIGHT">Q3</td>
</tr>
<tr>
<td height="17" align="RIGHT">9</td>
<td align="RIGHT">0.102</td>
<td align="LEFT"></td>
</tr>
<tr>
<td height="17" align="RIGHT">10</td>
<td align="RIGHT">0.103</td>
<td align="RIGHT">whisker high</td>
</tr>
</tbody>
</table>
<h2>Outlier</h2>
<p>0.026 becomes an <strong>outlier</strong> because the lower 1.5 IQR fence (IQR = Q3 &#8211; Q1 = 0.029) is Q1 &#8211; 1.5 IQR = 0.0265; 0.026 falls below it, so the lower whisker is drawn at 0.048, the smallest value inside the fence, and 0.026 is plotted as an outlier.</p>
<div class="wp-caption aligncenter" style="width: 150px"><img title="boxplot-basic-concept-example.png" src="../../images/boxplot-basic-concept.png" alt="" width="140" height="269" /><p class="wp-caption-text">boxplot-basic-concept-example.png</p></div>]]></content:encoded>
			<wfw:commentRss>http://lakm.us/thesit/308/boxplot-drawing/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
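The whisker rule above can be sketched directly. A minimal illustration using the table's data and its stated quartiles (Q1 = 0.070, Q3 = 0.099):

```python
# Illustrative sketch of the 1.5 IQR whisker rule for the table above.
data = [0.026, 0.048, 0.070, 0.072, 0.076,
        0.084, 0.086, 0.099, 0.102, 0.103]

q1, q3 = 0.070, 0.099            # quartiles as given in the table
iqr = q3 - q1                    # 0.029
lower_fence = q1 - 1.5 * iqr     # 0.0265
upper_fence = q3 + 1.5 * iqr     # 0.1425

# points inside the fences; everything else is plotted individually
inside = [d for d in data if d >= lower_fence and upper_fence >= d]
outliers = [d for d in data if d not in inside]

whisker_low, whisker_high = min(inside), max(inside)
print(whisker_low, whisker_high, outliers)  # 0.048 0.103 [0.026]
```

The whiskers are drawn at the most extreme data points still inside the fences, not at the fences themselves, which is why the low whisker sits at 0.048 rather than 0.0265.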
		<item>
		<title>Viscosity is a measure of the resistance &#8230;</title>
		<link>http://lakm.us/thesit/307/viscosity-is-a-measure-of-the-resistance-2/</link>
		<comments>http://lakm.us/thesit/307/viscosity-is-a-measure-of-the-resistance-2/#comments</comments>
		<pubDate>Sun, 22 Aug 2010 06:43:00 +0000</pubDate>
		<dc:creator>Arif</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[basic]]></category>
		<category><![CDATA[viscosity]]></category>
		<category><![CDATA[wikipedia]]></category>

		<guid isPermaLink="false">http://xp-racy.lan/s2/?p=307</guid>
		<description><![CDATA[Viscosity is a measure of the resistance of a fluid which is being deformed by either shear stress or tensile stress.
The cgs physical unit for dynamic viscosity is the poise (P), more commonly expressed, particularly in ASTM standards, as centipoise (cP).]]></description>
			<content:encoded><![CDATA[<p><a href="http://en.wikipedia.org/wiki/Viscosity">Viscosity</a> is a measure of the <b>resistance</b> of a fluid which is being deformed by either <b>shear stress</b> or <b>tensile stress</b>.</p>
<p>The cgs physical unit for dynamic viscosity is the poise (P), more commonly expressed, particularly in ASTM standards, as centipoise (cP).</p>]]></content:encoded>
			<wfw:commentRss>http://lakm.us/thesit/307/viscosity-is-a-measure-of-the-resistance-2/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
