<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>thesIt &#187; cross-validation</title>
	<atom:link href="http://lakm.us/thesit/tag/cross-validation/feed/" rel="self" type="application/rss+xml" />
	<link>http://lakm.us/thesit</link>
	<description>computer science research log in semi-microblogging style</description>
	<lastBuildDate>Tue, 24 Aug 2010 21:34:55 +0000</lastBuildDate>
	<generator>http://wordpress.org/?v=2.9</generator>
	<language>en</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
			<item>
		<title>Stone 1974 is referenced in: Michaelsen &#8230;</title>
		<link>http://lakm.us/thesit/316/stone-1974-is-referenced-inmichaelsen/</link>
		<comments>http://lakm.us/thesit/316/stone-1974-is-referenced-inmichaelsen/#comments</comments>
		<pubDate>Sun, 22 Aug 2010 12:41:37 +0000</pubDate>
		<dc:creator>Arif</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[cross-validation]]></category>
		<category><![CDATA[error]]></category>
		<category><![CDATA[estimation]]></category>
		<category><![CDATA[k-fold]]></category>
		<category><![CDATA[Michaelsen 1987]]></category>
		<category><![CDATA[reference]]></category>
		<category><![CDATA[Stone 1974]]></category>

		<guid isPermaLink="false">http://xp-racy.lan/s2/?p=316</guid>
		<description><![CDATA[Stone 1974 is referenced in:
Michaelsen J. 1987. Cross-validation in statistical climate forecast models. Journal of Climate and Applied Meteorology, 26:1589-1600
1520-0450(1987)026-1589-cviscf-2.0.co;2.pdf
The set Z = {z_1, z_2, ..., z_n} consists of predictor-target pairs z_i = (x_i, y_i).
A prediction rule \eta(x, Z) will be used to predict y_0 as \eta(x_0, Z).
Let Q(y_i, \eta_i) be the error measure; with least squares this is usually (y_i - \eta_i)^2.
In other words, the expected error is Err = E[Q(y_i, \eta(x_0, Z))].

MSE

MSE = \sum_{i=1}^{n} Q(y_i, \eta(x_i, Z)) / n

In cross-validation
MSE_CV = \sum_{i=1}^{n} Q(y_i, \eta(x_i, Z_{(i)})) / n
]]></description>
			<content:encoded><![CDATA[<p>Stone 1974 is referenced in:<br />
Michaelsen J. 1987. Cross-validation in statistical climate forecast models. <i>Journal of Climate and Applied Meteorology</i>, 26:1589-1600</p>
<p><code><a href="http://journals.ametsoc.org/doi/abs/10.1175/1520-0450(1987)026<1589:CVISCF>2.0.CO;2">1520-0450(1987)026-1589-cviscf-2.0.co;2.pdf</a></code></p>
<p>The set <img src="http://lakm.us/thesit/wp-content/uploads/eq_9ba076dd6bb8492276d55d6eba4426dd.png" align="absmiddle" class="tex" alt="Z = z_{1}, z_{2},... , z_{n}" /> consists of predictor-target pairs <img src="http://lakm.us/thesit/wp-content/uploads/eq_daa39427192ac3aa475b144bc35d8474.png" align="absmiddle" class="tex" alt="z_{i}=(x_{i}, y_{i})" /></p>
<p>A prediction rule <img src="http://lakm.us/thesit/wp-content/uploads/eq_fe0f7385f849065eb2c4a9d3fc43cff1.png" align="absmiddle" class="tex" alt="\eta (x,Z)" /> will be used to predict y<sub>0</sub> as <img src="http://lakm.us/thesit/wp-content/uploads/eq_0cf6b515e8282bc189e99e089fbec517.png" align="absmiddle" class="tex" alt="\eta (x_{0},Z)" /></p>
<p>Let <img src="http://lakm.us/thesit/wp-content/uploads/eq_9ef6f3af2ea843d5273bad7d2d6ca52a.png" align="absmiddle" class="tex" alt="Q(y_{i},\eta_{i})" /> be the error measure;<br />
with least squares this is usually <img src="http://lakm.us/thesit/wp-content/uploads/eq_5f92ce56ee3d7c84c1095240f8632b7e.png" align="absmiddle" class="tex" alt="(y_{i}-\eta_{i})^{2}" />,<br />
in other words the expected error is<br />
<img src="http://lakm.us/thesit/wp-content/uploads/eq_a3a3253210769351b42ecc2c29b07f59.png" align="absmiddle" class="tex" alt="Err= E[Q(y_{i},\eta(x_{0},Z))]" /></p>
<h2>MSE</h2>
<p><img src="http://lakm.us/thesit/wp-content/uploads/eq_d74147edd30861f4fb14c995a384a2d3.png" align="absmiddle" class="tex" alt="MSE= \sum_{i=1}^{n}Q(y_{i},\eta(x_{i},Z))/n" /></p>
<p>In cross validation<br />
<img src="http://lakm.us/thesit/wp-content/uploads/eq_0c588b4269646a1c4ee8d83d92235f7d.png" align="absmiddle" class="tex" alt="MSE_{(CV)}= \sum_{i=1}^{n}Q(y_{i},\eta(x_{i},Z_{(i)}))/n" /></p>]]></content:encoded>
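The two estimators above can be sketched in code. The following is an illustrative reconstruction, assuming a simple least-squares line as the prediction rule and leave-one-out folds for MSE_CV; names and data are hypothetical, not from Michaelsen or Stone:

```python
def fit(xs, ys):
    """Ordinary least-squares line: a stand-in for the rule eta(x, Z)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x

def mse(xs, ys, eta):
    """Apparent error: MSE = sum Q(y_i, eta(x_i, Z)) / n with Q squared error."""
    return sum((y - eta(x)) ** 2 for x, y in zip(xs, ys)) / len(xs)

def mse_cv(xs, ys):
    """Leave-one-out: refit on Z_(i) with z_i removed, score on the held-out z_i."""
    n = len(xs)
    total = 0.0
    for i in range(n):
        eta_i = fit(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        total += (ys[i] - eta_i(xs[i])) ** 2
    return total / n
```

For ordinary least squares the held-out residuals are never smaller in magnitude than the corresponding training residuals, so on such data MSE_CV is at least the apparent MSE.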
			<wfw:commentRss>http://lakm.us/thesit/316/stone-1974-is-referenced-inmichaelsen/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>k-fold cross-validation (Stone 1974) &#8230;</title>
		<link>http://lakm.us/thesit/302/st-fold-cross-validation-stone-1974/</link>
		<comments>http://lakm.us/thesit/302/st-fold-cross-validation-stone-1974/#comments</comments>
		<pubDate>Sat, 21 Aug 2010 18:29:08 +0000</pubDate>
		<dc:creator>Arif</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[cross-validation]]></category>
		<category><![CDATA[Fu 1994]]></category>
		<category><![CDATA[k-fold]]></category>
		<category><![CDATA[validation]]></category>

		<guid isPermaLink="false">http://xp-racy.lan/s2/?p=302</guid>
		<description><![CDATA[k-fold cross-validation (Stone 1974) repeats k times for a sample set randomly divided into k disjoint subsets, each time leaving one subset out for testing and using the others for training. Thus, we may call this technique &#8220;leave-some-out&#8221;.]]></description>
			<content:encoded><![CDATA[<p>k-fold cross-validation (Stone 1974) repeats <b>k</b> times for a sample set randomly divided into <b>k</b> disjoint subsets, each time leaving one subset out for testing and using the others for training. Thus, we may call this technique &#8220;leave-some-out&#8221;.</p>]]></content:encoded>
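The splitting procedure described above can be sketched as follows; this is a generic illustration (function and variable names are my own, not from Stone 1974 or Fu 1994):

```python
import random

def k_fold_splits(samples, k, seed=0):
    """Randomly divide samples into k disjoint subsets, then yield
    each (train, test) round with one subset held out for testing."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # k disjoint index subsets
    for i in range(k):
        test = [samples[j] for j in folds[i]]
        train = [samples[j] for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test
```

Each of the k rounds trains on `train` and scores on `test`, and every sample lands in a test set exactly once.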
			<wfw:commentRss>http://lakm.us/thesit/302/st-fold-cross-validation-stone-1974/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>What are cross-validation and bootstrapp &#8230;</title>
		<link>http://lakm.us/thesit/257/what-are-cross-validation-and-bootstrapp/</link>
		<comments>http://lakm.us/thesit/257/what-are-cross-validation-and-bootstrapp/#comments</comments>
		<pubDate>Sun, 21 Feb 2010 12:14:08 +0000</pubDate>
		<dc:creator>Arif</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[cross-validation]]></category>
		<category><![CDATA[faq]]></category>
		<category><![CDATA[neural network]]></category>

		<guid isPermaLink="false">http://xp-racy.lan/s2/?p=257</guid>
		<description><![CDATA[What are cross-validation and bootstrapping?
Cross-validation and bootstrapping are both methods for estimating generalization error based on &#8220;resampling&#8221;.
In k-fold cross-validation, you divide the data into k subsets of (approximately) equal size. You train the net k times, each time leaving out one of the subsets from training, but using only the omitted subset to compute whatever [...]]]></description>
			<content:encoded><![CDATA[<p><a href="http://www.faqs.org/faqs/ai-faq/neural-nets/part3/section-12.html">What are cross-validation and bootstrapping?</a></p>
<p>Cross-validation and bootstrapping are both methods for estimating generalization error based on &#8220;resampling&#8221;.</p>
<p>In k-fold cross-validation, you divide the data into k subsets of (approximately) equal size. You train the net k times, each time leaving out one of the subsets from training, but using only the omitted subset to compute whatever error criterion interests you. If k equals the sample size, this is called &#8220;leave-one-out&#8221; cross-validation.</p>]]></content:encoded>
			<wfw:commentRss>http://lakm.us/thesit/257/what-are-cross-validation-and-bootstrapp/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Feedforward Neural Network Construction &#8230;</title>
		<link>http://lakm.us/thesit/255/feedforward-neural-network-construction/</link>
		<comments>http://lakm.us/thesit/255/feedforward-neural-network-construction/#comments</comments>
		<pubDate>Sun, 21 Feb 2010 12:08:34 +0000</pubDate>
		<dc:creator>Arif</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[cross-validation]]></category>
		<category><![CDATA[neural network]]></category>
		<category><![CDATA[reference]]></category>
		<category><![CDATA[Setiono 2001]]></category>
		<category><![CDATA[verification]]></category>

		<guid isPermaLink="false">http://xp-racy.lan/s2/?p=254</guid>
		<description><![CDATA[Feedforward Neural Network Construction Using Cross Validation. Rudy Setiono. Neural Computation 13(12): 2865-2877.
This article presents an algorithm that constructs feedforward neural networks with a single hidden layer for pattern classification. The algorithm starts with a small number of hidden units in the network and adds more hidden units as needed to improve the network&#8217;s predictive [...]]]></description>
			<content:encoded><![CDATA[<p><i>Feedforward Neural Network Construction Using Cross Validation</i>. Rudy Setiono. Neural Computation 13(12): 2865-2877.</p>
<p>This article presents an algorithm that constructs feedforward neural networks with a single hidden layer for pattern classification. The algorithm starts with a small number of hidden units in the network and <b>adds more hidden units</b> as needed to improve the network&#8217;s predictive accuracy. To determine when to stop adding new hidden units, the algorithm makes use of a subset of the available training samples for cross validation. New hidden units are added to the network only if they improve the classification accuracy of the network on the training samples and on the cross-validation samples.</p>
<p>Extensive experimental results show that the algorithm is effective in obtaining networks with predictive accuracy rates that are <b>better than</b> those obtained by state-of-the-art decision tree methods.</p>
<p><code><a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.112.9536&#038;rep=rep1&#038;type=pdf">10.1.1.112.9536.pdf</a></code></p>]]></content:encoded>
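The stopping rule in the abstract (add hidden units only while both training and cross-validation accuracy improve) can be sketched as a generic control loop. This is a paraphrase, not Setiono's code; train_eval is a hypothetical stand-in for actually training a network with h hidden units:

```python
def grow_hidden_units(train_eval, max_units=20):
    """train_eval(h) returns (train_acc, cv_acc) for a network with
    h hidden units; here it stands in for real training runs."""
    h = 1
    best_train, best_cv = train_eval(h)
    while h != max_units:
        cand_train, cand_cv = train_eval(h + 1)
        # keep the extra unit only if it helps on both sample sets
        if cand_train > best_train and cand_cv > best_cv:
            h = h + 1
            best_train, best_cv = cand_train, cand_cv
        else:
            break
    return h
```

Growth stops at the first candidate unit that fails to improve either the training or the cross-validation accuracy, which is the overfitting guard the abstract describes.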
			<wfw:commentRss>http://lakm.us/thesit/255/feedforward-neural-network-construction/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
