<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>thesIt &#187; variance</title>
	<atom:link href="http://lakm.us/thesit/tag/variance/feed/" rel="self" type="application/rss+xml" />
	<link>http://lakm.us/thesit</link>
	<description>computer science research log in semi-microblogging style</description>
	<lastBuildDate>Tue, 24 Aug 2010 21:34:55 +0000</lastBuildDate>
	<generator>http://wordpress.org/?v=2.9</generator>
	<language>en</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
			<item>
		<title>Bias-variance dilemma (Geman et al., 199 &#8230;</title>
		<link>http://lakm.us/thesit/330/bias-variance-dilemma-geman-et-al-199/</link>
		<comments>http://lakm.us/thesit/330/bias-variance-dilemma-geman-et-al-199/#comments</comments>
		<pubDate>Tue, 24 Aug 2010 15:07:29 +0000</pubDate>
		<dc:creator>Arif</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[error]]></category>
		<category><![CDATA[Geman 1992]]></category>
		<category><![CDATA[MSE]]></category>
		<category><![CDATA[neural network]]></category>
		<category><![CDATA[poor data]]></category>
		<category><![CDATA[Silvert 1998]]></category>
		<category><![CDATA[variance]]></category>

		<guid isPermaLink="false">http://xp-racy.lan/s2/?p=330</guid>
		<description><![CDATA[Bias-variance dilemma (Geman et al., 1992). It can be shown that the mean squared estimation error between the function to be modelled and the neural network decomposes into the sum of the squared bias and the variance. With a neural network trained on a set of fixed size, a small bias can only [...]]]></description>
			<content:encoded><![CDATA[<p>Bias-variance dilemma (Geman <em>et al.</em>, 1992). It can be shown that the mean squared estimation error between the function to be modelled and the neural network decomposes into the sum of the squared bias and the variance. With a neural network trained on a set of fixed size, a <b>small bias</b> can only be achieved at the cost of a <b>large variance</b> (Haykin, 1994). This dilemma can be circumvented by making the training set very large, but if the total amount of data is limited, this may not be possible.</p>]]></content:encoded>
			<wfw:commentRss>http://lakm.us/thesit/330/bias-variance-dilemma-geman-et-al-199/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Easiest description of standard deviation</title>
		<link>http://lakm.us/thesit/288/easiest-standard-deviation-is-distance-f/</link>
		<comments>http://lakm.us/thesit/288/easiest-standard-deviation-is-distance-f/#comments</comments>
		<pubDate>Mon, 28 Jun 2010 22:27:38 +0000</pubDate>
		<dc:creator>Arif</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[basic]]></category>
		<category><![CDATA[standard deviation]]></category>
		<category><![CDATA[statistics]]></category>
		<category><![CDATA[variance]]></category>

		<guid isPermaLink="false">http://xp-racy.lan/s2/?p=287</guid>
		<description><![CDATA[The easiest way to describe the standard deviation is as a distance from the mean (expected value), as shown in this graphical depiction

where all the values lie at distance σ from the mean, on the dotted circle. Of course, a more realistic situation is shown as

where σ is the square root of the following mean


σ², a.k.a. the variance, is the average of the squared distances. Explanation:
Distance can be [...]]]></description>
			<content:encoded><![CDATA[<p>The easiest way to describe the <strong>standard deviation</strong> is as a distance from the mean (expected value), as shown in this graphical depiction</p>
<p><img src="../../images/deviation_equal_distance.jpg" alt="" /><br />
where all the values lie at distance <em>σ</em> from the mean, on the dotted circle. Of course, a more realistic situation is shown as</p>
<p><img src="../../images/deviation_real_distance.jpg" alt="" /><br />
where <em>σ</em> is the square root of the following mean<br />
<img src="http://lakm.us/thesit/wp-content/uploads/eq_ea1bdb005435e38c51b9b9f4b79453a9.png" align="absmiddle" class="tex" alt="\sigma^2 =\frac{\sum_{i=1}^{n}{\sigma_{i}^2}}{n}" /><br />
</p>
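<p>A minimal Python sketch of this computation (the variance as the average of the squared distances from the mean, and <em>σ</em> as its square root; the sample values below are illustrative, not from the post):</p>

```python
import math

def variance(xs):
    """Variance: the average of the squared distances from the mean."""
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

def std_dev(xs):
    """Standard deviation (sigma): the square root of the variance."""
    return math.sqrt(variance(xs))

values = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(variance(values))  # 4.0
print(std_dev(values))   # 2.0
```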
<p><em>σ</em>², a.k.a. the <strong>variance</strong>, is the average of the squared distances. Explanation:</p>
<p>Distance can be defined in several ways; in this description of variance, <strong>distance</strong> means &#8220;how far&#8221; a value is from its population&#8217;s expected value (mean). The quadratic form of this &#8220;how far&#8221; is</p>
<p><img src="http://lakm.us/thesit/wp-content/uploads/eq_dce0a2d4d66dbfe40115f1fe0aedb9ab.png" align="absmiddle" class="tex" alt="\sigma_i^2=(x_i-\bar{x})^2" /></p>]]></content:encoded>
			<wfw:commentRss>http://lakm.us/thesit/288/easiest-standard-deviation-is-distance-f/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
