In this refresher article, we take you back to basics and quickly recap expected value and the different types of mean. Although this material is basic, it is important that you are fluent with these terms to master the more advanced lessons coming later in this series.

The expected value of a random variable is the probability-weighted average of all possible values. When these probabilities are equal, the expected value is the same as the arithmetic mean, defined as the sum of the observations divided by the number of observations:

$$\mu = \frac{X_1 + X_2 + \ldots + X_n}{n} = \frac{1}{n}\sum_{i=1}^{n} X_i$$

where $X_1, X_2, \ldots, X_n$ are our observations.

For example, if a die is rolled repeatedly many times, we expect all numbers from 1 to 6 to show up an equal number of times. So the expected value of rolling a six-sided die is $(1 + 2 + 3 + 4 + 5 + 6)/6 = 3.5$.
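As a quick check, we can compute this directly, since equal probabilities reduce the expected value to an arithmetic mean:

```python
import numpy as np

# A fair six-sided die: each face 1-6 has probability 1/6,
# so the expected value is just the arithmetic mean of the faces
faces = np.arange(1, 7)
expected_value = np.mean(faces)  # (1+2+3+4+5+6)/6
print('Expected value of a die roll:', expected_value)  # -> 3.5
```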

```
from __future__ import print_function
import numpy as np
import scipy.stats as stats

# Let's say the random variables x1 and x2 have the following values
x1 = [10, 9, 8, 5, 6, 7, 4, 3, 2]
x2 = x1 + [100]
print('Mean of x1:', sum(x1), '/', len(x1), '=', np.mean(x1))
print('Mean of x2:', sum(x2), '/', len(x2), '=', np.mean(x2))
```

Mean of x1: 54 / 9 = 6.0
Mean of x2: 154 / 10 = 15.4

When the probabilities of different observations are not equal, i.e. a random variable $X$ can take value $x_1$ with probability $p_1$, $x_2$ with probability $p_2$, and so on, the expected value of $X$ is the same as the *weighted* arithmetic mean. The weighted arithmetic mean is defined as

$$E[X] = p_1 x_1 + p_2 x_2 + \ldots + p_n x_n = \sum_{i=1}^{n} p_i x_i$$

where

$$\sum_{i=1}^{n} p_i = 1$$
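As a sketch of this, numpy's `average` function accepts a `weights` argument that computes exactly this weighted mean (the probabilities below are made up for illustration):

```python
import numpy as np

# Hypothetical loaded die: face i comes up with probability probs[i]
values = np.array([1, 2, 3, 4, 5, 6])
probs = np.array([0.1, 0.1, 0.1, 0.1, 0.1, 0.5])  # must sum to 1

# E[X] = sum_i p_i * x_i
expected = np.average(values, weights=probs)
print('Expected value of the loaded die:', expected)  # -> 4.5
```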

Therefore, the expected value is the average of the values obtained when the experiment it represents is performed many times. This follows from the law of large numbers: the average of the results obtained from a large number of repetitions of an experiment should be close to the expected value, and will tend to become closer as more trials are performed.
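We can watch the law of large numbers at work by simulating die rolls (a sketch using numpy's random generator; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate increasingly many rolls of a fair die and watch
# the sample mean drift toward the expected value of 3.5
for n in [10, 1000, 100000]:
    rolls = rng.integers(1, 7, size=n)
    print(n, 'rolls: sample mean =', rolls.mean())
```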

- The expected value of a constant is equal to the constant itself: $E[c] = c$
- The expected value is linear, i.e. $E[X + Y] = E[X] + E[Y]$
- If $Y = aX + b$, then $E[Y] = aE[X] + b$
- The expected value is not multiplicative, i.e. $E[XY]$ is not necessarily equal to $E[X]E[Y]$. The amount by which they differ is called the covariance, covered in a later notebook. If $X$ and $Y$ are uncorrelated, $E[XY] = E[X]E[Y]$
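A quick numerical illustration of these properties, using the sample mean in place of the expected value (x1 is the same list defined earlier, repeated here so the snippet is self-contained):

```python
import numpy as np

x1 = np.array([10, 9, 8, 5, 6, 7, 4, 3, 2])
y = 2 * x1 + 3  # a linear transform of x1

# Linearity: E[aX + b] = a E[X] + b
print(np.mean(y), '==', 2 * np.mean(x1) + 3)

# E[XY] - E[X]E[Y] is the covariance; with X = Y = x1 it is the variance
print(np.mean(x1 * x1) - np.mean(x1) * np.mean(x1), '==', np.var(x1))
```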

**Median**

The median is the number that appears in the middle of the list when it is sorted in increasing or decreasing order, i.e. the value in the $(n+1)/2$ position when $n$ is odd, and the average of the values in the $n/2$ and $(n/2 + 1)$ positions when $n$ is even. One advantage of using the median to describe data, compared to the mean, is that it is not skewed as much by extremely large or small values.

The median is the value that splits the data set in half, but it does not tell us how much smaller or larger the other values are.

```
print('Median of x1:', np.median(x1))
print('Median of x2:', np.median(x2))
```

Median of x1: 6.0
Median of x2: 6.5

**Mode**

The most frequently occurring value in a data set. The mode of a probability distribution is the value $x$ at which its probability density (or mass) function takes its maximum value.

```
def mode(l):
    # Count the number of times each element appears in the list
    counts = {}
    for e in l:
        if e in counts:
            counts[e] += 1
        else:
            counts[e] = 1
    # Return the elements that appear the most times
    maxcount = 0
    modes = set()
    for key in counts:
        if counts[key] > maxcount:
            maxcount = counts[key]
            modes = {key}
        elif counts[key] == maxcount:
            modes.add(key)
    if maxcount > 1 or len(l) == 1:
        return list(modes)
    return 'No mode'

print('All of the modes of x1:', mode(x1))
```

All of the modes of x1: No mode

**Geometric mean**

The geometric mean measures the central tendency of a set of numbers using the product of their values (as opposed to the arithmetic mean, which uses their sum). It is defined as the nth root of the product of n numbers:

$$G = \sqrt[n]{X_1 X_2 \ldots X_n}$$

for observations $X_1, \ldots, X_n$. We can also rewrite it as an arithmetic mean using logarithms:

$$\ln G = \frac{1}{n}\sum_{i=1}^{n} \ln X_i$$

The geometric mean is always less than or equal to the arithmetic mean (when working with nonnegative observations), with equality only when all of the observations are the same.

```
# Use scipy's gmean function to compute the geometric mean
print('Geometric mean of x1:', stats.gmean(x1))
print('Geometric mean of x2:', stats.gmean(x2))
```

Geometric mean of x1: 5.35627121246
Geometric mean of x2: 7.1775512683
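To confirm the logarithm rewrite above, exponentiating the arithmetic mean of the log-values should reproduce `stats.gmean`:

```python
import numpy as np
import scipy.stats as stats

x1 = [10, 9, 8, 5, 6, 7, 4, 3, 2]

# Geometric mean via logs: exp of the arithmetic mean of the log-values
gmean_via_logs = np.exp(np.mean(np.log(x1)))
print(gmean_via_logs, '==', stats.gmean(x1))
```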

**Harmonic mean**

The harmonic mean is less commonly used than the other types of means. It is defined as

$$H = \frac{n}{\frac{1}{X_1} + \frac{1}{X_2} + \ldots + \frac{1}{X_n}}$$

As with the geometric mean, we can rewrite the harmonic mean to look like an arithmetic mean. The reciprocal of the harmonic mean is the arithmetic mean of the reciprocals of the observations:

$$\frac{1}{H} = \frac{1}{n}\sum_{i=1}^{n}\frac{1}{X_i}$$

The harmonic mean for non-negative numbers is always at most the geometric mean (which is at most the arithmetic mean), and they are equal only when all of the observations are equal.

```
print('Harmonic mean of x1:', stats.hmean(x1))
print('Harmonic mean of x2:', stats.hmean(x2))
```

Harmonic mean of x1: 4.66570664472
Harmonic mean of x2: 5.15738201465
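And to confirm the reciprocal rewrite, the harmonic mean should equal the reciprocal of the arithmetic mean of the reciprocals:

```python
import numpy as np
import scipy.stats as stats

x1 = [10, 9, 8, 5, 6, 7, 4, 3, 2]

# 1/H is the arithmetic mean of the reciprocals
hmean_via_reciprocals = 1.0 / np.mean(1.0 / np.array(x1))
print(hmean_via_reciprocals, '==', stats.hmean(x1))
```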

The harmonic mean can be used when the data can be naturally phrased in terms of ratios.
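For example, average speed over a fixed distance is naturally a harmonic mean. Suppose (a made-up example) we drive a route at 30 km/h one way and 60 km/h back: the average speed is the harmonic mean, 40 km/h, not the arithmetic mean of 45, because we spend more time at the slower speed.

```python
import scipy.stats as stats

# Same distance is travelled at each speed, so the harmonic mean applies
speeds = [30, 60]
avg_speed = stats.hmean(speeds)  # 2 / (1/30 + 1/60)
print('Average speed:', avg_speed, 'km/h')
```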

Variance and standard deviation are measures of the dispersion of a data set around its mean.

We can define the mean absolute deviation as the average of the distances of observations from the arithmetic mean. We use the absolute value of the deviation, so that a value 5 above the mean and a value 5 below the mean both contribute 5; otherwise the deviations always sum to 0:

$$MAD = \frac{1}{n}\sum_{i=1}^{n} |X_i - \mu|$$

where $n$ is the number of observations and $\mu$ is their mean.
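A minimal sketch of the mean absolute deviation, using the x1 list from earlier:

```python
import numpy as np

x1 = np.array([10, 9, 8, 5, 6, 7, 4, 3, 2])
mu = np.mean(x1)

# Average absolute distance of each observation from the mean
mad = np.mean(np.abs(x1 - mu))
print('Mean absolute deviation of x1:', mad)
```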

Instead of using absolute deviations, we can use squared deviations. This is called the **variance**: the average of the squared deviations around the mean:

$$\sigma^2 = \frac{1}{n}\sum_{i=1}^{n} (X_i - \mu)^2$$

**Standard deviation** is simply the square root of the variance, $\sigma = \sqrt{\sigma^2}$, and it is the easier of the two to interpret because it is in the same units as the observations.

Note that variance is additive (for uncorrelated variables) while standard deviation is not.

```
print('Variance of x1:', np.var(x1))
print('Standard deviation of x1:', np.std(x1))
print('Variance of x2:', np.var(x2))
print('Standard deviation of x2:', np.std(x2))
```

Standard deviation indicates the amount of variation in a set of data values. A low standard deviation indicates that the data points tend to be close to the expected value, while a high standard deviation indicates that the data points are spread out over a wider range of values.

- The standard deviation of a constant is equal to 0: $\sigma(c) = 0$
- Standard deviations cannot be added. Therefore, $\sigma(X + Y) \neq \sigma(X) + \sigma(Y)$
- However, variance can be added. In fact, $Var(X + Y) = Var(X) + Var(Y) + 2Cov(X, Y)$
- If $X$ and $Y$ are uncorrelated, $Var(X + Y) = Var(X) + Var(Y)$ and $\sigma(X + Y) = \sqrt{\sigma^2(X) + \sigma^2(Y)}$
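A numerical sketch of these properties (the second list of values is made up for illustration; note `bias=True` so `np.cov` uses the same normalization as `np.var`):

```python
import numpy as np

x = np.array([10, 9, 8, 5, 6, 7, 4, 3, 2], dtype=float)
y = np.array([1, 4, 2, 8, 5, 7, 3, 6, 9], dtype=float)

# Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y) -- exact for sample moments
cov_xy = np.cov(x, y, bias=True)[0, 1]
print(np.var(x + y), '==', np.var(x) + np.var(y) + 2 * cov_xy)

# Standard deviations do NOT add:
print(np.std(x + y), 'vs', np.std(x) + np.std(y))
```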

If an experiment is performed daily and its results on one day do not affect its results on any other day, the daily observations are uncorrelated. If we measure the daily standard deviation as $\sigma_{daily}$, then we can calculate the standard deviation for a year, also called the annualized standard deviation, as:

$$\sigma_{annual} = \sigma_{daily} \times \sqrt{T}$$

where $T$ is the number of trading days in the year.

In finance, we sum the daily variances over all trading days in the year, and this annualized standard deviation is called **volatility**.
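A sketch of annualizing a daily standard deviation, assuming 252 trading days in a year and made-up daily returns:

```python
import numpy as np

# Hypothetical daily returns
daily_returns = np.array([0.01, -0.02, 0.005, 0.015, -0.01])

daily_std = np.std(daily_returns)
annualized_vol = daily_std * np.sqrt(252)  # sigma_annual = sigma_daily * sqrt(T)
print('Daily std:', daily_std)
print('Annualized volatility:', annualized_vol)
```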

It is important to remember that when we are working with a subset of actual data, these computations only give us sample statistics, that is, the mean and standard deviation of a sample of data. Whether or not this reflects the true population mean and standard deviation is not always obvious, and more effort has to be put into determining that. This is especially problematic in finance, because all data are time series and the mean and variance may change over time. In general, do not assume that because something is true of your sample, it will remain true going forward.

This topic was modified 2 months ago 4 times by David
