Using higher moments to your advantage: Kurtosis

Or “can daedalean words actually help make more accurate descriptions of your random variable? Part 1: Kurtosis”. It is a common belief that Gaussians and uniform distributions will take you a long way. Which is understandable if one considers the law of large numbers: with a large enough number of trials, the sample mean converges to the expectation. […]
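As a quick illustration of the teaser's point (my own sketch, not code from the post): excess kurtosis measures how heavy a distribution's tails are relative to a Gaussian, so it captures information that the mean and variance alone miss.

```python
import numpy as np

def excess_kurtosis(x):
    # Sample excess kurtosis: E[(X - mu)^4] / sigma^4 - 3.
    # By construction it is 0 for a Gaussian.
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    sigma2 = x.var()
    return ((x - mu) ** 4).mean() / sigma2 ** 2 - 3.0

rng = np.random.default_rng(0)
n = 200_000
print(excess_kurtosis(rng.normal(size=n)))   # near 0: Gaussian baseline
print(excess_kurtosis(rng.laplace(size=n)))  # near 3: heavier tails
print(excess_kurtosis(rng.uniform(size=n)))  # near -1.2: lighter tails
```

Same mean and roughly comparable spread, yet the fourth moment separates the three distributions cleanly.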

On types of randomized algorithms

There is more than Monte Carlo when talking about randomized algorithms. It is not uncommon to see the expressions “Monte Carlo approach” and “randomized approach” used interchangeably. More than once, you start reading a paper or listening to a presentation in which the words “Monte Carlo” appear in the keywords and even in the title, […]
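One standard way to draw the line the post's title hints at (an assumption on my part about where the post goes) is Monte Carlo versus Las Vegas: a Monte Carlo algorithm does a fixed amount of work and returns an answer that is only probably correct or close, while a Las Vegas algorithm always returns a correct answer and only its runtime is random. A toy sketch of each:

```python
import random

def mc_pi(n_samples):
    # Monte Carlo: fixed work budget, answer is only probably close.
    hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
               for _ in range(n_samples))
    return 4.0 * hits / n_samples

def lv_find(xs, target):
    # Las Vegas: answer is always correct; only the runtime is random.
    while True:
        i = random.randrange(len(xs))
        if xs[i] == target:
            return i

random.seed(42)
print(mc_pi(100_000))            # close to 3.14159, with random error
print(lv_find([5, 3, 9, 3], 9))  # always 2, after a random number of probes
```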

Studying random variables with Doob-Martingales

Or “Martingales are awesome!”. In a previous post, we talked about bounds on the deviation of a random variable from its expectation that were built upon Martingales, useful for cases in which the random variable cannot be modeled as a sum of independent random variables (or in which we do not know whether they are […]
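The Doob martingale construction exposes a function of many random variables one variable at a time: Z_i = E[f(X_1, …, X_n) | X_1, …, X_i]. A toy sketch (my own illustration, not the post's code), with f = number of heads in n fair flips, where each conditional expectation has a closed form:

```python
def doob_martingale_path(flips, p=0.5):
    # Doob martingale of f = total number of heads:
    # Z_i = E[f | first i flips] = (heads seen so far) + p * (flips remaining).
    n = len(flips)
    path = []
    heads = 0
    for i in range(n + 1):
        path.append(heads + p * (n - i))
        if i < n:
            heads += flips[i]
    return path

print(doob_martingale_path([1, 0, 1, 1, 0]))
# [2.5, 3.0, 2.5, 3.0, 3.5, 3.0]
```

Z_0 is the unconditional expectation n*p, Z_n is the realized head count, and each step moves by at most 1 - p: exactly the bounded-differences setup that makes Azuma-style deviation bounds applicable.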

Useful rules of thumb for bounding random variables (Part 2)

In the previous post we looked at Chebyshev’s, Markov’s, and Chernoff’s expressions for bounding (under certain conditions) the deviation of a random variable from its expectation. In particular, we saw that the Chernoff bound gives a tighter tail bound, as long as your random variable can be modeled as a sum of independent Poisson trials. In […]
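To make the “tighter” claim concrete, here is a numeric sketch of my own (using the binomial case, i.e. Poisson trials with equal success probability) comparing the two upper-tail bounds for P(X ≥ (1 + δ)μ):

```python
import math

def chebyshev_upper_tail(n, p, delta):
    # Chebyshev: P(|X - mu| >= delta * mu) <= Var(X) / (delta * mu)^2
    mu = n * p
    return (n * p * (1 - p)) / (delta * mu) ** 2

def chernoff_upper_tail(n, p, delta):
    # Chernoff (sum of independent Poisson trials):
    # P(X >= (1 + delta) * mu) <= (e^delta / (1 + delta)^(1 + delta))^mu
    mu = n * p
    return (math.exp(delta) / (1 + delta) ** (1 + delta)) ** mu

n, p, delta = 1000, 0.5, 0.2
print(chebyshev_upper_tail(n, p, delta))  # 0.025
print(chernoff_upper_tail(n, p, delta))   # orders of magnitude smaller
```

With 1000 fair trials and a 20% deviation, Chebyshev only guarantees a tail probability below 2.5%, while Chernoff pushes it below 1e-4: the exponential dependence on μ is what makes it so much sharper.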
