There is more to randomized algorithms than Monte Carlo. It is not uncommon to see the expressions “Monte Carlo approach” and “randomized approach” used interchangeably. More than once you start reading a paper, or listening to a presentation, in which the words “Monte Carlo” appear in the keywords or even in the title, […] Read More: On types of randomized algorithms
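In one common sense of the term, a “Monte Carlo approach” simply means estimating a quantity by repeated random sampling. A minimal, self-contained sketch (my own illustration, not taken from the post) is the classic estimation of π by sampling points in the unit square:

```python
import random

# Classic Monte Carlo estimate of pi: sample points uniformly in the
# unit square and count the fraction that falls inside the quarter
# circle of radius 1 (that fraction converges to pi/4).
random.seed(42)
n = 100_000
inside = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
             for _ in range(n))
pi_estimate = 4 * inside / n
print(pi_estimate)  # close to 3.14, with error shrinking like 1/sqrt(n)
```

The estimate is random, but its accuracy improves predictably with the number of samples, which is exactly the kind of guarantee the deviation bounds discussed in the posts below make precise.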
Or “Martingales are awesome!”. In a previous post, we talked about bounds on the deviation of a random variable from its expectation that build upon martingales, useful in cases where the random variables cannot be modeled as sums of independent random variables (or where we do not know whether they are […] Read More: Studying random variables with Doob-Martingales
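For context, the martingale-based deviation bound usually invoked in this setting is the Azuma-Hoeffding inequality. A minimal sketch of it as a function (my own illustration, assuming a martingale with bounded differences, not code from the post):

```python
import math

def azuma_bound(t, n, c=1.0):
    # Azuma-Hoeffding inequality: if Z_0, ..., Z_n is a martingale with
    # bounded differences |Z_k - Z_{k-1}| <= c for every step k, then
    #     P(|Z_n - Z_0| >= t) <= 2 * exp(-t^2 / (2 * n * c^2))
    # Note that no independence assumption is needed.
    return 2 * math.exp(-t ** 2 / (2 * n * c ** 2))

# Example: after 100 steps of size at most 1, a deviation of 20 or more
# from the starting value has probability at most 2*e^(-2).
print(azuma_bound(20, 100))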
In the previous post we looked at Chebyshev’s, Markov’s and Chernoff’s expressions for bounding (under certain conditions) the deviation of a random variable from its expectation. In particular, we saw that the Chernoff bound is the tighter of the three, as long as your random variable can be modeled as a sum of independent Poisson trials. In […] Read More: Useful rules of thumb for bounding random variables (Part 2)
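As a numerical illustration of that claim (my own sketch, not from the post, using one standard multiplicative form of the Chernoff bound, exp(-mu*delta^2/3) for 0 < delta <= 1, and a Binomial(100, 0.5) variable, which is a sum of 100 independent Poisson trials):

```python
import math

def chebyshev_bound(var, a):
    # Chebyshev: P(|X - mu| >= a) <= Var(X) / a^2
    return var / a ** 2

def chernoff_upper_bound(mu, delta):
    # Chernoff (one standard multiplicative form, for sums of
    # independent Poisson trials and 0 < delta <= 1):
    #     P(X >= (1 + delta) * mu) <= exp(-mu * delta^2 / 3)
    return math.exp(-mu * delta ** 2 / 3)

# X ~ Binomial(100, 0.5): a sum of 100 independent fair coin flips.
n, p = 100, 0.5
mu, var = n * p, n * p * (1 - p)  # mu = 50, var = 25
a = 25                            # deviation of 25 above the mean
delta = a / mu                    # 0.5

print(chebyshev_bound(var, a))          # 0.04
print(chernoff_upper_bound(mu, delta))  # ~0.0155, noticeably tighter
```

For this deviation the Chernoff bound is roughly 2.5 times smaller than Chebyshev’s, and the gap widens exponentially as the deviation grows.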
In a previous post we discussed the pros and cons of parametric and non-parametric models, and how they can complement each other. In this post, we will add a little more to the story. More specifically, we are going to talk about bounds on the probability that a random variable deviates from its expectation. In these […] Read More: Useful rules of thumb for bounding random variables (Part 1)
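As a small self-contained illustration (my own sketch, not from the post): the simplest of these bounds is Markov’s inequality, P(X >= a) <= E[X]/a for non-negative X, and it is easy to check empirically against simulated data:

```python
import random

# Markov's inequality: P(X >= a) <= E[X] / a for any non-negative X.
# Empirical check with X uniform on {1, ..., 6} (a fair die), E[X] = 3.5.
random.seed(0)
samples = [random.randint(1, 6) for _ in range(100_000)]
a = 5
empirical = sum(x >= a for x in samples) / len(samples)
markov = 3.5 / a  # 0.7
print(empirical, "<=", markov)  # true probability is 2/6 ~ 0.333
```

The empirical frequency (about 1/3) sits well below the Markov bound of 0.7, which is typical: Markov’s inequality is loose, and that looseness is what motivates the sharper Chebyshev and Chernoff bounds in these posts.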
This post is my interpretation of Chapter 10 of the book “Advanced Data Analysis from an Elementary Point of View”. It is one of the most interesting reads I have found in quite some time (together with this). Actually, the original title for the post was “Book Chapter Review: Using non-parametric models to test parametric model […] Read More: Book Chapter Review: If your model is mis-specified, are you better off?