Predicting outcomes based on correlations in historical data is for suckers.
Please use a deterministic method.
>>8252215
you can't
based on previous data from this thread, there is
100% chance this post ends in > 5
>>8252215
Ban p-values and assuming everything is gaussian. Don't ban statistics.
>>8252278
There's little wrong with p-values and gaussians, the problem lies in univariate analysis of multifactorial phenomena.
When you don't know anything about the distribution, Gaussian is your best bet.
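The univariate-vs-multifactorial point can be made concrete with a toy confounder. A minimal sketch, assuming nothing beyond the claim itself (the variables and coefficients here are made up for illustration): z drives both x and y, x has no direct effect on y, yet a univariate regression of y on x finds a strong slope.

```python
import random

random.seed(0)

# z drives both x and y; x has NO direct effect on y.
n = 10000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [2 * zi + random.gauss(0, 0.5) for zi in z]

def slope(u, v):
    """OLS slope of v regressed on u."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    var = sum((a - mu) ** 2 for a in u)
    return cov / var

# Univariate look: strongly "significant" slope despite no direct effect.
print(f"slope of y on x alone: {slope(x, y):.2f}")  # strongly nonzero (~1.6)

# Control for z by regressing out z from both sides, then re-fitting.
bx, by = slope(z, x), slope(z, y)
rx = [xi - bx * zi for xi, zi in zip(x, z)]
ry = [yi - by * zi for yi, zi in zip(y, z)]
print(f"slope after controlling for z: {slope(rx, ry):.2f}")  # ~0
```

The univariate fit sees only the correlation induced by z; residualizing on z (the multifactorial analysis) makes the spurious effect vanish.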
>>8252351
Based on previous data from this thread,
there is 80% chance this post ends in > 5
>>8252353
Based on the size of your dataset, the confidence interval of your prediction is as big as your mother's asshole after I drilled her last night.
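Crude, but it contains a real point: the width of a confidence interval for a mean shrinks roughly as 1/sqrt(n). A minimal sketch (the sample sizes and the standard deviation s = 1.0 are arbitrary, assumed for illustration):

```python
from math import sqrt

# 95% CI half-width for a sample mean: approx 1.96 * s / sqrt(n).
s = 1.0  # assumed sample standard deviation
for n in (4, 100, 10000):
    half_width = 1.96 * s / sqrt(n)
    print(f"n = {n:>5}: mean +/- {half_width:.4f}")
```

Quadrupling the data only halves the interval, which is why predictions from a handful of posts come with enormous error bars.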
>>8252227
nice but according to statistics this post will be dubs
checkem
>>8252354
The average post will end in 5
>>8252355
REKT
>>8252355
Lad.
>>8252355
wtf I hate math now
>>8252355
booooiiiiii
>>8252355
>>8252215
I love you.
>>8252674
I love you too
Statistics falls victim to the idea that correlation implies causation far too often. It is literally the worst "science."
>>8252355
Lol, kek got rekt by science.
Atheist 1
Christians 0
>>8252215
Feynman has a famous anecdote about Pacific Islanders during WWII. The U.S. set up bases on their islands, and the islanders noticed that whenever the soldiers dressed up a certain way and waved some flares around, a plane would drop a box of supplies on the island. After the Americans left, the islanders would dress up in their uniforms and try to mimic their actions in the hopes of getting a box of supplies to fall from the sky.
What is the difference between this and a newspaper headline that says "Study shows eating broccoli causes colon cancer"?
Here (http://pss.sagepub.com/content/22/11/1359.full.pdf+html) is an article where the authors show that listening to songs such as "When I'm 64" caused participants' ages to decrease by nearly a year and a half compared to the control group, with p < .05.
Of course, this is done to illustrate how easy it is to find ludicrous "statistically significant" results using accepted methodologies and reporting conventions.
Also, p-values are less meaningful than people seem to realize. In the Frequentist Statistics chapter of Murphy's Machine Learning book, he shows how you can compute two very different p-values for the exact same empirical result (a particular sequence of coin tosses) by making different assumptions about how the data were collected. However, I don't find the alternative ('Just use Bayes' Rule, bro!') completely convincing either, since it requires you to select a prior, i.e., a distribution on the set of hypotheses. Either way, before you collect any data, you have to assume that *something* follows a particular distribution chosen from your "background knowledge". If you're a frequentist, it's the data itself; if you're a Bayesian, it's the set of possible hypotheses. Both require an ultimately subjective, extra-empirical input to get off the ground.
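Murphy's stopping-rule point can be made concrete. A minimal sketch: the specific numbers (9 heads in 12 tosses of a fair coin) are a standard illustration of the effect, assumed here rather than taken from the thread. The same data yields two different p-values depending on an unobservable detail of the experimental design.

```python
from math import comb

# Same data: 9 heads in 12 tosses, null hypothesis theta = 0.5.

# Design 1: n = 12 tosses was fixed in advance (binomial sampling).
# p-value = P(X >= 9) for X ~ Binomial(12, 0.5)
p_binomial = sum(comb(12, k) for k in range(9, 13)) / 2 ** 12

# Design 2: toss until the 3rd tail appears (negative-binomial sampling);
# the 3rd tail happened to land on toss 12.
# p-value = P(N >= 12) = P(at most 2 tails in the first 11 tosses)
p_negbinom = sum(comb(11, k) for k in range(0, 3)) / 2 ** 11

print(f"binomial design:          p = {p_binomial:.4f}")  # ~0.073, not significant
print(f"negative-binomial design: p = {p_negbinom:.4f}")  # ~0.033, significant at .05
```

Identical coin tosses, yet one design rejects at the .05 level and the other doesn't, because the p-value integrates over hypothetical datasets that were never observed and whose definition depends on the stopping rule.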
Based on previous data from this thread,
there is 40% chance this post ends in >= 5