[–] 2 pts

I'd like to see their data and how they determine randomness is being lost from "the system."

In statistics, for very large sets of numbers, you start to see things happening that do not appear random.

Use the coin-flip example: with a true random number generator, after billions of flips you can expect to see heads come up around 30 times in a row. That's normal in statistics with very large sets of data, and it's still random. Keep flipping long enough and even a run of 99 heads, or a million, will eventually show up.
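
To see this concretely, here's a toy simulation (my own sketch, not from any of the linked analyses): flip a fair coin many times and record the longest run of heads. The expected longest run grows roughly like log2 of the number of flips, so bigger samples naturally contain longer streaks even though every flip is independent.

    # Toy simulation (my addition, not from any linked analysis): track the
    # longest run of heads across a growing number of fair coin flips. The
    # expected longest run grows roughly like log2(number of flips), so larger
    # samples naturally contain longer streaks even though every flip is fair.
    import random

    def longest_head_run(n_flips, seed=0):
        """Longest consecutive run of heads in n_flips fair coin flips."""
        rng = random.Random(seed)
        longest = current = 0
        for _ in range(n_flips):
            if rng.random() < 0.5:       # heads
                current += 1
                longest = max(longest, current)
            else:                        # tails resets the streak
                current = 0
        return longest

    for n in (10_000, 1_000_000, 10_000_000):
        print(n, longest_head_run(n))    # the longest streak keeps growing with n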

So how has he accounted for this statistical fact? I need to know the methodology for deciding that the output has become more ordered, i.e. less random.
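
For what it's worth, here's a minimal sketch of the kind of test usually meant by "less random than chance" (my assumption about the general approach, not his actual methodology): score each block of bits by how far its count of ones deviates from expectation, then check whether the squared deviations add up to more than chance allows.

    # Minimal sketch of a deviation-from-chance test (an assumption about the
    # general kind of methodology, not anyone's documented procedure).
    import math
    import random

    def block_z(bits):
        """Z-score of the ones-count in a block of supposedly fair bits."""
        n = len(bits)
        return (sum(bits) - n / 2) / math.sqrt(n / 4)   # mean n/2, variance n/4 if fair

    def squared_deviation_stat(blocks):
        """Sum of squared block Z-scores; ~chi-square with len(blocks) dof if fair."""
        return sum(block_z(b) ** 2 for b in blocks)

    rng = random.Random(1)
    blocks = [[rng.randint(0, 1) for _ in range(200)] for _ in range(3600)]
    stat = squared_deviation_stat(blocks)
    print(round(stat, 1), "vs", len(blocks), "expected under pure chance")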

Because a separate group analyzed the same data and determined that it was still consistent with chance on 9/11:

http://www.jsasoc.com/docs/Sep1101.pdf

"However, we show that the choice was fortuitous in that had the analysis window been a few minutes shorter or 30 minutes longer, the formal test would not have achieved significance."

"We conclude that the network random number generators produced data consistent with mean chance expectation during the worst single day tragedy in American history."

So they chose starting and stopping points that fit their data modeling, eh?
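
To make that concrete, here's a toy sketch on pure noise (an assumed setup, not the paper's actual data or procedure): combine simulated per-second deviation Z-scores with a Stouffer-style test over many candidate start/stop choices and watch how the "significance" moves around.

    # Toy sketch on pure noise (assumed setup, not the paper's data or method):
    # combine simulated per-second Z-scores with a Stouffer-style test over many
    # candidate analysis windows. Anything "significant" here comes only from
    # being free to pick where the window starts and stops.
    import math
    import random

    def stouffer_p(zs):
        """Two-sided p-value for the combined Z of a window of Z-scores."""
        z = sum(zs) / math.sqrt(len(zs))
        return math.erfc(abs(z) / math.sqrt(2))

    rng = random.Random(42)
    z_scores = [rng.gauss(0.0, 1.0) for _ in range(6 * 3600)]   # six "hours" of noise

    # Scan many start/stop choices for a nominally two-hour window.
    candidates = [(stouffer_p(z_scores[s:s + length]), s, length)
                  for s in range(0, 3600, 300)
                  for length in range(5400, 9001, 300)]
    print("smallest p over all candidate windows:", min(candidates))
    # With this many overlapping windows, p-values can drift near or below 0.05
    # on pure noise, so a result that vanishes when the window shifts slightly
    # is weak evidence.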

[–] 1 pt

I agree that it's suspicious, especially since no practical application seems to have come out of this. But it's nice to wonder sometimes.