I like this analysis. I did a similar one in my head a few minutes ago, although I didn't do it numerically. Instead, I rationalized the outcome point-by-point. For any single fact, such as the selection of China (and Wuhan, specifically), I was able to come up with what seemed like good reasons. It was the confluence of all of these details together, combined with the fact that the real events matched them, that made the whole seem implausible.
I'm no math whiz. I'm sure your approximation is somewhere close, but I would just point out that the probability you estimated is the probability that Koontz selected this particular set of facts from among all of the possible permutations of the whole set. It's a separate problem to figure out the probability that this scenario would actually occur according to all of those details. I think the odds against that would be astronomically larger.
But I would like to point out something I consider interesting: the model assumes that Koontz was selecting at random. Given that he's both human and an artist, we might think that his choices weren't random. Now we've engaged with the problem of trying to show why this or that fact was selected. I'm sure, to an extent, this depends upon the artist. All novelists attempt to create verisimilitude by doing research on plot points, names, and so on. Some may be less meticulous than others. I think of a filmmaker like Kubrick, who is famous for his absolute obsession with the finest details in every scene of his movies.
I think it's safe to say that Koontz probably didn't pick these facts out of a hat, so we'd have to start reasoning about why he'd pick them.
For example, maybe even back in the 1980s it was possible to do a little research on China and discover where its main regions of industry were. That would give you a short list of candidate cities. And from there, maybe you could find out that most of the country's biomedical research was done in Wuhan, even back then.
The date of 2020 could be arbitrary. If an author is trying to set events far enough into the future, while keeping them close enough to be thrilling, 40 years out from the time of writing isn't an awful number.
Of course, you'd need the virus to go global to be truly scary, and for that you need a high rate of transmission that would hold even in highly developed societies where sanitation is state of the art. So a respiratory virus might be the best choice, because a virus that transmits through exposure to feces probably won't blanket the globe.
Maybe it wasn't random; maybe there were some less-than-impossible reasons an author like Koontz would have picked this set of details. Maybe. It's another issue altogether that in the SAME YEAR the EXACT situation actually pops off as described.
To truly grasp the probability of that, you'd have to know probabilities about today's current events that are probably impossible to establish.
So I think we'd have a different equation here, something closer to:
P(Koontz picks this particular set of details) * P(this particular set of details actually happens as specified)
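Just to make the structure explicit, here's a tiny sketch of that product (Python is just my own choice for illustration; the first factor stands in for the estimate discussed in this thread, and the second is a pure placeholder, since I have no idea how to pin it down):

```python
# Rough sketch of the two-factor estimate described above.
p_koontz_picks_details = 1e-6       # stand-in for the roughly 1/1,000,000 estimate from the thread
p_details_actually_happen = 1e-9    # placeholder only - the real value is unknown

# Treating the two as independent (the novel presumably didn't cause the outbreak),
# the probability of BOTH happening is the product of the two factors.
p_joint = p_koontz_picks_details * p_details_actually_happen
print(p_joint)                      # 1e-15 under these made-up numbers
```

Whatever the second factor really is, the product can only be smaller than either factor alone.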
“Scientists have calculated that the chances of something so patently absurd actually existing are millions to one. But magicians have calculated that million-to-one chances crop up nine times out of ten.”
― Terry Pratchett, Mort
Is this CSI?
Is that a detachable target?
Did CSI just prove this was designed!?
The target is definitely detachable - the unfolding of this conjunction of events is not determined, necessarily, by this same conjunction having been written in Koontz's book.
What is important is whether this conjunction has a complexity exceeding the complex specification criterion of 500 bits - which is well beyond what any unintelligent process could produce - and whether it is in fact specified, namely, conceptually meaningful to an intelligent observer.
I think the meaningfulness is evident. It really comes down to proving the sufficient complexity. I'm not sure how the complex specification criterion, usually expressed in bits, would convert to probabilities. But my inkling is that, if the probability is only as small as 1/1,000,000, as Helena has posited, then this would be insufficient complexity, since "one in a million" events that are also meaningful do routinely occur.
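If I had to guess at the conversion, it would be the standard one where an event of probability p carries -log2(p) bits, so 500 bits corresponds to a probability of 2^-500, or roughly 3 x 10^-151. A quick sketch (Python is just my choice here, not anything from Dembski):

```python
import math

def bits(probability: float) -> float:
    """Convert a probability to bits of improbability: I = -log2(p)."""
    return -math.log2(probability)

print(bits(1e-6))       # ~19.9 bits - a "one in a million" event is nowhere near 500 bits
print(bits(2 ** -500))  # 500.0 bits - the threshold being discussed
```

On that reading, a one-in-a-million event is only about 20 bits, which would back up the "insufficient complexity" point.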
Let me try to math this out, as Shakespeare would say, "anon."
I pointed out that Helena's calculation would only have been for randomly selecting the facts that Koontz did. We'd have to calculate a separate probability for the events actually having occurred in the real world. That's kind of what made me think of CSI here. We have the target, and it appears to be cognitively meaningful, but the probability of the conjunction, i.e., Koontz writing these things (which Helena put at roughly 1/1,000,000) times the probability of the actual events occurring according to those exact specifications, is likely to be way, way, way less than 1/1,000,000.
EDIT: I'm being a little facetious. I really doubt that the probability of the actual events occurring meets the UPB. At best, it's just a kind of analogy.
Okay. Let's take a binary string where n ~= 500:
01010100 01101111 00100000 01100010 01100101 00101100 00100000 01101111 01110010 00100000 01101110 01101111 01110100 00100000 01110100 01101111 00100000 01100010 01100101 00101100 00100000 01110100 01101000 01100001 01110100 00100000 01101001 01110011 00100000 01110100 01101000 01100101 00100000 01110001 01110101 01100101 01110011 01110100 01101001 01101111 01101110 00111010 00001010 01010111 01101000 01100101 01110100 01101000 01100101 01110010 00100000 00100111 01110100 01101001 01110011 00100000 01101110 01101111 01100010 01101100 01100101 01110010 00100000
This translates to text as follows:
To be, or not to be, that is the question: Whether 'tis nobler
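For anyone who wants to check the translation, a quick way (Python, my own choice) is to split the bitstring into 8-bit groups and map each to its ASCII character. Only the first few bytes are pasted below for brevity; the full string decodes the same way:

```python
# First seven bytes of the bitstring above; the full string works identically.
bitstring = "01010100 01101111 00100000 01100010 01100101 00101100 00100000"

# Interpret each 8-bit group as an ASCII character code.
decoded = "".join(chr(int(byte, 2)) for byte in bitstring.split())
print(repr(decoded))   # 'To be, '
```

At 8 bits per character, the full 63-character line works out to 504 bits, which is where the n ~= 500 figure comes from.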
So unless I'm making a tremendous blunder, Dembski's theorem states that it is basically inconceivable that a monkey typing randomly at a keyboard - or any other deterministic, random, or stochastic process - would produce a coherent and meaningful string of this length. These same words might appear distributed throughout a much longer document, but then we would be dealing with many more 1s and 0s that were not actually meaningful to any intelligence. For a meaningful (specified) string of this length (complexity) to arise, it would have to be by design - so the theorem claims.
What are the "odds" of such a string forming by natural (unguided) processes?
This exercise has come up here before, when I was asked to demonstrate, specifically, what the numbers and figures would be for the formation of replicating molecules from mere component parts in a cosmic soup (and as I ultimately acknowledged, someone like Tour would be much better suited to answer that question). Neglecting punctuation and capitalization, but including spaces, we can say there are 27 options from which to choose, and just like rolling dice, the probability of any particular outcome string is the probability of a single roll multiplied by the probability of each successive roll. I count 58 letter/space characters in that word string. So the mathematical probability of this string arising is (1/27)^58, or 9.56977394 x 10^-84.
Theoretically I could have just put (1/2)^500 into my calculator, but when I do so, I get 0.
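For anyone checking the arithmetic, here's a quick sketch (Python, my own choice) - double-precision floats can still hold both of these numbers, and working in log space avoids the underflow that makes a pocket calculator show 0:

```python
import math

# 58 characters drawn from 27 equally likely symbols (26 letters plus space).
p_string = (1 / 27) ** 58
print(p_string)                 # ~9.5698e-84, matching the figure above

# Dembski's 500-bit threshold expressed as a probability.
p_500_bits = 0.5 ** 500
print(p_500_bits)               # ~3.05e-151 - tiny, but a double can still represent it

# Log space keeps working even when the numbers get too small for floats.
print(-58 * math.log10(27))     # exponent ~ -83.0
print(-500 * math.log10(2))     # exponent ~ -150.5
```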
Anyway, if this above probability is a reasonable approximation of just how small the probability has to be to qualify as "complex" under Dembski's theorem, then I would say a "mere" 1/1000000 chance is much too probable to fall under CSI considerations. Such events, as I noted in my above comment, do in fact happen "all the time", even while also being meaningful.
With that said, I agree with you that Helena's analysis is most likely incomplete, and that the actual determination of the probability would be more involved. I think certain considerations would decrease the probability, like adding the probability of a given human successfully writing a popular book to the mix, whereas other considerations might increase it, like historical socioeconomic or cultural conditions that might sooner compel a writer to choose China or Wuhan rather than Mexico or Tokyo.
Whatever the case, I think it is unlikely that the probabilities would end up being on the order of 10^-84, so this is likely not an appropriate application of Dembski's theorem.
Then again, CNN felt the need to , which, given the consistency of CNN's gaslighting the masses, may actually tip the probability that this whole thing was designed into the realm of undeniability.
What's CSI?
Complex Specified Information
It's a theory put forward by a mathematician named William Dembski that is intended to support Intelligent Design. Peace and I had been arguing about it for several weeks prior to Voat going down.
The really, really rough summary would be: you take the shortest possible event in the universe (the minimum meaningful duration of time), and you calculate a conservative value for the total number of possible events that could ever have occurred in the universe. Say that's 10^100 of these events over the whole life of the universe. Call that the Universal Probability Bound (UPB). If something happened, it happened in fewer steps than that.
If you could show that a complex event with a specified pattern (like DNA) had a probability lower than 1 chance in the UPB of occurring, then it couldn't have happened randomly. Something made it happen.
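In code form, the test looks something like this (Python is my own choice; the 10^100 figure is just the stand-in number from this comment, not Dembski's actual published bound):

```python
# Toy version of the design-inference check sketched above.
UPB_EVENTS = 10 ** 100   # stand-in bound from this comment; the published bound differs

def beats_the_bound(probability: float) -> bool:
    """True if the event is less probable than 1 chance in UPB_EVENTS."""
    return probability < 1 / UPB_EVENTS

print(beats_the_bound(1e-6))        # False - "one in a million" is nowhere near the bound
print(beats_the_bound(0.5 ** 500))  # True  - a 500-bit event clears even this toy bound
```

The specification part - whether the pattern is independently meaningful - is the part code can't check, which is why the detachable-target question matters.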
I was just comparing the logic behind it with how we're looking at the Koontz situation; there is some similar shit going on there.