
[–] 0 pt (edited )

Okay. Let's take a binary string where n ~= 500:

01010100 01101111 00100000 01100010 01100101 00101100 00100000 01101111 01110010 00100000 01101110 01101111 01110100 00100000 01110100 01101111 00100000 01100010 01100101 00101100 00100000 01110100 01101000 01100001 01110100 00100000 01101001 01110011 00100000 01110100 01101000 01100101 00100000 01110001 01110101 01100101 01110011 01110100 01101001 01101111 01101110 00111010 00001010 01010111 01101000 01100101 01110100 01101000 01100101 01110010 00100000 00100111 01110100 01101001 01110011 00100000 01101110 01101111 01100010 01101100 01100101 01110010 00100000

This translates to text as follows:

To be, or not to be, that is the question: Whether 'tis nobler
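
For anyone who wants to check the translation, here is a quick Python sketch (the variable names are my own):

```python
# Decode the space-separated 8-bit groups above into ASCII characters.
bits = "01010100 01101111 00100000 01100010 01100101"  # ...paste the full string from above
text = "".join(chr(int(group, 2)) for group in bits.split())
print(text)  # with the full string: the line quoted above (the 00001010 group decodes to a newline)
```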

So unless I'm making a tremendous blunder, Dembski's theorem states that it is basically inconceivable that a monkey typing randomly at a keyboard - or any other deterministic, random, or stochastic process - would produce a coherent and meaningful string of this length. These same words might appear distributed throughout a much longer document, but then we would be dealing with many more 1s and 0s that were not actually meaningful to any intelligence. For a meaningful (specified) string of this length (complexity) to arise, it would have to be by design - so the theorem claims.

What are the "odds" of such a string forming by natural (unguided) processes?

One user has gone through this exercise with me before, trying to get me to demonstrate, specifically, what the numbers and figures would be for the formation of replicating molecules from mere component parts in a cosmic soup (and as I ultimately acknowledged, someone like Tour would be much better suited to answer that question). Neglecting punctuation and capitalization, but including spaces, we can say there are 27 options from which to choose, and just like rolling dice, the probability of any particular outcome string is the probability of a single roll multiplied by the probability of each successive roll. I count 58 letter/space characters in that word string. So the mathematical probability of this string arising is (1/27)^58, or 9.56977394 × 10^-84.

Theoretically I could have just put (1/2)^500 into my calculator, but when I do so, I get 0.
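
If you work in logarithms instead, nothing underflows. A quick Python sketch (Python's floats, unlike a pocket calculator's display, can still represent numbers this small):

```python
import math

# (1/27)^58: one specific 58-character string over a 27-symbol alphabet
print(-58 * math.log10(27))  # about -83.019, i.e. ~9.57 x 10^-84
print((1 / 27) ** 58)        # 9.5697...e-84, matching the figure above

# (1/2)^500: rounds to 0 on most calculators, but is still a representable float
print(-500 * math.log10(2))  # about -150.51
print((1 / 2) ** 500)        # about 3.05e-151
```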

Anyway, if the above probability is a reasonable approximation of just how small a probability has to be to qualify as "complex" under Dembski's theorem, then I would say a "mere" 1/1,000,000 chance is much too probable to fall under CSI considerations. Such events, as I noted in my above comment, do in fact happen "all the time", even while also being meaningful.

With that said, I agree with you that Helena's analysis is most likely incomplete, and that the actual determination of the probability would be more involved. I think certain considerations would decrease the probability, like factoring in the probability of a given human successfully writing a popular book, whereas other considerations might increase it, like perhaps historical socioeconomic or cultural conditions that might sooner compel a writer to choose China or Wuhan rather than Mexico or Tokyo.

Whatever the case, I think it is unlikely that the probabilities would end up being on the order of 10^-84, so this is likely not an appropriate application of Dembski's theorem.

Then again, CNN felt the need to weigh in, which, given how consistently CNN gaslights the masses, may actually tip the probability that this whole thing was designed into the realm of undeniability.

[–] 0 pt (edited )

Thanks for the ping, jerk :).

I'm not interested in arguing against CSI anymore. I think I may have actually started to swing in favor of it, although I'm still not even sure why. But that's a point that I want to highlight.

There is an overwhelming intuition that what Koontz did would entail impossible odds. Helena's analysis sort of reversed the direction, and I'm not even sure what I mean by this, or how it contributes! But, work with me: it's almost as if she wanted to work backward from the probability that a set of events (our real-world pandemic, Covid) occurred, granting it the de facto probability of 1. From there, we can estimate the probability (now that the real events have granted a new meaning to the older literary facts, and only now) of Koontz having picked each of these details. Call the initial picking out of these details by Koontz in 1981: K.

So, P(K) = 1/1,000,000

That doesn't seem so outlandish.

But here is what I cannot quite articulate. We are trying to estimate, at the moment he wrote the words in 1981, both P(K) and also P(K actually playing out according to {a, b, c, ..., year is 2020}). Let's say {...} = T.

So the overall equation would be: P(K) * P(K → T).
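
Reading your P(K → T) as the conditional probability P(T | K) in standard notation, the arithmetic itself is just multiplication. A toy sketch, with a completely made-up value for the second factor (since, as noted below, there is no real distribution to draw it from):

```python
# Illustrative only: p_t_given_k is a hypothetical placeholder.
p_k = 1 / 1_000_000       # Koontz picking those details in 1981
p_t_given_k = 1e-12       # made up: the world then matching them by 2020
print(p_k * p_t_given_k)  # 1e-18 -- the joint probability collapses fast
```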

This is one of the great difficulties that I had with CSI: we are attempting to estimate the past probability of an event which has actually occurred (which in the present just makes it de facto 1).

If we put ourselves at Koontz's desk in 1981, I still think that P(K) * P(K→T) hits us intuitively as virtually impossible odds. But why?

I wish we could calculate the probability for P(K→T).

What is the probability that (a) a virus is engineered, (b) it is engineered in Wuhan, China, (c) it escapes containment, (d) it spreads beyond China to become a world-scale pandemic, (e) the disease it causes is a pneumonia-like ailment, and (f) this happens in 2020? There is no way to tell, because there is no probability distribution. Moreover, it's not something we can really abstract over in symbolic terms like a string of text. So many fucking things had to come together that it's impossible to quantify. We might think we could get close, but then what if we discovered that every contingent fact in this total event relied on, say, one US government official having one meeting with some Ivy League academic, which turned into one phone call to some university contact overseas... blah, blah, blah.

I think our own internal sense of chaos is what contributes to our intuition that for an author to make this highly particular prediction, and for that prediction to come true 40 years later, would involve cosmically small odds - not least because the direction is what counts!

For an author to come up with things is a matter of the imagination, but part of what we understand as chaos (and what makes our fictional worlds conveniently fictional) is that what we imagine rarely goes that way. Just try to think of all the myriad factors and dependencies that intervene on the timeline between 1981 and 2020: almost any factor (yes, the butterfly wings) on any day in that interval could have set a different chain of events in motion, yet it all played out so that Koontz's description came true.


One more point about your example involving the binary string. I want to try to distill the underlying logic here and put it in plain English. The chimp just is the symbol of random processes in the universe, and its every keystroke (whether it hits a 1 or a 0) is truly random.

CSI says that if a human being happens across a subset of the larger set of random bits, and that subset can be shown to encode a corresponding string of meaningful human language, then it is impossible for that particular sequence of bits to have been printed randomly. It doesn't matter that all the other subsets of the chimp code did not encode anything. This particular sequence could not have been produced randomly by the chimp (i.e. the unintelligent universe).
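
To make the "happening across a subset" step concrete, here is a rough simulation sketch. Notice what it exposes: the scanner itself is trivial, and it only flags anything at all by a criterion we hand it from outside.

```python
import random

random.seed(0)
chimp = "".join(random.choice("01") for _ in range(10_000))  # the chimp's output

def printable_runs(bitstring, min_chars=4):
    """Scan byte-aligned 8-bit groups for runs of printable ASCII."""
    chars = [chr(int(bitstring[i:i + 8], 2)) for i in range(0, len(bitstring) - 7, 8)]
    runs, run = [], ""
    for c in chars:
        if 32 <= ord(c) < 127:  # printable ASCII -- our externally chosen criterion
            run += c
        else:
            if len(run) >= min_chars:
                runs.append(run)
            run = ""
    if len(run) >= min_chars:
        runs.append(run)
    return runs

print(printable_runs(chimp))  # typically a few short runs of printable gibberish, not language
```

Short printable runs show up all the time; what you essentially never see, at any length we could wait for, is a 58-character run that parses as a meaningful sentence.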

There is an interesting assumption in all of this that we have to point out. We have to think that our own language has some kind of supremacy, some privileged place in the universe. I think maybe we ought to be willing to grant this, although it could be accused of being a religious assumption embedded in CSI, begging the question. Maybe not though.

We think that our language, irrespective of the symbols used or the phonetics, does have a logic (Logos) in the syntactical structure. When I say, "There is a cat on the rug," it wouldn't matter if I translated this into 1,000 different languages; the basic logic is there. "(definite article) + X + (copula) + (preposition) + (definite article) + Y" → "The cat (X) is on the rug (Y)."

That logic dictates the syntax of a sentence, conferring meaning on the sentence. So there would be no way to say, "To be, or not to be, that is the question: Whether 'tis nobler" in any human language which could at the same time be both (a) meaningful and (b) illogical.

So what is really being said is that a meaningful human sentence is a syntactical structure that must be valid for ANY intelligence in the universe. In fact, we might begin to just say that intelligence itself is what can register logic.

So what becomes important about the chimpanzee string is that this sequence of 1's and 0's winds up encoding a sentence that (even translated into any human language) has a syntactical structure that is meaningful to any intelligence in the universe. It's the logical structure of the sentence that is important!

But now we raise the question: can a random process encode a logical syntax in a natural system? Let's say we were to look at chemistry as an example. Is there a logical alphabet in chemistry?

Perhaps we say "Yes" to that question, and maybe the more important question becomes origins. Can such a logic arise from something which was not logical first? (Put another way: can the logical arise from randomness?)

If not, then we run into a chain of regress back to the beginnings of the universe itself. We can continue to answer yes until we get to the origin of something which appears to have come from the illogical/random, and at that point we are confronted by what could account for that change (similar to creatio ex nihilo): not from 'nothing to something', but from 'non-meaning to meaning' logically.

[–] 0 pt

> Thanks for the ping, jerk :).

Oh whoops; I was writing as if it was a reply to you, but forgot I replied to myself.

> Perhaps we say "Yes" to that question, and maybe the more important question becomes origins. Can such a logic arise from something which was not logical first? (Put another way: can the logical arise from randomness?)

This relates to another user's point about the program that just increments a bit string. Such a program will eventually yield the same bit string I posted above. But that program was impossible without intelligence, and the ability to ever detach, if you will, the meaningful bit string from the rest must also, at some point in the causal series, depend on an intelligence.
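
That incrementing program is easy to sketch, and the sketch makes both points at once: it will reach the target, and both the target and the stopping rule have to be handed to it by an intelligence. (Demonstrating with a 16-bit target, since counting up to the 464-bit string above would take on the order of 2^464 steps.)

```python
# Enumerate every bit string in order by incrementing an integer.
target = format(ord("T"), "08b") + format(ord("o"), "08b")  # "To" -> '0101010001101111'
n = 0
while format(n, f"0{len(target)}b") != target:  # the stopping rule is supplied by us
    n += 1
print(n)  # 21615 -- the integer whose 16-bit form encodes "To"
```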

[–] 0 pt

Okay. This is interesting. I haven't thought about detachability in quite this way.

So for such a program, we're saying that the program could not itself 'detach' a segment of its output by recognizing it as meaningful without external input telling it what to look for.

In other words, it would be impossible for the program itself (absent training) to evolve, out of the randomness, a higher-order language built on logical structures from its 1's and 0's.
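
A last sketch of what "absent training" means in code terms: the scanning machinery is trivial, and everything that makes a segment detachable as meaningful lives in an externally supplied vocabulary (standing in here for the intelligence):

```python
ENGLISH = {"to", "be", "or", "not", "that", "is", "the", "question"}  # the external input

def detach(text, vocabulary):
    """Return the words of `text` that the supplied vocabulary recognizes."""
    candidates = (w.strip(",:'") for w in text.lower().split())
    return [w for w in candidates if w in vocabulary]

print(detach("To be, or not to be, that is the question:", ENGLISH))
# ['to', 'be', 'or', 'not', 'to', 'be', 'that', 'is', 'the', 'question']
# With vocabulary = set(), the identical program detaches nothing at all.
```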