Thanks for the ping, jerk :).
I'm not interested in arguing against CSI anymore. I think I may have actually started to swing in favor of it, although I'm still not even sure why. But that's a point that I want to highlight.
There is an overwhelming intuition that what Koontz did would entail impossible odds. Helena's analysis sort of reversed the direction, and I'm not even sure what I mean by this, or how it contributes! But, work with me: it's almost as if she wanted to work backward from the probability that a set of events (our real-world pandemic, Covid) occurred, granting it the de facto probability of 1. From there, we can estimate the probability (now that the real events have granted a new meaning to the older literary facts, and only now) of Koontz having picked each of these details. Call the initial picking out of these details by Koontz in 1981: K.
So, P(K) = 1/1,000,000
That doesn't seem so outlandish.
But here is what I cannot quite articulate. Standing at the moment he wrote the words in 1981, we are trying to estimate both P(K) and the probability of K actually coming true according to {a, b, c, ... the year is 2020}. Let's say {...} = T.
So the overall equation would be: P(K) * P(K → T).
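Spelling that out a little more carefully (my own gloss, since the arrow isn't standard notation: I'm reading P(K → T) as the conditional probability that T comes true given that K was written):

```latex
% My own gloss on the notation above (an assumption, not standard usage):
% read P(K \to T) as the conditional probability P(T \mid K).
P(K \text{ and } T) \;=\; P(K)\cdot P(T \mid K) \;=\; \frac{1}{1{,}000{,}000}\cdot P(T \mid K)
```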
This is one of the great difficulties that I had with CSI. It is that we are attempting to predict the past probability of an event which has actually occurred (which in the present just makes it de facto 1).
If we put ourselves at Koontz's desk in 1981, I still think that P(K) * P(K→T) hits us intuitively as virtually impossible odds. But why?
I wish we could calculate the probability for P(K→T).
What is the probability that (a) a virus is engineered, (b) it is engineered in Wuhan, China, (c) it escapes containment, (d) it spreads beyond China to become a world-scale pandemic, (e) the disease it causes is a pneumonia-like ailment, and (f) this happens in 2020? There is no way to tell, because there is no probability distribution. Moreover, it's not something we can really abstract across in symbolic terms like a string of text. So many fucking things had to come together that it's impossible to quantify. We might think we could get close, but then what if we discovered that every contingent fact in this total event relied on, say, one US government official having one meeting with some Ivy League academic, which turned into one phone call to some university contact overseas...blah, blah, blah.
I think our own internal sense of chaos is what contributes to our intuition that for an author to make this highly particular prediction, and for that prediction to come true 40 years later, would involve cosmically small odds, not least because the direction is what counts!
For an author to come up with things is a matter of imagination, but part of what we understand as chaos (and what makes our fictional worlds conveniently fictional) is that what we imagine rarely goes that way. Just try to think about all of the myriad factors and dependencies that intervene on the timeline between 1981 and 2020: almost any factor (yes, the butterfly wings) on any day in those intervening decades could have set a different chain of events in motion, and yet it all played out so that Koontz's description became true.
One more point about your example involving the binary string. I want to try to distill the underlying logic here, and put it in plain English. The chimp just is the symbol of random processes in the universe, and its every key stroke (whether it hits a 1 or a 0) is truly random.
CSI says that if a human being happens across a subset of the larger set of random bits, and that subset can be shown to encode a corresponding string of meaningful human language, then it is impossible for that particular sequence of bits to have been printed randomly. It doesn't matter that all the other subsets of the chimp code did not encode anything. This particular sequence could not have been produced randomly by the chimp (i.e., the unintelligent universe).
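Just to make this concrete for myself, here's a toy sketch (entirely my own illustration, not anything from Dembski, with an arbitrary little word list): the "chimp" emits random bits, and we slide over the stream decoding 8-bit windows as ASCII and flag anything that spells an English word.

```python
import random

# Toy illustration (my own): a "chimp" emitting random bits, and a human
# scanning the stream for any window that decodes to English.
random.seed(0)
chimp_bits = "".join(random.choice("01") for _ in range(80_000))

WORDS = {"cat", "rug", "the"}  # a tiny, arbitrary word list for the sketch

def decode_ascii(bits: str) -> str:
    """Decode a bit string as 8-bit ASCII, replacing non-printables with '.'."""
    chars = []
    for i in range(0, len(bits) - 7, 8):
        code = int(bits[i:i + 8], 2)
        chars.append(chr(code) if 32 <= code < 127 else ".")
    return "".join(chars)

# Slide a 24-bit (3-character) window across the stream and flag decoded words.
hits = []
for start in range(0, len(chimp_bits) - 24):
    text = decode_ascii(chimp_bits[start:start + 24]).lower()
    if text in WORDS:
        hits.append((start, text))

print(f"meaningful windows found: {len(hits)} out of ~{len(chimp_bits)} positions")
# Usually zero: even a 3-letter fluke is rare, and a whole coherent sentence
# appearing this way is what CSI treats as effectively impossible by chance.
```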
There is an interesting assumption in all of this that we have to point out. We have to think that our own language has some kind of supremacy, some privileged place in the universe. I think maybe we ought to be willing to grant this, although it could be accused of being a religious assumption embedded in CSI, begging the question. Maybe not though.
We think that our language, irrespective of the symbols used or the phonetics, does have a logic (Logos) in its syntactical structure. When I say, "The cat is on the rug," it wouldn't matter whether I translated this into 1,000 different languages; the basic logic is there. "(definite article) + X + (copula) + (preposition) + (definite article) + Y" → "The cat (X) is on the rug (Y)."
That logic dictates the syntax of a sentence, conferring meaning on it. So there would be no way to say, "To be, or not to be, that is the question: Whether 'tis nobler" in any human language in a form that could at the same time be both (a) meaningful and (b) illogical.
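To illustrate what I mean by the logic surviving a change of symbols, here is a toy sketch of my own (the mini-lexicons are just for illustration): one abstract structure, filled in from two different vocabularies.

```python
# Toy illustration (my own): one abstract structure, two sets of symbols.
# Structure: (definite article) + X + (copula) + (preposition) + (definite article) + Y.
STRUCTURE = ["DET", "X", "COP", "PREP", "DET", "Y"]

LEXICONS = {
    "English": {"DET": "the", "X": "cat",  "COP": "is",  "PREP": "on",  "Y": "rug"},
    "French":  {"DET": "le",  "X": "chat", "COP": "est", "PREP": "sur", "Y": "tapis"},
}

for language, lexicon in LEXICONS.items():
    sentence = " ".join(lexicon[slot] for slot in STRUCTURE)
    print(f"{language}: {sentence}")

# English: the cat is on the rug
# French:  le chat est sur le tapis
# Different symbols and phonetics, same syntactical/logical structure.
```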
So what is really being said is that a meaningful human sentence is a syntactical structure that must be valid for ANY intelligence in the universe. In fact, we might begin to just say that intelligence itself is what can register logic.
So what becomes important about the chimpanzee string is that this sequence of 1's and 0's winds up encoding a sentence that (even translated into any human language) has a syntactical structure that is meaningful to any intelligence in the universe. It's the logical structure of the sentence that is important!
But now we raise the question: can a random process encode a logical syntax in a natural system? Let's say we were to look at chemistry as an example. Is there a logical alphabet in chemistry?
Perhaps we say, "Yes", to that question, and maybe the more important question becomes origins. Can such a logic arise from something which was not logical first? (Put another way: can the logical arise from randomness)
If not, then we are led into a regress back to the beginnings of the universe itself. We can continue to answer yes until we get to the origin of something which appears to have come from the illogical/random, and at that point we are confronted with what could account for that change (something like creatio ex nihilo, only not from 'nothing to something' but from 'non-meaning to meaning' logically).
Thanks for the ping, jerk :).
Oh whoops; I was writing as if it was a reply to you, but forgot I replied to myself.
Perhaps we say, "Yes", to that question, and maybe the more important question becomes origins. Can such a logic arise from something which was not logical first? (Put another way: can the logical arise from randomness)
This relates to 's point about the program that just increments a bit string. Such a program will eventually yield the same bit string I posted above. But that program was impossible without intelligence, and the ability ever to detach, if you will, the meaningful bit string from the rest must also, at some point in the causal series, depend on an intelligence.
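Here's a minimal sketch of the kind of program I take to be at issue (my own reconstruction of the idea, not the actual program being referred to, and the target string is just a hypothetical example): it enumerates bit strings in order, so given enough time it must emit any target you like, but the only way it ever 'detaches' a hit is by comparing against a target handed to it from outside.

```python
from itertools import count, product

def enumerate_bit_strings():
    """Yield every finite bit string in order: '', '0', '1', '00', '01', ..."""
    yield ""
    for length in count(1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

# Hypothetical target for illustration; in the discussion above, this is the
# "meaningful" bit string that an intelligence supplies from outside the program.
TARGET = "0110100001101001"  # "hi" in 8-bit ASCII

for n, candidate in enumerate(enumerate_bit_strings()):
    if candidate == TARGET:
        print(f"reached the target after enumerating {n} strings")
        break
# The enumeration is guaranteed to reach TARGET eventually, but the program
# cannot recognize (detach) the hit unless TARGET is given to it externally.
```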
Okay. This is interesting. I haven't thought about detachability in quite this way.
So for such a program, we're saying that the program could not itself 'detach' a segment of its output by recognizing it as meaningful without external input telling it what to look for.
In other words, it would be impossible for the program itself (absent training) to, from out of the randomness, begin to evolve a higher order language based on logical structures from the basis of the 1's and 0's.
Yes, this notion of the detachability is inseparable from the intellect that recognizes the "target". If it weren't detachable, no intellect would be required, if that makes sense.
Some words of Smith on "detachability":
One needs therefore to distinguish (as I have intimated previously) between two kinds of temporal process or horizontal causation: the kind that derives from natural causes alone, and the kind that springs from intelligent agency. It is the violinist, acting as an intelligent agent, who first apprehends the music - Dembski's "detachable pattern" - on the plane of intellect, and then, by an act of his free will, conveys that pattern to the world of sense by way of a temporal process, an action of horizontal causality.
Whoa, check this out - Smith even directly addresses 's point about the program generating bit strings bigger than itself:
Given the crucial role of CSI in both physics and biology, it behooves us now to reflect further upon that notion, beginning with the mathematical concept of information as such. The danger, when it comes to the latter, is that we are prone to read more into the technical term than it is meant to signify: the word had after all been in use for a very long time before Shannon bestowed upon it a technical sense. That sense is in fact rather bare: it boils down to the actualization of an event represented by a subset of E in a mathematical space with probability measure P. If I flip a coin n times I have produced information - n bits worth, to be exact. And even now, as I am striking the keys of my keyboard, I am producing Shannon information. I am also, however, generating semantic information, which is something else entirely: something, in fact, which no mathematical theory can ever encompass for the obvious reason that semantic information does not reduce to quantity, to mere sets and relations. There is an ontological discrepancy between semantic and Shannon information, not unlike the ontologic hiatus separating the corporeal and the physical domains. And just as a corporeal object X determines an associated physical object SX, so also does every item of semantic information determine a corresponding item of Shannon information which serves, so to speak, as its material base: the latter is simply what remains when all that is non-quantitative has been cast out or "bracketed". One thus arrives, once again, at Rene Guenon's crucial point that "quantity itself, to which they [the moderns] strive to reduce everything, when considered from their own special point of view, is no more than the 'residue' of an existence emptied of everything that constitutes its essence."
Having distinguished between semantic and Shannon information, it should be noted that the semantic component constitutes a specification in Dembski's sense, and in fact defines a detachable target. To be sure, the example of semantic information is highly special, which is to say that specification can arise in a thousand other ways. Think of a bit string in which 1's and 0's alternate, or in which they represent a sequence of prime numbers in binary notation; or again, think of a bit string of length n which is "algorithmically compressible" in the sense that it can be generated by a computer algorithm of "length" less than n (a notion which can indeed be defined): all these are examples of specification. It appears, however, that despite its highly special nature semantic specification enjoys a certain primacy in the natural domain: if indeed God "spoke" the world into being as Scripture declares, such CSI or "design" as it carries must derive ultimately from a divine Idea or logos, which may by analogy be termed a word. And I would add that nowhere in the natural world is the linguistic character of specification more clearly in evidence than in the genetic code of an organism, which as I have noted before, constitutes a text recorded in a four-letter alphabet. The genetic code is thus written text imprinted on DNA. Yet one may conjecture on theological ground that this written text derives from a spoken word: the kind to which Christ alludes when He testifies that "the words I speak to you are spirit and life". No other scientific finding, I believe, is as profoundly reflective of theological truth as is the discovery of what may be termed the informational basis of life.
So Smith acknowledges the program-writing program as itself constituting a kind of specification; and he likewise acknowledges this linguistic character of specification, what you, Chiro, have recently termed the linguistic "logic", wherein the meaning or specification is found.
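Since Smith lists a few of these non-semantic specifications (alternating bits, primes in binary, algorithmic compressibility), here is a rough sketch, entirely my own, of what checking a bit string against such independently stated patterns might look like. I use zlib only as a crude stand-in for "generated by a shorter algorithm"; it can miss genuinely compressible strings, so treat it as illustrative.

```python
import random
import zlib

def is_alternating(bits: str) -> bool:
    """Specification: 1's and 0's strictly alternate."""
    return len(bits) > 0 and all(a != b for a, b in zip(bits, bits[1:]))

def prime_bits(n_bits: int) -> str:
    """The primes 2, 3, 5, 7, ... written in binary and concatenated, truncated to n_bits."""
    out, p = "", 1
    while len(out) < n_bits:
        p += 1
        if all(p % d for d in range(2, int(p ** 0.5) + 1)):
            out += format(p, "b")
    return out[:n_bits]

def is_prime_sequence(bits: str) -> bool:
    """Specification: the string is an initial segment of the concatenated primes."""
    return len(bits) > 0 and bits == prime_bits(len(bits))

def is_compressible(bits: str) -> bool:
    """Crude proxy for compressibility: pack the bits into bytes and see whether
    zlib makes the packed form smaller. (zlib may miss real patterns, e.g. the primes.)"""
    packed = int(bits, 2).to_bytes((len(bits) + 7) // 8, "big")
    return len(zlib.compress(packed, 9)) < len(packed)

random.seed(1)
N = 1024
samples = {
    "alternating": "01" * (N // 2),
    "primes":      prime_bits(N),
    "coin flips":  "".join(random.choice("01") for _ in range(N)),
}
for name, bits in samples.items():
    flags = {
        "alternating": is_alternating(bits),
        "prime-sequence": is_prime_sequence(bits),
        "compressible": is_compressible(bits),
    }
    hit = [k for k, v in flags.items() if v] or ["no independent pattern found"]
    print(f"{name}: {', '.join(hit)}")
```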
Onward:
Let us suppose that someone shoots arrows at a wall. To conclude that a given strike cannot be attributed to chance - in other words, to effect a design inference - one evidently needs to prescribe a target or bull's-eye that sufficiently reduces the likelihood of an accidental hit. What is essential is that the target can be specified without reference to the actual shot; it would not do, for example, to shoot the arrow and then paint a bull's-eye centered upon the point where the arrow struck. What stands at issue, however, has nothing to do with a temporal sequence of events: it does not in fact matter whether the target is given before or after the arrow is shot. What counts, as I have said, is that the target can be specified without reference to the shot in question. In Dembski's terminology, the target must be "detachable" in an appropriate sense.
Consider a scenario in which the keys of a typewriter are struck in succession. If the resultant sequence of characters spells out, let us say, a series of grammatical and coherent English sentences, we conclude that this event cannot be ascribed to chance. An exceedingly "small" and indeed "detachable" target has been struck, which however was, in this case, specified after the event. In general, the specification of a target requires both knowledge and intelligence; one might mention the example of cryptanalysis, in which specification is achieved through the discovery of a code. What at first appeared to be a random sequence of characters proves thus to be the result of intelligent agency. The fact is that it takes intelligence to detect intelligent design.
I would like to emphasize that it is impossible to rule out the hypothesis of chance simply on the basis of low probability. If a sequence of 1's and 0's is generated by tossing a fair coin a billion times, the possibility that the resultant bit string will contain not a single 0, say, can indeed be validly ruled out. Yet, if one does actually toss a coin a billion times, one produces a bit string having exactly the same probability as the first: one half to the power one billion, to be precise. Why, then, can the first sequence (the one containing no 0's) be ruled out, whereas the second can not? The reason is that the first conforms to a pattern or rule which can be defined independently; it is a question, once again, of a "detachable" target which itself has low probability.
We've all read these passages before, but revisiting them after we've meditated on them, discussed them, and discussed the points and subjects related to them certainly improves understanding.
In the case of the first sequence, the prescription "no 0's" in itself defines a target of that kind: the subset, namely, containing the given bit string and no other. But this is precisely what can not be done in the case of the second bit string (the one produced by tossing a coin a billion times): it is virtually certain, in that case, that no detachable target of low probability has been hit. It is possible, of course, to produce a target from the empirical sequence itself; but that description or pattern (if such it may be called) would turn out not to be detachable. It would be comparable to a bull's-eye painted around the spot on the wall where the arrow has struck: such a description, of course, proves nothing. The discovery of a detachable pattern of sufficiently low probability, on the other hand, proves a great deal: it may prove, in fact, that the event in question cannot be attributed to chance. Thus, what rules out chance is not low probability alone, but low probability in conjunction with a detachable target: this winning combination is what Dembski terms the complex specification criterion.
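To put Smith's coin-toss comparison in numbers, here is a small sketch of my own (on a far shorter string than a billion tosses, but the logic is the same): every specific sequence of n fair tosses has probability (1/2)^n, so low probability alone distinguishes nothing; what differs is whether the sequence lands inside a small target that can be described without reference to the sequence itself.

```python
import random

n = 100  # far fewer than Smith's billion tosses; the arithmetic is identical

# Probability of any one specific n-toss outcome: (1/2)^n
p_specific = 0.5 ** n
print(f"probability of any particular {n}-toss sequence: {p_specific:.3e}")

all_ones = "1" * n                                           # conforms to "no 0's"
random.seed(7)
coin_flips = "".join(random.choice("01") for _ in range(n))  # an actual random run

def hits_no_zeros_target(bits: str) -> bool:
    """The detachable target "no 0's": a one-element set specified independently
    of either outcome, itself having probability (1/2)^n."""
    return "0" not in bits

for label, bits in [("all 1's", all_ones), ("coin flips", coin_flips)]:
    print(f"{label}: P = {0.5 ** len(bits):.3e}, hits the 'no 0's' target: {hits_no_zeros_target(bits)}")

# Both strings are equally improbable as individual outcomes; only the first
# also lands in an independently specifiable (detachable) low-probability target.
```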