Transcendental Territory

Last time we considered the possibility that human consciousness somehow supervenes on the physical brain, that it only emerges under specific physical conditions. Perhaps, like laser light and microwaves, it requires the right equipment.

We also touched on how Church-Turing implies that, if human consciousness can be implemented with software, then the mind is necessarily an algorithm — an abstract mathematical object. But the human mind is presumed to be a natural physical object (or at least to emerge from one).

This time we’ll consider the effect of transcendence on all this.

This is not the religious kind of transcendence, this is the mathematical kind. A special property that π and e and many other numbers have.¹

We start by considering three Yin-Yang pairs with regard to numbers.

§

The first is the finite versus the (countably) infinite.

On the one side, precise numbers that match their quantities. “Dave donated a dozen dimes!” “Twenty-two teens turned twenty.” “Only ate one waffle.”

On the other side, a row of three dots (…) or a lazy eight (∞). “The road goes on forever.” “My curiosity is endless.” “Close your eyes and count to infinity.”

But the thing about countable things is that they’re countable.

More to the point, computers can count things really good.²

§

So the second Yin-Yang pair is the countable versus the uncountable — the discrete versus the continuous.

On the one side, everything from the first pair, the countable things, even infinite ones. Here numbers are cardinals; they stand for quantities of discrete objects.

On the other side, the real numbers, the smooth and continuous. These numbers are magnitudes; they stand for points along a number continuum. They are a different kind of number.³

Calculating with real numbers offers some challenges, especially with regard to chaos. Calculation necessarily rounds off real numbers, so there is a loss in absolute precision.

We’re only at the second level, and already calculation is in trouble. To the extent calculation involves discrete symbols (i.e. digital calculation), we can’t calculate with real number values, only their approximations.
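A quick Python sketch of that round-off (just an illustration): even the humble 0.1 has no exact binary representation, so repeated addition quietly drifts.

    # 0.1 cannot be represented exactly in binary floating point,
    # so adding it ten times does not give exactly 1.0.
    total = 0.0
    for _ in range(10):
        total += 0.1
    print(total)         # 0.9999999999999999
    print(total == 1.0)  # False
    # Every digital calculation with real numbers carries this kind
    # of round-off; chaotic systems amplify it rather than average
    # it away.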

§

Yin-Yang-3The final Yin-Yang pair is real numbers versus the transcendental numbers.

Again, on the one side, everything so far (including the complex numbers). Sane numbers tamed with algebra.

On the other side, wild mysterious numbers with some vaguely magical properties.

Firstly, their decimal expressions never form a repeating pattern (unlike rational numbers, whose decimals always eventually repeat).

Secondly, there is no algebraic expression that specifies their value (unlike the algebraic real numbers, which are roots of polynomial equations).

That latter property results in the “game” demonstrated in this YouTube video. It’s way worth watching (seriously, please do watch it, at least the first half):

I knew about algebraic roots, but I never realized they implied the game here.⁴ It’s a neat way to look at it, and it got me thinking about transcendental numbers. (Which invokes Euler’s Identity, hence the Beautiful Math post.)
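A little Python sketch of the game (my own illustration, using the sympy library): algebraic numbers can be knocked down to zero with ordinary arithmetic; π never succumbs.

    from sympy import sqrt, simplify, pi

    # sqrt(2) is algebraic: square it, subtract 2, and it collapses.
    print(simplify(sqrt(2)**2 - 2))        # 0

    # The golden ratio is algebraic too: phi^2 - phi - 1 = 0.
    phi = (1 + sqrt(5)) / 2
    print(simplify(phi**2 - phi - 1))      # 0

    # No finite combination of +, -, *, /, and integer powers ever
    # takes pi to zero; that is precisely what "transcendental" means.
    print(simplify(pi**2 - pi - 1))        # stays symbolic, never 0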

One can definitely make a case that God invented the integers, that the countable correlates with physical reality whereas the real numbers (let alone the transcendentals) are abstract inventions.

The problem is that God presumably invented circles, too, and little old π is one of those things you notice if you look at circles. It’s just the ratio of the circumference to the diameter.⁵

There is something magical about the transcendentals that sets them apart. The name is certainly evocative.

§

The question I want to ask is whether consciousness could be, in some sense, transcendental (and just to reiterate, I don’t mean that in the spiritual sense).

If so, does that present a problem with regard to an algorithmic theory of mind?

(The presumption being that transcendental calculation is somehow a problem, so there really are two thesises, er, points to demonstrate here.⁶)

§

There is a Yin-Yang situation regarding numbers like π. On the one hand, no perfect circles exist, so π never actually occurs anywhere physically. On the other hand, its true value underlies every circle and sinusoidal process!

Look at it this way: All the inaccurate real-world instances that involve π are inaccurate in their own way.

The average of all those inaccuracies converges on the true value (proof that it does lurk beneath all physical circles).

While no individual circle is transcendental, circles certainly are.
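A minimal Python sketch of convergence-from-inaccuracy (a Monte Carlo toy of my own, not anything from the post): every individual sample is wrong in its own way, yet the aggregate homes in on π.

    import random

    # Throw random points at a unit square; the fraction landing
    # inside the quarter circle approaches pi/4 as the count grows.
    def estimate_pi(n, rng):
        inside = sum(1 for _ in range(n)
                     if rng.random()**2 + rng.random()**2 <= 1.0)
        return 4.0 * inside / n

    rng = random.Random(42)
    for n in (100, 10_000, 1_000_000):
        print(f"{n:>9} samples: {estimate_pi(n, rng):.6f}")
    # Each run is inaccurate, yet the estimates converge on pi
    # (the error shrinks roughly like 1/sqrt(n)).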

So why might a human brain be a transcendental process?

The answer partly may lie in the sheer complexity and scale of the brain.

Not only are the parts complex, there are hundreds of trillions of them! Perhaps transcendence emerges from the multitude just as it does with circles.

Synapses are hugely complicated in their own right (and amazing). Is it possible that a full model of a synapse is complex enough to be subject to chaos?

That’s not at all a stretch.

If so, that means each synapse is just a little unpredictable (mathematically).

The synapse knows what it’s doing, but for us to determine that precisely may be effectively impossible (like the three-body problem or weather prediction).

The network of the brain is also highly complex and vast. Neurons all operate in parallel and talk to each other in variable frequency pulse trains. It’s even easier to imagine that chaos plays a role here. (It’s harder to imagine it wouldn’t!)

So it’s possible the parts transcend calculation and even more possible the whole network does.

§

The obvious question is: A CPU has many billions of transistors; why can’t that multitude be transcendent?

Another form of the question is: A software model can model trillions or quadrillions of (virtual) parts; why isn’t that multitude transcendent?

The answer is that, potentially, it could be, if those parts, or the network of those parts, had the same indeterminacy as the operation of the human brain.

But so far computer technology works very hard to remove all indeterminacy from all levels of computer operation! It’s considered noise that degrades the system.

There is research into the idea of introducing noise or uncertainty into algorithms, and it’s possible that may bear fruit some day.⁷ (It’s not the same as “fuzzy logic,” which is just logic over a value range.)
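A toy Python sketch of noise-as-a-feature (the objective function is made up purely for illustration): a greedy search that accepts only improvements gets trapped in a shallow valley, while injected noise can reach the deeper one.

    import random

    def f(x):
        # Made-up bumpy landscape: a shallow valley near x = 1.1
        # and a deeper one near x = -1.3.
        return x**4 - 3*x**2 + x

    def search(noise, steps=5000, x=1.0, seed=0):
        rng = random.Random(seed)
        for _ in range(steps):
            cand = x + rng.gauss(0, noise)
            if f(cand) < f(x):   # greedy: accept only improvements
                x = cand
        return x

    print(search(noise=0.05))  # stuck in the shallow valley (~1.1)
    print(search(noise=1.0))   # big kicks find the deep one (~-1.3)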

As it stands now, algorithms are fully deterministic at all levels. There is nothing that’s allowed to transcend.

Keep in mind that if hardware is the only possible source of indeterminacy or transcendence (as is the case in the physical world brains inhabit), then consciousness is not algorithmic.

It can’t be if it supervenes on hardware!

In order for consciousness to be strictly algorithmic, any indeterminacy or transcendence must come from the software steps. And as we’ve seen, those amount to: Input numbers; Do math on numbers; Output numbers.

Where is the transcendence?

§

I’ve been wondering if Turing’s Halting Problem or Gödel’s Incompleteness Theorems might play any role in this. It’s possible to read their conclusions as addressing transcendental territory.⁸

In the Turing case, no algorithm can transcend the algorithmic context such that it can solve the halting problem.

In the Gödel case, no axiomatic arithmetic system can transcend its context such that all true statements in the system can be proved in the system.

Either way, there’s chaos theory telling us that some calculable systems are so sensitive to input conditions that any rounding off of real numbers degrades the calculation.

This all seems to suggest (to me, anyway) that real-world processes, while wildly mathematically “inaccurate” individually, converge on mathematically ineffable transcendence in sufficiently large numbers.

Think of it as actually doing quadrillions of steps in an infinite mathematical series. How close to its real transcendent value would π be after 500 trillion steps?
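A rough Python sketch of that question (the Leibniz series is chosen only for its simplicity; it’s famously slow):

    import math

    # Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
    def leibniz_pi(n_terms):
        total, sign = 0.0, 1.0
        for k in range(n_terms):
            total += sign / (2 * k + 1)
            sign = -sign
        return 4.0 * total

    for n in (100, 10_000, 1_000_000):
        est = leibniz_pi(n)
        print(f"{n:>9} terms: {est:.10f} (error {abs(math.pi - est):.1e})")
    # The error shrinks roughly like 2/n, so 500 trillion terms would
    # pin pi down to only ~15 decimal places (and float64 round-off
    # would swamp the sum long before then). Faster series exist,
    # but none of them ever finishes.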


[1] This all started out as a flight of fancy, but the more I think about it, the more it fits.

[2] Even if it takes them forever, just like it would you.

[3] Transfinite mathematics involves multiple levels of infinity, but I think the countable versus uncountable one is the foundation. (I’m not convinced the others exist meaningfully.)

[4] Which, if you didn’t watch the video, is that you can reduce any algebraic number to zero using only addition (and subtraction), multiplication (and division), or exponentiation.

(The video at the bottom has some extra bits they didn’t include in the main video. It’s the same one linked to at the end of the main one.)

[5] It’s when you look closely at π that you realize how weird it is. See the Pi Day post for how far down the rabbit hole that goes!

[6] AKA: “theses” 😁

[7] One problem is that calculating truly random numbers is impossible; an algorithm on its own can only produce pseudo-randomness. It takes some real-world source (semiconductor noise is a good one) for true randomness.

(The difficulty of calculating random numbers is just another illustration of the limits of discrete math and algorithmic processes.)
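(A quick Python illustration of the footnote’s point:)

    import os
    import random

    # A seeded pseudo-random generator is fully deterministic:
    # the same seed always yields the same "random" stream.
    a = random.Random(1234)
    b = random.Random(1234)
    print([a.randint(0, 9) for _ in range(8)])
    print([b.randint(0, 9) for _ in range(8)])  # identical lists

    # True unpredictability has to come from outside the algorithm;
    # os.urandom draws on the operating system's entropy pool
    # (hardware noise, interrupt timing, and the like).
    print(os.urandom(8).hex())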

[8] Cantor is clearly addressing the countable-uncountable divide, and it’s possible Turing and Gödel are as well.

 


57 responses to “Transcendental Territory”

  • SelfAwarePatterns

    I always knew that Pi and e were trouble, from the minute I laid eyes on them. 🙂

    Carl Sagan reportedly had a message from a deistic god buried extremely far into the digits of Pi in his book Contact (which sounds much better than the movie).

On determinism, neuroscientist Michael Gazzaniga, in his book ‘Who’s In Charge?’, points out that there is substantial evidence that the brain is mostly deterministic. He notes that this makes sense when you think about, evolutionarily, what the brain is for, which is to make movement decisions for an organism based on sensory inputs. Rampant indeterminism would destroy any evolutionary advantage in that function.

    Is the brain *fully* deterministic? No one knows. In truth, due to chaos theory dynamics, it may never be known. The question is whether a fully deterministic system could approximate its workings. Again, no one knows for sure, but the fact that the brain is at least mostly deterministic gives me hope. Ultimately, the only way we’ll know for sure is if someone succeeds, or after the brain has thoroughly been mapped and understood, fails anyway.

    • Wyrd Smythe

      “Carl Sagan reportedly had a message from a deistic god buried extremely far into the digits of Pi in his book Contact (which sounds much better than the movie).”

      Yes, a raster pattern of a circle buried deep in the digits of pi. When Ellie meets the aliens they tell her that even more complex messages are buried in other transcendental numbers.

      I wrote about this last Pi Day in Here Today; Pi Tomorrow and quoted the relevant passage from the book. (I like both the book and the movie.)

The funny thing is that Sagan was right. Sort of. Transcendental numbers can have the numerical quality of being “normal.” Pi appears to be normal (every statistical test so far says so, though it hasn’t been proved). If it is, then somewhere in the string every possible finite sequence occurs. So there really would be a raster pattern of a circle in the digits of pi.

      There’s also every GIF, JPEG, PNG, and every other image format, image ever created or potential. Also all the images of just random gibberish. And every novel, magazine or other form of printed material ever. In every language. In every variation of typos and whatnot. And every audio file. And so on.

      {{I recently went and grabbed a 10-million digit file of π so I could play around with digit distributions. I just started mucking about with it, but check this out:

      0:   999440 (0.099944, +0.000056)
      1:   999333 (0.099933, +0.000067)
      2:  1000306 (0.100031, -0.000031)
      3:   999964 (0.099996, +0.000004)
      4:  1001093 (0.100109, -0.000109)
      5:  1000466 (0.100047, -0.000047)
      6:   999337 (0.099934, +0.000066)
      7:  1000207 (0.100021, -0.000021)
      8:   999814 (0.099981, +0.000019)
      9:  1000040 (0.100004, -0.000004)

This is a digit frequency histogram. The first number is the digit. The second is the number of times it appeared in the 10-mega digit string. The third number is the fraction of the total; we’d expect 0.1 (10%) if π is normal (and that’s essentially what we got). The fourth number is the difference from 10% — not much! It’d be interesting to get a lot more digits and see if the differences approach zero.}}
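{{For the curious, a tally like that takes only a few lines of Python; “pi-10million.txt” below is a stand-in name for whatever digit file you have:

    from collections import Counter

    # Count each decimal digit in a big file of pi digits.
    digits = open("pi-10million.txt").read()
    counts = Counter(c for c in digits if c.isdigit())
    total = sum(counts.values())
    for d in sorted(counts):
        freq = counts[d] / total
        # digit, count, frequency, and distance from the expected 10%
        print(f"{d}: {counts[d]:>8} ({freq:.6f}, {0.1 - freq:+.6f})")
}}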

      “Rampant indeterminism would destroy any evolutionary advantage in that function.”

Makes sense. (A world where that wasn’t true sounds like something Greg Egan would write. He likes to turn things on their ear. He has one, which I’ve not read, where SR works the opposite way: the faster you go, the more the external world slows down. A civilization is threatened with a supernova, so they send out a fast ship of scientists to make a long loop through space (that takes the scientists generations) so they can solve the problem and return “shortly after they left,” comparatively speaking.)

      Absolutely agree with your last paragraph.

      The point of this post, really, is that question about whether the brain is fully deterministic. My guess is that it’s not, although that would seem to require quantum behavior. Chaotic behavior is deterministic, but utterly unpredictable. Chaos destroys calculation, so it’s really hard for software to be chaotic, but physical systems can be.

      (As I pointed out, we work really hard to keep it out of computers!)

While we disagree about the likelihood of hard AI, perhaps you can at least appreciate why I think it’s such a big leap from what we know is possible.

      This post, and the previous one, are the point I’ve been headed to all along.

      • SelfAwarePatterns

        Sometimes I wonder, if the stuff of spacetime is quantum in nature, as periodically gets pondered in science articles, whether that means we’d eventually hit the end of Pi digits. Or if this is just a case where our mathematics, built upon observed patterns at the level of reality we live in, is just different than the fundamental layers.

        I can appreciate incredulity toward hard AI, and I’ve written myself several times that I think we’ll have to understand human minds in order to accomplish it. (We don’t need that understanding to have very intelligent systems, just for ones we’d consider “conscious”.) But to me the possibility logically follows from what we currently know.

        Now, it’s possible that something we *don’t* currently know will prevent it, but until / unless we encounter that something, I think regarding it as impossible is unjustified. But I’m an empiricist, so I fully admit that we won’t know for sure until either someone accomplishes it, or demonstrates that it’s impossible in principle.

      • Wyrd Smythe

        “Sometimes I wonder, if the stuff of spacetime is quantum in nature, as periodically gets pondered in science articles, whether that means we’d eventually hit the end of Pi digits.”

        Mathematically speaking, no. Pi goes on forever. It’s possible to actually derive this from the properties of a circle, which is why the ancient Greeks knew something was very weird about π. (As the guy in the video mentions, people died over this stuff! It was that weird and offensive to consider.)

        In any physical world we can imagine, Planck level is a limit, so, yeah, there is some ultimate precision of π on that basis.

{We believe it’s impossible to inspect the world below the Planck level. It takes energy to look at small things (hence CERN), and if you use enough energy to look as small as sub-Planck, that much energy in that small a space creates a black hole and whisks away anything you could see. Kinda defeats the purpose! 😀 }

{{I think I mentioned my hope (wish) that spacetime be Einstein-smooth. It seems definite that matter-energy is lumpy, but I hold out a faint hope spacetime isn’t. I know a theoretical physicist who calls that hope “idiotic.” He’s probably right, but the jury is still out for the moment. 🙂 }}

        “But I’m an empiricist, so I fully admit that we won’t know for sure until either someone accomplishes it, or demonstrates that it’s impossible in principle.”

        Very much likewise!

        From where I sit, there seem strong (but not definitive) arguments against software AI, and I’ve tried to lay those out in these posts.

        Equally, from where I sit, I see little that argues against a physical (non-biological) network that replicates the brain’s structure.

        Really, the whole point is the gap I see in those two. We have a long, long way to go to establish clear connections between calculation and consciousness.

  • Wyrd Smythe

An interesting aspect occurred to me about how computers work very hard to remove possible sources of transcendence. They are engineered to treat one entire voltage range as logical one and another voltage range as logical zero. There’s usually some forbidden territory in between, where the system’s behavior is arbitrary.

    But as part of their processing, computers throw away vast amounts of tiny “irrelevant noise” that would degrade their behavior. It would make them inaccurate.

    At the very least, this is hugely different from how human brains work. Neurons communicate analog signals with timed pulses, and analog systems are capable of transcendent behavior, especially ones with 500 trillion “moving” parts.

  • Steve Morris

    Interesting post, Wyrd. Lots of IFs though. The brain as a chaotic system? Possibly. No reason to assume that it is, or that this rules out modelling its behaviour though.

    Are you familiar with the use of cellular automata to model turbulence in fluids? Simple rules can give rise to complex non-linear behaviour even in digital systems, i.e. digital computers can model chaotic behaviour of non-linear systems despite the fact that they are not transcendental.

    Engineers are smart. Unless the science says categorically “no” I wouldn’t rule anything out.

    • Wyrd Smythe

      “The brain as a chaotic system? Possibly. No reason to assume that it is, or that this rules out modelling its behaviour though.”

      Given that the brain is a complex analog physical system with lots of parts, I think the odds of it being chaotic are closer to “probably” than “possibly” but it does remain to be seen.

      “[D]igital computers can model chaotic behaviour of non-linear systems despite the fact that they are not transcendental.”

      Indeed. The Mandelbrot is an example of a simple chaotic system easily modeled with an algorithm. You can zoom in on sections of the Mandelbrot to any precision you’re willing to calculate.

      What’s not clear is how you can model chaotic analog systems with infinite precision, and chaos theory (as I understand it; it’s always possible I don’t) says a digital model will always diverge. Improved precision just delays that divergence.
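(A minimal Python sketch of “improved precision just delays that divergence,” iterating the chaotic logistic map at two precisions:

    import numpy as np

    # The logistic map x -> 3.9 x (1 - x) is chaotic. Run it in
    # 32-bit and 64-bit floats from the same start and watch them
    # part ways.
    x32 = np.float32(0.2)
    x64 = 0.2
    for step in range(1, 500):
        x32 = np.float32(3.9) * x32 * (np.float32(1.0) - x32)
        x64 = 3.9 * x64 * (1.0 - x64)
        if abs(float(x32) - x64) > 0.1:
            print(f"trajectories diverged at step {step}")
            break
    # More precision moves the divergence later; no finite precision
    # removes it.
)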

Fundamentally, there is an inescapable difference between analog and digital. The latter can get awfully close, but never quite there. (My premise is that this matters.) At some point you’re down to Planck level, and it’s all “digital.” But now Heisenberg is a problem. 🙂

      “Unless the science says categorically “no” I wouldn’t rule anything out.”

      Absolutely! (Although, honestly, I have finally ruled out all possibility that I’ll marry Lucy Liu or Lisa Edelstein.)

  • Steve Morris

Mike, “Sometimes I wonder, if the stuff of spacetime is quantum in nature, as periodically gets pondered in science articles, whether that means we’d eventually hit the end of Pi digits.”
    I think that if that happened, then the circumference of a circle would become scale-dependent at the Planck scale. It’s rather like the way the length of a coastline is scale-dependent and the question “how long is the coastline of this island?” is meaningless.
    In any case, pi has a definite value:
    https://en.wikipedia.org/wiki/Leibniz_formula_for_%CF%80

  • Steve Morris

    Wyrd, intriguing thought if the mind really isn’t a Turing machine. Does that imply that it is capable of doing things that a Turing machine can’t, such as solve the halting problem?

    • Wyrd Smythe

I’ve wondered that, and I think it might imply that, yes. There’s a potential tie-in to Gödel as well in that Incompleteness suggests there are intuitive truths impossible to prove formally. It’s possible a mind can intuit halting — intuition has been shown to be surprisingly accurate, at least in some cases.

(Wielding Gödel philosophically is questionable. It applies strictly to axiomatic arithmetic systems, but if mind is algorithmic then it’s mathematical, so Gödel might have some bearing.)

The Traveling Salesman problem is NP-hard and thus thought intractable to digital calculation (it’s doable, the algorithm is known, it just takes longer than the age of the universe on large instances). Yet it seems that bees may solve the problem naturally.

The brain may turn out to be a kind of analog computer (think of the dynamic motion equations you solve playing, say, racquetball), but like most analog computers it doesn’t work by crunching numbers.

      • Steve Morris

Interesting thought. I actually have a half-finished sci-fi novel that’s a mash-up of religion, an ancient secret society, and some Turing-related ideas. It’s like Dan Brown meets The Matrix! Your series of posts relates directly to the underlying premise of the novel, although in my novel I have turned a lot of ideas inside out and upside down. I think it’s still scientifically rigorous, though. May even finish writing it one day.

      • Wyrd Smythe

        Cool, go for it! (Does that mean I’ll get passing mention in the credits? 🙂 )

      • Steve Morris

        I could name one of the characters Wyrd 🙂

      • Wyrd Smythe

        Or even just Smythe would do. [Back in my Special Relativity series, I floated an idea about how FTL “ansibles” might work (without obviously violating Einstein… there might still be something lurking… it’s still not clear to me that FTL communication between points in the same reference frame has to be ruled out… I understand it’s supposed to be ruled out, but I’m not sure I understand exactly why.) Anyway, if anyone wanted to use that idea, I just asked they call it “Smythe Waves”… 🙂 ]

  • Disagreeable Me (@Disagreeable_I)

    Hi Wyrd,

    As stated on Self Aware Patterns, I’m not really seeing an argument here.

    There are transcendental numbers, where transcendental just means that they cannot be written as an algebraic expression.

There are uncomputable numbers, a subset of the transcendental numbers (not including e and Pi, by the way): numbers for which there exists no algorithm to enumerate the digits to whatever precision we might desire. Any number with randomly chosen digits is such a number. It’s tricky to really define what specific uncomputable numbers might be, because a definition might be tantamount to an algorithm. One example is the so-called Chaitin’s constant (more a family of constants), which seems to have a clear definition, but the definition is not particularly useful because using it to find the value of the constant would require being able to solve the halting problem.

    So, yes, there are some pretty weird numbers. I wouldn’t go so far as to call them even vaguely magical.

    I’m not seeing the connection to the mind and consciousness, though. It seems to me that you’re conflating transcendental in the sense of non-algebraic with transcendental in the sense of, I don’t know, generic mysteriousness, mysticism, pre-rational intuition and that sort of stuff. That’s a clear equivocation to me. These are entirely different uses of the term and it is not at all legitimate to draw the inferences you are making in my view. You might as well conclude from the existence of irrational numbers that we are all crazy.

    You’re also drawing in other ideas which I don’t see as particularly related to transcendental numbers, like chaos and complexity.

    You may have something, I don’t know, but you haven’t really joined the dots for me and so I’m afraid it comes across to me like a mish-mash of disparate ideas that don’t really belong together. Perhaps you’re aiming more for poetry than argument and I’m missing the point.

    • Wyrd Smythe

      Hello DM-

      Welcome to my blog… 🙂

      “It seems to me that you’re conflating transcendental in the sense of non-algebraic with transcendental in the sense of, I don’t know, generic mysteriousness, mysticism, pre-rational intuition and that sort of stuff.”

      Yet I said, “This is not the religious kind of transcendence, this is the mathematical kind.” And later, “(and just to reiterate, I don’t mean that in the spiritual sense)”

      I’m talking strictly about the impossibility of calculating with such numbers in any system that processes discrete symbols.

      “There are transcendental numbers, where transcendental just means that they cannot be written as an algebraic expression.”

Exactly so. They require infinite series, which is the same as saying infinite calculation. How can you calculate with numbers that go on forever? You can’t.

      “I wouldn’t go so far as to call them even vaguely magical.”

Okay, your call. I’m not the one who named them “transcendental,” though. The mathematicians who discovered them, and who were very excited by them, did. (Did you note the enthusiasm Mr. Pampena showed in the video over these numbers?) That they’re so weird is what makes them a little magical (obviously a metaphor, since none of us believes in magic).

(As I’ve said, the belief, ungrounded and unsupported by any shred of evidence, that “information processing” will give rise to self-awareness seems more like a belief in the magic of numbers.)

      “These are entirely different uses of the term and it is not at all legitimate to draw the inferences you are making in my view.”

Except that I’m not using it in that sense at all, so you’ve apparently missed what I am getting at. In a word: incalculability.

      There are things you simply cannot calculate with a system that processes discrete symbols. The analog and discrete worlds are different. The break from countable to uncountable numbers is bad enough. Chaos enters the picture at that point. The break between algebraic numbers and non-algebraic numbers is even starker. Now we’re dealing with numbers we can’t even write down!

      “You’re also drawing in other ideas which I don’t see as particularly related to transcendental numbers, like chaos and complexity.”

      Yes. Other aspects that also support my argument and make this one stronger.

      “[Y]ou haven’t really joined the dots for me”

      Fair enough. We see the world differently, so naturally we have a different sense of it. 🙂

      • Disagreeable Me (@Disagreeable_I)

        Hi Wyrd,

        You say you’re not using it in the religious sense, but the leap you’re making seems to belie that. But, OK, you say this is about calculability so let’s continue…

        > I’m talking strictly about the impossibility of calculating with such numbers

        Don’t you run into the same problem with your everyday rational numbers? Most of these cannot be exactly represented in binary. The best you can do is to model the fraction explicitly (e.g. recording numerators and denominators) and defer actually rendering this into a single value. But you can do that with pi and e also. You can record your values in terms of pi or e.

        So, again, I don’t think the existence of transcendental numbers has any relevance here.

        > How can you calculate with numbers that go on forever? You can’t.

        You can do so by getting answers to whatever precision you require, as well as having an algorithm that will continue to fetch more precise digits as you need them. I think where we differ is that I don’t think you ever really need infinite precision for any purpose. Having an algorithm to get as many digits you want is in all cases sufficient.

        > The analog and discrete worlds are different.

        Not that different in this respect! You couldn’t calculate with this stuff with analog systems either, because analog systems are inherently imprecise. Due to the impossibility of measuring any quantity to infinite precision, you’ll get no more accurate a reproduction with an analog computation than you will with a discrete one. Indeed you’ll probably be less accurate. Try to find a precise value of pi with a compass and a tape measure and you’ll see what I mean — you will get much better results with a discrete algorithm.

        I would also hazard that due to quantum uncertainty it is incorrect to assume that there actually is a precise value to measure, in many cases.

        > Now we’re dealing with numbers we can’t even write down!

        But we can! There’s more than one way to represent a number. Apparently, you’re not very concerned with the inability of a decimal representation to precisely capture a number like one third, perhaps because we can also represent it as 1 over 3. But we can also represent pi and e in ways other than decimal expansion. One way to do so is with infinite sum notation.

        And of course you’ve also got the symbols themselves. Nothing wrong with writing down e as “e”. Conceptually it’s no different than writing 1 as “1”. Would you say we can’t write down 1?
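(A small Python sketch of that point, purely illustrative: exact representations whose rendering into digits is deferred until actually needed.

    from fractions import Fraction
    import sympy

    # One third has no exact binary-float form, but as a pair of
    # integers it's exact:
    third = Fraction(1, 3)
    print(third * 3 == 1)            # True, exactly

    # Likewise pi can be carried as a symbol and only rendered to
    # digits when, and as precisely as, needed.
    print(sympy.cos(sympy.pi / 3))   # 1/2, exact
    print(sympy.N(sympy.pi, 50))     # 50 digits on demand
)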

      • Wyrd Smythe

        “Don’t you run into the same problem with your everyday rational numbers?”

        To some extent, yes, of course. As you say they can be represented as “a/b” and it is possible to design machines that work with algebraic symbols.

        So, yes, absolutely! There are limitations with what can be calculated with numbers. That’s the point!

        “But you can do that with pi and e also.”

        No, there is no algebraic formula for pi or e.

        “I think where we differ is that I don’t think you ever really need infinite precision for any purpose.”

        We know that’s a mathematically false assertion.

        “You couldn’t calculate with this stuff with analog systems either, because analog systems are inherently imprecise.”

        In so far as my thesis is that “calculation is limited” I agree, of course. Indeed, you cannot “calculate with this stuff with analog systems either.” That’s the point. Calculation is limited.

        Essentially you’re pointing out that there are problems calculating with non-transcendental numbers, let alone transcendental ones. This is absolutely true.

      • Disagreeable Me (@Disagreeable_I)

        Right, so calculating with all kinds of numbers leads to situations where absolute precision in decimal or binary representations is not possible. Transcendental numbers are not particularly special in this regard, so I don’t see what they have to do with anything.

        Neither do I see what this limitation of decimal/binary representation has to do with consciousness. Consciousness must be robust with respect to small disturbances. It cannot possibly rely on absolutely precise state because absolutely precise state would be disturbed by environmental interactions.

        > We know that’s a mathematically false assertion.

        Well, no, because “purpose” isn’t really a mathematical concept. I’m just saying there is no reason you would ever need an absolutely precise binary or decimal representation of a number. Or at least I can’t think of one. When modelling real systems uncertainty of measurement is of far greater concern than anything to do with transcendental numbers anyway.

      • Wyrd Smythe

        “Transcendental numbers are not particularly special in this regard, so I don’t see what they have to do with anything.”

        It strikes me that they’re a little extra special. They were named transcendental because they seem to go beyond the normal algebraic numbers we use to describe most of reality. And yet we find them lurking everywhere.

        “Neither do I see what this limitation of decimal/binary representation has to do with consciousness.”

        Then you don’t see it. All I can say is that I see the limitations of calculation as being a potential limit with regard to calculation of self-aware consciousness.

        “Consciousness must be robust with respect to small disturbances.”

Indeed. In fact, it may even supervene on them. Very subtle disturbances in analog systems can turn out to have significant effects.

        Nearly all physical systems share a fundamental property of “least action.” Undisturbed soap bubbles are spheres; water seeks a level; light refracts.

        We have a hard time calculating the three-body problem. Calculation gives us approximations that eventually turn out to be wrong (because of chaos). But the physical natural system operates through least action and the whole solar system of myriad objects solves the N-body problem perfectly.

Nature follows its own physical laws down to the quantum level, so it effectively solves “unsolvable” (i.e. incalculable) math problems through the agency of physical properties.

        “I’m just saying there is no reason you would ever need an absolutely precise binary or decimal representation of a number.”

        I think we’d love to be able to predict the weather with precision! But it’s not clear that’s possible, even in principle, with a discrete calculation.

        I understand what you’re saying. I’m saying we don’t know, and it’s possible discrete calculation won’t work in calculating mind. (Assuming minds are at least as complex as weather.)

  • Wyrd Smythe

    Note to self: There seems to me no requirement that a Tegmarkian has to also believe in a computational theory of mind. There is plenty that is both mathematical and incalculable, so a belief in an underlying mathematical foundation need not imply mind is software running on some form of Turing Machine.

    A real-time analog network of mathematical relationships can be both purely mathematical and not possible to calculate, even in principle.

  • Philosopher Eric

    Hello Wyrd,

    Given your excellent commentary over at Mike’s, I’ve taken the suggestion that you left for James of Seattle to check out this series of posts. (https://selfawarepatterns.com/2019/01/06/is-the-singularity-right-around-the-corner/#comment-25641 ) Very impressive! My sense is that we see things quite similarly in this regard. And indeed, I’m not entirely convinced that Mike, Steve, or DM see things all that differently — that is if variable term usages could somehow be accounted for. My own epistemology formally obligates the reader to accept the definitions of the writer in the attempt to comprehend what’s being proposed. So perhaps I’m a bit more flexible? Or hopefully at least for sensible proposals. And indeed, late 2015 was a while back. If any of them continue to believe that they dispute the message of these posts, I’d appreciate hearing about it. (I chose this particular one since they’re each on the comment board and so should be notified.)

    If you recall I did stop by your blog after that “Trump as president” hoax thing turned out to not be a hoax. From there apparently my attention was diverted to other sources of education. Hopefully today I’m a bit more prepared for discussions with you than I was back then however.

    Let’s begin this with a question. Of course we refer to our various varieties of “glorified pocket calculators” as “computers”. These are teleologically fabricated, or designed and built by us to serve us. But what exists in nature that may effectively be analogized with our computers? And note that I do appreciate the response of “nothing”. That’s surely the case when defined strictly. But at least when not so strictly defined, I’m able to come up with some analogies which seem quite effective. (I consider analogies to exist as essentially the only medium by which the human makes sense of things, and thus these associations are not made idly.) Mike is quite aware of my “four forms of computer” discussion, though given this series of posts I’d love your thoughts about what else in nature might effectively be said to “compute”?

    Let me also mention what I seek beyond your general insights regarding these matters. Apparently it’s possible for complex ideas to be grasped in a “lecture level” capacity, where they’re understood as spoken and might even be recalled later. But they might also be grasped in a far more advanced “practical level” capacity. I believe that to gain such an understanding for anything reasonably complex, one must test an initial conception of an idea against situations where it may practically be implemented. For example math and physics problems effectively refine vague lecture level understandings of math and physics students by practically showing what a given concept both does and does not mean in quite specific ways that go beyond what lectures are able to provide.

    If I’m able to interest you, my hope is that you’ll use your general grasp of my ideas to “solve” various specific problems from this perspective, and so gain a practical grasp of how my ideas work. Such an understanding could be demonstrated by predicting the sorts of things that I’d say, for example, about a given blog article. Then once mastered you should be able to assess the strengths and weaknesses of my various ideas in general. Essentially I’d like to know where improvements are still needed. And indeed, perhaps you have some original models that I could try to gain a practical understanding of?

    For the moment however, what in nature might effectively be analogized with our computers?

    • Wyrd Smythe

      Welcome back, Eric.

      “For the moment however, what in nature might effectively be analogized with our computers?”

      For me this is something of a “tree falls in the forest” question in that the answer comes mainly from precisely defining the question.

      To be clear, I take your question to ask what in nature acts like our computers. The other way around is common: computers model nature in many ways; weather models, for instance.

      In terms of digital, stored-program computers, I don’t think we see the exact Von Neumann architecture in nature, but even so it can depend on how broadly we define that architecture.

      One could draw parallels in how DNA expresses genes (or other biochemical processes). DNA could be seen as a stored program of sorts. A transcription enzyme could be seen as a “CPU” executing its instructions.

      The mechanism is chemical and analog, not electronic and digital, but one could make the argument bio-chemistry “computes.”

      Mike and I had a discussion recently about the distinction between (what I call) calculation versus evaluation. He sees them more under an umbrella of computation.

(I wrote a couple posts detailing the distinction I saw. If interested, see: Calculated Math and Coded Math.)

      What we generally do not see is any sense of nature performing an algorithm such that it could perform a different algorithm. We do see things proceed according to their physics, which to me seems a different thing.

      DNA transcription always works the same way. Chemistry is an entirely determined physical process. I don’t label that as “computing,” but others do. It is a matter of definition.

      My problem with the broad umbrella is that it’s too broad. Concepts such as “computing” or “information processing” or even just “process” are so general one has to ask what isn’t a process, or information, or computing.

      So to make sense of it all, I use more restrictive definitions and generally avoid the general terms entirely.

      “Evaluation” is a physical process, common in nature; “calculation” is a multi-step algorithm performed by some engine according to some program. I think that seeing it in nature requires, perhaps, a bit of poetry.

      • Philosopher Eric

        Wyrd,
        I appreciate how committed you are to definition. Yes the term “sound” can be defined in a number of ways, and therefore asking if a falling tree makes sound is not a complete question until the term becomes defined. Furthermore from my own epistemology the reader is obligated to use the writer’s explicit and implicit definitions in the attempt to understand. Thus if you were to tell me that vibrating particles constitute “sound”, I’d be obligated to accept this to assess your general point. In the end if you’re unable to say anything that I consider sensible about that however, then I might judge you poorly. For the most part I consider it useful to define “sound” as something phenomenal, which is to say, an input to the conscious variety of function. Here the tree by itself cannot produce sound. But if you were to tell me about a machine which produces “sonic waves” in order to help process certain materials by means of vibration energy, well that process surely wouldn’t depend upon anything phenomenal. Thus I could endorse such a definition for “sound”.

        I’m pleased that you’ve gotten into what I currently consider reality’s first form of “computer”, which is to say genetic material. Yes this refers to something that’s entirely chemical and analog. I can’t think of any “recipes” before the function of genetic material that may reasonably be termed computational. Indeed, from this perspective I’d say that we could build an effective definition for the long troubling term “life”. (Would you say that “recipe” is a better way to term genetic function than the “algorithm” term which I’ve been using? Or perhaps you have another suggestion?)

Then the second form of computer that I see isn’t quite as controversial. This is the neuron-based central organism processor. Apparently neuroscientists have observed the essential building blocks for all computation in these biological machines, which is to say “and”, “or”, and “not” gates.

        So if we include the technological computers that we build, we’re now up to three of the four forms of computer that I’ve found useful to distinguish. The final one is consciousness itself. Apparently it’s possible for a non-conscious central organism processor (or “brain”), to produce a punishment/ reward dynamic for something other than it to experience. I define the sentience here as “consciousness”.

        Still I don’t consider a functional conscious form of computer to exist at this point. Here sentience will merely be an epiphenomenal trait that’s carried along with organism function in general. But I believe that this dynamic was able to evolve to become the purpose driven form of computer by which you’re able to read these words, as well as experience existence in general. Here consciousness exists as an output of a non-conscious brain. The processor (“thought”) interprets conscious inputs (senses, valence, and memory) and runs scenarios about how to make itself feel better by means of muscle function output.

        And why was standard non-conscious central organism processing insufficient? I suspect that under more open environments there were often too many potential contingencies to effectively program for. Note how much trouble our non-conscious robots have trying to function under more open environments. Apparently evolution was able to get around this difficulty by fabricating good to bad personal experiences, and thus purpose driven forms of life.

        There’s an excellent chance that I’m going too deep too fast here however…

      • Wyrd Smythe

        I’m having some difficulty following you, so at this point I mainly have questions…

        “…reality’s first form of ‘computer’, which is to say genetic material.”

        I think we are in sync here, although I do feel there is some poetry in calling the processes associated with DNA “computing.” (One of my questions is, how do you define the term?)

        That said, there is a distinct stored program and execution engine aspect to DNA function that I find fascinating.

(And an evolutionary leap that I think begs for explanation. Big part of what’s so fascinating. How the hell did RNA evolve? I’ve yet to hear a reasonable account. Mostly a lot of hand-waving about “self-replicating clays.” But last I heard they’ve yet to find a pathway for natural synthesis of one of the four nucleobases required.)

        If you wish to classify the operation of DNA, RNA, and related enzymes, as a form of computing, especially for purposes of discussion, I’m clear on how and why.

        “Would you say that “recipe” is a better way to term genetic function than the “algorithm” term…”

        I don’t think I would. Recipe is such a general word that using it is, I feel, a recipe for misunderstanding.

        A recipe often has stronger focus on the ingredients than the process — quite the opposite of an algorithm, where the concept of “ingredients” may not even apply.

        They do have in common (usually) a set of steps to their process, but “algorithm” far more carries the necessary meanings of computation (the requirements for saving state, run-time selection, and recursion).

        (Incidentally, it’s the difficulty in applying those three requirements to DNA that causes me to say calling it “computation” is a bit poetic, because I define computation in terms of TMs or lambda calculus.)

        “Then the second form of computer that I see isn’t quite as controversial. This is the neuron based central organism processor.”

        Just so I’m clear, we’re talking about brains here, yes? This might be more controversial than you expected. As I’ve said in this post series, I’m in the camp thinking, “The brain is nothing like a computer.” It’s kind of the point of the series. 😀

        For purposes of discussion, I’ll accept that you see brains as another natural machine you classify as “Computer, Type II.”

        And certainly I’m fine with Type III, the metal and plastic machines we make and call computers.

        I do have questions: Do you mean only “digital” versions of these machines (i.e. discrete symbol processors), or are analog computers included under this umbrella? Does the scope of “digital” include an abacus as well as the latest super-computer? If I execute a computer algorithm on paper, is a computer still involved? Is hardware separate from software?

        “The final one is consciousness itself.”

        I’m afraid I find myself lost from about this point on, because I don’t understand what you’re saying. Are you positing consciousness as distinct from brain operation? Are you positing dualism?

        Or are you saying consciousness is just what being inside a working brain “feels” like? (I stopped contributing weeks ago, but am still reading a 600+ (and still growing) comment thread mainly debating the claim that the “hard problem of consciousness” isn’t hard at all, but just an engineering problem. Some of what you wrote seems reminiscent of that, so I ask.)

        Whereas I can see how one might classify DNA, brains, and computer hardware, as three types of “computer,” I cannot understand how consciousness itself can be. I don’t follow the logic. (As you say, “too deep too fast.”) I’m not even sure I’m understanding exactly what you’re saying Computer, Type IV, is.

        Assuming I do follow, four types, DNA, brains, human-made machines, and consciousness, what about them? Those four things all seem quite different to me.

      • Philosopher Eric

It’s certainly understandable that you’d have difficulties following everything here Wyrd, given that some radical ideas are included. So thanks for your on-point questions. And you do seem to understand a good bit more than most, even initially. Of course you’re aware that some go straight into challenges before grasping what they supposedly dispute. To me it seems far more productive to ask questions.

        One of my questions is, how do you define [a computer]?

I’ve found this useful to define as something which takes input information (possibly chemical), processes it by means of logical operations, and thus produces output function. So when fruit is added to a juicer to produce juice, does this qualify? Well to me logic-based nuances aren’t sufficiently reflected here. I don’t see “If… then…” sorts of steps where, for example, peaches might be treated far differently from carrots given their separate natures.

        Conversely when a substance enters a cell and interacts with genetic material in a way that outputs associated novel proteins and such, to me it does seem that logical steps must be occurring based upon the nature of what’s inputted to the system. Beyond the “factory” component to this, a “stored program” dynamic seems apparent (as you’ve mentioned). (I like “algorithm” here much better than “recipe” as well.) Before genetic material, can you think of anything in nature that functioned so dynamically? And if “computer” seems too poetic for you personally to endorse in general, can you think of a more appropriate analogy?

        (The evolution of genetic material is something that impresses me as well. But here’s a bit of logic that satisfies me somewhat at least: If an organic material were to naturally replicate itself somehow when exposed to the proper conditions, then this stuff would obviously become less rare as future iterations also replicate themselves. And if there were any subtle changes to such replication over time (as we’d expect), then the changes which hinder future replication should tend to die while the ones that promote such replication should tend to become more prominent. Thus here we have “evolution”. But as bizarre as it might be to us that such a process would end up producing organisms which harbor genetic material, shouldn’t things have gone something like this? What alternative might there be, or at least presuming naturalism?)

        Then regarding the second computer, or the central organism processor which is commonly referred to as “brain” I think it’s best to begin basic — the human brain should be far too evolved to provide us with a good place to start. I don’t know if you recall Mike’s first series of posts on Feinberg and Mallatt’s “The Ancient Origins of Consciousness” (https://selfawarepatterns.com/2016/09/12/what-counts-as-consciousness/ ), but I was fortunate enough to meet him while he was doing these posts. They’ve helped fill some blanks regarding my own theory.

        Consider life before the Cambrian Explosion. Single cellular organisms would do what they do based upon the nature of what their genetic material and general circumstances built them to be. So here such organisms would have some central direction.

        Apparently it was adaptive for multicellular organisms to evolve as well. So in multicellular life which harbor countless individually governed cells that each play their own body part roles, notice that there isn’t yet a central organism processor. No “brains”.

        Now imagine the evolution of a “nerve”, which is to say something that incites unique output function when it detects something that it’s set up to detect. (Of course the organism would already be functional based upon the genetics of individual cells, though now we move into instruction concerning the whole structure.) Then once various nerves evolved to provide their information to a single location rather than directly incite individual output sources of function, it’s here that I think the potential emerged for algorithmic processing of input information to produce “computation” as I’m defining the term. Thus information from all sorts of nerves could be factored together, whether to regulate mechanisms in the body like a heart, or to help decide a direction to move in next. Note that plants don’t have central organism processors and yet function brilliantly for what they do. For organisms in more “open” environments however, I suspect that central organism processing was adaptive. And this is non-conscious function just as our robots aren’t conscious.

        On your point that your posts argue that it’s not useful to define brains as computers, my agreement was given the way that you’ve defined the term. Brains most certainly aren’t “Turing Machines”. But then I hope for the same concession regarding definition for my own arguments. I seek useful analogies, and because I consider them essential to build understandings in general. Why do educated people seem so much more educable? Perhaps because education brings more potential to learn through analogies.

        Since this one is going long I won’t continue on to consciousness in general. We can work on that sort of thing at a more appropriate time. But I will at least provide some abbreviated explanations for your remaining questions.

        Do you mean only “digital” versions of these machines (i.e. discrete symbol processors), or are analog computers included under this umbrella?

        Analog computers are most definitely included. I don’t consider evolution to use “symbols” at all, and therefore I don’t consider it to create any digital forms of life.

        Does the scope of “digital” include an abacus as well as the latest super-computer?

        No I wouldn’t call the abacus digital. And could a human using one function as “computer”? Or the human that writes out computer operations on paper? Well not given those tools specifically, though hopefully we’ll get into the conscious form of function soon enough.

Is hardware separate from software?

        To me that seems like a useful distinction. And I don’t consider anything non-conscious to “write software” (not to mention the rarity of this sort of activity among conscious life).

        Regarding consciousness, I’m certainly no dualist. I really should set the foundation here before saying too much about what I mean by the term however. And indeed, even from my own definition I’m far less certain that “consciousness” qualifies as a computer. But I do have an extensive model for you to consider regarding such function once we’re ready.

      • Wyrd Smythe

“It’s certainly understandable that you’d have difficulties following everything here Wyrd, given that some radical ideas are included.”

        No, that’s not it. I eat radical ideas for breakfast. 🙂

        “I’ve found this useful to define as something which takes input information (possibly chemical), processes it by means of logical operations, and thus produces output function.”

        That is so general you may, indeed, have to consider the juicer a computer!

        The only distinction between the processes going on in the juicer and those going on with gene expression is the degree of complexity. If you consider the juicer at the same low-level we’re considering genetic machinery, we find similar (albeit simpler) chemical interactions.

        “I don’t see ‘If… then…’ sorts of steps where, for examples, peaches might be treated far differently from carrots given their separate natures. “

        The formal CS term for “If-then-else” constructs is “selection.” The formal definition of “computing” also requires the ability to save state and to either recurse or iterate (or at least the dreaded GOTO).

        To me, in DNA, these seem metaphorical, at best (not existing at worst 🙂 ), but how I see it isn’t the point. I accept for this conversation that you do.

        “…to me it does seem that logical steps must be occurring based upon the nature of what’s inputted to the system.”

        It depends on what’s meant by “logical steps” I guess. If I electrolyse water, is getting oxygen and hydrogen the result of logical steps? Are chemical reactions “logical steps?”

A “logical step” can mean, informally, “doing the obvious thing required by the circumstances,” or it can have a more precise mathematical definition. For example: A AND B = X, where X is true or false depending on the truth of A and B.

        “And if “computer” seems too poetic for you personally to endorse in general, can you think of a more appropriate analogy?”

        “Machine.” It’s not an analogy, it’s a descriptive term. These things are machines.

        I believe you (and Mike, I think) would define any machine as a computer. (Whereas I don’t.)

        The argument for is that any machine can be said to follow a “program” (implemented by the machine’s hardware) and to execute “logical steps” of that program. One can view the hardware as the stored program, the CPU, and even the system state and iteration, rolled up in one.

        The argument against is the high degree of metaphor involved and the lack of distinction between the “computing” parts. Essentially, under such a broad definition, everything becomes a computer, and the term loses its descriptive power.

        FWIW, I greatly favor strong distinctions and definitions for words because I so appreciate their descriptive power. If I speak of algorithms and computers, no one has to ask me what I really mean. What I mean is what is formally meant by those terms.

        “(…If an organic material were to naturally replicate itself somehow when exposed to the proper conditions,…)”

(The problem lies in the vagueness of “naturally replicate itself,” which assumes the very thing we’re trying to get to. Plus there seems to be no natural path for synthesis of one of the four necessary bases (the G, i.e. guanine, I think). The other three can all occur in the organic soup of early Earth sparked by lightning.)

        “Now imagine the evolution of a ‘nerve’,…”

        You don’t need to sell me on the evolution of organic brains. 😀

        We have, as examples, everything from the brainless jellyfish with the most rudimentary of nervous systems, to worms, to bugs, to various mammals, and to us. Much of that evolutionary path is visible in creatures today.

        While I won’t agree the brain is a computer, as with DNA, it’s definitely a unique and interesting machine. I have no problem with that classification.

        “Why do educated people seem so much more educable? Perhaps because education brings more potential to learn through analogies.”

        You’ve mentioned analogies several times, and I’d like to tender my 1/50 of a buck.

        Analogies are great tools for entry-level knowledge. I’d equate them with what you called ‘lecture level’ understanding. The real understanding, the ‘problem-solving level,’ comes with understanding the details. In the sciences, it often means understanding the math, at least a little. It always means being fully conversant with the details.

There are myriad reasons educated people can learn. They’re often drawn to knowledge and have a thirst for understanding. They’ve often learned the most important lesson: how to learn. And they’ve gained faith in themselves that they can learn. Many are autodidacts, actively seeking knowledge.

        Analogies are nice, but they’re no substitute for real understanding, and they can often lead you down an inaccurate road. Especially in the sciences, one must be very wary of analogies.

        “Analog computers are most definitely included.”

        But an abacus is not? Why not? (To me, an abacus is far more a “computer” than an analog computer.)

        “I don’t consider evolution to use ‘symbols’ at all, and therefore I don’t consider it to create any digital forms of life.”

We may have different definitions of “symbol” then. The DNA process (in my view) definitely uses symbols. There are the four bases, symbolized C, G, A, & T, and those combine into three-letter symbols (codons) used to specify the amino acids that build proteins.

        Unlike digital computers as we know them, there’s a lot of analog stuff going on with the chemistry and various potentials, but the DNA system is definitely symbolic processing. That’s part of what makes it so fascinating. There’s a literal code inside each of us that defines our physical being.
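To make that concrete, here’s a toy sketch of gene expression as symbol processing (a few real entries from the codon table; the actual machinery is chemistry, not table lookup):

    # Discrete three-letter codons map to amino acids.
    # (Tiny excerpt of the real codon table.)
    CODON_TABLE = {"ATG": "Met", "TGG": "Trp", "GGC": "Gly", "TAA": "STOP"}

    def translate(dna):
        protein = []
        for i in range(0, len(dna) - 2, 3):   # read in discrete triplets
            acid = CODON_TABLE.get(dna[i:i+3], "?")
            if acid == "STOP":
                break
            protein.append(acid)
        return protein

    print(translate("ATGTGGGGCTAA"))   # ['Met', 'Trp', 'Gly']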

        I had asked about the distinction between hardware and software:

        “To me that seems like a useful distinction. And I don’t consider anything non-conscious to ‘write software’ (not to mention the rarity of this sort of activity among conscious life).”

        How does this connect with your view that DNA is a computer? Where is its software? Who wrote it?

        If software is so rare, how can there be so many things that are computers?

        “Regarding consciousness, I’m certainly no dualist.”

        Okay. I guess I’ll have to wait to see what you think consciousness is. 🙂

  • Philosopher Eric

    Wyrd,
Let me ask you: why do you think some of your other friends have had difficulty accepting the theme of these posts? I’d enjoy their response to my own assessment of that, but in the end I suspect that they think that you’ve gotten a bit too greedy. Here you’ve taken the “computer” term, and then defined it such that it can essentially only exist as a specific variety of intelligently designed machine.

Conversely I have no problem accepting the theme of these posts. Why? Well, beyond the fact that you didn’t say anything that I consider suspect, it’s given the devotion that I have to my first principle of epistemology. It permits the theorist to be perfectly greedy regarding any definition at all. And I can’t blame them (or you) for seeking some kind of “truth” to various humanly fabricated terms. Unfortunately we’ve inherited the convention of asking what is computation, time, life, consciousness, good, and so on. I consider the resulting implicit perspective to exist as academia’s most widespread flaw. Instead I think theorists must formally be afforded the opportunity to construct their terms as they see fit in the quest to convey their positions, whether insightful or idiotic. In order to better found science, I believe that this institution needs to develop various generally accepted principles of metaphysics, epistemology, and axiology.

(Unfortunately many philosophers today jealously guard their domain as a fundamentally speculative form of contemplation, or even “art”. When someone implies the need for more, I’m sure that you’ve heard them shout accusations of “Scientism!”. My four principles of philosophy have nevertheless been developed to potentially found the institution of science more effectively than it is today.)

I’ve given your “machine” suggestion some thought, and upon reflection I’ll stick with “computer”. The machine term as commonly used simply does not get to what I’m referring to. Originally I think it was meant to describe more complex sorts of things that people create. Here a door is not a machine, nor a person, nor a star. But mechanical typewriters and juicers do represent machines from this perspective. I agree that the function of DNA is far more complex than the juicer’s, but that’s not my point. In the juicer I don’t see logical steps such that one input substance may thus be treated quite differently from another in a logic-based capacity. In DNA I do. And I see this as well in the vastly simpler digital timer. Thus I’ve come to consider “computer” to exist as a better analogy. Of course it’s all physics in the end, though the question here concerns classifications such that they make sense to the human. For example we need to classify physical dynamics in all sorts of ways. But per my EP1, since it’s my definition, it’s your obligation to grasp and accept whatever distinction I’m making in the attempt to understand the nature of my arguments.

I see genetic material as reality’s first form of computer. Next there is the central organism processor, since there exists the potential for logic-based algorithmic functional output when various forms of input information come together in one place. Thus from around the Cambrian Explosion things could be said to function somewhat in the manner that our robots do. Then chronologically the conscious form would be type III. And then quite recently there was the emergence of type IV, the technological form which provides this analogy with its example.

My own computer definition does not concern “selection”, “save state”, “recurse”, “iterate” or “goto” that I know of, since I don’t have a functional grasp of them. But it does not seem useful to me to refer to electrolyzing water as a logical step in itself, based upon the nature of what’s inputted. This might however be an output of computer function. Perhaps peach input would incite such treatment while carrot input would not? The “reaction” term may effectively be associated with “output” by definition.

    By the way, since you’ve gotten into “logical steps” I’m curious what you think about how neuroscientists in general seem to have decided that they observe “AND”, “OR”, and “NOT” gates regarding neuron function? (I wonder why Mike, Steve, and DM didn’t get into this? They’d ask far more proficiently and are certainly welcome to join in this capacity or another.) Do you believe that neuroscientists have essentially been seeing what they want to see? Or perhaps you believe that neurons harbor the supposed constituents to all logic based function, but that this doesn’t get close enough to your own definition?

    One thing about my last reply is that it was submitted well after my bedtime (at 1:43 am!). Though by then I couldn’t quite see straight, I didn’t want to wait yet another day. But beyond random mistakes there was one consequential one. I wrote “organic” in a spot where I meant “inorganic”. So for belated repair, before the emergence of genetic material, all that existed here was “inorganic” (regardless of any lightning based reactions and such — I’m defining all that as “inorganic” as well). If anything on this planet were to be copied somehow such that the copy could then be copied, copies that promote this sort of copying would grow more common than other iterations. So the logic here is that this process must have eventually resulted in what we see today as genetic material. It’s an assessment that could be made about the evolution of “life” anywhere. Tautologies can sometimes be helpful.

    Where I used the term “natural”, I should have been more specific. Apparently we don’t yet know each other quite well enough there. A more descriptive term would be “causal”. In this regard I’m as strong a determinist as they come.

“I believe you (and Mike, I think) would define any machine as a computer. (Whereas I don’t.)”

    Now there’s a statement that could get you into some trouble! 🙂 No that’s not the case for me as I’ve described above, and I’m quite sure that Mike would say that the computer is a relatively recent human invention though we’ve been building machines for quite a while.

    I’m pleased with your distinction between “analogies” and “lecture level understandings”. Exactly. We take an initial perspective of “it’s kind of like…”, and then hone them into working understandings as we continue exploring. Right. And given the significance of this dynamic, the theorist will need to choose his or her analogies wisely.

I don’t consider an abacus to be a computer in itself because it doesn’t effectively “do” anything. It just sits there. But if you mean that a human can use one to perform computations, well I do agree with that. I’ve yet to address the conscious form of computer from which to complete such computation, however.

    Regarding symbols, yes there again we’re getting too far ahead. I consider the human to have long ago evolved a symbolic form of conscious processing (or “thought”) which has proven very powerful. Thus we use our symbols to help us grasp things, such as genetic function. Surely evolution itself, however, remains “the blind watchmaker”. Look mom, no symbols. 🙂

Regarding software, to me that instrument needs to be entirely left for the form of computer which a language-equipped creature builds. As you’ve said, DNA seems to function as a program, even though there’s apparently nothing “soft” about such function. I’d say the same regarding neuronal function. Evolution doesn’t build things and then add software in order to institute various apps in those structures, as we do. Here it seems to be “hardware all the way!”

Well darn, once again I’ve not gotten into consciousness. Hopefully soon. But I’ll never tell you what consciousness is. That would violate my first principle of epistemology. No such definition should exist. Instead I’ll provide what I consider to be a useful definition. Surely our soft sciences will at least some day develop such an understanding? It may be, however, that philosophy will first need to develop some generally accepted principles from which to better found the institution of science. Of course I’m ready there as well.

    • Wyrd Smythe

      I’m breaking this into two replies. This one touches on topics directly related to your four types of “computer.” The second is various sidebar topics I didn’t think directly related.

      “In DNA I do.”

What specifically do you perceive DNA does in terms of “input substance” and “logical steps”? I’m asking about your precise understanding of the DNA “computer” (because it will help me understand why you include some things and not others).

      I accept your category “Computer, Type I” (I have from the beginning). I’m exploring your notion of it so that I can fully understand it.

      Likewise, I accept your other three categories, and am exploring what their membership functions are. Three have fairly obvious membership functions: DNA, organic brains, modern computing devices. You have yet to cover your view on consciousness.

      “I don’t consider an abacus to be a computer in itself because it doesn’t effectively ‘do’ anything. It just sits there. But if you mean that a human can use one to preform computations, well I do agree with that.”

      But doesn’t a computer just sit there unless a human uses it? What about an electronic calculator? What about the Babbage engine?

      You said analog computers are computers per your definition. How about a slide rule?

      “Regarding symbols,…”

      I’m not sure we’re on the same page on what I meant by “symbolic processing.”

I meant a system that uses discrete symbols, as opposed to some form of analog processing. A discrete symbol processor includes digital computers, digital music, abacuses, even thermostats. In contrast are analog computers, such as slide rules, resistive networks, and tubes of liquid.

      The general operation of DNA expressing a gene does use symbols, as I mentioned. The reason I asked about symbols is that the membership function for your “Computer, Type IV” (chronologically) isn’t clear to me. It does include analog computers (including slide rules?), but not a symbolic processor like an abacus.

The “just sits there” criterion isn’t clear to me; don’t they all? Alternately, aren’t they all human-made devices that do what they’re designed to do, and human interaction gives that behavior meaning?

    • Wyrd Smythe

      “I suspect that they think that you’ve gotten a bit too greedy.”

      What is “greedy” about formal definitions of terms?

      Talking about calculation in any form is talking about Computer Science, the formal study of calculation. That study predates actual computers. (There’s a common phrase for new CS students: “Computer Science isn’t about computers any more than astronomy is about telescopes.”)

      Of course someone can define terms their own way, but I feel it adds an unnecessary translation layer. The whole point of formal definitions is to enable precise, transparent communication of ideas.

“Here you’ve taken the ‘computer’ term, and then defined it such that it can essentially only exist as a specific variety of intelligently designed machine.”

No. Let me be clear about what I mean: A “computer” is something that “calculates.” Modern use implies a device, but the term dates back to the 1600s, when it meant a person who “calculates.” During WWII, Bletchley Park employed many such “computers,” and NASA employed them into at least the 1970s.

      The important definition is “calculate,” and it’s well-defined in computer science. Which, as I said, predates actual computers by centuries.

      “I have no problem accepting the theme to these posts.”

      By “theme” do you mean my conclusions or just the terms of my arguments?

      I believe Mike and others accept the terms; they just don’t agree with the conclusions. Which is entirely fair. And expected; we disagree on a key premise. Naturally we come to different conclusions.

      “It permits the theorist to be perfectly greedy regarding any definition at all.”

      So you’ve mentioned. When it comes to science, I don’t agree. There is a vocabulary, both of words and concepts, well-equipped for technical discussion. It isn’t so much a matter of “truth” but of a common and precise language.

      But as I’ve said all along: I’m willing to try to follow your definitions, so I’d rather just talk about the content of your views. Trust me to keep up or ask questions.

      “Here a door is not a machine, nor a person, nor a star. But mechanical typewriters and juicers do represent machines from this perspective.”

      Okay. (Out of curiosity: How about the hinges and latch on the door? How about the fusion engine in the heart of a star?)

      You don’t think a person can be a “well-oiled comedy machine”? 😀

      Although, seriously, the metaphor “body as machine” is well-established and common for good reason.

      “My own computer definition does not concern ‘selection’, ‘save state’, ‘recurse’, ‘iterate’ or ‘goto’ that I know of, since I don’t have a functional grasp of them.”

      They are fundamental to how calculation is defined. This blog post of mine might help.

      “I’m curious what you think about how neuroscientists in general seem to have decided that they observe ‘AND’, ‘OR’, and ‘NOT’ gates regarding neuron function?”

      Those are fundamental logical operations that can be seen in how neurons sum inputs and fire or don’t fire. Electronic logic gates have few inputs and are strictly binary, whereas neurons have many inputs and analog aspects. (I wrote this post about logic gates if you’re interested.)

It may not have been mentioned because it’s so basic. It’s what neurons do. Each one is a (super-complex) logic gate (with analog properties).
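A toy McCulloch–Pitts-style sketch (my own; real neurons are vastly messier) of how summing inputs against a threshold yields the basic gates:

    # Toy threshold neuron: sum weighted inputs, fire if past the threshold.
    def neuron(inputs, weights, threshold):
        return sum(i * w for i, w in zip(inputs, weights)) >= threshold

    AND = lambda a, b: neuron([a, b], [1, 1], 2)   # fires only if both fire
    OR  = lambda a, b: neuron([a, b], [1, 1], 1)   # fires if either fires
    NOT = lambda a:    neuron([a],    [-1],   0)   # an inhibitory input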

      “So for belated repair, before the emergence of genetic material, all that existed here was ‘inorganic’…”

Okay. (For the record, “organic chemistry” is formally defined as the chemistry of carbon compounds. You will need to explain your personal definition when you talk about this.)

      “I’m as strong a determinist as they come.”

      Okay.

      “I’m pleased with your distinction between ‘analogies’ and ‘lecture level understandings’.”

      You understand I saw them as similar, right? And generally inadequate, or entry-level at best, for science and technical discussion?

      “Here it seems to be ‘hardware all the way!'”

      Yes. What I was getting at is that when talking about natural “computers,” if we mean they “calculate” then we mean they have “software” but that it’s embodied in the “hardware” architecture.

To the extent nature creates anything one can call a “computer,” it creates Turing Machines (TMs) — mechanisms with a single purpose. Modern electronic computers are Universal Turing Machines (UTMs).

      The key distinction is that a UTM loads separate “software” and is general purpose. That’s what we mean by “von Neumann architecture” and “stored program.”

      Note that “calculation” is defined in terms of a TM. The UTM, let alone von Neumann architecture, is just a refinement.
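A loose sketch of the distinction (not a formal TM, just the stored-program idea; the names are made up):

    # Single-purpose "machine": its behavior is baked into the mechanism.
    def doubler(n):
        return n * 2

    # "Universal" machine: a general mechanism that loads a separate program.
    def run(program, n):
        for op, arg in program:        # the stored program, as data
            if op == "mul":
                n *= arg
            elif op == "add":
                n += arg
        return n

    print(doubler(21))                 # 42
    print(run([("mul", 2)], 21))       # 42, but here the program is data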

  • Philosopher Eric

    Wyrd,
    I’m afraid that I’ll need to get back to you on precisely what DNA does in terms of input substances and logical steps. At the moment I don’t see clear cut examples of what I’m referring to from standard scenarios provided on the web. Previously I’ve probed my wife about such function given that she works in the field of biology. I’d ask her, “So can I say this…?”. Here she might reply “Well I wouldn’t quite say that for reason of…, but you might say something like this…”. In the past I’ve decided that there are reasonable ways to make the “computer” association with genetic material as I define the term, though I don’t recall specifically how I’ve made these arguments or where I gave them. She’ll be with her family for at least a week and I know that she wouldn’t appreciate me asking her about this on the phone. Thanks for the question however. Now I’m curious as well.

    Anyway “computation” as I define it concerns a process where inputs become processed by means of logical steps to produce output function given specific features of the input’s nature. A certain protein input should be treated by genetic material differently from another. Thus computer type I. Or a given sensory input should be treated by neural function differently from another. Thus computer type II. Or the press of a given input key should be treated differently by CPU algorithms and whatnot than another key. Thus computer type IV. Since you seem fine with this I’ll begin introducing how I define computer type III after going through your most recent questions.

On me saying that the abacus just sits there until it’s used (yes, like the pocket calculator), let’s see if I can come up with something a bit more constructive. When a key is pressed on a calculator, this input is processed through logical steps that might output an associated symbol on its screen. So this is clearly computation as I define the term. Similarly sliding an abacus piece could be referred to as an “input” to such a device. But here this sliding could also be referred to as “logical processing”. And here this sliding could also be referred to as “output”. All that with just a nudge! As I define it, input function needs to be distinct from processing function, which again needs to be distinct from output function. Similarly the slide rule does not seem to qualify in this regard either. A Babbage engine however does seem to harbor separate input, processing, and output components of function, and so does qualify.

    You’d call the abacus and a standard mercury thermostat discrete symbol processors? Interesting. Yes we must be using the “symbol” term quite differently here. I’d say that a sufficiently educated human might interpret their function by means of discrete associated symbols, though I wouldn’t say that their function itself occurs by means of such symbols. And I also wouldn’t say that the computer that I’m now typing on functions on the basis of symbols. I’d say that the human can be said to use symbols to make sense of things, though my computer and all else functions on the basis of physics alone.

You realize this as well of course. Ah, so the difference must be that I was going “ontological” here while you were going “epistemological”. So if we’re speaking in terms of epistemology, I do agree that both the abacus and DNA harbor symbolic function, and specifically in respect to human interpretation. And if we’re speaking in terms of ontological function, you do agree that symbols do not exist beyond conscious interpretations, don’t you?

    On being “greedy” with terms, I accept that we often need standard and precise terms for well established concepts in specific fields. I have no problem with computer scientists being perfectly greedy within their own domain. But I won’t say that they should be permitted to own a given term in other contexts as well. Without analogies such as “car” or “box” or “cloud”, it’s difficult for us to communicate effectively let alone make productive associations regarding speculative topics. Neuroscientists today are having tremendous difficulties developing effective working models regarding “brains”, “consciousness”, and so on. If they believe that useful associations can be made with our computers (as I do), then I’d hope for computer scientists to advocate their use of the term as well.

(The real problem here, I think, is that we do not yet have a respected community of professionals with their own generally accepted principles of epistemology. Therefore scientists today must make up their epistemology as they go. Harder sciences seem to have done reasonably well so far, though softer sciences are clearly struggling in this regard.)

    By “theme”, I mean that it’s from your definitions that I’m able to accept your conclusions regarding computer science (as far as I can tell). In turn I very much hope that given my definitions you’ll be able to grasp my ideas well enough to assess them in a working level capacity.

On my restrictive definition for “machine”, actually that was simply my sense of how people originally used the term. But here you seem to be arguing my point. What if there were “machine scientists” who didn’t want people to use the “machine” term in ways that do not correspond with their narrow technical form? What if they had a problem with a human “well oiled comedy machine”? That position would be problematic.

“Here you’ve taken the ‘computer’ term, and then defined it such that it can essentially only exist as a specific variety of intelligently designed machine.”

“No. Let me be clear about what I mean: A ‘computer’ is something that ‘calculates.’ Modern use implies a device, but the term dates back to the 1600s, when it meant a person who ‘calculates.’”

I’m relieved that this was a misconception Wyrd. This helps square us. And perhaps more than you might think. Notice that in “a person who calculates”, computer scientists seem to have begun with a form of computer that correlates with the computer type III model that I’ll begin describing here, or consciousness! In truth my own model doesn’t stop with “educated person”, or even “person”, however. It addresses all forms of conscious life. You and I simply have a few more conscious tools than, say, a snake. All to come. So without further ado, let’s get to it…

    I theorize that genetic material, or computer type I, was necessary for life. Then with multicellular forms of function the need for whole organism central processing must have emerged, or computer type II. (These “brains” might have incited the Cambrian Explosion of life 541 million years ago.) I consider each form of computer to be entirely non-conscious. In this respect they might be associated with our robots (though I consider our robots to be incredibly less advanced).

So how shall I begin to describe my conception of the conscious form of computer? Well I’m looking up at my bedroom ceiling fan right now. The wind that it makes isn’t a part of the machine, but rather an output of it. Yes, that’s how I’d like you to begin thinking about “consciousness” as I define the term. The computer by which you and I perceive existence is outputted by the vast central organism processor which resides in our heads. Thus, just as there is no “wind hardware”, this particular variety of computer contains no hardware either. Fan wind is wholly a product of a fan, just as consciousness is wholly a product of a brain.

    (One interesting point is that I’m saying a computer is outputting a very different variety of computer. Conversely fans produce winds, though I don’t know of any winds that produce fans. Nevertheless I’m saying that consciousness can occur as a computer that exists as an output of another computer.)

At this point I need to define “consciousness” to help you grasp what it is that I’m saying the brain outputs. In a word this is sentience. Per my single principle of axiology, it’s possible for a computer that is not conscious to produce a punishment/reward dynamic for something other than it to experience. I’m speaking of a property of physics. This might resonate with you since here there is no potential for “simulated existence” to constitute “conscious existence”. Physics would still be needed, presumably — a truly “hard problem” indeed! (I personally doubt that we’ll ever knowingly build something that can output an entity which is sentient. I consider our machines utterly pathetic when compared against, say, the function of an ant.)

    From this model the brain outputs sentience, and I define sentience as consciousness, though in itself this is not a functional computer. Sentience exists as one of three varieties of input of a conscious form of computer. One other is a purely informational input that I call “senses”, such as vision. The last is referred to as “memory”, or past conscious processing that remains. I refer to the processor as “thought”, and it interprets such inputs and constructs scenarios about how to feel better. Then the only non-thought output function that I know of is “muscle operation.”

One tricky part of getting this is that all three forms of input exist as output of the non-conscious brain. It’s the same for the conscious processor. Then as for the “muscle operation” output, that’s essentially the thought processor requesting that the brain do what it’s decided. It has no ability to operate muscles in itself. As I tell Mike, whatever number of processing computations that the non-conscious brain does, the number of processing computations that the conscious computer does should be less than one thousandth of one percent of that.

    There’s a great deal more to discuss, and especially if you’re to gain a working level grasp, let alone lecture level. But I’ll stop here for the moment to see how things sit.

    • Wyrd Smythe

      This first reply just addresses some early points…

      “In the past I’ve decided that there are reasonable ways to make the “computer” association with genetic material as I define the term,…”

      Okay, we’ll come back to it. What appears significant here is:

      “A certain protein input should be treated by genetic material differently from another. Thus computer type I.”

Okay. (Contingent on a detailed discussion of your perception of how this is different from lots and lots of other chemical reactions that also treat different “inputs” differently.)

      “Anyway ‘computation’ as I define it concerns a process where inputs become processed by means of logical steps to produce output function given specific features of the input’s nature.”

      Okay, an initial definition of a membership function for the class: Things That Compute

      More precision is necessary to fully define it, especially with regard to “logical steps” — what exactly constitutes a “logical” step? (Part of why I asked about DNA.)

Can you clarify what you mean by “output function” as opposed to just “output” or “outputs”? Per your overall thesis, you do seem to involve the idea of outputs that are themselves functional.

      The nature of such functional outputs, and how they’re produced, needs some discussion. You have not laid any foundation or mechanism for it so far.

      “You’d call the abacus and a standard mercury thermostat discrete symbol processors?”

      Yes. Because, unlike a mercury thermometer, a thermostat detects discrete states: Too cold, turn on furnace. Warm enough, turn it off. (Or too warm, turn on AC. Cool enough, turn it off.)

      There is input, the temperature. There is processing by the heat-sensitive assembly. There is output, discrete electrical signals to the HVAC system.

      (What’s a “standard mercury thermostat”? I’ve never seen a thermostat that used mercury as the heat-sensitive part. I have seen them with a mercury switch on bi-metal coil, but never as a temperature sensor.)
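To make the discrete-states point concrete, a sketch (the setpoint and band are made-up numbers; real thermostats implement hysteresis in roughly this way):

    # Thermostat: continuous input (temperature), discrete output (on/off).
    def thermostat(temp_f, furnace_on, setpoint=68.0, band=1.0):
        if temp_f < setpoint - band:   # too cold: turn on the furnace
            return True
        if temp_f > setpoint + band:   # warm enough: turn it off
            return False
        return furnace_on              # inside the band: hold current state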

      “I also wouldn’t say that the computer that I’m now typing on functions on the basis of symbols.”

      You definitely are using “symbol” differently. You’re thinking of symbolic thinking.

      That’s important enough that I’ll respond in a separate reply.

      “…sliding an abacus piece could be referred to as an “input” to such a device.”

      Exactly.

      “But here this sliding could also be referred to as “logical processing”. And here this sliding could also be referred to as “output”.”

      The former is questionable; I would say the latter is incorrect.

The moving of beads is just input; it configures the abacus with an input value. It’s similar to pressing a calculator key.

      The protocols for how movements, especially groups of movements, interact is the processing. The algorithm here is in the user aided by the abacus, which acts as a register (memory) and simple processor.

The output is the final configuration of the beads, the “read out” of the abacus.

      A slide rule uses logarithmic scales to transform linear movement into multiplication. The input, again, is configuring the device. The algorithm, such as it is, is in the design of the scales and the rule. The output is selecting a point along the scale to see the result.
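The slide-rule trick in one line of math: adding lengths on logarithmic scales multiplies the numbers, since log(a) + log(b) = log(a·b). A sketch:

    import math

    # Slide-rule multiplication: add logarithmic "distances," then invert.
    def slide_rule_multiply(a, b):
        return math.exp(math.log(a) + math.log(b))

    print(slide_rule_multiply(42.1, 512.73))   # ~21585.93, to analog precision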

      The Babbage engine is similar in that user input configures it physically, processing occurs through mechanical device operation, output is read from a mechanical display.

      Arguably, the abacus and Babbage engine are digital computers, just much simpler than electronic ones. The slide rule is an analog computer, as is a mercury thermometer, a resistive network, or various other clever devices.

      “As I define it, input function needs to be distinct from processing function, which again needs to be distinct from output function.”

      Distinct under what interpretation? Distinct how? The above examples all have distinct aspects.

      “I’m speaking of a property of physics.”

      Which property?

      [more to come]

    • Wyrd Smythe

      With regard to symbols in the context of computers and calculation…

      “When a key is pressed on a calculator, this input is processed through logical steps that might output an associated symbol on its screen. So this is clearly computation as I define the term.”

      Yes. There is a key distinction to make here (bear with me, and I’ll bring it back to symbols):

      What you describe is the machine responding to your single keystroke input. Contrast with the multi-step process of entering one or more numbers (one or more keystrokes each) plus an operation to perform. For example: 42.1 × 512.73 =

      Casually speaking, we say the calculator performs a calculation which returns an output answer. That’s our general sense of a “computer.”

But there is a computer (chip) inside the box that performs the gross calculation resulting in an answer, as well as the computations involved in interacting with the user.

      As you mentioned, there is an algorithm behind that interaction of pressing a key and the device showing you the result of that keystroke. It’s part of a larger algorithm that manages input, which is a part of a larger algorithm that manages the device and does the “computation” the user requests.

      So there is the computer machine that implements a calculator algorithm. Importantly, however, the computer machine performs all calculations.

      Your keystroke is machine-level. Your performing a “calculation” is virtual-level.

      I think you are referring to symbols that are virtual-level, that are interpretative. You see certain lights light up on the display and interpret them as numbers.

When I speak of symbolic processing, that is not what I mean. I’m talking about the difference between analog and discrete. The latter form uses symbols; that’s what the “discrete” refers to: discrete objects of processing. Analog processing (generally) does not.

      The terms “digital,” “discrete,” and “symbolic processing” are essentially synonyms.

Opposing them, Yin-Yang style, are “analog,” “smooth,” and “continuous.”

      There is a further question of whether the operation is algorithmic. Is there a set of instructions that guides the operation of the machine?

Most analog computers don’t use algorithms in the obvious sense. They operate according to “least free energy” physics principles (which in many regards also apply to brains).

      A definition of input-process-output allows a broad class of “computers” but may include things you might wish to exclude. (Much depends on your definition of “process.”)

An abacus, a slide rule, the Babbage engine, an IBM laptop, and a quantum computer are all input-process-output devices under almost any interpretation. Only the slide rule is analog; the rest are discrete. Only the last two have distinct algorithms.

Most things in nature are analog until you get down to the very small scale. (Quantum mechanics, obviously, is discrete.) DNA is very small and, as we’ve discussed, has strong symbolic aspects in the bases used and in their tripartite groupings (codons).

Brains are almost entirely analog, but individual neurons have an excited state and a not-excited state, so individual neurons are symbolic processors. They sum their inputs and make a decision, not unlike a thermostat.

      Consider an old-fashioned bowling alley pin machine.

      Those are complicated beasts, mechanical engineering marvels. Given that they keep score and manage the pins, they are arguably computers. They are definitely symbol processors; their symbols are pins, balls, and game states.

But as far as I know, they have no algorithm, per se. They work solely by virtue of their design. Or you can say the algorithm in the designer’s mind was reified in the machinery. (I used to work with relay-based switching systems. Same thing. There is logic, but no algorithm, except in the designer’s mind or reified in machinery.)

      For any machine with interacting parts, there is an algorithm in the designer’s mind that defines the part interactions. Is the algorithm in the machine or its operation? Matter of interpretation.

      Bottom line, per your definitions, if DNA can be a computer, and especially if brains can be computers, then all the devices I’ve mentioned are equally computers.

    • Wyrd Smythe

      Replying to the meat of your thesis will take me some time. I want to chew on it a bit.

      The one bit of feedback I can give you now is that you need a foundation and a mechanism for how a brain “computer” gives rise to a consciousness “computer” (not to mention how and why you define consciousness as a computer in its own right, but deny dualism).

    • Wyrd Smythe

      “Fan wind is wholly a product of a fan, just as consciousness is wholly a product of a brain.”

      Okay. As you say, fan wind shares none of the characteristics of the fan. In my analogy to lasers, laser light shares no properties with the lasing material.

      What something produces is almost always quite different from what produces it.

      The one exception I know of is that a mathematical object can produce an identical mathematical object. Check out a quine, for instance.
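For instance, a classic minimal Python quine:

    # The two lines below print exactly themselves when run
    # (this comment isn't part of the program's output):
    s = 's = %r\nprint(s %% s)'
    print(s % s)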

      “I’m saying that consciousness can occur as a computer that exists as an output of another computer.”

      So I gather. So far I haven’t seen any foundation (or mechanism) for such a belief.

      “In a word [consciousness] is sentience.”

      Then a dog, a turtle, and a lobster are all conscious? They are all sentient.

      (Do you mean sapience rather than sentience? The latter is just the ability to perceive and feel. Sensing images and feeling pain qualify. Sapience roughly equates to “human intelligence.”)

      Machines, let alone computers, are neither sentient nor sapient.

      “Per my single principle of axiology, it’s possible for a computer that is not conscious, to produce a punishment/reward dynamic for something other than it to experience.”

      Only if it has been programmed to do so.

      If you are referring to evolution, there is no programming involved. Random mutation gives an organism and its progeny an advantage that allows greater survival rates (as you say, axiology). Of course, most mutations are inimical, but every once in a while, one isn’t.

      Computers do not evolve unless they have been programmed to. (Consider evolutionary computation.)
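For instance, a throwaway sketch of a genetic algorithm (the fitness function and parameters are arbitrary); it evolves bit-strings toward all 1s only because it was programmed to:

    import random

    # Programmed evolution: selection plus mutation toward a fitness target.
    def evolve(length=20, pop_size=30, generations=100, mutation=0.05):
        pop = [[random.randint(0, 1) for _ in range(length)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=sum, reverse=True)    # fitness = count of 1 bits
            survivors = pop[:pop_size // 2]    # selection
            pop = survivors + [
                [bit ^ (random.random() < mutation)   # occasional mutation
                 for bit in random.choice(survivors)]
                for _ in range(pop_size - len(survivors))]
        return max(pop, key=sum)

    print(evolve())   # typically all (or nearly all) 1s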

      If you mean the brain produces some kind of dynamic for the mind, you’ll have to be a lot more specific and detailed.

      “From this model the brain outputs sentience,…”

      An odd way to put it, but yes, the result of brain operation is a sentient creature. A dog, a turtle, a lobster, a human.

      I take it you see this output, this sentience, as similar to fan wind? A distinct object? But you say you are not a dualist, so how can sentience wind be distinct from the brain fan?

      “I refer to the processor as ‘thought’, and it interprets such inputs and constructs scenarios about how to feel better.”

Okay… That seems the central idea, and I’ll have to think about it for a while.

      My initial response is that this needs a lot more specificity. So far it’s all very vague.

      The biggest problem I see is classifying both mind and brain as computers. As I’ve said, “brain as computer” is a much weaker metaphor than “DNA as computer.”

      And, frankly, I can’t see any metaphor for “sapience as computer.” I would be inclined to say that, whatever mind is, it’s definitely not a computer.

      So you have kind of an uphill battle there, is the point. 😉

      “As I tell Mike, whatever number of processing computations that the non-conscious brain does, the number of processing computations that the conscious computer does should be less than one thousandth of one percent of that.”

      How do you come to that conclusion? I would imagine consciousness would require vastly more computation than any non-conscious system.

  • Philosopher Eric

    Okay Wyrd, lots to address here. And there’s always the danger that my answers will simply give you more associated questions and thus provide no feeling of being better informed. So we’ll need to cross some issues off the list as best we can.

    Fortunately the “symbol” term seems like an easy one. Yes we were using it differently. I was going for the relatively standard “representation of something else” definition. Without a conscious entity, and indeed one equipped with language, symbols in this regard cannot exist. Conversely you were using “symbol” as an ontological marker of discrete states. Because an abacus harbors various discrete rather than continuous states of existence, it might thus be referred to as “symbolic beyond representation”. (Then its state at any given setting may also be referred to as symbolic in a representational capacity.) I’m just a simple guy however, so if it’s all the same to you then I’d prefer that we confine the “symbol” term to representation. Then we can use the “discrete” term when we mean non-continuous.

Anyway the mercury thermometer functions continuously, and the markings on its vial are symbolic in nature — without symbolic representation those markings are meaningless. (Something similar may be said for conventions regarding the abacus and markings on the slide rule.) Neurons firing or not firing would be discrete, and I’m saying that doing so to constitute effective “and”, “or” and “not” gates permits them to function similarly to our computers, as maintained in general by neuroscientists. Algorithms seem to exist in my brain, for example, to regulate the beating of my heart given inputs associated with my physical activity and such. And I presume that distinct algorithms exist in genetic material to produce various novel substances given specific chemical input. No conscious conventions required for either of these.

Regardless, my models do not depend upon genetic function, or even central organism processors, effectively residing under the “computer” classification. But I will continue referring to them this way for now, since I can think of nothing better, and I believe that analogies are crucial in the quest to help build effective human understandings in general. The human just isn’t smart enough to understand things without relational strategies, I think. Then as for the “conscious” form of computer, well apparently that’s where computer science itself began! But we’ve hardly scratched the surface of my own associated model, so let’s continue.

    Regarding a mechanism for input, processing, and output, let me be clear that I believe all of reality functions by means of causality. So beyond this oneness of classification, all steps will be humanly interpreted. Given the direction of time we find it useful to define systems where the first parts are “inputs”, and final parts “outputs”, and middles can be called “processing”. This is simply useful convention for us monists. So naturalism is the foundation upon which I build.

(On the mercury thermostat, yes, thanks for correcting me about that — a bimetal coiled spring must instead provide the temperature input mechanism. And yes, associated switches render such function discrete.)

    On my denial of dualism, this is not actually a “hard” denial. It’s just a belief given my own personal metaphysics. I observe that to the extent causality fails, there’s nothing to figure out. So if there’s nothing to figure out, then it wouldn’t be sensible for me to even try to figure things out. I’d certainly hate to not be sensible! Thus I presume naturalism. I also refer to this as the use of “reason over faith”.

    Actually there is a bit of evidence that consciousness occurs naturally, which is to say as a product of brain mechanics. One issue here is that a person can never directly feel the sorts of things that another does. Apparently this is all private. But if that weren’t the case, then causalists like me could note any causal connections between people that can feel what another does, and supernaturalists could note any supernatural connections from their faith based doctrines (which I suppose would be documented praying to this effect and whatnot).

Well as it happens there are two Canadian girls who can each feel what the other does to some degree! And so who wins this round of naturalism versus supernaturalism? Given that their brains are joined by a thalamic bridge, it would seem that we naturalists do. Here there is an obvious causal connection for their unique ability. Wouldn’t it be odd if the world’s only documented case of supernatural shared experience happened to occur in people who share a crucial element of physical brain? It’s strange to me how little publicity this case gets given the prominence of dualism. You can be sure that if the supernaturalists had evidence like this on their side (rather than just figments of zealot imagination regarding holy statues and such), then we’d never hear the end of it. http://www.cbc.ca/cbcdocspov/m_features/the-hogan-twins-share-a-brain-and-see-out-of-each-others-eyes

If it’s possible for a brain to cause something beyond it (such as you or me) to feel good/bad, then causality will mandate this to be a property of physics. Otherwise I’d call my ability to feel good/bad “supernatural”. And would such dualistic properties invalidate the models that I’ve developed? Not inherently, no. Though a naturalist, I’m also a hard Cartesian. If a “god” creates sentience rather than physics, my own models remain unchanged.

    Your uncertainties about my consciousness model are understandable given that I’ve only just begun. But note that there’s a fine line here regarding how much I should say at a given point. Too few answers will naturally hinder you regarding “the big picture”, thus imparting frustration. Then lots of answers could overwhelm you with understandably compounded misconceptions, thus again imparting frustration. Given each danger let’s begin with a broad outline. But try not to demand that everything make sense initially. Effective scenarios seem most helpful to display subtle points that my lectures simply will not address. After the following relatively thick lecture, hopefully we’ll be able to look into some practical scenarios from which to illustrate effective subtleties.

    Below is a functional diagram of how I see a central organism processor outputting a conscious form of computer. Of course it nonetheless requires explanation.

    At first try to ignore everything below the first three rows of boxes. Here the non-conscious brain will for example concern things like how fast my heart is instructed to beat. Associated information, such as physical activity (non-conscious input), is processed (in the processing step) to regulate non-conscious function (or associated output). I presume countless forms of input, processing, and output in a non-conscious capacity for the human, but provide no such examples in the boxes. In this particular diagram only the conscious form of function is noted below these first three rows. But at least with heart function as an example of what the main computer does, let’s now get into consciousness.

For the moment disregard the one-way arrowed lines. Here all of consciousness exists as an output of non-conscious function. Thus consciousness exists as something like wind from a fan. But better still, think of certain molecules of fan wind that come together to form blades and so spin to propel wind in a different manner than the original fan does. So this is the analogy. Here the computer outputs a separate kind of computer (and this is the computer by which you and I, as well as lobsters, experience existence; it’s also the computer by which early computer scientists spoke of “calculation”).

    Now moving down to the “valence” input, also known as sentience, this is the defining component of consciousness as I position the term. Here feeling good/bad is theorized to drive such computation, just as electricity drives the computer that I’m typing on, and molecular dynamics drive genetic material function, and neural dynamics drive brain function. Here anything that feels good/bad is conscious, even with no functionality. This is a presumed causal property of physics (though if David Chalmers or Rene Descartes happen to be right, then the supernatural actually prevails).

Beyond the defining input there are “senses” such as the standard five: vision, smell, touch, hearing, and taste. I confine them to information only, however. The bad part of a smell will exist under the valence input, while the information about what causes such a smell will reside here.

    Then there is the memory input. There are various ways in which past consciousness can be retained for future recall. Essentially it seems that effective chains of neurons that have fired more recently have more propensity to fire again, and so various things tend to incite the memory form of input.

    The thought processor gets to what you’re doing right now. The theory is that you interpret those three varieties of conscious inputs, and then construct scenarios in the quest to figure out how you might become more happy. And why bother? Because feeling good rather than bad is theoretically all that matters to the conscious entity. (Actually feeling good rather than bad is defined to be all that matters to anything anywhere, though only the conscious entity has such potential.)

Once the thought processor comes to a decision, the only non-thought variety of output it has is “muscle operation”. But in truth consciousness does not directly operate muscles as I see it. Instead the vast non-conscious computer senses what’s decided and so operates such muscles as instructed.

    “As I tell Mike, whatever number of processing computations that the non-conscious brain does, the number of processing computations that the conscious computer does should be less than one thousandth of one percent of that.”

“How do you come to that conclusion? I would imagine consciousness would require vastly more computation than any non-conscious system.”

Okay, but think about this under the scope of what I’m saying. If the wind of a fan creates a distinct fan by means of its wind, this second fan cannot possibly be a larger fan than the one which produces it. But it could conceivably be hundreds of millions of times smaller. Well that’s exactly what I’m suggesting is the case regarding the conscious form of computer. I define it such that it’s a tiny product of what produces it.

Consider how much personal processing is required of you to decide to make a fist and then make that fist. Nothing to it. But that’s the nature of the processing associated with consciousness as I’m defining the term. And the reason that’s so easy is because “you” didn’t actually make the fist. Instead you decided to do it, and then the vast non-conscious machine that is your brain automatically took your decision and provided the illusion that “you” made the fist. That’s why I put an arrowed line which goes from conscious output to non-conscious input. Whatever muscles you decide to move, the non-conscious side is the one that actually gets the job done. We take credit for too much of “the meat puppet’s” function, I think.

    Alright, enough lecture for now. Next time let’s run some scenarios!

    • Wyrd Smythe

      At this point, I have to ask: What is your technical background and education? What fields have you studied?

    • Wyrd Smythe

“[Neurons] doing so to constitute effective ‘and’, ‘or’ and ‘not’ gates permits them to function similarly to our computers, as maintained in general by neuroscientists.”

      Remember, it’s a metaphor for basic illumination. I think you are reading too much into the metaphor.

      “Algorithms seem to exist in my brain,…”

      “Seem to” is the key phrase. Point to the algorithm.

      “I presume that distinct algorithms exist in genetic material to produce various novel substances given specific chemical input.”

      Point to the algorithm. You can’t (no one can). These “algorithms” exist only figuratively.

“If the wind of a fan creates a distinct fan by means of its wind,…”

      The analogy does not work; there’s no way that can happen. You can’t just make something up. Analogies and metaphors have to be coherent. The idea that wind creates a fan is not a coherent idea.

      “…this second fan cannot possibly be a larger fan than the one which produces it.”

Even granting the analogy, the conclusion does not follow. What if the first fan blows twice as long, so there is twice as much air? Wouldn’t the (magical) second fan be twice as big?

      But I’m thinking none of this matters. The business with DNA and man-made computers seems irrelevant to your main point. Which is sort of about brain/mind and sort of about hindbrain/forebrain.

      “Here the non-conscious brain will for example concern things like how fast my heart is instructed to beat.”

What you’ve done here is describe the hindbrain (the brainstem and cerebellum, which handle things like heart rate) and, following that, the cerebrum (forebrain).

      The idea that thoughts in the forebrain are dispatched to the hindbrain for execution is well studied, but hasn’t delivered specific answers. (A diagram with boxes and arrows is not an answer.)

      How it happens is still something of a mystery and, obviously, turns on an understanding of consciousness — something that’s eluded experts for many centuries.

      Let me ask you this: As I perceive it, your definition of consciousness involves combining sensory data, memory, and sentience, with a driving force of reward-punishment. Dogs (and most animals) would be conscious under this definition (they are sentient, have senses, have memory), correct?

      Are humans significantly different? If so how?

  • Philosopher Eric

    Ah, thanks for asking! It’s rare that others ask me about myself online. I’d like for things to be a bit more personal.

As for my credentials, I’m afraid that I don’t have much. I do have a bachelor’s degree in economics to my credit, though with no such professional work subsequently. I’ve worked with my hands in the field of construction since (and sometimes during) my days at school. I’m now fifty.

    My own sense of me, for what it’s worth, goes about like this: I was an overly sensitive kid who thus sought to understand why people would do such horrible things to each other. I feel that I was continually misled by the moral notions that my parents and society in general sought for me to have and promote. At about the age of 15 a clarifying realization hit — we’re all self interested products of our circumstances. From then on my observations started making sense.

I wanted to further develop my position through university studies, but was quite disappointed with what I found in both philosophy and the mental/behavioral sciences — philosophy for remaining “moral”, and science for attempting to grasp our nature while ignoring value itself. I decided that I must not let such failure damage me as well.

    Here I yearned for something academically respectable, and it was physics that saved me. How might I ever understand how to fix the bullshit associated with the fields that interested me most, if I had only vague notions of “good science”? Physics provided me with a model to follow.

    I’m not actually that bright however. After scraping through the required math and basic physics courses, I began my upper division studies with quantum mechanics. (Actually given how it was taught I think that even my “genius” classmates generally failed this one.)

    From there I moved down to the field of economics, which I did respect given that it’s the only behavioral science with the balls to formally found itself upon the premise of utility — “If it feels good then it is good!” Then by 2014, as a mature adult with wife and son, I decided that my ideas were ready for the scrutiny of others. I began blogging heavily, which has been great fun!

    I do know a bit about you from your blog, though I haven’t yet read enough. Is there anything that you’d like to tell me about yourself as well?

    • Wyrd Smythe

      Eric, I asked because this is a field that requires a deep technical knowledge in several sciences, and from your writing I wasn’t getting a sense you have that background.

      What you’ve presented is a metaphorical map of general consciousness — sort of an introductory overview lecture. But the devil, as they say, is in the details, and without a deep background in the relevant sciences, it’s impossible to be detailed.

      If it’s not obvious, I do have the background. I’m an autodidact; I’ve actively studied the hard sciences since I was a small kid. (I’m in my 60s.) Thinking back, damn, it’s a wide range: chemistry, electronics, optics, sound, math theory, high-energy physics,…

      I’ve designed, built, and programmed computer systems; I’ve taught programming and computer science. I’m retired from 34 years at The Company (big, international) where I was a computer communications technician (first), teacher (later), and software designer (most of my career).

      In high school and college (in Los Angeles) I was in the arts, theatre, film, TV, mostly. I had a Computer Science minor, and that turned out to be my career (which was mostly pretty awesome — got me inside the Pentagon, for instance; way cool).

      I’ve been around. 😉

      “At about the age of 15 a clarifying realization hit — we’re all self interested products of our circumstances.”

      That explains your focus on reward-based systems. It’s certainly the way animals work, the way nature and evolution work.

      What (I believe) makes humans unique is our ability to intellectually transcend that. (The tragedy is that most of humanity doesn’t try very hard.) Moral philosophy is all about trying to determine what ought to be from what is — the “ought from is” problem, right?

      It’s a product of our higher consciousness; animals don’t conceive of morality.

      A final note: The term “consciousness” has so many meanings, one needs to be precise in which form of consciousness is being discussed. Consider the possibilities:

      1. Sentient, not intelligent, purely reactive to conditions. Turtles, cows, fish. (Their sentience is the issue in animal cruelty discussions.)
      2. Sentient, roughly intelligent, interactive, can learn. Dogs, apes, corvids, capuchins, dolphins, elephants. You could even break this down into animals that pass the mirror test and those that don’t.
      3. Sapient (human), unconscious, in a coma.
      4. Sapient, alert, but locked in (outwardly it can look like a coma).
      5. Sapient, asleep. (Normal human sleeping.)
      6. Sapient, awake, alert, active. (Normal human awake state.)

      That’s a lot of ways to be “conscious” but the usual default involves sapience. (Being awake or alert aren’t really determining factors here.) My point is, that’s what I usually mean by it, if I don’t qualify it: sapience.

  • Philosopher Eric

    But Wyrd, there must be countless prominent people working on “consciousness” out there who do have the credentials that you’ve mentioned. And what does science have to show for their efforts? A whole lot of conflicting and unaccepted theory. If you were to say that I wasn’t qualified to discuss certain advanced topics in math with you, or a hard science, I’d probably agree. But as an educated person who has been working on consciousness for most of my life, I should be qualified — the field remains wide open. This is apparently the heart of the softness of our soft sciences.

    With a rough scheme now presented, can’t we run through some scenarios regarding the nature of sentient function, and so get into some practical mechanics of this model?

    You’ve mentioned the problem of definition just above, and I couldn’t agree more. I consider this to be an epistemological hole in the fabric of science. Without any generally accepted principles of philosophy (as in metaphysics, epistemology, and value), the institution of science itself should not yet be sufficiently founded. I’d appreciate your thoughts on my ideas here as well. Or would you instead say that science functions just fine without any such accepted principles?

    • Wyrd Smythe

      “And what does science have to show for their efforts?”

      Quite a bit. All the fields we’ve discussed, genetics, computer science, neuroscience, have made, and continue to make, considerable progress. The AI Holy Grail does remain elusive.

      However, as you know, consciousness is Chalmers’ “hard problem” and it is an extremely hard nut to crack. The three main reasons are: (1) complexity of the parts — synapses are extraordinarily complex; (2) the scale of the brain — some 80 billion neurons averaging 7,000 interconnections each; (3) the disconnect between objective neurophysiology and subjective experience — the core of the “hard problem.”
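      (Back of the envelope on that scale: 80 billion neurons × 7,000 synapses each is roughly 5.6×10¹⁴ connections, over half a quadrillion, and per point (1) each of those connections is itself a complex chemical machine.)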

      “If you were to say that I wasn’t qualified to discuss certain advanced topics in math with you, or a hard science, I’d probably agree.”

      Consider what you’ve said:

      You acknowledge that math or computer science require a background and understanding. You also acknowledge that the long-standing problem of consciousness has not been solved by trained experts devoting their lives to the study.

      Combine those two facts.

      The ironic truth is, math and the other “hard” sciences are easy compared to the so-called “soft” sciences.

      “Soft” does not mean “easy.” These sciences are called “soft” because it is incredibly difficult to find hard answers in them. Only the hard sciences can offer hard answers. For instance: 2+2=4, a nice hard answer.

      The reason has to do with degrees of freedom — the number of variables required in describing a system. I imagine you ran into this studying economics. (The concept also arises in quantum mechanics.)

      What we seek is the minimum number of variables that fully describe a system and are independent of each other. (A degree of freedom is an orthogonal axis in the phase space of the system, similar to X-Y-Z in 3D space.)

      The hard sciences deal with systems that have comparatively few variables.

      The soft sciences involve systems with huge numbers of variables. Often these systems are so complex that statistical analysis is the best we can hope to do. (At least for now.)
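      To make the contrast concrete, here is a minimal sketch (Python; the particle counts are illustrative, and the bookkeeping is the standard classical phase-space one, not anything specific to a particular field):

      # Degrees of freedom: the independent variables needed to fully
      # specify a system. A classical point particle contributes six
      # (three position coordinates plus three momentum coordinates).
      def classical_dof(num_particles):
          return 6 * num_particles

      print(classical_dof(2))      # a two-body orbit: 12 variables, cleanly solvable
      print(classical_dof(10**6))  # a tiny gas sample: 6,000,000 -- statistics territory

      # For a "soft" system, such as a person or an economy, there is no
      # agreed-upon variable list at all; even naming the axes of its
      # phase space is an open problem.

      Past a handful of variables even the hard sciences retreat to statistics (statistical mechanics); the soft sciences start there.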

      “But as an educated person who has been working on consciousness for most of my life, I should be qualified — the field remains wide open.”

      No, I’m sorry, not really. The requirements in these fields are at least as demanding as any hard science and arguably greater. Certainly the problems are vastly more challenging.

      There is a well-studied truth involving all complex, technical subject matter (the Dunning-Kruger effect): A certain level of understanding is required to even understand what’s required. Below that level, one simply cannot appreciate how much one does not understand. (Education in technical subjects is often an exercise in finding out how much one has yet to learn!)

      You acknowledge that math requires training. I hope you can acknowledge that so does the study of consciousness. At least as much, if not more.

      (When I asked you about fields of study, you didn’t mention anything about studying consciousness most of your life. What does it consist of?)

      “With a rough scheme now presented, can’t we run through some scenarios regarding the nature of sentient function, and so get into some practical mechanics of this model?”

      Have at it.

      “Without any generally accepted principles of philosophy (as in metaphysics, epistemology, and value), the institution of science itself should not yet be sufficiently founded. I’d appreciate your thoughts on my ideas here as well.”

      Give me a “for instance.”

      “Or would you instead say that science functions just fine without any such accepted principles?”

      I think scientists definitely benefit from philosophical training, which essentially is training in clear, precise, deep, detailed thinking. Likewise, philosophers benefit from science training, because philosophical views should be grounded in current knowledge.

      I keep in mind that science was originally called “natural philosophy” and that both are just the study of reality. Philosophy in a metaphysics context, science in a physics context.

      If you’re talking about morality, that’s a whole other discussion. Moral philosophy is just one branch of philosophy, and as with all philosophy and science, it has bearing on all our lives.

  • Philosopher Eric

    Wyrd,
    It’s good to hear that you’re ready to assess consciousness scenarios through my model. I desperately need to get one of these: https://logosconcarne.files.wordpress.com/2019/01/tuit-0.png Apparently there are some more immediate issues to address right now, however.

    I’m commonly told that we shouldn’t be concerned about the softness of psychology, psychiatry, sociology, or our mental and behavioral sciences in general. They tell me that we’re just too complex for general predictive modeling. And I’m told that these sciences do still have various verified effective models, such as “the Hawthorne effect” (or the tendency for people to alter/improve their behavior when they perceive being watched).

    My reply is that even though we have various situational heuristics that seem effective, we don’t yet have acceptable big picture theory from which to explain, for example, why the Hawthorne effect should be observed. All broad fundamental theory has failed to achieve general consensus to date. Given such failure, psychologists don’t even seem to propose such theory anymore. Instead they go “small ball” (and sometimes include a bit of p-hacking for good measure).

    I believe that I’ve developed effective big picture theory from which to help harden up these struggling fields. It’s not that the human is too complex, I think, but rather that it’s too close to itself to be sufficiently objective. Why would the human be the only thing that it has plenty of information about, yet cannot grasp? Perhaps because this is the only situation where what’s being studied also represents what’s doing the studying? This is to say that we’re naturally biased. If we could consider ourselves more objectively then perhaps improvement could finally begin to occur?

    I believe that in order for the human to finally be able to effectively study itself, progress will be needed in the foundation upon which science rests. This is to say the three branches of philosophy: metaphysics (literally “what comes before physics”), epistemology (or principles from which to build effective beliefs), and axiology (or the nature of value, commonly distinguished by aesthetics and ethics).

    Here are my own four principles from which to potentially help found the institution of science.

    My single principle of metaphysics: To the extent that causality fails, nothing exists to figure out.

    My first principle of epistemology: There are no true or false definitions, but rather only more and less useful ones.

    My second principle of epistemology: There is only one process by which anything conscious, consciously figures anything out. It takes what it thinks it knows (evidence) and uses this to assess what it’s not so sure about (a model). As a model continues to remain consistent with evidence, it tends to progressively become more believed.

    My single principle of axiology: It’s possible for a “computer” that is not conscious, to create a punishment/reward dynamic for something other than it to experience, or all that’s valuable to anything throughout all of existence. (As I define it, what’s created here is the conscious entity, whether functional or not. Apparently evolution transformed this property of physics into something functional.)

    The greatest impediment to our mental and behavioral sciences today, I think, is the social tool of morality. Here’s a quick rundown of that: https://selfawarepatterns.com/2019/01/19/what-positions-do-you-hold-that-are-not-popular/#comment-26125 This was essentially my epiphany as a 15-year-old kid: contrary to what I’d been taught, we’re all self-interested products of our circumstances.

    • Wyrd Smythe

      “Why would the human be the only thing that it has plenty of information about, that it cannot grasp?”

      There are many things we have lots of data about but cannot grasp. Turing’s Halting problem and Gödel’s incompleteness theorems tell us there are things we can never fully grasp no matter how much data we have.

      That appears to be the nature of reality. Weather is an example of a system for which we could have almost perfect information and still not know whether it will rain exactly one year from today.
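      A toy demonstration of that sensitivity, using the logistic map, a standard stand-in for chaotic systems like weather (a minimal Python sketch; the parameter r = 4 and the starting values are the usual illustrative choices):

      # Logistic map in its chaotic regime: two trajectories that start
      # almost identically diverge completely within a few dozen steps.
      def logistic(x, r=4.0):
          return r * x * (1.0 - x)

      a = 0.4             # "almost perfect information" about the system...
      b = 0.4 + 1e-12     # ...off by one part in a trillion
      for step in range(60):
          a, b = logistic(a), logistic(b)
      print(abs(a - b))   # by now the difference is of order 1 -- total divergence

      The error roughly doubles each step, so even a trillionth of imprecision swamps the prediction within about forty iterations. That is why near-perfect data still can’t tell you about rain a year out.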

      “My single principle of metaphysics: To the extent that causality fails, nothing exists to figure out.”

      What exactly are you saying here? Are you saying causality does fail? Or never fails? It seems like just a convoluted way to assert causal determination. Is there more to it?

      “My first principle of epistemology: There are no true or false definitions, but rather only more and less useful ones.”

      Okay,… To me that seems the opposite of what epistemology is about (the study of what we can say is true, of what we can be justified in believing), but… okay, whatever.

      “There is only one process by which anything conscious, consciously figures anything out.”

      Okay. (Stated so vaguely, it’s trivially true: Guesses are refined by facts, obviously.)
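      (The non-vague version of that principle already exists: Bayes’ rule. A minimal Python sketch, with made-up illustrative numbers for the prior and the likelihoods:)

      # Bayes' rule: a model gains credence as evidence stays consistent with it.
      def update(prior, p_evidence_if_true, p_evidence_if_false):
          numer = prior * p_evidence_if_true
          return numer / (numer + (1 - prior) * p_evidence_if_false)

      belief = 0.5                  # start agnostic about the model
      for _ in range(5):            # five observations the model predicts well
          belief = update(belief, 0.9, 0.3)
      print(round(belief, 3))       # ~0.996: "progressively more believed"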

      “My single principle of axiology: It’s possible for a “computer” that is not conscious, to create a punishment/ reward dynamic for something other than it to experience,”

      Provide a specific example. As stated, it’s just hand-waving.

      “The greatest impediment to our mental and behavioral sciences today, I think, is the social tool of morality.”

      What is the “social tool of morality” and how is it the “greatest impediment” to science?

  • Philosopher Eric

    Wyrd,
    I certainly agree that there’s plenty that we can’t know. In fact I believe that only one thing can ever be known — something conscious can know that it exists in some manner, but can be certain of nothing else. (From Descartes of course.) But beyond that, hard science seems to have developed various effective ideas to believe in, though far less so on the soft side.

    As I see it, meteorology should actually be classified as a “hard science”. This is to say that professionals in the field have various big picture models regarding the mechanics of atmospheric dynamics that apparently correspond with observation pretty well; this is exactly what our mental and behavioral sciences lack. Freudianism has failed, behaviorism has failed, and so on.

    Neuroscience might be termed “quasi-hard” in the sense that lots of productive anatomical work does seem to occur in the field. But without any generally accepted big picture ideas regarding brain/human function (and the model that I’ve developed could serve such a role), our mental and behavioral sciences in general remain “soft”.

    On my metaphysics, I’m not saying that causality does or does not fail. I’m instead saying that to the extent that it does fail, nothing exists to figure out. Thus if you’d like to figure something out, but presume a void in causality in that regard, then apparently the “figuring” part would be hopeless here. With this metaphysics I’d like to segregate science into a faction that is entirely naturalistic, as well as another that goes both ways. Then when a naturalist is approached with a less than naturalistic idea, I’d like for the standard answer to be something like “You might be correct about that, though the club which I belong to presumes contrary metaphysics. There is a club which is open to that sort of thing however, so I encourage you to bring it up with them”.

    On there being no true or false definitions (in which case the reader needs to accept the writer’s definitions in the attempt to understand), I believe this would help fix one of academia’s most widespread problems. Today people seem to commonly argue past each other by presuming separate definitions for their terms — falsely believing that they know what it “truly means”. Ludwig Wittgenstein tried to fix this through his “ordinary language” approach, but that clearly wasn’t sufficient.

    On my second principle of epistemology, one example of what it could do is help a physics community that’s gotten itself “Lost in Math”. I’m sure that Sabine Hossenfelder would rather do physics than epistemology. In truth however I think that all of science could use such a formally grounded mission statement from which to work.

    On my axiology and the paradigm of morality, consider a scenario. Imagine a creature which functioned somewhat like our machines — no sentience. Thus there shouldn’t be anything that it’s like to exist as such a creature, just as we presume for our robots and most things. Now imagine a branching in the species such that a sentient creature were to evolve. Existing as one of these new creatures might feel horrible sometimes and wonderful others depending upon the circumstances. Thus if you wanted to assess the value of existing as such a creature over some duration, you’d take each theoretical unit of good feeling and subtract each theoretical unit of bad feeling over that period for a total score. Or to assess the welfare of any number of such creatures as a whole you’d do the same for each member and combine the scores. (Note that as presented here this is all true by definition.)

    If this creature were to evolve over millions of years to produce something with advanced natural languages and rich culture, note that value for any defined personal or social subject would remain unchanged — the summation of its positive minus negative experiences over an associated period of time. So this should be the case for the human as well. It’s largely the social tool of morality which prevents our mental and behavioral sciences from grasping this yet, I think.
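    (In arithmetic terms, the scoring rule I’m proposing is just a signed sum. A minimal Python sketch, where the “units of feeling” are hypothetical placeholders:)

    # Welfare over a period: positive units of feeling minus negative
    # ones, summed per creature, then across the group.
    def welfare(experiences):                 # e.g. [+3, -1, +2] for one creature
        return sum(experiences)

    def group_welfare(creatures):
        return sum(welfare(e) for e in creatures)

    print(group_welfare([[3, -1, 2], [-4, 1]]))   # 4 + (-3) = 1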

    The idea here is that the human is sensible enough on some level to realize that happiness constitutes value, though it’s also effective for it to keep its own selfishness secret. Here it may publicly admonish “wrong doers” and praise “right doers”, both to restrain the selfishness of others, as well as to help convince them that it deserves friendly treatment. Thus in practice the nature of the human should convince it to deny its nature — the social tool of morality. And thus its mental and behavioral sciences should remain soft.

    • Wyrd Smythe

      “I’d like for the standard answer to be something like ‘You might be correct about that, though the club which I belong to presumes contrary metaphysics. There is a club which is open to that sort of thing however, so I encourage you to bring it up with them’.”

      [shrug] Works for me. We clearly belong to different “clubs.”

  • Philosopher Eric

    It could be that some of the “clubs” that we associate ourselves with oppose each other, though I haven’t noticed many obvious signs. I certainly do not worship at the altar of “New Atheism”! And we do belong to some of the same clubs, such as Sabine Hossenfelder’s and Mike Smith’s. Furthermore I am fond of your commentary in general. For the moment however it would seem that I’ve overstayed my welcome. For that you have my apologies.

    • Wyrd Smythe

      “It could be that some of the ‘clubs’ that we associate ourselves with oppose, though I haven’t noticed many obvious signs.”

      I’ve been pointing them out all along the way.

      “And we do belong to some of the same clubs, such as Sabine Hossenfelder’s and Mike Smith’s.”

      But that’s just it. Sabine and Mike are (together) in a different “club” from mine when it comes to theories of consciousness. If you read the discussions between Mike and me, you’ll see we have opposite views on key aspects.

      Sabine doesn’t acknowledge the “hard problem.” There was a very long discussion on one of her posts with people going back and forth over the issue. Sabine and I went a few rounds one night, and, no, she and I are definitely not in the same club on consciousness. 🙂

      “For the moment however it would seem that I’ve overstayed my welcome. For that you have my apologies.”

      I wouldn’t put it that way, and no apology necessary, there is no offense. Per your first epistemological principle, I haven’t found anything useful so far in your metaphors. To me they are too vague and seem to reveal misunderstandings about the way things work.

      My “club” (and Mike and Sabine would share in this, I’m sure) requires specifics, details, and understanding of the material involved. The door to that club is always open.

  • The Dozen Year Charts | Logos con carne

    […] Transcendental Territory (1,033): This post is a summary of a series of related posts preceding it. The series topic is computationalism. This post discussed the possible issues with transcendental numbers with regard to trying to compute consciousness. As usual, I’ve no idea why this post attracts hits but the others in the series don’t. […]
