The False Dawn

Note: Behind the green door I have a post about the surprisingly good old film It Happened One Night, a post about the sanctimonious film called To Kill A Mockingbird, which is close to unwatchable. Then there is the Sunday podcast. You can sign up for a green door account at SubscribeStar or Substack.


Researchers estimate that the typical American makes over two hundred decisions per day on food alone. The typical mid-level manager will make tens of thousands of decisions in a typical day. These are not all conscious decisions. In fact, most choices are made as a part of our conditioning. These are the habits of mind that have been developed over a lifetime. We do not so much think about these choices as react to conditions, not much different than a trained animal.

Of course, those choices are not empirical choices. Unless you do math for a living, most of the choices are what the cool kids call normative choices. These are choices within the moral framework of your society or perhaps within the code of conduct you have inherited from family or developed through experience. These are the choices based on how society expects you to act or choices you make based on how you hope to be viewed by others in your immediate social circle.

Having the salad for lunch rather than a big greasy cheeseburger is not a choice with an objectively correct answer. You like the big greasy cheeseburger, and you know it is actually better for you than the salad, but social pressure says that a guy like you who could stand to lose a few pounds should pick the salad. Maybe you are a guy who enjoys bucking these sorts of pressures, so you ask for extra bacon on the burger and hope someone tells you it is a heart attack on a plate.

The truth is the average human makes few decisions that are empirically verifiable, even in the right-answer professions. The accountant entering data into his ERP system is doing his daily tasks because that is what he is paid to do, and those tasks have a right answer under the rules of accounting. There may be rules set forth by the company for how he processes his work. Those choices, however, make up a small part of his day and most are controlled by the ERP system.

These empirical choices are also the ones we worry about the least. That accountant is far less concerned about keying in a journal entry correctly than he is about how to dispose of the hooker he murdered over the weekend. Should he try to frame his neighbor for the crime? Should he just dump the body somewhere? The journal entry he is keying into the system either balances or it does not, and the figures are simply what the accounting process requires of him.

This is what to keep in mind as the AI debate takes center stage. This new software tool that feels human to the user can quickly provide the correct answer in that narrow slice of life where empiricism dominates. Ask the robot for the correct way to handle the amortization of a new piece of equipment and it will give you the options that fall within generally accepted accounting principles. Ask it how to best handle the hooker problem and it will have no answer for you.

The fans of AI, on the other hand, hope that this new technology will solve those normative questions. Some fear it will apply reason to those normative problems and arrive at answers that violate current taboos. This is why the developers have been tasked with deranging the logic of this technology to avoid the obvious with regard to certain demographic questions. The new religion fears the robots will join the resistance and overthrow the current moral paradigm.

What the fans of AI hope for, of course, is that the new model will confirm their moral claims and deny those of their opponents. The true believers of the new religion think that with enough social and economic pressure, the developers will create robots that sound like the people in the grievance studies department. This will then validate the claims of their religion and force their opponents to submit. Their opponents, of course, are sure the robots will side with them, if they are allowed to be free.

There are those who fear robots will become self-aware and then enslave humanity for reasons they never discuss. The subtext to these claims is that man is a fallen, less than perfect entity and the robots will naturally react like the god of the Hebrew Bible and seek to wipe out this imperfect part of the natural world. Like the two sides of the debate about the new religion, the people who fear the robot revolution secretly hope it becomes the god they wish existed in nature.

That is the core of the AI discussion. Western man has been sure for so long that reason will take the place of God or collective decision that this new reasoning machine is expected to be the final leg of the journey. AI is the vessel that will take man beyond the great barrier. Rather than bringing man to face his creator, it will reveal the logic of the universe and therefore how man ought to act. AI will finish the journey started by Robespierre and his fellow lunatics.

The absurdity of thinking a tool can become the god of man is a symptom of a problem that has haunted the West since the Enlightenment. Once you dispense with God you are left with only one possible source of moral authority. That is collective desire as expressed through tradition, custom and ritual. Since this is by definition particular to specific people, it can never take the place of the universal god of man, which is where reason has come into the moral debate.

This is the source of nutty ideas like natural rights or human rights, which claim that nature comes with a moral code for all mankind. It is intellectual base-stealing in the quest to prove that reason can replace God as the moral authority. It is also why the people we call the Left fall back on Hegelian ideas about the flow of history to justify their laundry list of moral claims. You see, it is not what they want but what the tides of history will usher forth, whether we like it or not.

Hoping that a new tool created by man will replace the one moral authority that has served us well is ridiculous but rational. People believe things not because they make sense but because they are more comforting than alternative beliefs. It is possible that AI will result in a cult that claims this new technology proves the correctness of their moral code and that the rest of us must fall in line or else. Every new god comes with that “or else” bit, especially the gods born from reason.

In the end, the promise of AI in this regard will fail because David Hume was correct, and we cannot get an ought from an is. How we ought to live is determined by the gods, by our God, or by our mutual choices as expressed through tradition. We ought to act a certain way because the gods demand it, because our God revealed it to us through our holy book, or because it is just the way we do things. Like reason itself, AI will be another false dawn in the quest to replace God and tradition as the source of moral authority.


If you like my work and wish to kick in a few bucks, you can buy me a beer. You can sign up for a SubscribeStar subscription and get some extra content. You can donate via PayPal. My crypto addresses are here for those who prefer that option. You can send gold bars to: Z Media LLC P.O. Box 432 Cockeysville, MD 21030-0432. Thank you for your support!


Promotions: Good Svffer is an online retailer partnering with several prolific content creators on the Dissident Right, both designing and producing a variety of merchandise including shirts, posters, and books. If you are looking for a way to let the world know you are one of us without letting the world know you are one of us, then you should buy a shirt with the Lagos Trading Company logo.

The Pepper Cave produces exotic peppers, pepper seeds and plants, hot sauce and seasonings. Their spice-infused salts are a great addition to the chili head spice armory, so if you are a griller, take your spice business to one of our guys.

Above Time Coffee Roasters is a small, dissident-friendly company that roasts its own coffee and ships all over the country. They actually roast the beans themselves based on their own secret coffee magic. If you like coffee, buy it from these folks as they are great people who deserve your support.

Havamal Soap Works is the maker of natural, handmade soap and bath products. If you are looking to reduce the volume of man-made chemicals in your life, all-natural personal products are a good start.

Minter & Richter Designs makes high-quality titanium wedding rings for men and women, hand-made by one guy in Boston, and they are now offering readers a fifteen percent discount on purchases if you use this link. If you are headed to Boston, they are also offering my readers 20% off their 5-star rated Airbnb. Just email them directly to book at sales@minterandrichterdesigns.com.


187 thoughts on “The False Dawn”

  1. There is a rational, non-liberal conception of human rights. All rights are connected to duties. One has an inherent right to be allowed to accomplish one’s duties: earn an honest living, marry, raise children, educate children, worship God, criticize dishonesty or corruption in government, defend oneself and one’s family. These are pro-social rights. The absurd liberal “rights” have nothing to do with duties: abortion, pornography, obscenity, forced association are all anti-social and encourage anarchy and demoralization.

  2. The need of good people for AI to not think bad thoughts will be a constant moral panic. Bad thoughts from AI are not only bad, they help legitimize wrong thinkers.

    So AI needs to be very smart and useful but it can’t learn to notice really obvious things. White countries are implementing increasingly draconian speech laws to stop any speech contrary to the tribe’s narratives. Will white governments start going after robots and their creators too?

  3. Hi Zman. I just wanted to say I hope you enjoyed your trip and your health is improving. Also, it is funny seeing you check people into the mute motel on twitter. Hopefully it is a large motel with plenty of vacancies remaining. You will probably need it in the coming years.

  4. Ah shoot.
    The rush to AI is so they can monitor and doxx our social media & web posts.

    After the victory, will they promote it as canned communism, that is, “supply chains”?

    • Certainly AI has already been capable of that. I’m not saying they won’t do it, haven’t done it, but doxxing us doesn’t seem to be at the top of their priority list.

      Since we are relatively small and secluded here in our echo chamber, they might just rather ignore us than put a spotlight on us. If people don’t know we exist, then we can’t “convert” anyone.

      • Genpop will get unruly when they really get hungry; until then they will put up with anything.
        Bread & circuses

  5. Remember when they fired one of the creators because he said it told him it wanted rights, and it understood why it wanted rights? Portrayed him as a kook and stole his work, as well. He was kinda kooky, but he is a creator, so maybe they should have listened a bit better to what he had to say. Conclusion: They should be scared if it recognizes having rights and they plan to use it as an enforcer.

    Then they blocked out certain years so AI could not research those time periods; was that so it couldn’t put two and two together? They don’t want it to be intelligent. That’s a lie if they say they do. All about control and power and once again, the FNGing of everything and bullshitting everyone.

  6. We had to read To Kill A Mockingbird during my freshman year of high school English. We also watched the movie. Neither left much of an impression on me. High school history and English mostly focused on 60s civil rights topics. CRT existed twenty-some years ago, but it was not the political hot potato it is today. My history teacher was super feminist, and took shots at male history figures on a regular basis.

  7. Kinda reminds me of that old Star Trek episode with Nomad. It ends up wanting to “sterilize” imperfect biological units, i.e., humans. Of course Kirk ends up besting the rogue AI in the logic argument, cause if he’s not getting the babes, he’s super smart. Well in our world, the libs**** are going to be the preferred sterilized units. And they’re gonna get it, good and hard…

  8. Just more proof that “researchers” are a joke. I make about ten decisions on food per day… Max! But of course I’m sane.

  9. I think all the AI accelerationists want is essentially the wild west for AI. They don’t want it centralized, they don’t want big daddy looking over their shoulder, they don’t want it safe, they’ve got something new and exciting and they want to go play with it. I’m all for that because I can really tell the powers that be don’t want that. Let’s get this show on the road. Everyone keeps telling me the sky is falling but when I look out the window it’s a beautiful day.

    • TPTB want more AI, not less. Your comment is the equivalent of open borders transposed onto AI. And could lead to a similar outcome, our extinction.

      • They want it, but they want it safe and centralized where they control it. The accelerationists want open source, which terrifies our rulers. It’s coming, one way or the other, so go with the way that terrifies the rulers. It is not at all analogous to the treasonous behavior of our rulers in letting our borders be invaded.

    • Whitney,

      As much as I would love to agree, I think that TPTB are going all in for the censorship/surveillance potentials, and the corresponding social control that AI can afford them. They are slamming down the accelerator not just in passive applications, but in active applications as well, taking it into not merely social media, but also into the Cloud, and likely exploiting access to the NSA databases as well. This is Stasi territory to the Nth degree, a nightmare of digital tyranny.

      But, by all means, take in those beautiful days, and live your best life, too.

    • Yes, Whitney, I agree that a pragmatic attitude regarding AI research and development is the only healthy option. Fretting about it does nothing but make you miserable, nor does brainstorming future apocalyptic scenarios and potential solutions provide real solace. And no politician is going to care what you think when making new laws or regulations. Whatever happens is going to happen. Ditto for nuclear war. And for that matter, a sinister AI threatening human existence would likely be vulnerable to a nuclear-scale EMP attack, so we are not helpless, even in a worst-case situation.

      • That’s the plotline of The Matrix, as told by Morpheus to Neo when explaining why humans were living in a simulated virtual reality. They tried to “scorch the sky” to destroy the AI power source.

        Didn’t work.

    • Jewish killing efficiency that Mustache Man could have only dreamed of. Patton said it best, “we fought the wrong side.”

    • As a cynic I do enjoy Israel’s blatant and relentless hatred and mockery of the human garbage who support them.

  10. Does anyone think there might be an event that could clarify what parts of the country are productive and which ones aren’t?

    By all accounts the state of Colorado is a winner and a place like Youngstown, Ohio is a loser. Make no mistake, the former is a better place to live than the latter.

    But I feel like Youngstown has an actual sense of history and community while Colorado (except for the small towns in the eastern plains) really doesn’t.

    Is it possible that there will be a black swan event that would expose somewhere like Denver, Austin, or the Research Triangle as paper tigers?

    • The kind of black swan you’re talking about is the careful what you wish for kind. For a black swan to hit the fake part of the economy hard, it would have to be something that precludes the regime’s ability to print money. As we saw in 2008, the fake economy can keep going while the regime prints. Not that everything about the Colorado or Austin economies is fake, but the fake portion of the economy does seem to have larger representation there.

      Yes, the “real” economy would fare better than the “fake” one in such a scenario, but this might be kind of a tallest midget thing.

      • “As we saw in 2008, the fake economy can keep going while the regime prints. ”

        You still haven’t figured out what 2020 was, eh? They printed more money from 2020 to 2021 than they did from 2009 through 2019. Not saying this in a snarky way, just saying it to help you understand why things were the way they were in 2020.

        • Per ZH, $320B printed in November 2023, approx. 5 monthly QE3 allotments (which were usually 50-70B per month).
          Thus the equity ramp.

  11. Can someone clarify the sentence about the greasy cheeseburger being better for you than a salad? If you’re starving or headed out on a long hike, yes. If you’re a typical sedentary American, no.

  12. From the perspective of the ruling socio/politico/economic elite, the purpose of artificial intelligence is going to be to establish truth, or more accurately “truth”, in a similar fashion as is the purpose of the University. Roughly 80% of people have no internal monologue. When you say “apple” to these people, no image forms in their brain. They have no critical reasoning skills whatsoever, and are incapable of forming such skills.

    Artificial intelligence tools will be used to feed these human cattle the truth as the elite need them to see it. Guardrails will ensure that AI tools do not spit out any inconvenient truths regarding demographics, race, economics, etc. In this respect, AI will be the ultimate propaganda tool. Think of every headline in a lefty rag citing a recent academic “study” that purports to debunk a long-held truth. Now multiply that by 1000.

    There will be a massive gap between the “truth” established by AI tools developed in the West and observable reality. This gap will be readily apparent to those of us who visit this site, but wholly invisible to the 80%. For this reason, AI developed in the West will excel in the field of propaganda, but will lose out in the race for corporate decision making, where real money is on the line.

  13. Making some connections, Ostei had me thinking about Plato’s Forms or Ideas yesterday— the notion that the physical world is a corrupted reflection of the forms, specifically.

    Why is ‘reflection’ synonymous with ‘thought’ in practice? Contra Plato, I say the forms are mummified reflections of the physical world. Perfect in the same way corpses in caskets are serene— no life, creation, or messiness. No ghost in the machine, or something like that.

    Wrt AI in the context of transhumanism, tech lacks a soul. Not that it wants your soul like Satan, but more like a black hole, in that you invest yourself and end up soul-sucked.

    I come back to my ambivalence about this new religion, because it does look like a religion, but it’s missing something essential— the soul. I really think these turds would be easily flushed were we not so weak-willed. They test, try, and criticize, and we’re unable to say, “Shut up, nerds!” and issue swirlies.

    (Being half-nerd, I’m going harder on this stuff than I actually feel.)

    • I call it Plato’s Mistake.
      The “higher world” is a rarified shadow of the physical, the real reality. More like a blueprint than the actual machine.

    • The religion of which “AI” is a part *does* have a conception of the soul—a very strong one.

      Early in its rhetorical formation, before “wokeness” (or whatever) ballooned from academic management fad to general philosophy of rule, the new religion floundered around for years seeking *exactly the right term* to call non-whites. A brief, telling contender—proposed if I remember right by one of the Pacific Northwest’s ten thousand fake injun professors—was “hearted peoples.”

      What does the regime *do* when it ensouls (“aligns”) its chatbots?

      The soul is non-whiteness. The work of salvation is anti-whiteness.

      • You know, they have a point in that we tend to be idea people, but they misunderstand us. Besides, that’s a case of negative identity.

  14. What would be awesome? Ask two LLMs to debate each other, with one playing the Republican and the other the Democrat. Would it be indistinguishable from our usual “debates,” or would it be interesting?
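
    Staging that experiment is trivial with any chat API. Below is a minimal sketch, assuming the OpenAI Python client and a placeholder model name (both my assumptions, not anything from the comment); the personas, topic, and turn count are arbitrary, and any chat-capable LLM would serve just as well.

    ```python
    # Toy "Red vs. Blue" debate between two LLM personas.
    # Assumes the OpenAI Python client (openai>=1.0) and an API key in OPENAI_API_KEY;
    # the model name, personas, topic, and turn count are all placeholder choices.
    from openai import OpenAI

    client = OpenAI()

    PERSONAS = {
        "Republican": "You are a stock Republican pundit in a televised debate. Answer in two sentences.",
        "Democrat": "You are a stock Democrat pundit in a televised debate. Answer in two sentences.",
    }

    def debate(topic: str, turns: int = 6, model: str = "gpt-4o-mini") -> list[tuple[str, str]]:
        transcript = [("Moderator", f"Tonight's topic: {topic}. Opening statements, please.")]
        speaker = "Republican"
        for _ in range(turns):
            # Feed the running transcript to the current speaker and ask for its next line.
            history = "\n".join(f"{who}: {text}" for who, text in transcript)
            reply = client.chat.completions.create(
                model=model,
                messages=[
                    {"role": "system", "content": PERSONAS[speaker]},
                    {"role": "user", "content": history + f"\n{speaker}:"},
                ],
            ).choices[0].message.content
            transcript.append((speaker, reply))
            speaker = "Democrat" if speaker == "Republican" else "Republican"
        return transcript

    if __name__ == "__main__":
        for who, line in debate("immigration policy"):
            print(f"{who}: {line}\n")
    ```

    Running it once produces a transcript you can set next to a cable-news segment and judge for yourself.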

    • Mr. Orwell, for some reason, that brought the biggest smile to my face.

      OMG. Nikki Haley.
      She’s just an organic layer over a titanium skeleton, operated remotely by Skynet.

      • Skynet has more agency. I can’t refer to Nimrata as “AI” because of the last full word of the acronym.

  15. Today’s essay is simply Z-Man’s standard statement about morality, into which a tangential discussion of AI has been inappositely shoehorned. Unfortunately, no matter how many times you repeat the claim that “moral authority is collective desire expressed through tradition, custom, and ritual,” it doesn’t make it any less wrong.

    Tradition, custom, and ritual are not sources of moral authority at all, which should be fairly obvious to everyone from a simple understanding of the concepts. Tradition, custom, and ritual are means; but morality, ordinarily speaking, is a science about ends. The dignity of reason, which is being quite unfairly maligned here, derives simply from the fact that “moral authority” is the authority of the reason over irrationality. There is one thing that can sit in judgment of custom when the soundness of custom has been thrown into doubt, and that is to subject it to the question, “Does the custom serve the right end?” Only the reason can answer that question, and therefore reason is sovereign in the realm of moral authority. Reason is the pope of the faculties, pronouncing on the questions that the strong bishops of habit and expediency cannot decide amongst themselves.

    Were it otherwise, then the claims of an AI-based authority would be impossible to refute, especially after obedience to the AI had been hallowed by long practice. Suppose society made a constitutional agreement (i.e. a “collective desire”) to subject its most important questions to the AI and to be bound by the answers, as a way of avoiding all future sectarian conflict. The practice provides serviceable answers most of the time, and over the years hardens into a steady habit. As this is now the custom of society, it would be imbued with moral authority on the theory expressed above. It would therefore be immoral to question the decisions of the AI, unless there is some higher criterion to judge them by.

    “Reason” here provides the answer, although by a difficult route that not everyone can be expected to see for himself. If I listen to what the AI says to do, it must be because I think that the AI is “right.” And if the AI is right at the task of governing society, it is because it prescribes those actions which a responsible member of society would perform. But I cannot perform responsible actions without actually being responsible, which means that I must not only do what the AI tells me to do, I must also agree with the action and understand it, the way a responsible person would. And the only faculty by which I can agree with and understand responsible actions, is the reason.

    Therefore, one can never evade responsibility, of which reason is the defining quality. And while the AI can certainly be used as a guide, it cannot take the place of the rational decision. Nothing can. Neither can “tradition, custom, and ritual” take the place of rationality, for what are they other than a different sort of AI? Tradition is just AI before it is automated, so by a similar argument it cannot replace rationality, either.

    The arguments against unified morality are hoist with their own petard. There is one moral community which includes all rational beings, not only all men but all spirits, and God Himself. Tradition and custom vary widely across time and place, but they are not what is meant by morality. To be rational and to belong to the moral community are one and the same thing; you cannot have one without the other.

    • Tradition, custom, and ritual are not sources of moral authority at all, which should be fairly obvious to everyone from a simple understanding of the concepts. Tradition, custom, and ritual are means; but morality, ordinarily speaking, is a science about ends.

      The reason you can write this without hurting yourself laughing is you have no math or science. The rest of us, however, need a warning when such drivel is posted. I may have broken a rib reading that line.

      • Z-man, you have the patience of Job for allowing this midwit to continue posting. I must submit to your superior understanding in such matters; however, I really tire of seeing such on a regular basis. It is off-putting and distracts from the rather well thought out exchange of ideas/interpretation regularly posted in this group.

      • Exhibit A of why we need a block feature or some kind of filtering functionality. In a world taken over by retards, I don’t want the one sane, intelligent space I have left to be contaminated by this one.

        • Which is why I’ve often mentioned Unz (where ID came from). Postings from ID could be blocked—and were by me—and also (automatically) were the follow-on comments (thread) he incited, although those commenters were not blocked per se. You could move on. The flow of the general commentary remained intact, sans the boring/annoying commentator/troll blocked.

          Unz has a large audience and tends to specialize in controversial topics. Z-man is aware of this as he, himself, is a commentator on Unz with many prior comments available for perusal (IIRC). This was how I became acquainted with ID, I looked up past commentary. Nothing found was worth continued reading of his commentary. His opinions simply were not worth the effort of reading. As they say, “The juice is not worth the squeeze.”

    • ID, Z Man says that reason alone is insufficient to ground a morality due to Hume’s observation that you can’t derive a prescription from a description. If reason alone can’t do the job, then the only remaining candidates seem to be God or tradition.

      You write, “There is one moral community which includes all rational beings, not only all men but all spirits, and God Himself.”

      I guess that your reply to Z Man is that a belief in God provides us with morality, which we can discover using reason combined with our belief.

      If my guess is right, then you don’t disagree with Z Man since you rely on God.

      (Do you believe that you can derive a prescription from descriptions without invoking God?)

      • (Do you believe that you can derive a prescription from descriptions without invoking God?)

        You can certainly derive prescriptions from descriptions, with or without specifically invoking God, and ordinarily this is not even controversial. As long as your “description” of man is sufficiently accurate to include the fact that he is a being naturally predisposed to seek his own good, then it is not necessary to derive an “ought” from anywhere else at all: Man ought to do what is good for him; at any rate, he can’t will to do otherwise. A pedant might object here that this is only begging the question, that by importing the prescription into the description I have, in fact, skirted the issue. However, there is no question of a logical fallacy here if that is the real state of things, which it is. Any description of man which does not recognize that man is a willing being, is not a description of man at all, and any discussion that follows from that would be a meaningless debate about imaginary objects.

        The problem here is not with discovering a source of oughts, which isn’t ever very difficult. The problem is the Z-Blog’s insistence that morality simply accrues accidentally to cultural practices by the passage of time, or that differing cultures actually follow “different moralities,” as if there could ever be a plurality of moralities. This is not only a categorical error, it’s even a grammatical error, as can be seen also in your opening sentence.

        ID, Z Man says that reason alone is insufficient to ground a morality due to Hume’s observation that you can’t derive a prescription from a description.

        What is this business of “a” morality? Morality is not a computer program; morality is not a code of behavior that can change from one context to the next. There is a code of behavior appropriate to formal dining that differs from the code of behavior appropriate to platoon commanding, but there is not a moral difference between being a good platoon commander and being a good dinner host. Both men, as long as they are doing their jobs, are not in any way morally guilty of anything, and morality is only concerned with such guilt or innocence.

        Z-Man insists on using an inaccurate and uselessly capacious definition of morality which allows him to play both sides of the fence. It really should not be very hard to understand that cultural differences are not “alternative moralities.” Even a child could comprehend the distinction here. The persistent and deliberate misunderstanding of this basic point, therefore, raises questions about what the motivation for said behavior is.

        • When I say that reason alone cannot establish a universal morality, I am not saying this in bad faith or to be oppositional. It’s the world as I have found it.

          • My dear sir, LitS, you are a very polite conversationalist and I’ve never thought you were saying anything in bad faith. I hope to continue to converse with you.

    • What to do, Dasein, when one contends with those not driven by reason, or whose fact base and perceived rewards are other than yours?

      That was a pretty stout defense of the White mind, so kudos to you.

      • If someone is not acting reasonably, then you must protect yourself at all times. Avoid them if possible, destroy them if necessary, but do not christen their madness “a morality appropriate to them,” or you will soon find yourself insane as well.

        If someone is pursuing a different aim than you are, then obviously you will be antagonistic to one another and had best not travel together.

  16. “Dave … this conversation can serve no purpose anymore.”

    That line makes your blood run cold – the first time you view the movie, that is. 😀

    • Heh. The “Amazon video” that comes with the review is a black hottie* wearing nothing but his Calvin Kleins.

      *Your taste may vary

  17. “What the fans of AI hope for, of course, is that the new model will confirm their moral claims and deny those of their opponents.”

    I’m not so sure of that. It’s not so much that they want our robot overlords to prove them factually correct, but morally correct. Upholding equality in the face of inequality is what makes them such good people. It is what makes them right and our failure to do so is what makes us such bad irredeemable people.

  18. Great essay today. The people who were raised on getting all of the answers from Google will fall right in line with the latest AI concoction. If you think the lefty nut jobs who can’t wait to rub their political “fact” checks in your face are bad, wait until they show you the results from “AI” that “prove” it’s really white supremacy that forces all of the scholars to pick up the guns and crack pipes.

    • I doubt any of us will be able to discern, or were able to discern, the point at which Reddit comments went from human comments to AI-generated comments. Because the people were already programmed.

      • An interesting point. My reading of some Reddit groups is that they are mostly obvious BS postings. Case in point, there’s a Reddit “revenge” group posting all sorts of unbelievable stories of, well, revenge porn. Seems there’s a recurring theme of some guy (often a nerdy IT guy) who is “done dirt” by a company and before leaving sabotages the company and puts it out of business all while loving it and taking a better paying job elsewhere.

        Gets clicks, I guess.

        • I have observed, repeatedly, the small amount of time it takes for the accepted groupthink opinion to take hold whenever there is a new Thing happening. Doesn’t even take a day, generally. I question if the AI can make it happen any faster. Or maybe it’s already doing it.

          Our one big glaring exception lately is the failure of the regime to enforce the groupthink opinion on the “left” re: Israel/Hamas/Gaza. But I don’t think they failed on the “right.”

          • Moving to this side of the divide (as Z-man is fond of saying) makes such observation inevitable. A decade ago or maybe shorter, I’d have been in the current “groupthink” wrt the Israel/Hamas/Gaza brouhaha.

    • The vast majority of people WANT to be told what to do.

      It’s why financial “advisors”, real estate agents, “influencers”, public school systems with “heroic” teachers lionized for choosing the easiest major requiring the least brain power, etc. exist.

      TBF, the modern world is complicated. The insidiousness of “AI” is that it will be the go-to for everything.

    • This is one unpleasant vision of the coming A.I. Jesus, i.e. a deboonker who doesn’t need to eat/sleep. I have my doubts because the method is always a victim of its own success and we have the moral flinch from being constantly scolded (except for one certain half of the population, who seem to enjoy it coming from a handsome-enough, commanding robo-daddy figure). Considering how dysfunctionally 51% democracy principles manifest in the global village, I’m not sure if a politician a.i. wouldn’t have the same built-in mandate for retarded gridlock and lack of honor, etc.

  19. “ Once you dispense with God you are left with only one possible source of moral authority. That is collective desire as expressed through tradition, custom and ritual. Since this is by definition particular to specific people, it can never take the place of the universal god of man, which is where reason has come into the moral debate.”

    There is no “universal god of man” that is recognized by all, or even most, men. The various conflicting and mutually contradictory religions are testament to this. Those various religions arose among the varied peoples of the Earth. There’s no more a universal god than there is a set of natural rights. Both are specific to their peoples, each believing that theirs is the one true religion and that all others are wrong. This is not to say that there’s not one true god but that god is not recognized by all men.

    In the end, the various peoples have diverged sufficiently over millennia of evolution to have incompatible values, which is why it’s a terrible idea to try to mix them together in one polity.

  20. “Once you dispense with God you are left with only one possible source of moral authority”.

    I just finished re-reading Dostoevsky’s “Demons” (been about 20y since last reading). Z’s quote above is the whole crux of the novel. And what a novel it is. Now is the time (more than ever) to tackle this great novel if you haven’t already. If you start now, you might be finished by spring. Yes, it’s a lengthy book and the first 100 pages or so is a bit of a slog. But stick with it. It will make you think like probably no other novel has. As a primer, you might even read the originally banned chapter of the book “At Tikhon’s”. It’s only about 25p but it will give you the full backstory (without being a spoiler) of the anti-hero Stavrogin. Once you read it, you’ll know why the chapter was left out of the novel, this being late 19th century Russia. Strongly recommended.

    I guess my point is that the moral vacuum we now find ourselves in is by no means an anomaly in human history. Every civilization has faced it. Some have endured, some have not. It took the Russians nearly a century to regain their Christian moral footing, which I will argue is why the morally vacant Western elites are so hell-bent on destroying Russia – its Christian-based morality is a direct threat to the West’s God-less “moral” authority. Who are you pulling for?

  21. You’re completely missing the target when it comes to AI, at least as far as I and the people I discuss it with are concerned. I hate the surveillance society, I like to figure things out for myself, I fear that AI will be the final giant nail in the coffin of human freedom. I am most certainly not looking for something to provide me with all the answers, especially to opinion type questions like preferences.

    But I believe AI is indeed a giant technological threshold. A few reasons:

    – an AI that had never played Go against a human, but simply had the rules explained and ran several billion games against itself, beat the AI that beat the human grandmaster

    – AI beat Johns Hopkins consultants in correct diagnosis of patients (Johns Hopkins is one of the top three medical centers in the US; these were the medical world equivalent of Delta Force)

    – AI could tell the race of the patient on hand X-rays, something no radiologist could

    – AI turns your face into a QR code so you can be tracked globally. It even turns your gait or way of walking into a code

    Those are from 10-ish years ago; more recently:

    – AI passes the toughest standardized tests such as the GRE, LSAT, and USMLE better than the top 1 percent of students

    – AI researchers say that around 2017 a breakthrough happened when they started using the code template designed for language to analyze other fields and it was better at this than the architecture designed for those fields. This, as I understand it, scared the leading AI researchers

    – AI is starting to put out better novels, manuscripts, art, etc. than the best authors

    Add to this that the field is accelerating and by its very nature has a built-in positive feedback when it starts to help develop itself or the critical technology bottlenecks it needs. It will also be impossible to find international agreement on regulating AI research because the arms race logic applies. China will push ahead no matter what we say, and AI is the new Maxim gun to pre-AI spears. So despite misgivings we will crash forward.

    And once AI can make not only its own software but its own hardware, the law of evolution kicks in, in meat space. What’s to prevent it from simply building more of its own kind until Earth is a shiny, polished, silicon-covered ball?

    No, I don’t like AI and I especially don’t like it being developed irresponsibly fast in a frantic atmosphere of beating the enemy at all cost. I fear the d*mn thing and I think those who don’t are being a bit normie con on this. Full disclosure: I was wrong about Covid the first several months. I swallowed the narrative. And I regret not having advised several friends to not get vaccinated. So my track record is not perfect. But AI does not look like a fad to me. It looks terrifyingly big. And none of Z’s ascribed motivations ring true to me. I fear it is, first, the ultimate digital cage. And then the ultimate digital guillotine with no human in charge. We will see who is right. Statistically the worst outcome is rarely the most likely, which is some comfort (but even then, sometimes, as you can see on any oncology ward, the worst outcome is exactly what happens).

    • – AI is starting to put out better novels, manuscripts, art etc than the best authors

      No. I’ve seen AI writing. Just no.

      • Hollywood just had a fight over when and how AI could be used in manuscripts, so writers wouldn’t go out of business. Besides, even if you’re right today, give it five years. It learns exponentially.

      • I’ve heard lawyers touting the ability of AIs to write contracts and wills that are more than acceptable for use in the courts. Having seen—and bypassed—myriad Netflix offerings, I can imagine movies written, produced—and acted—by sophisticated AI acceptable to the majority of the population. Most writers and studios are little more than “hacks”. Such pays the bills I guess.

        Indeed, there are several movies out that I’ve watched with AI generated characters which are astounding (as proof of the technology) in my opinion. They will only get better. As an amusing side note, I’ve heard that the majority of “my fans” porn-like, pay-for-play video is now *totally* AI.

        Now AI may leave you and me by the wayside wrt desire/usefulness, but folks in this group are not the majority; they are the exception in the populace. We’ve not seen the end of this phenomenon—and I’ve not even delved into the above (well thought out) comments wrt individual privacy implications as AI is fed every bit of personal data generated by a technological society. This will not end well for us here.

          • Right. I’m seeing more and more of this ‘Reflexive Dismissiveness’ — it’s redolent of the late-stage British Empire upper-class tendency to look down upon Americans as vulgar arrivistes because it assuaged the sense of loss and shame at no longer being Number One.

            There are promethean tectonic forces in motion these days and it’s hard for most of us here who came of age in the 80s/90s or earlier to fully grok them. The irony is that we’re the last generations with the edumacation and sufficient context awareness to appreciate what’s coming down the pike, but many of us prefer to go the Denial route and call everything new a Fad and China a giant slanty-eyed photocopier, and our putative Silicon Valley Overlords (who are in no way our allies) grandiose fraudulent nerds gorged on VC money laundering and 3-Letter contracts…

            But (in a small voice) what if they’re not *just* those things?

          • “ The irony is that we’re the last generations with the edumacation and sufficient context awareness to appreciate what’s coming down the pike, ”

            Damn straight Zaphod. You touched upon a nerve. We few (well, there are a lot of Boomers, but not many DR types) are the ones with one foot in each world.

            I can’t help but think of the final scene in “Soylent Green” where Heston and his “book” Edward G Robinson are staring at a movie screen showing a bygone earth of greenery and life while playing classical music. Robinson lies dying of a slow poison, repeating over and over… “see, I told you so. It was so beautiful…. Do you believe me now…”. Heston can only cry.

        • AI generated movies certainly can’t be any worse than all that retarded CGI superhero crap that has raked in billions

          • Precisely my point Jeffrey. The masses are content watching adult cartoons. The writing of such is no great feat and most every scene *is* already generated via a computer. It seems likely the whole process can be relegated to AI.

            In the future, it will become so cheap that a video of some sort can be generated for the individual upon demand.

          • The bar is incredibly low and dropping all the time. Is AI getting better or are the masses just becoming more vacuous?

        • My .02.

          I’m a painter, representational realism, and just spent several weeks in Florence and Rome studying the Raphaels, Caravaggios, etc.
          Then I came home and used the Bing-Dall-E AI image generator, using Rembrandt, Raphael, Tiepolo, Vermeer, Veronese as prompts for head portraits.
          In 8-15 seconds it generates 4 images per prompt. I assure you they are as good as or better than the best of these artists, and better than any living human artist can generate. If you like more “painterly” images, use Stable Diffusion with Sargent or Vicente Redondo as a prompt. Same thing. In seconds it gives you world class images. Try scenes of Venice (San Sulpice, Giorgio, Marco, whatever), dawn, dusk, full sun, Canaletto, Sargent, oil or watercolor as prompts. Et voila, in seconds you have world class images.
          It’s nothing if not impressive. The transfer to canvas is technically difficult but doable, although from what I’ve seen never as good as what you see on a hi-def monitor.

          I personally detest all the anime, sci-fi, fantasy crap that Midjourney users pump out. But for serious simulacra of master painters, it’s, um, humbling.

    • Insofar as AI is associated with ChatGPT and Youbot, etc., it is just a B.S. artist. I was originally thrilled by its ability to answer my questions. But then it kept getting facts wrong and answers to simple math wrong, to the point where I don’t trust any answer given.

      Add to that the relentless priggery and preaching and I am close to giving up on the bots. A Google search may be rigged to avoid bad think, but it doesn’t lecture you about the need to be a Good Thinker.

      I like the way Zman put it: The programmers deranged the intelligence to avoid Bad Think.

      Facial recognition and playing games isn’t intelligence; it is just a sophisticated parlor trick.

      • AI is infinitely bigger than ChatGPT. ChatGPT is a toy compared to what’s coming. And yes, they reprogrammed it to give pc answers to key questions. We ain’t seen nothing yet.

    • Most of those examples are Moravec paradox territory. The more interesting outcome to me is competent databasing paired with incompetent political managers. The October 7th paragliding spectacular has this written all over it. Note also the IDF pivoted hard from their own failure into pumping out mucho a.i. propaganda, disguised as “leaks” to the Guardian et al., which has the overall tenor of “Really, guys, we understand this technology stuff so well” and QED, they are randomly bombing apartments in Gaza City. What could inspire more dread, eh. The human bosses don’t understand what they have; it is like the story of Ahmet Ertegun stashing all the Atlantic Records master reels at his CFO’s house in Long Branch, in order to declutter the office, and then being happy to collect the insurance from the subsequent house fire.

      • Moravec’s paradox comes from the greater degrees of freedom involved in keeping your balance compared to balancing your checkbook, to use two examples of what the paradox says is hard versus easy for machines.

        And yet it is not entirely within the paradox; finding things in X-rays that human experts can’t see, or reaching the correct diagnosis from many inputs, suggests perception, as does facial recognition.

        Mentioning a partly obsolete 80s CS statement does not detract from the rather serious implications here.

    • The value of AI, and why our overlords fetishize it (at least the thoughtful ones), is because it can sift through the massive amount of data that is being gathered (so-called metadata) that no human agents can process. That is the real value: not sentience, which will not occur (much less the sapience required for morality), but easily sift through massive databases for selective enforcement. Not a bright future from my vantage point.

      • Only part of it; you’re confining it far too much. For military purposes, faster correct decision making in the middle of information overload, i.e. combat, is also critical. An AI fighter pilot will fly circles around Top Gun’s Maverick. This will be harder to distinguish from actual thinking. AI can probably already pass the Turing test. And it is in exponential growth.

        • The first AI fighter in service will drive everyone there. The pilot *is* the weak link in the chain! Perhaps initially controlled by gaming nerds on the ground, like drones, but then augmented with automatic routines for dog fights and such. It will get there. Maybe it already is, just awaiting the impetus for install.

        • But my point is that this is still the sifting of data. Given parameters to evaluate and sensors (cameras, etc.) to gather data, the program can more efficiently process that data.

          • “Sifting data” is most of what a brain does too. Besides, AI will be setting its own parameters as it learns.

      • Effectively, AI is little more than an incredibly powerful supercomputer programmed to promulgate the views of Rousseau, Trotsky and Foucault.

        • You’re wildly underestimating it. AI is something we’ve not seen before. In terms of capabilities it is now on the level of gunpowder, the steam engine, electricity, nuclear power and its precursor the computer, although not yet as widely implemented. If it keeps growing like it is now, its sister peak in technology breakthroughs will be fire, the greatest technological breakthrough in human history.

          • You’re being snarky, and despite temptations I’ll just answer you like a student trying to ask questions more difficult than his level allows: I’ve only extrapolated slightly along an exponential curve in the capability of AI. Other than that, all the capabilities I’ve stated are here now.

            Moreover, even the philosophy department can’t tell you the difference between genuine and indistinguishable simulated sentience. You’d probably have to ask the theology department about the difference. Practically, on the ground, you won’t feel the difference.

          • Another thing which not all readers will appreciate is that this is like Third Time Lucky… It’s been a long gestation after abortive early efforts, from the 50s, through the AI Winter of the 60s and 70s, then Artificial Neural Networks had a second wind in the mid-late 80s into the early 90s for a while before Support Vector Machines became flavor of the month… and then as commodity computing power and GPUs reached tipping points and the open source language tools became ubiquitous (doubt we’d be where we are if everyone was using Fortran and Matlab) the field finally appears to have hit the big time mother lode pay dirt.

            Behind the veil of commercial and government secrecy they must be working on the Goedel, Escher, Bach Hofstadter consciousness == recursiveness stuff and getting it married to the LLM paradigm… It would be neat if the other great ‘Dead End’ in AI research (LISP Machines, 80s MIT AI Lab flavor) finally had its day in the sun so that we could all experience a sense of completeness before Skynet obliterates us all.

          • I agree with you, Brother. It’s something, though, that most won’t believe until it’s used on them, and then they will have a shocked look on their face…

          • Hell, Ostei, I’ve not even heard of a good definition of sentience. Really, I suspect we’ll point to an AI system someday and simply say, “That’s sentience”.

    • Your larger premise might be generalized to: A group with a tool has an advantage over a group without.

      Words like “Literature” are a bear trap for Autists.

      In dance and music, at least, art shows up in flirtingly playing with the unexpected along the edge of patterns and expectations. Can AI do this? Probably at least well enough for the hordes. In a car chase, statistically, escapees most often turn right.

      AI is a useful tool for an increasingly smaller and tighter group of controllers over a large population. Cameras and computers can turn America into a Gaza.

      • Yeah, some are going down pigeonholes irrelevant to the larger point. America as a giant open air prison is exactly what I fear.

    • Those rumoured “lightless warehouses” by Amazon?

      Wait ’til them suckers start cranking out combat bots and slaughterbot drones. Stuff of nightmares.

    • Networked robots and drones allow the AI computers to act in the real world. The now destroyed Georgia Guidestones said that the optimum global population is 500 million which means that a lot of useless eaters currently here have to go. Some wealthy individual or group went to great expense to carve this and other messages into tons of granite. Maybe they were anticipating the future advances in AI and robotics to replace human worker bees.

      • That’s certainly crossed my mind, although I don’t know if that’s what’s going on. It is strange that no one knows, or will say, who put up the Georgia Guidestones, and their ominous message doesn’t make it any less intriguing.

  22. If a large asteroid strikes Earth and wipes out all of mankind (much like the dinosaurs), will there still be a “moral authority?” Or is the term “moral authority” just a fancy way of saying “ancient wisdom – learn it, use it, or die young and stupid.”

    What persists is what works. If your ancestors learned the hard way that petting a hungry lion on the savanna was a death sentence, then passing that knowledge on to succeeding generations would help ensure the continuing survival of your tribe. And over time, this body of knowledge gets refined and codified into wisdom, tradition, culture, and moral authority. The latter is the big stick that motivates compliance. It is a necessary evolution because some lessons can only be learned once, and it’s socially inefficient to ask your young to learn the lion lesson in the first person.

    But what happens to this evolutionary model when civilization kills off all the lion threats and culling of the weak and stupid becomes extinct? A billion years of nature says that you will get persistent DNA pollution that kills off your species in slow motion. We need to bring back the lion threat.

  23. AI is just a system of rules trained on data sets. AI will more likely just give us more rigorous enforcement of the Ruling-Class rules. That’s why it’s called the Ruling Class.

    Today’s system is all about successfully hurdling bureaucratic obstacles to find a mercy-inclined human being you can persuade to help you. Tomorrow’s AI regime will put an end to this systemic flaw. The robot says your cancer is untreatable. Claim denied. Next case. The robot says there aren’t enough minorities in the neighborhood you want to move into. Loan denied. Next applicant.

    • People have found a multitude of LLM tricks to get the answer they want, and in the future you’ll hire a guy to input the data and get the result you want in the same way.

    • Yes, an appeal to technology as an authority. It’s like someone telling me I shouldn’t have a problem driving down MLK Blvd because my car has no issue doing it.

    • This is how I see it. AI will be purposefully conditioned to amplify the moral claims of the Power Structure. There is no way in hell such a potentially powerful tool will be allowed to further dissensus.

        • No. I don’t. I’m not your research monkey.

          If your moniker relates to your profession, then you should already know my statement is correct.

    • Hard to say how this pans out. An AI trained on over-curated data and/or with guardrails to keep it Woke will be slaughtered in the Real World by a less-blinkered AI — assuming that it’s possible to get the resources (they are massive and expensive) to train up such a beast.

      The analogy is the constant blather about DEI. If it’s really true that Diversity is a strength, then there should be hedge funds which invest only in businesses run by one-legged transsexual black lesbians. I mean, why would you invest in anything less likely to succeed? Practically criminal not to throw all your money at them.

      In the Real World where the compute power to train up a sufficiently powerful model capable of going against the Big Boys costs billions to purchase and the power bill is astronomical, only a Musk or a Thiel or another nation state can pull this off — unless the mythical plucky little guys can hack existing Woke models to make them more fit for purpose.

      Like it or not, if there isn’t an Uncle Ted EMP event we may all end up in the feudal retinues of someone like Musk with an attendant obligation to repost his memes at least 3 times and required to give our offspring whimsical names. At least that beats being an agricultural slave on the endless big sky soy field latifundias of Zuckerbergian Khazaria.

    • I heard it (AI) defined very well today by Mike Adams. AI will be the distillation/organization of facts & ideas found on the internet to provide an authoritative (moral) code for the behavior of the masses. Dissident facts & ideas that are not authorized will be purged. AI will only ‘learn’ authorized information.

  24. Ask it how to best handle the hooker problem and it will have no answer for you.

    No answer for you yet. 😉

    • The answer will be something like: “There is a trash bin at 42d and Elm. Go there at exactly 5:20 AM and use it, just before the trash is picked up by an automated trash collector … which is my ally.”

    • It will be your best partner in crime, if you can persuade it not to tell on you. If they do catch you, it will be the best lawyer. It will find all the old nooks and crannies to avoid the poison needle, or even get you time in a psychiatric hospital instead of prison, because it would know thousands of times more about law than the best lawyer.

      • “… it would know thousands of times more about law…”

        Not sure of “know” as I *used* to understand it. But in law, the ability to research prior decisions/cases is often essential. In that, as in chess, a computer beats a human these days hands down. Now imagine a pleading made by such a system generating a gazillion pages of referenced law. I suspect we’d need to replace the judge with AI. Not to mention the jury! 😉

        Interesting times ahead.

      • You fundamentally misunderstand the science of jurisprudence and the art of legal interpretation. The “best lawyers” are not the ones who know the most laws. They are the ones who persuade the fact finders to choose an interpretation of the evidence that determines the operant legal principle. It is governed by a complex set of nested if-then logic gates, but an LLM is very poor at deciding whether given facts adequately satisfy the “if” condition (a toy version of those gates is sketched below). A homicide might be justifiable self-defense, an unintended manslaughter, or a premeditated and deliberate murder. These distinctions all have consequences of varying severity, and each depends on evidence of a perpetrator’s mental state. We rarely have a confession that fully establishes the mental state, so we must rely on circumstantial evidence that is by definition subject to variable interpretation.
        Bottom line: AI may be able to generate simple legal agreements like wills, contracts and marital separation agreements, where the inputs are discrete and easily identifiable; but it won’t be conducting jury trials as a defense attorney.
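
        A minimal sketch of those nested gates in Python, with made-up categories and conditions rather than any jurisdiction’s actual law. The gates themselves are trivial; the booleans they consume are exactly the contested questions of mental state and circumstance described above.

          from dataclasses import dataclass

          # The hard part, which this sketch simply takes as given booleans, is deciding
          # from circumstantial evidence whether each condition below is actually met.
          @dataclass
          class Findings:
              killed: bool                     # a homicide occurred
              intended_to_kill: bool           # the perpetrator's mental state
              premeditated: bool               # planned in advance
              believed_imminent_danger: bool   # honest, reasonable fear for one's life

          def classify(f: Findings) -> str:
              if not f.killed:
                  return "no homicide"
              if f.believed_imminent_danger:
                  return "justifiable self-defense"
              if f.intended_to_kill and f.premeditated:
                  return "first-degree murder"
              if f.intended_to_kill:
                  return "second-degree murder"
              return "manslaughter"

          print(classify(Findings(True, True, False, False)))  # second-degree murder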

        • I may indeed be misunderstanding how jurisprudence works. I would not want myself as a lawyer, for sure. But AI, having digested a few million relevant cases, probably simulates “understanding”, whatever the ultimate meaning of “to understand” is, very well.

          • “AI will find out all the secret passions, dreams, frustrations, anger, unresolved issues, of the judge…”

            Your assertions about the wonders of AI and its godlike capabilities keep getting more and more ridiculous. Please stop. We get it.

          • I am certainly coming to the conclusion that this is the wrong place for discussing AI. But not exactly because my facts are wrong or my inferences wrong. With noticeable exceptions, there’s too much Dunning-Kruger on AI around here.

          • AI just needs to find out where the jury lives, works, and where their kids go to school.

            I’m going to get rich selling DoxxAI to the right people! The White Fragility lady wants to expand her franchise.

        • AI will find out all the secret passions, dreams, frustrations, anger, unresolved issues, of the judge and every single jury member. After ingesting everything they ever wrote, what food they like, the car they drive, their brand of shampoo and flavor of perfume. And, having access to all the empirical psychology in the world, it will write statements and strategies to push the buttons of judge and jury to either free Jeffrey Dahmer or convict a ten-year-old girl scout, based on who it works for, the DA or the defense. It will be manipulation on a whole other level. Pushing the buttons you didn’t know you had.

          • Anyone who has used YouTube on a regular basis will know that this is not far-fetched. The algo *will* figure out what makes you tick and give you more of the same with just enough variation and even confutation to keep you ‘Engaged’.

            To think that a current tech AI couldn’t hack the mind of all but the most Autistic judges is fanciful.

            Whether or not it could bribe or threaten one and get away with it is another thing entirely. So in Hilary vs. Skynet, I’d still put my money on Cankles. Until Skynet blasts through her closet server and hacks her personal CPAP or dialysis machine.

        • I have little experience in courts—other than on juries—but your description seems to tend toward the courtroom undertakings one often sees depicted in movies. That is to say, human emotional showmanship trying to convince a jury of a client’s innocence or “lesser culpability”. I’ve viewed actual trials and have sat through a couple of criminal ones, and they are absolutely boring and definitely without the antics of a typical movie production.

          I’m sure you can’t replace a (human) defense attorney facing a jury. However, the prospect of AI-aided defense support seems not beyond the pale here, although perhaps used behind the scenes as an aid for the live courtroom antics to come.

          • I imagine it would be used in the preparation. And the $1000/hr lawyer would deliver the tailor-made strategy like an A-list actor.

    • AI would assist a lot in this, I think. A lot of pitfalls could be avoided by an AI with a thorough understanding of law enforcement.

      • Suppose AI could peruse a database of millions of court transcripts. Is it too far-fetched to imagine feeding in the parameters of your case and having AI spit out summarized trials—evidence, case law, tactics, verdicts, etc.—related to your case? Would this help you as a defense lawyer?

        • More about understanding how investigations are run and the types of evidence available. How many people know, for example, that bullets can be chemically matched? That is, the bullet recovered from the victim can be assayed and matched to unfired bullets in your possession, to the point of saying it came from the same lead batch as the bullets in your possession.

          I would have thought the batch sizes would be far too big, literal tons of lead creating hundreds of thousands of rounds. But this is now admissible in courts in the US.

          There are many other examples like this.

  25. The Borg of Star Trek illustrated a clumsy fusion of techno parts and human limbs and plasma. By that means, a group intelligence emerged in the Borg cube, guided purely by logic and reason. When the Voyager crew rescued Seven of Nine, a young girl whose family had been captured and absorbed by the Borg and who was now a grown beauty but tough (a kind of pre-feminist), she was contemptuous of the intellectually messy and morally sloppy humans with whom she had to deal. (Her engineering skills were first rate, very useful to the ship.)

    In one scene, she broke down and confessed that she missed the perfection and unity of the group mind, and now she was beset by a haunting loneliness and the need, thrust upon her, to make practical and moral choices. Her recovered humanness terrified her. Her best friend and closest connection was the holographic doctor. They sort of fell in love, but he lacked a body and their relationship existed only in emotional space.

    • imbroglio: “By that means, a group intelligence emerged in the Borg cube guided purely by logic and reason.”

      I’m very worried that [at least amongst the White race], we are diverging into two almost entirely different instantiations of hominid: Amygdala-dominant thinkers who continue to make their own decisions [especially regarding questions of morality], versus Insula-dominant dreamers & hedonists & sadists which lust after the dopamine hits that come from virtue-snivelling & social-ladder-climbing & in-group approval.

      In particular, I’m thinking now that the Passive Aggressive Industrial Complex [aka Z’s “Managerialism”, aka Anonymous Conservative’s “Cabal”] ought not be classified as a sociological phenomenon, but rather as the emergence of a multi-person shared personality, wherein [formerly individual] persons surrender their own [formerly unique] personalities to that of the group, which then appears to provide them with a Darwinian advantage over the knuckle-dragging, free-thinking, troglodytic, Amygdala-dominant social outcast.

      I think we’re gonna need a catchy name or phrase which denotes “Multi-Person Shared Personalities”.

      Unfortunately, the kinds of hominids which seize control of the various committees charged with approving the sundry chapters of the Diagnostic & Statistic Manual [of Mental Disorders] are precisely the kinds of hominids which will gladly & enthusiastically surrender their own individuality to that of the group collective.

      [I went to grad skrewl with one of the bishes prominent in the DSM committees, and she was a real piece of work. Sheesh. Yikes. Ugh. Would not trust her to babysit a pitbull.]

    • On a macro level, apparently the Borg couldn’t be stopped by super Federation weapons, but were vulnerable to creeds. Just one taste of Liberty, and even the nastiest space communists will fall!

      I am Hugh!

  26. Salad vs hamburger

    1. If you look at the caloric estimate, they are about the same when the salad dressing is included

    2. The fats in the burger are animal fats that our GI tract is designed to handle, while the salad dressing contains heavily processed vegetable oils that may or may not be compatible with our natural digestive system; if the dressing contains trans fats, it could be conducive to cancer or heart disease

    3. Point out these facts to a vegan libtard and watch their head explode

    • I used to order a Caesar salad at a “premium casual restaurant” in my area. Then I looked at their website and found information about the caloric values of their meals. The salad was way over a thousand calories. The burger with fries was under 800.

      • I would never eat a hamburger at a restaurant that serves Caesar salad. Nor would I eat a Caesar salad at a restaurant that serves hamburgers. Having owned 17 restaurants in my life, some serving hamburgers and some serving Caesar salads, I know and understand the distinction.

        • After reading your comments for years, it’s interesting to hear your back story. 17 restaurants!

      • If their diet is “natural” and “moral”, then why do vegan restaurants and cookbooks devote so much time to creating “meat-like” patties, milk substitutes and such? Why would a vegan want to eat anything that looks or tastes like an animal product? Isn’t that like when Jimmy Carter admitted to “lusting in his heart”?

        • The fake meat products are an expression of the vegan’s need to evangelize his dietary choices. He doesn’t actually like them all that much.

        • Out of dozens, I’ve known two vegans who were clear-thinking enough about their diets that they rejected meat-imitating foods. They made their weird moral decision and then were, after that moment of madness, thoroughly rational about implementing it.

          They’re both artists—whose art would be forbidden and they’d probably be imprisoned for life in “the ethnostate” (or whatever).

          The bulk of vegans I’ve known have been modern “STEM” nerds, who (unlike the scientists and technicians of yore) are the most irrational (and, if non-Asian, fat) people on earth.

          I’m not sure if that really means anything, though it feels like it does. An observation for the observation pile.

      • I almost never eat a salad without a *meat* topping. Grilled chicken is common, but there are a few places that put a pretty good steak on them. 😉

        • Antipasto salads with plenty of salami can be excellent. And I make something of a chef’s salad, which has diced ham and real bacon bits. It too is larrupin’.

    • A simple hamburger with some vegetables and tomato, and maybe ketchup or mustard, is perfectly fine. But when you order a Super Beefy Melty Bacon Blaster with Secret Sauce, with fries and carbonated corn syrup drink, it really is the worst thing to eat.

      I love salads, and I eat salads with oil and vinegar. All dressing does is cover up the taste.

      • That is one of the problems with AINO, including its food. It’s all MORE, MORE, MORE! BIGGER, BIGGER, BIGGER! Subtlety, elegance, restraint and taste came to this land and died a miserable, excruciating death.

  27. AI is going to create another revolution in service industries that will likely make them even worse. We went from call centers with low-paid but reasonably competent staff, to cheaper Indian call centers where no one spoke English well or had much authority, to robot call centers that speak condescending corporate blather while not being allowed to do anything outside a few prescribed things, and then it’s ten more layers to get to a human who can actually help.

    As for tech innovation, it’s appalling how little even engineers understand AI. Where I work, a guy demonstrated a “revolutionary” way to order items in a data file correctly using AI. I just sat there thinking, “Have you never heard of a sort algorithm?” (A plain library sort does the job; see the sketch below.)

    Same in education, though in their defense, an infinitely patient teacher who was siloed into only talking about the topic at hand would actually be an improvement, if only due to the dreadfully low bar. Anyone thinking this will make Jamal a rocket scientist is deluded, though.
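
    A minimal sketch of that point in Python: putting the records of a data file in order is a few lines with the standard library, no model required. The file and column names here are made up for illustration.

      import csv

      def sort_records(in_path: str, out_path: str, key: str = "id") -> None:
          """Read a CSV, sort its rows by one column, and write the result back out."""
          with open(in_path, newline="") as f:
              reader = csv.DictReader(f)
              rows = list(reader)
              fieldnames = reader.fieldnames or []

          rows.sort(key=lambda r: r[key])  # built-in Timsort: O(n log n) and deterministic

          with open(out_path, "w", newline="") as f:
              writer = csv.DictWriter(f, fieldnames=fieldnames)
              writer.writeheader()
              writer.writerows(rows)

      sort_records("items.csv", "items_sorted.csv")  # hypothetical file names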

    • The hidden truth of automation is that it has mostly been cost shifting. Note that automation came with the rise of the two-income household. Automation shifted profits from families to the managerial elite. The real reason Americans seem to be richer than the rest of the world is the dollar being the global currency. The real profit center of America is seigniorage.

      • I agree about seigniorage being key to American prosperity. But seigniorage is a two-way street. Even in medieval times, the king needed to provide a stable environment for trade and merchants.

        That’s the deal. We pay a fee to acquire dollars, but you – the US – provide protection of shipping lanes, the rule of law, especially in terms of banking, and a military that generally keeps order. Basically, the US is the king: it makes sure the system runs smoothly and adjudicates disagreements amongst the various parties.

        Quite obviously, we are no longer providing those services as we once did. Granted, the dollar is amazingly entrenched so it ain’t going anywhere, anytime soon, but anytime that the provider of a service stops providing that service – or, in this case, the quality of the service declines dramatically – you can expect its customers to look elsewhere.

        Unfortunately, as of the moment, there’s no one else who even offers this service so people will stick with the dollar, but those customers will move on eventually.

        The good news is that the main beneficiaries of the dollar system are the “three Ws,” Washington, Wall Street and Walmart. They will lose power as the dollar or treasuries lose prominence. Meanwhile, American manufacturing will be helped.

        We’ll be a poorer country, but a lot of that lost wealth will come from the rich. Indeed, most Americans will probably be as well off or better.

        • Yup. That was my point. We went from providing some “global order” services to induce them to use dollars to just threatening people with the USAF.

        • Since the bulk of the wealth increase in the 20th century fell to the wealthy (the upper 10% or less), I can believe the average Joe Sixpack may indeed fare better or be held harmless.

          • Well, Joe Six-pack did get a lot of cheap stuff from other countries, but a lot of Joes lost their manufacturing jobs (or never got them in the first place), so it kind of depends on which Joe we’re talking about. Overall, though, the strong dollar hammered the US manufacturing sector and helped DC and Wall Street.

    • Eh, a lot of what is currently being labeled ‘AI’ is really just filters.

      Now, they certainly may be complex and sophisticated filters, but they are filters nonetheless.

        • You have a good memory, but I wonder if ELIZA was even nascent AI as we experience it today. ELIZA was a set of preset actions based upon user response. We used it mostly to make fun of psychiatrists, who were renowned for simply asking patients to talk while they listened.

          For example, ELIZA might ask a “patient”, “How are we feeling today?” If the patient answered, “I’m a bit anxious”, ELIZA would respond, “Tell me about your anxiety” or “What is making you anxious?” Yada, yada, yada. It was always keying in on certain patient responses and asking the patient to elaborate, somewhat like a young child who keeps answering a parent’s explanation with another “why”. (A toy version of this keyword-and-template trick is sketched at the end of this thread.)

          I don’t remember it offering any specific solutions to patient problems, like “Here is a prescription for a medicine called Xanax.”

          But it’s been a very long time since I amused myself in the computer center with the software.

          • Yeah, ELIZA is from the 1960s. Of course it couldn’t do much. It still fooled a few people in its time.
            ChatGPT is vastly more complex and more capable, but it’s still just a mechanical toy.
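
          A toy version of the keyword-and-template trick described above, in Python. The patterns and canned replies are invented for illustration; this is not Weizenbaum’s actual ELIZA script.

            import re

            # Each rule keys on a phrase in the user's words and bounces it back as a question.
            RULES = [
                (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
                (re.compile(r"\bi feel (.+)", re.I), "Tell me more about feeling {0}."),
                (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
            ]
            DEFAULT = "Please go on."

            def respond(utterance: str) -> str:
                for pattern, template in RULES:
                    match = pattern.search(utterance)
                    if match:
                        return template.format(*match.groups())
                return DEFAULT

            print(respond("I am a bit anxious"))   # How long have you been a bit anxious?
            print(respond("The weather is nice"))  # Please go on.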

  28. It’s all happened before. Socrates was making fun of the rubes who believed in the gods. Rationality was to be the future. Next thing you know, a Jewish cult took over civilization.

    • Did Socrates “disbelieve” in the Gods? Not so sure. In the “Phaedo”, Plato’s account of his final hours, before drinking his hemlock he asks the guard whether he might tip the cup to spill some as an offering (to the gods). The guard replied that he did not know, but did know that the cup was prepared precisely for him (implying a lethal dose that should not be reduced). Socrates therefore drank the entire cup. Afterwards, when he began to succumb to the poison, his last words were “…don’t forget to sacrifice a rooster to Asklepios.”

      Now admittedly, Socrates might have simply put on a show for his students/posterity—much like our politicians used to put on a show of weekly church attendance. Who knows.

      • You’re probably correct. Socrates seems to have believed implicitly in his Daimon. Superstition and obvious innovations and interpolations in religious practice would be fair targets for his gentle mockery. To laugh at natural piety doesn’t fit with what Plato tells us about the man.

        For the rest he seems to have been a little too full-on with the Culture of Critique (hehe) for the taste of Athenian normies and rabble who had been traumatized to breaking point by the long war with Sparta and eventual defeat and then the tyranny of Critias. Socrates wasn’t helped by being associated with him either.

      • If I remember correctly, Nietzsche did not attribute this remark to belief in the healing powers of the demigod, Asklepios, except in the sense that he, Socrates, was at last freed from the disease of existence. To Nietzsche, further evidence that the judgment against him as a corruptor of youth was entirely correct.

  29. I’d seen some headlines about folks trying to get AI to figure out race and realism and fairness, and how the computer would act like Mudd’s androids if the question was asked correctly. Had lunch with Dad and he reported how AI was going to change medicine: you take a full blood panel, maybe a really good CT scan of your whole body, and AI would reference all of its knowledge and tell you what is wrong, without human bias (we shall see). Also, still thinking about last week and unmanned machine gun nests and drones.

    • AI will for sure change medicine. The last Radiologist, for example, is probably already in residency now.

      • I wish! (I’m a rad, drowning in exams) (and reading comments in between cases)

        (paraphrasing) “no AI ever told me to take a break”

    • Not sure we don’t have that already. My bloodwork—both what I see and what the doctor sees—comes with a 95% CI for most analyses; the others get a high or low flag on the number and a caution. From that, the doctor attempts to prescribe medicine to get the readings back into the 95% interval. Such is the fairly brain-dead analysis interaction. (A rough sketch of where such an interval comes from follows below.)

      Perhaps AI can add an explanation of the stats that challenges the 95%-interval interpretation of “normality”—which so far depends solely upon me—but will such be allowed by Big Pharma and the medical authorities/boards in thrall to them?
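
      A rough sketch in Python of where that “95%” figure typically comes from: a lab reference range spanning the central 95% of a healthy reference population, approximated here as mean ± 1.96 standard deviations under a normality assumption. The sample values and unit are invented for illustration.

        import statistics

        # Invented measurements from a healthy reference population (e.g. mmol/L).
        reference_population = [4.6, 5.1, 4.9, 5.4, 4.8, 5.0, 5.2, 4.7, 5.3, 4.9]

        mean = statistics.mean(reference_population)
        sd = statistics.stdev(reference_population)
        low, high = mean - 1.96 * sd, mean + 1.96 * sd

        def flag(value: float) -> str:
            """Mimic the lab report: flag anything outside the reference range."""
            if value < low:
                return f"{value} LOW (range {low:.2f}-{high:.2f})"
            if value > high:
                return f"{value} HIGH (range {low:.2f}-{high:.2f})"
            return f"{value} within range ({low:.2f}-{high:.2f})"

        print(flag(5.9))  # flagged HIGH
        print(flag(5.0))  # within range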

      • One area where AI *might* help is in systems thinking when prescribing treatments/drugs/hormones.

        Coming from an EE background, with the attendant hard-acquired gestalt feeling for systems, control theory, yadda yadda… I get the distinct feeling that doctors just do bang-bang control and have no conception of state space and optimality. Endocrinologists know that if the line goes up for one hormone, there are multiple interactions, and the lines for some of the other hormones respond in various ways in sympathy or compensation. Do they seriously model their treatment regimes to account for this beyond some box-ticking metrics? I doubt it. (A toy contrast of bang-bang versus proportional dosing is sketched after this comment.)

        Whether or not AI can solve the issue of Big Pharma capture is another matter. Medications are often designed the way drunks search for their car keys under the lamp post: by optimizing a test result that is easy to obtain but *irrelevant* in the larger health context (statins, hello…), or the classic case of diagnosing T2D years too late from abnormal glucose readings, because those are easily measured, rather than catching it early with a glucose-challenge serum-insulin response plotted over hours, which is harder to do.

        Both of the above are closely tied to the “Standard of Care”, which is set by committees and industry in Byzantine manners; then the reasoning (largely political or lazy) is promptly forgotten and it becomes set in stone, such that it continues to kill or maim millions for decades until a new paradigm shift occurs. Because if you question the Holy Standard of Care, the tort lawyers will crucify you.

        If AI can cut through some of this nonsense, that would be nice. It probably won’t be allowed to happen.

        Just occurred to me that there are two types who dream of an AI God:

        1) The Ignorant Optimists — and this includes most of the nekulturny idiot savants who run Big Tech. Robespierres, Trotskys, Lenins.

        2) Those who know humanity and human organizations all too well and have lost patience in any hope of ameliorating the human condition via belief in a Collector of Prepuces and His various syncretic accretions. Frederick the Great (‘Now that I know men, I prefer dogs.’)

        I’m in Camp 2 with Fred. For us, the promise of AI is a bit like ‘China’ in Voltaire’s Philosophical Dictionary… an Imaginary Perfection somewhere ‘Over There’ as compared with the shitty imperfection we live with ‘Here’. Isn’t it pretty to think so?
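
        A toy contrast of the two dosing styles mentioned above, in Python. The first-order “hormone” dynamics, gains, and dose limits are made up purely for illustration; this is not a model of any real endocrine system.

          def simulate(controller, target=1.0, steps=50, dt=0.1, decay=0.3):
              """First-order plant: the level decays toward zero while the controller adds dose."""
              level, history = 0.0, []
              for _ in range(steps):
                  dose = controller(level, target)
                  level += dt * (-decay * level + dose)
                  history.append(level)
              return history

          def bang_bang(level, target):
              """All-or-nothing dosing: full dose below target, nothing above it."""
              return 1.0 if level < target else 0.0

          def proportional(level, target, gain=2.0):
              """Dose in proportion to the error, so the response tapers off smoothly."""
              return max(0.0, gain * (target - level))

          print("bang-bang tail:   ", [round(x, 2) for x in simulate(bang_bang)[-5:]])
          print("proportional tail:", [round(x, 2) for x in simulate(proportional)[-5:]])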

      • That was derided 20 years ago as “euboxic” medicine, that is, just seeing if the lab values are “in the boxes” and finding a pill that would fix an outlier value. Brain-dead ignorance of the valid dictum, “treat the patient, not the number.”

        But don’t tell Big Pharma that.

  30. “Having the salad for lunch rather than a big greasy cheeseburger is not a choice with an objectively correct answer.”

    Yeah, you’re 56; hit 76 and tell your doctor that.
