
NeuralCapsule

Contemporary confederacies of dunces now coalesce around a very stable genius, it seems..

Recent Comments

  1. 1 day ago on Scott Stantis

A very good research inquiry into the intelligence of GPT-4 was conducted by 14 researchers at Microsoft a little over a year ago; it can be read on the arxiv “dot” org server by searching for

    “Sparks of Artificial General Intelligence: Early experiments with GPT-4”

It is a long read at over 150 pages, but chapter 9, “Societal Influences”, is especially worth reading..

Bear in mind that GPT-4o now supersedes this in many significant ways..

BTW there is a piece on the BBC today featuring Geoffrey Hinton (considered by many to be the godfather of today’s AI) about how we’re going to need universal basic income:

    (prepend the usual ‘h t t p s’ prefix and the main BBC address to the following address)


    Also in that article:

    “Professor Hinton reiterated his concern that there were human extinction-level threats emerging.

Developments over the last year showed governments were unwilling to rein in military use of AI, he said, while the competition to develop products rapidly meant there was a risk tech companies wouldn’t ‘put enough effort into safety’.”

    Interesting times indeed.

  2. 3 days ago on Scott Stantis

The definition of “we” is so often misconstrued here. In the early stages of the commercialization of the internet, there was a widely held perception among tech people that everyone could just run their own servers and internet services, which is, strictly speaking, true.. but also now obviously irrelevant.

Beyond the general purpose systems like ChatGPT the public are playing with (and the very general business tasks so called ‘prompt engineering’ can adapt them to), even the ‘open source’ AI models require a minimum of many tens of thousands of dollars and some seriously specialized knowledge to ‘fine-tune’ to one’s purposes (and of course many millions of dollars and teams of math and CS specialists to create from scratch).

Which means that this technology is even more inherently oligopolistic than the original internet (which nevertheless centralized on massive-network-effect social media like Facebook, hyperscale cloud providers like AWS and edge PoPs like Cloudflare..)

    So, saying “we can just turn it off” is true, so long as ‘we’ is understood to be the trillionaires and autocrats who will actually make decisions like that..

  3. 3 days ago on Scott Stantis

    So no actual points then..? K, Thx !

  4. 4 days ago on Scott Stantis

    Excuse the delay, busy couple of days in the tech world..

    It appears you feel like you are making some kind of point here by being unimpressed.. Okay..

Do you think that, because you are unimpressed, software, legal, medical and other record-management groups are not using LLMs to radically redesign business processes in ways that are already resulting in large-scale layoffs and will continue to do so through the decade you mentioned?

    As an enterprise software consultant involved in large scale business systems, I can personally state that they are doing so.

Do you think LLMs will not totally change what little remains of journalism in the same period? Politics (you know, the stuff these comics we’re debating are about)? These are going through massive (and one would think obvious) generative-AI-driven changes as we speak..

Basically, all the human-language information processes that were made incrementally more productive by various editors and the piecemeal automation of macros, regex, and then autocorrect, autocomplete and various (internal) search engines are being superseded by tech that is rapidly on its way to taking the instructions a senior programmer, writer, billing coder, whatever would currently give to junior staff and executing them autonomously..

I’m not really sure why the fact that LLMs are unlikely to replace senior staff makes this any less catastrophic for junior staff (and upcoming graduates), just as I am unclear why the inability to do logic proofs makes this tech any less devastating in the public ICT sphere we must all try to stay informed in.. I mean, in a democracy, is it really any comfort that only the top n percent of citizens continue to be well informed…?

    So, yeah, LLMs can’t do physics and math.. If we were talking about this on a math substack then, sure.. You do get that this is a political cartoon referencing AI and creatives, right? Any creative work will be upended by LLMs.. And soon.

  5. 6 days ago on Scott Stantis

    So, the salient question becomes one of learning, not a recapitulation of the symbolic logic AI winter.

    Here, we must leave behind the “copy/paste on steroids”, “stochastic parrot” type definition of LLMs and consider what LLMs really are:

Machine learning is the general term for statistical learning, which really only began to eclipse the then-dominant ‘expert systems’ in the age of GPU-parallel tensor processing over effectively unlimited data stores, enabling neural networks and increasingly sophisticated ‘deep’ variants of such networks like recurrent, convolutional and, importantly, transformers..

In their 2017 paper “Attention Is All You Need”, Google scientists introduced the transformer architecture behind LLMs, which, through truly massive numbers of parameters (approaching trillions) and repeated ‘self-attention’ weighting of all the words in its training corpus, builds a kind of sophisticated topological map of every likely use of every ingested word..

This, to me, is the greatest difference from ‘autocomplete’.. As we prompt an LLM it is not merely predicting the most likely word sequence, it is navigating an unfathomably complex stochastic topology in a way unique to that interaction, which is why ‘prompt engineering’ is now the new hot kool kidz kareer..

Beyond that (which is really a function of the computational cost of training LLMs), I think the nature of these topologies ought to prompt us to think on the nature of learning and knowledge representation in our own neuronal structures..

    LLMs do not ‘think’ in any way that we understand thinking, and I remain firmly in the no AGI camp, but I am not at all confident that these systems are incapable of learning other human thought patterns beyond languages with equal rapidity and fluency..

    Such systems, even if they are never capable of engaging in mathematics or physics as humans do, nevertheless are already well on the way to becoming force multipliers for almost any typical form of human communication..

    ..which is a bit scary.
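To make the ‘self-attention’ weighting described above concrete, here is a minimal NumPy sketch of the scaled dot-product attention at the heart of “Attention Is All You Need” — a toy illustration only, with made-up dimensions and random inputs, not anything resembling a production transformer:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax: rows become probability distributions
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv       # project each token to query/key/value
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)        # every token scores every other token
    weights = softmax(scores, axis=-1)     # attention weights: rows sum to 1
    return weights @ V, weights            # each output is a weighted mix of values

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
X = rng.normal(size=(seq_len, d_model))    # four toy "token" embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out, weights = self_attention(X, Wq, Wk, Wv)
print(out.shape)                            # one contextualized vector per token
print(weights.sum(axis=-1))                 # each row of weights sums to 1
```

The point of the sketch: the output for each token is built by weighting every other token in the sequence, which is what lets the trained model carry the context-dependent ‘topology’ discussed above rather than a fixed word-by-word lookup.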

  6. 6 days ago on Scott Stantis

    Oxford Languages (née Dictionaries) defines intelligence as:

“the ability to acquire and apply knowledge and skills”. By this admittedly concise definition, LLMs are, at least according to the current research in IEEE, ACM, Sage, etc., demonstrating advanced (if artificial) applications of knowledge and skill.. I am happy to provide specific references to peer-reviewed research for the various applications in diverse areas such as in my previous post, and, of course, to explore the intricacies of the more complex definitions of intelligence..

Considering the specific aspect of LLMs and logic: while it is currently correct that these models are incapable of the kind of rigorous reasoning mathematics or physics requires, there are already promising lines of work to address these deficiencies by leveraging symbolic logic (which had been the main thrust of AI research previously) without losing the broad ability of generative approaches.

E.g. “Coupling Large Language Models with Logic Programming”

    “Learning Non-linguistic Skills without Sacrificing Linguistic Proficiency”

    “Logic-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning”

    “Solving Math Word Problems by Combining Language Models With Symbolic Solvers”

    “Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs”

It is certainly early days for this type of LLM application, and none of these is yet ready to engage in even the most basic interaction that probes these kinds of logic, but I would be very hesitant to bet against development in those areas over the next decade, to say nothing of the next few years..

While there are many more aspects of a (more) complete definition of intelligence that bear on the dystopian social side of LLMs I briefly touched on in my post above, this logical-reasoning area is important in that, even if it matters less to the potential societal impact of LLMs, it gets to the heart of what most consider ‘true’ intelligence..
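The division of labor those neuro-symbolic papers describe can be sketched in a few lines: the language model only *translates* a word problem into a formal equation, and a deterministic solver does the actual reasoning. In this hedged toy version, `llm_translate` is a hypothetical stand-in for a real model call (a canned lookup here), and the tiny linear solver stands in for a full symbolic engine like Prolog or an SMT solver:

```python
from fractions import Fraction

def llm_translate(problem: str) -> tuple[int, int, int]:
    """Pretend-LLM: map a word problem to (a, b, c), meaning a*x + b = c.
    A real system would prompt a model here; this canned mapping is purely
    illustrative."""
    canned = {
        "Twice a number plus three is eleven. What is the number?": (2, 3, 11),
    }
    return canned[problem]

def symbolic_solve(a: int, b: int, c: int) -> Fraction:
    """Exact, rule-based solving of a*x + b = c — no token prediction involved."""
    if a == 0:
        raise ValueError("not a linear equation in x")
    return Fraction(c - b, a)

answer = symbolic_solve(*llm_translate(
    "Twice a number plus three is eleven. What is the number?"))
print(answer)   # 4
```

The design point is that the solver’s step is provably correct once the translation is right, which is exactly the “faithful reasoning” claim in titles like Logic-LM: the generative model handles the language, the symbolic component handles the logic.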


  7. 7 days ago on Scott Stantis

So, in the ancient world of early computers, programming languages evolved from mere convenience mnemonics over op-codes to increasingly sophisticated systems of logic which shaped the future of system designs themselves.. tempting as it is for some to equate these languages (in their 4th generation and onwards) to human language, there was always a hard divide between these deterministic languages and the ‘natural’ languages we co-evolved with as a species.. now we have large language models (LLMs) which run on deterministic, stateful machines but are stochastic black boxes, and which, somehow, in trillions of numeric ‘neuron’ weight activations, are rapidly manipulating human language..

..these same LLMs are rapidly progressing in areas from legal discovery in terabytes of complex professional legal documents to drug discovery in petabytes of bioinformatics, running against unstructured medical records and research publications..

..we may say that LLMs “have no soul or depth”, but the fact is that they are already capable of passing any graduate-level writing exam and of imitating writers living and dead in ways that all but literature researchers cannot detect..

..the (dystopian) future is already here, and in addition to running a nail gun over the coffin lid of believable online information, LLMs will expose the lie of cherished human terms which we have always resorted to without any precise definitions, like soul, intelligence, sapience, consciousness, sentience..

..such terms are, like democracy, camaraderie and empathy, so powerful precisely because we can feel connected to others through them by believing that our shared evocation of them is the product of some deeply shared values, and not merely of widely held misconception.. In the brave new world of LLMs (which is already upon us) these cherished terms, like Turing’s puckish test, will fall away in a supernova of inhuman abilities very possibly rapidly heading beyond our species’ ken…

  8. 11 days ago on Gary Varvel

It is great that you are not opposed to higher education, but is it really just a fancy form of vocational training in your analysis..? I get the way it is marketed, the distasteful practices around maximizing subsidies, and that this is the only driver for the majority of the students paying tuition (taking out loans, etc.), but that is not the mission of universities; it is the mission of trade and vocational schools, which make up the vast majority of those with 95% acceptance rates.

In America, high school does not prepare anyone for any career worth the designation anymore; this has been true at least since the beginning of the millennium.. We need post-secondary training to succeed as a (the?) preeminent global economy, and that runs the gamut from vocational school to post-doctoral research: universities are, frankly speaking, the critical leaders in this equation.

    Of course, this is not the only place where universities (must) lead.. Research, for those who are not involved in the process, is not (and never was) some lone genius in a lab full of Pyrex percolator props, it is a highly collaborative globally interconnected (and often unavoidably political) process. What manufacturers we have – and most importantly what manufacturers we are likely to have in our future – are determined directly by the research ability of our universities, NOT by proprietary subsequent downstream technology applications at corporations. This is the vital national interest of having a vibrant system of universities – we fsck with this at our peril.

  9. 12 days ago on Gary Varvel

    I’ve only been here (more off than on as of late) for a few years, but even in that span the comment quality has sunk drastically, so, if I’m going to do anything here, it will be solely out of interest in some increasingly rare, limited aspects of threads as I can find them.

    The comics like Varvel, Lester, Payne, Goodwyn, Bok, Gorrell, et al are well known, as are the dynamics of the ‘featured comment’, so, I feel like once the name calling coalesces into threads, I will try to comment where I can say something.. The repetitive and well known biases here are frankly uninteresting to me at this point.

  10. 12 days ago on Gary Varvel

    Yeah.. I’m really feeling my age trying to keep up with the math of language transformer models.