Observations Archives - Page 2 of 5 - Wootcrisp
 

Category: Observations

ChatGPT related, part three.

I really haven’t been able to keep up with much of the technical mechanics of all the different “large language models” everyone is so rightly interested in these days. Therefore, my opinions here might not be fair to the widespread idea that the interesting recent “AI” model outputs we’re seeing arise from “unexpected” and emergent phenomena once these models are scaled up to tens of billions of tunable parameters. The story, as I have it, is that harvesting all the text on the public internet, along with various other sources of text, using that data to train models of word prediction (i.e. predicting the 1–3 most likely words to come next), and then scaling those models up in size, a scaling made affordable by selective attention mechanisms, is what is responsible for the significantly improved word-prediction accuracy, or for lengthening how far into a sentence or paragraph the predicted words remain accurate. The important scientific claim is that these models “somehow” jump from “just” predicting words to intelligently applying knowledge learned from predicting words to shape functional roles for those words, effectively bootstrapping a recurrent process of incremental “semi-supervised” improvement using feedback from the iterated predictions. That’s roughly what I take the dominant narrative to be about what is happening with these models.
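For concreteness, here is a minimal sketch of that word-prediction framing, using the small, publicly available GPT-2 model through the Hugging Face transformers library (so emphatically not ChatGPT itself, just an illustration of what “predicting the next word” means mechanically):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits       # shape: (batch, sequence, vocabulary)

next_token_logits = logits[0, -1]         # scores for whatever token comes next
top = torch.topk(next_token_logits, k=3)  # the 3 most likely continuations
for token_id, score in zip(top.indices, top.values):
    print(repr(tokenizer.decode([int(token_id)])), float(score))
```

Everything else, the chat interface, the instruction-following, the apparent reasoning, is built on top of this basic next-token machinery, which is exactly what makes the “emergence” question interesting.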

I think that’s probably true, a bit. But I think there’s a less exciting interpretation, or at least an alternative hypothesis, that might be generously garnered from Noam Chomsky’s comments on this topic. I admit that I’m conjecturing all this from ridiculously truncated soundbites that I vaguely remember from a couple of months ago, an ignorance also excusably facilitated by little obstacles like New York Times paywalls. His stance appeared to me to fit a classic pattern: needlessly cantankerous and cryptic positions that are also very easy to misunderstand, and that have previously become honeypots for intellectuals who underestimate how well Chomsky usually covers his bases. I believe he referred to these “LLM” models as producing a kind of glorified plagiarism, or something like that. Since nobody has the time to look into the nuance of statements like these, which don’t seem to directly address the central issue of “emergent” language entities, probably nobody has looked into the matter further and has just moved on. “Chomsky is getting old, and the last time I came across him he seemed to be late in understanding the Ukraine situation, so he must just be babbling due to age, so I’ll just see what’s up on Tik Tok again”, someone might say upon seeing the issue come up derisively in r/machinelearning or on Scott Aaronson’s blog. As I say though, it is my experience that Chomsky often has a safe, wise conclusion in mind when he says something easily perceived as provocative. Even with his Ukraine opinions, one could say he had a responsibility to be the last on board with anything, so he wasn’t wrong at all to be very skeptical about what was happening there.

In the case of recent LLM performance, I think one could connect his seeming dismissiveness to some of his previous “infuriating” positions on the topic of our “cognitive limits”. Nobody likes that topic, I find, because it doesn’t feel like it could possibly be mentally stimulating to study something that is immediately impossible to study. But what if GPT, scaled up an order of magnitude, really is just exceptionally good at plagiarism, like he’s saying? Where we as individuals just don’t know all the different writings there have been on millions of topics, so we can’t really see the plagiarism unless it’s on a topic we’re intimately familiar with? Our individual cognitive limit is exceeded in this case, so we’re inclined to see magic, but it’s just plagiarism. Thus, the “babbling” about cognitive limits matters.

Probably both of these hypotheses have some truth to them. But who knows, because there’s hardly time to read all the expert thoughts on the matter, especially when important models aren’t even public, and the experts are disagreeing, fighting for attention, or confused. I think the super-plagiarism metaphor should serve as a kind of null hypothesis we can run some comparisons against later on, and in the meantime we should keep the focus on what it means to coexist in a society with entities that can only be understood if we collectively share our expertise. That’s what I’m getting at here.
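To make that null hypothesis slightly more concrete, here is a toy sketch (entirely my own illustration, not anything Chomsky or OpenAI has proposed) of the kind of surface-overlap comparison one could run between a model’s output and a reference corpus:

```python
def word_ngrams(text, n):
    """Return the set of word n-grams appearing in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_fraction(model_output, reference_corpus, n=8):
    """Fraction of the output's n-grams that appear verbatim in the corpus.

    A high fraction at large n would be weak evidence for the
    super-plagiarism reading; a low fraction would count against it.
    """
    output_ngrams = word_ngrams(model_output, n)
    if not output_ngrams:
        return 0.0
    return len(output_ngrams & word_ngrams(reference_corpus, n)) / len(output_ngrams)

# Toy usage with placeholder strings standing in for real data.
print(overlap_fraction(
    "the cat sat on the mat and purred quietly",
    "yesterday the cat sat on the mat and purred quietly for hours",
    n=5,
))
```

Obviously a real comparison would need to be far cleverer than verbatim n-gram matching, since paraphrase is cheap for these models; this is only meant to pin down what a baseline check could even look like.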

If my ignorance is showing, do enlighten me. I freely admit that I’m conjecturing about things I could really look at more closely. I just don’t really have time to look, as the pace of things can take on the look of a hopeless shell game.

Currently watching:

[embedded video]

Noam Chomsky reference remarks about ChatGPT:

Chomsky reference passage on cognitive limits, with bonus me dancing.

[embedded video]

Other videos I’ve been kind of watching:

“ML” playlist – https://youtube.com/playlist?list=PLh9Uewtj3bwkBwNZe75u7b5nvQaO4ETP6

[embedded video]

Papers recently read:

None on this topic in months, but some of the ML playlist videos are visually focused on the relevant pdfs, and I am surprised by some of what I have seen in them. For example, simply having GPT-4 reflect on what it has just said makes a really noticeable difference. I mean, of course it would, but still, it was a striking contrast with the claim that GPT-3 didn’t really improve using this reflection technique. The comparison graph is in one of the recent papers about ChatGPT and reflection, and is discussed in one of the recent ML playlist videos linked above. No time to serve it to you on a silver platter tonight; I’ll edit this later if nuance is needed.
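To be explicit about what “reflection” means here, the setup in those papers has the model critique its own answer and then revise it in a second pass. A minimal sketch of that loop, with ask_model standing in as a hypothetical wrapper around whatever chat API is being used (it is not a real function from any particular library):

```python
def ask_model(prompt: str) -> str:
    """Hypothetical wrapper around a chat-model API call; replace with a real client."""
    raise NotImplementedError

def answer_with_reflection(question: str, rounds: int = 1) -> str:
    """Answer a question, then have the model critique and revise its own answer."""
    answer = ask_model(question)
    for _ in range(rounds):
        critique = ask_model(
            f"Question: {question}\nYour answer: {answer}\n"
            "List any mistakes or omissions in your answer."
        )
        answer = ask_model(
            f"Question: {question}\nPrevious answer: {answer}\n"
            f"Critique: {critique}\nWrite an improved answer."
        )
    return answer
```

The striking claim, as I understood it from the videos, is that this loop helps GPT-4 noticeably while it did little for GPT-3; I’m relaying that from memory of the videos, not from the papers themselves.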

Kaponion


Noun: kaponion – the sort of shabby opinion one would expect a kapo to have.

See: kapo, opinion.

Example kaponion – “Once you’ve had a job for a week or two, it’s your responsibility to shit all over everyone when they aren’t working [sic]”

Sleeple


Plural noun: sleeple – first order pejorative term, or second order ironic term, for a political demographic seen to be asleep at the wheel.

Example usage: I’ve felt so sleepy recently, I’m at risk of becoming a member of set sleeple.

See: sheeple, daydreaming.

Contrast with: wolfle [plural noun] – demographic of hyper-vigilant individuals reinforced by the predation returns of both sheeple and sleeple.

Lobotomism


Noun: Lobotomism – political ideology espousing the need to lobotomize most citizens, in the interest of stability. Where stable solutions for planetary systems with billions of sentient individuals are very difficult to find, reducing citizen complexity could mean the difference between having a model for some amount of time and not having one.

Example application: resolving the tension between libraries and capitalism, by censoring the former.

See: lobotomy, capitalism, monarchism, eusociality, individuality.

Contrast with: “optimism” – a political ideology that assumes the returns from superrational actions can dynamically find stable solutions, by providing a sandbox for citizen activities that would otherwise warrant a good old-fashioned round of lobotomies if everyone were to do them.

This is part two in a series of explorations looking into “ChatGPT”. Here I look at how it compares to an important model of analogy-making in the field of cognitive science, and at how to save the transcript of a chat.

In a spasm of inspiration, I wanted to use ChatGPT on my phone while at the gym this morning. After having an inspired chat with it about analogies, I wanted to save a record of our conversation, but I could see no way of doing so from the OpenAI interface, or even of printing to pdf from the Brave mobile browser I was using. Scrolling screenshot capture was also unable to work out which part of the screen to scroll while taking the screenshot. It then occurred to me that I could ask ChatGPT to output a LaTeX transcript of the conversation, which I could copy and paste into a note file on my phone. It did this, but the result ended up being quite confusing to write about later, and I think it’s worth telling a bit of that story, real quick.

I say it was confusing because the LaTeX output that ChatGPT produced was incomplete and therefore could not be compiled. It was no big deal really, as I just needed to manually paste in the remainder of the transcript and close the document with the “\end{document}” command. But I wanted to emphasize this missing output, and my confusion began when I found myself having to edit ChatGPT’s output to include the italics “\textit{}” command: I was in effect modifying the transcript. Again, no big deal, but on top of this, if I was going to do it correctly, I needed to italicize the “\end{document}” part of the LaTeX code too. The problem is that the closing “}” of \textit{} would then have to come after the \end{document} command, which would obviously result in a compile error. At this point, words like “Quine” and “warning: escape characters will be needed for this, and don’t do it” started pulsing in my mind, where the difference between the source tex and the rendered pdf it produces is a different problem entirely from rendering a quote of the source tex. Yep. Obviously. Anyway, still too much recursion for my brain to handle, so I just fixed up the LaTeX, compiled it, then highlighted the last bit of text in the output pdf. Good enough.
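For the record, \end{document} itself produces no visible text in the pdf, so there is nothing to italicize on that side; but if what you want is the literal command to appear as italicized text in the output, rather than actually ending the document, the standard move is to escape it, e.g.:

```latex
% Typesets an italic, literal "\end{document}" in the body text,
% without actually ending the document.
\textit{\textbackslash end\{document\}}
```

That is ordinary LaTeX escaping rather than anything specific to ChatGPT transcripts; it just would have spared me some of the recursion.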

However, simultaneously with these issues, I was testing out VS Code for LaTeX, and there seems to be a frustratingly minimal default set of tools for writing LaTeX in VS Code. In particular, the keybinding that should map “<cmd> i” to “\textit{…}” was not working. Despite several attempts to edit the mysterious keybindings.json configuration file in VS Code, I was left having to write out \textit{…} whenever I wanted italics, which I found quite humiliating if I’m being honest.
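For anyone hitting the same wall: one way to get this working without relying on any LaTeX-specific extension is to bind the key to VS Code’s built-in insert-snippet command in keybindings.json. A sketch, under the assumption that the file is recognized with the latex language id (the key and “when” clause may need adjusting for your setup):

```json
{
  "key": "cmd+i",
  "command": "editor.action.insertSnippet",
  "when": "editorTextFocus && editorLangId == latex",
  "args": { "snippet": "\\textit{${TM_SELECTED_TEXT}$0}" }
}
```

This entry goes inside the top-level array in keybindings.json; with text selected, the keypress wraps the selection in \textit{…} and leaves the cursor inside the braces (handy when nothing was selected).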

To bring it back to the original point, all of this was caused by the simple desire to save the chat log, which should obviously just be a normal feature of the interface. But the bigger purpose of today’s exploration was to ask ChatGPT a couple of minor skill-testing analogy questions: questions that I think have particular importance in characterizing the limits of ChatGPT’s intelligence.


Test 3. The Copycat test.

For ChatGPT tests [1-2], click here

The “Copycat test”, as I call it here, can illuminate the subsymbolic concepts at work behind the output of a black box, expressed in compounds of “atomic” terms comparable to the relations used in defining the axioms of arithmetic, such as “successorship”, “first”, and “identity”, along with many other domain-specific subconcepts, such as knowledge of the letters of the English alphabet in the case of Copycat. The problem Copycat is concerned with is applying an appropriate analogical transformation to a string of letters, given a source analogy described by a sequence of letters of the alphabet, e.g., “ABC is to ABD, as DEF is to ???”. The nice thing about using these very elementary concepts as the building blocks of an analogical transform is that the result comes with an interpretable “perceptual structure” describing its decision. For this reason, Copycat, and its extension Metacat, are in the top three cognitive models dearest to my heart… Actually, if I’m saying top three, then I have to include Letter Spirit and Phaeaco with them, as they are each superb exemplars of “fluid analogy” models, just operating in different problem domains: letter-sequence analogies, typeface design, and Bongard problems. I explain Metacat, poorly, in this video, but it’s a good video anyway imo. The discussion of ChatGPT continues below.

[embedded video]
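As a toy illustration of the kind of rule the test is probing for (my own sketch; Copycat itself builds its answers out of competing perceptual structures rather than a hard-coded rule like this):

```python
import string

def successor(letter: str) -> str:
    """The next letter of the alphabet, with no wrap-around past 'z'."""
    alphabet = string.ascii_lowercase
    index = alphabet.index(letter.lower())
    if index == len(alphabet) - 1:
        raise ValueError("'z' has no successor unless the alphabet wraps around")
    return alphabet[index + 1]

def apply_abc_abd_rule(target: str) -> str:
    """Apply the literal 'replace the last letter with its successor' reading of ABC -> ABD."""
    return target[:-1] + successor(target[-1])

for target in ("def", "xyz"):
    try:
        print(target, "->", apply_abc_abd_rule(target))
    except ValueError as error:
        print(target, "->", error)  # exactly the snag the second prompt below is designed to expose
```

The literal rule handles “DEF” fine and then falls over on “XYZ”, which is precisely where the interesting, pressure-sensitive behaviour of Copycat (and of the tests below) begins.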

1. Prompt:

ABBCCC is to AABBBC, as 123 is to?

Output: “112223”.

 

2. Prompt:

…ABC is to ABD, as XYZ is to?

Part 1, chat record.

Output: “XYA”, and “YXA” over several attempts.

Verdict: not bad, but not great.

Test 3.1.1 – It was unexpectedly difficult to judge the quality of ChatGPT’s “112223” answer. I had “231” in mind, and I think that is the simpler answer, but ChatGPT’s answer does actually work quite well in its own way. They definitely make for an interesting comparison, the more I think about it: in “ABBCCC is to AABBBC”, the repetition counts rotate from (1, 2, 3) to (2, 3, 1), so “231” states the rotated counts directly, while ChatGPT’s “112223” applies those counts as repetitions of the digits “1”, “2”, and “3”.
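A quick mechanical check of the relationship between the two answers (my own verification, not something ChatGPT produced):

```python
from itertools import groupby

def run_lengths(s: str):
    """Run-length encode a string, e.g. 'ABBCCC' -> [('A', 1), ('B', 2), ('C', 3)]."""
    return [(ch, len(list(group))) for ch, group in groupby(s)]

print(run_lengths("ABBCCC"))  # [('A', 1), ('B', 2), ('C', 3)]
print(run_lengths("AABBBC"))  # [('A', 2), ('B', 3), ('C', 1)]  -> the counts rotate 123 -> 231
print(run_lengths("112223"))  # [('1', 2), ('2', 3), ('3', 1)]  -> ChatGPT applies that same rotation
```

So “231” reports the rotated counts directly, while “112223” uses them as repetition counts on the digits; two defensible readings of the same transformation.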

Test 3.1.2 – This one I counted as a fail, as both XYA and YXA are pretty poor answers. ChatGPT is seemingly unable to infer how the use of periodic or non-periodic boundary conditions should change the model’s internal conceptual representation of the English alphabet. Does the alphabet wrap around so that “a” is the next letter after “z”? No, and that pressure can instead be handled by an intermediary “opposite” relation connecting the first and last letters of a sequence, like “a” and “z” in this case. Activating that “opposite” relation is critical to coming up with clever, mirrored solutions to the analogy problem, such as WYZ or even DYZ.

And then this sentence that ChatGPT came up with to justify its answer at one point was baffling to me: 

the analogy is ”book is to reading as fork is to eating,” the answer would be ”XYZ is to YXA” because both pairs of words are related to common activities (reading and eating).


Test 4. Export the chat record as LaTeX

Prompt:

Can you format this conversation log in LaTeX?

Output: “Yes, I can format this conversation log in LaTeX. Here is an example of how it might look:”

ChatGPT_prompt_3_2_2_Copycat_compiled_latex

Part 2, chat record: compiled. 

Verdict: decent.

Pretty good, but the section headers are a bit redundant, and it abruptly cut itself off at the green-highlighted bit in the pdf.

Trying out LaTeX in VS Code. Bold and italic keybindings didn’t work out of the box without a plugin, which I found annoying.

Today I decided to try out OpenAI’s recently released “AI” chatbot called “ChatGPT“. There have been so many advancements in machine learning, in such a short amount of time, that my normal interest in neural networks has been temporarily exhausted, but the hype around this particular system cannot be ignored. I can ignore technical articles from r/machinelearning easily enough, but I find it much harder to ignore the relentless intrusions into my Tik Tok feed by content creators hyping up the capacities of ChatGPT. I’ve put together a little video gallery of the sort of Tik Tok videos I’m referring to, so you can see what I mean. There’s stuff you might expect, like using it to help build apps; stuff that’s a little eccentric, like asking it for a plan to end world hunger or an explanation of dark matter; and then there’s stuff that is at least morally nebulous, like having it write opening sentences to people on dating apps. Okie doke, I guess…


Test 1.

Have ChatGPT write me a small Python program to extract all the links I saved to Google Keep this year, so that I can move away from that program and complete my transition to a self-hosted note-taking application called “Joplin“. I have dozens of these sorts of tasks needing to be done, so I figured this would be a good representative test.

Prompt:

“write a python program that loops through a directory of json files and extracts any website urls inside each json file, them[sic] puts them into a list, and then exports that list to a csv file.”

Output

Verdict: success. The program worked correctly on the first execution. Not the most elegant code in parts, but good enough.
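ChatGPT’s actual output isn’t reproduced here, but for reference, a minimal program of the kind the prompt asks for might look something like this (my own sketch; the directory name and URL pattern are assumptions, not what ChatGPT wrote):

```python
import csv
import json
import re
from pathlib import Path

URL_PATTERN = re.compile(r"https?://[^\s\"'<>]+")

def extract_urls(value):
    """Recursively collect URLs from any strings nested inside a parsed JSON value."""
    if isinstance(value, str):
        return URL_PATTERN.findall(value)
    if isinstance(value, dict):
        return [url for item in value.values() for url in extract_urls(item)]
    if isinstance(value, list):
        return [url for item in value for url in extract_urls(item)]
    return []

urls = []
for path in Path("keep_export").glob("*.json"):  # directory of Google Keep JSON exports (assumed name)
    with open(path, encoding="utf-8") as f:
        urls.extend(extract_urls(json.load(f)))

with open("links.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["url"])
    for url in urls:
        writer.writerow([url])
```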


Test 2.

I recently caused myself considerable suffering, on the order of ~30 hours lost to trial-and-error debugging, by casually trying to upgrade my home Linux server without following a guide. Can ChatGPT offer me any life advice, given my lack of progress in fixing the problems I caused?

Prompt:

“I tried to upgrade Ubuntu 20.04 server to Ubuntu 22.04, but something went wrong during the upgrade and once it finished, the package manager, Aptitude, now wont connect to the repositories listed in /etc/sources/list.d and gives a whole bunch of ‘https cannot connect’ errors. Manually reinstalling Aptitude from the [sic] along with about a dozen of its dependencies has not fixed the problem. The server will boot and will run all its services as a headless server, but trying to log into any version of ubuntu-desktop crashes out of the GUI and only terminal is stable. Should I just give up on this install, and do a backup and start again with a fresh install?”

Output

Verdict: success? Probably it gave me sage advice, congruent with at least one comment on stackoverflow, but ultimately of unknowable veracity.