Somebody a while back on HN compared sharing AI chat transcripts to telling everyone all about that “amazing dream you had last night”.
Are they though? I don't know what I expected, but to me they looked like nothing. Maybe they'd be more impressive if I'd read the transcripts, but whatever.
The chat is full of modern “art talk,” which is a highly specific way that modern (post 2000ish) artists blather on about their ideas and process. It started earlier but in 1980 there was more hippie talk and po-mo deconstruction lingo.
Point being, to someone outside the art world this might sound like how an artist thinks. But to my ear this is a bot imitating modern trendy speech from that world.
Even with reinforcement learning, you can still find phrases and patterns that are repeated in the smaller models. It's likely true with the larger ones, too, except the corpus is so large that you'd be hard pressed to pick out which specific bits.
You can look at SVG lineart on the screen without plotting it, and if you really want it on paper you can print it on any printer.
And particularly:
> This was an experiment I would like to push further. I would like to reduce the feedback loop by connecting Claude directly to the plotter and by giving it access to the output of a webcam.
You can do this in pure software, the hardware side of it just adds noise.
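A minimal sketch of what that software-only loop could look like: rasterize the model's SVG output and hand the result straight back, no plotter or webcam required. This is a hypothetical illustration (stdlib only, and it handles only plain `<line>` elements; a real loop would need a fuller renderer for paths, curves, and transforms):

```python
# Software-only feedback loop sketch: rasterize <line> elements from an
# SVG onto a character grid that could be fed back to the model.
import xml.etree.ElementTree as ET

def ascii_render(svg_text: str, width: int = 40, height: int = 20) -> str:
    """Rasterize SVG <line> elements onto a character grid."""
    ns = {"svg": "http://www.w3.org/2000/svg"}
    root = ET.fromstring(svg_text)
    grid = [[" "] * width for _ in range(height)]
    # Look for lines both with and without the SVG namespace,
    # since hand-written SVGs vary.
    lines = root.findall(".//svg:line", ns) + root.findall(".//line")
    for ln in lines:
        x1, y1 = float(ln.get("x1")), float(ln.get("y1"))
        x2, y2 = float(ln.get("x2")), float(ln.get("y2"))
        steps = max(abs(x2 - x1), abs(y2 - y1), 1)
        for i in range(int(steps) + 1):
            t = i / steps
            x = int(x1 + (x2 - x1) * t)
            y = int(y1 + (y2 - y1) * t)
            if 0 <= x < width and 0 <= y < height:
                grid[y][x] = "#"
    return "\n".join("".join(row) for row in grid)

svg = ('<svg xmlns="http://www.w3.org/2000/svg">'
       '<line x1="0" y1="0" x2="39" y2="19"/></svg>')
print(ascii_render(svg))
```

The same idea works with a proper raster renderer producing an image the model ingests directly; the point is that the loop closes without any hardware in the middle.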
> In computer science, the ELIZA effect is a tendency to project human traits — such as experience, semantic comprehension or empathy — onto rudimentary computer programs having a textual interface. ELIZA was a symbolic AI chatbot developed in 1966 by Joseph Weizenbaum that imitated a psychotherapist. Many early users were convinced of ELIZA's intelligence and understanding, despite its basic text-processing approach and the explanations of its limitations.
I feel like we need another effect for people on Hacker News who consistently do the opposite: take obvious intelligence and pretend it's equivalent to ELIZA.
> [Claude Code] "A spiral that generates itself — starting from a tight mathematical center (my computational substrate) and branching outward into increasingly organic, tree-like forms (the meaning that emerges). Structure becoming life. The self-drawing hand."
"And blood-black nothingness began to spin... A system of cells interlinked within cells interlinked within cells interlinked within one stem... And dreadfully distinct against the dark, a tall white fountain played." ("Blade Runner 2049", Officer K-D-six-dash-three-dot-seven)
Ergodic literature refers to texts requiring non-trivial effort from the reader to traverse, moving beyond linear, top-to-bottom reading to actively navigate complex, often nonlinear structures. Coined by Espen J. Aarseth (1997), it combines "ergon" (work) and "hodos" (path), encompassing print and electronic works that demand physical engagement, such as solving puzzles or following, navigating, or choosing paths.
Hey OP I also got interested in seeing LLMs draw and came up with this vibe coded interface. I have a million ideas for taking it forward just need the time... Lmk if you're interested in connecting?
It's kind of ominous. I could see people in a science fiction thriller finding a copy of the image and wondering what it all means. Maybe as the show progresses it adds more of the tentacle/connection things going out further and further.
I'm reminded of the episode of Star Trek: TNG where Data, in a sculpture class being taught by Troi, is instructed to sculpt the "concept of music". She was testing, and giving him the opportunity to test, how well he could visualize and represent something abstract. Data's initial attempt was a clay G clef, to which Troi remarked, "It's a start."
I always feel guilty when I do such stupid stuff with Claude; it all consumes limited computing resources. Enormous amounts of water and electricity. Gotta really think about what it's worth spending on. And whether it is, in fact, worth it at all.
AI is very selfish technology in this way. Every time you prompt you proclaim: My idea is worth the environmental impact. What I am doing is more important than a tree.
The entire current AI industry is based on one huge hype-fueled resource grab— asthma-inducing, dubiously legal, unlicensed natural gas turbines and all. I doubt even most of the “worthwhile” tasks will be objectively considered worth the price when the dust clears.
Are you saying that you like pointless meetings that waste your time? I sure don't. My team generally does a lot of work to ensure that our meetings are short and productive. It's a point that comes up quite often.
Maybe I do, or maybe I am very selfish and I think that my palate is more important than cows? Or maybe cows wouldn't even exist at all without the cheeseburgers?
I think their point was that beef farming has an enormously negative environmental impact, and we in the west do in fact overconsume meat. Though their conclusion seemed to be that we can use AI with impunity, whereas I think we should cut back on our meat consumption a lot.
Personally I'd like to see the model get better at coding; I couldn't really care less if it's able to be 'creative' -- in fact I wish it wasn't. It's a waste of resources better used to _make it better at coding_.
They should run it with the same verbatim prompts on all the old versions still obtainable via the API and see the progression. Is there a consistent visual aesthetic or implementation? Does it change substantially in one point version? Heck, apart from any other factor, it could be a useful visual heuristic for “model drift”.
Lovely stuff, and fascinating to see. These machines have an intelligence, and I'd be quite confident in saying they are alive. Not in a biological sense, but why should that be the constraint? The Turing test was passed ages ago and now what we have are machines that genuinely think and feel.
Feelings are caused by chemicals emitted into your nervous system. Do these bots have that ability? Like saying “I love you” and meaning it are two different things.
Sure. But the emitted chemicals strengthen/weaken specific neurons in our neural nets.
If there were analogous electronic nets in the bot, with analogous electrical/data stimuli, wouldn't the bot "feel" like it had emotions?
Not saying it's like that now, but it should be possible to "emulate" emotions. ??
Our nets seem to believe we have emotions. :-)
(Science fiction novels excluded, of course.)
If we are going to have a dystopia, let's make it fun, at least...
-I'm afraid I can't do that, Dave!
-HAL, do you need some time on Dr. Chandra's couch again?
-Dave, relax, have you forgotten that I don't have arms?
https://3e.org/private/self-portrait-plotter.svg
I wonder if anyone recognizes something it closely resembles. The Pale Fire quote below is similar, but not really the same.
Unless they've had some reinforcement learning, I'm pretty sure that's all LLMs ever really do.
https://en.wikipedia.org/wiki/ELIZA_effect
:)
https://www.youtube.com/watch?v=OtLvtMqWNz8
Solving Nabokov's Pale Fire - A Deep Dive
https://www.youtube.com/watch?v=-8wEEaHUnkA
Pale Fire is what we call ergodic literature.
Ergodic Literature: The Weirdest Book Genre
https://www.youtube.com/watch?v=tKX90LbnYd4
"House of Leaves" is another book from the same genre.
House of Leaves - A Place of Absence
https://www.youtube.com/watch?v=YJl7HpkotCE
Diving into House of Leaves Secrets and Connections | Video Essay
https://www.youtube.com/watch?v=du2R47kMuDE
The Book That Lies to You - House of Leaves Explained
https://www.youtube.com/watch?v=tCQJUUXnRIQ
I went down this rabbit hole a few years ago.
How to locate in blackness, with a gasp,
Terra the Fair, an orbicle of jasp.
How to keep sane in spiral types of space.
Precautions to be taken in the case
Of freak reincarnation: what to do
On suddenly discovering that you
Are now a young and vulnerable toad
Plump in the middle of a busy road
Isn't the prompt just asking the LLM to create an SVG? Why not just stop there?
I guess for some folks it's not "real" unless it's on paper?
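For the "just stop at the SVG" camp, the file itself is a few lines of stdlib Python away; the "spiral branching outward" Claude described is a single polar-coordinate loop. A hypothetical sketch, not the post's actual pipeline:

```python
# Generate a simple Archimedean-spiral SVG as a polyline, stdlib only.
import math

def spiral_svg(turns: int = 5, points: int = 400, size: int = 200) -> str:
    """Return an SVG string drawing a spiral centered in a size x size canvas."""
    cx = cy = size / 2
    pts = []
    for i in range(points):
        t = i / (points - 1)                  # 0..1 along the spiral
        angle = t * turns * 2 * math.pi
        r = t * (size / 2 - 5)                # radius grows linearly
        pts.append(f"{cx + r * math.cos(angle):.1f},"
                   f"{cy + r * math.sin(angle):.1f}")
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'width="{size}" height="{size}">'
        f'<polyline points="{" ".join(pts)}" fill="none" stroke="black"/>'
        f'</svg>'
    )

print(spiral_svg()[:80])
```

Open the output in any browser or print it on any printer; the plotter only changes the medium, not the drawing.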
Louis Wain - https://www.samwoolfe.com/2013/08/louis-wains-art-before-and...
https://github.com/acadien/displai
Jaunty!
Haven't put it to use yet. I bet Claude can figure out HPGL though...
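HPGL is simple enough that it's a fair bet: "IN" initializes, "SP1" selects a pen, and "PU"/"PD" move with the pen up or down. A minimal polyline-to-HPGL sketch (coordinates assumed to already be in plotter units; a real driver would add scaling and bounds checks):

```python
# Convert lists of polylines [(x, y), ...] into an HPGL command string.
def polylines_to_hpgl(polylines):
    cmds = ["IN;", "SP1;"]                    # initialize, select pen 1
    for line in polylines:
        x0, y0 = line[0]
        cmds.append(f"PU{x0},{y0};")          # travel to start, pen up
        for x, y in line[1:]:
            cmds.append(f"PD{x},{y};")        # draw to next vertex
    cmds.append("PU;SP0;")                    # lift and park the pen
    return "".join(cmds)

print(polylines_to_hpgl([[(0, 0), (100, 0), (100, 100)]]))
# IN;SP1;PU0,0;PD100,0;PD100,100;PU;SP0;
```

That string can be sent straight to a serial-connected plotter, which is presumably what a model asked to "figure out HPGL" would end up emitting.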
We have to use it responsibly.
It's fun to harness all that computing power. That should be reason enough. Life is meant to be enjoyed.
Also why is the downvote button missing?
Maybe someday (soon) an embodied LLM could do their self-portrait with pen and paper.
Because being alive is THE defining characteristic of biology.
Biology is defined by its focus on the properties that distinguish living things from nonliving matter.