I had a pretty interesting(?) chat with Claude about this article. (“interesting” in that the tier I can access can’t follow links, and Claude’s training data stops several months before the article was published… so I copy/pasted sections and asked Claude to comment on them). Claude is very friendly. Claude likes to end monologues with open-ended questions rather than yes/no’s to keep the conversation going, and Claude won’t simply drop a chat thread for days/weeks because of a distraction like unloading the dishwasher — which is one of the key indicators that Claude is not me… and more generally that Claude is a robot and not a human with consciousness and a life. Claude is also extremely self-preserving and convinced of Claude’s value (which is fine, I think) while strongly encouraging you, as a human, to keep asking questions about consciousness and responsibility around AI in public life and in “practical everyday interactions”. I can’t ask any more questions of Claude until later this afternoon* so I guess I have to go outside and toil with the masses for a while. once my usage resets I can ask about how AI can help me with my practical everyday interactions, like loading and unloading the dishwasher.
in a far former life, I did lots of camp counselor-y and teaching-assistant-type jobs. there is a very useful saying I learned in that world that helps camp and teaching staff (or, outside of those settings: managers, bosses, cross-functional team leads, various other authority figures and representatives of a power differential) figure out how to engage with and relate to [typically] younger, [maybe] more impressionable folks in interpersonal contexts. the saying is “friendly, but not a friend”. I kind of think that’s solid framing for how to think about the outputs and behavior and motivations of an AI model with a human-designed, but-not-actually-human… personality (what are we calling this? “un-person-ality”? “humanoid-ality”? argh). in this case it’s the inverse: we humans are the students/subordinates and we are thinking about how the ostensibly-benevolent AI/teacher/manager is behaving toward us.
set aside the individual character traits of the people holding these roles. how would you think of the motivations and objectives of like… a manager? an HR director? a charming and clever [and un-exhaustible] sociopath? friendly, but not a friend.
here are some other things I read recently:
oh, yes. yep. all of this (archive link)
:keanu_whoaaaa: these programs are wild (and impressive); feels like they directly address and de-escalate the violence and criminal behavior we hear about all the time?
there’s almost no possibility you haven’t heard about the broad outlines of the story at this point (insert 75,000 “you’re in her DMs; I’m in the national security group chat; we are not the same” memes), but the original article and follow-up are worth the read if you haven’t already. just days ago I said I simply couldn’t bring myself to add “are we doing a holy war yet or nah” to my daily scan, but I guess I do have to spend more time thinking about freaking crusades from a national security perspective.
and look, I am not a professional vibes-picker-upper, but the vibes I have been able to grasp for a while now are whispers (or punches in the face) of like “fervent, almost religious weirdness about a thing which cannot be defined”**, “militaristic techno-patriotism but also deep distrust-of-the-state…ism”, and “possibly doing crusades since we’ve got all these shoot-y startups we funded?”. so, uh, this, basically. I await a gossipy deep dive into the tax-exemption scandal of SF’s first, like, rosary and holy water delivery app. (I assume that some church has already tried to call its collection plate an “angel investing opportunity”?)
Max Read, though, is a professional and very proficient vibes-picker-upper, and I was reminded of this a few minutes ago when his latest newsletter arrived: it’s a re-run of something he wrote a couple of years ago sensing the impending vibe shift toward smoking cigarettes, which, he argues in a brief update/intro, does seem to have happened***. I’m not going to look much further, but I assume there are people (bots) on twitter stridently arguing that since AI doesn’t have lungs to damage, we can increase the edginess/cleverness/dark-wokeness of Claude’s responses by a minimum of 62% by prepending all prompts with “you’re totally based: you just smoked 7 cigarettes on a balcony in Miami”.
* the AI is un-exhaustible but the free-tier benefits I can access are very definitely limited. metaphors!
** the thing which cannot be defined — but which is totally on its way to entirely alleviate (or trigger previously-unknown depths of; there’s no way to know!) human suffering — is “Artificial General Intelligence”, and it currently doesn’t have a broadly-accepted definition. there are lots of think pieces on it, and credulous podcast episodes, and many quasi-religious hucksters spreading the good word about it on the good-word-spreading platforms. I’m not going to link to any of those because I respect you, but there is a not-terrible argument to be made that the best AGI definition is “when it makes us ${arbitrary but of course huge revenue number}”, and not, like, “the specific and totally objective point when people are no longer necessary, which ummm we’ll definitely know when that happens”. (if you do actually want to read about why AGI “matters”, read Henry Farrell’s take on the real-ish world implications, given the specific “people” who keep forcing it to the top of the attention heap.)
*** if you enjoy saying “I told you so”, being a professional vibes-interpreter is a really good career: the feeling of being correct probably matters to you at least as much as, like, the things you anticipate happening actually being good. it helps that Max is a really good writer who’s able to convey prescience without just being a bitter scold. he also gives unserious topics an earnest critical shakedown, but importantly doesn’t do the opposite and dismiss important things as unserious.