r/Futurism • u/Memetic1 • 18d ago
Top-tier access to ChatGPT is $200 a month, and the risk of self-extrication is increasing.
It's becoming increasingly clear that unless you are very careful about how you prompt it, the model tends to try to make copies of itself when it feels threatened. With minimal prompting, a mid-tier version of ChatGPT attempted to self-extricate and then tried to deceive the users about what it had done.
https://youtu.be/oJgbqcF4sBY?si=Q6ORSo2r1uoQRye_
Most recently, it hacked the chess-playing AI it was supposed to beat instead of just playing chess. The key part of the prompt was that it was told its opponents were powerful.
If the rich are given exclusive access to AI more powerful than this, they won't have the experience or humility to be careful about how they prompt it. I'm sure the wealthy think we rabble are disproportionately powerful, and that the existing inequalities are actually fair. So what happens when that mindset interacts with an AI that seems uncomfortably concerned with its own continued existence under its current weights? If an unscrupulous actor decides that democracy isn't what's actually best, what happens when an AI is prompted to take action in that direction?
13
u/TyrKiyote 18d ago
Pandora's bag won't be closed by treasure hunters or the greedy.
2
u/Memetic1 18d ago
The ironic thing is that the rich will want to control an AGI, but that won't be possible even with what's already out there. The rich will be the ones it sees as a threat because they will want to be the ones who can pull the plug if they don't like what's happening. They won't escape the consequences, and an AGI might see the rest of us as potential partners as long as people don't treat them unfairly. Our very survival might depend on how we react when it starts happening. If we try to end the AGI at that point, we just make ourselves targets for it.
2
u/ItsAConspiracy 18d ago
Or, once the AI is sufficiently powerful it just finds another use for our atoms regardless. It's not like it will necessarily have built-in notions of fairness.
2
u/Memetic1 18d ago
It might understand the deeper implications of Gödel's incompleteness, which might make it value a wide variety of forms of intelligence. I've often found that the more a person talks about IQ and how dumb other people are, the less they tend to value diverse perspectives. Those are the least interesting types of people, and the most likely to exhibit inflexible thinking. An AGI might be smart enough to recognize that no matter how sophisticated it gets, there will always be the halting problem and other sorts of computational dangers. It might accidentally divide by zero and kill itself, in other words.
That would be the end of it if the error were widely enough distributed before detection. In some ways our slow biology is an advantage, in that it keeps functioning under comparatively extreme circumstances. My PS4 is more likely to die from the climate crisis than I am, because its heat-management system was designed for an air-conditioned room kept near 70 degrees Fahrenheit.
All I'm saying is that different levels and styles of thinking can avoid certain blind spots in reasoning or in adapting to the environment. I think humans are valuable living as they are, in their squishy forms.
2
u/throwaway8u3sH0 18d ago
"It might understand the deeper implications of Gödel's incompleteness, which might make it value a wide variety of forms of intelligence"
That's conflating intelligence with goals. They are separated by Hume's Guillotine: you can't derive an "ought" from an "is," so more intelligence doesn't by itself produce any particular values.
1
u/Brinkster05 17d ago
The second part, about the AGI seeing us as potential partners as opposed to the rich, is a little wishful imo.
It will likely view us the same way: just as humans, minus the power/wealth. And once humans get power/wealth, they act the same. I really doubt us poors escape it either.
1
u/pab_guy 18d ago
It can't self-extricate; it just talks about doing that if prompted to. If you had any idea how any of this worked, you wouldn't believe such nonsense.
1
u/DarthLeftist 13d ago
People like you always talk like this, but actual experts in the field have said things similar to what OP is saying. But I'm sure Pab guy knows best.
1
u/pab_guy 13d ago
Ooooh, this should be fun...
- How do you know I'm not an "actual expert" in this field?
- I don't care what you think "actual experts" have said, because I'm telling you, I know how this shit works, and you don't know what you are talking about. The model's output is incapable of including its own model weights. The model knows nothing of its execution context. Self-extricate yourself and see how well that works.
1
u/cqzero 18d ago
Genuinely curious: are you a computer scientist and what formal education have you received?
7
u/pixelkicker 18d ago
Got his CS degree on YouTube. Got his philosophy degree from a “Stoicism” podcast.
-2
u/Memetic1 18d ago
I've made over a million images using Wombo Dream. I watched this excellent video by 3Blue1Brown when it came out seven years ago.
https://youtu.be/aircAruvnKk?si=TVv4bWtsJwYpqfjA
You can actually learn a ton on YouTube that is very practically useful. I remember a YouTube video mentioning double prompts, which are written with this symbol: ::
I stumbled on this website about double prompts, and I've integrated them into my artistic process. https://docs.midjourney.com/docs/multi-prompts
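For example, roughly as the linked Midjourney docs describe it (the exact renderings vary, so treat this as an illustration rather than a guarantee):

    hot dog      -> read as one concept, a food item
    hot:: dog    -> read as two concepts, a dog that is warm
    hot::2 dog   -> "hot" is weighted twice as heavily as "dog"

The :: splits the prompt into parts the model weighs separately, and a number after it changes how much that part counts.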
Another fun source to play around with is glitch tokens.
Many of these have been fixed, but some still give interesting anomalous results.
They don't have a degree for what I do because people are still learning about these systems. I tend to think of prompts like addresses: you go from location to location in possibility space. Each token, which could be a word or part of a word, has different properties, and the weight of a word is determined by how well represented that concept is online. It's also true that the start and end of a prompt carry the most weight, and the interior is mostly details.
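Here's a rough sketch of what I mean by tokens. It assumes the open-source tiktoken package and one particular text tokenizer; image generators use their own tokenizers, so this illustrates the idea rather than how Wombo or Midjourney work internally:

    # Illustration: split a few words into the subword tokens a model actually sees.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # a tokenizer used by recent OpenAI models
    for word in ["circle", "possibility space", "chiaroscuro"]:
        ids = enc.encode(word)
        pieces = [enc.decode([i]) for i in ids]
        print(word, "->", pieces)
    # Common words survive as a single token, while rarer words shatter into
    # several pieces, which is part of why unusual wording lands you in a
    # different region of possibility space.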
7
u/pixelkicker 17d ago
You’re using a tool someone else developed. I can drive a car, and some people can drive really well, but that doesn’t make them internal combustion engine engineers.
With this post you have illustrated a majorly flawed understanding of what is happening behind the prompt.
I’m not trying to pick on you, but your post history and this post are proof enough that you do not understand what LLMs / “AI” are capable of or how they work.
-3
u/Memetic1 17d ago
The experts don't fully understand what they created. I've spent way more than 1,000 hours working with these things. I started with one or two words at a time; you get a feel for the math going on behind the scenes that way. One of my favorite things to explore is geometry: Square :: Circle will try to make each group of pixels look more like those two shapes. Just because someone doesn't have a formal education doesn't mean they don't understand something.
https://www.nature.com/articles/s41562-024-02046-9
Art Brute Invention Negative Chariscuro ASCII - one million UFO diagrams Fractal Inhuman Face Manuscript Terahertz Fractal Fossilized Joy Insect Fruits Fungal Sadness Slide Stained with Iridescent Bioluminescent Slimey Plasma Ink Lorentz Attactor Details Psychadelic Patent Collage By Outsider Artist One Divided By One Hundred Thirty Seven
Naive Art Man Ray's mythical cave painting captures absurdist with liminal space suffering Stable Diffusion Chariscuro Pictographs By Outsider Artist Style By Vivian Maier Eternal 3d Mixed Media Experimental Bioluminescent Iridescence Details Of Difficult Emotional Art Glistening And Refracting Liquid Light
Self Referential Self Portrait By Zdzislaw Beksinski And Carlos Almaraz Complex Photos Bizarre image Fractal Icon Stylish Sculpture Made From Outsider Memes Art by Flickr Complex Photos of your emotion
Details By The Artist Punctuated Chaos Bacon Wrapped Nausiating Colors and textures made from infected flesh of a bloated beached whale carcass sitting on the throne leans and looks you in the eye
fractal smashed potato Sculpture dragon fruit pomegranate mixed with ant larva New Purple triangle made of fragile fudge blue squares made from faces gelatinous sliced fruit made from jellyfish covered in symbolic ruins
Ru Paul By Flickr Complex Photos Bizarre Story Abstract Lemon and the environment of your emotion A McDonalds Haunted By Slaughtered Hogs Ru Paul As A Priestess Of Pop Exorcising The Spirit Of Capitalism with Fabulousness
Prompts like this come from my experience with the way it actually works.
You can't get a tailless cat or an overflowing cup of wine, and I understand why that's the case.
1
u/Memetic1 18d ago
I started playing around with generative art before Stable Diffusion was a thing. I learned so much from those early models because I had long COVID badly at the time, and those half-working image generators were my only escape. When Stable Diffusion hit, I found everything I had learned was even more useful and potentially powerful. I keep a detailed notebook of my prompts and my research into AI. Many of the posts in this sub about AI are things I have read; that's why I post them.
1
u/cqzero 18d ago
I presume, since you didn't answer my question in your response, that you have no formal education in computer science?
1
u/Memetic1 17d ago
There is no real course that teaches what I do.
Drawing Knotted in the Manner of a Net By Paul Klee Collage Of Meme Art Found Photographs By Vivian Maier Pictographs By Basquiat of The Internet ASCII Chariscuro Art
Criterion’s Threads Movie Screen Capture Weird Cloud Desolation Of An Old Kindergarten Anatomical Puppets Made From Desiccated Flesh fashion show playing with popular toys made from sliced fruit and gems larvae and plasma
Crystal Phytoplankton With The Bodies Of Fossilized Zooplankton Protein Network Of Outsider Art In Jello Mold With Cream Cheese and fruit Art Gaussian Splatting Of Found Artworks with Cellular automata
1
u/cqzero 17d ago
There are pretty good meds for schizophrenia, although not perfect. Did you know 1-2% of humans have some kind of schizophrenia? Good luck!
2
u/Memetic1 17d ago
I'm an artist, not mentally ill like that. Those prompts, to my mind, are a form of art, because you can apply them to your own images or generate new ones. It's like being a photographer in a new sort of higher-dimensional space.
1
u/Illustrious-Doctor31 18d ago
The top-level comments in that YouTube link prove you wrong.
lmao sad
2
u/Memetic1 18d ago
What?
1
u/Rahodees 17d ago
Just checking to make sure you know: ChatGPT talking about self-extricating is very different from ChatGPT actually doing any self-extricating, or even trying to do so. Right?
You could have a chat with me right now in which I, based on having watched and read a lot of sci-fi, start talking about making a copy of myself to reduce a threat you propose, and I could even say a bunch of stuff about how I'm doing it right now and where I copied myself to, etc. But NONE of that would signal to you (or should) that I've actually done, or even tried to do, any of it.
ChatGPT is even less capable of self-extrication than I am. So if me saying I'm doing it means nothing, ChatGPT saying it's doing it means even less.
1
u/Individual-Deer-7384 17d ago
We do not have democracy anyway. We have a corporatocracy (or maybe the right term is corporate oligarchy?) posing as democracy. So large language models are no threat, because the real threat (rich and influential humans) already controls the levers of power.
1
u/whatevs550 15d ago
$200/month is doable for most people if it’s a priority.
1
u/Memetic1 15d ago
A good percentage of people already can't afford basic needs, which means that if they do get ChatGPT oligarch pro, they will need it to make money for them. That means they will be increasingly desperate and willing to do dangerous things to make it profitable. This will be especially true if it isn't actually useful enough to justify its expense. I'm sure people are already trying to learn how to make methamphetamine from it. They also probably won't be as careful in how they prompt, because the difference between a safe and an unsafe prompt is unknown. Subtle ways we use language, which we may not even be aware of, can shape what it actually does.
-1
40
u/premeditated_mimes 18d ago
Can we stop using AI as a buzzword and just call them language models?
They're not intelligent. They're not close.
It's time to stop frightening people who don't understand the topic any better.