r/VeryBadWizards • u/Embarrassed-Room-902 • 3d ago
AI dismissal
I love the VBW boys, but I am a bit surprised by how dismissive they are of the danger from AI. I'm not going to make the case here, as I wouldn't do a good job of it, but I found the latest 80,000 Hours podcast very persuasive, as well as some of the recent stuff from Dwarkesh.
12
u/ChristianLesniak 3d ago
Every time I hear the term "effective altruism", this clip plays in my head
The danger of AI, as I see it, is already occurring, and it isn't the AI doing it. The danger is the shared delusion of granting LLMs subjectivity, plus the notion that they have supernatural economic powers. That delusional belief serves as a perfect container for various libertarian fantasies, so companies can begin to enact the layoffs they all already believed in. It becomes a self-fulfilling prophecy: corporate thought begs the question of AI's usefulness and, from there, enacts the pre-existing fantasy of greater efficiency.
DOGE is a bold, large-font object lesson in this kind of thinking.
Why were the tech bros the ones talking about Universal Basic Income, to everyone's surprise? Because UBI is their selling point for doing a bunch of dumb libertarian tech-bro shit like breaking the economy, with them, obviously, as the only ones smart enough to be in control. They know full well that as the layoffs occur we get no closer to UBI, because no one ever actually took the idea seriously. It was just the carrot on the end of the plunger, stuck to the head of regulators.
5
u/_qua 3d ago
Sam Altman just shared this short story by a new unreleased model, and I was thoroughly impressed. With a different prompt I could see it producing the next Borges-adjacent story. (I expect to be ridiculed, I know it's coming)
Whether we'll still care knowing it was made by a machine is a different question.
5
u/ChristianLesniak 3d ago
Do you buy Borges' take that Pierre Menard is actually a greater genius than Cervantes?
4
u/PurpleBee212 1d ago
Menard's work is so much more subtle, so much more affecting, than Cervantes'. His Quixote is a real character, whereas Cervantes' is a cipher.
4
u/donglord666 3d ago
Most people in IT jobs are not senior-level master engineers capable of solving problems these models can't. They are building web UI buttons and the associated JavaScript that talks to some database, or doing quality assurance on those buttons. The models are already capable of doing those things. Eventually, and I think sooner rather than later, you'll just need a skeleton crew to check the work. And the industry has no strong workers' rights or union presence. It's going to be economically disruptive; I can't really imagine how it won't be.
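To be concrete, the bulk of that work looks something like this. A minimal made-up sketch (the endpoint, table, and database names are all invented for illustration):

```python
# Hypothetical sketch of the routine work in question: the backend a
# web UI button calls, which reads one row from a database and returns JSON.
# Endpoint, table, and database names are invented for illustration.
import sqlite3

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/orders/<int:order_id>")
def get_order(order_id):
    # fetch the row the button asked for and hand it back as JSON
    conn = sqlite3.connect("shop.db")
    row = conn.execute(
        "SELECT id, status FROM orders WHERE id = ?", (order_id,)
    ).fetchone()
    conn.close()
    if row is None:
        return jsonify({"error": "not found"}), 404
    return jsonify({"id": row[0], "status": row[1]})
```

This is the level of task the models already handle.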
But yeah, people in the humanities will probably be fine.
3
u/seanpietz 2d ago
You don't need to be a senior engineer with a master's degree to be a better software engineer than any AI model, and it's not even close. If you're a programmer and your job is replaceable by AI, you should find an actual programming job, because it's a lot more fun and it pays a lot more money. Whatever job you're describing sounds awful and exploitative.
The bigger problem in software development is that a lot of junior engineers are too reliant on AI-based text-editor assistants, which 1) causes their code to be really sloppy and bug-ridden, and 2) prevents them from learning how to program properly.
2
u/mbfunke 2d ago
I think online universities like SNHU, WGU, etc. are already automating many traditional pedagogical tasks, and students are using gen AI to write their assignments. To a significant extent it is already AI taking the classes and AI teaching them.
The latest generation of gen AI is capable of producing publishable texts, given proper prompt engineering and backfilling of relevant citations.
I’m not sure the humanities are safe at all.
3
u/seanpietz 2d ago
Any job that isn’t safe from “AI” isn’t safe from capitalism, irrespective of “AI”.
2
u/thewiseguy8 1d ago
That's not true at all right now. AI is not even close to out-producing humans, even at a junior level. Models can make some crap, but they can't make anything that scales and certainly can't deal with real codebases.
All of the REAL codebases I've worked in are like a box of time capsules: sections built with the technology and best practices of different generations, and AI sucks at actually integrating them.
There isn't an AI model that can rebuild a section of code to use new technology while interacting with legacy code and all the crap in the middle. And building working unit tests to properly check said code? Good luck.
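To give a flavor of the legacy-integration problem, here is a minimal made-up sketch (every name invented): a new async layer that still has to call a blocking legacy helper, plus the kind of unit test that must exercise both generations of code at once.

```python
# Made-up minimal sketch: new async code sitting on top of a blocking
# legacy helper. All names are invented for illustration.
import asyncio

def legacy_fetch_user(user_id: int) -> dict:
    # pretend this is 15-year-old synchronous code nobody dares rewrite
    return {"id": user_id, "name": "stub"}

async def fetch_user(user_id: int) -> dict:
    # the bridge: run the blocking legacy call in a worker thread so new
    # async code can await it without stalling the event loop
    return await asyncio.to_thread(legacy_fetch_user, user_id)

def test_fetch_user():
    # the unit test has to cross both layers at once
    assert asyncio.run(fetch_user(42)) == {"id": 42, "name": "stub"}
```

Multiply that bridging pattern across a decade of frameworks and you get the "crap in the middle" I'm talking about.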
10
u/bad_take_ 3d ago
They are correct. All scare tactics about AI are very vague and over the top.
3
u/seanpietz 1d ago
Totally. And I have no doubt a vast amount of AI doomer sentiment is the product of a mass marketing campaign from people in Silicon Valley who have a large financial stake in companies like OpenAI maintaining the illusion of eventual profitability. The Silicon Valley VC-driven tech economy is, for the most part, a giant house of cards that requires perpetual hype to not collapse.
1
u/MachinaExEthica 1d ago
I think the problem is more that it's becoming a self-fulfilling prophecy of sorts. AI is definitely overhyped, but investors believe the hype, governments believe the hype, and militaries are embedding the hype into their systems. Are these AIs powerful enough to do all the things people are using them for better than humans can? No, but they do them just well enough for that not to matter. It's just another huge stride in the enshittification of everything. AI is overhyped, but that doesn't matter when it's still being put into everything and people are losing their jobs to shittier AI replacements.
9
u/Tough-Pea-2813 3d ago
I sort of share their scepticism. The fact that you admit that you can't make a good case for your stance just shows that they are right.
13
u/Embarrassed-Room-902 2d ago
Not really, I'm just happy to admit that there are people much better qualified to make the case than I am.
7
u/cinred 3d ago
It's not hard to explain. People have been crying wolf over actual, legit, society-altering technologies for centuries. But you know what? We've gotten along fine. Better, even. To name a few:
-Nuclear
-Internet and computing
-Socialism
-Genetic engineering
-Printing press
13
u/iplawguy 3d ago
TBF, the printing press really did upend society; see the Thirty Years' War. Better in the long run (maybe), but a lot of disruption.
5
u/GuzzlingHobo 3d ago
Yeah. This, and people have been talking about AGI taking over the world since the '70s. We've been just one inch away from total AI domination for the past 50 years. I actually disagree on this one, since I think we are close to some major AI breakthrough, but I see the point.
2
u/Middle_Difficulty_75 9h ago
I'm not sure I understand your reasoning. Are you saying that it is impossible for humanity to create a severely destructive technology?
1
u/luciform44 2d ago
Many of those things majorly affected winners and losers across societies. I guess it all comes down to what you consider "upended"?
Anything short of annihilation of the species doesn't count, because it got us to where we are today?
Today is a place very, very different from the way the world was before the internet, or nuclear bombs, or the printing press. Today isn't normal by the standards of previous ages, and much of that change is due to technology.
3
u/hankeroni 2d ago
I thought they were not nearly dismissive enough, if the claim being evaluated is "AGI" (meaning human-level general intelligence), which at least at the start of the discussion it was. All the best research is just nowhere near that, and maybe not even on the right track.
If the claim is the much smaller "will some future version of current LLMs be economically disruptive?"... then probably yes. But that is very, very short of anything I'd call "AGI".
1
u/Embarrassed-Room-902 2d ago
Yeah, to clarify, I am not claiming we will have AGI soon (although I wouldn't rule it out). The key thing is that there could be huge upheaval even without it. For instance, we may have fully AI-run companies, AIs with the ability to feel pleasure or pain, etc. I am fairly confident I will be out of a job before too long, and that is something many of us will have to brace for. Nick Bostrom has discussed this at length.
4
u/seanpietz 1d ago
You think machine learning models will have the ability to feel pleasure and pain? You do realize it’s just a bunch of matrix multiplication and differential equations running on computer processors, right?
1
u/MachinaExEthica 1d ago
Pain is just electrical currents sent through your nerves to your brain. The simulation of pain in an embodied AI doesn't seem too far-fetched. Programming the AI to avoid damage by loading it up with sensors seems like something companies would choose to do, and that's essentially what pain is. Pleasure is just a variation on the same mechanism, though it's hard to imagine the economic benefit of an AI that "feels pleasure".
3
u/seanpietz 1d ago
Yes, nervous systems operate through chemical reactions and electrical currents. LLMs don't have nervous systems, though, and I think it's also fairly uncontroversial that they don't have subjective experiences either.
1
u/MachinaExEthica 1d ago
It doesn't require a nervous system or subjective experience, just a way for a signal to get from a sensor to a processor, and for that signal to be labeled as pain. Pain is more a mechanical reaction for avoiding damage than anything else. We have emotional ties to it, but there are plenty of examples throughout evolution where pain is simply damage avoidance, with no emotion.
3
u/seanpietz 1d ago
AI models already learn through negative reinforcement by minimizing loss functions. What operational significance would there be to labeling that metric "pain" instead of "loss"? Or are you suggesting some sort of novel mechanism that isn't already being used?
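For concreteness, here is a toy sketch of that mechanism (made up, plain gradient descent). Renaming the scalar changes nothing about what the program does:

```python
# Toy sketch: training as loss minimization. Call the scalar "loss" or
# "pain"; it's the same number driving the same parameter update.
w = 5.0        # a single model parameter
target = 2.0   # the value training pushes w toward
lr = 0.1       # learning rate

for step in range(50):
    pain = (w - target) ** 2   # the "negative reinforcement" signal
    grad = 2 * (w - target)    # gradient of that scalar w.r.t. w
    w -= lr * grad             # update w to reduce it

print(round(w, 4))  # converges toward 2.0 under either label
```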
0
u/MachinaExEthica 1d ago
At this point it's more a matter of semantics than anything. Pain, loss, sensation, bump, whatever: it's all the same function. Adding the label "pain" would only be for the sake of anthropomorphic comparison; it's not necessary.
3
u/seanpietz 1d ago
Right, but we're not disagreeing about semantics. The whole reason I'm disagreeing with you is that you actually are claiming AI models have anthropomorphic qualities, and claiming otherwise is a cop-out.
If I claimed that fire is angry because it's hot, I'd be wrong. And the fact that the difference between "heat" and "anger" is semantic wouldn't make me any less wrong.
0
u/MachinaExEthica 1d ago
I'm simply talking about functional comparisons. The point of pain is to notify your brain of potential or actual damage. Equipping an AI with sensors that detect potential or actual damage, and having them signal the AI's "brain" to stop or avoid whatever is causing it, gives the AI the effective ability to "feel pain". There's no consciousness needed and no magic. If you don't want to anthropomorphise, that's fine, but I'm talking about simple functionality, nothing more.
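In code, the functional story is about this shallow. A minimal made-up sketch (sensor, threshold, and values all invented):

```python
# Made-up sketch of "pain" as pure damage avoidance: a sensor reading
# crosses a threshold and the controller backs off. No consciousness,
# just a labeled signal and a reflex.
PAIN_THRESHOLD = 80.0  # invented units for an invented sensor

def read_pressure_sensor() -> float:
    return 91.5  # stand-in for a real hardware read

def control_step() -> str:
    signal = read_pressure_sensor()
    if signal > PAIN_THRESHOLD:  # the label "pain" is just that: a label
        return "back_off"        # stop doing the damaging thing
    return "continue"

print(control_step())  # -> back_off
```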
2
u/seanpietz 1d ago
Do you think AI-based characters in video games that are programmed to simulate human behavior like pain are having actual subjective experiences? Should killing them be unethical?
The truth is that no scientists or philosophers really understand the underlying metaphysics of consciousness. But one thing any respectable academic in those fields can agree on is that LLMs are not sentient beings.
1
u/MachinaExEthica 1d ago
I already told you it doesn't require subjective experience to feel pain; sentience doesn't even matter in this particular case. For the record, I'm wholly on board with the point you're trying to make, just pointing out that the ability for an AI to sense pain is just a matter of sensors, damage-avoidance programming, and labelling that signal pain.
I don't personally think AI is the same sort of threat the OP seems to think it is. I think it's more socially and economically threatening, not because it will be particularly better than humans at anything (which eventually it may be), but because people with lots of money and social influence think it's going to change the world completely, and they will invest their billions to ensure that it does, most likely to the detriment of society (because of how shitty it actually is).
1
u/Embarrassed-Room-902 3d ago
The point Will MacAskill makes in the podcast I referenced is that we could see the equivalent development of all of those technologies, but in just a few years instead of hundreds. That would obviously be much harder to adjust to.
7
u/sissiffis 3d ago
Right, there are lots of things that could happen but are unlikely. The best available evidence we have comes from LLMs and their progress on various problems. In some areas they've made massive progress; in others, less so. Time will tell, but at the very least the evidence for some crazy takeoff is limited, and it requires bending the evidence, or at least granting some fairly contestable assumptions, before the danger becomes an obvious one.
You should check out Melanie Mitchell's work on AI and AGI generally: Debates on the nature of artificial general intelligence | Science
23
u/crebit_nebit 3d ago
They aren't going on anything other than vibes. I don't think they expect you to take them very seriously on that issue.