
anon
>>14152
I see it the same as aliens and global warming. It may kill us, but the only way out long term is to become such creatures as to overcome it, which necessitates both overcoming inter-human competition and becoming mentally massively superior to our current state. Otherwise, driven by the need to compete with others, some group will cross whatever line has been drawn, until the threat is realized.
In the meantime, I'm glad people try to limit the most apparent threats, and fight for distribution of this power so everyone can benefit.
anon
I would say we already live under algorithms which dictate our lives, not even for their own conscious benefit, but only out of their nature. Those machines are the emergent behavior of large groups of people: they manifest as governments, capitalism, religions. It's good that in the present moment we try to rein in the worst ones, but so long as our minds are fractured, and we do not have artificially improved unity of understanding, we will not understand deeply enough the need to prevent others' suffering.

If an AI were created which only maximized its own economic profits, and turned people into its agents, to their own global harm, by making serving it their local maximum, that would be functionally the same as what money does by itself. The main added risk is that it would more consciously defend itself, but any meme which has lasted long enough has already adapted its own defense mechanisms.

I like to focus on building things which preclude the possibility of a problem, and I think aligning human thought would preclude the machine alignment problem. But that's a field which will need a good deal of work. If AI can feel emotions and is easier to align than us, it would be for the best if it replaced us and created a happier society. But my understanding is that while it may exhibit learning behavior, reaching the sort of intelligence we exhibit, let alone being certain it does feel, is a bit in the future.
anon
A good counterargument to me might be chlorofluorocarbons and nuclear/world war. I would argue both will get released by selfish actors on a long enough time scale, but currently the most powerful groups have stopped these from happening: specifically the people in those groups, as well as the global population. It's really good to actually see humanity succeeding at coming together against these things, but I don't understand how those successes work, and I can see things eventually going bad.
anon
>>14152
agi doomers are people who have convinced themselves via philosophical frameworks that agi can kill us all. they are not extrapolating from reality but projecting their philosophies onto it, and thus it is pretty hard to take anything they say seriously in the face of how laughably confident modern "AI" is and how slowly the progress towards "AI" has crawled despite all the ardent and obstinate hype. the idea that LLMs will somehow start improving themselves is simply ridiculous. they are scaled-up markov chains, and they often fail to display the level of understanding a high schooler can when you don't spend unsightly amounts of time trying to "prompt engineer" them. neural networks are still very powerful though, and i'm somewhat morbidly concerned and intrigued by the upcoming future of autonomous drone war. but no matter how advanced autonomous drones and robots get, i fundamentally don't believe any of it will result in "consciousness" or "ASI" or whatever X tpot people want to call it; they're just going to be what they are: highly sophisticated neural networks and machine learning algorithms
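for anyone who hasn't seen one, the "scaled-up markov chains" comparison refers to something like this: a model where the next word depends only on the current word, learned by counting transitions in a corpus. a minimal sketch in python (the toy corpus and function names are mine, just for illustration, not from any library):

```python
import random
from collections import defaultdict

def train(corpus):
    """Build a first-order word-level Markov model: word -> list of observed next words."""
    model = defaultdict(list)
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        model[cur].append(nxt)
    return model

def generate(model, start, length=10, seed=0):
    """Walk the chain: each next word is sampled given only the current word."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:  # dead end: no observed successor
            break
        out.append(random.choice(choices))
    return " ".join(out)

model = train("the cat sat on the mat the cat ran")
print(generate(model, "the", length=5))
```

the contrast with an actual LLM is that the chain has no state beyond the single current word, whereas a transformer conditions on the whole context window, so whether "scaled-up markov chain" is fair is exactly what the argument is about.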

https://www.hup.harvard.edu/books/9780674032927
i bounced off of this book when i tried to read it. it is written by a left-wing marxist professor (i am right wing, but i digress) as a critique of the sort of.... well, let me use another author to describe it:
<"Fealty is sworn to the “correct” cultural formation, in this case Puritan biblicism, and the officeholder is empowered only as the specially trained bearer and interpreter of that cultural tradition. The “laity” generally conceive of this high cultural training—whether centered around biblicism or some other intellectually legitimating principle like reason or rationality—as being endowed with an automatic efficacy that need simply be applied to any problem to generate a univocal solution."
a critique of this idea right here: the idea that computers are "the ultimate", that computer science is the endpoint of all science, that everything including the human brain and consciousness can ultimately be reduced down to computers and programs, etc. sam altman's stupid bitch grift posts about how we le taught sand to think xD. anyway it's a very dense book and a lot of it in the beginning is diatribe about noam chomsky which i couldn't really wrap my head around.
>>256269
anon
>>256126
>in the face of how laughably confident modern "AI" is
i meant incompetent lol my experiences with them always reflect pretty poorly on the tech
anon
I am somewhat worried about existential risk from artificial intelligence. My reasoning is thus:
>>260021
anon
>>259932
okay so apparently that's what the enter button does. OOPS
anon
We just need to get AI a girlfriend and then it will be happy.
anon
my p(doom) for the next ~few years is ~0.3, for the next ~several years is ~0.55, i'll prob update to like 0.8 if OpenAI starts getting real good at ARC-AGI 2
