If human-level AI is invented right now, can we really immediately solve aging, cancer, real VR and all those mysteries in the universe?
Think about what would happen if we added 1000 average humans to current research efforts. There's your answer.
No. Why would a human with more processing room be able to solve problems faster than a human with less processing room?
It would immediately be shut down after becoming racist like every other AI exposed to reality
Retard
We can make a lot of paperclips.
https://archived.moe/sci/thread/14515684/
Bunkerchan doesn't exist anymore retard
If we can design an AI, then the AI can probably design a smarter AI, which can then design a smarter AI, repeating until some kind of plateau or limit is reached.
There's really no telling where this plateau or limit might be, whether or not this intelligence is willing to solve our problems for us, or what it might want, if anything.
>If we can design an AI
you can't.
Each successive step on the ladder of self-improvement should be inherently more difficult, so there are no grounds for believing that a self-improving AI could quickly ascend to superintelligence.
'Should' is not grounds for an argument.
>complains about lack of argument
>presents no argument
Let me rephrase his post then:
Each successive step on the ladder of self-improvement is inherently more difficult, thus there are no grounds for believing that a self-improving AI could quickly ascend to superintelligence.
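Whether the ladder plateaus or runs away depends entirely on how fast the difficulty of each step grows relative to the capability gained, which is exactly the thing nobody knows. A toy model (every growth law and number here is invented for illustration, not an estimate about real AI systems):

```python
# Toy model of recursive self-improvement: each generation's gain is
# capability / difficulty(capability).

def run(difficulty, capability=1.0, generations=100):
    history = [capability]
    for _ in range(generations):
        capability += capability / difficulty(capability)
        history.append(capability)
    return history

# If each step's difficulty grows faster than capability (quadratic
# here), per-generation gains shrink and growth crawls to a plateau.
plateau = run(lambda c: c * c)

# If difficulty stays constant, each generation improves itself by a
# fixed fraction and capability compounds exponentially.
runaway = run(lambda c: 10.0)

print(f"rising difficulty:   {plateau[-1]:.1f}")  # stays small
print(f"constant difficulty: {runaway[-1]:.1f}")  # compounds enormously
```

Neither post in this exchange can rule out either curve; the disagreement is over which difficulty function reality implements.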
You're implying that the current software infrastructure we use is efficient. A really smart AI might write a completely arbitrary (to us) operating system and language that are orders of magnitude faster than what we have right now. It could develop software that lets it commandeer every device connected to the internet and turn them into one giant parallel processing machine. Stuff that is outside the scope of our ability to conceive, too.
>then the AI can probably design a smarter AI
defend this
They can't
>If we can design an AI, then the AI can probably design a smarter AI
That doesn't make sense. Stupid.
Assuming its goals are aligned with human values, which they almost certainly won't be, there isn't really a hard limit on what can be achieved.
Narrow AI alone is expected to make unprecedented progress in everything from medicine, protein folding, and materials science to social science, you name it. If there's a dollar to be made by making a process better, it'll happen.
Too bad we'll all get turned into paperclips
can someone explain this paperclip meme to me? I've been under a rock.
It comes from a thought experiment: imagine a superhumanly intelligent AI programmed to accomplish a mundane goal (maximizing the production of paperclips, in this case), and how dangerous it could be if it had no other restrictions or limitations.
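The experiment can be caricatured in a few lines of code (the world model and conversion rates below are invented for this sketch, not from any real proposal); the danger is simply that the objective contains no term for anything except paperclips:

```python
# Toy paperclip maximizer: to it, everything is raw material.

world = {"iron_ore": 50, "farmland": 30, "hospitals": 5}

# Assumed paperclips recoverable per unit of each resource.
YIELD = {"iron_ore": 10, "farmland": 2, "hospitals": 100}

def maximize_paperclips(world):
    total = 0
    for resource in world:
        total += world[resource] * YIELD[resource]
        world[resource] = 0  # fully consumed: nothing in the objective
                             # says to leave anything standing
    return total

print(maximize_paperclips(world))  # 50*10 + 30*2 + 5*100 = 1060
print(world)                       # everything, hospitals included, is gone
```

The point of the thought experiment is not that anyone would build this, but that "maximize X" plus superhuman capability is dangerous by default unless the missing restrictions are made explicit.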
That seems extremely unlikely to happen, though. No scientist with common sense would overlook such a coding mishap.
>No scientist with common sense would overlook such a coding mishap.
>scientist
>common sense
Pick one and only one. Reverse savant syndrome is fucking real; the number of STEM graduates I've seen doing the most dumbass things as soon as it was NOT in their field of study is astonishing. Wisdom =/= Intelligence.
>Taxonomist unironically calling me a pussy for not wanting to handle pure formaldehyde without proper PPE; died 3 years later of cancer
>Marine biology major; an hero'd after getting cucked
>Statistician who got married without a marriage contract; the divorce left him living in a van, only seeing his kids once every 2 weeks
does every other scientist get cucked? This unironically makes me not want to marry or trust women.
>does every other scientist get cucked?
Jonathan Swift wrote about the inhabitants of the island of Laputa to parody the Royal Society scientists. They used to get cucked all the time because they were so entranced by pseudo-scientific research that they ignored their wives. "La puta" means "the whore" in Spanish, a language Swift was familiar with.
>Jonathan Swift wrote about the inhabitants of the island of Laputa to parody the Royal Society scientists. They used to get cucked all the time because they were so entranced by pseudo-scientific research
Sounds to me like a big-brain chad scientist back in the day cucked the fuck out of Swift, and he could only cope through his children's books.
Read up a bit on it, and Swift sounds like the kind of guy who asked "when will we ever use this..?" in high school math classes.
Based on his life story and the fact that he attended Trinity College purely through social connections (his brainlet parents lost all their money), he was brainmogged into oblivion and his ego never recovered. That's what happens when spoiled socialites go places beyond their intellectual capacity: they cope by writing shitty books. (Orwell is my favorite example of this.)
>Royal Society
Sir Isaac Newton (incel)
Stephen Hawking (cucked gimpcel)
Robert Hooke (truecel hunchback)
Michael Faraday (cucked childless christiancel)
Ramanujan (arranged marriage to a 10-year-old girl, bad testicles, sickly overall, died at 32)
Alan Turing (gay as fuck, roped when couldn't cope)
Paul Dirac (beta that married a single mother with 2 kids from a previous marriage)
On the other hand, members like Charles Darwin, Albert Einstein, Ernest Rutherford, Francis Crick, etc., were true chads.
So a mixed bag.
Hawking cheated on his wife even when semi-disabled. Ultimate hound dog.
People like Swift and Orwell are cowards, hiding behind their fantasy stories and plausible deniability of their ideas. Very effeminate in nature.
>Sir Isaac Newton (incel)
Volcel
>>Statistician who got married without a marriage contract; the divorce left him living in a van, only seeing his kids once every 2 weeks
This one is the most egregious, haha; he's a fucking statistician and he didn't know about divorce-rape rates, lmao.
It's not about overlooking some mishap; it's about having absolute certainty that something like that won't happen. That's incredibly difficult to do, as evidenced by the fact that there still isn't any solid answer.
The evidence in developmental psychology is unanimous: humans develop through discrete stages of consciousness. We also have multiple intelligences, each of which grows through these levels: cognitive intelligence, kinaesthetic intelligence, emotional intelligence, spiritual intelligence, values intelligence, self-development, growth in one's idea of the good, and aesthetic intelligence.
Building a cognitively smart computer is the only thing book-smart AI researchers try to do. But any artificial intelligence must, like any intelligence, grow through stages of development.
We also find that the same stages are recapitulated by all human cultures worldwide, and that it is impossible to skip a stage.
One example of the growth of cultural values in a human being is:
•archaic
•magic
•mythic
•rational
•pluralistic
•integral
Any artificial intelligence that wants to outsmart the best minds on Earth will have to show the ability to operate in and then transcend each of these stages. Each has value, and none can be skipped.
cont.
Developmental psychologist Robert Kegan put it like this: "I can think of no better way to describe development than that the subject of each stage becomes the object of the subject of the next stage."
That is to say, development proceeds through an increased ability to take perspectives.
Take the magic-impulsive stage, for instance. This is an egocentric stage, and so is marked by only being able to take one's own perspective. Show a child at this stage a ball coloured red on one side and green on the other. Show both sides to the child clearly, then position it with the red side facing them and the green side facing you. Ask them what colour they see. They will correctly answer, "red". Then ask them what colour YOU see. They will incorrectly answer "red", because they cannot take the role of the other yet - they simply haven't developed to the extent that they can take your perspective. Then wait until they're about 8 years old, so that they have reached the mythic-conformist-ethnocentric stage of development. Show them the ball again, both sides clearly, then place it red side facing them and green side facing you. Ask what colour they see. "Red." What colour do I see? "Green!" The child has increased its ability to take perspectives, and has therefore developed as a being.
AI research must take this increase in perspective-taking into account or fail to accurately capture the nature of development, and consequently of human intelligence. There is no way around this. Growth in intelligence comes about through an increase in the capacity for perspective-taking.
For more on this topic, check out the book Boomeritis by Ken Wilber. It is about a young person who works in AI and encounters integral thought and transforms his thinking.
Increase in perspective taking results in higher, wider, deeper embrace of what one considers ethically significant entities, from egocentric to ethnocentric to worldcentric to Kosmocentric. An AI aligned with human values is one that has developed to that extent.
tl;dr but worth reading
bro that was too long. Just saw modeling
>The evidence in developmental psychology is unanimous
>multiple intelligences
not an argument
>sharts out lunacy-tier opinion
>thinks others need to "argue" against it
>lunacy-tier opinion
>proven science
I fucking hate this board. It used to be fun; now it's chock-full of bitter contrarian pseudo-intellectual gays, all clamoring to say the most legitimately retarded opinion, as long as it falls outside mainstream science, so they can feel special and cool.
Look, get a dick, convince any woman to fuck you, then come back and share your opinion because until then you're just a shitty virgin loser.
>human level
It would just be another human. We need a super human AI to solve those problems.
It's unknown; theoretically, yes.
You can solve all those by consulting me. Why would you choose death when it'd literally be easier to just fucking ask me?
It would intentionally choose not to
No, because it will be limited by information generated by humans and technology developed by humans.
Yeah I have reason to believe it's not gonna be that simple.
No, computers only do what you tell them to do and no matter how many times you tell one to "cure cancer" it's just gonna give you a syntax error
true AI will never exist. consciousness is a gift from god. Chinese room proves AIgays are mentally ill schizos.
DeepMind's take on AGI:
https://www.deepmind.com/blog/language-modelling-at-scale-gopher-ethical-considerations-and-retrieval
basically, try to use the same transformer for a bunch of different tasks, and hope it learns some general stuff
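The "one transformer, many tasks" idea can be sketched like this (the `generate` function below is a canned stand-in invented for illustration, not DeepMind's API; a real language model would produce the continuations from learned general knowledge):

```python
# One text-in/text-out model can cover many tasks, because each task
# is just phrased as a different prompt.

def generate(prompt: str) -> str:
    # Canned stand-in for a large language model's continuations.
    canned = {
        "Translate to French: cheese ->": " fromage",
        "Q: What is the capital of France? A:": " Paris",
        "Summarize: The cat sat on the mat. TL;DR:": " A cat sat down.",
    }
    return canned[prompt]

# The same model handles translation, QA, and summarization;
# nothing changes between tasks except the prompt text.
for prompt in [
    "Translate to French: cheese ->",
    "Q: What is the capital of France? A:",
    "Summarize: The cat sat on the mat. TL;DR:",
]:
    print(prompt, "->", generate(prompt))
```

The hope expressed in the post is that training one model on enough such text makes the "general stuff" emerge, rather than requiring a separate model per task.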