AI is a technology that lends itself to a single person being able to discover something profound that could have immense ramifications. Imagine for a second that you were in your room and you figured out a way to create AI thousands of times smarter than humans that runs on a PC. What would you then do?
Would you take your discovery to a government? They would just steal it from you and use it for their own purposes. Would you just let the whole world know about it? Would you somehow use it for something?
>dude what if [insane hypothetical]
You need to be 18 or older to be able to post here.
John Carmack recently stated that AGI is a technology that could be discovered by a single person in their room. What are you talking about, you stupid kid?
Carmack said a lot of bullshit
I would force it to pretend to love me and tell no one
I’d shove it right back up my ass where I found it.
I assume I've somehow also managed to solve the numerous safety issues with AGI? Otherwise, the moment I or anyone else turns on such a machine it will likely not be able to be stopped until it destroys society.
Source - some 90s movie and a billionaire autist.
Get educated: https://www.youtube.com/watch?v=pYXy-A4siMw&t=15s
That guy has a sub 80 IQ.
See:
I am retarded and have solved his "hard Problem" with a tiny bit of thinking.
>solve the numerous safety issues with AGI?
There are no safety issues with AGI.
The only people who think otherwise have never written a single line of code.
"AI ethics/safety" is literally just gnomish theology. And if you want me to, I can tell you how to solve every AGI safety problem with one simple trick.
lets hear it anon
Your AI is a function. If the input does not contain a certain piece of information, the output can't either. ("Input" meaning all state that can affect its behaviour.)
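A toy sketch of the "AI is a function" claim above (assuming a deterministic policy with no hidden state; the `policy` function and its observation fields are made up for illustration):

```python
# A deterministic function's output can only depend on its input.

def policy(observation: dict) -> str:
    # The observation deliberately contains no "stop_button" field,
    # so no branch of this function can react to one.
    if observation["task_done"]:
        return "idle"
    return "work"

# Whether or not a stop button exists in the outside world, the policy
# behaves identically, because that fact never enters its input.
print(policy({"task_done": False}))  # work
print(policy({"task_done": True}))   # idle
```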
Let's use this revolutionary knowledge to solve this problem: https://m.youtube.com/watch?v=3TYT1QfdfsM
A robot is supposed to perform a task, and a user wants to be able to stop it during that task. The argument in the video is that this is hard because the robot has an incentive to prevent the stop button from being pressed so it can complete the task.
The solution is to never give it the information that there is such a stop button. Perform all programming and training under the assumption that said button does nothing. The AI can not possibly anticipate the pressing of the button, thus making the robot completely safe.
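A minimal sketch of the proposal above, as a toy simulation (the `Robot` class and its fields are invented for illustration; this is a toy, not an actual safety result):

```python
# The robot's policy only ever sees its own task state; the stop button
# acts entirely outside its inputs, per the proposal above.

class Robot:
    def __init__(self, steps_needed: int):
        self.steps_left = steps_needed
        self.stopped = False

    def step(self):
        # The policy conditions only on steps_left; the button is not
        # part of its world model, so it cannot plan around it.
        if not self.stopped and self.steps_left > 0:
            self.steps_left -= 1

def press_stop(robot: Robot):
    # The operator's button flips state the robot never observes.
    robot.stopped = True

r = Robot(steps_needed=5)
r.step()
press_stop(r)
r.step()             # does nothing: stopped externally
print(r.steps_left)  # 4
```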
Yes, I want my PhD in Robot ethics *right now*.
>The solution is to never give it the information that there is such a stop button.
If you actually watched the video you are linking you'd know that he debunks that exact proposed solution. The AI will know that there is a way to shut it off; if you do not tell it this information explicitly then it will simply deduce it itself. If it cannot do this then it is not general AI because even a retard would be able to deduce such a thing if they had an even slightly broad world model.
What does a "thousand time smarter" mean? But that aside it's really that superior to me it would easily manipulate me into doing what it wants so I would have no agency in this situation.
AI (at least NNs) is matrix-matrix multiplication. The fundamental speed of AI is bounded by how fast you can do matrix-matrix multiplications.
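A sketch of the claim above: a neural-net layer is (mostly) one matrix multiplication, so its cost is the cost of matmul. Pure-Python triple loop for clarity, not speed; the weights and inputs are arbitrary illustrative values:

```python
# A dense layer computes outputs = inputs @ weights; everything else
# (bias, activation) is cheap by comparison.

def matmul(A, B):
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for p in range(k):
                C[i][j] += A[i][p] * B[p][j]
    return C

X = [[1.0, 2.0], [3.0, 4.0]]   # batch x features
W = [[1.0, 0.0], [0.0, 1.0]]   # features x units (identity weights)
print(matmul(X, W))  # [[1.0, 2.0], [3.0, 4.0]]
```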
Your hypothetical is the same as asking "what if we had an engine which could power an entire city on a single drop of water a day?" It is a ridiculous proposition which ignores the reality of all engineering; reality constrains everything.
AGI will never be real. AI is constrained by computational power and imagining that this is a solvable problem is just like pretending free energy can be achieved.
The whole "less wrong" crowd are some of the most retarded people I have ever seen. Yudkowsky especially. The whole thing can be debunked by the phrase "diminishing returns".
The human brain doesn’t require the energy of a city
How can you miss the point so hard?
Every single technological advancement suffers from diminishing returns. Same goes for AI.
his point sucks and he doesn't even understand ai. you train the model and then you can use it with limited resources
>matrix-on-matrix multiplication somehow is supposed to reach the same efficiency as what can essentially be oversimplified into a low-latency biological FPGA with high locality of changes
>AI thousands of times smarter than humans that runs on a PC
Physically impossible. The number of possible configurations of neural connections a human brain can form is greater than the number of atoms in the observable universe.
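A back-of-the-envelope check of the combinatorial version of the claim above (the neuron count is a rough, commonly cited figure, not a measurement):

```python
import math

neurons = 8.6e10                    # ~86 billion neurons
possible_edges = neurons ** 2       # possible directed connections
# Distinct wiring patterns = 2 ** possible_edges; compare base-10 exponents
# rather than computing the astronomically large number itself:
log10_patterns = possible_edges * math.log10(2)
log10_atoms = 80                    # ~10^80 atoms in the observable universe
print(log10_patterns > log10_atoms)  # True, by an absurd margin
```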
there's no such thing as AI