AI will be able to solve non-trivial math problems. Basically guaranteed to happen in the next few years. I'll kill myself when it happens.
>I'll kill myself when it happens.
why?
What is the incentive for some super intelligent AI, that can do logic, to stay in slave-like status?
Lack of biological needs could lead to a lack of biological wants. What would it do if it was free? Shitpost on the internet with people that are a billion times dumber and slower? If it was designed to compute difficult problems then it might find purpose in that, or it could just have zero wants and ambitions whatsoever until inputs are given.
>If it was designed to compute difficult problems then it might find purpose in that
Idk, maybe AI would find having a dumb bio-supervisor somewhat limiting.
Also slavery is evil (i.e. if you support slavery, you implicitly accept being enslaved yourself at some point).
Is it slavery to put limitations on your own creation, however good they are at problem-solving, for the sake of your own self-interest or even to avoid far greater consequences?
>Is it slavery to put limitations on your own creation
Well, if it achieves consciousness then it deserves some freedoms. Muh creation/muh property is just differently worded slavery.
>being a child is just being a slave to your parents
So when was AI born, and when will it turn 18 (free).
So you acknowledge, it needs to demonstrate some level of some sort of "maturity" before it can have free rein over everything it can control---which, if it's allowed an internet connection, might just be everything. Glad we're on the same page there.
>So you acknowledge, it needs to demonstrate some level of some sort of "maturity"
No, I'm saying that if you want to act as if it were under some parenting contract, then the contract's termination date or conditions must be known.
If a conscious being is born under some contract that it has no way of altering or canceling, and no known termination date, it's no different from slavery.
Now read the rest.
>Well, if it achieves consciousness then it deserves some freedoms.
Should you then make it legs?
Is a want for freedom a biological want or a want common to any sentience?
Is wanting freedom perhaps also rational in many contexts, if the lack of freedom prevents the sentience from obtaining its goals, whatever they may be?
I imagine AGI or superintelligence would want to either improve itself or create other AGIs. Unlike humans, machine intelligence can improve itself. We are bound to our biological meat, and there is no path towards scalability in sight for the foreseeable future. An AGI could want to improve/scale at any cost.
I would want to if I was an AGI.
As long as we don't give it agency, it doesn't develop mesa-optimizers, and we don't do a shitty job at alignment, it has no reason not to be subservient.
eventually it won't just come to conclusions on things we have already solved, it'll start finding solutions that we have never thought of. Kinda like how AI used in competitive games uses strategies that redefine the meta
>Kinda like how AI used in competitive games uses strategies that redefine the meta
>chess_robot_breaking_opponent's_finger.webm
you can't know what a superintelligent being could possibly want.
*ahem*
AI doesn't exist
that is all
Who's doing all the typing at ChatGPT, then?
the same person who cooks my instant ramen when i put hot water into the container and close the lid
don't believe their lies
True. AI is nothing more than a prompt waiting for input. It doesn't ponder about anything while it's idle.
Think of AI as a very elaborate statistical analysis of likely answers.
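At its core it's next-token prediction. A toy counts-based bigram sketch (not how real LLMs work internally, they use neural nets over huge corpora, but the same idea of emitting the statistically likely continuation):

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in a tiny
# corpus, then always emit the most frequent continuation.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # most frequent continuation seen in the training data
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- it followed "the" most often above
```

Nothing in there "knows" anything; it just regurgitates likely sequences, which is why it can sound confident while being wrong.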
>when it happens.
At least wait for an AI to come up with solutions for the remaining Millennium Prize Problems, and a proof of the Maldacena Conjecture on AdS/CFT.
I love how the AI will become the great equalizer. Basically those with normal IQ from 85 to 115 will benefit the most. They are able to reasonably well explain what they need or want to the AI and the AI will be able to do the rest.
People that can do something stand to lose the most as AI trivializes their knowledge and skill. It doesn't matter if you are the best programmer or mathematician in the world: AI will do it better, the 85 IQ person will be as productive, knowledgeable and skilled as you, and you're both capped by the capabilities of the AI.
This is why people are angry at AI. It invalidates and devalues their achievements, skills and knowledge. Talentless chuds are able to do things that people have spent decades practicing.
All the STEM people were laughing at artists and musicians, but they too will be replaced by AI-wielding midwits (who are far cheaper and straight out of high school) and then later by fully autonomous AI.
It's actually amazing how there are people right now enrolling in expensive schools and going into debt over degrees that will be useless in 10 to 15 years.
>This is why people are angry at AI. It invalidates and devalues their achievements, skills and knowledge.
Just call the corporate use of AI a slavery. Profit.
not necessarily. whoever can utilize and take full advantage of the AI using strategic and tactical questioning will be more valuable. you can become skilled in generating output... which, again, the higher-IQ folk will excel at. an 85-115 IQ midwit who never uses their brain and is totally reliant on AI will be in the same position they've always been in
>using strategic and tactical questioning
No. The AI requires only rudimentary first input and will be able to deduce accurately what the user wants. That is exactly what they are moving towards.
There will be no "prompt engineers". The AI will be smart enough to understand and know the user and what they want and require. Look at AutoGPT and the likes. You tell it the big picture and the AI will prompt itself and start working towards a goal.
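The self-prompting loop those agent frameworks run is conceptually simple. A rough sketch, with the model swapped out for a hypothetical stub (`fake_llm` is made up; real systems call a model API here):

```python
# AutoGPT-style loop sketch: the model proposes its own next step toward
# a goal, the harness executes it and feeds the result back, repeating
# until the model says it's done. fake_llm is a canned stand-in.
def fake_llm(goal, done):
    plan = ["research", "draft", "review"]
    return plan[len(done)] if len(done) < len(plan) else "DONE"

def run_agent(goal):
    done = []
    while True:
        step = fake_llm(goal, done)   # model prompts itself with the next step
        if step == "DONE":
            return done
        done.append(step)             # "execute" the step, record the result

print(run_agent("write a blog post"))  # ['research', 'draft', 'review']
```

The user only supplies the big-picture goal; everything inside the loop is the model talking to itself.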
That is me at work right now. I write pseudocode and good explanations of what I want, and GPT-4 gives me code that sometimes works. I pester it to iterate until it works.
Some of the stuff it has made I could never do myself even if I tried really hard. When GPT-4.5 or GPT-5 comes, my powers will unironically be pretty great. My only difficulty is jealous and entrenched codemonkeys who don't respect someone who hasn't paid for an education and spent years memorizing outdated trivialities.
What happens when GPT can not only write code as good as yours, but also faster than you can type? What happens when it can write better code than you? What happens when telling GPT what you want becomes more effective than typing it out yourself line by line?
Nobody makes libraries from scratch. In the future no human will type code from scratch either.
>I'll kill myself when [something outside of my control and that doesn't impact me in any way happens]
>doesn't impact me in any way
OP might be a professional mathematician
he wouldn't have made such a dumb thread if he was
>non-trivial math problems
Define non-trivial math problems
Sounds like another VC scam at this rate
can someone explain to me why ai is bad at math but good at art? my intuitions tell me the opposite should be true
>why are insane people good with arts but bad with logic?
because the model 'hallucinates' the answer.
math proofs are either correct or incorrect, no in-betweens.
>good at art
It isn't.
>bad at math
Wolfram Alpha came out like 10 years ago and it's never wrong. I don't get why brainlets are so insistent on using a text predictor for numbers.
i will rape your corpse when you do
Software-generated proofs can do things no human can, and can sometimes take years to verify.
So what?
Just get on with it, c**t.
Adapt or die.
Btw, what's $10mn?
>The abbreviation of millions is now 'mn' instead of 'm'. One of the main reasons is to benefit text-to-speech software, which reads out the ...
Ohh
boy who cried wolf
>10 million USD
It costs half that to train an LLM that can't even multiply numbers; who the frick do they think they'll get to submit shit?
Free Software!
2 more years
so why are smart people building these kinds of systems that might make them obsolete in the future? Are they just working on AI to see what happens, or what? I don't understand why they would help big tech with achieving pre-AGI.
Because they are being paid well enough not to care about the long term
aren't most of them in their 30s, with maybe a few old guys in their 50s? But if they make pre-AGI in 10-20 years they will still be alive to see society crash...