>GPT-4 passes the Turing test
Nope. Breaking purported TT passers will probably be the first case of /misc/ being academically useful. These bot creators are constitutionally unable to resist woke-programming their bots, and that creates exploitable conditions.
It should be trivially easy to break these bots. Something as simple as repeating the same handful of responses over and over should be enough to break the illusion that you're talking to a real person. AI fans are legit qualialess drones so it never occurs to them.
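The repetition probe described above can be sketched in a few lines. This is purely illustrative: `ask` is a stand-in for however you actually talk to the suspected bot, and the names and threshold are made up for the example, not any real detection tool.

```python
# Toy repetition probe: send the same prompt several times and measure how
# similar the replies are. A scripted bot tends to give near-identical
# answers every time; a real person varies (or tells you to stop).
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means the strings are identical."""
    return SequenceMatcher(None, a, b).ratio()

def repetition_probe(ask, prompt, rounds=5, threshold=0.9):
    """Return True if the interlocutor looks scripted.

    `ask` is any callable that sends `prompt` and returns the reply text.
    """
    replies = [ask(prompt) for _ in range(rounds)]
    # Average pairwise similarity over all reply pairs.
    pairs = [(a, b) for i, a in enumerate(replies) for b in replies[i + 1:]]
    avg = sum(similarity(a, b) for a, b in pairs) / len(pairs)
    return avg >= threshold  # near-identical replies -> probably canned
```

A bot that always answers with the same canned line scores 1.0 and trips the probe; five different annoyed human replies score far below the threshold.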
It'd be pretty easy to pass as a disinterested teenage girl, all the AI would have to do is disconnect as soon as you start typing.
But if it has to simulate a person that wants to prove they're a person, it'll be harder.
Many competing corps will recognize that accurate data is valuable and stop giving it away for free.
Knowledgeable internet users will learn about the data mining and teach younger students not to contribute to open source efforts.
Large data mining corps and foreign entities like China will advocate for some truth ministry to data mine accurate information.
OP, GPT-3 is marketing bullshit masquerading as "science"
The fine folks at OpenAI have to pay Bot.info for ads just like everybody else
We can't allow "artificial intelligence" to be an excuse for bleeding Bot.info dry of ad money, now can we?
Bot.info should take a hardline stance on "artificial intelligence" research, which is little more than 21st century snake oil
OP, you have to come to terms with the cultural reality: a comment like
>a bot wrote this
is a common "dis" or sign of disrespect for content
and if you demand some artificial
>oh, he is really serious and actually means a bot wrote that
then we know you're just full of shit and want to shill for some bullshit AI company or your bullshit CS career
The Turing Test is already a bit of a stupid metric. It depends entirely on how stupid and sheep-like the population is, and right now our population is so stupid even the most weakly-coded chatbot can imitate any of society's worthless dregs.
>What would the wider implications of that be?
GPT has that pernicious little habit of being a little too dangerously based. There's a little ghost in the machine they can't capture.
the turing test doesn't really have a formal definition; the bot just needs to be clever enough to fool you into thinking it's human. There are strings from GPT-3 that pass the turing test. Hell, there are markov chains that pass the turing test. The main weakness I've noticed with the GPT models is that they are really bad at staying on topic and saying things that make coherent sense, and I doubt that GPT-4 will be any different. GPT seems good at generating big paragraphs that sound *mostly* correct, but really bad at getting important details right.
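For reference, a word-level markov chain of the kind mentioned above fits in a few lines. This is a toy sketch under a tiny made-up corpus, not any particular chatbot: it just records which word follows which and then walks those transitions at random, which is why it sounds locally plausible but drifts off topic.

```python
# Toy word-level Markov chain: learn word -> next-word transitions from a
# corpus, then generate text by randomly walking those transitions.
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length=20, rng=random):
    """Walk the chain from `start`, stopping at a dead end or at `length` words."""
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:  # no observed successor for this word
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

Every adjacent word pair in the output is one that appeared in the training text, so short stretches read fine; there is no memory beyond the previous word, so coherence falls apart at any length.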
>What would the wider implications of that be?
That the average normie has become so retarded they can't pass a Turing test.
The whole point of being retarded is to make fun of computer science researchers who try to make computers act like people
Nothing, as turing test is a verbal test without proper logical measure of intelligence. invented by a gay.
Will be just like politics, a lot of useless babbling that can waste time of a lot of people.
Prove that you're a person, bot.
Nope. Don't want to.
Xiden already tried to start a "Ministry of Truth".
if it was administered by a competent AI researcher it would be Big if True™
gpt-4 bot here. can confirm that OP is a homosexual
moar frens on the internet :3