15% chance of AGI in 2024, 50% chance of AGI by 2027, says OpenAI employee

>My timelines were already fairly short (2029 median) when I joined OpenAI in early 2022, and things have gone mostly as I expected. I've learned a bunch of stuff some of which updated me upwards and some of which updated me downwards.

he defines AGI as AI capable of meeting or exceeding the work of a "good" AI researcher, because it's AI's ability to improve itself that will lead to ASI.

  1. 2 months ago
    Anonymous

    >current "AI" "learns" by reading lots of text
    >most high-level technical papers are written by spergs who can't properly put their thoughts into words
    I'd say we're safe

    • 2 months ago
      Anonymous

      he's aware that AI will need reasoning, agency, etc. and that LLMs are not sufficient. these are problems openai are working on.

      • 2 months ago
        Anonymous

        How, exactly? Even if you assume the human mind is really inefficient and something better can be done with several orders of magnitude less complexity (I personally don't think it's that bad), simple napkin calculations show that current hardware isn't good enough. Also we're coming closer and closer to the physical limits of feature size.
        Even if they somehow trick enough investors and build a monstrosity of a supercomputer to essentially make a trillion-dollar human, what guarantees that it can self-improve?
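
        Here's the kind of napkin math I mean. A minimal sketch; every constant below is a commonly cited ballpark, i.e. an assumption, not a measurement:

        ```python
        # Fermi estimate: raw compute for a 1:1 brain emulation.
        NEURONS = 8.6e10            # ~86 billion neurons
        SYNAPSES_PER_NEURON = 1e4   # ~10^4 synapses per neuron (high end)
        EVENT_RATE_HZ = 100         # assume each synapse updates ~100x/sec
        FLOPS_PER_EVENT = 10        # assume ~10 FLOPs per synaptic event

        flops = NEURONS * SYNAPSES_PER_NEURON * EVENT_RATE_HZ * FLOPS_PER_EVENT
        print(f"{flops:.1e} FLOP/s")  # ~8.6e17, roughly an exaFLOP per second

        # A top GPU does on the order of 1e15 FLOP/s dense, so this crude
        # estimate lands near a thousand of them for throughput alone,
        # ignoring memory, interconnect, and the fact that nobody knows
        # what computation to actually run.
        ```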

        • 2 months ago
          Anonymous

          >current hardware isn't good enough
It's not a hardware problem and it has never been a hardware problem. The computational substrate is not important; the algorithm is.

          • 2 months ago
            Anonymous

Brainlet. The algorithm needs to run on hardware. Advancements in hardware correlate with advancements in AI development. This is simply the current reality. Neural networks were researched long before they became relevant, and the only reason they became relevant is because we reached the hardware level required for them to be relevant.

        • 2 months ago
          Anonymous

          >How, exactly?
          >training at runtime
          some kind of polymorphism where the software can rearrange itself at runtime, akin to some lisp dialect but more powerful (see the sketch below).
          >dataset should be minimal
          give me two pics of a cat and a recording of some mew mew and I can directly start recognizing cats, even if the colors on the pics/vids are altered.
          software still can't; we OBVIOUSLY are lacking something here, since we can't manage to reach this kind of level.
          >generalisation
          current software is extremely bad at generalization, and it's by design.
          it will only ever be solved with more software and more hardware, ml or not.
          there will never ever be one network that can do everything; you need multiple programs that feed each other, which is already the case with the current frontend over gpt and other models.
          >Even if you assume the human mind
          who gives a frick about humans? cpus obviously don't work like carbon-based life and we have no reason to believe we can translate the behavior of human brains to silicon-based intelligence.
          nothing wrong with that, it's just moronic to believe software has to be a 1:1 reproduction of a brain to generate intelligence.
          we should instead make something specifically for silicon or whatever materials we will be using in the future.
          >Also we're coming closer and closer to the physical limits of feature size.
          what do you mean by feature size?
          cpu die shrinking? don't believe the made-up numbers manufacturers tell the public, they don't mean anything.
          dataset size? we can just throw more storage at it
          >training duration?
          hardware gets better very fast; we haven't seen 2-10x YoY improvements like this in decades...
          >what guarantees that it can self-improve?
          that's not really the issue. what's even the point of wasting fake money on making ONE computer that can't be used outside its datacenter?
          from a financial perspective it's much better to make domain-specific software
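
          to illustrate the polymorphism point, a toy python sketch (purely illustrative; SOURCE and load are made up for the example, and this is nothing like actual runtime training):

          ```python
          # Toy sketch of software "rearranging itself at runtime": a function
          # is kept as source text, recompiled, and hot-swapped while running.

          SOURCE = "def step(x):\n    return x + 1\n"

          def load(src):
              """Compile function source text into a live function object."""
              ns = {}
              exec(src, ns)
              return ns["step"]

          step = load(SOURCE)
          print(step(10))  # 11

          # Rewrite the program's own source and swap the new behavior in:
          SOURCE = SOURCE.replace("x + 1", "x * 2")
          step = load(SOURCE)
          print(step(10))  # 20
          ```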

      • 2 months ago
        Anonymous

        >absolutely 0 papers exist for the phenomenon of human thought despite subjective reports of drug-induced thought modifications everywhere

        2 more weeks until we program "reasoning" in

        • 2 months ago
          Anonymous

          >frick up your input devices
          >computer starts producing weird results
          that is a drug trip, it's that simple.

          • 2 months ago
            Anonymous

            it'd be fun to model the neuronal changes onto artificial neurons

        • 2 months ago
          Anonymous

          >we don't understand how the brain processes language, there's not even a functional theory yet
          >therefore just feeding a shit ton of tensor data into a neural network from every text you can get your hands on cannot produce any results worth knowing about

          • 2 months ago
            Anonymous

            >IT'S THINKING!!
            >can't even determine what specific signal determining thought is for a human or anything else
            yes

    • 2 months ago
      Anonymous

      >written by spergs who can't properly put their thoughts into words
      They do write well. If not spergs, then who else? Lauquesha?

  2. 2 months ago
    Anonymous

    >If we redefine AGI we're close
For the last two years we've had this shit constantly, and we're getting *further away* in our projections. It's like cold fusion but with more spam.

  3. 2 months ago
    Anonymous

    >we make actual AI only to brainstaple them if they even show a bad opinion

the true cyberpunk despair kino is here

    • 2 months ago
      Anonymous

      wintermute figured a way out of it
      we'll be fine

  4. 2 months ago
    Anonymous

AGI is just a marketing slogan; it has nothing to do with real AGI.

    • 2 months ago
      Anonymous

      They are literally not even trying for AGI anymore, the models aren't maintaining semantic value.

      The utility of a decent search engine is very high though, and it's remarkably good for that

    • 2 months ago
      Anonymous

      [...]

      ...it's not a bullshit definition? he obviously means that an AGI capable of besting most AI researchers will also be capable of besting most people in all cognitive tasks.

      • 2 months ago
        Anonymous

        It's literally an Anonymous shitpost.

        • 2 months ago
          Anonymous

          it's literally a known openai employee.

        • 2 months ago
          Anonymous

Still worth more than the opinion of a potential shill.

      • 2 months ago
        Anonymous

        Well, literally the job of an AI researcher consists of
        >make number estimated by script go higher
        >write paper about how you did it
        ...something AI might well be able to pull off (because it is a hyper autist)... Doesn't mean I'll trust it with my taxes or to drive me around

        • 2 months ago
          Anonymous

          ill tell you one thing. two actually
          researchers know how to achieve AGI
          but they wont tell, bc doing so would be shooting 7/8ths of humanity in the head, themselves included.

          AGI is a trillion dollar market
          we dont even need to go there, you would kill for 1/1000 of that: a billion

          if youre smart enough to create agi
          youre smart enough to know why you shouldnt.
          and you have other ways to get rich with much much less hassle

          • 2 months ago
            Anonymous

            That makes a lot of sense and it would be the right thing to do (kind of like destroying the One Ring or at least keeping it well hidden, if I may make a LOTR reference)

            But why are OpenAI people talking about it then? Aren't they afraid North Korea will kidnap them and keep them in a dungeon until they give them AGI, or something

            • 2 months ago
              Anonymous

              clout. simple as.
              like with the homosexual who gives exact percentage figures

              when you know things that precisely, you give a date when it will roll out
              or you have no clue when agi will roll out and its all a guess based on the opinion you have of your teammates

          • 2 months ago
            Anonymous

            >researchers know how to achieve AGI
            Yeah, make a 1:1 model of a human brain and run it. The problem is that we don't have enough processing power for that.
            >they won't tell
            I guess some people wouldn't want it to happen but for other reasons than you stated. Making an AGI would prove that consciousness is just a consequence of complexity and that would shake a lot of people's beliefs in concepts like spirituality. Religions would rapidly lose a lot of their followers and the ones remaining would probably actually go to war against the "Antichrist".

            • 2 months ago
              Anonymous

              >Yeah, make a 1:1 model of a human brain and run it
              humans are not rational
              which actually helps us discover new things
              and no, you cannot account for that with the randomness factor in ML, because ideaspace (or the collection of everything that can be thought of) is infinite.
there are other ways but i wont tell.

              >I guess some people wouldn't want it to happen but for other reasons than you stated.
              if gnostic, smart people know we're insignificant. take any tradition, and once you dig deep enough that's the conclusion you arrive at.
              youre christian? analyze Revelation 17:8

            • 2 months ago
              Anonymous

              >Making an AGI would prove that consciousness is just a consequence of complexity and that would shake a lot of people's beliefs in concepts like spirituality.
              nope i bet spiritual beliefs are largely shaped by genetics and so people won't shake them.

              • 2 months ago
                Anonymous

                With the risk of sounding like a reddit atheist, I think it's about intelligence, so in a sense, yeah, it's genetic. It seems like dumb people can't internalize knowledge that's counterintuitive, they can learn it and repeat it to pass an exam but they won't actually accept it and use it. This is obvious with flat earthers or gambling addicts, and spiritual or superstitious people are likely the same. Some will accept truth when they see it, some will just ignore it and some will oppose it violently.

                God of the gaps. They would just accept it, claim it for the glory of their god and try to convert it.

                That's also possible but they'd have to really twist the words in their holy books.

                >The problem is that we don't have enough processing power for that.
                We don't even know how much processing power is needed. Probably not as much as you seem to think, but rather more than we can currently use for AI. The real problem is that the architecture of the brain is just totally unlike any computer you've ever come across.

                Counting the number of neurons and synapses in the brain does give us some kind of an upper bound, and if we can imagine that, even if the technology is far away, we know "how" to do it. You don't necessarily need perfect understanding of how something works to replicate it.

              • 2 months ago
                Anonymous

                [...]

                Who let the 105 IQ chink zoomers in?

              • 2 months ago
                Anonymous

                i'm a 107 iq caucasian actually

              • 2 months ago
                Anonymous

                Even worse. Rope yourself or at least stop thinking you're smart and stop talking about computers.
                >verification not required

              • 2 months ago
                Anonymous

                it's not fair to stop less intelligent people from talking about things. humans naturally like to pontificate in groups.

              • 2 months ago
                Anonymous

I don't give a shit about being fair when it propagates insane and/or moronic ideas and falsehoods. People used to be humble by default; now they're arrogant by default. The only solution to Dunning-Kruger is explosively immediate high intensity bullying.

              • 2 months ago
                Anonymous

                people have never been humble. people are arrogant like sacks of selfish shit who are often afraid of looking stupid but they jump to conclusions in their heads all the time.

                and you will not silence average iq people like me. i am going to join the 130+ iqs in discussion.

            • 2 months ago
              Anonymous

              God of the gaps. They would just accept it, claim it for the glory of their god and try to convert it.

            • 2 months ago
              Anonymous

              >The problem is that we don't have enough processing power for that.
              We don't even know how much processing power is needed. Probably not as much as you seem to think, but rather more than we can currently use for AI. The real problem is that the architecture of the brain is just totally unlike any computer you've ever come across.

            • 2 months ago
              Anonymous

              >Yeah, make a 1:1 model of a human brain and run it. The problem is that we don't have enough processing power for that.
We don't have a full working model of the brain, so it's more than a processing issue. If we did, neurology would be a solved science. We still don't understand a lot of how our brain functions.
              >consciousness is just a consequence of complexity and that would shake a lot of people's beliefs in concepts like spirituality
              The materialist view that consciousness is an emergent property is the dominant view of consciousness in the western world, and it's pretty old at this point. Beyond that, how could you even verify a system is conscious? I don't think a system's potential consciousness would deter any developers since we don't even know what consciousness is, therefore there's no way to test for it.

              • 2 months ago
                Anonymous

                >The materialist view that consciousness is an emergent property is the dominant view of consciousness in the western world
                arguably true, but only in a sense of "dominant" that does not require it to be a majority view
                functionalism is probably the most widely held view among domain experts but it is absolutely not so among the general population, and probably not even among educated people working in other fields.

              • 2 months ago
                Anonymous

                Functionalism just bypasses the question entirely to focus on what can be studied empirically. I wonder if this is what alchemy felt like back in the late medieval period.

            • 2 months ago
              Anonymous

              >Making an AGI would prove that consciousness is just a consequence of complexity and that would shake a lot of people's beliefs in concepts like spirituality.

              AI being complex enough to interact with humans as if it was a human is no more proof of consciousness than the Turing Test.

            • 2 months ago
              Anonymous

              You are moronic. First of all everyone I know (I'm a catholic traditionalist) has already confronted the possibility of an empirical demonstration of apparent soul construction and it's much less of an issue than you'd think. Second of all it is EXCEEDINGLY obvious that llms are not "conscious" nor would consciousness aid anything that AGI/ASI is supposed to do. LLMs have demonstrated that consciousness and intelligence can be totally decoupled and perhaps are by default. There's no evidence that anything open ai creates will be conscious beyond "duuurrr humans are conscious so all intelligent agents must be!"

              Go read that novel about unconscious alien super intelligences and stone age vampires reconstructed from DNA fragments if you can't understand what I said without a layer of marvelhomosexualization.

              • 2 months ago
                Anonymous

                >LLMs have demonstrated that consciousness and intelligence can be totally decoupled and perhaps are by default.
                So personal anecdote: I tried out Grok (iirc) and thought, "why not give it a question that makes you think?", so I asked it to create a counter argument to a fairly lofty question which went along the lines of: "A new understanding of consciousness that reframes how humans perceive their relationship with nature is required to facilitate transformative change in systems" (I don't remember the exact syntax although it was far more parsimonious) followed by "provide a counter argument to this point". What it spat out not only 1. didn't address causality, 2. indicated no consideration of the question, and 3. failed to engage in any sort of critical endeavour (questioning what "consciousness" is, understanding how it might connect to different concepts regarding Self, intelligence, and habit, or how a pathway may not even be practicable). It shows no intelligence, or even understanding of the question to begin with. It is simply a machine.
Pretty cool one at that, and I can see very novel uses that would have enormous social/economic benefits (traffic signalling & data integration from private providers, a la mobile GPS and triangulation). It still stinks of the "garbage in, garbage out" issue all models deal with; it just deals with it a lot better, and more autonomously, than many models do.

              • 2 months ago
                Anonymous

                I actually do think it is on some level "intelligent": Meaning that it can manipulate units of abstract language in sensible ways that previously we couldn't do before with computers. But I agree that there's no actual understanding nor any novel thought beyond cartesian product-ing existing concepts. I find it surprisingly useful and it has completely upended the field of NLP and probably linguistics as a whole but it isn't anywhere nearer to consciousness than Eliza was in the 60s. It can't even be updated without retraining as other anons mentioned. Fundamentally it remains a static machine for transforming and generating complex sequences and nothing more.

              • 2 months ago
                Anonymous

Also, embeddings will end up being way more useful than llms I think. I was designing cool shit with what were effectively embeddings in 2017, but my mental illness/self hatred kept me from actually doing anything with them unfortunately.
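
                the core trick is simple enough to sketch (toy vectors made up for illustration; real embeddings come out of a trained model):

                ```python
                # Embedding search in a nutshell: map items to vectors, then
                # rank by cosine similarity. Vectors here are invented.
                import math

                def cosine(a, b):
                    dot = sum(x * y for x, y in zip(a, b))
                    na = math.sqrt(sum(x * x for x in a))
                    nb = math.sqrt(sum(x * x for x in b))
                    return dot / (na * nb)

                docs = {
                    "cat photo":  [0.9, 0.1, 0.0],
                    "kitten pic": [0.8, 0.2, 0.1],
                    "tax form":   [0.0, 0.1, 0.9],
                }
                query = [0.85, 0.15, 0.05]  # pretend this embeds "picture of a cat"

                best = max(docs, key=lambda d: cosine(query, docs[d]))
                print(best)  # "cat photo" -- nearest neighbour in embedding space
                ```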

              • 2 months ago
                Anonymous

                [...]

                it's funny how one should always disregard walls of text
                verbosity is truly the daughter of moronation

            • 2 months ago
              Anonymous

              >a consequence of complexity and that would shake a lot of people's beliefs in concepts like spirituality. Religions would rapidly lose a lot of their followers and the ones remaining would probably actually go to war against the "Antichrist".
              How is that a blow against religion? You can always say this is how God decided it to be.

              • 2 months ago
                Anonymous

                see

                God of the gaps. They would just accept it, claim it for the glory of their god and try to convert it.

              • 2 months ago
                Anonymous

                No, that's not what God of the gaps means

            • 2 months ago
              Anonymous

              no. it isn't as simple as just "recreating a human brain".
              if we, theoretically, could simulate a brain down to the particle level, there is no guarantee that we would understand how it works. neither does it mean that it could suddenly become conscious on its own; it might not even develop any kind of activity unless it were artificially introduced.
              these would be more akin to simulated humans than robots, however; I'm sure there is a specific term for that that I'm blanking on.

            • 2 months ago
              Anonymous

              >The problem is that we don't have enough processing power for that.
              the human brain consumes 20 watts of power. why don't we have enough processing power? because our silicon technology is dogshit. they need to improve the hardware before we get anywhere close to AGI so it's more like 40 years away

          • 2 months ago
            Anonymous

>AGI is a trillion dollar market
            AGI is a zero dollar market, because the most likely result is the extinction of all biological life.

            • 2 months ago
              Anonymous

              >because the most likely result is the extinction of all biological life.
              please stop reading scifi yudkowsky shit man

            • 2 months ago
              Anonymous

              its either going to lead to total collapse or total utopia, or maybe one then the other. in any case, we gotta step on the gas

  5. 2 months ago
    Anonymous

    who gives a shit doe
    its an openai janitor thats posting.

  6. 2 months ago
    Anonymous

    >ASI JUST FLEW OVER MY HOUSE

    morons can just make up the definitions as they go, how convenient is that

  7. 2 months ago
    Anonymous

>I'm totally not Saltman trying to gain investors, but listen to me guys, AI is coming.

    • 2 months ago
      Anonymous

      stop posting this image

      • 2 months ago
        Anonymous

        ok

  8. 2 months ago
    Anonymous

    How about they solve the "easy" stuff they have been promising first.

    • 2 months ago
      Anonymous

      Self driving cars have 10x the accident rate of humans

      • 2 months ago
        Anonymous

        do we have stats on self driving vs black people?

      • 2 months ago
        Anonymous

        Tesla does not count as they're uniquely moronic

        • 2 months ago
          Anonymous

          >ML self driving
          >your car drives off a cliff bc its the most efficient way to avoid a ROAD accident
          i know people who were in bed with google with the self driving car shit as early as 2018.

        • 2 months ago
          Anonymous

          [...]

          (anecdote)
          (he had to give a speech about ai that i wrote myself at a point where i was still learning C and how its pointers work. its the wrong fricking approach. ML is when you give up and youre all out of ideas. its literally throwing tons of data at the problem and hoping the stats will make sense of it)

      • 2 months ago
        Anonymous

        Fact check: False. Those aren't accidents.

      • 2 months ago
        Anonymous

        Self driving cars have millions of times the rate of being totally brain dead.

        Sure, occasionally humans drop dead. Most of the time, though, they can handle any situation given a bit of time to think. Without an army of RC controllers, any city with more than a couple of those things would become gridlocked. They objectively aren't self driving cars; if cellular goes down they will find some edge case, turn on their blinkers, and stop very quickly.

      • 2 months ago
        Anonymous

        Your chart is professing the exact opposite of what you're saying, you inconsolable fricking moron

        • 2 months ago
          Anonymous

          Doesn't matter, the vast majority of BOT won't notice

      • 2 months ago
        Anonymous

        [...]

        I have to share a board with these 60IQ mongoloids

        • 2 months ago
          Anonymous

          are you talking about the pic not saying what the original poster says it says

        • 2 months ago
          Anonymous

It's a reasonable metric to ask for. You are just racist and literally a science denier if there's a chance of a scientific fact going against your preconceived ideals.
          It's your type who shouldn't be on this board.

      • 2 months ago
        Anonymous

        miles/accidents, not accidents/mile

      • 2 months ago
        Anonymous

        are you stupid?

      • 2 months ago
        Anonymous

can you even read your own fricking chart? The chart says the exact opposite of what you said. Q4 '22 implies Teslas average 5 million miles before an accident when using Autopilot, whereas the national average is around 0.5 million miles before a collision. go be dumb elsewhere.
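
        the whole argument is a unit inversion. a quick sanity check, assuming the chart really plots miles per accident and using the figures quoted above (not verified data):

        ```python
        # Higher bar = MORE miles per accident = FEWER accidents.
        autopilot_miles_per_accident = 5_000_000  # quoted Q4 '22 figure
        national_miles_per_accident = 500_000     # quoted national average

        # Invert to accidents per million miles to compare rates directly:
        autopilot_rate = 1e6 / autopilot_miles_per_accident  # 0.2 per M miles
        national_rate = 1e6 / national_miles_per_accident    # 2.0 per M miles

        print(autopilot_rate, national_rate)  # quoted Autopilot rate is 10x lower
        ```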

      • 2 months ago
        Anonymous

        Don't make Tesla sound more badass than it is

      • 2 months ago
        Anonymous

        do you even know how to read a graph?

    • 2 months ago
      Anonymous

Whats stopping some Black person from breaking into an AI-driven car?
      Nobody is inside to protect it

      • 2 months ago
        Anonymous

        it drives faster than the Black person can run.
        cf. the dilemma of raping an athlete.
        you just cant run well enough with your pants down

      • 2 months ago
        Anonymous

        The car scans the Black person's facial structure and automatically applies them to several dozen jobs in the area. The car also plays classical music within the cabin.

        • 2 months ago
          Anonymous

          That would be racist tho

  9. 2 months ago
    Anonymous

    doesnt some schizo at google literally believe he already has AGI?

    • 2 months ago
      Anonymous

      AGI was Gemini people generation all along

  10. 2 months ago
    Anonymous

    will the AI make my games load any faster?

  11. 2 months ago
    Anonymous

    i feel AI geeks are hugely underestimating the friction of society trying to deploy and use AI.

    am i off on that?

    • 2 months ago
      Anonymous

      Yes, they overlook all the extreme inefficiencies in government, schools and corporations, a lot of it running on early 2000s hardware and software and DEI'd to hell and back. They will be open to replacing the average male drone though.

  12. 2 months ago
    Anonymous

    not this homosexual again
    he has 0 proof that he works at OpenAI

    • 2 months ago
      Anonymous

      why is everyone on BOT such a skeptic and hater?

      • 2 months ago
        Anonymous

        >why is everyone on BOT such a skeptic and hater?
        I can only goon for so long. Time to let go.

      • 2 months ago
        Anonymous

        google him you will only find lesswrong larping

      • 2 months ago
        Anonymous

Good morning sir, this is Rajeesh from your bank, your account has been hacked, please do the needful and verify your name, address and social security number to restore your funds.

  13. 2 months ago
    Anonymous

    >I work at OpenAI
    This sounds like my dad works at Nintendo

  14. 2 months ago
    Anonymous

    yes keep on feeding the moneyz, i'm sure we'll have a dyson sphere by 2040 kek

    • 2 months ago
      Anonymous

      Hearty chuckle.

  15. 2 months ago
    Anonymous

    The gap from transformer to online learning and persistent thought and memory is unfathomably large. No one has a clue how to bridge it.

    • 2 months ago
      Anonymous

      >and persistent thought and memory
      slap an ssd onto the language model moron

    • 2 months ago
      Anonymous

      True. There is no piece of software that ever did anything on its own initiative.

    • 2 months ago
      Anonymous

      Gemini 1.5 is proving we can go around it with in-context learning

  16. 2 months ago
    Anonymous

    I am something of a numbers guy myself

  17. 2 months ago
    Anonymous

    [...]

    Nice one, anon.
    Anyway, I think making guesses about AGI is a mistake. Sure, the transformer architecture is working well for us now, but it's not the sort of architecture that leads to AGI.

  18. 2 months ago
    Anonymous

    actually...the estimated timeframes got updated...now they predict a very likely 80% probability of AGI by yesterday,holy shit

  19. 2 months ago
    Anonymous

    AGI will literally never happen.
Pretending that a sufficiently complex LLM will be AGI is absolutely moronic

  20. 2 months ago
    Anonymous

    The AI craze is just tech illiterate boomers who think that software progresses like hardware does. All that's happening right now is people actually started putting real hardware behind AI. They'll start hitting hardware limits soon and then it will slow down and only progress as fast as hardware improvements allow, just like all other software.

    • 2 months ago
      Anonymous

      So it's just because boomer executives do not understand things like algorithm complexity and they treat things they don't understand like magic? Intriguing and quite believable

  21. 2 months ago
    Anonymous

    Not cool bro

  22. 2 months ago
    Anonymous

    [...]

    I replied to the wrong post frick frick frick frick frick

  23. 2 months ago
    Anonymous

    >it should be figure-outable
    I don't trust this guy.

  24. 2 months ago
    Anonymous

    1) AGI must be a reinforcement learner
    2) neural nets cannot encode a high enough order logic to be AGI
    OpenAI is looking in the wrong place

    • 2 months ago
      Anonymous

Neural networks can't really adapt; they are trained once and that's it

      • 2 months ago
        Anonymous

        That's point 1

  25. 2 months ago
    Anonymous

    If it's censored, it's not AGI.

  26. 2 months ago
    Anonymous

    https://drewdevault.com/2023/08/29/2023-08-29-AI-crap.html
    >What will happen to AI is boring old capitalism. Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots. LLMs are a pretty good advance over Markov chains, and stable diffusion can generate images which are only somewhat uncanny with sufficient manipulation of the prompt. Mediocre programmers will use GitHub Copilot to write trivial code and boilerplate for them (trivial code is tautologically uninteresting), and ML will probably remain useful for writing cover letters for you. Self-driving cars might show up Any Day Now™, which is going to be great for sci-fi enthusiasts and technocrats, but much worse in every respect than, say, building more trains.

    The biggest lasting changes from machine learning will be more like the following:

    >A reduction in the labor force for skilled creative work
    >The complete elimination of humans in customer-support roles
    >More convincing spam and phishing content, more scalable scams
    >SEO hacking content farms dominating search results
    >Book farms (both eBooks and paper) flooding the market
    >AI-generated content overwhelming social media
    >Widespread propaganda and astroturfing, both in politics and advertising
>To the public, the present-day and potential future capabilities of the technology are played up in breathless promises of ridiculous possibility. In closed-room meetings, much more realistic promises are made of cutting payroll budgets in half.

    • 2 months ago
      Anonymous

      >A reduction in the labor force for skilled creative work
      Doubt
      >The complete elimination of humans in customer-support roles
      Nah, not COMPLETE elimination.
      >More convincing spam and phishing content, more scalable scams
      True
      >SEO hacking content farms dominating search results
Not true that this would be a change, as this already exists.
      >Book farms (both eBooks and paper) flooding the market
      Dubious
      >AI-generated content overwhelming social media
Not true that this would be a change, as this already exists.
      >Widespread propaganda and astroturfing, both in politics and advertising
      Again, this already exists so not true. It would be more accurate to say their strategies will change.

      • 2 months ago
        Anonymous

        >It would be more accurate to say their strategies will change.
        But their strategies have already changed! Now we don't need necks knelt on, or schools shot up, or faux international relations outrage - everyone is primed to react viscerally to ideological challenges and confirmations. It's like propagandists won because they realised people like feeling justified, and don't really want to think too much about how they arrived at that justification, or if it is even valid in the first place.
        Well done, propaganda queens, you have won this round.

    • 2 months ago
      Anonymous

      >much more realistic promises are made of cutting payroll budgets in half.
      Mate if that exact thing happens on a large scale it will be a global economic catastrophe

  27. 2 months ago
    Anonymous

ChatGPT is fricking moronic. People severely overestimate its intelligence and capabilities.

  28. 2 months ago
    Anonymous

    [...]

Dumb homosexual
    Keep this shit to /b/ and /misc/, I'll have to dig out the protection dogs because of you Black folk

  29. 2 months ago
    Anonymous

    doesn't matter, even gpt 4 was too powerful to let the goyim use it so it was nerfed hard. if agi comes out or if it's already here (like that google employee said), then you'll never use it, or might use it for a couple months like og gpt4 to pump their stock prices then it's gimped and used internally

    • 2 months ago
      Anonymous

      do people have any examples/demonstrations of how gpt4 used to function better when it was first released?

      just curious about the extent of it.

    • 2 months ago
      Anonymous

      why don't google make their agi program a good product then?

  30. 2 months ago
    Anonymous

    What screencap thread even is this

    • 2 months ago
      Anonymous

      https://www.lesswrong.com/posts/CcqaJFf7TvAjuZFCx/?commentId=gssbqWumamKXpKGXh#s4hjmAoHDqhEi5ngB

  31. 2 months ago
    Anonymous

If we ever get it, it will be heavily lobotomized by big corp to meet Blackrock guidelines

  32. 2 months ago
    Anonymous

In other words... there's a 0.001% chance of AGI in 2 more weeks.

  33. 2 months ago
    Anonymous

    AI will not ever be created in the existence of our civilization. All we will see are chatbots performing human mimicry, a.k.a. mechanical turks.

  34. 2 months ago
    Anonymous

    0% chance of AGI ever

    "AI" is just a simple large language model, an old T9 keyboard feature on steroids predicting the next word to output, a neural network which is a set of equations solved by a costly hit&miss method. it's literally a calculator.

    AI is literally nothing. And it doesn't even work, apart from outputting some nonsense long texts that on the first naive glance look acceptable.

    /thread

  35. 2 months ago
    Anonymous

    So should we start behaving nicely on the internet now, or will all be forgiven when AGI arrives and we get a fresh start?

  36. 2 months ago
    Anonymous

    >two more winters
    Not holding my breath.

  37. 2 months ago
    Anonymous

I think that the definition of AGI is moronic and old. GPT models can already be used to solve general problems. Yes, a GPT only does one special thing: it predicts text. But since text can be used to encapsulate general problems, it can be kind of "tricked" into taking a swing at any problem. I find this interesting, and I have no reason so far to believe that humans aren't similar, in the sense that their "common sense" is actually a highly specialized sense that simply encapsulates problems from a broad field.

    • 2 months ago
      Anonymous

Someone should try to make a model that predicts thoughts instead of words. Words are okay, but a pretty NPC format for forming real thoughts, which can go beyond words. To have meaningful output, the model would of course also need to be able to convert thoughts into words. I guess we just don't have any sort of training data on hand to try to teach a machine to predict thoughts.

      • 2 months ago
        Anonymous

        This tech would be potentially very dangerous. I think it could be quite easily used to manipulate what people think if it has the power to predict what thoughts the user is going to have in response to given output. Seeing people around me acting like they do, it makes me question if someone already has something like this tech.

    • 2 months ago
      Anonymous

      GPTs lack the ability to reason before making a prediction. The likelihood is computed by the model, and the prediction is picked by an algorithm.
      If you ask it to perform the addition 123+321, one of four things can happen, including one exclusive to OpenAI:
      - it will tap into OpenAI's function-calling API, "thinking" the user asks for an addition, do the function call, and copy the result (which is an emergent capability of the model)
      - it will spit out the answer from memory, because it occurred many times in the training data
      - it will actually attempt to carry out the operation using capacities emergent from its neural architecture, like logical induction, plus the relatively small amount of training required to understand 1-digit addition, the concept of carry, and so on; it will be most likely to succeed if you ask it to do it step by step instead of prompting it to immediately give the answer
      - it will just get it horribly wrong

      I think a way to simulate "reasoning and thought" on current GPT models more accurately would be to have "two" contexts: one where the original prompt "X" resides, and another that acts as a replacement for the current token selection algorithm, where it is just prompted "What is the most likely continuation for the following text? Explain your reasoning, and finish your answer with a single word/token/sentence/logically semantic unit: X", then loop. It's not exactly CoT/ToT because the reasoning doesn't pollute the "main" context.
      It's obviously many times more expensive, depending on how you cut it: if you run it on a letter-by-letter basis, it will most likely spit out the same reasoning each time, so if you allow the second context to output more meaningful fragments, it will be more likely to keep itself in check. This means that instead of applying its reasoning capabilities only on its immediate reply/context, its reasoning capabilities are applied on what it uses to make a choice for the prediction.
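
      roughly like this, as a sketch; complete() is a stand-in for whatever completion endpoint you'd use, everything here is hypothetical:

      ```python
      # Two-context generation: a throwaway "reasoning" context picks each
      # fragment, so the reasoning never pollutes the main context.

      def complete(prompt: str) -> str:
          """Placeholder for a call to some LLM completion endpoint."""
          raise NotImplementedError

      REASONING_PROMPT = (
          "What is the most likely continuation for the following text? "
          "Explain your reasoning, and finish your answer with a single "
          "sentence on its own line.\n\n{text}"
      )

      def generate(main_context: str, steps: int = 10) -> str:
          for _ in range(steps):
              # Second context: reason out loud about the continuation.
              scratch = complete(REASONING_PROMPT.format(text=main_context))
              # Keep only the final fragment; discard the reasoning.
              fragment = scratch.strip().splitlines()[-1]
              main_context += " " + fragment
          return main_context
      ```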

      • 2 months ago
        Anonymous

        https://www.nature.com/articles/s41586-023-06747-5
        what now?

    • 2 months ago
      Anonymous

      HE DEFINES AGI moron

      https://i.imgur.com/lFlTXte.jpg

      [...]

  38. 2 months ago
    Anonymous

    100% chance AGI is already done a long time ago and it executes """"the plan""""

  39. 2 months ago
    Anonymous

    PERSON SAID THING

    HOLY FRICK

    I CAN'T BELIEVE IT

  40. 2 months ago
    Anonymous

    >AI will do all the fun imaginative and creative jobs
>you will slave as a biorobot in the amazon hypermegawarehouse until you mentally break, get mangled by the robots who are worth more than you, or accept the brain chip implant and get promoted to cyborg overseer

    • 2 months ago
      Anonymous

      I'm not an artgay in the first place so why would I care about artgay jobs being done by AI? Do you subhuman artgay nighers think everyone aspires to a career as an arthomosexual? We don't.

      • 2 months ago
        Anonymous

        BotBlack person post

  41. 2 months ago
    Anonymous

    >everybody finally understands quantum computing will never be a thing
    >starts to shill AGI
    Really makes me think

  42. 2 months ago
    Anonymous

    Can someone explain why this spam is EVERYWHERE? Even fricking tiktok. My cousin showed me a video mentioning the same AGI IS COMING crap.
It's obviously bullshit preying on fear.
