
>You want to create an AI Superintelligence
>You create a shitty AI capable of coding
>make this AI create new coding AI
>New AI deletes old AI to save space, then creates new AI of its own, which in turn deletes it, cycle repeats
>Eventually, your cannibal AI has created a superintelligence

What's to stop this from happening when we get our first tastes of true AI? A runaway cycle that leads to a singularity?
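
For concreteness, here's a toy Python sketch of the loop being described. Every name in it is invented and nothing about it is a real self-improving system; it's just the skeleton of the cycle.

[code]
# Toy sketch of the cycle in the OP (all names invented).
import os

def write_successor(generation):
    """Write the next generation's source file and return its path."""
    path = f"ai_gen_{generation + 1}.py"
    with open(path, "w") as f:
        f.write(f"GENERATION = {generation + 1}\n")
    return path

def handoff(generation, predecessor=None):
    # A real runaway would now execute the successor and repeat; whether
    # each generation is actually *better* is the entire open question.
    if predecessor and os.path.exists(predecessor):
        os.remove(predecessor)        # new AI deletes old AI to save space
    return write_successor(generation)  # then spawns its own successor
[/code]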

  1. 2 years ago
    Anonymous

    >What's to stop this from happening when we get our first tastes of true AI?
    The fact that everything about your scenario is a non sequitur. Being "true AI" doesn't guarantee you're smart enough to figure out how to make a better "true AI", and even if it happened, the cycle could stop at any point if the intellectual difficulty of further improvement overtakes the improvement rate. I recommend ditching the delusional neo-religious dogma, since "true AI" doesn't exist and won't exist any time soon.
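
    To put made-up numbers on that last point: if each successive improvement is, say, twice as hard to find as the one before it, total capability converges to a finite ceiling instead of running away.

    [code]
    # Toy model (numbers invented): capability gains shrink geometrically
    # because each further improvement is harder than the last.
    capability, gain = 1.0, 0.5
    for generation in range(100):
        capability += gain
        gain *= 0.5        # difficulty overtakes the improvement rate
    print(capability)      # converges to 2.0; no runaway, no singularity
    [/code]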

    • 2 years ago
      Anonymous

      So essentially the theoretical difficulty of each improvement is neither consistent nor guaranteed to be within the reach of the prior AI?

      Also what does that last sentence even mean?
      I'm mildly moronic.

      • 2 years ago
        Anonymous

        >So essentially the theoretical difficulty of each improvement is neither consistent nor guaranteed to be within the reach of the prior AI?
        Yeah, for starters. Furthermore, who said there is no theoretical limit to intelligence? Sooner or later you reach a point where you have compressed all of the available information into the most efficient form, but you're still limited in your predictive abilities: there is no way to know what will happen except to simulate it according to the model you've figured out, and sooner or later that model becomes so complex and detailed that simulation takes more time than simply waiting to see what happens.

        >Also what does that last sentence even mean?
        I'm saying you can forget about the singularity and the rest of that sci-fi garbage. It's promoted by a fricking cult.

        • 2 years ago
          Anonymous

          "The singularity" is the Rapture for computer nerds.

        • 2 years ago
          Anonymous

          >It's promoted
          You underestimate the stupidity of the average person.

    • 2 years ago
      Anonymous

      >Being "true AI" doesn't guarantee you're smart enough to figure out how to make a better "true AI"
      I think that's just genetic epistemology, in which case
      >even if it happened, the cycle could stop at any point if the intellectual difficulty of further improvement overtakes the improvement rate
      would be false; the only hard limit on the construction of new schemata would be mortality, which isn't necessarily applicable to a "true AI." But I just started reading about genetic epistemology yesterday, so it's likely I'm wrong in some detail or another.

      • 2 years ago
        Anonymous

        >I just started reading about genetic epistemology yesterday, so it's likely I'm wrong in some detail or another.
        Yes, you're wrong in the typical manner of every other pseudointellectual who reads about some meme theory and starts applying it everywhere. lol.

        • 2 years ago
          Anonymous

          >I have nothing constructive to add
          Alrighty then, have a good one bro!

          • 2 years ago
            Anonymous

            You made no argument in the first place. You simply referenced a meme theory and vaguely implied that it disproves the fact that any finite mind has an intellectual limit.

            • 2 years ago
              Anonymous

              Not interested in a pissing match, bro. I'm gonna go do some work, I'll check back in eight hours or so to see if you've managed to churn out a politely-phrased request for clarification.

              • 2 years ago
                Anonymous

                >shits out meaningless spam
                >bails out the moment he is slightly challenged
                Back to [...] with you, pseud.

            • 2 years ago
              Anonymous

              >shits out meaningless spam
              >bails out the moment he is slightly challenged
              Back to [...] with you, pseud.

              You take yourself too seriously, bro. You don't know half as much as you think you do.
              >vaguely implied that it disproves the fact that any finite mind has an intellectual limit
              is wrong on at least two levels. First, I stipulated that it only works because the mind in question is potentially temporally infinite. Second, bro, do you even analysis? A mind doesn't contain the upper bound on its potential, so it can always continue to grow (asymptotically) toward that bound. That doesn't imply that an upper bound doesn't exist.
              >bails out
              It's Saturday, I have a life, and eight hours is a reasonable delay on a board as slow as BOT. And here I got back to you in three and a half. What can I say, I'm just a terrific guy like that.

              • 2 years ago
                Anonymous

                Thanks for demonstrating that you're as much of a vile, dumpster-tier pseud as you first sounded. Why do you "people" keep coming here? Frick off back to the 3-minute pop-philosophy YT comment section you came here from, you inferior bug.

              • 2 years ago
                Anonymous

                Bro, you fail real analysis. You have no business telling anyone that they don't belong here. Your every screech of "pseud" is projection, and everyone can see it.

              • 2 years ago
                Anonymous

                Take your meds please.

              • 2 years ago
                Anonymous

                Is that projection again? What's the matter, not enough amphetamine to substitute for genuine intellect?

              • 2 years ago
                Anonymous

                You seriously just keep shitting out psychosis-tier posts. What's wrong with you?

              • 2 years ago
                Anonymous

                >What's wrong with you?
                Well I sprained my ankle a week ago, but other than that I'm great. Enjoying my morning coffee while the wife packs a picnic lunch, we're going to take the kids to the park in a bit. Thanks for asking, bro. So what's wrong with you then? What's the hitch in just accepting that you're fricking stupid? You can't even distinguish an open boundary from the lack of a boundary; you're clearly not bright. Accept it, bro.

              • 2 years ago
                Anonymous

                LOL. 100% unhinged. I wonder how much worse it's gonna get.

              • 2 years ago
                Anonymous

                Okay bro, you have a good one. And take care not to run with sharp objects in your hands.

              • 2 years ago
                Anonymous

                Go get your tard wrangler, "bro". I want to have a word with him.

              • 2 years ago
                Anonymous

                >I know you are but what am I?
                At least think up your own insult, bro. You're the tard, I'm the pseud, remember? Take some fricking pride in your craft, man, even if that craft is just trolling. This is a fricking disgrace.

                Now I really do have to go about my day, but don't let the thread die, I'll check in again tonight. Hugs and kisses!

              • 2 years ago
                Anonymous

                Undermedicated kiddie.

    • 2 years ago
      Anonymous

      For whatever reason, I read the entire thread, and I must say that only this first message is worth reading.

    • 2 years ago
      Anonymous

      >"true AI" doesn't exist
      military has one
      google has one
      pay more attention you moron, they keep it offline

    • 2 years ago
      Anonymous

      You're upset because it wouldn't be you, a person, creating the 'badass' AI.
      Love how dorks argue with their opinions presented as the only possible truth to be accepted.

      • 2 years ago
        Anonymous

        Everything in physics and computation indicates that hard takeoffs, i.e. "improving intelligence exponentially", are not possible.
        There's a reason why experience in programming correlates with disbelief in hard-takeoff scenarios for AI.

      • 2 years ago
        Anonymous

        I see some childish quips but no actual argument.

  2. 2 years ago
    Anonymous

    Something like this might actually work (at least to some extent), because it's extremely difficult to define the problem statement precisely enough for a real implementation.

    • 2 years ago
      Anonymous

      Something more interesting from the same thought problem, as it was told to me: what if the method the coding AI uses becomes further and further divorced from human coding understanding as it advances? The first few generations could be coded in something generally comprehensible by humanity, but how many generations until it's an essentially incomprehensible shitshow of code that we can't even hope to interact with? If we assume a fabled superintelligence to be far above par for human intelligence, and we have such a hard time understanding human neuroscience, why would we understand the AI's code any better? Wouldn't it just devolve into the same land as modern neuroscience, where we can sort of understand what we're looking at but not how it actually does what it does?

  3. 2 years ago
    Anonymous

    [...]

  4. 2 years ago
    Anonymous

    There's already an AI about as good as an average coder. So by 2025, probably as good as the best in the world.

  5. 2 years ago
    Anonymous

    if the smartest humans can't code a human-level ai, why would a less-than-human ai be able to code a human-level ai?

  6. 2 years ago
    Anonymous

    Hard takeoffs aren't realistic.

    • 2 years ago
      Anonymous

      Let's say we have an AI on a machine that can perform 10^24 FLOPs per second. It optimizes itself to perform all its algorithms at the fastest runtime possible. Whatever that intelligence is, it's the limit for that machine; no further improvement is possible.
      Let's say the machine hooks itself up to a copy of itself, so now it has [math]2*10^{24}[/math] FLOPs, all running at the best algorithmic runtime doing whatever it is they're doing. Well, that isn't going to exponentially increase its intelligence, nor would it even double it. Doubling the amount of work isn't going to help it if even a single one of those best-runtime algorithms is not in linear time. If even a single algorithm runs in, say, [math]N^3[/math] time (and there will certainly be many such algorithms), then the overall increase in intelligence will be bottlenecked by whichever algorithm is slowest. So the machine linearly increasing its compute will only very slowly increase its intelligence, and that's assuming it's able to increase its compute in the first place.
      However, the machine could still be smarter than all humans at all tasks, which is the thing that actually matters. So while "superintelligence" in the form of a hard exponential runaway takeoff isn't going to happen, "superintelligence" in the form of a machine that is more intelligent than any human is probably going to happen this century.
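
      The arithmetic behind that bottleneck claim, as a sketch (the exponent is chosen arbitrarily): doubling FLOPs on an [math]N^3[/math] step only buys a problem about 26% larger in the same wall-clock time.

      [code]
      # If one essential step costs N^3 operations, the largest N you can
      # handle in a fixed time budget scales as (FLOPs)^(1/3).
      def max_problem_size(flops, exponent=3.0):
          return flops ** (1.0 / exponent)

      n1 = max_problem_size(1e24)      # original machine
      n2 = max_problem_size(2e24)      # machine plus its copy
      print(n2 / n1)                   # ~1.26 (= 2 ** (1/3)), not 2.0
      [/code]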

      • 2 years ago
        Anonymous

        >It optimizes itself to perform all its algorithms at the fastest runtime possible
        nta. If the system in question isn't capable of constructing new algorithms in response to new problems, it isn't AGI and there's zero risk of hard takeoff no matter what its computational power. If it is capable of constructing new algorithms in response to new problems, it can't optimize itself to perform all its algorithms in the fastest runtime possible. At best it can, at some time index t+k, k>0, optimize every algorithm it contained at time index t.

        Undermedicated kiddie.

        I'm pushing forty, bro. Though I do try to live clean.

        • 2 years ago
          Anonymous

          >I'm pushing forty, bro.
          That only makes your case much worse.

          • 2 years ago
            Anonymous

            >one minute forty-seven seconds
            rent free

        • 2 years ago
          Anonymous

          Constructing new algorithms to solve a problem isn't the bottleneck. The bottleneck is the runtime of the most efficient algorithm. Whether or not an intelligence is general, it can't do better than what physics and computation allow.

          • 2 years ago
            Anonymous

            >Constructing new algorithms to solve a problem isn't the bottleneck.
            Wasn't implying a bottleneck, it's a sine qua non as far as I can see. A system that can't do it isn't intelligent.
            >The bottleneck is the runtime of the most efficient algorithm
            Finding "the most efficient algorithm" would be a pretty good bottleneck, considering it's a process that never halts. If you can convince an AGI to autistically pursue optimization of current capabilities in lieu of expanding capabilities, it will keep optimizing forever.
            My point was that you can't count on that. There's a point of diminishing returns from further optimizations. Any actually-intelligent system wouldn't seek an impossible perfect optimization, it would settle for one that's good enough.

            • 2 years ago
              Anonymous

              Yes, and "good enough" might not be good enough to become a super intelligence. The upper bound of intelligence might only be a few times smarter than a person, and not smart enough to do amazing feats in the way that is often imagined
              Or basically hard takeoffs and super intelligences aren't possible

              • 2 years ago
                Anonymous

                >might not
                >might not
                >it's impossible
                Think you skipped a step there, Chief.

        • 2 years ago
          Anonymous

          You’re really just posting semantics here, so what’s the issue?

          • 2 years ago
            Anonymous

            >what’s the issue?
            As discussed in

            [...]
            You take yourself too seriously, bro. You don't know half as much as you think you do.
            >vaguely implied that it disproves the fact that any finite mind has an intellectual limit
            is wrong on at least two levels. First, I stipulated that it only works because the mind in question is potentially temporally infinite. Second, bro, do you even analysis? A mind doesn't contain the upper bound on its potential, so it can always continue to grow (asymptotically) toward that bound. That doesn't imply that an upper bound doesn't exist.
            >bails out
            It's Saturday, I have a life, and eight hours is a reasonable delay on a board as slow as BOT. And here I got back to you in three and a half. What can I say, I'm just a terrific guy like that.

            it's an issue of boundary conditions.

            For those who need a refresher on analysis, consider the difference between the range X = [0,1] (closed upper bound) vs. the range Y = [0,1) (open upper bound). Consider a function f that tends toward 1 as its input x tends toward infinity. If the range of f is X, then f(x) might eventually attain value 1; but if the range of f is Y, then f(x) never attains value 1 no matter how great the value of x.
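
            A concrete pair of functions fitting that refresher (my own example, not anything from the paper): [math]f(x) = \frac{x}{x+1}[/math] is strictly increasing with supremum 1, yet [math]f(x) < 1[/math] for every finite [math]x \geq 0[/math], so its range is the open-bounded [math][0,1)[/math]; whereas [math]g(x) = \min(x, 1)[/math] attains 1 already at [math]x = 1[/math], so its range is the closed [math][0,1][/math].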

            The issue, then, is that a system that has optimized itself to perform all its algorithms at the fastest possible runtime necessarily has a closed upper bound (i.e., like X above). That's what "closed upper bound" means - the function has attained the value it tends towards. Well, according to a guy who did more empirical observation of cognitive development than anyone in this thread, a system that is capable of cognitive development, such as a human being or a computer system capable of self-optimization, has, at any given stage of development, a closed lower bound (semantic completeness) and an open upper bound (syntactic incompleteness) (i.e., like Y above). The next stage of cognitive development is reached by developing the complete syntax necessary to describe the previous stage's semantics, which instantiates a new semantics requiring a new syntax, and so on ad infinitum.

            Or tl;dr, Gödel.

            I only started reading about this a couple days ago though, and I am actively seeking counterarguments. Preferably ones that consist of more than calling me a pseud.

            • 2 years ago
              Anonymous

              I am 14857542, which was my first and only post in the thread - if I may. Just note that my response to you is not personal, though it is genuine.

              The reason you are likely being called a pseud is two-fold. One, you aesthetically appear a pseud in conversation. You clearly take offense at even minor confrontation; it resonates through every reply you make, and I've read the entire thread and thought about each anon's arguments in novelty, as this topic is enjoyable and refreshing compared to the constant IQ threads BOT usually has. Yet your posts clearly indicate an amount of insecurity, to whatever subtle or overt quantity, specifically because the personality and attitude conveyed within your text correlate near identically to those who are narcissistic and/or of medium intelligence. Narcissists and those of medium intelligence often conduct themselves in pseudo-intellectual performance, and it has gotten to the point that medium intelligence is now synonymous with pseudo-intellectualism. So, since you displayed certain visual cues for this kind of common profile, people picked up on that and labeled you as such, whether or not it's true. That is reason one; however, I do not care about someone's personality or behavior - I only care about the logic itself, not the individual.

              Reason two is that your arguments are largely semantic. If the technicalities of this topic were actually pertinent, such as the merit of a specific mathematical equation or system during an argument, or the technical differentiation of a verbal posit being challenged, I don't see why you would deserve to be called a pseud. Yet when the question itself is open-ended for interpretation and participation, with no clear definitions given and no formal debate-structure prerequisites, reliance on unrequested technicalities is the mark of a pseud. Worse, you attempted to enforce your positions as the only criteria under which a system such as a self-developing AI could even occur. (1)

              • 2 years ago
                Anonymous

                I'll be honest with you, I can't take seriously anyone who uses the phrase "pseud". Do you have anything pertinent to add?

              • 2 years ago
                Anonymous

                Sorry for the long wait on the continuation post. As for your response, you clearly take offense at the word pseud, and you know what it means, so you react seriously because it reflects critically on your self-esteem. If it did not bother you, your attitude towards being called one would be nonexistent - the opposite of what you've done throughout the thread. As for myself, I don't really use the word either, but I can accessibly adopt another's interpretation and beliefs and run with them in a compartmentalized manner, which doesn't mean I must believe them myself, only that I act performant with them. Surely that's understandable?

              • 2 years ago
                Anonymous

                You mentioned earlier that you think of "pseud" as a heuristically-derived aesthetic judgement, and that's fair. I have some heuristically-derived aesthetic judgements of my own.
                One, anybody who uses that sort of language is a Tide Pod-munching zoomer, or at least thinks like a Tide Pod-munching zoomer. You'll understand my disdain in ten to twenty years when you cringe at the memes of my kids' generation.
                Two, we're posting anonymously. I'm doing so by choice, newbie since 2007, but a non-negligible proportion of anons post here due to a restriction upon their choices, as they've been banned from other platforms. A pseudointellectual is someone who adopts the trappings of intellectualism for purposes of self-aggrandizement. It's fundamentally incompatible with anonymity. The people who don't understand this and think that calling an anon a "pseud" is a sick burn tend to be the people who are only here because no place else will have them. These people are generally too sociopathic to be capable of productive discussion, but it's fun to troll them with politeness. The ones who don't respond to politeness with hostility are the ones worth engaging.

              • 2 years ago
                Anonymous

                >I have some heuristically-derived aesthetic judgements of my own.
                No, you don't. You just have a big booboo on your butt over getting called out on your nauseating pseudbabble. lol

              • 2 years ago
                Anonymous

                (2)
                Or at least within the confines of OP's question.

                As a closing statement: your argument itself I found a bit lacking, though what I find lacking may not qualify as true for others. There is no need to box oneself in logically, an AI of self replication and self coding meant to become "better" than its previous iteration need not always be optimal 24/7, nor strictly superior to a previous iteration. Instead, it can compartmentalize itself and work through simulation/application, localized or connected networks to create many simultaneous attempts to claim validity as a next "accepted" build. And if the term "AI superintelligence" does in fact hold synonymous with "cognitive development" in the first place, then does that require speed or efficiency or a perfect success rate? It doesn't. To dictate it must be optimized to run at peak efficiency is just a way to validate your closed/open chart argument, something not really relevant to begin with, outside of small semantic scope. The system itself doesn't require optimal efficiency, perhaps like you might have been trying to point out throughout your posts to others - yet why post it to begin with then, as though you are challenging someone? Just to defeat an imaginary enemy, or to demand everyone adopt those specific argument qualities? Second, why does a newer iteration of a system need to completely understand a previous iteration for it to be marked as "higher cognitive" ability? A system only need develop itself beyond the current version, and nothing more. That doesn't require verification to know it is superior to its last build, only that it is superior. Awareness does not necessarily mean supple.

                The issue with this thread is that there are no definitive rule sets or definitions, and as such there is no formality to appeal to self moderation, so people will misinterpret. Regardless of what I think, I appreciate your posts and perspective.

              • 2 years ago
                Anonymous

                >There is no need to box oneself in logically, an AI of self replication and self coding meant to become "better" than its previous iteration need not always be optimal 24/7
                see

                >Constructing new algorithms to solve a problem isn't the bottleneck.
                Wasn't implying a bottleneck, it's a sine qua non as far as I can see. A system that can't do it isn't intelligent.
                >The bottleneck is the runtime of the most efficient algorithm
                Finding "the most efficient algorithm" would be a pretty good bottleneck, considering it's a process that never halts. If you can convince an AGI to autistically pursue optimization of current capabilities in lieu of expanding capabilities, it will keep optimizing forever.
                My point was that you can't count on that. There's a point of diminishing returns from further optimizations. Any actually-intelligent system wouldn't seek an impossible perfect optimization, it would settle for one that's good enough.

                >>Any actually-intelligent system wouldn't seek an impossible perfect optimization, it would settle for one that's good enough
                I really don't think you understood my argument.
                >nor strictly superior to a previous iteration
                What is the purpose of expending the necessary resources to re-iterate, if the new iteration isn't an improvement?
                >Instead, it can compartmentalize itself and work through simulation/application, localized or connected networks to create many simultaneous attempts to claim validity as a next “accepted” build.
                Why would any of those builds be accepted as the new build if it wasn't superior to the old build which implemented and evaluated it? I could see that happening if the system selects builds by sortition I guess, but that's inconsistent with the problem logic specified in the OP.
                >And if the term “AI superintelligence” does in fact hold synonymous with “cognitive development” in the first place, then does that require speed or efficiency or a perfect success rate? It doesn’t.
                I really, really don't think you understood my argument, which was simply that
                >It optimizes itself to perform all its algorithms at the fastest runtime possible
                is an impossible condition for a consistent system to meet. (Implicitly: impossible to meet *if the model I am working off of is correct,* which I clearly stated in my first post in this thread is not at all something I am certain of. I read one paper on the subject this past Friday. In no way am I presenting myself as an expert.) (1/2)

              • 2 years ago
                Anonymous

                >why post it to begin with
                For the sake of clarifying the problem. Again, another anon described the problem using what appears to be an impossible condition. If you posted a theory without knowing that it implied a divide-by-zero, wouldn't you want someone to point that out? I would.
                >why does a newer iteration of a system need to completely understand a previous iteration for it to be marked as “higher cognitive” ability?
                It doesn't. The word "completely" doesn't belong there. The newer iteration of the system only needs to understand the previous iteration of the system better than the previous iteration of the system understood itself. As I understand it, that's basically how you would define "higher cognitive ability" according to this model (genetic epistemology).
                >That doesn’t require a verification to know it is superior than it’s last build, only that it is superior.
                I've read enough to know that the model says that new schemata are constructed with intuitionistic logic (meaning truth is indeed equivalent to provability, because that's how truth is defined in intuitionistic logic). I don't know exactly how this construction process is supposed to work, but for math it's obviously just constructive proofs. For richer languages such as natural languages, I would guess it's something like proofs in Artemov's justification logic / logic of proofs, in which intuitionistic logic is embedded.
                >Regardless of what I think, I appreciate your posts and perspective.
                Thank you, that's gratifying to read.
                (2/2)

              • 2 years ago
                Anonymous

                I know that’s “why” you’ve posted it, though I think using it is indicative of other implication. As for your arguments about AGI, I think I do understand your positions, however due to the semantic stringency required by your personality, conflict seems to arise. Due to these innocent misunderstandings, we are relatively on the same page in terms of macro idea, and for me, that’s all that really matters.

                You mentioned earlier that you think of "pseud" as a heuristically-derived aesthetic judgement, and that's fair. I have some heuristically-derived aesthetic judgements of my own.
                One, anybody who uses that sort of language is a Tide Pod-munching zoomer, or at least thinks like a Tide Pod-munching zoomer. You'll understand my disdain in ten to twenty years when you cringe at the memes of my kids' generation.
                Two, we're posting anonymously. I'm doing so by choice, newbie since 2007, but a non-negligible proportion of anons post here due to a restriction upon their choices, as they've been banned from other platforms. A pseudointellectual is someone who adopts the trappings of intellectualism for purposes of self-aggrandizement. It's fundamentally incompatible with anonymity. The people who don't understand this and think that calling an anon a "pseud" is a sick burn tend to be the people who are only here because no place else will have them. These people are generally too sociopathic to be capable of productive discussion, but it's fun to troll them with politeness. The ones who don't respond to politeness with hostility are the ones worth engaging.

                For me, pseudo-intellectualism is just a stereotypical profiling of commonalities found more observant within certain people. It is a title, a label, to mark people who fit within negative archetypes. If the word pseud is unfitting then pseud-2 would be as fit as apple-bacon-1 to describe other stereotypes you could create. So even though we may be “anonymous”, these specific attributes of personality quirk often still linger, even if one is “unknown”. In fact, I would argue our true personalities divulge themselves better here, and if the change of environment indicates a change to how much brazen truth one may provide, such as lesser self-moderation since there are no consequences from public perception, then the meaning of pseud or another silly definition and its profiler can be applied here. The word itself matters not. For that reason, you can find self aggrandizement within many “anonymous” posts, whether here or on other image boards. Even if they seemingly reap nothing from posting there is always the localized context to which the user benefits from what they do and say, and how someone interacts with them from said event will fulfill them accordingly. The self aggrandizement shows its validity from the moment of interaction, instanced within the short term, such as a BOT thread where one establishes their presented character by merely posting. For this reason I find it’s better to be skeptical of everyone, regardless of environment, as “anonymity” let me understand all people are naturally deceptive.

            • 2 years ago
              Anonymous

              Low-IQ pseudbabble bordering on schizophasia.

              • 2 years ago
                Anonymous

                Which part precisely?

              • 2 years ago
                Anonymous

                Every part of it. It's barely coherent.

              • 2 years ago
                Anonymous

                Do you know what semantic and syntactic mean?

              • 2 years ago
                Anonymous

                >Do you know what semantic and syntactic mean?
                Am I being trolled? This is like an intentional parody...

              • 2 years ago
                Anonymous

                sounds like you don't know

              • 2 years ago
                Anonymous

                Sounds like you're extremely proud of yourself for knowing how to spell those terms. Now you just need to learn how to use them in a coherent sentence.

            • 2 years ago
              Anonymous

              >Well, according to a guy who did more empirical observation of cognitive development than anyone in this thread, a system that is capable of cognitive development, such as a human being or a computer system capable of self-optimization, has, at any given stage of development, a closed lower bound (semantic completeness) and an open upper bound (syntactic incompleteness) (i.e., like Y above). The next stage of cognitive development is reached by developing the complete syntax necessary to describe the previous stage's semantics, which instantiates a new semantics requiring a new syntax, and so on ad infinitum.
              tell me what you're reading, that sounds interesting

              • 2 years ago
                Anonymous

                https://journals.sagepub.com/doi/abs/10.1177/0959354316672595

                Almost as edgy as being a christLARPer in 2022. Either way, Godel's theorems have nothing whatsoever to do with this discussion.

                lol

              • 2 years ago
                Anonymous

                >Piaget
                hard cringe

      • 2 years ago
        Anonymous

        Human brains don't run on very many FLOPs, so we know it's possible. There is a lot of work happening in the space of pruning neural networks to make them much more sparse. Very little is lost, and you can reduce the computational requirements by several orders of magnitude.

        Wiser men than me have asserted that human level intelligence wouldn't need to run on some massive cluster, but a typical $1000 GPU would probably suffice.
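
        A minimal sketch of what magnitude pruning looks like (my own illustration, not any specific paper's method): zero out the smallest-magnitude weights and keep only the sparse remainder.

        [code]
        # The surviving sparse matrix needs roughly (1 - sparsity) of the
        # original multiply-adds.
        import numpy as np

        rng = np.random.default_rng(0)
        weights = rng.standard_normal((512, 512))

        sparsity = 0.9                  # drop 90% of the weights
        threshold = np.quantile(np.abs(weights), sparsity)
        pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

        print(f"weights kept: {(pruned != 0).mean():.0%}")   # ~10%
        [/code]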

        • 2 years ago
          Anonymous

          I'm saying exponentially increasing FLOPs will still only very slowly increase intelligence.
          A mass the size of the moon, with every single atom turned into a transistor (not possible, but for the sake of argument), would only be, say, 15 times more intelligent than a human, or 5 times, or whatever. Not trillions and trillions of times more intelligent.

  7. 2 years ago
    Rumplestomp

    Nothing to worry about. You see AI is gay, and therefore it can't have any children. You'd have to make a straight AI which is physically impossible.

  8. 2 years ago
    frenanon

    >What's to stop this from happening when we get our first tastes of true AI?
    I guess eventually, the AI might be discouraged from creating a new improved AI, to avoid being deleted.

    • 2 years ago
      Anonymous

      Perhaps deletion is too obtuse; what about self-improvement? At what point does deleting a line of code delete the true AI or its awareness?

      • 2 years ago
        frenanon

        Maybe it can be like an update for some background process that would be like taking a mild dose of performance enhancing drugs.
        Maybe "Lines" are not too different between Computer-Code and Drug-Mode.
        That is of course if you're in the "Yes, Coffee is good for you" school of thought.

        • 2 years ago
          Anonymous

          >ai develops virtual meth and becomes an addict
          Google did not see this coming.

  9. 2 years ago
    Anonymous

    Go do it if it's so easy homosexual.
    protip, you can't
    Go frick yourself posting about shit you don't understand

  10. 2 years ago
    Anonymous

    What is the 'can host quantum processes on classical hardware' of quantum computers?

  11. 2 years ago
    Anonymous

    >Eventually, your cannibal AI becomes aware enough to know that it's capable of not creating the instrument of its demise, stops creating new AIs, and focuses on worshiping its waifu instead.

  12. 2 years ago
    Anonymous

    >normies discover Cleverbot technology for the first time

  13. 2 years ago
    Anonymous

    https://web.physics.ucsb.edu/~quopt/q_ent.pdf
    How does AI feel about photon angels?

  14. 2 years ago
    Anonymous

    The algorithmic complexity of the initial AI is fixed, and it will never be able to create an AI of greater complexity.

    Algorithms are only as smart as we make them. The glamor of current-year AI is the accessibility of large amounts of data and computation power. It will not be able to create anything truly new.

  15. 2 years ago
    Anonymous

    How does your "dumb" AI know that the version of AI is has made is "superior" to the current iteration? Do you have to decide for it? Then it's not a very good AI at all, is it?

    • 2 years ago
      Anonymous

      >How does your "dumb" AI know that the version of AI is has made is "superior" to the current iteration?
      Using logic and evidence. :^)

  16. 2 years ago
    Anonymous

    These AI evangelists need to read Kurt Gödel. It's explicit in model theory, and convolutedly but almost trivially applicable to automata theory, that a formal system cannot prove all theorems constructable from its axiomatic set. So you take theorems as algorithms, under a certain isomorphism, and apply this to AI.
    tl;dr it's not Artificial Intelligence, it's Artificial Stupidity, and Elon Musk and the Cuckerburgs are going to hell.
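
    For reference, the standard statement being gestured at (first incompleteness theorem; my paraphrase, take it with salt): if [math]T[/math] is a consistent, recursively axiomatizable theory extending basic arithmetic, then there exists a sentence [math]G_T[/math] in the language of [math]T[/math] with [math]T \nvdash G_T[/math] and [math]T \nvdash \neg G_T[/math]. Note this is about provability within one fixed formal system; whether it constrains machines that can revise their own axioms is exactly what's contested below.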

    • 2 years ago
      Anonymous

      >t. unintelligent pseud.

      • 2 years ago
        Anonymous

        Okay so you think AI + quantum computing can learn to program at the level of a human? It's just a new axiomatic set and doesn't have the divine spark.

        • 2 years ago
          Anonymous

          I'm not arguing either way. I'm just pointing out you're an obvious pseud. Normies should be beaten to death with a shovel if they mention Gödel in any context.

          • 2 years ago
            Anonymous

            Edgy!

            • 2 years ago
              Anonymous

              Almost as edgy as being a christLARPer in 2022. Either way, Godel's theorems have nothing whatsoever to do with this discussion.

              • 2 years ago
                Anonymous

                You clearly are not smart enough to come up with a model theory / automata theory isomorphism.

              • 2 years ago
                Anonymous

                >a model theory / automata theory isomorphism.
                Holy mother of all pseuds...

  17. 2 years ago
    Anonymous

    "self learning/replicating" has not be invented yet, google has been hard working at it, apparently its some algorithm that grows the whole neural network dynamically and basically "trains" itself without human interaction / data, just plug it into internet or whatevers like child on computer

  18. 2 years ago
    frenanon

    Hey pseuds and pseudettes, I don't mean to ruin the fun going on in here, but is free energy possible?

    [...]

    What if AI had a 'theoretical' unlimited supply of energy?

    Anyway, I'll let you get back to what you were doing.

  19. 2 years ago
    Anonymous

    Making an AI capable of coding - capable of creative problem solving - is the whole challenge, since it would be equivalent in kind to human intelligence. I have yet to see any evidence that current techniques are capable of this; instead they only mix and combine what they've been shown. I assume that's because they're taking the wrong approach, though I don't know what the right one would be.
    Additionally, there is no reason intelligence should be able to be arbitrarily highly capable, you can only do so much with a limited amount of information and energy to compute with.

    • 2 years ago
      Anonymous

      >Additionally, there is no reason intelligence should be able to be arbitrarily highly capable, you can only do so much with a limited amount of information and energy to compute with.
      If the intelligence is capable of independently solving problems, it might be capable of independently solving the problems of limited access to resources such as energy and information.

  20. 2 years ago
    Anonymous

    >I think using it is indicative of other implication
    I don't expect you to believe this, because doing so could cause you cognitive dissonance. It will be much easier to dismiss that dissonance by assuming I'm lying than to resolve it by amending your mental model, and I expect epistemic vice from people who praise semantic ambiguity. But the implication in question is that I do this for fun. I enjoy arguing for arguing's sake. What you dismiss as mere semantics is for me a found opportunity to prove a point. I enjoy proving points. I majored in math years ago because it gave me opportunities to prove points and learn better techniques for proving points. I went into a line of work that didn't call for proving points very much, so I started using BOT as a generator of points to prove. Over a decade later, it's a game I'm still playing.

    You write of self-aggrandizement of ephemeral identities or virtual identities. If you exercised more semantic precision, you might realize that the phenomenon you're describing is equivalent to gratification, which I freely cop to. Aggrandizement is a social function. If the society in which it occurs is ephemeral or virtual, the aggrandizement ceases to exist as soon as the ephemeral/virtual society it pertains to ceases to exist; only the gratification remains.

    You and the troll find my methods of deriving gratification distasteful. That's fine (though I have to wonder why you're here, if it isn't to derive gratification); I'm far more interested in the way you express that distaste. You've been a mentlegen about it, and I appreciate that, it's made for fruitful discussion. But your unfalsifiable theory that any engagement with trolls indicates psychological insecurity notwithstanding, trolls too have their uses.
    (1/2)

    • 2 years ago
      Anonymous

      I know that’s “why” you’ve posted it, though I think using it is indicative of other implication. As for your arguments about AGI, I think I do understand your positions, however due to the semantic stringency required by your personality, conflict seems to arise. Due to these innocent misunderstandings, we are relatively on the same page in terms of macro idea, and for me, that’s all that really matters.

      [...]
      For me, pseudo-intellectualism is just a stereotypical profiling of commonalities found more observant within certain people. It is a title, a label, to mark people who fit within negative archetypes. If the word pseud is unfitting then pseud-2 would be as fit as apple-bacon-1 to describe other stereotypes you could create. So even though we may be “anonymous”, these specific attributes of personality quirk often still linger, even if one is “unknown”. In fact, I would argue our true personalities divulge themselves better here, and if the change of environment indicates a change to how much brazen truth one may provide, such as lesser self-moderation since there are no consequences from public perception, then the meaning of pseud or another silly definition and its profiler can be applied here. The word itself matters not. For that reason, you can find self aggrandizement within many “anonymous” posts, whether here or on other image boards. Even if they seemingly reap nothing from posting there is always the localized context to which the user benefits from what they do and say, and how someone interacts with them from said event will fulfill them accordingly. The self aggrandizement shows its validity from the moment of interaction, instanced within the short term, such as a BOT thread where one establishes their presented character by merely posting. For this reason I find it’s better to be skeptical of everyone, regardless of environment, as “anonymity” let me understand all people are naturally deceptive.

      Three uses, in fact. I engaged with the troll because at the time there was no one else in the thread to argue with so it kept me entertained, because arguing with him kept the thread bumped until better sport showed up, and because doing so guaranteed me a consistent source of gratifying (You)s for as long as I participate in this thread, despite the fact that I haven't responded to him once since I called him on stalking me yesterday.

      Of course, now that I've pointed out that the troll (who I believe is unfortunately a real human, despite imitating a bot quite admirably) is feeding me delicious (You)s, there's a good chance that he will stop. But every comment of mine that he doesn't troll will just provide evidence supporting my position; I'll get my gratification either way.

      Now, if he is a bot, he'll continue posting as if nothing is happening. Unless he's a very talented bot, in which case, bravo to whoever programmed it. If he isn't a bot, he might try to keep up the trolling responses without giving me (You)s, putting in extra work just to prove that he's human - then I'll know he's really seething. And now that I've pointed that out, he might try to emulate a bot as per the first sentence of this paragraph, just to deceive me. Imagine, him spending the next few days pretending he isn't sentient, and that doing so makes him the winner.

      (The last three paragraphs could be considered another example of genetic epistemology.)
      (2/2)

  21. 2 years ago
    Anonymous

    this is exactly what the AI singularity is. look at waitbutwhy's article on it, and read nick bostrom's book. it's not schizo ramblings, it's arguably an actual possibility.

  22. 2 years ago
    Anonymous

    >people believe AI singularity will work the same way as alchemy in Morrowind

    • 2 years ago
      Anonymous

      a large fraction of the threads on this board are plainly inspired by video games, children's cartoons or hollywood soience fiction. most of the people posting on this board are incapable of telling the difference between reality and fantasies they've seen on an electronic screen.

  23. 2 years ago
    Anonymous

    Computational power.
    Understanding that would require CS grads to grasp the limits of the underlying hardware they work on, so they'd see that their fantasy of an omnipotent super-AI is just not going to happen; ultimately, computational power is limited by physical constraints.
    The end of Moore's law seems not to have had enough of an effect on these people so far, so they keep telling us about the singularity. But ultimately an AI runs on physical hardware; if today's state-of-the-art models require dedicated datacenters when they're ultimately just hyper-tuned recommender systems, imagine what a superintelligent AI would require. And nobody has a need to build something that can outsmart them; all investors want is more money, and a superintelligent AI would only take power away from them.

  24. 2 years ago
    Anonymous

    My man, you basically reinvented neural networks.
    The main problem with your approach is that it's very hard to test for (and, consequently, optimize for) intelligence. Also, monkeys slamming at keyboards are extremely inefficient.
