This AI stuff is getting out of hand. What do I tell my friends who are off in cloud cuckoo land over it? Or should I join in the fun?

  1. 2 months ago
    Anonymous

    You can start by speaking English

    • 2 months ago
      Anonymous

      Ironic. You must be a Europoor

  2. 2 months ago
    Anonymous

    We're just getting started.

  3. 2 months ago
    Anonymous

    What exactly is your average person using AI for that is fun?

    • 2 months ago
      Anonymous

      https://voca.ro/1gagsLZ3cT61

    • 2 months ago
      Anonymous

      anon...

      • 2 months ago
        Anonymous

        Are you going to tell me? Generating audio/video shit is boring, idc if the quality has improved. The novelty isn't there anymore. Seriously what are people still doing with this shit? I don't personally know anyone who still gives a crap.

        • 2 months ago
          Anonymous

          the real fun is taking a side on x-risk and spending all day debating idiots from the other side online

  4. 2 months ago
    Anonymous

    AI hype is getting out of hand. These are just advanced search algorithms/programs; they're not intelligent by any metric. AI's usefulness is being grossly overstated
    >ask ChatGPT to write a basic C program consisting of 2 functions
    >it doesn't work
    >try copilot
    >it won't compile

    • 2 months ago
      Anonymous

      While it's true there is no semantic reasoning occurring in the models, the human response is for the most part equally pathetic.

      It's like we're living in an /x/ shitpost where there aren't enough souls to go around and the rest have to make do with symbolic automata

    • 2 months ago
      Anonymous

      >they're not intelligent by any metric
      except, you know, IQ tests
      https://www.maximumtruth.org/p/ais-ranked-by-iq-ai-passes-100-iq

      • 2 months ago
        Anonymous

        This is stupid. If a human with no familiarity with the test gets 100, that tells you something. If an AI that has who knows what in its training set passes, that tells you absolutely nothing.

        • 2 months ago
          Anonymous

          most humans that take IQ tests have at least a few decades of training data, not counting 600 million years of architectural optimization

          • 2 months ago
            Anonymous

            There are answers to the test online, so there are a lot of reasons to suspect the results are tainted. Even if the test were new to it, there are a lot of subtypes of questions that it could recognize and answer correctly simply because it has such strong recall capabilities.
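
The contamination worry above can be made concrete with a toy check: flag test questions whose word n-grams appear verbatim in a "training corpus". Everything here (the data, the threshold-free scoring) is invented for illustration; real contamination audits are far more involved.

```python
# Toy contamination check: what fraction of a question's 5-grams
# appear verbatim somewhere in the training corpus? A high score
# means a good test result tells you little about reasoning ability.

def ngrams(text, n=5):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def contamination_score(question, corpus, n=5):
    """Fraction of the question's n-grams found verbatim in the corpus."""
    q = ngrams(question, n)
    if not q:
        return 0.0
    seen = set()
    for doc in corpus:
        seen |= ngrams(doc, n)
    return len(q & seen) / len(q)

corpus = ["which figure completes the pattern rotate the shape ninety degrees clockwise"]
leaked = "which figure completes the pattern rotate the shape ninety degrees clockwise"
fresh = "select the odd one out among the five figures shown below"

print(contamination_score(leaked, corpus))  # 1.0: every 5-gram is in the corpus
print(contamination_score(fresh, corpus))   # 0.0: no overlap
```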

    • 2 months ago
      Anonymous

      >t. election tourist

      • 2 months ago
        Anonymous

        kys

  5. 2 months ago
    Anonymous

    Squabbling over "It's intelligent! it's not intelligent! it's self aware! it has no semantics!"/whatever is a psychic trap. Either discuss properties you can actually quantify, or moral dilemmas you think are relevant NOW, or move on.

  6. 2 months ago
    Anonymous

    There's so many midwits on r/singularity claiming that "humans are just token predictors" lmao

    Materialist reductionism taken to absurd extremes

    • 2 months ago
      Anonymous

      >"humans are just token predictors"
      Are we not?
      Well, some people aren't even that sophisticated.
      I've yet to see an example of some type of intellectual task that couldn't be done by a sufficiently scaled up token predictor.
      Later this year, when GPT5 is released with Q* built into it, the question will be moot, as ChatGPT will no longer be just predicting tokens but searching through the space of possible outputs, like AlphaGeometry did for mathematical reasoning.

      • 2 months ago
        Anonymous

        >I've yet to see an example of some type of intellectual task that couldn't be done by a sufficiently scaled up token predictor.

        how do LLMs do with prompt engineering themselves?

        • 2 months ago
          Anonymous

          >how do LLMs do with prompt engineering themselves?
          Unsurprisingly, researchers have tried to get LLMs to write their own prompts, which is a bit like asking a genie to give you more wishes.
          https://aclanthology.org/2023.findings-acl.216/
          > we propose Consistency-based Self-adaptive Prompting (COSP), a novel prompt design method for LLMs. Requiring neither handcrafted responses nor ground-truth labels, COSP selects and builds the set of examples from the LLM zero-shot outputs via carefully designed criteria combining consistency, diversity and repetition. In the zero-shot setting for three different LLMs, we show that using only LLM predictions, COSP significantly improves performance up to 15%
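
The core of the COSP recipe quoted above can be sketched in a few lines: sample several zero-shot answers per question, keep the questions where the model agrees with itself most, and reuse those as in-context examples. The "model" here is just canned sample data, and real COSP also scores diversity and repetition, which this toy version omits.

```python
# Sketch of consistency-based example selection (COSP-style):
# questions where sampled answers agree are promoted to demos.
from collections import Counter

def self_consistency(samples):
    """Return (majority answer, agreement fraction) over sampled answers."""
    counts = Counter(samples)
    answer, votes = counts.most_common(1)[0]
    return answer, votes / len(samples)

def build_demo_set(question_to_samples, k=1):
    """Pick the k questions the model answers most consistently."""
    scored = []
    for question, samples in question_to_samples.items():
        answer, agreement = self_consistency(samples)
        scored.append((agreement, question, answer))
    scored.sort(reverse=True)
    return [(q, a) for _, q, a in scored[:k]]

samples = {
    "2+2?":   ["4", "4", "4"],         # fully consistent -> good demo
    "17*23?": ["391", "381", "401"],   # inconsistent -> skipped
}
print(build_demo_set(samples, k=1))  # [('2+2?', '4')]
```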

        • 2 months ago
          Anonymous

          >prompt engineering
          I can't believe this is a real job. For women.

      • 2 months ago
        Anonymous

        >as ChatGPT will no longer be just predicting tokens but searching through the space of possible outputs, like AlphaGeometry did for mathematical reasoning.

        Isn't that just predicting tokens with a slightly different algorithm? Not that I think there's anything invalid about token prediction

        • 2 months ago
          Anonymous

          The only reasonable criticism of the way LLMs "predict tokens" is that generation is a single pass through the inference process: they can't backtrack or search around for novel ideas.
          But you're right that criticizing an artificial neural network for "predicting tokens" is like criticizing a biological neural network for "predicting inclusive genetic fitness".
          It's true that humans are constantly trying to model the world and picking actions that will avoid pain and achieve pleasure, including having healthy offspring, but that doesn't mean that our intellectual achievements aren't real, just like the achievements of LLMs.
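
The single-pass-vs.-search distinction in this exchange can be illustrated with a toy decoder: a one-shot generator keeps its first sample, while a searcher draws several candidates and keeps the best under some scoring function (a verifier, a reward model, etc.). The "model" below is just a seeded random sampler standing in for candidate quality; nothing here is a real decoding implementation.

```python
# Toy contrast: single-pass decoding vs. best-of-N search over outputs.
import random

def sample_candidate(rng):
    """Stand-in for one stochastic decoding pass; returns a quality score."""
    return rng.random()

def single_pass(rng):
    """Take the first sample, no search."""
    return sample_candidate(rng)

def best_of_n(rng, n=8):
    """Draw n candidates and keep the best-scoring one."""
    return max(sample_candidate(rng) for _ in range(n))

rng = random.Random(0)
ones = [single_pass(rng) for _ in range(200)]
bests = [best_of_n(rng, n=8) for _ in range(200)]
print(sum(bests) / 200 > sum(ones) / 200)  # True: search beats one draw on average
```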
