CEO of Google says it has no solution for its AI providing wildly incorrect information

>According to an interview at The Verge with Google CEO Sundar Pichai published earlier this week, just before criticism of the outputs really took off, these "hallucinations" are an "inherent feature" of AI large language models (LLM), which is what drives AI Overviews, and this feature "is still an unsolved problem."

https://futurism.com/the-byte/ceo-google-ai-hallucinations


  1. 3 weeks ago
    Anonymous

    agi in two more weeks

    • 3 weeks ago
      Anonymous

      Just 100x more parameters bro trust me bro it will scale and gain consciousness bro

      • 3 weeks ago
        Anonymous

        this but unironically

        • 3 weeks ago
          Anonymous

          bro

          • 3 weeks ago
            Anonymous

            why don't you feel the agi?

        • 3 weeks ago
          Anonymous

          an actual moronic Black person.

  2. 3 weeks ago
    Anonymous

    I have the solution, put this on it.

  3. 3 weeks ago
    Anonymous
    • 3 weeks ago
      Anonymous

      why do indians love AI so much?

      • 3 weeks ago
        Anonymous

        >why do bullshitters love bullshitting technology

      • 3 weeks ago
        Anonymous

        > jeets use animal feces to improve food texture
        > jeet-AI suggests using glue instead
        saars please redeem the glue so you have more cow dung for desert

    • 3 weeks ago
      Anonymous

      also +Black person

  4. 3 weeks ago
    Anonymous

    and they never will
    machines can only be counterfeits, not intelligent

    • 3 weeks ago
      Anonymous

      Provably false but aight, have a nice day anon

      • 3 weeks ago
        Anonymous

        the jeet copes out as he seethes lel

  5. 3 weeks ago
    Anonymous

    We KNEW this.
    LLMs just continue the text with something that matches a textyness criterion we can't pick apart and steer in detail. This isn't new.
    Why do you think they have to amend our prompts and do post-filtering to censor and direct the way it answers? They don't have control over the middle part, the big juicy center that is trained on the shitload of data you hear about.
    LLMs are basically Bullshit Machines. They're designed to yap and there's no quality control.
    We knew this. YOU knew this. Right?
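The point in post 5 can be sketched in a few lines. Everything below is a made-up toy (the bigram table stands in for the opaque trained model, and `post_filter` for the bolt-on output censor); it only illustrates the claim that vendors can intervene before and after generation, but not inside the sampling loop itself:

```python
import random

# Toy "LLM": a bigram table standing in for the opaque middle part.
# We can only sample continuations from it, not steer individual
# associations baked into it by training.
BIGRAMS = {
    "glue": ["on", "is"],
    "on": ["pizza"],
    "is": ["tasty", "toxic"],
    "pizza": ["<eos>"],
    "tasty": ["<eos>"],
    "toxic": ["<eos>"],
}

def generate(prompt, rng):
    """Continue the prompt one token at a time until <eos>."""
    tokens = prompt.split()
    while tokens[-1] in BIGRAMS:
        nxt = rng.choice(BIGRAMS[tokens[-1]])
        if nxt == "<eos>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

BLOCKLIST = {"glue"}

def post_filter(text):
    """Bolt-on filter applied AFTER generation: it can only reject
    a bad continuation, not prevent the model from producing it."""
    if any(word in BLOCKLIST for word in text.split()):
        return "[answer withheld]"
    return text

raw = generate("glue is", random.Random(0))
print(post_filter(raw))  # → [answer withheld]
```

The filter never touches the sampling loop; it sees only the finished string, which is exactly the "no control in the middle part" situation the post describes.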

  6. 3 weeks ago
    Anonymous

    The solution is to not hire jeets and to not use reddit to train your AI.

  7. 3 weeks ago
    Anonymous

    >and this feature "is still an unsolved problem."
    WTF I thought we were going to have the fwooom in 2023 and AI would train itself?

    Could twitter Black folk have been wrong for years?

    • 3 weeks ago
      Anonymous

      You don't understand, the AI is just hallucinating.
      Stop being a luddite.

    • 3 weeks ago
      Anonymous

      >Could twitter Black folk have been wrong for years?
      yes. and they will always be wrong until the next fad they desperately try to make money out of. reminder: the same delusional sex offenders who shilled bitcoin and lost everything are the very same sex offenders who now call themselves "experts" at "AI", despite never writing a single line of code or having a basic understanding of how the technology works.

  8. 3 weeks ago
    Anonymous

    obviously that's not true
    you can't get gpt-4 to spout nonsense like that unless you really force it to; clearly their model is just bad

  9. 3 weeks ago
    Anonymous

    >it has no solution
    The solution is actually very easy. If they're confused about what to do, they can ask AI for help.

  10. 3 weeks ago
    Anonymous

    Drink the piss, what are you, a luddite?

  11. 3 weeks ago
    Anonymous

    He is an idiot. Took me 5 seconds to Google this

    https://www.kcl.ac.uk/news/new-study-finds-evidence-for-reduced-brain-connections-in-schizophrenia

    This seems to be exactly what their chip architecture is like.

  12. 3 weeks ago
    Anonymous

    >train it on reddit
    >spouts nonsense
    who would have thunk.

  13. 3 weeks ago
    Anonymous

    >and this feature "is still an unsolved problem."
    An unsolved problem doesn't mean an unsolvable problem.

  14. 3 weeks ago
    Anonymous

    The funny thing is that cases like this demonstrate exactly what data the LLM was trained on.

  15. 3 weeks ago
    Anonymous

    Why are poojeets so moronic? If anything, this highlights how garbage Google is as a search engine. If they give an LLM the top 10 results to summarize, there should be only a very low chance of hallucinations, unless Gemini is shit at retrieval. But since Google rewards SEO spam and other """optimizations""", of course the top results will be shit.

    How come Phind never has this issue for me?
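Post 15's garbage-in-garbage-out argument can be sketched. Every name below (`PAGES`, `search`, `summarize`) is hypothetical; `summarize` is a deliberately dumb stand-in for the LLM, to show that if retrieval ranks SEO spam first, the "overview" reproduces the spam no matter how good the model is:

```python
# Hypothetical mini web index: one spam page, one legitimate page.
PAGES = {
    "seo-spam.example": "Put glue on pizza for better cheese texture!!!",
    "cooking.example": "Let pizza rest a few minutes so the cheese sets.",
}

def search(query, index, k=2):
    # Naive keyword-overlap retrieval standing in for the search engine.
    scored = sorted(
        index.items(),
        key=lambda kv: -sum(w in kv[1].lower() for w in query.lower().split()),
    )
    return [text for _, text in scored[:k]]

def summarize(query, snippets):
    # Stand-in for the LLM: it faithfully restates what retrieval
    # handed it, so spam in the top result becomes spam in the answer.
    return f"{query}: " + " ".join(snippets)

top = search("cheese pizza", PAGES, k=1)
print(summarize("cheese sliding off pizza", top))  # spam page wins the ranking
```

Both pages match the query equally here, and the spam page wins only on tie-breaking order; the point is that the summarizer has no way to recover from a bad ranking upstream.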

  16. 3 weeks ago
    Anonymous

    is this a universal language problem? information might be more relevant if it came from the language actually used in that geolocation
