Nice ChatGPT competitor, google

  1. 11 months ago
    Anonymous

    Compared to

    • 11 months ago
      Anonymous

      https://i.imgur.com/SrCMIR0.jpg

      Nice chatGPT competitor google

      Here you go.
      Picrel is the same query on Wizard-13B-Uncensored, a locally run model (13 billion parameters, vs. Bard supposedly running on a 540-billion-parameter model).

      • 11 months ago
        Anonymous

        Bard runs on LaMDA-137B, not PaLM-540B. Still, it's pretty fricking embarrassing, and it really shows you what censorship does to a language model.

        • 11 months ago
          Anonymous

          Didn't they say yesterday in the keynote that they switched it to PaLM2? Or is that a "coming soon..."

          • 11 months ago
            Anonymous

            I had thought that was a coming soon, but it looks like it might be a now thing. As far as I can tell, though, we don't know anything about PaLM 2. I doubt it's 540B parameters, because Google's own DeepMind showed how inefficient that was with Chinchilla.
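The Chinchilla point, for anyone who hasn't read the paper: training FLOPs scale roughly as 6·N·D (N params, D tokens), and the compute-optimal data budget works out to roughly 20 tokens per parameter, while PaLM-540B only saw about 780B tokens. A quick back-of-envelope sketch (my own approximations and rounded numbers, not taken verbatim from either paper):

```python
# Rough Chinchilla-style scaling math. The 6*N*D FLOPs rule and the
# ~20 tokens/parameter heuristic are standard approximations, not exact.

def train_flops(params, tokens):
    """Approximate training compute: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

def chinchilla_tokens(params):
    """Roughly compute-optimal token count: ~20 tokens per parameter."""
    return 20 * params

palm_params = 540e9    # PaLM-540B
palm_tokens = 780e9    # PaLM trained on ~780B tokens
chin_params = 70e9     # Chinchilla-70B
chin_tokens = 1.4e12   # ...trained on ~1.4T tokens

print(f"PaLM compute:       {train_flops(palm_params, palm_tokens):.2e} FLOPs")
print(f"Chinchilla compute: {train_flops(chin_params, chin_tokens):.2e} FLOPs")
# A compute-optimal 540B model would want ~20 * 540B tokens of data:
print(f"Optimal tokens for 540B: {chinchilla_tokens(palm_params):.2e}")
```

By that rule of thumb, a compute-optimal 540B model would want on the order of 10T training tokens, which is exactly why a severely undertrained giant like PaLM-540B looks like a waste of compute next to a smaller model fed more data.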

            • 11 months ago
              Anonymous

              In either case it's a complete embarrassment. OAI essentially just brute-forced it, shitting out as many parameters as they could afford to make ChatGPT and GPT-4, and still ended up with more for their efforts than google did with endless R&D on Bard.

          • 11 months ago
            Anonymous

            (Me)
            I'm reading that they switched to PaLM-540B back in April after their launch embarrassment, and that now it's switching to PaLM 2 (also 540B), so I'm assuming it's a finetuned version of PaLM.

            • 11 months ago
              Anonymous

              Be careful with this kind of big tech hype journalism; every article is filled with assumptions, and every quote is full of lies by omission. There are a dozen flavors of PaLM, and just because it's PaLM doesn't mean it's 540B. There's no evidence that PaLM 2 has a 540B model at all. I doubt that it's 540B, since that would make it as slow as GPT-4, and then they'd be expected to give results on par with GPT-4, when to date they've been struggling to match Turbo in response quality.

              • 11 months ago
                Anonymous

                They're struggling to match Wizard13B in response quality. If you went on performance alone you could only conclude that they're running Pygmalion.

              • 11 months ago
                Anonymous

                Censored or uncensored Wizard? I wouldn't be surprised if uncensored gives significantly better responses, but a big company can't release an uncensored AI model without an "it's just for research purposes (in Minecraft), okay guys?" disclaimer like LLaMA.

              • 11 months ago
                Anonymous

                I've only ever run the uncensored version for 13B, although I did run the censored 7B version, and its outputs were rather impressive for a 7B model. It literally blows away anything else in its weight class, and apparently the guy recently got funded to go ahead with a 30B version, and possibly even a 65B.

              • 11 months ago
                Anonymous

                [...]
                They have four different sizes for PaLM 2: Gecko, Otter, Bison, and Unicorn. There's no waitlist anymore for Bard; it's probably one of the smaller ones.

                According to the paper, the largest PaLM 2 is "significantly larger" than 540B parameters. I doubt they use that for anything but internal testing. There are very few references to parameter counts in the paper; the largest number directly referenced is 14.7B, but that's supposedly using a completely different dataset and training method from PaLM 2, so I'm not sure how it's relevant. I guess it's just there to show that, with a fixed amount of training compute, a higher parameter count doesn't necessarily mean better output quality.

        • 11 months ago
          Anonymous

          >Bard runs on LaMDA-137B, not PaLM-540B
          It literally ran on LaMDA-2B before the PaLM announcement.

  2. 11 months ago
    Anonymous

    I wonder what chink-poo to white ratio google has, compared to OpenAI

    • 11 months ago
      Anonymous

      All the big tech cos are 80% chinkpoo men in engineering.

      It's so bad they exclusively non-chinkpoo females in all other non-eng roles just to keep the overall ratios balanced.

      • 11 months ago
        Anonymous

        Even OpenAI?

        • 11 months ago
          Anonymous

          Even OAI is fricking the dog compared to the local open-source LLM game. Every week there's a new quantum leap in local LLMs, and I suspect that by the end of the month high-end gaming rigs and junkyard servers will be able to run 65B models at passable speeds. Then it's just a matter of someone getting sweetheart funding to make a good 65B finetune, and given the performance-per-parameter disparity, that should pretty much mean ChatGPT-level capabilities at home for anyone with a couple grand and a bit of technical know-how.
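For a sense of why 65B at home is plausible: weight memory is just parameters × bits per weight, so llama.cpp-style 4-bit quantization shrinks the model roughly 4x versus fp16. A rough sketch (my own back-of-envelope; it ignores KV cache and runtime overhead, which add several more GB):

```python
# Approximate weight storage for LLaMA-family model sizes at two precisions.
# Decimal GB; real runtime memory is higher (KV cache, activations, overhead).

def model_gigabytes(params, bits_per_weight):
    """Weights only: params * bits / 8 bits-per-byte, reported in GB."""
    return params * bits_per_weight / 8 / 1e9

for name, params in [("7B", 7e9), ("13B", 13e9), ("30B", 30e9), ("65B", 65e9)]:
    print(f"{name}: ~{model_gigabytes(params, 4):.0f} GB at 4-bit, "
          f"~{model_gigabytes(params, 16):.0f} GB at fp16")
```

At 4-bit, the 65B weights come to roughly 32 GB, which fits in a 64 GB RAM junkyard server or a pair of 24 GB GPUs, i.e. about the "couple grand" rig described above.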

  3. 11 months ago
    Anonymous

    If they train a full model on everything that exists, google will get sued a gorillion times. I think we all know this already, guys..

    • 11 months ago
      Anonymous

      Nice excuse, google. Just admit Bard was a huge waste of money.

      • 11 months ago
        Anonymous

        anon... where have you been? google did a shitton of research on AI while no one else was doing it

        • 11 months ago
          Anonymous

          Then why is bard hot garbage?

          • 11 months ago
            Anonymous

            Too small a model: 2B.

        • 11 months ago
          Anonymous

          google has done a fricking ton of research on AI over the last decade, and it's all fricking worthless now. They unironically spent more time censoring their own AI than trying to make it better.

      • 11 months ago
        Anonymous

        >Didn't they say yesterday in the keynote that they switched it to PaLM2? Or is that a "coming soon..."

        They have four different sizes for PaLM 2: Gecko, Otter, Bison, and Unicorn. There's no waitlist anymore for Bard; it's probably one of the smaller ones.

        • 11 months ago
          Anonymous

          I thought this was a meme.
          >We'll be making PaLM 2 available in four sizes from smallest to largest: Gecko, Otter, Bison and Unicorn.
          Release the AI Zoo!
          >We’re already at work on Gemini — our next model created from the ground up to be multimodal, highly efficient at tool and API integrations, and built to enable future innovations, like memory and planning. Gemini is still in training, but it’s already exhibiting multimodal capabilities never before seen in prior models. Once fine-tuned and rigorously tested for safety, Gemini will be available at various sizes and capabilities, just like PaLM 2, to ensure it can be deployed across different products, applications, and devices for everyone’s benefit.
          Poor gemini protocol

  4. 11 months ago
    Anonymous

    Give me some other bard queries to compare to wizard.

  5. 11 months ago
    Anonymous

    Yes, it's fricking trash. It won't respond to anything beyond the most basic things you could just google.
