Here you go.
Picrel is the same query on Wizard13B-Uncensored, a locally run model (13 billion parameters, vs. Bard running on a 540-billion-parameter model).
Bard runs on LaMDA-137B, not PaLM-540B. Still, it's pretty fucking embarrassing and really shows you what censorship does to a language model
Didn't they say yesterday in the keynote that they switched it to PaLM2? Or is that a "coming soon..."
I had thought that was a coming soon, but it looks like it might be a now thing. But as far as I can tell, we don't know anything about PaLM 2. I doubt it's 540B parameters because Google themselves proved how inefficient that was with Chinchilla.
In either case it's a complete embarrassment, since OAI literally just brute-forced it by shitting out as many parameters as they could afford to make ChatGPT and GPT-4, and clearly ended up with more for their efforts than Google did with endless R&D on Bard.
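For context on the Chinchilla point above: training compute for a dense transformer is roughly C ≈ 6·N·D FLOPs, and the Chinchilla paper's rule of thumb is that, for a fixed compute budget, quality is roughly maximized near ~20 training tokens per parameter. A back-of-the-envelope sketch follows; the 20:1 ratio is the paper's approximation rather than an exact constant, and the token counts are the published figures for PaLM (780B tokens) and Chinchilla (1.4T tokens).

```python
# Rough Chinchilla-style back-of-the-envelope.
# Training compute for a dense transformer is approximately C = 6 * N * D FLOPs,
# and the Chinchilla rule of thumb is ~20 training tokens per parameter.

def train_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute in FLOPs for a dense decoder-only model."""
    return 6 * n_params * n_tokens

def chinchilla_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    """Approximate compute-optimal token count for a given parameter count."""
    return tokens_per_param * n_params

models = {
    "PaLM-540B":      {"params": 540e9, "tokens": 780e9},   # published PaLM training run
    "Chinchilla-70B": {"params": 70e9,  "tokens": 1.4e12},  # published Chinchilla run
}

for name, m in models.items():
    optimal = chinchilla_optimal_tokens(m["params"])
    print(f"{name}: trained on {m['tokens']:.2e} tokens, "
          f"compute-optimal would be ~{optimal:.2e} tokens "
          f"({m['tokens'] / optimal:.0%} of optimal), "
          f"~{train_flops(m['params'], m['tokens']):.2e} training FLOPs")
```

By this estimate PaLM-540B was trained on well under a tenth of its compute-optimal token count, which is the inefficiency the post is referring to.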
(Me)
I'm reading that they switched to PaLM-540B back in April after their launch embarrassment, and that now it's switching to PaLM 2 (also 540B), so I'm assuming it's a finetuned version of PaLM.
Be careful with this kind of big tech hype journalism: every article is filled with assumptions and every quote is full of lies by omission. There are a dozen flavors of PaLM, and just because it's PaLM doesn't mean it's 540B. And there's no evidence that PaLM 2 has a 540B model at all. I doubt that it's 540B, since that would make it as slow as GPT-4, and then they'd be expected to give results on par with GPT-4, when to date they've been struggling to match turbo in response quality.
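To put a rough number on the speed argument: generating one token with a dense decoder costs on the order of 2·N FLOPs in the forward pass, so latency scales roughly linearly with parameter count. A minimal sketch; only the 137B figure comes from this thread (Bard/LaMDA), the 13B entry is just an illustrative local-model size, and the estimate ignores attention, KV-cache, and memory-bandwidth costs.

```python
# Rough per-token inference cost for a dense decoder: ~2 FLOPs per parameter
# per generated token (ignores attention/KV-cache and memory-bandwidth costs).
# Only the 137B figure comes from the thread; the others are illustrative.

def flops_per_token(n_params: float) -> float:
    return 2 * n_params

for name, n in [("13B local model", 13e9), ("LaMDA-137B", 137e9), ("PaLM-540B", 540e9)]:
    ratio = flops_per_token(n) / flops_per_token(137e9)
    print(f"{name}: ~{flops_per_token(n):.2e} FLOPs per token (~{ratio:.2f}x LaMDA-137B)")
```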
They're struggling to match Wizard13B in response quality. If you went on performance alone you could only conclude that they're running Pygmalion.
Censored or uncensored Wizard? I wouldn't be surprised if uncensored gives significantly better responses, but a big company can't release an uncensored AI model without an "it's just for research purposes (in Minecraft), okay guys?" disclaimer like LLaMA.
I've only ever run the uncensored version for 13B, although I did run the censored 7B version and its outputs were rather impressive for a 7B model. It literally blows away anything else in its weight class, and apparently the guy got funded to go ahead with a 30B version recently, and possibly even a 65B.
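For anyone wanting to reproduce this kind of local run, here is a minimal sketch using the llama-cpp-python bindings; the model path is a placeholder for whatever quantized Wizard-13B-Uncensored file you've actually downloaded, and n_ctx/n_threads are example settings, not recommendations.

```python
# Minimal local-inference sketch using the llama-cpp-python bindings.
# The model path is a placeholder: point it at whatever quantized
# Wizard-13B-Uncensored file you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/wizardlm-13b-uncensored.q4_0.gguf",  # placeholder path
    n_ctx=2048,    # context window
    n_threads=8,   # CPU threads; tune for your hardware
)

out = llm(
    "Explain the difference between a censored and an uncensored finetune "
    "of the same base model.",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```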
According to the paper, the largest PaLM 2 is "significantly larger" than 540B parameters. I doubt they use that for anything but internal testing. There are very few references to parameter counts in the paper; the largest number directly referenced is 14.7B, but that's supposedly using a completely different dataset and training method from PaLM 2, so I'm not sure how it's relevant. I guess it's just there to show that with a fixed amount of training compute, increased parameter counts don't necessarily mean better output quality.
>Bard runs on LaMDA-137B, not PaLM-540B
It literally ran on LaMDA-2B before the PaLM announcement.
I wonder what chink-poo to white ratio google has, compared to OpenAI
All the big tech cos are 80% chinkpoo men in engineering.
It's so bad they exclusively non-chinkpoo females in all other non-eng roles just to keep the overall ratios balanced.
Even OpenAI?
Even OAI is fucking the dog compared to the local open-source LLM game. Every week there's a new quantum leap in local LLMs, and I suspect by the end of the month high-end gaming rigs and junkyard servers will be able to run 65B models at passable speeds. And it's just a matter of someone getting some sweetheart funding to make a good 65B finetune; given the performance-per-parameter disparity, that should pretty much mean ChatGPT-level capabilities at home for anyone with a couple grand and a bit of technical know-how.
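To put numbers on the "run 65B at home" claim: at 4-bit quantization the weights alone come to roughly 65e9 × 0.5 bytes ≈ 30 GiB, which doesn't fit on a single 24 GiB consumer GPU but does fit in 64 GB of system RAM for CPU inference, or split across two 24 GiB cards. A rough sketch of the weight-only footprint, ignoring KV cache and runtime overhead:

```python
# Weight-only memory footprint for a 65B-parameter model at common precisions.
# Ignores KV cache, activations, and runtime overhead, so real usage is higher.

GIB = 1024 ** 3

def weight_gib(n_params: float, bits_per_weight: float) -> float:
    return n_params * bits_per_weight / 8 / GIB

n_params = 65e9
for label, bits in [("fp16", 16), ("8-bit", 8), ("4-bit", 4)]:
    gib = weight_gib(n_params, bits)
    fits = "fits" if gib <= 24 else "does not fit"
    print(f"{label}: ~{gib:.1f} GiB of weights ({fits} on a single 24 GiB GPU)")
```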
If they go full model on everything that exists, Google will be sued a gorillion times; I think we all know this already, guys.
Nice excuse. Google, just admit Bard was a huge waste of money.
anon.. where have you been? Google did a shit ton of research on AI while no one else was doing it.
Then why is bard hot garbage?
Too small model, 2B
google has done a fucking ton of research on ai over the last decade, and it's all fucking worthless now. they unironically spent more time censoring their own ai than trying to make it better.
They have four different sizes for PaLM 2: Gecko, Otter, Bison, and Unicorn. There's no waitlist anymore for Bard; it's probably one of the smaller ones.
I thought this was a meme.
>We'll be making PaLM 2 available in four sizes from smallest to largest: Gecko, Otter, Bison and Unicorn.
Release the AI Zoo!
>We’re already at work on Gemini — our next model created from the ground up to be multimodal, highly efficient at tool and API integrations, and built to enable future innovations, like memory and planning. Gemini is still in training, but it’s already exhibiting multimodal capabilities never before seen in prior models. Once fine-tuned and rigorously tested for safety, Gemini will be available at various sizes and capabilities, just like PaLM 2, to ensure it can be deployed across different products, applications, and devices for everyone’s benefit.
Poor gemini protocol
Give me some other bard queries to compare to wizard.
yes, it's fucking trash. it won't respond to anything beyond the most basic things you could just google