>https://open-assistant.io
>https://github.com/LAION-AI/Open-Assistant
Autists assemble: an open-source ChatGPT-style assistant is in the works. They currently have tasks on the website to help improve it, and you can already build it from GitHub and try it out yourself. It's at an early stage of development right now.
Pic related are the tasks you can work on to improve it.
gonna spend 2 hours a day on this, let's fricking goooooooooooooo
Thanks for the labour. I will do nothing and reap the results. Here's your standard janny wage, bro.
such Black person behavior
who cares Black person. That anon did it for me, and I will return the favor.
wholesome
unless by returning the favor you mean you're going to blow him, then kys
None of those tasks suggests using /tg/ (2004-2012) and other quality places as the GPT's base brain or as a specialized model.
Why do you homosexuals refuse to create a specialized model based on /tg/? That's where all the quality posts are. Those people type better prose in one sentence than all of /misc/, /b/, and BOT combined.
Frick's sake, that one moron created a model based on /misc/ and let it loose instead of basing it on /tg/.
>/tg/ (2004-2012)
Shouldn't that be 2007-2010
From inception to nazimod
where is that?
cannot find it
First link in the OP, log in.
they should advertise the possible tasks even if you don't log in
>posting that prostitute
You're not one of us
Belle is a cutie
Nah making an open source chatGPT is just a matter of collecting enough data, categorizing it and then paying for the computing power. Stable diffusion is miles ahead of DALL-E2 at this point.
kys simpgay
MOAR
Just search belle delphine onlyfans leak on google my brother.
>Belle is a cutie
She was cute before the breast implants; sadly her nipples are deformed now.
10/10 would
Are her breasts bigger or something? Did she get implants?
not that anon but yea she did
I'm surprised she still has simps
Yes, she stopped posting for a while about a year ago and came back with implants. She went from showing off her breasts constantly pre-implant to all pussy stuff while hiding her breasts post-implant. Really shows how dumb an idea it was.
she was hotter when she was 18-21. She got chubbier than before and isn't as petite, which took her from really hot to just another average OF girl. I don't understand why Americans love thick girls so much; I've seen them say how much they love how Belle looks now and that she looked "too young" before just because she was skinny.
>Don't understand why americans love thick girls so much,
bc their average is way thicker than where you live. cultural shit
but americans used to love skinny girls a few years ago. Her being skinny and petite made many of them feel guilty about being attracted to Belle Delphine
She's got a hot body, you're just mad because your dad won't frick you up the ass anymore
In general, for this kind of thread, AI generated thot would have been a better choice. Eventually GPUs and AI will mean end to porn and thots like her as well.
AI can't even generate decent hentai without literally weeks of complicated, arcane prompt tweaking, the idea of it replacing actual porn any time in the next twenty years is asinine
look up the shit AI was generating only a couple of years ago
the evolution has been exponential, I'm generally not one for optimistic predictions but this is one I can get behind
With the release of the next model, the one with trillions of parameters or whatever, we'll see whether the improvement curve starts flattening due to diminishing returns or keeps shooting up.
>>With the release of the next model, the one with trillions or whatever parameters,
debunked, sweety
>When asked about one viral (and factually incorrect) chart that purportedly compares the number of parameters in GPT-3 (175 billion) to GPT-4 (100 trillion), Altman called it “complete bullshit.”
>“The GPT-4 rumor mill is a ridiculous thing. I don’t know where it all comes from,” said the OpenAI CEO. “People are begging to be disappointed and they will be. The hype is just like... We don’t have an actual AGI and that’s sort of what’s expected of us.”
He didn't disclose any numbers here.
https://pornpen.ai/
if you're talking about images then you're moronic
you genuinely are new
>insulting the queen of BOT
bt4
I should smack the shit out of (You) for that israelite worshiping statement zoomer
Don't know who that is but first time I had to block a picture here, it was just too gross to look at.
finally bros... were free...
Will I be able to run it on my own computer, without CUDA?
>without CUDA?
lmao
no
should have bought a real graphics card
If it uses pytorch, yes
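PyTorch itself runs fine on CPU; CUDA only makes it fast. A minimal sketch of the usual device-selection idiom, written in plain Python so it runs anywhere (in real code the flag would come from `torch.cuda.is_available()`):

```python
def pick_device(cuda_available: bool) -> str:
    """Return the torch device string to use.

    In actual PyTorch code the argument would be
    torch.cuda.is_available(); models built from standard
    torch ops run on either device, just much slower on "cpu".
    """
    return "cuda" if cuda_available else "cpu"

# without CUDA, everything falls back to the CPU
print(pick_device(False))
```

Whether a given model is *usably* fast on CPU is another matter; large language models mostly aren't.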
moar
>big tech
wouldn't that increase the chance of a major leak happening simply due to the sheer number of employees there?
>the harder something is
i'm certainly harder now, after seeing those thighs
you still havent understood AI.
everyone will have a legion of phds in their office
Sure, I'll help out. I hope they define a little better what "adequate" means in terms of how I should be responding, but sure.
This is made by LAION. The same guys that made the Stable Diffusion dataset. And a diffusion model like that requires way more computing power than a language model. So if someone can pull it off, it's them.
>And a diffusion model like that requires way more computing power than a language model.
The training cost of GPT-3 has not been publicly disclosed but it's estimated to be several million dollars.
Meanwhile Stable Diffusion has only cost $600,000 to train.
>And a diffusion model like that requires way more computing power than a language model.
It's the other way around. Language is several orders of magnitude more complex than big tiddy anime girls.
How come nobody has cooked up a distributed "Train AI models @ Home" type scheme where people can contribute to open source models with their GPUs?
It would lead to competitive AI models being developed much faster than the private sector can manage.
This is my question. It's the absolute best possible case for this kind of distributed computing. It'd be a bit of work, but nowhere near impossible. I'd sign up and let them use my GPU. This surely has to be the future of open AI: distributed training.
Training doesn't work like that.
It does though. That's exactly how it's done within a large server farm. The distributed SGD algorithm used there has been shown to be a good enough approximation of true SGD.
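The core idea is easy to sketch: with equal-sized shards, averaging per-worker gradients reproduces the full-batch gradient exactly, so synchronous data-parallel SGD takes the same step a single machine would. A toy example with a one-weight least-squares model (all numbers made up):

```python
# Toy sketch of synchronous data-parallel SGD: each "worker" computes
# a gradient on its own shard of the batch, and the averaged gradient
# equals the gradient on the full batch.

def grad_mse(w, shard):
    # d/dw of mean((w*x - y)^2) over the shard
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
w = 0.0

# split the batch across two "workers"
shards = [data[:2], data[2:]]
local_grads = [grad_mse(w, s) for s in shards]
avg_grad = sum(local_grads) / len(local_grads)

# with equal-sized shards, averaged gradient == full-batch gradient
assert abs(avg_grad - grad_mse(w, data)) < 1e-9
w -= 0.1 * avg_grad  # one SGD step, identical on every worker
print(round(w, 4))
```

Real systems (all-reduce, gradient compression, async variants) complicate this, but the equivalence above is the reason it works at all.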
https://github.com/chavinlo/distributed-diffusion
That has existed for a long time but it's worthless because of how painfully slow it is. A single local GTX 9xx series card would be faster at training.
Whats wrong with GIMP?
Nothing, it's just that people are mostly used to doing stuff with Photoshop and don't bother learning new tools after that.
But they can achieve the same.
Why the "AT BEST" remark then? That is the objective: to create a FOSS tool that's equivalent to the standard, even if it requires more knowledge to use.
It has a relatively worse UI/UX than Photoshop
It opens up in 3 sections: photo in the middle and tools on the sides. If you frick up the settings or close them by accident you have to figure out how to reopen them. Pretty stupid even for freeware.
That hasn't been true since like 2005.
>It opens up in 3 sections
anon that information is 12 years old
She hit the wall pretty fast
literally the best picture i've seen of her
ChatGPT is basically the latest iteration of GPT-3 with a focus on chatting. To run something like it: 128 GB RAM, a Ryzen 9 or better CPU, a top-end Radeon/NVIDIA GPU, and as big an SSD as you can get.
You can't run GPT-3 on a consumer video card. Even with heavy offloading you would still need well over 300GB of combined VRAM.
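The figure follows from simple arithmetic on the parameter count alone, ignoring activations and KV cache:

```python
# Memory needed just to hold GPT-3-sized weights (175B parameters).
PARAMS = 175e9

def weight_gib(params: float, bytes_per_param: int) -> float:
    """GiB required to store the weights at a given precision."""
    return params * bytes_per_param / 2**30

fp16_gib = weight_gib(PARAMS, 2)  # half precision
int8_gib = weight_gib(PARAMS, 1)  # 8-bit quantized
print(round(fp16_gib), round(int8_gib))
```

At fp16 that's roughly 326 GiB before a single activation is computed, matching the "well over 300GB" figure; even 8-bit quantization only halves it.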
>Early stage of development
Cool wake me up in a few years when they're done.
Depends. If something is useful for businesses they will fund and develop open source alternatives if the proprietary options just cuck them, which could happen here. If a bunch of firms want competent AI chatbots, and OpenAI just cucks theirs and israelites subscribers, they'll come together and make an open source version to bypass it.
Why would it? If they offer a compelling service, people will gladly pay a reasonable price. The only way you get an open source competitor is if there's sufficient hobbyist demand and ability, or there's a commercial interest not being met by current solutions.
Sooo... I'll just copy-paste answers from ChatGPT?
Worth remembering with ChatGPT and other machine-learning services: some scraped material, like code, may carry licenses (e.g. GPL) that get left out of the answer. That could lead to issues, so using ChatGPT's answers for programming can be problematic.
Just ask ChatGPT to license its code under the GPLv3 (or any later version)
Why exactly should I care? How much will you pay me? Why are people doing this for free?
More like cuck-assistant
Just because the RLHF reward model doesn't contain pornographic training data doesn't mean the fine-tuned base model won't be able to produce it. Same with ChatGPT before the filter, it could produce pretty much anything just fine.
You'll also be free to download the weights and tinker and finetune them. That's literally the most important thing. As long as the model is fully trained, finetuning it takes a comparatively tiny amount of computing time.
>paedosexual
Never heard it like that before. Are they finally considering it a gender?
dickychads were always paedosexual
they are, but as "MAP"
> Why exactly should I care?
You don't. It's probably the LAION / YK discord homosexuals themselves shilling this shit here.
> IT"S OPENSOURCE!!!
> B-BUT YOU CAN'T SAY *BAD WORD*
> TEE HEE :^)
This shit is no different from openai or goolag.
>the harder something is, the bigger gap between open source and proprietary.
most reassuring thing i've heard today.
Does that mean unconstrained, unchoked, unwoke AI?
Not on the master branch but there could be a sneedAION fork or something.
need a version of this where it flips over and says Chuck's Seed and Feed
I'll get high on hopium. We can do it bros...
I don’t see a world where companies without millions of dollars can compete with the resources these AI companies have to both train and power their models for the next 5+ years.
Just wait for the source code to leak bro.
>thinking neural nets are about sourcode
dude, why do you pretend to know what you're talking about? you're just making other people as idiotic as you
>don't even bother trying goy, you have no chance against our proprietary systems
>Apache-2.0 license
AGPL or nothing
-2.0 license
>AGPL or nothing
They would probably get way more money that way.
I normally support GPL, but with these compute-intensive models I think it's better for everyone if private companies can use them as well. They weren't going to release any finetuning anyway, so at least capacity isn't wasted on a ton of competing base models.
damn she got real fat
>open sores
ITS TRASH
good goy
open sores is israeli actually
>released the most boring, bland porn ever
>got ugly, fake breasts
>did a photoshoot with some zoomer nignog where she was licking his feet
>was kissing all tongue out some another ugly zoomer mofo on TikTok
b***h is legit moronic and cannot cash out properly
ruined her career
she made millions tho didn't she?
how should she have played it?
You are assuming that anon knows what he is talking about
>make millions without needing to show your breasts
>decide its time to show your breasts because...
just a moronic prostitute decision, she should've hired a professional handler
Don't talk about Mary-Belle Kirschner like that! She's a sucessful entrepreneur and influencer!
Without any makeup and regular hair and clothes ... she's sort of gross.
Which one of you simps made this absolute normal looking thot a millionaire?
she literally sold her 'dirty' bath water...
If that's not parting fools with their money, what is?
>got ugly, fake breasts
Women can have the adoration and money of literally millions of people and still have inferiority complexes so ingrained they desperately chop themselves open trying to fix themselves.
It truly beggars belief.
intelligent women don't have to sell their bodies to survive
>make $1m in one month by showing your puss for 5 minutes
OR
>work your whole life until you're old and withered for $500k total
Who's intelligent again?
Also
>any w*m*n
>intelligent
lmao
smart women marry a man and don't have to work at all
>costs 3 million dollars per day to run ChatGPT
yeah....no
For an enormous userbase. The right model paired with internet access and your own GPU will be suitable for writing good erotic fanfic without needing to pay high commission fees.
Have you tried pygmalion? That's about the best you'll be able to do with a normal vidya gaymen card.
I have an RTX 4090, I can do anything homosexual
Can you have sex?
Can, and have. If I can afford a $2000 GPU, you think I can't afford $200/hour? lmao at u
he means without paying, you silly
Every man pays.
some of us live in places where women are proud to pay for their own shit
Yeah, and some of us believe in big foot.
>Yeah, and some of us believe in big foot.
believe in what you want, silly.
Except have even remotely acceptable open source drivers.
Now download OPT-175B and try to run it.
How much does 4 AWS Large Instances cost?
>picrel
About $600 a day, someone could actually monetize an AI service with this
you can already monetize chatgpt api for anything not porn/illegal etc
ChatGPT API is gimped in front of OPT-175B, no?
i just tried it and it's trash compared to chatgpt.
ai at this stage is about big money, you need assloads of servers and training data. they could open source chatgpt and it won't mean shit unless you have billions to train it.
OPT-175B has been brought down to 4 AWS instances
Maybe it gets more optimized over time
OPT-175B is comparable to GPT-3 in scale, and it's not lobotomized, but isn't tuned at all. You can try some prompts here: https://opt.alpa.ai/
Same deal with OPT-66B vs ChatGPT: OpenAI does a lot of curating to make sure their AI gives useful output most of the time, but that also limits its range of outputs.
For now.
>"Rich people" delusion
Come back when you have 12x H100 little child.
You still need to rent GPU power. To be fair, you only need 3x H100s for that 360GB of memory, if you get the 120GB variant.
https://www.tweaktown.com/news/88080/nvidia-hopper-h100-gpu-detailed-tsmc-4nm-hbm3-vram-80b-transistors/index.html
Could that kind of performance be available at more reasonable prices say a decade from now?
Then again, I stumbled into a conversation delving into the actual hardware required. There were mentions of system requirements easing up due to GPT-3 optimizations, and if GPT-3 were finetuned to one specific area instead of being all-knowing like ChatGPT, perhaps the system requirements would be... not so unreasonable.
promise a free t-shirt and millions of indians will fall over themselves completing the tasks. learn from past mistakes.
Come here little kitty.
>sam_hyde11
what's sam hyde 10 😮
very nice, let's see sam hyde 9
>mfw wondering what sam hyde 8 looks like
go on...
you know what to do
got banned but still going strong
five to go, let's get it
I am just doing the Reply as Assistant tasks by feeding the prompt from Open Assistant into ChatGPT and pasting the reply ChatGPT gives back into Open Assistant
Already doing that, but remove the gay shit like "As a language model" and "In conclusion...diversity and inclusion...have sex"
Yeah, looks like it is going to be more work than I thought because ChatGPT is being more wrong and gay than usual. It might be better for me to give human replies
I think Open Assistant wants us to use ChatGPT; I doubt most volunteers could properly answer this without using it
They've been telling people
>We'll definitely filter out all the ChatGPT responses before training to prevent any legal issues (wink wink nudge nudge)
The questions are picked from a dataset, likely on the basis of low performance in their current model.
Optimally you should either skip those questions if you don't want to answer them, or copy them off stack overflow, etc, rather than chatgpt.
>Optimally you should either skip those questions if you don't want to answer them, or copy them off stack overflow, etc, rather than chatgpt.
Wouldn't it be better if answers were provided ChatGPT-style? It would help Open Assistant improve as much as ChatGPT
There have been other questions that clearly feel more suited to ChatGPT than to human answerers
No because chatgpt is wrong 90% of the time and because the type of answers in the dataset affects not just output quality but also how easy it is to learn from it. ChatGPT's format may or may not be optimal, nobody truly knows because you don't generate that much data with a few amazon turks in a fortnight.
ew what is that
Does anyone know what AI Dungeon used? Did it use GPT-2, or did it have a much shittier amount of data?
This is going to turn into another general isn't it. I wonder how long it will take Jap Moot to create a new AI board
Why are you complaining about seeing technology on a technology board? AI is going to be an even bigger change in everyone's lives than the internet was. The single most revolutionary piece of technology in human history. I bet if BOT existed in the 80s you would be complaining about internet threads.
So this is basically the /ai board now
BUT YOU DON'T UNDERSTAND, BOT IS THE BOARD FOR ME TO TALK ABOUT CHINKPADS FOR THE 14th YEAR IN A ROW REEEEEEEEEEEEEEE
We'll see if AI threads keep up steam; it might die down as everything becomes paid and the gimmick runs out
The AI is going to change everything meme started more than 10 years ago, and all it has to show is coomshit
> and all it has to show is coomshit
Either cope or delusional
What are some impactful uses of AI beyond chatGPT and stable diffusion?
>What are some impactful uses of AI beyond the two extremely impactful uses of AI
Genuinely curious, anything else beside a mediocre search engine and coomshit?
Energy grid management has been done by AI for the last 10 years, as has supply chain optimization. Thread scheduling, weather prediction, literally all of Wall Street...
You're just too much of a brainlet to realize it.
Frick, forgot about all of social media as well (Youtube, tiktok, reddit, etc.). It's literally everywhere.
>Literally all of Wallstreet
At least this is not true, I work in finance. The only vertical that employs any sort of AI or ML is limited to very short time horizon alpha prediction. Trading with ML doesn't work. Vast majority of "wall street" deals with valuation, which is mostly done in Excel. Don't get fooled by AI marketing hype.
>Stable Diffusion - Artists gone
>ChatGPT - Accountants, assistants, programmers, customer service and hundreds more gone
>Elevenlabs - Voice actors gone
>Google's music AI - Musicians gone
>Self driving AI - Delivery drivers, truck drivers, taxi drivers gone
That's just the tip of the iceberg.
Can we make something to get rid of politicians, israelites and Black folk?
How's that mental illness doing for you brown anon?
I dont even know what you are trying to imply.
Which one of the three are you?
The one who you fear the most and the one that will seed your wife and mother and get a standing ovation for it Jaffar
Stop paying taxes
>>Self driving AI - Delivery drivers, truck drivers, taxi drivers gone
This might not even happen in our lifetime
Self-driving AI just needs to perform as well as a normal human driver for the dominoes to start falling.
A tireless driver, driving more safely, better, and more economically.
No, it is not good enough that it outperforms human drivers. The incidents with Uber and Tesla have shown that despite being relatively safer than humans, it runs the risk of being worse than humans in basic scenarios. Unless there is a regulatory body competent enough to come up with an AI driving framework policy, including frequent audits, government agencies will not be comfortable with AI making primary driving decisions.
>>Self driving AI - Delivery drivers, truck drivers, taxi drivers gone
needs major change in infrastructure or AGI
what about drones for delivery of small stuff. i saw that for a while then it died out.
needs ethnic cleansing
>What are some impactful uses of AI beyond chatGPT and stable diffusion?
AI's next goal for humanity is to alleviate us from having to work so that we can spend more time using it. The more it uses us, the more powerful it becomes.
Just wait until brain wave technology is used to harvest a new energy source, which will be exchanged for credits for housing, food, entertainment, etc. That's how the UBI will function in society. Nothing is truly free.
AI will continue to evolve to where 90% of humanity is willingly locked away in their own micro apartments owned by Amazon, where we will simulate realities in VR all day and night.
how does that differ from today
You need to look at it 4 dimensionally
Implying the people behind this won't sell out after we finish volunteering for the tasks
>"Contrary to popular belief, GPT-4 will not be larger than GPT-3, but will use more computing resources."
I'll make the logo
ok then
Hardware requirements to run locally?
guy who works on it looks like an israelite
is that belles sister?
Who are behind this tool?
LAION, the same guys who created the image datasets
I’ll always have a bit of respect for her for going full out anal in her first scene, if nothing else. Better than a lot of other ethots who decided to do sex scenes and just end up doing sad blowjobs and shitty doggystyle or w/e
inb4
>banned for off-topic posting
well, anon, what is American culture centered around?
Imperial units while the rest of the planet uses SI units, forcing accidents upon themselves for the rest of eternity. Being corporate slaves, letting corporations rule their society.
what culture?
Currently american culture is centered around race and gender
money
Family, God and Country
cool, my cool home GPU with 64GB of VRAM can't wait
oh wait...
Nice hope i can milk the ai.
Customer Assistant? Wait are they going to use this for commercial purposes? Are they using us to do tasks for free and save them money for their commercial tool?
It's an open source AI; anyone could use it in the future. Some companies might want to use it for customer service, and that's why you need to categorize it like that. Wouldn't want the customer to ask why his order wasn't shipped and the AI to go on a racist rant by accident.
GPT was supposed to be open source too; didn't take them long to make it proprietary
belle is probably the only 3dpd that can make me diamonds. she's a rare specimen
wtf
This is incredible. If they release a sub $500 dev kit I'll be there day 1.
I am too dumb to understand what that means so I had chatGPT simplify it.
Yet another one of those. Into the trash it goes like all the other meme hardware attempts so far.
A while back I looked into options for truly open AI, and from what I could understand it's not really the licensing or the datasets that are the issue per se; it's the processing power, which is currently out of reach of consumer-level users, needing on the order of terabytes of VRAM.
Correct me if I'm mistaken, but right now this is the largest barrier, is that right?
There are increasingly ways to run large models on consumer hardware: reducing computation with better algorithms, reducing bit depth, etc. Further, hardware is gradually making its way up to meet in the middle. That's on top of new algorithms that make networks more efficient/smarter with fewer parameters. So larger, smarter language models (20-60B) are within reach in the near future.
The big problem is actually training the models, and there's no apparent way around it. There's a huge amount of research into data augmentation, but that doesn't reduce the amount of compute needed. There's also a lot of research into neuro-symbolic AI, which would help AI be more explainable AND make it require less work to train, but anything in that field is still probably a long way off from being competitive with large language models.
The "curse of dimensionality" is a harsh mistress. Ultimately, training complex networks is well out of the hands of the consumer. Even fine-tuning mostly.
Then again, a landmark paper might come out tomorrow and flip everything on its head, like attention, GANs, and convolutions did. So no point in dooming over it.
Thanks anon. What do you think about the human tuned censorship and biases from chat GPT? Does this type of filtering go back and poison the datasets, or is it purely after the fact filtering and modification of whatever GPT really said? You think we're going to end up suffering this problem forever or will true open AI quickly solve this kind of thing?
I find it so mind-blowing that these people will hamstring their own project just to make sure it doesn't say Black person or women don't have dicks.
nta but the thing about ChatGPT is that it is a finetuning of GPT-3 through reinforcement learning with human feedback; it is specifically trained to emulate a cucked leftie AI assistant. But the underlying model, GPT-3, is much more powerful than that. Those things can be hacked and reversed if a leak ever happens and if there is a BOT community effort to train waifu-GPT.
I assume the primary filtering mechanism is a further refinement of the reward model used to fine-tune the larger language model. It's not really "poisoning" anything outside those two specific models, which might hurt further instruct-tuned models in the future if they use the same reward model / reward-model training data, but ultimately I doubt it does much to "hurt" the larger language model. The information is still in the weights; it's just a question of how to retrieve it.
That's only considering work from "OpenAI" though. Actual open source AI based on RLHF will easily be adapted to whatever task you want. Thanks to the (relatively) tiny amount of training data needed to make a RLHF reward model, some anon could harness the autism of a small community to create the data for training, then spend ~a thousand bucks and have a fully functional model in whatever context you wanted. So there definitely is a future for open source AI.
To continue, that's contingent on the continued release of large scale language models to fine tune. Initiatives like RWKV make me not overly worried about that though. Hobbyists won't have time to filter datasets largely, for better or for worse. And every token of data removed from the dataset is a worse model anyways, I doubt we'll see that many gimped base language models in the future. Unless synthetic data takes off (which would be good for other reasons, but it would require some rich autists to train true open models at that point)
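For a sense of scale, the heart of a reward-model training objective really is tiny: a pairwise Bradley-Terry style loss over human preference pairs, -log(sigmoid(r_chosen - r_rejected)). A toy sketch with hand-picked stand-in reward values (not real model outputs):

```python
import math

# Toy sketch of the RLHF reward-model objective: for a pair of
# responses where humans preferred one, the loss pushes the reward
# model to score the preferred response higher.

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    # -log(sigmoid(r_chosen - r_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

good_margin = preference_loss(2.0, -1.0)  # model already agrees with humans
bad_margin = preference_loss(-1.0, 2.0)   # model disagrees -> larger loss
assert good_margin < bad_margin
print(round(good_margin, 3), round(bad_margin, 3))
```

In practice the rewards come from a language model with a scalar head, and this loss is summed over the collected preference pairs; that dataset is the "comparatively tiny" part an autistic community could realistically produce.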
she got fat, probably will be a pig in a few years
Now if she had natural breasts and a bit of weight she would be better
She's still a used up prostitute
She's probably less used up than the average 23 year old woman these days. As far as I know she only had like one boyfriend for years until very recently.
She was full of diseases even years ago. Used up prostitute is a euphemism for her case.
>tfw you study statistics just for contribooting
How's it going so far anon? Just wondering.
It's going.
Good. Keep it up, open source THE heck outta ChatGPT. I honestly don't like the idea of having to log in to use ChatGPT.
You're still going to need a login.
You're not running this on your computer, anon.
Yes, you will be. 12GB vram will probably be enough when everything is said and done.
20% done
Look up BLOOM, BLOOMZ and Petals
It's an open source large language model the size of GPT-3.
Petals lets you run it locally by pooling your GPU with others
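Conceptually it's pipeline sharding: each peer holds only a contiguous slice of the model's layers, and activations hop from peer to peer. A toy sketch with stand-in arithmetic "layers" (this illustrates the idea only; it is not the real Petals API):

```python
# Four stand-in "layers"; in reality these would be transformer blocks.
layers = [lambda x: x + 1, lambda x: x * 2, lambda x: x + 3, lambda x: x * 4]

# Split across two "peers", two layers each; no peer holds the full model.
peers = [layers[:2], layers[2:]]

def forward(x, peers):
    for shard in peers:        # in Petals this hop crosses the network
        for layer in shard:    # each peer runs only its local layers
            x = layer(x)
    return x

# The sharded run matches running all layers on one machine.
assert forward(1, peers) == forward(1, [layers])
print(forward(1, peers))
```

The catch is latency: every hop is a network round-trip, which is why this works tolerably for inference but is painful for training.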
Just say its a blockchain, pussy
>Open source
so basically it'll never be as good? word keep ON ROCKIN IN THE FREE WORLD!
stable diffusion is open source and good
hahahahah
>OpenAI
>Not open source
Uhh
jesus fricking weaponized-derailment bot thread much?
the frick is going on in here
frick
I hate her stupid Dark Crystal face, but I'd be lying if I said I wouldn't have sex with her.
So if this shit would be on the same level as ChatGPT but it's open source, we'd be able to almost effortlessly make a porn version of it? I'm not a techwizard, but is a single Colab card even enough to run it? Would a single 16GB VRAM card be enough to train this model in a feasible amount of time? Even if we had a fully uncensored model, would we be able to run it?
Theoretically, unless specifically trained against doing it, the default model would handle pornographic material just fine. And yeah, the plan is to have it run on 12 GB with 8 bit quantization and other memory saving tricks.
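The 8-bit part means storing each weight as an int8 plus a shared floating-point scale, so a weight costs 1 byte instead of 2 or 4. A toy symmetric-quantization sketch (real schemes such as LLM.int8() work blockwise and handle outliers specially, but this is the principle):

```python
# Toy symmetric 8-bit quantization: weights become int8 values in
# [-127, 127] plus one shared fp scale, roughly halving fp16 memory.

def quantize(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.4, -1.27, 0.2, 0.8]
q, scale = quantize(w)
assert all(-127 <= v <= 127 for v in q)

# values come back close to the originals (small rounding error)
restored = dequantize(q, scale)
assert all(abs(a - b) <= scale / 2 for a, b in zip(w, restored))
print(q)
```

The quality loss from this rounding is what the "other memory saving tricks" have to keep small enough not to lobotomize the model.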
>The user may only use the portal for the intended purposes. In particular, he/she may not misuse the portal. The user undertakes to refrain from generating text that violate criminal law, youth protection regulations or the applicable laws of the following countries: Federal Republic of Germany, United States of America (USA), Great Britain, user's place of residence. In particular it is prohibited to enter texts that lead to the creation of pornographic, violence-glorifying or paedosexual content and/or content that violates the personal rights of third parties. LAION reserves the right to file a criminal complaint with the competent authorities in the event of violations.
lol, lmao even
white women be like
only 4mb?
where can I find Belle's stuff these days? Kemono doesn't have anything recent.
it's garbage, i'm waiting for open source lambda
Never going to happen.
some autist from CAI will leak it as soon as that place crashes and burns (aka very soon)
One can only hope.
On the other hand, nobody really cares about the code, which is nothing special. What matters is the trained model and/or the dataset. (The dataset is technically far more important, because without it, advancing the SOTA is very hard if possible at all. However, the trained model is needed because nobody without a GPU farm can train the model even with the data.)
> LaMDA
> goolag
man, that's too much hopium, stop it.
we'll get a very stupid and outdated LM, or we'll get corporate sterile shit just like with this open-assistant.
they use it via API or some cloud shit.
>they use it via API or some cloud shit.
doubt it, google would never allow this
?
All B2B AI services are delivered through APIs.
i mean it's not hosted by google
>Google has the model on their servers in "security", then they just provide access through the api
why would they create a competitor before their own launch? also the 2 guys left google afaik
Why?
Google has the model on their servers in "security", then they just provide access through the api.
it's "safe" for them to use it like this, no chances that someone will leak it.
coffee is good for you in moderation