We need to come up with a distributed, decentralized, P2P, optionally encrypted network capable of utilizing GPUs for training with little to no packet loss.
that sounds racist...chud
indeed very racist.
Everyone then replies that it requires too much communication and can't be distributed. I think that's all bullshit to keep people from developing and running their own 500B models.
This is stupid. Crypto mining obviously works, so why shouldn't this?
and even then, you don't have to utilize 100% of each GPU; strength is in numbers after all.
Because crypto mining isn’t designed and built for gaslighting autistics into working for free like 99.999999% of open sores
retard. getting your own personal non-lobotomized AI is the reward.
I'm sorry that you're too retarded to deduce that.
In 10 years, we can hope to get a 1T model running on consumer devices.
it could work
correction, there is no reason why it couldn't work.
you could have nodes each do different tasks (distributed storage, training, etc.), then get rewarded with tokens to redeem for interacting with the AI. this may already exist, I'm not sure. you would need quality control for training data, etc., so ultimately it's going to end up semi-centralized I think
>this may already exist, I'm not sure
https://bittensor.com/whitepaper
>One guy's gpu dies
>Fucks up the training of the layer which fucks up the entire model
Not to mention, who the hell is at the head of it, and why do we trust him to actually train a good model instead of just using us all to mine bitcoin?
I think henk said it was infeasible for some technical reason too, but idk really.
Because nothing beats having a personal AI?
arguably more important than crypto?
the whole point is that it's p2p meaning the majority gets a vote in what it should be. fair enough.
>One guy's gpu dies
>Fucks up the training of the layer which fucks up the entire model
technical issues that can be easily circumvented via modular design
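to make that concrete, here's a toy sketch of what "modular" could mean: every layer shard lives on several peers, so one dead GPU just means routing around it. everything here (Peer, REPLICATION, route_layer) is made up for illustration, not any real project's code:
```python
import random

REPLICATION = 3   # how many peers hold a copy of each layer shard

class Peer:
    def __init__(self, peer_id):
        self.peer_id = peer_id
        self.alive = True

def assign_shards(peers, n_layers, k=REPLICATION):
    """Give each layer shard to k distinct peers."""
    return {layer: random.sample(peers, k) for layer in range(n_layers)}

def route_layer(assignment, layer):
    """Pick any live replica of this layer; fail only if every copy is gone."""
    live = [p for p in assignment[layer] if p.alive]
    if not live:
        raise RuntimeError(f"all replicas of layer {layer} are offline")
    return live[0]

peers = [Peer(i) for i in range(10)]
assignment = assign_shards(peers, n_layers=4)
peers[0].alive = False                       # "one guy's gpu dies"
path = [route_layer(assignment, layer) for layer in range(4)]
print([p.peer_id for p in path])             # the pipeline still has a full path
```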
Look at SberBank research and open-source tooling.
Russians recognize they're losing access to silicon and are trying to engineer their way around it. If there's a training run <botnet edition> it'll be on Russian code.
https://github.com/bigscience-workshop/petals
>Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
sounds based
running the models is trivial. there was already Kobold Horde before Petals. It's the training and finetuning part that's truly difficult. And it doesn't matter how big the community is, there's only a handful of smart people who are gonna contribute, and they better be compensated well. Every community, be that /robowaifu/, Monero, or BitTorrent, has a small core of members who carry the entire project. We need to make sure they have an incentive to do so.
of course. I'm just saying that we need to start slow.
talent will follow.
where do we shill it? I think we should make gorillions of bots and release them all over twitter, BOT, reddit. of course most will reject it but we're looking for the few that'll pick up interest.
I guess I should add to this: the robot waifu/AI gf community is very small, niche and fractured between many forums and servers all over the internet. We should centralize the community in a few services like Matrix, Discord etc. so our efforts can be better coordinated.
The bots should only be a stopgap measure before we do the shilling ourselves. otherwise we'll gain a bad reputation on the internet.
>where do we shill it?
everywhere online and maybe irl.
we just need a catchy name first, something non techies can immediately understand. like "bittorrent".
"Distributed AI" sounds like something coming straight from lab geeks.
ask chatgpt I guess
lol
but there is a small problem. fear.
right now there is a lot of doom propaganda coming from silicon valley.
but I think the need for personal AI would outweigh the fear.
remember deepfakes? it only took a regular gaming GPU for that and people loved it.
except for maybe those who got deepfaked.
I'm convinced it's only a small minority of luddites and gays, along with corporate and government backing, who's controlling the narrative on AI. We should have a united, community-wide effort, along with bots' help, to turn the narrative around to open-source, free, democratized, pro-AI.
We should. Someone make the Discord and bridge it with Matrix.
make a subreddit.
I guess it should be a group of people who're gonna be dedicated enough to maintain the community, maybe we should recruit from 4chan's NPNW threads. NEETs would be perfect for this task. Also, not prone to drama; that's how r/singularity's discords got split up. There's a subreddit called r/robosexuals, r/singularity has gone to shit, r/futurism is ok still I guess. We can recruit from there.
There are plenty of articles and statistics in news sites and subreddits that highlight all the good AI has done/is doing. We should spam them everywhere and shout down the naysayers in numbers. Above all, we must have a blanket ban on all kinds of ludditism, doomerism, alarmism in our community. There's plenty of other places for those gays to shit up.
yea, get the weirdos away; issue is 4chantards count as weirdos. discriminate on a case-by-case basis I guess.
>Above all, we must have a blanket ban on all kinds of ludditism, doomerism, alarmism in our community.
yes, "people who fear monger about AI are silicon valley weirdos that have something to gain from you". and Altman, Musk, Cuckerberg, Yud aren't exactly radiating charisma this angle should be easy to push.
true. But a significant part of 4chan is pro-AI, pro robot waifu and pro open source. I'd know cause I shitposted about those there. We can recruit there, especially in the NPNW threads, because those guys' philosophies line up a lot with ours. and we have to face the simple truth that our community is not very big. we need to ally with other small communities, recruit from them, to present a united front.
I'd go with Adeptus Mechanicus, if GW didn't fuck us up the ass with copyright laws. Adeptus Mechanicus has a cool logo, and a lot of their philosophy is the same as ours. Except maybe on the AI part, but we can just call the AI the Omnissiah.
our whole purpose is to present an optimistic view of AI.
the name just has to sound good, short and to the point.
optimism is good, yes, but I just don't want us to get too mixed up with the singularity crowd; they give off cultish vibes.
the singularity crowd has (had?) a lot of great, smart people. I've been in the sub when it had a few thousand subs and it had tons of quality posts from those smart people. They've unfortunately been drowned out by the doomers and retards nowadays. We can probably get them over to our side.
Plus, you gotta understand why there are so many people that seem over-optimistic. It's one of the few places on the internet where you can actually be an optimist regarding tech. Most other subs, youtube, news sites, social media are filled with 40 IQ doomers and luddites; you try being optimistic there and they'll shout you down with retarded arguments. So, I'd forgive them for being a tad too optimistic in one of the few places where they can let their opinions out.
The doomers are a problem but so are the nutcases on the other end of the spectrum, both drive the normies away.
normies tend towards optimism, but they hate people with empty promises, and the singularity shitheads can get delusional.
We really need to get the idea out there that a democratized AI is safer than corporate, government-sanctioned AI.
spam all the negative articles that criticize this type of AI.
and as for the luddites, no one takes people like Yud seriously anyways.
"singularity net" would do I think. the name is sorta already taken by ben goertzel's open cog project.
maybe we can attract some talent later from there.
maybe the name is too optimistic on second thought. too cultish.
ITT: we want to do Tay.ai 2.0 but we're too stupid to agree on a single train of thought
Your local AI bullshit is about to be banned very soon across NA, EU, UK, CA, AU and other regions.
Corporations are arguing that the AI is not dangerous, but you are dangerous and too much information is dangerous. If you have access to powerful AI, you will go straight to making bombs, malware and bio-weapons in your garage.
all the more reason to push for this before GPU's are heavily regulated.
a meme that won't stand the test of time.
in a couple of years people will realize two things:
first, the real danger is those corporations, not random assholes in their garage.
second, since we're heading towards this AI landscape anyway, once the fear mongering dies out you might as well get a piece of the pie, so to speak, and get real control over your tech, if for no other reason than the same one people still pirate movies for today, even with the ease of access streaming services such as Netflix provide.
Is there anyone even working on this right now?
I have a shit GPU either way so I can't contribute, but this seems like something that should happen.
Unfortunately, it's virtually impossible on a technical level.
There are some projects on random hosted git instances, all are more or less trying to do the same thing.
>Unfortunately, its virtually impossible on a technical level.
I'm pretty sure it's not; all that's needed is to send data byte by byte after it's been locally processed. blockchain works fine for this.
someone just needs to provide an easy to use interface for such a network, kinda like how torrents are easy to use with fancy GUI's.
agreed. look into federated learning, op.
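for anyone who hasn't seen it, here's a minimal sketch of federated averaging (FedAvg), assuming PyTorch; the toy linear model, the three simulated peers and helper names like local_step are all made up for illustration. the point is that only weights travel between peers, never the local data:
```python
import copy
import torch
import torch.nn as nn

def local_step(model, data, target, lr=0.01):
    """One local SGD step on a peer's private data (peer keeps the data)."""
    model = copy.deepcopy(model)          # each peer works on its own copy
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss = nn.functional.mse_loss(model(data), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return model.state_dict()             # only weights leave the peer

def fed_avg(state_dicts):
    """Average parameters from all peers (the only thing sent over the wire)."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    return avg

global_model = nn.Linear(4, 1)
for round_ in range(10):                   # each round = local work + one sync
    peer_states = [
        local_step(global_model, torch.randn(8, 4), torch.randn(8, 1))
        for _ in range(3)                  # three simulated peers
    ]
    global_model.load_state_dict(fed_avg(peer_states))
```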
>Federated AI
sounds good.
I'll make the logo.
he was first
but you
have more energy, so you win
I'll make the logo!
what kind of training? what you're asking doesn't sound hard and I'm pretty sure it's being done, but the training datasets are what tend to not get disclosed and need to be sorted out. the algorithms are straightforward if you're not trying to bias the AI into being a good goy.
well if everyone had access to a network.
just like how anyone can join a blockchain, you would be able to download the data set, block by block.
the point is that we just need open networks. just like a crypto blockchain or a torrent file.
I would do it myself but I'm not capable of writing a paper nor a finished product. but I know it can be done, we just need to motivate someone who can.
"no"
We'll be able to run this shit locally in time.
I wish I could contribute. Unfortunately, I know fuck all about AI and my coding skills aren't all that great either
just get some smart asses to do the initial groundwork.
we have everything we need, the hardware is there and modern network latency and speed are more than enough.
if you know someone who can, sell them on the idea.
We're gonna need to raise money for that. such a task is incredibly difficult and time-consuming for a small team of smart guys to do. They definitely won't do it for free. we ought to crowdfund this, or set up a bounty system, like how tinygrad is doing. Problem is, who're we gonna trust with handling the money?
We don't. we have to have a community first, we start small and slow.
we have to keep shilling the idea until it picks up steam.
stuff like this https://github.com/bigscience-workshop/petals seem to be laying the groundwork.
> what is petals
well, it's just about getting into it, everything is possible
and possibly our only option to keep AI free
Why do you think you need a 48GB GPU to run these models? it's not because VRAM is magical, it's because it's faster. There's nothing technically stopping you from running it from system RAM or SSD. It's just so slow that there is no point.
Now, sharing a model across multiple computers over the internet means the interconnect is 100,000x slower than VRAM or so.
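rough numbers behind the "100,000x", assuming ~1 TB/s of VRAM bandwidth and a ~10 MB/s home upload link (both ballpark assumptions):
```python
vram_bw_gb_s = 1000        # ~1 TB/s, typical high-end GPU memory bandwidth (assumed)
uplink_gb_s  = 0.01        # ~10 MB/s home upload link (assumed)
print(f"{vram_bw_gb_s / uplink_gb_s:,.0f}x slower")   # -> 100,000x slower
```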
The point is not to make a p2p ChatGPT but a local one. the only thing done on the network is training, you run it locally only.
The interconnect requirements for distributed running of LLMs are actually quite minuscule. If you split layers between a bunch of GPUs, then at each step of inference you simply need to transmit the hidden state for one token (which, for example for LLAMA2 70k, is just ~20KB). Actually, if you wanted to do the inference over the internet, the real limiting factor would be the latency.
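quick back-of-the-envelope check of that ~20KB figure, assuming the 70B model has a hidden size of 8192 and fp16 activations:
```python
hidden_size     = 8192    # hidden dimension of a LLaMA-2-70B-class model
bytes_per_value = 2       # fp16
per_token_bytes = hidden_size * bytes_per_value
print(per_token_bytes / 1024, "KiB per token per pipeline hop")   # -> 16.0 KiB
```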
Though training is a completely different story. I think that, if people are willing to accept some inefficiencies and longer training runs, the situation is not so bad that one would have to use the kind of bandwidth you could only get from NVLink. But unfortunately, a 1Gbit interconnect would still be completely insufficient. The biggest problem is that for each gradient step, the gradients have to be accumulated, which in practice would require all the GPUs to transmit ~1/3 of their VRAM occupied by the gradients over the web and then to receive approximately the same amount of updated weights. To train a model you would probably want to do at least on the order of 100,000 gradient steps. On a 1Gbit connection, just transmitting this data and disregarding everything else (assuming the nodes are 24GB GPUs) would require ~150 days.
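the arithmetic behind that ~150 days, using the same assumptions (24GB cards, ~1/3 of VRAM in gradients sent and roughly the same amount of weights received per step, 100,000 steps, 1 Gbit/s link):
```python
vram_gb     = 24
per_step_gb = 2 * (vram_gb / 3)       # ~8 GB of gradients up + ~8 GB of weights down
steps       = 100_000
link_gb_s   = 1.0 / 8                 # 1 Gbit/s = 0.125 GB/s

seconds = steps * per_step_gb / link_gb_s
print(seconds / 86_400, "days")       # -> ~148 days
```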
>calls Llama2 70B “LLAMA2 70k”
Stopped reading
We might not be as efficient as giant corporations.
but if we can get something, say, half as good for twice the time it takes to train a big corporate LLM, then it's worth it.
GPT takes about a year to train. if we can get something close in ~2 years then it's more than worth it.
There was something like that.
https://github.com/opentensor
https://bittensor.com/whitepaper
(Idk if scam tho, too lazy to check)
>https://bittensor.com/academia
>Inspired by the efficiency of financial markets, we propose that a market system can be used to effectively produce machine intelligence. This paper introduces a mechanism in which machine intelligence is valued by other intelligence systems peer-to-peer across the internet. Peers rank each other by training neural networks that are able to learn the value of their neighbours, while scores accumulate on a digital ledger. High-ranking peers are rewarded with additional weight in the network. In addition, the network features an incentive mechanism designed to resist collusion. The result is a collectively run machine intelligence market that continually produces newly trained models and rewards participants who contribute information-theoretic value to the system.
last bump cause I'm going to sleep. Hopefully, someone follows through and makes the bots for shilling. Or at least post the matrix/discord once it's done.
I'll try to make an occasional democratized or "distributed" AI thread and bring in new updates.
something like this would have to work differently from blockchain nodes though, no? If there is a way of distributing training into smaller blocks to allow it to be broken down to individual users, you'd still need a host machine or server to store the training data. Instead of self-hosting part of the AI and training it locally, wouldn't it be more efficient to outsource the compute power? Something similar to Rosetta@home. That way the more stable systems can train the LLM without the possibility of corruption or failure.
No we don’t. Fuck off commie.
Open sores is a psyop.
>t.altman
*beats you back to /vg/ with a broom*
No, Musk, Altman and the like, with their tight grip on all the AI goodies, are the real psyop, bootlicker.
nice falseflag. we all know real commies are against anything they can't censor.
Possible, but latency would be an issue, slowing down training.
However, yeah, it would still be a very viable thing.
>distributed
>decentralized
>p2p
ironically only zoomers use those buzzwords these days. they're romanticizing/appropriating the pre-iPhone p2p era (napster/kazaa/etc) that they've never experienced, and they think it was a utopian idea (it was, and you've missed it).
utopia is still here boomer.
it's still here but it's either buzzword larp or gatekept by autists. back in the old p2p days, any kid could easily download game roms and even a normie stacy knew how to pirate mp3s.
Well no one here expected this to be easy.
but we should do it and see how far we can actually go with this.
who the hell is "we"?
Anyone who's interested in a non-lobotomized personal AI.
anyone set up mining software yet?
What the fuck did you just say about accelerating AGI, you little maggot?
I'll have you know, I graduated top of my class at LessWrong University, and I've led hundreds of successful covert raids on AI datacenters. I’ve fragged more A100s than you've had hot dinners. I'm trained in cyber warfare and I'm the second best prophet of doom on TPOT.
You think you can talk shit on Twitter all night long about accelerating AI and not get your world shaken up?
Think again, compadre. As you read this, my global network of highly trained Yuddites is scrutinizing your digital footprint. Every accelerationist meme, every 'ironic' tweet, it's all being logged.
Prepare yourself, you techno-utopian dipshit. The storm that destroys the facade of your privacy is brewing. Every secret signal group chat, every encrypted twitter DM, every last fucking pathetic chatlog with GPT4 — I’ve got it all. The internet is my jungle, and you’re about to become an endangered species.
And if all that’s not enough, I'll use my black belt in adversarial memetics to wipe your accounts the fuck out with effortless precision. I've got a cache of your internet relics at my disposal — deleted tweets, abandoned blogs, forgotten normie memes you made a decade ago — and I'll use them to annihilate every last trace of your precious rizz. Forget about clout. When I’m done, you'll be begging Elon for a new account.
And here's the punchline: A world where your beloved tech turns traitor. Your Tesla runs over grandma on purpose. Your smart home becomes a haunted house, your AI assistant, the poltergeist. Your dreams of a techno-utopia? Replaced with a blue screen of death.
Your hope for a post-singularity paradise is about to get nuked. It’s already over, so hand over the GPUs before I'm forced to target my diamondoid bacteria squarely at your miserable little genome, kiddo.