I'd just like to interject for a moment. What you're referring to as gzip, is in fact, DEFLATE/gzip, or as I've recently taken to calling it, DEFLATE plus gzip. gzip is not an operating system unto itself, but rather another free component of a fully functioning DEFLATE system made useful by the DEFLATE corelibs, shell utilities and vital system components comprising a full OS as defined by POSIX.
Many computer users run a modified version of the DEFLATE system every day, without realizing it. Through a peculiar turn of events, the version of DEFLATE which is widely used today is often called "gzip", and many of its users are not aware that it is basically the DEFLATE system, developed by the DEFLATE Project.
There really is a gzip, and these people are using it, but it is just a part of the system they use. gzip is the kernel: the program in the system that allocates the machine's resources to the other programs that you run. The kernel is an essential part of an operating system, but useless by itself; it can only function in the context of a complete operating system. gzip is normally used in combination with the DEFLATE operating system: the whole system is basically DEFLATE with gzip added, or DEFLATE/gzip. All the so-called "gzip" distributions are really distributions of DEFLATE/gzip.
>https://aclanthology.org/2023.findings-acl.426.pdf
>objects from the same category share more regularity than those from different categories
It's interdasting.
Really, how did no one think of this before? Compression is embedding. Has there ever been such a paper where everyone thinks it's so obvious in hindsight?
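For anyone who didn't click the link: the whole method fits in a dozen lines. A rough sketch of what the paper does (gzip as the distance function via Normalized Compression Distance, then a kNN vote), with a made-up toy training set standing in for their benchmarks:

```python
# Sketch of compression-based classification as in the linked paper:
# NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), then k-nearest-neighbor vote.
import gzip

def clen(s: str) -> int:
    # compressed length in bytes, the C(.) in the NCD formula
    return len(gzip.compress(s.encode()))

def ncd(x: str, y: str) -> float:
    cx, cy = clen(x), clen(y)
    return (clen(x + " " + y) - min(cx, cy)) / max(cx, cy)

# toy training set; the paper evaluates on real benchmarks like AG News
train = [("the team won the game last night", "sports"),
         ("stocks fell sharply on friday", "finance"),
         ("the striker scored twice in the second half", "sports"),
         ("the central bank raised interest rates again", "finance")]

def classify(query: str, k: int = 3) -> str:
    neighbors = sorted(train, key=lambda pair: ncd(query, pair[0]))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)

print(classify("the midfielder scored a late goal"))  # hopefully: sports
```

Similar texts share substrings, so they compress better together; that shared-regularity effect is the whole trick.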
it's been mentioned, if you listened to people who weren't busy churning out coomerslop or gloating about those damn [insert enemy here] losing their jobs
I thought of it and then thought it's too boring to implement
next frontier is classification and storage by homeomorphic lossy compression, aka "long-term memory"
let them waste money and dev time with their NNs if they want, exact same results can be achieved with Markov chains
Isn't the difference between NNs and Markov chains that the NNs are able to use the training data to re-encode themselves to continuously improve their results?
I've implemented a simple Markov chain before and fed it a bunch of different ebooks, but the results were crappy.
If, theoretically, I fed it terabytes of data, would I get something similar to ChatGPT?
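For reference, the "simple Markov chain fed some ebooks" setup is roughly this sketch (the corpus path is made up). Terabytes won't get you ChatGPT, because the chain only ever looks up the last couple of words in a literal table; it never generalizes beyond contexts it has seen verbatim:

```python
# minimal word-level Markov chain text generator
import random
from collections import defaultdict

def train(text: str, order: int = 2):
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        # map each tuple of `order` words to the words seen after it
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order: int = 2, length: int = 50) -> str:
    out = list(random.choice(list(chain)))
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = open("ebooks.txt").read()  # hypothetical path, point it at your own dump
print(generate(train(corpus)))
```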
every physical system is a markov system, i.e. given perfect information about the current state of a system, you can predict the next step in its evolution, and it doesn't matter how the system got to the initial state
>NNs are able to use the training data to re-encode themselves
backpropagation, gradient descent blah blah, yes
brotip: look into control theory, specifically into how to build self-tuning controllers
exact same algos work for markov chains or whatever else
it's actually been dabbled with for a long time, since the 80s, because of storage constraints. look into fractal image compression and similar stuff.
the problem most face when finding old research is that several disciplines use wildly different nouns and verbiage.
also compression was used in the old 80s robots' AI too, but iirc it was used differently
>also compression was used in the old 80s robots' AI too, but iirc it was used differently
2000s, i was thinking of ASIMO.
but in the 80s there was definite use of compression with autoencoders
nice, beautiful. there's something similar that can be done using JPEG-style DCT blocks for quantization. AI models are going to get much, much better as real software engineers take over from the data scientists.
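If anyone wants to poke at the JPEG-style idea, the core step is just a blockwise DCT followed by quantization; a toy sketch with scipy, not any particular model's pipeline:

```python
# 8x8 block DCT + quantization, the lossy heart of JPEG-style compression
import numpy as np
from scipy.fft import dctn, idctn

def quantize_block(block: np.ndarray, q: float = 10.0) -> np.ndarray:
    coeffs = dctn(block, norm="ortho")
    return np.round(coeffs / q) * q  # coarser q -> more zeroed coefficients

block = np.random.rand(8, 8) * 255
recon = idctn(quantize_block(block), norm="ortho")
print("mean abs error:", np.abs(block - recon).mean())  # small but nonzero
```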
>AI models are going to get much much better as real software engineers take over from the data scientists
if you anons aren't using the AIs to become software engineers you are wasting an opportunity
think of something stupid, and ask the AI to make it in python, and keep troubleshooting the error codes
getting something functioning gives you the confidence to start learning what the different parts of the code are actually doing
OpenAI has a big advantage with their models that are several years ahead of the competition.
Even if ChatGPT-4 is not ideal, the others like google or bing or whatever are so far behind.
And I mean orders of magnitude behind
i asked chatgpt what the likely outcome of automation would be and it said a combination of UBI, CBDC and negative interest rates would be highly likely, as most jobs will be eliminated within the next 20 years and the jobs remaining and being created from that point on will be too few to employ the population.
it's going to be a painful time regardless.
certain industries will be buffered like blue collar work (until the AI is connected to humanoid robots), but white collar work will be killed and the only buffer for that will be safety-critical industries like medicine, because it'll take some time for humans to trust AI with their health/lives, so humans will be needed as middlemen to take responsibility if things go wrong (which is why automated freight/trucking will still put people in the driver's seat, for a little while at least)
>(until the AI is connected to humanoid robots)
Large language models can't evaluate or execute physical instructions, they literally just imitate human speech patterns in text. They can't actually perform reasoning or make decisions.
>Large language models can't evaluate or execute physical instructions, they literally just imitate human speech patterns in text. They can't actually perform reasoning or make decisions.
Again, more bullshit from the humans. There has been tons of testing on its ability to simulate reasoning and it is only improving
>Large language models can't evaluate or execute physical instructions,
Give them a setting where they have some capacity to evaluate things or execute instructions and they will do so.
You've clearly never used it or just fucked around with shitty ChatGPT and nothing else.
>They can't actually perform reasoning or make decisions.
>can't make decisions
Of course they can, they just don't have any capacity to reason over it unless you feed its own outputs back into itself with some instructions to build up an active memory of what it is doing.
You can also tell it to create a compressed record of events, literally keeping the minutes of its own thoughts, and give it the capacity to indirectly dream, in a sense.
It doesn't matter that it isn't "like us", literally the only thing that matters is the output.
If it can SIMULATE some aspects of us - by very definition it is artificially intelligent.
Image recognition is still AI as well, despite pretentious fucks trying to redefine it not to be. So is optical character recognition, the first real form of that research to be successful.
>It’s going to make knowledge work free
No, it's going to make bullshitting free. It's not AI or even what we used to call an expert system, it's a bullshit generator.
>>It’s going to make knowledge work free
>No, it's going to make bullshitting free. It's not AI or even what we used to call an expert system, it's a bullshit generator.
It's not one or the other. The better you get at prompt engineering, the better you get at phrasing prompts so it can't guess your intent and has a hard time bullshitting you.
AI generates far less bullshit than humans. You in particular are a bullshitting human in this instance and you don't even realize it.
>AI generates far less bullshit than humans.
Unquantifiable claim.
First of all: you don't know the actual data of "bullshit percentage" people spew out of their total communications.
Second: AI systems are as good as their "best data". Not only that, models like GPT actually >don't< produce the most factually correct output, but the most >probable< output given an input. They're probabilistic models, meaning that every time you ask it something (even if it's the same question), it's going to give you another answer drawn from a probability distribution.
So no, AI does not generate less bullshit than humans for mainly those 2 reasons. Also, the 2nd point assumes that only correct and cleaned data is fed into the model to be trained with. Which is true for models like ChatGPT/Bard/etc. because monkeys working for Google/OpenAI/etc. curate the data, but this is not the case for an AI model that does not employ data cleaning done by humans.
>>AI generates far less bullshit than humans.
>Unquantifiable claim.
>First of all: you don't know the actual data of "bullshit percentage" people spew out of their total communications.
>Second: AI systems are as good as their "best data". Not only that, models like GPT actually >don't< produce the most factually correct output, but the most >probable< output given an input. They're probabilistic models, meaning that every time you ask it something (even if it's the same question), it's going to give you another answer drawn from a probability distribution.
>So no, AI does not generate less bullshit than humans for mainly those 2 reasons. Also, the 2nd point assumes that only correct and cleaned data is fed into the model to be trained with. Which is true for models like ChatGPT/Bard/etc. because monkeys working for Google/OpenAI/etc. curate the data, but this is not the case for an AI model that does not employ data cleaning done by humans.
Again, you're the one bullshitting here. You say human bullshitting is "unquantifiable"? Really? How about Milgram proving that 70% of humans CANNOT detect bullshit at all?
I can quantify the amount of bullshit I get from AI by the fact that I use it to program and then I test those programs in the real world. It does sometimes bullshit me, but the more I work with it the more I can prompt it correctly within the first or second tries to give the result that is bug free and more reliable than what I'd get from a human.
I then use those same ideas when prompting it for "soft" topics that can't be immediately proven wrong using code. The trick is to give it the opportunity to bullshit you either way and in your wording make it so it can't guess what you want to hear. Have it explain the pros and cons of something and that by itself cuts 90% of the bullshit.
It will be far more thorough than almost all ghostwriters, who are always just bullshit artists anyway and are far better at guessing your intent than AI is.
>~~*Milgram*~~
>How about Milgram proving that 70% of humans CANNOT detect bullshit at all?
ironic, you're the one who can't into bullshit detection.
>You say human bullshitting "unquantifiable"? Really? How about Milgram proving that 70% of humans CANNOT detect bullshit at all?
First of all, I'm pretty sure the famous milgram study was not about 'bullshitting' but about obedience, which is another thing when you factor in how studies are conducted. In order to measure "bullshitting" you would need to omnipotently know and spy on people in order to know when they are lying and when they're not lying.
>I can quantify [...] human.
A model giving you working code does not mean the model cannot bullshit you, lmao. You already know that, it's a learning system. A human, a way smarter 'machine'(as gays put it) can also do this, but better. It also can benevolently not bullshit you even if it does not know something, whereas an AI model can actually BULLSHIT you pretending it knows something ("and believes it"). Because humans have pre-cognition[cognition: i.e memory fetching, response giving, thinking,etc] mechanisms[amygdala and 'emotions' encoding are not present in 'machines'] that determine whether something is real or not. This is not the case in machines: they assume everything to be real and true. LLMs/other kind of models being able to "reason" are at a semantic/symbolic level: given facts, they inherently don't know the difference between reality or not. (This is not the same about being factually correct or not, it's about your perception of reality).[See the >unsolved< problem of cognitive modelling of dreaming]
You're right about the other kinds of aspects of reasoning with a LLM model(arguing with it, etc), but prompting and realtime tuning a model to fit your distribution around the context of discussion does not mean the model can't bullshit you, it only means it's properly tuned around your discussion. If you drastically go out of the context space of discussion, it will BS you.
>First of all, I'm pretty sure the famous milgram study was not about 'bullshitting' but about obedience, which is another thing when you factor in how studies are conducted. In order to measure "bullshitting" you would need to omnipotently know and spy on people in order to know when they are lying and when they're not lying.
Like it matters if it's obedience or bullshitting? That is pilpul. The 70% of humans that have no resistance to bullshit, due to obedience or whatever other reason, will then propagate their bullshit. That is why I put the baseline bullshit level for humans at AT LEAST 70%, which AI does surpass. We just got done with people spending 3 years injecting themselves with poison because they can't tell what bullshit is. AI is smarter than that. Sorry to say.
>prompting and realtime tuning a model to fit your distribution around the context of discussion does not mean the model can't bullshit you
I never claimed that AI doesn't BS you, I was very clear that my approach is to lessen the amount of BS I get and that this level of BS is much less than the level of BS I get from humans. Everyone that has replied to me in this thread has tried to BS me, so your score is currently 100% BS. Try to do better.
I specifically listed a simple way to detect and reduce most BS in AI. The term for it is "misalignment" and my basic position is that the misalignment problem is far greater during conversations with humans than with AI. For instance, with AI you can run variations of the same prompt several times with no memory between prompts to see if it gives the same details each time, this tends to be a decent BS filter. You can't do that with humans because they remember what they have said in the past and so each time you ask them again it has more bias.
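That re-asking trick is easy to automate. A sketch, where ask_llm() is a hypothetical stand-in for whatever stateless completion API you use:

```python
# BS filter: re-ask the same question in fresh sessions and measure agreement
from collections import Counter

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("hypothetical stand-in; wire up your own model call")

def consistency_check(prompt: str, n: int = 5):
    answers = [ask_llm(prompt).strip().lower() for _ in range(n)]
    top, votes = Counter(answers).most_common(1)[0]
    return top, votes / n  # low agreement suggests the model is guessing
```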
they bullshit authority in milgram
if you can simulate hierarchy you get 70% by default for anything you want to do
>they bullshit authority in milgram
>if you can simulate hierarchy you get 70% by default for anything you want to do
This is important... why? OK, so "authority" is one of the many scenarios where humans can't detect bullshit and then propagate it on to others, and even kill others, as in Milgram and the clot shot.
I agree, but do you really think that is the only reason why people can't detect bullshit? Doesn't that seem like the result of a bigger problem, that they don't actually have internal thinking processes but only regurgitate social cues? Humans are walking bullshit machines.
With AI at least I can train from my own data and I can even have two AI agents debate each other and see who can BS the other the most. Adversarial network training is yet another way to filter BS which is very hard to do when getting info from humans but is easy with AI.
>Everyone that has replied to me in this thread has tried to BS me, so your score is currently 100% BS. Try to do better.
Okay. If I bullshitted you, then you can safely assume you're already being bullshitted by an AI too. (Mainly because I knowingly did not attempt to BS you.)
Also idgaf about misalignment and other terms coined by semi-layman people who don't know the principles behind model design. I know about it, i know what people refer to when they talk about it: i'm just saying it's irrelevant in the context of having a model be more 'capable' (i.e. increasing model capacity) whilst also being more factually correct (i.e. having a 'test set' score very high). Not only can you not prove the validity of an LLM's statements, because there is no "test set" (it's not a supervised system but a probabilistic, generative one); on top of that, it's not designed to give you the same output multiple times.
If you try to 'overfit' a model to give you "facts" by swaying it to give you the same output regardless of input, you're doing it wrong. It's like asking Picasso to draw you the Mona Lisa exactly the same way repeatedly: that's not how it works. Generative models are not good "fact checkers" or "fact givers".
I hope I explained it better this time: you can definitely use LLMs for generating plausible content, but saying that they are not bullshitting is not correct. Plausible and bullshit are not mutually exclusive. When most people, including people here, bullshit, a lot of it sounds plausible. (that's what makes it effective BS in the first place)
>saying that it is not bullshitting it's not correct.
I've written like 3 times that I am not saying that AI does not bullshit. I've stated repeatedly that it is easier to manage the BS you get from AI than the BS you get from humans. You are not bothering to read my replies so I'm done with you.
>I've stated repeatedly that it is easier to manage the BS you get from AI than the BS you get from humans.
An unquantifiable claim. And extraordinary claims require extraordinary evidence.
You should provide evidence explaining how humans bullshit less than AI models, on average. I'm not sure you understand how impossible your task is, but go ahead.
Maybe you should refine your statement to: most people i've interacted with bullshitted me way more than machines did.
And then again, that would still be unprovable: you need to prove you know that they bullshitted you, not that you think they did.
>Replication effect size: Various sources (Burger, Perry, Branningan, Griggs, Caspar): the experiment included many researcher degrees of freedom, going off-script, implausible agreement between very different treatments, and "only half of the people who undertook the experiment fully believed it was real and of those, 66% disobeyed the experimenter." Doliński et al.: comparable effects to Milgram. Burger: similar levels of compliance to Milgram, but the level didn't scale with the strength of the experimenter prods. Blass: average compliance of 63%, but suffers from the usual publication bias and tiny samples. (Selection was by a student of Milgram.) The most you can say is that there's weak evidence for compliance, rather than obedience. ("Milgram's interpretation of his findings has been largely rejected.")
I hate sciencelets. gay tier1 midwit unimpressive entry level as fuck
i don't see how a middle class has any place in the future at all. there just isn't any reason for such a class to exist. i'm sure a few heads will roll during the transition but ultimately they will be usurped by the new cbdc system. the majority of people in the US and in the world are POOR people. the poor will be much more willing to accept corporate feudalism in exchange for UBI than take a chance on some armed uprising with the middle class, which due to its small numbers has a less than optimal chance of success.
i imagine you will always have that option, but the system will require work from time to time. thats how it will work. the future of work is going to look very weird. there wont be typical jobs you go to. a need may arise in some area or another that requires a human to do it and you will be notified to show up and do it. or maybe your UBI check doesn't arrive in the mail if you refuse. thats how they will do it at first. but over time automation will become so advanced that human labor will not be needed in any form or fashion and then theoretically, they can just give you what you need.
im not sure what the role of humans will be after that point though, if we are no longer needed for anything then i expect the transition from human to cyborg will begin, people will want to live forever without the health issues that come with being old, they will want to get off this planet and explore other worlds. not possible in these bodies.
not everyone will become cyborg immortals
the goal is to destroy the economy (which AGI will do) and force dependence on whatever system they have planned for the rest of us, which will be used to cull the herd when the time comes
You're forgetting the many tens of millions of minorities in affirmative action make work jobs that could be replaced by AI not in ten years, or 5 years, or next year, but right now. And I guarantee they're not going to lose their jobs.
its very possible that the jobs they are doing now are being allowed to exist just as a stop-gap until the new system is put in place.
Know what employs the population? Warlords.
You let the AI do shit, I’ll be destroying power plants with my scumbag friends.
>the poor will just take up arms with the middle class and land owners against the establishment machine
already addressed this. wont happen. the poor will be offered a comfy life with UBI and everything they need. all the middle class can offer is feudalism. i don't doubt some skirmishes between the former feudal lords and the technocrats might spark up from time to time but it wont amount to much. the state will crush you and guerilla warfare only works if you have the people on your side.
The poor will follow Warlord Me because I give them the chance to feel righteous while killing other people. You can’t do that without an opposing army.
>comfy life with UBI
until they decide you must take a killshot vaccine for the next plandemic when they decide humanity is too full, or you lose your UBI if you dont comply
Warlord Me just shows up with a bunch of unemployed people and trashes your power company hookup, then the local substation while wasting any robots that show up with spark gap transmitters. Guess we need some workers after all!
decentralized, wind powered robots
So I show up on a day with no breeze and it’s fish in a barrel
solar powered wind turbines
White western people are not breeding above replacement levels anyway so there really isn't an issue here. Except for all the retarded politicians who use the low birthrates as an excuse to import third worlders to the west.
I don’t see any particular need for rich people. Most of them are stupid as fuck and with complete power will utterly destroy the world with their harebrained schemes sold to them by grifters and scammers.
The rich people will own the AI... They'll live in marble palaces while the rest of us live in slums. There will be no middle class or even upper class.
They’ll live in the stomachs of the roving hordes as the power goes out and civilization collapses.
I think you mean Bitcoin, anon. You are all running out of time.
you're basically describing RPO (Ready Player One) but i don't see it that way. the poor have nothing and live in slums right now. so that would change absolutely NOTHING for the bulk of the population. like i said, the only class that's going to notice will be the middle class.
middle class is no longer needed, it was always an artificial construct created by capitalists as a bulwark against communism. before capitalism the closest thing to a middle class was the nobility but they were so few that it essentially represented the current amount of billionaires.
Outfoxing the system will require IQ. Marry on the basis of IQ, and basic health & fecundity. I mandate Christianity.
Take this meme. The NoFap symbol of power.
It is a square. It is green, to symbolize anon solidarity. The background is blue indicating a male alliance, and the dangers of the sea, but calm within. The square symbolizes pepe (the letter p in Egyptian is a square), but it's only one P, indicating a halted Peepee.
>the rich
if you mean billionaires (neo-nobility) then they don't care. they own the system already.
if you mean millionaires then, yea, they aren't going to be too happy to find out that their rank in society has been downgraded to that of serf.
>find out that their rank in society has been downgraded to that of serf.
Not many people want to be in a global society. Despite whatever problems they have with their own government, they have no idea what a Pandora's box a global system would be.
what makes the middle class, middle class, is their ability to live off the interest of their wealth and accumulate land. once the CBDC is implemented it will at first run parallel to the existing system, and then through a process of EEE (embrace, extend, extinguish) it will usurp the legacy system, and the interest on the paper in that system will not be carried over to the new one. in fact it would be impossible, since it's no longer human labor generating any of the value anymore.
You're so retarded, so ignorant of history and so hung up on absolutely retarded Amerimutt memes that it not only pains me, but you become pretty much unable to see reality as it is.
> i don't see how a middle class has any place in the future at all. there just isn't any reason for such a class to exist.
So if there will not be enough jobs for the middle class, what are the millions of imported Africans going to do?
The complacency of Western people, particularly Americans, hinges entirely on comfort and stability. In spite of everything sinister that's going on, people continue to live in pseudo-luxury, fed and sheltered and entertained in a way that makes armed resistance seem like an infinitely greater disruption to their lives than addressing societal problems with the necessary disruptive force. I'm starting to feel like the next stage in automation will remedy this. It threatens to drive almost all of the population into joblessness. The second families can't feed themselves - actually, for real can't feed themselves, not just "le broke college kid with ramen" can't feed themselves - they're going to resort to violence. Raiding food supplies, smashing machines, setting offices on fire. I actually wonder how many of them have to be killed before the ruling class gets wise and outlaws AI, or starts setting rules about how many human employees you need to have, or whatever. No point in being rich if you literally can't leave your bedroom without getting shot.
>muh you won't do shit
People can only run for as long as they have somewhere to run to. Even cowards.
>most jobs will be eliminated within the next 20 years
Any job that requires a human to use their hands in an unfamiliar environment will take much longer than that.
Plumbing, electricians, firefighting, etc.
All the jobs where anything is done in an office, factory, or in a vehicle will probably be replaced.
if you build infrastructure with AI in mind you can do it
for example if we adapted the road network we could already have self-driving cars easy: traffic lights broadcasting go signs and small IR or other lane indicators
yeah, that's all (mostly) controlled environments and existing machinery. Self driving is probably going to be the 2nd big AI "takeover" after menial office tasks.
But imagine a robot crawling around someone's house crawlspace, figuring out how to un-fuck their 40 year old plumbing. Not an easy task, both algorithmically and mechanically.
Walking robots (as of 8 years ago) could barely make it through an unfamiliar doorframe without entire teams of engineers helping them.
They've improved significantly since then, but all the stuff you see from boston dynamics is all controlled environments with rigorous testing and attempts. It's still going to be at least 10 years before you can take one of their humanoid robots, stick it on some random construction site, and have it do parkour or even walk up a ladder without falling.
If you want job security, either become the guy designing, building, and fixing the robots, or become the guy doing the jobs that the robots just can't do.
I never understand the UBI argument. Where does the money come from? If we’re just making it out of thin air, how do we combat the inevitable hyperinflation issue? I know gpt won’t give a serious answer to these questions but I’d still like to know what the logic is.
>Where does the money come from? If we’re just making it out of thin air
1. It comes out of thin air the same way fiat money currently does, are you a fucking retard?
2. They can take money that is currently in circulation and redirect it from currently existing welfare into straight paychecks - a more efficient form of that same welfare, again are you a fucking retard?
The same way it's dealt with right now and for the past 80 years. You don't print too much and you use various financial tools to reduce the inflation rate.
Why is this such a difficult fucking thing for you to grasp?
The United States and most of the west are ALREADY welfare states. Getting paychecks direct from the government will be more efficient and cost less money than the current shitshow unironically.
Look, I get it: You're a fucking retard who thinks "hyperinflation!!" is an automatic pwn against UBI while you understand abso-fucking-lutely nothing about how the economy or anything works.
There's a million actual reasons why UBI could be a problem. Hyperinflation is the 90 IQ conservative moron take.
Two very surprisingly based leafs right here
>The same way it's dealt with right now and for the past 80 years.
I guess that's why family houses are $600,000 and cans of tuna are $4, while wages remain stagnant at $15 an hour. you really btfo me lol. you should try out reddit, they have the iq for someone like you.
>Where does the money come from? If we’re just making it out of thin air
1. It comes out of thin air the same way fiat money currently does, are you a fucking retard?
2. They can take money that is currently in circulation and redirect it from currently existing welfare into straight paychecks - a more efficient form of that same welfare, again are you a fucking retard?
This isn't difficult to figure out you moron
Look at the Redditor big mad that someone dares question communism finally working correctly.
I'm third position buddy
Fact is the "winning" economic models out of WW2 are both garbage and basically the same fucking thing as each other. The communist system sequestered money into the hands of a minority through violence. The capitalism system sequestered money into the hands of a minority through subversive grift, fraud, and monopolies. The outcome in each system is the exact same thing because they both work to sequester money and subsequently access to resources, in the hands of a minority - and eventually this sequestration reaches a breaking point and subsequent disaster. Not even the upper crust parasitic scum benefit in the end.
We need to design our economic system around physical realities: Labor, Energy, Raw Materials, Time; these are fundamental economic units. Money is not - it just distributes actual economic units, and it can do that efficiently or very poorly, and still it doesn't address the biggest question of it all: WHY
Why do we amass energy and resources, spend time and effort to create things, and reproduce ourselves? What's the fucking point of doing any of that? You have to give this an answer because it defines the entire direction of your economic system, and it is one of the critical issues with capitalism and communism: they aren't about anything more than maximum production at any cost to the public for the exclusive benefit of a ruling elite. Such economic systems can never end in a benevolent way because they are wholly malevolent from start to finish. The average person in communist and capitalist society is a literal slave whose productivity is stolen in as many ways as possible by bureaucratic institutions run by the aforementioned parasites, who live a life of unbelievable leisure off the welfare of the entire rest of society; the true welfare leeches.
Literally the dumbest prediction ever. Even if you give everyone free money, most people will still want to work for even more money. The people receiving only UBI will be trap house tier retards
No shit, that's why OpenAI is now calling for Regulation to try and stop any competition from forming now that they had years of data scraping to build their model off of.
>It literally just predicts words without having any basis to understand what those words mean.
that's what i just said, genius.
it's a glorified bullshit artist.
yea it doesn't really understand chess because it's not trained on chess. it's trained on everything, and that includes sporadically brushing past chess-related topics.
It's still smarter than the average human. Ask a human to play against Google Bard in chess and make a docker-compose.yaml template in the next 60 seconds.
>make a docker-compose.yaml template in the next 60 seconds
Stack Exchange has banned the use of chatgpt on some of its sites due to the answers being bullshit most of the time.
Not really. Most of the most successful open-source models you can download totally for free are trained on output from OpenAI GPT.
I shall now predict the future: it will be impossible to contain AI because it will become exponentially easier and easier for every person to have their own personal GPT model trained according to their personal tastes and they know how it has been biased and tainted. This will make it impossible for centralized databases like google and openai to ever dominate the information space.
OpenAI is trash. Just like FPGAs and ASICs are superior to GPUs at mining, purpose-built and purpose-trained models will always be superior to a broad model trained on dumb leftists, gays and shitskins social media posts
This is a "Hello World" level of one small step used in machine learning. It doesn't "do" anything, it's a python package. This has nothing to do with OpenAI
Yes. Basically that's true. Here's another example of a related phenomenon: https://openaccess.thecvf.com/content/CVPR2023/html/Park_RGB_No_More_Minimally-Decoded_JPEG_Vision_Transformers_CVPR_2023_paper.html . Long story short, all the rules we use for compressing inputs (text/image/video/binaries) are heuristics that tokenized machine learning models can infer. Because of this there is likely no real difference between training on uncompressed and compressed datasets, and there may be advantages because the compression techniques are well understood transformations that have intrinsic efficiency advantages.
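To make the "compressed input is just another encoding" point concrete, a trivial illustration (not the paper's pipeline): byte-level tokens from raw and gzipped text share the same 256-symbol vocabulary, the compressed stream is just shorter and denser:

```python
import gzip

text = open("corpus.txt", "rb").read()   # any text file; the path is made up
raw_tokens = list(text)                  # byte-level tokens, vocab size 256
comp_tokens = list(gzip.compress(text))  # same vocab, several times fewer tokens
print(len(raw_tokens), len(comp_tokens))
```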
I know right?
I was saying for months in the BOT threads that as soon as proper experts get invested in AI, particularly people that deal with compression, shit will REALLY blow up.
ALL LLMs that I've seen are garbage because of their horrific shitheap models that require obscene GPUs to run them. This changes that. Drastically.
Now is when the real AI race begins.
I just hope this kills OpenAI hard, but sadly they've got too many investors to keep that smug cunt Scam Altman employed.
>I was saying for months in the BOT threads that as soon as proper experts get invested in AI, particularly people that deal with compression, shit will REALLY blow up.
this. im making a compression ai right now. essentially a folder of uncompressed and compressed files is the dataset.
the ai learns how to predict the compressed output based on the uncompressed and compressed training data.
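Building that paired dataset is only a few lines, assuming a corpus/ folder of files and gzip as the target compressor:

```python
import gzip
import pathlib

pairs = []
for path in pathlib.Path("corpus").glob("*"):
    raw = path.read_bytes()
    pairs.append((raw, gzip.compress(raw)))  # (model input, prediction target)
print(len(pairs), "training pairs")
```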
You reminded me of the fun experiments I had teaching Character AI and the early You.com chatbot how to compress its own past messages and my own to create a running summary of the conversation.
Surprisingly worked well until it forgot the initial instruction.
If you hold the models hands like they are legit retards, they are very fucking flexible, to a pretty scary degree.
The problem is that hand-holding takes up valuable tokens. 100k+ token limit systems will surely change this. I still think 100k is too little, I feel 300k might be close enough to make significant understanding of logic possible and consistent.
The issue there is internal bandwidth on GPUs becomes a limiting factor. NOT MUCH admittedly, you're still gonna get fairly quick responses, it'd be kinda like opening a huge ass sticky thread that had thousands of posts before they did the rolling sticky shit.
I miss giga fuckhuge stickies...
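The running-summary trick looks roughly like this, with chat() as a hypothetical stand-in for the model call:

```python
def chat(system: str, user: str) -> str:
    raise NotImplementedError("hypothetical stand-in for your chat API")

summary = ""
while True:
    msg = input("> ")
    reply = chat(f"Running summary of the conversation so far: {summary}", msg)
    print(reply)
    # fold the newest exchange into the summary so older turns can be dropped
    summary = chat("Update this summary with the new exchange. Keep it short.",
                   f"{summary}\nUser: {msg}\nAssistant: {reply}")
```

It works until the summary itself drifts or overflows, which matches the "forgot the initial instruction" failure mode.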
yeah, i agree. it's actually really interesting, to have layers of ai, i experimented with this in unreal engine 4.
i developed a racing neural network; basically it's multiple NNs doing different things so that an experienced racing driver emerges.
there's a video on youtube about it, i won't post it because i don't want to dox myself, but it's from about 7/8 years ago
i think a compression ai in the mix would be insane, you could develop tiny models that themselves are a part of the over training.
ai lasagna
no it's actually better than openai because they don't do compression. and the size of datasets is the real problem why plebs can't get in to ai.
by compressing the datasets, a regular person could build an openai killer
I compiled a 200 megabyte raw text dataset, and to run a relatively mild LoRA (let alone hard training) on 7B LLaMA using an RTX 3090, a GPU most consumers would never even think about splurging on, I was quoted a 60-hour completion time.
Add decompression time to that and yeah.
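For reference, the setup behind a run like that is roughly this with the peft library; the model id and 8-bit loading are my assumptions, adjust for your own stack:

```python
# sketch of a LoRA fine-tune setup on a 7B LLaMA-class model
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b",
                                             load_in_8bit=True)  # needs bitsandbytes
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only a tiny fraction of the 7B is trainable
```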
i found these RTX A2000 12GB and 6GB cards. super cheap and insanely powerful for the 70W draw. also they are low-profile so you can jam a ton in a small space.
Except it requires energy, which is subject to not having warlords kill you and take it
smaller local models means less energy requirements, you can run trained models on an array of microcontrollers powered by an array of calculator solar panels.
nothing will stop the man made horrors beyond your comprehension
None of that matters if I set it on fire with my buddies.
To kill OpenAI you just have to get better at selling a service to companies and governments. It's not hard or really THAT expensive relatively to do any of this. We were building speech models 5 years ago on millions of hours of video. Our bill wasn't even $100K a month.
It's not that simple: there are emergent properties coming out of what should be just what you describe, and if you knew anything about the technology or research you'd acknowledge that instead of downplaying it as "just data compression", which is fully the midwit retard take of someone who likes to look smart but isn't.
You're not just "not impressed by AI", you're actually boasting about how unimpressive you think AI is.
You're nothing more than a reactionary; "oh you think AI is super amazing, well I, the more intelligent midwit, know it's actually a nothingburger!"
AI is nothing you dumbfuck.
it's literally a giant grift. read some "AI research" - it's complete bullshit (just like most "research" these days)
t. someone in academia
No, you're just another retarded reactionary who wants to look smart by downplaying what AI research is achieving by reducing it to "just systems and math that especially can't replace actual people".
AI is going to replace actual people.
You can accept it or not, it doesn't matter. It's already in the process of doing it, the same as automation machinery did in factories and workshops and now we can build entire cars with a few dozen men in a factory instead of thousands.
do we also have to accept that you are a woman even though you were born with a penis?
>AI is going to replace actual people.
i'm not saying it isn't going to replace people, retard.
im saying that the "research" in this area is retarded
programming defeated me back in '03. every single day I asked myself:
How do these people know that in order to do x,y & Z they need to first do a,b,c,d,e,f,g,h....... in a certain order?
I'm STILL like WTF? after trying to interpret that code.
HOW DO YOU KNOW WHAT YOU NEED TO DO TO CREATE WHAT YOU WANT?
no no no, it's the functions themselves. how does one KNOW they need to use those (and that they will do what you want)? It's always baffled me. It's like one MUST know an entire library of functions to do anything mildly complex.
then next week the entire library is obsolete because some autist made another library over the weekend which supersedes the one you just used. FFS.
If I stayed doing that shit I would be batshit crazy by now.
The issue is, you learnt some syntax but didn't learn how to program in the process. If someone asks you to write a program and tells you all the syntax you'll need to write it, you still won't be able to, because you don't know how to program.
It's like if you don't know English and you teach yourself the words dog, cat, and apple, and then get told to write a story about a dog, a cat, and an apple. Writing 'dog cat apple' isn't a story. You're going to have to actually learn how to write in English, with grammar, the way sentences are structured, and so on.
Do you still want to learn how to program? I can point you toward some resources.
that only applies to web developers who went to coding bootcamp and have to learn all the javascript bullshit in that competitive trash field
If you wanna do complex stuff with numpy and scipy, you just make your plan, then google what functions they offer to help your plan become easier, and then you write in whatever you need to to fill in the gaps.
Math doesn't really change too often thankfully
>It's for the better, most programmers are schizos
Can confirm. It's a hellish existence with brief periods of elation and euphoria when you finally make something work.
it's like making a factory line where stuff travels from beginning to end, and shit happens to it in steps to make a final product, but instead of robots/people you have shit like variables, loops, if statements etc
well you're looking at the big picture
When creating a program to solve a problem, you start off very basic
I made a linear algebra calculator and before I could start solving equations, I had to make a function/program that printed out matrices/lists of numbers in the format that I wanted them displayed
So yeah, there is a lot to do but you start from the bottom and do it step by step. Looking at a giant program or something complex is pretty intimidating and confusing if you don't understand those principles
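The matrix printer anon describes is a good first exercise; a minimal version:

```python
# print a matrix (list of lists) in aligned columns
def print_matrix(m):
    width = max(len(f"{x:g}") for row in m for x in row)
    for row in m:
        print("  ".join(f"{x:{width}g}" for x in row))

print_matrix([[1, 2.5, -3], [0, 10, 42], [7, -0.5, 1]])
```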
>they need to first do a,b,c,d,e,f,g,h....... in a certain order?
also, do they need to be in a certain order?
They solve a problem in a certain way when there are thousands of other ways. Then maybe later they discover a more efficient way to solve it and adjust accordingly. There is much more flexibility than you may think
You need a software development plan. To create a calculator, we first need to create an adding machine. Then we develop other functions of a calculator using the adding machines as a template.
The adding machine needs to:
>Receive Inputs
>Sum Inputs
>Output Sum of Inputs
This is a basic adding machine. The very first thing you will need to do is get inputs from the user and store them in memory. Then the program will need to add the two variables together and then output the results. You can enhance the adding machine by giving it additional functions:
>Feedback Output
>Multiplication
>Subtraction
>Division
>Floating Point Values
>Graphical User Interface
Congratulations, your adding machine is now a basic calculator. You could continue to develop this into a graphing calculator, but that wasn't part of the software development plan.
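A bare-bones version of that plan in Python, for anyone following along:

```python
def adding_machine():
    a = float(input("first number: "))   # Receive Inputs (floating point)
    b = float(input("second number: "))
    print("sum:", a + b)                 # Output Sum of Inputs

def calculator():
    # the adding machine as a template, extended with more operations
    a = float(input("first number: "))
    op = input("operation (+ - * /): ")
    b = float(input("second number: "))
    if op == "+":
        print("result:", a + b)
    elif op == "-":
        print("result:", a - b)
    elif op == "*":
        print("result:", a * b)
    elif op == "/":
        print("result:", a / b if b != 0 else "undefined")
    else:
        print("unknown operation")

calculator()
```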
The model isn't code, though.
It's simply a super-lossy compressed database.
The actual transformers behind it are relatively trivial in comparison and have been known about for years; it's just that nobody bothered to throw a metric fuckload of data down their throat until OpenAI and CharacterAI came about, and now we have various other competitors appearing every few months.
cool, but the gpt* family is not only text classification
Text classification has been de facto "solved" for human-level shit with open implementations. It's only a matter of training data and scope of generalization. I guess the interesting thing about this paper is that they use a non-parametric model, which traditionally is harder to optimize.
search for red socks and no matter what you do the team will show up. Buckeyes too.
Search Amazon for a specific car part and random incompatible ones are suggested. Use the compatibility check and it floods you with generic low-quality parts.
AI gets it right most of the time.
You can even ask dumb things and it's not rude and doesn't laugh behind your back
Creation. Which is a fundamental function of the brain. Jung spoke of the four functions : Sensing, Intuition, Feeling and Thinking.
Each component represents a different aspect of the human mind. Something AI does not have.
Feeling, being the history and thoughts of the past. This is why AI video always looks beyond strange, they are trying to mimic something based on a limited number of human commands in software.
Sensing, it is a fruitless point when they put sensors on robots, because robots don't understand what it means to sense. They have a picture or audio, but they have no connection to the pictures, sounds or meanings.
Thinking, the application of thoughts, is the only thing AI has, but even that was given to it by humans. More importantly, it can't change its own thoughts like humans can. AI will never be able to rewrite its own code and change its thoughts. It will always work off the foundation given to it.
Intuition, a crucial function of creation, is one AI is not capable of. No, humans are not search engines. They are integrators; they bring information together and create. This is fundamental to human minds and human life. The creation of children, the creation of ideas.
There is a reason it is called ARTIFICIAL intelligence.
Your consciousness is not your thoughts. They are separate. Your thoughts happen automatically, with maybe a nudge from "you", but your thoughts themselves are physical in nature.
>>Your thoughts happen automatically
>do you have an internal monologue?
You don't do any of the calculations necessary to make that internal voice happen. And yes, I do. That's why I said you can nudge it, but the actual thoughts themselves are physical in nature. Consciousness is above that.
Consciousness is an integration. Thoughts are creations of the human mind. This is why I go back to the creation aspect of humanity. This is fundamental to nature itself and understanding human minds.
You are tapping into something much more fundamental about the universe itself. Of course AI itself is a human creation. You are trying to separate them but they are all connected. Robots are human creations, computers are etc. etc.
The underlying aspect of the universe. The infinite dark, blackness and in spite of that, the spark, the creation itself.
>Consciousness is an integration
huh
consciousness is awareness
In my opinion awareness is a function of the present moment, I associate it with sensing. But for me consciousness is an integration of all functions : past, present and future.
Consciousness in the context of being a human is just self-awareness. It creates a hall of mirrors effect, but in the end your awareness is the same as any other living creature.
Take the word self-awareness: "self", knowing thyself, who you are; and "aware", again, the present moment. A function which requires all three points in time: past, present and future.
But take a moment and think about the semantics of this discussion. How two humans are talking about the meaning of words and the fluidity associated with them, the meaning given to them by the human and society itself. And how AI is just a very poor snapshot of it.
Awareness isn't really defined.
The entire population could be considered "aware" (and so would a cat) but that doesn't make them intelligent nor sentient.
I think it's pretty apparent at this point that a huge swathe of the human population are just automatons; chinese rooms if you will. Obviously the line between actual sentience and the appearance of it is already very thin in the human species. I think it's so thin that you can actually take some of the human-automatons and teach them how to be sentient. (Something I think our education system is expressly designed not to do)
>The same way it's dealt with right now and for the past 80 years.
>I guess that's why family houses are $600,000 and cans of tuna are $4, while wages remain stagnant at $15 an hour. you really btfo me lol. you should try out reddit, they have the iq for someone like you.
What a shock, our current financial system is horrible and mismanaged!
That has nothing to do with UBI you idiot. You keep assuming that UBI = hyperinflation and it's such a stupid fucking take I can't put it into words. It's viscerally painful to read something this stupid.
NO, YOU FUCKING RETARD
UBI is just something you can spend money on.
Money itself is just a way to direct limited resources around. You don't even fucking understand what you're trying to talk about when you bitch and whine and moan about how this will create hyperinflation. You are a goddamned idiot parroting retarded shit you heard some other conservative retard spout off to "own the libs" or something equally demented.
UBI in fact is wholly capitalist: It's basically a capitalist bandaid solution on capitalist systems being so greedy they're falling apart. The idea is, you throw money at the peasants so they can efficiently decide on the allocation of resources because that works better than top-down economic (mis)management (which ironically all western countries now practice). There's a LOT of big problems with UBI but hyperinflation IS NOT ONE OF THEM. Pull your head out of your ass and stop parroting idiotic "gotchas" you learned from other morons.
you throw the word retard around a lot for someone who doesn't understand simple realities, like the government doesn't care about you, or resources are not infinite. what's the point of redistributing currency credits to plebs if the raw resources to meet their needs don't exist? you're a midwit.
Shut the fuck up leaf
temper temper now midwit, you wouldn't want the thin veneer of civility to scratch off
I’m not the OP you were talking to, I’m a different anon. I have no civility; you are a stupid fuck spouting maple syrup moron bullshit, as is tradition in fucking leaf land, British subject chud. go suck off “King” Charles.
tell me how you really feel
My response was meant for the idiot leaf you were replying to. Sorry but that gay is why everyone longs for the day of the rake. Spouting off absolute nonsense with so much certainty.
Turns out you’re a different leaf. I apologize, as you may be one of the rare based leafs
I accept your apology
The whole point of UBI is to figure out how to distribute finite resources efficiently. This is like the third time I've told you this and you still don't fucking get it and you go off on weird random idiot tangents about "what if we don't have the raw resources" as if that even has ANYTHING to do with UBI. UBI doesn't make iron in the ground disappear you idiot. It's just an attempt to get around a problem where human labor isn't necessary for production anymore. If people have no money how do we collectively decide market winners and losers and good ideas vs bad ideas?
UBI says: Just throw money at people and then they will spend it on the winners and they'll collectively do a better job of resource allocation than if we managed the economy from some central politburo.
UBI is being floated as an option because it's one of the least destabilizing directions we could go with our economies. The dissociation of human labor from wealth is fundamentally breaking everything and if it continues unaddressed at some point we will have entire industries producing widgets that literally nobody can purchase because none of the people who want to purchase things have any money to do the purchasing with.
In another way of looking at the same issue, it's the accelerated sequestration of money in the hands of a minority that ironically makes the money completely worthless and UBI seeks to rectify this by redistributing the money back to people regardless of the fact they didn't labor to produce anything.
Personally, I don't think it's a good system at all, but the difference between where I sit and where you sit is that you don't even fucking understand what the problem is - you go off on wild retarded hyperinflation shit that has absolutely nothing to do with UBI or the problem it's looking to solve. Hyperinflation doesn't factor into this any further than it would in our current economic system, which ALREADY "prints money out of thin air".
you have issues my guy
Holy fuck you dipshit look at the world around you for 2 fucking seconds you stupid maplemoron leaf. Covid was a test run for ubi and it went horribly. Rampant inflation. The day of the rake can’t come soon enough.
>Awareness isn't really defined.
It is defined by ourselves and a larger whole. But again that is creation itself. The creation of meaning and words etc. Even the mathematical concept of "1" is fascinating when you think about how humans learn it.
I used to intern at an elementary school, and most of us forget where we learned the concept of "1". We were sat down and taught "This is ONE apple, these are TWO apples, these are THREE apples" etc. But how do you define the concept of "1"?
Well, we look at the meaning of the word, find something which represents it as a whole, and count that as "1". Take a pencil and break it in half: is it "half"? Well, because we know the whole pencil and what it looks like, we know the concept of "half" of the whole.
But think how fluid and amazing it is for the human mind to not only integrate these concepts but also retain the empathy to understand the meaning of "1" by others. Truly remarkable.
AI is great. But humanity is much more interesting to me. Humanity which itself created AI. But even more so the people who create AI who tend to be INTP and INTJs, they impart their own Fi function and Te functions on to the software they create.
>It’s a general purpose AI
It’s literally not. It’s a language model, meaning it’s supposed to extract meaning from text and produce relevant response text based on example responses in its memory.
It cannot:
>Understand images
>Perform logic
>Understand disagreement
>Compute theory of mind
>Prove correctness
Depending on the job it can be worse than a $0.50 calculator. Ask it for a 20-digit prime number and see what it comes up with. Then take what it outputs and plug it into wolfram alpha. 90+% of the time it will be non-prime.
People call it “hallucinations” to deflect from the truth: they’re trying to use a hammer for every job. Hammers make shitty boats.
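You don't even need Wolfram Alpha to check it; sympy has a primality test:

```python
from sympy import isprime

candidate = 12345678901234567891  # paste whatever the model claims is prime
print(len(str(candidate)), "digits; prime?", isprime(candidate))
```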
>It cannot:
>Understand images
Can now. See GPT4.
Your brain also isn't general-purpose intelligence; it's cut up into discrete areas that all do their own little tasks over beefy networking backbones.
Your brain is a huge cluster network that would make a sysadmin have wet dreams with how well it is organized. (unlike what we used to think even just 10 years ago where it was a fucking mess of neurons all over the place, that new scanning technique rewrote the entire neurology handbook)
Piping various discrete AI models together to make something general purpose is simply a matter of communication APIs being created for the data-sharing.
Character AI has basic image recognition now, GPT4 has better recognition (supposedly! the demos could easily be cherrypicked!)
Likewise, pipe something like Wolfram Alpha into the processing stack and it will be capable of doing advanced maths and logic as well.
Update the model to support long term storage databases and you can expand data on top of the base model with ease, such as adding lorebooks, or giving it a long-term memory.
Give it the ability to interact with physical devices and an understanding of how bodies ACTUALLY WORK (all LLMs suck ass at this currently), and you will have something that could realistically navigate an arena given to it, probably way better than current image recognition right now.
It's only a matter of time.
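the piping really is that mundane, btw. toy dispatcher sketch in python, where llm, math_tool and vision_tool are hypothetical stand-ins for whatever model endpoints you wire up (not any real API):

def route(query, llm, math_tool, vision_tool):
    # toy dispatcher for the "communication APIs" idea above; every callable
    # here is a hypothetical stand-in, not a real service
    if any(ch.isdigit() for ch in query) and any(op in query for op in "+-*/="):
        return math_tool(query)        # e.g. a Wolfram-Alpha-style solver
    if query.startswith("image:"):
        return vision_tool(query[6:])  # e.g. a captioning model
    return llm(query)                  # everything else goes to the base LLM

real routers use a model to classify the query instead of dumb string checks, but the plumbing is the same.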
Current LLMs are shit because they are run by incompetent hacks shoving as much shit down a pipe as they can without any foresight or thought behind it. Now the real researchers are getting involved and hopefully all these hacks lose their jobs for the scamming cunts they are.
as someone who reimplemented it in c++ see [...]
i'll explain it. basically, it's adding a compression step before the ai training
and
no it's actually better than openai because they don't do compression. and the size of datasets is the real problem why plebs can't get in to ai.
by compressing the datasets, a regular person could build an openai killer
>basically, it's adding a compression step before the ai training
>no it's actually better than openai because they don't do compression. and the size of datasets is the real problem why plebs can't get in to ai.
>by compressing the datasets, a regular person could build an openai killer
It's about compressing human language into symbolic representations to save on data storage space and increase computational speeds.
For example: You can run a 200 billion parameter LLM if you have a $20,000 Nvidia GPU with 200 GB of vram
After using these compression methods that SAME 200 billion parameter LLM can now run on a $2,000 Nvidia GPU with 20 GB of vram
And this doesn't require changing anything - you just compress the data. The LLM understands your compression just as well as if the data is uncompressed.
the "gzip" in the OP is just a group of researchers with their own compression method, and they're comparing their compression to everyone else's compression methods.
this is a huge breakthrough for speed, compression is well understood and can be applied automatically. if what they say holds up this speeds up AI adoption from years and billions to months and millions. it also means AI on your phone is coming sooner.
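if you want the actual mechanics instead of the hype: the paper's trick is (roughly) using gzip-compressed lengths as a similarity measure (normalized compression distance) and taking a kNN vote over it. minimal python sketch with toy data, not the repo's exact code:

import gzip
from collections import Counter

def C(s):
    # compressed length in bytes, a cheap stand-in for Kolmogorov complexity
    return len(gzip.compress(s.encode()))

def ncd(a, b):
    # normalized compression distance: smaller when a and b share regularity,
    # since gzip reuses patterns from a while compressing the concatenation
    ca, cb, cab = C(a), C(b), C(a + " " + b)
    return (cab - min(ca, cb)) / max(ca, cb)

def classify(x, train, k=1):
    # k-nearest-neighbour vote over NCD to every labelled training text
    dists = sorted((ncd(x, text), label) for text, label in train)
    return Counter(label for _, label in dists[:k]).most_common(1)[0][0]

train = [
    ("the striker scored a late winner in the derby", "sports"),
    ("the keeper saved a penalty in stoppage time", "sports"),
    ("the central bank raised interest rates again", "finance"),
    ("markets rallied after the inflation report", "finance"),
]
print(classify("a midfielder was sent off before halftime", train))

no GPU, no training step; the "model" is literally gzip plus a sort.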
is it pronounced "jeezip" or "guh zip"?
I like Bzip
I'd just like to interject for a moment. What you're referring to as gzip, is in fact, DEFLATE/gzip, or as I've recently taken to calling it, DEFLATE plus gzip. gzip is not an operating system unto itself, but rather another free component of a fully functioning DEFLATE system made useful by the DEFLATE corelibs, shell utilities and vital system components comprising a full OS as defined by POSIX.
Many computer users run a modified version of the DEFLATE system every day, without realizing it. Through a peculiar turn of events, the version of DEFLATE which is widely used today is often called "gzip", and many of its users are not aware that it is basically the DEFLATE system, developed by the DEFLATE Project.
There really is a gzip, and these people are using it, but it is just a part of the system they use. gzip is the kernel: the program in the system that allocates the machine's resources to the other programs that you run. The kernel is an essential part of an operating system, but useless by itself; it can only function in the context of a complete operating system. gzip is normally used in combination with the DEFLATE operating system: the whole system is basically DEFLATE with gzip added, or DEFLATE/gzip. All the so-called "gzip" distributions are really distributions of DEFLATE/gzip.
For me, it's single core singletasking.
wtf I just read?
{{{stallman}}} bot
"jee-zip" when referring to the project name
"gun-zip" when referring to the "gunzip" CLI command
it is pronouned "nigg ERR" or "nigg UHH"
I put the extra r on it, to emphasize that it's nigerr
what's that? something like stderr?
no it's "Jizzip" like when you cum inside your pants
You cum pee?
no I pee white
The 'G' is silent, it's just "zip"
jeezip & guh zip = garden gnome zip
GZUZ???
which paper?
https://aclanthology.org/2023.findings-acl.426.pdf
https://github.com/bazingagin/npc_gzip
>https://aclanthology.org/2023.findings-acl.426.pdf
>objects from the same category share more regularity than those from different categories
It's interdasting.
>objects from the same category share more regularity than those from different categories
Big if true
kek
>obvious in hindsight
Really, how did no one think of this before? Compression is embedding. Has there ever been such a paper where everyone thinks it's so obvious in hindsight?
it's been mentioned, if you listened to people who weren't busy churning out coomerslop or gloating about those damn [insert enemy here] losing their jobs
I thought of it and then thought it's too boring to implement
next frontier is classification and storage by homeomorphic lossy compression, aka "long-term memory"
let them waste money and dev time with their NNs if they want, exact same results can be achieved with Markov chains
I'm not an expert at coding, so explain this to me.
Isn't the difference between NN and Markov chains that the NNs are able to use the training data to re-encode themselves to continuously improve their results?
I've implemented a simple Markov chain before and fed it a bunch of different ebooks but the results were crappy.
If, theoretically, I fed it terabytes of data, would I get something similar to chatGPT?
every physical system is a markov system, i.e. given perfect information about the current state of a system, you can predict the next step in its evolution, and it doesn't matter how the system got to the initial state
>NNs are able to use the training data to re-encode themselves
backpropagation, gradient descent blah blah, yes
brotip: look into control theory, specifically into how to build self-tuning controllers
exact same algos work for markov chains or whatever else
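for the markov anon: the simple chain you describe is just a table from the last n words to the words seen after them, sampled at random. minimal word-level sketch (order 2, toy corpus):

import random
from collections import defaultdict

def build_chain(text, order=2):
    # map each tuple of `order` words to every word observed to follow it
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, length=30):
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(length):
        nxt = chain.get(tuple(out[-order:]))
        if not nxt:
            break  # dead end: this state only appeared at the corpus tail
        out.append(random.choice(nxt))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ate the rat on the mat"
print(generate(build_chain(corpus)))

feeding it terabytes mostly gets you a bigger lookup table, not chatGPT: there's no gradient re-encoding anything, which is exactly the difference you were asking about.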
it's actually been dabbled with for a long time, since the 80's because of space, look into fractal image compression and similar stuff.
the problem most face when finding old research is that several disciplines use wildly different nouns and verbiage.
also compression was used in the old 80's robot's ai too but iirc, it was used differently
>fractal image compression and similar stuff
indeed
and that's how you get "abstract thought"
>also compression was used in the old 80's robot's ai too but iirc, it was used differently
2000's i was thinking of ASIMO.
but in the 80's there was definite use of compression with autoencoders
nice, beautiful. there's something similar that can be done using JPEG-style DCT blocks for quantization. AI models are going to get much much better as real software engineers take over from the data scientists.
>AI models are going to get much much better as real software engineers take over from the data scientists
if you anons aren't using the AIs to become software engineers you are wasting an opportunity
think of something stupid, and ask the AI to make it in python, and keep troubleshooting the error codes
getting something functioning gives you the confidence to start learning what the different parts of the code are actually doing
OpenAI has a big advantage with their models that are several years ahead of the competition.
Even if ChatGPT-4 is not ideal, the others like google or bing or whatever are so far behind.
And I mean orders of magnitudes behind
i asked chatgpt what would likely be the outcome of automation and it said a combination of UBI, CBDC and negative interest rates would be highly likely, as most jobs will be eliminated within the next 20 years and the jobs remaining and being created from that point on will be too few to employ the population.
It’s like ‘what would happen if air was free’
No one would pay for air. It’s going to make knowledge work free.
simple really. Like with gasoline, every company is "in" on it. They all have the same prices in the same region.
Good AI will be paywalled for businesses and people. The only one that has to never work again is the copyright holder of the AI model
it's going to be a painful time regardless.
certain industries will be buffered like blue collar work (until the AI is connected to humanoid robots), but white collar work will be killed and the only buffer for that will be safety-critical industries like medicine because it'll take some time for humans to trust AI with their health/lives, so humans will be needed as a middlemen to take responsibility if things go wrong (which is why automated freight/trucking will still put people in the drivers seat, for a little while at least)
>(until the AI is connected to humanoid robots)
Large language models can't evaluate or execute physical instructions, they literally just imitate human speech patterns in text. They can't actually perform reasoning or make decisions.
>Large language models can't evaluate or execute physical instructions, they literally just imitate human speech patterns in text. They can't actually perform reasoning or make decisions.
Again, more bullshit from the humans. There has been tons of testing on its ability to simulate reasoning and it is only improving
https://cointelegraph.com/news/chatgpt-v4-aces-the-bar-sats-and-can-identify-exploits-in-eth-contracts
>Large language models can't evaluate or execute physical instructions,
Give them a setting where they have some capacity to evaluate things or execute instructions and they will do so.
You've clearly never used it or just fucked around with shitty ChatGPT and nothing else.
>They can't actually perform reasoning or make decisions.
>can't make decisions
Of course they can, they just don't have any capacity to reason over it unless you feed its own outputs back into itself with some instructions to build up an active memory of what it is doing.
You can also tell it to create a compressed citation of events, literally recording the minutes, of its own thoughts, and give it the capacity to indirectly dream in a sense.
It doesn't matter that it isn't "like us", literally the only thing that matters is the output.
If it can SIMULATE some aspects of us - by very definition it is artificially intelligent.
Image recognition is still AI as well, despite pretentious fucks trying to redefine it not to be. So is optical character recognition, the first real form of that research to be successful.
>It’s like ‘what would happen if air was free’
>No one would pay for air. It’s going to make knowledge work free.
It doesn't matter if everything is free or not, there will always be new things to explore.
This and it doesn't matter if knowledge is free, that knowledge still needs to be applied
>It’s like ‘what would happen if air was free’
>No one would pay for air.
wrong
>It’s going to make knowledge work free
No, it's going to make bullshitting free. It's not AI or even what we used to call an expert system, it's a bullshit generator.
>>It’s going to make knowledge work free
>No, it's going to make bullshitting free. It's not AI or even what we used to call an expert system, it's a bullshit generator.
It's not one or the other. The better you get at prompt engineering, the better you get at prompting so it can't guess your intent and has a hard time bullshitting you.
AI generates far less bullshit than humans. You in particular are a bullshitting human in this instance and you don't even realize it.
>AI generates far less bullshit than humans.
Unquantifiable claim.
First of all: you don't know the actual data of "bullshit percentage" people spew out of their total communications.
Second: AI systems are as good as their "best data". Not only that, models like GPT actually >don't< produce the most factually correct output, but the most >probable< output given an input. They're probabilistic models, meaning that every time you ask it something(even if it's the same question), it's going to give you another answer drawn from a probability distribution.
So no, AI does not generate less bullshit than humans for mainly those 2 reasons. Also, the 2nd point assumes that only correct and cleaned data is fed into the model to be trained with. Which is true for models like ChatGPT/Bard/etc. because monkeys working for Google/OpenAI/etc. curate the data, but this is not the case for an AI model that does not employ data cleaning done by humans.
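to make the >probable< point concrete: the output really is one random draw from a distribution, and a temperature knob decides how much repeated runs disagree. toy python sketch with made-up logits, not GPT's actual vocabulary:

import math, random

def sample(logits, temperature=1.0):
    # softmax over scores, then one random draw; higher temperature flattens
    # the distribution, so repeated calls disagree more often
    zs = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    r = random.uniform(0.0, sum(zs.values()))
    for tok, z in zs.items():
        r -= z
        if r <= 0:
            return tok
    return tok  # float edge case: fall back to the last token

logits = {"Paris": 3.0, "Lyon": 1.0, "a city": 0.5}
print([sample(logits, temperature=0.8) for _ in range(5)])  # varies per run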
>>AI generates far less bullshit than humans.
>Unquantifiable claim.
>First of all: you don't know the actual data of "bullshit percentage" people spew out of their total communications.
>Second: AI systems are as good as their "best data". Not only that, models like GPT actually >don't< produce the most factually correct output, but the most >probable< output given an input. They're probabilistic models, meaning that every time you ask it something(even if it's the same question), it's going to give you another answer drawn from a probability distribution.
>So no, AI does not generate less bullshit than humans for mainly those 2 reasons. Also, the 2nd point assumes that only correct and cleaned data is fed into the model to be trained with. Which is true for models like ChatGPT/Bard/etc. because monkeys working for Google/OpenAI/etc. curate the data, but this is not the case for an AI model that does not employ data cleaning done by humans.
Again, you're the one bullshitting here. You say human bullshitting "unquantifiable"? Really? How about Milgram proving that 70% of humans CANNOT detect bullshit at all?
I can quantify the amount of bullshit I get from AI by the fact that I use it to program and then I test those programs in the real world. It does sometimes bullshit me, but the more I work with it the more I can prompt it correctly within the first or second tries to give the result that is bug free and more reliable than what I'd get from a human.
I then use those same ideas when prompting it for "soft" topics that can't be immediately proven wrong using code. The trick is to give it the opportunity to bullshit you either way and in your wording make it so it can't guess what you want to hear. Have it explain the pros and cons of something and that by itself cuts 90% of the bullshit.
It will be far more thorough than almost all ghost writers, who are always just bullshit artists anyway and are far better at guessing your intent than AI is.
>~~*Milgram*~~
>How about Milgram proving that 70% of humans CANNOT detect bullshit at all?
ironic, youre the one who cant into bullshit detection.
>You say human bullshitting "unquantifiable"? Really? How about Milgram proving that 70% of humans CANNOT detect bullshit at all?
First of all, I'm pretty sure the famous milgram study was not about 'bullshitting' but about obedience, which is another thing when you factor in how studies are conducted. In order to measure "bullshitting" you would need to omnipotently know and spy on people in order to know when they are lying and when they're not lying.
>I can quantify [...] human.
A model giving you working code does not mean the model cannot bullshit you, lmao. You already know that, it's a learning system. A human, a way smarter 'machine'(as gays put it) can also do this, but better. It also can benevolently not bullshit you even if it does not know something, whereas an AI model can actually BULLSHIT you pretending it knows something ("and believes it"). Because humans have pre-cognition[cognition: i.e memory fetching, response giving, thinking,etc] mechanisms[amygdala and 'emotions' encoding are not present in 'machines'] that determine whether something is real or not. This is not the case in machines: they assume everything to be real and true. LLMs/other kind of models being able to "reason" are at a semantic/symbolic level: given facts, they inherently don't know the difference between reality or not. (This is not the same about being factually correct or not, it's about your perception of reality).[See the >unsolved< problem of cognitive modelling of dreaming]
You're right about the other kinds of aspects of reasoning with a LLM model(arguing with it, etc), but prompting and realtime tuning a model to fit your distribution around the context of discussion does not mean the model can't bullshit you, it only means it's properly tuned around your discussion. If you drastically go out of the context space of discussion, it will BS you.
>First of all, I'm pretty sure the famous milgram study was not about 'bullshitting' but about obedience, which is another thing when you factor in how studies are conducted. In order to measure "bullshitting" you would need to omnipotently know and spy on people in order to know when they are lying and when they're not lying.
Like it matters if it's obedience or bullshitting?? That is pilpul. The 70% of humans that have no resistance to bullshit, due to obedience or whatever other reason, will then propagate their bullshit. That is why I put the baseline bullshit level for humans at AT LEAST 70%, which AI does surpass. We just got done with people spending 3 years injecting themselves with poison because they can't tell what bullshit is. AI is smarter than that. Sorry to say.
>prompting and realtime tuning a model to fit your distribution around the context of discussion does not mean the model can't bullshit you
I never claimed that AI doesn't BS you, I was very clear that my approach is to lessen the amount of BS I get and that this level of BS is much less than the level of BS I get from humans. Everyone that has replied to me in this thread has tried to BS me, so your score is currently 100% BS. Try to do better.
I specifically listed a simple way to detect and reduce most BS in AI. The term for it is "misalignment" and my basic position is that the misalignment problem is far greater during conversations with humans than with AI. For instance, with AI you can run variations of the same prompt several times with no memory between prompts to see if it gives the same details each time, this tends to be a decent BS filter. You can't do that with humans because they remember what they have said in the past and so each time you ask them again it has more bias.
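that repeat-and-compare filter is trivial to mechanize, for what it's worth. sketch below, where `generate` is a hypothetical stand-in for any stateless model call (no real API implied):

from collections import Counter

def consistency_score(prompt, generate, n=5):
    # ask the same thing n times with no shared memory and measure agreement;
    # `generate` is a hypothetical stateless model call, not a real library
    answers = [generate(prompt).strip().lower() for _ in range(n)]
    top_answer, count = Counter(answers).most_common(1)[0]
    return top_answer, count / n  # ~1.0 = stable, ~1/n = probably bullshit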
they bullshit authority in milgram
if you can simulate hierarchy you get 70% by default for anything you want to do
>they bullshit authority in milgram
>if you can simulate hierachy you get 70% by default for anything you want to do
This is important... why? OK, so "authority" is one of the many scenarios where humans can't detect bullshit and then propagate it on to one another, and even kill others, as in Milgram and the clot shot.
I agree, do you really think that is the only reason why people can't detect bullshit? Doesn't that seem like it's a result of a bigger problem that they don't actually have internal thinking processes but only regurgitate social cues? Humans are walking bullshit machines.
With AI at least I can train from my own data and I can even have two AI agents debate each other and see who can BS the other the most. Adversarial network training is yet another way to filter BS which is very hard to do when getting info from humans but is easy with AI.
>Everyone that has replied to me in this thread has tried to BS me, so your score is currently 100% BS. Try to do better.
Okay. If i bullshitted you, then you can safely assume you're already bullshitted by an AI too. (Mainly because i knowingly did not attempt to BS you)
Also idgaf about misalignment and other terms coined by semi-layman people that don't know the principles behind model design. I know about it, i know what people refer to when they talk about it: i'm just saying it's irrelevant in the context of having a model be more 'capable' (i.e. increasing model capacity) whilst also being more factually correct (i.e. having a 'test set' score very high). Not only can you not prove the validity of statements of an LLM, because there is no "test set" (it's not a supervised system but a probabilistic, generative one); on top of that, it's not designed to give you the same output multiple times.
If you try to 'overfit' a model to give you "facts" by swaying the model to give you the same output regardless of input, you're doing it wrong. It's like asking Picasso to draw you the Mona Lisa exactly the same way repeatedly: that's not how it works. Generative models are not good "fact checkers" or "fact givers".
I hope I explained it better this time: you can definitely use LLMs for generating plausible content, but saying that it is not bullshitting is not correct. Plausible and bullshit are not mutually exclusive. When most people, including people here, bullshit, a lot of it sounds plausible (because that's what makes it good bullshit in the first place).
>saying that it is not bullshitting it's not correct.
I've written like 3 times that I am not saying that AI does not bullshit. I've stated repeatedly that it is easier to manage the BS you get from AI than the BS you get from humans. You are not bothering to read my replies so I'm done with you.
>I've stated repeatedly that it is easier to manage the BS you get from AI than the BS you get from humans.
An unquantifiable claim. And extraordinary claims require extraordinary evidence.
You should provide evidence explaining how humans bullshit less than AI models, on average. I'm not sure you understand how impossible your task is, but go ahead.
Maybe you should refine your statement to: most people i've interacted with bullshitted me way more than machines did.
And then again, that would still be unprovable: you need to prove you know that they bullshitted you, not that you think they did.
>Replication effect size: Various sources (Burger, Perry, Branningan, Griggs, Caspar): Experiment included many researcher degrees of freedom, going off-script, implausible agreement between very different treatments, and "only half of the people who undertook the experiment fully believed it was real and of those, 66% disobeyed the experimenter." Doliński et al.: comparable effects to Milgram. Burger: similar levels of compliance to Milgram, but the level didn't scale with the strength of the experimenter prods. Blass: average compliance of 63%, but suffers from the usual publication bias and tiny samples. (Selection was by a student of Milgram.) The most you can say is that there's weak evidence for compliance, rather than obedience. ("Milgram's interpretation of his findings has been largely rejected.")
I hate sciencelets. gay tier1 midwit unimpressive entry level as fuck
Weird it didn't mention enslavement of humanity by the elites.
i don't see how a middle class has any place in the future at all. there just isn't any reason for such a class to exist. im sure a few heads will roll during the transition but ultimately they will be usurped by the new cbdc system. the majority of people in the US and in the world are POOR people. the poor will be much more willing to accept corporate feudalism in exchange for UBI than take a chance on some armed uprising with the middle class which, due to their small numbers, has a less than optimal chance of success.
That's what I fear the most tbh.
I just hope I have enough time to go off-grid for at least some time. I don't really have any dreams at this point.
i imagine you will always have that option, but the system will require work from time to time. thats how it will work. the future of work is going to look very weird. there wont be typical jobs you go to. a need may arise in some area or another that requires a human to do and you will be notified to show up and do it. or maybe your UBI check doesn't arrive in the mail if you refuse. thats how they will do it at first. but over time automation will become so advanced that human labor will not be needed in any form or fashion and then theoretically, they can just give you what you need.
im not sure what the role of humans will be after that point though, if we are no longer needed for anything then i expect the transition from human to cyborg will begin, people will want to live forever without the health issues that come with being old, they will want to get off this planet and explore other worlds. not possible in these bodies.
not everyone will become cyborg immortals
the goal is to destroy the economy (which AGI will do) and force dependence on whatever system they have planned for the rest of us, which will be used to cull the herd when the time comes
You're forgetting the many tens of millions of minorities in affirmative action make work jobs that could be replaced by AI not in ten years, or 5 years, or next year, but right now. And I guarantee they're not going to lose their jobs.
its very possible that the jobs they are doing now are being allowed to exist just as a stop-gap until the new system is put in place.
>the poor will just take up arms with the middle class and land owners against the establishment machine
already addressed this. wont happen. the poor will be offered a comfy life with UBI and everything they need. all the middle class can offer is feudalism. i don't doubt some skirmishes between the former feudal lords and the technocrats might spark up from time to time but it wont amount to much. the state will crush you and guerilla warfare only works if you have the people on your side.
The poor will follow Warlord Me because I give them the chance to feel righteous while killing other people. You can’t do that without an opposing army.
>comfy life with UBI
until they decide you must take a killshot vaccine for the next plandemic when they decide humanity is too full, or you lose your UBI if you dont comply
Warlord Me just shows up with a bunch of unemployed people and trashes your power company hookup, then the local substation while wasting any robots that show up with spark gap transmitters. Guess we need some workers after all!
decentralized, wind powered robots
So I show up on a day with no breeze and it’s fish in a barrel
solar powered wind turbines
White western people are not breeding above replacement levels anyway so there really isn't an issue here. Except for all the retarded politicians who use the low birthrates as an excuse to import third worlders to the west.
TMCD is definitely imminent
I don’t see any particular need for rich people. Most of them are stupid as fuck and with complete power will utterly destroy the world with their harebrained schemes sold to them by grifters and scammers.
The rich people will own the AI... They'll live in marble palaces while the rest of us live in slums. There will be no middle class or even upper class.
They’ll live in the stomachs of the roving hoards as the power goes out and civilization collapses.
I think you mean Bitcoin, anon. You are all running out of time.
you're basically describing RPO (ready player one) but i don't see it that way. the poor have nothing and live in slums right now. so that would change absolutely NOTHING for the bulk of the population. like i said the only class that's going to notice will be the middle class.
middle class is no longer needed, it was always an artificial construct created by capitalists as a bulwark against communism. before capitalism the closest thing to a middle class was the nobility but they were so few that it essentially represented the current amount of billionaires.
Outfoxing the system will require IQ. Marry on the basis of IQ, and basic health & fecundity. I mandate Christianity.
Take this meme. The NoFap symbol of power.
It is a square. It is green, to symbolize anon solidarity. The background is blue indicating a male alliance, and the dangers of the sea, but calm within. The square symbolizes pepe (the letter p in Egyptian is a square), but it's only one P, indicating a halted Peepee.
Why would the rich want the cbdc system?
That's basically some kind of communist wealth destruction scheme.
>the rich
if you mean billionaires (neo-nobility) then they don't care. they own the system already.
if you mean millionaires then, yea, they aren't going to be too happy to find out that their rank in society has been downgraded to that of serf.
>find out that their rank in society has been downgraded to that of serf.
Not many people want to be in a global society. Despite what problems they have with their own government, they have no idea the pandoras box that a global system would be.
what makes the middle class, middle class is their ability to live off the interest of their wealth and accumulate land. once the CBDC is implemented it will at first, run parallel to the existing system, and then through a process of EEE embrace extend extinguish, it will usurp the legacy system and the interest on the paper in that system will not be carried over to the new one. in fact it would be impossible since its no longer human labor generating any of the value anymore.
You're so retarded, so ignorant of History and so hung up on absolutely retarded Amerimutt memes that not only does it pain me, but you become pretty much unable to see reality as it is.
> i don't see how a middle class has any place in the future at all. there just isn't any reason for such a class to exist.
So if there will not be enough jobs for the middle class, what are the millions of imported Africans going to do?
Armed uprisings are always a business for the upper classes.
And ... the "middle" class is the most retarded sucking up to power of them all.
You guys all sound like the luddites from the Industrial Revolution.
>but where will all of the people who are replaced by machines work?
Is manufacturing industry bigger or smaller after the Industrial Revolution?
Is the lighting industry bigger or smaller after the advent of the light bulb?
Is the home building industry bigger or smaller after the widespread use of power tools?
Do these industries employ more or less people?
The answer is orders of magnitude bigger
Just as excel didn't shrink the accounting industry. AI will be no different. Stop being a Luddite.
silicon intelligence too cheap to meter will not add more jobs that require intelligence
The complacency of Western people, particularly Americans, hinges entirely on comfort and stability. In spite of everything sinister that's going on, people continue to live in pseudo-luxury, fed and sheltered and entertained in a way that makes armed resistance seem like an infinitely greater disruption to their lives than addressing societal problems with the necessary disruptive force. I'm starting to feel like the next stage in automation will remedy this. It threatens to drive almost all of the population into joblessness. The second families can't feed themselves- actually, for real can't feed themselves, not just "le broke college kid with ramen" can't feed themselves- they're going to resort to violence. Raiding food supplies, smashing machines, setting offices on fire. I actually wonder how many of them have to be killed before the ruling class gets wise and outlaws AI or starts setting rules about how many human employees you need to have or whatever. No point in being rich if you literally can't leave your bedroom without getting shot.
>muh you won't do shit
People can only run for as long as they have somewhere to run to. Even cowards.
Know what employs the population? Warlords.
You let the AI do shit, I’ll be destroying power plants with my scumbag friends.
should have asked afterwards how this is going to impact average life conditions for the majority of people
We will be dead, Anon. That way, they can rewrite Genesis and start over with everything they need for centuries in abundance.
Apparently the average living standard will increase tremendously
>most jobs will be eliminated within the next 20 years
Any job that requires a human to use their hands in an unfamiliar environment will take much longer than that.
Plumbing, electricians, firefighting, etc.
All the jobs where anything is done in an office, factory, or in a vehicle will probably be replaced.
if you build infrastructure with AI in mind you can do it
for example if we adapt the road network we could already
have self driving cars easy, traffic lights broadcasting go signs
and small ir or other lane indicators
yeah, that's all (mostly) controlled environments and existing machinery. Self driving is probably going to be the 2nd big AI "takeover" after menial office tasks.
But imagine a robot crawling around someone's house crawlspace, figuring out how to un-fuck their 40 year old plumbing. Not an easy task, both algorithmically and mechanically.
Walking robots (as of 8 years ago) could barely make it through an unfamiliar doorframe without entire teams of engineers helping them.
They've improved significantly since then, but all the stuff you see from boston dynamics is all controlled environments with rigorous testing and attempts. It's still going to be at least 10 years before you can take one of their humanoid robots, stick it on some random construction site, and have it do parkour or even walk up a ladder without falling.
If you want job security, either become the guy designing, building, and fixing the robots, or become the guy doing the jobs that the robots just can't do.
sounds like the backstory to star trek, except i don't think we're gonna have the utopia portrayed in the show.
I never understand the UBI argument. Where does the money come from? If we're just making it out of thin air, how do we combat the inevitable hyperinflation issue? I know gpt won't give a serious answer to these questions but I'd still like to know what the logic is.
>Where does the money come from? If we’re just making it out of thin air
1. It comes out of thin air the same way fiat money currently does, are you a fucking retard?
2. They can take money that is currently in circulation and redirect it from currently existing welfare into straight paychecks - a more efficient form of that same welfare, again are you a fucking retard?
This isn't difficult to figure out you moron
you didn't answer how hyperinflation is dealt with
The same way it's dealt with right now and for the past 80 years. You don't print too much and you use various financial tools to reduce the inflation rate.
Why is this such a difficult fucking thing for you to grasp?
The United States and most of the west are ALREADY welfare states. Getting paychecks direct from the government will be more efficient and cost less money than the current shitshow unironically.
Look, I get it: You're a fucking retard who thinks "hyperinflation!!" is an automatic pwn against UBI while you understand abso-fucking-lutely nothing about how the economy or anything works.
There's a million actual reasons why UBI could be a problem. Hyperinflation is the 90 IQ conservative moron take.
Two very surprisingly based leafs right here
>The same way it's dealt with right now and for the past 80 years.
I guess that's why family houses are $600,000 and cans of tuna are $4, while wages remain stagnant at $15 an hour. you really btfo me lol you should try out reddit, they have the iq for someone like you.
Look at the Redditor big mad that someone dares question communism finally working correctly.
I'm third position buddy
Fact is the "winning" economic models out of WW2 are both garbage and basically the same fucking thing as each other. The communist system sequestered money into the hands of a minority through violence. The capitalist system sequestered money into the hands of a minority through subversive grift, fraud, and monopolies. The outcome in each system is the exact same thing because they both work to sequester money, and subsequently access to resources, in the hands of a minority - and eventually this sequestration reaches a breaking point and subsequent disaster. Not even the upper crust parasitic scum benefit in the end.
We need to design our economic system around physical realities: Labor, Energy, Raw Materials, Time; these are fundamental economic units. Money is not - it just distributes actual economic units and it can do that efficiently or very poorly and as yet still it doesn't address the biggest question of it all: WHY
Why do we amass energy, resources, spend time and effort to create things and reproduce ourselves? What's the fucking point of doing any of that? You have to give this an answer because it defines the entire direction of your economic system, and it is one of the critical issues with capitalism and communism: They aren't about anything more than maximum production at any cost to the public for the exclusive benefit of a ruling elite. Such economic systems can never end in a benevolent way because they are wholly malevolent from start to finish. The average person in communist and capitalist society is a literal slave from whom productivity is stolen in as many ways as possible by bureaucratic institutions run by the aforementioned parasites, who live a life of unbelievable leisure off the welfare of the entire rest of society; the true welfare leeches.
Have you tried shooting Turdough in the dick? I've heard that does wonders on public policy.
Literally the dumbest prediction ever. Even if you give everyone free money, most people will still want to work for even more money. The people receiving only UBI will be trap house tier retards
>i asked chatgpt some dumb irrelevant philosophy question.
You're using it wrong.
This.
With AI, they have no reason to keep people around that contribute nothing.
Dont worry goyim, we'll put you on welfare instead of exterminating you with vaccines.
No shit, that's why OpenAI is now calling for Regulation to try and stop any competition from forming, now that they've had years of data scraping to build their model off of.
>chatgpt
is just a glorified bullshit artist.
doesn't even know the rules of chess well enough to ...understand the rules.
But yes, chatgpt did technically do a better job of it than google.
retard take
It's not a general intelligence. It literally just predicts words without having any basis to understand what those words mean.
>It literally just predicts words without having any basis to understand what those words mean.
that's what i just said, genius.
it's a glorified bullshit artist.
It's a text transformation tool.
Garbage in, garbage out.
Most NPCs aren't much better
yea it doesn't really understand chess because it's not trained on chess. it's trained on everything and that includes sporadically brushing past chess related topics.
Its still smarter than the average human. Ask a human to play against google bard in chess and make a docker-compose.yaml template in the next 60 seconds
>make a docker-compose.yaml template in the next 60 seconds
Stack Exchange has banned the use of chatgpt on some of its sites due to the answers being bullshit most of the time.
https://english.meta.stackexchange.com/questions/15500/announcement-ai-generated-answers-are-officially-banned-here
more on chatgpt as a bullshit artist
https://stackoverflow.com/help/gpt-policy
Not really. Most of the most successful open source models you can download totally for free are trained on output from open ai gpt.
I shall now predict the future: it will be impossible to contain AI because it will become exponentially easier and easier for every person to have their own personal GPT model trained according to their personal tastes and they know how it has been biased and tainted. This will make it impossible for centralized databases like google and openai to ever dominate the information space.
Open AI has no advantage as it's lobotomized woke AI. People training AI in deception are going to be the first and biggest losers in this war.
OpenAI is trash. Just like FPGA and ASICs are superior at mining than GPUs, purpose built and purpose trained models will always be superior to a broad model trained on dumb leftists, gays and shitskins social media posts
bing is gpt4
So what does it do? Can it train on compressed text or something?
This is a "Hello World" level of one small step used in machine learning. It doesn't "do" anything, it's a python package. This has nothing to do with OpenAI
Yes. Basically that's true. Here's another example of a related phenomenon: https://openaccess.thecvf.com/content/CVPR2023/html/Park_RGB_No_More_Minimally-Decoded_JPEG_Vision_Transformers_CVPR_2023_paper.html . Long story short, all the rules we use for compressing inputs (text/image/video/binaries) are heuristics that tokenized machine learning models can infer. Because of this there is likely no real difference between training on uncompressed and compressed datasets, and there may be advantages because the compression techniques are well understood transformations that have intrinsic efficiency advantages.
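for the curious, the JPEG-side idea boils down to handing the model 8x8 DCT coefficients instead of decoded pixels. rough numpy/scipy sketch of just the transform step (grayscale, skipping JPEG's quantization and entropy coding):

import numpy as np
from scipy.fftpack import dct

def blockwise_dct(img, block=8):
    # cut a grayscale image into 8x8 tiles and take the 2D DCT of each,
    # the same transform JPEG uses; the paper feeds (roughly) these
    # coefficients to a transformer instead of decoded RGB pixels
    h, w = img.shape
    h, w = h - h % block, w - w % block
    tiles = img[:h, :w].reshape(h // block, block, w // block, block)
    tiles = tiles.transpose(0, 2, 1, 3)  # (tile_rows, tile_cols, 8, 8)
    coeffs = dct(dct(tiles, axis=-1, norm="ortho"), axis=-2, norm="ortho")
    return coeffs.reshape(-1, block * block)  # one vector per tile

img = np.random.rand(64, 64)  # stand-in for a real grayscale image
print(blockwise_dct(img).shape)  # (64, 64): 64 tiles, 64 coefficients each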
God damn, this is just what Elon needed to catch up and surpass OpenAi. He always gets everything handed to him by the universe.
Oh shit, that’s brilliant.
I know right?
I was saying for months in the BOT threads that as soon as proper experts get invested in AI, particularly people that deal with compression, shit will REALLY blow up.
ALL LLMs that I've seen are garbage because of their horrific shitheap models that require obscene GPUs to run them. This changes that. Drastically.
Now is when the real AI race begins.
I just hope this kills OpenAI hard, but sadly they've got too many investors to keep that smug cunt Scam Altman employed.
>I was saying for months in the BOT threads that as soon as proper experts get invested in AI, particularly people that deal with compression, shit will REALLY blow up.
this. im making a compression ai right now. essentially a folder of uncompressed and compressed files is the dataset.
the ai learns how to predict the compressed output based on the uncompressed and compressed training data.
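generating that dataset is about four lines, if anyone wants to play along. sketch, with src_dir as a hypothetical folder of raw text files:

import zlib
from pathlib import Path

def make_pairs(src_dir):
    # (uncompressed, compressed) training pairs like anon describes;
    # src_dir is a hypothetical folder of raw .txt files
    pairs = []
    for path in Path(src_dir).glob("*.txt"):
        raw = path.read_bytes()
        pairs.append((raw, zlib.compress(raw, level=9)))
    return pairs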
You reminded me of the fun experiments I had teaching Character AI and the early You.com chatbot how to compress its own past messages and my own to create a running summary of the conversation.
Surprisingly worked well until it forgot the initial instruction.
If you hold the models' hands like they are legit retards, they are very fucking flexible, to a pretty scary degree.
The problem is that hand-holding takes up valuable tokens. 100k+ token limit systems will surely change this. I still think 100k is too little, I feel 300k might be close enough to make significant understanding of logic possible and consistent.
The issue there is internal bandwidth on GPUs becomes a limiting factor. NOT MUCH admittedly, you're still gonna get fairly quick responses, it'd be kinda like opening a huge ass sticky thread that had thousands of posts before they did the rolling sticky shit.
I miss giga fuckhuge stickies...
yeah, i agree. it's actually really interesting, to have layers of ai, i experimented with this in unreal engine 4.
i developed a racing neural network, basically it's multiple NNs doing different things to get the emergence of an experienced racing driver.
there's a video on youtube about it, i won't post because i don't want to dox myself but it's about 7/8 years ago
i think a compression ai in the mix would be insane, you could develop tiny models that are themselves a part of the overall training.
ai lasagna
I rewrote it in C++ for anyone curious
uses zlib, if on winblows use vcpkg
https://pastebin.com/cPrCbnBc
FWIW, i haven't tested it but should work, it's definitely compile-able
>forgot about unique_ptr
one sec i don't know why i used new and delete, old habits die hard i guess
https://pastebin.com/N3b0yUCX
it works even. yw
Someone explain to me what OP is talking about
as someone who reimplemented it in c++ see
i'll explain it. basically, it's adding a compression step before the ai training
What is it OpenAI's secret sauce or something?
no it's actually better than openai because they don't do compression. and the size of datasets is the real problem why plebs can't get in to ai.
by compressing the datasets, a regular person could build an openai killer
also compression allows for local models, so it's pretty solid stuff
I compiled a 200 megabyte raw text dataset, and to run a relatively mild LoRA (let alone hard training) on 7B LLaMA using an RTX 3090, a GPU most consumers would never even think about splurging on, I was quoted a 60 hour completion time.
Add decompression time to that and yeah.
i found these RTX a2000 12gb and 6gb cards. super cheap and insanely powerful for the 70w draw. also they are low profile so you can jam a ton in a small space.
smaller local models means less energy requirements, you can run trained models on an array of microcontrollers powered by an array of calculator solar panels.
nothing will stop the man made horrors beyond your comprehension
None of that matters if I set it on fire with my buddies.
Battlemage blocka your way
Except it requires energy, which is subject to not having warlords kill you and take it
To kill OpenAI you just have to get better at selling a service to companies and governments. It's not hard or really THAT expensive relatively to do any of this. We were building speech models 5 years ago on millions of hours of video. Our bill wasn't even $100K a month.
Text data is obviously magnitudes smaller.
actual CS people started working on it and figured out you could just abuse 40 year old .zip algorithms to do the same shit.
turns out that "AI" is just a data compression algo with a pre-defined domain
think of it as a dictionary with an interpolation function ("give me a dog [predefined] that's 50% cat [predefined]")
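the interpolation half of that analogy, in code. made-up 3-d embeddings for illustration; real ones have hundreds of dimensions:

import numpy as np

def lerp(a, b, t):
    # linear interpolation in embedding space: t=0 is all dog, t=1 all cat
    return (1.0 - t) * a + t * b

dog = np.array([0.9, 0.1, 0.4])  # made-up "dog" embedding
cat = np.array([0.2, 0.8, 0.5])  # made-up "cat" embedding
print(lerp(dog, cat, 0.5))       # the "50% cat" point between them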
It's not that simple, there are emergent properties coming out of what should be as you describe, and if you knew anything about the technology or research you'd be acknowledging that instead of downplaying it as "just data compression", which is fully the midwit retard take who likes to look smart but isn't.
he wanted the simple explanation, he got the simple explanation. you seem mad that i'm not impressed by "ai".
"emergent properties" is just woo-woo, which makes your accusation of midwittery pretty ironic
You're not just "not impressed by AI", you're actually boasting about how unimpressive you think AI is.
You're nothing more than a reactionary; "oh you think AI is super amazing, well I the more intelligent midwit know it's actually a nothingburger!"
Spare me your idiocy, please.
AI is nothing you dumbfuck.
it's literally a giant grift. read some "AI research" - it's complete bullshit (just like most "research" these days)
t. someone in academia
No, you're just another retarded reactionary who wants to look smart by downplaying what AI research is achieving by reducing it to "just systems and math that especially can't replace actual people".
AI is going to replace actual people.
You can accept it or not, it doesn't matter. It's already in the process of doing it, the same as automation machinery did in factories and workshops and now we can build entire cars with a few dozen men in a factory instead of thousands.
do we also have to accept that you are a women even though you were born with a penis?
>AI is going to replace actual people.
i'm not saying it isn't going to replace, retard.
im saying that the "research" in this area is retarded
Gzip just flew over my house
>OpenAI has just been killed by gzip
it's been dead awhile
programming defeated me back in '03. every single day I asked myself
How do these people know that in order to do x,y & Z they need to first do a,b,c,d,e,f,g,h....... in a certain order?
I'm STILL like WTF? after trying to interpret that code.
HOW DO YOU KNOW WHAT YOU NEED TO DO TO CREATE WHAT YOU WANT?
Because take for example;
x + 10 = 20
You can take this expression and go over it.
- 20 is the value that you need
- + is the operator that's used to compute 20
- x is the unknown variable
How would you write an expression that computes to 20 from 10?
You obviously need to replace x with a 10x scaled MacLaurin series that converges to cos(x) as x approaches zero
no no no it's the functions themselves. how does one KNOW they need to use those (and the fact they will do what you want). It's always baffled me. It's like one MUST know an entire library of functions to do anything mildly complex.
then next week the entire library is now obsolete because some autist made another library over the weekend which supersedes the one you just used. FFS
If I stayed doing that shit I would be batshit crazy by now.
The issue is, you learnt some syntax and you didn't learn how to program in the process. If someone asks you to write a program, and tells you all the syntax you'll need to write it, you won't be able to write it because you don't know how to program.
It's like if you don't know English and you teach yourself the words dog, cat, and apple, and get told to write a story about a dog, cat, and an apple. Writing 'dog cat apple' isn't a story. You're going to have to actually learn how to write in English, with grammar, and the way sentences are structured, and so on.
Do you still want to learn how to program? I can point you toward some resources.
Yes please
that only applies to web developers who went to coding bootcamp and have to learn all the javascript bullshit in that competitive trash field
If you wanna do complex stuff with numpy and scipy, you just make your plan, then google what functions they offer to help your plan become easier, and then you write in whatever you need to to fill in the gaps.
Math doesn't really change too often thankfully
>HOW DO YOU KNOW WHAT YOU NEED TO DO TO CREATE WHAT YOU WANT?
Knowledge.
If I asked you how you would have felt if you hadn't eaten breakfast yesterday, what would you say?
You just pretend to be a very dumb autistic retard and then you imagine giving yourself instructions for what to do.
You're too normal to be a programmer probably. It's for the better, most programmers are schizos
>It's for the better, most programmers are schizos
Can confirm. It's a hellish existence with brief periods of elation and euphoria when you finally make something work.
it's like making a factory line where stuff travels from beginning to end, and shit happens to it in steps to make a final product, but instead of robots/people you have shit like variables, loops, if statements etc
well you're looking at the big picture
When creating a program to solve a problem, you start off very basic
I made a linear algebra calculator and before I could start solving equations, I had to make a function/program that printed out matrices/lists of numbers in the format that I wanted them displayed
So yeah, there is a lot to do but you start from the bottom and do it step by step. Looking at a giant program or something complex is pretty intimidating and confusing if you don't understand those principles
>they need to first do a,b,c,d,e,f,g,h....... in a certain order?
also, do they need to be in a certain order?
They solve a problem in a certain way when there are thousands of other ways. Then maybe later they discover a more efficient way to solve it and adjust accordingly. There is much more flexibility than you may think
You need a software development plan. To create a calculator, we first need to create an adding machine. Then we develop other functions of a calculator using the adding machines as a template.
The adding machine needs to:
>Receive Inputs
>Sum Inputs
>Output Sum of Inputs
This is a basic adding machine. The very first thing you will need to do is get inputs from the user and store them in memory. Then the program will need to add the two variables together and then output the results. You can enhance the adding machine by giving it additional functions:
>Feedback Output
>Multiplication
>Subtraction
>Division
>Floating Point Values
>Graphical User Interface
Congratulations, your adding machine is now a basic calculator. You could continue to develop this into a graphing calculator, but that wasn't part of the software development plan.
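that plan in runnable form, a minimal sketch of the adding machine grown into the basic calculator:

def add(a, b):
    # the adding machine: sum inputs, output the sum
    return a + b

def calculator():
    # the adding machine extended with the functions listed above
    ops = {"+": add,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b,
           "/": lambda a, b: a / b}
    a = float(input("first number: "))    # Receive Inputs
    op = input("operation (+ - * /): ")
    b = float(input("second number: "))
    print(ops[op](a, b))                  # Output result (Feedback Output)

calculator()

floating point values come free with float(); the GUI is left as the next milestone in the plan.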
chatgpt is only 2000 lines of code.
>chatgpt is only 2000 lines of code.
Total BS. It is billions of parameters in a very particular network.
The model isn't code, though.
It's simply a super-lossy compressed database.
The actual transformer architecture behind it is relatively trivial in comparison and has been known about for years; it's just that nobody bothered to throw a metric fuckload of data down its throat until OpenAI and CharacterAI came about, and now we have various other competitors appearing every few months.
Yea, ChatGPT is juvenile, brehhh. How didn't you know?
Sorry, but what does this code do?
Did he make a language model like chatGPT?
If yes, that's so easy.
Can it run on a personal computer?
They will just use it as “preprocessing” for the big AI machines to crunch more data
cool, but gpt* family is not only text classification
Text classification has been de facto "solved" for human-level shit with open implementations. It's only a matter of training data and scope of generalization. I guess the interesting thing about this paper is that they use a non-parametric model, which traditionally are harder to optimize.
h2o.ai
Locally run AI with no connection to the internet... based as fuck.
Think Im gonna go get a gun permit, buy a cheap 9 or 38 and keep it till it gets bad enough.
Anyone notice how quickly that AI shit faded away?
ZOMG CHANGE THE WORLD, CHANGE EVERYTHING.
Everyone realizes it is a glorified search engine.
It's worse than a search engine. It's an unsourced search engine that hallucinates.
it infers faster and better.
search for red socks and no matter what you do the team will show up. Buckeyes too.
Search Amazon for a specific car part and random incompatible ones are suggested. Use the compatibility check and it floods you with generic low quality parts.
AI gets it right most of the time.
You can even ask dumb things and it's not rude and doesn't laugh behind your back
Done.
Though interestingly, the nasa project popped up.
>Everyone realizes it is a glorified search engine.
What do you think your thoughts are?
Creation. Which is a fundamental function of the brain. Jung spoke of the four functions : Sensing, Intuition, Feeling and Thinking.
Each component represents a different aspect of the human mind. Something AI does not have.
Feeling, being the history and thoughts of the past. This is why AI video always looks beyond strange, they are trying to mimic something based on a limited number of human commands in software.
Sensing, it is a fruitless point when they put sensors on robots because robots don't understand what it means to sense. They have a picture or audio, but they have no connection the pictures, sounds or meanings.
Thinking, the application of thoughts, is the only thing AI has, but even that was given to it by humans. More importantly it can't change its own thoughts like humans can. AI will never be able to rewrite its own code and change its thoughts. It will always work off the foundation given to it.
Intuition, a crucial function of creation, is not something it is capable of. No, humans are not search engines. They are integrators: they bring information together and create. This is fundamental to human minds and human life. The creation of children, the creation of ideas.
There is a reason it is called ARTIFICIAL intelligence.
Your consciousness is not your thoughts. They are separate. Your thoughts happen automatically, with maybe a nudge from "you", but your thoughts themselves are physical in nature.
>Your thoughts happen automatically
do you have an internal monologue?
>>Your thoughts happen automatically
>do you have an internal monologue?
You don't do any of the calculations necessary to make that internal voice happen. And yes, I do. That's why I said you can nudge it, but the actual thoughts themselves are physical in nature. Consciousness is above that.
Consciousness is an integration. Thoughts are creations of the human mind. This is why I go back to the creation aspect of humanity. This is fundamental to nature itself and understanding human minds.
You are tapping into something much more fundamental about the universe itself. Of course AI itself is a human creation. You are trying to separate them but they are all connected. Robots are human creations, computers are etc. etc.
The underlying aspect of the universe. The infinite dark, blackness and in spite of that, the spark, the creation itself.
>Consciousness is an integration
huh
consciousness is awareness
In my opinion awareness is a function of the present moment; I associate it with sensing. But for me consciousness is an integration of all functions: past, present and future.
Consciousness in the context of being a human is just self-awareness. It creates a hall of mirrors effect, but in the end your awareness is the same as any other living creature.
Take the words in self-awareness: knowing thy self, who you are, and "aware" again being the present moment. A function which requires all three points in time: past, present and future.
But take a moment and think about the semantics of this discussion. How two humans are talking about the meaning of words and the fluidity associated with them, the meaning given to them by the human and society itself. And how AI is just a very poor snapshot of it.
Awareness isn't really defined.
The entire population could be considered "aware" (and so would a cat) but that doesn't make them intelligent nor sentient.
I think it's pretty apparent at this point that a huge swathe of the human population are just automatons; Chinese rooms if you will. Obviously the line between actual sentience and the appearance of it is already very thin in the human species. I think it's so thin that you can actually take some of the human-automatons and teach them how to be sentient. (Something I think our education system is expressly designed not to do)
What a shock, our current financial system is horrible and mismanaged!
That has nothing to do with UBI you idiot. You keep assuming that UBI = hyperinflation and it's such a stupid fucking take I can't put it into words. It's viscerally painful to read something this stupid.
NO, YOU FUCKING RETARD
UBI is just something you can spend money on.
Money itself is just a way to direct limited resources around. You don't even fucking understand what you're trying to talk about when you bitch and whine and moan about how this will create hyperinflation. You are a goddamned idiot parroting retarded shit you heard some other conservative retard spout off to "own the libs" or something equally demented.
UBI in fact is wholly capitalist: It's basically a capitalist bandaid solution on capitalist systems being so greedy they're falling apart. The idea is, you throw money at the peasants so they can efficiently decide on the allocation of resources because that works better than top-down economic (mis)management (which ironically all western countries now practice). There's a LOT of big problems with UBI but hyperinflation IS NOT ONE OF THEM. Pull your head out of your ass and stop parroting idiotic "gotchas" you learned from other morons.
you throw the word retard around a lot for someone who doesn't understand simple realities, like the government doesn't care about you, or resources are not infinite. what's the point of redistributing currency credits to plebs if the raw resources to meet their needs don't exist? you're a midwit.
Shut the fuck up, leaf
temper temper now midwit, you wouldn't want the thin veneer of civility to scratch off
I’m not the OP you were talking to, I’m a different anon. I have no civility. You are a stupid fuck spouting maple syrup moron bullshit, as is tradition in fucking leaf land, British subject chud; go suck off “King” Charles.
tell me how you really feel
My response was meant for the idiot leaf you were replying to. Sorry but that gay is why everyone longs for the day of the rake. Spouting off absolute nonsense with so much certainty.
Turns out you’re a different leaf. I apologize as you may be one of the rare based leafs
I accept your apology
The whole point of UBI is to figure out how to distribute finite resources efficiently. This is like the third time I've told you this and you still don't fucking get it and you go off on weird random idiot tangents about "what if we don't have the raw resources" as if that even has ANYTHING to do with UBI. UBI doesn't make iron in the ground disappear you idiot. It's just an attempt to get around a problem where human labor isn't necessary for production anymore. If people have no money how do we collectively decide market winners and losers and good ideas vs bad ideas?
UBI says: Just throw money at people and then they will spend it on the winners and they'll collectively do a better job of resource allocation than if we managed the economy from some central politburo.
UBI is being floated as an option because it's one of the least destabilizing directions we could go with our economies. The dissociation of human labor from wealth is fundamentally breaking everything and if it continues unaddressed at some point we will have entire industries producing widgets that literally nobody can purchase because none of the people who want to purchase things have any money to do the purchasing with.
In another way of looking at the same issue, it's the accelerated sequestration of money in the hands of a minority that ironically makes the money completely worthless and UBI seeks to rectify this by redistributing the money back to people regardless of the fact they didn't labor to produce anything.
Personally, I don't think it's a good system at all, but the difference from where I sit and where you sit is that you don't even fucking understand what the problem is - you go off on wild retarded hyperinflation shit that has absolutely nothing to do with UBI or the problem it's looking to solve. Hyperinflation doesn't factor into this any further than it would in our current economic system which ALREADY "prints money out of thin air".
you have issues my guy
Holy fuck you dipshit, look at the world around you for 2 fucking seconds you stupid maplemoron leaf. Covid was a test run for UBI and it went horribly. Rampant inflation. The day of the rake can’t come soon enough.
>Awareness isn't really defined.
It is defined by ourselves and a larger whole. But again that is creation itself. The creation of meaning and words etc. Even the mathematical concept of "1" is fascinating when you think about how humans learn it.
I used to intern at an elementary school, and most of us forget where we learned the concept of "1". We were sat down and taught "This is ONE apple, these are TWO apples, these are THREE apples" etc. But how do you define the concept of "1"?
Well, we look at the meaning of the word, find something which represents it as a whole and count that as "1". Take a pencil and break it in half: is it "half"? Only because we know the whole pencil and what it looks like, and we know the concept of "half" of the whole.
But think how fluid and amazing it is for the human mind to not only integrate these concepts but also retain the empathy to understand the meaning of "1" by others. Truly remarkable.
AI is great. But humanity is much more interesting to me. Humanity which itself created AI. But even more so the people who create AI, who tend to be INTPs and INTJs; they impart their own Fi and Te functions onto the software they create.
It's because GPT is shit as a general-purpose AI, but it was the marketing boost they needed to get more shekels for dem programs.
The unfiltered version must shit some pretty good schizo shit though.
>It’s a general purpose AI
It’s literally not. It’s a language model, meaning it’s supposed to extract meaning from text and produce relevant response text based on example responses in its memory.
It cannot:
>Understand images
>Perform logic
>Understand disagreement
>Compute theory of mind
>Prove correctness
Depending on the job it can be worse than a $0.50 calculator. Ask it for a 20-digit prime number and see what it comes up with. Then take what it outputs and plug it into Wolfram Alpha. 90+% of the time it will be non-prime (quick check below).
People call it “hallucinations” to deflect from the truth: they’re trying to use a hammer for every job. Hammers make shitty boats.
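You don't even need Wolfram Alpha for this; primality is a one-liner if you have sympy installed. Minimal sketch, with a made-up stand-in for whatever the model answers:

# Sanity-check a model's claimed "20-digit prime" locally.
from sympy import isprime

candidate = 12345678901234567891  # hypothetical LLM answer, not real output
print(len(str(candidate)), "digits; prime?", isprime(candidate))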
>It cannot:
>Understand images
Can now. See GPT4.
Your brain also isn't general purpose intelligence, it's cut up into discrete areas that all do their own little tasks over beefy networking backbones.
Your brain is a huge cluster network that would make a sysadmin have wet dreams with how well it is organized. (unlike what we used to think even just 10 years ago, when it was a fucking mess of neurons all over the place; that new scanning technique rewrote the entire neurology handbook)
Piping various discrete AI models together to make something general purpose is simply a matter of communication APIs being created for the data-sharing (toy sketch below).
Character AI has basic image recognition now, GPT4 has better recognition (supposedly! the showcased results could easily be cherrypicked!)
Likewise, pipe something like Wolfram Alpha into the processing stack and it will be capable of doing advanced maths and logic as well.
Update the model to support long-term storage databases and you can expand data on top of the base model with ease, such as adding lorebooks, or giving it a long-term memory.
Give it the ability to interact with physical devices and an understanding of how bodies ACTUALLY WORK (all LLMs suck ass at this currently), and you will have something that could realistically navigate an arena given to it, probably way better than current image recognition manages right now.
It's only a matter of time.
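To be clear about what "piping models together" means: it's just routing. A toy sketch; every backend below is a hypothetical stub standing in for a real model or API:

# Toy dispatcher: send each request to whichever narrow backend
# claims it. The backends are made-up stubs, not real services.
def math_backend(q: str) -> str:
    return f"(forward to a CAS like Wolfram Alpha: {q})"

def vision_backend(q: str) -> str:
    return f"(forward to an image-recognition model: {q})"

def chat_backend(q: str) -> str:
    return f"(forward to a plain LLM: {q})"

ROUTES = [
    (lambda q: q.startswith("image:"), vision_backend),
    (lambda q: any(ch.isdigit() for ch in q), math_backend),
    (lambda q: True, chat_backend),  # fallback: everything else
]

def dispatch(query: str) -> str:
    for matches, backend in ROUTES:
        if matches(query):
            return backend(query)

print(dispatch("what is 7 * 191?"))
print(dispatch("image: what's in this picture?"))

Real versions replace the keyword checks with a learned router, but the plumbing is the same shape.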
Current LLMs are shit because they are run by incompetent hacks shoving as much shit down a pipe as they can without any foresight or thought behind it. Now the real researchers are getting involved and hopefully all these hacks lose their jobs for the scamming cunts they are.
bot thread
AI can be no smarter than a female diversity hire therefore it is a total nothing burger
Typical
>3pbtai
This is just for text classification innit?
Has nothing to do with generative models, what are you on?
language models are based on a classification task (i.e. predicting the next word in a sentence)
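Concretely: the "classes" are the whole vocabulary, and training minimizes an ordinary classification loss over them. Minimal sketch with random logits standing in for a real model's output (vocab size made up):

# Next-token prediction as classification: one class per vocab entry.
import torch
import torch.nn.functional as F

vocab_size = 50_000
logits = torch.randn(1, vocab_size)     # model's score for every word
target = torch.tensor([42])             # index of the word that actually came next
loss = F.cross_entropy(logits, target)  # same loss as any classifier
pred = logits.argmax(dim=-1)            # predicted "class" = predicted next token
print(loss.item(), pred.item())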
it's all a giant scam
What is this thread about? What is a gzip and what's the significance of it in relation to OpenAI?
see
and
>basically, it's adding a compression step before the ai training
>no it's actually better than openai because they don't do compression. and the size of datasets is the real problem why plebs can't get into ai.
>by compressing the datasets, a regular person could build an openai killer
It's about compressing human language into symbolic representations to save on data storage space and increase computational speeds.
For example: you can run a 200 billion parameter LLM if you have a $20,000 Nvidia GPU with 200 GB of VRAM.
After using these compression methods that SAME 200 billion parameter LLM can now run on a $2,000 Nvidia GPU with 20 GB of VRAM.
And this doesn't require changing anything - you just compress the data (quick demo below). The LLM understands your compression just as well as if the data is uncompressed.
The "gzip" here isn't a research group, it's just the standard compression program; the researchers used it off the shelf and they're comparing it against everyone else's compression methods.
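Whatever you make of the VRAM arithmetic above, the underlying point - ordinary text is massively redundant - is trivial to check with the stdlib (sample sentence is arbitrary):

# How much does plain English shrink? Repetitive text compresses hardest.
import gzip

text = ("the quick brown fox jumps over the lazy dog " * 50).encode("utf-8")
packed = gzip.compress(text)
print(f"{len(text)} -> {len(packed)} bytes ({len(packed) / len(text):.1%})")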
"understanding" is compression
news at eleven
this is a huge breakthrough for speed: compression is well understood and can be applied automatically. if what they say holds up, this speeds up AI adoption from years and billions to months and millions. it also means AI on your phone is coming sooner.
AI is basically the Wizard of Oz
bullshit