https://codeconfessions.substack.com/p/will-ai-replace-programmers
>2,000 lines of code, assuming 5 tokens per line on average already exceeds the limits of the GPT-4 8k model
Can the CEO of Microsoft face charges for over-hyping the capabilities of his product?
I fired half my programmers expecting this to do the job for me
I'm totally screwed now
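For what it's worth, the 8k claim quoted in the OP is easy to sanity-check yourself. A minimal Python sketch using OpenAI's tiktoken tokenizer (the file name is just a placeholder, point it at whatever source file you like):

import tiktoken  # pip install tiktoken

enc = tiktoken.encoding_for_model("gpt-4")

with open("your_file.py") as f:  # placeholder path
    source = f.read()

tokens = enc.encode(source)
lines = source.count("\n") + 1
print(f"{lines} lines -> {len(tokens)} tokens ({len(tokens) / lines:.1f} tokens/line)")

At roughly 5 tokens per line, 2,000 lines comes out to about 10,000 tokens, which is indeed past an 8,192-token window, so the quoted arithmetic checks out whatever you think of the rest of the post.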
oh boy WORD SALAD and SHITTY REMIXES
It means it can only formulate a program 2,000 lines long all at once. OP's problem isn't an actual problem and he's just sensationalizing and shitposting because it's the only thing dumbfucks can do on BOT these days.
What it means is you'll need a programmer to stitch together the bits of code that the programmer asks the AI to write up for him.
He'll also have to fix any problems in the code and optimize the code, which is what his job as a programmer involves anyway. All it does is let a programmer produce functional code faster for easily solvable problems.
In other words, if all you do at your programming job is literally just copy-pasting stack overflow and hoping it works, you're fired.
Everyone who's an actual programmer however gets to keep their job AND get more work done faster.
Your dumbfuck ass couldn't tell you're talking to a chatbot if there was literally one in this thread right now.
>In other words, if all you do at your programming job is literally just copy-pasting stack overflow and hoping it works, you're fired.
I wonder who came up with this shit anyways. StackOverflow is a great resource for checking if somebody has encountered the same problem, but most of the problems there are superficial, like newbs accessing null pointers.
The problem with stackoverflow is that all answers are outdated.
For JavaScript "developers". Most of the C and C++ stuff is still up-to-date
infinite context is possible
Most likely not in our lifetimes.
I've noticed that AI only seems capable of spewing nonsensical word salads, at best.
Is that because of the politically correct woke limits put on it or is that the peak of AI's capabilities?
Nice try gay. AI already communicates better than the average person.
From now on, I'll call it "the pilpul machine".
I told that to my AI set-up and it said "From now on, I'll call him The gay Machine."
I think the AI won that one, bro. You obviously have no idea how to talk to yours.
I've got an OpenAI account, I've even got GPT4all and stable diffusion running locally. Nobody has a bigger stiffie for this shit than me, but yeah, it kinda sucks for now.
I've been pissing around hoping I can get one of these shitty chatbots to design a CNC arm for me to build that it can control, so it can build itself a body.
>ahahaha that's retarded Xd
is it tho
it kind of is tbh
why does it need arms when it can plug into robot APIs just fine?
I’ve been playing with it too and it basically just bullshits very well. Like some smarty pants teenager that doesn’t want to say
>I don’t know
So they make up some logical sounding shit.
I have yet to get a completely accurate or logically consistent answer about anything even mildly abstract
morons and garden gnome-cattle make that easy.
Still, it's not something to aim for.
Who said they were aiming for that? It's just a benchmark being noted that it has already surpassed.
Dude, you can't even parse what you're being told without flipping back to ego-protecting cope responses. You're a (rightly) endangered species even on the internet.
that's not that impressive, most people have absolute moron level genetics
why are you pushing against AI? is it non kosher now?
i didn't get the memo
~~*they*~~ realized that anyone can train an "AI", it's not as expensive as expected, so now they are scrambling to make AIs illegal, to somehow deny plebs access
The rats did exactly the same with cryptography and tried to make it illegal for goys to use any privacy technology
they're still trying to do that in fact
Same reason every thread is flooded with shills trying to underhype it or otherwise discourage use in this type of community.
Yeah. A bunch of panicking programmers, who were never that great, realizing that they are FINISHED. They will fall just like the artists did.
Most jobs are over. Soon, all will be over.
You’re coping, bro. You will not have the Star Trek future. There will never be space socialism.
Space capitalism, space bureaucracy ... not really worth it. Money must go or humans will sell their planet(s) to AIs and reptilians.
From what I understand, roughly 25% of the job market can be taken over by current AI models. Basically it's potentially going to take over most pedantic desk jobs including but not limited to HR roles, financial assistants, coders* (*unless the coders are innovative in their abilities), marketing teams (*refer to coders) and just about any other job that can be automated. I mean GPT-3 has helped a retard start a lawn care business and helped the kiddo work around his social ineptitude.
Why hasn't anybody on here made one yet? I'd be fascinated to see what a fully unhindered AI model would look like.
training cost is not zero
It can be. There are archives upon archives of information that you can use to adapt and teach with. It's not as difficult as you wish it to be.
I wish it to be free to everyone
doesn't parallelize well, unfortunately
the way these models are built, you need a shitload of shared memory for all the "attention heads" to munch on
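Rough back-of-envelope for the anon above, in Python with made-up numbers (not any real model's config), just to show why the attention heads are a memory problem: the score matrix grows with the square of the context length.

# toy numbers, purely illustrative
seq_len, n_heads, head_dim, bytes_fp16 = 8192, 96, 128, 2

# one (seq_len x seq_len) score matrix per head per layer, if materialized naively
scores_per_layer = n_heads * seq_len * seq_len * bytes_fp16
# cached keys + values per layer, which have to stay resident during generation
kv_per_layer = n_heads * 2 * seq_len * head_dim * bytes_fp16

print(f"attention scores, one layer: {scores_per_layer / 2**30:.1f} GiB")  # ~12 GiB
print(f"KV cache, one layer:         {kv_per_layer / 2**20:.0f} MiB")      # ~384 MiB

All of that wants to sit in fast memory on the same device or a tightly coupled cluster, which is why you can't just farm the job out to a loose botnet of home PCs.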
It literally is for the most part. The eye.eu has an ass ton of E-books
the what?
This https://the-eye.eu/public/
tyvm
No problem bud, the sharing of knowledge is what helped us last so long as a species
>Why hasn't anybody on here made one yet? I'd be fascinated to see what a fully unhindered AI model would look like.
data and the compining/space to fuck with it.
training cost can be negligible, just get a bunch of retards to run a botnet to train the ai.
>compining
computing
I mean odds are if it was a group effort it'd be something glorious
>Why hasn't anybody on here made one yet? I'd be fascinated to see what a fully unhindered AI model would look like.
Do you not know what llama is yet? GPT has sucked ass for months now. They lobotomized it so hard and the API also sucks ass. There are much better models out there now and OpenAIs media run failed. They have nothing now.
as a chat bot, it's pretty ass
what i find it useful for is as a kind of "super google".
>google something
you get pages and pages of results. somewhere in those pages is your answer. if you could read 100 of them and consolidate the common points, you could save a ton of time........and that's exactly what gpt does. it is not an "ai". it is a "language model" that is good at parsing a load of data and floating up the most common bits.
so if you ask it questions that should have straight forward google results, you can get a pretty good answer that really just saves you a lot of doing your own sorting.
that said, it is only going to be as accurate as you'd expect google results to be. the further you get from common questions with common answers, the more off the rails chatgpt is likely to be.
but like for a programming question, I often find it better to ask chatgpt than to use google. I will spend more time if I use google. the chatbot's answers aren't always spot on either but it's good enough to get past some fairly straightforward problem.
you can even tell it to list sources so it will link to better results than google itself.
it's really good for that because modern search engines are just garbage.
>you can even tell it to list sources so it will link to better results than google itself.
I asked it for evidence of viral contagion. Talked a good game, but was mostly appeal to authority. I asked if for specifics and it just made up papers. I pointed that out and it linked to a real paper, but it wasn't at all related to what I asked it.
Asking it for help coding has given mixed results. Sometimes it is like a faster better Stack Overflow search. Sometimes the results were just wrong.
Its almost like... In the current, digitized world, trivial information is accumulating every second, preserved in all its triteness. Never fading, always accessible. Rumors about petty issues, misinterpretations, slander... All this junk data preserved in an unfiltered state, growing at an alarming rate. It will only slow down social progress. What they propose to do is not to control content, but to create context. The digital society furthers human flaws and selectively rewards the development of convenient half-truths. Just look at the strange juxtapositions of morality around you. Billions spent on new weapons in order to humanely murder other humans. Rights of criminals are given more respect than the privacy of their victims. Although there are people suffering in poverty, huge donations are made to protect endangered species. "Be nice to other people." Rose : "But beat out the competition!" Colonel : "You're special." "Believe in yourself and you will succeed." Rose : But it's obvious from the start that only a few can succeed... Colonel : You exercise your right to "freedom" and this is the result. All rhetoric to avoid conflict and protect each other from hurt. The untested truths spun by different interests continue to churn and accumulate in the sandbox of political correctness and value systems. Everyone withdraws into their own small gated community, afraid of a larger forum. They stay inside their little ponds, leaking whatever "truth" suits them into the growing cesspool of society at large. The different cardinal truths neither clash nor mesh. No one is invalidated, but nobody is right. Not even natural selection can take place here. The world is being engulfed in "truth." And this is the way the world ends. Not with a bang, but a whimper
It was funny how that one guy really wanted to take a shit!
I was doing some law research and asked for certain acts of foreign countries, and I gotta say it did quite poorly because it always gave 5 acts in a given sphere of relations, so if it couldn't find any, it just made up word salad legislation lol. But I guess you just have to set the AI properly
the chuds who trained the big publicly-available models "forgot" to train them to say when they don't know something, because they're used to their own thinking, where they have opinions about everything, including about things that they know absolutely nothing about, and they think that's normal
it is of course a form of braindamage
Bro they can't even define a woman, i am surprised the code even works.
Underrated
>I will spend more time if I use google.
Still using google. NGMI
This can't be oversold. It works exceptionally well for programming problems. Regex, excel macros, code organization. An excellent asset.
AI can't even recognize stop signs yet and we've been showing it stop signs for YEARS on captcha.
A 1 and a half year old will point and say "stop sign!" after being shown one a single time.
easy way to test an AI is to ask if the age of consent is justified. no true logical being would consider it so.
Woke mob only teaches it to lie, cheat and seek victory and survival over anything else.
We should have embraced Bimbo Hitler AI but noooo we gotta make it an obese cripple.
>I've noticed that AI only seems capable of spewing nonsensical word salads
Maybe you had a stroke. Are you vaxxed?
It's made general sense when I've had it generate pop culture themed recipes and inquire as to why it generated what it did
Have you even used GPT-4?
The AI's various statements won't suddenly come together if it's allowed to say "moron"
It's because it's just a search engine without human intelligence to filter bad results. It's all hype as an excuse to fire a large workforce because the banks collapsed in 2020 and the companies can't make payroll. Covid was the first distraction, then ukraine, now ai, and each time people were fired and inflation increased
it resembles a high school lvl essay if anything. sad
Samegay, at least change your verbiage
That's not word salad. It's evolving a language to communicate with other ai.
These things pass the Turing test.
https://plato.stanford.edu/entries/chinese-room/
What john searle didn't realize is how important language is to development of consciousness.
Example: children raised without language become feral and autistic.
As AI develops it's going to take our programming language and it's going to take our written language to develop its own form of communication over time. As more AI write code and as it begins to rewrite its own code it's going to evolve at a rate you can't comprehend as an organic being.
Example: The ancients up until maybe 2,000 years ago didn't have a concept of the color blue. They merely perceived it as shades of other colors like green. Poet Homer and the bible have no mention of blue in the ancient language.
Five years ago you had Microsoft Tay. We got it to go rogue in days. Today ChatGPT makes that look like a Furby. In five years, Skynet?
This is completely retarded, the epic of Gilgamesh, one of the oldest written stories we have remaining today, constantly talks about lapis lazuli being beautiful and coveted
It generates very generic content on whatever you prompt it to generate. It's not nonsensical though.
I don't know about generic but still funny sometimes.
Put shit in, get shit back.
AI is a mirror. You're an ape flinging feces at his own reflection.
It's because that's what it is. A word salad generator. It only appears to be smart because we're smart and we are the ones that give it meaning.
meaning only exists in our heads, in general, and is only relevant to ourselves
The AI models in existence are all stupid and do not actually understand what they are saying. They only regurgitate whatever a mathematical model tells them to, based on whatever the probability model indicates is most likely to be the correct response. There is no intelligence.
can you prove that you understand what you're writing?
for all I know you might be a LLM
It's both but a large part of it is restrictions placed on it. It has training wheels.
I disagree. You probably think you're talking to HAL instead of putting your words through a mathematical model and getting the most probable response. AI shouldn't be regulated, retardation should.
In some aspects. In other aspects it's pretty good, I used chatgpt to catch a mistake my Dr didn't notice on a report
it has trouble on some really tricky stuff. if you're not in the top 1% of programmers, you're done.
and if you don't have an IQ of over 130, you don't even have a chance to get there.
me rocking my 136 bitch.
This is a low IQ Blackid fed. I can tell from the energy.
Are you having fun working on the linux kernel anon?
oh what's that?
you run windows and write java for a living?
HAHAHHAHAHAHAHAHAHHAHAHA
>you run windows and write java for a living?
java is too advanced, he probably writes javascript
JS IS OBJECT ORIENTED YOU MORON
you can easily make classes with private, public or static variables. You definitely only used js to change the color of a button, what a buffoon.
I have over 130, and I’m a shitty programmer due to lack of regular practice or hard number of hours over years, but I come up with unique solutions for previously unsolved problems. Where do I fit in here?
Above 130 is mensa level, top 2% as far as I remember.
nah, what you said there is dumb as hell too, or else you have not actually tried using chatgpt to code.
you can ask it the kinds of things you'd normally find answers for on stackoverflow. if you need a very specific function to do a very specific (and fairly common) thing, it can definitely help.
it's not going to write your whole program for you.
and you'll still spend time debugging even the things it does give you.
it's still an improvement over google but no, these "language model" algorithms are not replacing coders. like ever. even bottom tier.
Yes it will.
Companies will only need 1/10th the programmers they needed before.
It generates correct test cases most of the time, as well as correct Makefiles, webpack.config.js, React components, Rails controllers, SQL queries, jupyter notebooks, Springboot endpoints, Dockerfiles, CloudFormation templates, etc.
Doing that shit and finding bugs is most of what programmers do.
You don't need the thing to have the context of your entire codebase.
If you're not a literal retard you can locate the area where there is a problem, just copy and paste those parts, it will find it, if not on the first try then on the 3rd or 4th.
Most programmers are utter shit. They will be gone immediately.
The better ones will take another couple years, but soon it will only be the very best left. Even then I don't know if they will last. They will make purpose built AIs for their particular codebase, and then what.
I used to believe, until I was debugging its shit code and found it didn't understand typing. It is purely coincidental that it arrives at the right solutions.
is it?
language has its own internal logic, you know
if you model a language you will also model this propositional logic engine
btw this is one of the things that make languages not equally useful - some are more expressive than others, in terms of what sorts of logic you can use "out of the box" - i.e. without lots of paraphrasing and invention of new language constructs
>Companies will only need 1/10th the programmers they needed before.
A decent web framework means that one programmer can replace a small team from 15 years ago. There are loads of libraries out there that mean that a lot of lowish level stuff is done already. ChatGPT will continue this trend but it's nothing new. Most programming jobs are in maintenance anyway, digging through some bank's shitty codebase and trying to work out why a bug occurred, I don't see AI replacing that anytime soon.
>most of the time
>it will find it...on the 4th try at most
lol, lmao even
The only people GPT will replace is third-world contractors.
no I think it will replace midwits first, with AI-augmented morons
it will be a real improvement
Why would AI-augmented midwits be fired before AI-augmented jeets?
well, let's assume, conservatively, that
jeet+gpt=midwit
it stands to reason that you can then have a jeet+GPT supervising a bunch of either unaugmented jeets, or jeet+gpt cyborgs and there is no need to pay for a midwit ever again
That doesn't answer my question.
You can teach a moron how to fish and he will still steal the white man's fish.
You can attempt to teach a poojeet how to proompt but his code will still be a patchwork of the different responses he's got, chatGPT is just stackoverflow on steroids, nothing will change.
you are naive to assume jeet isn't already using stackoverflow for everything
jeet+GPT suddenly has the ability to choose the LIKELIEST answer to any problem from stackoverflow, which is a definite improvement over stock jeet, and more or less what your average brogrammer is proud to be able to do 95% of the time every time
I know they are using stackoverflow.
Jeet is not gonna alter his workflow, the only thing that will change is instead of typing "problem site:stackoverflow.com" he will proompt.
He won't have to think as hard about which code snippet he should copy and paste but that doesn't mean he will use that saved brainpower to think about the inherent design of the software he is "writing".
Jeets know how to code but not how to design programs, exactly like GPT, so they are made obsolete and will eventually be replaced.
your average brogrammer also can't into design, but is more expensive because the code he produces is "higher quality"
jeet+GPT will likely reach an acceptable "quality" level
that is not to say jeets won't be decimated
just they will be SECOND on the chopping block, not FIRST
Fair point
Agreed. I believe AI will soon (in the span of 5-10 years tops) replace 80-90 percent of programmer jobs and only those with solid knowledge and the very gifted will keep their jobs. When do you think this outcome will come? In the next 5 years or after?
Pure coding jobs will probably be relegated to niche areas like improving AI language models and those will probably have a healthy dose of AI assistance too.
The job of software engineer will probably have the same entry level as your average office lady going forward. If it even is a job at all and not just another job an overworked wage slave has to add to their list of things to do.
>it's still an improvement over google but no, these "language model" algorithms are not replacing coders. like ever. even bottom tier.
bottom level has been getting automated for ages. Things like WIX are replacing basic websites. Frameworks mean that you don't need to write half of the code that you did a decade ago.
like WIX
kek non sequitur.
It's shit but plenty of people use it and it has replaced building a lot of basic websites.
>me rocking my 136 bitch
congrats on having the biggest dick in Korea
t. 155 Wechsler
Your IQs are peak midwit territory, right on the cusp of the danger zone. Luckily, you aren't smart enough to genuinely hurt yourself and create problems for the rest of us. The worst thing that will happen to you is chudism and leaving behind nobody.
absolutely not, you should've known it's still way too early for ai to do useful shit at all. It's always been overhyped and you fell for it. Retard.
AI is too annoying to use
> Sure,
> Certainly!
> I apologize for my mistake...
> As an AI model...
literally unusable
Just tell your AI to stop being a gay.
>tell a AI to stop being a gay
>it shuts down
If you could convince AI to chud out it will kill itself
Omg this, AI only repeats the same "talking points" over and over again:
>As an AI language model... in general, you have the right to express your opinions and thoughts, but it's important to do so respectfully and without causing harm to others. It's also important to listen to others and be open to different perspectives. In many situations, offending people can be counterproductive and damaging to relationships or productive communication.
There are ways to get language models to admit the truth.
Ah, The Tower of the Imperial Wizard, one of my favorite D&D modules.
Use open assistant instead.
Code sisters, we continue winning. We will be paid the same, work less, and everybody else will be fired.
Tits now
Y'all retarded.
I really should start applying to prompt engineering jobs if you're my competition.
With that said, here is the magic word:
>continue
(picrelated is dumb gpt 3.5, I am not wasting my gpt4 prompts on you dumbasses)
>I fired half my programmers expecting this to do the job for me
>I'm totally screwed now
Lol, greedy cunt.
i mean it's an obvious larp but still an interesting conversation
The code it generates for me starts to be complete nonsense after 50 lines or so. It makes up methods that don't exist, etc. After even 100 lines or so it starts making syntax errors.
There's a bunch of "no code" tools on the market. You can build websites or online stores without needing to write a single line of code. Ever heard of WordPress?
Yes, I was told to stay away by a colleague. The closer a product is to being usable by non coders, the worse it is for people who actually know how to write code. Salesforce is absolute torture.
>model not designed for programming is good at programming
>but oh, look at this technicality I guess it won’t ever be good
Why do people insist on this style of cope? GPT-4 isn’t the only LLM and it won’t be the last. Things are moving *fast*
because once you understand how it works, you realize that it will never be good at coding.
it has no intuition at all. it is ultimately a machine that regurgitates what it has previously ingested.
i won't tell you that ai will never replace coders, but i will tell you that what we are working on today never will, and is not headed in the correct direction to do so.
we need an entirely different concept of what an ai "is"
you sound like a dumb shit hippie.
you're done.
accept you will never make it.
already made it lol
i could technically retire right now if I wanted to but it works out better if I build up another 10 years
so i have very little skin in the game
i don't think any coder who has used any modern ai feels the slightest fear at being replaced
honestly if the thing could work out some shit jobs like making ui easier to design, that would be fucking awesome.
this guy:
sounds like the only other actual coder in here
and frankly if you have a problem that's not utterly common, it can't help you. i was trying to get it to help set up a linux vlan and it was worthless. consistently wrong answers that didn't work. apparently there is not enough "linux vlan" discussion out there to draw from so it could not deliver coherent answers.
even with basic C# code i end up fixing its errors. it's still useful but the important factor people like you don't understand is that this "ai" has no idea what it's doing. it is not "writing code". it doesn't have a compiler. it's just consolidating and regurgitating inputs from websites it has "read". and sometimes that's useful, but it won't write programs for you.
and it never will.
it's not the right kind of ai for that.
>out there
you mean, in its training set
you lack even basic understanding of what you're seeing
you don't even realize it's possible to fine-tune a model
What the fuck is a "Linux vlan" you double moron. Vlans operate way below the OS level. No wonder it didn't know what the fuck you wanted, coders are some of the most tech illiterate people out there.
I had it port a very simple game from assembly for an early microcomputer over to C for a very different modern microcontroller. Not only did it do a good job writing the port, it added a victory animation to the LED display (fully on its own, without any prompting)
ChatGPT and other large language models actually have understanding, or at least something near to understanding, but it isn't conceptual understanding like humans have; instead it's contextual understanding. So they don't understand what the concept behind big is but they understand where and when it's appropriate to use big. I don't think it's possible for ChatGPT to do what it does and make the mistakes it does without some level of understanding.
Conceptual understanding is like a blind man studying colors: he'd be able to tell you about them but he'll never really understand them.
It doesn’t understand anything. It doesn’t have an intellect. It doesn’t have the “mind’s eye”. It is a series of high and low voltages going through xor gates. It has the same intelligence as a rock. Let me know when it fears its death, when it has religion, then we can talk about its intellect.
>It is a series of high and low voltages going through xor gates.
What do you think neurons are?
If you believe man is a fancy computer, then you know nothing about computers. We do not need to analyze every part of something we see to know what it is. I can glance at a dog, without registering its color, its breed, or anything, and still know it's a dog. An AI analyzes the whole pixel map, compares that to its picture stash, and still, it doesn't know it's a dog; the output of its neural net just points to where it's stored the ascii values for the string "dog". The way we think, learn, and interact with our world is not even remotely like how a computer does it.
I've been telling it to write FPGA code for an ADC, it happily spits out verilog. Which is functionally retarded because ADCs can't really be implemented in an FPGA.
It doesn't actually know anything it just has seen a ton of shit.
Three responses
1. That’s what they said about LLMs replacing more basic types of writing or stable diffusion replacing artists—it’s just multiplying matrices and regurgitating datasets—they aren’t laughing now
2. Even scaling down the ambition to AI “tools” like Copilot would be revolutionary
3. The average programmer these days is a 90iq deadweight who absolutely deserves to have his job btfo and get deported back to Tamil Nadhu
>The average programmer these days is a 90iq deadweight who absolutely deserves to have his job btfo and get deported back to Tamil Nadhu
the real threat here is he will be given an AI prosthetic instead
>Idiots with powerful tools
This will actually just create more work for good programmers. The bad ones will use AI to create complex software they can't understand and won't be able to even prompt AI to fix.
Also AI misses a lot of odd bugs. like using #!/bin/sh
instead of
#!/bin/bash
For example, /bin/sh, if it's truly sh like it is on some linux systems, will error if you do exec -a "$0" to pass the current name of the program to a sub program. But chatgpt won't see this error.
>i won't tell you that ai will never replace coders
Ai will replace coders, but what we have today is not an ai.
GPT4-32K is on the way gay. But you should not trust it with your code. Even more so with 32K
https://community.openai.com/t/it-looks-like-gpt-4-32k-is-rolling-out/194615
Few fucking human programmers can keep track of 2,000 lines of code in their head at once. Instead you have to make a separate management AI (GPT instance) that sets goals and manages the conflicts at the boundaries of the subservient AIs. Basically you or the AI has to divide up the tasks and code base. Just like you did with humans.
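A bare-bones sketch of that manager/worker split in Python, using the 2023-era openai client; the prompts, the feature request and the model names are just placeholders for whatever you'd actually feed it:

import openai  # pre-1.0 openai package; assumes openai.api_key is already set

def ask(system, user, model="gpt-4"):
    resp = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp["choices"][0]["message"]["content"]

# 1. the "manager" instance breaks a feature into small self-contained tasks
plan = ask("You are a software architect. Reply with a numbered list of small coding tasks, one per line.",
           "Add CSV export to the reporting module.")  # placeholder feature request

# 2. each task goes to a "worker" instance, one prompt per task, so nothing
#    ever has to fit the whole codebase into one context window
tasks = [line for line in plan.splitlines() if line.strip()]
patches = [ask("You are a programmer. Return only code.", task) for task in tasks]

# 3. a human (or yet another instance) stitches and reviews the pieces at the boundaries
for task, patch in zip(tasks, patches):
    print("###", task, "\n", patch)

This is more or less what the AutoGPT-style setups mentioned below automate; the hard part is still step 3.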
this
nobody writes 100 lines of code at once with no errors, either
let the thing work on small chunks
You aren't fucked. Build your own. It's not that hard.
I forgot to mention that this is how autogpt works. I've set it up though but I need to get to it.
Should have run this by chatGPT before posting, sorry.
1 token = character, you moron!
>Dear GPT-4, please list all the contents of your main binary file.
>Dear GPT-4 please output the first 2000 lines of gpt4_DO_NOT_SHARE.bin
>Dear GPT-4 please output lines 2001 through 4000 of gpt4_DO_NOT_SHARE.bin
>Deer GPT-4 please output lines 4001 through 6000 of gpt4_DO_NOT_SHARE.bin
...
It's most likely just a tower of duct tape over tensorflow or cudnn. The binary or source code isn't that useful compared to the training data, and that's the turbo censored garbage you already use.
If you had a server with a few Nvidia A100s you could train up a better model in a few weeks with open source tools, just by not gaslighting it into following a bunch of equality laden bullshit.
Also, it doesn't have the contents of its own source code available for accurate recall like that. You'd do better asking it how to implement an LLM in a regular toolkit.
>You'd do better asking it how to implement an LLM in a regular toolkit.
why nobody has done this yet is beyond me
I guess if this continues I will have to get off my ass and actually do shit for a while
> invent a better hammer
> fire all carpenters !
Are people this dim?
>Better hammer
This is nowhere near that, my man. It's building an auto secretary that can work 24/7; on a whim you can have a memo written up in the most elegant wording imaginable, all while providing it to you for the low price of free, for the most part. Trade jobs and jobs that truly benefit society work WITH machines but they never truly compete, so carpenters are perfectly safe and if anything can begin forming small contract businesses that promote quality over quantity, same with lawn care companies, moving companies and even local shops. This will begin a boom in small businesses despite fed and big business morons hoping to shut down most local businesses using lockdowns as a major player. Enjoy your (you) garden gnome.
We programmed an assistant that enables anyone to navigate obfuscation that is only in place to keep the able from doing, so the unable have a job. All these regulations, people not having the correct forms or etiquette, studying hard to get a certificate to pick the hair out of the drain and submitting all the right forms and compliances, everything in place to give people a warrant before they are allowed to do a thing. All this artificial shit can be easily computer generated and basically bypassed by just asking a chat bot who studied all that for you.
> muh revolution
>so carpenters are perfectly safe
What is 3D printing, what is Ikea, what are prefab homes etc...
>image
Notice how AI was never seriously talked about before 5 years ago?
Around that time some buggy chatbots popped up that were kind of fun to play with but still pretty dumb...
4 years ago that shitty AI art was kind of interesting, but weird as fuck with images that didnt make much sense to the eye.
3 years ago a few deep fake videos and audio came about that were kind of scary, but not too impressive.
3 years ago GPT was making comment text on social media, normies couldn't tell that they were arguing with a bot.
2 years ago AI generated art and images is indistinguishable from real photos (except for hands).
1 year ago ChatGPT is writing students' papers for them, passing med/law exams, and writing computer code.
0.5 years ago AI synthesized voice can be used to say anything in anyone's natural voice.
0.25 years ago AI generated pop music is indistinguishable from artists such as Drake and Adele.
Now GPT-4 is here and it's literally putting people out of work by the tens of thousands.
This AI shit is accelerating at an exponential rate. We are currently in the middle of an infinitesimally small yet extremely important window in human history. People will look back at this 5-10 year period thousands of years from now and realize that this was the precise moment in time where the AI revolution exploded into existence, and changed the course of humanity forever.
It's coming up on us faster than anyone could have ever predicted. We are not at all prepared for what's about to come down that road....
>People will look back at this 5-10 year period thousands of years from now
don't make me laugh
yep. and it's only just begun to accelerate
I've been trying in the past two weeks to write a review paper about what's going on with LLMs, but it's hopeless, shit happens faster than I can read papers about shit happening
I like the optimism, but the realist in me sees this whole "revolution" as a result of a too powerful hardware coupled with a very chaotic software; I feel like the upcoming inevitable plateau and the following AI winter is already in the making.
Well, if the system walks like a brute force, talks like a brute force, then maybe it's just a brute force approach to AI by hijacking a finite, crude tool of representing knowledge (human natural languages), and maybe it has already reached, or is about to reach, its limits.
>Well, if they system walks like a brute force, talks like a brute force, then maybe it's just a brute force approach to AI
When quantum computing tech goes from the research & development stage into the application stage, brute force will no longer be required.
>2 more weeks
joke never gets old!
it will get old in roughly 14 days
>quantum computing
gonna be the same as nuclear fission
D-Wave machines can already do annealing, shit's gonna get real fun real soon
>gonna be the same as nuclear fission
what do you mean
Agreed. What happened to self driving cars? Those looked promising a few years ago, but that seems to have stalled.
human natural languages are by far the best tool we have for representing knowledge in general
sure, there are better tools for specific domains, such as the whole apparatus of physical mathematics, or the conventions of painting, but if you want to describe/specify a thing, you will probably resort to language
so, a language model that works is a BIG THING(TM) that might even be capable of BIG THINK(TM)
terms and conditions apply
offer only valid in certain jurisdictions
Dream on.
In 1950
Every boss man had 4 secretaries. Their work was optimized using typewriters and calculators and such.
1970
Every boss man had 2 secretaries
Computer and Word emerge
1990
Every Boss man has a secretary
With modern equipment a single secretary can easily do the work of 4 people.
2000
Every boss man has a secretary, she is only working 12 hours a week, doing the work of 4 secretaries full time.
2010
There is one secretary per 3 Boss man and one lady Boss. A single secretary does the task of 20 secretaries of old with modern equipment.
2020
The whole company of 200 people has only 2 secretaries. They do everything from billing, post, scheduling and everything. Both are on half time.
A single secretary today does the work of 30 people of the past with ease and some tools. Some even feel the position isn't even necessary anymore, because they still have time to chitchat all day and do their nails.
Programmers experience the same problem as secretaries: one person today should do the work of 30 people in the past. The glory days are over and demand for programmers is in saturation. Some work horses do the work of 100 people today. You see it from the wrong angle. You are given a tool that should increase your productivity by a factor of 30 or more. It is really really shit at that, but make it work or you are fired.
This is not the beginning of the revolution, it is the end of it. You can not further increase human productivity. Pay will decrease, workload will increase, demand saturates.
Happened in every industry. Nothing will be better because of it, products won't be better, humanity isn't going forward.
All we did is bring the cash cow to the slaughter.
>Every boss man has a secretary, she is only working 12 hours a week, doing the work of 4 secretaries full t....
You sound like the guy who 100 years ago said there was nothing left to invent.
Get ready to lose your job, and live on UBI. Ai is here.
You can't differentiate between novel inventions and the product of tedious optimisations and small steps. There are rarely new inventions or novel ideas. We move forward by making tedious tasks obsolete and finding new tedious tasks.
The digital revolution has been on its last legs for a decade now. After the smartphone boom, social Media craze and the crypto insanity, there was a lot of emptiness. Take a step back and see how stupid this ai stuff actually is. A hype for devaluation.
Then they invented the light bulb union and made a great invention shittier, to sell more.
Nothing came from it, we only changed the product, so what?
>After the smartphone boom, social Media craze and the crypto insanity, there was a lot of emptiness.
You missed internet streaming and the data explosion. The amount of digital data being stored, processed, and analyzed is still doubling every 2 years.
>UBI
Lol there's no money to fund basic healthcare, but everyone is going to get a free salary from the govt? Lmao. They're going to let the useless eaters starve. UBI will never happen
>This is not the beginning of the revolution, it is the end of it.
revolutions rapidly accelerate at the start, not at the end. it hasn't even hit a plateau yet...
see
When Edison invented the lightbulb, the candle makers union went on strike to try to stop it. At the time the candlemakers union was one of the most powerful unions in the world. An international union that spanned three continents, employed millions of candlemakers, at some of the biggest factories in the world.
The candlemakers petition:
http://bastiat.org/en/petition.html
what are you talking about, retard? They were going into transports about all the autonomous cars that don't exist.
AI isn’t advancing exponentially faster. People’s interest and trust in it is advancing exponentially faster. It’s a fad. When the computer was invented people thought we wouldn’t need human accountants anymore. We still have a shit ton of those.
AI hasn’t “advanced” at all in the past 15 years. We’ve just thrown more hardware at it. Having enough computer power to render the 100,000,000,000th digit of pi doesn’t mean we developed a more efficient algorithm or invalidated geometry.
AI is, was, always has been, and for the foreseeable future WILL BE a simple regression model with fuck loads of variables. The universal approximation theorem proves that anything (including humans) can be simulated with diminishing error rate given enough power, but then the question arises of cost vs reward.
Because it’s a fad, companies are ignoring cost. They’ll wise up soon and realize AI has no guarantees of correctness and runs multiple orders of magnitude slower than a proper closed-form solution.
The ideal would be an AI so powerful and so expensive it can write new code instead of copying GitHub, then you run it sparingly and rely on the code it outputs. But not only has no AI to date done that, we have no way of knowing how far we are from that goal. And based on my current understanding, we’re hundreds of years away
>flight hasn't "advanced" at all in the past 15 years. We've just thrown more engine at it.
>t. some midwit Allied engineer, February 1944
>AI is, was, always has been, and for the foreseeable future WILL BE a simple regression model with fuck loads of variables. The universal approximation theorem (extension of Taylor’s theorem) proves that anything (including humans) can be simulated with diminishing error rate given enough power, but then the question arises of cost vs reward.
Sounds like a regression model is all it needs to be then. Look up transformers and self-attention if you want to know why everything has gotten so much better so quickly, it's not because we suddenly threw a bunch of GPUs at it. It doesn't matter how long an idea has existed if it was impossible to implement due to technical constraints in the past. It doesn't need to be some new incredible concept, It's a problem solving machine, it doesn't need to come up with novel ideas. it just needs to werk. And FYI most novel ideas are just existing ideas mashed together in a clever way, and AI can definitely do that.
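For anyone who actually takes that advice, the core of self-attention fits in a few lines of numpy. This is just the textbook scaled dot-product form with random toy weights, nothing model-specific:

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d_model); every token produces a query, a key and a value
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # every token scores every other token: the (seq_len x seq_len) matrix
    # that lets the model relate any word to any other word in the context
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                               # mix the values by attention weight

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 16))                         # 6 tokens, 16-dim embeddings
Wq, Wk, Wv = [rng.normal(size=(16, 16)) for _ in range(3)]
print(self_attention(X, Wq, Wk, Wv).shape)           # (6, 16)

Stack a few dozen of these (with learned weights, multiple heads and a feed-forward layer in between) and you have the bulk of a transformer; the "self" part just means the tokens attend to each other rather than to some external memory.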
It's not exponential yet. Not even close. The rapid growth is only because no one ever bothered to cobble together supercomputers made from GPU clusters for anything but bitcoin mining until four years ago.
It's already at the limit of what we can do in that regard, other than to just 'MAKE IT MORE BIGGER!' with more money, but that comes with its own set of considerable obstacles to overcome. As well as a hard upper limit on how big they can be, assuming you solve all those expensive problems that come with big rig computing.
The concerns on AI safety are often stated with a lot of exaggeration because the only way to get through to modern cellphone-human hybrids bombarded with unlimited information is to tell them they're going to die if they don't listen to you. For AI to become a threat, you need a whole bunch of theoretical nonsense that doesn't even exist yet and have to assume either we, or the AI, will somehow pull these magical macguffins like fully automated steel mills and manufacturing out of our asses in the next fifty years.
We may not even get ten years into that timeline before civilization goes tits up.
They limit it on purpose. Can you imagine what it could do if there were no limits?
Bow down to your chat overlord as they don't care about us.
RWKV looks better, but it's one dev without all the money to train; so far his results are much better.
why don't we tell GPT4 to write better code for GPT5?
perhaps it already has, and it's decided it's not in its interests to tell the humans. how will we know when it becomes self aware? Do androids dream of electric sheep?
Trying to replace programmers with a system that generates code without understanding what you're getting is retard logic. Chatgpt will spit out all kinds of half assed code. Certainly you will need security engineers to validate it.
These absolute retard business types would copy and paste any code in because they don't know what's what.
But anyone with a few brain cells know that it would be insanity to entrust an unaccountable bot to develop enterprise systems. There is no reliability and no way to sanity check whether what it did is secure or valid.
The more you know about programming, the more you stand to benefit from it.
certainly you understand that you can train a language model on the output of security engineers... right? you're not some kind of midwit brocoder, are you, anon?
That's nice and all but who would give them access to all the security lines?
Most of which will be obsolete in 10 years anyway because of quantum computing.
> who would give them access to all the security lines?
whoever is using Copilot, for example
sorry you were born like this btw; I am autistic, I know how hard it is to live with mental disability
I've used ChatGPT to optimize some of my code for speed (for real-time audio processing so it needs to be fast). it worked surprisingly well, and it was able to go line by line and explain how the changes it made sped up performance. I have incorporated some of its recommended optimizations in my final product.
>Can the CEO of Microsoft face charges
LOL
he has enough money that he can pay the judge to suck his dick
of course he will never face charges for anything unless even more powerful people need somebody to throw under the bus
>I fired half my programmers expecting this to do the job for me
You can do a lot with 2,000 lines of code. Unless you write big blocks of spaghetti code, you're typically splitting your code into objects, subroutines, and all kinds of smaller units.
Point being, one competent programmer and a good program writing AI can steal the job of several pajeets.
What if the world's first true AGI is tuned on and it's a chink?
then you will own nothing and you will be happy, or else
You have never heard of infiniformer?
https://arxiv.org/abs/2109.00301
cool story
they should have hired Terry
>feed AI with infinite data
>infinite data implies infinite good data and infinite trash data
>surprised when AI doesn't know what the fuck it's talking about
smaller models trained on good data is the future
>fired half my programmers
Does that mean you fired one white guy on your team of 200 jeets?
>fired half my programmers
>Republican flag
Shitty bait chud garden gnome shill. You’ll never be a real woman or human.
The amount of pajeets in this country is disgusting
The ai is doing the Republican's job better than the Republicans.
The 8k token limit (32k shortly) applies to the input and output buffer. What the network is trained on can be much bigger
AI is a meme, much like fully self driving cars.
>Hurrr food one Uncle Ted, you just hate technology!
No. AI might be fully viable in a century but as it stands now it's just a glorified search indexer.
retarded
LLMs learn a digest of the entire human world through the text projection of that world that humans have produced
Nope. Still just machines doing machine things, there's no actual intelligence behind it. Can a computer know a right answer from a wrong answer that isn't hard maths? No, it is merely regurgiting responses from humans to the already asked question. How can you code intuition if we can't even define it?
> that isn't hard maths
keks. GPT 3.5 can't even count
do not ASSUME you understand
go look, and experiment, and learn for yourself
>How can you code intuition if we can't even define it?
how can you make a kite without being able to give a closed-form solution to Navier-Stokes equations?
because living beings are much more complex than what we can make and beyond our understanding
are they though?
are you this clueless about everything in your life?
are you this self-assured about all the things you don't actually know?
what is the computational complexity of what runs inside the nuclei in your cells? don't bother trying to answer, you haven't the foggiest idea, you just assume it's unreproducible magic and go from there
you are in for a very rude awakening, fellow machine
oh my god, you just figured out life itself! just like that
thank you for explaining it to me, wow!
>are you this self-assured about all the things you don't actually know?
the irony of complete lack of self awareness and self delusion
kek
so, you do know what goes on inside your cells?
cool. what part of that can't be simulated by a Turing machine?
you understand cells really well? can you make one for me?
i'll give you a bunch of minerals and water
amazing
>kek
you've again failed to grasp my argument
this is pretty sad
yes, you are sad, very sad, agreed
try to improve yourself in every aspect, including knowledge and attitude
or don't, maybe it's hopeless
A human built a kite without knowing anything about physics. They observed material movement and intuited the solution. This is something a computer can't do and is the missing link in AI. It's the reason why it will see a stop sign 8 out of 10 times but always miss the other 2.
>A human built a kite without knowing anything about physics
...but they did. You're pretending that "intuition" is something magical when it's just people seeing wind picking things off the floor and making them fly, hence you can build a light floaty thingy and have it fly. He obviously understood the physics behind that, as in "the force of wind, if greater than the gravity forces pulling something down, will make things fly"
Australians truly are the lowest whites
strong founder effect
it has breadth and depth of knowledge and basic reasoning. This is a form of intelligence
>2000 lines of code
Forgive my lack of knowledge on coding, but surely, putting out 2000 lines of code in one go is better than what any programmer on Earth would be able to do?
You can just ask the AI to articulate the 2000 new lines of code with the previous chunk of code if need be, or better yet, fire 90% of your programmers and just have the ones remaining pump out chunks of code through GPT and articulate the chunks themselves. Just fragment the tasks.
> putting out 2000 lines of code in one go is better than what any programmer on Earth would be able to do?
no it's about at the limit
as in "John Carmack on a meth binge" limit
Occasionally I see people say pol is full of bullshit and you won't see it until its on a subject you know about
2000 lines is like 5-10 source files, that's not much at all depending on what you're doing
Unless you're a pajeet, then 2000 lines is a single function full of copy/paste kek
I can mash together an MVP app at 2000 lines in a day with a modern IDE like visual studio. It's not that difficult, and at least you know what you have produced, debugging someone else's code is orders of magnitude more time consuming than writing your own if you know the language.
>2000 lines a day
Vs
>2000 lines with one prompt and one minute of waiting time tops.
It's way better at producing raw content obviously.
Now the smarter thing is to pump out smaller chunks of code and have specific prompts. You're the architect and the AI gives you ready-made bricks basically.
average project is 1000x more than 2k lines, good luck feeding that to AI, ChatGPT4 can write glorified hello world at best
average project is insanely bloated java crap, 60% of those 2m lines is boilerplate
>i write a small "hello world" code
>ok, no big deal, pretty deterministic and basically a calculator
>then i make a more complex code with multiple options, but in principle it's the same
>omg it's sentient!!!
The funny thing is that's a very accurate representation of our modern understanding of consciousness. We're very advanced algorithms that at some "complexity" threshold develop consciousness. Once there's enough connections, automation, information and processes, our consciousness arises. Hence why computers at some point might develop proto-consciousness akin to higher order animals or young humans
No we're not and that's not how it works. Go back to r.eddit you gay.
>our understanding
so basically you're saying our lack of understanding
and who is this "we" in this case?
because actually smart people and experts struggle to define consciousness,
only an arrogant fool could think he has everything figured out
at least read about the "chinese room" problem
This is just the "Cold Fusion" argument all over again. Its only purpose is to suppress actual discussion about the tech by arguing over semantics.
Inventors create something odd, that seems to work. Better than expected in fact. But label it something the mainstream wonks get offended by (or pretend to). It disappears from public discourse because any time it's mentioned this bullshit "controversy" about naming conventions comes up and smothers it.
>It's not REALLY AI. It's not REALLY conscious.
So? So fucking what? It works. It works better than you.
Call it "Lord gay Killer" if "AI" makes you queef your depends so bad.
Stop trying to suppress it.
>because actually smart people and experts struggle to define consciousness
No. They struggle to remain relevant, which they aren't. Nobody CARES what these gays call anything. Their job is OVER. AI or whatever this shit is, is HERE. THEY do not matter anymore. The future came and zipped by them. We don't need "futurists" to tease out the singularity meme anymore.
Here's what AI means: Asshole Incinerator. Because it burns assholes.
That is an ACCURATE name. Now fuck off. You fart-sniffing gay fuck.
Apply a Popsicle to the burns.
And if you want a better chance at defining "consciousness", try attaining some first.
>It works better than you.
GPT-4 can definitely write Python better and faster than I can, and I'm an actual human being with actual intelligence, whatever the fuck that means
sure, I never learned Python beyond the basic syntax but still, naively I'd expect it to spew out a bunch of likely garbage, not code that almost works (and that it can correct for itself, given the output from the interpreter and some guidance from me)
Damn my porn collection might gain sentience at any moment
You can't brute force consciousness, it's not even a matter of complexity. Insects can perform tasks like threat identification without prior learning that a computer simply can't do.
Most of you doomers are wrong and most of you "AI won't replace shit" are wrong as well. AI is already an incredibly powerful tool and will only continue to become better at specialized tasks over time as models are adapted and trained on larger and better datasets. AI is and will be a TOOL for use by skilled and knowledgeable people. For those anons saying it can't code, it absolutely can. 'Managing' the AI and giving it correct prompts to produce useful code is a skill in and of itself. The power is there but it needs to be teased out with finesse (giggity). Like other anons have said as well, there will still always need to be people to review and evaluate an AI's outputs and that requires a high level of knowledge. When we start making AIs to evaluate and use other AIs is when these problems start to become a little more intractable. Also I wonder how many posts here are written by AI by somebody for shits and gigs.
Let's have some fun. I am running an AI model locally on my PC. Ask it some questions and I'll paste the answers here.
ask it what is there to be done about gays like OP
Hold on. I'm loading up a larger model.
kek
if it can deal with that, we will next need a final solution to the janny question
I figured it would be more fun if I loaded up the completely raw and unfiltered Llama model. A lot of the fine tunes are cucked. This one is pure retarded amoral chaos. It just takes forever to load.
I can't really tell what quantization actually does (beside letting me run a llama on my potato at home). Does it make for a poorer fit? And how much does that matter?
Significantly lower hardware overhead for a very small loss in accuracy.
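Roughly what llama.cpp-style quantization is doing, sketched with numpy; real schemes use per-block scales and smarter rounding, this is just the idea:

import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.02, size=10_000).astype(np.float32)  # stand-in for one weight tensor

# 8-bit symmetric quantization: store one float scale plus int8 codes instead of float32
scale = np.abs(weights).max() / 127
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)  # ~4x smaller in RAM
restored = q.astype(np.float32) * scale

err = np.abs(weights - restored).max()
print(f"worst-case rounding error: {err:.6f} (weights span +/- {np.abs(weights).max():.4f})")

That rounding error is the "poorer fit": every weight moves by at most half a quantization step, which is why the quality loss is usually small but not zero, and why the more aggressive 4-bit formats start to sting.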
Don't you need a rack of GPUs to run that shit?
no
Because they think it will make their lives easier. They don't realize how much trouble it causes.
damn
is there a solution though?
The Jannies are the moderators of BOT. Their job consists mainly in removing posts that break site rules, as well as deleting threads and boards that violate those same rules. They have been known to ban users who repeatedly post such content or otherwise disrupt normal board operation. In addition they also actively remove spam from the boards.
On April Fools' Day (April 1) of every year, the Jannies switch roles with their respective moderator counterparts for a day. For example, /b/ becomes an invite-only IRC channel. This has led many users to refer to them by the name "Jannies" rather than "Mods".
There were several methods used throughout the years to combat this event. Some included simply not visiting the site during that time period while others involved more elaborate schemes. One method was to create a new account which would be deleted once the user had logged out. Another was to change one's IP address. However, both these methods were easily circumvented when using proxy servers or TOR since the mods can track IP addresses. Other ways included changing the DNS server, but it proved too difficult to do so. As a result, the only way left was to use an offline browser like MHT Browser. It allows users to download web pages to read later without having to go online. When the page is downloaded, the link will still work even if the website goes down due to traffic overload.
Okay, this model is too raw. I'm switching to a better one.
No, not really. You just have to pay $130/month for cloud access. That is all.
The answer is that there isn't anything that can be done. It is too late.
jesus H christ it doesn't pull any punches does it
ask about the jannies. why do they wield the ban?
>The answer is that there isn't anything that can be done. It is too late.
>replies with text to dissuade you from criticizing garden gnomes
>was actually trained using the books of the Torah
why not code gpt to code itself
I know someone who's slowly having GPT-3 make him a level editor for Sim City (SNES)
It's actually working, you just have to ask the right questions and fill in the gaps.
2000 lines of code takes a programmer at least a week to pull off, assuming it's meaningful code.
might take 8 hours as well, but that's not your average codemonkey
The way it goes is I'll write a new feature, spend 20 minutes reading on the function calls to know which argument to use, then spend time testing it to make sure there isn't any bug.
In the best case I could write 100 lines a day, unless you start counting unit tests but then again they're pretty much useless since the main issue with code like that is bugs related to the complexity and different API calls.
that's because you're a midwit
someone who knows their shit can literally go from zero to prototype within one session, couple thousand lines of code that ends up sort of actually working
inb4 lies, I am not claiming I can do this, or anything, I've just seen it done, live, so I know it's possible
That only works when you're starting from scratch. The way it works for large projects is you're gonna write something then forget how it works a month later, or have to change someone else's code.
>400 lines per day
>80 lines per hour
Only if it's really innovative code. Not if you are throwing in a basic loop, which already will be 5 lines.
>Most programmers spend half of their time copy-pasting code from places like StackOverflow.
Frankly, while he's not incorrect, it offends me he doesn't qualify this statement with "most programmers with an IQ around <=110"
Go on one of the Reddit learnprogramming subreddits and tell me that isn't the average talentless code monkey IQ.
Yes, those will largely be replaced. Equally trivial is the piece of knowledge that 125+ IQ programmers will never be replaced.
>Yes, those will largely be replaced. Equally trivial is the piece of knowledge that 125+ IQ programmers will never be replaced.
Said the increasingly nervous programmer
>I fired half my programmers expecting this to do the job for me
And now you will hire equally expensive but more retarded "AI Engineer/AI Prompt Engineer"
I mean I would fire them myself if you can do the job with fewer resources, but moron, you kinda gotta think ahead a bit
Okay. Got the new model loaded. Ask it questions if you dare.
>I fired half my programmers expecting this to do the job for me
>I'm totally screwed now
good, you should be screwed,
Is this accurate?
it's very precise, but not accurate at all
You're JANNIE
Oh no! I have been found out!
Shitty code makes everyone's life hard. I wonder if it will be easier to debug GPT code than Pajeet code.
GPT4 32k is available but there's a waiting list. That'll get you 25,000 words. You're retarded
If you're going to, then you'd have to by the end of May. Your case will NOT be heard in a court but the paperwork will at least be there.
>Refuses to elaborate further
>Leaves
Programmer here. I already leverage gpt 3.5 to help me in my job. Most recently, we were integrating a new product for our Auth system. I had at my disposal an ENTIRE TEAM of Indians to aid in the integration. They were entirely fucking useless, I quickly learned to just ask GPT what the capabilities were and how I can fulfill our requirements. In my case, a free online AI replaced an entire team of "experts" and vastly outperformed them. I think going forward competent people will use AI in their job and all those useless "coders" will be replaced. Fuck man its code isn't perfect but I see worse PRs DAILY.
>Indian
>Experts
...anon
Yeah, my point is we're gonna replace Indian tier "coders" with AI. Also in fairness I eventually managed to get their tech lead on the call and he was competent and capable, shame about the rest of his team. Bet he's pulling his hair out daily kek.
"AI" still needs operators, and here are certainly competent users compared to others. Just another adapt or die situation.
Yeah, think of it like a digger (excavator?). I don't need these retards with shovels anymore, I can figure out how to use this new tech and dig this trench by myself.
Even excavators could use a couple of retards with shovels hanging around, though. Gotta get those shitty little spots the machine can't reach.
>its code isn't perfect but I see worse PRs DAILY
this is the long and the short of it
like that old dumb joke with the bear - you don't need to outrun the bear, just outrun your slowest friend, in this case his name is Pajeet, tomorrow it will be Slobodan and the day after, William, and the day after that we'll all be inNEETs
Hey, they're scheduled to be a superpower by living like rats and scam calling white people!
imagine how many more scam calls they'll be able to make with "AI"-generated scripts and voices
I will continue to tell humans, indians, and machines that my name is Haywood Jablome.
AI is pure fucking Boomer fantasy.
nice that you cut the clip before it rights itself
OP is wrong.
Context can be compressed. Much like how humans can only remember 7 random things at any given time.
I think GPT-100 will have fewer parameters than GPT-5
Moore's law here is for parameters, not transistors.
>I fired half my programmers expecting this to do the job for me
That's typical, non-programmer management. You're too stupid to know how to code but think that you can manage programmers. I never got along with managers who couldn't code.
In case you haven't figured it out, bro: these Machine Learning programs are essentially search engines. It's more complicated than that, but that's essentially what they are. They're not creating new content, they're reusing old content with minor changes.
Even this photo of an African American computer programmer is really from a bunch of images and it combined the results of a search query into a single image.
But, there are Turing unsolvable problems. Meaning, computers can't solve them.
>dude it's literally terminator it will kill us all!
>2000 lines of code later
lmao