https://twitter.com/abacaj/status/1777574208337215678
Really though, this would have been completely unthinkable 5 years ago, but now it's here, and commercially available.
How fricked will your job be in 5 years?
at this point im getting unsure about my career
>How fricked will your job be in 5 years?
In 5 years there will be no programmers
*makes 7,053,568 compile errors in ur path*
*uses single letters as variables and remakes variables down the line*
*uses syntax from another language*
*makes nonsense comments on code that makes it impossible to decipher*
*steals code from github without including license*
*takes longer to fix than it does to just write it yourself*
*has been at this level for years and is actively being shilled on bot DOT org (-nel) (replace DOT with .)*
>locked to a certain timeframe so can't handle constantly changing tech like android, web, other libraries
>copy-paste from online indian tutorials
>most of the time spent guiding it to do the right thing and adding context
>can't handle project above the imposed character limit
>can't fix complicated bugs with pointers, ipc, and drivers
>know nothing about programming
AKA have never shipped a product. Only script-kiddies needing AI to move files around think it can replace programming jobs.
I am actually all in for AI replacing stackoverflow or prolonged docs searching.
But enought about Pajeets. AI poses a significant danger to real programmers.
Oh no, another demoralization thread! What shall we do? No, no, no, no, no, no, no... etc.
>real programmers.
Maybe to you FizzBuzz morons, but to everyone else that gets paid to solve problems and code? No, not really.
> twitter nobodies
> schizo screenshots
literally shitcoins 2.0 but with far more moronic people involved. what a fricking disaster.
factual
> schizo gibberish from a fricking idiot
lmao. two more weeks etc.
sperg
don't forget, also writes trivial exploitable code.
usually some overflowing arithmetic or otherwise, because it's basically trained on babby tier educational sources.
>*uses syntax from another language*
kek this gets me every time
>chatgpt write this and this function in javascript
>randomly adds in 2 lines of java after 150 lines
ye it's surely gonna replace us soon, just 2 more years am I right
This is what chess players thought until Deep Blue beat Kasparov. Programmers are like chess players in that sense, arrogant pricks with specific domain knowledge who think they are untouchable.
apples to oranges comparison
that's the usual cope, then you cry like everyone else getting replaced, thinking writing code is something AI can't do. you think companies don't know you can be replaced? the AI companies are already planning to do it, it's already been tested. if anything they don't want to cause an uproar like automation did in industry generally. they can replace humans in a lot of shit, but that would cause chaos, especially with whole careers becoming obsolete. it's not that they can't do it, it's that they don't want to yet, or at least want to go slowly to avoid economic collapse.
bloody frick you
ok I get it that you hate indians, but this is an AI thread.
Hearty kek
Agreed. Helps a lot to avoid verbose documentation or clickfarm websites with shitskins telling you how to do some basic shit like mass rename files. Ironically, in producing AI, they ended up producing excellent indexing algos that blow google out of the water in every way
Okay, but enough about programmers, let's talk about gpt.
you do know a model improves with every iteration, yes? is it really that hard to understand?
>it'll keep improving at this speed because...it just will, OK?
>it won't keep improving at this speed because...it just will, OK?
>improves with every iteration
What's BOT?
Still waiting for one of these magic models to fix Kuroba-dev.
67 issues about replacing master/slave.
I have been using chatgpt as a helping hand since it came out. if anyone thinks that gpt can replace programmers they know nothing about programming
This. Helpful tool but if you don't know how to program it will create more problems than it fixes.
You're moronic. Fixing code is not the same as writing code from scratch. With enough context (aka code) it's doing very well to fix even obscure issues.
t. Using GPT since november 2022
>With enough context (aka code) it's doing very well to fix even obscure issues.
Oops... hey guys everyone look over here we got an incoming freshman here thinking GPT really can replace programmers LOL! Looks like someone hasn't really been working on larger-scale projects or dealt with complicated matters in software design. Your little weather app or tic tac toe app doesn't count so simmer down lil bro!
>Moving the goalposts
I never said it can replace programmers, but it certainly can fix minor issues that would take several days for devs to fix because they don't have time to do so.
You're evaluating where they are now, without looking at where they will be based on where they were.
People like this make me wish AI really could replace programmers in large projects. They can never be like "Oh wow, cool" and leave it at that.
have you seen any of these AI agent pull requests lmao
yea, it takes 10 minutes because it doesn't give a rat's ass about what is actually happening in the code. go look at the curl pull requests that completely ignored lines of code and completely ignored the responses from the developer. it's shit, I wouldn't let these things even update my readme
So we just went from AI replacing pajeets to AI replacing whitoids lmao
Any $2-an-hour jeet can write shit code and the AI will fix everything KEK
GET FRICKED WHITEY
it will kill bootcamp webdevs and that's a good thing. but llms are utterly incapable of systems design, and cannot write maintainable code. realistically, they never will, llms simply do not have the ability to plan or reason about a problem. that would require true AGI.
Planning is a lot like next token prediction, so yes it can.
you certainly could be replaced by llm
Yeah I really don't get how these people think this shit will solve all our problems. It sucks dick at being creative. Just look at how no one gives a frick about AI art anymore. These models cannot reason or plan. Are they useful? Yes. Can they problem solve? No. Will they ever do that better than people who know what they're doing? Not in their current form.
>that would require true AGI.
The entire history of AI is people saying this about things, and then it turns out that whatever it is can be solved with mundane statistical methods and a lot of computing power.
We'll all go back to working in factories where high mechanical precision will be necessary. Carrying boxes and the like. How fun!
trannies are going to petition to ban ai due to copyright and hurting minorities chances for jobs.
we end up with a million pajeets making shitcode as a result
>faster
Cool. How accurate is it?
Good thing I didn't listen to BOT about learning to program
the new meta is to not learn to code
You mean mantra?
You mean manga?
You mean tnd?
Probably produces garbage code/side effects in a real project
What kind of issues does this "SWE-bench lite" even contain in the first place
Are they highly specific issue descriptions/requirements which you only see in mature code bases or more vague ones
What's to stop people from just training an LLM on the benchmark problems?
https://www.swebench.com/
>SWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.
https://arxiv.org/abs/2310.06770
SWE-bench is easy problems that pass unit tests.
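Conceptually the evaluation loop described above is simple: a patch "resolves" an issue iff the repo's unit tests pass after the patch is applied, with post-PR behavior as the reference. A toy sketch (the function names and checks here are illustrative, not the real SWE-bench harness):

```python
# Toy sketch of SWE-bench-style evaluation. A candidate patch counts as
# "resolved" iff the unit tests (derived from post-PR behavior) pass.

def buggy_slugify(title):
    # The "issue": spaces aren't replaced and case is kept.
    return title

def candidate_patch_slugify(title):
    # A model-generated fix for the issue above.
    return title.strip().lower().replace(" ", "-")

def run_tests(slugify):
    """Unit tests using post-PR behavior as the reference solution."""
    checks = [
        (slugify("Hello World"), "hello-world"),
        (slugify("  SWE bench  "), "swe-bench"),
    ]
    return all(got == want for got, want in checks)

print(run_tests(buggy_slugify))            # False: issue unresolved
print(run_tests(candidate_patch_slugify))  # True: counts as resolved
```

Note the obvious gap anons point out below: "passes the unit tests" says nothing about code quality, security, or maintainability.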
This thing will shit out so many CVEs, my career is safe :^)
Don't care I'm going to program 'til I die.
Productivity increases
Short term losses as demand is met with fewer man hours
"Learn to code" jobs will be dead within a year even for 3rd world
Long term benefits as demands scale up
You should already be accustomed to lateral movement across companies or independent consulting
Most programmers should have never had their job. Most humans, in general, are completely inadequate garbage at what they do. Society simply does not filter its creations hard enough.
Is this surprising?
>Society simply does not filter its creations hard enough
it does, it filters out the competent people so there's no competition
This means I can do one year of work in one day. So I can make a AA game in 2 or 3 days.
copy and pasting is infinitely faster than both
I wish poopgpt was as good as people claim, but it fricking sucks. It can't solve any of my problems and it just makes me waste more time because of its hallucinations.
That's nice. Make sure to keep using it for all your projects.
When you finally realise that your codebases are an unmaintainable pile of shit that almost nobody is prepared to work with resulting in a complete loss of institutional knowledge... give me a call, I'll save you.
Just be aware my services do not cost $20/mo.
and it'll still be worth it, and you'll be paid the wages of 10 developers for your time. good for you.
except the tech also obsoletes 100 other developers in the meantime, so I hope you're in the top 1%
>When you finally realise that your codebases are an unmaintainable pile of shit that almost nobody is prepared to work with resulting in a complete loss of institutional knowledge
This happens with human devs regardless
Good. I hate computers. Get me the frick out of here. If it weren't for the pay cut I'd become an apprentice electrician.
LLMs are fricking terrible at anything that's not writing actual text.
Why can't normalgays realise it already, they're "large language models", not "large programming models" or "large engineering models". The sad part is it cannot even do text right.
My company wants our newest industrial-use device to be ATEX IEC60079-7 certified, so I turn to ChatGPT to summarise a section of the standard and it gives me information so wrong I can't believe my eyes
Engineers and actual programmers are never losing their jobs, at least to LLMs
I think LLMs have shot themselves in the foot. Internet is going to get flooded with generated content which means that content can't be used to effectively train new models. AI research is following predictable hype cycles. We're past the peak already and stagnation will follow. There will be no AGI for a long time.
>nobody really understands intricacies of human brain
>some söydevs from silicon valley think they can challenge nature with their primitive models
>nobody really understands intricacies of human brain
It's really mind-boggling how rarely this is mentioned. The hubris of those tech pricks thinking they can whip up an AGI when nobody understands how intelligence actually works in the first place is astounding
It's not complicated. It's something you can literally scan down to a neuron (the most granular) level.
>It's not complicated. It's something you can literally scan down to a neuron (the most granular) level.
Surely you won't have a problem explaining something simple then. For example, how does the brain store data? Let's say there is this little saying that I like; it's effectively just a text string. How is it stored in my brain? Does the brain keep a string literal somewhere? How is it represented? When I want to retrieve it to speak or write it, what do the neurons do?
When you memorize a saying or a piece of information, it is not stored as a literal string of text in your brain. Instead, the information is encoded in the connections between neurons, a process known as synaptic plasticity.
Here's a simplified explanation of how this works:
1. Encoding: When you learn a new piece of information, like a saying, your brain processes it through various sensory inputs (e.g., visual, auditory). This activates specific patterns of neurons in different brain regions, such as the hippocampus, which is crucial for forming new memories.
2. Synaptic plasticity: As these neurons fire together, the connections (synapses) between them become stronger. This process is known as long-term potentiation (LTP). The strengthened connections form a neural network that represents the stored information.
3. Consolidation: Over time, the memory undergoes consolidation, where it is transferred from short-term storage (hippocampus) to long-term storage in various cortical areas of the brain. This process involves the strengthening and reorganization of neural connections.
4. Retrieval: When you want to recall the saying, your brain activates the neural network that represents the stored information. The activation spreads through the network, allowing you to access and reconstruct the memory. This process involves the coordinated activity of multiple brain regions, including the prefrontal cortex, which is important for memory retrieval.
It's important to note that memories are not stored in a single location but are distributed across multiple brain regions. The exact pattern of neural activity that represents a specific memory is unique to each individual and can change over time as the memory is updated or associated with new information.
thanks chatgpt
Claude Opus actually, you fricking Black person israelite.
Nooo you're not allowed to ask GPT!! You just aren't okay!! W-what if it's inaccurate?? We need to fact-check this!
>When you learn a new piece of information, like a saying, your brain processes it
I bet you feel smug now that you've got your electronic monkey to regurgitate this information for you. I don't want vague information about which brain area might have a function in what cognitive process, I want to know the mechanics. I'm an engineer, tell me exactly how I am able to store that bit of information down to the nitty gritty.
What you gave me is the equivalent of staring at a computer under power and inferring how it does things by measuring the power draw of individual components.
When you learn a saying, specific neurons fire in a unique pattern, triggering the release of neurotransmitters at synapses. If the same neurons fire together repeatedly, the synapses undergo long-term potentiation (LTP):
1. More neurotransmitters are released by the presynaptic neuron due to changes in calcium signaling and vesicle fusion.
2. The postsynaptic neuron develops more receptors (e.g., AMPA and NMDA) and becomes more sensitive to neurotransmitters.
3. Structural changes occur, like the growth of dendritic spines, increasing the surface area for synaptic transmission.
LTP also triggers gene expression and protein synthesis (e.g., CREB and BDNF), creating new synapses or modifying existing ones. The strengthened connections form an engram - a specific pattern of neuronal activation representing the stored saying. This involves multiple brain regions, including the hippocampus for initial encoding and the cortex for long-term storage.
When you recall the saying, the engram is reactivated, firing the neurons in a similar pattern to when you learned it. This reactivation, orchestrated by regions like the prefrontal cortex, allows you to access and reconstruct the stored information.
In summary, the saying is stored as a unique pattern of strengthened synaptic connections, not as a literal string of text. Recalling it involves reactivating this specific neuronal pattern.
>It's not complicated
>it's just neurons lol
Neurons don't just exist there by themselves as an isolated system. Change the balance of chemicals a bit and everything changes. The complexity of brains and the human body in general is just insane. Some rudimentary matrix multiplications repeated a shitload of times won't come even close to exhibiting similar behavior.
You realise that by posting that chart you're only serving to illustrate how the human brain is well within our understanding, don't you? And since it is within our understanding, and it can just be put into a .png or .pdf, it can certainly be processed by something like Claude or GPT.
Here, I summarized AGI:
A = B * C
Now go forth and create the singularity.
Biology is just chemistry with a layer inbetween, chemistry is just maths with a layer inbetween, computers do maths.
That's great and all, but current systems don't have nearly enough transistors to create an ANN on the same scale as the brain.
You forgot the RELU.
>just put this anatomy book into claude and it will generate an efficient brain sim algorithm!
Maybe in 30 years when we have 500T param models.
>even the ai doubters who think its just autocomplete concede that we're ~30 years from complete ASI
Over, innit?
>~30 years from complete ASI
That's based on the assumption that we'll have hardware powerful enough to just braindead brute-force some useful algorithm out.
We could probably do it right now if they dedicated all those giant supercomputers to it. Supposedly Microsoft is making some giant AI supercomputer, maybe they'll try to make a giant model to run on that, but I'm not entirely sure it's possible with publicly traded companies to run an experiment that expensive.
You need to have a nice day. Low IQ useless eaters like yourself are holding back humanity.
With one equation I will utterly define the entirety of everything that has existed or will ever exist, thus all problems are solved.
U=1
Reductionist drivel. Try harder.
honestly have a nice day it'll optimize the rest of humanity's intelligence
the most granular level is the synapse, which is actually almost impossible to truly capture, since you've got abstract concepts at play and it doesn't work in vitro
This.
The computer power analogy is pretty good.
It's kind of like how we have no idea how DNA truly works. Sure, we can frick around with a couple genes on a fruit fly and make it grow random legs everywhere, but we sure as frick ain't going to make a dragon from scratch
>no dragons in your lifetime
;~;
And wtf do you think source code is? Moron?
>My company wants our newest industrial use device to be ATEX IEC60079-7 certified, so i turn to ChatGPT
This is bait
Why does that sound like bait? I tried to use ChatGPT to explain parts of the standard to me (it's like half legalese)
It's not like i was going to let ChatGPT touch the design at all
>He didn't use gpt4
It really is a matter of time until human programmers become irrelevant. This feels like the evolution of chess engines, the difference is that of course programming isn't a sport so companies have no reason not to completely replace humans with AI.
At the end of the day, just like chess, programming requires no creativity, and every problem has an optimal solution path, just like chess.
>every problem has an optimal solution path
prove it
If you ever programmed you know that is the case. No need to "prove" it here. The optimal solution is the one which uses the least amount of memory and CPU cycles to achieve a goal. At some point all AI will need is the goal, it can be vague, then you can make changes to what it produces by giving it different instructions.
This will all be done in natural language, no programmer required.
>The optimal solution is the one which uses the least amount of memory and CPU cycles
Nocoder.
>butchers the quote
>gives a vague insult as an argument
Good one.
Samegay. Still no argument.
>least amount of memory and CPU cycles to achieve a goal
lmao, this is not even a junior level take, this is a complete nocoder take
Hahahahahahah holy shit, are you being paid for this?
you've never written more than a hello world after following a youtube tutorial.
>The optimal solution is the one which uses the least amount of memory and CPU cycles to achieve a goal.
thanks for exposing yourself as a fizzbuzzer lmao mong
But the problem (if it is one) is that if you automate programmers, you can automate literally every other job that can be done on a computer, by sicing the AI on writing programs to do the tasks, even if they're throwaway programs
>10 minutes per issue
Utterly pathetic, Ranjesh can solve 10 issues every minute.
>github
Is there a local version control system that these agents work on?
GitHub is not real life, nice try though.
every time i was laid off "because you're too expensive" in the past decade, i quadrupled my wage when inevitably, the big boss would call me to extinguish the resulting dumpster fire
happened with outsourcing to poos, happened with "low-code" hype where big man was expecting business people to ship a working product, will happen again with llms trained on poojeetware from github
can't wait to get laid off again and be begged to come back half a year later at any cost
it's ogre
Cool, you can now cleanup all those shit issues and focus on actual issues which chatgippity can't solve. Did I mention you need to review and clean all garbage gpt will spout out like some janny?
The reality isn't as simple and neat as a benchmark unfortunately. This is the equivalent of driving in perfect conditions and claiming you have FSD.
>humans take 1 year to write code
>I ctrl+C ctrl+V their code
>I'm 365 times faster than human programmers
>it's over
>literal chud face profile picture
will be funny when businesses try to replace devs with business people for the 100th time, give up, and come crawling back. the CEO of my company even said last year that we'd be able to ship features in 10 minutes with a business analyst prompting, features that would normally take a team of devs months, yet no one has been replaced. normie executives hate devs with a passion and have tried a million times to replace us but have always failed. they also won't be able to replace artists, vfx and so on. if they could have, it would have already happened.
>use a tree for something
>feel too lazy to write a slight variation on a breadth first traversal to get all nodes within N links of node X
>write the function signature and a comment explaining exactly what it should do
>GPT-4 writes something that LOOKS correct, but has a few swapped variables and a missing line that makes it frick up
>takes the same amount of time to debug as it would have taken to write
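For reference, the "slight variation on a breadth-first traversal" anon was too lazy to write is only a few lines: plain BFS with a depth counter. A sketch, assuming the graph is an adjacency dict (node to list of neighbors):

```python
from collections import deque

def nodes_within(graph, start, n):
    """Return all nodes reachable from `start` in at most `n` links.

    Standard BFS; each queue entry carries its depth so we stop
    expanding neighbors once the limit is reached.
    """
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == n:
            continue  # don't expand past n links
        for neighbor in graph.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, depth + 1))
    return seen

g = {"x": ["a", "b"], "a": ["c"], "c": ["d"]}
print(sorted(nodes_within(g, "x", 2)))  # ['a', 'b', 'c', 'x']
```

Which is exactly the kind of thing where debugging a plausible-looking LLM draft with swapped variables can take as long as writing it yourself.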
If you aren’t doing CRUD it still isn’t a threat. If you are doing CRUD you’ve been fricked forever anyway.
All this posting about how AI will never replace programmers seems a bit silly. It has already replaced programmers
name one
>It has already replaced programmers
Let's be honest anon, those "programmers" were never meant to be employed anyway.
AI is the great normalhomosexual killer of the tech world. If you aren't a nerd who's actually passionate about tech and makes tech in your free time, you will be replaced by AI.
It's the same with art. Actual good artists don't have to worry, but arthoes will lose their income
normies have been made infinitely abundant
>400x faster at writing code that compiles and passes unit tests, no further checks performed
>with 20% accuracy on babby tier changes
A faster pajeet, basically.
the people who say AI art is not stealing, then decide to sell 'prompt packs' and are buttblasted when people share them for free never cease to amaze me
why do they add 'engineering' to it? sounds like a meme now
It will be great. I no longer have to talk to ultra autistic cooders. I can just proompt and it will do what I want.
How will you know if your prompt resulted in the correct code normalhomosexual?
The reality will be the ultra autistic cooder will take your job because he can actually verify (and write, if need be) the code that he prompts out of the LLM
I love how normalhomosexuals think their "superior" social skills somehow make them better at prompting than less social people.
Reminds me of when people advertised their coding skills by lines of code per hour.
I tried getting ChatGPT to do Wordle with clear instructions. it repeated answers, gave me words with banned letters in them, everything wrong
This is the future
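The funny part is that the constraints the model kept violating are trivial to check mechanically. A sketch of the two rules from the post above (repeated answers, banned letters), with hypothetical rule inputs:

```python
def valid_guess(word, banned_letters, used_words):
    """Check the two constraints ChatGPT kept violating:
    no previously used answers, no banned (gray) letters."""
    word = word.lower()
    if word in used_words:
        return False  # repeated answer
    return not any(letter in word for letter in banned_letters)

print(valid_guess("crane", {"x", "z"}, set()))  # True
print(valid_guess("crane", {"c"}, set()))       # False: banned letter
print(valid_guess("crane", set(), {"crane"}))   # False: repeat
```

Ten lines of deterministic code enforce what "clear instructions" couldn't.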
More jobs.
>waow!!! ChatGPT can implement Dijkstra's in 5 seconds!
why not just
>google -> cpalgorithms -> dijkstra -> copy paste
AI is horrendous at solving novel problems efficiently and will just implement a brute force while gaslighting you into thinking its optimal