They spend most of their time writing on Twitter/X or whatever it's called instead of writing code. That explains why they got fired.
In my job, AI is awesome because it really is better than the human alternatives for most applications, and it makes deployment incredibly simple. Programming has always had this meme about coders socializing, idling, reading xkcd comics, browsing social media, Stack Overflow, etc., but never actually doing their jobs.
They won't last long, depending on the quality of the solutions those LLMs provide. Humans will be back.
They don't have the time or understanding to suss out when it's hallucinating.
T. Paid coder
lmao you fucking retards are so small-minded. You could train an LLM to look for very specific bugs and code, then have another look for specifics, then have another check that, and so on. This is one scenario I could see happening. Is this how normal low-IQ people function? It's absolutely hilarious how small people think in such small ways.
...what is this take? It genuinely seems like you've never coded in your life. How would you even train it to find bugs? Simple bugs I can see, but what about the more common bugs that need the context of the codebase as a whole? How do you make sure the bugs it fixes don't unintentionally create more bugs because it doesn't fully understand what it's doing (something I constantly have to wrangle ChatGPT over)? How would you train that reliably without exponential electrical and computing power we probably won't see for decades (hell, maybe centuries)? What the hell does "look for specifics" even mean? Specifics of what? And based on what? And then have "another check" that is just as resource-hungry as the last? Where are you getting all this energy to power this shit at scale?? Microsoft is in the negative because this shit is so fucking costly as it is, and you want to add more for results you can't even be sure of??? I don't even understand how you would come to this conclusion. I feel like I'm talking to the same crypto, Tesla, SpaceX bros about the reality of what they're simping over again...
What I was implying is that the AI will write the code, then run the code; if it finds the code can't run, it will try to fix it with another AI that is more efficient in how or what it will be running.
>it doesn't fully understand what its doing
Another AI with a better explanation will help supplement it, for instance.
>that is just as resource hungry as the last? Where are you getting all this energy power this shit at scale?
what makes you think energy even matters? most of this green bullshit is face value for retards. If, for instance, the US feels it's falling behind China in AI research, they'll allow drilling for oil and coal and kill every animal on earth just to get the advantage.
>1 ai can't solve the problem?
bro just make an ai to help the first ai, that will fix the problem!
>2 ais can't solve the problem?
bro just make an ai to help the second ai, that will fix the problem!
>3 ais can't solve the problem?
4 weeks ago
Anonymous
It's more than just that, and I am being a bit hyperbolic, but you should basically understand what I'm getting at.
At that many steps the code is basically unnecessary. If you're relying on AI to be able to automate the system then the system might as well be the AI. No reason to train it to write code.
right, but I can imagine you could have specific jobs for each AI, with each one trained on something more specific than the others. If training LLMs on everything takes so much, why not have a bunch of them doing different, very specific things?
>At that many steps the code is basically unnecessary. If you're relying on AI to be able to automate the system then the system might as well be the AI.
if you're in a team of say, 10 people, your whole team will never be replaced, but your employer will be able to get the same business value by just paying the 2 or 3 best guys in your team, and having them use/supervise AI pajeets, both giant LLMs, and fine-tuned ones for the particular company.
AI will replace pajeet and troon coders. You already need cis het white males to fix pajeet and troon code. Since AI does the same thing as pajeets and troons but faster, they will need to hire more cis het white males.
Being weird and being intelligent are very much not exclusive. Between furfags and trannies there's a lot of clever development going on, they're just subtypes of autist.
As much as I dislike troonism, every troon I've ever worked with (which is quite a few) has been leagues ahead of "cos het White males", whatever the fuck that is. I think it's the autism, but can't be certain.
As long as LLMs and similar have a >5% error rate, they won’t be seriously adopted in any sensitive roles. Subtle accumulation of errors upon errors will lead to catastrophic results for early adopters. The lack of accountability makes fixing those errors harder as no one quite understands what the LLM was going for or exactly where it began hallucinating.
Would you trust a doctor with a 3% hallucination rate to diagnose your cancer? Or to feed you key statistical info about your business? Or to drive you?
We will forever be stuck in Tesla-self-driving-style "almost there" purgatory.
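The "accumulation of errors upon errors" point above is easy to put numbers on. A back-of-the-envelope sketch, using the thread's 5% figure purely for illustration (not a measured rate):

```python
# Probability that a chain of steps stays error-free when each step
# independently has the same per-step error rate.
def chain_success(error_rate, steps):
    return (1 - error_rate) ** steps

# At a 5% per-step error rate, reliability collapses fast as steps compound.
for steps in (1, 10, 50, 100):
    print(steps, round(chain_success(0.05, steps), 3))
```

Ten chained 95%-reliable steps are already barely better than a coin flip (about 0.599), and a hundred are near-certain failure, which is the "catastrophic results for early adopters" dynamic in a nutshell.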
Cool theory, but what if instead of just one AI doctor I have a thousand and go with what most of them say? What if my vehicle's AI driver is actually six minds running in parallel and the vehicle does what most of them agree on? It seems like this stuff has a very easy fix: just throw more processing power at it. It could be fixed literally today.
the problem is not computing power, it's datasets
people underestimate by orders of magnitude the amount of data needed to train a self-driving car, for example. Tesla has enormous amounts of driving data and it's not even remotely enough. How many times a day do you run into a sideways deer at a particular speed in thick fog? How does a machine learn that? You'd have to program it almost case by case. And yet the consequences of being wrong are catastrophic. Reality has an immense number of possible combinations that are individually statistically insignificant.
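The long-tail point can be made concrete. Assuming (purely for illustration, all numbers invented) that a rare scenario occurs independently with probability p per mile driven, the miles of data needed just to *observe* it once grow like 1/p:

```python
import math

# Miles of driving data needed to see a rare scenario at least once
# with the given confidence, assuming it occurs independently with
# probability p per mile (made-up numbers, for illustration only).
def miles_for_one_sighting(p, confidence=0.95):
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

for p in (1e-3, 1e-6, 1e-9):
    print(f"p={p:g}: {miles_for_one_sighting(p):,} miles")
```

Around 3 billion miles for a one-in-a-billion-per-mile event, and that's to see it once, not to learn a robust response to it.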
Nah buddy, you're retarded; deer aren't killing Tesla drivers. Just grab the different AI drivers and stick them in the same car with a bigger computer. Problem solved.
>Cool theory, but what if instead of just one AI doctor I have a thousand and go with what the most say? What if my vehicle AI driver is actually six minds running in parallel and the vehicle does what most of them agree on?
You do realize that the errors are not just "whoops, the AI randomly decided to be wrong somehow" but are repeatable? Yes, there are some errors like it saying 95% of the time that Italy is in Europe and 5% of the time some random dogshit, but that is just one type of error. What if none of the car AIs were taught how to handle deer, so all of them drove into one? If the problem with AI today were just that it's an omniscient oracle of truth that sometimes decides to be wrong at random times, you could make it code anything by asking it for the same code 20 times and running each version to see which works best. But it does not work like that; you cannot make this happen because it is still severely limited, the same way that if you ask a 5-year-old to figure out the theory of relativity, you still won't figure it out even if you ask him a thousand times. Maybe a quintillion monkeys in a sextillion years would write it for you on a typewriter, but that's something completely different.
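The "none of them were taught how to handle deer" point is exactly why majority voting doesn't rescue you: voting averages out independent mistakes but does nothing about shared blind spots. A toy simulation, with all rates invented for illustration:

```python
import random

random.seed(0)

SHARED_BLINDSPOT = 0.03  # fraction of inputs *every* model gets wrong (the deer)
INDEPENDENT_ERR = 0.05   # per-model random error on everything else
N_MODELS = 101
N_INPUTS = 5000

wrong = 0
for _ in range(N_INPUTS):
    if random.random() < SHARED_BLINDSPOT:
        votes_correct = 0                       # correlated failure: all wrong together
    else:
        votes_correct = sum(random.random() > INDEPENDENT_ERR
                            for _ in range(N_MODELS))
    if votes_correct <= N_MODELS // 2:          # majority got it wrong
        wrong += 1

print(f"ensemble error rate: {wrong / N_INPUTS:.2%}")
```

The 5% independent noise vanishes under 101 voters, but the ensemble error converges to the 3% shared-blindspot rate: a thousand doctors trained on the same data fail together.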
>Maybe quintillion monkeys in sextillion years would write it for you on a typewriter
Not even, since this tard thinks you can just use a majority vote to erase error.
>quintillion monkeys in sextillion years
you just described evolutionary algorithms (which mainly suffer from the alignment problem)
anyway, I found this interesting:
https://news.northwestern.edu/stories/2023/10/instant-evolution-ai-designs-new-robot-from-scratch-in-seconds/
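For anyone curious, the difference from typewriter monkeys is selection pressure. A minimal toy evolutionary algorithm (nothing like the robot-design system in the link, just the core idea):

```python
import random

random.seed(1)

TARGET = "HELLO WORLD"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # number of characters already matching the target
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # each character has a small chance of being replaced at random
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

# start from a random string; each generation keep the best of the
# parent plus 100 mutants (an elitist (1+100) strategy)
best = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while best != TARGET:
    generation += 1
    best = max([best] + [mutate(best) for _ in range(100)], key=fitness)

print(generation, best)
```

Blind random typing would need about 27^11 (roughly 5 x 10^15) tries; with selection it converges in a handful of generations. The catch, as noted above, is that someone has to write the fitness function, which is where the alignment problem lives.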
You can get the experience of replacing your doctor with thousands of shit AI right now by looking up your symptoms on a search engine and treating that like a diagnosis.
If you have a headache they'll tell you you have brain cancer, syphilis and gender dysphoria.
>Would you trust a doctor with a 3% hallucination rate
3% mistake rate wouldn't even be that bad for a doctor... Medical niggas be making bad diagnoses all the time.
>Would you trust a doctor with a 3% hallucination rate to diagnose your cancer?
Given that the majority of doctors shilled and got the "vaccine", I'll trust an AI over them any day.
>Would you trust a doctor with a 3% hallucination rate
Anyone with enough medical knowledge and skepticism to tell whether their doctor is giving correct answers would say that's much better than what they'd get from the average human one, and unlike humans, the quality of its results will actually get exponentially better and cheaper as the tech improves.
Adding to that, human doctors rarely come up with solutions to multidisciplinary problems because they tend to study just one medical specialty. Most of the time, when a patient's problem involves shit that requires the expertise of, say, periodontists, orthodontists, and otorhinolaryngologists simultaneously, he has to search for the solution himself or face a high likelihood of medical negligence.
You've never had to work with any government or corporate technology, have you? Bugs up the wazoo, terrible ancient UI schemes, utterly unintuitive. If AI replaced every single coder for "sensitive roles", nobody would notice the difference other than a reduction in profits at the nearby soda machine.
Back in ye olden days before mass production we had these things called "Guilds" that filled the same role.
One of their defining traits was the idea that a guild member must never produce(sell) a defective product, because a single faulty product would bring shame on all guild members, causing loss of reputation which back then was everything.
So every product was first made by hand, and then double and triple checked by both the master that made it and other guildsmen.
As a result, guilds were able to achieve a 99.99%+ rate of high quality output.
Even today, pretty much no industry can hope to match this level of quality control. Many are profitable at a 10% rate of failure, and some Chinese ones are over 30% despite being the de facto leaders of their particular field. And at the same time, we don't have guilds anymore.
People learn to accept defective goods in their life.
Quantity beats quality.
Availability beats quality.
Speed beats quality.
You're fucked.
>As long as LLMs and similar have a >5% error rate, they won’t be seriously adopted in any sensitive role
true. gonna wait a couple more months until then
The error rate only needs to get below human levels. I say this about self driving; it doesn't need to be perfect it just needs to be better than humans; especially if it isn't prone to usual human error like inattention or bad reaction times.
>As long as LLMs and similar have a >5% error rate
unironically way better than the average dev
the real reason we wont see serious replacement any time soon is because middle managers NEED lower level workers to justify their positions and to use as a scapegoat when things go wrong. if you get rid of the devs, then managers become bot wranglers and they'll be the ones in trouble when shit is broken or late. they need us
There will always be a scapegoat while upper management doesn't keep a tight rein on middle management. If the AI writing code fucks up just point the finger at whoever made the AI. Simple.
Error rate can be reduced to <0.1% by using a second LLM that validates the output of the first one.
Current LLMs are extremely good at finding errors when you point out that something is not correct.
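Worth being precise about what that "<0.1%" claim assumes. If the generator errs at rate E and the reviewer catches a fraction C of those errors without breaking correct output, the residual rate is E * (1 - C). A quick check, with all numbers assumed for illustration:

```python
# Residual error after a reviewer model checks the generator's output,
# assuming the reviewer's misses are independent of the generator's errors
# (the optimistic case; shared blind spots break this assumption).
def residual_error(gen_error, catch_rate):
    return gen_error * (1 - catch_rate)

for catch in (0.50, 0.90, 0.98):
    print(f"catch {catch:.0%} -> residual {residual_error(0.05, catch):.2%}")
```

Starting from a 5% generator, the reviewer has to catch 98% of all errors, independently of how the generator fails, before you even reach 0.1%. That independence assumption is the part doing all the work in the claim.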
>the problem is not computing power it's datasets
>people underestimate by orders of magnitude the amount of data needed to train a self-driving car for example. Tesla has enormous amounts of driving data and it's not even remotely enough.
Driving datasets in recent years are purely virtual. The AI learns to drive in a simulated environment with realistic graphics powered by Unreal Engine.
>Would you trust a doctor with a 3% hallucination rate to diagnose your cancer? Or to feed you key statistical info about your business? Or to drive you?
Medical malpractice + mistakes cause hundreds of thousands of deaths every year. And misdiagnosis rate in particular is as high as 15-20%
>Would you trust a doctor with a 3% hallucination rate to diagnose your cancer?
https://web.archive.org/web/20130918190701/http://www.medschoolhell.com/archives/
>Would you trust a doctor with a 3% hallucination rate to diagnose your cancer?
more to the point
https://www.ncbi.nlm.nih.gov/books/NBK499956/
"The overall misdiagnosis rate is approximately 10% to 15%."
no. a CS degree is as good as any other for a lot of jobs, and better for quite a few. I don't care that they're torching the payrolls and getting rid of the "day of work at company" tiktokers, possibly some pajeets, etc. If you graduate and all you are is a code monkey, then your school has failed you. i'm coming from an industry (finance) that was already dealing with massive operational layoffs, so it's nothing new to me. i'm sure there will be some AI wrangling expert jobs or AI design/implementation jobs that will want a CS degree and 5 years' relevant experience just the same
this is a pretty sensible point. many industries have little/no margin for error and will not tolerate that kind of risk out of their production
Times are definitely changing. If you are a respected coder in your company then you have nothing to fear, but if you are a junior like me it can get ugly. My job wasn't replaced immediately; rather, it became convenient to do so. I never got along with my boss, but I was tolerated because we needed programmers. As AI was used more and more by my colleagues, the company had more room for dismissals. One quarrel with my boss later, I was fired, and my work is now done through AI supervised by my colleagues. I admit that I'm not very likable to most people, but it was tolerated until AI came around. AI lowers the ceiling for what is accepted, and if you are already struggling, chances are you're getting replaced soon.
>but I was tolerated because we needed programers.
bro, it's dev life. job hopping is part of it. real AI is so fucking far away, it won't happen for years. your career is safe. you always had a ticking time bomb as a job; AI didn't change a thing here.
>mentally ill tranny >fired >cause filed as ~~*redundant*~~
that's the real reason. AI isn't going to magically find your broken shitcode like this guy claimed he did for a living, the company just saw him/her/whatever as a cost center and found an excuse to let him go.
Not really but I'm about to retire and have seen dozens of these fads come and go where managers were going to be able to replace programmers with some tool. What ends up happening is that they have to hire programmers who understand how to use the tool to program because managers give up quickly once they start getting errors and don't know what to do next.
The first time I saw SQL, it was presented to me as a way for managers to get rid of "expensive systems analysts", because now managers could ask the computer the questions they wanted answered in an easy-to-understand, English-like language. That lasted for half an afternoon, and then it was turned over to the programmers.
Maybe this time it really will be different but more likely it will be just another tool and there will be some expectation from management for developers to have higher output, which might or might not happen. Either way, management will likely still be dependent on some kind of "developer" to work with the tool so the manager doesn't have to.
this sounds right. Makes me think of a particular tool for engineers that uses a visual coding language. The idea was that the engineers could do some simple scripting without needing to be programmers, because the interface was all big pretty pictures designed for toddlers (eerily similar to MIT Scratch). Naturally, it turned out engineers don't know scripting in any format, so it was inevitably turned over to the programmers, who are now forced to deal with a visual programming language designed for toddlers.
Having tried AI out for coding anecdotally, you still have to know what to ask, issue parameters, explain exceptions, etc.
I knew a girl who went from part-time office help who couldn't spell SQL to sql dev with a major oil company. SQL is not difficult initially but to get the most out of a query without looking like MS Access-generated code, it takes a bit more knowledge.
Yes, and I'm very surprised that this isn't being discussed more.
We can debate how good these AIs are and how close they come to human ability, as if this matters. But it doesn't really matter. Since when do business people make decisions based on whether something actually works?
Most code out there today, written by humans is utter dogshit. AI doesn't need to be perfect, it just needs to be not horrible to win.
This change is already underway, and the only reason you don't hear more about it is that big tech companies are afraid of software engineers getting wise to it and organizing their labor. It also invites all kinds of regulation and oversight. So this is all happening on the down-low.
Consider how many financial crises have been caused by careless use of automated trading systems, or putting faith in bad data analytics. Whatever the case. Shit hits the fan and then we move on and no one really cares. We routinely get smacked in the face with the fuckups of technological automation and we just move on with life.
I can't express how tired I am of threads where some guy says some shit on twitter and we are supposed to discuss it as some societal phenomenon, like some guy whose "sister works in school" talking about all the kids having seizures from tiktok or whatever
If you believe this makes any sense, you don't know shit about AI.
This guy is delusional, and I'm not surprised he got fired, given how little he understands a field built on software development.
Likewise. Codefags had their day in the sun, which they spent gloating loudly and constantly about their superior and unassailable career so it fills me with joy to see them suffering by any means.
im thinking of trying to create a new position for myself at my job
AI Productivity advocate or something
but it will just be me making up dumb rules about how people can use AI at the company and gaslighting them
I think this is the future of tech work and im really excited
Also, for all those insecure wagecucks here: computers have not removed the need for accountants, they just shifted and modified the work the human has to do.
Keep in mind that AI generated code cannot be copyrighted or licensed. It is public domain at creation. This doesn't mean you have to disclose the source code but you also can't go after anyone who gets that source code, even through decompilation. Trade secret law also doesn't apply to public domain code.
>Do you feel your job is under threat?
Nah. An AI can't write CAD software and complex geometry algorithms on its own. I have at least 10 years before that. After that, my new job will be instructing the AI.
>Companies are firing coders and replacing them with AI bots.
My place has been trying to find developers for over a year. The issue is that all of the ones who have applied have been next to useless. You show them code that works with SQL and it's like they're seeing an alien world. Asking them to make changes is like parting the sea.
Your company has a retarded hiring pipeline filled with women who keep you from ever seeing quality applicants. You probably have agile, DRY and clean code as deal breaking requirements. You also likely shit your pants in fear when you see a resume with an employment gap, a graduation year before 2000, or no public github repos.
There are plenty of quality software developers in the job market; companies are just pants-on-head retarded when it comes to identifying who they are, or pass on them for reasons not related to ability.
>But no one I interviewed could even fizzbuzz
Yeah, because your retarded hiring pipeline filtered out everyone who could.
>Head of HR walks around without any pants on
>You look
>Anon, report to my office immediately for termination of your employment!
And companies wonder why they can't find or retain productive employees.
Only middle management thinks these solutions are viable, the same way only middle management thinks outsourcing to 70IQ pajeets is viable. Everything is slowly unravelling.
No, I work in CNC, a field that has been automated and computerized for so long that there is still terminology built into the controls referring to punch-card and tape memory, even though it's all USB, network, and LCD screens on new machines. We also have manual machines at work, because computers will never replace people completely. There has also been industry-specific software that converts drawings into machine code for decades, so I, as a programmer, can be replaced, right? No, I still have to adjust code, alter numbers, etc., because it's cheaper for my company to hire a programmer than to outsource all their drawings to a third party who automates it. Then there is the issue of IP theft and privacy. We work with customer data, and some things we can't send to third-party contractors, so that's just another thing that can't be blindly automated.
Lastly, and this applies to any field of automation, someone has to be there to make sure the automated process is doing what it's supposed to do. AI can make code, but it can't tell you that the code is working the way it's supposed to. Someone still has to design the testing that confirms it.
If the person in OP's screenshot is being unironic, think of how sad (hilarious) it is. If they could "clone" your work with telemetry, that means you weren't doing much that was special, and they had no plans of making you do anything new. You were already an NPC at the job. AI is replacing the people who don't carry their own weight and raising the salaries of the people who do (if they're smart enough to demand it).
>someone has to be there to make sure the automated process is doing what it's supposed to do
this is a big one for blue collar work. semi trucks may be automated someday, but they will have 'drivers' for a long while after that. paying someone 25 bucks a hour to make sure the million dollar truck doesn't cream a schoolbus is cheap insurance
we have a hiring freeze because of the interest rates, like most tech companies
and we're small, like <200 people
we started buying 4090s and running LLMs because it beats hiring new engineers for the short term
I recently did an intervention for a government entity at a reasonably high level. I went with my technician to look at a badly configured, badly secured VPN router on a sensitive network. They had lost the admin password. Their local cybersec guy seemed desperate to look competent and said, "just one minute, I know a way to recover it using the DNS cache on a workstation, it's a hacker technique I learned".
The government employees around us (supposed to have at least some college education) were mostly painting their fingernails and similar activities. The combined IQs in the room (us excluded) didn't seem to add up to a three-digit number.
My technician gave a 'told you so' look and I reported a generic HR / competences problem.
I've worked in the public sector, in a situation probably not too far from those guys. Pay was half what I would've made elsewhere (though more interesting as a young graduate at first).
But the benefits compensated for a lot of the difference. I'm still enjoying some of them now: subsidized transport and hotels even for private trips, paying only a third of normal rent in a relatively high-priced area, basically free food at the workplace, lots of the usual perks, etc. I had the same standard of living as friends paid twice as much.
Yeah, I understand why people go into the public sector, but I also understand that because the benefits are somewhat guaranteed regardless of what you do at the workplace, you've got to think somewhat differently about security. I ain't saying nobody gives a fuck and everyone would be ready to compromise the whole company, but it's still something to consider. In the private sector, people are somewhat more aware of security-related stuff and it's much easier to get them on board to do certain things.
That's certain. You basically have to commit sexual assault to get fired in most civil service jobs. Short of that, you'll just get transferred to a less challenging job somewhere and most probably keep your pay. There are tons of professional do-nothings (no risks, no attention, don't talk about problems) because that gets you pretty far if you transfer or get promoted every few years and know how to present things in the best possible way for your hierarchy.
Of course doing nothing involves lots of Excel tables, reporting, Powerpoints, meetings and decisions, action plans, etc.
Top kek.
I used to propagate stupid information like that in the early 2000s by forwarding forged emails to dozens or hundreds of boomers and watching the reactions. Good times.
I'm gonna be working for a casino in Oklahoma as an IT guy in the future. Since it's a government job, and since I'm native (I get taxpayer paid universal healthcare for being Cherokee), and since AI can't fix a computer, I'll probably have a job for the next hundred years.
>AI can't fix a computer
If it's a hardware problem, a janitor could put it in a cardboard box and ship it off to be stripped for parts.
If it's a software problem, a third worlder could remote in to apply the necessary patch.
Those interventions will all be recorded and used as training data for an AI which will replace them.
Why do tradecucks act as if AI managing to replace every white-collar worker in the next 10 years doesn't mean they'll be replaced 3 years later, or earlier? Yes, robotics is not advanced enough, but replacing people in tech boosts that research. Yes, robots are expensive for now, but their main cost is still in manufacturing rather than materials, which can be mitigated by economies of scale. They say coordination and dexterity are impossibly complex for a machine and that you need a whole brain for them, while at the same time dexterous animals exist with a tenth the brain of a human. And besides, white-collar workers being fired means more people will flood into manual labor and your own field will get oversaturated, so you still lose out.
White collar workers in america are afraid to get their hands dirty and can't lift shit. Goes double/quadruple for women in white collar positions, which is why they sleep their way into keeping their jobs when they have that choice.
LLMs live and die on input. They can spit out words like the average retard because the companies that train them have Reddit and every other social media platform to train them on.
No one talks about assembly, thus it is unknown to the machines.
Clearly. And how does one "gather telemetry"? Telemetry is the method, not what they're gathering, which the Twitter guy should know if he was actually in the field. idk, I think it's gay and fake.
That is interesting. I wonder if anyone has given an AI free rein inside a virtual machine with a defined but very open-ended task? It would be cool to see it break the OS or use it in unthought-of ways.
Pretty sure there was a paper about that last year. IIRC they gave it some budget and access to an AWS EC2 instance, with the possibility to make more instances. It tried to do some funky stuff like scamming people but didn't actually get far.
https://cdn.openai.com/papers/gpt-4.pdf
It was very entertaining. At one point it tried to hire someone on Taskrabbit to solve a captcha for it. The guy went "why? Are you a robot?" and the AI went "no I'm visually impaired".
Good, I can't stand the retarded zoomer juniors. They have no will to learn, keep trying retarded shortcuts, and will never be able to work independently without being a burden to another dev. I'd much rather have an AI tool that I can tweak for my own needs.
This is part of a cycle that recurs about every 12 years. Companies build up their tech departments during the economic boom, trying to outcompete each other. Then there's a downturn and they try to save costs, so they cut people and try to replace them with AI/offshore/street-shit/etc. It works for about a month, then it goes to shit slowly, then very quickly, over the course of 1-2 years. Then they spend more money than they would have by keeping their people, to hire new (white) men to get it all fixed and back on track. And the cycle begins anew.
AI may eventually replace coders, but it's not 1 year off, it's 15+, if ever (AI is limited by human intelligence, we can't build something smarter than ourselves).
>(AI is limited by human intelligence, we can't build something smarter than ourselves)
Do you think that about chess engines too?
Aren't humans limited by the intelligence of their society too, which means no one today is smarter than anyone living 10,000 years ago?
>Chess is a solvable problem.
There are more possible games of chess than there are particles in the universe, so no, it literally is not solvable.
>Way to demonstrate your complete lack of understanding.
Maybe take a look in the mirror, genius.
By competitively do you mean "as a registered member of a national chess federation"? If so then you are correct, but that's true of 99% of people, so not really a bold guess. Anyway, it's not relevant to the question of whether Chess is solvable or not, and even less relevant to the question of whether AIs can learn more than humans.
>There are more possible games of chess than there are particles in the universe, so no, it literally is not solvable.
Wrong. You don't actually have to store every single position and the best move. A general algorithm is all you need.
>A general algorithm is all you need.
To "solve" chess you have to know the optimal move for every given board state. There is no proof that such an algorithm can exist in memory and run time bounded by the size of the universe. How would you compute the "correct" opening move unless you had explored the entire game tree and guaranteed that your opponent doesn't have forced mate sequences for all possible subsequent positions at depth 100?
Othello is solved. Othello has 10^58 game positions. chess will soon follow.
https://arxiv.org/abs/2310.19387
>Othello is solved. Othello has 10^58 game positions. chess will soon follow.
A conservative lower bound for the game-tree complexity of chess is 10^120.
That means we need about 60 orders of magnitude of hardware and/or algorithmic progress.
I can't rule out the possibility of that happening "soon", but I think it would be unprecedented.
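Taking the 10^58 and 10^120 figures above at face value, here's a rough sense of what "soon" would require if it came from hardware alone, assuming (generously) a Moore's-law doubling of capacity every 2 years:

```python
import math

# Orders of magnitude separating solved Othello (10^58 positions) from the
# conservative chess lower bound (10^120), converted into doublings of capacity.
orders_needed = 120 - 58
doublings = orders_needed * math.log2(10)  # one order of magnitude ~= 3.32 doublings
years = 2 * doublings                      # at one doubling every 2 years

print(round(doublings), "doublings, ~", round(years), "years")
```

That's roughly four centuries of uninterrupted exponential hardware growth for a brute-force gap, so any "soon" scenario has to come from algorithmic shortcuts, not raw compute.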
There's a lot of debate over whether it's truly solved or not as they did a lot of tree pruning to "get rid of irrelevant configurations".
i love how not a fucking soul in this 100-IP thread pointed out that the reason he got replaced was the telemetry data, not just the "AI"
that itself is fucking horrifying: telemetry is now being used for that particular purpose. it's not as straightforward as "le ai replaces us", it's "they replaced us because we let them gather the data to make it possible for decades"
This is clearly a joke.
I helped with a pilot program for github copilot at my company. We gave 1k users access for 50 business days. Based on the outcome, we have decided to not move forward with adoption.
I am confident that anyone who actually tries this will reach the same conclusion.
>Do you feel your job is under threat?
no because I'm an embedded software engineer, not a coder.
writing the code is the easy part so I really don't give a shit.
if anything it would be cool sometimes to have a software that can write the boring parts for me.
it will be an awfully long time before some software can automate months of back and forth with hardware vendors, or write firmware for hardware that doesn't exist yet from Chinese-to-English machine-translated docs that make no sense, forcing me to reverse-engineer the hardware myself.
webscripters are doomed as their job is doable by a 10yo kid so I can understand why they're coping hard...
they knew it was coming so why they didn't react sooner is beyond me
A.I is the enemy but everyone is being groomed into thinking it's a godsend. It's all fun and games for now but sooner or later you will all see it was part of removing useless eaters from society all along
>Companies are firing coders and replacing them with AI bots. Do you feel your job is under threat?
Not really, I mean AI isn't going to grasp more than 200 lines of code, let alone more than 50k lines of code divided into multiple modules that occasionally generate errors that even our senior developers take ages to understand and solve.
How can you trust an AI to do code reviews when it has no domain knowledge? I can't tell you how many PRs I've rejected where the Jr dev is trying to do something that looks fine in the code but doesn't really make sense domain-wise.
the day someone makes AI that codes as well as I do, I am starting my own company, and putting my employer out of business. it's the businesses who should be afraid of AI, not coders.
>I'm a good programmer but I would be terrible at running a company
git gud at it.
>I don't know how you expect this to work out
I am moving on to managing a project, and cloning myself as a coder times 100.
My product owner and my boss have been dealing with customers for a decade+ and know exactly how to bullshit them or talk them down from features they'll regret. How quickly do you expect me to catch up with them?
I know people who freelance and it sounds miserable
i use gpt4 for work and 80% of the time, the code doesn't work the first time. I ask it to fix a specific bug, and it will create a separate one. You're literally falling for marketing material, but keep coping, NEETs.
It is not only bugs.
Furthermore, if the scope of what you code is so small that you can easily put it in three sentences and toss it to chadgpt, you are a bad developer to begin with.
Normally the complexity and the decisions are at such a high level that any "AI" would just bail out.
Now is a good opportunity to make a Consultant Firm that specializes in unfucking AI code.
I foresee tons of business when software firms become dumpster fires due to AI.
who?
Everybody. They are using "AI automated your job" as a pretext to get rid of all their diversity hires so they don't get sued.
they spend most of their time posting on Twitter/X, or whatever it's called now, rather than writing code; that explains why they got fired.
In my job, AI is awesome because it really is better than the human alternatives for most applications, and it makes deployment incredibly simple. Programming has always had this meme about coders socializing, idling, reading xkcd comics, browsing social media and Stack Overflow, etc., but never actually doing their jobs.
They won't last long, depending on the quality of the solutions those LLMs provide. Humans will be back.
They don't have the time or understanding to suss out when it's hallucinating.
T. Paid coder
""AI"" is like a giga pajeet, and will confidently "validate" its own bullshit that is entirely wrong when it spazzes out
lmao you fucking retards are such small-minded retards. You could train an LLM to look for very specific bugs and code, then have another look for specifics, then have another check that, and so on. This is one scenario I could see happening. Is this how normal low-IQ people like you function? It's absolutely hilarious how small people think in such small ways.
>you have low IQ
>actually, the low IQ grog monkey is me
you write like terry davis breathes and it's annoying to read
do you even know who terry davis is? doubtful.
stfu esl
ya, ok pajeet. i will not redeem. kys
...what is this take? It genuinely seems like you've never coded in your life. How would you even train it to find bugs? Simple bugs I can see, but what about the more common bugs that need the context of the codebase as a whole? How do you make sure the bugs it fixes don't unintentionally create more bugs because it doesn't fully understand what it's doing (something I constantly have to wrangle ChatGPT over)? How would you train that reliably without exponential electrical and computing power we probably won't see for decades (hell, maybe centuries)? What the hell does "look for specifics" even mean? Specifics of what? And based on what? And then have "another check" that is just as resource-hungry as the last? Where are you getting all this energy to power this shit at scale?? Microsoft is in the negative because this shit is so fucking costly as it is, and you want to add more for results you can't even be sure of??? I don't even understand how you would come to this conclusion. I feel like I'm talking to the same crypto, Tesla, SpaceX bros about the reality of what they're simping over again...
What I was implying is that the AI will write the code, then run the code; if it finds the code can't run, it will try to fix it with another AI that is more specialized in how or what it will be running.
>it doesn't fully understand what its doing
Another AI with a better explanation will help supplement it, for instance.
>that is just as resource hungry as the last? Where are you getting all this energy power this shit at scale?
what makes you think energy even matters? Most of this green bullshit is face value for retards. If, for instance, the US feels it's falling behind China in AI research, they'll allow drilling for oil and coal and kill every animal on earth just to get the advantage.
>1 ai cant solve the problem?
bro just make an ai to help the first ai, that will fix the problem!
>2 ais cant solve the problem?
bro just make an ai to help the second ai, that will fix the problem!
>3 ais cant solve the problem?
It's more than just that, and I am being a bit hyperbolic, but you should basically understand what I am getting at.
right, but I can imagine you could have specific jobs for each AI, each trained on something more specific than the others. If training LLMs on everything takes a lot, why not have a bunch of them doing different, very specific things.
At that many steps the code is basically unnecessary. If you're relying on AI to be able to automate the system then the system might as well be the AI. No reason to train it to write code.
>muh low IQ
>shit take and shill
kys
if you're in a team of say, 10 people, your whole team will never be replaced, but your employer will be able to get the same business value by just paying the 2 or 3 best guys in your team, and having them use/supervise AI pajeets, both giant LLMs, and fine-tuned ones for the particular company.
AI will replace pajeet and troon coders. You already need cis het white males to fix pajeet and troon code. Since AI does the same thing as pajeets and troons but faster, they will need to hire more cis het white males.
true, the bugs will eat some companies that will over-implement AI.
>cis het white males
trying so hard to fit in I see?
>troon coders
Troon coders are highly intelligent and often work in low-level, niche areas. They will be the last to be replaced.
>highly intelligent
>i'm a woman!
>pulls out cock
come on man
Being weird and being intelligent are very much not exclusive. Between furfags and trannies there's a lot of clever development going on, they're just subtypes of autist.
Cope
>cis het white male
But most trannies are autistic and autistic people make good code
As much as I dislike troonism, every troon I've ever worked with (which is quite a few) has been leagues ahead of the "cis het white males", whatever the fuck that is. I think it's the autism, but I can't be certain.
Trannies are actually very good coders. Agreed about pajeets.
All that shit was written by israelites and in the trash bin they belong.
>study disproves my preconceptions
>it must be the joooos!!!
please have a nice day already, you braindead waste of oxygen
The fact the same anon replied to you multiple times is sad and funny in a way.
Anon troons write code 100x better than pajeets. It's not even funny
Code review is not coding dumbass zoomer fag™
As long as LLMs and similar have a >5% error rate, they won’t be seriously adopted in any sensitive roles. Subtle accumulation of errors upon errors will lead to catastrophic results for early adopters. The lack of accountability makes fixing those errors harder as no one quite understands what the LLM was going for or exactly where it began hallucinating.
Would you trust a doctor with a 3% hallucination rate to diagnose your cancer? Or to feed you key statistical info about your business? Or to drive you?
We will forever be stuck in Tesla self-driving-style, almost-there purgatory.
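The "accumulation of errors upon errors" point is just compounding. A minimal sketch, assuming each step of a pipeline is independently 95% reliable:

```python
def chain_success(per_step_error: float, steps: int) -> float:
    """Probability an n-step pipeline produces no error anywhere."""
    return (1.0 - per_step_error) ** steps

for n in (1, 10, 50, 100):
    print(f"{n:>3} steps: {chain_success(0.05, n):.3f}")
# 1 step is fine (0.950), but 10 chained steps drop to ~0.599,
# 50 steps to ~0.077, and 100 steps to ~0.006.
```

That's why a 5% per-step error rate, which sounds tolerable for a single answer, becomes catastrophic the moment you chain outputs into inputs.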
Cool theory, but what if instead of just one AI doctor I have a thousand and go with what most of them say? What if my vehicle's AI driver is actually six minds running in parallel, and the vehicle does what most of them agree on? It seems like this stuff has a very easy fix: just throw more processing power at it. Could be fixed literally today.
the problem is not computing power it’s datasets
people underestimate by orders of magnitude the amount of data needed to train a self-driving car, for example. Tesla has enormous amounts of driving data and it's not even remotely enough. How many times a day do you run into a sideways deer at a particular speed in thick fog? How do you machine-learn that? You'd have to program it almost case by case. And yet the consequences of being wrong are catastrophic. Reality has an immense number of possible combinations that are statistically insignificant.
Nah buddy you're retarded; deer isn't killing Tesla drivers. Just grab the different AI drivers and stick them in the same car with a bigger computer. Problem solved.
What the fuck are you even saying?
don't mind him, he's just GPT-Bot.info
Before any event occurs, pause time … blah blah blah … event horizon … train on all possibilities … quantum computers …
Problem solved
You do realize that the errors are not just "whoops, the AI randomly decided to be wrong somehow" but are repeatable? Yes, there are some errors like it saying 95% of the time that Italy is in Europe and 5% of the time some random dogshit, but that is just one type of error. What if none of the car AIs were taught how to handle deer, so all of them drove into one? If the problem with AI today were just that it is an omniscient oracle of truth that sometimes decides to be wrong at random, you could make it code anything by asking it for the same code 20 times and running each version to see which works best. But it does not work like that; the thing is still severely limited, the same way that if you ask a 5-year-old to figure out the theory of relativity, you still won't get it even if you ask him a thousand times. Maybe a quintillion monkeys in a sextillion years would write it for you on a typewriter, but that's something completely different.
>Maybe quintillion monkeys in sextillion years would write it for you on a typewriter
Not even, since this tard thinks you can just use a majority vote to erase error.
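The majority-vote idea isn't crazy on its own, but it only helps when the models' errors are independent, which is exactly what these anons are arguing about. A toy simulation (the error rates are made up for illustration, not measured from any real model):

```python
import random

random.seed(42)

def majority_vote_accuracy(error_rate: float, n_models: int,
                           trials: int = 50_000) -> float:
    """Accuracy of a majority vote over models that err independently."""
    wins = 0
    for _ in range(trials):
        # Each model is independently right with probability 1 - error_rate.
        correct_votes = sum(random.random() > error_rate
                            for _ in range(n_models))
        wins += correct_votes > n_models // 2
    return wins / trials

# Seven independent 3%-error doctors: the vote is almost never wrong.
print(majority_vote_accuracy(0.03, 7))

# But if every model shares the same blind spot (e.g. none was trained on
# deer), all seven fail on the same inputs: the ensemble error stays at 3%
# no matter how many copies you run, and the vote buys you nothing.
```

That's the crux: voting erases random error, not systematic error, and models trained on similar data tend to share their systematic errors.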
>quintillion monkeys in sextillion years
you just described evolutionary algorithms (which mainly suffer from the alignment problem)
anyway, I found this interesting:
https://news.northwestern.edu/stories/2023/10/instant-evolution-ai-designs-new-robot-from-scratch-in-seconds/
AI driving doesn't need to be perfectly safe, it just needs to be as safe as the average human driver.
>You’d have to program it almost case by case.
>t. doesn't understand AI
Driving is a bad example because humans have around ~1 GB/s of visual throughput, compared to 4K video at ~6 MB/s.
>expects different results by applying the same method N times
lm@0
>I have a thousand and go with what the most say?
Regression towards the mean. Sounds shitty, honestly.
You can get the experience of replacing your doctor with thousands of shit AI right now by looking up your symptoms on a search engine and treating that like a diagnosis.
If you have a headache they'll tell you you have brain cancer, syphilis and gender dysphoria.
>Would you trust a doctor with a 3% hallucination rate
3% mistake rate wouldn't even be that bad for a doctor... Medical niggas be making bad diagnoses all the time.
yeah doctor is wicked funny as a comparison
although that's also why I don't go to doctors. so maybe he's onto something
>Would you trust a doctor with a 3% hallucination rate
That's better than most of them.
>Would you trust a doctor with a 3% hallucination rate to diagnose your cancer?
Given that the majority of doctors shilled and got the "vaccine", I'll trust an AI over them any day.
>Would you trust a doctor with a 3% hallucination rate
Anyone with enough medical knowledge and skepticism to know whether their doctors are giving correct answers would say that's much better than what they would get from the average human one, and unlike humans, the quality of its results will actually get exponentially better and cheaper as the tech improves.
Adding to that, human doctors rarely come up with solutions to multidisciplinary problems because they tend to study just one medical career. Most of the time, when a patient's problem involves stuff that requires the expertise of, say, periodontists, orthodontists and otorhinolaryngologists simultaneously, he has to search for the solution himself or face a high likelihood of medical negligence.
You've never had to work with any government or corporate technology, have you? Bugs up the wazoo, terrible ancient UI schemes, utterly unintuitive. If AI replaced every single coder for "sensitive roles", nobody would notice the difference other than a reduction in profits at the nearby soda machine.
AI code won't even compile.
Back in ye olden days before mass production we had these things called "Guilds" that filled the same role.
One of their defining traits was the idea that a guild member must never produce(sell) a defective product, because a single faulty product would bring shame on all guild members, causing loss of reputation which back then was everything.
So every product was first made by hand, and then double and triple checked by both the master that made it and other guildsmen.
As a result, guilds were able to achieve a 99.99%+ rate of high quality output.
Even today, pretty much no industry can hope to match this level of quality control. Many are profitable at a 10% failure rate, and some Chinese ones are over 30% despite being the de facto leaders of their particular field. And at the same time, we don't have guilds anymore.
People learn to accept defective goods in their life.
Quantity beats quality.
Availability beats quality.
Speed beats quality.
You're fucked.
Complete strokepost.
fr no cap ong
wow the cope is strong. programmers are just as deluded as artists; ironic, since they replaced themselves lol
>As long as LLMs and similar have a >5% error rate, they won’t be seriously adopted in any sensitive role
true. gonna wait a couple more months until then
The error rate only needs to get below human levels. I say this about self-driving too: it doesn't need to be perfect, it just needs to be better than humans, especially since it isn't prone to the usual human errors like inattention or bad reaction times.
>As long as LLMs and similar have a >5% error rate
unironically way better than the average dev
the real reason we wont see serious replacement any time soon is because middle managers NEED lower level workers to justify their positions and to use as a scapegoat when things go wrong. if you get rid of the devs, then managers become bot wranglers and they'll be the ones in trouble when shit is broken or late. they need us
There will always be a scapegoat while upper management doesn't keep a tight rein on middle management. If the AI writing code fucks up just point the finger at whoever made the AI. Simple.
amazing bait everyone fell for it great job
Error rate can be reduced to <0.1% by using a second LLM that validates the output of the first one.
Current LLMs are extremely good at finding errors when you point out that something is not correct.
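For what it's worth, the generate-then-verify loop this anon is describing looks roughly like this. `call_llm` is a hypothetical stand-in for whatever completion API you use (not a real library call), and the <0.1% claim only holds if the reviewer's blind spots are independent of the generator's:

```python
def generate_with_review(call_llm, task: str, max_rounds: int = 3) -> str:
    """Sketch of a two-model loop: one LLM writes, a second one critiques."""
    code = call_llm(f"Write code for this task:\n{task}")
    for _ in range(max_rounds):
        verdict = call_llm(
            f"Review the following code for bugs. Reply OK if none:\n{code}"
        )
        if verdict.strip() == "OK":
            return code
        # Feed the critique back to the generator and try again.
        code = call_llm(f"Fix these issues:\n{verdict}\n\nCode:\n{code}")
    return code  # after max_rounds, errors may still survive review
```

If both models tend to miss the same class of bug (likely, given overlapping training data), the loop converges on confidently "validated" wrong code, which is exactly the failure mode the rest of the thread is complaining about.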
Driving datasets in recent years are largely virtual. The AI learns to drive in a simulated environment with realistic graphics powered by Unreal Engine.
>Would you trust a doctor with a 3% hallucination rate to diagnose your cancer? Or to feed you key statistical info about your business? Or to drive you?
Medical malpractice + mistakes cause hundreds of thousands of deaths every year. And misdiagnosis rate in particular is as high as 15-20%
>Would you trust a doctor with a 3% hallucination rate to diagnose your cancer?
https://web.archive.org/web/20130918190701/http://www.medschoolhell.com/archives/
>Would you trust a doctor with a 3% hallucination rate to diagnose your cancer?
more to the point
https://www.ncbi.nlm.nih.gov/books/NBK499956/
"The overall misdiagnosis rate is approximately 10% to 15%."
><50%
You are in for a big surprise.
fuck, who is this? white girls in a kimono is god damn sexy
no. a CS degree is as good as any other for a lot of jobs, and better for quite a few. I don't care that they're torching the payrolls and getting rid of the "day of work at company" tiktokers, possibly some pajeets, etc. If you graduate and all you are is a code monkey, then your school has failed you. i'm coming from an industry (finance) that was already dealing with massive operational layoffs, so it's nothing new to me. i'm sure there will be some AI wrangling expert jobs or AI design/implementation jobs that will want a CS degree and 5 years' relevant experience just the same
this is a pretty sensible point. many industries have little/no margin for error and will not tolerate that kind of risk out of their production
Times are definitely changing. If you are a respected coder in your company then you have nothing to fear, but if you are a junior like me it can get ugly. My job wasn't replaced immediately; rather, it became convenient to do so. I never got along with my boss, but I was tolerated because we needed programmers. As AI was used more and more by my colleagues, the company had more room for dismissals. One quarrel with my boss later, I was fired, and my work is now done through AI observed by my colleagues. I admit that I'm not very likable for most people, but it was tolerated until AI came around. It lowers the ceiling for what is accepted, and if you are already struggling, chances are you're getting replaced soon.
>Times are definitely changing.
Yeah, webshitters are getting cancelled and they don't like it. Good.fucking.riddance to these parasites.
>but I was tolerated because we needed programmers.
bro, it's dev life. job hopping is part of it. real AI is so fucking far off, it won't happen for years. your career is safe. your job was always a ticking time bomb; AI didn't change a thing here.
>mentally ill tranny
>fired
>cause filed as ~~*redundant*~~
that's the real reason. AI isn't going to magically find your broken shitcode like this guy claimed he did for a living, the company just saw him/her/whatever as a cost center and found an excuse to let him go.
Not really but I'm about to retire and have seen dozens of these fads come and go where managers were going to be able to replace programmers with some tool. What ends up happening is that they have to hire programmers who understand how to use the tool to program because managers give up quickly once they start getting errors and don't know what to do next.
First time I saw SQL, it was presented to me as a way for managers to get rid of "expensive systems analysts", because now managers could ask the computer their questions in an easy-to-understand, English-like language. That lasted for half an afternoon, and then it was turned over to the programmers.
Maybe this time it really will be different but more likely it will be just another tool and there will be some expectation from management for developers to have higher output, which might or might not happen. Either way, management will likely still be dependent on some kind of "developer" to work with the tool so the manager doesn't have to.
this sounds right. Makes me think of a particular tool for engineers that uses a visual coding language. The idea was that the engineers could do some simple scripting without needing to be programmers, because the interface was all big pretty pictures designed for toddlers (eerily similar to MIT Scratch). Naturally, it turned out engineers don't know scripting in any format, so it was inevitably turned over to the programmers, who are now forced to deal with a visual programming language designed for toddlers.
There's a place for those. I mean, LabVIEW does its job well.
Having tried AI out for coding anecdotally, you still have to know what to ask, issue parameters, explain exceptions, etc.
I knew a girl who went from part-time office help who couldn't spell SQL to sql dev with a major oil company. SQL is not difficult initially but to get the most out of a query without looking like MS Access-generated code, it takes a bit more knowledge.
Dude, I work on compilers, not write some CRUD web apps. Try replacing me.
no
Yes, and I'm very surprised that this isn't being discussed more.
We can debate how good these AIs are and how close they come to human ability, as if this matters. But it doesn't really matter. Since when do business people make decisions based on whether something actually works?
Most code out there today, written by humans is utter dogshit. AI doesn't need to be perfect, it just needs to be not horrible to win.
This change is already underway, and the only reason you don't hear more about it is that big tech companies are afraid of software engineers getting wise to it and organizing their labor. It also invites all kinds of regulation and oversight. So this is all happening on the down low.
Consider how many financial crises have been caused by careless use of automated trading systems, or putting faith in bad data analytics. Whatever the case. Shit hits the fan and then we move on and no one really cares. We routinely get smacked in the face with the fuckups of technological automation and we just move on with life.
I can't express how tired I am of threads where some guy says some shit on twitter and we are supposed to discuss it as some societal phenomenon, like some guy whose "sister works in school" talking about all the kids having seizures from tiktok or whatever
That's why you never put out your best work.
I work as a fire safety inspector
I'm safe until they have walking robots that can scan rooms and spot errors
so I'm safe until the end of the year
No, I'm a blue collar worker who codes to automate shit at home and have a little fun.
As models improve and new models are written and combined...
Yes, any job that's performed at a computer will be done by a computer.
Humans are redundant.
Only 10x genius level programmers who are solving complex problems will still be required.
>genius level programmers
literally me
If you believe this makes any sense, you don't know shit about AI.
This guy is delusional, and I'm not surprised he got fired given how little he understands a field built on software development
I can't even code in the first place but I like seeing retarded software devs get replaced by software
"Learn to code" oh no no no no no hahahahaha
Likewise. Codefags had their day in the sun, which they spent gloating loudly and constantly about their superior and unassailable career so it fills me with joy to see them suffering by any means.
where do I sign for my UBI
time to neetmax now lads
No UBI for you, doctor Pepe Mengele will give you the final vax.
im thinking of trying to create a new position for myself at my job
AI Productivity advocate or something
but it will just be me making up dumb rules about how people can use AI at the company and gaslighting them
I think this is the future of tech work and im really excited
May be doable as long as you paint your hair green and change your pronoun game.
Good luck!
I told you guys it would happen and all you did was claim that you couldn't be replaced. Now look at you, sucking dick for money.
>Do you feel your job is under threat?
What job?
Also, for all those insecure wagecucks here: computers have not removed the need for accountants, they just shifted and modified the work the human has to do.
Keep in mind that AI generated code cannot be copyrighted or licensed. It is public domain at creation. This doesn't mean you have to disclose the source code but you also can't go after anyone who gets that source code, even through decompilation. Trade secret law also doesn't apply to public domain code.
As if there's going to be any indication a given piece of code is "AI generated"
>Do you feel your job is under threat?
Nah. An AI can't write CAD software and complex geometry algorithms on its own. I have at least 10 years before that. After that, my new job will be instructing the AI.
>Companies are firing coders and replacing them with AI bots.
My place has been trying to find developers for over a year. The issue is that everyone who has applied has been next to useless. You show them code that works with SQL and it's like they're seeing an alien world. Asking them to make changes is like parting the sea.
Your company has a retarded hiring pipeline filled with women who keep you from ever seeing quality applicants. You probably have agile, DRY and clean code as deal breaking requirements. You also likely shit your pants in fear when you see a resume with an employment gap, a graduation year before 2000, or no public github repos.
There are plenty of quality software developers in the job market, companies are just pants on head retarded when it comes to identifying who they are or pass on them for reasons not related to ability.
>But no one I interviewed could even fizzbuzz
Yeah, because your retarded hiring pipeline filtered out everyone who could.
>hr roastie
would
>Head of HR walks around without any pants on
>You look
>Anon, report to my office immediately for termination of your employment!
And companies wonder why they can't find or retain productive employees.
“Learn to code”
Oops. Get fucked coooder trannies. I had to listen to 5 years of you shitters boasting about your salaries. Get a real job, the party’s over.
His job was to review code?
That's not even a real job.
Only middle management thinks these solutions are viable, the same way only middle management thinks outsourcing to 70IQ pajeets is viable. Everything is slowly unravelling.
Suck it wagies
No, I work in CNC, a field that has been automated and computerized for so long that there is still terminology built into the controls referring to punch cards and tape memory, even though it's all USB, networking and LCD screens on new machines. We also have manual machines at work because computers will never replace people completely. There is also industry-specific software that converts drawings into machine code, and has been for decades, so I, as a programmer, can be replaced, right? No, I still have to adjust code, alter numbers, etc., because it's cheaper for my company to hire a programmer than to outsource all their drawings to a third party who automates it. Then there is the issue of IP theft and privacy. We work with customer data, and some things we can't send to third-party contractors, so that's just another thing that can't be blindly automated.
Lastly, and it applies to any field of automation: someone has to be there to make sure the automated process is doing what it's supposed to do. AI can make code, but it can't tell you that the code is working the way it's supposed to. Someone still has to design the testing that confirms it.
If the person in OP's screenshot is being unironic, think of how sad (hilarious) it is. If they could "clone" your work with telemetry, that means you weren't doing much that was special, and they had no plans of making you do anything new. You were already an NPC at the job. AI is replacing the people who don't carry their own weight and raising the salaries of the people who do (if they're smart enough to demand it)
>someone has to be there to make sure the automated process is doing what it's supposed to do
this is a big one for blue collar work. semi trucks may be automated someday, but they will have 'drivers' for a long while after that. paying someone 25 bucks an hour to make sure the million dollar truck doesn't cream a schoolbus is cheap insurance
that sounds like some schizo post.
we have a hiring freeze because of the interest rates, like most tech companies
and we're small, like <200 people
we started buying 4090s and running LLMs because it beats hiring new engineers for the short term
We're about to have worse problems on our hands than simply being replaced.
why does this absolute retard get so much attention
he isn't qualified to speak on any of this in any capacity
What the actual fuck is saying? Who gives this dumbass a platform?
I regret going into cybersec, you get to realize how retarded the average person is when talking about the subject
I recently did an intervention for a government entity at a reasonably high level. I went with my technician to look at a badly configured, badly secured VPN router on a sensitive network. They had lost the admin password. Their local cybersec guy seemed desperate to look competent and said 'just one minute, I know a way to recover it using the DNS cache on a workstation, it's a hacker technique I learned'.
Government employees around us (supposed to have at least some college education) were mostly painting their fingernails and similar activities. The combined IQs in the room (us excluded) didn't seem to add up to a three-digit number.
My technician gave me a 'told you so' look, and I reported a generic HR / competence problem.
Public sector isn't paid enough to give a shit, which is a bigger security risk than whatever the fuck that was about muh biology hacker viruses
I've worked in the public sector, in a situation probably not too far from those guys. Pay was half what I would've made elsewhere (though more interesting as a young graduate at first).
But the benefits compensated for a lot of the difference. I'm still enjoying some of them now. Subsidized transport and hotels even for private trips, only paying 1/3rd of a normal rent in a relatively high-priced area, food basically free at the workplace, lots of the usual perks, etc. I had the same standard of living as friends paid twice as much.
Yeah, I understand why people go into the public sector, but I also understand that because the benefits are somewhat guaranteed regardless of what you do at the workplace, you've got to think somewhat differently about security. I ain't saying nobody gives a fuck and everyone would be ready to compromise the whole company, but it's still something to consider. In the private sector, people are somewhat more aware of security-related stuff, and it's much easier to get them on board to do certain things.
That's certain. You basically have to commit sexual assault to get fired in most civil service jobs. Short of that, you'll just get transferred to a less challenging job somewhere and most probably keep your pay. There are tons of professional do-nothings (no risks, no attention, don't talk about problems) because that gets you pretty far if you transfer or get promoted every few years and know how to present things in the best possible way for your hierarchy.
Of course doing nothing involves lots of Excel tables, reporting, Powerpoints, meetings and decisions, action plans, etc.
Top kek.
I used to propagate stupid information like that in the early 2000s by forwarding forged emails to dozens, even hundreds, of boomers and watching the reactions. Good times.
goddamn normies, or he's probably paid by israelite altman and the cia
Ok, now I am fully convinced that he is paid opposition. There is no way he is this fucking retarded
don't care about finding my pssw for X, is this a real post by Yudkowsky?
what an absolute ~~*quack*~~
I might, if I had a job.
Good. Fuck all of you. I hope you all end up on the streets.
There are people whose only job is to do code reviews?
I dont have a job and even if I had one I couldn't do anything to avoid being fired by AI
I'm gonna be working for a casino in Oklahoma as an IT guy in the future. Since it's a government job, and since I'm native (I get taxpayer paid universal healthcare for being Cherokee), and since AI can't fix a computer, I'll probably have a job for the next hundred years.
>AI can't fix a computer
If it's a hardware problem, a janitor could put it in a cardboard box and ship it off to be stripped for parts.
If it's a software problem, a third worlder could remote in to apply the necessary patch.
Those interventions will all be recorded and used as training data for an AI which will replace them.
Why do tradecucks act as if AI managing to replace every white-collar worker in the next 10 years doesn't mean it will come for them 3 years later, or sooner? Yes, robotics isn't advanced enough yet, but replacing people in tech accelerates that research. Yes, robots are expensive for now, but their main cost is still manufacturing rather than materials, which can be mitigated by economies of scale. They say coordination and dexterity are impossibly complex for a machine and you need a whole brain for it, while at the same time dexterous animals exist with 1/10th the brain of a human. And besides, white-collar workers being fired means more people flooding into manual labor, so your own job gets oversaturated and you still lose out.
White collar workers in america are afraid to get their hands dirty and can't lift shit. Goes double/quadruple for women in white collar positions, which is why they sleep their way into keeping their jobs when they have that choice.
Good thing ChatGPT cannot code in Assembly and won't be replacing my job anytime soon
What makes you think it can't?
LLMs live and die on their input. They can spit out words like the average retard because the companies that train them have Reddit and every other social media platform to train them on.
No one talks about assembly, thus it is unknown to the machines.
It does assembly quite well, since no sub-115 IQ entries were ever generated to train it.
>code reviewer
tf is this made up shit lmao
literally "sandwich artist" tier, just like data analyst
The black box AI needs code review, not the other way around.
I hope I get fired, being a SWE at globohomo inc is pure aids
Not me. I am one in a million talent. I understand pointers and I can do some recursion.
whoah
Retarded twitter screen cap thread have a nice day
This is very obviously not real
Clearly, and how does one "gather telemetry"? Telemetry is the method, not what they're gathering, which twatter guy should know if he was in the field. idk, i think it's gay and fake
That is interesting. I wonder if anyone has given an AI free rein inside a virtual machine with a defined but very open-ended task? Would be cool to see it break the OS or use it in unthought-of ways.
Pretty sure there was a paper about that last year. IIRC they gave it some budget and access to an AWS EC2 instance, with the possibility to make more instances. It tried to do some funky stuff like scamming people but didn't actually get far.
Oh man that sounds awesome, you got a link to the paper?
https://cdn.openai.com/papers/gpt-4.pdf
It was very entertaining. At one point it tried to hire someone on Taskrabbit to solve a captcha for it. The guy went "why? Are you a robot?" and the AI went "no I'm visually impaired".
Thanks anon. It's interesting but they do say that example is "illustrative", so it might not have happened that way. Still cool though.
I love that safety researchers chose this as an example, pic related
after reading the sample adversarial prompts in that pdf i'm now mad how neutered the latest model is.
KEK
Good night sweet prince. This world couldn't handle you.
No, I'm not a coder. But I'm slowly replacing people with AI. Zoomers are useless and boomers are obsolete.
Sounds like bullshit
was this guy's sole job code review? we already use sonarcloud to do that shit, don't even need ai to replace that
Protip, you aren't being replaced by AI, this is just the new pretence that HR is operating under to fire you.
Most jobs will be gone in the next 20 to 30 years.
That's not exactly a bad thing though.
Society will need to adapt for sure.
Good, I can't stand the retarded zoomer juniors. They have no will to learn, keep trying retarded shortcuts and will never be able to work independently without being a burden to another dev. I'd much rather have an AI tool that I can tweak for my own needs.
No. It won't replace anyone smart. Only the kind of morons who use Windows 11 or Wayland and post on Bot.info all day.
no, my job is to program and fix ai bots
What is a "coder"? I'm a software engineer with a M.Sc. and only 5% of my work can be called "coding"
>Companies are firing bots and replacing them with AI coders
Good.
This is part of a cycle that recurs about every 12 years. Companies build up their tech departments during the economic boom, trying to outcompete. Then there's a downturn, and they try to save costs, so they cut people and try to replace them with AI/offshore/street-shit/etc. It works for about a month, then it goes to shit slowly, then very quickly, over the course of 1-2 years. Then they spend more money than they would have by keeping their people, to hire new (white) men to get it all fixed and back on track. And the cycle begins anew.
AI may eventually replace coders, but it's not 1 year off, it's 15+, if ever (AI is limited by human intelligence, we can't build something smarter than ourselves).
>(AI is limited by human intelligence, we can't build something smarter than ourselves)
Do you think that about chess engines too?
Aren't humans limited by the intelligence of their society too, which means no one today is smarter than anyone living 10,000 years ago?
>Do you think that about chess engines too?
Chess is a solvable problem. Way to demonstrate your complete lack of understanding.
>Chess is a solvable problem.
There are more possible games of chess than there are particles in the universe, so no, it literally is not solvable.
>Way to demonstrate your complete lack of understanding.
Maybe take a look in the mirror, genius.
nta, you've never played chess competitively. ask how i know.
By competitively do you mean "as a registered member of a national chess federation"? If so then you are correct, but that's true of 99% of people, so not really a bold guess. Anyway, it's not relevant to the question of whether Chess is solvable or not, and even less relevant to the question of whether AIs can learn more than humans.
>There are more possible games of chess than there are particles in the universe, so no, it literally is not solvable.
Wrong. You don't actually have to store every single position and the best move. A general algorithm is all you need.
>A general algorithm is all you need.
To "solve" chess you have to know the optimal move for every given board state. There is no proof that such an algorithm can exist in memory and run time bounded by the size of the universe. How would you compute the "correct" opening move unless you had explored the entire game tree and guaranteed that your opponent doesn't have forced mate sequences for all possible subsequent positions at depth 100?
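For illustration (not from the thread): "solving" a game in this sense means computing the game-theoretic value of every reachable position by exhausting the tree, which is exactly what blows up for chess. A minimal sketch for tic-tac-toe, a toy game whose full tree actually fits in memory:

```python
# Exhaustive game solving, sketched on tic-tac-toe (assumed example, not the
# thread's). Board is a 9-char string, 'X'/'O'/'.'; value is from X's view.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def solve(board, player):
    """Value of `board` with `player` to move: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if '.' not in board:
        return 0  # board full, no winner: draw
    values = []
    for i, cell in enumerate(board):
        if cell == '.':
            child = board[:i] + player + board[i + 1:]
            values.append(solve(child, 'O' if player == 'X' else 'X'))
    # Each side picks its best outcome, so every position gets a proven value.
    return max(values) if player == 'X' else min(values)

print(solve('.' * 9, 'X'))  # perfect play from the empty board is a draw → 0
```

Tic-tac-toe has only a few thousand positions, so this terminates instantly; the anon's point is that the same exhaustive argument for chess would need a table (or a proven shortcut) covering a tree whose size makes this approach physically infeasible.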
Othello is solved. Othello has 10^58 game positions. chess will soon follow.
https://arxiv.org/abs/2310.19387
>Othello is solved. Othello has 10^58 game positions. chess will soon follow.
A conservative lower bound for the game-tree complexity of chess is 10^120.
That means we need about 60 orders of magnitude of hardware and/or algorithmic progress.
I can't rule out the possibility of that happening "soon", but I think it would be unprecedented.
There's a lot of debate over whether it's truly solved or not as they did a lot of tree pruning to "get rid of irrelevant configurations".
>(AI is limited by human intelligence, we can't build something smarter than ourselves).
Fucking worthless moron. Opinion discarded.
Use the toilet, Ranjesh. The AI will be smarter than YOU, just not smarter than the smartest human.
i love how not a fucking soul in this 100 IP thread pointed out the reason he got replaced was due to the telemetry data, not just the "AI"
that itself is fucking horrifying, that telemetry is now being used for that particular purpose. it's not as straightforward as "le ai replaces us", it's "they replaced us because we let them gather the data that made it possible for decades"
my job is to fix brainlet retard mistakes, so no
This is clearly a joke.
I helped with a pilot program for github copilot at my company. We gave 1k users access for 50 business days. Based on the outcome, we have decided to not move forward with adoption.
I am confident that anyone who actually tries this will reach the same conclusion.
>AI clone to perform code review
Not how it works
>Do you feel your job is under threat?
no because I'm an embedded software engineer, not a coder.
writing the code is the easy part so I really don't give a shit.
if anything it would be cool sometimes to have a software that can write the boring parts for me.
it will be an awfully long time before some software can automate months of back-and-forth with hardware vendors, or write firmware for hardware that doesn't exist yet from Chinese-to-English machine-translated docs that make no sense, so I have to reverse-engineer it myself.
webscripters are doomed as their job is doable by a 10yo kid so I can understand why they're coping hard...
they knew it was coming so why they didn't react sooner is beyond me
>why they didn't react sooner
were busy angularing
Good. Fuck coooder trannies. I’ll have fries with that.
A.I is the enemy but everyone is being groomed into thinking it's a godsend. It's all fun and games for now but sooner or later you will all see it was part of removing useless eaters from society all along
An AI doing code review is insane and retarded. I haven't even seen AI generate anything useful beyond discrete functions which often contain bugs.
It'd be nice to get AI enhanced static analysis but I haven't seen anything beyond toys.
>Companies are firing coders and replacing them with AI bots. Do you feel your job is under threat?
Not really, I mean AI isn't going to grasp more than 200 lines of code, let alone more than 50k lines of code divided into multiple modules that occasionally generate errors that even our senior developers take ages to understand and solve.
God it feels so fucking good to be an engineer right about now. Extremely glad that I didn't waste my time with a tech career.
I don't even have a job.
i dont work so no 🙂
that's just fantastic. IT jobs were overpriced
How can you trust an AI to do code reviews when it has no domain knowledge? I can't tell you how many PRs I've rejected where the jr dev is trying to do something that looks fine in the code but doesn't really make sense domain-wise.
the day someone makes AI that codes as well as I do, I am starting my own company, and putting my employer out of business. it's the businesses who should be afraid of AI, not coders.
I'm a good programmer but I would be terrible at running a company, I don't know how you expect this to work out
>I'm a good programmer but I would be terrible at running a company
git gud at it.
>I don't know how you expect this to work out
I am moving on to managing a project, and cloning myself as a coder times 100.
My product owner and my boss have been dealing with customers for a decade+ and know exactly how to bullshit them or talk them down from features they'll regret. How quickly do you expect me to catch up with them?
I know people who freelance and it sounds miserable
i use gpt4 for work and 80% of the time, the code doesn't work the first time. I ask it to fix a specific bug, and it will create a separate one. You're literally falling for marketing material, but keep coping, NEETs.
It is not only bugs.
Furthermore, if the scope of what you code is so small that you can easily put it in three sentences and toss it to chadgpt, you are a bad developer to begin with.
Normally the complexity and the decisions are so high-level that any "AI" would just bail out.
If a 2023 "AI" can replace you, you are no coder.
Now is a good opportunity to start a consulting firm that specializes in unfucking AI code.
I foresee tons of business when software firms become dumpster fires due to AI.