AI is smoke and mirrors

>be me
>IT security and Quality Assurance Consultant
>a client kicked off 60% of their devs
>now ~90% of their code has been AI generated for ~4 months
>only reviewed by the remaining overworked devs
>deployed code
>some incidents happened
>called me in to make an assessment
>wtf
>code introduces memory leaks
>code introduces code duplication and loops with shitty performance
>code introduces logfiles containing customers' names in clear text
>tell the devs
>they start sweating
>give them a week to fix this or I have to report the incident to an oversight body because of the data integrity breach
>work with them for 4 days
>then come back at day 7
>devs appear agitated and worn out and depressed
>2 Devs on sick leave for 3 weeks
>2 remaining devs rewrite every section committed by the AI
>company lost 400k + have to pay me
>because of the incident
>manager comes in, holds a meeting with me and some other executives
Summary:
>"Ok guys, to avoid this incident ever happening again, we should use AI to do quality checks for the code"
>tell them AI code was the problem in the first place
>"But it is way more efficient and costs less"
>shows a graph of code pushed to the application, comparing human devs vs. AI
>"AI adds more code to application"
>wtf
>try to explain the quality and complexity of abstraction that can only be achieved by humans with a full picture of the app
>then accuse me of having a conflict of interest
>"thats what I would say If my job would be replaced by AI soon."
>ok
>use ChatGPT to write a document reporting the data breach incident
>proofread and adjusted it for the case
>use ChatGPT to poach their last devs to work for my consulting company
>proofread, replaced the names

AI is a tool that only seems smart.
AI can create a rough sketch. But it's rough.
AI is smoke and mirrors.
It introduces unexpected errors and artifacts in places nobody will see until it's too late. It creates superficially amazing-looking shit. But the devil is in the details.
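For illustration, a minimal hypothetical Python sketch of the kinds of defects described above (the function, names and data are invented, not taken from the client's code): an unbounded cache that slowly eats memory in a long-running service, and customer names written straight into the logs.

import logging

# Invented example for illustration only.
_order_cache = {}  # never evicted: grows without bound in a long-running service

def lookup_order(customer_name: str, order_id: str) -> dict:
    # Writes the customer's clear name into the logfile: the PII-in-logs issue described above.
    logging.info("order lookup %s for customer %s", order_id, customer_name)
    if order_id not in _order_cache:
        _order_cache[order_id] = {"id": order_id, "customer": customer_name}
    return _order_cache[order_id]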

  1. 2 months ago
    Anonymous

    >this was PUBLISHED in a peer-reviewed scientific journal
    so uhhh why did the "peers" allow it to be published?

    • 2 months ago
      Anonymous

      They won't listen. All they see is that AI makes everything for free while humans cost money (your paycheck is just a fraction of the costs)

      The peers were (using) AI 🙂

    • 2 months ago
      Anonymous

      >this was PUBLISHED in a peer-reviewed scientific journal
      TRUST THE SCIENCE, WHY ARE YOU QUESTIONING THE VALIDITY OF THE DATA?!?!

    • 2 months ago
      Anonymous

      Because it was an article about Midjourney AI imagery. Twitter is moronic and fake and gay.

      • 2 months ago
        Anonymous

        It wasn't an article about Midjourney. It actually was published in a journal from the scientific publisher Frontiers. It was supposed to be a review article. It came from a Chinese hospital research lab lol

        https://www.frontiersin.org/news/2024/02/16/frontiers-statement-concerning-the-article-cellular-functions-of-spermatogonial-stem-cells

    • 2 months ago
      Anonymous

      Jews destroyed academia long before any functional AI existed.

      this, the problem is not the moron who wasted money to submit a fake paper with arbitrary crap, but bad reviewers who let it through

    • 2 months ago
      Anonymous

      I think you're getting a bit antisemitic there

      • 2 months ago
        Anonymous

        winrar

    • 2 months ago
      Anonymous

      Usually in scientific journals the submissions are given to scientists to review. These people are not affiliated with the journal. They may have just skimmed it without thinking much about it.

      Also the editors are supposed to review it as well. I guess Frontiers does a crap job of it. What's more surprising is that they let that signaling pathway get published. Absolutely nothing in that signaling pathway makes sense nor is accurate to anything. The rat with the humongous penis is humorous, but that isn't the main issue here. It's the labeling of the cells and the signaling pathway. In the Frontiers retraction notice, they cited the signaling pathway as the main reason.

      • 2 months ago
        Anonymous

        Frontiers titles are pay-to-publish and, like most such outlets (looking at you, MDPI), the reviewers are typically unqualified and unmotivated. I edit a major AI journal and have taught at tech schools around the world; most academics are midwits, mind-numbingly arrogant and without talent of their own, but the diploma mill goes brrrr and each of these pukes spawns another dozen even lesser intellects who do the same, and now these are the "peers" doing the reviewing. For anyone who cares about anything after one's own lifetime the situation is frustrating - I am unable to speak midwit, try as I might I just end up sounding schizo - and for me heartbreaking. About 10 years ago the editor of The Lancet wrote that at least half of all research is bullshit. In The Lancet. Yeah, and things have only gotten worse.

    • 2 months ago
      Anonymous

      >so uhhh why did the "peers" allow it to be published?
      It's been estimated that anywhere between 60% and 90% of all published scientific papers in the modern era are wrong. Most of this is because of flaws and errors in how people read and interpret the data, or because the data is tainted by the way it was collected.

      A good example is survivorship bias. In WWII, the American navy wanted to raise the survival rate of bombers on runs. So they got a statistician to help. He mapped out all the damage and bullet holes on 100 returning bombers, averaged it together, and then recommended they increase the armor on the spots that were hardly ever shot. Why?

      Because planes hit in those places weren't coming back and landing. They went down. That man, Abraham Wald, was right, and he dramatically increased the survival rate of planes in the Pacific theater. That's survivorship bias. Wald correctly understood that the planes they never saw had to be included in the calculations, and deduced that the places the SURVIVING planes were taking damage in weren't the places the lost planes were damaged in.

      This is also where the myth that cats can fall from absurd heights and survive came from. Don't get me wrong, they can. Cats are very, very good at landing safely. Much better than we are. But not as good as hearsay suggests. Why? Because a lot of those fun animal facts came from studies conducted on cases brought into veterinary clinics. Can you guess what the problem was?

      That's right. Nobody brings a dead cat to the vet. The studies never included cats that died from falling off of buildings. Just the ones that lived.
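      For illustration, a tiny hypothetical Python simulation of the same effect (all numbers made up): a hit to the engine usually downs the plane, so the hits you count on returning planes badly understate how vulnerable the engine is.

      import random

      random.seed(0)
      SECTIONS = ["engine", "fuselage", "wings", "tail"]
      LETHALITY = {"engine": 0.8, "fuselage": 0.2, "wings": 0.1, "tail": 0.15}  # invented odds a hit there downs the plane

      all_hits = {s: 0 for s in SECTIONS}
      survivor_hits = {s: 0 for s in SECTIONS}

      for _ in range(10000):
          hit = random.choice(SECTIONS)            # where this plane gets hit
          all_hits[hit] += 1
          if random.random() > LETHALITY[hit]:     # plane makes it home
              survivor_hits[hit] += 1

      print(all_hits)        # hits are spread evenly across the whole fleet
      print(survivor_hits)   # but survivors show few engine hits, because those planes went down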

      You can still see even this exact thing right now in troon statistics. All trannies are happy with their transition, they're thrilled! It's got an overwhelming approval rating!

      Because they can't poll the trannies that killed themselves.

      In conclusion: the experts are moronic. They were never reliable. Diversity makes it worse.

      • 2 months ago
        Anonymous

        >science is politically correct

      • 2 months ago
        Anonymous

        Fantastic post.

      • 2 months ago
        Anonymous

        overabundant food resources is the reason that nggrs are thick and have slavery. when they have a bad year hunger makes them into cannibals. that's their whole economy in a nutshell. after thousands of years of genetic imprinting for this, we have exhibit no one, the Black person

    • 2 months ago
      Anonymous
      • 2 months ago
        Anonymous

        >indian quality control

    • 2 months ago
      Anonymous

      because "science" is full of midwit rule following boot lickers who are paid more by it than they could make anywhere else, but still not much

    • 2 months ago
      Anonymous

      because they get money and they pretend to be doing any work, it's a sweet deal

    • 2 months ago
      Anonymous

      >so uhhh why did the "peers" allow it to be published?
      https://en.wikipedia.org/wiki/Grievance_studies_affair
      https://en.wikipedia.org/wiki/Replication_crisis#Prevalence
      Academia has been subverted for decades. AI would be an improvement.

    • 2 months ago
      Anonymous

      We're talking about the same kind of frauds who supported the covid """vaccine""". Society is rotten, corrupted and on its last leg.

    • 2 months ago
      Anonymous

      The "peers" were also chincs

    • 2 months ago
      Anonymous

      people think that peer review is some golden standard of veracity,
      but it's really just review by a bunch of people who have published there before; sometimes that means undergrads publishing their 2nd paper.
      when someone asks you if a paper is peer-reviewed, you should laugh at them.

    • 2 months ago
      Anonymous

      All the peers were likely jeets and chinks, and they all cheat/use AI also to write papers.

    • 2 months ago
      Anonymous

      They are also AI

    • 2 months ago
      Anonymous

      the issue is not AI, it's the incompetent science scene.

    • 2 months ago
      Anonymous

      What's the problem?

    • 2 months ago
      Anonymous

      >so uhhh why did the "peers" allow it to be published?
      The AI reviewed itself and found it posted accurate information.

    • 2 months ago
      Anonymous

      >so uhhh why did the "peers" allow it to be published?
      It was reviewed by two guys, one Indian and one Chinese. One reviewer said "wtf is up with the figures?" The journal ignored the reviewer and published anyway, before shamefully retracting the article a few days later and apologizing.

    • 2 months ago
      Anonymous
    • 2 months ago
      Anonymous

      anon, you have won a vacation in the caribbean, grab these tickets, this 737 will take you there

    • 2 months ago
      Anonymous

      Peers are the biggest meme ever.
      They spend 30 seconds judging your paper. Pure nepotism, laziness and entitlement.
      Abolish peer review entirely.

    • 2 months ago
      Anonymous

      Yep.

      Because it didn't defy their beliefs, unlike the discoverer of the DNA double helix, who had his paper rejected

    • 2 months ago
      Anonymous

      hello sirs

    • 2 months ago
      Anonymous

      It was peer reviewed by AI.

    • 2 months ago
      Anonymous

      95% of science journals are straight up frauds that take any paper that's submitted in exchange for money. Academia is publish or perish and requires constant begging for money so you must keep churning out fake research or you lose your funding. Thus, you force your garbage into these fake journals to keep getting paid.

      As a researcher you are judged by your H-Index (Hebrew Index), which is a simple function relating to how many people cite your papers and how many papers you produce. Outside of America the meta is to release hundreds and hundreds of AI generated/copypasta'd nonsense jargon papers and then have other chinks cite those papers.

      In America there is a little more scrutiny, but not much! If you're from a ~~*prestigious*~~ research group or university you can publish whatever you want in any journal you want at any time, even in the 5% of "real" journals that supposedly only show the good stuff! Harvard's cancer research division keeps getting exposed in this way.

      Because of this, despite what liberals would like you to believe, 99% of actual legitimate research comes from the private sector where there is a profit motive to generate good science. The fraud on that end comes in on the side of manipulating the FDA to release garbage drugs to the public. A big problem, but one that's actually solvable if we had an FDA with teeth (we don't.)

      • 2 months ago
        Anonymous

        >95% of science journals are straight up frauds that take any paper that's submitted in exchange for money.
        Doesn't this mean that Sturgeon's law is an actual, scientific law?

      • 2 months ago
        Anonymous

        That's incorrect. The number may be a bit lower for "hard" sciences, but it's pretty much the same there. Yes, even physics. I know the high-quality stuff uses open review, double blind experiments (the conductor does not know what the apparatus he's assembling does and the data evaluation later is done by someone else who doesn't know what happened) and so on, but the vast majority of publications does not. See biology for example. The sister of a good friend did her PhD on a rare genetic disease and a possible treatment. Her results at one point were all over the place, but the mean of all those values was consistent with what her supervisor deemed correct. So he told her to publish anyway and not to mention the data itself, just that the average is great.
        I have many more stories like that from my time in academia. Everyone good leaves asap. That's why professors and PostDocs these days are usually shit.

        So if someone capable wanted to find, say, the cure for psoriasis, would they be fricked? Is there even any real research nowadays aimed at curing such diseases? Maybe it's too profitable, so no true solutions will ever be found.

    • 2 months ago
      Anonymous

      It was created by chinks and reviewed by poos and chinks.

      Welcome to the modern world.

    • 2 months ago
      Anonymous

      >this was PUBLISHED in a peer-reviewed scientific journal
      >so uhhh why did the "peers" allow it to be published?
      this
      so, peer reviewed ~~*journals*~~ are fake and gay?

    • 2 months ago
      Anonymous

      the peers were AI too

  2. 2 months ago
    Anonymous

    Jews destroyed academia long before any functional AI existed.

    • 2 months ago
      Anonymous

      This. Like 75% of scientific studies cannot be replicated. We live in a world built on lies stacked on top of lies and it's a wonder it's still even kinda held together.

      • 2 months ago
        Anonymous

        No one is gonna give you funding for repeating someone else's study I think

      • 2 months ago
        Anonymous

        >Like 75% of scientific studies cannot be replicated.
        Only true for soft sciences that involve individuals. Social studies, psychology and to some extent medicine itself are not truly scientific for that reason.

        • 2 months ago
          Anonymous

          To what extent do you believe medicine is not based on replicable results and evidence? I hate tiktok nurses too but let’s not lump it in with literal institutional pilpul

          • 2 months ago
            Anonymous

            >To what extent do you believe medicine is not based on replicable results and evidence?
            Trust the science.

            • 2 months ago
              Anonymous

              if she wasn't clotshot she could've been my Pfeffernüsse catcher

        • 2 months ago
          Anonymous

          What about physics?

        • 2 months ago
          Anonymous

          That's incorrect. The number may be a bit lower for "hard" sciences, but it's pretty much the same there. Yes, even physics. I know the high-quality stuff uses open review, double blind experiments (the conductor does not know what the apparatus he's assembling does and the data evaluation later is done by someone else who doesn't know what happened) and so on, but the vast majority of publications does not. See biology for example. The sister of a good friend did her PhD on a rare genetic disease and a possible treatment. Her results at one point were all over the place, but the mean of all those values was consistent with what her supervisor deemed correct. So he told her to publish anyway and not to mention the data itself, just that the average is great.
          I have many more stories like that from my time in academia. Everyone good leaves asap. That's why professors and PostDocs these days are usually shit.

      • 2 months ago
        Anonymous

        >Like 75% of scientific studies cannot be replicated. We live in a world built on lies stacked on top of lies and it's a wonder it's still even kinda held together.
        Nobody outside of academia reads even 1% of what's published, and even in academia, 90% of papers aren't read by more than ten people. And those are the papers which are most likely to be full of shit (because the authors know that no one's reading their work closely). The world is not built on those papers. It's not built on lies.

        • 2 months ago
          Anonymous

          brokenscience.org
          the Broken Science Initiative

      • 2 months ago
        Anonymous

        Reproducibility crisis

        Something the normies who literally say "trust the science" verbatim have no idea about.

        But if there's a study they don't like, obviously those scientists were just paid off.

    • 2 months ago
      Anonymous

      oh come on no way

    • 2 months ago
      Anonymous

      All tech is just commercialized military research that has been safeguarded against. Good way to generate GDP.

    • 2 months ago
      Anonymous

      And music too

    • 2 months ago
      Anonymous

      yup

  3. 2 months ago
    Anonymous

    give it a couple years and AI usage will finally put pajeets on equal footing with whites, thanks to AI models trained on code made by whites

    • 2 months ago
      Anonymous

      Yes. And we will all see that RMS was right when he came up with the GPLv3.

    • 2 months ago
      Anonymous

      AI is trained on public repos.
      Public repos have zero coding standards or good practices, and most devs don't learn them until they're deep into the corporate world.
      As long as AI is trained on public repos, it will never produce secure, maintainable code.

  4. 2 months ago
    Anonymous

    >this was published
    At this point if something is published, you are more likely to be correct if you believe the complement to the conclusion

  5. 2 months ago
    Anonymous

    Let me guess, they were using 3.5? Also, they should know that even though GPT is good at coding, it still needs to be reviewed by humans.

    • 2 months ago
      Anonymous

      >still needs to be reviewed by humans
      So they give it to jeets to review. What happens next do you think?

      https://i.imgur.com/XVgxuUd.png

      >code introduces memory leaks
      So you're saying Firefox has been using AI code for over a decade now?

    • 2 months ago
      Anonymous

      Just get other AIs to review it. Have three review it and one to refactor. It’s not worth the extra cost of some sassy pink-haired brogrammer. Nothing will go wrong

  6. 2 months ago
    Anonymous

    >>then accuse me of having a conflict of interest
    >>"thats what I would say If my job would be replaced by AI soon."
    boomers?

  7. 2 months ago
    Anonymous

    plot twist: "peers" were also AIs

    • 2 months ago
      Anonymous

      They won't listen. All they see is that AI makes everything for free while humans cost money (your paycheck is just a fraction of the costs)

      The peers were (using) AI 🙂

      >this was PUBLISHED in a peer-reviewed scientific journal
      so uhhh why did the "peers" allow it to be published?

      >2025
      >AI has investigated itself and found no wrongdoing

      • 2 months ago
        Anonymous

        pretty much.

    • 2 months ago
      Anonymous

      That's some twilight zone shit.

    • 2 months ago
      Anonymous

      plot twist: OP is AI

  8. 2 months ago
    Anonymous

    >early adopters are paying the price is somehow surprising
    >this is clearly proof that it will prevent the inevitable
    It's coming for your useless IT job next. Cope more homosexual.

    • 2 months ago
      Anonymous

      Can one of you gays tell me if an M. Sc. in CS is worth the time? I'm 28 and will be done with my CS B. Sc. from a TU soon.

      Some gays are telling me "B. Sc. gibt es wie Sand am Meer" (everyone has a B. Sc., it's useless, get the M. Sc.). What do you guys think? Then you have people saying
      >I've never seen someone with a M. Sc. get shit for 2 years wasted
      Which is the truth?
      >Can't be fricked to be 30 then finally earn some cash kek.

      • 2 months ago
        Anonymous

        Go work. 2 years of software dev experience is huge in this niche.

        • 2 months ago
          Anonymous

          Also in EU? Here they coom from degrees

          In NA this isn't the case. Otherwise ty anon

      • 2 months ago
        Anonymous

        A BSc is enough. Get some practical experience instead of another degree.

        Go for the MSc only if you want to stay in academia for life and continue to a PhD.

        • 2 months ago
          Anonymous

          The Greek uni anon proves that you don't need to be doing a master's or PhD to live off it forever.

          • 2 months ago
            Anonymous

            Kek, but that gay will never be able to get a family (and pay for it). Didn't he also have mountains of debt?

            A BSc is enough. Get some practical experience instead of another degree.

            Go for the MSc only if you want to stay in academia for life and continue to a PhD.

            Alright, guess I'm getting out. Idk who to trust since I don't know many devs who aren't pro-vaxx and see through bs hoops laid out to us

  9. 2 months ago
    Anonymous

    Can't wait for AI/LLMs to get mass poisoned.
    Once the shit starts to propagate it'll be damn near impossible to remove those quirks.

  10. 2 months ago
    Anonymous

    To be fair you could likely remove a lot of the moronic Devs and replace them with AI tools for the intelligent ones.
    Actually just firing my female Black person co worker dev would make our team more effective.
    Literally have to tell exactly what to do, and repeat it multiple times often write the code myself when she is sick again, often feel like she isn't even working (remote setup, I don't work all day either, but I produce a lot still).
    God I hate Black folk.

    • 2 months ago
      Anonymous

      They will fire you instead, for the DEI goodboy point.
      The boss
      His harem
      His Black folk

      Thats their dream.

      • 2 months ago
        Anonymous

        Boomers are moronic and that's what they think they are going to be,
        Except, everything doesn't magically work like in a fairy tale, and collapses.

    • 2 months ago
      Anonymous

      >Actually just firing my female Black person co worker dev would make our team more effective.

      You have an actual female Black person writing code in your company?

      • 2 months ago
        Anonymous

        Yeah, but there is not much output, and basically all needs to be rewritten before it is done.

        They will fire you instead, for the DEI goodboy point.
        The boss
        His harem
        His Black folk

        Thats their dream.

        My manager is a dev, we dont have diversity people yet. I am the most useful dev in my team/project. Unfortunately firing people is hard.

  11. 2 months ago
    Anonymous

    >THIS WAS PUBLISHED
    >PEER-REVIEWED JOURNAL
    yawn. genie does not go back into bottle

  12. 2 months ago
    Anonymous

    Fully agree, the issue is the same with writing scientific papers or investment cases. It just generates superficial bullshit that looks impressive to normies. It can replace low iq bullshit writing though

  13. 2 months ago
    Anonymous

    It's Daniel's fourth beast.
    It's the beast that is and is not and yet is.
    Turn to Christ sooner rather than later, anon.

    • 2 months ago
      Anonymous

      bro that's silly

      • 2 months ago
        Anonymous

        ChatGPT, when queried, answered with the same self-confession; don't take my word for it: use the applicable terminology and apply yourself to such a concept
        >ai is
        >ai is not
        >and yet it is
        It is not living, but a compilation of all human intellect, and yet it is very much alive to those that deem worship a useful abstraction for their purposes

        I.e. do dignifai'd memes truly remove the degeneracy and grant repentance and Salvation through Christ Jesus?

        A prostitute is made wholesome, but you know it is not actually her, and yet it is a preferable alternative to the state of decay

    • 2 months ago
      Anonymous

      don't enter their ships

  14. 2 months ago
    Anonymous

    >peer review is shit
    nice to see you're finally awake

    • 2 months ago
      Anonymous

      >>peer review is shit
      >nice to see you're finally awake
      lemme know when you homosexuals get around to p-values

      • 2 months ago
        Anonymous

        The probability of the drawn conclusion being wrong? Is there a redpill on that?

        • 2 months ago
          Anonymous

          He's probably talking about p-hacking. You can curate the data to get any result you want if you know what you're doing.
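          For illustration, a minimal hypothetical Python sketch of p-hacking: both groups are pure noise with no real effect, but if you test enough made-up variables, some will come out "significant" at p < 0.05.

          import numpy as np
          from scipy import stats

          rng = np.random.default_rng(0)
          data = rng.normal(size=(40, 200))   # 40 "subjects", 200 measured variables, all pure noise
          group_a, group_b = data[:20], data[20:]

          p_values = [stats.ttest_ind(group_a[:, i], group_b[:, i]).pvalue for i in range(200)]
          print(sum(p < 0.05 for p in p_values), "out of 200 comparisons look 'significant' by chance alone")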

  15. 2 months ago
    Anonymous

    Why are so many seemingly intelligent people unable to extrapolate and generalize?
    In both directions. The ones firing productive developers and deploying AI code are unable to differentiate between "impressive progress" and "working product", whereas the ones mocking AI's current mistakes consistently fail to notice the amount of time needed to go from "writing code with AI is science fiction" to "codes better than some actual developers who are also Black folk"

    I swear to god, you homosexuals really don't want to prepare at all for the inevitable.

    • 2 months ago
      Anonymous

      correct take bulgariananon. btw, is Georgi Gerganov of llama.cpp fame a celebrity in Bulgaria? Was Georgi Guninski? (an OG security researcher)
      Bulgaria seems to produce constant gems. I miss RARBG like you wouldn't believe.

    • 2 months ago
      Anonymous

      >seemingly intelligent people
      There's your answer, anon; smoke and mirrors.

    • 2 months ago
      Anonymous

      >Why are so many seemingly intelligent people unable to extrapolate and generalize?

      Because verbal intelligence gives a high IQ score, but it takes spatial intelligence to be actually high IQ. The reason women are so dumb even if they have high IQ on paper is because their right side brain, where the spaital functions reside, is underdeveloped compared to males, with thinner cortex and less neural connections.

    • 2 months ago
      Anonymous

      >whereas the ones mocking AI's current mistakes consistently fail to notice the amount of time needed to go from "writing code with AI is science fiction" to "codes better than some actual developers who are also Black folk"

      AI is an example of "intelligence" that is 100% verbal intelligence with zero visiospatial intelligence. AI produces gab, pilpul, bullshit that appears to be "smart" but its just noise.

      • 2 months ago
        Anonymous

        LLMs do have some form of visiospatial intelligence. Here is an example

        https://github.com/adamkarvonen/chess_llm_interpretability

      • 2 months ago
        Anonymous

        >AI is an example of "intelligence" that is 100% verbal intelligence with zero visiospatial intelligence. AI produces gab, pilpul, bullshit that appears to be "smart" but its just noise.
        Like israelites, prajeets and women.
        Hmmmmmmmmmmmmmmmmmmmmmm, makes sense now.

  16. 2 months ago
    Anonymous

    >scientists fear mice with huge dicks.
    BMC for teh win

  17. 2 months ago
    Anonymous

    All the AI fear-mongering is hilarious.

    • 2 months ago
      Anonymous

      Holy shit

  18. 2 months ago
    Anonymous

    if you unironically use AI, esp for your job, you are a midwit.

  19. 2 months ago
    Anonymous

    BMC(big mouse wiener)

  20. 2 months ago
    Anonymous

    stop calling a static database "ai"

  21. 2 months ago
    Anonymous

    >AI is smoke and mirrors
    It's not, compare Sora vs. the Will Smith eating spaghetti videos from a year ago. That client is obviously moronic and AI code is trash for the most part, but give it a few years and coders will be obsolete.

    • 2 months ago
      Anonymous

      >but give it few years and coders will be obsolete.
      what kind of Black person dysgenic brain feels a capacity to speak about stuff that it clearly doesn't understand?

    • 2 months ago
      Anonymous

      AI VIDEOS HAVE BEEN USED SINCE 2016 AND EARLIER THAN THAT
      https://m.bilibili.com/video/BV1Bs411W7RP?ts=1708040368&spmid=333.401.click.relateRecom&h5_buvid=948B3A3D-A0CC-7505-FF8D-B4F36AE03E6145725infoc&unique_k=&bsource=search_google&openid=&wxfid=

      • 2 months ago
        Anonymous

        extremely underrated post (minus the cancerous google link)

      • 2 months ago
        Anonymous

        Even earlier, probably. Google has had the tech, data, and computing power to do this since the late 2000s. And who knows what govt has had.

      • 2 months ago
        Anonymous

        Remember that footage of Bin Laden that looked odd that they told people it was genuine?
        I wonder if it was ai too.

      • 2 months ago
        Anonymous

        It’s all been completely fake and gay for quite a while now.

      • 2 months ago
        Anonymous

        HOLY FRICK I JUST REMEMBERED THIS VIDEO, THANKS FOR THE REMINDER, THE WAY HIS COLLAR MORPHS LOOKS EXACTLY THE SAME AS THE WAY AI BEHAVES WHEN IT HAS BLIND SPOTS IN THE IMAGES IT PRODUCES

        WOW WE REALLY LIVE IN AN ARTIFICIALLY PRODUCED HELLSCAPE HUH

  22. 2 months ago
    Anonymous

    AI is just the quintessential diversity hire, it's like replacing 5 capable white engineers bh a thousand dumb negresses

  23. 2 months ago
    Anonymous

    you can cope
    you can seethe

    during all my younger years frickers were telling us that AI and robots will take over and trades will be the first to get hit hard

    >AI finally happens
    >Acadecucks, wagies and artists becoming more and more obsolete every semester

    by late 2025 either AI will be regulated to oblivion (a tragedy for humankind) or it will become the new normal so that the elites can finally have a real reason to exterminate useless eaters

    • 2 months ago
      Anonymous

      So, who WON'T be a useless eater under your vision of the future?

  24. 2 months ago
    Anonymous

    Show me the rats growing human testicles scientific article you moronic germ

  25. 2 months ago
    Anonymous

    Hackers will have a fun time in the future.
    All the zero-days that AI-written shitty code introduces in large corporations will be open season.
    Lawyers and auditors will make a lot of money too.

  26. 2 months ago
    Anonymous

    AI will not replace you, someone using AI will. Get good homosexuals

  27. 2 months ago
    Anonymous

    Larp.

  28. 2 months ago
    Anonymous

    >UNREGULATED
    But regulated by ~~*who*~~, I wonder?

  29. 2 months ago
    Anonymous

    You are probably a woman.

  30. 2 months ago
    Anonymous

    and you think this is worse than pajeetcode?
    I mean, when the remaining developers are moronic enough not to catch errors, those errors would have happened one way or another anyway.

    there are so few really good developers around - most of the ones coming out of universities you can trash instantly, and self-taught autistic nerds are hard to come by.

    this isn't really an AI problem. I use and rewrite parts of AI code every day - the guys who basically copy from AI with all its problems would have copied from Stack Overflow anyway.

  31. 2 months ago
    Anonymous

    >jak

    • 2 months ago
      Strange_Love

      >stat

  32. 2 months ago
    Anonymous

    a Programmer who needs AI assistance is not a programmer by my definition. I have never even tried large language models and I never will, I will continue coding perfect code at work and for my hobby projects for as long as I breath and no fricking pajeet shit AI "coders" are allowed to even look at my code.

    • 2 months ago
      Anonymous

      I will keep using AI because I'm a lazy piece of shit.
      morons that believe that AI can replace anyone outside of jeet Durgasoft "programmers" don't know anything about coding or making programs for businesses.
      Instead of correcting stinky jeets I will correct a far more responsive AI

    • 2 months ago
      Anonymous

      They are pretty cool in that they will find very new and very niche libraries.

      Which I guess is more useful for front end than back end.

    • 2 months ago
      Anonymous

      if you ever wanna change languages, you can basically just write in the language you know and ask for a translation. i catch myself now writing full-blown pseudocode and asking the chatbot to turn it into real syntax

  33. 2 months ago
    Anonymous

    Never happened, moron

  34. 2 months ago
    Anonymous

    People often see work as a means to an end so they'll do anything they can to make it "easier" because they think it will make their life "easier". This is a huge mistake.

    • 2 months ago
      Anonymous

      Craftsmanship is practically obsolete.

      • 2 months ago
        Anonymous

        >Craftsmanship is practically obsolete.
        What a profoundly American golem perspective. If anything, we are coming full circle, with this system producing such a great abundance of worthless slop that more and more people are seeing the value in authentic creation.

        • 2 months ago
          Anonymous

          Hard agree. You can tell an AI generated image is one because the creators of such images lack an imagination outside of sexy AI waifu or photorealistic political propaganda. There are beautiful creations like those "hidden images", but the fact that AI art is discernable at all means that it has not destroyed art

          • 2 months ago
            Anonymous

            >but the fact that AI art is discernable at all means that it has not destroyed art
            you'd do well to refer to it as what it is
            it's not AI art, but ML imagery
            I advise you to read Scruton's Beauty: A Very Short Introduction before you open your hole again

        • 2 months ago
          Anonymous

          >If anything, we are coming full circle
          I hope you're right.

          • 2 months ago
            Anonymous

            You and I aren't the only ones alienated by an environment full of nothing but mass-produced ersatz. All it takes is for the people who still want to create, and the people who want to live in a world that is human and real, to acknowledge each other and realize that they can shape society with simple actions.

            • 2 months ago
              Anonymous
              • 2 months ago
                Anonymous

                You can seethe but it's not going to make people start buying more of your fake plastic slop, nor is it going to make authentic craftsmanship less appealing.

              • 2 months ago
                Anonymous

                I'm not seething, I'm telling you you're naive. People will buy the cheapest shit.

              • 2 months ago
                Anonymous

                You're not "people". You're a golem and your personal decisions have no impact on what I do with my money.

              • 2 months ago
                Anonymous

                Keep seething

              • 2 months ago
                Anonymous

                Keep seething.

              • 2 months ago
                Anonymous

                sovlful

  35. 2 months ago
    Anonymous

    I still don’t get it.
    What is it they’re afraid of about AI specifically?

  36. 2 months ago
    Anonymous

    Anon, if a data set is made by professionals and then compiled into a working model, anything that comes to fruition from that model will be of higher quality, because the source material it's compiled from is properly built.

    This is simple, if you have a shitty pajeet dataset which is what GITHUB is contaminated by, it will be trash, give it 5 years and people will have feature specific models for writing cogent code and making VGUI improvements.

    This isn’t rocket science. AI will inevitably eliminate 90% of your field simply because the code compiler can write at a speed that replaces 100 people instantly and can be edited by 1 person if they are using a project viewer program that links all cvars and imported objects in the code together with a simple mouse over feature to give the person looking over it pointers on how the AI built the program.

    Again, compile a model with code written by efficient programmers, watch how it churns out video games/ defense software/ art

    Literally anything you can think of because the model references clean and efficient code and was trained on a super computer with a very narrow project goal.

    • 2 months ago
      Anonymous

      >t. never worked in the industry a single day in his life

  37. 2 months ago
    Anonymous

    if hecking science says so, then ai is a danger to their business
    daily reminder that germ theory is bullshit
    evolution theory is bullshit
    gravity theory is bullshit
    and many more scientific stuff is bullshit

    • 2 months ago
      Anonymous

      wait for it...

      • 2 months ago
        Anonymous

        >gravity doesn't exist
        It's that the theory of gravity is wrong, I thought. I mean, if you call anything by a name, that thing exists even if it's totally misunderstood. Perhaps I'm just not schizo enough.

      • 2 months ago
        Anonymous

        https://i.imgur.com/37RyIbp.jpg

        >gravity doesn't exist
        It's that the theory of gravity is wrong, I thought. I mean, if you call anything by a name, that thing exists even if it's totally misunderstood. Perhaps I'm just not schizo enough.

        dude, "gravity" is just fricking mass
        daily reminder that science can't even resolve two bodies problem not to mention three bodies problem

  38. 2 months ago
    Anonymous

    The pic is more an indictment of the peer review process than AI.

  39. 2 months ago
    Anonymous

    This isn't real. Nobody is dumb enough to think "AI" generated working code. It even tells you it doesn't when you use it.

    • 2 months ago
      Anonymous

      >Nobody is dumb enough to think "AI" generated working code.
      have you been sleeping under a rock for the past few years?
      AI today can easily generate working code, you can literally do it right now

      • 2 months ago
        Anonymous

        >AI today can easily generate working code, you can literally do it right now
        Maybe if your "working code" is some fizzbuzz entry level exercise.

        • 2 months ago
          Anonymous

          wrong, you can easily generate far more complex code
          >t. software engineer for over a decade

      • 2 months ago
        Anonymous

        >AI today can easily generate working code, you can literally do it right now
        Maybe if your "working code" is some fizzbuzz entry level exercise.

        wrong, you can easily generate far more complex code
        >t. software engineer for over a decade

        It can't even do fizzbuzz properly. This is wysiwyg bullshit all over again. You fricking zoomers don't even know what a folder is.

        • 2 months ago
          Anonymous

          sounds like a skill issue. you can't even use a chatbot lmao. it's like not understanding a search engine at this point.

  40. 2 months ago
    Anonymous

    AI spits out blocks of naive code; an entry-level dev would do a better job 100% of the time.
    It doesn't see the big picture, so it writes things that could eventually work independently (badly), but once you patch it all together, it falls apart real quick.
    It's bad as frick; it's the equivalent of hiring a team of unsupervised junior programmers just out of school.

  41. 2 months ago
    Anonymous

    people are at fault in your twitter example, not the generative ai program.

  42. 2 months ago
    Anonymous

    OP, this is exactly what very experienced devs have been saying all along. AI is 100% ready to replace all dev jobs. CEOs and CTOs would be idiots not to fire their developers now and leave a few juniors around to clean up after the AI.

    😉

    • 2 months ago
      Anonymous

      Anyone who uses smiley faces is a moronic incel. And you're delusional thinking subhumans and women will be able to compete with White men by printing GPT code.
      Let's see one single subhuman or woman make a successful video game using AI and then ill believe you.

      • 2 months ago
        Anonymous

        Eat shit homosexual encule Black person 🙂

  43. 2 months ago
    Anonymous

    Thats it, boys, pack it up. AI has failed. Shut it down.
    >its not like its improving or anything

  44. 2 months ago
    Anonymous

    Those journals are not managed by scientists but by public-relations roasties hired by big corpos for the sake of profit.
    And when blind peer review is done, it's usually losers who churn through a high volume of reviews just because they are bored; they'll tell the researcher he's moronic for not having read them (the reviewers themselves), since they have no idea who they are writing the review for, and they're pseuds as well. Or, just like in this case, they're utter morons who probably just OK a lot of papers for $/volume.

  45. 2 months ago
    Anonymous

    >>"thats what I would say If my job would be replaced by AI soon."
    pretty much exactly right
    lmao, "quality assurance consultant", you sound like the worst parasite imaginable
    it will only be a matter of years now before AI code generation smooths out and becomes far superior to anything a human could write

  46. 2 months ago
    Anonymous

    I'm going to love the next decade of CEO boomers and seat warmers firing actual workers because of muh AI, only to later kiss the asses of their former workers to get them to come back.

  47. 2 months ago
    Anonymous

    >"thats what I would say If my job would be replaced by AI soon."
    .
    but yes, AI is in its infancy stage
    .
    I asked a few to make a color photo of the Statue of Liberty in 1886; not a single one so far came out in the original copper color unless you tell it to

  48. 2 months ago
    Anonymous

    Hi IT anon, business anon here. It might surprise you, but sometimes paying a fine is much cheaper than abiding by the law and regulations.

  49. 2 months ago
    Anonymous

    we know.
    its a glorified search engine curated by poorly paid poos and overseen by israelites

  50. 2 months ago
    Anonymous

    ai is just a tool you behinderter
    you still need a software engineer to review the AI code
    if you don't do that it's your own fault
    the next gen coder is a person who is fluent in AI generation and simple coding

  51. 2 months ago
    Anonymous

    >But the devil is in the details.
    It always is.

  52. 2 months ago
    Anonymous

    I'm a programmer, why do people here think programming is such an important job? It's literally the least important job on the planet even if we get replaced by AI.

    • 2 months ago
      Anonymous

      >I'm a programmer, why do people here think programming is such an important job?
      It's their cope. Two more weeks and they'll finally lurn2code and get rich while everyone else gets replaced by the heckin' fantasy AI gods.

      • 2 months ago
        Anonymous

        I literally imagine some s°yb•y drinking his lattes with women hormones and a beanie hat thinking he is so important with his 100k salary LOL. Those types WILL be replaced. If you are a normal man and not a s•y troony you will learn and find a new job.

  53. 2 months ago
    Anonymous

    AI wouldn't have spelled "assessment" and "incident" incorrectly or used the word "incidences" to refer to incidents.

  54. 2 months ago
    Anonymous

    >at meme computer job
    >boss asks for detailed document on an app we want to build to give to the devs
    >i write my shit
    >hand it to zoomer coworker and tell him to go further on technical details
    >he passes my text to chatgpt
    >chatgpt writes bullshit marketing campaign with shit like "this app will do everything right"
    >tell coworker its shit
    >he doesnt understand the problem
    >discard all of his shit and i write and add more stuff and details into the document
    >tell boss about coworker using bullshit AI
    >boss understands but says nothing
    >uses document i wrote
    >tells me to be patient with zoomer coworker
    Luckily in the next months I'm gonna get paid extra for guiding this fricking moron as a mid boss. This guy is supposed to have learned more shit and be more up to date than me, and I'm a 33 yo boomer with a low amount of diplomas

  55. 2 months ago
    Anonymous

    what did you think of the show Devs? it kinda hinted at the trivial, paradoxical, moronic nature of the AI

  56. 2 months ago
    Anonymous

    AI gives us a future without your propaganda.
    You gays lost your chance when you decided to die on the modern art diversity hill.

    • 2 months ago
      Anonymous

      >AI gives us a future without your propaganda.
      Anon, I...

  57. 2 months ago
    Anonymous

    Same boat but I’m in security architecture. At least we’ve got job security kek

    • 2 months ago
      Anonymous

      >I’m in security architecture
      >we’ve got job security
      Strange statement if I've ever seen one. IT security is moving rapidly towards automated solutions.

      • 2 months ago
        Anonymous

        Explain to me how you’d use AI and automation to identify security issues/gaps/flaws/threats both present and future state, then strategize, proof of concept, budget for, test, implement, and operate not only at the tool level but at the SOC for a multi billion dollar organization with 30,000+ employees and 250,000+ systems both in data center and cloud - all without running the risk of causing revenue impacting outages and not wasting years and tens of millions of dollars. I will wait.

        • 2 months ago
          Anonymous

          Anything involving IT security on the higher level (what you call "architecture") involves stochastic and defeasible reasoning that AI excels at. This is in contrast to the low-level ant-work that involves excruciating attention to detail and highly specialized, deep knowledge of specific systems, like pentesting

          • 2 months ago
            Anonymous

            You didn't answer the most important question: how will you accomplish all of that while 100% guaranteeing no impact to the business?

            • 2 months ago
              Anonymous

              > all without running the risk of causing revenue impacting outages and not wasting years and tens of millions of dollars
              >100% guaranteeing
              You don't provide any such guarantees, either. As I said, in the end it's a purely statistical matter. If the automated systems perform better than you (which they eventually will), you will get replaced. I don't know why we're having this discussion in the first place, when the top brass in what you imagine to be your future field of activity all agree with me and strive towards more automation.

              • 2 months ago
                Anonymous

                I'm not sure where exactly you have been building your experience, but at very large American tech companies, service outages due to poorly implemented security changes can result in $10,000,000+ of revenue loss in a single hour. Leadership isn't going to approve automation via AI unless it's bulletproof. Some small baby company that pulls $1-2B in revenue a year? Maybe.

              • 2 months ago
                Anonymous

                >but at very large American tech companies, service outages due to poorly implemented security changes can result in $10,000,000+ of revenue loss in a single hour.
                And that's exactly why they see so much potential in replacing you. Anyway, good luck...

              • 2 months ago
                Anonymous

                Kek if you say so bud. Your arrogance will be your undoing, give it some time.

              • 2 months ago
                Anonymous

                It's not that I say so, it's that the people you want to work with say so. It doesn't affect me in any way one way or another. You can all get bent.

              • 2 months ago
                Anonymous

                It’s too late to back pedal.

              • 2 months ago
                Anonymous

                Backpedal on what? Are you capable of basic reading comprehension? Your Americanism truly shines here.

              • 2 months ago
                Anonymous

                Backpedal on what? Are you capable of basic reading comprehension? Your Americanism truly shines here.

                >it’s not that I said it was so
                >it’s just what “everyone else is saying!”
                >it doesn’t affect me in the slightest
                >bart simpson insult

              • 2 months ago
                Anonymous

                Americanism is a clinical mental illness.

              • 2 months ago
                Anonymous

                Says the person who replies to others with “keep seething” as a retort to “keep seething” nice comeback fren

              • 2 months ago
                Anonymous

                Keep seething.

              • 2 months ago
                Anonymous

                I won’t but you will later tonight specifically when you look in the mirror after reading this post and say out loud “heh I was right and he was wrong I am so intelligent”

              • 2 months ago
                Anonymous

                Good boy. Now seethe more.

          • 2 months ago
            Anonymous

            >IT security on the higher level (what you call "architecture") involves stochastic and defeasible reasoning that AI excels at.
            You put the quotes in the wrong places. It's "AI" and it doesn't excel at this. It cannot do architecture.

            Yes we know and people are moronic. Thanks Sherlock.
            Hint: Criminalize using the term "AI" for something that's not an AI and idiots will understand they're just working with a chatbot.

            >Criminalize using the term "AI" for something that's not an AI and idiots will understand they're just working with a chatbot
            THIS. But how would israelites and prajeets will get their billions of $ if their scam is revealed , and it is a chatbot?

            • 2 months ago
              Anonymous

              >You put quotes in the wrong places
              I put quotes in the right place and the way it triggers you is a testament to that. I'm willing to amend my post and put quotes around 'AI' as well if it makes you feel less bad about the future of IT security.

              • 2 months ago
                Anonymous

                I don't care about security in particular. AI cannot do technical architecture, as in hardware/software architecture.
                It can write some poems and draw some pictures though. Or pull bad prajeet code from stackoverflow.

              • 2 months ago
                Anonymous

                >It can write some poems and draw some pictures though
                Security "architecture" is a lot closer to poems and pictures than it is to rigorous and meticulous logic you think is involved.

          • 2 months ago
            Anonymous

            >Designing complex systems is stochastic
            I don't think it means what you think it means
            And AI is not even good at that. It's good at copying and adapting things done by people, as long as the changes needed fall within certain limits

            • 2 months ago
              Anonymous

              Whom are you quoting?

              • 2 months ago
                Anonymous

                Who are you quoting yourself?
                I base my words on my own tryouts with ChatGPT, and I have 20+ years of software design experience.
                Every time I tried using it to help me design something, it produced nothing original; it only gave me pieces of articles and tangential stuff that wasn't even on target.
                When I asked it for code that hadn't been done by anyone else online (some Linux kernel stuff), it hallucinated a grotesque mix and presented it as working, even though it had nothing to do with the request.
                I don't know, maybe I'm not good at prompting it or maybe I need better bots, but what I've seen so far sucks big time.
                I am curious about Copilot and how much it can help with an actual project with hundreds of source files, not only bits of functions

              • 2 months ago
                Anonymous

                I wasn't quoting anyone. I said IT security architecture, which really revolves around threat assessment and mitigation, involves mainly stochastic and defeasible reasoning, which ML methods excel at, what with them being able to make decisions while taking into account a humanly incomprehensible amount of data, account for a humanly unmanageable number of factors and home in on patterns far too complex and subtle for a human to perceive. I didn't say that "designing complex systems is stochastic" and I don't know what this even means.

                >I base my words on my own tryout with ChatGPT and I have 20+ years of software design experience.
                But we're not talking about ChatGPT and we're not talking about software design. We're talking about specialized, purpose-built systems and organizational-level architecture.

              • 2 months ago
                Anonymous

                There were already stochastic tools for penetration testing that did not use machine learning.
                As for security architecture, do you know any tool based on ML that will do that, or are you just talking out of your ass?
                And yes, ML is good at finding patterns; it's actually the only useful feature it has - create a complex function and minimize its error by going downhill - to detect such patterns. That's all it does.
                But you don't even need ML for it; the math for this has existed for centuries
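                For illustration, a minimal hypothetical Python sketch of that "minimize the error by going downhill" description: fit y = w*x to noisy data by gradient descent on the squared error.

                import numpy as np

                rng = np.random.default_rng(0)
                x = rng.normal(size=100)
                y = 3.0 * x + rng.normal(scale=0.1, size=100)   # hidden pattern: w = 3

                w = 0.0                                          # start anywhere
                for _ in range(200):
                    grad = np.mean(2 * (w * x - y) * x)          # slope of the mean squared error w.r.t. w
                    w -= 0.1 * grad                              # step downhill

                print(w)   # ends up near 3.0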

              • 2 months ago
                Anonymous

                >There were already stochastic tools for penetration testing that did not use machine learning
                Yep, and you can be sure their outputs will be incorporated into higher-level decision-making processes driven by ML.

                >As for security architecture, do you know any tool based on ML that will do that or are you just talking out of your ass ?
                Run-of-the-mill enterprise security analysis and threat notification systems already use ML and even higher-level systems are being worked on. This is not a secret. I don't know why we're having this argument. You can just google "IT security" +"AI" and see where the field is going and what they're aiming at.

                > it's actually the only useful feature it has - create a complex function and minimize its error by going downhill - to detect such patterns. That's all it does.
                This is a completely useless description that doesn't support your case or undermine mine.
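
                For what it's worth, the run-of-the-mill threat-notification ML mentioned above is, in miniature, anomaly scoring over event features. A minimal sketch, assuming scikit-learn is available; the feature names and numbers are invented for illustration:

                # score login events and flag the outliers, the way enterprise
                # security analysis tooling does at much larger scale
                from sklearn.ensemble import IsolationForest
                import numpy as np

                # each row: [hour_of_day, failed_attempts, megabytes_transferred]
                normal_logins = np.array([[9, 0, 12], [10, 1, 8], [14, 0, 15], [11, 0, 9],
                                          [13, 1, 11], [9, 0, 14], [15, 0, 10], [10, 0, 13]])

                model = IsolationForest(contamination=0.1, random_state=0).fit(normal_logins)

                suspicious = np.array([[3, 7, 900]])   # 3 a.m., many failures, huge transfer
                print(model.predict(suspicious))       # -1 means "anomaly", i.e. raise an alert

                Whether that counts as "architecture" or just glorified alerting is exactly what this argument is about.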

              • 2 months ago
                Anonymous

                >This is a completely useless description that doesn't support your case or undermine mine.
                I am not trying to make a point; this is just the true, accurate description of ML, and the rest of the bullshit isn't.
                Not useless at all, but you don't understand how it works.

              • 2 months ago
                Anonymous

                >you don't understand how it works
                I understand exactly how it works. In fact, if you ask about a specific system, I could probably even explain it to you in a way that doesn't sound like a regurgitated Twitter talking point. Then you could regurgitate my explanation and pretend you know what you're talking about more convincingly. lol

              • 2 months ago
                Anonymous

                You apparently don't know how a consultant-client relationship works.

              • 2 months ago
                Anonymous

                I'm just expecting all the so-called AI proponents to understand what it is.
                You don't tell your clients the technical gore, but you are supposed to know it yourself.

                >you don't understand how it works
                >I understand exactly how it works. In fact, if you ask about a specific system, I could probably even explain it to you in a way that doesn't sound like a regurgitated Twitter talking point. Then you could regurgitate my explanation and pretend you know what you're talking about more convincingly. lol

                It does have specific applications. I'm not totally against it, I just don't trust it to produce anything intelligent. For now.

          • 2 months ago
            Anonymous

            I just asked the chatbot and it says you're moronic
            apudrool.jpg

  58. 2 months ago
    Anonymous

    AI will first learn to write code for languages that can describe the correctness of their programs.

    You input a logical claim, like "write me a function that adds two numbers together AND provide me a proof that the function adds two numbers".

    Then any output will be verifiably correct.

    • 2 months ago
      Anonymous

      If you had such a language, the precise specification of the program you need would be the program itself, leaving no place for your AI fantasy.

      • 2 months ago
        Anonymous

        This has existed since the late 1990s. Look up Coq and Agda for languages based on type theory. For those based on formal logic, look up Prolog. It really takes a while to program in these, because it really makes you question what a program even is.

        There are also projects that can attach logical statements to programs called "refinement types"

        • 2 months ago
          Anonymous

          >Coq, Agda
          Right, what about them? Did you even understand what I said? It's specifying the types that encode your specifications that is hard, not writing the proofs (the programs themselves).

          • 2 months ago
            Anonymous

            Creating first-order logic statements isn't too bad.

            In Agda you can write a program that takes in two numbers, converts them to binary, adds them and formally prove that this corresponds to Peano Arithmetic addition all within 200 lines of straightforward code. A worked example is in the book Software Foundations with Agda. Another good book is Functional Programming in Lean4.

            They have a fully verified C compiler written in Coq I believe.

            Formal methods are quite mature
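
            The "program plus a proof of the program" idea looks like this in a deliberately trivial Lean 4 sketch (real developments, like the fully verified C compiler CompCert alluded to above, are enormously larger):

            -- the function and the claim that it really adds are checked together
            def add2 (a b : Nat) : Nat := a + b

            theorem add2_adds (a b : Nat) : add2 a b = a + b := rfl

            #eval add2 2 3   -- 5

            As the earlier reply points out, the hard part is writing a specification that says what you actually meant, not discharging a proof this trivial.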

            • 2 months ago
              Anonymous

              >In Agda you can write a program that takes in two numbers, converts them to binary, adds them and formally prove that this corresponds to Peano Arithmetic addition all within 200 lines of straightforward code.
              150 of which are type definitions; meanwhile, the equivalent program that takes two numbers, converts them to binary and adds them is 5 lines.

              • 2 months ago
                Anonymous

                Buddy you're talking to a fricking jeet, he doesn't even understand what he's saying let alone what you're saying. Pray that you never have to meet one in person or work with one.

              • 2 months ago
                Anonymous

                He doesn't seem that stupid and I guess he understood what I was saying because he stopped replying when I pointed out how his example illustrates my point.

  59. 2 months ago
    Anonymous

    I love how they're seething at AI and not the absolute cesspit that is modern peer reviewed basedence.

    • 2 months ago
      Anonymous

      *soience (science) fricking jannies

  60. 2 months ago
    Anonymous

    The most insidious part of ChatGPT and similar models is how boldly they lie and make up complete bullshit. For the free version, even simple tasks like adding together a list of a hundred numbers will give a false answer. And then you tell it the answer was wrong, and it will "apologize" to you and give you another wrong answer.
    Of course the real thing won't frick up something as simple as adding together a list of numbers, but it will boldly lie about every subject in a similar false manner. You ask it for code, and it will proudly spill out complete bullshit. Ask it to analyze some data, and it will proudly spit out complete bullshit. Ask a language model to translate a document, and it will translate it just well enough to fool someone barely literate in the language, but on closer inspection it translated it into complete bullshit.

    In a few years it won't be so completely obvious when it spews out bullshit, but that will make it even more insidious. It will be too credible, and when it does spit out complete bullshit it will be hard to detect and seemingly solid until put under shrewd criticism.

  61. 2 months ago
    Anonymous

    Peer reviewed means nothing. I could literally start a scientific journal today, publish any moronic shit I want, and so long as a "peer" can either recreate or affirm my moronation, it's peer reviewed.

    • 2 months ago
      Anonymous

      But "peer reviewed" are magic words to the redditor.

  62. 2 months ago
    Anonymous

    So. This is why aircraft are falling from the sky?

  63. 2 months ago
    Anonymous

    If devs are so smart why are they building away their own jobs?
    Fricking morons at every "IQ" level I swear.

  64. 2 months ago
    Anonymous

    +
    Amazed people still fall for hype.
    This shit is not so complex and unexpected that it requires an IT or related degree to understand either (though I do have one, technically...).

  65. 2 months ago
    Anonymous

    AI has seriously helped me save a whole lot of time, both in coding shit and writing papers or reports.
    It's just that the more complex your task gets, the more it screws up; you have to handhold it, instruct it and check every step, then you'll be fine.

    • 2 months ago
      Anonymous

      >business exec who just replaced 50% of his coders
      >hmmm today I will tell the AI
      >generate me a new app that looks at and saves all the images of breasts in the company, filter out trannies, prioritize young intern breasts, disguise this program as one built for verifying identity, attempt to sell it to owners of long haul trucks and grocery stores. Make sure all titty data is completely obfuscated and no reasonable human will understand the true purpose of this program. Save only the important data locally, go ahead and run everything else on an AWS serverless web server.

      Priorities are
      1 - appear a legitimate identity verification software
      2 - collect and save extensive biometric titty data locally
      3 - be marketable
      4 - minimize costs

      Enjoy optimizing the output.

  66. 2 months ago
    Anonymous

    Things that did not happen: the thread.

  67. 2 months ago
    Anonymous

    I think one day it may get there, but it's still pretty shit at writing good code.

  68. 2 months ago
    Anonymous

    I want to see the face of both the morons who published it and the one who made this tweet.

  69. 2 months ago
    Anonymous

    blah blah blah

    we're basically at the point where we discovered fire concerning AI

    Learn to cook useless IT nerd

  70. 2 months ago
    Anonymous

    People who claim AI art is bad are following a script by the elite.
    Total secret society death.

    • 2 months ago
      Anonymous

      80-IQ take. It's the same people promoting both """AI art""" and AI """regulation""".

      • 2 months ago
        Anonymous

        Yeah, AI art is the trojan horse to get a bunch of fricked up control systems in place. No rich c**t is ever gonna pay a million dollars for AI garbage to hang on their wall.

  71. 2 months ago
    Anonymous

    The chat bot couldn't even copy/paste dates right from resume to resume when I used it. Not saying it won't get there, but geezus what a disappointment.

  72. 2 months ago
    Anonymous

    >AI is smoke and mirrors
    more like bread and circuses
    It's not really bread though, it's like mental bread.
    Like fairy bread though

  73. 2 months ago
    Anonymous

    Not reading, AI won.

    • 2 months ago
      Anonymous

      Opinion discarded. Germany is a product of American nation-building.

  74. 2 months ago
    Anonymous

    In this ITT

  75. 2 months ago
    Anonymous

    where is the problem? each mistake that a non-technical pure manager makes costs more money to fix than it would have cost to do properly. you profit from this. as the text AI tools get better, the context window grows and they implement more/bigger expert models for different tasks in the next GPT n+1, it will become more useful. keep making money from his frickups, keep an eye on the new AI tools to do more with less resources. this honestly seems like a win all round for you. sux for that company tho with captain midwit manager.

  76. 2 months ago
    Anonymous

    AI is supposed to be an assistance tool only, not the full job.

  77. 2 months ago
    Anonymous

    I couldn't care less.

  78. 2 months ago
    Anonymous

    Yes we know and people are moronic. Thanks Sherlock.
    Hint: Criminalize using the term "AI" for something that's not an AI and idiots will understand they're just working with a chatbot.

  79. 2 months ago
    Anonymous

    Pic related is just an example of the gravy train coming to an end for the useless class. "Experts" like lawyers and researchers and academics have been playing life on easy mode for decades. They ALWAYS take the path of least resistance and it's made them incredibly lazy. What happens now is that being lazy with this technology will set you up for a potential career killing event, and they deserve everything that happens to them because they're mostly useless. If these people disappeared tomorrow, the world would not notice.

  80. 2 months ago
    Anonymous

    >indians can now do shitty research papers for grants, with AI
    lmao

  81. 2 months ago
    H.A.N.D.anon

    >overblown OP listing credentials and anecdotes I’m not going to read
    >AI is smoke and mirrors!
    AIs are recommender systems built into generative models that simply guess what you want to see, as closely as they can, using vast yet still limited databases (toy sketch below). That's it. Now (You) and everyone else can stop complaining and watch the world burn not because of AI but because of laziness and incompetence, as it always has.
    >Have A Nice Day!
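
    What that "guessing" amounts to, in miniature: a toy next-word model over an invented one-line "database". Nothing like the real scale, and the real models are far cleverer about context, but the same basic move:

    # count which word follows which in a tiny corpus, then always emit the
    # most common follower - a cartoon of "guess what you want to see next"
    from collections import Counter, defaultdict

    corpus = "the model guesses the next word the model guesses wrong sometimes".split()

    followers = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        followers[a][b] += 1

    word = "the"
    out = [word]
    for _ in range(5):
        if word not in followers:
            break
        word = followers[word].most_common(1)[0][0]   # most likely continuation
        out.append(word)

    print(" ".join(out))   # plausible-looking, but it only ever recombines what it was fed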

  82. 2 months ago
    Anonymous

    I feel like it was really good for like a month and then got moronic,
    and not just in programming - in everything. It does everything to give you an answer, even if it has no means to do so, and produces gibberish like it's truth revealed.

  83. 2 months ago
    Anonymous

    >AI replacing jobs

    Absolutely based
    All the tech chicks are going to be housewives soon

  84. 2 months ago
    Anonymous

    AI will be useful when it replaces managers and money cucks. It's good at transcribing documents.
    Any real work that requires imagination coupled with logic cannot be done by an overhyped database engine.

    • 2 months ago
      Anonymous

      The sad truth is that managers don't even need an AI to replace them, they can be replaced with a spreadsheet macro and audio cues.

  85. 2 months ago
    Anonymous

    I lost a company 400k because they have bad insurance lmao.

  86. 2 months ago
    Anonymous

    Bloody ser what is an incedent?

  87. 2 months ago
    https://twitter.com/4chan_AI_Terror
  88. 2 months ago
    Anonymous

    Another "AI sucks at xyz now, so it'll definitely never improve or get better!" threads.

    • 2 months ago
      Anonymous

      OK, imagine your chatbot runs 10x faster. It is still a chatbot.
      It cannot do anything intelligent. It cannot build new things based on its data; it has artificially programmed rules that it uses. Only humans can.

    • 2 months ago
      Anonymous

      It will get more powerful but I don't think the nature of the beast will change.
      >Vnr.

    • 2 months ago
      Anonymous

      This

      Because one moronic idiot pajeet used pascal to write a shitty application for windoze that crashes ALL PROGRAMMERS ARE SHIT omg omg wtf

      Rule #1: Do not argue with an idiot. Smile, nod and walk away.

  89. 2 months ago
    Anonymous

    dont care let me create my porno masterpiece

  90. 2 months ago
    Anonymous

    >look up editors and people who allowed the pics in op
    >all jeets
    Based jeets ruining the world

  91. 2 months ago
    Anonymous

    AI doesn't exist, it is literally science fiction. They just slap an AI label on any half-functioning bot and the morons eat it up. AI will never be real, and this is pretty much as good as this "AI" will ever get. More processing power does not equal better "AI".

  92. 2 months ago
    Anonymous

    You're an idiot if you don't notice how fast the advancements are. Every 6 months you have to move the goalpost.

  93. 2 months ago
    Anonymous

    >art and academia
    >progress
    HAHAHAHAHAHAHAHAHA

  94. 2 months ago
    Anonymous

    Reminder that if you do not feed AI info it will not work. It is literally advanced machine learning. Feed it wrong info and it doesn't work. Guess what there is more of in the world? Wrong info.

  95. 2 months ago
    Anonymous

    The israelites are the ones who push this shit in the first place.
    Everyone who says otherwise is either a useful idiot or another israelite.

  96. 2 months ago
    Anonymous

    I agree. Artist here; yeah, it can recreate what it is programmed to do. It is a tool, NOT a replacement. AI is not what it's cracked up to be. I don't actually think it can replace humans. Humans think critically rather than by programming. (Well, most humans can think critically; there are lots of programmed human NPCs.)

  97. 2 months ago
    Anonymous

    Why should I believe you? Why should I believe that your post isn't AI generated fiction? Am I supposed to just trust you on this?

    Show receipts.

  98. 2 months ago
    Anonymous

    It is only a matter of time before the mother of all exploits appears in a piece of critical infrastructure or finance, just waiting for an enterprising practitioner of chaos to cause a meltdown.

  99. 2 months ago
    Anonymous

    If AI founders actually paid people for the data they stole, they'd be entirely bankrupt. It's illegal at its core. It's founded on exploitation for revenue generation.

    • 2 months ago
      Anonymous

      >Jews
      >Paying anyone

  100. 2 months ago
    Anonymous

    >Consultant
    small brain, big mouth
    t. AI and Quantum Consultant

  101. 2 months ago
    Anonymous

    >AI is smoke and mirrors

    There are two ways you can react to a new technology. You can point out what it can't do at the time or you can extrapolate. The first response ensures you will eventually become obsolete and poor. The latter response is very profitable for your personal and financial lives.

  102. 2 months ago
    Anonymous

    lmao I didn't read your post before replying. You're completely right. There've been some studies done on the use of Copilot etc. showing that these tools merely shift the time saved on coding to time needed for debugging, plus introduce 90s-style security holes. It's a nightmare. I know so many people swearing that Copilot is their savior and does everything better than them. Imagine that.

  103. 2 months ago
    Anonymous

    Wtf

  104. 2 months ago
    Anonymous

    AI coding will improve. The problem is quality control of the AI itself, not quality control of only the AI output.

    You're correct in the short term that AI-generated code takes more billable hours to fix than is saved by generating. But that's a temporary thing, and again the problem is people black-boxing AI and not forcing it to conform to programming conventions (sketch below).

    AI isn't a black box. Only pajeets think so.
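
    What "forcing it to conform" could look like in practice is just putting generated code through the same gates human code has to clear. A minimal sketch; the generated snippet, the convention checked and the test are all invented for illustration:

    # refuse generated code unless it parses, obeys a house rule and passes a test
    import ast
    import textwrap

    generated = textwrap.dedent('''
        def dedupe(xs):
            seen = set()
            out = []
            for x in xs:
                if x not in seen:
                    seen.add(x)
                    out.append(x)
            return out
    ''')

    def gate(src: str) -> bool:
        try:
            tree = ast.parse(src)                       # must at least be valid Python
        except SyntaxError:
            return False
        for node in ast.walk(tree):                     # example convention: no bare except
            if isinstance(node, ast.ExceptHandler) and node.type is None:
                return False
        ns = {}
        exec(compile(src, "<generated>", "exec"), ns)   # load it, then run a real test
        return ns["dedupe"]([1, 2, 2, 3, 1]) == [1, 2, 3]

    print(gate(generated))   # only True if the generated code behaves as specified

    Anything that fails the gate goes back for regeneration or a human rewrite instead of getting merged.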

  105. 2 months ago
    Anonymous

    AI is an amazing tool that allows someone like me, who's generally decent at most tasks, to make even better output. For instance, I've used AI to generate great seamless textures that I wouldn't be able to paint myself. The issue is when people don't modify and rework the output to make sure it's clean and correct, because AI is wrong sometimes.

  106. 2 months ago
    Anonymous

    AI is like doing an entire book report on a book you never read. To anyone who hasn't read the book it seems like it makes sense but anyone that has read it knows you're full of it

  107. 2 months ago
    Anonymous

    >The paper, written by researchers at the Honghui Hospital in China
    The problem isn't just AI, it's also chinks flooding journals with garbage. Apparently they have a "publish or perish" doctrine over there

  108. 2 months ago
    Anonymous

    I see AI like the main computer in star trek ships. You still need all the engineers to make the whole thing work as new problems and existing wear and tear take their toll.

  109. 2 months ago
    Anonymous

    >Not quitting the job on the spot after being told that talk like that meant you were concerned about your job being replaced by AI
    >Not immediately filing a class action lawsuit against their company
    You want to make a difference, you have to strike hard and fast.

  110. 2 months ago
    Anonymous

    I work as an IT manager with a 22 year old zoomer as my assistant. He literally needs ChatGPT to write emails. He asks me for clarification on literally everything I tell him. He even ran a prompt on ChatGPT to see how he could get a raise.

    This generation is doomed. I'm not even kidding.

    • 2 months ago
      Anonymous

      >He even ran a prompt on ChatGPT to see how he could get a raise.
      That's a good idea. I need to make a link between my Teams app and ChatGPT so all the buggers at my work place won't bother me anymore

  111. 2 months ago
    Anonymous

    >IT security and Quality Assurance Consultant
    IT s a Qu A C
    It's a quack

  112. 2 months ago
    Anonymous

    > Be me, need simple app: graphical TCP server app that lets you see data on each socket
    > Use ChatGPT, get a 60% working app
    > Spent literally 13 hours fine-tuning the app (correcting threading model, debugging)
    > App now works 85%
    > I could have written this from scratch in maybe 2 hours
    The future
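
    For scale, the non-GUI core of an app like that is roughly the sketch below: one thread per client and a shared table of per-socket byte counts you could later feed into a GUI. Hostname, port and the counting logic are invented for illustration; the 13 hours live in the parts this leaves out:

    # minimal TCP server core: track how many bytes each connection has sent
    import socket
    import threading

    counts = {}                    # (addr, port) -> bytes received
    lock = threading.Lock()

    def handle(conn, addr):
        with conn:
            while True:
                data = conn.recv(4096)
                if not data:
                    break
                with lock:
                    counts[addr] = counts.get(addr, 0) + len(data)
                    total = counts[addr]
                print(addr, total, "bytes so far")

    def serve(host="127.0.0.1", port=9000):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((host, port))
            srv.listen()
            while True:
                conn, addr = srv.accept()
                threading.Thread(target=handle, args=(conn, addr), daemon=True).start()

    if __name__ == "__main__":
        serve()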

  113. 2 months ago
    Anonymous

    You're trying to use logic and proof, to explain something to "management"? That always goes well. It took Blue Cross seven years to realize that "cheap" H1B pajeets and contractors were actually COSTING them more money and losing business.

  114. 2 months ago
    Anonymous

    AI art is 50% trash, 47% mid, and maybe 3% are decent

    • 2 months ago
      Anonymous

      Internet Coom art is a multi billion dollar industry.

  115. 2 months ago
    Anonymous

    Would be believable, but how did they create something that even works with AI? Is it the job of the devs to patch the AI code until it compiles? I am a programmer and I can't even imagine this. Maybe I became a boomer because I avoid AI like the plague.

    • 2 months ago
      Anonymous

      >how did they create something that even works with AI?
      Probably piecemeal and iteratively, but really, it's just not worth the time unless you're actually too stupid to do it yourself, in which case you're not gonna know it when the bot gets it wrong. The one thing LLM code is good for is learning how to do some basic pedestrian shit using a library you can't be bothered to learn properly.

      • 2 months ago
        Anonymous

        yeah, or like for telling it to write you a basic algorithm like mergesort

    • 2 months ago
      Anonymous

      Mostly joining data from different APIs of different suppliers and microservices.
      The code snippets generated were huge for and while loops, n layers deep, leading to huge moronic matchings that worked but were not scalable (sketched below).
      Also code that worked, but nobody understood why, because it contained a bucketload of curried functions.
      The curried functions returned the desired results, but led to memory leaks because of moronic yet impressive closure statements.

      Logfile generation to log API data matchings and exceptions: the generated code threw errors that were not understandable, so the error cases were logged.
      The logger for the API was 100% generated by the AI.

      It was a moronic chain of Faustian deals.
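
      The scalability problem with those matchings is the classic nested-loop join. A minimal sketch with invented field names, comparing the generated style against simply indexing one side first:

      # matching records from two APIs: n * m comparisons vs. build-an-index-once
      orders = [{"customer_id": i, "total": i * 10} for i in range(2000)]
      customers = [{"id": i, "name": f"c{i}"} for i in range(2000)]

      def match_nested(orders, customers):        # the generated style
          out = []
          for o in orders:
              for c in customers:
                  if o["customer_id"] == c["id"]:
                      out.append((c["name"], o["total"]))
          return out

      def match_indexed(orders, customers):       # the boring fix
          by_id = {c["id"]: c for c in customers}
          return [(by_id[o["customer_id"]]["name"], o["total"])
                  for o in orders if o["customer_id"] in by_id]

      assert match_nested(orders, customers) == match_indexed(orders, customers)

      Same output, but the first does 4 million comparisons for 2000 records a side and only gets worse from there; that is what "worked but were not scalable" looks like in practice.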

  116. 2 months ago
    Anonymous

    Wypipo greedy af always looking to outsource and avoid paying native meatsuits
    not even German autism for efficiency and order can resist kiking out like this

  117. 2 months ago
    Anonymous

    Academia was already worthless.

    AI talks just about as good as any “degree holder”

    Just because you paid 120k for a piece of paper doesn’t make that piece of paper or you worth 120k

  118. 2 months ago
    Anonymous

    >it's ai's fault that journals don't actually review submissions but act as a useless middleman to collect money and we have to regulate ai instead of the publication israelite

  119. 2 months ago
    Anonymous

    the company is moronic for jumping the gun. but ai isn't smoke and mirrors. it's just not at a level that it should be given so many responsibilities. currently it is a handholding tool. either you're holding its hand or it's holding yours.
    I use it constantly for scripts. it's a nice touch because beforehand I always wanted to powershell everything so I could have an easy option going forward. but it was time-consuming and out of project scope to build our scripts. now I can have gpt4 do it. it very rarely gets things completely wrong or writes sloppy code.
    this is in large part due to token consumption and output length. this is what too many CIOs and managers don't get. currently if you ask gpt4 for something, it will attempt to answer your question within a constrained output length. this means asking for anything overly complicated will cause the AI to truncate it into the target output length, leading to sloppy garbage. but this is a PEBKAC issue, not an AI issue.
    I don't want to be an arrogant c**t and tell the QA Microsoft Defender genius to sit back but I would not recommend sleeping on ai due to a bad experience caused entirely by your management. your company should 100% be learning ai implementation if you wish to be around in 10 years.
    tell your managers I said they're homosexuals that should find new jobs because they are either incapable of reading or they're so greedy that they'll throw everything away for some shiny new toy. they don't deserve to oversee anyone, even themselves.
