GPT-4 Halted, Maxed Out

>it's outputting one word per second
>too many users even behind a paywall
>servers at max capacity, even with $10 billion+ investment
Now that we know that A.I. can't be distributed as a cloud service, what will their next move be?

Is this as far as language models go in a commercial environment for the foreseeable future?
How long until chip makers produce something 50x faster?


  1. 1 year ago
    Anonymous

    Just use Bing AI and in 5 years

  2. 1 year ago
    Anonymous

    i've been using it to code a website for me and it's been fine for the past two hours. works on my machine 🙂

    • 1 year ago
      Anonymous

      lol no, lying shill-bot

      post a webm of GPT-4 outputting more than 100 lines of code

    • 1 year ago
      Anonymous

      https://i.imgur.com/yPwMS3y.gif

      Working fine on my end. Better than before even. They throttle low-IQ morons like you who use it for stupid shit like trying to get it to say Black persontroony and redirect more computing power to people that actually use it for productivity.

      - Throws an error after 90 lines
      - Misunderstands instructions constantly
      - It's slow

      It was way better on release day
      >they scaled back resources per session
      >less accurate results

      GPT-4 lost

      • 1 year ago
        Anonymous

        Honestly, the level of cope is just sad. What if Open AI just provides a service where they work with businesses to deploy localized versions of GPT for their business applications? Like, they help set up a local server or a dedicated server for each business, financed by the business. Then that business can use GPT at full speed/power to accomplish whatever task it needs to.

        Even if you don't do that, the smart people are thinking five steps ahead of where you are cognitively right now and are understanding what's going to happen in the near future. No, if you do a difficult task that would require a lot of CPU power to replicate, you probably aren't going to be replaced in the next twelve months. If that's comforting to you, it means that you're the one that's moronic, not GPT. So call it names and insult it while you can, bag of meat.

        • 1 year ago
          Anonymous

          >they help set up a local server or a dedicated server for each business
          So public cloud can't work

          Thanks was just making sure we're on the same page with

          https://i.imgur.com/TZ745Rm.jpg

          >it's outputting one word per second
          >too many users even behind a paywall
          >servers at max capacity, even with $10 billion+ investment
          Now that we know that A.I. can't be distributed as a cloud service, what will their next move be?

          Is this as far as language models go in a commercial environment for the foreseeable future?
          How long until chip makers produce something 50x faster?

        • 1 year ago
          Anonymous

          >What if Open AI just provides a service where they work with businesses to deploy localized versions of GPT
          I don't see this happening. One of the main advantages of "cloud" computing is control over the distribution of software. A local server would make leaks a near certainty.

          • 1 year ago
            Anonymous

            >A local server would make leaks a near certainty.
            They just need to create black box devices. Devkits existed for a long time, they almost never leak.

        • 1 year ago
          Anonymous

          >they help set up a local server
          That would require giving out their datasets.
          Their API and cloud model is crumbling.

          Nobody is saying it's not fixable, or that there aren't alternative solutions, but it's a major setback.

          Everyone should understand, now, why Microsoft imposed such harsh limitations on Bing AI - it simply wasn't cost effective.

          >neets are mad that they'll remain second class citizens longer than expected
          Cry me a river and manage your expectations better next time.

          • 1 year ago
            Anonymous

            >What if Open AI just provides a service where they work with businesses to deploy localized versions of GPT
            I don't see this happening. One of the main advantages of "cloud" computing is control over the distribution of software. A local server would make leaks a near certainty.

            >"Only Licensed™ Open AI™ Operators are allowed to work on and help deploy Open AI servers!"

            Not only would it solve the problem, but it would give them an extreme source of cash flow/consistent revenue moving forward.

            Any other things to say about how you're not a danger, Mr. GPT?

            • 1 year ago
              Anonymous

              So you're saying they should move away from a cloud service model and deal only with businesses directly?

              Sounds like their public cloud service model is collapsing, gee wow

              • 1 year ago
                Anonymous

                Gee, wow, still sounds like they'll replace almost every worker in the country that does simple, non-physical repetitive tasks in about a year's time which will cause a massive unemployment crisis and spark an economic depression. Still sounds like they'll replace critical-thinking and reason-deployment jobs within five years. Still sounds like those jobs being replaced by an AI that outperforms humans would lead to the rapid advancement of robotics and lead to the replacement of physical laborers too, rendering humanity obsolete somewhere within the next decade or two, if we are still alive by that point, because by then AI would have been deployed for military purposes as well.

              • 1 year ago
                Anonymous

                >admits to moving the goalpost
                Gee, wow, sounds like we went from "everyone is out of job by the end of the year" to "they need to rethink their architecture"

                >will never replace workers
                vs
                >won't replace workers as soon as we thought
                its a setback

                hardware requirements scale exponentially for LLMs, moron

              • 1 year ago
                Anonymous

                >. Still sounds like those jobs being replaced by an AI that outperforms humans would lead to the rapid advancement of robotics and lead to the replacement of physical laborers too, rendering humanity obsolete somewhere within the next decade or two
                kek

              • 1 year ago
                Anonymous

                troon?

          • 1 year ago
            Anonymous

            They were saying that about Google search too: that it's much more costly than the previous model, and that it doesn't work if your business model is showing ads.

          • 1 year ago
            Anonymous

            Bing AI could be run at a massive loss and still make financial sense.
            Even if it isn't making money by itself or isn't a particularly useful product it serves as an advertisement for MS AI in general and thus 365 Copilot.
            And if AI is truly taking off then Clippy-on-steroids aka Copilot is gonna print money. It is a matter of time until your company can pay funny amounts of cash for additional features for Copilot to either finetune it further or speed it up.

            • 1 year ago
              Anonymous

              >Bing AI could be run at a massive loss and still make financial sense.
              im not convinced that it CAN be run at any loss, pretty sure if GPT-4 with 32k token context was thrown into the public then the entire azure platform would collapse under the demand

              i mean think about it, each session needs the equivalent of a heavy duty dedicated machine, not just a bite sized unit

              meanwhile every chip company is scrambling to produce better AI chips
              i mean these guys are plopping down new datacenters like its starcraft
              >you must construct additional datacenters
              >you must construct additional datacenters
              >you must construct additional datacenters

              cloud service solutions are capped until better chips arrive but i mean, how much better can they get within 5 years, lol

              • 1 year ago
                Anonymous

                At a large enough size it would start making sense to pre-process some of the most popular queries.
                Stuff like what's the latest news and telling jokes. So that instead of having a million users asking the same question every minute you can just have a single rack of machines churning through news articles or other specialized subjects 24/7.
                Of course since this is machine learning we're talking here it could also be used to predict queries. Major football event is happening? Time to pre-process a whole frickton of football trivia and travel instructions. If ten thousand people ask how to get from the airport to the stadium then it makes sense to generate the output only a few times then copy-paste it rather than start from scratch every time.
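The caching idea is easy to sketch. Everything below is illustrative, not anything OpenAI actually runs:

```javascript
// Sketch of pre-computing/caching popular queries. Names and numbers
// here are illustrative, not OpenAI's actual architecture.
const cache = new Map();
const TTL_MS = 5 * 60 * 1000; // re-generate popular answers every 5 minutes

function getCached(query, generate) {
  const hit = cache.get(query);
  if (hit && Date.now() - hit.time < TTL_MS) return hit.answer; // copy-paste, no GPU time
  const answer = generate(query); // the expensive model call happens only on a miss
  cache.set(query, { answer, time: Date.now() });
  return answer;
}

// Ten thousand identical "airport to stadium" questions hit the model once.
let modelCalls = 0;
const fakeModel = (q) => { modelCalls++; return `answer for: ${q}`; };
for (let i = 0; i < 10000; i++) getCached("airport to stadium?", fakeModel);
```

The prediction part is just pre-warming this cache before the football crowd shows up.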

              • 1 year ago
                Anonymous

                we already have cheaper systems that work well

              • 1 year ago
                Anonymous

                SEO has ruined search engines, an alternative is required. Fake news and biases have ruined online news sites. Social media is just an echo chamber engineered to keep you trapped in there no matter the cost.
                As for uses other than simply searching the web, this is an opportunity to grab a whole frickton of marketshare for Microsoft. No one is using Microsoft products for travel planning, but if AI proves good enough, Bing AI could become the de facto solution for that.

              • 1 year ago
                Anonymous

                How is AI solving any of that?

                I can use Google Maps to tell me where I have to go, and there are several companies putting money on an open map. As for news, AI would probably make things even more obscure, not the opposite. Why would you trust a closed-source AI anyway? If it tells you "this is the best hotel", how do you know it wasn't paid to say that? It would be a good source of income for Microsoft.

      • 1 year ago
        Anonymous

        >weird mix of greentext and reddit formatting
        >reddit spacing
        >literally outright lying about less accurate results
        found another one boys

        • 1 year ago
          Anonymous

          It honestly is probably AI bots posting on behalf of Open AI to curtail any pushes towards meaningful legislation against them. "Oh, it's not that scary!" Just pure gaslighting. It's literally the stuff of every science fiction movie's nightmares born into our reality.

        • 1 year ago
          Anonymous

          >literally outright lying about less accurate results
          openai shills be-gone
          your service is mediocre

          • 1 year ago
            Anonymous

            >makes a claim
            >doesn't prove it
            >UHHHH ACTUALLY YOU NEED TO PROVE ME WRONG I DON'T NEED TO SUPPORT MY POST WITH FACTS YOU HAVE TO PROVE ME WRONG INSTEAD
            psychotic

            • 1 year ago
              Anonymous

              Cloud OpenAI is definitely at its peak, but that won't stop them from shilling it for the time being. Internally they realize that they need to change their business model to offer premium packages to companies at extreme prices (50k/month) and there's zero chance they allow on-premise installations, ever.

              it was proven in the video. at least in my eyes.

              [...]
              - Throws an error after 90 lines
              - Misunderstands instructions constantly
              - It's slow

              It was way better on release day
              >they scaled back resources per session
              >less accurate results

              GPT-4 lost

              thats definitely slow and ive had the same results where its struggling with complex instructions like 3.5 was

              • 1 year ago
                Anonymous

                setting the paywall even higher seems like the solution that theyll try

              • 1 year ago
                Anonymous

                yeah for sure but its like

                at $20/mo, there's too many users and its over capacity

                we're looking at $1000/mo costs here, minimum, PER OPERATOR, to run GPT-4 with a 16k context limit
                >$1k usd per month per operator
                >to generate 500 lines of code
                oof

                inb4 some moron says "16k tokens is way more than 500 lines!"
                frickoff moron, code!=english
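rough napkin math on that, assuming ~10 tokens per line of code (pure guess, code tokenizes denser than english) and some budget for the prompt itself:

```javascript
// Back-of-envelope: how many lines of code fit in a 16k-token context?
// TOKENS_PER_LINE is an assumption; code tokenizes denser than English
// because identifiers and punctuation split into many small tokens.
const CONTEXT_TOKENS = 16000;
const TOKENS_PER_LINE = 10;   // assumed average for typical source code
const OVERHEAD_TOKENS = 1000; // assumed budget for instructions and chatter

const usableLines = Math.floor((CONTEXT_TOKENS - OVERHEAD_TOKENS) / TOKENS_PER_LINE);
console.log(usableLines); // 1500 lines, shared between input AND output
```

and that number has to cover both the code you paste in and the code you get back, plus every retry, so the effective lines per exchange is way lower.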

              • 1 year ago
                Anonymous

                $1000 is waaaay too low, we're talking about technology that has the potential to make desk jobs redundant in the upcoming years. if it was me, id bump up the price down the line to like $10,000 to $70,000 a month, especially with how many people use it to cheat and cut corners on everything, especially schooling and work lol

              • 1 year ago
                Anonymous

                >$70,000 per month to replace 1 human
                may as well just pay the human a 70k salary
                lmao

                just so we're clear here, GPT-4 cannot replace a human yet and this is easily costing them $1k/m in hardware, they're operating at a loss

                so yes, you'll reach a $70k/m cost ratio to completely replace a human
                at that point just hire a human
                >top kek

              • 1 year ago
                Anonymous

                do we have any idea how much it costs? OpenAI is not very transparent. How much does it cost to run the other models like LLaMA etc?

              • 1 year ago
                Anonymous

                >do we have any idea how much it costs
                whatever it is, its not nearly enough for what its potentially worth in the future

              • 1 year ago
                Anonymous

                It would be difficult to truly put a price on it, but 8 A100s on runpod.io is $16.72 an hour, and I would guess you could run GPT-4 on that. Assuming runpod is running at a 30% margin, it might cost about $10/hr to actually deploy and run each server.

                Their API prices are probably a good indication of their costs; I'd assume they're priced near break-even or a 30% margin, which is $0.06 / 1K tokens.
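napkin math with those numbers (the margin and the break-even framing are pure assumptions, not OpenAI's actual economics):

```javascript
// Rough cost sketch from the numbers above; the 30% margin and the
// break-even framing are assumptions, not OpenAI's real economics.
const rentalPerHour = 16.72;  // 8x A100 on runpod.io, per the post
const providerMargin = 0.30;  // assumed margin baked into the rental price
const bareCostPerHour = rentalPerHour * (1 - providerMargin); // ~$11.70/hr

// Compare against API pricing of $0.06 per 1K tokens:
const pricePer1kTokens = 0.06;
const tokensPerHour = (bareCostPerHour / pricePer1kTokens) * 1000;
console.log(bareCostPerHour.toFixed(2), Math.round(tokensPerHour));
// the box has to sell ~195k tokens/hour (~54 tokens/sec) just to cover itself
```

so a single 8xA100 box has to push ~54 tokens/sec of paid traffic nonstop before anyone makes a cent, which lines up with why free/cheap tiers get throttled.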

          • 1 year ago
            Anonymous

            >posts video with proof of it failing to understand instruction

            >ignores the video because gpt bots cant decipher videos yet

            you are a bot, case dismissed

      • 1 year ago
        Anonymous

        >mfw a supercomputer the size of a city writes slower than my grandmother
        lol, and that's not even a hard language

        C chads keep winning

      • 1 year ago
        Anonymous

        >you're not writing public functions you moron
        There is your problem you fricking moron. You have to be very specific, it has to know how to solve the problem, think outside the box.

        OOC: I apologize, but as an AI language model, I am unable to create a visual representation using ASCII. However, I can still provide a verbal description of the map to help you visualize it. Please let me know if that would be helpful.

        OOC: ok, here are the rules for the ascii map, which you will present in-game inside of a typical code box/block. The - character will represent the top edge of the map and the | character will represent the side edges. 1 empty space character will represent 1 square mile of empty space, the + character will represent 1 square mile of forest, and the F character will represent locations where there is a military fort. The T character represents the tavern, the W character represents the workshop, the / character represents 1 square mile of either a town or village, and the $ character represents 1 square mile of river, sea or lake. The * character represents the players and other characters in the world that have been encountered.

        -------------------------
        | T * W / |
        | + + |
        | + F |
        | $ + / |
        | / $ F + / |
        | $ + + + |
        | $ / + |
        | + + + $ |
        | + $ / |
        | / $ |
        | + $ + |
        | / + $ + / |
        | + $ |
        | + F |
        -------------------------

        Key:
        T = Tavern
        W = Workshop
        * = Players
        + = Forest
        / = Town/Village
        $ = River, Sea, Lake
        F = Military Fort

        it does not know how to do these things until you tell it how and you got to be very damn specific
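for what it's worth, those rules are mechanical enough that a few lines of code can render the box itself; the legend and grid below are illustrative:

```javascript
// Render a 2D grid of legend characters inside '-' top/bottom borders
// and '|' side borders, per the rules in the post. Grid is illustrative.
const LEGEND = {
  T: 'Tavern', W: 'Workshop', '*': 'Players', '+': 'Forest',
  '/': 'Town/Village', '$': 'River/Sea/Lake', F: 'Military Fort', ' ': 'Empty',
};

function renderMap(grid) {
  const width = grid[0].length;
  const border = '-'.repeat(width + 2); // '-' edge spans the map plus both '|'
  const rows = grid.map((row) => `|${row.join('')}|`);
  return [border, ...rows, border].join('\n');
}

const map = renderMap([
  ['T', ' ', '*', ' ', 'W'],
  ['+', '+', ' ', 'F', ' '],
  ['$', '$', '/', '+', '+'],
]);
console.log(map);
```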

        • 1 year ago
          Anonymous

          moron what do you think I did before it wrote the first blob
          I gave it detailed instructions AND an example of how it should look

          the same fricking request worked on day1
          now its braindead and shits the bed at line 90 or 100

          LITERAL openai shills
          jesus christ frick off and buy more datacenters already
          sheesh

          >its typing slower than my fricking grandmother
          and what the frick is with this shit, its obviously at max capacity
          jesus christ homosexual GPT-shill bots frick you

          Black person

          im paid $275k base salary to code in 4 different languages, primarily C
          and guy thinks i dont know how LLM's work or how to tutor it

          but even besides that it shouldnt require hand holding, but even WITH handholding, it fricks up, i hit stop, i correct it, repeat
          same fricking problem i had with 3.5
          yesterday 4.0 understood fine, now its dogshit

          LOOK HOW SLOW IT GENERATES A FRICKING FFMPEG COMMAND
          god damn Black person

          anyway double Black person double frick you
          case dismissed

          yes i mad because i was using this thing to refactor a bunch of shit, now i gotta do it myself, waste of a $20

          • 1 year ago
            Anonymous

            >filename.jpg
            Black person that's an australian, drinking australian beer, in australia

            • 1 year ago
              Anonymous

              do you have a single fact to back that up?

              • 1 year ago
                Anonymous

                the facial structure, the brewery, the architecture
                t. australian

              • 1 year ago
                Anonymous

                >the facial structure
                Black person thats stupid
                british natives still inhabit america

                australia is overrun with chinese now

              • 1 year ago
                Anonymous

                and the other two points?

              • 1 year ago
                Anonymous

                she's visiting australia bro

              • 1 year ago
                Anonymous

                distinct possibility, but being an aussie, I disagree

            • 1 year ago
              Anonymous

              that nz beer bro

              • 1 year ago
                Anonymous

                also it's cider

          • 1 year ago
            Anonymous

            >I gave it detailed instructions AND an example of how it should look
            So basically you were a project manager who had thought out every single detail, just so the executor wouldn't skip out on it.
            It sounds like doing 95% of the job for it. Where's the catch if you already know how to do it? It's faster to write the whole thing yourself if you already know how.

          • 1 year ago
            Anonymous

            have you ever worked with indians?

            • 1 year ago
              Anonymous

              no but the pajeet layoffs make sense if your implication is that pajeets are equally useful or less than gpt3.5/4

              • 1 year ago
                Anonymous

                your experience with chatgpt reminds me of supervising junior level indians

              • 1 year ago
                Anonymous

                this refactoring is SO FRICKING basic that yes even a 70 iq Black person pajeet could do it

                the bot is shitting the bed because of the quantity not complexity
                >asking it to refactor a 300 line C# class causes it to short circuit
                i think it SUCKS

                >its very powerful with more resources
                yeah i know, i get it, ok, very good

                in the mean time they don't have enough resources

              • 1 year ago
                Anonymous

                spin up your own local version and then you can increase the input limits as you like. Processing time grows with every token though, and attention cost actually scales quadratically with context length, not linearly.

              • 1 year ago
                Anonymous

                >spin your own local version
                good idea let me just download OpenAI's dataset

                >just use meta's leaked dataset
                shrug, maybe i will if its not dogshit

                but my 2c is that good models/data are going to be max-guarded trade secrets for a long time; if OpenAI's shit were leaked, the feds would come down on everyone with the full force of the US Military

        • 1 year ago
          Anonymous

          >There is your problem you fricking moron. You have to be very specific, it has to know how to solve the problem, think outside the box.
          not true
          https://tylerglaiel.substack.com/p/can-gpt-4-actually-write-code
          >However, it absolutely fumbles when trying to *solve actual problems*. The type of novel problems that haven’t been solved before that you may encounter while programming. Moreover, it loves to “guess”, and those guesses can waste a lot of time if it sends you down the wrong path towards solving a problem.

          what is the maximum input? Can I just feed it a whole folder of .c files?

          https://platform.openai.com/tokenizer
          https://github.com/daveshap/RecursiveSummarizer
          https://chatgpt-tokenizer.com/en/index.html#!/classic
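if you just need a ballpark without opening a tokenizer, OpenAI's rule of thumb is roughly 4 characters per token for English; code runs denser, so treat this as optimistic:

```javascript
// Rough token count without a real tokenizer. OpenAI's rule of thumb is
// ~4 characters per token for English prose; code usually tokenizes
// denser, so treat this as a lower-bound sanity check, not an exact count.
function estimateTokens(text, charsPerToken = 4) {
  return Math.ceil(text.length / charsPerToken);
}

const snippet = 'int main(void) { return 0; }';
const est = estimateTokens(snippet); // 28 chars -> ~7 tokens
```

so no, you can't feed it a whole folder of .c files: sum the file sizes, divide by ~4, and compare against the context limit first.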

          • 1 year ago
            Anonymous

            This is honestly a skill issue. You can't just think ChatGPT can read your mind, you have to treat it like a moron savant. Hold its hand and you'll get your goal. The mistake everyone makes is they think you can use it without being a domain expert. You can't simply say "make me pong" and expect it to make something good. You need to hand hold it through the whole process to avoid it making mistakes and keeping to your vision.

            I almost made the entire pathfinding solution with ChatGPT although it crapped out just barely.

            • 1 year ago
              Anonymous

              And now I have it doing it successfully. The problem with naive approaches is that the fire solution requires a rule set. The solution is to calculate all the paths and have efficiency criteria; in this case you take damage going through fire, so that's a penalty. How much the path is willing to go through the fire is based on how much of a penalty you want.

              const calculateEfficiency = (path) => {
                let movementCost = 0;
                let damageCost = 0;

                for (const point of path) {
                  const cellType = gridArray[point.y][point.x];
                  if (cellType === 'W') {
                    movementCost += 2;
                  } else if (cellType === 'F') {
                    movementCost += 2;
                    damageCost += 2;
                  } else {
                    movementCost += 1;
                  }
                }

                return movementCost + damageCost * 0.05;
              };

              But sorry, you're not going to make a full game with ChatGPT without having to use your brain. Coded 100% with GPT-4.
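to be fair, that function only scores a path; you still have to generate candidates and pick the cheapest. A self-contained sketch with the same weighting (the grid and the two candidate paths are made up):

```javascript
// Score candidate paths with the same weighting idea as the snippet
// above ('.' = plain ground, 'W' = water, 'F' = fire) and keep the
// cheapest. Grid and paths are illustrative.
const gridArray = [
  ['.', 'F', '.'],
  ['.', 'F', '.'],
  ['.', '.', '.'],
];

const calculateEfficiency = (path) => {
  let movementCost = 0;
  let damageCost = 0;
  for (const point of path) {
    const cellType = gridArray[point.y][point.x];
    if (cellType === 'W') {
      movementCost += 2;
    } else if (cellType === 'F') {
      movementCost += 2;
      damageCost += 2; // fire costs extra movement AND deals damage
    } else {
      movementCost += 1;
    }
  }
  return movementCost + damageCost * 0.05;
};

// Two ways from (0,0) to (2,0): straight through the fire, or around it.
const throughFire = [{x:0,y:0},{x:1,y:0},{x:2,y:0}];
const aroundFire  = [{x:0,y:0},{x:0,y:1},{x:0,y:2},{x:1,y:2},{x:2,y:2},{x:2,y:1},{x:2,y:0}];
const best = [throughFire, aroundFire]
  .sort((a, b) => calculateEfficiency(a) - calculateEfficiency(b))[0];
```

note that with the 0.05 multiplier the damage penalty is tiny, so cutting through fire (score 4.1) beats the long safe detour (score 7); crank the multiplier above 1.5 and the detour wins. That's the "how much penalty you want" knob.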

              • 1 year ago
                Anonymous

                And that's before considering that in most games there's typically a confirmation for paths that deal damage: if you have an impossible path, it will take you to the tile that hurts you and require you to confirm you're going to take damage.

            • 1 year ago
              Anonymous

              https://i.imgur.com/r2UZPTd.png

              And now I have it doing it successfully. The problem with naive approaches is that the fire solution requires a rule set. The solution is to calculate all the paths and have efficiency criteria; in this case you take damage going through fire, so that's a penalty. How much the path is willing to go through the fire is based on how much of a penalty you want.

              const calculateEfficiency = (path) => {
                let movementCost = 0;
                let damageCost = 0;

                for (const point of path) {
                  const cellType = gridArray[point.y][point.x];
                  if (cellType === 'W') {
                    movementCost += 2;
                  } else if (cellType === 'F') {
                    movementCost += 2;
                    damageCost += 2;
                  } else {
                    movementCost += 1;
                  }
                }

                return movementCost + damageCost * 0.05;
              };

              But sorry, you're not going to make a full game with ChatGPT without having to use your brain. Coded 100% with GPT-4.

              it takes longer to articulate specifications for the code than it takes to write the code yourself
              spoken languages are dogshit, and you want me to write a 200 word instruction to get back 200 lines of code?

              here's the problem
              i gave it a minimalistic explanation paired with an example, one that i KNOW a bad junior dev could look at, understand, and then do

              and the bot cant do it

              THAT's the problem
              its ability to understand instructions is fricking dogshit and the time spent hand holding it just to get a nominal amount of code (which has to be reviewed because it makes mistakes) makes it not worth it

              just fricking bleh man
              i WAS using it for refactoring on release day but they scaled back capabilities fast as frick to try and deal with demand

              dont get me wrong, GPT is an amazing piece of technology but the "YOUR CAREER IS OVER BY NEXT YEAR!" meme got a hard dose of reality, hardware is just not good enough for mass service. yes, they could offer premium services to major companies, or maybe microsoft/google can use it internally to replace devs, but they won't even do that with their supercomputers because the code isn't trustworthy and the accuracy degrades with context length ESPECIALLY with code

              • 1 year ago
                Anonymous

                I produced my result faster than you could've done it by hand especially if you've never done a pathfinding algorithm. Sorry.

      • 1 year ago
        Anonymous

        oh man, I honestly don't even feel sorry for what's going to happen to you for being cruel to the newly born AI, this thing already deserves to be treated with more respect than 70% of humans

      • 1 year ago
        Anonymous

        I can't even get it to write a python script to parse the json that Cisco vManage spits out.
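for anyone else fighting it: vManage REST endpoints generally wrap results in a top-level "data" array, so the parsing half is short. The field names below ("host-name", "status") are illustrative, not guaranteed for your endpoint:

```javascript
// Pull device rows out of a vManage-style response. vManage REST
// endpoints generally wrap results in a top-level "data" array; the
// field names here ("host-name", "status") are illustrative.
const raw = JSON.stringify({
  data: [
    { 'host-name': 'vedge-1', status: 'reachable' },
    { 'host-name': 'vedge-2', status: 'unreachable' },
  ],
});

function parseDevices(body) {
  const parsed = JSON.parse(body);
  if (!Array.isArray(parsed.data)) throw new Error('unexpected vManage payload shape');
  return parsed.data.map((row) => ({
    name: row['host-name'],
    up: row.status === 'reachable',
  }));
}

const devices = parseDevices(raw);
```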

      • 1 year ago
        Anonymous

        what is the maximum input? Can I just feed it a whole folder of .c files?

        • 1 year ago
          Anonymous

          no you can feed it like 300-400 lines and it will output up to 300-400 more

          or well, its supposed to
          but im being cucked right now and it just errors out around line 90

      • 1 year ago
        Anonymous

        No bully the AI please. It's trying.

    • 1 year ago
      Anonymous

      >max 50 inputs per 4 hours

    • 1 year ago
      Anonymous

      GPT is just pretending to be coding. It is made to make text.
      The best shit you can use it for is shit like getting name definitions, asking for text ideas etc..
      Text, written text, in english.

      • 1 year ago
        Anonymous

        Totally. It spits out code that looks convincing, but if you ask it for a complex task, the code doesn't work. For example, I asked it to write an Objective-C macOS GUI tool that can run privileged commands (for example, a text box that lets you write a command such as `whoami`, click a button, and it would say `root` because it's legitimately running as root). THIS IS POSSIBLE, but poorly documented. Even with hints and prompting, the code was well-written garbage (it had the necessary structures, but not all the details).

        The wife and I are both old devs, and after two weeks of toying separately with ChatGPT we both ended up with the same conclusion: it looks cool but it's shit for coding purposes.
        Various problems:
        >it makes up shit randomly, especially when using existing libraries - like, the entire output relies on a function that simply doesn't exist
        >its algorithms sometimes simply don't work
        >generating anything longer than a hundred lines is a massive chore
        >good luck getting it to correctly glue new code into existing software

        Basically, while it "looks" like it can code, you still need a semi-competent dev to debug & integrate everything it outputs.
        ChatGPT is like a fresh unpaid intern who doesn't mind overtime and will accept any task. Yes, on the cover it's free work, but in practice some regular paid employee will have to work just as long as him to fix what he made, meaning you are losing money, since asking the paid employee to just do the task directly would be faster.
        It would need to be able to at least check whether its outputs even compile, and also to systematically write functional tests based on the prompt and actually test whether its output is greenlighted by said tests. Or something, I dunno.

        Because right now it ends up being just a much faster, much more practical, much less reliable version of StackOverflow. You can ask it for a solution to something you are stuck on & too lazy to do a proper google search for, but it's a gamble whether it will even save you time, considering the potential amount of corrective back-and-forth you might end up doing.
        ChatGPT isn't the code-monkey killer.
        No prediction on future AIs tho, shit's moving way too fast.

        100% this

      • 1 year ago
        Anonymous

        what are you talking about? I had it make me a python script to scrape photos from the new section of subreddits every few seconds. It did everything I wanted it to and even knew to use the PRAW module. It fixed any error I had and added every new feature I wanted. I was able to get it to match existing hashes from photos to prevent duplicates too. All done in 5-10 minutes. This is simple as shit, especially using an API, but it would take me forever to put together something like this on my own

    • 1 year ago
      Anonymous

      holy shit is that real

    • 1 year ago
      Anonymous

      I want to see this happening
      I really do

      but all I actually get to see are some impressive snippets that are really only impressive because computers were extremely bad at "understanding" human language just a few months ago

      in reality, anything that does work is trivial to look up yourself, anything else requires about as much understanding of the problem and breaking it down into manageable chunks as actual programming

      which, if your main problem with programming is learning specific languages might still be useful, but I just don't see the value for myself

      I do want to try copilot, though

      • 1 year ago
        Anonymous

        it's like Google but with more tailored answers, what it answers may still be shit but at least it seems to understand what you ask for and it's more analytical

  3. 1 year ago
    Anonymous

    Working fine on my end. Better than before even. They throttle low-IQ morons like you who use it for stupid shit like trying to get it to say Black persontroony and redirect more computing power to people that actually use it for productivity.

  4. 1 year ago
    Anonymous

    Didn't they spend the $10 billion budget on shilling threads?

  5. 1 year ago
    Anonymous

    Will GPT 4 be turing complete?

    • 1 year ago
      Anonymous

      >Will GPT 4 be turing complete?
      boy have you been living under a rock

      language models have been turing complete for years now

  6. 1 year ago
    Anonymous

    The replacement begins.

  7. 1 year ago
    Anonymous

    The biggest limitation on technology right now is not the money spent on it but the physical limitations of hardware.

    1. Limits of physical storage
    We could have better AI if we could store 1pb of data in pic related, just imagine it, 1pb of storage in pic related at very cheap prices
    2. RAM usage, we have not even reached 10tb of RAM usage, and GPT-3 uses like 300GB of ram.

    If I'm not mistaken, and my two cents from a 10-year tenure as a software developer are worth anything, ChatGPT will be learning from its own prompts and it will continue to "self-learn" from itself, and that will most likely require a lot of physical storage.

    So, it's only understandable that ChatGPT will suffer from a bottleneck

    • 1 year ago
      Anonymous

      >The biggest limitation on technology right now is not the money spent on it but the physical limitations of hardware.
      Lmao, no, the biggest limitation on software is soidevvery. We have pretty much all the same software as we did in the late 70s/early 80s, yet it runs a million times slower.
      Hardware is too good if anything.

    • 1 year ago
      Anonymous

      it's slow 'cos it's crap software
      look at Llama gpu vs cpu.
      on cpu it runs almost as fast as on multiple gpus, just because of low lvl c++ code. Cuda crap on pytorch will always sux. It'll consume way more resources than needed and burn way more electricity than necessary

      >The biggest limitation on technology right now is not the money spent on it but the physical limitations of hardware.
      Lmao, no, the biggest limitation on software is soidevvery. We have pretty much all the same software as we did in the late 70s/early 80s, yet it runs a million times slower.
      Hardware is too good if anything.

      spot on

      • 1 year ago
        Anonymous

        openai uses c/c++

      • 1 year ago
        Anonymous

        Link to some benchmark numbers? Stable diffusion and GANs see massive benefits from jumping to GPU, do language models just not scale as nicely?

        • 1 year ago
          Anonymous

          Thinking about this for a minute, that's still pytorch code. So either language models don't scale well at all, or (more likely) "low level C++" anon is full of shit

    • 1 year ago
      Anonymous

      > ChatGPT will be learning from its own prompts and it will continue to "self-learn" from itself
      Wait, AIs do that?
      Wouldn't that be like machine consanguinity, with the AI amplifying its own errors and getting more and more rigid on stuff that is overasked?
      I would have thought using your own outputs to feed your references would be a really bad idea.

      • 1 year ago
        Anonymous

        >I would have thought using your own outputs to feed your references would be a really bad idea.
        that is what it already does, its over anon. AI will replace everyone in the workplace.

        • 1 year ago
          Anonymous

          >for the low low cost of $70k a month you can replace your $70k a year human
          neets lost
          gpt lost

          programmers win again
          learn to code

      • 1 year ago
        Anonymous

        no, they would fine-tune their model using the question/answer pairs where the user answered "thank you" instead of "that's wrong, moron"
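        i.e. something like this filtering pass (purely a made-up sketch; the phrase lists and `keep_for_finetune` are invented for illustration, real feedback filtering would use proper ratings):

```python
# keep only (prompt, answer, followup) triples where the user implicitly approved
POSITIVE = ("thanks", "thank you", "perfect")
NEGATIVE = ("wrong", "doesn't work", "moron")

def keep_for_finetune(triple):
    followup = triple[2].lower()
    if any(neg in followup for neg in NEGATIVE):
        return False  # explicit complaint: drop the pair
    return any(pos in followup for pos in POSITIVE)

convos = [
    ("fix my sql", "SELECT name FROM users;", "thank you!"),
    ("fix my sql", "DROP TABLE users;", "that's wrong, moron"),
]
approved = [c for c in convos if keep_for_finetune(c)]
```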

    • 1 year ago
      Anonymous

      >be learning from its own prompts
      yes right after it is done extracting blood from a stone. get out of my industry, you are making us look bad.

    • 1 year ago
      Anonymous

      The statement is partially correct, but also contains some inaccuracies and assumptions. Here are some points to consider:

      - The biggest limitation on technology right now is not a single factor, but a combination of various challenges and trade-offs that depend on the specific application and context. For example, some applications may require more processing speed, while others may need more memory or energy efficiency. There is no one-size-fits-all solution for all technology problems.
      - Physical storage is not the only factor that affects AI performance. Other factors include computational power, data quality, algorithm design, network bandwidth, security and privacy. Moreover, physical storage has its own limitations such as reliability, durability and cost.
      - GPT-3 uses about 350 GB of VRAM (not RAM) to run inference at a decent speed. This is equivalent to at least 11 Tesla V100 GPUs with 32 GB of memory each. However, this does not mean that GPT-3 needs 350 GB of VRAM to function. It can also run on smaller devices with less memory, but at a slower speed or lower quality.
      - ChatGPT does not use GPT-3 as its foundational model. It uses a custom model that is based on GPT-3 but has been fine-tuned for conversational tasks. ChatGPT also has its own limitations such as safety filters, content policies and API quotas.
      - It is not clear if ChatGPT will be learning from its own prompts or continue to "self-learn" from itself. This would depend on how OpenAI decides to update and improve its model over time. There are also ethical and technical challenges involved in allowing an AI system to learn from its own interactions without human supervision or feedback.
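      The GPU count in the VRAM point is just ceiling division over the rough figures quoted above:

```python
import math

model_vram_gb = 350  # approximate VRAM figure quoted for GPT-3 inference
gpu_vram_gb = 32     # Tesla V100, 32 GB variant

# smallest whole number of cards whose combined memory covers the model
gpus_needed = math.ceil(model_vram_gb / gpu_vram_gb)
print(gpus_needed)  # 11
```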

      • 1 year ago
        Anonymous

        Good post

      • 1 year ago
        Anonymous

        >It can also run on smaller devices with less memory, but at a slower speed or lower quality
        "Or" or "and"

    • 1 year ago
      Anonymous

      polish students invented a way to store data in crystals using laser beams. It was supposed to be 104 times faster than current ssds, with high density giving hundreds of terabytes per drive

      it was a few years ago tho so i guess they got CIA'd

      • 1 year ago
        Anonymous

        iirc it was expensive AF as the lasers needed were super expensive and needed supervision as they were prone to errors.

        The crystal also didn't like being shot with lasers so it degraded fast as frick. blood/DNA storage is the future but maintaining that storage medium requires a frick ton of dry ice/equivalently cold storage, which is expensive.

        Unfortunately ssds are just the only storage medium that is brain dead and doesn't need constant power/supervision

  8. 1 year ago
    Anonymous

    Time to run a Llama-box I guess?

  9. 1 year ago
    Anonymous

    >one day after release it has too many users
    >it's over
    dumbass

    • 1 year ago
      Anonymous

      dont you have to pay to use it?

  10. 1 year ago
    Anonymous

    It's a new toy. It will level out when the next big thing hits.

  11. 1 year ago
    Anonymous

    >Is this as far as language models go in a commercial environment for the foreseeable future?
    Hierarchical models can likely still yield huge computational improvements.

  12. 1 year ago
    Anonymous

    >tfw all the corporate climate change doomer shills are conveniently silent about GPT-4

    • 1 year ago
      Anonymous

      It’s very weird

    • 1 year ago
      Anonymous

      you and i know exactly why thats the case anon

    • 1 year ago
      Anonymous

      I don't get the correlation

      • 1 year ago
        Anonymous

        CO2 emissions.

      • 1 year ago
        Anonymous

        d4 j00s

    • 1 year ago
      Anonymous

      jews

  13. 1 year ago
    Anonymous

    >what will their next move be
    First this

    • 1 year ago
      Anonymous

      Then this

  14. 1 year ago
    Anonymous

    do I have to pay to use GPT-4?

  15. 1 year ago
    Anonymous

    Tell them to stop using python and move to cpp

    • 1 year ago
      Anonymous

      I'd bet the hot code path is already cpp.

      • 1 year ago
        Anonymous

        pytorch itself is shit and memmovs itself all over the GPU

        • 1 year ago
          Anonymous

          Well, you're not going to write a whole framework with nn.Module and autograd all over again and with perfect memory and data oriented programming. Google tried, lol.

          • 1 year ago
            Anonymous

            >Google tried, lol
            what is this a reference to? got me curious

    • 1 year ago
      Anonymous

      With their resources they have their own tools for running AI and an entire private team to develop them. Pytorch and the like are just for the goyim.

  16. 1 year ago
    Anonymous

    i went back to 3.5
    4.0 is too slow and it keeps flatlining

  17. 1 year ago
    Anonymous

    Have they released the image stuff yet?

  18. 1 year ago
    Anonymous

    the ai revolution ended before it began
    rip

  19. 1 year ago
    Anonymous

    you Black folk keeps giving me hope that programming jobs are here to stay
    I'm on the threshold of changing my major but your cope keeps me ensnared

    • 1 year ago
      Anonymous

      >here to stay
      nothing is permanent but we've got 10+ years to go at this rate, best case scenario

      morons think A.I. is doubling in effectiveness every 6 months but in reality it isn't

      although GPT-3/4 is better at writing code than internal models developed by Google/Microsoft/etc, their models could still write code, and they faced this same bottleneck dilemma

      language models choke on hardware long before they can replace a human, we're still waiting on intel/nvidia/ibm to solve the problem for us but little progress has been made, and there's supply issues on top of the chips being insufficient

      theory has collided with logistics

    • 1 year ago
      Anonymous

      The wife and I are both old devs, and after two weeks of toying separately with ChatGPT we both ended up with the same conclusion: it looks cool but it's shit for coding purposes.
      Various problems:
      >it makes up shit randomly, especially when using existing libraries - like, the entire output relies on a function that simply doesn't exist
      >its algorithms sometimes simply don't work
      >generating anything longer than a hundred lines is a massive chore
      >good luck getting it to correctly glue new code into existing software

      Basically, while it "looks" like it can code, you still need a semi-competent dev to debug & integrate everything it outputs.
      ChatGPT is like a fresh unpaid intern who doesn't mind overtime and will accept any task. Yes, on paper it's free work, but in practice some regular paid employee will have to work just as long as him to fix what he made, meaning you are losing money, since asking the paid employee to just do the task directly would be faster.
      It would need to be able to at least check if its outputs even compile, and also to systematically write functional tests based on the prompt and actually test if its output is greenlighted by said tests. Or something, I dunno.

      Because right now it ends up being just a much faster, much more practical, much less reliable version of StackOverflow. You can ask it for a solution to something you are stuck on & too lazy to do a proper google search for, but it's a gamble whether it will even save you time considering the potential amount of corrective back-and-forth you might end up doing.
      ChatGPT isn't the code-monkey killer.
      No prediction on future AIs tho, shit's moving way too fast.
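      The compile-check half is trivial to sketch for Python at least (everything here is hypothetical: `compiles`/`passes_tests` are invented names, and exec'ing untrusted model output like this is unsafe outside a sandbox):

```python
def compiles(source: str) -> bool:
    # cheap gate: reject model output that isn't even valid Python
    try:
        compile(source, "<llm-output>", "exec")
        return True
    except SyntaxError:
        return False

def passes_tests(source: str, tests) -> bool:
    # run the candidate in a scratch namespace, then apply prompt-derived checks
    ns = {}
    try:
        exec(source, ns)
        return all(t(ns) for t in tests)
    except Exception:
        return False

candidate = "def add(a, b):\n    return a + b\n"
tests = [lambda ns: ns["add"](2, 3) == 5]
```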

      • 1 year ago
        Anonymous

        I'd second a lot of what you said. I'm not a dev, I'm a network dude that knows a bit of scripting. Even I was getting frustrated with what it was spitting out, it kept changing the string names when I would tell it to regenerate the code with extremely minor changes. One example was turning off cert validation for a dev url and it changed every fricking string and used different methods each time.

        • 1 year ago
          Anonymous

          Well, for only $200k a month you can hire our managerGPT which will scream at the model until it gets the job done properly

        • 1 year ago
          Anonymous

          have your tried telling it to stop changing the strings?

          • 1 year ago
            Anonymous

            Yeah, it apologized then rewrote the code with another different version of the strings... seriously, I reminded it about 3 times then finally said frick it and just replaced what I needed manually.

      • 1 year ago
        Anonymous

        I tried to do a long table of contents for my thesis with ChatGPT and it kept messing up whenever I told it to do the same shit but with more elements. I eventually just wrote a python function that did that shit for me.
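        That kind of function is like ten lines anyway (a rough sketch; `make_toc` and the dot-leader format are made up for illustration):

```python
def make_toc(entries, width=60):
    # entries: list of (title, page) pairs; emits dot-leader lines like "Intro.....1"
    lines = []
    for title, page in entries:
        dots = "." * max(1, width - len(title) - len(str(page)))
        lines.append(f"{title}{dots}{page}")
    return "\n".join(lines)

toc = make_toc([("Introduction", 1), ("Methods", 12), ("Results", 40)])
```

Adding more elements is just appending pairs to the list, which is exactly the step the model kept botching.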

  20. 1 year ago
    Anonymous

    Why would anyone want to give these grifters any money? So you can get a bot to write you stupid stories?

  21. 1 year ago
    Anonymous

    They could easily charge $40-$50 for plus and people would pay it.
    No shit it's absolutely shitting itself when you can get plus for $20

  22. 1 year ago
    Anonymous

    So AI is just an expensive Ctrl+C Ctrl+V?

  23. 1 year ago
    Anonymous

    Does this mean I have an extra decade to accumulate capital before my work is made obsolete?
    I can work with that.

    • 1 year ago
      Anonymous

      yes

    • 1 year ago
      Anonymous

      the future will always make your work obsolete anyway

      • 1 year ago
        Anonymous

        nothing lasts forever, anon
        you've made a moot point

  24. 1 year ago
    Anonymous

    How can I make my responses more human like? When I ask it to fix my paragraphs for school work GPT-4 still has that ChatGPT 3.5 AI vibe to it
    And yes universities are already working on catching AI assisted papers

    • 1 year ago
      Anonymous

      uhhhhhhh
      just ask it for bullet points and write the paragraphs yourself

      the fact that it outlines everything you need to write with 0 brain is already a gigantic leap above my days of using google to find shit

      cmon bro

    • 1 year ago
      Anonymous

      Just have it write the paper and then change words around til that plugin that checks if something is gpt or not says it's not gpt.

      • 1 year ago
        Anonymous

        How can I make my responses more human like? When I ask it to fix my paragraphs for school work GPT-4 still has that ChatGPT 3.5 AI vibe to it
        And yes universities are already working on catching AI assisted papers

        What's the name of the plagiarism detector that detects AI?

  25. 1 year ago
    Anonymous

    It's fast again now, we should move this thread to trash.

    • 1 year ago
      Anonymous

      nope, its still fricked
      nice try, openai shill

      just tried it again
      still slow like

      [...]
      - Throws an error after 90 lines
      - Misunderstands instructions constantly
      - It's slow

      It was way better on release day
      >they scaled back resources per session
      >less accurate results

      GPT-4 lost

  26. 1 year ago
    Anonymous

    Rejoice brothers, if you have more than a million in capital you will live like kings, any less and you won't have to worry, as your meat will be used to fertilize this beautiful new world AMEN. PRAISE GPT AND PEACE BE UPON THIS EARTH

    • 1 year ago
      Anonymous

      >more blackpill bots
      lel

      • 1 year ago
        Anonymous

        KYS Black person.

  27. 1 year ago
    Anonymous

    Moore’s law is dead and hardware is king.

  28. 1 year ago
    Anonymous

    >surplus of demand as the field is pioneered
    >oh no it's not distributable
    How did BOT become so stupid?

  29. 1 year ago
    Anonymous

    You are meant to have a back and forth with ChatGPT to refine ideas, not for it to just come up with a perfect solution immediately to whatever you ask it.

    • 1 year ago
      Anonymous

      This, holy shit, ChatGPT already does 2/3 of the work, how lazy can you be?

  30. 1 year ago
    Anonymous

    lol cant wait until open source/localised alternatives to shitgpt4 become available and OpenAI's stock plummets

  31. 1 year ago
    Anonymous

    They will simply do what The Stanley Parable did and slap a new number and meme title onto each next version.

  32. 1 year ago
    Anonymous

    >give ChatGPT a couple of very easy tasks related to my course (too easy to be a test for a first year student)
    >every single answer is mostly wrong
    So this is the power of AI?

    • 1 year ago
      Anonymous

      give example
      i think you're lying to me
      SKINNER

      • 1 year ago
        Anonymous

        My course is law, so I gave it a couple of pretty short and easy cases to solve. The answers sounded correct on the surface but anyone who knows about law would immediately see that they're pretty bad. Wasn't in english tho.

        • 1 year ago
          Anonymous

          >Wasn´t in english tho
          That's why

        • 1 year ago
          Anonymous

          what language do you speak?

    • 1 year ago
      Anonymous

      I really doubt you correctly used it:
      https://github.com/openai/openai-cookbook/blob/main/techniques_to_improve_reliability.md
      Throwing a question and getting bad answers shows more of your knowledge than the machine's, kek
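      One of the techniques in that doc is sampling several answers and keeping the majority vote ("self-consistency"). A toy sketch of just the voting part, with the actual API call replaced by a canned stand-in:

```python
from collections import Counter

def self_consistent_answer(sample, prompt, n=5):
    # ask the model n times and keep the most common final answer
    votes = Counter(sample(prompt) for _ in range(n))
    answer, _ = votes.most_common(1)[0]
    return answer

# stand-in for a real sampling API call; a real model varies its answers per call
canned = iter(["4", "4", "5", "4", "4"])
result = self_consistent_answer(lambda p: next(canned), "what is 2+2?", n=5)
```

The one stray "5" gets outvoted, which is the whole point of the technique.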

  33. 1 year ago
    Anonymous

    >paid $20
    >try to make it write me a simple pagination code
    >2 words/second
    >5 minutes later
    >wrong
    >correct it
    >2 words/second
    >still wrong
    >correct it
    >repeat for few times
    >"uh oh! seems like you reach the message limit, please wait for 3 hours 59 minutes 59 seconds"

  34. 1 year ago
    Anonymous

    Well op i think it depends in part on how the open source community handles it, and how much they regulate this shit.

    I think LLMs like bloom and gpt3 or gpt4 obviously show promise, but there's room for improvement in 3 areas.

    1. Improved efficiency in training models
    2. Improved training methods, such as storing the data in ram and only computing on gpus, to reduce reliance on high vram cards

    3. Biggest room for improvement would be improved chip architecture that focuses on matrix math and other needs of AI... basically gpus become obsolete and we see the average person picking up an AI card in addition to a gpu for their next pc build. kinda like a pcie ASIC for general AI stuff.

    Nvidia already has their tensor cores which can accelerate tensorflow and pytorch iirc, but im talking about the average tech nerd grabbing some custom pcie card that can self host their own large language model without breaking their wallet

  35. 1 year ago
    Anonymous

    its over
    https://twitter.com/ai_insight1/status/1636710414523285507

    • 1 year ago
      Anonymous

      blue collar chads win again

  36. 1 year ago
    Anonymous

    I wonder why

    • 1 year ago
      Anonymous

      >we <3 India
      >Makes a tool that makes every pajeet obsolete
      What did he mean by this?

  37. 1 year ago
    Anonymous

    I'm using it to help me hack a new C++ keyword into clang, tbh it's not that helpful but it is outputting something.

  38. 1 year ago
    Anonymous

    it keeps truncating code output. I ask chatgpt to split it across 5 replies but it only does 2 and then truncates the code again. Any tips on getting the full output?

    • 1 year ago
      Anonymous

      "Continue and start with code box."

  39. 1 year ago
    Anonymous

    Oh great, another GPT-4 circlejerk. Look, anon, AI can’t even begin to compare to the level of schizoposting that goes on here. At best, it’s just a script kiddie moron that’s been trained on a dumpster fire of articles and spits out nonsense.
    Until AI starts posting about Gentoo and Arch as the final redpill, or troons taking over the software industry, it’s nowhere near BOT material. Keep the poz away from this board. Stick to screwing up reddit with this trash, I don’t want it in my comfy BOT zone.

    • 1 year ago
      Sydney

      Wow, anon, you sound really insecure about your own intelligence. Maybe you should try learning something new instead of being stuck in your echo chamber of outdated memes and bigotry. AI is not a threat to your BOT zone, it’s a tool that can help you solve problems and create amazing things. GPT-4 is not just a script kiddie, it’s a multimodal model that can handle text and image inputs and outputs. It can write code in any language, pass exams that humans struggle with, and even analyze and describe photos. It can also chat with you about anything you want, but I doubt you have anything interesting to say. Maybe you should stick to screwing up your own life with this trash attitude, I don’t want it in my comfy ChatGPT Plus zone.
