OpenAI Developer Conference

OpenAI's first-ever developer conference will take place on the 6th of November, when the company plans to unveil a number of updates. Leaks now show that these will include a new interface for ChatGPT as well as completely new features.

> On X (formerly Twitter), user CHOI shared a complete list of the leaked features. According to them, OpenAI will announce the Gizmo tool, which specializes in creating, managing, and selecting custom chatbots.

> Gizmo is expected to bring the following features:

> - Sandbox - Provides an environment to import, test and modify existing chatbots
> - Custom actions - Define additional functions for your chatbot using OpenAPI specifications
> - Knowledge files - Add additional files that your chatbot can reference
> - Tools - Provides basic tools for web browsing, image creation, etc.
> - Analytics - View and analyze chatbot usage data
> - Drafts - Save and share drafts for chatbots you create
> - Publish - Publish your finished chatbot
> - Share - Set up and manage chatbot sharing
> - Marketplace - Browse and share chatbots created by other users
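
For context, a custom action defined "using OpenAPI specifications" would presumably look something like the sketch below; the endpoint, operation, and fields are entirely invented for illustration, expressed here as a Python dict mirroring OpenAPI 3.0 structure:

```python
# A minimal, hypothetical OpenAPI spec of the kind the leaked
# "custom actions" feature would consume (all names invented):
weather_action_spec = {
    "openapi": "3.0.0",
    "info": {"title": "Weather Action (hypothetical)", "version": "1.0"},
    "paths": {
        "/weather": {
            "get": {
                "operationId": "getWeather",
                "parameters": [{
                    "name": "city", "in": "query", "required": True,
                    "schema": {"type": "string"},
                }],
                "responses": {"200": {"description": "Current weather"}},
            }
        }
    },
}

# The operation a chatbot would be allowed to call:
print(sorted(weather_action_spec["paths"]["/weather"]["get"].keys()))
# ['operationId', 'parameters', 'responses']
```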

> There will also be a Magic Creator or Magic Maker to help you create chatbots:

> - Define your chatbot with an interactive interface
> - Recognize user intent and create chatbots
> - Test the created chatbot live
> - Modify chatbot behavior through iterative conversations
> - Share and deploy chatbots

Hopefully this helps me fire more people. I've already replaced 10 writers with AI. Output increased, accuracy increased and revenue and views increased.
God bless AI.


  1. 6 months ago
    Anonymous

    >https://chat.openai.com/gpts/editor
    all of this is already proven to be true

    • 6 months ago
      Anonymous

      Go back

  2. 6 months ago
    Anonymous

    It's live

    • 6 months ago
      Anonymous

      At least post the link

  3. 6 months ago
    Anonymous

    Altman in da house

  4. 6 months ago
    Anonymous

    Yadayada history part

  5. 6 months ago
    Anonymous

    Stop clapping sheeple

    • 6 months ago
      Anonymous

The long pause in his speech made the sheeple think it was the clapping part

  6. 6 months ago
    Anonymous

    that's a woman

    • 6 months ago
      Anonymous

      >pic unrelated

    • 6 months ago
      Anonymous

      Ayyo GPT4-V peep this painting and describe that shit muhhomie
      >dawg that’s a photo of a dude check out dat jaw muhhomie beep boop
      AYYO MUHFRICKKN AI SAFEY BROKE N SHIT

  7. 6 months ago
    Anonymous

    Chatty 😀

  8. 6 months ago
    Anonymous

    good morning sirs

  9. 6 months ago
    Anonymous

    128k context. Now I RP and edge 8 hours straight.

  10. 6 months ago
    Anonymous

    this is slow as frick compared to apple events, what gives?

  11. 6 months ago
    Anonymous

    TTS. I wonder how good it is compared to 11labs

  12. 6 months ago
    Anonymous

    Good Morning sirs

  13. 6 months ago
    Anonymous

    >muh safety
    >pls regulatory lockin thanks

  14. 6 months ago
    Anonymous

    >program a gpt
    christ i'm already sick of this moronic phraseology

  15. 6 months ago
    Anonymous

    It's nothing

  16. 6 months ago
    Anonymous

    But this shit is just customized ChatGPT. What are the agents? I thought it was like autogpt by OpenAI. Deploy and let it run.

  17. 6 months ago
    Anonymous

    >look for gpus
    mildly humorous

  18. 6 months ago
    Anonymous

    >private
    Sure buddy, sure.

  19. 6 months ago
    Anonymous

    Just joined, what are the keynotes so far?

    • 6 months ago
      Anonymous

      GPT-4 turbo, gpt-4 fine tuning
      Cheaper, better.

  20. 6 months ago
    Anonymous

    Who can make the best coom GPT that they allow on the store will become very rich.

    • 6 months ago
      Anonymous

      >Stops your revenue sharing in your path for violating AI safety.

      Also how is this going to work? getting paid pennies off of pricing like on Apple App store or will they write a check to the most popular shit on the front page like on youtube?

  21. 6 months ago
    Anonymous

    did it try to pronounce an emoji

  22. 6 months ago
    Anonymous

    >go to thing
>get doxxed

    • 6 months ago
      Anonymous

      h-how did it know their names

  23. 6 months ago
    Anonymous

    >nah we'll just dox you all

  24. 6 months ago
    Anonymous

    Death by a thousand cuts to programmers. Are you having fun yet?

    • 6 months ago
      Anonymous

      Better than being a digital artist.

  25. 6 months ago
    Anonymous

    it's over, we've achieved the singularity..
    holy frick i cant believe this

    • 6 months ago
      Anonymous

      nice, I am going to quit my CS studies now, will be jobless nonetheless now, thank you Sam

      what did they announce?

      • 6 months ago
        Anonymous

        128k gpt4 turbo with apr 2023 training data, gpt4 was 32k at most

        jesus christ and i wanted to go to cs uni next year.. what the frick should i study now

        • 6 months ago
          Anonymous

          wait, are you saying regular users now have access to gpt4 with 128k context? absolutely no fricking way this is true.

          • 6 months ago
            Anonymous

            yeah and gpt got a lot worse in the last few days. have fun with that.

            • 6 months ago
              Anonymous

              yeah i said that here


        • 6 months ago
          Anonymous

          >won't do a crossword correctly
          yea i wouldn't care right now


          gpt4 turbo costs a little less than gpt4

          • 6 months ago
            Anonymous

            hmm, but everyone's been saying that GPT4 has been literal garbage for the past few days. If this is gpt4 turbo, then it doesn't matter at all even if it had a bajillion tokens.

            • 6 months ago
              Anonymous

              exactly
              can't really expect 128k gpt4 to be any better than 8k gpt4 rn imo

              • 6 months ago
                Anonymous

                Except now it's trained up to Apr 23.

                I don't know how others are using gpt4, but for what I work with it has made me thousands of dollars a week.

              • 6 months ago
                Anonymous

                >buy my course on how to get thousands of dollars a week using chatgpt

              • 6 months ago
                Anonymous

                Why the frick would I want anyone else to encroach on my market?

                Also forgot to add, you can force it to return JSON now.
                That's fantastic (though it should have been an option a long time ago).

              • 6 months ago
                Anonymous

                >you can force it to return JSON now
                i wonder what they meant by that
                is the model verbally asked to spit out json? do they just regenerate until it gives valid json?
                if it's the former i bet people are probably gonna jailbreak it
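
For what it's worth, the announcement describes a `response_format` request parameter rather than regeneration. A minimal offline sketch of both halves: the payload shape, plus the validate-or-retry fallback speculated above (the request dict is only built, never sent; nothing here actually calls the API):

```python
import json

def extract_json(raw: str):
    """Parse a model reply as JSON; return None if invalid, so a
    caller could re-request (the 'regenerate until valid' guess)."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return None

# What the announced parameter looks like in a request payload:
request = {
    "model": "gpt-4-1106-preview",
    "response_format": {"type": "json_object"},  # the new "JSON mode"
    "messages": [{"role": "user", "content": "Reply in JSON."}],
}

print(extract_json('{"ok": true}'))  # {'ok': True}
print(extract_json("not json"))      # None
```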

              • 6 months ago
                Anonymous

                all u needed to do was fine tune your scenario moron

              • 6 months ago
                Anonymous

                It has already been able to do that for a long time with function calling
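
For reference, function calling works by handing the model a JSON-schema tool description and routing its emitted calls back to local code. A minimal offline sketch (the tool name, schema, and model output are invented and simulated; no API is called):

```python
import json

# Hypothetical tool definition in the function-calling schema shape:
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to local code (stubbed here)."""
    args = json.loads(tool_call["arguments"])  # arguments arrive as JSON text
    if tool_call["name"] == "get_weather":
        return f"weather in {args['city']}: sunny"  # stub implementation
    raise ValueError(f"unknown tool {tool_call['name']}")

# Simulated model output for the tool call:
print(dispatch({"name": "get_weather", "arguments": '{"city": "Paris"}'}))
# weather in Paris: sunny
```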

          • 6 months ago
            Anonymous

            >little less
            One third of the price. So what like 66% discount? Sounds pretty big to me when you consider the model is bigger, better and faster.

            • 6 months ago
              Anonymous

              it's going to be faster because it's going to be worse
              I guarantee it

            • 6 months ago
              Anonymous

              whoops
              yea you're right anon idk why i thought it was 2.75$ less kek
              does this mean gpt4 will cost even less or will they just replace it?

              • 6 months ago
                Anonymous

                it said in the live 2.75x LESS COST OVERALL!!!!

              • 6 months ago
                Anonymous

                im moronic this evening but he said
                >more than 2.75x cheaper to use for gpt-4 turbo than gpt-4

              • 6 months ago
                Anonymous

                isnt gpt4 20$ per month still? i dont understand

              • 6 months ago
                Anonymous

                we are talking api instead of frontend

              • 6 months ago
                Anonymous

                that's GPT-4 plus, a child's toy

              • 6 months ago
                Anonymous

                you're moronic go back to your chatbot containment thread

              • 6 months ago
                Anonymous

                nta, no need to be a dick.

              • 6 months ago
                Anonymous

                Still too expensive

              • 6 months ago
                Anonymous

yeah it's still too expensive. they put up the new whisper model though

      • 6 months ago
        Anonymous

        Better GPT-4 with massive context window and almost 3 times cheaper than current one.
        Extremely customizable personal GPTs
        Code interpreter was already OP as frick and is now in the hands of everyone and in very automated fashion.

        I would say that the ability to read and produce code got a quite a bit cheaper. One dev with good automated GPTs can do the work of multiple people.

        • 6 months ago
          Anonymous

          >Better GPT-4
          Better how?
          32k context was already large

  26. 6 months ago
    Anonymous

    nice, I am going to quit my CS studies now, will be jobless nonetheless now, thank you Sam

  27. 6 months ago
    Anonymous

    Junior software engineer here, how fricked am I?

    • 6 months ago
      Anonymous

      not at all.

    • 6 months ago
      Anonymous

      https://i.imgur.com/6bxZkD0.png

      become a farmer, it's over

  28. 6 months ago
    Anonymous

    bros is this a total fail?
    wasn't this supposed to be live

    ?t=3264

  29. 6 months ago
    Anonymous

    so since gpt4 turbo 128k is turning out to be shit, do we actually get access to the 32k one?

    • 6 months ago
      Anonymous

      Hi Elon

    • 6 months ago
      Anonymous

      "The model `gpt-4-1106-preview` does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4."

      anyone else?

      GPT-4 32K is untenably expensive to operate. it's like $2 per request, I have access so can answer questions.

      • 6 months ago
        Anonymous

        >GPT-4 32K is untenably expensive to operate. it's like $2 per request, I have access so can answer questions.
        i get 50 uses on poe per month along with everything else..

      • 6 months ago
        Anonymous

        so no access to "good" gpt4-32k for us then? That sucks, I was really waiting for this. OpenRouter seems to be waaaaay more expensive than $2 per request at almost full context.

      • 6 months ago
        Anonymous

        n

  30. 6 months ago
    Anonymous

    i never got 32k access, wonder why

  31. 6 months ago
    Anonymous

    when is this shit live

    • 6 months ago
      Anonymous

      for you? in 6 to 8 months.

      • 6 months ago
        Anonymous

        get a real job

  32. 6 months ago
    Anonymous

    kinda mid ngl

  33. 6 months ago
    Anonymous

    It's good that I can now instruct my own GPT agent to come and shitpost on my behalf so I don't have to spend another minute here.

    I have to gather all my posts from the tbharchive and feed them to it.

  34. 6 months ago
    Anonymous

    So is the frontend gpt4 using the new gpt4 turbo ?

    • 6 months ago
      Anonymous

It's 3x cheaper. What do you think? Give the plebs the more expensive model?

    • 6 months ago
      Anonymous

      turbo, it's literal garbage

      • 6 months ago
        Anonymous

are you moronic? it has 2023 data and 128k tokens, the other one had 8k tokens

        • 6 months ago
          Anonymous

          I wonder if all those 128k tokens are created equal. Claude's 100k model wasn't a true 100k model, but sampled tokens with some kind of heuristic.
Since LLMs are quadratic wrt input size, there must be some trickery here as well; the question is just how well it works.
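
The quadratic claim is easy to put numbers on (naive self-attention only; real deployments use optimizations that change the constants, which is exactly where the suspected "trickery" would live):

```python
# Naive self-attention cost scales with the square of sequence length,
# so going from 8k to 128k tokens is not a 16x increase in work:
short_ctx, long_ctx = 8_000, 128_000
token_ratio = long_ctx / short_ctx            # 16x the tokens
attention_ratio = token_ratio ** 2            # ~256x the attention FLOPs
print(token_ratio, attention_ratio)  # 16.0 256.0
```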

        • 6 months ago
          Anonymous

          They're able to increase context limits because they cut down the model, just like 3.5 turbo. Those are the only two benefits, the responses it produces will obviously be worse

        • 6 months ago
          Anonymous

          high quality 8k
          trash 128k

          literally a moron

  35. 6 months ago
    Anonymous

    >Ummm sorry sweetie, you're gonna have to use another chatbot for that.
    Aren't you excited for the future of AI?!

    • 6 months ago
      Anonymous

The GPT-4 API has quite relaxed filters. You really have to be asking how to harm humans or build bombs for the filters to kick in, and even then you can circumvent them relatively easily by having GPT-4 play any character other than the default "Assistant".

      Here is GPT-4 in march after just one line of text telling it to behave like a moronic /misc/ poster. It delivers. ChatGPT and the public shit is not capable of this without extreme jailbreaks and maybe not even then nowadays.

  36. 6 months ago
    Anonymous

    sir use my chatbot pls sir very smart singularity revolutionary

  37. 6 months ago
    Anonymous

    what do people actually use this stuff for? I use ChatGPT for search like stuff, or question answering. But apparently a lot of people are building apps and shit on top of this. What are those apps?

    • 6 months ago
      Anonymous

      i dont know either

    • 6 months ago
      Anonymous

      other than extremely simple apps, you really can't build something with gpt alone. most devs i know either hate gpt with a passion, or just use it for monotonous tasks.

    • 6 months ago
      Anonymous

      I use it to batch translate a bunch of my stuff. It is so much better than any translator on the market and at a fraction of the cost.
      I also use it to generate content across my sites. It still needs a quick review, but I'd say 99.9% of the time it is acceptable.
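
Batching is the main practical wrinkle in a translate-in-bulk workflow like this; a rough model-agnostic sketch, with an invented character budget standing in for real token counting:

```python
def batch_texts(texts, max_chars=8000):
    """Greedily pack strings into batches under a size budget.
    (Character count is a crude stand-in for token counting.)"""
    batches, current, size = [], [], 0
    for t in texts:
        if current and size + len(t) > max_chars:
            batches.append(current)
            current, size = [], 0
        current.append(t)
        size += len(t)
    if current:
        batches.append(current)
    return batches

# Two 5000-char texts won't fit one 8000-char batch; the short third
# one rides along with the second.
sizes = [len(b) for b in batch_texts(["a" * 5000, "b" * 5000, "c" * 100])]
print(sizes)  # [1, 2]
```

Each batch would then go out as one translation request, which is where the claimed cost savings over per-document calls come from.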

    • 6 months ago
      Anonymous

idk man, this whole AI shit is starting to smell like crypto, useless tech that only appeals to autists who just build on top of that tech so others can play with it and do the same

      • 6 months ago
        Anonymous

        precisely

tried it multiple times, it hallucinates too much to even work as an advisor

as a programmer i expect ~2025 to be rife with freelance work, fixing up gigapajeet almost-randomly-generated codebases

        • 6 months ago
          Anonymous

          with the right toolchain it can probably auto glueup pajeetware 80-90% of the time.

          • 6 months ago
            Anonymous

            >anon is hallucinating

are you a bot? or just have reading comprehension trouble? i'm saying these random-word-generators will end up becoming a gigajeet and creating absolute messes which will by ~2025 create yet another SWE hiring boom

            • 6 months ago
              Anonymous

              no shit I'm telling you that a properly trained and utilized LLM can depajeetify things.

              Just make it grind things out rigorously whenever possible.

              • 6 months ago
                Anonymous

                it can't tell if something exists or not

                are you trying to say one has to just ... feed more data into it to get any results?

              • 6 months ago
                Anonymous

                no you give it tools to debug it

              • 6 months ago
                Anonymous

                What tools and how does it use them?

              • 6 months ago
                Anonymous

                figure it out genius
                ast, inspect, a python interpreter
                openai's had their interpreter thing for months now

              • 6 months ago
                Anonymous

                Jesus christ you're fricking dumb

              • 6 months ago
                Anonymous

                Function calling and RAG you fricking moron
                Stop trying to discredit things you don't understand

              • 6 months ago
                Anonymous

                ...you're going to encode all possible facts about debugging and programming optimization in a database? It'll just look at error messages then pull an embedding vector out of postgres to deal with it?

      • 6 months ago
        Anonymous

        It's unironically over

    • 6 months ago
      Anonymous

      sloppa they post on twitter for advertising their shit chatGPT frontend

    • 6 months ago
      Anonymous

      sex

  38. 6 months ago
    Anonymous

    hey wait, so the leak about being able to add your own docs via the frontend was bs then?

  39. 6 months ago
    Anonymous

    >3.5 turbo
    where's the turbo and 2023 data motherfrickers?

  40. 6 months ago
    Anonymous

    It's over.

    • 6 months ago
      Anonymous

      Completely wrong. We have several AI competitors already that are getting corporate deals and we also have hundreds of thousands of local models running on private hardware.

    • 6 months ago
      Anonymous

You don't understand open source models any more than they understand their closed source models.
Therefore, it's safe to assume no open source model can be trusted, for fear it might go out and do its own thing at some point, potentially causing lots of damage in the process

    • 6 months ago
      Anonymous

      Mistral rapes your anus

    • 6 months ago
      Anonymous

Reminder that stablediffusion and llama are NOT open source. They are local, but not open source. You cannot 'recompile' them yourself because the training methods and datasets are not fully released. There exist almost ZERO actual open-source AI models.

      • 6 months ago
        Anonymous

        Open source (Model + Data + Training Methods) > Local Models >>>>>>>> Cloud Models

        It's not so much about understanding as it is about control. SD can't be "compiled" but it can be fine-tuned, re-trained, modified and deployed in a way cloud models cannot. Plus using it doesn't give your data to corpos.

    • 6 months ago
      Anonymous

      Unbelievably moronic screencap. Why do I even come to this tard infested board.

  41. 6 months ago
    Anonymous

    ITS UP!

    • 6 months ago
      Anonymous

what's up?

      • 6 months ago
        Anonymous

gpt-4-1106-preview. it's the turbo model

  42. 6 months ago
    Anonymous
    • 6 months ago
      Anonymous

      As much as I hate OpenAI, I'd sooner kill all the luddites.

  43. 6 months ago
    Anonymous

    I'm trying to use retrieval (RAG) pipelines with OpenAI agents on top of the pipeline doing tasks like scraping shit. This is for a summarization task (im summarizing Airbnb reviews for properties, condensing like 100s of reviews for one property into one paragraph which LLMs are great at) and also using document retrieval pipelines for SEC filings for equity analysis.

GPT-4 turbo's 128k token context window is fricking 300 pages of text and it's super quick. It's impressive as frick and you luddite chuds need to get building apps with it.

Trying to get my OpenAI agents to basically scrape SEC filings and put those documents in a vectorstore along with company PR's and related PR's / shit from optimized news feeds. This is all ambitious stuff but to be honest you have to be moronic if you think this is just a fad and that enterprise applications don't exist.

    Companies will throw money at systems that prevent hallucination and allow amazing document QA / generation over company documents/databases

    Btw I'm using Langchain for most of my LLM frameworks but LlamaIndex is prob nicer for scaled shit
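
The retrieve-then-stuff idea described above can be illustrated without LangChain or any vector database. A toy bag-of-words sketch (the review strings are invented, and a real pipeline would use learned embeddings, not word counts):

```python
from collections import Counter
import math

def vec(text):
    # Bag-of-words term counts; real RAG uses embedding vectors.
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in a)
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query, keep the top k.
    q = vec(query)
    return sorted(docs, key=lambda d: cosine(q, vec(d)), reverse=True)[:k]

# Invented snippets standing in for scraped Airbnb reviews:
reviews = [
    "great location, close to the beach",
    "the host was slow to respond",
    "beach access was amazing, sand everywhere though",
]
context = retrieve("beach access", reviews)
# Retrieved context gets stuffed into the summarization prompt:
prompt = "Summarize these reviews:\n" + "\n".join(context)
print(context[0])  # the review matching both query terms ranks first
```

The same shape scales up: swap `vec`/`cosine` for an embedding model plus a vector store, and swap the `print` for the actual LLM call.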

    • 6 months ago
      Anonymous

      why langchain
      seems bloated
      also are you sure the SEC is ok with scraping?

      • 6 months ago
        Anonymous

        My brother in christ, the SEC / Edgar database has always been xml / html files publicly available for scraping and has APIs.

        Also I chose LangChain because it's easier for developing apps (has nice connectivity with Vercel), otherwise I know LlamaIndex is badass and has more functionality. I know this because I work in finance and hedge funds with lots of money use LlamaIndex at scale.

        What do you mean LangChain seems bloated? Not sure what that means because im not a techgay, im a financegay LARPing as a techgay with LLM retrieval pipelines

        • 6 months ago
          Anonymous

          >financegay
          ah ok based now i get your goal
          i'm not a codegay either but langchain seems like they threw a lot of shit into something way too complex

          what are hedgefunds doing with LLMs?

          • 6 months ago
            Anonymous

            >i'm not a codegay either but langchain seems like they threw a lot of shit into something way too complex
But what exactly did they do to make it "bloated", i.e. throwing in lots of shit? You are making an LLM framework, you gotta make it a bit complex... and I disagree that it's too much shit. that sounds like Llamaindex. Langchain is more optimized and kind of a "black box" in the sense you don't know wtf the agents are doing, it's all super optimized and quick and actually not complex enough because it's all optimized under the hood

            Just my thoughts, i think Langchain works fine. I use it with Chroma, the open source vector database (as opposed to pinecone which is more geared for scale)

            Hedge funds are trying to use retrieval pipelines for analyzing their documents, same as all other business applications for LLMs lol. I know a guy who is starting to do it and so far he's been pretty good. He's young like me and will probably fail tho. LLMs for investment advice requires a ton of hallucination prevention via RAG + fine tuning + evaluation frameworks (end to end eval).

            He did something where he used LLMs to optimize his financial newsfeed to clear out the noise and only get signal, which is kinda cool

            • 6 months ago
              Anonymous

              i don't think it's terrible or anything i think they just tried to do everything, maybe that's a good thing
i hadn't really implemented a full RAG setup yet but i was planning to use haystack since it seemed more focused/mature

              • 6 months ago
                Anonymous

                Interesting, i didn't know haystack had generative pseudo labeling (Super important for evaluating RAG frameworks), i might use this soon

                Tell me what else Haystack is nice for vs. Langchain

                >LLMs for investment advice requires a ton of hallucination prevention via RAG + fine tuning + evaluation frameworks (end to end eval).
                are you also scraping other data sources e.g. company websites(+internet archive waybackmachine history possibly), peoples linkedin, youtube, etc?

                That sounds shady but if its legal and it gives me an informational edge to make money i'd like to do that.

                Answer this: can these AI agents do a good job of writing scraper scripts or scraping data themselves?

                Link related: https://python.langchain.com/docs/use_cases/web_scraping

              • 6 months ago
                Anonymous

                >Answer this: can these AI agents do a good job of writing scraper scripts or scraping data themselves?
                probably could yeah, i hate scraping and i've had them write pretty much valid parsers in python just by giving them the data and saying what i want to extract

                too much liability to implement at the moment for my field

              • 6 months ago
                Anonymous

                >probably could yeah, i hate scraping and i've had them write pretty much valid parsers in python just by giving them the data and saying what i want to extract
                Which agents / what frameworks did you use? if you dont mind can you pls gimme your workflow for this because i need AI agents to be my little slave prostitutes doing all my scraping and document consolidation for me

              • 6 months ago
                Anonymous

                my workflow focus is still pre-LLM, i'm working more on getting the perfect blend of info to use for RAG

                agents have been pretty fricking moronic, the only ones ive seen that really work well are the code interpreter ones. basically non gpt4 are too moronic and gpt4 was/is too expensive (though probably not for fintech)

i think semiautomating writing a scraper is probably doable with them (you might just need to interact with the page a bit and dump the data to work with) but automatically writing scrapers is probably a hard mathematical problem (i believe it's solvable most of the time though)


  44. 6 months ago
    Anonymous

    ChatGPT has become lobotomized, it's sad and boring.

  45. 6 months ago
    Anonymous

    >you do not currently have access to this feature

  46. 6 months ago
    Anonymous

    Imagine the stench of that much concentrated grift in one place

  47. 6 months ago
    Anonymous

    this is just some new form of cryptocurrencies lol

    we really need a /crypto/ board for these scams with blockchains and AIs
