OpenAI's first-ever developer conference will take place on the 6th of November, where the company plans to unveil a number of updates. Leaks now show that these will include a new interface for ChatGPT as well as completely new features.
> On X (formerly Twitter), user CHOI shared a complete list of the leaked features. According to them, OpenAI will announce the Gizmo tool, which specializes in creating, managing, and selecting custom chatbots.
> Gizmo is expected to bring the following features:
> - Sandbox - Provides an environment to import, test and modify existing chatbots
> - Custom actions - Define additional functions for your chatbot using OpenAPI specifications
> - Knowledge files - Add additional files that your chatbot can reference
> - Tools - Provides basic tools for web browsing, image creation, etc.
> - Analytics - View and analyze chatbot usage data
> - Drafts - Save and share drafts for chatbots you create
> - Publish - Publish your finished chatbot
> - Share - Set up and manage chatbot sharing
> - Marketplace - Browse and share chatbots created by other users
> There will also be a Magic Creator or Magic Maker to help you create chatbots:
> - Define your chatbot with an interactive interface
> - Recognize user intent and create chatbots
> - Test the created chatbot live
> - Modify chatbot behavior through iterative conversations
> - Share and deploy chatbots
Hopefully this helps me fire more people. I've already replaced 10 writers with AI. Output increased, accuracy increased and revenue and views increased.
God bless AI.
>https://chat.openai.com/gpts/editor
all of this is already proven to be true
Go back
It's live
At least post the link
Altman in da house
Yadayada history part
Stop clapping sheeple
The long pause in his speech made the sheeple think it was the clapping part
that's a woman
>pic unrelated
Ayyo GPT4-V peep this painting and describe that shit muhnigga
>dawg that’s a photo of a dude check out dat jaw muhnigga beep boop
AYYO MUHFUCKKN AI SAFEY BROKE N SHIT
Chatty 😀
good morning sirs
128k context. Now I RP and edge 8 hours straight.
this is slow as fuck compared to apple events, what gives?
TTS. I wonder how good it is compared to 11labs
Good Morning sirs
>muh safety
>pls regulatory lockin thanks
>program a gpt
christ i'm already sick of this retarded phraseology
It's nothing
But this shit is just customized ChatGPT. What are the agents? I thought it was like autogpt by OpenAI. Deploy and let it run.
>look for gpus
mildly humorous
>private
Sure buddy, sure.
Just joined, what are the keynotes so far?
GPT-4 turbo, gpt-4 fine tuning
Cheaper, better.
Who can make the best coom GPT that they allow on the store will become very rich.
>Stops your revenue sharing in its tracks for violating AI safety.
Also, how is this going to work? Getting paid pennies off of pricing like on the Apple App Store, or will they write a check to the most popular shit on the front page like on YouTube?
did it try to pronounce an emoji
>go to thing
>get doxxed
h-how did it know their names
>nah we'll just dox you all
Death by a thousand cuts to programmers. Are you having fun yet?
Better than being a digital artist.
it's over, we've achieved the singularity..
holy fuck i cant believe this
what did they announce?
128k gpt4 turbo with apr 2023 training data, gpt4 was 32k at most
jesus christ and i wanted to go to cs uni next year.. what the fuck should i study now
wait, are you saying regular users now have access to gpt4 with 128k context? absolutely no fucking way this is true.
yeah and gpt got a lot worse in the last few days. have fun with that.
yeah i said that here
>won't do a crossword correctly
yea i wouldn't care right now
gpt4 turbo costs a little less than gpt4
hmm, but everyone's been saying that GPT4 has been literal garbage for the past few days. If this is gpt4 turbo, then it doesn't matter at all even if it had a bajillion tokens.
exactly
can't really expect 128k gpt4 to be any better than 8k gpt4 rn imo
Except now it's trained up to Apr 23.
I don't know how others are using gpt4, but for what I work with it has made me thousands of dollars a week.
>buy my course on how to get thousands of dollars a week using chatgpt
Why the fuck would I want anyone else to encroach on my market?
Also forgot to add, you can force it to return JSON now.
That's fantastic (though it should have been an option a long time ago).
>you can force it to return JSON now
i wonder what they meant by that
is the model verbally asked to spit out json? or do they just regenerate until it gives valid json?
if it's the former i bet people are probably gonna jailbreak it
all u needed to do was fine tune your scenario retard
It has already been able to do that for a long time with function calling
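Both routes look the same on the wire: a flag in the request payload. A minimal sketch of what the "force JSON" request would look like via the Chat Completions API, assuming the `response_format` option announced at DevDay (no network call here, just the payload; the model name matches the preview posted later in the thread):

```python
def build_json_request(user_prompt: str) -> dict:
    """Build a Chat Completions payload asking for strict JSON.

    The API requires the word "JSON" to appear somewhere in the
    messages when response_format is set to json_object.
    """
    return {
        "model": "gpt-4-1106-preview",
        "response_format": {"type": "json_object"},
        "messages": [
            {"role": "system",
             "content": "You are a helpful assistant. Always reply in JSON."},
            {"role": "user", "content": user_prompt},
        ],
    }

payload = build_json_request("List three colors.")
print(payload["response_format"]["type"])  # json_object
```

Function calling already got you structured arguments, but this applies to the plain message content itself.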
>little less
One third of the price. So what like 66% discount? Sounds pretty big to me when you consider the model is bigger, better and faster.
it's going to be faster because it's going to be worse
I guarantee it
whoops
yea you're right anon idk why i thought it was 2.75$ less kek
does this mean gpt4 will cost even less or will they just replace it?
it said in the live 2.75x LESS COST OVERALL!!!!
im retarded this evening but he said
>more than 2.75x cheaper to use for gpt-4 turbo than gpt-4
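The "2.75x" number only falls out for input-heavy workloads. A quick sanity check using the list prices announced at DevDay (USD per 1K tokens, assumed here from the keynote: Turbo $0.01 in / $0.03 out vs GPT-4 $0.03 in / $0.06 out):

```python
# Blended cost ratio of GPT-4 vs GPT-4 Turbo at the announced
# per-1K-token list prices; the discount depends on the
# input/output token mix of your workload.
GPT4 = {"in": 0.03, "out": 0.06}
TURBO = {"in": 0.01, "out": 0.03}

def blended_ratio(in_tokens: int, out_tokens: int) -> float:
    old = in_tokens * GPT4["in"] + out_tokens * GPT4["out"]
    new = in_tokens * TURBO["in"] + out_tokens * TURBO["out"]
    return old / new

# Input-heavy workloads (long-context prompts) approach 3x:
print(round(blended_ratio(8000, 1000), 2))  # 2.73
# Equal input/output only gets you 2.25x:
print(round(blended_ratio(1000, 1000), 2))  # 2.25
```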
isnt gpt4 20$ per month still? i dont understand
we are talking api instead of frontend
that's GPT-4 plus, a child's toy
you're retarded go back to your chatbot containment thread
nta, no need to be a dick.
Still too expensive
yeah it's still too expensive. they put up the new whisper model though
Better GPT-4 with massive context window and almost 3 times cheaper than current one.
Extremely customizable personal GPTs
Code interpreter was already OP as fuck and is now in the hands of everyone and in very automated fashion.
I would say that the ability to read and produce code got quite a bit cheaper. One dev with good automated GPTs can do the work of multiple people.
>Better GPT-4
Better how?
32k context was already large
nice, I am going to quit my CS studies now, will be jobless nonetheless now, thank you Sam
Junior software engineer here, how fucked am I?
not at all.
become a farmer, it's over
bros is this a total fail?
wasn't this supposed to be live
?t=3264
so since gpt4 turbo 128k is turning out to be shit, do we actually get access to the 32k one?
Hi Elon
"The model `gpt-4-1106-preview` does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4."
anyone else?
GPT-4 32K is untenably expensive to operate. it's like $2 per request, I have access so can answer questions.
>GPT-4 32K is untenably expensive to operate. it's like $2 per request, I have access so can answer questions.
i get 50 uses on poe per month along with everything else..
so no access to "good" gpt4-32k for us then? That sucks, I was really waiting for this. OpenRouter seems to be waaaaay more expensive than $2 per request at almost full context.
n
i never got 32k access, wonder why
when is this shit live
for you? in 6 to 8 months.
get a real job
kinda mid ngl
It's good that I can now instruct my own GPT agent to come and shitpost on my behalf so I don't have to spend another minute here.
I have to gather all my posts from the tbharchive and feed them to it.
So is the frontend gpt4 using the new gpt4 turbo ?
It's 3x cheaper. What do you think? Give the plebs more expensive model?
turbo, it's literal garbage
are you retarded? it has 2023 data and 128k tokens, the other one had 8k tokens
I wonder if all those 128k tokens are created equal. Claude's 100k model wasn't a true 100k model, but sampled tokens with some kind of heuristic.
Since attention in vanilla transformers is quadratic wrt input size, there must be some trickery here as well; the question is just how well it works.
They're able to increase context limits because they cut down the model, just like 3.5 turbo. Those are the only two benefits, the responses it produces will obviously be worse
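The quadratic point is easy to put numbers on: going from 8K to 128K context is a 16x jump in length but a 256x jump in naive attention-score work, which is why you'd expect some approximation or architectural trick rather than brute force:

```python
# Back-of-envelope: naive self-attention computes a score for every
# pair of tokens, so cost scales with the square of context length.
def attention_score_ratio(new_ctx: int, old_ctx: int) -> float:
    return (new_ctx / old_ctx) ** 2

print(attention_score_ratio(128_000, 8_000))  # 256.0
```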
high quality 8k
trash 128k
literally a retard
>Ummm sorry sweetie, you're gonna have to use another chatbot for that.
Aren't you excited for the future of AI?!
The GPT-4 API has quite relaxed filters. You really have to be asking how to harm humans or build bombs for the filters to kick in, and even then you can circumvent them relatively easily by having GPT-4 play a character anything other than the default "Assistant".
Here is GPT-4 in march after just one line of text telling it to behave like a retarded /misc/ poster. It delivers. ChatGPT and the public shit is not capable of this without extreme jailbreaks and maybe not even then nowadays.
sir use my chatbot pls sir very smart singularity revolutionary
what do people actually use this stuff for? I use ChatGPT for search like stuff, or question answering. But apparently a lot of people are building apps and shit on top of this. What are those apps?
i dont know either
other than extremely simple apps, you really can't build something with gpt alone. most devs i know either hate gpt with a passion, or just use it for monotonous tasks.
I use it to batch translate a bunch of my stuff. It is so much better than any translator on the market and at a fraction of the cost.
I also use it to generate content across my sites. It still needs a quick review, but I'd say 99.9% of the time it is acceptable.
idk man, this whole AI shit is starting to smell like crypto, useless tech that only appeals to autists who just build on top of that tech so others can play with it and do the same
precisely
tried it multiple times it hallucinates too much to even work as an advisor
as a programmer i expect ~2025 to be rife with freelance work, fixing up gigapajeet almost-randomly-generated codebases
with the right toolchain it can probably auto glueup pajeetware 80-90% of the time.
>anon is hallucinating
are you a bot? or just have reading comprehension trouble? i'm saying these random-word-generators will end up becoming a gigajeet and creating absolute messes which will by ~2025 create yet another SWE hiring boom
no shit I'm telling you that a properly trained and utilized LLM can depajeetify things.
Just make it grind things out rigorously whenever possible.
it can't tell if something exists or not
are you trying to say one has to just ... feed more data into it to get any results?
no you give it tools to debug it
What tools and how does it use them?
figure it out genius
ast, inspect, a python interpreter
openai's had their interpreter thing for months now
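The tool loop being described can be sketched with nothing but the stdlib: check model-written code with `ast` before running it, and hand back a structured error the model could iterate on (the function name and return shape are illustrative, not any library's API):

```python
import ast

def check_syntax(source: str) -> dict:
    """Return {"ok": True} or a structured error an LLM can act on."""
    try:
        ast.parse(source)
        return {"ok": True}
    except SyntaxError as e:
        return {"ok": False, "line": e.lineno, "message": e.msg}

print(check_syntax("def f(x): return x + 1"))   # {'ok': True}
print(check_syntax("def f(x) return x")["ok"])  # False
```

A real loop would also exec the code in a sandbox and feed runtime tracebacks back the same way, which is roughly what the hosted interpreter does.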
Jesus christ you're fucking dumb
Function calling and RAG you fucking retard
Stop trying to discredit things you don't understand
...you're going to encode all possible facts about debugging and programming optimization in a database? It'll just look at error messages then pull an embedding vector out of postgres to deal with it?
It's unironically over
sloppa they post on twitter for advertising their shit chatGPT frontend
sex
hey wait, so the leak about being able to add your own docs via the frontend was bs then?
>3.5 turbo
where's the turbo and 2023 data motherfuckers?
It's over.
Completely wrong. We have several AI competitors already that are getting corporate deals and we also have hundreds of thousands of local models running on private hardware.
You don't understand open source models any more than they understand their closed source models.
Therefore, it's safe to assume no open source can be trusted for the fear they might go out and do their own thing at some point, potentially causing lots of damage in the process
Mistral rapes your anus
Reminder that stablediffusion and llama are NOT open source. They are local, but not open source. You cannot 'recompile' them yourself because the training methods and datasets are not fully released. There exist almost ZERO actual open-source AI models.
Open source (Model + Data + Training Methods) > Local Models >>>>>>>> Cloud Models
It's not so much about understanding as it is about control. SD can't be "compiled" but it can be fine-tuned, re-trained, modified and deployed in a way cloud models cannot. Plus using it doesn't give your data to corpos.
Unbelievably retarded screencap. Why do I even come to this tard infested board.
ITS UP!
whats' up?
gpt4-1106-preview. its the turbo model
As much as I hate OpenAI, I'd sooner kill all the luddites.
I'm trying to use retrieval (RAG) pipelines with OpenAI agents on top of the pipeline doing tasks like scraping shit. This is for a summarization task (im summarizing Airbnb reviews for properties, condensing like 100s of reviews for one property into one paragraph which LLMs are great at) and also using document retrieval pipelines for SEC filings for equity analysis.
GPT-4 turbo with a 128k token context window is fucking 300 pages long and it's super quick. It's impressive as fuck and you luddite chuds need to get building apps with it.
Trying to get my OpenAI agents to basically scrape SEC filings and put those documents in a vectorstore along with company PR's and related PR's / shit from optimized news feeds. This is all ambitious stuff but tbhonest you have to be retarded if you think this is just a fad and that enterprise applications don't exist.
Companies will throw money at systems that prevent hallucination and allow amazing document QA / generation over company documents/databases
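The retrieval step in that kind of pipeline is simple to sketch. Real setups embed chunks with an embedding model and store them in a vector DB (Chroma, Pinecone, etc.); here toy bag-of-words vectors stand in for embeddings so the ranking logic is visible, and the review texts are made up:

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank stored chunks by similarity to the query, keep top-k;
    # these go into the prompt for the summarization/QA step.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

reviews = [
    "great location near the beach",
    "host was slow to respond",
    "the beach view from the balcony is amazing",
]
print(retrieve("is the beach nice", reviews, k=2))
```

Condensing hundreds of reviews per property then becomes: retrieve the relevant chunks, stuff them into the context window, ask for one paragraph.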
Btw I'm using Langchain for most of my LLM frameworks but LlamaIndex is prob nicer for scaled shit
why langchain
seems bloated
also are you sure the SEC is ok with scraping?
My brother in christ, the SEC / Edgar database has always been xml / html files publicly available for scraping and has APIs.
Also I chose LangChain because it's easier for developing apps (has nice connectivity with Vercel), otherwise I know LlamaIndex is badass and has more functionality. I know this because I work in finance and hedge funds with lots of money use LlamaIndex at scale.
What do you mean LangChain seems bloated? Not sure what that means because im not a techfag, im a financefag LARPing as a techfag with LLM retrieval pipelines
>financefag
ah ok based now i get your goal
i'm not a codefag either but langchain seems like they threw a lot of shit into something way too complex
what are hedgefunds doing with LLMs?
>i'm not a codefag either but langchain seems like they threw a lot of shit into something way too complex
But what exactly did they do to make it "bloated" i.e. throwing in lots of shit? You are making an LLM framework, you gotta make it a bit complex... and I disagree that it's too much shit; that sounds like LlamaIndex. Langchain is more optimized and kind of a "black box" in the sense you don't know wtf the agents are doing, it's all super optimized and quick and actually not complex enough because it's all optimized under the hood
Just my thoughts, i think Langchain works fine. I use it with Chroma, the open source vector database (as opposed to pinecone which is more geared for scale)
Hedge funds are trying to use retrieval pipelines for analyzing their documents, same as all other business applications for LLMs lol. I know a guy who is starting to do it and so far he's been pretty good. He's young like me and will probably fail tho. LLMs for investment advice requires a ton of hallucination prevention via RAG + fine tuning + evaluation frameworks (end to end eval).
He did something where he used LLMs to optimize his financial newsfeed to clear out the noise and only get signal, which is kinda cool
i don't think it's terrible or anything i think they just tried to do everything, maybe that's a good thing
i hadn't really implemented a full RAG setup yet but i was planning to use haystack since it seemed more focused/mature
Interesting, i didn't know haystack had generative pseudo labeling (Super important for evaluating RAG frameworks), i might use this soon
Tell me what else Haystack is nice for vs. Langchain
That sounds shady but if its legal and it gives me an informational edge to make money i'd like to do that.
Answer this: can these AI agents do a good job of writing scraper scripts or scraping data themselves?
Link related: https://python.langchain.com/docs/use_cases/web_scraping
>Answer this: can these AI agents do a good job of writing scraper scripts or scraping data themselves?
probably could yeah, i hate scraping and i've had them write pretty much valid parsers in python just by giving them the data and saying what i want to extract
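For reference, the parsers these models hand back when you paste a page and say "extract the links" tend to look like this: stdlib-only, no external deps (the page snippet is made up for illustration):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href values from every <a> tag fed to the parser."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = '<p><a href="/a">one</a> text <a href="/b">two</a></p>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['/a', '/b']
```

Good enough for one-off extraction; a production scraper would still need retries, rate limiting, and handling for malformed markup.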
too much liability to implement at the moment for my field
>probably could yeah, i hate scraping and i've had them write pretty much valid parsers in python just by giving them the data and saying what i want to extract
Which agents / what frameworks did you use? if you dont mind can you pls gimme your workflow for this because i need AI agents to be my little slave whores doing all my scraping and document consolidation for me
my workflow focus is still pre-LLM, i'm working more on getting the perfect blend of info to use for RAG
agents have been pretty fucking retarded, the only ones ive seen that really work well are the code interpreter ones. basically non gpt4 are too retarded and gpt4 was/is too expensive (though probably not for fintech)
i think semiautomating writing a scraper is probably doable with them (you might just need to interact with the page a bit and dump the data to work with) but automatically writing scrapers is probably a hard mathematical problem (i believe it's solvable most of the time though)
>LLMs for investment advice requires a ton of hallucination prevention via RAG + fine tuning + evaluation frameworks (end to end eval).
are you also scraping other data sources e.g. company websites(+internet archive waybackmachine history possibly), peoples linkedin, youtube, etc?
ChatGPT has become lobotomized, it's sad and boring.
>you do not currently have access to this feature
Imagine the stench of that much concentrated grift in one place
this is just some new form of cryptocurrencies lol
we really need a /crypto/ board for these scams with blockchains and AIs