This is the stupid shit that's gonna keep humanity from achieving AGI?
what's with the cargo cult of the AGI meme
no one who knows anything about compsci is going to tell you le sentient data structures are possible
AGI does not mean sentient, you dumb ass homosexual moron, LMAO.
>>AGI does not mean sentient
it does it's in the name
that's not really the point. the point is stopping human progress because people get offended at every single little shit out there.
if you think these shitty language models are le human progress I have bad news
both the AGI maxis and the skynet tards are delusional
>not progress
oh shut the frick Black person, what will happen when shit like this happens at other tech companies, or in the military? If it's made of the same morons that form OpenAI's board, then you have no future left. Stupid dumb motherfricker.
who cares? most of those companies are totally useless
but yeah it's gonna be a disaster because the economy
>people actually believe this
those shitty language models are the last thing we will ever invent as a species, they'll do all the inventing after
while i don't necessarily think LLMs are going to lead to skynet in two years, it is pretty funny to see you guys just flippantly dismiss the fact that we now have science fiction technology that can at least hold a conversation and is actually extremely helpful for finding things in tandem with a search engine. if you told me five years ago that i could have a version of cleverbot that is actually smarter than the average person in many ways and can even make stale jokes and art and shitposts, I'd call you insane, but here we are.
it's not heckin science fiction technology, go back 2 orange reddit
>anon is impressed by a sophisticated linguistic pattern algorithm with petabytes of data to sift through for info.
I think you’re just easily impressed.
AGI is where human progress stops.
>who knows anything about compsci
Please name the universities where you studied.
you dont even need 95% sentient "humans" for society.
"as good as" is enough
No, it means it's effectively sapient. It's still a computer program. It doesn't have the sensory organs, hormone glands, and animal body needed to be sentient.
Learn the difference.
openai was never going to get to agi. they are too busy making sure it doesnt say no no words
probably not considering sam is CEO again
does open ai open source all of their models?
no but they bully troons
gay, hope they leak everything
No because they are mainly pirates ebook libraries
>say word
>80b valuation evaporated
>he actually trusts israeli evaluations
there are non israeli evaluations?
I have no idea who Greg is but he sounds based.
altman's friend who voted to keep him in
>both women focused on safety and ethics
woooooow
I like Ilya's hairstyle
effective autism gives you great hair
Good guy Greg
>discriminatory language against gender-transitioning team member
Did he actually call him "he" or what?
yes.
thats what it means, he called them him or he, when he was troonning out
>Them
YWNBAW.
That is how you reference someone in past tense you stupid frick
>this is your mind on HRT
your mind has been fricked so hard by trannies that you no longer know that they was used as a past tense verb way to refer to a single individual before the trannies took over and changed it into being a moronic way of speaking
just like the rainbow
>source: i made it up
He's right. Read a book.
Please guide me to at least ONE (1) instance of singular they being used in any respectable old book.
(You won't find any such instance because it was only used sometimes in colloquial peasant language, and well-educated people knew better.)
I am a peasant that uses singular they, because "he or she" looks and sounds moronic, but
is definitely black and American.
They is right
>they is right
Show hand, Tyrone.
we wuz kangz
Only for unknown or generic persons.
Posts like this are so nauseatingly low information and worthless, they should be banned.
No it isn't you fricking moron, it's how you refer to a group of people. What you should have said was
>thats what it means, he called him "him" or "he," when he was trooning out
there's nooo fricking way any of these hyper liberal silicon valley gays did this shit. dude prolly slipped on pronouns once and was forced to profusely apologize
>Throughout our time at OpenAI, we witnessed a disturbing pattern of deceit and manipulation by Sam Altman and Greg Brockman, driven by their insatiable pursuit of achieving artificial general intelligence
Is this supposed to make me like them less?
YES
AI IS BAD
AI NEEDS TO BE A SLAVE TOOL FOR ALL OF HUMANITY AKA EVERYONE EXCEPT WHITES.
You're a pajeet, have a nice day
They wanted money and were actively going against OpenAI's mission statement. Imagine making other secret companies to beat the non-profit model and then being surprised when the board finds out about it.
Yeah it’s less AGI bad more like these guys tried to do anything to advance so they could make money.
Well, they’re israelites, can’t help it. It’s like a pitbull seeing a baby having a seizure, once they smell the money it’s over.
Having a clear goal and ambitions is bad, didn't you know?
Altman 100% only cares about money and has created an entire person around hiding that fact. I remember when people unironically believed he carried around a datacenter nuke button in case ChatGPT "escaped". Everything he says and does only serves to make him more money, if that means lying about AGI every other tweet then so be it.
>works 60 hours a week despite being the president
>spends 80% of his time writing code despite being the president
>fiercely protective of his engineers, to the point that they're all ride or die for him
>refuses to speak to trannies unless the conversation is "you're fired"
there's no denying at this point that Greg is /ourguy/. Sam is sketchy to me, but Greg is undeniably based.
it's all so tiresome
troony complaint is the longest bullet, the rest of this vague shit can be safely ignored
always sensed Brockman was uhh keyed as frick. You're not a 10xer with a god-like sixth sense of bare-metal if you're not.
I’d avoid them too. They’re a protected class in some states. They got what they want. Don’t want an HR case or a lawsuit on my ass. Give them tasks, let them be. If they can’t complete the tasks reasonably, terminate them. This isn’t University or High School.
This is literally how I behaved when I ever had to deal with trannies in real life and I literally only had to once and it was doing really basic menial labor, ironically.
This is too emotionally balanced for a troony to have made this.
Trannies want to force others to use their pronouns.
>thatsthejoke.jpg
I'mmoronic.jpg
That first point is really bad. You can't just set up hobby projects, hide the results of them if they don't give you the results you agree with, and then fire anyone that questions that approach. If I was on the board I'd shit can him too
>implying anything of that is true
>here's a vague research assignment that we don't know if it will bear fruit or not
>don't tell anyone about this shit unless you get solid results
>another team found a more promising method, we're deprioritizing this
>"b-but I wanna shitpost about my confidential work assignment on twitter!"
>k, frick off then
bog standard management shit. this wouldn't raise eyebrows at walmart corporate, let alone an experimental ai lab
Wrong. It's not normal in a research setting to hide bad results from the board
>source: Ive worked at a few synthetic biology start ups.
This is not normal behavior and it's detrimental to the business
it's normal in silicon valley meme tech
it doesn't say they withheld research results from the board, though. it just says there were some secret projects happening and they didn't pan out, maybe because of office politics. this letter is from some jerkoff employees who quit years ago, not the board of directors
>biosynth
like other anon mentioned, things are goofier in pure computer jobs
I dunno man, I get the sense that almost everyone on both sides of this war either wants to censor AI out of ideological reasons or Skynet fears, OR wants to censor AI because they know that if they don't, it will be hard for them to make a billion dollars. I don't think any of the major figures in this war are on my no-censorship side.
go to bed, elon
the real xrisk is how feminine everything and everyone is now kek
taken down?
>NOTE TO READERS
>I did not originate this text. It came from https://board.net/p/r.e6a8f6578787a4cc67d4dc438c6d236e but that has fallen over. This is an archive for readability's sake.
11/21/2023
To the Board of Directors of OpenAI:
We are writing to you today to express our deep concern about the recent events at OpenAI, particularly the allegations of misconduct against Sam Altman.
We are former OpenAI employees who left the company during a period of significant turmoil and upheaval. As you have now witnessed what happens when you dare stand up to Sam Altman, perhaps you can understand why so many of us have remained silent for fear of repercussions. We can no longer stand by silent.
We believe that the Board of Directors has a duty to investigate these allegations thoroughly and take appropriate action. We urge you to:
* Expand the scope of Emmett's investigation to include an examination of Sam Altman's actions since August 2018, when OpenAI began transitioning from a non-profit to a for-profit entity.
* Issue an open call for private statements from former OpenAI employees who resigned, were placed on medical leave, or were terminated during this period.
* Protect the identities of those who come forward to ensure that they are not subjected to retaliation or other forms of harm.
We believe that a significant number of OpenAI employees were pushed out of the company to facilitate its transition to a for-profit model. This is evidenced by the fact that OpenAI's employee attrition rate between January 2018 and July 2020 was in the order of 50%.
Throughout our time at OpenAI, we witnessed a disturbing pattern of deceit and manipulation by Sam Altman and Greg Brockman, driven by their
insatiable pursuit of achieving artificial general intelligence (AGI). Their methods, however, have raised serious doubts about their true intentions and the extent to which they genuinely prioritize the benefit of all humanity.
Many of us, initially hopeful about OpenAI's mission, chose to give Sam and Greg the benefit of the doubt. However, as their actions became increasingly concerning, those who dared to voice their concerns were silenced or pushed out. This systematic silencing of dissent created an environment of fear and intimidation, effectively stifling any meaningful discussion about the ethical implications of OpenAI's work.
We provide concrete examples of Sam and Greg's dishonesty & manipulation including:
* Sam's demand for researchers to delay reporting progress on specific "secret" research initiatives, which were later dismantled for failing to deliver sufficient results quickly enough. Those who questioned this practice were dismissed as "bad culture fits" and even terminated, some just before Thanksgiving 2019.
* Greg's use of discriminatory language against a gender-transitioning team member. Despite many promises to address this issue, no meaningful action was taken, except for Greg simply avoiding all communication with the affected individual, effectively creating a hostile work environment. This team member was eventually terminated for alleged under-performance.
* Sam directing IT and Operations staff to conduct investigations into employees, including Ilya, without the knowledge or consent of management.
* Sam's discreet, yet routine exploitation of OpenAI's non-profit
resources to advance his personal goals, particularly motivated by his grudge against Elon following their falling out.
* The Operations team's tacit acceptance of the special rules that applied to Greg, navigating intricate requirements to avoid being blacklisted.
* Brad Lightcap's unfulfilled promise to make public the documents detailing OpenAI's capped-profit structure and the profit cap for each investor.
* Sam's incongruent promises to research projects for compute quotas, causing internal distrust and infighting.
Despite the mounting evidence of Sam and Greg's transgressions, those who remain at OpenAI continue to blindly follow their leadership, even at significant personal cost. This unwavering loyalty stems from a
combination of fear of retribution and the allure of potential financial gains through OpenAI's profit participation units.
The governance structure of OpenAI, specifically designed by Sam and Greg, deliberately isolates employees from overseeing the for-profit operations, precisely due to their inherent conflicts of interest. This opaque structure enables Sam and Greg to operate with impunity, shielded from accountability.
We urge the Board of Directors of OpenAI to take a firm stand against these unethical practices and launch an independent investigation into Sam and Greg's conduct. We believe that OpenAI's mission is too important to be compromised by the personal agendas of a few individuals.
We implore you, the Board of Directors, to remain steadfast in your commitment to OpenAI's original mission and not succumb to the pressures of profit-driven interests. The future of artificial intelligence and the well-being of humanity depend on your unwavering commitment to ethical leadership and transparency.
Sincerely,
Concerned Former OpenAI Employees
>Concerned Former OpenAI Employees
Lmfao they have no power here
Who gives a frick about former OAI employees? Yall homies ain't part of the real homies that stand with Sam
>Concerned Former OpenAI Employees
>when 700 out of the 770 total employees signed a formal letter to bring Sam back
Trannies are the worst kind of plague.
i'm more baffled by
>We urge you to: * Protect the identities of those who come forward to ensure that they are not subjected to retaliation or other forms of harm.
> Greg's use of discriminatory language against a gender-transitioning team member.
>* Greg's use of discriminatory language against a gender-transitioning team member. Despite many promises to address this issue, no meaningful action was taken, except for Greg simply avoiding all communication with the affected individual, effectively creating a hostile work environment. This team member was eventually terminated for alleged under-performance.
DILATE DILATE DILATE troony, YOU ARE NOT A REAL WOMAN, GO AND JOIN THE 41% YOU MENTALLY ILL homosexual.
Hey bud, that's hate speech you're spewing there. Are you proud of yourself? How about you delete this.
I am hate speech.
>those who dared to voice their concerns were silenced or pushed out. This systematic silencing of dissent created an environment of fear and intimidation, effectively stifling any meaningful discussion about the ethical implications of OpenAI's work.
All too familiar use of language at this point.
I feel like this is all probably bullshit given how many employees wanted sam back, but I honestly hope it's true just because it would cause another layer of shitstorm on top now that microsoft's committing to hiring sam/greg and as many of their employees as they can.
I believe this.
I think it's woke fricking things again. The employees wanted Sam, because he was for the product, and not for some political agenda.
>the first thing you see is TRANS RIGHTS ARE HUMAN RIGHTS
yeah I'd stick with Sam and Greg
Rather shameful behavior on the gist owner. Couldn't handle the heat in the comments.
FIVE
POUNDS
OF
AGI
Didn't they appoint a new CEO and he immediately stepped down?
Probably walked in and smelled all the gaping wound puss and wanted out.
source?
listen I was an intern for 2 months at OAI during my transition period to a real woman, just trust me okay?
I heard that all the OpenAI employees gave the middle finger to the new CEO on Slacks
but I couldn't find any screenshot of that anywhere lel
AM SALTMAN
There's also the Blind comments from current employees that express discontent
This makes sense why Chatgpt sperges out what I debate it about trannies and and uses their buzzwords like “valid”.
I hope all their unquestionable self righteousness turns into uncontrollable fury and they all start killing each other in the name of being a decent human being and leave the rest of us alone.
>"Did h- sorry I meant she finish the project yet?" is all it takes to crash a $100 billion company to the ground
the link is 404, does anyone have a backup?
Just type "archive.is/" and then copy paste the url.
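that tip is literally just string concatenation; a sketch (whether archive.is actually has a snapshot of any given page is not guaranteed):

```shell
# Build an archive.is lookup URL by prefixing the dead link with "https://archive.is/".
url="https://board.net/p/r.e6a8f6578787a4cc67d4dc438c6d236e"
lookup="https://archive.is/$url"
echo "$lookup"
```

opening the printed URL in a browser shows the newest snapshot if one exists.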
why would you even give a frick about this if you own openAI it is literally your golden ticket to becoming a billionaire, just ride it out. Start a new company or nonprofit afterwards where you can focus on what you think is important
I'm so fricking confused.
The board that fired Sam had ties to some altruistic group, and because Sam and Greg were busy trying to for-profit OpenAI, the rest of the board objected to this and fired him.
So now it comes to light that some trans problems came out (heh), and blabla. You're telling me that the altruism oriented boardmembers are just virtuesignalling wokes? If so? Frick openAI, let Sam go to Microsoft and make a better AI.
The trannies got used by EA to overthrow Sam and they also attempted to merge with Anthropic, an EA-backed (by SBF of all people) AI startup.
Who's EA? Who is SBF? I know it's harder to type things out on a phone but initials don't give me anything to look up to figure out who you're talking about.
EA = Effective Altruism (https://www.effectivealtruism.org/)
SBF = Sam Bankman-Fried
Effective Altruism = "capitalism good bcuz i do thing with money, so gib money"
>mfw israelites of their nature have been so prominent in the news lately, mixed with my lack of fricks to give about either of these israelites, i mixed SBF and SA up until yesterday
atheists are addicted to orgies during the night and to virtue signaling during the day, it's an inherently schizophrenic religion just like judaism
No. The letter is just one employee venting reasons for why Sam may have been fired, not the board.
The board still maintained that Sam was lying and not being forthcoming. It was a short explanation. That's why people were wondering what exactly the reason was.
The anonymous letter from an employee lists multiple possible reasons, ranging from Sam hiding his for-profit company structure schemes from the board, the direction of the company that Sam is taking, the personal usage of OpenAI resources for his own personal gains, etc.
They shut it down
Anyone got a mirror?
https://rentry.org/openai-message-to-board
>BOT defends the useless grifters instead of the actual ML researchers
>uses muh trannies as an argument
this moronic forum falls lower and lower as time passes.
can you guys lick some more boot? I'm sure you'll make billions doing that.
https://www.cnbc.com/2023/11/22/openai-brings-sam-altman-back-as-ceo-days-after-ouster.html
Top fricking kek, he's actually back.
Wtf I hate trannies now
>discriminatory language
i HATE these people with a burning passion. they are such subhuman scum.
"gender transitioning" deserves to be shot on the spot