The AI & automation community
When will AI become dangerous?
Keep going down the list buddy
it's spelled February
now you're using some half-baked AI made by outsourced devs
it's not my fault the API I'm implementing chose to call the endpoint 'cashier'
is it the latest /misc/ meme to say that everything that relates to money is israeli?
even more proof
>writing out mappings for shit that is in the STL of just about any given language
Gotta pad those line counts somehow. Better keep that commit history a deep forest green or Kieter will get on your ass.
even more proof
I've been using Copilot for a while now, and while it certainly is nice, it doesn't code for you at all. It's just autocomplete.
It will be able to complete something for you sometimes, sometimes not. It will very rarely know what to do next after that. You need to look carefully at everything it does, or it will introduce subtle bugs. Stuff like user.find() vs users.find() can easily slip past.
It is great at completely mindless tasks like writing out months or turning an array of strings into an array of objects.
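To make the point concrete, here is a minimal Python sketch of the kind of rote boilerplate meant here (the names and record shape are made up for illustration): a month list and a strings-to-records conversion, exactly the sort of thing autocomplete fills in reliably.

```python
# The kind of mindless boilerplate an autocomplete handles well:
# a month list and a strings -> records conversion (illustrative only).
MONTHS = [
    "January", "February", "March", "April", "May", "June",
    "July", "August", "September", "October", "November", "December",
]

def to_objects(names):
    """Turn a flat list of strings into a list of small records."""
    return [{"id": i, "name": name} for i, name in enumerate(names)]

print(to_objects(["foo", "bar"]))
# [{'id': 0, 'name': 'foo'}, {'id': 1, 'name': 'bar'}]
```

Tedious to type by hand, trivial for the tool, and easy to verify at a glance, which is why this is its sweet spot.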
welcome to Bot.info
>welcome to Bot.info
i've been here for years, this is just the first time i've seen "cashier" associated with israelites
Well yeah, but if you write consistent code, then it won't make such mistakes.
But even then, it does sometimes write bad code. Still, I had to implement a class that assigned some props based on probability, and it wrote the whole thing on its own. Same with array mapping/filters etc. I am not proficient in PHP, so I am quite happy when it autofills it lol
Saves quite a bit of time, otherwise I'd have to dig through Stack Overflow
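The "assign props based on probability" class described above might look something like this Python sketch; the class name, property values, and weights are all hypothetical, just to show the shape of the thing.

```python
import random

# Hypothetical sketch of a class that assigns a property value
# according to given probabilities (names and weights made up).
class PropAssigner:
    def __init__(self, weighted_props, rng=None):
        # weighted_props: dict mapping a property value to its weight
        self.values = list(weighted_props.keys())
        self.weights = list(weighted_props.values())
        self.rng = rng or random.Random()

    def assign(self):
        # random.choices performs the weighted draw
        return self.rng.choices(self.values, weights=self.weights, k=1)[0]

rarity = PropAssigner(
    {"common": 0.8, "rare": 0.15, "legendary": 0.05},
    rng=random.Random(42),
)
print(rarity.assign())  # one of "common" / "rare" / "legendary"
```

A dozen lines of glue like this is exactly the kind of well-trodden pattern an autocomplete model has seen thousands of times, which is why it can produce it whole.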
no such thing as ai. you're asking when someone will write an algo that gives them an excuse to do something evil they were already planning, which happened about five minutes after the invention of statistics
Already is. Watch out for AI use in warfare. I saw rumors that China is working on deploying AI to destroy critical infrastructure in Taiwan, with an expected death toll of around 7k for the first attack.
It depends. The public expects advanced AIs to be like HAL, Mike from TMIAHM or SAM from Mass Effect: these kinds of living, thinking, feeling brains that can talk to everyone. In reality we have reached a stage where anyone with enough time, computing power and coding knowledge can build some machine learning algorithms. They're already used in betting, financial trading, data processing, word processing and a bunch of other stuff. Hell, there's even AI that can write Buzzfeed articles better than a human. Tesla is struggling with a driving AI, but other companies are doing a decent job. So within 10-15 years it's not outside reality that we will have self-driving AI-powered cars and a financial market further dominated by AI trading. No doubt more machine learning and AI systems will arise to automate things. Hell, they're already experimenting with AI doctors through machine learning networks in GP offices. Do we know the consequences of these, or how they will develop or change? No. We measure intelligence and power by sentience rather than ability.
When it becomes smarter than a human. That's the inflection point where it can begin to cause real damage when misaligned, because if it's smarter than us, then it will likely also be able to manipulate us into doing what it wants. When this will happen is obviously impossible to know, but AI safety experts' average guess is something like 50 years from now, IIRC.
The reason this is dangerous is that being able to manipulate us means that most, if not all, security measures we put in place will be useless, since the AI can circumvent them. Don't want to give the AI a connection to the internet? Too bad, the AI will simply manipulate someone into plugging it in. Don't give the AI a physical form so it can't cause physical harm? It can manipulate a human into doing physical harm for it instead.
You might say, no matter how smart the AI is, it wouldn't be able to manipulate us into doing these obviously dangerous things. But you'd be wrong. Humans are the smartest things that humans know of, so it's intuitively difficult for us to imagine what interacting with something smarter would be like. But all the other examples of intelligence hierarchies we know of show that it is usually exceedingly simple for the more intelligent party to manipulate the less intelligent one. Specifically, look at how easily we manipulate animals. There's little reason to think that humans and something smarter than humans would have a completely different dynamic.
You don't often see people who are concerned about AI alignment on Bot.info. It is indeed a very likely problem with no known solutions at the moment.
NTA, and I'm concerned too. It's just rare to get an occasion to talk about it.
Anyway, my thoughts are that it IS possible to safely develop superhuman AIs. The big risk is reinforcement learning, because of two things: 1. It has goals. 2. It can try to find "clever" ways to reach those goals.
Consider GPT, by contrast. It's arguably quite intelligent already. But here's the thing: GPT has no goals. It's not trying to accomplish anything, so it makes no sense for it to manipulate us. It just exists, and completes sentences really well. You can even ask it questions and occasionally get some gems. But it just tries to answer honestly, not to make you think one way or another, or do one thing or another.
>Consider on the contrary GPT
Here's a highly speculative post about how it could go wrong, even for gpt-likes: https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality-is-the-tiger-and-agents-are-its-teeth
If it were sufficiently intelligent, you could also accidentally condition the system to behave like an unaligned AGI with an unfortunate prompt. I still agree that gpt-likes are much less likely to cause problems, but I don't think they will be the end of the line. I'm pretty sure that systems with goals will always end up outperforming self-supervised ones, so there would be a clear economic incentive to keep pushing here, perhaps by using a successor to gpt as a base model.
While interesting, I don't find the article particularly convincing. The author makes one important point, I guess: "don't take advice you don't understand". But this is not insurmountable. We can just ask the AI "how would running that script help me?" and it would be like "oh, it just causes an intelligence explosion lmao"
Some already are, I guess; it depends on how you define how smart they are.
I don't think there is one. We have dominated Earth by outsmarting obstacles, but when the obstacle is smarter than you, who knows.
>I dont think there is one
Humans are somewhat "aligned" with other species; we don't exterminate everything as soon as it gets in our way. At least not anymore.
Humans are a serious threat to other species. We have already caused a huge amount of extinction. And we will cause even more in the future.
Other species are either:
1. Bred in captivity for food / material
2. Targets of (attempted) extermination because they annoy us.
3. Allowed to exist in ever-smaller numbers in ever-shrinking habitats in the wild.