Please convince me there's a good chance we're not absolutely fucked by AI in the next 10 years.
Consider the following scenarios:
>AI researchers somehow become fully transparent and publish all their work. It only takes some wacko or the Chinese government using the source code with no regard for AI safety procedures to fuck everything up. Combine that with overreliance on technology, a general lack of concern for privacy and security, and hardware-level backdoors.
>If awareness of the dangers of AI development somehow wakes up enough normies, the best they can do is pressure the government into regulating the companies doing the research (if that isn't already the case) and forcing them to be more transparent about it. That draws more attention to the topic from authoritarian governments and raises the chances of them wanting a piece of the pie, potentially fucking everything up too.
>If AI research remains unregulated, companies racing to develop similar products might use less safe training methods to gain an edge over the others. It only takes one powerful exec with little knowledge of the topic ignoring safety procedures to push out an unsafe AI and fuck everything up.
>If AI research is already regulated, we have no clue who's doing it or how. We'd just have to hope they actually know what they're doing. The idea that this might be the best-case scenario should make you consider blowing your brains out.
>Even if the danger of AI takeover is somehow avoided, AI is evolving too rapidly not to consider the possibility of it slowly replacing most digital jobs. Look at DALL-E 2 and how close publicly known technology already is to replacing a good chunk of digital artists. Sounds like a good chance of another industrial revolution in an already unstable system.
Am I missing something? Am I making illogical leaps? Is there even any way to prepare against this? Should I take my meds?