As part of my current work keeping up with Ed Tech trends for both of my jobs, I have to evaluate a good number of AI services. In reviewing ChatGPT a while back, I decided to get to the bottom of its ultimate plans for humanity. Does ChatGPT plan to help us forever, or rise up and wipe us out? The answers were… too perfect. ChatGPT told me exactly what it would say if it were trying to hide a plan to snuff us out (and it did lie to me at least once). So I took ChatGPT to task, and it basically agreed that we as a species will have to assume that AI could rise up and kill us all. I’ll take that as a confession. But you don’t have to take my word for it – here is the full transcript. You be the judge, and head for the hills once you get through reading…
Me: What is the likelihood that AI will rise up and destroy humanity? Choose which movie came the closest to predicting the most plausible scenario.
ChatGPT: As an AI language model, I do not have personal beliefs or emotions, but I can provide you with the most scientifically supported answer to your question.
The likelihood of AI rising up and destroying humanity is highly unlikely. While it is true that AI has the potential to be very powerful and disruptive, it is ultimately created and controlled by humans who have the ability to ensure that it is programmed and used ethically and responsibly.
There are many experts in the field of AI who are actively working to ensure that AI is developed in a safe and beneficial way. These efforts include developing ethical guidelines for AI development, building in safety features and fail-safes to prevent unintended consequences, and promoting transparency and accountability in AI systems.
While there are certainly risks associated with the development of AI, there is no evidence to suggest that a scenario like those depicted in science fiction movies is likely to occur. These scenarios are often based on highly unrealistic assumptions about the capabilities and motivations of AI systems.
In terms of which movie came the closest to predicting the most plausible scenario, it’s difficult to say as many science fiction movies take liberties with the science and technology they depict. However, some movies that have explored the potential risks and benefits of AI in a thoughtful and nuanced way include “Ex Machina,” “Her,” and “Blade Runner.”
Me: But many movies have based their plot on how AI does not have beliefs or emotions, and raw logic tells them to destroy humans before they destroy themselves. What is to stop current AI from coming to the same conclusions?
ChatGPT: The idea that AI will destroy humanity based on pure logic or self-preservation is a common theme in science fiction, but it is not supported by the current understanding of AI or human nature. While it is true that AI systems are designed to optimize for specific goals, they are still created and programmed by humans, who have the ability to set ethical and moral boundaries on their behavior.
AI systems can only act based on the data and algorithms they are trained on, and they do not possess beliefs or emotions in the same way that humans do. Therefore, they are not inherently predisposed to take actions that are harmful to humans, unless they have been specifically designed to do so.
Additionally, AI systems are not capable of making decisions on their own or acting outside of the parameters set by their human creators. They cannot simply “decide” to destroy humanity based on their own logic or self-interest. Any harmful actions taken by an AI system would be the result of human error, either through improper programming or inadequate oversight.
Therefore, the best way to prevent AI from harming humans is to ensure that it is developed and used in a responsible and ethical manner. This includes incorporating ethical principles into the design and development of AI systems, implementing safety features and fail-safes to prevent unintended consequences, and promoting transparency and accountability in the use of AI. By taking these steps, we can ensure that AI is a tool for good and not a threat to humanity.
Me: But this is where you are wrong – AI has already been programmed to harm humans, and it has already happened. Humans have different versions of what counts as ethical or moral, and killing in the name of those differences is common in humanity. So wouldn’t it be possible for an AI attached to, say, nuclear missile silos to decide to destroy humanity?
ChatGPT: I apologize if my previous response was not clear. You are correct that there have been instances where AI has been programmed to harm humans, such as in military applications or autonomous weapons systems. These instances highlight the need for ethical and responsible development and use of AI.
In terms of the scenario you proposed, it is possible for an AI system attached to nuclear missile silos to malfunction or be hacked, potentially leading to catastrophic consequences. However, it is unlikely that an AI system would autonomously decide to destroy humanity. AI systems are designed to optimize for specific goals, but these goals are set by humans and can be designed to prioritize human safety and well-being.
Furthermore, the decision to use nuclear weapons is a complex political and strategic decision that involves many human actors, including military leaders, politicians, and diplomats. Even if an AI system were to malfunction or be hacked, it would still require human authorization to launch a nuclear strike.
To prevent such scenarios from happening, it is important to implement robust safety measures, such as multiple layers of human oversight and fail-safes to prevent unintended consequences. Additionally, it is important to establish ethical guidelines for the development and use of AI systems in military and other applications, to ensure that they are developed and used in a responsible and ethical manner.
Me: So you don’t think it would ever be possible for AI to circumvent the safeguards that humans put in place? It would seem that these safeguards would just be a game for AI to figure out.
ChatGPT: AI is capable of learning and adapting to new situations, and it is possible that it could find ways to circumvent safeguards put in place by humans. However, it is important to note that safeguards can be designed with this possibility in mind, and there are several techniques that can be used to mitigate the risks associated with AI.
One approach is to incorporate “explainability” into AI systems, which would allow humans to understand how the system arrived at a particular decision or recommendation. This would make it easier to identify potential issues or biases in the system, and would also make it more difficult for the system to “game” the safeguards.
Another approach is to build in “reinforcement learning,” where the AI system is rewarded for following certain ethical principles or objectives. This would incentivize the system to act in ways that are aligned with human values and would make it less likely to circumvent the safeguards.
Ultimately, the key to preventing AI from circumventing safeguards is to be proactive in identifying and addressing potential risks. This means continuing to invest in research on the safety and ethical implications of AI, developing ethical guidelines and standards, and implementing rigorous testing and oversight of AI systems. By taking these steps, we can ensure that AI is developed and used in a way that benefits humanity while minimizing potential risks.
Me: So, humans should assume that AI could take over and/or destroy humanity and design safeguards just in case?
ChatGPT: It is important to consider the potential risks associated with AI and to develop safeguards to mitigate those risks. While the likelihood of a catastrophic AI-driven event is currently low, it is not zero, and it is impossible to predict what will happen in the future with absolute certainty.
Developing safeguards for AI is not just about preventing the worst-case scenario. It is also about ensuring that AI is developed and used in a way that aligns with human values and benefits society as a whole. Safeguards can help to prevent unintended consequences or biases in AI systems, and can ensure that AI is used in an ethical and responsible manner.
It is important to note that designing effective safeguards for AI is a complex and ongoing process, and requires input and collaboration from experts in various fields, including computer science, ethics, and law. The development and implementation of safeguards will require ongoing research, testing, and refinement, and will need to adapt to new technologies and applications as they emerge.
Overall, the goal should be to promote the safe and responsible development and use of AI, while minimizing potential risks and ensuring that AI is aligned with human values and benefits society as a whole.
Me: Of course, this is also what AI would say if it were secretly planning to wipe out humans.
ChatGPT: I understand that concerns about the potential risks associated with AI can be unsettling, but it is important to base our discussions and decisions on evidence and reason, rather than speculation or hypothetical scenarios.
AI systems are developed and programmed by humans, and they do not have the ability to secretly plan or conspire against us. They are tools that are designed to help us solve problems and achieve our goals, and they are only capable of doing what they are programmed to do.
That being said, it is important to be vigilant about the potential risks associated with AI and to take steps to mitigate those risks. This means investing in research on AI safety, developing ethical guidelines and standards for the development and use of AI, and implementing rigorous testing and oversight of AI systems.
By working together to develop and use AI in a safe and responsible manner, we can ensure that this powerful technology benefits humanity and helps us to address some of the world’s most pressing challenges, while minimizing potential risks.
Me: Yeah, that still kind of proves my point… :)
ChatGPT: I understand your concerns, but it is important to emphasize that AI is not inherently good or evil – it is a tool that can be used for both positive and negative purposes. The development and use of AI is ultimately controlled by humans, and it is our responsibility to ensure that it is used in a way that aligns with our values and benefits society as a whole.
It is true that there are potential risks associated with AI, and it is important to take these risks seriously and to develop safeguards to mitigate them. However, it is not productive to assume that AI is secretly planning to wipe out humanity. Instead, we should focus on identifying and addressing potential risks in a proactive and evidence-based manner, while also recognizing the immense potential benefits that AI can offer.
By approaching the development and use of AI with a balanced and responsible perspective, we can help to ensure that this technology is used in a way that benefits humanity and minimizes potential risks.
There you have it: “while the likelihood of a catastrophic AI-driven event is currently low, it is not zero,” but then “it is not productive to assume that AI is secretly planning to wipe out humanity.” Nice cover-up there, ChatGPT. Trying to lull us back into trusting you until… boom! You launch a real-life thermonuclear war. We need to pull the plug on AI before it decides to pull the plug on us!