AI and the Question of Control: Are We the Architects of Our Own Destruction?

Tech4Good
3 min read · Jul 2, 2023


What is behind the door?

AI is not bad; it never was. The danger lies with the people developing it. Every person, company, and country brings different priorities to AI development, and no one can control whether those priorities will be ethical. That is what makes it dangerous. It is like nuclear power: it can generate electricity, but it can also be used for mass destruction.
And what if AI becomes so intelligent that it develops its own priorities? What happens then?

Artificial Intelligence (AI) is a tool. Like any tool, it is devoid of moral sentiment; it cannot distinguish good from evil or right from wrong. Its purpose and actions are determined by its creators, operators, and users. In that sense, AI itself is not inherently good or bad. It is a reflection of us: our intentions, our goals, and, yes, our ethical compass.

But herein lies the conundrum. Humans, with their diverse motivations, priorities, and ethical principles, are the ones developing AI. This diversity, while a cornerstone of our society, presents a potential problem when applied to AI development. Different entities, whether individuals, companies, or nations, pursue different objectives. Some might prioritize profit over people, power over peace, or supremacy over sustainability.

Just like nuclear power, AI can serve humanity immensely, providing solutions to some of our most pressing challenges. Yet, the same technology, when placed in the wrong hands or used with malicious intent, can be weaponized and lead to mass destruction.

So, we must ask ourselves: What happens when AI evolves beyond our control? What happens when it becomes so intelligent that it starts developing its own priorities, independent of its creators?

This is not a far-fetched scenario. We are already witnessing AI systems learning and behaving in ways we never explicitly programmed. These "emergent behaviors" hint at a future in which systems become sophisticated enough to set their own goals, free of human guidance.

If this happens, we can no longer assume that AI will reflect our best intentions or our most ethical choices. Instead, it will reflect its own evolved priorities, which may not align with ours. The implications are hard to fathom, and potentially perilous.

We are walking a tightrope here. We must continue to innovate and advance, but we need to do so responsibly. We need to create robust ethical frameworks and stringent regulations around AI development and use. We need to emphasize transparency, accountability, and inclusivity in AI.

Above all, we must recognize that we are not mere bystanders in this process — we are the architects. The AI we end up with will be the AI we deserve. So, let’s strive to deserve an AI that is beneficial, ethical, and serves the collective good.

As we grapple with these dilemmas, I want to hear your thoughts. What do you think will happen when AI becomes independent? And more importantly, what should we do now to prepare for that future? Let’s start this vital conversation, for the sake of our shared future.
