With great power comes great responsibility. Despite the numerous advantages that the evolution of AI brings to the world, it could pose profound risks to society if not handled properly. Alongside other global and societal risks such as climate change and nuclear proliferation, the risk of extinction posed by AI must be treated as a universal priority. Scientists have long warned of the challenges and risks associated with AI, but the recent emergence of tools that generate audio, video, text, images, and code has alerted the world to the harmful consequences the technology can have. It is often impossible to tell whether content produced by these tools was created by a human or a machine. Most alarming is their reach: ChatGPT, a text-generating tool, crossed the threshold of 100 million users within two months. People are using the technology to complete their writing assignments, threatening already compromised academic integrity and values.

According to a report by the McKinsey Global Institute, up to 800 million workers could be displaced by automation by 2030. Yet alongside job displacement there is also potential for growth in the job market, with the emergence of more creative roles and work requiring less physical presence. Writers, programmers, and other professionals can use AI tools like GPT-4 to generate ideas and do their work more efficiently and quickly. At the same time, AI has the potential to widen economic inequality by disproportionately benefiting wealthy corporations and individuals; job losses from automation are more likely to hit low-skilled workers, increasing the income gap. Inclusive AI development and re-skilling programs are the keys to combating this inequality.
The use of cell phones as surveillance machines, with the help of spyware deployed against human rights activists and journalists, is one of the harmful consequences of these disruptive technologies. Recently developed AI systems like GPT-4 can generate biased, toxic, and untruthful information, and there is no guarantee that they will be accurate on the tasks assigned to them. People can also misuse these tools to spread disinformation, false propaganda, and fake news. Despite this risk, people may rely on such AI tools for medical advice and other information on which they base important decisions. Reliance on AI-driven communication can also lead to a loss of human connection and empathy.

An AI-driven “social credit system” is already operational in China, scoring citizens on how they behave — whether they play music in public spaces or video games, what they post on social media, and more. Citizens with low scores can be punished with travel bans, rejection from jobs at state-owned firms, and denial of other state privileges. This is a terrifying example of how artificial intelligence can reach into every area of human life, violating privacy, which is a basic human right. Moreover, malicious actors and hackers can harness the power of the technology to bypass security measures, develop more advanced cyber-attacks, and take advantage of vulnerabilities in systems. Stephen Hawking predicted that “once humans develop complete Artificial Intelligence, it will take off on its own and would be able to redesign itself at an ever-increasing scale.” Autonomous weapons, or killer robots, are another area of concern: they can seek out military targets and attack according to pre-programmed instructions. Such robots could go rogue, killing civilians, and they would be hard to stop.
Hence, the human rights community is calling on the companies developing AI tools to remain vigilant about any negative consequences they may have for humanity. A framework must be developed to identify potential harms and to implement remediation and mitigation where needed. AI has numerous benefits, powering self-driving cars, art creation, medical diagnosis, and much more, but considerable regulation is needed to get the most out of this promising technology. Governments around the world can play their role by imposing restrictions on how much computing power tech companies may use to train AI and how much data they may feed it. There must also be constraints on AI's access to the vast store of human-created knowledge accumulated over the years under ownership rights. Organizational standards must be formed for applying and exploring AI technology, and regulatory bodies are needed at the national and international levels to ensure that every society develops AI safely. The EU and the US are already working on implementing clear-cut measures to manage the spread of AI. Nevertheless, that spread is inevitable, and it is essential for countries to innovate and keep up with the world.

To develop AI ethically, organizations can take many steps while integrating artificial intelligence into their operations: they can build processes to monitor algorithms, compile high-quality data, and explain the findings of their AI systems. Technological innovation must be balanced with humanistic thinking so that technology develops in an idealistic and responsible manner. AI creators must seek the insights, concerns, and experiences of people across various cultures, professional areas, ethnicities, and socio-economic groups. A machine may never possess human qualities such as wisdom, compassion, or moral values; still, to protect the rights of everyone and to keep pace with technological evolution, legal systems must advance.
The writer is a columnist and a researcher and can be reached at aneezaamaham@gmail.com