As artificial intelligence (AI) rapidly reshapes the global landscape, many countries and international organisations have established frameworks to guide its research, application, and governance.
1. European Artificial Intelligence Act
The European Parliament has approved the Artificial Intelligence Act, a landmark piece of legislation intended to foster innovation, ensure AI safety, and uphold fundamental rights. It prohibits AI applications that could violate people's rights, such as certain biometric systems and emotion-recognition systems in contexts like workplaces and schools. Law enforcement use of biometric identification is subject to rigorous controls, and real-time deployment requires stringent safeguards.
High-risk AI systems carry specific obligations to minimise potential harm, maintain transparency, and allow for human oversight. General-purpose AI models and systems must meet transparency requirements, and deepfakes must be clearly labelled as artificially generated.
2. European Artificial Intelligence Liability Directive
To address the challenges AI poses to existing liability legislation, the European Parliament and Council have proposed an AI Liability Directive. Current national liability frameworks are ill-suited to claims for harm involving AI: its complexity and opacity make it difficult for victims to establish liability. The directive aims to give those harmed by AI the same protections as people harmed by conventional products, to reduce legal uncertainty for businesses, and to prevent inconsistent national adaptations of liability rules. It forms part of a wider EU plan to promote trustworthy AI and digital technologies and complements the Union's environmental and digital goals.
3. Canada Artificial Intelligence and Data Act
The Artificial Intelligence and Data Act (AIDA), which is slated to become part of Canada's Digital Charter Implementation Act, 2022, aims to regulate AI systems to ensure they are safe, fair, and accountable.
AI is becoming increasingly prevalent in important industries such as healthcare and agriculture, but it can also pose risks, particularly for marginalised groups. AIDA would establish rules for the responsible design, development, and use of AI, with an emphasis on safety and fairness. The legislation reflects Canada's commitment to leveraging AI's promise while protecting people's rights and minimising potential harms.
4. Brazilian Artificial Intelligence Bill
This bill establishes national standards for developing, implementing, and using AI systems in Brazil. It aims to safeguard citizens' interests, democracy, and scientific progress while ensuring safe and trustworthy AI systems. The guiding principles of AI development in Brazil are human-centricity, respect for democracy and human rights, environmental preservation, sustainable development, equality, non-discrimination, and innovation. The bill also fosters free enterprise, fair competition, and consumer protection. These provisions emphasise the importance of ethical AI governance that upholds fundamental rights, advances technology, and protects democratic principles.
5. New York City Bias Audit Law
Under Local Law 144 of 2021, enforced by the Department of Consumer and Worker Protection (DCWP), New York City employers and employment agencies may not use Automated Employment Decision Tools (AEDTs) unless the required notices have been given and an independent bias audit has been completed. AEDTs are computational tools that use machine learning, statistical modelling, data analytics, or artificial intelligence to substantially assist or replace discretionary decision-making in employment decisions. The law sets bias-audit and reporting requirements and aims to ensure accountability and transparency in the use of these tools.
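In practice, these bias audits centre on comparing selection rates and impact ratios across demographic categories. The following is a minimal sketch of that calculation in Python; the records, category names, and data structure are hypothetical illustrations for clarity, not taken from the law or the DCWP rules.

```python
# Minimal sketch of an impact-ratio calculation of the kind used in AEDT bias audits.
# All data below is hypothetical.

from collections import defaultdict

# Hypothetical audit records: (demographic_category, was_selected_by_AEDT)
records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

# 1. Selection rate per category = selected / total applicants in that category.
totals, selected = defaultdict(int), defaultdict(int)
for category, was_selected in records:
    totals[category] += 1
    if was_selected:
        selected[category] += 1
selection_rates = {c: selected[c] / totals[c] for c in totals}

# 2. Impact ratio = category selection rate / highest category selection rate.
best_rate = max(selection_rates.values())
impact_ratios = {c: rate / best_rate for c, rate in selection_rates.items()}

for category, ratio in impact_ratios.items():
    print(f"{category}: selection rate {selection_rates[category]:.2f}, "
          f"impact ratio {ratio:.2f}")
```

An impact ratio substantially below 1.0 for a category is the kind of disparity such an audit report is meant to surface.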
6. US Executive Order on Trustworthy Artificial Intelligence
The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence outlines AI's potential benefits and dangers. It recognises the urgent need for responsible management of AI to address social issues and avoid negative consequences such as fraud, discrimination, and threats to national security. For the safe and responsible development and deployment of AI, the order emphasises joint effort by government, the business sector, universities, and civil society.
The administration's goal is to align executive departments and agencies around eight guiding principles and priorities for the development and management of AI. These efforts involve collaboration with a wide range of stakeholders, including business, academia, civil society, trade unions, international partners, and others. This policy framework demonstrates a commitment to managing AI so that its growth responsibly improves American society, the economy, and national security.
7. China Algorithmic Recommendation Law
These regulations set out rules for the use of algorithmic recommendation technology in online services in Mainland China. They seek to safeguard national interests, regulate conduct, uphold social values, and protect the rights of individuals and organisations. The state internet information department is responsible for oversight and cooperates with other relevant bodies such as market regulators, public security, and telecommunications authorities. Service providers must comply with laws and ethical standards and prioritise fairness, justice, transparency, and good faith. Industry organisations are expected to provide guidance, promote compliance, and support service providers in meeting legal obligations and public expectations for algorithmic recommendation services.
8. China Generative AI Services Law
China moved early to regulate generative artificial intelligence (AI) services when the Cyberspace Administration of China (CAC) and other government agencies introduced interim measures governing them. These measures, which took effect on August 15, 2023, regulate companies that provide generative AI services to the Chinese public. Generative AI covers models that produce text, images, audio, and video. The interim measures encourage innovation and research, including potential foreign investment, and future AI legislation is expected to extend regulation beyond generative AI. Non-compliant services face possible fines or suspension.
9. Peru Law 31814
To promote the use of AI for social and economic development, Peru passed Law 31814, making the country a leader in AI regulation in Latin America. The law emphasises ethical standards and human rights, as well as the responsible, transparent, and sustainable use of AI. Emerging technologies such as AI are declared to be of national interest for improving national security, the economy, public services, health, and education.
10. South Korea Artificial Intelligence Law
South Korea is advancing its legal framework for AI with the proposal of the "AI Industry Promotion Act and Framework for Establishing Trustworthy AI" (AI Act). This law aims to fully oversee and regulate the AI industry by integrating seven different AI-related regulations into one comprehensive strategy. The AI Act focuses on strengthening the AI field while ensuring the reliability of AI systems to protect users. Key provisions include defining high-risk AI categories, supporting AI companies, setting ethical standards, allowing innovation without prior government approval, and establishing an AI commission and policy roadmap.
11. Indonesia Artificial Intelligence Act
In Indonesia, AI regulation is evolving as AI integration increases across enterprises. Although the country has taken steps to address ethical issues and rules governing the use of AI, it still lacks specific legislation in this area. The National Artificial Intelligence Strategy 2020-2045 provides the basis for shaping AI policy. AI is currently governed by the Electronic Information and Transactions Law (EIT Law), which defines electronic agents and establishes general obligations for AI operators. The OJK Ethical Guidelines on AI in the Financial Technology Industry and MOCI Circular No. 9 of 2023 (MOCI CL 9/2023) are recent developments focusing on the ethical use of AI.
12. Mexico Federal Artificial Intelligence Bill
In Mexico, proposed legislation outlines a comprehensive framework for regulating artificial intelligence technologies. Extraterritorial applicability provisions would require compliance by foreign AIS providers that offer services or supply data for use in Mexico. Authorisation would be administered by the Federal Institute of Telecommunications (IFT) with support from a National Commission on Artificial Intelligence. As in the EU, AI systems would be classified by risk level. Even free services would require prior authorisation from the IFT before an AIS could be introduced, and fines for negligence could reach 10% of annual income. The bill, which aims to shape the development and commercialisation of AIS in Mexico, aligns with global AI policy trends.
13. Chile Parliament Law for AI
The Chilean parliament has begun reviewing a bill that aims to regulate the ethical and legal dimensions of artificial intelligence across its development, distribution, commercialisation, and deployment. Supported by the Chilean Ministry of Science, Technology, Knowledge and Innovation and modelled on the 2021 European Artificial Intelligence Act proposal, the bill seeks to strike a balance between technological development and citizens' rights. It proposes to define artificial intelligence, designate risky AI systems, establish a national commission on artificial intelligence, require authorisation for the development and use of AI, and define the consequences of non-compliance. With this legislative effort, which prioritises human well-being and social benefit in the application of AI, Chile demonstrates its commitment to the responsible management of technological innovation.
14. NIST AI Risk Management Framework
NIST's AI Risk Management Framework (AI RMF) offers structured guidelines for addressing risks associated with AI. The framework, created through joint efforts between the public and private sectors, also addresses generative AI and 12 identified risks. To help organisations establish trustworthy AI practices, it provides resources and actionable guidance such as the AI RMF Playbook, Roadmap, Crosswalk, and Perspectives. Launched in March 2023, the Trustworthy and Responsible AI Resource Centre promotes adoption of and compliance with the AI RMF on a global scale. NIST's consensus-driven methodology ensures thorough risk management for AI technology.
15. Blueprint for an AI Bill of Rights
The Blueprint for an AI Bill of Rights addresses issues raised by technology and automated systems that may threaten human rights. The effort seeks to use technology to advance society while protecting democratic principles and civil rights, and it follows President Biden's commitment to address injustice and strengthen civil rights. To protect Americans in the age of artificial intelligence, the White House Office of Science and Technology Policy has identified five guiding principles for the proper design, use, and deployment of automated systems. The Blueprint provides a framework for safeguarding rights and guiding technological development and policy in ways that respect civil liberties and democratic principles.
For more such laws, see the LinkedIn post.