This paper deals with the laws relating to cybercrime in India. Its objectives are four-fold: first, to analyze the concept of jurisdiction and the various theories for determining jurisdiction in cases where cybercrime offences are committed; second, to analyze the jurisdiction theories applicable under the Cybercrime Convention; third, to analyze the jurisdiction theories applicable under the Information Technology Act, 2000; and fourth, to consider whether one jurisdiction theory could be applied globally to all cybercrimes. For convenience, the paper is divided into several parts, focusing on the various theories of jurisdiction, the jurisdiction principles applicable under the Cybercrime Convention, 2001, and the jurisdiction principles applicable under the Information Technology Act, 2000.
Armando Cottim
Ever since Cain and Abel, crime has been a “known known” to humankind. Modern criminal codes have dealt quite well with crime within territorial boundaries and even, sometimes, outside those boundaries, by applying themselves to crimes committed by nationals abroad. Criminal law has thus been almost purely domestic, with external threats to the public order of States being dealt with by the military. Yet, the revolution in information technologies has changed society to a point where advances in programming artificial intelligence software and in the processing capacity of desktop and laptop computers are leading society to a time when a cyberbot will be impossible (or at least very difficult) to distinguish from a human person.
Ahmed Y. ElGohary Yousef
Cyber operations that do not rise to the level of a prohibited use of force under Article 2(4), or of an armed attack under Article 51, can nevertheless breach many other laws, such as the principle of state sovereignty.
International Review of Law, Computers & Technology
Subhajit Basu
In India, the Information Technology Act received Presidential Assent in June 2000. The Act is based on the Model Law on E-Commerce adopted by the United Nations Commission on International Trade Law (UNCITRAL). The essence of the Act is captured in the long title: ‘An act to provide for the legal recognition of transactions carried out by … alternatives to paper-based methods of communication and storage of information …’. In a previous article the authors reviewed the ‘heavy handed’ approach taken by the Indian government to the regulation of Certificating Authorities; this article continues this theme and evaluates the provisions of the Act in and around a range of jurisdiction, crime and privacy issues. Unlike similar legislation in Singapore, Malaysia, South Korea and Thailand, which primarily focuses on the regulation of e-commerce, the Information Technology Act 2000 introduces and enacts for the first time in India a range of e-commerce and Internet-related criminal offences. These provisions confer a range of executive powers that the authors consider will significantly impair the rights of privacy and free speech of both citizens of India and of other countries.
Michael Vagias
This article discusses the question of the territorial jurisdiction of the International Criminal Court over international crimes committed through the Internet. It argues that the Court may assert its territorial jurisdiction over such conduct consistently with international law and the Rome Statute, by localising the cyber-commission of a core crime in whole or in part within the territory of States Parties. However, to mitigate state complaints of jurisdictional overreach, it further argues that the Court could avoid the outright endorsement of extensive versions of territorial jurisdiction. Instead, it should pursue first a detailed analysis of core crimes, followed by a well-versed application of territoriality. In closing, the article discusses the application of this approach in the example of online incitement to commit genocide.
Nahida Siddika
Abstract: People from any country or nation have access to cyberspace, making it like a global village. Law permits a state to exercise jurisdiction within or outside its territory if the interests of that nation or any of its nationals are prejudiced by an act committed by any person in the world, whether a real-life offence or a virtual offence with or without real-life effects. From that point of view, cyberspace is supposed to be regulated by universally accepted rules, and all user states should have an equal opportunity to exercise their jurisdiction when the interests of the state or any of its nationals have been prejudiced. The rights to equality and non-discrimination are also established as peremptory norms (jus cogens) under international law, from which derogation is not permissible. Unfortunately, not all states receive equal and non-discriminatory treatment in cyberspace; the actual governing or controlling power over cyberspace lies in the hands of a few states that own the internet intermediaries. This jurisdictional inequality has created an imbalance in cyberspace which affects the sovereign power of other states in both virtual and real life.
The purpose of this paper is to assess whether current international instruments to counter cybercrime may apply in the context of Artificial Intelligence (AI) technologies, and to provide a short analysis of the ongoing policy initiatives of international organizations that will have a relevant impact on law-making in the field of cybercrime in the near future. The paper discusses the implications that AI policy making would bring to the administration of the criminal justice system, specifically in countering cybercrime. Current trends and uses of AI systems and applications to commit harmful and illegal conduct, including deep fakes, are analysed. The paper concludes by offering an alternative for creating effective policy responses to cybercrime committed through AI systems.
Undoubtedly, AI has brought enormous benefits and advantages to humanity in the last decade, and this trend will likely continue in the coming years as AI gradually becomes part of the digital services we use in our daily lives. Many governments around the world are considering the deployment of AI systems and applications to support their activities and, more concretely, to facilitate the identification and prediction of crime. Footnote 1 Further, national security and intelligence agencies have also realized the potential of AI technologies to support and achieve national and public security objectives.
There have been significant developments in AI technologies, such as the use of facial recognition in the criminal justice realm, drones, lethal autonomous weapons and self-driving vehicles, which, when not properly configured or managed without adequate oversight mechanisms in place, have the potential to be used for disruptive purposes and to harm individuals’ rights and freedoms.
Currently, there is an ongoing discussion in international policy and legislative circles on the revision and improvement of the liability framework and threshold concerning AI systems and technologies, Footnote 2 although due to the complexity of the topic and the different legal approaches around the world concerning civil liability, there will probably not be a consensus on a harmonized and uniform response, at least not in the near future.
Further, AI and machine learning offer the possibility to detect and respond to cyberattacks targeting critical infrastructure sectors, including water, energy and electricity supplies, and to support the correct management of cybersecurity solutions to help reduce and mitigate security risks. Footnote 3 However, many complex challenges remain, particularly for small and medium enterprises, which continue to rely on limited budgets to improve their cybersecurity capabilities.
Due to the COVID-19 pandemic, a large part of the world’s connected population was confined. This situation made companies and individuals more dependent on the use of systems, technologies and applications based on AI to conduct their activities, including remote work, distance learning, online payments or simply having access to more entertainment options like streaming and video-on-demand services. Unfortunately, this situation also led organized criminal groups to reconsider and re-organize their criminal activities in order to specifically target a number of stakeholders, including international organizations, Footnote 4 research and health sector entities, Footnote 5 supply chain companies Footnote 6 and individuals. We have witnessed that organized criminal groups have largely improved their CaaS (crime-as-a-service) capabilities and turned their activities into higher financial profits, with very small chances of being traced by law enforcement and brought to justice.
Through the use of AI technologies, cybercriminals have found not only a novel vehicle to leverage their unlawful activities, but in particular new opportunities to design and conduct attacks against governments, enterprises and individuals. Although there is insufficient evidence that criminal groups have strong technical expertise in the management and manipulation of AI and machine learning systems for criminal purposes, these groups have clearly realized their enormous potential for criminal and disruptive purposes. Footnote 7 Further, organized criminal groups currently recruit technically skilled hackers into their ranks to manipulate, exploit and abuse computer systems and to perpetrate attacks and conduct criminal activities 24/7 from practically anywhere in the world. Footnote 8
Current trends and statistics show that cybercriminals are relying more on the use of IoT devices to write and distribute malware and to target ransomware attacks, which are largely enhanced through AI technologies. Footnote 9 This trend will likely continue, as it is expected that more than 2.5 million devices will be fully connected online in the next 5 years, including industrial devices and critical infrastructure operators, which will make companies and consumers more vulnerable to cyberattacks. Footnote 10
Furthermore, the discussion on bias and discrimination Footnote 11 is also a relevant and widely debated aspect of AI policy in many international and policy-making circles. Footnote 12 The widespread use of technologies based on facial recognition systems Footnote 13 deserves further attention in the international policy arena: even though facial recognition may be very appealing to some governments for enhancing public security and safety and prioritizing national security activities, including countering terrorist activities, this technology also raises relevant and contentious issues concerning the protection of fundamental rights, including privacy and data protection under existing international treaties and conventions. These topics are currently being discussed in relevant international fora, including the Council of Europe, the European Commission, the European Parliament Footnote 14 and the OECD.
There is an ongoing global trend to promote misinformation with the support of AI technologies known as ‘bots’. Footnote 15 Bots are mainly used to spread fake news and content throughout the internet and social networks, and have the chilling effect of disinforming and misleading the population, particularly younger generations who cannot easily differentiate between legitimate sources of information and fake news. Further, the use of ‘bots’ has the potential to erode trust, question the credibility of the media and destabilize democratic and government institutions.
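Detecting such automated accounts is typically approached through behavioural heuristics rather than content analysis alone. The following is a minimal illustrative sketch in Python; the signals, field names and thresholds are assumptions chosen for illustration only, not drawn from any named platform or detection product:

```python
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float       # average posting rate
    account_age_days: int      # days since registration
    duplicate_ratio: float     # fraction of posts that are near-duplicates

def bot_score(acc: Account) -> float:
    """Combine simple behavioural signals into a 0-1 score (illustrative thresholds)."""
    score = 0.0
    if acc.posts_per_day > 50:      # sustained high-volume posting
        score += 0.4
    if acc.account_age_days < 30:   # very young account
        score += 0.2
    if acc.duplicate_ratio > 0.5:   # mostly repeated content
        score += 0.4
    return min(score, 1.0)

def is_likely_bot(acc: Account, threshold: float = 0.6) -> bool:
    """Flag accounts whose combined score exceeds a chosen threshold."""
    return bot_score(acc) >= threshold
```

Real platforms combine far richer signals (network structure, timing patterns, device fingerprints), but the design choice is the same: score several weak behavioural indicators and flag accounts above a tunable threshold rather than relying on any single rule.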
Although AI holds the prospect of enhancing the analysis of large amounts of data to curb the spread of misinformation in social networks, Footnote 16 humans still face the challenge of checking and verifying the credibility of sources, an activity usually conducted by content moderators at technology companies and media outlets without specific links to government. This situation has led relevant policy-making institutions like the European Commission to implement comprehensive and broad sets of actions to tackle the spread and impact of online misinformation. Footnote 17
Another technology now widely used across many industries is deep fakes. Footnote 18
The abuse and misuse of deep fakes has become a major concern in national politics Footnote 19 and among law enforcement circles. Footnote 20 Deep fakes have been used to impersonate politicians, Footnote 21 celebrities and CEOs of companies, and may be combined with social engineering techniques and system automation to perpetrate fraudulent criminal activities and cyberattacks. The use of deep fake technologies for malicious purposes is expanding rapidly and is currently being exploited by cybercriminals on a global scale. For example, in 2019, cybercriminals used AI voice-generating software to impersonate the voice of the chief executive of an energy company based in the United Kingdom and were able to obtain $243,000, distributing the transferred funds to bank accounts located in Mexico and other countries. Footnote 22
Another relevant case occurred in January 2020, when criminals used deep voice technology to simulate the voice of the director of a transnational company. Through various calls with the branch manager of a bank based in the United Arab Emirates, the criminals were able to steal $35 million that was deposited into several bank accounts, making the branch manager believe that the funds would be used for the acquisition of another company. Footnote 23
The spoofing of voices and videos through deep fakes raises relevant and complex legal challenges for the investigation and prosecution of these crimes. First and foremost, many law enforcement authorities around the world do not yet have the full capabilities and trained experts to secure evidence across borders, and oftentimes the lack of legal frameworks, particularly procedural measures in criminal law to order the preservation of digital evidence and investigate cybercrime, represents another major obstacle. Second, since most of these attacks are usually orchestrated by well-organized criminal groups located in different jurisdictions, there is a clear need for international cooperation, in particular close collaboration with global service providers to secure subscriber and traffic data, as well as more expedited investigations and law enforcement actions with other countries through the deployment of joint investigation teams, in order to trace and locate the suspects and follow the final destination of illicit funds. Footnote 24 Cross-border cybercrime investigations are complex and lengthy, and do not always result in convictions of the perpetrators.
Further, cyberattacks based on AI systems are a growing trend identified by the European Cybercrime Centre (EC3) of EUROPOL in its Internet Organised Crime Threat Assessment 2020. According to the EC3, the risks concerning the use of AI for criminal purposes need to be well understood in order to protect society against malicious actors: “through AI, criminals may facilitate and improve their attacks by maximizing their opportunities for profit in a shorter period of time and create more innovative criminal business models, while reducing the possibility of being traced and identified by criminal justice authorities”. Footnote 25
Further, the EC3 of EUROPOL recommends the development of further knowledge regarding the potential use of AI by criminals with a view to better anticipating possible malicious and criminal activities facilitated by AI, as well as to prevent, respond to, or mitigate the effects of such attacks in a more proactive manner and in close cooperation with industry and academia. Footnote 26
Due to the complexities that the misuse and abuse of AI systems for criminal purposes entail for law enforcement agencies, key stakeholders are trying to promote the development of strategic partnerships between law enforcement, international organizations and the private sector to counter the misuse and abuse of AI technologies for criminal purposes more effectively. For example, in November 2020, Trend Micro Research, the EC3 of EUROPOL and the Centre for Artificial Intelligence and Robotics of the UN Interregional Crime and Justice Research Institute (UNICRI) published the report Malicious Uses and Abuses of Artificial Intelligence. Footnote 27 This report contains an in-depth technical analysis of present and future malicious uses and abuses of AI and related technologies, drawing on the outcomes of a workshop organized by EUROPOL, Trend Micro and UNICRI in March 2020. The report highlights relevant technical findings and contains examples of AI capabilities divided into “malicious AI uses” and “malicious AI abuses”. It also sets forth future scenarios in areas like AI-supported ransomware and AI detection systems, and develops a case study on deep fakes, highlighting major policies to counter them, as well as recommendations and considerations for future research. Footnote 28
Strategic initiatives and more partnerships like the one mentioned above are needed in the field of AI and cybercrime to ensure that relevant stakeholders, particularly law enforcement authorities and the judiciary, understand the complexities and dimensions of AI systems and start developing cooperation partnerships that may help identify and locate perpetrators who misuse and abuse AI systems, with the support of the private sector. The task is complex and needs to be achieved with the support of the technical and business community; otherwise, isolated investigative and law enforcement efforts against criminals making use of AI systems are unlikely to succeed.
AI policy has been at the core of international discussions only in recent years. At the regional level, the European Commission has recently published a regulation proposal known as the Digital Services Act, Footnote 29 though this proposal has only recently been opened for consultation and it will take a few years until it is finally approved.
On April 21, 2021, the European Commission published its awaited Regulation proposal for Artificial Intelligence Systems. Footnote 30 The proposal contains broad and strict rules and obligations that must be met before AI services can be put on the European market, based on the assessment of different levels of risk. The proposal also contains express prohibitions of AI practices that may contravene EU values and violate the fundamental rights of citizens, and it establishes the European Artificial Intelligence Board (EAIB) as the official body that will supervise the application and enforcement of the regulation across the EU. Footnote 31
The prospect of developing a new international convention that will regulate relevant aspects concerning the impact and development of AI systems and the intersection with the protection of fundamental rights has been proposed by the Ad-Hoc Committee on Artificial Intelligence of the Council of Europe, better known as ‘CAHAI’. The work of CAHAI will be analysed in section 5.1 of this paper.
At the international level, there are a number of international and regional instruments that are used to investigate “cyber dependent crime”, “cyber enabled crime” and “computer supported crime”. Footnote 32 This paper focuses only on three major instruments of the Council of Europe, which are applicable to criminal conduct involving computer and information systems, the exploitation and abuse of children, and violence against women committed through information and computer systems:
The Convention on Cybercrime, better known as ‘the Budapest Convention’;
The Convention on Protection of Children against Sexual Exploitation and Sexual Abuse, better known as ‘the Lanzarote Convention’; and
The Convention on preventing and combating violence against women and domestic violence, better known as ‘the Istanbul Convention’.
The Council of Europe’s Budapest Convention on Cybercrime is the only international treaty that criminalizes conduct committed through computer and information systems. It contains substantive and procedural provisions for the investigation, prosecution and adjudication of crimes committed through computer systems and information technologies. Footnote 33 The Budapest Convention is mainly used as a vehicle for international cooperation to investigate and prosecute cybercrime among its now 66 State Parties, which include many countries outside Europe. Footnote 34
The Cybercrime Convention Committee (T-CY), which is formed by State Parties, country observers invited to accede to the Budapest Convention and ad-hoc participants, is the entity responsible inter alia for conducting assessments of the implementation of the provisions of the Budapest Convention, as well as for adopting opinions and recommendations regarding the interpretation and implementation of its main provisions. Footnote 35
During the 2021 Octopus Conference on Cooperation against Cybercrime in November 2021, which marked the 20th anniversary of the Budapest Convention, the organizers announced that the Committee of Ministers of the Council of Europe had approved the adoption of the Second Additional Protocol to the Budapest Convention on enhanced cooperation and the disclosure of electronic evidence, as originally adopted by the 24th Plenary Session of the T-CY Committee in May 2021. The text of the Second Additional Protocol will be officially opened for signature among State Parties to the Budapest Convention in the summer of 2022. Footnote 36
The Second Additional Protocol to the Budapest Convention on enhanced cooperation and the disclosure of electronic evidence regulates inter alia how information and electronic evidence, including subscriber information, traffic data and content data, may be ordered and preserved in criminal investigations among State Parties to the Budapest Convention. It provides a legal basis for the disclosure of information concerning the registration of domain names from domain name registries and registrars, and addresses other key aspects of cross-border investigations, including mutual legal assistance procedures, direct cooperation with service providers, disclosure of data in emergency situations, safeguards for transborder access to data, and joint investigation teams. Footnote 37
Although the T-CY Committee has not yet fully explored how the Budapest Convention and its First Additional Protocol on xenophobia and racism may apply in the context of technologies and systems based on AI, it is worth mentioning that the Budapest Convention was drafted with broad consideration of the principle of technological neutrality, precisely because its original drafters anticipated how the cybercrime threat landscape would likely evolve and change in the future. Footnote 38
The Budapest Convention contains only a minimum of definitions; however, it criminalizes a number of conducts and typifies many offenses concerning computer- and content-related crimes that may also be applicable to crimes committed through the use of AI systems.
During the 2018 Octopus Conference on Cooperation against Cybercrime, the Directorate General of Human Rights and Rule of Law of the Council of Europe convened a panel on AI and cybercrime Footnote 39 where representatives of the CoE presented its early activities and findings on AI policy. Footnote 40 Although the panel presentations were largely descriptive of the technical terminology used in the field of AI at that time, some speakers highlighted and discussed some of the challenges that AI poses to law enforcement authorities, for instance the criminalization of video and document forgery and how authorities could address the challenge of obtaining and preserving electronic evidence for use in court. Footnote 41
The 2021 Octopus Conference on Cooperation against Cybercrime, held fully online from 16-18 November 2021 due to the COVID-19 situation, included a panel on “Artificial Intelligence, cybercrime and electronic evidence”. Footnote 42 This panel discussed complex questions concerning criminal liability and the trustworthiness of evidence from AI systems in auditing and driving automation and assistance, as well as other relevant aspects concerning the harms and threats of misinformation and disinformation developed by AI systems, and effective responses, countermeasures and technical solutions from the private sector.
AI and cybercrime are relevant aspects that need further analysis and detailed discussion among the T-CY and State Parties to the Budapest Convention, particularly since there has been an increase in cases concerning the misuse of AI technologies by cybercriminals as vehicles to launch cyberattacks and commit criminal offenses against individuals in cyberspace. Questions such as who will bear responsibility for conduct committed through the use of algorithms and machine learning, and what the liability threshold among State Parties should be, need further discussion and clarification, since the regulation of criminal liability differs significantly among the legal systems of many countries. The development of strategic partnerships in other regions of the world to counter attacks based on AI systems should also be explored.
The Council of Europe’s Lanzarote Convention is an international treaty that contains substantive legal measures for the protection of children from sexual violence, including the sexual exploitation and abuse of children online. Footnote 43 The convention harmonizes minimum legal standards at the domestic level to combat crimes against children and provides measures for international cooperation to counter the sexual exploitation of children. The Lanzarote Convention requires the current 48 State Parties to offer a holistic response to sexual violence against children through the “4Ps approach”: Prevention, Protection, Prosecution and Promotion of national and international cooperation. Footnote 44 Monitoring and implementation of the Lanzarote Convention is conducted by the Committee of the Parties, also known as the ‘Lanzarote Committee’. This committee is formed by State Parties and is primarily responsible for monitoring how State Parties put legislation, policies and countermeasures into practice, including organizing capacity-building activities to exchange information and best practices concerning the implementation of the Lanzarote Convention across State Parties. Footnote 45
Like the T-CY, the ‘Lanzarote Committee’ has not yet fully explored how the substantive and procedural criminal law provisions of the Lanzarote Convention may apply in the context of the use of AI systems for criminal purposes. This needs to be further discussed among State Parties, not only to share and diffuse knowledge on current trends among Parties to that treaty, but also to help identify illicit conduct and the abuse and exploitation of children through AI systems, as well as to analyse positive uses of AI technologies for the prevention of crimes concerning the protection of children online.
The Istanbul Convention is another Council of Europe treaty, the main purpose of which is to protect women against all forms of violence, including domestic violence, and to counter and eliminate such violence. Footnote 46 The Istanbul Convention consists of four main pillars: (i) prevention, (ii) protection of victims, (iii) prosecution of offenders, and (iv) implementation of comprehensive and coordinated policies to combat violence against women at all levels of government. The Istanbul Convention establishes an independent group of experts known as GREVIO (Group of Experts on Action against Violence against Women and Domestic Violence), which is responsible for monitoring the effective implementation of the provisions of the Istanbul Convention by the now 34 State Parties. Footnote 47
The Istanbul Convention does not contain specific provisions on violence committed through the use of information technologies; however, the GREVIO is currently analysing approaches to extend its application to illegal conduct committed through computer and information systems within the national legal frameworks of State Parties. Footnote 48 During its twenty-fifth meeting on 20 October 2021, the GREVIO adopted a General Recommendation on the Digital Dimension of Violence against Women. Footnote 49 The Recommendation addresses inter alia the application of the general provisions of the Istanbul Convention to conduct and crime typologies committed against women in cyberspace, and proposes specific actions based on the four pillars of the Istanbul Convention: prevention, protection, prosecution and coordinated policies.
To promote the scope of the adopted General Recommendation, the GREVIO held a conference in Strasbourg on November 24, 2021, featuring a keynote address by the Commissioner for Human Rights of the Council of Europe and presentations by the President of the GREVIO and the Chair of the Committee of the Parties to the Istanbul Convention, followed by a panel discussion with representatives of EU member states, the internet industry and civil society. Footnote 50 Among the relevant points made during the panel discussion were how the Recommendation may help advance legal and policy developments, improve attention to victims of current forms of cyberviolence, further international cooperation, and contribute to the general understanding of the scope of the provisions of the Istanbul Convention and other key instruments of the Council of Europe, including the Budapest Convention and the Lanzarote Convention, in relation to digital violence against women. Footnote 51
The Cybercrime Convention Committee (T-CY) issued a comprehensive report titled Mapping Study on Cyberviolence, with recommendations adopted by the T-CY on 9 July 2018. Footnote 52
The mapping study developed a working definition of “cyberviolence” Footnote 53 and described how the different forms of cyberviolence may be classified and criminalized under the Budapest, Lanzarote and Istanbul Conventions. According to the mapping study, “not all forms of violence are equally severe and not all of them necessarily require a criminal law solution but could be addressed with a combination of preventive, educational, protective and other measures” . The main conclusions of the Cybercrime Convention Committee (T-CY) in the Mapping Study on Cyberviolence were:
the Budapest Convention and its Additional Protocol on Racism and Xenophobia cover and address some types of cyberviolence;
the procedural powers and the provisions on international cooperation of the Budapest Convention will help to support the investigation of cyberviolence and the securing and preservation of digital evidence; and
the Budapest, Istanbul and Lanzarote Conventions complement each other and synergies between them should be promoted. These synergies may include: raising further awareness and carrying out capacity-building activities among Parties to the said treaties; encouraging Parties to the Lanzarote and Istanbul Conventions to introduce the procedural powers contained in the Budapest Convention (Arts. 16-21) into domestic law and to consider becoming Parties to the Budapest Convention so as to facilitate international cooperation on electronic evidence in relation to crimes involving cyberviolence; and encouraging Parties to the Budapest Convention to implement the provisions on psychological violence, stalking and sexual harassment of the Istanbul Convention, as well as the provisions on online sexual exploitation and abuse of children of the Lanzarote Convention, among others. Footnote 54
Cyberviolence and crimes concerning the abuse and exploitation of children online require strategic cooperation among different stakeholders. Other key institutions at the regional level, like the European Commission, have also explored how AI systems may help to identify, categorise and remove child sexual abuse images and to minimise the exposure of human investigators to distressing images, as well as the important role of internet hotlines in facilitating the reporting process. Footnote 55
5.1 Council of Europe CAHAI
The Ad-Hoc Committee on Artificial Intelligence of the Council of Europe (CAHAI) Footnote 56 was established by the Committee of Ministers during its 1353rd meeting on 11 September 2019. Footnote 57 The specific task of CAHAI is “to complete the feasibility study and produce the potential elements on the basis of broad multi-stakeholder consultations, of a legal framework for the development, design and application of artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law.”
The work of CAHAI is relevant because it establishes a multi-stakeholder group in which global experts may provide their views on the development of AI policies, put forward meaningful proposals to ensure the application of international treaties and technical standards on AI, and submit proposals for the creation of a future legal instrument regulating AI while ensuring the protection of the fundamental rights, rule of law and democracy principles contained in relevant instruments of the Council of Europe, such as Convention 108+ and the Budapest, Lanzarote and Istanbul Conventions, among others. Footnote 58
The work of CAHAI will impact the 47 member states and observer countries of the Council of Europe, particularly state institutions, including national parliamentarians and policy makers, who are responsible for implementing international treaties in their national legal frameworks. The inclusion and participation of relevant stakeholders from different nations will therefore play a decisive role in the future implementation of a global treaty on AI in the coming years.
The European Parliament (EP) is perhaps the most proactive legislative and policy-making institution worldwide. The EP has a Centre for Artificial Intelligence (C4AI), established in December 2019. Footnote 59 The EP has committees that analyse the policy-related impact of AI in many different areas, including cybersecurity, defence, predictive policing and criminal justice. The most active is the Special Committee on Artificial Intelligence in a Digital Age (AIDA Committee), Footnote 60 which has organized many hearings and workshops with experts and stakeholders on AI from different regions of the world to hear views and opinions on the Regulation proposal for Artificial Intelligence Systems . Footnote 61
According to the President of the AIDA Committee, “the use of AI in law enforcement is a political decision and not a technical one, our duty is to apply the political worldview to determine what are the allowed uses of AI and under which conditions” . Footnote 62
As a result of the existing dangers and risks posed by the use of AI systems across Europe, the European Parliament adopted a resolution on 6 October 2021 that calls for a permanent ban on AI systems which allow for the use of automated recognition of individuals by law enforcement in public spaces. Further, the resolution calls for a moratorium on the deployment of facial recognition systems for law enforcement purposes and a ban on predictive policing based on behavioural data and social scoring in order to ensure the protection of fundamental rights of European citizens. Footnote 63
The Committee on Civil Liberties, Justice and Home Affairs of the European Parliament has also conducted relevant work on AI and criminal justice. On February 20, 2020, said committee conducted a public hearing on “Artificial Intelligence in Criminal Law and its use by the Police and Judicial Authorities” where relevant opinions and recommendations of experts and international organizations were discussed and presented. Footnote 64
Further, the AIDA Committee of the European Parliament held a two-day public hearing with the AFET Committee on 1 and 4 March 2021. The first hearing was on “AI Diplomacy and Governance in a Global Setting: Toward Regulatory Convergence”, and the second on “AI, Cybersecurity and Defence”. Footnote 65 Many relevant aspects of AI policy were mentioned during the hearings, including support for a transatlantic dialogue and cooperation on AI, the development of ethical frameworks and standards, the development of a shared system of norms, respect for fundamental rights, diplomacy and capacity building, among others. Although the importance of AI for cybersecurity in the defence realm was mentioned, including how AI might help to mitigate cyberattacks and protect critical infrastructure, there was no specific mention of how the current international treaties on cybercrime and national legal frameworks may coexist with a future treaty on AI to counter cybercrime more effectively.
The dialogue and engagement of the different committees of the European Parliament on AI policy is key for the future implementation of policies in the criminal justice area concerning the use and deployment of AI systems and applications. The European Parliament should continue to promote further dialogues and activities with other international organizations like the Council of Europe and the OECD, as well as with national parliamentarians around the world to help them understand the dimensions and implications of creating regulations and policies on AI to specifically counter cybercrime.
The Centre for Artificial Intelligence and Robotics of the United Nations Interregional Crime and Justice Research Institute (UNICRI), a research arm of the United Nations, is very active in organizing workshops and producing information materials and reports to demystify the world of robotics and AI and to facilitate an in-depth understanding, among law enforcement officers, policy makers, practitioners, academia and civil society, of the crimes and threats conducted through AI systems. In 2019, UNICRI and INTERPOL drafted the report “ Artificial Intelligence and Robotics for Law Enforcement” , Footnote 66 which draws upon the discussions of a workshop held in Singapore in July 2018. Among the main findings of UNICRI and INTERPOL’s report are:
“AI and Robotics are new concepts for law enforcement and there are expertise gaps that should be filled to avoid law enforcement falling behind.”

“Some countries have explored further than others and a variety of AI techniques are materializing according to different law enforcement authorities. There is, however, a need for greater international coordination on this issue.”
The mandate of the Centre for Artificial Intelligence and Robotics of UNICRI is quite broad. It covers policy-related aspects of AI in the field of criminal justice, including areas such as cybersecurity, autonomous weapons, self-driving vehicles and autonomous patrol systems. Every year UNICRI organizes the Global Meeting on Artificial Intelligence for Law Enforcement , an event that discusses relevant AI developments with experts and stakeholders from different sectors and countries to enhance and improve the capabilities of law enforcement authorities and the criminal justice system in the use and deployment of AI technologies. Footnote 67
The Centre for Artificial Intelligence and Robotics of UNICRI is currently working with a group of experts from INTERPOL, the European Commission and other relevant institutions and stakeholders on the development of a Toolkit for Responsible AI Innovation in Law Enforcement . The toolkit will provide practical guidance for law enforcement agencies around the world on the use of AI in a trustworthy, lawful and responsible manner, offering practical insights, use cases, principles, recommendations, best practices and resources to support law enforcement agencies in using AI technologies and applications. Footnote 68
The use of AI systems across different sectors is an ongoing trend, and this includes the authorities of the criminal justice system, which have realized the benefits and advantages of using this technology. First, national law enforcement authorities involved in the investigation of cybercrime are not yet fully prepared to deal with the technical and legal dimensions of AI when it is used for disruptive or malicious purposes. Further, there is not yet sufficient evidence to determine whether law enforcement authorities around the world are well equipped and trained to gather cross-border evidence in national investigations where an AI system was involved in the commission or perpetration of an illicit conduct.
Second, coordination and cooperation with service providers and companies that manage and operate AI systems and services is crucial to help determine their abuse and misuse by perpetrators. However, these tasks bring a number of technical and legal challenges, since most AI systems rely on an internet connection to function, and subscriber and traffic data is often needed to conduct an investigation. Global service providers will therefore also have an important role to play in identifying and locating cybercriminals, a situation that requires well-coordinated efforts, measures and responses between law enforcement authorities and private-sector entities, based on international treaties and national laws. The need for further strategic partnerships to counter cybercrime is more important than ever.
The future work of international organizations like UNICRI and the Council of Europe, through CAHAI and the T-CY Committee of the Budapest Convention, will be very relevant in guiding policy makers and law enforcement authorities in the implementation of future national policies on AI. CAHAI may fill the gap in international fora concerning discussions of AI to specifically counter cybercrime, based on the current standards of the Council of Europe like the Budapest, Lanzarote and Istanbul Conventions, as well as on the emerging practices of member states to counter cyber-enabled crimes.
The creation of national taskforces on cybercrime (composed of law enforcement authorities, representatives of the judiciary, AI technology developers and global service providers) may serve as a relevant vehicle to coordinate and tackle illicit conducts concerning the misuse and abuse of AI technologies. These taskforces may be articulated in the context of the national strategies on AI and should be linked to the tasks of the criminal justice authorities to specifically counter cybercrime.
Burgess, Matt, “Police built an AI to predict violent crime. It was seriously flawed”, WIRED, August 6, 2020, available at: https://www.wired.co.uk/article/police-violence-prediction-ndas .
European Commission, “Liability for Artificial Intelligence and other emerging digital technologies”, Report from the Experts Group on Liability and New Technologies-New Technologies Formation, European Union 2019, available at: https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence . See also: European Parliament Research Service (EPRS), “The European added value of a common EU approach to liability rules and insurance for connected and autonomous vehicles” Study published by the European Added Value Unit, February 2018, available at: https://www.europarl.europa.eu/RegData/etudes/STUD/2018/615635/EPRS_STU(2018)615635_EN.pdf .
MIT Technology Review, “Transforming the Energy Industry with AI”, January 21, 2021, available at: https://www.technologyreview.com/2021/01/21/1016460/transforming-the-energy-industry-with-ai/ .
World Health Organization (WHO), “WHO reports fivefold increase in cyberattacks, urges vigilance”, April 23, 2020, available at: https://www.who.int/news/item/23-04-2020-who-reports-fivefold-increase-in-cyber-attacks-urges-vigilance .
The New York Times, “Cyber Attack Suspected in German Woman’s Death”, September 18, 2020, available at: https://www.nytimes.com/2020/09/18/world/europe/cyber-attack-germany-ransomeware-death.html .
Supply Chain, “Lessons Learned from the Vaccine Supply Chain Attack”, January 16, 2021, available at: https://www.supplychaindigital.com/supply-chain-risk-management/lessons-learned-vaccine-supply-chain-attack .
Prakarsh and Riya Khanna, “Artificial Intelligence and Cybercrime- A curate’s Egg”, Medium, June 14, 2020, available at: https://medium.com/the-%C3%B3pinion/artificial-intelligence-and-cybercrime-a-curates-egg-2dbaee833be1 .
INTSIGHTS, “The Dark Side of Latin America: Cryptocurrency, Cartels, Carding and the Rise of Cybercrime”, p. 6, available at: https://wow.intsights.com/rs/071-ZWD-900/images/Dark%20Side%20of%20Latin%20America.pdf . See also: “The Next El Chapo is Coming for your Smartphone”, June 26, 2020, available at: https://www.ozy.com/the-new-and-the-next/the-next-el-chapo-might-strike-your-smartphone-and-bank/273903/ .
Malwarebytes Lab, “When Artificial Intelligence goes awry: separating science fiction from fact”, without publication date, available at: https://resources.malwarebytes.com/files/2019/06/Labs-Report-AI-gone-awry.pdf .
SIEMENS Energy, “Managed Detection and Response Service”, 2020, available at: https://assets.siemens-energy.com/siemens/assets/api/uuid:a95b9cd3-9f4d-4a54-8c43-77fbdb6f418f/mdr-white-paper-double-sided-200930.pdf .
POLITICO, “Automated racism: How tech can entrench bias”, March 2, 2021, available at: https://www.politico.eu/article/automated-racism-how-tech-can-entrench-bias/ .
For a discussion on discrimination caused by algorithmic decision making on AI, see ZUIDERVEEN BORGESIUS, Frederik, “Discrimination, Artificial Intelligence and Algorithmic decision making”. Paper published by the Directorate General of Democracy of the Council of Europe, 2018, available at: https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decision-making/1680925d73 .
See the Special Report on Facial Recognition of the Center for AI and Digital Policy (CAIDP) that contains a summary of key references on this topic contained in the 2020 Report on Artificial Intelligence and Democratic Values / The AI Social Contract Index 2020 prepared by CAIDP, December 2020, available at: https://caidp.dukakis.org/aisci-2020/ .
In October 2021, the European Parliament adopted a resolution to ban the use of facial recognition technologies in public spaces by law enforcement authorities to ensure the protection of fundamental rights. See European Parliament, “Use of Artificial Intelligence by the police: MEPs oppose mass surveillance”. LIBE Plenary Session press release, October 6, 2021, available at: https://www.europarl.europa.eu/news/en/press-room/20210930IPR13925/use-of-artificial-intelligence-by-the-police-meps-oppose-mass-surveillance .
BBC, “What are ‘bots’ and how can they spread fake news?”, available at: https://www.bbc.co.uk/bitesize/articles/zjhg47h .
FORBES, “Fake News is Rampant, Here is How Artificial Intelligence Can Help” , January 21, 2021, available at: https://www.forbes.com/sites/bernardmarr/2021/01/25/fake-news-is-rampant-here-is-how-artificial-intelligence-can-help/?sh=17a6616e48e4 .
European Commission, “Tackling online disinformation”, 18 January 2021, available at: https://ec.europa.eu/digital-single-market/en/tackling-online-disinformation . For a general review of policy implications in the UK concerning the use of AI and content moderation, see Cambridge Consultants, “Use of AI in Online Content Moderation” . 2019 Report produced on behalf of OFCOM, available at: https://www.ofcom.org.uk/__data/assets/pdf_file/0028/157249/cambridge-consultants-ai-content-moderation.pdf .
Deepfakes are based on AI deep learning algorithms, an area of machine learning that applies neural network simulation to massive data sets to create fake videos of real people. Deepfakes are produced by trained algorithms that recognize data patterns as well as human facial movements and expressions, and can match voices to imitate the real voice and gestures of an individual. See: European Parliamentary Research Service, “What if deepfakes made us doubt everything we see and hear” (Science and Technology podcast), available at: https://epthinktank.eu/2021/09/08/what-if-deepfakes-made-us-doubt-everything-we-see-and-hear/ . Like many technologies, deepfakes can be used as a tool for criminal purposes such as fraud, extortion, psychological violence and discrimination against women and minors, see: MIT Technology Review, “A deepfake bot is being used to “undress” underage girls”, October 20, 2020, available at: https://bit.ly/3qj1qWx .
For specific information regarding the work of the US government to counter the use of deepfakes, see CNN, “ Inside the Pentagon’s race against deepfake videos” , available at: https://edition.cnn.com/interactive/2019/01/business/pentagons-race-against-deepfakes/ .
EURACTIV, “EU police recommend new online ‘screening tech’ to catch deepfakes”, November 20, 2020, available at: https://www.euractiv.com/section/digital/news/eu-police-recommend-new-online-screening-tech-to-catch-deepfakes/ .
The Verge, “Watch Jordan Peele use AI to make Barack Obama deliver a PSA about fake news”, April 17, 2018, available at: https://www.theverge.com/tldr/2018/4/17/17247334/ai-fake-news-video-barack-obama-jordan-peele-buzzfeed .
Wall Street Journal, “Fraudsters Use AI to Mimic CEO’s Voice in Unusual Cybercrime Case”, August 30, 2019, available at: https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402 .
GIZMODO, “Bank Robbers in the Middle East Reportedly ‘Cloned’ Someone’s Voice to Assist with $35 Million Heist”, October 14, 2021, available at: https://gizmodo.com/bank-robbers-in-the-middle-east-reportedly-cloned-someo-1847863805 .
The EC3 of Europol has developed good capacities and practice with other countries in the deployment of joint investigation teams to counter organized crime, including cybercrime. See the section on Joint Investigation Teams of Europol at: https://www.europol.europa.eu/activities-services/joint-investigation-teams .
EUROPOL (EC3), “Internet Organised Crime Threat Assessment 2020” (IOCTA 2020 Report), p. 18, available at: https://www.europol.europa.eu/activities-services/main-reports/internet-organised-crime-threat-assessment-iocta-2020 . The Internet Organised Crime Threat Assessment 2021 (IOCTA 2021 Report) was published on 11 November 2021. This year’s report does not make any novel references to the misuse and abuse of AI systems for criminal purposes, available at: https://www.europol.europa.eu/activities-services/main-reports/internet-organised-crime-threat-assessment-iocta-2021 .
IOCTA 2020 Report, Op. cit . note 25, p. 18.
Trend Micro Research, EUROPOL EC3 and UN Interregional Crime and Justice Research Institute (UNICRI), Malicious Uses and Abuses of Artificial Intelligence , 19 November 2020, available at: https://www.europol.europa.eu/publications-documents/malicious-uses-and-abuses-of-artificial-intelligence .
This report was also presented in a workshop on cybercrime, e-evidence and artificial intelligence during the 2021 Octopus Conference on Cooperation against Cybercrime organized by the Council of Europe on November 17, 2021, where the representatives of each organization highlighted the main aspects and features of the report, including current trends and concrete examples of misuse of AI technologies. The presentation is available at: https://rm.coe.int/edoc-1193149-v1-coe-ai-ppt/1680a4892f . The Digital Services Act establishes new rules and requirements for intermediary service providers, which include hosting providers and online platforms. This regulation covers inter alia rules on liability for online intermediary service platforms, establishes internal complaint-handling systems and implements measures against illegal online content. The Digital Services Act is currently a draft proposal under discussion between the European Parliament and the Council of the EU, and it may take some years until it is finally approved, available at: https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package .
See Proposal for a Regulation of the European Parliament and the Council laying down harmonized rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, Brussels 21.4.2021, available at: https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206&from=EN .
See: European Commission, “Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence”, Brussels, April 21, 2021, available at: https://ec.europa.eu/commission/presscorner/detail/en/ip_21_1682 . See also the website of the European Commission that explains the approach of the EC on AI and the relevant milestones in this area, available at: https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence .
Among those instruments are: (i) The United Nations Convention against Transnational Organized Crime and its Protocols ( Palermo Convention ); (ii) The Council of Europe Convention on Cybercrime ( Budapest Convention ) and its Additional Protocol concerning the criminalisation of acts of a racist and xenophobic nature committed through computer systems; (iii) The Council of Europe Convention on Protection of Children against Sexual Exploitation and Sexual Abuse ( Lanzarote Convention ); (iv) The African Union Convention on Cyber Security and Personal Data Protection ( Malabo Convention ); (v) Directive 2013/40/EU on attacks against information systems; (vi) Directive 2011/92/EU on combating the sexual abuse and exploitation of children and child pornography, among others.
The Budapest Convention requires that States Parties amend their substantive and procedural criminal legislation to make it consistent with the substantive and procedural criminal law provisions of that treaty. Considering that cybercrime has a transnational dimension, the Budapest Convention also requires that countries implement international cooperation measures to supplement or complement existing ones, particularly when a country has no mutual assistance and cooperation treaties in criminal matters in place, as well as to equip investigative and law enforcement authorities with the necessary tools and procedural mechanisms to conduct cybercrime investigations, including measures concerning: (i) expedited preservation of stored computer data, (ii) disclosure of preserved traffic data, (iii) mutual assistance measures regarding access to stored computer data, (iv) trans-border access to stored computer data, (v) mutual assistance regarding real-time collection of traffic data, (vi) mutual assistance regarding the interception of content data, and (vii) creation of a 24/7 network or point of contact to centralize investigations and procedures related to requests for data and mutual assistance concerning cybercrime investigations with other 24/7 points of contact.
See the Budapest Convention Chart of Signatures and Ratifications at: https://www.coe.int/en/web/conventions/full-list/-/conventions/treaty/185/signatures?p_auth=yUQgCmNc .
Cybercrime Convention Committee, “T-CY Rules of Procedure. As revised by T-CY on 16 October 2020”, Strasbourg, 16 October 2020, available at: https://rm.coe.int/t-cy-rules-of-procedure/1680a00f34 .
Council of Europe, “Second Additional Protocol to the Budapest Convention adopted by the Committee of Ministers of the Council of Europe”, Strasbourg, 17 November 2021, available at: https://www.coe.int/en/web/cybercrime/-/second-additional-protocol-to-the-cybercrime-convention-adopted-by-the-committee-of-ministers-of-the-council-of-europe .
See the text of the Explanatory Report of the Second Additional Protocol to the Budapest Convention drafted by Cybercrime Convention Committee (T-CY) at: https://search.coe.int/cm/pages/result_details.aspx?objectid=0900001680a48e4b .
See the Explanatory Report to the Convention on Cybercrime at: https://rm.coe.int/CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId=09000016800cce5b .
The Conference program of the 2018 Octopus conference on cooperation against cybercrime is available at: https://rm.coe.int/3021-90-octo18-prog/16808c2b04 .
See: Activities of the Council of Europe on Artificial Intelligence (AI), 9 May, 2018, available at: https://rm.coe.int/cdmsi-2018-misc8-list-ai-projects-9may2018/16808b4eac .
See the presentations of this panel at the Plenary Closing session of the 2018 Octopus Conference, available at: https://www.coe.int/en/web/cybercrime/resources-octopus-2018 .
The presentation and materials of this panel are available at: https://www.coe.int/en/web/cybercrime/workshop-cybercrime-e-evidence-and-artificial-intelligence .
The Lanzarote Convention entered into force on 1 July 2010, available at: https://www.coe.int/en/web/conventions/full-list/-/conventions/treaty/201/signatures . Among the conducts that the Lanzarote Convention requires States Parties to criminalize are: (i) child sexual abuse; (ii) sexual exploitation through prostitution; (iii) child sexual abuse material; (iv) exploitation of a child in sexual performances; (v) corruption of children, and (vi) solicitation of children for sexual purposes.
See the Booklet of the Lanzarote Convention, available at: https://rm.coe.int/lanzarote-convention-a-global-tool-to-protect-children-from-sexual-vio/16809fed1d .
The Rules of procedure, adopted documents, activity reports and the Meetings of the ‘Lanzarote Committee’ are available at: https://www.coe.int/en/web/children/lanzarote-committee .
The Istanbul Convention entered into force on 1 August 2014 and it has been ratified by 34 countries. See the chart of signatures and ratifications at: https://www.coe.int/en/web/conventions/full-list/-/conventions/treaty/210/signatures?p_auth=OwhAGtPd .
The Rules of procedure and adopted documents of the GREVIO are available at: https://www.coe.int/en/web/istanbul-convention/grevio .
See the presentations of the webinar, “Cyberviolence against Women” organized by the CyberEast Project of the Council of Europe, 12 November, 2020, available at: https://www.coe.int/en/web/cybercrime/cyberviolence-against-women .
The Text of the GREVIO General Recommendation No. 1 on the digital dimension of violence against women adopted on 20 October 2021 is available at: https://rm.coe.int/grevio-rec-no-on-digital-violence-against-women/1680a49147 .
Council of Europe, “Launch Event: Combating violence against women in a digital age-utilizing the Istanbul Convention”, 24 November 2021, available at: https://www.coe.int/en/web/istanbul-convention/launching-event-of-grevio-s-first-general-recommendation-on-the-digital-dimension-of-violence-against-women .
Council of Europe Media Release, “New Council of Europe Recommendation tackles the ‘digital dimension” of violence against women and girls”, Strasbourg, 24 November, 2021, available at: https://search.coe.int/directorate_of_communications/Pages/result_details.aspx?ObjectId=0900001680a4a67b .
Council of Europe Cybercrime Convention Committee (T-CY), “Mapping Study on Cyberviolence” with recommendations adopted by the T-CY on 9 July 2018, available at: https://rm.coe.int/t-cy-2017-10-cbg-study-provisional/16808c4914 .
The definition is an adaptation of the definition of violence against women contained in Art. 3 of the Istanbul Convention to the cyber context as follows: “ Cyberviolence is the use of computer systems to cause, facilitate, or threaten violence against individuals that results in, or is likely to result in, physical, sexual, psychological or economic harm or suffering and may include the exploitation of the individual’s circumstances, characteristics or vulnerabilities” .
“Mapping Study on Cyberviolence”, Op. cit . note 52, pp. 42-43.
European Commission, “Exploring potential of AI in fight against child online abuse”, Event report 11 June 2020, available at: https://ec.europa.eu/digital-single-market/en/news/exploring-potential-ai-fight-against-child-online-abuse .
CAHAI’s composition consists of three main groups, each composed of up to 20 experts appointed by Member States, as well as observers and participants. The mandate of the Policy Development Group (CAHAI-PDG) is to develop the feasibility study of a legal framework for artificial intelligence applications, building upon the mapping work already undertaken by the CAHAI; to prepare key findings and proposals on policy and other measures to ensure that international standards and international legal instruments in this area are up-to-date and effective; and to prepare proposals for a specific legal instrument regulating artificial intelligence. The Consultation and Outreach Group (CAHAI-COG) is responsible for taking stock of the Secretariat’s analysis of responses to online consultations and of ongoing developments and reports which are directly relevant to the tasks of CAHAI’s working groups. The Legal Frameworks Group (CAHAI-LFG) is responsible for preparing key findings and proposals on possible elements and provisions of a legal framework with a view to drafting legal instruments, for consideration and approval by the CAHAI, taking into account the scope of existing legal instruments applicable to artificial intelligence and the policy options set out in the feasibility study approved by the CAHAI. Further information on the composition of the CAHAI working groups, the plenary meetings and the documents issued by the three working groups is available at: https://www.coe.int/en/web/artificial-intelligence/cahai .
The terms of reference of CAHAI are available at: https://search.coe.int/cm/Pages/result_details.aspx?ObjectId=09000016809737a1 .
The Final Virtual Plenary Meeting of CAHAI from 30.11.2021 to 02.12.2021 will facilitate meaningful discussions towards the adoption of a document outlining the possible elements of a legal framework on AI, which may include binding and non-binding standards based on the Council of Europe’s standards on human rights, democracy and rule of law. See Council of Europe, “The CAHAI to hold its final meeting”, Strasbourg, 24 November 2021, available at: https://www.coe.int/en/web/artificial-intelligence/-/cahai-to-hold-its-final-meeting .
European Parliament, “STOA Centre for Artificial Intelligence (C4AI)”. The C4AI produces studies, organises public events and acts as a platform for dialogue and information exchange, coordinating efforts to influence global AI standard-setting. Available at: https://www.europarl.europa.eu/stoa/en/centre-for-AI .
The AIDA Committee website is available at: https://www.europarl.europa.eu/committees/en/aida/home/highlights .
See supra note 30.
See Dragos Tudorache Plenary Speech on Artificial Intelligence of 4 October 2021, available at: https://www.youtube.com/watch?v=V9y5gt39AD0 .
European Parliament News, “Use of artificial intelligence by the police: MEPs oppose mass surveillance”. Press release of the Plenary Session, October 6, 2021, available at: https://www.europarl.europa.eu/news/en/press-room/20210930IPR13925/use-of-artificial-intelligence-by-the-police-meps-oppose-mass-surveillance and Eurocadres, “European Parliament adopts resolution on the use of AI in law enforcement”, October 6, 2021, available at: https://www.eurocadres.eu/news/european-parliament-adopts-resolution-on-the-use-of-ai-in-law-enforcement/ .
European Parliament. “MEPs to look into Artificial Intelligence in criminal law on Thursday”, February 18, 2020, available at: https://www.europarl.europa.eu/news/en/press-room/20200217IPR72718/meps-to-look-into-artificial-intelligence-in-criminal-law-on-thursday .
European Parliament, Special Committee on Artificial Intelligence in a Digital Age (AIDA), “Joint hearing on the external policy dimension of AI”, March 1 and 4, 2021, available at: https://www.europarl.europa.eu/meetdocs/2014_2019/plmrep/COMMITTEES/AIDA/DV/2021/03-01/Final_Programme_externalpolicydimensionofAI_V26FEB_EN.pdf .
UNICRI and INTERPOL, “Artificial Intelligence and Robotics for Law Enforcement”, 2019, available at: https://issuu.com/unicri/docs/artificial_intelligence_robotics_la/4?ff .
UNICRI, “2nd INTERPOL, UNICRI Global Meeting on Artificial Intelligence for Law Enforcement”, Singapore, July 3, 2019, available at: http://www.unicri.it/news/article/ai_unicri_interpol_law_enforcement .
UNICRI, “The European Commission provides support to UNICRI for the Development of the Toolkit for Responsible AI Innovation in Law Enforcement”, The Hague, Monday November 1, 2021, available at: http://www.unicri.it/index.php/News/EC-UNICRI-agreement-toolkit-responsible-AI .
Open Access funding enabled and organized by Projekt DEAL.
Authors and affiliations.
Center for AI and Digital Policy (CAIDP), Washington (DC), USA
Cristos Velasco
DHBW Cooperative State University in Mannheim and Stuttgart, Stuttgart, Germany
Mexico City, Mexico
Correspondence to Cristos Velasco .
Publisher’s note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
C. Velasco is a Research Fellow and Outreach Committee Board Member of the Center for AI and Digital Policy (CAIDP), and a Law Lecturer on “Information Technology Law” and “International Business Law & International Organizations” at the DHBW Cooperative State University in Mannheim and Stuttgart.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .
Velasco, C. Cybercrime and Artificial Intelligence. An overview of the work of international organizations on criminal justice and the international applicable instruments. ERA Forum 23, 109–126 (2022). https://doi.org/10.1007/s12027-022-00702-z
Accepted: 24 January 2022
Published: 22 February 2022
Issue Date: May 2022
Jul 30, 2024 | Brad Smith - Vice Chair & President
AI-generated deepfakes are realistic, easy for nearly anyone to make, and increasingly being used for fraud, abuse, and manipulation – especially to target kids and seniors. While the tech sector and non-profit groups have taken recent steps to address this problem, it has become apparent that our laws will also need to evolve to combat deepfake fraud. In short, we need new laws to help stop bad actors from using deepfakes to defraud seniors or abuse children.
While we and others have rightfully been focused on deepfakes used in election interference, the broad role they play in these other types of crime and abuse needs equal attention. Fortunately, members of Congress have proposed a range of legislation that would go a long way toward addressing the issue, the Administration is focused on the problem, groups like AARP and NCMEC are deeply involved in shaping the discussion, and industry has worked together and built a strong foundation in adjacent areas that can be applied here.
One of the most important things the U.S. can do is pass a comprehensive deepfake fraud statute to prevent cybercriminals from using this technology to steal from everyday Americans.
We don’t have all the solutions or perfect ones, but we want to contribute to and accelerate action. That’s why today we’re publishing a 42-page report on what has grounded our understanding of the challenge, along with a comprehensive set of ideas, including endorsements of the hard work and policies of others. Below is the foreword I’ve written to what we’re publishing.
____________________________________________________________________________________
The below is written by Brad Smith for Microsoft’s report Protecting the Public from Abusive AI-Generated Content. Find the full copy of the report here: https://aka.ms/ProtectThePublic
“The greatest risk is not that the world will do too much to solve these problems. It’s that the world will do too little. And it’s not that governments will move too fast. It’s that they will be too slow.”
Those sentences conclude the book I coauthored in 2019 titled “Tools and Weapons.” As the title suggests, the book explores how technological innovation can serve as both a tool for societal advancement and a powerful weapon. In today’s rapidly evolving digital landscape, the rise of artificial intelligence (AI) presents both unprecedented opportunities and significant challenges. AI is transforming small businesses, education, and scientific research; it’s helping doctors and medical researchers diagnose and discover cures for diseases; and it’s supercharging the ability of creators to express new ideas. However, this same technology is also producing a surge in abusive AI-generated content, or as we will discuss in this paper, abusive “synthetic” content.
Five years later, we find ourselves at a moment in history when anyone with access to the Internet can use AI tools to create a highly realistic piece of synthetic media that can be used to deceive: a voice clone of a family member, a deepfake image of a political candidate, or even a doctored government document. AI has made manipulating media significantly easier—quicker, more accessible, and requiring little skill. As swiftly as AI technology has become a tool, it has become a weapon. As this document goes to print, the U.S. government has announced that it successfully disrupted a nation-state sponsored AI-enhanced disinformation operation. FBI Director Christopher Wray said in his statement, “Russia intended to use this bot farm to disseminate AI-generated foreign disinformation, scaling their work with the assistance of AI to undermine our partners in Ukraine and influence geopolitical narratives favorable to the Russian government.” While we should commend U.S. law enforcement for working cooperatively and successfully with a technology platform to conduct this operation, we must also recognize that this type of work is just getting started.
The purpose of this white paper is to encourage faster action against abusive AI-generated content by policymakers, civil society leaders, and the technology industry. As we navigate this complex terrain, it is imperative that the public and private sectors come together to address this issue head-on. Government plays a crucial role in establishing regulatory frameworks and policies that promote responsible AI development and usage. Around the world, governments are taking steps to advance online safety and address illegal and harmful content.
The private sector has a responsibility to innovate and implement safeguards that prevent the misuse of AI. Technology companies must prioritize ethical considerations in their AI research and development processes. By investing in advanced analysis, disclosure, and mitigation techniques, the private sector can play a pivotal role in curbing the creation and spread of harmful AI-generated content, thereby maintaining trust in the information ecosystem.
Civil society plays an important role in ensuring that both government regulation and voluntary industry action uphold fundamental human rights, including freedom of expression and privacy. By fostering transparency and accountability, we can build public trust and confidence in AI technologies.
The following pages do three specific things: 1) illustrate and analyze the harms arising from abusive AI-generated content, 2) explain what Microsoft’s approach is, and 3) offer policy recommendations to begin combating these problems. Ultimately, addressing the challenges arising from abusive AI-generated content requires a united front. By leveraging the strengths and expertise of the public, private, and NGO sectors, we can create a safer and more trustworthy digital environment for all. Together, we can unleash the power of AI for good, while safeguarding against its potential dangers.
Microsoft’s responsibility to combat abusive AI-generated content
Earlier this year, we outlined a comprehensive approach to combat abusive AI-generated content and protect people and communities, based on six focus areas:
Core to all six of these is our responsibility to help address the abusive use of technology. We believe it is imperative that the tech sector continue to take proactive steps to address the harms we are seeing across services and platforms. We’ve taken concrete steps, including:
Protecting Americans through new legislative and policy measures
This February, Microsoft and LinkedIn joined dozens of other tech companies to launch the Tech Accord to Combat Deceptive Use of AI in 2024 Elections at the Munich Security Conference. The Accord calls for action across three key pillars that we utilized to inspire the additional work found in this white paper: addressing deepfake creation, detecting and responding to deepfakes, and promoting transparency and resilience.
In addition to combating AI deepfakes in our elections, it is important for lawmakers and policymakers to take steps to expand our collective abilities to (1) promote content authenticity, (2) detect and respond to abusive deepfakes, and (3) give the public the tools to learn about synthetic AI harms. We have identified new policy recommendations for policymakers in the United States. As one thinks about these complex ideas, we should also remember to think about this work in straightforward terms. These recommendations aim to:
Along those lines, it is worth mentioning three ideas that may have an outsized impact in the fight against deceptive and abusive AI-generated content.
These are not necessarily new ideas. The good news is that some of these ideas, in one form or another, are already starting to take root in Congress and state legislatures. We highlight specific pieces of legislation that map to our recommendations in this paper, and we encourage their prompt consideration by our state and federal elected officials.
Microsoft offers these recommendations to contribute to the much-needed dialogue on AI synthetic media harms. Enacting any of these proposals will fundamentally require a whole-of-society approach. While it’s imperative that the technology industry have a seat at the table, it must do so with humility and a bias towards action. Microsoft welcomes additional ideas from stakeholders across the digital ecosystem to address synthetic content harms. Ultimately, the danger is not that we will move too fast, but that we will move too slowly or not at all.
Aug 06, 2024, 03:00 ET
CAMBRIDGE, United Kingdom, Aug. 6, 2024 /PRNewswire/ -- Darktrace, a global leader in cybersecurity AI, has today released its "First 6: Half-Year Threat Report 2024," identifying key threats and attack methods facing businesses across the first half of 2024. These insights, observed by Darktrace's Threat Research team using its unique Self-Learning AI across its customer fleet, shed light on the persistent nature of cyber threats and new techniques adopted by attackers attempting to sidestep traditional defenses.
"The threat landscape continues to evolve, but new threats often build upon old foundations rather than replacing them. While we have observed the emergence of new malware families, many attacks are carried out by the usual suspects that we have seen over the last few years, still utilizing familiar techniques and malware variants," comments Nathaniel Jones, Director of Strategic Threat and Engagement at Darktrace. "The persistence of MaaS/RaaS service models alongside the emergence of newer threats like Qilin ransomware underscores the continued need for adaptive, machine-learning-powered security measures that can keep pace with a rapidly evolving threat landscape."
Cybercrime-as-a-Service continues to pose significant risk for organizations
The findings show that cybercrime-as-a-service continues to dominate the threat landscape, with Malware-as-a-Service (MaaS) and Ransomware-as-a-Service (RaaS) tools making up a significant portion of malicious tools in use by attackers. Cybercrime-as-a-Service groups, such as Lockbit and Black Basta, provide attackers with everything from pre-made malware to templates for phishing emails, lowering the barrier to entry for cybercriminals with limited technical knowledge.
The most common threats Darktrace observed from January to June 2024 were:
The report also reveals the emergence of new threats alongside persistent ones. Notable is the rise of Qilin ransomware, which employs refined tactics, such as rebooting infected machines in safe mode to bypass security tools, making it more difficult for human security teams to react quickly.
Per the report, double extortion methods are now prevalent amongst ransomware strains. As ransomware continues to be a top security concern for organizations, Darktrace's Threat Research Team has identified three predominant ransomware strains impacting customers: Akira, Lockbit and Black Basta. All three have been observed using double extortion methods.
Email phishing and sophisticated evasion tactics rise
Phishing remains a significant threat to organizations. Darktrace detected 17.8 million phishing emails across its customer fleet between December 21, 2023, and July 5, 2024. Alarmingly, 62% of these emails successfully bypassed Domain-based Message Authentication, Reporting, and Conformance (DMARC) verification checks, an industry protocol designed to protect email domains from unauthorized use, and 56% passed through all existing security layers.
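For readers unfamiliar with DMARC: a domain owner publishes a policy as a DNS TXT record at `_dmarc.<domain>`, and receiving mail servers consult it to decide how to handle mail that fails authentication. The following is a minimal, illustrative sketch of parsing such a record into its tag/value pairs; the record string is an invented example, not any real domain's policy.

```python
# Minimal sketch: parse a DMARC policy string of the kind published
# as a DNS TXT record at _dmarc.<domain>. The record below is invented
# for illustration; real deployments fetch it via a DNS lookup.

def parse_dmarc(record: str) -> dict:
    """Split a record like 'v=DMARC1; p=reject; rua=mailto:...'
    into a {tag: value} dictionary."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if not part:
            continue
        key, _, value = part.partition("=")
        tags[key.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=quarantine; pct=100; rua=mailto:reports@example.com"
policy = parse_dmarc(record)
print(policy["p"])  # 'p' is the requested handling of mail that fails checks
```

The `p` tag (`none`, `quarantine`, or `reject`) is what determines whether failing mail is actually blocked; the statistic above suggests that in practice many domains publish no record, or publish a permissive policy, which attackers exploit.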
The report highlights how cybercriminals are embracing more sophisticated tactics, techniques and procedures (TTPs) designed to evade traditional security parameters. Darktrace observed an increase in attackers leveraging popular, legitimate third-party services and sites, such as Dropbox and Slack, in their operations to blend in with normal network traffic. Additionally, there's been a spike in the use of covert command and control (C2) mechanisms, including remote monitoring and management (RMM) tools, tunneling, and proxy services.
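One common heuristic for spotting the covert C2 beaconing described above (a generic sketch, not Darktrace's detection logic) is to look for unusually regular intervals between outbound connections to a single destination, since automated implants tend to call home on a near-fixed timer while human-driven traffic is irregular:

```python
# Sketch of a beaconing heuristic: near-constant gaps between outbound
# connections to one destination suggest automated C2 traffic.
# Illustrative only; timestamps below are invented example data.
import statistics

def looks_like_beacon(timestamps, max_cv=0.1):
    """Flag a timestamp series (seconds) whose inter-arrival times have
    a coefficient of variation below max_cv, i.e. are very regular."""
    if len(timestamps) < 3:
        return False  # too few events to judge regularity
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = statistics.mean(gaps)
    if mean == 0:
        return False
    cv = statistics.pstdev(gaps) / mean
    return cv < max_cv

beacon = [0, 60, 120, 181, 240, 300]  # ~60 s apart with slight jitter
human = [0, 12, 95, 130, 400, 410]    # irregular, browsing-like activity
print(looks_like_beacon(beacon), looks_like_beacon(human))  # True False
```

Real-world detection is harder than this sketch suggests precisely because, as the report notes, attackers route C2 through legitimate services and add jitter to blend in with normal traffic.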
Edge infrastructure compromise and exploitation of critical vulnerabilities are top concerns
Darktrace observed an increase in mass-exploitation of vulnerabilities in edge infrastructure devices, particularly those related to Ivanti Connect Secure, JetBrains TeamCity, FortiClient Enterprise Management Server, and Palo Alto Networks PAN-OS. These compromises often serve as a springboard for further malicious activities.
It is imperative that organizations do not lose sight of existing attack trends and CVEs, as cybercriminals may fall back on older, largely dormant methods to trick organizations. Between January and June, attackers exploited Common Vulnerabilities and Exposures (CVEs) in 40% of the cases investigated by the Threat Research team.
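The point about tracking existing CVEs can be illustrated with a toy cross-reference of an asset inventory against a known-exploited-vulnerabilities advisory list (the inventory below is invented for the example; the two 2024 CVE identifiers correspond to real Ivanti and TeamCity advisories, while CVE-2023-99999 is a placeholder):

```python
# Toy sketch: flag assets whose unpatched software appears in a
# known-exploited-vulnerabilities advisory list. The inventory is
# invented example data, not real telemetry.

known_exploited = {
    "CVE-2024-21887",  # Ivanti Connect Secure advisory entry
    "CVE-2024-27198",  # JetBrains TeamCity advisory entry
}

asset_inventory = {
    "vpn-gateway-01": {"CVE-2024-21887"},
    "build-server-02": {"CVE-2024-27198", "CVE-2023-99999"},
    "workstation-17": set(),
}

for asset, cves in sorted(asset_inventory.items()):
    hits = cves & known_exploited  # set intersection with the advisory list
    if hits:
        print(f"{asset}: patch urgently -> {', '.join(sorted(hits))}")
```

The value of this kind of check is that it prioritizes patching by actual exploitation in the wild rather than by severity score alone, which matches the report's warning that attackers keep returning to older, known vulnerabilities.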
For more in-depth analysis, download the First 6: Half-Year Threat Report 2024 at www.darktrace.com/resources/first-6-half-year-threat-report-2024 .
ABOUT DARKTRACE
Darktrace (DARK.L), a global leader in cybersecurity artificial intelligence, is on a mission to free the world from cyber disruption. Breakthrough innovations from our R&D teams in Cambridge, UK, and The Hague, Netherlands, have resulted in over 200 patent applications filed. Rather than study historic attacks, Darktrace's technology continuously learns and updates its knowledge of your business data and applies that understanding to help transform security operations to a state of proactive cyber resilience. The Darktrace ActiveAI Security Platform™ provides a full lifecycle approach to cyber resilience that can autonomously spot and respond to known and unknown in-progress threats within seconds across the entire organization, including cloud, apps, email, endpoint, network and operational technology (OT). Darktrace, which listed on the London Stock Exchange in 2021, employs over 2,400 people around the world and protects over 9,700 customers globally from advanced cyber threats. To learn more, visit https://darktrace.com/ .
SOURCE Darktrace
Cybercrime and the Law: Computer Fraud and Abuse Act (CFAA) and the 116th Congress The Computer Fraud and Abuse Act (CFAA), 18 U.S.C. § 1030, is a civil and criminal cybercrime law prohibiting a variety of computer-related conduct. Although sometimes described as an anti-hacking law, the CFAA is much broader in scope. Indeed, it prohibits seven categories of conduct including, with certain ...
Cybercrime and the Law: Primer on the Computer Fraud and Abuse Act and Related Statutes There is no single, straightforward definition of cybercrime under federal law. Rather, depending on the context, "cybercrime" may refer to all crimes involving computers, or only to crimes targeting computers, or to crimes unique to the computer context. Regardless, federal prosecutors
Second, federal cybersecurity laws play an important role in both preventing and responding to ransomware attacks. Cyber preparedness laws require federal agencies to secure their networks and authorize the Cybersecurity and Infrastructure Security Agency (CISA) and Office of Personnel Management (OPM) to establish federal network security requirements. Other cyber preparedness laws authorize ...
The articles included in this issue reflect three broad areas of cybercrime research: cybercrime victimization, cybercrime perpetration, and techniques and facilitators of cybercrime. While there is some overlap, the issue includes three papers focused on each of these three areas. The first area covered in the special issue focuses on ...
Computer crime laws purport to protect. It is alarming that the law does not more clearly distinguish between security research and computer crimes, and we believe this situation needs to change. That said, this guide has the pragmatic focus of accurately describing the current legal landscape and helping
Throughout this paper, any criminal behaviour utilising the Internet will be termed 'cybercrime' unless referring to specific research using other terminology. Whilst recognising that cybercrime is a global issue, this paper will focus primarily on policing cybercrime in England and Wales.
If investigators are able to gain a complete picture of a crime, then they will be able to take action against the criminal or potentially stop a future crime from occurring. This paper utilizes the basic research and survey methodologies by leveraging existing research, synthesizing the material, and compiling information.
The research of cybercrime scholars should be the key information source for policymakers, the public, security professionals, and other academics on how to decrease various forms of cybercrime. Unfortunately, there is a lack of evidence-based studies testing the effectiveness of cybercrime policies.
Abstract A significant number of nations around the world have enacted cybercrime laws for the purpose of controlling the occurrence of cybercrimes and mitigating its ill effects. However, in spite of enacting such cybercrime laws, available data show that the incidence of cybercrime is rapidly increasing. There are many factors that contribute to the failure of criminal law to fully control ...
Research on cybercrime victimization is relatively diversified; however, no bibliometric study has been found to introduce the panorama of this subject. The current study aims to address this research gap by performing a bibliometric analysis of 387 Social Science Citation Index articles relevant to cybercrime victimization from Web of Science database during the period of 2010-2020. The ...
This is misplaced. There are a number of factors militating against it. The foundations of international law, human rights, the interests of justice, complexity and cost and the underlying purposes of criminalisation conspire to demand a reconsideration of the use of transnational and extraterritorial jurisdiction in the fight against cybercrime.
The paper can also help improve risk awareness and corporate behaviour, and provides the research community with a comprehensive overview of peer-reviewed datasets and other available datasets in the area of cyber risk and cybersecurity. This approach is intended to support the free availability of data for research.
This chapter covers the definitions, types, and intrusions of e-crimes. It has also focused on the laws against e-crimes in different countries. Cybersecurity and searching methods to get secured ...
Grabosky ( 2007) classified three general forms from his exploration of legislation and the common law: crimes where the computer is used as the instrument of crime; crimes where the computer is incidental to the offense, and crimes where the computer is the target of crime.
The USA and its authors and institutions were likely to connect widely and took a crucial position in research on cybercrime victimization. Cyberbullying was identified as the issue of greatest concern over the years, and cyber interpersonal crimes attracted a larger volume of research than cyber-dependent crimes.
Law enforcement officials have been frustrated by the inability of legislators to keep cyber-crime legislation ahead of the fast-moving technological curve.
To this end, this chapter aims to provide a critical analysis of the challenges that police and the wider law enforcement community encounter when responding to cybercrime. In view of this, the chapter also aims to assess law enforcement's cyber capacity and capability concerning their fight against cybercrime.
Abstract This research paper delves into the complex landscape of cybercrime and its legal implications within India. The advent of the digital age has given rise to numerous challenges for law enforcement agencies, policymakers, and legal systems worldwide. This paper examines the multifaceted issues surrounding cybercrime, focusing on jurisdictional challenges, privacy concerns, and the ...
Abstract This article analyses the evolution and interplay of national policies and international diplomacy on cyber terrorism within and across the UNSC's permanent five members and the UN process on cyber norms (GGE and OEWG). First, it reveals how - through the extension of preemptive measures to low-impact cyber activities and online content - national policies progressively ...
The Shodhganga@INFLIBNET Centre provides a platform for research students to deposit their Ph.D. theses and make it available to the entire scholarly community in open access. Shodhganga@INFLIBNET. Maharaja Agrasen University. Maharaja Agrasen School of Law.
Furthermore, the paper deals with the types of cyber-crimes which include email-spoofing, phishing, identity theft, internet fraud, etc. Additionally, authors have
The purpose of this paper is to assess whether current international instruments to counter cybercrime may apply in the context of Artificial Intelligence (AI) technologies and to provide a short analysis of the ongoing policy initiatives of international organizations that would have a relevant impact in the law-making process in the field of cybercrime in the near future. This paper ...