BY SABO SAMBO AHMAD
1.0 Introduction
The only constant in life is change, as the philosopher Heraclitus once remarked.[1] The world has evolved from a largely agrarian society to a sophisticated era of technology, where information is both power and a weapon. What does this mean for lawmakers and policymakers? It demands constant adaptation to the changes of their time, or better still, the ability to outpace those changes.
Nigeria is still grappling with the evolution of technology and its attendant risks, and its data protection law is barely two years old. But time and tide wait for no one. The fact that Nigeria, as a nation, is still in the early stages of engaging with technology does not mean it will be exempt from the emerging risks associated with it. This calls for a proactive and intentional approach to data protection. This chapter will highlight a few emerging challenges in data protection.
1.1 Emerging Issues in Data Protection
As new technologies become more integrated into our daily lives, new challenges to the right to the protection of personal data arise. The Nigeria Data Protection Act (NDPA) attempts to remain technology-neutral, so that the protection of personal data does not depend on the techniques used in processing and can adapt to new technologies. However, this is not as simple as it seems. The Snowden revelations of 2013, for example, exposed the inadequacy of many legal frameworks and forced a fundamental rethinking of data protection law. Moreover, cybersecurity attacks, big data analytics, and the rapid pace at which technologies like artificial intelligence (AI), blockchain, the Internet of Things (IoT), and biometric systems advance often present unique challenges that require additional regulation.[2] The potential of these technologies is limitless, and data protection law must evolve with them.
This chapter will consider three emerging issues in prominent domains: (1) automated decision-making, profiling, and artificial intelligence, (2) facial recognition technology and video processing, and (3) emerging discussions on data sovereignty.
1.1.1 Automated Decision-Making, Profiling, and Artificial Intelligence
Data subjects are often unaware of the amount and type of information collected about them, as well as how this data can be connected and analyzed using artificial intelligence (AI) technologies to infer sensitive personal characteristics.[3] For example, platforms like Amazon suggest clothes or products based on users’ search histories, Google predicts sites and news likely to attract individual attention, and Netflix curates movies based on past watchlists. These everyday examples highlight the pervasive nature of AI-driven profiling in consumer experiences.
On a deeper level, research by Kosinski et al. revealed how combining seemingly innocuous data—such as Facebook “likes”—with limited additional information could predict a user’s sexual orientation with 88% accuracy and ethnic origin with 95% accuracy.[4] The Cambridge Analytica scandal underscores the more troubling implications of such capabilities. In this case, data from over 87 million Facebook users was harvested without consent and analyzed using a “five-factor model” of personality traits. This analysis provided insights into users’ openness, agreeableness, and other psychological tendencies, which were exploited to create personalized political ads designed to manipulate voter behavior. These ads were tailored to target individuals’ fears and biases, influencing their electoral decisions without their informed consent. Cambridge Analytica notably played a significant role in the 2016 U.S. presidential election, supporting Donald Trump’s campaign, and participated in shaping elections globally.[5]
Automated decision-making refers to decisions made entirely through automated processes, without human involvement.[6] This concept is exemplified by platforms like Google, Amazon, and Netflix, where algorithms independently make predictions and decisions about user preferences.
Profiling, a more focused form of automated processing, involves using personal data to evaluate specific aspects of a person’s life. These include predictions about their performance at work, economic situation, health, personal preferences, interests, behavior, or movements.[7] The Kosinski study and the Cambridge Analytica case are prime examples of profiling, demonstrating how data can be weaponized to influence individual decisions on a grand scale.
Artificial intelligence compounds these risks as an advanced form of automated decision-making that facilitates profiling based on seemingly insignificant data points.[8] The integration of AI tools and methods creates significant threats to personal data, potentially undermining fundamental freedoms, including freedom of expression. As seen in the Cambridge Analytica scandal, profiling through platforms like Facebook can subvert democratic processes by manipulating electoral outcomes.
This concern is especially relevant in contexts like Nigeria, where public awareness of the risks associated with data use and AI technologies remains limited. With a largely uninformed population, the potential for misuse of these tools—such as targeted political manipulation—could have serious consequences for governance and democracy.
As AI and data analytics continue to evolve, their accuracy and ability to extract sensitive information will only improve, amplifying the need for robust legal and ethical safeguards. Without these protections, the unchecked use of automated decision-making and profiling poses grave threats to individual rights and societal integrity.
1.1.2 Facial Recognition Technology and Video Processing
Nigeria is currently marked by a scarcity of surveillance cameras. While precise figures are unavailable, it is evident that surveillance systems are limited to select governmental buildings, private corporate premises, and a handful of private residences. However, with a growing middle-class population, rapid urbanization, infrastructure development, rising governmental needs, and increasing terrorist attacks, this landscape is poised to change dramatically.[9] The Nigerian video surveillance market is projected to grow significantly, reaching USD 241.5 million by 2031 at a CAGR of 8.4% from 2025.
While this surge signals technological advancement, it also introduces practical challenges to data protection. Systematic monitoring of public and private spaces using audio-visual surveillance heightens the risk of infringing on individuals’ privacy. Such technology makes it easier to identify and track individuals within monitored areas, creating a potent tool that, in the wrong hands, could cause significant harm. A chilling example of this is the Chinese government’s treatment of Uyghur Muslims, which underscores the dangerous implications of unchecked surveillance.
In 2017, under the guise of countering terrorism, the Chinese government launched a mass data collection campaign targeting Muslim residents, particularly Uyghurs, in the Xinjiang region. Biometric data such as DNA samples, iris scans, and blood types were harvested from individuals aged 12 to 65 as part of the so-called “Strike Hard Campaign Against Violent Terrorism.” Surveillance cameras equipped with sophisticated AI were installed across Uyghur communities, monitoring them 24/7.[10]
These systems utilized AI algorithms to predict behaviors or potential “extremist tendencies” based on arbitrary criteria such as having more than two children, using a backdoor instead of the front door, minimal social interactions, or engaging in Islamic practices like prayer or Quran reading. Alarmingly, these algorithmically generated, discriminatory predictions were used to justify imprisonments and other punitive actions. Meanwhile, the Han Chinese population—making up 35% of the region’s total population—was exempt from such measures, exposing the campaign as a blatant exercise of state control and cultural suppression.[11]
In stark contrast, the European Union adopted a proactive approach to regulating surveillance and facial recognition technology, especially following the NSA leak. Beyond the General Data Protection Regulation (GDPR), the European Data Protection Board (EDPB) issued specific guidelines for video surveillance and facial recognition systems. These guidelines emphasize that the purpose of monitoring must be clearly documented for each camera, and generic purposes such as “safety” are insufficient justification.[12]
For instance, if a shop owner in the EU wishes to install surveillance cameras, they must demonstrate a legitimate interest under GDPR Article 6(1)(f). This requires proving that vandalism is a documented and actual threat in the neighborhood, not merely a general concern across the nation. Similarly, national laws requiring surveillance must align with GDPR principles to avoid violations.[13]
Unfortunately, Nigeria lags behind in this regard. Despite the Nigeria Data Protection Act (NDPA) and its implementation framework, the General Application and Implementation Directive (GAID), there are no specific provisions addressing video surveillance and facial recognition akin to the EDPB’s comprehensive guidelines. The absence of such regulations creates a legal vacuum, leaving room for potential abuse.
Proactivity is crucial. Nigeria must not wait for surveillance technology to evolve into a tool of oppression or for scandals to erupt before taking action. Lessons from the Uyghur tragedy and the EU’s foresight in regulatory development highlight the importance of safeguarding citizens’ rights in the face of advancing technology. By enacting robust laws and guidelines now, Nigeria can still strike a balance between leveraging surveillance technology for security and preserving individual privacy and freedom.
1.1.3 Emerging Discussions on Data Sovereignty
The growing collection and analysis of big data is automated, continuous, inexpensive, and opaque.[14] The National Security Agency (NSA) leak provides a chilling example: the agency amassed a meticulous data repository spanning almost the entire world. The NSA disregarded notions of sovereignty, mining data extraterritorially from unaware foreign nationals, including politicians. The Five Eyes alliance routinely exchanged the data thus obtained, breaching the trust of their citizens by treating personal data as a commodity. These revelations raised fresh concerns about data sovereignty.[15]
Apart from the NSA leak, the world has in recent times witnessed flagrant disregard for the concept of sovereignty. Triggered by the catalytic events of 2016, namely Russian interference in the US presidential election and the Brexit referendum, the US and the EU became aware that their political systems were vulnerable to manipulation.[16] The COVID-19 crisis constitutes the most recent catalytic event for data sovereignty: countries worldwide relied heavily on global tech giants like Google, Apple, and Microsoft for contact tracing, vaccine distribution systems, and digital health passports.[17] This raised concerns about sovereignty over health data, much of which was stored and processed in foreign jurisdictions, potentially exposing it to laws like the U.S. CLOUD Act.
What, then, do countries do in the wake of these political crises? They adopt political agendas and policies aimed at jealously guarding their domestic data.[18] After the NSA leak, for example, the EU embraced digital sovereignty with a focus on its economic dimension.[19] It gave its laws extraterritorial reach, protecting local data while bringing foreign entities and corporations within the ambit of its law. This gives the EU significant control over how data is processed both within and outside its borders.
The US, on the other hand, pursues digital sovereignty through expansive control and a “splinternet” approach, exemplified by the US CLOUD Act, which allows the government to access data stored abroad by U.S. companies. With its dominance in big tech (e.g., Apple, Google, Microsoft), the U.S. focuses on maintaining global leadership in digital governance and control.[20]
China was the pacesetter in the concept of digital sovereignty. Concerned about overwhelming US hegemony,[21] the country has championed strict digital sovereignty since as early as 2014, adopting data localization policies and investing heavily in domestic alternatives to foreign technologies, such as Huawei for 5G and Baidu for search engines.[22] India has followed a similar trend, building domestic tech capabilities and reducing reliance on foreign platforms.[23] To date, India has banned over 200 Chinese apps, citing national security concerns.[24] Russia likewise enforces strict data localization, mandating that the personal data of citizens be stored on servers within the country.
What, then, of African countries? Despite the growing global concern about digital sovereignty, African countries are only now waking up to the pressing need to protect individuals’ data, and even then without much urgency. Most of these countries, including Nigeria, adopted GDPR-styled legislation with little regard for the political agendas embedded in such laws. Nigeria continues to suffer from weak institutional capacity, hampering its ability to take data protection seriously, let alone digital sovereignty. It remains to be seen how these countries will grapple with the consequences of neglecting digital sovereignty, which include an erosion of national autonomy, security, and the ability to thrive in the digital economy. Future research on data protection and privacy in Africa, and particularly Nigeria, should focus on possible solutions to this problem.
1.2 Conclusion
Countries all over the world are studying new technologies and identifying potential risks to their citizens, or at the very least attempting to contain them. Many scholars are paying special attention to how blockchain technology can be developed in a legally compliant manner. Blockchain has gained wide popularity and use in recent times, and its decentralized model, in which no single entity has full control over data, continues to pose risks to both data protection and digital sovereignty. With robust and adaptable laws and policies, and, more importantly, an intentional approach to data protection, Nigeria stands a chance of winning the fight for data protection and digital sovereignty.
[1] Heraclitus, ‘The Fragments of the Work of Heraclitus’ (Penguin Classics 2001) 22
[2] Elif Kiesow Cortez (ed), Data Protection Around the World: Privacy Laws in Action (Springer 2020) 273
[3] Ibid 274
[4] Michal Kosinski, David Stillwell and Thore Graepel, ‘Private Traits and Attributes Are Predictable from Digital Records of Human Behavior’ (2013) 110(15) Proceedings of the National Academy of Sciences 5802 https://doi.org/10.1073/pnas.1218772110 accessed 18 January 2025
[5] Abhishek Vats and Claudia Masoni, ‘A Decade in Pixels: Analyzing Incidents of State-Sponsored Surveillance from the Last Decade’ in Abhishek Vats (ed), Handbook on Research on Cyber Security (MAIMS and Guru Gobind Singh Indraprastha University, 2023) 231.
[6] Nigeria Data Protection Act 2023, s 65
[7] GDPR, art 4(4)
[8] Elif Kiesow Cortez (ed), Data Protection Around the World: Privacy Laws in Action (Springer 2020) 273
[9] 6Wresearch, ‘Nigeria Video Surveillance Market’ (6Wresearch, 2025) https://www.6wresearch.com/industry-report/nigeria-video-surveillance-market accessed 2 January 2025.
[10] Abhishek Vats and Claudia Masoni, ‘A Decade in Pixels: Analyzing Incidents of State-Sponsored Surveillance from the Last Decade’ in Abhishek Vats (ed), Handbook on Research on Cyber Security (MAIMS and Guru Gobind Singh Indraprastha University, 2023) 240.
[11] Ibid 241
[12] EDPB, ‘Guidelines 3/2019 on Processing of Personal Data through Video Devices’ (Version 2.0, 29 January 2020) 9–10
[13] Ibid
[14] Abhishek Vats and Claudia Masoni, ‘A Decade in Pixels: Analyzing Incidents of State-Sponsored Surveillance from the Last Decade’ in Abhishek Vats (ed), Handbook on Research on Cyber Security (MAIMS and Guru Gobind Singh Indraprastha University, 2023) 242.
[15] Ibid 243
[16] Johannes Thumfart, ‘The Norm Development of Digital Sovereignty between China, Russia, the EU and the US’ in Dara Hallinan and others (eds), Data Protection and Privacy: Enforcing Rights in a Changing World (Springer 2022) 1–45
[17] Ibid 12
[18] Ibid 5
[19] Ibid 1
[20] Ibid 25
[21]Rogier Creemers, ‘China’s Conception of Cyber Sovereignty’ in Nazli Choucri and others (eds), Governing Cyberspace: Behavior, Power, and Diplomacy (Routledge 2020) 107–120
[22] Ibid 110
[23] Johannes Thumfart, ‘The Norm Development of Digital Sovereignty between China, Russia, the EU and the US’ in Dara Hallinan and others (eds), Data Protection and Privacy: Enforcing Rights in a Changing World (Springer 2022) 23
[24] Times of India, ‘India Bans 200-Plus Chinese Mobile Apps in Boon for Paytm’ (Times of India, 8 February 2023) https://timesofindia.indiatimes.com/gadgets-news/india-bans-200-plus-chinese-mobile-apps-in-boon-for-paytm/articleshow/97683653.cms accessed 2 January 2025.