
The Sinister Exploitation of Children’s Data in AI-Educational Environments

As the digital world expands, new educational frontiers are constantly being established, with AI-enabled learning environments at the forefront. But underneath the glossy sheen of technological innovation lurks a sinister reality: the rampant exploitation of children’s data. Tech giants such as Google for Education aren’t simply providing Virtual Learning Environments (VLEs) and assessment systems; they are amassing enormous troves of data on unsuspecting students. Such data gathering raises a plethora of ethical and privacy concerns that are largely dismissed or ignored.

The undeniable power of AI has been seized by corporations to transform learning and assessment systems, tailoring education to individual needs. Yet this seemingly altruistic endeavor has birthed a lucrative well of data primed for exploitation. With each interaction on these AI-powered platforms, children unknowingly contribute to a comprehensive data profile. Promises of personalized learning experiences mask the extensive data collection and its ramifications on children’s digital rights.
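To make the mechanism concrete, the following is a minimal, hypothetical sketch (in Python) of how ordinary platform interactions can accumulate into a behavioural profile. The event types and field names are illustrative assumptions, not any real vendor’s telemetry schema; the point is simply that every routine action becomes another data point attached to a persistent student identifier.

```python
# Hypothetical sketch: routine learning-platform events accumulating into a
# behavioural profile. All names are illustrative assumptions, not any real
# vendor's telemetry schema.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class InteractionEvent:
    student_id: str
    event_type: str   # e.g. "quiz_answer", "video_pause", "page_view"
    subject: str
    timestamp: float

@dataclass
class StudentProfile:
    student_id: str
    event_counts: dict = field(default_factory=lambda: defaultdict(int))
    subjects_seen: set = field(default_factory=set)

    def ingest(self, event: InteractionEvent) -> None:
        # Every click, pause, and answer becomes another data point.
        self.event_counts[event.event_type] += 1
        self.subjects_seen.add(event.subject)

profiles: dict[str, StudentProfile] = {}

def record(event: InteractionEvent) -> None:
    profile = profiles.setdefault(
        event.student_id, StudentProfile(event.student_id)
    )
    profile.ingest(event)

# A single quiz answer is enough to start (or extend) a profile.
record(InteractionEvent("pupil-042", "quiz_answer", "maths", 1718000000.0))
```

No single event in this sketch looks sensitive on its own; it is the accumulation over years of compulsory schooling that produces the comprehensive profile described above.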

There’s an insidious lack of transparency surrounding data use and the consent process. The concept of ‘informed consent’ has been twisted into a barely recognizable form. Children, and often their guardians, remain blissfully ignorant of the massive scale and far-reaching implications of this data harvesting, turning them into unwitting pawns in a system designed to commodify their data for corporate gain.

In today’s data economy, the extensive data harvested from AI-driven learning platforms isn’t just an accumulation of information—it’s a powerful weapon. Corporations are exploiting this data to manipulate educational trends, inform product development, and hone marketing strategies, effectively turning children’s data into commercial gold. This crass commercialization of children’s data bears an uncomfortable resemblance to traditional forms of child exploitation and urgently demands a thorough reassessment.

A blistering report by Human Rights Watch condemns governments in 49 of the world’s most populous countries for endorsing online learning products without sufficient regard for children’s privacy during Covid-19-induced school closures. The aptly named report, “How Dare They Peep into My Private Life?”, examines 164 educational technology (EdTech) products and finds that a shocking 89% infringed upon children’s rights by covertly monitoring their activities and harvesting personal data, often without informed consent.

The report uncovers the widespread use of tracking technologies by online learning platforms, monitoring children’s activities beyond virtual classrooms. Alarmingly, some are using tagging and fingerprinting methods that are almost impossible to avoid or erase. Moreover, children’s data is regularly handed over to advertising technology (AdTech) companies for targeted advertising, resulting in personalized content and advertisements that not only disrupt their online experiences but also risk manipulating their perceptions and beliefs.
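The persistence of these fingerprinting techniques is easier to grasp with a sketch. The snippet below is a minimal, hypothetical illustration of how a tracker can derive a stable identifier from device attributes alone; the attribute names are assumptions, and real trackers combine dozens of such signals. Because the identifier is recomputed from the device’s own characteristics rather than stored in a cookie, clearing browser data does not remove it, which is why such methods are described as almost impossible to avoid or erase.

```python
# Minimal sketch of device fingerprinting, assuming a tracker receives a set
# of browser attributes (user agent, screen size, timezone, fonts). The
# attribute names are illustrative; real trackers use far more signals.
import hashlib

def fingerprint(attributes: dict[str, str]) -> str:
    # Serialise the attributes deterministically so the same device always
    # yields the same identifier -- no cookie is stored, so clearing
    # browser data does not remove it.
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

device = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "1920x1080",
    "timezone": "Europe/Brussels",
    "fonts": "Arial,Verdana,Comic Sans MS",
}
print(fingerprint(device))  # stable ID that survives cookie deletion
```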

By endorsing such EdTech products, governments have transferred the true cost of online education onto children, sacrificing their privacy and access to unbiased information. Furthermore, these practices have largely occurred without the knowledge or consent of children, parents, and teachers, effectively thrusting them into a system of hidden surveillance and data collection as part of their compulsory education.

In light of this uncontrolled data exploitation, one glaring deficiency is the absence of international regulation to protect children’s digital rights. Many of these tech giants, such as Google for Education, operate within the legal jurisdiction of the United States, where self-regulation and a pro-business mentality prevail, leading to dubious practices around children’s data with scant oversight or accountability.

This laissez-faire attitude towards data regulation becomes problematic when these services operate internationally, especially within the European Union. The EU has been a forerunner in digital rights and privacy protections, exemplified by the robust General Data Protection Regulation (GDPR). Yet, the distinctive issues surrounding children’s digital rights within the realm of AI-enabled education call for targeted regulatory responses.

As these platforms operate within European borders, the EU has an ethical responsibility to shield its youngest citizens’ digital rights. The EU has the chance to challenge the shortcomings of US self-regulation, pushing for stricter rules on children’s data collection, use, and profiteering.

Stricter data protection policies, enhanced transparency about data usage, and educating children and guardians about their digital rights should be prioritized. The EU could even consider requiring profits derived from the use of children’s data to be reinvested in the education system, converting potential exploitation into a boon for education.

As we traverse the era of AI-powered education, the need for robust regulation has never been more urgent. Taking decisive action against the unregulated exploitation of children’s data by corporations is not just a legislative challenge; it is a moral imperative. The EU has the opportunity to lead the global dialogue, championing the protection of our most vulnerable digital citizens: our children. In doing so, we can ensure that the transformative potential of AI in education serves the collective good rather than the profiteering of a few.

To counter rampant data exploitation in AI-enabled educational platforms, the European Union must take decisive action. Given its history of implementing robust data protection policies, the EU is well-positioned to lead this charge. Building on successful past policy implementations, a three-pronged strategy of legislation, cooperation, and education should be pursued: developing new policies, engaging international partners and corporations, and educating all stakeholders about the challenges at hand. By addressing this pressing concern, the EU has the chance not only to safeguard children’s rights within its jurisdiction but also to set a global standard for others to follow.

1. Legislation: A dedicated policy focusing on children’s digital rights in AI-enabled learning environments should be pursued. This task would fall under the purview of the Directorate-General for Justice and Consumers (DG JUST), which oversees the EU’s data protection policies. DG JUST, in collaboration with the Directorate-General for Education, Youth, Sport and Culture (DG EAC) due to the educational context, can propose new legislative initiatives. These initiatives can then be put forth to the European Parliament and the Council of the EU for approval. The policy should outline stringent rules on data collection and usage, incorporate child-specific consent procedures, and establish a framework for data transparency and accountability.

2. Cooperation: The EU External Action Service (EEAS) should spearhead a coordinated international approach, involving both the corporations and international partners. EEAS would be responsible for advocating these regulations on international forums, such as the United Nations, while also initiating dialogue with the tech giants involved. The Directorate-General for Communications Networks, Content and Technology (DG CONNECT) should be engaged to facilitate discussions with the tech industry, drawing them into the rule-making process for effective policy creation.

3. Education: Raising awareness and educating all stakeholders is crucial. The Directorate-General for Education, Youth, Sport and Culture (DG EAC) can develop awareness campaigns and educational programmes, and incorporate digital rights education into school curricula across member states. They can work closely with the European Institute of Innovation and Technology (EIT) and the European Agency for Safety and Health at Work (EU-OSHA) to ensure digital safety and awareness are integral components of the digital education landscape.

In addition to the roles outlined for the various Directorates-General and agencies at the EU level, a collaborative approach must also involve the member states’ Commissioners for Children, who should be included in the policymaking process from the outset. Their unique position gives them an in-depth understanding of the specific needs and challenges faced by children in their respective countries, making them invaluable contributors to the policy’s development.

The collaborative process could be facilitated by the European Commission’s Coordinator for the Rights of the Child, who can convene regular meetings with the Commissioners for Children. This would ensure a constant flow of dialogue and mutual learning, so that experiences and insights from the national level feed into the EU’s policy framework.

This action plan represents a holistic approach to addressing the digital exploitation of children in AI-enabled learning environments. It is a challenging path, but one that carries the promise of a safer and more equitable digital future for our children.

A child’s learning environment has always been a sacred space: a place where the child can feel safe, cared for, and secure in the knowledge that their need for safety is treated as an absolute priority. This is ingrained as the essential first duty of any learning environment. Now that the paradigm of the learning environment is leaving its brick-and-mortar foundations behind, we cannot allow corporate learning platforms to exploit our children’s digital counterparts, their data, for capitalistic gain. The right to privacy and to ownership of one’s data must be held as paramount as the right to be safe from physical harm.
