Our kids are being watched, and the gushing data streams they’re emitting are being analysed at a granularity so minute that it puts data-mining companies like Facebook and Google to shame.
Jose Ferreira, who runs a firm called Knewton that analyses how people learn, wasn’t shy about the scope of his operation when talking to Forbes a few years ago:
Education is the world’s largest data industry, by far.
We’re physically collecting thousands of data points per student per day.
Those data points come from software that monitors students’ every mouse click, every keystroke, every split-second hesitation as children work through digital textbooks, as Politico describes it.
In fact, Knewton is able to find out not only what individual students know, but also how they think – including which kids will have trouble focusing on science before lunch and which will struggle with fractions next week, the news outlet reports.
It’s impressive analysis based on breathtakingly thorough data mining. But as it turns out, this type of education-related technology often comes with a scandalous disregard for its subjects’ privacy.
Indeed, parents should be asking: what privacy policies, exactly, are protecting their kids’ personal information?
Politico recently examined hundreds of pages of privacy policies for education software such as Knewton to ascertain how children’s privacy is being handled in this lucrative world of education-related technology.
If you’ve been reading related studies and keeping up with the news – such as Google being sued for data-mining students through its Google Apps for Education (and subsequently giving up on that data-mining) – you won’t be surprised to hear that there’s not much out there to protect kids’ privacy.
As it is, a December 2013 Fordham University School of Law study found that although 95% of US school districts rely on cloud services, those cloud services are opaque, poorly understood and weakly governed.
Parents are mostly kept in the dark about cloud storage of their children’s data, Fordham found, with only 25% of districts informing parents that they use cloud services.
Many districts don’t even have privacy policies, which is just one sign of what the study’s authors called “rampant gaps” in contract documentation in a “sizable plurality” of districts.
From the report:
Districts frequently surrender control of student information when using cloud services: fewer than 25% of the agreements specify the purpose for student information disclosures, fewer than 7% of the contracts restrict the sale or marketing of student information by vendors, and many agreements allow vendors to change the terms without notice.
This is all in spite of FERPA, the Family Educational Rights and Privacy Act, which gives parents some control over the disclosure of information from students’ records and which generally requires districts to have direct control of student information when disclosed to third-party service providers.
(On the flip side, FERPA also gives school districts the rights to share student data with private companies if it furthers educational goals.)
Beyond these issues, the study also found an “overwhelming majority” of cloud service contracts don’t even address parental notice, consent, or access to student information.
From the report:
Some services even require parents to activate accounts and, in the process, consent to privacy policies that may contradict those in the district’s agreement with the vendor – undercutting requirements for parental notice, consent, and access to student information.
Nor, generally, do agreements between districts and cloud service providers address data security.
In fact, they frequently allow vendors to retain student information in perpetuity, in spite of the fact that, as the report points out, “basic norms of information privacy require data security.”
As Politico reports, both Republicans and Democrats are all for this practice of private-sector data mining, even as legislators attempt to rein in NSA surveillance.
The Obama administration has even encouraged the practice by relaxing federal privacy law so that school districts can share student data more widely, Politico reports.
Resistance to this free-for-all plundering of children’s data has been mounting, however.
Last week, Senators Edward Markey (D-Mass.) and Orrin Hatch (R-Utah) proposed the “Protecting Student Privacy Act”, a bill that would:
- Require data security safeguards for student data held by private companies;
- Prohibit the use of students’ personally identifiable information to advertise or market a product or service;
- Give parents the right to access personal information about their children held by private companies and amend it if it’s wrong;
- Make transparent the names of companies that have access to student information by directing school districts to maintain a record of all outside companies with which the school contracts;
- Minimize the amount of personally identifiable information (PII) transferred from schools to private companies (see the sketch after this list); and
- Ensure private companies can’t maintain dossiers on students in perpetuity by requiring them to eventually delete PII.
In addition to the court action against Google’s data-mining in Google Apps for Education, the past year has seen ferocious pushback from parents against cloud storage of their children’s PII.
InBloom, a cloud storage provider that offered to house and manage student data for public school districts across the US – extracting a dizzying array of information, some 400 data fields, from disparate school databases, along with new, optional and sometimes intrusive fields – closed up shop in April as a result of that pushback.
But what evil, exactly, have ed-tech companies done with kids’ data?
As Politico notes, there haven’t been big data breaches in K-12 education in the US, although there have certainly been plenty at the university level.
Still, problems are rife. One company the news outlet examined notes in passing that confidential student data should be shared “very carefully”, but it offers no guidelines.
When Politico inquired about another company’s privacy policy – a policy that gave the company the right to use students’ PII in order to send them targeted advertising – the policy was speedily rewritten and reposted, with a brand-new emphasis on protecting privacy and minus a line about the targeted ads.
We haven’t yet seen the most intrusive technologies. As Reuters reported at the time, the Bill & Melinda Gates Foundation in 2012 funded a $1.4 million research project to outfit middle-school students with biometric sensors designed to detect how they responded on a subconscious level to each minute of each lesson.
At the end of the day, we’re left with many serious questions, the most basic of which is this: Are these technologies helping children enough to justify their obtrusiveness?
Please tell us what you think in the comments section below.
I think something needs to happen, first. As it stands, if I’m an advertiser directing ads at children, what benefit is it to me to know that little Timmy hates math versus not knowing anything about little Timmy? Is that a bad thing? Data isn’t evil. It’s what you do with it that is. What are they going to do with it?
How do I find out if this is happening and how do I stop it?
you can’t! mwa ha ha
I love today’s technology. I do. But at what price? I try to limit what information I put out into the cloud, but with every keystroke, I know that I’m sending information out to be mined. What privacy (if any) will the next generation have at all? Maybe they’ll be used to it by the time they grow up, but articles like this just make me shiver.
Lisa,
This is a helpful article, but misses two very important questions.
First, the algorithmic assumptions that data providers make in determining how to easily and quickly display thousands of data points in a simple way – to make them actionable – are questionable. At some level, a developer had to produce an algorithm, for instance, that “find[s] out what individual students know… [and] how they think…”.
The danger in this is that we have only a few means to capture “what students know” (e.g., a keyboard and mouse, or a single biometric) and that those data are then once again simplified to create a stereotyped view of a student “who doesn’t like to learn science before lunch”.
Put another way, the danger here is that the diagnostic tool is poorly created and leads to inappropriate interventions – or interventions that raise a test metric within a software environment, but that do not actually produce deeper, more robust understanding.
Second, these assumptions are dangerous both in the aggregate and on an individual basis when shared with actors that use the data to make meaningful decisions. For example, educators may make decisions about tracking based solely upon the data from these environments, when a student could be quite capable in ways not observed by a computer (but observable in other contexts); or, for that matter, in a hypothetical case, a student loan agency could determine loan eligibility or rates based upon past performance.
These detriments – data-driven discrimination, as I term it – are ethical grey areas not as evident as a data breach, but potentially much more impactful in the long term.
I do appreciate your article and believe it brings important points to light. To take things a step further, what else can or should we know, or can or should we consider, as educators ourselves, to be better informed? Your article is a fine start. What next?
–Dave