On a winter’s evening in February 2015, a middle-aged man in the United States engages in an online dialogue, through a popular U.S.-based email service, with a similarly middle-aged man in Romania. The two men discuss sexually explicit photos of the Romanian man’s infant daughter, who is just 22 months old. The Romanian is willing to send the American images of himself sexually abusing his daughter, and he hopes the American will send him sexual images of children the Romanian has not previously seen.
The email service provider discovers this transaction and, pursuant to U.S. federal law, immediately sends the IP address of the Romanian and any other information it has available to a non-governmental organization (NGO) based just outside Washington, D.C. The NGO examines the information before sending it on a secured channel to a U.S. criminal investigator based at Europol in The Hague.
The NGO, the National Center for Missing and Exploited Children (NCMEC), has already sent over 5,000 of these “leads” to the U.S. criminal investigator concerning Romanians possibly engaged in child sexual exploitation since October 2014.
But clearly this one is different, and it receives the highest priority coding from the NCMEC. Over the next 72 hours, the U.S. criminal investigator engages with Europol and Romanian police, obtaining images of the child being sexually abused, while the Romanian police follow legal channels to prepare to arrest the father and rescue the child. That rescue happens on 24 February because of the information provided by the email company and the follow-up efforts of U.S. and European law enforcement.
Privacy versus Public Safety
Perhaps few people would argue that the 22-month-old girl in Romania’s “right” not to be sexually abused is less significant, when balanced against her father’s right to “privacy” of his digital data.
Admittedly, not every NCMEC referral of child sexual exploitation presents such a clear case, where the balance of rights so plainly favors the victim. But as the use of technology grows and we confront the ethical and philosophical questions surrounding ownership of data and privacy interests, we must start to ask: how much of a user’s data is fair game for law enforcement seeking to protect children from sexual abuse?
Or, how much of a potential jihadist’s data should intelligence agencies or law enforcement be able to examine to protect the citizenry from terrorist attack?
Indeed, to determine the rights available to owners of data, threshold questions must first be answered: What data do we own, if any? Under most accepted legal theories, we do not own our “physical” fingerprints, so can we really conclude that we own our digital ones?
When a person drives a car on a highway, he or she agrees to display a license plate. The license plate’s identifiers are ignored most of the time by law enforcement. Law enforcement will use the identifiers though to determine the driver’s identity if the car is involved in a legal infraction or otherwise becomes a matter of public interest. Similarly, should not every individual be required to display a “license plate” on the digital super-highway? What level of public interest should be required by law enforcement to read that license plate?
Social media is used to generate support for terrorist groups. ISIL distributes videos showing its activities, most of which are abhorrent to law-abiding individuals. But there is a small segment of the population that will empathize with the struggle and perhaps seek to further it. How appropriate is it for law enforcement to engage social media companies to reveal the digital fingerprints of these extremist groups? And who determines the level of “extremism” of a group?
Few would disagree that law enforcement and intelligence services should have the ability, with appropriate judicial oversight, to investigate data to solve crimes and protect public safety. Our digital data, however, reveals much more personal information than would normally be found in physical evidence at a crime scene.
It can also be hard to separate the criminal activity found in a digital trail from the completely innocent, but perhaps embarrassing, behavior found in that same trail.
Where are the lines that cannot be crossed? Do the rules that apply in the physical world, both for the citizenry and for police, apply in the digital one?