The IRS, ID.me, and What They Mean for Privacy
Privacy in the age of the Internet is a constant concern for many and consistently raises questions about the role of government regulation of the Internet. It also raises the question of how much privacy we as individuals should give up in order to use the Internet. Colossal technology companies such as Facebook and Google are embroiled in seemingly endless controversies over the sale or disclosure of personal data to other companies and advertisers. Massive scandals such as the Cambridge Analytica scandal, in which the London-based data analytics firm used private personal data from millions of Facebook users without their consent to aid the presidential campaign of Donald Trump, have brought data security to the attention of many, but the privacy of online information and data remains a pressing concern.
One area of data privacy that has particularly gained attention is facial recognition technology and how it can be used to gather biometric data from people without their consent. Facial recognition and its privacy implications became a major talking point in early 2020 with the massive fallout of the Clearview AI scandal. In a groundbreaking piece, The New York Times reported that Clearview AI, a software company specializing in facial recognition, had built a massive database of photos and personal data scraped from across the internet to power facial recognition software that law enforcement agencies used to identify suspects. Clearview AI faced a massive public outcry, and civil liberties advocates filed several major lawsuits against the company across the nation. Most recently, this debate has flared up again after the Internal Revenue Service (IRS) announced its partnership with the online identity verification service ID.me, which uses facial recognition software to verify users' identities and prevent fraud. This latest controversy brings to light major questions not only about privacy but also about the overreach of private companies and how the government distributes aid to vulnerable people.
In late 2021, the IRS announced that by summer 2022, users would need to verify their identities through ID.me by uploading a picture of themselves along with various forms of ID, such as a driver's license or passport. The IRS is not the first government agency to require ID.me: about 30 states and 10 federal agencies use the service, primarily to verify the identities of government employees or of people accessing Social Security or unemployment benefits. However, the decision to require ID.me for IRS users prompted a massive backlash, including rare bipartisan pushback from both Democratic and Republican lawmakers in Congress. Ultimately, the IRS reversed its plans to work with ID.me, and ID.me began offering more flexible, less intrusive identity verification options for the government agencies still using its services, but the major concerns surrounding facial recognition remain.
Perhaps most important to address is the inherent racial bias that has been found in facial recognition technologies. According to a study conducted by the National Institute of Standards and Technology (NIST), Asian American, African American, and Indigenous people experienced higher rates of false positives in one-to-one facial matching. This misidentification, primarily of people of color, is especially significant given that many of the government agencies using ID.me do so for users trying to access much-needed services such as Social Security, unemployment benefits, and other forms of aid. This not only harms communities that already face racism, discrimination, and elevated rates of poverty; it also helps cement and reproduce conditions of structural racism and oppression. This has already played out, according to a report by Vice, with numerous people across the country locked out of vital unemployment or Social Security aid due to failures of ID.me's recognition technology. It is unclear how much of this is attributable to racial bias in the software, but given that people of color are more likely to be misidentified by facial recognition systems, problems with ID.me are likely to affect them disproportionately.
Moreover, for many economically disenfranchised or impoverished people, the private computers, smartphones, and internet connections required to use ID.me's services safely are not always accessible. Even more overt discrimination is apparent in the ways law enforcement agencies have used and abused facial recognition technology to unjustly surveil populations and to falsely imprison individuals who were the subjects of racially biased false positives.
While ID.me representatives have claimed that the company does not sell information to law enforcement, government agencies can and have used sensitive information to implement draconian surveillance measures against vulnerable populations such as migrants. Handing over private biometric data to government agencies and to the third-party corporations working with them is therefore an uncomfortable gamble for many people experiencing overt and systematic racism and discrimination. In fact, this concern has grown so large that several states and cities have attempted, and in some instances succeeded, to pass legislation limiting the use of facial recognition technology by law enforcement and other state agencies, citing privacy concerns.
Additionally, major concerns arise from the fact that users, in order to access vital services, are being asked to give very sensitive personal information to a private third-party company. While ID.me abides by a "Biometric Data Policy" promising users that the company will not sell or trade personal data, entrusting such sensitive data to a private for-profit company opens the door to potential abuse, leaks, or cyberattacks, and raises the question of how much control we have over our identities and our privacy. Many have argued that privacy in the age of the Internet no longer exists, and the IRS's debacle with ID.me is only the latest example. Facial recognition and the lack of data privacy have left us without the autonomy to choose what personal information we share with the state and with private corporations, and what they can do with that information. Overall, while this commodification of personal data seems to be the new normal under the changing face of surveillance capitalism, pushing back against these policies and exposing unjust data surveillance is all the more necessary. It is also essential to understand that although these breaches of privacy and surveillance tactics affect everyone, people facing systematic racism or impoverishment are likely to suffer the most adverse effects. Building systems of accountability for state and private entities will not solve the problems of data privacy entirely, but it is a first step toward regaining personal autonomy from the oppressive practices of the state and corporations.