You Are Not Your Data, But Your Data Is Still You


by Tricia Wang


What happens when your own data becomes your enemy?


Just shy of his 42nd birthday, Robert Julian-Borchak Williams found out.


In January 2020, the Detroit police uploaded a photo captured from a store robbery video to DataWorks. DataWorks — a mugshot management company turned facial recognition software company — then matched the video still with an outdated DMV photo of Williams, a Black man. That led to Williams’ arrest for felony larceny. But the match was wrong. And Williams became the first documented case in America of a wrongful arrest due to a mistaken identity match from facial recognition algorithms.


In an op-ed, Williams writes that prior to his experience, he was a supporter of facial recognition technology. He rhetorically asks: ‘What’s so terrible if they’re not invading our privacy and all they’re doing is using this technology to narrow in on a group of suspects?’


A digital artwork of a colourful face made of small scaly parts. Some of these parts are flying away from the face.
‘Void System’, by Maxime DES TOUCHES (2015) Image courtesy: https://www.deviantart.com/elreviae/art/Void-System-538340110

Williams goes on to answer his own question. Because of the wrongful arrest, he had to miss several days of work. Despite proving that he wasn’t the person in the photo, his case was only conditionally dismissed. This means he could still be charged again for the same crime — pending further evidence. Even though he was released without charges, he now has a police record for a crime he didn’t commit. This could adversely impact his chances of getting a job or finding a home. What’s more, his neighbours saw him getting arrested. His daughters saw him getting arrested. While his op-ed does not go into detail about the emotional trauma an event like this can cause, we know from other cases that this kind of trauma is real.


If what happened to Williams was only a threat to privacy, then we would be talking only about Williams’ control over his data. We might be talking about the State’s right to access his old DMV photo without his consent. But instead, what we’re talking about, is how processes that involve algorithmic decision-making over human beings can become fundamental threats to our lives and our communities. The wrongful arrest compromised Williams’ agency to live his life freely without harm or fear, to do a job, and to be part of a community.


Williams’ experience is not just an invasion of privacy. Something even more terrible happened to him that we don’t have a widely shared language for yet: the invasion of personhood. If privacy is about our ability to control information about ourselves, personhood is about our fundamental agency as human beings.


***


In the digital age, individual privacy in the broadest sense is about control over protecting one’s personally identifiable information (PII), such as information about health, credit, shopping, or communication. But the types of information deemed ‘personally identifiable’ and the amount of control one has over them varies around the world.


I’ve done a lot of work to try and document concepts of privacy and identity, and found that conceptions of privacy vary radically across cultures. It’s quite common for Chinese internet users to share their blood types on social media platforms, whereas Westerners would see that as deeply personal health information. In Peru, people give out their DNI, the equivalent to a social security number, without even a second thought when asked by store cashiers.


Regardless of the types of information it encompasses, privacy is about controlling which pools of data are protected. If the operative word for privacy is ‘control’, then personhood is all about agency. Personhood is the agency to determine one’s own life decisions and outcomes. Personhood is tied to the qualities that make us people. It’s about making decisions about everything: our careers, personal lives, homes, and relationships. It’s about having the freedom to determine where we live, who we live with, and how we live. It’s about self-determination.


An illustration of a short-haired person with several colours on their face, wearing a blue shirt, looking at us.
‘Guy #1’, by Ben Tam (2017) Image courtesy: https://www.artistsinourmidst.com/artists-gallery/byng-arts-mini-school/

Today, privacy and personhood are both mediated digitally. Our primary language for conceptualising the data we produce is through privacy, which treats our personal information as separate from us, a piece of property that can be measured, negotiated over, sold, and reused. But data doesn’t just belong to you in the way that your house or car might; it is also you. It is like a quantum particle that can exist in two places at the same time, as both a representation of who you are and also a commodity that can be sold.


Violations of privacy are violations of data as a commodity. Violations of personhood are violations of the actual person represented by that data. That is why violations of personhood lead to unintended and debilitating effects that last beyond the violation itself. So it is important to talk not only about the privacy violations that come with algorithmic decision-making, but also about the bigger threat to personhood. Evaluating the threats of automated decision-making tools through the lens of privacy obscures the nature and scale of the threat, and it obscures who will be most negatively impacted: people and communities who are still fighting for their personhood.

Even though we don’t use the concept of personhood widely, we may actually understand it better than we realise. The current Black Lives Matter protests in the USA did not start because protesters were demanding privacy around their data. They were, and are, demanding the right of Black people to live without discrimination — in other words, the right to their personhood.


Perhaps the value of personhood is most intuitively understood by those who have had it taken away, either within their lifetimes or inter-generationally. Removal of personhood comes in many forms: abuse, surveillance, racism, genocide, homophobia, slavery, trafficking, colonialism, torture, and human rights violations. Immigrants who are detained, ethnic groups who are discriminated against, and prisoners kept in solitary confinement are all systemically subjected to dehumanisation, which is one way of removing personhood. The poet Joshua Bennett describes how Black people have historically been deemed as human nonpeople. Essentially, anyone whose sense of self has been reduced to a single identifier or data point by a majority group has experienced violations of personhood — and viscerally knows what it’s like to be denied agency and autonomy over their lives and decisions.


The violation of personhood is not a new experience. It is, however, now happening in unfamiliar ways because the entire process is invisible, automated, and designed to obfuscate.


An abstract illustration of a big stack of books/diaries and some record players.
‘Memoria colectiva’, by Rosa Delgado Leyva (2004–2006) Image courtesy: https://artenet.es/en/paintings/collective-memory-

For example, the typical smartphone app user or Amazon shopper is aware that their behaviours are being tracked and purchases catalogued. But most people usually don’t understand the extent to which data is being gathered, by whom, or how it is being or could be used. People often don’t account for less obvious use cases like transit companies recording conversations on trains and buses, or advertisers tracking consumer travel patterns through billboards and smartphones. Even something seemingly unrelated to data, like high school students taking the SAT, results in their data being sent to tech advertising companies. When people do see the extent to which their preferences and activities are being monitored, they feel overwhelmed.

When it comes to privacy and data ownership, most of us feel downright helpless. High-profile stories like the Equifax data breach or the Facebook-Cambridge Analytica scandal are made public — and yet, here we are, likely still using credit cards and a Facebook product. We feel compelled to agree to terms and conditions designed to protect corporations so we can use the technologies we’ve come to rely upon to live our lives. When we demand better alternatives, technologists will often echo messages such as ‘If you care about your privacy, just stop using social media…and your phone…oh, and the internet!’ or ‘Privacy is obsolete anyway, so stop worrying about it!’

These two extremes offer neither helpful advice nor peace of mind. They are also two extremes that only address surface-level privacy without examining its link to personhood. The more our personal data is being collected automatically and acted upon by institutions, the more our privacy and our personhood are compromised. I believe we need to level up conversations around our digital humanity before we can collectively advance solutions to protect it.


***


Let’s be real: the word ‘data’ feels cold and lifeless. Perhaps because of this, many of us believe our data is somehow separate from our true selves. Some people just aren’t worried, thinking: ‘Oh, sure, my data is being collected, but it’s fine. It has never gotten in the way of my life.’ The reality is that as we begin to live more and more of our lives online, and institutions surveil and interact with us digitally, our data selves become just as real as our physical selves. That information you got from 23andMe is you. Those TikTok videos are you. That Grindr profile is you. The GPS mobility data from your phone is you. That chatbot conversation is you. In his book We are Data, John Cheney-Lippold documents how algorithms are used to construct digital versions of ourselves. Companies and institutions can then act upon us in the physical world just by acting upon our digital proxies.

A black, white and grey abstract digital art of a human face with closed eyes.
Untitled by pareidoloop. Image courtesy: https://rb.gy/07d80y

Because your data is you, and because your data is acted on without your consent, and because that action can impact how much agency you have over your life and livelihood, data is at the core of digital personhood. This is why any proposed solution of abandoning connected technologies is bound to be ineffectual. For starters, our digital selves already exist, and quitting social media won’t change that. But more importantly, these apps have become so central to our lives, that eradicating them would make it difficult to be a person.


Facebook and its suite of applications have become many people’s primary means of communicating with friends and family. Without it, they’d not only be without work, but also isolated from their social networks. Abandon Instagram and lose knowledge about the experiences of your friends and family. Abandon WhatsApp and lose the ability to contact people you know. Abandon Lyft and Uber and in certain cities you’ll either pay three times as much for your ride or be forced to walk because there are no cabs in sight. Imagine being unable to email your employer, or check your bank balance, or find the fastest and cheapest flight to see your family. You wouldn’t cease to exist on the spot, but the agency you have over your life would change drastically. Your sudden loss of personhood would be overwhelming.


Orange background. There are small illustrations of several faces in an asymmetrical arrangement.
Untitled by Jean-Manuel Duvivier (2017) Image courtesy: https://rb.gy/gyzcei

We carry out so much of our personhood digitally that we don’t even think about it. Every text message and email we send, every map and browser search, every photo we upload to the cloud — all of this is data, all of this is us. But our language around our data creates the illusion that it is external to us. And since our data is physically distributed across other people’s computers or companies’ databases, it’s hard to conceptualise that our data is just as much a part of who we are as our hair, bodily organs, arms, or legs.

***


A few years ago, Google released a new machine learning algorithm that automatically tagged photos to describe their content. Jacky Alcine discovered that his friend, who is Black, was tagged as a gorilla. Google claimed it was a computational error, but the real problem is twofold. First, they didn’t have a diverse enough photo set on which to train the algorithm. Second, they likely didn’t have a diverse enough team to raise the issue of diverse photo sets and approaches in the first place. In 2011, the National Institute of Standards and Technology (NIST) did a study that revealed facial recognition algorithms performed better on faces that looked like the team developing them. Meaning: Japanese teams produced better results at recognising Japanese faces, while Caucasian teams produced better results at recognising Caucasian faces.

In 2019, NIST performed another study on 189 algorithms and concluded that they were highly susceptible to bias. The algorithms misidentified Black and Asian faces ten to a hundred times more often than Caucasian faces. Two of the DataWorks algorithms that misidentified Williams and led to his wrongful arrest were included in this study.
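To get a feel for what that gap means in practice, here is a back-of-the-envelope sketch. The gallery size and baseline false match rate below are invented for illustration (NIST reports rates per comparison, not these exact figures); only the ten-to-hundredfold multiplier echoes the disparity described above.

```python
# Hypothetical illustration: in a one-to-many search, a probe photo is
# compared against every entry in a gallery. Even a tiny false match rate,
# multiplied by a large gallery and a 100x demographic disparity, yields
# many innocent 'hits' concentrated on one group of people.

def expected_false_matches(gallery_size: int, false_match_rate: float) -> float:
    """Expected number of innocent matches when one probe photo is
    compared against every entry in a photo gallery."""
    return gallery_size * false_match_rate

GALLERY = 8_000_000   # assumed size of a statewide mugshot/DMV database
BASE_FMR = 1e-5       # assumed baseline false match rate per comparison
DISPARITY = 100       # upper end of the 10-100x gap NIST reported

baseline = expected_false_matches(GALLERY, BASE_FMR)
disparate = expected_false_matches(GALLERY, BASE_FMR * DISPARITY)
print(f"expected false matches, baseline group: {baseline:.0f}")
print(f"expected false matches, affected group: {disparate:.0f}")
```

Under these assumed numbers, one search produces around 80 innocent candidate matches for the baseline group but around 8,000 for the group the algorithm performs worst on; Williams was one such candidate.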

What is worrisome is that despite studies showing how automated decision-making systems are inaccurate and extend bias, institutions are still deploying them. Julia Angwin’s work on courtrooms using COMPAS, a machine learning program that produces risk scores, shows that the program is heavily biased towards rating Black people as higher risk. This one score doesn’t reflect real risk, but does keep thousands of people, mostly Black, in jail.

An abstract graphic art comprising elements such as an eye, CCTV cameras and arrows.
Untitled by Oliver Munday (2013) Image courtesy: https://omunday.tumblr.com/post/69601272050/vollmann-feature-on-the-nsa-foreign-policy

Outside of courtrooms, companies are using software to make hiring more efficient. Amazon’s HR hiring tool was discovered to privilege male applicants over female applicants because it was trained on past hiring practices that were sexist. Employers are also paying software companies to perform background checks that include a sweep of all of the public online interactions of potential employees. Companies like Fama Technology claim that they can ‘identify problematic behaviour among potential hires and current employees by analysing publicly available online information.’ Soon after a job interview, Twitter user @bruisealmighty received a package of 300 pages of her tweets printed out. All tweets with the word ‘fuck’ were categorised as ‘bad.’ @bruisealmighty didn’t know that her tweets would be used and taken out of context, and possibly put her job prospects at risk. In South Korea, companies are using AI software to sort interviewees based on facial expressions. There are now facial coaches to help candidates prepare for algorithmically run interviews.

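The tweet-screening episode above can be sketched in a few lines. The flagged vocabulary and labels are assumptions for illustration; the real vendor’s criteria are proprietary. The point is that a bare word match carries no context, so it cannot distinguish venting from misconduct:

```python
# A minimal sketch of naive keyword screening, as described above.
# The word list and 'bad'/'ok' labels are invented for illustration.

FLAGGED_WORDS = {"fuck"}  # assumed 'bad' vocabulary

def screen_post(post: str) -> str:
    """Label a post 'bad' if any word, ignoring case and trailing
    punctuation, appears in the flagged vocabulary."""
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return "bad" if words & FLAGGED_WORDS else "ok"

for post in ["Congrats on the launch!",
             "fuck this traffic, late again"]:  # frustration, not misconduct
    print(f"{screen_post(post):>3}  {post}")
```

A system like this flags the second post even though nothing about it predicts workplace behaviour, which is exactly how 300 pages of out-of-context tweets end up in a hiring file.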

In 2013, Dr. Latanya Sweeney demonstrated how the algorithms in Google’s Ad Delivery are racist. Searches for Black-sounding names served ads for criminal background checks, while White-sounding names did not produce the same results. Imagine the psychological impact on the millions of Black people, as well as their potential employers, who see their names next to racist ads. Many Native Americans, such as Shane Creepingbear and Dana Lone Hill, have written about how Facebook makes it hard for people with non-Western names that mix adjectives and nouns to get an account.


All these stories involve individuals or entire communities whose personhood is threatened by systems that were not designed by them, or even with them in mind. These are systems that have become embedded in existing systems, creating entanglements that have evolved faster than the law or policymakers can keep up with. These software spaces will increasingly be home to what Madeleine Clare Elish calls moral crumple zones: instances where responsibility and liability are obscured in complex, automated systems.

We have crossed the uncanny valley of digital personhood, and if we want to have agency over that personhood, we have to have agency over our data.


This is why the work of groups like Our Data Bodies is so important — they help marginalised communities understand and value their data. For this five-person team, community-hood is the basis of personhood, an idea underscored by the work of Sabelo Mhlambi. As more communities completely rely on social platforms to communicate, they are in effect centralising all their data, memories and history into one place — putting their entire community-hood at risk if it were to all be misused or disappear.


***


Tech companies like Facebook have long argued that they own our data because they’ve created free platforms for us to use. At first glance, we may agree: ‘Sure companies can claim my data is theirs. What would I do with it anyway? They’re the ones collecting the data and paying for it to be stored in some server.’ This logic serves the bigger argument that social media companies have the right to do with your data record as they please. And it potentially sets up the perversion that any entity creating a record of us could ‘own’ us because they own a part of our personhood.


An abstract graphic art of two silver human figures standing in a starry frame.
‘Homos Luminosos’, by Roseline de Thélin (2013) Image courtesy: https://rb.gy/ungtvm

Companies collecting our data know that our personally identifiable information (PII) is valuable. They often use the metaphor that our data is oil, which reflects their conception that our data is at once valueless without their refinement and valuable as a tradeable commodity. Seen as a natural, feral and raw resource, our data then belongs to the first company to create a record of us, capture us in their dataset, and create economic value out of it. Much like how oil companies stockpile land or how colonisers claim land as their property, a similar thing is happening right now with data. There is a rush to gather data even when the use case isn’t clear yet. Companies have long defaulted to collecting as much PII as possible instead of collecting the least amount of data needed to get the job done. Data hoarding is rampant because companies are confident that if the data is not valuable now, it will become valuable someday. This is compounded by the technology logic that says machine learning is only possible on large enough datasets, so it is better to collect more rather than less.

This business model is predicated upon a company’s ability to sell data repeatedly, extracting value from it over and over again. An entire shadow industry of data brokers has emerged to facilitate the trading of our PII to pharmaceutical companies, finance, healthcare, marketers, advertisers, and governments looking to act upon our data. The data brokerage industry is estimated to be valued at around $200 billion. It is designed to be opaque and untraceable.


The purpose of this obfuscated market is less data brokerage and more data arbitrage, because it builds on the concept that companies can extract value from your data for less than its actual trading price. It is estimated that a personal data record that has an email address, social media and credit card transactions attached to it has a value of $8 per month. If companies can get that email address from you for free in exchange for signing up for a service or a marketing newsletter, then they have reduced the cost of acquisition of this valuable asset to near zero.

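The arbitrage arithmetic is simple enough to sketch. The $8-per-month figure is the estimate cited above; the acquisition cost and record count are assumptions for illustration:

```python
# Back-of-the-envelope sketch of data arbitrage as described above.

MONTHLY_VALUE = 8.00     # estimated resale value of one enriched record, per month
ACQUISITION_COST = 0.0   # an email address captured 'free' via a newsletter signup

def annual_margin(records: int, months: int = 12) -> float:
    """Gross margin from monetising `records` profiles over `months` months."""
    return records * (MONTHLY_VALUE * months - ACQUISITION_COST)

# One million records acquired at near-zero cost, resold for a year:
print(f"gross margin: ${annual_margin(1_000_000):,.0f}")
```

Driving the acquisition cost of the asset towards zero while its resale value stays fixed is precisely what turns brokerage into arbitrage.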

A colourful abstract illustration of a human face with long hair & elements in boxes: speech bubbles, night sky, hands.
Untitled by Catalina Alzate (2019) Image courtesy: https://rb.gy/mtrc8g

Companies are taking advantage of the lack of shared, collective understanding around personal data by hiding their actions under complex legal terms and conditions, making those actions difficult to track. While there is nothing illegal about arbitrage, a more accurate way to describe their use of our personal data might be data embezzlement. Companies from Facebook to all the unknown players in the data brokerage industry are siphoning off as much information as they legally can, and selling it to the highest bidder without giving a thought to how it might affect us; how it might interrupt our personhood.

When questioned about their data practices, companies will argue that their actions fall well within the law. But an entity can be legally compliant with privacy laws and still flagrantly violate personhood. And for now, most companies will continue on their current path of orientating data policies towards privacy — which they perceive as easier to measure and execute than personhood.

***


Our data is being collected all the time. We might think opting out is an option if we don’t want our data collected. Like staying off social media, for example. But it’s not that simple. Just by existing, we expose ourselves to violations of our personhood on a daily basis. If we walk down a city street, surveillance cameras may capture our image and cellphone towers may log our movements. If we need to catch a flight, we’ll be asked to give up biometric data to pass through security. If we buy groceries, the supermarket’s loyalty programme will carefully tabulate how much we spent, when, and how often. These examples demonstrate that we’re no longer in the position to opt out when data collection is so automated and ingrained into our everyday lives.


If you take a walk around the block with your dog, your neighbour’s Amazon Ring camera might capture you. If your local police department is one of the 630 that Ring has a partnership with, and if they ask to view footage from your neighbour’s Ring, then your likeness could make it into their database without you ever knowing. The increasing use of facial recognition technologies, from cameras to matching software, complicates the possibilities of opting out. Cities, border patrols and police officers around the world have installed video cameras with facial recognition technologies where capturing is automatic and always on. As the costs to capture and store data have decreased, law enforcement is continuously grabbing data — not because they are looking for someone specific, but just because they can.

A graphic digital art of a shiny silver ball melting into the void.
‘Melting Memories’, by Refik Anadol (2019) Image courtesy: https://rb.gy/jqfrzq

In a study on facial recognition technologies deployed by cities, Clare Garvie and Laura Moy warn that technology meant to make residents feel more secure can do the opposite by violating neighbourhood privacy and potentially suppressing free speech. These types of warnings are starting to make their way to local, city and state leaders. Over the last few years, a patchwork of policies has emerged against the use of facial recognition at the city and state level. San Francisco; Oakland; Somerville, Massachusetts; and Portland all have bans on its use in law enforcement already in place or in the works. Illinois has had one of the most forward-thinking data protection laws, the Biometric Information Privacy Act (BIPA), in effect for over a decade. Illinois residents recently filed a lawsuit against Amazon, Microsoft and Google for violating the BIPA by using their faces without permission in a database used to train facial recognition systems.

This flurry of bans on facial recognition technology across the country is a step forward in curtailing automated data gathering. Susan Crawford notes that the annoyances that tech companies experience in having to comply with the variations of local and state laws is often what brings them to the table to negotiate with policymakers. In June 2020, Amazon, Microsoft and IBM introduced a call for a national law on facial recognition (the same week they all agreed to temporarily stop selling facial recognition to the police).

The advent of social media has hastened the trend of passive, automatic data collection wielded against U.S. citizens. In 2015 the Baltimore police labelled two Black Lives Matter organisers, Deray Mckesson and Johnetta Elzie, as ‘high threat actors’ by monitoring their Twitter, Instagram, and email. The US Immigration and Customs Enforcement (ICE) department targeted suspected undocumented immigrants by purchasing access to state DMV data in order to obtain high-resolution photographs. Most recently, federal agents used YouTube live streams to identify, arrest, and charge protestors with alleged crimes against federal buildings.

A 3D digital artwork of numerous concentric circles made of small rectangular and square parts.
‘Virtual Archive’, by Refik Anadol (2017) Image courtesy: https://rb.gy/3h6olq
Much of the technology powering the government’s data surveillance infrastructure is provided by private Silicon Valley companies. One company, Palantir, has over $1.5 billion in federal contracts and several more with local police departments around the US. Palantir’s role is to create the data infrastructure and software used to predict whom to detain in ICE raids and whom to target in drone strikes. While Palantir describes itself as a mission-based company committed to privacy and civil liberties, many critics and activists have argued that Palantir’s software enables the U.S. government to create a surveillance society and abuse human rights.

This isn’t surprising, because corporations and governments have a long history of working together to define and enforce control over personhood.

***

In January 2020, Zachary McCoy became a suspect in a home burglary case in Gainesville, Florida. About a year earlier, in December 2018, the police department of Avondale, Arizona arrested Jorge Molina as a suspected murderer. These were two very different crimes, committed at two different ends of the U.S. But despite the fact that Molina and McCoy never met, their cases had much in common.

First, they were both proven innocent. Second, they both became suspects only after Google fulfilled geofence warrants from police departments. A geofence warrant is a new type of warrant that lets police request from Google the data of individuals who were near the site of a crime. In McCoy’s case, he had biked past the victim’s home several times while using a fitness app that tracked his location and sent it to his Android device. In Molina’s case, the geofence warrant showed that a cellphone linked to Molina was at the location of the murder, along with his car. In the end, neither case could prove that the person arrested was physically at the crime scene. McCoy had biked by the victim’s home, but that didn’t mean he was the robber. In Molina’s case, it was his stepfather who had murdered the victim; he had used one of Molina’s old cellphones and driven his car without permission.

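Google’s actual reverse-location pipeline is proprietary, but the core of a geofence query reduces to something simple: filter stored location pings by a time window and a radius around the crime scene, and return whoever remains. The sketch below illustrates that reduction; all names, coordinates, and the `Ping` structure are hypothetical, not Google’s real schema:

```python
import math
from dataclasses import dataclass


@dataclass
class Ping:
    """One stored location report from a user's device (hypothetical schema)."""
    user: str
    lat: float
    lon: float
    t: int  # unix timestamp


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def geofence_query(pings, centre, radius_m, t_start, t_end):
    """Return users with at least one ping inside the circle during the window."""
    lat0, lon0 = centre
    return sorted({
        p.user
        for p in pings
        if t_start <= p.t <= t_end
        and haversine_m(p.lat, p.lon, lat0, lon0) <= radius_m
    })
```

Note what the query cannot do: it places a *device* near a point, not a person at a crime. A ping 11 metres from a house is returned just the same whether its owner was inside robbing it or biking past on the street, which is exactly the gap that ensnared McCoy and Molina.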
Neither McCoy nor Molina was aware that their data could be used in such a way. Since the passage of the Patriot Act in 2001, corporations have been compelled to comply with government requests for user data, whether they ethically agree with it or not, and irrespective of the consequences.

Both Molina’s and McCoy’s lives were disrupted by these wrongful accusations. Both of them were mentally traumatised, too scared to leave their homes for fear of being tracked again. McCoy’s parents used their savings to hire a lawyer to clear the warrant. Molina lost his job and was unable to get a new one. His car was repossessed and he dropped out of school.

An illustration of a person holding a briefcase and walking. Two huge CCTV cameras point at them from above.
Untitled (2016) Image courtesy: https://rb.gy/ukg1tl
Molina and McCoy join Williams in a common story. All three had their personal data used by institutions without their consent. In all three cases, data misidentified them with a crime that they did not commit. This experience negatively affected not just their privacy, but their personhood.

But Nathaniel Raymond argues that when data is gathered remotely, such as through social media or geospatial sensors, informed consent is not practical. His essay on this topic provides many useful recommendations for how to deploy such technologies. The first is that organisations need to take the time to understand the potential harm before even seeking to do no harm. Another is that organisations need to work on data preparedness before data collection, meaning they must plan in advance how the data will be used, shared, and protected. Raymond’s guidance speaks to why GDPR (the General Data Protection Regulation) was needed as a reset button. Despite its limited scope and impact, the policy at least mandates that organisations explain what they do with the data they collect. The implementation of GDPR could pave the way for a more progressive approach by governments to linking data practices to our personhood.

***

Stephanie Dinkins is an artist who brings digital personhood to life in a very real way. Her work seeks to demystify AI and make it accessible to everyone. In one project, ‘Not The Only One (NTOO)’, Dinkins builds a device that captures three generations of family stories. A locally stored AI is trained on the stories to share them with future generations. In another project, ‘Project al-Khwarizmi (PAK)’, Dinkins leads workshops that teach communities of colour to become more aware of the ways that their data is used to inform the algorithms that shape their daily life.

When I spoke to Dinkins about her work, she told me, ‘For me, art has always been a space for me to ask about value, internally and externally. In NTOO, I am showing people that my family history has value to me, my community and society at large. I put all that history in a form that people can interact with, hopefully making technologies like algorithms seem more approachable so that it can spark people to ask: “What kind of value does my data have to myself?”’

A photo of a Black person facing a statue, as if having a conversation with it.
‘Conversations with Bina48’, by Stephanie Dinkins (2014-ongoing) Image courtesy: https://rb.gy/3sct0x
For Dinkins, people need to see themselves in the product of these technologies in order to understand and even recognise the value in their own data. She goes on to explain that ‘an internal understanding of your own value is important for personhood. But personhood is a radical idea because it means you, as a person, just for being, are valuable. This extends to everything we do. But what happens when others don’t think of you as a being with full personhood?’ Dinkins raises an important question: if on some level we don’t view people as deserving of personhood, then we treat their data that way too. We slide back into thinking that data is not the person, it is just another thing produced by them that could be sold. When we don’t have a shared language to address a new social context, artists are often the ones who help society materialise abstract ideas.

New worlds need new language. There are new things to name, and one of them is what is happening to ourselves and our data proxies. Expanding our language from privacy to personhood lets us have conversations in which we can see that our data is us, that our data is valuable, and that our data is being collected automatically. We must expand our language to keep up with technology. Rights, policies, and laws are crafted in response to new needs, but first we must be able to describe those needs.

What would happen if we redesigned systems to protect personhood? Dinkins, along with Williams, McCoy and Molina, remind us that we must keep fighting for both privacy and personhood. This is not a binary decision. Apps, facial recognition software, surveillance cameras, social media and the internet don’t just affect our lives, they also affect what we believe we are capable of accomplishing in life. Personhood should not be for the privileged, it should be for everyone. And that’s something worth fighting for.

This work was carried out as part of the Big Data for Development (BD4D) network supported by the International Development Research Centre, Ottawa, Canada. Bodies of Evidence is a joint venture between Point of View and the Centre for Internet and Society (CIS).

Translated from: https://deepdives.in/you-are-not-your-data-but-your-data-is-still-you-b41d2478ece2
