Ethics of AI Chapter 4: Surveillance Capitalism


Abstract

Surveillance capitalism hinges on the appropriation and commercialisation of personal data for profit-making.

This chapter spotlights three cases connected to surveillance capitalism: data appropriation, the monetisation of health data and the unfair commercial practice when “free” isn’t “free”. It discusses the related ethical concerns of power inequality, privacy and data protection, and lack of transparency and explainability. The chapter then identifies responses to concerns about surveillance capitalism, focusing on three key responses put forward in policy and academic literature and advocated for their impact and implementation potential in the current socio-economic system: antitrust regulation, data sharing and access, and the strengthening of consumers’ and individuals’ data ownership claims. A combination of active, working governance measures is required to stem the growth and ill effects of surveillance capitalism and to protect democracy.

Keywords

Antitrust · Big Tech · Data appropriation · Data monetisation · Surveillance capitalism · Unfair commercial practices

4.1 Introduction

The flourishing of systems and products powered by artificial intelligence (AI), and the increasing reliance on them, fuels surveillance capitalism and “ruling by data”. The main beneficiaries, it is argued, are “Big Tech” companies, including Apple, Amazon, Alphabet, Meta, Netflix and Tesla, which are seen to be entrenching their power over individuals and society and affecting democracy.

Simply described, surveillance capitalism hinges on the appropriation and commercialisation of personal data for profit. Zuboff conceptualised and defined it as a "new form of information capitalism [which] aims to predict and modify human behavior as a means to produce revenue and market control". She argues that surveillance capitalism "effectively exile[s] persons from their own behavior while producing new markets of behavioral prediction and modification". It is underpinned by organisational use of behavioural data, which leads to asymmetries in knowledge and power. As a result, consumers often may not realise the extent to which they are responding to prompts driven by commercial interests.

Doctorow elaborates that surveillance capitalists engage in segmenting (targeting based on behaviour, attitudes or choices), deception (making fraudulent claims, replacing beliefs with false or inaccurate ones) and domination (e.g. Google’s dominance of internet search and its monopolisation of the market through mergers and acquisitions). He outlines three reasons why organisations continue to over-collect and over-retain personal data: first, they are competing both with people’s ability to resist persuasion techniques once they are wise to them and with competitors’ abilities to target their customers; second, the cheapness of data aggregation and storage lets an organisation acquire an asset for future sales; and third, the penalties imposed for leaking data are negligible.

Surveillance capitalism manifests in both the private and public sectors; organisations in both sectors collect and create vast reserves of personal data under various guises. Private-sector examples can be found in online marketplaces (e.g. Amazon), the social media industry (e.g. Facebook) and the entertainment industry (e.g. Netflix); the most cited examples are Google’s and Facebook’s harvesting of personal data to feed their systems and services. In the public sector, this is particularly notable in healthcare, and similar practices also pervade retail. During the COVID-19 pandemic, attention was drawn to health-related surveillance capitalism; for example, it has been argued that telehealth technologies were pushed too quickly.

4.2 Cases of AI-Enabled Surveillance Capitalism

Case 1: Data Appropriation

Clearview AI is a software company headquartered in New York which specialises in facial recognition software, including for law enforcement. The company, which holds 10 billion facial images, aims to obtain a further 90 billion, which would amount to roughly 14 photos of each person on the planet. In May 2021, legal complaints were filed against Clearview AI in France, Austria, Greece, Italy and the United Kingdom. It was argued that photos were harvested from services such as Instagram, LinkedIn and YouTube in contravention of what users of these services were likely to expect or have agreed to. On 16 December 2021, the French Commission Nationale de l’Informatique et des Libertés announced that it had “ordered the company to cease this illegal processing and to delete the data within two months”.
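A quick back-of-the-envelope check makes this figure concrete. The sketch below is purely illustrative: the world population value is our own rounded assumption, not a figure from the source.

    # Rough sanity check of the "~14 photos per person" claim (illustrative only).
    current_images = 10_000_000_000     # images Clearview AI reportedly holds
    target_additional = 90_000_000_000  # further images it aims to obtain
    world_population = 7_100_000_000    # assumed round figure, not from the source

    photos_per_person = (current_images + target_additional) / world_population
    print(f"~{photos_per_person:.0f} photos per person")  # prints "~14 photos per person"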

The investigations carried out by the Commission Nationale de l’Informatique et des Libertés (CNIL) into Clearview AI revealed two breaches of the General Data Protection Regulation (GDPR): first, an unlawful processing of personal data, as the collection and use of biometric data had been carried out without a legal basis; and second, a failure to take into account the rights of individuals in an effective and satisfactory way, especially with regard to access to their data.

CNIL ordered Clearview AI to stop the collection and use of data of persons on French territory in the absence of a legal basis, to facilitate the exercise of individuals’ rights and to comply with requests for erasure. The company was given two months to comply with the injunctions and justify its compliance or face sanctions.

This case is an example of data appropriation (i.e. the illegal, unauthorised or unfair taking or collecting of personal data for known or unknown purposes, without consent or with coerced or uninformed consent). Organisations appropriating data in such a fashion do not offer data subjects “comparable compensation”, while the organisations themselves gain commercial profits and other benefits from the activity.

Case 2: Monetisation of Health Data

In 2021, the personal data of 61 million people became publicly available without password protection, due to data leaks at a New York-based provider of health tracking services. The data included personal information such as names, gender, geographic locations, dates of birth, weight and height. Security researcher Jeremiah Fowler, who discovered the database, traced its origin to a company that offered devices and apps to track health and wellbeing data. The service users whose personal data had been leaked were located all over the world. Fowler contacted the company, which thanked him and confirmed that the data had now been secured.

This case highlights issues with the vast-scale collection and storage of health data by companies offering health and fitness tracking devices, and it reveals the vulnerability of such data to threats and exposure.

A related concern is the procurement of such data by companies such as Google through their acquisition of businesses such as Fitbit, a producer of fitness monitors and related software. Experts have indicated that such acquisitions are problematic for various reasons: major risks of “platform envelopment”, the extension of monopoly power (by undermining competition) and consumer exploitation. Their concerns also relate to the serious harms that might result from Google’s ability to combine its own data with Fitbit’s health data.

The European Commission carried out an in-depth investigation into the acquisition of Fitbit by Google. The concerns that emerged related to advertising, in that the acquisition would increase the already extensive amount of data that Google was able to use for the personalisation of ads, and the resulting difficulty for rivals setting out to match Google’s services in the markets for online search advertising. It was argued that the acquisition would raise barriers to entry and expansion by Google’s competitors, to the detriment of advertisers, who would ultimately face higher prices and have less choice. The European Commission approved the acquisition of Fitbit by Google under the EU Merger Regulation, conditional on full compliance with a ten-year commitments package offered by Google.

Case 3: Unfair Commercial Practices

In 2021, Italy’s Consiglio di Stato (Council of State) agreed with the Autorità Garante della Concorrenza e del Mercato (Italian Competition Authority) and the Tribunale Amministrativo Regionale (Regional Administrative Tribunal) of the Lazio region to sanction Facebook for an unfair commercial practice. Facebook was fined seven million euros for misleading its users by not explaining to them in a timely and adequate manner, during the activation of their accounts, that data would be collected with commercial intent.

This case spotlights how companies deceive users into believing they are getting a social media service free of charge when this is not the case. The problem is aggravated by companies failing to communicate clearly that users’ data is the quid pro quo for the service, which is available only on condition that they make their data available and accept the terms of service. Equally unclear is how, and to what extent, companies put such data to further commercial use and exploit it for targeted advertising. Commentators have gone so far as to state that the people using such services have themselves become the product.

4.3 Ethical Questions About Surveillance Capitalism

One of the primary ethical concerns arising across all three case studies is power inequality. The power of Big Tech is significant; it has even been compared to that of nation states, and it is further strengthened by the development and/or acquisition of AI solutions. All three of the cases examined have served to further and enhance the control that AI owners hold. The concentrated power that rests with a handful of big tech companies, and the control and influence they exert, for example over political decision-making, through market manipulation and over digital lives, disrupt economic processes and pose a threat to democracy, the freedoms of individuals, and political and social life.

Another key ethical concern brought to the forefront with these cases is privacy and data protection (see Chap. 3). Privacy is critical to human autonomy and well-being and helps individuals protect themselves from interference in their lives. For instance, leaked personal health data might be appropriated by employers or health insurers and used against the interests of the person concerned. Data protection requires that data be processed in a lawful, fair and transparent manner in addition to being purpose-limited, accurate and retained for a limited time. It also requires that such processing respect integrity, confidentiality and accountability principles.
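To make these principles concrete, consider storage limitation. The following minimal sketch is illustrative only: the record layout, purposes and retention periods are hypothetical assumptions, not requirements taken from the chapter or the GDPR text.

    from datetime import datetime, timedelta, timezone

    # Hypothetical retention windows per processing purpose (illustrative values).
    RETENTION = {
        "order_fulfilment": timedelta(days=365),
        "marketing": timedelta(days=90),
    }

    def purge_expired(records):
        """Keep only records still within the retention window for their purpose."""
        now = datetime.now(timezone.utc)
        return [r for r in records
                if now - r["collected_at"] <= RETENTION[r["purpose"]]]

A controller running such a purge on a schedule would be operationalising the "retained for a limited time" requirement; the other principles (lawfulness, fairness, transparency, accuracy, integrity and confidentiality) call for analogous organisational and technical measures.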

Lack of transparency and explainability links to data appropriation, data monetisation and unfair commercial practices. While it may seem obvious from a data protection and societal point of view that transparency requirements should be followed by companies that acquire personal data, this imperative faces challenges. Transparency challenges are a result of the structure and operations of the data industry. Transparency is also challenged by the appropriation of transparency values in public relations efforts by data brokers (e.g. Acxiom, Experian and ChoicePoint) to water down government regulation. Commentators further highlight that transparency may create only an “illusion of reform” without addressing basic power imbalances.

Other ethical concerns relate to proportionality and do-no-harm. The UNESCO Recommendation on the Ethics of Artificial Intelligence, as adopted, suggests:

  1. The AI method chosen should be appropriate and proportional to achieve a given legitimate aim;
  2. The AI method chosen should not infringe upon foundational values; its use must not violate or abuse human rights;
  3. The AI method should be appropriate to the context and should be based on rigorous scientific foundations.

In the cases examined above, there are clear-cut failures to meet these checks. “Appropriateness” refers to whether the technological or AI solution used is the best available (with cost and quality justifying any invasions of privacy), whether there is a risk that human rights, such as the right to privacy, will be abused or that data will be reused, and whether the objectives can be satisfied by other means. The desirability of using AI solutions should also be duly considered, with regard to their purpose, their advantages and the burden they impose on social values, justice and the public interest.

A key aspect of the perception of surveillance capitalism as being ethically problematic seems to be the encroaching of market mechanisms on areas of life that previously were not subject to financial exchange. To some degree this is linked to the perception of exploitation of the data producers. Many users of “free” online services are content to use services such as social media or online productivity tools in exchange for the use of their data by application providers. There is also, nevertheless, a perception of unfairness, as the service providers have been able to make gigantic financial gains that are not shared with the individuals on whose data they rely to generate these gains. In addition, criticism of surveillance capitalism seems to be based on the perception that some parts of social life should be free of market exchange. A manifestation of this may be the use of the term “friend” in social media, where not only does the nature of friendship differ substantially from that in the offline world, but the number of friends and followers can lead to financial transactions that would be deemed inappropriate in the offline world.

There is no single clearly identifiable ethical issue that is at the base of surveillance capitalism. The term should be understood as signifying opposition to technically enabled social changes that concentrate economic and political power in the hands of a few high-profile organisations.

4.4 Responses to Surveillance Capitalism

Many types of responses have been put forward to address concerns about surveillance capitalism: legal or policy-based responses, market-based responses, and societal responses.

Legal and policy measures include antitrust regulation, intergovernmental regulation, strengthening the data-ownership claims of consumers or individuals, socialising the ownership of evolving technologies, making big tech companies spend their monopoly profits on governance, mandatory disclosure frameworks and greater data sharing and access.

Market-based responses include placing value on the information provided to surveillance capitalists, reducing monopolies, defunding Big Tech and refunding community-oriented services, and users employing their market power by rejecting and avoiding companies with perceived unethical behaviour.

Societal responses include public indignation and naming and shaming, personal data spaces or emerging intermediary services that allow users control over the sharing and use of their data (see the sketch below), increasing data literacy and awareness of how transparent a company’s data policy is, and improving consumer education.
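The "personal data space" idea can be pictured as a permission layer that the user, not the service, controls. The sketch below is a purely hypothetical illustration of such an intermediary’s core check; all names and data categories are our own.

    # Hypothetical core of a personal data space: the user grants per-purpose
    # access to categories of their data, and services must pass this check.
    GRANTS = {
        ("fitness_app", "heart_rate"): {"purpose": "coaching", "allowed": True},
        ("ad_network", "heart_rate"): {"purpose": "advertising", "allowed": False},
    }

    def may_access(service, category, purpose):
        """Return True only if the user granted this service/category/purpose."""
        grant = GRANTS.get((service, category))
        return bool(grant and grant["allowed"] and grant["purpose"] == purpose)

    print(may_access("fitness_app", "heart_rate", "coaching"))    # True
    print(may_access("ad_network", "heart_rate", "advertising"))  # False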

This section examines three responses that present promising ways to curtail the impact of surveillance capitalism and to address the ethical questions outlined above, though none of them on its own is a silver bullet. These responses have been discussed in policy and academic literature and advocated for their impact and implementation potential in the current socio-economic system. The challenges of surveillance capitalism arise from the socio-political environment in which AI is developed and used, and the responses spotlighted here are informed by this.

4.4.1 Antitrust Regulation

“Antitrust” refers to actions to control monopolies, prevent companies from working together to unfairly control prices, and enhance fair business competition. Antitrust laws regulate monopolistic behaviour and prevent unlawful business practices and mergers. Courts review mergers case by case for illegality. Many calls and proposals have been made to counter the power of Big Tech (e.g. Warren). Discussions have proliferated on the use of antitrust regulations to break up big tech companies and the appointment of regulators to reverse illegal and anti-competitive tech mergers.

Big Tech is regarded as problematic for its concentration of power and control over the economy, society and democracy, to the detriment of competition and innovation in small business. Grunes and Stucke emphasise the need for competition and antitrust’s “integral role to ensure that we capture the benefits of a data-driven economy while mitigating its associated risks”. However, the use of antitrust remedies to control dominant firms presents problems of its own: it can reduce competitive incentives and innovation (by forcing the sharing of information), create privacy concerns, or result in stagnation and fear among platform providers.

There are both upsides and downsides to the use of antitrust regulation as a measure to curb the power of Big Tech. The upsides include delaying or frustrating acquisitions, generating better visibility, transparency and oversight, pushing Big Tech to improve their practices, and improving prospects for small businesses. One downside is that despite the antitrust actions taken thus far, some Big Tech companies continue to grow their power and dominance. Another is the implementation and enforcement burden that antitrust places on regulators. Furthermore, antitrust actions are expensive and disruptive to business and might affect innovation.

Developments (legislative proposals, acquisition challenges, lawsuits and fines) in the USA and Europe show that Big Tech’s power is under deeper scrutiny than ever before.

4.4.2 Data Sharing and Access

Another response to concerns about surveillance capitalism is greater data sharing and access (subject to legal safeguards and restrictions). Making data open and freely available under a strict regulatory environment is suggested as having the potential to address the limitations of antitrust legislation. In a similar vein, Kang suggests that data-sharing mandates (securely enforced through privacy-enhancing technologies) “have become an essential prerequisite for competition and innovation to thrive”; to counter the “monopolistic power derived from data, Big Tech should share what they know – and make this information widely usable for current and potential competitors”.
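One widely discussed family of privacy-enhancing technologies for such mandated sharing is differential privacy. The sketch below is a minimal illustration under our own assumptions (the epsilon value and function name are ours, not Kang’s): it releases an aggregate count with calibrated Laplace noise, so that competitors can use the statistic without any individual’s record being revealed.

    import numpy as np

    def noisy_count(true_count, epsilon=0.5):
        """Release a count under epsilon-differential privacy.

        A counting query changes by at most 1 when one person's record is
        added or removed (sensitivity 1), so Laplace noise with scale
        1/epsilon is sufficient.
        """
        return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

    # Example: share how many users fall into a market segment without
    # exposing whether any particular individual is among them.
    print(noisy_count(12_345))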

At the European Union level, the proposal for the Data Governance Act is seen as a “first building block for establishing a solid and fair data-driven economy” and “setting up the right conditions for trustful data sharing in line with our European values and fundamental rights”.

The Data Governance Act aims to foster the availability of data for more widespread use by increasing trust in data intermediaries and by strengthening data-sharing mechanisms across the EU. It specifies conditions for the reuse, within the European Union, of certain categories of data held by public sector bodies; a notification and supervisory framework for the provision of data sharing services; and a framework for the voluntary registration of entities which collect and process data made available for altruistic purposes.

The European Commission will also set out “a second major legislative initiative, the Data Act, to maximise the value of data for the economy and society” and “to foster data sharing among businesses, and between businesses and governments”. The proposed Digital Markets Act aims to lay down harmonised rules ensuring contestable and fair markets in the digital sector across the European Union. It will designate certain large platforms as gatekeepers, and business access to certain data is expected to be mediated through obligations placed on these gatekeepers.

4.4.3 Strengthening of Data Ownership Claims of Consumers/Individuals

Another response to surveillance capitalism is to strengthen the data ownership claims of consumers and individuals. Jurcys et al. argue that even if user-held data is intangible, it meets all the requirements of an “asset” in property laws and that such “data is specifically defined, has independent economic value to the individual, and can be freely alienated”.

Fadler and Legner also suggest that data ownership remains a key concept for clarifying rights and responsibilities but should be revisited in the context of Big Data and analytics. They identify three distinct types of data ownership (data ownership, data platform ownership and data product ownership) which may guide the definition of governance mechanisms and serve as the basis for more comprehensive data governance roles and frameworks.

As Hummel, Braun and Dabrock have outlined, what calls for data ownership have in common is a concern with modes of controlling how data is used and the ability to channel, constrain and facilitate the flow of data. They also suggest that, with regard to the marketisation and commodification of data, ownership has turned out to be a double-edged sword, and that using this concept requires reflection on how data subjects can protect their data and share it appropriately. They furthermore outline that “even if legal frameworks preclude genuine ownership in data, there remains room to debate whether they can and should accommodate such forms of quasi-ownership”.

Challenges that affect this response include the ambiguity of the concept of ownership, the complexity of the data value cycle, the involvement of multiple stakeholders, and the difficulty of determining who could or would be entitled to claim ownership of data.

4.5 Key Insights

A combination of active, working governance measures is required to stem the growth and ill effects of surveillance capitalism and protect democracy. As we move forward, there are some key points that should be considered.

Breaking Up with Antitrust Regulation is Hard to Do

While breaking up Big Tech using antitrust regulation might seem like a very attractive proposition, it is challenging and complex. Moss assesses the potential consequences of breakup proposals and highlights the following three issues:

  • Size thresholds could lead to broad restructuring and regulation.
  • Breakup proposals do not appear to consider the broader dynamics created by a prohibition on owning both a platform and affiliated businesses.
  • New regulatory regimes for platform utilities will require significant thought.

A report from the EU-funded SHERPA project also highlights the implementation burdens imposed on legislators (who must define the letter and scope of the law) and on enforcement authorities (who must select appropriate targets for enforcement action and make enforcement decisions).

Further challenges include the limits of antitrust enforcement officials’ knowledge, the potential impact of ill-advised investigations and prosecutions on markets, never-ending processes, the difficulty of defining what conduct contravenes antitrust law, disruption to business and growth, and high costs, including the impact on innovation.

Are “Big Tech’s Employees One of the Biggest Checks on Its Power”?

Among the most significant actions that have changed the way digital companies behave and operate has been action taken by employees of such organisations to hold their employers to account over ethical concerns and illegal practices, while in the process risking career, reputation, credibility and even life. Ghaffary points out that tech employees are uniquely positioned (with their “behind the scenes” understanding of algorithms and company policies) to provide checks and enable the scrutiny needed to influence Big Tech. In the AI context, given issues of lack of transparency, this is significant for its potential to penetrate corporate veils.

A variety of issues have been brought to light by tech whistleblowers: misuse or illegal use of data, institutional racism, suppression of research, suppression of the right to organise, the falsification of data, a lack of safety controls and the endangerment of life through the hosting of hate speech and illegal activity.

Whistleblowing is now seen in the digital and AI context as a positive corporate governance tool. Laws have been and are being amended to increase whistleblower protections, for instance in New York and the European Union. Reporting by whistleblowers to enforcement bodies is expected to increase as regulators improve enforcement and oversight over AI. This might provide a necessary check on Big Tech. However, whistleblowing comes at a price, especially for the people brave enough to take this step, and by itself it is not enough, given the resources of Big Tech and the high human and financial costs to the individuals concerned.

Surveillance capitalism may be here to stay, at least for a while, and its effects may be felt strongly in the short to medium term (and longer if it is not addressed), but as shown above, there is a plethora of mechanisms and tools to address it. In addition to the responses discussed in this chapter, it is important that other measures, be they market-based, policy- or law-based, or societal interventions, be duly reviewed for their potential to support ethical AI and used as required. Even more important is the need to educate and inform the public about the implications of surveillance capitalism and its adverse effects on society. This is a role that civil society organisations and the media are well placed to support.
